* ci(bump-deps): register ds4 + move version pin into the Makefile
The initial ds4 PR (#9758) put the upstream commit pin in
backend/cpp/ds4/prepare.sh as a shell variable. The auto-bump bot at
.github/bump_deps.sh greps for ^$VAR?= in a Makefile, so DS4_VERSION
was invisible to it - other backends (llama-cpp, ik-llama-cpp,
turboquant, voxtral, etc.) all pin in their Makefile.
This change:
- Moves DS4_VERSION?= and DS4_REPO?= to the top of
backend/cpp/ds4/Makefile.
- Inlines the git init/fetch/checkout recipe into the 'ds4:' target
(matches llama-cpp's 'llama.cpp:' target pattern). Directory acts
as the target so make only re-clones when missing.
- Deletes the now-redundant prepare.sh.
- Adds antirez/ds4 + DS4_VERSION + main + backend/cpp/ds4/Makefile to
the .github/workflows/bump_deps.yaml matrix so the daily bot opens
PRs against this pin.
- Updates .agents/ds4-backend.md to point at the Makefile.
Verified:
$ grep -m1 '^DS4_VERSION?=' backend/cpp/ds4/Makefile
DS4_VERSION?=ae302c2fa18cc6d9aefc021d0f27ae03c9ad2fc0
$ make -C backend/cpp/ds4 ds4 # clones into ds4/ at the pin
$ make -C backend/cpp/ds4 ds4 # no-op on second invocation
make: 'ds4' is up to date.
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* ci: route backend/cpp/ds4/ changes through changed-backends.js
scripts/changed-backends.js:inferBackendPath has an explicit branch per
cpp dockerfile suffix (ik-llama-cpp, turboquant, llama-cpp). Without a
matching branch the function returns null, the backend never lands in
the path map, and PR change-detection cannot map "backend/cpp/ds4/X
changed" -> "rebuild ds4 image".
This is why PR #9761 produced zero ds4 jobs even though it directly
edits backend/cpp/ds4/Makefile.
Adds the missing branch (Dockerfile.ds4 -> backend/cpp/ds4/), placed
before the llama-cpp branch (since both share the .cpp ancestry but
ds4 is more specific - same ordering rule documented in
.agents/adding-backends.md).
Verified with a local Node simulation of the script against this PR's
diff: the path map now contains 'ds4 -> backend/cpp/ds4/' and a
'backend/cpp/ds4/Makefile' change correctly triggers the ds4 backend
in the rebuild set.
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* docs(adding-backends): harden the two gotchas that bit ds4
Both omissions are silent at the time you ADD a backend - the failure
mode only appears later (the bump bot stays silent forever, or the
path-filter gap surfaces on the next PR that touches your backend,
which runs zero CI jobs and looks broken for unrelated reasons).
Expands the `scripts/changed-backends.js` paragraph from a one-liner to
a fully worked example, and adds a new sibling paragraph for the
`bump_deps.yaml` + Makefile-pin contract.
Both call out the specific mistakes from the ds4 timeline (#9758
→ #9761) so future contributors can pattern-match on the cause.
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
---------
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
Co-authored-by: Ettore Di Giacinto <mudler@localai.io>
* test(e2e-backends): allow BACKEND_BINARY for native-built backends
Adds an escape hatch for hardware-gated backends (e.g. ds4) where the
model is too large for Docker build context. When BACKEND_BINARY points
at a run.sh produced by 'make -C backend/cpp/<name> package', the suite
skips docker image extraction and drives the binary directly.
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* test(e2e-backends): validate BACKEND_BINARY basename + log actual source
Two follow-ups from the cbcf5148 code review:
- BACKEND_BINARY now requires a path whose basename is `run.sh`. Without
this check, `filepath.Dir(binary)` silently discarded the filename, so
pointing the env var at an arbitrary binary failed later with a
confusing assertion that named a path the user never typed.
- The "Testing image=..." debug line printed an empty string when the
binary path was used, hiding the actual source in CI logs. The line
now reports whichever of BACKEND_IMAGE / BACKEND_BINARY is in effect
as `src=...`.
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* feat(backend/cpp/ds4): scaffold ds4 backend dir
Adds prepare.sh, run.sh, and a .gitignore. CMakeLists, Makefile, and the
implementation arrive in follow-up commits.
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* feat(backend/cpp/ds4): add backend Makefile
Drives ds4's upstream Makefile to produce engine .o files (CUDA on Linux
when BUILD_TYPE=cublas, Metal on Darwin, otherwise CPU debug path), then
invokes CMake on our wrapper.
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* feat(backend/cpp/ds4): add CMakeLists for grpc-server
Generates protoc stubs from backend.proto, links grpc-server.cpp +
dsml_parser.cpp + dsml_renderer.cpp + kv_cache.cpp against pre-built
ds4 engine .o files. DS4_GPU=cuda|metal|cpu selects the backend.
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* feat(backend/cpp/ds4): grpc-server skeleton + module stubs
The minimum that links: Backend service with Health + Free; other RPCs
default to UNIMPLEMENTED. Stub headers/sources for dsml_parser,
dsml_renderer, and kv_cache are in place so CMake links cleanly even
before those modules ship.
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* feat(backend/cpp/ds4): implement LoadModel
Opens engine + creates session sized to ContextSize (default 32768).
Backend is compile-time: CPU when DS4_NO_GPU, Metal on __APPLE__, else
CUDA. MTP/speculative options are accepted via ModelOptions.Options[]
(mtp_path, mtp_draft, mtp_margin). kv_cache_dir option is captured into
g_kv_cache_dir for the cache module (Task 19 wires it in).
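A sketch of the Options[] plumbing described above (the colon-separated
key:value convention mirrors the other cpp backends; accessor and local
names here are illustrative, not the exact wrapper code):
    std::string mtp_path;
    for (const std::string& opt : request->options()) {
        const auto colon = opt.find(':');
        if (colon == std::string::npos) continue;
        const std::string key = opt.substr(0, colon);
        const std::string val = opt.substr(colon + 1);
        if (key == "kv_cache_dir") g_kv_cache_dir = val;  // Task 19 consumes it
        else if (key == "mtp_path") mtp_path = val;       // optional MTP weights
    }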
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* feat(backend/cpp/ds4): implement TokenizeString
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* feat(backend/cpp/ds4): implement Predict (plain text)
Tool calls + thinking-mode split arrive in Task 13 once dsml_parser is in.
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* feat(backend/cpp/ds4): implement PredictStream (plain text)
ChatDelta + reasoning/tool_calls split arrives in Task 14.
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* feat(backend/cpp/ds4): implement Status RPC
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* feat(backend/cpp/ds4): add DSML streaming parser
Classifies raw model-emitted token text into CONTENT / REASONING /
TOOL_START / TOOL_ARGS / TOOL_END events. Markers it watches for are the
literal DSML strings rendered by ds4_server.c's prompt template
(<|DSML|tool_calls>, <|DSML|invoke name=...>, <think>, etc.) - these are
plain text the model emits, not special tokens.
Partial markers split across token chunks are buffered until either a full
marker completes or the buffered '<'-led text is ruled out as a marker.
RandomToolId() generates the API-side tool call id (call_xxx) that
exact-replay would key on.
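For reference, the surface this implies (Feed/Flush and the event names
are from this series; the callback shape is an assumption):
    #include <functional>
    #include <string>

    enum class DsmlEvent { CONTENT, REASONING, TOOL_START, TOOL_ARGS, TOOL_END };

    class DsmlParser {
    public:
        using Sink = std::function<void(DsmlEvent, const std::string&)>;
        void Feed(const std::string& chunk, const Sink& on_event);
        void Flush(const Sink& on_event);  // drain any buffered marker tail
    };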
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* fix(backend/cpp/ds4): split hex escapes in DSML markers + add cstring/cstdio includes
C++ \x hex escapes have no length cap. '\x9cD' was read as a single escape
producing byte 0xCD, eating the 'D'. The markers were never actually matching
the DSML text the model emits. Split the literal after each escape, using
adjacent string-literal concatenation, so the byte sequence is exactly
EF BD 9C 44 (|D) at runtime.
Also adds <cstring> and <cstdio> includes (libstdc++ 13 does not transitively
expose std::strlen / std::snprintf via <string>).
The local plan file (uncommitted) was also updated with the same fixes so
Task 16's dsml_renderer.cpp does not re-introduce the bug.
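For the record, the semantics reduce to (marker bytes from above; the
assertion harness is illustrative):
    #include <cassert>
    #include <cstring>

    // "\x9cD"    -> ONE escape: 'D' is a hex digit, so \x9cD == 0x9CD,
    //               truncated to byte 0xCD; the 'D' character is gone.
    // "\x9c" "D" -> concatenation happens after escape processing, so
    //               the bytes are 9C 44 as intended.
    int main() {
        const char marker[] = "\xef\xbd\x9c" "D";  // EF BD 9C 44 at runtime
        assert(std::strlen(marker) == 4);
        assert((unsigned char)marker[2] == 0x9c && marker[3] == 'D');
    }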
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* feat(backend/cpp/ds4): wire DsmlParser into Predict (ChatDelta)
Non-streaming Predict now emits one ChatDelta carrying content,
reasoning_content, and tool_calls[] parsed from the model's DSML output.
Reply.message still carries the raw model bytes for backends that prefer
the regex fallback path.
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* feat(backend/cpp/ds4): wire DsmlParser into PredictStream
Per-token ChatDelta writes: content/reasoning_content go incrementally,
tool_calls emit TOOL_START as one delta (id + name) followed by
TOOL_ARGS deltas with incremental JSON. The Go-side aggregator
(pkg/functions/chat_deltas.go) reassembles them.
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* feat(backend/cpp/ds4): chat template + reasoning_effort mapping
UseTokenizerTemplate=true + Messages -> ds4_chat_begin / append /
assistant_prefix. PredictOptions.Metadata['enable_thinking'] and
['reasoning_effort'] map to ds4_think_mode (DS4_THINK_HIGH default;
'max'/'xhigh' -> DS4_THINK_MAX; disabled -> DS4_THINK_NONE).
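The mapping reduces to a sketch like this (enum values as named above; the
helper itself is illustrative):
    #include <string>

    ds4_think_mode think_mode_from(bool enable_thinking,
                                   const std::string& reasoning_effort) {
        if (!enable_thinking) return DS4_THINK_NONE;
        if (reasoning_effort == "max" || reasoning_effort == "xhigh")
            return DS4_THINK_MAX;
        return DS4_THINK_HIGH;  // default
    }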
Tool-call rendering for assistant turns with tool_calls JSON arrives in
the next commit (dsml_renderer).
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* feat(backend/cpp/ds4): render assistant tool_calls + tool results to DSML
Closes the round-trip: when an OpenAI client sends a multi-turn chat
where prior turns contain tool_calls or role=tool messages, build_prompt
serializes them back to the DSML shape the model was trained on. Mirrors
ds4_server.c's prompt renderer; uses nlohmann::json for parsing the
OpenAI tool_calls payload.
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* feat(backend/cpp/ds4): disk KV cache module
Dir-based cache keyed by SHA1(rendered prompt prefix). File format:
'DS4G' magic + version + ctx_size + prefix_len + prefix + payload_bytes
+ ds4_session_save_payload output. NOT bit-compatible with ds4-server's
KVC files - that interop is a follow-up plan. LoadLongestPrefix walks
the dir picking the longest stored prefix that prefixes the incoming
prompt.
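Spelled out, the header is roughly (field order as listed above; the
integer widths are an assumption, see kv_cache.cpp for the real layout):
    #include <cstdint>

    struct KvFileHeader {
        char     magic[4];    // 'D' 'S' '4' 'G'
        uint32_t version;
        uint32_t ctx_size;
        uint64_t prefix_len;  // rendered-prompt prefix bytes that follow
        // then: prefix bytes, payload_bytes, raw
        // ds4_session_save_payload output
    };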
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* feat(backend/cpp/ds4): wire KvCache into Predict/PredictStream
LoadModel reads 'kv_cache_dir' from ModelOptions.Options[], passes it to
g_kv_cache.SetDir. Each Predict/PredictStream computes a render text for
the request, tries LoadLongestPrefix to recover state, then Saves the
new state after generation. ds4_session_sync handles the live-cache
fast path internally, so the disk cache only matters for cold-starts
and cross-session reuse.
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* feat(backend/cpp/ds4): add package.sh
Linux: bundles libc + ld + libstdc++ + libgomp + GPU runtime libs into
package/lib so the FROM scratch image boots without a host libc.
Darwin is handled by scripts/build/ds4-darwin.sh which uses otool -L.
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* fix(backend/cpp/ds4): rename namespace ds4_backend -> ds4cpp
ds4.h defines 'typedef enum {...} ds4_backend' which collides with our
C++ 'namespace ds4_backend' anywhere a TU includes both. kv_cache.h
includes ds4.h directly and surfaces the conflict immediately; other
TUs would hit it once gRPC dev headers are available.
Renames the C++ namespace to ds4cpp across all wrapper files and the
plan, leaving the upstream ds4 typedef untouched.
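Reduced to its essence (the enum body is sketched; only the typedef name
matters for the clash):
    typedef enum { DS4_BACKEND_SKETCH } ds4_backend;  // shape of ds4.h's typedef

    // namespace ds4_backend {}  // error: 'ds4_backend' redeclared as a
    //                           // different kind of entity in any TU that
    //                           // saw ds4.h first
    namespace ds4cpp {}          // the rename side-steps it entirely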
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* feat(backend): add Dockerfile.ds4
Single-stage builder (CUDA devel image for cublas, ubuntu:24.04 for cpu)
-> FROM scratch with packaged grpc-server + bundled runtime libs.
nlohmann-json3-dev is required for dsml_renderer's JSON handling.
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* feat(make): wire backend/cpp/ds4 + ds4-darwin into root Makefile
BACKEND_DS4 entry + generate-docker-build-target eval + docker-build-ds4
in docker-build-backends + .NOTPARALLEL guards. Also adds the
backends/ds4-darwin target which delegates to scripts/build/ds4-darwin.sh
(landed in Task 24).
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* ci: add backend-matrix entries for ds4 (cpu + cuda13, per-arch)
Two entries per build (amd64 + arm64) so backend-merge-jobs assembles a
multi-arch manifest. Skipping cuda12 - ds4 was validated against CUDA 13.
Darwin Metal is handled outside this matrix by backend_build_darwin.yml.
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* feat(backend/index): add ds4 meta + image entries
cpu + cuda13 x latest + master. Darwin Metal builds publish under
ds4-darwin via the existing llama-cpp-darwin OCI pipeline.
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* feat(scripts/build): add ds4-darwin.sh
Native macOS/Metal build for the ds4 backend. Mirrors llama-cpp-darwin.sh:
make grpc-server -> otool -L for dylib bundling -> OCI tar that
'local-ai backends install' consumes via the backends/ds4-darwin
Makefile target.
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* ci(darwin): build ds4-darwin in backend_build_darwin
Adds a 'Build ds4 backend (Darwin Metal)' step that runs the
backends/ds4-darwin Makefile target on the macOS runner.
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* feat(import): auto-detect ds4 weights via DS4Importer
Adds core/gallery/importers/ds4.go which matches on the antirez/deepseek-v4-gguf
repo URI and the DeepSeek-V4-Flash-*.gguf filename pattern. Registered before
LlamaCPPImporter so ds4 weights route to backend: ds4 instead of falling
through to llama-cpp.
Also lists ds4 in /backends/known so the /import-model UI surfaces it as a
manual choice for users who want to force the backend on a non-canonical URI.
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* feat(gallery): add deepseek-v4-flash-q2 (ds4 backend)
One-click install of the q2 weights with backend: ds4.
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* docs(.agents): add ds4-backend.md
Documents the backend shape, DSML state machine, thinking-mode mapping,
disk KV cache, build matrix (cpu/cuda13/Darwin), and the BACKEND_BINARY
hardware-validation path.
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* fix(backend/cpp/ds4): pass UBUNTU_VERSION + arch env vars to install-base-deps
The .docker/install-base-deps.sh script needs UBUNTU_VERSION (defaults to
2404), TARGETARCH, SKIP_DRIVERS, and APT_MIRROR/APT_PORTS_MIRROR exported
into the environment so it can pick the right cuda-keyring / cudss / nvpl
debs and apt mirrors. Dockerfile.ds4 was declaring some of the ARGs but not
re-exporting them via ENV. Mirrors Dockerfile.llama-cpp's pattern.
Without this fix 'make docker-build-ds4 BUILD_TYPE=cublas CUDA_MAJOR_VERSION=13'
failed at:
/usr/local/sbin/install-base-deps: line 120: UBUNTU_VERSION: unbound variable
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* feat(backend/index): add Metal image entries for ds4
Adds metal-ds4 + metal-ds4-development image entries pointing at
quay.io/go-skynet/local-ai-backends:{latest,master}-metal-darwin-arm64-ds4
(built by scripts/build/ds4-darwin.sh on macOS arm64 runners), plus the
'metal' and 'metal-darwin-arm64' capability mappings on the ds4 meta and
ds4-development variant.
Closes a gap from the initial Task 23 landing - the Darwin Metal build
script and CI workflow step were already wired (Tasks 24-25), but the
gallery had no image entry for users to install the Metal variant.
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* fix(ci): use ubuntu:24.04 base for ds4 cuda13 matrix entries
The initial Task 22 matrix landing used base-image: 'nvidia/cuda:13.0.0-devel-ubuntu24.04'
which clashes with install-base-deps.sh's cuda-keyring step:
E: Conflicting values set for option Signed-By regarding source
https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2404/sbsa/
The canonical pattern (llama-cpp, ik-llama-cpp, turboquant) uses plain
'ubuntu:24.04' + 'skip-drivers: false' so install-base-deps installs CUDA
from scratch via its own keyring setup. Adopting that here.
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* fix(backend/cpp/ds4): drop install-base-deps.sh dependency
The .docker/install-base-deps.sh pipeline is built around the llama-cpp
needs: NVIDIA keyring + cuda-toolkit apt + gRPC-from-source build at
/opt/grpc. For ds4 we don't need any of that:
- CUDA: nvidia/cuda:13.0.0-devel-ubuntu24.04 ships /usr/local/cuda
ready to go; install-base-deps's keyring step then conflicts with
the pre-installed Signed-By.
- gRPC: ds4's grpc-server.cpp only links against grpc++; system
libgrpc++-dev (apt) is sufficient, no source build needed.
Replaced the install-base-deps invocation in Dockerfile.ds4 with a
direct 'apt-get install libgrpc++-dev libprotobuf-dev protobuf-compiler-grpc
nlohmann-json3-dev cmake build-essential pkg-config git'. Matrix entries
revert to the nvidia/cuda base + skip-drivers=true, so install-base-deps
would no-op even if some downstream tooling still called it.
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* fix(backend/cpp/ds4): correct proto accessors + alias grpc::Status as GStatus
Two compile bugs caught by the docker build:
1. proto::Message uses snake_case accessors. The build_prompt loop called
m.toolcalls() / m.toolcallid() - the protoc-generated names are
m.tool_calls() / m.tool_call_id(). The bug originated in the plan text
and propagated into the wrapper.
2. The Status RPC method shadowed the 'using grpc::Status' alias, so any
later method declaration using Status as a return type failed to parse
('Status does not name a type' starting at LoadModel). Solution: alias
grpc::Status as GStatus instead, with no 'using' clause that would
conflict. All RPC method declarations and return-statement constructions
now use GStatus.
Code review had flagged the Status-shadow concern as 'minor' on the
original Task 10 commit; it turned out to be a real compile blocker
under libstdc++ 13 once the surrounding methods were filled in.
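Reduced shape of the fix (service base and method bodies elided):
    #include <grpcpp/grpcpp.h>

    using GStatus = grpc::Status;  // no 'using grpc::Status;' to shadow

    class BackendServiceImpl /* : public backend::Backend::Service */ {
    public:
        // A member named Status shadows any unqualified Status type in
        // every later declaration; the GStatus spelling is immune.
        GStatus Status(/* ServerContext*, ... */) { return GStatus::OK; }
        GStatus LoadModel(/* ... */) { return GStatus::OK; }
    };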
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* fix(backend/cpp/ds4): preserve TOOL_ARGS content in dsml_parser Flush
When the model emitted a parameter value that arrived in the same buffer
as the surrounding tool_call markers (e.g. the buffered tail after a
literal '</think>' opened the model output), the parser deferred all
buffered bytes to Flush() because looks_like_prefix() always returns
true while buf starts with '<'. Flush() then drained the buffer as
plain CONTENT/REASONING regardless of parser state, so the bytes
between the parameter open and close markers were classified as
CONTENT instead of TOOL_ARGS.
Symptom: the model emitted
<|DSML|parameter name="location" string="true">Paris, France</|DSML|parameter>
and the assembled tool_call arguments came out as {"location":""} -
the opener and closer were emitted into the args stream but the
"Paris, France" content went to the assistant message instead.
Fix:
1. Flush() now uses the same state-aware emit logic as DrainPlain:
PARAM_VALUE bytes become TOOL_ARGS (json-escaped when string),
THINK bytes become REASONING, TEXT bytes become CONTENT, and
INVOKE / TOOL_CALLS structural whitespace is discarded.
2. looks_like_prefix() restricts its leading-'<' fallback to buffers
that have not yet seen a '>'. Without that change, char-by-char
feeds would discard the '<' of '<|DSML|invoke name="..."' once
the marker prefix length was reached but the closing quote/'>'
were still in flight.
Verified with a standalone harness that runs the failing input three
ways (single Feed, split-after-'>', and char-by-char) and aggregates
TOOL_ARGS for tool index 0: all three now produce
{"location":"Paris, France"}.
Assisted-by: Claude:opus-4.7 [Read,Edit,Bash]
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* fix(backend/cpp/ds4): use ds4_session_sync + manual generation loop for KV persistence
ds4_engine_generate_argmax() is a self-contained helper that doesn't take or
update a ds4_session - it manages its own internal state. Our Predict and
PredictStream methods created g_session via ds4_session_create() but then
called ds4_engine_generate_argmax(), so g_session's KV state never advanced.
ds4_session_payload_bytes(g_session) returned 0 and the disk KV cache save
correctly rejected with 'session has no valid checkpoint to save'.
Switch both RPCs to the proper session API:
    ds4_session_sync(g_session, &prompt, ...)
    loop:
        int token = ds4_session_argmax(g_session)
        if token == eos: break
        emit(token)
        ds4_session_eval(g_session, token, ...)
After the loop the session has a real checkpoint and ds4_session_save_payload
writes the KV state to disk. Verified end-to-end on a DGX Spark GB10: three
.kv files (15-30 MB each) are written when BACKEND_TEST_OPTIONS sets
kv_cache_dir, and the e2e tool-call assertion still passes.
Also added stderr diagnostics to KvCache (enabled/disabled at SetDir; per-save
path + payload_bytes + result) so future failures are visible instead of
silent. The 'wrote ok' lines are low-volume - one per Predict/PredictStream
when the cache is enabled - and skipped entirely when the option is unset.
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* feat(backend/cpp/ds4): use ds4_session_eval_speculative_argmax when MTP loaded
Wires MTP (Multi-Token Prediction) speculative decoding into the manual
generation loop in both Predict and PredictStream. When the upstream MTP
weights are loaded via 'mtp_path:' option AND we're on CUDA / Metal,
ds4_engine_mtp_draft_tokens() returns >0 and we switch the inner loop to
ds4_session_eval_speculative_argmax(), which can accept N>1 tokens per
verifier step. When MTP is not loaded (no option, CPU backend, or weights
absent), we fall through to the simple ds4_session_argmax + ds4_session_eval
path with no behavior change.
Validated on a DGX Spark GB10 with the optional MTP GGUF
(DeepSeek-V4-Flash-MTP-Q4K-Q8_0-F32.gguf, ~3.6 GB). LoadModel logs
'ds4: MTP support model loaded ... (draft=2)' on stderr.
Caveat per upstream README: 'currently provides at most a slight speedup,
not a meaningful generation-speed win'. Wired now mainly to track the
upstream API; bigger speedups arrive when ds4 improves the speculative path.
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* feat(backend/cpp/ds4): honor PredictOptions sampling with DSML-aware override
Mirrors ds4_server.c:7102-7115 sampling-policy semantics on the LocalAI
gRPC side. The generation loop now consults compute_sample_params() per
token to pick the effective (temperature, top_k, top_p, min_p), based on:
1. Request defaults: PredictOptions.temperature / .topk / .topp / .minp
2. Thinking-mode override: when enable_thinking != false, force T=1.0,
top_k=0, top_p=1.0, min_p=0.0 (creativity for the reasoning pass and
the trailing content)
3. DSML structural override: when DsmlParser::IsInDsmlStructural()
returns true (we are between tool-call markers but NOT in a param
value payload), force T=0.0 so protocol bytes parse cleanly
When the effective temperature is 0, we keep using ds4_session_argmax +
MTP speculative path (matches ds4-server's gate that only enables MTP for
greedy positions). When > 0, we call ds4_session_sample(s, T, ...) with
a per-thread RNG seeded from system_clock and fall back to single-token
ds4_session_eval.
New public method on DsmlParser: IsInDsmlStructural() encodes which states
need protocol-byte determinism. PARAM_VALUE is excluded (payload uses user
sampling); TEXT and THINK are excluded (no tool-call context to protect).
Verified on the DGX Spark GB10: the e2e suite still passes with all 5
specs including tools, and the Predict output now varies between runs
(creative sampling active) while the tool-call args remain a clean
'{"location":"Paris, France"}' because the parser-state check forces
greedy on the structural bytes.
UX note: thinking mode is ON by default (matching ds4-server). Users who
want deterministic output should set Metadata.enable_thinking = false.
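The per-token policy reduces to (struct shape illustrative; later
overrides win, matching the numbered list above):
    struct SampleParams { float temp, top_p, min_p; int top_k; };

    SampleParams compute_sample_params(const SampleParams& request_defaults,
                                       bool thinking_enabled,
                                       bool in_dsml_structural) {
        if (in_dsml_structural)
            return {0.0f, 1.0f, 0.0f, 0};  // 3. greedy: protocol bytes must parse
        if (thinking_enabled)
            return {1.0f, 1.0f, 0.0f, 0};  // 2. creativity override
        return request_defaults;           // 1. honor the request
    }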
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* feat(gallery): add sha256 to deepseek-v4-flash-q2 entry
Per HF LFS metadata for antirez/deepseek-v4-gguf:
size: 86720111200 bytes (~80.76 GiB)
sha256: 31598c67c8b8744d3bcebcd19aa62253c6dc43cef3b8adf9f593656c9e86fd8c
LocalAI's downloader verifies sha256 when present, so users who install
deepseek-v4-flash-q2 from the gallery get integrity-checked weights, and
a partial download (an 81 GB file is easy to truncate) becomes a
detectable, recoverable failure instead of silently producing a broken
backend.
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
---------
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
Co-authored-by: Ettore Di Giacinto <mudler@localai.io>
* fix(ci): fix AMDGPU_TARGETS empty-string bypass in hipblas builds
399c1dec wired amdgpu-targets through the backend_build workflow_call
interface, intending the input's default value to cover matrix entries
that don't specify targets. However, GitHub Actions only applies a
workflow_call input default when the caller omits the input entirely.
When backend.yml passes `amdgpu-targets: ${{ matrix.amdgpu-targets }}`
and the matrix entry has no amdgpu-targets key, the expression evaluates
to an empty string, which is treated as an explicit value — bypassing
the default. The result is Docker receiving AMDGPU_TARGETS="" which in
turn causes Make's ?= default to be skipped (since the variable is
already set in the environment, even to empty), and cmake gets
-DAMDGPU_TARGETS= with no targets, so the HIP backend compiles for an
indeterminate target rather than the intended GPU list.
Fix this at three levels:
1. backend.yml: use a || fallback in the expression so that an undefined
matrix.amdgpu-targets never reaches the reusable workflow as an empty
string. The target list is the canonical default and lives here.
2. backend_build.yml: remove the now-misleading default value from the
input declaration. The default never fired due to the above bug, so
keeping it implied a guarantee that didn't exist.
3. backend/cpp/llama-cpp/Makefile: add an explicit $(error ...) guard
after the ?= assignment so that if AMDGPU_TARGETS is empty (whether
from environment or any future CI wiring mistake) the build fails
immediately with a clear message rather than silently producing a
binary compiled for an unknown GPU target.
Assisted-by: Claude Code:claude-sonnet-4-6
Signed-off-by: Russell Sim <rsl@simopolis.xyz>
* fix(build): plumb AMDGPU_TARGETS through to Docker builds
The docker-build-backend Makefile macro and Dockerfile.golang did not
pass AMDGPU_TARGETS to the inner make invocation, so hipblas builds
always used the backend Makefile's hardcoded default GPU targets
regardless of what was specified via environment or CI inputs.
Signed-off-by: Russell Sim <rsl@simopolis.xyz>
---------
Signed-off-by: Russell Sim <rsl@simopolis.xyz>
* chore(llama-cpp): bump LLAMA_VERSION 665abc6 -> d775992 + track speculative struct split
Bumps backend/cpp/llama-cpp/Makefile LLAMA_VERSION from 665abc6 to
d775992, picking up upstream PR ggml-org/llama.cpp#22397 which splits
common_params_speculative into nested draft / ngram_simple / ngram_mod
sub-structs. Renames every grpc-server.cpp reference to match:
speculative.mparams_dft.path -> speculative.draft.mparams.path
speculative.{n_max,n_min} -> speculative.draft.{n_max,n_min}
speculative.{p_min,p_split} -> speculative.draft.{p_min,p_split}
speculative.{n_gpu_layers,n_ctx} -> speculative.draft.{n_gpu_layers,n_ctx}
speculative.ngram_size_n -> speculative.ngram_simple.size_n
speculative.ngram_size_m -> speculative.ngram_simple.size_m
speculative.ngram_min_hits -> speculative.ngram_simple.min_hits
The "speculative.n_max" JSON key sent to the upstream server stays
unchanged — server-task.cpp still reads it and routes the value into
draft.n_max internally.
The turboquant fork (TheTom/llama-cpp-turboquant @ 11a241d) branched
before #22397 and still exposes the flat layout. Since turboquant
reuses the shared backend/cpp/llama-cpp/grpc-server.cpp, extend
patch-grpc-server.sh with an idempotent sed block that reverts the
ten field references back to the legacy flat names on the build copy
only — the original under backend/cpp/llama-cpp/ stays compiling
against vanilla upstream. Drop the block once the fork rebases.
ik-llama-cpp has its own grpc-server.cpp with no speculative refs
(0/2661 lines), so it is unaffected.
Validated locally with `make docker-build-llama-cpp` (avx, avx2,
avx512, fallback, grpc + rpc-server all built; image exported).
Assisted-by: Claude:claude-opus-4-7 [Bash Read Edit]
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* feat(llama-cpp): add split_mode (sm) to the backend options allowlist
Adds split_mode (alias sm) to the llama.cpp backend options allowlist,
accepting none|layer|row|tensor. The tensor value targets the experimental
backend-agnostic tensor parallelism from ggml-org/llama.cpp#19378 and
requires a llama.cpp build that includes that PR, FlashAttention enabled,
KV-cache quantization disabled, and a manually set context size.
Assisted-by: Claude:claude-opus-4-7
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* fix(ik-llama-cpp): bump pin to 16996aeab7 + patch ggml_quantize_chunk call site
Bumps ik_llama.cpp pin to 16996aeab7. Upstream 286ce32...16996ae adds a
trailing `const struct quantize_user_data *` parameter to
`ggml_quantize_chunk` (PR ikawrakow/ik_llama.cpp#1677) but leaves
`examples/llava/clip.cpp` unchanged because their build has moved to
`examples/mtmd/`. LocalAI's prepare.sh still copies from
`examples/llava/`, so the dead 7-arg call reaches the grpc-server
compile and fails. Patch the call site to pass `nullptr` for the new
param.
Assisted-by: Claude:Opus-4.7 [Read] [Edit] [Bash]
* fix(llama-cpp): include server-chat.cpp in grpc-server translation unit
Upstream llama.cpp refactor (ggml-org/llama.cpp#20690) moved the
OAI/Anthropic/Responses and transcription conversion helpers out of
server-common.cpp into a new server-chat.cpp, and server-task.cpp and
server-context.cpp now call those symbols (convert_transcriptions_to_chatcmpl,
server_chat_convert_responses_to_chatcmpl, server_chat_convert_anthropic_to_oai,
server_chat_msg_diff_to_json_oaicompat) via server-chat.h.
grpc-server.cpp builds as a single translation unit by #include-ing the
upstream .cpp files directly. Without including server-chat.cpp, the
declarations are satisfied at compile time via server-chat.h but the
link step fails with undefined references once LLAMA_VERSION crosses
the refactor commit (134d6e54).
Guard the include with __has_include so the same source stays buildable
on older LLAMA_VERSION pins that predate the refactor (where prepare.sh
won't copy server-chat.cpp into tools/grpc-server/).
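The guard in isolation (compiles whether or not the file is present):
    #if __has_include("server-chat.cpp")
    #  include "server-chat.cpp"  // post-refactor pins: conversion helpers
    #endif                        // older pins: file absent, symbols unused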
Assisted-by: Claude:claude-opus-4-7 [Claude Code]
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* chore(llama-cpp): bump LLAMA_VERSION to 0d0764dfd
Bump to ggml-org/llama.cpp@0d0764dfd2.
Paired with the preceding grpc-server server-chat.cpp include so the
refactor at 134d6e54 links cleanly. Supersedes PR #9494.
Assisted-by: Claude:claude-opus-4-7 [Claude Code]
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
---------
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* fix(ik-llama-cpp): adapt grammar accesses to the common_grammar struct
Upstream ik_llama.cpp commit e0596bf6 ("Autoparser") changed
common_params_sampling::grammar from std::string to a common_grammar
struct (type + grammar), which broke our two direct accesses:
- JSON ingest fed the field through json_value<common_grammar>(...),
for which nlohmann has no from_json adapter.
- JSON export emitted the struct directly, for which nlohmann has no
to_json adapter.
Wrap the incoming JSON string in common_grammar{COMMON_GRAMMAR_TYPE_USER, ...}
and serialize via the inner .grammar member, mirroring upstream's
examples/server/server-context.cpp.
Also bump IK_LLAMA_VERSION to 286ce324baed17c95faec77792eaa6bdb1c7a5f5
so the local-ai side lines up with the dependency bump in #9496.
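The wrap/unwrap in isolation (struct shape per the upstream change
described above; helper names are illustrative):
    #include <string>

    enum common_grammar_type { COMMON_GRAMMAR_TYPE_USER /* , ... */ };
    struct common_grammar { common_grammar_type type; std::string grammar; };

    // ingest: the client still sends a plain JSON string
    common_grammar grammar_in(const std::string& s) {
        return common_grammar{COMMON_GRAMMAR_TYPE_USER, s};
    }

    // export: serialize the inner member, not the struct
    const std::string& grammar_out(const common_grammar& g) { return g.grammar; }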
Assisted-by: Claude-Code:claude-opus-4-7
* fix(turboquant): drop ignore-eos patch, bump fork to b8967-627ebbc
The upstream PR #21203 (server: respect the ignore_eos flag) has been
merged into the TheTom/llama-cpp-turboquant feature/turboquant-kv-cache
branch. With the fix now in-tree, 0001-server-respect-the-ignore-eos-flag.patch
no longer applies (git apply sees its additions already present) and the
nightly turboquant bump fails.
Retire the patch and bump the pin to the first fork revision that carries
the merged fix (tag feature-turboquant-kv-cache-b8967-627ebbc). This matches
the contract in apply-patches.sh: drop patches once the fork catches up.
* fix(turboquant): patch out get_media_marker() call in grpc-server copy
CI turboquant docker build was failing with:
grpc-server.cpp:2825:40: error: use of undeclared identifier
'get_media_marker'
The call was added by 7809c5f5 (PR #9412) to propagate the mtmd random
per-server media marker upstream landed in ggml-org/llama.cpp#21962. The
TheTom/llama-cpp-turboquant fork branched before that PR, so its
server-common.cpp has no such symbol.
Extend patch-grpc-server.sh to substitute get_media_marker() with the
legacy "<__media__>" literal in the build-time grpc-server.cpp copy
under turboquant-<flavor>-build/. The fork's mtmd_default_marker()
returns exactly that string, and the Go layer falls back to the same
sentinel when media_marker is empty, so behavior on the turboquant path
is unchanged. Patched copy only — the shared source under
backend/cpp/llama-cpp/ keeps compiling against vanilla upstream.
Verified by running `make docker-build-turboquant` locally end-to-end:
all five flavors (avx, avx2, avx512, fallback, grpc+rpc-server) now
compile past the previous failure and the image tags successfully.
* feat(llama-cpp): add gfx1151 to default AMDGPU_TARGETS + allow override
Add gfx1151 (AMD Strix Halo / Ryzen AI MAX) to the default AMDGPU_TARGETS
list in the llama-cpp backend Makefile. ROCm 7.2.1 ships with gfx1151
Tensile libraries, so this architecture should be included in default builds.
Also expose AMDGPU_TARGETS as an ARG/ENV in Dockerfile.llama-cpp so that
users building for non-default GPU architectures can override the target
list via --build-arg AMDGPU_TARGETS=<arch>. Previously, passing
-DAMDGPU_TARGETS=<arch> through CMAKE_ARGS was silently overridden by
the Makefile's own append of the default target list.
Fixes #9374
Signed-off-by: Keith Mattix <keithmattix2@gmail.com>
Co-authored-by: Ettore Di Giacinto <mudler@users.noreply.github.com>
* fix(llama-cpp): link the turboquant fork's `common` target in the shared CMakeLists
The shared grpc-server CMakeLists hardcoded `llama-common`, the post-rename
target name in upstream llama.cpp. The turboquant fork branched before that
rename and still exposes the helpers library as `common`, so the name
silently degraded to a plain `-lllama-common` link flag, the PUBLIC include
directory was never propagated, and tools/server/server-task.h failed to
find common.h during turboquant-<flavor> builds.
* fix(llama-cpp): propagate the per-server mtmd media marker to the Go layer
Upstream llama.cpp (PR #21962) switched the server-side mtmd media
marker to a random per-server string and removed the legacy
"<__media__>" backward-compat replacement in mtmd_tokenizer. The
Go layer still emitted the hardcoded "<__media__>", so on the
non-tokenizer-template path the prompt arrived with a marker mtmd
did not recognize and tokenization failed with "number of bitmaps
(1) does not match number of markers (0)".
Report the active media marker via ModelMetadataResponse.media_marker
and substitute the sentinel "<__media__>" with it right before the
gRPC call, after the backend has been loaded and probed. Also skip
the Go-side multimodal templating entirely when UseTokenizerTemplate
is true — llama.cpp's oaicompat_chat_params_parse already injects its
own marker and StringContent is unused in that path. Backends that do
not expose the field keep the legacy "<__media__>" behavior.