Mirror of https://github.com/mudler/LocalAI.git
Synced 2026-04-30 03:55:58 -04:00, HEAD f877942d97d45f2c0e09a9ee93d642ead4283196
6124 Commits
f877942d97
fix(openresponses): parse OpenAI-spec nested tool_choice + use correct setter (#9509)
Two bugs in MergeOpenResponsesConfig (/v1/responses + WebSocket, *not*
/v1/chat/completions — that has a separate, working path via Tool
unmarshal + SetFunctionCallNameString):
1. **Shape mismatch.** OpenAI's specific-function tool_choice nests the
name under "function":
{"type": "function", "function": {"name": "my_function"}}
The legacy flat shape was:
{"type": "function", "name": "my_function"}
Only the flat shape was handled. OpenAI-compliant clients that reach
/v1/responses (openai-python with the Responses API, Stainless-generated
SDKs, …) silently failed to force the function.
2. **Wrong setter.** The code called SetFunctionCallString(name), which
writes the mode field (functionCallString: "none"/"auto"/"required").
The specific-function name lives in a separate field
(functionCallNameString), read by ShouldCallSpecificFunction and
FunctionToCall. Net effect: a correctly-formed tool_choice never
engaged grammar-based forcing.
The fix preserves backward compatibility by accepting both shapes
(nested preferred, flat as fallback) and routes to the correct setter.
Note: The same "wrong setter" pattern appears at three other sites —
anthropic/messages.go:883, openai/realtime_model.go:171, and
openresponses/responses.go:776 — and /v1/chat/completions has its own
issue parsing tool_choice="required" as a string (json.Unmarshal on a
raw string fails silently). Those are filed as a tracking issue rather
than bundled here to keep this PR focused.
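For illustration, a minimal Go sketch of the shape handling described above (struct and helper names are invented here, not the actual MergeOpenResponsesConfig code):

```go
package middleware

import "encoding/json"

// toolChoice accepts both wire shapes; the nested one is preferred.
type toolChoice struct {
	Type     string `json:"type"`
	Name     string `json:"name"` // legacy flat shape
	Function struct {
		Name string `json:"name"`
	} `json:"function"` // OpenAI-spec nested shape
}

// forcedFunctionName extracts the function to force, if any.
func forcedFunctionName(raw []byte) (string, bool) {
	var tc toolChoice
	if err := json.Unmarshal(raw, &tc); err != nil || tc.Type != "function" {
		return "", false
	}
	if tc.Function.Name != "" { // nested wins when both shapes are present
		return tc.Function.Name, true
	}
	return tc.Name, tc.Name != "" // flat fallback
}
```

The resolved name must then flow through SetFunctionCallNameString (the field ShouldCallSpecificFunction and FunctionToCall read), not SetFunctionCallString, which only selects the none/auto/required mode.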
## Test plan
9 new Ginkgo specs under "MergeOpenResponsesConfig tool_choice parsing":
- string modes: "required" / "auto" / "none"
- OpenAI-spec nested shape: {type:function, function:{name}}
- Legacy Anthropic-compat flat shape: {type:function, name}
- Shape-preference: nested wins over flat when both present
- Malformed: missing type, wrong type, missing name, empty name, nil
$ go test ./core/http/middleware/ -count=1 -run TestMiddleware
Ran 28 of 28 Specs in 0.003 seconds -- PASS
## Repro (against /v1/responses)
curl -N http://localai/v1/responses \
-H 'Content-Type: application/json' \
-d '{"model":"qwen3.6-35b-a3b-apex",
"input":"Weather in Berlin?",
"tools":[{"type":"function","name":"get_weather",
"parameters":{"type":"object",
"properties":{"city":{"type":"string"}},
"required":["city"]}}],
"tool_choice":{"type":"function",
"function":{"name":"get_weather"}}}'
Before: grammar-based forcing silently inactive; model free-texts.
After : grammar forces get_weather invocation; output contains
tool_calls with function:{name:"get_weather", arguments:{...}}.

f5eb13d3c2
feat(insightface): add antispoofing (liveness) detection (#9515)
* feat(insightface): add antispoofing (liveness) detection
Light up the anti_spoofing flag that was parked during the first pass.
Both FaceVerify and FaceAnalyze now run the Silent-Face MiniFASNetV2 +
MiniFASNetV1SE ensemble (~4 MB, Apache 2.0, CPU <10ms) when the flag is
set. Failed liveness on either image vetoes FaceVerify regardless of
embedding similarity. Every insightface* gallery entry now ships the
MiniFASNet ONNX weights so existing packs light up after reinstall.
Setting the flag against a model without the MiniFASNet files returns
FAILED_PRECONDITION (HTTP 412) with a clear install message — no
silent is_real=false.
FaceVerifyResponse gained per-image img{1,2}_is_real and
img{1,2}_antispoof_score (proto 9-12); FaceAnalysis's existing
is_real/antispoof_score fields are now populated. Schema fields are
pointers so they are fully absent from the JSON response when
anti_spoofing was not requested — avoids collapsing "not checked" with
"checked and fake" under Go's omitempty on bool.
Validated end-to-end over HTTP against a local install:
- verify + anti_spoofing, both real -> verified=true, score ~0.76
- verify + anti_spoofing, img2 spoof -> verified=false, img2_is_real=false
- analyze + anti_spoofing -> is_real and score per face
- flag against model without MiniFASNet -> HTTP 412 fail-loud
Assisted-by: Claude:claude-opus-4-7 go vet
* test(insightface): wire test target into test-extra
The root Makefile's `test-extra` already runs
`$(MAKE) -C backend/python/insightface test`, but the backend's
Makefile never defined the target — so the command silently errored
and the suite was never executed in CI. Adding the two-line target
(matching ace-step/Makefile) hooks `test.sh` → `runUnittests` →
`python -m unittest test.py`, which discovers both the pre-existing
engine classes (InsightFaceEngineTest, OnnxDirectEngineTest) and the
new AntispoofingTest. Each class skips gracefully when its weights
can't be downloaded from a network-restricted runner.
Assisted-by: Claude:claude-opus-4-7
* test(insightface): exercise antispoofing in e2e-backends (both paths)
Add a `face_antispoof` capability to the Ginkgo e2e suite and extend
the existing FaceVerify + FaceAnalyze specs with liveness assertions
covering BOTH paths:
real fixture -> is_real=true, score>0, verified stays true
spoof fixture -> is_real=false, verified vetoed to false
The spoof fixture is upstream's own `image_F2.jpg` (via the yakhyo
mirror) — verified locally against the MiniFASNetV2+V1SE ensemble to
classify as is_real=false with score ~0.013. That makes the assertion
deterministic across CI runs; synthetic/derived spoofs fool the model
unpredictably and would be flaky.
Makefile wires it up end-to-end:
- New INSIGHTFACE_ANTISPOOF_* cache dir + two ONNX downloads with
pinned SHAs, matching the gallery entries.
- insightface-antispoof-models target shared by both backend configs.
- FACE_SPOOF_IMAGE_URL passed via BACKEND_TEST_FACE_SPOOF_IMAGE_URL.
- Both e2e targets (buffalo-sc + opencv) now:
* depend on insightface-antispoof-models
* pass antispoof_v2_onnx / antispoof_v1se_onnx in BACKEND_TEST_OPTIONS
* include face_antispoof in BACKEND_TEST_CAPS
backend_test.go adds the new capability constant and a faceSpoofFile
fixture resolved the same way as faceFile1/2/3. Spoof assertions are
gated on both capFaceAntispoof AND faceSpoofFile being set, so a test
config that omits the spoof fixture degrades gracefully to "real path
only" instead of failing.
Assisted-by: Claude:claude-opus-4-7 go vet

c1f923b2bc
fix(importer): emit all shards for multi-part GGUF models (#9513)
The llama-cpp HuggingFace importer iterated files one at a time and kept overwriting `lastGGUFFile`, so sharded repos such as `unsloth/Kimi-K2.6-GGUF` (14 `Q8_K_XL` parts) produced a gallery entry pointing only at the final shard — useless to llama.cpp's split loader, which needs shard 1 to discover the set.

Group shards up front via new helpers in `pkg/huggingface-api` (`SplitShardSuffix`, `ShardGroup`, `GroupShards`). The llama-cpp importer now picks a group (preferred quant, then last-group fallback) and emits every shard, with `Model:` pointing at shard 1. `FindPreferredModelFile` returns shard 1 of the first matching group so the gallery agent's preview stays coherent for sharded repos.

Adds unit coverage for the HuggingFace branch of the importer (which had none), plus shard-detection tests in the hfapi package.

Assisted-by: Claude:Opus-4.7 [Read] [Edit] [Bash]
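A minimal sketch of the shard-suffix convention those helpers parse (llama.cpp's gguf-split naming, `<base>-00001-of-00014.gguf`; the real SplitShardSuffix signature may differ):

```go
package hfapi

import (
	"regexp"
	"strconv"
)

// Matches llama.cpp split shards such as model-00003-of-00014.gguf.
var shardRe = regexp.MustCompile(`^(.+)-(\d{5})-of-(\d{5})\.gguf$`)

// splitShardSuffix is an illustrative stand-in for SplitShardSuffix.
func splitShardSuffix(name string) (base string, idx, total int, ok bool) {
	m := shardRe.FindStringSubmatch(name)
	if m == nil {
		return "", 0, 0, false
	}
	idx, _ = strconv.Atoi(m[2])
	total, _ = strconv.Atoi(m[3])
	return m[1], idx, total, true
}
```

Grouping then keys on (base, total), and the emitted `Model:` field points at the idx == 1 entry of the chosen group.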

ed648b3b4e
fix(llama-cpp): include server-chat.cpp in grpc-server translation unit (#9511)
* fix(llama-cpp): include server-chat.cpp in grpc-server translation unit

Upstream llama.cpp refactor (ggml-org/llama.cpp#20690) moved the OAI/Anthropic/Responses and transcription conversion helpers out of server-common.cpp into a new server-chat.cpp, and server-task.cpp and server-context.cpp now call those symbols (convert_transcriptions_to_chatcmpl, server_chat_convert_responses_to_chatcmpl, server_chat_convert_anthropic_to_oai, server_chat_msg_diff_to_json_oaicompat) via server-chat.h.

grpc-server.cpp builds as a single translation unit by #include-ing the upstream .cpp files directly. Without including server-chat.cpp, the declarations are satisfied at compile time via server-chat.h but the link step fails with undefined references once LLAMA_VERSION crosses the refactor commit (134d6e54). Guard the include with __has_include so the same source stays buildable on older LLAMA_VERSION pins that predate the refactor (where prepare.sh won't copy server-chat.cpp into tools/grpc-server/).

Assisted-by: Claude:claude-opus-4-7 [Claude Code]
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* chore(llama-cpp): bump LLAMA_VERSION to 0d0764dfd

Bump to ggml-org/llama.cpp@0d0764dfd2. Paired with the preceding grpc-server server-chat.cpp include so the refactor at 134d6e54 links cleanly. Supersedes PR #9494.

Assisted-by: Claude:claude-opus-4-7 [Claude Code]
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

---------

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

3ce5248126
Update expected length of instructions in test
Signed-off-by: Ettore Di Giacinto <mudler@users.noreply.github.com>

04f1a0285d
fix(ik-llama-cpp): adapt to common_grammar struct in sampling.h (#9512)
Upstream ik_llama.cpp commit e0596bf6 ("Autoparser") changed
common_params_sampling::grammar from std::string to a common_grammar
struct (type + grammar), which broke our two direct accesses:
- JSON ingest fed the field through json_value<common_grammar>(...),
for which nlohmann has no from_json adapter.
- JSON export emitted the struct directly, for which nlohmann has no
to_json adapter.
Wrap the incoming JSON string in common_grammar{COMMON_GRAMMAR_TYPE_USER, ...}
and serialize via the inner .grammar member, mirroring upstream's
examples/server/server-context.cpp.
Also bump IK_LLAMA_VERSION to 286ce324baed17c95faec77792eaa6bdb1c7a5f5
so the local-ai side lines up with the dependency bump in #9496.
Assisted-by: Claude-Code:claude-opus-4-7

181ebb6df4
feat: voice recognition (#9500)
* feat(voice-recognition): add /v1/voice/{verify,analyze,embed} + speaker-recognition backend
Audio analog to face recognition. Adds three gRPC RPCs
(VoiceVerify / VoiceAnalyze / VoiceEmbed), their Go service and HTTP
layers, a new FLAG_SPEAKER_RECOGNITION capability flag, and a Python
backend scaffold under backend/python/speaker-recognition/ wrapping
SpeechBrain ECAPA-TDNN with a parallel OnnxDirectEngine for
WeSpeaker / 3D-Speaker ONNX exports.
The kokoros Rust backend gets matching unimplemented trait stubs —
tonic's async_trait has no defaults, so adding an RPC without Rust
stubs breaks the build (same regression fixed by

1c59165d63
chore(model gallery): 🤖 add 1 new models via gallery agent (#9505)
chore(model gallery): 🤖 add new models via gallery agent

Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>

eb00d9b178
chore: ⬆️ Update leejet/stable-diffusion.cpp to c97702e1057c2fe13a7074cd9069cb9dd6edc1bf (#9495)
⬆️ Update leejet/stable-diffusion.cpp

Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>

2068b6f43c
feat(swagger): update swagger (#9498)
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>

eb01c77214
fix(kokoros): implement face_verify and face_analyze trait stubs (#9499)
The backend.proto was updated to add FaceVerify and FaceAnalyze RPCs
(face detection support), but the Rust KokorosService was never updated
to match the regenerated tonic trait, breaking compilation with E0046:
not all trait items implemented, missing: `face_verify`, `face_analyze`
Stubs both methods as unimplemented, matching the pattern used for the
other RPCs Kokoros does not support.
Assisted-by: Claude:claude-opus-4-7 [Claude Code]

bb4fda6f0e
chore(agents): Update the backend creation instructions to include Rust and extra tests (#9490)
Signed-off-by: Richard Palethorpe <io@richiejp.com>

f0c92610a1
feat(importer): expand importer flow to almost all backends (#9466)
* docs(agents): require importer integration when adding backends
Document the importer registry workflow so contributors know that adding
a new backend also requires updating the /import-model dropdown source:
either a new importer in core/gallery/importers/, extending an existing
one for drop-in replacements, or the pref-only slice for backends with
no reliable auto-detect signal. Always covered by a table-driven test.
Assisted-by: Claude:claude-opus-4-7[1m] [Agent]
* test(gallery/importers): add failing tests for Batch 0 primitives
Introduce failing tests that drive Batch 0 of the importer expansion:
- pkg/huggingface-api: assert GetModelDetails populates PipelineTag and
LibraryName from /api/models/{repo}, and that a failing metadata
endpoint still returns file details (best-effort fetch).
- core/gallery/importers/helpers_test.go: new table-driven coverage for
HasFile, HasExtension, HasONNX, HasONNXConfigPair, HasGGMLFile.
- core/gallery/importers/importers_test.go: assert ErrAmbiguousImport
sentinel exists and round-trips through errors.Is.
- core/gallery/importers/local_test.go: extend with detection cases for
ggml-*.bin (whisper), silero_vad.onnx (silero-vad), and the piper
.onnx + .onnx.json pair.
- core/http/endpoints/localai/import_model_test.go: assert
ImportModelURIEndpoint returns HTTP 400 with a structured
{error, detail, hint} body when ErrAmbiguousImport surfaces.
All tests fail in the expected places (missing fields, missing
helpers, missing sentinel, endpoint still wraps as 500).
Assisted-by: Claude:claude-opus-4-7[1m] [Agent]
* feat(gallery/importers): Batch 0 foundation — helpers, sentinel, local detection
Implements the Batch 0 primitives that subsequent importer batches build on:
- pkg/huggingface-api: ModelDetails gains PipelineTag and LibraryName.
GetModelDetails now layers a best-effort GET /api/models/{repo} fetch
on top of ListFiles — a metadata outage leaves the fields empty but
still returns full file details. Uses a dedicated response struct
because the single-model endpoint uses snake_case keys while the list
endpoint historically returned camelCase.
- core/gallery/importers/helpers.go: generic HasFile, HasExtension,
HasONNX, HasONNXConfigPair, HasGGMLFile helpers working on
[]hfapi.ModelFile so per-backend importers can detect artefact
patterns without duplicating string wrangling.
- core/gallery/importers/importers.go: adds the ErrAmbiguousImport
sentinel. DiscoverModelConfig now returns it (wrapped with
fmt.Errorf("%w: ...")) when no importer matched AND the HF
pipeline_tag falls in a whitelist of narrow modalities (ASR, TTS,
sentence-similarity, text-classification, object-detection). The
whitelist is intentionally narrow — unknown tags keep the previous
"no importer matched" behaviour to avoid blocking rare repos. A
sketch of this pattern follows the list.
- core/gallery/importers/local.go: three new local-path detections,
inserted before the existing merged-transformers branch:
* ggml-*.bin → whisper
* silero*.onnx → silero-vad
* *.onnx + *.onnx.json pair → piper
- core/http/endpoints/localai/import_model.go: ImportModelURIEndpoint
surfaces ErrAmbiguousImport as HTTP 400 with
{error, detail, hint} JSON, preserving existing behaviour for
unrelated errors.
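A hedged sketch of the sentinel pattern from the importers.go bullet above (the whitelist contents match the bullet; the surrounding function is invented for illustration):

```go
package importers

import (
	"errors"
	"fmt"
)

// ErrAmbiguousImport: no importer matched, but the modality is narrow
// enough that the caller should be told to pass preferences.backend.
var ErrAmbiguousImport = errors.New("ambiguous import")

var ambiguousModalities = map[string]bool{
	"automatic-speech-recognition": true,
	"text-to-speech":               true,
	"sentence-similarity":          true,
	"text-classification":          true,
	"object-detection":             true,
}

// noImporterMatched is illustrative, not the actual DiscoverModelConfig.
func noImporterMatched(pipelineTag, uri string) error {
	if ambiguousModalities[pipelineTag] {
		// Wrapped so errors.Is(err, ErrAmbiguousImport) holds.
		return fmt.Errorf("%w: %s (pipeline_tag %s)", ErrAmbiguousImport, uri, pipelineTag)
	}
	return fmt.Errorf("no importer matched for %s", uri)
}
```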
Green tests:
go test ./core/gallery/importers/... ./pkg/huggingface-api/... \
./core/http/endpoints/localai/...
Assisted-by: Claude:claude-opus-4-7[1m] [Agent]
* test(importers): red tests for KnownBackend endpoint and importer metadata
Add failing tests that drive Batch UI-Dropdown:
- importers_test.go: assert importers expose Name/Modality/AutoDetects
and that LlamaCPPImporter advertises drop-in replacements via a new
AdditionalBackendsProvider interface. A Registry() accessor is also
expected.
- backend_test.go (new): assert GET /backends/known returns
[]schema.KnownBackend, covers every importer, exposes drop-in
llama-cpp replacements, includes curated pref-only backends, has no
duplicates, and is sorted by Modality+Name.
These tests fail at compile time against master; they are intentionally
red so the follow-up green commit is reviewable.
Assisted-by: Claude:claude-opus-4-7[1m] [Agent]
* feat(gallery): add /backends/known endpoint for importer-aware backend list
Extend the Importer interface with Name/Modality/AutoDetects so the
import system can self-describe its registry, and introduce the
AdditionalBackendsProvider interface so importers can advertise drop-in
replacements (llama-cpp advertises ik-llama-cpp and turboquant).
Expose the new GET /backends/known endpoint that merges:
- the importer registry (auto-detect supported),
- drop-in replacements hosted by importers (preference-only),
- a curated knownPrefOnlyBackends slice for backends with no dedicated
importer (sglang, tinygrad, trl, mlx-vlm, whisperx, kokoros, Qwen TTS
variants, sam3-cpp) — kept at the top of backend.go so contributors
adding a new pref-only backend have one obvious place to edit,
- backends installed on disk but unknown to the importer (marked
AutoDetect=false, empty Modality).
The endpoint deliberately does NOT filter by gallery membership or host
capability (unlike /backends/available): LocalAI may auto-install a
backend that is not yet present, so the import form dropdown must show
everything the importer knows about.
Response is deduplicated (importer wins over pref-only) and sorted by
Modality+Name for deterministic output.
Registered in core/http/routes/localai.go next to /backends/available
under the same admin middleware.
Assisted-by: Claude:claude-opus-4-7[1m] [Agent]
* feat(ui): source import form backend dropdown from /backends/known
Replace the hard-coded BACKENDS constant in ImportModel.jsx with a
live fetch of /backends/known on mount. Users now see every backend
the importer layer knows about (including preference-only entries)
grouped by modality, not a stale subset.
Changes:
- config.js: add backendsKnown endpoint constant next to
backendsAvailable.
- api.js: add backendsApi.listKnown() wrapper.
- ImportModel.jsx: remove BACKENDS constant, fetch the list via
useEffect, and derive grouped options via buildBackendOptions.
Preference-only entries render with a " (preference-only)" suffix.
Loading state disables the dropdown with a "Loading backends…"
placeholder; on fetch failure the form falls back to auto-detect
only and surfaces a non-blocking toast.
- SearchableSelect.jsx: accept items flagged isHeader=true and render
them as non-selectable section dividers. Keyboard navigation skips
headers and search queries hide them so filtered output stays
relevant.
Vitest is not set up in this project (devDependencies ship Playwright
only). Per the brief's guard-rail, no frontend test framework is
introduced; coverage is provided by the Go handler tests that assert
the /backends/known contract consumed by the React form.
Assisted-by: Claude:claude-opus-4-7[1m] [Agent]
* test(gallery/importers): add failing tests for whisper importer
Asserts detection on ggerganov/whisper.cpp (via ggml-*.bin filename),
the preferences.backend=whisper override path for arbitrary URIs,
and the Importer interface metadata (name/modality/autodetect).
Assisted-by: Claude:claude-opus-4-7[1m] [Agent]
* feat(gallery/importers): add whisper importer
Recognises whisper.cpp GGML models by the "ggml-*.bin" filename
convention (direct URL or HF repo member) and by the explicit
preferences.backend="whisper" override. Emits backend: whisper with
the transcript use-case. Registered before llama-cpp so the narrow
filename signal wins before any generic GGUF match is attempted.
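A minimal sketch of the filename signal, assuming a plain list of repo filenames (the real importer composes the shared HasFile/HasGGMLFile helpers over []hfapi.ModelFile):

```go
package importers

import (
	"path"
	"strings"
)

// looksLikeWhisperGGML is illustrative only: whisper.cpp weights follow
// the ggml-<size>.bin naming convention.
func looksLikeWhisperGGML(filenames []string) bool {
	for _, f := range filenames {
		base := path.Base(f)
		if strings.HasPrefix(base, "ggml-") && strings.HasSuffix(base, ".bin") {
			return true
		}
	}
	return false
}
```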
Assisted-by: Claude:claude-opus-4-7[1m] [Agent]
* test(gallery/importers): add failing tests for moonshine importer
Asserts detection on UsefulSensors/moonshine-tiny via owner + ONNX
files, the preferences.backend=moonshine override for arbitrary URIs,
and the Importer interface metadata (name/modality/autodetect).
Assisted-by: Claude:claude-opus-4-7[1m] [Agent]
* feat(gallery/importers): add moonshine importer
Matches UsefulSensors-owned HF repos whose artefacts or metadata
identify them as ASR: on-disk .onnx files (the canonical Moonshine
packaging) OR pipeline_tag=automatic-speech-recognition (covers
transformers/safetensors-only sibling repos). preferences.backend=
moonshine overrides detection. Test uses the live moonshine-tiny
repo because the canonical UsefulSensors/moonshine repo currently
hits a recursive-subfolder bug in pkg/huggingface-api ListFiles.
Registered after WhisperImporter but before LlamaCPPImporter and
TransformersImporter so the narrower owner+ASR signal wins before
the generic tokenizer.json check routes the repo to transformers.
Assisted-by: Claude:claude-opus-4-7[1m] [Agent]
* test(gallery/importers): add failing tests for nemo importer
Asserts detection on nvidia/parakeet-tdt-0.6b-v3 via owner + .nemo
file, the preferences.backend=nemo override for arbitrary URIs, and
the Importer interface metadata (name/modality/autodetect).
Assisted-by: Claude:claude-opus-4-7[1m] [Agent]
* feat(gallery/importers): add nemo importer
Matches nvidia-owned HF repos that ship a .nemo checkpoint archive,
the canonical NeMo ASR packaging. preferences.backend=nemo forces
detection. Registered between moonshine and llama-cpp so the narrow
owner + extension signal wins before any downstream generic matcher.
Assisted-by: Claude:claude-opus-4-7[1m] [Agent]
* test(gallery/importers): add failing tests for faster-whisper importer
Asserts detection on Systran/faster-whisper-large-v3 (owner +
model.bin + config.json + ASR pipeline), the preferences.backend=
faster-whisper override for arbitrary URIs, and the Importer
interface metadata.
Assisted-by: Claude:claude-opus-4-7[1m] [Agent]
* feat(gallery/importers): add faster-whisper importer
Recognises CTranslate2-packaged whisper checkpoints distributed for
the faster-whisper runtime: model.bin + config.json + ASR
pipeline_tag, narrowed to Systran-owned repos or repo names
containing "faster-whisper" to avoid falsely claiming vanilla
OpenAI whisper HF repos. preferences.backend=faster-whisper
overrides detection. Registered before llama-cpp and transformers
so the narrow signal wins before tokenizer.json routes the repo to
the generic transformers importer.
Assisted-by: Claude:claude-opus-4-7[1m] [Agent]
* test(gallery/importers): add failing tests for qwen-asr importer
Asserts detection on Qwen/Qwen3-ASR-1.7B via owner + ASR substring
in the repo name, the preferences.backend=qwen-asr override for
arbitrary URIs, and the Importer interface metadata.
Assisted-by: Claude:claude-opus-4-7[1m] [Agent]
* feat(gallery/importers): add qwen-asr importer
Matches Qwen-owned HF repos whose name contains "ASR"
(case-insensitive), routing them to the qwen-asr backend rather
than the generic transformers/vllm path. The substring check scans
the repo portion only so the owner field cannot leak a false match.
preferences.backend=qwen-asr forces detection. Registered before
llama-cpp and transformers so the narrow owner+name signal wins.
Assisted-by: Claude:claude-opus-4-7[1m] [Agent]
* test(gallery/importers): ASR ambiguity surfaces ErrAmbiguousImport
Locks in the behaviour added in Batch 0: an HF repo whose pipeline_tag
marks it as automatic-speech-recognition but whose artefacts match no
ASR importer (and no generic importer) must fail with
ErrAmbiguousImport so callers know to pass preferences.backend rather
than silently guess. pyannote/voice-activity-detection is the fixture
— its file list is only config.yaml + README, leaving every importer's
artefact check negative.
Assisted-by: Claude:claude-opus-4-7[1m] [Agent]
* test(gallery/importers): add failing tests for piper importer
Assisted-by: Claude:claude-opus-4-7[1m] [Agent]
* feat(gallery/importers): add piper importer
Detects piper TTS voices by the canonical <voice>.onnx + <voice>.onnx.json
pair packaging (via HasONNXConfigPair). Narrow enough to skip generic
ONNX repos used by other backends (Moonshine ASR, sentence-transformers).
Assisted-by: Claude:claude-opus-4-7[1m] [Agent]
* test(gallery/importers): add failing tests for bark importer
Assisted-by: Claude:claude-opus-4-7[1m] [Agent]
* feat(gallery/importers): add bark importer
Detects Suno's Bark TTS checkpoints by HF owner "suno" + repo name
prefix "bark". Adds HFOwnerRepoFromURI() helper so importers can fall
back to URI parsing when pkg/huggingface-api's recursive tree listing
errors on repos with nested subdirectories (suno/bark ships a
speaker_embeddings/v2 subtree that trips a pre-existing path-doubling
bug in the listFilesInPath recursion).
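An illustrative sketch of that URI fallback, assuming canonical https://huggingface.co/<owner>/<repo> URIs (the real HFOwnerRepoFromURI may accept more forms):

```go
package importers

import "strings"

// hfOwnerRepoFromURI is a stand-in for HFOwnerRepoFromURI: it recovers
// owner/repo from the URI alone, without touching the tree listing.
func hfOwnerRepoFromURI(uri string) (owner, repo string, ok bool) {
	rest, found := strings.CutPrefix(uri, "https://huggingface.co/")
	if !found {
		return "", "", false
	}
	parts := strings.SplitN(rest, "/", 3)
	if len(parts) < 2 || parts[0] == "" || parts[1] == "" {
		return "", "", false
	}
	return parts[0], parts[1], true
}
```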
Assisted-by: Claude:claude-opus-4-7[1m] [Agent]
* test(gallery/importers): add failing tests for fish-speech importer
Assisted-by: Claude:claude-opus-4-7[1m] [Agent]
* feat(gallery/importers): add fish-speech importer
Detects Fish Audio TTS releases by HF owner "fishaudio" with a URI-based
fallback for repos whose tree recursion trips the pre-existing hfapi
path-doubling bug.
Assisted-by: Claude:claude-opus-4-7[1m] [Agent]
* test(gallery/importers): add failing tests for outetts importer
Assisted-by: Claude:claude-opus-4-7[1m] [Agent]
* feat(gallery/importers): add outetts importer
Detects OuteAI's OuteTTS releases by HF owner "OuteAI" or a case-
insensitive "OuteTTS" substring in the repo name, with a URI-based
fallback for recursion-bugged repos.
Assisted-by: Claude:claude-opus-4-7[1m] [Agent]
* test(gallery/importers): add failing tests for voxcpm importer
Assisted-by: Claude:claude-opus-4-7[1m] [Agent]
* feat(gallery/importers): add voxcpm importer
Detects OpenBMB's VoxCPM TTS family by repo-name substring (community
mirrors re-host the weights under many owners — mlx-community,
bluryar, callgg, etc.).
Assisted-by: Claude:claude-opus-4-7[1m] [Agent]
* test(gallery/importers): add failing tests for kokoro importer
Assisted-by: Claude:claude-opus-4-7[1m] [Agent]
* feat(gallery/importers): add kokoro importer
Detects hexgrad's Kokoro TTS by the "Kokoro" repo-name substring paired
with a PyTorch .pth/.pt checkpoint — the pairing excludes ONNX-only
mirrors (handled by the pref-only `kokoros` Rust runtime) and GGUF
mirrors (handled by llama-cpp).
Assisted-by: Claude:claude-opus-4-7[1m] [Agent]
* test(gallery/importers): add failing tests for kitten-tts importer
Assisted-by: Claude:claude-opus-4-7[1m] [Agent]
* feat(gallery/importers): add kitten-tts importer
Detects KittenML's kitten-tts releases by owner or "kitten-tts" repo-name
substring, with URI-parsing fallback.
Assisted-by: Claude:claude-opus-4-7[1m] [Agent]
* test(gallery/importers): add failing tests for neutts importer
Assisted-by: Claude:claude-opus-4-7[1m] [Agent]
* feat(gallery/importers): add neutts importer
Detects Neuphonic's NeuTTS releases by owner "neuphonic" or "neutts"
repo-name substring, with URI-parsing fallback.
Assisted-by: Claude:claude-opus-4-7[1m] [Agent]
* test(gallery/importers): add failing tests for chatterbox importer
Assisted-by: Claude:claude-opus-4-7[1m] [Agent]
* feat(gallery/importers): add chatterbox importer
Detects Resemble AI's Chatterbox TTS by owner "ResembleAI" or
"chatterbox" repo-name substring, with URI-parsing fallback.
Assisted-by: Claude:claude-opus-4-7[1m] [Agent]
* test(gallery/importers): add failing tests for vibevoice importer
Assisted-by: Claude:claude-opus-4-7[1m] [Agent]
* feat(gallery/importers): add vibevoice importer
Detects Microsoft's VibeVoice TTS by "vibevoice" repo-name substring
(case-insensitive) so community mirrors still route here.
Assisted-by: Claude:claude-opus-4-7[1m] [Agent]
* test(gallery/importers): add failing tests for coqui importer
Assisted-by: Claude:claude-opus-4-7[1m] [Agent]
* feat(gallery/importers): add coqui importer
Detects Coqui AI's TTS releases (XTTS-v2, YourTTS, …) by the
authoritative `coqui` HF owner, with URI-parsing fallback.
Assisted-by: Claude:claude-opus-4-7[1m] [Agent]
* test(gallery/importers): TTS ambiguity surfaces ErrAmbiguousImport
Adds a Ginkgo spec that imports nari-labs/Dia-1.6B — a real HF repo
carrying pipeline_tag="text-to-speech" whose artefacts (*.pth, one
safetensors shard, preprocessor_config.json, config.json) match none of
the Batch-2 TTS importers nor the generic text/image importers — and
asserts DiscoverModelConfig wraps ErrAmbiguousImport via errors.Is.
Also pivots the endpoint-level ambiguity fixture from hexgrad/Kokoro-82M
to nari-labs/Dia-1.6B. Batch 2 added a dedicated kokoro importer that
now claims the original fixture; Dia remains genuinely unclaimed and
so exercises the same ambiguity code path at the HTTP layer.
Assisted-by: Claude:claude-opus-4-7[1m] [Agent]
* test(gallery/importers): add failing tests for stablediffusion-ggml importer
Covers HF repo detection (city96/FLUX.1-dev-gguf), raw .gguf URL matching on
filename arch tokens, preference override, and Importer interface metadata.
Assisted-by: Claude:claude-opus-4-7[1m] [Agent]
* feat(gallery/importers): add stablediffusion-ggml importer
Detects GGUF-packed Stable Diffusion and FLUX checkpoints (leejet owner,
city96 FLUX mirrors, second-state SD dumps, raw .gguf URLs with arch
tokens) and routes them to the stablediffusion-ggml backend. Registered
BEFORE LlamaCPPImporter so .gguf image checkpoints are not stolen by
llama-cpp's generic .gguf match. Reuses HFOwnerRepoFromURI for the
hfapi-recursion-bug fallback. preferences.backend overrides detection.
Assisted-by: Claude:claude-opus-4-7[1m] [Agent]
* test(gallery/importers): add failing tests for ace-step importer
Covers HF repo-name detection (ACE-Step/ACE-Step-v1-3.5B), preference
override, and Importer interface metadata.
Assisted-by: Claude:claude-opus-4-7[1m] [Agent]
* feat(gallery/importers): add ace-step importer
Routes ACE-Step music generation checkpoints (ACE-Step/ACE-Step-v1-3.5B,
ACE-Step/Ace-Step1.5, community mirrors) to the ace-step backend.
Matching is case-insensitive on the "ace-step" repo-name substring and
owner, with an HFOwnerRepoFromURI fallback for the hfapi recursion bug.
KnownUsecaseStrings mirrors the gallery's ace-step-turbo entry
(sound_generation, tts). preferences.backend overrides.
Assisted-by: Claude:claude-opus-4-7[1m] [Agent]
* test(gallery/importers): surface ErrAmbiguousImport on text-to-image misses
Adds text-to-image to ambiguousModalities whitelist and covers the
h94/IP-Adapter-FaceID case — pipeline_tag=text-to-image but ships only
.bin/.safetensors so diffusers, stablediffusion-ggml, llama-cpp,
transformers, vllm, mlx, and ace-step all miss. DiscoverModelConfig now
surfaces ErrAmbiguousImport for that shape instead of the opaque
"no importer matched" error.
Assisted-by: Claude:claude-opus-4-7[1m] [Agent]
* test(gallery/importers): add failing tests for vllm-omni importer
Introduces the test surface for the forthcoming VLLMOmniImporter:
detection via preferences.backend, Qwen owner + Omni repo token,
URI-only fallback, negative cases (plain Qwen, random OmniX repo), and
Import() emitting backend: vllm-omni with chat + multimodal usecases.
Includes a registration-order assertion via DiscoverModelConfig to pin
the requirement that vllm-omni wins over vllm for Qwen Omni repos
(tokenizer files are usually present too).
Assisted-by: Claude:claude-opus-4-7[1m] [Agent]
* feat(gallery/importers): add vllm-omni importer
Adds VLLMOmniImporter for Qwen Omni-style multimodal checkpoints
(Qwen3-Omni, Qwen2.5-Omni, …). Detection is narrow: HF owner "Qwen"
combined with "omni" in the repo name, or a repo name matching the
-Omni-/Omni- naming pattern. preferences.backend="vllm-omni" always
wins; HFOwnerRepoFromURI provides a URI-only fallback for the hfapi
recursion-bug edge case.
Emitted YAML sets backend: vllm-omni and known_usecases: [chat,
multimodal], matching the gallery/index.yaml vllm-omni entries. The
importer is registered ahead of VLLMImporter so Qwen Omni repos —
which also carry tokenizer files — route to vllm-omni rather than the
plain vllm backend.
Assisted-by: Claude:claude-opus-4-7[1m] [Agent]
* test(gallery/importers): add failing tests for llama-cpp drop-in preferences
Pins the expected drop-in replacement behaviour: preferences.backend
of ik-llama-cpp or turboquant must swap the emitted YAML backend
field while keeping the llama-cpp file layout identical. Also covers
the unknown-backend case (must stay llama-cpp) and re-asserts
AdditionalBackends() returns the two curated entries with non-empty
descriptions.
Assisted-by: Claude:claude-opus-4-7[1m] [Agent]
* feat(gallery/importers): llama-cpp honours ik-llama-cpp and turboquant drop-in preferences
preferences.backend set to ik-llama-cpp or turboquant now swaps the
emitted YAML backend field while leaving the file layout, model path,
mmproj handling and everything else in the llama-cpp Import pipeline
untouched. Unknown values are ignored and fall back to backend:
llama-cpp so arbitrary input can't leak into the config.
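A sketch of the swap, with the allowed set taken from this message (actual wiring inside the llama-cpp Import pipeline may differ):

```go
package importers

// Only curated drop-ins may replace the emitted backend field; every
// other value falls back to llama-cpp.
var llamaCppDropIns = map[string]bool{
	"ik-llama-cpp": true,
	"turboquant":   true,
}

func resolveBackend(pref string) string {
	if llamaCppDropIns[pref] {
		return pref
	}
	return "llama-cpp"
}
```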
Aligns the AdditionalBackends() descriptions with the user-facing
naming conventions surfaced via /backends/known. No changes to the
pref-only curated list in endpoints/localai/backend.go: the two
drop-in names have always lived on the importer side via
AdditionalBackends.
Assisted-by: Claude:claude-opus-4-7[1m] [Agent]
* test(gallery/importers): add failing tests for silero-vad importer
Add the SileroVADImporter test fixtures covering metadata, preference
overrides, snakers4 + onnx detection, silero_vad.onnx canonical filename,
URI fallback, and live HF discovery. Implementation follows in the next
commit.
Assisted-by: Claude:claude-opus-4-7[1m] [Agent]
* feat(gallery/importers): add silero-vad importer
Recognise the Silero VAD ONNX packaging: the canonical silero_vad.onnx
filename or any ONNX file under the snakers4 owner. Emits a
backend: silero-vad config with the vad known_usecase, and attaches the
canonical file entry when present so the weights download on import.
Registered before the generic importers so the unique-filename signal
takes precedence over any downstream tokenizer-based matcher.
Assisted-by: Claude:claude-opus-4-7[1m] [Agent]
* test(gallery/importers): add failing tests for rerankers importer
Cover the RerankersImporter contract: interface metadata, preference
override, cross-encoder owner detection, case-insensitive 'reranker'
substring match (BAAI/bge-reranker, Alibaba-NLP/gte-reranker), URI
fallback, and the full-discovery ordering check that a BAAI reranker
repo must route to the rerankers importer rather than transformers.
Assisted-by: Claude:claude-opus-4-7[1m] [Agent]
* feat(gallery/importers): add rerankers importer
Recognise reranker repositories — cross-encoder owner or any repo whose
name contains 'reranker' (case-insensitive). Emits backend: rerankers
with reranking: true and the rerank known_usecase.
Registered ahead of sentencetransformers and transformers so reranker
repos that happen to ship tokenizer.json or modules.json still route
here.
Assisted-by: Claude:claude-opus-4-7[1m] [Agent]
* test(gallery/importers): add failing tests for sentencetransformers importer
Cover the SentenceTransformersImporter contract: interface metadata,
preference override, modules.json marker file, sentence_bert_config.json
marker file, sentence-transformers owner, URI fallback, and the
full-discovery ordering check that ensures a sentence-transformers HF
URI routes here rather than transformers.
Assisted-by: Claude:claude-opus-4-7[1m] [Agent]
* feat(gallery/importers): add sentencetransformers importer
Recognise sentence-transformers embedding repos by modules.json,
sentence_bert_config.json, or the sentence-transformers owner. Emits
backend: sentencetransformers with embeddings: true and the embeddings
known_usecase.
Registered ahead of transformers so ST repos that carry tokenizer.json
still route here.
Assisted-by: Claude:claude-opus-4-7[1m] [Agent]
* test(gallery/importers): add failing tests for rfdetr importer
Cover the RFDetrImporter contract: interface metadata, preference
override, case-insensitive rf-detr and rfdetr substring matches, URI
fallback, and negative cases. Implementation follows in the next
commit.
Assisted-by: Claude:claude-opus-4-7[1m] [Agent]
* feat(gallery/importers): add rfdetr importer
Recognise RF-DETR object-detection repositories by a case-insensitive
'rf-detr' / 'rfdetr' substring in the repo name. Emits backend: rfdetr
with the detection known_usecase.
Registered ahead of transformers so RF-DETR repos with tokenizer
artefacts still route here.
Assisted-by: Claude:claude-opus-4-7[1m] [Agent]
* test(gallery/importers): surface ErrAmbiguousImport on sentence-similarity misses
Add an ambiguity fixture covering the embeddings/rerankers modality.
Qdrant/bm25 carries pipeline_tag=sentence-similarity but ships only
config.json + stopword .txt files — none of the Batch 5 importers
(silero-vad, rerankers, sentencetransformers, rfdetr) or the generic
vllm/transformers/llama-cpp/mlx/diffusers importers match. Because the
modality is in the ambiguous whitelist, DiscoverModelConfig must
surface ErrAmbiguousImport.
Assisted-by: Claude:claude-opus-4-7[1m] [Agent]
* test(localai/backend): red tests for KnownBackend.Installed flag
Extend the /backends/known suite with three failing cases that pin down
the forthcoming Installed field: JSON field presence on every entry,
flipping to true when an importer-registered backend is also present on
disk (and staying false for non-installed pref-only entries), and
surfacing system-only backends with empty modality and AutoDetect=false.
A small writeFakeSystemBackend helper plants a run.sh under the backends
dir so gallery.ListSystemBackends recognises the fixture.
Assisted-by: Claude:claude-opus-4-7[1m] [Agent]
* feat(schema,localai/backend): add Installed flag to KnownBackend
Add an Installed bool to schema.KnownBackend and populate it from the
/backends/known handler so the React import form can warn users that
picking a not-yet-installed backend will trigger an automatic download
on submit.
Computation: after merging the importer registry, additional backends
provider entries and the curated pref-only slice, the handler walks
gallery.ListSystemBackends(systemState) and either flips the existing
map entry's Installed flag to true (preserving modality / autodetect /
description metadata) or inserts a bare {Installed:true} entry for
system-only backends the importer layer doesn't know about.
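A sketch of that overlay step, assuming the handler accumulates entries in a map keyed by backend name (helper and field names approximate):

```go
// known: map[string]schema.KnownBackend, already merged from the
// importer registry, drop-in providers, and the pref-only slice.
for _, name := range systemBackendNames { // from gallery.ListSystemBackends
	if kb, ok := known[name]; ok {
		kb.Installed = true // preserve modality/autodetect/description
		known[name] = kb
	} else {
		// system-only backend the importer layer doesn't know about
		known[name] = schema.KnownBackend{Name: name, Installed: true}
	}
}
```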
Assisted-by: Claude:claude-opus-4-7[1m] [Agent]
* test(localai/import_model): structured ambiguous-import response
Add red tests covering the extended ambiguity shape the React import
form needs:
- ImportModelURIEndpoint must return an HTTP 400 body that exposes the
detected `modality` (normalised to the importer modality key, e.g.
"tts" for pipeline_tag=text-to-speech) and a list of `candidates`
(backend names filtered by modality, excluding text-LLM backends).
- The importers package must surface a typed AmbiguousImportError so
HTTP consumers can read Modality + Candidates without parsing the
error string. errors.Is against the existing sentinel keeps working.
Assisted-by: Claude:claude-opus-4-7[1m] [Agent]
* feat(localai/import_model): structured ambiguity response with modality + candidates
DiscoverModelConfig now returns a typed AmbiguousImportError that
carries the importer modality key, candidate backend names, the
original URI, and the raw HF pipeline_tag. Its Is() preserves
errors.Is(err, ErrAmbiguousImport) for legacy callers.
The importer modality is pre-mapped from the HF pipeline_tag
(automatic-speech-recognition → asr, text-to-speech → tts, etc) via
PipelineTagToModality — surfaced as an exported helper so downstream
consumers can avoid duplicating the table. CandidatesForModality
filters the default importer registry plus AdditionalBackendsProvider
drop-ins by modality, sorts deterministically, and is the single
source of truth used by ImportModelURIEndpoint.
ImportModelURIEndpoint now returns HTTP 400 with
{ error, detail, modality, candidates, hint }
when ambiguity fires, letting the React form render a modality-scoped
picker inline instead of a generic toast.
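A hedged sketch of the typed error (field set per this message; the real definition may differ):

```go
package importers

import "fmt"

// AmbiguousImportError carries the structured payload the HTTP layer
// serialises into the 400 body.
type AmbiguousImportError struct {
	Modality    string   // importer modality key, e.g. "tts"
	Candidates  []string // backend names filtered by modality
	URI         string
	PipelineTag string // raw HF tag, e.g. "text-to-speech"
}

func (e *AmbiguousImportError) Error() string {
	return fmt.Sprintf("ambiguous import: %s (modality %s)", e.URI, e.Modality)
}

// Is preserves errors.Is(err, ErrAmbiguousImport) for legacy callers.
func (e *AmbiguousImportError) Is(target error) bool {
	return target == ErrAmbiguousImport
}
```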
Assisted-by: Claude:claude-opus-4-7[1m] [Agent]
* test(ui/import): manual pick badge + tooltip
Red Playwright coverage for the preference-only → manual pick rename:
- The Backend dropdown renders a "manual pick" badge on every option
whose KnownBackend.auto_detect is false.
- The badge carries a title attribute with hover-tooltip copy that
explains auto-detect won't route to this backend.
- Auto-detectable backends must NOT carry the badge.
- The legacy " (preference-only)" suffix is gone from every label.
Assisted-by: Claude:claude-opus-4-7[1m] [Agent]
* ui(import): replace preference-only suffix with manual pick badge
SearchableSelect option rows now support an optional badge field — a
muted pill rendered to the right of the label with an optional title
attribute for native hover tooltips. Plain text so screen readers read
it alongside the option name.
buildBackendOptions in ImportModel stops appending " (preference-only)"
to the label and instead sets badge="manual pick" plus a descriptive
tooltip on every option whose auto_detect is false. The Backend help
text explains what "manual pick" means so users aren't left wondering
about the badge.
Assisted-by: Claude:claude-opus-4-7[1m] [Agent]
* test(ui/import): inline ambiguity picker
Red Playwright coverage for Batch A2 — when the server returns a 400
ambiguity body, the form must render an inline alert instead of a
toast, expose one clickable chip per candidate backend, and support
both auto-resubmit on pick and silent dismiss.
- Mocks /api/models/import-uri with the structured ambiguity body
(error, detail, modality, candidates, hint).
- On first click of Import, the alert is visible, carries
modality-specific copy, and shows a chip per candidate.
- Clicking a chip clears the alert, sets the Backend dropdown, and
triggers a second POST to /api/models/import-uri.
- Dismissing the alert leaves the Backend dropdown on Auto-detect —
no implicit backend assignment.
Assisted-by: Claude:claude-opus-4-7[1m] [Agent]
* feat(ui/import): inline ambiguity alert with candidate chips
Adds AmbiguityAlert — a soft, info-coloured card rendered above the URI
input when the server returns a structured 400 with { modality,
candidates }. Message is modality-aware (tts/asr/embeddings/image/
reranker/detection get purpose-written copy, everything else falls back
to a generic template). Each candidate is a clickable chip that shows a
download icon when /backends/known marks the backend as not yet
installed, so users aren't surprised by an implicit install.
ImportModel wires the alert to handleSimpleImport's error path:
- api.handleResponse now attaches { status, body } to the thrown Error
so pages can pattern-match on structured responses instead of string
error messages.
- handleSimpleImport detects `status === 400 && body.error === 'ambiguous
import'` and flips into the inline-picker mode instead of toasting.
- Clicking a chip sets prefs.backend and auto-resubmits (passing the
picked backend as an override so setPrefs's asynchrony doesn't leak
a stale value).
- Dismissing clears the alert; changing the URI or the backend also
clears it so a stale alert never sticks around.
Test fixtures mock GET /backends/known + POST /models/import-uri so the
Playwright specs don't depend on real network reachability.
Assisted-by: Claude:claude-opus-4-7[1m] [Agent]
* test(ui/import): auto-install warning
Red Playwright coverage for Batch A3 — when the user picks a backend
whose KnownBackend.installed is false, the form must render a muted
inline note under the Backend dropdown warning that submitting will
download the backend first. Picking an installed backend or leaving
Auto-detect selected must keep the note hidden.
Assisted-by: Claude:claude-opus-4-7[1m] [Agent]
* feat(ui/import): auto-install warning under backend dropdown
When the user picks a backend whose KnownBackend.installed is false,
render a muted inline note under the Backend dropdown's help text
warning that submitting will download the backend first. The note
lives inside the same form-group so it lines up with the existing
hint text; it's hidden when Auto-detect is selected (the selected
backend is unknowable at that point) or when the chosen backend is
already on disk.
Assisted-by: Claude:claude-opus-4-7[1m] [Agent]
* ui(import): drop redundant section header, adjust icons, rename HF shortcut
- Remove the "Import from URI" card-level <h2> — the page title already
says "Import New Model" one row up, so the secondary header was
duplicating information.
- Swap the fa-star on "Common Preferences" for fa-sliders (stars imply
favourites/ratings; this is just a preferences block) and move the
Custom Preferences fa-sliders-h to fa-plus-circle so the two blocks
read as distinct rather than as two sliders.
- Rename the HF shortcut from "Search GGUF on HF" → "Browse models on
HF" and drop the `search=gguf` filter on the linked URL. The import
form now supports ~40 backends; hard-coding GGUF in the copy no
longer matches the form's actual reach.
- Pure polish — no behaviour change, covered by the existing Batch A
Playwright suite.
Assisted-by: Claude:claude-opus-4-7[1m] [Agent]
* test(ui/import): batch B — simple/power switch, options, tabs, dialog
Adds a failing Playwright suite covering the full Batch B surface ahead
of implementation:
- B1: SimplePowerSwitch segmented control renders, toggles, persists to
localStorage across reloads.
- B2: Simple-mode Options disclosure is collapsed by default; expanding
exposes only Backend, Model Name, Description (no quantizations,
mmproj, model type, or custom prefs).
- B3: Power mode has Preferences and YAML tabs with a persistent
selection across reloads; URI/name/description typed in Simple carry
over to Power; YAML tab swaps the primary action to Create.
- B4: Switching Power -> Simple with a custom preference set triggers
the 3-button confirmation dialog (Keep / Discard / Cancel) with the
documented semantics.
Tests fail against master — implementation lands in the following
commits.
Assisted-by: Claude:claude-opus-4-7[1m] [Agent]
* feat(ui/import): add SimplePowerSwitch segmented control
Replaces the previous "Advanced Mode / Simple Mode" toggle button in the
page header with a two-segment control that flips between Simple and
Power. The control reuses the existing .segmented CSS shared with the
Sound page for visual consistency.
Mode state is persisted to localStorage under `import-form-mode` so
reloads land on the same view (default: simple). The boolean alias
`isAdvancedMode` is retained internally to minimise diff — subsequent
commits reshape the Simple and Power surfaces independently.
Closes B1 from the Batch B Playwright suite.
Assisted-by: Claude:claude-opus-4-7[1m] [Agent]
* feat(ui/import): simple mode collapsible options, power tabs, switch dialog
Completes the Batch B surface in a single structural pass so Simple and
Power mode can evolve independently:
Simple mode
- URI input + Ambiguity alert + Import button, plus a collapsible
"Options" disclosure that exposes ONLY Backend, Model Name,
Description. Quantizations / MMProj / Model Type / Diffusers fields
/ Custom Preferences are no longer rendered in Simple mode.
Power mode
- In-page segmented "Preferences · YAML" tab strip. Active tab
persists to localStorage under `import-form-power-tab`.
- Preferences tab = the full existing preferences + custom prefs
panel (no progressive disclosure yet — that's Batch D).
- YAML tab = the existing CodeEditor. Primary button reads "Create"
here, "Import Model" everywhere else.
Switch dialog
- Power -> Simple with non-default prefs (advanced pref keys set,
any custom-pref key non-empty, or YAML edited away from the
template) opens a 3-button dialog: Keep & switch / Discard &
switch / Cancel.
- Keep preserves all state. Discard resets prefs + customPrefs + YAML
to defaults. Cancel leaves the user in Power mode.
Page subtitle reflects the current surface (Simple, Power/Preferences,
Power/YAML). Estimate banner renders everywhere except Power/YAML.
Closes B2/B3/B4 from the Batch B Playwright suite.
Assisted-by: Claude:claude-opus-4-7[1m] [Agent]
* test(ui/import): expand Options disclosure in Batch A tests
Batch B hid the Backend dropdown behind a collapsible Options disclosure
in Simple mode. The Batch A tests that exercise the dropdown directly
(manual-pick badge, ambiguity chip sets the selected backend, auto-
install warning) now click the disclosure toggle before asserting on
dropdown contents. Test intent is unchanged.
Assisted-by: Claude:claude-opus-4-7[1m] [Agent]
* ui(import): strip decorative icons from field labels
The preference panel had 12 Font Awesome icons decorating field labels
(Backend, Model Name, Description, Quantizations, MMProj Quantizations,
Model Type, Pipeline Type, Scheduler Type, Enable Parameters, Embeddings,
CUDA, plus fa-link on Model URI). Every label screamed equally, flattening
the visual hierarchy.
Remove them. Keep icons where they carry meaning: page-level section
headers, URI format guide entries, primary buttons, the Simple-mode
Options disclosure, the ambiguity alert's fa-lightbulb, the auto-install
note's fa-download, and the Estimated-requirements banner's
fa-memory / fa-microchip / fa-download.
No new behaviour, no layout / spacing changes beyond removing the
orphaned icon margin. Playwright suite green.
Assisted-by: Claude:claude-opus-4-7[1m] [Agent]
* test(ui/import): progressive disclosure of preference fields
Cover the Batch D visibility matrix for Power > Preferences: Quantizations,
MMProj Quantizations, and Model Type each render only for the backends that
can consume them, stay visible when the backend is unset, and preserve any
value the user already typed when toggled off and back on. Also pin the
shrunk Description textarea at rows=2.
Assisted-by: Claude:claude-opus-4-7[1m] [Agent]
* feat(ui/import): progressive disclosure + shorter description textarea
Gate Quantizations, MMProj Quantizations, and Model Type in the Power >
Preferences tab so each field only renders for the backends that can
actually consume it. Backend unset keeps everything visible. Hidden
fields' state is preserved (the JSX wrapper is guarded, not the
underlying prefs state) so users flipping backends back and forth don't
lose input.
Also shrink the Description textarea from rows=3 to rows=2 — it's
shared between Simple Options and Power Preferences so the change
applies to both.
Assisted-by: Claude:claude-opus-4-7[1m] [Agent]
* test(ui/import): enter-to-submit in Simple mode
Red test for Batch F3 — pressing Enter in the URI input must POST
/models/import-uri, and Enter in the Description textarea must insert
a newline without submitting the form.
Assisted-by: Claude:claude-opus-4-7[1m] [Agent]
* feat(ui/import): enter-to-submit in Simple mode
Wrap the Simple-mode URI input + ambiguity alert + Options disclosure
in a <form> whose onSubmit calls handleSimpleImport. Pressing Enter in
the URI input (or any Simple-mode text input) now submits the import
without having to move the mouse to the header button. The Description
textarea keeps its native behaviour — Enter inserts a newline.
A hidden submit button is included because the visible Import button
lives outside the form in the page header; some browsers only fire
implicit Enter-submit when the form contains a submit-capable element.
Assisted-by: Claude:claude-opus-4-7[1m] [Agent]
* ui(import,SearchableSelect,components): aria-hidden on decorative icons
Every Font Awesome icon in the import form is decorative — its meaning
is already conveyed by adjacent visible text. Adding aria-hidden="true"
prevents screen readers from announcing the unicode glyph point as
content. Covers ImportModel.jsx (all remaining <i> glyphs) and
SearchableSelect.jsx (the trigger chevron).
AmbiguityAlert and SimplePowerSwitch already set aria-hidden on their
icons when the components landed in Batches A and B — no change needed
there.
Assisted-by: Claude:claude-opus-4-7[1m] [Agent]
* ui(SearchableSelect): responsive dropdown maxHeight + hover focus guard
F2 — replace fixed pixel heights with min(pixel, vh) so the dropdown
and its inner scroll region don't overflow short viewports. Outer
container: 260px -> min(260px, 60vh); inner listbox: 200px ->
min(200px, 50vh). Tall viewports still get the original pixel caps.
F5 — short-circuit onMouseEnter when the hovered row is already the
focused row. Avoids queueing a setFocusIndex call (and a render) for
every mousemove inside the same item — the state would be identical.
Assisted-by: Claude:claude-opus-4-7[1m] [Agent]
* ui(import): aria-label on custom preference rows
The Key / Value inputs and trash button in each Custom Preferences row
previously relied on placeholder text alone. Placeholders are not
accessible names — they vanish on input and screen readers do not
announce them consistently. Add row-indexed aria-labels so assistive
tech can distinguish "Preference key for row 1" from "row 2", and give
the trash button an explicit "Remove this preference" label.
Assisted-by: Claude:claude-opus-4-7[1m] [Agent]
* test(ui/import): modality chip row
Red tests for Batch E — a horizontal modality chip row that filters the
Backend dropdown by modality. Covers visibility in Simple-mode Options
and Power/Preferences (and absence in Power/YAML), filter behaviour,
mismatched-backend clearing with toast, ambiguity-alert auto-selection,
and radiogroup keyboard navigation.
Assisted-by: Claude:claude-opus-4-7[1m] [Agent]
* feat(ui/import): add ModalityChips component + filter integration
Horizontal chip row (Any, Text, Speech, TTS, Image, Embeddings,
Rerankers, Detection, VAD) filters the Backend dropdown options to the
selected modality. Default is Any — no filter, current behaviour.
- New ModalityChips component (radiogroup pattern, roving tabindex,
arrow-key navigation, Home/End).
- buildBackendOptions now accepts an optional modalityFilter so grouped
output is narrowed before rendering.
- Chips render inside Simple-mode Options disclosure and Power >
Preferences tab. Power > YAML stays unaffected.
- Switching the filter drops a mismatched backend selection and
surfaces a toast so the auto-clear is visible.
- Ambiguity alerts auto-activate the matching chip so users see only
relevant backends even if they dismiss the alert.
Tightens the Batch E tests' option-matching to the label <span> so the
"↵" keybind hint on the focused row doesn't break accessible-name
lookups.
Assisted-by: Claude:claude-opus-4-7[1m] [Agent]
* fix(ui/import): rename Power to Advanced + stop URI-formats toggle from submitting form
The "Supported URI Formats" disclosure button inside the Simple-mode form
lacked an explicit type attribute, so it defaulted to type="submit". Every
click triggered the form's onSubmit and surfaced the empty-URI validation
toast ("Please enter a model URI"). Marking it type="button" lets it
behave as a pure toggle.
While here, rename the user-visible "Power" label to "Advanced" in the
mode switch (button text + tooltip) and the Power-mode tab's aria-label,
matching the term users actually expect. The internal mode key stays
'power' so tests, localStorage, and data-testid selectors are untouched.
Assisted-by: Claude:claude-opus-4-7
* fix(system): fall back to cpu when meta backend lacks default capability
Meta backends like vllm and sglang enumerate concrete variants for
nvidia/amd/intel/cpu but omit a default: catch-all entry. On a no-GPU
host the reported capability is "default", so the previous Capability()
returned "default" unconditionally on a miss — IsCompatibleWith then saw
no "default" key and filtered the meta out of AvailableBackends. The
import flow's auto-install step then failed with "no backend found with
name <meta>", contradicting the UI's promise that the backend would be
downloaded on demand.
Try the explicit "default" key first, then fall back to "cpu" before
giving up. vllm now resolves to cpu-vllm on CPU-only Linux without
touching the gallery YAML.
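A sketch of the lookup order, assuming the meta backend's variants are keyed by capability string:

```go
package system

// resolveVariant picks a concrete variant for the host capability.
// Meta backends like vllm enumerate nvidia/amd/intel/cpu but may omit
// a "default" catch-all, so "default" falls through to "cpu".
func resolveVariant(variants map[string]string, hostCap string) (string, bool) {
	for _, key := range []string{hostCap, "default", "cpu"} {
		if v, ok := variants[key]; ok {
			return v, true
		}
	}
	return "", false
}
```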
Assisted-by: Claude:claude-opus-4-7

bbeacf140d
fix: remove unsafe sprintf() in grpc-server.cpp (#9486)
fix: V-001 security vulnerability

Automated security fix generated by Orbis Security AI

6820ec468f
chore(model gallery): 🤖 add 1 new models via gallery agent (#9491)
chore(model gallery): 🤖 add new models via gallery agent

Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>

20baec77ab
feat(face-recognition): add insightface/onnx backend for 1:1 verify, 1:N identify, embedding, detection, analysis (#9480)
* feat(face-recognition): add insightface backend for 1:1 verify, 1:N identify, embedding, detection, analysis
Adds face recognition as a new first-class capability in LocalAI via the
`insightface` Python backend, with a pluggable two-engine design so
non-commercial (insightface model packs) and commercial-safe
(OpenCV Zoo YuNet + SFace) models share the same gRPC/HTTP surface.
New gRPC RPCs (backend/backend.proto):
* FaceVerify(FaceVerifyRequest) returns FaceVerifyResponse
* FaceAnalyze(FaceAnalyzeRequest) returns FaceAnalyzeResponse
Existing Embedding and Detect RPCs are reused (face image in
PredictOptions.Images / DetectOptions.src) for face embedding and
face detection respectively.
New HTTP endpoints under /v1/face/:
* verify — 1:1 image pair same-person decision
* analyze — per-face age + gender (emotion/race reserved)
* register — 1:N enrollment; stores embedding in vector store
* identify — 1:N recognition; detect → embed → StoresFind
* forget — remove a registered face by opaque ID
Service layer (core/services/facerecognition/) introduces a
`Registry` interface with one in-memory `storeRegistry` impl backed
by LocalAI's existing local-store gRPC vector backend. HTTP handlers
depend on the interface, not on StoresSet/StoresFind directly, so a
persistent PostgreSQL/pgvector implementation can be slotted in via a
single constructor change in core/application (TODO marker in the
package doc).
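A hedged sketch of that seam — the interface name is from the message
above, but the method shapes are guesses from the handler descriptions,
not the actual code:

    // Hypothetical shape: handlers hold the interface, so a
    // pgvector-backed implementation can replace the in-memory
    // store-backed one with a single constructor change.
    type Match struct {
        ID    string
        Score float32
    }

    type Registry interface {
        Register(name string, embedding []float32) (id string, err error)
        Identify(embedding []float32, topK int) ([]Match, error)
        Forget(id string) error
    }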
New usecase flag FLAG_FACE_RECOGNITION; insightface is also wired
into FLAG_DETECTION so /v1/detection works for face bounding boxes.
Gallery (backend/index.yaml) ships three entries:
* insightface-buffalo-l — SCRFD-10GF + ArcFace R50 + genderage
(~326MB pre-baked; non-commercial research use only)
* insightface-opencv — YuNet + SFace (~40MB pre-baked; Apache 2.0)
* insightface-buffalo-s — SCRFD-500MF + MBF (runtime download; non-commercial)
Python backend (backend/python/insightface/):
* engines.py — FaceEngine protocol with InsightFaceEngine and
OnnxDirectEngine; resolves model paths relative to the backend
directory so the same gallery config works in docker-scratch and
in the e2e-backends rootfs-extraction harness.
* backend.py — gRPC servicer implementing Health, LoadModel, Status,
Embedding, Detect, FaceVerify, FaceAnalyze.
* install.sh — pre-bakes buffalo_l + OpenCV YuNet/SFace inside the
backend directory so first-run is offline-clean (the final scratch
image only preserves files under /<backend>/).
* test.py — parametrized unit tests over both engines.
Tests:
* Registry unit tests (go test -race ./core/services/facerecognition/...)
— in-memory fake grpc.Backend, table-driven, covers register/
identify/forget/error paths + concurrent access.
* tests/e2e-backends/backend_test.go extended with face caps
(face_detect, face_embed, face_verify, face_analyze); relative
ordering + configurable verifyCeiling per engine.
* Makefile targets: test-extra-backend-insightface-buffalo-l,
-opencv, and the -all aggregate.
* CI: .github/workflows/test-extra.yml gains tests-insightface-grpc,
auto-triggered by changes under backend/python/insightface/.
Docs:
* docs/content/features/face-recognition.md — feature page with
license table, quickstart (defaults to the commercial-safe model),
models matrix, API reference, 1:N workflow, storage caveats.
* Cross-refs in object-detection.md, stores.md, embeddings.md, and
whats-new.md.
* Contributor README at backend/python/insightface/README.md.
Verified end-to-end:
* buffalo_l: 6/6 specs (health, load, face_detect, face_embed,
face_verify, face_analyze).
* opencv: 5/5 specs (same minus face_analyze — SFace has no
demographic head; correctly skipped via BACKEND_TEST_CAPS).
Assisted-by: Claude:claude-opus-4-7
* fix(face-recognition): move engine selection to model gallery, collapse backend entries
The previous commit put engine/model_pack options on backend gallery
entries (`backend/index.yaml`). That was wrong — `GalleryBackend`
(core/gallery/backend_types.go:32) has no `options` field, so the
YAML decoder silently dropped those keys and all three "different
insightface-*" backend entries resolved to the same container image
with no distinguishing configuration.
Correct split:
* `backend/index.yaml` now has ONE `insightface` backend entry
shipping the CPU + CUDA 12 container images. The Python backend
bundles both the non-commercial insightface model packs
(buffalo_l / buffalo_s) and the commercial-safe OpenCV Zoo
weights (YuNet + SFace); the active engine is selected at
LoadModel time via `options: ["engine:..."]`.
* `gallery/index.yaml` gains three model entries —
`insightface-buffalo-l`, `insightface-opencv`,
`insightface-buffalo-s` — each setting the appropriate
`overrides.backend` + `overrides.options` so installing one
actually gives the user the intended engine. This matches how
`rfdetr-base` lives in the model gallery against the `rfdetr`
backend.
The earlier e2e tests passed despite this bug because the Makefile
targets pass `BACKEND_TEST_OPTIONS` directly to LoadModel via gRPC,
bypassing any gallery resolution entirely. No code changes needed.
Assisted-by: Claude:claude-opus-4-7
* feat(face-recognition): cover all supported models in the gallery + drop weight baking
Follows up on the model-gallery split: adds entries for every model
configuration either engine actually supports, and switches weight
delivery from image-baked to LocalAI's standard gallery mechanism.
Gallery now has seven `insightface-*` model entries (gallery/index.yaml):
insightface (family) — non-commercial research use
• buffalo-l (326MB) — SCRFD-10GF + ResNet50 + genderage, default
• buffalo-m (313MB) — SCRFD-2.5GF + ResNet50 + genderage
• buffalo-s (159MB) — SCRFD-500MF + MBF + genderage
• buffalo-sc (16MB) — SCRFD-500MF + MBF, recognition only
(no landmarks, no demographics — analyze
returns empty attributes)
• antelopev2 (407MB) — SCRFD-10GF + ResNet100@Glint360K + genderage
OpenCV Zoo family — Apache 2.0 commercial-safe
• opencv — YuNet + SFace fp32 (~40MB)
• opencv-int8 — YuNet + SFace int8 (~12MB, ~3x smaller, faster on CPU)
Model weights are no longer baked into the backend image. The image
now ships only the Python runtime + libraries (~275MB content size,
~1.18GB disk vs ~1.21GB when weights were baked). Weights flow through
LocalAI's gallery mechanism:
* OpenCV variants list `files:` with ONNX URIs + SHA-256, so
`local-ai models install insightface-opencv` pulls them into the
models directory exactly like any other gallery-managed model.
* insightface packs (upstream distributes .zip archives only, not
individual ONNX files) auto-download on first LoadModel via
FaceAnalysis' built-in machinery, rooted at the LocalAI models
directory so they live alongside everything else — same pattern
`rfdetr` uses with `inference.get_model()`.
Backend changes (backend/python/insightface/):
* backend.py — LoadModel propagates `ModelOptions.ModelPath` (the
LocalAI models directory) to engines via a `_model_dir` hint.
This replaces the earlier ModelFile-dirname approach; ModelPath
is the canonical "models directory" variable set by the Go loader
(pkg/model/initializers.go:144) and is always populated.
* engines.py::_resolve_model_path — picks up `model_dir` and searches
it (plus basename-in-model-dir) before falling back to the dev
script-dir. This is how OnnxDirectEngine finds gallery-downloaded
YuNet/SFace files by filename only.
* engines.py::_flatten_insightface_pack — new helper that works
around an upstream packaging inconsistency: buffalo_l/s/sc zips
expand flat, but buffalo_m and antelopev2 zips wrap their ONNX
files in a redundant `<name>/` directory. insightface's own
loader looks one level too shallow and fails. We call
`ensure_available()` explicitly, flatten if nested, then hand to
FaceAnalysis.
* engines.py::InsightFaceEngine.prepare — root-resolution order now
includes the `_model_dir` hint so packs download into the LocalAI
models directory by default.
* install.sh — no longer pre-downloads any weights. Everything is
gallery-managed now.
* smoke.py (new) — parametrized smoke test that iterates over every
gallery configuration, simulating the LocalAI install flow
(creates a models dir, fetches OpenCV files with checksum
verification, lets insightface auto-download its packs), then
runs detect + embed + verify (+ analyze where supported) through
the in-process BackendServicer.
* test.py — OnnxDirectEngineTest no longer hardcodes `/models/opencv/`
paths; downloads ONNX files to a temp dir at setUpClass time and
passes ModelPath accordingly.
Registry change (core/services/facerecognition/store_registry.go):
* `dim=0` in NewStoreRegistry now means "accept whatever dimension
arrives" — needed because the backend supports 512-d ArcFace/MBF
and 128-d SFace via the same Registry. A non-zero dim still fails
fast with ErrDimensionMismatch.
* core/application plumbs `faceEmbeddingDim = 0`, explaining the
rationale in the comment.
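A minimal sketch of the dim=0 acceptance rule (receiver shape and error
text are guesses; the real guard lives in store_registry.go):

    import "errors"

    var ErrDimensionMismatch = errors.New("embedding dimension mismatch")

    // dim == 0 accepts any arriving dimension, so one registry can
    // serve 512-d ArcFace/MBF and 128-d SFace; a non-zero dim still
    // fails fast on mismatch.
    func (r *storeRegistry) checkDim(embedding []float32) error {
        if r.dim != 0 && len(embedding) != r.dim {
            return ErrDimensionMismatch
        }
        return nil
    }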
Backend gallery description updated to reflect that the image carries
no weights — it's just Python + engines.
Smoke-tested all 7 configurations against the rebuilt image (with the
flatten fix applied), exit 0:
PASS: insightface-buffalo-l faces=6 dim=512 same-dist=0.000
PASS: insightface-buffalo-sc faces=6 dim=512 same-dist=0.000
PASS: insightface-buffalo-s faces=6 dim=512 same-dist=0.000
PASS: insightface-buffalo-m faces=6 dim=512 same-dist=0.000
PASS: insightface-antelopev2 faces=6 dim=512 same-dist=0.000
PASS: insightface-opencv faces=6 dim=128 same-dist=0.000
PASS: insightface-opencv-int8 faces=6 dim=128 same-dist=0.000
7/7 passed
Assisted-by: Claude:claude-opus-4-7
* fix(face-recognition): pre-fetch OpenCV ONNX for e2e target; drop stale pre-baked claim
CI regression from the previous commit: I moved OpenCV Zoo weight
delivery to LocalAI's gallery `files:` mechanism, but the
test-extra-backend-insightface-opencv target was still passing
relative paths `detector_onnx:models/opencv/yunet.onnx` in
BACKEND_TEST_OPTIONS. The e2e suite drives LoadModel directly over
gRPC without going through the gallery, so those relative paths
resolved to nothing and OpenCV's ONNXImporter failed:
LoadModel failed: Failed to load face engine:
OpenCV(4.13.0) ... Can't read ONNX file: models/opencv/yunet.onnx
Fix: add an `insightface-opencv-models` prerequisite target that
fetches the two ONNX files (YuNet + SFace) to a deterministic host
cache at /tmp/localai-insightface-opencv-cache/, verifies SHA-256,
and skips the download on re-runs. The opencv test target depends on
it and passes absolute paths in BACKEND_TEST_OPTIONS, so the backend
finds the files via its normal absolute-path resolution branch.
Also refresh the buffalo_l comment: it no longer says "pre-baked"
(nothing is — the pack auto-downloads from upstream's GitHub release
on first LoadModel, same as in CI).
Locally verified: `make test-extra-backend-insightface-opencv` passes
5/5 specs (health, load, face_detect, face_embed, face_verify).
Assisted-by: Claude:claude-opus-4-7
* feat(face-recognition): add POST /v1/face/embed + correct /v1/embeddings docs
The docs promised that /v1/embeddings returns face vectors when you
send an image data-URI. That was never true: /v1/embeddings is
OpenAI-compatible and text-only by contract — its handler goes
through `core/backend/embeddings.go::ModelEmbedding`, which sets
`predictOptions.Embeddings = s` (a string of TEXT to embed) and never
populates `predictOptions.Images[]`. The Python backend's Embedding
gRPC method does handle Images[] (that's how /v1/face/register reaches
it internally via `backend.FaceEmbed`), but the HTTP embeddings
endpoint wasn't wired to populate it.
Rather than overload /v1/embeddings with image-vs-text detection —
messy, and the endpoint is OpenAI-compatible by design — add a
dedicated /v1/face/embed endpoint that wraps `backend.FaceEmbed`
(already used internally by /v1/face/register and /v1/face/identify).
Matches LocalAI's convention of a dedicated path per non-standard flow
(/v1/rerank, /v1/detection, /v1/face/verify etc.).
Response:

    {
      "embedding": [<dim> floats, L2-normed],
      "dim": int,        // 512 for ArcFace R50 / MBF, 128 for SFace
      "model": "<name>"
    }
Live-tested on the opencv engine: returns a 128-d L2-normalized vector
(sum(x^2) = 1.0000). Sentinel in docs updated to note /v1/embeddings
is text-only and point image users at /v1/face/embed instead.
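The normalization claim is easy to spot-check; a Go equivalent of the
arithmetic (the live test itself was scripted differently):

    // sumSquares returns Σx²; for an L2-normalized embedding this
    // is 1.0 up to floating-point error.
    func sumSquares(embedding []float32) float64 {
        var s float64
        for _, x := range embedding {
            s += float64(x) * float64(x)
        }
        return s
    }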
Assisted-by: Claude:claude-opus-4-7
* fix(http): map malformed image input + gRPC status codes to proper 4xx
Image-input failures on LocalAI's single-image endpoints (/v1/detection,
/v1/face/{verify,analyze,embed,register,identify}) have historically
returned 500 — even when the client was the one who sent garbage.
Classic example: you POST an "image" that isn't a URL, isn't a
data-URI, and isn't a valid JPEG/PNG — the server shouldn't claim
that's its fault.
Two helpers land in core/http/endpoints/localai/images.go and every
single-image handler is switched over:
* decodeImageInput(s)
Wraps utils.GetContentURIAsBase64 and turns any failure
(invalid URL, not a data-URI, download error, etc.) into
echo.NewHTTPError(400, "invalid image input: ...").
* mapBackendError(err)
Inspects the gRPC status on a backend call error and maps:
INVALID_ARGUMENT → 400 Bad Request
NOT_FOUND → 404 Not Found
FAILED_PRECONDITION → 412 Precondition Failed
Unimplemented → 501 Not Implemented
All other codes fall through unchanged (still 500).
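A hedged sketch of mapBackendError, assuming the stock grpc
status/codes packages and Echo's HTTPError (the real helper in
images.go may differ in shape):

    import (
        "github.com/labstack/echo/v4"
        "google.golang.org/grpc/codes"
        "google.golang.org/grpc/status"
    )

    // mapBackendError translates the gRPC status on a backend call
    // error into a client-facing HTTP error; non-status errors and
    // unmapped codes fall through unchanged (still 500).
    func mapBackendError(err error) error {
        st, ok := status.FromError(err)
        if !ok {
            return err
        }
        switch st.Code() {
        case codes.InvalidArgument:
            return echo.NewHTTPError(400, st.Message())
        case codes.NotFound:
            return echo.NewHTTPError(404, st.Message())
        case codes.FailedPrecondition:
            return echo.NewHTTPError(412, st.Message())
        case codes.Unimplemented:
            return echo.NewHTTPError(501, st.Message())
        }
        return err
    }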
Before, my 1×1 PNG error-path test returned:
HTTP 500 "rpc error: code = InvalidArgument desc = failed to decode one or both images"
After:
HTTP 400 "failed to decode one or both images"
Scope-limited to the LocalAI single-image endpoints. The multi-modal
paths (middleware/request.go, openresponses/responses.go,
openai/realtime.go) intentionally log-and-skip individual media parts
when decoding fails — different design intent (graceful degradation
of a multi-part message), not a 400-worthy failure. Left untouched.
Live-verified: every error case in /tmp/face_errors.py now returns
4xx with a meaningful message; the "image with no face (1x1 PNG)"
case specifically went from 500 → 400.
Assisted-by: Claude:claude-opus-4-7
* refactor(face-recognition): insightface packs go through gallery files:, drop FaceAnalysis
Follows up on the discovery that LocalAI's gallery `files:` mechanism
handles archives (zip, tar.gz, …) via mholt/archiver/v3 — the rhasspy
piper voices use exactly this pattern. Insightface packs are zip
archives, so we can now deliver them the same way every other
gallery-managed model gets delivered: declaratively, checksum-verified,
through LocalAI's standard download+extract pipeline.
Two changes:
1. Gallery (gallery/index.yaml) — every insightface-* entry gains a
`files:` list with the pack zip's URI + SHA-256. `local-ai models
install insightface-buffalo-l` now fetches the zip, verifies the
hash, and extracts it into the models directory. No more reliance
on insightface's library-internal `ensure_available()` auto-download
or its hardcoded `BASE_REPO_URL`.
2. InsightFaceEngine (backend/python/insightface/engines.py) — drops
the FaceAnalysis wrapper and drives insightface's `model_zoo`
directly. The ~50 lines FaceAnalysis provides — glob ONNX files,
route each through `model_zoo.get_model()`, build a
`{taskname: model}` dict, loop per-face at inference — are
reimplemented in `InsightFaceEngine`. The actual inference classes
(RetinaFace, ArcFaceONNX, Attribute, Landmark) are still
insightface's — we only replicate the glue, so drift risk against
upstream is minimal.
Why drop FaceAnalysis: it hard-codes a `<root>/models/<name>/*.onnx`
layout that doesn't match what LocalAI's zip extraction produces.
LocalAI unpacks archives flat into `<models_dir>`. Upstream packs
are inconsistent — buffalo_l/s/sc ship ONNX at the zip root (lands
at `<models_dir>/*.onnx`), buffalo_m/antelopev2 wrap in a redundant
`<name>/` dir (lands at `<models_dir>/<name>/*.onnx`). The new
`_locate_insightface_pack` helper searches both locations plus
legacy paths and returns whichever has ONNX files. Replaces the
earlier `_flatten_insightface_pack` helper (which tried to fight
FaceAnalysis's layout expectations; now we just find the files
wherever they are).
Net effect for users: install once via LocalAI's managed flow,
weights live alongside every other model, progress shows in the
jobs endpoint, no first-load network call. Same API surface,
cleaner plumbing.
Assisted-by: Claude:claude-opus-4-7
* fix(face-recognition): CI's insightface e2e path needs the pack pre-fetched
The e2e suite drives LoadModel over gRPC without going through LocalAI's
gallery flow, so the engine's `_model_dir` option (normally populated
from ModelPath) is empty. Previously the insightface target relied on
FaceAnalysis auto-download to paper over this, but we dropped
FaceAnalysis in favor of direct model_zoo calls — so the buffalo_l
target started failing at LoadModel with "no insightface pack found".
Mirror the opencv target's pre-fetch pattern: download buffalo_sc.zip
(same SHA as the gallery entry), extract it on the host, and pass
`root:<dir>` so the engine locates the pack without needing
ModelPath. Switched to buffalo_sc (smallest pack, ~16MB) to keep CI
fast; it covers the same insightface engine code path as buffalo_l.
Face analyze cap dropped since buffalo_sc has no age/gender head.
Assisted-by: Claude:claude-opus-4-7[1m]
* feat(face-recognition): surface face-recognition in advertised feature maps
The six /v1/face/* endpoints were missing from every place LocalAI
advertises its feature surface to clients:
* api_instructions — the machine-readable capability index at
GET /api/instructions. Added `face-recognition` as a dedicated
instruction area with an intro that calls out the in-memory
registry caveat and the /v1/face/embed vs /v1/embeddings split.
* auth/permissions — added FeatureFaceRecognition constant, routed
all six face endpoints through it so admins can gate them per-user
like any other API feature. Default ON (matches the other API
features).
* React UI capabilities — CAP_FACE_RECOGNITION symbol mapped to
FLAG_FACE_RECOGNITION. Declared only for now; the Face page is a
follow-up (noted in the plan).
Instruction count bumped 9 → 10; test updated.
Assisted-by: Claude:claude-opus-4-7[1m]
* docs(agents): capture advertising-surface steps in the endpoint guide
Before this change, adding a new /v1/* endpoint reliably missed one or
more of: the swagger @Tags annotation, the /api/instructions registry,
the auth RouteFeatureRegistry, and the React UI CAP_* symbol. The
endpoint would work but be invisible to API consumers, admins, and the
UI — and nothing in the existing docs said to look in those places.
Extend .agents/api-endpoints-and-auth.md with a new "Advertising
surfaces" section covering all four surfaces (swagger tags, /api/
instructions, capabilities.js, docs/), and expand the closing checklist
so it's impossible to ship a feature without visiting each one. Hoist a
one-liner reminder into AGENTS.md's Quick Reference so agents skim it
before diving in.
Assisted-by: Claude:claude-opus-4-7[1m]
|
||
|
|
d16f19f1eb |
fix(kokoros): Build and publish the backend images from CI/CD (#9487)
* fix(kokoros): Build and publish the backend images from CI/CD
Signed-off-by: Richard Palethorpe <io@richiejp.com>
* Delete .claude/agents
Signed-off-by: Ettore Di Giacinto <mudler@users.noreply.github.com>
* Delete .claude/commands
Signed-off-by: Ettore Di Giacinto <mudler@users.noreply.github.com>
* Delete .claude/settings.json
Signed-off-by: Ettore Di Giacinto <mudler@users.noreply.github.com>
* Delete .claude/skills
Signed-off-by: Ettore Di Giacinto <mudler@users.noreply.github.com>
---------
Signed-off-by: Richard Palethorpe <io@richiejp.com>
Signed-off-by: Ettore Di Giacinto <mudler@users.noreply.github.com>
Co-authored-by: Ettore Di Giacinto <mudler@users.noreply.github.com> |
||
|
|
cd7b035716 |
chore: ⬆️ Update ggml-org/llama.cpp to 5a4cd6741fc33227cdacb329f355ab21f8481de2 (#9479)
⬆️ Update ggml-org/llama.cpp Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com> Co-authored-by: mudler <2420543+mudler@users.noreply.github.com> |
||
|
|
0f3bb2d647 |
chore(model gallery): 🤖 add 1 new model via gallery agent (#9481)
chore(model gallery): 🤖 add new models via gallery agent Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com> Co-authored-by: mudler <2420543+mudler@users.noreply.github.com> |
||
|
|
607efe5a4c |
fix(backend-monitor): accept model as a query parameter (#9411)
The /backend/monitor endpoint is routed as GET but its handler bound the
model name from a request body, which is invalid per REST and breaks
Swagger UI and OpenAPI codegen tools that refuse to send bodies with GET.
Switch to reading ?model=<name> as a query parameter and update the
Swagger annotation, regenerated spec files, and documentation. The
handler still falls back to body binding when the query parameter is
absent, so existing clients sending {"model": "..."} continue to work.
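A minimal sketch of that lookup order, assuming an Echo-style handler
(illustrative, not the handler's actual code):

    import "github.com/labstack/echo/v4"

    // Prefer ?model=<name>; fall back to body binding so existing
    // clients sending {"model": "..."} keep working.
    func modelName(c echo.Context) (string, error) {
        if m := c.QueryParam("model"); m != "" {
            return m, nil
        }
        var body struct {
            Model string `json:"model"`
        }
        if err := c.Bind(&body); err != nil {
            return "", err
        }
        return body.Model, nil
    }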
Fixes #9207
Signed-off-by: Adira Denis Muhando <dennisadira@gmail.com>
|
||
|
|
7d8c1d5e45 |
fix(streaming): dedupe content, recover reasoning, unique tool_call IDs in deferred flush (#9470)
* fix(streaming): dedupe content, recover reasoning, unique tool IDs
When tool calls are discovered only during final parsing (after the
streaming token callback returns), processTools' default switch branch
used to emit the full accumulated content alongside the tool_call args
chunk. Clients that accumulate delta.content per the OpenAI streaming
contract end up showing every narration line twice. Three related bugs
in the same flush path:
1. Content duplication: the args chunk carried Content:textContentToReturn
even though the text had already been streamed token-by-token via
the token callback, so delta.content was both the running total and
bundled with tool_calls in one delta (two spec violations).
2. Reasoning drop: when the C++ autoparser surfaces reasoning only as
a final aggregate (no incremental tokens), the callback never emits
it and the flush branch didn't either, silently losing it.
3. tool_call ID collision: empty ss.ID fell back to the request id, so
multiple empty-ID calls in the same turn all shared the same id,
breaking tool_result matching by tool_call_id.
Extracted the block into buildDeferredToolCallChunks (pure function,
unit-testable) and added 19 Ginkgo specs covering streamed vs.
not-streamed content/reasoning, single vs. multi call, and
incremental-vs-deferred emission. Every case asserts the invariant
that no delta carries both non-empty Content/Reasoning and non-empty
ToolCalls.
Fix summary:
- emit reasoning in its own leading chunk when !reasoningAlreadyStreamed
- emit role+content in their own chunks when !contentAlreadyStreamed
- drop Content from the tool_call args chunk
- fallback to fmt.Sprintf("%s-%d", id, i) for empty ss.ID so calls stay
uniquely addressable
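The ID fallback in isolation, as a sketch (surrounding loop elided):

    import "fmt"

    // Empty backend-supplied IDs get a per-index suffix so each
    // call stays uniquely addressable by tool_call_id, instead of
    // every empty-ID call collapsing to the bare request id.
    func callIDFor(ssID, requestID string, i int) string {
        if ssID != "" {
            return ssID
        }
        return fmt.Sprintf("%s-%d", requestID, i)
    }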
Reproduced live against qwen3.6-35b-a3b-apex served by LocalAI with
the C++ autoparser; the full-content replay chunk that preceded each
tool_calls block is gone after the fix.
Assisted-by: Claude:claude-opus-4-7 go vet
* fix(streaming): dedupe reasoning in the noActionToRun final chunk
extractor.Reasoning() returns only the Go-side extractor's lastReasoning
accumulator (pkg/reasoning/extractor.go:129). ChatDelta reasoning
coming through ProcessChatDeltaReasoning lives in a separate
accumulator (cdLastStrippedReasoning) that Reasoning() does not
expose. The "reasoning != \"\" && extractor.Reasoning() == \"\"" guard
therefore fires exactly when the autoparser streamed reasoning
incrementally via the callback — producing a duplicate final delivery.
Replace both guard sites in the noActionToRun branch with the
sentReasoning flag introduced in the previous commit. Extract the
closing-chunk logic into buildNoActionFinalChunks so the refactor is
testable; the helper mirrors buildDeferredToolCallChunks.
Add Ginkgo coverage for both the content-streamed and
content-not-streamed paths: reasoning is dropped when it was streamed,
delivered once when it arrived only as a final aggregate, and omitted
when empty. Metadata invariants carried over from the sibling helper.
Assisted-by: Claude:claude-opus-4-7 go vet
* fix(streaming): detect noActionToRun anywhere in functionResults
The previous condition only looked at functionResults[0].Name, which
misbehaved when a real tool call followed a noAction sentinel — the
noAction shadowed the real call and the whole turn was treated as a
question to answer, silently dropping the tool call. The mirror case,
[realCall, noActionCall], fell into the default branch and emitted the
noAction entry as if it were a real tool_call.
Replace with hasRealCall, which scans the slice and returns true as
soon as it finds a non-noAction entry. noActionToRun now matches the
semantic intent: "every entry is the noAction sentinel (or the slice
is empty)".
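A sketch of the predicate, with illustrative type names:

    // hasRealCall scans every parsed entry; noActionToRun is then
    // "no real call anywhere", which also covers the empty slice.
    func hasRealCall(results []funcCallResult, noActionName string) bool {
        for _, r := range results {
            if r.Name != noActionName {
                return true
            }
        }
        return false
    }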
Note: this does not change incremental emission, where noAction
entries may still be forwarded as tool_call chunks by the XML/JSON
iterative parsers. That is a separate layer (functions.Parse*) and
addressing it requires threading noAction through the parser APIs —
out of scope for this change.
Assisted-by: Claude:claude-opus-4-7 go vet
|
||
|
|
d18d434bb2 |
Respect explicit reasoning config during GGUF thinking probe (#9463)
Signed-off-by: leinasi2014 <leinasi2014@gmail.com> Co-authored-by: Ettore Di Giacinto <mudler@users.noreply.github.com> |
||
|
|
39573ecd2a |
chore(whisperx): drop ROCm/hipblas build target (#9474)
whisperx has no upstream AMD GPU support and its core transcription path (faster-whisper -> ctranslate2) falls back to CPU on AMD since the PyPI ctranslate2 is CUDA-only. The torch rocm wheels would accelerate only the alignment/diarization stages, producing a misleadingly half-working image. Drop the hipblas variant rather than shipping a partially accelerated build users can't distinguish from the real thing. AMD hosts now fall through the capability map to cpu-whisperx / cpu-whisperx-development. Also removes the now-dangling rocm-whisperx assertion from pkg/system/capabilities_test.go and the ROCm mention from the whisperx row in docs/content/reference/compatibility-table.md. Assisted-by: Claude Code:claude-opus-4-7 |
||
|
|
a7dbb2a83d |
fix(gallery-agent): process blacklist command on recently-closed PRs (#9473)
The command-processing step only walked open PRs, so when a maintainer wrote `/gallery-agent blacklist` and immediately closed the PR, the next scheduled run missed the command, the `gallery-agent/blacklisted` label was never applied, and the skip-URL step (which only pulls URLs from closed PRs carrying that label) re-proposed the model on the next cron. Also scan closed gallery-agent PRs from the last 14 days that don't already carry the blacklist label, and apply the label retroactively when the command is present. Close/recreate actions still only run on open PRs. Assisted-by: Claude:claude-opus-4-7 Signed-off-by: Ettore Di Giacinto <mudler@localai.io> |
||
|
|
3ad9b16c29 |
chore(deps): bump github.com/coreos/go-oidc/v3 from 3.17.0 to 3.18.0 (#9455)
Bumps [github.com/coreos/go-oidc/v3](https://github.com/coreos/go-oidc) from 3.17.0 to 3.18.0. - [Release notes](https://github.com/coreos/go-oidc/releases) - [Commits](https://github.com/coreos/go-oidc/compare/v3.17.0...v3.18.0) --- updated-dependencies: - dependency-name: github.com/coreos/go-oidc/v3 dependency-version: 3.18.0 dependency-type: direct:production update-type: version-update:semver-minor ... Signed-off-by: dependabot[bot] <support@github.com> Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> |
||
|
|
c806d5ab73 |
chore(deps): bump github.com/aws/aws-sdk-go-v2/config from 1.32.14 to 1.32.16 (#9456)
chore(deps): bump github.com/aws/aws-sdk-go-v2/config Bumps [github.com/aws/aws-sdk-go-v2/config](https://github.com/aws/aws-sdk-go-v2) from 1.32.14 to 1.32.16. - [Release notes](https://github.com/aws/aws-sdk-go-v2/releases) - [Commits](https://github.com/aws/aws-sdk-go-v2/compare/config/v1.32.14...config/v1.32.16) --- updated-dependencies: - dependency-name: github.com/aws/aws-sdk-go-v2/config dependency-version: 1.32.16 dependency-type: direct:production update-type: version-update:semver-patch ... Signed-off-by: dependabot[bot] <support@github.com> Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> |
||
|
|
47efaf5b43 |
Fix: Add model parameter to neutts-air gallery definition (#8793)
fix: Add model parameter to neutts-air gallery definition The neutts-air model entry was missing the 'model' parameter in its configuration, which caused LocalAI to fail with an 'Unrecognized model' error when trying to use it. This change adds the required model parameter pointing to the HuggingFace repository (neuphonic/neutts-air) so the backend can properly load the model. Fixes #8792 Signed-off-by: localai-bot <localai-bot@example.com> Co-authored-by: localai-bot <localai-bot@example.com> |
||
|
|
315b634a91 |
feat: improve CLI error messages with actionable guidance (#8880)
- transcript.go: Model not found error now suggests available models commands
- util.go: GGUF error explains format and how to get models
- worker_p2p.go: Token error explains purpose and how to obtain one
- run.go: Startup failure includes troubleshooting steps and docs link
- model_config_loader.go: Config validation errors include file path and guidance
Refs: H2 - UX Review Issue
Signed-off-by: localai-bot <localai-bot@noreply.github.com>
Co-authored-by: localai-bot <localai-bot@noreply.github.com>
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com> |
||
|
|
6b245299d7 |
chore(deps): bump github.com/modelcontextprotocol/go-sdk from 1.4.1 to 1.5.0 (#9454)
chore(deps): bump github.com/modelcontextprotocol/go-sdk Bumps [github.com/modelcontextprotocol/go-sdk](https://github.com/modelcontextprotocol/go-sdk) from 1.4.1 to 1.5.0. - [Release notes](https://github.com/modelcontextprotocol/go-sdk/releases) - [Commits](https://github.com/modelcontextprotocol/go-sdk/compare/v1.4.1...v1.5.0) --- updated-dependencies: - dependency-name: github.com/modelcontextprotocol/go-sdk dependency-version: 1.5.0 dependency-type: direct:production update-type: version-update:semver-minor ... Signed-off-by: dependabot[bot] <support@github.com> Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> |
||
|
|
677c0315c1 |
chore(deps): bump github.com/containerd/containerd from 1.7.30 to 1.7.31 (#9453)
Bumps [github.com/containerd/containerd](https://github.com/containerd/containerd) from 1.7.30 to 1.7.31. - [Release notes](https://github.com/containerd/containerd/releases) - [Changelog](https://github.com/containerd/containerd/blob/main/RELEASES.md) - [Commits](https://github.com/containerd/containerd/compare/v1.7.30...v1.7.31) --- updated-dependencies: - dependency-name: github.com/containerd/containerd dependency-version: 1.7.31 dependency-type: direct:production update-type: version-update:semver-patch ... Signed-off-by: dependabot[bot] <support@github.com> Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> |
||
|
|
478522ce4d |
chore(deps): bump github.com/aws/aws-sdk-go-v2/service/s3 from 1.97.1 to 1.99.1 (#9452)
chore(deps): bump github.com/aws/aws-sdk-go-v2/service/s3 Bumps [github.com/aws/aws-sdk-go-v2/service/s3](https://github.com/aws/aws-sdk-go-v2) from 1.97.1 to 1.99.1. - [Release notes](https://github.com/aws/aws-sdk-go-v2/releases) - [Commits](https://github.com/aws/aws-sdk-go-v2/compare/service/s3/v1.97.1...service/s3/v1.99.1) --- updated-dependencies: - dependency-name: github.com/aws/aws-sdk-go-v2/service/s3 dependency-version: 1.99.1 dependency-type: direct:production update-type: version-update:semver-minor ... Signed-off-by: dependabot[bot] <support@github.com> Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> |
||
|
|
c54897ad44 |
fix(tests): update InstallBackend call sites for new URI/Name/Alias params (#9467)
Commit
|
||
|
|
8bb1e8f21f |
chore: ⬆️ Update ggml-org/llama.cpp to cf8b0dbda9ac0eac30ee33f87bc6702ead1c4664 (#9448)
⬆️ Update ggml-org/llama.cpp Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com> Co-authored-by: mudler <2420543+mudler@users.noreply.github.com> |
||
|
|
cd94a0b61a |
chore: ⬆️ Update ggml-org/whisper.cpp to fc674574ca27cac59a15e5b22a09b9d9ad62aafe (#9450)
⬆️ Update ggml-org/whisper.cpp Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com> Co-authored-by: mudler <2420543+mudler@users.noreply.github.com> |
||
|
|
047bc48fa9 |
chore(model gallery): 🤖 add 1 new model via gallery agent (#9464)
chore(model gallery): 🤖 add new models via gallery agent Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com> Co-authored-by: mudler <2420543+mudler@users.noreply.github.com> |
||
|
|
01bd8ae5d0 |
[gallery] Fix duplicate sha256 keys in Wan models (#9461)
Fix duplicate sha256 keys in wan models gallery The wan models previously defined the `sha256` key twice in their files lists, which triggered strict mapping key checks in the YAML parser and resulted in unmarshal errors that crashed the `/api/models` loading. This removes the redundant trailing `sha256` keys from the Wan model definitions. Assisted-by: Antigravity:Gemini-3.1-Pro-High [multi_replace_file_content, run_command] Signed-off-by: Alex <codecrusher24@gmail.com> |
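The crash mode is reproducible in a few lines with gopkg.in/yaml.v3,
which rejects duplicate mapping keys outright:

    package main

    import (
        "fmt"

        "gopkg.in/yaml.v3"
    )

    func main() {
        // A second sha256 under the same mapping is a hard error in
        // yaml.v3, so one bad entry can break loading the whole index.
        doc := []byte("uri: model.gguf\nsha256: aaa\nsha256: bbb\n")
        var m map[string]string
        fmt.Println(yaml.Unmarshal(doc, &m))
        // => yaml: unmarshal errors:
        //      line 3: mapping key "sha256" already defined at line 2
    }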
||
|
|
d9808769be |
chore(model-gallery): ⬆️ update checksum (#9451)
⬆️ Checksum updates in gallery/index.yaml Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com> Co-authored-by: mudler <2420543+mudler@users.noreply.github.com> |
||
|
|
5973c0a9df |
chore: ⬆️ Update ikawrakow/ik_llama.cpp to d4824131580b94ffa7b0e91c955e2b237c2fe16e (#9447)
⬆️ Update ikawrakow/ik_llama.cpp Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com> Co-authored-by: mudler <2420543+mudler@users.noreply.github.com> |
||
|
|
486b5e25a3 |
fix(config): ignore yaml backup files in model loader (#9443)
Only load files whose real extension is .yaml or .yml so backup files like model.yaml.bak do not override active configs. Add a regression test covering plain and timestamped backup files. Assisted-by: Codex:gpt-5.4 docker Signed-off-by: leinasi2014 <leinasi2014@gmail.com> |
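A minimal sketch of the extension guard, assuming a plain filepath.Ext
check (the actual loader and its regression test may differ):

    import "path/filepath"

    // isModelConfig keeps only files whose final extension is .yaml
    // or .yml; model.yaml.bak (ext ".bak") and timestamped backups
    // (ext ".20240101" or similar) are skipped.
    func isModelConfig(name string) bool {
        switch filepath.Ext(name) {
        case ".yaml", ".yml":
            return true
        }
        return false
    }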
||
|
|
c66c41e8d7 |
fix(ci): wire AMDGPU_TARGETS through backend build workflow (#9445)
Commit
|
||
|
|
02bb715c0a |
fix(distributed): pass ExternalURI through NATS backend install (#9446)
When installing a backend with a custom OCI URI in distributed mode, the URI was captured in ManagementOp.ExternalURI by the HTTP handler but never forwarded to workers. BackendInstallRequest had no URI field, so workers fell through to the gallery lookup and failed with "no backend found with name <custom-name>". Add URI/Name/Alias fields to BackendInstallRequest and thread them from ManagementOp through DistributedBackendManager.InstallBackend() and the RemoteUnloaderAdapter. On the worker side, route to InstallExternalBackend when URI is set instead of InstallBackendFromGallery. Update all remaining InstallBackend call sites (UpgradeBackend, reconciler pending-op drain, router auto-install) to pass empty strings for the new params. Assisted-by: Claude Code:claude-sonnet-4-6 Signed-off-by: Russell Sim <rsl@simopolis.xyz> |
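A hedged sketch of the worker-side routing; the field and function names come from the message above, but the signatures are guesses:

    // BackendInstallRequest now carries the custom OCI URI so
    // workers can bypass the gallery lookup entirely.
    type BackendInstallRequest struct {
        Name  string
        URI   string
        Alias string
    }

    func installOnWorker(req BackendInstallRequest) error {
        if req.URI != "" {
            // custom OCI image: install straight from the URI
            return InstallExternalBackend(req.URI, req.Name, req.Alias)
        }
        // no URI: resolve by name through the gallery as before
        return InstallBackendFromGallery(req.Name)
    }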
||
|
|
8ab56e2ad3 |
feat(gallery): add wan i2v 720p (#9457)
feat(gallery): add Wan 2.1 I2V 14B 720P + pin all wan ggufs by sha256 Adds a new entry for the native-720p image-to-video sibling of the 480p I2V model (wan-2.1-i2v-14b-480p-ggml). The 720p I2V model is trained purely as image-to-video — no first-last-frame interpolation path — so motion is freer than repurposing the FLF2V 720P variant as an i2v. Shares the same VAE, umt5_xxl text encoder, and clip_vision_h auxiliary files as the existing 480p I2V and 720p FLF2V entries, so no new aux downloads are introduced. Also pins the main diffusion gguf by sha256 for the new entry and for the three existing wan entries that were previously missing a hash (wan-2.1-t2v-1.3b-ggml, wan-2.1-i2v-14b-480p-ggml, wan-2.1-flf2v-14b-720p-ggml). Hashes were fetched from HuggingFace's x-linked-etag header per .agents/adding-gallery-models.md. Assisted-by: Claude:claude-opus-4-7 |
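A sketch of that header lookup, assuming the checksum is visible on the first response hop (HuggingFace resolve URLs redirect to a CDN, so redirect-following is disabled here; the URL is illustrative):

    package main

    import (
        "fmt"
        "net/http"
    )

    func main() {
        // Inspect the first hop rather than the CDN's final response.
        client := &http.Client{
            CheckRedirect: func(*http.Request, []*http.Request) error {
                return http.ErrUseLastResponse
            },
        }
        resp, err := client.Head("https://huggingface.co/org/repo/resolve/main/model.gguf")
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()
        fmt.Println(resp.Header.Get("x-linked-etag"))
    }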
||
|
|
ecf85fde9e |
fix(api): remove duplicate /api/traces endpoint that broke React UI (#9427)
The API Traces tab in /app/traces always showed (0) traces despite requests
being recorded.
The /api/traces endpoint was registered in both localai.go and ui_api.go.
The ui_api.go version wrapped the response as {"traces": [...]} instead of
the flat []APIExchange array that both the React UI (Traces.jsx) and the
legacy Alpine.js UI (traces.html) expect. Because Echo matched the ui_api.go
handler, Array.isArray(apiData) always returned false, making the API Traces
tab permanently empty.
Remove the duplicate endpoints from ui_api.go so only the correct flat-array
version in localai.go is served.
Also use mime.ParseMediaType for the Content-Type check in the trace
middleware so requests with parameters (e.g. application/json; charset=utf-8)
are still traced.
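The Content-Type fix in miniature; mime.ParseMediaType is the stdlib
call that strips parameters before comparison:

    import "mime"

    // "application/json; charset=utf-8" must still count as JSON, so
    // parse the media type instead of string-comparing the header.
    func isJSONContentType(contentType string) bool {
        mediaType, _, err := mime.ParseMediaType(contentType)
        return err == nil && mediaType == "application/json"
    }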
Signed-off-by: Pawel Brzozowski <paul@ontux.net>
Co-authored-by: Pawel Brzozowski <paul@ontux.net>
|
||
|
|
6480715a16 |
fix(settings): strip env-supplied ApiKeys from the request before persisting (#9438)
GET /api/settings returns settings.ApiKeys as the merged env+runtime list
via ApplicationConfig.ToRuntimeSettings(). The WebUI displays that list and
round-trips it back on POST /api/settings unchanged.
UpdateSettingsEndpoint was then doing:
appConfig.ApiKeys = append(envKeys, runtimeKeys...)
where runtimeKeys already contained envKeys (because the UI got them from
the merged GET). Every save therefore duplicated the env keys on top of
the previous merge, and also wrote the duplicates to runtime_settings.json
so the duplication survived restarts and compounded with each save. This
is the user-visible behaviour in #9071: the Web UI shows the keys
twice / three times after consecutive saves.
Before we marshal the settings to disk or call ApplyRuntimeSettings, drop
any incoming key that already appears in startupConfig.ApiKeys. The file
on disk now stores only the genuinely runtime-added keys; the subsequent
append(envKeys, runtimeKeys...) produces one copy of each env key, as
intended. Behaviour is unchanged for users who never had env keys set.
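A minimal sketch of the pre-persist filter, assuming the env-supplied
keys are available as a plain slice:

    // stripEnvKeys drops any incoming key already supplied via the
    // environment at startup, so runtime_settings.json keeps only
    // genuinely runtime-added keys and saves stop compounding.
    func stripEnvKeys(incoming, envKeys []string) []string {
        fromEnv := make(map[string]struct{}, len(envKeys))
        for _, k := range envKeys {
            fromEnv[k] = struct{}{}
        }
        var kept []string
        for _, k := range incoming {
            if _, ok := fromEnv[k]; !ok {
                kept = append(kept, k)
            }
        }
        return kept
    }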
Fixes #9071
Co-authored-by: SAY-5 <SAY-5@users.noreply.github.com>
|
||
|
|
f683231811 |
feat(gallery): add Wan 2.1 FLF2V 14B 720P (#9440)
First-last-frame-to-video variant of the 14B Wan family. Accepts a start and end reference image and — unlike the pure i2v path — runs both through clip_vision, so the final frame lands on the end image both in pixel and semantic space. Right pick for seamless loops (start_image == end_image) and narrative A→B cuts. Shares the same VAE, umt5_xxl text encoder, and clip_vision_h as the I2V 14B entry. Options block mirrors i2v's full-list-in-override style so the template merge doesn't drop fields. Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com> |
||
|
|
960757f0e8 |
chore(model gallery): 🤖 add 1 new model via gallery agent (#9436)
chore(model gallery): 🤖 add new models via gallery agent Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com> Co-authored-by: mudler <2420543+mudler@users.noreply.github.com> |
||
|
|
865fd552f5 |
docs(agents): adopt kernel's AI coding assistants policy
Align LocalAI with the Linux kernel project's policy for AI-assisted contributions (https://docs.kernel.org/process/coding-assistants.html).
- Add .agents/ai-coding-assistants.md with the full policy adapted to LocalAI's MIT license: no Signed-off-by or Co-Authored-By from AI, attribute AI involvement via an Assisted-by: trailer, human submitter owns the contribution.
- Surface the rules at the entry points: AGENTS.md (and its CLAUDE.md symlink) and CONTRIBUTING.md.
- Publish a user-facing reference page at docs/content/reference/ai-coding-assistants.md and link it from the references index.
Assisted-by: Claude:claude-opus-4-7 |
||
|
|
cb77a5a4b9 |
chore(model gallery): 🤖 add 1 new model via gallery agent (#9425)
chore(model gallery): 🤖 add new models via gallery agent Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com> Co-authored-by: mudler <2420543+mudler@users.noreply.github.com> |
||
|
|
60633c4dd5 |
fix(stable-diffusion.ggml): force mp4 container in ffmpeg mux (#9435)
gen_video's ffmpeg subprocess was relying on the filename extension to choose the output container. Distributed LocalAI hands the backend a staging path (e.g. /staging/localai-output-NNN.tmp) that is renamed to .mp4 only after the backend returns, so ffmpeg saw a .tmp extension and bailed with "Unable to choose an output format". Inference had already completed and the frames were piped in, producing the cryptic "video inference failed (code 1)" at the API layer. Pass -f mp4 explicitly so the container is selected by flag instead of by filename suffix. Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com> |
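The shape of the fix as a Go sketch of the subprocess invocation (the real backend is C++, and the input arguments are placeholders):

    package main

    import "os/exec"

    func main() {
        // The staging path has no .mp4 suffix, so the container must
        // be pinned by flag rather than guessed from the filename.
        out := "/staging/localai-output-123.tmp"
        cmd := exec.Command("ffmpeg",
            "-y", "-i", "frames.raw", // stand-in for the piped frames
            "-f", "mp4", // select the container explicitly
            out,
        )
        _ = cmd.Run()
    }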
||
|
|
9e44944cc1 |
fix(i2v): Add new options to the model configuration
Signed-off-by: Ettore Di Giacinto <mudler@users.noreply.github.com> |