Mirror of https://github.com/mudler/LocalAI.git (synced 2026-04-30 03:55:58 -04:00)
HEAD: 24505e57f59fb05da1c4e666e05724928ca48cf0 (95 commits)
commit f5eb13d3c2
feat(insightface): add antispoofing (liveness) detection (#9515)
* feat(insightface): add antispoofing (liveness) detection
Light up the anti_spoofing flag that was parked during the first pass.
Both FaceVerify and FaceAnalyze now run the Silent-Face MiniFASNetV2 +
MiniFASNetV1SE ensemble (~4 MB, Apache 2.0, CPU <10ms) when the flag is
set. Failed liveness on either image vetoes FaceVerify regardless of
embedding similarity. Every insightface* gallery entry now ships the
MiniFASNet ONNX weights so existing packs light up after reinstall.
Setting the flag against a model without the MiniFASNet files returns
FAILED_PRECONDITION (HTTP 412) with a clear install message — no
silent is_real=false.
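A hedged sketch of how a handler might surface that gRPC status as HTTP 412 (LocalAI's HTTP layer is built on Fiber; the function name and response shape below are assumptions, not the actual handler):

```go
package facehttp

import (
	"net/http"

	"github.com/gofiber/fiber/v2"
	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// surfacePreconditionError is a hypothetical error branch: a backend
// FAILED_PRECONDITION becomes a 412 instead of a generic 500, so the
// install hint in the status message reaches the client.
func surfacePreconditionError(c *fiber.Ctx, err error) error {
	if st, ok := status.FromError(err); ok && st.Code() == codes.FailedPrecondition {
		return c.Status(http.StatusPreconditionFailed).JSON(fiber.Map{
			"error": st.Message(), // e.g. "install MiniFASNet weights ..."
		})
	}
	return c.Status(http.StatusInternalServerError).JSON(fiber.Map{"error": err.Error()})
}
```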
FaceVerifyResponse gained per-image img{1,2}_is_real and
img{1,2}_antispoof_score (proto 9-12); FaceAnalysis's existing
is_real/antispoof_score fields are now populated. Schema fields are
pointers so they are fully absent from the JSON response when
anti_spoofing was not requested — avoids collapsing "not checked" with
"checked and fake" under Go's omitempty on bool.
Validated end-to-end over HTTP against a local install:
- verify + anti_spoofing, both real -> verified=true, score ~0.76
- verify + anti_spoofing, img2 spoof -> verified=false, img2_is_real=false
- analyze + anti_spoofing -> is_real and score per face
- flag against model without MiniFASNet -> HTTP 412 fail-loud
Assisted-by: Claude:claude-opus-4-7
go vet
* test(insightface): wire test target into test-extra
The root Makefile's `test-extra` already runs
`$(MAKE) -C backend/python/insightface test`, but the backend's
Makefile never defined the target — so the command silently errored
and the suite was never executed in CI. Adding the two-line target
(matching ace-step/Makefile) hooks `test.sh` → `runUnittests` →
`python -m unittest test.py`, which discovers both the pre-existing
engine classes (InsightFaceEngineTest, OnnxDirectEngineTest) and the
new AntispoofingTest. Each class skips gracefully when its weights
can't be downloaded from a network-restricted runner.
Assisted-by: Claude:claude-opus-4-7
* test(insightface): exercise antispoofing in e2e-backends (both paths)
Add a `face_antispoof` capability to the Ginkgo e2e suite and extend
the existing FaceVerify + FaceAnalyze specs with liveness assertions
covering BOTH paths:
real fixture -> is_real=true, score>0, verified stays true
spoof fixture -> is_real=false, verified vetoed to false
The spoof fixture is upstream's own `image_F2.jpg` (via the yakhyo
mirror) — verified locally against the MiniFASNetV2+V1SE ensemble to
classify as is_real=false with score ~0.013. That makes the assertion
deterministic across CI runs; synthetic/derived spoofs fool the model
unpredictably and would be flaky.
Makefile wires it up end-to-end:
- New INSIGHTFACE_ANTISPOOF_* cache dir + two ONNX downloads with
pinned SHAs, matching the gallery entries.
- insightface-antispoof-models target shared by both backend configs.
- FACE_SPOOF_IMAGE_URL passed via BACKEND_TEST_FACE_SPOOF_IMAGE_URL.
- Both e2e targets (buffalo-sc + opencv) now:
* depend on insightface-antispoof-models
* pass antispoof_v2_onnx / antispoof_v1se_onnx in BACKEND_TEST_OPTIONS
* include face_antispoof in BACKEND_TEST_CAPS
backend_test.go adds the new capability constant and a faceSpoofFile
fixture resolved the same way as faceFile1/2/3. Spoof assertions are
gated on both capFaceAntispoof AND faceSpoofFile being set, so a test
config that omits the spoof fixture degrades gracefully to "real path
only" instead of failing.
Assisted-by: Claude:claude-opus-4-7
go vet
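A condensed sketch of the double gate described above (the capability name and fixture variable come from the commit; the helper shape is assumed):

```go
package e2e

const capFaceAntispoof = "face_antispoof"

// spoofPathEnabled: both conditions must hold, so a test config that
// sets the capability but omits the spoof fixture degrades to the
// real-image assertions instead of failing.
func spoofPathEnabled(caps map[string]bool, faceSpoofFile string) bool {
	return caps[capFaceAntispoof] && faceSpoofFile != ""
}
```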
commit 181ebb6df4
feat: voice recognition (#9500)
* feat(voice-recognition): add /v1/voice/{verify,analyze,embed} + speaker-recognition backend
Audio analog to face recognition. Adds three gRPC RPCs
(VoiceVerify / VoiceAnalyze / VoiceEmbed), their Go service and HTTP
layers, a new FLAG_SPEAKER_RECOGNITION capability flag, and a Python
backend scaffold under backend/python/speaker-recognition/ wrapping
SpeechBrain ECAPA-TDNN with a parallel OnnxDirectEngine for
WeSpeaker / 3D-Speaker ONNX exports.
The kokoros Rust backend gets matching unimplemented trait stubs —
tonic's async_trait has no defaults, so adding an RPC without Rust
stubs breaks the build (same regression fixed by
commit f0c92610a1
feat(importer): expand importer flow to almost all backends (#9466)
* docs(agents): require importer integration when adding backends
Document the importer registry workflow so contributors know that adding
a new backend also requires updating the /import-model dropdown source:
either a new importer in core/gallery/importers/, extending an existing
one for drop-in replacements, or the pref-only slice for backends with
no reliable auto-detect signal. Always covered by a table-driven test.
Assisted-by: Claude:claude-opus-4-7[1m] [Agent]
* test(gallery/importers): add failing tests for Batch 0 primitives
Introduce failing tests that drive Batch 0 of the importer expansion:
- pkg/huggingface-api: assert GetModelDetails populates PipelineTag and
LibraryName from /api/models/{repo}, and that a failing metadata
endpoint still returns file details (best-effort fetch).
- core/gallery/importers/helpers_test.go: new table-driven coverage for
HasFile, HasExtension, HasONNX, HasONNXConfigPair, HasGGMLFile.
- core/gallery/importers/importers_test.go: assert ErrAmbiguousImport
sentinel exists and round-trips through errors.Is.
- core/gallery/importers/local_test.go: extend with detection cases for
ggml-*.bin (whisper), silero_vad.onnx (silero-vad), and the piper
.onnx + .onnx.json pair.
- core/http/endpoints/localai/import_model_test.go: assert
ImportModelURIEndpoint returns HTTP 400 with a structured
{error, detail, hint} body when ErrAmbiguousImport surfaces.
All tests fail in the expected places (missing fields, missing
helpers, missing sentinel, endpoint still wraps as 500).
Assisted-by: Claude:claude-opus-4-7[1m] [Agent]
* feat(gallery/importers): Batch 0 foundation — helpers, sentinel, local detection
Implements the Batch 0 primitives that subsequent importer batches build on:
- pkg/huggingface-api: ModelDetails gains PipelineTag and LibraryName.
GetModelDetails now layers a best-effort GET /api/models/{repo} fetch
on top of ListFiles — a metadata outage leaves the fields empty but
still returns full file details. Uses a dedicated response struct
because the single-model endpoint uses snake_case keys while the list
endpoint historically returned camelCase (see the sketch after this list).
- core/gallery/importers/helpers.go: generic HasFile, HasExtension,
HasONNX, HasONNXConfigPair, HasGGMLFile helpers working on
[]hfapi.ModelFile so per-backend importers can detect artefact
patterns without duplicating string wrangling.
- core/gallery/importers/importers.go: adds the ErrAmbiguousImport
sentinel. DiscoverModelConfig now returns it (wrapped with
fmt.Errorf("%w: ...")) when no importer matched AND the HF
pipeline_tag falls in a whitelist of narrow modalities (ASR, TTS,
sentence-similarity, text-classification, object-detection). The
whitelist is intentionally narrow — unknown tags keep the previous
"no importer matched" behaviour to avoid blocking rare repos.
- core/gallery/importers/local.go: three new local-path detections,
inserted before the existing merged-transformers branch:
* ggml-*.bin → whisper
* silero*.onnx → silero-vad
* *.onnx + *.onnx.json pair → piper
- core/http/endpoints/localai/import_model.go: ImportModelURIEndpoint
surfaces ErrAmbiguousImport as HTTP 400 with
{error, detail, hint} JSON, preserving existing behaviour for
unrelated errors.
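A hedged sketch of the best-effort layering in GetModelDetails (PipelineTag/LibraryName and the snake_case rationale come from the commit; listFiles and the type shapes are stand-ins):

```go
package hfapi

import (
	"encoding/json"
	"net/http"
)

type ModelFile struct{ Path string }

type ModelDetails struct {
	Files       []ModelFile
	PipelineTag string
	LibraryName string
}

// Dedicated response struct: the single-model endpoint speaks snake_case,
// unlike the historically camelCase list endpoint.
type modelMeta struct {
	PipelineTag string `json:"pipeline_tag"`
	LibraryName string `json:"library_name"`
}

func GetModelDetails(repo string) (ModelDetails, error) {
	details, err := listFiles(repo) // authoritative: its failure is fatal
	if err != nil {
		return ModelDetails{}, err
	}
	// Best effort: a metadata outage leaves the fields empty but still
	// returns full file details.
	if resp, err := http.Get("https://huggingface.co/api/models/" + repo); err == nil {
		defer resp.Body.Close()
		var meta modelMeta
		if json.NewDecoder(resp.Body).Decode(&meta) == nil {
			details.PipelineTag = meta.PipelineTag
			details.LibraryName = meta.LibraryName
		}
	}
	return details, nil
}

func listFiles(repo string) (ModelDetails, error) { return ModelDetails{}, nil } // stand-in
```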
Green tests:
go test ./core/gallery/importers/... ./pkg/huggingface-api/... \
./core/http/endpoints/localai/...
Assisted-by: Claude:claude-opus-4-7[1m] [Agent]
* test(importers): red tests for KnownBackend endpoint and importer metadata
Add failing tests that drive Batch UI-Dropdown:
- importers_test.go: assert importers expose Name/Modality/AutoDetects
and that LlamaCPPImporter advertises drop-in replacements via a new
AdditionalBackendsProvider interface. A Registry() accessor is also
expected.
- backend_test.go (new): assert GET /backends/known returns
[]schema.KnownBackend, covers every importer, exposes drop-in
llama-cpp replacements, includes curated pref-only backends, has no
duplicates, and is sorted by Modality+Name.
These tests fail at compile time against master; they are intentionally
red so the follow-up green commit is reviewable.
Assisted-by: Claude:claude-opus-4-7[1m] [Agent]
* feat(gallery): add /backends/known endpoint for importer-aware backend list
Extend the Importer interface with Name/Modality/AutoDetects so the
import system can self-describe its registry, and introduce the
AdditionalBackendsProvider interface so importers can advertise drop-in
replacements (llama-cpp advertises ik-llama-cpp and turboquant).
Expose the new GET /backends/known endpoint that merges:
- the importer registry (auto-detect supported),
- drop-in replacements hosted by importers (preference-only),
- a curated knownPrefOnlyBackends slice for backends with no dedicated
importer (sglang, tinygrad, trl, mlx-vlm, whisperx, kokoros, Qwen TTS
variants, sam3-cpp) — kept at the top of backend.go so contributors
adding a new pref-only backend have one obvious place to edit,
- backends installed on disk but unknown to the importer (marked
AutoDetect=false, empty Modality).
The endpoint deliberately does NOT filter by gallery membership or host
capability (unlike /backends/available): LocalAI may auto-install a
backend that is not yet present, so the import form dropdown must show
everything the importer knows about.
Response is deduplicated (importer wins over pref-only) and sorted by
Modality+Name for deterministic output.
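A condensed sketch of that merge order (KnownBackend fields are named in the commits; the two input slices are simplified stand-ins for the registry, drop-ins, pref-only list, and on-disk scan):

```go
package localai

import "sort"

type KnownBackend struct {
	Name, Modality string
	AutoDetect     bool
}

func mergeKnown(prefOnly, fromImporters []KnownBackend) []KnownBackend {
	known := map[string]KnownBackend{}
	for _, b := range prefOnly {
		known[b.Name] = b
	}
	for _, b := range fromImporters {
		known[b.Name] = b // importer entry wins over a pref-only duplicate
	}
	out := make([]KnownBackend, 0, len(known))
	for _, b := range known {
		out = append(out, b)
	}
	sort.Slice(out, func(i, j int) bool { // deterministic: Modality, then Name
		if out[i].Modality != out[j].Modality {
			return out[i].Modality < out[j].Modality
		}
		return out[i].Name < out[j].Name
	})
	return out
}
```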
Registered in core/http/routes/localai.go next to /backends/available
under the same admin middleware.
Assisted-by: Claude:claude-opus-4-7[1m] [Agent]
* feat(ui): source import form backend dropdown from /backends/known
Replace the hard-coded BACKENDS constant in ImportModel.jsx with a
live fetch of /backends/known on mount. Users now see every backend
the importer layer knows about (including preference-only entries)
grouped by modality, not a stale subset.
Changes:
- config.js: add backendsKnown endpoint constant next to
backendsAvailable.
- api.js: add backendsApi.listKnown() wrapper.
- ImportModel.jsx: remove BACKENDS constant, fetch the list via
useEffect, and derive grouped options via buildBackendOptions.
Preference-only entries render with a " (preference-only)" suffix.
Loading state disables the dropdown with a "Loading backends…"
placeholder; on fetch failure the form falls back to auto-detect
only and surfaces a non-blocking toast.
- SearchableSelect.jsx: accept items flagged isHeader=true and render
them as non-selectable section dividers. Keyboard navigation skips
headers and search queries hide them so filtered output stays
relevant.
Vitest is not set up in this project (devDependencies ship Playwright
only). Per the brief's guard-rail, no frontend test framework is
introduced; coverage is provided by the Go handler tests that assert
the /backends/known contract consumed by the React form.
Assisted-by: Claude:claude-opus-4-7[1m] [Agent]
* test(gallery/importers): add failing tests for whisper importer
Asserts detection on ggerganov/whisper.cpp (via ggml-*.bin filename),
the preferences.backend=whisper override path for arbitrary URIs,
and the Importer interface metadata (name/modality/autodetect).
Assisted-by: Claude:claude-opus-4-7[1m] [Agent]
* feat(gallery/importers): add whisper importer
Recognises whisper.cpp GGML models by the "ggml-*.bin" filename
convention (direct URL or HF repo member) and by the explicit
preferences.backend="whisper" override. Emits backend: whisper with
the transcript use-case. Registered before llama-cpp so the narrow
filename signal wins before any generic GGUF match is attempted.
Assisted-by: Claude:claude-opus-4-7[1m] [Agent]
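The registration-order rule that this and the following importer commits keep invoking, as a hedged sketch (first match wins; the interface methods here are assumptions, not the real signatures):

```go
package importers

import "fmt"

type Importer interface {
	Match(uri string, files []string) bool // narrow artefact/owner signal
	Backend() string
}

// Registration order is the precedence order: narrow signals (ggml-*.bin,
// owner checks) are registered before generic matchers (.gguf, tokenizer.json).
var registry []Importer

func discover(uri string, files []string) (string, error) {
	for _, imp := range registry {
		if imp.Match(uri, files) {
			return imp.Backend(), nil // first match wins
		}
	}
	return "", fmt.Errorf("no importer matched %q", uri)
}
```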
* test(gallery/importers): add failing tests for moonshine importer
Asserts detection on UsefulSensors/moonshine-tiny via owner + ONNX
files, the preferences.backend=moonshine override for arbitrary URIs,
and the Importer interface metadata (name/modality/autodetect).
Assisted-by: Claude:claude-opus-4-7[1m] [Agent]
* feat(gallery/importers): add moonshine importer
Matches UsefulSensors-owned HF repos whose artefacts or metadata
identify them as ASR: on-disk .onnx files (the canonical Moonshine
packaging) OR pipeline_tag=automatic-speech-recognition (covers
transformers/safetensors-only sibling repos). preferences.backend=
moonshine overrides detection. Test uses the live moonshine-tiny
repo because the canonical UsefulSensors/moonshine repo currently
hits a recursive-subfolder bug in pkg/huggingface-api ListFiles.
Registered after WhisperImporter but before LlamaCPPImporter and
TransformersImporter so the narrower owner+ASR signal wins before
the generic tokenizer.json check routes the repo to transformers.
Assisted-by: Claude:claude-opus-4-7[1m] [Agent]
* test(gallery/importers): add failing tests for nemo importer
Asserts detection on nvidia/parakeet-tdt-0.6b-v3 via owner + .nemo
file, the preferences.backend=nemo override for arbitrary URIs, and
the Importer interface metadata (name/modality/autodetect).
Assisted-by: Claude:claude-opus-4-7[1m] [Agent]
* feat(gallery/importers): add nemo importer
Matches nvidia-owned HF repos that ship a .nemo checkpoint archive,
the canonical NeMo ASR packaging. preferences.backend=nemo forces
detection. Registered between moonshine and llama-cpp so the narrow
owner + extension signal wins before any downstream generic matcher.
Assisted-by: Claude:claude-opus-4-7[1m] [Agent]
* test(gallery/importers): add failing tests for faster-whisper importer
Asserts detection on Systran/faster-whisper-large-v3 (owner +
model.bin + config.json + ASR pipeline), the preferences.backend=
faster-whisper override for arbitrary URIs, and the Importer
interface metadata.
Assisted-by: Claude:claude-opus-4-7[1m] [Agent]
* feat(gallery/importers): add faster-whisper importer
Recognises CTranslate2-packaged whisper checkpoints distributed for
the faster-whisper runtime: model.bin + config.json + ASR
pipeline_tag, narrowed to Systran-owned repos or repo names
containing "faster-whisper" to avoid falsely claiming vanilla
OpenAI whisper HF repos. preferences.backend=faster-whisper
overrides detection. Registered before llama-cpp and transformers
so the narrow signal wins before tokenizer.json routes the repo to
the generic transformers importer.
Assisted-by: Claude:claude-opus-4-7[1m] [Agent]
* test(gallery/importers): add failing tests for qwen-asr importer
Asserts detection on Qwen/Qwen3-ASR-1.7B via owner + ASR substring
in the repo name, the preferences.backend=qwen-asr override for
arbitrary URIs, and the Importer interface metadata.
Assisted-by: Claude:claude-opus-4-7[1m] [Agent]
* feat(gallery/importers): add qwen-asr importer
Matches Qwen-owned HF repos whose name contains "ASR"
(case-insensitive), routing them to the qwen-asr backend rather
than the generic transformers/vllm path. The substring check scans
the repo portion only so the owner field cannot leak a false match.
preferences.backend=qwen-asr forces detection. Registered before
llama-cpp and transformers so the narrow owner+name signal wins.
Assisted-by: Claude:claude-opus-4-7[1m] [Agent]
* test(gallery/importers): ASR ambiguity surfaces ErrAmbiguousImport
Locks in the behaviour added in Batch 0: an HF repo whose pipeline_tag
marks it as automatic-speech-recognition but whose artefacts match no
ASR importer (and no generic importer) must fail with
ErrAmbiguousImport so callers know to pass preferences.backend rather
than silently guess. pyannote/voice-activity-detection is the fixture
— its file list is only config.yaml + README, leaving every importer's
artefact check negative.
Assisted-by: Claude:claude-opus-4-7[1m] [Agent]
* test(gallery/importers): add failing tests for piper importer
Assisted-by: Claude:claude-opus-4-7[1m] [Agent]
* feat(gallery/importers): add piper importer
Detects piper TTS voices by the canonical <voice>.onnx + <voice>.onnx.json
pair packaging (via HasONNXConfigPair). Narrow enough to skip generic
ONNX repos used by other backends (Moonshine ASR, sentence-transformers).
Assisted-by: Claude:claude-opus-4-7[1m] [Agent]
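A hedged sketch of the pair check behind HasONNXConfigPair (the real helper works on []hfapi.ModelFile; plain strings keep the sketch self-contained):

```go
package importers

import "strings"

func hasONNXConfigPair(files []string) bool {
	present := make(map[string]bool, len(files))
	for _, f := range files {
		present[f] = true
	}
	for f := range present {
		// Canonical piper packaging: <voice>.onnx next to <voice>.onnx.json.
		if strings.HasSuffix(f, ".onnx") && present[f+".json"] {
			return true
		}
	}
	return false
}
```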
* test(gallery/importers): add failing tests for bark importer
Assisted-by: Claude:claude-opus-4-7[1m] [Agent]
* feat(gallery/importers): add bark importer
Detects Suno's Bark TTS checkpoints by HF owner "suno" + repo name
prefix "bark". Adds HFOwnerRepoFromURI() helper so importers can fall
back to URI parsing when pkg/huggingface-api's recursive tree listing
errors on repos with nested subdirectories (suno/bark ships a
speaker_embeddings/v2 subtree that trips a pre-existing path-doubling
bug in the listFilesInPath recursion).
Assisted-by: Claude:claude-opus-4-7[1m] [Agent]
* test(gallery/importers): add failing tests for fish-speech importer
Assisted-by: Claude:claude-opus-4-7[1m] [Agent]
* feat(gallery/importers): add fish-speech importer
Detects Fish Audio TTS releases by HF owner "fishaudio" with a URI-based
fallback for repos whose tree recursion trips the pre-existing hfapi
path-doubling bug.
Assisted-by: Claude:claude-opus-4-7[1m] [Agent]
* test(gallery/importers): add failing tests for outetts importer
Assisted-by: Claude:claude-opus-4-7[1m] [Agent]
* feat(gallery/importers): add outetts importer
Detects OuteAI's OuteTTS releases by HF owner "OuteAI" or a case-
insensitive "OuteTTS" substring in the repo name, with a URI-based
fallback for recursion-bugged repos.
Assisted-by: Claude:claude-opus-4-7[1m] [Agent]
* test(gallery/importers): add failing tests for voxcpm importer
Assisted-by: Claude:claude-opus-4-7[1m] [Agent]
* feat(gallery/importers): add voxcpm importer
Detects OpenBMB's VoxCPM TTS family by repo-name substring (community
mirrors re-host the weights under many owners — mlx-community,
bluryar, callgg, etc).
Assisted-by: Claude:claude-opus-4-7[1m] [Agent]
* test(gallery/importers): add failing tests for kokoro importer
Assisted-by: Claude:claude-opus-4-7[1m] [Agent]
* feat(gallery/importers): add kokoro importer
Detects hexgrad's Kokoro TTS by the "Kokoro" repo-name substring paired
with a PyTorch .pth/.pt checkpoint — the pairing excludes ONNX-only
mirrors (handled by the pref-only `kokoros` Rust runtime) and GGUF
mirrors (handled by llama-cpp).
Assisted-by: Claude:claude-opus-4-7[1m] [Agent]
* test(gallery/importers): add failing tests for kitten-tts importer
Assisted-by: Claude:claude-opus-4-7[1m] [Agent]
* feat(gallery/importers): add kitten-tts importer
Detects KittenML's kitten-tts releases by owner or "kitten-tts" repo-name
substring, with URI-parsing fallback.
Assisted-by: Claude:claude-opus-4-7[1m] [Agent]
* test(gallery/importers): add failing tests for neutts importer
Assisted-by: Claude:claude-opus-4-7[1m] [Agent]
* feat(gallery/importers): add neutts importer
Detects Neuphonic's NeuTTS releases by owner "neuphonic" or "neutts"
repo-name substring, with URI-parsing fallback.
Assisted-by: Claude:claude-opus-4-7[1m] [Agent]
* test(gallery/importers): add failing tests for chatterbox importer
Assisted-by: Claude:claude-opus-4-7[1m] [Agent]
* feat(gallery/importers): add chatterbox importer
Detects Resemble AI's Chatterbox TTS by owner "ResembleAI" or
"chatterbox" repo-name substring, with URI-parsing fallback.
Assisted-by: Claude:claude-opus-4-7[1m] [Agent]
* test(gallery/importers): add failing tests for vibevoice importer
Assisted-by: Claude:claude-opus-4-7[1m] [Agent]
* feat(gallery/importers): add vibevoice importer
Detects Microsoft's VibeVoice TTS by "vibevoice" repo-name substring
(case-insensitive) so community mirrors still route here.
Assisted-by: Claude:claude-opus-4-7[1m] [Agent]
* test(gallery/importers): add failing tests for coqui importer
Assisted-by: Claude:claude-opus-4-7[1m] [Agent]
* feat(gallery/importers): add coqui importer
Detects Coqui AI's TTS releases (XTTS-v2, YourTTS, …) by the
authoritative `coqui` HF owner, with URI-parsing fallback.
Assisted-by: Claude:claude-opus-4-7[1m] [Agent]
* test(gallery/importers): TTS ambiguity surfaces ErrAmbiguousImport
Adds a Ginkgo spec that imports nari-labs/Dia-1.6B — a real HF repo
carrying pipeline_tag="text-to-speech" whose artefacts (*.pth, one
safetensors shard, preprocessor_config.json, config.json) match none of
the Batch-2 TTS importers nor the generic text/image importers — and
asserts DiscoverModelConfig wraps ErrAmbiguousImport via errors.Is.
Also pivots the endpoint-level ambiguity fixture from hexgrad/Kokoro-82M
to nari-labs/Dia-1.6B. Batch 2 added a dedicated kokoro importer that
now claims the original fixture; Dia remains genuinely unclaimed and
so exercises the same ambiguity code path at the HTTP layer.
Assisted-by: Claude:claude-opus-4-7[1m] [Agent]
* test(gallery/importers): add failing tests for stablediffusion-ggml importer
Covers HF repo detection (city96/FLUX.1-dev-gguf), raw .gguf URL matching on
filename arch tokens, preference override, and Importer interface metadata.
Assisted-by: Claude:claude-opus-4-7[1m] [Agent]
* feat(gallery/importers): add stablediffusion-ggml importer
Detects GGUF-packed Stable Diffusion and FLUX checkpoints (leejet owner,
city96 FLUX mirrors, second-state SD dumps, raw .gguf URLs with arch
tokens) and routes them to the stablediffusion-ggml backend. Registered
BEFORE LlamaCPPImporter so .gguf image checkpoints are not stolen by
llama-cpp's generic .gguf match. Reuses HFOwnerRepoFromURI for the
hfapi-recursion-bug fallback. preferences.backend overrides detection.
Assisted-by: Claude:claude-opus-4-7[1m] [Agent]
* test(gallery/importers): add failing tests for ace-step importer
Covers HF repo-name detection (ACE-Step/ACE-Step-v1-3.5B), preference
override, and Importer interface metadata.
Assisted-by: Claude:claude-opus-4-7[1m] [Agent]
* feat(gallery/importers): add ace-step importer
Routes ACE-Step music generation checkpoints (ACE-Step/ACE-Step-v1-3.5B,
ACE-Step/Ace-Step1.5, community mirrors) to the ace-step backend.
Matching is case-insensitive on the "ace-step" repo-name substring and
owner, with an HFOwnerRepoFromURI fallback for the hfapi recursion bug.
KnownUsecaseStrings mirrors the gallery's ace-step-turbo entry
(sound_generation, tts). preferences.backend overrides.
Assisted-by: Claude:claude-opus-4-7[1m] [Agent]
* test(gallery/importers): surface ErrAmbiguousImport on text-to-image misses
Adds text-to-image to ambiguousModalities whitelist and covers the
h94/IP-Adapter-FaceID case — pipeline_tag=text-to-image but ships only
.bin/.safetensors so diffusers, stablediffusion-ggml, llama-cpp,
transformers, vllm, mlx, and ace-step all miss. DiscoverModelConfig now
surfaces ErrAmbiguousImport for that shape instead of the opaque
"no importer matched" error.
Assisted-by: Claude:claude-opus-4-7[1m] [Agent]
* test(gallery/importers): add failing tests for vllm-omni importer
Introduces the test surface for the forthcoming VLLMOmniImporter:
detection via preferences.backend, Qwen owner + Omni repo token,
URI-only fallback, negative cases (plain Qwen, random OmniX repo), and
Import() emitting backend: vllm-omni with chat + multimodal usecases.
Includes a registration-order assertion via DiscoverModelConfig to pin
the requirement that vllm-omni wins over vllm for Qwen Omni repos
(tokenizer files are usually present too).
Assisted-by: Claude:claude-opus-4-7[1m] [Agent]
* feat(gallery/importers): add vllm-omni importer
Adds VLLMOmniImporter for Qwen Omni-style multimodal checkpoints
(Qwen3-Omni, Qwen2.5-Omni, …). Detection is narrow: HF owner "Qwen"
combined with "omni" in the repo name, or a repo name matching the
-Omni-/Omni- naming pattern. preferences.backend="vllm-omni" always
wins; HFOwnerRepoFromURI provides a URI-only fallback for the hfapi
recursion-bug edge case.
Emitted YAML sets backend: vllm-omni and known_usecases: [chat,
multimodal], matching the gallery/index.yaml vllm-omni entries. The
importer is registered ahead of VLLMImporter so Qwen Omni repos —
which also carry tokenizer files — route to vllm-omni rather than the
plain vllm backend.
Assisted-by: Claude:claude-opus-4-7[1m] [Agent]
* test(gallery/importers): add failing tests for llama-cpp drop-in preferences
Pins the expected drop-in replacement behaviour: preferences.backend
of ik-llama-cpp or turboquant must swap the emitted YAML backend
field while keeping the llama-cpp file layout identical. Also covers
the unknown-backend case (must stay llama-cpp) and re-asserts
AdditionalBackends() returns the two curated entries with non-empty
descriptions.
Assisted-by: Claude:claude-opus-4-7[1m] [Agent]
* feat(gallery/importers): llama-cpp honours ik-llama-cpp and turboquant drop-in preferences
preferences.backend set to ik-llama-cpp or turboquant now swaps the
emitted YAML backend field while leaving the file layout, model path,
mmproj handling and everything else in the llama-cpp Import pipeline
untouched. Unknown values are ignored and fall back to backend:
llama-cpp so arbitrary input can't leak into the config.
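A minimal sketch of the swap (the two drop-in names come from the commit; the function shape is an assumption):

```go
package importers

var llamaCPPDropIns = map[string]bool{
	"ik-llama-cpp": true,
	"turboquant":   true,
}

// emittedBackend swaps only the backend field; file layout, model path
// and mmproj handling stay on the llama-cpp pipeline either way.
func emittedBackend(preference string) string {
	if llamaCPPDropIns[preference] {
		return preference
	}
	return "llama-cpp" // unknown values cannot leak into the config
}
```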
Aligns the AdditionalBackends() descriptions with the user-facing
naming conventions surfaced via /backends/known. No changes to the
pref-only curated list in endpoints/localai/backend.go: the two
drop-in names have always lived on the importer side via
AdditionalBackends.
Assisted-by: Claude:claude-opus-4-7[1m] [Agent]
* test(gallery/importers): add failing tests for silero-vad importer
Add the SileroVADImporter test fixtures covering metadata, preference
overrides, snakers4 + onnx detection, silero_vad.onnx canonical filename,
URI fallback, and live HF discovery. Implementation follows in the next
commit.
Assisted-by: Claude:claude-opus-4-7[1m] [Agent]
* feat(gallery/importers): add silero-vad importer
Recognise the Silero VAD ONNX packaging: the canonical silero_vad.onnx
filename or any ONNX file under the snakers4 owner. Emits a
backend: silero-vad config with the vad known_usecase, and attaches the
canonical file entry when present so the weights download on import.
Registered before the generic importers so the unique-filename signal
takes precedence over any downstream tokenizer-based matcher.
Assisted-by: Claude:claude-opus-4-7[1m] [Agent]
* test(gallery/importers): add failing tests for rerankers importer
Cover the RerankersImporter contract: interface metadata, preference
override, cross-encoder owner detection, case-insensitive 'reranker'
substring match (BAAI/bge-reranker, Alibaba-NLP/gte-reranker), URI
fallback, and the full-discovery ordering check that a BAAI reranker
repo must route to the rerankers importer rather than transformers.
Assisted-by: Claude:claude-opus-4-7[1m] [Agent]
* feat(gallery/importers): add rerankers importer
Recognise reranker repositories — cross-encoder owner or any repo whose
name contains 'reranker' (case-insensitive). Emits backend: rerankers
with reranking: true and the rerank known_usecase.
Registered ahead of sentencetransformers and transformers so reranker
repos that happen to ship tokenizer.json or modules.json still route
here.
Assisted-by: Claude:claude-opus-4-7[1m] [Agent]
* test(gallery/importers): add failing tests for sentencetransformers importer
Cover the SentenceTransformersImporter contract: interface metadata,
preference override, modules.json marker file, sentence_bert_config.json
marker file, sentence-transformers owner, URI fallback, and the
full-discovery ordering check that ensures a sentence-transformers HF
URI routes here rather than transformers.
Assisted-by: Claude:claude-opus-4-7[1m] [Agent]
* feat(gallery/importers): add sentencetransformers importer
Recognise sentence-transformers embedding repos by modules.json,
sentence_bert_config.json, or the sentence-transformers owner. Emits
backend: sentencetransformers with embeddings: true and the embeddings
known_usecase.
Registered ahead of transformers so ST repos that carry tokenizer.json
still route here.
Assisted-by: Claude:claude-opus-4-7[1m] [Agent]
* test(gallery/importers): add failing tests for rfdetr importer
Cover the RFDetrImporter contract: interface metadata, preference
override, case-insensitive rf-detr and rfdetr substring matches, URI
fallback, and negative cases. Implementation follows in the next
commit.
Assisted-by: Claude:claude-opus-4-7[1m] [Agent]
* feat(gallery/importers): add rfdetr importer
Recognise RF-DETR object-detection repositories by a case-insensitive
'rf-detr' / 'rfdetr' substring in the repo name. Emits backend: rfdetr
with the detection known_usecase.
Registered ahead of transformers so RF-DETR repos with tokenizer
artefacts still route here.
Assisted-by: Claude:claude-opus-4-7[1m] [Agent]
* test(gallery/importers): surface ErrAmbiguousImport on sentence-similarity misses
Add an ambiguity fixture covering the embeddings/rerankers modality.
Qdrant/bm25 carries pipeline_tag=sentence-similarity but ships only
config.json + stopword .txt files — none of the Batch 5 importers
(silero-vad, rerankers, sentencetransformers, rfdetr) or the generic
vllm/transformers/llama-cpp/mlx/diffusers importers match. Because the
modality is in the ambiguous whitelist, DiscoverModelConfig must
surface ErrAmbiguousImport.
Assisted-by: Claude:claude-opus-4-7[1m] [Agent]
* test(localai/backend): red tests for KnownBackend.Installed flag
Extend the /backends/known suite with three failing cases that pin down
the forthcoming Installed field: JSON field presence on every entry,
flipping to true when an importer-registered backend is also present on
disk (and staying false for non-installed pref-only entries), and
surfacing system-only backends with empty modality and AutoDetect=false.
A small writeFakeSystemBackend helper plants a run.sh under the backends
dir so gallery.ListSystemBackends recognises the fixture.
Assisted-by: Claude:claude-opus-4-7[1m] [Agent]
* feat(schema,localai/backend): add Installed flag to KnownBackend
Add an Installed bool to schema.KnownBackend and populate it from the
/backends/known handler so the React import form can warn users that
picking a not-yet-installed backend will trigger an automatic download
on submit.
Computation: after merging the importer registry, additional backends
provider entries and the curated pref-only slice, the handler walks
gallery.ListSystemBackends(systemState) and either flips the existing
map entry's Installed flag to true (preserving modality / autodetect /
description metadata) or inserts a bare {Installed:true} entry for
system-only backends the importer layer doesn't know about.
Assisted-by: Claude:claude-opus-4-7[1m] [Agent]
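A hedged sketch of that walk (gallery.ListSystemBackends exists per the commit; the map shapes are assumptions):

```go
package localai

type KnownBackend struct {
	Name, Modality string
	AutoDetect     bool
	Installed      bool
}

func markInstalled(known map[string]KnownBackend, systemBackends []string) {
	for _, name := range systemBackends {
		if kb, ok := known[name]; ok {
			kb.Installed = true // keep modality/autodetect/description intact
			known[name] = kb
		} else {
			// System-only backend the importer layer doesn't know about:
			// empty Modality, AutoDetect stays false.
			known[name] = KnownBackend{Name: name, Installed: true}
		}
	}
}
```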
* test(localai/import_model): structured ambiguous-import response
Add red tests covering the extended ambiguity shape the React import
form needs:
- ImportModelURIEndpoint must return an HTTP 400 body that exposes the
detected `modality` (normalised to the importer modality key, e.g.
"tts" for pipeline_tag=text-to-speech) and a list of `candidates`
(backend names filtered by modality, excluding text-LLM backends).
- The importers package must surface a typed AmbiguousImportError so
HTTP consumers can read Modality + Candidates without parsing the
error string. errors.Is against the existing sentinel keeps working.
Assisted-by: Claude:claude-opus-4-7[1m] [Agent]
* feat(localai/import_model): structured ambiguity response with modality + candidates
DiscoverModelConfig now returns a typed AmbiguousImportError that
carries the importer modality key, candidate backend names, the
original URI, and the raw HF pipeline_tag. Its Is() preserves
errors.Is(err, ErrAmbiguousImport) for legacy callers.
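A plausible shape for that typed error (the sentinel, field names, and Is() behaviour come from the commits; the exact layout is assumed):

```go
package importers

import (
	"errors"
	"fmt"
)

var ErrAmbiguousImport = errors.New("ambiguous import")

type AmbiguousImportError struct {
	URI         string
	PipelineTag string   // raw HF tag, e.g. "text-to-speech"
	Modality    string   // importer modality key, e.g. "tts"
	Candidates  []string // backend names filtered by modality
}

func (e *AmbiguousImportError) Error() string {
	return fmt.Sprintf("%v: %s (modality %s)", ErrAmbiguousImport, e.URI, e.Modality)
}

// Is keeps errors.Is(err, ErrAmbiguousImport) true for legacy callers.
func (e *AmbiguousImportError) Is(target error) bool {
	return target == ErrAmbiguousImport
}
```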
The importer modality is pre-mapped from the HF pipeline_tag
(automatic-speech-recognition → asr, text-to-speech → tts, etc) via
PipelineTagToModality — surfaced as an exported helper so downstream
consumers can avoid duplicating the table. CandidatesForModality
filters the default importer registry plus AdditionalBackendsProvider
drop-ins by modality, sorts deterministically, and is the single
source of truth used by ImportModelURIEndpoint.
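The mapping table might look like this (the asr and tts rows are spelled out in the commit; the remaining rows are assumptions):

```go
package importers

var pipelineTagToModality = map[string]string{
	"automatic-speech-recognition": "asr",
	"text-to-speech":               "tts",
	"sentence-similarity":          "embeddings", // assumed key
	"text-to-image":                "image",      // assumed key
}

// PipelineTagToModality returns the importer modality key for an HF
// pipeline_tag, or "" when the tag is unmapped.
func PipelineTagToModality(tag string) string {
	return pipelineTagToModality[tag]
}
```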
ImportModelURIEndpoint now returns HTTP 400 with
{ error, detail, modality, candidates, hint }
when ambiguity fires, letting the React form render a modality-scoped
picker inline instead of a generic toast.
Assisted-by: Claude:claude-opus-4-7[1m] [Agent]
* test(ui/import): manual pick badge + tooltip
Red Playwright coverage for the preference-only → manual pick rename:
- The Backend dropdown renders a "manual pick" badge on every option
whose KnownBackend.auto_detect is false.
- The badge carries a title attribute with hover-tooltip copy that
explains auto-detect won't route to this backend.
- Auto-detectable backends must NOT carry the badge.
- The legacy " (preference-only)" suffix is gone from every label.
Assisted-by: Claude:claude-opus-4-7[1m] [Agent]
* ui(import): replace preference-only suffix with manual pick badge
SearchableSelect option rows now support an optional badge field — a
muted pill rendered to the right of the label with an optional title
attribute for native hover tooltips. Plain text so screen readers read
it alongside the option name.
buildBackendOptions in ImportModel stops appending " (preference-only)"
to the label and instead sets badge="manual pick" plus a descriptive
tooltip on every option whose auto_detect is false. The Backend help
text explains what "manual pick" means so users aren't left wondering
about the badge.
Assisted-by: Claude:claude-opus-4-7[1m] [Agent]
* test(ui/import): inline ambiguity picker
Red Playwright coverage for Batch A2 — when the server returns a 400
ambiguity body, the form must render an inline alert instead of a
toast, expose one clickable chip per candidate backend, and support
both auto-resubmit on pick and silent dismiss.
- Mocks /api/models/import-uri with the structured ambiguity body
(error, detail, modality, candidates, hint).
- On first click of Import, the alert is visible, carries
modality-specific copy, and shows a chip per candidate.
- Clicking a chip clears the alert, sets the Backend dropdown, and
triggers a second POST to /api/models/import-uri.
- Dismissing the alert leaves the Backend dropdown on Auto-detect —
no implicit backend assignment.
Assisted-by: Claude:claude-opus-4-7[1m] [Agent]
* feat(ui/import): inline ambiguity alert with candidate chips
Adds AmbiguityAlert — a soft, info-coloured card rendered above the URI
input when the server returns a structured 400 with { modality,
candidates }. Message is modality-aware (tts/asr/embeddings/image/
reranker/detection get purpose-written copy, everything else falls back
to a generic template). Each candidate is a clickable chip that shows a
download icon when /backends/known marks the backend as not yet
installed, so users aren't surprised by an implicit install.
ImportModel wires the alert to handleSimpleImport's error path:
- api.handleResponse now attaches { status, body } to the thrown Error
so pages can pattern-match on structured responses instead of string
error messages.
- handleSimpleImport detects `status === 400 && body.error === 'ambiguous
import'` and flips into the inline-picker mode instead of toasting.
- Clicking a chip sets prefs.backend and auto-resubmits (passing the
picked backend as an override so setPrefs's asynchrony doesn't leak
a stale value).
- Dismissing clears the alert; changing the URI or the backend also
clears it so a stale alert never sticks around.
Test fixtures mock GET /backends/known + POST /models/import-uri so the
Playwright specs don't depend on real network reachability.
Assisted-by: Claude:claude-opus-4-7[1m] [Agent]
* test(ui/import): auto-install warning
Red Playwright coverage for Batch A3 — when the user picks a backend
whose KnownBackend.installed is false, the form must render a muted
inline note under the Backend dropdown warning that submitting will
download the backend first. Picking an installed backend or leaving
Auto-detect selected must keep the note hidden.
Assisted-by: Claude:claude-opus-4-7[1m] [Agent]
* feat(ui/import): auto-install warning under backend dropdown
When the user picks a backend whose KnownBackend.installed is false,
render a muted inline note under the Backend dropdown's help text
warning that submitting will download the backend first. The note
lives inside the same form-group so it lines up with the existing
hint text; it's hidden when Auto-detect is selected (the selected
backend is unknowable at that point) or when the chosen backend is
already on disk.
Assisted-by: Claude:claude-opus-4-7[1m] [Agent]
* ui(import): drop redundant section header, adjust icons, rename HF shortcut
- Remove the "Import from URI" card-level <h2> — the page title already
says "Import New Model" one row up, so the secondary header was
duplicating information.
- Swap the fa-star on "Common Preferences" for fa-sliders (stars imply
favourites/ratings; this is just a preferences block) and move the
Custom Preferences fa-sliders-h to fa-plus-circle so the two blocks
read as distinct rather than as two sliders.
- Rename the HF shortcut from "Search GGUF on HF" → "Browse models on
HF" and drop the `search=gguf` filter on the linked URL. The import
form now supports ~40 backends; hard-coding GGUF in the copy no
longer matches the form's actual reach.
- Pure polish — no behaviour change, covered by the existing Batch A
Playwright suite.
Assisted-by: Claude:claude-opus-4-7[1m] [Agent]
* test(ui/import): batch B — simple/power switch, options, tabs, dialog
Adds a failing Playwright suite covering the full Batch B surface ahead
of implementation:
- B1: SimplePowerSwitch segmented control renders, toggles, persists to
localStorage across reloads.
- B2: Simple-mode Options disclosure is collapsed by default; expanding
exposes only Backend, Model Name, Description (no quantizations,
mmproj, model type, or custom prefs).
- B3: Power mode has Preferences and YAML tabs with a persistent
selection across reloads; URI/name/description typed in Simple carry
over to Power; YAML tab swaps the primary action to Create.
- B4: Switching Power -> Simple with a custom preference set triggers
the 3-button confirmation dialog (Keep / Discard / Cancel) with the
documented semantics.
Tests fail against master — implementation lands in the following
commits.
Assisted-by: Claude:claude-opus-4-7[1m] [Agent]
* feat(ui/import): add SimplePowerSwitch segmented control
Replaces the previous "Advanced Mode / Simple Mode" toggle button in the
page header with a two-segment control that flips between Simple and
Power. The control reuses the existing .segmented CSS shared with the
Sound page for visual consistency.
Mode state is persisted to localStorage under `import-form-mode` so
reloads land on the same view (default: simple). The boolean alias
`isAdvancedMode` is retained internally to minimise diff — subsequent
commits reshape the Simple and Power surfaces independently.
Closes B1 from the Batch B Playwright suite.
Assisted-by: Claude:claude-opus-4-7[1m] [Agent]
* feat(ui/import): simple mode collapsible options, power tabs, switch dialog
Completes the Batch B surface in a single structural pass so Simple and
Power mode can evolve independently:
Simple mode
- URI input + Ambiguity alert + Import button, plus a collapsible
"Options" disclosure that exposes ONLY Backend, Model Name,
Description. Quantizations / MMProj / Model Type / Diffusers fields
/ Custom Preferences are no longer rendered in Simple mode.
Power mode
- In-page segmented "Preferences · YAML" tab strip. Active tab
persists to localStorage under `import-form-power-tab`.
- Preferences tab = the full existing preferences + custom prefs
panel (no progressive disclosure yet — that's Batch D).
- YAML tab = the existing CodeEditor. Primary button reads "Create"
here, "Import Model" everywhere else.
Switch dialog
- Power -> Simple with non-default prefs (advanced pref keys set,
any custom-pref key non-empty, or YAML edited away from the
template) opens a 3-button dialog: Keep & switch / Discard &
switch / Cancel.
- Keep preserves all state. Discard resets prefs + customPrefs + YAML
to defaults. Cancel leaves the user in Power mode.
Page subtitle reflects the current surface (Simple, Power/Preferences,
Power/YAML). Estimate banner renders everywhere except Power/YAML.
Closes B2/B3/B4 from the Batch B Playwright suite.
Assisted-by: Claude:claude-opus-4-7[1m] [Agent]
* test(ui/import): expand Options disclosure in Batch A tests
Batch B hid the Backend dropdown behind a collapsible Options disclosure
in Simple mode. The Batch A tests that exercise the dropdown directly
(manual-pick badge, ambiguity chip sets the selected backend, auto-
install warning) now click the disclosure toggle before asserting on
dropdown contents. Test intent is unchanged.
Assisted-by: Claude:claude-opus-4-7[1m] [Agent]
* ui(import): strip decorative icons from field labels
The preference panel had 12 Font Awesome icons decorating field labels
(Backend, Model Name, Description, Quantizations, MMProj Quantizations,
Model Type, Pipeline Type, Scheduler Type, Enable Parameters, Embeddings,
CUDA, plus fa-link on Model URI). Every label screamed equally, flattening
the visual hierarchy.
Remove them. Keep icons where they carry meaning: page-level section
headers, URI format guide entries, primary buttons, the Simple-mode
Options disclosure, the ambiguity alert's fa-lightbulb, the auto-install
note's fa-download, and the Estimated-requirements banner's
fa-memory / fa-microchip / fa-download.
No new behaviour, no layout / spacing changes beyond removing the
orphaned icon margin. Playwright suite green.
Assisted-by: Claude:claude-opus-4-7[1m] [Agent]
* test(ui/import): progressive disclosure of preference fields
Cover the Batch D visibility matrix for Power > Preferences: Quantizations,
MMProj Quantizations, and Model Type each render only for the backends that
can consume them, stay visible when the backend is unset, and preserve any
value the user already typed when toggled off and back on. Also pin the
shrunk Description textarea at rows=2.
Assisted-by: Claude:claude-opus-4-7[1m] [Agent]
* feat(ui/import): progressive disclosure + shorter description textarea
Gate Quantizations, MMProj Quantizations, and Model Type in the Power >
Preferences tab so each field only renders for the backends that can
actually consume it. Backend unset keeps everything visible. Hidden
fields' state is preserved (the JSX wrapper is guarded, not the
underlying prefs state) so users flipping backends back and forth don't
lose input.
Also shrink the Description textarea from rows=3 to rows=2 — it's
shared between Simple Options and Power Preferences so the change
applies to both.
Assisted-by: Claude:claude-opus-4-7[1m] [Agent]
* test(ui/import): enter-to-submit in Simple mode
Red test for Batch F3 — pressing Enter in the URI input must POST
/models/import-uri, and Enter in the Description textarea must insert
a newline without submitting the form.
Assisted-by: Claude:claude-opus-4-7[1m] [Agent]
* feat(ui/import): enter-to-submit in Simple mode
Wrap the Simple-mode URI input + ambiguity alert + Options disclosure
in a <form> whose onSubmit calls handleSimpleImport. Pressing Enter in
the URI input (or any Simple-mode text input) now submits the import
without having to move the mouse to the header button. The Description
textarea keeps its native behaviour — Enter inserts a newline.
A hidden submit button is included because the visible Import button
lives outside the form in the page header; some browsers only fire
implicit Enter-submit when the form contains a submit-capable element.
Assisted-by: Claude:claude-opus-4-7[1m] [Agent]
* ui(import,SearchableSelect,components): aria-hidden on decorative icons
Every Font Awesome icon in the import form is decorative — its meaning
is already conveyed by adjacent visible text. Adding aria-hidden="true"
prevents screen readers from announcing the unicode glyph point as
content. Covers ImportModel.jsx (all remaining <i> glyphs) and
SearchableSelect.jsx (the trigger chevron).
AmbiguityAlert and SimplePowerSwitch already set aria-hidden on their
icons when the components landed in Batches A and B — no change needed
there.
Assisted-by: Claude:claude-opus-4-7[1m] [Agent]
* ui(SearchableSelect): responsive dropdown maxHeight + hover focus guard
F2 — replace fixed pixel heights with min(pixel, vh) so the dropdown
and its inner scroll region don't overflow short viewports. Outer
container: 260px -> min(260px, 60vh); inner listbox: 200px ->
min(200px, 50vh). Tall viewports still get the original pixel caps.
F5 — short-circuit onMouseEnter when the hovered row is already the
focused row. Avoids queueing a setFocusIndex call (and a render) for
every mousemove inside the same item — the state would be identical.
Assisted-by: Claude:claude-opus-4-7[1m] [Agent]
* ui(import): aria-label on custom preference rows
The Key / Value inputs and trash button in each Custom Preferences row
previously relied on placeholder text alone. Placeholders are not
accessible names — they vanish on input and screen readers do not
announce them consistently. Add row-indexed aria-labels so assistive
tech can distinguish "Preference key for row 1" from "row 2", and give
the trash button an explicit "Remove this preference" label.
Assisted-by: Claude:claude-opus-4-7[1m] [Agent]
* test(ui/import): modality chip row
Red tests for Batch E — a horizontal modality chip row that filters the
Backend dropdown by modality. Covers visibility in Simple-mode Options
and Power/Preferences (and absence in Power/YAML), filter behaviour,
mismatched-backend clearing with toast, ambiguity-alert auto-selection,
and radiogroup keyboard navigation.
Assisted-by: Claude:claude-opus-4-7[1m] [Agent]
* feat(ui/import): add ModalityChips component + filter integration
Horizontal chip row (Any, Text, Speech, TTS, Image, Embeddings,
Rerankers, Detection, VAD) filters the Backend dropdown options to the
selected modality. Default is Any — no filter, current behaviour.
- New ModalityChips component (radiogroup pattern, roving tabindex,
arrow-key navigation, Home/End).
- buildBackendOptions now accepts an optional modalityFilter so grouped
output is narrowed before rendering.
- Chips render inside Simple-mode Options disclosure and Power >
Preferences tab. Power > YAML stays unaffected.
- Switching the filter drops a mismatched backend selection and
surfaces a toast so the auto-clear is visible.
- Ambiguity alerts auto-activate the matching chip so users see only
relevant backends even if they dismiss the alert.
Tightens the Batch E tests' option-matching to the label <span> so the
"↵" keybind hint on the focused row doesn't break accessible-name
lookups.
Assisted-by: Claude:claude-opus-4-7[1m] [Agent]
* fix(ui/import): rename Power to Advanced + stop URI-formats toggle from submitting form
The "Supported URI Formats" disclosure button inside the Simple-mode form
lacked an explicit type attribute, so it defaulted to type="submit". Every
click triggered the form's onSubmit and surfaced the empty-URI validation
toast ("Please enter a model URI"). Marking it type="button" lets it
behave as a pure toggle.
While here, rename the user-visible "Power" label to "Advanced" in the
mode switch (button text + tooltip) and the Power-mode tab's aria-label,
matching the term users actually expect. The internal mode key stays
'power' so tests, localStorage, and data-testid selectors are untouched.
Assisted-by: Claude:claude-opus-4-7
* fix(system): fall back to cpu when meta backend lacks default capability
Meta backends like vllm and sglang enumerate concrete variants for
nvidia/amd/intel/cpu but omit a default: catch-all entry. On a no-GPU
host the reported capability is "default", so the previous Capability()
returned "default" unconditionally on a miss — IsCompatibleWith then saw
no "default" key and filtered the meta out of AvailableBackends. The
import flow's auto-install step then failed with "no backend found with
name <meta>", contradicting the UI's promise that the backend would be
downloaded on demand.
Try the explicit "default" key first, then fall back to "cpu" before
giving up. vllm now resolves to cpu-vllm on CPU-only Linux without
touching the gallery YAML.
Assisted-by: Claude:claude-opus-4-7
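A condensed sketch of the new fallback order (the "default" then "cpu" order comes from the commit; the variants-map shape is an assumption):

```go
package gallery

func capability(variants map[string]string, detected string) string {
	if _, ok := variants[detected]; ok {
		return detected // explicit nvidia/amd/intel/cpu variant
	}
	if _, ok := variants["default"]; ok {
		return "default"
	}
	if _, ok := variants["cpu"]; ok {
		return "cpu" // vllm resolves to cpu-vllm on a CPU-only host
	}
	return "" // genuinely incompatible
}
```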
commit 20baec77ab
feat(face-recognition): add insightface/onnx backend for 1:1 verify, 1:N identify, embedding, detection, analysis (#9480)
* feat(face-recognition): add insightface backend for 1:1 verify, 1:N identify, embedding, detection, analysis
Adds face recognition as a new first-class capability in LocalAI via the
`insightface` Python backend, with a pluggable two-engine design so
non-commercial (insightface model packs) and commercial-safe
(OpenCV Zoo YuNet + SFace) models share the same gRPC/HTTP surface.
New gRPC RPCs (backend/backend.proto):
* FaceVerify(FaceVerifyRequest) returns FaceVerifyResponse
* FaceAnalyze(FaceAnalyzeRequest) returns FaceAnalyzeResponse
Existing Embedding and Detect RPCs are reused (face image in
PredictOptions.Images / DetectOptions.src) for face embedding and
face detection respectively.
New HTTP endpoints under /v1/face/:
* verify — 1:1 image pair same-person decision
* analyze — per-face age + gender (emotion/race reserved)
* register — 1:N enrollment; stores embedding in vector store
* identify — 1:N recognition; detect → embed → StoresFind
* forget — remove a registered face by opaque ID
Service layer (core/services/facerecognition/) introduces a
`Registry` interface with one in-memory `storeRegistry` impl backed
by LocalAI's existing local-store gRPC vector backend. HTTP handlers
depend on the interface, not on StoresSet/StoresFind directly, so a
persistent PostgreSQL/pgvector implementation can be slotted in via a
single constructor change in core/application (TODO marker in the
package doc).
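A hedged sketch of that seam (the method set is inferred from the register/identify/forget endpoints above; exact signatures are assumptions):

```go
package facerecognition

import "context"

type Match struct {
	ID    string  // opaque ID handed out at registration time
	Score float32 // embedding similarity
}

// Registry is what the HTTP handlers depend on; storeRegistry (in-memory,
// backed by the local-store vector backend) is the one implementation,
// and a pgvector-backed one could be swapped in at construction time.
type Registry interface {
	Register(ctx context.Context, id string, embedding []float32) error
	Identify(ctx context.Context, embedding []float32, topK int) ([]Match, error)
	Forget(ctx context.Context, id string) error
}
```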
New usecase flag FLAG_FACE_RECOGNITION; insightface is also wired
into FLAG_DETECTION so /v1/detection works for face bounding boxes.
Gallery (backend/index.yaml) ships three entries:
* insightface-buffalo-l — SCRFD-10GF + ArcFace R50 + genderage
(~326MB pre-baked; non-commercial research use only)
* insightface-opencv — YuNet + SFace (~40MB pre-baked; Apache 2.0)
* insightface-buffalo-s — SCRFD-500MF + MBF (runtime download; non-commercial)
Python backend (backend/python/insightface/):
* engines.py — FaceEngine protocol with InsightFaceEngine and
OnnxDirectEngine; resolves model paths relative to the backend
directory so the same gallery config works in docker-scratch and
in the e2e-backends rootfs-extraction harness.
* backend.py — gRPC servicer implementing Health, LoadModel, Status,
Embedding, Detect, FaceVerify, FaceAnalyze.
* install.sh — pre-bakes buffalo_l + OpenCV YuNet/SFace inside the
backend directory so first-run is offline-clean (the final scratch
image only preserves files under /<backend>/).
* test.py — parametrized unit tests over both engines.
Tests:
* Registry unit tests (go test -race ./core/services/facerecognition/...)
— in-memory fake grpc.Backend, table-driven, covers register/
identify/forget/error paths + concurrent access.
* tests/e2e-backends/backend_test.go extended with face caps
(face_detect, face_embed, face_verify, face_analyze); relative
ordering + configurable verifyCeiling per engine.
* Makefile targets: test-extra-backend-insightface-buffalo-l,
-opencv, and the -all aggregate.
* CI: .github/workflows/test-extra.yml gains tests-insightface-grpc,
auto-triggered by changes under backend/python/insightface/.
Docs:
* docs/content/features/face-recognition.md — feature page with
license table, quickstart (defaults to the commercial-safe model),
models matrix, API reference, 1:N workflow, storage caveats.
* Cross-refs in object-detection.md, stores.md, embeddings.md, and
whats-new.md.
* Contributor README at backend/python/insightface/README.md.
Verified end-to-end:
* buffalo_l: 6/6 specs (health, load, face_detect, face_embed,
face_verify, face_analyze).
* opencv: 5/5 specs (same minus face_analyze — SFace has no
demographic head; correctly skipped via BACKEND_TEST_CAPS).
Assisted-by: Claude:claude-opus-4-7
* fix(face-recognition): move engine selection to model gallery, collapse backend entries
The previous commit put engine/model_pack options on backend gallery
entries (`backend/index.yaml`). That was wrong — `GalleryBackend`
(core/gallery/backend_types.go:32) has no `options` field, so the
YAML decoder silently dropped those keys and all three "different
insightface-*" backend entries resolved to the same container image
with no distinguishing configuration.
Correct split:
* `backend/index.yaml` now has ONE `insightface` backend entry
shipping the CPU + CUDA 12 container images. The Python backend
bundles both the non-commercial insightface model packs
(buffalo_l / buffalo_s) and the commercial-safe OpenCV Zoo
weights (YuNet + SFace); the active engine is selected at
LoadModel time via `options: ["engine:..."]`.
* `gallery/index.yaml` gains three model entries —
`insightface-buffalo-l`, `insightface-opencv`,
`insightface-buffalo-s` — each setting the appropriate
`overrides.backend` + `overrides.options` so installing one
actually gives the user the intended engine. This matches how
`rfdetr-base` lives in the model gallery against the `rfdetr`
backend.
The earlier e2e tests passed despite this bug because the Makefile
targets pass `BACKEND_TEST_OPTIONS` directly to LoadModel via gRPC,
bypassing any gallery resolution entirely. No code changes needed.
Assisted-by: Claude:claude-opus-4-7
* feat(face-recognition): cover all supported models in the gallery + drop weight baking
Follows up on the model-gallery split: adds entries for every model
configuration either engine actually supports, and switches weight
delivery from image-baked to LocalAI's standard gallery mechanism.
Gallery now has seven `insightface-*` model entries (gallery/index.yaml):
insightface (family) — non-commercial research use
• buffalo-l (326MB) — SCRFD-10GF + ResNet50 + genderage, default
• buffalo-m (313MB) — SCRFD-2.5GF + ResNet50 + genderage
• buffalo-s (159MB) — SCRFD-500MF + MBF + genderage
• buffalo-sc (16MB) — SCRFD-500MF + MBF, recognition only
(no landmarks, no demographics — analyze
returns empty attributes)
• antelopev2 (407MB) — SCRFD-10GF + ResNet100@Glint360K + genderage
OpenCV Zoo family — Apache 2.0 commercial-safe
• opencv — YuNet + SFace fp32 (~40MB)
• opencv-int8 — YuNet + SFace int8 (~12MB, ~3x smaller, faster on CPU)
Model weights are no longer baked into the backend image. The image
now ships only the Python runtime + libraries (~275MB content size,
~1.18GB disk vs ~1.21GB when weights were baked). Weights flow through
LocalAI's gallery mechanism:
* OpenCV variants list `files:` with ONNX URIs + SHA-256, so
`local-ai models install insightface-opencv` pulls them into the
models directory exactly like any other gallery-managed model.
* insightface packs (upstream distributes .zip archives only, not
individual ONNX files) auto-download on first LoadModel via
FaceAnalysis' built-in machinery, rooted at the LocalAI models
directory so they live alongside everything else — same pattern
`rfdetr` uses with `inference.get_model()`.
Backend changes (backend/python/insightface/):
* backend.py — LoadModel propagates `ModelOptions.ModelPath` (the
LocalAI models directory) to engines via a `_model_dir` hint.
This replaces the earlier ModelFile-dirname approach; ModelPath
is the canonical "models directory" variable set by the Go loader
(pkg/model/initializers.go:144) and is always populated.
* engines.py::_resolve_model_path — picks up `model_dir` and searches
it (plus basename-in-model-dir) before falling back to the dev
script-dir. This is how OnnxDirectEngine finds gallery-downloaded
YuNet/SFace files by filename only.
* engines.py::_flatten_insightface_pack — new helper that works
around an upstream packaging inconsistency: buffalo_l/s/sc zips
expand flat, but buffalo_m and antelopev2 zips wrap their ONNX
files in a redundant `<name>/` directory. insightface's own
loader looks one level too shallow and fails. We call
`ensure_available()` explicitly, flatten if nested, then hand to
FaceAnalysis.
* engines.py::InsightFaceEngine.prepare — root-resolution order now
includes the `_model_dir` hint so packs download into the LocalAI
models directory by default.
* install.sh — no longer pre-downloads any weights. Everything is
gallery-managed now.
* smoke.py (new) — parametrized smoke test that iterates over every
gallery configuration, simulating the LocalAI install flow
(creates a models dir, fetches OpenCV files with checksum
verification, lets insightface auto-download its packs), then
runs detect + embed + verify (+ analyze where supported) through
the in-process BackendServicer.
* test.py — OnnxDirectEngineTest no longer hardcodes `/models/opencv/`
paths; downloads ONNX files to a temp dir at setUpClass time and
passes ModelPath accordingly.
Registry change (core/services/facerecognition/store_registry.go):
* `dim=0` in NewStoreRegistry now means "accept whatever dimension
arrives" — needed because the backend supports 512-d ArcFace/MBF
and 128-d SFace via the same Registry. A non-zero dim still fails
fast with ErrDimensionMismatch.
* core/application plumbs `faceEmbeddingDim = 0`, explaining the
rationale in the comment.
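A minimal sketch of the dim=0 semantics, assuming the first embedding that
arrives pins the accepted dimension (names are illustrative, not the exact
store_registry.go code):

    package facerecognition

    import "errors"

    var ErrDimensionMismatch = errors.New("embedding dimension mismatch")

    // dim == 0 means "accept whatever dimension arrives"; whether the first
    // embedding then pins the dimension is an assumption of this sketch.
    type StoreRegistry struct {
        dim int
    }

    func NewStoreRegistry(dim int) *StoreRegistry { return &StoreRegistry{dim: dim} }

    func (r *StoreRegistry) checkDim(embedding []float32) error {
        if r.dim == 0 {
            r.dim = len(embedding) // 512-d ArcFace/MBF and 128-d SFace both pass
            return nil
        }
        if len(embedding) != r.dim {
            return ErrDimensionMismatch // a non-zero dim still fails fast
        }
        return nil
    }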
Backend gallery description updated to reflect that the image carries
no weights — it's just Python + engines.
Smoke-tested all 7 configurations against the rebuilt image (with the
flatten fix applied), exit 0:
PASS: insightface-buffalo-l faces=6 dim=512 same-dist=0.000
PASS: insightface-buffalo-sc faces=6 dim=512 same-dist=0.000
PASS: insightface-buffalo-s faces=6 dim=512 same-dist=0.000
PASS: insightface-buffalo-m faces=6 dim=512 same-dist=0.000
PASS: insightface-antelopev2 faces=6 dim=512 same-dist=0.000
PASS: insightface-opencv faces=6 dim=128 same-dist=0.000
PASS: insightface-opencv-int8 faces=6 dim=128 same-dist=0.000
7/7 passed
Assisted-by: Claude:claude-opus-4-7
* fix(face-recognition): pre-fetch OpenCV ONNX for e2e target; drop stale pre-baked claim
CI regression from the previous commit: I moved OpenCV Zoo weight
delivery to LocalAI's gallery `files:` mechanism, but the
test-extra-backend-insightface-opencv target was still passing
relative paths `detector_onnx:models/opencv/yunet.onnx` in
BACKEND_TEST_OPTIONS. The e2e suite drives LoadModel directly over
gRPC without going through the gallery, so those relative paths
resolved to nothing and OpenCV's ONNXImporter failed:
LoadModel failed: Failed to load face engine:
OpenCV(4.13.0) ... Can't read ONNX file: models/opencv/yunet.onnx
Fix: add an `insightface-opencv-models` prerequisite target that
fetches the two ONNX files (YuNet + SFace) to a deterministic host
cache at /tmp/localai-insightface-opencv-cache/, verifies SHA-256,
and skips the download on re-runs. The opencv test target depends on
it and passes absolute paths in BACKEND_TEST_OPTIONS, so the backend
finds the files via its normal absolute-path resolution branch.
Also refresh the buffalo_l comment: it no longer says "pre-baked"
(nothing is — the pack auto-downloads from upstream's GitHub release
on first LoadModel, same as in CI).
Locally verified: `make test-extra-backend-insightface-opencv` passes
5/5 specs (health, load, face_detect, face_embed, face_verify).
Assisted-by: Claude:claude-opus-4-7
* feat(face-recognition): add POST /v1/face/embed + correct /v1/embeddings docs
The docs promised that /v1/embeddings returns face vectors when you
send an image data-URI. That was never true: /v1/embeddings is
OpenAI-compatible and text-only by contract — its handler goes
through `core/backend/embeddings.go::ModelEmbedding`, which sets
`predictOptions.Embeddings = s` (a string of TEXT to embed) and never
populates `predictOptions.Images[]`. The Python backend's Embedding
gRPC method does handle Images[] (that's how /v1/face/register reaches
it internally via `backend.FaceEmbed`), but the HTTP embeddings
endpoint wasn't wired to populate it.
Rather than overload /v1/embeddings with image-vs-text detection —
messy, and the endpoint is OpenAI-compatible by design — add a
dedicated /v1/face/embed endpoint that wraps `backend.FaceEmbed`
(already used internally by /v1/face/register and /v1/face/identify).
Matches LocalAI's convention of a dedicated path per non-standard flow
(/v1/rerank, /v1/detection, /v1/face/verify etc.).
Response:
{
"embedding": [<dim> floats, L2-normed],
"dim": int, // 512 for ArcFace R50 / MBF, 128 for SFace
"model": "<name>"
}
Live-tested on the opencv engine: returns a 128-d L2-normalized vector
(sum(x^2) = 1.0000). The sentinel note in the docs was updated to say that
/v1/embeddings is text-only and to point image users at /v1/face/embed instead.
Assisted-by: Claude:claude-opus-4-7
* fix(http): map malformed image input + gRPC status codes to proper 4xx
Image-input failures on LocalAI's single-image endpoints (/v1/detection,
/v1/face/{verify,analyze,embed,register,identify}) have historically
returned 500 — even when the client was the one who sent garbage.
Classic example: you POST an "image" that isn't a URL, isn't a
data-URI, and isn't a valid JPEG/PNG — the server shouldn't claim
that's its fault.
Two helpers land in core/http/endpoints/localai/images.go and every
single-image handler is switched over:
* decodeImageInput(s)
Wraps utils.GetContentURIAsBase64 and turns any failure
(invalid URL, not a data-URI, download error, etc.) into
echo.NewHTTPError(400, "invalid image input: ...").
* mapBackendError(err)
Inspects the gRPC status on a backend call error and maps:
INVALID_ARGUMENT → 400 Bad Request
NOT_FOUND → 404 Not Found
FAILED_PRECONDITION → 412 Precondition Failed
UNIMPLEMENTED → 501 Not Implemented
All other codes fall through unchanged (still 500).
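A hedged sketch of the two helpers (the exact bodies in
core/http/endpoints/localai/images.go may differ, and the signature of
utils.GetContentURIAsBase64 is assumed here):

    import (
        "net/http"

        "github.com/labstack/echo/v4"
        "google.golang.org/grpc/codes"
        "google.golang.org/grpc/status"
    )

    func decodeImageInput(s string) (string, error) {
        b64, err := utils.GetContentURIAsBase64(s) // assumed: returns (string, error)
        if err != nil {
            return "", echo.NewHTTPError(http.StatusBadRequest, "invalid image input: "+err.Error())
        }
        return b64, nil
    }

    func mapBackendError(err error) error {
        st, ok := status.FromError(err)
        if !ok {
            return err // not a gRPC status: fall through unchanged (500)
        }
        switch st.Code() {
        case codes.InvalidArgument:
            return echo.NewHTTPError(http.StatusBadRequest, st.Message())
        case codes.NotFound:
            return echo.NewHTTPError(http.StatusNotFound, st.Message())
        case codes.FailedPrecondition:
            return echo.NewHTTPError(http.StatusPreconditionFailed, st.Message())
        case codes.Unimplemented:
            return echo.NewHTTPError(http.StatusNotImplemented, st.Message())
        default:
            return err
        }
    }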
Before, my 1×1 PNG error-path test returned:
HTTP 500 "rpc error: code = InvalidArgument desc = failed to decode one or both images"
After:
HTTP 400 "failed to decode one or both images"
Scope-limited to the LocalAI single-image endpoints. The multi-modal
paths (middleware/request.go, openresponses/responses.go,
openai/realtime.go) intentionally log-and-skip individual media parts
when decoding fails — different design intent (graceful degradation
of a multi-part message), not a 400-worthy failure. Left untouched.
Live-verified: every error case in /tmp/face_errors.py now returns
4xx with a meaningful message; the "image with no face (1x1 PNG)"
case specifically went from 500 → 400.
Assisted-by: Claude:claude-opus-4-7
* refactor(face-recognition): insightface packs go through gallery files:, drop FaceAnalysis
Follows up on the discovery that LocalAI's gallery `files:` mechanism
handles archives (zip, tar.gz, …) via mholt/archiver/v3 — the rhasspy
piper voices use exactly this pattern. Insightface packs are zip
archives, so we can now deliver them the same way every other
gallery-managed model gets delivered: declaratively, checksum-verified,
through LocalAI's standard download+extract pipeline.
Two changes:
1. Gallery (gallery/index.yaml) — every insightface-* entry gains a
`files:` list with the pack zip's URI + SHA-256. `local-ai models
install insightface-buffalo-l` now fetches the zip, verifies the
hash, and extracts it into the models directory. No more reliance
on insightface's library-internal `ensure_available()` auto-download
or its hardcoded `BASE_REPO_URL`.
2. InsightFaceEngine (backend/python/insightface/engines.py) — drops
the FaceAnalysis wrapper and drives insightface's `model_zoo`
directly. The ~50 lines FaceAnalysis provides — glob ONNX files,
route each through `model_zoo.get_model()`, build a
`{taskname: model}` dict, loop per-face at inference — are
reimplemented in `InsightFaceEngine`. The actual inference classes
(RetinaFace, ArcFaceONNX, Attribute, Landmark) are still
insightface's — we only replicate the glue, so drift risk against
upstream is minimal.
Why drop FaceAnalysis: it hard-codes a `<root>/models/<name>/*.onnx`
layout that doesn't match what LocalAI's zip extraction produces.
LocalAI unpacks archives flat into `<models_dir>`. Upstream packs
are inconsistent — buffalo_l/s/sc ship ONNX at the zip root (lands
at `<models_dir>/*.onnx`), buffalo_m/antelopev2 wrap in a redundant
`<name>/` dir (lands at `<models_dir>/<name>/*.onnx`). The new
`_locate_insightface_pack` helper searches both locations plus
legacy paths and returns whichever has ONNX files. Replaces the
earlier `_flatten_insightface_pack` helper (which tried to fight
FaceAnalysis's layout expectations; now we just find the files
wherever they are).
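The search order, sketched in Go for illustration (the real helper is Python,
in backend/python/insightface/engines.py, and the "legacy" candidate below is
an assumption about what those legacy paths look like):

    import "path/filepath"

    func locateInsightfacePack(modelsDir, name string) (string, bool) {
        candidates := []string{
            modelsDir,                                // flat zips: buffalo_l/s/sc
            filepath.Join(modelsDir, name),           // nested zips: buffalo_m, antelopev2
            filepath.Join(modelsDir, "models", name), // legacy FaceAnalysis layout (assumed)
        }
        for _, dir := range candidates {
            if onnx, _ := filepath.Glob(filepath.Join(dir, "*.onnx")); len(onnx) > 0 {
                return dir, true // whichever location has ONNX files wins
            }
        }
        return "", false
    }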
Net effect for users: install once via LocalAI's managed flow,
weights live alongside every other model, progress shows in the
jobs endpoint, no first-load network call. Same API surface,
cleaner plumbing.
Assisted-by: Claude:claude-opus-4-7
* fix(face-recognition): CI's insightface e2e path needs the pack pre-fetched
The e2e suite drives LoadModel over gRPC without going through LocalAI's
gallery flow, so the engine's `_model_dir` option (normally populated
from ModelPath) is empty. Previously the insightface target relied on
FaceAnalysis auto-download to paper over this, but we dropped
FaceAnalysis in favor of direct model_zoo calls — so the buffalo_l
target started failing at LoadModel with "no insightface pack found".
Mirror the opencv target's pre-fetch pattern: download buffalo_sc.zip
(same SHA as the gallery entry), extract it on the host, and pass
`root:<dir>` so the engine locates the pack without needing
ModelPath. Switched to buffalo_sc (smallest pack, ~16MB) to keep CI
fast; it covers the same insightface engine code path as buffalo_l.
Face analyze cap dropped since buffalo_sc has no age/gender head.
Assisted-by: Claude:claude-opus-4-7[1m]
* feat(face-recognition): surface face-recognition in advertised feature maps
The six /v1/face/* endpoints were missing from every place LocalAI
advertises its feature surface to clients:
* api_instructions — the machine-readable capability index at
GET /api/instructions. Added `face-recognition` as a dedicated
instruction area with an intro that calls out the in-memory
registry caveat and the /v1/face/embed vs /v1/embeddings split.
* auth/permissions — added FeatureFaceRecognition constant, routed
all six face endpoints through it so admins can gate them per-user
like any other API feature. Default ON (matches the other API
features).
* React UI capabilities — CAP_FACE_RECOGNITION symbol mapped to
FLAG_FACE_RECOGNITION. Declared only for now; the Face page is a
follow-up (noted in the plan).
Instruction count bumped 9 → 10; test updated.
Assisted-by: Claude:claude-opus-4-7[1m]
* docs(agents): capture advertising-surface steps in the endpoint guide
Before this change, adding a new /v1/* endpoint reliably missed one or
more of: the swagger @Tags annotation, the /api/instructions registry,
the auth RouteFeatureRegistry, and the React UI CAP_* symbol. The
endpoint would work but be invisible to API consumers, admins, and the
UI — and nothing in the existing docs said to look in those places.
Extend .agents/api-endpoints-and-auth.md with a new "Advertising
surfaces" section covering all four surfaces (swagger tags, /api/
instructions, capabilities.js, docs/), and expand the closing checklist
so it's impossible to ship a feature without visiting each one. Hoist a
one-liner reminder into AGENTS.md's Quick Reference so agents skim it
before diving in.
Assisted-by: Claude:claude-opus-4-7[1m]
|
||
|
|
87e6de1989 |
feat: wire transcription for llama.cpp, add streaming support (#9353)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io> |
||
|
|
d67623230f |
feat(vllm): parity with llama.cpp backend (#9328)
* fix(schema): serialize ToolCallID and Reasoning in Messages.ToProto
The ToProto conversion was dropping tool_call_id and reasoning_content
even though both proto and Go fields existed, breaking multi-turn tool
calling and reasoning passthrough to backends.
* refactor(config): introduce backend hook system and migrate llama-cpp defaults
Adds RegisterBackendHook/runBackendHooks so each backend can register
default-filling functions that run during ModelConfig.SetDefaults().
Migrates the existing GGUF guessing logic into hooks_llamacpp.go,
registered for both 'llama-cpp' and the empty backend (auto-detect).
Removes the old guesser.go shim.
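An illustrative sketch of the hook registry (RegisterBackendHook and
SetDefaults are named above; everything else here is an assumption):

    // Stub of the existing config type, for illustration only.
    type ModelConfig struct {
        Backend string
    }

    type BackendHook func(cfg *ModelConfig)

    var backendHooks = map[string][]BackendHook{}

    // Called from init() in files like hooks_llamacpp.go, once for
    // "llama-cpp" and once for "" (auto-detect).
    func RegisterBackendHook(backend string, hook BackendHook) {
        backendHooks[backend] = append(backendHooks[backend], hook)
    }

    // Invoked from ModelConfig.SetDefaults().
    func (c *ModelConfig) runBackendHooks() {
        for _, hook := range backendHooks[c.Backend] {
            hook(c)
        }
    }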
* feat(config): add vLLM parser defaults hook and importer auto-detection
Introduces parser_defaults.json mapping model families to vLLM
tool_parser/reasoning_parser names, with longest-pattern-first matching.
The vllmDefaults hook auto-fills tool_parser and reasoning_parser
options at load time for known families, while the VLLMImporter writes
the same values into generated YAML so users can review and edit them.
Adds tests covering MatchParserDefaults, hook registration via
SetDefaults, and the user-override behavior.
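A sketch of the longest-pattern-first lookup (MatchParserDefaults is named
above, but this exact signature and JSON shape are assumptions):

    import (
        "sort"
        "strings"
    )

    type ParserDefault struct {
        Pattern         string `json:"pattern"`
        ToolParser      string `json:"tool_parser"`
        ReasoningParser string `json:"reasoning_parser"`
    }

    func MatchParserDefaults(modelName string, defaults []ParserDefault) (ParserDefault, bool) {
        // Longest pattern wins, so more specific family names shadow generic ones.
        sort.SliceStable(defaults, func(i, j int) bool {
            return len(defaults[i].Pattern) > len(defaults[j].Pattern)
        })
        name := strings.ToLower(modelName)
        for _, d := range defaults {
            if strings.Contains(name, strings.ToLower(d.Pattern)) {
                return d, true
            }
        }
        return ParserDefault{}, false
    }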
* feat(vllm): wire native tool/reasoning parsers + chat deltas + logprobs
- Use vLLM's ToolParserManager/ReasoningParserManager to extract structured
output (tool calls, reasoning content) instead of reimplementing parsing
- Convert proto Messages to dicts and pass tools to apply_chat_template
- Emit ChatDelta with content/reasoning_content/tool_calls in Reply
- Extract prompt_tokens, completion_tokens, and logprobs from output
- Replace boolean GuidedDecoding with proper GuidedDecodingParams from Grammar
- Add TokenizeString and Free RPC methods
- Fix missing `time` import used by load_video()
* feat(vllm): CPU support + shared utils + vllm-omni feature parity
- Split vllm install per acceleration: move generic `vllm` out of
requirements-after.txt into per-profile after files (cublas12, hipblas,
intel) and add CPU wheel URL for cpu-after.txt
- requirements-cpu.txt now pulls torch==2.7.0+cpu from PyTorch CPU index
- backend/index.yaml: register cpu-vllm / cpu-vllm-development variants
- New backend/python/common/vllm_utils.py: shared parse_options,
messages_to_dicts, setup_parsers helpers (used by both vllm backends)
- vllm-omni: replace hardcoded chat template with tokenizer.apply_chat_template,
wire native parsers via shared utils, emit ChatDelta with token counts,
add TokenizeString and Free RPCs, detect CPU and set VLLM_TARGET_DEVICE
- Add test_cpu_inference.py: standalone script to validate CPU build with
a small model (Qwen2.5-0.5B-Instruct)
* fix(vllm): CPU build compatibility with vllm 0.14.1
Validated end-to-end on CPU with Qwen2.5-0.5B-Instruct (LoadModel, Predict,
TokenizeString, Free all working).
- requirements-cpu-after.txt: pin vllm to 0.14.1+cpu (pre-built wheel from
GitHub releases) for x86_64 and aarch64. vllm 0.14.1 is the newest CPU
wheel whose torch dependency resolves against published PyTorch builds
(torch==2.9.1+cpu). Later vllm CPU wheels currently require
torch==2.10.0+cpu which is only available on the PyTorch test channel
with incompatible torchvision.
- requirements-cpu.txt: bump torch to 2.9.1+cpu, add torchvision/torchaudio
so uv resolves them consistently from the PyTorch CPU index.
- install.sh: add --index-strategy=unsafe-best-match for CPU builds so uv
can mix the PyTorch index and PyPI for transitive deps (matches the
existing intel profile behaviour).
- backend.py LoadModel: vllm >= 0.14 removed AsyncLLMEngine.get_model_config
so the old code path errored out with AttributeError on model load.
Switch to the new get_tokenizer()/tokenizer accessor with a fallback
to building the tokenizer directly from request.Model.
* fix(vllm): tool parser constructor compat + e2e tool calling test
Concrete vLLM tool parsers override the abstract base's __init__ and
drop the tools kwarg (e.g. Hermes2ProToolParser only takes tokenizer).
Instantiating with tools= raised TypeError which was silently caught,
leaving chat_deltas.tool_calls empty.
Retry the constructor without the tools kwarg on TypeError — tools
aren't required by these parsers since extract_tool_calls finds tool
syntax in the raw model output directly.
Validated with Qwen/Qwen2.5-0.5B-Instruct + hermes parser on CPU:
the backend correctly returns ToolCallDelta{name='get_weather',
arguments='{"location": "Paris, France"}'} in ChatDelta.
test_tool_calls.py is a standalone smoke test that spawns the gRPC
backend, sends a chat completion with tools, and asserts the response
contains a structured tool call.
* ci(backend): build cpu-vllm container image
Add the cpu-vllm variant to the backend container build matrix so the
image registered in backend/index.yaml (cpu-vllm / cpu-vllm-development)
is actually produced by CI.
Follows the same pattern as the other CPU python backends
(cpu-diffusers, cpu-chatterbox, etc.) with build-type='' and no CUDA.
backend_pr.yml auto-picks this up via its matrix filter from backend.yml.
* test(e2e-backends): add tools capability + HF model name support
Extends tests/e2e-backends to cover backends that:
- Resolve HuggingFace model ids natively (vllm, vllm-omni) instead of
loading a local file: BACKEND_TEST_MODEL_NAME is passed verbatim as
ModelOptions.Model with no download/ModelFile.
- Parse tool calls into ChatDelta.tool_calls: new "tools" capability
sends a Predict with a get_weather function definition and asserts
the Reply contains a matching ToolCallDelta. Uses UseTokenizerTemplate
with OpenAI-style Messages so the backend can wire tools into the
model's chat template.
- Need backend-specific Options[]: BACKEND_TEST_OPTIONS lets a test set
e.g. "tool_parser:hermes,reasoning_parser:qwen3" at LoadModel time.
Adds make target test-extra-backend-vllm that:
- docker-build-vllm
- loads Qwen/Qwen2.5-0.5B-Instruct
- runs health,load,predict,stream,tools with tool_parser:hermes
Drops backend/python/vllm/test_{cpu_inference,tool_calls}.py — those
standalone scripts were scaffolding used while bringing up the Python
backend; the e2e-backends harness now covers the same ground uniformly
alongside llama-cpp and ik-llama-cpp.
* ci(test-extra): run vllm e2e tests on CPU
Adds tests-vllm-grpc to the test-extra workflow, mirroring the
llama-cpp and ik-llama-cpp gRPC jobs. Triggers when files under
backend/python/vllm/ change (or on run-all), builds the local-ai
vllm container image, and runs the tests/e2e-backends harness with
BACKEND_TEST_MODEL_NAME=Qwen/Qwen2.5-0.5B-Instruct, tool_parser:hermes,
and the tools capability enabled.
Uses ubuntu-latest (no GPU) — vllm runs on CPU via the cpu-vllm
wheel we pinned in requirements-cpu-after.txt. Frees disk space
before the build since the docker image + torch + vllm wheel is
sizeable.
* fix(vllm): build from source on CI to avoid SIGILL on prebuilt wheel
The prebuilt vllm 0.14.1+cpu wheel from GitHub releases is compiled with
SIMD instructions (AVX-512 VNNI/BF16 or AMX-BF16) that not every CPU
supports. GitHub Actions ubuntu-latest runners SIGILL when vllm spawns
the model_executor.models.registry subprocess for introspection, so
LoadModel never reaches the actual inference path.
- install.sh: when FROM_SOURCE=true on a CPU build, temporarily hide
requirements-cpu-after.txt so installRequirements installs the base
deps + torch CPU without pulling the prebuilt wheel, then clone vllm
and compile it with VLLM_TARGET_DEVICE=cpu. The resulting binaries
target the host's actual CPU.
- backend/Dockerfile.python: accept a FROM_SOURCE build-arg and expose
it as an ENV so install.sh sees it during `make`.
- Makefile docker-build-backend: forward FROM_SOURCE as --build-arg
when set, so backends that need source builds can opt in.
- Makefile test-extra-backend-vllm: call docker-build-vllm via a
recursive $(MAKE) invocation so FROM_SOURCE flows through.
- .github/workflows/test-extra.yml: set FROM_SOURCE=true on the
tests-vllm-grpc job. Slower but reliable — the prebuilt wheel only
works on hosts that share the build-time SIMD baseline.
Answers 'did you test locally?': yes, end-to-end on my local machine
with the prebuilt wheel (CPU supports AVX-512 VNNI). The CI runner CPU
gap was not covered locally — this commit plugs that gap.
* ci(vllm): use bigger-runner instead of source build
The prebuilt vllm 0.14.1+cpu wheel requires SIMD instructions (AVX-512
VNNI/BF16) that stock ubuntu-latest GitHub runners don't support —
vllm.model_executor.models.registry SIGILLs on import during LoadModel.
Source compilation works but takes 30-40 minutes per CI run, which is
too slow for an e2e smoke test. Instead, switch tests-vllm-grpc to the
bigger-runner self-hosted label (already used by backend.yml for the
llama-cpp CUDA build) — that hardware has the required SIMD baseline
and the prebuilt wheel runs cleanly.
FROM_SOURCE=true is kept as an opt-in escape hatch:
- install.sh still has the CPU source-build path for hosts that need it
- backend/Dockerfile.python still declares the ARG + ENV
- Makefile docker-build-backend still forwards the build-arg when set
Default CI path uses the fast prebuilt wheel; source build can be
re-enabled by exporting FROM_SOURCE=true in the environment.
* ci(vllm): install make + build deps on bigger-runner
bigger-runner is a bare self-hosted runner used by backend.yml for
docker image builds — it has docker but not the usual ubuntu-latest
toolchain. The make-based test target needs make, build-essential
(cgo in 'go test'), and curl/unzip (the Makefile protoc target
downloads protoc from github releases).
protoc-gen-go and protoc-gen-go-grpc come via 'go install' in the
install-go-tools target, which setup-go makes possible.
* ci(vllm): install libnuma1 + libgomp1 on bigger-runner
The vllm 0.14.1+cpu wheel ships a _C C++ extension that dlopens
libnuma.so.1 at import time. When the runner host doesn't have it,
the extension silently fails to register its torch ops, so
EngineCore crashes on init_device with:
AttributeError: '_OpNamespace' '_C_utils' object has no attribute
'init_cpu_threads_env'
Also add libgomp1 (OpenMP runtime, used by torch CPU kernels) to be
safe on stripped-down runners.
* feat(vllm): bundle libnuma/libgomp via package.sh
The vllm CPU wheel ships a _C extension that dlopens libnuma.so.1 at
import time; torch's CPU kernels in turn use libgomp.so.1 (OpenMP).
Without these on the host, vllm._C silently fails to register its
torch ops and EngineCore crashes with:
AttributeError: '_OpNamespace' '_C_utils' object has no attribute
'init_cpu_threads_env'
Rather than asking every user to install libnuma1/libgomp1 on their
host (or every LocalAI base image to ship them), bundle them into
the backend image itself — same pattern fish-speech and the GPU libs
already use. libbackend.sh adds ${EDIR}/lib to LD_LIBRARY_PATH at
run time so the bundled copies are picked up automatically.
- backend/python/vllm/package.sh (new): copies libnuma.so.1 and
libgomp.so.1 from the builder's multilib paths into ${BACKEND}/lib,
preserving soname symlinks. Runs during Dockerfile.python's
'Run backend-specific packaging' step (which already invokes
package.sh if present).
- backend/Dockerfile.python: install libnuma1 + libgomp1 in the
builder stage so package.sh has something to copy (the Ubuntu
base image otherwise only has libgomp in the gcc dep chain).
- test-extra.yml: drop the workaround that installed these libs on
the runner host — with the backend image self-contained, the
runner no longer needs them, and the test now exercises the
packaging path end-to-end the way a production host would.
* ci(vllm): disable tests-vllm-grpc job (heterogeneous runners)
Both ubuntu-latest and bigger-runner have inconsistent CPU baselines:
some instances support the AVX-512 VNNI/BF16 instructions the prebuilt
vllm 0.14.1+cpu wheel was compiled with, others SIGILL on import of
vllm.model_executor.models.registry. The libnuma packaging fix doesn't
help when the wheel itself can't be loaded.
FROM_SOURCE=true compiles vllm against the actual host CPU and works
everywhere, but takes 30-50 minutes per run — too slow for a smoke
test on every PR.
Comment out the job for now. The test itself is intact and passes
locally; run it via 'make test-extra-backend-vllm' on a host with the
required SIMD baseline. Re-enable when:
- we have a self-hosted runner label with guaranteed AVX-512 VNNI/BF16, or
- vllm publishes a CPU wheel with a wider baseline, or
- we set up a docker layer cache that makes FROM_SOURCE acceptable
The detect-changes vllm output, the test harness changes (tests/
e2e-backends + tools cap), the make target (test-extra-backend-vllm),
the package.sh and the Dockerfile/install.sh plumbing all stay in
place.
|
||
|
|
706cf5d43c |
feat(sam.cpp): add sam.cpp detection backend (#9288)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io> |
||
|
|
85be4ff03c |
feat(api): add ollama compatibility (#9284)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io> |
||
|
|
557d0f0f04 |
feat(api): Allow coding agents to interactively discover how to control and configure LocalAI (#9084)
Signed-off-by: Richard Palethorpe <io@richiejp.com> |
||
|
|
b7e3589875 |
fix(anthropic): show null index when not present, default to 0 (#9225)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io> |
||
|
|
59108fbe32 |
feat: add distributed mode (#9124)
* feat: add distributed mode (experimental)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* fix data races, mutexes, transactions
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* refactorings
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* fixups
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* fix events and tool stream in agent chat
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* use ginkgo
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* refactoring and consolidation
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* refactoring and consolidation
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* refactoring and consolidation
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* refactoring and consolidation
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* refactoring and consolidation
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* refactoring and consolidation
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* refactoring and consolidation
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* refactoring and consolidation
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* fix(cron): correctly compute time boundaries, avoiding re-triggering
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* enhancements, refactorings
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* do not flood with health checks
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* do not list obvious backends as text backends
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* tests fixups
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* refactoring and consolidation
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* Drop redundant healthcheck
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* enhancements, refactorings
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
---------
Signed-off-by: Ettore Di Giacinto <mudler@localai.io> |
||
|
|
00fcf6936c |
fix: implement encoding_format=base64 for embeddings endpoint (#9135)
The OpenAI Node.js SDK v4+ sends encoding_format=base64 by default.
LocalAI previously ignored this parameter and always returned a float
JSON array, causing a silent data corruption bug in any Node.js client
(AnythingLLM Desktop, LangChain.js, LlamaIndex.TS, …):
// What the client does when it expects base64 but receives a float array:
Buffer.from(floatArray, 'base64')
Node.js treats a non-string first argument as a byte array — each
float32 value is truncated to a single byte — and Float32Array then
reads those bytes as floats, yielding dims/4 values. Vector databases
(Qdrant, pgvector, …) then create collections with the wrong dimension,
causing all similarity searches to fail silently.
e.g. granite-embedding-107m (384 dims) → 96 stored in Qdrant
jina-embeddings-v3 (1024 dims) → 256 stored in Qdrant
Changes:
- core/schema/prediction.go: add EncodingFormat string field to
PredictionOptions so the request parameter is parsed and available
throughout the request pipeline
- core/schema/openai.go: add EmbeddingBase64 string field to Item;
add MarshalJSON so the "embedding" JSON key emits either []float32
or a base64 string depending on which field is populated — all other
Item consumers (image, video endpoints) are unaffected
- core/http/endpoints/openai/embeddings.go: add floatsToBase64()
which packs a float32 slice as little-endian bytes and base64-encodes
it; add embeddingItem() helper; both InputToken and InputStrings loops
now honour encoding_format=base64
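A minimal sketch of floatsToBase64 as described (the shipped helper may
differ in details):

    import (
        "encoding/base64"
        "encoding/binary"
        "math"
    )

    func floatsToBase64(fs []float32) string {
        buf := make([]byte, 4*len(fs))
        for i, f := range fs {
            // little-endian float32 bytes, as OpenAI clients expect
            binary.LittleEndian.PutUint32(buf[i*4:], math.Float32bits(f))
        }
        return base64.StdEncoding.EncodeToString(buf)
    }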
Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>
|
||
|
|
031a36c995 |
feat: inferencing default, automatic tool parsing fallback and wire min_p (#9092)
* feat: wire min_p
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* feat: inferencing defaults
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* chore(refactor): re-use iterative parser
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* chore: automatically generate inference defaults from unsloth
Instead of re-inventing the wheel and maintaining the inference
defaults here, prefer to consume unsloth's, and contribute there as
necessary.
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* chore: apply defaults also to models installed via gallery
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* chore: be consistent and apply the fallback to all endpoints
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
---------
Signed-off-by: Ettore Di Giacinto <mudler@localai.io> |
||
|
|
f7e8d9e791 |
feat(quantization): add quantization backend (#9096)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io> |
||
|
|
d9c1db2b87 |
feat: add (experimental) fine-tuning support with TRL (#9088)
* feat: add fine-tuning endpoint
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* feat(experimental): add fine-tuning endpoint and TRL support
This changeset defines new gRPC signatures for fine-tuning backends,
and adds the TRL backend as the initial fine-tuning engine. This
implementation also supports exporting to GGUF and automatically
importing the result into LocalAI after fine-tuning.
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* commit TRL backend, stop by killing process
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* move fine-tune to generic features
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* add evals, reorder menu
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* Fix tests
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
---------
Signed-off-by: Ettore Di Giacinto <mudler@localai.io> |
||
|
|
a6d0e29eba |
fix(openresponses): do not omit required field ORItemParam.Arguments (#9074)
See #9047 |
||
|
|
8a0edd0809 |
Always populate ORItemParam.Summary (#9049)
* fix(openresponses): do not omit required fields summary and id
* fix(openresponses): ensure ORItemParam.Summary is never null
Normalize Summary to an empty slice at serialization chokepoints
(sendSSEEvent, bufferEvent, buildORResponse) so it always serializes
as [] instead of null.
Closes #9047 |
||
|
|
8818452d85 |
feat(ui): MCP Apps, mcp streaming and client-side support (#8947)
* Revert "fix: Add timeout-based wait for model deletion completion (#8756)"
This reverts commit
|
||
|
|
a026277ab9 |
feat(mlx-distributed): add new MLX-distributed backend (#8801)
* feat(mlx-distributed): add new MLX-distributed backend
Add new MLX distributed backend with support for both TCP and RDMA
for model sharding. This implementation ties in the discovery
implementation already in place, and re-uses the same P2P mechanism
for TCP MLX-distributed inferencing. The auto-parallel implementation
is inspired by Exo's (whose authors have been added to the
acknowledgements for their great work!)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* expose a CLI to facilitate backend starting
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* feat: make manual rank0 configurable via model configs
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* Add missing features from mlx backend
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* Apply suggestion from @mudler
Signed-off-by: Ettore Di Giacinto <mudler@users.noreply.github.com>
---------
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
Signed-off-by: Ettore Di Giacinto <mudler@users.noreply.github.com> |
||
|
|
96efa4fce0 |
feat: add WebSocket mode support for the response api (#8676)
* feat: add WebSocket mode support for the response api
Signed-off-by: bittoby <218712309+bittoby@users.noreply.github.com>
* test: add e2e tests for WebSocket Responses API
Signed-off-by: bittoby <218712309+bittoby@users.noreply.github.com>
---------
Signed-off-by: bittoby <218712309+bittoby@users.noreply.github.com> |
||
|
|
983db7bedc |
feat(ui): add model size estimation (#8684)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io> |
||
|
|
3ac7301f31 |
Add sample_rate support to TTS API via post-processing resampling (#8650)
* Initial plan
* Add TTS sample_rate support via AudioResample post-processing
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
---------
Co-authored-by: copilot-swe-agent[bot] <198982749+Copilot@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com> |
||
|
|
ed0bfb8732 |
fix: rename json_verbose to verbose_json (#8627)
Signed-off-by: Lukas Schaefer <lukas@lschaefer.xyz> |
||
|
|
53276d28e7 |
feat(musicgen): add ace-step and UI interface (#8396)
* feat(musicgen): add ace-step and UI interface
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* Correctly handle model dir
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* Drop auto-download
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* Fixups
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* Add to models, fix up UI icons
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* fixups
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* Update docs
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* l4t13 is incompatible
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* avoid pinning version for cuda12
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* Drop l4t12
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
---------
Signed-off-by: Ettore Di Giacinto <mudler@localai.io> |
||
|
|
10a1e6c74d |
feat(whisperx): add whisperx backend for transcription with speaker diarization (#8299)
* feat(proto): add speaker field to TranscriptSegment for diarization
Add speaker field to the gRPC TranscriptSegment message and map it
through the Go schema, enabling backends to return speaker labels.
Signed-off-by: eureka928 <meobius123@gmail.com>
* feat(whisperx): add whisperx backend for transcription with diarization
Add Python gRPC backend using WhisperX for speech-to-text with
word-level timestamps, forced alignment, and speaker diarization
via pyannote-audio when HF_TOKEN is provided.
Signed-off-by: eureka928 <meobius123@gmail.com>
* feat(whisperx): register whisperx backend in Makefile
Signed-off-by: eureka928 <meobius123@gmail.com>
* feat(whisperx): add whisperx meta and image entries to index.yaml
Signed-off-by: eureka928 <meobius123@gmail.com>
* ci(whisperx): add build matrix entries for CPU, CUDA 12/13, and ROCm
Signed-off-by: eureka928 <meobius123@gmail.com>
* fix(whisperx): unpin torch versions and use CPU index for cpu requirements
Address review feedback:
- Use --extra-index-url for CPU torch wheels to reduce size
- Remove torch version pins, let uv resolve compatible versions
Signed-off-by: eureka928 <meobius123@gmail.com>
* fix(whisperx): pin torch ROCm variant to fix CI build failure
Signed-off-by: eureka928 <meobius123@gmail.com>
* fix(whisperx): pin torch CPU variant to fix uv resolution failure
Pin torch==2.8.0+cpu so uv resolves the CPU wheel from the extra
index instead of picking torch==2.8.0+cu128 from PyPI, which pulls
unresolvable CUDA dependencies.
Signed-off-by: eureka928 <meobius123@gmail.com>
* fix(whisperx): use unsafe-best-match index strategy to fix uv resolution failure
uv's default first-match strategy finds torch on PyPI before checking
the extra index, causing it to pick torch==2.8.0+cu128 instead of the
CPU variant. This makes whisperx's transitive torch dependency
unresolvable. Using unsafe-best-match lets uv consider all indexes.
Signed-off-by: eureka928 <meobius123@gmail.com>
* fix(whisperx): drop +cpu local version suffix to fix uv resolution failure
PEP 440 ==2.8.0 matches 2.8.0+cpu from the extra index, avoiding the
issue where uv cannot locate an explicit +cpu local version specifier.
This aligns with the pattern used by all other CPU backends.
Signed-off-by: eureka928 <meobius123@gmail.com>
* fix(backends): drop +rocm local version suffixes from hipblas requirements to fix uv resolution
uv cannot resolve PEP 440 local version specifiers (e.g. +rocm6.4,
+rocm6.3) in pinned requirements. The --extra-index-url already points
to the correct ROCm wheel index and --index-strategy unsafe-best-match
(set in libbackend.sh) ensures the ROCm variant is preferred.
Applies the same fix as
|
||
|
|
b6459ddd57 |
feat(api): Add transcribe response format request parameter & adjust STT backends (#8318)
* WIP response format implementation for audio transcriptions
(cherry picked from commit e271dd764bbc13846accf3beb8b6522153aa276f)
Signed-off-by: Andres Smith <andressmithdev@pm.me>
* Rework transcript response_format and add more formats
(cherry picked from commit 6a93a8f63e2ee5726bca2980b0c9cf4ef8b7aeb8)
Signed-off-by: Andres Smith <andressmithdev@pm.me>
* Add test and replace go-openai package with official openai go client
(cherry picked from commit f25d1a04e46526429c89db4c739e1e65942ca893)
Signed-off-by: Andres Smith <andressmithdev@pm.me>
* Fix faster-whisper backend and refactor transcription formatting to also work on CLI
Signed-off-by: Andres Smith <andressmithdev@pm.me>
(cherry picked from commit 69a93977d5e113eb7172bd85a0f918592d3d2168)
Signed-off-by: Andres Smith <andressmithdev@pm.me>
---------
Signed-off-by: Andres Smith <andressmithdev@pm.me>
Co-authored-by: nanoandrew4 <nanoandrew4@gmail.com>
Co-authored-by: Ettore Di Giacinto <mudler@users.noreply.github.com> |
||
|
|
4077aaf978 |
chore: re-enable e2e tests, fixups anthropic API tools support (#8296)
* chore(tests): add mock backend e2e tests
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* Fixup anthropic tests
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* prepare e2e tests
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* Drop repetitive tests
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* Drop specific CI workflow
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* fixup anthropic issues, move all e2e tests to use mocked backend
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
---------
Signed-off-by: Ettore Di Giacinto <mudler@localai.io> |
||
|
|
68dd9765a0 |
feat(tts): add support for streaming mode (#8291)
* feat(tts): add support for streaming mode
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* Send first audio, make sure it's 16
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
---------
Signed-off-by: Ettore Di Giacinto <mudler@localai.io> |
||
|
|
c491c6ca90 |
feat(openresponses): Support reasoning blocks (#8133)
* feat(openresponses): support reasoning blocks
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* allow to disable reasoning, refactor common logic
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* Add option to only strip reasoning
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* Add configurations for custom reasoning tokens
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
---------
Signed-off-by: Ettore Di Giacinto <mudler@localai.io> |
||
|
|
3387bfaee0 |
feat(api): add support for open responses specification (#8063)
* feat: openresponses
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* Add ttl settings, fix tests
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* fix: register cors middleware by default
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* satisfy schema
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* Logitbias and logprobs
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* Add grammar
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* SSE compliance
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* tool JSON conversion
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* support background mode
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* swagger
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* drop code. This is handled in the handler
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* Small refactorings
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* background mode for MCP
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
---------
Signed-off-by: Ettore Di Giacinto <mudler@localai.io> |
||
|
|
c88074a19e |
feat(api): support 'reasoning' api field (#7959)
This PR adds support for the 'reasoning' API field of the OpenAI spec.
LocalAI will now automatically extract thinking tags in both SSE and
non-SSE mode. The Chat UI is adapted as well: it now uses the reasoning
field to extract the thinking process and display it in the chat.
This fixes https://github.com/mudler/LocalAI/issues/7944
Signed-off-by: Ettore Di Giacinto <mudler@localai.io> |
||
|
|
5ca8f0aea0 |
feat: add tool/function calling support to Anthropic Messages API (#7956)
* Initial plan
* Add tool/function calling schema support to Anthropic Messages API
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
* Add E2E tests for Anthropic tool calling
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
* Make tool calling tests require model to use tools
- First test now expects hasToolUse to be true with clear error message
- Third test now expects toolUseID to be non-empty (removed conditional)
- Both tests will now fail if model doesn't call the expected tools
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
* Add E2E test for tool calling with streaming responses
- Tests that streaming events are properly emitted (content_block_start/delta/stop)
- Verifies tool_use blocks are accumulated correctly in streaming mode
- Ensures model calls tools and stop_reason is set to tool_use
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
---------
Co-authored-by: copilot-swe-agent[bot] <198982749+Copilot@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com> |
||
|
|
4cbf9abfef |
feat: Add Anthropic Messages API support (#7948)
* Initial plan
* Add Anthropic Messages API support
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
* Fix code review comments: add error handling for JSON operations
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
* Fix test suite to use existing schema test runner
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
* Add Anthropic e2e tests using anthropic-sdk-go for streaming and non-streaming
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
* Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
---------
Co-authored-by: copilot-swe-agent[bot] <198982749+Copilot@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com> |
||
|
|
1642b39cb8 |
[gallery] add JSON schema for gallery model specification (#7890)
Add JSON Schema for gallery model specification Signed-off-by: devmanishofficial <devmanishofficial@gmail.com> |
||
|
|
797f27f09f |
feat(UI): image generation improvements (#7804)
* chore: drop mode from image generation (unused)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* feat(UI): improve image generation front-end
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* feat(UI): only ref images. files is to be deprecated
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* do not override default steps
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
---------
Signed-off-by: Ettore Di Giacinto <mudler@localai.io> |
||
|
|
0d0ef0121c |
fix: Usage for image generation is incorrect (and causes error in LiteLLM) (#7786)
* fix: Add usage fields to image generation response for OpenAI API compatibility
Fixes #7354
Added input_tokens, output_tokens, and input_tokens_details fields to
the image generation API response to comply with OpenAI's image
generation API specification. This resolves validation errors in
LiteLLM and the OpenAI SDK.
Changes:
- Added InputTokensDetails struct with text_tokens and image_tokens fields
- Extended OpenAIUsage struct with input_tokens, output_tokens, and input_tokens_details
- Updated ImageEndpoint to populate usage object with required fields
- Updated InpaintingEndpoint to populate usage object with required fields
- All fields initialized to 0 as per current behavior
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
Signed-off-by: majiayu000 <1835304752@qq.com>
* fix: Correct usage field types for image generation API compatibility
Changed InputTokens and OutputTokens from pointer types (*int) to
regular int types to match OpenAI API specification. This fixes
validation errors with LiteLLM and OpenAI SDK when parsing image
generation responses.
Fixes #7354
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
Signed-off-by: majiayu000 <1835304752@qq.com>
---------
Signed-off-by: majiayu000 <1835304752@qq.com>
Co-authored-by: Claude Sonnet 4.5 <noreply@anthropic.com> |
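A sketch of the usage schema this describes (JSON keys follow OpenAI's image
API; the exact Go declarations in core/schema may differ):

    type InputTokensDetails struct {
        TextTokens  int `json:"text_tokens"`
        ImageTokens int `json:"image_tokens"`
    }

    type OpenAIUsage struct {
        // Plain ints, not *int, so they serialize as 0 rather than null
        // (the follow-up fix in this PR).
        InputTokens        int                `json:"input_tokens"`
        OutputTokens       int                `json:"output_tokens"`
        InputTokensDetails InputTokensDetails `json:"input_tokens_details"`
    }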
||
|
|
c37785b78c |
chore(refactor): move logging to common package based on slog (#7668)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io> |
||
|
|
50f9c9a058 |
feat(watchdog): add Memory resource reclaimer (#7583)
* feat(watchdog): add GPU reclaimer
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* Handle vram calculation for unified memory devices
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* Support RAM eviction, set watchdog interval from runtime settings
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
---------
Signed-off-by: Ettore Di Giacinto <mudler@localai.io> |
||
|
|
a3423f33e1 |
feat(agent-jobs): add multimedia support (#7398)
* feat(agent-jobs): add multimedia support
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* Refactoring
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
---------
Signed-off-by: Ettore Di Giacinto <mudler@localai.io> |
||
|
|
53e5b2d6be |
feat: agent jobs panel (#7390)
* feat(agent): agent jobs
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* Multiple webhooks, simplify
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* Do not use cron with seconds
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* Create separate pages for details
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* Detect if no models have MCP configuration, show wizard
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* Make services test to run
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
---------
Signed-off-by: Ettore Di Giacinto <mudler@localai.io> |
||
|
|
c313b2c671 |
fix(reranker): tests and top_n check fix #7212 (#7284)
reranker tests and top_n check fix #7212 Signed-off-by: Mikhail Khludnev <mkhl@apache.org> |
||
|
|
d7f9f3ac93 |
feat: add support to logitbias and logprobs (#7283)
* feat: add support to logprobs in results
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* feat: add support to logitbias
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
---------
Signed-off-by: Ettore Di Giacinto <mudler@localai.io> |
||
|
|
3728552e94 |
feat: import models via URI (#7245)
* feat: initial hook to install elements directly
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* WIP: ui changes
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* Move HF api client to pkg
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* Add simple importer for gguf files
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* Add opcache
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* wire importers to CLI
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* Add omitempty to config fields
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* Fix tests
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* Add MLX importer
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* Small refactors to start to use HF for discovery
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* Add tests
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* Common preferences
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* Add support to bare HF repos
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* feat(importer/llama.cpp): add support for mmproj files
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* add mmproj quants to common preferences
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* Fix vlm usage in tokenizer mode with llama.cpp
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
---------
Signed-off-by: Ettore Di Giacinto <mudler@localai.io> |
||
|
|
34bc1bda1e |
fix(api): SSE streaming format to comply with specification (#7182)
* Initial plan
* Fix SSE streaming format to comply with specification
- Replace json.Encoder with json.Marshal for explicit formatting
- Use explicit \n\n for all SSE messages (instead of relying on implicit newlines)
- Change %v to %s format specifier for proper string formatting
- Fix error message streaming to include proper SSE format
- Ensure consistency between chat.go and completion.go endpoints
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
* Add proper error handling for JSON marshal failures in streaming
- Handle json.Marshal errors explicitly in error response paths
- Add fallback simple error message if marshal fails
- Prevents sending 'data: <nil>' on marshal failures
- Addresses code review feedback
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
* Fix SSE streaming format to comply with specification
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
* Fix finish_reason field to use pointer for proper null handling
- Change FinishReason from string to *string in Choice schema
- Streaming chunks now omit finish_reason (null) instead of empty string
- Final chunks properly set finish_reason to "stop", "tool_calls", etc.
- Remove empty content from initial streaming chunks (only send role)
- Final streaming chunk sends empty delta with finish_reason
- Addresses OpenAI API compliance issues causing client failures
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
* Improve code consistency for string pointer creation
- Use consistent pattern: declare variable then take address
- Remove inline anonymous function for better readability
- Addresses code review feedback
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
* Move common finish reasons to constants
- Create constants.go with FinishReasonStop, FinishReasonToolCalls, FinishReasonFunctionCall
- Replace all string literals with constants in chat.go, completion.go, realtime.go
- Improves code maintainability and prevents typos
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
* Make it build
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* Fix finish_reason to always be present with null or string value
- Remove omitempty from FinishReason field in Choice struct
- Explicitly set FinishReason to nil for all streaming chunks
- Ensures finish_reason appears as null in JSON for streaming chunks
- Final chunks still properly set finish_reason to "stop", "tool_calls", etc.
- Complies with OpenAI API specification example
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
---------
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
Co-authored-by: copilot-swe-agent[bot] <198982749+Copilot@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
Co-authored-by: Ettore Di Giacinto <mudler@users.noreply.github.com>
Co-authored-by: Ettore Di Giacinto <mudler@localai.io> |
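A minimal sketch of the explicit SSE framing described above (writeSSE is a
hypothetical name; the real code lives inline in chat.go/completion.go):

    import (
        "encoding/json"
        "fmt"
        "io"
    )

    func writeSSE(w io.Writer, chunk any) {
        payload, err := json.Marshal(chunk)
        if err != nil {
            // fallback simple error message instead of emitting "data: <nil>"
            fmt.Fprint(w, "data: {\"error\":\"failed to marshal response\"}\n\n")
            return
        }
        fmt.Fprintf(w, "data: %s\n\n", payload) // explicit \n\n, %s not %v
    }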
||
|
|
02cc8cbcaa |
feat(llama.cpp): consolidate options and respect tokenizer template when enabled (#7120)
* feat(llama.cpp): expose env vars as options for consistency
This allows configuring everything in the model's YAML file rather
than having global configurations.
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* feat(llama.cpp): respect usetokenizertemplate and use llama.cpp templating system to process messages
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* WIP
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* Detect whether a template exists if use tokenizer template is enabled
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* Better recognition of chat
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* Fixes to support tool calls while using templates from tokenizer
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* Fixups
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* Drop template guessing, fix passing tools to tokenizer
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* Extract grammar and other options from chat template, add schema struct
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* WIP
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* WIP
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* Automatically set use_jinja
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* Cleanups, identify by default gguf models for chat
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* Update docs
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
---------
Signed-off-by: Ettore Di Giacinto <mudler@localai.io> |
||
|
|
4408ed4f88 |
feat(api): OpenAI video create endpoint integration (#6777)
* feat: add OpenAI-compatible /v1/videos endpoint
- Add VideoEndpoint handler with OpenAI request mapping
- Add MapOpenAIToVideo function to convert OpenAI format to LocalAI VideoRequest
- Add Swagger documentation for API endpoint
- Add Ginkgo unit tests for mapping logic
- Add Ginkgo integration test with embedded fake backend
Signed-off-by: Greg <marianigregory@pm.me>
* Apply suggestion from @mudler
Signed-off-by: Ettore Di Giacinto <mudler@users.noreply.github.com>
* Apply suggestion from @mudler
Signed-off-by: Ettore Di Giacinto <mudler@users.noreply.github.com>
* Apply suggestion from @mudler
Signed-off-by: Ettore Di Giacinto <mudler@users.noreply.github.com>
* Apply suggestion from @mudler
Signed-off-by: Ettore Di Giacinto <mudler@users.noreply.github.com>
* Apply suggestion from @mudler
Signed-off-by: Ettore Di Giacinto <mudler@users.noreply.github.com>
* Apply suggestion from @mudler
Signed-off-by: Ettore Di Giacinto <mudler@users.noreply.github.com>
---------
Signed-off-by: Greg <marianigregory@pm.me>
Signed-off-by: Ettore Di Giacinto <mudler@users.noreply.github.com>
Co-authored-by: Ettore Di Giacinto <mudler@users.noreply.github.com> |
||
|
|
9621edb4c5 |
feat(diffusers): add support for wan2.2 (#6153)
* feat(diffusers): add support for wan2.2
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* chore(ci): use ttl.sh for PRs
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* Add ftfy deps
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* Revert "chore(ci): use ttl.sh for PRs"
This reverts commit
|
||
|
|
9c7f92c81f |
feat(p2p): automatically sync installed models between instances (#6108)
* feat(p2p): sync models between federated nodes
This change makes sure that all the models are synced between
federated nodes. Note: this works exclusively with models belonging
to a gallery. It does not sync files between the nodes, but rather
syncs the node setup; e.g. all the nodes need to have the same
galleries configured and install models without any local editing.
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* Make nodes stable
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* Fixups on syncing
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* ui: improve p2p view
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
---------
Signed-off-by: Ettore Di Giacinto <mudler@localai.io> |
||
|
|
b9a25b16e6 |
feat: add reasoning effort and metadata to template (#5981)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io> |
||
|
|
3d22bfc27c |
feat(stablediffusion-ggml): add support to ref images (flux Kontext) (#5935)
* feat(stablediffusion-ggml): add support to ref images Signed-off-by: Ettore Di Giacinto <mudler@localai.io> * Add it to the model gallery Signed-off-by: Ettore Di Giacinto <mudler@localai.io> --------- Signed-off-by: Ettore Di Giacinto <mudler@localai.io> |