Commit Graph

78 Commits

Ettore Di Giacinto
f0c92610a1 feat(importer): expand importer flow to almost all backends (#9466)
* docs(agents): require importer integration when adding backends

Document the importer registry workflow so contributors know that adding
a new backend also requires updating the /import-model dropdown source:
either a new importer in core/gallery/importers/, extending an existing
one for drop-in replacements, or the pref-only slice for backends with
no reliable auto-detect signal. Each path must be covered by a table-driven test.

Assisted-by: Claude:claude-opus-4-7[1m] [Agent]

* test(gallery/importers): add failing tests for Batch 0 primitives

Introduce failing tests that drive Batch 0 of the importer expansion:

- pkg/huggingface-api: assert GetModelDetails populates PipelineTag and
  LibraryName from /api/models/{repo}, and that a failing metadata
  endpoint still returns file details (best-effort fetch).
- core/gallery/importers/helpers_test.go: new table-driven coverage for
  HasFile, HasExtension, HasONNX, HasONNXConfigPair, HasGGMLFile.
- core/gallery/importers/importers_test.go: assert ErrAmbiguousImport
  sentinel exists and round-trips through errors.Is.
- core/gallery/importers/local_test.go: extend with detection cases for
  ggml-*.bin (whisper), silero_vad.onnx (silero-vad), and the piper
  .onnx + .onnx.json pair.
- core/http/endpoints/localai/import_model_test.go: assert
  ImportModelURIEndpoint returns HTTP 400 with a structured
  {error, detail, hint} body when ErrAmbiguousImport surfaces.

All tests fail in the expected places (missing fields, missing
helpers, missing sentinel, endpoint still wraps as 500).

Assisted-by: Claude:claude-opus-4-7[1m] [Agent]

* feat(gallery/importers): Batch 0 foundation — helpers, sentinel, local detection

Implements the Batch 0 primitives that subsequent importer batches build on:

- pkg/huggingface-api: ModelDetails gains PipelineTag and LibraryName.
  GetModelDetails now layers a best-effort GET /api/models/{repo} fetch
  on top of ListFiles — a metadata outage leaves the fields empty but
  still returns full file details. Uses a dedicated response struct
  because the single-model endpoint uses snake_case keys while the list
  endpoint historically returned camelCase.

- core/gallery/importers/helpers.go: generic HasFile, HasExtension,
  HasONNX, HasONNXConfigPair, HasGGMLFile helpers working on
  []hfapi.ModelFile so per-backend importers can detect artefact
  patterns without duplicating string wrangling.

- core/gallery/importers/importers.go: adds the ErrAmbiguousImport
  sentinel. DiscoverModelConfig now returns it (wrapped with
  fmt.Errorf("%w: ...")) when no importer matched AND the HF
  pipeline_tag falls in a whitelist of narrow modalities (ASR, TTS,
  sentence-similarity, text-classification, object-detection). The
  whitelist is intentionally narrow — unknown tags keep the previous
  "no importer matched" behaviour to avoid blocking rare repos.

- core/gallery/importers/local.go: three new local-path detections,
  inserted before the existing merged-transformers branch:
    * ggml-*.bin → whisper
    * silero*.onnx → silero-vad
    * *.onnx + *.onnx.json pair → piper

- core/http/endpoints/localai/import_model.go: ImportModelURIEndpoint
  surfaces ErrAmbiguousImport as HTTP 400 with
  {error, detail, hint} JSON, preserving existing behaviour for
  unrelated errors.
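
As a concrete illustration of the helper contract — a minimal sketch, not
the actual implementation; ModelFile stands in for hfapi.ModelFile, whose
real field names are an assumption here:

  package importers

  import "strings"

  // ModelFile is a stand-in for hfapi.ModelFile; the Path field name is
  // an assumption made for this sketch.
  type ModelFile struct{ Path string }

  // HasExtension reports whether any listed file carries the extension.
  func HasExtension(files []ModelFile, ext string) bool {
    for _, f := range files {
      if strings.HasSuffix(strings.ToLower(f.Path), strings.ToLower(ext)) {
        return true
      }
    }
    return false
  }

  // HasONNXConfigPair reports whether some <name>.onnx ships with a
  // matching <name>.onnx.json, the canonical piper voice packaging.
  func HasONNXConfigPair(files []ModelFile) bool {
    seen := make(map[string]bool, len(files))
    for _, f := range files {
      seen[strings.ToLower(f.Path)] = true
    }
    for p := range seen {
      if strings.HasSuffix(p, ".onnx") && seen[p+".json"] {
        return true
      }
    }
    return false
  }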

Green tests:
  go test ./core/gallery/importers/... ./pkg/huggingface-api/... \
          ./core/http/endpoints/localai/...

Assisted-by: Claude:claude-opus-4-7[1m] [Agent]

* test(importers): red tests for KnownBackend endpoint and importer metadata

Add failing tests that drive Batch UI-Dropdown:

- importers_test.go: assert importers expose Name/Modality/AutoDetects
  and that LlamaCPPImporter advertises drop-in replacements via a new
  AdditionalBackendsProvider interface. A Registry() accessor is also
  expected.

- backend_test.go (new): assert GET /backends/known returns
  []schema.KnownBackend, covers every importer, exposes drop-in
  llama-cpp replacements, includes curated pref-only backends, has no
  duplicates, and is sorted by Modality+Name.

These tests fail at compile time against master; they are intentionally
red so the follow-up green commit is reviewable.

Assisted-by: Claude:claude-opus-4-7[1m] [Agent]

* feat(gallery): add /backends/known endpoint for importer-aware backend list

Extend the Importer interface with Name/Modality/AutoDetects so the
import system can self-describe its registry, and introduce the
AdditionalBackendsProvider interface so importers can advertise drop-in
replacements (llama-cpp advertises ik-llama-cpp and turboquant).

Expose the new GET /backends/known endpoint that merges:

- the importer registry (auto-detect supported),
- drop-in replacements hosted by importers (preference-only),
- a curated knownPrefOnlyBackends slice for backends with no dedicated
  importer (sglang, tinygrad, trl, mlx-vlm, whisperx, kokoros, Qwen TTS
  variants, sam3-cpp) — kept at the top of backend.go so contributors
  adding a new pref-only backend have one obvious place to edit,
- backends installed on disk but unknown to the importer (marked
  AutoDetect=false, empty Modality).

The endpoint deliberately does NOT filter by gallery membership or host
capability (unlike /backends/available): LocalAI may auto-install a
backend that is not yet present, so the import form dropdown must show
everything the importer knows about.

Response is deduplicated (importer wins over pref-only) and sorted by
Modality+Name for deterministic output.

Registered in core/http/routes/localai.go next to /backends/available
under the same admin middleware.
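
Sketched from the commit text (exact signatures are assumptions, not the
real API), the self-describing surface could look like:

  package importers

  // Importer metadata lets the registry describe itself to /backends/known.
  type Importer interface {
    Name() string      // backend name shown in the dropdown
    Modality() string  // e.g. "tts", "asr"; groups the dropdown
    AutoDetects() bool // false means preference-only
  }

  // AdditionalBackendsProvider lets an importer advertise drop-in
  // replacements (llama-cpp: ik-llama-cpp and turboquant).
  type AdditionalBackendsProvider interface {
    AdditionalBackends() map[string]string // name -> description (assumed shape)
  }

The handler then merges Registry() output, AdditionalBackends() entries,
the curated pref-only slice, and the on-disk scan, with importer entries
winning on duplicate names.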

Assisted-by: Claude:claude-opus-4-7[1m] [Agent]

* feat(ui): source import form backend dropdown from /backends/known

Replace the hard-coded BACKENDS constant in ImportModel.jsx with a
live fetch of /backends/known on mount. Users now see every backend
the importer layer knows about (including preference-only entries)
grouped by modality, not a stale subset.

Changes:

- config.js: add backendsKnown endpoint constant next to
  backendsAvailable.
- api.js: add backendsApi.listKnown() wrapper.
- ImportModel.jsx: remove BACKENDS constant, fetch the list via
  useEffect, and derive grouped options via buildBackendOptions.
  Preference-only entries render with a " (preference-only)" suffix.
  Loading state disables the dropdown with a "Loading backends…"
  placeholder; on fetch failure the form falls back to auto-detect
  only and surfaces a non-blocking toast.
- SearchableSelect.jsx: accept items flagged isHeader=true and render
  them as non-selectable section dividers. Keyboard navigation skips
  headers and search queries hide them so filtered output stays
  relevant.

Vitest is not set up in this project (devDependencies ship Playwright
only). Per the brief's guard-rail, no frontend test framework is
introduced; coverage is provided by the Go handler tests that assert
the /backends/known contract consumed by the React form.

Assisted-by: Claude:claude-opus-4-7[1m] [Agent]

* test(gallery/importers): add failing tests for whisper importer

Asserts detection on ggerganov/whisper.cpp (via ggml-*.bin filename),
the preferences.backend=whisper override path for arbitrary URIs,
and the Importer interface metadata (name/modality/autodetect).

Assisted-by: Claude:claude-opus-4-7[1m] [Agent]

* feat(gallery/importers): add whisper importer

Recognises whisper.cpp GGML models by the "ggml-*.bin" filename
convention (direct URL or HF repo member) and by the explicit
preferences.backend="whisper" override. Emits backend: whisper with
the transcript use-case. Registered before llama-cpp so the narrow
filename signal wins before any generic GGUF match is attempted.
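
The filename rule is small enough to sketch — a simplified illustration
operating on bare filenames; the real matcher also handles the
preferences.backend override and the actual file-listing types:

  package importers

  import (
    "path"
    "strings"
  )

  // isWhisperGGML applies the ggml-*.bin naming convention to a direct
  // URL or to any file inside an HF repo listing.
  func isWhisperGGML(uri string, fileNames []string) bool {
    match := func(name string) bool {
      base := path.Base(name)
      return strings.HasPrefix(base, "ggml-") && strings.HasSuffix(base, ".bin")
    }
    if match(uri) {
      return true // direct URL to a ggml-*.bin file
    }
    for _, f := range fileNames {
      if match(f) {
        return true // repo member matches
      }
    }
    return false
  }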

Assisted-by: Claude:claude-opus-4-7[1m] [Agent]

* test(gallery/importers): add failing tests for moonshine importer

Asserts detection on UsefulSensors/moonshine-tiny via owner + ONNX
files, the preferences.backend=moonshine override for arbitrary URIs,
and the Importer interface metadata (name/modality/autodetect).

Assisted-by: Claude:claude-opus-4-7[1m] [Agent]

* feat(gallery/importers): add moonshine importer

Matches UsefulSensors-owned HF repos whose artefacts or metadata
identify them as ASR: on-disk .onnx files (the canonical Moonshine
packaging) OR pipeline_tag=automatic-speech-recognition (covers
transformers/safetensors-only sibling repos). preferences.backend=
moonshine overrides detection. Test uses the live moonshine-tiny
repo because the canonical UsefulSensors/moonshine repo currently
hits a recursive-subfolder bug in pkg/huggingface-api ListFiles.

Registered after WhisperImporter but before LlamaCPPImporter and
TransformersImporter so the narrower owner+ASR signal wins before
the generic tokenizer.json check routes the repo to transformers.

Assisted-by: Claude:claude-opus-4-7[1m] [Agent]

* test(gallery/importers): add failing tests for nemo importer

Asserts detection on nvidia/parakeet-tdt-0.6b-v3 via owner + .nemo
file, the preferences.backend=nemo override for arbitrary URIs, and
the Importer interface metadata (name/modality/autodetect).

Assisted-by: Claude:claude-opus-4-7[1m] [Agent]

* feat(gallery/importers): add nemo importer

Matches nvidia-owned HF repos that ship a .nemo checkpoint archive,
the canonical NeMo ASR packaging. preferences.backend=nemo forces
detection. Registered between moonshine and llama-cpp so the narrow
owner + extension signal wins before any downstream generic matcher.

Assisted-by: Claude:claude-opus-4-7[1m] [Agent]

* test(gallery/importers): add failing tests for faster-whisper importer

Asserts detection on Systran/faster-whisper-large-v3 (owner +
model.bin + config.json + ASR pipeline), the preferences.backend=
faster-whisper override for arbitrary URIs, and the Importer
interface metadata.

Assisted-by: Claude:claude-opus-4-7[1m] [Agent]

* feat(gallery/importers): add faster-whisper importer

Recognises CTranslate2-packaged whisper checkpoints distributed for
the faster-whisper runtime: model.bin + config.json + ASR
pipeline_tag, narrowed to Systran-owned repos or repo names
containing "faster-whisper" to avoid falsely claiming vanilla
OpenAI whisper HF repos. preferences.backend=faster-whisper
overrides detection. Registered before llama-cpp and transformers
so the narrow signal wins before tokenizer.json routes the repo to
the generic transformers importer.

Assisted-by: Claude:claude-opus-4-7[1m] [Agent]

* test(gallery/importers): add failing tests for qwen-asr importer

Asserts detection on Qwen/Qwen3-ASR-1.7B via owner + ASR substring
in the repo name, the preferences.backend=qwen-asr override for
arbitrary URIs, and the Importer interface metadata.

Assisted-by: Claude:claude-opus-4-7[1m] [Agent]

* feat(gallery/importers): add qwen-asr importer

Matches Qwen-owned HF repos whose name contains "ASR"
(case-insensitive), routing them to the qwen-asr backend rather
than the generic transformers/vllm path. The substring check scans
the repo portion only so the owner field cannot leak a false match.
preferences.backend=qwen-asr forces detection. Registered before
llama-cpp and transformers so the narrow owner+name signal wins.

Assisted-by: Claude:claude-opus-4-7[1m] [Agent]

* test(gallery/importers): ASR ambiguity surfaces ErrAmbiguousImport

Locks in the behaviour added in Batch 0: an HF repo whose pipeline_tag
marks it as automatic-speech-recognition but whose artefacts match no
ASR importer (and no generic importer) must fail with
ErrAmbiguousImport so callers know to pass preferences.backend rather
than silently guess. pyannote/voice-activity-detection is the fixture
— its file list is only config.yaml + README, leaving every importer's
artefact check negative.

Assisted-by: Claude:claude-opus-4-7[1m] [Agent]

* test(gallery/importers): add failing tests for piper importer

Assisted-by: Claude:claude-opus-4-7[1m] [Agent]

* feat(gallery/importers): add piper importer

Detects piper TTS voices by the canonical <voice>.onnx + <voice>.onnx.json
pair packaging (via HasONNXConfigPair). Narrow enough to skip generic
ONNX repos used by other backends (Moonshine ASR, sentence-transformers).

Assisted-by: Claude:claude-opus-4-7[1m] [Agent]

* test(gallery/importers): add failing tests for bark importer

Assisted-by: Claude:claude-opus-4-7[1m] [Agent]

* feat(gallery/importers): add bark importer

Detects Suno's Bark TTS checkpoints by HF owner "suno" + repo name
prefix "bark". Adds HFOwnerRepoFromURI() helper so importers can fall
back to URI parsing when pkg/huggingface-api's recursive tree listing
errors on repos with nested subdirectories (suno/bark ships a
speaker_embeddings/v2 subtree that trips a pre-existing path-doubling
bug in the listFilesInPath recursion).

Assisted-by: Claude:claude-opus-4-7[1m] [Agent]

* test(gallery/importers): add failing tests for fish-speech importer

Assisted-by: Claude:claude-opus-4-7[1m] [Agent]

* feat(gallery/importers): add fish-speech importer

Detects Fish Audio TTS releases by HF owner "fishaudio" with a URI-based
fallback for repos whose tree recursion trips the pre-existing hfapi
path-doubling bug.

Assisted-by: Claude:claude-opus-4-7[1m] [Agent]

* test(gallery/importers): add failing tests for outetts importer

Assisted-by: Claude:claude-opus-4-7[1m] [Agent]

* feat(gallery/importers): add outetts importer

Detects OuteAI's OuteTTS releases by HF owner "OuteAI" or a case-
insensitive "OuteTTS" substring in the repo name, with a URI-based
fallback for recursion-bugged repos.

Assisted-by: Claude:claude-opus-4-7[1m] [Agent]

* test(gallery/importers): add failing tests for voxcpm importer

Assisted-by: Claude:claude-opus-4-7[1m] [Agent]

* feat(gallery/importers): add voxcpm importer

Detects OpenBMB's VoxCPM TTS family by repo-name substring (community
mirrors re-host the weights under many owners — mlx-community,
bluryar, callgg, etc).

Assisted-by: Claude:claude-opus-4-7[1m] [Agent]

* test(gallery/importers): add failing tests for kokoro importer

Assisted-by: Claude:claude-opus-4-7[1m] [Agent]

* feat(gallery/importers): add kokoro importer

Detects hexgrad's Kokoro TTS by the "Kokoro" repo-name substring paired
with a PyTorch .pth/.pt checkpoint — the pairing excludes ONNX-only
mirrors (handled by the pref-only `kokoros` Rust runtime) and GGUF
mirrors (handled by llama-cpp).

Assisted-by: Claude:claude-opus-4-7[1m] [Agent]

* test(gallery/importers): add failing tests for kitten-tts importer

Assisted-by: Claude:claude-opus-4-7[1m] [Agent]

* feat(gallery/importers): add kitten-tts importer

Detects KittenML's kitten-tts releases by owner or "kitten-tts" repo-name
substring, with URI-parsing fallback.

Assisted-by: Claude:claude-opus-4-7[1m] [Agent]

* test(gallery/importers): add failing tests for neutts importer

Assisted-by: Claude:claude-opus-4-7[1m] [Agent]

* feat(gallery/importers): add neutts importer

Detects Neuphonic's NeuTTS releases by owner "neuphonic" or "neutts"
repo-name substring, with URI-parsing fallback.

Assisted-by: Claude:claude-opus-4-7[1m] [Agent]

* test(gallery/importers): add failing tests for chatterbox importer

Assisted-by: Claude:claude-opus-4-7[1m] [Agent]

* feat(gallery/importers): add chatterbox importer

Detects Resemble AI's Chatterbox TTS by owner "ResembleAI" or
"chatterbox" repo-name substring, with URI-parsing fallback.

Assisted-by: Claude:claude-opus-4-7[1m] [Agent]

* test(gallery/importers): add failing tests for vibevoice importer

Assisted-by: Claude:claude-opus-4-7[1m] [Agent]

* feat(gallery/importers): add vibevoice importer

Detects Microsoft's VibeVoice TTS by "vibevoice" repo-name substring
(case-insensitive) so community mirrors still route here.

Assisted-by: Claude:claude-opus-4-7[1m] [Agent]

* test(gallery/importers): add failing tests for coqui importer

Assisted-by: Claude:claude-opus-4-7[1m] [Agent]

* feat(gallery/importers): add coqui importer

Detects Coqui AI's TTS releases (XTTS-v2, YourTTS, …) by the
authoritative `coqui` HF owner, with URI-parsing fallback.

Assisted-by: Claude:claude-opus-4-7[1m] [Agent]

* test(gallery/importers): TTS ambiguity surfaces ErrAmbiguousImport

Adds a Ginkgo spec that imports nari-labs/Dia-1.6B — a real HF repo
carrying pipeline_tag="text-to-speech" whose artefacts (*.pth, one
safetensors shard, preprocessor_config.json, config.json) match none of
the Batch-2 TTS importers nor the generic text/image importers — and
asserts DiscoverModelConfig wraps ErrAmbiguousImport via errors.Is.

Also pivots the endpoint-level ambiguity fixture from hexgrad/Kokoro-82M
to nari-labs/Dia-1.6B. Batch 2 added a dedicated kokoro importer that
now claims the original fixture; Dia remains genuinely unclaimed and
so exercises the same ambiguity code path at the HTTP layer.

Assisted-by: Claude:claude-opus-4-7[1m] [Agent]

* test(gallery/importers): add failing tests for stablediffusion-ggml importer

Covers HF repo detection (city96/FLUX.1-dev-gguf), raw .gguf URL matching on
filename arch tokens, preference override, and Importer interface metadata.

Assisted-by: Claude:claude-opus-4-7[1m] [Agent]

* feat(gallery/importers): add stablediffusion-ggml importer

Detects GGUF-packed Stable Diffusion and FLUX checkpoints (leejet owner,
city96 FLUX mirrors, second-state SD dumps, raw .gguf URLs with arch
tokens) and routes them to the stablediffusion-ggml backend. Registered
BEFORE LlamaCPPImporter so .gguf image checkpoints are not stolen by
llama-cpp's generic .gguf match. Reuses HFOwnerRepoFromURI for the
hfapi-recursion-bug fallback. preferences.backend overrides detection.
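
Why order matters, as a minimal sketch (Matcher stands in for the real
Importer interface; the names are illustrative assumptions):

  package importers

  // Matcher is a stand-in: first match wins, so slice order encodes
  // "narrow before generic".
  type Matcher interface {
    Name() string
    Match(uri string) bool
  }

  // With stablediffusion-ggml registered ahead of llama-cpp, a FLUX
  // .gguf checkpoint is claimed by the image importer before the
  // generic .gguf rule ever runs.
  func discoverBackend(registry []Matcher, uri string) (string, bool) {
    for _, m := range registry {
      if m.Match(uri) {
        return m.Name(), true
      }
    }
    return "", false
  }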

Assisted-by: Claude:claude-opus-4-7[1m] [Agent]

* test(gallery/importers): add failing tests for ace-step importer

Covers HF repo-name detection (ACE-Step/ACE-Step-v1-3.5B), preference
override, and Importer interface metadata.

Assisted-by: Claude:claude-opus-4-7[1m] [Agent]

* feat(gallery/importers): add ace-step importer

Routes ACE-Step music generation checkpoints (ACE-Step/ACE-Step-v1-3.5B,
ACE-Step/Ace-Step1.5, community mirrors) to the ace-step backend.
Matching is case-insensitive on the "ace-step" repo-name substring and
owner, with an HFOwnerRepoFromURI fallback for the hfapi recursion bug.
KnownUsecaseStrings mirrors the gallery's ace-step-turbo entry
(sound_generation, tts). preferences.backend overrides.

Assisted-by: Claude:claude-opus-4-7[1m] [Agent]

* test(gallery/importers): surface ErrAmbiguousImport on text-to-image misses

Adds text-to-image to ambiguousModalities whitelist and covers the
h94/IP-Adapter-FaceID case — pipeline_tag=text-to-image but ships only
.bin/.safetensors so diffusers, stablediffusion-ggml, llama-cpp,
transformers, vllm, mlx, and ace-step all miss. DiscoverModelConfig now
surfaces ErrAmbiguousImport for that shape instead of the opaque
"no importer matched" error.

Assisted-by: Claude:claude-opus-4-7[1m] [Agent]

* test(gallery/importers): add failing tests for vllm-omni importer

Introduces the test surface for the forthcoming VLLMOmniImporter:
detection via preferences.backend, Qwen owner + Omni repo token,
URI-only fallback, negative cases (plain Qwen, random OmniX repo), and
Import() emitting backend: vllm-omni with chat + multimodal usecases.

Includes a registration-order assertion via DiscoverModelConfig to pin
the requirement that vllm-omni wins over vllm for Qwen Omni repos
(tokenizer files are usually present too).

Assisted-by: Claude:claude-opus-4-7[1m] [Agent]

* feat(gallery/importers): add vllm-omni importer

Adds VLLMOmniImporter for Qwen Omni-style multimodal checkpoints
(Qwen3-Omni, Qwen2.5-Omni, …). Detection is narrow: HF owner "Qwen"
combined with "omni" in the repo name, or a repo name matching the
-Omni-/Omni- naming pattern. preferences.backend="vllm-omni" always
wins; HFOwnerRepoFromURI provides a URI-only fallback for the hfapi
recursion-bug edge case.

Emitted YAML sets backend: vllm-omni and known_usecases: [chat,
multimodal], matching the gallery/index.yaml vllm-omni entries. The
importer is registered ahead of VLLMImporter so Qwen Omni repos —
which also carry tokenizer files — route to vllm-omni rather than the
plain vllm backend.

Assisted-by: Claude:claude-opus-4-7[1m] [Agent]

* test(gallery/importers): add failing tests for llama-cpp drop-in preferences

Pins the expected drop-in replacement behaviour: preferences.backend
of ik-llama-cpp or turboquant must swap the emitted YAML backend
field while keeping the llama-cpp file layout identical. Also covers
the unknown-backend case (must stay llama-cpp) and re-asserts
AdditionalBackends() returns the two curated entries with non-empty
descriptions.

Assisted-by: Claude:claude-opus-4-7[1m] [Agent]

* feat(gallery/importers): llama-cpp honours ik-llama-cpp and turboquant drop-in preferences

preferences.backend set to ik-llama-cpp or turboquant now swaps the
emitted YAML backend field while leaving the file layout, model path,
mmproj handling and everything else in the llama-cpp Import pipeline
untouched. Unknown values are ignored and fall back to backend:
llama-cpp so arbitrary input can't leak into the config.
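
The swap reduces to a whitelist check, roughly (a sketch under assumed
names, not the actual code):

  package importers

  // resolveBackend keeps the llama-cpp pipeline intact and only swaps
  // the emitted backend field for the two curated drop-ins.
  func resolveBackend(pref string) string {
    switch pref {
    case "ik-llama-cpp", "turboquant":
      return pref // drop-in replacement, identical file layout
    default:
      return "llama-cpp" // unknown values cannot leak into the config
    }
  }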

Aligns the AdditionalBackends() descriptions with the user-facing
naming conventions surfaced via /backends/known. No changes to the
pref-only curated list in endpoints/localai/backend.go: the two
drop-in names have always lived on the importer side via
AdditionalBackends.

Assisted-by: Claude:claude-opus-4-7[1m] [Agent]

* test(gallery/importers): add failing tests for silero-vad importer

Add the SileroVADImporter test fixtures covering metadata, preference
overrides, snakers4 + onnx detection, silero_vad.onnx canonical filename,
URI fallback, and live HF discovery. Implementation follows in the next
commit.

Assisted-by: Claude:claude-opus-4-7[1m] [Agent]

* feat(gallery/importers): add silero-vad importer

Recognise the Silero VAD ONNX packaging: the canonical silero_vad.onnx
filename or any ONNX file under the snakers4 owner. Emits a
backend: silero-vad config with the vad known_usecase, and attaches the
canonical file entry when present so the weights download on import.

Registered before the generic importers so the unique-filename signal
takes precedence over any downstream tokenizer-based matcher.

Assisted-by: Claude:claude-opus-4-7[1m] [Agent]

* test(gallery/importers): add failing tests for rerankers importer

Cover the RerankersImporter contract: interface metadata, preference
override, cross-encoder owner detection, case-insensitive 'reranker'
substring match (BAAI/bge-reranker, Alibaba-NLP/gte-reranker), URI
fallback, and the full-discovery ordering check that a BAAI reranker
repo must route to the rerankers importer rather than transformers.

Assisted-by: Claude:claude-opus-4-7[1m] [Agent]

* feat(gallery/importers): add rerankers importer

Recognise reranker repositories — cross-encoder owner or any repo whose
name contains 'reranker' (case-insensitive). Emits backend: rerankers
with reranking: true and the rerank known_usecase.

Registered ahead of sentencetransformers and transformers so reranker
repos that happen to ship tokenizer.json or modules.json still route
here.

Assisted-by: Claude:claude-opus-4-7[1m] [Agent]

* test(gallery/importers): add failing tests for sentencetransformers importer

Cover the SentenceTransformersImporter contract: interface metadata,
preference override, modules.json marker file, sentence_bert_config.json
marker file, sentence-transformers owner, URI fallback, and the
full-discovery ordering check that ensures a sentence-transformers HF
URI routes here rather than transformers.

Assisted-by: Claude:claude-opus-4-7[1m] [Agent]

* feat(gallery/importers): add sentencetransformers importer

Recognise sentence-transformers embedding repos by modules.json,
sentence_bert_config.json, or the sentence-transformers owner. Emits
backend: sentencetransformers with embeddings: true and the embeddings
known_usecase.

Registered ahead of transformers so ST repos that carry tokenizer.json
still route here.

Assisted-by: Claude:claude-opus-4-7[1m] [Agent]

* test(gallery/importers): add failing tests for rfdetr importer

Cover the RFDetrImporter contract: interface metadata, preference
override, case-insensitive rf-detr and rfdetr substring matches, URI
fallback, and negative cases. Implementation follows in the next
commit.

Assisted-by: Claude:claude-opus-4-7[1m] [Agent]

* feat(gallery/importers): add rfdetr importer

Recognise RF-DETR object-detection repositories by a case-insensitive
'rf-detr' / 'rfdetr' substring in the repo name. Emits backend: rfdetr
with the detection known_usecase.

Registered ahead of transformers so RF-DETR repos with tokenizer
artefacts still route here.

Assisted-by: Claude:claude-opus-4-7[1m] [Agent]

* test(gallery/importers): surface ErrAmbiguousImport on sentence-similarity misses

Add an ambiguity fixture covering the embeddings/rerankers modality.
Qdrant/bm25 carries pipeline_tag=sentence-similarity but ships only
config.json + stopword .txt files — none of the Batch 5 importers
(silero-vad, rerankers, sentencetransformers, rfdetr) or the generic
vllm/transformers/llama-cpp/mlx/diffusers importers match. Because the
modality is in the ambiguous whitelist, DiscoverModelConfig must
surface ErrAmbiguousImport.

Assisted-by: Claude:claude-opus-4-7[1m] [Agent]

* test(localai/backend): red tests for KnownBackend.Installed flag

Extend the /backends/known suite with three failing cases that pin down
the forthcoming Installed field: JSON field presence on every entry,
flipping to true when an importer-registered backend is also present on
disk (and staying false for non-installed pref-only entries), and
surfacing system-only backends with empty modality and AutoDetect=false.

A small writeFakeSystemBackend helper plants a run.sh under the backends
dir so gallery.ListSystemBackends recognises the fixture.

Assisted-by: Claude:claude-opus-4-7[1m] [Agent]

* feat(schema,localai/backend): add Installed flag to KnownBackend

Add an Installed bool to schema.KnownBackend and populate it from the
/backends/known handler so the React import form can warn users that
picking a not-yet-installed backend will trigger an automatic download
on submit.

Computation: after merging the importer registry, additional backends
provider entries and the curated pref-only slice, the handler walks
gallery.ListSystemBackends(systemState) and either flips the existing
map entry's Installed flag to true (preserving modality / autodetect /
description metadata) or inserts a bare {Installed:true} entry for
system-only backends the importer layer doesn't know about.
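
Roughly, as a sketch with assumed shapes (KnownBackend mirrors the schema
type; onDisk stands in for the gallery.ListSystemBackends output):

  package localai

  type KnownBackend struct {
    Name, Modality string
    AutoDetect     bool
    Installed      bool
  }

  func markInstalled(known map[string]*KnownBackend, onDisk []string) {
    for _, name := range onDisk {
      if kb, ok := known[name]; ok {
        kb.Installed = true // keep modality/autodetect metadata intact
        continue
      }
      // system-only backend the importer layer doesn't know about
      known[name] = &KnownBackend{Name: name, Installed: true}
    }
  }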

Assisted-by: Claude:claude-opus-4-7[1m] [Agent]

* test(localai/import_model): structured ambiguous-import response

Add red tests covering the extended ambiguity shape the React import
form needs:

- ImportModelURIEndpoint must return an HTTP 400 body that exposes the
  detected `modality` (normalised to the importer modality key, e.g.
  "tts" for pipeline_tag=text-to-speech) and a list of `candidates`
  (backend names filtered by modality, excluding text-LLM backends).
- The importers package must surface a typed AmbiguousImportError so
  HTTP consumers can read Modality + Candidates without parsing the
  error string. errors.Is against the existing sentinel keeps working.

Assisted-by: Claude:claude-opus-4-7[1m] [Agent]

* feat(localai/import_model): structured ambiguity response with modality + candidates

DiscoverModelConfig now returns a typed AmbiguousImportError that
carries the importer modality key, candidate backend names, the
original URI, and the raw HF pipeline_tag. Its Is() preserves
errors.Is(err, ErrAmbiguousImport) for legacy callers.

The importer modality is pre-mapped from the HF pipeline_tag
(automatic-speech-recognition → asr, text-to-speech → tts, etc) via
PipelineTagToModality — surfaced as an exported helper so downstream
consumers can avoid duplicating the table. CandidatesForModality
filters the default importer registry plus AdditionalBackendsProvider
drop-ins by modality, sorts deterministically, and is the single
source of truth used by ImportModelURIEndpoint.

ImportModelURIEndpoint now returns HTTP 400 with
  { error, detail, modality, candidates, hint }
when ambiguity fires, letting the React form render a modality-scoped
picker inline instead of a generic toast.
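
The error surface, sketched from the fields named above (exact types and
method bodies are assumptions for illustration):

  package importers

  import "errors"

  var ErrAmbiguousImport = errors.New("ambiguous import")

  type AmbiguousImportError struct {
    URI, PipelineTag, Modality string
    Candidates                 []string
  }

  func (e *AmbiguousImportError) Error() string {
    return ErrAmbiguousImport.Error() + ": " + e.URI
  }

  // Is keeps errors.Is(err, ErrAmbiguousImport) working for legacy callers.
  func (e *AmbiguousImportError) Is(target error) bool {
    return target == ErrAmbiguousImport
  }

  // PipelineTagToModality pre-maps HF pipeline tags to importer modality
  // keys (table abbreviated here).
  func PipelineTagToModality(tag string) string {
    switch tag {
    case "automatic-speech-recognition":
      return "asr"
    case "text-to-speech":
      return "tts"
    default:
      return ""
    }
  }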

Assisted-by: Claude:claude-opus-4-7[1m] [Agent]

* test(ui/import): manual pick badge + tooltip

Red Playwright coverage for the preference-only → manual pick rename:

- The Backend dropdown renders a "manual pick" badge on every option
  whose KnownBackend.auto_detect is false.
- The badge carries a title attribute with hover-tooltip copy that
  explains auto-detect won't route to this backend.
- Auto-detectable backends must NOT carry the badge.
- The legacy " (preference-only)" suffix is gone from every label.

Assisted-by: Claude:claude-opus-4-7[1m] [Agent]

* ui(import): replace preference-only suffix with manual pick badge

SearchableSelect option rows now support an optional badge field — a
muted pill rendered to the right of the label with an optional title
attribute for native hover tooltips. Plain text so screen readers read
it alongside the option name.

buildBackendOptions in ImportModel stops appending " (preference-only)"
to the label and instead sets badge="manual pick" plus a descriptive
tooltip on every option whose auto_detect is false. The Backend help
text explains what "manual pick" means so users aren't left wondering
about the badge.

Assisted-by: Claude:claude-opus-4-7[1m] [Agent]

* test(ui/import): inline ambiguity picker

Red Playwright coverage for Batch A2 — when the server returns a 400
ambiguity body, the form must render an inline alert instead of a
toast, expose one clickable chip per candidate backend, and support
both auto-resubmit on pick and silent dismiss.

- Mocks /api/models/import-uri with the structured ambiguity body
  (error, detail, modality, candidates, hint).
- On first click of Import, the alert is visible, carries
  modality-specific copy, and shows a chip per candidate.
- Clicking a chip clears the alert, sets the Backend dropdown, and
  triggers a second POST to /api/models/import-uri.
- Dismissing the alert leaves the Backend dropdown on Auto-detect —
  no implicit backend assignment.

Assisted-by: Claude:claude-opus-4-7[1m] [Agent]

* feat(ui/import): inline ambiguity alert with candidate chips

Adds AmbiguityAlert — a soft, info-coloured card rendered above the URI
input when the server returns a structured 400 with { modality,
candidates }. Message is modality-aware (tts/asr/embeddings/image/
reranker/detection get purpose-written copy, everything else falls back
to a generic template). Each candidate is a clickable chip that shows a
download icon when /backends/known marks the backend as not yet
installed, so users aren't surprised by an implicit install.

ImportModel wires the alert to handleSimpleImport's error path:
- api.handleResponse now attaches { status, body } to the thrown Error
  so pages can pattern-match on structured responses instead of string
  error messages.
- handleSimpleImport detects `status === 400 && body.error === 'ambiguous
  import'` and flips into the inline-picker mode instead of toasting.
- Clicking a chip sets prefs.backend and auto-resubmits (passing the
  picked backend as an override so setPrefs's asynchrony doesn't leak
  a stale value).
- Dismissing clears the alert; changing the URI or the backend also
  clears it so a stale alert never sticks around.

Test fixtures mock GET /backends/known + POST /models/import-uri so the
Playwright specs don't depend on real network reachability.

Assisted-by: Claude:claude-opus-4-7[1m] [Agent]

* test(ui/import): auto-install warning

Red Playwright coverage for Batch A3 — when the user picks a backend
whose KnownBackend.installed is false, the form must render a muted
inline note under the Backend dropdown warning that submitting will
download the backend first. Picking an installed backend or leaving
Auto-detect selected must keep the note hidden.

Assisted-by: Claude:claude-opus-4-7[1m] [Agent]

* feat(ui/import): auto-install warning under backend dropdown

When the user picks a backend whose KnownBackend.installed is false,
render a muted inline note under the Backend dropdown's help text
warning that submitting will download the backend first. The note
lives inside the same form-group so it lines up with the existing
hint text; it's hidden when Auto-detect is selected (the selected
backend is unknowable at that point) or when the chosen backend is
already on disk.

Assisted-by: Claude:claude-opus-4-7[1m] [Agent]

* ui(import): drop redundant section header, adjust icons, rename HF shortcut

- Remove the "Import from URI" card-level <h2> — the page title already
  says "Import New Model" one row up, so the secondary header was
  duplicating information.
- Swap the fa-star on "Common Preferences" for fa-sliders (stars imply
  favourites/ratings; this is just a preferences block) and move the
  Custom Preferences fa-sliders-h to fa-plus-circle so the two blocks
  read as distinct rather than as two sliders.
- Rename the HF shortcut from "Search GGUF on HF" → "Browse models on
  HF" and drop the `search=gguf` filter on the linked URL. The import
  form now supports ~40 backends; hard-coding GGUF in the copy no
  longer matches the form's actual reach.
- Pure polish — no behaviour change, covered by the existing Batch A
  Playwright suite.

Assisted-by: Claude:claude-opus-4-7[1m] [Agent]

* test(ui/import): batch B — simple/power switch, options, tabs, dialog

Adds a failing Playwright suite covering the full Batch B surface ahead
of implementation:

- B1: SimplePowerSwitch segmented control renders, toggles, persists to
  localStorage across reloads.
- B2: Simple-mode Options disclosure is collapsed by default; expanding
  exposes only Backend, Model Name, Description (no quantizations,
  mmproj, model type, or custom prefs).
- B3: Power mode has Preferences and YAML tabs with a persistent
  selection across reloads; URI/name/description typed in Simple carry
  over to Power; YAML tab swaps the primary action to Create.
- B4: Switching Power -> Simple with a custom preference set triggers
  the 3-button confirmation dialog (Keep / Discard / Cancel) with the
  documented semantics.

Tests fail against master — implementation lands in the following
commits.

Assisted-by: Claude:claude-opus-4-7[1m] [Agent]

* feat(ui/import): add SimplePowerSwitch segmented control

Replaces the previous "Advanced Mode / Simple Mode" toggle button in the
page header with a two-segment control that flips between Simple and
Power. The control reuses the existing .segmented CSS shared with the
Sound page for visual consistency.

Mode state is persisted to localStorage under `import-form-mode` so
reloads land on the same view (default: simple). The boolean alias
`isAdvancedMode` is retained internally to minimise diff — subsequent
commits reshape the Simple and Power surfaces independently.

Closes B1 from the Batch B Playwright suite.

Assisted-by: Claude:claude-opus-4-7[1m] [Agent]

* feat(ui/import): simple mode collapsible options, power tabs, switch dialog

Completes the Batch B surface in a single structural pass so Simple and
Power mode can evolve independently:

Simple mode
  - URI input + Ambiguity alert + Import button, plus a collapsible
    "Options" disclosure that exposes ONLY Backend, Model Name,
    Description. Quantizations / MMProj / Model Type / Diffusers fields
    / Custom Preferences are no longer rendered in Simple mode.

Power mode
  - In-page segmented "Preferences · YAML" tab strip. Active tab
    persists to localStorage under `import-form-power-tab`.
  - Preferences tab = the full existing preferences + custom prefs
    panel (no progressive disclosure yet — that's Batch D).
  - YAML tab = the existing CodeEditor. Primary button reads "Create"
    here, "Import Model" everywhere else.

Switch dialog
  - Power -> Simple with non-default prefs (advanced pref keys set,
    any custom-pref key non-empty, or YAML edited away from the
    template) opens a 3-button dialog: Keep & switch / Discard &
    switch / Cancel.
  - Keep preserves all state. Discard resets prefs + customPrefs + YAML
    to defaults. Cancel leaves the user in Power mode.

Page subtitle reflects the current surface (Simple, Power/Preferences,
Power/YAML). Estimate banner renders everywhere except Power/YAML.

Closes B2/B3/B4 from the Batch B Playwright suite.

Assisted-by: Claude:claude-opus-4-7[1m] [Agent]

* test(ui/import): expand Options disclosure in Batch A tests

Batch B hid the Backend dropdown behind a collapsible Options disclosure
in Simple mode. The Batch A tests that exercise the dropdown directly
(manual-pick badge, ambiguity chip sets the selected backend, auto-
install warning) now click the disclosure toggle before asserting on
dropdown contents. Test intent is unchanged.

Assisted-by: Claude:claude-opus-4-7[1m] [Agent]

* ui(import): strip decorative icons from field labels

The preference panel had 12 Font Awesome icons decorating field labels
(Backend, Model Name, Description, Quantizations, MMProj Quantizations,
Model Type, Pipeline Type, Scheduler Type, Enable Parameters, Embeddings,
CUDA, plus fa-link on Model URI). Every label screamed equally, flattening
the visual hierarchy.

Remove them. Keep icons where they carry meaning: page-level section
headers, URI format guide entries, primary buttons, the Simple-mode
Options disclosure, the ambiguity alert's fa-lightbulb, the auto-install
note's fa-download, and the Estimated-requirements banner's
fa-memory / fa-microchip / fa-download.

No new behaviour, no layout / spacing changes beyond removing the
orphaned icon margin. Playwright suite green.

Assisted-by: Claude:claude-opus-4-7[1m] [Agent]

* test(ui/import): progressive disclosure of preference fields

Cover the Batch D visibility matrix for Power > Preferences: Quantizations,
MMProj Quantizations, and Model Type each render only for the backends that
can consume them, stay visible when the backend is unset, and preserve any
value the user already typed when toggled off and back on. Also pin the
shrunk Description textarea at rows=2.

Assisted-by: Claude:claude-opus-4-7[1m] [Agent]

* feat(ui/import): progressive disclosure + shorter description textarea

Gate Quantizations, MMProj Quantizations, and Model Type in the Power >
Preferences tab so each field only renders for the backends that can
actually consume it. Backend unset keeps everything visible. Hidden
fields' state is preserved (the JSX wrapper is guarded, not the
underlying prefs state) so users flipping backends back and forth don't
lose input.

Also shrink the Description textarea from rows=3 to rows=2 — it's
shared between Simple Options and Power Preferences so the change
applies to both.

Assisted-by: Claude:claude-opus-4-7[1m] [Agent]

* test(ui/import): enter-to-submit in Simple mode

Red test for Batch F3 — pressing Enter in the URI input must POST
/models/import-uri, and Enter in the Description textarea must insert
a newline without submitting the form.

Assisted-by: Claude:claude-opus-4-7[1m] [Agent]

* feat(ui/import): enter-to-submit in Simple mode

Wrap the Simple-mode URI input + ambiguity alert + Options disclosure
in a <form> whose onSubmit calls handleSimpleImport. Pressing Enter in
the URI input (or any Simple-mode text input) now submits the import
without having to move the mouse to the header button. The Description
textarea keeps its native behaviour — Enter inserts a newline.

A hidden submit button is included because the visible Import button
lives outside the form in the page header; some browsers only fire
implicit Enter-submit when the form contains a submit-capable element.

Assisted-by: Claude:claude-opus-4-7[1m] [Agent]

* ui(import,SearchableSelect,components): aria-hidden on decorative icons

Every Font Awesome icon in the import form is decorative — its meaning
is already conveyed by adjacent visible text. Adding aria-hidden="true"
prevents screen readers from announcing the unicode glyph point as
content. Covers ImportModel.jsx (all remaining <i> glyphs) and
SearchableSelect.jsx (the trigger chevron).

AmbiguityAlert and SimplePowerSwitch already set aria-hidden on their
icons when the components landed in Batches A and B — no change needed
there.

Assisted-by: Claude:claude-opus-4-7[1m] [Agent]

* ui(SearchableSelect): responsive dropdown maxHeight + hover focus guard

F2 — replace fixed pixel heights with min(pixel, vh) so the dropdown
and its inner scroll region don't overflow short viewports. Outer
container: 260px -> min(260px, 60vh); inner listbox: 200px ->
min(200px, 50vh). Tall viewports still get the original pixel caps.

F5 — short-circuit onMouseEnter when the hovered row is already the
focused row. Avoids queueing a setFocusIndex call (and a render) for
every mousemove inside the same item — the state would be identical.

Assisted-by: Claude:claude-opus-4-7[1m] [Agent]

* ui(import): aria-label on custom preference rows

The Key / Value inputs and trash button in each Custom Preferences row
previously relied on placeholder text alone. Placeholders are not
accessible names — they vanish on input and screen readers do not
announce them consistently. Add row-indexed aria-labels so assistive
tech can distinguish "Preference key for row 1" from "row 2", and give
the trash button an explicit "Remove this preference" label.

Assisted-by: Claude:claude-opus-4-7[1m] [Agent]

* test(ui/import): modality chip row

Red tests for Batch E — a horizontal modality chip row that filters the
Backend dropdown by modality. Covers visibility in Simple-mode Options
and Power/Preferences (and absence in Power/YAML), filter behaviour,
mismatched-backend clearing with toast, ambiguity-alert auto-selection,
and radiogroup keyboard navigation.

Assisted-by: Claude:claude-opus-4-7[1m] [Agent]

* feat(ui/import): add ModalityChips component + filter integration

Horizontal chip row (Any, Text, Speech, TTS, Image, Embeddings,
Rerankers, Detection, VAD) filters the Backend dropdown options to the
selected modality. Default is Any — no filter, current behaviour.

- New ModalityChips component (radiogroup pattern, roving tabindex,
  arrow-key navigation, Home/End).
- buildBackendOptions now accepts an optional modalityFilter so grouped
  output is narrowed before rendering.
- Chips render inside Simple-mode Options disclosure and Power >
  Preferences tab. Power > YAML stays unaffected.
- Switching the filter drops a mismatched backend selection and
  surfaces a toast so the auto-clear is visible.
- Ambiguity alerts auto-activate the matching chip so users see only
  relevant backends even if they dismiss the alert.

Tightens the Batch E tests' option-matching to the label <span> so the
"↵" keybind hint on the focused row doesn't break accessible-name
lookups.

Assisted-by: Claude:claude-opus-4-7[1m] [Agent]

* fix(ui/import): rename Power to Advanced + stop URI-formats toggle from submitting form

The "Supported URI Formats" disclosure button inside the Simple-mode form
lacked an explicit type attribute, so it defaulted to type="submit". Every
click triggered the form's onSubmit and surfaced the empty-URI validation
toast ("Please enter a model URI"). Marking it type="button" lets it
behave as a pure toggle.

While here, rename the user-visible "Power" label to "Advanced" in the
mode switch (button text + tooltip) and the Power-mode tab's aria-label,
matching the term users actually expect. The internal mode key stays
'power' so tests, localStorage, and data-testid selectors are untouched.

Assisted-by: Claude:claude-opus-4-7

* fix(system): fall back to cpu when meta backend lacks default capability

Meta backends like vllm and sglang enumerate concrete variants for
nvidia/amd/intel/cpu but omit a default: catch-all entry. On a no-GPU
host the reported capability is "default", so the previous Capability()
returned "default" unconditionally on a miss — IsCompatibleWith then saw
no "default" key and filtered the meta out of AvailableBackends. The
import flow's auto-install step then failed with "no backend found with
name <meta>", contradicting the UI's promise that the backend would be
downloaded on demand.

Try the explicit "default" key first, then fall back to "cpu" before
giving up. vllm now resolves to cpu-vllm on CPU-only Linux without
touching the gallery YAML.
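
The lookup order reduces to a three-step fallback, roughly (a sketch with
an assumed map shape standing in for the meta backend's variant table):

  package system

  // capabilityFor tries the detected capability, then the explicit
  // "default" key, then "cpu", before giving up.
  func capabilityFor(variants map[string]string, detected string) (string, bool) {
    for _, key := range []string{detected, "default", "cpu"} {
      if v, ok := variants[key]; ok {
        return v, true
      }
    }
    return "", false
  }

On a CPU-only host where detected is "default" and the vllm table has no
default entry, the loop lands on "cpu" and resolves cpu-vllm.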

Assisted-by: Claude:claude-opus-4-7
2026-04-22 22:42:37 +02:00
Ettore Di Giacinto
372eb08dcf fix(gallery): allow uninstalling orphaned meta backends + force reinstall (#9434)
Two interrelated bugs that combined to make a meta backend impossible
to uninstall once its concrete had been removed from disk (partial
install, earlier crash, manual cleanup).

1. DeleteBackendFromSystem returned "meta backend %q not found" and
   bailed out early when the concrete directory didn't exist,
   preventing the orphaned meta dir from ever being removed. Treat a
   missing concrete as idempotent success — log a warning and continue
   to remove the orphan meta.

2. InstallBackendFromGallery's "already installed, skip" short-circuit
   only checked that the name was known (`backends.Exists(name)`); an
   orphaned meta whose RunFile points at a missing concrete still
   satisfies that check, so every reinstall returned nil without doing
   anything. Afterwards the worker's findBackend returned empty and we
   kept looping with "backend %q not found after install attempt".
   Require the entry to be actually runnable (run.sh stat-able, not a
   directory) before skipping.

New helper isBackendRunnable centralises the runnability test so both
the install guard and future callers stay in sync. Tests cover the
orphaned-meta delete path and the non-runnable short-circuit case.
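
The runnability test is tiny; a sketch under an assumed directory layout:

  package gallery

  import (
    "os"
    "path/filepath"
  )

  // isBackendRunnable: the entry's run.sh must stat cleanly and must not
  // be a directory, mirroring the guard described above.
  func isBackendRunnable(backendDir string) bool {
    info, err := os.Stat(filepath.Join(backendDir, "run.sh"))
    return err == nil && !info.IsDir()
  }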
2026-04-20 00:10:19 +02:00
Ettore Di Giacinto
75a63f87d8 feat(distributed): sync state with frontends, better backend management reporting (#9426)
* fix(distributed): detect backend upgrades across worker nodes

Before this change `DistributedBackendManager.CheckUpgrades` delegated to the
local manager, which read backends from the frontend filesystem. In
distributed deployments the frontend has no backends installed locally —
they live on workers — so the upgrade-detection loop never ran and the UI
silently never surfaced upgrades even when the gallery advertised newer
versions or digests.

Worker-side: NATS backend.list reply now carries Version, URI and Digest
for each installed backend (read from metadata.json).

Frontend-side: DistributedBackendManager.ListBackends aggregates per-node
refs (name, status, version, digest) instead of deduping, and CheckUpgrades
feeds that aggregation into gallery.CheckUpgradesAgainst — a new entrypoint
factored out of CheckBackendUpgrades so both paths share the same core
logic.

Cluster drift policy: when per-node version/digest tuples disagree, the
backend is flagged upgradeable regardless of whether any single node
matches the gallery, and UpgradeInfo.NodeDrift enumerates the outliers so
operators can see *why* it is out of sync. The next upgrade-all realigns
the cluster.

Tests cover: drift detection, unanimous-match (no upgrade), and the
empty-installed-version path that the old distributed code silently
missed.
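
The drift rule itself is a one-pass comparison, sketched here with an
assumed NodeRef shape:

  package gallery

  // NodeRef stands in for the per-node (version, digest) aggregation.
  type NodeRef struct{ Node, Version, Digest string }

  // hasNodeDrift flags a backend upgradeable as soon as any two nodes
  // disagree, regardless of whether one of them matches the gallery.
  func hasNodeDrift(refs []NodeRef) bool {
    for i := 1; i < len(refs); i++ {
      if refs[i].Version != refs[0].Version || refs[i].Digest != refs[0].Digest {
        return true
      }
    }
    return false
  }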

* feat(ui): surface backend upgrades in the System page

The System page (Manage.jsx) only showed updates as a tiny inline arrow,
so operators routinely missed them. Port the Backend Gallery's upgrade UX
so System speaks the same visual language:

- Yellow banner at the top of the Backends tab when upgrades are pending,
  with an "Upgrade all" button (serial fan-out, matches the gallery) and a
  "Updates only" filter toggle.
- Warning pill (↑ N) next to the tab label so the count is glanceable even
  when the banner is scrolled out of view.
- Per-row labeled "Upgrade to vX.Y" button (replaces the icon-only button
  that silently flipped semantics between Reinstall and Upgrade), plus an
  "Update available" badge in the new Version column.
- New columns: Version (with upgrade + drift chips), Nodes (per-node
  attribution badges for distributed mode, degrading to a compact
  "on N nodes · M offline" chip above three nodes), Installed (relative
  time).
- System backends render a "Protected" chip instead of a bare "—" so rows
  still align and the reason is obvious.
- Delete uses the softer btn-danger-ghost so rows don't scream red; the
  ConfirmDialog still owns the "are you sure".

The upgrade checker also needed the same per-worker fix as the previous
commit: NewUpgradeChecker now takes a BackendManager getter so its
periodic runs call the distributed CheckUpgrades (which asks workers)
instead of the empty frontend filesystem. Without this the /api/backends/
upgrades endpoint stayed empty in distributed mode even with the protocol
change in place.

New CSS primitives — .upgrade-banner, .tab-pill, .badge-row, .cell-stack,
.cell-mono, .cell-muted, .row-actions, .btn-danger-ghost — all live in
App.css so other pages can adopt them without duplicating styles.

* feat(ui): polish the Nodes page so it reads like a product

The Nodes page was the biggest visual liability in distributed mode.
Rework the main dashboard surfaces in place without changing behavior:

StatCards: uniform height (96px min), left accent bar colored by the
metric's semantics (success/warning/error/primary), icon lives in a
36x36 soft-tinted chip top-right, value is left-aligned and large.
Grid auto-fills so the row doesn't collapse on narrow viewports. This
replaces the previous thin-bordered boxes with inconsistent heights.

Table rows: expandable rows now show a chevron cue on the left (rotates
on expand) so users know rows open. Status cell became a dedicated chip
with an LED-style halo dot instead of a bare bullet. Action buttons gained
labels — "Approve", "Resume", "Drain" — so the icons aren't doing all
the semantic work; the destructive remove action uses the softer
btn-danger-ghost variant so rows don't scream red, with the ConfirmDialog
still owning the real "are you sure". Applied cell-mono/cell-muted
utility classes so label chips and addresses share one spacing/font
grammar instead of re-declaring inline styles everywhere.

Expanded drawer: empty states for Loaded Models and Installed Backends
now render as a proper drawer-empty card (dashed border, icon, one-line
hint) instead of a plain muted string that read like broken formatting.

Tabs: three inline-styled buttons became the shared .tab class so they
inherit focus ring, hover state, and the rest of the design system —
matches the System page.

"Add more workers" toggle turned into a .nodes-add-worker dashed-border
button labelled "Register a new worker" (action voice) instead of a
chevron + muted link that operators kept mistaking for broken text.

New shared CSS primitives carry over to other pages:
.stat-grid + .stat-card, .row-chevron, .node-status, .drawer-empty,
.nodes-add-worker.

* feat(distributed): durable backend fan-out + state reconciliation

Two connected problems handled together:

1) Backend delete/install/upgrade used to silently skip non-healthy nodes,
   so a delete during an outage left a zombie on the offline node once it
   returned. The fan-out now records intent in a new pending_backend_ops
   table before attempting the NATS round-trip. Currently-healthy nodes
   get an immediate attempt; everyone else is queued. Unique index on
   (node_id, backend, op) means reissuing the same operation refreshes
   next_retry_at instead of stacking duplicates.

2) Loaded-model state could drift from reality: a worker that OOM'd, was
   killed, or restarted its backend process would leave a node_models row
   claiming the model was still loaded, feeding ghost entries into the
   /api/nodes/models listing and the router's scheduling decisions.

The existing ReplicaReconciler gains two new passes that run under a
fresh KeyStateReconciler advisory lock (non-blocking, so one wedged
frontend doesn't freeze the cluster):

  - drainPendingBackendOps: retries queued ops whose next_retry_at has
    passed on currently-healthy nodes. Success deletes the row; failure
    bumps attempts and pushes next_retry_at out with exponential backoff
    (30s → 15m cap). ErrNoResponders also marks the node unhealthy.

  - probeLoadedModels: runs gRPC health checks against addresses the DB
    thinks are loaded but whose rows have not been touched within
    probeStaleAfter (2m).
    Unreachable addresses are removed from the registry. A pluggable
    ModelProber lets tests substitute a fake without standing up gRPC.
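
The backoff curve described above can be sketched as follows (the
doubling base is an assumption; only the 30s floor and 15m cap come from
this text):

  package reconciler

  import "time"

  // nextRetryDelay: exponential from 30s, capped at 15m; the overflow
  // guard keeps very large attempt counts pinned at the cap.
  func nextRetryDelay(attempts int) time.Duration {
    const limit = 15 * time.Minute
    d := 30 * time.Second << uint(attempts)
    if d <= 0 || d > limit {
      return limit
    }
    return d
  }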

DistributedBackendManager exposes DeleteBackendDetailed so the HTTP
handler can surface per-node outcomes ("2 succeeded, 1 queued") to the
UI in a follow-up commit; the existing DeleteBackend still returns
error-only for callers that don't care about node breakdown.

Multi-frontend safety: the state pass uses advisorylock.TryWithLockCtx
on a new key so N frontends coordinate — the same pattern the health
monitor and replica reconciler already rely on. Single-node mode runs
both passes inline (adapter is nil, state drain is a no-op).

Tests cover the upsert semantics, backoff math, the probe removing an
unreachable model but keeping a reachable one, and filtering by
probeStaleAfter.

* feat(ui): show cluster distribution of models in the System page

When a frontend restarted in distributed mode, models that workers had
already loaded weren't visible until the operator clicked into each node
manually — the /api/models/capabilities endpoint only knew about
configs on the frontend's filesystem, not the registry-backed truth.

/api/models/capabilities now joins in ListAllLoadedModels() when the
registry is active, returning loaded_on[] with node id/name/state/status
for each model. Models that live in the registry but lack a local config
(the actual ghosts, not recovered from the frontend's file cache) still
surface with source="registry-only" so operators can see and persist
them; without that emission they'd be invisible to this frontend.

Manage → Models replaces the old Running/Idle pill with a distribution
cell that lists the first three nodes the model is loaded on as chips
colored by state (green loaded, blue loading, amber anything else). On
wider clusters the remaining count collapses into a +N chip with a
title-attribute breakdown. Disabled / single-node behavior unchanged.

Adopted models get an extra "Adopted" ghost-icon chip with hover copy
explaining what it means and how to make it permanent.

Distributed mode also enables a 10s auto-refresh and a "Last synced Xs
ago" indicator next to the Update button so ghost rows drop off within
one reconcile tick after their owning process dies. Non-distributed
mode is untouched — no polling, no cell-stack, same old Running/Idle.

* feat(ui): NodeDistributionChip — shared per-node attribution component

Large clusters were going to break the Manage → Backends Nodes column:
the old inline logic rendered every node as a badge and would shred the
layout at >10 workers, plus the Manage → Models distribution cell had
copy-pasted its own slightly-different version.

NodeDistributionChip handles any cluster size with two render modes:
  - small (≤3 nodes): inline chips of node names, colored by health.
  - large: a single "on N nodes · M offline · K drift" summary chip;
    clicking opens a Popover with a per-node table (name, status,
    version, digest for backends; name, status, state for models).

Drift counting mirrors the backend's summarizeNodeDrift so the UI
number matches UpgradeInfo.NodeDrift. Digests are truncated to the
docker-style 12-char form with the full value preserved in the title.

Popover is a new general-purpose primitive: fixed positioning anchored
to the trigger, flips above when there's no room below, closes on
outside-click or Escape, returns focus to the trigger. Uses .card as
its surface so theming is inherited. Also useful for a future
labels-editor popup and the user menu.

Manage.jsx drops its duplicated inline Nodes-column + loaded_on cell
and uses the shared chip with context="backends" / "models"
respectively. Deleting the duplicated code removes ~40 lines of ad-hoc logic.

* feat(ui): shared FilterBar across the System page tabs

The Backends gallery had a nice search + chip + toggle strip; the System
page had nothing, so the two surfaces felt like different apps. Lift the
pattern into a reusable FilterBar and wire both System tabs through it.

New component core/http/react-ui/src/components/FilterBar.jsx renders a
search input, a role="tablist" chip row (aria-selected for a11y), and
optional toggles / right slot. Chips support an optional `count` which
the System page uses to show "User 3", "Updates 1" etc.

System Models tab: search by id or backend; chips for
All/Running/Idle/Disabled/Pinned plus a conditional Distributed chip in
distributed mode. "Last synced" + Update button live in the right slot.

System Backends tab: search by name/alias/meta-backend-for; chips for
All/User/System/Meta plus conditional Updates / Offline-nodes chips
when relevant. The old ad-hoc "Updates only" toggle from the upgrade
banner folded into the Updates chip — one source of truth for that
filter. Offline chip only appears in distributed mode when at least
one backend has an unhealthy node, so the chip row stays quiet on
healthy clusters.

Filter state persists in URL query params (mq/mf/bq/bf) so deep links
and tab switches keep the operator's filter context instead of
resetting every time.

Also adds an "Adopted" distribution path: when a model in
/api/models/capabilities carries source="registry-only" (discovered on
a worker but not configured locally), the Models tab shows a ghost chip
labelled "Adopted" with hover copy explaining how to persist it — this
is what closes the loop on the ghost-model story end-to-end.
2026-04-19 17:55:53 +02:00
Ettore Di Giacinto
7c5d6162f7 fix(ui): rename model config files on save to prevent duplicates (#9388)
Editing a model's YAML and changing the `name:` field previously wrote
the new body to the original `<oldName>.yaml`. On reload the config
loader indexed that file under the new name while the old key
lingered in memory, producing two entries in the system UI that
shared a single underlying file — deleting either removed both.

Detect the rename in EditModelEndpoint and rename the on-disk
`<name>.yaml` and `._gallery_<name>.yaml` to match, drop the stale
in-memory key before the reload, and redirect the editor URL in the
React UI so it tracks the new name. Reject conflicts (409) and names
containing path separators (400).
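
A minimal sketch in Go of the rename handling (helper name and error
bodies are illustrative, not the committed endpoint code; the in-memory
key drop and config reload live in the endpoint itself):

    package localai

    import (
        "errors"
        "os"
        "path/filepath"
        "strings"
    )

    // renameModelConfig moves both the config and its gallery marker to the
    // new name, returning an HTTP-ish status for the two rejection cases.
    func renameModelConfig(modelDir, oldName, newName string) (int, error) {
        if strings.ContainsAny(newName, `/\`) {
            return 400, errors.New("model name must not contain path separators")
        }
        newPath := filepath.Join(modelDir, newName+".yaml")
        if _, err := os.Stat(newPath); err == nil {
            return 409, errors.New("a config with the new name already exists")
        }
        if err := os.Rename(filepath.Join(modelDir, oldName+".yaml"), newPath); err != nil {
            return 500, err
        }
        // keep the gallery marker file in sync when present
        oldMarker := filepath.Join(modelDir, "._gallery_"+oldName+".yaml")
        if _, err := os.Stat(oldMarker); err == nil {
            _ = os.Rename(oldMarker, filepath.Join(modelDir, "._gallery_"+newName+".yaml"))
        }
        return 200, nil
    }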

Fixes #9294
2026-04-17 08:12:48 +02:00
Ettore Di Giacinto
016da02845 feat: refactor shared helpers and enhance MLX backend functionality (#9335)
* refactor(backends): extract python_utils + add mlx_utils shared helpers

Move parse_options() and messages_to_dicts() out of vllm_utils.py into a
new framework-agnostic python_utils.py, and re-export them from vllm_utils
so existing vllm / vllm-omni imports keep working.

Add mlx_utils.py with split_reasoning() and parse_tool_calls() — ported
from mlx_vlm/server.py's process_tool_calls. These work with any
mlx-lm / mlx-vlm tool module (anything exposing tool_call_start,
tool_call_end, parse_tool_call). Used by the mlx and mlx-vlm backends in
later commits to emit structured ChatDelta.tool_calls without
reimplementing per-model parsing.

Shared smoke tests confirm:
- parse_options round-trips bool/int/float/string
- vllm_utils re-exports are identity-equal to python_utils originals
- mlx_utils parse_tool_calls handles <tool_call>...</tool_call> with a
  shim module and produces a correctly-indexed list with JSON arguments
- mlx_utils split_reasoning extracts <think> blocks and leaves clean
  content

* feat(mlx): wire native tool parsers + ChatDelta + token usage + logprobs

Bring the MLX backend up to the same structured-output contract as vLLM
and llama.cpp: emit Reply.chat_deltas so the OpenAI HTTP layer sees
tool_calls and reasoning_content, not just raw text.

Key insight: mlx_lm.load() returns a TokenizerWrapper that already auto-
detects the right tool parser from the model's chat template
(_infer_tool_parser in mlx_lm/tokenizer_utils.py). The wrapper exposes
has_tool_calling, has_thinking, tool_parser, tool_call_start,
tool_call_end, think_start, think_end — no user configuration needed,
unlike vLLM.

Changes in backend/python/mlx/backend.py:

- Imports: replace inline parse_options / messages_to_dicts with the
  shared helpers from python_utils. Pull split_reasoning / parse_tool_calls
  from the new mlx_utils shared module.
- LoadModel: log the auto-detected has_tool_calling / has_thinking /
  tool_parser_type for observability. Drop the local is_float / is_int
  duplicates.
- _prepare_prompt: run request.Messages through messages_to_dicts so
  tool_call_id / tool_calls / reasoning_content survive the conversion,
  and pass tools=json.loads(request.Tools) + enable_thinking=True (when
  request.Metadata says so) to apply_chat_template. Falls back on
  TypeError for tokenizers whose template doesn't accept those kwargs.
- _build_generation_params: return an additional (logits_params,
  stop_words) pair. Maps RepetitionPenalty / PresencePenalty /
  FrequencyPenalty to mlx_lm.sample_utils.make_logits_processors and
  threads StopPrompts through to post-decode truncation.
- New _tool_module_from_tokenizer / _finalize_output / _truncate_at_stop
  helpers. _finalize_output runs split_reasoning when has_thinking is
  true and parse_tool_calls (using a SimpleNamespace shim around the
  wrapper's tool_parser callable) when has_tool_calling is true, then
  extracts prompt_tokens, generation_tokens and (best-effort) logprobs
  from the last GenerationResponse chunk.
- Predict: use make_logits_processors, accumulate text + last_response,
  finalize into a structured Reply carrying chat_deltas,
  prompt_tokens, tokens, logprobs. Early-stops on user stop sequences.
- PredictStream: per-chunk Reply still carries raw message bytes for
  back-compat but now also emits chat_deltas=[ChatDelta(content=delta)].
  On loop exit, emit a terminal Reply with structured
  reasoning_content / tool_calls / token counts / logprobs — so the Go
  side sees tool calls without needing the regex fallback.
- TokenizeString RPC: uses the TokenizerWrapper's encode(); returns
  length + tokens or FAILED_PRECONDITION if the model isn't loaded.
- Free RPC: drops model / tokenizer / lru_cache, runs gc.collect(),
  calls mx.metal.clear_cache() when available, and best-effort clears
  torch.cuda as a belt-and-suspenders measure.

* feat(mlx-vlm): mirror MLX parity (tool parsers + ChatDelta + samplers)

Same treatment as the MLX backend: emit structured Reply.chat_deltas,
tool_calls, reasoning_content, token counts and logprobs, and extend
sampling parameter coverage beyond the temp/top_p pair the backend
used to handle.

- Imports: drop the inline is_float/is_int helpers, pull parse_options /
  messages_to_dicts from python_utils and split_reasoning /
  parse_tool_calls from mlx_utils. Also import make_sampler and
  make_logits_processors from mlx_lm.sample_utils — mlx-vlm re-uses them.
- LoadModel: use parse_options; call mlx_vlm.tool_parsers._infer_tool_parser
  / load_tool_module to auto-detect a tool module from the processor's
  chat_template. Stash think_start / think_end / has_thinking so later
  finalisation can split reasoning blocks without duck-typing on each
  call. Logs the detected parser type.
- _prepare_prompt: convert proto Messages via messages_to_dicts (so
  tool_call_id / tool_calls survive), pass tools=json.loads(request.Tools)
  and enable_thinking=True to apply_chat_template when present, fall
  back on TypeError for older mlx-vlm versions. Also handle the
  prompt-only + media and empty-prompt + media paths consistently.
- _build_generation_params: return (max_tokens, sampler_params,
  logits_params, stop_words). Maps repetition_penalty / presence_penalty /
  frequency_penalty and passes them through make_logits_processors.
- _finalize_output / _truncate_at_stop: common helper used by Predict
  and PredictStream to split reasoning, run parse_tool_calls against the
  auto-detected tool module, build ToolCallDelta list, and extract token
  counts + logprobs from the last GenerationResult.
- Predict / PredictStream: switch from mlx_vlm.generate to mlx_vlm.stream_generate
  in both paths, accumulate text + last_response, pass sampler and
  logits_processors through, emit content-only ChatDelta per streaming
  chunk followed by a terminal Reply carrying reasoning_content,
  tool_calls, prompt_tokens, tokens and logprobs. Non-streaming Predict
  returns the same structured Reply shape.
- New helper _collect_media extracted from the duplicated base64 image /
  audio decode loop.
- New TokenizeString RPC using the processor's tokenizer.encode and
  Free RPC that drops model/processor/config, runs gc + Metal cache
  clear + best-effort torch.cuda cache clear.

* feat(importer/mlx): auto-set tool_parser/reasoning_parser on import

Mirror what core/gallery/importers/vllm.go does: after applying the
shared inference defaults, look up the model URI in parser_defaults.json
and append matching tool_parser:/reasoning_parser: entries to Options.

The MLX backends auto-detect tool parsers from the chat template at
runtime so they don't actually consume these options — but surfacing
them in the generated YAML:
  - keeps the import experience consistent with vllm
  - gives users a single visible place to override
  - documents the intended parser for a given model family

* test(mlx): add helper unit tests + TokenizeString/Free + e2e make targets

- backend/python/mlx/test.py: add TestSharedHelpers with server-less
  unit tests for parse_options, messages_to_dicts, split_reasoning and
  parse_tool_calls (using a SimpleNamespace shim to fake a tool module
  without requiring a model). Plus test_tokenize_string and test_free
  RPC tests that load a tiny MLX-quantized Llama and exercise the new
  RPCs end-to-end.

- backend/python/mlx-vlm/test.py: same helper unit tests + cleanup of
  the duplicated import block at the top of the file.

- Makefile: register BACKEND_MLX and BACKEND_MLX_VLM (they were missing
  from the docker-build-target eval list — only mlx-distributed had a
  generated target before). Add test-extra-backend-mlx and
  test-extra-backend-mlx-vlm convenience targets that build the
  respective image and run tests/e2e-backends with the tools capability
  against mlx-community/Qwen2.5-0.5B-Instruct-4bit. The MLX backend
  auto-detects the tool parser from the chat template so no
  BACKEND_TEST_OPTIONS is needed (unlike vllm).

* fix(libbackend): don't pass --copies to venv unless PORTABLE_PYTHON=true

backend/python/common/libbackend.sh:ensureVenv() always invoked
'python -m venv --copies', but macOS system python (and some other
builds) refuses with:

    Error: This build of python cannot create venvs without using symlinks

--copies only matters when _makeVenvPortable later relocates the venv,
which only happens when PORTABLE_PYTHON=true. Make --copies conditional
on that flag and fall back to default (symlinked) venv otherwise.

Caught while bringing up the mlx backend on Apple Silicon — the same
build path is used by every Python backend with USE_PIP=true.

* fix(mlx): support mlx-lm 0.29.x tool calling + drop deprecated clear_cache

The released mlx-lm 0.29.x ships a much simpler tool-calling API than
HEAD: TokenizerWrapper detects the <tool_call>...</tool_call> markers
from the tokenizer vocab and exposes has_tool_calling /
tool_call_start / tool_call_end, but does NOT expose a tool_parser
callable on the wrapper and does NOT ship a mlx_lm.tool_parsers
subpackage at all (those only exist on main).

Caught while running the smoke test on Apple Silicon with the
released mlx-lm 0.29.1: tokenizer.tool_parser raised AttributeError
(falling through to the underlying HF tokenizer), so
_tool_module_from_tokenizer always returned None and tool calls slipped
through as raw <tool_call>...</tool_call> text in Reply.message instead
of being parsed into ChatDelta.tool_calls.

Fix: when has_tool_calling is True but tokenizer.tool_parser is missing,
default the parse_tool_call callable to json.loads(body.strip()) — that's
exactly what mlx_lm.tool_parsers.json_tools.parse_tool_call does on HEAD
and covers the only format 0.29 detects (<tool_call>JSON</tool_call>).
Future mlx-lm releases that ship more parsers will be picked up
automatically via the tokenizer.tool_parser attribute when present.

Also tighten the LoadModel logging — the old log line read
init_kwargs.get('tool_parser_type') which doesn't exist on 0.29 and
showed None even when has_tool_calling was True. Log the actual
tool_call_start / tool_call_end markers instead.

While here, switch Free()'s Metal cache clear from the deprecated
mx.metal.clear_cache to mx.clear_cache (mlx >= 0.30), with a
fallback for older releases. Mirrored to the mlx-vlm backend.

* feat(mlx-distributed): mirror MLX parity (tool calls + ChatDelta + sampler)

Same treatment as the mlx and mlx-vlm backends: emit Reply.chat_deltas
with structured tool_calls / reasoning_content / token counts /
logprobs, expand sampling parameter coverage beyond temp+top_p, and
add the missing TokenizeString and Free RPCs.

Notes specific to mlx-distributed:

- Rank 0 is the only rank that owns a sampler — workers participate in
  the pipeline-parallel forward pass via mx.distributed and don't
  re-implement sampling. So the new logits_params (repetition_penalty,
  presence_penalty, frequency_penalty) and stop_words apply on rank 0
  only; we don't need to extend coordinator.broadcast_generation_params,
  which still ships only max_tokens / temperature / top_p to workers
  (everything else is a rank-0 concern).
- Free() now broadcasts CMD_SHUTDOWN to workers when a coordinator is
  active, so they release the model on their end too. The constant is
  already defined and handled by the existing worker loop in
  backend.py:633 (CMD_SHUTDOWN = -1).
- Drop the locally-defined is_float / is_int / parse_options trio in
  favor of python_utils.parse_options, re-exported under the module
  name for back-compat with anything that imported it directly.
- _prepare_prompt: route through messages_to_dicts so tool_call_id /
  tool_calls / reasoning_content survive, pass tools=json.loads(
  request.Tools) and enable_thinking=True to apply_chat_template, fall
  back on TypeError for templates that don't accept those kwargs.
- New _tool_module_from_tokenizer (with the json.loads fallback for
  mlx-lm 0.29.x), _finalize_output, _truncate_at_stop helpers — same
  contract as the mlx backend.
- LoadModel logs the auto-detected has_tool_calling / has_thinking /
  tool_call_start / tool_call_end so users can see what the wrapper
  picked up for the loaded model.
- backend/python/mlx-distributed/test.py: add the same TestSharedHelpers
  unit tests (parse_options, messages_to_dicts, split_reasoning,
  parse_tool_calls) that exist for mlx and mlx-vlm.
2026-04-13 18:44:03 +02:00
Ettore Di Giacinto
d67623230f feat(vllm): parity with llama.cpp backend (#9328)
* fix(schema): serialize ToolCallID and Reasoning in Messages.ToProto

The ToProto conversion was dropping tool_call_id and reasoning_content
even though both proto and Go fields existed, breaking multi-turn tool
calling and reasoning passthrough to backends.

* refactor(config): introduce backend hook system and migrate llama-cpp defaults

Adds RegisterBackendHook/runBackendHooks so each backend can register
default-filling functions that run during ModelConfig.SetDefaults().

Migrates the existing GGUF guessing logic into hooks_llamacpp.go,
registered for both 'llama-cpp' and the empty backend (auto-detect).
Removes the old guesser.go shim.
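
A minimal sketch of the hook mechanism (signatures assumed, not the
committed code):

    package config

    // ModelConfig trimmed to the one field the registry needs here.
    type ModelConfig struct {
        Backend string
    }

    // BackendHook fills backend-specific defaults during SetDefaults().
    type BackendHook func(cfg *ModelConfig)

    var backendHooks = map[string][]BackendHook{}

    func RegisterBackendHook(backend string, hook BackendHook) {
        backendHooks[backend] = append(backendHooks[backend], hook)
    }

    func (cfg *ModelConfig) runBackendHooks() {
        for _, hook := range backendHooks[cfg.Backend] {
            hook(cfg)
        }
    }

    // hooks_llamacpp.go would then register the GGUF guesser twice:
    //   RegisterBackendHook("llama-cpp", llamaCppDefaults)
    //   RegisterBackendHook("", llamaCppDefaults) // empty backend = auto-detect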

* feat(config): add vLLM parser defaults hook and importer auto-detection

Introduces parser_defaults.json mapping model families to vLLM
tool_parser/reasoning_parser names, with longest-pattern-first matching.

The vllmDefaults hook auto-fills tool_parser and reasoning_parser
options at load time for known families, while the VLLMImporter writes
the same values into generated YAML so users can review and edit them.

Adds tests covering MatchParserDefaults, hook registration via
SetDefaults, and the user-override behavior.
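
A sketch of the longest-pattern-first matching (JSON shape and field
names assumed):

    package config

    import (
        "sort"
        "strings"
    )

    type ParserDefault struct {
        Pattern         string `json:"pattern"`
        ToolParser      string `json:"tool_parser"`
        ReasoningParser string `json:"reasoning_parser"`
    }

    // MatchParserDefaults returns the most specific (longest) pattern found
    // in the model name, so e.g. a "qwen3" entry beats a plain "qwen" one.
    func MatchParserDefaults(model string, defaults []ParserDefault) *ParserDefault {
        sort.SliceStable(defaults, func(i, j int) bool {
            return len(defaults[i].Pattern) > len(defaults[j].Pattern)
        })
        for i := range defaults {
            if strings.Contains(strings.ToLower(model), strings.ToLower(defaults[i].Pattern)) {
                return &defaults[i]
            }
        }
        return nil
    }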

* feat(vllm): wire native tool/reasoning parsers + chat deltas + logprobs

- Use vLLM's ToolParserManager/ReasoningParserManager to extract structured
  output (tool calls, reasoning content) instead of reimplementing parsing
- Convert proto Messages to dicts and pass tools to apply_chat_template
- Emit ChatDelta with content/reasoning_content/tool_calls in Reply
- Extract prompt_tokens, completion_tokens, and logprobs from output
- Replace boolean GuidedDecoding with proper GuidedDecodingParams from Grammar
- Add TokenizeString and Free RPC methods
- Fix missing `time` import used by load_video()

* feat(vllm): CPU support + shared utils + vllm-omni feature parity

- Split vllm install per acceleration: move generic `vllm` out of
  requirements-after.txt into per-profile after files (cublas12, hipblas,
  intel) and add CPU wheel URL for cpu-after.txt
- requirements-cpu.txt now pulls torch==2.7.0+cpu from PyTorch CPU index
- backend/index.yaml: register cpu-vllm / cpu-vllm-development variants
- New backend/python/common/vllm_utils.py: shared parse_options,
  messages_to_dicts, setup_parsers helpers (used by both vllm backends)
- vllm-omni: replace hardcoded chat template with tokenizer.apply_chat_template,
  wire native parsers via shared utils, emit ChatDelta with token counts,
  add TokenizeString and Free RPCs, detect CPU and set VLLM_TARGET_DEVICE
- Add test_cpu_inference.py: standalone script to validate CPU build with
  a small model (Qwen2.5-0.5B-Instruct)

* fix(vllm): CPU build compatibility with vllm 0.14.1

Validated end-to-end on CPU with Qwen2.5-0.5B-Instruct (LoadModel, Predict,
TokenizeString, Free all working).

- requirements-cpu-after.txt: pin vllm to 0.14.1+cpu (pre-built wheel from
  GitHub releases) for x86_64 and aarch64. vllm 0.14.1 is the newest CPU
  wheel whose torch dependency resolves against published PyTorch builds
  (torch==2.9.1+cpu). Later vllm CPU wheels currently require
  torch==2.10.0+cpu which is only available on the PyTorch test channel
  with incompatible torchvision.
- requirements-cpu.txt: bump torch to 2.9.1+cpu, add torchvision/torchaudio
  so uv resolves them consistently from the PyTorch CPU index.
- install.sh: add --index-strategy=unsafe-best-match for CPU builds so uv
  can mix the PyTorch index and PyPI for transitive deps (matches the
  existing intel profile behaviour).
- backend.py LoadModel: vllm >= 0.14 removed AsyncLLMEngine.get_model_config
  so the old code path errored out with AttributeError on model load.
  Switch to the new get_tokenizer()/tokenizer accessor with a fallback
  to building the tokenizer directly from request.Model.

* fix(vllm): tool parser constructor compat + e2e tool calling test

Concrete vLLM tool parsers override the abstract base's __init__ and
drop the tools kwarg (e.g. Hermes2ProToolParser only takes tokenizer).
Instantiating with tools= raised TypeError which was silently caught,
leaving chat_deltas.tool_calls empty.

Retry the constructor without the tools kwarg on TypeError — tools
aren't required by these parsers since extract_tool_calls finds tool
syntax in the raw model output directly.

Validated with Qwen/Qwen2.5-0.5B-Instruct + hermes parser on CPU:
the backend correctly returns ToolCallDelta{name='get_weather',
arguments='{"location": "Paris, France"}'} in ChatDelta.

test_tool_calls.py is a standalone smoke test that spawns the gRPC
backend, sends a chat completion with tools, and asserts the response
contains a structured tool call.

* ci(backend): build cpu-vllm container image

Add the cpu-vllm variant to the backend container build matrix so the
image registered in backend/index.yaml (cpu-vllm / cpu-vllm-development)
is actually produced by CI.

Follows the same pattern as the other CPU python backends
(cpu-diffusers, cpu-chatterbox, etc.) with build-type='' and no CUDA.
backend_pr.yml auto-picks this up via its matrix filter from backend.yml.

* test(e2e-backends): add tools capability + HF model name support

Extends tests/e2e-backends to cover backends that:
- Resolve HuggingFace model ids natively (vllm, vllm-omni) instead of
  loading a local file: BACKEND_TEST_MODEL_NAME is passed verbatim as
  ModelOptions.Model with no download/ModelFile.
- Parse tool calls into ChatDelta.tool_calls: new "tools" capability
  sends a Predict with a get_weather function definition and asserts
  the Reply contains a matching ToolCallDelta. Uses UseTokenizerTemplate
  with OpenAI-style Messages so the backend can wire tools into the
  model's chat template.
- Need backend-specific Options[]: BACKEND_TEST_OPTIONS lets a test set
  e.g. "tool_parser:hermes,reasoning_parser:qwen3" at LoadModel time.

Adds make target test-extra-backend-vllm that:
- docker-build-vllm
- loads Qwen/Qwen2.5-0.5B-Instruct
- runs health,load,predict,stream,tools with tool_parser:hermes

Drops backend/python/vllm/test_{cpu_inference,tool_calls}.py — those
standalone scripts were scaffolding used while bringing up the Python
backend; the e2e-backends harness now covers the same ground uniformly
alongside llama-cpp and ik-llama-cpp.

* ci(test-extra): run vllm e2e tests on CPU

Adds tests-vllm-grpc to the test-extra workflow, mirroring the
llama-cpp and ik-llama-cpp gRPC jobs. Triggers when files under
backend/python/vllm/ change (or on run-all), builds the local-ai
vllm container image, and runs the tests/e2e-backends harness with
BACKEND_TEST_MODEL_NAME=Qwen/Qwen2.5-0.5B-Instruct, tool_parser:hermes,
and the tools capability enabled.

Uses ubuntu-latest (no GPU) — vllm runs on CPU via the cpu-vllm
wheel we pinned in requirements-cpu-after.txt. Frees disk space
before the build since the docker image + torch + vllm wheel is
sizeable.

* fix(vllm): build from source on CI to avoid SIGILL on prebuilt wheel

The prebuilt vllm 0.14.1+cpu wheel from GitHub releases is compiled with
SIMD instructions (AVX-512 VNNI/BF16 or AMX-BF16) that not every CPU
supports. GitHub Actions ubuntu-latest runners SIGILL when vllm spawns
the model_executor.models.registry subprocess for introspection, so
LoadModel never reaches the actual inference path.

- install.sh: when FROM_SOURCE=true on a CPU build, temporarily hide
  requirements-cpu-after.txt so installRequirements installs the base
  deps + torch CPU without pulling the prebuilt wheel, then clone vllm
  and compile it with VLLM_TARGET_DEVICE=cpu. The resulting binaries
  target the host's actual CPU.
- backend/Dockerfile.python: accept a FROM_SOURCE build-arg and expose
  it as an ENV so install.sh sees it during `make`.
- Makefile docker-build-backend: forward FROM_SOURCE as --build-arg
  when set, so backends that need source builds can opt in.
- Makefile test-extra-backend-vllm: call docker-build-vllm via a
  recursive $(MAKE) invocation so FROM_SOURCE flows through.
- .github/workflows/test-extra.yml: set FROM_SOURCE=true on the
  tests-vllm-grpc job. Slower but reliable — the prebuilt wheel only
  works on hosts that share the build-time SIMD baseline.

Answers 'did you test locally?': yes, end-to-end on my local machine
with the prebuilt wheel (CPU supports AVX-512 VNNI). The CI runner CPU
gap was not covered locally — this commit plugs that gap.

* ci(vllm): use bigger-runner instead of source build

The prebuilt vllm 0.14.1+cpu wheel requires SIMD instructions (AVX-512
VNNI/BF16) that stock ubuntu-latest GitHub runners don't support —
vllm.model_executor.models.registry SIGILLs on import during LoadModel.

Source compilation works but takes 30-40 minutes per CI run, which is
too slow for an e2e smoke test. Instead, switch tests-vllm-grpc to the
bigger-runner self-hosted label (already used by backend.yml for the
llama-cpp CUDA build) — that hardware has the required SIMD baseline
and the prebuilt wheel runs cleanly.

FROM_SOURCE=true is kept as an opt-in escape hatch:
- install.sh still has the CPU source-build path for hosts that need it
- backend/Dockerfile.python still declares the ARG + ENV
- Makefile docker-build-backend still forwards the build-arg when set
Default CI path uses the fast prebuilt wheel; source build can be
re-enabled by exporting FROM_SOURCE=true in the environment.

* ci(vllm): install make + build deps on bigger-runner

bigger-runner is a bare self-hosted runner used by backend.yml for
docker image builds — it has docker but not the usual ubuntu-latest
toolchain. The make-based test target needs make, build-essential
(cgo in 'go test'), and curl/unzip (the Makefile protoc target
downloads protoc from github releases).

protoc-gen-go and protoc-gen-go-grpc come via 'go install' in the
install-go-tools target, which the setup-go action makes possible.

* ci(vllm): install libnuma1 + libgomp1 on bigger-runner

The vllm 0.14.1+cpu wheel ships a _C C++ extension that dlopens
libnuma.so.1 at import time. When the runner host doesn't have it,
the extension silently fails to register its torch ops, so
EngineCore crashes on init_device with:

  AttributeError: '_OpNamespace' '_C_utils' object has no attribute
    'init_cpu_threads_env'

Also add libgomp1 (OpenMP runtime, used by torch CPU kernels) to be
safe on stripped-down runners.

* feat(vllm): bundle libnuma/libgomp via package.sh

The vllm CPU wheel ships a _C extension that dlopens libnuma.so.1 at
import time; torch's CPU kernels in turn use libgomp.so.1 (OpenMP).
Without these on the host, vllm._C silently fails to register its
torch ops and EngineCore crashes with:

  AttributeError: '_OpNamespace' '_C_utils' object has no attribute
    'init_cpu_threads_env'

Rather than asking every user to install libnuma1/libgomp1 on their
host (or every LocalAI base image to ship them), bundle them into
the backend image itself — same pattern fish-speech and the GPU libs
already use. libbackend.sh adds ${EDIR}/lib to LD_LIBRARY_PATH at
run time so the bundled copies are picked up automatically.

- backend/python/vllm/package.sh (new): copies libnuma.so.1 and
  libgomp.so.1 from the builder's multilib paths into ${BACKEND}/lib,
  preserving soname symlinks. Runs during Dockerfile.python's
  'Run backend-specific packaging' step (which already invokes
  package.sh if present).
- backend/Dockerfile.python: install libnuma1 + libgomp1 in the
  builder stage so package.sh has something to copy (the Ubuntu
  base image otherwise only has libgomp in the gcc dep chain).
- test-extra.yml: drop the workaround that installed these libs on
  the runner host — with the backend image self-contained, the
  runner no longer needs them, and the test now exercises the
  packaging path end-to-end the way a production host would.

* ci(vllm): disable tests-vllm-grpc job (heterogeneous runners)

Both ubuntu-latest and bigger-runner have inconsistent CPU baselines:
some instances support the AVX-512 VNNI/BF16 instructions the prebuilt
vllm 0.14.1+cpu wheel was compiled with, others SIGILL on import of
vllm.model_executor.models.registry. The libnuma packaging fix doesn't
help when the wheel itself can't be loaded.

FROM_SOURCE=true compiles vllm against the actual host CPU and works
everywhere, but takes 30-50 minutes per run — too slow for a smoke
test on every PR.

Comment out the job for now. The test itself is intact and passes
locally; run it via 'make test-extra-backend-vllm' on a host with the
required SIMD baseline. Re-enable when:
  - we have a self-hosted runner label with guaranteed AVX-512 VNNI/BF16, or
  - vllm publishes a CPU wheel with a wider baseline, or
  - we set up a docker layer cache that makes FROM_SOURCE acceptable

The detect-changes vllm output, the test harness changes (tests/
e2e-backends + tools cap), the make target (test-extra-backend-vllm),
the package.sh and the Dockerfile/install.sh plumbing all stay in
place.
2026-04-13 11:00:29 +02:00
Ettore Di Giacinto
2865f0f8d3 feat(ux): backend management enhancement (#9325)
* feat: add PreferDevelopmentBackends setting, expose isMeta/isDevelopment in API

- Add PreferDevelopmentBackends config field, CLI flag, runtime setting
- Add IsDevelopment() method to GalleryBackend
- Use AvailableBackendsUnfiltered in UI API to show all backends
- Expose isMeta, isDevelopment, preferDevelopmentBackends in backend API response

* feat: upgrade banner with Upgrade All button, detect pre-existing backends

- Add upgrade banner on Backends page showing count and Upgrade All button
- Fix upgrade detection for backends installed before version tracking:
  flag as upgradeable when gallery has a version but installed has none
- Fix OCI digest check to flag backends with no stored digest as upgradeable
2026-04-12 00:35:22 +02:00
Ettore Di Giacinto
8ab0744458 feat: backend versioning, upgrade detection and auto-upgrade (#9315)
* feat: add backend versioning data model foundation

Add Version, URI, and Digest fields to BackendMetadata for tracking
installed backend versions and enabling upgrade detection. Add Version
field to GalleryBackend. Add UpgradeAvailable/AvailableVersion fields
to SystemBackend. Implement GetImageDigest() for lightweight OCI digest
lookups via remote.Head. Record version, URI, and digest at install time
in InstallBackend() and propagate version through meta backends.
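
A sketch of the digest lookup; remote.Head is go-containerregistry's
API, the surrounding call site is assumed:

    package gallery

    import (
        "github.com/google/go-containerregistry/pkg/name"
        "github.com/google/go-containerregistry/pkg/v1/remote"
    )

    // GetImageDigest resolves the manifest digest with a HEAD-style
    // request, so no layers are downloaded.
    func GetImageDigest(image string) (string, error) {
        ref, err := name.ParseReference(image)
        if err != nil {
            return "", err
        }
        desc, err := remote.Head(ref)
        if err != nil {
            return "", err
        }
        return desc.Digest.String(), nil
    }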

* feat: add backend upgrade detection and execution logic

Add CheckBackendUpgrades() to compare installed backend versions/digests
against gallery entries, and UpgradeBackend() to perform atomic upgrades
with backup-based rollback on failure. Includes Agent A's data model
changes (Version/URI/Digest fields, GetImageDigest).

* feat: add AutoUpgradeBackends config and runtime settings

Add configuration and runtime settings for backend auto-upgrade:
- RuntimeSettings field for dynamic config via API/JSON
- ApplicationConfig field, option func, and roundtrip conversion
- CLI flag with LOCALAI_AUTO_UPGRADE_BACKENDS env var
- Config file watcher support for runtime_settings.json
- Tests for ToRuntimeSettings, ApplyRuntimeSettings, and roundtrip

* feat(ui): add backend version display and upgrade support

- Add upgrade check/trigger API endpoints to config and api module
- Backends page: version badge, upgrade indicator, upgrade button
- Manage page: version in metadata, context-aware upgrade/reinstall button
- Settings page: auto-upgrade backends toggle

* feat: add upgrade checker service, API endpoints, and CLI command

- UpgradeChecker background service: checks every 6h, auto-upgrades when enabled
- API endpoints: GET /backends/upgrades, POST /backends/upgrades/check, POST /backends/upgrade/:name
- CLI: `localai backends upgrade` command, version display in `backends list`
- BackendManager interface: add UpgradeBackend and CheckUpgrades methods
- Wire upgrade op through GalleryService backend handler
- Distributed mode: fan-out upgrade to worker nodes via NATS

* fix: use advisory lock for upgrade checker in distributed mode

In distributed mode with multiple frontend instances, use PostgreSQL
advisory lock (KeyBackendUpgradeCheck) so only one instance runs
periodic upgrade checks and auto-upgrades. Prevents duplicate
upgrade operations across replicas.

Standalone mode is unchanged (simple ticker loop).
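
A sketch of the gate (key constant and wiring assumed):

    package services

    import "database/sql"

    // Only the replica that wins pg_try_advisory_lock runs the periodic
    // upgrade check; the others skip this tick and retry on the next one.
    func tryUpgradeCheckLock(db *sql.DB, key int64) (bool, error) {
        var acquired bool
        err := db.QueryRow(`SELECT pg_try_advisory_lock($1)`, key).Scan(&acquired)
        return acquired, err
    }

    func releaseUpgradeCheckLock(db *sql.DB, key int64) error {
        _, err := db.Exec(`SELECT pg_advisory_unlock($1)`, key)
        return err
    }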

* test: add e2e tests for backend upgrade API

- Test GET /api/backends/upgrades returns 200 (even with no upgrade checker)
- Test POST /api/backends/upgrade/:name accepts request and returns job ID
- Test full upgrade flow: trigger upgrade via API, wait for job completion,
  verify run.sh updated to v2 and metadata.json has version 2.0.0
- Test POST /api/backends/upgrades/check returns 200
- Fix nil check for applicationInstance in upgrade API routes
2026-04-11 22:31:15 +02:00
Ettore Di Giacinto
e00ce981f0 fix: try to add whisperx and faster-whisper for more variants (#9278)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2026-04-08 21:23:38 +02:00
Ettore Di Giacinto
223deb908d fix(nats): improve error handling (#9222)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2026-04-04 12:11:54 +02:00
Ettore Di Giacinto
59108fbe32 feat: add distributed mode (#9124)
* feat: add distributed mode (experimental)

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* fix data races, mutexes, transactions

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* refactorings

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* fixups

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* fix events and tool stream in agent chat

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* use ginkgo

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* refactoring and consolidation

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* refactoring and consolidation

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* refactoring and consolidation

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* refactoring and consolidation

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* refactoring and consolidation

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* refactoring and consolidation

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* refactoring and consolidation

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* refactoring and consolidation

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* fix(cron): compute time boundaries correctly, avoiding re-triggering

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* enhancements, refactorings

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* do not flood with health checks

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* do not list obvious backends as text backends

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* tests fixups

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* refactoring and consolidation

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* Drop redundant healthcheck

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* enhancements, refactorings

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

---------

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2026-03-30 00:47:27 +02:00
Ettore Di Giacinto
5d410e5a03 fix(download): do not remove dst dir until we try all fallbacks (#9100)
This actually caused fallbacks to be a complete no-op, as we were
removing the destination dir before calling containerd.Apply

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2026-03-22 10:29:57 +01:00
Ettore Di Giacinto
031a36c995 feat: inferencing default, automatic tool parsing fallback and wire min_p (#9092)
* feat: wire min_p

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* feat: inferencing defaults

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* chore(refactor): re-use iterative parser

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* chore: automatically generate inference defaults from unsloth

Instead of trying to re-invent the wheel and maintain the inference
defaults here, prefer to consume the unsloth ones, and contribute there
as necessary.

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* chore: apply defaults also to models installed via gallery

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* chore: be consistent and apply the fallback to all endpoints

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

---------

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2026-03-22 00:57:15 +01:00
Ettore Di Giacinto
d9c1db2b87 feat: add (experimental) fine-tuning support with TRL (#9088)
* feat: add fine-tuning endpoint

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* feat(experimental): add fine-tuning endpoint and TRL support

This changeset defines new gRPC signatures for fine-tuning backends, and
adds the TRL backend as the initial fine-tuning engine. This
implementation also supports exporting to GGUF and automatically
importing the result into LocalAI after fine-tuning.

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* commit TRL backend, stop by killing process

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* move fine-tune to generic features

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* add evals, reorder menu

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* Fix tests

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

---------

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2026-03-21 02:08:02 +01:00
lif
e0ab1a8b43 fix: use exact tag matching for model gallery tag filtering (#9041)
The Search() method uses strings.Contains() on comma-joined tags,
causing substring false positives (e.g., "asr" matching "image-diffusers").

Add FilterByTag() method that checks each tag with strings.EqualFold()
for exact, case-insensitive matching. Add 'tag' query parameter to
/api/models and /api/backends endpoints. Update the React frontend to
send filter selections as 'tag' instead of 'term'.
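
A sketch of the exact-match filter (list and model types assumed):

    package gallery

    import "strings"

    type Model struct {
        Tags []string
    }

    // FilterByTag keeps models carrying the tag exactly, case-insensitively,
    // so "asr" no longer matches "image-diffusers" the way a substring check
    // over comma-joined tags did.
    func FilterByTag(models []*Model, tag string) []*Model {
        var out []*Model
        for _, m := range models {
            for _, t := range m.Tags {
                if strings.EqualFold(t, tag) {
                    out = append(out, m)
                    break
                }
            }
        }
        return out
    }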

Closes #8775

Signed-off-by: majiayu000 <1835304752@qq.com>
2026-03-20 08:37:45 +01:00
Richard Palethorpe
cfb7641eea feat(ui, gallery): Show model backends and add searchable model/backend selector (#9060)
* feat(ui, gallery): Display and filter by the backend models use

Signed-off-by: Richard Palethorpe <io@richiejp.com>

* feat(ui): Add searchable model backend/model selector and prevent delete models being selected

Signed-off-by: Richard Palethorpe <io@richiejp.com>

---------

Signed-off-by: Richard Palethorpe <io@richiejp.com>
2026-03-18 21:14:41 +01:00
LocalAI [bot]
bf4f8da266 fix: include model name in mmproj file path to prevent model isolation (#8937) (#8940)
* fix: include model name in mmproj file path to prevent model isolation issues

This fix addresses issue #8937 where different models with mmproj files
having the same filename (e.g., mmproj-F32.gguf) would overwrite each other.

By including the model name in the path (llama-cpp/mmproj/<model-name>/<filename>),
each model's mmproj files are now stored in separate directories, preventing
the collision that caused conversations to fail when switching between models.
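
A sketch of the per-model layout (helper names hypothetical):

    package gallery

    import "path/filepath"

    // Each model gets its own subdirectory, so two models shipping an
    // identically named mmproj-F32.gguf no longer collide.
    func mmprojPath(base, modelName, file string) string {
        return filepath.Join(base, "llama-cpp", "mmproj", modelName, file)
    }

    func modelFilePath(base, modelName, file string) string {
        return filepath.Join(base, "llama-cpp", "models", modelName, file)
    }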

Fixes #8937

Signed-off-by: LocalAI Bot <localai-bot@example.com>

* test: update test expectations for model name in mmproj path

The test file had hardcoded expectations for the old mmproj path format.
Updated the test expectations to include the model name subdirectory
to match the new path structure introduced in the fix.

Fixes CI failures on tests-apple and tests-linux

* fix: add model name to model path for consistency with mmproj path

This change makes the model path consistent with the mmproj path by
including the model name subdirectory in both paths:
- mmproj: llama-cpp/mmproj/<model-name>/<filename>
- model: llama-cpp/models/<model-name>/<filename>

This addresses the reviewer's feedback that the model config generation
needs to correctly reference the mmproj file path.

Fixes the issue where the model path didn't include the model name
subdirectory while the mmproj path did.

Signed-off-by: team-coding-agent-1 <team-coding-agent-1@localai.dev>

---------

Signed-off-by: LocalAI Bot <localai-bot@example.com>
Signed-off-by: team-coding-agent-1 <team-coding-agent-1@localai.dev>
Co-authored-by: team-coding-agent-1 <team-coding-agent-1@localai.dev>
2026-03-11 10:28:37 +01:00
Ettore Di Giacinto
05a3d00924 chore(size): display size of HF models and allow to specify it from the gallery (#8907)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2026-03-09 17:38:14 +01:00
LocalAI [bot]
2334556a8f feat(cli): add configurable backend image fallback tags via CLI options (#8817)
* feat(cli): add configurable backend image fallback tags via CLI options

- Add three new CLI flags: --backend-images-release-tag, --backend-images-branch-tag, --backend-dev-suffix
- Add corresponding fields to SystemState for passing configuration
- Add WithBackendImagesReleaseTag, WithBackendImagesBranchTag, WithBackendDevSuffix options
- Modify getFallbackTagValues to use SystemState instead of environment variables
- Pass CLI options through to SystemState in run.go

Signed-off-by: localai-bot <localai-bot@users.noreply.github.com>

* fix: add missing os import in core/gallery/backends.go

Signed-off-by: localai-bot <localai-bot@users.noreply.github.com>

---------

Signed-off-by: localai-bot <localai-bot@users.noreply.github.com>
Co-authored-by: localai-bot <localai-bot@users.noreply.github.com>
2026-03-08 21:16:37 +01:00
LocalAI [bot]
c187b160e7 fix(gallery): clean up partially downloaded backend on installation failure (#8679)
When a backend download fails (e.g., on Mac OS with port conflicts causing
connection issues), the backend directory is left with partial files.
This causes subsequent installation attempts to fail with 'run file not
found' because the sanity check runs on an empty/partial directory.

This fix cleans up the backend directory when the initial download fails
before attempting fallback URIs or mirrors. This ensures a clean state
for retry attempts.
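
A sketch of the retry flow (helper signatures hypothetical):

    package gallery

    import "os"

    // installWithFallbacks resets the destination dir between attempts, so
    // a partial first download can't fail the sanity check of a later retry.
    func installWithFallbacks(dst string, attempts []func() error) error {
        var err error
        for i, attempt := range attempts {
            if i > 0 {
                os.RemoveAll(dst) // clear partial files from the previous attempt
            }
            if err = attempt(); err == nil {
                return nil
            }
        }
        return err
    }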

Fixes: #8016

Signed-off-by: localai-bot <localai-bot@users.noreply.github.com>
Co-authored-by: localai-bot <localai-bot@users.noreply.github.com>
2026-02-28 13:10:53 +01:00
LocalAI [bot]
959458f0db fix(gallery): add fallback URI resolution for backend installation (#8663)
* fix(gallery): add fallback URI resolution for backend installation

When a backend installation fails (e.g., due to missing 'latest-' tag),
try fallback URIs in order:
1. Replace 'latest-' with 'master-' in the URI
2. If that fails, append '-development' to the backend name

This fixes the issue where backend index entries don't match the
repository tags. For example, installing 'ace-step' tries to download
'latest-gpu-nvidia-cuda-13-ace-step' but only 'master-gpu-nvidia-cuda-13-ace-step'
exists in the quay.io registry.

Fixes: #8437
Signed-off-by: localai-bot <139863280+localai-bot@users.noreply.github.com>

* chore(gallery): make fallback URI patterns configurable via env vars

---------

Signed-off-by: localai-bot <139863280+localai-bot@users.noreply.github.com>
2026-02-27 10:56:33 +01:00
LocalAI [bot]
8bfe458fbc fix: change file permissions from 0600 to 0644 in InstallModel (#8657)
Closes #8119

When installing models from the gallery, files are created with 0600
permissions (owner read/write only), making them unreadable by the
LocalAI server when running as a different user.

This fix changes the permissions to 0644 (owner read/write, group/others
read), allowing the server to read model files regardless of the user
it runs as.
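
A sketch of the change (call site assumed):

    package gallery

    import "os"

    // 0644 instead of 0600: group/others get read access, so a server
    // running under a different user than the installer can load the file.
    func createModelFile(dst string) (*os.File, error) {
        return os.OpenFile(dst, os.O_WRONLY|os.O_CREATE|os.O_TRUNC, 0644)
    }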

Co-authored-by: localai-bot <localai-bot@users.noreply.github.com>
2026-02-26 09:38:54 +01:00
Richard Palethorpe
074a982853 fix(gallery): Use YAML v3 to avoid merging maps with incompatible keys (#8580)
Signed-off-by: Richard Palethorpe <io@richiejp.com>
2026-02-16 14:10:19 +01:00
Copilot
673a80a578 feat: Filter backend gallery by system capabilities (#7950)
* Initial plan

* Add backend gallery filtering based on system capabilities

Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>

* Refactor L4T backend check to come before NVIDIA check

Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>

* Refactor: move capabilities business logic to capabilities.go and use constants

Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>

* feat: display system capability in webui and refactor tests

Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>

* chore: rename System/Capability

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* refactor: use getSystemCapabilities in IsBackendCompatible for consistency

Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>

* refactor: keep unused constants private in capabilities.go

Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>

* fix: skip AMD/ROCm and Intel/SYCL tests on darwin

Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>

---------

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
Co-authored-by: copilot-swe-agent[bot] <198982749+Copilot@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
Co-authored-by: Ettore Di Giacinto <mudler@localai.io>
2026-01-10 23:34:01 +01:00
lif
d7b2eee08f fix: add nil checks before mergo.Merge to prevent panic in gallery model installation (#7785)
Fixes #7420

Added nil checks before calling mergo.Merge in InstallModelFromGallery and InstallModel
functions to prevent panic when req.Overrides or configOverrides are nil. The panic was
occurring at models.go:248 during Qwen-Image-Edit gallery model download.

Changes:
- Added nil check for req.Overrides before merging in InstallModelFromGallery (line 126)
- Added nil check for configOverrides before merging in InstallModel (line 248)
- Added test case to verify nil configOverrides are handled without panic
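
A sketch of the guard (types and import path assumed):

    package gallery

    import "github.com/imdario/mergo"

    type ModelConfig struct{ /* fields omitted */ }

    // Skip the merge entirely when no overrides were supplied; handing a
    // nil source to mergo.Merge is what triggered the panic.
    func applyOverrides(cfg, overrides *ModelConfig) error {
        if overrides == nil {
            return nil
        }
        return mergo.Merge(cfg, overrides, mergo.WithOverride)
    }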

Signed-off-by: majiayu000 <1835304752@qq.com>
2025-12-30 09:51:45 +01:00
Ettore Di Giacinto
c37785b78c chore(refactor): move logging to common package based on slog (#7668)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-12-21 19:33:13 +01:00
Ettore Di Giacinto
fc5b9ebfcc feat(loader): enhance single active backend to support LRU eviction (#7535)
* feat(loader): refactor single active backend support to LRU

This changeset introduces LRU management of loaded backends. Users can
set now a maximum number of models to be loaded concurrently, and, when
setting LocalAI in single active backend mode we set LRU to 1 for
backward compatibility.
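
A minimal sketch of the eviction bookkeeping (the real loader also has
to stop the evicted backend process; names assumed):

    package model

    import (
        "sync"
        "time"
    )

    type lruTracker struct {
        mu        sync.Mutex
        maxLoaded int // set to 1 in single-active-backend mode
        lastUsed  map[string]time.Time
    }

    // evictCandidate returns the least recently used model once the cap is
    // reached; the caller unloads it before loading the new one.
    func (l *lruTracker) evictCandidate() (string, bool) {
        l.mu.Lock()
        defer l.mu.Unlock()
        if l.maxLoaded <= 0 || len(l.lastUsed) < l.maxLoaded {
            return "", false
        }
        oldest := ""
        for name, t := range l.lastUsed {
            if oldest == "" || t.Before(l.lastUsed[oldest]) {
                oldest = name
            }
        }
        return oldest, oldest != ""
    }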

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* chore: add tests

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* Update docs

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* Fixups

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

---------

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-12-12 12:28:38 +01:00
Ettore Di Giacinto
3b5c2ea633 feat(ui): allow to order search results (#7507)
* feat(ui): improve table view and let items to be sorted

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* refactorings

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* chore: add tests

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* chore: use constants

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

---------

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-12-11 00:11:33 +01:00
Ettore Di Giacinto
8ca98c90ea chore(importers/llama.cpp): add models to 'llama-cpp' subfolder (#7450)
This makes paths predictable, and avoids multiple model files showing
up in the main view

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-12-07 21:44:57 +01:00
Copilot
16e5689162 feat(importers): Add diffuser backend importer with ginkgo tests and UI support (#7316)
* Initial plan

* Add diffuser backend importer with ginkgo tests

Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>

* Finalize diffuser backend importer implementation

Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>

* Add diffuser preferences to model-editor import section

Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>

* Use gopkg.in/yaml.v3 for consistency in diffuser importer

Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>

---------

Co-authored-by: copilot-swe-agent[bot] <198982749+Copilot@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2025-11-20 22:38:30 +01:00
Ettore Di Giacinto
382474e4a1 fix: do not delete files if used by other configured models (#7235)
* WIP

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* fix: prevent deletion of model files shared by multiple configurations (#7317)

* Initial plan

* fix: do not delete files if used by other configured models

- Fixed bug in DeleteModelFromSystem where OR was used instead of AND for file suffix check
- Fixed bug where model config filename comparison was incorrect
- Added comprehensive Ginkgo test to verify shared model files are not deleted

Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>

* fix: prevent deletion of model files shared by multiple configurations

Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>

---------

Co-authored-by: copilot-swe-agent[bot] <198982749+Copilot@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

---------

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
Co-authored-by: Copilot <198982749+Copilot@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2025-11-20 14:55:51 +01:00
Ettore Di Giacinto
77bbeed57e feat(importer): unify importing code with CLI (#7299)
* feat(importer): support ollama and OCI, unify code

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* feat: support importing from local file

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* support also yaml config files

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* Correctly handle local files

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* Extract importing errors

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* Add importer tests

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* Add integration tests

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* chore(UX): improve and specify supported URI formats

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* fail if backend does not have a runfile

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* Adapt tests

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* feat(gallery): add cache for galleries

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* fix(ui): remove handler duplicate

File input handlers are now handled by Alpine.js @change handlers in chat.html.
Removed duplicate listeners to prevent files from being processed twice

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* fix(ui): be consistent in attachments in the chat

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* Fail if no importer matches

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* fix: propagate ops correctly

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* Fixups

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

---------

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-11-19 20:52:11 +01:00
Ettore Di Giacinto
be8cf838c2 feat(importers): add transformers and vLLM (#7278)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-11-15 22:47:09 +01:00
Ettore Di Giacinto
735ca757fa feat(ui): allow to cancel ops (#7264)
* feat(ui): allow to cancel ops

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* Improve progress text

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* Cancel queued ops, don't always show the cancellation message

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* fix: fixup displaying of total progress over multiple files

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

---------

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-11-13 18:41:47 +01:00
Ettore Di Giacinto
b1d1f2a37d chore(importers): small logic enhancements (#7262)
* chore(import): import mmproj files to specific folder

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* Slightly enhance logics

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

---------

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-11-12 22:08:08 +01:00
Ettore Di Giacinto
3728552e94 feat: import models via URI (#7245)
* feat: initial hook to install elements directly

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* WIP: ui changes

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* Move HF api client to pkg

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* Add simple importer for gguf files

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* Add opcache

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* wire importers to CLI

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* Add omitempty to config fields

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* Fix tests

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* Add MLX importer

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* Small refactors to start using HF for discovery

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* Add tests

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* Common preferences

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* Add support to bare HF repos

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* feat(importer/llama.cpp): add support for mmproj files

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* add mmproj quants to common preferences

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* Fix vlm usage in tokenizer mode with llama.cpp

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

---------

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-11-12 20:48:56 +01:00
Ettore Di Giacinto
d424a27fa2 chore: display warning only when directory is present (#7050)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-11-03 18:56:47 +01:00
Ettore Di Giacinto
b8f40dde1e feat: do also text match (#6891)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-10-29 17:18:56 +01:00
Ettore Di Giacinto
f452a027a2 chore(gallery search): fuzzy with case insensitive (#6490)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-10-17 09:02:28 +02:00
Ettore Di Giacinto
83534f8e00 feat(gallery): add fuzzy search (#6481)
chore(model gallery): add fuzzy search

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-10-16 18:51:33 +02:00
Ettore Di Giacinto
7a36e8d967 chore(ui): skip duplicated entries in search list (#6425)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-10-10 16:35:05 +02:00
Sertaç Özercan
ebbcba342a fix: runtime capability detection for backends (#6149)
* runtime capability detection for backends

Signed-off-by: Sertac Ozercan <sozercan@gmail.com>

* test

Signed-off-by: Sertac Ozercan <sozercan@gmail.com>

* skip nvidia on darwin

Signed-off-by: Sertac Ozercan <sozercan@gmail.com>

* address review comments

Signed-off-by: Sertac Ozercan <sozercan@gmail.com>

* fix apple test

Signed-off-by: Sertac Ozercan <sozercan@gmail.com>

* remove unused func

Signed-off-by: Sertac Ozercan <sozercan@gmail.com>

---------

Signed-off-by: Sertac Ozercan <sozercan@gmail.com>
2025-09-11 10:46:19 +02:00
Ettore Di Giacinto
79a41a5e07 fix: register backends to model-loader during installation (#6159)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-08-28 19:11:02 +02:00
Ettore Di Giacinto
1d830ce7dd feat(mlx): add mlx backend (#6049)
* chore: allow to install with pip

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* WIP

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* Make the backend to build and actually work

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* List models from system only

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* Add script to build darwin python backends

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* Run protogen in libbackend

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* Detect if mps is available across python backends

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* CI: try to build backend

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* Debug CI

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* Fixups

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* Fixups

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* Index mlx-vlm

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* Remove mlx-vlm

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* Drop CI test

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

---------

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-08-22 08:42:29 +02:00
Ettore Di Giacinto
7050c9f69d feat(webui): add import/edit model page (#6050)
* feat(webui): add import/edit model page

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* Convert to a YAML editor

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* Pass by the baseurl

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* Fixups

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* Add tests

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* Simplify

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* Improve visibility of the yaml editor

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* Add test file

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* Make reset work

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* Emit error only if we can't delete the model yaml file

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

---------

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-08-14 23:48:09 +02:00
Ettore Di Giacinto
089efe05fd feat(backends): add system backend, refactor (#6059)
- Add a system backend path
- Refactor and consolidate system information in system state
- Use system state in all the components to figure out the system paths
  to used whenever needed
- Refactor BackendConfig -> ModelConfig. The old name was otherwise
  misleading, as we now have a backend configuration which is not the
  model config.

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-08-14 19:38:26 +02:00
Ettore Di Giacinto
19c92c70c5 fix(backend-detection): default to CPU if there is less than 4GB of GPU available (#6057)
fix(gpu-detection): default to CPU if there is less than 4GB of GPU memory available

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-08-14 16:57:33 +02:00
Ettore Di Giacinto
b52bfaf1b3 fix: do not show invalid backends (#6058)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-08-14 13:01:56 +02:00
Ettore Di Giacinto
90f5639639 feat(backends): allow backends to not have a metadata file (#5963)
In this case we generate one on the fly and we infer the metadata we
can.

Obviously this has the side effect of not being able to register
potential aliases.

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-08-03 16:47:02 +02:00
Ettore Di Giacinto
a35a701052 feat(backends): install from local path (#5962)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-08-03 14:24:50 +02:00