core/http/app_test.go had grown to 1495 lines exercising three concerns at
once: HTTP-layer integration, real-backend inference (llama-gguf, tts,
stablediffusion, transformers embeddings, whisper), and service logic that
already has unit-level coverage. Each PR paid for 6 backend builds plus
real-model downloads to satisfy a single suite.
Reorg per layer:
- app_test.go (1495 -> 1003 lines) drives the mock-backend binary only.
Kept: auth, routing, gallery API, file:// import, /system, agent-jobs
HTTP plumbing, config-file model loading. Deleted real-inference specs
(llama-gguf chat, ggml completions/streaming, logprobs, logit_bias,
transcription, embeddings, External-gRPC, Stores duplicate, Model gallery
Context). Lifted Agent Jobs out of the deleted Stores Context.
- tests/e2e-backends/backend_test.go gains logprobs, logit_bias, and
no-first-token-dup specs (the latter folded into PredictStream). Two
new caps gate them so non-LLM backends opt out.
- tests/e2e-aio/e2e_test.go gains a streaming smoke under Context("text")
to catch container-level streaming regressions.
- tests/models_fixtures/ removed; all fixtures referenced testmodel.ggml.
app_test.go now writes per-Context inline mock-model YAMLs.
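The per-Context YAML pattern looks roughly like this (a sketch; the
model name, mock backend id, and fields are illustrative, and the
Ginkgo/Gomega dot-imports plus os/filepath are assumed):

    // Sketch only: each Context writes exactly the model config it
    // needs instead of sharing a fixture tree.
    BeforeEach(func() {
        yml := []byte("name: testmodel\nbackend: mock\n")
        Expect(os.WriteFile(filepath.Join(modelsPath, "testmodel.yaml"),
            yml, 0o644)).To(Succeed())
    })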
CI:
- test.yml + tests-e2e.yml gain paths-ignore (docs/, examples/, *.md,
backend/) so docs and backend-only PRs skip them. test.yml drops the
6-backend Build step plus TRANSFORMER_BACKEND/GO_TAGS=tts; tests-apple
drops the llama-cpp-darwin build.
- New tests-aio.yml runs the AIO container nightly + on workflow_dispatch
+ master/tags. The tests-e2e-container job moved out of test.yml so PRs
no longer pay AIO cost.
- New tests-llama-cpp-smoke job in test-extra.yml runs on every PR with
no detect-changes gate; pulls quay.io/go-skynet/local-ai-backends:
master-cpu-llama-cpp (no build on PR) and exercises predict/stream/
logprobs/logit_bias against Qwen3-0.6B. This is the PR-acceptance
real-backend gate after AIO moved to nightly. The path-gated heavy
test-extra-backend-llama-cpp wrapper appends the same caps so it
exercises the moved specs when the backend actually changes.
Makefile:
- Deleted test-models/testmodel.ggml (the wget chain), test-llama-gguf,
test-tts, test-stablediffusion, test-realtime-models. test target
drops --label-filter, HUGGINGFACE_GRPC, TRANSFORMER_BACKEND, TEST_DIR,
FIXTURES, CONFIG_FILE, MODELS_PATH, BACKENDS_PATH; depends on
build-mock-backend. test-stores keeps a focused entry point and depends
on backends/local-store. clean-tests also clears the mock-backend
binary.
Net effect for a typical Go-side PR: ~25min (6 backend builds + tests +
AIO) plus ~8min e2e drops to ~5min of mock-backend tests plus ~8min e2e
plus ~5-10min llama-cpp smoke (image pulled, not built). Docs and
backend-only PRs skip the always-on workflows entirely.
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
Assisted-by: claude-code:claude-opus-4-7 [Edit] [Write] [Bash]
PR #9583 changed the supervisor's process map key from `modelID` to
`modelID#replicaIndex`, but the NATS lifecycle handlers kept passing
the bare modelID:
* `backend.stop` (subscribeLifecycleEvents): `s.stopBackend(req.Backend)`
→ `s.processes["Qwen3.6-..."]` missed (actual key is "...#0") →
silent no-op. Admin "Unload model" clicks released VRAM via
model.unload but left the gRPC process alive on its old port.
Subsequent chats hit installBackend, found the leftover process,
reused its address — and the UI reported "no models loaded" while
the model kept responding.
* `backend.delete` (subscribeLifecycleEvents): same map miss in
`isRunning(req.Backend)` and `s.stopBackend(req.Backend)` — admin
"Delete backend" deleted the binary while the process was still
serving traffic.
Add `resolveProcessKeys(id)`: exact match if `id` is a full processKey
(stopAllBackends iterates the map and passes its own keys);
prefix-match if `id` is bare (NATS handlers); empty if `id` contains
`#` but doesn't match (no spurious fallback when the caller was
explicit). stopBackend and isRunning now call it; stopBackend gets a
new stopBackendExact helper for per-key cleanup.
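A minimal sketch of the resolver (map shape assumed from the
description above; uses the strings package):

    // processes is keyed by "modelID#replicaIndex" since PR #9583.
    func (s *supervisor) resolveProcessKeys(id string) []string {
        if _, ok := s.processes[id]; ok {
            return []string{id} // full processKey (stopAllBackends path)
        }
        if strings.Contains(id, "#") {
            return nil // explicit key that doesn't exist: no fallback
        }
        var keys []string // bare modelID (NATS handlers): prefix-match
        for k := range s.processes {
            if strings.HasPrefix(k, id+"#") {
                keys = append(keys, k)
            }
        }
        return keys
    }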
TDD: regression test fails without the fix (resolveProcessKeys
doesn't exist; map lookup by bare name returns nothing). Tests pass
post-fix.
Reproduced live: registry row count was 0 for the model the user
"Unloaded", chat still served by the leftover worker process.
SmartRouter behavior is correct in itself — it falls through to
scheduleAndLoad when no row exists; the bug was that the leftover
process corrupted the install path.
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
Assisted-by: claude-code:opus-4-7 [Edit] [Bash]
The register endpoint called SetNodeLabels(req.Labels) — replace-all
semantics — so every worker re-register wiped every label not in the
worker's body. The bug existed since labels were introduced in
PR #9186 (Mar 31), but only triggered for workers that supplied
labels via --node-labels.
PR #9583 (the multi-replica refactor) added an auto-mirrored
`node.replica-slots` label to every worker's registration body, which
made `len(req.Labels) > 0` always true — turning a latent edge-case
bug into a universal one. Operators reported "labels assigned to
node do not persist": labels survived until the next worker restart,
then disappeared.
Fix: iterate req.Labels and call SetNodeLabel (upsert) for each
instead of SetNodeLabels (delete-then-recreate). Worker-managed
labels still refresh on re-register; UI-added labels survive.
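A sketch of the changed registration loop (handler and store names
assumed):

    // Before: s.db.SetNodeLabels(nodeID, req.Labels) deleted and
    // recreated the whole label set.
    // After: per-key upsert; labels absent from req.Labels are untouched.
    for k, v := range req.Labels {
        if err := s.db.SetNodeLabel(nodeID, k, v); err != nil {
            return err
        }
    }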
Trade-off: an operator who removes a label from --node-labels won't
have it auto-removed from the DB on next register — they can clean it
via the UI. Acceptable, since the alternative (current behavior)
silently destroys operator state.
Regression test added first (TDD): RegisterNodeEndpoint registers a
node, the test simulates a UI add via SetNodeLabel, then re-registers
with a different worker label set and asserts that the UI-added label
survives. The test fails against the broken code and passes against
the fix.
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
Assisted-by: claude-code:opus-4-7 [Edit] [Bash]
The multi-replica refactor (PR #9583) changed the worker's process key
from `modelID` to `modelID#replicaIndex`, but the BackendLogStore kept
the bare-modelID lookup. Result: every distributed deployment lost
backend logs in the Nodes UI — single-replica too, since even the
default capacity of 1 produces a `#0` suffix.
Two changes wired together:
* pkg/model: BackendLogStore.GetLines/Subscribe now treat a modelID
without `#` as a model prefix and merge across all `modelID#N` replica
buffers (timestamp-sorted for GetLines; fan-in for Subscribe). Calls
with a full `modelID#N` key resolve exactly (sketched after this
list). ListModels strips
replica suffixes and deduplicates so the listing surfaces one entry
per loaded model.
* react-ui: per-replica log streams as the default. Loaded Models
table disambiguates each row with a `rep N` pill (only when the node
hosts >1 replica of a model). Each row's "View logs" link routes to
the per-replica process key so operators see only that replica's
output. The logs page renders the replica context as a chip in the
title and surfaces a segmented control — `Replica 0 / 1 / … / All
merged` — when the model has multiple replicas; the merged segment
uses the bare-modelID URL (delegating to the store's prefix
aggregation) for the side-by-side comparison case. Single-replica
deployments see no extra UI.
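A sketch of the store's key resolution (buffer map and names assumed;
GetLines then merges the matched buffers timestamp-sorted, Subscribe
fans them in):

    func (s *BackendLogStore) keysFor(modelID string) []string {
        if strings.Contains(modelID, "#") {
            return []string{modelID} // full "modelID#N": one replica
        }
        var keys []string
        for k := range s.buffers { // bare modelID: every replica buffer
            if strings.HasPrefix(k, modelID+"#") {
                keys = append(keys, k)
            }
        }
        return keys
    }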
Tests added first (TDD): the regression set in
backend_log_store_test.go reproduces the bug at the exact failure
point — GetLines/ListModels/Subscribe assertions all fail against the
broken code, all pass against the fix. TestSubscribe_PerReplicaFilter
pins the exact-key path so a future change can't silently break it.
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
Assisted-by: claude-code:opus-4-7 [Edit] [Skill:critique] [Skill:audit] [Skill:polish] [Skill:distill]
The Manage view's flagsFor() short-circuited on b.IsMeta and returned
dev=false for every meta backend, so meta-dev entries
(e.g. llama-cpp-development, whisper-development, insightface-development)
leaked through the Development toggle in distributed mode and stayed
visible whether the toggle was on or off. The count chip even
under-reported because those rows were excluded from it.
Drop the IsMeta short-circuit and trust gallery enrichment for both
flags. Production metas (llama-cpp) are tagged isAlias=false /
isDevelopment=false in the gallery so they still pass both toggles;
meta-dev entries carry isDevelopment=true and now correctly hide
alongside concrete dev variants.
Assisted-by: Claude:claude-opus-4-7 [Claude Code]
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
macOS runners can't use the registry-backed BuildKit cache (no Docker
daemon), so every darwin matrix run was paying full cost for brew
installs, Go module downloads, llama.cpp recompiles and Python wheel
resolution.
Wires actions/cache@v4 into the reusable workflow for four caches:
- Go modules + build cache (setup-go cache: true), shared across matrix
- Homebrew downloads + selected /opt/homebrew/Cellar entries, with
HOMEBREW_NO_AUTO_UPDATE so restored Cellar paths stay stable
- ccache for the llama-cpp CMake variants, keyed on the pinned
LLAMA_VERSION; CMAKE_*_COMPILER_LAUNCHER is exported via GITHUB_ENV
so backend/cpp/llama-cpp/Makefile picks it up without script changes
- Python uv + pip wheel cache, keyed by backend + ISO week — same
one-cold-rebuild-per-week cadence as the Linux DEPS_REFRESH
Read/write semantics match the existing BuildKit policy: every run
restores, only master/tag pushes save, so PRs can't pollute master's
warm cache.
Documents the new caches and the macOS-specific constraints in
.agents/ci-caching.md.
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
Assisted-by: Claude:claude-opus-4-7[1m] [Claude Code]
* feat(distributed): support multiple replicas of one model on the same node
The distributed scheduler implicitly assumed `(node_id, model_name)` was
unique, but the schema didn't enforce it and the worker keyed all gRPC
processes by model name alone. With `MinReplicas=2` against a single
worker, the reconciler "scaled up" every 30s but the registry never
advanced past 1 row — the worker re-loaded the model in-place every tick
until VRAM fragmented and the gRPC process died.
This change introduces multi-replica-per-node as a first-class concept,
with capacity-aware scheduling, a circuit breaker, and VRAM
soft-reservation. Operators can declare per-node capacity via the worker
flag `--max-replicas-per-model` (mirrored as auto-label
`node.replica-slots=N`) or override per-node from the UI.
* Schema: BackendNode gains MaxReplicasPerModel (default 1) and
ReservedVRAM. NodeModel gains ReplicaIndex (composite with node_id +
model_name). ModelSchedulingConfig gains UnsatisfiableUntil/Ticks for
the reconciler circuit breaker.
* Registry: replica_index threaded through SetNodeModel, RemoveNodeModel,
IncrementInFlight, DecrementInFlight, TouchNodeModel, GetNodeModel,
SetNodeModelLoadInfo and the InFlightTrackingClient. New helpers:
CountReplicasOnNode, NextFreeReplicaIndex (with ErrNoFreeSlot),
RemoveAllNodeModelReplicas, FindNodesWithFreeSlot,
ClusterCapacityForModel, ReserveVRAM/ReleaseVRAM (atomic UPDATE with
ErrInsufficientVRAM), and the unsatisfiable-flag CRUD.
* Worker: processKey now `<modelID>#<replicaIndex>` (sketched after
  this list) so concurrent loads of the same model land on distinct
  ports. Adds CLI flag
--max-replicas-per-model (env LOCALAI_MAX_REPLICAS_PER_MODEL, default 1)
and emits the auto-label.
* Router: scheduleNewModel filters candidates by free slot, allocates the
replica index, and soft-reserves VRAM before installing the backend.
evictLRUAndFreeNode now deletes the targeted row by ID instead of all
replicas of the model on the node — fixes a latent bug where evicting
one replica orphaned its siblings.
* Reconciler: caps scale-up at ClusterCapacityForModel so a misconfig
(MinReplicas > capacity) doesn't loop forever. After 3 consecutive
ticks of capacity==0 it sets UnsatisfiableUntil for a 5m cooldown and
emits a warning. ClearAllUnsatisfiable fires from Register,
ApproveNode, SetNodeLabel(s), RemoveNodeLabel and
UpdateMaxReplicasPerModel so a new node joining or label changes wake
the reconciler immediately. scaleDownIdle removes highest-replica-index
first to keep slots compact.
* Heartbeat resets reserved_vram to 0 — worker is the source of truth
for actual free VRAM; the reservation is only for the in-tick race
window between two scheduling decisions.
* Probe path (reconciler.probeLoadedModels and health.doCheckAll) now
pass the row's replica_index to RemoveNodeModel so an unreachable
replica doesn't orphan healthy siblings.
* Admin override: PUT /api/nodes/:id/max-replicas-per-model sets a
sticky override (preserved across worker re-registration). DELETE
clears the override so the worker's flag applies again on next
register. Required because Kong defaults the worker flag to 1, so
every worker restart would have silently reverted the UI value.
* React UI: always-visible slot badge on the node row (muted at default
1, accented when >1); inline editor in the expanded drawer with
pencil-to-edit, Save/Cancel, Esc/Enter, "(override)" indicator when
the value is admin-set, and a "Reset" button to hand control back to
the worker. Soft confirm when shrinking the cap below the count of
loaded replicas. Scheduling rules table gets an "Unsatisfiable until
HH:MM" status badge surfacing the cooldown.
* node.replica-slots filtered out of the labels strip on the row to
avoid duplicating the slot badge.
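Sketch of the key and slot mechanics referenced above (names follow
the description; surrounding plumbing elided, imports errors/fmt
assumed):

    var ErrNoFreeSlot = errors.New("no free replica slot on node")

    // processKey gives each replica its own gRPC process (and port).
    func processKey(modelID string, replica int) string {
        return fmt.Sprintf("%s#%d", modelID, replica)
    }

    // NextFreeReplicaIndex-style allocation: lowest unused index below
    // the node's MaxReplicasPerModel, else ErrNoFreeSlot.
    func nextFreeReplicaIndex(used map[int]bool, maxReplicas int) (int, error) {
        for i := 0; i < maxReplicas; i++ {
            if !used[i] {
                return i, nil
            }
        }
        return 0, ErrNoFreeSlot
    }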
23 new Ginkgo specs (registry, reconciler, inflight, health) cover:
multi-replica row independence, RemoveNodeModel of one replica
preserving siblings, NextFreeReplicaIndex slot allocation including
ErrNoFreeSlot, capacity-gated scale-up with circuit breaker tripping
and recovery on Register, scaleDownIdle ordering, ClusterCapacity
math, ReserveVRAM admission gating, Heartbeat reset, override survival
across worker re-registration, and ResetMaxReplicasPerModel handing
control back. Plus 8 stdlib tests for the worker processKey / CLI /
auto-label.
Closes the flap reproduced on Qwen3.6-35B against the nvidia-thor
worker (single 128 GiB node, MinReplicas=2): the reconciler now caps
the scale-up at the cluster's actual capacity instead of looping.
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
Assisted-by: claude-code:opus-4-7 [Read] [Edit] [Bash] [Skill:critique] [Skill:audit] [Skill:polish] [Skill:golang-testing]
* refactor(react-ui/nodes): tighten capacity editor copy + adopt ActionMenu for row actions
* Capacity editor hint trimmed from operator-doc-style ("Sourced from
the worker's `--max-replicas-per-model` flag. Changing it here makes it
a sticky admin override that survives worker restarts." → "Saved
values stick across worker restarts.") and the override-state copy
similarly compressed. The full mechanic is no longer needed in the UI
— the override pill carries the meaning and the docs cover the rest.
* Node row actions migrated from an inline cluster of icon buttons
(Drain / Resume / Trash) to the kebab ActionMenu used by /manage for
per-row model actions, so dense Nodes tables stay clean. Approve
stays as a prominent primary button — it's a stateful admission gate,
not a routine action, and elevating it matches how /manage surfaces
install-time decisions outside the menu.
* The expanded drawer's Labels section now filters node.replica-slots
out of the editable label list. The label is owned by the Capacity
editor above; surfacing it again as an editable label invited
confusion (the Capacity save would clobber any direct edit).
Both backend and agent workers benefit — they share the row rendering
path, so the action menu and label filter apply to both.
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
Assisted-by: claude-code:opus-4-7 [Edit] [chrome-devtools-mcp] [Skill:critique] [Skill:audit] [Skill:polish]
* fix(react-ui/nodes): suppress slot badge on agent workers
Agent workers don't load models, so the per-node replica capacity is
inapplicable to them. Showing "1× slots" on agent rows was a tiny
inconsistency from the unified rendering path — gate the badge on
node_type !== 'agent' so it only appears on backend workers.
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
Assisted-by: claude-code:opus-4-7 [Edit] [chrome-devtools-mcp]
* refactor(react-ui/nodes): distill expanded drawer + restyle scheduling form
The expanded node drawer used to stack five panels — slot badge,
filled capacity box, Loaded Models h4+empty-state, Installed Backends
h4+empty-state, Labels h4+chips+form — making routine inspections feel
like a control panel. The scheduling rule form wrapped its mode toggle
as two 50%-width filled buttons that competed visually with the actual
primary action.
* Drawer: collapse three rarely-touched config zones (Capacity,
Backends, Labels) into one `<details>` "Manage" disclosure (closed by
default) with small uppercase eyebrow labels for each zone instead of
parallel h4 sub-headings. Loaded Models stays as the at-a-glance
headline with a single-line empty hint instead of a boxed empty state.
CapacityEditor renders flat (no filled background) — the Manage
disclosure provides framing.
* Scheduling form: replace the chunky 50%-width button-tabs with the
project's existing `.segmented` control (icon + label, sized to
content). Mode hint becomes a single tidy line below. Fields stack
vertically with helper text under inputs and a hairline divider above
the right-aligned Save / Cancel.
The empty drawer collapses from ~5 stacked sections (~280px tall) to
two lines (~80px). The scheduling form now reads as a designed dialog
instead of raw building blocks. Both surfaces now match the typographic
density and weight of the rest of the admin pages.
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
Assisted-by: claude-code:opus-4-7 [Edit] [chrome-devtools-mcp] [Skill:distill] [Skill:audit] [Skill:polish]
* feat(react-ui/nodes): replace scheduling form's model picker with searchable combobox
The native <select> made operators scroll through every gallery entry to
find a model name. The project already has SearchableModelSelect (used
in Studio/Talk/etc.) which combines free-text search with the gallery
list and accepts typed model names that aren't installed yet — useful
for pre-staging a scheduling rule before the node it'll run on has
finished bootstrapping.
Also drops the now-unused useModels import (the combobox manages the
gallery hook internally).
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
Assisted-by: claude-code:opus-4-7 [Edit]
* refactor(react-ui/nodes): consolidate key/value chip editor + add replica preset chips
The Nodes page was rendering the same key=value chip pattern in two
places with subtly different markup: the Labels editor in the expanded
drawer and (post-distill) the Node Selector input in the scheduling
form. The form's input was also a comma-separated string that operators
were getting wrong.
* Extract <KeyValueChips> as a fully controlled chip-builder. Parent
owns the map and decides what onAdd/onRemove does — form state for the
scheduling form, API calls for the live drawer Labels editor. Same
visuals everywhere; one component to change when polish needs apply.
* Replace the comma-separated Node Selector text input with KeyValueChips.
Operators were copying syntax from docs and missing commas; the chip
vocabulary makes the key=value structure self-documenting.
* Add <ReplicaInput>: numeric input + quick-pick preset chips for Min/Max
replicas. Picked over a slider because replica counts are exact specs
derived from VRAM math (operator decision, not a fuzzy estimate). The
chips give one-click access to common values (1/2/3/4 for Min,
0=no-limit/2/4/8 for Max) without the slider's special-value problem
(MaxReplicas=0 is categorical, not a position on a continuum).
* Drop the now-unused labelInputs state in the Nodes page (the inline
label editor's per-node draft state lived there and is now owned by
KeyValueChips).
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
Assisted-by: claude-code:opus-4-7 [Edit] [Skill:distill]
* test: fix CI fallout from multi-replica refactor (e2e/distributed + playwright)
Two breakages caught by CI that didn't surface in the local run:
* tests/e2e/distributed/*.go — multiple files used the pre-PR2 registry
signatures for SetNodeModel / IncrementInFlight / DecrementInFlight /
RemoveNodeModel / TouchNodeModel / GetNodeModel / SetNodeModelLoadInfo
and one stale adapter.InstallBackend call in node_lifecycle_test.go.
All updated to pass replicaIndex=0 — these tests don't exercise
multi-replica behavior, they just need to compile against the new
signatures. The Ginkgo specs in core/services/nodes/ already
cover the multi-replica logic.
* core/http/react-ui/e2e/nodes-per-node-backend-actions.spec.js — the
drawer's distill refactor moved Backends inside a "Manage" <details>
disclosure that's collapsed by default. The test helper expanded the
node row but never opened Manage, so the per-node backend table was
never in the DOM. Helper now clicks `.node-manage > summary` after
expanding the row.
All 100 playwright tests pass locally; tests/e2e/distributed compiles
clean.
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
Assisted-by: claude-code:opus-4-7 [Edit] [Bash]
---------
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
The shared backend/Dockerfile.python ends in:
RUN cd /${BACKEND} && PORTABLE_PYTHON=true make
which `pip install`s each backend's requirements*.txt. A scan of all 34
Python backends shows every single one ships at least some unpinned deps
(torch, transformers, vllm, diffusers, ...). With the registry cache now
enabled, that `make` layer's BuildKit hash depends only on Dockerfile
instructions + COPYed source — not on what pip resolves at runtime — so
a warm cache would freeze upstream versions indefinitely.
DEPS_REFRESH is an ARG declared right before that RUN. backend_build.yml
computes `date -u +%Y-W%V` (ISO week, e.g. `2026-W17`) and passes it as
a build-arg, so the install layer invalidates at most once per week and
re-resolves PyPI / nightly indexes. Within a week, builds stay warm.
Only Dockerfile.python is affected: Go (go.sum) and Rust (Cargo.lock)
already lock their deps, and the C++ backends pull gRPC at a pinned tag
and llama.cpp at a pinned commit.
Add .agents/ci-caching.md documenting the cache layout
(quay.io/go-skynet/ci-cache:cache<tag-suffix>), read/write semantics
(master writes, PRs read-only), DEPS_REFRESH semantics, and how to
manually evict tags. Index it from AGENTS.md (CLAUDE.md is a symlink).
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
Assisted-by: claude-code:claude-opus-4-7-1m
The Dockerfile's HEALTHCHECK probes http://localhost:8080/readyz, which
is the OpenAI API server port. When the same image runs as a worker, it
listens on the gRPC base port (50051) and runs an HTTP file-transfer
server one port below (50050) — nothing on 8080 — so Docker always
reports the
container as unhealthy.
Add unauthenticated /readyz and /healthz endpoints to the worker's HTTP
file transfer server, and override HEALTHCHECK_ENDPOINT for worker-1 in
the distributed compose file. Disable the healthcheck for agent-worker
since it is NATS-only and exposes no HTTP server.
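The endpoints are plain liveness stubs on the existing mux (a sketch;
the mux name and wiring are assumed):

    for _, path := range []string{"/readyz", "/healthz"} {
        mux.HandleFunc(path, func(w http.ResponseWriter, _ *http.Request) {
            w.WriteHeader(http.StatusOK) // worker is up; nothing else to probe
            _, _ = w.Write([]byte("ok"))
        })
    }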
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
Assisted-by: claude-code:claude-opus-4-7
- Switch cache-from/cache-to in backend_build.yml and image_build.yml
from the unused gha cache to type=registry pointing at
quay.io/go-skynet/ci-cache:cache<tag-suffix>, mode=max with
ignore-error=true. Master/tag builds populate their own
per-matrix-entry cache; PR builds read-only.
- Drop the broken generate_grpc_cache.yaml cron. It targeted a `grpc`
Dockerfile stage that was removed by b1fc5acd in July 2025, has been
failing every night since, and never populated the gha cache. The new
registry-cache scheme is self-warming, so no separate populator is
needed.
- Remove the dead GRPC_VERSION / GRPC_BASE_IMAGE / GRPC_MAKEFLAGS
build-args from image_build.yml and the orphan ARG GRPC_BASE_IMAGE in
the root Dockerfile (the root Dockerfile no longer compiles gRPC; the
source build now lives in backend/Dockerfile.{llama-cpp,
ik-llama-cpp, turboquant} only and uses its own ARG defaults).
- Drop the unused grpc-base-image input from image_build.yml plus the
matrix passthroughs in image.yml / image-pr.yml.
- Drop the unused GRPC_VERSION env in test.yml.
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
Assisted-by: claude-code:claude-opus-4-7-1m
Replace the universal max-width:1200px cap on .page with a four-tier
archetype system (narrow 760, medium 1080, default 1600, wide unbounded)
selected per page based on what its UX actually wants. Data/table pages
fill ultrawide displays; forms cap at reading width; tabbed feature
surfaces breathe.
Mobile/tablet:
- New 640/1024 breakpoint split. Tablets (640-1023) get a persistent
52px icon rail; below 640 keeps the slide-off drawer.
- Drawer polish: body-scroll lock, Escape to close, focus moves into
the drawer on open and back to the hamburger on close, aria-hidden
+ inert on main while open.
- Mobile top bar carries hamburger + theme toggle + account avatar
(44x44 touch targets) so theme/account aren't trapped in the drawer.
- Page-level reflow on phones: page-header column-stacks, filter chips
scroll horizontally, tables go edge-to-edge, OperationsBar overflows
rather than wrapping. Honors prefers-reduced-motion.
Manage > Models: drop the toggle column; Enable/Disable joins the
per-row Actions menu alongside Stop/Pin/Edit/Logs/Delete for
consistency with the other action verbs.
Page-width tokens live in theme.css so future tuning is one line.
Removes 7 inline maxWidth workarounds from page roots.
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
Assisted-by: Claude Code:claude-opus-4-7 [Edit] [Bash]
Meta backends are now always shown — they're the entries operators
configure against — and two independent toggles govern the noise around
them. "Variants" hides platform-specific concrete builds that a meta
backend aliases on the host (e.g. llama-cpp-cuda12-12.4). "Development"
hides pre-release `-development` builds. Each toggle shows the count of
items currently hidden in its category. The legacy `bm` URL flag is
honored on read so existing deep-links resolve to the same view they
used to.
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
The overrides.parameters.model field referenced 'Qwen3.-27B-Claude-...'
(missing the '5'), so model loads failed because the configured
filename did not match the file actually downloaded by the entry's
files: list ('Qwen3.5-27B-Claude-...').
Aligns the override filename with the files: entries and with the upstream HF repo (mradermacher/Qwen3.5-27B-...).
Mirrors the whisper capabilities map with -development variants so
clients can pull the master-tagged whisper.cpp backend via a single
platform-resolved name, matching the existing faster-whisper-development
and whisperx-development entries.
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
In distributed mode the Backends gallery used to fan every install out to
every worker — fine for auto-resolving (meta) backends like llama-cpp where
each node picks its own variant, but wrong for hardware-specific builds
like cpu-llama-cpp that would silently land on every GPU node.
Adds a node-targeted install path through the existing
POST /api/nodes/:id/backends/install plumbing, with two entry points:
- Backends gallery row gets a split-button in distributed mode. Auto-
resolving keeps "Install on all nodes" as the primary; chevron menu
opens the picker. Hardware-specific routes the primary directly to the
picker — no fan-out path on the row.
- Nodes-page drawer gets a "+ Add backend" button that navigates to
/app/backends?target=<node-id>; the gallery scopes itself to that node
(banner, single per-row install button, Reinstall/Remove for already-
installed). One gallery, two scopes — no second UI to maintain.
The picker (new NodeInstallPicker) shows a 3-state suitability column
(Compatible / Override / Installed), an auto-expanding variant override
disclosure that fires when selected nodes have no working GPU, parallel
per-node installs with inline status and Retry-failed-nodes, and a
mismatch confirm that names the consequence on the button itself.
A 409 fan-out guard on /api/backends/apply protects CLI/Terraform/script
users from the same footgun: hardware-specific installs in distributed
mode now return code "concrete_backend_requires_target" with a human-
readable error and a meta_alternative pointer.
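A hedged sketch of the guard (Fiber-style handler and all field/helper
names assumed, not the actual code):

    // In the /api/backends/apply handler: refuse cluster-wide fan-out
    // of a concrete (non-meta) backend when no target node was named.
    if distributed && !isMeta(req.Backend) && req.TargetNode == "" {
        return c.Status(fiber.StatusConflict).JSON(fiber.Map{
            "code":             "concrete_backend_requires_target",
            "error":            "hardware-specific backend; pick a target node",
            "meta_alternative": metaFor(req.Backend), // e.g. "llama-cpp"
        })
    }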
The gallery list payload now surfaces capabilities, metaBackendFor and
per-row nodes (NodeBackendRef) so the picker and the new Nodes column
have everything they need without re-walking the gallery client-side.
GODEBUG=netdns=go is set on the compose services because the cgo DNS
resolver follows the container's nsswitch.conf to host systemd-resolved
(127.0.0.53), unreachable from inside the container; the pure-Go
resolver reads /etc/resolv.conf directly and uses Docker's embedded DNS.
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
Assisted-by: Claude Code:claude-opus-4-7[1m] [Edit] [Bash] [Read] [Write]
Manage page row actions moved into ActionMenu in b336d9c6, so the
inline `<a title="Backend logs">` the e2e specs were asserting on no
longer exists. Open the row's kebab and assert against the menuitem.
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
Assisted-by: Claude:claude-opus-4-7
Bring the System / Manage page up to the visual standard of the Install
gallery so installed models and backends stop reading like a debug dump.
- Unified ResourceRow anatomy (icon, name+description, badges, status,
expandable detail) shared across both tabs.
- Gallery enrichment cross-references installed names against the gallery
list endpoints to surface icons, descriptions, license, tags, and links
with a graceful "no description" fallback for custom imports.
- Header summary with four StatCards (Models / Backends / Running /
Updates) — clickable to switch tab + pre-set filter.
- Backends meta + development entries hidden by default; "Show meta &
development" paired toggle in the FilterBar with hidden-count hint.
- Kebab (three-dot) ActionMenu replaces the inline button cluster on
every row; restrained until hover, keyboard-navigable, danger items
separated by a divider.
- Backend "Version" cell now falls back to short digest, OCI tag, or
ocifile basename when no semver is set, instead of showing "—" for
every OCI install. Detail panel exposes full Source URI + Digest.
- Drop redundant column headers ("Actions", "On") — kebabs and toggles
carry their own affordance; screen readers still get a label.
- Inline System / User / Meta / Dev badges next to the backend name so
the dedicated Type column doesn't reserve space for "USER" repeated.
- Tightened the spacing between the System Resources card and the
StatCards so they no longer crowd the RAM bar.
Extracted StatCard and GalleryLoader from Nodes.jsx and Models.jsx into
shared components so the visual language is one source of truth.
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
Assisted-by: Claude Code:claude-opus-4-7 [Read] [Edit] [Write] [Bash]
The local model directory scan treats every non-skipped file as a model
config candidate. Sidecar artifacts that ship alongside checkpoints
(checkpoint blobs, downloaded archives, ggml-style tag files) were
slipping through and showing up as bogus models in the listing. Add
their extensions to the suffix-skip list.
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
Assisted-by: Claude:claude-opus-4-7 [Claude Code]
The chat and agent-chat pages auto-scrolled to the bottom on every
streamed token. If the user scrolled up to re-read part of a response,
the next chunk pulled them back down — making long replies unreadable
while streaming.
Track a stickToBottomRef on each scroll event: if the user is within
80px of the bottom we keep auto-scrolling, otherwise we leave them
where they are. On chat switch we snap back to the bottom and re-pin.
Same fix applied to both Chat.jsx and AgentChat.jsx since they share
the same streaming pattern.
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
Assisted-by: Claude:claude-opus-4-7 [Claude Code]
whisper.cpp can emit bytes that are not valid UTF-8 — typically a
multibyte codepoint split across token boundaries. protobuf string
fields reject those at marshal time, which would surface as a transcribe
failure. Run strings.ToValidUTF8 on the segment text before it leaves
the cgo boundary so the bad byte gets replaced with U+FFFD.
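The guard itself is one line (variable name assumed):

    // strings.ToValidUTF8 replaces each run of invalid bytes with
    // U+FFFD so the protobuf string field marshals cleanly.
    segText = strings.ToValidUTF8(segText, "\uFFFD")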
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
Assisted-by: Claude:claude-opus-4-7 [Claude Code]
- useModels.refetch now runs silently — distributed-mode 10s auto-refresh
no longer flips loading=true and replaces the table with a spinner card.
- Manage Use Cases column derives badges from each model's actual
capabilities (Chat / Image / TTS / Embeddings / etc.) instead of
hardcoding a "Chat" link for every row.
- FilterBar right slot is right-aligned via margin-left:auto so the
Update button lives at the end of the row, not next to the chips.
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
Assisted-by: Claude:claude-opus-4-7 [Claude Code]
- embeddings → embedding (6 models): aligns with the WebUI filter button
defined in core/http/views/models.html ({ term: 'embedding', ... }), so
models like nomic-embed-text-v1.5 now appear under the Embedding filter
- TTS → tts (5 models), ASR → asr (2 models): lowercase, per existing
convention used by 161+ models
- CPU/Cpu → cpu (17 models), GPU → gpu (17 models): lowercase, per existing
convention used by 666+ models
- dedupe duplicate tag entries on 3 models that already had repeated tags
(gpt-oss-20b had gguf x2; arcee-ai/AFM-4.5B had gpu x2; one Qwen model
had default x2)
Closes #9247
Extend the existing CPU build matrix entries to produce a multi-arch
manifest (linux/amd64,linux/arm64) at the same image tags. arm64
Linux hosts without an NVIDIA GPU report the "default" capability,
which already maps to cpu-whisperx / cpu-faster-whisper in
backend/index.yaml -- so the manifest list lets Docker pull the right
variant without any gallery changes.
Both stacks install cleanly under aarch64: torch (2.4.1/2.8.0),
faster-whisper, ctranslate2, whisperx, opencv-python and the
remaining deps all ship manylinux2014_aarch64 wheels, so no source
builds run under QEMU emulation.
Follows the same pattern already used by cpu-llama-cpp-quantization.
Assisted-by: Claude:claude-opus-4-7 [Claude Code]
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
The docs site uses the hugo-theme-relearn theme, which provides
`notice` instead of Docsy's `alert`. The face-recognition,
voice-recognition, and stores feature pages used `{{% alert %}}`,
breaking `hugo build` with "template for shortcode 'alert' not
found".
Assisted-by: Claude:claude-opus-4-7 [Claude Code]
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
Blaizzy/mlx-vlm git HEAD bumped its constraint to mlx>=0.31.2, but
mlx-cuda-12 and mlx-cuda-13 are only published up to 0.31.1 on PyPI.
Since mlx[cudaXX]==0.31.2 forces a sibling wheel that doesn't exist,
pip backtracks through every older mlx[cudaXX], none of which satisfy
mlx>=0.31.2, producing ResolutionImpossible.
Pin all variants to the v0.4.4 tag (mlx>=0.30.0), which resolves
cleanly against mlx[cuda13]==0.31.1. cpu/mps weren't broken yet but
are pinned for consistency.
Assisted-by: Claude:claude-opus-4-7
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
The pinned flash-attn 2.8.3+cu12torch2.7 wheel breaks at import time
once vllm 0.19.1 upgrades torch to its hard-pinned 2.10.0:
ImportError: .../flash_attn_2_cuda...so: undefined symbol:
_ZN3c104cuda29c10_cuda_check_implementationEiPKcS2_ib
That C10 CUDA symbol is libtorch-version-specific. Dao-AILab has not yet
published flash-attn wheels for torch 2.10 -- the latest release (2.8.3)
tops out at torch 2.8 -- so any wheel pinned here is silently ABI-broken
the moment vllm completes its install.
vllm 0.19.1 lists flashinfer-python==0.6.6 as a hard dep, which already
covers the attention path. The only other use of flash-attn in vllm is
the rotary apply_rotary import in
vllm/model_executor/layers/rotary_embedding/common.py, which is guarded
by find_spec("flash_attn") and falls back cleanly when absent.
Also unpin torch in requirements-cublas12.txt: the 2.7.0 pin only
existed to give the flash-attn wheel a matching torch to link against.
With flash-attn gone, vllm's own torch==2.10.0 dep is the binding
constraint regardless of what we put here.
Assisted-by: Claude:claude-opus-4-7 [Claude Code]
Signed-off-by: Richard Palethorpe <io@richiejp.com>
Adds split_mode (alias sm) to the llama.cpp backend options allowlist,
accepting none|layer|row|tensor. The tensor value targets the experimental
backend-agnostic tensor parallelism from ggml-org/llama.cpp#19378 and
requires a llama.cpp build that includes that PR, FlashAttention enabled,
KV-cache quantization disabled, and a manually set context size.
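A sketch of the value check (the real allowlist structure and helper
names may differ):

    var validSplitModes = map[string]bool{
        "none": true, "layer": true, "row": true, "tensor": true,
    }

    func checkSplitMode(v string) error {
        if !validSplitModes[v] {
            return fmt.Errorf("split_mode must be none|layer|row|tensor, got %q", v)
        }
        return nil
    }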
Assisted-by: Claude:claude-opus-4-7
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* feat(backends): add CUDA 13 + L4T arm64 CUDA 13 variants for vllm/vllm-omni/sglang
Adds new build profiles mirroring the diffusers/ace-step pattern so vLLM
serving (and SGLang on arm64) can be deployed on CUDA 13 hosts and
JetPack 7 boards:
- vllm: cublas13 (PyPI cu130 channel) + l4t13 (jetson-ai-lab SBSA cu130
prebuilt vllm + flash-attn).
- vllm-omni: cublas13 + l4t13. Floats vllm version on cu13 since vllm
0.19+ ships cu130 wheels by default and vllm-omni tracks vllm master;
cu12 path keeps the 0.14.0 pin to avoid disturbing existing images.
- sglang: l4t13 arm64 only — uses the prebuilt sglang wheel from the
jetson-ai-lab SBSA cu130 index, so no source build is needed.
Cublas13 sglang on x86_64 is intentionally deferred.
CI matrix gains five new images (-gpu-nvidia-cuda-13-vllm{,-omni},
-nvidia-l4t-cuda-13-arm64-{vllm,vllm-omni,sglang}); backend/index.yaml
gains the matching capability keys (nvidia-cuda-13, nvidia-l4t-cuda-13)
and latest/development merge entries.
Assisted-by: Claude:claude-opus-4-7 [Read] [Edit] [Write] [Bash]
* fix(backends): use unsafe-best-match index strategy on l4t13 builds
The jetson-ai-lab SBSA cu130 index lists transitive deps (decord, etc.)
at limited versions / older Python ABIs. uv defaults to the first index
that contains a package and refuses to fall through to PyPI, so sglang
l4t13 build fails resolving decord. Mirror the existing cpu sglang
profile by setting --index-strategy=unsafe-best-match on l4t13 across
the three backends, and apply it to the explicit vllm install line in
vllm-omni's install.sh (which doesn't honor EXTRA_PIP_INSTALL_FLAGS).
Assisted-by: Claude:claude-opus-4-7 [Read] [Edit] [Bash]
* fix(sglang): drop [all] extras on l4t13, floor version at 0.5.0
The [all] extra brings in outlines→decord, and decord has no aarch64
cp312 wheel on PyPI nor the jetson-ai-lab index (only legacy cp35-cp37
tags). With unsafe-best-match enabled, uv backtracked through sglang
versions trying to satisfy decord and silently landed on
sglang==0.1.16, an ancient version with an entirely different dep
tree (cloudpickle/outlines 0.0.44, etc.).
Drop [all] so decord is no longer required, and floor sglang at 0.5.0
to prevent any future resolver misfire from degrading the version
again.
Assisted-by: Claude:claude-opus-4-7 [Read] [Edit] [Bash]
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
---------
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* fix(distributed): surface per-node backend op errors to OpStatus
DistributedBackendManager.{Install,Upgrade,Delete}Backend discarded the
per-node BackendOpResult from enqueueAndDrainBackendOp with `_, err :=`.
When workers replied Success=false (e.g. an OCI image with no arm64
variant on a Jetson host), the per-node Error string was recorded in
result.Nodes[].Error but never reached the toplevel return value, so
OpStatus.Error stayed empty and the UI reported the install as
"completed" while the backend was nowhere on the cluster.
Add BackendOpResult.Err() that aggregates per-node Status=="error"
entries into a single error. Queued nodes (waiting for reconciler retry)
are deliberately not treated as failures. Wire the three callers and
DeleteBackendDetailed to call result.Err() so reply.Success=false
finally reaches OpStatus.Error → /api/backends/job/:uid → the UI.
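A sketch of the aggregation (field names assumed from the description;
queued nodes deliberately skipped; imports errors/fmt/strings):

    func (r *BackendOpResult) Err() error {
        var msgs []string
        for _, n := range r.Nodes {
            if n.Status == "error" { // "queued" is not a failure
                msgs = append(msgs, fmt.Sprintf("%s: %s", n.NodeID, n.Error))
            }
        }
        if len(msgs) == 0 {
            return nil
        }
        return errors.New(strings.Join(msgs, "; "))
    }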
The Delete closures had a related bug: they discarded the reply with
`_` and only checked the NATS round-trip error, so reply.Success=false
was a silent success even with the new aggregation. Check both.
Standalone mode (LocalBackendManager) already surfaces gallery errors
correctly through the same OpStatus.Error path; no change needed there.
Tests: 9 new Ginkgo specs covering all-success / all-fail with distinct
errors / mixed / all-queued / no-nodes for Install, Upgrade, Delete.
Assisted-by: Claude:claude-opus-4-7 [Bash] [Edit] [Read] [Write]
* feat(react-ui): per-node backend delete + clearer upgrade affordance
The Nodes page exposed a per-node "reinstall" button (fa-sync-alt,
tooltip "Reinstall backend") but no per-node delete, even though the
Go side has had POST /api/nodes/:id/backends/delete →
RemoteUnloaderAdapter.DeleteBackend → NATS-to-specific-node wired up
for a while. Sync icons read as "refresh data" — the action is
functionally an upgrade (re-pulls the gallery image), so the affordance
was misleading.
Per-node backend row now renders two icon buttons:
- Upgrade: btn-secondary btn-sm + fa-arrow-up, tooltip "Upgrade backend
on this node". Names both action and scope to differentiate from the
cluster-wide upgrade on the Backends page.
- Delete: btn-danger-ghost btn-sm + fa-trash, tooltip "Delete backend
from this node". Matches the node-level destructive style at the row
action column rather than the solid btn-danger of primary destructive
pages, since this is a secondary action inside a busy row.
Delete goes through the existing ConfirmDialog (danger=true) with copy
that names the backend and the node explicitly — it's a non-recoverable
op on a specific scope. Reuses nodesApi.deleteBackend(id, backend) which
already existed in the API client.
Tests: 4 new Playwright specs covering upgrade clarity (icon + tooltip),
delete button presence, confirm dialog flow with POST body assertion,
and cancel-doesn't-POST.
Assisted-by: Claude:claude-opus-4-7 [Bash] [Edit] [Read] [Write]
* feat(react-ui): editorial refresh with Nord palette and polished primitives
Replaces the cool gray-blue theme with a deep Nord-inspired palette:
frost-cyan accent (#88c0d0) on deep blue-black surfaces (#13171f /
#1a1f2a / #242a36), snow-storm text scale, aurora status colours.
- Typography: Geist Variable + Geist Mono Variable (Google Fonts) with
ss01/ss03/cv11 stylistic alternates; strengthened h1-h6 hierarchy;
editorial negative tracking.
- Primitives: buttons gain depth (inset highlight + hover lift +
brightness filter); inputs become sunken wells with sage-swap-to-frost
focus rings; cards hover-lift and gain an .card--accent left-rail
variant; badges become mono caps rectangles with tabular-nums.
- Chrome: sidebar active state is now an inset left rail + tint
(no border-left); modals get popIn animation and proper shadow lift;
toasts carry an inset accent bar + slide-in instead of tinted fills;
operations bar breathes on active installs.
- Empty states: editorial pattern (eyebrow rule, large mono title,
52ch lede) that inherits gracefully even without page JSX edits.
- Chat: assistant bubbles drop the gray-nested-in-gray card for a
transparent pull-quote with a left border; user bubbles soften from
loud accent fill to a subtle frost tint.
- Motion: custom spring easing cubic-bezier(0.22,1,0.36,1), 180ms
standard; breathing/pulse/popIn keyframes; global prefers-reduced-
motion honoring.
- Radii tightened to 3/5/8/10px; warm-shadow tokens redone for cool
depth; ::selection, :focus-visible, kbd globals added.
- Migrated hardcoded 'JetBrains Mono' CSS literals to var(--font-mono)
so the Geist Mono swap lands everywhere.
Scope is intentionally tokens + primitives only. Page JSX and the
~1,800 inline style={{…}} instances are untouched and flagged as
follow-ups.
Assisted-by: Claude:claude-opus-4-7 [Read] [Edit] [Write]
* feat(react-ui): complete-coverage pass — migrate inline styles to tokens
Follows up the editorial/Nord token refresh with a mechanical sweep of
page JSX and shared components so nothing bypasses the design system.
- Font family: replaced 80+ 'JetBrains Mono' / 'Space Grotesk' inline
literals (and the string-CSS variants in CollectionDetails and
AgentStatus) with var(--font-mono) / var(--font-sans). SVG <text>
nodes that used the attribute form were switched to style={{ }} so
the CSS variable resolves.
- Radii: every unquoted numeric borderRadius (2/3/4/10) is now a
var(--radius-*) token; 50% and 999px kept as computed shapes.
- Spacing: clean-token gaps and margins (4/8/16px) moved to
var(--spacing-xs/sm/md); padding: '4px 8px' and '8px 16px' lifted
into token pairs. Micro-values (2/6/10/12px) left inline where no
token maps cleanly.
- Colors: Talk.jsx button/canvas-surface hardcodes moved to
var(--color-*); FineTune.jsx chart series colours now use the
--color-data-* Nord palette (cyan/red/purple/orange instead of
tailwind hex); AgentStatus tool-call icon and error tag hex swapped
for var(--color-warning) / var(--color-text-inverse).
- CodeMirror editor (utils/cmTheme.js): both themes rebased on Nord —
polar-night surfaces and aurora syntax highlighting (dark), snow-
storm surfaces with darkened aurora (light). Caret/selection/active
line/search now frost-cyan tinted instead of legacy indigo/purple.
Legitimately dynamic styles (computed widths, per-row colours, canvas
2D context fill/stroke for waveform and spectrogram drawing) remain
inline — they can't be expressed as CSS tokens.
29 files, +237/-237 — identity preserved, semantics re-anchored to
the token system.
Assisted-by: Claude:claude-opus-4-7 [Read] [Edit] [Write]
Workers on NVIDIA unified-memory hardware (DGX Spark / GB10, Jetson AGX Thor,
Jetson Orin/Xavier/Nano) were reporting `available_vram=0` back to the frontend,
so the Nodes UI showed the node as fully used even when most of the unified
memory was actually free.
Three causes addressed:
* `isTegraDevice` only matched `/sys/devices/soc0/family == "Tegra"`. DGX Spark
(SBSA) reports JEDEC codes there instead — `jep106:0426` for the NVIDIA
manufacturer — so the Tegra/unified-memory fallback never ran. Renamed to
`isNVIDIAIntegratedGPU` and extended to also match `jep106:0426[:*]` via
  `/sys/devices/soc0/soc_id` (sketched after this list).
* The unified-iGPU code defaulted the device name to `"NVIDIA Jetson"` when
`/proc/device-tree/model` was missing. That's what happens for Thor inside a
docker container, and always on DGX Spark. New `nvidiaIntegratedGPUName`
resolves via dt-model → `/sys/devices/soc0/machine` → `soc_id` lookup
(`jep106:0426:8901` → `"NVIDIA GB10"`) so the Nodes UI labels the box
correctly.
* Worker heartbeat sent `available_vram=0` (or total-as-available) when VRAM
usage was momentarily unknown — e.g. when `nvidia-smi` intermittently failed
with `waitid: no child processes` under containers without `--init`. Each
such heartbeat overwrote the DB and made the UI flip to "fully used".
`heartbeatBody` now omits `available_vram` in that case so the DB keeps its
last good value.
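A sketch of the widened check (sysfs paths as described above; imports
os/strings):

    func isNVIDIAIntegratedGPU() bool {
        if b, err := os.ReadFile("/sys/devices/soc0/family"); err == nil &&
            strings.TrimSpace(string(b)) == "Tegra" {
            return true // Jetson (Tegra) boards
        }
        b, err := os.ReadFile("/sys/devices/soc0/soc_id")
        if err != nil {
            return false
        }
        id := strings.TrimSpace(string(b))
        // DGX Spark (SBSA) reports JEDEC codes; jep106:0426 is NVIDIA.
        return id == "jep106:0426" || strings.HasPrefix(id, "jep106:0426:")
    }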
Also updates the commented GPU blocks in both compose files with
`NVIDIA_DRIVER_CAPABILITIES=compute,utility`, `capabilities: [gpu, utility]`,
and `init: true`, and documents the requirement in the distributed-mode and
nvidia-l4t pages. Without `utility`, NVML/`nvidia-smi` are absent inside the
container, which is what put the DGX Spark worker into the buggy fallback in
the first place.
Detection verified on live hardware (dgx.casa / GB10 and 192.168.68.23 / Thor)
by running a cross-compiled probe of the new helpers on both host and inside
the worker container.
Assisted-by: Claude:opus-4.7 [Claude Code]