* fix(tests): inline model_test fixtures after tests/models_fixtures removal
The previous reorg removed tests/models_fixtures/ but core/config/model_test.go
still read CONFIG_FILE/MODELS_PATH env vars pointing into that directory, so
`make test` failed with "open : no such file or directory" on the readConfigFile
spec (the suite ran with --fail-fast and bailed before openresponses_test).
Inline the YAMLs (config/embeddings/grpc/rwkv/whisper) directly into the test
file, materialise them into a per-test tmpdir via BeforeEach, and drop the
env-var lookups. The test no longer depends on Makefile plumbing.
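A minimal sketch of the pattern (the fixture YAML, variable names and the final assertion here are illustrative, not the actual test code):

```go
package config_test

import (
	"os"
	"path/filepath"

	. "github.com/onsi/ginkgo/v2"
	. "github.com/onsi/gomega"
)

// inlineConfigYAML stands in for the YAML fixtures now embedded in the test file.
const inlineConfigYAML = `- name: list1
  parameters:
    model: testmodel
`

var _ = Describe("readConfigFile", func() {
	var configPath string

	BeforeEach(func() {
		// Materialise the inlined YAML into a per-test tmpdir instead of
		// resolving CONFIG_FILE/MODELS_PATH from the environment.
		tmpDir := GinkgoT().TempDir()
		configPath = filepath.Join(tmpDir, "config.yaml")
		Expect(os.WriteFile(configPath, []byte(inlineConfigYAML), 0o644)).To(Succeed())
	})

	It("materialises the fixture where the spec can read it", func() {
		Expect(configPath).To(BeARegularFile())
	})
})
```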
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
Assisted-by: claude-code:claude-opus-4-7 [Edit] [Write] [Bash]
* refactor(modeladmin): extract model-admin helpers into a service package
Lift the bodies of EditModelEndpoint, PatchConfigEndpoint,
ToggleStateModelEndpoint, TogglePinnedModelEndpoint and
VRAMEstimateEndpoint into core/services/modeladmin so the same logic can
be called by non-HTTP clients (notably the in-process MCP server that
backs the LocalAI Assistant chat modality, landing in a follow-up commit).
The HTTP handlers shrink to thin shells that parse echo inputs, call the
matching helper, map typed errors (ErrNotFound, ErrConflict,
ErrPathNotTrusted, ErrBadAction, ...) to the existing HTTP status codes,
and render the existing response shapes. No REST-surface behaviour change;
the existing localai endpoint tests serve as the regression net.
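A hedged sketch of the error-to-status mapping those shells perform; the sentinel names come from this commit, but the exact status chosen for each sentinel here is an assumption:

```go
package localai

import (
	"errors"
	"net/http"

	"github.com/mudler/LocalAI/core/services/modeladmin"
)

// statusFor maps the typed service errors onto HTTP status codes so every
// handler shell renders them consistently. The concrete mapping below is
// illustrative; the authoritative one lives in the handlers.
func statusFor(err error) int {
	switch {
	case err == nil:
		return http.StatusOK
	case errors.Is(err, modeladmin.ErrNotFound):
		return http.StatusNotFound
	case errors.Is(err, modeladmin.ErrConflict):
		return http.StatusConflict
	case errors.Is(err, modeladmin.ErrPathNotTrusted),
		errors.Is(err, modeladmin.ErrBadAction):
		return http.StatusBadRequest
	default:
		return http.StatusInternalServerError
	}
}
```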
Adds focused unit tests for each helper against tmp-dir-backed
ModelConfigLoader fixtures (deep-merge patch, rename + conflict, path
separator guard, toggle/pin enable/disable, sync callback).
Assisted-by: Claude:claude-opus-4-7 [Read] [Edit] [Write] [Bash]
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* feat(assistant): LocalAI Assistant chat modality with in-memory MCP server
Adds a chat modality, admin-only, that wires the chat session to an
in-memory MCP server exposing LocalAI's own admin/management surface as
tools. An admin can install models, manage backends, edit configs and
check status by chatting; the LLM calls tools like gallery_search,
install_model, import_model_uri, list_installed_models, edit_model_config
and surfaces the results.
Same Go package powers two modes:
pkg/mcp/localaitools/
NewServer(client, opts) builds an MCP server that registers the
19-tool admin catalog. The LocalAIClient interface has two impls:
- inproc.Client — calls services directly (no HTTP loopback,
no synthetic admin API key). Used in-process by the chat handler.
- httpapi.Client — calls the LocalAI REST API. Used by the new
`local-ai mcp-server --target=…` subcommand to control a remote
LocalAI from a stdio MCP host.
Tools and their embedded skill prompts are agnostic to which client
backs them. Skill prompts are markdown files under prompts/, embedded
via go:embed and assembled into the system prompt at server init.
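A minimal sketch of the go:embed assembly described above (package layout per the commit; file filtering and ordering details are assumptions):

```go
package prompts

import (
	"embed"
	"io/fs"
	"strings"
)

//go:embed *.md
var promptFS embed.FS

// SystemPrompt stitches every embedded skill prompt into the system prompt
// handed to the LLM at server init.
func SystemPrompt() string {
	var b strings.Builder
	err := fs.WalkDir(promptFS, ".", func(path string, d fs.DirEntry, err error) error {
		if err != nil {
			return err
		}
		if d.IsDir() || !strings.HasSuffix(path, ".md") {
			return nil
		}
		data, err := promptFS.ReadFile(path)
		if err != nil {
			return err
		}
		b.Write(data)
		b.WriteString("\n\n")
		return nil
	})
	if err != nil {
		// A failing walk over a build-time embed is a packaging bug; a later
		// commit in this series makes the real assembler panic here.
		panic(err)
	}
	return b.String()
}
```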
Wiring:
- core/http/endpoints/mcp/localai_assistant.go — process-wide holder
that spins up the in-memory MCP server once at Application start
using paired net.Pipe transports, then reuses LocalToolExecutor
(no fork) for every chat request that opts in.
- core/http/endpoints/openai/chat.go — small branch ahead of the
existing MCP block: when metadata.localai_assistant=true,
defense-in-depth admin check + executor swap + system-prompt
injection. All downstream tool dispatch is unchanged.
- core/http/auth/{permissions,features}.go — adds
FeatureLocalAIAssistant; gating happens at the chat handler entry
plus admin-only `/api/settings`.
- core/cli/{run.go,cli.go,mcp_server.go} —
LOCALAI_DISABLE_ASSISTANT flag (runtime-toggleable via Settings, no
restart), plus `local-ai mcp-server` stdio subcommand.
- core/config/runtime_settings.go — `localai_assistant_enabled`
runtime setting; the chat handler reads `DisableLocalAIAssistant`
live at request entry.
UI:
- Home.jsx — prominent self-explanatory CTA card on first run
("Manage LocalAI by chatting"); collapses to a compact
"Manage by chat" button in the quick-links row once used,
persisted via localStorage.
- Chat.jsx — admin-only "Manage" toggle in the chat header,
"Manage mode" badge, dedicated empty-state copy, starter chips.
- Settings.jsx — "LocalAI Assistant" section with the runtime
enable toggle.
- useChat.js — `localaiAssistant` flag on the chat schema; injects
`metadata.localai_assistant=true` on requests when active.
Distributed mode: the in-memory MCP server lives only on the head node;
inproc.Client wraps already-distributed-aware services so installs
propagate to workers via the existing GalleryService machinery.
Documentation: `.agents/localai-assistant-mcp.md` is the contributor
contract — when adding an admin REST endpoint, also add a LocalAIClient
method, an inproc + httpapi impl, a tool registration, and a skill
prompt update; the AGENTS.md index links to it.
Out of scope (follow-ups): per-tool RBAC granularity for non-admin
read-only access; streaming mcp_tool_progress for long installs;
React Vitest rig for the UI changes.
Assisted-by: Claude:claude-opus-4-7 [Read] [Edit] [Write] [Bash]
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* refactor(assistant): extract tool/capability/MiB/server-name constants
The MCP tool surface, capability tag set, server-name default, and the
chat-handler metadata key were repeated as bare string literals across
seven files. Renaming any one required hand-editing every call site and
risked code/test/prompt drift.
This pulls them into typed constants:
- pkg/mcp/localaitools/tools.go — Tool* constants for the 19 MCP tools,
plus DefaultServerName.
- pkg/mcp/localaitools/capability.go — typed Capability + constants for
the capability tag set the LLM passes to list_installed_models. The
type rides through LocalAIClient.ListInstalledModels and replaces the
triplet of "embed"/"embedding"/"embeddings" with the single
CapabilityEmbeddings.
- pkg/mcp/localaitools/inproc/client.go — bytesPerMiB constant for the
VRAMEstimate byte→MiB conversion.
- core/http/endpoints/mcp/tools.go — MetadataKeyLocalAIAssistant for the
"localai_assistant" request-metadata key consumed by the chat handler.
Tool registrations, the test catalog, the dispatch table, the validation
fixtures, and the fake/stub clients all reference the constants. The
embedded skill prompts under prompts/ keep their bare strings (go:embed
markdown can't import Go constants); the existing
TestPromptsContainSafetyAnchors test guards the alignment.
No behaviour change. All tests pass with -race.
Assisted-by: Claude:claude-opus-4-7 [Read] [Edit] [Write] [Bash]
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* refactor(modeladmin): typed Action for ToggleState/TogglePinned
The toggle/pin verbs were bare strings everywhere — handler signatures,
service implementations, MCP tool args, the fake/stub clients, the
inproc and httpapi LocalAIClient impls, plus 4 test files. A typo in
any caller silently fell through to the runtime "must be 'enable' or
'disable'" check.
Introduce core/services/modeladmin.Action (string alias) with
ActionEnable, ActionDisable, ActionPin, ActionUnpin and a small Valid
helper. The compiler now catches mismatches at every boundary; renames
ripple through one source of truth.
LocalAIClient.ToggleModelState/Pinned signatures change to take
modeladmin.Action. The package is brand-new and unreleased so this is
a free public-API tightening.
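A minimal sketch of the type (the "pin"/"unpin" string values and the method form of Valid are assumptions; "enable"/"disable" match the runtime check quoted above):

```go
package modeladmin

// Action is the typed verb for ToggleState/TogglePinned.
type Action string

const (
	ActionEnable  Action = "enable"
	ActionDisable Action = "disable"
	ActionPin     Action = "pin"
	ActionUnpin   Action = "unpin"
)

// Valid reports whether a is one of the known verbs.
func (a Action) Valid() bool {
	switch a {
	case ActionEnable, ActionDisable, ActionPin, ActionUnpin:
		return true
	}
	return false
}
```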
Assisted-by: Claude:claude-opus-4-7 [Read] [Edit] [Bash]
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* fix(assistant): respect ctx cancellation on gallery channel sends
InstallModel, DeleteModel, ImportModelURI, InstallBackend and
UpgradeBackend all pushed onto galleryop channels with bare sends. If the
worker was paused or the buffer full, the chat-handler goroutine blocked
forever — the LLM kept polling and the request leaked.
Wrap the five sends in a sendModelOp/sendBackendOp helper that selects
on ctx.Done() so a cancelled chat completion surfaces context.Canceled
back to the LLM instead of hanging.
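The shape of the helper, sketched generically (the real code has channel-type-specific sendModelOp/sendBackendOp rather than one generic function):

```go
package inproc

import "context"

// sendOp never blocks forever on a gallery-op channel; it gives up as soon as
// the chat request's ctx is done.
func sendOp[T any](ctx context.Context, ch chan<- T, op T) error {
	select {
	case ch <- op:
		return nil
	case <-ctx.Done():
		// A cancelled chat completion surfaces context.Canceled to the LLM
		// instead of leaking a blocked goroutine.
		return ctx.Err()
	}
}
```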
Adds inproc/client_test.go with a pre-cancelled-ctx regression test on
InstallModel; the helpers are shared so the same guarantee covers the
other four call sites.
Assisted-by: Claude:claude-opus-4-7 [Edit] [Write] [Bash]
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* fix(assistant): graceful shutdown for in-memory holder and stdio CLI
Two related leaks:
- Application.start() built the LocalAIAssistantHolder but never wired
Close() into the graceful-termination chain — the in-memory MCP
transport pair stayed alive until process exit, and the goroutines
behind net.Pipe() didn't drain. Hook into the existing
signals.RegisterGracefulTerminationHandler chain (same pattern as
core/http/endpoints/mcp/tools.go:770).
- core/cli/mcp_server.go ran srv.Run with context.Background(); a
Ctrl-C from the host (Claude Desktop, mcphost, npx inspector) or a
SIGTERM from process supervision left the stdio loop reading from a
closed pipe. Switch to signal.NotifyContext to surface the signal
through ctx and let srv.Run drain.
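A minimal sketch of the stdio wiring, assuming srv.Run takes a context and blocks until it is cancelled:

```go
package cli

import (
	"context"
	"os"
	"os/signal"
	"syscall"
)

// Ctrl-C from the MCP host or a SIGTERM from process supervision cancels ctx,
// letting the stdio loop drain instead of reading from a closed pipe.
func runStdioServer(srv interface{ Run(context.Context) error }) error {
	ctx, stop := signal.NotifyContext(context.Background(), os.Interrupt, syscall.SIGTERM)
	defer stop()
	return srv.Run(ctx)
}
```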
Assisted-by: Claude:claude-opus-4-7 [Read] [Edit] [Bash]
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* fix(assistant): typed HTTPError + propagate prompt walk error
The httpapi client detected "no such job" by substring-matching on the
error string ("404", "could not find") — brittle to status-code
formatting changes and to LocalAI fixing /models/jobs/:uuid to return a
proper 404. Replace with a typed *HTTPError whose Is() method honours
errors.Is(err, ErrHTTPNotFound). The 500-with-"could not find" branch
stays as a transitional fallback documented in Is().
Same change covers ListNodes' 404 fallback for the /api/nodes endpoint.
Adds httptest tests for both 404 and the legacy 500 path, plus a
direct errors.Is exposure test so external callers (the standalone
stdio CLI host) can match without re-string-parsing.
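A minimal sketch of the typed error (field names and any Is() semantics beyond status-code matching are assumptions):

```go
package httpapi

import (
	"fmt"
	"net/http"
)

// HTTPError is the typed error returned for non-2xx responses.
type HTTPError struct {
	StatusCode int
	Body       string
}

// ErrHTTPNotFound is the sentinel external callers match with errors.Is.
var ErrHTTPNotFound = &HTTPError{StatusCode: http.StatusNotFound}

func (e *HTTPError) Error() string {
	return fmt.Sprintf("unexpected status %d: %s", e.StatusCode, e.Body)
}

// Is makes errors.Is(err, ErrHTTPNotFound) match on status code alone.
// The transitional 500-with-"could not find" fallback mentioned above would
// also be handled here; it is omitted from this sketch.
func (e *HTTPError) Is(target error) bool {
	t, ok := target.(*HTTPError)
	return ok && t.StatusCode == e.StatusCode
}
```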
Also tightens prompts.SystemPrompt: panic when fs.WalkDir on the
embedded FS fails. The only realistic cause is a build-time //go:embed
misconfiguration; serving an empty system prompt to the LLM is much
worse than crashing init. TestSystemPromptIncludesAllEmbeddedFiles
catches regressions in CI.
Assisted-by: Claude:claude-opus-4-7 [Edit] [Write] [Bash]
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* fix(modeladmin): atomic writes for model config files
The five sites that wrote model YAML used os.WriteFile, which opens
with O_TRUNC|O_WRONLY|O_CREATE. A crash mid-write left the destination
truncated and the model unloadable until manual repair. Pre-existing
behaviour inherited from the original endpoint handlers — fix once now
that there's a single helper.
Adds writeFileAtomic: writes to a sibling temp file, chmods, syncs via
Close(), then os.Rename. Same-directory temp keeps the rename atomic on
the same filesystem; cleanup runs on every error path so stray temps
don't accumulate. No new dependency.
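A minimal sketch of such a helper, following the write/chmod/Close/rename sequence described above (the real helper may also fsync explicitly):

```go
package modeladmin

import (
	"os"
	"path/filepath"
)

func writeFileAtomic(path string, data []byte, perm os.FileMode) (err error) {
	tmp, err := os.CreateTemp(filepath.Dir(path), filepath.Base(path)+".tmp-*")
	if err != nil {
		return err
	}
	tmpName := tmp.Name()
	defer func() {
		if err != nil {
			os.Remove(tmpName) // every error path cleans up the stray temp
		}
	}()
	if _, err = tmp.Write(data); err != nil {
		tmp.Close()
		return err
	}
	if err = tmp.Chmod(perm); err != nil {
		tmp.Close()
		return err
	}
	if err = tmp.Close(); err != nil {
		return err
	}
	// Same-directory rename: atomic on the same filesystem, so readers see
	// either the old YAML or the new one, never a truncated file.
	return os.Rename(tmpName, path)
}
```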
Applied to:
- ConfigService.PatchConfig
- ConfigService.EditYAML (both rename and in-place branches)
- mutateYAMLBoolFlag (drives ToggleState + TogglePinned)
atomic_test.go covers the happy path plus a read-only-dir failure case
that asserts the original file is preserved (skipped on Windows where
the chmod trick is POSIX-specific).
Assisted-by: Claude:claude-opus-4-7 [Edit] [Write] [Bash]
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* chore(assistant): prune dead code, mark stub, document conventions
Three small cleanups landing together:
- Drop the unused errNotImplemented sentinel from inproc/client.go.
All five methods that used to return it are wired to modeladmin
helpers since the Phase B commit; the package var is dead.
- Annotate httpapi.Client.GetModelConfig as a known stub. LocalAI's
/models/edit/:name returns rendered HTML, not JSON, so the standalone
CLI's get_model_config tool surfaces a clear error to the LLM. A
future JSON-only /api/models/config-yaml/:name endpoint is tracked in
the agent contract; FIXME points at it.
- Extend `.agents/localai-assistant-mcp.md` with a "Code conventions"
section that documents the audit-driven rules: tool/Capability/Action
constants, errors.Is over substring matching, ctx-aware channel
sends, atomic writes, and graceful shutdown. Refresh the file map so
it lists tools.go and capability.go and drops the removed
tools_bootstrap.go.
The tools_models.go diff is a comment-only change explaining why the
ModelName empty-string check stays at the tool layer (consistency
across LocalAIClient implementations, since the SDK schema validator
only enforces presence, not non-empty).
Assisted-by: Claude:claude-opus-4-7 [Read] [Edit] [Bash]
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* test(assistant): convert test files to ginkgo + gomega
The repo convention (per core/http/endpoints/localai/*_test.go,
core/gallery/**, etc.) is Ginkgo v2 with Gomega assertions. The tests I
introduced for the assistant feature used vanilla testing.T, which made
them stand out and stripped the BDD structure the rest of the suite
relies on.
Convert every test file in the assistant scope to Ginkgo:
pkg/mcp/localaitools/
dto_test.go — Describe("DTOs round-trip through JSON")
prompts_test.go — Describe("SystemPrompt assembler")
server_test.go — Describe("Server tool catalog"),
Describe("Tool dispatch"),
Describe("Tool error surfacing"),
Describe("Argument validation"),
Describe("Concurrent tool calls")
parity_test.go — Describe("LocalAIClient parity"),
hosts the suite's single RunSpecs (the file
is package localaitools_test so it can
import httpapi without an import cycle;
Ginkgo aggregates Describes from both the
internal and external test packages into
one run).
httpapi/client_test.go — Describe("httpapi.Client against the
LocalAI admin REST surface"),
Describe("ErrHTTPNotFound"),
Describe("Bearer token")
inproc/client_test.go — Describe("inproc.Client cancellation")
core/services/modeladmin/
config_test.go — Describe("ConfigService") with sub-Describes
for GetConfig, PatchConfig, EditYAML
state_test.go — Describe("ConfigService.ToggleState")
pinned_test.go — Describe("ConfigService.TogglePinned")
atomic_test.go — Describe("writeFileAtomic")
core/http/endpoints/mcp/
localai_assistant_test.go — Describe("LocalAIAssistantHolder")
Each package gets a `*_suite_test.go` with the standard
`RegisterFailHandler(Fail) + RunSpecs(t, "...")` boilerplate. Helpers
that previously took *testing.T (newTestService, writeModelYAML,
readMap, sortedStrings, sortGalleries, etc.) drop the *T receiver and
use Gomega Expectations directly. tmp dirs come from GinkgoT().TempDir().
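The per-package boilerplate looks roughly like this (suite and package names illustrative):

```go
package modeladmin_test

import (
	"testing"

	. "github.com/onsi/ginkgo/v2"
	. "github.com/onsi/gomega"
)

func TestModelAdmin(t *testing.T) {
	RegisterFailHandler(Fail)
	RunSpecs(t, "modeladmin suite")
}
```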
No semantic change to test coverage — every original assertion has a
direct Gomega counterpart. All suites pass with -race.
Assisted-by: Claude:claude-opus-4-7 [Read] [Edit] [Write] [Bash]
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* test+docs(assistant): drift detector for Tool ↔ REST route mapping
Honest gap from the audit: the parity_test.go suite only checks four
methods, and uses the same httpapi.Client for both sides — it asserts
stability of the DTO shapes, not equivalence between in-process and
HTTP. If a contributor adds an admin REST endpoint without an MCP tool,
or a tool without a matching httpapi route, both surfaces silently
diverge.
Add a coverage test plus stronger docs:
- pkg/mcp/localaitools/coverage_test.go introduces a hand-maintained
toolToHTTPRoute map: every Tool* constant must list the REST endpoint
the httpapi.Client hits (or "(none)" with a documented reason). Two
Ginkgo specs assert the map and the published catalog stay in sync —
one fails when a Tool is added without a route entry, the other fails
when a route entry references a tool that no longer exists. Verified
by removing the ToolDeleteModel entry locally; the test fired with a
clear message pointing the contributor at the file.
Deliberate non-test: we don't enumerate live admin REST routes from
here. Walking the route registry requires booting Application;
parsing core/http/routes/localai.go is brittle. The "new admin REST
endpoint → MCP tool" direction stays a PR checklist item — see below.
- AGENTS.md gets a new Quick Reference bullet that calls out the rule
and points at the test by name.
- .agents/api-endpoints-and-auth.md tightens the existing "Companion:
MCP admin tool surface" subsection from "if useful, consider..." to
"MUST be considered, with three concrete outcomes (tool added,
deliberately skipped with documented reason, or forgot — which
breaks the contract)". Adds a checklist item at the bottom of the
file's authoritative checklist.
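A minimal sketch of the coverage spec described in the first bullet above (the map values and the catalog accessor are elided or assumed; the authoritative mapping lives in coverage_test.go):

```go
package localaitools

import (
	. "github.com/onsi/ginkgo/v2"
	. "github.com/onsi/gomega"
)

// toolToHTTPRoute: one entry per Tool* constant; "(none)" needs a documented
// reason. Values are elided in this sketch.
var toolToHTTPRoute = map[string]string{}

// publishedTools stands in for however the real test enumerates the catalog.
func publishedTools() []string { return nil }

var _ = Describe("Tool <-> REST route coverage", func() {
	It("maps every published tool to a route (or an explicit \"(none)\")", func() {
		for _, tool := range publishedTools() {
			Expect(toolToHTTPRoute).To(HaveKey(tool),
				"add %q to toolToHTTPRoute in coverage_test.go", tool)
		}
	})

	It("has no stale entries for tools that no longer exist", func() {
		known := map[string]bool{}
		for _, tool := range publishedTools() {
			known[tool] = true
		}
		for tool := range toolToHTTPRoute {
			Expect(known).To(HaveKey(tool),
				"%q is mapped to a route but is not in the published catalog", tool)
		}
	})
})
```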
Assisted-by: Claude:claude-opus-4-7 [Read] [Edit] [Write] [Bash]
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* refactor(assistant): drop duplicate DTOs, surface canonical types
Audit feedback: localaitools/dto.go reinvented several types that already
existed in the codebase. Replace the duplicates with the canonical types
so the LLM-visible wire format stays aligned with the rest of LocalAI by
construction (no parallel structs to keep in sync).
Removed (and the canonical type now used by the LocalAIClient interface):
localaitools.Gallery → config.Gallery
localaitools.GalleryModelHit → gallery.Metadata
localaitools.VRAMEstimate → vram.EstimateResult
Tightened scope:
localaitools.Backend → kept, but reduced to {Name, Installed}.
ListKnownBackends now returns
[]schema.KnownBackend (the canonical
type already used by REST /backends/known).
Kept with documented rationale:
localaitools.JobStatus — galleryop.OpStatus has Error error which
marshals to "{}". JobStatus is the
JSON-friendly mirror.
localaitools.Node — nodes.BackendNode carries gorm internals
+ token hash; we expose only the
LLM-relevant fields.
ImportModelURIRequest/Response — schema.ImportModelRequest and
GalleryResponse are wire-shaped, mine
are LLM-shaped (BackendPreference flat,
AmbiguousBackend exposed).
Side wins:
- Drop bytesPerMiB; vram.EstimateResult already carries human-readable
display strings (size_display, vram_display) the LLM uses directly.
- Drop the handler-private vramEstimateRequest in
core/http/endpoints/localai/vram.go and bind directly into
modeladmin.VRAMRequest (now JSON-tagged).
Both clients pass through these types now where possible (e.g.
ListGalleries in inproc.Client is a one-liner returning
AppConfig.Galleries; httpapi.Client.GallerySearch decodes straight into
[]gallery.Metadata).
All tests green with -race.
Assisted-by: Claude:claude-opus-4-7 [Read] [Edit] [Bash]
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* refactor(assistant): extract REST route paths into named constants
httpapi.Client had 18 bare-string path sites scattered across methods.
Pull them into pkg/mcp/localaitools/httpapi/routes.go: static paths as
package-private constants, dynamic paths as small builders that handle
url.PathEscape on segment values.
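A minimal sketch of the split, with illustrative values (the real constant/builder set lives in routes.go):

```go
package httpapi

import "net/url"

// Static paths become package-private constants; dynamic paths become small
// builders that own the escaping.
const backendsKnownPath = "/backends/known"

func modelConfigPath(name string) string {
	return "/models/edit/" + url.PathEscape(name)
}
```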
No behaviour change. Drops the now-unused net/url import from client.go
since path escaping moved into routes.go alongside the path it applies to.
Local-only by design: the server-side registrations in
core/http/routes/localai.go remain bare strings. Sharing constants across
the pkg/ ↔ core/ boundary would invert the layering today; the existing
Tool↔REST drift-detector in coverage_test.go is the safety net for that
direction.
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
Assisted-by: Claude:claude-opus-4-7 [Claude Code]
* docs(assistant): align with shipped UI and dropped bootstrap env vars
The LocalAI Assistant doc still described the older iteration:
- The in-chat toggle was renamed from "Admin" to "Manage" (the badge is
now "Manage mode" and the home page exposes a "Manage by chat" CTA).
- LOCALAI_ASSISTANT_BOOTSTRAP_MODEL / --localai-assistant-bootstrap-model
and the bootstrap_default_model tool were removed — admins pick a model
from the existing selector instead, no env-var configuration required.
- The shipped tool catalog includes import_model_uri, which didn't appear in
the doc; bootstrap_default_model appeared in the doc but no longer exists.
- The Settings → LocalAI Assistant runtime toggle wasn't mentioned as the
preferred way to disable without restart.
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
Assisted-by: Claude:claude-opus-4-7 [Claude Code]
---------
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
PR #9583 changed the supervisor's process map key from `modelID` to
`modelID#replicaIndex`, but the NATS lifecycle handlers kept passing
the bare modelID:
* `backend.stop` (subscribeLifecycleEvents): `s.stopBackend(req.Backend)`
→ `s.processes["Qwen3.6-..."]` missed (actual key is "...#0") →
silent no-op. Admin "Unload model" clicks released VRAM via
model.unload but left the gRPC process alive on its old port.
Subsequent chats hit installBackend, found the leftover process,
reused its address — and the UI reported "no models loaded" while
the model kept responding.
* `backend.delete` (subscribeLifecycleEvents): same map miss in
`isRunning(req.Backend)` and `s.stopBackend(req.Backend)` — admin
"Delete backend" deleted the binary while the process was still
serving traffic.
Add `resolveProcessKeys(id)`: exact match if `id` is a full processKey
(stopAllBackends iterates the map and passes its own keys);
prefix-match if `id` is bare (NATS handlers); empty if `id` contains
`#` but doesn't match (no spurious fallback when the caller was
explicit). stopBackend and isRunning now call it; stopBackend gets a
new stopBackendExact helper for per-key cleanup.
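A minimal sketch of the resolution rules (the real helper is a supervisor method walking its own process map; the standalone signature here is for illustration):

```go
package worker

import "strings"

// processes is keyed by "<modelID>#<replicaIndex>".
func resolveProcessKeys(id string, processes map[string]struct{}) []string {
	if _, ok := processes[id]; ok {
		return []string{id} // full processKey (e.g. from stopAllBackends): exact match
	}
	if strings.Contains(id, "#") {
		return nil // caller was explicit but wrong — no spurious fallback
	}
	var keys []string
	for key := range processes {
		if strings.HasPrefix(key, id+"#") {
			keys = append(keys, key) // bare modelID from a NATS lifecycle handler
		}
	}
	return keys
}
```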
TDD: regression test fails without the fix (resolveProcessKeys
doesn't exist; map lookup by bare name returns nothing). Tests pass
post-fix.
Reproduced live: registry row count was 0 for the model the user
"Unloaded", chat still served by the leftover worker process.
SmartRouter behavior is correct in itself — it falls through to
scheduleAndLoad when no row exists; the bug was that the leftover
process corrupted the install path.
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
Assisted-by: claude-code:opus-4-7 [Edit] [Bash]
* feat(distributed): support multiple replicas of one model on the same node
The distributed scheduler implicitly assumed `(node_id, model_name)` was
unique, but the schema didn't enforce it and the worker keyed all gRPC
processes by model name alone. With `MinReplicas=2` against a single
worker, the reconciler "scaled up" every 30s but the registry never
advanced past 1 row — the worker re-loaded the model in-place every tick
until VRAM fragmented and the gRPC process died.
This change introduces multi-replica-per-node as a first-class concept,
with capacity-aware scheduling, a circuit breaker, and VRAM
soft-reservation. Operators can declare per-node capacity via the worker
flag `--max-replicas-per-model` (mirrored as auto-label
`node.replica-slots=N`) or override per-node from the UI.
* Schema: BackendNode gains MaxReplicasPerModel (default 1) and
ReservedVRAM. NodeModel gains ReplicaIndex (composite with node_id +
model_name). ModelSchedulingConfig gains UnsatisfiableUntil/Ticks for
the reconciler circuit breaker.
* Registry: replica_index threaded through SetNodeModel, RemoveNodeModel,
IncrementInFlight, DecrementInFlight, TouchNodeModel, GetNodeModel,
SetNodeModelLoadInfo and the InFlightTrackingClient. New helpers:
CountReplicasOnNode, NextFreeReplicaIndex (with ErrNoFreeSlot),
RemoveAllNodeModelReplicas, FindNodesWithFreeSlot,
ClusterCapacityForModel, ReserveVRAM/ReleaseVRAM (atomic UPDATE with
ErrInsufficientVRAM), and the unsatisfiable-flag CRUD.
* Worker: processKey now `<modelID>#<replicaIndex>` so concurrent loads
of the same model land on distinct ports. Adds CLI flag
--max-replicas-per-model (env LOCALAI_MAX_REPLICAS_PER_MODEL, default 1)
and emits the auto-label.
* Router: scheduleNewModel filters candidates by free slot, allocates the
replica index, and soft-reserves VRAM before installing the backend.
evictLRUAndFreeNode now deletes the targeted row by ID instead of all
replicas of the model on the node — fixes a latent bug where evicting
one replica orphaned its siblings.
* Reconciler: caps scale-up at ClusterCapacityForModel so a misconfig
(MinReplicas > capacity) doesn't loop forever. After 3 consecutive
ticks of capacity==0 it sets UnsatisfiableUntil for a 5m cooldown and
emits a warning. ClearAllUnsatisfiable fires from Register,
ApproveNode, SetNodeLabel(s), RemoveNodeLabel and
UpdateMaxReplicasPerModel so a new node joining or label changes wake
the reconciler immediately. scaleDownIdle removes highest-replica-index
first to keep slots compact.
* Heartbeat resets reserved_vram to 0 — worker is the source of truth
for actual free VRAM; the reservation is only for the in-tick race
window between two scheduling decisions.
* Probe path (reconciler.probeLoadedModels and health.doCheckAll) now
passes the row's replica_index to RemoveNodeModel so an unreachable
replica doesn't orphan healthy siblings.
* Admin override: PUT /api/nodes/:id/max-replicas-per-model sets a
sticky override (preserved across worker re-registration). DELETE
clears the override so the worker's flag applies again on next
register. Required because Kong defaults the worker flag to 1, so
every worker restart would have silently reverted the UI value.
* React UI: always-visible slot badge on the node row (muted at default
1, accented when >1); inline editor in the expanded drawer with
pencil-to-edit, Save/Cancel, Esc/Enter, "(override)" indicator when
the value is admin-set, and a "Reset" button to hand control back to
the worker. Soft confirm when shrinking the cap below the count of
loaded replicas. Scheduling rules table gets an "Unsatisfiable until
HH:MM" status badge surfacing the cooldown.
* node.replica-slots filtered out of the labels strip on the row to
avoid duplicating the slot badge.
23 new Ginkgo specs (registry, reconciler, inflight, health) cover:
multi-replica row independence, RemoveNodeModel of one replica
preserving siblings, NextFreeReplicaIndex slot allocation including
ErrNoFreeSlot, capacity-gated scale-up with circuit breaker tripping
and recovery on Register, scaleDownIdle ordering, ClusterCapacity
math, ReserveVRAM admission gating, Heartbeat reset, override survival
across worker re-registration, and ResetMaxReplicasPerModel handing
control back. Plus 8 stdlib tests for the worker processKey / CLI /
auto-label.
Closes the flap reproduced on Qwen3.6-35B against the nvidia-thor
worker (single 128 GiB node, MinReplicas=2): the reconciler now caps
the scale-up at the cluster's actual capacity instead of looping.
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
Assisted-by: claude-code:opus-4-7 [Read] [Edit] [Bash] [Skill:critique] [Skill:audit] [Skill:polish] [Skill:golang-testing]
* refactor(react-ui/nodes): tighten capacity editor copy + adopt ActionMenu for row actions
* Capacity editor hint trimmed from operator-doc-style ("Sourced from
the worker's `--max-replicas-per-model` flag. Changing it here makes it
a sticky admin override that survives worker restarts." → "Saved
values stick across worker restarts.") and the override-state copy
similarly compressed. The full mechanic is no longer needed in the UI
— the override pill carries the meaning and the docs cover the rest.
* Node row actions migrated from an inline cluster of icon buttons
(Drain / Resume / Trash) to the kebab ActionMenu used by /manage for
per-row model actions, so dense Nodes tables stay clean. Approve
stays as a prominent primary button — it's a stateful admission gate,
not a routine action, and elevating it matches how /manage surfaces
install-time decisions outside the menu.
* The expanded drawer's Labels section now filters node.replica-slots
out of the editable label list. The label is owned by the Capacity
editor above; surfacing it again as an editable label invited
confusion (the Capacity save would clobber any direct edit).
Both backend and agent workers benefit — they share the row rendering
path, so the action menu and label filter apply to both.
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
Assisted-by: claude-code:opus-4-7 [Edit] [chrome-devtools-mcp] [Skill:critique] [Skill:audit] [Skill:polish]
* fix(react-ui/nodes): suppress slot badge on agent workers
Agent workers don't load models, so the per-node replica capacity is
inapplicable to them. Showing "1× slots" on agent rows was a tiny
inconsistency from the unified rendering path — gate the badge on
node_type !== 'agent' so it only appears on backend workers.
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
Assisted-by: claude-code:opus-4-7 [Edit] [chrome-devtools-mcp]
* refactor(react-ui/nodes): distill expanded drawer + restyle scheduling form
The expanded node drawer used to stack five panels — slot badge,
filled capacity box, Loaded Models h4+empty-state, Installed Backends
h4+empty-state, Labels h4+chips+form — making routine inspections feel
like a control panel. The scheduling rule form wrapped its mode toggle
as two 50%-width filled buttons that competed visually with the actual
primary action.
* Drawer: collapse three rarely-touched config zones (Capacity,
Backends, Labels) into one `<details>` "Manage" disclosure (closed by
default) with small uppercase eyebrow labels for each zone instead of
parallel h4 sub-headings. Loaded Models stays as the at-a-glance
headline with a single-line empty hint instead of a boxed empty state.
CapacityEditor renders flat (no filled background) — the Manage
disclosure provides framing.
* Scheduling form: replace the chunky 50%-width button-tabs with the
project's existing `.segmented` control (icon + label, sized to
content). The mode hint becomes a single tidy line below. Fields stack
vertically with helper text under inputs and a hairline divider above
the right-aligned Save / Cancel.
The empty drawer collapses from ~5 stacked sections (~280px tall) to
two lines (~80px). The scheduling form now reads as a designed dialog
instead of raw building blocks. Both surfaces now match the typographic
density and weight of the rest of the admin pages.
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
Assisted-by: claude-code:opus-4-7 [Edit] [chrome-devtools-mcp] [Skill:distill] [Skill:audit] [Skill:polish]
* feat(react-ui/nodes): replace scheduling form's model picker with searchable combobox
The native <select> made operators scroll through every gallery entry to
find a model name. The project already has SearchableModelSelect (used
in Studio/Talk/etc.) which combines free-text search with the gallery
list and accepts typed model names that aren't installed yet — useful
for pre-staging a scheduling rule before the node it'll run on has
finished bootstrapping.
Also drops the now-unused useModels import (the combobox manages the
gallery hook internally).
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
Assisted-by: claude-code:opus-4-7 [Edit]
* refactor(react-ui/nodes): consolidate key/value chip editor + add replica preset chips
The Nodes page was rendering the same key=value chip pattern in two
places with subtly different markup: the Labels editor in the expanded
drawer and (post-distill) the Node Selector input in the scheduling
form. The form's input was also a comma-separated string that operators
were getting wrong.
* Extract <KeyValueChips> as a fully controlled chip-builder. Parent
owns the map and decides what onAdd/onRemove does — form state for the
scheduling form, API calls for the live drawer Labels editor. Same
visuals everywhere; one component to change when polish is needed.
* Replace the comma-separated Node Selector text input with KeyValueChips.
Operators were copying syntax from docs and missing commas; the chip
vocabulary makes the key=value structure self-documenting.
* Add <ReplicaInput>: numeric input + quick-pick preset chips for Min/Max
replicas. Picked over a slider because replica counts are exact specs
derived from VRAM math (operator decision, not a fuzzy estimate). The
chips give one-click access to common values (1/2/3/4 for Min,
0=no-limit/2/4/8 for Max) without the slider's special-value problem
(MaxReplicas=0 is categorical, not a position on a continuum).
* Drop the now-unused labelInputs state in the Nodes page (the inline
label editor's per-node draft state lived there and is now owned by
KeyValueChips).
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
Assisted-by: claude-code:opus-4-7 [Edit] [Skill:distill]
* test: fix CI fallout from multi-replica refactor (e2e/distributed + playwright)
Two breakages caught by CI that didn't surface in the local run:
* tests/e2e/distributed/*.go — multiple files used the pre-PR2 registry
signatures for SetNodeModel / IncrementInFlight / DecrementInFlight /
RemoveNodeModel / TouchNodeModel / GetNodeModel / SetNodeModelLoadInfo
and one stale adapter.InstallBackend call in node_lifecycle_test.go.
All updated to pass replicaIndex=0 — these tests don't exercise
multi-replica behavior, they just need to compile against the new
signatures. The chip-builder tests in core/services/nodes/ already
cover the multi-replica logic.
* core/http/react-ui/e2e/nodes-per-node-backend-actions.spec.js — the
drawer's distill refactor moved Backends inside a "Manage" <details>
disclosure that's collapsed by default. The test helper expanded the
node row but never opened Manage, so the per-node backend table was
never in the DOM. Helper now clicks `.node-manage > summary` after
expanding the row.
All 100 playwright tests pass locally; tests/e2e/distributed compiles
clean.
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
Assisted-by: claude-code:opus-4-7 [Edit] [Bash]
---------
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
Workers on NVIDIA unified-memory hardware (DGX Spark / GB10, Jetson AGX Thor,
Jetson Orin/Xavier/Nano) were reporting `available_vram=0` back to the frontend,
so the Nodes UI showed the node as fully used even when most of the unified
memory was actually free.
Three causes addressed:
* `isTegraDevice` only matched `/sys/devices/soc0/family == "Tegra"`. DGX Spark
(SBSA) reports JEDEC codes there instead — `jep106:0426` for the NVIDIA
manufacturer — so the Tegra/unified-memory fallback never ran. Renamed to
`isNVIDIAIntegratedGPU` and extended to also match `jep106:0426[:*]` via
`/sys/devices/soc0/soc_id`.
* The unified-iGPU code defaulted the device name to `"NVIDIA Jetson"` when
`/proc/device-tree/model` was missing. That's what happens for Thor inside a
docker container, and always on DGX Spark. New `nvidiaIntegratedGPUName`
resolves via dt-model → `/sys/devices/soc0/machine` → `soc_id` lookup
(`jep106:0426:8901` → `"NVIDIA GB10"`) so the Nodes UI labels the box
correctly.
* Worker heartbeat sent `available_vram=0` (or total-as-available) when VRAM
usage was momentarily unknown — e.g. when `nvidia-smi` intermittently failed
with `waitid: no child processes` under containers without `--init`. Each
such heartbeat overwrote the DB and made the UI flip to "fully used".
`heartbeatBody` now omits `available_vram` in that case so the DB keeps its
last good value.
Also updates the commented GPU blocks in both compose files with
`NVIDIA_DRIVER_CAPABILITIES=compute,utility`, `capabilities: [gpu, utility]`,
and `init: true`, and documents the requirement in the distributed-mode and
nvidia-l4t pages. Without `utility`, NVML/`nvidia-smi` are absent inside the
container, which is what put the DGX Spark worker into the buggy fallback in
the first place.
Detection verified on live hardware (dgx.casa / GB10 and 192.168.68.23 / Thor)
by running a cross-compiled probe of the new helpers on both host and inside
the worker container.
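A minimal sketch of the broadened detection (sysfs paths and the jep106 prefix per this commit; package name and exact trimming behaviour are assumptions):

```go
package sysinfo

import (
	"os"
	"strings"
)

// Tegra-family SoCs still match on /sys/devices/soc0/family; SBSA boxes like
// DGX Spark/GB10 match on the NVIDIA JEDEC manufacturer code in soc_id.
func isNVIDIAIntegratedGPU() bool {
	if fam, err := os.ReadFile("/sys/devices/soc0/family"); err == nil &&
		strings.TrimSpace(string(fam)) == "Tegra" {
		return true
	}
	if id, err := os.ReadFile("/sys/devices/soc0/soc_id"); err == nil &&
		strings.HasPrefix(strings.TrimSpace(string(id)), "jep106:0426") {
		return true
	}
	return false
}
```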
Assisted-by: Claude:opus-4.7 [Claude Code]
- transcript.go: Model not found error now suggests available models commands
- util.go: GGUF error explains format and how to get models
- worker_p2p.go: Token error explains purpose and how to obtain one
- run.go: Startup failure includes troubleshooting steps and docs link
- model_config_loader.go: Config validation errors include file path and guidance
Refs: H2 - UX Review Issue
Signed-off-by: localai-bot <localai-bot@noreply.github.com>
Co-authored-by: localai-bot <localai-bot@noreply.github.com>
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
When installing a backend with a custom OCI URI in distributed mode,
the URI was captured in ManagementOp.ExternalURI by the HTTP handler
but never forwarded to workers. BackendInstallRequest had no URI field,
so workers fell through to the gallery lookup and failed with
"no backend found with name <custom-name>".
Add URI/Name/Alias fields to BackendInstallRequest and thread them from
ManagementOp through DistributedBackendManager.InstallBackend() and the
RemoteUnloaderAdapter. On the worker side, route to InstallExternalBackend
when URI is set instead of InstallBackendFromGallery. Update all
remaining InstallBackend call sites (UpgradeBackend, reconciler
pending-op drain, router auto-install) to pass empty strings for the
new params.
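A minimal sketch of the threading (field names per this commit; the injected install functions stand in for InstallExternalBackend / InstallBackendFromGallery to keep the sketch self-contained):

```go
package nodes

// BackendInstallRequest now carries the custom-OCI fields; the exact wire
// shape is an assumption of this sketch.
type BackendInstallRequest struct {
	Backend string `json:"backend"`
	URI     string `json:"uri,omitempty"`
	Name    string `json:"name,omitempty"`
	Alias   string `json:"alias,omitempty"`
}

// routeInstall mirrors the worker-side decision: a custom OCI URI bypasses the
// gallery lookup entirely.
func routeInstall(
	req BackendInstallRequest,
	installExternal func(uri, name, alias string) error,
	installFromGallery func(name string) error,
) error {
	if req.URI != "" {
		return installExternal(req.URI, req.Name, req.Alias)
	}
	return installFromGallery(req.Backend)
}
```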
Assisted-by: Claude Code:claude-sonnet-4-6
Signed-off-by: Russell Sim <rsl@simopolis.xyz>
* fix(distributed): detect backend upgrades across worker nodes
Before this change `DistributedBackendManager.CheckUpgrades` delegated to the
local manager, which read backends from the frontend filesystem. In
distributed deployments the frontend has no backends installed locally —
they live on workers — so the upgrade-detection loop never ran and the UI
silently never surfaced upgrades even when the gallery advertised newer
versions or digests.
Worker-side: NATS backend.list reply now carries Version, URI and Digest
for each installed backend (read from metadata.json).
Frontend-side: DistributedBackendManager.ListBackends aggregates per-node
refs (name, status, version, digest) instead of deduping, and CheckUpgrades
feeds that aggregation into gallery.CheckUpgradesAgainst — a new entrypoint
factored out of CheckBackendUpgrades so both paths share the same core
logic.
Cluster drift policy: when per-node version/digest tuples disagree, the
backend is flagged upgradeable regardless of whether any single node
matches the gallery, and UpgradeInfo.NodeDrift enumerates the outliers so
operators can see *why* it is out of sync. The next upgrade-all realigns
the cluster.
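A minimal sketch of the disagreement check (shapes, and the choice of the first node as baseline, are illustrative; the real summarizeNodeDrift may differ):

```go
package gallery

type nodeRef struct {
	Node    string
	Version string
	Digest  string
}

// Any per-node (version, digest) tuple that disagrees with the rest marks the
// backend upgradeable and is reported as an outlier.
func summarizeNodeDrift(refs []nodeRef) (outliers []nodeRef) {
	if len(refs) < 2 {
		return nil
	}
	base := refs[0] // arbitrary baseline for this sketch
	for _, r := range refs[1:] {
		if r.Version != base.Version || r.Digest != base.Digest {
			outliers = append(outliers, r)
		}
	}
	return outliers
}
```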
Tests cover: drift detection, unanimous-match (no upgrade), and the
empty-installed-version path that the old distributed code silently
missed.
* feat(ui): surface backend upgrades in the System page
The System page (Manage.jsx) only showed updates as a tiny inline arrow,
so operators routinely missed them. Port the Backend Gallery's upgrade UX
so System speaks the same visual language:
- Yellow banner at the top of the Backends tab when upgrades are pending,
with an "Upgrade all" button (serial fan-out, matches the gallery) and a
"Updates only" filter toggle.
- Warning pill (↑ N) next to the tab label so the count is glanceable even
when the banner is scrolled out of view.
- Per-row labeled "Upgrade to vX.Y" button (replaces the icon-only button
that silently flipped semantics between Reinstall and Upgrade), plus an
"Update available" badge in the new Version column.
- New columns: Version (with upgrade + drift chips), Nodes (per-node
attribution badges for distributed mode, degrading to a compact
"on N nodes · M offline" chip above three nodes), Installed (relative
time).
- System backends render a "Protected" chip instead of a bare "—" so rows
still align and the reason is obvious.
- Delete uses the softer btn-danger-ghost so rows don't scream red; the
ConfirmDialog still owns the "are you sure".
The upgrade checker also needed the same per-worker fix as the previous
commit: NewUpgradeChecker now takes a BackendManager getter so its
periodic runs call the distributed CheckUpgrades (which asks workers)
instead of the empty frontend filesystem. Without this the
/api/backends/upgrades endpoint stayed empty in distributed mode even with the protocol
change in place.
New CSS primitives — .upgrade-banner, .tab-pill, .badge-row, .cell-stack,
.cell-mono, .cell-muted, .row-actions, .btn-danger-ghost — all live in
App.css so other pages can adopt them without duplicating styles.
* feat(ui): polish the Nodes page so it reads like a product
The Nodes page was the biggest visual liability in distributed mode.
Rework the main dashboard surfaces in place without changing behavior:
StatCards: uniform height (96px min), left accent bar colored by the
metric's semantic (success/warning/error/primary), icon lives in a
36x36 soft-tinted chip top-right, value is left-aligned and large.
Grid auto-fills so the row doesn't collapse on narrow viewports. This
replaces the previous thin-bordered boxes with inconsistent heights.
Table rows: expandable rows now show a chevron cue on the left (rotates
on expand) so users know rows open. Status cell became a dedicated chip
with an LED-style halo dot instead of a bare bullet. Action buttons gained
labels — "Approve", "Resume", "Drain" — so the icons aren't doing all
the semantic work; the destructive remove action uses the softer
btn-danger-ghost variant so rows don't scream red, with the ConfirmDialog
still owning the real "are you sure". Applied cell-mono/cell-muted
utility classes so label chips and addresses share one spacing/font
grammar instead of re-declaring inline styles everywhere.
Expanded drawer: empty states for Loaded Models and Installed Backends
now render as a proper drawer-empty card (dashed border, icon, one-line
hint) instead of a plain muted string that read like broken formatting.
Tabs: three inline-styled buttons became the shared .tab class so they
inherit focus ring, hover state, and the rest of the design system —
matches the System page.
"Add more workers" toggle turned into a .nodes-add-worker dashed-border
button labelled "Register a new worker" (action voice) instead of a
chevron + muted link that operators kept mistaking for broken text.
New shared CSS primitives carry over to other pages:
.stat-grid + .stat-card, .row-chevron, .node-status, .drawer-empty,
.nodes-add-worker.
* feat(distributed): durable backend fan-out + state reconciliation
Two connected problems handled together:
1) Backend delete/install/upgrade used to silently skip non-healthy nodes,
so a delete during an outage left a zombie on the offline node once it
returned. The fan-out now records intent in a new pending_backend_ops
table before attempting the NATS round-trip. Currently-healthy nodes
get an immediate attempt; everyone else is queued. Unique index on
(node_id, backend, op) means reissuing the same operation refreshes
next_retry_at instead of stacking duplicates.
2) Loaded-model state could drift from reality: a worker that OOM'd, got
killed, or restarted its backend process would leave a node_models row
claiming the model was still loaded, feeding ghost entries into the
/api/nodes/models listing and the router's scheduling decisions.
The existing ReplicaReconciler gains two new passes that run under a
fresh KeyStateReconciler advisory lock (non-blocking, so one wedged
frontend doesn't freeze the cluster):
- drainPendingBackendOps: retries queued ops whose next_retry_at has
passed on currently-healthy nodes. Success deletes the row; failure
bumps attempts and pushes next_retry_at out with exponential backoff
(30s → 15m cap). ErrNoResponders also marks the node unhealthy.
- probeLoadedModels: gRPC health-checks the addresses the DB thinks are
loaded but hasn't seen touched in the last probeStaleAfter (2m).
Unreachable addresses are removed from the registry. A pluggable
ModelProber lets tests substitute a fake without standing up gRPC.
DistributedBackendManager exposes DeleteBackendDetailed so the HTTP
handler can surface per-node outcomes ("2 succeeded, 1 queued") to the
UI in a follow-up commit; the existing DeleteBackend still returns
error-only for callers that don't care about node breakdown.
Multi-frontend safety: the state pass uses advisorylock.TryWithLockCtx
on a new key so N frontends coordinate — the same pattern the health
monitor and replica reconciler already rely on. Single-node mode runs
both passes inline (adapter is nil, state drain is a no-op).
Tests cover the upsert semantics, backoff math, the probe removing an
unreachable model but keeping a reachable one, and filtering by
probeStaleAfter.
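A minimal sketch of the backoff math mentioned above (30s base doubling up to the 15m cap; the helper name is illustrative):

```go
package nodes

import "time"

// attempts is the counter stored on the pending_backend_ops row.
func nextRetryAfter(attempts int) time.Duration {
	d := 30 * time.Second << attempts // 30s, 1m, 2m, 4m, ...
	if maxDelay := 15 * time.Minute; d <= 0 || d > maxDelay {
		d = maxDelay // also guards against shift overflow for large attempts
	}
	return d
}
```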
* feat(ui): show cluster distribution of models in the System page
When a frontend restarted in distributed mode, models that workers had
already loaded weren't visible until the operator clicked into each node
manually — the /api/models/capabilities endpoint only knew about
configs on the frontend's filesystem, not the registry-backed truth.
/api/models/capabilities now joins in ListAllLoadedModels() when the
registry is active, returning loaded_on[] with node id/name/state/status
for each model. Models that live in the registry but lack a local config
(the actual ghosts, not recovered from the frontend's file cache) still
surface with source="registry-only" so operators can see and persist
them; without that emission they'd be invisible to this frontend.
Manage → Models replaces the old Running/Idle pill with a distribution
cell that lists the first three nodes the model is loaded on as chips
colored by state (green loaded, blue loading, amber anything else). On
wider clusters the remaining count collapses into a +N chip with a
title-attribute breakdown. Disabled / single-node behavior unchanged.
Adopted models get an extra "Adopted" ghost-icon chip with hover copy
explaining what it means and how to make it permanent.
Distributed mode also enables a 10s auto-refresh and a "Last synced Xs
ago" indicator next to the Update button so ghost rows drop off within
one reconcile tick after their owning process dies. Non-distributed
mode is untouched — no polling, no cell-stack, same old Running/Idle.
* feat(ui): NodeDistributionChip — shared per-node attribution component
Large clusters were going to break the Manage → Backends Nodes column:
the old inline logic rendered every node as a badge and would shred the
layout at >10 workers, plus the Manage → Models distribution cell had
copy-pasted its own slightly-different version.
NodeDistributionChip handles any cluster size with two render modes:
- small (≤3 nodes): inline chips of node names, colored by health.
- large: a single "on N nodes · M offline · K drift" summary chip;
clicking opens a Popover with a per-node table (name, status,
version, digest for backends; name, status, state for models).
Drift counting mirrors the backend's summarizeNodeDrift so the UI
number matches UpgradeInfo.NodeDrift. Digests are truncated to the
docker-style 12-char form with the full value preserved in the title.
Popover is a new general-purpose primitive: fixed positioning anchored
to the trigger, flips above when there's no room below, closes on
outside-click or Escape, returns focus to the trigger. Uses .card as
its surface so theming is inherited. Also useful for a future
labels-editor popup and the user menu.
Manage.jsx drops its duplicated inline Nodes-column + loaded_on cell
and uses the shared chip with context="backends" / "models"
respectively. Delete code removes ~40 lines of ad-hoc logic.
* feat(ui): shared FilterBar across the System page tabs
The Backends gallery had a nice search + chip + toggle strip; the System
page had nothing, so the two surfaces felt like different apps. Lift the
pattern into a reusable FilterBar and wire both System tabs through it.
New component core/http/react-ui/src/components/FilterBar.jsx renders a
search input, a role="tablist" chip row (aria-selected for a11y), and
optional toggles / right slot. Chips support an optional `count` which
the System page uses to show "User 3", "Updates 1" etc.
System Models tab: search by id or backend; chips for
All/Running/Idle/Disabled/Pinned plus a conditional Distributed chip in
distributed mode. "Last synced" + Update button live in the right slot.
System Backends tab: search by name/alias/meta-backend-for; chips for
All/User/System/Meta plus conditional Updates / Offline-nodes chips
when relevant. The old ad-hoc "Updates only" toggle from the upgrade
banner folded into the Updates chip — one source of truth for that
filter. Offline chip only appears in distributed mode when at least
one backend has an unhealthy node, so the chip row stays quiet on
healthy clusters.
Filter state persists in URL query params (mq/mf/bq/bf) so deep links
and tab switches keep the operator's filter context instead of
resetting every time.
Also adds an "Adopted" distribution path: when a model in
/api/models/capabilities carries source="registry-only" (discovered on
a worker but not configured locally), the Models tab shows a ghost chip
labelled "Adopted" with hover copy explaining how to persist it — this
is what closes the loop on the ghost-model story end-to-end.
* feat: add PreferDevelopmentBackends setting, expose isMeta/isDevelopment in API
- Add PreferDevelopmentBackends config field, CLI flag, runtime setting
- Add IsDevelopment() method to GalleryBackend
- Use AvailableBackendsUnfiltered in UI API to show all backends
- Expose isMeta, isDevelopment, preferDevelopmentBackends in backend API response
* feat: upgrade banner with Upgrade All button, detect pre-existing backends
- Add upgrade banner on Backends page showing count and Upgrade All button
- Fix upgrade detection for backends installed before version tracking:
flag as upgradeable when gallery has a version but installed has none
- Fix OCI digest check to flag backends with no stored digest as upgradeable
* feat: add backend versioning data model foundation
Add Version, URI, and Digest fields to BackendMetadata for tracking
installed backend versions and enabling upgrade detection. Add Version
field to GalleryBackend. Add UpgradeAvailable/AvailableVersion fields
to SystemBackend. Implement GetImageDigest() for lightweight OCI digest
lookups via remote.Head. Record version, URI, and digest at install time
in InstallBackend() and propagate version through meta backends.
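A sketch of what such a remote.Head-based lookup typically looks like with go-containerregistry (auth and platform options omitted; the real GetImageDigest may differ in details):

```go
package gallery

import (
	"github.com/google/go-containerregistry/pkg/name"
	"github.com/google/go-containerregistry/pkg/v1/remote"
)

// getImageDigest resolves the manifest digest with a HEAD request, without
// pulling any layers.
func getImageDigest(image string) (string, error) {
	ref, err := name.ParseReference(image)
	if err != nil {
		return "", err
	}
	desc, err := remote.Head(ref)
	if err != nil {
		return "", err
	}
	return desc.Digest.String(), nil
}
```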
* feat: add backend upgrade detection and execution logic
Add CheckBackendUpgrades() to compare installed backend versions/digests
against gallery entries, and UpgradeBackend() to perform atomic upgrades
with backup-based rollback on failure. Includes Agent A's data model
changes (Version/URI/Digest fields, GetImageDigest).
* feat: add AutoUpgradeBackends config and runtime settings
Add configuration and runtime settings for backend auto-upgrade:
- RuntimeSettings field for dynamic config via API/JSON
- ApplicationConfig field, option func, and roundtrip conversion
- CLI flag with LOCALAI_AUTO_UPGRADE_BACKENDS env var
- Config file watcher support for runtime_settings.json
- Tests for ToRuntimeSettings, ApplyRuntimeSettings, and roundtrip
* feat(ui): add backend version display and upgrade support
- Add upgrade check/trigger API endpoints to config and api module
- Backends page: version badge, upgrade indicator, upgrade button
- Manage page: version in metadata, context-aware upgrade/reinstall button
- Settings page: auto-upgrade backends toggle
* feat: add upgrade checker service, API endpoints, and CLI command
- UpgradeChecker background service: checks every 6h, auto-upgrades when enabled
- API endpoints: GET /backends/upgrades, POST /backends/upgrades/check, POST /backends/upgrade/:name
- CLI: `localai backends upgrade` command, version display in `backends list`
- BackendManager interface: add UpgradeBackend and CheckUpgrades methods
- Wire upgrade op through GalleryService backend handler
- Distributed mode: fan-out upgrade to worker nodes via NATS
* fix: use advisory lock for upgrade checker in distributed mode
In distributed mode with multiple frontend instances, use PostgreSQL
advisory lock (KeyBackendUpgradeCheck) so only one instance runs
periodic upgrade checks and auto-upgrades. Prevents duplicate
upgrade operations across replicas.
Standalone mode is unchanged (simple ticker loop).
* test: add e2e tests for backend upgrade API
- Test GET /api/backends/upgrades returns 200 (even with no upgrade checker)
- Test POST /api/backends/upgrade/:name accepts request and returns job ID
- Test full upgrade flow: trigger upgrade via API, wait for job completion,
verify run.sh updated to v2 and metadata.json has version 2.0.0
- Test POST /api/backends/upgrades/check returns 200
- Fix nil check for applicationInstance in upgrade API routes
* always enable parallel requests
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* feat: add node reconciler, allow scheduling to a group of nodes, min/max autoscaler
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* chore: move tests to ginkgo
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* chore(smart router): order by available vram
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
---------
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* feat: add distributed mode (experimental)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* fix data races, mutexes, transactions
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* refactorings
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* fixups
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* fix events and tool stream in agent chat
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* use ginkgo
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* refactoring and consolidation
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* refactoring and consolidation
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* refactoring and consolidation
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* refactoring and consolidation
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* refactoring and consolidation
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* refactoring and consolidation
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* refactoring and consolidation
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* refactoring and consolidation
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* fix(cron): compute time boundaries correctly, avoiding re-triggering
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* enhancements, refactorings
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* do not flood with health checks
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* do not list obvious backends as text backends
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* tests fixups
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* refactoring and consolidation
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* Drop redundant healthcheck
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* enhancements, refactorings
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
---------
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* feat: add fine-tuning endpoint
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* feat(experimental): add fine-tuning endpoint and TRL support
This changeset defines new gRPC signatures for fine-tuning backends, and
adds the TRL backend as the initial fine-tuning engine. This implementation also
supports exporting to GGUF and automatically importing it to LocalAI
after fine-tuning.
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* commit TRL backend, stop by killing process
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* move fine-tune to generic features
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* add evals, reorder menu
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* Fix tests
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
---------
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* feat(ui): add users and authentication support
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* feat: allow the admin user to impersonate users
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* chore: ui improvements, disable 'Users' button in navbar when no auth is configured
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* feat: add OIDC support
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* fix: gate models
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* chore: cache requests to optimize speed
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* small UI enhancements
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* chore(ui): style improvements
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* fix: cover other paths by auth
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* chore: separate local auth, refactor
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* security hardening, approval mode
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* fix: fix tests and expectations
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* chore: update localagi/localrecall
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
---------
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
- Add 'agent' subcommand with 'run' and 'list' sub-commands
- Support running agents by name from pool.json registry
- Support running agents from JSON config files
- Implement foreground mode with --prompt flag for single-turn interactions
- Reuse AgentPoolService for consistent agent initialization
- Add comprehensive unit tests for config loading and overrides
Fixes #8960
Signed-off-by: localai-bot <localai-bot@users.noreply.github.com>
Co-authored-by: localai-bot <localai-bot@noreply.github.com>
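To make the shape of the `agent` subcommand concrete, here is a small kong-based sketch with `run` and `list` verbs. The command, flag and help strings are illustrative stand-ins, not the exact LocalAI CLI definitions.

```go
// Minimal sketch of an `agent run` / `agent list` subcommand declared with kong.
package main

import (
	"fmt"

	"github.com/alecthomas/kong"
)

type AgentRunCmd struct {
	Name   string `arg:"" optional:"" help:"Agent name from the pool.json registry."`
	Config string `help:"Path to a JSON agent config file." type:"path"`
	Prompt string `help:"Run a single foreground turn with this prompt."`
}

func (c *AgentRunCmd) Run() error {
	fmt.Printf("running agent %q (config=%q, prompt=%q)\n", c.Name, c.Config, c.Prompt)
	return nil
}

type AgentListCmd struct{}

func (c *AgentListCmd) Run() error {
	fmt.Println("listing agents from pool.json")
	return nil
}

var cli struct {
	Agent struct {
		Run  AgentRunCmd  `cmd:"" help:"Run an agent by name or from a config file."`
		List AgentListCmd `cmd:"" help:"List agents registered in pool.json."`
	} `cmd:"" help:"Manage and run agents."`
}

func main() {
	ctx := kong.Parse(&cli)
	ctx.FatalIfErrorf(ctx.Run())
}
```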
Also test for regressions in HTTP GET API key exempted endpoints because
this list can get out of sync with the UI routes.
Also fix support for proxying on a different prefix, on both the server
and client side.
Signed-off-by: Richard Palethorpe <io@richiejp.com>
Otherwise, when using collections with PostgreSQL, we create a deadlock,
as we need embeddings to be up.
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
feat: standardize CLI flag naming to kebab-case with backwards compatibility
- Rename --p2ptoken to --p2p-token for consistency
- Add deprecation alias for old --p2ptoken flag
- Fix broken name tag in config check command
- Add runtime deprecation warning system (core/cli/deprecations.go)
- Document kebab-case naming convention in code comments
- Maintain full backwards compatibility via kong aliases
Co-authored-by: localai-bot <localai-bot@noreply.github.com>
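A sketch of the rename-with-alias pattern described above follows; the flag struct, env var and helper below are illustrative, and the exact kong alias tag spelling is an approximation of the mechanism the commit relies on.

```go
package main

import (
	"fmt"
	"os"
	"strings"

	"github.com/alecthomas/kong"
)

// --p2p-token is the canonical kebab-case name; the old --p2ptoken spelling
// stays accepted through a kong alias so existing scripts keep working.
var cli struct {
	P2PToken string `name:"p2p-token" aliases:"p2ptoken" env:"LOCALAI_P2P_TOKEN" help:"Token for P2P mode."`
}

// warnDeprecated is a stand-in for the runtime deprecation warning system the
// commit mentions (core/cli/deprecations.go); it only inspects raw arguments.
func warnDeprecated(args []string) {
	for _, a := range args {
		if strings.HasPrefix(a, "--p2ptoken") {
			fmt.Fprintln(os.Stderr, "warning: --p2ptoken is deprecated, use --p2p-token")
		}
	}
}

func main() {
	warnDeprecated(os.Args[1:])
	kong.Parse(&cli)
	fmt.Println("p2p token set:", cli.P2PToken != "")
}
```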
* feat(mlx-distributed): add new MLX-distributed backend
Add new MLX distributed backend with support for both TCP and RDMA for
model sharding.
This implementation ties into the discovery mechanism already in place
and re-uses the same P2P mechanism for TCP MLX-distributed inferencing.
The auto-parallel implementation is inspired by Exo's (who have been
added to the acknowledgements for their great work!).
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* expose a CLI command to facilitate starting the backend
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* feat: make manual rank0 configurable via model configs
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* Add missing features from mlx backend
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* Apply suggestion from @mudler
Signed-off-by: Ettore Di Giacinto <mudler@users.noreply.github.com>
---------
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
Signed-off-by: Ettore Di Giacinto <mudler@users.noreply.github.com>
feat: add --data-path CLI flag for persistent data separation
- Add LOCALAI_DATA_PATH environment variable and --data-path CLI flag
- Default data path: /data (separate from configuration directory)
- Automatic migration on startup: moves agent_tasks.json, agent_jobs.json, collections/, and assets/ from old config dir to new data path
- Backward compatible: preserves old behavior if LOCALAI_DATA_PATH is not set
- Agent state and job directories now use DataPath with proper fallback chain
- Update documentation with new flag and docker-compose example
This separates mutable persistent data (collectiondb, agents, assets, skills) from configuration files, enabling better volume mounting and data persistence in containerized deployments.
Signed-off-by: localai-bot <localai-bot@noreply.github.com>
Co-authored-by: localai-bot <localai-bot@noreply.github.com>
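A minimal sketch of the startup migration described above is shown below: move a fixed set of entries from the old configuration directory into the data path, and keep the old behaviour when LOCALAI_DATA_PATH is unset. The config directory path and the exact fallback behaviour are assumptions for illustration only.

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// Entries the commit lists as moving from the old config dir to the data path.
var migrated = []string{"agent_tasks.json", "agent_jobs.json", "collections", "assets"}

// migrateData moves known entries into the data path. os.Rename is enough when
// both directories sit on the same filesystem; a real implementation would need
// a copy fallback across mount points.
func migrateData(configDir, dataPath string) error {
	if err := os.MkdirAll(dataPath, 0o750); err != nil {
		return err
	}
	for _, name := range migrated {
		src := filepath.Join(configDir, name)
		dst := filepath.Join(dataPath, name)
		if _, err := os.Stat(src); err != nil {
			continue // nothing to migrate for this entry
		}
		if _, err := os.Stat(dst); err == nil {
			continue // already present in the data path, leave it alone
		}
		if err := os.Rename(src, dst); err != nil {
			return fmt.Errorf("migrating %s: %w", name, err)
		}
	}
	return nil
}

func main() {
	dataPath := os.Getenv("LOCALAI_DATA_PATH")
	if dataPath == "" {
		// Preserve the old behaviour when the flag/env var is not set.
		fmt.Println("LOCALAI_DATA_PATH not set, keeping data in the config directory")
		return
	}
	if err := migrateData("./configuration", dataPath); err != nil {
		fmt.Fprintln(os.Stderr, "migration failed:", err)
	}
}
```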
feat: add shell completion support for bash, zsh, and fish
- Add core/cli/completion.go with dynamic completion script generation
- Add core/cli/completion_test.go with unit tests
- Modify cmd/local-ai/main.go to support completion command
- Modify core/cli/cli.go to add Completion subcommand
- Add docs/content/features/shell-completion.md with installation instructions
The completion scripts are generated dynamically from the Kong CLI model,
so they automatically include all commands, subcommands, and flags.
Co-authored-by: localai-bot <localai-bot@noreply.github.com>
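To show how completion data can be derived from the Kong CLI model rather than hard-coded, here is a rough sketch that walks the parsed application tree and emits a flat bash word list. The toy CLI grammar is invented, the kong fields used (Model.Node, Children, Flags, Name) are part of kong's public model, and real completion scripts (including LocalAI's) are considerably richer.

```go
package main

import (
	"fmt"
	"strings"

	"github.com/alecthomas/kong"
)

// Toy grammar standing in for the real CLI; completions come from the model,
// so new commands and flags would be picked up automatically.
var cli struct {
	Run    struct{} `cmd:"" help:"Run the server."`
	Models struct {
		List struct{} `cmd:"" help:"List models."`
	} `cmd:"" help:"Manage models."`
	Debug bool `help:"Enable debug logging."`
}

// collect gathers command names and flag names from the Kong node tree.
func collect(node *kong.Node, words *[]string) {
	for _, f := range node.Flags {
		*words = append(*words, "--"+f.Name)
	}
	for _, child := range node.Children {
		*words = append(*words, child.Name)
		collect(child, words)
	}
}

func main() {
	parser, err := kong.New(&cli, kong.Name("local-ai"))
	if err != nil {
		panic(err)
	}
	var words []string
	collect(parser.Model.Node, &words)
	fmt.Printf(`_local_ai() {
  COMPREPLY=( $(compgen -W "%s" -- "${COMP_WORDS[COMP_CWORD]}") )
}
complete -F _local_ai local-ai
`, strings.Join(words, " "))
}
```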
* feat: add standalone and agentic functionalities
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* expose agents via responses api
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
---------
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* feat: Add LOCALAI_DISABLE_MCP environment variable to disable MCP support
- Added DisableMCP field to RunCMD struct in core/cli/run.go
- Added LOCALAI_DISABLE_MCP environment variable support
- Added DisableMCP field to ApplicationConfig struct
- Added DisableMCP AppOption function
- Updated MCP endpoint routing to check appConfig.DisableMCP
- When LOCALAI_DISABLE_MCP is set to true/1/yes, MCP endpoints are not registered
When set, all MCP functionality is disabled and appropriate error messages
are returned to users.
Use Cases:
- Security-conscious deployments where MCP is not needed
- Reducing attack surface
- Compliance requirements that prohibit certain protocol support
Environment variable: LOCALAI_DISABLE_MCP=true
Signed-off-by: localai-bot <localai-bot@users.noreply.github.com>
* docs: Add documentation for LOCALAI_DISABLE_MCP environment variable
- Add section explaining how to disable MCP support using environment variable
- Document use cases for disabling MCP
- Provide examples for CLI and Docker usage
Signed-off-by: localai-bot <localai-bot@users.noreply.github.com>
---------
Signed-off-by: localai-bot <localai-bot@users.noreply.github.com>
Co-authored-by: localai-bot <localai-bot@users.noreply.github.com>
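The option-plus-routing-guard pattern described above can be sketched as follows; the option function name and the route registration below are simplified stand-ins for the real ApplicationConfig/AppOption wiring, not the actual code.

```go
package main

import "fmt"

type ApplicationConfig struct {
	DisableMCP bool
}

type AppOption func(*ApplicationConfig)

// WithMCPDisabled is a hypothetical option mirroring the DisableMCP AppOption.
func WithMCPDisabled(disabled bool) AppOption {
	return func(c *ApplicationConfig) { c.DisableMCP = disabled }
}

// registerRoutes only wires MCP endpoints when they are not disabled, so a
// deployment that sets LOCALAI_DISABLE_MCP=true never exposes them at all.
func registerRoutes(cfg *ApplicationConfig) {
	fmt.Println("registering core routes")
	if cfg.DisableMCP {
		fmt.Println("MCP disabled: skipping MCP endpoint registration")
		return
	}
	fmt.Println("registering MCP endpoints")
}

func main() {
	cfg := &ApplicationConfig{}
	for _, opt := range []AppOption{WithMCPDisabled(true)} {
		opt(cfg)
	}
	registerRoutes(cfg)
}
```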
* feat(musicgen): add ace-step and UI interface
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* Correctly handle model dir
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* Drop auto-download
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* Fixups
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* Add to models, fix up UI icons
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* fixups
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* Update docs
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* l4t13 is incompatible
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* avoid pinning version for cuda12
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* Drop l4t12
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
---------
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* WIP response format implementation for audio transcriptions
(cherry picked from commit e271dd764bbc13846accf3beb8b6522153aa276f)
Signed-off-by: Andres Smith <andressmithdev@pm.me>
* Rework transcript response_format and add more formats
(cherry picked from commit 6a93a8f63e2ee5726bca2980b0c9cf4ef8b7aeb8)
Signed-off-by: Andres Smith <andressmithdev@pm.me>
* Add test and replace go-openai package with official openai go client
(cherry picked from commit f25d1a04e46526429c89db4c739e1e65942ca893)
Signed-off-by: Andres Smith <andressmithdev@pm.me>
* Fix faster-whisper backend and refactor transcription formatting to also work on the CLI
Signed-off-by: Andres Smith <andressmithdev@pm.me>
(cherry picked from commit 69a93977d5e113eb7172bd85a0f918592d3d2168)
Signed-off-by: Andres Smith <andressmithdev@pm.me>
---------
Signed-off-by: Andres Smith <andressmithdev@pm.me>
Co-authored-by: nanoandrew4 <nanoandrew4@gmail.com>
Co-authored-by: Ettore Di Giacinto <mudler@users.noreply.github.com>
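For reference, a transcription request with an explicit response_format can be exercised with only the standard library, as in the sketch below. The endpoint path follows the OpenAI convention, and the format values noted in the comment (json, text, srt, verbose_json, vtt) are the usual ones rather than an exhaustive list of what LocalAI accepts.

```go
package main

import (
	"bytes"
	"fmt"
	"io"
	"mime/multipart"
	"net/http"
	"os"
)

// transcribe posts an audio file to the transcription endpoint and returns the
// raw response body, whose shape depends on the requested response_format.
func transcribe(baseURL, model, audioPath, format string) (string, error) {
	f, err := os.Open(audioPath)
	if err != nil {
		return "", err
	}
	defer f.Close()

	var body bytes.Buffer
	w := multipart.NewWriter(&body)
	part, err := w.CreateFormFile("file", audioPath)
	if err != nil {
		return "", err
	}
	if _, err := io.Copy(part, f); err != nil {
		return "", err
	}
	_ = w.WriteField("model", model)
	_ = w.WriteField("response_format", format) // e.g. "verbose_json" or "srt"
	if err := w.Close(); err != nil {
		return "", err
	}

	resp, err := http.Post(baseURL+"/v1/audio/transcriptions", w.FormDataContentType(), &body)
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()
	out, err := io.ReadAll(resp.Body)
	return string(out), err
}

func main() {
	text, err := transcribe("http://localhost:8080", "whisper-1", "sample.wav", "srt")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Println(text)
}
```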
* feat: allow forcing backend eviction while requests are in flight
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* feat: make the request wait and retry if eviction couldn't be done
Otherwise, calls that would need to shut down other backends in order to
proceed would simply fail.
Instead, we make the request wait and retry eviction until it succeeds.
The thresholds can be configured by the user.
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* add tests
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* expose settings to CLI
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* Update docs
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
---------
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
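The wait-and-retry behaviour described above amounts to polling the eviction step until it succeeds or a configurable budget runs out; here is a small sketch of that loop. The evict callback and the threshold parameters are illustrative, not the actual loader API.

```go
package main

import (
	"context"
	"errors"
	"fmt"
	"time"
)

var errNoCapacity = errors.New("no backend could be evicted")

// retryEvict polls the supplied eviction function until it succeeds, the
// context is cancelled, or the attempt budget is exhausted.
func retryEvict(ctx context.Context, evict func() error, interval time.Duration, maxAttempts int) error {
	ticker := time.NewTicker(interval)
	defer ticker.Stop()
	for attempt := 1; ; attempt++ {
		if err := evict(); err == nil {
			return nil
		} else if attempt >= maxAttempts {
			return fmt.Errorf("giving up after %d attempts: %w", attempt, err)
		}
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-ticker.C:
		}
	}
}

func main() {
	calls := 0
	evict := func() error {
		calls++
		if calls < 3 {
			return errNoCapacity // pretend other requests are still in flight
		}
		return nil
	}
	err := retryEvict(context.Background(), evict, 50*time.Millisecond, 10)
	fmt.Println("evicted after", calls, "attempts, err =", err)
}
```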
* chore(makefile): Add buildargs for sd and cuda when building backend
Signed-off-by: Richard Palethorpe <io@richiejp.com>
* feat(whisper): Add prompt to condition transcription output
Signed-off-by: Richard Palethorpe <io@richiejp.com>
---------
Signed-off-by: Richard Palethorpe <io@richiejp.com>
* feat: allow installing backends from a URL in the WebUI and API
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* tests
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* trace backend installations
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
---------
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* feat(loader): refactor single active backend support to LRU
This changeset introduces LRU management of loaded backends. Users can
now set a maximum number of models to be loaded concurrently, and when
LocalAI runs in single active backend mode the LRU size is set to 1 for
backward compatibility.
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* chore: add tests
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* Update docs
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* Fixups
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
---------
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
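The core bookkeeping behind an LRU of loaded backends can be sketched with container/list, as below: touching a model marks it most-recently-used, and exceeding the limit evicts the least-recently-used one. Setting the limit to 1 mirrors the single-active-backend behaviour mentioned above; this is illustrative, not the actual loader code.

```go
package main

import (
	"container/list"
	"fmt"
)

type backendLRU struct {
	maxLoaded int
	order     *list.List               // front = most recently used
	loaded    map[string]*list.Element // model name -> list node
	unload    func(name string)        // called when a model is evicted
}

func newBackendLRU(max int, unload func(string)) *backendLRU {
	return &backendLRU{maxLoaded: max, order: list.New(), loaded: map[string]*list.Element{}, unload: unload}
}

// Touch records use of a model, loading it if needed and evicting the least
// recently used one when the limit is exceeded.
func (l *backendLRU) Touch(name string) {
	if el, ok := l.loaded[name]; ok {
		l.order.MoveToFront(el)
		return
	}
	l.loaded[name] = l.order.PushFront(name)
	for l.order.Len() > l.maxLoaded {
		oldest := l.order.Back()
		l.order.Remove(oldest)
		victim := oldest.Value.(string)
		delete(l.loaded, victim)
		l.unload(victim)
	}
}

func main() {
	lru := newBackendLRU(2, func(name string) { fmt.Println("unloading", name) })
	lru.Touch("llama")
	lru.Touch("whisper")
	lru.Touch("llama")           // llama becomes most recently used
	lru.Touch("stablediffusion") // exceeds the limit: whisper is evicted
}
```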
* feat(agent): agent jobs
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* Multiple webhooks, simplify
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* Do not use cron with seconds
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* Create separate pages for details
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* Detect when no models have an MCP configuration and show the wizard
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* Make the services tests run
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
---------
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
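Regarding the "do not use cron with seconds" change above, a minute-granularity scheduler is what robfig/cron gives you by default (standard 5-field expressions). The sketch below shows that usage; the job body and its webhook notification are illustrative, not the agent-jobs implementation.

```go
package main

import (
	"fmt"
	"time"

	"github.com/robfig/cron/v3"
)

func main() {
	c := cron.New() // standard 5-field cron expressions, minute resolution
	_, err := c.AddFunc("*/5 * * * *", func() {
		// An agent job could run here and then notify one or more webhooks.
		fmt.Println("running agent job at", time.Now().Format(time.RFC3339))
	})
	if err != nil {
		panic(err)
	}
	c.Start()
	defer c.Stop()
	time.Sleep(time.Minute) // keep the process alive long enough for a demo tick
}
```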