mirror of https://github.com/mudler/LocalAI.git
* feat(distributed): support multiple replicas of one model on the same node

The distributed scheduler implicitly assumed `(node_id, model_name)` was unique, but the schema didn't enforce it and the worker keyed all gRPC processes by model name alone. With `MinReplicas=2` against a single worker, the reconciler "scaled up" every 30s but the registry never advanced past 1 row — the worker re-loaded the model in place every tick until VRAM fragmented and the gRPC process died.

This change introduces multi-replica-per-node as a first-class concept, with capacity-aware scheduling, a circuit breaker, and VRAM soft-reservation. Operators can declare per-node capacity via the worker flag `--max-replicas-per-model` (mirrored as the auto-label `node.replica-slots=N`) or override it per node from the UI.

  * Schema: BackendNode gains MaxReplicasPerModel (default 1) and ReservedVRAM. NodeModel gains ReplicaIndex (composite with node_id + model_name). ModelSchedulingConfig gains UnsatisfiableUntil/Ticks for the reconciler circuit breaker.
  * Registry: replica_index is threaded through SetNodeModel, RemoveNodeModel, IncrementInFlight, DecrementInFlight, TouchNodeModel, GetNodeModel, SetNodeModelLoadInfo and the InFlightTrackingClient. New helpers: CountReplicasOnNode, NextFreeReplicaIndex (with ErrNoFreeSlot), RemoveAllNodeModelReplicas, FindNodesWithFreeSlot, ClusterCapacityForModel, ReserveVRAM/ReleaseVRAM (atomic UPDATE with ErrInsufficientVRAM), and the unsatisfiable-flag CRUD.
  * Worker: processKey is now `<modelID>#<replicaIndex>`, so concurrent loads of the same model land on distinct ports. Adds the CLI flag --max-replicas-per-model (env LOCALAI_MAX_REPLICAS_PER_MODEL, default 1) and emits the auto-label.
  * Router: scheduleNewModel filters candidates by free slot, allocates the replica index, and soft-reserves VRAM before installing the backend (sketched after this list). evictLRUAndFreeNode now deletes the targeted row by ID instead of all replicas of the model on the node — fixes a latent bug where evicting one replica orphaned its siblings.
  * Reconciler: caps scale-up at ClusterCapacityForModel so a misconfig (MinReplicas > capacity) doesn't loop forever. After 3 consecutive ticks of capacity == 0 it sets UnsatisfiableUntil for a 5m cooldown and emits a warning. ClearAllUnsatisfiable fires from Register, ApproveNode, SetNodeLabel(s), RemoveNodeLabel and UpdateMaxReplicasPerModel, so a new node joining or a label change wakes the reconciler immediately. scaleDownIdle removes the highest replica index first to keep slots compact.
  * Heartbeat: resets reserved_vram to 0 — the worker is the source of truth for actual free VRAM; the reservation only covers the in-tick race window between two scheduling decisions.
  * Probe path: reconciler.probeLoadedModels and health.doCheckAll now pass the row's replica_index to RemoveNodeModel, so an unreachable replica doesn't orphan healthy siblings.
  * Admin override: PUT /api/nodes/:id/max-replicas-per-model sets a sticky override (preserved across worker re-registration). DELETE clears the override so the worker's flag applies again on the next register. Required because Kong defaults the worker flag to 1, so every worker restart would have silently reverted the UI value.
  * React UI: always-visible slot badge on the node row (muted at the default of 1, accented when >1); inline editor in the expanded drawer with pencil-to-edit, Save/Cancel, Esc/Enter, an "(override)" indicator when the value is admin-set, and a "Reset" button to hand control back to the worker. Soft confirm when shrinking the cap below the count of loaded replicas. The scheduling rules table gets an "Unsatisfiable until HH:MM" status badge surfacing the cooldown.
  * node.replica-slots is filtered out of the labels strip on the row to avoid duplicating the slot badge.

23 new Ginkgo specs (registry, reconciler, inflight, health) cover: multi-replica row independence, RemoveNodeModel of one replica preserving its siblings, NextFreeReplicaIndex slot allocation including ErrNoFreeSlot, capacity-gated scale-up with circuit-breaker tripping and recovery on Register, scaleDownIdle ordering, ClusterCapacity math, ReserveVRAM admission gating, heartbeat reset, override survival across worker re-registration, and ResetMaxReplicasPerModel handing control back. Plus 8 stdlib tests for the worker processKey / CLI / auto-label.

Closes the flap reproduced on Qwen3.6-35B against the nvidia-thor worker (single 128 GiB node, MinReplicas=2): the reconciler now caps the scale-up at the cluster's actual capacity instead of looping.
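As an illustration of the Router and Registry bullets above, here is a minimal sketch of the admission path: candidate filtering by free slot, replica-index allocation, and VRAM soft-reservation. The helper names come from the commit message; the Registry interface, signatures and error flow are assumptions made for the sketch, not the actual LocalAI code.

```go
// Sketch only. Helper names follow the commit message above; the Registry
// interface, the signatures and the error flow are assumed for illustration.
package scheduler

import (
	"errors"
	"fmt"
)

var (
	ErrNoFreeSlot       = errors.New("no free replica slot on node")
	ErrInsufficientVRAM = errors.New("insufficient VRAM for reservation")
)

type Registry interface {
	FindNodesWithFreeSlot(modelID string) ([]string, error)  // nodes where loaded replicas < MaxReplicasPerModel
	NextFreeReplicaIndex(nodeID, modelID string) (int, error) // ErrNoFreeSlot when the node is already full
	ReserveVRAM(nodeID string, bytes uint64) error            // atomic UPDATE; ErrInsufficientVRAM on failure
	ReleaseVRAM(nodeID string, bytes uint64) error
}

// scheduleNewModel picks a node with a free slot, allocates the replica index,
// soft-reserves the model's estimated VRAM, and only then installs the backend.
func scheduleNewModel(reg Registry, modelID string, estVRAM uint64, install func(nodeID, processKey string) error) error {
	nodes, err := reg.FindNodesWithFreeSlot(modelID)
	if err != nil {
		return err
	}
	if len(nodes) == 0 {
		return ErrNoFreeSlot
	}
	for _, nodeID := range nodes {
		idx, err := reg.NextFreeReplicaIndex(nodeID, modelID)
		if errors.Is(err, ErrNoFreeSlot) {
			continue // lost the race for the last slot; try the next candidate
		} else if err != nil {
			return err
		}
		// Soft-reserve VRAM so two decisions in the same tick cannot both
		// count the same free memory (the next heartbeat resets it to 0).
		if err := reg.ReserveVRAM(nodeID, estVRAM); errors.Is(err, ErrInsufficientVRAM) {
			continue
		} else if err != nil {
			return err
		}
		// The worker keys its gRPC process by <modelID>#<replicaIndex>, so a
		// second replica of the same model lands on a distinct port.
		processKey := fmt.Sprintf("%s#%d", modelID, idx)
		if err := install(nodeID, processKey); err != nil {
			_ = reg.ReleaseVRAM(nodeID, estVRAM) // undo the soft reservation on failure
			return err
		}
		return nil
	}
	return ErrNoFreeSlot
}
```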
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
Assisted-by: claude-code:opus-4-7 [Read] [Edit] [Bash] [Skill:critique] [Skill:audit] [Skill:polish] [Skill:golang-testing]

* refactor(react-ui/nodes): tighten capacity editor copy + adopt ActionMenu for row actions

  * Capacity editor hint trimmed from operator-doc style ("Sourced from the worker's `--max-replicas-per-model` flag. Changing it here makes it a sticky admin override that survives worker restarts." → "Saved values stick across worker restarts."), and the override-state copy is similarly compressed. The full mechanic is no longer needed in the UI — the override pill carries the meaning and the docs cover the rest.
  * Node row actions migrated from an inline cluster of icon buttons (Drain / Resume / Trash) to the kebab ActionMenu used by /manage for per-row model actions, so dense Nodes tables stay clean. Approve stays as a prominent primary button — it's a stateful admission gate, not a routine action, and elevating it matches how /manage surfaces install-time decisions outside the menu.
  * The expanded drawer's Labels section now filters node.replica-slots out of the editable label list. The label is owned by the Capacity editor above; surfacing it again as an editable label invited confusion (the Capacity save would clobber any direct edit).

Both backend and agent workers benefit — they share the row rendering path, so the action menu and label filter apply to both.

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
Assisted-by: claude-code:opus-4-7 [Edit] [chrome-devtools-mcp] [Skill:critique] [Skill:audit] [Skill:polish]

* fix(react-ui/nodes): suppress slot badge on agent workers

Agent workers don't load models, so the per-node replica capacity is inapplicable to them. Showing "1× slots" on agent rows was a tiny inconsistency from the unified rendering path — gate the badge on node_type !== 'agent' so it only appears on backend workers.

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
Assisted-by: claude-code:opus-4-7 [Edit] [chrome-devtools-mcp]

* refactor(react-ui/nodes): distill expanded drawer + restyle scheduling form

The expanded node drawer used to stack five panels — slot badge, filled capacity box, Loaded Models h4 + empty state, Installed Backends h4 + empty state, Labels h4 + chips + form — making routine inspections feel like a control panel. The scheduling rule form wrapped its mode toggle as two 50%-width filled buttons that competed visually with the actual primary action.
  * Drawer: collapse three rarely-touched config zones (Capacity, Backends, Labels) into one `<details>` "Manage" disclosure (closed by default), with small uppercase eyebrow labels for each zone instead of parallel h4 sub-headings. Loaded Models stays as the at-a-glance headline, with a single-line empty hint instead of a boxed empty state. CapacityEditor renders flat (no filled background) — the Manage disclosure provides the framing.
  * Scheduling form: replace the chunky 50%-width button-tabs with the project's existing `.segmented` control (icon + label, sized to content). The mode hint becomes a single line below. Fields stack vertically with helper text under the inputs and a hairline divider above the right-aligned Save / Cancel.

The empty drawer collapses from ~5 stacked sections (~280px tall) to two lines (~80px). The scheduling form now reads as a designed dialog instead of raw building blocks. Both surfaces now match the typographic density and weight of the rest of the admin pages.

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
Assisted-by: claude-code:opus-4-7 [Edit] [chrome-devtools-mcp] [Skill:distill] [Skill:audit] [Skill:polish]

* feat(react-ui/nodes): replace scheduling form's model picker with searchable combobox

The native <select> made operators scroll through every gallery entry to find a model name. The project already has SearchableModelSelect (used in Studio/Talk/etc.), which combines free-text search with the gallery list and accepts typed model names that aren't installed yet — useful for pre-staging a scheduling rule before the node it'll run on has finished bootstrapping. Also drops the now-unused useModels import (the combobox manages the gallery hook internally).

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
Assisted-by: claude-code:opus-4-7 [Edit]

* refactor(react-ui/nodes): consolidate key/value chip editor + add replica preset chips

The Nodes page was rendering the same key=value chip pattern in two places with subtly different markup: the Labels editor in the expanded drawer and (post-distill) the Node Selector input in the scheduling form. The form's input was also a comma-separated string that operators were getting wrong.

  * Extract <KeyValueChips> as a fully controlled chip builder. The parent owns the map and decides what onAdd/onRemove does — form state for the scheduling form, API calls for the live drawer Labels editor. Same visuals everywhere; one component to change when polish is needed.
  * Replace the comma-separated Node Selector text input with KeyValueChips. Operators were copying syntax from the docs and missing commas; the chip vocabulary makes the key=value structure self-documenting.
  * Add <ReplicaInput>: a numeric input plus quick-pick preset chips for Min/Max replicas. Picked over a slider because replica counts are exact specs derived from VRAM math (an operator decision, not a fuzzy estimate). The chips give one-click access to common values (1/2/3/4 for Min, 0 (no limit)/2/4/8 for Max) without the slider's special-value problem (MaxReplicas=0 is categorical, not a position on a continuum).
  * Drop the now-unused labelInputs state in the Nodes page (the inline label editor's per-node draft state lived there and is now owned by KeyValueChips).
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
Assisted-by: claude-code:opus-4-7 [Edit] [Skill:distill]

* test: fix CI fallout from multi-replica refactor (e2e/distributed + playwright)

Two breakages caught by CI that didn't surface in the local run:

  * tests/e2e/distributed/*.go — multiple files used the pre-PR2 registry signatures for SetNodeModel / IncrementInFlight / DecrementInFlight / RemoveNodeModel / TouchNodeModel / GetNodeModel / SetNodeModelLoadInfo, and one stale adapter.InstallBackend call in node_lifecycle_test.go. All updated to pass replicaIndex=0 — these tests don't exercise multi-replica behavior, they just need to compile against the new signatures. The Ginkgo specs in core/services/nodes/ already cover the multi-replica logic.
  * core/http/react-ui/e2e/nodes-per-node-backend-actions.spec.js — the drawer's distill refactor moved Backends inside a "Manage" <details> disclosure that is collapsed by default. The test helper expanded the node row but never opened Manage, so the per-node backend table was never in the DOM. The helper now clicks `.node-manage > summary` after expanding the row.

All 100 Playwright tests pass locally; tests/e2e/distributed compiles clean.

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
Assisted-by: claude-code:opus-4-7 [Edit] [Bash]

---------

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
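To complement the multi-replica commit above, here is a minimal sketch of the reconciler circuit breaker it describes: scale-up capped at cluster capacity, the breaker tripping after three consecutive zero-capacity ticks, and a 5-minute cooldown. The constants and helper names mirror the commit message; the Registry interface, signatures and tick wiring are assumptions for illustration, not the actual LocalAI code.

```go
// Sketch only. The behavior follows the Reconciler bullet in the multi-replica
// commit above; the Registry interface and all signatures are assumed.
package reconciler

import "time"

const (
	unsatisfiableTicks    = 3               // consecutive zero-capacity ticks before tripping
	unsatisfiableCooldown = 5 * time.Minute // how long scale-up pauses once tripped
)

type Registry interface {
	ClusterCapacityForModel(modelID string) (int, error)
	CountReplicas(modelID string) (int, error)
	SetUnsatisfiable(modelID string, until time.Time, ticks int) error
	GetUnsatisfiable(modelID string) (time.Time, int, error)
}

// reconcileModel scales a model toward minReplicas, but never past what the
// cluster can actually hold, and backs off when capacity stays at zero.
func reconcileModel(reg Registry, modelID string, minReplicas int,
	scaleUp func(n int) error, warnf func(format string, args ...any)) error {

	until, ticks, err := reg.GetUnsatisfiable(modelID)
	if err != nil {
		return err
	}
	if time.Now().Before(until) {
		return nil // cooling down; Register, ApproveNode and label changes clear this early
	}

	capacity, err := reg.ClusterCapacityForModel(modelID)
	if err != nil {
		return err
	}
	if capacity == 0 {
		// Count consecutive unsatisfiable ticks and trip the breaker on the third.
		ticks++
		if ticks >= unsatisfiableTicks {
			warnf("model %s: MinReplicas unsatisfiable, pausing scale-up for %s", modelID, unsatisfiableCooldown)
			return reg.SetUnsatisfiable(modelID, time.Now().Add(unsatisfiableCooldown), 0)
		}
		return reg.SetUnsatisfiable(modelID, time.Time{}, ticks)
	}
	if ticks != 0 {
		// Capacity came back (new node, label change, higher slot cap): reset the counter.
		if err := reg.SetUnsatisfiable(modelID, time.Time{}, 0); err != nil {
			return err
		}
	}

	current, err := reg.CountReplicas(modelID)
	if err != nil {
		return err
	}
	// Cap the target at cluster capacity so MinReplicas > capacity cannot loop forever.
	target := minReplicas
	if target > capacity {
		target = capacity
	}
	if target > current {
		return scaleUp(target - current)
	}
	return nil
}
```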
core/http/react-ui/e2e/nodes-per-node-backend-actions.spec.js · 167 lines · 5.8 KiB · JavaScript
import { test, expect } from '@playwright/test'

// These specs cover the per-node backend row in the Nodes page:
// - the upgrade affordance is self-explanatory (icon + tooltip)
// - a delete affordance is present and goes through ConfirmDialog
//
// We mock the distributed-mode API so the tests can run against the
// standalone ui-test-server without spinning up workers/NATS.

const NODE_ID = 'test-node-1'
const NODE_NAME = 'worker-test'
const BACKEND_NAME = 'cuda12-vllm-development'

async function mockDistributedNodes(page, { onDelete } = {}) {
  await page.route('**/api/nodes', (route) => {
    route.fulfill({
      status: 200,
      contentType: 'application/json',
      body: JSON.stringify([
        {
          id: NODE_ID,
          name: NODE_NAME,
          node_type: 'backend',
          address: '10.0.0.1:50051',
          http_address: '10.0.0.1:8090',
          status: 'healthy',
          total_vram: 0,
          available_vram: 0,
          total_ram: 8_000_000_000,
          available_ram: 4_000_000_000,
          gpu_vendor: '',
          last_heartbeat: new Date().toISOString(),
          created_at: new Date().toISOString(),
          updated_at: new Date().toISOString(),
        },
      ]),
    })
  })

  await page.route('**/api/nodes/scheduling', (route) => {
    route.fulfill({
      status: 200,
      contentType: 'application/json',
      body: '[]',
    })
  })

  await page.route(`**/api/nodes/${NODE_ID}/models`, (route) => {
    route.fulfill({
      status: 200,
      contentType: 'application/json',
      body: '[]',
    })
  })

  await page.route(`**/api/nodes/${NODE_ID}/backends`, (route) => {
    route.fulfill({
      status: 200,
      contentType: 'application/json',
      body: JSON.stringify([
        {
          name: BACKEND_NAME,
          is_system: false,
          is_meta: false,
          installed_at: new Date().toISOString(),
        },
      ]),
    })
  })

  await page.route(`**/api/nodes/${NODE_ID}/backends/delete`, async (route) => {
    if (onDelete) {
      await onDelete(route)
    }
    route.fulfill({
      status: 200,
      contentType: 'application/json',
      body: JSON.stringify({ message: 'backend deleted' }),
    })
  })
}

async function expandNodeAndWaitForBackends(page) {
  await page.goto('/app/nodes')
  // Click the row to expand it. The chevron toggle and the row both work,
  // but clicking the name cell is the most user-like.
  await page.getByText(NODE_NAME).first().click()
  // Backends, Capacity and Labels live behind a "Manage" <details>
  // disclosure (the drawer was distilled to keep at-a-glance content
  // lean — see distill refactor in the multi-replica branch). Open it
  // by clicking the summary inside the .node-manage scope so the
  // per-node backend table is in the DOM before assertions run.
  await page.locator('.node-manage > summary').first().click()
  await expect(page.getByRole('cell', { name: BACKEND_NAME, exact: true })).toBeVisible({ timeout: 10_000 })
}

test.describe('Nodes page — per-node backend actions', () => {
  test('upgrade affordance is self-explanatory (not "Reinstall backend" with a sync icon)', async ({ page }) => {
    await mockDistributedNodes(page)
    await expandNodeAndWaitForBackends(page)

    // Negative: the old, ambiguous wording must not be used.
    await expect(page.locator('button[title="Reinstall backend"]')).toHaveCount(0)
    await expect(page.locator('button[title="Reinstall backend"] i.fa-sync-alt')).toHaveCount(0)

    // Positive: a self-explanatory upgrade affordance is rendered next to the
    // backend row. We accept either an arrow-up or arrows-rotate glyph; both
    // map to "upgrade" semantics in FontAwesome 6 unambiguously.
    const upgradeBtn = page.locator('button[title="Upgrade backend on this node"]')
    await expect(upgradeBtn).toBeVisible()
    const iconClass = await upgradeBtn.locator('i').getAttribute('class')
    expect(iconClass).toMatch(/fa-(arrow-up|arrows-rotate|up-long)/)
  })

  test('per-node backend row shows a delete (trash) button next to upgrade', async ({ page }) => {
    await mockDistributedNodes(page)
    await expandNodeAndWaitForBackends(page)

    const deleteBtn = page.locator('button[title="Delete backend from this node"]')
    await expect(deleteBtn).toBeVisible()
    await expect(deleteBtn.locator('i.fa-trash')).toBeVisible()
  })

  test('clicking delete opens the confirm dialog and POSTs to the per-node delete endpoint', async ({ page }) => {
    let postedBody = null
    await mockDistributedNodes(page, {
      onDelete: async (route) => {
        postedBody = route.request().postDataJSON()
      },
    })
    await expandNodeAndWaitForBackends(page)

    await page.locator('button[title="Delete backend from this node"]').click()

    // ConfirmDialog uses role="alertdialog" and a danger confirm button.
    const dialog = page.getByRole('alertdialog')
    await expect(dialog).toBeVisible()
    const confirmBtn = dialog.locator('button.btn-danger')
    await expect(confirmBtn).toBeVisible()
    await confirmBtn.click()

    // Wait until the POST landed.
    await expect.poll(() => postedBody, { timeout: 5_000 }).toEqual({ backend: BACKEND_NAME })
  })

  test('clicking delete and cancelling does not POST', async ({ page }) => {
    let deleteCalls = 0
    await mockDistributedNodes(page, {
      onDelete: () => {
        deleteCalls += 1
      },
    })
    await expandNodeAndWaitForBackends(page)

    await page.locator('button[title="Delete backend from this node"]').click()

    const dialog = page.getByRole('alertdialog')
    await expect(dialog).toBeVisible()
    await dialog.getByRole('button', { name: /cancel/i }).click()
    await expect(dialog).toBeHidden()

    // Give any errant request a moment to fire so a regression would be caught.
    await page.waitForTimeout(500)
    expect(deleteCalls).toBe(0)
  })
})