mirror of https://github.com/mudler/LocalAI.git — synced 2026-05-16 20:52:08 -04:00
* feat(concurrency-groups): per-model exclusive groups for backend loading

  Adds `concurrency_groups: [...]` to model YAML configs. Two models that share a group cannot be loaded concurrently on the same node — loading one evicts the others, reusing the existing pinned/busy/retry policy from LRU eviction.

  Layered design:
  - Watchdog (pkg/model): per-node correctness floor — on every Load(), evict any loaded model that shares a group with the requested one. Pinned skips surface NeedMore so the loader retries (and ultimately logs a clear warning), instead of silently allowing the rule to be violated.
  - Distributed scheduler (core/services/nodes): soft anti-affinity hint — scheduleNewModel prefers nodes that don't already host a same-group model, falling back to eviction only if every candidate has a conflict. Composes with NodeSelector at the same point in the candidate pipeline.

  Per-node, not cluster-wide: VRAM is a node-local resource, and two heavy models running on different nodes is fine.

  The ConfigLoader is wired into SmartRouter via a small ConcurrencyConflictResolver interface so the nodes package keeps a narrow surface on core/config.

  Refactors the inner LRU eviction body into a shared collectEvictionsLocked helper and the loader retry loop into retryEnforce(fn, maxRetries, interval), so both LRU and group enforcement share busy/pinned/retry semantics.

  Closes #9659.

  Assisted-by: Claude:claude-opus-4-7 [Claude Code]
  Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* fix(watchdog): sync pinned + concurrency_groups at startup

  The startup-time watchdog setup lives in initializeWatchdog (startup.go), not in startWatchdog (watchdog.go); the latter is only invoked from the runtime-settings RestartWatchdog path. As a result, neither SyncPinnedModelsToWatchdog nor SyncModelGroupsToWatchdog ran at boot, so `pinned: true` and `concurrency_groups: [...]` only became effective after a settings-driven watchdog restart.

  Fix by adding both sync calls to initializeWatchdog. Confirmed end-to-end: loading model A in group "heavy", then C with no group (coexists), then B in group "heavy" now correctly evicts A and leaves [B, C].

  Assisted-by: Claude:claude-opus-4-7 [Claude Code]
  Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* fix(test): satisfy errcheck on new os.Remove in concurrency_groups spec

  CI lint runs new-from-merge-base, so the pre-existing `defer os.Remove(tmp.Name())` lines are grandfathered into the baseline, but the one introduced by the concurrency_groups YAML round-trip test is held to errcheck. Wrap the remove in a closure that discards the error.

  Assisted-by: Claude:claude-opus-4-7 [Claude Code]
  Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

---------

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
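The commit describes the feature at the YAML level; a model config using it might look like the following sketch (only `concurrency_groups` and `pinned` are named in the commit message — the surrounding fields and layout are assumptions for illustration):

```yaml
# model-heavy-a.yaml — hypothetical LocalAI model config sketch.
# Any two models sharing a group cannot be loaded concurrently on the
# same node; loading this model evicts other loaded members of "heavy".
name: model-heavy-a
concurrency_groups:
  - heavy
# Pinned models resist eviction: a group conflict against a pinned model
# surfaces NeedMore, so the loader retries and eventually logs a warning.
pinned: false
```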
48 lines
1.4 KiB
Go
package application

import (
	"github.com/mudler/LocalAI/core/config"

	. "github.com/onsi/ginkgo/v2"
	. "github.com/onsi/gomega"
)

var _ = Describe("extractModelGroupsFromConfigs", func() {
	It("returns an empty map when no config declares groups", func() {
		out := extractModelGroupsFromConfigs([]config.ModelConfig{
			{Name: "a"},
			{Name: "b"},
		})
		Expect(out).To(BeEmpty())
	})

	It("returns each model's normalized groups", func() {
		out := extractModelGroupsFromConfigs([]config.ModelConfig{
			{Name: "a", ConcurrencyGroups: []string{" heavy ", "vision", "heavy"}},
			{Name: "b", ConcurrencyGroups: []string{"heavy"}},
			{Name: "c"}, // no groups → omitted
		})
		Expect(out).To(HaveLen(2))
		Expect(out["a"]).To(Equal([]string{"heavy", "vision"}))
		Expect(out["b"]).To(Equal([]string{"heavy"}))
		Expect(out).ToNot(HaveKey("c"))
	})

	It("omits models whose groups normalize to empty", func() {
		out := extractModelGroupsFromConfigs([]config.ModelConfig{
			{Name: "blanks", ConcurrencyGroups: []string{"", " "}},
		})
		Expect(out).To(BeEmpty())
	})

	It("skips disabled models so they cannot block loading after re-enable", func() {
		disabled := true
		out := extractModelGroupsFromConfigs([]config.ModelConfig{
			{Name: "a", ConcurrencyGroups: []string{"heavy"}, Disabled: &disabled},
			{Name: "b", ConcurrencyGroups: []string{"heavy"}},
		})
		Expect(out).To(HaveLen(1))
		Expect(out).To(HaveKey("b"))
		Expect(out).ToNot(HaveKey("a"))
	})
})
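The specs above pin down a normalization contract for a model's groups: trim whitespace, drop blanks, and dedupe while preserving first-seen order (so `[" heavy ", "vision", "heavy"]` becomes `["heavy", "vision"]`). A minimal standalone sketch of that behavior — `normalizeGroups` is a hypothetical helper for illustration, not LocalAI's actual implementation:

```go
package main

import (
	"fmt"
	"strings"
)

// normalizeGroups trims whitespace, drops blank entries, and removes
// duplicates while preserving first-seen order. It returns nil when
// nothing survives, so a caller building the model→groups map can
// simply omit models whose groups normalize to empty.
func normalizeGroups(groups []string) []string {
	seen := make(map[string]struct{}, len(groups))
	var out []string
	for _, g := range groups {
		g = strings.TrimSpace(g)
		if g == "" {
			continue // blank entries normalize away
		}
		if _, dup := seen[g]; dup {
			continue // already recorded
		}
		seen[g] = struct{}{}
		out = append(out, g)
	}
	return out
}

func main() {
	fmt.Println(normalizeGroups([]string{" heavy ", "vision", "heavy"})) // prints [heavy vision]
	fmt.Println(normalizeGroups([]string{"", " "}) == nil)               // prints true
}
```

Returning nil (rather than an empty non-nil slice) lets the caller use a plain `if groups := normalizeGroups(cfg.ConcurrencyGroups); groups != nil` guard, which matches the specs' expectation that "blanks" and "c" never appear as keys.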