LocalAI/docker-compose.distributed.yaml
Ettore Di Giacinto 551ebdb57a fix(distributed): correct VRAM/RAM reporting on NVIDIA unified-memory hosts (#9545)
Workers on NVIDIA unified-memory hardware (DGX Spark / GB10, Jetson AGX Thor,
Jetson Orin/Xavier/Nano) were reporting `available_vram=0` back to the frontend,
so the Nodes UI showed the node as fully used even when most of the unified
memory was actually free.

Three causes addressed (a rough sketch of the helpers involved follows this list):

* `isTegraDevice` only matched `/sys/devices/soc0/family == "Tegra"`. DGX Spark
  (SBSA) reports JEDEC codes there instead — `jep106:0426` for the NVIDIA
  manufacturer — so the Tegra/unified-memory fallback never ran. Renamed to
  `isNVIDIAIntegratedGPU` and extended to also match `jep106:0426[:*]` via
  `/sys/devices/soc0/soc_id`.

* The unified-iGPU code defaulted the device name to `"NVIDIA Jetson"` when
  `/proc/device-tree/model` was missing, which is the case for Thor inside a
  Docker container and always the case on DGX Spark. The new
  `nvidiaIntegratedGPUName` resolves the name via dt-model →
  `/sys/devices/soc0/machine` → `soc_id` lookup (`jep106:0426:8901` →
  `"NVIDIA GB10"`) so the Nodes UI labels the box correctly.

* Worker heartbeat sent `available_vram=0` (or total-as-available) when VRAM
  usage was momentarily unknown — e.g. when `nvidia-smi` intermittently failed
  with `waitid: no child processes` in containers run without `--init`. Each
  such heartbeat overwrote the DB and made the UI flip to "fully used".
  `heartbeatBody` now omits `available_vram` in that case so the DB keeps its
  last good value.

Also updates the commented GPU blocks in both compose files with
`NVIDIA_DRIVER_CAPABILITIES=compute,utility`, `capabilities: [gpu, utility]`,
and `init: true`, and documents the requirement in the distributed-mode and
nvidia-l4t pages. Without `utility`, NVML/`nvidia-smi` are absent inside the
container, which is what put the DGX Spark worker into the buggy fallback in
the first place.

Detection verified on live hardware (dgx.casa / GB10 and 192.168.68.23 / Thor)
by running a cross-compiled probe of the new helpers both on the host and
inside the worker container.
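
A stand-alone probe of that kind can be as small as dumping the sysfs and
device-tree entries the helpers read (hypothetical sketch; the actual probe
used for verification is not part of this commit):

```go
// Hypothetical probe: cross-compile with GOOS=linux GOARCH=arm64 and run it
// on the host and inside the worker container to compare what each one sees.
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	for _, p := range []string{
		"/sys/devices/soc0/family",
		"/sys/devices/soc0/soc_id",
		"/sys/devices/soc0/machine",
		"/proc/device-tree/model",
	} {
		b, err := os.ReadFile(p)
		if err != nil {
			fmt.Printf("%-28s <%v>\n", p, err)
			continue
		}
		fmt.Printf("%-28s %s\n", p, strings.TrimRight(string(b), "\x00\n"))
	}
}
```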

Assisted-by: Claude:opus-4.7 [Claude Code]
2026-04-24 22:02:23 +02:00


# Docker Compose for LocalAI Distributed Mode
#
# Starts a full distributed stack: PostgreSQL, NATS, a LocalAI frontend,
# and one llama-cpp backend node.
#
# Model files are transferred from the frontend to backend nodes via HTTP
# — no shared volumes needed between frontend and backends.
#
# Usage:
#   docker compose -f docker-compose.distributed.yaml up
#
# See docs: https://localai.io/features/distributed-mode/

services:
  # --- Infrastructure ---
  postgres:
    image: quay.io/mudler/localrecall:v0.5.5-postgresql # PostgreSQL with pgvector
    environment:
      POSTGRES_DB: localai
      POSTGRES_USER: localai
      POSTGRES_PASSWORD: localai
    volumes:
      - postgres_data:/var/lib/postgresql
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U localai"]
      interval: 5s
      timeout: 3s
      retries: 10

  nats:
    image: nats:2-alpine
    ports:
      - "4222:4222" # Client connections
      - "8222:8222" # HTTP monitoring (optional, useful for debugging)
    command: ["--js", "-m", "8222"] # Enable JetStream + monitoring

  # --- LocalAI Frontend ---
  # Stateless API server that routes requests to backend nodes.
  # Add more replicas behind a load balancer for HA.
  localai:
    # image: localai/localai:latest-cpu
    build:
      context: .
      dockerfile: Dockerfile
      args:
        - IMAGE_TYPE=core
        - BASE_IMAGE=ubuntu:24.04
    ports:
      - "8080:8080"
    environment:
      # Distributed mode
      LOCALAI_DISTRIBUTED: "true"
      LOCALAI_NATS_URL: "nats://nats:4222"
      LOCALAI_AGENT_POOL_EMBEDDING_MODEL: "granite-embedding-107m-multilingual"
      LOCALAI_AGENT_POOL_VECTOR_ENGINE: "postgres"
      LOCALAI_AGENT_POOL_DATABASE_URL: "postgresql://localai:localai@postgres:5432/localai?sslmode=disable"
      LOCALAI_REGISTRATION_TOKEN: "changeme" # Change this in production!
      # Auth (required for distributed mode — must use PostgreSQL)
      LOCALAI_AUTH: "true"
      LOCALAI_AUTH_DATABASE_URL: "postgresql://localai:localai@postgres:5432/localai?sslmode=disable"
      # Paths
      MODELS_PATH: /models
    volumes:
      - frontend_models:/models
      - frontend_data:/data
    depends_on:
      postgres:
        condition: service_healthy
      nats:
        condition: service_started

  # --- Worker Node ---
  # A generic worker that self-registers with the frontend.
  # The same LocalAI image is used — no separate image needed.
  # The SmartRouter dynamically tells workers which backend to install via NATS.
  #
  # Model files are transferred from the frontend via HTTP file staging.
  # The worker has its own independent models volume.
  worker-1:
    # image: localai/localai:latest-cpu
    build:
      context: .
      dockerfile: Dockerfile
      args:
        - IMAGE_TYPE=core
        - BASE_IMAGE=ubuntu:24.04
    command:
      - worker
    environment:
      LOCALAI_SERVE_ADDR: "0.0.0.0:50051"
      LOCALAI_ADVERTISE_ADDR: "worker-1:50051"
      LOCALAI_ADVERTISE_HTTP_ADDR: "worker-1:50050"
      DEBUG: "true"
      LOCALAI_REGISTER_TO: "http://localai:8080"
      LOCALAI_NODE_NAME: "worker-1"
      LOCALAI_REGISTRATION_TOKEN: "changeme" # Must match frontend token
      LOCALAI_HEARTBEAT_INTERVAL: "10s"
      LOCALAI_NATS_URL: "nats://nats:4222"
      MODELS_PATH: /models
    volumes:
      - worker_1_models:/models
    depends_on:
      localai:
        condition: service_started
      nats:
        condition: service_started
  # --- GPU Support (NVIDIA) ---
  # Uncomment the following and change the image to a CUDA variant
  # (e.g., localai/localai:latest-gpu-nvidia-cuda-12) to enable GPU.
  #
  # NVIDIA_DRIVER_CAPABILITIES must include `utility` so nvidia-smi / NVML
  # are available inside the container; without it the worker cannot report
  # free VRAM and the Nodes page will show 0 free / total used.
  # `init: true` avoids zombie-reap races that make nvidia-smi flaky.
  #
  #   init: true
  #   environment:
  #     NVIDIA_DRIVER_CAPABILITIES: "compute,utility"
  #   deploy:
  #     resources:
  #       reservations:
  #         devices:
  #           - driver: nvidia
  #             count: all
  #             capabilities: [gpu, utility]
  # --- Shared Volume Mode (optional) ---
  # If all services run on the same Docker host, you can skip gRPC file transfer
  # by sharing a single models volume. Replace the volumes above with:
  #
  #   localai:
  #     volumes:
  #       - shared_models:/models
  #       - frontend_data:/data
  #
  #   backend-llama-cpp:
  #     volumes:
  #       - shared_models:/models
  #
  # Then add to the volumes section:
  #   shared_models:
  #
  # With shared volumes, model files are already available on the backend —
  # gRPC file staging becomes a no-op (paths match).

  # --- Adding More Workers ---
  # Copy the worker-1 service above and change:
  #   - Service name (e.g., worker-2)
  #   - LOCALAI_NODE_NAME (must be unique)
  #   - LOCALAI_ADVERTISE_ADDR (must match service name)
  #
  # Workers are generic — no backend type needed. The SmartRouter
  # will dynamically install the required backend via NATS when
  # a model request arrives.

  # --- Agent Worker ---
  # Dedicated process for agent chat execution.
  # Receives chat jobs from NATS, runs cogito LLM calls via the LocalAI API,
  # and publishes results back via NATS for SSE delivery.
  # No database access needed — config and skills are sent in the NATS payload.
  agent-worker-1:
    # image: localai/localai:latest-cpu
    build:
      context: .
      dockerfile: Dockerfile
      args:
        - IMAGE_TYPE=core
        - BASE_IMAGE=ubuntu:24.04
    # Install Docker CLI and start agent-worker.
    # The Docker socket is mounted from the host so that MCP stdio servers
    # using "docker run" commands can spawn containers on the host Docker.
    entrypoint: ["/bin/sh", "-c"]
    command:
      - |
        apt-get update -qq && apt-get install -y -qq docker.io >/dev/null 2>&1
        exec /entrypoint.sh agent-worker
    environment:
      LOCALAI_NATS_URL: "nats://nats:4222"
      LOCALAI_REGISTER_TO: "http://localai:8080"
      LOCALAI_NODE_NAME: "agent-worker-1"
      LOCALAI_REGISTRATION_TOKEN: "changeme" # Must match frontend token
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    depends_on:
      localai:
        condition: service_started
      nats:
        condition: service_started

volumes:
  postgres_data:
  frontend_models:
  frontend_data:
  worker_1_models: