fix(docs): fix broken references to distributed mode

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

Author: Ettore Di Giacinto
Date: 2026-04-03 09:46:06 +02:00
Parent: c0a023d13d
Commit: 7e0b73deaa

22 changed files with 23 additions and 24 deletions

View File

@@ -1,6 +1,6 @@
 +++
 disableToc = false
-title = "GPU acceleration"
+title = "GPU Acceleration"
 weight = 9
 url = "/features/gpu-acceleration/"
 +++

View File

@@ -27,8 +27,7 @@ LocalAI provides a comprehensive set of features for running AI models locally.
 - **[Realtime API](openai-realtime/)** - Low-latency multi-modal conversations (voice+text) over WebSocket
 - **[Constrained Grammars](constrained_grammars/)** - Control model output format with BNF grammars
 - **[GPU Acceleration](GPU-acceleration/)** - Optimize performance with GPU support
-- **[Distributed Inference](distributed_inferencing/)** - Scale inference across multiple nodes
-- **[Distributed Mode](distributed-mode/)** - Horizontal scaling with PostgreSQL, NATS, and remote backend nodes
+- **[Distribution](distribution/)** - Scale inference across multiple nodes (P2P federation or production distributed mode)
 - **[P2P API](p2p/)** - Monitor and manage P2P worker and federated nodes
 - **[Model Context Protocol (MCP)](mcp/)** - Enable agentic capabilities with MCP integration
 - **[Agents](agents/)** - Autonomous AI agents with tools, knowledge base, and skills

View File

@@ -1,6 +1,6 @@
 +++
 disableToc = false
-title = "🤖 Agents"
+title = "Agents"
 weight = 21
 url = '/features/agents'
 +++

View File

@@ -1,6 +1,6 @@
 +++
 disableToc = false
-title = "🔈 Audio to text"
+title = "Audio to Text"
 weight = 16
 url = "/features/audio-to-text/"
 +++

View File

@@ -1,6 +1,6 @@
 +++
 disableToc = false
-title = "🔐 Authentication & Authorization"
+title = "Authentication & Authorization"
 weight = 26
 url = '/features/authentication'
 +++

View File

@@ -1,5 +1,5 @@
 ---
-title: "⚙️ Backends"
+title: "Backends"
 description: "Learn how to use, manage, and develop backends in LocalAI"
 weight: 4
 url: "/backends/"

View File

@@ -1,6 +1,6 @@
 +++
 disableToc = false
-title = "✍️ Constrained Grammars"
+title = "Constrained Grammars"
 weight = 15
 url = "/features/constrained_grammars/"
 +++

View File

@@ -5,7 +5,7 @@ weight = 14
 url = "/features/distributed-mode/"
 +++
-Distributed mode enables horizontal scaling of LocalAI across multiple machines using **PostgreSQL** for state and node registry, and **NATS** for real-time coordination. Unlike the [P2P/federation approach](/features/distribute/), distributed mode is designed for production deployments and Kubernetes environments where you need centralized management, health monitoring, and deterministic routing.
+Distributed mode enables horizontal scaling of LocalAI across multiple machines using **PostgreSQL** for state and node registry, and **NATS** for real-time coordination. Unlike the [P2P/federation approach]({{% relref "features/distributed_inferencing" %}}), distributed mode is designed for production deployments and Kubernetes environments where you need centralized management, health monitoring, and deterministic routing.
 {{% notice note %}}
 Distributed mode requires authentication enabled with a **PostgreSQL** database — SQLite is not supported. This is because the node registry, job store, and other distributed state are stored in PostgreSQL tables.
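The PostgreSQL + NATS requirement described above can be sketched as a minimal compose file. This is an illustrative sketch only: the `postgres` and `nats` images and their environment variables are standard, but the LocalAI variable names (`LOCALAI_DB_DSN`, `LOCALAI_NATS_URL`) are assumptions, not taken from this commit — check the distributed-mode docs page for the actual configuration keys.

```yaml
# Hypothetical docker-compose sketch for distributed-mode dependencies.
# LOCALAI_DB_DSN and LOCALAI_NATS_URL are assumed names, not confirmed here.
services:
  db:
    image: postgres:16           # distributed state lives in PostgreSQL tables
    environment:
      POSTGRES_USER: localai
      POSTGRES_PASSWORD: localai
      POSTGRES_DB: localai
  nats:
    image: nats:2                # real-time coordination bus
  localai:
    image: localai/localai:latest
    depends_on: [db, nats]
    environment:
      LOCALAI_DB_DSN: postgres://localai:localai@db:5432/localai?sslmode=disable
      LOCALAI_NATS_URL: nats://nats:4222
    ports:
      - "8080:8080"
```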

View File

@@ -1,12 +1,12 @@
 +++
 disableToc = false
-title = "🆕🖧 Distributed Inference"
+title = "P2P / Federated Inference"
 weight = 15
 url = "/features/distribute/"
 +++
 {{% notice tip %}}
-Looking for production-grade horizontal scaling with PostgreSQL and NATS? See [Distributed Mode](/features/distributed-mode/).
+Looking for production-grade horizontal scaling with PostgreSQL and NATS? See [Distributed Mode]({{% relref "features/distributed-mode" %}}).
 {{% /notice %}}
 This functionality enables LocalAI to distribute inference requests across multiple worker nodes, improving efficiency and performance. Nodes are automatically discovered and connect via p2p by using a shared token which makes sure the communication is secure and private between the nodes of the network.

View File

@@ -1,6 +1,6 @@
 +++
 disableToc = false
-title = "🧠 Embeddings"
+title = "Embeddings"
 weight = 13
 url = "/features/embeddings/"
 +++

View File

@@ -1,7 +1,7 @@
 +++
 disableToc = false
-title = "🥽 GPT Vision"
+title = "GPT Vision"
 weight = 14
 url = "/features/gpt-vision/"
 +++

View File

@@ -1,7 +1,7 @@
 +++
 disableToc = false
-title = "🎨 Image generation"
+title = "Image Generation"
 weight = 12
 url = "/features/image-generation/"
 +++

View File

@@ -1,5 +1,5 @@
 +++
-title = "🔗 Model Context Protocol (MCP)"
+title = "Model Context Protocol (MCP)"
 weight = 20
 toc = true
 description = "Agentic capabilities with Model Context Protocol integration"

View File

@@ -1,7 +1,7 @@
 +++
 disableToc = false
-title = "🖼️ Model gallery"
+title = "Model Gallery"
 weight = 18
 url = '/models'
 +++

View File

@@ -1,6 +1,6 @@
 +++
 disableToc = false
-title = "🔍 Object detection"
+title = "Object Detection"
 weight = 13
 url = "/features/object-detection/"
 +++

View File

@@ -1,7 +1,7 @@
 +++
 disableToc = false
-title = "🔥 OpenAI functions and tools"
+title = "OpenAI Functions and Tools"
 weight = 17
 url = "/features/openai-functions/"
 +++

View File

@@ -1,7 +1,7 @@
 +++
 disableToc = false
-title = "📈 Reranker"
+title = "Reranker"
 weight = 11
 url = "/features/reranker/"
 +++

View File

@@ -1,6 +1,6 @@
 +++
 disableToc = false
-title = "⚙️ Runtime Settings"
+title = "Runtime Settings"
 weight = 25
 url = '/features/runtime-settings'
 +++

View File

@@ -1,7 +1,7 @@
 +++
 disableToc = false
-title = "💾 Stores"
+title = "Stores"
 weight = 18
 url = '/stores'
 +++

View File

@@ -1,7 +1,7 @@
 +++
 disableToc = false
-title = "📖 Text generation (GPT)"
+title = "Text Generation (GPT)"
 weight = 10
 url = "/features/text-generation/"
 +++

View File

@@ -1,7 +1,7 @@
 +++
 disableToc = false
-title = "🗣 Text to audio (TTS)"
+title = "Text to Audio (TTS)"
 weight = 11
 url = "/features/text-to-audio/"
 +++

View File

@@ -119,7 +119,7 @@ For production deployments or when you need more compute, LocalAI supports distr
 - **P2P federation**: Connect multiple LocalAI instances for load-balanced inference
 - **Model sharding**: Split large models across multiple machines
-See the **Nodes** page in the web interface or the [Distribution docs]({{% relref "features/distribute" %}}) for setup instructions.
+See the **Nodes** page in the web interface or the [Distribution docs]({{% relref "features/distribution" %}}) for setup instructions.
 ## What's Next?