LocalAI Backend Architecture
This directory contains the core backend infrastructure for LocalAI, including the gRPC protocol definition, multi-language Dockerfiles, and language-specific backend implementations.
Overview
LocalAI uses a unified gRPC-based architecture that allows different programming languages to implement AI backends while maintaining consistent interfaces and capabilities. The backend system supports multiple hardware acceleration targets and provides a standardized way to integrate various AI models and frameworks.
Architecture Components
1. Protocol Definition (backend.proto)
The backend.proto file defines the gRPC service interface that all backends must implement. This ensures consistency across different language implementations and provides a contract for communication between LocalAI core and backend services.
Core Services
- Text Generation: `Predict`, `PredictStream` for LLM inference
- Embeddings: `Embedding` for text vectorization
- Image Generation: `GenerateImage` for stable diffusion and image models
- Audio Processing: `AudioTranscription`, `TTS`, `SoundGeneration`
- Video Generation: `GenerateVideo` for video synthesis
- Object Detection: `Detect` for computer vision tasks
- Vector Storage: `StoresSet`, `StoresGet`, `StoresFind` for RAG operations
- Reranking: `Rerank` for document relevance scoring
- Voice Activity Detection: `VAD` for audio segmentation
Key Message Types
- `PredictOptions`: Comprehensive configuration for text generation
- `ModelOptions`: Model loading and configuration parameters
- `Result`: Standardized response format
- `StatusResponse`: Backend health and memory usage information
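These messages are constructed through the stubs generated from `backend.proto`. Below is a minimal sketch, assuming the proto has been compiled with grpcio-tools into a `backend_pb2` module (the pattern the existing Python backends follow); the field names are illustrative and should be verified against the actual proto definition:

```python
# Minimal sketch: building request messages from generated stubs.
# Assumes backend.proto was compiled into a backend_pb2 module; field
# names below are assumptions to verify against the real proto.
import backend_pb2

model_opts = backend_pb2.ModelOptions(
    Model="mistral-7b.gguf",  # model file or identifier to load
    ContextSize=4096,
)

predict_opts = backend_pb2.PredictOptions(
    Prompt="What is LocalAI?",
    Tokens=256,        # maximum number of tokens to generate
    Temperature=0.7,
)
```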
2. Multi-Language Dockerfiles
The backend system provides language-specific Dockerfiles that handle the build environment and dependencies for different programming languages:
- `Dockerfile.python`
- `Dockerfile.golang`
- `Dockerfile.llama-cpp`
3. Language-Specific Implementations
Python Backends (python/)
- transformers: Hugging Face Transformers framework
- vllm: High-performance LLM inference
- mlx: Apple Silicon optimization
- diffusers: Stable Diffusion models
- Audio: coqui, faster-whisper, kitten-tts
- Vision: mlx-vlm, rfdetr
- Specialized: rerankers, chatterbox, kokoro
Go Backends (go/)
- whisper: OpenAI Whisper speech recognition in Go, wrapping the GGML-based whisper.cpp backend
- stablediffusion-ggml: Stable Diffusion in Go, wrapping a GGML-based C++ backend
- piper: Text-to-speech synthesis in Go with C bindings to rhasspy/piper
- local-store: Vector storage backend
C++ Backends (cpp/)
- llama-cpp: Llama.cpp integration
- grpc: gRPC utilities and helpers
Hardware Acceleration Support
CUDA (NVIDIA)
- Versions: CUDA 12.x, 13.x
- Features: cuBLAS, cuDNN, TensorRT optimization
- Targets: x86_64, ARM64 (Jetson)
ROCm (AMD)
- Features: HIP, rocBLAS, MIOpen
- Targets: AMD GPUs with ROCm support
Intel
- Features: oneAPI, Intel Extension for PyTorch
- Targets: Intel GPUs, XPUs, CPUs
Vulkan
- Features: Cross-platform GPU acceleration
- Targets: Windows, Linux, Android, macOS
Apple Silicon
- Features: MLX framework, Metal Performance Shaders
- Targets: M1/M2/M3 Macs
Backend Registry (index.yaml)
The index.yaml file serves as a central registry for all available backends, providing:
- Metadata: Name, description, license, icons
- Capabilities: Hardware targets and optimization profiles
- Tags: Categorization for discovery
- URLs: Source code and documentation links
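As a rough illustration of consuming the registry, the snippet below loads `index.yaml` and prints each entry's name and tags. It assumes the file parses to a list of mappings with `name` and `tags` keys; check the actual schema before relying on these field names:

```python
# Illustrative only: inspect backend registry entries. Assumes index.yaml
# is a list of entries with "name" and "tags" fields; verify the schema.
import yaml  # pip install pyyaml

with open("backend/index.yaml") as f:
    registry = yaml.safe_load(f)

for entry in registry:
    print(f'{entry["name"]}: tags={entry.get("tags", [])}')
```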
Building Backends
Prerequisites
- Docker with multi-architecture support
- Appropriate hardware drivers (CUDA, ROCm, etc.)
- Build tools (make, cmake, compilers)
Build Commands
Example of build commands with Docker:

```bash
# Build Python backend
docker build -f backend/Dockerfile.python \
  --build-arg BACKEND=transformers \
  --build-arg BUILD_TYPE=cublas12 \
  --build-arg CUDA_MAJOR_VERSION=12 \
  --build-arg CUDA_MINOR_VERSION=0 \
  -t localai-backend-transformers .

# Build Go backend
docker build -f backend/Dockerfile.golang \
  --build-arg BACKEND=whisper \
  --build-arg BUILD_TYPE=cpu \
  -t localai-backend-whisper .

# Build C++ backend
docker build -f backend/Dockerfile.llama-cpp \
  --build-arg BACKEND=llama-cpp \
  --build-arg BUILD_TYPE=cublas12 \
  -t localai-backend-llama-cpp .
```
For ARM64/Mac builds, Docker cannot be used; build with the Makefile in the respective backend directory instead.
Build Types
- `cpu`: CPU-only optimization
- `cublas12`, `cublas13`: CUDA 12.x / 13.x with cuBLAS
- `hipblas`: ROCm with rocBLAS
- `intel`: Intel oneAPI optimization
- `vulkan`: Vulkan-based acceleration
- `metal`: Apple Metal optimization
Backend Development
Creating a New Backend
- Choose Language: Select Python, Go, or C++ based on requirements
- Implement Interface: Implement the gRPC service defined in `backend.proto`
- Add Dependencies: Create appropriate requirements files
- Configure Build: Set up the Dockerfile and build scripts
- Register Backend: Add an entry to `index.yaml`
- Test Integration: Verify gRPC communication and functionality
Backend Structure
```
backend-name/
├── backend.py/go/cpp   # Main implementation
├── requirements.txt    # Dependencies
├── Dockerfile          # Build configuration
├── install.sh          # Installation script
├── run.sh              # Execution script
├── test.sh             # Test script
└── README.md           # Backend documentation
```
Required gRPC Methods
At minimum, backends must implement:
- `Health()`: Service health check
- `LoadModel()`: Model loading and initialization
- `Predict()`: Main inference endpoint
- `Status()`: Backend status and metrics
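A minimal Python servicer covering these methods might look like the sketch below. It assumes the stubs generated from `backend.proto` are importable as `backend_pb2` / `backend_pb2_grpc` and that the service is named `Backend`, as in the existing Python backends; message shapes (`Reply`, `Result`, `StatusResponse`) are assumptions to verify against the proto, and the echo in `Predict` stands in for real inference:

```python
# Skeleton backend servicer. Module, service, and message names assume
# stubs generated from backend.proto (backend_pb2 / backend_pb2_grpc);
# adapt to the actual generated code.
from concurrent import futures

import grpc
import backend_pb2
import backend_pb2_grpc


class BackendServicer(backend_pb2_grpc.BackendServicer):
    def Health(self, request, context):
        return backend_pb2.Reply(message=b"OK")

    def LoadModel(self, request, context):
        # request is a ModelOptions message; real backends load weights here
        self.model_path = request.Model
        return backend_pb2.Result(success=True, message="model loaded")

    def Predict(self, request, context):
        # request is a PredictOptions message; replace the echo with inference
        return backend_pb2.Reply(message=request.Prompt.encode())

    def Status(self, request, context):
        return backend_pb2.StatusResponse(
            state=backend_pb2.StatusResponse.READY
        )


def serve(address="127.0.0.1:50051"):
    server = grpc.server(futures.ThreadPoolExecutor(max_workers=4))
    backend_pb2_grpc.add_BackendServicer_to_server(BackendServicer(), server)
    server.add_insecure_port(address)
    server.start()
    server.wait_for_termination()


if __name__ == "__main__":
    serve()
```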
Integration with LocalAI Core
Backends communicate with LocalAI core through gRPC:
- Service Discovery: Core discovers available backends
- Model Loading: Core requests model loading via `LoadModel`
- Inference: Core sends requests via `Predict` or specialized endpoints
- Streaming: Core handles streaming responses for real-time generation
- Monitoring: Core tracks backend health and performance
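A hedged sketch of that calling sequence from the client side, using the same generated stubs over an insecure local channel (the address, model name, and field values are illustrative):

```python
# Client-side calling sequence against a backend. Assumes the generated
# backend_pb2 / backend_pb2_grpc stubs; values are illustrative.
import grpc
import backend_pb2
import backend_pb2_grpc

channel = grpc.insecure_channel("127.0.0.1:50051")
stub = backend_pb2_grpc.BackendStub(channel)

# 1. Load the model
stub.LoadModel(backend_pb2.ModelOptions(Model="mistral-7b.gguf"))

# 2. Run one-shot inference
reply = stub.Predict(backend_pb2.PredictOptions(Prompt="Hello", Tokens=64))
print(reply.message.decode())

# 3. Stream tokens as they are produced (server-streaming RPC)
for chunk in stub.PredictStream(backend_pb2.PredictOptions(Prompt="Hello")):
    print(chunk.message.decode(), end="")
```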
Performance Optimization
Memory Management
- Model Caching: Efficient model loading and caching
- Batch Processing: Optimize for multiple concurrent requests
- Memory Pinning: GPU memory optimization for CUDA/ROCm
Hardware Utilization
- Multi-GPU: Support for tensor parallelism
- Mixed Precision: FP16/BF16 for memory efficiency
- Kernel Fusion: Optimized CUDA/ROCm kernels
Troubleshooting
Common Issues
- GRPC Connection: Verify backend service is running and accessible
- Model Loading: Check model paths and dependencies
- Hardware Detection: Ensure appropriate drivers and libraries
- Memory Issues: Monitor GPU memory usage and model sizes
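For the GRPC connection case, a quick probe like the sketch below can confirm the service is reachable before digging deeper. It assumes the generated stubs are on `PYTHONPATH`, a backend listening on `localhost:50051`, and that `Health` takes an empty `HealthMessage` request as defined in `backend.proto`:

```python
# Connectivity probe for a backend service; assumptions noted above.
import grpc
import backend_pb2
import backend_pb2_grpc

channel = grpc.insecure_channel("127.0.0.1:50051")
try:
    # Wait up to 5 seconds for the channel to become ready
    grpc.channel_ready_future(channel).result(timeout=5)
    reply = backend_pb2_grpc.BackendStub(channel).Health(
        backend_pb2.HealthMessage()
    )
    print("backend healthy:", reply.message.decode())
except grpc.FutureTimeoutError:
    print("backend not reachable - is the service running?")
```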
Contributing
When contributing to the backend system:
- Follow Protocol: Implement the exact gRPC interface
- Add Tests: Include comprehensive test coverage
- Document: Provide clear usage examples
- Optimize: Consider performance and resource usage
- Validate: Test across different hardware targets