Files
LocalAI/backend
Ettore Di Giacinto 181ebb6df4 feat: voice recognition (#9500)
* feat(voice-recognition): add /v1/voice/{verify,analyze,embed} + speaker-recognition backend

Audio analog to face recognition. Adds three gRPC RPCs
(VoiceVerify / VoiceAnalyze / VoiceEmbed), their Go service and HTTP
layers, a new FLAG_SPEAKER_RECOGNITION capability flag, and a Python
backend scaffold under backend/python/speaker-recognition/ wrapping
SpeechBrain ECAPA-TDNN with a parallel OnnxDirectEngine for
WeSpeaker / 3D-Speaker ONNX exports.

The kokoros Rust backend gets matching unimplemented trait stubs —
tonic's async_trait has no defaults, so adding an RPC without Rust
stubs breaks the build (same regression fixed by eb01c772 for face).

Swagger, /api/instructions, and the auth RouteFeatureRegistry /
APIFeatures list are updated so the endpoints surface everywhere a
client or admin UI looks.

Assisted-by: Claude:claude-opus-4-7

* feat(voice-recognition): add 1:N identify + register/forget endpoints

Mirrors the face-recognition register/identify/forget surface. New
package core/services/voicerecognition/ carries a Registry interface
and a local-store-backed implementation (same in-memory vector-store
plumbing facerecognition uses, separate instance so the embedding
spaces stay isolated).

Handlers under /v1/voice/{register,identify,forget} reuse
backend.VoiceEmbed to compute the probe vector, then delegate the
nearest-neighbour search to the registry. Default cosine-distance
threshold is tuned for ECAPA-TDNN on VoxCeleb (0.25, EER ~1.9%).

As with the face registry, the current backing is in-memory only — a
pgvector implementation is a future constructor-level swap.

Assisted-by: Claude:claude-opus-4-7

* feat(voice-recognition): gallery, docs, CI and e2e coverage

- backend/index.yaml: speaker-recognition backend entry + CPU and
  CUDA-12 image variants (plus matching development variants).
- gallery/index.yaml: speechbrain-ecapa-tdnn (default) and
  wespeaker-resnet34 model entries. The WeSpeaker SHA-256 is a
  deliberate placeholder — the HF URI must be curl'd and its hash
  filled in before the entry installs.
- docs/content/features/voice-recognition.md: API reference + quickstart,
  mirrors the face-recognition docs.
- React UI: CAP_SPEAKER_RECOGNITION flag export (consumers follow face's
  precedent — no dedicated tab yet).
- tests/e2e-backends: voice_embed / voice_verify / voice_analyze specs.
  Helper resolveFaceFixture is reused as-is — the only thing face/voice
  share is "download a file into workDir", so no need for a new helper.
- Makefile: docker-build-speaker-recognition + test-extra-backend-
  speaker-recognition-{ecapa,all} targets. Audio fixtures default to
  VCTK p225/p226 samples from HuggingFace.
- CI: test-extra.yml grows a tests-speaker-recognition-grpc job
  mirroring insightface. backend.yml matrix gains CPU + CUDA-12 image
  build entries — scripts/changed-backends.js auto-picks these up.

Assisted-by: Claude:claude-opus-4-7

* feat(voice-recognition): wire a working /v1/voice/analyze head

Adds AnalysisHead: a lazy-loading age / gender / emotion inference
wrapper that plugs into both SpeechBrainEngine and OnnxDirectEngine.

Defaults to two open-licence HuggingFace checkpoints:
  - audeering/wav2vec2-large-robust-24-ft-age-gender (Apache 2.0) —
    age regression + 3-way gender (female / male / child).
  - superb/wav2vec2-base-superb-er (Apache 2.0) — 4-way emotion.

Both are optional and degrade gracefully when transformers or the
model can't be loaded — the engine raises NotImplementedError so the
gRPC layer returns 501 instead of a generic 500.

Emotion classes pass through from the model (neutral/happy/angry/sad
on the default checkpoint); the e2e test now accepts any non-empty
dominant gender so custom age_gender_model overrides don't fail it.

Adds transformers to the backend's CPU and CUDA-12 requirements.

Assisted-by: Claude:claude-opus-4-7

* fix(voice-recognition): pin real WeSpeaker ResNet34 ONNX SHA-256

Replaces the placeholder hash in gallery/index.yaml with the actual
SHA-256 (7bb2f06e…) of the upstream
Wespeaker/wespeaker-voxceleb-resnet34-LM ONNX at ~25MB. `local-ai
models install wespeaker-resnet34` now succeeds.

Assisted-by: Claude:claude-opus-4-7

* fix(voice-recognition): soundfile loader + honest analyze default

Two issues surfaced on first end-to-end smoke with the actual backend
image:

1. torchaudio.load in torchaudio 2.8+ requires the torchcodec package
   for audio decoding. Switch SpeechBrainEngine._load_waveform to the
   already-present soundfile (listed in requirements.txt) plus a numpy
   linear resample to 16kHz. Drops a heavy ffmpeg-linked dep and the
   codepath we never exercise (torchaudio's ffmpeg backend).

2. The AnalysisHead was defaulting to audeering/wav2vec2-large-robust-
   24-ft-age-gender, but AutoModelForAudioClassification silently
   mangles that checkpoint — it reports the age head weights as
   UNEXPECTED and re-initialises the classifier head with random
   values, so the "gender" output is noise and there is no age output
   at all. Make age/gender opt-in instead (empty default; users wire
   a cleanly-loadable Wav2Vec2ForSequenceClassification checkpoint via
   age_gender_model: option). Emotion keeps its working Superb default.
   Also broaden _infer_age_gender's tensor-shape handling and catch
   runtime exceptions so a dodgy age/gender head never takes down the
   whole analyze call.

Docs and README updated to match the new policy.

Verified with the branch-scoped gallery on localhost:
- voice/embed    → 192-d ECAPA-TDNN vector
- voice/verify   → same-clip dist≈6e-08 verified=true; cross-speaker
                   dist 0.76–0.99 verified=false (as expected)
- voice/register/identify/forget → round-trip works, 404 on unknown id
- voice/analyze  → emotion populated, age/gender omitted (opt-in)

Assisted-by: Claude:claude-opus-4-7

* fix(voice-recognition): real CI audio fixtures + fixture-agnostic verify spec

Two issues surfaced after CI actually ran the speaker-recognition e2e
target (I'd curl-tested against a running server but hadn't run the
make target locally):

1. The default BACKEND_TEST_VOICE_AUDIO_* URLs pointed at
   huggingface.co/datasets/CSTR-Edinburgh/vctk paths that return 404
   (the dataset is gated). Swap them for the speechbrain test samples
   served from github.com/speechbrain/speechbrain/raw/develop/ —
   public, no auth, correct 16kHz mono format.

2. The VoiceVerify spec required d(file1,file2) < 0.4, assuming
   file1/file2 were same-speaker. The speechbrain samples are three
   different speakers (example1/2/5), and there is no easy un-gated
   source of true same-speaker audio pairs (VoxCeleb/VCTK/LibriSpeech
   are all license- or size-gated for CI use). Replace the ceiling
   check with a relative-ordering assertion: d(pair) > d(same-clip)
   for both file2 and file3 — that's enough to prove the embeddings
   encode speaker info, and it works with any three non-identical
   clips. Actual speaker ordering d(1,2) vs d(1,3) is logged but not
   asserted.

Local run: 4/4 voice specs pass (Health, LoadModel, VoiceEmbed,
VoiceVerify) on the built backend image. 12 non-voice specs skipped
as expected.

Assisted-by: Claude:claude-opus-4-7

* fix(ci): checkout with submodules in the reusable backend_build workflow

The kokoros Rust backend build fails with

    failed to read .../sources/Kokoros/kokoros/Cargo.toml: No such file

because the reusable backend_build.yml workflow's actions/checkout
step was missing `submodules: true`. Dockerfile.rust does `COPY .
/LocalAI`, and without the submodule files the subsequent `cargo
build` can't find the vendored Kokoros crate.

The bug pre-dates this PR — scripts/changed-backends.js only triggers
the kokoros image job when something under backend/rust/kokoros or
the shared proto changes, so master had been coasting past it. The
voice-recognition proto addition re-broke it.

Other checkouts in backend.yml (llama-cpp-darwin) and test-extra.yml
(insightface, kokoros, speaker-recognition) already pass
`submodules: true`; this brings the shared backend image builder in
line.

Assisted-by: Claude:claude-opus-4-7
2026-04-23 12:07:14 +02:00
..
2026-04-23 12:07:14 +02:00
2026-04-23 12:07:14 +02:00
2026-04-23 12:07:14 +02:00

LocalAI Backend Architecture

This directory contains the core backend infrastructure for LocalAI, including the gRPC protocol definition, multi-language Dockerfiles, and language-specific backend implementations.

Overview

LocalAI uses a unified gRPC-based architecture that allows different programming languages to implement AI backends while maintaining consistent interfaces and capabilities. The backend system supports multiple hardware acceleration targets and provides a standardized way to integrate various AI models and frameworks.

Architecture Components

1. Protocol Definition (backend.proto)

The backend.proto file defines the gRPC service interface that all backends must implement. This ensures consistency across different language implementations and provides a contract for communication between LocalAI core and backend services.

Core Services

  • Text Generation: Predict, PredictStream for LLM inference
  • Embeddings: Embedding for text vectorization
  • Image Generation: GenerateImage for stable diffusion and image models
  • Audio Processing: AudioTranscription, TTS, SoundGeneration
  • Video Generation: GenerateVideo for video synthesis
  • Object Detection: Detect for computer vision tasks
  • Vector Storage: StoresSet, StoresGet, StoresFind for RAG operations
  • Reranking: Rerank for document relevance scoring
  • Voice Activity Detection: VAD for audio segmentation

Key Message Types

  • PredictOptions: Comprehensive configuration for text generation
  • ModelOptions: Model loading and configuration parameters
  • Result: Standardized response format
  • StatusResponse: Backend health and memory usage information

2. Multi-Language Dockerfiles

The backend system provides language-specific Dockerfiles that handle the build environment and dependencies for different programming languages:

  • Dockerfile.python
  • Dockerfile.golang
  • Dockerfile.llama-cpp

3. Language-Specific Implementations

Python Backends (python/)

  • transformers: Hugging Face Transformers framework
  • vllm: High-performance LLM inference
  • mlx: Apple Silicon optimization
  • diffusers: Stable Diffusion models
  • Audio: coqui, faster-whisper, kitten-tts
  • Vision: mlx-vlm, rfdetr
  • Specialized: rerankers, chatterbox, kokoro

Go Backends (go/)

  • whisper: OpenAI Whisper speech recognition in Go with GGML cpp backend (whisper.cpp)
  • stablediffusion-ggml: Stable Diffusion in Go with GGML Cpp backend
  • piper: Text-to-speech synthesis Golang with C bindings using rhaspy/piper
  • local-store: Vector storage backend

C++ Backends (cpp/)

  • llama-cpp: Llama.cpp integration
  • grpc: GRPC utilities and helpers

Hardware Acceleration Support

CUDA (NVIDIA)

  • Versions: CUDA 12.x, 13.x
  • Features: cuBLAS, cuDNN, TensorRT optimization
  • Targets: x86_64, ARM64 (Jetson)

ROCm (AMD)

  • Features: HIP, rocBLAS, MIOpen
  • Targets: AMD GPUs with ROCm support

Intel

  • Features: oneAPI, Intel Extension for PyTorch
  • Targets: Intel GPUs, XPUs, CPUs

Vulkan

  • Features: Cross-platform GPU acceleration
  • Targets: Windows, Linux, Android, macOS

Apple Silicon

  • Features: MLX framework, Metal Performance Shaders
  • Targets: M1/M2/M3 Macs

Backend Registry (index.yaml)

The index.yaml file serves as a central registry for all available backends, providing:

  • Metadata: Name, description, license, icons
  • Capabilities: Hardware targets and optimization profiles
  • Tags: Categorization for discovery
  • URLs: Source code and documentation links

Building Backends

Prerequisites

  • Docker with multi-architecture support
  • Appropriate hardware drivers (CUDA, ROCm, etc.)
  • Build tools (make, cmake, compilers)

Build Commands

Example of build commands with Docker

# Build Python backend
docker build -f backend/Dockerfile.python \
  --build-arg BACKEND=transformers \
  --build-arg BUILD_TYPE=cublas12 \
  --build-arg CUDA_MAJOR_VERSION=12 \
  --build-arg CUDA_MINOR_VERSION=0 \
  -t localai-backend-transformers .

# Build Go backend
docker build -f backend/Dockerfile.golang \
  --build-arg BACKEND=whisper \
  --build-arg BUILD_TYPE=cpu \
  -t localai-backend-whisper .

# Build C++ backend
docker build -f backend/Dockerfile.llama-cpp \
  --build-arg BACKEND=llama-cpp \
  --build-arg BUILD_TYPE=cublas12 \
  -t localai-backend-llama-cpp .

For ARM64/Mac builds, docker can't be used, and the makefile in the respective backend has to be used.

Build Types

  • cpu: CPU-only optimization
  • cublas12, cublas13: CUDA 12.x, 13.x with cuBLAS
  • hipblas: ROCm with rocBLAS
  • intel: Intel oneAPI optimization
  • vulkan: Vulkan-based acceleration
  • metal: Apple Metal optimization

Backend Development

Creating a New Backend

  1. Choose Language: Select Python, Go, or C++ based on requirements
  2. Implement Interface: Implement the gRPC service defined in backend.proto
  3. Add Dependencies: Create appropriate requirements files
  4. Configure Build: Set up Dockerfile and build scripts
  5. Register Backend: Add entry to index.yaml
  6. Test Integration: Verify gRPC communication and functionality

Backend Structure

backend-name/
├── backend.py/go/cpp    # Main implementation
├── requirements.txt      # Dependencies
├── Dockerfile           # Build configuration
├── install.sh           # Installation script
├── run.sh              # Execution script
├── test.sh             # Test script
└── README.md           # Backend documentation

Required gRPC Methods

At minimum, backends must implement:

  • Health() - Service health check
  • LoadModel() - Model loading and initialization
  • Predict() - Main inference endpoint
  • Status() - Backend status and metrics

Integration with LocalAI Core

Backends communicate with LocalAI core through gRPC:

  1. Service Discovery: Core discovers available backends
  2. Model Loading: Core requests model loading via LoadModel
  3. Inference: Core sends requests via Predict or specialized endpoints
  4. Streaming: Core handles streaming responses for real-time generation
  5. Monitoring: Core tracks backend health and performance

Performance Optimization

Memory Management

  • Model Caching: Efficient model loading and caching
  • Batch Processing: Optimize for multiple concurrent requests
  • Memory Pinning: GPU memory optimization for CUDA/ROCm

Hardware Utilization

  • Multi-GPU: Support for tensor parallelism
  • Mixed Precision: FP16/BF16 for memory efficiency
  • Kernel Fusion: Optimized CUDA/ROCm kernels

Troubleshooting

Common Issues

  1. GRPC Connection: Verify backend service is running and accessible
  2. Model Loading: Check model paths and dependencies
  3. Hardware Detection: Ensure appropriate drivers and libraries
  4. Memory Issues: Monitor GPU memory usage and model sizes

Contributing

When contributing to the backend system:

  1. Follow Protocol: Implement the exact gRPC interface
  2. Add Tests: Include comprehensive test coverage
  3. Document: Provide clear usage examples
  4. Optimize: Consider performance and resource usage
  5. Validate: Test across different hardware targets