Mirror of https://github.com/mudler/LocalAI.git (synced 2026-04-16 21:08:16 -04:00)
* feat(backend): add tinygrad multimodal backend
Wire tinygrad as a new Python backend covering LLM text generation with
native tool-call extraction, embeddings, Stable Diffusion 1.x image
generation, and Whisper speech-to-text from a single self-contained
container.
Backend (`backend/python/tinygrad/`):
- `backend.py` gRPC servicer with LLM Predict/PredictStream (auto-detects
Llama / Qwen2 / Mistral architecture from `config.json`, supports
safetensors and GGUF), Embedding via mean-pooled last hidden state,
GenerateImage via the vendored SD1.x pipeline, AudioTranscription +
AudioTranscriptionStream via the vendored Whisper inference loop, plus
Tokenize / ModelMetadata / Status / Free.
- Vendored upstream model code under `vendor/` (MIT, headers preserved):
llama.py with an added `qkv_bias` flag for Qwen2-family bias support
and an `embed()` method that returns the last hidden state, plus
clip.py, unet.py, stable_diffusion.py (trimmed to drop the MLPerf
training branch that pulls `mlperf.initializers`), audio_helpers.py
and whisper.py (trimmed to drop the pyaudio listener).
- Pluggable tool-call parsers under `tool_parsers/`: hermes (Qwen2.5 /
Hermes), llama3_json (Llama 3.1+), qwen3_xml (Qwen 3), mistral
(Mistral / Mixtral). Auto-selected from model architecture or `Options`.
- `install.sh` pins Python 3.11.14 (tinygrad >=0.12 requires Python >=3.11;
  the default portable Python is 3.10).
- `package.sh` bundles libLLVM.so.1 + libedit/libtinfo/libgomp/libsndfile
into the scratch image. `run.sh` sets `CPU_LLVM=1` and `LLVM_PATH` so
tinygrad's CPU device uses the in-process libLLVM JIT instead of
shelling out to the missing `clang` binary.
- Local unit tests for Health and the four parsers in `test.py`.
Build wiring:
- Root `Makefile`: `.NOTPARALLEL`, `prepare-test-extra`, `test-extra`,
`BACKEND_TINYGRAD = tinygrad|python|.|false|true`,
docker-build-target eval, and `docker-build-backends` aggregator.
- `.github/workflows/backend.yml`: cpu / cuda12 / cuda13 build matrix
entries (mirrors the transformers backend placement).
- `backend/index.yaml`: `&tinygrad` meta + cpu/cuda12/cuda13 image
entries (latest + development).
E2E test wiring:
- `tests/e2e-backends/backend_test.go` gains an `image` capability that
exercises GenerateImage and asserts a non-empty PNG is written to
`dst`. New `BACKEND_TEST_IMAGE_PROMPT` / `BACKEND_TEST_IMAGE_STEPS`
knobs.
- Five new make targets next to `test-extra-backend-vllm`:
- `test-extra-backend-tinygrad` — Qwen2.5-0.5B-Instruct + hermes,
mirrors the vllm target 1:1 (5/9 specs in ~57s).
- `test-extra-backend-tinygrad-embeddings` — same model, embeddings
via LLM hidden state (3/9 in ~10s).
- `test-extra-backend-tinygrad-sd` — stable-diffusion-v1-5 mirror,
health/load/image (3/9 in ~10min, 4 diffusion steps on CPU).
- `test-extra-backend-tinygrad-whisper` — openai/whisper-tiny.en
against jfk.wav from whisper.cpp samples (4/9 in ~49s).
- `test-extra-backend-tinygrad-all` aggregate.
All four targets land green on the first MVP pass: 15 specs total, 0
failures across LLM+tools, embeddings, image generation, and speech
transcription.
* refactor(tinygrad): collapse to a single backend image
tinygrad generates its own GPU kernels (PTX renderer for CUDA, the
autogen ctypes wrappers for HIP / Metal / WebGPU) and never links
against cuDNN, cuBLAS, or any toolkit-version-tied library. The only
runtime dependency that varies across hosts is the driver's libcuda.so.1
/ libamdhip64.so, which are injected into the container at run time by
the nvidia-container / rocm runtimes. So unlike torch- or vLLM-based
backends, there is no reason to ship per-CUDA-version images.
- Drop the cuda12-tinygrad and cuda13-tinygrad build-matrix entries
from .github/workflows/backend.yml. The sole remaining entry is
renamed to -tinygrad (from -cpu-tinygrad) since it is no longer
CPU-only.
- Collapse backend/index.yaml to a single meta + development pair.
The meta anchor carries the latest uri directly; the development
entry points at the master tag.
- run.sh picks the tinygrad device at launch time by probing
/usr/lib/... for libcuda.so.1 / libamdhip64.so. When libcuda is
visible we set CUDA=1 + CUDA_PTX=1 so tinygrad uses its own PTX
renderer (avoids any nvrtc/toolkit dependency); otherwise we fall
back to HIP or CLANG. CPU_LLVM=1 + LLVM_PATH keep the in-process
libLLVM JIT for the CLANG path.
- backend.py's _select_tinygrad_device() is trimmed to a CLANG-only
fallback since production device selection happens in run.sh.
Re-ran test-extra-backend-tinygrad after the change:
Ran 5 of 9 Specs in 56.541 seconds — 5 Passed, 0 Failed
Python Backends for LocalAI
This directory contains Python-based AI backends for LocalAI, providing support for various AI models and hardware acceleration targets.
Overview
The Python backends use a unified build system based on `libbackend.sh` that provides:
- Automatic virtual environment management with support for both `uv` and `pip`
- Hardware-specific dependency installation (CPU, CUDA, Intel, MLX, etc.)
- Portable Python support for standalone deployments
- Consistent backend execution across different environments
Available Backends
Core AI Models
- transformers - Hugging Face Transformers framework (PyTorch-based)
- vllm - High-performance LLM inference engine
- mlx - Apple Silicon optimized ML framework
Audio & Speech
- coqui - Coqui TTS models
- faster-whisper - Fast Whisper speech recognition
- kitten-tts - Lightweight TTS
- mlx-audio - Apple Silicon audio processing
- chatterbox - TTS model
- kokoro - TTS models
Computer Vision
- diffusers - Stable Diffusion and image generation
- mlx-vlm - Vision-language models for Apple Silicon
- rfdetr - Object detection models
Specialized
- rerankers - Text reranking models
Quick Start
Prerequisites
- Python 3.10+ (default: 3.10.18)
- `uv` package manager (recommended) or `pip`
- Appropriate hardware drivers for your target (CUDA, Intel, etc.)
Installation
Each backend can be installed individually:
```bash
# Navigate to a specific backend
cd backend/python/transformers

# Install dependencies
make transformers
# or
bash install.sh

# Run the backend
make run
# or
bash run.sh
```
Using the Unified Build System
The `libbackend.sh` script provides consistent commands across all backends:

```bash
# Source the library in your backend script
source $(dirname $0)/../common/libbackend.sh

# Install requirements (automatically handles hardware detection)
installRequirements

# Start the backend server
startBackend $@

# Run tests
runUnittests
```
Hardware Targets
The build system automatically detects and configures for different hardware:
- CPU - Standard CPU-only builds
- CUDA - NVIDIA GPU acceleration (supports CUDA 12/13)
- Intel - Intel XPU/GPU optimization
- MLX - Apple Silicon (M1/M2/M3) optimization
- HIP - AMD GPU acceleration
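Autodetection along these lines can be sketched with a few tool probes. This is a simplified sketch: the function name, the probed commands, and the returned target names are illustrative assumptions, and the real detection in `libbackend.sh` may differ.

```shell
# Illustrative hardware autodetection sketch; libbackend.sh's actual
# logic and target names may differ.
detect_build_type() {
    if command -v nvidia-smi >/dev/null 2>&1; then
        echo cublas12        # NVIDIA GPU visible
    elif command -v rocminfo >/dev/null 2>&1; then
        echo hip             # AMD GPU visible
    elif [ "$(uname -s)" = Darwin ]; then
        echo mps             # Apple Silicon
    else
        echo cpu             # default CPU-only build
    fi
}

detect_build_type
```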
Target-Specific Requirements
Backends can specify hardware-specific dependencies:
- `requirements.txt` - Base requirements
- `requirements-cpu.txt` - CPU-specific packages
- `requirements-cublas12.txt` - CUDA 12 packages
- `requirements-cublas13.txt` - CUDA 13 packages
- `requirements-intel.txt` - Intel-optimized packages
- `requirements-mps.txt` - Apple Silicon packages
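The naming convention above maps a build target to its requirements file, which can be sketched as a simple lookup. The `requirements_for` helper is hypothetical; the real selection logic lives in `libbackend.sh` and may differ.

```shell
# Hypothetical helper mirroring the requirements-file naming convention
# above; the real selection logic in libbackend.sh may differ.
requirements_for() {
    case "$1" in
        cublas12) echo requirements-cublas12.txt ;;
        cublas13) echo requirements-cublas13.txt ;;
        intel)    echo requirements-intel.txt ;;
        mps)      echo requirements-mps.txt ;;
        cpu)      echo requirements-cpu.txt ;;
        *)        echo requirements.txt ;;      # base requirements
    esac
}

requirements_for cublas12   # -> requirements-cublas12.txt
```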
Configuration Options
Environment Variables
- `PYTHON_VERSION` - Python version (default: 3.10)
- `PYTHON_PATCH` - Python patch version (default: 18)
- `BUILD_TYPE` - Force specific build target
- `USE_PIP` - Use pip instead of uv (default: false)
- `PORTABLE_PYTHON` - Enable portable Python builds
- `LIMIT_TARGETS` - Restrict backend to specific targets
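A backend script might resolve these defaults with standard shell parameter expansion. This is only a sketch of the pattern; the `resolve_python` helper is hypothetical and the real logic in `libbackend.sh` may differ.

```shell
# Sketch of default resolution for the variables listed above;
# resolve_python is a hypothetical helper, not part of libbackend.sh.
resolve_python() {
    # Fall back to the documented defaults when the env vars are unset.
    echo "python${PYTHON_VERSION:-3.10}.${PYTHON_PATCH:-18}"
}

resolve_python                                        # with vars unset -> python3.10.18
PYTHON_VERSION=3.11 PYTHON_PATCH=14 resolve_python    # -> python3.11.14
```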
Example: CUDA 12 Only Backend
```bash
# In your backend script
LIMIT_TARGETS="cublas12"
source $(dirname $0)/../common/libbackend.sh
```
Example: Intel-Optimized Backend
```bash
# In your backend script
LIMIT_TARGETS="intel"
source $(dirname $0)/../common/libbackend.sh
```
Development
Adding a New Backend
- Create a new directory in `backend/python/`
- Copy the template structure from `common/template/`
- Implement your `backend.py` with the required gRPC interface
- Add appropriate requirements files for your target hardware
- Use `libbackend.sh` for consistent build and execution
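The scaffolding steps above can be sketched end to end. This sketch uses a temp dir as a stand-in for the repo root and creates empty placeholder files; in the real repo you would copy `common/template/` instead, and `my-backend` is a hypothetical name.

```shell
# Sketch of scaffolding a new backend per the steps above. Uses a temp
# dir as a stand-in for the repo root; in the real repo, copy
# common/template/ rather than touching empty files.
set -e
root=$(mktemp -d)
new="$root/backend/python/my-backend"   # hypothetical backend name
mkdir -p "$new"
for f in backend.py requirements.txt install.sh run.sh test.sh Makefile test.py; do
    : > "$new/$f"
done
ls "$new"
```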
Testing
```bash
# Run backend tests
make test
# or
bash test.sh
```
Building
```bash
# Install dependencies
make <backend-name>

# Clean build artifacts
make clean
```
Architecture
Each backend follows a consistent structure:
```
backend-name/
├── backend.py           # Main backend implementation
├── requirements.txt     # Base dependencies
├── requirements-*.txt   # Hardware-specific dependencies
├── install.sh           # Installation script
├── run.sh               # Execution script
├── test.sh              # Test script
├── Makefile             # Build targets
└── test.py              # Unit tests
```
Troubleshooting
Common Issues
- Missing dependencies: Ensure all requirements files are properly configured
- Hardware detection: Check that `BUILD_TYPE` matches your system
- Python version: Verify Python 3.10+ is available
- Virtual environment: Use `ensureVenv` to create/activate environments
Contributing
When adding new backends or modifying existing ones:
- Follow the established directory structure
- Use `libbackend.sh` for consistent behavior
- Include appropriate requirements files for all target hardware
- Add comprehensive tests
- Update this README if adding new backend types