* feat(proto): add speaker field to TranscriptSegment for diarization
Add speaker field to the gRPC TranscriptSegment message and map it
through the Go schema, enabling backends to return speaker labels.
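A hedged sketch of how a backend reply might carry the new field (Python, using stubs generated from backend.proto; only `speaker` is the addition described here, the surrounding segment field names are assumptions):

    # Hypothetical: backend_pb2 is the module generated from backend.proto.
    import backend_pb2

    def make_segment(idx, start, end, text, speaker_label):
        # `speaker` is the new field: a diarization label such as "SPEAKER_00".
        # id/start/end/text are assumed names for the existing segment fields.
        return backend_pb2.TranscriptSegment(
            id=idx,
            start=start,
            end=end,
            text=text,
            speaker=speaker_label,
        )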
Signed-off-by: eureka928 <meobius123@gmail.com>
* feat(whisperx): add whisperx backend for transcription with diarization
Add Python gRPC backend using WhisperX for speech-to-text with
word-level timestamps, forced alignment, and speaker diarization
via pyannote-audio when HF_TOKEN is provided.
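A minimal sketch of the pipeline the backend wraps (assuming the whisperx Python API at the time of writing; model name, compute type, and batch size are illustrative, and DiarizationPipeline has moved between modules across whisperx releases):

    import os
    import whisperx

    device = "cpu"
    audio = whisperx.load_audio("sample.wav")

    # 1. Speech-to-text (faster-whisper under the hood).
    model = whisperx.load_model("large-v3", device, compute_type="int8")
    result = model.transcribe(audio, batch_size=8)

    # 2. Forced alignment for word-level timestamps.
    align_model, metadata = whisperx.load_align_model(
        language_code=result["language"], device=device)
    result = whisperx.align(result["segments"], align_model, metadata, audio, device)

    # 3. Speaker diarization via pyannote-audio, only when HF_TOKEN is provided.
    hf_token = os.environ.get("HF_TOKEN")
    if hf_token:
        diarize_model = whisperx.DiarizationPipeline(use_auth_token=hf_token, device=device)
        result = whisperx.assign_word_speakers(diarize_model(audio), result)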
Signed-off-by: eureka928 <meobius123@gmail.com>
* feat(whisperx): register whisperx backend in Makefile
Signed-off-by: eureka928 <meobius123@gmail.com>
* feat(whisperx): add whisperx meta and image entries to index.yaml
Signed-off-by: eureka928 <meobius123@gmail.com>
* ci(whisperx): add build matrix entries for CPU, CUDA 12/13, and ROCm
Signed-off-by: eureka928 <meobius123@gmail.com>
* fix(whisperx): unpin torch versions and use CPU index for cpu requirements
Address review feedback:
- Use --extra-index-url for CPU torch wheels to reduce size
- Remove torch version pins, let uv resolve compatible versions
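The resulting CPU requirements take roughly this shape (a sketch; the PyTorch CPU wheel index URL is the standard one, the exact file contents are assumed):

    --extra-index-url https://download.pytorch.org/whl/cpu
    torch
    whisperx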
Signed-off-by: eureka928 <meobius123@gmail.com>
* fix(whisperx): pin torch ROCm variant to fix CI build failure
Signed-off-by: eureka928 <meobius123@gmail.com>
* fix(whisperx): pin torch CPU variant to fix uv resolution failure
Pin torch==2.8.0+cpu so uv resolves the CPU wheel from the extra
index instead of picking torch==2.8.0+cu128 from PyPI, which pulls
unresolvable CUDA dependencies.
Signed-off-by: eureka928 <meobius123@gmail.com>
* fix(whisperx): use unsafe-best-match index strategy to fix uv resolution failure
uv's default first-match strategy finds torch on PyPI before checking
the extra index, causing it to pick torch==2.8.0+cu128 instead of the
CPU variant. This makes whisperx's transitive torch dependency
unresolvable. Using unsafe-best-match lets uv consider all indexes.
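For illustration, the strategy is a real uv flag; the requirements file name below is hypothetical and the actual invocation in libbackend.sh may differ:

    uv pip install \
      --index-strategy unsafe-best-match \
      --extra-index-url https://download.pytorch.org/whl/cpu \
      -r requirements-cpu.txt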
Signed-off-by: eureka928 <meobius123@gmail.com>
* fix(whisperx): drop +cpu local version suffix to fix uv resolution failure
PEP 440 ==2.8.0 matches 2.8.0+cpu from the extra index, avoiding the
issue where uv cannot locate an explicit +cpu local version specifier.
This aligns with the pattern used by all other CPU backends.
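The matching rule is easy to check with the packaging library (a standalone illustration, not code from this change):

    # PEP 440: a specifier with no local version label ignores local labels on
    # candidate versions, while an explicit local label must match exactly.
    from packaging.specifiers import SpecifierSet
    from packaging.version import Version

    assert SpecifierSet("==2.8.0").contains(Version("2.8.0+cpu"))      # matches
    assert SpecifierSet("==2.8.0").contains(Version("2.8.0+cu128"))    # also matches
    assert not SpecifierSet("==2.8.0+cpu").contains(Version("2.8.0"))  # exact pin only

Combined with the CPU extra index and unsafe-best-match, the bare ==2.8.0 pin lets uv pick the +cpu wheel.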
Signed-off-by: eureka928 <meobius123@gmail.com>
* fix(backends): drop +rocm local version suffixes from hipblas requirements to fix uv resolution
uv cannot resolve PEP 440 local version specifiers (e.g. +rocm6.4,
+rocm6.3) in pinned requirements. The --extra-index-url already points
to the correct ROCm wheel index and --index-strategy unsafe-best-match
(set in libbackend.sh) ensures the ROCm variant is preferred.
Applies the same fix as 7f5d72e8 (which resolved this for +cpu) across
all 14 hipblas requirements files.
Signed-off-by: eureka928 <meobius123@gmail.com>
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Signed-off-by: eureka928 <meobius123@gmail.com>
* revert: scope hipblas suffix fix to whisperx only
Reverts changes to non-whisperx hipblas requirements files per
maintainer review — other backends are building fine with the +rocm
local version suffix.
Signed-off-by: eureka928 <meobius123@gmail.com>
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Signed-off-by: eureka928 <meobius123@gmail.com>
---------
Signed-off-by: eureka928 <meobius123@gmail.com>
Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
Some datacenter setups might be stuck on a 5.x kernel, which does not play well with CUDA >= 12.9. To increase compatibility with the CUDA 12.x branch, downgrade to 12.8. For newer systems it is still suggested to use CUDA 13.x wherever compatible.
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* feat(vllm-omni): add new backend
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* default to py3.12
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
---------
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
exllama2 development has stalled and only old architectures are supported. exllamav3 is still in development; in the meantime, clean exllama2 out of the gallery.
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
Builds currently exhaust CI, and there are better backends at this point in time. We will probably deprecate it in the future.
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
This image is for hardware prior to Jetpack 7. Jetpack 7 broke compatibility with older devices that are still in use, such as the AGX Orin and other Jetsons. While we do have l4t-cuda-13 images with SBSA support for newer Nvidia devices (Thor, DGX, etc.), for older hardware we are forced to keep the old images around, as Ubuntu 24.04 does not seem to be supported.
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* feat(backends): add moonshine backend for faster transcription
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* Add backend to CI, update AGENTS.md from this exercise
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
---------
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* feat(vibevoice): add backend
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* chore: add workflow and backend index
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* chore(gallery): add vibevoice
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* Use self-hosted for intel builds
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* Pin python version for l4t
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
---------
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* fix: use ubuntu 24.04 for cuda13 l4t images
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* Drop openblas from containers
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* Fixups
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* fixups
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
---------
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* chore(ci): add cuda13 jobs
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* Add to pipelines and to capabilities. Start working on the gallery
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* gallery
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* capabilities: try to detect by looking at /usr/local
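A sketch of what such detection can look like (assuming CUDA's conventional /usr/local/cuda-XX.Y install layout; not the exact heuristic shipped here):

    # Guess the GPU capability from toolkit directories under /usr/local.
    from pathlib import Path

    def detect_capability() -> str:
        if any(Path("/usr/local").glob("cuda-13*")):
            return "cuda13"
        if any(Path("/usr/local").glob("cuda-12*")):
            return "cuda12"
        return "cpu"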
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* neutts
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* backends.yaml
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* add cuda13 l4t requirements.txt
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* add cuda13 requirements.txt
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* Fixups
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* Fixups
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* Pin vllm
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* Not all backends are compatible
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* add vllm to requirements
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* vllm is not pre-compiled for cuda 13
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
---------
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
Rename tag suffix for hipblas whisper to match backend config
hipblas images generally have the suffix `-gpu-rocm-hipblas-X`. One exception to this currently is the hipblas build of Whisper, which has the suffix `-gpu-hipblas-whisper`.
However, as `backend/index.yaml` references the image tag for Whisper using the more consistent form (i.e. `latest-gpu-rocm-hipblas-whisper`), it is not possible to add the backend as raised in #6114.
Therefore, rename the suffix for hipblas whisper images to use the more consistent form, aligning with other hipblas builds as well as the expected image name in `backend/index.yaml`.
Signed-off-by: Kingsley Jarrett <kj@kingj.net>
* chore(ci): Build Go based backends on Darwin
Signed-off-by: Richard Palethorpe <io@richiejp.com>
* chore(stablediffusion-ggml): Fixes for building on Darwin
Signed-off-by: Richard Palethorpe <io@richiejp.com>
* chore(whisper): Build on Darwin
Signed-off-by: Richard Palethorpe <io@richiejp.com>
---------
Signed-off-by: Richard Palethorpe <io@richiejp.com>
* feat(mlx-audio): Add mlx-audio backend
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* improve loading
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* CI tests
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* fix: set C_INCLUDE_PATH to point to python install
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
---------
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* feat(backends): bundle python
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* test ci
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* vllm on self-hosted
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* Add clang
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* Try to fix it for Mac
* Relocate links only when the install is portable
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* Make sure to call macosPortableEnv
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* Use self-hosted for vllm
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* Fixups
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* CI
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
---------
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* chore: allow to install with pip
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* WIP
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* Make the backend build and actually work
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* List models from system only
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* Add script to build darwin python backends
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* Run protogen in libbackend
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* Detect if mps is available across python backends
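A minimal sketch of the check involved (assuming torch-based backends; torch.backends.mps is the standard PyTorch API for Apple Silicon):

    import torch

    def best_device() -> str:
        # Prefer CUDA, then Apple's Metal Performance Shaders, then CPU.
        if torch.cuda.is_available():
            return "cuda"
        if torch.backends.mps.is_available():
            return "mps"
        return "cpu"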
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* CI: try to build backend
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* Debug CI
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* Fixups
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* Fixups
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* Index mlx-vlm
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* Remove mlx-vlm
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* Drop CI test
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
---------
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>