Compare commits

...

98 Commits

Author SHA1 Message Date
copilot-swe-agent[bot]
5041294265 Initial plan 2026-02-02 22:01:37 +00:00
Ettore Di Giacinto
08b2b8d755 fix(libbackend): do not inject --index-strategy unsafe-best-match to all
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2026-02-02 17:13:45 +01:00
Dream
10a1e6c74d feat(whisperx): add whisperx backend for transcription with speaker diarization (#8299)
* feat(proto): add speaker field to TranscriptSegment for diarization

Add speaker field to the gRPC TranscriptSegment message and map it
through the Go schema, enabling backends to return speaker labels.

Signed-off-by: eureka928 <meobius123@gmail.com>

* feat(whisperx): add whisperx backend for transcription with diarization

Add Python gRPC backend using WhisperX for speech-to-text with
word-level timestamps, forced alignment, and speaker diarization
via pyannote-audio when HF_TOKEN is provided.

Signed-off-by: eureka928 <meobius123@gmail.com>

* feat(whisperx): register whisperx backend in Makefile

Signed-off-by: eureka928 <meobius123@gmail.com>

* feat(whisperx): add whisperx meta and image entries to index.yaml

Signed-off-by: eureka928 <meobius123@gmail.com>

* ci(whisperx): add build matrix entries for CPU, CUDA 12/13, and ROCm

Signed-off-by: eureka928 <meobius123@gmail.com>

* fix(whisperx): unpin torch versions and use CPU index for cpu requirements

Address review feedback:
- Use --extra-index-url for CPU torch wheels to reduce size
- Remove torch version pins, let uv resolve compatible versions

Signed-off-by: eureka928 <meobius123@gmail.com>

* fix(whisperx): pin torch ROCm variant to fix CI build failure

Signed-off-by: eureka928 <meobius123@gmail.com>

* fix(whisperx): pin torch CPU variant to fix uv resolution failure

Pin torch==2.8.0+cpu so uv resolves the CPU wheel from the extra
index instead of picking torch==2.8.0+cu128 from PyPI, which pulls
unresolvable CUDA dependencies.

Signed-off-by: eureka928 <meobius123@gmail.com>

* fix(whisperx): use unsafe-best-match index strategy to fix uv resolution failure

uv's default first-match strategy finds torch on PyPI before checking
the extra index, causing it to pick torch==2.8.0+cu128 instead of the
CPU variant. This makes whisperx's transitive torch dependency
unresolvable. Using unsafe-best-match lets uv consider all indexes.

Signed-off-by: eureka928 <meobius123@gmail.com>

* fix(whisperx): drop +cpu local version suffix to fix uv resolution failure

PEP 440 ==2.8.0 matches 2.8.0+cpu from the extra index, avoiding the
issue where uv cannot locate an explicit +cpu local version specifier.
This aligns with the pattern used by all other CPU backends.

Signed-off-by: eureka928 <meobius123@gmail.com>
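A minimal sketch of the resolution setup described above, assuming a hypothetical requirements-cpu.txt and flag placement (the real wiring lives in libbackend.sh and the backend Makefiles, which are not shown here):

  # Hypothetical requirements-cpu.txt for a CPU-only Python backend:
  #   torch==2.8.0    <- no "+cpu" local suffix; PEP 440 "==2.8.0" still matches 2.8.0+cpu
  #   whisperx
  # Install with uv, letting the extra CPU index supply the +cpu wheel.
  # --index-strategy unsafe-best-match makes uv compare candidates across all indexes
  # instead of stopping at the first index that carries the package.
  uv pip install \
    --extra-index-url https://download.pytorch.org/whl/cpu \
    --index-strategy unsafe-best-match \
    -r requirements-cpu.txt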

* fix(backends): drop +rocm local version suffixes from hipblas requirements to fix uv resolution

uv cannot resolve PEP 440 local version specifiers (e.g. +rocm6.4,
+rocm6.3) in pinned requirements. The --extra-index-url already points
to the correct ROCm wheel index and --index-strategy unsafe-best-match
(set in libbackend.sh) ensures the ROCm variant is preferred.

Applies the same fix as 7f5d72e8 (which resolved this for +cpu) across
all 14 hipblas requirements files.

Signed-off-by: eureka928 <meobius123@gmail.com>

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Signed-off-by: eureka928 <meobius123@gmail.com>

* revert: scope hipblas suffix fix to whisperx only

Reverts changes to non-whisperx hipblas requirements files per
maintainer review — other backends are building fine with the +rocm
local version suffix.

Signed-off-by: eureka928 <meobius123@gmail.com>

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Signed-off-by: eureka928 <meobius123@gmail.com>

---------

Signed-off-by: eureka928 <meobius123@gmail.com>
Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-02 16:33:12 +01:00
Alex O'Connell
b7585ca738 fix(api): Add missing field in initial OpenAI streaming response (#8341)
Add missing field in initial OpenAI streaming response

Signed-off-by: Alex O'Connell <35843486+acon96@users.noreply.github.com>
2026-02-02 08:30:04 +01:00
LocalAI [bot]
8cae99229c chore: ⬆️ Update ggml-org/llama.cpp to 2634ed207a17db1a54bd8df0555bd8499a6ab691 (#8336)
⬆️ Update ggml-org/llama.cpp

Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2026-02-01 21:23:57 +00:00
rampa3
04e0f444e1 chore(model gallery): Add Qwen 3 VL 8B thinking & instruct (#8329)
Signed-off-by: rampa3 <68955305+rampa3@users.noreply.github.com>
2026-02-01 17:34:31 +01:00
rampa3
6f410f4cbe chore(model gallery): Rename downloaded filename for Magistral Small mmproj (#8327)
Rename downloaded filename for Magistral Small mmproj

Signed-off-by: rampa3 <68955305+rampa3@users.noreply.github.com>
2026-02-01 17:33:48 +01:00
Ettore Di Giacinto
800f749c7b fix: drop gguf VRAM estimation (now redundant) (#8325)
fix: drop gguf VRAM estimation

Cleanup. This is now handled directly in llama.cpp, no need to estimate from Go.

VRAM estimation in general is tricky, but llama.cpp ( 41ea26144e/src/llama.cpp (L168) ) has lately added an automatic "fitting" of models to VRAM, so we can drop the backend-specific GGUF VRAM estimation from our code instead of trying to guess, since we already enable it:

 397f7f0862/backend/cpp/llama-cpp/grpc-server.cpp (L393)

Fixes: https://github.com/mudler/LocalAI/issues/8302
See: https://github.com/mudler/LocalAI/issues/8302#issuecomment-3830773472
2026-02-01 17:33:28 +01:00
Andres
b6459ddd57 feat(api): Add transcribe response format request parameter & adjust STT backends (#8318)
* WIP response format implementation for audio transcriptions

(cherry picked from commit e271dd764bbc13846accf3beb8b6522153aa276f)
Signed-off-by: Andres Smith <andressmithdev@pm.me>

* Rework transcript response_format and add more formats

(cherry picked from commit 6a93a8f63e2ee5726bca2980b0c9cf4ef8b7aeb8)
Signed-off-by: Andres Smith <andressmithdev@pm.me>

* Add test and replace go-openai package with official openai go client

(cherry picked from commit f25d1a04e46526429c89db4c739e1e65942ca893)
Signed-off-by: Andres Smith <andressmithdev@pm.me>

* Fix faster-whisper backend and refactor transcription formatting to also work on CLI

Signed-off-by: Andres Smith <andressmithdev@pm.me>
(cherry picked from commit 69a93977d5e113eb7172bd85a0f918592d3d2168)
Signed-off-by: Andres Smith <andressmithdev@pm.me>

---------

Signed-off-by: Andres Smith <andressmithdev@pm.me>
Co-authored-by: nanoandrew4 <nanoandrew4@gmail.com>
Co-authored-by: Ettore Di Giacinto <mudler@users.noreply.github.com>
2026-02-01 17:33:17 +01:00
Ettore Di Giacinto
397f7f0862 fix(ui): take account of reasoning in token count calculation (#8324)
We were skipping reasoning traces when counting tokens, which yielded a
wrong total count.

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2026-02-01 10:48:31 +01:00
LocalAI [bot]
234072769c chore(model gallery): 🤖 add 1 new model via gallery agent (#8321)
chore(model gallery): 🤖 add new models via gallery agent

Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2026-02-01 09:02:05 +01:00
LocalAI [bot]
3445415b3d chore: ⬆️ Update ggml-org/llama.cpp to 41ea26144e55d23f37bb765f88c07588d786567f (#8317)
⬆️ Update ggml-org/llama.cpp

Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2026-01-31 21:18:31 +00:00
LocalAI [bot]
b05e110aa6 chore: ⬆️ Update ggml-org/llama.cpp to 1488339138d609139c4400d1b80f8a5b1a9a203c (#8306)
⬆️ Update ggml-org/llama.cpp

Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2026-01-31 08:59:09 +01:00
LocalAI [bot]
e69cba2444 chore: ⬆️ Update ggml-org/whisper.cpp to aa1bc0d1a6dfd70dbb9f60c11df12441e03a9075 (#8305)
⬆️ Update ggml-org/whisper.cpp

Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2026-01-31 08:58:54 +01:00
LocalAI [bot]
f7903597ac chore(model-gallery): ⬆️ update checksum (#8307)
⬆️ Checksum updates in gallery/index.yaml

Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2026-01-30 21:27:48 +01:00
LocalAI [bot]
ee76a0cd1c feat(swagger): update swagger (#8304)
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2026-01-30 21:27:10 +01:00
Ettore Di Giacinto
4ca5b737bf chore(cuda): target 12.8 for 12 to increase compatibility (#8297)
Some datacenter setups might be stuck on a 5.x kernel, which doesn't
play well with CUDA >=12.9. To increase compatibility with the CUDA 12.x
branch, downgrade to 12.8. For newer systems, it is still suggested to
use CUDA 13.x wherever compatible.

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2026-01-30 12:58:44 +01:00
Ettore Di Giacinto
4077aaf978 chore: re-enable e2e tests, fixups anthropic API tools support (#8296)
* chore(tests): add mock backend e2e tests

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* Fixup anthropic tests

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* prepare e2e tests

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* Drop repetitive tests

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* Drop specific CI workflow

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* fixup anthropic issues, move all e2e tests to use mocked backend

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

---------

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2026-01-30 12:41:50 +01:00
Ettore Di Giacinto
68dd9765a0 feat(tts): add support for streaming mode (#8291)
* feat(tts): add support for streaming mode

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* Send first audio, make sure it's 16

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

---------

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2026-01-30 11:58:01 +01:00
LocalAI [bot]
2c44b06a67 chore: ⬆️ Update ggml-org/llama.cpp to 4fdbc1e4dba428ce0cf9d2ac22232dc170bbca82 (#8283)
⬆️ Update ggml-org/llama.cpp

Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2026-01-29 23:43:29 +01:00
LocalAI [bot]
7cc90db3e5 chore(model-gallery): ⬆️ update checksum (#8285)
⬆️ Checksum updates in gallery/index.yaml

Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2026-01-29 21:51:18 +01:00
Ettore Di Giacinto
1e08e02598 feat(qwen-asr): add support to qwen-asr (#8281)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2026-01-29 21:50:35 +01:00
Richard Palethorpe
dd8e74a486 feat(realtime): Add audio conversations (#6245)
* feat(realtime): Add audio conversations

Signed-off-by: Richard Palethorpe <io@richiejp.com>

* chore(realtime): Vendor the updated API and modify for server side

Signed-off-by: Richard Palethorpe <io@richiejp.com>

* feat(realtime): Update to the GA realtime API

Signed-off-by: Richard Palethorpe <io@richiejp.com>

* chore: Document realtime API and add docs to AGENTS.md

Signed-off-by: Richard Palethorpe <io@richiejp.com>

* feat: Filter reasoning from spoken output

Signed-off-by: Richard Palethorpe <io@richiejp.com>

* fix(realtime): Send delta and done events for tool calls and audio transcripts

Ensure that content is sent in both deltas and done events for function call arguments and audio transcripts. This fixes compatibility with clients that rely on delta events for parsing.

💘 Generated with Crush

Signed-off-by: Richard Palethorpe <io@richiejp.com>

* fix(realtime): Improve tool call handling and error reporting

- Refactor Model interface to accept []types.ToolUnion and *types.ToolChoiceUnion
  instead of JSON strings, eliminating unnecessary marshal/unmarshal cycles
- Fix Parameters field handling: support both map[string]any and JSON string formats
- Add PredictConfig() method to Model interface for accessing model configuration
- Add comprehensive debug logging for tool call parsing and function config
- Add missing return statement after prediction error (critical bug fix)
- Add warning logs for NoAction function argument parsing failures
- Improve error visibility throughout generateResponse function

💘 Generated with Crush

Assisted-by: Claude Sonnet 4.5 via Crush <crush@charm.land>
Signed-off-by: Richard Palethorpe <io@richiejp.com>

---------

Signed-off-by: Richard Palethorpe <io@richiejp.com>
2026-01-29 08:44:53 +01:00
Ettore Di Giacinto
48e08772f3 chore(llama.cpp): bump to 'f6b533d898ce84bae8d9fa8dfc6697ac087800bf' (#8275)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2026-01-29 00:22:25 +01:00
LocalAI [bot]
c28c0227c6 chore: ⬆️ Update leejet/stable-diffusion.cpp to e411520407663e1ddf8ff2e5ed4ff3a116fbbc97 (#8274)
⬆️ Update leejet/stable-diffusion.cpp

Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2026-01-28 21:23:05 +00:00
Richard Palethorpe
856ca2d6b1 fix(qwen3): Be explicit with function calling format (#8265)
Qwen3 4B was using the wrong function calling format (i.e. using "function"
instead of "name") within the realtime API.

Specifying the function calling format explicitly stops this.

Signed-off-by: Richard Palethorpe <io@richiejp.com>
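As a rough illustration of the mismatch (the tool name and arguments below are invented; only the "function" vs. "name" keys come from the message above), the two shapes side by side and a jq remap from one to the other:

  # Wrong key the model was emitting (per the commit message): "function"
  # Expected key: "name"
  echo '{"function": "get_weather", "arguments": {"city": "Rome"}}' \
    | jq -c 'if has("function") then {name: .function, arguments: .arguments} else . end'
  # prints: {"name":"get_weather","arguments":{"city":"Rome"}}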
2026-01-28 14:44:29 +01:00
Ettore Di Giacinto
9b973b79f6 feat: add VoxCPM tts backend (#8109)
* feat: add VoxCPM tts backend

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* Disable voxcpm on arm64 cpu

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

---------

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2026-01-28 14:44:04 +01:00
Ettore Di Giacinto
cba8ef4e38 chore: fix backend icons
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2026-01-28 09:09:00 +01:00
Ettore Di Giacinto
f729e300d6 chore: fix backend icons
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2026-01-28 09:08:03 +01:00
LocalAI [bot]
9916811a79 chore: ⬆️ Update ggml-org/llama.cpp to 2b4cbd2834e427024bc7f935a1f232aecac6679b (#8258)
⬆️ Update ggml-org/llama.cpp

Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
Co-authored-by: Ettore Di Giacinto <mudler@users.noreply.github.com>
2026-01-28 08:50:16 +01:00
Ettore Di Giacinto
2f7c595cd1 chore(model gallery): add z-image and z-image-turbo for diffusers (#8260)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2026-01-27 22:42:10 +01:00
rampa3
73decac746 chore(model gallery): Add mistral-community/pixtral-12b with mmproj (#8245)
Rebased branch add_pixtral on master

Signed-off-by: rampa3 <68955305+rampa3@users.noreply.github.com>
2026-01-27 21:43:31 +01:00
Ettore Di Giacinto
ec1598868b feat(vibevoice): add ASR support (#8222)
* feat(vibevoice): add ASR support

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* Add tests

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* fixups

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* chore(tests): download voice files

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* Small fixups

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* Small fixups

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* Try to run on bigger runner

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* Fixups

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* Fixups

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* Fixups

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* debug

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* CI can't hold vibevoice

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

---------

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2026-01-27 20:19:22 +01:00
rampa3
93d7e5d4b8 chore(model gallery): Add entry for Magistral Small 1.2 with mmproj (#8248)
Signed-off-by: rampa3 <68955305+rampa3@users.noreply.github.com>
2026-01-27 16:55:00 +01:00
rampa3
ff5a54b9d1 chore(model gallery): Add entry for Mistral Small 3.1 with mmproj (#8247)
* chore(model gallery): Add entry for Mistral Small 3.1 with mmproj

Signed-off-by: rampa3 <68955305+rampa3@users.noreply.github.com>

* Use llama-cpp subfolder structure akin to Qwen 3 VL

Signed-off-by: rampa3 <68955305+rampa3@users.noreply.github.com>

---------

Signed-off-by: rampa3 <68955305+rampa3@users.noreply.github.com>
2026-01-27 16:54:14 +01:00
LocalAI [bot]
3c1f823c47 chore: ⬆️ Update ggml-org/llama.cpp to 8f80d1b254aef70a0959e314be368d05debe7294 (#8229)
⬆️ Update ggml-org/llama.cpp

Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2026-01-26 21:19:43 +00:00
LocalAI [bot]
4024220d00 chore(model gallery): 🤖 add 1 new model via gallery agent (#8220)
chore(model gallery): 🤖 add new models via gallery agent

Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2026-01-26 12:11:24 +01:00
LocalAI [bot]
f76958d761 chore: ⬆️ Update ggml-org/llama.cpp to 0440bfd1605333726ea0fb7a836942660bf2f9a6 (#8216)
⬆️ Update ggml-org/llama.cpp

Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2026-01-26 00:50:35 +01:00
LocalAI [bot]
2bd5ca45de chore: ⬆️ Update leejet/stable-diffusion.cpp to 43e829f21966abb96b08c712bccee872dc820914 (#8215)
⬆️ Update leejet/stable-diffusion.cpp

Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2026-01-26 00:50:16 +01:00
Ettore Di Giacinto
6804ce1c39 chore(docs): change MEMORY_FILE_PATH to MEMORY_INDEX_PATH
Updated MEMORY_FILE_PATH to MEMORY_INDEX_PATH in memory configuration.

Signed-off-by: Ettore Di Giacinto <mudler@users.noreply.github.com>
2026-01-25 22:14:11 +01:00
Dedy F. Setyawan
d499071bff fix(ui): correctly display selected image model (#8208)
Signed-off-by: Dedy F. Setyawan <dedyfajars@gmail.com>
2026-01-25 14:54:40 +01:00
Ettore Di Giacinto
26a374b717 chore: drop bark which is unmaintained (#8207)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2026-01-25 09:26:40 +01:00
rampa3
980de0e25b chore(model gallery): Add most of not yet present Piper voices from Hugging Face (#8202)
Signed-off-by: rampa3 <68955305+rampa3@users.noreply.github.com>
2026-01-25 08:56:53 +01:00
Ettore Di Giacinto
4767371aee chore(README): Add links
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2026-01-24 22:49:27 +01:00
Ettore Di Giacinto
131d247b78 chore(README): Update and simplify links
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2026-01-24 22:46:40 +01:00
Ettore Di Giacinto
b2a8a63899 feat(vllm-omni): add new backend (#8188)
* feat(vllm-omni): add new backend

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* default to py3.12

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

---------

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2026-01-24 22:23:30 +01:00
LocalAI [bot]
05a332cd5f chore: ⬆️ Update ggml-org/llama.cpp to bb02f74c612064947e51d23269a1cf810b67c9a7 (#8196)
⬆️ Update ggml-org/llama.cpp

Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2026-01-24 21:19:43 +00:00
Ettore Di Giacinto
05904c77f5 chore(exllama): drop backend now almost deprecated (#8186)
exllama2 development has stalled and only old architectures are
supported. exllamav3 is still in development; in the meantime, we are
cleaning up exllama2 from the gallery.

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2026-01-24 08:57:37 +01:00
LocalAI [bot]
17783fa7d9 chore: ⬆️ Update leejet/stable-diffusion.cpp to fa61ea744d1a87fa26a63f8a86e45587bc9534d6 (#8184)
⬆️ Update leejet/stable-diffusion.cpp

Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2026-01-24 08:57:24 +01:00
LocalAI [bot]
4019094111 chore: ⬆️ Update ggml-org/llama.cpp to 557515be1e93ed8939dd8a7c7d08765fdbe8be31 (#8183)
⬆️ Update ggml-org/llama.cpp

Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2026-01-24 08:57:08 +01:00
Ettore Di Giacinto
ca65fc751a chore(model gallery): add qwen3-tts to model gallery (#8187)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2026-01-23 23:06:50 +01:00
LocalAI [bot]
a1e3acc590 docs: ⬆️ update docs version mudler/LocalAI (#8182)
⬆️ Update docs version mudler/LocalAI

Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2026-01-23 22:03:47 +01:00
Ettore Di Giacinto
a36960e069 fix(qwen-tts): change icon URL in index.yaml
Updated the icon URL for the project.

Signed-off-by: Ettore Di Giacinto <mudler@users.noreply.github.com>
2026-01-23 22:00:14 +01:00
Ettore Di Giacinto
58bb6a29ed Revert "chore(deps): bump torch from 2.4.1 to 2.7.1+xpu in /backend/python/bark in the pip group across 1 directory" (#8180)
Revert "chore(deps): bump torch from 2.4.1 to 2.7.1+xpu in /backend/python/ba…"

This reverts commit 5881c82413.
2026-01-23 17:25:04 +01:00
dependabot[bot]
5881c82413 chore(deps): bump torch from 2.4.1 to 2.7.1+xpu in /backend/python/bark in the pip group across 1 directory (#8175)
chore(deps): bump torch

Bumps the pip group with 1 update in the /backend/python/bark directory: torch.


Updates `torch` from 2.4.1 to 2.7.1+xpu

---
updated-dependencies:
- dependency-name: torch
  dependency-version: 2.7.1+xpu
  dependency-type: direct:production
  dependency-group: pip
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-01-23 15:32:15 +00:00
Ettore Di Giacinto
923ebbb344 feat(qwen-tts): add Qwen-tts backend (#8163)
* feat(qwen-tts): add Qwen-tts backend

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* Update intel deps

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* Drop flash-attn for cuda13

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

---------

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2026-01-23 15:18:41 +01:00
LocalAI [bot]
ea51567b89 chore(model gallery): 🤖 add 1 new model via gallery agent (#8170)
chore(model gallery): 🤖 add new models via gallery agent

Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2026-01-23 08:19:39 +01:00
LocalAI [bot]
552c62a19c chore: ⬆️ Update leejet/stable-diffusion.cpp to 5e4579c11d0678f9765463582d024e58270faa9c (#8166)
⬆️ Update leejet/stable-diffusion.cpp

Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
Co-authored-by: Ettore Di Giacinto <mudler@users.noreply.github.com>
2026-01-23 08:18:05 +01:00
Ettore Di Giacinto
c0b21a921b feat: detect thinking support from backend automatically if not explicitly set (#8167)
detect thinking support from backend automatically if not explicitly set

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2026-01-23 00:38:28 +01:00
LocalAI [bot]
b10045adc2 chore: ⬆️ Update ggml-org/llama.cpp to a5eaa1d6a3732bc0f460b02b61c95680bba5a012 (#8165)
⬆️ Update ggml-org/llama.cpp

Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
Co-authored-by: Ettore Di Giacinto <mudler@users.noreply.github.com>
2026-01-22 23:32:05 +00:00
Ettore Di Giacinto
61b5e3b629 chore: drop test file
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2026-01-22 22:19:38 +00:00
Ettore Di Giacinto
e35d7cb3b3 chore: drop test file
the function has now been removed

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2026-01-22 21:47:52 +00:00
Ettore Di Giacinto
0fa0ac4797 fix(videogen): drop incomplete endpoint, add GGUF support for LTX-2 (#8160)
* Debug

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* Drop openai video endpoint (is not complete)

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* Add download button

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

---------

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2026-01-22 14:09:20 +01:00
LocalAI [bot]
be7ed85838 chore(model gallery): 🤖 add 1 new model via gallery agent (#8157)
chore(model gallery): 🤖 add new models via gallery agent

Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2026-01-22 08:25:40 +01:00
LocalAI [bot]
c12b310028 chore: ⬆️ Update ggml-org/llama.cpp to c301172f660a1fe0b42023da990bf7385d69adb4 (#8151)
⬆️ Update ggml-org/llama.cpp

Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2026-01-22 00:51:22 +01:00
LocalAI [bot]
0447d5564d chore: ⬆️ Update leejet/stable-diffusion.cpp to 329571131d62d64a4f49e1acbef49ae02544fdcd (#8152)
⬆️ Update leejet/stable-diffusion.cpp

Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2026-01-22 00:50:41 +01:00
Ettore Di Giacinto
22c0eb5421 chore(diffusers): add 'av' to requirements.txt (#8155)
Signed-off-by: Ettore Di Giacinto <mudler@users.noreply.github.com>
2026-01-21 22:35:00 +01:00
LocalAI [bot]
a0a00fb937 chore(model-gallery): ⬆️ update checksum (#8153)
⬆️ Checksum updates in gallery/index.yaml

Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2026-01-21 21:45:11 +01:00
LocalAI [bot]
6dd44742ea feat(swagger): update swagger (#8150)
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2026-01-21 21:44:44 +01:00
Richard Palethorpe
00c72e7d3e fix(tracing): Create trace buffer on first request to enable tracing at runtime (#8148)
Signed-off-by: Richard Palethorpe <io@richiejp.com>
2026-01-21 18:39:39 +01:00
LocalAI [bot]
d01c335cf6 chore: ⬆️ Update ggml-org/whisper.cpp to 7aa8818647303b567c3a21fe4220b2681988e220 (#8146)
⬆️ Update ggml-org/whisper.cpp

Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2026-01-21 17:44:01 +01:00
LocalAI [bot]
5687df4535 chore: ⬆️ Update ggml-org/llama.cpp to ad8d85bd94cc86e89d23407bdebf98f2e6510c61 (#8145)
⬆️ Update ggml-org/llama.cpp

Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2026-01-21 15:41:36 +00:00
Ettore Di Giacinto
f5fade97e6 chore: drop noisy logs (#8142)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2026-01-21 09:52:20 +01:00
Ettore Di Giacinto
b88ae31e4e chore(model gallery): add flux 2 and flux 2 klein (#8141)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2026-01-21 09:46:33 +01:00
Ettore Di Giacinto
f6daaa7c35 chore(deps): Bump llama.cpp to '1c7cf94b22a9dc6b1d32422f72a627787a4783a3' (#8136)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2026-01-21 00:12:13 +01:00
Ettore Di Giacinto
c491c6ca90 feat(openresponses): Support reasoning blocks (#8133)
* feat(openresponses): support reasoning blocks

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* allow to disable reasoning, refactor common logic

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* Add option to only strip reasoning

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* Add configurations for custom reasoning tokens

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

---------

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2026-01-21 00:11:45 +01:00
Ettore Di Giacinto
34e054f607 fix(reasoning): support models with reasoning without starting thinking tag (#8132)
* chore: extract reasoning to its own package

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* make sure we detect thinking tokens from template

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* Allow to override via config, add tests

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

---------

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2026-01-20 21:07:59 +01:00
LocalAI [bot]
e886bb291a chore(model gallery): 🤖 add 1 new model via gallery agent (#8128)
chore(model gallery): 🤖 add new models via gallery agent

Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2026-01-20 12:58:29 +01:00
Ettore Di Giacinto
4bf2f8bbd8 chore(docs): update docs with Anthropic API and openresponses
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2026-01-20 09:25:24 +01:00
LocalAI [bot]
d3525b7509 chore: ⬆️ Update ggml-org/llama.cpp to 959ecf7f234dc0bc0cd6829b25cb0ee1481aa78a (#8122)
⬆️ Update ggml-org/llama.cpp

Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2026-01-19 22:50:47 +01:00
LocalAI [bot]
c8aa821e0e chore: ⬆️ Update leejet/stable-diffusion.cpp to a48b4a3ade9972faf0adcad47e51c6fc03f0e46d (#8121)
⬆️ Update leejet/stable-diffusion.cpp

Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2026-01-19 22:27:46 +01:00
dependabot[bot]
b3191927ae chore(deps): bump github.com/mudler/cogito from 0.7.2 to 0.8.1 (#8124)
Bumps [github.com/mudler/cogito](https://github.com/mudler/cogito) from 0.7.2 to 0.8.1.
- [Release notes](https://github.com/mudler/cogito/releases)
- [Commits](https://github.com/mudler/cogito/compare/v0.7.2...v0.8.1)

---
updated-dependencies:
- dependency-name: github.com/mudler/cogito
  dependency-version: 0.8.1
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-01-19 22:26:26 +01:00
LocalAI [bot]
54c5a2d9ea docs: ⬆️ update docs version mudler/LocalAI (#8120)
⬆️ Update docs version mudler/LocalAI

Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2026-01-19 21:18:24 +00:00
Ettore Di Giacinto
0279591fec Enable reranking for Qwen3-VL-Reranker-8B
Signed-off-by: Ettore Di Giacinto <mudler@users.noreply.github.com>
2026-01-19 15:28:58 +01:00
LocalAI [bot]
8845186955 chore: ⬆️ Update leejet/stable-diffusion.cpp to 2efd19978dd4164e387bf226025c9666b6ef35e2 (#8099)
⬆️ Update leejet/stable-diffusion.cpp

Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2026-01-18 22:40:35 +01:00
LocalAI [bot]
ab8ed24358 chore: ⬆️ Update ggml-org/llama.cpp to 287a33017b32600bfc0e81feeb0ad6e81e0dd484 (#8100)
⬆️ Update ggml-org/llama.cpp

Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2026-01-18 22:40:14 +01:00
LocalAI [bot]
a021df5a88 feat(swagger): update swagger (#8098)
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2026-01-18 22:10:06 +01:00
Ettore Di Giacinto
5f403b1631 chore: drop neutts for l4t (#8101)
Builds currently exhaust the CI, and there are better backends at this
point in time. We will probably deprecate it in the future.

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2026-01-18 21:55:56 +01:00
rampa3
897ad1729e chore(model gallery): add qwen3-coder-30b-a3b-instruct based on model request (#8082)
* chore(model gallery): add qwen3-coder-30b-a3b-instruct based on model request

Signed-off-by: rampa3 <68955305+rampa3@users.noreply.github.com>

* added missing model config import URL

Signed-off-by: rampa3 <68955305+rampa3@users.noreply.github.com>

---------

Signed-off-by: rampa3 <68955305+rampa3@users.noreply.github.com>
2026-01-18 09:23:07 +01:00
LocalAI [bot]
16a18a2e55 chore: ⬆️ Update leejet/stable-diffusion.cpp to 9565c7f6bd5fcff124c589147b2621244f2c4aa1 (#8086)
⬆️ Update leejet/stable-diffusion.cpp

Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2026-01-17 22:12:21 +01:00
Ettore Di Giacinto
3387bfaee0 feat(api): add support for open responses specification (#8063)
* feat: openresponses

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* Add ttl settings, fix tests

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* fix: register cors middleware by default

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* satisfy schema

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* Logitbias and logprobs

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* Add grammar

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* SSE compliance

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* tool JSON conversion

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* support background mode

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* swagger

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* drop code. This is handled in the handler

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* Small refactorings

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* background mode for MCP

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

---------

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2026-01-17 22:11:47 +01:00
LocalAI [bot]
1cd33047b4 chore: ⬆️ Update ggml-org/llama.cpp to 2fbde785bc106ae1c4102b0e82b9b41d9c466579 (#8087)
⬆️ Update ggml-org/llama.cpp

Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2026-01-17 21:10:18 +00:00
Ettore Di Giacinto
1de045311a chore(ui): add video generation link (#8079)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2026-01-17 09:49:47 +01:00
LocalAI [bot]
5fe9bf9f84 chore: ⬆️ Update ggml-org/whisper.cpp to f53dc74843e97f19f94a79241357f74ad5b691a6 (#8074)
⬆️ Update ggml-org/whisper.cpp

Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2026-01-17 08:32:53 +01:00
LocalAI [bot]
d4fd0c0609 chore: ⬆️ Update ggml-org/llama.cpp to 388ce822415f24c60fcf164a321455f1e008cafb (#8073)
⬆️ Update ggml-org/llama.cpp

Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2026-01-16 21:22:33 +00:00
Ettore Di Giacinto
d16722ee13 Revert "chore(deps): bump torch from 2.3.1+cxx11.abi to 2.8.0 in /backend/python/rerankers in the pip group across 1 directory" (#8072)
Revert "chore(deps): bump torch from 2.3.1+cxx11.abi to 2.8.0 in /backend/pyt…"

This reverts commit 1f10ab39a9.
2026-01-16 20:50:33 +01:00
dependabot[bot]
1f10ab39a9 chore(deps): bump torch from 2.3.1+cxx11.abi to 2.8.0 in /backend/python/rerankers in the pip group across 1 directory (#8066)
chore(deps): bump torch

Bumps the pip group with 1 update in the /backend/python/rerankers directory: [torch](https://github.com/pytorch/pytorch).


Updates `torch` from 2.3.1+cxx11.abi to 2.8.0
- [Release notes](https://github.com/pytorch/pytorch/releases)
- [Changelog](https://github.com/pytorch/pytorch/blob/main/RELEASE.md)
- [Commits](https://github.com/pytorch/pytorch/commits/v2.8.0)

---
updated-dependencies:
- dependency-name: torch
  dependency-version: 2.8.0
  dependency-type: direct:production
  dependency-group: pip
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-01-16 19:38:12 +00:00
Ettore Di Giacinto
4d36e393d1 fix(ci): use more beefy runner for expensive jobs (#8065)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2026-01-16 19:26:40 +01:00
243 changed files with 22755 additions and 4694 deletions

View File

@@ -91,10 +91,23 @@ jobs:
dockerfile: "./backend/Dockerfile.python"
context: "./"
ubuntu-version: '2404'
- build-type: ''
cuda-major-version: ""
cuda-minor-version: ""
platforms: 'linux/amd64'
tag-latest: 'auto'
tag-suffix: '-cpu-whisperx'
runs-on: 'ubuntu-latest'
base-image: "ubuntu:24.04"
skip-drivers: 'true'
backend: "whisperx"
dockerfile: "./backend/Dockerfile.python"
context: "./"
ubuntu-version: '2404'
# CUDA 12 builds
- build-type: 'cublas'
cuda-major-version: "12"
cuda-minor-version: "9"
cuda-minor-version: "8"
platforms: 'linux/amd64'
tag-latest: 'auto'
tag-suffix: '-gpu-nvidia-cuda-12-vibevoice'
@@ -107,7 +120,46 @@ jobs:
ubuntu-version: '2404'
- build-type: 'cublas'
cuda-major-version: "12"
cuda-minor-version: "9"
cuda-minor-version: "8"
platforms: 'linux/amd64'
tag-latest: 'auto'
tag-suffix: '-gpu-nvidia-cuda-12-qwen-asr'
runs-on: 'ubuntu-latest'
base-image: "ubuntu:24.04"
skip-drivers: 'false'
backend: "qwen-asr"
dockerfile: "./backend/Dockerfile.python"
context: "./"
ubuntu-version: '2404'
- build-type: 'cublas'
cuda-major-version: "12"
cuda-minor-version: "8"
platforms: 'linux/amd64'
tag-latest: 'auto'
tag-suffix: '-gpu-nvidia-cuda-12-qwen-tts'
runs-on: 'ubuntu-latest'
base-image: "ubuntu:24.04"
skip-drivers: 'false'
backend: "qwen-tts"
dockerfile: "./backend/Dockerfile.python"
context: "./"
ubuntu-version: '2404'
- build-type: 'cublas'
cuda-major-version: "12"
cuda-minor-version: "8"
platforms: 'linux/amd64'
tag-latest: 'auto'
tag-suffix: '-gpu-nvidia-cuda-12-voxcpm'
runs-on: 'ubuntu-latest'
base-image: "ubuntu:24.04"
skip-drivers: 'false'
backend: "voxcpm"
dockerfile: "./backend/Dockerfile.python"
context: "./"
ubuntu-version: '2404'
- build-type: 'cublas'
cuda-major-version: "12"
cuda-minor-version: "8"
platforms: 'linux/amd64'
tag-latest: 'auto'
tag-suffix: '-gpu-nvidia-cuda-12-pocket-tts'
@@ -133,11 +185,11 @@ jobs:
ubuntu-version: '2404'
- build-type: 'cublas'
cuda-major-version: "12"
cuda-minor-version: "9"
cuda-minor-version: "8"
platforms: 'linux/amd64'
tag-latest: 'auto'
tag-suffix: '-gpu-nvidia-cuda-12-llama-cpp'
runs-on: 'ubuntu-latest'
runs-on: 'bigger-runner'
base-image: "ubuntu:24.04"
skip-drivers: 'false'
backend: "llama-cpp"
@@ -146,7 +198,7 @@ jobs:
ubuntu-version: '2404'
- build-type: 'cublas'
cuda-major-version: "12"
cuda-minor-version: "9"
cuda-minor-version: "8"
platforms: 'linux/amd64'
tag-latest: 'auto'
tag-suffix: '-gpu-nvidia-cuda-12-vllm'
@@ -159,7 +211,20 @@ jobs:
ubuntu-version: '2404'
- build-type: 'cublas'
cuda-major-version: "12"
cuda-minor-version: "9"
cuda-minor-version: "8"
platforms: 'linux/amd64'
tag-latest: 'auto'
tag-suffix: '-gpu-nvidia-cuda-12-vllm-omni'
runs-on: 'arc-runner-set'
base-image: "ubuntu:24.04"
skip-drivers: 'false'
backend: "vllm-omni"
dockerfile: "./backend/Dockerfile.python"
context: "./"
ubuntu-version: '2404'
- build-type: 'cublas'
cuda-major-version: "12"
cuda-minor-version: "8"
platforms: 'linux/amd64'
tag-latest: 'auto'
tag-suffix: '-gpu-nvidia-cuda-12-transformers'
@@ -172,7 +237,7 @@ jobs:
ubuntu-version: '2404'
- build-type: 'cublas'
cuda-major-version: "12"
cuda-minor-version: "9"
cuda-minor-version: "8"
platforms: 'linux/amd64'
tag-latest: 'auto'
tag-suffix: '-gpu-nvidia-cuda-12-diffusers'
@@ -185,7 +250,7 @@ jobs:
ubuntu-version: '2404'
- build-type: 'cublas'
cuda-major-version: "12"
cuda-minor-version: "9"
cuda-minor-version: "8"
platforms: 'linux/amd64'
tag-latest: 'auto'
tag-suffix: '-gpu-nvidia-cuda-12-kokoro'
@@ -198,7 +263,7 @@ jobs:
ubuntu-version: '2404'
- build-type: 'cublas'
cuda-major-version: "12"
cuda-minor-version: "9"
cuda-minor-version: "8"
platforms: 'linux/amd64'
tag-latest: 'auto'
tag-suffix: '-gpu-nvidia-cuda-12-faster-whisper'
@@ -209,6 +274,19 @@ jobs:
dockerfile: "./backend/Dockerfile.python"
context: "./"
ubuntu-version: '2404'
- build-type: 'cublas'
cuda-major-version: "12"
cuda-minor-version: "8"
platforms: 'linux/amd64'
tag-latest: 'auto'
tag-suffix: '-gpu-nvidia-cuda-12-whisperx'
runs-on: 'ubuntu-latest'
base-image: "ubuntu:24.04"
skip-drivers: 'false'
backend: "whisperx"
dockerfile: "./backend/Dockerfile.python"
context: "./"
ubuntu-version: '2404'
- build-type: 'cublas'
cuda-major-version: "12"
cuda-minor-version: "9"
@@ -224,20 +302,7 @@ jobs:
ubuntu-version: '2404'
- build-type: 'cublas'
cuda-major-version: "12"
cuda-minor-version: "9"
platforms: 'linux/amd64'
tag-latest: 'auto'
tag-suffix: '-gpu-nvidia-cuda-12-bark'
runs-on: 'ubuntu-latest'
base-image: "ubuntu:24.04"
skip-drivers: 'false'
backend: "bark"
dockerfile: "./backend/Dockerfile.python"
context: "./"
ubuntu-version: '2404'
- build-type: 'cublas'
cuda-major-version: "12"
cuda-minor-version: "9"
cuda-minor-version: "8"
platforms: 'linux/amd64'
tag-latest: 'auto'
tag-suffix: '-gpu-nvidia-cuda-12-chatterbox'
@@ -250,7 +315,7 @@ jobs:
ubuntu-version: '2404'
- build-type: 'cublas'
cuda-major-version: "12"
cuda-minor-version: "9"
cuda-minor-version: "8"
platforms: 'linux/amd64'
tag-latest: 'auto'
tag-suffix: '-gpu-nvidia-cuda-12-moonshine'
@@ -263,7 +328,7 @@ jobs:
ubuntu-version: '2404'
- build-type: 'cublas'
cuda-major-version: "12"
cuda-minor-version: "9"
cuda-minor-version: "8"
platforms: 'linux/amd64'
tag-latest: 'auto'
tag-suffix: '-gpu-nvidia-cuda-12-stablediffusion-ggml'
@@ -276,7 +341,7 @@ jobs:
ubuntu-version: '2404'
- build-type: 'cublas'
cuda-major-version: "12"
cuda-minor-version: "9"
cuda-minor-version: "8"
platforms: 'linux/amd64'
tag-latest: 'auto'
tag-suffix: '-gpu-nvidia-cuda-12-whisper'
@@ -289,7 +354,7 @@ jobs:
ubuntu-version: '2404'
- build-type: 'cublas'
cuda-major-version: "12"
cuda-minor-version: "9"
cuda-minor-version: "8"
platforms: 'linux/amd64'
tag-latest: 'auto'
tag-suffix: '-gpu-nvidia-cuda-12-rfdetr'
@@ -302,20 +367,7 @@ jobs:
ubuntu-version: '2404'
- build-type: 'cublas'
cuda-major-version: "12"
cuda-minor-version: "9"
platforms: 'linux/amd64'
tag-latest: 'auto'
tag-suffix: '-gpu-nvidia-cuda-12-exllama2'
runs-on: 'ubuntu-latest'
base-image: "ubuntu:24.04"
skip-drivers: 'false'
backend: "exllama2"
dockerfile: "./backend/Dockerfile.python"
context: "./"
ubuntu-version: '2404'
- build-type: 'cublas'
cuda-major-version: "12"
cuda-minor-version: "9"
cuda-minor-version: "8"
platforms: 'linux/amd64'
tag-latest: 'auto'
tag-suffix: '-gpu-nvidia-cuda-12-neutts'
@@ -353,6 +405,45 @@ jobs:
dockerfile: "./backend/Dockerfile.python"
context: "./"
ubuntu-version: '2404'
- build-type: 'cublas'
cuda-major-version: "13"
cuda-minor-version: "0"
platforms: 'linux/amd64'
tag-latest: 'auto'
tag-suffix: '-gpu-nvidia-cuda-13-qwen-asr'
runs-on: 'ubuntu-latest'
base-image: "ubuntu:24.04"
skip-drivers: 'false'
backend: "qwen-asr"
dockerfile: "./backend/Dockerfile.python"
context: "./"
ubuntu-version: '2404'
- build-type: 'cublas'
cuda-major-version: "13"
cuda-minor-version: "0"
platforms: 'linux/amd64'
tag-latest: 'auto'
tag-suffix: '-gpu-nvidia-cuda-13-qwen-tts'
runs-on: 'ubuntu-latest'
base-image: "ubuntu:24.04"
skip-drivers: 'false'
backend: "qwen-tts"
dockerfile: "./backend/Dockerfile.python"
context: "./"
ubuntu-version: '2404'
- build-type: 'cublas'
cuda-major-version: "13"
cuda-minor-version: "0"
platforms: 'linux/amd64'
tag-latest: 'auto'
tag-suffix: '-gpu-nvidia-cuda-13-voxcpm'
runs-on: 'ubuntu-latest'
base-image: "ubuntu:24.04"
skip-drivers: 'false'
backend: "voxcpm"
dockerfile: "./backend/Dockerfile.python"
context: "./"
ubuntu-version: '2404'
- build-type: 'cublas'
cuda-major-version: "13"
cuda-minor-version: "0"
@@ -431,6 +522,32 @@ jobs:
backend: "vibevoice"
dockerfile: "./backend/Dockerfile.python"
context: "./"
- build-type: 'l4t'
cuda-major-version: "13"
cuda-minor-version: "0"
platforms: 'linux/arm64'
tag-latest: 'auto'
tag-suffix: '-nvidia-l4t-cuda-13-arm64-qwen-asr'
runs-on: 'ubuntu-24.04-arm'
base-image: "ubuntu:24.04"
skip-drivers: 'false'
ubuntu-version: '2404'
backend: "qwen-asr"
dockerfile: "./backend/Dockerfile.python"
context: "./"
- build-type: 'l4t'
cuda-major-version: "13"
cuda-minor-version: "0"
platforms: 'linux/arm64'
tag-latest: 'auto'
tag-suffix: '-nvidia-l4t-cuda-13-arm64-qwen-tts'
runs-on: 'ubuntu-24.04-arm'
base-image: "ubuntu:24.04"
skip-drivers: 'false'
ubuntu-version: '2404'
backend: "qwen-tts"
dockerfile: "./backend/Dockerfile.python"
context: "./"
- build-type: 'l4t'
cuda-major-version: "13"
cuda-minor-version: "0"
@@ -488,11 +605,11 @@ jobs:
cuda-minor-version: "0"
platforms: 'linux/amd64'
tag-latest: 'auto'
tag-suffix: '-gpu-nvidia-cuda-13-bark'
tag-suffix: '-gpu-nvidia-cuda-13-whisperx'
runs-on: 'ubuntu-latest'
base-image: "ubuntu:24.04"
skip-drivers: 'false'
backend: "bark"
backend: "whisperx"
dockerfile: "./backend/Dockerfile.python"
context: "./"
ubuntu-version: '2404'
@@ -627,6 +744,19 @@ jobs:
dockerfile: "./backend/Dockerfile.python"
context: "./"
ubuntu-version: '2404'
- build-type: 'hipblas'
cuda-major-version: ""
cuda-minor-version: ""
platforms: 'linux/amd64'
tag-latest: 'auto'
tag-suffix: '-gpu-rocm-hipblas-vllm-omni'
runs-on: 'arc-runner-set'
base-image: "rocm/dev-ubuntu-24.04:6.4.4"
skip-drivers: 'false'
backend: "vllm-omni"
dockerfile: "./backend/Dockerfile.python"
context: "./"
ubuntu-version: '2404'
- build-type: 'hipblas'
cuda-major-version: ""
cuda-minor-version: ""
@@ -680,6 +810,45 @@ jobs:
dockerfile: "./backend/Dockerfile.python"
context: "./"
ubuntu-version: '2404'
- build-type: 'hipblas'
cuda-major-version: ""
cuda-minor-version: ""
platforms: 'linux/amd64'
tag-latest: 'auto'
tag-suffix: '-gpu-rocm-hipblas-qwen-asr'
runs-on: 'arc-runner-set'
base-image: "rocm/dev-ubuntu-24.04:6.4.4"
skip-drivers: 'false'
backend: "qwen-asr"
dockerfile: "./backend/Dockerfile.python"
context: "./"
ubuntu-version: '2404'
- build-type: 'hipblas'
cuda-major-version: ""
cuda-minor-version: ""
platforms: 'linux/amd64'
tag-latest: 'auto'
tag-suffix: '-gpu-rocm-hipblas-qwen-tts'
runs-on: 'arc-runner-set'
base-image: "rocm/dev-ubuntu-24.04:6.4.4"
skip-drivers: 'false'
backend: "qwen-tts"
dockerfile: "./backend/Dockerfile.python"
context: "./"
ubuntu-version: '2404'
- build-type: 'hipblas'
cuda-major-version: ""
cuda-minor-version: ""
platforms: 'linux/amd64'
tag-latest: 'auto'
tag-suffix: '-gpu-rocm-hipblas-voxcpm'
runs-on: 'arc-runner-set'
base-image: "rocm/dev-ubuntu-24.04:6.4.4"
skip-drivers: 'false'
backend: "voxcpm"
dockerfile: "./backend/Dockerfile.python"
context: "./"
ubuntu-version: '2404'
- build-type: 'hipblas'
cuda-major-version: ""
cuda-minor-version: ""
@@ -699,7 +868,7 @@ jobs:
platforms: 'linux/amd64'
tag-latest: 'auto'
tag-suffix: '-gpu-rocm-hipblas-faster-whisper'
runs-on: 'ubuntu-latest'
runs-on: 'bigger-runner'
base-image: "rocm/dev-ubuntu-24.04:6.4.4"
skip-drivers: 'false'
backend: "faster-whisper"
@@ -711,11 +880,11 @@ jobs:
cuda-minor-version: ""
platforms: 'linux/amd64'
tag-latest: 'auto'
tag-suffix: '-gpu-rocm-hipblas-coqui'
runs-on: 'ubuntu-latest'
tag-suffix: '-gpu-rocm-hipblas-whisperx'
runs-on: 'bigger-runner'
base-image: "rocm/dev-ubuntu-24.04:6.4.4"
skip-drivers: 'false'
backend: "coqui"
backend: "whisperx"
dockerfile: "./backend/Dockerfile.python"
context: "./"
ubuntu-version: '2404'
@@ -724,11 +893,11 @@ jobs:
cuda-minor-version: ""
platforms: 'linux/amd64'
tag-latest: 'auto'
tag-suffix: '-gpu-rocm-hipblas-bark'
runs-on: 'arc-runner-set'
tag-suffix: '-gpu-rocm-hipblas-coqui'
runs-on: 'bigger-runner'
base-image: "rocm/dev-ubuntu-24.04:6.4.4"
skip-drivers: 'false'
backend: "bark"
backend: "coqui"
dockerfile: "./backend/Dockerfile.python"
context: "./"
ubuntu-version: '2404'
@@ -824,6 +993,32 @@ jobs:
dockerfile: "./backend/Dockerfile.python"
context: "./"
ubuntu-version: '2204'
- build-type: 'l4t'
cuda-major-version: "12"
cuda-minor-version: "0"
platforms: 'linux/arm64'
tag-latest: 'auto'
tag-suffix: '-nvidia-l4t-qwen-asr'
runs-on: 'ubuntu-24.04-arm'
base-image: "nvcr.io/nvidia/l4t-jetpack:r36.4.0"
skip-drivers: 'true'
backend: "qwen-asr"
dockerfile: "./backend/Dockerfile.python"
context: "./"
ubuntu-version: '2204'
- build-type: 'l4t'
cuda-major-version: "12"
cuda-minor-version: "0"
platforms: 'linux/arm64'
tag-latest: 'auto'
tag-suffix: '-nvidia-l4t-qwen-tts'
runs-on: 'ubuntu-24.04-arm'
base-image: "nvcr.io/nvidia/l4t-jetpack:r36.4.0"
skip-drivers: 'true'
backend: "qwen-tts"
dockerfile: "./backend/Dockerfile.python"
context: "./"
ubuntu-version: '2204'
- build-type: 'l4t'
cuda-major-version: "12"
cuda-minor-version: "0"
@@ -890,6 +1085,45 @@ jobs:
dockerfile: "./backend/Dockerfile.python"
context: "./"
ubuntu-version: '2404'
- build-type: 'intel'
cuda-major-version: ""
cuda-minor-version: ""
platforms: 'linux/amd64'
tag-latest: 'auto'
tag-suffix: '-gpu-intel-qwen-asr'
runs-on: 'arc-runner-set'
base-image: "intel/oneapi-basekit:2025.3.0-0-devel-ubuntu24.04"
skip-drivers: 'false'
backend: "qwen-asr"
dockerfile: "./backend/Dockerfile.python"
context: "./"
ubuntu-version: '2404'
- build-type: 'intel'
cuda-major-version: ""
cuda-minor-version: ""
platforms: 'linux/amd64'
tag-latest: 'auto'
tag-suffix: '-gpu-intel-qwen-tts'
runs-on: 'arc-runner-set'
base-image: "intel/oneapi-basekit:2025.3.0-0-devel-ubuntu24.04"
skip-drivers: 'false'
backend: "qwen-tts"
dockerfile: "./backend/Dockerfile.python"
context: "./"
ubuntu-version: '2404'
- build-type: 'intel'
cuda-major-version: ""
cuda-minor-version: ""
platforms: 'linux/amd64'
tag-latest: 'auto'
tag-suffix: '-gpu-intel-voxcpm'
runs-on: 'arc-runner-set'
base-image: "intel/oneapi-basekit:2025.3.0-0-devel-ubuntu24.04"
skip-drivers: 'false'
backend: "voxcpm"
dockerfile: "./backend/Dockerfile.python"
context: "./"
ubuntu-version: '2404'
- build-type: 'intel'
cuda-major-version: ""
cuda-minor-version: ""
@@ -916,19 +1150,6 @@ jobs:
dockerfile: "./backend/Dockerfile.python"
context: "./"
ubuntu-version: '2404'
- build-type: 'intel'
cuda-major-version: ""
cuda-minor-version: ""
platforms: 'linux/amd64'
tag-latest: 'auto'
tag-suffix: '-gpu-intel-bark'
runs-on: 'ubuntu-latest'
base-image: "intel/oneapi-basekit:2025.3.0-0-devel-ubuntu24.04"
skip-drivers: 'false'
backend: "bark"
dockerfile: "./backend/Dockerfile.python"
context: "./"
ubuntu-version: '2404'
# piper
- build-type: ''
cuda-major-version: ""
@@ -943,27 +1164,13 @@ jobs:
dockerfile: "./backend/Dockerfile.golang"
context: "./"
ubuntu-version: '2404'
# bark-cpp
- build-type: ''
cuda-major-version: ""
cuda-minor-version: ""
platforms: 'linux/amd64'
tag-latest: 'auto'
tag-suffix: '-bark-cpp'
runs-on: 'ubuntu-latest'
base-image: "ubuntu:24.04"
skip-drivers: 'false'
backend: "bark-cpp"
dockerfile: "./backend/Dockerfile.golang"
context: "./"
ubuntu-version: '2404'
- build-type: ''
cuda-major-version: ""
cuda-minor-version: ""
platforms: 'linux/amd64,linux/arm64'
tag-latest: 'auto'
tag-suffix: '-cpu-llama-cpp'
runs-on: 'ubuntu-latest'
runs-on: 'bigger-runner'
base-image: "ubuntu:24.04"
skip-drivers: 'false'
backend: "llama-cpp"
@@ -989,7 +1196,7 @@ jobs:
platforms: 'linux/amd64,linux/arm64'
tag-latest: 'auto'
tag-suffix: '-gpu-vulkan-llama-cpp'
runs-on: 'ubuntu-latest'
runs-on: 'bigger-runner'
base-image: "ubuntu:24.04"
skip-drivers: 'false'
backend: "llama-cpp"
@@ -1223,46 +1430,6 @@ jobs:
dockerfile: "./backend/Dockerfile.python"
context: "./"
ubuntu-version: '2204'
# exllama2
- build-type: ''
cuda-major-version: ""
cuda-minor-version: ""
platforms: 'linux/amd64'
tag-latest: 'auto'
tag-suffix: '-cpu-exllama2'
runs-on: 'ubuntu-latest'
base-image: "ubuntu:24.04"
skip-drivers: 'false'
backend: "exllama2"
dockerfile: "./backend/Dockerfile.python"
context: "./"
ubuntu-version: '2404'
- build-type: 'intel'
cuda-major-version: ""
cuda-minor-version: ""
platforms: 'linux/amd64'
tag-latest: 'auto'
tag-suffix: '-gpu-intel-exllama2'
runs-on: 'ubuntu-latest'
base-image: "intel/oneapi-basekit:2025.3.0-0-devel-ubuntu24.04"
skip-drivers: 'false'
backend: "exllama2"
dockerfile: "./backend/Dockerfile.python"
context: "./"
ubuntu-version: '2404'
- build-type: 'hipblas'
cuda-major-version: ""
cuda-minor-version: ""
platforms: 'linux/amd64'
skip-drivers: 'true'
tag-latest: 'auto'
tag-suffix: '-gpu-hipblas-exllama2'
base-image: "rocm/dev-ubuntu-24.04:6.4.4"
runs-on: 'ubuntu-latest'
backend: "exllama2"
dockerfile: "./backend/Dockerfile.python"
context: "./"
ubuntu-version: '2404'
- build-type: 'l4t'
cuda-major-version: "12"
cuda-minor-version: "0"
@@ -1330,19 +1497,6 @@ jobs:
dockerfile: "./backend/Dockerfile.python"
context: "./"
ubuntu-version: '2404'
- build-type: 'l4t'
cuda-major-version: "12"
cuda-minor-version: "0"
platforms: 'linux/arm64'
skip-drivers: 'true'
tag-latest: 'auto'
tag-suffix: '-nvidia-l4t-arm64-neutts'
base-image: "nvcr.io/nvidia/l4t-jetpack:r36.4.0"
runs-on: 'ubuntu-24.04-arm'
backend: "neutts"
dockerfile: "./backend/Dockerfile.python"
context: "./"
ubuntu-version: '2204'
- build-type: ''
cuda-major-version: ""
cuda-minor-version: ""
@@ -1356,6 +1510,45 @@ jobs:
dockerfile: "./backend/Dockerfile.python"
context: "./"
ubuntu-version: '2404'
- build-type: ''
cuda-major-version: ""
cuda-minor-version: ""
platforms: 'linux/amd64,linux/arm64'
tag-latest: 'auto'
tag-suffix: '-cpu-qwen-asr'
runs-on: 'ubuntu-latest'
base-image: "ubuntu:24.04"
skip-drivers: 'false'
backend: "qwen-asr"
dockerfile: "./backend/Dockerfile.python"
context: "./"
ubuntu-version: '2404'
- build-type: ''
cuda-major-version: ""
cuda-minor-version: ""
platforms: 'linux/amd64,linux/arm64'
tag-latest: 'auto'
tag-suffix: '-cpu-qwen-tts'
runs-on: 'ubuntu-latest'
base-image: "ubuntu:24.04"
skip-drivers: 'false'
backend: "qwen-tts"
dockerfile: "./backend/Dockerfile.python"
context: "./"
ubuntu-version: '2404'
- build-type: ''
cuda-major-version: ""
cuda-minor-version: ""
platforms: 'linux/amd64'
tag-latest: 'auto'
tag-suffix: '-cpu-voxcpm'
runs-on: 'ubuntu-latest'
base-image: "ubuntu:24.04"
skip-drivers: 'false'
backend: "voxcpm"
dockerfile: "./backend/Dockerfile.python"
context: "./"
ubuntu-version: '2404'
- build-type: ''
cuda-major-version: ""
cuda-minor-version: ""

View File

@@ -37,7 +37,7 @@
include:
- build-type: 'cublas'
cuda-major-version: "12"
cuda-minor-version: "9"
cuda-minor-version: "8"
platforms: 'linux/amd64'
tag-latest: 'false'
tag-suffix: '-gpu-nvidia-cuda-12'

View File

@@ -88,7 +88,7 @@
ubuntu-codename: 'noble'
- build-type: 'cublas'
cuda-major-version: "12"
cuda-minor-version: "9"
cuda-minor-version: "8"
platforms: 'linux/amd64'
tag-latest: 'auto'
tag-suffix: '-gpu-nvidia-cuda-12'

View File

@@ -238,7 +238,7 @@ jobs:
- name: Dependencies
run: |
sudo apt-get update
sudo apt-get install build-essential ffmpeg
sudo apt-get install -y build-essential ffmpeg
sudo apt-get install -y ca-certificates cmake curl patch espeak espeak-ng python3-pip
# Install UV
curl -LsSf https://astral.sh/uv/install.sh | sh
@@ -257,7 +257,7 @@ jobs:
- name: Dependencies
run: |
sudo apt-get update
sudo apt-get install build-essential ffmpeg
sudo apt-get install -y build-essential ffmpeg
sudo apt-get install -y ca-certificates cmake curl patch python3-pip
# Install UV
curl -LsSf https://astral.sh/uv/install.sh | sh
@@ -276,7 +276,7 @@ jobs:
- name: Dependencies
run: |
sudo apt-get update
sudo apt-get install build-essential ffmpeg
sudo apt-get install -y build-essential ffmpeg
sudo apt-get install -y ca-certificates cmake curl patch python3-pip
# Install UV
curl -LsSf https://astral.sh/uv/install.sh | sh
@@ -284,4 +284,61 @@ jobs:
- name: Test pocket-tts
run: |
make --jobs=5 --output-sync=target -C backend/python/pocket-tts
make --jobs=5 --output-sync=target -C backend/python/pocket-tts test
make --jobs=5 --output-sync=target -C backend/python/pocket-tts test
tests-qwen-tts:
runs-on: ubuntu-latest
steps:
- name: Clone
uses: actions/checkout@v6
with:
submodules: true
- name: Dependencies
run: |
sudo apt-get update
sudo apt-get install -y build-essential ffmpeg
sudo apt-get install -y ca-certificates cmake curl patch python3-pip
# Install UV
curl -LsSf https://astral.sh/uv/install.sh | sh
pip install --user --no-cache-dir grpcio-tools==1.64.1
- name: Test qwen-tts
run: |
make --jobs=5 --output-sync=target -C backend/python/qwen-tts
make --jobs=5 --output-sync=target -C backend/python/qwen-tts test
tests-qwen-asr:
runs-on: ubuntu-latest
steps:
- name: Clone
uses: actions/checkout@v6
with:
submodules: true
- name: Dependencies
run: |
sudo apt-get update
sudo apt-get install -y build-essential ffmpeg sox
sudo apt-get install -y ca-certificates cmake curl patch python3-pip
# Install UV
curl -LsSf https://astral.sh/uv/install.sh | sh
pip install --user --no-cache-dir grpcio-tools==1.64.1
- name: Test qwen-asr
run: |
make --jobs=5 --output-sync=target -C backend/python/qwen-asr
make --jobs=5 --output-sync=target -C backend/python/qwen-asr test
tests-voxcpm:
runs-on: ubuntu-latest
steps:
- name: Clone
uses: actions/checkout@v6
with:
submodules: true
- name: Dependencies
run: |
sudo apt-get update
sudo apt-get install build-essential ffmpeg
sudo apt-get install -y ca-certificates cmake curl patch python3-pip
# Install UV
curl -LsSf https://astral.sh/uv/install.sh | sh
pip install --user --no-cache-dir grpcio-tools==1.64.1
- name: Test voxcpm
run: |
make --jobs=5 --output-sync=target -C backend/python/voxcpm
make --jobs=5 --output-sync=target -C backend/python/voxcpm test

.github/workflows/tests-e2e.yml (new file)

@@ -0,0 +1,56 @@
---
name: 'E2E Backend Tests'
on:
pull_request:
push:
branches:
- master
tags:
- '*'
concurrency:
group: ci-tests-e2e-backend-${{ github.head_ref || github.ref }}-${{ github.repository }}
cancel-in-progress: true
jobs:
tests-e2e-backend:
runs-on: ubuntu-latest
strategy:
matrix:
go-version: ['1.25.x']
steps:
- name: Clone
uses: actions/checkout@v6
with:
submodules: true
- name: Setup Go ${{ matrix.go-version }}
uses: actions/setup-go@v5
with:
go-version: ${{ matrix.go-version }}
cache: false
- name: Display Go version
run: go version
- name: Proto Dependencies
run: |
# Install protoc
curl -L -s https://github.com/protocolbuffers/protobuf/releases/download/v26.1/protoc-26.1-linux-x86_64.zip -o protoc.zip && \
unzip -j -d /usr/local/bin protoc.zip bin/protoc && \
rm protoc.zip
go install google.golang.org/protobuf/cmd/protoc-gen-go@v1.34.2
go install google.golang.org/grpc/cmd/protoc-gen-go-grpc@1958fcbe2ca8bd93af633f11e97d44e567e945af
PATH="$PATH:$HOME/go/bin" make protogen-go
- name: Dependencies
run: |
sudo apt-get update
sudo apt-get install -y build-essential
- name: Test Backend E2E
run: |
PATH="$PATH:$HOME/go/bin" make build-mock-backend test-e2e
- name: Setup tmate session if tests fail
if: ${{ failure() }}
uses: mxschmitt/action-tmate@v3.23
with:
detached: true
connect-timeout-seconds: 180
limit-access-to-actor: true

.gitignore

@@ -36,6 +36,8 @@ LocalAI
models/*
test-models/
test-dir/
tests/e2e-aio/backends
tests/e2e-aio/models
release/


@@ -4,13 +4,13 @@ Building and testing the project depends on the components involved and the plat
## Building a specified backend
Let's say the user wants to build a particular backend for a given platform. For example let's say they want to build bark for ROCM/hipblas
Let's say the user wants to build a particular backend for a given platform. For example let's say they want to build coqui for ROCM/hipblas
- The Makefile has targets like `docker-build-bark` created with `generate-docker-build-target` at the time of writing. Recently added backends may require a new target.
- The Makefile has targets like `docker-build-coqui` created with `generate-docker-build-target` at the time of writing. Recently added backends may require a new target.
- At a minimum we need to set the BUILD_TYPE, BASE_IMAGE build-args
- Use .github/workflows/backend.yml as a reference; it lists the needed args in the `include` job strategy matrix
- l4t and cublas also require the CUDA major and minor version
- You can pretty print a command like `DOCKER_MAKEFLAGS=-j$(nproc --ignore=1) BUILD_TYPE=hipblas BASE_IMAGE=rocm/dev-ubuntu-24.04:6.4.4 make docker-build-bark`
- You can pretty print a command like `DOCKER_MAKEFLAGS=-j$(nproc --ignore=1) BUILD_TYPE=hipblas BASE_IMAGE=rocm/dev-ubuntu-24.04:6.4.4 make docker-build-coqui`
- Unless the user explicitly asks you to run the command, just print it, because not all agent frontends handle long-running jobs well and the output may overflow your context
- The user may say they want to build AMD or ROCM instead of hipblas, or Intel instead of SYCL, or NVIDIA instead of l4t or cublas. Ask for confirmation if there is ambiguity.
- Sometimes the user may need extra parameters to be added to `docker build` (e.g. `--platform` for cross-platform builds or `--progress` to view the full logs), in which case you can generate the `docker build` command directly.
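For instance, a direct invocation for the coqui/hipblas example above might look like the sketch below. The build-arg names (BUILD_TYPE, BASE_IMAGE, BACKEND), the Dockerfile path, and the image tag are assumptions drawn from the backend.yml matrix and the Makefile helpers in this changeset; verify them against `generate-docker-build-target` before relying on them.
```bash
# Sketch only: a hand-rolled docker build mirroring the Makefile target,
# useful when extra flags such as --platform or --progress are needed.
# Build-arg names are assumed from the workflow matrix in this changeset.
docker build \
  --platform linux/amd64 \
  --progress=plain \
  --build-arg BUILD_TYPE=hipblas \
  --build-arg BASE_IMAGE=rocm/dev-ubuntu-24.04:6.4.4 \
  --build-arg BACKEND=coqui \
  -f backend/Dockerfile.python \
  -t local-ai-backend:coqui \
  .
```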
@@ -95,7 +95,7 @@ test-extra: prepare-test-extra
Add a backend definition variable in the backend definitions section (around line 428-457). The format depends on the backend type:
**For Python backends with root context** (like `faster-whisper`, `bark`):
**For Python backends with root context** (like `faster-whisper`, `coqui`):
```makefile
BACKEND_<BACKEND_NAME> = <backend-name>|python|.|false|true
```
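The whisperx backend added in this changeset is a concrete instance of that pattern; both lines below appear verbatim in the Makefile diff further down, and the field meanings follow the `docker-build-backend` usage comment there:
```makefile
# whisperx: Python backend, root build context ("."), no extra progress flag,
# and it needs the BACKEND build-arg (last field set to true).
BACKEND_WHISPERX = whisperx|python|.|false|true
$(eval $(call generate-docker-build-target,$(BACKEND_WHISPERX)))
```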
@@ -280,3 +280,11 @@ Always check `llama.cpp` for new model configuration options that should be supp
- `llama.cpp/common/chat-parser.cpp` - Format presets and model-specific handlers
- `llama.cpp/common/chat.h` - Format enums and parameter structures
- `llama.cpp/tools/server/server-context.cpp` - Server configuration options
# Documentation
The project documentation is located in `docs/content`. When adding new features or changing existing functionality, it is crucial to update the documentation to reflect these changes. This helps users understand how to use the new capabilities and ensures the documentation stays relevant.
- **Feature Documentation**: If you add a new feature (like a new backend or API endpoint), create a new markdown file in `docs/content/features/` explaining what it is, how to configure it, and how to use it.
- **Configuration**: If you modify configuration options, update the relevant sections in `docs/content/`.
- **Examples**: providing concrete examples (like YAML configuration blocks) is highly encouraged to help users get started quickly.


@@ -10,7 +10,7 @@ ENV DEBIAN_FRONTEND=noninteractive
RUN apt-get update && \
apt-get install -y --no-install-recommends \
ca-certificates curl wget espeak-ng libgomp1 \
ffmpeg libopenblas0 libopenblas-dev && \
ffmpeg libopenblas0 libopenblas-dev sox && \
apt-get clean && \
rm -rf /var/lib/apt/lists/*


@@ -1,5 +1,5 @@
# Disable parallel execution for backend builds
.NOTPARALLEL: backends/diffusers backends/llama-cpp backends/piper backends/stablediffusion-ggml backends/whisper backends/faster-whisper backends/silero-vad backends/local-store backends/huggingface backends/rfdetr backends/kitten-tts backends/kokoro backends/chatterbox backends/llama-cpp-darwin backends/neutts build-darwin-python-backend build-darwin-go-backend backends/mlx backends/diffuser-darwin backends/mlx-vlm backends/mlx-audio backends/stablediffusion-ggml-darwin backends/vllm backends/moonshine backends/pocket-tts
.NOTPARALLEL: backends/diffusers backends/llama-cpp backends/piper backends/stablediffusion-ggml backends/whisper backends/faster-whisper backends/silero-vad backends/local-store backends/huggingface backends/rfdetr backends/kitten-tts backends/kokoro backends/chatterbox backends/llama-cpp-darwin backends/neutts build-darwin-python-backend build-darwin-go-backend backends/mlx backends/diffuser-darwin backends/mlx-vlm backends/mlx-audio backends/stablediffusion-ggml-darwin backends/vllm backends/vllm-omni backends/moonshine backends/pocket-tts backends/qwen-tts backends/qwen-asr backends/voxcpm backends/whisperx
GOCMD=go
GOTEST=$(GOCMD) test
@@ -7,16 +7,14 @@ GOVET=$(GOCMD) vet
BINARY_NAME=local-ai
LAUNCHER_BINARY_NAME=local-ai-launcher
CUDA_MAJOR_VERSION?=13
CUDA_MINOR_VERSION?=0
UBUNTU_VERSION?=2404
UBUNTU_CODENAME?=noble
GORELEASER?=
export BUILD_TYPE?=
export CUDA_MAJOR_VERSION?=12
export CUDA_MINOR_VERSION?=9
export CUDA_MAJOR_VERSION?=13
export CUDA_MINOR_VERSION?=0
GO_TAGS?=
BUILD_ID?=
@@ -191,9 +189,6 @@ run-e2e-aio: protogen-go
########################################################
prepare-e2e:
mkdir -p $(TEST_DIR)
cp -rfv $(abspath ./tests/e2e-fixtures)/gpu.yaml $(TEST_DIR)/gpu.yaml
test -e $(TEST_DIR)/ggllm-test-model.bin || wget -q https://huggingface.co/TheBloke/CodeLlama-7B-Instruct-GGUF/resolve/main/codellama-7b-instruct.Q2_K.gguf -O $(TEST_DIR)/ggllm-test-model.bin
docker build \
--build-arg IMAGE_TYPE=core \
--build-arg BUILD_TYPE=$(BUILD_TYPE) \
@@ -207,14 +202,16 @@ prepare-e2e:
-t localai-tests .
run-e2e-image:
ls -liah $(abspath ./tests/e2e-fixtures)
docker run -p 5390:8080 -e MODELS_PATH=/models -e THREADS=1 -e DEBUG=true -d --rm -v $(TEST_DIR):/models --gpus all --name e2e-tests-$(RANDOM) localai-tests
docker run -p 5390:8080 -e MODELS_PATH=/models -e THREADS=1 -e DEBUG=true -d --rm -v $(TEST_DIR):/models --name e2e-tests-$(RANDOM) localai-tests
test-e2e:
test-e2e: build-mock-backend prepare-e2e run-e2e-image
@echo 'Running e2e tests'
BUILD_TYPE=$(BUILD_TYPE) \
LOCALAI_API=http://$(E2E_BRIDGE_IP):5390/v1 \
LOCALAI_API=http://$(E2E_BRIDGE_IP):5390 \
$(GOCMD) run github.com/onsi/ginkgo/v2/ginkgo --flake-attempts $(TEST_FLAKES) -v -r ./tests/e2e
$(MAKE) clean-mock-backend
$(MAKE) teardown-e2e
docker rmi localai-tests
teardown-e2e:
rm -rf $(TEST_DIR) || true
@@ -314,18 +311,28 @@ prepare-test-extra: protogen-python
$(MAKE) -C backend/python/diffusers
$(MAKE) -C backend/python/chatterbox
$(MAKE) -C backend/python/vllm
$(MAKE) -C backend/python/vllm-omni
$(MAKE) -C backend/python/vibevoice
$(MAKE) -C backend/python/moonshine
$(MAKE) -C backend/python/pocket-tts
$(MAKE) -C backend/python/qwen-tts
$(MAKE) -C backend/python/qwen-asr
$(MAKE) -C backend/python/voxcpm
$(MAKE) -C backend/python/whisperx
test-extra: prepare-test-extra
$(MAKE) -C backend/python/transformers test
$(MAKE) -C backend/python/diffusers test
$(MAKE) -C backend/python/chatterbox test
$(MAKE) -C backend/python/vllm test
$(MAKE) -C backend/python/vllm-omni test
$(MAKE) -C backend/python/vibevoice test
$(MAKE) -C backend/python/moonshine test
$(MAKE) -C backend/python/pocket-tts test
$(MAKE) -C backend/python/qwen-tts test
$(MAKE) -C backend/python/qwen-asr test
$(MAKE) -C backend/python/voxcpm test
$(MAKE) -C backend/python/whisperx test
DOCKER_IMAGE?=local-ai
DOCKER_AIO_IMAGE?=local-ai-aio
@@ -434,7 +441,6 @@ backend-images:
BACKEND_LLAMA_CPP = llama-cpp|llama-cpp|.|false|false
# Golang backends
BACKEND_BARK_CPP = bark-cpp|golang|.|false|true
BACKEND_PIPER = piper|golang|.|false|true
BACKEND_LOCAL_STORE = local-store|golang|.|false|true
BACKEND_HUGGINGFACE = huggingface|golang|.|false|true
@@ -447,18 +453,21 @@ BACKEND_RERANKERS = rerankers|python|.|false|true
BACKEND_TRANSFORMERS = transformers|python|.|false|true
BACKEND_FASTER_WHISPER = faster-whisper|python|.|false|true
BACKEND_COQUI = coqui|python|.|false|true
BACKEND_BARK = bark|python|.|false|true
BACKEND_EXLLAMA2 = exllama2|python|.|false|true
BACKEND_RFDETR = rfdetr|python|.|false|true
BACKEND_KITTEN_TTS = kitten-tts|python|.|false|true
BACKEND_NEUTTS = neutts|python|.|false|true
BACKEND_KOKORO = kokoro|python|.|false|true
BACKEND_VLLM = vllm|python|.|false|true
BACKEND_VLLM_OMNI = vllm-omni|python|.|false|true
BACKEND_DIFFUSERS = diffusers|python|.|--progress=plain|true
BACKEND_CHATTERBOX = chatterbox|python|.|false|true
BACKEND_VIBEVOICE = vibevoice|python|.|--progress=plain|true
BACKEND_MOONSHINE = moonshine|python|.|false|true
BACKEND_POCKET_TTS = pocket-tts|python|.|false|true
BACKEND_QWEN_TTS = qwen-tts|python|.|false|true
BACKEND_QWEN_ASR = qwen-asr|python|.|false|true
BACKEND_VOXCPM = voxcpm|python|.|false|true
BACKEND_WHISPERX = whisperx|python|.|false|true
# Helper function to build docker image for a backend
# Usage: $(call docker-build-backend,BACKEND_NAME,DOCKERFILE_TYPE,BUILD_CONTEXT,PROGRESS_FLAG,NEEDS_BACKEND_ARG)
@@ -482,7 +491,6 @@ endef
# Generate all docker-build targets
$(eval $(call generate-docker-build-target,$(BACKEND_LLAMA_CPP)))
$(eval $(call generate-docker-build-target,$(BACKEND_BARK_CPP)))
$(eval $(call generate-docker-build-target,$(BACKEND_PIPER)))
$(eval $(call generate-docker-build-target,$(BACKEND_LOCAL_STORE)))
$(eval $(call generate-docker-build-target,$(BACKEND_HUGGINGFACE)))
@@ -493,24 +501,37 @@ $(eval $(call generate-docker-build-target,$(BACKEND_RERANKERS)))
$(eval $(call generate-docker-build-target,$(BACKEND_TRANSFORMERS)))
$(eval $(call generate-docker-build-target,$(BACKEND_FASTER_WHISPER)))
$(eval $(call generate-docker-build-target,$(BACKEND_COQUI)))
$(eval $(call generate-docker-build-target,$(BACKEND_BARK)))
$(eval $(call generate-docker-build-target,$(BACKEND_EXLLAMA2)))
$(eval $(call generate-docker-build-target,$(BACKEND_RFDETR)))
$(eval $(call generate-docker-build-target,$(BACKEND_KITTEN_TTS)))
$(eval $(call generate-docker-build-target,$(BACKEND_NEUTTS)))
$(eval $(call generate-docker-build-target,$(BACKEND_KOKORO)))
$(eval $(call generate-docker-build-target,$(BACKEND_VLLM)))
$(eval $(call generate-docker-build-target,$(BACKEND_VLLM_OMNI)))
$(eval $(call generate-docker-build-target,$(BACKEND_DIFFUSERS)))
$(eval $(call generate-docker-build-target,$(BACKEND_CHATTERBOX)))
$(eval $(call generate-docker-build-target,$(BACKEND_VIBEVOICE)))
$(eval $(call generate-docker-build-target,$(BACKEND_MOONSHINE)))
$(eval $(call generate-docker-build-target,$(BACKEND_POCKET_TTS)))
$(eval $(call generate-docker-build-target,$(BACKEND_QWEN_TTS)))
$(eval $(call generate-docker-build-target,$(BACKEND_QWEN_ASR)))
$(eval $(call generate-docker-build-target,$(BACKEND_VOXCPM)))
$(eval $(call generate-docker-build-target,$(BACKEND_WHISPERX)))
# Pattern rule for docker-save targets
docker-save-%: backend-images
docker save local-ai-backend:$* -o backend-images/$*.tar
docker-build-backends: docker-build-llama-cpp docker-build-rerankers docker-build-vllm docker-build-transformers docker-build-diffusers docker-build-kokoro docker-build-faster-whisper docker-build-coqui docker-build-bark docker-build-chatterbox docker-build-vibevoice docker-build-exllama2 docker-build-moonshine docker-build-pocket-tts
docker-build-backends: docker-build-llama-cpp docker-build-rerankers docker-build-vllm docker-build-vllm-omni docker-build-transformers docker-build-diffusers docker-build-kokoro docker-build-faster-whisper docker-build-coqui docker-build-chatterbox docker-build-vibevoice docker-build-moonshine docker-build-pocket-tts docker-build-qwen-tts docker-build-qwen-asr docker-build-voxcpm docker-build-whisperx
########################################################
### Mock Backend for E2E Tests
########################################################
build-mock-backend: protogen-go
$(GOCMD) build -o tests/e2e/mock-backend/mock-backend ./tests/e2e/mock-backend
clean-mock-backend:
rm -f tests/e2e/mock-backend/mock-backend
########################################################
### END Backends


@@ -51,34 +51,16 @@
**LocalAI** is the free, Open Source OpenAI alternative. LocalAI acts as a drop-in replacement REST API that's compatible with the OpenAI (Elevenlabs, Anthropic...) API specifications for local AI inferencing. It allows you to run LLMs, generate images, audio (and more) locally or on-prem with consumer-grade hardware, supporting multiple model families. It does not require a GPU. It is created and maintained by [Ettore Di Giacinto](https://github.com/mudler).
## 📚🆕 Local Stack Family
## Local Stack Family
🆕 LocalAI is now part of a comprehensive suite of AI tools designed to work together:
Liking LocalAI? LocalAI is part of an integrated suite of AI infrastructure tools; you might also like:
- **[LocalAGI](https://github.com/mudler/LocalAGI)** - AI agent orchestration platform with OpenAI Responses API compatibility and advanced agentic capabilities
- **[LocalRecall](https://github.com/mudler/LocalRecall)** - MCP/REST API knowledge base system providing persistent memory and storage for AI agents
- 🆕 **[Cogito](https://github.com/mudler/cogito)** - Go library for building intelligent, co-operative agentic software and LLM-powered workflows, focused on improving results for small, open source language models while scaling to any LLM. Powers LocalAGI and LocalAI MCP/Agentic capabilities
- 🆕 **[Wiz](https://github.com/mudler/wiz)** - Terminal-based AI agent accessible via Ctrl+Space keybinding. Portable, local-LLM friendly shell assistant with TUI/CLI modes, tool execution with approval, MCP protocol support, and multi-shell compatibility (zsh, bash, fish)
- 🆕 **[SkillServer](https://github.com/mudler/skillserver)** - Simple, centralized skills database for AI agents via MCP. Manages skills as Markdown files with MCP server integration, web UI for editing, Git synchronization, and full-text search capabilities
<table>
<tr>
<td width="50%" valign="top">
<a href="https://github.com/mudler/LocalAGI">
<img src="https://raw.githubusercontent.com/mudler/LocalAGI/refs/heads/main/webui/react-ui/public/logo_2.png" width="300" alt="LocalAGI Logo">
</a>
</td>
<td width="50%" valign="top">
<h3><a href="https://github.com/mudler/LocalAGI">LocalAGI</a></h3>
<p>A powerful Local AI agent management platform that serves as a drop-in replacement for OpenAI's Responses API, enhanced with advanced agentic capabilities.</p>
</td>
</tr>
<tr>
<td width="50%" valign="top">
<a href="https://github.com/mudler/LocalRecall">
<img src="https://raw.githubusercontent.com/mudler/LocalRecall/refs/heads/main/static/localrecall_horizontal.png" width="300" alt="LocalRecall Logo">
</a>
</td>
<td width="50%" valign="top">
<h3><a href="https://github.com/mudler/LocalRecall">LocalRecall</a></h3>
<p>A REST-ful API and knowledge base management system that provides persistent memory and storage capabilities for AI agents.</p>
</td>
</tr>
</table>
## Screenshots / Video
@@ -257,6 +239,7 @@ Roadmap items: [List of issues](https://github.com/mudler/LocalAI/issues?q=is%3A
- 🔈 [Audio to Text](https://localai.io/features/audio-to-text/) (Audio transcription with `whisper.cpp`)
- 🎨 [Image generation](https://localai.io/features/image-generation)
- 🔥 [OpenAI-alike tools API](https://localai.io/features/openai-functions/)
- ⚡ [Realtime API](https://localai.io/features/openai-realtime/) (Speech-to-speech)
- 🧠 [Embeddings generation for vector databases](https://localai.io/features/embeddings/)
- ✍️ [Constrained grammars](https://localai.io/features/constrained_grammars/)
- 🖼️ [Download Models directly from Huggingface ](https://localai.io/models/)
@@ -278,7 +261,6 @@ LocalAI supports a comprehensive range of AI backends with multiple acceleration
| **llama.cpp** | LLM inference in C/C++ | CUDA 12/13, ROCm, Intel SYCL, Vulkan, Metal, CPU |
| **vLLM** | Fast LLM inference with PagedAttention | CUDA 12/13, ROCm, Intel |
| **transformers** | HuggingFace transformers framework | CUDA 12/13, ROCm, Intel, CPU |
| **exllama2** | GPTQ inference library | CUDA 12/13 |
| **MLX** | Apple Silicon LLM inference | Metal (M1/M2/M3+) |
| **MLX-VLM** | Apple Silicon Vision-Language Models | Metal (M1/M2/M3+) |
@@ -287,8 +269,6 @@ LocalAI supports a comprehensive range of AI backends with multiple acceleration
|---------|-------------|---------------------|
| **whisper.cpp** | OpenAI Whisper in C/C++ | CUDA 12/13, ROCm, Intel SYCL, Vulkan, CPU |
| **faster-whisper** | Fast Whisper with CTranslate2 | CUDA 12/13, ROCm, Intel, CPU |
| **bark** | Text-to-audio generation | CUDA 12/13, ROCm, Intel |
| **bark-cpp** | C++ implementation of Bark | CUDA, Metal, CPU |
| **coqui** | Advanced TTS with 1100+ languages | CUDA 12/13, ROCm, Intel, CPU |
| **kokoro** | Lightweight TTS model | CUDA 12/13, ROCm, Intel, CPU |
| **chatterbox** | Production-grade TTS | CUDA 12/13, CPU |
@@ -298,6 +278,7 @@ LocalAI supports a comprehensive range of AI backends with multiple acceleration
| **neutts** | Text-to-speech with voice cloning | CUDA 12/13, ROCm, CPU |
| **vibevoice** | Real-time TTS with voice cloning | CUDA 12/13, ROCm, Intel, CPU |
| **pocket-tts** | Lightweight CPU-based TTS | CUDA 12/13, ROCm, Intel, CPU |
| **qwen-tts** | High-quality TTS with custom voice, voice design, and voice cloning | CUDA 12/13, ROCm, Intel, CPU |
### Image & Video Generation
| Backend | Description | Acceleration Support |
@@ -319,9 +300,9 @@ LocalAI supports a comprehensive range of AI backends with multiple acceleration
|-------------------|-------------------|------------------|
| **NVIDIA CUDA 12** | All CUDA-compatible backends | Nvidia hardware |
| **NVIDIA CUDA 13** | All CUDA-compatible backends | Nvidia hardware |
| **AMD ROCm** | llama.cpp, whisper, vllm, transformers, diffusers, rerankers, coqui, kokoro, bark, neutts, vibevoice, pocket-tts | AMD Graphics |
| **Intel oneAPI** | llama.cpp, whisper, stablediffusion, vllm, transformers, diffusers, rfdetr, rerankers, exllama2, coqui, kokoro, bark, vibevoice, pocket-tts | Intel Arc, Intel iGPUs |
| **Apple Metal** | llama.cpp, whisper, diffusers, MLX, MLX-VLM, bark-cpp | Apple M1/M2/M3+ |
| **AMD ROCm** | llama.cpp, whisper, vllm, transformers, diffusers, rerankers, coqui, kokoro, neutts, vibevoice, pocket-tts, qwen-tts | AMD Graphics |
| **Intel oneAPI** | llama.cpp, whisper, stablediffusion, vllm, transformers, diffusers, rfdetr, rerankers, coqui, kokoro, vibevoice, pocket-tts, qwen-tts | Intel Arc, Intel iGPUs |
| **Apple Metal** | llama.cpp, whisper, diffusers, MLX, MLX-VLM | Apple M1/M2/M3+ |
| **Vulkan** | llama.cpp, whisper, stablediffusion | Cross-platform GPUs |
| **NVIDIA Jetson (CUDA 12)** | llama.cpp, whisper, stablediffusion, diffusers, rfdetr | ARM64 embedded AI (AGX Orin, etc.) |
| **NVIDIA Jetson (CUDA 13)** | llama.cpp, whisper, stablediffusion, diffusers, rfdetr | ARM64 embedded AI (DGX Spark) |
@@ -343,6 +324,10 @@ Agentic Libraries:
MCPs:
- https://github.com/mudler/MCPs
OS Assistant:
- https://github.com/mudler/Keygeist - Keygeist is an AI-powered keyboard operator that listens for key combinations and responds with AI-generated text typed directly into your Linux box.
Model galleries
- https://github.com/go-skynet/model-gallery


@@ -46,7 +46,7 @@ The backend system provides language-specific Dockerfiles that handle the build
- **vllm**: High-performance LLM inference
- **mlx**: Apple Silicon optimization
- **diffusers**: Stable Diffusion models
- **Audio**: bark, coqui, faster-whisper, kitten-tts
- **Audio**: coqui, faster-whisper, kitten-tts
- **Vision**: mlx-vlm, rfdetr
- **Specialized**: rerankers, chatterbox, kokoro
@@ -55,7 +55,6 @@ The backend system provides language-specific Dockerfiles that handle the build
- **stablediffusion-ggml**: Stable Diffusion in Go with GGML Cpp backend
- **huggingface**: Hugging Face model integration
- **piper**: Text-to-speech synthesis in Go with C bindings, using rhasspy/piper
- **bark-cpp**: Bark TTS models Golang with Cpp bindings
- **local-store**: Vector storage backend
#### C++ Backends (`cpp/`)


@@ -17,6 +17,7 @@ service Backend {
rpc GenerateVideo(GenerateVideoRequest) returns (Result) {}
rpc AudioTranscription(TranscriptRequest) returns (TranscriptResult) {}
rpc TTS(TTSRequest) returns (Result) {}
rpc TTSStream(TTSRequest) returns (stream Reply) {}
rpc SoundGeneration(SoundGenerationRequest) returns (Result) {}
rpc TokenizeString(PredictOptions) returns (TokenizationResponse) {}
rpc Status(HealthMessage) returns (StatusResponse) {}
@@ -32,6 +33,8 @@ service Backend {
rpc GetMetrics(MetricsRequest) returns (MetricsResponse);
rpc VAD(VADRequest) returns (VADResponse) {}
rpc ModelMetadata(ModelOptions) returns (ModelMetadataResponse) {}
}
// Define the empty request
@@ -296,6 +299,7 @@ message TranscriptSegment {
int64 end = 3;
string text = 4;
repeated int32 tokens = 5;
string speaker = 6;
}
message GenerateImageRequest {
@@ -410,3 +414,8 @@ message Detection {
message DetectResponse {
repeated Detection Detections = 1;
}
message ModelMetadataResponse {
bool supports_thinking = 1;
string rendered_template = 2; // The rendered chat template with enable_thinking=true (empty if not applicable)
}
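The new `speaker` field is what allows diarization-capable backends (whisperx in this changeset) to label each transcript segment. Below is a minimal Go sketch of how a caller might surface it; it assumes the generated getters from this proto, the `pb` import path used elsewhere in this diff, and that `TranscriptResult` exposes its segments through a `Segments` field as in the existing whisper backend code. `printSegments` is a hypothetical helper, not part of the changeset.
```go
package main

import (
	"fmt"

	pb "github.com/mudler/LocalAI/pkg/grpc/proto"
)

// printSegments walks the segments of a TranscriptResult (as returned by an
// AudioTranscription call) and prints the speaker label when the backend set one.
func printSegments(res *pb.TranscriptResult) {
	for _, seg := range res.GetSegments() {
		speaker := seg.GetSpeaker()
		if speaker == "" {
			speaker = "unknown" // e.g. diarization disabled because HF_TOKEN was not provided
		}
		fmt.Printf("[%s] %d-%d: %s\n", speaker, seg.GetStart(), seg.GetEnd(), seg.GetText())
	}
}

func main() {
	// Hand-built example; in practice the result comes from the gRPC call.
	printSegments(&pb.TranscriptResult{
		Segments: []*pb.TranscriptSegment{
			{Start: 0, End: 3, Text: "hello world", Speaker: "SPEAKER_00"},
		},
	})
}
```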


@@ -1,5 +1,5 @@
LLAMA_VERSION?=785a71008573e2d84728fb0ba9e851d72d3f8fab
LLAMA_VERSION?=2634ed207a17db1a54bd8df0555bd8499a6ab691
LLAMA_REPO?=https://github.com/ggerganov/llama.cpp
CMAKE_ARGS?=


@@ -83,8 +83,8 @@ static void start_llama_server(server_context& ctx_server) {
// print sample chat example to make it clear which template is used
// LOG_INF("%s: chat template, chat_template: %s, example_format: '%s'\n", __func__,
// common_chat_templates_source(ctx_server.impl->chat_templates.get()),
// common_chat_format_example(ctx_server.impl->chat_templates.get(), ctx_server.impl->params_base.use_jinja).c_str(), ctx_server.impl->params_base.default_template_kwargs);
// common_chat_templates_source(ctx_server.impl->chat_params.tmpls.get()),
// common_chat_format_example(ctx_server.impl->chat_params.tmpls.get(), ctx_server.impl->params_base.use_jinja).c_str(), ctx_server.impl->params_base.default_template_kwargs);
// Keep the chat templates initialized in load_model() so they can be used when UseTokenizerTemplate is enabled
// Templates will only be used conditionally in Predict/PredictStream when UseTokenizerTemplate is true and Messages are provided
@@ -778,8 +778,8 @@ public:
if (!params.mmproj.path.empty()) {
error_msg += " (with mmproj: " + params.mmproj.path + ")";
}
if (params.has_speculative() && !params.speculative.model.path.empty()) {
error_msg += " (with draft model: " + params.speculative.model.path + ")";
if (params.speculative.has_dft() && !params.speculative.mparams_dft.path.empty()) {
error_msg += " (with draft model: " + params.speculative.mparams_dft.path + ")";
}
// Add captured error details if available
@@ -882,7 +882,7 @@ public:
std::string prompt_str;
std::vector<raw_buffer> files; // Declare files early so it's accessible in both branches
// Handle chat templates when UseTokenizerTemplate is enabled and Messages are provided
if (request->usetokenizertemplate() && request->messages_size() > 0 && ctx_server.impl->chat_templates != nullptr) {
if (request->usetokenizertemplate() && request->messages_size() > 0 && ctx_server.impl->chat_params.tmpls != nullptr) {
// Convert proto Messages to JSON format compatible with oaicompat_chat_params_parse
json body_json;
json messages_json = json::array();
@@ -1261,12 +1261,7 @@ public:
// Use the same approach as server.cpp: call oaicompat_chat_params_parse
// This handles all template application, grammar merging, etc. automatically
// Files extracted from multimodal content in messages will be added to the files vector
// Create parser options with current chat_templates to ensure tmpls is not null
oaicompat_parser_options parser_opt = ctx_server.impl->oai_parser_opt;
parser_opt.tmpls = ctx_server.impl->chat_templates.get(); // Ensure tmpls is set to current chat_templates
// Update allow_image and allow_audio based on current mctx state
parser_opt.allow_image = ctx_server.impl->mctx ? mtmd_support_vision(ctx_server.impl->mctx) : false;
parser_opt.allow_audio = ctx_server.impl->mctx ? mtmd_support_audio(ctx_server.impl->mctx) : false;
// chat_params already contains tmpls, allow_image, and allow_audio set during model loading
// Debug: Log tools before template processing
if (body_json.contains("tools")) {
@@ -1312,7 +1307,7 @@ public:
}
}
json parsed_data = oaicompat_chat_params_parse(body_json, parser_opt, files);
json parsed_data = oaicompat_chat_params_parse(body_json, ctx_server.impl->chat_params, files);
// Debug: Log tools after template processing
if (parsed_data.contains("tools")) {
@@ -1365,7 +1360,7 @@ public:
// If not using chat templates, extract files from image_data/audio_data fields
// (If using chat templates, files were already extracted by oaicompat_chat_params_parse)
if (!request->usetokenizertemplate() || request->messages_size() == 0 || ctx_server.impl->chat_templates == nullptr) {
if (!request->usetokenizertemplate() || request->messages_size() == 0 || ctx_server.impl->chat_params.tmpls == nullptr) {
const auto &images_data = data.find("image_data");
if (images_data != data.end() && images_data->is_array())
{
@@ -1593,7 +1588,7 @@ public:
std::string prompt_str;
std::vector<raw_buffer> files; // Declare files early so it's accessible in both branches
// Handle chat templates when UseTokenizerTemplate is enabled and Messages are provided
if (request->usetokenizertemplate() && request->messages_size() > 0 && ctx_server.impl->chat_templates != nullptr) {
if (request->usetokenizertemplate() && request->messages_size() > 0 && ctx_server.impl->chat_params.tmpls != nullptr) {
// Convert proto Messages to JSON format compatible with oaicompat_chat_params_parse
json body_json;
json messages_json = json::array();
@@ -1997,12 +1992,7 @@ public:
// Use the same approach as server.cpp: call oaicompat_chat_params_parse
// This handles all template application, grammar merging, etc. automatically
// Files extracted from multimodal content in messages will be added to the files vector
// Create parser options with current chat_templates to ensure tmpls is not null
oaicompat_parser_options parser_opt = ctx_server.impl->oai_parser_opt;
parser_opt.tmpls = ctx_server.impl->chat_templates.get(); // Ensure tmpls is set to current chat_templates
// Update allow_image and allow_audio based on current mctx state
parser_opt.allow_image = ctx_server.impl->mctx ? mtmd_support_vision(ctx_server.impl->mctx) : false;
parser_opt.allow_audio = ctx_server.impl->mctx ? mtmd_support_audio(ctx_server.impl->mctx) : false;
// chat_params already contains tmpls, allow_image, and allow_audio set during model loading
// Debug: Log tools before template processing
if (body_json.contains("tools")) {
@@ -2048,7 +2038,7 @@ public:
}
}
json parsed_data = oaicompat_chat_params_parse(body_json, parser_opt, files);
json parsed_data = oaicompat_chat_params_parse(body_json, ctx_server.impl->chat_params, files);
// Debug: Log tools after template processing
if (parsed_data.contains("tools")) {
@@ -2101,7 +2091,7 @@ public:
// If not using chat templates, extract files from image_data/audio_data fields
// (If using chat templates, files were already extracted by oaicompat_chat_params_parse)
if (!request->usetokenizertemplate() || request->messages_size() == 0 || ctx_server.impl->chat_templates == nullptr) {
if (!request->usetokenizertemplate() || request->messages_size() == 0 || ctx_server.impl->chat_params.tmpls == nullptr) {
const auto &images_data = data.find("image_data");
if (images_data != data.end() && images_data->is_array())
{
@@ -2486,6 +2476,47 @@ public:
response->set_prompt_tokens_processed(res_metrics->n_prompt_tokens_processed_total);
return grpc::Status::OK;
}
grpc::Status ModelMetadata(ServerContext* /*context*/, const backend::ModelOptions* /*request*/, backend::ModelMetadataResponse* response) override {
// Check if model is loaded
if (params_base.model.path.empty()) {
return grpc::Status(grpc::StatusCode::FAILED_PRECONDITION, "Model not loaded");
}
// Check if chat templates are initialized
if (ctx_server.impl->chat_params.tmpls == nullptr) {
// If templates are not initialized, we can't detect thinking support
// Return false as default
response->set_supports_thinking(false);
response->set_rendered_template("");
return grpc::Status::OK;
}
// Detect thinking support using llama.cpp's function
bool supports_thinking = common_chat_templates_support_enable_thinking(ctx_server.impl->chat_params.tmpls.get());
response->set_supports_thinking(supports_thinking);
// Render the template with enable_thinking=true so Go code can detect thinking tokens
// This allows reusing existing detection functions in Go
std::string rendered_template = "";
if (params_base.use_jinja) {
// Render the template with enable_thinking=true to see what the actual prompt looks like
common_chat_templates_inputs dummy_inputs;
common_chat_msg msg;
msg.role = "user";
msg.content = "test";
dummy_inputs.messages = {msg};
dummy_inputs.enable_thinking = true;
dummy_inputs.use_jinja = params_base.use_jinja;
const auto rendered = common_chat_templates_apply(ctx_server.impl->chat_params.tmpls.get(), dummy_inputs);
rendered_template = rendered.prompt;
}
response->set_rendered_template(rendered_template);
return grpc::Status::OK;
}
};


@@ -1,51 +0,0 @@
INCLUDE_PATH := $(abspath ./)
LIBRARY_PATH := $(abspath ./)
AR?=ar
CMAKE_ARGS?=-DGGML_NATIVE=OFF
BUILD_TYPE?=
GOCMD=go
# keep standard at C11 and C++11
CXXFLAGS = -I. -I$(INCLUDE_PATH)/sources/bark.cpp/examples -I$(INCLUDE_PATH)/sources/bark.cpp/encodec.cpp/ggml/include -I$(INCLUDE_PATH)/sources/bark.cpp/spm-headers -I$(INCLUDE_PATH)/sources/bark.cpp -O3 -DNDEBUG -std=c++17 -fPIC
LDFLAGS = -L$(LIBRARY_PATH) -L$(LIBRARY_PATH)/sources/bark.cpp/build/examples -lbark -lstdc++ -lm
# bark.cpp
BARKCPP_REPO?=https://github.com/PABannier/bark.cpp.git
BARKCPP_VERSION?=5d5be84f089ab9ea53b7a793f088d3fbf7247495
# warnings
CXXFLAGS += -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function
## bark.cpp
sources/bark.cpp:
git clone --recursive $(BARKCPP_REPO) sources/bark.cpp && \
cd sources/bark.cpp && \
git checkout $(BARKCPP_VERSION) && \
git submodule update --init --recursive --depth 1 --single-branch
sources/bark.cpp/build/libbark.a: sources/bark.cpp
cd sources/bark.cpp && \
mkdir -p build && \
cd build && \
cmake $(CMAKE_ARGS) .. && \
cmake --build . --config Release
gobark.o:
$(CXX) $(CXXFLAGS) gobark.cpp -o gobark.o -c $(LDFLAGS)
libbark.a: sources/bark.cpp/build/libbark.a gobark.o
cp $(INCLUDE_PATH)/sources/bark.cpp/build/libbark.a ./
$(AR) rcs libbark.a gobark.o
bark-cpp: libbark.a
CGO_LDFLAGS="$(CGO_LDFLAGS)" C_INCLUDE_PATH="$(CURDIR)" LIBRARY_PATH=$(CURDIR) \
$(GOCMD) build -v -ldflags "$(LD_FLAGS)" -tags "$(GO_TAGS)" -o bark-cpp ./
package:
bash package.sh
build: bark-cpp package
clean:
rm -f gobark.o libbark.a


@@ -1,85 +0,0 @@
#include <iostream>
#include <tuple>
#include "bark.h"
#include "gobark.h"
#include "common.h"
#include "ggml.h"
struct bark_context *c;
void bark_print_progress_callback(struct bark_context *bctx, enum bark_encoding_step step, int progress, void *user_data) {
if (step == bark_encoding_step::SEMANTIC) {
printf("\rGenerating semantic tokens... %d%%", progress);
} else if (step == bark_encoding_step::COARSE) {
printf("\rGenerating coarse tokens... %d%%", progress);
} else if (step == bark_encoding_step::FINE) {
printf("\rGenerating fine tokens... %d%%", progress);
}
fflush(stdout);
}
int load_model(char *model) {
// initialize bark context
struct bark_context_params ctx_params = bark_context_default_params();
bark_params params;
params.model_path = model;
// ctx_params.verbosity = verbosity;
ctx_params.progress_callback = bark_print_progress_callback;
ctx_params.progress_callback_user_data = nullptr;
struct bark_context *bctx = bark_load_model(params.model_path.c_str(), ctx_params, params.seed);
if (!bctx) {
fprintf(stderr, "%s: Could not load model\n", __func__);
return 1;
}
c = bctx;
return 0;
}
int tts(char *text,int threads, char *dst ) {
ggml_time_init();
const int64_t t_main_start_us = ggml_time_us();
// generate audio
if (!bark_generate_audio(c, text, threads)) {
fprintf(stderr, "%s: An error occurred. If the problem persists, feel free to open an issue to report it.\n", __func__);
return 1;
}
const float *audio_data = bark_get_audio_data(c);
if (audio_data == NULL) {
fprintf(stderr, "%s: Could not get audio data\n", __func__);
return 1;
}
const int audio_arr_size = bark_get_audio_data_size(c);
std::vector<float> audio_arr(audio_data, audio_data + audio_arr_size);
write_wav_on_disk(audio_arr, dst);
// report timing
{
const int64_t t_main_end_us = ggml_time_us();
const int64_t t_load_us = bark_get_load_time(c);
const int64_t t_eval_us = bark_get_eval_time(c);
printf("\n\n");
printf("%s: load time = %8.2f ms\n", __func__, t_load_us / 1000.0f);
printf("%s: eval time = %8.2f ms\n", __func__, t_eval_us / 1000.0f);
printf("%s: total time = %8.2f ms\n", __func__, (t_main_end_us - t_main_start_us) / 1000.0f);
}
return 0;
}
int unload() {
bark_free(c);
}


@@ -1,52 +0,0 @@
package main
// #cgo CXXFLAGS: -I${SRCDIR}/sources/bark.cpp/ -I${SRCDIR}/sources/bark.cpp/encodec.cpp -I${SRCDIR}/sources/bark.cpp/encodec.cpp/ggml/include -I${SRCDIR}/sources/bark.cpp/examples -I${SRCDIR}/sources/bark.cpp/spm-headers
// #cgo LDFLAGS: -L${SRCDIR}/ -L${SRCDIR}/sources/bark.cpp/build/examples -L${SRCDIR}/sources/bark.cpp/build/encodec.cpp/ggml/src/ -L${SRCDIR}/sources/bark.cpp/build/encodec.cpp/ -lbark -lencodec -lcommon -lggml -lgomp
// #include <gobark.h>
// #include <stdlib.h>
import "C"
import (
"fmt"
"unsafe"
"github.com/mudler/LocalAI/pkg/grpc/base"
pb "github.com/mudler/LocalAI/pkg/grpc/proto"
)
type Bark struct {
base.SingleThread
threads int
}
func (sd *Bark) Load(opts *pb.ModelOptions) error {
sd.threads = int(opts.Threads)
modelFile := C.CString(opts.ModelFile)
defer C.free(unsafe.Pointer(modelFile))
ret := C.load_model(modelFile)
if ret != 0 {
return fmt.Errorf("inference failed")
}
return nil
}
func (sd *Bark) TTS(opts *pb.TTSRequest) error {
t := C.CString(opts.Text)
defer C.free(unsafe.Pointer(t))
dst := C.CString(opts.Dst)
defer C.free(unsafe.Pointer(dst))
threads := C.int(sd.threads)
ret := C.tts(t, threads, dst)
if ret != 0 {
return fmt.Errorf("inference failed")
}
return nil
}


@@ -1,8 +0,0 @@
#ifdef __cplusplus
extern "C" {
#endif
int load_model(char *model);
int tts(char *text,int threads, char *dst );
#ifdef __cplusplus
}
#endif


@@ -1,20 +0,0 @@
package main
// Note: this is started internally by LocalAI and a server is allocated for each model
import (
"flag"
grpc "github.com/mudler/LocalAI/pkg/grpc"
)
var (
addr = flag.String("addr", "localhost:50051", "the address to connect to")
)
func main() {
flag.Parse()
if err := grpc.StartServer(*addr, &Bark{}); err != nil {
panic(err)
}
}


@@ -1,41 +0,0 @@
#!/bin/bash
# Script to copy the appropriate libraries based on architecture
# This script is used in the final stage of the Dockerfile
set -e
CURDIR=$(dirname "$(realpath $0)")
# Create lib directory
mkdir -p $CURDIR/package/lib
cp -avrf $CURDIR/bark-cpp $CURDIR/package/
cp -rfv $CURDIR/run.sh $CURDIR/package/
# Detect architecture and copy appropriate libraries
if [ -f "/lib64/ld-linux-x86-64.so.2" ]; then
# x86_64 architecture
echo "Detected x86_64 architecture, copying x86_64 libraries..."
cp -arfLv /lib64/ld-linux-x86-64.so.2 $CURDIR/package/lib/ld.so
cp -arfLv /lib/x86_64-linux-gnu/libc.so.6 $CURDIR/package/lib/libc.so.6
cp -arfLv /lib/x86_64-linux-gnu/libgcc_s.so.1 $CURDIR/package/lib/libgcc_s.so.1
cp -arfLv /lib/x86_64-linux-gnu/libstdc++.so.6 $CURDIR/package/lib/libstdc++.so.6
cp -arfLv /lib/x86_64-linux-gnu/libm.so.6 $CURDIR/package/lib/libm.so.6
cp -arfLv /lib/x86_64-linux-gnu/libgomp.so.1 $CURDIR/package/lib/libgomp.so.1
elif [ -f "/lib/ld-linux-aarch64.so.1" ]; then
# ARM64 architecture
echo "Detected ARM64 architecture, copying ARM64 libraries..."
cp -arfLv /lib/ld-linux-aarch64.so.1 $CURDIR/package/lib/ld.so
cp -arfLv /lib/aarch64-linux-gnu/libc.so.6 $CURDIR/package/lib/libc.so.6
cp -arfLv /lib/aarch64-linux-gnu/libgcc_s.so.1 $CURDIR/package/lib/libgcc_s.so.1
cp -arfLv /lib/aarch64-linux-gnu/libstdc++.so.6 $CURDIR/package/lib/libstdc++.so.6
cp -arfLv /lib/aarch64-linux-gnu/libm.so.6 $CURDIR/package/lib/libm.so.6
cp -arfLv /lib/aarch64-linux-gnu/libgomp.so.1 $CURDIR/package/lib/libgomp.so.1
else
echo "Error: Could not detect architecture"
exit 1
fi
echo "Packaging completed successfully"
ls -liah $CURDIR/package/
ls -liah $CURDIR/package/lib/


@@ -1,13 +0,0 @@
#!/bin/bash
set -ex
CURDIR=$(dirname "$(realpath $0)")
export LD_LIBRARY_PATH=$CURDIR/lib:$LD_LIBRARY_PATH
# If there is a lib/ld.so, use it
if [ -f $CURDIR/lib/ld.so ]; then
echo "Using lib/ld.so"
exec $CURDIR/lib/ld.so $CURDIR/bark-cpp "$@"
fi
exec $CURDIR/bark-cpp "$@"


@@ -8,7 +8,7 @@ JOBS?=$(shell nproc --ignore=1)
# stablediffusion.cpp (ggml)
STABLEDIFFUSION_GGML_REPO?=https://github.com/leejet/stable-diffusion.cpp
STABLEDIFFUSION_GGML_VERSION?=7010bb4dff7bd55b03d35ef9772142c21699eba9
STABLEDIFFUSION_GGML_VERSION?=e411520407663e1ddf8ff2e5ed4ff3a116fbbc97
CMAKE_ARGS+=-DGGML_MAX_NAME=128


@@ -8,7 +8,7 @@ JOBS?=$(shell nproc --ignore=1)
# whisper.cpp version
WHISPER_REPO?=https://github.com/ggml-org/whisper.cpp
WHISPER_CPP_VERSION?=2eeeba56e9edd762b4b38467bab96c2517163158
WHISPER_CPP_VERSION?=aa1bc0d1a6dfd70dbb9f60c11df12441e03a9075
SO_TARGET?=libgowhisper.so
CMAKE_ARGS+=-DBUILD_SHARED_LIBS=OFF


@@ -130,8 +130,9 @@ func (w *Whisper) AudioTranscription(opts *pb.TranscriptRequest) (pb.TranscriptR
segments := []*pb.TranscriptSegment{}
text := ""
for i := range int(segsLen) {
s := CppGetSegmentStart(i)
t := CppGetSegmentEnd(i)
// segment start/end conversion factor taken from https://github.com/ggml-org/whisper.cpp/blob/master/examples/cli/cli.cpp#L895
s := CppGetSegmentStart(i) * (10000000)
t := CppGetSegmentEnd(i) * (10000000)
txt := strings.Clone(CppGetSegmentText(i))
tokens := make([]int32, CppNTokens(i))


@@ -142,6 +142,31 @@
amd: "rocm-vllm"
intel: "intel-vllm"
nvidia-cuda-12: "cuda12-vllm"
- &vllm-omni
name: "vllm-omni"
license: apache-2.0
urls:
- https://github.com/vllm-project/vllm-omni
tags:
- text-to-image
- image-generation
- text-to-video
- video-generation
- text-to-speech
- TTS
- multimodal
- LLM
icon: https://raw.githubusercontent.com/vllm-project/vllm/main/docs/assets/logos/vllm-logo-text-dark.png
description: |
vLLM-Omni is a unified interface for multimodal generation with vLLM.
It supports image generation (text-to-image, image editing), video generation
(text-to-video, image-to-video), text generation with multimodal inputs, and
text-to-speech generation. Only supports NVIDIA (CUDA) and ROCm platforms.
alias: "vllm-omni"
capabilities:
nvidia: "cuda12-vllm-omni"
amd: "rocm-vllm-omni"
nvidia-cuda-12: "cuda12-vllm-omni"
- &mlx
name: "mlx"
uri: "quay.io/go-skynet/local-ai-backends:latest-metal-darwin-arm64-mlx"
@@ -200,7 +225,7 @@
amd: "rocm-rerankers"
- &transformers
name: "transformers"
icon: https://camo.githubusercontent.com/26569a27b8a30a488dd345024b71dbc05da7ff1b2ba97bb6080c9f1ee0f26cc7/68747470733a2f2f68756767696e67666163652e636f2f64617461736574732f68756767696e67666163652f646f63756d656e746174696f6e2d696d616765732f7265736f6c76652f6d61696e2f7472616e73666f726d6572732f7472616e73666f726d6572735f61735f615f6d6f64656c5f646566696e6974696f6e2e706e67
icon: https://avatars.githubusercontent.com/u/25720743?s=200&v=4
alias: "transformers"
license: apache-2.0
description: |
@@ -241,22 +266,6 @@
nvidia-cuda-12: "cuda12-diffusers"
nvidia-l4t-cuda-12: "nvidia-l4t-arm64-diffusers"
nvidia-l4t-cuda-13: "cuda13-nvidia-l4t-arm64-diffusers"
- &exllama2
name: "exllama2"
urls:
- https://github.com/turboderp-org/exllamav2
tags:
- text-to-text
- LLM
- EXL2
license: MIT
description: |
ExLlamaV2 is an inference library for running local LLMs on modern consumer GPUs.
alias: "exllama2"
capabilities:
nvidia: "cuda12-exllama2"
intel: "intel-exllama2"
nvidia-cuda-12: "cuda12-exllama2"
- &faster-whisper
icon: https://avatars.githubusercontent.com/u/1520500?s=200&v=4
description: |
@@ -293,6 +302,25 @@
default: "cpu-moonshine"
nvidia-cuda-13: "cuda13-moonshine"
nvidia-cuda-12: "cuda12-moonshine"
- &whisperx
description: |
WhisperX provides fast automatic speech recognition with word-level timestamps, speaker diarization,
and forced alignment. Built on faster-whisper and pyannote-audio for high-accuracy transcription
with speaker identification.
urls:
- https://github.com/m-bain/whisperX
tags:
- speech-to-text
- diarization
- whisperx
license: BSD-4-Clause
name: "whisperx"
capabilities:
nvidia: "cuda12-whisperx"
amd: "rocm-whisperx"
default: "cpu-whisperx"
nvidia-cuda-13: "cuda13-whisperx"
nvidia-cuda-12: "cuda12-whisperx"
- &kokoro
icon: https://avatars.githubusercontent.com/u/166769057?v=4
description: |
@@ -339,51 +367,6 @@
nvidia-cuda-13: "cuda13-coqui"
nvidia-cuda-12: "cuda12-coqui"
icon: https://avatars.githubusercontent.com/u/1338804?s=200&v=4
- &bark
urls:
- https://github.com/suno-ai/bark
description: |
Bark is a transformer-based text-to-audio model created by Suno. Bark can generate highly realistic, multilingual speech as well as other audio - including music, background noise and simple sound effects. The model can also produce nonverbal communications like laughing, sighing and crying. To support the research community, we are providing access to pretrained model checkpoints, which are ready for inference and available for commercial use.
tags:
- text-to-speech
- TTS
license: MIT
name: "bark"
alias: "bark"
capabilities:
cuda: "cuda12-bark"
intel: "intel-bark"
rocm: "rocm-bark"
nvidia-cuda-13: "cuda13-bark"
nvidia-cuda-12: "cuda12-bark"
icon: https://avatars.githubusercontent.com/u/99442120?s=200&v=4
- &barkcpp
urls:
- https://github.com/PABannier/bark.cpp
description: |
With bark.cpp, our goal is to bring real-time realistic multilingual text-to-speech generation to the community.
Plain C/C++ implementation without dependencies
AVX, AVX2 and AVX512 for x86 architectures
CPU and GPU compatible backends
Mixed F16 / F32 precision
4-bit, 5-bit and 8-bit integer quantization
Metal and CUDA backends
Models supported
Bark Small
Bark Large
tags:
- text-to-speech
- TTS
license: MIT
icon: https://github.com/PABannier/bark.cpp/raw/main/assets/banner.png
name: "bark-cpp"
uri: "quay.io/go-skynet/local-ai-backends:latest-bark-cpp"
mirrors:
- localai/localai-backends:latest-bark-cpp
alias: "bark-cpp"
- &chatterbox
urls:
- https://github.com/resemble-ai/chatterbox
@@ -394,7 +377,7 @@
- text-to-speech
- TTS
license: MIT
icon: https://private-user-images.githubusercontent.com/660224/448166653-bd8c5f03-e91d-4ee5-b680-57355da204d1.png?jwt=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJnaXRodWIuY29tIiwiYXVkIjoicmF3LmdpdGh1YnVzZXJjb250ZW50LmNvbSIsImtleSI6ImtleTUiLCJleHAiOjE3NTAxOTE0MDAsIm5iZiI6MTc1MDE5MTEwMCwicGF0aCI6Ii82NjAyMjQvNDQ4MTY2NjUzLWJkOGM1ZjAzLWU5MWQtNGVlNS1iNjgwLTU3MzU1ZGEyMDRkMS5wbmc_WC1BbXotQWxnb3JpdGhtPUFXUzQtSE1BQy1TSEEyNTYmWC1BbXotQ3JlZGVudGlhbD1BS0lBVkNPRFlMU0E1M1BRSzRaQSUyRjIwMjUwNjE3JTJGdXMtZWFzdC0xJTJGczMlMkZhd3M0X3JlcXVlc3QmWC1BbXotRGF0ZT0yMDI1MDYxN1QyMDExNDBaJlgtQW16LUV4cGlyZXM9MzAwJlgtQW16LVNpZ25hdHVyZT1hMmI1NGY3OGFiZTlhNGFkNTVlYTY4NTIwMWEzODRiZGE4YzdhNGQ5MGNhNzE3MDYyYTA2NDIxYTkyYzhiODkwJlgtQW16LVNpZ25lZEhlYWRlcnM9aG9zdCJ9.mR9kM9xX0TdzPuSpuspCllHYQiq79dFQ2rtuNvjrl6w
icon: https://avatars.githubusercontent.com/u/49844015?s=200&v=4
name: "chatterbox"
alias: "chatterbox"
capabilities:
@@ -428,6 +411,69 @@
nvidia-l4t-cuda-12: "nvidia-l4t-vibevoice"
nvidia-l4t-cuda-13: "cuda13-nvidia-l4t-arm64-vibevoice"
icon: https://avatars.githubusercontent.com/u/6154722?s=200&v=4
- &qwen-tts
urls:
- https://github.com/QwenLM/Qwen3-TTS
description: |
Qwen3-TTS is a high-quality text-to-speech model supporting custom voice, voice design, and voice cloning.
tags:
- text-to-speech
- TTS
license: apache-2.0
name: "qwen-tts"
alias: "qwen-tts"
capabilities:
nvidia: "cuda12-qwen-tts"
intel: "intel-qwen-tts"
amd: "rocm-qwen-tts"
nvidia-l4t: "nvidia-l4t-qwen-tts"
default: "cpu-qwen-tts"
nvidia-cuda-13: "cuda13-qwen-tts"
nvidia-cuda-12: "cuda12-qwen-tts"
nvidia-l4t-cuda-12: "nvidia-l4t-qwen-tts"
nvidia-l4t-cuda-13: "cuda13-nvidia-l4t-arm64-qwen-tts"
icon: https://cdn-avatars.huggingface.co/v1/production/uploads/620760a26e3b7210c2ff1943/-s1gyJfvbE1RgO5iBeNOi.png
- &qwen-asr
urls:
- https://github.com/QwenLM/Qwen3-ASR
description: |
Qwen3-ASR is an automatic speech recognition model supporting multiple languages and batch inference.
tags:
- speech-recognition
- ASR
license: apache-2.0
name: "qwen-asr"
alias: "qwen-asr"
capabilities:
nvidia: "cuda12-qwen-asr"
intel: "intel-qwen-asr"
amd: "rocm-qwen-asr"
nvidia-l4t: "nvidia-l4t-qwen-asr"
default: "cpu-qwen-asr"
nvidia-cuda-13: "cuda13-qwen-asr"
nvidia-cuda-12: "cuda12-qwen-asr"
nvidia-l4t-cuda-12: "nvidia-l4t-qwen-asr"
nvidia-l4t-cuda-13: "cuda13-nvidia-l4t-arm64-qwen-asr"
icon: https://cdn-avatars.huggingface.co/v1/production/uploads/620760a26e3b7210c2ff1943/-s1gyJfvbE1RgO5iBeNOi.png
- &voxcpm
urls:
- https://github.com/ModelBest/VoxCPM
description: |
VoxCPM is an innovative end-to-end TTS model from ModelBest, designed to generate highly expressive speech.
tags:
- text-to-speech
- TTS
license: mit
name: "voxcpm"
alias: "voxcpm"
capabilities:
nvidia: "cuda12-voxcpm"
intel: "intel-voxcpm"
amd: "rocm-voxcpm"
default: "cpu-voxcpm"
nvidia-cuda-13: "cuda13-voxcpm"
nvidia-cuda-12: "cuda12-voxcpm"
icon: https://avatars.githubusercontent.com/u/6154722?s=200&v=4
- &pocket-tts
urls:
- https://github.com/kyutai-labs/pocket-tts
@@ -449,7 +495,7 @@
nvidia-cuda-12: "cuda12-pocket-tts"
nvidia-l4t-cuda-12: "nvidia-l4t-pocket-tts"
nvidia-l4t-cuda-13: "cuda13-nvidia-l4t-arm64-pocket-tts"
icon: https://avatars.githubusercontent.com/u/6154722?s=200&v=4
icon: https://avatars.githubusercontent.com/u/151010778?s=200&v=4
- &piper
name: "piper"
uri: "quay.io/go-skynet/local-ai-backends:latest-piper"
@@ -537,18 +583,14 @@
default: "cpu-neutts"
nvidia: "cuda12-neutts"
amd: "rocm-neutts"
nvidia-l4t: "nvidia-l4t-neutts"
nvidia-cuda-12: "cuda12-neutts"
nvidia-l4t-cuda-12: "nvidia-l4t-arm64-neutts"
- !!merge <<: *neutts
name: "neutts-development"
capabilities:
default: "cpu-neutts-development"
nvidia: "cuda12-neutts-development"
amd: "rocm-neutts-development"
nvidia-l4t: "nvidia-l4t-neutts-development"
nvidia-cuda-12: "cuda12-neutts-development"
nvidia-l4t-cuda-12: "nvidia-l4t-arm64-neutts-development"
- !!merge <<: *llamacpp
name: "llama-cpp-development"
capabilities:
@@ -578,11 +620,6 @@
uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-rocm-hipblas-neutts"
mirrors:
- localai/localai-backends:latest-gpu-rocm-hipblas-neutts
- !!merge <<: *neutts
name: "nvidia-l4t-arm64-neutts"
uri: "quay.io/go-skynet/local-ai-backends:latest-nvidia-l4t-arm64-neutts"
mirrors:
- localai/localai-backends:latest-nvidia-l4t-arm64-neutts
- !!merge <<: *neutts
name: "cpu-neutts-development"
uri: "quay.io/go-skynet/local-ai-backends:master-cpu-neutts"
@@ -598,11 +635,6 @@
uri: "quay.io/go-skynet/local-ai-backends:master-gpu-rocm-hipblas-neutts"
mirrors:
- localai/localai-backends:master-gpu-rocm-hipblas-neutts
- !!merge <<: *neutts
name: "nvidia-l4t-arm64-neutts-development"
uri: "quay.io/go-skynet/local-ai-backends:master-nvidia-l4t-arm64-neutts"
mirrors:
- localai/localai-backends:master-nvidia-l4t-arm64-neutts
- !!merge <<: *mlx
name: "mlx-development"
uri: "quay.io/go-skynet/local-ai-backends:master-metal-darwin-arm64-mlx"
@@ -981,6 +1013,33 @@
uri: "quay.io/go-skynet/local-ai-backends:master-gpu-intel-vllm"
mirrors:
- localai/localai-backends:master-gpu-intel-vllm
# vllm-omni
- !!merge <<: *vllm-omni
name: "vllm-omni-development"
capabilities:
nvidia: "cuda12-vllm-omni-development"
amd: "rocm-vllm-omni-development"
nvidia-cuda-12: "cuda12-vllm-omni-development"
- !!merge <<: *vllm-omni
name: "cuda12-vllm-omni"
uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-nvidia-cuda-12-vllm-omni"
mirrors:
- localai/localai-backends:latest-gpu-nvidia-cuda-12-vllm-omni
- !!merge <<: *vllm-omni
name: "rocm-vllm-omni"
uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-rocm-hipblas-vllm-omni"
mirrors:
- localai/localai-backends:latest-gpu-rocm-hipblas-vllm-omni
- !!merge <<: *vllm-omni
name: "cuda12-vllm-omni-development"
uri: "quay.io/go-skynet/local-ai-backends:master-gpu-nvidia-cuda-12-vllm-omni"
mirrors:
- localai/localai-backends:master-gpu-nvidia-cuda-12-vllm-omni
- !!merge <<: *vllm-omni
name: "rocm-vllm-omni-development"
uri: "quay.io/go-skynet/local-ai-backends:master-gpu-rocm-hipblas-vllm-omni"
mirrors:
- localai/localai-backends:master-gpu-rocm-hipblas-vllm-omni
# rfdetr
- !!merge <<: *rfdetr
name: "rfdetr-development"
@@ -1243,22 +1302,6 @@
uri: "quay.io/go-skynet/local-ai-backends:master-metal-darwin-arm64-diffusers"
mirrors:
- localai/localai-backends:master-metal-darwin-arm64-diffusers
## exllama2
- !!merge <<: *exllama2
name: "exllama2-development"
capabilities:
nvidia: "cuda12-exllama2-development"
intel: "intel-exllama2-development"
- !!merge <<: *exllama2
name: "cuda12-exllama2"
uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-nvidia-cuda-12-exllama2"
mirrors:
- localai/localai-backends:latest-gpu-nvidia-cuda-12-exllama2
- !!merge <<: *exllama2
name: "cuda12-exllama2-development"
uri: "quay.io/go-skynet/local-ai-backends:master-gpu-nvidia-cuda-12-exllama2"
mirrors:
- localai/localai-backends:master-gpu-nvidia-cuda-12-exllama2
## kokoro
- !!merge <<: *kokoro
name: "kokoro-development"
@@ -1393,6 +1436,55 @@
uri: "quay.io/go-skynet/local-ai-backends:master-gpu-nvidia-cuda-13-moonshine"
mirrors:
- localai/localai-backends:master-gpu-nvidia-cuda-13-moonshine
## whisperx
- !!merge <<: *whisperx
name: "whisperx-development"
capabilities:
nvidia: "cuda12-whisperx-development"
amd: "rocm-whisperx-development"
default: "cpu-whisperx-development"
nvidia-cuda-13: "cuda13-whisperx-development"
nvidia-cuda-12: "cuda12-whisperx-development"
- !!merge <<: *whisperx
name: "cpu-whisperx"
uri: "quay.io/go-skynet/local-ai-backends:latest-cpu-whisperx"
mirrors:
- localai/localai-backends:latest-cpu-whisperx
- !!merge <<: *whisperx
name: "cpu-whisperx-development"
uri: "quay.io/go-skynet/local-ai-backends:master-cpu-whisperx"
mirrors:
- localai/localai-backends:master-cpu-whisperx
- !!merge <<: *whisperx
name: "cuda12-whisperx"
uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-nvidia-cuda-12-whisperx"
mirrors:
- localai/localai-backends:latest-gpu-nvidia-cuda-12-whisperx
- !!merge <<: *whisperx
name: "cuda12-whisperx-development"
uri: "quay.io/go-skynet/local-ai-backends:master-gpu-nvidia-cuda-12-whisperx"
mirrors:
- localai/localai-backends:master-gpu-nvidia-cuda-12-whisperx
- !!merge <<: *whisperx
name: "rocm-whisperx"
uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-rocm-hipblas-whisperx"
mirrors:
- localai/localai-backends:latest-gpu-rocm-hipblas-whisperx
- !!merge <<: *whisperx
name: "rocm-whisperx-development"
uri: "quay.io/go-skynet/local-ai-backends:master-gpu-rocm-hipblas-whisperx"
mirrors:
- localai/localai-backends:master-gpu-rocm-hipblas-whisperx
- !!merge <<: *whisperx
name: "cuda13-whisperx"
uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-nvidia-cuda-13-whisperx"
mirrors:
- localai/localai-backends:latest-gpu-nvidia-cuda-13-whisperx
- !!merge <<: *whisperx
name: "cuda13-whisperx-development"
uri: "quay.io/go-skynet/local-ai-backends:master-gpu-nvidia-cuda-13-whisperx"
mirrors:
- localai/localai-backends:master-gpu-nvidia-cuda-13-whisperx
## coqui
- !!merge <<: *coqui
@@ -1431,47 +1523,6 @@
uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-rocm-hipblas-coqui"
mirrors:
- localai/localai-backends:latest-gpu-rocm-hipblas-coqui
## bark
- !!merge <<: *bark
name: "bark-development"
capabilities:
nvidia: "cuda12-bark-development"
intel: "intel-bark-development"
amd: "rocm-bark-development"
- !!merge <<: *bark
name: "rocm-bark-development"
uri: "quay.io/go-skynet/local-ai-backends:master-gpu-rocm-hipblas-bark"
mirrors:
- localai/localai-backends:master-gpu-rocm-hipblas-bark
- !!merge <<: *bark
name: "intel-bark"
uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-intel-bark"
mirrors:
- localai/localai-backends:latest-gpu-intel-bark
- !!merge <<: *bark
name: "intel-bark-development"
uri: "quay.io/go-skynet/local-ai-backends:master-gpu-intel-bark"
mirrors:
- localai/localai-backends:master-gpu-intel-bark
- !!merge <<: *bark
name: "cuda12-bark"
uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-nvidia-cuda-12-bark"
mirrors:
- localai/localai-backends:latest-gpu-nvidia-cuda-12-bark
- !!merge <<: *bark
name: "rocm-bark"
uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-rocm-hipblas-bark"
mirrors:
- localai/localai-backends:latest-gpu-rocm-hipblas-bark
- !!merge <<: *bark
name: "cuda12-bark-development"
uri: "quay.io/go-skynet/local-ai-backends:master-gpu-nvidia-cuda-12-bark"
mirrors:
- localai/localai-backends:master-gpu-nvidia-cuda-12-bark
- !!merge <<: *barkcpp
name: "bark-cpp-development"
uri: "quay.io/go-skynet/local-ai-backends:master-bark-cpp"
alias: "bark-cpp"
## chatterbox
- !!merge <<: *chatterbox
name: "chatterbox-development"
@@ -1627,6 +1678,232 @@
uri: "quay.io/go-skynet/local-ai-backends:master-nvidia-l4t-cuda-13-arm64-vibevoice"
mirrors:
- localai/localai-backends:master-nvidia-l4t-cuda-13-arm64-vibevoice
## qwen-tts
- !!merge <<: *qwen-tts
name: "qwen-tts-development"
capabilities:
nvidia: "cuda12-qwen-tts-development"
intel: "intel-qwen-tts-development"
amd: "rocm-qwen-tts-development"
nvidia-l4t: "nvidia-l4t-qwen-tts-development"
default: "cpu-qwen-tts-development"
nvidia-cuda-13: "cuda13-qwen-tts-development"
nvidia-cuda-12: "cuda12-qwen-tts-development"
nvidia-l4t-cuda-12: "nvidia-l4t-qwen-tts-development"
nvidia-l4t-cuda-13: "cuda13-nvidia-l4t-arm64-qwen-tts-development"
- !!merge <<: *qwen-tts
name: "cpu-qwen-tts"
uri: "quay.io/go-skynet/local-ai-backends:latest-cpu-qwen-tts"
mirrors:
- localai/localai-backends:latest-cpu-qwen-tts
- !!merge <<: *qwen-tts
name: "cpu-qwen-tts-development"
uri: "quay.io/go-skynet/local-ai-backends:master-cpu-qwen-tts"
mirrors:
- localai/localai-backends:master-cpu-qwen-tts
- !!merge <<: *qwen-tts
name: "cuda12-qwen-tts"
uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-nvidia-cuda-12-qwen-tts"
mirrors:
- localai/localai-backends:latest-gpu-nvidia-cuda-12-qwen-tts
- !!merge <<: *qwen-tts
name: "cuda12-qwen-tts-development"
uri: "quay.io/go-skynet/local-ai-backends:master-gpu-nvidia-cuda-12-qwen-tts"
mirrors:
- localai/localai-backends:master-gpu-nvidia-cuda-12-qwen-tts
- !!merge <<: *qwen-tts
name: "cuda13-qwen-tts"
uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-nvidia-cuda-13-qwen-tts"
mirrors:
- localai/localai-backends:latest-gpu-nvidia-cuda-13-qwen-tts
- !!merge <<: *qwen-tts
name: "cuda13-qwen-tts-development"
uri: "quay.io/go-skynet/local-ai-backends:master-gpu-nvidia-cuda-13-qwen-tts"
mirrors:
- localai/localai-backends:master-gpu-nvidia-cuda-13-qwen-tts
- !!merge <<: *qwen-tts
name: "intel-qwen-tts"
uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-intel-qwen-tts"
mirrors:
- localai/localai-backends:latest-gpu-intel-qwen-tts
- !!merge <<: *qwen-tts
name: "intel-qwen-tts-development"
uri: "quay.io/go-skynet/local-ai-backends:master-gpu-intel-qwen-tts"
mirrors:
- localai/localai-backends:master-gpu-intel-qwen-tts
- !!merge <<: *qwen-tts
name: "rocm-qwen-tts"
uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-rocm-hipblas-qwen-tts"
mirrors:
- localai/localai-backends:latest-gpu-rocm-hipblas-qwen-tts
- !!merge <<: *qwen-tts
name: "rocm-qwen-tts-development"
uri: "quay.io/go-skynet/local-ai-backends:master-gpu-rocm-hipblas-qwen-tts"
mirrors:
- localai/localai-backends:master-gpu-rocm-hipblas-qwen-tts
- !!merge <<: *qwen-tts
name: "nvidia-l4t-qwen-tts"
uri: "quay.io/go-skynet/local-ai-backends:latest-nvidia-l4t-qwen-tts"
mirrors:
- localai/localai-backends:latest-nvidia-l4t-qwen-tts
- !!merge <<: *qwen-tts
name: "nvidia-l4t-qwen-tts-development"
uri: "quay.io/go-skynet/local-ai-backends:master-nvidia-l4t-qwen-tts"
mirrors:
- localai/localai-backends:master-nvidia-l4t-qwen-tts
- !!merge <<: *qwen-tts
name: "cuda13-nvidia-l4t-arm64-qwen-tts"
uri: "quay.io/go-skynet/local-ai-backends:latest-nvidia-l4t-cuda-13-arm64-qwen-tts"
mirrors:
- localai/localai-backends:latest-nvidia-l4t-cuda-13-arm64-qwen-tts
- !!merge <<: *qwen-tts
name: "cuda13-nvidia-l4t-arm64-qwen-tts-development"
uri: "quay.io/go-skynet/local-ai-backends:master-nvidia-l4t-cuda-13-arm64-qwen-tts"
mirrors:
- localai/localai-backends:master-nvidia-l4t-cuda-13-arm64-qwen-tts
## qwen-asr
- !!merge <<: *qwen-asr
name: "qwen-asr-development"
capabilities:
nvidia: "cuda12-qwen-asr-development"
intel: "intel-qwen-asr-development"
amd: "rocm-qwen-asr-development"
nvidia-l4t: "nvidia-l4t-qwen-asr-development"
default: "cpu-qwen-asr-development"
nvidia-cuda-13: "cuda13-qwen-asr-development"
nvidia-cuda-12: "cuda12-qwen-asr-development"
nvidia-l4t-cuda-12: "nvidia-l4t-qwen-asr-development"
nvidia-l4t-cuda-13: "cuda13-nvidia-l4t-arm64-qwen-asr-development"
- !!merge <<: *qwen-asr
name: "cpu-qwen-asr"
uri: "quay.io/go-skynet/local-ai-backends:latest-cpu-qwen-asr"
mirrors:
- localai/localai-backends:latest-cpu-qwen-asr
- !!merge <<: *qwen-asr
name: "cpu-qwen-asr-development"
uri: "quay.io/go-skynet/local-ai-backends:master-cpu-qwen-asr"
mirrors:
- localai/localai-backends:master-cpu-qwen-asr
- !!merge <<: *qwen-asr
name: "cuda12-qwen-asr"
uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-nvidia-cuda-12-qwen-asr"
mirrors:
- localai/localai-backends:latest-gpu-nvidia-cuda-12-qwen-asr
- !!merge <<: *qwen-asr
name: "cuda12-qwen-asr-development"
uri: "quay.io/go-skynet/local-ai-backends:master-gpu-nvidia-cuda-12-qwen-asr"
mirrors:
- localai/localai-backends:master-gpu-nvidia-cuda-12-qwen-asr
- !!merge <<: *qwen-asr
name: "cuda13-qwen-asr"
uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-nvidia-cuda-13-qwen-asr"
mirrors:
- localai/localai-backends:latest-gpu-nvidia-cuda-13-qwen-asr
- !!merge <<: *qwen-asr
name: "cuda13-qwen-asr-development"
uri: "quay.io/go-skynet/local-ai-backends:master-gpu-nvidia-cuda-13-qwen-asr"
mirrors:
- localai/localai-backends:master-gpu-nvidia-cuda-13-qwen-asr
- !!merge <<: *qwen-asr
name: "intel-qwen-asr"
uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-intel-qwen-asr"
mirrors:
- localai/localai-backends:latest-gpu-intel-qwen-asr
- !!merge <<: *qwen-asr
name: "intel-qwen-asr-development"
uri: "quay.io/go-skynet/local-ai-backends:master-gpu-intel-qwen-asr"
mirrors:
- localai/localai-backends:master-gpu-intel-qwen-asr
- !!merge <<: *qwen-asr
name: "rocm-qwen-asr"
uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-rocm-hipblas-qwen-asr"
mirrors:
- localai/localai-backends:latest-gpu-rocm-hipblas-qwen-asr
- !!merge <<: *qwen-asr
name: "rocm-qwen-asr-development"
uri: "quay.io/go-skynet/local-ai-backends:master-gpu-rocm-hipblas-qwen-asr"
mirrors:
- localai/localai-backends:master-gpu-rocm-hipblas-qwen-asr
- !!merge <<: *qwen-asr
name: "nvidia-l4t-qwen-asr"
uri: "quay.io/go-skynet/local-ai-backends:latest-nvidia-l4t-qwen-asr"
mirrors:
- localai/localai-backends:latest-nvidia-l4t-qwen-asr
- !!merge <<: *qwen-asr
name: "nvidia-l4t-qwen-asr-development"
uri: "quay.io/go-skynet/local-ai-backends:master-nvidia-l4t-qwen-asr"
mirrors:
- localai/localai-backends:master-nvidia-l4t-qwen-asr
- !!merge <<: *qwen-asr
name: "cuda13-nvidia-l4t-arm64-qwen-asr"
uri: "quay.io/go-skynet/local-ai-backends:latest-nvidia-l4t-cuda-13-arm64-qwen-asr"
mirrors:
- localai/localai-backends:latest-nvidia-l4t-cuda-13-arm64-qwen-asr
- !!merge <<: *qwen-asr
name: "cuda13-nvidia-l4t-arm64-qwen-asr-development"
uri: "quay.io/go-skynet/local-ai-backends:master-nvidia-l4t-cuda-13-arm64-qwen-asr"
mirrors:
- localai/localai-backends:master-nvidia-l4t-cuda-13-arm64-qwen-asr
## voxcpm
- !!merge <<: *voxcpm
name: "voxcpm-development"
capabilities:
nvidia: "cuda12-voxcpm-development"
intel: "intel-voxcpm-development"
amd: "rocm-voxcpm-development"
default: "cpu-voxcpm-development"
nvidia-cuda-13: "cuda13-voxcpm-development"
nvidia-cuda-12: "cuda12-voxcpm-development"
- !!merge <<: *voxcpm
name: "cpu-voxcpm"
uri: "quay.io/go-skynet/local-ai-backends:latest-cpu-voxcpm"
mirrors:
- localai/localai-backends:latest-cpu-voxcpm
- !!merge <<: *voxcpm
name: "cpu-voxcpm-development"
uri: "quay.io/go-skynet/local-ai-backends:master-cpu-voxcpm"
mirrors:
- localai/localai-backends:master-cpu-voxcpm
- !!merge <<: *voxcpm
name: "cuda12-voxcpm"
uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-nvidia-cuda-12-voxcpm"
mirrors:
- localai/localai-backends:latest-gpu-nvidia-cuda-12-voxcpm
- !!merge <<: *voxcpm
name: "cuda12-voxcpm-development"
uri: "quay.io/go-skynet/local-ai-backends:master-gpu-nvidia-cuda-12-voxcpm"
mirrors:
- localai/localai-backends:master-gpu-nvidia-cuda-12-voxcpm
- !!merge <<: *voxcpm
name: "cuda13-voxcpm"
uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-nvidia-cuda-13-voxcpm"
mirrors:
- localai/localai-backends:latest-gpu-nvidia-cuda-13-voxcpm
- !!merge <<: *voxcpm
name: "cuda13-voxcpm-development"
uri: "quay.io/go-skynet/local-ai-backends:master-gpu-nvidia-cuda-13-voxcpm"
mirrors:
- localai/localai-backends:master-gpu-nvidia-cuda-13-voxcpm
- !!merge <<: *voxcpm
name: "intel-voxcpm"
uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-intel-voxcpm"
mirrors:
- localai/localai-backends:latest-gpu-intel-voxcpm
- !!merge <<: *voxcpm
name: "intel-voxcpm-development"
uri: "quay.io/go-skynet/local-ai-backends:master-gpu-intel-voxcpm"
mirrors:
- localai/localai-backends:master-gpu-intel-voxcpm
- !!merge <<: *voxcpm
name: "rocm-voxcpm"
uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-rocm-hipblas-voxcpm"
mirrors:
- localai/localai-backends:latest-gpu-rocm-hipblas-voxcpm
- !!merge <<: *voxcpm
name: "rocm-voxcpm-development"
uri: "quay.io/go-skynet/local-ai-backends:master-gpu-rocm-hipblas-voxcpm"
mirrors:
- localai/localai-backends:master-gpu-rocm-hipblas-voxcpm
## pocket-tts
- !!merge <<: *pocket-tts
name: "pocket-tts-development"

View File

@@ -16,10 +16,8 @@ The Python backends use a unified build system based on `libbackend.sh` that pro
- **transformers** - Hugging Face Transformers framework (PyTorch-based)
- **vllm** - High-performance LLM inference engine
- **mlx** - Apple Silicon optimized ML framework
- **exllama2** - ExLlama2 quantized models
### Audio & Speech
- **bark** - Text-to-speech synthesis
- **coqui** - Coqui TTS models
- **faster-whisper** - Fast Whisper speech recognition
- **kitten-tts** - Lightweight TTS
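Each backend listed here follows the same `libbackend.sh` convention: its `install.sh` sources the shared helper and calls `installRequirements`, exactly as the qwen-asr and qwen-tts install scripts further down in this diff do. A minimal sketch of that layout (hypothetical backend, same structure as those scripts):

```bash
#!/bin/bash
set -e

# Locate the shared helpers; backends ship them either in ./common or ../common.
backend_dir=$(dirname $0)
if [ -d $backend_dir/common ]; then
    source $backend_dir/common/libbackend.sh
else
    source $backend_dir/../common/libbackend.sh
fi

# installRequirements (provided by libbackend.sh) installs the backend's
# requirements files with uv for the selected build profile.
installRequirements
```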

View File

@@ -1,16 +0,0 @@
# Creating a separate environment for ttsbark project
```
make ttsbark
```
# Testing the gRPC server
```
<The path of your python interpreter> -m unittest test_ttsbark.py
```
For example
```
/opt/conda/envs/bark/bin/python -m unittest extra/grpc/bark/test_ttsbark.py
```

View File

@@ -1,98 +0,0 @@
#!/usr/bin/env python3
"""
This is an extra gRPC server of LocalAI for Bark TTS
"""
from concurrent import futures
import time
import argparse
import signal
import sys
import os
from scipy.io.wavfile import write as write_wav
import backend_pb2
import backend_pb2_grpc
from bark import SAMPLE_RATE, generate_audio, preload_models
import grpc
_ONE_DAY_IN_SECONDS = 60 * 60 * 24
# If MAX_WORKERS is specified in the environment, use it; otherwise default to 1
MAX_WORKERS = int(os.environ.get('PYTHON_GRPC_MAX_WORKERS', '1'))
# Implement the BackendServicer class with the service methods
class BackendServicer(backend_pb2_grpc.BackendServicer):
"""
BackendServicer is the class that implements the gRPC service
"""
def Health(self, request, context):
return backend_pb2.Reply(message=bytes("OK", 'utf-8'))
def LoadModel(self, request, context):
model_name = request.Model
try:
print("Preparing models, please wait", file=sys.stderr)
# download and load all models
preload_models()
except Exception as err:
return backend_pb2.Result(success=False, message=f"Unexpected {err=}, {type(err)=}")
# Implement your logic here for the LoadModel service
# Replace this with your desired response
return backend_pb2.Result(message="Model loaded successfully", success=True)
def TTS(self, request, context):
model = request.model
print(request, file=sys.stderr)
try:
audio_array = None
if model != "":
audio_array = generate_audio(request.text, history_prompt=model)
else:
audio_array = generate_audio(request.text)
print("saving to", request.dst, file=sys.stderr)
# save audio to disk
write_wav(request.dst, SAMPLE_RATE, audio_array)
print("saved to", request.dst, file=sys.stderr)
print("tts for", file=sys.stderr)
print(request, file=sys.stderr)
except Exception as err:
return backend_pb2.Result(success=False, message=f"Unexpected {err=}, {type(err)=}")
return backend_pb2.Result(success=True)
def serve(address):
server = grpc.server(futures.ThreadPoolExecutor(max_workers=MAX_WORKERS),
options=[
('grpc.max_message_length', 50 * 1024 * 1024), # 50MB
('grpc.max_send_message_length', 50 * 1024 * 1024), # 50MB
('grpc.max_receive_message_length', 50 * 1024 * 1024), # 50MB
])
backend_pb2_grpc.add_BackendServicer_to_server(BackendServicer(), server)
server.add_insecure_port(address)
server.start()
print("Server started. Listening on: " + address, file=sys.stderr)
# Define the signal handler function
def signal_handler(sig, frame):
print("Received termination signal. Shutting down...")
server.stop(0)
sys.exit(0)
# Set the signal handlers for SIGINT and SIGTERM
signal.signal(signal.SIGINT, signal_handler)
signal.signal(signal.SIGTERM, signal_handler)
try:
while True:
time.sleep(_ONE_DAY_IN_SECONDS)
except KeyboardInterrupt:
server.stop(0)
if __name__ == "__main__":
parser = argparse.ArgumentParser(description="Run the gRPC server.")
parser.add_argument(
"--addr", default="localhost:50051", help="The address to bind the server to."
)
args = parser.parse_args()
serve(args.addr)

View File

@@ -1,19 +0,0 @@
#!/bin/bash
set -e
backend_dir=$(dirname $0)
if [ -d $backend_dir/common ]; then
source $backend_dir/common/libbackend.sh
else
source $backend_dir/../common/libbackend.sh
fi
# This is here because the Intel pip index is broken: it returns 200 status codes for every package name but doesn't return any package links.
# This makes uv think that the package exists in the Intel pip index, and by default it stops looking at other pip indexes once it finds a match.
# We need uv to keep falling through to the default PyPI index so it can find optimum[openvino] there.
# The --upgrade flag actually allows us to *downgrade* torch to the version provided in the Intel pip index.
if [ "x${BUILD_PROFILE}" == "xintel" ]; then
EXTRA_PIP_INSTALL_FLAGS+=" --upgrade --index-strategy=unsafe-first-match"
fi
installRequirements

View File

@@ -1,4 +0,0 @@
transformers
accelerate
torch==2.4.1
torchaudio==2.4.1

View File

@@ -1,4 +0,0 @@
torch==2.4.1
torchaudio==2.4.1
transformers
accelerate

View File

@@ -1,5 +0,0 @@
--extra-index-url https://download.pytorch.org/whl/rocm6.4
torch==2.8.0+rocm6.4
torchaudio==2.8.0+rocm6.4
transformers
accelerate

View File

@@ -1,9 +0,0 @@
--extra-index-url https://pytorch-extension.intel.com/release-whl/stable/xpu/us/
intel-extension-for-pytorch==2.8.10+xpu
torch==2.3.1+cxx11.abi
torchaudio==2.3.1+cxx11.abi
oneccl_bind_pt==2.3.100+xpu
optimum[openvino]
setuptools
transformers
accelerate

View File

@@ -1,4 +0,0 @@
bark==0.1.5
grpcio==1.76.0
protobuf
certifi

View File

@@ -1,7 +1,6 @@
--extra-index-url https://pytorch-extension.intel.com/release-whl/stable/xpu/us/
intel-extension-for-pytorch==2.3.110+xpu
torch==2.3.1+cxx11.abi
torchaudio==2.3.1+cxx11.abi
--extra-index-url https://download.pytorch.org/whl/xpu
torch
torchaudio
transformers
numpy>=1.24.0,<1.26.0
# https://github.com/mudler/LocalAI/pull/6240#issuecomment-3329518289

View File

@@ -398,7 +398,7 @@ function runProtogen() {
# NOTE: for BUILD_PROFILE==intel, this function does NOT automatically use the Intel python package index.
# you may want to add the following line to a requirements-intel.txt if you use one:
#
# --index-url https://pytorch-extension.intel.com/release-whl/stable/xpu/us/
# --index-url https://download.pytorch.org/whl/xpu
#
# If you need to add extra flags into the pip install command you can do so by setting the variable EXTRA_PIP_INSTALL_FLAGS
# before calling installRequirements. For example:
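The hunk ends before the file's own example; purely as an illustration, the install scripts added elsewhere in this diff use the variable like this before calling `installRequirements`:

```bash
# Extra flags passed through to the uv pip install command run by installRequirements.
EXTRA_PIP_INSTALL_FLAGS="--no-build-isolation"

# Flags can also be appended conditionally per build profile:
if [ "x${BUILD_PROFILE}" == "xintel" ]; then
    EXTRA_PIP_INSTALL_FLAGS+=" --upgrade --index-strategy=unsafe-first-match"
fi

installRequirements
```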

View File

@@ -1,5 +1,4 @@
--extra-index-url https://pytorch-extension.intel.com/release-whl/stable/xpu/us/
intel-extension-for-pytorch==2.8.10+xpu
--extra-index-url https://download.pytorch.org/whl/xpu
torch==2.8.0
oneccl_bind_pt==2.8.0+xpu
optimum[openvino]

View File

@@ -1,4 +1,4 @@
# Creating a separate environment for ttsbark project
# Creating a separate environment for coqui project
```
make coqui

View File

@@ -1,6 +1,6 @@
#!/usr/bin/env python3
"""
This is an extra gRPC server of LocalAI for Bark TTS
This is an extra gRPC server of LocalAI for Coqui TTS
"""
from concurrent import futures
import time

View File

@@ -1,8 +1,6 @@
--extra-index-url https://pytorch-extension.intel.com/release-whl/stable/xpu/us/
intel-extension-for-pytorch==2.3.110+xpu
torch==2.3.1+cxx11.abi
torchaudio==2.3.1+cxx11.abi
oneccl_bind_pt==2.3.100+xpu
--extra-index-url https://download.pytorch.org/whl/xpu
torch==2.8.0+xpu
torchaudio==2.8.0+xpu
optimum[openvino]
setuptools
transformers==4.48.3

View File

@@ -42,12 +42,8 @@ from transformers import T5EncoderModel
from safetensors.torch import load_file
# Import LTX-2 specific utilities
try:
from diffusers.pipelines.ltx2.export_utils import encode_video as ltx2_encode_video
LTX2_AVAILABLE = True
except ImportError:
LTX2_AVAILABLE = False
ltx2_encode_video = None
from diffusers.pipelines.ltx2.export_utils import encode_video as ltx2_encode_video
from diffusers import LTX2VideoTransformer3DModel, GGUFQuantizationConfig
_ONE_DAY_IN_SECONDS = 60 * 60 * 24
COMPEL = os.environ.get("COMPEL", "0") == "1"
@@ -302,12 +298,96 @@ class BackendServicer(backend_pb2_grpc.BackendServicer):
if pipeline_type == "LTX2ImageToVideoPipeline":
self.img2vid = True
self.ltx2_pipeline = True
pipe = load_diffusers_pipeline(
class_name="LTX2ImageToVideoPipeline",
model_id=request.Model,
torch_dtype=torchType,
variant=variant
)
# Check if loading from single file (GGUF)
if fromSingleFile and LTX2VideoTransformer3DModel is not None:
_, single_file_ext = os.path.splitext(modelFile)
if single_file_ext == ".gguf":
# Load transformer from single GGUF file with quantization
transformer_kwargs = {}
quantization_config = GGUFQuantizationConfig(compute_dtype=torchType)
transformer_kwargs["quantization_config"] = quantization_config
transformer = LTX2VideoTransformer3DModel.from_single_file(
modelFile,
config=request.Model, # Use request.Model as the config/model_id
subfolder="transformer",
**transformer_kwargs,
)
# Load pipeline with custom transformer
pipe = load_diffusers_pipeline(
class_name="LTX2ImageToVideoPipeline",
model_id=request.Model,
transformer=transformer,
torch_dtype=torchType,
)
else:
# Single file but not GGUF - use standard single file loading
pipe = load_diffusers_pipeline(
class_name="LTX2ImageToVideoPipeline",
model_id=modelFile,
from_single_file=True,
torch_dtype=torchType,
)
else:
# Standard loading from pretrained
pipe = load_diffusers_pipeline(
class_name="LTX2ImageToVideoPipeline",
model_id=request.Model,
torch_dtype=torchType,
variant=variant
)
if not DISABLE_CPU_OFFLOAD:
pipe.enable_model_cpu_offload()
return pipe
# LTX2Pipeline - text-to-video pipeline, needs txt2vid flag, CPU offload, and special handling
if pipeline_type == "LTX2Pipeline":
self.txt2vid = True
self.ltx2_pipeline = True
# Check if loading from single file (GGUF)
if fromSingleFile and LTX2VideoTransformer3DModel is not None:
_, single_file_ext = os.path.splitext(modelFile)
if single_file_ext == ".gguf":
# Load transformer from single GGUF file with quantization
transformer_kwargs = {}
quantization_config = GGUFQuantizationConfig(compute_dtype=torchType)
transformer_kwargs["quantization_config"] = quantization_config
transformer = LTX2VideoTransformer3DModel.from_single_file(
modelFile,
config=request.Model, # Use request.Model as the config/model_id
subfolder="transformer",
**transformer_kwargs,
)
# Load pipeline with custom transformer
pipe = load_diffusers_pipeline(
class_name="LTX2Pipeline",
model_id=request.Model,
transformer=transformer,
torch_dtype=torchType,
)
else:
# Single file but not GGUF - use standard single file loading
pipe = load_diffusers_pipeline(
class_name="LTX2Pipeline",
model_id=modelFile,
from_single_file=True,
torch_dtype=torchType,
)
else:
# Standard loading from pretrained
pipe = load_diffusers_pipeline(
class_name="LTX2Pipeline",
model_id=request.Model,
torch_dtype=torchType,
variant=variant
)
if not DISABLE_CPU_OFFLOAD:
pipe.enable_model_cpu_offload()
return pipe
@@ -428,6 +508,8 @@ class BackendServicer(backend_pb2_grpc.BackendServicer):
self.txt2vid = False
self.ltx2_pipeline = False
print(f"LoadModel: PipelineType from request: {request.PipelineType}", file=sys.stderr)
# Load pipeline using dynamic loader
# Special cases that require custom initialization are handled first
self.pipe = self._load_pipeline(
@@ -437,6 +519,8 @@ class BackendServicer(backend_pb2_grpc.BackendServicer):
torchType=torchType,
variant=variant
)
print(f"LoadModel: After loading - ltx2_pipeline: {self.ltx2_pipeline}, img2vid: {self.img2vid}, txt2vid: {self.txt2vid}, PipelineType: {self.PipelineType}", file=sys.stderr)
if CLIPSKIP and request.CLIPSkip != 0:
self.clip_skip = request.CLIPSkip
@@ -674,14 +758,20 @@ class BackendServicer(backend_pb2_grpc.BackendServicer):
try:
prompt = request.prompt
if not prompt:
print(f"GenerateVideo: No prompt provided for video generation.", file=sys.stderr)
return backend_pb2.Result(success=False, message="No prompt provided for video generation")
# Debug: Print raw request values
print(f"GenerateVideo: Raw request values - num_frames: {request.num_frames}, fps: {request.fps}, cfg_scale: {request.cfg_scale}, step: {request.step}", file=sys.stderr)
# Set default values from request or use defaults
num_frames = request.num_frames if request.num_frames > 0 else 81
fps = request.fps if request.fps > 0 else 16
cfg_scale = request.cfg_scale if request.cfg_scale > 0 else 4.0
num_inference_steps = request.step if request.step > 0 else 40
print(f"GenerateVideo: Using values - num_frames: {num_frames}, fps: {fps}, cfg_scale: {cfg_scale}, num_inference_steps: {num_inference_steps}", file=sys.stderr)
# Prepare generation parameters
kwargs = {
"prompt": prompt,
@@ -707,19 +797,34 @@ class BackendServicer(backend_pb2_grpc.BackendServicer):
kwargs["end_image"] = load_image(request.end_image)
print(f"Generating video with {kwargs=}", file=sys.stderr)
print(f"GenerateVideo: Pipeline type: {self.PipelineType}, ltx2_pipeline flag: {self.ltx2_pipeline}", file=sys.stderr)
# Generate video frames based on pipeline type
if self.ltx2_pipeline or self.PipelineType == "LTX2ImageToVideoPipeline":
# LTX-2 image-to-video generation with audio
if not LTX2_AVAILABLE:
return backend_pb2.Result(success=False, message="LTX-2 pipeline requires diffusers.pipelines.ltx2.export_utils")
if self.ltx2_pipeline or self.PipelineType in ["LTX2Pipeline", "LTX2ImageToVideoPipeline"]:
# LTX-2 generation with audio (supports both text-to-video and image-to-video)
# Determine if this is text-to-video (no image) or image-to-video (has image)
has_image = bool(request.start_image)
# LTX-2 uses 'image' parameter instead of 'start_image'
if request.start_image:
image = load_image(request.start_image)
kwargs["image"] = image
# Remove start_image if it was added
kwargs.pop("start_image", None)
# Remove image-related parameters that might have been added earlier
kwargs.pop("start_image", None)
kwargs.pop("end_image", None)
# LTX2ImageToVideoPipeline uses 'image' parameter for image-to-video
# LTX2Pipeline (text-to-video) doesn't need an image parameter
if has_image:
# Image-to-video: use 'image' parameter
if self.PipelineType == "LTX2ImageToVideoPipeline":
image = load_image(request.start_image)
kwargs["image"] = image
print(f"LTX-2: Using image-to-video mode with image", file=sys.stderr)
else:
# If pipeline type is LTX2Pipeline but we have an image, we can't do image-to-video
return backend_pb2.Result(success=False, message="LTX2Pipeline does not support image-to-video. Use LTX2ImageToVideoPipeline for image-to-video generation.")
else:
# Text-to-video: no image parameter needed
# Ensure no image-related kwargs are present
kwargs.pop("image", None)
print(f"LTX-2: Using text-to-video mode (no image)", file=sys.stderr)
# LTX-2 uses 'frame_rate' instead of 'fps'
frame_rate = float(fps)
@@ -730,20 +835,45 @@ class BackendServicer(backend_pb2_grpc.BackendServicer):
kwargs["return_dict"] = False
# Generate video and audio
video, audio = self.pipe(**kwargs)
print(f"LTX-2: Generating with kwargs: {kwargs}", file=sys.stderr)
try:
video, audio = self.pipe(**kwargs)
print(f"LTX-2: Generated video shape: {video.shape}, audio shape: {audio.shape}", file=sys.stderr)
except Exception as e:
print(f"LTX-2: Error during pipe() call: {e}", file=sys.stderr)
traceback.print_exc()
return backend_pb2.Result(success=False, message=f"Error generating video with LTX-2 pipeline: {e}")
# Convert video to uint8 format
video = (video * 255).round().astype("uint8")
video = torch.from_numpy(video)
print(f"LTX-2: Converting video, shape after conversion: {video.shape}", file=sys.stderr)
print(f"LTX-2: Audio sample rate: {self.pipe.vocoder.config.output_sampling_rate}", file=sys.stderr)
print(f"LTX-2: Output path: {request.dst}", file=sys.stderr)
# Use LTX-2's encode_video function which handles audio
ltx2_encode_video(
video[0],
fps=frame_rate,
audio=audio[0].float().cpu(),
audio_sample_rate=self.pipe.vocoder.config.output_sampling_rate,
output_path=request.dst,
)
try:
ltx2_encode_video(
video[0],
fps=frame_rate,
audio=audio[0].float().cpu(),
audio_sample_rate=self.pipe.vocoder.config.output_sampling_rate,
output_path=request.dst,
)
# Verify file was created and has content
import os
if os.path.exists(request.dst):
file_size = os.path.getsize(request.dst)
print(f"LTX-2: Video file created successfully, size: {file_size} bytes", file=sys.stderr)
if file_size == 0:
return backend_pb2.Result(success=False, message=f"Video file was created but is empty (0 bytes). Check LTX-2 encode_video function.")
else:
return backend_pb2.Result(success=False, message=f"Video file was not created at {request.dst}")
except Exception as e:
print(f"LTX-2: Error encoding video: {e}", file=sys.stderr)
traceback.print_exc()
return backend_pb2.Result(success=False, message=f"Error encoding video: {e}")
return backend_pb2.Result(message="Video generated successfully", success=True)
elif self.PipelineType == "WanPipeline":
@@ -785,11 +915,23 @@ class BackendServicer(backend_pb2_grpc.BackendServicer):
output = self.pipe(**kwargs)
frames = output.frames[0]
else:
print(f"GenerateVideo: Pipeline {self.PipelineType} does not match any known video pipeline handler", file=sys.stderr)
return backend_pb2.Result(success=False, message=f"Pipeline {self.PipelineType} does not support video generation")
# Export video (for non-LTX-2 pipelines)
print(f"GenerateVideo: Exporting video to {request.dst} with fps={fps}", file=sys.stderr)
export_to_video(frames, request.dst, fps=fps)
# Verify file was created
import os
if os.path.exists(request.dst):
file_size = os.path.getsize(request.dst)
print(f"GenerateVideo: Video file created, size: {file_size} bytes", file=sys.stderr)
if file_size == 0:
return backend_pb2.Result(success=False, message=f"Video file was created but is empty (0 bytes)")
else:
return backend_pb2.Result(success=False, message=f"Video file was not created at {request.dst}")
return backend_pb2.Result(message="Video generated successfully", success=True)
except Exception as err:

View File

@@ -1,8 +1,6 @@
--extra-index-url https://pytorch-extension.intel.com/release-whl/stable/xpu/us/
intel-extension-for-pytorch==2.3.110+xpu
torch==2.5.1+cxx11.abi
torchvision==0.20.1+cxx11.abi
oneccl_bind_pt==2.8.0+xpu
--extra-index-url https://download.pytorch.org/whl/xpu
torch
torchvision
optimum[openvino]
setuptools
git+https://github.com/huggingface/diffusers

View File

@@ -3,3 +3,4 @@ grpcio==1.76.0
pillow
protobuf
certifi
av

View File

@@ -1 +0,0 @@
source

View File

@@ -1,17 +0,0 @@
.PHONY: exllama2
exllama2:
bash install.sh
.PHONY: run
run: exllama2
@echo "Running exllama2..."
bash run.sh
@echo "exllama2 run."
.PHONY: protogen-clean
protogen-clean:
$(RM) backend_pb2_grpc.py backend_pb2.py
.PHONY: clean
clean: protogen-clean
$(RM) -r venv source __pycache__

View File

@@ -1,143 +0,0 @@
#!/usr/bin/env python3
import grpc
from concurrent import futures
import time
import backend_pb2
import backend_pb2_grpc
import argparse
import signal
import sys
import os
import glob
from pathlib import Path
import torch
import torch.nn.functional as F
from torch import version as torch_version
from exllamav2.generator import (
ExLlamaV2BaseGenerator,
ExLlamaV2Sampler
)
from exllamav2 import (
ExLlamaV2,
ExLlamaV2Config,
ExLlamaV2Cache,
ExLlamaV2Cache_8bit,
ExLlamaV2Tokenizer,
model_init,
)
_ONE_DAY_IN_SECONDS = 60 * 60 * 24
# If MAX_WORKERS is specified in the environment, use it; otherwise default to 1
MAX_WORKERS = int(os.environ.get('PYTHON_GRPC_MAX_WORKERS', '1'))
# Implement the BackendServicer class with the service methods
class BackendServicer(backend_pb2_grpc.BackendServicer):
def Health(self, request, context):
return backend_pb2.Reply(message=bytes("OK", 'utf-8'))
def LoadModel(self, request, context):
try:
model_directory = request.ModelFile
config = ExLlamaV2Config()
config.model_dir = model_directory
config.prepare()
model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, lazy=True)
model.load_autosplit(cache)
tokenizer = ExLlamaV2Tokenizer(config)
# Initialize generator
generator = ExLlamaV2BaseGenerator(model, cache, tokenizer)
self.generator = generator
generator.warmup()
self.model = model
self.tokenizer = tokenizer
self.cache = cache
except Exception as err:
return backend_pb2.Result(success=False, message=f"Unexpected {err=}, {type(err)=}")
return backend_pb2.Result(message="Model loaded successfully", success=True)
def Predict(self, request, context):
penalty = 1.15
if request.Penalty != 0.0:
penalty = request.Penalty
settings = ExLlamaV2Sampler.Settings()
settings.temperature = request.Temperature
settings.top_k = request.TopK
settings.top_p = request.TopP
settings.token_repetition_penalty = penalty
settings.disallow_tokens(self.tokenizer, [self.tokenizer.eos_token_id])
tokens = 512
if request.Tokens != 0:
tokens = request.Tokens
output = self.generator.generate_simple(
request.Prompt, settings, tokens)
# Remove prompt from response if present
if request.Prompt in output:
output = output.replace(request.Prompt, "")
return backend_pb2.Result(message=bytes(output, encoding='utf-8'))
def PredictStream(self, request, context):
# Implement PredictStream RPC
# for reply in some_data_generator():
# yield reply
# Not implemented yet
return self.Predict(request, context)
def serve(address):
server = grpc.server(futures.ThreadPoolExecutor(max_workers=MAX_WORKERS),
options=[
('grpc.max_message_length', 50 * 1024 * 1024), # 50MB
('grpc.max_send_message_length', 50 * 1024 * 1024), # 50MB
('grpc.max_receive_message_length', 50 * 1024 * 1024), # 50MB
])
backend_pb2_grpc.add_BackendServicer_to_server(BackendServicer(), server)
server.add_insecure_port(address)
server.start()
print("Server started. Listening on: " + address, file=sys.stderr)
# Define the signal handler function
def signal_handler(sig, frame):
print("Received termination signal. Shutting down...")
server.stop(0)
sys.exit(0)
# Set the signal handlers for SIGINT and SIGTERM
signal.signal(signal.SIGINT, signal_handler)
signal.signal(signal.SIGTERM, signal_handler)
try:
while True:
time.sleep(_ONE_DAY_IN_SECONDS)
except KeyboardInterrupt:
server.stop(0)
if __name__ == "__main__":
parser = argparse.ArgumentParser(description="Run the gRPC server.")
parser.add_argument(
"--addr", default="localhost:50051", help="The address to bind the server to."
)
args = parser.parse_args()
serve(args.addr)

View File

@@ -1,21 +0,0 @@
#!/bin/bash
set -e
LIMIT_TARGETS="cublas"
EXTRA_PIP_INSTALL_FLAGS="--no-build-isolation"
EXLLAMA2_VERSION=c0ddebaaaf8ffd1b3529c2bb654e650bce2f790f
backend_dir=$(dirname $0)
if [ -d $backend_dir/common ]; then
source $backend_dir/common/libbackend.sh
else
source $backend_dir/../common/libbackend.sh
fi
installRequirements
git clone https://github.com/turboderp/exllamav2 $MY_DIR/source
pushd ${MY_DIR}/source && git checkout -b build ${EXLLAMA2_VERSION} && popd
# This installs exllamav2 in JIT mode so it will compile the appropriate torch extension at runtime
EXLLAMA_NOCOMPILE= uv pip install ${EXTRA_PIP_INSTALL_FLAGS} ${MY_DIR}/source/

View File

@@ -1,3 +0,0 @@
transformers
accelerate
torch==2.4.1

View File

@@ -1,3 +0,0 @@
torch==2.4.1
transformers
accelerate

View File

@@ -1,4 +0,0 @@
# This is here to trigger the install script to add --no-build-isolation to the uv pip install commands
# exllama2 does not specify its build requirements per PEP 517, so we need to provide some things ourselves
wheel
setuptools

View File

@@ -1,5 +0,0 @@
grpcio==1.76.0
protobuf
certifi
wheel
setuptools

View File

@@ -1,6 +1,6 @@
#!/usr/bin/env python3
"""
This is an extra gRPC server of LocalAI for Bark TTS
This is an extra gRPC server of LocalAI for Faster Whisper transcription
"""
from concurrent import futures
import time
@@ -40,7 +40,7 @@ class BackendServicer(backend_pb2_grpc.BackendServicer):
device = "mps"
try:
print("Preparing models, please wait", file=sys.stderr)
self.model = WhisperModel(request.Model, device=device, compute_type="float16")
self.model = WhisperModel(request.Model, device=device, compute_type="default")
except Exception as err:
return backend_pb2.Result(success=False, message=f"Unexpected {err=}, {type(err)=}")
# Implement your logic here for the LoadModel service
@@ -55,11 +55,12 @@ class BackendServicer(backend_pb2_grpc.BackendServicer):
id = 0
for segment in segments:
print("[%.2fs -> %.2fs] %s" % (segment.start, segment.end, segment.text))
resultSegments.append(backend_pb2.TranscriptSegment(id=id, start=segment.start, end=segment.end, text=segment.text))
resultSegments.append(backend_pb2.TranscriptSegment(id=id, start=int(segment.start)*1e9, end=int(segment.end)*1e9, text=segment.text))
text += segment.text
id += 1
id += 1
except Exception as err:
print(f"Unexpected {err=}, {type(err)=}", file=sys.stderr)
raise err
return backend_pb2.TranscriptResult(segments=resultSegments, text=text)

View File

@@ -1,6 +1,4 @@
--extra-index-url https://pytorch-extension.intel.com/release-whl/stable/xpu/us/
intel-extension-for-pytorch==2.3.110+xpu
torch==2.3.1+cxx11.abi
oneccl_bind_pt==2.3.100+xpu
--extra-index-url https://download.pytorch.org/whl/xpu
torch
optimum[openvino]
faster-whisper

View File

@@ -1,8 +1,6 @@
--extra-index-url https://pytorch-extension.intel.com/release-whl/stable/xpu/us/
intel-extension-for-pytorch==2.8.10+xpu
torch==2.5.1+cxx11.abi
oneccl_bind_pt==2.8.0+xpu
torchaudio==2.5.1+cxx11.abi
--extra-index-url https://download.pytorch.org/whl/xpu
torch
torchaudio
optimum[openvino]
setuptools
transformers==4.48.3

View File

@@ -1,4 +1,4 @@
--extra-index-url https://pytorch-extension.intel.com/release-whl/stable/xpu/us/
--extra-index-url https://download.pytorch.org/whl/xpu
pocket-tts
scipy
torch==2.5.1+cxx11.abi
torch

View File

@@ -0,0 +1,25 @@
.DEFAULT_GOAL := install
.PHONY: qwen-asr
qwen-asr:
bash install.sh
.PHONY: run
run: qwen-asr
@echo "Running qwen-asr..."
bash run.sh
@echo "qwen-asr run."
.PHONY: test
test: qwen-asr
@echo "Testing qwen-asr..."
bash test.sh
@echo "qwen-asr tested."
.PHONY: protogen-clean
protogen-clean:
$(RM) backend_pb2_grpc.py backend_pb2.py
.PHONY: clean
clean: protogen-clean
rm -rf venv __pycache__

View File

@@ -0,0 +1,212 @@
#!/usr/bin/env python3
"""
gRPC server of LocalAI for Qwen3-ASR (transformers backend, non-vLLM).
"""
from concurrent import futures
import time
import argparse
import signal
import sys
import os
import backend_pb2
import backend_pb2_grpc
import torch
from qwen_asr import Qwen3ASRModel
import grpc
def is_float(s):
try:
float(s)
return True
except ValueError:
return False
def is_int(s):
try:
int(s)
return True
except ValueError:
return False
_ONE_DAY_IN_SECONDS = 60 * 60 * 24
MAX_WORKERS = int(os.environ.get('PYTHON_GRPC_MAX_WORKERS', '1'))
class BackendServicer(backend_pb2_grpc.BackendServicer):
def Health(self, request, context):
return backend_pb2.Reply(message=bytes("OK", 'utf-8'))
def LoadModel(self, request, context):
if torch.cuda.is_available():
device = "cuda"
else:
device = "cpu"
mps_available = hasattr(torch.backends, "mps") and torch.backends.mps.is_available()
if mps_available:
device = "mps"
if not torch.cuda.is_available() and request.CUDA:
return backend_pb2.Result(success=False, message="CUDA is not available")
self.device = device
self.options = {}
for opt in request.Options:
if ":" not in opt:
continue
key, value = opt.split(":", 1)
if is_float(value):
value = float(value)
elif is_int(value):
value = int(value)
elif value.lower() in ["true", "false"]:
value = value.lower() == "true"
self.options[key] = value
model_path = request.Model or "Qwen/Qwen3-ASR-1.7B"
default_dtype = torch.bfloat16 if self.device == "cuda" else torch.float32
load_dtype = default_dtype
if "torch_dtype" in self.options:
d = str(self.options["torch_dtype"]).lower()
if d == "fp16":
load_dtype = torch.float16
elif d == "bf16":
load_dtype = torch.bfloat16
elif d == "fp32":
load_dtype = torch.float32
del self.options["torch_dtype"]
self.max_inference_batch_size = self.options.get("max_inference_batch_size", 32)
self.max_new_tokens = self.options.get("max_new_tokens", 256)
forced_aligner = self.options.get("forced_aligner")
if forced_aligner is not None and isinstance(forced_aligner, str):
forced_aligner = forced_aligner.strip() or None
attn_implementation = self.options.get("attn_implementation")
if attn_implementation is not None and isinstance(attn_implementation, str):
attn_implementation = attn_implementation.strip() or None
if self.device == "mps":
device_map = None
elif self.device == "cuda":
device_map = "cuda:0"
else:
device_map = "cpu"
load_kwargs = dict(
dtype=load_dtype,
device_map=device_map,
max_inference_batch_size=self.max_inference_batch_size,
max_new_tokens=self.max_new_tokens,
)
if attn_implementation:
load_kwargs["attn_implementation"] = attn_implementation
if forced_aligner:
load_kwargs["forced_aligner"] = forced_aligner
forced_aligner_kwargs = dict(
dtype=load_dtype,
device_map=device_map,
)
if attn_implementation:
forced_aligner_kwargs["attn_implementation"] = attn_implementation
load_kwargs["forced_aligner_kwargs"] = forced_aligner_kwargs
try:
print(f"Loading Qwen3-ASR from {model_path}", file=sys.stderr)
if attn_implementation:
print(f"Using attn_implementation: {attn_implementation}", file=sys.stderr)
if forced_aligner:
print(f"Loading with forced_aligner: {forced_aligner}", file=sys.stderr)
self.model = Qwen3ASRModel.from_pretrained(model_path, **load_kwargs)
print("Qwen3-ASR model loaded successfully", file=sys.stderr)
except Exception as err:
print(f"[ERROR] LoadModel failed: {err}", file=sys.stderr)
import traceback
traceback.print_exc(file=sys.stderr)
return backend_pb2.Result(success=False, message=str(err))
return backend_pb2.Result(message="Model loaded successfully", success=True)
def AudioTranscription(self, request, context):
result_segments = []
text = ""
try:
audio_path = request.dst
if not audio_path or not os.path.exists(audio_path):
print(f"Error: Audio file not found: {audio_path}", file=sys.stderr)
return backend_pb2.TranscriptResult(segments=[], text="")
language = None
if request.language and request.language.strip():
language = request.language.strip()
results = self.model.transcribe(audio=audio_path, language=language)
if not results:
return backend_pb2.TranscriptResult(segments=[], text="")
r = results[0]
text = r.text or ""
if getattr(r, 'time_stamps', None) and len(r.time_stamps) > 0:
for idx, ts in enumerate(r.time_stamps):
start_ms = 0
end_ms = 0
seg_text = text
if isinstance(ts, (list, tuple)) and len(ts) >= 3:
start_ms = int(float(ts[0]) * 1000) if ts[0] is not None else 0
end_ms = int(float(ts[1]) * 1000) if ts[1] is not None else 0
seg_text = ts[2] if len(ts) > 2 and ts[2] is not None else ""
result_segments.append(backend_pb2.TranscriptSegment(
id=idx, start=start_ms, end=end_ms, text=seg_text
))
else:
if text:
result_segments.append(backend_pb2.TranscriptSegment(
id=0, start=0, end=0, text=text
))
except Exception as err:
print(f"Error in AudioTranscription: {err}", file=sys.stderr)
import traceback
traceback.print_exc(file=sys.stderr)
return backend_pb2.TranscriptResult(segments=[], text="")
return backend_pb2.TranscriptResult(segments=result_segments, text=text)
def serve(address):
server = grpc.server(
futures.ThreadPoolExecutor(max_workers=MAX_WORKERS),
options=[
('grpc.max_message_length', 50 * 1024 * 1024),
('grpc.max_send_message_length', 50 * 1024 * 1024),
('grpc.max_receive_message_length', 50 * 1024 * 1024),
])
backend_pb2_grpc.add_BackendServicer_to_server(BackendServicer(), server)
server.add_insecure_port(address)
server.start()
print("Server started. Listening on: " + address, file=sys.stderr)
def signal_handler(sig, frame):
print("Received termination signal. Shutting down...")
server.stop(0)
sys.exit(0)
signal.signal(signal.SIGINT, signal_handler)
signal.signal(signal.SIGTERM, signal_handler)
try:
while True:
time.sleep(_ONE_DAY_IN_SECONDS)
except KeyboardInterrupt:
server.stop(0)
if __name__ == "__main__":
parser = argparse.ArgumentParser(description="Run the gRPC server.")
parser.add_argument("--addr", default="localhost:50051", help="The address to bind the server to.")
args = parser.parse_args()
serve(args.addr)

View File

@@ -0,0 +1,21 @@
#!/bin/bash
set -e
EXTRA_PIP_INSTALL_FLAGS="--no-build-isolation"
backend_dir=$(dirname $0)
if [ -d $backend_dir/common ]; then
source $backend_dir/common/libbackend.sh
else
source $backend_dir/../common/libbackend.sh
fi
if [ "x${BUILD_PROFILE}" == "xintel" ]; then
EXTRA_PIP_INSTALL_FLAGS+=" --upgrade --index-strategy=unsafe-first-match"
fi
PYTHON_VERSION="3.12"
PYTHON_PATCH="12"
PY_STANDALONE_TAG="20251120"
installRequirements

View File

@@ -0,0 +1,3 @@
--extra-index-url https://download.pytorch.org/whl/cpu
torch
qwen-asr

View File

@@ -0,0 +1 @@
flash-attn

View File

@@ -0,0 +1,3 @@
--extra-index-url https://download.pytorch.org/whl/cu121
torch
qwen-asr

View File

@@ -0,0 +1,3 @@
--extra-index-url https://download.pytorch.org/whl/cu130
torch
qwen-asr

View File

@@ -0,0 +1,3 @@
--extra-index-url https://download.pytorch.org/whl/rocm6.3
torch==2.7.1+rocm6.3
qwen-asr

View File

@@ -0,0 +1 @@
flash-attn

View File

@@ -0,0 +1,3 @@
--extra-index-url https://download.pytorch.org/whl/xpu
torch
qwen-asr

View File

@@ -0,0 +1,3 @@
--extra-index-url https://pypi.jetson-ai-lab.io/jp6/cu129/
torch
qwen-asr

View File

@@ -0,0 +1,3 @@
--extra-index-url https://download.pytorch.org/whl/cu130
torch
qwen-asr

View File

@@ -0,0 +1,2 @@
torch==2.7.1
qwen-asr

View File

@@ -0,0 +1,5 @@
grpcio==1.71.0
protobuf
certifi
packaging==24.1
setuptools

View File

@@ -6,4 +6,4 @@ else
source $backend_dir/../common/libbackend.sh
fi
startBackend $@
startBackend $@

View File

@@ -0,0 +1,94 @@
"""
Tests for the Qwen3-ASR gRPC backend.
"""
import unittest
import subprocess
import time
import os
import tempfile
import shutil
import backend_pb2
import backend_pb2_grpc
import grpc
# Skip heavy transcription test in CI (model download + inference)
SKIP_ASR_TESTS = os.environ.get("SKIP_ASR_TESTS", "false").lower() == "true"
class TestBackendServicer(unittest.TestCase):
def setUp(self):
self.service = subprocess.Popen(["python3", "backend.py", "--addr", "localhost:50051"])
time.sleep(15)
def tearDown(self):
self.service.terminate()
self.service.wait()
def test_server_startup(self):
try:
self.setUp()
with grpc.insecure_channel("localhost:50051") as channel:
stub = backend_pb2_grpc.BackendStub(channel)
response = stub.Health(backend_pb2.HealthMessage())
self.assertEqual(response.message, b'OK')
except Exception as err:
print(err)
self.fail("Server failed to start")
finally:
self.tearDown()
def test_load_model(self):
try:
self.setUp()
with grpc.insecure_channel("localhost:50051") as channel:
stub = backend_pb2_grpc.BackendStub(channel)
response = stub.LoadModel(backend_pb2.ModelOptions(Model="Qwen/Qwen3-ASR-1.7B"))
self.assertTrue(response.success, response.message)
self.assertEqual(response.message, "Model loaded successfully")
except Exception as err:
print(err)
self.fail("LoadModel service failed")
finally:
self.tearDown()
@unittest.skipIf(SKIP_ASR_TESTS, "ASR transcription test skipped (SKIP_ASR_TESTS=true)")
def test_audio_transcription(self):
temp_dir = tempfile.mkdtemp()
audio_file = os.path.join(temp_dir, 'audio.wav')
try:
url = "https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen3-ASR-Repo/asr_en.wav"
result = subprocess.run(
["wget", "-q", url, "-O", audio_file],
capture_output=True,
text=True,
timeout=30,
)
if result.returncode != 0:
self.skipTest(f"Could not download sample audio: {result.stderr}")
if not os.path.exists(audio_file):
self.skipTest("Sample audio file not found after download")
self.setUp()
with grpc.insecure_channel("localhost:50051") as channel:
stub = backend_pb2_grpc.BackendStub(channel)
load_response = stub.LoadModel(backend_pb2.ModelOptions(Model="Qwen/Qwen3-ASR-0.6B"))
self.assertTrue(load_response.success, load_response.message)
transcript_response = stub.AudioTranscription(
backend_pb2.TranscriptRequest(dst=audio_file)
)
self.assertIsNotNone(transcript_response)
self.assertIsNotNone(transcript_response.text)
self.assertGreaterEqual(len(transcript_response.segments), 0)
all_text = ""
for segment in transcript_response.segments:
all_text += segment.text
print(f"All text: {all_text}")
self.assertIn("big", all_text)
if transcript_response.segments:
self.assertIsNotNone(transcript_response.segments[0].text)
finally:
self.tearDown()
if os.path.exists(temp_dir):
shutil.rmtree(temp_dir)

View File

View File

@@ -0,0 +1,23 @@
.PHONY: qwen-tts
qwen-tts:
bash install.sh
.PHONY: run
run: qwen-tts
@echo "Running qwen-tts..."
bash run.sh
@echo "qwen-tts run."
.PHONY: test
test: qwen-tts
@echo "Testing qwen-tts..."
bash test.sh
@echo "qwen-tts tested."
.PHONY: protogen-clean
protogen-clean:
$(RM) backend_pb2_grpc.py backend_pb2.py
.PHONY: clean
clean: protogen-clean
rm -rf venv __pycache__

View File

@@ -0,0 +1,475 @@
#!/usr/bin/env python3
"""
This is an extra gRPC server of LocalAI for Qwen3-TTS
"""
from concurrent import futures
import time
import argparse
import signal
import sys
import os
import copy
import traceback
from pathlib import Path
import backend_pb2
import backend_pb2_grpc
import torch
import soundfile as sf
from qwen_tts import Qwen3TTSModel
import grpc
def is_float(s):
"""Check if a string can be converted to float."""
try:
float(s)
return True
except ValueError:
return False
def is_int(s):
"""Check if a string can be converted to int."""
try:
int(s)
return True
except ValueError:
return False
_ONE_DAY_IN_SECONDS = 60 * 60 * 24
# If MAX_WORKERS is specified in the environment, use it; otherwise default to 1
MAX_WORKERS = int(os.environ.get('PYTHON_GRPC_MAX_WORKERS', '1'))
# Implement the BackendServicer class with the service methods
class BackendServicer(backend_pb2_grpc.BackendServicer):
"""
BackendServicer is the class that implements the gRPC service
"""
def Health(self, request, context):
return backend_pb2.Reply(message=bytes("OK", 'utf-8'))
def LoadModel(self, request, context):
# Get device
if torch.cuda.is_available():
print("CUDA is available", file=sys.stderr)
device = "cuda"
else:
print("CUDA is not available", file=sys.stderr)
device = "cpu"
mps_available = hasattr(torch.backends, "mps") and torch.backends.mps.is_available()
if mps_available:
device = "mps"
if not torch.cuda.is_available() and request.CUDA:
return backend_pb2.Result(success=False, message="CUDA is not available")
# Normalize potential 'mpx' typo to 'mps'
if device == "mpx":
print("Note: device 'mpx' detected, treating it as 'mps'.", file=sys.stderr)
device = "mps"
# Validate mps availability if requested
if device == "mps" and not torch.backends.mps.is_available():
print("Warning: MPS not available. Falling back to CPU.", file=sys.stderr)
device = "cpu"
self.device = device
self._torch_device = torch.device(device)
options = request.Options
# empty dict
self.options = {}
# The options are a list of strings in this form optname:optvalue
# We are storing all the options in a dict so we can use it later when
# generating the audio
for opt in options:
if ":" not in opt:
continue
key, value = opt.split(":", 1) # Split only on first colon
# if value is a number, convert it to the appropriate type
if is_float(value):
value = float(value)
elif is_int(value):
value = int(value)
elif value.lower() in ["true", "false"]:
value = value.lower() == "true"
self.options[key] = value
# Get model path from request
model_path = request.Model
if not model_path:
model_path = "Qwen/Qwen3-TTS-12Hz-1.7B-CustomVoice"
# Determine model type from model path or options
self.model_type = self.options.get("model_type", None)
if not self.model_type:
if "CustomVoice" in model_path:
self.model_type = "CustomVoice"
elif "VoiceDesign" in model_path:
self.model_type = "VoiceDesign"
elif "Base" in model_path or "0.6B" in model_path or "1.7B" in model_path:
self.model_type = "Base" # VoiceClone model
else:
# Default to CustomVoice
self.model_type = "CustomVoice"
# Cache for voice clone prompts
self._voice_clone_cache = {}
# Store AudioPath, ModelFile, and ModelPath from LoadModel request
# These are used later in TTS for VoiceClone mode
self.audio_path = request.AudioPath if hasattr(request, 'AudioPath') and request.AudioPath else None
self.model_file = request.ModelFile if hasattr(request, 'ModelFile') and request.ModelFile else None
self.model_path = request.ModelPath if hasattr(request, 'ModelPath') and request.ModelPath else None
# Decide dtype & attention implementation
if self.device == "mps":
load_dtype = torch.float32 # MPS requires float32
device_map = None
attn_impl_primary = "sdpa" # flash_attention_2 not supported on MPS
elif self.device == "cuda":
load_dtype = torch.bfloat16
device_map = "cuda"
attn_impl_primary = "flash_attention_2"
else: # cpu
load_dtype = torch.float32
device_map = "cpu"
attn_impl_primary = "sdpa"
print(f"Using device: {self.device}, torch_dtype: {load_dtype}, attn_implementation: {attn_impl_primary}, model_type: {self.model_type}", file=sys.stderr)
print(f"Loading model from: {model_path}", file=sys.stderr)
# Load model with device-specific logic
# Common parameters for all devices
load_kwargs = {
"dtype": load_dtype,
"attn_implementation": attn_impl_primary,
"trust_remote_code": True, # Required for qwen-tts models
}
try:
if self.device == "mps":
load_kwargs["device_map"] = None # load then move
self.model = Qwen3TTSModel.from_pretrained(model_path, **load_kwargs)
self.model.to("mps")
elif self.device == "cuda":
load_kwargs["device_map"] = device_map
self.model = Qwen3TTSModel.from_pretrained(model_path, **load_kwargs)
else: # cpu
load_kwargs["device_map"] = device_map
self.model = Qwen3TTSModel.from_pretrained(model_path, **load_kwargs)
except Exception as e:
error_msg = str(e)
print(f"[ERROR] Loading model: {type(e).__name__}: {error_msg}", file=sys.stderr)
print(traceback.format_exc(), file=sys.stderr)
# Check if it's a missing feature extractor/tokenizer error
if "speech_tokenizer" in error_msg or "preprocessor_config.json" in error_msg or "feature extractor" in error_msg.lower():
print("\n[ERROR] Model files appear to be incomplete. This usually means:", file=sys.stderr)
print(" 1. The model download was interrupted or incomplete", file=sys.stderr)
print(" 2. The model cache is corrupted", file=sys.stderr)
print("\nTo fix this, try:", file=sys.stderr)
print(f" rm -rf ~/.cache/huggingface/hub/models--Qwen--Qwen3-TTS-*", file=sys.stderr)
print(" Then re-run to trigger a fresh download.", file=sys.stderr)
print("\nAlternatively, try using a different model variant:", file=sys.stderr)
print(" - Qwen/Qwen3-TTS-12Hz-1.7B-CustomVoice", file=sys.stderr)
print(" - Qwen/Qwen3-TTS-12Hz-1.7B-VoiceDesign", file=sys.stderr)
print(" - Qwen/Qwen3-TTS-12Hz-1.7B-Base", file=sys.stderr)
if attn_impl_primary == 'flash_attention_2':
print("\nTrying to use SDPA instead of flash_attention_2...", file=sys.stderr)
load_kwargs["attn_implementation"] = 'sdpa'
try:
if self.device == "mps":
load_kwargs["device_map"] = None
self.model = Qwen3TTSModel.from_pretrained(model_path, **load_kwargs)
self.model.to("mps")
else:
load_kwargs["device_map"] = (self.device if self.device in ("cuda", "cpu") else None)
self.model = Qwen3TTSModel.from_pretrained(model_path, **load_kwargs)
except Exception as e2:
print(f"[ERROR] Failed to load with SDPA: {type(e2).__name__}: {e2}", file=sys.stderr)
print(traceback.format_exc(), file=sys.stderr)
raise e2
else:
raise e
print(f"Model loaded successfully: {model_path}", file=sys.stderr)
return backend_pb2.Result(message="Model loaded successfully", success=True)
def _detect_mode(self, request):
"""Detect which mode to use based on request parameters."""
# Priority: VoiceClone > VoiceDesign > CustomVoice
# model_type explicitly set
if self.model_type == "CustomVoice":
return "CustomVoice"
if self.model_type == "VoiceClone":
return "VoiceClone"
if self.model_type == "VoiceDesign":
return "VoiceDesign"
# VoiceClone: AudioPath is provided (from LoadModel, stored in self.audio_path)
if self.audio_path:
return "VoiceClone"
# VoiceDesign: instruct option is provided
if "instruct" in self.options and self.options["instruct"]:
return "VoiceDesign"
# Default to CustomVoice
return "CustomVoice"
def _get_ref_audio_path(self, request):
"""Get reference audio path from stored AudioPath (from LoadModel)."""
if not self.audio_path:
return None
# If absolute path, use as-is
if os.path.isabs(self.audio_path):
return self.audio_path
# Try relative to ModelFile
if self.model_file:
model_file_base = os.path.dirname(self.model_file)
ref_path = os.path.join(model_file_base, self.audio_path)
if os.path.exists(ref_path):
return ref_path
# Try relative to ModelPath
if self.model_path:
ref_path = os.path.join(self.model_path, self.audio_path)
if os.path.exists(ref_path):
return ref_path
# Return as-is (might be URL or base64)
return self.audio_path
def _get_voice_clone_prompt(self, request, ref_audio, ref_text):
"""Get or create voice clone prompt, with caching."""
cache_key = f"{ref_audio}:{ref_text}"
if cache_key not in self._voice_clone_cache:
print(f"Creating voice clone prompt from {ref_audio}", file=sys.stderr)
try:
prompt_items = self.model.create_voice_clone_prompt(
ref_audio=ref_audio,
ref_text=ref_text,
x_vector_only_mode=self.options.get("x_vector_only_mode", False),
)
self._voice_clone_cache[cache_key] = prompt_items
except Exception as e:
print(f"Error creating voice clone prompt: {e}", file=sys.stderr)
print(traceback.format_exc(), file=sys.stderr)
return None
return self._voice_clone_cache[cache_key]
def TTS(self, request, context):
try:
# Check if dst is provided
if not request.dst:
return backend_pb2.Result(
success=False,
message="dst (output path) is required"
)
# Prepare text
text = request.text.strip()
if not text:
return backend_pb2.Result(
success=False,
message="Text is empty"
)
# Get language (auto-detect if not provided)
language = request.language if hasattr(request, 'language') and request.language else None
if not language or language == "":
language = "Auto" # Auto-detect language
# Detect mode
mode = self._detect_mode(request)
print(f"Detected mode: {mode}", file=sys.stderr)
# Get generation parameters from options
max_new_tokens = self.options.get("max_new_tokens", None)
top_p = self.options.get("top_p", None)
temperature = self.options.get("temperature", None)
do_sample = self.options.get("do_sample", None)
# Prepare generation kwargs
generation_kwargs = {}
if max_new_tokens is not None:
generation_kwargs["max_new_tokens"] = max_new_tokens
if top_p is not None:
generation_kwargs["top_p"] = top_p
if temperature is not None:
generation_kwargs["temperature"] = temperature
if do_sample is not None:
generation_kwargs["do_sample"] = do_sample
instruct = self.options.get("instruct", "")
if instruct is not None and instruct != "":
generation_kwargs["instruct"] = instruct
# Generate audio based on mode
if mode == "VoiceClone":
# VoiceClone mode
ref_audio = self._get_ref_audio_path(request)
if not ref_audio:
return backend_pb2.Result(
success=False,
message="AudioPath is required for VoiceClone mode"
)
ref_text = self.options.get("ref_text", None)
if not ref_text:
# Try to get from request if available
if hasattr(request, 'ref_text') and request.ref_text:
ref_text = request.ref_text
else:
# x_vector_only_mode doesn't require ref_text
if not self.options.get("x_vector_only_mode", False):
return backend_pb2.Result(
success=False,
message="ref_text is required for VoiceClone mode (or set x_vector_only_mode=true)"
)
# Check if we should use cached prompt
use_cached_prompt = self.options.get("use_cached_prompt", True)
voice_clone_prompt = None
if use_cached_prompt:
voice_clone_prompt = self._get_voice_clone_prompt(request, ref_audio, ref_text)
if voice_clone_prompt is None:
return backend_pb2.Result(
success=False,
message="Failed to create voice clone prompt"
)
if voice_clone_prompt:
# Use cached prompt
wavs, sr = self.model.generate_voice_clone(
text=text,
language=language,
voice_clone_prompt=voice_clone_prompt,
**generation_kwargs
)
else:
# Create prompt on-the-fly
wavs, sr = self.model.generate_voice_clone(
text=text,
language=language,
ref_audio=ref_audio,
ref_text=ref_text,
x_vector_only_mode=self.options.get("x_vector_only_mode", False),
**generation_kwargs
)
elif mode == "VoiceDesign":
# VoiceDesign mode
if not instruct:
return backend_pb2.Result(
success=False,
message="instruct option is required for VoiceDesign mode"
)
wavs, sr = self.model.generate_voice_design(
text=text,
language=language,
instruct=instruct,
**generation_kwargs
)
else:
# CustomVoice mode (default)
speaker = request.voice if request.voice else None
if not speaker:
# Try to get from options
speaker = self.options.get("speaker", None)
if not speaker:
# Use default speaker
speaker = "Vivian"
print(f"No speaker specified, using default: {speaker}", file=sys.stderr)
# Validate speaker if model supports it
if hasattr(self.model, 'get_supported_speakers'):
try:
supported_speakers = self.model.get_supported_speakers()
if speaker not in supported_speakers:
print(f"Warning: Speaker '{speaker}' not in supported list. Available: {supported_speakers}", file=sys.stderr)
# Try to find a close match (case-insensitive)
speaker_lower = speaker.lower()
for sup_speaker in supported_speakers:
if sup_speaker.lower() == speaker_lower:
speaker = sup_speaker
print(f"Using matched speaker: {speaker}", file=sys.stderr)
break
except Exception as e:
print(f"Warning: Could not get supported speakers: {e}", file=sys.stderr)
wavs, sr = self.model.generate_custom_voice(
text=text,
language=language,
speaker=speaker,
**generation_kwargs
)
# Save output
if wavs is not None and len(wavs) > 0:
# wavs is a list, take first element
audio_data = wavs[0] if isinstance(wavs, list) else wavs
sf.write(request.dst, audio_data, sr)
print(f"Saved output to {request.dst}", file=sys.stderr)
else:
return backend_pb2.Result(
success=False,
message="No audio output generated"
)
except Exception as err:
print(f"Error in TTS: {err}", file=sys.stderr)
print(traceback.format_exc(), file=sys.stderr)
return backend_pb2.Result(success=False, message=f"Unexpected {err=}, {type(err)=}")
return backend_pb2.Result(success=True)
def serve(address):
server = grpc.server(futures.ThreadPoolExecutor(max_workers=MAX_WORKERS),
options=[
('grpc.max_message_length', 50 * 1024 * 1024), # 50MB
('grpc.max_send_message_length', 50 * 1024 * 1024), # 50MB
('grpc.max_receive_message_length', 50 * 1024 * 1024), # 50MB
])
backend_pb2_grpc.add_BackendServicer_to_server(BackendServicer(), server)
server.add_insecure_port(address)
server.start()
print("Server started. Listening on: " + address, file=sys.stderr)
# Define the signal handler function
def signal_handler(sig, frame):
print("Received termination signal. Shutting down...")
server.stop(0)
sys.exit(0)
# Set the signal handlers for SIGINT and SIGTERM
signal.signal(signal.SIGINT, signal_handler)
signal.signal(signal.SIGTERM, signal_handler)
try:
while True:
time.sleep(_ONE_DAY_IN_SECONDS)
except KeyboardInterrupt:
server.stop(0)
if __name__ == "__main__":
parser = argparse.ArgumentParser(description="Run the gRPC server.")
parser.add_argument(
"--addr", default="localhost:50051", help="The address to bind the server to."
)
args = parser.parse_args()
serve(args.addr)
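
A minimal client for this backend, sketched below under a few assumptions: it reuses the generated backend_pb2/backend_pb2_grpc stubs exercised by the test script further down, the model name and address are illustrative only, and only the default CustomVoice path is shown (per the TTS handler above, VoiceClone would additionally need reference audio plus a ref_text option, and VoiceDesign an instruct option).

# Hedged client sketch; stubs and request shapes mirror test.py below.
import grpc
import backend_pb2
import backend_pb2_grpc

def synthesize(text, voice, dst, addr="localhost:50051"):
    with grpc.insecure_channel(addr) as channel:
        stub = backend_pb2_grpc.BackendStub(channel)
        # Load the model once; a plain speaker name in the TTS request
        # keeps the backend in the default CustomVoice mode.
        load = stub.LoadModel(
            backend_pb2.ModelOptions(Model="Qwen/Qwen3-TTS-12Hz-0.6B-CustomVoice"),
            timeout=600.0,
        )
        if not load.success:
            raise RuntimeError(load.message)
        # VoiceClone would additionally set AudioPath and a ref_text option;
        # VoiceDesign would set an instruct option at load time.
        result = stub.TTS(
            backend_pb2.TTSRequest(text=text, voice=voice, dst=dst),
            timeout=120.0,
        )
        if not result.success:
            raise RuntimeError(result.message)

synthesize("Hello from qwen-tts.", "Vivian", "/tmp/out.wav")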


@@ -0,0 +1,13 @@
#!/bin/bash
set -e
EXTRA_PIP_INSTALL_FLAGS="--no-build-isolation"
backend_dir=$(dirname $0)
if [ -d $backend_dir/common ]; then
source $backend_dir/common/libbackend.sh
else
source $backend_dir/../common/libbackend.sh
fi
installRequirements


@@ -0,0 +1,5 @@
--extra-index-url https://download.pytorch.org/whl/cpu
torch
torchaudio
qwen-tts
sox


@@ -0,0 +1 @@
flash-attn


@@ -0,0 +1,5 @@
--extra-index-url https://download.pytorch.org/whl/cu121
torch
torchaudio
qwen-tts
sox


@@ -0,0 +1,5 @@
--extra-index-url https://download.pytorch.org/whl/cu130
torch
torchaudio
qwen-tts
sox


@@ -0,0 +1,5 @@
--extra-index-url https://download.pytorch.org/whl/rocm6.3
torch==2.7.1+rocm6.3
torchaudio==2.7.1+rocm6.3
qwen-tts
sox


@@ -0,0 +1 @@
flash-attn


@@ -0,0 +1,5 @@
--extra-index-url https://download.pytorch.org/whl/xpu
torch
torchaudio
qwen-tts
sox


@@ -0,0 +1,5 @@
--extra-index-url https://pypi.jetson-ai-lab.io/jp6/cu129/
torch
torchaudio
qwen-tts
sox


@@ -0,0 +1,5 @@
--extra-index-url https://download.pytorch.org/whl/cu130
torch
torchaudio
qwen-tts
sox


@@ -0,0 +1,4 @@
torch==2.7.1
torchaudio==2.7.1
qwen-tts
sox


@@ -0,0 +1,6 @@
grpcio==1.71.0
protobuf
certifi
packaging==24.1
soundfile
setuptools

backend/python/qwen-tts/run.sh (new executable file, 9 lines)

@@ -0,0 +1,9 @@
#!/bin/bash
backend_dir=$(dirname $0)
if [ -d $backend_dir/common ]; then
source $backend_dir/common/libbackend.sh
else
source $backend_dir/../common/libbackend.sh
fi
startBackend $@


@@ -0,0 +1,98 @@
"""
A test script to test the gRPC service
"""
import unittest
import subprocess
import time
import os
import sys
import tempfile
import threading
import backend_pb2
import backend_pb2_grpc
import grpc
class TestBackendServicer(unittest.TestCase):
"""
TestBackendServicer is the class that tests the gRPC service
"""
def setUp(self):
"""
This method sets up the gRPC service by starting the server
"""
self.service = subprocess.Popen(
["python3", "backend.py", "--addr", "localhost:50051"],
stdout=subprocess.PIPE,
stderr=subprocess.PIPE,
text=True
)
time.sleep(5)
def tearDown(self) -> None:
"""
This method tears down the gRPC service by terminating the server
"""
self.service.terminate()
try:
stdout, stderr = self.service.communicate(timeout=5)
# Output should already be printed by threads, but print any remaining
if stdout:
print("=== REMAINING STDOUT ===")
print(stdout)
if stderr:
print("=== REMAINING STDERR ===")
print(stderr)
except subprocess.TimeoutExpired:
self.service.kill()
stdout, stderr = self.service.communicate()
if stdout:
print("=== REMAINING STDOUT ===")
print(stdout)
if stderr:
print("=== REMAINING STDERR ===")
print(stderr)
def test_tts(self):
"""
This method tests if the TTS generation works successfully
"""
try:
self.setUp()
with grpc.insecure_channel("localhost:50051") as channel:
stub = backend_pb2_grpc.BackendStub(channel)
# Allow up to 10 minutes for model download on first run
response = stub.LoadModel(
backend_pb2.ModelOptions(Model="Qwen/Qwen3-TTS-12Hz-0.6B-CustomVoice"),
timeout=600.0
)
self.assertTrue(response.success)
# Create temporary output file
with tempfile.NamedTemporaryFile(suffix='.wav', delete=False) as tmp_file:
output_path = tmp_file.name
tts_request = backend_pb2.TTSRequest(
text="Hello, this is a test of the qwen-tts backend.",
voice="Vivian",
dst=output_path
)
# Allow up to 2 minutes for TTS generation
tts_response = stub.TTS(tts_request, timeout=120.0)
self.assertIsNotNone(tts_response)
self.assertTrue(tts_response.success)
# Verify output file exists and is not empty
self.assertTrue(os.path.exists(output_path))
self.assertGreater(os.path.getsize(output_path), 0)
# Cleanup
os.unlink(output_path)
except Exception as err:
print(f"Exception: {err}", file=sys.stderr)
# Give threads a moment to flush any remaining output
time.sleep(1)
self.fail("TTS service failed")
finally:
self.tearDown()
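
Beyond the non-empty-file assertion above, the generated audio can be inspected with soundfile, which requirements-install.txt already pulls in; a small hedged sketch (the path is whatever dst was passed to the TTS request):

# Sketch: read back the wav written by the backend and report basic properties.
import soundfile as sf

def describe_wav(path):
    data, sample_rate = sf.read(path)
    duration = len(data) / float(sample_rate)
    print(f"{path}: {sample_rate} Hz, {duration:.2f} s, shape={data.shape}")

describe_wav("/tmp/out.wav")  # illustrative path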


@@ -1,9 +1,7 @@
--extra-index-url https://pytorch-extension.intel.com/release-whl/stable/xpu/us/
intel-extension-for-pytorch==2.3.110+xpu
--extra-index-url https://download.pytorch.org/whl/xpu
transformers
accelerate
torch==2.3.1+cxx11.abi
oneccl_bind_pt==2.8.0+xpu
torch
rerankers[transformers]
optimum[openvino]
setuptools


@@ -1,8 +1,6 @@
--extra-index-url https://pytorch-extension.intel.com/release-whl/stable/xpu/us/
intel-extension-for-pytorch==2.3.110+xpu
torch==2.3.1+cxx11.abi
torchvision==0.18.1+cxx11.abi
oneccl_bind_pt==2.3.100+xpu
--extra-index-url https://download.pytorch.org/whl/xpu
torch
torchvision
optimum[openvino]
setuptools
rfdetr


@@ -1,12 +1,9 @@
--extra-index-url https://pytorch-extension.intel.com/release-whl/stable/xpu/us/
intel-extension-for-pytorch==2.3.110+xpu
torch==2.5.1+cxx11.abi
oneccl_bind_pt==2.8.0+xpu
--extra-index-url https://download.pytorch.org/whl/xpu
torch
optimum[openvino]
llvmlite==0.43.0
numba==0.60.0
transformers
intel-extension-for-transformers
bitsandbytes
outetts
sentence-transformers==5.2.0


@@ -2,6 +2,43 @@
vibevoice:
bash install.sh
.PHONY: download-voices
download-voices:
@echo "Downloading voice preset files..."
@mkdir -p voices/streaming_model
@if command -v wget >/dev/null 2>&1; then \
wget -q -O voices/streaming_model/en-Frank_man.pt \
https://raw.githubusercontent.com/microsoft/VibeVoice/main/demo/voices/streaming_model/en-Frank_man.pt && \
wget -q -O voices/streaming_model/en-Grace_woman.pt \
https://raw.githubusercontent.com/microsoft/VibeVoice/main/demo/voices/streaming_model/en-Grace_woman.pt && \
wget -q -O voices/streaming_model/en-Mike_man.pt \
https://raw.githubusercontent.com/microsoft/VibeVoice/main/demo/voices/streaming_model/en-Mike_man.pt && \
wget -q -O voices/streaming_model/en-Emma_woman.pt \
https://raw.githubusercontent.com/microsoft/VibeVoice/main/demo/voices/streaming_model/en-Emma_woman.pt && \
wget -q -O voices/streaming_model/en-Carter_man.pt \
https://raw.githubusercontent.com/microsoft/VibeVoice/main/demo/voices/streaming_model/en-Carter_man.pt && \
wget -q -O voices/streaming_model/en-Davis_man.pt \
https://raw.githubusercontent.com/microsoft/VibeVoice/main/demo/voices/streaming_model/en-Davis_man.pt && \
echo "Voice files downloaded successfully"; \
elif command -v curl >/dev/null 2>&1; then \
curl -sL -o voices/streaming_model/en-Frank_man.pt \
https://raw.githubusercontent.com/microsoft/VibeVoice/main/demo/voices/streaming_model/en-Frank_man.pt && \
curl -sL -o voices/streaming_model/en-Grace_woman.pt \
https://raw.githubusercontent.com/microsoft/VibeVoice/main/demo/voices/streaming_model/en-Grace_woman.pt && \
curl -sL -o voices/streaming_model/en-Mike_man.pt \
https://raw.githubusercontent.com/microsoft/VibeVoice/main/demo/voices/streaming_model/en-Mike_man.pt && \
curl -sL -o voices/streaming_model/en-Emma_woman.pt \
https://raw.githubusercontent.com/microsoft/VibeVoice/main/demo/voices/streaming_model/en-Emma_woman.pt && \
curl -sL -o voices/streaming_model/en-Carter_man.pt \
https://raw.githubusercontent.com/microsoft/VibeVoice/main/demo/voices/streaming_model/en-Carter_man.pt && \
curl -sL -o voices/streaming_model/en-Davis_man.pt \
https://raw.githubusercontent.com/microsoft/VibeVoice/main/demo/voices/streaming_model/en-Davis_man.pt && \
echo "Voice files downloaded successfully"; \
else \
echo "Error: Neither wget nor curl found. Cannot download voice files."; \
exit 1; \
fi
.PHONY: run
run: vibevoice
@echo "Running vibevoice..."
@@ -9,7 +46,7 @@ run: vibevoice
@echo "vibevoice run."
.PHONY: test
test: vibevoice
test: vibevoice download-voices
@echo "Testing vibevoice..."
bash test.sh
@echo "vibevoice tested."
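
The download-voices target fetches six preset files with wget or curl; where neither tool is available, an equivalent standard-library Python sketch can do the same (the URLs and directory layout come from the Makefile above, everything else is assumed):

# Sketch: fetch the VibeVoice streaming voice presets listed above
# using only the standard library, as a stand-in for wget/curl.
import os
import urllib.request

VOICES = [
    "en-Frank_man.pt", "en-Grace_woman.pt", "en-Mike_man.pt",
    "en-Emma_woman.pt", "en-Carter_man.pt", "en-Davis_man.pt",
]
BASE = "https://raw.githubusercontent.com/microsoft/VibeVoice/main/demo/voices/streaming_model/"

def download_voices(dest="voices/streaming_model"):
    os.makedirs(dest, exist_ok=True)
    for name in VOICES:
        target = os.path.join(dest, name)
        if not os.path.exists(target):
            urllib.request.urlretrieve(BASE + name, target)
            print(f"downloaded {name}")

download_voices()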


@@ -16,6 +16,8 @@ import backend_pb2_grpc
import torch
from vibevoice.modular.modeling_vibevoice_streaming_inference import VibeVoiceStreamingForConditionalGenerationInference
from vibevoice.processor.vibevoice_streaming_processor import VibeVoiceStreamingProcessor
from vibevoice.modular.modeling_vibevoice_asr import VibeVoiceASRForConditionalGeneration
from vibevoice.processor.vibevoice_asr_processor import VibeVoiceASRProcessor
import grpc
@@ -95,21 +97,72 @@ class BackendServicer(backend_pb2_grpc.BackendServicer):
value = value.lower() == "true"
self.options[key] = value
# Check if ASR mode is enabled
self.asr_mode = self.options.get("asr_mode", False)
if not isinstance(self.asr_mode, bool):
# Handle string "true"/"false" case
self.asr_mode = str(self.asr_mode).lower() == "true"
# Get model path from request
model_path = request.Model
if not model_path:
model_path = "microsoft/VibeVoice-Realtime-0.5B"
if self.asr_mode:
model_path = "microsoft/VibeVoice-ASR" # Default ASR model
else:
model_path = "microsoft/VibeVoice-Realtime-0.5B" # Default TTS model
# Get inference steps from options, default to 5
default_dtype = torch.bfloat16 if self.device == "cuda" else torch.float32
load_dtype = default_dtype
if "torch_dtype" in self.options:
torch_dtype_str = str(self.options["torch_dtype"]).lower()
if torch_dtype_str == "fp16":
load_dtype = torch.float16
elif torch_dtype_str == "bf16":
load_dtype = torch.bfloat16
elif torch_dtype_str == "fp32":
load_dtype = torch.float32
# remove it from options after reading
del self.options["torch_dtype"]
# Get inference steps from options, default to 5 (TTS only)
self.inference_steps = self.options.get("inference_steps", 5)
if not isinstance(self.inference_steps, int) or self.inference_steps <= 0:
self.inference_steps = 5
# Get cfg_scale from options, default to 1.5
# Get cfg_scale from options, default to 1.5 (TTS only)
self.cfg_scale = self.options.get("cfg_scale", 1.5)
if not isinstance(self.cfg_scale, (int, float)) or self.cfg_scale <= 0:
self.cfg_scale = 1.5
# Get ASR generation parameters from options
self.max_new_tokens = self.options.get("max_new_tokens", 512)
if not isinstance(self.max_new_tokens, int) or self.max_new_tokens <= 0:
self.max_new_tokens = 512
self.temperature = self.options.get("temperature", 0.0)
if not isinstance(self.temperature, (int, float)) or self.temperature < 0:
self.temperature = 0.0
self.top_p = self.options.get("top_p", 1.0)
if not isinstance(self.top_p, (int, float)) or self.top_p <= 0:
self.top_p = 1.0
self.do_sample = self.options.get("do_sample", None)
if self.do_sample is None:
# Default: use sampling if temperature > 0
self.do_sample = self.temperature > 0
elif not isinstance(self.do_sample, bool):
self.do_sample = str(self.do_sample).lower() == "true"
self.num_beams = self.options.get("num_beams", 1)
if not isinstance(self.num_beams, int) or self.num_beams < 1:
self.num_beams = 1
self.repetition_penalty = self.options.get("repetition_penalty", 1.0)
if not isinstance(self.repetition_penalty, (int, float)) or self.repetition_penalty <= 0:
self.repetition_penalty = 1.0
# Determine voices directory
# Priority order:
# 1. voices_dir option (explicitly set by user - highest priority)
@@ -163,91 +216,151 @@ class BackendServicer(backend_pb2_grpc.BackendServicer):
else:
voices_dir = None
# Initialize voice-related attributes (TTS only)
self.voices_dir = voices_dir
self.voice_presets = {}
self._voice_cache = {}
self.default_voice_key = None
# Load voice presets if directory exists
if self.voices_dir and os.path.exists(self.voices_dir):
self._load_voice_presets()
else:
print(f"Warning: Voices directory not found. Voice presets will not be available.", file=sys.stderr)
# Store AudioPath, ModelFile, and ModelPath from LoadModel request for use in TTS
self.audio_path = request.AudioPath if hasattr(request, 'AudioPath') and request.AudioPath else None
self.model_file = request.ModelFile if hasattr(request, 'ModelFile') and request.ModelFile else None
self.model_path = request.ModelPath if hasattr(request, 'ModelPath') and request.ModelPath else None
# Decide attention implementation and device_map (matching upstream example)
if self.device == "mps":
device_map = None
attn_impl_primary = "sdpa" # flash_attention_2 not supported on MPS
elif self.device == "cuda":
device_map = "cuda"
attn_impl_primary = "flash_attention_2"
else: # cpu
device_map = "cpu" # Match upstream example: use "cpu" for CPU device_map
attn_impl_primary = "sdpa"
try:
print(f"Loading processor & model from {model_path}", file=sys.stderr)
self.processor = VibeVoiceStreamingProcessor.from_pretrained(model_path)
if self.asr_mode:
# Load ASR model and processor
print(f"Loading ASR processor & model from {model_path}", file=sys.stderr)
# Load ASR processor
self.processor = VibeVoiceASRProcessor.from_pretrained(
model_path,
language_model_pretrained_name="Qwen/Qwen2.5-7B"
)
# Decide dtype & attention implementation
if self.device == "mps":
load_dtype = torch.float32 # MPS requires float32
device_map = None
attn_impl_primary = "sdpa" # flash_attention_2 not supported on MPS
elif self.device == "cuda":
load_dtype = torch.bfloat16
device_map = "cuda"
attn_impl_primary = "flash_attention_2"
else: # cpu
load_dtype = torch.float32
device_map = "cpu"
attn_impl_primary = "sdpa"
print(f"Using device: {self.device}, torch_dtype: {load_dtype}, attn_implementation: {attn_impl_primary}", file=sys.stderr)
print(f"Using device: {self.device}, torch_dtype: {load_dtype}, attn_implementation: {attn_impl_primary}", file=sys.stderr)
# Load model with device-specific logic
try:
if self.device == "mps":
self.model = VibeVoiceStreamingForConditionalGenerationInference.from_pretrained(
# Load ASR model - use device_map=None and move manually to avoid JSON serialization issues
# Load with dtype to ensure all components are in correct dtype from the start
try:
print(f"Using attention implementation: {attn_impl_primary}", file=sys.stderr)
# Load model with dtype to ensure all components are in correct dtype
self.model = VibeVoiceASRForConditionalGeneration.from_pretrained(
model_path,
torch_dtype=load_dtype,
dtype=load_dtype,
device_map=None, # Always use None, move manually to avoid JSON serialization issues
attn_implementation=attn_impl_primary,
device_map=None, # load then move
trust_remote_code=True
)
self.model.to("mps")
elif self.device == "cuda":
self.model = VibeVoiceStreamingForConditionalGenerationInference.from_pretrained(
model_path,
torch_dtype=load_dtype,
device_map="cuda",
attn_implementation=attn_impl_primary,
)
else: # cpu
self.model = VibeVoiceStreamingForConditionalGenerationInference.from_pretrained(
model_path,
torch_dtype=load_dtype,
device_map="cpu",
attn_implementation=attn_impl_primary,
)
except Exception as e:
if attn_impl_primary == 'flash_attention_2':
print(f"[ERROR] : {type(e).__name__}: {e}", file=sys.stderr)
print(traceback.format_exc(), file=sys.stderr)
print("Error loading the model. Trying to use SDPA. However, note that only flash_attention_2 has been fully tested, and using SDPA may result in lower audio quality.", file=sys.stderr)
self.model = VibeVoiceStreamingForConditionalGenerationInference.from_pretrained(
model_path,
torch_dtype=load_dtype,
device_map=(self.device if self.device in ("cuda", "cpu") else None),
attn_implementation='sdpa'
)
if self.device == "mps":
self.model.to("mps")
else:
raise e
# Move to device manually
self.model = self.model.to(self.device)
except Exception as e:
if attn_impl_primary == 'flash_attention_2':
print(f"[ERROR] : {type(e).__name__}: {e}", file=sys.stderr)
print(traceback.format_exc(), file=sys.stderr)
print("Error loading the ASR model. Trying to use SDPA.", file=sys.stderr)
self.model = VibeVoiceASRForConditionalGeneration.from_pretrained(
model_path,
dtype=load_dtype,
device_map=None,
attn_implementation='sdpa',
trust_remote_code=True
)
# Move to device manually
self.model = self.model.to(self.device)
else:
raise e
self.model.eval()
self.model.set_ddpm_inference_steps(num_steps=self.inference_steps)
# Set default voice key
if self.voice_presets:
# Try to get default from environment or use first available
preset_name = os.environ.get("VOICE_PRESET")
self.default_voice_key = self._determine_voice_key(preset_name)
print(f"Default voice preset: {self.default_voice_key}", file=sys.stderr)
self.model.eval()
print(f"ASR model loaded successfully", file=sys.stderr)
else:
print("Warning: No voice presets available. Voice selection will not work.", file=sys.stderr)
# Load TTS model and processor (existing logic)
# Load voice presets if directory exists
if self.voices_dir and os.path.exists(self.voices_dir):
self._load_voice_presets()
else:
print(f"Warning: Voices directory not found. Voice presets will not be available.", file=sys.stderr)
print(f"Loading TTS processor & model from {model_path}", file=sys.stderr)
self.processor = VibeVoiceStreamingProcessor.from_pretrained(model_path)
print(f"Using device: {self.device}, torch_dtype: {load_dtype}, attn_implementation: {attn_impl_primary}", file=sys.stderr)
# Load model with device-specific logic (matching upstream example exactly)
try:
if self.device == "mps":
self.model = VibeVoiceStreamingForConditionalGenerationInference.from_pretrained(
model_path,
torch_dtype=load_dtype,
attn_implementation=attn_impl_primary,
device_map=None, # load then move
)
self.model.to("mps")
elif self.device == "cuda":
self.model = VibeVoiceStreamingForConditionalGenerationInference.from_pretrained(
model_path,
torch_dtype=load_dtype,
device_map=device_map,
attn_implementation=attn_impl_primary,
)
else: # cpu
# Match upstream example: use device_map="cpu" for CPU
self.model = VibeVoiceStreamingForConditionalGenerationInference.from_pretrained(
model_path,
torch_dtype=load_dtype,
device_map="cpu",
attn_implementation=attn_impl_primary,
)
except Exception as e:
if attn_impl_primary == 'flash_attention_2':
print(f"[ERROR] : {type(e).__name__}: {e}", file=sys.stderr)
print(traceback.format_exc(), file=sys.stderr)
print("Error loading the model. Trying to use SDPA. However, note that only flash_attention_2 has been fully tested, and using SDPA may result in lower audio quality.", file=sys.stderr)
# Match upstream example fallback pattern
self.model = VibeVoiceStreamingForConditionalGenerationInference.from_pretrained(
model_path,
torch_dtype=load_dtype,
device_map=(self.device if self.device in ("cuda", "cpu") else None),
attn_implementation='sdpa'
)
if self.device == "mps":
self.model.to("mps")
else:
raise e
self.model.eval()
self.model.set_ddpm_inference_steps(num_steps=self.inference_steps)
# Set default voice key
if self.voice_presets:
# Try to get default from environment or use first available
preset_name = os.environ.get("VOICE_PRESET")
self.default_voice_key = self._determine_voice_key(preset_name)
print(f"Default voice preset: {self.default_voice_key}", file=sys.stderr)
else:
print("Warning: No voice presets available. Voice selection will not work.", file=sys.stderr)
except Exception as err:
return backend_pb2.Result(success=False, message=f"Unexpected {err=}, {type(err)=}")
# Format error message safely, avoiding JSON serialization issues
error_msg = str(err)
error_type = type(err).__name__
# Include traceback for debugging
tb_str = traceback.format_exc()
print(f"[ERROR] LoadModel failed: {error_type}: {error_msg}", file=sys.stderr)
print(tb_str, file=sys.stderr)
return backend_pb2.Result(success=False, message=f"{error_type}: {error_msg}")
return backend_pb2.Result(message="Model loaded successfully", success=True)
@@ -327,14 +440,30 @@ class BackendServicer(backend_pb2_grpc.BackendServicer):
if not voice_path or not os.path.exists(voice_path):
return None
# Ensure cache exists (should be initialized in LoadModel)
if not hasattr(self, '_voice_cache'):
self._voice_cache = {}
# Use path as cache key
if voice_path not in self._voice_cache:
print(f"Loading prefilled prompt from {voice_path}", file=sys.stderr)
prefilled_outputs = torch.load(
voice_path,
map_location=self._torch_device,
weights_only=False,
)
# Match self-test.py: use string device name for map_location
# Ensure self.device exists (should be set in LoadModel)
try:
if not hasattr(self, 'device'):
# Fallback to CPU if device not set
device_str = "cpu"
else:
device_str = str(self.device)
except AttributeError as e:
print(f"Error accessing self.device: {e}, falling back to CPU", file=sys.stderr)
device_str = "cpu"
if device_str != "cpu":
map_loc = device_str
else:
map_loc = "cpu"
# Call torch.load with explicit arguments
prefilled_outputs = torch.load(voice_path, map_location=map_loc, weights_only=False)
self._voice_cache[voice_path] = prefilled_outputs
return self._voice_cache[voice_path]
@@ -351,17 +480,17 @@ class BackendServicer(backend_pb2_grpc.BackendServicer):
voice_path = self._get_voice_path(request.voice)
if voice_path:
voice_key = request.voice
elif request.AudioPath:
# Use AudioPath as voice file
if os.path.isabs(request.AudioPath):
voice_path = request.AudioPath
elif request.ModelFile:
model_file_base = os.path.dirname(request.ModelFile)
voice_path = os.path.join(model_file_base, request.AudioPath)
elif hasattr(request, 'ModelPath') and request.ModelPath:
voice_path = os.path.join(request.ModelPath, request.AudioPath)
elif self.audio_path:
# Use AudioPath from LoadModel as voice file
if os.path.isabs(self.audio_path):
voice_path = self.audio_path
elif self.model_file:
model_file_base = os.path.dirname(self.model_file)
voice_path = os.path.join(model_file_base, self.audio_path)
elif self.model_path:
voice_path = os.path.join(self.model_path, self.audio_path)
else:
voice_path = request.AudioPath
voice_path = self.audio_path
elif self.default_voice_key:
voice_path = self._get_voice_path(self.default_voice_key)
voice_key = self.default_voice_key
@@ -404,8 +533,9 @@ class BackendServicer(backend_pb2_grpc.BackendServicer):
return_attention_mask=True,
)
# Move tensors to target device
target_device = self._torch_device
# Move tensors to target device (matching self-test.py exactly)
# Explicitly ensure it's a string to avoid any variable name collisions
target_device = str(self.device) if str(self.device) != "cpu" else "cpu"
for k, v in inputs.items():
if torch.is_tensor(v):
inputs[k] = v.to(target_device)
@@ -447,6 +577,147 @@ class BackendServicer(backend_pb2_grpc.BackendServicer):
return backend_pb2.Result(success=True)
def AudioTranscription(self, request, context):
"""Transcribe audio file to text using ASR model."""
try:
# Validate ASR mode is active
if not self.asr_mode:
return backend_pb2.TranscriptResult(
segments=[],
text="",
)
# Note: We return empty result instead of error to match faster-whisper behavior
# Get audio file path
audio_path = request.dst
if not audio_path or not os.path.exists(audio_path):
print(f"Error: Audio file not found: {audio_path}", file=sys.stderr)
return backend_pb2.TranscriptResult(
segments=[],
text="",
)
print(f"Transcribing audio file: {audio_path}", file=sys.stderr)
# Get context_info from options if available
context_info = self.options.get("context_info", None)
if context_info and isinstance(context_info, str) and context_info.strip():
context_info = context_info.strip()
else:
context_info = None
# Process audio with ASR processor (matching gradio example)
inputs = self.processor(
audio=audio_path,
sampling_rate=None,
return_tensors="pt",
add_generation_prompt=True,
context_info=context_info
)
# Move to device (matching gradio example)
inputs = {k: v.to(self.device) if isinstance(v, torch.Tensor) else v
for k, v in inputs.items()}
# Prepare generation config (matching gradio example)
generation_config = {
"max_new_tokens": self.max_new_tokens,
"temperature": self.temperature if self.temperature > 0 else None,
"top_p": self.top_p if self.do_sample else None,
"do_sample": self.do_sample,
"num_beams": self.num_beams,
"repetition_penalty": self.repetition_penalty,
"pad_token_id": self.processor.pad_id,
"eos_token_id": self.processor.tokenizer.eos_token_id,
}
# Remove None values (matching gradio example)
generation_config = {k: v for k, v in generation_config.items() if v is not None}
print(f"Generating transcription with max_new_tokens: {self.max_new_tokens}, temperature: {self.temperature}, do_sample: {self.do_sample}, num_beams: {self.num_beams}, repetition_penalty: {self.repetition_penalty}", file=sys.stderr)
# Generate transcription (matching gradio example)
with torch.no_grad():
output_ids = self.model.generate(
**inputs,
**generation_config
)
# Decode output (matching gradio example)
generated_ids = output_ids[0, inputs['input_ids'].shape[1]:]
generated_text = self.processor.decode(generated_ids, skip_special_tokens=True)
# Parse structured output to get segments
result_segments = []
try:
transcription_segments = self.processor.post_process_transcription(generated_text)
if transcription_segments:
# Map segments to TranscriptSegment format
for idx, seg in enumerate(transcription_segments):
# Extract timing information (if available)
# Handle both dict and object with attributes
if isinstance(seg, dict):
start_time = seg.get('start_time', 0)
end_time = seg.get('end_time', 0)
text = seg.get('text', '')
speaker_id = seg.get('speaker_id', None)
else:
# Handle object with attributes
start_time = getattr(seg, 'start_time', 0)
end_time = getattr(seg, 'end_time', 0)
text = getattr(seg, 'text', '')
speaker_id = getattr(seg, 'speaker_id', None)
# Convert time to milliseconds (assuming seconds)
start_ms = int(start_time * 1000) if isinstance(start_time, (int, float)) else 0
end_ms = int(end_time * 1000) if isinstance(end_time, (int, float)) else 0
# Add speaker info to text if available
if speaker_id is not None:
text = f"[Speaker {speaker_id}] {text}"
result_segments.append(backend_pb2.TranscriptSegment(
id=idx,
start=start_ms,
end=end_ms,
text=text,
tokens=[] # Token IDs not extracted for now
))
except Exception as e:
print(f"Warning: Failed to parse structured output: {e}", file=sys.stderr)
print(traceback.format_exc(), file=sys.stderr)
# Fallback: create a single segment with the full text
if generated_text:
result_segments.append(backend_pb2.TranscriptSegment(
id=0,
start=0,
end=0,
text=generated_text,
tokens=[]
))
# Combine all segment texts into full transcription
if result_segments:
full_text = " ".join([seg.text for seg in result_segments])
else:
full_text = generated_text if generated_text else ""
print(f"Transcription completed: {len(result_segments)} segments", file=sys.stderr)
return backend_pb2.TranscriptResult(
segments=result_segments,
text=full_text
)
except Exception as err:
print(f"Error in AudioTranscription: {err}", file=sys.stderr)
print(traceback.format_exc(), file=sys.stderr)
return backend_pb2.TranscriptResult(
segments=[],
text="",
)
def serve(address):
server = grpc.server(futures.ThreadPoolExecutor(max_workers=MAX_WORKERS),
options=[

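To exercise the ASR path added above, a client would enable asr_mode when loading the model and then call AudioTranscription with the audio path in dst. The sketch below is hedged: it assumes Options on ModelOptions accepts "key:value" strings (as the parsing loop above suggests) and that the transcription request message is TranscriptRequest carrying the dst field used by the handler; the model name and address are illustrative.

# Hedged client sketch for the ASR mode.
import grpc
import backend_pb2
import backend_pb2_grpc

def transcribe(audio_path, addr="localhost:50051"):
    with grpc.insecure_channel(addr) as channel:
        stub = backend_pb2_grpc.BackendStub(channel)
        load = stub.LoadModel(
            backend_pb2.ModelOptions(
                Model="microsoft/VibeVoice-ASR",
                # Assumed option format; booleans are parsed from "true"/"false".
                Options=["asr_mode:true", "max_new_tokens:512"],
            ),
            timeout=600.0,
        )
        if not load.success:
            raise RuntimeError(load.message)
        result = stub.AudioTranscription(
            backend_pb2.TranscriptRequest(dst=audio_path),  # assumed message name
            timeout=600.0,
        )
        for seg in result.segments:
            print(seg.start, seg.end, seg.text)
        return result.text

transcribe("/tmp/sample.wav")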

@@ -29,11 +29,13 @@ fi
installRequirements
git clone https://github.com/microsoft/VibeVoice.git
cd VibeVoice/
if [ ! -d VibeVoice ]; then
git clone https://github.com/microsoft/VibeVoice.git
cd VibeVoice/
if [ "x${USE_PIP}" == "xtrue" ]; then
pip install ${EXTRA_PIP_INSTALL_FLAGS:-} .
else
uv pip install ${EXTRA_PIP_INSTALL_FLAGS:-} .
if [ "x${USE_PIP}" == "xtrue" ]; then
pip install ${EXTRA_PIP_INSTALL_FLAGS:-} .
else
uv pip install ${EXTRA_PIP_INSTALL_FLAGS:-} .
fi
fi


@@ -1,7 +1,7 @@
--extra-index-url https://download.pytorch.org/whl/cpu
git+https://github.com/huggingface/diffusers
opencv-python
transformers==4.51.3
transformers>=4.51.3,<5.0.0
torchvision==0.22.1
accelerate
compel


@@ -1,7 +1,7 @@
--extra-index-url https://download.pytorch.org/whl/cu121
git+https://github.com/huggingface/diffusers
opencv-python
transformers==4.51.3
transformers>=4.51.3,<5.0.0
torchvision
accelerate
compel


@@ -1,7 +1,7 @@
--extra-index-url https://download.pytorch.org/whl/cu130
git+https://github.com/huggingface/diffusers
opencv-python
transformers==4.51.3
transformers>=4.51.3,<5.0.0
torchvision
accelerate
compel


@@ -3,7 +3,7 @@ torch==2.7.1+rocm6.3
torchvision==0.22.1+rocm6.3
git+https://github.com/huggingface/diffusers
opencv-python
transformers==4.51.3
transformers>=4.51.3,<5.0.0
accelerate
compel
peft


@@ -1,13 +1,11 @@
--extra-index-url https://pytorch-extension.intel.com/release-whl/stable/xpu/us/
intel-extension-for-pytorch==2.3.110+xpu
torch==2.5.1+cxx11.abi
torchvision==0.20.1+cxx11.abi
oneccl_bind_pt==2.8.0+xpu
--extra-index-url https://download.pytorch.org/whl/xpu
torch
torchvision
optimum[openvino]
setuptools
git+https://github.com/huggingface/diffusers
opencv-python
transformers==4.51.3
transformers>=4.51.3,<5.0.0
accelerate
compel
peft

Some files were not shown because too many files have changed in this diff.