LocalAI [bot]
b1c434f0fc
chore: ⬆️ Update ggml-org/llama.cpp to 11c325c6e0666a30590cde390d5746a405e536b9 (#8607)
⬆️ Update ggml-org/llama.cpp
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2026-02-19 23:32:35 +01:00
LocalAI [bot]
bb42b342de
chore: ⬆️ Update ggml-org/whisper.cpp to 21411d81ea736ed5d9cdea4df360d3c4b60a4adb (#8606)
⬆️ Update ggml-org/whisper.cpp
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2026-02-19 23:32:21 +01:00
LocalAI [bot]
e555057f8b
fix: multi-GPU support for Diffusers (Issue #8575) (#8605)
* chore: init
* feat: implement multi-GPU support for Diffusers backend (fixes #8575)
---------
Co-authored-by: localai-bot <localai-bot@users.noreply.github.com>
2026-02-19 21:35:58 +01:00
Ettore Di Giacinto
dadc7158fb
fix(diffusers): sd_embed is not always available (#8602)
Seems sd_embed doesn't play well with MPS and L4T. Making it optional
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2026-02-19 10:45:17 +01:00
LocalAI [bot]
68c7077491
chore: ⬆️ Update ggml-org/llama.cpp to b55dcdef5dcd74dc75c4921090e928d43453c157 (#8599)
⬆️ Update ggml-org/llama.cpp
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2026-02-18 22:33:25 +01:00
LocalAI [bot]
ed832cf0e0
chore: ⬆️ Update ggml-org/llama.cpp to 2b089c77580d347767f440205103e4da8ec33d89 (#8592)
⬆️ Update ggml-org/llama.cpp
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
Co-authored-by: Ettore Di Giacinto <mudler@users.noreply.github.com>
2026-02-17 22:35:07 +00:00
Richard Palethorpe
9e692967c3
fix(llama-cpp): Pass parameters when using embedded template (#8590)
Signed-off-by: Richard Palethorpe <io@richiejp.com>
2026-02-17 18:50:05 +01:00
LocalAI [bot]
067a255435
chore: ⬆️ Update ggml-org/llama.cpp to d612901116ab2066c7923372d4827032ff296bc4 (#8588)
⬆️ Update ggml-org/llama.cpp
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2026-02-17 00:57:32 +01:00
LocalAI [bot]
109f29cc24
chore: ⬆️ Update ggml-org/llama.cpp to 27b93cbd157fc4ad94573a1fbc226d3e18ea1bb4 (#8577)
⬆️ Update ggml-org/llama.cpp
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2026-02-15 23:42:36 +01:00
LocalAI [bot]
587e4a21b3
chore: ⬆️ Update antirez/voxtral.c to 134d366c24d20c64b614a3dcc8bda2a6922d077d (#8578)
⬆️ Update antirez/voxtral.c
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2026-02-15 23:42:11 +01:00
LocalAI [bot]
3f1f58b2ab
chore: ⬆️ Update ggml-org/whisper.cpp to 364c77f4ca2737e3287652e0e8a8c6dce3231bba (#8576)
⬆️ Update ggml-org/whisper.cpp
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2026-02-15 21:20:04 +00:00
LocalAI [bot]
d784851337
chore: ⬆️ Update ggml-org/llama.cpp to 01d8eaa28d57bfc6d06e30072085ed0ef12e06c5 (#8567)
⬆️ Update ggml-org/llama.cpp
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2026-02-14 22:52:32 +01:00
LocalAI [bot]
94df096fb9
fix: pin neutts-air to known working commit (#8566)
* chore: init
* fix: pin neutts-air to known working commit
---------
Co-authored-by: localai-bot <localai-bot@users.noreply.github.com>
2026-02-14 21:16:37 +01:00
Ettore Di Giacinto
820bd7dd01
fix(ci): try to fix deps for l4t13 on qwen-*
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2026-02-14 10:21:23 +01:00
Austen
42cb7bda19
fix(llama-cpp): populate tensor_buft_override buffer so llama-cpp properly performs fit calculations (#8560)
fix auto-fit for llama-cpp
2026-02-14 10:07:37 +01:00
Ettore Di Giacinto
2fb9940b8a
fix(voxcpm): pin setuptools (#8556)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2026-02-13 23:44:35 +01:00
LocalAI [bot]
2ff0ad4190
chore: ⬆️ Update ggml-org/llama.cpp to 05a6f0e8946914918758db767f6eb04bc1e38507 (#8553)
⬆️ Update ggml-org/llama.cpp
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2026-02-13 22:48:01 +01:00
Ettore Di Giacinto
2fd026e958
fix: update moonshine API, add setuptools to voxcpm requirements (#8541)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2026-02-12 23:22:37 +01:00
LocalAI [bot]
08718b656e
chore: ⬆️ Update ggml-org/llama.cpp to 338085c69e486b7155e5b03d7b5087e02c0e2528 (#8538)
⬆️ Update ggml-org/llama.cpp
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2026-02-12 23:21:53 +01:00
Austen
cff972094c
feat(diffusers): add experimental support for sd_embed-style prompt embedding (#8504)
* add experimental support for sd_embed-style prompt embedding
Signed-off-by: Austen Dicken <cvpcsm@gmail.com>
* add doc equivalent to compel
Signed-off-by: Austen Dicken <cvpcsm@gmail.com>
* need to use flux1 embedding function for flux model
Signed-off-by: Austen Dicken <cvpcsm@gmail.com>
---------
Signed-off-by: Austen Dicken <cvpcsm@gmail.com>
2026-02-11 22:58:19 +01:00
LocalAI [bot]
79a25f7ae9
chore: ⬆️ Update ggml-org/llama.cpp to 4d3daf80f8834e0eb5148efc7610513f1e263653 (#8513)
⬆️ Update ggml-org/llama.cpp
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2026-02-11 21:48:39 +00:00
LocalAI [bot]
0ee92317ec
chore: ⬆️ Update ggml-org/llama.cpp to 57487a64c88c152ac72f3aea09bd1cc491b2f61e (#8499)
⬆️ Update ggml-org/llama.cpp
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2026-02-10 21:32:46 +00:00
LocalAI [bot]
743d2d1947
chore: ⬆️ Update ggml-org/whisper.cpp to 764482c3175d9c3bc6089c1ec84df7d1b9537d83 (#8478)
⬆️ Update ggml-org/whisper.cpp
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2026-02-10 15:14:59 +01:00
LocalAI [bot]
df04843f34
chore: ⬆️ Update ggml-org/llama.cpp to 262364e31d1da43596fe84244fba44e94a0de64e (#8479)
⬆️ Update ggml-org/llama.cpp
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2026-02-10 15:14:33 +01:00
LocalAI [bot]
0c040beb59
chore: ⬆️ Update antirez/voxtral.c to c9e8773a2042d67c637fc492c8a655c485354080 (#8477)
⬆️ Update antirez/voxtral.c
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2026-02-09 22:20:03 +01:00
Ettore Di Giacinto
bf5a1dd840
feat(voxtral): add voxtral backend (#8451)
* feat(voxtral): add voxtral backend
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* simplify
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
---------
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2026-02-09 09:12:05 +01:00
LocalAI [bot]
3b1b08efd6
chore: ⬆️ Update ggml-org/llama.cpp to e06088da0fa86aa444409f38dff274904931c507 (#8464)
⬆️ Update ggml-org/llama.cpp
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2026-02-09 09:09:32 +01:00
LocalAI [bot]
3d8791067f
chore: ⬆️ Update ggml-org/whisper.cpp to 4b23ff249e7f93137cb870b28fb27818e074c255 (#8463)
⬆️ Update ggml-org/whisper.cpp
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2026-02-09 09:08:55 +01:00
Austen
da8207b73b
feat(stablediffusion-ggml): Improve legacy CPU support for stablediffusion-ggml backend (#8461)
* Port AVX logic from whisper to stablediffusion-ggml
Signed-off-by: Austen Dicken <cvpcsm@gmail.com>
* disable BMI2 on AVX builds
Signed-off-by: Austen Dicken <cvpcsm@gmail.com>
---------
Signed-off-by: Austen Dicken <cvpcsm@gmail.com>
Co-authored-by: Ettore Di Giacinto <mudler@users.noreply.github.com>
2026-02-08 23:11:33 +00:00
LocalAI [bot]
944874d08b
chore: ⬆️ Update ggml-org/llama.cpp to 8872ad2125336d209a9911a82101f80095a9831d (#8448)
⬆️ Update ggml-org/llama.cpp
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2026-02-07 21:22:18 +00:00
Ettore Di Giacinto
3370d807c2
feat(nemo): add Nemo (only asr for now) backend (#8436)
* feat(nemo): add Nemo (only asr for now) backend
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* feat(nemo): add Nemo backend without Python version pins (#8438)
* Initial plan
* Remove Python version pins from nemo backend install.sh
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
* Pin pyarrow to 20.0.0 in nemo requirements
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
---------
Co-authored-by: copilot-swe-agent[bot] <198982749+Copilot@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
---------
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
Co-authored-by: Copilot <198982749+Copilot@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2026-02-07 08:19:37 +01:00
LocalAI [bot]
ae2689936a
chore: ⬆️ Update ggml-org/llama.cpp to b83111815e9a79949257e9d4b087206b320a3063 (#8434)
⬆️ Update ggml-org/llama.cpp
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2026-02-06 21:22:33 +00:00
Richard Palethorpe
15c12674b6
fix(qwen-asr): Remove contagious slop (DEFAULT_GOAL) from Makefile (#8431)
Signed-off-by: Richard Palethorpe <io@richiejp.com>
2026-02-06 17:12:45 +01:00
Andres
efd552f83e
fix(api)!: Stop model prior to deletion (#8422)
* Unload model prior to deletion
Signed-off-by: Andres Smith <andressmithdev@pm.me>
* Fix LFM model in gallery
Signed-off-by: Andres Smith <andressmithdev@pm.me>
* Remove mistakenly added files
Signed-off-by: Andres Smith <andressmithdev@pm.me>
---------
Signed-off-by: Andres Smith <andressmithdev@pm.me>
2026-02-06 09:22:10 +01:00
LocalAI [bot]
bcd927da6e
chore: ⬆️ Update ggml-org/llama.cpp to 22cae832188a1f08d18bd0a707a4ba5cd03c7349 (#8419)
⬆️ Update ggml-org/llama.cpp
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2026-02-06 09:21:33 +01:00
Ettore Di Giacinto
218d0526cb
fix(qwen-tts): add six dependency
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2026-02-05 18:05:31 +01:00
Ettore Di Giacinto
9bc5ab18fa
fix(voxcpm): make sed call unix-compliant
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2026-02-05 17:15:58 +01:00
Ettore Di Giacinto
a9267f391c
fix(huggingface): add clean target
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2026-02-05 16:54:41 +01:00
Ettore Di Giacinto
029ae3420d
fix(package.sh): drop redundant -a and -R
-a already implies -R
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2026-02-05 16:39:38 +01:00
Ettore Di Giacinto
c0461f32a1
fix: add missing clean targets
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2026-02-05 16:38:16 +01:00
Ettore Di Giacinto
8989d2944e
fix: add clean target to local-store
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2026-02-05 14:55:34 +01:00
Ettore Di Giacinto
7aea2add44
Revert "chore(deps): bump torch from 2.4.1 to 2.7.1+xpu in /backend/python/rerankers in the pip group across 1 directory" (#8412)
Revert "chore(deps): bump torch from 2.4.1 to 2.7.1+xpu in /backend/python/re…"
This reverts commit 55e43b3f92.
2026-02-05 14:17:33 +01:00
dependabot[bot]
55e43b3f92
chore(deps): bump torch from 2.4.1 to 2.7.1+xpu in /backend/python/rerankers in the pip group across 1 directory (#8407)
chore(deps): bump torch
Bumps the pip group with 1 update in the /backend/python/rerankers directory: torch.
Updates `torch` from 2.4.1 to 2.7.1+xpu
---
updated-dependencies:
- dependency-name: torch
  dependency-version: 2.7.1+xpu
  dependency-type: direct:production
  dependency-group: pip
...
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-02-05 12:37:52 +00:00
Ettore Di Giacinto
53276d28e7
feat(musicgen): add ace-step and UI interface (#8396)
* feat(musicgen): add ace-step and UI interface
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* Correctly handle model dir
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* Drop auto-download
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* Fixups
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* Add to models, fix up UI icons
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* fixups
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* Update docs
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* l4t13 is incompatible
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* avoid pinning version for cuda12
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* Drop l4t12
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
---------
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2026-02-05 12:04:53 +01:00
LocalAI [bot]
c30866ba95
chore: ⬆️ Update ggml-org/llama.cpp to b536eb023368701fe3564210440e2df6151c3e65 (#8399)
⬆️ Update ggml-org/llama.cpp
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2026-02-04 23:08:08 +01:00
LocalAI [bot]
b413beba2d
chore: ⬆️ Update ggml-org/whisper.cpp to 941bdabbe4561bc6de68981aea01bc5ab05781c5 (#8398)
⬆️ Update ggml-org/whisper.cpp
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2026-02-04 21:20:59 +00:00
Ettore Di Giacinto
9db4df22f3
chore: update torch and torchaudio version specifications for qwen-tts on MPS
Signed-off-by: Ettore Di Giacinto <mudler@users.noreply.github.com>
2026-02-04 16:55:42 +01:00
Ettore Di Giacinto
5201b58d3e
feat(mlx): Add support for CUDA12, CUDA13, L4T, SBSA and CPU (#8380)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2026-02-03 23:53:34 +01:00
Ettore Di Giacinto
3039ced287
chore(ci): enlarge sleep startup time
Even if suboptimal, as we should poll until the service is available, this should at least stabilize the tests for now
Signed-off-by: Ettore Di Giacinto <mudler@users.noreply.github.com>
2026-02-03 22:07:07 +01:00
Ettore Di Giacinto
e7fc604dbc
feat(metal): try to extend support to remaining backends (#8374)
* feat(metal): try to extend support to remaining backends
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* neutts doesn't work
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* split outetts out of transformers
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* Remove torch pin to whisperx
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
---------
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2026-02-03 21:57:50 +01:00