469 Commits

Ettore Di Giacinto
a8057b952c fix(cuda): be consistent with image tag naming (#5916)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-07-26 08:30:59 +02:00
LocalAI [bot]
5ce982b9c9 chore: ⬆️ Update ggml-org/llama.cpp to c7f3169cd523140a288095f2d79befb20a0b73f4 (#5913)
⬆️ Update ggml-org/llama.cpp

Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2025-07-25 23:08:20 +02:00
Ettore Di Giacinto
b3600b3c50 feat(backend gallery): add mirrors (#5910)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-07-25 19:20:08 +02:00
LocalAI [bot]
fb6ec68090 chore: ⬆️ Update ggml-org/whisper.cpp to 7de8dd783f7b2eab56bff6bbc5d3369e34f0e77f (#5902)
⬆️ Update ggml-org/whisper.cpp

Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2025-07-25 08:40:24 +02:00
LocalAI [bot]
0301fc7c46 chore: ⬆️ Update leejet/stable-diffusion.cpp to eed97a5e1d054f9c1e7ac01982ae480411d4157e (#5901)
⬆️ Update leejet/stable-diffusion.cpp

Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2025-07-25 08:40:06 +02:00
LocalAI [bot]
813cb4296d chore: ⬆️ Update ggml-org/llama.cpp to 3f4fc97f1d745f1d5d3c853949503136d419e6de (#5900)
⬆️ Update ggml-org/llama.cpp

Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2025-07-25 08:39:44 +02:00
Richard Palethorpe
8fe9fa98f2 fix(stablediffusion-cpp): Switch back to upstream and update (#5880)
* sync(stablediffusion-cpp): Switch back to upstream and update

Signed-off-by: Richard Palethorpe <io@richiejp.com>

* fix(stablediffusion-ggml): NULL terminate options array to prevent segfault

Signed-off-by: Richard Palethorpe <io@richiejp.com>

* fix(build): Add BUILD_TYPE and BASE_IMAGE to all backends

Signed-off-by: Richard Palethorpe <io@richiejp.com>

---------

Signed-off-by: Richard Palethorpe <io@richiejp.com>
2025-07-24 16:03:18 +02:00
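For context on the NULL-termination fix above: a C API that walks an options array until it finds a NULL sentinel will read past the end of an array that lacks one. A minimal illustrative sketch of the pattern (option names and values are hypothetical, not the backend's actual parameters):

    #include <stdio.h>

    /* Walks key/value pairs until the NULL sentinel, mirroring how
     * C-style option arrays are typically consumed. */
    static void apply_options(const char **opts) {
        for (int i = 0; opts[i] != NULL; i += 2) {
            printf("option %s = %s\n", opts[i], opts[i + 1]);
        }
    }

    int main(void) {
        const char *options[] = {
            "sample_method", "euler_a",  /* hypothetical option */
            "cfg_scale",     "7.0",      /* hypothetical option */
            NULL  /* sentinel: omitting it makes the loop above read
                     past the array -- the segfault this commit fixes */
        };
        apply_options(options);
        return 0;
    }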
LocalAI [bot]
61c2304638 chore: ⬆️ Update ggml-org/llama.cpp to a86f52b2859dae4db5a7a0bbc0f1ad9de6b43ec6 (#5894)
⬆️ Update ggml-org/llama.cpp

Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2025-07-24 15:02:37 +02:00
LocalAI [bot]
76e471441c chore: ⬆️ Update richiejp/stable-diffusion.cpp to 10c6501bd05a697e014f1bee3a84e5664290c489 (#5732)
⬆️ Update richiejp/stable-diffusion.cpp

Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2025-07-23 21:09:02 +00:00
Dave
9cecf5e7ac fix: rename Dockerfile.go --> Dockerfile.golang to avoid IDE errors (#5892)
Extract the Dockerfile.go --> Dockerfile.golang rename up and out; prevents syntax-highlighting and IDE errors

Signed-off-by: Dave Lee <dave@gray101.com>
2025-07-23 21:33:26 +02:00
Ettore Di Giacinto
b7b3164736 chore: try to speedup build
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-07-23 21:21:23 +02:00
Ettore Di Giacinto
6030b12283 chore(backend gallery): add name to 'diffusers' meta
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-07-23 09:21:04 +02:00
LocalAI [bot]
b5be867e28 chore: ⬆️ Update ggml-org/llama.cpp to acd6cb1c41676f6bbb25c2a76fa5abeb1719301e (#5882)
⬆️ Update ggml-org/llama.cpp

Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2025-07-22 21:12:06 +00:00
Ettore Di Giacinto
9b806250d4 chore: drop vllm for cuda 11 (#5881)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-07-22 18:47:31 +02:00
Richard Palethorpe
51230a801e fix(build): Add and update ONEAPI_VERSION (#5874)
Signed-off-by: Richard Palethorpe <io@richiejp.com>
2025-07-22 16:41:49 +02:00
Ettore Di Giacinto
98e5291afc feat: refactor build process, drop embedded backends (#5875)
* feat: split remaining backends and drop embedded backends

- Drop silero-vad, huggingface, and stores backend from embedded
  binaries
- Refactor Makefile and Dockerfile to avoid building grpc backends
- Drop golang code that was used to embed backends
- Simplify building by using goreleaser

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* chore(gallery): be specific with llama-cpp backend templates

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* chore(docs): update

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* chore(ci): minor fixes

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* chore: drop all ffmpeg references

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* fix: run protogen-go

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* Always enable p2p mode

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* Update goreleaser file

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* fix(stores): do not always load

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* Fix linting issues

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* Simplify

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* Mac OS fixup

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

---------

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-07-22 16:31:04 +02:00
LocalAI [bot]
e29b2c3aff chore: ⬆️ Update ggml-org/llama.cpp to 6c9ee3b17e19dcc82ab93d52ae46fdd0226d4777 (#5877)
⬆️ Update ggml-org/llama.cpp

Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2025-07-22 08:25:43 +02:00
LocalAI [bot]
8dc574f3c4 chore: ⬆️ Update ggml-org/whisper.cpp to 1f5cf0b2888402d57bb17b2029b2caa97e5f3baf (#5876)
⬆️ Update ggml-org/whisper.cpp

Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2025-07-22 08:25:13 +02:00
LocalAI [bot]
fa284f7445 chore: ⬆️ Update ggml-org/llama.cpp to 2be60cbc2707359241c2784f9d2e30d8fc7cdabb (#5867)
⬆️ Update ggml-org/llama.cpp

Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2025-07-21 09:14:09 +02:00
Ettore Di Giacinto
8f69b80520 Update index.yaml
Signed-off-by: Ettore Di Giacinto <mudler@users.noreply.github.com>
2025-07-20 22:54:12 +02:00
Ettore Di Giacinto
b1fc5acd4a feat: split whisper from main binary (#5863)
* feat: split whisper from main binary

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* Cleanup makefile

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* Add backend builds (missing only darwin)

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* Test CI

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* Add whisper backend to test runs

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* Fixups

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* Make sure we have runtime libs

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* Less grpc on the main Dockerfile

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* fixups

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* Fix hipblas build

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* Add whisper to index

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* Re-enable CI

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* Adapt auto-bumper

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

---------

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-07-20 22:52:45 +02:00
LocalAI [bot]
7659461036 chore: ⬆️ Update ggml-org/llama.cpp to a979ca22db0d737af1e548a73291193655c6be99 (#5862)
⬆️ Update ggml-org/llama.cpp

Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2025-07-20 08:43:36 +02:00
Ettore Di Giacinto
580687da46 feat: remove stablediffusion-ggml from main binary (#5861)
* feat: split stablediffusion-ggml from main binary

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* Test CI

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* Adapt ci tests

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* Fixups

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* Fixups

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* Try to support nvidia-l4t

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* Latest fixups

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

---------

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-07-19 21:58:53 +02:00
LocalAI [bot]
1929eb2894 chore: ⬆️ Update ggml-org/llama.cpp to bf9087f59aab940cf312b85a67067ce33d9e365a (#5860)
⬆️ Update ggml-org/llama.cpp

Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2025-07-19 08:52:07 +02:00
Ettore Di Giacinto
b29544d747 feat: split piper from main binary (#5858)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-07-19 08:31:33 +02:00
Ettore Di Giacinto
294f7022f3 feat: do not bundle llama-cpp anymore (#5790)
* Build llama.cpp separately

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* WIP

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* WIP

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* WIP

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* Start to try to attach some tests

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* Add git and small fixups

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* fix: correctly autoload external backends

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* Try to run AIO tests

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* Slightly update the Makefile help

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* Adapt auto-bumper

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* Try to run linux test

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* Add llama-cpp into build pipelines

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* Add default capability (for cpu)

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* Drop llama-cpp specific logic from the backend loader

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* drop grpc install in ci for tests

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* fixups

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* Pass the backends path for tests

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* Build protogen at start

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* fix(tests): set backends path consistently

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* Correctly configure the backends path

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* Try to build for darwin

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* WIP

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* Compile for metal on arm64/darwin

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* Try to run build off from cross-arch

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* Add the nvidia-l4t and CPU llama-cpp backends to the backend index

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* Build also darwin-x86 for llama-cpp

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* Disable arm64 builds temporarily

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* Test backend build on PR

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* Fixup build backend reusable workflow

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* pass by skip drivers

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* Use crane

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* Skip drivers

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* Fixups

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* x86 darwin

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* Add packaging step for llama.cpp

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* fixups

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* Fix leftover from bark-cpp extraction

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* Try to fix hipblas build

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

---------

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-07-18 13:24:12 +02:00
Ettore Di Giacinto
61b64a65ab chore(bark-cpp): generalize and move to bark-cpp (#5786)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-07-03 19:31:10 +02:00
Ettore Di Giacinto
b7cd5bfaec feat(backends): add metas in the gallery (#5784)
* chore(backends): add metas in the gallery

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* chore: correctly handle aliases and metas with same names

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

---------

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-07-03 18:01:55 +02:00
Richard Palethorpe
b37cef3718 fix: Diffusers and XPU fixes (#5737)
* fix(README): Add device flags for Intel/XPU

Signed-off-by: Richard Palethorpe <io@richiejp.com>

* fix(diffusers/xpu): Set device to XPU and ignore CUDA request when on Intel

Signed-off-by: Richard Palethorpe <io@richiejp.com>

---------

Signed-off-by: Richard Palethorpe <io@richiejp.com>
2025-07-01 12:36:17 +02:00
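The XPU fix above follows a common device-selection pattern: when a model config requests CUDA on a machine that only has an Intel GPU, fall back to XPU rather than failing. A rough sketch of that logic (not the backend's actual code; assumes a PyTorch build that exposes torch.xpu):

    import torch

    def resolve_device(requested: str = "cuda") -> str:
        """Honor the requested device when possible; degrade gracefully on Intel."""
        if requested == "cuda" and not torch.cuda.is_available():
            # Ignore the CUDA request and use XPU when the build exposes one.
            if hasattr(torch, "xpu") and torch.xpu.is_available():
                return "xpu"
            return "cpu"
        return requested

    # e.g. pipeline.to(resolve_device("cuda")) for a loaded diffusers pipeline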
Ettore Di Giacinto
dfadc3696e feat(llama.cpp): allow to set kv-overrides (#5745)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-06-28 21:26:07 +02:00
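For context, kv-overrides is llama.cpp's mechanism for overriding GGUF metadata keys at load time; upstream exposes it as --override-kv KEY=TYPE:VALUE. How the LocalAI model config surfaces it is not shown here, but the upstream syntax looks like:

    # Upstream llama.cpp syntax: --override-kv KEY=TYPE:VALUE
    # (TYPE is one of int, float, bool, str)
    llama-cli -m model.gguf \
      --override-kv tokenizer.ggml.add_bos_token=bool:false \
      --override-kv tokenizer.ggml.pre=str:llama3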
Maxim Evtush
add8fc35a2 Fix Typos in Documentation and Python Comments (#5658)
* Update istftnet.py

Signed-off-by: Maxim Evtush <154841002+maximevtush@users.noreply.github.com>

* Update GPU-acceleration.md

Signed-off-by: Maxim Evtush <154841002+maximevtush@users.noreply.github.com>

---------

Signed-off-by: Maxim Evtush <154841002+maximevtush@users.noreply.github.com>
2025-06-18 22:11:13 +02:00
Ettore Di Giacinto
1e1f0ee321 chore(backends): move bark-cpp to the backend gallery (#5682)
chore(bark-cpp): move outside from binary

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-06-18 19:48:50 +02:00
Ettore Di Giacinto
fb9a09d49c chore(backend gallery): add description for remaining backends (#5679)
* chore(backend gallery): add description for remaining backends

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* chore(backend gallery): add linter

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

---------

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-06-17 22:21:44 +02:00
Ettore Di Giacinto
0a78f0ad2d chore(backend gallery): re-order and add description for vLLM (#5676)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-06-17 17:31:53 +02:00
Ettore Di Giacinto
d68660bd5a chore(deps): bump llama.cpp to 'e434e69183fd9e1031f4445002083178c331a28b' (#5665)
chore(deps): bump llama.cpp to 'e434e69183fd9e1031f4445002083178c331a28b'

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-06-17 17:00:10 +02:00
Ettore Di Giacinto
89040ff6f7 fix: add python symlink, use absolute python env path when running backends (#5664)
* fix: add python symlink, use absolute python env path when running backends

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* fix(ci): do not push images when building PRs

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

---------

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-06-16 23:00:53 +02:00
Ettore Di Giacinto
2d64269763 feat: Add backend gallery (#5607)
* feat: Add backend gallery

This PR adds support for managing backends in much the same way as
models. A backend gallery is now available and can be used to install
and remove extra backends. The backend gallery can be configured like a
model gallery, and API calls allow backends to be installed and removed
at runtime as well as during LocalAI's startup phase (an illustrative
API sketch follows this entry).

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* Add backends docs

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* wip: Backend Dockerfile for python backends

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* feat: drop extras images, build python backends separately

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* fixup on all backends

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* test CI

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* Tweaks

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* Drop old backends leftovers

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* Fixup CI

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* Move dockerfile upper

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* Fix proto

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* Feature dropped for consistency - we prefer model galleries

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* Add missing packages in the build image

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* exllama is only available on cublas

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* pin torch on chatterbox

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* Fixups to index

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* CI

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* Debug CI

* Install accelerator deps

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* Add target arch

* Add cuda minor version

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* Use self-hosted runners

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* ci: use quay for test images

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* fixups for vllm and chatterbox

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* Small fixups on CI

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* chatterbox is only available for nvidia

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* Simplify CI builds

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* Adapt test, use qwen3

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* chore(model gallery): add jina-reranker-v1-tiny-en-gguf

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* fix(gguf-parser): recover from potential panics that can happen while reading ggufs with gguf-parser

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* Use reranker from llama.cpp in AIO images

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* Limit concurrent jobs

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

---------

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
Signed-off-by: Ettore Di Giacinto <mudler@users.noreply.github.com>
2025-06-15 14:56:52 +02:00
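As an illustration of the backend gallery described above: since it mirrors the model gallery, installing a backend at runtime plausibly follows the model-gallery flow. The endpoints and payload below are assumptions modeled on the model gallery's POST /models/apply call, not confirmed paths; check the backend docs added in this PR:

    # Assumed endpoints, modeled on the model-gallery API.
    curl http://localhost:8080/backends/available          # list installable backends
    curl -X POST http://localhost:8080/backends/apply \
      -H "Content-Type: application/json" \
      -d '{"id": "localai@vllm"}'                          # install one at runtime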
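The gguf-parser fix in this PR uses Go's standard recover-from-panic pattern: a deferred recover converts a panic raised while reading a corrupt GGUF into an ordinary error. A generic sketch of the pattern (function names are illustrative, not the actual LocalAI code):

    package main

    import "fmt"

    // parseGGUF wraps a parser call that may panic on malformed input and
    // turns the panic into an error the caller can handle.
    func parseGGUF(path string) (err error) {
        defer func() {
            if r := recover(); r != nil {
                err = fmt.Errorf("gguf parsing panicked on %s: %v", path, r)
            }
        }()
        // ... invoke the gguf parser here; a corrupt file may panic ...
        return nil
    }

    func main() {
        if err := parseGGUF("model.gguf"); err != nil {
            fmt.Println(err)
        }
    }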
fuder.eth
eb8c29f90a Minor Documentation Updates: Clarified Comments in Python and Go Files (#5641)
* Update ui.go

Signed-off-by: fuder.eth <139509124+vtjl10@users.noreply.github.com>

* Update backend.py

Signed-off-by: fuder.eth <139509124+vtjl10@users.noreply.github.com>

---------

Signed-off-by: fuder.eth <139509124+vtjl10@users.noreply.github.com>
2025-06-13 19:55:25 +02:00
Ettore Di Giacinto
88e570b5de fix(deps): pin grpcio (#5621)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-06-10 14:21:51 +02:00
Ettore Di Giacinto
8b889955b4 chore(deps): bump pytorch to 2.7 in vllm (#5576)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-06-04 08:56:45 +02:00
Ettore Di Giacinto
cd3cd899ad chore(deps): bump llama.cpp to '363757628848a27a435bbf22ff9476e9aeda5f40' (#5571)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-06-03 12:19:16 +02:00
Ettore Di Giacinto
ec0868e691 chore(deps): bump grpcio from 1.72.0 to 1.72.1 (#5570)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-06-03 09:59:43 +02:00
Ettore Di Giacinto
80f7f17843 chore(deps): bump llama.cpp to 'e562eece7cb476276bfc4cbb18deb7c0369b2233' (#5552)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-05-31 12:46:32 +02:00
Ettore Di Giacinto
d5c9c717b5 feat(chatterbox): add new backend (#5524)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-05-30 10:52:55 +02:00
Ettore Di Giacinto
dd7fa6b9f7 chore(deps): bump llama.cpp to 'e83ba3e460651b20a594e9f2f0f0bffb998d3ce1' (#5527)
chore(deps): bump llama.cpp to 'e83ba3e460651b20a594e9f2f0f0bffb998d3ce1'

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-05-30 10:29:01 +02:00
Ettore Di Giacinto
5ffad3b004 chore(deps): remove pin on transformers (#5501)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-05-27 09:24:27 +02:00
Ettore Di Giacinto
88de2ea01a feat(llama.cpp): add support for audio input (#5466)
* feat(llama.cpp): add support for audio input

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* Adapt tests

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

---------

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-05-26 16:06:03 +02:00
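For reference, audio input in chat completions follows the OpenAI-style content-part shape. The sketch below is modeled on that API; the model name is a placeholder and the exact fields LocalAI accepts may differ:

    curl http://localhost:8080/v1/chat/completions \
      -H "Content-Type: application/json" \
      -d '{
        "model": "an-audio-capable-model",
        "messages": [{
          "role": "user",
          "content": [
            {"type": "text", "text": "Transcribe and summarize this clip."},
            {"type": "input_audio",
             "input_audio": {"data": "<base64 wav bytes>", "format": "wav"}}
          ]
        }]
      }'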
Richard Palethorpe
bf6426aef2 feat: Realtime API support reboot (#5392)
* feat(realtime): Initial Realtime API implementation

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* chore: go mod tidy

Signed-off-by: Richard Palethorpe <io@richiejp.com>

* feat: Implement transcription only mode for realtime API

Reduce the scope of the realtime API for the initial release and make
transcription-only mode functional.

Signed-off-by: Richard Palethorpe <io@richiejp.com>

* chore(build): Build backends on a separate layer to speed up core only changes

Signed-off-by: Richard Palethorpe <io@richiejp.com>

---------

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
Signed-off-by: Richard Palethorpe <io@richiejp.com>
Co-authored-by: Ettore Di Giacinto <mudler@localai.io>
2025-05-25 22:25:05 +02:00
Ettore Di Giacinto
3b0cf52f6a feat(llama.cpp): add reranking (#5396)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-05-22 21:49:30 +02:00
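Reranking in LocalAI follows the Jina-style rerank API shape; a minimal request sketch (model name and documents are placeholders):

    curl http://localhost:8080/v1/rerank \
      -H "Content-Type: application/json" \
      -d '{
        "model": "a-reranker-model",
        "query": "Which commit split whisper out of the main binary?",
        "documents": ["feat: split whisper from main binary",
                      "chore: bump llama.cpp"],
        "top_n": 1
      }'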
Ettore Di Giacinto
6a382a1afe fix(transformers): try to pin to working release (#5426)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-05-22 12:50:51 +02:00