LocalAI [bot]
d3525b7509
chore: ⬆️ Update ggml-org/llama.cpp to 959ecf7f234dc0bc0cd6829b25cb0ee1481aa78a (#8122)
...
⬆️ Update ggml-org/llama.cpp
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2026-01-19 22:50:47 +01:00
LocalAI [bot]
c8aa821e0e
chore: ⬆️ Update leejet/stable-diffusion.cpp to a48b4a3ade9972faf0adcad47e51c6fc03f0e46d (#8121)
...
⬆️ Update leejet/stable-diffusion.cpp
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2026-01-19 22:27:46 +01:00
LocalAI [bot]
8845186955
chore: ⬆️ Update leejet/stable-diffusion.cpp to 2efd19978dd4164e387bf226025c9666b6ef35e2 (#8099)
...
⬆️ Update leejet/stable-diffusion.cpp
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2026-01-18 22:40:35 +01:00
LocalAI [bot]
ab8ed24358
chore: ⬆️ Update ggml-org/llama.cpp to 287a33017b32600bfc0e81feeb0ad6e81e0dd484 (#8100)
...
⬆️ Update ggml-org/llama.cpp
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2026-01-18 22:40:14 +01:00
Ettore Di Giacinto
5f403b1631
chore: drop neutts for l4t (#8101)
...
Builds currently exhaust CI, and there are better backends at this
point in time. We will probably deprecate it in the future.
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2026-01-18 21:55:56 +01:00
LocalAI [bot]
16a18a2e55
chore: ⬆️ Update leejet/stable-diffusion.cpp to 9565c7f6bd5fcff124c589147b2621244f2c4aa1 (#8086)
...
⬆️ Update leejet/stable-diffusion.cpp
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2026-01-17 22:12:21 +01:00
LocalAI [bot]
1cd33047b4
chore: ⬆️ Update ggml-org/llama.cpp to 2fbde785bc106ae1c4102b0e82b9b41d9c466579 (#8087)
...
⬆️ Update ggml-org/llama.cpp
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2026-01-17 21:10:18 +00:00
LocalAI [bot]
5fe9bf9f84
chore: ⬆️ Update ggml-org/whisper.cpp to f53dc74843e97f19f94a79241357f74ad5b691a6 (#8074)
...
⬆️ Update ggml-org/whisper.cpp
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2026-01-17 08:32:53 +01:00
LocalAI [bot]
d4fd0c0609
chore: ⬆️ Update ggml-org/llama.cpp to 388ce822415f24c60fcf164a321455f1e008cafb (#8073)
...
⬆️ Update ggml-org/llama.cpp
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2026-01-16 21:22:33 +00:00
Ettore Di Giacinto
d16722ee13
Revert "chore(deps): bump torch from 2.3.1+cxx11.abi to 2.8.0 in /backend/python/rerankers in the pip group across 1 directory" ( #8072 )
...
Revert "chore(deps): bump torch from 2.3.1+cxx11.abi to 2.8.0 in /backend/pyt…"
This reverts commit 1f10ab39a9 .
2026-01-16 20:50:33 +01:00
dependabot[bot]
1f10ab39a9
chore(deps): bump torch from 2.3.1+cxx11.abi to 2.8.0 in /backend/python/rerankers in the pip group across 1 directory (#8066)
...
chore(deps): bump torch
Bumps the pip group with 1 update in the /backend/python/rerankers directory: [torch](https://github.com/pytorch/pytorch).
Updates `torch` from 2.3.1+cxx11.abi to 2.8.0
- [Release notes](https://github.com/pytorch/pytorch/releases)
- [Changelog](https://github.com/pytorch/pytorch/blob/main/RELEASE.md)
- [Commits](https://github.com/pytorch/pytorch/commits/v2.8.0)
---
updated-dependencies:
- dependency-name: torch
dependency-version: 2.8.0
dependency-type: direct:production
dependency-group: pip
...
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-01-16 19:38:12 +00:00
LocalAI [bot]
cb8616c7d1
chore: ⬆️ Update ggml-org/llama.cpp to 785a71008573e2d84728fb0ba9e851d72d3f8fab (#8053)
...
⬆️ Update ggml-org/llama.cpp
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2026-01-15 22:53:17 +01:00
LocalAI [bot]
ff31d50488
chore: ⬆️ Update ggml-org/whisper.cpp to 2eeeba56e9edd762b4b38467bab96c2517163158 (#8052)
...
⬆️ Update ggml-org/whisper.cpp
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2026-01-15 22:52:56 +01:00
LocalAI [bot]
49d6305509
chore: ⬆️ Update ggml-org/llama.cpp to d98b548120eecf98f0f6eaa1ba7e29b3afda9f2e (#8040)
...
⬆️ Update ggml-org/llama.cpp
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2026-01-15 08:39:46 +01:00
LocalAI [bot]
cbaa793520
chore: ⬆️ Update ggml-org/whisper.cpp to 47af2fb70f7e4ee1ba40c8bed513760fdfe7a704 (#8039)
...
⬆️ Update ggml-org/whisper.cpp
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2026-01-14 22:12:32 +01:00
Ettore Di Giacinto
b19afc9e64
feat(diffusers): add support to LTX-2 (#8019)
...
* feat(diffusers): add support to LTX-2
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* Add to the gallery
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
---------
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2026-01-14 09:07:30 +01:00
LocalAI [bot]
d6e698876b
chore: ⬆️ Update ggml-org/llama.cpp to e4832e3ae4d58ac0ecbdbf4ae055424d6e628c9f (#8015)
...
⬆️ Update ggml-org/llama.cpp
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2026-01-14 08:09:37 +01:00
LocalAI [bot]
8962205546
chore: ⬆️ Update ggml-org/whisper.cpp to a96310871a3b294f026c3bcad4e715d17b5905fe (#8014)
...
⬆️ Update ggml-org/whisper.cpp
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2026-01-14 08:09:00 +01:00
LocalAI [bot]
eddc460118
chore: ⬆️ Update leejet/stable-diffusion.cpp to 7010bb4dff7bd55b03d35ef9772142c21699eba9 (#8013)
...
⬆️ Update leejet/stable-diffusion.cpp
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2026-01-14 08:08:31 +01:00
Ettore Di Giacinto
a6ff354c86
feat(tts): add pocket-tts backend (#8018)
...
* feat(pocket-tts): add new backend
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* Add to the gallery
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* fixups
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* Update docs
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
---------
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2026-01-13 23:35:19 +01:00
dependabot[bot]
94eecc43a3
chore(deps): bump protobuf from 6.33.2 to 6.33.4 in /backend/python/transformers (#7993)
...
chore(deps): bump protobuf in /backend/python/transformers
Bumps [protobuf](https://github.com/protocolbuffers/protobuf) from 6.33.2 to 6.33.4.
- [Release notes](https://github.com/protocolbuffers/protobuf/releases)
- [Commits](https://github.com/protocolbuffers/protobuf/commits)
---
updated-dependencies:
- dependency-name: protobuf
dependency-version: 6.33.4
dependency-type: direct:production
update-type: version-update:semver-patch
...
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-01-12 23:46:32 +00:00
LocalAI [bot]
7e35ec6c4f
chore: ⬆️ Update ggml-org/llama.cpp to bcf7546160982f56bc290d2e538544bbc0772f63 (#7991)
...
⬆️ Update ggml-org/llama.cpp
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2026-01-12 21:14:33 +00:00
Ettore Di Giacinto
7891c33cb1
chore(vulkan): bump vulkan-sdk to 1.4.335.0 (#7981)
...
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2026-01-12 07:51:26 +01:00
LocalAI [bot]
3d12d5e70d
chore: ⬆️ Update leejet/stable-diffusion.cpp to 885e62ea822e674c6837a8225d2d75f021b97a6a (#7979)
...
⬆️ Update leejet/stable-diffusion.cpp
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2026-01-11 22:44:11 +01:00
LocalAI [bot]
bc180c2638
chore: ⬆️ Update ggml-org/llama.cpp to 0c3b7a9efebc73d206421c99b7eb6b6716231322 (#7978)
...
⬆️ Update ggml-org/llama.cpp
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2026-01-11 22:06:30 +01:00
Ettore Di Giacinto
2de30440fe
fix(l4t-12): use pip to install python deps (#7967)
...
* fix: install only torch/torchvision from jetson index
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* fix: use pip for l4t-12
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* Revert "fix: install only torch/torchvision from jetson index"
This reverts commit 2d2b020078
* chatterbox needs wheel
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
---------
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2026-01-11 00:21:32 +01:00
LocalAI [bot]
5bfc3eebf8
chore: ⬆️ Update ggml-org/llama.cpp to b1377188784f9aea26b8abde56d4aee8c733eec7 (#7965)
...
⬆️ Update ggml-org/llama.cpp
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2026-01-10 22:24:26 +01:00
LocalAI [bot]
fdc2c0737c
chore: ⬆️ Update ggml-org/llama.cpp to 593da7fa49503b68f9f01700be9f508f1e528992 (#7946)
...
⬆️ Update ggml-org/llama.cpp
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2026-01-09 21:13:04 +00:00
Ettore Di Giacinto
f4b0a304d7
chore(llama.cpp): propagate errors during model load (#7937)
...
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2026-01-09 07:52:49 +01:00
Ettore Di Giacinto
d16ec7aa9e
chore(deps): Bump llama.cpp to '480160d47297df43b43746294963476fc0a6e10f' (#7933)
...
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2026-01-09 07:52:32 +01:00
Ettore Di Giacinto
a4d224dd1b
Revert "chore(uv): add --index-strategy=unsafe-first-match to l4t" ( #7936 )
...
Revert "chore(uv): add --index-strategy=unsafe-first-match to l4t (#7934 )"
This reverts commit f5dee90962 .
2026-01-08 23:31:51 +01:00
Ettore Di Giacinto
917c7aa9f3
chore(ci): roll back l4t-cuda12 configurations (#7935)
...
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2026-01-08 23:04:33 +01:00
LocalAI [bot]
5aa66842dd
chore: ⬆️ Update leejet/stable-diffusion.cpp to 0e52afc6513cc2dea9a1a017afc4a008d5acf2b0 (#7930)
...
⬆️ Update leejet/stable-diffusion.cpp
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2026-01-08 22:48:46 +01:00
Ettore Di Giacinto
f5dee90962
chore(uv): add --index-strategy=unsafe-first-match to l4t (#7934)
...
This is because the main index might not contain all the dependencies
for torch.
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2026-01-08 22:48:03 +01:00
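As a rough illustration of the flag referenced in the commit above: uv's `--index-strategy unsafe-first-match` lets the resolver keep consulting additional configured indexes for a package rather than committing to the first index that lists it, which matches the stated rationale that the main index might not carry every torch dependency. The sketch below is a hypothetical invocation; the index URL and package list are placeholders, not the project's actual l4t build commands.

```bash
# Hypothetical sketch, not the project's actual l4t build step:
# pull torch wheels from a board-specific index while still letting the
# remaining dependencies resolve from PyPI.
uv pip install \
  --index-url https://example-jetson-index.invalid/simple \
  --extra-index-url https://pypi.org/simple \
  --index-strategy unsafe-first-match \
  torch torchvision
```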
Ettore Di Giacinto
383312b50e
chore(l4t-12): do not use python 3.12 (wheels are only for 3.10) (#7928)
...
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2026-01-08 19:00:07 +01:00
LocalAI [bot]
c03e532a18
chore: ⬆️ Update ggml-org/llama.cpp to ae9f8df77882716b1702df2bed8919499e64cc28 (#7915)
...
⬆️ Update ggml-org/llama.cpp
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2026-01-07 23:24:01 +01:00
Copilot
b2ff1cea2a
feat: enable Vulkan arm64 image builds (#7912)
...
* Initial plan
* Add arm64 support for Vulkan builds in Dockerfiles and workflows
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
---------
Co-authored-by: copilot-swe-agent[bot] <198982749+Copilot@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2026-01-07 21:49:50 +01:00
Ettore Di Giacinto
b964b3d53e
feat(backends): add moonshine backend for faster transcription (#7833)
...
* feat(backends): add moonshine backend for faster transcription
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* Add backend to CI, update AGENTS.md from this exercise
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
---------
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2026-01-07 21:44:35 +01:00
Copilot
fd53978a7b
feat: package GPU libraries inside backend containers for unified base image (#7891)
...
* Initial plan
* Add GPU library packaging for isolated backend environments
- Create scripts/build/package-gpu-libs.sh for packaging CUDA, ROCm, SYCL, and Vulkan libraries
- Update llama-cpp, whisper, stablediffusion-ggml package.sh to include GPU libraries
- Update Dockerfile.python to package GPU libraries into Python backends
- Update libbackend.sh to set LD_LIBRARY_PATH for GPU library loading
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
* Address code review feedback: fix variable consistency and quoting
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
* Fix code review issues: improve glob handling and remove redundant variable
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
* Simplify main Dockerfile and workflow to use unified base image
- Remove GPU-specific driver installation from Dockerfile (CUDA, ROCm, Vulkan, Intel)
- Simplify image.yml workflow to build single unified base image for linux/amd64 and linux/arm64
- GPU libraries are now packaged in individual backend containers
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
---------
Co-authored-by: copilot-swe-agent[bot] <198982749+Copilot@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
Co-authored-by: Ettore Di Giacinto <mudler@users.noreply.github.com>
2026-01-07 15:48:51 +01:00
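To make the launch-time half of the change above concrete: once GPU runtime libraries are packaged inside the backend container, the backend launcher only needs to prepend the bundled library directory to LD_LIBRARY_PATH before starting the binary. The sketch below is a minimal, hypothetical wrapper; the directory layout, variable names, and binary name are assumptions for illustration, not the actual contents of libbackend.sh or scripts/build/package-gpu-libs.sh.

```bash
#!/bin/bash
# Hypothetical wrapper, assuming the backend ships its GPU runtime
# libraries in a ./lib directory next to the backend binary.
BACKEND_DIR="$(cd "$(dirname "$0")" && pwd)"
if [ -d "${BACKEND_DIR}/lib" ]; then
    # Prepend the bundled libraries so the dynamic loader prefers them
    # over (possibly missing) host-installed copies.
    export LD_LIBRARY_PATH="${BACKEND_DIR}/lib${LD_LIBRARY_PATH:+:${LD_LIBRARY_PATH}}"
fi
exec "${BACKEND_DIR}/backend-binary" "$@"
```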
LocalAI [bot]
23df29fbd3
chore: ⬆️ Update leejet/stable-diffusion.cpp to 9be0b91927dfa4007d053df72dea7302990226bb (#7895)
...
⬆️ Update leejet/stable-diffusion.cpp
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2026-01-06 22:18:53 +01:00
LocalAI [bot]
fb9879949c
chore: ⬆️ Update ggml-org/llama.cpp to ccbc84a5374bab7a01f68b129411772ddd8e7c79 (#7894)
...
⬆️ Update ggml-org/llama.cpp
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2026-01-06 22:18:35 +01:00
Richard Palethorpe
e6ba26c3e7
chore: Update to Ubuntu 24.04 (cont #7423) (#7769)
...
* ci(workflows): bump GitHub Actions images to Ubuntu 24.04
Signed-off-by: Alessandro Sturniolo <alessandro.sturniolo@gmail.com>
* ci(workflows): remove CUDA 11.x support from GitHub Actions (incompatible with ubuntu:24.04)
Signed-off-by: Alessandro Sturniolo <alessandro.sturniolo@gmail.com>
* ci(workflows): bump GitHub Actions CUDA support to 12.9
Signed-off-by: Alessandro Sturniolo <alessandro.sturniolo@gmail.com>
* build(docker): bump base image to ubuntu:24.04 and adjust Vulkan SDK/packages
Signed-off-by: Alessandro Sturniolo <alessandro.sturniolo@gmail.com>
* fix(backend): correct context paths for Python backends in workflows, Makefile and Dockerfile
Signed-off-by: Alessandro Sturniolo <alessandro.sturniolo@gmail.com>
* chore(make): disable parallel backend builds to avoid race conditions
Signed-off-by: Alessandro Sturniolo <alessandro.sturniolo@gmail.com>
* chore(make): export CUDA_MAJOR_VERSION and CUDA_MINOR_VERSION for override
Signed-off-by: Alessandro Sturniolo <alessandro.sturniolo@gmail.com>
* build(backend): update backend Dockerfiles to Ubuntu 24.04
Signed-off-by: Alessandro Sturniolo <alessandro.sturniolo@gmail.com>
* chore(backend): add ROCm env vars and default AMDGPU_TARGETS for hipBLAS builds
Signed-off-by: Alessandro Sturniolo <alessandro.sturniolo@gmail.com>
* chore(chatterbox): bump ROCm PyTorch to 2.9.1+rocm6.4 and update index URL; align hipblas requirements
Signed-off-by: Alessandro Sturniolo <alessandro.sturniolo@gmail.com>
* chore: add local-ai-launcher to .gitignore
Signed-off-by: Alessandro Sturniolo <alessandro.sturniolo@gmail.com>
* ci(workflows): fix backends GitHub Actions workflows after rebase
Signed-off-by: Alessandro Sturniolo <alessandro.sturniolo@gmail.com>
* build(docker): use build-time UBUNTU_VERSION variable
Signed-off-by: Alessandro Sturniolo <alessandro.sturniolo@gmail.com>
* chore(docker): remove libquadmath0 from requirements-stage base image
Signed-off-by: Alessandro Sturniolo <alessandro.sturniolo@gmail.com>
* chore(make): add backends/vllm to .NOTPARALLEL to prevent parallel builds
Signed-off-by: Alessandro Sturniolo <alessandro.sturniolo@gmail.com>
* fix(docker): correct CUDA installation steps in backend Dockerfiles
Signed-off-by: Alessandro Sturniolo <alessandro.sturniolo@gmail.com>
* chore(backend): update ROCm to 6.4 and align Python hipblas requirements
Signed-off-by: Alessandro Sturniolo <alessandro.sturniolo@gmail.com>
* ci(workflows): switch GitHub Actions runners to Ubuntu-24.04 for CUDA on arm64 builds
Signed-off-by: Alessandro Sturniolo <alessandro.sturniolo@gmail.com>
* build(docker): update base image and backend Dockerfiles for Ubuntu 24.04 compatibility on arm64
Signed-off-by: Alessandro Sturniolo <alessandro.sturniolo@gmail.com>
* build(backend): increase timeout for uv installs behind slow networks on backend/Dockerfile.python
Signed-off-by: Alessandro Sturniolo <alessandro.sturniolo@gmail.com>
* ci(workflows): switch GitHub Actions runners to Ubuntu-24.04 for vibevoice backend
Signed-off-by: Alessandro Sturniolo <alessandro.sturniolo@gmail.com>
* ci(workflows): fix failing GitHub Actions runners
Signed-off-by: Alessandro Sturniolo <alessandro.sturniolo@gmail.com>
* fix: Allow FROM_SOURCE to be unset, use upstream Intel images etc.
Signed-off-by: Richard Palethorpe <io@richiejp.com>
* chore(build): rm all traces of CUDA 11
Signed-off-by: Richard Palethorpe <io@richiejp.com>
* chore(build): Add Ubuntu codename as an argument
Signed-off-by: Richard Palethorpe <io@richiejp.com>
---------
Signed-off-by: Alessandro Sturniolo <alessandro.sturniolo@gmail.com>
Signed-off-by: Richard Palethorpe <io@richiejp.com>
Co-authored-by: Alessandro Sturniolo <alessandro.sturniolo@gmail.com>
2026-01-06 15:26:42 +01:00
Ettore Di Giacinto
26c4f80d1b
chore(llama.cpp/flags): simplify conditionals (#7887)
...
If ggml handles conditionals correctly, we don't need to handle them here.
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2026-01-06 15:02:20 +01:00
coffeerunhobby
5add7b47f5
fix: BMI2 crash on AVX-only CPUs (Intel Ivy Bridge/Sandy Bridge) (#7864)
...
* Fix BMI2 crash on AVX-only CPUs (Intel Ivy Bridge/Sandy Bridge)
Signed-off-by: coffeerunhobby <coffeerunhobby@users.noreply.github.com>
* Address feedback from review
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
---------
Signed-off-by: coffeerunhobby <coffeerunhobby@users.noreply.github.com>
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
Co-authored-by: coffeerunhobby <coffeerunhobby@users.noreply.github.com>
Co-authored-by: Ettore Di Giacinto <mudler@localai.io>
Co-authored-by: Ettore Di Giacinto <mudler@users.noreply.github.com>
2026-01-06 00:13:48 +00:00
LocalAI [bot]
4f7b6b0bff
chore: ⬆️ Update ggml-org/llama.cpp to e443fbcfa51a8a27b15f949397ab94b5e87b2450 (#7881)
...
⬆️ Update ggml-org/llama.cpp
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2026-01-05 22:55:40 +01:00
LocalAI [bot]
3a629cea2f
chore: ⬆️ Update ggml-org/whisper.cpp to 679bdb53dbcbfb3e42685f50c7ff367949fd4d48 (#7879)
...
⬆️ Update ggml-org/whisper.cpp
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2026-01-05 22:55:16 +01:00
LocalAI [bot]
f917feda29
chore: ⬆️ Update leejet/stable-diffusion.cpp to c5602a676caff5fe5a9f3b76b2bc614faf5121a5 (#7880)
...
⬆️ Update leejet/stable-diffusion.cpp
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2026-01-05 22:54:56 +01:00
LocalAI [bot]
9d3da0bed5
chore: ⬆️ Update ggml-org/llama.cpp to 4974bf53cf14073c7b66e1151348156aabd42cb8 (#7861)
...
⬆️ Update ggml-org/llama.cpp
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2026-01-05 00:10:18 +01:00
LocalAI [bot]
1b063b5595
chore: ⬆️ Update leejet/stable-diffusion.cpp to b90b1ee9cf84ea48b478c674dd2ec6a33fd504d6 (#7862)
...
⬆️ Update leejet/stable-diffusion.cpp
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2026-01-04 23:52:01 +01:00
LocalAI [bot]
a7e155240b
chore: ⬆️ Update ggml-org/llama.cpp to e57f52334b2e8436a94f7e332462dfc63a08f995 (#7848)
...
⬆️ Update ggml-org/llama.cpp
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2026-01-04 10:27:45 +01:00