mirror/LocalAI
Mirror of https://github.com/mudler/LocalAI.git, synced 2026-01-06 05:19:30 -05:00
LocalAI / backend (at commit a35a7010522ef709bd8bed8a6d94e0cfef2c97c7)

Latest commit: 3d8ec72dbf chore(stable-diffusion): bump, set GGML_MAX_NAME (#5961)
Ettore Di Giacinto, Signed-off-by: Ettore Di Giacinto <mudler@localai.io>, 2025-08-03 10:47:02 +02:00
| Name | Last commit | Date |
|------|-------------|------|
| cpp | chore: ⬆️ Update ggml-org/llama.cpp to 5c0eb5ef544aeefd81c303e03208f768e158d93c (#5959) | 2025-08-02 23:35:24 +02:00 |
| go | chore(stable-diffusion): bump, set GGML_MAX_NAME (#5961) | 2025-08-03 10:47:02 +02:00 |
| python | feat(rfdetr): add object detection API (#5923) | 2025-07-27 22:02:51 +02:00 |
| backend.proto | feat(stablediffusion-ggml): add support to ref images (flux Kontext) (#5935) | 2025-07-30 22:42:34 +02:00 |
| Dockerfile.golang | fix(intel): Set GPU vendor on Intel images and cleanup (#5945) | 2025-07-31 19:44:46 +02:00 |
| Dockerfile.llama-cpp | feat: do not bundle llama-cpp anymore (#5790) | 2025-07-18 13:24:12 +02:00 |
| Dockerfile.python | feat: Add backend gallery (#5607) | 2025-06-15 14:56:52 +02:00 |
| index.yaml | fix(backend gallery): intel images for python-based backends, re-add exllama2 (#5928) | 2025-07-28 15:15:19 +02:00 |