LocalAI
Mirror of https://github.com/mudler/LocalAI.git, synced 2026-03-04 15:07:56 -05:00
backend/ at commit 70f7d0c25facbd1174ff84efdd060015c7e1b1c1
Latest commit: 70f7d0c25f by Ettore Di Giacinto, 2025-08-15 15:18:40 +02:00
Revert "chore(build): Convert stablediffusion-ggml backend to Purego (#5989)" (#6064)
This reverts commit 94cb20ae7f.
cpp                   chore(deps): bump llama.cpp to 'df36bce667bf14f8e538645547754386f9516326 (#6062)        2025-08-15 13:28:15 +02:00
go                    Revert "chore(build): Convert stablediffusion-ggml backend to Purego (#5989)" (#6064)   2025-08-15 15:18:40 +02:00
python                chore(deps): bump grpcio from 1.71.0 to 1.74.0 in /backend/python/transformers (#6013)  2025-08-12 22:05:16 +02:00
backend.proto         feat(stablediffusion-ggml): add support to ref images (flux Kontext) (#5935)            2025-07-30 22:42:34 +02:00
Dockerfile.golang     fix(intel): Set GPU vendor on Intel images and cleanup (#5945)                          2025-07-31 19:44:46 +02:00
Dockerfile.llama-cpp  feat: do not bundle llama-cpp anymore (#5790)                                           2025-07-18 13:24:12 +02:00
Dockerfile.python     feat: Add backend gallery (#5607)                                                       2025-06-15 14:56:52 +02:00
index.yaml            feat(diffusers): add builds for nvidia-l4t (#6004)                                      2025-08-08 22:48:38 +02:00