LocalAI
Mirror of https://github.com/mudler/LocalAI.git (synced 2026-03-04 15:07:56 -05:00)
LocalAI / backend @ 18fcd8557cf394780cfeae11afe064aa187e449a

Latest commit: fix(llama.cpp): support gfx1200 (#6045)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-08-12 22:04:30 +02:00
Name                  Last commit                                                                                                Date
cpp                   fix(llama.cpp): support gfx1200 (#6045)                                                                    2025-08-12 22:04:30 +02:00
go                    chore(build): Convert stablediffusion-ggml backend to Purego (#5989)                                       2025-08-12 16:42:15 +02:00
python                chore(deps): bump oneccl-bind-pt from 2.3.100+xpu to 2.8.0+xpu in /backend/python/common/template (#6016)  2025-08-12 18:57:20 +00:00
backend.proto         feat(stablediffusion-ggml): add support to ref images (flux Kontext) (#5935)                               2025-07-30 22:42:34 +02:00
Dockerfile.golang     fix(intel): Set GPU vendor on Intel images and cleanup (#5945)                                             2025-07-31 19:44:46 +02:00
Dockerfile.llama-cpp  feat: do not bundle llama-cpp anymore (#5790)                                                              2025-07-18 13:24:12 +02:00
Dockerfile.python     feat: Add backend gallery (#5607)                                                                          2025-06-15 14:56:52 +02:00
index.yaml            feat(diffusers): add builds for nvidia-l4t (#6004)                                                         2025-08-08 22:48:38 +02:00