mirror/LocalAI
Mirror of https://github.com/mudler/LocalAI.git (synced 2026-01-30 01:02:37 -05:00)
Files: LocalAI/backend (at tag v3.4.0)
Latest commit: b2e8b6d1aa chore: ⬆️ Update ggml-org/llama.cpp to be48528b068111304e4a0bb82c028558b5705f05 (#6012)
Author: LocalAI [bot], 2025-08-11 21:06:10 +00:00
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
Name | Last commit | Date
.. | |
cpp | chore: ⬆️ Update ggml-org/llama.cpp to be48528b068111304e4a0bb82c028558b5705f05 (#6012) | 2025-08-11 21:06:10 +00:00
go | chore: ⬆️ Update ggml-org/whisper.cpp to b02242d0adb5c6c4896d59ac86d9ec9fe0d0fe33 (#6009) | 2025-08-11 12:54:41 +02:00
python | fix(l4t-diffusers): add sentencepiece (#6005) | 2025-08-09 09:08:35 +02:00
backend.proto | feat(stablediffusion-ggml): add support to ref images (flux Kontext) (#5935) | 2025-07-30 22:42:34 +02:00
Dockerfile.golang | fix(intel): Set GPU vendor on Intel images and cleanup (#5945) | 2025-07-31 19:44:46 +02:00
Dockerfile.llama-cpp | feat: do not bundle llama-cpp anymore (#5790) | 2025-07-18 13:24:12 +02:00
Dockerfile.python | feat: Add backend gallery (#5607) | 2025-06-15 14:56:52 +02:00
index.yaml | feat(diffusers): add builds for nvidia-l4t (#6004) | 2025-08-08 22:48:38 +02:00