mirror/LocalAI
Mirror of https://github.com/mudler/LocalAI.git (synced 2026-04-17 05:18:53 -04:00)
Commit: 151ad271f23d5037d59d64c2e6da3e98a2f82cc0
Path: LocalAI/backend/cpp/llama-cpp
Latest commit: 151ad271f2 feat(rocm): bump to 7.x (#9323)
Author: Ettore Di Giacinto
Date: 2026-04-12 08:51:30 +02:00

feat(rocm): bump to 7.2.1

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
File            | Last commit                                                                                         | Date
CMakeLists.txt  | fix: BMI2 crash on AVX-only CPUs (Intel Ivy Bridge/Sandy Bridge) (#7864)                           | 2026-01-06 00:13:48 +00:00
grpc-server.cpp | fix(streaming): skip chat deltas for role-init elements to prevent first token duplication (#9299) | 2026-04-10 08:45:47 +02:00
Makefile        | feat(rocm): bump to 7.x (#9323)                                                                     | 2026-04-12 08:51:30 +02:00
package.sh      | fix(llama.cpp): bundle libdl, librt, libpthread in llama-cpp backend (#9099)                        | 2026-03-22 00:58:14 +01:00
prepare.sh      | chore: ⬆️ Update ggml-org/llama.cpp to 7f8ef50cce40e3e7e4526a3696cb45658190e69a (#7402)             | 2025-12-01 07:50:40 +01:00
run.sh          | feat(rocm): bump to 7.x (#9323)                                                                     | 2026-04-12 08:51:30 +02:00