mirror/LocalAI
mirror of https://github.com/mudler/LocalAI.git, synced 2026-04-16 21:08:16 -04:00
LocalAI/backend/cpp/llama-cpp @ 87e6de1989f91fad7b16f53686372c6af4c597a3
Latest commit 87e6de1989: feat: wire transcription for llama.cpp, add streaming support (#9353)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2026-04-14 16:13:40 +02:00
CMakeLists.txt   fix: BMI2 crash on AVX-only CPUs (Intel Ivy Bridge/Sandy Bridge) (#7864)                 2026-01-06 00:13:48 +00:00
grpc-server.cpp  feat: wire transcription for llama.cpp, add streaming support (#9353)                    2026-04-14 16:13:40 +02:00
Makefile         feat: wire transcription for llama.cpp, add streaming support (#9353)                    2026-04-14 16:13:40 +02:00
package.sh       fix(llama.cpp): bundle libdl, librt, libpthread in llama-cpp backend (#9099)              2026-03-22 00:58:14 +01:00
prepare.sh       chore: ⬆️ Update ggml-org/llama.cpp to 7f8ef50cce40e3e7e4526a3696cb45658190e69a (#7402)    2025-12-01 07:50:40 +01:00
run.sh           feat(rocm): bump to 7.x (#9323)                                                           2026-04-12 08:51:30 +02:00