LocalAI (mirror of https://github.com/mudler/LocalAI.git, synced 2026-01-20 20:33:45 -05:00)
backend/cpp/llama-cpp at f6daaa7c35aa0793dd8522ed314d3823500d332d

Latest commit: f6daaa7c35 by Ettore Di Giacinto
chore(deps): Bump llama.cpp to '1c7cf94b22a9dc6b1d32422f72a627787a4783a3' (#8136)
...
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2026-01-21 00:12:13 +01:00
File               Last commit                                                                              Date
CMakeLists.txt     fix: BMI2 crash on AVX-only CPUs (Intel Ivy Bridge/Sandy Bridge) (#7864)                 2026-01-06 00:13:48 +00:00
grpc-server.cpp    chore(deps): Bump llama.cpp to '1c7cf94b22a9dc6b1d32422f72a627787a4783a3' (#8136)        2026-01-21 00:12:13 +01:00
Makefile           chore(deps): Bump llama.cpp to '1c7cf94b22a9dc6b1d32422f72a627787a4783a3' (#8136)        2026-01-21 00:12:13 +01:00
package.sh         feat: package GPU libraries inside backend containers for unified base image (#7891)     2026-01-07 15:48:51 +01:00
prepare.sh         …
run.sh             …