Mirror of https://github.com/mudler/LocalAI.git, synced 2026-01-17 10:51:02 -05:00
LocalAI/backend/cpp/llama-cpp at commit d16ec7aa9efb79b47f9a6a8305af1a0d7b3284be
Latest commit d16ec7aa9e by Ettore Di Giacinto, 2026-01-09 07:52:32 +01:00:
chore(deps): Bump llama.cpp to '480160d47297df43b43746294963476fc0a6e10f' (#7933)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
File              Last commit                                                                             Date
CMakeLists.txt    fix: BMI2 crash on AVX-only CPUs (Intel Ivy Bridge/Sandy Bridge) (#7864)                2026-01-06 00:13:48 +00:00
grpc-server.cpp   chore(deps): Bump llama.cpp to '480160d47297df43b43746294963476fc0a6e10f' (#7933)       2026-01-09 07:52:32 +01:00
Makefile          chore(deps): Bump llama.cpp to '480160d47297df43b43746294963476fc0a6e10f' (#7933)       2026-01-09 07:52:32 +01:00
package.sh        feat: package GPU libraries inside backend containers for unified base image (#7891)    2026-01-07 15:48:51 +01:00
prepare.sh        …
run.sh            …