LocalAI (mirror of https://github.com/mudler/LocalAI.git, synced 2025-12-30 09:59:36 -05:00)
LocalAI / backend / cpp / llama-cpp at commit 739573e41bb5c8dd1f3ff1ff305ba83c0d51f002
Latest commit 739573e41b by Ettore Di Giacinto: feat(flash_attention): set auto for flash_attention in llama.cpp (#6168) ... Signed-off-by: Ettore Di Giacinto <mudler@localai.io> (2025-08-31 17:59:09 +02:00)
Name             Last commit                                                                 Date
patches          feat: do not bundle llama-cpp anymore (#5790)                               2025-07-18 13:24:12 +02:00
CMakeLists.txt   feat: do not bundle llama-cpp anymore (#5790)                               2025-07-18 13:24:12 +02:00
grpc-server.cpp  feat(flash_attention): set auto for flash_attention in llama.cpp (#6168)   2025-08-31 17:59:09 +02:00
Makefile         feat(flash_attention): set auto for flash_attention in llama.cpp (#6168)   2025-08-31 17:59:09 +02:00
package.sh       feat: do not bundle llama-cpp anymore (#5790)                               2025-07-18 13:24:12 +02:00
prepare.sh       feat: do not bundle llama-cpp anymore (#5790)                               2025-07-18 13:24:12 +02:00
run.sh           fix(llama-cpp/darwin): make sure to bundle libutf8 libs (#6060)             2025-08-14 17:56:35 +02:00