Mirror of https://github.com/mudler/LocalAI.git (synced 2026-01-13 08:50:27 -05:00)
Branch: propagate_cmake_args
Path: LocalAI / backend / cpp / llama

Latest commit 894a30296a by Ettore Di Giacinto (2024-12-11 22:02:58 +01:00):
feat: unify and propagate CMAKE_ARGS to GGML-based backends
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
patches/          chore(deps): update llama.cpp (#3497)                                                                2024-09-12 20:55:27 +02:00
CMakeLists.txt    deps(llama.cpp): update, support Gemma models (#1734)                                                2024-02-21 17:23:38 +01:00
grpc-server.cpp   feat(llama.cpp): expose cache_type_k and cache_type_v for quant of kv cache (#4329)                  2024-12-06 10:23:59 +01:00
json.hpp          🔥 add LaVA support and GPT vision API, Multiple requests for llama.cpp, return JSON types (#1254)   2023-11-11 13:14:59 +01:00
Makefile          feat: unify and propagate CMAKE_ARGS to GGML-based backends                                          2024-12-11 22:02:58 +01:00
prepare.sh        chore(deps): update llama.cpp (#3497)                                                                2024-09-12 20:55:27 +02:00
utils.hpp         chore(deps): update llama.cpp (#3497)                                                                2024-09-12 20:55:27 +02:00