LocalAI (mirror of https://github.com/mudler/LocalAI.git, synced 2026-01-14 17:31:29 -05:00)
Branch: fix_sycl
Path: LocalAI / backend / cpp / llama
Latest commit: 59bfc67ead "workaround upstream issue"
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-07-24 11:02:58 +02:00
File                       Date                          Last commit
CMakeLists.txt             2024-07-24 11:02:58 +02:00    workaround upstream issue
CMakeLists.txt.rpc-8662    2024-07-24 11:02:58 +02:00    workaround upstream issue
grpc-server.cpp            2024-07-15 22:54:16 +02:00    feat(llama.cpp): support embeddings endpoints (#2871)
json.hpp                   2023-11-11 13:14:59 +01:00    🔥 add LLaVA support and GPT vision API, Multiple requests for llama.cpp, return JSON types (#1254)
Makefile                   2024-07-13 22:32:25 +02:00    fix: speedup git submodule update with --single-branch (#2847)
prepare.sh                 2024-07-24 11:02:58 +02:00    workaround upstream issue
utils.hpp                  2024-02-01 19:21:52 +01:00    feat(sycl): Add support for Intel GPUs with sycl (#1647) (#1660)
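
The grpc-server.cpp entry above (#2871) adds embedding support to the llama.cpp backend, which LocalAI exposes through its OpenAI-compatible API. The snippet below is a minimal sketch, not code from this repository: it assumes a LocalAI instance listening on localhost:8080 (the usual default, but adjust to your deployment), an embedding-capable model configured under the placeholder name "my-embedding-model", and the openai Python client pointed at that server.

```python
# Minimal sketch: querying LocalAI's OpenAI-compatible embeddings endpoint.
# The base URL, port, and model name below are assumptions, not values taken
# from this repository; change them to match your own setup.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8080/v1",  # assumed local LocalAI instance
    api_key="not-needed",                 # placeholder; only needed if the server enforces API keys
)

response = client.embeddings.create(
    model="my-embedding-model",  # hypothetical model name
    input="LocalAI can serve embeddings through the llama.cpp backend.",
)

vector = response.data[0].embedding
print(f"embedding dimension: {len(vector)}")
```

Any OpenAI-style client (or plain curl against /v1/embeddings) should work the same way; only the base URL and model name differ per deployment.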