LocalAI [bot]
e44ff8514b
chore: ⬆️ Update ggml-org/llama.cpp to 6d7f1117e3e3285d0c5c11b5ebb0439e27920082 (#6088)
...
⬆️ Update ggml-org/llama.cpp
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2025-08-19 08:09:49 +02:00
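Each of these automated bump commits amounts to rewriting a single pinned revision. A minimal sketch, assuming the pin lives in a Makefile variable named `CPPLLAMA_VERSION` (an assumption about the repository's conventions, not confirmed by this log); the two hashes are taken from the two commits above:

```shell
# Sketch of an automated llama.cpp bump. CPPLLAMA_VERSION is an assumed
# variable name; the hashes come from the bump commits in this log.
printf 'CPPLLAMA_VERSION?=21c17b5befc5f6be5992bc87fc1ba99d388561df\n' > Makefile.example
NEW_SHA=6d7f1117e3e3285d0c5c11b5ebb0439e27920082   # target revision of the newer bump
# Rewrite the single pinned line in place.
sed -i "s/^CPPLLAMA_VERSION?=.*/CPPLLAMA_VERSION?=${NEW_SHA}/" Makefile.example
grep '^CPPLLAMA_VERSION' Makefile.example
```

The bot then commits that one-line diff and opens a PR, which is why the log is dominated by near-identical entries.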
LocalAI [bot]
7920d75805
chore: ⬆️ Update ggml-org/llama.cpp to 21c17b5befc5f6be5992bc87fc1ba99d388561df (#6084)
...
⬆️ Update ggml-org/llama.cpp
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2025-08-18 08:26:58 +00:00
LocalAI [bot]
9eed5ef872
chore: ⬆️ Update ggml-org/llama.cpp to 1fe00296f587dfca0957e006d146f5875b61e43d (#6079)
...
⬆️ Update ggml-org/llama.cpp
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2025-08-16 21:10:03 +00:00
LocalAI [bot]
243e86176e
chore: ⬆️ Update ggml-org/llama.cpp to 5e6229a8409ac786e62cb133d09f1679a9aec13e (#6070)
...
⬆️ Update ggml-org/llama.cpp
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2025-08-16 08:38:57 +02:00
Ettore Di Giacinto
22067e3384
chore(rocm): bump rocm image, add gfx1200 support (#6065)
...
Fixes: https://github.com/mudler/LocalAI/issues/6044
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-08-15 16:36:54 +02:00
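For context on what "add gfx1200 support" typically entails: HIP builds enumerate AMD GPU architectures explicitly, so a new chip (gfx1200) has to be appended to the offload target list. A hedged sketch, assuming a standard CMake-based HIP build; the exact flag names and target list used in LocalAI's images may differ:

```shell
# Sketch: a HIP/ROCm build with gfx1200 appended to the offload targets.
# Flag names follow common llama.cpp/CMake conventions and are assumptions
# here, not taken from LocalAI's actual Dockerfiles.
cmake -B build \
  -DGGML_HIP=ON \
  -DAMDGPU_TARGETS="gfx1100;gfx1101;gfx1102;gfx1200" \
  -DCMAKE_C_COMPILER=/opt/rocm/llvm/bin/clang \
  -DCMAKE_CXX_COMPILER=/opt/rocm/llvm/bin/clang++
cmake --build build --config Release -j
```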
Ettore Di Giacinto
4fbd639463
chore(ci): fixup builds for darwin and hipblas
...
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-08-15 15:58:02 +02:00
Ettore Di Giacinto
576e821298
chore(deps): bump llama.cpp to 'df36bce667bf14f8e538645547754386f9516326' (#6062)
...
chore(deps): bump llama.cpp to 'df36bce667bf14f8e538645547754386f9516326'
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-08-15 13:28:15 +02:00
Ettore Di Giacinto
8ab51509cc
Update Makefile
...
Signed-off-by: Ettore Di Giacinto <mudler@users.noreply.github.com>
2025-08-15 08:33:25 +02:00
Ettore Di Giacinto
b3384e5428
Update Makefile
...
Signed-off-by: Ettore Di Giacinto <mudler@users.noreply.github.com>
2025-08-15 08:08:24 +02:00
Ettore Di Giacinto
bf60ca5bf0
Update Makefile
...
Signed-off-by: Ettore Di Giacinto <mudler@users.noreply.github.com>
2025-08-14 11:53:43 +02:00
LocalAI [bot]
2b44467bd1
chore: ⬆️ Update ggml-org/llama.cpp to 29c8fbe4e05fd23c44950d0958299e25fbeabc5c (#6054)
...
⬆️ Update ggml-org/llama.cpp
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2025-08-14 09:19:15 +02:00
LocalAI [bot]
72f4d541d0
chore: ⬆️ Update ggml-org/llama.cpp to f4586ee5986d6f965becb37876d6f3666478a961 (#6048)
...
⬆️ Update ggml-org/llama.cpp
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
Co-authored-by: Ettore Di Giacinto <mudler@users.noreply.github.com>
2025-08-13 08:33:48 +02:00
Ettore Di Giacinto
18fcd8557c
fix(llama.cpp): support gfx1200 (#6045)
...
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-08-12 22:04:30 +02:00
LocalAI [bot]
b2e8b6d1aa
chore: ⬆️ Update ggml-org/llama.cpp to be48528b068111304e4a0bb82c028558b5705f05 (#6012)
...
⬆️ Update ggml-org/llama.cpp
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2025-08-11 21:06:10 +00:00
LocalAI [bot]
6db19c5cb9
chore: ⬆️ Update ggml-org/llama.cpp to 79c1160b073b8148a404f3dd2584be1606dccc66 (#6006)
...
⬆️ Update ggml-org/llama.cpp
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2025-08-11 12:54:21 +02:00
LocalAI [bot]
def7cdc0bf
chore: ⬆️ Update ggml-org/llama.cpp to cd6983d56d2cce94ecb86bb114ae8379a609073c (#6003)
...
⬆️ Update ggml-org/llama.cpp
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2025-08-09 08:41:58 +02:00
LocalAI [bot]
4e40a8d1ed
chore: ⬆️ Update ggml-org/llama.cpp to a0552c8beef74e843bb085c8ef0c63f9ed7a2b27 (#5992)
...
⬆️ Update ggml-org/llama.cpp
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2025-08-07 21:13:14 +00:00
LocalAI [bot]
61ba98d43d
chore: ⬆️ Update ggml-org/llama.cpp to e725a1a982ca870404a9c4935df52466327bbd02 (#5984)
...
⬆️ Update ggml-org/llama.cpp
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2025-08-06 21:17:20 +00:00
LocalAI [bot]
03e8592450
chore: ⬆️ Update ggml-org/llama.cpp to fd1234cb468935ea087d6929b2487926c3afff4b (#5972)
...
⬆️ Update ggml-org/llama.cpp
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2025-08-05 23:14:43 +02:00
LocalAI [bot]
2913676157
chore: ⬆️ Update ggml-org/llama.cpp to 41613437ffee0dbccad684fc744788bc504ec213 (#5968)
...
⬆️ Update ggml-org/llama.cpp
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2025-08-04 23:16:30 +02:00
LocalAI [bot]
4d90971424
chore: ⬆️ Update ggml-org/llama.cpp to d31192b4ee1441bbbecd3cbf9e02633368bdc4f5 (#5965)
...
⬆️ Update ggml-org/llama.cpp
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2025-08-03 21:03:20 +00:00
LocalAI [bot]
2a9d675d62
chore: ⬆️ Update ggml-org/llama.cpp to 5c0eb5ef544aeefd81c303e03208f768e158d93c (#5959)
...
⬆️ Update ggml-org/llama.cpp
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
Co-authored-by: Ettore Di Giacinto <mudler@users.noreply.github.com>
2025-08-02 23:35:24 +02:00
LocalAI [bot]
0b085089b9
chore: ⬆️ Update ggml-org/llama.cpp to daf2dd788066b8b239cb7f68210e090c2124c199 (#5951)
...
⬆️ Update ggml-org/llama.cpp
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2025-08-01 08:25:36 +02:00
Richard Palethorpe
c07bc55fee
fix(intel): Set GPU vendor on Intel images and cleanup (#5945)
...
Signed-off-by: Richard Palethorpe <io@richiejp.com>
2025-07-31 19:44:46 +02:00
LocalAI [bot]
8b1e8b4cda
chore: ⬆️ Update ggml-org/llama.cpp to e9192bec564780bd4313ad6524d20a0ab92797db (#5940)
...
⬆️ Update ggml-org/llama.cpp
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2025-07-31 09:26:02 +02:00
LocalAI [bot]
eb5c3670f1
chore: ⬆️ Update ggml-org/llama.cpp to aa79524c51fb014f8df17069d31d7c44b9ea6cb8 (#5934)
...
⬆️ Update ggml-org/llama.cpp
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2025-07-29 21:05:00 +00:00
LocalAI [bot]
60726d16f2
chore: ⬆️ Update ggml-org/llama.cpp to 8ad7b3e65b5834e5574c2f5640056c9047b5d93b (#5931)
...
⬆️ Update ggml-org/llama.cpp
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2025-07-29 08:01:03 +02:00
LocalAI [bot]
d25145e641
chore: ⬆️ Update ggml-org/llama.cpp to bf78f5439ee8e82e367674043303ebf8e92b4805 (#5927)
...
⬆️ Update ggml-org/llama.cpp
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2025-07-27 21:08:32 +00:00
LocalAI [bot]
932360bf7e
chore: ⬆️ Update ggml-org/llama.cpp to 11dd5a44eb180e1d69fac24d3852b5222d66fb7f (#5921)
...
⬆️ Update ggml-org/llama.cpp
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2025-07-27 09:50:56 +02:00
LocalAI [bot]
5ce982b9c9
chore: ⬆️ Update ggml-org/llama.cpp to c7f3169cd523140a288095f2d79befb20a0b73f4 (#5913)
...
⬆️ Update ggml-org/llama.cpp
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2025-07-25 23:08:20 +02:00
LocalAI [bot]
813cb4296d
chore: ⬆️ Update ggml-org/llama.cpp to 3f4fc97f1d745f1d5d3c853949503136d419e6de (#5900)
...
⬆️ Update ggml-org/llama.cpp
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2025-07-25 08:39:44 +02:00
LocalAI [bot]
61c2304638
chore: ⬆️ Update ggml-org/llama.cpp to a86f52b2859dae4db5a7a0bbc0f1ad9de6b43ec6 (#5894)
...
⬆️ Update ggml-org/llama.cpp
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2025-07-24 15:02:37 +02:00
Ettore Di Giacinto
b7b3164736
chore: try to speedup build
...
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-07-23 21:21:23 +02:00
LocalAI [bot]
b5be867e28
chore: ⬆️ Update ggml-org/llama.cpp to acd6cb1c41676f6bbb25c2a76fa5abeb1719301e (#5882)
...
⬆️ Update ggml-org/llama.cpp
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2025-07-22 21:12:06 +00:00
LocalAI [bot]
e29b2c3aff
chore: ⬆️ Update ggml-org/llama.cpp to 6c9ee3b17e19dcc82ab93d52ae46fdd0226d4777 (#5877)
...
⬆️ Update ggml-org/llama.cpp
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2025-07-22 08:25:43 +02:00
LocalAI [bot]
fa284f7445
chore: ⬆️ Update ggml-org/llama.cpp to 2be60cbc2707359241c2784f9d2e30d8fc7cdabb (#5867)
...
⬆️ Update ggml-org/llama.cpp
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2025-07-21 09:14:09 +02:00
LocalAI [bot]
7659461036
chore: ⬆️ Update ggml-org/llama.cpp to a979ca22db0d737af1e548a73291193655c6be99 (#5862)
...
⬆️ Update ggml-org/llama.cpp
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2025-07-20 08:43:36 +02:00
Ettore Di Giacinto
580687da46
feat: remove stablediffusion-ggml from main binary (#5861)
...
* feat: split stablediffusion-ggml from main binary
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* Test CI
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* Adapt CI tests
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* Fixups
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* Fixups
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* Try to support nvidia-l4t
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* Latest fixups
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
---------
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-07-19 21:58:53 +02:00
LocalAI [bot]
1929eb2894
chore: ⬆️ Update ggml-org/llama.cpp to bf9087f59aab940cf312b85a67067ce33d9e365a (#5860)
...
⬆️ Update ggml-org/llama.cpp
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2025-07-19 08:52:07 +02:00
Ettore Di Giacinto
294f7022f3
feat: do not bundle llama-cpp anymore (#5790)
...
* Build llama.cpp separately
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* WIP
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* WIP
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* WIP
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* Start to try to attach some tests
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* Add git and small fixups
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* fix: correctly autoload external backends
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* Try to run AIO tests
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* Slightly update the Makefile helps
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* Adapt auto-bumper
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* Try to run linux test
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* Add llama-cpp into build pipelines
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* Add default capability (for cpu)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* Drop llama-cpp specific logic from the backend loader
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* drop grpc install in ci for tests
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* fixups
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* Pass by backends path for tests
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* Build protogen at start
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* fix(tests): set backends path consistently
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* Correctly configure the backends path
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* Try to build for darwin
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* WIP
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* Compile for metal on arm64/darwin
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* Try to run build off from cross-arch
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* Add to the backend index nvidia-l4t and cpu's llama-cpp backends
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* Build also darwin-x86 for llama-cpp
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* Disable arm64 builds temporarily
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* Test backend build on PR
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* Fixup build backend reusable workflow
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* pass by skip drivers
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* Use crane
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* Skip drivers
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* Fixups
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* x86 darwin
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* Add packaging step for llama.cpp
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* fixups
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* Fix leftover from bark-cpp extraction
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* Try to fix hipblas build
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
---------
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-07-18 13:24:12 +02:00
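The commit body above describes building llama.cpp separately and autoloading it as an external backend instead of linking it into the main binary. A rough sketch of that split-build flow, with the backends directory layout assumed purely for illustration (it is not taken from this log; only the repository URL is real):

```shell
# Hypothetical split-build flow: compile llama.cpp on its own, then stage
# its server binary into a backends directory that the main binary can scan
# at startup. The backends/ layout below is an assumption.
git clone https://github.com/ggml-org/llama.cpp
cmake -S llama.cpp -B llama.cpp/build -DCMAKE_BUILD_TYPE=Release
cmake --build llama.cpp/build -j
mkdir -p backends/cpu-llama-cpp
cp llama.cpp/build/bin/llama-server backends/cpu-llama-cpp/
```

Decoupling the backend build like this is why the later automated bump commits only need to move the pinned llama.cpp revision rather than rebuild the whole project.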