LocalAI [bot]
ca2e878aaf
chore: ⬆️ Update ggml-org/llama.cpp to e9f9483464e6f01d843d7f0293bd9c7bc6b2221c (#7421)
⬆️ Update ggml-org/llama.cpp
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
Co-authored-by: Ettore Di Giacinto <mudler@users.noreply.github.com>
2025-12-04 11:54:01 +01:00
LocalAI [bot]
957eea3da3
chore: ⬆️ Update ggml-org/llama.cpp to 61bde8e21f4a1f9a98c9205831ca3e55457b4c78 (#7415)
⬆️ Update ggml-org/llama.cpp
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
Co-authored-by: Ettore Di Giacinto <mudler@users.noreply.github.com>
2025-12-03 16:27:12 +01:00
LocalAI [bot]
665441ca94
chore: ⬆️ Update ggml-org/llama.cpp to ec18edfcba94dacb166e6523612fc0129cead67a (#7406)
⬆️ Update ggml-org/llama.cpp
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2025-12-02 07:59:52 +01:00
Ettore Di Giacinto
e3bcba5c45
chore: ⬆️ Update ggml-org/llama.cpp to 7f8ef50cce40e3e7e4526a3696cb45658190e69a (#7402)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-12-01 07:50:40 +01:00
LocalAI [bot]
0824fd8efd
chore: ⬆️ Update ggml-org/llama.cpp to 8c32d9d96d9ae345a0150cae8572859e9aafea0b (#7395)
⬆️ Update ggml-org/llama.cpp
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2025-11-30 09:06:18 +01:00
Ettore Di Giacinto
468ac608f3
chore(deps): bump llama.cpp to 'd82b7a7c1d73c0674698d9601b1bbb0200933f29' (#7392)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-11-29 08:58:07 +01:00
LocalAI [bot]
1a53fd2b9b
chore: ⬆️ Update ggml-org/llama.cpp to 4abef75f2cf2eee75eb5083b30a94cf981587394 (#7382)
⬆️ Update ggml-org/llama.cpp
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2025-11-28 00:08:27 +01:00
LocalAI [bot]
b5f4f4ac6d
chore: ⬆️ Update ggml-org/llama.cpp to eec1e33a9ed71b79422e39cc489719cf4f8e0777 (#7363)
⬆️ Update ggml-org/llama.cpp
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2025-11-27 09:17:25 +01:00
Ettore Di Giacinto
7a94d237c4
chore(deps): bump llama.cpp to '583cb83416467e8abf9b37349dcf1f6a0083745a' (#7358)
chore(deps): bump llama.cpp to '583cb83416467e8abf9b37349dcf1f6a0083745a'
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-11-26 08:23:21 +01:00
LocalAI [bot]
f6d2a52cd5
chore: ⬆️ Update ggml-org/llama.cpp to 0c7220db56525d40177fcce3baa0d083448ec813 (#7337)
⬆️ Update ggml-org/llama.cpp
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2025-11-24 09:11:38 +01:00
LocalAI [bot]
05a00b2399
chore: ⬆️ Update ggml-org/llama.cpp to 3f3a4fb9c3b907c68598363b204e6f58f4757c8c (#7336)
⬆️ Update ggml-org/llama.cpp
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2025-11-22 21:53:40 +00:00
LocalAI [bot]
bdfe8431fa
chore: ⬆️ Update ggml-org/llama.cpp to 23bc779a6e58762ea892eca1801b2ea1b9050c00 (#7331)
⬆️ Update ggml-org/llama.cpp
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2025-11-22 08:44:01 +01:00
Ettore Di Giacinto
e88db7d142
fix(llama.cpp): handle corner cases with tool content (#7324)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-11-21 09:21:49 +01:00
LocalAI [bot]
b7b8a0a748
chore: ⬆️ Update ggml-org/llama.cpp to dd0f3219419b24740864b5343958a97e1b3e4b26 (#7322)
⬆️ Update ggml-org/llama.cpp
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2025-11-21 08:11:47 +01:00
LocalAI [bot]
bfa07df7cd
chore: ⬆️ Update ggml-org/llama.cpp to 7d77f07325985c03a91fa371d0a68ef88a91ec7f (#7314)
⬆️ Update ggml-org/llama.cpp
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2025-11-20 07:58:42 +01:00
Ettore Di Giacinto
3152611184
chore(deps): bump llama.cpp to '10e9780154365b191fb43ca4830659ef12def80f' (#7311)
chore(deps): bump llama.cpp to '10e9780154365b191fb43ca4830659ef12def80f'
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-11-19 14:42:11 +01:00
LocalAI [bot]
4278506876
chore: ⬆️ Update ggml-org/llama.cpp to cb623de3fc61011e5062522b4d05721a22f2e916 (#7301)
⬆️ Update ggml-org/llama.cpp
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2025-11-18 07:43:57 +01:00
LocalAI [bot]
fb834805db
chore: ⬆️ Update ggml-org/llama.cpp to 80deff3648b93727422461c41c7279ef1dac7452 (#7287)
⬆️ Update ggml-org/llama.cpp
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2025-11-17 07:51:08 +01:00
Ettore Di Giacinto
d7f9f3ac93
feat: add support for logitbias and logprobs (#7283)
* feat: add support for logprobs in results
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* feat: add support for logitbias
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
---------
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-11-16 13:27:36 +01:00
LocalAI [bot]
d1a0dd10e6
chore: ⬆️ Update ggml-org/llama.cpp to 662192e1dcd224bc25759aadd0190577524c6a66 (#7277)
⬆️ Update ggml-org/llama.cpp
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2025-11-16 08:41:12 +01:00
LocalAI [bot]
a09d49da43
chore: ⬆️ Update ggml-org/llama.cpp to 9b17d74ab7d31cb7d15ee7eec1616c3d825a84c0 (#7273)
⬆️ Update ggml-org/llama.cpp
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2025-11-15 00:05:39 +01:00
Ettore Di Giacinto
03e9f4b140
fix: handle tool errors (#7271)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-11-14 17:23:56 +01:00
Ettore Di Giacinto
7129409bf6
chore(deps): bump llama.cpp to c4abcb2457217198efdd67d02675f5fddb7071c2 (#7266)
* chore(deps): bump llama.cpp to '92bb442ad999a0d52df0af2730cd861012e8ac5c'
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* DEBUG
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* Bump
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* test/debug
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* Revert "DEBUG"
This reverts commit 2501ca3ff242076d623c13c86b3d6afcec426281.
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
---------
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-11-14 12:16:52 +01:00
Ettore Di Giacinto
3728552e94
feat: import models via URI (#7245)
* feat: initial hook to install elements directly
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* WIP: ui changes
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* Move HF api client to pkg
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* Add simple importer for gguf files
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* Add opcache
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* wire importers to CLI
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* Add omitempty to config fields
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* Fix tests
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* Add MLX importer
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* Small refactors to start to use HF for discovery
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* Add tests
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* Common preferences
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* Add support for bare HF repos
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* feat(importer/llama.cpp): add support for mmproj files
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* add mmproj quants to common preferences
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* Fix vlm usage in tokenizer mode with llama.cpp
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
---------
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-11-12 20:48:56 +01:00
Mikhail Khludnev
04fe0b0da8
fix(reranker): llama-cpp sort score desc, crop top_n (#7211)
Signed-off-by: Mikhail Khludnev <mkhl@apache.org>
2025-11-12 09:13:01 +01:00
LocalAI [bot]
fae93e5ba2
chore: ⬆️ Update ggml-org/llama.cpp to 7d019cff744b73084b15ca81ba9916f3efab1223 (#7247)
⬆️ Update ggml-org/llama.cpp
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2025-11-11 21:31:01 +00:00
LocalAI [bot]
5f4663252d
chore: ⬆️ Update ggml-org/llama.cpp to 13730c183b9e1a32c09bf132b5367697d6c55048 (#7232)
⬆️ Update ggml-org/llama.cpp
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2025-11-11 00:03:01 +01:00
LocalAI [bot]
e42f0f7e79
chore: ⬆️ Update ggml-org/llama.cpp to b8595b16e69e3029e06be3b8f6635f9812b2bc3f (#7210)
⬆️ Update ggml-org/llama.cpp
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2025-11-09 23:56:27 +01:00
Ettore Di Giacinto
679d43c2f5
feat: respect context and add request cancellation (#7187)
* feat: respect context
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* workaround fasthttp
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* feat(ui): allow aborting a call
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* Refactor
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* chore: improve error
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* Respect context also with MCP
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* Tie to both contexts
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* Make detection more robust
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
---------
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-11-09 18:19:19 +01:00
LocalAI [bot]
f678c6b0a9
chore: ⬆️ Update ggml-org/llama.cpp to 333f2595a3e0e4c0abf233f2f29ef1710acd134d (#7201)
⬆️ Update ggml-org/llama.cpp
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2025-11-08 21:06:17 +00:00
LocalAI [bot]
8ac7e28c12
chore: ⬆️ Update ggml-org/llama.cpp to 65156105069fa86a4a81b6cb0e8cb583f6420677 (#7184)
⬆️ Update ggml-org/llama.cpp
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2025-11-08 09:07:44 +01:00
Ettore Di Giacinto
02cc8cbcaa
feat(llama.cpp): consolidate options and respect tokenizer template when enabled (#7120)
* feat(llama.cpp): expose env vars as options for consistency
This allows configuring everything in the model's YAML file rather
than relying on global configuration
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* feat(llama.cpp): respect usetokenizertemplate and use llama.cpp templating system to process messages
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* WIP
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* Detect whether a template exists when use tokenizer template is enabled
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* Better recognition of chat
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* Fixes to support tool calls while using templates from tokenizer
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* Fixups
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* Drop template guessing, fix passing tools to tokenizer
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* Extract grammar and other options from chat template, add schema struct
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* WIP
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* WIP
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* Automatically set use_jinja
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* Cleanups, identify gguf models for chat by default
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* Update docs
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
---------
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-11-07 21:23:50 +01:00
LocalAI [bot]
8f7c499f17
chore: ⬆️ Update ggml-org/llama.cpp to 7f09a680af6e0ef612de81018e1d19c19b8651e8 (#7156)
⬆️ Update ggml-org/llama.cpp
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2025-11-07 08:38:56 +01:00
LocalAI [bot]
db9957b94e
chore: ⬆️ Update ggml-org/llama.cpp to a44d77126c911d105f7f800c17da21b2a5b112d1 (#7125)
⬆️ Update ggml-org/llama.cpp
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2025-11-05 21:22:04 +00:00
LocalAI [bot]
98158881c2
chore: ⬆️ Update ggml-org/llama.cpp to ad51c0a720062a04349c779aae301ad65ca4c856 (#7098)
⬆️ Update ggml-org/llama.cpp
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2025-11-04 21:19:58 +00:00
LocalAI [bot]
e2cb44ef37
chore: ⬆️ Update ggml-org/llama.cpp to c5023daf607c578d6344c628eb7da18ac3d92d32 (#7069)
⬆️ Update ggml-org/llama.cpp
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2025-11-04 09:26:10 +01:00
LocalAI [bot]
2cad2c8591
chore: ⬆️ Update ggml-org/llama.cpp to cd5e3b57541ecc52421130742f4d89acbcf77cd4 (#7023)
⬆️ Update ggml-org/llama.cpp
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2025-11-02 21:24:19 +00:00
Ettore Di Giacinto
424acd66ad
feat(llama.cpp): allow setting cache-ram and ctx_shift (#7009)
* feat(llama.cpp): allow setting cache-ram and ctx_shift
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* Apply suggestion from @mudler
Signed-off-by: Ettore Di Giacinto <mudler@users.noreply.github.com>
---------
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
Signed-off-by: Ettore Di Giacinto <mudler@users.noreply.github.com>
2025-11-02 17:33:29 +01:00
LocalAI [bot]
f85e2dd1b8
chore: ⬆️ Update ggml-org/llama.cpp to 2f68ce7cfd20e9e7098514bf730e5389b7bba908 (#6998)
⬆️ Update ggml-org/llama.cpp
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2025-11-02 09:44:37 +01:00
LocalAI [bot]
9ecfdc5938
chore: ⬆️ Update ggml-org/llama.cpp to 31c511a968348281e11d590446bb815048a1e912 (#6970)
⬆️ Update ggml-org/llama.cpp
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2025-10-31 21:04:53 +00:00
LocalAI [bot]
0ddb2e8dcf
chore: ⬆️ Update ggml-org/llama.cpp to 4146d6a1a6228711a487a1e3e9ddd120f8d027d7 (#6945)
⬆️ Update ggml-org/llama.cpp
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2025-10-31 14:51:03 +00:00
LocalAI [bot]
1e5b9135df
chore: ⬆️ Update ggml-org/llama.cpp to 16724b5b6836a2d4b8936a5824d2ff27c52b4517 (#6925)
⬆️ Update ggml-org/llama.cpp
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2025-10-30 21:07:33 +00:00
LocalAI [bot]
dd21a0d2f9
chore: ⬆️ Update ggml-org/llama.cpp to 3464bdac37027c5e9661621fc75ffcef3c19c6ef (#6896)
⬆️ Update ggml-org/llama.cpp
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2025-10-30 14:17:58 +01:00
LocalAI [bot]
fb825a2708
chore: ⬆️ Update ggml-org/llama.cpp to 851553ea6b24cb39fd5fd188b437d777cb411de8 (#6869)
⬆️ Update ggml-org/llama.cpp
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2025-10-29 08:16:55 +01:00
LocalAI [bot]
e13cb8346d
chore: ⬆️ Update ggml-org/llama.cpp to 5a4ff43e7dd049e35942bc3d12361dab2f155544 (#6841)
⬆️ Update ggml-org/llama.cpp
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2025-10-28 08:48:21 +01:00
LocalAI [bot]
8225697139
chore: ⬆️ Update ggml-org/llama.cpp to bbac6a26b2bd7f7c1f0831cb1e7b52734c66673b (#6783)
⬆️ Update ggml-org/llama.cpp
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2025-10-27 08:45:14 +01:00
LocalAI [bot]
192589a17f
chore: ⬆️ Update ggml-org/llama.cpp to 5d195f17bc60eacc15cfb929f9403cf29ccdf419 (#6757)
⬆️ Update ggml-org/llama.cpp
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2025-10-25 21:14:43 +00:00
LocalAI [bot]
ed4ac0b61e
chore: ⬆️ Update ggml-org/llama.cpp to 55945d2ef51b93821d4b6f4a9b994393344a90db (#6729)
⬆️ Update ggml-org/llama.cpp
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2025-10-24 21:11:56 +00:00
LocalAI [bot]
b66bd2706f
chore: ⬆️ Update ggml-org/llama.cpp to 0bf47a1dbba4d36f2aff4e8c34b06210ba34e688 (#6703)
⬆️ Update ggml-org/llama.cpp
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2025-10-23 21:10:51 +00:00
Chakib Benziane
32c0ab3a7f
fix: properly terminate llama.cpp kv_overrides array with empty key + updated doc (#6672)
* fix: properly terminate kv_overrides array with empty key
The llama model loading function expects KV overrides to be terminated
with an empty key (key[0] == 0). Previously, the kv_overrides vector was
not being properly terminated, causing an assertion failure.
This commit ensures that after parsing all KV override strings, we add a
final terminating entry with an empty key to satisfy the C-style array
termination requirement. This fixes the assertion error and allows the
model to load correctly with custom KV overrides.
Fixes #6643
- Also included a reference to the usage of the `overrides` option in
the advanced-usage section.
Signed-off-by: blob42 <contact@blob42.xyz>
* doc: document the `overrides` option
---------
Signed-off-by: blob42 <contact@blob42.xyz>
2025-10-23 09:31:55 +02:00