chore(model gallery): Add entry for Mistral Small 3.1 with mmproj (#8247)

* chore(model gallery): Add entry for Mistral Small 3.1 with mmproj

Signed-off-by: rampa3 <68955305+rampa3@users.noreply.github.com>

* Use llama-cpp subfolder structure akin to Qwen 3 VL

Signed-off-by: rampa3 <68955305+rampa3@users.noreply.github.com>

---------

Signed-off-by: rampa3 <68955305+rampa3@users.noreply.github.com>
Authored by rampa3 on 2026-01-27 16:54:14 +01:00, committed by GitHub
parent 3c1f823c47
commit ff5a54b9d1


@@ -11344,6 +11344,37 @@
    - filename: mistralai_Mistral-Small-3.1-24B-Instruct-2503-Q4_K_M.gguf
      sha256: c5743c1bf39db0ae8a5ade5df0374b8e9e492754a199cfdad7ef393c1590f7c0
      uri: huggingface://bartowski/mistralai_Mistral-Small-3.1-24B-Instruct-2503-GGUF/mistralai_Mistral-Small-3.1-24B-Instruct-2503-Q4_K_M.gguf
- !!merge <<: *mistral03
  name: "mistralai_mistral-small-3.1-24b-instruct-2503-multimodal"
  urls:
    - https://huggingface.co/mistralai/Mistral-Small-3.1-24B-Instruct-2503
    - https://huggingface.co/bartowski/mistralai_Mistral-Small-3.1-24B-Instruct-2503-GGUF
  description: |
    Building upon Mistral Small 3 (2501), Mistral Small 3.1 (2503) adds state-of-the-art vision understanding and enhances long context capabilities up to 128k tokens without compromising text performance. With 24 billion parameters, this model achieves top-tier capabilities in both text and vision tasks.
    This model is an instruction-finetuned version of: Mistral-Small-3.1-24B-Base-2503.
    Mistral Small 3.1 can be deployed locally and is exceptionally "knowledge-dense," fitting within a single RTX 4090 or a 32GB RAM MacBook once quantized.
    This gallery entry includes mmproj for multimodality.
  tags:
    - llm
    - gguf
    - gpu
    - mistral
    - cpu
    - function-calling
    - multimodal
  overrides:
    parameters:
      model: llama-cpp/models/mistralai_Mistral-Small-3.1-24B-Instruct-2503-Q4_K_M.gguf
    mmproj: llama-cpp/mmproj/mmproj-mistralai_Mistral-Small-3.1-24B-Instruct-2503-f16.gguf
  files:
    - filename: llama-cpp/models/mistralai_Mistral-Small-3.1-24B-Instruct-2503-Q4_K_M.gguf
      sha256: c5743c1bf39db0ae8a5ade5df0374b8e9e492754a199cfdad7ef393c1590f7c0
      uri: huggingface://bartowski/mistralai_Mistral-Small-3.1-24B-Instruct-2503-GGUF/mistralai_Mistral-Small-3.1-24B-Instruct-2503-Q4_K_M.gguf
    - filename: llama-cpp/mmproj/mmproj-mistralai_Mistral-Small-3.1-24B-Instruct-2503-f16.gguf
      sha256: f5add93ad360ef6ccba571bba15e8b4bd4471f3577440a8b18785f8707d987ed
      uri: huggingface://bartowski/mistralai_Mistral-Small-3.1-24B-Instruct-2503-GGUF/mmproj-mistralai_Mistral-Small-3.1-24B-Instruct-2503-f16.gguf
- !!merge <<: *mistral03
  url: "github:mudler/LocalAI/gallery/chatml.yaml@master"
  name: "gryphe_pantheon-rp-1.8-24b-small-3.1"