Compare commits


1 Commit

commit 455aee4eaf
Author: Ettore Di Giacinto <mudler@localai.io>
Date:   2025-03-02 10:23:17 +01:00

    chore(model gallery): add qihoo360_tinyr1-32b-preview

    Signed-off-by: Ettore Di Giacinto <mudler@localai.io>


@@ -6654,6 +6654,22 @@
     - filename: perplexity-ai_r1-1776-distill-llama-70b-Q4_K_M.gguf
       sha256: 4030b5778cbbd0723454c9a0c340c32dc4e86a98d46f5e6083527da6a9c90012
       uri: huggingface://bartowski/perplexity-ai_r1-1776-distill-llama-70b-GGUF/perplexity-ai_r1-1776-distill-llama-70b-Q4_K_M.gguf
+- !!merge <<: *deepseek-r1
+  name: "qihoo360_tinyr1-32b-preview"
+  urls:
+    - https://huggingface.co/qihoo360/TinyR1-32B-Preview
+    - https://huggingface.co/bartowski/qihoo360_TinyR1-32B-Preview-GGUF
+  description: |
+    We introduce our first-generation reasoning model, Tiny-R1-32B-Preview, which outperforms the 70B model Deepseek-R1-Distill-Llama-70B and nearly matches the full R1 model in math.
+    We applied supervised fine-tuning (SFT) to Deepseek-R1-Distill-Qwen-32B across three target domains (Mathematics, Code, and Science) using the 360-LLaMA-Factory training framework, with questions from open-source data as seeds and responses generated by R1, producing a specialized model for each domain.
+    Building on this, we leveraged the Mergekit tool from the Arcee team to combine the three domain models into Tiny-R1-32B-Preview, which demonstrates strong overall performance.
+  overrides:
+    parameters:
+      model: qihoo360_TinyR1-32B-Preview-Q4_K_M.gguf
+  files:
+    - filename: qihoo360_TinyR1-32B-Preview-Q4_K_M.gguf
+      sha256: 4eb3df9cc3d74e1dd75924a39b2ce7466a1053240375d5c969ea24b6045c2a09
+      uri: huggingface://bartowski/qihoo360_TinyR1-32B-Preview-GGUF/qihoo360_TinyR1-32B-Preview-Q4_K_M.gguf
 - &qwen2
   url: "github:mudler/LocalAI/gallery/chatml.yaml@master" ## Start QWEN2
   name: "qwen2-7b-instruct"