Mirror of https://github.com/mudler/LocalAI.git, synced 2026-03-31 21:25:59 -04:00
Clean up gallery index by removing obsolete models
Removed multiple models and their associated metadata from the gallery index.

Signed-off-by: Ettore Di Giacinto <mudler@users.noreply.github.com>

Committed by GitHub. Parent: 996dd7652f. Commit: 031909d85a.
@@ -13908,113 +13908,6 @@
      - filename: "phi-2-orange.Q4_0.gguf"
        sha256: "49cb710ae688e1b19b1b299087fa40765a0cd677e3afcc45e5f7ef6750975dcf"
        uri: "huggingface://TheBloke/phi-2-orange-GGUF/phi-2-orange.Q4_0.gguf"
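Every `files` entry in the gallery index pairs a filename with a `sha256` checksum, so a downloaded GGUF can be verified before use. A minimal sketch of that check, hashing in chunks so multi-gigabyte files never need to fit in memory (the helper name is ours, not LocalAI's):

```python
import hashlib


def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Return the hex SHA-256 of a file, read in 1 MiB chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()
```

Comparing the result against the `sha256` field from the index catches truncated or corrupted downloads.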
- url: "github:mudler/LocalAI/gallery/phi-3-chat.yaml@master"
  icon: https://cdn-avatars.huggingface.co/v1/production/uploads/652feb6b4e527bd115ffd6c8/YFwodyNe6LmUrzQNmrl-D.png
  license: mit
  tags:
    - llm
    - gguf
    - gpu
    - cpu
    - phi-3
  name: "npc-llm-3-8b"
  urls:
    - https://huggingface.co/Gigax/NPC-LLM-3_8B
    - https://huggingface.co/bartowski/NPC-LLM-3_8B-GGUF
  description: |
    NPC model fine-tuned from Phi-3, using LoRA.

    This model parses a text description of a game scene and outputs commands like:

    - say <player1> "Hello Adventurer, care to join me on a quest?"
    - greet <player1>
    - attack <player1>
    - Any other <action> <param> you add to the prompt! (We call these "skills"!)

    ⚠️ This model has been trained to overfit on a specific input prompt format. Follow it closely to reach optimal performance ⚠️

    Input prompt

    Here's a sample input prompt, showing the format on which the model has been trained:

    - WORLD KNOWLEDGE: A vast open world full of mystery and adventure.
    - KNOWN LOCATIONS: Old Town
    - NPCS: John the Brave
    - CURRENT LOCATION: Old Town: A quiet and peaceful town.
    - CURRENT LOCATION ITEMS: Sword
    - LAST EVENTS:
      Aldren: Say Sword What a fine sword!
    - PROTAGONIST NAME: Aldren
    - PROTAGONIST PSYCHOLOGICAL PROFILE: Brave and curious
    - PROTAGONIST MEMORIES:
      Saved the village
      Lost a friend
    - PROTAGONIST PENDING QUESTS:
      Find the ancient artifact
      Defeat the evil warlock
    - PROTAGONIST ALLOWED ACTIONS:
      Attack <character> : Deliver a powerful blow
    Aldren:
  overrides:
    context_size: 4096
    parameters:
      model: NPC-LLM-3_8B-Q4_K_M.gguf
  files:
    - filename: NPC-LLM-3_8B-Q4_K_M.gguf
      uri: huggingface://bartowski/NPC-LLM-3_8B-GGUF/NPC-LLM-3_8B-Q4_K_M.gguf
      sha256: 5fcfb314566f0ae9364fe80237f96b12678aafbb8e82f90c6aece5ed2a6b83fd
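Since the NPC model overfits to the exact section order shown in its description, callers would typically assemble the prompt programmatically rather than by hand. A minimal sketch, assuming a flat `scene` dict whose key names are our own invention, not part of the model card:

```python
def build_npc_prompt(scene: dict) -> str:
    """Assemble the Gigax NPC input prompt in the section order from the model card."""
    lines = [
        f"- WORLD KNOWLEDGE: {scene['world']}",
        f"- KNOWN LOCATIONS: {', '.join(scene['locations'])}",
        f"- NPCS: {', '.join(scene['npcs'])}",
        f"- CURRENT LOCATION: {scene['current_location']}",
        f"- CURRENT LOCATION ITEMS: {', '.join(scene['items'])}",
        "- LAST EVENTS:",
        *(f"  {event}" for event in scene["events"]),
        f"- PROTAGONIST NAME: {scene['name']}",
        f"- PROTAGONIST PSYCHOLOGICAL PROFILE: {scene['profile']}",
        "- PROTAGONIST MEMORIES:",
        *(f"  {memory}" for memory in scene["memories"]),
        "- PROTAGONIST PENDING QUESTS:",
        *(f"  {quest}" for quest in scene["quests"]),
        "- PROTAGONIST ALLOWED ACTIONS:",
        *(f"  {action}" for action in scene["actions"]),
        f"{scene['name']}:",  # trailing cue: the model completes this line
    ]
    return "\n".join(lines)
```

The trailing `Name:` line mirrors the sample prompt above, leaving the model to emit the next action.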
### Internlm2
- name: "internlm2_5-7b-chat-1m"
  url: "github:mudler/LocalAI/gallery/chatml.yaml@master"
  urls:
    - https://huggingface.co/internlm/internlm2_5-7b-chat-1m
    - https://huggingface.co/bartowski/internlm2_5-7b-chat-1m-GGUF
  icon: https://avatars.githubusercontent.com/u/135356492
  tags:
    - internlm2
    - gguf
    - cpu
    - gpu
  description: |
    InternLM2.5 has open-sourced a 7-billion-parameter base model and a chat model tailored for practical scenarios. The model has the following characteristics:

    Outstanding reasoning capability: state-of-the-art performance on math reasoning, surpassing models like Llama3 and Gemma2-9B.

    1M context window: nearly perfect at finding needles in a haystack with a 1M-token context, with leading performance on long-context tasks such as LongBench. Try it with LMDeploy for 1M-context inference and a file chat demo.

    Stronger tool use: InternLM2.5 supports gathering information from more than 100 web pages; the corresponding implementation will be released in Lagent soon. InternLM2.5 has better tool-use capabilities in instruction following, tool selection, and reflection. See examples.
  overrides:
    parameters:
      model: internlm2_5-7b-chat-1m-Q4_K_M.gguf
  files:
    - filename: internlm2_5-7b-chat-1m-Q4_K_M.gguf
      uri: huggingface://bartowski/internlm2_5-7b-chat-1m-GGUF/internlm2_5-7b-chat-1m-Q4_K_M.gguf
      sha256: 10d5e18a4125f9d4d74a9284a21e0c820b150af06dee48665e54ff6e1be3a564
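The `uri` fields use a `huggingface://<owner>/<repo>/<file>` shorthand. A sketch of how such a URI maps onto the standard Hugging Face Hub `resolve/main` file URL (the helper name is hypothetical; the actual gallery code may resolve revisions differently):

```python
def hf_uri_to_url(uri: str) -> str:
    """Map a huggingface:// gallery URI to a direct Hub download URL."""
    prefix = "huggingface://"
    if not uri.startswith(prefix):
        raise ValueError(f"not a huggingface URI: {uri}")
    owner, repo, *path = uri[len(prefix):].split("/")
    # resolve/main is the Hub's direct-file endpoint for the default branch
    return f"https://huggingface.co/{owner}/{repo}/resolve/main/{'/'.join(path)}"
```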
### Internlm3
- name: "internlm3-8b-instruct"
  url: "github:mudler/LocalAI/gallery/chatml.yaml@master"
  urls:
    - https://huggingface.co/internlm/internlm3-8b-instruct
    - https://huggingface.co/bartowski/internlm3-8b-instruct-GGUF
  icon: https://avatars.githubusercontent.com/u/135356492
  tags:
    - internlm3
    - gguf
    - cpu
    - gpu
  description: |
    InternLM3 has open-sourced an 8-billion-parameter instruction model, InternLM3-8B-Instruct, designed for general-purpose usage and advanced reasoning. The model has the following characteristics:

    Enhanced performance at reduced cost: state-of-the-art performance on reasoning and knowledge-intensive tasks, surpassing models like Llama3.1-8B and Qwen2.5-7B.

    Deep thinking capability: InternLM3 supports both a deep thinking mode for solving complicated reasoning tasks via long chain-of-thought and a normal response mode for fluent user interactions.
  overrides:
    parameters:
      model: internlm3-8b-instruct-Q4_K_M.gguf
  files:
    - filename: internlm3-8b-instruct-Q4_K_M.gguf
      uri: huggingface://bartowski/internlm3-8b-instruct-GGUF/internlm3-8b-instruct-Q4_K_M.gguf
      sha256: 2a9644687318e8659c9cf9b40730d5cc2f5af06f786a50439c7c51359b23896e
- &hermes-vllm
  url: "github:mudler/LocalAI/gallery/hermes-vllm.yaml@master"
  name: "hermes-3-llama-3.1-8b:vllm"
@@ -14047,50 +13940,6 @@
  overrides:
    parameters:
      model: NousResearch/Hermes-3-Llama-3.1-405B
- url: "github:mudler/LocalAI/gallery/chatml.yaml@master"
  name: "guillaumetell-7b"
  license: apache-2
  description: |
    Guillaume Tell is a French Large Language Model (LLM) based on Mistral Open-Hermes 2.5, optimized for RAG (Retrieval Augmented Generation) with source traceability and explainability.
  urls:
    - https://huggingface.co/MaziyarPanahi/guillaumetell-7b-GGUF
    - https://huggingface.co/AgentPublic/guillaumetell-7b
  tags:
    - llm
    - gguf
    - gpu
    - cpu
    - openhermes
    - french
  overrides:
    context_size: 4096
    parameters:
      model: guillaumetell-7b.Q4_K_M.gguf
  files:
    - filename: guillaumetell-7b.Q4_K_M.gguf
      sha256: bf08db5281619335f3ee87e229c8533b04262790063b061bb8f275c3e4de7061
      uri: huggingface://MaziyarPanahi/guillaumetell-7b-GGUF/guillaumetell-7b.Q4_K_M.gguf
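Each entry's `overrides` block (here setting `context_size: 4096` and the model file) layers entry-specific values on top of the base config referenced by its `url` field. A minimal sketch of that layering, assuming a simple recursive deep merge and an invented base dict; the real gallery loader may merge differently:

```python
def deep_merge(base: dict, overrides: dict) -> dict:
    """Return base with overrides layered on top; nested dicts merge recursively."""
    merged = dict(base)
    for key, value in overrides.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = deep_merge(merged[key], value)
        else:
            merged[key] = value
    return merged


# Hypothetical base config, standing in for the template behind the `url` field.
base = {"context_size": 2048, "parameters": {"model": "", "temperature": 0.7}}
overrides = {"context_size": 4096,
             "parameters": {"model": "guillaumetell-7b.Q4_K_M.gguf"}}
config = deep_merge(base, overrides)
```

Keys absent from `overrides` (like `temperature` here) fall through from the base unchanged.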
### START Cerbero
- url: "github:mudler/LocalAI/gallery/cerbero.yaml@master"
  icon: https://huggingface.co/galatolo/cerbero-7b/resolve/main/README.md.d/cerbero.png
  description: |
    cerbero-7b is specifically crafted to fill the void in Italy's AI landscape.
  urls:
    - https://huggingface.co/galatolo/cerbero-7b
  tags:
    - llm
    - gguf
    - gpu
    - cpu
    - mistral
    - italian
  overrides:
    parameters:
      model: galatolo-Q4_K.gguf
  files:
    - filename: "galatolo-Q4_K.gguf"
      sha256: "ca0cfd5a9ad40dc16416aa3a277015d0299b62c0803b67f5709580042202c172"
      uri: "huggingface://galatolo/cerbero-7b-gguf/ggml-model-Q4_K.gguf"
### START Codellama
- &codellama
  url: "github:mudler/LocalAI/gallery/codellama.yaml@master"
  name: "codellama-7b"