Mirror of https://github.com/mudler/LocalAI.git (synced 2026-05-17 13:10:23 -04:00)
* feat(liquid-audio): add LFM2.5-Audio any-to-any backend + realtime_audio usecase
Wires LiquidAI's LFM2.5-Audio-1.5B as a self-contained Realtime API model:
single engine handles VAD, transcription, LLM, and TTS in one bidirectional
stream — drop-in alternative to a VAD+STT+LLM+TTS pipeline.
Backend
- backend/python/liquid-audio/ — new Python gRPC backend wrapping the
`liquid-audio` package. Modes: chat / asr / tts / s2s, voice presets,
Load/Predict/PredictStream/AudioTranscription/TTS/VAD/AudioToAudioStream/
Free and StartFineTune/FineTuneProgress/StopFineTune. Runtime monkey-patch
on `liquid_audio.utils.snapshot_download` so absolute local paths from
LocalAI's gallery resolve without a HF round-trip. Uses soundfile in
place of torchaudio.load/save (torchcodec drags in NVIDIA NPP, which we
don't bundle).
- backend/backend.proto + pkg/grpc/{backend,client,server,base,embed,
interface}.go — new AudioToAudioStream RPC mirroring AudioTransformStream
(config/frame/control oneof in; typed event+pcm+meta out); a rough
sketch of the stream shape follows this list.
- core/services/nodes/{health_mock,inflight}_test.go — add stubs for the
new RPC to the test fakes.
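For orientation, a minimal Go sketch of the bidirectional shape the new
RPC adds; the message and interface names are illustrative stand-ins,
not the actual generated proto types in pkg/grpc:

    package sketch

    // AudioToAudioRequest carries one arm of the config/frame/control oneof.
    type AudioToAudioRequest struct {
        Config  *StreamConfig // first message: mode/voice settings
        PCM     []byte        // subsequent messages: raw audio frames
        Control string        // e.g. "commit", "end-of-turn"
    }

    type StreamConfig struct{ Mode, Voice string }

    // AudioToAudioReply is the typed event+pcm+meta output side.
    type AudioToAudioReply struct {
        Event string // "transcript", "audio", "done", ...
        PCM   []byte
        Meta  map[string]string
    }

    // AudioToAudioStreamer mirrors AudioTransformStream's bidi shape.
    type AudioToAudioStreamer interface {
        Send(*AudioToAudioRequest) error
        Recv() (*AudioToAudioReply, error)
    }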
Config + capabilities
- core/config/backend_capabilities.go — UsecaseRealtimeAudio, MethodAudio
ToAudioStream, UsecaseInfoMap entry, liquid-audio BackendCapability row.
- core/config/model_config.go — FLAG_REALTIME_AUDIO bitmask, ModalityGroups
membership in both speech-input and audio-output groups so a lone flag
still reads as multimodal, GetAllModelConfigUsecases entry, GuessUsecases
branch.
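As a rough sketch, this is how the new flag composes with usecase
checks (the neighbouring flag values and helper name are illustrative;
the real definitions live in core/config):

    package sketch

    type ModelUsecase int

    const (
        FlagChat          ModelUsecase = 1 << iota // illustrative neighbours
        FlagTTS
        FlagTranscript
        FlagRealtimeAudio // the bit this commit adds
    )

    // HasUsecase reports whether a model's usecase bitmask carries flag.
    func HasUsecase(mask, flag ModelUsecase) bool { return mask&flag != 0 }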
Realtime endpoint
- core/http/endpoints/openai/realtime.go — extract prepareRealtimeConfig()
so the gate is unit-testable; accept realtime_audio models and self-fill
empty pipeline slots with the model's own name (user-pinned slots win).
- core/http/endpoints/openai/realtime_gate_test.go — six specs covering nil
cfg, empty pipeline, legacy pipeline, self-contained realtime_audio,
user-pinned VAD slot, and partial legacy pipeline.
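The self-fill rule, as a minimal sketch (struct and helper names
assumed, not the actual realtime.go code):

    package sketch

    type Pipeline struct{ VAD, Transcription, LLM, TTS string }

    // fillPipeline fills empty slots with the realtime_audio model's own
    // name; slots the user pinned explicitly are left untouched.
    func fillPipeline(p *Pipeline, modelName string) {
        for _, slot := range []*string{&p.VAD, &p.Transcription, &p.LLM, &p.TTS} {
            if *slot == "" {
                *slot = modelName
            }
        }
    }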
UI + endpoints
- core/http/routes/ui.go — /api/pipeline-models accepts either a legacy
VAD+STT+LLM+TTS pipeline or a realtime_audio model; surfaces a
self_contained flag so the Talk page can collapse the four cards.
- core/http/routes/ui_api.go — realtime_audio in usecaseFilters.
- core/http/routes/ui_pipeline_models_test.go — covers both code paths.
- core/http/react-ui/src/pages/Talk.jsx — self-contained badge instead of
the four-slot grid; rename Edit Pipeline → Edit Model Config; less
pipeline-specific wording.
- core/http/react-ui/src/pages/Models.jsx + locales/en/models.json — new
realtime_audio filter button + i18n.
- core/http/react-ui/src/utils/capabilities.js — CAP_REALTIME_AUDIO.
- core/http/react-ui/src/pages/FineTune.jsx — voice + validation-dataset
fields, surfaced when backend === liquid-audio, plumbed via
extra_options on submit/export/import.
Gallery + importer
- gallery/liquid-audio.yaml — config template with known_usecases:
[realtime_audio, chat, tts, transcript, vad].
- gallery/index.yaml — four model entries (realtime/chat/asr/tts) keyed by
mode option. Fixed pre-existing `transcribe` typo on the asr entry
(loader silently dropped the unknown string → entry never surfaced as a
transcript model).
- gallery/lfm.yaml — function block for the LFM2 Pythonic tool-call format
`<|tool_call_start|>[name(k="v")]<|tool_call_end|>` matching
common_chat_params_init_lfm2 in vendored llama.cpp.
- core/gallery/importers/{liquid-audio,liquid-audio_test}.go — detector
matches LFM2-Audio HF repos (excludes -gguf mirrors); mode/voice
preferences plumbed through to options.
- core/gallery/importers/importers.go — register LiquidAudioImporter
before LlamaCPPImporter.
- pkg/functions/parse_lfm2_test.go — seven specs for the response/argument
regex pair on the LFM2 pythonic format.
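For reference, a toy single-call matcher for that pythonic format, a
simplification of the regex pair those specs exercise:

    package sketch

    import "regexp"

    // lfm2Call matches <|tool_call_start|>[name(args)]<|tool_call_end|>.
    var lfm2Call = regexp.MustCompile(
        `<\|tool_call_start\|>\[(\w+)\((.*?)\)\]<\|tool_call_end\|>`)

    // parseCall returns e.g. ("get_weather", `city="Berlin"`) for
    // <|tool_call_start|>[get_weather(city="Berlin")]<|tool_call_end|>.
    func parseCall(s string) (name, args string, ok bool) {
        m := lfm2Call.FindStringSubmatch(s)
        if m == nil {
            return "", "", false
        }
        return m[1], m[2], true
    }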
Build matrix
- .github/backend-matrix.yml — seven liquid-audio targets (cuda12, cuda13,
l4t-cuda-13, hipblas, intel, cpu amd64, cpu arm64). Jetpack r36 cuda-12
is skipped (Ubuntu 22.04 / Python 3.10 incompatible with liquid-audio's
3.12 floor).
- backend/index.yaml — anchor + 13 image entries.
- Makefile — .NOTPARALLEL, prepare-test-extra, test-extra,
docker-build-liquid-audio.
Docs
- .agents/plans/liquid-audio-integration.md — phased plan; PR-D (real
any-to-any wiring via AudioToAudioStream), PR-E (mid-audio tool-call
detector), PR-G (GGUF entries once upstream llama.cpp PR #18641 lands)
remain.
- .agents/api-endpoints-and-auth.md — expand the capability-surface
checklist with every place a new FLAG_* needs to be registered.
Assisted-by: claude-code:claude-opus-4-7-1m [Claude Code]
Signed-off-by: Richard Palethorpe <io@richiejp.com>
* feat(realtime): function calling + history cap for any-to-any models
Three pieces, all on the realtime_audio path that just landed:
1. liquid-audio backend (backend/python/liquid-audio/backend.py):
- _build_chat_state grows a `tools_prelude` arg.
- new _render_tools_prelude parses request.Tools (the OpenAI Chat
Completions function array realtime.go already serialises) and
emits an LFM2 `<|tool_list_start|>…<|tool_list_end|>` system turn
ahead of the user history. Mirrors gallery/lfm.yaml's `function:`
template so the model sees the same prompt shape whether served
via llama-cpp or here. Without this the backend silently dropped
tools — function calling was wired end-to-end on the Go side but
the model never saw a tool list.
2. Realtime history cap (core/http/endpoints/openai/realtime.go):
- Session grows MaxHistoryItems int; default picked by new
defaultMaxHistoryItems(cfg) — 6 for realtime_audio models (LFM2.5
1.5B degrades quickly past a handful of turns), 0/unlimited for
legacy pipelines composing larger LLMs.
- triggerResponse runs conv.Items through trimRealtimeItems before
building conversationHistory. Helper walks the cut left if it
would orphan a function_call_output, so tool result + call pairs
stay intact (sketched after this list).
- realtime_gate_test.go: specs for defaultMaxHistoryItems and
trimRealtimeItems (zero cap, under cap, over cap, tool-call pair
preservation).
3. Talk page (core/http/react-ui/src/pages/Talk.jsx):
- Reuses the chat page's MCP plumbing — useMCPClient hook,
ClientMCPDropdown component, same auto-connect/disconnect effect
pattern. No bespoke tool registry, no new REST endpoints; tools
come from whichever MCP servers the user toggles on, exactly as
on the chat page.
- sendSessionUpdate now passes session.tools=getToolsForLLM(); the
update re-fires when the active server set changes mid-session.
- New response.function_call_arguments.done handler executes via
the hook's executeTool (which round-trips through the MCP client
SDK), then replies with conversation.item.create
{type:function_call_output} + response.create so the model
completes its turn with the tool output. Mirrors chat's
client-side agentic loop, translated to the realtime wire shape.
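The trim helper from piece 2, as a sketch (the item shape is assumed):

    package sketch

    type Item struct{ Type string }

    // trimRealtimeItems keeps the last max items, walking the cut left so
    // a kept function_call_output is never separated from its call.
    func trimRealtimeItems(items []Item, max int) []Item {
        if max <= 0 || len(items) <= max { // 0 means unlimited
            return items
        }
        cut := len(items) - max
        for cut > 0 && items[cut].Type == "function_call_output" {
            cut-- // pull the matching function_call back into the window
        }
        return items[cut:]
    }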
UI changes require a LocalAI image rebuild (Dockerfile:308-313 bakes
react-ui/dist into the runtime image). Backend.py changes can be
swapped live in /backends/<id>/backend.py + /backend/shutdown.
Assisted-by: claude-code:claude-opus-4-7-1m [Claude Code]
Signed-off-by: Richard Palethorpe <io@richiejp.com>
* feat(realtime): LocalAI Assistant ("Manage Mode") for the Talk page
Mirrors the chat-page metadata.localai_assistant flow so users can ask the
realtime model what's loaded / installed / configured. Tools are run
server-side via the same in-process MCP holder that powers the chat
modality — no transport switch, no proxy, no new wire protocol.
Wire:
- core/http/endpoints/openai/realtime.go:
- RealtimeSessionOptions{LocalAIAssistant,IsAdmin}; isCurrentUserAdmin
helper mirrors chat.go's requireAssistantAccess (no-op when auth
disabled, else requires auth.RoleAdmin).
- Session grows AssistantExecutor mcpTools.ToolExecutor.
- runRealtimeSession, when opts.LocalAIAssistant is set: gate on admin,
fail closed if DisableLocalAIAssistant or the holder has no tools,
DiscoverTools and inject into session.Tools, prepend
holder.SystemPrompt() to instructions.
- Tool-call dispatch loop: when AssistantExecutor.IsTool(name), run
ExecuteTool inproc, append a FunctionCallOutput to conv.Items, skip
the function_call_arguments client emit (the client can't execute
these — it doesn't know about them). After the loop, if any
assistant tool ran, trigger another response so the model speaks the
result. Mirrors chat's agentic loop, driven server-side rather than
via client round-trip (a sketch follows at the end of this section).
- core/http/endpoints/openai/realtime_webrtc.go: RealtimeCallRequest
gains `localai_assistant` (JSON omitempty). Handshake calls
isCurrentUserAdmin and builds RealtimeSessionOptions.
- core/http/react-ui/src/pages/Talk.jsx: admin-only "Manage Mode"
checkbox under the Tools dropdown; passes localai_assistant: true to
realtimeApi.call's body, captured in the connect callback's deps.
Mirroring chat's pattern means the in-process MCP tools surface "just
works" for the Talk page without exposing a Streamable-HTTP MCP endpoint
(which was the alternative). Clients with their own MCP servers can
still use the existing ClientMCPDropdown path in parallel; the realtime
handler distinguishes them by AssistantExecutor.IsTool() at dispatch
time.
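The dispatch split, sketched with assumed names:

    package sketch

    type ToolExecutor interface {
        IsTool(name string) bool
        ExecuteTool(name, args string) (string, error)
    }

    // dispatchToolCall runs assistant-owned tools in-process and reports
    // whether it did; anything else goes to the client round-trip path.
    func dispatchToolCall(exec ToolExecutor, name, args string,
        appendOutput func(string), emitToClient func()) bool {
        if exec != nil && exec.IsTool(name) {
            out, err := exec.ExecuteTool(name, args)
            if err != nil {
                out = err.Error() // surface the failure as the tool output
            }
            appendOutput(out) // FunctionCallOutput appended to conv.Items
            return true       // skip the client emit; respond again after the loop
        }
        emitToClient() // client MCP tool: existing round-trip path
        return false
    }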
Assisted-by: claude-code:claude-opus-4-7-1m [Claude Code]
Signed-off-by: Richard Palethorpe <io@richiejp.com>
* feat(realtime): render Manage Mode tool calls in the Talk transcript
Previously the realtime endpoint only emitted response.output_item.added
for the FunctionCall item, and Talk.jsx's switch ignored the event — so
server-side tool runs were invisible in the UI. The model would speak
the result but the user had no way to see what tool was actually
called.
realtime.go: after executing an assistant tool inproc, emit a second
output_item.added/.done pair for the FunctionCallOutput item. Mirrors
the way the chat page displays tool_call + tool_result blocks.
Talk.jsx: handle both response.output_item.added and .done. Render
FunctionCall (with arguments) and FunctionCallOutput (pretty-printed
JSON when possible) as two transcript entries — `tool_call` with the
wrench icon, `tool_result` with the clipboard icon, both in monospace
secondary-colour text. Resets streamingRef after the result so the next
assistant text delta starts a fresh transcript entry instead of
appending to the previous turn.
Assisted-by: claude-code:claude-opus-4-7-1m [Claude Code]
Signed-off-by: Richard Palethorpe <io@richiejp.com>
* refactor(realtime): bound the Manage Mode tool-loop + preserve assistant tools
Fallout from a review pass on the Manage Mode patches:
- Bound the server-side agentic loop. triggerResponse used to recurse on
executedAssistantTool with no cap — a model that kept calling tools
would blow the goroutine stack. New maxAssistantToolTurns = 10 (mirrors
useChat.js's maxToolTurns). Public triggerResponse is now a thin shim
over triggerResponseAtTurn(toolTurn int); recursion increments the
counter and stops at the cap with an xlog.Warn (sketched after this list).
- Preserve Manage Mode tools across client session.update. The handler
used to blindly overwrite session.Tools, so toggling a client MCP
server mid-session silently wiped the in-process admin tools. Session
now caches the original AssistantTools slice at session creation and
the session.update handler merges them back in (client names win on
collision — the client is explicit).
- strconv.ParseBool for the localai_assistant query param instead of
hand-rolled "1" || "true". Mirrors LocalAIAssistantFromMetadata.
- Talk.jsx: render both tool_call and tool_result on
response.output_item.done instead of splitting them across .added and
.done. The server's event pairing (added → done) stays correct; the
UI just doesn't need to inspect both phases of the same item. One
switch case instead of two, no behavioural change.
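The bounded loop, as a sketch; session plumbing is elided and
runModelTurn stands in for the real inference-plus-tools turn:

    package sketch

    import "log"

    const maxAssistantToolTurns = 10 // mirrors useChat.js's maxToolTurns

    // triggerResponse stays the public entry point; the real work takes an
    // explicit turn counter so a tool-happy model cannot recurse unboundedly.
    func triggerResponse() { triggerResponseAtTurn(0) }

    func triggerResponseAtTurn(toolTurn int) {
        if toolTurn >= maxAssistantToolTurns {
            log.Printf("assistant tool loop stopped at cap (%d)", maxAssistantToolTurns)
            return
        }
        if executedAssistantTool := runModelTurn(); executedAssistantTool {
            triggerResponseAtTurn(toolTurn + 1) // counter bounds the recursion
        }
    }

    func runModelTurn() bool { return false } // stub so the sketch compiles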
Out of scope (noted for follow-ups): extract a shared assistant-tools
helper between chat.go and realtime.go (duplication is small enough
that two parallel implementations stay readable for now), and an i18n
key for the Manage Mode helper text (Talk.jsx doesn't use i18n
anywhere else yet).
Assisted-by: claude-code:claude-opus-4-7-1m [Claude Code]
Signed-off-by: Richard Palethorpe <io@richiejp.com>
* ci(test-extra): wire liquid-audio backend smoke test
The backend ships test.py + a `make test` target and is listed in
backend-matrix.yml, so scripts/changed-backends.js already writes a
`liquid-audio=true|false` output when files under backend/python/liquid-audio/
change. The workflow just wasn't reading it.
- Expose the `liquid-audio` output on the detect-changes job
- Add a tests-liquid-audio job that runs `make` + `make test` in
backend/python/liquid-audio, gated on the per-backend detect flag
The smoke test covers Health() and LoadModel(mode:finetune); fine-tune mode
short-circuits before any HuggingFace download (backend.py:192), so the
job needs neither weights nor a GPU. The full-inference path remains
gated on LIQUID_AUDIO_MODEL_ID, which CI doesn't set.
The four new Go test files (core/gallery/importers/liquid-audio_test.go,
core/http/endpoints/openai/realtime_gate_test.go,
core/http/routes/ui_pipeline_models_test.go, pkg/functions/parse_lfm2_test.go)
are already picked up by the existing test.yml workflow via `make test` →
`ginkgo -r ./pkg/... ./core/...`; their packages all carry RunSpecs entries.
Assisted-by: Claude:claude-opus-4-7
Signed-off-by: Richard Palethorpe <io@richiejp.com>
---------
Signed-off-by: Richard Palethorpe <io@richiejp.com>
4324 lines
168 KiB
YAML
---
## metas
- &llamacpp
  name: "llama-cpp"
  alias: "llama-cpp"
  license: mit
  icon: https://user-images.githubusercontent.com/1991296/230134379-7181e485-c521-4d23-a0d6-f7b3b61ba524.png
  description: |
    LLM inference in C/C++
  urls:
    - https://github.com/ggerganov/llama.cpp
  tags:
    - text-to-text
    - LLM
    - CPU
    - GPU
    - Metal
    - CUDA
    - HIP
  capabilities:
    default: "cpu-llama-cpp"
    nvidia: "cuda12-llama-cpp"
    intel: "intel-sycl-f16-llama-cpp"
    amd: "rocm-llama-cpp"
    metal: "metal-llama-cpp"
    vulkan: "vulkan-llama-cpp"
    nvidia-l4t: "nvidia-l4t-arm64-llama-cpp"
    nvidia-cuda-13: "cuda13-llama-cpp"
    nvidia-cuda-12: "cuda12-llama-cpp"
    nvidia-l4t-cuda-12: "nvidia-l4t-arm64-llama-cpp"
    nvidia-l4t-cuda-13: "cuda13-nvidia-l4t-arm64-llama-cpp"
- &ikllamacpp
  name: "ik-llama-cpp"
  alias: "ik-llama-cpp"
  license: mit
  description: |
    Fork of llama.cpp optimized for CPU performance by ikawrakow
  urls:
    - https://github.com/ikawrakow/ik_llama.cpp
  tags:
    - text-to-text
    - LLM
    - CPU
  capabilities:
    default: "cpu-ik-llama-cpp"
- &turboquant
  name: "turboquant"
  alias: "turboquant"
  license: mit
  description: |
    Fork of llama.cpp adding the TurboQuant KV-cache quantization scheme.
    Reuses the LocalAI llama.cpp gRPC server sources against the fork's libllama.
  urls:
    - https://github.com/TheTom/llama-cpp-turboquant
  tags:
    - text-to-text
    - LLM
    - CPU
    - GPU
    - CUDA
    - HIP
    - turboquant
    - kv-cache
  capabilities:
    default: "cpu-turboquant"
    nvidia: "cuda12-turboquant"
    intel: "intel-sycl-f16-turboquant"
    amd: "rocm-turboquant"
    vulkan: "vulkan-turboquant"
    nvidia-l4t: "nvidia-l4t-arm64-turboquant"
    nvidia-cuda-13: "cuda13-turboquant"
    nvidia-cuda-12: "cuda12-turboquant"
    nvidia-l4t-cuda-12: "nvidia-l4t-arm64-turboquant"
    nvidia-l4t-cuda-13: "cuda13-nvidia-l4t-arm64-turboquant"
- &ds4
  name: "ds4"
  alias: "ds4"
  license: mit
  description: |
    antirez/ds4 - DeepSeek V4 Flash inference engine. Single-model,
    optimized for Metal (Darwin) and CUDA (Linux). Requires the GGUFs
    published at huggingface.co/antirez/deepseek-v4-gguf.
  urls:
    - https://github.com/antirez/ds4
  tags:
    - text-to-text
    - LLM
    - CPU
    - CUDA
    - Metal
  capabilities:
    default: "cpu-ds4"
    nvidia: "cuda13-ds4"
    nvidia-cuda-13: "cuda13-ds4"
    nvidia-l4t-cuda-13: "cuda13-nvidia-l4t-arm64-ds4"
    metal: "metal-ds4"
    metal-darwin-arm64: "metal-ds4"
- &whispercpp
  name: "whisper"
  alias: "whisper"
  license: mit
  icon: https://user-images.githubusercontent.com/1991296/235238348-05d0f6a4-da44-4900-a1de-d0707e75b763.jpeg
  description: |
    Port of OpenAI's Whisper model in C/C++
  urls:
    - https://github.com/ggml-org/whisper.cpp
  tags:
    - audio-transcription
    - CPU
    - GPU
    - CUDA
    - HIP
  capabilities:
    default: "cpu-whisper"
    nvidia: "cuda12-whisper"
    intel: "intel-sycl-f16-whisper"
    metal: "metal-whisper"
    amd: "rocm-whisper"
    vulkan: "vulkan-whisper"
    nvidia-l4t: "nvidia-l4t-arm64-whisper"
    nvidia-cuda-13: "cuda13-whisper"
    nvidia-cuda-12: "cuda12-whisper"
    nvidia-l4t-cuda-12: "nvidia-l4t-arm64-whisper"
    nvidia-l4t-cuda-13: "cuda13-nvidia-l4t-arm64-whisper"
- &voxtral
  name: "voxtral"
  alias: "voxtral"
  license: mit
  description: |
    Voxtral Realtime 4B Pure C speech-to-text inference engine
  urls:
    - https://github.com/mudler/voxtral.c
  tags:
    - audio-transcription
    - CPU
    - Metal
  capabilities:
    default: "cpu-voxtral"
    metal-darwin-arm64: "metal-voxtral"
- &stablediffusionggml
  name: "stablediffusion-ggml"
  alias: "stablediffusion-ggml"
  license: mit
  icon: https://github.com/leejet/stable-diffusion.cpp/raw/master/assets/cat_with_sd_cpp_42.png
  description: |
    Stable Diffusion and Flux in pure C/C++
  urls:
    - https://github.com/leejet/stable-diffusion.cpp
  tags:
    - image-generation
    - CPU
    - GPU
    - Metal
    - CUDA
    - HIP
  capabilities:
    default: "cpu-stablediffusion-ggml"
    nvidia: "cuda12-stablediffusion-ggml"
    intel: "intel-sycl-f16-stablediffusion-ggml"
    # amd: "rocm-stablediffusion-ggml"
    vulkan: "vulkan-stablediffusion-ggml"
    nvidia-l4t: "nvidia-l4t-arm64-stablediffusion-ggml"
    metal: "metal-stablediffusion-ggml"
    nvidia-cuda-13: "cuda13-stablediffusion-ggml"
    nvidia-cuda-12: "cuda12-stablediffusion-ggml"
    nvidia-l4t-cuda-12: "nvidia-l4t-arm64-stablediffusion-ggml"
    nvidia-l4t-cuda-13: "cuda13-nvidia-l4t-arm64-stablediffusion-ggml"
- &rfdetr
  name: "rfdetr"
  alias: "rfdetr"
  license: apache-2.0
  icon: https://avatars.githubusercontent.com/u/53104118?s=200&v=4
  description: |
    RF-DETR is a real-time, transformer-based object detection model architecture developed by Roboflow and released under the Apache 2.0 license.
    RF-DETR is the first real-time model to exceed 60 AP on the Microsoft COCO benchmark alongside competitive performance at base sizes. It also achieves state-of-the-art performance on RF100-VL, an object detection benchmark that measures model domain adaptability to real world problems. RF-DETR is the fastest and most accurate model for its size when compared to current real-time object detection models.
    RF-DETR is small enough to run on the edge using Inference, making it an ideal model for deployments that need both strong accuracy and real-time performance.
  urls:
    - https://github.com/roboflow/rf-detr
  tags:
    - object-detection
    - rfdetr
    - gpu
    - cpu
  capabilities:
    nvidia: "cuda12-rfdetr"
    intel: "intel-rfdetr"
    #amd: "rocm-rfdetr"
    nvidia-l4t: "nvidia-l4t-arm64-rfdetr"
    metal: "metal-rfdetr"
    default: "cpu-rfdetr"
    nvidia-cuda-13: "cuda13-rfdetr"
    nvidia-cuda-12: "cuda12-rfdetr"
    nvidia-l4t-cuda-12: "nvidia-l4t-arm64-rfdetr"
- &insightface
  name: "insightface"
  alias: "insightface"
  # Upstream insightface library is MIT. The pretrained model packs
  # (buffalo_l, buffalo_s, antelopev2) are released for NON-COMMERCIAL
  # research use only. The backend image also pre-bakes OpenCV Zoo
  # YuNet + SFace (Apache 2.0) for commercial use. Pick the engine
  # via model-gallery entries (insightface-buffalo-l / insightface-opencv
  # / insightface-buffalo-s) or set `options` in your model YAML.
  license: "mixed"
  description: |
    Face recognition backend powered by `insightface` (ONNX Runtime).
    Provides face verification (/v1/face/verify), face analysis
    (/v1/face/analyze), face embedding (/v1/embeddings), face
    detection (/v1/detection), and 1:N identification
    (/v1/face/{register,identify,forget}).
    Ships two engines in a single image: one that drives the insightface
    model packs (buffalo_l/s/m/sc, antelopev2 — non-commercial research
    use only) and one that drives OpenCV Zoo's YuNet + SFace pair
    (Apache 2.0 — commercial-safe). Select via `options: ["engine:..."]`
    in your model YAML, or install one of the ready-made model-gallery
    entries under the `insightface-*` prefix.
    The backend image contains only code and Python deps; all model
    weights are managed by LocalAI's gallery download mechanism.
  urls:
    - https://github.com/deepinsight/insightface
    - https://github.com/opencv/opencv_zoo
  tags:
    - face-recognition
    - face-verification
    - face-embedding
    - gpu
    - cpu
  capabilities:
    default: "cpu-insightface"
    nvidia: "cuda12-insightface"
    nvidia-cuda-12: "cuda12-insightface"
- &sam3cpp
  name: "sam3-cpp"
  alias: "sam3-cpp"
  license: mit
  description: |
    Segment Anything Model (SAM 3/2/EdgeTAM) in C/C++ using GGML.
    Supports text-prompted and point/box-prompted image segmentation.
  urls:
    - https://github.com/PABannier/sam3.cpp
  tags:
    - image-segmentation
    - object-detection
    - sam3
    - gpu
    - cpu
  capabilities:
    default: "cpu-sam3-cpp"
    nvidia: "cuda12-sam3-cpp"
    nvidia-cuda-12: "cuda12-sam3-cpp"
    nvidia-cuda-13: "cuda13-sam3-cpp"
    nvidia-l4t: "nvidia-l4t-arm64-sam3-cpp"
    nvidia-l4t-cuda-12: "nvidia-l4t-arm64-sam3-cpp"
    nvidia-l4t-cuda-13: "cuda13-nvidia-l4t-arm64-sam3-cpp"
    intel: "intel-sycl-f32-sam3-cpp"
    vulkan: "vulkan-sam3-cpp"
- &vllm
  name: "vllm"
  license: apache-2.0
  urls:
    - https://github.com/vllm-project/vllm
  tags:
    - text-to-text
    - multimodal
    - GPTQ
    - AWQ
    - AutoRound
    - INT4
    - INT8
    - FP8
  icon: https://raw.githubusercontent.com/vllm-project/vllm/main/docs/assets/logos/vllm-logo-text-dark.png
  description: |
    vLLM is a fast and easy-to-use library for LLM inference and serving.
    Originally developed in the Sky Computing Lab at UC Berkeley, vLLM has evolved into a community-driven project with contributions from both academia and industry.
    vLLM is fast with:
    State-of-the-art serving throughput
    Efficient management of attention key and value memory with PagedAttention
    Continuous batching of incoming requests
    Fast model execution with CUDA/HIP graph
    Quantizations: GPTQ, AWQ, AutoRound, INT4, INT8, and FP8
    Optimized CUDA kernels, including integration with FlashAttention and FlashInfer
    Speculative decoding
    Chunked prefill
  alias: "vllm"
  capabilities:
    nvidia: "cuda12-vllm"
    amd: "rocm-vllm"
    intel: "intel-vllm"
    nvidia-cuda-12: "cuda12-vllm"
    nvidia-cuda-13: "cuda13-vllm"
    nvidia-l4t-cuda-13: "cuda13-nvidia-l4t-arm64-vllm"
    cpu: "cpu-vllm"
- &sglang
  name: "sglang"
  license: apache-2.0
  urls:
    - https://github.com/sgl-project/sglang
  tags:
    - text-to-text
    - multimodal
  icon: https://raw.githubusercontent.com/sgl-project/sglang/main/assets/logo.png
  description: |
    SGLang is a fast serving framework for large language models and vision language models.
    It co-designs the backend runtime (RadixAttention, continuous batching, structured
    decoding) and the frontend language to make interaction with models faster and more
    controllable. Features include fast backend runtime, flexible frontend language,
    extensive model support, and an active community.
  alias: "sglang"
  capabilities:
    nvidia: "cuda12-sglang"
    amd: "rocm-sglang"
    intel: "intel-sglang"
    nvidia-cuda-12: "cuda12-sglang"
    nvidia-cuda-13: "cuda13-sglang"
    nvidia-l4t-cuda-13: "cuda13-nvidia-l4t-arm64-sglang"
    cpu: "cpu-sglang"
- &vllm-omni
  name: "vllm-omni"
  license: apache-2.0
  urls:
    - https://github.com/vllm-project/vllm-omni
  tags:
    - text-to-image
    - image-generation
    - text-to-video
    - video-generation
    - text-to-speech
    - TTS
    - multimodal
    - LLM
  icon: https://raw.githubusercontent.com/vllm-project/vllm/main/docs/assets/logos/vllm-logo-text-dark.png
  description: |
    vLLM-Omni is a unified interface for multimodal generation with vLLM.
    It supports image generation (text-to-image, image editing), video generation
    (text-to-video, image-to-video), text generation with multimodal inputs, and
    text-to-speech generation. Only supports NVIDIA (CUDA) and ROCm platforms.
  alias: "vllm-omni"
  capabilities:
    nvidia: "cuda12-vllm-omni"
    amd: "rocm-vllm-omni"
    nvidia-cuda-12: "cuda12-vllm-omni"
    nvidia-cuda-13: "cuda13-vllm-omni"
    nvidia-l4t-cuda-13: "cuda13-nvidia-l4t-arm64-vllm-omni"
- &mlx
  name: "mlx"
  uri: "quay.io/go-skynet/local-ai-backends:latest-metal-darwin-arm64-mlx"
  icon: https://avatars.githubusercontent.com/u/102832242?s=200&v=4
  urls:
    - https://github.com/ml-explore/mlx-lm
  mirrors:
    - localai/localai-backends:latest-metal-darwin-arm64-mlx
  license: MIT
  description: |
    Run LLMs with MLX
  tags:
    - text-to-text
    - LLM
    - MLX
  capabilities:
    default: "cpu-mlx"
    nvidia: "cuda12-mlx"
    metal: "metal-mlx"
    nvidia-cuda-12: "cuda12-mlx"
    nvidia-cuda-13: "cuda13-mlx"
    nvidia-l4t: "nvidia-l4t-mlx"
    nvidia-l4t-cuda-12: "nvidia-l4t-mlx"
    nvidia-l4t-cuda-13: "cuda13-nvidia-l4t-arm64-mlx"
- &mlx-vlm
  name: "mlx-vlm"
  uri: "quay.io/go-skynet/local-ai-backends:latest-metal-darwin-arm64-mlx-vlm"
  icon: https://avatars.githubusercontent.com/u/102832242?s=200&v=4
  urls:
    - https://github.com/Blaizzy/mlx-vlm
  mirrors:
    - localai/localai-backends:latest-metal-darwin-arm64-mlx-vlm
  license: MIT
  description: |
    Run Vision-Language Models with MLX
  tags:
    - text-to-text
    - multimodal
    - vision-language
    - LLM
    - MLX
  capabilities:
    default: "cpu-mlx-vlm"
    nvidia: "cuda12-mlx-vlm"
    metal: "metal-mlx-vlm"
    nvidia-cuda-12: "cuda12-mlx-vlm"
    nvidia-cuda-13: "cuda13-mlx-vlm"
    nvidia-l4t: "nvidia-l4t-mlx-vlm"
    nvidia-l4t-cuda-12: "nvidia-l4t-mlx-vlm"
    nvidia-l4t-cuda-13: "cuda13-nvidia-l4t-arm64-mlx-vlm"
- &mlx-audio
  name: "mlx-audio"
  uri: "quay.io/go-skynet/local-ai-backends:latest-metal-darwin-arm64-mlx-audio"
  icon: https://avatars.githubusercontent.com/u/102832242?s=200&v=4
  urls:
    - https://github.com/Blaizzy/mlx-audio
  mirrors:
    - localai/localai-backends:latest-metal-darwin-arm64-mlx-audio
  license: MIT
  description: |
    Run Audio Models with MLX
  tags:
    - audio-to-text
    - audio-generation
    - text-to-audio
    - LLM
    - MLX
  capabilities:
    default: "cpu-mlx-audio"
    nvidia: "cuda12-mlx-audio"
    metal: "metal-mlx-audio"
    nvidia-cuda-12: "cuda12-mlx-audio"
    nvidia-cuda-13: "cuda13-mlx-audio"
    nvidia-l4t: "nvidia-l4t-mlx-audio"
    nvidia-l4t-cuda-12: "nvidia-l4t-mlx-audio"
    nvidia-l4t-cuda-13: "cuda13-nvidia-l4t-arm64-mlx-audio"
- &mlx-distributed
  name: "mlx-distributed"
  uri: "quay.io/go-skynet/local-ai-backends:latest-metal-darwin-arm64-mlx-distributed"
  icon: https://avatars.githubusercontent.com/u/102832242?s=200&v=4
  urls:
    - https://github.com/ml-explore/mlx-lm
  mirrors:
    - localai/localai-backends:latest-metal-darwin-arm64-mlx-distributed
  license: MIT
  description: |
    Run distributed LLM inference with MLX across multiple Apple Silicon Macs
  tags:
    - text-to-text
    - LLM
    - MLX
    - distributed
  capabilities:
    default: "cpu-mlx-distributed"
    nvidia: "cuda12-mlx-distributed"
    metal: "metal-mlx-distributed"
    nvidia-cuda-12: "cuda12-mlx-distributed"
    nvidia-cuda-13: "cuda13-mlx-distributed"
    nvidia-l4t: "nvidia-l4t-mlx-distributed"
    nvidia-l4t-cuda-12: "nvidia-l4t-mlx-distributed"
    nvidia-l4t-cuda-13: "cuda13-nvidia-l4t-arm64-mlx-distributed"
- &rerankers
  name: "rerankers"
  alias: "rerankers"
  capabilities:
    nvidia: "cuda12-rerankers"
    intel: "intel-rerankers"
    amd: "rocm-rerankers"
    metal: "metal-rerankers"
- &tinygrad
  name: "tinygrad"
  alias: "tinygrad"
  license: MIT
  description: |
    tinygrad is a minimalist deep-learning framework with zero runtime
    dependencies that targets CUDA, ROCm, Metal, WebGPU and CPU (CLANG).
    The LocalAI tinygrad backend exposes a single multimodal runtime that
    covers LLM text generation (Llama / Qwen / Mistral via safetensors or
    GGUF) with native tool-call extraction, BERT-family embeddings,
    Stable Diffusion 1.x / 2 / XL image generation, and Whisper speech-to-text.

    Single image: tinygrad generates its own GPU kernels and dlopens the
    host driver libraries at runtime, so there is no per-toolkit build
    split. The same image runs CPU-only or accelerates against
    CUDA / ROCm / Metal when the host driver is visible.
  urls:
    - https://github.com/tinygrad/tinygrad
  uri: "quay.io/go-skynet/local-ai-backends:latest-tinygrad"
  mirrors:
    - localai/localai-backends:latest-tinygrad
  tags:
    - text-to-text
    - LLM
    - embeddings
    - image-generation
    - transcription
    - multimodal
- &transformers
  name: "transformers"
  icon: https://avatars.githubusercontent.com/u/25720743?s=200&v=4
  alias: "transformers"
  license: apache-2.0
  description: |
    Transformers acts as the model-definition framework for state-of-the-art machine learning models in text, computer vision, audio, video, and multimodal model, for both inference and training.
    It centralizes the model definition so that this definition is agreed upon across the ecosystem. transformers is the pivot across frameworks: if a model definition is supported, it will be compatible with the majority of training frameworks (Axolotl, Unsloth, DeepSpeed, FSDP, PyTorch-Lightning, ...), inference engines (vLLM, SGLang, TGI, ...), and adjacent modeling libraries (llama.cpp, mlx, ...) which leverage the model definition from transformers.
  urls:
    - https://github.com/huggingface/transformers
  tags:
    - text-to-text
    - multimodal
  capabilities:
    nvidia: "cuda12-transformers"
    intel: "intel-transformers"
    amd: "rocm-transformers"
    metal: "metal-transformers"
    nvidia-cuda-13: "cuda13-transformers"
    nvidia-cuda-12: "cuda12-transformers"
- &diffusers
  name: "diffusers"
  icon: https://raw.githubusercontent.com/huggingface/diffusers/main/docs/source/en/imgs/diffusers_library.jpg
  description: |
    🤗 Diffusers is the go-to library for state-of-the-art pretrained diffusion models for generating images, audio, and even 3D structures of molecules. Whether you're looking for a simple inference solution or training your own diffusion models, 🤗 Diffusers is a modular toolbox that supports both.
  urls:
    - https://github.com/huggingface/diffusers
  tags:
    - image-generation
    - video-generation
    - diffusion-models
  license: apache-2.0
  alias: "diffusers"
  capabilities:
    nvidia: "cuda12-diffusers"
    intel: "intel-diffusers"
    amd: "rocm-diffusers"
    nvidia-l4t: "nvidia-l4t-diffusers"
    metal: "metal-diffusers"
    default: "cpu-diffusers"
    nvidia-cuda-13: "cuda13-diffusers"
    nvidia-cuda-12: "cuda12-diffusers"
    nvidia-l4t-cuda-12: "nvidia-l4t-arm64-diffusers"
    nvidia-l4t-cuda-13: "cuda13-nvidia-l4t-arm64-diffusers"
- &ace-step
  name: "ace-step"
  description: |
    ACE-Step 1.5 is an open-source music generation model. It supports simple mode (natural language description) and advanced mode (caption, lyrics, think, bpm, keyscale, etc.). Uses in-process acestep (LLMHandler for metadata, DiT for audio).
  urls:
    - https://github.com/ace-step/ACE-Step-1.5
  tags:
    - music-generation
    - sound-generation
  alias: "ace-step"
  capabilities:
    nvidia: "cuda12-ace-step"
    intel: "intel-ace-step"
    amd: "rocm-ace-step"
    metal: "metal-ace-step"
    default: "cpu-ace-step"
    nvidia-cuda-13: "cuda13-ace-step"
    nvidia-cuda-12: "cuda12-ace-step"
- !!merge <<: *ace-step
  name: "ace-step-development"
  capabilities:
    nvidia: "cuda12-ace-step-development"
    intel: "intel-ace-step-development"
    amd: "rocm-ace-step-development"
    metal: "metal-ace-step-development"
    default: "cpu-ace-step-development"
    nvidia-cuda-13: "cuda13-ace-step-development"
    nvidia-cuda-12: "cuda12-ace-step-development"
- &acestepcpp
  name: "acestep-cpp"
  description: |
    ACE-Step 1.5 C++ backend using GGML. Native C++ implementation of ACE-Step music generation with GPU support through GGML backends.
    Generates stereo 48kHz audio from text descriptions and optional lyrics via a two-stage pipeline: text-to-code (ace-qwen3 LLM) + code-to-audio (DiT-VAE).
  urls:
    - https://github.com/ace-step/acestep.cpp
  tags:
    - music-generation
    - sound-generation
  alias: "acestep-cpp"
  capabilities:
    default: "cpu-acestep-cpp"
    nvidia: "cuda12-acestep-cpp"
    nvidia-cuda-13: "cuda13-acestep-cpp"
    nvidia-cuda-12: "cuda12-acestep-cpp"
    intel: "intel-sycl-f16-acestep-cpp"
    metal: "metal-acestep-cpp"
    amd: "rocm-acestep-cpp"
    vulkan: "vulkan-acestep-cpp"
    nvidia-l4t: "nvidia-l4t-arm64-acestep-cpp"
    nvidia-l4t-cuda-12: "nvidia-l4t-arm64-acestep-cpp"
    nvidia-l4t-cuda-13: "cuda13-nvidia-l4t-arm64-acestep-cpp"
- &qwen3ttscpp
  name: "qwen3-tts-cpp"
  description: |
    Qwen3-TTS C++ backend using GGML. Native C++ text-to-speech with voice cloning support.
    Generates 24kHz mono audio from text with optional reference audio for voice cloning via ECAPA-TDNN speaker embeddings.
  urls:
    - https://github.com/predict-woo/qwen3-tts.cpp
  tags:
    - text-to-speech
    - tts
    - voice-cloning
  alias: "qwen3-tts-cpp"
  capabilities:
    default: "cpu-qwen3-tts-cpp"
    nvidia: "cuda12-qwen3-tts-cpp"
    nvidia-cuda-13: "cuda13-qwen3-tts-cpp"
    nvidia-cuda-12: "cuda12-qwen3-tts-cpp"
    intel: "intel-sycl-f16-qwen3-tts-cpp"
    metal: "metal-qwen3-tts-cpp"
    amd: "rocm-qwen3-tts-cpp"
    vulkan: "vulkan-qwen3-tts-cpp"
    nvidia-l4t: "nvidia-l4t-arm64-qwen3-tts-cpp"
    nvidia-l4t-cuda-12: "nvidia-l4t-arm64-qwen3-tts-cpp"
    nvidia-l4t-cuda-13: "cuda13-nvidia-l4t-arm64-qwen3-tts-cpp"
- &vibevoicecpp
  name: "vibevoice-cpp"
  description: |
    vibevoice.cpp C++ backend using GGML. Native C++ port of Microsoft VibeVoice for both
    text-to-speech (with voice cloning via voice prompt GGUFs) and long-form ASR with
    speaker diarization. Outputs 24kHz mono WAV; ASR returns per-speaker JSON segments.
  urls:
    - https://github.com/mudler/vibevoice.cpp
  tags:
    - text-to-speech
    - tts
    - speech-to-text
    - asr
    - voice-cloning
    - diarization
  alias: "vibevoice-cpp"
  capabilities:
    default: "cpu-vibevoice-cpp"
    nvidia: "cuda12-vibevoice-cpp"
    nvidia-cuda-13: "cuda13-vibevoice-cpp"
    nvidia-cuda-12: "cuda12-vibevoice-cpp"
    intel: "intel-sycl-f16-vibevoice-cpp"
    metal: "metal-vibevoice-cpp"
    amd: "rocm-vibevoice-cpp"
    vulkan: "vulkan-vibevoice-cpp"
    nvidia-l4t: "nvidia-l4t-arm64-vibevoice-cpp"
    nvidia-l4t-cuda-12: "nvidia-l4t-arm64-vibevoice-cpp"
    nvidia-l4t-cuda-13: "cuda13-nvidia-l4t-arm64-vibevoice-cpp"
- &localvqecpp
  name: "localvqe"
  description: |
    LocalVQE C++ backend using GGML — joint acoustic echo cancellation, noise
    suppression, and dereverberation (DeepVQE-style architecture). 16 kHz mono
    in / out, supports both batch and low-latency streaming. Implements the
    audio-transform capability.
  urls:
    - https://github.com/localai-org/LocalVQE
  tags:
    - audio-transform
    - aec
    - acoustic-echo-cancellation
    - noise-suppression
    - dereverberation
  license: apache2
  alias: "localvqe"
  # Upstream LocalVQE only supports CPU and Vulkan; no CUDA/ROCm/SYCL/Metal
  # builds. GPU-class hardware that exposes a Vulkan ICD (NVIDIA, AMD, Intel
  # discrete + iGPU, Tegra) routes to the Vulkan image; everything else
  # falls back to the CPU build, which is already ~9× realtime on a desktop.
  capabilities:
    default: "cpu-localvqe"
    nvidia: "vulkan-localvqe"
    nvidia-cuda-12: "vulkan-localvqe"
    nvidia-cuda-13: "vulkan-localvqe"
    intel: "vulkan-localvqe"
    amd: "vulkan-localvqe"
    vulkan: "vulkan-localvqe"
    nvidia-l4t: "vulkan-localvqe"
    nvidia-l4t-cuda-12: "vulkan-localvqe"
    nvidia-l4t-cuda-13: "vulkan-localvqe"
- &faster-whisper
  icon: https://avatars.githubusercontent.com/u/1520500?s=200&v=4
  description: |
    faster-whisper is a reimplementation of OpenAI's Whisper model using CTranslate2, which is a fast inference engine for Transformer models.
    This implementation is up to 4 times faster than openai/whisper for the same accuracy while using less memory. The efficiency can be further improved with 8-bit quantization on both CPU and GPU.
  urls:
    - https://github.com/SYSTRAN/faster-whisper
  tags:
    - speech-to-text
    - Whisper
  license: MIT
  name: "faster-whisper"
  capabilities:
    default: "cpu-faster-whisper"
    nvidia: "cuda12-faster-whisper"
    intel: "intel-faster-whisper"
    amd: "rocm-faster-whisper"
    metal: "metal-faster-whisper"
    nvidia-cuda-13: "cuda13-faster-whisper"
    nvidia-cuda-12: "cuda12-faster-whisper"
    nvidia-l4t: "nvidia-l4t-arm64-faster-whisper"
    nvidia-l4t-cuda-12: "nvidia-l4t-arm64-faster-whisper"
- &moonshine
  description: |
    Moonshine is a fast, accurate, and efficient speech-to-text transcription model using ONNX Runtime.
    It provides real-time transcription capabilities with support for multiple model sizes and GPU acceleration.
  urls:
    - https://github.com/moonshine-ai/moonshine
  tags:
    - speech-to-text
    - transcription
    - ONNX
  license: MIT
  name: "moonshine"
  alias: "moonshine"
  capabilities:
    nvidia: "cuda12-moonshine"
    metal: "metal-moonshine"
    default: "cpu-moonshine"
    nvidia-cuda-13: "cuda13-moonshine"
    nvidia-cuda-12: "cuda12-moonshine"
- &whisperx
  description: |
    WhisperX provides fast automatic speech recognition with word-level timestamps, speaker diarization,
    and forced alignment. Built on faster-whisper and pyannote-audio for high-accuracy transcription
    with speaker identification.
  urls:
    - https://github.com/m-bain/whisperX
  tags:
    - speech-to-text
    - diarization
    - whisperx
  license: BSD-4-Clause
  name: "whisperx"
  alias: "whisperx"
  capabilities:
    nvidia: "cuda12-whisperx"
    metal: "metal-whisperx"
    default: "cpu-whisperx"
    nvidia-cuda-13: "cuda13-whisperx"
    nvidia-cuda-12: "cuda12-whisperx"
    nvidia-l4t: "nvidia-l4t-arm64-whisperx"
    nvidia-l4t-cuda-12: "nvidia-l4t-arm64-whisperx"
- &kokoro
  icon: https://avatars.githubusercontent.com/u/166769057?v=4
  description: |
    Kokoro is an open-weight TTS model with 82 million parameters. Despite its lightweight architecture, it delivers comparable quality to larger models while being significantly faster and more cost-efficient. With Apache-licensed weights, Kokoro can be deployed anywhere from production environments to personal projects.
  urls:
    - https://huggingface.co/hexgrad/Kokoro-82M
    - https://github.com/hexgrad/kokoro
  tags:
    - text-to-speech
    - TTS
    - LLM
  license: apache-2.0
  alias: "kokoro"
  name: "kokoro"
  capabilities:
    nvidia: "cuda12-kokoro"
    intel: "intel-kokoro"
    amd: "rocm-kokoro"
    nvidia-l4t: "nvidia-l4t-kokoro"
    metal: "metal-kokoro"
    nvidia-cuda-13: "cuda13-kokoro"
    nvidia-cuda-12: "cuda12-kokoro"
    nvidia-l4t-cuda-12: "nvidia-l4t-arm64-kokoro"
- &kokoros
  icon: https://avatars.githubusercontent.com/u/166769057?v=4
  description: |
    Kokoros is a pure Rust TTS backend using the Kokoro ONNX model (82M parameters).
    It provides fast, high-quality text-to-speech with streaming support, built on
    ONNX Runtime for efficient CPU inference. Supports English, Japanese, Mandarin
    Chinese, and German.
  urls:
    - https://huggingface.co/hexgrad/Kokoro-82M
    - https://github.com/lucasjinreal/Kokoros
  tags:
    - text-to-speech
    - TTS
    - Rust
    - ONNX
  license: apache-2.0
  alias: "kokoros"
  name: "kokoros"
  capabilities:
    default: "cpu-kokoros"
- &coqui
  urls:
    - https://github.com/idiap/coqui-ai-TTS
  description: |
    🐸 Coqui TTS is a library for advanced Text-to-Speech generation.

    🚀 Pretrained models in +1100 languages.

    🛠️ Tools for training new models and fine-tuning existing models in any language.

    📚 Utilities for dataset analysis and curation.
  tags:
    - text-to-speech
    - TTS
  license: mpl-2.0
  name: "coqui"
  alias: "coqui"
  capabilities:
    nvidia: "cuda12-coqui"
    intel: "intel-coqui"
    amd: "rocm-coqui"
    metal: "metal-coqui"
    nvidia-cuda-13: "cuda13-coqui"
    nvidia-cuda-12: "cuda12-coqui"
  icon: https://avatars.githubusercontent.com/u/1338804?s=200&v=4
- &outetts
  urls:
    - https://github.com/OuteAI/outetts
  description: |
    OuteTTS is an open-weight text-to-speech model from OuteAI (OuteAI/OuteTTS-0.3-1B).
    Supports custom speaker voices via audio path or default speakers.
  tags:
    - text-to-speech
    - TTS
  license: apache-2.0
  name: "outetts"
  alias: "outetts"
  capabilities:
    default: "cpu-outetts"
    nvidia-cuda-12: "cuda12-outetts"
- &chatterbox
  urls:
    - https://github.com/resemble-ai/chatterbox
  description: |
    Resemble AI's first production-grade open source TTS model. Licensed under MIT, Chatterbox has been benchmarked against leading closed-source systems like ElevenLabs, and is consistently preferred in side-by-side evaluations.
    Whether you're working on memes, videos, games, or AI agents, Chatterbox brings your content to life. It's also the first open source TTS model to support emotion exaggeration control, a powerful feature that makes your voices stand out.
  tags:
    - text-to-speech
    - TTS
  license: MIT
  icon: https://avatars.githubusercontent.com/u/49844015?s=200&v=4
  name: "chatterbox"
  alias: "chatterbox"
  capabilities:
    nvidia: "cuda12-chatterbox"
    metal: "metal-chatterbox"
    default: "cpu-chatterbox"
    nvidia-l4t: "nvidia-l4t-arm64-chatterbox"
    nvidia-cuda-13: "cuda13-chatterbox"
    nvidia-cuda-12: "cuda12-chatterbox"
    nvidia-l4t-cuda-12: "nvidia-l4t-arm64-chatterbox"
    nvidia-l4t-cuda-13: "cuda13-nvidia-l4t-arm64-chatterbox"
- &vibevoice
  urls:
    - https://github.com/microsoft/VibeVoice
  description: |
    VibeVoice-Realtime is a real-time text-to-speech model that generates natural-sounding speech.
  tags:
    - text-to-speech
    - TTS
  license: mit
  name: "vibevoice"
  alias: "vibevoice"
  capabilities:
    nvidia: "cuda12-vibevoice"
    intel: "intel-vibevoice"
    amd: "rocm-vibevoice"
    nvidia-l4t: "nvidia-l4t-vibevoice"
    metal: "metal-vibevoice"
    default: "cpu-vibevoice"
    nvidia-cuda-13: "cuda13-vibevoice"
    nvidia-cuda-12: "cuda12-vibevoice"
    nvidia-l4t-cuda-12: "nvidia-l4t-vibevoice"
    nvidia-l4t-cuda-13: "cuda13-nvidia-l4t-arm64-vibevoice"
  icon: https://avatars.githubusercontent.com/u/6154722?s=200&v=4
- &liquid-audio
  urls:
    - https://github.com/Liquid4All/liquid-audio
    - https://huggingface.co/LiquidAI/LFM2.5-Audio-1.5B
  description: |
    LiquidAI LFM2 / LFM2.5 Audio Python backend. End-to-end speech-to-speech, ASR,
    TTS (4 baked voices), and text chat from a single 1.5B model. Wraps the
    upstream `liquid-audio` package; supports fine-tuning via LocalAI's
    /v1/fine-tuning/jobs endpoint.
  tags:
    - speech-to-speech
    - any-to-any
    - text-to-speech
    - speech-to-text
    - TTS
    - ASR
    - realtime
  license: LFM-Open-License-v1.0
  name: "liquid-audio"
  alias: "liquid-audio"
  capabilities:
    nvidia: "cuda12-liquid-audio"
    intel: "intel-liquid-audio"
    amd: "rocm-liquid-audio"
    default: "cpu-liquid-audio"
    nvidia-cuda-13: "cuda13-liquid-audio"
    nvidia-cuda-12: "cuda12-liquid-audio"
    nvidia-l4t-cuda-13: "cuda13-nvidia-l4t-arm64-liquid-audio"
  icon: https://cdn-avatars.huggingface.co/v1/production/uploads/61b8e2ba285851687028d395/7_6D7rWrLxp2hb6OHSV1p.png
- &qwen-tts
  urls:
    - https://github.com/QwenLM/Qwen3-TTS
  description: |
    Qwen3-TTS is a high-quality text-to-speech model supporting custom voice, voice design, and voice cloning.
  tags:
    - text-to-speech
    - TTS
  license: apache-2.0
  name: "qwen-tts"
  alias: "qwen-tts"
  capabilities:
    nvidia: "cuda12-qwen-tts"
    intel: "intel-qwen-tts"
    amd: "rocm-qwen-tts"
    nvidia-l4t: "nvidia-l4t-qwen-tts"
    metal: "metal-qwen-tts"
    default: "cpu-qwen-tts"
    nvidia-cuda-13: "cuda13-qwen-tts"
    nvidia-cuda-12: "cuda12-qwen-tts"
    nvidia-l4t-cuda-12: "nvidia-l4t-qwen-tts"
    nvidia-l4t-cuda-13: "cuda13-nvidia-l4t-arm64-qwen-tts"
  icon: https://cdn-avatars.huggingface.co/v1/production/uploads/620760a26e3b7210c2ff1943/-s1gyJfvbE1RgO5iBeNOi.png
- &fish-speech
  urls:
    - https://github.com/fishaudio/fish-speech
  description: |
    Fish Speech is a high-quality text-to-speech model supporting voice cloning via reference audio.
  tags:
    - text-to-speech
    - TTS
    - voice-cloning
  license: apache-2.0
  name: "fish-speech"
  alias: "fish-speech"
  capabilities:
    nvidia: "cuda12-fish-speech"
    intel: "intel-fish-speech"
    amd: "rocm-fish-speech"
    nvidia-l4t: "nvidia-l4t-fish-speech"
    metal: "metal-fish-speech"
    default: "cpu-fish-speech"
    nvidia-cuda-13: "cuda13-fish-speech"
    nvidia-cuda-12: "cuda12-fish-speech"
    nvidia-l4t-cuda-12: "nvidia-l4t-fish-speech"
    nvidia-l4t-cuda-13: "cuda13-nvidia-l4t-arm64-fish-speech"
  icon: https://avatars.githubusercontent.com/u/148526220?s=200&v=4
- &faster-qwen3-tts
  urls:
    - https://github.com/andimarafioti/faster-qwen3-tts
    - https://pypi.org/project/faster-qwen3-tts/
  description: |
    Real-time Qwen3-TTS inference using CUDA graph capture. Voice clone only; requires NVIDIA GPU with CUDA.
  tags:
    - text-to-speech
    - TTS
    - voice-clone
  license: apache-2.0
  name: "faster-qwen3-tts"
  alias: "faster-qwen3-tts"
  capabilities:
    nvidia: "cuda12-faster-qwen3-tts"
    default: "cuda12-faster-qwen3-tts"
    nvidia-cuda-13: "cuda13-faster-qwen3-tts"
    nvidia-cuda-12: "cuda12-faster-qwen3-tts"
    nvidia-l4t: "nvidia-l4t-faster-qwen3-tts"
    nvidia-l4t-cuda-12: "nvidia-l4t-faster-qwen3-tts"
    nvidia-l4t-cuda-13: "cuda13-nvidia-l4t-arm64-faster-qwen3-tts"
  icon: https://cdn-avatars.huggingface.co/v1/production/uploads/620760a26e3b7210c2ff1943/-s1gyJfvbE1RgO5iBeNOi.png
- &qwen-asr
  urls:
    - https://github.com/QwenLM/Qwen3-ASR
  description: |
    Qwen3-ASR is an automatic speech recognition model supporting multiple languages and batch inference.
  tags:
    - speech-recognition
    - ASR
  license: apache-2.0
  name: "qwen-asr"
  alias: "qwen-asr"
  capabilities:
    nvidia: "cuda12-qwen-asr"
    intel: "intel-qwen-asr"
    amd: "rocm-qwen-asr"
    nvidia-l4t: "nvidia-l4t-qwen-asr"
    metal: "metal-qwen-asr"
    default: "cpu-qwen-asr"
    nvidia-cuda-13: "cuda13-qwen-asr"
    nvidia-cuda-12: "cuda12-qwen-asr"
    nvidia-l4t-cuda-12: "nvidia-l4t-qwen-asr"
    nvidia-l4t-cuda-13: "cuda13-nvidia-l4t-arm64-qwen-asr"
  icon: https://cdn-avatars.huggingface.co/v1/production/uploads/620760a26e3b7210c2ff1943/-s1gyJfvbE1RgO5iBeNOi.png
- &nemo
  urls:
    - https://github.com/NVIDIA/NeMo
  description: |
    NVIDIA NEMO Toolkit for ASR provides state-of-the-art automatic speech recognition models including Parakeet models for various languages and use cases.
  tags:
    - speech-recognition
    - ASR
    - NVIDIA
  license: apache-2.0
  name: "nemo"
  alias: "nemo"
  capabilities:
    nvidia: "cuda12-nemo"
    intel: "intel-nemo"
    amd: "rocm-nemo"
    metal: "metal-nemo"
    default: "cpu-nemo"
    nvidia-cuda-13: "cuda13-nemo"
    nvidia-cuda-12: "cuda12-nemo"
  icon: https://www.nvidia.com/favicon.ico
- &voxcpm
  urls:
    - https://github.com/ModelBest/VoxCPM
  description: |
    VoxCPM is an innovative end-to-end TTS model from ModelBest, designed to generate highly expressive speech.
  tags:
    - text-to-speech
    - TTS
  license: mit
  name: "voxcpm"
  alias: "voxcpm"
  capabilities:
    nvidia: "cuda12-voxcpm"
    intel: "intel-voxcpm"
    amd: "rocm-voxcpm"
    metal: "metal-voxcpm"
    default: "cpu-voxcpm"
    nvidia-cuda-13: "cuda13-voxcpm"
    nvidia-cuda-12: "cuda12-voxcpm"
  icon: https://avatars.githubusercontent.com/u/6154722?s=200&v=4
- &pocket-tts
  urls:
    - https://github.com/kyutai-labs/pocket-tts
  description: |
    Pocket TTS is a lightweight text-to-speech model designed to run efficiently on CPUs.
  tags:
    - text-to-speech
    - TTS
  license: mit
  name: "pocket-tts"
  alias: "pocket-tts"
  capabilities:
    nvidia: "cuda12-pocket-tts"
    intel: "intel-pocket-tts"
    amd: "rocm-pocket-tts"
    nvidia-l4t: "nvidia-l4t-pocket-tts"
    metal: "metal-pocket-tts"
    default: "cpu-pocket-tts"
    nvidia-cuda-13: "cuda13-pocket-tts"
    nvidia-cuda-12: "cuda12-pocket-tts"
    nvidia-l4t-cuda-12: "nvidia-l4t-pocket-tts"
    nvidia-l4t-cuda-13: "cuda13-nvidia-l4t-arm64-pocket-tts"
  icon: https://avatars.githubusercontent.com/u/151010778?s=200&v=4
- &piper
  name: "piper"
  uri: "quay.io/go-skynet/local-ai-backends:latest-piper"
  icon: https://github.com/OHF-Voice/piper1-gpl/raw/main/etc/logo.png
  urls:
    - https://github.com/rhasspy/piper
    - https://github.com/mudler/go-piper
  mirrors:
    - localai/localai-backends:latest-piper
  license: MIT
  description: |
    A fast, local neural text to speech system
  tags:
    - text-to-speech
    - TTS
- &opus
  name: "opus"
  alias: "opus"
  uri: "quay.io/go-skynet/local-ai-backends:latest-cpu-opus"
  urls:
    - https://opus-codec.org/
  mirrors:
    - localai/localai-backends:latest-cpu-opus
  license: BSD-3-Clause
  description: |
    Opus audio codec backend for encoding and decoding audio.
    Required for WebRTC transport in the Realtime API.
  tags:
    - audio-codec
    - opus
    - WebRTC
    - realtime
    - CPU
- &silero-vad
  name: "silero-vad"
  uri: "quay.io/go-skynet/local-ai-backends:latest-cpu-silero-vad"
  icon: https://user-images.githubusercontent.com/12515440/89997349-b3523080-dc94-11ea-9906-ca2e8bc50535.png
  urls:
    - https://github.com/snakers4/silero-vad
  mirrors:
    - localai/localai-backends:latest-cpu-silero-vad
  description: |
    Silero VAD: pre-trained enterprise-grade Voice Activity Detector.
    Silero VAD is a voice activity detection model that can be used to detect whether a given audio contains speech or not.
  tags:
    - voice-activity-detection
    - VAD
    - silero-vad
    - CPU
- &local-store
  name: "local-store"
  uri: "quay.io/go-skynet/local-ai-backends:latest-cpu-local-store"
  mirrors:
    - localai/localai-backends:latest-cpu-local-store
  urls:
    - https://github.com/mudler/LocalAI
  description: |
    Local Store is a local-first, self-hosted, and open-source vector database.
  tags:
    - vector-database
    - local-first
    - open-source
    - CPU
  license: MIT
- &kitten-tts
  name: "kitten-tts"
  uri: "quay.io/go-skynet/local-ai-backends:latest-kitten-tts"
  mirrors:
    - localai/localai-backends:latest-kitten-tts
  urls:
    - https://github.com/KittenML/KittenTTS
  description: |
    Kitten TTS is a text-to-speech model that can generate speech from text.
  tags:
    - text-to-speech
    - TTS
  license: apache-2.0
- &neutts
  name: "neutts"
  urls:
    - https://github.com/neuphonic/neutts-air
  description: |
    NeuTTS Air is the world’s first super-realistic, on-device, TTS speech language model with instant voice cloning. Built off a 0.5B LLM backbone, NeuTTS Air brings natural-sounding speech, real-time performance, built-in security and speaker cloning to your local device - unlocking a new category of embedded voice agents, assistants, toys, and compliance-safe apps.
  tags:
    - text-to-speech
    - TTS
  license: apache-2.0
  capabilities:
    default: "cpu-neutts"
    nvidia: "cuda12-neutts"
    amd: "rocm-neutts"
    nvidia-cuda-12: "cuda12-neutts"
- &sherpa-onnx
  name: "sherpa-onnx"
  alias: "sherpa-onnx"
  urls:
    - https://k2-fsa.github.io/sherpa/onnx/
  description: |
    Sherpa-ONNX backend for text-to-speech (VITS, Matcha, Kokoro), speech-to-text (Whisper, Paraformer, SenseVoice, Omnilingual ASR CTC), and voice activity detection via ONNX Runtime.
    Supports multi-speaker voices, 1600+ language ASR, and GPU acceleration.
  tags:
    - text-to-speech
    - TTS
    - speech-to-text
    - ASR
  capabilities:
    default: "cpu-sherpa-onnx"
    nvidia: "cuda12-sherpa-onnx"
    nvidia-cuda-12: "cuda12-sherpa-onnx"
- !!merge <<: *neutts
  name: "neutts-development"
  capabilities:
    default: "cpu-neutts-development"
    nvidia: "cuda12-neutts-development"
    amd: "rocm-neutts-development"
    nvidia-cuda-12: "cuda12-neutts-development"
- !!merge <<: *llamacpp
  name: "llama-cpp-development"
  capabilities:
    default: "cpu-llama-cpp-development"
    nvidia: "cuda12-llama-cpp-development"
    intel: "intel-sycl-f16-llama-cpp-development"
    amd: "rocm-llama-cpp-development"
    metal: "metal-llama-cpp-development"
    vulkan: "vulkan-llama-cpp-development"
    nvidia-l4t: "nvidia-l4t-arm64-llama-cpp-development"
    nvidia-cuda-13: "cuda13-llama-cpp-development"
    nvidia-cuda-12: "cuda12-llama-cpp-development"
    nvidia-l4t-cuda-12: "nvidia-l4t-arm64-llama-cpp-development"
    nvidia-l4t-cuda-13: "cuda13-nvidia-l4t-arm64-llama-cpp-development"
- !!merge <<: *ikllamacpp
  name: "ik-llama-cpp-development"
  capabilities:
    default: "cpu-ik-llama-cpp-development"
- !!merge <<: *turboquant
  name: "turboquant-development"
  capabilities:
    default: "cpu-turboquant-development"
    nvidia: "cuda12-turboquant-development"
    intel: "intel-sycl-f16-turboquant-development"
    amd: "rocm-turboquant-development"
    vulkan: "vulkan-turboquant-development"
    nvidia-l4t: "nvidia-l4t-arm64-turboquant-development"
    nvidia-cuda-13: "cuda13-turboquant-development"
    nvidia-cuda-12: "cuda12-turboquant-development"
    nvidia-l4t-cuda-12: "nvidia-l4t-arm64-turboquant-development"
    nvidia-l4t-cuda-13: "cuda13-nvidia-l4t-arm64-turboquant-development"
- !!merge <<: *ds4
  name: "ds4-development"
  capabilities:
    default: "cpu-ds4-development"
    nvidia: "cuda13-ds4-development"
    nvidia-cuda-13: "cuda13-ds4-development"
    nvidia-l4t-cuda-13: "cuda13-nvidia-l4t-arm64-ds4-development"
    metal: "metal-ds4-development"
    metal-darwin-arm64: "metal-ds4-development"
- !!merge <<: *stablediffusionggml
  name: "stablediffusion-ggml-development"
  capabilities:
    default: "cpu-stablediffusion-ggml-development"
    nvidia: "cuda12-stablediffusion-ggml-development"
    intel: "intel-sycl-f16-stablediffusion-ggml-development"
    # amd: "rocm-stablediffusion-ggml-development"
    vulkan: "vulkan-stablediffusion-ggml-development"
    nvidia-l4t: "nvidia-l4t-arm64-stablediffusion-ggml-development"
    metal: "metal-stablediffusion-ggml-development"
    nvidia-cuda-13: "cuda13-stablediffusion-ggml-development"
    nvidia-cuda-12: "cuda12-stablediffusion-ggml-development"
    nvidia-l4t-cuda-12: "nvidia-l4t-arm64-stablediffusion-ggml-development"
    nvidia-l4t-cuda-13: "cuda13-nvidia-l4t-arm64-stablediffusion-ggml-development"
- !!merge <<: *neutts
  name: "cpu-neutts"
  uri: "quay.io/go-skynet/local-ai-backends:latest-cpu-neutts"
  mirrors:
    - localai/localai-backends:latest-cpu-neutts
- !!merge <<: *neutts
  name: "cuda12-neutts"
  uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-nvidia-cuda-12-neutts"
  mirrors:
    - localai/localai-backends:latest-gpu-nvidia-cuda-12-neutts
- !!merge <<: *neutts
  name: "rocm-neutts"
  uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-rocm-hipblas-neutts"
  mirrors:
    - localai/localai-backends:latest-gpu-rocm-hipblas-neutts
- !!merge <<: *neutts
  name: "cpu-neutts-development"
  uri: "quay.io/go-skynet/local-ai-backends:master-cpu-neutts"
  mirrors:
    - localai/localai-backends:master-cpu-neutts
- !!merge <<: *neutts
  name: "cuda12-neutts-development"
  uri: "quay.io/go-skynet/local-ai-backends:master-gpu-nvidia-cuda-12-neutts"
  mirrors:
    - localai/localai-backends:master-gpu-nvidia-cuda-12-neutts
- !!merge <<: *neutts
  name: "rocm-neutts-development"
  uri: "quay.io/go-skynet/local-ai-backends:master-gpu-rocm-hipblas-neutts"
  mirrors:
    - localai/localai-backends:master-gpu-rocm-hipblas-neutts
- !!merge <<: *mlx
  name: "mlx-development"
  uri: "quay.io/go-skynet/local-ai-backends:master-metal-darwin-arm64-mlx"
  mirrors:
    - localai/localai-backends:master-metal-darwin-arm64-mlx
- !!merge <<: *mlx-vlm
  name: "mlx-vlm-development"
  uri: "quay.io/go-skynet/local-ai-backends:master-metal-darwin-arm64-mlx-vlm"
  mirrors:
    - localai/localai-backends:master-metal-darwin-arm64-mlx-vlm
- !!merge <<: *mlx-audio
  name: "mlx-audio-development"
  uri: "quay.io/go-skynet/local-ai-backends:master-metal-darwin-arm64-mlx-audio"
  mirrors:
    - localai/localai-backends:master-metal-darwin-arm64-mlx-audio
- !!merge <<: *mlx-distributed
  name: "mlx-distributed-development"
  uri: "quay.io/go-skynet/local-ai-backends:master-metal-darwin-arm64-mlx-distributed"
  mirrors:
    - localai/localai-backends:master-metal-darwin-arm64-mlx-distributed
## mlx
- !!merge <<: *mlx
  name: "cpu-mlx"
  uri: "quay.io/go-skynet/local-ai-backends:latest-cpu-mlx"
  mirrors:
    - localai/localai-backends:latest-cpu-mlx
- !!merge <<: *mlx
  name: "cpu-mlx-development"
  uri: "quay.io/go-skynet/local-ai-backends:master-cpu-mlx"
  mirrors:
    - localai/localai-backends:master-cpu-mlx
- !!merge <<: *mlx
  name: "cuda12-mlx"
  uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-nvidia-cuda-12-mlx"
  mirrors:
    - localai/localai-backends:latest-gpu-nvidia-cuda-12-mlx
- !!merge <<: *mlx
  name: "cuda12-mlx-development"
  uri: "quay.io/go-skynet/local-ai-backends:master-gpu-nvidia-cuda-12-mlx"
  mirrors:
    - localai/localai-backends:master-gpu-nvidia-cuda-12-mlx
- !!merge <<: *mlx
  name: "cuda13-mlx"
  uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-nvidia-cuda-13-mlx"
  mirrors:
    - localai/localai-backends:latest-gpu-nvidia-cuda-13-mlx
- !!merge <<: *mlx
  name: "cuda13-mlx-development"
  uri: "quay.io/go-skynet/local-ai-backends:master-gpu-nvidia-cuda-13-mlx"
  mirrors:
    - localai/localai-backends:master-gpu-nvidia-cuda-13-mlx
- !!merge <<: *mlx
  name: "nvidia-l4t-mlx"
  uri: "quay.io/go-skynet/local-ai-backends:latest-nvidia-l4t-mlx"
  mirrors:
    - localai/localai-backends:latest-nvidia-l4t-mlx
- !!merge <<: *mlx
  name: "nvidia-l4t-mlx-development"
  uri: "quay.io/go-skynet/local-ai-backends:master-nvidia-l4t-mlx"
  mirrors:
    - localai/localai-backends:master-nvidia-l4t-mlx
- !!merge <<: *mlx
  name: "cuda13-nvidia-l4t-arm64-mlx"
  uri: "quay.io/go-skynet/local-ai-backends:latest-nvidia-l4t-cuda-13-arm64-mlx"
  mirrors:
    - localai/localai-backends:latest-nvidia-l4t-cuda-13-arm64-mlx
- !!merge <<: *mlx
  name: "cuda13-nvidia-l4t-arm64-mlx-development"
  uri: "quay.io/go-skynet/local-ai-backends:master-nvidia-l4t-cuda-13-arm64-mlx"
  mirrors:
    - localai/localai-backends:master-nvidia-l4t-cuda-13-arm64-mlx
## mlx-vlm
- !!merge <<: *mlx-vlm
  name: "cpu-mlx-vlm"
  uri: "quay.io/go-skynet/local-ai-backends:latest-cpu-mlx-vlm"
  mirrors:
    - localai/localai-backends:latest-cpu-mlx-vlm
- !!merge <<: *mlx-vlm
  name: "cpu-mlx-vlm-development"
  uri: "quay.io/go-skynet/local-ai-backends:master-cpu-mlx-vlm"
  mirrors:
    - localai/localai-backends:master-cpu-mlx-vlm
- !!merge <<: *mlx-vlm
  name: "cuda12-mlx-vlm"
  uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-nvidia-cuda-12-mlx-vlm"
  mirrors:
    - localai/localai-backends:latest-gpu-nvidia-cuda-12-mlx-vlm
- !!merge <<: *mlx-vlm
  name: "cuda12-mlx-vlm-development"
  uri: "quay.io/go-skynet/local-ai-backends:master-gpu-nvidia-cuda-12-mlx-vlm"
  mirrors:
    - localai/localai-backends:master-gpu-nvidia-cuda-12-mlx-vlm
- !!merge <<: *mlx-vlm
  name: "cuda13-mlx-vlm"
  uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-nvidia-cuda-13-mlx-vlm"
  mirrors:
    - localai/localai-backends:latest-gpu-nvidia-cuda-13-mlx-vlm
- !!merge <<: *mlx-vlm
  name: "cuda13-mlx-vlm-development"
  uri: "quay.io/go-skynet/local-ai-backends:master-gpu-nvidia-cuda-13-mlx-vlm"
  mirrors:
    - localai/localai-backends:master-gpu-nvidia-cuda-13-mlx-vlm
- !!merge <<: *mlx-vlm
  name: "nvidia-l4t-mlx-vlm"
  uri: "quay.io/go-skynet/local-ai-backends:latest-nvidia-l4t-mlx-vlm"
  mirrors:
    - localai/localai-backends:latest-nvidia-l4t-mlx-vlm
- !!merge <<: *mlx-vlm
  name: "nvidia-l4t-mlx-vlm-development"
  uri: "quay.io/go-skynet/local-ai-backends:master-nvidia-l4t-mlx-vlm"
  mirrors:
    - localai/localai-backends:master-nvidia-l4t-mlx-vlm
- !!merge <<: *mlx-vlm
  name: "cuda13-nvidia-l4t-arm64-mlx-vlm"
  uri: "quay.io/go-skynet/local-ai-backends:latest-nvidia-l4t-cuda-13-arm64-mlx-vlm"
  mirrors:
    - localai/localai-backends:latest-nvidia-l4t-cuda-13-arm64-mlx-vlm
- !!merge <<: *mlx-vlm
  name: "cuda13-nvidia-l4t-arm64-mlx-vlm-development"
  uri: "quay.io/go-skynet/local-ai-backends:master-nvidia-l4t-cuda-13-arm64-mlx-vlm"
  mirrors:
    - localai/localai-backends:master-nvidia-l4t-cuda-13-arm64-mlx-vlm
## mlx-audio
- !!merge <<: *mlx-audio
  name: "cpu-mlx-audio"
  uri: "quay.io/go-skynet/local-ai-backends:latest-cpu-mlx-audio"
  mirrors:
    - localai/localai-backends:latest-cpu-mlx-audio
- !!merge <<: *mlx-audio
  name: "cpu-mlx-audio-development"
  uri: "quay.io/go-skynet/local-ai-backends:master-cpu-mlx-audio"
  mirrors:
    - localai/localai-backends:master-cpu-mlx-audio
- !!merge <<: *mlx-audio
  name: "cuda12-mlx-audio"
  uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-nvidia-cuda-12-mlx-audio"
  mirrors:
    - localai/localai-backends:latest-gpu-nvidia-cuda-12-mlx-audio
- !!merge <<: *mlx-audio
  name: "cuda12-mlx-audio-development"
  uri: "quay.io/go-skynet/local-ai-backends:master-gpu-nvidia-cuda-12-mlx-audio"
  mirrors:
    - localai/localai-backends:master-gpu-nvidia-cuda-12-mlx-audio
- !!merge <<: *mlx-audio
  name: "cuda13-mlx-audio"
  uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-nvidia-cuda-13-mlx-audio"
  mirrors:
    - localai/localai-backends:latest-gpu-nvidia-cuda-13-mlx-audio
- !!merge <<: *mlx-audio
  name: "cuda13-mlx-audio-development"
  uri: "quay.io/go-skynet/local-ai-backends:master-gpu-nvidia-cuda-13-mlx-audio"
  mirrors:
    - localai/localai-backends:master-gpu-nvidia-cuda-13-mlx-audio
- !!merge <<: *mlx-audio
  name: "nvidia-l4t-mlx-audio"
  uri: "quay.io/go-skynet/local-ai-backends:latest-nvidia-l4t-mlx-audio"
  mirrors:
    - localai/localai-backends:latest-nvidia-l4t-mlx-audio
- !!merge <<: *mlx-audio
  name: "nvidia-l4t-mlx-audio-development"
  uri: "quay.io/go-skynet/local-ai-backends:master-nvidia-l4t-mlx-audio"
  mirrors:
    - localai/localai-backends:master-nvidia-l4t-mlx-audio
- !!merge <<: *mlx-audio
  name: "cuda13-nvidia-l4t-arm64-mlx-audio"
  uri: "quay.io/go-skynet/local-ai-backends:latest-nvidia-l4t-cuda-13-arm64-mlx-audio"
  mirrors:
    - localai/localai-backends:latest-nvidia-l4t-cuda-13-arm64-mlx-audio
- !!merge <<: *mlx-audio
  name: "cuda13-nvidia-l4t-arm64-mlx-audio-development"
  uri: "quay.io/go-skynet/local-ai-backends:master-nvidia-l4t-cuda-13-arm64-mlx-audio"
  mirrors:
    - localai/localai-backends:master-nvidia-l4t-cuda-13-arm64-mlx-audio
## mlx-distributed
- !!merge <<: *mlx-distributed
  name: "cpu-mlx-distributed"
  uri: "quay.io/go-skynet/local-ai-backends:latest-cpu-mlx-distributed"
  mirrors:
    - localai/localai-backends:latest-cpu-mlx-distributed
- !!merge <<: *mlx-distributed
  name: "cpu-mlx-distributed-development"
  uri: "quay.io/go-skynet/local-ai-backends:master-cpu-mlx-distributed"
  mirrors:
    - localai/localai-backends:master-cpu-mlx-distributed
- !!merge <<: *mlx-distributed
  name: "cuda12-mlx-distributed"
  uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-nvidia-cuda-12-mlx-distributed"
  mirrors:
    - localai/localai-backends:latest-gpu-nvidia-cuda-12-mlx-distributed
- !!merge <<: *mlx-distributed
  name: "cuda12-mlx-distributed-development"
  uri: "quay.io/go-skynet/local-ai-backends:master-gpu-nvidia-cuda-12-mlx-distributed"
  mirrors:
    - localai/localai-backends:master-gpu-nvidia-cuda-12-mlx-distributed
- !!merge <<: *mlx-distributed
  name: "cuda13-mlx-distributed"
  uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-nvidia-cuda-13-mlx-distributed"
  mirrors:
    - localai/localai-backends:latest-gpu-nvidia-cuda-13-mlx-distributed
- !!merge <<: *mlx-distributed
  name: "cuda13-mlx-distributed-development"
  uri: "quay.io/go-skynet/local-ai-backends:master-gpu-nvidia-cuda-13-mlx-distributed"
  mirrors:
    - localai/localai-backends:master-gpu-nvidia-cuda-13-mlx-distributed
- !!merge <<: *mlx-distributed
  name: "nvidia-l4t-mlx-distributed"
  uri: "quay.io/go-skynet/local-ai-backends:latest-nvidia-l4t-mlx-distributed"
  mirrors:
    - localai/localai-backends:latest-nvidia-l4t-mlx-distributed
- !!merge <<: *mlx-distributed
  name: "nvidia-l4t-mlx-distributed-development"
  uri: "quay.io/go-skynet/local-ai-backends:master-nvidia-l4t-mlx-distributed"
  mirrors:
    - localai/localai-backends:master-nvidia-l4t-mlx-distributed
- !!merge <<: *mlx-distributed
  name: "cuda13-nvidia-l4t-arm64-mlx-distributed"
  uri: "quay.io/go-skynet/local-ai-backends:latest-nvidia-l4t-cuda-13-arm64-mlx-distributed"
  mirrors:
    - localai/localai-backends:latest-nvidia-l4t-cuda-13-arm64-mlx-distributed
- !!merge <<: *mlx-distributed
  name: "cuda13-nvidia-l4t-arm64-mlx-distributed-development"
  uri: "quay.io/go-skynet/local-ai-backends:master-nvidia-l4t-cuda-13-arm64-mlx-distributed"
  mirrors:
    - localai/localai-backends:master-nvidia-l4t-cuda-13-arm64-mlx-distributed
- !!merge <<: *kitten-tts
  name: "kitten-tts-development"
  uri: "quay.io/go-skynet/local-ai-backends:master-kitten-tts"
  mirrors:
    - localai/localai-backends:master-kitten-tts
- !!merge <<: *kitten-tts
  name: "metal-kitten-tts"
  uri: "quay.io/go-skynet/local-ai-backends:latest-metal-darwin-arm64-kitten-tts"
  mirrors:
    - localai/localai-backends:latest-metal-darwin-arm64-kitten-tts
- !!merge <<: *kitten-tts
  name: "metal-kitten-tts-development"
  uri: "quay.io/go-skynet/local-ai-backends:master-metal-darwin-arm64-kitten-tts"
  mirrors:
    - localai/localai-backends:master-metal-darwin-arm64-kitten-tts
- !!merge <<: *local-store
  name: "local-store-development"
  uri: "quay.io/go-skynet/local-ai-backends:master-cpu-local-store"
  mirrors:
    - localai/localai-backends:master-cpu-local-store
- !!merge <<: *local-store
  name: "metal-local-store"
  uri: "quay.io/go-skynet/local-ai-backends:latest-metal-darwin-arm64-local-store"
  mirrors:
    - localai/localai-backends:latest-metal-darwin-arm64-local-store
- !!merge <<: *local-store
  name: "metal-local-store-development"
  uri: "quay.io/go-skynet/local-ai-backends:master-metal-darwin-arm64-local-store"
  mirrors:
    - localai/localai-backends:master-metal-darwin-arm64-local-store
- !!merge <<: *opus
  name: "opus-development"
  uri: "quay.io/go-skynet/local-ai-backends:master-cpu-opus"
  mirrors:
    - localai/localai-backends:master-cpu-opus
- !!merge <<: *opus
  name: "metal-opus"
  uri: "quay.io/go-skynet/local-ai-backends:latest-metal-darwin-arm64-opus"
  mirrors:
    - localai/localai-backends:latest-metal-darwin-arm64-opus
- !!merge <<: *opus
  name: "metal-opus-development"
  uri: "quay.io/go-skynet/local-ai-backends:master-metal-darwin-arm64-opus"
  mirrors:
    - localai/localai-backends:master-metal-darwin-arm64-opus
- !!merge <<: *silero-vad
  name: "silero-vad-development"
  uri: "quay.io/go-skynet/local-ai-backends:master-cpu-silero-vad"
  mirrors:
    - localai/localai-backends:master-cpu-silero-vad
- !!merge <<: *silero-vad
  name: "metal-silero-vad"
  uri: "quay.io/go-skynet/local-ai-backends:latest-metal-darwin-arm64-silero-vad"
  mirrors:
    - localai/localai-backends:latest-metal-darwin-arm64-silero-vad
- !!merge <<: *silero-vad
  name: "metal-silero-vad-development"
  uri: "quay.io/go-skynet/local-ai-backends:master-metal-darwin-arm64-silero-vad"
  mirrors:
    - localai/localai-backends:master-metal-darwin-arm64-silero-vad
- !!merge <<: *piper
  name: "piper-development"
  uri: "quay.io/go-skynet/local-ai-backends:master-piper"
  mirrors:
    - localai/localai-backends:master-piper
- !!merge <<: *piper
  name: "metal-piper"
  uri: "quay.io/go-skynet/local-ai-backends:latest-metal-darwin-arm64-piper"
  mirrors:
    - localai/localai-backends:latest-metal-darwin-arm64-piper
- !!merge <<: *piper
  name: "metal-piper-development"
  uri: "quay.io/go-skynet/local-ai-backends:master-metal-darwin-arm64-piper"
  mirrors:
    - localai/localai-backends:master-metal-darwin-arm64-piper
## llama-cpp
- !!merge <<: *llamacpp
  name: "nvidia-l4t-arm64-llama-cpp"
  uri: "quay.io/go-skynet/local-ai-backends:latest-nvidia-l4t-arm64-llama-cpp"
  mirrors:
    - localai/localai-backends:latest-nvidia-l4t-arm64-llama-cpp
- !!merge <<: *llamacpp
  name: "nvidia-l4t-arm64-llama-cpp-development"
  uri: "quay.io/go-skynet/local-ai-backends:master-nvidia-l4t-arm64-llama-cpp"
  mirrors:
    - localai/localai-backends:master-nvidia-l4t-arm64-llama-cpp
- !!merge <<: *llamacpp
  name: "cuda13-nvidia-l4t-arm64-llama-cpp"
  uri: "quay.io/go-skynet/local-ai-backends:latest-nvidia-l4t-cuda-13-arm64-llama-cpp"
  mirrors:
    - localai/localai-backends:latest-nvidia-l4t-cuda-13-arm64-llama-cpp
- !!merge <<: *llamacpp
  name: "cuda13-nvidia-l4t-arm64-llama-cpp-development"
  uri: "quay.io/go-skynet/local-ai-backends:master-nvidia-l4t-cuda-13-arm64-llama-cpp"
  mirrors:
    - localai/localai-backends:master-nvidia-l4t-cuda-13-arm64-llama-cpp
- !!merge <<: *llamacpp
  name: "cpu-llama-cpp"
  uri: "quay.io/go-skynet/local-ai-backends:latest-cpu-llama-cpp"
  mirrors:
    - localai/localai-backends:latest-cpu-llama-cpp
- !!merge <<: *llamacpp
  name: "cpu-llama-cpp-development"
  uri: "quay.io/go-skynet/local-ai-backends:master-cpu-llama-cpp"
  mirrors:
    - localai/localai-backends:master-cpu-llama-cpp
- !!merge <<: *llamacpp
  name: "cuda12-llama-cpp"
  uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-nvidia-cuda-12-llama-cpp"
  mirrors:
    - localai/localai-backends:latest-gpu-nvidia-cuda-12-llama-cpp
- !!merge <<: *llamacpp
  name: "rocm-llama-cpp"
  uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-rocm-hipblas-llama-cpp"
  mirrors:
    - localai/localai-backends:latest-gpu-rocm-hipblas-llama-cpp
- !!merge <<: *llamacpp
  name: "intel-sycl-f32-llama-cpp"
  uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-intel-sycl-f32-llama-cpp"
  mirrors:
    - localai/localai-backends:latest-gpu-intel-sycl-f32-llama-cpp
- !!merge <<: *llamacpp
  name: "intel-sycl-f16-llama-cpp"
  uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-intel-sycl-f16-llama-cpp"
  mirrors:
    - localai/localai-backends:latest-gpu-intel-sycl-f16-llama-cpp
- !!merge <<: *llamacpp
  name: "vulkan-llama-cpp"
  uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-vulkan-llama-cpp"
  mirrors:
    - localai/localai-backends:latest-gpu-vulkan-llama-cpp
- !!merge <<: *llamacpp
  name: "vulkan-llama-cpp-development"
  uri: "quay.io/go-skynet/local-ai-backends:master-gpu-vulkan-llama-cpp"
  mirrors:
    - localai/localai-backends:master-gpu-vulkan-llama-cpp
- !!merge <<: *llamacpp
  name: "metal-llama-cpp"
  uri: "quay.io/go-skynet/local-ai-backends:latest-metal-darwin-arm64-llama-cpp"
  mirrors:
    - localai/localai-backends:latest-metal-darwin-arm64-llama-cpp
- !!merge <<: *llamacpp
  name: "metal-llama-cpp-development"
  uri: "quay.io/go-skynet/local-ai-backends:master-metal-darwin-arm64-llama-cpp"
  mirrors:
    - localai/localai-backends:master-metal-darwin-arm64-llama-cpp
- !!merge <<: *llamacpp
  name: "cuda12-llama-cpp-development"
  uri: "quay.io/go-skynet/local-ai-backends:master-gpu-nvidia-cuda-12-llama-cpp"
  mirrors:
    - localai/localai-backends:master-gpu-nvidia-cuda-12-llama-cpp
- !!merge <<: *llamacpp
  name: "rocm-llama-cpp-development"
  uri: "quay.io/go-skynet/local-ai-backends:master-gpu-rocm-hipblas-llama-cpp"
  mirrors:
    - localai/localai-backends:master-gpu-rocm-hipblas-llama-cpp
- !!merge <<: *llamacpp
  name: "intel-sycl-f32-llama-cpp-development"
  uri: "quay.io/go-skynet/local-ai-backends:master-gpu-intel-sycl-f32-llama-cpp"
  mirrors:
    - localai/localai-backends:master-gpu-intel-sycl-f32-llama-cpp
- !!merge <<: *llamacpp
  name: "intel-sycl-f16-llama-cpp-development"
  uri: "quay.io/go-skynet/local-ai-backends:master-gpu-intel-sycl-f16-llama-cpp"
  mirrors:
    - localai/localai-backends:master-gpu-intel-sycl-f16-llama-cpp
- !!merge <<: *llamacpp
  name: "cuda13-llama-cpp"
  uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-nvidia-cuda-13-llama-cpp"
  mirrors:
    - localai/localai-backends:latest-gpu-nvidia-cuda-13-llama-cpp
- !!merge <<: *llamacpp
  name: "cuda13-llama-cpp-development"
  uri: "quay.io/go-skynet/local-ai-backends:master-gpu-nvidia-cuda-13-llama-cpp"
  mirrors:
    - localai/localai-backends:master-gpu-nvidia-cuda-13-llama-cpp
## ik-llama-cpp
- !!merge <<: *ikllamacpp
  name: "cpu-ik-llama-cpp"
  uri: "quay.io/go-skynet/local-ai-backends:latest-cpu-ik-llama-cpp"
  mirrors:
    - localai/localai-backends:latest-cpu-ik-llama-cpp
- !!merge <<: *ikllamacpp
  name: "cpu-ik-llama-cpp-development"
  uri: "quay.io/go-skynet/local-ai-backends:master-cpu-ik-llama-cpp"
  mirrors:
    - localai/localai-backends:master-cpu-ik-llama-cpp
## turboquant
- !!merge <<: *turboquant
  name: "cpu-turboquant"
  uri: "quay.io/go-skynet/local-ai-backends:latest-cpu-turboquant"
  mirrors:
    - localai/localai-backends:latest-cpu-turboquant
- !!merge <<: *turboquant
  name: "cpu-turboquant-development"
  uri: "quay.io/go-skynet/local-ai-backends:master-cpu-turboquant"
  mirrors:
    - localai/localai-backends:master-cpu-turboquant
- !!merge <<: *turboquant
  name: "cuda12-turboquant"
  uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-nvidia-cuda-12-turboquant"
  mirrors:
    - localai/localai-backends:latest-gpu-nvidia-cuda-12-turboquant
- !!merge <<: *turboquant
  name: "cuda12-turboquant-development"
  uri: "quay.io/go-skynet/local-ai-backends:master-gpu-nvidia-cuda-12-turboquant"
  mirrors:
    - localai/localai-backends:master-gpu-nvidia-cuda-12-turboquant
- !!merge <<: *turboquant
  name: "cuda13-turboquant"
  uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-nvidia-cuda-13-turboquant"
  mirrors:
    - localai/localai-backends:latest-gpu-nvidia-cuda-13-turboquant
- !!merge <<: *turboquant
  name: "cuda13-turboquant-development"
  uri: "quay.io/go-skynet/local-ai-backends:master-gpu-nvidia-cuda-13-turboquant"
  mirrors:
    - localai/localai-backends:master-gpu-nvidia-cuda-13-turboquant
- !!merge <<: *turboquant
  name: "rocm-turboquant"
  uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-rocm-hipblas-turboquant"
  mirrors:
    - localai/localai-backends:latest-gpu-rocm-hipblas-turboquant
- !!merge <<: *turboquant
  name: "rocm-turboquant-development"
  uri: "quay.io/go-skynet/local-ai-backends:master-gpu-rocm-hipblas-turboquant"
  mirrors:
    - localai/localai-backends:master-gpu-rocm-hipblas-turboquant
- !!merge <<: *turboquant
  name: "intel-sycl-f32-turboquant"
  uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-intel-sycl-f32-turboquant"
  mirrors:
    - localai/localai-backends:latest-gpu-intel-sycl-f32-turboquant
- !!merge <<: *turboquant
  name: "intel-sycl-f32-turboquant-development"
  uri: "quay.io/go-skynet/local-ai-backends:master-gpu-intel-sycl-f32-turboquant"
  mirrors:
    - localai/localai-backends:master-gpu-intel-sycl-f32-turboquant
- !!merge <<: *turboquant
  name: "intel-sycl-f16-turboquant"
  uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-intel-sycl-f16-turboquant"
  mirrors:
    - localai/localai-backends:latest-gpu-intel-sycl-f16-turboquant
- !!merge <<: *turboquant
  name: "intel-sycl-f16-turboquant-development"
  uri: "quay.io/go-skynet/local-ai-backends:master-gpu-intel-sycl-f16-turboquant"
  mirrors:
    - localai/localai-backends:master-gpu-intel-sycl-f16-turboquant
- !!merge <<: *turboquant
  name: "vulkan-turboquant"
  uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-vulkan-turboquant"
  mirrors:
    - localai/localai-backends:latest-gpu-vulkan-turboquant
- !!merge <<: *turboquant
  name: "vulkan-turboquant-development"
  uri: "quay.io/go-skynet/local-ai-backends:master-gpu-vulkan-turboquant"
  mirrors:
    - localai/localai-backends:master-gpu-vulkan-turboquant
- !!merge <<: *turboquant
  name: "nvidia-l4t-arm64-turboquant"
  uri: "quay.io/go-skynet/local-ai-backends:latest-nvidia-l4t-arm64-turboquant"
  mirrors:
    - localai/localai-backends:latest-nvidia-l4t-arm64-turboquant
- !!merge <<: *turboquant
  name: "nvidia-l4t-arm64-turboquant-development"
  uri: "quay.io/go-skynet/local-ai-backends:master-nvidia-l4t-arm64-turboquant"
  mirrors:
    - localai/localai-backends:master-nvidia-l4t-arm64-turboquant
- !!merge <<: *turboquant
  name: "cuda13-nvidia-l4t-arm64-turboquant"
  uri: "quay.io/go-skynet/local-ai-backends:latest-nvidia-l4t-cuda-13-arm64-turboquant"
  mirrors:
    - localai/localai-backends:latest-nvidia-l4t-cuda-13-arm64-turboquant
- !!merge <<: *turboquant
  name: "cuda13-nvidia-l4t-arm64-turboquant-development"
  uri: "quay.io/go-skynet/local-ai-backends:master-nvidia-l4t-cuda-13-arm64-turboquant"
  mirrors:
    - localai/localai-backends:master-nvidia-l4t-cuda-13-arm64-turboquant
## ds4
- !!merge <<: *ds4
  name: "cpu-ds4"
  uri: "quay.io/go-skynet/local-ai-backends:latest-cpu-ds4"
  mirrors:
    - localai/localai-backends:latest-cpu-ds4
- !!merge <<: *ds4
  name: "cpu-ds4-development"
  uri: "quay.io/go-skynet/local-ai-backends:master-cpu-ds4"
  mirrors:
    - localai/localai-backends:master-cpu-ds4
- !!merge <<: *ds4
  name: "cuda13-ds4"
  uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-nvidia-cuda-13-ds4"
  mirrors:
    - localai/localai-backends:latest-gpu-nvidia-cuda-13-ds4
- !!merge <<: *ds4
  name: "cuda13-ds4-development"
  uri: "quay.io/go-skynet/local-ai-backends:master-gpu-nvidia-cuda-13-ds4"
  mirrors:
    - localai/localai-backends:master-gpu-nvidia-cuda-13-ds4
- !!merge <<: *ds4
  name: "cuda13-nvidia-l4t-arm64-ds4"
  uri: "quay.io/go-skynet/local-ai-backends:latest-nvidia-l4t-cuda-13-arm64-ds4"
  mirrors:
    - localai/localai-backends:latest-nvidia-l4t-cuda-13-arm64-ds4
- !!merge <<: *ds4
  name: "cuda13-nvidia-l4t-arm64-ds4-development"
  uri: "quay.io/go-skynet/local-ai-backends:master-nvidia-l4t-cuda-13-arm64-ds4"
  mirrors:
    - localai/localai-backends:master-nvidia-l4t-cuda-13-arm64-ds4
- !!merge <<: *ds4
  name: "metal-ds4"
  uri: "quay.io/go-skynet/local-ai-backends:latest-metal-darwin-arm64-ds4"
  mirrors:
    - localai/localai-backends:latest-metal-darwin-arm64-ds4
- !!merge <<: *ds4
  name: "metal-ds4-development"
  uri: "quay.io/go-skynet/local-ai-backends:master-metal-darwin-arm64-ds4"
  mirrors:
    - localai/localai-backends:master-metal-darwin-arm64-ds4
## whisper
- !!merge <<: *whispercpp
  name: "whisper-development"
  capabilities:
    default: "cpu-whisper-development"
    nvidia: "cuda12-whisper-development"
    intel: "intel-sycl-f16-whisper-development"
    metal: "metal-whisper-development"
    amd: "rocm-whisper-development"
    vulkan: "vulkan-whisper-development"
    nvidia-l4t: "nvidia-l4t-arm64-whisper-development"
    nvidia-cuda-13: "cuda13-whisper-development"
    nvidia-cuda-12: "cuda12-whisper-development"
    nvidia-l4t-cuda-12: "nvidia-l4t-arm64-whisper-development"
    nvidia-l4t-cuda-13: "cuda13-nvidia-l4t-arm64-whisper-development"
- !!merge <<: *whispercpp
  name: "nvidia-l4t-arm64-whisper"
  uri: "quay.io/go-skynet/local-ai-backends:latest-nvidia-l4t-arm64-whisper"
  mirrors:
    - localai/localai-backends:latest-nvidia-l4t-arm64-whisper
- !!merge <<: *whispercpp
  name: "nvidia-l4t-arm64-whisper-development"
  uri: "quay.io/go-skynet/local-ai-backends:master-nvidia-l4t-arm64-whisper"
  mirrors:
    - localai/localai-backends:master-nvidia-l4t-arm64-whisper
- !!merge <<: *whispercpp
  name: "cuda13-nvidia-l4t-arm64-whisper"
  uri: "quay.io/go-skynet/local-ai-backends:latest-nvidia-l4t-cuda-13-arm64-whisper"
  mirrors:
    - localai/localai-backends:latest-nvidia-l4t-cuda-13-arm64-whisper
- !!merge <<: *whispercpp
  name: "cuda13-nvidia-l4t-arm64-whisper-development"
  uri: "quay.io/go-skynet/local-ai-backends:master-nvidia-l4t-cuda-13-arm64-whisper"
  mirrors:
    - localai/localai-backends:master-nvidia-l4t-cuda-13-arm64-whisper
- !!merge <<: *whispercpp
  name: "cpu-whisper"
  uri: "quay.io/go-skynet/local-ai-backends:latest-cpu-whisper"
  mirrors:
    - localai/localai-backends:latest-cpu-whisper
- !!merge <<: *whispercpp
  name: "metal-whisper"
  uri: "quay.io/go-skynet/local-ai-backends:latest-metal-darwin-arm64-whisper"
  mirrors:
    - localai/localai-backends:latest-metal-darwin-arm64-whisper
- !!merge <<: *whispercpp
  name: "metal-whisper-development"
  uri: "quay.io/go-skynet/local-ai-backends:master-metal-darwin-arm64-whisper"
  mirrors:
    - localai/localai-backends:master-metal-darwin-arm64-whisper
- !!merge <<: *whispercpp
  name: "cpu-whisper-development"
  uri: "quay.io/go-skynet/local-ai-backends:master-cpu-whisper"
  mirrors:
    - localai/localai-backends:master-cpu-whisper
- !!merge <<: *whispercpp
  name: "cuda12-whisper"
  uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-nvidia-cuda-12-whisper"
  mirrors:
    - localai/localai-backends:latest-gpu-nvidia-cuda-12-whisper
- !!merge <<: *whispercpp
  name: "rocm-whisper"
  uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-rocm-hipblas-whisper"
  mirrors:
    - localai/localai-backends:latest-gpu-rocm-hipblas-whisper
- !!merge <<: *whispercpp
  name: "intel-sycl-f32-whisper"
  uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-intel-sycl-f32-whisper"
  mirrors:
    - localai/localai-backends:latest-gpu-intel-sycl-f32-whisper
- !!merge <<: *whispercpp
  name: "intel-sycl-f16-whisper"
  uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-intel-sycl-f16-whisper"
  mirrors:
    - localai/localai-backends:latest-gpu-intel-sycl-f16-whisper
- !!merge <<: *whispercpp
  name: "vulkan-whisper"
  uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-vulkan-whisper"
  mirrors:
    - localai/localai-backends:latest-gpu-vulkan-whisper
- !!merge <<: *whispercpp
  name: "vulkan-whisper-development"
  uri: "quay.io/go-skynet/local-ai-backends:master-gpu-vulkan-whisper"
  mirrors:
    - localai/localai-backends:master-gpu-vulkan-whisper
- !!merge <<: *whispercpp
  name: "cuda12-whisper-development"
  uri: "quay.io/go-skynet/local-ai-backends:master-gpu-nvidia-cuda-12-whisper"
  mirrors:
    - localai/localai-backends:master-gpu-nvidia-cuda-12-whisper
- !!merge <<: *whispercpp
  name: "rocm-whisper-development"
  uri: "quay.io/go-skynet/local-ai-backends:master-gpu-rocm-hipblas-whisper"
  mirrors:
    - localai/localai-backends:master-gpu-rocm-hipblas-whisper
- !!merge <<: *whispercpp
  name: "intel-sycl-f32-whisper-development"
  uri: "quay.io/go-skynet/local-ai-backends:master-gpu-intel-sycl-f32-whisper"
  mirrors:
    - localai/localai-backends:master-gpu-intel-sycl-f32-whisper
- !!merge <<: *whispercpp
  name: "intel-sycl-f16-whisper-development"
  uri: "quay.io/go-skynet/local-ai-backends:master-gpu-intel-sycl-f16-whisper"
  mirrors:
    - localai/localai-backends:master-gpu-intel-sycl-f16-whisper
- !!merge <<: *whispercpp
  name: "cuda13-whisper"
  uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-nvidia-cuda-13-whisper"
  mirrors:
    - localai/localai-backends:latest-gpu-nvidia-cuda-13-whisper
- !!merge <<: *whispercpp
  name: "cuda13-whisper-development"
  uri: "quay.io/go-skynet/local-ai-backends:master-gpu-nvidia-cuda-13-whisper"
  mirrors:
    - localai/localai-backends:master-gpu-nvidia-cuda-13-whisper
## stablediffusion-ggml
- !!merge <<: *stablediffusionggml
  name: "cpu-stablediffusion-ggml"
  uri: "quay.io/go-skynet/local-ai-backends:latest-cpu-stablediffusion-ggml"
  mirrors:
    - localai/localai-backends:latest-cpu-stablediffusion-ggml
- !!merge <<: *stablediffusionggml
  name: "cpu-stablediffusion-ggml-development"
  uri: "quay.io/go-skynet/local-ai-backends:master-cpu-stablediffusion-ggml"
  mirrors:
    - localai/localai-backends:master-cpu-stablediffusion-ggml
- !!merge <<: *stablediffusionggml
  name: "metal-stablediffusion-ggml"
  uri: "quay.io/go-skynet/local-ai-backends:latest-metal-darwin-arm64-stablediffusion-ggml"
  mirrors:
    - localai/localai-backends:latest-metal-darwin-arm64-stablediffusion-ggml
- !!merge <<: *stablediffusionggml
  name: "metal-stablediffusion-ggml-development"
  uri: "quay.io/go-skynet/local-ai-backends:master-metal-darwin-arm64-stablediffusion-ggml"
  mirrors:
    - localai/localai-backends:master-metal-darwin-arm64-stablediffusion-ggml
- !!merge <<: *stablediffusionggml
  name: "vulkan-stablediffusion-ggml"
  uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-vulkan-stablediffusion-ggml"
  mirrors:
    - localai/localai-backends:latest-gpu-vulkan-stablediffusion-ggml
- !!merge <<: *stablediffusionggml
  name: "vulkan-stablediffusion-ggml-development"
  uri: "quay.io/go-skynet/local-ai-backends:master-gpu-vulkan-stablediffusion-ggml"
  mirrors:
    - localai/localai-backends:master-gpu-vulkan-stablediffusion-ggml
- !!merge <<: *stablediffusionggml
  name: "cuda12-stablediffusion-ggml"
  uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-nvidia-cuda-12-stablediffusion-ggml"
  mirrors:
    - localai/localai-backends:latest-gpu-nvidia-cuda-12-stablediffusion-ggml
- !!merge <<: *stablediffusionggml
  name: "intel-sycl-f32-stablediffusion-ggml"
  uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-intel-sycl-f32-stablediffusion-ggml"
  mirrors:
    - localai/localai-backends:latest-gpu-intel-sycl-f32-stablediffusion-ggml
- !!merge <<: *stablediffusionggml
  name: "intel-sycl-f16-stablediffusion-ggml"
  uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-intel-sycl-f16-stablediffusion-ggml"
  mirrors:
    - localai/localai-backends:latest-gpu-intel-sycl-f16-stablediffusion-ggml
- !!merge <<: *stablediffusionggml
  name: "cuda12-stablediffusion-ggml-development"
  uri: "quay.io/go-skynet/local-ai-backends:master-gpu-nvidia-cuda-12-stablediffusion-ggml"
  mirrors:
    - localai/localai-backends:master-gpu-nvidia-cuda-12-stablediffusion-ggml
- !!merge <<: *stablediffusionggml
  name: "intel-sycl-f32-stablediffusion-ggml-development"
  uri: "quay.io/go-skynet/local-ai-backends:master-gpu-intel-sycl-f32-stablediffusion-ggml"
  mirrors:
    - localai/localai-backends:master-gpu-intel-sycl-f32-stablediffusion-ggml
- !!merge <<: *stablediffusionggml
  name: "intel-sycl-f16-stablediffusion-ggml-development"
  uri: "quay.io/go-skynet/local-ai-backends:master-gpu-intel-sycl-f16-stablediffusion-ggml"
  mirrors:
    - localai/localai-backends:master-gpu-intel-sycl-f16-stablediffusion-ggml
- !!merge <<: *stablediffusionggml
  name: "nvidia-l4t-arm64-stablediffusion-ggml-development"
  uri: "quay.io/go-skynet/local-ai-backends:master-nvidia-l4t-arm64-stablediffusion-ggml"
  mirrors:
    - localai/localai-backends:master-nvidia-l4t-arm64-stablediffusion-ggml
- !!merge <<: *stablediffusionggml
  name: "nvidia-l4t-arm64-stablediffusion-ggml"
  uri: "quay.io/go-skynet/local-ai-backends:latest-nvidia-l4t-arm64-stablediffusion-ggml"
  mirrors:
    - localai/localai-backends:latest-nvidia-l4t-arm64-stablediffusion-ggml
- !!merge <<: *stablediffusionggml
  name: "cuda13-nvidia-l4t-arm64-stablediffusion-ggml"
  uri: "quay.io/go-skynet/local-ai-backends:latest-nvidia-l4t-cuda-13-arm64-stablediffusion-ggml"
  mirrors:
    - localai/localai-backends:latest-nvidia-l4t-cuda-13-arm64-stablediffusion-ggml
- !!merge <<: *stablediffusionggml
  name: "cuda13-nvidia-l4t-arm64-stablediffusion-ggml-development"
  uri: "quay.io/go-skynet/local-ai-backends:master-nvidia-l4t-cuda-13-arm64-stablediffusion-ggml"
  mirrors:
    - localai/localai-backends:master-nvidia-l4t-cuda-13-arm64-stablediffusion-ggml
- !!merge <<: *stablediffusionggml
  name: "cuda13-stablediffusion-ggml"
  uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-nvidia-cuda-13-stablediffusion-ggml"
  mirrors:
    - localai/localai-backends:latest-gpu-nvidia-cuda-13-stablediffusion-ggml
- !!merge <<: *stablediffusionggml
  name: "cuda13-stablediffusion-ggml-development"
  uri: "quay.io/go-skynet/local-ai-backends:master-gpu-nvidia-cuda-13-stablediffusion-ggml"
  mirrors:
    - localai/localai-backends:master-gpu-nvidia-cuda-13-stablediffusion-ggml
## vllm
- !!merge <<: *vllm
  name: "vllm-development"
  capabilities:
    nvidia: "cuda12-vllm-development"
    amd: "rocm-vllm-development"
    intel: "intel-vllm-development"
    nvidia-cuda-12: "cuda12-vllm-development"
    nvidia-cuda-13: "cuda13-vllm-development"
    nvidia-l4t-cuda-13: "cuda13-nvidia-l4t-arm64-vllm-development"
    cpu: "cpu-vllm-development"
- !!merge <<: *vllm
  name: "cuda12-vllm"
  uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-nvidia-cuda-12-vllm"
  mirrors:
    - localai/localai-backends:latest-gpu-nvidia-cuda-12-vllm
- !!merge <<: *vllm
  name: "cuda13-vllm"
  uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-nvidia-cuda-13-vllm"
  mirrors:
    - localai/localai-backends:latest-gpu-nvidia-cuda-13-vllm
- !!merge <<: *vllm
  name: "cuda13-nvidia-l4t-arm64-vllm"
  uri: "quay.io/go-skynet/local-ai-backends:latest-nvidia-l4t-cuda-13-arm64-vllm"
  mirrors:
    - localai/localai-backends:latest-nvidia-l4t-cuda-13-arm64-vllm
- !!merge <<: *vllm
  name: "rocm-vllm"
  uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-rocm-hipblas-vllm"
  mirrors:
    - localai/localai-backends:latest-gpu-rocm-hipblas-vllm
- !!merge <<: *vllm
  name: "intel-vllm"
  uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-intel-vllm"
  mirrors:
    - localai/localai-backends:latest-gpu-intel-vllm
- !!merge <<: *vllm
  name: "cpu-vllm"
  uri: "quay.io/go-skynet/local-ai-backends:latest-cpu-vllm"
  mirrors:
    - localai/localai-backends:latest-cpu-vllm
- !!merge <<: *vllm
  name: "cuda12-vllm-development"
  uri: "quay.io/go-skynet/local-ai-backends:master-gpu-nvidia-cuda-12-vllm"
  mirrors:
    - localai/localai-backends:master-gpu-nvidia-cuda-12-vllm
- !!merge <<: *vllm
  name: "cuda13-vllm-development"
  uri: "quay.io/go-skynet/local-ai-backends:master-gpu-nvidia-cuda-13-vllm"
  mirrors:
    - localai/localai-backends:master-gpu-nvidia-cuda-13-vllm
- !!merge <<: *vllm
  name: "cuda13-nvidia-l4t-arm64-vllm-development"
  uri: "quay.io/go-skynet/local-ai-backends:master-nvidia-l4t-cuda-13-arm64-vllm"
  mirrors:
    - localai/localai-backends:master-nvidia-l4t-cuda-13-arm64-vllm
- !!merge <<: *vllm
  name: "rocm-vllm-development"
  uri: "quay.io/go-skynet/local-ai-backends:master-gpu-rocm-hipblas-vllm"
  mirrors:
    - localai/localai-backends:master-gpu-rocm-hipblas-vllm
- !!merge <<: *vllm
  name: "intel-vllm-development"
  uri: "quay.io/go-skynet/local-ai-backends:master-gpu-intel-vllm"
  mirrors:
    - localai/localai-backends:master-gpu-intel-vllm
- !!merge <<: *vllm
  name: "cpu-vllm-development"
  uri: "quay.io/go-skynet/local-ai-backends:master-cpu-vllm"
  mirrors:
    - localai/localai-backends:master-cpu-vllm
## sglang
- !!merge <<: *sglang
  name: "sglang-development"
  capabilities:
    nvidia: "cuda12-sglang-development"
    amd: "rocm-sglang-development"
    intel: "intel-sglang-development"
    nvidia-cuda-12: "cuda12-sglang-development"
    nvidia-cuda-13: "cuda13-sglang-development"
    nvidia-l4t-cuda-13: "cuda13-nvidia-l4t-arm64-sglang-development"
    cpu: "cpu-sglang-development"
- !!merge <<: *sglang
  name: "cuda12-sglang"
  uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-nvidia-cuda-12-sglang"
  mirrors:
    - localai/localai-backends:latest-gpu-nvidia-cuda-12-sglang
- !!merge <<: *sglang
  name: "cuda13-sglang"
  uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-nvidia-cuda-13-sglang"
  mirrors:
    - localai/localai-backends:latest-gpu-nvidia-cuda-13-sglang
- !!merge <<: *sglang
  name: "cuda13-nvidia-l4t-arm64-sglang"
  uri: "quay.io/go-skynet/local-ai-backends:latest-nvidia-l4t-cuda-13-arm64-sglang"
  mirrors:
    - localai/localai-backends:latest-nvidia-l4t-cuda-13-arm64-sglang
- !!merge <<: *sglang
  name: "rocm-sglang"
  uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-rocm-hipblas-sglang"
  mirrors:
    - localai/localai-backends:latest-gpu-rocm-hipblas-sglang
- !!merge <<: *sglang
  name: "intel-sglang"
  uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-intel-sglang"
  mirrors:
    - localai/localai-backends:latest-gpu-intel-sglang
- !!merge <<: *sglang
  name: "cpu-sglang"
  uri: "quay.io/go-skynet/local-ai-backends:latest-cpu-sglang"
  mirrors:
    - localai/localai-backends:latest-cpu-sglang
- !!merge <<: *sglang
  name: "cuda12-sglang-development"
  uri: "quay.io/go-skynet/local-ai-backends:master-gpu-nvidia-cuda-12-sglang"
  mirrors:
    - localai/localai-backends:master-gpu-nvidia-cuda-12-sglang
- !!merge <<: *sglang
  name: "cuda13-sglang-development"
  uri: "quay.io/go-skynet/local-ai-backends:master-gpu-nvidia-cuda-13-sglang"
  mirrors:
    - localai/localai-backends:master-gpu-nvidia-cuda-13-sglang
- !!merge <<: *sglang
  name: "cuda13-nvidia-l4t-arm64-sglang-development"
  uri: "quay.io/go-skynet/local-ai-backends:master-nvidia-l4t-cuda-13-arm64-sglang"
  mirrors:
    - localai/localai-backends:master-nvidia-l4t-cuda-13-arm64-sglang
- !!merge <<: *sglang
  name: "rocm-sglang-development"
  uri: "quay.io/go-skynet/local-ai-backends:master-gpu-rocm-hipblas-sglang"
  mirrors:
    - localai/localai-backends:master-gpu-rocm-hipblas-sglang
- !!merge <<: *sglang
  name: "intel-sglang-development"
  uri: "quay.io/go-skynet/local-ai-backends:master-gpu-intel-sglang"
  mirrors:
    - localai/localai-backends:master-gpu-intel-sglang
- !!merge <<: *sglang
  name: "cpu-sglang-development"
  uri: "quay.io/go-skynet/local-ai-backends:master-cpu-sglang"
  mirrors:
    - localai/localai-backends:master-cpu-sglang
## vllm-omni
- !!merge <<: *vllm-omni
  name: "vllm-omni-development"
  capabilities:
    nvidia: "cuda12-vllm-omni-development"
    amd: "rocm-vllm-omni-development"
    nvidia-cuda-12: "cuda12-vllm-omni-development"
    nvidia-cuda-13: "cuda13-vllm-omni-development"
    nvidia-l4t-cuda-13: "cuda13-nvidia-l4t-arm64-vllm-omni-development"
- !!merge <<: *vllm-omni
  name: "cuda12-vllm-omni"
  uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-nvidia-cuda-12-vllm-omni"
  mirrors:
    - localai/localai-backends:latest-gpu-nvidia-cuda-12-vllm-omni
- !!merge <<: *vllm-omni
  name: "cuda13-vllm-omni"
  uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-nvidia-cuda-13-vllm-omni"
  mirrors:
    - localai/localai-backends:latest-gpu-nvidia-cuda-13-vllm-omni
- !!merge <<: *vllm-omni
  name: "cuda13-nvidia-l4t-arm64-vllm-omni"
  uri: "quay.io/go-skynet/local-ai-backends:latest-nvidia-l4t-cuda-13-arm64-vllm-omni"
  mirrors:
    - localai/localai-backends:latest-nvidia-l4t-cuda-13-arm64-vllm-omni
- !!merge <<: *vllm-omni
  name: "rocm-vllm-omni"
  uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-rocm-hipblas-vllm-omni"
  mirrors:
    - localai/localai-backends:latest-gpu-rocm-hipblas-vllm-omni
- !!merge <<: *vllm-omni
  name: "cuda12-vllm-omni-development"
  uri: "quay.io/go-skynet/local-ai-backends:master-gpu-nvidia-cuda-12-vllm-omni"
  mirrors:
    - localai/localai-backends:master-gpu-nvidia-cuda-12-vllm-omni
- !!merge <<: *vllm-omni
  name: "cuda13-vllm-omni-development"
  uri: "quay.io/go-skynet/local-ai-backends:master-gpu-nvidia-cuda-13-vllm-omni"
  mirrors:
    - localai/localai-backends:master-gpu-nvidia-cuda-13-vllm-omni
- !!merge <<: *vllm-omni
  name: "cuda13-nvidia-l4t-arm64-vllm-omni-development"
  uri: "quay.io/go-skynet/local-ai-backends:master-nvidia-l4t-cuda-13-arm64-vllm-omni"
  mirrors:
    - localai/localai-backends:master-nvidia-l4t-cuda-13-arm64-vllm-omni
- !!merge <<: *vllm-omni
  name: "rocm-vllm-omni-development"
  uri: "quay.io/go-skynet/local-ai-backends:master-gpu-rocm-hipblas-vllm-omni"
  mirrors:
    - localai/localai-backends:master-gpu-rocm-hipblas-vllm-omni
## rfdetr
- !!merge <<: *rfdetr
  name: "rfdetr-development"
  capabilities:
    nvidia: "cuda12-rfdetr-development"
    intel: "intel-rfdetr-development"
    # amd: "rocm-rfdetr-development"
    nvidia-l4t: "nvidia-l4t-arm64-rfdetr-development"
    metal: "metal-rfdetr-development"
    default: "cpu-rfdetr-development"
    nvidia-cuda-13: "cuda13-rfdetr-development"
- !!merge <<: *rfdetr
  name: "cuda12-rfdetr"
  uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-nvidia-cuda-12-rfdetr"
  mirrors:
    - localai/localai-backends:latest-gpu-nvidia-cuda-12-rfdetr
- !!merge <<: *rfdetr
  name: "intel-rfdetr"
  uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-intel-rfdetr"
  mirrors:
    - localai/localai-backends:latest-gpu-intel-rfdetr
# - !!merge <<: *rfdetr
#   name: "rocm-rfdetr"
#   uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-hipblas-rfdetr"
#   mirrors:
#     - localai/localai-backends:latest-gpu-hipblas-rfdetr
- !!merge <<: *rfdetr
  name: "nvidia-l4t-arm64-rfdetr"
  uri: "quay.io/go-skynet/local-ai-backends:latest-nvidia-l4t-arm64-rfdetr"
  mirrors:
    - localai/localai-backends:latest-nvidia-l4t-arm64-rfdetr
- !!merge <<: *rfdetr
  name: "nvidia-l4t-arm64-rfdetr-development"
  uri: "quay.io/go-skynet/local-ai-backends:master-nvidia-l4t-arm64-rfdetr"
  mirrors:
    - localai/localai-backends:master-nvidia-l4t-arm64-rfdetr
- !!merge <<: *rfdetr
  name: "cpu-rfdetr"
  uri: "quay.io/go-skynet/local-ai-backends:latest-cpu-rfdetr"
  mirrors:
    - localai/localai-backends:latest-cpu-rfdetr
- !!merge <<: *rfdetr
  name: "cuda12-rfdetr-development"
  uri: "quay.io/go-skynet/local-ai-backends:master-gpu-nvidia-cuda-12-rfdetr"
  mirrors:
    - localai/localai-backends:master-gpu-nvidia-cuda-12-rfdetr
- !!merge <<: *rfdetr
  name: "intel-rfdetr-development"
  uri: "quay.io/go-skynet/local-ai-backends:master-gpu-intel-rfdetr"
  mirrors:
    - localai/localai-backends:master-gpu-intel-rfdetr
# - !!merge <<: *rfdetr
#   name: "rocm-rfdetr-development"
#   uri: "quay.io/go-skynet/local-ai-backends:master-gpu-hipblas-rfdetr"
#   mirrors:
#     - localai/localai-backends:master-gpu-hipblas-rfdetr
- !!merge <<: *rfdetr
  name: "cpu-rfdetr-development"
  uri: "quay.io/go-skynet/local-ai-backends:master-cpu-rfdetr"
  mirrors:
    - localai/localai-backends:master-cpu-rfdetr
- !!merge <<: *rfdetr
  name: "cuda13-rfdetr"
  uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-nvidia-cuda-13-rfdetr"
  mirrors:
    - localai/localai-backends:latest-gpu-nvidia-cuda-13-rfdetr
- !!merge <<: *rfdetr
  name: "cuda13-rfdetr-development"
  uri: "quay.io/go-skynet/local-ai-backends:master-gpu-nvidia-cuda-13-rfdetr"
  mirrors:
    - localai/localai-backends:master-gpu-nvidia-cuda-13-rfdetr
- !!merge <<: *rfdetr
  name: "metal-rfdetr"
  uri: "quay.io/go-skynet/local-ai-backends:latest-metal-darwin-arm64-rfdetr"
  mirrors:
    - localai/localai-backends:latest-metal-darwin-arm64-rfdetr
- !!merge <<: *rfdetr
  name: "metal-rfdetr-development"
  uri: "quay.io/go-skynet/local-ai-backends:master-metal-darwin-arm64-rfdetr"
  mirrors:
    - localai/localai-backends:master-metal-darwin-arm64-rfdetr
## sam3-cpp
- !!merge <<: *sam3cpp
  name: "sam3-cpp-development"
  capabilities:
    default: "cpu-sam3-cpp-development"
    nvidia: "cuda12-sam3-cpp-development"
    nvidia-cuda-12: "cuda12-sam3-cpp-development"
    nvidia-cuda-13: "cuda13-sam3-cpp-development"
    nvidia-l4t: "nvidia-l4t-arm64-sam3-cpp-development"
    nvidia-l4t-cuda-12: "nvidia-l4t-arm64-sam3-cpp-development"
    nvidia-l4t-cuda-13: "cuda13-nvidia-l4t-arm64-sam3-cpp-development"
    intel: "intel-sycl-f32-sam3-cpp-development"
    vulkan: "vulkan-sam3-cpp-development"
- !!merge <<: *sam3cpp
  name: "cpu-sam3-cpp"
  uri: "quay.io/go-skynet/local-ai-backends:latest-cpu-sam3-cpp"
  mirrors:
    - localai/localai-backends:latest-cpu-sam3-cpp
- !!merge <<: *sam3cpp
  name: "cpu-sam3-cpp-development"
  uri: "quay.io/go-skynet/local-ai-backends:master-cpu-sam3-cpp"
  mirrors:
    - localai/localai-backends:master-cpu-sam3-cpp
- !!merge <<: *sam3cpp
  name: "cuda12-sam3-cpp"
  uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-nvidia-cuda-12-sam3-cpp"
  mirrors:
    - localai/localai-backends:latest-gpu-nvidia-cuda-12-sam3-cpp
- !!merge <<: *sam3cpp
  name: "cuda12-sam3-cpp-development"
  uri: "quay.io/go-skynet/local-ai-backends:master-gpu-nvidia-cuda-12-sam3-cpp"
  mirrors:
    - localai/localai-backends:master-gpu-nvidia-cuda-12-sam3-cpp
- !!merge <<: *sam3cpp
  name: "cuda13-sam3-cpp"
  uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-nvidia-cuda-13-sam3-cpp"
  mirrors:
    - localai/localai-backends:latest-gpu-nvidia-cuda-13-sam3-cpp
- !!merge <<: *sam3cpp
  name: "cuda13-sam3-cpp-development"
  uri: "quay.io/go-skynet/local-ai-backends:master-gpu-nvidia-cuda-13-sam3-cpp"
  mirrors:
    - localai/localai-backends:master-gpu-nvidia-cuda-13-sam3-cpp
- !!merge <<: *sam3cpp
  name: "nvidia-l4t-arm64-sam3-cpp"
  uri: "quay.io/go-skynet/local-ai-backends:latest-nvidia-l4t-arm64-sam3-cpp"
  mirrors:
    - localai/localai-backends:latest-nvidia-l4t-arm64-sam3-cpp
- !!merge <<: *sam3cpp
  name: "nvidia-l4t-arm64-sam3-cpp-development"
  uri: "quay.io/go-skynet/local-ai-backends:master-nvidia-l4t-arm64-sam3-cpp"
  mirrors:
    - localai/localai-backends:master-nvidia-l4t-arm64-sam3-cpp
- !!merge <<: *sam3cpp
  name: "cuda13-nvidia-l4t-arm64-sam3-cpp"
  uri: "quay.io/go-skynet/local-ai-backends:latest-nvidia-l4t-cuda-13-arm64-sam3-cpp"
  mirrors:
    - localai/localai-backends:latest-nvidia-l4t-cuda-13-arm64-sam3-cpp
- !!merge <<: *sam3cpp
  name: "cuda13-nvidia-l4t-arm64-sam3-cpp-development"
  uri: "quay.io/go-skynet/local-ai-backends:master-nvidia-l4t-cuda-13-arm64-sam3-cpp"
  mirrors:
    - localai/localai-backends:master-nvidia-l4t-cuda-13-arm64-sam3-cpp
- !!merge <<: *sam3cpp
  name: "intel-sycl-f32-sam3-cpp"
  uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-intel-sycl-f32-sam3-cpp"
  mirrors:
    - localai/localai-backends:latest-gpu-intel-sycl-f32-sam3-cpp
- !!merge <<: *sam3cpp
  name: "intel-sycl-f32-sam3-cpp-development"
  uri: "quay.io/go-skynet/local-ai-backends:master-gpu-intel-sycl-f32-sam3-cpp"
  mirrors:
    - localai/localai-backends:master-gpu-intel-sycl-f32-sam3-cpp
- !!merge <<: *sam3cpp
  name: "vulkan-sam3-cpp"
  uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-vulkan-sam3-cpp"
  mirrors:
    - localai/localai-backends:latest-gpu-vulkan-sam3-cpp
- !!merge <<: *sam3cpp
  name: "vulkan-sam3-cpp-development"
  uri: "quay.io/go-skynet/local-ai-backends:master-gpu-vulkan-sam3-cpp"
  mirrors:
    - localai/localai-backends:master-gpu-vulkan-sam3-cpp
## Rerankers
- !!merge <<: *rerankers
  name: "rerankers-development"
  capabilities:
    nvidia: "cuda12-rerankers-development"
    intel: "intel-rerankers-development"
    amd: "rocm-rerankers-development"
    metal: "metal-rerankers-development"
    nvidia-cuda-13: "cuda13-rerankers-development"
- !!merge <<: *rerankers
  name: "cuda12-rerankers"
  uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-nvidia-cuda-12-rerankers"
  mirrors:
    - localai/localai-backends:latest-gpu-nvidia-cuda-12-rerankers
- !!merge <<: *rerankers
  name: "intel-rerankers"
  uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-intel-rerankers"
  mirrors:
    - localai/localai-backends:latest-gpu-intel-rerankers
- !!merge <<: *rerankers
  name: "rocm-rerankers"
  uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-rocm-hipblas-rerankers"
  mirrors:
    - localai/localai-backends:latest-gpu-rocm-hipblas-rerankers
- !!merge <<: *rerankers
  name: "cuda12-rerankers-development"
  uri: "quay.io/go-skynet/local-ai-backends:master-gpu-nvidia-cuda-12-rerankers"
  mirrors:
    - localai/localai-backends:master-gpu-nvidia-cuda-12-rerankers
- !!merge <<: *rerankers
  name: "rocm-rerankers-development"
  uri: "quay.io/go-skynet/local-ai-backends:master-gpu-rocm-hipblas-rerankers"
  mirrors:
    - localai/localai-backends:master-gpu-rocm-hipblas-rerankers
- !!merge <<: *rerankers
  name: "intel-rerankers-development"
  uri: "quay.io/go-skynet/local-ai-backends:master-gpu-intel-rerankers"
  mirrors:
    - localai/localai-backends:master-gpu-intel-rerankers
- !!merge <<: *rerankers
  name: "cuda13-rerankers"
  uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-nvidia-cuda-13-rerankers"
  mirrors:
    - localai/localai-backends:latest-gpu-nvidia-cuda-13-rerankers
- !!merge <<: *rerankers
  name: "cuda13-rerankers-development"
  uri: "quay.io/go-skynet/local-ai-backends:master-gpu-nvidia-cuda-13-rerankers"
  mirrors:
    - localai/localai-backends:master-gpu-nvidia-cuda-13-rerankers
- !!merge <<: *rerankers
  name: "metal-rerankers"
  uri: "quay.io/go-skynet/local-ai-backends:latest-metal-darwin-arm64-rerankers"
  mirrors:
    - localai/localai-backends:latest-metal-darwin-arm64-rerankers
- !!merge <<: *rerankers
  name: "metal-rerankers-development"
  uri: "quay.io/go-skynet/local-ai-backends:master-metal-darwin-arm64-rerankers"
  mirrors:
    - localai/localai-backends:master-metal-darwin-arm64-rerankers
## tinygrad
## Single image — the meta anchor above carries the latest uri directly
## since there is only one variant. The development entry below points at
## the master tag.
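## (Illustrative note, assuming standard YAML merge-key semantics: `!!merge <<: *tinygrad`
## copies every key from the anchored entry, and the keys written below it, such as
## name, uri and mirrors, override the inherited values, so the description, tags
## and license carry over unchanged.)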
- !!merge <<: *tinygrad
  name: "tinygrad-development"
  uri: "quay.io/go-skynet/local-ai-backends:master-tinygrad"
  mirrors:
    - localai/localai-backends:master-tinygrad
## Transformers
- !!merge <<: *transformers
  name: "transformers-development"
  capabilities:
    nvidia: "cuda12-transformers-development"
    intel: "intel-transformers-development"
    amd: "rocm-transformers-development"
    metal: "metal-transformers-development"
    nvidia-cuda-13: "cuda13-transformers-development"
- !!merge <<: *transformers
  name: "cuda12-transformers"
  uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-nvidia-cuda-12-transformers"
  mirrors:
    - localai/localai-backends:latest-gpu-nvidia-cuda-12-transformers
- !!merge <<: *transformers
  name: "rocm-transformers"
  uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-rocm-hipblas-transformers"
  mirrors:
    - localai/localai-backends:latest-gpu-rocm-hipblas-transformers
- !!merge <<: *transformers
  name: "intel-transformers"
  uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-intel-transformers"
  mirrors:
    - localai/localai-backends:latest-gpu-intel-transformers
- !!merge <<: *transformers
  name: "cuda12-transformers-development"
  uri: "quay.io/go-skynet/local-ai-backends:master-gpu-nvidia-cuda-12-transformers"
  mirrors:
    - localai/localai-backends:master-gpu-nvidia-cuda-12-transformers
- !!merge <<: *transformers
  name: "rocm-transformers-development"
  uri: "quay.io/go-skynet/local-ai-backends:master-gpu-rocm-hipblas-transformers"
  mirrors:
    - localai/localai-backends:master-gpu-rocm-hipblas-transformers
- !!merge <<: *transformers
  name: "intel-transformers-development"
  uri: "quay.io/go-skynet/local-ai-backends:master-gpu-intel-transformers"
  mirrors:
    - localai/localai-backends:master-gpu-intel-transformers
- !!merge <<: *transformers
  name: "cuda13-transformers"
  uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-nvidia-cuda-13-transformers"
  mirrors:
    - localai/localai-backends:latest-gpu-nvidia-cuda-13-transformers
- !!merge <<: *transformers
  name: "cuda13-transformers-development"
  uri: "quay.io/go-skynet/local-ai-backends:master-gpu-nvidia-cuda-13-transformers"
  mirrors:
    - localai/localai-backends:master-gpu-nvidia-cuda-13-transformers
- !!merge <<: *transformers
  name: "metal-transformers"
  uri: "quay.io/go-skynet/local-ai-backends:latest-metal-darwin-arm64-transformers"
  mirrors:
    - localai/localai-backends:latest-metal-darwin-arm64-transformers
- !!merge <<: *transformers
  name: "metal-transformers-development"
  uri: "quay.io/go-skynet/local-ai-backends:master-metal-darwin-arm64-transformers"
  mirrors:
    - localai/localai-backends:master-metal-darwin-arm64-transformers
## Diffusers
- !!merge <<: *diffusers
  name: "diffusers-development"
  capabilities:
    nvidia: "cuda12-diffusers-development"
    intel: "intel-diffusers-development"
    amd: "rocm-diffusers-development"
    nvidia-l4t: "nvidia-l4t-diffusers-development"
    metal: "metal-diffusers-development"
    default: "cpu-diffusers-development"
    nvidia-cuda-13: "cuda13-diffusers-development"
- !!merge <<: *diffusers
  name: "cpu-diffusers"
  uri: "quay.io/go-skynet/local-ai-backends:latest-cpu-diffusers"
  mirrors:
    - localai/localai-backends:latest-cpu-diffusers
- !!merge <<: *diffusers
  name: "cpu-diffusers-development"
  uri: "quay.io/go-skynet/local-ai-backends:master-cpu-diffusers"
  mirrors:
    - localai/localai-backends:master-cpu-diffusers
- !!merge <<: *diffusers
  name: "nvidia-l4t-diffusers"
  uri: "quay.io/go-skynet/local-ai-backends:latest-nvidia-l4t-diffusers"
  mirrors:
    - localai/localai-backends:latest-nvidia-l4t-diffusers
- !!merge <<: *diffusers
  name: "nvidia-l4t-diffusers-development"
  uri: "quay.io/go-skynet/local-ai-backends:master-nvidia-l4t-diffusers"
  mirrors:
    - localai/localai-backends:master-nvidia-l4t-diffusers
- !!merge <<: *diffusers
  name: "cuda13-nvidia-l4t-arm64-diffusers"
  uri: "quay.io/go-skynet/local-ai-backends:latest-nvidia-l4t-cuda-13-arm64-diffusers"
  mirrors:
    - localai/localai-backends:latest-nvidia-l4t-cuda-13-arm64-diffusers
- !!merge <<: *diffusers
  name: "cuda13-nvidia-l4t-arm64-diffusers-development"
  uri: "quay.io/go-skynet/local-ai-backends:master-nvidia-l4t-cuda-13-arm64-diffusers"
  mirrors:
    - localai/localai-backends:master-nvidia-l4t-cuda-13-arm64-diffusers
- !!merge <<: *diffusers
  name: "cuda12-diffusers"
  uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-nvidia-cuda-12-diffusers"
  mirrors:
    - localai/localai-backends:latest-gpu-nvidia-cuda-12-diffusers
- !!merge <<: *diffusers
  name: "rocm-diffusers"
  uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-rocm-hipblas-diffusers"
  mirrors:
    - localai/localai-backends:latest-gpu-rocm-hipblas-diffusers
- !!merge <<: *diffusers
  name: "intel-diffusers"
  uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-intel-diffusers"
  mirrors:
    - localai/localai-backends:latest-gpu-intel-diffusers
- !!merge <<: *diffusers
  name: "cuda12-diffusers-development"
  uri: "quay.io/go-skynet/local-ai-backends:master-gpu-nvidia-cuda-12-diffusers"
  mirrors:
    - localai/localai-backends:master-gpu-nvidia-cuda-12-diffusers
- !!merge <<: *diffusers
  name: "rocm-diffusers-development"
  uri: "quay.io/go-skynet/local-ai-backends:master-gpu-rocm-hipblas-diffusers"
  mirrors:
    - localai/localai-backends:master-gpu-rocm-hipblas-diffusers
- !!merge <<: *diffusers
  name: "intel-diffusers-development"
  uri: "quay.io/go-skynet/local-ai-backends:master-gpu-intel-diffusers"
  mirrors:
    - localai/localai-backends:master-gpu-intel-diffusers
- !!merge <<: *diffusers
  name: "cuda13-diffusers"
  uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-nvidia-cuda-13-diffusers"
  mirrors:
    - localai/localai-backends:latest-gpu-nvidia-cuda-13-diffusers
- !!merge <<: *diffusers
  name: "cuda13-diffusers-development"
  uri: "quay.io/go-skynet/local-ai-backends:master-gpu-nvidia-cuda-13-diffusers"
  mirrors:
    - localai/localai-backends:master-gpu-nvidia-cuda-13-diffusers
- !!merge <<: *diffusers
  name: "metal-diffusers"
  uri: "quay.io/go-skynet/local-ai-backends:latest-metal-darwin-arm64-diffusers"
  mirrors:
    - localai/localai-backends:latest-metal-darwin-arm64-diffusers
- !!merge <<: *diffusers
  name: "metal-diffusers-development"
  uri: "quay.io/go-skynet/local-ai-backends:master-metal-darwin-arm64-diffusers"
  mirrors:
    - localai/localai-backends:master-metal-darwin-arm64-diffusers
## ace-step
- !!merge <<: *ace-step
  name: "cpu-ace-step"
  uri: "quay.io/go-skynet/local-ai-backends:latest-cpu-ace-step"
  mirrors:
    - localai/localai-backends:latest-cpu-ace-step
- !!merge <<: *ace-step
  name: "cpu-ace-step-development"
  uri: "quay.io/go-skynet/local-ai-backends:master-cpu-ace-step"
  mirrors:
    - localai/localai-backends:master-cpu-ace-step
- !!merge <<: *ace-step
  name: "cuda12-ace-step"
  uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-nvidia-cuda-12-ace-step"
  mirrors:
    - localai/localai-backends:latest-gpu-nvidia-cuda-12-ace-step
- !!merge <<: *ace-step
  name: "cuda12-ace-step-development"
  uri: "quay.io/go-skynet/local-ai-backends:master-gpu-nvidia-cuda-12-ace-step"
  mirrors:
    - localai/localai-backends:master-gpu-nvidia-cuda-12-ace-step
- !!merge <<: *ace-step
  name: "cuda13-ace-step"
  uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-nvidia-cuda-13-ace-step"
  mirrors:
    - localai/localai-backends:latest-gpu-nvidia-cuda-13-ace-step
- !!merge <<: *ace-step
  name: "cuda13-ace-step-development"
  uri: "quay.io/go-skynet/local-ai-backends:master-gpu-nvidia-cuda-13-ace-step"
  mirrors:
    - localai/localai-backends:master-gpu-nvidia-cuda-13-ace-step
- !!merge <<: *ace-step
  name: "rocm-ace-step"
  uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-rocm-hipblas-ace-step"
  mirrors:
    - localai/localai-backends:latest-gpu-rocm-hipblas-ace-step
- !!merge <<: *ace-step
  name: "rocm-ace-step-development"
  uri: "quay.io/go-skynet/local-ai-backends:master-gpu-rocm-hipblas-ace-step"
  mirrors:
    - localai/localai-backends:master-gpu-rocm-hipblas-ace-step
- !!merge <<: *ace-step
  name: "intel-ace-step"
  uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-intel-ace-step"
  mirrors:
    - localai/localai-backends:latest-gpu-intel-ace-step
- !!merge <<: *ace-step
  name: "intel-ace-step-development"
  uri: "quay.io/go-skynet/local-ai-backends:master-gpu-intel-ace-step"
  mirrors:
    - localai/localai-backends:master-gpu-intel-ace-step
- !!merge <<: *ace-step
  name: "metal-ace-step"
  uri: "quay.io/go-skynet/local-ai-backends:latest-metal-darwin-arm64-ace-step"
  mirrors:
    - localai/localai-backends:latest-metal-darwin-arm64-ace-step
- !!merge <<: *ace-step
  name: "metal-ace-step-development"
  uri: "quay.io/go-skynet/local-ai-backends:master-metal-darwin-arm64-ace-step"
  mirrors:
    - localai/localai-backends:master-metal-darwin-arm64-ace-step
## acestep-cpp
- !!merge <<: *acestepcpp
  name: "nvidia-l4t-arm64-acestep-cpp"
  uri: "quay.io/go-skynet/local-ai-backends:latest-nvidia-l4t-arm64-acestep-cpp"
  mirrors:
    - localai/localai-backends:latest-nvidia-l4t-arm64-acestep-cpp
- !!merge <<: *acestepcpp
  name: "nvidia-l4t-arm64-acestep-cpp-development"
  uri: "quay.io/go-skynet/local-ai-backends:master-nvidia-l4t-arm64-acestep-cpp"
  mirrors:
    - localai/localai-backends:master-nvidia-l4t-arm64-acestep-cpp
- !!merge <<: *acestepcpp
  name: "cuda13-nvidia-l4t-arm64-acestep-cpp"
  uri: "quay.io/go-skynet/local-ai-backends:latest-nvidia-l4t-cuda-13-arm64-acestep-cpp"
  mirrors:
    - localai/localai-backends:latest-nvidia-l4t-cuda-13-arm64-acestep-cpp
- !!merge <<: *acestepcpp
  name: "cuda13-nvidia-l4t-arm64-acestep-cpp-development"
  uri: "quay.io/go-skynet/local-ai-backends:master-nvidia-l4t-cuda-13-arm64-acestep-cpp"
  mirrors:
    - localai/localai-backends:master-nvidia-l4t-cuda-13-arm64-acestep-cpp
- !!merge <<: *acestepcpp
  name: "cpu-acestep-cpp"
  uri: "quay.io/go-skynet/local-ai-backends:latest-cpu-acestep-cpp"
  mirrors:
    - localai/localai-backends:latest-cpu-acestep-cpp
- !!merge <<: *acestepcpp
  name: "metal-acestep-cpp"
  uri: "quay.io/go-skynet/local-ai-backends:latest-metal-darwin-arm64-acestep-cpp"
  mirrors:
    - localai/localai-backends:latest-metal-darwin-arm64-acestep-cpp
- !!merge <<: *acestepcpp
  name: "metal-acestep-cpp-development"
  uri: "quay.io/go-skynet/local-ai-backends:master-metal-darwin-arm64-acestep-cpp"
  mirrors:
    - localai/localai-backends:master-metal-darwin-arm64-acestep-cpp
- !!merge <<: *acestepcpp
  name: "cpu-acestep-cpp-development"
  uri: "quay.io/go-skynet/local-ai-backends:master-cpu-acestep-cpp"
  mirrors:
    - localai/localai-backends:master-cpu-acestep-cpp
- !!merge <<: *acestepcpp
  name: "cuda12-acestep-cpp"
  uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-nvidia-cuda-12-acestep-cpp"
  mirrors:
    - localai/localai-backends:latest-gpu-nvidia-cuda-12-acestep-cpp
- !!merge <<: *acestepcpp
  name: "rocm-acestep-cpp"
  uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-rocm-hipblas-acestep-cpp"
  mirrors:
    - localai/localai-backends:latest-gpu-rocm-hipblas-acestep-cpp
- !!merge <<: *acestepcpp
  name: "intel-sycl-f32-acestep-cpp"
  uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-intel-sycl-f32-acestep-cpp"
  mirrors:
    - localai/localai-backends:latest-gpu-intel-sycl-f32-acestep-cpp
- !!merge <<: *acestepcpp
  name: "intel-sycl-f16-acestep-cpp"
  uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-intel-sycl-f16-acestep-cpp"
  mirrors:
    - localai/localai-backends:latest-gpu-intel-sycl-f16-acestep-cpp
- !!merge <<: *acestepcpp
  name: "vulkan-acestep-cpp"
  uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-vulkan-acestep-cpp"
  mirrors:
    - localai/localai-backends:latest-gpu-vulkan-acestep-cpp
- !!merge <<: *acestepcpp
  name: "vulkan-acestep-cpp-development"
  uri: "quay.io/go-skynet/local-ai-backends:master-gpu-vulkan-acestep-cpp"
  mirrors:
    - localai/localai-backends:master-gpu-vulkan-acestep-cpp
- !!merge <<: *acestepcpp
  name: "cuda12-acestep-cpp-development"
  uri: "quay.io/go-skynet/local-ai-backends:master-gpu-nvidia-cuda-12-acestep-cpp"
  mirrors:
    - localai/localai-backends:master-gpu-nvidia-cuda-12-acestep-cpp
- !!merge <<: *acestepcpp
  name: "rocm-acestep-cpp-development"
  uri: "quay.io/go-skynet/local-ai-backends:master-gpu-rocm-hipblas-acestep-cpp"
  mirrors:
    - localai/localai-backends:master-gpu-rocm-hipblas-acestep-cpp
- !!merge <<: *acestepcpp
  name: "intel-sycl-f32-acestep-cpp-development"
  uri: "quay.io/go-skynet/local-ai-backends:master-gpu-intel-sycl-f32-acestep-cpp"
  mirrors:
    - localai/localai-backends:master-gpu-intel-sycl-f32-acestep-cpp
- !!merge <<: *acestepcpp
  name: "intel-sycl-f16-acestep-cpp-development"
  uri: "quay.io/go-skynet/local-ai-backends:master-gpu-intel-sycl-f16-acestep-cpp"
  mirrors:
    - localai/localai-backends:master-gpu-intel-sycl-f16-acestep-cpp
- !!merge <<: *acestepcpp
  name: "cuda13-acestep-cpp"
  uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-nvidia-cuda-13-acestep-cpp"
  mirrors:
    - localai/localai-backends:latest-gpu-nvidia-cuda-13-acestep-cpp
- !!merge <<: *acestepcpp
  name: "cuda13-acestep-cpp-development"
  uri: "quay.io/go-skynet/local-ai-backends:master-gpu-nvidia-cuda-13-acestep-cpp"
  mirrors:
    - localai/localai-backends:master-gpu-nvidia-cuda-13-acestep-cpp
## qwen3-tts-cpp
- !!merge <<: *qwen3ttscpp
  name: "nvidia-l4t-arm64-qwen3-tts-cpp"
  uri: "quay.io/go-skynet/local-ai-backends:latest-nvidia-l4t-arm64-qwen3-tts-cpp"
  mirrors:
    - localai/localai-backends:latest-nvidia-l4t-arm64-qwen3-tts-cpp
- !!merge <<: *qwen3ttscpp
  name: "nvidia-l4t-arm64-qwen3-tts-cpp-development"
  uri: "quay.io/go-skynet/local-ai-backends:master-nvidia-l4t-arm64-qwen3-tts-cpp"
  mirrors:
    - localai/localai-backends:master-nvidia-l4t-arm64-qwen3-tts-cpp
- !!merge <<: *qwen3ttscpp
  name: "cuda13-nvidia-l4t-arm64-qwen3-tts-cpp"
  uri: "quay.io/go-skynet/local-ai-backends:latest-nvidia-l4t-cuda-13-arm64-qwen3-tts-cpp"
  mirrors:
    - localai/localai-backends:latest-nvidia-l4t-cuda-13-arm64-qwen3-tts-cpp
- !!merge <<: *qwen3ttscpp
  name: "cuda13-nvidia-l4t-arm64-qwen3-tts-cpp-development"
  uri: "quay.io/go-skynet/local-ai-backends:master-nvidia-l4t-cuda-13-arm64-qwen3-tts-cpp"
  mirrors:
    - localai/localai-backends:master-nvidia-l4t-cuda-13-arm64-qwen3-tts-cpp
- !!merge <<: *qwen3ttscpp
  name: "cpu-qwen3-tts-cpp"
  uri: "quay.io/go-skynet/local-ai-backends:latest-cpu-qwen3-tts-cpp"
  mirrors:
    - localai/localai-backends:latest-cpu-qwen3-tts-cpp
- !!merge <<: *qwen3ttscpp
  name: "metal-qwen3-tts-cpp"
  uri: "quay.io/go-skynet/local-ai-backends:latest-metal-darwin-arm64-qwen3-tts-cpp"
  mirrors:
    - localai/localai-backends:latest-metal-darwin-arm64-qwen3-tts-cpp
- !!merge <<: *qwen3ttscpp
  name: "metal-qwen3-tts-cpp-development"
  uri: "quay.io/go-skynet/local-ai-backends:master-metal-darwin-arm64-qwen3-tts-cpp"
  mirrors:
    - localai/localai-backends:master-metal-darwin-arm64-qwen3-tts-cpp
- !!merge <<: *qwen3ttscpp
  name: "cpu-qwen3-tts-cpp-development"
  uri: "quay.io/go-skynet/local-ai-backends:master-cpu-qwen3-tts-cpp"
  mirrors:
    - localai/localai-backends:master-cpu-qwen3-tts-cpp
- !!merge <<: *qwen3ttscpp
  name: "cuda12-qwen3-tts-cpp"
  uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-nvidia-cuda-12-qwen3-tts-cpp"
  mirrors:
    - localai/localai-backends:latest-gpu-nvidia-cuda-12-qwen3-tts-cpp
- !!merge <<: *qwen3ttscpp
  name: "rocm-qwen3-tts-cpp"
  uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-rocm-hipblas-qwen3-tts-cpp"
  mirrors:
    - localai/localai-backends:latest-gpu-rocm-hipblas-qwen3-tts-cpp
- !!merge <<: *qwen3ttscpp
  name: "intel-sycl-f32-qwen3-tts-cpp"
  uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-intel-sycl-f32-qwen3-tts-cpp"
  mirrors:
    - localai/localai-backends:latest-gpu-intel-sycl-f32-qwen3-tts-cpp
- !!merge <<: *qwen3ttscpp
  name: "intel-sycl-f16-qwen3-tts-cpp"
  uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-intel-sycl-f16-qwen3-tts-cpp"
  mirrors:
    - localai/localai-backends:latest-gpu-intel-sycl-f16-qwen3-tts-cpp
- !!merge <<: *qwen3ttscpp
  name: "vulkan-qwen3-tts-cpp"
  uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-vulkan-qwen3-tts-cpp"
  mirrors:
    - localai/localai-backends:latest-gpu-vulkan-qwen3-tts-cpp
- !!merge <<: *qwen3ttscpp
  name: "vulkan-qwen3-tts-cpp-development"
  uri: "quay.io/go-skynet/local-ai-backends:master-gpu-vulkan-qwen3-tts-cpp"
  mirrors:
    - localai/localai-backends:master-gpu-vulkan-qwen3-tts-cpp
- !!merge <<: *qwen3ttscpp
  name: "cuda12-qwen3-tts-cpp-development"
  uri: "quay.io/go-skynet/local-ai-backends:master-gpu-nvidia-cuda-12-qwen3-tts-cpp"
  mirrors:
    - localai/localai-backends:master-gpu-nvidia-cuda-12-qwen3-tts-cpp
- !!merge <<: *qwen3ttscpp
  name: "rocm-qwen3-tts-cpp-development"
  uri: "quay.io/go-skynet/local-ai-backends:master-gpu-rocm-hipblas-qwen3-tts-cpp"
  mirrors:
    - localai/localai-backends:master-gpu-rocm-hipblas-qwen3-tts-cpp
- !!merge <<: *qwen3ttscpp
  name: "intel-sycl-f32-qwen3-tts-cpp-development"
  uri: "quay.io/go-skynet/local-ai-backends:master-gpu-intel-sycl-f32-qwen3-tts-cpp"
  mirrors:
    - localai/localai-backends:master-gpu-intel-sycl-f32-qwen3-tts-cpp
- !!merge <<: *qwen3ttscpp
  name: "intel-sycl-f16-qwen3-tts-cpp-development"
  uri: "quay.io/go-skynet/local-ai-backends:master-gpu-intel-sycl-f16-qwen3-tts-cpp"
  mirrors:
    - localai/localai-backends:master-gpu-intel-sycl-f16-qwen3-tts-cpp
- !!merge <<: *qwen3ttscpp
  name: "cuda13-qwen3-tts-cpp"
  uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-nvidia-cuda-13-qwen3-tts-cpp"
  mirrors:
    - localai/localai-backends:latest-gpu-nvidia-cuda-13-qwen3-tts-cpp
- !!merge <<: *qwen3ttscpp
  name: "cuda13-qwen3-tts-cpp-development"
  uri: "quay.io/go-skynet/local-ai-backends:master-gpu-nvidia-cuda-13-qwen3-tts-cpp"
  mirrors:
    - localai/localai-backends:master-gpu-nvidia-cuda-13-qwen3-tts-cpp
## vibevoice-cpp
- !!merge <<: *vibevoicecpp
  name: "nvidia-l4t-arm64-vibevoice-cpp"
  uri: "quay.io/go-skynet/local-ai-backends:latest-nvidia-l4t-arm64-vibevoice-cpp"
  mirrors:
    - localai/localai-backends:latest-nvidia-l4t-arm64-vibevoice-cpp
- !!merge <<: *vibevoicecpp
  name: "nvidia-l4t-arm64-vibevoice-cpp-development"
  uri: "quay.io/go-skynet/local-ai-backends:master-nvidia-l4t-arm64-vibevoice-cpp"
  mirrors:
    - localai/localai-backends:master-nvidia-l4t-arm64-vibevoice-cpp
- !!merge <<: *vibevoicecpp
  name: "cuda13-nvidia-l4t-arm64-vibevoice-cpp"
  uri: "quay.io/go-skynet/local-ai-backends:latest-nvidia-l4t-cuda-13-arm64-vibevoice-cpp"
  mirrors:
    - localai/localai-backends:latest-nvidia-l4t-cuda-13-arm64-vibevoice-cpp
- !!merge <<: *vibevoicecpp
  name: "cuda13-nvidia-l4t-arm64-vibevoice-cpp-development"
  uri: "quay.io/go-skynet/local-ai-backends:master-nvidia-l4t-cuda-13-arm64-vibevoice-cpp"
  mirrors:
    - localai/localai-backends:master-nvidia-l4t-cuda-13-arm64-vibevoice-cpp
- !!merge <<: *vibevoicecpp
  name: "cpu-vibevoice-cpp"
  uri: "quay.io/go-skynet/local-ai-backends:latest-cpu-vibevoice-cpp"
  mirrors:
    - localai/localai-backends:latest-cpu-vibevoice-cpp
- !!merge <<: *vibevoicecpp
  name: "metal-vibevoice-cpp"
  uri: "quay.io/go-skynet/local-ai-backends:latest-metal-darwin-arm64-vibevoice-cpp"
  mirrors:
    - localai/localai-backends:latest-metal-darwin-arm64-vibevoice-cpp
- !!merge <<: *vibevoicecpp
  name: "metal-vibevoice-cpp-development"
  uri: "quay.io/go-skynet/local-ai-backends:master-metal-darwin-arm64-vibevoice-cpp"
  mirrors:
    - localai/localai-backends:master-metal-darwin-arm64-vibevoice-cpp
- !!merge <<: *vibevoicecpp
  name: "cpu-vibevoice-cpp-development"
  uri: "quay.io/go-skynet/local-ai-backends:master-cpu-vibevoice-cpp"
  mirrors:
    - localai/localai-backends:master-cpu-vibevoice-cpp
- !!merge <<: *vibevoicecpp
  name: "cuda12-vibevoice-cpp"
  uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-nvidia-cuda-12-vibevoice-cpp"
  mirrors:
    - localai/localai-backends:latest-gpu-nvidia-cuda-12-vibevoice-cpp
- !!merge <<: *vibevoicecpp
  name: "rocm-vibevoice-cpp"
  uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-rocm-hipblas-vibevoice-cpp"
  mirrors:
    - localai/localai-backends:latest-gpu-rocm-hipblas-vibevoice-cpp
- !!merge <<: *vibevoicecpp
  name: "intel-sycl-f32-vibevoice-cpp"
  uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-intel-sycl-f32-vibevoice-cpp"
  mirrors:
    - localai/localai-backends:latest-gpu-intel-sycl-f32-vibevoice-cpp
- !!merge <<: *vibevoicecpp
  name: "intel-sycl-f16-vibevoice-cpp"
  uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-intel-sycl-f16-vibevoice-cpp"
  mirrors:
    - localai/localai-backends:latest-gpu-intel-sycl-f16-vibevoice-cpp
- !!merge <<: *vibevoicecpp
  name: "vulkan-vibevoice-cpp"
  uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-vulkan-vibevoice-cpp"
  mirrors:
    - localai/localai-backends:latest-gpu-vulkan-vibevoice-cpp
- !!merge <<: *vibevoicecpp
  name: "vulkan-vibevoice-cpp-development"
  uri: "quay.io/go-skynet/local-ai-backends:master-gpu-vulkan-vibevoice-cpp"
  mirrors:
    - localai/localai-backends:master-gpu-vulkan-vibevoice-cpp
- !!merge <<: *vibevoicecpp
  name: "cuda12-vibevoice-cpp-development"
  uri: "quay.io/go-skynet/local-ai-backends:master-gpu-nvidia-cuda-12-vibevoice-cpp"
  mirrors:
    - localai/localai-backends:master-gpu-nvidia-cuda-12-vibevoice-cpp
- !!merge <<: *vibevoicecpp
  name: "rocm-vibevoice-cpp-development"
  uri: "quay.io/go-skynet/local-ai-backends:master-gpu-rocm-hipblas-vibevoice-cpp"
  mirrors:
    - localai/localai-backends:master-gpu-rocm-hipblas-vibevoice-cpp
- !!merge <<: *vibevoicecpp
  name: "intel-sycl-f32-vibevoice-cpp-development"
  uri: "quay.io/go-skynet/local-ai-backends:master-gpu-intel-sycl-f32-vibevoice-cpp"
  mirrors:
    - localai/localai-backends:master-gpu-intel-sycl-f32-vibevoice-cpp
- !!merge <<: *vibevoicecpp
  name: "intel-sycl-f16-vibevoice-cpp-development"
  uri: "quay.io/go-skynet/local-ai-backends:master-gpu-intel-sycl-f16-vibevoice-cpp"
  mirrors:
    - localai/localai-backends:master-gpu-intel-sycl-f16-vibevoice-cpp
- !!merge <<: *vibevoicecpp
  name: "cuda13-vibevoice-cpp"
  uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-nvidia-cuda-13-vibevoice-cpp"
  mirrors:
    - localai/localai-backends:latest-gpu-nvidia-cuda-13-vibevoice-cpp
- !!merge <<: *vibevoicecpp
  name: "cuda13-vibevoice-cpp-development"
  uri: "quay.io/go-skynet/local-ai-backends:master-gpu-nvidia-cuda-13-vibevoice-cpp"
  mirrors:
    - localai/localai-backends:master-gpu-nvidia-cuda-13-vibevoice-cpp
## localvqe
- !!merge <<: *localvqecpp
  name: "cpu-localvqe"
  uri: "quay.io/go-skynet/local-ai-backends:latest-cpu-localvqe"
  mirrors:
    - localai/localai-backends:latest-cpu-localvqe
- !!merge <<: *localvqecpp
  name: "cpu-localvqe-development"
  uri: "quay.io/go-skynet/local-ai-backends:master-cpu-localvqe"
  mirrors:
    - localai/localai-backends:master-cpu-localvqe
- !!merge <<: *localvqecpp
  name: "vulkan-localvqe"
  uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-vulkan-localvqe"
  mirrors:
    - localai/localai-backends:latest-gpu-vulkan-localvqe
- !!merge <<: *localvqecpp
  name: "vulkan-localvqe-development"
  uri: "quay.io/go-skynet/local-ai-backends:master-gpu-vulkan-localvqe"
  mirrors:
    - localai/localai-backends:master-gpu-vulkan-localvqe
## kokoro
- !!merge <<: *kokoro
  name: "kokoro-development"
  capabilities:
    nvidia: "cuda12-kokoro-development"
    intel: "intel-kokoro-development"
    amd: "rocm-kokoro-development"
    nvidia-l4t: "nvidia-l4t-kokoro-development"
    metal: "metal-kokoro-development"
- !!merge <<: *kokoro
  name: "cuda12-kokoro-development"
  uri: "quay.io/go-skynet/local-ai-backends:master-gpu-nvidia-cuda-12-kokoro"
  mirrors:
    - localai/localai-backends:master-gpu-nvidia-cuda-12-kokoro
- !!merge <<: *kokoro
  name: "rocm-kokoro-development"
  uri: "quay.io/go-skynet/local-ai-backends:master-gpu-rocm-hipblas-kokoro"
  mirrors:
    - localai/localai-backends:master-gpu-rocm-hipblas-kokoro
- !!merge <<: *kokoro
  name: "intel-kokoro"
  uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-intel-kokoro"
  mirrors:
    - localai/localai-backends:latest-gpu-intel-kokoro
- !!merge <<: *kokoro
  name: "intel-kokoro-development"
  uri: "quay.io/go-skynet/local-ai-backends:master-gpu-intel-kokoro"
  mirrors:
    - localai/localai-backends:master-gpu-intel-kokoro
- !!merge <<: *kokoro
  name: "nvidia-l4t-kokoro"
  uri: "quay.io/go-skynet/local-ai-backends:latest-nvidia-l4t-kokoro"
  mirrors:
    - localai/localai-backends:latest-nvidia-l4t-kokoro
- !!merge <<: *kokoro
  name: "nvidia-l4t-kokoro-development"
  uri: "quay.io/go-skynet/local-ai-backends:master-nvidia-l4t-kokoro"
  mirrors:
    - localai/localai-backends:master-nvidia-l4t-kokoro
- !!merge <<: *kokoro
  name: "cuda12-kokoro"
  uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-nvidia-cuda-12-kokoro"
  mirrors:
    - localai/localai-backends:latest-gpu-nvidia-cuda-12-kokoro
- !!merge <<: *kokoro
  name: "rocm-kokoro"
  uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-rocm-hipblas-kokoro"
  mirrors:
    - localai/localai-backends:latest-gpu-rocm-hipblas-kokoro
- !!merge <<: *kokoro
  name: "cuda13-kokoro"
  uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-nvidia-cuda-13-kokoro"
  mirrors:
    - localai/localai-backends:latest-gpu-nvidia-cuda-13-kokoro
- !!merge <<: *kokoro
  name: "cuda13-kokoro-development"
  uri: "quay.io/go-skynet/local-ai-backends:master-gpu-nvidia-cuda-13-kokoro"
  mirrors:
    - localai/localai-backends:master-gpu-nvidia-cuda-13-kokoro
- !!merge <<: *kokoro
  name: "metal-kokoro"
  uri: "quay.io/go-skynet/local-ai-backends:latest-metal-darwin-arm64-kokoro"
  mirrors:
    - localai/localai-backends:latest-metal-darwin-arm64-kokoro
- !!merge <<: *kokoro
  name: "metal-kokoro-development"
  uri: "quay.io/go-skynet/local-ai-backends:master-metal-darwin-arm64-kokoro"
  mirrors:
    - localai/localai-backends:master-metal-darwin-arm64-kokoro
## kokoros (Rust)
- !!merge <<: *kokoros
  name: "kokoros-development"
  capabilities:
    default: "cpu-kokoros-development"
- !!merge <<: *kokoros
  name: "cpu-kokoros"
  uri: "quay.io/go-skynet/local-ai-backends:latest-cpu-kokoros"
  mirrors:
    - localai/localai-backends:latest-cpu-kokoros
- !!merge <<: *kokoros
  name: "cpu-kokoros-development"
  uri: "quay.io/go-skynet/local-ai-backends:master-cpu-kokoros"
  mirrors:
    - localai/localai-backends:master-cpu-kokoros
## faster-whisper
- !!merge <<: *faster-whisper
  name: "faster-whisper-development"
  capabilities:
    default: "cpu-faster-whisper-development"
    nvidia: "cuda12-faster-whisper-development"
    intel: "intel-faster-whisper-development"
    amd: "rocm-faster-whisper-development"
    metal: "metal-faster-whisper-development"
    nvidia-cuda-13: "cuda13-faster-whisper-development"
    nvidia-l4t: "nvidia-l4t-arm64-faster-whisper-development"
- !!merge <<: *faster-whisper
  name: "cuda12-faster-whisper-development"
  uri: "quay.io/go-skynet/local-ai-backends:master-gpu-nvidia-cuda-12-faster-whisper"
  mirrors:
    - localai/localai-backends:master-gpu-nvidia-cuda-12-faster-whisper
- !!merge <<: *faster-whisper
  name: "rocm-faster-whisper-development"
  uri: "quay.io/go-skynet/local-ai-backends:master-gpu-rocm-hipblas-faster-whisper"
  mirrors:
    - localai/localai-backends:master-gpu-rocm-hipblas-faster-whisper
- !!merge <<: *faster-whisper
  name: "intel-faster-whisper"
  uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-intel-faster-whisper"
  mirrors:
    - localai/localai-backends:latest-gpu-intel-faster-whisper
- !!merge <<: *faster-whisper
  name: "intel-faster-whisper-development"
  uri: "quay.io/go-skynet/local-ai-backends:master-gpu-intel-faster-whisper"
  mirrors:
    - localai/localai-backends:master-gpu-intel-faster-whisper
- !!merge <<: *faster-whisper
  name: "cuda13-faster-whisper"
  uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-nvidia-cuda-13-faster-whisper"
  mirrors:
    - localai/localai-backends:latest-gpu-nvidia-cuda-13-faster-whisper
- !!merge <<: *faster-whisper
  name: "cuda13-faster-whisper-development"
  uri: "quay.io/go-skynet/local-ai-backends:master-gpu-nvidia-cuda-13-faster-whisper"
  mirrors:
    - localai/localai-backends:master-gpu-nvidia-cuda-13-faster-whisper
- !!merge <<: *faster-whisper
  name: "metal-faster-whisper"
  uri: "quay.io/go-skynet/local-ai-backends:latest-metal-darwin-arm64-faster-whisper"
  mirrors:
    - localai/localai-backends:latest-metal-darwin-arm64-faster-whisper
- !!merge <<: *faster-whisper
  name: "metal-faster-whisper-development"
  uri: "quay.io/go-skynet/local-ai-backends:master-metal-darwin-arm64-faster-whisper"
  mirrors:
    - localai/localai-backends:master-metal-darwin-arm64-faster-whisper
- !!merge <<: *faster-whisper
  name: "cuda12-faster-whisper"
  uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-nvidia-cuda-12-faster-whisper"
  mirrors:
    - localai/localai-backends:latest-gpu-nvidia-cuda-12-faster-whisper
- !!merge <<: *faster-whisper
  name: "rocm-faster-whisper"
  uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-rocm-hipblas-faster-whisper"
  mirrors:
    - localai/localai-backends:latest-gpu-rocm-hipblas-faster-whisper
- !!merge <<: *faster-whisper
  name: "cpu-faster-whisper"
  uri: "quay.io/go-skynet/local-ai-backends:latest-cpu-faster-whisper"
  mirrors:
    - localai/localai-backends:latest-cpu-faster-whisper
- !!merge <<: *faster-whisper
  name: "cpu-faster-whisper-development"
  uri: "quay.io/go-skynet/local-ai-backends:master-cpu-faster-whisper"
  mirrors:
    - localai/localai-backends:master-cpu-faster-whisper
- !!merge <<: *faster-whisper
  name: "nvidia-l4t-arm64-faster-whisper"
  uri: "quay.io/go-skynet/local-ai-backends:latest-nvidia-l4t-faster-whisper"
  mirrors:
    - localai/localai-backends:latest-nvidia-l4t-faster-whisper
- !!merge <<: *faster-whisper
  name: "nvidia-l4t-arm64-faster-whisper-development"
  uri: "quay.io/go-skynet/local-ai-backends:master-nvidia-l4t-faster-whisper"
  mirrors:
    - localai/localai-backends:master-nvidia-l4t-faster-whisper
## moonshine
- !!merge <<: *moonshine
  name: "moonshine-development"
  capabilities:
    nvidia: "cuda12-moonshine-development"
    default: "cpu-moonshine-development"
    nvidia-cuda-13: "cuda13-moonshine-development"
    nvidia-cuda-12: "cuda12-moonshine-development"
- !!merge <<: *moonshine
  name: "cpu-moonshine"
  uri: "quay.io/go-skynet/local-ai-backends:latest-cpu-moonshine"
  mirrors:
    - localai/localai-backends:latest-cpu-moonshine
- !!merge <<: *moonshine
  name: "cpu-moonshine-development"
  uri: "quay.io/go-skynet/local-ai-backends:master-cpu-moonshine"
  mirrors:
    - localai/localai-backends:master-cpu-moonshine
- !!merge <<: *moonshine
  name: "cuda12-moonshine"
  uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-nvidia-cuda-12-moonshine"
  mirrors:
    - localai/localai-backends:latest-gpu-nvidia-cuda-12-moonshine
- !!merge <<: *moonshine
  name: "cuda12-moonshine-development"
  uri: "quay.io/go-skynet/local-ai-backends:master-gpu-nvidia-cuda-12-moonshine"
  mirrors:
    - localai/localai-backends:master-gpu-nvidia-cuda-12-moonshine
- !!merge <<: *moonshine
  name: "cuda13-moonshine"
  uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-nvidia-cuda-13-moonshine"
  mirrors:
    - localai/localai-backends:latest-gpu-nvidia-cuda-13-moonshine
- !!merge <<: *moonshine
  name: "cuda13-moonshine-development"
  uri: "quay.io/go-skynet/local-ai-backends:master-gpu-nvidia-cuda-13-moonshine"
  mirrors:
    - localai/localai-backends:master-gpu-nvidia-cuda-13-moonshine
- !!merge <<: *moonshine
  name: "metal-moonshine"
  uri: "quay.io/go-skynet/local-ai-backends:latest-metal-darwin-arm64-moonshine"
  mirrors:
    - localai/localai-backends:latest-metal-darwin-arm64-moonshine
- !!merge <<: *moonshine
  name: "metal-moonshine-development"
  uri: "quay.io/go-skynet/local-ai-backends:master-metal-darwin-arm64-moonshine"
  mirrors:
    - localai/localai-backends:master-metal-darwin-arm64-moonshine
## whisperx
- !!merge <<: *whisperx
  name: "whisperx-development"
  capabilities:
    nvidia: "cuda12-whisperx-development"
    metal: "metal-whisperx-development"
    default: "cpu-whisperx-development"
    nvidia-cuda-13: "cuda13-whisperx-development"
    nvidia-cuda-12: "cuda12-whisperx-development"
    nvidia-l4t: "nvidia-l4t-arm64-whisperx-development"
- !!merge <<: *whisperx
  name: "cpu-whisperx"
  uri: "quay.io/go-skynet/local-ai-backends:latest-cpu-whisperx"
  mirrors:
    - localai/localai-backends:latest-cpu-whisperx
- !!merge <<: *whisperx
  name: "cpu-whisperx-development"
  uri: "quay.io/go-skynet/local-ai-backends:master-cpu-whisperx"
  mirrors:
    - localai/localai-backends:master-cpu-whisperx
- !!merge <<: *whisperx
  name: "cuda12-whisperx"
  uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-nvidia-cuda-12-whisperx"
  mirrors:
    - localai/localai-backends:latest-gpu-nvidia-cuda-12-whisperx
- !!merge <<: *whisperx
  name: "cuda12-whisperx-development"
  uri: "quay.io/go-skynet/local-ai-backends:master-gpu-nvidia-cuda-12-whisperx"
  mirrors:
    - localai/localai-backends:master-gpu-nvidia-cuda-12-whisperx
- !!merge <<: *whisperx
  name: "cuda13-whisperx"
  uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-nvidia-cuda-13-whisperx"
  mirrors:
    - localai/localai-backends:latest-gpu-nvidia-cuda-13-whisperx
- !!merge <<: *whisperx
  name: "cuda13-whisperx-development"
  uri: "quay.io/go-skynet/local-ai-backends:master-gpu-nvidia-cuda-13-whisperx"
  mirrors:
    - localai/localai-backends:master-gpu-nvidia-cuda-13-whisperx
- !!merge <<: *whisperx
  name: "metal-whisperx"
  uri: "quay.io/go-skynet/local-ai-backends:latest-metal-darwin-arm64-whisperx"
  mirrors:
    - localai/localai-backends:latest-metal-darwin-arm64-whisperx
- !!merge <<: *whisperx
  name: "metal-whisperx-development"
  uri: "quay.io/go-skynet/local-ai-backends:master-metal-darwin-arm64-whisperx"
  mirrors:
    - localai/localai-backends:master-metal-darwin-arm64-whisperx
- !!merge <<: *whisperx
  name: "nvidia-l4t-arm64-whisperx"
  uri: "quay.io/go-skynet/local-ai-backends:latest-nvidia-l4t-whisperx"
  mirrors:
    - localai/localai-backends:latest-nvidia-l4t-whisperx
- !!merge <<: *whisperx
  name: "nvidia-l4t-arm64-whisperx-development"
  uri: "quay.io/go-skynet/local-ai-backends:master-nvidia-l4t-whisperx"
  mirrors:
    - localai/localai-backends:master-nvidia-l4t-whisperx
## coqui
- !!merge <<: *coqui
  name: "coqui-development"
  capabilities:
    nvidia: "cuda12-coqui-development"
    intel: "intel-coqui-development"
    amd: "rocm-coqui-development"
    metal: "metal-coqui-development"
- !!merge <<: *coqui
  name: "cuda12-coqui"
  uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-nvidia-cuda-12-coqui"
  mirrors:
    - localai/localai-backends:latest-gpu-nvidia-cuda-12-coqui
- !!merge <<: *coqui
  name: "cuda12-coqui-development"
  uri: "quay.io/go-skynet/local-ai-backends:master-gpu-nvidia-cuda-12-coqui"
  mirrors:
    - localai/localai-backends:master-gpu-nvidia-cuda-12-coqui
- !!merge <<: *coqui
  name: "rocm-coqui-development"
  uri: "quay.io/go-skynet/local-ai-backends:master-gpu-rocm-hipblas-coqui"
  mirrors:
    - localai/localai-backends:master-gpu-rocm-hipblas-coqui
- !!merge <<: *coqui
  name: "intel-coqui"
  uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-intel-coqui"
  mirrors:
    - localai/localai-backends:latest-gpu-intel-coqui
- !!merge <<: *coqui
  name: "intel-coqui-development"
  uri: "quay.io/go-skynet/local-ai-backends:master-gpu-intel-coqui"
  mirrors:
    - localai/localai-backends:master-gpu-intel-coqui
- !!merge <<: *coqui
  name: "rocm-coqui"
  uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-rocm-hipblas-coqui"
  mirrors:
    - localai/localai-backends:latest-gpu-rocm-hipblas-coqui
- !!merge <<: *coqui
  name: "metal-coqui"
  uri: "quay.io/go-skynet/local-ai-backends:latest-metal-darwin-arm64-coqui"
  mirrors:
    - localai/localai-backends:latest-metal-darwin-arm64-coqui
- !!merge <<: *coqui
  name: "metal-coqui-development"
  uri: "quay.io/go-skynet/local-ai-backends:master-metal-darwin-arm64-coqui"
  mirrors:
    - localai/localai-backends:master-metal-darwin-arm64-coqui
## outetts
- !!merge <<: *outetts
  name: "outetts-development"
  capabilities:
    default: "cpu-outetts-development"
    nvidia-cuda-12: "cuda12-outetts-development"
- !!merge <<: *outetts
  name: "cpu-outetts"
  uri: "quay.io/go-skynet/local-ai-backends:latest-cpu-outetts"
  mirrors:
    - localai/localai-backends:latest-cpu-outetts
- !!merge <<: *outetts
  name: "cpu-outetts-development"
  uri: "quay.io/go-skynet/local-ai-backends:master-cpu-outetts"
  mirrors:
    - localai/localai-backends:master-cpu-outetts
- !!merge <<: *outetts
  name: "cuda12-outetts"
  uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-nvidia-cuda-12-outetts"
  mirrors:
    - localai/localai-backends:latest-gpu-nvidia-cuda-12-outetts
- !!merge <<: *outetts
  name: "cuda12-outetts-development"
  uri: "quay.io/go-skynet/local-ai-backends:master-gpu-nvidia-cuda-12-outetts"
  mirrors:
    - localai/localai-backends:master-gpu-nvidia-cuda-12-outetts
## chatterbox
- !!merge <<: *chatterbox
  name: "chatterbox-development"
  capabilities:
    nvidia: "cuda12-chatterbox-development"
    metal: "metal-chatterbox-development"
    default: "cpu-chatterbox-development"
    nvidia-l4t: "nvidia-l4t-arm64-chatterbox"
    nvidia-cuda-13: "cuda13-chatterbox-development"
    nvidia-cuda-12: "cuda12-chatterbox-development"
    nvidia-l4t-cuda-12: "nvidia-l4t-arm64-chatterbox"
    nvidia-l4t-cuda-13: "cuda13-nvidia-l4t-arm64-chatterbox-development"
- !!merge <<: *chatterbox
  name: "cpu-chatterbox"
  uri: "quay.io/go-skynet/local-ai-backends:latest-cpu-chatterbox"
  mirrors:
    - localai/localai-backends:latest-cpu-chatterbox
- !!merge <<: *chatterbox
  name: "cpu-chatterbox-development"
  uri: "quay.io/go-skynet/local-ai-backends:master-cpu-chatterbox"
  mirrors:
    - localai/localai-backends:master-cpu-chatterbox
- !!merge <<: *chatterbox
  name: "nvidia-l4t-arm64-chatterbox"
  uri: "quay.io/go-skynet/local-ai-backends:latest-nvidia-l4t-arm64-chatterbox"
  mirrors:
    - localai/localai-backends:latest-nvidia-l4t-arm64-chatterbox
- !!merge <<: *chatterbox
  name: "nvidia-l4t-arm64-chatterbox-development"
  uri: "quay.io/go-skynet/local-ai-backends:master-nvidia-l4t-arm64-chatterbox"
  mirrors:
    - localai/localai-backends:master-nvidia-l4t-arm64-chatterbox
- !!merge <<: *chatterbox
  name: "metal-chatterbox"
  uri: "quay.io/go-skynet/local-ai-backends:latest-metal-darwin-arm64-chatterbox"
  mirrors:
    - localai/localai-backends:latest-metal-darwin-arm64-chatterbox
- !!merge <<: *chatterbox
  name: "metal-chatterbox-development"
  uri: "quay.io/go-skynet/local-ai-backends:master-metal-darwin-arm64-chatterbox"
  mirrors:
    - localai/localai-backends:master-metal-darwin-arm64-chatterbox
- !!merge <<: *chatterbox
  name: "cuda12-chatterbox-development"
  uri: "quay.io/go-skynet/local-ai-backends:master-gpu-nvidia-cuda-12-chatterbox"
  mirrors:
    - localai/localai-backends:master-gpu-nvidia-cuda-12-chatterbox
- !!merge <<: *chatterbox
  name: "cuda12-chatterbox"
  uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-nvidia-cuda-12-chatterbox"
  mirrors:
    - localai/localai-backends:latest-gpu-nvidia-cuda-12-chatterbox
- !!merge <<: *chatterbox
  name: "cuda13-chatterbox"
  uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-nvidia-cuda-13-chatterbox"
  mirrors:
    - localai/localai-backends:latest-gpu-nvidia-cuda-13-chatterbox
- !!merge <<: *chatterbox
  name: "cuda13-chatterbox-development"
  uri: "quay.io/go-skynet/local-ai-backends:master-gpu-nvidia-cuda-13-chatterbox"
  mirrors:
    - localai/localai-backends:master-gpu-nvidia-cuda-13-chatterbox
- !!merge <<: *chatterbox
  name: "cuda13-nvidia-l4t-arm64-chatterbox"
  uri: "quay.io/go-skynet/local-ai-backends:latest-nvidia-l4t-cuda-13-arm64-chatterbox"
  mirrors:
    - localai/localai-backends:latest-nvidia-l4t-cuda-13-arm64-chatterbox
- !!merge <<: *chatterbox
  name: "cuda13-nvidia-l4t-arm64-chatterbox-development"
  uri: "quay.io/go-skynet/local-ai-backends:master-nvidia-l4t-cuda-13-arm64-chatterbox"
  mirrors:
    - localai/localai-backends:master-nvidia-l4t-cuda-13-arm64-chatterbox
## vibevoice
- !!merge <<: *vibevoice
  name: "vibevoice-development"
  capabilities:
    nvidia: "cuda12-vibevoice-development"
    intel: "intel-vibevoice-development"
    amd: "rocm-vibevoice-development"
    nvidia-l4t: "nvidia-l4t-vibevoice-development"
    metal: "metal-vibevoice-development"
    default: "cpu-vibevoice-development"
    nvidia-cuda-13: "cuda13-vibevoice-development"
    nvidia-cuda-12: "cuda12-vibevoice-development"
    nvidia-l4t-cuda-12: "nvidia-l4t-vibevoice-development"
    nvidia-l4t-cuda-13: "cuda13-nvidia-l4t-arm64-vibevoice-development"
- !!merge <<: *vibevoice
  name: "cpu-vibevoice"
  uri: "quay.io/go-skynet/local-ai-backends:latest-cpu-vibevoice"
  mirrors:
    - localai/localai-backends:latest-cpu-vibevoice
- !!merge <<: *vibevoice
  name: "cpu-vibevoice-development"
  uri: "quay.io/go-skynet/local-ai-backends:master-cpu-vibevoice"
  mirrors:
    - localai/localai-backends:master-cpu-vibevoice
- !!merge <<: *vibevoice
  name: "cuda12-vibevoice"
  uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-nvidia-cuda-12-vibevoice"
  mirrors:
    - localai/localai-backends:latest-gpu-nvidia-cuda-12-vibevoice
- !!merge <<: *vibevoice
  name: "cuda12-vibevoice-development"
  uri: "quay.io/go-skynet/local-ai-backends:master-gpu-nvidia-cuda-12-vibevoice"
  mirrors:
    - localai/localai-backends:master-gpu-nvidia-cuda-12-vibevoice
- !!merge <<: *vibevoice
  name: "cuda13-vibevoice"
  uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-nvidia-cuda-13-vibevoice"
  mirrors:
    - localai/localai-backends:latest-gpu-nvidia-cuda-13-vibevoice
- !!merge <<: *vibevoice
  name: "cuda13-vibevoice-development"
  uri: "quay.io/go-skynet/local-ai-backends:master-gpu-nvidia-cuda-13-vibevoice"
  mirrors:
    - localai/localai-backends:master-gpu-nvidia-cuda-13-vibevoice
- !!merge <<: *vibevoice
  name: "intel-vibevoice"
  uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-intel-vibevoice"
  mirrors:
    - localai/localai-backends:latest-gpu-intel-vibevoice
- !!merge <<: *vibevoice
  name: "intel-vibevoice-development"
  uri: "quay.io/go-skynet/local-ai-backends:master-gpu-intel-vibevoice"
  mirrors:
    - localai/localai-backends:master-gpu-intel-vibevoice
- !!merge <<: *vibevoice
  name: "rocm-vibevoice"
  uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-rocm-hipblas-vibevoice"
  mirrors:
    - localai/localai-backends:latest-gpu-rocm-hipblas-vibevoice
- !!merge <<: *vibevoice
  name: "rocm-vibevoice-development"
  uri: "quay.io/go-skynet/local-ai-backends:master-gpu-rocm-hipblas-vibevoice"
  mirrors:
    - localai/localai-backends:master-gpu-rocm-hipblas-vibevoice
- !!merge <<: *vibevoice
  name: "nvidia-l4t-vibevoice"
  uri: "quay.io/go-skynet/local-ai-backends:latest-nvidia-l4t-vibevoice"
  mirrors:
    - localai/localai-backends:latest-nvidia-l4t-vibevoice
- !!merge <<: *vibevoice
  name: "nvidia-l4t-vibevoice-development"
  uri: "quay.io/go-skynet/local-ai-backends:master-nvidia-l4t-vibevoice"
  mirrors:
    - localai/localai-backends:master-nvidia-l4t-vibevoice
- !!merge <<: *vibevoice
  name: "cuda13-nvidia-l4t-arm64-vibevoice"
  uri: "quay.io/go-skynet/local-ai-backends:latest-nvidia-l4t-cuda-13-arm64-vibevoice"
  mirrors:
    - localai/localai-backends:latest-nvidia-l4t-cuda-13-arm64-vibevoice
- !!merge <<: *vibevoice
  name: "cuda13-nvidia-l4t-arm64-vibevoice-development"
  uri: "quay.io/go-skynet/local-ai-backends:master-nvidia-l4t-cuda-13-arm64-vibevoice"
  mirrors:
    - localai/localai-backends:master-nvidia-l4t-cuda-13-arm64-vibevoice
- !!merge <<: *vibevoice
  name: "metal-vibevoice"
  uri: "quay.io/go-skynet/local-ai-backends:latest-metal-darwin-arm64-vibevoice"
  mirrors:
    - localai/localai-backends:latest-metal-darwin-arm64-vibevoice
- !!merge <<: *vibevoice
  name: "metal-vibevoice-development"
  uri: "quay.io/go-skynet/local-ai-backends:master-metal-darwin-arm64-vibevoice"
  mirrors:
    - localai/localai-backends:master-metal-darwin-arm64-vibevoice
## liquid-audio
- !!merge <<: *liquid-audio
  name: "liquid-audio-development"
  capabilities:
    nvidia: "cuda12-liquid-audio-development"
    intel: "intel-liquid-audio-development"
    amd: "rocm-liquid-audio-development"
    default: "cpu-liquid-audio-development"
    nvidia-cuda-13: "cuda13-liquid-audio-development"
    nvidia-cuda-12: "cuda12-liquid-audio-development"
    nvidia-l4t-cuda-13: "cuda13-nvidia-l4t-arm64-liquid-audio-development"
- !!merge <<: *liquid-audio
  name: "cpu-liquid-audio"
  uri: "quay.io/go-skynet/local-ai-backends:latest-cpu-liquid-audio"
  mirrors:
    - localai/localai-backends:latest-cpu-liquid-audio
- !!merge <<: *liquid-audio
  name: "cpu-liquid-audio-development"
  uri: "quay.io/go-skynet/local-ai-backends:master-cpu-liquid-audio"
  mirrors:
    - localai/localai-backends:master-cpu-liquid-audio
- !!merge <<: *liquid-audio
  name: "cuda12-liquid-audio"
  uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-nvidia-cuda-12-liquid-audio"
  mirrors:
    - localai/localai-backends:latest-gpu-nvidia-cuda-12-liquid-audio
- !!merge <<: *liquid-audio
  name: "cuda12-liquid-audio-development"
  uri: "quay.io/go-skynet/local-ai-backends:master-gpu-nvidia-cuda-12-liquid-audio"
  mirrors:
    - localai/localai-backends:master-gpu-nvidia-cuda-12-liquid-audio
- !!merge <<: *liquid-audio
  name: "cuda13-liquid-audio"
  uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-nvidia-cuda-13-liquid-audio"
  mirrors:
    - localai/localai-backends:latest-gpu-nvidia-cuda-13-liquid-audio
- !!merge <<: *liquid-audio
  name: "cuda13-liquid-audio-development"
  uri: "quay.io/go-skynet/local-ai-backends:master-gpu-nvidia-cuda-13-liquid-audio"
  mirrors:
    - localai/localai-backends:master-gpu-nvidia-cuda-13-liquid-audio
- !!merge <<: *liquid-audio
  name: "intel-liquid-audio"
  uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-intel-liquid-audio"
  mirrors:
    - localai/localai-backends:latest-gpu-intel-liquid-audio
- !!merge <<: *liquid-audio
  name: "intel-liquid-audio-development"
  uri: "quay.io/go-skynet/local-ai-backends:master-gpu-intel-liquid-audio"
  mirrors:
    - localai/localai-backends:master-gpu-intel-liquid-audio
- !!merge <<: *liquid-audio
  name: "rocm-liquid-audio"
  uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-rocm-hipblas-liquid-audio"
  mirrors:
    - localai/localai-backends:latest-gpu-rocm-hipblas-liquid-audio
- !!merge <<: *liquid-audio
  name: "rocm-liquid-audio-development"
  uri: "quay.io/go-skynet/local-ai-backends:master-gpu-rocm-hipblas-liquid-audio"
  mirrors:
    - localai/localai-backends:master-gpu-rocm-hipblas-liquid-audio
- !!merge <<: *liquid-audio
  name: "cuda13-nvidia-l4t-arm64-liquid-audio"
  uri: "quay.io/go-skynet/local-ai-backends:latest-nvidia-l4t-cuda-13-arm64-liquid-audio"
  mirrors:
    - localai/localai-backends:latest-nvidia-l4t-cuda-13-arm64-liquid-audio
- !!merge <<: *liquid-audio
  name: "cuda13-nvidia-l4t-arm64-liquid-audio-development"
  uri: "quay.io/go-skynet/local-ai-backends:master-nvidia-l4t-cuda-13-arm64-liquid-audio"
  mirrors:
    - localai/localai-backends:master-nvidia-l4t-cuda-13-arm64-liquid-audio
## qwen-tts
- !!merge <<: *qwen-tts
  name: "qwen-tts-development"
  capabilities:
    nvidia: "cuda12-qwen-tts-development"
    intel: "intel-qwen-tts-development"
    amd: "rocm-qwen-tts-development"
    nvidia-l4t: "nvidia-l4t-qwen-tts-development"
    metal: "metal-qwen-tts-development"
    default: "cpu-qwen-tts-development"
    nvidia-cuda-13: "cuda13-qwen-tts-development"
    nvidia-cuda-12: "cuda12-qwen-tts-development"
    nvidia-l4t-cuda-12: "nvidia-l4t-qwen-tts-development"
    nvidia-l4t-cuda-13: "cuda13-nvidia-l4t-arm64-qwen-tts-development"
- !!merge <<: *qwen-tts
  name: "cpu-qwen-tts"
  uri: "quay.io/go-skynet/local-ai-backends:latest-cpu-qwen-tts"
  mirrors:
    - localai/localai-backends:latest-cpu-qwen-tts
- !!merge <<: *qwen-tts
  name: "cpu-qwen-tts-development"
  uri: "quay.io/go-skynet/local-ai-backends:master-cpu-qwen-tts"
  mirrors:
    - localai/localai-backends:master-cpu-qwen-tts
- !!merge <<: *qwen-tts
  name: "cuda12-qwen-tts"
  uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-nvidia-cuda-12-qwen-tts"
  mirrors:
    - localai/localai-backends:latest-gpu-nvidia-cuda-12-qwen-tts
- !!merge <<: *qwen-tts
  name: "cuda12-qwen-tts-development"
  uri: "quay.io/go-skynet/local-ai-backends:master-gpu-nvidia-cuda-12-qwen-tts"
  mirrors:
    - localai/localai-backends:master-gpu-nvidia-cuda-12-qwen-tts
- !!merge <<: *qwen-tts
  name: "cuda13-qwen-tts"
  uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-nvidia-cuda-13-qwen-tts"
  mirrors:
    - localai/localai-backends:latest-gpu-nvidia-cuda-13-qwen-tts
- !!merge <<: *qwen-tts
  name: "cuda13-qwen-tts-development"
  uri: "quay.io/go-skynet/local-ai-backends:master-gpu-nvidia-cuda-13-qwen-tts"
  mirrors:
    - localai/localai-backends:master-gpu-nvidia-cuda-13-qwen-tts
- !!merge <<: *qwen-tts
  name: "intel-qwen-tts"
  uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-intel-qwen-tts"
  mirrors:
    - localai/localai-backends:latest-gpu-intel-qwen-tts
- !!merge <<: *qwen-tts
  name: "intel-qwen-tts-development"
  uri: "quay.io/go-skynet/local-ai-backends:master-gpu-intel-qwen-tts"
  mirrors:
    - localai/localai-backends:master-gpu-intel-qwen-tts
- !!merge <<: *qwen-tts
  name: "rocm-qwen-tts"
  uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-rocm-hipblas-qwen-tts"
  mirrors:
    - localai/localai-backends:latest-gpu-rocm-hipblas-qwen-tts
- !!merge <<: *qwen-tts
  name: "rocm-qwen-tts-development"
  uri: "quay.io/go-skynet/local-ai-backends:master-gpu-rocm-hipblas-qwen-tts"
  mirrors:
    - localai/localai-backends:master-gpu-rocm-hipblas-qwen-tts
- !!merge <<: *qwen-tts
  name: "nvidia-l4t-qwen-tts"
  uri: "quay.io/go-skynet/local-ai-backends:latest-nvidia-l4t-qwen-tts"
  mirrors:
    - localai/localai-backends:latest-nvidia-l4t-qwen-tts
- !!merge <<: *qwen-tts
  name: "nvidia-l4t-qwen-tts-development"
  uri: "quay.io/go-skynet/local-ai-backends:master-nvidia-l4t-qwen-tts"
  mirrors:
    - localai/localai-backends:master-nvidia-l4t-qwen-tts
- !!merge <<: *qwen-tts
  name: "cuda13-nvidia-l4t-arm64-qwen-tts"
  uri: "quay.io/go-skynet/local-ai-backends:latest-nvidia-l4t-cuda-13-arm64-qwen-tts"
  mirrors:
    - localai/localai-backends:latest-nvidia-l4t-cuda-13-arm64-qwen-tts
- !!merge <<: *qwen-tts
  name: "cuda13-nvidia-l4t-arm64-qwen-tts-development"
  uri: "quay.io/go-skynet/local-ai-backends:master-nvidia-l4t-cuda-13-arm64-qwen-tts"
  mirrors:
    - localai/localai-backends:master-nvidia-l4t-cuda-13-arm64-qwen-tts
- !!merge <<: *qwen-tts
  name: "metal-qwen-tts"
  uri: "quay.io/go-skynet/local-ai-backends:latest-metal-darwin-arm64-qwen-tts"
  mirrors:
    - localai/localai-backends:latest-metal-darwin-arm64-qwen-tts
- !!merge <<: *qwen-tts
  name: "metal-qwen-tts-development"
  uri: "quay.io/go-skynet/local-ai-backends:master-metal-darwin-arm64-qwen-tts"
  mirrors:
    - localai/localai-backends:master-metal-darwin-arm64-qwen-tts
## fish-speech
- !!merge <<: *fish-speech
  name: "fish-speech-development"
  capabilities:
    nvidia: "cuda12-fish-speech-development"
    intel: "intel-fish-speech-development"
    amd: "rocm-fish-speech-development"
    nvidia-l4t: "nvidia-l4t-fish-speech-development"
    metal: "metal-fish-speech-development"
    default: "cpu-fish-speech-development"
    nvidia-cuda-13: "cuda13-fish-speech-development"
    nvidia-cuda-12: "cuda12-fish-speech-development"
    nvidia-l4t-cuda-12: "nvidia-l4t-fish-speech-development"
    nvidia-l4t-cuda-13: "cuda13-nvidia-l4t-arm64-fish-speech-development"
- !!merge <<: *fish-speech
  name: "cpu-fish-speech"
  uri: "quay.io/go-skynet/local-ai-backends:latest-cpu-fish-speech"
  mirrors:
    - localai/localai-backends:latest-cpu-fish-speech
- !!merge <<: *fish-speech
  name: "cpu-fish-speech-development"
  uri: "quay.io/go-skynet/local-ai-backends:master-cpu-fish-speech"
  mirrors:
    - localai/localai-backends:master-cpu-fish-speech
- !!merge <<: *fish-speech
  name: "cuda12-fish-speech"
  uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-nvidia-cuda-12-fish-speech"
  mirrors:
    - localai/localai-backends:latest-gpu-nvidia-cuda-12-fish-speech
- !!merge <<: *fish-speech
  name: "cuda12-fish-speech-development"
  uri: "quay.io/go-skynet/local-ai-backends:master-gpu-nvidia-cuda-12-fish-speech"
  mirrors:
    - localai/localai-backends:master-gpu-nvidia-cuda-12-fish-speech
- !!merge <<: *fish-speech
  name: "cuda13-fish-speech"
  uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-nvidia-cuda-13-fish-speech"
  mirrors:
    - localai/localai-backends:latest-gpu-nvidia-cuda-13-fish-speech
- !!merge <<: *fish-speech
  name: "cuda13-fish-speech-development"
  uri: "quay.io/go-skynet/local-ai-backends:master-gpu-nvidia-cuda-13-fish-speech"
  mirrors:
    - localai/localai-backends:master-gpu-nvidia-cuda-13-fish-speech
- !!merge <<: *fish-speech
  name: "intel-fish-speech"
  uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-intel-fish-speech"
  mirrors:
    - localai/localai-backends:latest-gpu-intel-fish-speech
- !!merge <<: *fish-speech
  name: "intel-fish-speech-development"
  uri: "quay.io/go-skynet/local-ai-backends:master-gpu-intel-fish-speech"
  mirrors:
    - localai/localai-backends:master-gpu-intel-fish-speech
- !!merge <<: *fish-speech
  name: "rocm-fish-speech"
  uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-rocm-hipblas-fish-speech"
  mirrors:
    - localai/localai-backends:latest-gpu-rocm-hipblas-fish-speech
- !!merge <<: *fish-speech
  name: "rocm-fish-speech-development"
  uri: "quay.io/go-skynet/local-ai-backends:master-gpu-rocm-hipblas-fish-speech"
  mirrors:
    - localai/localai-backends:master-gpu-rocm-hipblas-fish-speech
- !!merge <<: *fish-speech
  name: "nvidia-l4t-fish-speech"
  uri: "quay.io/go-skynet/local-ai-backends:latest-nvidia-l4t-fish-speech"
  mirrors:
    - localai/localai-backends:latest-nvidia-l4t-fish-speech
- !!merge <<: *fish-speech
  name: "nvidia-l4t-fish-speech-development"
  uri: "quay.io/go-skynet/local-ai-backends:master-nvidia-l4t-fish-speech"
  mirrors:
    - localai/localai-backends:master-nvidia-l4t-fish-speech
- !!merge <<: *fish-speech
  name: "cuda13-nvidia-l4t-arm64-fish-speech"
  uri: "quay.io/go-skynet/local-ai-backends:latest-nvidia-l4t-cuda-13-arm64-fish-speech"
  mirrors:
    - localai/localai-backends:latest-nvidia-l4t-cuda-13-arm64-fish-speech
- !!merge <<: *fish-speech
  name: "cuda13-nvidia-l4t-arm64-fish-speech-development"
  uri: "quay.io/go-skynet/local-ai-backends:master-nvidia-l4t-cuda-13-arm64-fish-speech"
  mirrors:
    - localai/localai-backends:master-nvidia-l4t-cuda-13-arm64-fish-speech
- !!merge <<: *fish-speech
  name: "metal-fish-speech"
  uri: "quay.io/go-skynet/local-ai-backends:latest-metal-darwin-arm64-fish-speech"
  mirrors:
    - localai/localai-backends:latest-metal-darwin-arm64-fish-speech
- !!merge <<: *fish-speech
  name: "metal-fish-speech-development"
  uri: "quay.io/go-skynet/local-ai-backends:master-metal-darwin-arm64-fish-speech"
  mirrors:
    - localai/localai-backends:master-metal-darwin-arm64-fish-speech
## faster-qwen3-tts
|
||
- !!merge <<: *faster-qwen3-tts
|
||
name: "faster-qwen3-tts-development"
|
||
capabilities:
|
||
nvidia: "cuda12-faster-qwen3-tts-development"
|
||
default: "cuda12-faster-qwen3-tts-development"
|
||
nvidia-cuda-13: "cuda13-faster-qwen3-tts-development"
|
||
nvidia-cuda-12: "cuda12-faster-qwen3-tts-development"
|
||
nvidia-l4t: "nvidia-l4t-faster-qwen3-tts-development"
|
||
nvidia-l4t-cuda-12: "nvidia-l4t-faster-qwen3-tts-development"
|
||
nvidia-l4t-cuda-13: "cuda13-nvidia-l4t-arm64-faster-qwen3-tts-development"
|
||
- !!merge <<: *faster-qwen3-tts
|
||
name: "cuda12-faster-qwen3-tts"
|
||
uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-nvidia-cuda-12-faster-qwen3-tts"
|
||
mirrors:
|
||
- localai/localai-backends:latest-gpu-nvidia-cuda-12-faster-qwen3-tts
|
||
- !!merge <<: *faster-qwen3-tts
|
||
name: "cuda12-faster-qwen3-tts-development"
|
||
uri: "quay.io/go-skynet/local-ai-backends:master-gpu-nvidia-cuda-12-faster-qwen3-tts"
|
||
mirrors:
|
||
- localai/localai-backends:master-gpu-nvidia-cuda-12-faster-qwen3-tts
|
||
- !!merge <<: *faster-qwen3-tts
|
||
name: "cuda13-faster-qwen3-tts"
|
||
uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-nvidia-cuda-13-faster-qwen3-tts"
|
||
mirrors:
|
||
- localai/localai-backends:latest-gpu-nvidia-cuda-13-faster-qwen3-tts
|
||
- !!merge <<: *faster-qwen3-tts
|
||
name: "cuda13-faster-qwen3-tts-development"
|
||
uri: "quay.io/go-skynet/local-ai-backends:master-gpu-nvidia-cuda-13-faster-qwen3-tts"
|
||
mirrors:
|
||
- localai/localai-backends:master-gpu-nvidia-cuda-13-faster-qwen3-tts
|
||
- !!merge <<: *faster-qwen3-tts
|
||
name: "nvidia-l4t-faster-qwen3-tts"
|
||
uri: "quay.io/go-skynet/local-ai-backends:latest-nvidia-l4t-faster-qwen3-tts"
|
||
mirrors:
|
||
- localai/localai-backends:latest-nvidia-l4t-faster-qwen3-tts
|
||
- !!merge <<: *faster-qwen3-tts
|
||
name: "nvidia-l4t-faster-qwen3-tts-development"
|
||
uri: "quay.io/go-skynet/local-ai-backends:master-nvidia-l4t-faster-qwen3-tts"
|
||
mirrors:
|
||
- localai/localai-backends:master-nvidia-l4t-faster-qwen3-tts
|
||
- !!merge <<: *faster-qwen3-tts
|
||
name: "cuda13-nvidia-l4t-arm64-faster-qwen3-tts"
|
||
uri: "quay.io/go-skynet/local-ai-backends:latest-nvidia-l4t-cuda-13-arm64-faster-qwen3-tts"
|
||
mirrors:
|
||
- localai/localai-backends:latest-nvidia-l4t-cuda-13-arm64-faster-qwen3-tts
|
||
- !!merge <<: *faster-qwen3-tts
|
||
name: "cuda13-nvidia-l4t-arm64-faster-qwen3-tts-development"
|
||
uri: "quay.io/go-skynet/local-ai-backends:master-nvidia-l4t-cuda-13-arm64-faster-qwen3-tts"
|
||
mirrors:
|
||
- localai/localai-backends:master-nvidia-l4t-cuda-13-arm64-faster-qwen3-tts
|
||
## qwen-asr
|
||
- !!merge <<: *qwen-asr
|
||
name: "qwen-asr-development"
|
||
capabilities:
|
||
nvidia: "cuda12-qwen-asr-development"
|
||
intel: "intel-qwen-asr-development"
|
||
amd: "rocm-qwen-asr-development"
|
||
nvidia-l4t: "nvidia-l4t-qwen-asr-development"
|
||
metal: "metal-qwen-asr-development"
|
||
default: "cpu-qwen-asr-development"
|
||
nvidia-cuda-13: "cuda13-qwen-asr-development"
|
||
nvidia-cuda-12: "cuda12-qwen-asr-development"
|
||
nvidia-l4t-cuda-12: "nvidia-l4t-qwen-asr-development"
|
||
nvidia-l4t-cuda-13: "cuda13-nvidia-l4t-arm64-qwen-asr-development"
|
||
- !!merge <<: *qwen-asr
|
||
name: "cpu-qwen-asr"
|
||
uri: "quay.io/go-skynet/local-ai-backends:latest-cpu-qwen-asr"
|
||
mirrors:
|
||
- localai/localai-backends:latest-cpu-qwen-asr
|
||
- !!merge <<: *qwen-asr
|
||
name: "cpu-qwen-asr-development"
|
||
uri: "quay.io/go-skynet/local-ai-backends:master-cpu-qwen-asr"
|
||
mirrors:
|
||
- localai/localai-backends:master-cpu-qwen-asr
|
||
- !!merge <<: *qwen-asr
|
||
name: "cuda12-qwen-asr"
|
||
uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-nvidia-cuda-12-qwen-asr"
|
||
mirrors:
|
||
- localai/localai-backends:latest-gpu-nvidia-cuda-12-qwen-asr
|
||
- !!merge <<: *qwen-asr
|
||
name: "cuda12-qwen-asr-development"
|
||
uri: "quay.io/go-skynet/local-ai-backends:master-gpu-nvidia-cuda-12-qwen-asr"
|
||
mirrors:
|
||
- localai/localai-backends:master-gpu-nvidia-cuda-12-qwen-asr
|
||
- !!merge <<: *qwen-asr
|
||
name: "cuda13-qwen-asr"
|
||
uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-nvidia-cuda-13-qwen-asr"
|
||
mirrors:
|
||
- localai/localai-backends:latest-gpu-nvidia-cuda-13-qwen-asr
|
||
- !!merge <<: *qwen-asr
|
||
name: "cuda13-qwen-asr-development"
|
||
uri: "quay.io/go-skynet/local-ai-backends:master-gpu-nvidia-cuda-13-qwen-asr"
|
||
mirrors:
|
||
- localai/localai-backends:master-gpu-nvidia-cuda-13-qwen-asr
|
||
- !!merge <<: *qwen-asr
|
||
name: "intel-qwen-asr"
|
||
uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-intel-qwen-asr"
|
||
mirrors:
|
||
- localai/localai-backends:latest-gpu-intel-qwen-asr
|
||
- !!merge <<: *qwen-asr
|
||
name: "intel-qwen-asr-development"
|
||
uri: "quay.io/go-skynet/local-ai-backends:master-gpu-intel-qwen-asr"
|
||
mirrors:
|
||
- localai/localai-backends:master-gpu-intel-qwen-asr
|
||
- !!merge <<: *qwen-asr
|
||
name: "rocm-qwen-asr"
|
||
uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-rocm-hipblas-qwen-asr"
|
||
mirrors:
|
||
- localai/localai-backends:latest-gpu-rocm-hipblas-qwen-asr
|
||
- !!merge <<: *qwen-asr
|
||
name: "rocm-qwen-asr-development"
|
||
uri: "quay.io/go-skynet/local-ai-backends:master-gpu-rocm-hipblas-qwen-asr"
|
||
mirrors:
|
||
- localai/localai-backends:master-gpu-rocm-hipblas-qwen-asr
|
||
- !!merge <<: *qwen-asr
|
||
name: "nvidia-l4t-qwen-asr"
|
||
uri: "quay.io/go-skynet/local-ai-backends:latest-nvidia-l4t-qwen-asr"
|
||
mirrors:
|
||
- localai/localai-backends:latest-nvidia-l4t-qwen-asr
|
||
- !!merge <<: *qwen-asr
|
||
name: "nvidia-l4t-qwen-asr-development"
|
||
uri: "quay.io/go-skynet/local-ai-backends:master-nvidia-l4t-qwen-asr"
|
||
mirrors:
|
||
- localai/localai-backends:master-nvidia-l4t-qwen-asr
|
||
- !!merge <<: *qwen-asr
|
||
name: "cuda13-nvidia-l4t-arm64-qwen-asr"
|
||
uri: "quay.io/go-skynet/local-ai-backends:latest-nvidia-l4t-cuda-13-arm64-qwen-asr"
|
||
mirrors:
|
||
- localai/localai-backends:latest-nvidia-l4t-cuda-13-arm64-qwen-asr
|
||
- !!merge <<: *qwen-asr
|
||
name: "cuda13-nvidia-l4t-arm64-qwen-asr-development"
|
||
uri: "quay.io/go-skynet/local-ai-backends:master-nvidia-l4t-cuda-13-arm64-qwen-asr"
|
||
mirrors:
|
||
- localai/localai-backends:master-nvidia-l4t-cuda-13-arm64-qwen-asr
|
||
- !!merge <<: *qwen-asr
|
||
name: "metal-qwen-asr"
|
||
uri: "quay.io/go-skynet/local-ai-backends:latest-metal-darwin-arm64-qwen-asr"
|
||
mirrors:
|
||
- localai/localai-backends:latest-metal-darwin-arm64-qwen-asr
|
||
- !!merge <<: *qwen-asr
|
||
name: "metal-qwen-asr-development"
|
||
uri: "quay.io/go-skynet/local-ai-backends:master-metal-darwin-arm64-qwen-asr"
|
||
mirrors:
|
||
- localai/localai-backends:master-metal-darwin-arm64-qwen-asr
|
||
## nemo
|
||
- !!merge <<: *nemo
|
||
name: "nemo-development"
|
||
capabilities:
|
||
nvidia: "cuda12-nemo-development"
|
||
intel: "intel-nemo-development"
|
||
amd: "rocm-nemo-development"
|
||
metal: "metal-nemo-development"
|
||
default: "cpu-nemo-development"
|
||
nvidia-cuda-13: "cuda13-nemo-development"
|
||
nvidia-cuda-12: "cuda12-nemo-development"
|
||
- !!merge <<: *nemo
|
||
name: "cpu-nemo"
|
||
uri: "quay.io/go-skynet/local-ai-backends:latest-cpu-nemo"
|
||
mirrors:
|
||
- localai/localai-backends:latest-cpu-nemo
|
||
- !!merge <<: *nemo
|
||
name: "cpu-nemo-development"
|
||
uri: "quay.io/go-skynet/local-ai-backends:master-cpu-nemo"
|
||
mirrors:
|
||
- localai/localai-backends:master-cpu-nemo
|
||
- !!merge <<: *nemo
|
||
name: "cuda12-nemo"
|
||
uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-nvidia-cuda-12-nemo"
|
||
mirrors:
|
||
- localai/localai-backends:latest-gpu-nvidia-cuda-12-nemo
|
||
- !!merge <<: *nemo
|
||
name: "cuda12-nemo-development"
|
||
uri: "quay.io/go-skynet/local-ai-backends:master-gpu-nvidia-cuda-12-nemo"
|
||
mirrors:
|
||
- localai/localai-backends:master-gpu-nvidia-cuda-12-nemo
|
||
- !!merge <<: *nemo
|
||
name: "cuda13-nemo"
|
||
uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-nvidia-cuda-13-nemo"
|
||
mirrors:
|
||
- localai/localai-backends:latest-gpu-nvidia-cuda-13-nemo
|
||
- !!merge <<: *nemo
|
||
name: "cuda13-nemo-development"
|
||
uri: "quay.io/go-skynet/local-ai-backends:master-gpu-nvidia-cuda-13-nemo"
|
||
mirrors:
|
||
- localai/localai-backends:master-gpu-nvidia-cuda-13-nemo
|
||
- !!merge <<: *nemo
|
||
name: "intel-nemo"
|
||
uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-intel-nemo"
|
||
mirrors:
|
||
- localai/localai-backends:latest-gpu-intel-nemo
|
||
- !!merge <<: *nemo
|
||
name: "intel-nemo-development"
|
||
uri: "quay.io/go-skynet/local-ai-backends:master-gpu-intel-nemo"
|
||
mirrors:
|
||
- localai/localai-backends:master-gpu-intel-nemo
|
||
- !!merge <<: *nemo
|
||
name: "rocm-nemo"
|
||
uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-rocm-hipblas-nemo"
|
||
mirrors:
|
||
- localai/localai-backends:latest-gpu-rocm-hipblas-nemo
|
||
- !!merge <<: *nemo
|
||
name: "rocm-nemo-development"
|
||
uri: "quay.io/go-skynet/local-ai-backends:master-gpu-rocm-hipblas-nemo"
|
||
mirrors:
|
||
- localai/localai-backends:master-gpu-rocm-hipblas-nemo
|
||
- !!merge <<: *nemo
|
||
name: "metal-nemo"
|
||
uri: "quay.io/go-skynet/local-ai-backends:latest-metal-darwin-arm64-nemo"
|
||
mirrors:
|
||
- localai/localai-backends:latest-metal-darwin-arm64-nemo
|
||
- !!merge <<: *nemo
|
||
name: "metal-nemo-development"
|
||
uri: "quay.io/go-skynet/local-ai-backends:master-metal-darwin-arm64-nemo"
|
||
mirrors:
|
||
- localai/localai-backends:master-metal-darwin-arm64-nemo
|
||
## voxcpm
|
||
- !!merge <<: *voxcpm
|
||
name: "voxcpm-development"
|
||
capabilities:
|
||
nvidia: "cuda12-voxcpm-development"
|
||
intel: "intel-voxcpm-development"
|
||
amd: "rocm-voxcpm-development"
|
||
metal: "metal-voxcpm-development"
|
||
default: "cpu-voxcpm-development"
|
||
nvidia-cuda-13: "cuda13-voxcpm-development"
|
||
nvidia-cuda-12: "cuda12-voxcpm-development"
|
||
- !!merge <<: *voxcpm
|
||
name: "cpu-voxcpm"
|
||
uri: "quay.io/go-skynet/local-ai-backends:latest-cpu-voxcpm"
|
||
mirrors:
|
||
- localai/localai-backends:latest-cpu-voxcpm
|
||
- !!merge <<: *voxcpm
|
||
name: "cpu-voxcpm-development"
|
||
uri: "quay.io/go-skynet/local-ai-backends:master-cpu-voxcpm"
|
||
mirrors:
|
||
- localai/localai-backends:master-cpu-voxcpm
|
||
- !!merge <<: *voxcpm
|
||
name: "cuda12-voxcpm"
|
||
uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-nvidia-cuda-12-voxcpm"
|
||
mirrors:
|
||
- localai/localai-backends:latest-gpu-nvidia-cuda-12-voxcpm
|
||
- !!merge <<: *voxcpm
|
||
name: "cuda12-voxcpm-development"
|
||
uri: "quay.io/go-skynet/local-ai-backends:master-gpu-nvidia-cuda-12-voxcpm"
|
||
mirrors:
|
||
- localai/localai-backends:master-gpu-nvidia-cuda-12-voxcpm
|
||
- !!merge <<: *voxcpm
|
||
name: "cuda13-voxcpm"
|
||
uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-nvidia-cuda-13-voxcpm"
|
||
mirrors:
|
||
- localai/localai-backends:latest-gpu-nvidia-cuda-13-voxcpm
|
||
- !!merge <<: *voxcpm
|
||
name: "cuda13-voxcpm-development"
|
||
uri: "quay.io/go-skynet/local-ai-backends:master-gpu-nvidia-cuda-13-voxcpm"
|
||
mirrors:
|
||
- localai/localai-backends:master-gpu-nvidia-cuda-13-voxcpm
|
||
- !!merge <<: *voxcpm
|
||
name: "intel-voxcpm"
|
||
uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-intel-voxcpm"
|
||
mirrors:
|
||
- localai/localai-backends:latest-gpu-intel-voxcpm
|
||
- !!merge <<: *voxcpm
|
||
name: "intel-voxcpm-development"
|
||
uri: "quay.io/go-skynet/local-ai-backends:master-gpu-intel-voxcpm"
|
||
mirrors:
|
||
- localai/localai-backends:master-gpu-intel-voxcpm
|
||
- !!merge <<: *voxcpm
|
||
name: "rocm-voxcpm"
|
||
uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-rocm-hipblas-voxcpm"
|
||
mirrors:
|
||
- localai/localai-backends:latest-gpu-rocm-hipblas-voxcpm
|
||
- !!merge <<: *voxcpm
|
||
name: "rocm-voxcpm-development"
|
||
uri: "quay.io/go-skynet/local-ai-backends:master-gpu-rocm-hipblas-voxcpm"
|
||
mirrors:
|
||
- localai/localai-backends:master-gpu-rocm-hipblas-voxcpm
|
||
- !!merge <<: *voxcpm
|
||
name: "metal-voxcpm"
|
||
uri: "quay.io/go-skynet/local-ai-backends:latest-metal-darwin-arm64-voxcpm"
|
||
mirrors:
|
||
- localai/localai-backends:latest-metal-darwin-arm64-voxcpm
|
||
- !!merge <<: *voxcpm
|
||
name: "metal-voxcpm-development"
|
||
uri: "quay.io/go-skynet/local-ai-backends:master-metal-darwin-arm64-voxcpm"
|
||
mirrors:
|
||
- localai/localai-backends:master-metal-darwin-arm64-voxcpm
|
||
## pocket-tts
|
||
- !!merge <<: *pocket-tts
|
||
name: "pocket-tts-development"
|
||
capabilities:
|
||
nvidia: "cuda12-pocket-tts-development"
|
||
intel: "intel-pocket-tts-development"
|
||
amd: "rocm-pocket-tts-development"
|
||
nvidia-l4t: "nvidia-l4t-pocket-tts-development"
|
||
metal: "metal-pocket-tts-development"
|
||
default: "cpu-pocket-tts-development"
|
||
nvidia-cuda-13: "cuda13-pocket-tts-development"
|
||
nvidia-cuda-12: "cuda12-pocket-tts-development"
|
||
nvidia-l4t-cuda-12: "nvidia-l4t-pocket-tts-development"
|
||
nvidia-l4t-cuda-13: "cuda13-nvidia-l4t-arm64-pocket-tts-development"
|
||
- !!merge <<: *pocket-tts
|
||
name: "cpu-pocket-tts"
|
||
uri: "quay.io/go-skynet/local-ai-backends:latest-cpu-pocket-tts"
|
||
mirrors:
|
||
- localai/localai-backends:latest-cpu-pocket-tts
|
||
- !!merge <<: *pocket-tts
|
||
name: "cpu-pocket-tts-development"
|
||
uri: "quay.io/go-skynet/local-ai-backends:master-cpu-pocket-tts"
|
||
mirrors:
|
||
- localai/localai-backends:master-cpu-pocket-tts
|
||
- !!merge <<: *pocket-tts
|
||
name: "cuda12-pocket-tts"
|
||
uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-nvidia-cuda-12-pocket-tts"
|
||
mirrors:
|
||
- localai/localai-backends:latest-gpu-nvidia-cuda-12-pocket-tts
|
||
- !!merge <<: *pocket-tts
|
||
name: "cuda12-pocket-tts-development"
|
||
uri: "quay.io/go-skynet/local-ai-backends:master-gpu-nvidia-cuda-12-pocket-tts"
|
||
mirrors:
|
||
- localai/localai-backends:master-gpu-nvidia-cuda-12-pocket-tts
|
||
- !!merge <<: *pocket-tts
|
||
name: "cuda13-pocket-tts"
|
||
uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-nvidia-cuda-13-pocket-tts"
|
||
mirrors:
|
||
- localai/localai-backends:latest-gpu-nvidia-cuda-13-pocket-tts
|
||
- !!merge <<: *pocket-tts
|
||
name: "cuda13-pocket-tts-development"
|
||
uri: "quay.io/go-skynet/local-ai-backends:master-gpu-nvidia-cuda-13-pocket-tts"
|
||
mirrors:
|
||
- localai/localai-backends:master-gpu-nvidia-cuda-13-pocket-tts
|
||
- !!merge <<: *pocket-tts
|
||
name: "intel-pocket-tts"
|
||
uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-intel-pocket-tts"
|
||
mirrors:
|
||
- localai/localai-backends:latest-gpu-intel-pocket-tts
|
||
- !!merge <<: *pocket-tts
|
||
name: "intel-pocket-tts-development"
|
||
uri: "quay.io/go-skynet/local-ai-backends:master-gpu-intel-pocket-tts"
|
||
mirrors:
|
||
- localai/localai-backends:master-gpu-intel-pocket-tts
|
||
- !!merge <<: *pocket-tts
|
||
name: "rocm-pocket-tts"
|
||
uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-rocm-hipblas-pocket-tts"
|
||
mirrors:
|
||
- localai/localai-backends:latest-gpu-rocm-hipblas-pocket-tts
|
||
- !!merge <<: *pocket-tts
|
||
name: "rocm-pocket-tts-development"
|
||
uri: "quay.io/go-skynet/local-ai-backends:master-gpu-rocm-hipblas-pocket-tts"
|
||
mirrors:
|
||
- localai/localai-backends:master-gpu-rocm-hipblas-pocket-tts
|
||
- !!merge <<: *pocket-tts
|
||
name: "nvidia-l4t-pocket-tts"
|
||
uri: "quay.io/go-skynet/local-ai-backends:latest-nvidia-l4t-pocket-tts"
|
||
mirrors:
|
||
- localai/localai-backends:latest-nvidia-l4t-pocket-tts
|
||
- !!merge <<: *pocket-tts
|
||
name: "nvidia-l4t-pocket-tts-development"
|
||
uri: "quay.io/go-skynet/local-ai-backends:master-nvidia-l4t-pocket-tts"
|
||
mirrors:
|
||
- localai/localai-backends:master-nvidia-l4t-pocket-tts
|
||
- !!merge <<: *pocket-tts
|
||
name: "cuda13-nvidia-l4t-arm64-pocket-tts"
|
||
uri: "quay.io/go-skynet/local-ai-backends:latest-nvidia-l4t-cuda-13-arm64-pocket-tts"
|
||
mirrors:
|
||
- localai/localai-backends:latest-nvidia-l4t-cuda-13-arm64-pocket-tts
|
||
- !!merge <<: *pocket-tts
|
||
name: "cuda13-nvidia-l4t-arm64-pocket-tts-development"
|
||
uri: "quay.io/go-skynet/local-ai-backends:master-nvidia-l4t-cuda-13-arm64-pocket-tts"
|
||
mirrors:
|
||
- localai/localai-backends:master-nvidia-l4t-cuda-13-arm64-pocket-tts
|
||
- !!merge <<: *pocket-tts
|
||
name: "metal-pocket-tts"
|
||
uri: "quay.io/go-skynet/local-ai-backends:latest-metal-darwin-arm64-pocket-tts"
|
||
mirrors:
|
||
- localai/localai-backends:latest-metal-darwin-arm64-pocket-tts
|
||
- !!merge <<: *pocket-tts
|
||
name: "metal-pocket-tts-development"
|
||
uri: "quay.io/go-skynet/local-ai-backends:master-metal-darwin-arm64-pocket-tts"
|
||
mirrors:
|
||
- localai/localai-backends:master-metal-darwin-arm64-pocket-tts
|
||
## voxtral
|
||
- !!merge <<: *voxtral
|
||
name: "cpu-voxtral"
|
||
uri: "quay.io/go-skynet/local-ai-backends:latest-cpu-voxtral"
|
||
mirrors:
|
||
- localai/localai-backends:latest-cpu-voxtral
|
||
- !!merge <<: *voxtral
|
||
name: "cpu-voxtral-development"
|
||
uri: "quay.io/go-skynet/local-ai-backends:master-cpu-voxtral"
|
||
mirrors:
|
||
- localai/localai-backends:master-cpu-voxtral
|
||
- !!merge <<: *voxtral
|
||
name: "metal-voxtral"
|
||
uri: "quay.io/go-skynet/local-ai-backends:latest-metal-darwin-arm64-voxtral"
|
||
mirrors:
|
||
- localai/localai-backends:latest-metal-darwin-arm64-voxtral
|
||
- !!merge <<: *voxtral
|
||
name: "metal-voxtral-development"
|
||
uri: "quay.io/go-skynet/local-ai-backends:master-metal-darwin-arm64-voxtral"
|
||
mirrors:
|
||
- localai/localai-backends:master-metal-darwin-arm64-voxtral
|
||
- &trl
|
||
name: "trl"
|
||
alias: "trl"
|
||
license: apache-2.0
|
||
description: |
|
||
HuggingFace TRL fine-tuning backend. Supports SFT, DPO, GRPO, RLOO, Reward, KTO, ORPO training methods.
|
||
Works on CPU and GPU.
|
||
urls:
|
||
- https://github.com/huggingface/trl
|
||
tags:
|
||
- fine-tuning
|
||
- LLM
|
||
- CPU
|
||
- GPU
|
||
- CUDA
|
||
capabilities:
|
||
default: "cpu-trl"
|
||
nvidia: "cuda12-trl"
|
||
nvidia-cuda-12: "cuda12-trl"
|
||
nvidia-cuda-13: "cuda13-trl"
|
||
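# Hypothetical model config pinning this backend (a sketch only; the model
# name and the "method:sft" option string are illustrative placeholders,
# not a documented contract):
#
#   name: my-sft-finetune
#   backend: trl
#   options:
#     - "method:sft"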
## TRL backend images
- !!merge <<: *trl
  name: "cpu-trl"
  uri: "quay.io/go-skynet/local-ai-backends:latest-cpu-trl"
  mirrors:
    - localai/localai-backends:latest-cpu-trl
- !!merge <<: *trl
  name: "cpu-trl-development"
  uri: "quay.io/go-skynet/local-ai-backends:master-cpu-trl"
  mirrors:
    - localai/localai-backends:master-cpu-trl
- !!merge <<: *trl
  name: "cuda12-trl"
  uri: "quay.io/go-skynet/local-ai-backends:latest-cublas-cuda12-trl"
  mirrors:
    - localai/localai-backends:latest-cublas-cuda12-trl
- !!merge <<: *trl
  name: "cuda12-trl-development"
  uri: "quay.io/go-skynet/local-ai-backends:master-cublas-cuda12-trl"
  mirrors:
    - localai/localai-backends:master-cublas-cuda12-trl
- !!merge <<: *trl
  name: "cuda13-trl"
  uri: "quay.io/go-skynet/local-ai-backends:latest-cublas-cuda13-trl"
  mirrors:
    - localai/localai-backends:latest-cublas-cuda13-trl
- !!merge <<: *trl
  name: "cuda13-trl-development"
  uri: "quay.io/go-skynet/local-ai-backends:master-cublas-cuda13-trl"
  mirrors:
    - localai/localai-backends:master-cublas-cuda13-trl
## llama.cpp quantization backend
- &llama-cpp-quantization
  name: "llama-cpp-quantization"
  alias: "llama-cpp-quantization"
  license: mit
  icon: https://user-images.githubusercontent.com/1991296/230134379-7181e485-c521-4d23-a0d6-f7b3b61ba524.png
  description: |
    Model quantization backend using llama.cpp. Downloads HuggingFace models, converts them to GGUF format,
    and quantizes them to various formats (q4_k_m, q5_k_m, q8_0, f16, etc.).
  urls:
    - https://github.com/ggml-org/llama.cpp
  tags:
    - quantization
    - GGUF
    - CPU
  capabilities:
    default: "cpu-llama-cpp-quantization"
    metal: "metal-darwin-arm64-llama-cpp-quantization"
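  # Hypothetical usage sketch (the option key below is an illustrative
  # placeholder, not a documented contract): a quantization job would
  # reference this backend and pass the target format via options:
  #
  #   name: quantize-job
  #   backend: llama-cpp-quantization
  #   options:
  #     - "quantization:q4_k_m"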
- !!merge <<: *llama-cpp-quantization
  name: "cpu-llama-cpp-quantization"
  uri: "quay.io/go-skynet/local-ai-backends:latest-cpu-llama-cpp-quantization"
  mirrors:
    - localai/localai-backends:latest-cpu-llama-cpp-quantization
- !!merge <<: *llama-cpp-quantization
  name: "metal-darwin-arm64-llama-cpp-quantization"
  uri: "quay.io/go-skynet/local-ai-backends:latest-metal-darwin-arm64-llama-cpp-quantization"
  mirrors:
    - localai/localai-backends:latest-metal-darwin-arm64-llama-cpp-quantization
# insightface (face recognition) — development and concrete image entries
- !!merge <<: *insightface
  name: "insightface-development"
  capabilities:
    default: "cpu-insightface-development"
    nvidia: "cuda12-insightface-development"
    nvidia-cuda-12: "cuda12-insightface-development"
- !!merge <<: *insightface
  name: "cpu-insightface"
  uri: "quay.io/go-skynet/local-ai-backends:latest-cpu-insightface"
  mirrors:
    - localai/localai-backends:latest-cpu-insightface
- !!merge <<: *insightface
  name: "cuda12-insightface"
  uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-nvidia-cuda-12-insightface"
  mirrors:
    - localai/localai-backends:latest-gpu-nvidia-cuda-12-insightface
- !!merge <<: *insightface
  name: "cpu-insightface-development"
  uri: "quay.io/go-skynet/local-ai-backends:master-cpu-insightface"
  mirrors:
    - localai/localai-backends:master-cpu-insightface
- !!merge <<: *insightface
  name: "cuda12-insightface-development"
  uri: "quay.io/go-skynet/local-ai-backends:master-gpu-nvidia-cuda-12-insightface"
  mirrors:
    - localai/localai-backends:master-gpu-nvidia-cuda-12-insightface

# speaker-recognition (voice/speaker biometrics) — Apache-2.0 stack
- &speakerrecognition
  name: "speaker-recognition"
  alias: "speaker-recognition"
  # SpeechBrain is Apache-2.0. WeSpeaker / 3D-Speaker ONNX exports are
  # Apache-2.0. The backend itself ships only Python deps — all model
  # weights flow through LocalAI's gallery download mechanism (or
  # SpeechBrain's built-in HF auto-download at first LoadModel).
  license: apache-2.0
  description: |
    Speaker (voice) recognition backend — the audio analog to
    insightface. Wraps SpeechBrain ECAPA-TDNN (default engine, 192-d
    embeddings, ~1.9% EER on VoxCeleb) plus an OnnxDirectEngine for
    pre-exported WeSpeaker / 3D-Speaker ONNX models.

    Exposes speaker verification (/v1/voice/verify), speaker embedding
    (/v1/voice/embed), speaker analysis (/v1/voice/analyze), and 1:N
    speaker identification (/v1/voice/{register,identify,forget}).
    Registrations use LocalAI's built-in vector store: the same
    in-memory store that backs the face-recognition registry, run as a
    separate instance (see the commented usage sketch after this entry).
  urls:
    - https://speechbrain.github.io/
    - https://github.com/wenet-e2e/wespeaker
    - https://github.com/modelscope/3D-Speaker
  tags:
    - voice-recognition
    - speaker-verification
    - speaker-embedding
    - gpu
    - cpu
  capabilities:
    default: "cpu-speaker-recognition"
    nvidia: "cuda12-speaker-recognition"
    nvidia-cuda-12: "cuda12-speaker-recognition"
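  # Commented usage sketch for the endpoints listed in the description
  # above; the request field names are assumptions, not a documented
  # schema:
  #
  #   curl http://localhost:8080/v1/voice/verify \
  #     -H "Content-Type: application/json" \
  #     -d '{"model": "speaker-recognition",
  #          "audio_1": "<base64 wav>", "audio_2": "<base64 wav>"}'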
- !!merge <<: *speakerrecognition
  name: "speaker-recognition-development"
  capabilities:
    default: "cpu-speaker-recognition-development"
    nvidia: "cuda12-speaker-recognition-development"
    nvidia-cuda-12: "cuda12-speaker-recognition-development"
- !!merge <<: *speakerrecognition
  name: "cpu-speaker-recognition"
  uri: "quay.io/go-skynet/local-ai-backends:latest-cpu-speaker-recognition"
  mirrors:
    - localai/localai-backends:latest-cpu-speaker-recognition
- !!merge <<: *speakerrecognition
  name: "cuda12-speaker-recognition"
  uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-nvidia-cuda-12-speaker-recognition"
  mirrors:
    - localai/localai-backends:latest-gpu-nvidia-cuda-12-speaker-recognition
- !!merge <<: *speakerrecognition
  name: "cpu-speaker-recognition-development"
  uri: "quay.io/go-skynet/local-ai-backends:master-cpu-speaker-recognition"
  mirrors:
    - localai/localai-backends:master-cpu-speaker-recognition
- !!merge <<: *speakerrecognition
  name: "cuda12-speaker-recognition-development"
  uri: "quay.io/go-skynet/local-ai-backends:master-gpu-nvidia-cuda-12-speaker-recognition"
  mirrors:
    - localai/localai-backends:master-gpu-nvidia-cuda-12-speaker-recognition
## sherpa-onnx
- !!merge <<: *sherpa-onnx
  name: "sherpa-onnx-development"
  capabilities:
    default: "cpu-sherpa-onnx-development"
    nvidia: "cuda12-sherpa-onnx-development"
    nvidia-cuda-12: "cuda12-sherpa-onnx-development"
- !!merge <<: *sherpa-onnx
  name: "cpu-sherpa-onnx"
  uri: "quay.io/go-skynet/local-ai-backends:latest-cpu-sherpa-onnx"
  mirrors:
    - localai/localai-backends:latest-cpu-sherpa-onnx
- !!merge <<: *sherpa-onnx
  name: "cpu-sherpa-onnx-development"
  uri: "quay.io/go-skynet/local-ai-backends:master-cpu-sherpa-onnx"
  mirrors:
    - localai/localai-backends:master-cpu-sherpa-onnx
- !!merge <<: *sherpa-onnx
  name: "cuda12-sherpa-onnx"
  uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-nvidia-cuda-12-sherpa-onnx"
  mirrors:
    - localai/localai-backends:latest-gpu-nvidia-cuda-12-sherpa-onnx
- !!merge <<: *sherpa-onnx
  name: "cuda12-sherpa-onnx-development"
  uri: "quay.io/go-skynet/local-ai-backends:master-gpu-nvidia-cuda-12-sherpa-onnx"
  mirrors:
    - localai/localai-backends:master-gpu-nvidia-cuda-12-sherpa-onnx