LocalAI/backend/index.yaml
Ettore Di Giacinto 20baec77ab feat(face-recognition): add insightface/onnx backend for 1:1 verify, 1:N identify, embedding, detection, analysis (#9480)
* feat(face-recognition): add insightface backend for 1:1 verify, 1:N identify, embedding, detection, analysis

Adds face recognition as a new first-class capability in LocalAI via the
`insightface` Python backend, with a pluggable two-engine design so
non-commercial (insightface model packs) and commercial-safe
(OpenCV Zoo YuNet + SFace) models share the same gRPC/HTTP surface.

New gRPC RPCs (backend/backend.proto):
  * FaceVerify(FaceVerifyRequest) returns FaceVerifyResponse
  * FaceAnalyze(FaceAnalyzeRequest) returns FaceAnalyzeResponse

Existing Embedding and Detect RPCs are reused (face image in
PredictOptions.Images / DetectOptions.src) for face embedding and
face detection respectively.

New HTTP endpoints under /v1/face/:
  * verify     — 1:1 image pair same-person decision
  * analyze    — per-face age + gender (emotion/race reserved)
  * register   — 1:N enrollment; stores embedding in vector store
  * identify   — 1:N recognition; detect → embed → StoresFind
  * forget     — remove a registered face by opaque ID
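
Conceptually, the verify decision reduces to a distance check between
two face embeddings. A minimal sketch of the idea in Python (not the
backend's actual code; the 0.4 threshold is purely illustrative and
engine-specific):

    import numpy as np

    def verify(emb_a: np.ndarray, emb_b: np.ndarray, threshold: float = 0.4) -> bool:
        """Same-person decision over two embeddings (illustrative only)."""
        a = emb_a / np.linalg.norm(emb_a)  # L2-normalize both inputs
        b = emb_b / np.linalg.norm(emb_b)
        # For unit vectors, cosine distance = 1 - dot(a, b)
        return float(1.0 - np.dot(a, b)) <= threshold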

Service layer (core/services/facerecognition/) introduces a
`Registry` interface with one in-memory `storeRegistry` impl backed
by LocalAI's existing local-store gRPC vector backend. HTTP handlers
depend on the interface, not on StoresSet/StoresFind directly, so a
persistent PostgreSQL/pgvector implementation can be slotted in via a
single constructor change in core/application (TODO marker in the
package doc).
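
As a rough paraphrase of the surface the handlers depend on (in Python
for illustration; method names and signatures are assumptions, not the
actual Go definition):

    from typing import Protocol, Sequence

    class Registry(Protocol):
        """Assumed shape of the enrollment store used by /v1/face handlers."""

        def register(self, embedding: Sequence[float]) -> str:
            """Store an embedding and return an opaque face ID."""

        def identify(self, embedding: Sequence[float], top_k: int) -> list[tuple[str, float]]:
            """Return (face_id, distance) pairs for the closest enrolled faces."""

        def forget(self, face_id: str) -> None:
            """Remove a previously registered face by its opaque ID."""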

New usecase flag FLAG_FACE_RECOGNITION; insightface is also wired
into FLAG_DETECTION so /v1/detection works for face bounding boxes.

Gallery (backend/index.yaml) ships three entries:
  * insightface-buffalo-l   — SCRFD-10GF + ArcFace R50 + genderage
                              (~326MB pre-baked; non-commercial research use only)
  * insightface-opencv      — YuNet + SFace (~40MB pre-baked; Apache 2.0)
  * insightface-buffalo-s   — SCRFD-500MF + MBF (runtime download; non-commercial)

Python backend (backend/python/insightface/):
  * engines.py — FaceEngine protocol with InsightFaceEngine and
    OnnxDirectEngine (a sketch of the protocol follows this list);
    resolves model paths relative to the backend directory so the same
    gallery config works in docker-scratch and in the e2e-backends
    rootfs-extraction harness.
  * backend.py — gRPC servicer implementing Health, LoadModel, Status,
    Embedding, Detect, FaceVerify, FaceAnalyze.
  * install.sh — pre-bakes buffalo_l + OpenCV YuNet/SFace inside the
    backend directory so first-run is offline-clean (the final scratch
    image only preserves files under /<backend>/).
  * test.py — parametrized unit tests over both engines.
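
A plausible shape for the FaceEngine protocol both engines satisfy
(method names and signatures below are assumptions inferred from the
RPCs it has to serve, not the file's actual contents):

    from typing import Any, Protocol

    import numpy as np

    class FaceEngine(Protocol):
        """Assumed common surface for InsightFaceEngine / OnnxDirectEngine."""

        def prepare(self, options: dict[str, str]) -> None:
            """Load ONNX models; options carries LoadModel key:value pairs."""

        def detect(self, image: np.ndarray) -> list[dict[str, Any]]:
            """One dict per face: bounding box, score, landmarks if present."""

        def embed(self, image: np.ndarray) -> np.ndarray:
            """L2-normalized embedding of the most prominent face."""

        def analyze(self, image: np.ndarray) -> list[dict[str, Any]]:
            """Per-face attributes (age, gender) where the model supports them."""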

Tests:
  * Registry unit tests (go test -race ./core/services/facerecognition/...)
    — in-memory fake grpc.Backend, table-driven, covers register/
    identify/forget/error paths + concurrent access.
  * tests/e2e-backends/backend_test.go extended with face caps
    (face_detect, face_embed, face_verify, face_analyze); relative
    ordering + configurable verifyCeiling per engine.
  * Makefile targets: test-extra-backend-insightface-buffalo-l,
    -opencv, and the -all aggregate.
  * CI: .github/workflows/test-extra.yml gains tests-insightface-grpc,
    auto-triggered by changes under backend/python/insightface/.

Docs:
  * docs/content/features/face-recognition.md — feature page with
    license table, quickstart (defaults to the commercial-safe model),
    models matrix, API reference, 1:N workflow, storage caveats.
  * Cross-refs in object-detection.md, stores.md, embeddings.md, and
    whats-new.md.
  * Contributor README at backend/python/insightface/README.md.

Verified end-to-end:
  * buffalo_l: 6/6 specs (health, load, face_detect, face_embed,
    face_verify, face_analyze).
  * opencv: 5/5 specs (same minus face_analyze — SFace has no
    demographic head; correctly skipped via BACKEND_TEST_CAPS).

Assisted-by: Claude:claude-opus-4-7

* fix(face-recognition): move engine selection to model gallery, collapse backend entries

The previous commit put engine/model_pack options on backend gallery
entries (`backend/index.yaml`). That was wrong — `GalleryBackend`
(core/gallery/backend_types.go:32) has no `options` field, so the
YAML decoder silently dropped those keys and all three "different
insightface-*" backend entries resolved to the same container image
with no distinguishing configuration.

Correct split:

  * `backend/index.yaml` now has ONE `insightface` backend entry
    shipping the CPU + CUDA 12 container images. The Python backend
    bundles both the non-commercial insightface model packs
    (buffalo_l / buffalo_s) and the commercial-safe OpenCV Zoo
    weights (YuNet + SFace); the active engine is selected at
    LoadModel time via `options: ["engine:..."]`.

  * `gallery/index.yaml` gains three model entries —
    `insightface-buffalo-l`, `insightface-opencv`,
    `insightface-buffalo-s` — each setting the appropriate
    `overrides.backend` + `overrides.options` so installing one
    actually gives the user the intended engine. This matches how
    `rfdetr-base` lives in the model gallery against the `rfdetr`
    backend.

The earlier e2e tests passed despite this bug because the Makefile
targets pass `BACKEND_TEST_OPTIONS` directly to LoadModel via gRPC,
bypassing any gallery resolution entirely. No code changes needed.

Assisted-by: Claude:claude-opus-4-7

* feat(face-recognition): cover all supported models in the gallery + drop weight baking

Follows up on the model-gallery split: adds entries for every model
configuration either engine actually supports, and switches weight
delivery from image-baked to LocalAI's standard gallery mechanism.

Gallery now has seven `insightface-*` model entries (gallery/index.yaml):

  insightface (family)  — non-commercial research use
    • buffalo-l   (326MB)  — SCRFD-10GF + ResNet50 + genderage, default
    • buffalo-m   (313MB)  — SCRFD-2.5GF + ResNet50 + genderage
    • buffalo-s   (159MB)  — SCRFD-500MF + MBF + genderage
    • buffalo-sc  (16MB)   — SCRFD-500MF + MBF, recognition only
                             (no landmarks, no demographics — analyze
                             returns empty attributes)
    • antelopev2  (407MB)  — SCRFD-10GF + ResNet100@Glint360K + genderage

  OpenCV Zoo family — Apache 2.0 commercial-safe
    • opencv       — YuNet + SFace fp32 (~40MB)
    • opencv-int8  — YuNet + SFace int8 (~12MB, ~3x smaller, faster on CPU)

Model weights are no longer baked into the backend image. The image
now ships only the Python runtime + libraries (~275MB content size,
~1.18GB disk vs ~1.21GB when weights were baked). Weights flow through
LocalAI's gallery mechanism:

  * OpenCV variants list `files:` with ONNX URIs + SHA-256, so
    `local-ai models install insightface-opencv` pulls them into the
    models directory exactly like any other gallery-managed model.

  * insightface packs (upstream distributes .zip archives only, not
    individual ONNX files) auto-download on first LoadModel via
    FaceAnalysis' built-in machinery, rooted at the LocalAI models
    directory so they live alongside everything else — same pattern
    `rfdetr` uses with `inference.get_model()`.

Backend changes (backend/python/insightface/):

  * backend.py — LoadModel propagates `ModelOptions.ModelPath` (the
    LocalAI models directory) to engines via a `_model_dir` hint.
    This replaces the earlier ModelFile-dirname approach; ModelPath
    is the canonical "models directory" variable set by the Go loader
    (pkg/model/initializers.go:144) and is always populated.

  * engines.py::_resolve_model_path — picks up `model_dir` and searches
    it (plus basename-in-model-dir) before falling back to the dev
    script dir; a sketch of this order follows the list. This is how
    OnnxDirectEngine finds gallery-downloaded YuNet/SFace files by
    filename only.

  * engines.py::_flatten_insightface_pack — new helper that works
    around an upstream packaging inconsistency: buffalo_l/s/sc zips
    expand flat, but buffalo_m and antelopev2 zips wrap their ONNX
    files in a redundant `<name>/` directory. insightface's own
    loader looks one level too shallow and fails. We call
    `ensure_available()` explicitly, flatten if nested, then hand to
    FaceAnalysis.

  * engines.py::InsightFaceEngine.prepare — root-resolution order now
    includes the `_model_dir` hint so packs download into the LocalAI
    models directory by default.

  * install.sh — no longer pre-downloads any weights. Everything is
    gallery-managed now.

  * smoke.py (new) — parametrized smoke test that iterates over every
    gallery configuration, simulating the LocalAI install flow
    (creates a models dir, fetches OpenCV files with checksum
    verification, lets insightface auto-download its packs), then
    runs detect + embed + verify (+ analyze where supported) through
    the in-process BackendServicer.

  * test.py — OnnxDirectEngineTest no longer hardcodes `/models/opencv/`
    paths; downloads ONNX files to a temp dir at setUpClass time and
    passes ModelPath accordingly.
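
A sketch of the resolution order described for _resolve_model_path (the
helper name is from this log; the body is an illustration of the stated
behavior, not the shipped code):

    import os

    def _resolve_model_path(path: str, model_dir: str | None = None) -> str:
        """Try absolute path, then the models dir, then the dev script dir."""
        if os.path.isabs(path) and os.path.exists(path):
            return path
        candidates = []
        if model_dir:
            candidates.append(os.path.join(model_dir, path))
            # basename-in-model-dir: gallery files are found by filename only
            candidates.append(os.path.join(model_dir, os.path.basename(path)))
        script_dir = os.path.dirname(os.path.abspath(__file__))
        candidates.append(os.path.join(script_dir, path))  # dev fallback
        for candidate in candidates:
            if os.path.exists(candidate):
                return candidate
        raise FileNotFoundError(f"model file not found: {path}")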

Registry change (core/services/facerecognition/store_registry.go):

  * `dim=0` in NewStoreRegistry now means "accept whatever dimension
    arrives" — needed because the backend supports 512-d ArcFace/MBF
    and 128-d SFace via the same Registry. A non-zero dim still fails
    fast with ErrDimensionMismatch.

  * core/application plumbs `faceEmbeddingDim = 0`, explaining the
    rationale in the comment.
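
Restated for illustration (the real check is Go code in
store_registry.go; this Python restatement only mirrors the rule):

    def check_dim(expected_dim: int, embedding: list[float]) -> None:
        """expected_dim == 0 accepts any length; non-zero must match exactly."""
        if expected_dim and len(embedding) != expected_dim:
            raise ValueError(
                f"dimension mismatch: got {len(embedding)}, want {expected_dim}"
            )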

Backend gallery description updated to reflect that the image carries
no weights — it's just Python + engines.

Smoke-tested all 7 configurations against the rebuilt image (with the
flatten fix applied), exit 0:

    PASS: insightface-buffalo-l    faces=6 dim=512 same-dist=0.000
    PASS: insightface-buffalo-sc   faces=6 dim=512 same-dist=0.000
    PASS: insightface-buffalo-s    faces=6 dim=512 same-dist=0.000
    PASS: insightface-buffalo-m    faces=6 dim=512 same-dist=0.000
    PASS: insightface-antelopev2   faces=6 dim=512 same-dist=0.000
    PASS: insightface-opencv       faces=6 dim=128 same-dist=0.000
    PASS: insightface-opencv-int8  faces=6 dim=128 same-dist=0.000
    7/7 passed

Assisted-by: Claude:claude-opus-4-7

* fix(face-recognition): pre-fetch OpenCV ONNX for e2e target; drop stale pre-baked claim

CI regression from the previous commit: I moved OpenCV Zoo weight
delivery to LocalAI's gallery `files:` mechanism, but the
test-extra-backend-insightface-opencv target was still passing
relative paths `detector_onnx:models/opencv/yunet.onnx` in
BACKEND_TEST_OPTIONS. The e2e suite drives LoadModel directly over
gRPC without going through the gallery, so those relative paths
resolved to nothing and OpenCV's ONNXImporter failed:

    LoadModel failed: Failed to load face engine:
    OpenCV(4.13.0) ... Can't read ONNX file: models/opencv/yunet.onnx

Fix: add an `insightface-opencv-models` prerequisite target that
fetches the two ONNX files (YuNet + SFace) to a deterministic host
cache at /tmp/localai-insightface-opencv-cache/, verifies SHA-256,
and skips the download on re-runs. The opencv test target depends on
it and passes absolute paths in BACKEND_TEST_OPTIONS, so the backend
finds the files via its normal absolute-path resolution branch.

Also refresh the buffalo_l comment: it no longer says "pre-baked"
(nothing is — the pack auto-downloads from upstream's GitHub release
on first LoadModel, same as in CI).

Locally verified: `make test-extra-backend-insightface-opencv` passes
5/5 specs (health, load, face_detect, face_embed, face_verify).

Assisted-by: Claude:claude-opus-4-7

* feat(face-recognition): add POST /v1/face/embed + correct /v1/embeddings docs

The docs promised that /v1/embeddings returns face vectors when you
send an image data-URI. That was never true: /v1/embeddings is
OpenAI-compatible and text-only by contract — its handler goes
through `core/backend/embeddings.go::ModelEmbedding`, which sets
`predictOptions.Embeddings = s` (a string of TEXT to embed) and never
populates `predictOptions.Images[]`. The Python backend's Embedding
gRPC method does handle Images[] (that's how /v1/face/register reaches
it internally via `backend.FaceEmbed`), but the HTTP embeddings
endpoint wasn't wired to populate it.

Rather than overload /v1/embeddings with image-vs-text detection —
messy, and the endpoint is OpenAI-compatible by design — add a
dedicated /v1/face/embed endpoint that wraps `backend.FaceEmbed`
(already used internally by /v1/face/register and /v1/face/identify).

Matches LocalAI's convention of a dedicated path per non-standard flow
(/v1/rerank, /v1/detection, /v1/face/verify etc.).

Response:

    {
      "embedding": [<dim> floats, L2-normed],
      "dim": int,           // 512 for ArcFace R50 / MBF, 128 for SFace
      "model": "<name>"
    }

Live-tested on the opencv engine: returns a 128-d L2-normalized vector
(sum(x^2) = 1.0000). A sentinel note in the docs was updated to say
/v1/embeddings is text-only and to point image users at /v1/face/embed
instead.
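
The normalization claim is cheap to re-check client-side. A sketch
assuming a local instance on :8080 and a request body carrying model
and an image data-URI (the request field names are assumptions; the
response shape is documented above):

    import numpy as np
    import requests

    resp = requests.post(
        "http://localhost:8080/v1/face/embed",
        json={"model": "insightface-opencv", "image": "data:image/png;base64,..."},
    ).json()

    vec = np.asarray(resp["embedding"])
    assert len(vec) == resp["dim"]
    print("sum(x^2) =", float(np.sum(vec ** 2)))  # ~1.0 when L2-normalized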

Assisted-by: Claude:claude-opus-4-7

* fix(http): map malformed image input + gRPC status codes to proper 4xx

Image-input failures on LocalAI's single-image endpoints (/v1/detection,
/v1/face/{verify,analyze,embed,register,identify}) have historically
returned 500 — even when the client was the one who sent garbage.
Classic example: you POST an "image" that isn't a URL, isn't a
data-URI, and isn't a valid JPEG/PNG — the server shouldn't claim
that's its fault.

Two helpers land in core/http/endpoints/localai/images.go and every
single-image handler is switched over:

  * decodeImageInput(s)
      Wraps utils.GetContentURIAsBase64 and turns any failure
      (invalid URL, not a data-URI, download error, etc.) into
      echo.NewHTTPError(400, "invalid image input: ...").

  * mapBackendError(err)
      Inspects the gRPC status on a backend call error and maps:
        INVALID_ARGUMENT     → 400 Bad Request
        NOT_FOUND            → 404 Not Found
        FAILED_PRECONDITION  → 412 Precondition Failed
UNIMPLEMENTED        → 501 Not Implemented
      All other codes fall through unchanged (still 500).

Before, my 1×1 PNG error-path test returned:
    HTTP 500 "rpc error: code = InvalidArgument desc = failed to decode one or both images"
After:
    HTTP 400 "failed to decode one or both images"

Scope-limited to the LocalAI single-image endpoints. The multi-modal
paths (middleware/request.go, openresponses/responses.go,
openai/realtime.go) intentionally log-and-skip individual media parts
when decoding fails — different design intent (graceful degradation
of a multi-part message), not a 400-worthy failure. Left untouched.

Live-verified: every error case in /tmp/face_errors.py now returns
4xx with a meaningful message; the "image with no face (1x1 PNG)"
case specifically went from 500 → 400.

Assisted-by: Claude:claude-opus-4-7

* refactor(face-recognition): insightface packs go through gallery files:, drop FaceAnalysis

Follows up on the discovery that LocalAI's gallery `files:` mechanism
handles archives (zip, tar.gz, …) via mholt/archiver/v3 — the rhasspy
piper voices use exactly this pattern. Insightface packs are zip
archives, so we can now deliver them the same way every other
gallery-managed model gets delivered: declaratively, checksum-verified,
through LocalAI's standard download+extract pipeline.

Two changes:

1. Gallery (gallery/index.yaml) — every insightface-* entry gains a
   `files:` list with the pack zip's URI + SHA-256. `local-ai models
   install insightface-buffalo-l` now fetches the zip, verifies the
   hash, and extracts it into the models directory. No more reliance
   on insightface's library-internal `ensure_available()` auto-download
   or its hardcoded `BASE_REPO_URL`.

2. InsightFaceEngine (backend/python/insightface/engines.py) — drops
   the FaceAnalysis wrapper and drives insightface's `model_zoo`
   directly. The ~50 lines FaceAnalysis provides — glob ONNX files,
   route each through `model_zoo.get_model()`, build a
   `{taskname: model}` dict, loop per-face at inference — are
   reimplemented in `InsightFaceEngine`. The actual inference classes
   (RetinaFace, ArcFaceONNX, Attribute, Landmark) are still
   insightface's — we only replicate the glue, so drift risk against
   upstream is minimal.

   Why drop FaceAnalysis: it hard-codes a `<root>/models/<name>/*.onnx`
   layout that doesn't match what LocalAI's zip extraction produces.
   LocalAI unpacks archives flat into `<models_dir>`. Upstream packs
   are inconsistent — buffalo_l/s/sc ship ONNX at the zip root (lands
   at `<models_dir>/*.onnx`), buffalo_m/antelopev2 wrap in a redundant
   `<name>/` dir (lands at `<models_dir>/<name>/*.onnx`). The new
   `_locate_insightface_pack` helper searches both locations plus
   legacy paths and returns whichever has ONNX files. Replaces the
   earlier `_flatten_insightface_pack` helper (which tried to fight
   FaceAnalysis's layout expectations; now we just find the files
   wherever they are).
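
A sketch of that search order (illustrative body; only the candidate
locations are from this log):

    import glob
    import os

    def _locate_insightface_pack(models_dir: str, name: str) -> str:
        """Return whichever candidate directory actually holds ONNX files."""
        candidates = [
            models_dir,                                # buffalo_l/s/sc: flat extraction
            os.path.join(models_dir, name),            # buffalo_m/antelopev2: nested dir
            os.path.join(models_dir, "models", name),  # legacy FaceAnalysis layout
        ]
        for directory in candidates:
            if glob.glob(os.path.join(directory, "*.onnx")):
                return directory
        raise FileNotFoundError(f"no insightface pack found for {name!r}")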

Net effect for users: install once via LocalAI's managed flow,
weights live alongside every other model, progress shows in the
jobs endpoint, no first-load network call. Same API surface,
cleaner plumbing.

Assisted-by: Claude:claude-opus-4-7

* fix(face-recognition): CI's insightface e2e path needs the pack pre-fetched

The e2e suite drives LoadModel over gRPC without going through LocalAI's
gallery flow, so the engine's `_model_dir` option (normally populated
from ModelPath) is empty. Previously the insightface target relied on
FaceAnalysis auto-download to paper over this, but we dropped
FaceAnalysis in favor of direct model_zoo calls — so the buffalo_l
target started failing at LoadModel with "no insightface pack found".

Mirror the opencv target's pre-fetch pattern: download buffalo_sc.zip
(same SHA as the gallery entry), extract it on the host, and pass
`root:<dir>` so the engine locates the pack without needing
ModelPath. Switched to buffalo_sc (smallest pack, ~16MB) to keep CI
fast; it covers the same insightface engine code path as buffalo_l.

Face analyze cap dropped since buffalo_sc has no age/gender head.

Assisted-by: Claude:claude-opus-4-7

* feat(face-recognition): surface face-recognition in advertised feature maps

The six /v1/face/* endpoints were missing from every place LocalAI
advertises its feature surface to clients:

  * api_instructions — the machine-readable capability index at
    GET /api/instructions. Added `face-recognition` as a dedicated
    instruction area with an intro that calls out the in-memory
    registry caveat and the /v1/face/embed vs /v1/embeddings split.
  * auth/permissions — added FeatureFaceRecognition constant, routed
    all six face endpoints through it so admins can gate them per-user
    like any other API feature. Default ON (matches the other API
    features).
  * React UI capabilities — CAP_FACE_RECOGNITION symbol mapped to
    FLAG_FACE_RECOGNITION. Declared only for now; the Face page is a
    follow-up (noted in the plan).

Instruction count bumped 9 → 10; test updated.

Assisted-by: Claude:claude-opus-4-7

* docs(agents): capture advertising-surface steps in the endpoint guide

Before this change, adding a new /v1/* endpoint reliably missed one or
more of: the swagger @Tags annotation, the /api/instructions registry,
the auth RouteFeatureRegistry, and the React UI CAP_* symbol. The
endpoint would work but be invisible to API consumers, admins, and the
UI — and nothing in the existing docs said to look in those places.

Extend .agents/api-endpoints-and-auth.md with a new "Advertising
surfaces" section covering all four surfaces (swagger tags, /api/
instructions, capabilities.js, docs/), and expand the closing checklist
so it's impossible to ship a feature without visiting each one. Hoist a
one-liner reminder into AGENTS.md's Quick Reference so agents skim it
before diving in.

Assisted-by: Claude:claude-opus-4-7
2026-04-22 21:55:41 +02:00

---
## metas
- &llamacpp
  name: "llama-cpp"
  alias: "llama-cpp"
  license: mit
  icon: https://user-images.githubusercontent.com/1991296/230134379-7181e485-c521-4d23-a0d6-f7b3b61ba524.png
  description: |
    LLM inference in C/C++
  urls:
    - https://github.com/ggerganov/llama.cpp
  tags:
    - text-to-text
    - LLM
    - CPU
    - GPU
    - Metal
    - CUDA
    - HIP
  capabilities:
    default: "cpu-llama-cpp"
    nvidia: "cuda12-llama-cpp"
    intel: "intel-sycl-f16-llama-cpp"
    amd: "rocm-llama-cpp"
    metal: "metal-llama-cpp"
    vulkan: "vulkan-llama-cpp"
    nvidia-l4t: "nvidia-l4t-arm64-llama-cpp"
    nvidia-cuda-13: "cuda13-llama-cpp"
    nvidia-cuda-12: "cuda12-llama-cpp"
    nvidia-l4t-cuda-12: "nvidia-l4t-arm64-llama-cpp"
    nvidia-l4t-cuda-13: "cuda13-nvidia-l4t-arm64-llama-cpp"
- &ikllamacpp
  name: "ik-llama-cpp"
  alias: "ik-llama-cpp"
  license: mit
  description: |
    Fork of llama.cpp optimized for CPU performance by ikawrakow
  urls:
    - https://github.com/ikawrakow/ik_llama.cpp
  tags:
    - text-to-text
    - LLM
    - CPU
  capabilities:
    default: "cpu-ik-llama-cpp"
- &turboquant
  name: "turboquant"
  alias: "turboquant"
  license: mit
  description: |
    Fork of llama.cpp adding the TurboQuant KV-cache quantization scheme.
    Reuses the LocalAI llama.cpp gRPC server sources against the fork's libllama.
  urls:
    - https://github.com/TheTom/llama-cpp-turboquant
  tags:
    - text-to-text
    - LLM
    - CPU
    - GPU
    - CUDA
    - HIP
    - turboquant
    - kv-cache
  capabilities:
    default: "cpu-turboquant"
    nvidia: "cuda12-turboquant"
    intel: "intel-sycl-f16-turboquant"
    amd: "rocm-turboquant"
    vulkan: "vulkan-turboquant"
    nvidia-l4t: "nvidia-l4t-arm64-turboquant"
    nvidia-cuda-13: "cuda13-turboquant"
    nvidia-cuda-12: "cuda12-turboquant"
    nvidia-l4t-cuda-12: "nvidia-l4t-arm64-turboquant"
    nvidia-l4t-cuda-13: "cuda13-nvidia-l4t-arm64-turboquant"
- &whispercpp
  name: "whisper"
  alias: "whisper"
  license: mit
  icon: https://user-images.githubusercontent.com/1991296/235238348-05d0f6a4-da44-4900-a1de-d0707e75b763.jpeg
  description: |
    Port of OpenAI's Whisper model in C/C++
  urls:
    - https://github.com/ggml-org/whisper.cpp
  tags:
    - audio-transcription
    - CPU
    - GPU
    - CUDA
    - HIP
  capabilities:
    default: "cpu-whisper"
    nvidia: "cuda12-whisper"
    intel: "intel-sycl-f16-whisper"
    metal: "metal-whisper"
    amd: "rocm-whisper"
    vulkan: "vulkan-whisper"
    nvidia-l4t: "nvidia-l4t-arm64-whisper"
    nvidia-cuda-13: "cuda13-whisper"
    nvidia-cuda-12: "cuda12-whisper"
    nvidia-l4t-cuda-12: "nvidia-l4t-arm64-whisper"
    nvidia-l4t-cuda-13: "cuda13-nvidia-l4t-arm64-whisper"
- &voxtral
  name: "voxtral"
  alias: "voxtral"
  license: mit
  description: |
    Voxtral Realtime 4B Pure C speech-to-text inference engine
  urls:
    - https://github.com/mudler/voxtral.c
  tags:
    - audio-transcription
    - CPU
    - Metal
  capabilities:
    default: "cpu-voxtral"
    metal-darwin-arm64: "metal-voxtral"
- &stablediffusionggml
  name: "stablediffusion-ggml"
  alias: "stablediffusion-ggml"
  license: mit
  icon: https://github.com/leejet/stable-diffusion.cpp/raw/master/assets/cat_with_sd_cpp_42.png
  description: |
    Stable Diffusion and Flux in pure C/C++
  urls:
    - https://github.com/leejet/stable-diffusion.cpp
  tags:
    - image-generation
    - CPU
    - GPU
    - Metal
    - CUDA
    - HIP
  capabilities:
    default: "cpu-stablediffusion-ggml"
    nvidia: "cuda12-stablediffusion-ggml"
    intel: "intel-sycl-f16-stablediffusion-ggml"
    # amd: "rocm-stablediffusion-ggml"
    vulkan: "vulkan-stablediffusion-ggml"
    nvidia-l4t: "nvidia-l4t-arm64-stablediffusion-ggml"
    metal: "metal-stablediffusion-ggml"
    nvidia-cuda-13: "cuda13-stablediffusion-ggml"
    nvidia-cuda-12: "cuda12-stablediffusion-ggml"
    nvidia-l4t-cuda-12: "nvidia-l4t-arm64-stablediffusion-ggml"
    nvidia-l4t-cuda-13: "cuda13-nvidia-l4t-arm64-stablediffusion-ggml"
- &rfdetr
  name: "rfdetr"
  alias: "rfdetr"
  license: apache-2.0
  icon: https://avatars.githubusercontent.com/u/53104118?s=200&v=4
  description: |
    RF-DETR is a real-time, transformer-based object detection model architecture developed by Roboflow and released under the Apache 2.0 license.
    RF-DETR is the first real-time model to exceed 60 AP on the Microsoft COCO benchmark, alongside competitive performance at base sizes. It also achieves state-of-the-art performance on RF100-VL, an object detection benchmark that measures model domain adaptability to real-world problems. RF-DETR is the fastest and most accurate model for its size when compared to current real-time object detection models.
    RF-DETR is small enough to run on the edge using Inference, making it an ideal model for deployments that need both strong accuracy and real-time performance.
  urls:
    - https://github.com/roboflow/rf-detr
  tags:
    - object-detection
    - rfdetr
    - gpu
    - cpu
  capabilities:
    nvidia: "cuda12-rfdetr"
    intel: "intel-rfdetr"
    #amd: "rocm-rfdetr"
    nvidia-l4t: "nvidia-l4t-arm64-rfdetr"
    metal: "metal-rfdetr"
    default: "cpu-rfdetr"
    nvidia-cuda-13: "cuda13-rfdetr"
    nvidia-cuda-12: "cuda12-rfdetr"
    nvidia-l4t-cuda-12: "nvidia-l4t-arm64-rfdetr"
- &insightface
  name: "insightface"
  alias: "insightface"
  # Upstream insightface library is MIT. The pretrained model packs
  # (buffalo_l, buffalo_s, antelopev2) are released for NON-COMMERCIAL
  # research use only. OpenCV Zoo YuNet + SFace (Apache 2.0) is
  # available for commercial use; all weights are gallery-managed, not
  # baked into the image. Pick the engine via the insightface-*
  # model-gallery entries or set `options` in your model YAML.
  license: "mixed"
  description: |
    Face recognition backend powered by `insightface` (ONNX Runtime).
    Provides face verification (/v1/face/verify), face analysis
    (/v1/face/analyze), face embedding (/v1/face/embed), face
    detection (/v1/detection), and 1:N identification
    (/v1/face/{register,identify,forget}).
    Ships two engines in a single image: one that drives the insightface
    model packs (buffalo_l/s/m/sc, antelopev2 — non-commercial research
    use only) and one that drives OpenCV Zoo's YuNet + SFace pair
    (Apache 2.0 — commercial-safe). Select via `options: ["engine:..."]`
    in your model YAML, or install one of the ready-made model-gallery
    entries under the `insightface-*` prefix.
    The backend image contains only code and Python deps; all model
    weights are managed by LocalAI's gallery download mechanism.
  urls:
    - https://github.com/deepinsight/insightface
    - https://github.com/opencv/opencv_zoo
  tags:
    - face-recognition
    - face-verification
    - face-embedding
    - gpu
    - cpu
  capabilities:
    default: "cpu-insightface"
    nvidia: "cuda12-insightface"
    nvidia-cuda-12: "cuda12-insightface"
- &sam3cpp
  name: "sam3-cpp"
  alias: "sam3-cpp"
  license: mit
  description: |
    Segment Anything Model (SAM 3/2/EdgeTAM) in C/C++ using GGML.
    Supports text-prompted and point/box-prompted image segmentation.
  urls:
    - https://github.com/PABannier/sam3.cpp
  tags:
    - image-segmentation
    - object-detection
    - sam3
    - gpu
    - cpu
  capabilities:
    default: "cpu-sam3-cpp"
    nvidia: "cuda12-sam3-cpp"
    nvidia-cuda-12: "cuda12-sam3-cpp"
    nvidia-cuda-13: "cuda13-sam3-cpp"
    nvidia-l4t: "nvidia-l4t-arm64-sam3-cpp"
    nvidia-l4t-cuda-12: "nvidia-l4t-arm64-sam3-cpp"
    nvidia-l4t-cuda-13: "cuda13-nvidia-l4t-arm64-sam3-cpp"
    intel: "intel-sycl-f32-sam3-cpp"
    vulkan: "vulkan-sam3-cpp"
- &vllm
  name: "vllm"
  license: apache-2.0
  urls:
    - https://github.com/vllm-project/vllm
  tags:
    - text-to-text
    - multimodal
    - GPTQ
    - AWQ
    - AutoRound
    - INT4
    - INT8
    - FP8
  icon: https://raw.githubusercontent.com/vllm-project/vllm/main/docs/assets/logos/vllm-logo-text-dark.png
  description: |
    vLLM is a fast and easy-to-use library for LLM inference and serving.
    Originally developed in the Sky Computing Lab at UC Berkeley, vLLM has evolved into a community-driven project with contributions from both academia and industry.
    vLLM is fast with:
    State-of-the-art serving throughput
    Efficient management of attention key and value memory with PagedAttention
    Continuous batching of incoming requests
    Fast model execution with CUDA/HIP graph
    Quantizations: GPTQ, AWQ, AutoRound, INT4, INT8, and FP8
    Optimized CUDA kernels, including integration with FlashAttention and FlashInfer
    Speculative decoding
    Chunked prefill
  alias: "vllm"
  capabilities:
    nvidia: "cuda12-vllm"
    amd: "rocm-vllm"
    intel: "intel-vllm"
    nvidia-cuda-12: "cuda12-vllm"
    cpu: "cpu-vllm"
- &sglang
  name: "sglang"
  license: apache-2.0
  urls:
    - https://github.com/sgl-project/sglang
  tags:
    - text-to-text
    - multimodal
  icon: https://raw.githubusercontent.com/sgl-project/sglang/main/assets/logo.png
  description: |
    SGLang is a fast serving framework for large language models and vision language models.
    It co-designs the backend runtime (RadixAttention, continuous batching, structured
    decoding) and the frontend language to make interaction with models faster and more
    controllable. Features include fast backend runtime, flexible frontend language,
    extensive model support, and an active community.
  alias: "sglang"
  capabilities:
    nvidia: "cuda12-sglang"
    amd: "rocm-sglang"
    intel: "intel-sglang"
    nvidia-cuda-12: "cuda12-sglang"
    cpu: "cpu-sglang"
- &vllm-omni
  name: "vllm-omni"
  license: apache-2.0
  urls:
    - https://github.com/vllm-project/vllm-omni
  tags:
    - text-to-image
    - image-generation
    - text-to-video
    - video-generation
    - text-to-speech
    - TTS
    - multimodal
    - LLM
  icon: https://raw.githubusercontent.com/vllm-project/vllm/main/docs/assets/logos/vllm-logo-text-dark.png
  description: |
    vLLM-Omni is a unified interface for multimodal generation with vLLM.
    It supports image generation (text-to-image, image editing), video generation
    (text-to-video, image-to-video), text generation with multimodal inputs, and
    text-to-speech generation. Only supports NVIDIA (CUDA) and ROCm platforms.
  alias: "vllm-omni"
  capabilities:
    nvidia: "cuda12-vllm-omni"
    amd: "rocm-vllm-omni"
    nvidia-cuda-12: "cuda12-vllm-omni"
- &mlx
  name: "mlx"
  uri: "quay.io/go-skynet/local-ai-backends:latest-metal-darwin-arm64-mlx"
  icon: https://avatars.githubusercontent.com/u/102832242?s=200&v=4
  urls:
    - https://github.com/ml-explore/mlx-lm
  mirrors:
    - localai/localai-backends:latest-metal-darwin-arm64-mlx
  license: MIT
  description: |
    Run LLMs with MLX
  tags:
    - text-to-text
    - LLM
    - MLX
  capabilities:
    default: "cpu-mlx"
    nvidia: "cuda12-mlx"
    metal: "metal-mlx"
    nvidia-cuda-12: "cuda12-mlx"
    nvidia-cuda-13: "cuda13-mlx"
    nvidia-l4t: "nvidia-l4t-mlx"
    nvidia-l4t-cuda-12: "nvidia-l4t-mlx"
    nvidia-l4t-cuda-13: "cuda13-nvidia-l4t-arm64-mlx"
- &mlx-vlm
  name: "mlx-vlm"
  uri: "quay.io/go-skynet/local-ai-backends:latest-metal-darwin-arm64-mlx-vlm"
  icon: https://avatars.githubusercontent.com/u/102832242?s=200&v=4
  urls:
    - https://github.com/Blaizzy/mlx-vlm
  mirrors:
    - localai/localai-backends:latest-metal-darwin-arm64-mlx-vlm
  license: MIT
  description: |
    Run Vision-Language Models with MLX
  tags:
    - text-to-text
    - multimodal
    - vision-language
    - LLM
    - MLX
  capabilities:
    default: "cpu-mlx-vlm"
    nvidia: "cuda12-mlx-vlm"
    metal: "metal-mlx-vlm"
    nvidia-cuda-12: "cuda12-mlx-vlm"
    nvidia-cuda-13: "cuda13-mlx-vlm"
    nvidia-l4t: "nvidia-l4t-mlx-vlm"
    nvidia-l4t-cuda-12: "nvidia-l4t-mlx-vlm"
    nvidia-l4t-cuda-13: "cuda13-nvidia-l4t-arm64-mlx-vlm"
- &mlx-audio
  name: "mlx-audio"
  uri: "quay.io/go-skynet/local-ai-backends:latest-metal-darwin-arm64-mlx-audio"
  icon: https://avatars.githubusercontent.com/u/102832242?s=200&v=4
  urls:
    - https://github.com/Blaizzy/mlx-audio
  mirrors:
    - localai/localai-backends:latest-metal-darwin-arm64-mlx-audio
  license: MIT
  description: |
    Run Audio Models with MLX
  tags:
    - audio-to-text
    - audio-generation
    - text-to-audio
    - LLM
    - MLX
  capabilities:
    default: "cpu-mlx-audio"
    nvidia: "cuda12-mlx-audio"
    metal: "metal-mlx-audio"
    nvidia-cuda-12: "cuda12-mlx-audio"
    nvidia-cuda-13: "cuda13-mlx-audio"
    nvidia-l4t: "nvidia-l4t-mlx-audio"
    nvidia-l4t-cuda-12: "nvidia-l4t-mlx-audio"
    nvidia-l4t-cuda-13: "cuda13-nvidia-l4t-arm64-mlx-audio"
- &mlx-distributed
  name: "mlx-distributed"
  uri: "quay.io/go-skynet/local-ai-backends:latest-metal-darwin-arm64-mlx-distributed"
  icon: https://avatars.githubusercontent.com/u/102832242?s=200&v=4
  urls:
    - https://github.com/ml-explore/mlx-lm
  mirrors:
    - localai/localai-backends:latest-metal-darwin-arm64-mlx-distributed
  license: MIT
  description: |
    Run distributed LLM inference with MLX across multiple Apple Silicon Macs
  tags:
    - text-to-text
    - LLM
    - MLX
    - distributed
  capabilities:
    default: "cpu-mlx-distributed"
    nvidia: "cuda12-mlx-distributed"
    metal: "metal-mlx-distributed"
    nvidia-cuda-12: "cuda12-mlx-distributed"
    nvidia-cuda-13: "cuda13-mlx-distributed"
    nvidia-l4t: "nvidia-l4t-mlx-distributed"
    nvidia-l4t-cuda-12: "nvidia-l4t-mlx-distributed"
    nvidia-l4t-cuda-13: "cuda13-nvidia-l4t-arm64-mlx-distributed"
- &rerankers
  name: "rerankers"
  alias: "rerankers"
  capabilities:
    nvidia: "cuda12-rerankers"
    intel: "intel-rerankers"
    amd: "rocm-rerankers"
    metal: "metal-rerankers"
- &tinygrad
  name: "tinygrad"
  alias: "tinygrad"
  license: MIT
  description: |
    tinygrad is a minimalist deep-learning framework with zero runtime
    dependencies that targets CUDA, ROCm, Metal, WebGPU and CPU (CLANG).
    The LocalAI tinygrad backend exposes a single multimodal runtime that
    covers LLM text generation (Llama / Qwen / Mistral via safetensors or
    GGUF) with native tool-call extraction, BERT-family embeddings,
    Stable Diffusion 1.x / 2 / XL image generation, and Whisper speech-to-text.
    Single image: tinygrad generates its own GPU kernels and dlopens the
    host driver libraries at runtime, so there is no per-toolkit build
    split. The same image runs CPU-only or accelerates against
    CUDA / ROCm / Metal when the host driver is visible.
  urls:
    - https://github.com/tinygrad/tinygrad
  uri: "quay.io/go-skynet/local-ai-backends:latest-tinygrad"
  mirrors:
    - localai/localai-backends:latest-tinygrad
  tags:
    - text-to-text
    - LLM
    - embeddings
    - image-generation
    - transcription
    - multimodal
- &transformers
  name: "transformers"
  icon: https://avatars.githubusercontent.com/u/25720743?s=200&v=4
  alias: "transformers"
  license: apache-2.0
  description: |
    Transformers acts as the model-definition framework for state-of-the-art machine learning models in text, computer vision, audio, video, and multimodal models, for both inference and training.
    It centralizes the model definition so that this definition is agreed upon across the ecosystem. transformers is the pivot across frameworks: if a model definition is supported, it will be compatible with the majority of training frameworks (Axolotl, Unsloth, DeepSpeed, FSDP, PyTorch-Lightning, ...), inference engines (vLLM, SGLang, TGI, ...), and adjacent modeling libraries (llama.cpp, mlx, ...) which leverage the model definition from transformers.
  urls:
    - https://github.com/huggingface/transformers
  tags:
    - text-to-text
    - multimodal
  capabilities:
    nvidia: "cuda12-transformers"
    intel: "intel-transformers"
    amd: "rocm-transformers"
    metal: "metal-transformers"
    nvidia-cuda-13: "cuda13-transformers"
    nvidia-cuda-12: "cuda12-transformers"
- &diffusers
  name: "diffusers"
  icon: https://raw.githubusercontent.com/huggingface/diffusers/main/docs/source/en/imgs/diffusers_library.jpg
  description: |
    🤗 Diffusers is the go-to library for state-of-the-art pretrained diffusion models for generating images, audio, and even 3D structures of molecules. Whether you're looking for a simple inference solution or training your own diffusion models, 🤗 Diffusers is a modular toolbox that supports both.
  urls:
    - https://github.com/huggingface/diffusers
  tags:
    - image-generation
    - video-generation
    - diffusion-models
  license: apache-2.0
  alias: "diffusers"
  capabilities:
    nvidia: "cuda12-diffusers"
    intel: "intel-diffusers"
    amd: "rocm-diffusers"
    nvidia-l4t: "nvidia-l4t-diffusers"
    metal: "metal-diffusers"
    default: "cpu-diffusers"
    nvidia-cuda-13: "cuda13-diffusers"
    nvidia-cuda-12: "cuda12-diffusers"
    nvidia-l4t-cuda-12: "nvidia-l4t-arm64-diffusers"
    nvidia-l4t-cuda-13: "cuda13-nvidia-l4t-arm64-diffusers"
- &ace-step
  name: "ace-step"
  description: |
    ACE-Step 1.5 is an open-source music generation model. It supports simple mode (natural language description) and advanced mode (caption, lyrics, think, bpm, keyscale, etc.). Uses in-process acestep (LLMHandler for metadata, DiT for audio).
  urls:
    - https://github.com/ace-step/ACE-Step-1.5
  tags:
    - music-generation
    - sound-generation
  alias: "ace-step"
  capabilities:
    nvidia: "cuda12-ace-step"
    intel: "intel-ace-step"
    amd: "rocm-ace-step"
    metal: "metal-ace-step"
    default: "cpu-ace-step"
    nvidia-cuda-13: "cuda13-ace-step"
    nvidia-cuda-12: "cuda12-ace-step"
- !!merge <<: *ace-step
  name: "ace-step-development"
  capabilities:
    nvidia: "cuda12-ace-step-development"
    intel: "intel-ace-step-development"
    amd: "rocm-ace-step-development"
    metal: "metal-ace-step-development"
    default: "cpu-ace-step-development"
    nvidia-cuda-13: "cuda13-ace-step-development"
    nvidia-cuda-12: "cuda12-ace-step-development"
- &acestepcpp
  name: "acestep-cpp"
  description: |
    ACE-Step 1.5 C++ backend using GGML. Native C++ implementation of ACE-Step music generation with GPU support through GGML backends.
    Generates stereo 48kHz audio from text descriptions and optional lyrics via a two-stage pipeline: text-to-code (ace-qwen3 LLM) + code-to-audio (DiT-VAE).
  urls:
    - https://github.com/ace-step/acestep.cpp
  tags:
    - music-generation
    - sound-generation
  alias: "acestep-cpp"
  capabilities:
    default: "cpu-acestep-cpp"
    nvidia: "cuda12-acestep-cpp"
    nvidia-cuda-13: "cuda13-acestep-cpp"
    nvidia-cuda-12: "cuda12-acestep-cpp"
    intel: "intel-sycl-f16-acestep-cpp"
    metal: "metal-acestep-cpp"
    amd: "rocm-acestep-cpp"
    vulkan: "vulkan-acestep-cpp"
    nvidia-l4t: "nvidia-l4t-arm64-acestep-cpp"
    nvidia-l4t-cuda-12: "nvidia-l4t-arm64-acestep-cpp"
    nvidia-l4t-cuda-13: "cuda13-nvidia-l4t-arm64-acestep-cpp"
- &qwen3ttscpp
  name: "qwen3-tts-cpp"
  description: |
    Qwen3-TTS C++ backend using GGML. Native C++ text-to-speech with voice cloning support.
    Generates 24kHz mono audio from text with optional reference audio for voice cloning via ECAPA-TDNN speaker embeddings.
  urls:
    - https://github.com/predict-woo/qwen3-tts.cpp
  tags:
    - text-to-speech
    - tts
    - voice-cloning
  alias: "qwen3-tts-cpp"
  capabilities:
    default: "cpu-qwen3-tts-cpp"
    nvidia: "cuda12-qwen3-tts-cpp"
    nvidia-cuda-13: "cuda13-qwen3-tts-cpp"
    nvidia-cuda-12: "cuda12-qwen3-tts-cpp"
    intel: "intel-sycl-f16-qwen3-tts-cpp"
    metal: "metal-qwen3-tts-cpp"
    amd: "rocm-qwen3-tts-cpp"
    vulkan: "vulkan-qwen3-tts-cpp"
    nvidia-l4t: "nvidia-l4t-arm64-qwen3-tts-cpp"
    nvidia-l4t-cuda-12: "nvidia-l4t-arm64-qwen3-tts-cpp"
    nvidia-l4t-cuda-13: "cuda13-nvidia-l4t-arm64-qwen3-tts-cpp"
- &faster-whisper
  icon: https://avatars.githubusercontent.com/u/1520500?s=200&v=4
  description: |
    faster-whisper is a reimplementation of OpenAI's Whisper model using CTranslate2, which is a fast inference engine for Transformer models.
    This implementation is up to 4 times faster than openai/whisper for the same accuracy while using less memory. The efficiency can be further improved with 8-bit quantization on both CPU and GPU.
  urls:
    - https://github.com/SYSTRAN/faster-whisper
  tags:
    - speech-to-text
    - Whisper
  license: MIT
  name: "faster-whisper"
  capabilities:
    default: "cpu-faster-whisper"
    nvidia: "cuda12-faster-whisper"
    intel: "intel-faster-whisper"
    amd: "rocm-faster-whisper"
    metal: "metal-faster-whisper"
    nvidia-cuda-13: "cuda13-faster-whisper"
    nvidia-cuda-12: "cuda12-faster-whisper"
    nvidia-l4t: "nvidia-l4t-arm64-faster-whisper"
    nvidia-l4t-cuda-12: "nvidia-l4t-arm64-faster-whisper"
- &moonshine
  description: |
    Moonshine is a fast, accurate, and efficient speech-to-text transcription model using ONNX Runtime.
    It provides real-time transcription capabilities with support for multiple model sizes and GPU acceleration.
  urls:
    - https://github.com/moonshine-ai/moonshine
  tags:
    - speech-to-text
    - transcription
    - ONNX
  license: MIT
  name: "moonshine"
  alias: "moonshine"
  capabilities:
    nvidia: "cuda12-moonshine"
    metal: "metal-moonshine"
    default: "cpu-moonshine"
    nvidia-cuda-13: "cuda13-moonshine"
    nvidia-cuda-12: "cuda12-moonshine"
- &whisperx
  description: |
    WhisperX provides fast automatic speech recognition with word-level timestamps, speaker diarization,
    and forced alignment. Built on faster-whisper and pyannote-audio for high-accuracy transcription
    with speaker identification.
  urls:
    - https://github.com/m-bain/whisperX
  tags:
    - speech-to-text
    - diarization
    - whisperx
  license: BSD-4-Clause
  name: "whisperx"
  alias: "whisperx"
  capabilities:
    nvidia: "cuda12-whisperx"
    metal: "metal-whisperx"
    default: "cpu-whisperx"
    nvidia-cuda-13: "cuda13-whisperx"
    nvidia-cuda-12: "cuda12-whisperx"
    nvidia-l4t: "nvidia-l4t-arm64-whisperx"
    nvidia-l4t-cuda-12: "nvidia-l4t-arm64-whisperx"
- &kokoro
  icon: https://avatars.githubusercontent.com/u/166769057?v=4
  description: |
    Kokoro is an open-weight TTS model with 82 million parameters. Despite its lightweight architecture, it delivers comparable quality to larger models while being significantly faster and more cost-efficient. With Apache-licensed weights, Kokoro can be deployed anywhere from production environments to personal projects.
  urls:
    - https://huggingface.co/hexgrad/Kokoro-82M
    - https://github.com/hexgrad/kokoro
  tags:
    - text-to-speech
    - TTS
    - LLM
  license: apache-2.0
  alias: "kokoro"
  name: "kokoro"
  capabilities:
    nvidia: "cuda12-kokoro"
    intel: "intel-kokoro"
    amd: "rocm-kokoro"
    nvidia-l4t: "nvidia-l4t-kokoro"
    metal: "metal-kokoro"
    nvidia-cuda-13: "cuda13-kokoro"
    nvidia-cuda-12: "cuda12-kokoro"
    nvidia-l4t-cuda-12: "nvidia-l4t-arm64-kokoro"
- &kokoros
  icon: https://avatars.githubusercontent.com/u/166769057?v=4
  description: |
    Kokoros is a pure Rust TTS backend using the Kokoro ONNX model (82M parameters).
    It provides fast, high-quality text-to-speech with streaming support, built on
    ONNX Runtime for efficient CPU inference. Supports English, Japanese, Mandarin
    Chinese, and German.
  urls:
    - https://huggingface.co/hexgrad/Kokoro-82M
    - https://github.com/lucasjinreal/Kokoros
  tags:
    - text-to-speech
    - TTS
    - Rust
    - ONNX
  license: apache-2.0
  alias: "kokoros"
  name: "kokoros"
  capabilities:
    default: "cpu-kokoros"
- &coqui
  urls:
    - https://github.com/idiap/coqui-ai-TTS
  description: |
    🐸 Coqui TTS is a library for advanced Text-to-Speech generation.
    🚀 Pretrained models in +1100 languages.
    🛠️ Tools for training new models and fine-tuning existing models in any language.
    📚 Utilities for dataset analysis and curation.
  tags:
    - text-to-speech
    - TTS
  license: mpl-2.0
  name: "coqui"
  alias: "coqui"
  capabilities:
    nvidia: "cuda12-coqui"
    intel: "intel-coqui"
    amd: "rocm-coqui"
    metal: "metal-coqui"
    nvidia-cuda-13: "cuda13-coqui"
    nvidia-cuda-12: "cuda12-coqui"
  icon: https://avatars.githubusercontent.com/u/1338804?s=200&v=4
- &outetts
  urls:
    - https://github.com/OuteAI/outetts
  description: |
    OuteTTS is an open-weight text-to-speech model from OuteAI (OuteAI/OuteTTS-0.3-1B).
    Supports custom speaker voices via audio path or default speakers.
  tags:
    - text-to-speech
    - TTS
  license: apache-2.0
  name: "outetts"
  alias: "outetts"
  capabilities:
    default: "cpu-outetts"
    nvidia-cuda-12: "cuda12-outetts"
- &chatterbox
  urls:
    - https://github.com/resemble-ai/chatterbox
  description: |
    Resemble AI's first production-grade open source TTS model. Licensed under MIT, Chatterbox has been benchmarked against leading closed-source systems like ElevenLabs, and is consistently preferred in side-by-side evaluations.
    Whether you're working on memes, videos, games, or AI agents, Chatterbox brings your content to life. It's also the first open source TTS model to support emotion exaggeration control, a powerful feature that makes your voices stand out.
  tags:
    - text-to-speech
    - TTS
  license: MIT
  icon: https://avatars.githubusercontent.com/u/49844015?s=200&v=4
  name: "chatterbox"
  alias: "chatterbox"
  capabilities:
    nvidia: "cuda12-chatterbox"
    metal: "metal-chatterbox"
    default: "cpu-chatterbox"
    nvidia-l4t: "nvidia-l4t-arm64-chatterbox"
    nvidia-cuda-13: "cuda13-chatterbox"
    nvidia-cuda-12: "cuda12-chatterbox"
    nvidia-l4t-cuda-12: "nvidia-l4t-arm64-chatterbox"
    nvidia-l4t-cuda-13: "cuda13-nvidia-l4t-arm64-chatterbox"
- &vibevoice
  urls:
    - https://github.com/microsoft/VibeVoice
  description: |
    VibeVoice-Realtime is a real-time text-to-speech model that generates natural-sounding speech.
  tags:
    - text-to-speech
    - TTS
  license: mit
  name: "vibevoice"
  alias: "vibevoice"
  capabilities:
    nvidia: "cuda12-vibevoice"
    intel: "intel-vibevoice"
    amd: "rocm-vibevoice"
    nvidia-l4t: "nvidia-l4t-vibevoice"
    metal: "metal-vibevoice"
    default: "cpu-vibevoice"
    nvidia-cuda-13: "cuda13-vibevoice"
    nvidia-cuda-12: "cuda12-vibevoice"
    nvidia-l4t-cuda-12: "nvidia-l4t-vibevoice"
    nvidia-l4t-cuda-13: "cuda13-nvidia-l4t-arm64-vibevoice"
  icon: https://avatars.githubusercontent.com/u/6154722?s=200&v=4
- &qwen-tts
  urls:
    - https://github.com/QwenLM/Qwen3-TTS
  description: |
    Qwen3-TTS is a high-quality text-to-speech model supporting custom voice, voice design, and voice cloning.
  tags:
    - text-to-speech
    - TTS
  license: apache-2.0
  name: "qwen-tts"
  alias: "qwen-tts"
  capabilities:
    nvidia: "cuda12-qwen-tts"
    intel: "intel-qwen-tts"
    amd: "rocm-qwen-tts"
    nvidia-l4t: "nvidia-l4t-qwen-tts"
    metal: "metal-qwen-tts"
    default: "cpu-qwen-tts"
    nvidia-cuda-13: "cuda13-qwen-tts"
    nvidia-cuda-12: "cuda12-qwen-tts"
    nvidia-l4t-cuda-12: "nvidia-l4t-qwen-tts"
    nvidia-l4t-cuda-13: "cuda13-nvidia-l4t-arm64-qwen-tts"
  icon: https://cdn-avatars.huggingface.co/v1/production/uploads/620760a26e3b7210c2ff1943/-s1gyJfvbE1RgO5iBeNOi.png
- &fish-speech
  urls:
    - https://github.com/fishaudio/fish-speech
  description: |
    Fish Speech is a high-quality text-to-speech model supporting voice cloning via reference audio.
  tags:
    - text-to-speech
    - TTS
    - voice-cloning
  license: apache-2.0
  name: "fish-speech"
  alias: "fish-speech"
  capabilities:
    nvidia: "cuda12-fish-speech"
    intel: "intel-fish-speech"
    amd: "rocm-fish-speech"
    nvidia-l4t: "nvidia-l4t-fish-speech"
    metal: "metal-fish-speech"
    default: "cpu-fish-speech"
    nvidia-cuda-13: "cuda13-fish-speech"
    nvidia-cuda-12: "cuda12-fish-speech"
    nvidia-l4t-cuda-12: "nvidia-l4t-fish-speech"
    nvidia-l4t-cuda-13: "cuda13-nvidia-l4t-arm64-fish-speech"
  icon: https://avatars.githubusercontent.com/u/148526220?s=200&v=4
- &faster-qwen3-tts
  urls:
    - https://github.com/andimarafioti/faster-qwen3-tts
    - https://pypi.org/project/faster-qwen3-tts/
  description: |
    Real-time Qwen3-TTS inference using CUDA graph capture. Voice clone only; requires NVIDIA GPU with CUDA.
  tags:
    - text-to-speech
    - TTS
    - voice-clone
  license: apache-2.0
  name: "faster-qwen3-tts"
  alias: "faster-qwen3-tts"
  capabilities:
    nvidia: "cuda12-faster-qwen3-tts"
    default: "cuda12-faster-qwen3-tts"
    nvidia-cuda-13: "cuda13-faster-qwen3-tts"
    nvidia-cuda-12: "cuda12-faster-qwen3-tts"
    nvidia-l4t: "nvidia-l4t-faster-qwen3-tts"
    nvidia-l4t-cuda-12: "nvidia-l4t-faster-qwen3-tts"
    nvidia-l4t-cuda-13: "cuda13-nvidia-l4t-arm64-faster-qwen3-tts"
  icon: https://cdn-avatars.huggingface.co/v1/production/uploads/620760a26e3b7210c2ff1943/-s1gyJfvbE1RgO5iBeNOi.png
- &qwen-asr
  urls:
    - https://github.com/QwenLM/Qwen3-ASR
  description: |
    Qwen3-ASR is an automatic speech recognition model supporting multiple languages and batch inference.
  tags:
    - speech-recognition
    - ASR
  license: apache-2.0
  name: "qwen-asr"
  alias: "qwen-asr"
  capabilities:
    nvidia: "cuda12-qwen-asr"
    intel: "intel-qwen-asr"
    amd: "rocm-qwen-asr"
    nvidia-l4t: "nvidia-l4t-qwen-asr"
    metal: "metal-qwen-asr"
    default: "cpu-qwen-asr"
    nvidia-cuda-13: "cuda13-qwen-asr"
    nvidia-cuda-12: "cuda12-qwen-asr"
    nvidia-l4t-cuda-12: "nvidia-l4t-qwen-asr"
    nvidia-l4t-cuda-13: "cuda13-nvidia-l4t-arm64-qwen-asr"
  icon: https://cdn-avatars.huggingface.co/v1/production/uploads/620760a26e3b7210c2ff1943/-s1gyJfvbE1RgO5iBeNOi.png
- &nemo
  urls:
    - https://github.com/NVIDIA/NeMo
  description: |
    NVIDIA NEMO Toolkit for ASR provides state-of-the-art automatic speech recognition models including Parakeet models for various languages and use cases.
  tags:
    - speech-recognition
    - ASR
    - NVIDIA
  license: apache-2.0
  name: "nemo"
  alias: "nemo"
  capabilities:
    nvidia: "cuda12-nemo"
    intel: "intel-nemo"
    amd: "rocm-nemo"
    metal: "metal-nemo"
    default: "cpu-nemo"
    nvidia-cuda-13: "cuda13-nemo"
    nvidia-cuda-12: "cuda12-nemo"
  icon: https://www.nvidia.com/favicon.ico
- &voxcpm
  urls:
    - https://github.com/ModelBest/VoxCPM
  description: |
    VoxCPM is an innovative end-to-end TTS model from ModelBest, designed to generate highly expressive speech.
  tags:
    - text-to-speech
    - TTS
  license: mit
  name: "voxcpm"
  alias: "voxcpm"
  capabilities:
    nvidia: "cuda12-voxcpm"
    intel: "intel-voxcpm"
    amd: "rocm-voxcpm"
    metal: "metal-voxcpm"
    default: "cpu-voxcpm"
    nvidia-cuda-13: "cuda13-voxcpm"
    nvidia-cuda-12: "cuda12-voxcpm"
  icon: https://avatars.githubusercontent.com/u/6154722?s=200&v=4
- &pocket-tts
  urls:
    - https://github.com/kyutai-labs/pocket-tts
  description: |
    Pocket TTS is a lightweight text-to-speech model designed to run efficiently on CPUs.
  tags:
    - text-to-speech
    - TTS
  license: mit
  name: "pocket-tts"
  alias: "pocket-tts"
  capabilities:
    nvidia: "cuda12-pocket-tts"
    intel: "intel-pocket-tts"
    amd: "rocm-pocket-tts"
    nvidia-l4t: "nvidia-l4t-pocket-tts"
    metal: "metal-pocket-tts"
    default: "cpu-pocket-tts"
    nvidia-cuda-13: "cuda13-pocket-tts"
    nvidia-cuda-12: "cuda12-pocket-tts"
    nvidia-l4t-cuda-12: "nvidia-l4t-pocket-tts"
    nvidia-l4t-cuda-13: "cuda13-nvidia-l4t-arm64-pocket-tts"
  icon: https://avatars.githubusercontent.com/u/151010778?s=200&v=4
- &piper
  name: "piper"
  uri: "quay.io/go-skynet/local-ai-backends:latest-piper"
  icon: https://github.com/OHF-Voice/piper1-gpl/raw/main/etc/logo.png
  urls:
    - https://github.com/rhasspy/piper
    - https://github.com/mudler/go-piper
  mirrors:
    - localai/localai-backends:latest-piper
  license: MIT
  description: |
    A fast, local neural text to speech system
  tags:
    - text-to-speech
    - TTS
- &opus
  name: "opus"
  alias: "opus"
  uri: "quay.io/go-skynet/local-ai-backends:latest-cpu-opus"
  urls:
    - https://opus-codec.org/
  mirrors:
    - localai/localai-backends:latest-cpu-opus
  license: BSD-3-Clause
  description: |
    Opus audio codec backend for encoding and decoding audio.
    Required for WebRTC transport in the Realtime API.
  tags:
    - audio-codec
    - opus
    - WebRTC
    - realtime
    - CPU
- &silero-vad
  name: "silero-vad"
  uri: "quay.io/go-skynet/local-ai-backends:latest-cpu-silero-vad"
  icon: https://user-images.githubusercontent.com/12515440/89997349-b3523080-dc94-11ea-9906-ca2e8bc50535.png
  urls:
    - https://github.com/snakers4/silero-vad
  mirrors:
    - localai/localai-backends:latest-cpu-silero-vad
  description: |
    Silero VAD: pre-trained enterprise-grade Voice Activity Detector.
    Silero VAD is a voice activity detection model that can be used to detect whether a given audio contains speech or not.
  tags:
    - voice-activity-detection
    - VAD
    - silero-vad
    - CPU
- &local-store
  name: "local-store"
  uri: "quay.io/go-skynet/local-ai-backends:latest-cpu-local-store"
  mirrors:
    - localai/localai-backends:latest-cpu-local-store
  urls:
    - https://github.com/mudler/LocalAI
  description: |
    Local Store is a local-first, self-hosted, and open-source vector database.
  tags:
    - vector-database
    - local-first
    - open-source
    - CPU
  license: MIT
- &kitten-tts
  name: "kitten-tts"
  uri: "quay.io/go-skynet/local-ai-backends:latest-kitten-tts"
  mirrors:
    - localai/localai-backends:latest-kitten-tts
  urls:
    - https://github.com/KittenML/KittenTTS
  description: |
    Kitten TTS is a text-to-speech model that can generate speech from text.
  tags:
    - text-to-speech
    - TTS
  license: apache-2.0
- &neutts
  name: "neutts"
  urls:
    - https://github.com/neuphonic/neutts-air
  description: |
    NeuTTS Air is the world's first super-realistic, on-device, TTS speech language model with instant voice cloning. Built off a 0.5B LLM backbone, NeuTTS Air brings natural-sounding speech, real-time performance, built-in security and speaker cloning to your local device - unlocking a new category of embedded voice agents, assistants, toys, and compliance-safe apps.
  tags:
    - text-to-speech
    - TTS
  license: apache-2.0
  capabilities:
    default: "cpu-neutts"
    nvidia: "cuda12-neutts"
    amd: "rocm-neutts"
    nvidia-cuda-12: "cuda12-neutts"
- !!merge <<: *neutts
  name: "neutts-development"
  capabilities:
    default: "cpu-neutts-development"
    nvidia: "cuda12-neutts-development"
    amd: "rocm-neutts-development"
    nvidia-cuda-12: "cuda12-neutts-development"
- !!merge <<: *llamacpp
  name: "llama-cpp-development"
  capabilities:
    default: "cpu-llama-cpp-development"
    nvidia: "cuda12-llama-cpp-development"
    intel: "intel-sycl-f16-llama-cpp-development"
    amd: "rocm-llama-cpp-development"
    metal: "metal-llama-cpp-development"
    vulkan: "vulkan-llama-cpp-development"
    nvidia-l4t: "nvidia-l4t-arm64-llama-cpp-development"
    nvidia-cuda-13: "cuda13-llama-cpp-development"
    nvidia-cuda-12: "cuda12-llama-cpp-development"
    nvidia-l4t-cuda-12: "nvidia-l4t-arm64-llama-cpp-development"
    nvidia-l4t-cuda-13: "cuda13-nvidia-l4t-arm64-llama-cpp-development"
- !!merge <<: *ikllamacpp
  name: "ik-llama-cpp-development"
  capabilities:
    default: "cpu-ik-llama-cpp-development"
- !!merge <<: *turboquant
  name: "turboquant-development"
  capabilities:
    default: "cpu-turboquant-development"
    nvidia: "cuda12-turboquant-development"
    intel: "intel-sycl-f16-turboquant-development"
    amd: "rocm-turboquant-development"
    vulkan: "vulkan-turboquant-development"
    nvidia-l4t: "nvidia-l4t-arm64-turboquant-development"
    nvidia-cuda-13: "cuda13-turboquant-development"
    nvidia-cuda-12: "cuda12-turboquant-development"
    nvidia-l4t-cuda-12: "nvidia-l4t-arm64-turboquant-development"
    nvidia-l4t-cuda-13: "cuda13-nvidia-l4t-arm64-turboquant-development"
- !!merge <<: *stablediffusionggml
  name: "stablediffusion-ggml-development"
  capabilities:
    default: "cpu-stablediffusion-ggml-development"
    nvidia: "cuda12-stablediffusion-ggml-development"
    intel: "intel-sycl-f16-stablediffusion-ggml-development"
    # amd: "rocm-stablediffusion-ggml-development"
    vulkan: "vulkan-stablediffusion-ggml-development"
    nvidia-l4t: "nvidia-l4t-arm64-stablediffusion-ggml-development"
    metal: "metal-stablediffusion-ggml-development"
    nvidia-cuda-13: "cuda13-stablediffusion-ggml-development"
    nvidia-cuda-12: "cuda12-stablediffusion-ggml-development"
    nvidia-l4t-cuda-12: "nvidia-l4t-arm64-stablediffusion-ggml-development"
    nvidia-l4t-cuda-13: "cuda13-nvidia-l4t-arm64-stablediffusion-ggml-development"
- !!merge <<: *neutts
  name: "cpu-neutts"
  uri: "quay.io/go-skynet/local-ai-backends:latest-cpu-neutts"
  mirrors:
    - localai/localai-backends:latest-cpu-neutts
- !!merge <<: *neutts
  name: "cuda12-neutts"
  uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-nvidia-cuda-12-neutts"
  mirrors:
    - localai/localai-backends:latest-gpu-nvidia-cuda-12-neutts
- !!merge <<: *neutts
  name: "rocm-neutts"
  uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-rocm-hipblas-neutts"
  mirrors:
    - localai/localai-backends:latest-gpu-rocm-hipblas-neutts
- !!merge <<: *neutts
  name: "cpu-neutts-development"
  uri: "quay.io/go-skynet/local-ai-backends:master-cpu-neutts"
  mirrors:
    - localai/localai-backends:master-cpu-neutts
- !!merge <<: *neutts
  name: "cuda12-neutts-development"
  uri: "quay.io/go-skynet/local-ai-backends:master-gpu-nvidia-cuda-12-neutts"
  mirrors:
    - localai/localai-backends:master-gpu-nvidia-cuda-12-neutts
- !!merge <<: *neutts
  name: "rocm-neutts-development"
  uri: "quay.io/go-skynet/local-ai-backends:master-gpu-rocm-hipblas-neutts"
  mirrors:
    - localai/localai-backends:master-gpu-rocm-hipblas-neutts
- !!merge <<: *mlx
  name: "mlx-development"
  uri: "quay.io/go-skynet/local-ai-backends:master-metal-darwin-arm64-mlx"
  mirrors:
    - localai/localai-backends:master-metal-darwin-arm64-mlx
- !!merge <<: *mlx-vlm
  name: "mlx-vlm-development"
  uri: "quay.io/go-skynet/local-ai-backends:master-metal-darwin-arm64-mlx-vlm"
  mirrors:
    - localai/localai-backends:master-metal-darwin-arm64-mlx-vlm
- !!merge <<: *mlx-audio
  name: "mlx-audio-development"
  uri: "quay.io/go-skynet/local-ai-backends:master-metal-darwin-arm64-mlx-audio"
  mirrors:
    - localai/localai-backends:master-metal-darwin-arm64-mlx-audio
- !!merge <<: *mlx-distributed
  name: "mlx-distributed-development"
  uri: "quay.io/go-skynet/local-ai-backends:master-metal-darwin-arm64-mlx-distributed"
  mirrors:
    - localai/localai-backends:master-metal-darwin-arm64-mlx-distributed
## mlx
- !!merge <<: *mlx
  name: "cpu-mlx"
  uri: "quay.io/go-skynet/local-ai-backends:latest-cpu-mlx"
  mirrors:
    - localai/localai-backends:latest-cpu-mlx
- !!merge <<: *mlx
  name: "cpu-mlx-development"
  uri: "quay.io/go-skynet/local-ai-backends:master-cpu-mlx"
  mirrors:
    - localai/localai-backends:master-cpu-mlx
- !!merge <<: *mlx
  name: "cuda12-mlx"
  uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-nvidia-cuda-12-mlx"
  mirrors:
    - localai/localai-backends:latest-gpu-nvidia-cuda-12-mlx
- !!merge <<: *mlx
  name: "cuda12-mlx-development"
  uri: "quay.io/go-skynet/local-ai-backends:master-gpu-nvidia-cuda-12-mlx"
  mirrors:
    - localai/localai-backends:master-gpu-nvidia-cuda-12-mlx
- !!merge <<: *mlx
  name: "cuda13-mlx"
  uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-nvidia-cuda-13-mlx"
  mirrors:
    - localai/localai-backends:latest-gpu-nvidia-cuda-13-mlx
- !!merge <<: *mlx
  name: "cuda13-mlx-development"
  uri: "quay.io/go-skynet/local-ai-backends:master-gpu-nvidia-cuda-13-mlx"
  mirrors:
    - localai/localai-backends:master-gpu-nvidia-cuda-13-mlx
- !!merge <<: *mlx
  name: "nvidia-l4t-mlx"
  uri: "quay.io/go-skynet/local-ai-backends:latest-nvidia-l4t-mlx"
  mirrors:
    - localai/localai-backends:latest-nvidia-l4t-mlx
- !!merge <<: *mlx
  name: "nvidia-l4t-mlx-development"
  uri: "quay.io/go-skynet/local-ai-backends:master-nvidia-l4t-mlx"
  mirrors:
    - localai/localai-backends:master-nvidia-l4t-mlx
- !!merge <<: *mlx
  name: "cuda13-nvidia-l4t-arm64-mlx"
  uri: "quay.io/go-skynet/local-ai-backends:latest-nvidia-l4t-cuda-13-arm64-mlx"
  mirrors:
    - localai/localai-backends:latest-nvidia-l4t-cuda-13-arm64-mlx
- !!merge <<: *mlx
  name: "cuda13-nvidia-l4t-arm64-mlx-development"
  uri: "quay.io/go-skynet/local-ai-backends:master-nvidia-l4t-cuda-13-arm64-mlx"
  mirrors:
    - localai/localai-backends:master-nvidia-l4t-cuda-13-arm64-mlx
## mlx-vlm
- !!merge <<: *mlx-vlm
  name: "cpu-mlx-vlm"
  uri: "quay.io/go-skynet/local-ai-backends:latest-cpu-mlx-vlm"
  mirrors:
    - localai/localai-backends:latest-cpu-mlx-vlm
- !!merge <<: *mlx-vlm
  name: "cpu-mlx-vlm-development"
  uri: "quay.io/go-skynet/local-ai-backends:master-cpu-mlx-vlm"
  mirrors:
    - localai/localai-backends:master-cpu-mlx-vlm
- !!merge <<: *mlx-vlm
  name: "cuda12-mlx-vlm"
  uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-nvidia-cuda-12-mlx-vlm"
  mirrors:
    - localai/localai-backends:latest-gpu-nvidia-cuda-12-mlx-vlm
- !!merge <<: *mlx-vlm
  name: "cuda12-mlx-vlm-development"
  uri: "quay.io/go-skynet/local-ai-backends:master-gpu-nvidia-cuda-12-mlx-vlm"
  mirrors:
    - localai/localai-backends:master-gpu-nvidia-cuda-12-mlx-vlm
- !!merge <<: *mlx-vlm
  name: "cuda13-mlx-vlm"
  uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-nvidia-cuda-13-mlx-vlm"
  mirrors:
    - localai/localai-backends:latest-gpu-nvidia-cuda-13-mlx-vlm
- !!merge <<: *mlx-vlm
  name: "cuda13-mlx-vlm-development"
  uri: "quay.io/go-skynet/local-ai-backends:master-gpu-nvidia-cuda-13-mlx-vlm"
  mirrors:
    - localai/localai-backends:master-gpu-nvidia-cuda-13-mlx-vlm
- !!merge <<: *mlx-vlm
  name: "nvidia-l4t-mlx-vlm"
  uri: "quay.io/go-skynet/local-ai-backends:latest-nvidia-l4t-mlx-vlm"
  mirrors:
    - localai/localai-backends:latest-nvidia-l4t-mlx-vlm
- !!merge <<: *mlx-vlm
  name: "nvidia-l4t-mlx-vlm-development"
  uri: "quay.io/go-skynet/local-ai-backends:master-nvidia-l4t-mlx-vlm"
  mirrors:
    - localai/localai-backends:master-nvidia-l4t-mlx-vlm
- !!merge <<: *mlx-vlm
  name: "cuda13-nvidia-l4t-arm64-mlx-vlm"
  uri: "quay.io/go-skynet/local-ai-backends:latest-nvidia-l4t-cuda-13-arm64-mlx-vlm"
  mirrors:
    - localai/localai-backends:latest-nvidia-l4t-cuda-13-arm64-mlx-vlm
- !!merge <<: *mlx-vlm
  name: "cuda13-nvidia-l4t-arm64-mlx-vlm-development"
  uri: "quay.io/go-skynet/local-ai-backends:master-nvidia-l4t-cuda-13-arm64-mlx-vlm"
  mirrors:
    - localai/localai-backends:master-nvidia-l4t-cuda-13-arm64-mlx-vlm
## mlx-audio
- !!merge <<: *mlx-audio
  name: "cpu-mlx-audio"
  uri: "quay.io/go-skynet/local-ai-backends:latest-cpu-mlx-audio"
  mirrors:
    - localai/localai-backends:latest-cpu-mlx-audio
- !!merge <<: *mlx-audio
  name: "cpu-mlx-audio-development"
  uri: "quay.io/go-skynet/local-ai-backends:master-cpu-mlx-audio"
  mirrors:
    - localai/localai-backends:master-cpu-mlx-audio
- !!merge <<: *mlx-audio
  name: "cuda12-mlx-audio"
  uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-nvidia-cuda-12-mlx-audio"
  mirrors:
    - localai/localai-backends:latest-gpu-nvidia-cuda-12-mlx-audio
- !!merge <<: *mlx-audio
  name: "cuda12-mlx-audio-development"
  uri: "quay.io/go-skynet/local-ai-backends:master-gpu-nvidia-cuda-12-mlx-audio"
  mirrors:
    - localai/localai-backends:master-gpu-nvidia-cuda-12-mlx-audio
- !!merge <<: *mlx-audio
name: "cuda13-mlx-audio"
uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-nvidia-cuda-13-mlx-audio"
mirrors:
- localai/localai-backends:latest-gpu-nvidia-cuda-13-mlx-audio
- !!merge <<: *mlx-audio
name: "cuda13-mlx-audio-development"
uri: "quay.io/go-skynet/local-ai-backends:master-gpu-nvidia-cuda-13-mlx-audio"
mirrors:
- localai/localai-backends:master-gpu-nvidia-cuda-13-mlx-audio
- !!merge <<: *mlx-audio
name: "nvidia-l4t-mlx-audio"
uri: "quay.io/go-skynet/local-ai-backends:latest-nvidia-l4t-mlx-audio"
mirrors:
- localai/localai-backends:latest-nvidia-l4t-mlx-audio
- !!merge <<: *mlx-audio
name: "nvidia-l4t-mlx-audio-development"
uri: "quay.io/go-skynet/local-ai-backends:master-nvidia-l4t-mlx-audio"
mirrors:
- localai/localai-backends:master-nvidia-l4t-mlx-audio
- !!merge <<: *mlx-audio
name: "cuda13-nvidia-l4t-arm64-mlx-audio"
uri: "quay.io/go-skynet/local-ai-backends:latest-nvidia-l4t-cuda-13-arm64-mlx-audio"
mirrors:
- localai/localai-backends:latest-nvidia-l4t-cuda-13-arm64-mlx-audio
- !!merge <<: *mlx-audio
name: "cuda13-nvidia-l4t-arm64-mlx-audio-development"
uri: "quay.io/go-skynet/local-ai-backends:master-nvidia-l4t-cuda-13-arm64-mlx-audio"
mirrors:
- localai/localai-backends:master-nvidia-l4t-cuda-13-arm64-mlx-audio
## mlx-distributed
- !!merge <<: *mlx-distributed
name: "cpu-mlx-distributed"
uri: "quay.io/go-skynet/local-ai-backends:latest-cpu-mlx-distributed"
mirrors:
- localai/localai-backends:latest-cpu-mlx-distributed
- !!merge <<: *mlx-distributed
name: "cpu-mlx-distributed-development"
uri: "quay.io/go-skynet/local-ai-backends:master-cpu-mlx-distributed"
mirrors:
- localai/localai-backends:master-cpu-mlx-distributed
- !!merge <<: *mlx-distributed
name: "cuda12-mlx-distributed"
uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-nvidia-cuda-12-mlx-distributed"
mirrors:
- localai/localai-backends:latest-gpu-nvidia-cuda-12-mlx-distributed
- !!merge <<: *mlx-distributed
name: "cuda12-mlx-distributed-development"
uri: "quay.io/go-skynet/local-ai-backends:master-gpu-nvidia-cuda-12-mlx-distributed"
mirrors:
- localai/localai-backends:master-gpu-nvidia-cuda-12-mlx-distributed
- !!merge <<: *mlx-distributed
name: "cuda13-mlx-distributed"
uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-nvidia-cuda-13-mlx-distributed"
mirrors:
- localai/localai-backends:latest-gpu-nvidia-cuda-13-mlx-distributed
- !!merge <<: *mlx-distributed
name: "cuda13-mlx-distributed-development"
uri: "quay.io/go-skynet/local-ai-backends:master-gpu-nvidia-cuda-13-mlx-distributed"
mirrors:
- localai/localai-backends:master-gpu-nvidia-cuda-13-mlx-distributed
- !!merge <<: *mlx-distributed
name: "nvidia-l4t-mlx-distributed"
uri: "quay.io/go-skynet/local-ai-backends:latest-nvidia-l4t-mlx-distributed"
mirrors:
- localai/localai-backends:latest-nvidia-l4t-mlx-distributed
- !!merge <<: *mlx-distributed
name: "nvidia-l4t-mlx-distributed-development"
uri: "quay.io/go-skynet/local-ai-backends:master-nvidia-l4t-mlx-distributed"
mirrors:
- localai/localai-backends:master-nvidia-l4t-mlx-distributed
- !!merge <<: *mlx-distributed
name: "cuda13-nvidia-l4t-arm64-mlx-distributed"
uri: "quay.io/go-skynet/local-ai-backends:latest-nvidia-l4t-cuda-13-arm64-mlx-distributed"
mirrors:
- localai/localai-backends:latest-nvidia-l4t-cuda-13-arm64-mlx-distributed
- !!merge <<: *mlx-distributed
name: "cuda13-nvidia-l4t-arm64-mlx-distributed-development"
uri: "quay.io/go-skynet/local-ai-backends:master-nvidia-l4t-cuda-13-arm64-mlx-distributed"
mirrors:
- localai/localai-backends:master-nvidia-l4t-cuda-13-arm64-mlx-distributed
- !!merge <<: *kitten-tts
name: "kitten-tts-development"
uri: "quay.io/go-skynet/local-ai-backends:master-kitten-tts"
mirrors:
- localai/localai-backends:master-kitten-tts
- !!merge <<: *kitten-tts
name: "metal-kitten-tts"
uri: "quay.io/go-skynet/local-ai-backends:latest-metal-darwin-arm64-kitten-tts"
mirrors:
- localai/localai-backends:latest-metal-darwin-arm64-kitten-tts
- !!merge <<: *kitten-tts
name: "metal-kitten-tts-development"
uri: "quay.io/go-skynet/local-ai-backends:master-metal-darwin-arm64-kitten-tts"
mirrors:
- localai/localai-backends:master-metal-darwin-arm64-kitten-tts
- !!merge <<: *local-store
name: "local-store-development"
uri: "quay.io/go-skynet/local-ai-backends:master-cpu-local-store"
mirrors:
- localai/localai-backends:master-cpu-local-store
- !!merge <<: *local-store
name: "metal-local-store"
uri: "quay.io/go-skynet/local-ai-backends:latest-metal-darwin-arm64-local-store"
mirrors:
- localai/localai-backends:latest-metal-darwin-arm64-local-store
- !!merge <<: *local-store
name: "metal-local-store-development"
uri: "quay.io/go-skynet/local-ai-backends:master-metal-darwin-arm64-local-store"
mirrors:
- localai/localai-backends:master-metal-darwin-arm64-local-store
- !!merge <<: *opus
name: "opus-development"
uri: "quay.io/go-skynet/local-ai-backends:master-cpu-opus"
mirrors:
- localai/localai-backends:master-cpu-opus
- !!merge <<: *opus
name: "metal-opus"
uri: "quay.io/go-skynet/local-ai-backends:latest-metal-darwin-arm64-opus"
mirrors:
- localai/localai-backends:latest-metal-darwin-arm64-opus
- !!merge <<: *opus
name: "metal-opus-development"
uri: "quay.io/go-skynet/local-ai-backends:master-metal-darwin-arm64-opus"
mirrors:
- localai/localai-backends:master-metal-darwin-arm64-opus
- !!merge <<: *silero-vad
name: "silero-vad-development"
uri: "quay.io/go-skynet/local-ai-backends:master-cpu-silero-vad"
mirrors:
- localai/localai-backends:master-cpu-silero-vad
- !!merge <<: *silero-vad
name: "metal-silero-vad"
uri: "quay.io/go-skynet/local-ai-backends:latest-metal-darwin-arm64-silero-vad"
mirrors:
- localai/localai-backends:latest-metal-darwin-arm64-silero-vad
- !!merge <<: *silero-vad
name: "metal-silero-vad-development"
uri: "quay.io/go-skynet/local-ai-backends:master-metal-darwin-arm64-silero-vad"
mirrors:
- localai/localai-backends:master-metal-darwin-arm64-silero-vad
- !!merge <<: *piper
name: "piper-development"
uri: "quay.io/go-skynet/local-ai-backends:master-piper"
mirrors:
- localai/localai-backends:master-piper
- !!merge <<: *piper
name: "metal-piper"
uri: "quay.io/go-skynet/local-ai-backends:latest-metal-darwin-arm64-piper"
mirrors:
- localai/localai-backends:latest-metal-darwin-arm64-piper
- !!merge <<: *piper
name: "metal-piper-development"
uri: "quay.io/go-skynet/local-ai-backends:master-metal-darwin-arm64-piper"
mirrors:
- localai/localai-backends:master-metal-darwin-arm64-piper
## llama-cpp
- !!merge <<: *llamacpp
name: "nvidia-l4t-arm64-llama-cpp"
uri: "quay.io/go-skynet/local-ai-backends:latest-nvidia-l4t-arm64-llama-cpp"
mirrors:
- localai/localai-backends:latest-nvidia-l4t-arm64-llama-cpp
- !!merge <<: *llamacpp
name: "nvidia-l4t-arm64-llama-cpp-development"
uri: "quay.io/go-skynet/local-ai-backends:master-nvidia-l4t-arm64-llama-cpp"
mirrors:
- localai/localai-backends:master-nvidia-l4t-arm64-llama-cpp
- !!merge <<: *llamacpp
name: "cuda13-nvidia-l4t-arm64-llama-cpp"
uri: "quay.io/go-skynet/local-ai-backends:latest-nvidia-l4t-cuda-13-arm64-llama-cpp"
mirrors:
- localai/localai-backends:latest-nvidia-l4t-cuda-13-arm64-llama-cpp
- !!merge <<: *llamacpp
name: "cuda13-nvidia-l4t-arm64-llama-cpp-development"
uri: "quay.io/go-skynet/local-ai-backends:master-nvidia-l4t-cuda-13-arm64-llama-cpp"
mirrors:
- localai/localai-backends:master-nvidia-l4t-cuda-13-arm64-llama-cpp
- !!merge <<: *llamacpp
name: "cpu-llama-cpp"
uri: "quay.io/go-skynet/local-ai-backends:latest-cpu-llama-cpp"
mirrors:
- localai/localai-backends:latest-cpu-llama-cpp
- !!merge <<: *llamacpp
name: "cpu-llama-cpp-development"
uri: "quay.io/go-skynet/local-ai-backends:master-cpu-llama-cpp"
mirrors:
- localai/localai-backends:master-cpu-llama-cpp
- !!merge <<: *llamacpp
name: "cuda12-llama-cpp"
uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-nvidia-cuda-12-llama-cpp"
mirrors:
- localai/localai-backends:latest-gpu-nvidia-cuda-12-llama-cpp
- !!merge <<: *llamacpp
name: "rocm-llama-cpp"
uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-rocm-hipblas-llama-cpp"
mirrors:
- localai/localai-backends:latest-gpu-rocm-hipblas-llama-cpp
- !!merge <<: *llamacpp
name: "intel-sycl-f32-llama-cpp"
uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-intel-sycl-f32-llama-cpp"
mirrors:
- localai/localai-backends:latest-gpu-intel-sycl-f32-llama-cpp
- !!merge <<: *llamacpp
name: "intel-sycl-f16-llama-cpp"
uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-intel-sycl-f16-llama-cpp"
mirrors:
- localai/localai-backends:latest-gpu-intel-sycl-f16-llama-cpp
- !!merge <<: *llamacpp
name: "vulkan-llama-cpp"
uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-vulkan-llama-cpp"
mirrors:
- localai/localai-backends:latest-gpu-vulkan-llama-cpp
- !!merge <<: *llamacpp
name: "vulkan-llama-cpp-development"
uri: "quay.io/go-skynet/local-ai-backends:master-gpu-vulkan-llama-cpp"
mirrors:
- localai/localai-backends:master-gpu-vulkan-llama-cpp
- !!merge <<: *llamacpp
name: "metal-llama-cpp"
uri: "quay.io/go-skynet/local-ai-backends:latest-metal-darwin-arm64-llama-cpp"
mirrors:
- localai/localai-backends:latest-metal-darwin-arm64-llama-cpp
- !!merge <<: *llamacpp
name: "metal-llama-cpp-development"
uri: "quay.io/go-skynet/local-ai-backends:master-metal-darwin-arm64-llama-cpp"
mirrors:
- localai/localai-backends:master-metal-darwin-arm64-llama-cpp
- !!merge <<: *llamacpp
name: "cuda12-llama-cpp-development"
uri: "quay.io/go-skynet/local-ai-backends:master-gpu-nvidia-cuda-12-llama-cpp"
mirrors:
- localai/localai-backends:master-gpu-nvidia-cuda-12-llama-cpp
- !!merge <<: *llamacpp
name: "rocm-llama-cpp-development"
uri: "quay.io/go-skynet/local-ai-backends:master-gpu-rocm-hipblas-llama-cpp"
mirrors:
- localai/localai-backends:master-gpu-rocm-hipblas-llama-cpp
- !!merge <<: *llamacpp
name: "intel-sycl-f32-llama-cpp-development"
uri: "quay.io/go-skynet/local-ai-backends:master-gpu-intel-sycl-f32-llama-cpp"
mirrors:
- localai/localai-backends:master-gpu-intel-sycl-f32-llama-cpp
- !!merge <<: *llamacpp
name: "intel-sycl-f16-llama-cpp-development"
uri: "quay.io/go-skynet/local-ai-backends:master-gpu-intel-sycl-f16-llama-cpp"
mirrors:
- localai/localai-backends:master-gpu-intel-sycl-f16-llama-cpp
- !!merge <<: *llamacpp
name: "cuda13-llama-cpp"
uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-nvidia-cuda-13-llama-cpp"
mirrors:
- localai/localai-backends:latest-gpu-nvidia-cuda-13-llama-cpp
- !!merge <<: *llamacpp
name: "cuda13-llama-cpp-development"
uri: "quay.io/go-skynet/local-ai-backends:master-gpu-nvidia-cuda-13-llama-cpp"
mirrors:
- localai/localai-backends:master-gpu-nvidia-cuda-13-llama-cpp
## ik-llama-cpp
- !!merge <<: *ikllamacpp
name: "cpu-ik-llama-cpp"
uri: "quay.io/go-skynet/local-ai-backends:latest-cpu-ik-llama-cpp"
mirrors:
- localai/localai-backends:latest-cpu-ik-llama-cpp
- !!merge <<: *ikllamacpp
name: "cpu-ik-llama-cpp-development"
uri: "quay.io/go-skynet/local-ai-backends:master-cpu-ik-llama-cpp"
mirrors:
- localai/localai-backends:master-cpu-ik-llama-cpp
## turboquant
- !!merge <<: *turboquant
name: "cpu-turboquant"
uri: "quay.io/go-skynet/local-ai-backends:latest-cpu-turboquant"
mirrors:
- localai/localai-backends:latest-cpu-turboquant
- !!merge <<: *turboquant
name: "cpu-turboquant-development"
uri: "quay.io/go-skynet/local-ai-backends:master-cpu-turboquant"
mirrors:
- localai/localai-backends:master-cpu-turboquant
- !!merge <<: *turboquant
name: "cuda12-turboquant"
uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-nvidia-cuda-12-turboquant"
mirrors:
- localai/localai-backends:latest-gpu-nvidia-cuda-12-turboquant
- !!merge <<: *turboquant
name: "cuda12-turboquant-development"
uri: "quay.io/go-skynet/local-ai-backends:master-gpu-nvidia-cuda-12-turboquant"
mirrors:
- localai/localai-backends:master-gpu-nvidia-cuda-12-turboquant
- !!merge <<: *turboquant
name: "cuda13-turboquant"
uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-nvidia-cuda-13-turboquant"
mirrors:
- localai/localai-backends:latest-gpu-nvidia-cuda-13-turboquant
- !!merge <<: *turboquant
name: "cuda13-turboquant-development"
uri: "quay.io/go-skynet/local-ai-backends:master-gpu-nvidia-cuda-13-turboquant"
mirrors:
- localai/localai-backends:master-gpu-nvidia-cuda-13-turboquant
- !!merge <<: *turboquant
name: "rocm-turboquant"
uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-rocm-hipblas-turboquant"
mirrors:
- localai/localai-backends:latest-gpu-rocm-hipblas-turboquant
- !!merge <<: *turboquant
name: "rocm-turboquant-development"
uri: "quay.io/go-skynet/local-ai-backends:master-gpu-rocm-hipblas-turboquant"
mirrors:
- localai/localai-backends:master-gpu-rocm-hipblas-turboquant
- !!merge <<: *turboquant
name: "intel-sycl-f32-turboquant"
uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-intel-sycl-f32-turboquant"
mirrors:
- localai/localai-backends:latest-gpu-intel-sycl-f32-turboquant
- !!merge <<: *turboquant
name: "intel-sycl-f32-turboquant-development"
uri: "quay.io/go-skynet/local-ai-backends:master-gpu-intel-sycl-f32-turboquant"
mirrors:
- localai/localai-backends:master-gpu-intel-sycl-f32-turboquant
- !!merge <<: *turboquant
name: "intel-sycl-f16-turboquant"
uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-intel-sycl-f16-turboquant"
mirrors:
- localai/localai-backends:latest-gpu-intel-sycl-f16-turboquant
- !!merge <<: *turboquant
name: "intel-sycl-f16-turboquant-development"
uri: "quay.io/go-skynet/local-ai-backends:master-gpu-intel-sycl-f16-turboquant"
mirrors:
- localai/localai-backends:master-gpu-intel-sycl-f16-turboquant
- !!merge <<: *turboquant
name: "vulkan-turboquant"
uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-vulkan-turboquant"
mirrors:
- localai/localai-backends:latest-gpu-vulkan-turboquant
- !!merge <<: *turboquant
name: "vulkan-turboquant-development"
uri: "quay.io/go-skynet/local-ai-backends:master-gpu-vulkan-turboquant"
mirrors:
- localai/localai-backends:master-gpu-vulkan-turboquant
- !!merge <<: *turboquant
name: "nvidia-l4t-arm64-turboquant"
uri: "quay.io/go-skynet/local-ai-backends:latest-nvidia-l4t-arm64-turboquant"
mirrors:
- localai/localai-backends:latest-nvidia-l4t-arm64-turboquant
- !!merge <<: *turboquant
name: "nvidia-l4t-arm64-turboquant-development"
uri: "quay.io/go-skynet/local-ai-backends:master-nvidia-l4t-arm64-turboquant"
mirrors:
- localai/localai-backends:master-nvidia-l4t-arm64-turboquant
- !!merge <<: *turboquant
name: "cuda13-nvidia-l4t-arm64-turboquant"
uri: "quay.io/go-skynet/local-ai-backends:latest-nvidia-l4t-cuda-13-arm64-turboquant"
mirrors:
- localai/localai-backends:latest-nvidia-l4t-cuda-13-arm64-turboquant
- !!merge <<: *turboquant
name: "cuda13-nvidia-l4t-arm64-turboquant-development"
uri: "quay.io/go-skynet/local-ai-backends:master-nvidia-l4t-cuda-13-arm64-turboquant"
mirrors:
- localai/localai-backends:master-nvidia-l4t-cuda-13-arm64-turboquant
## whisper
- !!merge <<: *whispercpp
name: "nvidia-l4t-arm64-whisper"
uri: "quay.io/go-skynet/local-ai-backends:latest-nvidia-l4t-arm64-whisper"
mirrors:
- localai/localai-backends:latest-nvidia-l4t-arm64-whisper
- !!merge <<: *whispercpp
name: "nvidia-l4t-arm64-whisper-development"
uri: "quay.io/go-skynet/local-ai-backends:master-nvidia-l4t-arm64-whisper"
mirrors:
- localai/localai-backends:master-nvidia-l4t-arm64-whisper
- !!merge <<: *whispercpp
name: "cuda13-nvidia-l4t-arm64-whisper"
uri: "quay.io/go-skynet/local-ai-backends:latest-nvidia-l4t-cuda-13-arm64-whisper"
mirrors:
- localai/localai-backends:latest-nvidia-l4t-cuda-13-arm64-whisper
- !!merge <<: *whispercpp
name: "cuda13-nvidia-l4t-arm64-whisper-development"
uri: "quay.io/go-skynet/local-ai-backends:master-nvidia-l4t-cuda-13-arm64-whisper"
mirrors:
- localai/localai-backends:master-nvidia-l4t-cuda-13-arm64-whisper
- !!merge <<: *whispercpp
name: "cpu-whisper"
uri: "quay.io/go-skynet/local-ai-backends:latest-cpu-whisper"
mirrors:
- localai/localai-backends:latest-cpu-whisper
- !!merge <<: *whispercpp
name: "metal-whisper"
uri: "quay.io/go-skynet/local-ai-backends:latest-metal-darwin-arm64-whisper"
mirrors:
- localai/localai-backends:latest-metal-darwin-arm64-whisper
- !!merge <<: *whispercpp
name: "metal-whisper-development"
uri: "quay.io/go-skynet/local-ai-backends:master-metal-darwin-arm64-whisper"
mirrors:
- localai/localai-backends:master-metal-darwin-arm64-whisper
- !!merge <<: *whispercpp
name: "cpu-whisper-development"
uri: "quay.io/go-skynet/local-ai-backends:master-cpu-whisper"
mirrors:
- localai/localai-backends:master-cpu-whisper
- !!merge <<: *whispercpp
name: "cuda12-whisper"
uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-nvidia-cuda-12-whisper"
mirrors:
- localai/localai-backends:latest-gpu-nvidia-cuda-12-whisper
- !!merge <<: *whispercpp
name: "rocm-whisper"
uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-rocm-hipblas-whisper"
mirrors:
- localai/localai-backends:latest-gpu-rocm-hipblas-whisper
- !!merge <<: *whispercpp
name: "intel-sycl-f32-whisper"
uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-intel-sycl-f32-whisper"
mirrors:
- localai/localai-backends:latest-gpu-intel-sycl-f32-whisper
- !!merge <<: *whispercpp
name: "intel-sycl-f16-whisper"
uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-intel-sycl-f16-whisper"
mirrors:
- localai/localai-backends:latest-gpu-intel-sycl-f16-whisper
- !!merge <<: *whispercpp
name: "vulkan-whisper"
uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-vulkan-whisper"
mirrors:
- localai/localai-backends:latest-gpu-vulkan-whisper
- !!merge <<: *whispercpp
name: "vulkan-whisper-development"
uri: "quay.io/go-skynet/local-ai-backends:master-gpu-vulkan-whisper"
mirrors:
- localai/localai-backends:master-gpu-vulkan-whisper
- !!merge <<: *whispercpp
name: "cuda12-whisper-development"
uri: "quay.io/go-skynet/local-ai-backends:master-gpu-nvidia-cuda-12-whisper"
mirrors:
- localai/localai-backends:master-gpu-nvidia-cuda-12-whisper
- !!merge <<: *whispercpp
name: "rocm-whisper-development"
uri: "quay.io/go-skynet/local-ai-backends:master-gpu-rocm-hipblas-whisper"
mirrors:
- localai/localai-backends:master-gpu-rocm-hipblas-whisper
- !!merge <<: *whispercpp
name: "intel-sycl-f32-whisper-development"
uri: "quay.io/go-skynet/local-ai-backends:master-gpu-intel-sycl-f32-whisper"
mirrors:
- localai/localai-backends:master-gpu-intel-sycl-f32-whisper
- !!merge <<: *whispercpp
name: "intel-sycl-f16-whisper-development"
uri: "quay.io/go-skynet/local-ai-backends:master-gpu-intel-sycl-f16-whisper"
mirrors:
- localai/localai-backends:master-gpu-intel-sycl-f16-whisper
- !!merge <<: *whispercpp
name: "cuda13-whisper"
uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-nvidia-cuda-13-whisper"
mirrors:
- localai/localai-backends:latest-gpu-nvidia-cuda-13-whisper
- !!merge <<: *whispercpp
name: "cuda13-whisper-development"
uri: "quay.io/go-skynet/local-ai-backends:master-gpu-nvidia-cuda-13-whisper"
mirrors:
- localai/localai-backends:master-gpu-nvidia-cuda-13-whisper
## stablediffusion-ggml
- !!merge <<: *stablediffusionggml
name: "cpu-stablediffusion-ggml"
uri: "quay.io/go-skynet/local-ai-backends:latest-cpu-stablediffusion-ggml"
mirrors:
- localai/localai-backends:latest-cpu-stablediffusion-ggml
- !!merge <<: *stablediffusionggml
name: "cpu-stablediffusion-ggml-development"
uri: "quay.io/go-skynet/local-ai-backends:master-cpu-stablediffusion-ggml"
mirrors:
- localai/localai-backends:master-cpu-stablediffusion-ggml
- !!merge <<: *stablediffusionggml
name: "metal-stablediffusion-ggml"
uri: "quay.io/go-skynet/local-ai-backends:latest-metal-darwin-arm64-stablediffusion-ggml"
mirrors:
- localai/localai-backends:latest-metal-darwin-arm64-stablediffusion-ggml
- !!merge <<: *stablediffusionggml
name: "metal-stablediffusion-ggml-development"
uri: "quay.io/go-skynet/local-ai-backends:master-metal-darwin-arm64-stablediffusion-ggml"
mirrors:
- localai/localai-backends:master-metal-darwin-arm64-stablediffusion-ggml
- !!merge <<: *stablediffusionggml
name: "vulkan-stablediffusion-ggml"
uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-vulkan-stablediffusion-ggml"
mirrors:
- localai/localai-backends:latest-gpu-vulkan-stablediffusion-ggml
- !!merge <<: *stablediffusionggml
name: "vulkan-stablediffusion-ggml-development"
uri: "quay.io/go-skynet/local-ai-backends:master-gpu-vulkan-stablediffusion-ggml"
mirrors:
- localai/localai-backends:master-gpu-vulkan-stablediffusion-ggml
- !!merge <<: *stablediffusionggml
name: "cuda12-stablediffusion-ggml"
uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-nvidia-cuda-12-stablediffusion-ggml"
mirrors:
- localai/localai-backends:latest-gpu-nvidia-cuda-12-stablediffusion-ggml
- !!merge <<: *stablediffusionggml
name: "intel-sycl-f32-stablediffusion-ggml"
uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-intel-sycl-f32-stablediffusion-ggml"
- !!merge <<: *stablediffusionggml
name: "intel-sycl-f16-stablediffusion-ggml"
uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-intel-sycl-f16-stablediffusion-ggml"
mirrors:
- localai/localai-backends:latest-gpu-intel-sycl-f16-stablediffusion-ggml
- !!merge <<: *stablediffusionggml
name: "cuda12-stablediffusion-ggml-development"
uri: "quay.io/go-skynet/local-ai-backends:master-gpu-nvidia-cuda-12-stablediffusion-ggml"
mirrors:
- localai/localai-backends:master-gpu-nvidia-cuda-12-stablediffusion-ggml
- !!merge <<: *stablediffusionggml
name: "intel-sycl-f32-stablediffusion-ggml-development"
uri: "quay.io/go-skynet/local-ai-backends:master-gpu-intel-sycl-f32-stablediffusion-ggml"
mirrors:
- localai/localai-backends:master-gpu-intel-sycl-f32-stablediffusion-ggml
- !!merge <<: *stablediffusionggml
name: "intel-sycl-f16-stablediffusion-ggml-development"
uri: "quay.io/go-skynet/local-ai-backends:master-gpu-intel-sycl-f16-stablediffusion-ggml"
mirrors:
- localai/localai-backends:master-gpu-intel-sycl-f16-stablediffusion-ggml
- !!merge <<: *stablediffusionggml
name: "nvidia-l4t-arm64-stablediffusion-ggml-development"
uri: "quay.io/go-skynet/local-ai-backends:master-nvidia-l4t-arm64-stablediffusion-ggml"
mirrors:
- localai/localai-backends:master-nvidia-l4t-arm64-stablediffusion-ggml
- !!merge <<: *stablediffusionggml
name: "nvidia-l4t-arm64-stablediffusion-ggml"
uri: "quay.io/go-skynet/local-ai-backends:latest-nvidia-l4t-arm64-stablediffusion-ggml"
mirrors:
- localai/localai-backends:latest-nvidia-l4t-arm64-stablediffusion-ggml
- !!merge <<: *stablediffusionggml
name: "cuda13-nvidia-l4t-arm64-stablediffusion-ggml"
uri: "quay.io/go-skynet/local-ai-backends:latest-nvidia-l4t-cuda-13-arm64-stablediffusion-ggml"
mirrors:
- localai/localai-backends:latest-nvidia-l4t-cuda-13-arm64-stablediffusion-ggml
- !!merge <<: *stablediffusionggml
name: "cuda13-nvidia-l4t-arm64-stablediffusion-ggml-development"
uri: "quay.io/go-skynet/local-ai-backends:master-nvidia-l4t-cuda-13-arm64-stablediffusion-ggml"
mirrors:
- localai/localai-backends:master-nvidia-l4t-cuda-13-arm64-stablediffusion-ggml
- !!merge <<: *stablediffusionggml
name: "cuda13-stablediffusion-ggml"
uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-nvidia-cuda-13-stablediffusion-ggml"
mirrors:
- localai/localai-backends:latest-gpu-nvidia-cuda-13-stablediffusion-ggml
- !!merge <<: *stablediffusionggml
name: "cuda13-stablediffusion-ggml-development"
uri: "quay.io/go-skynet/local-ai-backends:master-gpu-nvidia-cuda-13-stablediffusion-ggml"
mirrors:
- localai/localai-backends:master-gpu-nvidia-cuda-13-stablediffusion-ggml
## vllm
- !!merge <<: *vllm
name: "vllm-development"
capabilities:
nvidia: "cuda12-vllm-development"
amd: "rocm-vllm-development"
intel: "intel-vllm-development"
cpu: "cpu-vllm-development"
- !!merge <<: *vllm
name: "cuda12-vllm"
uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-nvidia-cuda-12-vllm"
mirrors:
- localai/localai-backends:latest-gpu-nvidia-cuda-12-vllm
- !!merge <<: *vllm
name: "rocm-vllm"
uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-rocm-hipblas-vllm"
mirrors:
- localai/localai-backends:latest-gpu-rocm-hipblas-vllm
- !!merge <<: *vllm
name: "intel-vllm"
uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-intel-vllm"
mirrors:
- localai/localai-backends:latest-gpu-intel-vllm
- !!merge <<: *vllm
name: "cpu-vllm"
uri: "quay.io/go-skynet/local-ai-backends:latest-cpu-vllm"
mirrors:
- localai/localai-backends:latest-cpu-vllm
- !!merge <<: *vllm
name: "cuda12-vllm-development"
uri: "quay.io/go-skynet/local-ai-backends:master-gpu-nvidia-cuda-12-vllm"
mirrors:
- localai/localai-backends:master-gpu-nvidia-cuda-12-vllm
- !!merge <<: *vllm
name: "rocm-vllm-development"
uri: "quay.io/go-skynet/local-ai-backends:master-gpu-rocm-hipblas-vllm"
mirrors:
- localai/localai-backends:master-gpu-rocm-hipblas-vllm
- !!merge <<: *vllm
name: "intel-vllm-development"
uri: "quay.io/go-skynet/local-ai-backends:master-gpu-intel-vllm"
mirrors:
- localai/localai-backends:master-gpu-intel-vllm
- !!merge <<: *vllm
name: "cpu-vllm-development"
uri: "quay.io/go-skynet/local-ai-backends:master-cpu-vllm"
mirrors:
- localai/localai-backends:master-cpu-vllm
## sglang
- !!merge <<: *sglang
name: "sglang-development"
capabilities:
nvidia: "cuda12-sglang-development"
amd: "rocm-sglang-development"
intel: "intel-sglang-development"
cpu: "cpu-sglang-development"
- !!merge <<: *sglang
name: "cuda12-sglang"
uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-nvidia-cuda-12-sglang"
mirrors:
- localai/localai-backends:latest-gpu-nvidia-cuda-12-sglang
- !!merge <<: *sglang
name: "rocm-sglang"
uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-rocm-hipblas-sglang"
mirrors:
- localai/localai-backends:latest-gpu-rocm-hipblas-sglang
- !!merge <<: *sglang
name: "intel-sglang"
uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-intel-sglang"
mirrors:
- localai/localai-backends:latest-gpu-intel-sglang
- !!merge <<: *sglang
name: "cpu-sglang"
uri: "quay.io/go-skynet/local-ai-backends:latest-cpu-sglang"
mirrors:
- localai/localai-backends:latest-cpu-sglang
- !!merge <<: *sglang
name: "cuda12-sglang-development"
uri: "quay.io/go-skynet/local-ai-backends:master-gpu-nvidia-cuda-12-sglang"
mirrors:
- localai/localai-backends:master-gpu-nvidia-cuda-12-sglang
- !!merge <<: *sglang
name: "rocm-sglang-development"
uri: "quay.io/go-skynet/local-ai-backends:master-gpu-rocm-hipblas-sglang"
mirrors:
- localai/localai-backends:master-gpu-rocm-hipblas-sglang
- !!merge <<: *sglang
name: "intel-sglang-development"
uri: "quay.io/go-skynet/local-ai-backends:master-gpu-intel-sglang"
mirrors:
- localai/localai-backends:master-gpu-intel-sglang
- !!merge <<: *sglang
name: "cpu-sglang-development"
uri: "quay.io/go-skynet/local-ai-backends:master-cpu-sglang"
mirrors:
- localai/localai-backends:master-cpu-sglang
## vllm-omni
- !!merge <<: *vllm-omni
name: "vllm-omni-development"
capabilities:
nvidia: "cuda12-vllm-omni-development"
amd: "rocm-vllm-omni-development"
nvidia-cuda-12: "cuda12-vllm-omni-development"
- !!merge <<: *vllm-omni
name: "cuda12-vllm-omni"
uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-nvidia-cuda-12-vllm-omni"
mirrors:
- localai/localai-backends:latest-gpu-nvidia-cuda-12-vllm-omni
- !!merge <<: *vllm-omni
name: "rocm-vllm-omni"
uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-rocm-hipblas-vllm-omni"
mirrors:
- localai/localai-backends:latest-gpu-rocm-hipblas-vllm-omni
- !!merge <<: *vllm-omni
name: "cuda12-vllm-omni-development"
uri: "quay.io/go-skynet/local-ai-backends:master-gpu-nvidia-cuda-12-vllm-omni"
mirrors:
- localai/localai-backends:master-gpu-nvidia-cuda-12-vllm-omni
- !!merge <<: *vllm-omni
name: "rocm-vllm-omni-development"
uri: "quay.io/go-skynet/local-ai-backends:master-gpu-rocm-hipblas-vllm-omni"
mirrors:
- localai/localai-backends:master-gpu-rocm-hipblas-vllm-omni
## rfdetr
- !!merge <<: *rfdetr
name: "rfdetr-development"
capabilities:
nvidia: "cuda12-rfdetr-development"
intel: "intel-rfdetr-development"
#amd: "rocm-rfdetr-development"
nvidia-l4t: "nvidia-l4t-arm64-rfdetr-development"
metal: "metal-rfdetr-development"
default: "cpu-rfdetr-development"
nvidia-cuda-13: "cuda13-rfdetr-development"
- !!merge <<: *rfdetr
name: "cuda12-rfdetr"
uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-nvidia-cuda-12-rfdetr"
mirrors:
- localai/localai-backends:latest-gpu-nvidia-cuda-12-rfdetr
- !!merge <<: *rfdetr
name: "intel-rfdetr"
uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-intel-rfdetr"
mirrors:
- localai/localai-backends:latest-gpu-intel-rfdetr
# - !!merge <<: *rfdetr
# name: "rocm-rfdetr"
# uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-hipblas-rfdetr"
# mirrors:
# - localai/localai-backends:latest-gpu-hipblas-rfdetr
- !!merge <<: *rfdetr
name: "nvidia-l4t-arm64-rfdetr"
uri: "quay.io/go-skynet/local-ai-backends:latest-nvidia-l4t-arm64-rfdetr"
mirrors:
- localai/localai-backends:latest-nvidia-l4t-arm64-rfdetr
- !!merge <<: *rfdetr
name: "nvidia-l4t-arm64-rfdetr-development"
uri: "quay.io/go-skynet/local-ai-backends:master-nvidia-l4t-arm64-rfdetr"
mirrors:
- localai/localai-backends:master-nvidia-l4t-arm64-rfdetr
- !!merge <<: *rfdetr
name: "cpu-rfdetr"
uri: "quay.io/go-skynet/local-ai-backends:latest-cpu-rfdetr"
mirrors:
- localai/localai-backends:latest-cpu-rfdetr
- !!merge <<: *rfdetr
name: "cuda12-rfdetr-development"
uri: "quay.io/go-skynet/local-ai-backends:master-gpu-nvidia-cuda-12-rfdetr"
mirrors:
- localai/localai-backends:master-gpu-nvidia-cuda-12-rfdetr
- !!merge <<: *rfdetr
name: "intel-rfdetr-development"
uri: "quay.io/go-skynet/local-ai-backends:master-gpu-intel-rfdetr"
mirrors:
- localai/localai-backends:master-gpu-intel-rfdetr
# - !!merge <<: *rfdetr
# name: "rocm-rfdetr-development"
# uri: "quay.io/go-skynet/local-ai-backends:master-gpu-hipblas-rfdetr"
# mirrors:
# - localai/localai-backends:master-gpu-hipblas-rfdetr
- !!merge <<: *rfdetr
name: "cpu-rfdetr-development"
uri: "quay.io/go-skynet/local-ai-backends:master-cpu-rfdetr"
mirrors:
- localai/localai-backends:master-cpu-rfdetr
- !!merge <<: *rfdetr
name: "cuda13-rfdetr"
uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-nvidia-cuda-13-rfdetr"
mirrors:
- localai/localai-backends:latest-gpu-nvidia-cuda-13-rfdetr
- !!merge <<: *rfdetr
name: "cuda13-rfdetr-development"
uri: "quay.io/go-skynet/local-ai-backends:master-gpu-nvidia-cuda-13-rfdetr"
mirrors:
- localai/localai-backends:master-gpu-nvidia-cuda-13-rfdetr
- !!merge <<: *rfdetr
name: "metal-rfdetr"
uri: "quay.io/go-skynet/local-ai-backends:latest-metal-darwin-arm64-rfdetr"
mirrors:
- localai/localai-backends:latest-metal-darwin-arm64-rfdetr
- !!merge <<: *rfdetr
name: "metal-rfdetr-development"
uri: "quay.io/go-skynet/local-ai-backends:master-metal-darwin-arm64-rfdetr"
mirrors:
- localai/localai-backends:master-metal-darwin-arm64-rfdetr
## sam3-cpp
- !!merge <<: *sam3cpp
name: "sam3-cpp-development"
capabilities:
default: "cpu-sam3-cpp-development"
nvidia: "cuda12-sam3-cpp-development"
nvidia-cuda-12: "cuda12-sam3-cpp-development"
nvidia-cuda-13: "cuda13-sam3-cpp-development"
nvidia-l4t: "nvidia-l4t-arm64-sam3-cpp-development"
nvidia-l4t-cuda-12: "nvidia-l4t-arm64-sam3-cpp-development"
nvidia-l4t-cuda-13: "cuda13-nvidia-l4t-arm64-sam3-cpp-development"
intel: "intel-sycl-f32-sam3-cpp-development"
vulkan: "vulkan-sam3-cpp-development"
- !!merge <<: *sam3cpp
name: "cpu-sam3-cpp"
uri: "quay.io/go-skynet/local-ai-backends:latest-cpu-sam3-cpp"
mirrors:
- localai/localai-backends:latest-cpu-sam3-cpp
- !!merge <<: *sam3cpp
name: "cpu-sam3-cpp-development"
uri: "quay.io/go-skynet/local-ai-backends:master-cpu-sam3-cpp"
mirrors:
- localai/localai-backends:master-cpu-sam3-cpp
- !!merge <<: *sam3cpp
name: "cuda12-sam3-cpp"
uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-nvidia-cuda-12-sam3-cpp"
mirrors:
- localai/localai-backends:latest-gpu-nvidia-cuda-12-sam3-cpp
- !!merge <<: *sam3cpp
name: "cuda12-sam3-cpp-development"
uri: "quay.io/go-skynet/local-ai-backends:master-gpu-nvidia-cuda-12-sam3-cpp"
mirrors:
- localai/localai-backends:master-gpu-nvidia-cuda-12-sam3-cpp
- !!merge <<: *sam3cpp
name: "cuda13-sam3-cpp"
uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-nvidia-cuda-13-sam3-cpp"
mirrors:
- localai/localai-backends:latest-gpu-nvidia-cuda-13-sam3-cpp
- !!merge <<: *sam3cpp
name: "cuda13-sam3-cpp-development"
uri: "quay.io/go-skynet/local-ai-backends:master-gpu-nvidia-cuda-13-sam3-cpp"
mirrors:
- localai/localai-backends:master-gpu-nvidia-cuda-13-sam3-cpp
- !!merge <<: *sam3cpp
name: "nvidia-l4t-arm64-sam3-cpp"
uri: "quay.io/go-skynet/local-ai-backends:latest-nvidia-l4t-arm64-sam3-cpp"
mirrors:
- localai/localai-backends:latest-nvidia-l4t-arm64-sam3-cpp
- !!merge <<: *sam3cpp
name: "nvidia-l4t-arm64-sam3-cpp-development"
uri: "quay.io/go-skynet/local-ai-backends:master-nvidia-l4t-arm64-sam3-cpp"
mirrors:
- localai/localai-backends:master-nvidia-l4t-arm64-sam3-cpp
- !!merge <<: *sam3cpp
name: "cuda13-nvidia-l4t-arm64-sam3-cpp"
uri: "quay.io/go-skynet/local-ai-backends:latest-nvidia-l4t-cuda-13-arm64-sam3-cpp"
mirrors:
- localai/localai-backends:latest-nvidia-l4t-cuda-13-arm64-sam3-cpp
- !!merge <<: *sam3cpp
name: "cuda13-nvidia-l4t-arm64-sam3-cpp-development"
uri: "quay.io/go-skynet/local-ai-backends:master-nvidia-l4t-cuda-13-arm64-sam3-cpp"
mirrors:
- localai/localai-backends:master-nvidia-l4t-cuda-13-arm64-sam3-cpp
- !!merge <<: *sam3cpp
name: "intel-sycl-f32-sam3-cpp"
uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-intel-sycl-f32-sam3-cpp"
mirrors:
- localai/localai-backends:latest-gpu-intel-sycl-f32-sam3-cpp
- !!merge <<: *sam3cpp
name: "intel-sycl-f32-sam3-cpp-development"
uri: "quay.io/go-skynet/local-ai-backends:master-gpu-intel-sycl-f32-sam3-cpp"
mirrors:
- localai/localai-backends:master-gpu-intel-sycl-f32-sam3-cpp
- !!merge <<: *sam3cpp
name: "vulkan-sam3-cpp"
uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-vulkan-sam3-cpp"
mirrors:
- localai/localai-backends:latest-gpu-vulkan-sam3-cpp
- !!merge <<: *sam3cpp
name: "vulkan-sam3-cpp-development"
uri: "quay.io/go-skynet/local-ai-backends:master-gpu-vulkan-sam3-cpp"
mirrors:
- localai/localai-backends:master-gpu-vulkan-sam3-cpp
## Rerankers
- !!merge <<: *rerankers
name: "rerankers-development"
capabilities:
nvidia: "cuda12-rerankers-development"
intel: "intel-rerankers-development"
amd: "rocm-rerankers-development"
metal: "metal-rerankers-development"
nvidia-cuda-13: "cuda13-rerankers-development"
- !!merge <<: *rerankers
name: "cuda12-rerankers"
uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-nvidia-cuda-12-rerankers"
mirrors:
- localai/localai-backends:latest-gpu-nvidia-cuda-12-rerankers
- !!merge <<: *rerankers
name: "intel-rerankers"
uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-intel-rerankers"
mirrors:
- localai/localai-backends:latest-gpu-intel-rerankers
- !!merge <<: *rerankers
name: "rocm-rerankers"
uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-rocm-hipblas-rerankers"
mirrors:
- localai/localai-backends:latest-gpu-rocm-hipblas-rerankers
- !!merge <<: *rerankers
name: "cuda12-rerankers-development"
uri: "quay.io/go-skynet/local-ai-backends:master-gpu-nvidia-cuda-12-rerankers"
mirrors:
- localai/localai-backends:master-gpu-nvidia-cuda-12-rerankers
- !!merge <<: *rerankers
name: "rocm-rerankers-development"
uri: "quay.io/go-skynet/local-ai-backends:master-gpu-rocm-hipblas-rerankers"
mirrors:
- localai/localai-backends:master-gpu-rocm-hipblas-rerankers
- !!merge <<: *rerankers
name: "intel-rerankers-development"
uri: "quay.io/go-skynet/local-ai-backends:master-gpu-intel-rerankers"
mirrors:
- localai/localai-backends:master-gpu-intel-rerankers
- !!merge <<: *rerankers
name: "cuda13-rerankers"
uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-nvidia-cuda-13-rerankers"
mirrors:
- localai/localai-backends:latest-gpu-nvidia-cuda-13-rerankers
- !!merge <<: *rerankers
name: "cuda13-rerankers-development"
uri: "quay.io/go-skynet/local-ai-backends:master-gpu-nvidia-cuda-13-rerankers"
mirrors:
- localai/localai-backends:master-gpu-nvidia-cuda-13-rerankers
- !!merge <<: *rerankers
name: "metal-rerankers"
uri: "quay.io/go-skynet/local-ai-backends:latest-metal-darwin-arm64-rerankers"
mirrors:
- localai/localai-backends:latest-metal-darwin-arm64-rerankers
- !!merge <<: *rerankers
name: "metal-rerankers-development"
uri: "quay.io/go-skynet/local-ai-backends:master-metal-darwin-arm64-rerankers"
mirrors:
- localai/localai-backends:master-metal-darwin-arm64-rerankers
## tinygrad
## Single image: the meta anchor carries the latest uri directly, since
## there is only one variant; the development entry below overrides it to
## point at the master tag. A commented sketch of this pattern follows
## the entry.
- !!merge <<: *tinygrad
name: "tinygrad-development"
uri: "quay.io/go-skynet/local-ai-backends:master-tinygrad"
mirrors:
- localai/localai-backends:master-tinygrad
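## A minimal commented sketch of the anchor/merge pattern used throughout
## this index, with a hypothetical "example" backend (not a real entry):
## the anchored meta definition carries the release (latest) tag, and each
## variant merges it via !!merge and overrides only name, uri and mirrors.
##   - &example
##     name: "example"
##     uri: "quay.io/go-skynet/local-ai-backends:latest-example"
##     mirrors:
##       - localai/localai-backends:latest-example
##   - !!merge <<: *example
##     name: "example-development"
##     uri: "quay.io/go-skynet/local-ai-backends:master-example"
##     mirrors:
##       - localai/localai-backends:master-example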
## Transformers
- !!merge <<: *transformers
name: "transformers-development"
capabilities:
nvidia: "cuda12-transformers-development"
intel: "intel-transformers-development"
amd: "rocm-transformers-development"
metal: "metal-transformers-development"
nvidia-cuda-13: "cuda13-transformers-development"
- !!merge <<: *transformers
name: "cuda12-transformers"
uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-nvidia-cuda-12-transformers"
mirrors:
- localai/localai-backends:latest-gpu-nvidia-cuda-12-transformers
- !!merge <<: *transformers
name: "rocm-transformers"
uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-rocm-hipblas-transformers"
mirrors:
- localai/localai-backends:latest-gpu-rocm-hipblas-transformers
- !!merge <<: *transformers
name: "intel-transformers"
uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-intel-transformers"
mirrors:
- localai/localai-backends:latest-gpu-intel-transformers
- !!merge <<: *transformers
name: "cuda12-transformers-development"
uri: "quay.io/go-skynet/local-ai-backends:master-gpu-nvidia-cuda-12-transformers"
mirrors:
- localai/localai-backends:master-gpu-nvidia-cuda-12-transformers
- !!merge <<: *transformers
name: "rocm-transformers-development"
uri: "quay.io/go-skynet/local-ai-backends:master-gpu-rocm-hipblas-transformers"
mirrors:
- localai/localai-backends:master-gpu-rocm-hipblas-transformers
- !!merge <<: *transformers
name: "intel-transformers-development"
uri: "quay.io/go-skynet/local-ai-backends:master-gpu-intel-transformers"
mirrors:
- localai/localai-backends:master-gpu-intel-transformers
- !!merge <<: *transformers
name: "cuda13-transformers"
uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-nvidia-cuda-13-transformers"
mirrors:
- localai/localai-backends:latest-gpu-nvidia-cuda-13-transformers
- !!merge <<: *transformers
name: "cuda13-transformers-development"
uri: "quay.io/go-skynet/local-ai-backends:master-gpu-nvidia-cuda-13-transformers"
mirrors:
- localai/localai-backends:master-gpu-nvidia-cuda-13-transformers
- !!merge <<: *transformers
name: "metal-transformers"
uri: "quay.io/go-skynet/local-ai-backends:latest-metal-darwin-arm64-transformers"
mirrors:
- localai/localai-backends:latest-metal-darwin-arm64-transformers
- !!merge <<: *transformers
name: "metal-transformers-development"
uri: "quay.io/go-skynet/local-ai-backends:master-metal-darwin-arm64-transformers"
mirrors:
- localai/localai-backends:master-metal-darwin-arm64-transformers
## Diffusers
- !!merge <<: *diffusers
name: "diffusers-development"
capabilities:
nvidia: "cuda12-diffusers-development"
intel: "intel-diffusers-development"
amd: "rocm-diffusers-development"
nvidia-l4t: "nvidia-l4t-diffusers-development"
metal: "metal-diffusers-development"
default: "cpu-diffusers-development"
nvidia-cuda-13: "cuda13-diffusers-development"
- !!merge <<: *diffusers
name: "cpu-diffusers"
uri: "quay.io/go-skynet/local-ai-backends:latest-cpu-diffusers"
mirrors:
- localai/localai-backends:latest-cpu-diffusers
- !!merge <<: *diffusers
name: "cpu-diffusers-development"
uri: "quay.io/go-skynet/local-ai-backends:master-cpu-diffusers"
mirrors:
- localai/localai-backends:master-cpu-diffusers
- !!merge <<: *diffusers
name: "nvidia-l4t-diffusers"
uri: "quay.io/go-skynet/local-ai-backends:latest-nvidia-l4t-diffusers"
mirrors:
- localai/localai-backends:latest-nvidia-l4t-diffusers
- !!merge <<: *diffusers
name: "nvidia-l4t-diffusers-development"
uri: "quay.io/go-skynet/local-ai-backends:master-nvidia-l4t-diffusers"
mirrors:
- localai/localai-backends:master-nvidia-l4t-diffusers
- !!merge <<: *diffusers
name: "cuda13-nvidia-l4t-arm64-diffusers"
uri: "quay.io/go-skynet/local-ai-backends:latest-nvidia-l4t-cuda-13-arm64-diffusers"
mirrors:
- localai/localai-backends:latest-nvidia-l4t-cuda-13-arm64-diffusers
- !!merge <<: *diffusers
name: "cuda13-nvidia-l4t-arm64-diffusers-development"
uri: "quay.io/go-skynet/local-ai-backends:master-nvidia-l4t-cuda-13-arm64-diffusers"
mirrors:
- localai/localai-backends:master-nvidia-l4t-cuda-13-arm64-diffusers
- !!merge <<: *diffusers
name: "cuda12-diffusers"
uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-nvidia-cuda-12-diffusers"
mirrors:
- localai/localai-backends:latest-gpu-nvidia-cuda-12-diffusers
- !!merge <<: *diffusers
name: "rocm-diffusers"
uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-rocm-hipblas-diffusers"
mirrors:
- localai/localai-backends:latest-gpu-rocm-hipblas-diffusers
- !!merge <<: *diffusers
name: "intel-diffusers"
uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-intel-diffusers"
mirrors:
- localai/localai-backends:latest-gpu-intel-diffusers
- !!merge <<: *diffusers
name: "cuda12-diffusers-development"
uri: "quay.io/go-skynet/local-ai-backends:master-gpu-nvidia-cuda-12-diffusers"
mirrors:
- localai/localai-backends:master-gpu-nvidia-cuda-12-diffusers
- !!merge <<: *diffusers
name: "rocm-diffusers-development"
uri: "quay.io/go-skynet/local-ai-backends:master-gpu-rocm-hipblas-diffusers"
mirrors:
- localai/localai-backends:master-gpu-rocm-hipblas-diffusers
- !!merge <<: *diffusers
name: "intel-diffusers-development"
uri: "quay.io/go-skynet/local-ai-backends:master-gpu-intel-diffusers"
mirrors:
- localai/localai-backends:master-gpu-intel-diffusers
- !!merge <<: *diffusers
name: "cuda13-diffusers"
uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-nvidia-cuda-13-diffusers"
mirrors:
- localai/localai-backends:latest-gpu-nvidia-cuda-13-diffusers
- !!merge <<: *diffusers
name: "cuda13-diffusers-development"
uri: "quay.io/go-skynet/local-ai-backends:master-gpu-nvidia-cuda-13-diffusers"
mirrors:
- localai/localai-backends:master-gpu-nvidia-cuda-13-diffusers
- !!merge <<: *diffusers
name: "metal-diffusers"
uri: "quay.io/go-skynet/local-ai-backends:latest-metal-darwin-arm64-diffusers"
mirrors:
- localai/localai-backends:latest-metal-darwin-arm64-diffusers
- !!merge <<: *diffusers
name: "metal-diffusers-development"
uri: "quay.io/go-skynet/local-ai-backends:master-metal-darwin-arm64-diffusers"
mirrors:
- localai/localai-backends:master-metal-darwin-arm64-diffusers
## ace-step
- !!merge <<: *ace-step
name: "cpu-ace-step"
uri: "quay.io/go-skynet/local-ai-backends:latest-cpu-ace-step"
mirrors:
- localai/localai-backends:latest-cpu-ace-step
- !!merge <<: *ace-step
name: "cpu-ace-step-development"
uri: "quay.io/go-skynet/local-ai-backends:master-cpu-ace-step"
mirrors:
- localai/localai-backends:master-cpu-ace-step
- !!merge <<: *ace-step
name: "cuda12-ace-step"
uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-nvidia-cuda-12-ace-step"
mirrors:
- localai/localai-backends:latest-gpu-nvidia-cuda-12-ace-step
- !!merge <<: *ace-step
name: "cuda12-ace-step-development"
uri: "quay.io/go-skynet/local-ai-backends:master-gpu-nvidia-cuda-12-ace-step"
mirrors:
- localai/localai-backends:master-gpu-nvidia-cuda-12-ace-step
- !!merge <<: *ace-step
name: "cuda13-ace-step"
uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-nvidia-cuda-13-ace-step"
mirrors:
- localai/localai-backends:latest-gpu-nvidia-cuda-13-ace-step
- !!merge <<: *ace-step
name: "cuda13-ace-step-development"
uri: "quay.io/go-skynet/local-ai-backends:master-gpu-nvidia-cuda-13-ace-step"
mirrors:
- localai/localai-backends:master-gpu-nvidia-cuda-13-ace-step
- !!merge <<: *ace-step
name: "rocm-ace-step"
uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-rocm-hipblas-ace-step"
mirrors:
- localai/localai-backends:latest-gpu-rocm-hipblas-ace-step
- !!merge <<: *ace-step
name: "rocm-ace-step-development"
uri: "quay.io/go-skynet/local-ai-backends:master-gpu-rocm-hipblas-ace-step"
mirrors:
- localai/localai-backends:master-gpu-rocm-hipblas-ace-step
- !!merge <<: *ace-step
name: "intel-ace-step"
uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-intel-ace-step"
mirrors:
- localai/localai-backends:latest-gpu-intel-ace-step
- !!merge <<: *ace-step
name: "intel-ace-step-development"
uri: "quay.io/go-skynet/local-ai-backends:master-gpu-intel-ace-step"
mirrors:
- localai/localai-backends:master-gpu-intel-ace-step
- !!merge <<: *ace-step
name: "metal-ace-step"
uri: "quay.io/go-skynet/local-ai-backends:latest-metal-darwin-arm64-ace-step"
mirrors:
- localai/localai-backends:latest-metal-darwin-arm64-ace-step
- !!merge <<: *ace-step
name: "metal-ace-step-development"
uri: "quay.io/go-skynet/local-ai-backends:master-metal-darwin-arm64-ace-step"
mirrors:
- localai/localai-backends:master-metal-darwin-arm64-ace-step
## acestep-cpp
- !!merge <<: *acestepcpp
name: "nvidia-l4t-arm64-acestep-cpp"
uri: "quay.io/go-skynet/local-ai-backends:latest-nvidia-l4t-arm64-acestep-cpp"
mirrors:
- localai/localai-backends:latest-nvidia-l4t-arm64-acestep-cpp
- !!merge <<: *acestepcpp
name: "nvidia-l4t-arm64-acestep-cpp-development"
uri: "quay.io/go-skynet/local-ai-backends:master-nvidia-l4t-arm64-acestep-cpp"
mirrors:
- localai/localai-backends:master-nvidia-l4t-arm64-acestep-cpp
- !!merge <<: *acestepcpp
name: "cuda13-nvidia-l4t-arm64-acestep-cpp"
uri: "quay.io/go-skynet/local-ai-backends:latest-nvidia-l4t-cuda-13-arm64-acestep-cpp"
mirrors:
- localai/localai-backends:latest-nvidia-l4t-cuda-13-arm64-acestep-cpp
- !!merge <<: *acestepcpp
name: "cuda13-nvidia-l4t-arm64-acestep-cpp-development"
uri: "quay.io/go-skynet/local-ai-backends:master-nvidia-l4t-cuda-13-arm64-acestep-cpp"
mirrors:
- localai/localai-backends:master-nvidia-l4t-cuda-13-arm64-acestep-cpp
- !!merge <<: *acestepcpp
name: "cpu-acestep-cpp"
uri: "quay.io/go-skynet/local-ai-backends:latest-cpu-acestep-cpp"
mirrors:
- localai/localai-backends:latest-cpu-acestep-cpp
- !!merge <<: *acestepcpp
name: "metal-acestep-cpp"
uri: "quay.io/go-skynet/local-ai-backends:latest-metal-darwin-arm64-acestep-cpp"
mirrors:
- localai/localai-backends:latest-metal-darwin-arm64-acestep-cpp
- !!merge <<: *acestepcpp
name: "metal-acestep-cpp-development"
uri: "quay.io/go-skynet/local-ai-backends:master-metal-darwin-arm64-acestep-cpp"
mirrors:
- localai/localai-backends:master-metal-darwin-arm64-acestep-cpp
- !!merge <<: *acestepcpp
name: "cpu-acestep-cpp-development"
uri: "quay.io/go-skynet/local-ai-backends:master-cpu-acestep-cpp"
mirrors:
- localai/localai-backends:master-cpu-acestep-cpp
- !!merge <<: *acestepcpp
name: "cuda12-acestep-cpp"
uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-nvidia-cuda-12-acestep-cpp"
mirrors:
- localai/localai-backends:latest-gpu-nvidia-cuda-12-acestep-cpp
- !!merge <<: *acestepcpp
name: "rocm-acestep-cpp"
uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-rocm-hipblas-acestep-cpp"
mirrors:
- localai/localai-backends:latest-gpu-rocm-hipblas-acestep-cpp
- !!merge <<: *acestepcpp
name: "intel-sycl-f32-acestep-cpp"
uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-intel-sycl-f32-acestep-cpp"
mirrors:
- localai/localai-backends:latest-gpu-intel-sycl-f32-acestep-cpp
- !!merge <<: *acestepcpp
name: "intel-sycl-f16-acestep-cpp"
uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-intel-sycl-f16-acestep-cpp"
mirrors:
- localai/localai-backends:latest-gpu-intel-sycl-f16-acestep-cpp
- !!merge <<: *acestepcpp
name: "vulkan-acestep-cpp"
uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-vulkan-acestep-cpp"
mirrors:
- localai/localai-backends:latest-gpu-vulkan-acestep-cpp
- !!merge <<: *acestepcpp
name: "vulkan-acestep-cpp-development"
uri: "quay.io/go-skynet/local-ai-backends:master-gpu-vulkan-acestep-cpp"
mirrors:
- localai/localai-backends:master-gpu-vulkan-acestep-cpp
- !!merge <<: *acestepcpp
name: "cuda12-acestep-cpp-development"
uri: "quay.io/go-skynet/local-ai-backends:master-gpu-nvidia-cuda-12-acestep-cpp"
mirrors:
- localai/localai-backends:master-gpu-nvidia-cuda-12-acestep-cpp
- !!merge <<: *acestepcpp
name: "rocm-acestep-cpp-development"
uri: "quay.io/go-skynet/local-ai-backends:master-gpu-rocm-hipblas-acestep-cpp"
mirrors:
- localai/localai-backends:master-gpu-rocm-hipblas-acestep-cpp
- !!merge <<: *acestepcpp
name: "intel-sycl-f32-acestep-cpp-development"
uri: "quay.io/go-skynet/local-ai-backends:master-gpu-intel-sycl-f32-acestep-cpp"
mirrors:
- localai/localai-backends:master-gpu-intel-sycl-f32-acestep-cpp
- !!merge <<: *acestepcpp
name: "intel-sycl-f16-acestep-cpp-development"
uri: "quay.io/go-skynet/local-ai-backends:master-gpu-intel-sycl-f16-acestep-cpp"
mirrors:
- localai/localai-backends:master-gpu-intel-sycl-f16-acestep-cpp
- !!merge <<: *acestepcpp
name: "cuda13-acestep-cpp"
uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-nvidia-cuda-13-acestep-cpp"
mirrors:
- localai/localai-backends:latest-gpu-nvidia-cuda-13-acestep-cpp
- !!merge <<: *acestepcpp
name: "cuda13-acestep-cpp-development"
uri: "quay.io/go-skynet/local-ai-backends:master-gpu-nvidia-cuda-13-acestep-cpp"
mirrors:
- localai/localai-backends:master-gpu-nvidia-cuda-13-acestep-cpp
## qwen3-tts-cpp
- !!merge <<: *qwen3ttscpp
name: "nvidia-l4t-arm64-qwen3-tts-cpp"
uri: "quay.io/go-skynet/local-ai-backends:latest-nvidia-l4t-arm64-qwen3-tts-cpp"
mirrors:
- localai/localai-backends:latest-nvidia-l4t-arm64-qwen3-tts-cpp
- !!merge <<: *qwen3ttscpp
name: "nvidia-l4t-arm64-qwen3-tts-cpp-development"
uri: "quay.io/go-skynet/local-ai-backends:master-nvidia-l4t-arm64-qwen3-tts-cpp"
mirrors:
- localai/localai-backends:master-nvidia-l4t-arm64-qwen3-tts-cpp
- !!merge <<: *qwen3ttscpp
name: "cuda13-nvidia-l4t-arm64-qwen3-tts-cpp"
uri: "quay.io/go-skynet/local-ai-backends:latest-nvidia-l4t-cuda-13-arm64-qwen3-tts-cpp"
mirrors:
- localai/localai-backends:latest-nvidia-l4t-cuda-13-arm64-qwen3-tts-cpp
- !!merge <<: *qwen3ttscpp
name: "cuda13-nvidia-l4t-arm64-qwen3-tts-cpp-development"
uri: "quay.io/go-skynet/local-ai-backends:master-nvidia-l4t-cuda-13-arm64-qwen3-tts-cpp"
mirrors:
- localai/localai-backends:master-nvidia-l4t-cuda-13-arm64-qwen3-tts-cpp
- !!merge <<: *qwen3ttscpp
name: "cpu-qwen3-tts-cpp"
uri: "quay.io/go-skynet/local-ai-backends:latest-cpu-qwen3-tts-cpp"
mirrors:
- localai/localai-backends:latest-cpu-qwen3-tts-cpp
- !!merge <<: *qwen3ttscpp
name: "metal-qwen3-tts-cpp"
uri: "quay.io/go-skynet/local-ai-backends:latest-metal-darwin-arm64-qwen3-tts-cpp"
mirrors:
- localai/localai-backends:latest-metal-darwin-arm64-qwen3-tts-cpp
- !!merge <<: *qwen3ttscpp
name: "metal-qwen3-tts-cpp-development"
uri: "quay.io/go-skynet/local-ai-backends:master-metal-darwin-arm64-qwen3-tts-cpp"
mirrors:
- localai/localai-backends:master-metal-darwin-arm64-qwen3-tts-cpp
- !!merge <<: *qwen3ttscpp
name: "cpu-qwen3-tts-cpp-development"
uri: "quay.io/go-skynet/local-ai-backends:master-cpu-qwen3-tts-cpp"
mirrors:
- localai/localai-backends:master-cpu-qwen3-tts-cpp
- !!merge <<: *qwen3ttscpp
name: "cuda12-qwen3-tts-cpp"
uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-nvidia-cuda-12-qwen3-tts-cpp"
mirrors:
- localai/localai-backends:latest-gpu-nvidia-cuda-12-qwen3-tts-cpp
- !!merge <<: *qwen3ttscpp
name: "rocm-qwen3-tts-cpp"
uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-rocm-hipblas-qwen3-tts-cpp"
mirrors:
- localai/localai-backends:latest-gpu-rocm-hipblas-qwen3-tts-cpp
- !!merge <<: *qwen3ttscpp
name: "intel-sycl-f32-qwen3-tts-cpp"
uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-intel-sycl-f32-qwen3-tts-cpp"
mirrors:
- localai/localai-backends:latest-gpu-intel-sycl-f32-qwen3-tts-cpp
- !!merge <<: *qwen3ttscpp
name: "intel-sycl-f16-qwen3-tts-cpp"
uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-intel-sycl-f16-qwen3-tts-cpp"
mirrors:
- localai/localai-backends:latest-gpu-intel-sycl-f16-qwen3-tts-cpp
- !!merge <<: *qwen3ttscpp
name: "vulkan-qwen3-tts-cpp"
uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-vulkan-qwen3-tts-cpp"
mirrors:
- localai/localai-backends:latest-gpu-vulkan-qwen3-tts-cpp
- !!merge <<: *qwen3ttscpp
name: "vulkan-qwen3-tts-cpp-development"
uri: "quay.io/go-skynet/local-ai-backends:master-gpu-vulkan-qwen3-tts-cpp"
mirrors:
- localai/localai-backends:master-gpu-vulkan-qwen3-tts-cpp
- !!merge <<: *qwen3ttscpp
name: "cuda12-qwen3-tts-cpp-development"
uri: "quay.io/go-skynet/local-ai-backends:master-gpu-nvidia-cuda-12-qwen3-tts-cpp"
mirrors:
- localai/localai-backends:master-gpu-nvidia-cuda-12-qwen3-tts-cpp
- !!merge <<: *qwen3ttscpp
name: "rocm-qwen3-tts-cpp-development"
uri: "quay.io/go-skynet/local-ai-backends:master-gpu-rocm-hipblas-qwen3-tts-cpp"
mirrors:
- localai/localai-backends:master-gpu-rocm-hipblas-qwen3-tts-cpp
- !!merge <<: *qwen3ttscpp
name: "intel-sycl-f32-qwen3-tts-cpp-development"
uri: "quay.io/go-skynet/local-ai-backends:master-gpu-intel-sycl-f32-qwen3-tts-cpp"
mirrors:
- localai/localai-backends:master-gpu-intel-sycl-f32-qwen3-tts-cpp
- !!merge <<: *qwen3ttscpp
name: "intel-sycl-f16-qwen3-tts-cpp-development"
uri: "quay.io/go-skynet/local-ai-backends:master-gpu-intel-sycl-f16-qwen3-tts-cpp"
mirrors:
- localai/localai-backends:master-gpu-intel-sycl-f16-qwen3-tts-cpp
- !!merge <<: *qwen3ttscpp
name: "cuda13-qwen3-tts-cpp"
uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-nvidia-cuda-13-qwen3-tts-cpp"
mirrors:
- localai/localai-backends:latest-gpu-nvidia-cuda-13-qwen3-tts-cpp
- !!merge <<: *qwen3ttscpp
name: "cuda13-qwen3-tts-cpp-development"
uri: "quay.io/go-skynet/local-ai-backends:master-gpu-nvidia-cuda-13-qwen3-tts-cpp"
mirrors:
- localai/localai-backends:master-gpu-nvidia-cuda-13-qwen3-tts-cpp
## kokoro
- !!merge <<: *kokoro
name: "kokoro-development"
capabilities:
nvidia: "cuda12-kokoro-development"
intel: "intel-kokoro-development"
amd: "rocm-kokoro-development"
nvidia-l4t: "nvidia-l4t-kokoro-development"
metal: "metal-kokoro-development"
- !!merge <<: *kokoro
name: "cuda12-kokoro-development"
uri: "quay.io/go-skynet/local-ai-backends:master-gpu-nvidia-cuda-12-kokoro"
mirrors:
- localai/localai-backends:master-gpu-nvidia-cuda-12-kokoro
- !!merge <<: *kokoro
name: "rocm-kokoro-development"
uri: "quay.io/go-skynet/local-ai-backends:master-gpu-rocm-hipblas-kokoro"
mirrors:
- localai/localai-backends:master-gpu-rocm-hipblas-kokoro
- !!merge <<: *kokoro
name: "intel-kokoro"
uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-intel-kokoro"
mirrors:
- localai/localai-backends:latest-gpu-intel-kokoro
- !!merge <<: *kokoro
name: "intel-kokoro-development"
uri: "quay.io/go-skynet/local-ai-backends:master-gpu-intel-kokoro"
mirrors:
- localai/localai-backends:master-gpu-intel-kokoro
- !!merge <<: *kokoro
name: "nvidia-l4t-kokoro"
uri: "quay.io/go-skynet/local-ai-backends:latest-nvidia-l4t-kokoro"
mirrors:
- localai/localai-backends:latest-nvidia-l4t-kokoro
- !!merge <<: *kokoro
name: "nvidia-l4t-kokoro-development"
uri: "quay.io/go-skynet/local-ai-backends:master-nvidia-l4t-kokoro"
mirrors:
- localai/localai-backends:master-nvidia-l4t-kokoro
- !!merge <<: *kokoro
name: "cuda12-kokoro"
uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-nvidia-cuda-12-kokoro"
mirrors:
- localai/localai-backends:latest-gpu-nvidia-cuda-12-kokoro
- !!merge <<: *kokoro
name: "rocm-kokoro"
uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-rocm-hipblas-kokoro"
mirrors:
- localai/localai-backends:latest-gpu-rocm-hipblas-kokoro
- !!merge <<: *kokoro
name: "cuda13-kokoro"
uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-nvidia-cuda-13-kokoro"
mirrors:
- localai/localai-backends:latest-gpu-nvidia-cuda-13-kokoro
- !!merge <<: *kokoro
name: "cuda13-kokoro-development"
uri: "quay.io/go-skynet/local-ai-backends:master-gpu-nvidia-cuda-13-kokoro"
mirrors:
- localai/localai-backends:master-gpu-nvidia-cuda-13-kokoro
- !!merge <<: *kokoro
name: "metal-kokoro"
uri: "quay.io/go-skynet/local-ai-backends:latest-metal-darwin-arm64-kokoro"
mirrors:
- localai/localai-backends:latest-metal-darwin-arm64-kokoro
- !!merge <<: *kokoro
name: "metal-kokoro-development"
uri: "quay.io/go-skynet/local-ai-backends:master-metal-darwin-arm64-kokoro"
mirrors:
- localai/localai-backends:master-metal-darwin-arm64-kokoro
## kokoros (Rust)
- !!merge <<: *kokoros
name: "kokoros-development"
capabilities:
default: "cpu-kokoros-development"
- !!merge <<: *kokoros
name: "cpu-kokoros"
uri: "quay.io/go-skynet/local-ai-backends:latest-cpu-kokoros"
mirrors:
- localai/localai-backends:latest-cpu-kokoros
- !!merge <<: *kokoros
name: "cpu-kokoros-development"
uri: "quay.io/go-skynet/local-ai-backends:master-cpu-kokoros"
mirrors:
- localai/localai-backends:master-cpu-kokoros
## faster-whisper
- !!merge <<: *faster-whisper
name: "faster-whisper-development"
capabilities:
default: "cpu-faster-whisper-development"
nvidia: "cuda12-faster-whisper-development"
intel: "intel-faster-whisper-development"
amd: "rocm-faster-whisper-development"
metal: "metal-faster-whisper-development"
nvidia-cuda-13: "cuda13-faster-whisper-development"
nvidia-l4t: "nvidia-l4t-arm64-faster-whisper-development"
- !!merge <<: *faster-whisper
name: "cuda12-faster-whisper-development"
uri: "quay.io/go-skynet/local-ai-backends:master-gpu-nvidia-cuda-12-faster-whisper"
mirrors:
- localai/localai-backends:master-gpu-nvidia-cuda-12-faster-whisper
- !!merge <<: *faster-whisper
name: "rocm-faster-whisper-development"
uri: "quay.io/go-skynet/local-ai-backends:master-gpu-rocm-hipblas-faster-whisper"
mirrors:
- localai/localai-backends:master-gpu-rocm-hipblas-faster-whisper
- !!merge <<: *faster-whisper
name: "intel-faster-whisper"
uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-intel-faster-whisper"
mirrors:
- localai/localai-backends:latest-gpu-intel-faster-whisper
- !!merge <<: *faster-whisper
name: "intel-faster-whisper-development"
uri: "quay.io/go-skynet/local-ai-backends:master-gpu-intel-faster-whisper"
mirrors:
- localai/localai-backends:master-gpu-intel-faster-whisper
- !!merge <<: *faster-whisper
name: "cuda13-faster-whisper"
uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-nvidia-cuda-13-faster-whisper"
mirrors:
- localai/localai-backends:latest-gpu-nvidia-cuda-13-faster-whisper
- !!merge <<: *faster-whisper
name: "cuda13-faster-whisper-development"
uri: "quay.io/go-skynet/local-ai-backends:master-gpu-nvidia-cuda-13-faster-whisper"
mirrors:
- localai/localai-backends:master-gpu-nvidia-cuda-13-faster-whisper
- !!merge <<: *faster-whisper
name: "metal-faster-whisper"
uri: "quay.io/go-skynet/local-ai-backends:latest-metal-darwin-arm64-faster-whisper"
mirrors:
- localai/localai-backends:latest-metal-darwin-arm64-faster-whisper
- !!merge <<: *faster-whisper
name: "metal-faster-whisper-development"
uri: "quay.io/go-skynet/local-ai-backends:master-metal-darwin-arm64-faster-whisper"
mirrors:
- localai/localai-backends:master-metal-darwin-arm64-faster-whisper
- !!merge <<: *faster-whisper
name: "cuda12-faster-whisper"
uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-nvidia-cuda-12-faster-whisper"
mirrors:
- localai/localai-backends:latest-gpu-nvidia-cuda-12-faster-whisper
- !!merge <<: *faster-whisper
name: "rocm-faster-whisper"
uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-rocm-hipblas-faster-whisper"
mirrors:
- localai/localai-backends:latest-gpu-rocm-hipblas-faster-whisper
- !!merge <<: *faster-whisper
name: "cpu-faster-whisper"
uri: "quay.io/go-skynet/local-ai-backends:latest-cpu-faster-whisper"
mirrors:
- localai/localai-backends:latest-cpu-faster-whisper
- !!merge <<: *faster-whisper
name: "cpu-faster-whisper-development"
uri: "quay.io/go-skynet/local-ai-backends:master-cpu-faster-whisper"
mirrors:
- localai/localai-backends:master-cpu-faster-whisper
- !!merge <<: *faster-whisper
name: "nvidia-l4t-arm64-faster-whisper"
uri: "quay.io/go-skynet/local-ai-backends:latest-nvidia-l4t-faster-whisper"
mirrors:
- localai/localai-backends:latest-nvidia-l4t-faster-whisper
- !!merge <<: *faster-whisper
name: "nvidia-l4t-arm64-faster-whisper-development"
uri: "quay.io/go-skynet/local-ai-backends:master-nvidia-l4t-faster-whisper"
mirrors:
- localai/localai-backends:master-nvidia-l4t-faster-whisper
## moonshine
- !!merge <<: *moonshine
name: "moonshine-development"
capabilities:
nvidia: "cuda12-moonshine-development"
metal: "metal-moonshine-development"
default: "cpu-moonshine-development"
nvidia-cuda-13: "cuda13-moonshine-development"
nvidia-cuda-12: "cuda12-moonshine-development"
- !!merge <<: *moonshine
name: "cpu-moonshine"
uri: "quay.io/go-skynet/local-ai-backends:latest-cpu-moonshine"
mirrors:
- localai/localai-backends:latest-cpu-moonshine
- !!merge <<: *moonshine
name: "cpu-moonshine-development"
uri: "quay.io/go-skynet/local-ai-backends:master-cpu-moonshine"
mirrors:
- localai/localai-backends:master-cpu-moonshine
- !!merge <<: *moonshine
name: "cuda12-moonshine"
uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-nvidia-cuda-12-moonshine"
mirrors:
- localai/localai-backends:latest-gpu-nvidia-cuda-12-moonshine
- !!merge <<: *moonshine
name: "cuda12-moonshine-development"
uri: "quay.io/go-skynet/local-ai-backends:master-gpu-nvidia-cuda-12-moonshine"
mirrors:
- localai/localai-backends:master-gpu-nvidia-cuda-12-moonshine
- !!merge <<: *moonshine
name: "cuda13-moonshine"
uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-nvidia-cuda-13-moonshine"
mirrors:
- localai/localai-backends:latest-gpu-nvidia-cuda-13-moonshine
- !!merge <<: *moonshine
name: "cuda13-moonshine-development"
uri: "quay.io/go-skynet/local-ai-backends:master-gpu-nvidia-cuda-13-moonshine"
mirrors:
- localai/localai-backends:master-gpu-nvidia-cuda-13-moonshine
- !!merge <<: *moonshine
name: "metal-moonshine"
uri: "quay.io/go-skynet/local-ai-backends:latest-metal-darwin-arm64-moonshine"
mirrors:
- localai/localai-backends:latest-metal-darwin-arm64-moonshine
- !!merge <<: *moonshine
name: "metal-moonshine-development"
uri: "quay.io/go-skynet/local-ai-backends:master-metal-darwin-arm64-moonshine"
mirrors:
- localai/localai-backends:master-metal-darwin-arm64-moonshine
## whisperx
- !!merge <<: *whisperx
name: "whisperx-development"
capabilities:
nvidia: "cuda12-whisperx-development"
metal: "metal-whisperx-development"
default: "cpu-whisperx-development"
nvidia-cuda-13: "cuda13-whisperx-development"
nvidia-cuda-12: "cuda12-whisperx-development"
nvidia-l4t: "nvidia-l4t-arm64-whisperx-development"
- !!merge <<: *whisperx
name: "cpu-whisperx"
uri: "quay.io/go-skynet/local-ai-backends:latest-cpu-whisperx"
mirrors:
- localai/localai-backends:latest-cpu-whisperx
- !!merge <<: *whisperx
name: "cpu-whisperx-development"
uri: "quay.io/go-skynet/local-ai-backends:master-cpu-whisperx"
mirrors:
- localai/localai-backends:master-cpu-whisperx
- !!merge <<: *whisperx
name: "cuda12-whisperx"
uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-nvidia-cuda-12-whisperx"
mirrors:
- localai/localai-backends:latest-gpu-nvidia-cuda-12-whisperx
- !!merge <<: *whisperx
name: "cuda12-whisperx-development"
uri: "quay.io/go-skynet/local-ai-backends:master-gpu-nvidia-cuda-12-whisperx"
mirrors:
- localai/localai-backends:master-gpu-nvidia-cuda-12-whisperx
- !!merge <<: *whisperx
name: "cuda13-whisperx"
uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-nvidia-cuda-13-whisperx"
mirrors:
- localai/localai-backends:latest-gpu-nvidia-cuda-13-whisperx
- !!merge <<: *whisperx
name: "cuda13-whisperx-development"
uri: "quay.io/go-skynet/local-ai-backends:master-gpu-nvidia-cuda-13-whisperx"
mirrors:
- localai/localai-backends:master-gpu-nvidia-cuda-13-whisperx
- !!merge <<: *whisperx
name: "metal-whisperx"
uri: "quay.io/go-skynet/local-ai-backends:latest-metal-darwin-arm64-whisperx"
mirrors:
- localai/localai-backends:latest-metal-darwin-arm64-whisperx
- !!merge <<: *whisperx
name: "metal-whisperx-development"
uri: "quay.io/go-skynet/local-ai-backends:master-metal-darwin-arm64-whisperx"
mirrors:
- localai/localai-backends:master-metal-darwin-arm64-whisperx
- !!merge <<: *whisperx
name: "nvidia-l4t-arm64-whisperx"
uri: "quay.io/go-skynet/local-ai-backends:latest-nvidia-l4t-whisperx"
mirrors:
- localai/localai-backends:latest-nvidia-l4t-whisperx
- !!merge <<: *whisperx
name: "nvidia-l4t-arm64-whisperx-development"
uri: "quay.io/go-skynet/local-ai-backends:master-nvidia-l4t-whisperx"
mirrors:
- localai/localai-backends:master-nvidia-l4t-whisperx
## coqui
- !!merge <<: *coqui
name: "coqui-development"
capabilities:
nvidia: "cuda12-coqui-development"
intel: "intel-coqui-development"
amd: "rocm-coqui-development"
metal: "metal-coqui-development"
- !!merge <<: *coqui
name: "cuda12-coqui"
uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-nvidia-cuda-12-coqui"
mirrors:
- localai/localai-backends:latest-gpu-nvidia-cuda-12-coqui
- !!merge <<: *coqui
name: "cuda12-coqui-development"
uri: "quay.io/go-skynet/local-ai-backends:master-gpu-nvidia-cuda-12-coqui"
mirrors:
- localai/localai-backends:master-gpu-nvidia-cuda-12-coqui
- !!merge <<: *coqui
name: "rocm-coqui-development"
uri: "quay.io/go-skynet/local-ai-backends:master-gpu-rocm-hipblas-coqui"
mirrors:
- localai/localai-backends:master-gpu-rocm-hipblas-coqui
- !!merge <<: *coqui
name: "intel-coqui"
uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-intel-coqui"
mirrors:
- localai/localai-backends:latest-gpu-intel-coqui
- !!merge <<: *coqui
name: "intel-coqui-development"
uri: "quay.io/go-skynet/local-ai-backends:master-gpu-intel-coqui"
mirrors:
- localai/localai-backends:master-gpu-intel-coqui
- !!merge <<: *coqui
name: "rocm-coqui"
uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-rocm-hipblas-coqui"
mirrors:
- localai/localai-backends:latest-gpu-rocm-hipblas-coqui
- !!merge <<: *coqui
name: "metal-coqui"
uri: "quay.io/go-skynet/local-ai-backends:latest-metal-darwin-arm64-coqui"
mirrors:
- localai/localai-backends:latest-metal-darwin-arm64-coqui
- !!merge <<: *coqui
name: "metal-coqui-development"
uri: "quay.io/go-skynet/local-ai-backends:master-metal-darwin-arm64-coqui"
mirrors:
- localai/localai-backends:master-metal-darwin-arm64-coqui
## outetts
- !!merge <<: *outetts
name: "outetts-development"
capabilities:
nvidia: "cuda12-outetts-development"
default: "cpu-outetts-development"
nvidia-cuda-12: "cuda12-outetts-development"
- !!merge <<: *outetts
name: "cpu-outetts"
uri: "quay.io/go-skynet/local-ai-backends:latest-cpu-outetts"
mirrors:
- localai/localai-backends:latest-cpu-outetts
- !!merge <<: *outetts
name: "cpu-outetts-development"
uri: "quay.io/go-skynet/local-ai-backends:master-cpu-outetts"
mirrors:
- localai/localai-backends:master-cpu-outetts
- !!merge <<: *outetts
name: "cuda12-outetts"
uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-nvidia-cuda-12-outetts"
mirrors:
- localai/localai-backends:latest-gpu-nvidia-cuda-12-outetts
- !!merge <<: *outetts
name: "cuda12-outetts-development"
uri: "quay.io/go-skynet/local-ai-backends:master-gpu-nvidia-cuda-12-outetts"
mirrors:
- localai/localai-backends:master-gpu-nvidia-cuda-12-outetts
## chatterbox
- !!merge <<: *chatterbox
name: "chatterbox-development"
capabilities:
nvidia: "cuda12-chatterbox-development"
metal: "metal-chatterbox-development"
default: "cpu-chatterbox-development"
nvidia-l4t: "nvidia-l4t-arm64-chatterbox"
nvidia-cuda-13: "cuda13-chatterbox-development"
nvidia-cuda-12: "cuda12-chatterbox-development"
nvidia-l4t-cuda-12: "nvidia-l4t-arm64-chatterbox-development"
nvidia-l4t-cuda-13: "cuda13-nvidia-l4t-arm64-chatterbox-development"
- !!merge <<: *chatterbox
name: "cpu-chatterbox"
uri: "quay.io/go-skynet/local-ai-backends:latest-cpu-chatterbox"
mirrors:
- localai/localai-backends:latest-cpu-chatterbox
- !!merge <<: *chatterbox
name: "cpu-chatterbox-development"
uri: "quay.io/go-skynet/local-ai-backends:master-cpu-chatterbox"
mirrors:
- localai/localai-backends:master-cpu-chatterbox
- !!merge <<: *chatterbox
name: "nvidia-l4t-arm64-chatterbox"
uri: "quay.io/go-skynet/local-ai-backends:latest-nvidia-l4t-arm64-chatterbox"
mirrors:
- localai/localai-backends:latest-nvidia-l4t-arm64-chatterbox
- !!merge <<: *chatterbox
name: "nvidia-l4t-arm64-chatterbox-development"
uri: "quay.io/go-skynet/local-ai-backends:master-nvidia-l4t-arm64-chatterbox"
mirrors:
- localai/localai-backends:master-nvidia-l4t-arm64-chatterbox
- !!merge <<: *chatterbox
name: "metal-chatterbox"
uri: "quay.io/go-skynet/local-ai-backends:latest-metal-darwin-arm64-chatterbox"
mirrors:
- localai/localai-backends:latest-metal-darwin-arm64-chatterbox
- !!merge <<: *chatterbox
name: "metal-chatterbox-development"
uri: "quay.io/go-skynet/local-ai-backends:master-metal-darwin-arm64-chatterbox"
mirrors:
- localai/localai-backends:master-metal-darwin-arm64-chatterbox
- !!merge <<: *chatterbox
name: "cuda12-chatterbox-development"
uri: "quay.io/go-skynet/local-ai-backends:master-gpu-nvidia-cuda-12-chatterbox"
mirrors:
- localai/localai-backends:master-gpu-nvidia-cuda-12-chatterbox
- !!merge <<: *chatterbox
name: "cuda12-chatterbox"
uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-nvidia-cuda-12-chatterbox"
mirrors:
- localai/localai-backends:latest-gpu-nvidia-cuda-12-chatterbox
- !!merge <<: *chatterbox
name: "cuda13-chatterbox"
uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-nvidia-cuda-13-chatterbox"
mirrors:
- localai/localai-backends:latest-gpu-nvidia-cuda-13-chatterbox
- !!merge <<: *chatterbox
name: "cuda13-chatterbox-development"
uri: "quay.io/go-skynet/local-ai-backends:master-gpu-nvidia-cuda-13-chatterbox"
mirrors:
- localai/localai-backends:master-gpu-nvidia-cuda-13-chatterbox
- !!merge <<: *chatterbox
name: "cuda13-nvidia-l4t-arm64-chatterbox"
uri: "quay.io/go-skynet/local-ai-backends:latest-nvidia-l4t-cuda-13-arm64-chatterbox"
mirrors:
- localai/localai-backends:latest-nvidia-l4t-cuda-13-arm64-chatterbox
- !!merge <<: *chatterbox
name: "cuda13-nvidia-l4t-arm64-chatterbox-development"
uri: "quay.io/go-skynet/local-ai-backends:master-nvidia-l4t-cuda-13-arm64-chatterbox"
mirrors:
- localai/localai-backends:master-nvidia-l4t-cuda-13-arm64-chatterbox
## vibevoice
- !!merge <<: *vibevoice
name: "vibevoice-development"
capabilities:
nvidia: "cuda12-vibevoice-development"
intel: "intel-vibevoice-development"
amd: "rocm-vibevoice-development"
nvidia-l4t: "nvidia-l4t-vibevoice-development"
metal: "metal-vibevoice-development"
default: "cpu-vibevoice-development"
nvidia-cuda-13: "cuda13-vibevoice-development"
nvidia-cuda-12: "cuda12-vibevoice-development"
nvidia-l4t-cuda-12: "nvidia-l4t-vibevoice-development"
nvidia-l4t-cuda-13: "cuda13-nvidia-l4t-arm64-vibevoice-development"
- !!merge <<: *vibevoice
name: "cpu-vibevoice"
uri: "quay.io/go-skynet/local-ai-backends:latest-cpu-vibevoice"
mirrors:
- localai/localai-backends:latest-cpu-vibevoice
- !!merge <<: *vibevoice
name: "cpu-vibevoice-development"
uri: "quay.io/go-skynet/local-ai-backends:master-cpu-vibevoice"
mirrors:
- localai/localai-backends:master-cpu-vibevoice
- !!merge <<: *vibevoice
name: "cuda12-vibevoice"
uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-nvidia-cuda-12-vibevoice"
mirrors:
- localai/localai-backends:latest-gpu-nvidia-cuda-12-vibevoice
- !!merge <<: *vibevoice
name: "cuda12-vibevoice-development"
uri: "quay.io/go-skynet/local-ai-backends:master-gpu-nvidia-cuda-12-vibevoice"
mirrors:
- localai/localai-backends:master-gpu-nvidia-cuda-12-vibevoice
- !!merge <<: *vibevoice
name: "cuda13-vibevoice"
uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-nvidia-cuda-13-vibevoice"
mirrors:
- localai/localai-backends:latest-gpu-nvidia-cuda-13-vibevoice
- !!merge <<: *vibevoice
name: "cuda13-vibevoice-development"
uri: "quay.io/go-skynet/local-ai-backends:master-gpu-nvidia-cuda-13-vibevoice"
mirrors:
- localai/localai-backends:master-gpu-nvidia-cuda-13-vibevoice
- !!merge <<: *vibevoice
name: "intel-vibevoice"
uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-intel-vibevoice"
mirrors:
- localai/localai-backends:latest-gpu-intel-vibevoice
- !!merge <<: *vibevoice
name: "intel-vibevoice-development"
uri: "quay.io/go-skynet/local-ai-backends:master-gpu-intel-vibevoice"
mirrors:
- localai/localai-backends:master-gpu-intel-vibevoice
- !!merge <<: *vibevoice
name: "rocm-vibevoice"
uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-rocm-hipblas-vibevoice"
mirrors:
- localai/localai-backends:latest-gpu-rocm-hipblas-vibevoice
- !!merge <<: *vibevoice
name: "rocm-vibevoice-development"
uri: "quay.io/go-skynet/local-ai-backends:master-gpu-rocm-hipblas-vibevoice"
mirrors:
- localai/localai-backends:master-gpu-rocm-hipblas-vibevoice
- !!merge <<: *vibevoice
name: "nvidia-l4t-vibevoice"
uri: "quay.io/go-skynet/local-ai-backends:latest-nvidia-l4t-vibevoice"
mirrors:
- localai/localai-backends:latest-nvidia-l4t-vibevoice
- !!merge <<: *vibevoice
name: "nvidia-l4t-vibevoice-development"
uri: "quay.io/go-skynet/local-ai-backends:master-nvidia-l4t-vibevoice"
mirrors:
- localai/localai-backends:master-nvidia-l4t-vibevoice
- !!merge <<: *vibevoice
name: "cuda13-nvidia-l4t-arm64-vibevoice"
uri: "quay.io/go-skynet/local-ai-backends:latest-nvidia-l4t-cuda-13-arm64-vibevoice"
mirrors:
- localai/localai-backends:latest-nvidia-l4t-cuda-13-arm64-vibevoice
- !!merge <<: *vibevoice
name: "cuda13-nvidia-l4t-arm64-vibevoice-development"
uri: "quay.io/go-skynet/local-ai-backends:master-nvidia-l4t-cuda-13-arm64-vibevoice"
mirrors:
- localai/localai-backends:master-nvidia-l4t-cuda-13-arm64-vibevoice
- !!merge <<: *vibevoice
name: "metal-vibevoice"
uri: "quay.io/go-skynet/local-ai-backends:latest-metal-darwin-arm64-vibevoice"
mirrors:
- localai/localai-backends:latest-metal-darwin-arm64-vibevoice
- !!merge <<: *vibevoice
name: "metal-vibevoice-development"
uri: "quay.io/go-skynet/local-ai-backends:master-metal-darwin-arm64-vibevoice"
mirrors:
- localai/localai-backends:master-metal-darwin-arm64-vibevoice
## qwen-tts
- !!merge <<: *qwen-tts
name: "qwen-tts-development"
capabilities:
nvidia: "cuda12-qwen-tts-development"
intel: "intel-qwen-tts-development"
amd: "rocm-qwen-tts-development"
nvidia-l4t: "nvidia-l4t-qwen-tts-development"
metal: "metal-qwen-tts-development"
default: "cpu-qwen-tts-development"
nvidia-cuda-13: "cuda13-qwen-tts-development"
nvidia-cuda-12: "cuda12-qwen-tts-development"
nvidia-l4t-cuda-12: "nvidia-l4t-qwen-tts-development"
nvidia-l4t-cuda-13: "cuda13-nvidia-l4t-arm64-qwen-tts-development"
- !!merge <<: *qwen-tts
name: "cpu-qwen-tts"
uri: "quay.io/go-skynet/local-ai-backends:latest-cpu-qwen-tts"
mirrors:
- localai/localai-backends:latest-cpu-qwen-tts
- !!merge <<: *qwen-tts
name: "cpu-qwen-tts-development"
uri: "quay.io/go-skynet/local-ai-backends:master-cpu-qwen-tts"
mirrors:
- localai/localai-backends:master-cpu-qwen-tts
- !!merge <<: *qwen-tts
name: "cuda12-qwen-tts"
uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-nvidia-cuda-12-qwen-tts"
mirrors:
- localai/localai-backends:latest-gpu-nvidia-cuda-12-qwen-tts
- !!merge <<: *qwen-tts
name: "cuda12-qwen-tts-development"
uri: "quay.io/go-skynet/local-ai-backends:master-gpu-nvidia-cuda-12-qwen-tts"
mirrors:
- localai/localai-backends:master-gpu-nvidia-cuda-12-qwen-tts
- !!merge <<: *qwen-tts
name: "cuda13-qwen-tts"
uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-nvidia-cuda-13-qwen-tts"
mirrors:
- localai/localai-backends:latest-gpu-nvidia-cuda-13-qwen-tts
- !!merge <<: *qwen-tts
name: "cuda13-qwen-tts-development"
uri: "quay.io/go-skynet/local-ai-backends:master-gpu-nvidia-cuda-13-qwen-tts"
mirrors:
- localai/localai-backends:master-gpu-nvidia-cuda-13-qwen-tts
- !!merge <<: *qwen-tts
name: "intel-qwen-tts"
uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-intel-qwen-tts"
mirrors:
- localai/localai-backends:latest-gpu-intel-qwen-tts
- !!merge <<: *qwen-tts
name: "intel-qwen-tts-development"
uri: "quay.io/go-skynet/local-ai-backends:master-gpu-intel-qwen-tts"
mirrors:
- localai/localai-backends:master-gpu-intel-qwen-tts
- !!merge <<: *qwen-tts
name: "rocm-qwen-tts"
uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-rocm-hipblas-qwen-tts"
mirrors:
- localai/localai-backends:latest-gpu-rocm-hipblas-qwen-tts
- !!merge <<: *qwen-tts
name: "rocm-qwen-tts-development"
uri: "quay.io/go-skynet/local-ai-backends:master-gpu-rocm-hipblas-qwen-tts"
mirrors:
- localai/localai-backends:master-gpu-rocm-hipblas-qwen-tts
- !!merge <<: *qwen-tts
name: "nvidia-l4t-qwen-tts"
uri: "quay.io/go-skynet/local-ai-backends:latest-nvidia-l4t-qwen-tts"
mirrors:
- localai/localai-backends:latest-nvidia-l4t-qwen-tts
- !!merge <<: *qwen-tts
name: "nvidia-l4t-qwen-tts-development"
uri: "quay.io/go-skynet/local-ai-backends:master-nvidia-l4t-qwen-tts"
mirrors:
- localai/localai-backends:master-nvidia-l4t-qwen-tts
- !!merge <<: *qwen-tts
name: "cuda13-nvidia-l4t-arm64-qwen-tts"
uri: "quay.io/go-skynet/local-ai-backends:latest-nvidia-l4t-cuda-13-arm64-qwen-tts"
mirrors:
- localai/localai-backends:latest-nvidia-l4t-cuda-13-arm64-qwen-tts
- !!merge <<: *qwen-tts
name: "cuda13-nvidia-l4t-arm64-qwen-tts-development"
uri: "quay.io/go-skynet/local-ai-backends:master-nvidia-l4t-cuda-13-arm64-qwen-tts"
mirrors:
- localai/localai-backends:master-nvidia-l4t-cuda-13-arm64-qwen-tts
- !!merge <<: *qwen-tts
name: "metal-qwen-tts"
uri: "quay.io/go-skynet/local-ai-backends:latest-metal-darwin-arm64-qwen-tts"
mirrors:
- localai/localai-backends:latest-metal-darwin-arm64-qwen-tts
- !!merge <<: *qwen-tts
name: "metal-qwen-tts-development"
uri: "quay.io/go-skynet/local-ai-backends:master-metal-darwin-arm64-qwen-tts"
mirrors:
- localai/localai-backends:master-metal-darwin-arm64-qwen-tts
## fish-speech
- !!merge <<: *fish-speech
name: "fish-speech-development"
capabilities:
nvidia: "cuda12-fish-speech-development"
intel: "intel-fish-speech-development"
amd: "rocm-fish-speech-development"
nvidia-l4t: "nvidia-l4t-fish-speech-development"
metal: "metal-fish-speech-development"
default: "cpu-fish-speech-development"
nvidia-cuda-13: "cuda13-fish-speech-development"
nvidia-cuda-12: "cuda12-fish-speech-development"
nvidia-l4t-cuda-12: "nvidia-l4t-fish-speech-development"
nvidia-l4t-cuda-13: "cuda13-nvidia-l4t-arm64-fish-speech-development"
- !!merge <<: *fish-speech
name: "cpu-fish-speech"
uri: "quay.io/go-skynet/local-ai-backends:latest-cpu-fish-speech"
mirrors:
- localai/localai-backends:latest-cpu-fish-speech
- !!merge <<: *fish-speech
name: "cpu-fish-speech-development"
uri: "quay.io/go-skynet/local-ai-backends:master-cpu-fish-speech"
mirrors:
- localai/localai-backends:master-cpu-fish-speech
- !!merge <<: *fish-speech
name: "cuda12-fish-speech"
uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-nvidia-cuda-12-fish-speech"
mirrors:
- localai/localai-backends:latest-gpu-nvidia-cuda-12-fish-speech
- !!merge <<: *fish-speech
name: "cuda12-fish-speech-development"
uri: "quay.io/go-skynet/local-ai-backends:master-gpu-nvidia-cuda-12-fish-speech"
mirrors:
- localai/localai-backends:master-gpu-nvidia-cuda-12-fish-speech
- !!merge <<: *fish-speech
name: "cuda13-fish-speech"
uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-nvidia-cuda-13-fish-speech"
mirrors:
- localai/localai-backends:latest-gpu-nvidia-cuda-13-fish-speech
- !!merge <<: *fish-speech
name: "cuda13-fish-speech-development"
uri: "quay.io/go-skynet/local-ai-backends:master-gpu-nvidia-cuda-13-fish-speech"
mirrors:
- localai/localai-backends:master-gpu-nvidia-cuda-13-fish-speech
- !!merge <<: *fish-speech
name: "intel-fish-speech"
uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-intel-fish-speech"
mirrors:
- localai/localai-backends:latest-gpu-intel-fish-speech
- !!merge <<: *fish-speech
name: "intel-fish-speech-development"
uri: "quay.io/go-skynet/local-ai-backends:master-gpu-intel-fish-speech"
mirrors:
- localai/localai-backends:master-gpu-intel-fish-speech
- !!merge <<: *fish-speech
name: "rocm-fish-speech"
uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-rocm-hipblas-fish-speech"
mirrors:
- localai/localai-backends:latest-gpu-rocm-hipblas-fish-speech
- !!merge <<: *fish-speech
name: "rocm-fish-speech-development"
uri: "quay.io/go-skynet/local-ai-backends:master-gpu-rocm-hipblas-fish-speech"
mirrors:
- localai/localai-backends:master-gpu-rocm-hipblas-fish-speech
- !!merge <<: *fish-speech
name: "nvidia-l4t-fish-speech"
uri: "quay.io/go-skynet/local-ai-backends:latest-nvidia-l4t-fish-speech"
mirrors:
- localai/localai-backends:latest-nvidia-l4t-fish-speech
- !!merge <<: *fish-speech
name: "nvidia-l4t-fish-speech-development"
uri: "quay.io/go-skynet/local-ai-backends:master-nvidia-l4t-fish-speech"
mirrors:
- localai/localai-backends:master-nvidia-l4t-fish-speech
- !!merge <<: *fish-speech
name: "cuda13-nvidia-l4t-arm64-fish-speech"
uri: "quay.io/go-skynet/local-ai-backends:latest-nvidia-l4t-cuda-13-arm64-fish-speech"
mirrors:
- localai/localai-backends:latest-nvidia-l4t-cuda-13-arm64-fish-speech
- !!merge <<: *fish-speech
name: "cuda13-nvidia-l4t-arm64-fish-speech-development"
uri: "quay.io/go-skynet/local-ai-backends:master-nvidia-l4t-cuda-13-arm64-fish-speech"
mirrors:
- localai/localai-backends:master-nvidia-l4t-cuda-13-arm64-fish-speech
- !!merge <<: *fish-speech
name: "metal-fish-speech"
uri: "quay.io/go-skynet/local-ai-backends:latest-metal-darwin-arm64-fish-speech"
mirrors:
- localai/localai-backends:latest-metal-darwin-arm64-fish-speech
- !!merge <<: *fish-speech
name: "metal-fish-speech-development"
uri: "quay.io/go-skynet/local-ai-backends:master-metal-darwin-arm64-fish-speech"
mirrors:
- localai/localai-backends:master-metal-darwin-arm64-fish-speech
## faster-qwen3-tts
- !!merge <<: *faster-qwen3-tts
name: "faster-qwen3-tts-development"
capabilities:
nvidia: "cuda12-faster-qwen3-tts-development"
default: "cuda12-faster-qwen3-tts-development"
nvidia-cuda-13: "cuda13-faster-qwen3-tts-development"
nvidia-cuda-12: "cuda12-faster-qwen3-tts-development"
nvidia-l4t: "nvidia-l4t-faster-qwen3-tts-development"
nvidia-l4t-cuda-12: "nvidia-l4t-faster-qwen3-tts-development"
nvidia-l4t-cuda-13: "cuda13-nvidia-l4t-arm64-faster-qwen3-tts-development"
- !!merge <<: *faster-qwen3-tts
name: "cuda12-faster-qwen3-tts"
uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-nvidia-cuda-12-faster-qwen3-tts"
mirrors:
- localai/localai-backends:latest-gpu-nvidia-cuda-12-faster-qwen3-tts
- !!merge <<: *faster-qwen3-tts
name: "cuda12-faster-qwen3-tts-development"
uri: "quay.io/go-skynet/local-ai-backends:master-gpu-nvidia-cuda-12-faster-qwen3-tts"
mirrors:
- localai/localai-backends:master-gpu-nvidia-cuda-12-faster-qwen3-tts
- !!merge <<: *faster-qwen3-tts
name: "cuda13-faster-qwen3-tts"
uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-nvidia-cuda-13-faster-qwen3-tts"
mirrors:
- localai/localai-backends:latest-gpu-nvidia-cuda-13-faster-qwen3-tts
- !!merge <<: *faster-qwen3-tts
name: "cuda13-faster-qwen3-tts-development"
uri: "quay.io/go-skynet/local-ai-backends:master-gpu-nvidia-cuda-13-faster-qwen3-tts"
mirrors:
- localai/localai-backends:master-gpu-nvidia-cuda-13-faster-qwen3-tts
- !!merge <<: *faster-qwen3-tts
name: "nvidia-l4t-faster-qwen3-tts"
uri: "quay.io/go-skynet/local-ai-backends:latest-nvidia-l4t-faster-qwen3-tts"
mirrors:
- localai/localai-backends:latest-nvidia-l4t-faster-qwen3-tts
- !!merge <<: *faster-qwen3-tts
name: "nvidia-l4t-faster-qwen3-tts-development"
uri: "quay.io/go-skynet/local-ai-backends:master-nvidia-l4t-faster-qwen3-tts"
mirrors:
- localai/localai-backends:master-nvidia-l4t-faster-qwen3-tts
- !!merge <<: *faster-qwen3-tts
name: "cuda13-nvidia-l4t-arm64-faster-qwen3-tts"
uri: "quay.io/go-skynet/local-ai-backends:latest-nvidia-l4t-cuda-13-arm64-faster-qwen3-tts"
mirrors:
- localai/localai-backends:latest-nvidia-l4t-cuda-13-arm64-faster-qwen3-tts
- !!merge <<: *faster-qwen3-tts
name: "cuda13-nvidia-l4t-arm64-faster-qwen3-tts-development"
uri: "quay.io/go-skynet/local-ai-backends:master-nvidia-l4t-cuda-13-arm64-faster-qwen3-tts"
mirrors:
- localai/localai-backends:master-nvidia-l4t-cuda-13-arm64-faster-qwen3-tts
## qwen-asr
- !!merge <<: *qwen-asr
name: "qwen-asr-development"
capabilities:
nvidia: "cuda12-qwen-asr-development"
intel: "intel-qwen-asr-development"
amd: "rocm-qwen-asr-development"
nvidia-l4t: "nvidia-l4t-qwen-asr-development"
metal: "metal-qwen-asr-development"
default: "cpu-qwen-asr-development"
nvidia-cuda-13: "cuda13-qwen-asr-development"
nvidia-cuda-12: "cuda12-qwen-asr-development"
nvidia-l4t-cuda-12: "nvidia-l4t-qwen-asr-development"
nvidia-l4t-cuda-13: "cuda13-nvidia-l4t-arm64-qwen-asr-development"
- !!merge <<: *qwen-asr
name: "cpu-qwen-asr"
uri: "quay.io/go-skynet/local-ai-backends:latest-cpu-qwen-asr"
mirrors:
- localai/localai-backends:latest-cpu-qwen-asr
- !!merge <<: *qwen-asr
name: "cpu-qwen-asr-development"
uri: "quay.io/go-skynet/local-ai-backends:master-cpu-qwen-asr"
mirrors:
- localai/localai-backends:master-cpu-qwen-asr
- !!merge <<: *qwen-asr
name: "cuda12-qwen-asr"
uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-nvidia-cuda-12-qwen-asr"
mirrors:
- localai/localai-backends:latest-gpu-nvidia-cuda-12-qwen-asr
- !!merge <<: *qwen-asr
name: "cuda12-qwen-asr-development"
uri: "quay.io/go-skynet/local-ai-backends:master-gpu-nvidia-cuda-12-qwen-asr"
mirrors:
- localai/localai-backends:master-gpu-nvidia-cuda-12-qwen-asr
- !!merge <<: *qwen-asr
name: "cuda13-qwen-asr"
uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-nvidia-cuda-13-qwen-asr"
mirrors:
- localai/localai-backends:latest-gpu-nvidia-cuda-13-qwen-asr
- !!merge <<: *qwen-asr
name: "cuda13-qwen-asr-development"
uri: "quay.io/go-skynet/local-ai-backends:master-gpu-nvidia-cuda-13-qwen-asr"
mirrors:
- localai/localai-backends:master-gpu-nvidia-cuda-13-qwen-asr
- !!merge <<: *qwen-asr
name: "intel-qwen-asr"
uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-intel-qwen-asr"
mirrors:
- localai/localai-backends:latest-gpu-intel-qwen-asr
- !!merge <<: *qwen-asr
name: "intel-qwen-asr-development"
uri: "quay.io/go-skynet/local-ai-backends:master-gpu-intel-qwen-asr"
mirrors:
- localai/localai-backends:master-gpu-intel-qwen-asr
- !!merge <<: *qwen-asr
name: "rocm-qwen-asr"
uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-rocm-hipblas-qwen-asr"
mirrors:
- localai/localai-backends:latest-gpu-rocm-hipblas-qwen-asr
- !!merge <<: *qwen-asr
name: "rocm-qwen-asr-development"
uri: "quay.io/go-skynet/local-ai-backends:master-gpu-rocm-hipblas-qwen-asr"
mirrors:
- localai/localai-backends:master-gpu-rocm-hipblas-qwen-asr
- !!merge <<: *qwen-asr
name: "nvidia-l4t-qwen-asr"
uri: "quay.io/go-skynet/local-ai-backends:latest-nvidia-l4t-qwen-asr"
mirrors:
- localai/localai-backends:latest-nvidia-l4t-qwen-asr
- !!merge <<: *qwen-asr
name: "nvidia-l4t-qwen-asr-development"
uri: "quay.io/go-skynet/local-ai-backends:master-nvidia-l4t-qwen-asr"
mirrors:
- localai/localai-backends:master-nvidia-l4t-qwen-asr
- !!merge <<: *qwen-asr
name: "cuda13-nvidia-l4t-arm64-qwen-asr"
uri: "quay.io/go-skynet/local-ai-backends:latest-nvidia-l4t-cuda-13-arm64-qwen-asr"
mirrors:
- localai/localai-backends:latest-nvidia-l4t-cuda-13-arm64-qwen-asr
- !!merge <<: *qwen-asr
name: "cuda13-nvidia-l4t-arm64-qwen-asr-development"
uri: "quay.io/go-skynet/local-ai-backends:master-nvidia-l4t-cuda-13-arm64-qwen-asr"
mirrors:
- localai/localai-backends:master-nvidia-l4t-cuda-13-arm64-qwen-asr
- !!merge <<: *qwen-asr
name: "metal-qwen-asr"
uri: "quay.io/go-skynet/local-ai-backends:latest-metal-darwin-arm64-qwen-asr"
mirrors:
- localai/localai-backends:latest-metal-darwin-arm64-qwen-asr
- !!merge <<: *qwen-asr
name: "metal-qwen-asr-development"
uri: "quay.io/go-skynet/local-ai-backends:master-metal-darwin-arm64-qwen-asr"
mirrors:
- localai/localai-backends:master-metal-darwin-arm64-qwen-asr
## nemo
- !!merge <<: *nemo
name: "nemo-development"
capabilities:
nvidia: "cuda12-nemo-development"
intel: "intel-nemo-development"
amd: "rocm-nemo-development"
metal: "metal-nemo-development"
default: "cpu-nemo-development"
nvidia-cuda-13: "cuda13-nemo-development"
nvidia-cuda-12: "cuda12-nemo-development"
- !!merge <<: *nemo
name: "cpu-nemo"
uri: "quay.io/go-skynet/local-ai-backends:latest-cpu-nemo"
mirrors:
- localai/localai-backends:latest-cpu-nemo
- !!merge <<: *nemo
name: "cpu-nemo-development"
uri: "quay.io/go-skynet/local-ai-backends:master-cpu-nemo"
mirrors:
- localai/localai-backends:master-cpu-nemo
- !!merge <<: *nemo
name: "cuda12-nemo"
uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-nvidia-cuda-12-nemo"
mirrors:
- localai/localai-backends:latest-gpu-nvidia-cuda-12-nemo
- !!merge <<: *nemo
name: "cuda12-nemo-development"
uri: "quay.io/go-skynet/local-ai-backends:master-gpu-nvidia-cuda-12-nemo"
mirrors:
- localai/localai-backends:master-gpu-nvidia-cuda-12-nemo
- !!merge <<: *nemo
name: "cuda13-nemo"
uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-nvidia-cuda-13-nemo"
mirrors:
- localai/localai-backends:latest-gpu-nvidia-cuda-13-nemo
- !!merge <<: *nemo
name: "cuda13-nemo-development"
uri: "quay.io/go-skynet/local-ai-backends:master-gpu-nvidia-cuda-13-nemo"
mirrors:
- localai/localai-backends:master-gpu-nvidia-cuda-13-nemo
- !!merge <<: *nemo
name: "intel-nemo"
uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-intel-nemo"
mirrors:
- localai/localai-backends:latest-gpu-intel-nemo
- !!merge <<: *nemo
name: "intel-nemo-development"
uri: "quay.io/go-skynet/local-ai-backends:master-gpu-intel-nemo"
mirrors:
- localai/localai-backends:master-gpu-intel-nemo
- !!merge <<: *nemo
name: "rocm-nemo"
uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-rocm-hipblas-nemo"
mirrors:
- localai/localai-backends:latest-gpu-rocm-hipblas-nemo
- !!merge <<: *nemo
name: "rocm-nemo-development"
uri: "quay.io/go-skynet/local-ai-backends:master-gpu-rocm-hipblas-nemo"
mirrors:
- localai/localai-backends:master-gpu-rocm-hipblas-nemo
- !!merge <<: *nemo
name: "metal-nemo"
uri: "quay.io/go-skynet/local-ai-backends:latest-metal-darwin-arm64-nemo"
mirrors:
- localai/localai-backends:latest-metal-darwin-arm64-nemo
- !!merge <<: *nemo
name: "metal-nemo-development"
uri: "quay.io/go-skynet/local-ai-backends:master-metal-darwin-arm64-nemo"
mirrors:
- localai/localai-backends:master-metal-darwin-arm64-nemo
## voxcpm
- !!merge <<: *voxcpm
name: "voxcpm-development"
capabilities:
nvidia: "cuda12-voxcpm-development"
intel: "intel-voxcpm-development"
amd: "rocm-voxcpm-development"
metal: "metal-voxcpm-development"
default: "cpu-voxcpm-development"
nvidia-cuda-13: "cuda13-voxcpm-development"
nvidia-cuda-12: "cuda12-voxcpm-development"
- !!merge <<: *voxcpm
name: "cpu-voxcpm"
uri: "quay.io/go-skynet/local-ai-backends:latest-cpu-voxcpm"
mirrors:
- localai/localai-backends:latest-cpu-voxcpm
- !!merge <<: *voxcpm
name: "cpu-voxcpm-development"
uri: "quay.io/go-skynet/local-ai-backends:master-cpu-voxcpm"
mirrors:
- localai/localai-backends:master-cpu-voxcpm
- !!merge <<: *voxcpm
name: "cuda12-voxcpm"
uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-nvidia-cuda-12-voxcpm"
mirrors:
- localai/localai-backends:latest-gpu-nvidia-cuda-12-voxcpm
- !!merge <<: *voxcpm
name: "cuda12-voxcpm-development"
uri: "quay.io/go-skynet/local-ai-backends:master-gpu-nvidia-cuda-12-voxcpm"
mirrors:
- localai/localai-backends:master-gpu-nvidia-cuda-12-voxcpm
- !!merge <<: *voxcpm
name: "cuda13-voxcpm"
uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-nvidia-cuda-13-voxcpm"
mirrors:
- localai/localai-backends:latest-gpu-nvidia-cuda-13-voxcpm
- !!merge <<: *voxcpm
name: "cuda13-voxcpm-development"
uri: "quay.io/go-skynet/local-ai-backends:master-gpu-nvidia-cuda-13-voxcpm"
mirrors:
- localai/localai-backends:master-gpu-nvidia-cuda-13-voxcpm
- !!merge <<: *voxcpm
name: "intel-voxcpm"
uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-intel-voxcpm"
mirrors:
- localai/localai-backends:latest-gpu-intel-voxcpm
- !!merge <<: *voxcpm
name: "intel-voxcpm-development"
uri: "quay.io/go-skynet/local-ai-backends:master-gpu-intel-voxcpm"
mirrors:
- localai/localai-backends:master-gpu-intel-voxcpm
- !!merge <<: *voxcpm
name: "rocm-voxcpm"
uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-rocm-hipblas-voxcpm"
mirrors:
- localai/localai-backends:latest-gpu-rocm-hipblas-voxcpm
- !!merge <<: *voxcpm
name: "rocm-voxcpm-development"
uri: "quay.io/go-skynet/local-ai-backends:master-gpu-rocm-hipblas-voxcpm"
mirrors:
- localai/localai-backends:master-gpu-rocm-hipblas-voxcpm
- !!merge <<: *voxcpm
name: "metal-voxcpm"
uri: "quay.io/go-skynet/local-ai-backends:latest-metal-darwin-arm64-voxcpm"
mirrors:
- localai/localai-backends:latest-metal-darwin-arm64-voxcpm
- !!merge <<: *voxcpm
name: "metal-voxcpm-development"
uri: "quay.io/go-skynet/local-ai-backends:master-metal-darwin-arm64-voxcpm"
mirrors:
- localai/localai-backends:master-metal-darwin-arm64-voxcpm
## pocket-tts
- !!merge <<: *pocket-tts
name: "pocket-tts-development"
capabilities:
nvidia: "cuda12-pocket-tts-development"
intel: "intel-pocket-tts-development"
amd: "rocm-pocket-tts-development"
nvidia-l4t: "nvidia-l4t-pocket-tts-development"
metal: "metal-pocket-tts-development"
default: "cpu-pocket-tts-development"
nvidia-cuda-13: "cuda13-pocket-tts-development"
nvidia-cuda-12: "cuda12-pocket-tts-development"
nvidia-l4t-cuda-12: "nvidia-l4t-pocket-tts-development"
nvidia-l4t-cuda-13: "cuda13-nvidia-l4t-arm64-pocket-tts-development"
- !!merge <<: *pocket-tts
name: "cpu-pocket-tts"
uri: "quay.io/go-skynet/local-ai-backends:latest-cpu-pocket-tts"
mirrors:
- localai/localai-backends:latest-cpu-pocket-tts
- !!merge <<: *pocket-tts
name: "cpu-pocket-tts-development"
uri: "quay.io/go-skynet/local-ai-backends:master-cpu-pocket-tts"
mirrors:
- localai/localai-backends:master-cpu-pocket-tts
- !!merge <<: *pocket-tts
name: "cuda12-pocket-tts"
uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-nvidia-cuda-12-pocket-tts"
mirrors:
- localai/localai-backends:latest-gpu-nvidia-cuda-12-pocket-tts
- !!merge <<: *pocket-tts
name: "cuda12-pocket-tts-development"
uri: "quay.io/go-skynet/local-ai-backends:master-gpu-nvidia-cuda-12-pocket-tts"
mirrors:
- localai/localai-backends:master-gpu-nvidia-cuda-12-pocket-tts
- !!merge <<: *pocket-tts
name: "cuda13-pocket-tts"
uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-nvidia-cuda-13-pocket-tts"
mirrors:
- localai/localai-backends:latest-gpu-nvidia-cuda-13-pocket-tts
- !!merge <<: *pocket-tts
name: "cuda13-pocket-tts-development"
uri: "quay.io/go-skynet/local-ai-backends:master-gpu-nvidia-cuda-13-pocket-tts"
mirrors:
- localai/localai-backends:master-gpu-nvidia-cuda-13-pocket-tts
- !!merge <<: *pocket-tts
name: "intel-pocket-tts"
uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-intel-pocket-tts"
mirrors:
- localai/localai-backends:latest-gpu-intel-pocket-tts
- !!merge <<: *pocket-tts
name: "intel-pocket-tts-development"
uri: "quay.io/go-skynet/local-ai-backends:master-gpu-intel-pocket-tts"
mirrors:
- localai/localai-backends:master-gpu-intel-pocket-tts
- !!merge <<: *pocket-tts
name: "rocm-pocket-tts"
uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-rocm-hipblas-pocket-tts"
mirrors:
- localai/localai-backends:latest-gpu-rocm-hipblas-pocket-tts
- !!merge <<: *pocket-tts
name: "rocm-pocket-tts-development"
uri: "quay.io/go-skynet/local-ai-backends:master-gpu-rocm-hipblas-pocket-tts"
mirrors:
- localai/localai-backends:master-gpu-rocm-hipblas-pocket-tts
- !!merge <<: *pocket-tts
name: "nvidia-l4t-pocket-tts"
uri: "quay.io/go-skynet/local-ai-backends:latest-nvidia-l4t-pocket-tts"
mirrors:
- localai/localai-backends:latest-nvidia-l4t-pocket-tts
- !!merge <<: *pocket-tts
name: "nvidia-l4t-pocket-tts-development"
uri: "quay.io/go-skynet/local-ai-backends:master-nvidia-l4t-pocket-tts"
mirrors:
- localai/localai-backends:master-nvidia-l4t-pocket-tts
- !!merge <<: *pocket-tts
name: "cuda13-nvidia-l4t-arm64-pocket-tts"
uri: "quay.io/go-skynet/local-ai-backends:latest-nvidia-l4t-cuda-13-arm64-pocket-tts"
mirrors:
- localai/localai-backends:latest-nvidia-l4t-cuda-13-arm64-pocket-tts
- !!merge <<: *pocket-tts
name: "cuda13-nvidia-l4t-arm64-pocket-tts-development"
uri: "quay.io/go-skynet/local-ai-backends:master-nvidia-l4t-cuda-13-arm64-pocket-tts"
mirrors:
- localai/localai-backends:master-nvidia-l4t-cuda-13-arm64-pocket-tts
- !!merge <<: *pocket-tts
name: "metal-pocket-tts"
uri: "quay.io/go-skynet/local-ai-backends:latest-metal-darwin-arm64-pocket-tts"
mirrors:
- localai/localai-backends:latest-metal-darwin-arm64-pocket-tts
- !!merge <<: *pocket-tts
name: "metal-pocket-tts-development"
uri: "quay.io/go-skynet/local-ai-backends:master-metal-darwin-arm64-pocket-tts"
mirrors:
- localai/localai-backends:master-metal-darwin-arm64-pocket-tts
## voxtral
- !!merge <<: *voxtral
name: "cpu-voxtral"
uri: "quay.io/go-skynet/local-ai-backends:latest-cpu-voxtral"
mirrors:
- localai/localai-backends:latest-cpu-voxtral
- !!merge <<: *voxtral
name: "cpu-voxtral-development"
uri: "quay.io/go-skynet/local-ai-backends:master-cpu-voxtral"
mirrors:
- localai/localai-backends:master-cpu-voxtral
- !!merge <<: *voxtral
name: "metal-voxtral"
uri: "quay.io/go-skynet/local-ai-backends:latest-metal-darwin-arm64-voxtral"
mirrors:
- localai/localai-backends:latest-metal-darwin-arm64-voxtral
- !!merge <<: *voxtral
name: "metal-voxtral-development"
uri: "quay.io/go-skynet/local-ai-backends:master-metal-darwin-arm64-voxtral"
mirrors:
- localai/localai-backends:master-metal-darwin-arm64-voxtral
## trl
- &trl
name: "trl"
alias: "trl"
license: apache-2.0
description: |
HuggingFace TRL fine-tuning backend. Supports the SFT, DPO, GRPO, RLOO, Reward, KTO, and ORPO training methods.
Works on CPU and GPU.
urls:
- https://github.com/huggingface/trl
tags:
- fine-tuning
- LLM
- CPU
- GPU
- CUDA
capabilities:
default: "cpu-trl"
nvidia: "cuda12-trl"
nvidia-cuda-12: "cuda12-trl"
nvidia-cuda-13: "cuda13-trl"
## TRL backend images
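# Illustrative note: each `!!merge <<: *trl` entry below inherits every field of
# the &trl anchor above, with the entry's own keys taking precedence. After
# merge-key expansion, "cpu-trl" is therefore roughly equivalent to:
#
#   name: "cpu-trl"
#   alias: "trl"
#   license: apache-2.0
#   description: |
#     HuggingFace TRL fine-tuning backend. ...
#   urls:
#     - https://github.com/huggingface/trl
#   tags: [fine-tuning, LLM, CPU, GPU, CUDA]
#   capabilities:
#     default: "cpu-trl"
#     nvidia: "cuda12-trl"
#     nvidia-cuda-12: "cuda12-trl"
#     nvidia-cuda-13: "cuda13-trl"
#   uri: "quay.io/go-skynet/local-ai-backends:latest-cpu-trl"
#   mirrors:
#     - localai/localai-backends:latest-cpu-trl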
- !!merge <<: *trl
name: "cpu-trl"
uri: "quay.io/go-skynet/local-ai-backends:latest-cpu-trl"
mirrors:
- localai/localai-backends:latest-cpu-trl
- !!merge <<: *trl
name: "cpu-trl-development"
uri: "quay.io/go-skynet/local-ai-backends:master-cpu-trl"
mirrors:
- localai/localai-backends:master-cpu-trl
- !!merge <<: *trl
name: "cuda12-trl"
uri: "quay.io/go-skynet/local-ai-backends:latest-cublas-cuda12-trl"
mirrors:
- localai/localai-backends:latest-cublas-cuda12-trl
- !!merge <<: *trl
name: "cuda12-trl-development"
uri: "quay.io/go-skynet/local-ai-backends:master-cublas-cuda12-trl"
mirrors:
- localai/localai-backends:master-cublas-cuda12-trl
- !!merge <<: *trl
name: "cuda13-trl"
uri: "quay.io/go-skynet/local-ai-backends:latest-cublas-cuda13-trl"
mirrors:
- localai/localai-backends:latest-cublas-cuda13-trl
- !!merge <<: *trl
name: "cuda13-trl-development"
uri: "quay.io/go-skynet/local-ai-backends:master-cublas-cuda13-trl"
mirrors:
- localai/localai-backends:master-cublas-cuda13-trl
## llama.cpp quantization backend
- &llama-cpp-quantization
name: "llama-cpp-quantization"
alias: "llama-cpp-quantization"
license: mit
icon: https://user-images.githubusercontent.com/1991296/230134379-7181e485-c521-4d23-a0d6-f7b3b61ba524.png
description: |
Model quantization backend using llama.cpp. Downloads HuggingFace models, converts them to GGUF format,
and quantizes them to various formats (q4_k_m, q5_k_m, q8_0, f16, etc.).
urls:
- https://github.com/ggml-org/llama.cpp
tags:
- quantization
- GGUF
- CPU
capabilities:
default: "cpu-llama-cpp-quantization"
metal: "metal-darwin-arm64-llama-cpp-quantization"
- !!merge <<: *llama-cpp-quantization
name: "cpu-llama-cpp-quantization"
uri: "quay.io/go-skynet/local-ai-backends:latest-cpu-llama-cpp-quantization"
mirrors:
- localai/localai-backends:latest-cpu-llama-cpp-quantization
- !!merge <<: *llama-cpp-quantization
name: "metal-darwin-arm64-llama-cpp-quantization"
uri: "quay.io/go-skynet/local-ai-backends:latest-metal-darwin-arm64-llama-cpp-quantization"
mirrors:
- localai/localai-backends:latest-metal-darwin-arm64-llama-cpp-quantization
## insightface (face recognition)
- !!merge <<: *insightface
name: "insightface-development"
capabilities:
default: "cpu-insightface-development"
nvidia: "cuda12-insightface-development"
nvidia-cuda-12: "cuda12-insightface-development"
- !!merge <<: *insightface
name: "cpu-insightface"
uri: "quay.io/go-skynet/local-ai-backends:latest-cpu-insightface"
mirrors:
- localai/localai-backends:latest-cpu-insightface
- !!merge <<: *insightface
name: "cuda12-insightface"
uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-nvidia-cuda-12-insightface"
mirrors:
- localai/localai-backends:latest-gpu-nvidia-cuda-12-insightface
- !!merge <<: *insightface
name: "cpu-insightface-development"
uri: "quay.io/go-skynet/local-ai-backends:master-cpu-insightface"
mirrors:
- localai/localai-backends:master-cpu-insightface
- !!merge <<: *insightface
name: "cuda12-insightface-development"
uri: "quay.io/go-skynet/local-ai-backends:master-gpu-nvidia-cuda-12-insightface"
mirrors:
- localai/localai-backends:master-gpu-nvidia-cuda-12-insightface
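# Illustrative note (commands are an assumption; check `local-ai --help` for the
# exact syntax): installing by alias lets the `capabilities` map pick the image,
# while a concrete name pins one, e.g.:
#
#   local-ai backends install insightface          # capability-matched image
#   local-ai backends install cuda12-insightface   # pinned CUDA 12 image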