mirror of https://github.com/mudler/LocalAI.git — synced 2026-05-16 20:52:08 -04:00
* ci: extract free-disk-space composite action

  Consolidate the apt-clean + dotnet/android/ghc/boost removal blocks from
  backend_build.yml, image_build.yml, and test.yml into a single composite
  action. The three callers had slightly different inline blocks; the
  composite uses the more aggressive backend_build/image_build variant for
  all three callers — test.yml jobs now also purge snapd,
  edge/firefox/powershell/r-base-core, and sweep /opt/ghc +
  /usr/local/share/boost + $AGENT_TOOLSDIRECTORY. Idempotent and skipped
  on self-hosted runners.

  In test.yml, actions/checkout now runs before the composite action call
  because the composite lives at ./.github/actions/free-disk-space and
  requires a checked-out repo. The original ordering relied on
  jlumbroso/free-disk-space@main being a remote action; this is the
  minimum-invasive change to support a local composite.

  Assisted-by: Claude:claude-opus-4-7
  Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* ci: path-filter backend.yml master push

  Run scripts/changed-backends.js on master pushes too (not just PRs) so
  unrelated commits don't rebuild all ~210 backend container images. Tag
  pushes still build the full matrix via FORCE_ALL.

  Push events use the GitHub Compare API to diff event.before..event.after.
  Edge cases (first push with zero base, API truncation beyond 300 files,
  missing fields, network failure) fall back to "run everything" — better
  safe than silently miss a backend.

  The matrix literal moves from .github/workflows/backend.yml into a new
  data-only file at .github/backend-matrix.yml (outside workflows/ so
  actionlint doesn't try to parse it as a workflow). Both backend.yml and
  backend_pr.yml now consume the dynamic matrix output uniformly via
  fromJson(needs.generate-matrix.outputs.matrix); the script reads the
  matrix from the new location.
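The "fall back to run everything" decision described above can be sketched in shell. This is a hedged illustration, not the repo's script (which is scripts/changed-backends.js, in JavaScript): the function name and the "ALL" sentinel are assumptions, and the changed-file list stands in for what the Compare API would return.

```shell
# Illustrative sketch of the safe-fallback policy for push events.
# ZERO_SHA is what event.before holds on a branch's first push.
ZERO_SHA="0000000000000000000000000000000000000000"

# Args: $1 = event.before, $2 = newline-separated changed-file list
# (empty string stands in for a failed or truncated Compare API call).
# Prints the file list, or "ALL" when the safe fallback applies.
changed_files_or_all() {
  local before="$1" files="$2"
  if [ "$before" = "$ZERO_SHA" ] || [ -z "$files" ]; then
    # First push, missing fields, truncation, or network failure:
    # better safe than silently miss a backend.
    echo "ALL"
  else
    echo "$files"
  fi
}
```

In the real workflow the list would come from `GET /repos/{owner}/{repo}/compare/{before}...{after}`, which truncates beyond 300 files; per the commit message, that truncation also maps to the full-matrix fallback.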
  Assisted-by: Claude:claude-opus-4-7
  Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* ci: bound max-parallel on backend-jobs matrices

  Cap to 8 concurrent jobs to avoid queue starvation on the shared GHA
  free pool while migration is in flight. Lift after Phases 4-5 retire
  the self-hosted runners.

  Also drops a leftover commented-out max-parallel line that lived in
  backend.yml since the previous matrix shape.

  Assisted-by: Claude:claude-opus-4-7
  Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* ci: scope backend cache per arch, push by digest

  Prepare backend_build.yml for the multi-arch split. The reusable
  workflow now accepts a `platform-tag` input ("amd64" / "arm64") that
  scopes the registry cache to cache<suffix>-<platform-tag> and (on push
  events) pushes the resulting image by canonical digest only. Digests
  are uploaded as artifacts named digests<suffix>-<platform-tag> for the
  merge job (Task 2.2) to consume.

  `platform-tag` is optional with an empty default during the migration —
  existing callers continue to work unchanged (their cache key just
  becomes `cache<suffix>-`, an orphaned but valid key). Tasks 2.3+ will
  update callers to pass an explicit "amd64" / "arm64" value. Phase 6
  flips the input to required: true once every caller is wired.

  PR builds keep their existing tag-based push to ci-tests but pick up
  the per-arch cache key. Multi-arch PR builds remain emulated in this
  commit; they migrate when the matrix entries split (Tasks 2.3+).

  Assisted-by: Claude:claude-opus-4-7
  Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* ci: add backend_merge.yml reusable workflow

  Joins per-arch digest artifacts (uploaded by backend_build.yml when
  called with platform-tag) into a single tagged multi-arch manifest list
  via `docker buildx imagetools create`. Called once per backend by
  backend.yml after both per-arch build jobs succeed.
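The cache<suffix>-<platform-tag> scoping and its migration edge case can be sketched as a tiny helper. The function name is illustrative; only the key shape comes from the commit message.

```shell
# Illustrative sketch of the per-arch cache key scoping described above.
# Args: $1 = tag suffix (e.g. "-vllm"), $2 = platform-tag ("amd64",
# "arm64", or empty during the migration window).
cache_ref() {
  local suffix="$1" platform_tag="$2"
  echo "cache${suffix}-${platform_tag}"
}
```

With an empty `platform-tag`, existing callers get `cache<suffix>-` — an orphaned but still valid key, exactly the transitional behavior the commit describes; Phase 6 then makes the input required.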
  The workflow generates final tags identically to the previous
  monolithic build job (same docker/metadata-action invocation), so
  consumers of quay.io/go-skynet/local-ai-backends and
  localai/localai-backends see no tag-shape change. Two imagetools calls
  (one per registry) reference the same per-arch digests under different
  image names.

  Not yet wired into backend.yml — Tasks 2.3+ rewrite individual matrix
  entries to expand into per-arch + merge jobs that call this workflow.

  Assisted-by: Claude:claude-opus-4-7
  Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* ci: relocate Docker data-root to /mnt on hosted runners

  GHA hosted ubuntu-latest runners ship a ~75 GB /mnt drive that's unused
  by default. Stopping Docker, rsync'ing /var/lib/docker to /mnt, and
  restarting with data-root pointing there yields ~100 GB of working
  space (combined with the apt-clean from Task 1.1) — enough for the ROCm
  dev image + vLLM torch install + flash-attn intermediate layers. This
  is the structural change that lets Phases 4 and 5 of the migration plan
  move the bigger-runner and arc-runner-set jobs onto ubuntu-latest.

  The composite action is a no-op on self-hosted runners (where /mnt
  isn't expected) and on non-X64 runners (Task 3.2 verifies the arm64
  hosted pool's /mnt shape separately before enabling). Wired into both
  backend_build.yml and image_build.yml between free-disk-space and the
  first Docker operation.

  Assisted-by: Claude:claude-opus-4-7
  Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* ci(setup-build-disk): chmod 1777 /mnt/docker-tmp

  buildx CLI runs as the unprivileged 'runner' user and creates config
  dirs under TMPDIR before binding them into the buildkit container.
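The stop/rsync/restart sequence described above can be sketched as follows. The composite action's exact steps aren't shown on this page, so the paths, ordering, and function names are assumptions; the daemon.json shape is split into a pure helper so it can be checked without Docker or systemd.

```shell
# Pure helper: the daemon.json fragment that points Docker at a new data-root.
daemon_json() { printf '{"data-root": "%s"}' "$1"; }

# Hedged sketch of the relocation; guarded so it no-ops when /mnt is absent
# (self-hosted runners). Not invoked here — it needs root, systemd, Docker.
relocate_docker_data_root() {
  local new_root="/mnt/docker"
  [ -d /mnt ] || return 0                       # self-hosted runner: no-op
  sudo systemctl stop docker
  sudo mkdir -p "$new_root"
  sudo rsync -a /var/lib/docker/ "$new_root"/   # keep any pre-pulled layers
  daemon_json "$new_root" | sudo tee /etc/docker/daemon.json >/dev/null
  sudo systemctl start docker
  # Staging dir for buildx config: world-writable with sticky bit, like
  # /tmp, so the unprivileged 'runner' user can create its config dirs
  # (this is the chmod 1777 fix from the follow-up commit).
  sudo install -d -m 1777 /mnt/docker-tmp
}
```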
  /mnt is root-owned by default, so the original mkdir produced a
  permission-denied error when buildx tried to write there:

      ERROR: mkdir /mnt/docker-tmp/buildkitd-config2740457204: permission denied

  Mirror /tmp's permission mode (1777 — world-writable with sticky bit)
  on /mnt/docker-tmp so non-root processes can stage their config.

  Caught by the first PR run (image-build hipblas job) on PR #9726.

  Assisted-by: Claude:claude-opus-4-7
  Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* ci: weekly full-matrix rebuild via cron

  Path-filtering backend.yml's master push (the previous commit's main
  optimization) skips backends whose source didn't change. That broke
  the DEPS_REFRESH cache-buster's coverage: the build-arg keyed on
  %Y-W%V busts the install layer's cache on a new ISO week, but only
  when the build actually runs. Untouched Python backends (torch,
  transformers, vllm with no version pin) would otherwise ship stale
  wheels indefinitely.

  Add a Sunday 06:00 UTC cron that fires the full matrix. Schedule
  events have no event.ref / event.before, so the script's
  changedFiles == null fallback (scripts/changed-backends.js) emits the
  full matrix automatically — no script change needed. C++/Go backends
  with pinned deps cache-hit and complete fast, so the weekly cost is
  dominated by Python re-resolves, which is exactly what we want.

  workflow_dispatch added so a maintainer can trigger an ad-hoc
  full-matrix rebuild without faking a tag push.

  Assisted-by: Claude:claude-opus-4-7
  Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

---------

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
Co-authored-by: Ettore Di Giacinto <mudler@localai.io>
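The %Y-W%V cache-buster works like this: the value is constant within an ISO week and changes when the week rolls over, so a build-arg carrying it invalidates the install layer's cache at most once a week — but only for builds that actually run, which is exactly the gap the cron closes. A minimal sketch (the exact build-arg wiring isn't shown on this page):

```shell
# DEPS_REFRESH-style cache-buster keyed on the ISO week: same value all
# week, new value on week rollover (%V is the ISO 8601 week number).
deps_refresh_key() { date -u +%Y-W%V; }
```

Passed as e.g. `--build-arg DEPS_REFRESH=$(deps_refresh_key)` with a matching `ARG` placed just before the dependency-install step, every layer from that point on is rebuilt once per week, forcing a fresh resolve of unpinned Python deps.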
3224 lines
94 KiB
YAML
---
# Matrix data for backend container image builds.
# Consumed by scripts/changed-backends.js for both backend.yml and backend_pr.yml.
# This file is NOT a workflow — it has no top-level 'on:' or 'jobs:'.

# Linux matrix (consumed by backend-jobs).
include:
  - build-type: 'l4t'
    cuda-major-version: "12"
    cuda-minor-version: "0"
    platforms: 'linux/arm64'
    tag-latest: 'auto'
    tag-suffix: '-nvidia-l4t-diffusers'
    runs-on: 'ubuntu-24.04-arm'
    base-image: "nvcr.io/nvidia/l4t-jetpack:r36.4.0"
    skip-drivers: 'true'
    backend: "diffusers"
    dockerfile: "./backend/Dockerfile.python"
    context: "./"
    ubuntu-version: '2204'
  - build-type: ''
    cuda-major-version: ""
    cuda-minor-version: ""
    platforms: 'linux/amd64'
    tag-latest: 'auto'
    tag-suffix: '-cpu-vllm'
    runs-on: 'ubuntu-latest'
    base-image: "ubuntu:24.04"
    skip-drivers: 'true'
    backend: "vllm"
    dockerfile: "./backend/Dockerfile.python"
    context: "./"
    ubuntu-version: '2404'
  - build-type: ''
    cuda-major-version: ""
    cuda-minor-version: ""
    platforms: 'linux/amd64'
    tag-latest: 'auto'
    tag-suffix: '-cpu-sglang'
    runs-on: 'ubuntu-latest'
    base-image: "ubuntu:24.04"
    skip-drivers: 'true'
    backend: "sglang"
    dockerfile: "./backend/Dockerfile.python"
    context: "./"
    ubuntu-version: '2404'
  - build-type: ''
    cuda-major-version: ""
    cuda-minor-version: ""
    platforms: 'linux/amd64'
    tag-latest: 'auto'
    tag-suffix: '-cpu-diffusers'
    runs-on: 'ubuntu-latest'
    base-image: "ubuntu:24.04"
    skip-drivers: 'true'
    backend: "diffusers"
    dockerfile: "./backend/Dockerfile.python"
    context: "./"
    ubuntu-version: '2404'
  - build-type: ''
    cuda-major-version: ""
    cuda-minor-version: ""
    platforms: 'linux/amd64'
    tag-latest: 'auto'
    tag-suffix: '-cpu-chatterbox'
    runs-on: 'ubuntu-latest'
    base-image: "ubuntu:24.04"
    skip-drivers: 'true'
    backend: "chatterbox"
    dockerfile: "./backend/Dockerfile.python"
    context: "./"
    ubuntu-version: '2404'
  - build-type: ''
    cuda-major-version: ""
    cuda-minor-version: ""
    platforms: 'linux/amd64'
    tag-latest: 'auto'
    tag-suffix: '-cpu-moonshine'
    runs-on: 'ubuntu-latest'
    base-image: "ubuntu:24.04"
    skip-drivers: 'true'
    backend: "moonshine"
    dockerfile: "./backend/Dockerfile.python"
    context: "./"
    ubuntu-version: '2404'
  # tinygrad ships a single image — its CPU device uses bundled
  # libLLVM, and its CUDA / HIP / Metal devices dlopen the host
  # driver libraries at runtime via tinygrad's ctypes autogen
  # wrappers. There is no toolkit-version split because tinygrad
  # generates kernels itself (PTX renderer for CUDA) and never
  # links against cuDNN/cuBLAS/torch.
  - build-type: ''
    cuda-major-version: ""
    cuda-minor-version: ""
    platforms: 'linux/amd64'
    tag-latest: 'auto'
    tag-suffix: '-tinygrad'
    runs-on: 'ubuntu-latest'
    base-image: "ubuntu:24.04"
    skip-drivers: 'true'
    backend: "tinygrad"
    dockerfile: "./backend/Dockerfile.python"
    context: "./"
    ubuntu-version: '2404'
  - build-type: ''
    cuda-major-version: ""
    cuda-minor-version: ""
    platforms: 'linux/amd64,linux/arm64'
    tag-latest: 'auto'
    tag-suffix: '-cpu-whisperx'
    runs-on: 'ubuntu-latest'
    base-image: "ubuntu:24.04"
    skip-drivers: 'true'
    backend: "whisperx"
    dockerfile: "./backend/Dockerfile.python"
    context: "./"
    ubuntu-version: '2404'
  - build-type: ''
    cuda-major-version: ""
    cuda-minor-version: ""
    platforms: 'linux/amd64,linux/arm64'
    tag-latest: 'auto'
    tag-suffix: '-cpu-faster-whisper'
    runs-on: 'ubuntu-latest'
    base-image: "ubuntu:24.04"
    skip-drivers: 'true'
    backend: "faster-whisper"
    dockerfile: "./backend/Dockerfile.python"
    context: "./"
    ubuntu-version: '2404'
  - build-type: ''
    cuda-major-version: ""
    cuda-minor-version: ""
    platforms: 'linux/amd64'
    tag-latest: 'auto'
    tag-suffix: '-cpu-ace-step'
    runs-on: 'ubuntu-latest'
    base-image: "ubuntu:24.04"
    skip-drivers: 'true'
    backend: "ace-step"
    dockerfile: "./backend/Dockerfile.python"
    context: "./"
    ubuntu-version: '2404'
  - build-type: ''
    cuda-major-version: ""
    cuda-minor-version: ""
    platforms: 'linux/amd64'
    tag-latest: 'auto'
    tag-suffix: '-cpu-trl'
    runs-on: 'ubuntu-latest'
    base-image: "ubuntu:24.04"
    skip-drivers: 'true'
    backend: "trl"
    dockerfile: "./backend/Dockerfile.python"
    context: "./"
    ubuntu-version: '2404'
  - build-type: ''
    cuda-major-version: ""
    cuda-minor-version: ""
    platforms: 'linux/amd64,linux/arm64'
    tag-latest: 'auto'
    tag-suffix: '-cpu-llama-cpp-quantization'
    runs-on: 'ubuntu-latest'
    base-image: "ubuntu:24.04"
    skip-drivers: 'true'
    backend: "llama-cpp-quantization"
    dockerfile: "./backend/Dockerfile.python"
    context: "./"
    ubuntu-version: '2404'
  - build-type: ''
    cuda-major-version: ""
    cuda-minor-version: ""
    platforms: 'linux/amd64'
    tag-latest: 'auto'
    tag-suffix: '-cpu-mlx'
    runs-on: 'ubuntu-latest'
    base-image: "ubuntu:24.04"
    skip-drivers: 'true'
    backend: "mlx"
    dockerfile: "./backend/Dockerfile.python"
    context: "./"
    ubuntu-version: '2404'
  - build-type: ''
    cuda-major-version: ""
    cuda-minor-version: ""
    platforms: 'linux/amd64'
    tag-latest: 'auto'
    tag-suffix: '-cpu-mlx-vlm'
    runs-on: 'ubuntu-latest'
    base-image: "ubuntu:24.04"
    skip-drivers: 'true'
    backend: "mlx-vlm"
    dockerfile: "./backend/Dockerfile.python"
    context: "./"
    ubuntu-version: '2404'
  - build-type: ''
    cuda-major-version: ""
    cuda-minor-version: ""
    platforms: 'linux/amd64'
    tag-latest: 'auto'
    tag-suffix: '-cpu-mlx-audio'
    runs-on: 'ubuntu-latest'
    base-image: "ubuntu:24.04"
    skip-drivers: 'true'
    backend: "mlx-audio"
    dockerfile: "./backend/Dockerfile.python"
    context: "./"
    ubuntu-version: '2404'
  - build-type: ''
    cuda-major-version: ""
    cuda-minor-version: ""
    platforms: 'linux/amd64'
    tag-latest: 'auto'
    tag-suffix: '-cpu-mlx-distributed'
    runs-on: 'ubuntu-latest'
    base-image: "ubuntu:24.04"
    skip-drivers: 'true'
    backend: "mlx-distributed"
    dockerfile: "./backend/Dockerfile.python"
    context: "./"
    ubuntu-version: '2404'
  # CUDA 12 builds
  - build-type: 'cublas'
    cuda-major-version: "12"
    cuda-minor-version: "8"
    platforms: 'linux/amd64'
    tag-latest: 'auto'
    tag-suffix: '-gpu-nvidia-cuda-12-vibevoice'
    runs-on: 'ubuntu-latest'
    base-image: "ubuntu:24.04"
    skip-drivers: 'false'
    backend: "vibevoice"
    dockerfile: "./backend/Dockerfile.python"
    context: "./"
    ubuntu-version: '2404'
  - build-type: 'cublas'
    cuda-major-version: "12"
    cuda-minor-version: "8"
    platforms: 'linux/amd64'
    tag-latest: 'auto'
    tag-suffix: '-gpu-nvidia-cuda-12-qwen-asr'
    runs-on: 'ubuntu-latest'
    base-image: "ubuntu:24.04"
    skip-drivers: 'false'
    backend: "qwen-asr"
    dockerfile: "./backend/Dockerfile.python"
    context: "./"
    ubuntu-version: '2404'
  - build-type: 'cublas'
    cuda-major-version: "12"
    cuda-minor-version: "8"
    platforms: 'linux/amd64'
    tag-latest: 'auto'
    tag-suffix: '-gpu-nvidia-cuda-12-nemo'
    runs-on: 'ubuntu-latest'
    base-image: "ubuntu:24.04"
    skip-drivers: 'false'
    backend: "nemo"
    dockerfile: "./backend/Dockerfile.python"
    context: "./"
    ubuntu-version: '2404'
  - build-type: 'cublas'
    cuda-major-version: "12"
    cuda-minor-version: "8"
    platforms: 'linux/amd64'
    tag-latest: 'auto'
    tag-suffix: '-gpu-nvidia-cuda-12-qwen-tts'
    runs-on: 'ubuntu-latest'
    base-image: "ubuntu:24.04"
    skip-drivers: 'false'
    backend: "qwen-tts"
    dockerfile: "./backend/Dockerfile.python"
    context: "./"
    ubuntu-version: '2404'
  - build-type: 'cublas'
    cuda-major-version: "12"
    cuda-minor-version: "8"
    platforms: 'linux/amd64'
    tag-latest: 'auto'
    tag-suffix: '-gpu-nvidia-cuda-12-fish-speech'
    runs-on: 'ubuntu-latest'
    base-image: "ubuntu:24.04"
    skip-drivers: 'false'
    backend: "fish-speech"
    dockerfile: "./backend/Dockerfile.python"
    context: "./"
    ubuntu-version: '2404'
  - build-type: 'cublas'
    cuda-major-version: "12"
    cuda-minor-version: "8"
    platforms: 'linux/amd64'
    tag-latest: 'auto'
    tag-suffix: '-gpu-nvidia-cuda-12-faster-qwen3-tts'
    runs-on: 'ubuntu-latest'
    base-image: "ubuntu:24.04"
    skip-drivers: 'false'
    backend: "faster-qwen3-tts"
    dockerfile: "./backend/Dockerfile.python"
    context: "./"
    ubuntu-version: '2404'
  - build-type: 'cublas'
    cuda-major-version: "12"
    cuda-minor-version: "8"
    platforms: 'linux/amd64'
    tag-latest: 'auto'
    tag-suffix: '-gpu-nvidia-cuda-12-voxcpm'
    runs-on: 'ubuntu-latest'
    base-image: "ubuntu:24.04"
    skip-drivers: 'false'
    backend: "voxcpm"
    dockerfile: "./backend/Dockerfile.python"
    context: "./"
    ubuntu-version: '2404'
  - build-type: 'cublas'
    cuda-major-version: "12"
    cuda-minor-version: "8"
    platforms: 'linux/amd64'
    tag-latest: 'auto'
    tag-suffix: '-gpu-nvidia-cuda-12-pocket-tts'
    runs-on: 'ubuntu-latest'
    base-image: "ubuntu:24.04"
    skip-drivers: 'false'
    backend: "pocket-tts"
    dockerfile: "./backend/Dockerfile.python"
    context: "./"
    ubuntu-version: '2404'
  - build-type: 'cublas'
    cuda-major-version: "12"
    cuda-minor-version: "0"
    platforms: 'linux/amd64'
    tag-latest: 'auto'
    tag-suffix: '-gpu-nvidia-cuda-12-rerankers'
    runs-on: 'ubuntu-latest'
    base-image: "ubuntu:24.04"
    skip-drivers: 'false'
    backend: "rerankers"
    dockerfile: "./backend/Dockerfile.python"
    context: "./"
    ubuntu-version: '2404'
  - build-type: 'cublas'
    cuda-major-version: "12"
    cuda-minor-version: "8"
    platforms: 'linux/amd64'
    tag-latest: 'auto'
    tag-suffix: '-gpu-nvidia-cuda-12-llama-cpp'
    runs-on: 'bigger-runner'
    base-image: "ubuntu:24.04"
    skip-drivers: 'false'
    backend: "llama-cpp"
    dockerfile: "./backend/Dockerfile.llama-cpp"
    context: "./"
    ubuntu-version: '2404'
  - build-type: 'cublas'
    cuda-major-version: "12"
    cuda-minor-version: "8"
    platforms: 'linux/amd64'
    tag-latest: 'auto'
    tag-suffix: '-gpu-nvidia-cuda-12-turboquant'
    runs-on: 'bigger-runner'
    base-image: "ubuntu:24.04"
    skip-drivers: 'false'
    backend: "turboquant"
    dockerfile: "./backend/Dockerfile.turboquant"
    context: "./"
    ubuntu-version: '2404'
  - build-type: 'cublas'
    cuda-major-version: "12"
    cuda-minor-version: "8"
    platforms: 'linux/amd64'
    tag-latest: 'auto'
    tag-suffix: '-gpu-nvidia-cuda-12-vllm'
    runs-on: 'arc-runner-set'
    base-image: "ubuntu:24.04"
    skip-drivers: 'false'
    backend: "vllm"
    dockerfile: "./backend/Dockerfile.python"
    context: "./"
    ubuntu-version: '2404'
  - build-type: 'cublas'
    cuda-major-version: "12"
    cuda-minor-version: "8"
    platforms: 'linux/amd64'
    tag-latest: 'auto'
    tag-suffix: '-gpu-nvidia-cuda-12-vllm-omni'
    runs-on: 'arc-runner-set'
    base-image: "ubuntu:24.04"
    skip-drivers: 'false'
    backend: "vllm-omni"
    dockerfile: "./backend/Dockerfile.python"
    context: "./"
    ubuntu-version: '2404'
  - build-type: 'cublas'
    cuda-major-version: "12"
    cuda-minor-version: "8"
    platforms: 'linux/amd64'
    tag-latest: 'auto'
    tag-suffix: '-gpu-nvidia-cuda-12-sglang'
    runs-on: 'arc-runner-set'
    base-image: "ubuntu:24.04"
    skip-drivers: 'false'
    backend: "sglang"
    dockerfile: "./backend/Dockerfile.python"
    context: "./"
    ubuntu-version: '2404'
  - build-type: 'cublas'
    cuda-major-version: "12"
    cuda-minor-version: "8"
    platforms: 'linux/amd64'
    tag-latest: 'auto'
    tag-suffix: '-gpu-nvidia-cuda-12-transformers'
    runs-on: 'ubuntu-latest'
    base-image: "ubuntu:24.04"
    skip-drivers: 'false'
    backend: "transformers"
    dockerfile: "./backend/Dockerfile.python"
    context: "./"
    ubuntu-version: '2404'
  - build-type: 'cublas'
    cuda-major-version: "12"
    cuda-minor-version: "8"
    platforms: 'linux/amd64'
    tag-latest: 'auto'
    tag-suffix: '-gpu-nvidia-cuda-12-diffusers'
    runs-on: 'ubuntu-latest'
    base-image: "ubuntu:24.04"
    skip-drivers: 'false'
    backend: "diffusers"
    dockerfile: "./backend/Dockerfile.python"
    context: "./"
    ubuntu-version: '2404'
  - build-type: 'cublas'
    cuda-major-version: "12"
    cuda-minor-version: "8"
    platforms: 'linux/amd64'
    tag-latest: 'auto'
    tag-suffix: '-gpu-nvidia-cuda-12-ace-step'
    runs-on: 'ubuntu-latest'
    base-image: "ubuntu:24.04"
    skip-drivers: 'false'
    backend: "ace-step"
    dockerfile: "./backend/Dockerfile.python"
    context: "./"
    ubuntu-version: '2404'
  - build-type: 'cublas'
    cuda-major-version: "12"
    cuda-minor-version: "8"
    platforms: 'linux/amd64'
    tag-latest: 'auto'
    tag-suffix: '-gpu-nvidia-cuda-12-trl'
    runs-on: 'ubuntu-latest'
    base-image: "ubuntu:24.04"
    skip-drivers: 'false'
    backend: "trl"
    dockerfile: "./backend/Dockerfile.python"
    context: "./"
    ubuntu-version: '2404'
  - build-type: 'cublas'
    cuda-major-version: "12"
    cuda-minor-version: "8"
    platforms: 'linux/amd64'
    tag-latest: 'auto'
    tag-suffix: '-gpu-nvidia-cuda-12-kokoro'
    runs-on: 'ubuntu-latest'
    base-image: "ubuntu:24.04"
    skip-drivers: 'false'
    backend: "kokoro"
    dockerfile: "./backend/Dockerfile.python"
    context: "./"
    ubuntu-version: '2404'
  - build-type: 'cublas'
    cuda-major-version: "12"
    cuda-minor-version: "8"
    platforms: 'linux/amd64'
    tag-latest: 'auto'
    tag-suffix: '-gpu-nvidia-cuda-12-faster-whisper'
    runs-on: 'ubuntu-latest'
    base-image: "ubuntu:24.04"
    skip-drivers: 'false'
    backend: "faster-whisper"
    dockerfile: "./backend/Dockerfile.python"
    context: "./"
    ubuntu-version: '2404'
  - build-type: 'cublas'
    cuda-major-version: "12"
    cuda-minor-version: "8"
    platforms: 'linux/amd64'
    tag-latest: 'auto'
    tag-suffix: '-gpu-nvidia-cuda-12-whisperx'
    runs-on: 'ubuntu-latest'
    base-image: "ubuntu:24.04"
    skip-drivers: 'false'
    backend: "whisperx"
    dockerfile: "./backend/Dockerfile.python"
    context: "./"
    ubuntu-version: '2404'
  - build-type: 'cublas'
    cuda-major-version: "12"
    cuda-minor-version: "9"
    platforms: 'linux/amd64'
    tag-latest: 'auto'
    tag-suffix: '-gpu-nvidia-cuda-12-coqui'
    runs-on: 'ubuntu-latest'
    base-image: "ubuntu:24.04"
    skip-drivers: 'false'
    backend: "coqui"
    dockerfile: "./backend/Dockerfile.python"
    context: "./"
    ubuntu-version: '2404'
  - build-type: 'cublas'
    cuda-major-version: "12"
    cuda-minor-version: "8"
    platforms: 'linux/amd64'
    tag-latest: 'auto'
    tag-suffix: '-gpu-nvidia-cuda-12-outetts'
    runs-on: 'ubuntu-latest'
    base-image: "ubuntu:24.04"
    skip-drivers: 'false'
    backend: "outetts"
    dockerfile: "./backend/Dockerfile.python"
    context: "./"
    ubuntu-version: '2404'
  - build-type: 'cublas'
    cuda-major-version: "12"
    cuda-minor-version: "8"
    platforms: 'linux/amd64'
    tag-latest: 'auto'
    tag-suffix: '-gpu-nvidia-cuda-12-chatterbox'
    runs-on: 'ubuntu-latest'
    base-image: "ubuntu:24.04"
    skip-drivers: 'false'
    backend: "chatterbox"
    dockerfile: "./backend/Dockerfile.python"
    context: "./"
    ubuntu-version: '2404'
  - build-type: 'cublas'
    cuda-major-version: "12"
    cuda-minor-version: "8"
    platforms: 'linux/amd64'
    tag-latest: 'auto'
    tag-suffix: '-gpu-nvidia-cuda-12-moonshine'
    runs-on: 'ubuntu-latest'
    base-image: "ubuntu:24.04"
    skip-drivers: 'false'
    backend: "moonshine"
    dockerfile: "./backend/Dockerfile.python"
    context: "./"
    ubuntu-version: '2404'
  - build-type: 'cublas'
    cuda-major-version: "12"
    cuda-minor-version: "8"
    platforms: 'linux/amd64'
    tag-latest: 'auto'
    tag-suffix: '-gpu-nvidia-cuda-12-mlx'
    runs-on: 'ubuntu-latest'
    base-image: "ubuntu:24.04"
    skip-drivers: 'false'
    backend: "mlx"
    dockerfile: "./backend/Dockerfile.python"
    context: "./"
    ubuntu-version: '2404'
  - build-type: 'cublas'
    cuda-major-version: "12"
    cuda-minor-version: "8"
    platforms: 'linux/amd64'
    tag-latest: 'auto'
    tag-suffix: '-gpu-nvidia-cuda-12-mlx-vlm'
    runs-on: 'ubuntu-latest'
    base-image: "ubuntu:24.04"
    skip-drivers: 'false'
    backend: "mlx-vlm"
    dockerfile: "./backend/Dockerfile.python"
    context: "./"
    ubuntu-version: '2404'
  - build-type: 'cublas'
    cuda-major-version: "12"
    cuda-minor-version: "8"
    platforms: 'linux/amd64'
    tag-latest: 'auto'
    tag-suffix: '-gpu-nvidia-cuda-12-mlx-audio'
    runs-on: 'ubuntu-latest'
    base-image: "ubuntu:24.04"
    skip-drivers: 'false'
    backend: "mlx-audio"
    dockerfile: "./backend/Dockerfile.python"
    context: "./"
    ubuntu-version: '2404'
  - build-type: 'cublas'
    cuda-major-version: "12"
    cuda-minor-version: "8"
    platforms: 'linux/amd64'
    tag-latest: 'auto'
    tag-suffix: '-gpu-nvidia-cuda-12-mlx-distributed'
    runs-on: 'ubuntu-latest'
    base-image: "ubuntu:24.04"
    skip-drivers: 'false'
    backend: "mlx-distributed"
    dockerfile: "./backend/Dockerfile.python"
    context: "./"
    ubuntu-version: '2404'
  - build-type: 'cublas'
    cuda-major-version: "12"
    cuda-minor-version: "8"
    platforms: 'linux/amd64'
    tag-latest: 'auto'
    tag-suffix: '-gpu-nvidia-cuda-12-stablediffusion-ggml'
    runs-on: 'ubuntu-latest'
    base-image: "ubuntu:24.04"
    skip-drivers: 'false'
    backend: "stablediffusion-ggml"
    dockerfile: "./backend/Dockerfile.golang"
    context: "./"
    ubuntu-version: '2404'
  - build-type: 'cublas'
    cuda-major-version: "12"
    cuda-minor-version: "8"
    platforms: 'linux/amd64'
    tag-latest: 'auto'
    tag-suffix: '-gpu-nvidia-cuda-12-sam3-cpp'
    runs-on: 'ubuntu-latest'
    base-image: "ubuntu:24.04"
    skip-drivers: 'false'
    backend: "sam3-cpp"
    dockerfile: "./backend/Dockerfile.golang"
    context: "./"
    ubuntu-version: '2404'
  - build-type: 'cublas'
    cuda-major-version: "12"
    cuda-minor-version: "8"
    platforms: 'linux/amd64'
    tag-latest: 'auto'
    tag-suffix: '-gpu-nvidia-cuda-12-whisper'
    runs-on: 'ubuntu-latest'
    base-image: "ubuntu:24.04"
    skip-drivers: 'false'
    backend: "whisper"
    dockerfile: "./backend/Dockerfile.golang"
    context: "./"
    ubuntu-version: '2404'
  - build-type: 'cublas'
    cuda-major-version: "12"
    cuda-minor-version: "8"
    platforms: 'linux/amd64'
    tag-latest: 'auto'
    tag-suffix: '-gpu-nvidia-cuda-12-acestep-cpp'
    runs-on: 'ubuntu-latest'
    base-image: "ubuntu:24.04"
    skip-drivers: 'false'
    backend: "acestep-cpp"
    dockerfile: "./backend/Dockerfile.golang"
    context: "./"
    ubuntu-version: '2404'
  - build-type: 'cublas'
    cuda-major-version: "12"
    cuda-minor-version: "8"
    platforms: 'linux/amd64'
    tag-latest: 'auto'
    tag-suffix: '-gpu-nvidia-cuda-12-qwen3-tts-cpp'
    runs-on: 'ubuntu-latest'
    base-image: "ubuntu:24.04"
    skip-drivers: 'false'
    backend: "qwen3-tts-cpp"
    dockerfile: "./backend/Dockerfile.golang"
    context: "./"
    ubuntu-version: '2404'
  - build-type: 'cublas'
    cuda-major-version: "12"
    cuda-minor-version: "8"
    platforms: 'linux/amd64'
    tag-latest: 'auto'
    tag-suffix: '-gpu-nvidia-cuda-12-vibevoice-cpp'
    runs-on: 'ubuntu-latest'
    base-image: "ubuntu:24.04"
    skip-drivers: 'false'
    backend: "vibevoice-cpp"
    dockerfile: "./backend/Dockerfile.golang"
    context: "./"
    ubuntu-version: '2404'
  - build-type: 'cublas'
    cuda-major-version: "12"
    cuda-minor-version: "8"
    platforms: 'linux/amd64'
    tag-latest: 'auto'
    tag-suffix: '-gpu-nvidia-cuda-12-rfdetr'
    runs-on: 'ubuntu-latest'
    base-image: "ubuntu:24.04"
    skip-drivers: 'false'
    backend: "rfdetr"
    dockerfile: "./backend/Dockerfile.python"
    context: "./"
    ubuntu-version: '2404'
  - build-type: 'cublas'
    cuda-major-version: "12"
    cuda-minor-version: "8"
    platforms: 'linux/amd64'
    tag-latest: 'auto'
    tag-suffix: '-gpu-nvidia-cuda-12-insightface'
    runs-on: 'ubuntu-latest'
    base-image: "ubuntu:24.04"
    skip-drivers: 'false'
    backend: "insightface"
    dockerfile: "./backend/Dockerfile.python"
    context: "./"
    ubuntu-version: '2404'
  - build-type: 'cublas'
    cuda-major-version: "12"
    cuda-minor-version: "8"
    platforms: 'linux/amd64'
    tag-latest: 'auto'
    tag-suffix: '-gpu-nvidia-cuda-12-speaker-recognition'
    runs-on: 'ubuntu-latest'
    base-image: "ubuntu:24.04"
    skip-drivers: 'false'
    backend: "speaker-recognition"
    dockerfile: "./backend/Dockerfile.python"
    context: "./"
    ubuntu-version: '2404'
  - build-type: 'cublas'
    cuda-major-version: "12"
    cuda-minor-version: "8"
    platforms: 'linux/amd64'
    tag-latest: 'auto'
    tag-suffix: '-gpu-nvidia-cuda-12-neutts'
    runs-on: 'ubuntu-latest'
    base-image: "ubuntu:24.04"
    skip-drivers: 'false'
    backend: "neutts"
    dockerfile: "./backend/Dockerfile.python"
    context: "./"
    ubuntu-version: '2404'
  # cuda 13
  - build-type: 'cublas'
    cuda-major-version: "13"
    cuda-minor-version: "0"
    platforms: 'linux/amd64'
    tag-latest: 'auto'
    tag-suffix: '-gpu-nvidia-cuda-13-rerankers'
    runs-on: 'ubuntu-latest'
    base-image: "ubuntu:24.04"
    skip-drivers: 'false'
    backend: "rerankers"
    dockerfile: "./backend/Dockerfile.python"
    context: "./"
    ubuntu-version: '2404'
  - build-type: 'cublas'
    cuda-major-version: "13"
    cuda-minor-version: "0"
    platforms: 'linux/amd64'
    tag-latest: 'auto'
    tag-suffix: '-gpu-nvidia-cuda-13-vibevoice'
    runs-on: 'ubuntu-latest'
    base-image: "ubuntu:24.04"
    skip-drivers: 'false'
    backend: "vibevoice"
    dockerfile: "./backend/Dockerfile.python"
    context: "./"
    ubuntu-version: '2404'
  - build-type: 'cublas'
    cuda-major-version: "13"
    cuda-minor-version: "0"
    platforms: 'linux/amd64'
    tag-latest: 'auto'
    tag-suffix: '-gpu-nvidia-cuda-13-qwen-asr'
    runs-on: 'ubuntu-latest'
    base-image: "ubuntu:24.04"
    skip-drivers: 'false'
    backend: "qwen-asr"
    dockerfile: "./backend/Dockerfile.python"
    context: "./"
    ubuntu-version: '2404'
  - build-type: 'cublas'
    cuda-major-version: "13"
    cuda-minor-version: "0"
    platforms: 'linux/amd64'
    tag-latest: 'auto'
    tag-suffix: '-gpu-nvidia-cuda-13-nemo'
    runs-on: 'ubuntu-latest'
    base-image: "ubuntu:24.04"
    skip-drivers: 'false'
    backend: "nemo"
    dockerfile: "./backend/Dockerfile.python"
    context: "./"
    ubuntu-version: '2404'
  - build-type: 'cublas'
    cuda-major-version: "13"
    cuda-minor-version: "0"
    platforms: 'linux/amd64'
    tag-latest: 'auto'
    tag-suffix: '-gpu-nvidia-cuda-13-qwen-tts'
    runs-on: 'ubuntu-latest'
    base-image: "ubuntu:24.04"
    skip-drivers: 'false'
    backend: "qwen-tts"
    dockerfile: "./backend/Dockerfile.python"
    context: "./"
    ubuntu-version: '2404'
  - build-type: 'cublas'
    cuda-major-version: "13"
    cuda-minor-version: "0"
    platforms: 'linux/amd64'
    tag-latest: 'auto'
    tag-suffix: '-gpu-nvidia-cuda-13-fish-speech'
    runs-on: 'ubuntu-latest'
    base-image: "ubuntu:24.04"
    skip-drivers: 'false'
    backend: "fish-speech"
    dockerfile: "./backend/Dockerfile.python"
    context: "./"
    ubuntu-version: '2404'
  - build-type: 'cublas'
    cuda-major-version: "13"
    cuda-minor-version: "0"
    platforms: 'linux/amd64'
    tag-latest: 'auto'
    tag-suffix: '-gpu-nvidia-cuda-13-faster-qwen3-tts'
    runs-on: 'ubuntu-latest'
    base-image: "ubuntu:24.04"
    skip-drivers: 'false'
    backend: "faster-qwen3-tts"
    dockerfile: "./backend/Dockerfile.python"
    context: "./"
    ubuntu-version: '2404'
  - build-type: 'cublas'
    cuda-major-version: "13"
    cuda-minor-version: "0"
    platforms: 'linux/amd64'
    tag-latest: 'auto'
    tag-suffix: '-gpu-nvidia-cuda-13-voxcpm'
    runs-on: 'ubuntu-latest'
    base-image: "ubuntu:24.04"
    skip-drivers: 'false'
    backend: "voxcpm"
    dockerfile: "./backend/Dockerfile.python"
    context: "./"
    ubuntu-version: '2404'
  - build-type: 'cublas'
    cuda-major-version: "13"
    cuda-minor-version: "0"
    platforms: 'linux/amd64'
    tag-latest: 'auto'
    tag-suffix: '-gpu-nvidia-cuda-13-pocket-tts'
    runs-on: 'ubuntu-latest'
    base-image: "ubuntu:24.04"
    skip-drivers: 'false'
    backend: "pocket-tts"
    dockerfile: "./backend/Dockerfile.python"
    context: "./"
    ubuntu-version: '2404'
  - build-type: 'cublas'
    cuda-major-version: "13"
    cuda-minor-version: "0"
    platforms: 'linux/amd64'
    tag-latest: 'auto'
    tag-suffix: '-gpu-nvidia-cuda-13-llama-cpp'
    runs-on: 'ubuntu-latest'
    base-image: "ubuntu:24.04"
    skip-drivers: 'false'
    backend: "llama-cpp"
    dockerfile: "./backend/Dockerfile.llama-cpp"
    context: "./"
    ubuntu-version: '2404'
  - build-type: 'cublas'
    cuda-major-version: "13"
    cuda-minor-version: "0"
    platforms: 'linux/amd64'
    tag-latest: 'auto'
    tag-suffix: '-gpu-nvidia-cuda-13-turboquant'
    runs-on: 'ubuntu-latest'
    base-image: "ubuntu:24.04"
    skip-drivers: 'false'
    backend: "turboquant"
    dockerfile: "./backend/Dockerfile.turboquant"
    context: "./"
    ubuntu-version: '2404'
  - build-type: 'cublas'
    cuda-major-version: "13"
    cuda-minor-version: "0"
    platforms: 'linux/arm64'
    skip-drivers: 'false'
    tag-latest: 'auto'
    tag-suffix: '-nvidia-l4t-cuda-13-arm64-llama-cpp'
    base-image: "ubuntu:24.04"
    runs-on: 'ubuntu-24.04-arm'
    ubuntu-version: '2404'
    backend: "llama-cpp"
    dockerfile: "./backend/Dockerfile.llama-cpp"
    context: "./"
  - build-type: 'cublas'
    cuda-major-version: "13"
    cuda-minor-version: "0"
    platforms: 'linux/arm64'
    skip-drivers: 'false'
    tag-latest: 'auto'
    tag-suffix: '-nvidia-l4t-cuda-13-arm64-turboquant'
    base-image: "ubuntu:24.04"
    runs-on: 'ubuntu-24.04-arm'
    ubuntu-version: '2404'
    backend: "turboquant"
    dockerfile: "./backend/Dockerfile.turboquant"
    context: "./"
  - build-type: 'cublas'
    cuda-major-version: "13"
    cuda-minor-version: "0"
    platforms: 'linux/amd64'
    tag-latest: 'auto'
    tag-suffix: '-gpu-nvidia-cuda-13-vllm'
    runs-on: 'arc-runner-set'
    base-image: "ubuntu:24.04"
|
|
skip-drivers: 'false'
|
|
backend: "vllm"
|
|
dockerfile: "./backend/Dockerfile.python"
|
|
context: "./"
|
|
ubuntu-version: '2404'
|
|
- build-type: 'cublas'
|
|
cuda-major-version: "13"
|
|
cuda-minor-version: "0"
|
|
platforms: 'linux/amd64'
|
|
tag-latest: 'auto'
|
|
tag-suffix: '-gpu-nvidia-cuda-13-vllm-omni'
|
|
runs-on: 'arc-runner-set'
|
|
base-image: "ubuntu:24.04"
|
|
skip-drivers: 'false'
|
|
backend: "vllm-omni"
|
|
dockerfile: "./backend/Dockerfile.python"
|
|
context: "./"
|
|
ubuntu-version: '2404'
|
|
- build-type: 'cublas'
|
|
cuda-major-version: "13"
|
|
cuda-minor-version: "0"
|
|
platforms: 'linux/amd64'
|
|
tag-latest: 'auto'
|
|
tag-suffix: '-gpu-nvidia-cuda-13-sglang'
|
|
runs-on: 'arc-runner-set'
|
|
base-image: "ubuntu:24.04"
|
|
skip-drivers: 'false'
|
|
backend: "sglang"
|
|
dockerfile: "./backend/Dockerfile.python"
|
|
context: "./"
|
|
ubuntu-version: '2404'
|
|
- build-type: 'cublas'
|
|
cuda-major-version: "13"
|
|
cuda-minor-version: "0"
|
|
platforms: 'linux/amd64'
|
|
tag-latest: 'auto'
|
|
tag-suffix: '-gpu-nvidia-cuda-13-transformers'
|
|
runs-on: 'ubuntu-latest'
|
|
base-image: "ubuntu:24.04"
|
|
skip-drivers: 'false'
|
|
backend: "transformers"
|
|
dockerfile: "./backend/Dockerfile.python"
|
|
context: "./"
|
|
ubuntu-version: '2404'
|
|
- build-type: 'cublas'
|
|
cuda-major-version: "13"
|
|
cuda-minor-version: "0"
|
|
platforms: 'linux/amd64'
|
|
tag-latest: 'auto'
|
|
tag-suffix: '-gpu-nvidia-cuda-13-diffusers'
|
|
runs-on: 'ubuntu-latest'
|
|
base-image: "ubuntu:24.04"
|
|
skip-drivers: 'false'
|
|
backend: "diffusers"
|
|
dockerfile: "./backend/Dockerfile.python"
|
|
context: "./"
|
|
ubuntu-version: '2404'
|
|
- build-type: 'cublas'
|
|
cuda-major-version: "13"
|
|
cuda-minor-version: "0"
|
|
platforms: 'linux/amd64'
|
|
tag-latest: 'auto'
|
|
tag-suffix: '-gpu-nvidia-cuda-13-ace-step'
|
|
runs-on: 'ubuntu-latest'
|
|
base-image: "ubuntu:24.04"
|
|
skip-drivers: 'false'
|
|
backend: "ace-step"
|
|
dockerfile: "./backend/Dockerfile.python"
|
|
context: "./"
|
|
ubuntu-version: '2404'
|
|
- build-type: 'cublas'
|
|
cuda-major-version: "13"
|
|
cuda-minor-version: "0"
|
|
platforms: 'linux/amd64'
|
|
tag-latest: 'auto'
|
|
tag-suffix: '-gpu-nvidia-cuda-13-trl'
|
|
runs-on: 'ubuntu-latest'
|
|
base-image: "ubuntu:24.04"
|
|
skip-drivers: 'false'
|
|
backend: "trl"
|
|
dockerfile: "./backend/Dockerfile.python"
|
|
context: "./"
|
|
ubuntu-version: '2404'
|
|
- build-type: 'l4t'
|
|
cuda-major-version: "13"
|
|
cuda-minor-version: "0"
|
|
platforms: 'linux/arm64'
|
|
tag-latest: 'auto'
|
|
tag-suffix: '-nvidia-l4t-cuda-13-arm64-vibevoice'
|
|
runs-on: 'ubuntu-24.04-arm'
|
|
base-image: "ubuntu:24.04"
|
|
skip-drivers: 'false'
|
|
ubuntu-version: '2404'
|
|
backend: "vibevoice"
|
|
dockerfile: "./backend/Dockerfile.python"
|
|
context: "./"
|
|
- build-type: 'l4t'
|
|
cuda-major-version: "13"
|
|
cuda-minor-version: "0"
|
|
platforms: 'linux/arm64'
|
|
tag-latest: 'auto'
|
|
tag-suffix: '-nvidia-l4t-cuda-13-arm64-qwen-asr'
|
|
runs-on: 'ubuntu-24.04-arm'
|
|
base-image: "ubuntu:24.04"
|
|
skip-drivers: 'false'
|
|
ubuntu-version: '2404'
|
|
backend: "qwen-asr"
|
|
dockerfile: "./backend/Dockerfile.python"
|
|
context: "./"
|
|
- build-type: 'l4t'
|
|
cuda-major-version: "13"
|
|
cuda-minor-version: "0"
|
|
platforms: 'linux/arm64'
|
|
tag-latest: 'auto'
|
|
tag-suffix: '-nvidia-l4t-cuda-13-arm64-qwen-tts'
|
|
runs-on: 'ubuntu-24.04-arm'
|
|
base-image: "ubuntu:24.04"
|
|
skip-drivers: 'false'
|
|
ubuntu-version: '2404'
|
|
backend: "qwen-tts"
|
|
dockerfile: "./backend/Dockerfile.python"
|
|
context: "./"
|
|
- build-type: 'l4t'
|
|
cuda-major-version: "13"
|
|
cuda-minor-version: "0"
|
|
platforms: 'linux/arm64'
|
|
tag-latest: 'auto'
|
|
tag-suffix: '-nvidia-l4t-cuda-13-arm64-fish-speech'
|
|
runs-on: 'ubuntu-24.04-arm'
|
|
base-image: "ubuntu:24.04"
|
|
skip-drivers: 'false'
|
|
ubuntu-version: '2404'
|
|
backend: "fish-speech"
|
|
dockerfile: "./backend/Dockerfile.python"
|
|
context: "./"
|
|
- build-type: 'l4t'
|
|
cuda-major-version: "13"
|
|
cuda-minor-version: "0"
|
|
platforms: 'linux/arm64'
|
|
tag-latest: 'auto'
|
|
tag-suffix: '-nvidia-l4t-cuda-13-arm64-faster-qwen3-tts'
|
|
runs-on: 'ubuntu-24.04-arm'
|
|
base-image: "ubuntu:24.04"
|
|
skip-drivers: 'false'
|
|
ubuntu-version: '2404'
|
|
backend: "faster-qwen3-tts"
|
|
dockerfile: "./backend/Dockerfile.python"
|
|
context: "./"
|
|
- build-type: 'l4t'
|
|
cuda-major-version: "13"
|
|
cuda-minor-version: "0"
|
|
platforms: 'linux/arm64'
|
|
tag-latest: 'auto'
|
|
tag-suffix: '-nvidia-l4t-cuda-13-arm64-pocket-tts'
|
|
runs-on: 'ubuntu-24.04-arm'
|
|
base-image: "ubuntu:24.04"
|
|
skip-drivers: 'false'
|
|
ubuntu-version: '2404'
|
|
backend: "pocket-tts"
|
|
dockerfile: "./backend/Dockerfile.python"
|
|
context: "./"
|
|
- build-type: 'l4t'
|
|
cuda-major-version: "13"
|
|
cuda-minor-version: "0"
|
|
platforms: 'linux/arm64'
|
|
tag-latest: 'auto'
|
|
tag-suffix: '-nvidia-l4t-cuda-13-arm64-chatterbox'
|
|
runs-on: 'ubuntu-24.04-arm'
|
|
base-image: "ubuntu:24.04"
|
|
skip-drivers: 'false'
|
|
ubuntu-version: '2404'
|
|
backend: "chatterbox"
|
|
dockerfile: "./backend/Dockerfile.python"
|
|
context: "./"
|
|
- build-type: 'l4t'
|
|
cuda-major-version: "13"
|
|
cuda-minor-version: "0"
|
|
platforms: 'linux/arm64'
|
|
tag-latest: 'auto'
|
|
tag-suffix: '-nvidia-l4t-cuda-13-arm64-diffusers'
|
|
runs-on: 'ubuntu-24.04-arm'
|
|
base-image: "ubuntu:24.04"
|
|
skip-drivers: 'false'
|
|
ubuntu-version: '2404'
|
|
backend: "diffusers"
|
|
dockerfile: "./backend/Dockerfile.python"
|
|
context: "./"
|
|
- build-type: 'l4t'
|
|
cuda-major-version: "13"
|
|
cuda-minor-version: "0"
|
|
platforms: 'linux/arm64'
|
|
tag-latest: 'auto'
|
|
tag-suffix: '-nvidia-l4t-cuda-13-arm64-vllm'
|
|
runs-on: 'ubuntu-24.04-arm'
|
|
base-image: "ubuntu:24.04"
|
|
skip-drivers: 'false'
|
|
ubuntu-version: '2404'
|
|
backend: "vllm"
|
|
dockerfile: "./backend/Dockerfile.python"
|
|
context: "./"
|
|
- build-type: 'l4t'
|
|
cuda-major-version: "13"
|
|
cuda-minor-version: "0"
|
|
platforms: 'linux/arm64'
|
|
tag-latest: 'auto'
|
|
tag-suffix: '-nvidia-l4t-cuda-13-arm64-vllm-omni'
|
|
runs-on: 'ubuntu-24.04-arm'
|
|
base-image: "ubuntu:24.04"
|
|
skip-drivers: 'false'
|
|
ubuntu-version: '2404'
|
|
backend: "vllm-omni"
|
|
dockerfile: "./backend/Dockerfile.python"
|
|
context: "./"
|
|
- build-type: 'l4t'
|
|
cuda-major-version: "13"
|
|
cuda-minor-version: "0"
|
|
platforms: 'linux/arm64'
|
|
tag-latest: 'auto'
|
|
tag-suffix: '-nvidia-l4t-cuda-13-arm64-sglang'
|
|
runs-on: 'ubuntu-24.04-arm'
|
|
base-image: "ubuntu:24.04"
|
|
skip-drivers: 'false'
|
|
ubuntu-version: '2404'
|
|
backend: "sglang"
|
|
dockerfile: "./backend/Dockerfile.python"
|
|
context: "./"
|
|
- build-type: 'l4t'
|
|
cuda-major-version: "13"
|
|
cuda-minor-version: "0"
|
|
platforms: 'linux/arm64'
|
|
tag-latest: 'auto'
|
|
tag-suffix: '-nvidia-l4t-cuda-13-arm64-mlx'
|
|
runs-on: 'ubuntu-24.04-arm'
|
|
base-image: "ubuntu:24.04"
|
|
skip-drivers: 'false'
|
|
ubuntu-version: '2404'
|
|
backend: "mlx"
|
|
dockerfile: "./backend/Dockerfile.python"
|
|
context: "./"
|
|
- build-type: 'l4t'
|
|
cuda-major-version: "13"
|
|
cuda-minor-version: "0"
|
|
platforms: 'linux/arm64'
|
|
tag-latest: 'auto'
|
|
tag-suffix: '-nvidia-l4t-cuda-13-arm64-mlx-vlm'
|
|
runs-on: 'ubuntu-24.04-arm'
|
|
base-image: "ubuntu:24.04"
|
|
skip-drivers: 'false'
|
|
ubuntu-version: '2404'
|
|
backend: "mlx-vlm"
|
|
dockerfile: "./backend/Dockerfile.python"
|
|
context: "./"
|
|
- build-type: 'l4t'
|
|
cuda-major-version: "13"
|
|
cuda-minor-version: "0"
|
|
platforms: 'linux/arm64'
|
|
tag-latest: 'auto'
|
|
tag-suffix: '-nvidia-l4t-cuda-13-arm64-mlx-audio'
|
|
runs-on: 'ubuntu-24.04-arm'
|
|
base-image: "ubuntu:24.04"
|
|
skip-drivers: 'false'
|
|
ubuntu-version: '2404'
|
|
backend: "mlx-audio"
|
|
dockerfile: "./backend/Dockerfile.python"
|
|
context: "./"
|
|
- build-type: 'l4t'
|
|
cuda-major-version: "13"
|
|
cuda-minor-version: "0"
|
|
platforms: 'linux/arm64'
|
|
tag-latest: 'auto'
|
|
tag-suffix: '-nvidia-l4t-cuda-13-arm64-mlx-distributed'
|
|
runs-on: 'ubuntu-24.04-arm'
|
|
base-image: "ubuntu:24.04"
|
|
skip-drivers: 'false'
|
|
ubuntu-version: '2404'
|
|
backend: "mlx-distributed"
|
|
dockerfile: "./backend/Dockerfile.python"
|
|
context: "./"
|
|
- build-type: 'l4t'
|
|
cuda-major-version: "13"
|
|
cuda-minor-version: "0"
|
|
platforms: 'linux/arm64'
|
|
tag-latest: 'auto'
|
|
tag-suffix: '-nvidia-l4t-cuda-13-arm64-whisperx'
|
|
runs-on: 'ubuntu-24.04-arm'
|
|
base-image: "ubuntu:24.04"
|
|
skip-drivers: 'false'
|
|
ubuntu-version: '2404'
|
|
backend: "whisperx"
|
|
dockerfile: "./backend/Dockerfile.python"
|
|
context: "./"
|
|
- build-type: 'l4t'
|
|
cuda-major-version: "13"
|
|
cuda-minor-version: "0"
|
|
platforms: 'linux/arm64'
|
|
tag-latest: 'auto'
|
|
tag-suffix: '-nvidia-l4t-cuda-13-arm64-faster-whisper'
|
|
runs-on: 'ubuntu-24.04-arm'
|
|
base-image: "ubuntu:24.04"
|
|
skip-drivers: 'false'
|
|
ubuntu-version: '2404'
|
|
backend: "faster-whisper"
|
|
dockerfile: "./backend/Dockerfile.python"
|
|
context: "./"
|
|
- build-type: 'cublas'
|
|
cuda-major-version: "13"
|
|
cuda-minor-version: "0"
|
|
platforms: 'linux/amd64'
|
|
tag-latest: 'auto'
|
|
tag-suffix: '-gpu-nvidia-cuda-13-kokoro'
|
|
runs-on: 'ubuntu-latest'
|
|
base-image: "ubuntu:24.04"
|
|
skip-drivers: 'false'
|
|
backend: "kokoro"
|
|
dockerfile: "./backend/Dockerfile.python"
|
|
context: "./"
|
|
ubuntu-version: '2404'
|
|
- build-type: 'cublas'
|
|
cuda-major-version: "13"
|
|
cuda-minor-version: "0"
|
|
platforms: 'linux/amd64'
|
|
tag-latest: 'auto'
|
|
tag-suffix: '-gpu-nvidia-cuda-13-faster-whisper'
|
|
runs-on: 'ubuntu-latest'
|
|
base-image: "ubuntu:24.04"
|
|
skip-drivers: 'false'
|
|
backend: "faster-whisper"
|
|
dockerfile: "./backend/Dockerfile.python"
|
|
context: "./"
|
|
ubuntu-version: '2404'
|
|
- build-type: 'cublas'
|
|
cuda-major-version: "13"
|
|
cuda-minor-version: "0"
|
|
platforms: 'linux/amd64'
|
|
tag-latest: 'auto'
|
|
tag-suffix: '-gpu-nvidia-cuda-13-whisperx'
|
|
runs-on: 'ubuntu-latest'
|
|
base-image: "ubuntu:24.04"
|
|
skip-drivers: 'false'
|
|
backend: "whisperx"
|
|
dockerfile: "./backend/Dockerfile.python"
|
|
context: "./"
|
|
ubuntu-version: '2404'
|
|
- build-type: 'cublas'
|
|
cuda-major-version: "13"
|
|
cuda-minor-version: "0"
|
|
platforms: 'linux/amd64'
|
|
tag-latest: 'auto'
|
|
tag-suffix: '-gpu-nvidia-cuda-13-chatterbox'
|
|
runs-on: 'ubuntu-latest'
|
|
base-image: "ubuntu:24.04"
|
|
skip-drivers: 'false'
|
|
backend: "chatterbox"
|
|
dockerfile: "./backend/Dockerfile.python"
|
|
context: "./"
|
|
ubuntu-version: '2404'
|
|
- build-type: 'cublas'
|
|
cuda-major-version: "13"
|
|
cuda-minor-version: "0"
|
|
platforms: 'linux/amd64'
|
|
tag-latest: 'auto'
|
|
tag-suffix: '-gpu-nvidia-cuda-13-moonshine'
|
|
runs-on: 'ubuntu-latest'
|
|
base-image: "ubuntu:24.04"
|
|
skip-drivers: 'false'
|
|
backend: "moonshine"
|
|
dockerfile: "./backend/Dockerfile.python"
|
|
context: "./"
|
|
ubuntu-version: '2404'
|
|
- build-type: 'cublas'
|
|
cuda-major-version: "13"
|
|
cuda-minor-version: "0"
|
|
platforms: 'linux/amd64'
|
|
tag-latest: 'auto'
|
|
tag-suffix: '-gpu-nvidia-cuda-13-mlx'
|
|
runs-on: 'ubuntu-latest'
|
|
base-image: "ubuntu:24.04"
|
|
skip-drivers: 'false'
|
|
backend: "mlx"
|
|
dockerfile: "./backend/Dockerfile.python"
|
|
context: "./"
|
|
ubuntu-version: '2404'
|
|
- build-type: 'cublas'
|
|
cuda-major-version: "13"
|
|
cuda-minor-version: "0"
|
|
platforms: 'linux/amd64'
|
|
tag-latest: 'auto'
|
|
tag-suffix: '-gpu-nvidia-cuda-13-mlx-vlm'
|
|
runs-on: 'ubuntu-latest'
|
|
base-image: "ubuntu:24.04"
|
|
skip-drivers: 'false'
|
|
backend: "mlx-vlm"
|
|
dockerfile: "./backend/Dockerfile.python"
|
|
context: "./"
|
|
ubuntu-version: '2404'
|
|
- build-type: 'cublas'
|
|
cuda-major-version: "13"
|
|
cuda-minor-version: "0"
|
|
platforms: 'linux/amd64'
|
|
tag-latest: 'auto'
|
|
tag-suffix: '-gpu-nvidia-cuda-13-mlx-audio'
|
|
runs-on: 'ubuntu-latest'
|
|
base-image: "ubuntu:24.04"
|
|
skip-drivers: 'false'
|
|
backend: "mlx-audio"
|
|
dockerfile: "./backend/Dockerfile.python"
|
|
context: "./"
|
|
ubuntu-version: '2404'
|
|
- build-type: 'cublas'
|
|
cuda-major-version: "13"
|
|
cuda-minor-version: "0"
|
|
platforms: 'linux/amd64'
|
|
tag-latest: 'auto'
|
|
tag-suffix: '-gpu-nvidia-cuda-13-mlx-distributed'
|
|
runs-on: 'ubuntu-latest'
|
|
base-image: "ubuntu:24.04"
|
|
skip-drivers: 'false'
|
|
backend: "mlx-distributed"
|
|
dockerfile: "./backend/Dockerfile.python"
|
|
context: "./"
|
|
ubuntu-version: '2404'
|
|
- build-type: 'cublas'
|
|
cuda-major-version: "13"
|
|
cuda-minor-version: "0"
|
|
platforms: 'linux/amd64'
|
|
tag-latest: 'auto'
|
|
tag-suffix: '-gpu-nvidia-cuda-13-stablediffusion-ggml'
|
|
runs-on: 'ubuntu-latest'
|
|
base-image: "ubuntu:24.04"
|
|
skip-drivers: 'false'
|
|
backend: "stablediffusion-ggml"
|
|
dockerfile: "./backend/Dockerfile.golang"
|
|
context: "./"
|
|
ubuntu-version: '2404'
|
|
- build-type: 'cublas'
|
|
cuda-major-version: "13"
|
|
cuda-minor-version: "0"
|
|
platforms: 'linux/arm64'
|
|
skip-drivers: 'false'
|
|
tag-latest: 'auto'
|
|
tag-suffix: '-nvidia-l4t-cuda-13-arm64-stablediffusion-ggml'
|
|
base-image: "ubuntu:24.04"
|
|
ubuntu-version: '2404'
|
|
runs-on: 'ubuntu-24.04-arm'
|
|
backend: "stablediffusion-ggml"
|
|
dockerfile: "./backend/Dockerfile.golang"
|
|
context: "./"
|
|
- build-type: 'cublas'
|
|
cuda-major-version: "13"
|
|
cuda-minor-version: "0"
|
|
platforms: 'linux/amd64'
|
|
tag-latest: 'auto'
|
|
tag-suffix: '-gpu-nvidia-cuda-13-sam3-cpp'
|
|
runs-on: 'ubuntu-latest'
|
|
base-image: "ubuntu:24.04"
|
|
skip-drivers: 'false'
|
|
backend: "sam3-cpp"
|
|
dockerfile: "./backend/Dockerfile.golang"
|
|
context: "./"
|
|
ubuntu-version: '2404'
|
|
- build-type: 'cublas'
|
|
cuda-major-version: "13"
|
|
cuda-minor-version: "0"
|
|
platforms: 'linux/arm64'
|
|
skip-drivers: 'false'
|
|
tag-latest: 'auto'
|
|
tag-suffix: '-nvidia-l4t-cuda-13-arm64-sam3-cpp'
|
|
base-image: "ubuntu:24.04"
|
|
ubuntu-version: '2404'
|
|
runs-on: 'ubuntu-24.04-arm'
|
|
backend: "sam3-cpp"
|
|
dockerfile: "./backend/Dockerfile.golang"
|
|
context: "./"
|
|
- build-type: 'cublas'
|
|
cuda-major-version: "13"
|
|
cuda-minor-version: "0"
|
|
platforms: 'linux/amd64'
|
|
tag-latest: 'auto'
|
|
tag-suffix: '-gpu-nvidia-cuda-13-whisper'
|
|
runs-on: 'ubuntu-latest'
|
|
base-image: "ubuntu:24.04"
|
|
skip-drivers: 'false'
|
|
backend: "whisper"
|
|
dockerfile: "./backend/Dockerfile.golang"
|
|
context: "./"
|
|
ubuntu-version: '2404'
|
|
- build-type: 'cublas'
|
|
cuda-major-version: "13"
|
|
cuda-minor-version: "0"
|
|
platforms: 'linux/arm64'
|
|
skip-drivers: 'false'
|
|
tag-latest: 'auto'
|
|
tag-suffix: '-nvidia-l4t-cuda-13-arm64-whisper'
|
|
base-image: "ubuntu:24.04"
|
|
ubuntu-version: '2404'
|
|
runs-on: 'ubuntu-24.04-arm'
|
|
backend: "whisper"
|
|
dockerfile: "./backend/Dockerfile.golang"
|
|
context: "./"
|
|
- build-type: 'cublas'
|
|
cuda-major-version: "13"
|
|
cuda-minor-version: "0"
|
|
platforms: 'linux/amd64'
|
|
tag-latest: 'auto'
|
|
tag-suffix: '-gpu-nvidia-cuda-13-acestep-cpp'
|
|
runs-on: 'ubuntu-latest'
|
|
base-image: "ubuntu:24.04"
|
|
skip-drivers: 'false'
|
|
backend: "acestep-cpp"
|
|
dockerfile: "./backend/Dockerfile.golang"
|
|
context: "./"
|
|
ubuntu-version: '2404'
|
|
- build-type: 'cublas'
|
|
cuda-major-version: "13"
|
|
cuda-minor-version: "0"
|
|
platforms: 'linux/amd64'
|
|
tag-latest: 'auto'
|
|
tag-suffix: '-gpu-nvidia-cuda-13-qwen3-tts-cpp'
|
|
runs-on: 'ubuntu-latest'
|
|
base-image: "ubuntu:24.04"
|
|
skip-drivers: 'false'
|
|
backend: "qwen3-tts-cpp"
|
|
dockerfile: "./backend/Dockerfile.golang"
|
|
context: "./"
|
|
ubuntu-version: '2404'
|
|
- build-type: 'cublas'
|
|
cuda-major-version: "13"
|
|
cuda-minor-version: "0"
|
|
platforms: 'linux/amd64'
|
|
tag-latest: 'auto'
|
|
tag-suffix: '-gpu-nvidia-cuda-13-vibevoice-cpp'
|
|
runs-on: 'ubuntu-latest'
|
|
base-image: "ubuntu:24.04"
|
|
skip-drivers: 'false'
|
|
backend: "vibevoice-cpp"
|
|
dockerfile: "./backend/Dockerfile.golang"
|
|
context: "./"
|
|
ubuntu-version: '2404'
|
|
- build-type: 'cublas'
|
|
cuda-major-version: "13"
|
|
cuda-minor-version: "0"
|
|
platforms: 'linux/arm64'
|
|
skip-drivers: 'false'
|
|
tag-latest: 'auto'
|
|
tag-suffix: '-nvidia-l4t-cuda-13-arm64-acestep-cpp'
|
|
base-image: "ubuntu:24.04"
|
|
ubuntu-version: '2404'
|
|
runs-on: 'ubuntu-24.04-arm'
|
|
backend: "acestep-cpp"
|
|
dockerfile: "./backend/Dockerfile.golang"
|
|
context: "./"
|
|
- build-type: 'cublas'
|
|
cuda-major-version: "13"
|
|
cuda-minor-version: "0"
|
|
platforms: 'linux/arm64'
|
|
skip-drivers: 'false'
|
|
tag-latest: 'auto'
|
|
tag-suffix: '-nvidia-l4t-cuda-13-arm64-qwen3-tts-cpp'
|
|
base-image: "ubuntu:24.04"
|
|
ubuntu-version: '2404'
|
|
runs-on: 'ubuntu-24.04-arm'
|
|
backend: "qwen3-tts-cpp"
|
|
dockerfile: "./backend/Dockerfile.golang"
|
|
context: "./"
|
|
- build-type: 'cublas'
|
|
cuda-major-version: "13"
|
|
cuda-minor-version: "0"
|
|
platforms: 'linux/arm64'
|
|
skip-drivers: 'false'
|
|
tag-latest: 'auto'
|
|
tag-suffix: '-nvidia-l4t-cuda-13-arm64-vibevoice-cpp'
|
|
base-image: "ubuntu:24.04"
|
|
ubuntu-version: '2404'
|
|
runs-on: 'ubuntu-24.04-arm'
|
|
backend: "vibevoice-cpp"
|
|
dockerfile: "./backend/Dockerfile.golang"
|
|
context: "./"
|
|
- build-type: 'cublas'
|
|
cuda-major-version: "13"
|
|
cuda-minor-version: "0"
|
|
platforms: 'linux/amd64'
|
|
tag-latest: 'auto'
|
|
tag-suffix: '-gpu-nvidia-cuda-13-rfdetr'
|
|
runs-on: 'ubuntu-latest'
|
|
base-image: "ubuntu:24.04"
|
|
skip-drivers: 'false'
|
|
backend: "rfdetr"
|
|
dockerfile: "./backend/Dockerfile.python"
|
|
context: "./"
|
|
ubuntu-version: '2404'
|
|
# hipblas builds
|
|
- build-type: 'hipblas'
|
|
cuda-major-version: ""
|
|
cuda-minor-version: ""
|
|
platforms: 'linux/amd64'
|
|
tag-latest: 'auto'
|
|
tag-suffix: '-gpu-rocm-hipblas-rerankers'
|
|
runs-on: 'ubuntu-latest'
|
|
base-image: "rocm/dev-ubuntu-24.04:7.2.1"
|
|
skip-drivers: 'false'
|
|
backend: "rerankers"
|
|
dockerfile: "./backend/Dockerfile.python"
|
|
context: "./"
|
|
ubuntu-version: '2404'
|
|
- build-type: 'hipblas'
|
|
cuda-major-version: ""
|
|
cuda-minor-version: ""
|
|
platforms: 'linux/amd64'
|
|
tag-latest: 'auto'
|
|
tag-suffix: '-gpu-rocm-hipblas-llama-cpp'
|
|
runs-on: 'ubuntu-latest'
|
|
base-image: "rocm/dev-ubuntu-24.04:7.2.1"
|
|
skip-drivers: 'false'
|
|
backend: "llama-cpp"
|
|
dockerfile: "./backend/Dockerfile.llama-cpp"
|
|
context: "./"
|
|
ubuntu-version: '2404'
|
|
- build-type: 'hipblas'
|
|
cuda-major-version: ""
|
|
cuda-minor-version: ""
|
|
platforms: 'linux/amd64'
|
|
tag-latest: 'auto'
|
|
tag-suffix: '-gpu-rocm-hipblas-turboquant'
|
|
runs-on: 'ubuntu-latest'
|
|
base-image: "rocm/dev-ubuntu-24.04:7.2.1"
|
|
skip-drivers: 'false'
|
|
backend: "turboquant"
|
|
dockerfile: "./backend/Dockerfile.turboquant"
|
|
context: "./"
|
|
ubuntu-version: '2404'
|
|
- build-type: 'hipblas'
|
|
cuda-major-version: ""
|
|
cuda-minor-version: ""
|
|
platforms: 'linux/amd64'
|
|
tag-latest: 'auto'
|
|
tag-suffix: '-gpu-rocm-hipblas-vllm'
|
|
runs-on: 'arc-runner-set'
|
|
base-image: "rocm/dev-ubuntu-24.04:7.2.1"
|
|
skip-drivers: 'false'
|
|
backend: "vllm"
|
|
dockerfile: "./backend/Dockerfile.python"
|
|
context: "./"
|
|
ubuntu-version: '2404'
|
|
- build-type: 'hipblas'
|
|
cuda-major-version: ""
|
|
cuda-minor-version: ""
|
|
platforms: 'linux/amd64'
|
|
tag-latest: 'auto'
|
|
tag-suffix: '-gpu-rocm-hipblas-vllm-omni'
|
|
runs-on: 'arc-runner-set'
|
|
base-image: "rocm/dev-ubuntu-24.04:7.2.1"
|
|
skip-drivers: 'false'
|
|
backend: "vllm-omni"
|
|
dockerfile: "./backend/Dockerfile.python"
|
|
context: "./"
|
|
ubuntu-version: '2404'
|
|
- build-type: 'hipblas'
|
|
cuda-major-version: ""
|
|
cuda-minor-version: ""
|
|
platforms: 'linux/amd64'
|
|
tag-latest: 'auto'
|
|
tag-suffix: '-gpu-rocm-hipblas-sglang'
|
|
runs-on: 'arc-runner-set'
|
|
base-image: "rocm/dev-ubuntu-24.04:7.2.1"
|
|
skip-drivers: 'false'
|
|
backend: "sglang"
|
|
dockerfile: "./backend/Dockerfile.python"
|
|
context: "./"
|
|
ubuntu-version: '2404'
|
|
- build-type: 'hipblas'
|
|
cuda-major-version: ""
|
|
cuda-minor-version: ""
|
|
platforms: 'linux/amd64'
|
|
tag-latest: 'auto'
|
|
tag-suffix: '-gpu-rocm-hipblas-transformers'
|
|
runs-on: 'arc-runner-set'
|
|
base-image: "rocm/dev-ubuntu-24.04:7.2.1"
|
|
skip-drivers: 'false'
|
|
backend: "transformers"
|
|
dockerfile: "./backend/Dockerfile.python"
|
|
context: "./"
|
|
ubuntu-version: '2404'
|
|
- build-type: 'hipblas'
|
|
cuda-major-version: ""
|
|
cuda-minor-version: ""
|
|
platforms: 'linux/amd64'
|
|
tag-latest: 'auto'
|
|
tag-suffix: '-gpu-rocm-hipblas-diffusers'
|
|
runs-on: 'arc-runner-set'
|
|
base-image: "rocm/dev-ubuntu-24.04:7.2.1"
|
|
skip-drivers: 'false'
|
|
backend: "diffusers"
|
|
dockerfile: "./backend/Dockerfile.python"
|
|
context: "./"
|
|
ubuntu-version: '2404'
|
|
- build-type: 'hipblas'
|
|
cuda-major-version: ""
|
|
cuda-minor-version: ""
|
|
platforms: 'linux/amd64'
|
|
tag-latest: 'auto'
|
|
tag-suffix: '-gpu-rocm-hipblas-ace-step'
|
|
runs-on: 'arc-runner-set'
|
|
base-image: "rocm/dev-ubuntu-24.04:7.2.1"
|
|
skip-drivers: 'false'
|
|
backend: "ace-step"
|
|
dockerfile: "./backend/Dockerfile.python"
|
|
context: "./"
|
|
ubuntu-version: '2404'
|
|
# ROCm additional backends
|
|
- build-type: 'hipblas'
|
|
cuda-major-version: ""
|
|
cuda-minor-version: ""
|
|
platforms: 'linux/amd64'
|
|
tag-latest: 'auto'
|
|
tag-suffix: '-gpu-rocm-hipblas-kokoro'
|
|
runs-on: 'arc-runner-set'
|
|
base-image: "rocm/dev-ubuntu-24.04:7.2.1"
|
|
skip-drivers: 'false'
|
|
backend: "kokoro"
|
|
dockerfile: "./backend/Dockerfile.python"
|
|
context: "./"
|
|
ubuntu-version: '2404'
|
|
- build-type: 'hipblas'
|
|
cuda-major-version: ""
|
|
cuda-minor-version: ""
|
|
platforms: 'linux/amd64'
|
|
tag-latest: 'auto'
|
|
tag-suffix: '-gpu-rocm-hipblas-vibevoice'
|
|
runs-on: 'arc-runner-set'
|
|
base-image: "rocm/dev-ubuntu-24.04:7.2.1"
|
|
skip-drivers: 'false'
|
|
backend: "vibevoice"
|
|
dockerfile: "./backend/Dockerfile.python"
|
|
context: "./"
|
|
ubuntu-version: '2404'
|
|
- build-type: 'hipblas'
|
|
cuda-major-version: ""
|
|
cuda-minor-version: ""
|
|
platforms: 'linux/amd64'
|
|
tag-latest: 'auto'
|
|
tag-suffix: '-gpu-rocm-hipblas-qwen-asr'
|
|
runs-on: 'arc-runner-set'
|
|
base-image: "rocm/dev-ubuntu-24.04:7.2.1"
|
|
skip-drivers: 'false'
|
|
backend: "qwen-asr"
|
|
dockerfile: "./backend/Dockerfile.python"
|
|
context: "./"
|
|
ubuntu-version: '2404'
|
|
- build-type: 'hipblas'
|
|
cuda-major-version: ""
|
|
cuda-minor-version: ""
|
|
platforms: 'linux/amd64'
|
|
tag-latest: 'auto'
|
|
tag-suffix: '-gpu-rocm-hipblas-nemo'
|
|
runs-on: 'arc-runner-set'
|
|
base-image: "rocm/dev-ubuntu-24.04:7.2.1"
|
|
skip-drivers: 'false'
|
|
backend: "nemo"
|
|
dockerfile: "./backend/Dockerfile.python"
|
|
context: "./"
|
|
ubuntu-version: '2404'
|
|
- build-type: 'hipblas'
|
|
cuda-major-version: ""
|
|
cuda-minor-version: ""
|
|
platforms: 'linux/amd64'
|
|
tag-latest: 'auto'
|
|
tag-suffix: '-gpu-rocm-hipblas-qwen-tts'
|
|
runs-on: 'arc-runner-set'
|
|
base-image: "rocm/dev-ubuntu-24.04:7.2.1"
|
|
skip-drivers: 'false'
|
|
backend: "qwen-tts"
|
|
dockerfile: "./backend/Dockerfile.python"
|
|
context: "./"
|
|
ubuntu-version: '2404'
|
|
- build-type: 'hipblas'
|
|
cuda-major-version: ""
|
|
cuda-minor-version: ""
|
|
platforms: 'linux/amd64'
|
|
tag-latest: 'auto'
|
|
tag-suffix: '-gpu-rocm-hipblas-fish-speech'
|
|
runs-on: 'arc-runner-set'
|
|
base-image: "rocm/dev-ubuntu-24.04:7.2.1"
|
|
skip-drivers: 'false'
|
|
backend: "fish-speech"
|
|
dockerfile: "./backend/Dockerfile.python"
|
|
context: "./"
|
|
ubuntu-version: '2404'
|
|
- build-type: 'hipblas'
|
|
cuda-major-version: ""
|
|
cuda-minor-version: ""
|
|
platforms: 'linux/amd64'
|
|
tag-latest: 'auto'
|
|
tag-suffix: '-gpu-rocm-hipblas-voxcpm'
|
|
runs-on: 'arc-runner-set'
|
|
base-image: "rocm/dev-ubuntu-24.04:7.2.1"
|
|
skip-drivers: 'false'
|
|
backend: "voxcpm"
|
|
dockerfile: "./backend/Dockerfile.python"
|
|
context: "./"
|
|
ubuntu-version: '2404'
|
|
- build-type: 'hipblas'
|
|
cuda-major-version: ""
|
|
cuda-minor-version: ""
|
|
platforms: 'linux/amd64'
|
|
tag-latest: 'auto'
|
|
tag-suffix: '-gpu-rocm-hipblas-pocket-tts'
|
|
runs-on: 'arc-runner-set'
|
|
base-image: "rocm/dev-ubuntu-24.04:7.2.1"
|
|
skip-drivers: 'false'
|
|
backend: "pocket-tts"
|
|
dockerfile: "./backend/Dockerfile.python"
|
|
context: "./"
|
|
ubuntu-version: '2404'
|
|
- build-type: 'hipblas'
|
|
cuda-major-version: ""
|
|
cuda-minor-version: ""
|
|
platforms: 'linux/amd64'
|
|
tag-latest: 'auto'
|
|
tag-suffix: '-gpu-rocm-hipblas-faster-whisper'
|
|
runs-on: 'bigger-runner'
|
|
base-image: "rocm/dev-ubuntu-24.04:7.2.1"
|
|
skip-drivers: 'false'
|
|
backend: "faster-whisper"
|
|
dockerfile: "./backend/Dockerfile.python"
|
|
context: "./"
|
|
ubuntu-version: '2404'
|
|
- build-type: 'hipblas'
|
|
cuda-major-version: ""
|
|
cuda-minor-version: ""
|
|
platforms: 'linux/amd64'
|
|
tag-latest: 'auto'
|
|
tag-suffix: '-gpu-rocm-hipblas-coqui'
|
|
runs-on: 'bigger-runner'
|
|
base-image: "rocm/dev-ubuntu-24.04:7.2.1"
|
|
skip-drivers: 'false'
|
|
backend: "coqui"
|
|
dockerfile: "./backend/Dockerfile.python"
|
|
context: "./"
|
|
ubuntu-version: '2404'
|
|
# sycl builds
|
|
- build-type: 'intel'
|
|
cuda-major-version: ""
|
|
cuda-minor-version: ""
|
|
platforms: 'linux/amd64'
|
|
tag-latest: 'auto'
|
|
tag-suffix: '-gpu-intel-rerankers'
|
|
runs-on: 'ubuntu-latest'
|
|
base-image: "intel/oneapi-basekit:2025.3.2-0-devel-ubuntu24.04"
|
|
skip-drivers: 'false'
|
|
backend: "rerankers"
|
|
dockerfile: "./backend/Dockerfile.python"
|
|
context: "./"
|
|
ubuntu-version: '2404'
|
|
- build-type: 'sycl_f32'
|
|
cuda-major-version: ""
|
|
cuda-minor-version: ""
|
|
platforms: 'linux/amd64'
|
|
tag-latest: 'auto'
|
|
tag-suffix: '-gpu-intel-sycl-f32-llama-cpp'
|
|
runs-on: 'ubuntu-latest'
|
|
base-image: "intel/oneapi-basekit:2025.3.2-0-devel-ubuntu24.04"
|
|
skip-drivers: 'false'
|
|
backend: "llama-cpp"
|
|
dockerfile: "./backend/Dockerfile.llama-cpp"
|
|
context: "./"
|
|
ubuntu-version: '2404'
|
|
- build-type: 'sycl_f32'
|
|
cuda-major-version: ""
|
|
cuda-minor-version: ""
|
|
platforms: 'linux/amd64'
|
|
tag-latest: 'auto'
|
|
tag-suffix: '-gpu-intel-sycl-f32-turboquant'
|
|
runs-on: 'ubuntu-latest'
|
|
base-image: "intel/oneapi-basekit:2025.3.0-0-devel-ubuntu24.04"
|
|
skip-drivers: 'false'
|
|
backend: "turboquant"
|
|
dockerfile: "./backend/Dockerfile.turboquant"
|
|
context: "./"
|
|
ubuntu-version: '2404'
|
|
- build-type: 'sycl_f16'
|
|
cuda-major-version: ""
|
|
cuda-minor-version: ""
|
|
platforms: 'linux/amd64'
|
|
tag-latest: 'auto'
|
|
tag-suffix: '-gpu-intel-sycl-f16-llama-cpp'
|
|
runs-on: 'ubuntu-latest'
|
|
base-image: "intel/oneapi-basekit:2025.3.0-0-devel-ubuntu24.04"
|
|
skip-drivers: 'false'
|
|
backend: "llama-cpp"
|
|
dockerfile: "./backend/Dockerfile.llama-cpp"
|
|
context: "./"
|
|
ubuntu-version: '2404'
|
|
- build-type: 'sycl_f16'
|
|
cuda-major-version: ""
|
|
cuda-minor-version: ""
|
|
platforms: 'linux/amd64'
|
|
tag-latest: 'auto'
|
|
tag-suffix: '-gpu-intel-sycl-f16-turboquant'
|
|
runs-on: 'ubuntu-latest'
|
|
base-image: "intel/oneapi-basekit:2025.3.0-0-devel-ubuntu24.04"
|
|
skip-drivers: 'false'
|
|
backend: "turboquant"
|
|
dockerfile: "./backend/Dockerfile.turboquant"
|
|
context: "./"
|
|
ubuntu-version: '2404'
|
|
- build-type: 'intel'
|
|
cuda-major-version: ""
|
|
cuda-minor-version: ""
|
|
platforms: 'linux/amd64'
|
|
tag-latest: 'auto'
|
|
tag-suffix: '-gpu-intel-vllm'
|
|
runs-on: 'arc-runner-set'
|
|
base-image: "intel/oneapi-basekit:2025.3.0-0-devel-ubuntu24.04"
|
|
skip-drivers: 'false'
|
|
backend: "vllm"
|
|
dockerfile: "./backend/Dockerfile.python"
|
|
context: "./"
|
|
ubuntu-version: '2404'
|
|
- build-type: 'intel'
|
|
cuda-major-version: ""
|
|
cuda-minor-version: ""
|
|
platforms: 'linux/amd64'
|
|
tag-latest: 'auto'
|
|
tag-suffix: '-gpu-intel-sglang'
|
|
runs-on: 'arc-runner-set'
|
|
base-image: "intel/oneapi-basekit:2025.3.0-0-devel-ubuntu24.04"
|
|
skip-drivers: 'false'
|
|
backend: "sglang"
|
|
dockerfile: "./backend/Dockerfile.python"
|
|
context: "./"
|
|
ubuntu-version: '2404'
|
|
- build-type: 'intel'
|
|
cuda-major-version: ""
|
|
cuda-minor-version: ""
|
|
platforms: 'linux/amd64'
|
|
tag-latest: 'auto'
|
|
tag-suffix: '-gpu-intel-transformers'
|
|
runs-on: 'ubuntu-latest'
|
|
base-image: "intel/oneapi-basekit:2025.3.0-0-devel-ubuntu24.04"
|
|
skip-drivers: 'false'
|
|
backend: "transformers"
|
|
dockerfile: "./backend/Dockerfile.python"
|
|
context: "./"
|
|
ubuntu-version: '2404'
|
|
- build-type: 'intel'
|
|
cuda-major-version: ""
|
|
cuda-minor-version: ""
|
|
platforms: 'linux/amd64'
|
|
tag-latest: 'auto'
|
|
tag-suffix: '-gpu-intel-diffusers'
|
|
runs-on: 'ubuntu-latest'
|
|
base-image: "intel/oneapi-basekit:2025.3.0-0-devel-ubuntu24.04"
|
|
skip-drivers: 'false'
|
|
backend: "diffusers"
|
|
dockerfile: "./backend/Dockerfile.python"
|
|
context: "./"
|
|
ubuntu-version: '2404'
|
|
- build-type: 'intel'
|
|
cuda-major-version: ""
|
|
cuda-minor-version: ""
|
|
platforms: 'linux/amd64'
|
|
tag-latest: 'auto'
|
|
tag-suffix: '-gpu-intel-ace-step'
|
|
runs-on: 'ubuntu-latest'
|
|
base-image: "intel/oneapi-basekit:2025.3.0-0-devel-ubuntu24.04"
|
|
skip-drivers: 'false'
|
|
backend: "ace-step"
|
|
dockerfile: "./backend/Dockerfile.python"
|
|
context: "./"
|
|
ubuntu-version: '2404'
|
|
- build-type: 'l4t'
  cuda-major-version: "12"
  cuda-minor-version: "0"
  platforms: 'linux/arm64'
  tag-latest: 'auto'
  tag-suffix: '-nvidia-l4t-vibevoice'
  runs-on: 'ubuntu-24.04-arm'
  base-image: "nvcr.io/nvidia/l4t-jetpack:r36.4.0"
  skip-drivers: 'true'
  backend: "vibevoice"
  dockerfile: "./backend/Dockerfile.python"
  context: "./"
  ubuntu-version: '2204'
- build-type: 'l4t'
  cuda-major-version: "12"
  cuda-minor-version: "0"
  platforms: 'linux/arm64'
  tag-latest: 'auto'
  tag-suffix: '-nvidia-l4t-qwen-asr'
  runs-on: 'ubuntu-24.04-arm'
  base-image: "nvcr.io/nvidia/l4t-jetpack:r36.4.0"
  skip-drivers: 'true'
  backend: "qwen-asr"
  dockerfile: "./backend/Dockerfile.python"
  context: "./"
  ubuntu-version: '2204'
- build-type: 'l4t'
  cuda-major-version: "12"
  cuda-minor-version: "0"
  platforms: 'linux/arm64'
  tag-latest: 'auto'
  tag-suffix: '-nvidia-l4t-qwen-tts'
  runs-on: 'ubuntu-24.04-arm'
  base-image: "nvcr.io/nvidia/l4t-jetpack:r36.4.0"
  skip-drivers: 'true'
  backend: "qwen-tts"
  dockerfile: "./backend/Dockerfile.python"
  context: "./"
  ubuntu-version: '2204'
- build-type: 'l4t'
  cuda-major-version: "12"
  cuda-minor-version: "0"
  platforms: 'linux/arm64'
  tag-latest: 'auto'
  tag-suffix: '-nvidia-l4t-fish-speech'
  runs-on: 'ubuntu-24.04-arm'
  base-image: "nvcr.io/nvidia/l4t-jetpack:r36.4.0"
  skip-drivers: 'true'
  backend: "fish-speech"
  dockerfile: "./backend/Dockerfile.python"
  context: "./"
  ubuntu-version: '2204'
- build-type: 'l4t'
  cuda-major-version: "12"
  cuda-minor-version: "0"
  platforms: 'linux/arm64'
  tag-latest: 'auto'
  tag-suffix: '-nvidia-l4t-faster-qwen3-tts'
  runs-on: 'ubuntu-24.04-arm'
  base-image: "nvcr.io/nvidia/l4t-jetpack:r36.4.0"
  skip-drivers: 'true'
  backend: "faster-qwen3-tts"
  dockerfile: "./backend/Dockerfile.python"
  context: "./"
  ubuntu-version: '2204'
- build-type: 'l4t'
  cuda-major-version: "12"
  cuda-minor-version: "0"
  platforms: 'linux/arm64'
  tag-latest: 'auto'
  tag-suffix: '-nvidia-l4t-pocket-tts'
  runs-on: 'ubuntu-24.04-arm'
  base-image: "nvcr.io/nvidia/l4t-jetpack:r36.4.0"
  skip-drivers: 'true'
  backend: "pocket-tts"
  dockerfile: "./backend/Dockerfile.python"
  context: "./"
  ubuntu-version: '2204'
- build-type: 'l4t'
  cuda-major-version: "12"
  cuda-minor-version: "0"
  platforms: 'linux/arm64'
  tag-latest: 'auto'
  tag-suffix: '-nvidia-l4t-kokoro'
  runs-on: 'ubuntu-24.04-arm'
  base-image: "nvcr.io/nvidia/l4t-jetpack:r36.4.0"
  skip-drivers: 'true'
  backend: "kokoro"
  dockerfile: "./backend/Dockerfile.python"
  context: "./"
  ubuntu-version: '2204'
- build-type: 'l4t'
  cuda-major-version: "12"
  cuda-minor-version: "0"
  platforms: 'linux/arm64'
  tag-latest: 'auto'
  tag-suffix: '-nvidia-l4t-mlx'
  runs-on: 'ubuntu-24.04-arm'
  base-image: "nvcr.io/nvidia/l4t-jetpack:r36.4.0"
  skip-drivers: 'true'
  backend: "mlx"
  dockerfile: "./backend/Dockerfile.python"
  context: "./"
  ubuntu-version: '2204'
- build-type: 'l4t'
  cuda-major-version: "12"
  cuda-minor-version: "0"
  platforms: 'linux/arm64'
  tag-latest: 'auto'
  tag-suffix: '-nvidia-l4t-mlx-vlm'
  runs-on: 'ubuntu-24.04-arm'
  base-image: "nvcr.io/nvidia/l4t-jetpack:r36.4.0"
  skip-drivers: 'true'
  backend: "mlx-vlm"
  dockerfile: "./backend/Dockerfile.python"
  context: "./"
  ubuntu-version: '2204'
- build-type: 'l4t'
  cuda-major-version: "12"
  cuda-minor-version: "0"
  platforms: 'linux/arm64'
  tag-latest: 'auto'
  tag-suffix: '-nvidia-l4t-mlx-audio'
  runs-on: 'ubuntu-24.04-arm'
  base-image: "nvcr.io/nvidia/l4t-jetpack:r36.4.0"
  skip-drivers: 'true'
  backend: "mlx-audio"
  dockerfile: "./backend/Dockerfile.python"
  context: "./"
  ubuntu-version: '2204'
- build-type: 'l4t'
  cuda-major-version: "12"
  cuda-minor-version: "0"
  platforms: 'linux/arm64'
  tag-latest: 'auto'
  tag-suffix: '-nvidia-l4t-mlx-distributed'
  runs-on: 'ubuntu-24.04-arm'
  base-image: "nvcr.io/nvidia/l4t-jetpack:r36.4.0"
  skip-drivers: 'true'
  backend: "mlx-distributed"
  dockerfile: "./backend/Dockerfile.python"
  context: "./"
  ubuntu-version: '2204'
- build-type: 'l4t'
  cuda-major-version: "12"
  cuda-minor-version: "0"
  platforms: 'linux/arm64'
  tag-latest: 'auto'
  tag-suffix: '-nvidia-l4t-whisperx'
  runs-on: 'ubuntu-24.04-arm'
  base-image: "nvcr.io/nvidia/l4t-jetpack:r36.4.0"
  skip-drivers: 'true'
  backend: "whisperx"
  dockerfile: "./backend/Dockerfile.python"
  context: "./"
  ubuntu-version: '2204'
- build-type: 'l4t'
  cuda-major-version: "12"
  cuda-minor-version: "0"
  platforms: 'linux/arm64'
  tag-latest: 'auto'
  tag-suffix: '-nvidia-l4t-faster-whisper'
  runs-on: 'ubuntu-24.04-arm'
  base-image: "nvcr.io/nvidia/l4t-jetpack:r36.4.0"
  skip-drivers: 'true'
  backend: "faster-whisper"
  dockerfile: "./backend/Dockerfile.python"
  context: "./"
  ubuntu-version: '2204'
# SYCL additional backends
- build-type: 'intel'
  cuda-major-version: ""
  cuda-minor-version: ""
  platforms: 'linux/amd64'
  tag-latest: 'auto'
  tag-suffix: '-gpu-intel-kokoro'
  runs-on: 'ubuntu-latest'
  base-image: "intel/oneapi-basekit:2025.3.0-0-devel-ubuntu24.04"
  skip-drivers: 'false'
  backend: "kokoro"
  dockerfile: "./backend/Dockerfile.python"
  context: "./"
  ubuntu-version: '2404'
- build-type: 'intel'
  cuda-major-version: ""
  cuda-minor-version: ""
  platforms: 'linux/amd64'
  tag-latest: 'auto'
  tag-suffix: '-gpu-intel-faster-whisper'
  runs-on: 'ubuntu-latest'
  base-image: "intel/oneapi-basekit:2025.3.0-0-devel-ubuntu24.04"
  skip-drivers: 'false'
  backend: "faster-whisper"
  dockerfile: "./backend/Dockerfile.python"
  context: "./"
  ubuntu-version: '2404'
- build-type: 'intel'
  cuda-major-version: ""
  cuda-minor-version: ""
  platforms: 'linux/amd64'
  tag-latest: 'auto'
  tag-suffix: '-gpu-intel-vibevoice'
  runs-on: 'arc-runner-set'
  base-image: "intel/oneapi-basekit:2025.3.0-0-devel-ubuntu24.04"
  skip-drivers: 'false'
  backend: "vibevoice"
  dockerfile: "./backend/Dockerfile.python"
  context: "./"
  ubuntu-version: '2404'
- build-type: 'intel'
  cuda-major-version: ""
  cuda-minor-version: ""
  platforms: 'linux/amd64'
  tag-latest: 'auto'
  tag-suffix: '-gpu-intel-qwen-asr'
  runs-on: 'arc-runner-set'
  base-image: "intel/oneapi-basekit:2025.3.0-0-devel-ubuntu24.04"
  skip-drivers: 'false'
  backend: "qwen-asr"
  dockerfile: "./backend/Dockerfile.python"
  context: "./"
  ubuntu-version: '2404'
- build-type: 'intel'
  cuda-major-version: ""
  cuda-minor-version: ""
  platforms: 'linux/amd64'
  tag-latest: 'auto'
  tag-suffix: '-gpu-intel-nemo'
  runs-on: 'arc-runner-set'
  base-image: "intel/oneapi-basekit:2025.3.0-0-devel-ubuntu24.04"
  skip-drivers: 'false'
  backend: "nemo"
  dockerfile: "./backend/Dockerfile.python"
  context: "./"
  ubuntu-version: '2404'
- build-type: 'intel'
  cuda-major-version: ""
  cuda-minor-version: ""
  platforms: 'linux/amd64'
  tag-latest: 'auto'
  tag-suffix: '-gpu-intel-qwen-tts'
  runs-on: 'arc-runner-set'
  base-image: "intel/oneapi-basekit:2025.3.0-0-devel-ubuntu24.04"
  skip-drivers: 'false'
  backend: "qwen-tts"
  dockerfile: "./backend/Dockerfile.python"
  context: "./"
  ubuntu-version: '2404'
- build-type: 'intel'
  cuda-major-version: ""
  cuda-minor-version: ""
  platforms: 'linux/amd64'
  tag-latest: 'auto'
  tag-suffix: '-gpu-intel-fish-speech'
  runs-on: 'arc-runner-set'
  base-image: "intel/oneapi-basekit:2025.3.0-0-devel-ubuntu24.04"
  skip-drivers: 'false'
  backend: "fish-speech"
  dockerfile: "./backend/Dockerfile.python"
  context: "./"
  ubuntu-version: '2404'
- build-type: 'intel'
  cuda-major-version: ""
  cuda-minor-version: ""
  platforms: 'linux/amd64'
  tag-latest: 'auto'
  tag-suffix: '-gpu-intel-voxcpm'
  runs-on: 'arc-runner-set'
  base-image: "intel/oneapi-basekit:2025.3.0-0-devel-ubuntu24.04"
  skip-drivers: 'false'
  backend: "voxcpm"
  dockerfile: "./backend/Dockerfile.python"
  context: "./"
  ubuntu-version: '2404'
- build-type: 'intel'
  cuda-major-version: ""
  cuda-minor-version: ""
  platforms: 'linux/amd64'
  tag-latest: 'auto'
  tag-suffix: '-gpu-intel-pocket-tts'
  runs-on: 'arc-runner-set'
  base-image: "intel/oneapi-basekit:2025.3.0-0-devel-ubuntu24.04"
  skip-drivers: 'false'
  backend: "pocket-tts"
  dockerfile: "./backend/Dockerfile.python"
  context: "./"
  ubuntu-version: '2404'
- build-type: 'intel'
  cuda-major-version: ""
  cuda-minor-version: ""
  platforms: 'linux/amd64'
  tag-latest: 'auto'
  tag-suffix: '-gpu-intel-coqui'
  runs-on: 'ubuntu-latest'
  base-image: "intel/oneapi-basekit:2025.3.0-0-devel-ubuntu24.04"
  skip-drivers: 'false'
  backend: "coqui"
  dockerfile: "./backend/Dockerfile.python"
  context: "./"
  ubuntu-version: '2404'
# piper
- build-type: ''
  cuda-major-version: ""
  cuda-minor-version: ""
  platforms: 'linux/amd64,linux/arm64'
  tag-latest: 'auto'
  tag-suffix: '-piper'
  runs-on: 'ubuntu-latest'
  base-image: "ubuntu:24.04"
  skip-drivers: 'false'
  backend: "piper"
  dockerfile: "./backend/Dockerfile.golang"
  context: "./"
  ubuntu-version: '2404'
- build-type: ''
  cuda-major-version: ""
  cuda-minor-version: ""
  platforms: 'linux/amd64,linux/arm64'
  tag-latest: 'auto'
  tag-suffix: '-cpu-llama-cpp'
  runs-on: 'bigger-runner'
  base-image: "ubuntu:24.04"
  skip-drivers: 'false'
  backend: "llama-cpp"
  dockerfile: "./backend/Dockerfile.llama-cpp"
  context: "./"
  ubuntu-version: '2404'
- build-type: ''
  cuda-major-version: ""
  cuda-minor-version: ""
  platforms: 'linux/amd64,linux/arm64'
  tag-latest: 'auto'
  tag-suffix: '-cpu-turboquant'
  runs-on: 'bigger-runner'
  base-image: "ubuntu:24.04"
  skip-drivers: 'false'
  backend: "turboquant"
  dockerfile: "./backend/Dockerfile.turboquant"
  context: "./"
  ubuntu-version: '2404'
- build-type: ''
  cuda-major-version: ""
  cuda-minor-version: ""
  platforms: 'linux/amd64'
  tag-latest: 'auto'
  tag-suffix: '-cpu-ik-llama-cpp'
  runs-on: 'bigger-runner'
  base-image: "ubuntu:24.04"
  skip-drivers: 'false'
  backend: "ik-llama-cpp"
  dockerfile: "./backend/Dockerfile.ik-llama-cpp"
  context: "./"
  ubuntu-version: '2404'
- build-type: 'cublas'
  cuda-major-version: "12"
  cuda-minor-version: "0"
  platforms: 'linux/arm64'
  skip-drivers: 'false'
  tag-latest: 'auto'
  tag-suffix: '-nvidia-l4t-arm64-llama-cpp'
  base-image: "nvcr.io/nvidia/l4t-jetpack:r36.4.0"
  runs-on: 'ubuntu-24.04-arm'
  backend: "llama-cpp"
  dockerfile: "./backend/Dockerfile.llama-cpp"
  context: "./"
  ubuntu-version: '2204'
- build-type: 'cublas'
  cuda-major-version: "12"
  cuda-minor-version: "0"
  platforms: 'linux/arm64'
  skip-drivers: 'false'
  tag-latest: 'auto'
  tag-suffix: '-nvidia-l4t-arm64-turboquant'
  base-image: "nvcr.io/nvidia/l4t-jetpack:r36.4.0"
  runs-on: 'ubuntu-24.04-arm'
  backend: "turboquant"
  dockerfile: "./backend/Dockerfile.turboquant"
  context: "./"
  ubuntu-version: '2204'
- build-type: 'vulkan'
  cuda-major-version: ""
  cuda-minor-version: ""
  platforms: 'linux/amd64,linux/arm64'
  tag-latest: 'auto'
  tag-suffix: '-gpu-vulkan-llama-cpp'
  runs-on: 'bigger-runner'
  base-image: "ubuntu:24.04"
  skip-drivers: 'false'
  backend: "llama-cpp"
  dockerfile: "./backend/Dockerfile.llama-cpp"
  context: "./"
  ubuntu-version: '2404'
- build-type: 'vulkan'
  cuda-major-version: ""
  cuda-minor-version: ""
  platforms: 'linux/amd64,linux/arm64'
  tag-latest: 'auto'
  tag-suffix: '-gpu-vulkan-turboquant'
  runs-on: 'bigger-runner'
  base-image: "ubuntu:24.04"
  skip-drivers: 'false'
  backend: "turboquant"
  dockerfile: "./backend/Dockerfile.turboquant"
  context: "./"
  ubuntu-version: '2404'
# Stablediffusion-ggml
- build-type: ''
  cuda-major-version: ""
  cuda-minor-version: ""
  platforms: 'linux/amd64'
  tag-latest: 'auto'
  tag-suffix: '-cpu-stablediffusion-ggml'
  runs-on: 'ubuntu-latest'
  base-image: "ubuntu:24.04"
  skip-drivers: 'false'
  backend: "stablediffusion-ggml"
  dockerfile: "./backend/Dockerfile.golang"
  context: "./"
  ubuntu-version: '2404'
# sam3-cpp
- build-type: ''
  cuda-major-version: ""
  cuda-minor-version: ""
  platforms: 'linux/amd64'
  tag-latest: 'auto'
  tag-suffix: '-cpu-sam3-cpp'
  runs-on: 'ubuntu-latest'
  base-image: "ubuntu:24.04"
  skip-drivers: 'false'
  backend: "sam3-cpp"
  dockerfile: "./backend/Dockerfile.golang"
  context: "./"
  ubuntu-version: '2404'
- build-type: 'sycl_f32'
  cuda-major-version: ""
  cuda-minor-version: ""
  platforms: 'linux/amd64'
  tag-latest: 'auto'
  tag-suffix: '-gpu-intel-sycl-f32-sam3-cpp'
  runs-on: 'ubuntu-latest'
  base-image: "intel/oneapi-basekit:2025.3.0-0-devel-ubuntu24.04"
  skip-drivers: 'false'
  backend: "sam3-cpp"
  dockerfile: "./backend/Dockerfile.golang"
  context: "./"
  ubuntu-version: '2404'
- build-type: 'sycl_f16'
  cuda-major-version: ""
  cuda-minor-version: ""
  platforms: 'linux/amd64'
  tag-latest: 'auto'
  tag-suffix: '-gpu-intel-sycl-f16-sam3-cpp'
  runs-on: 'ubuntu-latest'
  base-image: "intel/oneapi-basekit:2025.3.0-0-devel-ubuntu24.04"
  skip-drivers: 'false'
  backend: "sam3-cpp"
  dockerfile: "./backend/Dockerfile.golang"
  context: "./"
  ubuntu-version: '2404'
- build-type: 'vulkan'
  cuda-major-version: ""
  cuda-minor-version: ""
  platforms: 'linux/amd64,linux/arm64'
  tag-latest: 'auto'
  tag-suffix: '-gpu-vulkan-sam3-cpp'
  runs-on: 'ubuntu-latest'
  base-image: "ubuntu:24.04"
  skip-drivers: 'false'
  backend: "sam3-cpp"
  dockerfile: "./backend/Dockerfile.golang"
  context: "./"
  ubuntu-version: '2404'
- build-type: 'sycl_f32'
  cuda-major-version: ""
  cuda-minor-version: ""
  platforms: 'linux/amd64'
  tag-latest: 'auto'
  tag-suffix: '-gpu-intel-sycl-f32-stablediffusion-ggml'
  runs-on: 'ubuntu-latest'
  base-image: "intel/oneapi-basekit:2025.3.0-0-devel-ubuntu24.04"
  skip-drivers: 'false'
  backend: "stablediffusion-ggml"
  dockerfile: "./backend/Dockerfile.golang"
  context: "./"
  ubuntu-version: '2404'
- build-type: 'sycl_f16'
  cuda-major-version: ""
  cuda-minor-version: ""
  platforms: 'linux/amd64'
  tag-latest: 'auto'
  tag-suffix: '-gpu-intel-sycl-f16-stablediffusion-ggml'
  runs-on: 'ubuntu-latest'
  base-image: "intel/oneapi-basekit:2025.3.0-0-devel-ubuntu24.04"
  skip-drivers: 'false'
  backend: "stablediffusion-ggml"
  dockerfile: "./backend/Dockerfile.golang"
  context: "./"
  ubuntu-version: '2404'
- build-type: 'vulkan'
  cuda-major-version: ""
  cuda-minor-version: ""
  platforms: 'linux/amd64,linux/arm64'
  tag-latest: 'auto'
  tag-suffix: '-gpu-vulkan-stablediffusion-ggml'
  runs-on: 'ubuntu-latest'
  base-image: "ubuntu:24.04"
  skip-drivers: 'false'
  backend: "stablediffusion-ggml"
  dockerfile: "./backend/Dockerfile.golang"
  context: "./"
  ubuntu-version: '2404'
- build-type: 'cublas'
  cuda-major-version: "12"
  cuda-minor-version: "0"
  platforms: 'linux/arm64'
  skip-drivers: 'false'
  tag-latest: 'auto'
  tag-suffix: '-nvidia-l4t-arm64-stablediffusion-ggml'
  base-image: "nvcr.io/nvidia/l4t-jetpack:r36.4.0"
  runs-on: 'ubuntu-24.04-arm'
  backend: "stablediffusion-ggml"
  dockerfile: "./backend/Dockerfile.golang"
  context: "./"
  ubuntu-version: '2204'
- build-type: 'cublas'
  cuda-major-version: "12"
  cuda-minor-version: "0"
  platforms: 'linux/arm64'
  skip-drivers: 'false'
  tag-latest: 'auto'
  tag-suffix: '-nvidia-l4t-arm64-sam3-cpp'
  base-image: "nvcr.io/nvidia/l4t-jetpack:r36.4.0"
  runs-on: 'ubuntu-24.04-arm'
  backend: "sam3-cpp"
  dockerfile: "./backend/Dockerfile.golang"
  context: "./"
  ubuntu-version: '2204'
# whisper
- build-type: ''
  cuda-major-version: ""
  cuda-minor-version: ""
  platforms: 'linux/amd64,linux/arm64'
  tag-latest: 'auto'
  tag-suffix: '-cpu-whisper'
  runs-on: 'ubuntu-latest'
  base-image: "ubuntu:24.04"
  skip-drivers: 'false'
  backend: "whisper"
  dockerfile: "./backend/Dockerfile.golang"
  context: "./"
  ubuntu-version: '2404'
- build-type: 'sycl_f32'
  cuda-major-version: ""
  cuda-minor-version: ""
  platforms: 'linux/amd64'
  tag-latest: 'auto'
  tag-suffix: '-gpu-intel-sycl-f32-whisper'
  runs-on: 'ubuntu-latest'
  base-image: "intel/oneapi-basekit:2025.3.0-0-devel-ubuntu24.04"
  skip-drivers: 'false'
  backend: "whisper"
  dockerfile: "./backend/Dockerfile.golang"
  context: "./"
  ubuntu-version: '2404'
- build-type: 'sycl_f16'
  cuda-major-version: ""
  cuda-minor-version: ""
  platforms: 'linux/amd64'
  tag-latest: 'auto'
  tag-suffix: '-gpu-intel-sycl-f16-whisper'
  runs-on: 'ubuntu-latest'
  base-image: "intel/oneapi-basekit:2025.3.0-0-devel-ubuntu24.04"
  skip-drivers: 'false'
  backend: "whisper"
  dockerfile: "./backend/Dockerfile.golang"
  context: "./"
  ubuntu-version: '2404'
- build-type: 'vulkan'
  cuda-major-version: ""
  cuda-minor-version: ""
  platforms: 'linux/amd64,linux/arm64'
  tag-latest: 'auto'
  tag-suffix: '-gpu-vulkan-whisper'
  runs-on: 'ubuntu-latest'
  base-image: "ubuntu:24.04"
  skip-drivers: 'false'
  backend: "whisper"
  dockerfile: "./backend/Dockerfile.golang"
  context: "./"
  ubuntu-version: '2404'
- build-type: 'cublas'
  cuda-major-version: "12"
  cuda-minor-version: "0"
  platforms: 'linux/arm64'
  skip-drivers: 'false'
  tag-latest: 'auto'
  tag-suffix: '-nvidia-l4t-arm64-whisper'
  base-image: "nvcr.io/nvidia/l4t-jetpack:r36.4.0"
  runs-on: 'ubuntu-24.04-arm'
  backend: "whisper"
  dockerfile: "./backend/Dockerfile.golang"
  context: "./"
  ubuntu-version: '2204'
- build-type: 'hipblas'
  cuda-major-version: ""
  cuda-minor-version: ""
  platforms: 'linux/amd64'
  tag-latest: 'auto'
  tag-suffix: '-gpu-rocm-hipblas-whisper'
  base-image: "rocm/dev-ubuntu-24.04:7.2.1"
  runs-on: 'ubuntu-latest'
  skip-drivers: 'false'
  backend: "whisper"
  dockerfile: "./backend/Dockerfile.golang"
  context: "./"
  ubuntu-version: '2404'
# acestep-cpp
- build-type: ''
  cuda-major-version: ""
  cuda-minor-version: ""
  platforms: 'linux/amd64,linux/arm64'
  tag-latest: 'auto'
  tag-suffix: '-cpu-acestep-cpp'
  runs-on: 'ubuntu-latest'
  base-image: "ubuntu:24.04"
  skip-drivers: 'false'
  backend: "acestep-cpp"
  dockerfile: "./backend/Dockerfile.golang"
  context: "./"
  ubuntu-version: '2404'
- build-type: 'sycl_f32'
  cuda-major-version: ""
  cuda-minor-version: ""
  platforms: 'linux/amd64'
  tag-latest: 'auto'
  tag-suffix: '-gpu-intel-sycl-f32-acestep-cpp'
  runs-on: 'ubuntu-latest'
  base-image: "intel/oneapi-basekit:2025.3.0-0-devel-ubuntu24.04"
  skip-drivers: 'false'
  backend: "acestep-cpp"
  dockerfile: "./backend/Dockerfile.golang"
  context: "./"
  ubuntu-version: '2404'
- build-type: 'sycl_f16'
  cuda-major-version: ""
  cuda-minor-version: ""
  platforms: 'linux/amd64'
  tag-latest: 'auto'
  tag-suffix: '-gpu-intel-sycl-f16-acestep-cpp'
  runs-on: 'ubuntu-latest'
  base-image: "intel/oneapi-basekit:2025.3.0-0-devel-ubuntu24.04"
  skip-drivers: 'false'
  backend: "acestep-cpp"
  dockerfile: "./backend/Dockerfile.golang"
  context: "./"
  ubuntu-version: '2404'
- build-type: 'vulkan'
  cuda-major-version: ""
  cuda-minor-version: ""
  platforms: 'linux/amd64,linux/arm64'
  tag-latest: 'auto'
  tag-suffix: '-gpu-vulkan-acestep-cpp'
  runs-on: 'ubuntu-latest'
  base-image: "ubuntu:24.04"
  skip-drivers: 'false'
  backend: "acestep-cpp"
  dockerfile: "./backend/Dockerfile.golang"
  context: "./"
  ubuntu-version: '2404'
- build-type: 'cublas'
  cuda-major-version: "12"
  cuda-minor-version: "0"
  platforms: 'linux/arm64'
  skip-drivers: 'false'
  tag-latest: 'auto'
  tag-suffix: '-nvidia-l4t-arm64-acestep-cpp'
  base-image: "nvcr.io/nvidia/l4t-jetpack:r36.4.0"
  runs-on: 'ubuntu-24.04-arm'
  backend: "acestep-cpp"
  dockerfile: "./backend/Dockerfile.golang"
  context: "./"
  ubuntu-version: '2204'
- build-type: 'hipblas'
  cuda-major-version: ""
  cuda-minor-version: ""
  platforms: 'linux/amd64'
  tag-latest: 'auto'
  tag-suffix: '-gpu-rocm-hipblas-acestep-cpp'
  base-image: "rocm/dev-ubuntu-24.04:7.2.1"
  runs-on: 'ubuntu-latest'
  skip-drivers: 'false'
  backend: "acestep-cpp"
  dockerfile: "./backend/Dockerfile.golang"
  context: "./"
  ubuntu-version: '2404'
# qwen3-tts-cpp
- build-type: ''
  cuda-major-version: ""
  cuda-minor-version: ""
  platforms: 'linux/amd64,linux/arm64'
  tag-latest: 'auto'
  tag-suffix: '-cpu-qwen3-tts-cpp'
  runs-on: 'ubuntu-latest'
  base-image: "ubuntu:24.04"
  skip-drivers: 'false'
  backend: "qwen3-tts-cpp"
  dockerfile: "./backend/Dockerfile.golang"
  context: "./"
  ubuntu-version: '2404'
- build-type: 'sycl_f32'
  cuda-major-version: ""
  cuda-minor-version: ""
  platforms: 'linux/amd64'
  tag-latest: 'auto'
  tag-suffix: '-gpu-intel-sycl-f32-qwen3-tts-cpp'
  runs-on: 'ubuntu-latest'
  base-image: "intel/oneapi-basekit:2025.3.0-0-devel-ubuntu24.04"
  skip-drivers: 'false'
  backend: "qwen3-tts-cpp"
  dockerfile: "./backend/Dockerfile.golang"
  context: "./"
  ubuntu-version: '2404'
- build-type: 'sycl_f16'
  cuda-major-version: ""
  cuda-minor-version: ""
  platforms: 'linux/amd64'
  tag-latest: 'auto'
  tag-suffix: '-gpu-intel-sycl-f16-qwen3-tts-cpp'
  runs-on: 'ubuntu-latest'
  base-image: "intel/oneapi-basekit:2025.3.0-0-devel-ubuntu24.04"
  skip-drivers: 'false'
  backend: "qwen3-tts-cpp"
  dockerfile: "./backend/Dockerfile.golang"
  context: "./"
  ubuntu-version: '2404'
- build-type: 'vulkan'
  cuda-major-version: ""
  cuda-minor-version: ""
  platforms: 'linux/amd64,linux/arm64'
  tag-latest: 'auto'
  tag-suffix: '-gpu-vulkan-qwen3-tts-cpp'
  runs-on: 'ubuntu-latest'
  base-image: "ubuntu:24.04"
  skip-drivers: 'false'
  backend: "qwen3-tts-cpp"
  dockerfile: "./backend/Dockerfile.golang"
  context: "./"
  ubuntu-version: '2404'
- build-type: 'cublas'
  cuda-major-version: "12"
  cuda-minor-version: "0"
  platforms: 'linux/arm64'
  skip-drivers: 'false'
  tag-latest: 'auto'
  tag-suffix: '-nvidia-l4t-arm64-qwen3-tts-cpp'
  base-image: "nvcr.io/nvidia/l4t-jetpack:r36.4.0"
  runs-on: 'ubuntu-24.04-arm'
  backend: "qwen3-tts-cpp"
  dockerfile: "./backend/Dockerfile.golang"
  context: "./"
  ubuntu-version: '2204'
- build-type: 'hipblas'
  cuda-major-version: ""
  cuda-minor-version: ""
  platforms: 'linux/amd64'
  tag-latest: 'auto'
  tag-suffix: '-gpu-rocm-hipblas-qwen3-tts-cpp'
  base-image: "rocm/dev-ubuntu-24.04:6.4.4"
  runs-on: 'ubuntu-latest'
  skip-drivers: 'false'
  backend: "qwen3-tts-cpp"
  dockerfile: "./backend/Dockerfile.golang"
  context: "./"
  ubuntu-version: '2404'
# vibevoice-cpp
- build-type: ''
  cuda-major-version: ""
  cuda-minor-version: ""
  platforms: 'linux/amd64,linux/arm64'
  tag-latest: 'auto'
  tag-suffix: '-cpu-vibevoice-cpp'
  runs-on: 'ubuntu-latest'
  base-image: "ubuntu:24.04"
  skip-drivers: 'false'
  backend: "vibevoice-cpp"
  dockerfile: "./backend/Dockerfile.golang"
  context: "./"
  ubuntu-version: '2404'
- build-type: ''
  cuda-major-version: ""
  cuda-minor-version: ""
  platforms: 'linux/amd64,linux/arm64'
  tag-latest: 'auto'
  tag-suffix: '-cpu-localvqe'
  runs-on: 'ubuntu-latest'
  base-image: "ubuntu:24.04"
  skip-drivers: 'false'
  backend: "localvqe"
  dockerfile: "./backend/Dockerfile.golang"
  context: "./"
  ubuntu-version: '2404'
- build-type: 'sycl_f32'
  cuda-major-version: ""
  cuda-minor-version: ""
  platforms: 'linux/amd64'
  tag-latest: 'auto'
  tag-suffix: '-gpu-intel-sycl-f32-vibevoice-cpp'
  runs-on: 'ubuntu-latest'
  base-image: "intel/oneapi-basekit:2025.3.0-0-devel-ubuntu24.04"
  skip-drivers: 'false'
  backend: "vibevoice-cpp"
  dockerfile: "./backend/Dockerfile.golang"
  context: "./"
  ubuntu-version: '2404'
- build-type: 'sycl_f16'
  cuda-major-version: ""
  cuda-minor-version: ""
  platforms: 'linux/amd64'
  tag-latest: 'auto'
  tag-suffix: '-gpu-intel-sycl-f16-vibevoice-cpp'
  runs-on: 'ubuntu-latest'
  base-image: "intel/oneapi-basekit:2025.3.0-0-devel-ubuntu24.04"
  skip-drivers: 'false'
  backend: "vibevoice-cpp"
  dockerfile: "./backend/Dockerfile.golang"
  context: "./"
  ubuntu-version: '2404'
- build-type: 'vulkan'
  cuda-major-version: ""
  cuda-minor-version: ""
  platforms: 'linux/amd64,linux/arm64'
  tag-latest: 'auto'
  tag-suffix: '-gpu-vulkan-vibevoice-cpp'
  runs-on: 'ubuntu-latest'
  base-image: "ubuntu:24.04"
  skip-drivers: 'false'
  backend: "vibevoice-cpp"
  dockerfile: "./backend/Dockerfile.golang"
  context: "./"
  ubuntu-version: '2404'
- build-type: 'vulkan'
  cuda-major-version: ""
  cuda-minor-version: ""
  platforms: 'linux/amd64,linux/arm64'
  tag-latest: 'auto'
  tag-suffix: '-gpu-vulkan-localvqe'
  runs-on: 'ubuntu-latest'
  base-image: "ubuntu:24.04"
  skip-drivers: 'false'
  backend: "localvqe"
  dockerfile: "./backend/Dockerfile.golang"
  context: "./"
  ubuntu-version: '2404'
- build-type: 'cublas'
  cuda-major-version: "12"
  cuda-minor-version: "0"
  platforms: 'linux/arm64'
  skip-drivers: 'false'
  tag-latest: 'auto'
  tag-suffix: '-nvidia-l4t-arm64-vibevoice-cpp'
  base-image: "nvcr.io/nvidia/l4t-jetpack:r36.4.0"
  runs-on: 'ubuntu-24.04-arm'
  backend: "vibevoice-cpp"
  dockerfile: "./backend/Dockerfile.golang"
  context: "./"
  ubuntu-version: '2204'
- build-type: 'hipblas'
  cuda-major-version: ""
  cuda-minor-version: ""
  platforms: 'linux/amd64'
  tag-latest: 'auto'
  tag-suffix: '-gpu-rocm-hipblas-vibevoice-cpp'
  base-image: "rocm/dev-ubuntu-24.04:6.4.4"
  runs-on: 'ubuntu-latest'
  skip-drivers: 'false'
  backend: "vibevoice-cpp"
  dockerfile: "./backend/Dockerfile.golang"
  context: "./"
  ubuntu-version: '2404'
# voxtral
- build-type: ''
  cuda-major-version: ""
  cuda-minor-version: ""
  platforms: 'linux/amd64,linux/arm64'
  tag-latest: 'auto'
  tag-suffix: '-cpu-voxtral'
  runs-on: 'ubuntu-latest'
  base-image: "ubuntu:24.04"
  skip-drivers: 'false'
  backend: "voxtral"
  dockerfile: "./backend/Dockerfile.golang"
  context: "./"
  ubuntu-version: '2404'
# opus
- build-type: ''
  cuda-major-version: ""
  cuda-minor-version: ""
  platforms: 'linux/amd64,linux/arm64'
  tag-latest: 'auto'
  tag-suffix: '-cpu-opus'
  runs-on: 'ubuntu-latest'
  base-image: "ubuntu:24.04"
  skip-drivers: 'false'
  backend: "opus"
  dockerfile: "./backend/Dockerfile.golang"
  context: "./"
  ubuntu-version: '2404'
# silero-vad
- build-type: ''
  cuda-major-version: ""
  cuda-minor-version: ""
  platforms: 'linux/amd64,linux/arm64'
  tag-latest: 'auto'
  tag-suffix: '-cpu-silero-vad'
  runs-on: 'ubuntu-latest'
  base-image: "ubuntu:24.04"
  skip-drivers: 'false'
  backend: "silero-vad"
  dockerfile: "./backend/Dockerfile.golang"
  context: "./"
  ubuntu-version: '2404'
# kokoros (Rust TTS)
- build-type: ''
  cuda-major-version: ""
  cuda-minor-version: ""
  platforms: 'linux/amd64'
  tag-latest: 'auto'
  tag-suffix: '-cpu-kokoros'
  runs-on: 'ubuntu-latest'
  base-image: "ubuntu:24.04"
  skip-drivers: 'false'
  backend: "kokoros"
  dockerfile: "./backend/Dockerfile.rust"
  context: "./"
  ubuntu-version: '2404'
# local-store
- build-type: ''
  cuda-major-version: ""
  cuda-minor-version: ""
  platforms: 'linux/amd64,linux/arm64'
  tag-latest: 'auto'
  tag-suffix: '-cpu-local-store'
  runs-on: 'ubuntu-latest'
  base-image: "ubuntu:24.04"
  skip-drivers: 'false'
  backend: "local-store"
  dockerfile: "./backend/Dockerfile.golang"
  context: "./"
  ubuntu-version: '2404'
# rfdetr
- build-type: ''
  cuda-major-version: ""
  cuda-minor-version: ""
  platforms: 'linux/amd64,linux/arm64'
  tag-latest: 'auto'
  tag-suffix: '-cpu-rfdetr'
  runs-on: 'ubuntu-latest'
  base-image: "ubuntu:24.04"
  skip-drivers: 'false'
  backend: "rfdetr"
  dockerfile: "./backend/Dockerfile.python"
  context: "./"
  ubuntu-version: '2404'
# insightface (face recognition)
- build-type: ''
  cuda-major-version: ""
  cuda-minor-version: ""
  platforms: 'linux/amd64,linux/arm64'
  tag-latest: 'auto'
  tag-suffix: '-cpu-insightface'
  runs-on: 'ubuntu-latest'
  base-image: "ubuntu:24.04"
  skip-drivers: 'false'
  backend: "insightface"
  dockerfile: "./backend/Dockerfile.python"
  context: "./"
  ubuntu-version: '2404'
# speaker-recognition (voice/speaker biometrics)
- build-type: ''
  cuda-major-version: ""
  cuda-minor-version: ""
  platforms: 'linux/amd64,linux/arm64'
  tag-latest: 'auto'
  tag-suffix: '-cpu-speaker-recognition'
  runs-on: 'ubuntu-latest'
  base-image: "ubuntu:24.04"
  skip-drivers: 'false'
  backend: "speaker-recognition"
  dockerfile: "./backend/Dockerfile.python"
  context: "./"
  ubuntu-version: '2404'
- build-type: 'intel'
  cuda-major-version: ""
  cuda-minor-version: ""
  platforms: 'linux/amd64'
  tag-latest: 'auto'
  tag-suffix: '-gpu-intel-rfdetr'
  runs-on: 'ubuntu-latest'
  base-image: "intel/oneapi-basekit:2025.3.0-0-devel-ubuntu24.04"
  skip-drivers: 'false'
  backend: "rfdetr"
  dockerfile: "./backend/Dockerfile.python"
  context: "./"
  ubuntu-version: '2404'
- build-type: 'l4t'
  cuda-major-version: "12"
  cuda-minor-version: "0"
  platforms: 'linux/arm64'
  skip-drivers: 'true'
  tag-latest: 'auto'
  tag-suffix: '-nvidia-l4t-arm64-rfdetr'
  base-image: "nvcr.io/nvidia/l4t-jetpack:r36.4.0"
  runs-on: 'ubuntu-24.04-arm'
  backend: "rfdetr"
  dockerfile: "./backend/Dockerfile.python"
  context: "./"
  ubuntu-version: '2204'
- build-type: 'l4t'
  cuda-major-version: "12"
  cuda-minor-version: "0"
  platforms: 'linux/arm64'
  skip-drivers: 'true'
  tag-latest: 'auto'
  tag-suffix: '-nvidia-l4t-arm64-chatterbox'
  base-image: "nvcr.io/nvidia/l4t-jetpack:r36.4.0"
  runs-on: 'ubuntu-24.04-arm'
  backend: "chatterbox"
  dockerfile: "./backend/Dockerfile.python"
  context: "./"
  ubuntu-version: '2204'
# runs out of space on the runner
# - build-type: 'hipblas'
#   cuda-major-version: ""
#   cuda-minor-version: ""
#   platforms: 'linux/amd64'
#   tag-latest: 'auto'
#   tag-suffix: '-gpu-hipblas-rfdetr'
#   base-image: "rocm/dev-ubuntu-24.04:7.2.1"
#   runs-on: 'ubuntu-latest'
#   skip-drivers: 'false'
#   backend: "rfdetr"
#   dockerfile: "./backend/Dockerfile.python"
#   context: "./"
# kitten-tts
- build-type: ''
  cuda-major-version: ""
  cuda-minor-version: ""
  platforms: 'linux/amd64,linux/arm64'
  tag-latest: 'auto'
  tag-suffix: '-kitten-tts'
  runs-on: 'ubuntu-latest'
  base-image: "ubuntu:24.04"
  skip-drivers: 'false'
  backend: "kitten-tts"
  dockerfile: "./backend/Dockerfile.python"
  context: "./"
  ubuntu-version: '2404'
# neutts
- build-type: ''
  cuda-major-version: ""
  cuda-minor-version: ""
  platforms: 'linux/amd64,linux/arm64'
  tag-latest: 'auto'
  tag-suffix: '-cpu-neutts'
  runs-on: 'ubuntu-latest'
  base-image: "ubuntu:24.04"
  skip-drivers: 'false'
  backend: "neutts"
  dockerfile: "./backend/Dockerfile.python"
  context: "./"
  ubuntu-version: '2404'
- build-type: 'hipblas'
  cuda-major-version: ""
  cuda-minor-version: ""
  platforms: 'linux/amd64'
  tag-latest: 'auto'
  tag-suffix: '-gpu-rocm-hipblas-neutts'
  runs-on: 'arc-runner-set'
  base-image: "rocm/dev-ubuntu-24.04:7.2.1"
  skip-drivers: 'false'
  backend: "neutts"
  dockerfile: "./backend/Dockerfile.python"
  context: "./"
  ubuntu-version: '2404'
- build-type: ''
|
|
cuda-major-version: ""
|
|
cuda-minor-version: ""
|
|
platforms: 'linux/amd64,linux/arm64'
|
|
tag-latest: 'auto'
|
|
tag-suffix: '-cpu-vibevoice'
|
|
runs-on: 'ubuntu-latest'
|
|
base-image: "ubuntu:24.04"
|
|
skip-drivers: 'false'
|
|
backend: "vibevoice"
|
|
dockerfile: "./backend/Dockerfile.python"
|
|
context: "./"
|
|
ubuntu-version: '2404'
|
|
- build-type: ''
|
|
cuda-major-version: ""
|
|
cuda-minor-version: ""
|
|
platforms: 'linux/amd64,linux/arm64'
|
|
tag-latest: 'auto'
|
|
tag-suffix: '-cpu-qwen-asr'
|
|
runs-on: 'ubuntu-latest'
|
|
base-image: "ubuntu:24.04"
|
|
skip-drivers: 'false'
|
|
backend: "qwen-asr"
|
|
dockerfile: "./backend/Dockerfile.python"
|
|
context: "./"
|
|
ubuntu-version: '2404'
|
|
- build-type: ''
|
|
cuda-major-version: ""
|
|
cuda-minor-version: ""
|
|
platforms: 'linux/amd64,linux/arm64'
|
|
tag-latest: 'auto'
|
|
tag-suffix: '-cpu-nemo'
|
|
runs-on: 'ubuntu-latest'
|
|
base-image: "ubuntu:24.04"
|
|
skip-drivers: 'false'
|
|
backend: "nemo"
|
|
dockerfile: "./backend/Dockerfile.python"
|
|
context: "./"
|
|
ubuntu-version: '2404'
|
|
- build-type: ''
|
|
cuda-major-version: ""
|
|
cuda-minor-version: ""
|
|
platforms: 'linux/amd64,linux/arm64'
|
|
tag-latest: 'auto'
|
|
tag-suffix: '-cpu-qwen-tts'
|
|
runs-on: 'ubuntu-latest'
|
|
base-image: "ubuntu:24.04"
|
|
skip-drivers: 'false'
|
|
backend: "qwen-tts"
|
|
dockerfile: "./backend/Dockerfile.python"
|
|
context: "./"
|
|
ubuntu-version: '2404'
|
|
- build-type: ''
|
|
cuda-major-version: ""
|
|
cuda-minor-version: ""
|
|
platforms: 'linux/amd64,linux/arm64'
|
|
tag-latest: 'auto'
|
|
tag-suffix: '-cpu-fish-speech'
|
|
runs-on: 'ubuntu-latest'
|
|
base-image: "ubuntu:24.04"
|
|
skip-drivers: 'false'
|
|
backend: "fish-speech"
|
|
dockerfile: "./backend/Dockerfile.python"
|
|
context: "./"
|
|
ubuntu-version: '2404'
|
|
- build-type: ''
|
|
cuda-major-version: ""
|
|
cuda-minor-version: ""
|
|
platforms: 'linux/amd64'
|
|
tag-latest: 'auto'
|
|
tag-suffix: '-cpu-voxcpm'
|
|
runs-on: 'ubuntu-latest'
|
|
base-image: "ubuntu:24.04"
|
|
skip-drivers: 'false'
|
|
backend: "voxcpm"
|
|
dockerfile: "./backend/Dockerfile.python"
|
|
context: "./"
|
|
ubuntu-version: '2404'
|
|
- build-type: ''
|
|
cuda-major-version: ""
|
|
cuda-minor-version: ""
|
|
platforms: 'linux/amd64,linux/arm64'
|
|
tag-latest: 'auto'
|
|
tag-suffix: '-cpu-pocket-tts'
|
|
runs-on: 'ubuntu-latest'
|
|
base-image: "ubuntu:24.04"
|
|
skip-drivers: 'false'
|
|
backend: "pocket-tts"
|
|
dockerfile: "./backend/Dockerfile.python"
|
|
context: "./"
|
|
ubuntu-version: '2404'
|
|
- build-type: ''
|
|
cuda-major-version: ""
|
|
cuda-minor-version: ""
|
|
platforms: 'linux/amd64'
|
|
tag-latest: 'auto'
|
|
tag-suffix: '-cpu-outetts'
|
|
runs-on: 'ubuntu-latest'
|
|
base-image: "ubuntu:24.04"
|
|
skip-drivers: 'true'
|
|
backend: "outetts"
|
|
dockerfile: "./backend/Dockerfile.python"
|
|
context: "./"
|
|
ubuntu-version: '2404'
|
|
# sherpa-onnx CPU
|
|
- build-type: ''
|
|
cuda-major-version: ""
|
|
cuda-minor-version: ""
|
|
platforms: 'linux/amd64,linux/arm64'
|
|
tag-latest: 'auto'
|
|
tag-suffix: '-cpu-sherpa-onnx'
|
|
runs-on: 'ubuntu-latest'
|
|
base-image: "ubuntu:24.04"
|
|
skip-drivers: 'false'
|
|
backend: "sherpa-onnx"
|
|
dockerfile: "./backend/Dockerfile.golang"
|
|
context: "./"
|
|
ubuntu-version: '2404'
|
|
# sherpa-onnx CUDA 12
|
|
- build-type: 'cublas'
|
|
cuda-major-version: "12"
|
|
cuda-minor-version: "8"
|
|
platforms: 'linux/amd64'
|
|
tag-latest: 'auto'
|
|
tag-suffix: '-gpu-nvidia-cuda-12-sherpa-onnx'
|
|
runs-on: 'ubuntu-latest'
|
|
base-image: "ubuntu:24.04"
|
|
skip-drivers: 'false'
|
|
backend: "sherpa-onnx"
|
|
dockerfile: "./backend/Dockerfile.golang"
|
|
context: "./"
|
|
ubuntu-version: '2404'
|
|
# sherpa-onnx CUDA 13 — requires onnxruntime 1.24.x+ for the
|
|
# gpu_cuda13 tarball; sherpa-onnx SHERPA_COMMIT pins to v1.12.39.
|
|
- build-type: 'cublas'
|
|
cuda-major-version: "13"
|
|
cuda-minor-version: "0"
|
|
platforms: 'linux/amd64'
|
|
tag-latest: 'auto'
|
|
tag-suffix: '-gpu-nvidia-cuda-13-sherpa-onnx'
|
|
runs-on: 'ubuntu-latest'
|
|
base-image: "ubuntu:24.04"
|
|
skip-drivers: 'false'
|
|
backend: "sherpa-onnx"
|
|
dockerfile: "./backend/Dockerfile.golang"
|
|
context: "./"
|
|
ubuntu-version: '2404'
|
|
|
|
# Darwin matrix (consumed by backend-jobs-darwin).
|
|
includeDarwin:
|
|
- backend: "diffusers"
|
|
tag-suffix: "-metal-darwin-arm64-diffusers"
|
|
build-type: "mps"
|
|
- backend: "ace-step"
|
|
tag-suffix: "-metal-darwin-arm64-ace-step"
|
|
build-type: "mps"
|
|
- backend: "mlx"
|
|
tag-suffix: "-metal-darwin-arm64-mlx"
|
|
build-type: "mps"
|
|
- backend: "chatterbox"
|
|
tag-suffix: "-metal-darwin-arm64-chatterbox"
|
|
build-type: "mps"
|
|
- backend: "mlx-vlm"
|
|
tag-suffix: "-metal-darwin-arm64-mlx-vlm"
|
|
build-type: "mps"
|
|
- backend: "mlx-audio"
|
|
tag-suffix: "-metal-darwin-arm64-mlx-audio"
|
|
build-type: "mps"
|
|
- backend: "mlx-distributed"
|
|
tag-suffix: "-metal-darwin-arm64-mlx-distributed"
|
|
build-type: "mps"
|
|
- backend: "stablediffusion-ggml"
|
|
tag-suffix: "-metal-darwin-arm64-stablediffusion-ggml"
|
|
build-type: "metal"
|
|
lang: "go"
|
|
- backend: "whisper"
|
|
tag-suffix: "-metal-darwin-arm64-whisper"
|
|
build-type: "metal"
|
|
lang: "go"
|
|
- backend: "acestep-cpp"
|
|
tag-suffix: "-metal-darwin-arm64-acestep-cpp"
|
|
build-type: "metal"
|
|
lang: "go"
|
|
- backend: "qwen3-tts-cpp"
|
|
tag-suffix: "-metal-darwin-arm64-qwen3-tts-cpp"
|
|
build-type: "metal"
|
|
lang: "go"
|
|
- backend: "vibevoice-cpp"
|
|
tag-suffix: "-metal-darwin-arm64-vibevoice-cpp"
|
|
build-type: "metal"
|
|
lang: "go"
|
|
- backend: "voxtral"
|
|
tag-suffix: "-metal-darwin-arm64-voxtral"
|
|
build-type: "metal"
|
|
lang: "go"
|
|
- backend: "vibevoice"
|
|
tag-suffix: "-metal-darwin-arm64-vibevoice"
|
|
build-type: "mps"
|
|
- backend: "qwen-asr"
|
|
tag-suffix: "-metal-darwin-arm64-qwen-asr"
|
|
build-type: "mps"
|
|
- backend: "nemo"
|
|
tag-suffix: "-metal-darwin-arm64-nemo"
|
|
build-type: "mps"
|
|
- backend: "qwen-tts"
|
|
tag-suffix: "-metal-darwin-arm64-qwen-tts"
|
|
build-type: "mps"
|
|
- backend: "fish-speech"
|
|
tag-suffix: "-metal-darwin-arm64-fish-speech"
|
|
build-type: "mps"
|
|
- backend: "voxcpm"
|
|
tag-suffix: "-metal-darwin-arm64-voxcpm"
|
|
build-type: "mps"
|
|
- backend: "pocket-tts"
|
|
tag-suffix: "-metal-darwin-arm64-pocket-tts"
|
|
build-type: "mps"
|
|
- backend: "moonshine"
|
|
tag-suffix: "-metal-darwin-arm64-moonshine"
|
|
build-type: "mps"
|
|
- backend: "whisperx"
|
|
tag-suffix: "-metal-darwin-arm64-whisperx"
|
|
build-type: "mps"
|
|
- backend: "rerankers"
|
|
tag-suffix: "-metal-darwin-arm64-rerankers"
|
|
build-type: "mps"
|
|
- backend: "transformers"
|
|
tag-suffix: "-metal-darwin-arm64-transformers"
|
|
build-type: "mps"
|
|
- backend: "kokoro"
|
|
tag-suffix: "-metal-darwin-arm64-kokoro"
|
|
build-type: "mps"
|
|
- backend: "faster-whisper"
|
|
tag-suffix: "-metal-darwin-arm64-faster-whisper"
|
|
build-type: "mps"
|
|
- backend: "coqui"
|
|
tag-suffix: "-metal-darwin-arm64-coqui"
|
|
build-type: "mps"
|
|
- backend: "rfdetr"
|
|
tag-suffix: "-metal-darwin-arm64-rfdetr"
|
|
build-type: "mps"
|
|
- backend: "kitten-tts"
|
|
tag-suffix: "-metal-darwin-arm64-kitten-tts"
|
|
build-type: "mps"
|
|
- backend: "piper"
|
|
tag-suffix: "-metal-darwin-arm64-piper"
|
|
build-type: "metal"
|
|
lang: "go"
|
|
- backend: "opus"
|
|
tag-suffix: "-metal-darwin-arm64-opus"
|
|
build-type: "metal"
|
|
lang: "go"
|
|
- backend: "silero-vad"
|
|
tag-suffix: "-metal-darwin-arm64-silero-vad"
|
|
build-type: "metal"
|
|
lang: "go"
|
|
- backend: "local-store"
|
|
tag-suffix: "-metal-darwin-arm64-local-store"
|
|
build-type: "metal"
|
|
lang: "go"
|
|
- backend: "llama-cpp-quantization"
|
|
tag-suffix: "-metal-darwin-arm64-llama-cpp-quantization"
|
|
build-type: "mps"
|