Mirror of https://github.com/mudler/LocalAI.git (synced 2026-04-29 03:24:49 -04:00)
ci(backends): build cpu-whisperx and cpu-faster-whisper for linux/arm64 (#9573)
Extend the existing CPU build matrix entries to produce a multi-arch manifest (linux/amd64,linux/arm64) at the same image tags.

arm64 Linux hosts without an NVIDIA GPU report the "default" capability, which already maps to cpu-whisperx / cpu-faster-whisper in backend/index.yaml, so the manifest list lets Docker pull the right variant without any gallery changes.

Both stacks install cleanly under aarch64: torch (2.4.1/2.8.0), faster-whisper, ctranslate2, whisperx, opencv-python and the remaining deps all ship manylinux2014_aarch64 wheels, so no source builds run under QEMU emulation.

Follows the same pattern already used by cpu-llama-cpp-quantization.

Assisted-by: Claude:claude-opus-4-7 [Claude Code]
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
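The capability-to-backend mapping the message refers to lives in backend/index.yaml: hosts without an NVIDIA GPU report the "default" capability, and the gallery resolves that to the CPU image tag. A rough sketch of what such an entry could look like follows; the field names and image references below are assumptions for illustration, not copied from the actual file:

```yaml
# Hypothetical sketch of a backend/index.yaml entry (field names and image
# references assumed, not taken from the real file). The point: "default"
# resolves to the -cpu-whisperx tag, and because that tag is now a
# multi-arch manifest list, Docker picks the amd64 or arm64 layer
# automatically at pull time, with no change needed here.
- name: whisperx
  capabilities:
    default: "example.registry/localai-backends:latest-cpu-whisperx"
    nvidia: "example.registry/localai-backends:latest-gpu-nvidia-whisperx"
```

This is why the commit only touches the CI build matrix: widening the platforms list at the same tag is invisible to the gallery layer.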
committed by GitHub
parent 1c45227346
commit e16e758dff
4 changes to .github/workflows/backend.yml (vendored)
@@ -141,7 +141,7 @@ jobs:
           - build-type: ''
             cuda-major-version: ""
             cuda-minor-version: ""
-            platforms: 'linux/amd64'
+            platforms: 'linux/amd64,linux/arm64'
             tag-latest: 'auto'
             tag-suffix: '-cpu-whisperx'
             runs-on: 'ubuntu-latest'
@@ -154,7 +154,7 @@ jobs:
           - build-type: ''
             cuda-major-version: ""
             cuda-minor-version: ""
-            platforms: 'linux/amd64'
+            platforms: 'linux/amd64,linux/arm64'
             tag-latest: 'auto'
             tag-suffix: '-cpu-faster-whisper'
             runs-on: 'ubuntu-latest'