Files
LocalAI/backend/python/speaker-recognition/engines.py
Ettore Di Giacinto 181ebb6df4 feat: voice recognition (#9500)
* feat(voice-recognition): add /v1/voice/{verify,analyze,embed} + speaker-recognition backend

Audio analog to face recognition. Adds three gRPC RPCs
(VoiceVerify / VoiceAnalyze / VoiceEmbed), their Go service and HTTP
layers, a new FLAG_SPEAKER_RECOGNITION capability flag, and a Python
backend scaffold under backend/python/speaker-recognition/ wrapping
SpeechBrain ECAPA-TDNN with a parallel OnnxDirectEngine for
WeSpeaker / 3D-Speaker ONNX exports.

The kokoros Rust backend gets matching unimplemented trait stubs —
tonic's async_trait has no defaults, so adding an RPC without Rust
stubs breaks the build (same regression fixed by eb01c772 for face).

Swagger, /api/instructions, and the auth RouteFeatureRegistry /
APIFeatures list are updated so the endpoints surface everywhere a
client or admin UI looks.

Assisted-by: Claude:claude-opus-4-7

* feat(voice-recognition): add 1:N identify + register/forget endpoints

Mirrors the face-recognition register/identify/forget surface. New
package core/services/voicerecognition/ carries a Registry interface
and a local-store-backed implementation (same in-memory vector-store
plumbing facerecognition uses, separate instance so the embedding
spaces stay isolated).

Handlers under /v1/voice/{register,identify,forget} reuse
backend.VoiceEmbed to compute the probe vector, then delegate the
nearest-neighbour search to the registry. Default cosine-distance
threshold is tuned for ECAPA-TDNN on VoxCeleb (0.25, EER ~1.9%).
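
For orientation, a minimal Python sketch of the decision the registry
makes — the function and variable names here are illustrative, not the
actual core/services/voicerecognition identifiers:

    # Hypothetical 1:N identify over enrolled embeddings.
    def identify(probe, enrolled, threshold=0.25):
        # enrolled: dict of speaker_id -> embedding (list[float])
        best_id, best_dist = None, float("inf")
        for speaker_id, vec in enrolled.items():
            d = _cosine_distance(probe, vec)  # cosine helper as in engines.py
            if d < best_dist:
                best_id, best_dist = speaker_id, d
        return best_id if best_dist <= threshold else None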

As with the face registry, the current backing is in-memory only — a
pgvector implementation is a future constructor-level swap.

Assisted-by: Claude:claude-opus-4-7

* feat(voice-recognition): gallery, docs, CI and e2e coverage

- backend/index.yaml: speaker-recognition backend entry + CPU and
  CUDA-12 image variants (plus matching development variants).
- gallery/index.yaml: speechbrain-ecapa-tdnn (default) and
  wespeaker-resnet34 model entries. The WeSpeaker SHA-256 is a
  deliberate placeholder — the HF URI must be curl'd and its hash
  filled in before the entry installs.
- docs/content/features/voice-recognition.md: API reference + quickstart,
  mirrors the face-recognition docs.
- React UI: CAP_SPEAKER_RECOGNITION flag export (consumers follow face's
  precedent — no dedicated tab yet).
- tests/e2e-backends: voice_embed / voice_verify / voice_analyze specs.
  Helper resolveFaceFixture is reused as-is — the only thing face/voice
  share is "download a file into workDir", so no need for a new helper.
- Makefile: docker-build-speaker-recognition + test-extra-backend-
  speaker-recognition-{ecapa,all} targets. Audio fixtures default to
  VCTK p225/p226 samples from HuggingFace.
- CI: test-extra.yml grows a tests-speaker-recognition-grpc job
  mirroring insightface. backend.yml matrix gains CPU + CUDA-12 image
  build entries — scripts/changed-backends.js auto-picks these up.

Assisted-by: Claude:claude-opus-4-7

* feat(voice-recognition): wire a working /v1/voice/analyze head

Adds AnalysisHead: a lazy-loading age / gender / emotion inference
wrapper that plugs into both SpeechBrainEngine and OnnxDirectEngine.

Defaults to two open-licence HuggingFace checkpoints:
  - audeering/wav2vec2-large-robust-24-ft-age-gender (Apache 2.0) —
    age regression + 3-way gender (female / male / child).
  - superb/wav2vec2-base-superb-er (Apache 2.0) — 4-way emotion.

Both are optional and degrade gracefully when transformers or the
model can't be loaded — the engine raises NotImplementedError so the
gRPC layer returns 501 instead of a generic 500.
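
A hedged sketch of that mapping in a Python servicer — the method and
helper names are assumptions; only NotImplementedError and
grpc.StatusCode.UNIMPLEMENTED come from the description above:

    import grpc

    def VoiceAnalyze(self, request, context):  # hypothetical servicer method
        try:
            return self._do_analyze(request)  # raises NotImplementedError when no head loaded
        except NotImplementedError as exc:
            # UNIMPLEMENTED surfaces as HTTP 501 at the gateway.
            context.abort(grpc.StatusCode.UNIMPLEMENTED, str(exc))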

Emotion classes pass through from the model (neutral/happy/angry/sad
on the default checkpoint); the e2e test now accepts any non-empty
dominant gender so custom age_gender_model overrides don't fail it.

Adds transformers to the backend's CPU and CUDA-12 requirements.

Assisted-by: Claude:claude-opus-4-7

* fix(voice-recognition): pin real WeSpeaker ResNet34 ONNX SHA-256

Replaces the placeholder hash in gallery/index.yaml with the actual
SHA-256 (7bb2f06e…) of the upstream
Wespeaker/wespeaker-voxceleb-resnet34-LM ONNX export (~25 MB).
`local-ai models install wespeaker-resnet34` now succeeds.

Assisted-by: Claude:claude-opus-4-7

* fix(voice-recognition): soundfile loader + honest analyze default

Two issues surfaced on first end-to-end smoke with the actual backend
image:

1. torchaudio.load in torchaudio 2.8+ requires the torchcodec package
   for audio decoding. Switch SpeechBrainEngine._load_waveform to the
   already-present soundfile (listed in requirements.txt) plus a numpy
   linear resample to 16kHz. Drops a heavy ffmpeg-linked dep and the
   codepath we never exercise (torchaudio's ffmpeg backend).

2. The AnalysisHead was defaulting to audeering/wav2vec2-large-robust-
   24-ft-age-gender, but AutoModelForAudioClassification silently
   mangles that checkpoint — it reports the age head weights as
   UNEXPECTED and re-initialises the classifier head with random
   values, so the "gender" output is noise and there is no age output
   at all. Make age/gender opt-in instead (empty default; users wire
   a cleanly-loadable Wav2Vec2ForSequenceClassification checkpoint via
   the age_gender_model: option — see the sketch after this list).
   Emotion keeps its working Superb default.
   Also broaden _infer_age_gender's tensor-shape handling and catch
   runtime exceptions so a dodgy age/gender head never takes down the
   whole analyze call.
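
   The sketch referenced above — opting back in via the options dict;
   the checkpoint id is a placeholder, not a vetted recommendation:

       head = AnalysisHead({
           # empty string (the default) keeps age/gender disabled
           "age_gender_model": "some-org/age-gender-wav2vec2",  # placeholder id
           # emotion keeps superb/wav2vec2-base-superb-er when omitted
       })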

Docs and README updated to match the new policy.

Verified with the branch-scoped gallery on localhost:
- voice/embed    → 192-d ECAPA-TDNN vector
- voice/verify   → same-clip dist≈6e-08 verified=true; cross-speaker
                   dist 0.76–0.99 verified=false (as expected)
- voice/register/identify/forget → round-trip works, 404 on unknown id
- voice/analyze  → emotion populated, age/gender omitted (opt-in)

Assisted-by: Claude:claude-opus-4-7

* fix(voice-recognition): real CI audio fixtures + fixture-agnostic verify spec

Two issues surfaced after CI actually ran the speaker-recognition e2e
target (I'd curl-tested against a running server but hadn't run the
make target locally):

1. The default BACKEND_TEST_VOICE_AUDIO_* URLs pointed at
   huggingface.co/datasets/CSTR-Edinburgh/vctk paths that return 404
   (the dataset is gated). Swap them for the speechbrain test samples
   served from github.com/speechbrain/speechbrain/raw/develop/ —
   public, no auth, correct 16kHz mono format.

2. The VoiceVerify spec required d(file1,file2) < 0.4, assuming
   file1/file2 were same-speaker. The speechbrain samples are three
   different speakers (example1/2/5), and there is no easy un-gated
   source of true same-speaker audio pairs (VoxCeleb/VCTK/LibriSpeech
   are all license- or size-gated for CI use). Replace the ceiling
   check with a relative-ordering assertion (sketched below):
   d(pair) > d(same-clip) for both file2 and file3 — that's enough to
   prove the embeddings encode speaker info, and it works with any
   three non-identical clips. Actual speaker ordering d(1,2) vs d(1,3)
   is logged but not asserted.
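
   Shape of the new check, as a Python-flavoured sketch (the real spec
   lives in tests/e2e-backends and its identifiers differ):

       d_same = engine.compare(file1, file1)   # same-clip baseline, ~0
       assert engine.compare(file1, file2) > d_same
       assert engine.compare(file1, file3) > d_same
       # d(1,2) vs d(1,3) ordering is logged for inspection, not asserted.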

Local run: 4/4 voice specs pass (Health, LoadModel, VoiceEmbed,
VoiceVerify) on the built backend image. 12 non-voice specs skipped
as expected.

Assisted-by: Claude:claude-opus-4-7

* fix(ci): checkout with submodules in the reusable backend_build workflow

The kokoros Rust backend build fails with

    failed to read .../sources/Kokoros/kokoros/Cargo.toml: No such file

because the reusable backend_build.yml workflow's actions/checkout
step was missing `submodules: true`. Dockerfile.rust does `COPY .
/LocalAI`, and without the submodule files the subsequent `cargo
build` can't find the vendored Kokoros crate.

The bug pre-dates this PR — scripts/changed-backends.js only triggers
the kokoros image job when something under backend/rust/kokoros or
the shared proto changes, so master had been coasting past it. The
voice-recognition proto addition re-broke it.

Other checkouts in backend.yml (llama-cpp-darwin) and test-extra.yml
(insightface, kokoros, speaker-recognition) already pass
`submodules: true`; this brings the shared backend image builder in
line.

Assisted-by: Claude:claude-opus-4-7
2026-04-23 12:07:14 +02:00

"""Speaker-recognition engines.
Two engines are offered, mirroring the insightface backend's split:
* SpeechBrainEngine: full PyTorch / SpeechBrain path. Uses the
ECAPA-TDNN recipe trained on VoxCeleb; 192-d L2-normalized
embeddings, cosine distance for verification. Auto-downloads the
checkpoint into LocalAI's models directory on first LoadModel.
* OnnxDirectEngine: CPU-friendly fallback that runs pre-exported
ONNX speaker encoders (WeSpeaker ResNet34, 3D-Speaker ERes2Net,
CAM++, etc.). Model paths come from the model config — the gallery
`files:` flow drops them into the models directory.
Engine selection follows the same gallery-driven convention face
recognition uses (insightface commits 9c6da0f7 / 405fec0b): the
Python backend reads `engine` / `model_path` / `checkpoint` from the
options dict and picks an engine accordingly.
"""

from __future__ import annotations

import os
from typing import Any, Iterable, Protocol


class SpeakerEngine(Protocol):
    """Interface both concrete engines satisfy."""

    name: str

    def embed(self, audio_path: str) -> list[float]:  # pragma: no cover - interface
        ...

    def compare(self, audio1: str, audio2: str) -> float:  # pragma: no cover
        ...

    def analyze(self, audio_path: str, actions: Iterable[str]) -> list[dict[str, Any]]:  # pragma: no cover
        ...


def _cosine_distance(a, b) -> float:
    import numpy as np

    va = np.asarray(a, dtype=np.float32).reshape(-1)
    vb = np.asarray(b, dtype=np.float32).reshape(-1)
    na = float(np.linalg.norm(va))
    nb = float(np.linalg.norm(vb))
    if na == 0.0 or nb == 0.0:
        return 1.0
    return float(1.0 - np.dot(va, vb) / (na * nb))


class AnalysisHead:
    """Age / gender / emotion head, lazy-loaded on first analyze call.

    Wraps two open-licence HuggingFace checkpoints:

    * audeering/wav2vec2-large-robust-24-ft-age-gender — age
      regression (0–100 years) + 3-way gender (female/male/child).
      Apache 2.0.
    * superb/wav2vec2-base-superb-er — 4-way emotion classification
      (neutral / happy / angry / sad). Apache 2.0.

    Either model is optional — the head degrades gracefully to only the
    attributes it could load. Override the checkpoint with the
    `age_gender_model` / `emotion_model` option if you want something
    else. Set either to an empty string to disable that head.
    """

    # Age + gender is OFF by default: the high-accuracy Apache-2.0
    # checkpoint (Audeering wav2vec2-large-robust-24-ft-age-gender) uses a
    # custom multi-task head that AutoModelForAudioClassification silently
    # mangles — it drops the age weights as UNEXPECTED and re-initialises
    # the classifier head with random values, so the output is noise. Users
    # who have a cleanly loadable age/gender classifier can opt in with
    # `age_gender_model: <repo>` in options. The emotion default below
    # (superb/wav2vec2-base-superb-er) loads via the standard audio-
    # classification pipeline with no such caveat.
    DEFAULT_AGE_GENDER_MODEL = ""
    DEFAULT_EMOTION_MODEL = "superb/wav2vec2-base-superb-er"

    AGE_GENDER_LABELS = ("female", "male", "child")

    def __init__(self, options: dict[str, str]):
        self._options = options
        self._age_gender = None
        self._age_gender_processor = None
        self._age_gender_loaded = False
        self._age_gender_error: str | None = None
        self._emotion = None
        self._emotion_loaded = False
        self._emotion_error: str | None = None

    # --- age / gender -------------------------------------------------

    def _ensure_age_gender(self):
        if self._age_gender_loaded:
            return
        self._age_gender_loaded = True
        model_id = self._options.get(
            "age_gender_model", self.DEFAULT_AGE_GENDER_MODEL
        )
        if not model_id:
            self._age_gender_error = "disabled"
            return
        try:
            # Late imports — torch / transformers are heavy and only
            # pulled in when the analyze head actually runs.
            import torch  # type: ignore
            from transformers import AutoFeatureExtractor, AutoModelForAudioClassification  # type: ignore

            self._torch = torch
            self._age_gender_processor = AutoFeatureExtractor.from_pretrained(model_id)
            self._age_gender = AutoModelForAudioClassification.from_pretrained(model_id)
            self._age_gender.eval()
        except Exception as exc:  # noqa: BLE001
            self._age_gender_error = f"{type(exc).__name__}: {exc}"

    def _infer_age_gender(self, waveform_16k) -> dict[str, Any]:
        self._ensure_age_gender()
        if self._age_gender is None:
            return {}
        import numpy as np

        try:
            inputs = self._age_gender_processor(
                waveform_16k, sampling_rate=16000, return_tensors="pt"
            )
            with self._torch.no_grad():
                outputs = self._age_gender(**inputs)
            # Audeering's checkpoint is published with a custom head: the
            # official recipe exposes `(hidden_states, logits_age, logits_gender)`.
            # AutoModelForAudioClassification flattens that into a single
            # `logits` tensor of shape [batch, 4] — [age_regression, female, male, child].
            # Fall back gracefully when the shape is different (e.g. a
            # user-supplied age_gender_model checkpoint that returns a proper tuple).
            hidden = getattr(outputs, "logits", outputs)
            age_years = None
            gender_logits = None
            if isinstance(hidden, (tuple, list)) and len(hidden) >= 2:
                age_years = float(hidden[0].squeeze().item()) * 100.0
                gender_logits = hidden[1]
            else:
                flat = hidden.squeeze()
                if flat.ndim == 1 and flat.numel() >= 4:
                    age_years = float(flat[0].item()) * 100.0
                    gender_logits = flat[1:4]
                elif flat.ndim == 1 and flat.numel() == 1:
                    age_years = float(flat.item()) * 100.0
            if age_years is None and gender_logits is None:
                return {}
            result: dict[str, Any] = {}
            if age_years is not None:
                result["age"] = age_years
            if gender_logits is not None:
                probs = self._torch.softmax(gender_logits, dim=-1).cpu().numpy()
                probs = np.asarray(probs).reshape(-1)
                gender_map = {
                    label: float(probs[i])
                    for i, label in enumerate(self.AGE_GENDER_LABELS[: len(probs)])
                }
                result["gender"] = gender_map
                if gender_map:
                    dom = max(gender_map.items(), key=lambda kv: kv[1])[0]
                    result["dominant_gender"] = {
                        "female": "Female",
                        "male": "Male",
                        "child": "Child",
                    }.get(dom, dom.capitalize())
            return result
        except Exception as exc:  # noqa: BLE001
            # Analyze is a best-effort feature — never take down the
            # whole analyze call because the age/gender head had a bad
            # day. Mark the failure so the emotion branch still runs.
            self._age_gender_error = f"runtime: {type(exc).__name__}: {exc}"
            return {}

    # --- emotion ------------------------------------------------------

    def _ensure_emotion(self):
        if self._emotion_loaded:
            return
        self._emotion_loaded = True
        model_id = self._options.get("emotion_model", self.DEFAULT_EMOTION_MODEL)
        if not model_id:
            self._emotion_error = "disabled"
            return
        try:
            from transformers import pipeline  # type: ignore

            self._emotion = pipeline("audio-classification", model=model_id)
        except Exception as exc:  # noqa: BLE001
            self._emotion_error = f"{type(exc).__name__}: {exc}"

    def _infer_emotion(self, audio_path: str) -> dict[str, Any]:
        self._ensure_emotion()
        if self._emotion is None:
            return {}
        try:
            raw = self._emotion(audio_path, top_k=8)
        except Exception as exc:  # noqa: BLE001
            # Second-line defense: don't fail the whole analyze call
            # over a runtime inference hiccup.
            self._emotion_error = f"runtime: {type(exc).__name__}: {exc}"
            return {}
        emotion_map = {row["label"].lower(): float(row["score"]) for row in raw}
        if not emotion_map:
            return {}
        dom = max(emotion_map.items(), key=lambda kv: kv[1])[0]
        return {"emotion": emotion_map, "dominant_emotion": dom}

    # --- orchestrator -------------------------------------------------

    def analyze(self, audio_path: str, waveform_16k, actions: Iterable[str]) -> dict[str, Any]:
        wanted = {a.strip().lower() for a in actions} if actions else {"age", "gender", "emotion"}
        result: dict[str, Any] = {}
        if "age" in wanted or "gender" in wanted:
            ag = self._infer_age_gender(waveform_16k)
            if "age" in wanted and "age" in ag:
                result["age"] = ag["age"]
            if "gender" in wanted:
                if "gender" in ag:
                    result["gender"] = ag["gender"]
                if "dominant_gender" in ag:
                    result["dominant_gender"] = ag["dominant_gender"]
        if "emotion" in wanted:
            em = self._infer_emotion(audio_path)
            result.update(em)
        return result


class SpeechBrainEngine:
    """ECAPA-TDNN via SpeechBrain. Auto-downloads on first use."""

    name = "speechbrain-ecapa-tdnn"

    def __init__(self, model_name: str, options: dict[str, str]):
        # Late imports so the module can be introspected / tested
        # without torch / speechbrain being installed.
        from speechbrain.inference.speaker import EncoderClassifier  # type: ignore

        source = options.get("source") or model_name or "speechbrain/spkrec-ecapa-voxceleb"
        savedir = options.get("_model_path") or os.environ.get("HF_HOME") or "./pretrained_models"
        self._model = EncoderClassifier.from_hparams(source=source, savedir=savedir)
        self._analysis = AnalysisHead(options)

    def _load_waveform(self, path: str):
        # Use soundfile + torch directly — torchaudio.load in torchaudio
        # 2.8+ requires the torchcodec package for decoding, which adds
        # another heavy ffmpeg-linked dep. soundfile covers WAV/FLAC,
        # which is what we care about here.
        import numpy as np
        import soundfile as sf  # type: ignore
        import torch  # type: ignore

        audio, sr = sf.read(path, always_2d=False)
        if audio.ndim > 1:
            audio = audio.mean(axis=1)
        audio = np.asarray(audio, dtype=np.float32)
        if sr != 16000:
            # Simple linear resample — good enough for 16kHz downsampling
            # from 44.1/48kHz, and we expect 16kHz inputs in practice.
            ratio = 16000 / float(sr)
            n = int(round(len(audio) * ratio))
            audio = np.interp(
                np.linspace(0, len(audio), n, endpoint=False),
                np.arange(len(audio)),
                audio,
            ).astype(np.float32)
        return torch.from_numpy(audio).unsqueeze(0)  # [1, T]

    def embed(self, audio_path: str) -> list[float]:
        waveform = self._load_waveform(audio_path)
        vec = self._model.encode_batch(waveform).squeeze().detach().cpu().numpy()
        return [float(x) for x in vec]

    def compare(self, audio1: str, audio2: str) -> float:
        return _cosine_distance(self.embed(audio1), self.embed(audio2))

    def analyze(self, audio_path: str, actions):
        # Age / gender / emotion aren't produced by ECAPA-TDNN itself;
        # delegate to AnalysisHead which wraps separate Apache-2.0
        # checkpoints. Returns a single segment spanning the clip —
        # segmentation / diarisation is a future enhancement.
        waveform = self._load_waveform(audio_path)
        mono = waveform.squeeze().detach().cpu().numpy()
        attrs = self._analysis.analyze(audio_path, mono, actions)
        if not attrs:
            raise NotImplementedError(
                "analyze head failed to load — install transformers + torch or pass age_gender_model/emotion_model options"
            )
        duration = float(mono.shape[-1]) / 16000.0 if mono.size else 0.0
        return [dict(start=0.0, end=duration, **attrs)]


class OnnxDirectEngine:
    """Run a pre-exported ONNX speaker encoder (WeSpeaker / 3D-Speaker)."""

    name = "onnx-direct"

    def __init__(self, model_name: str, options: dict[str, str]):
        import onnxruntime as ort  # type: ignore

        # The gallery is expected to have dropped the ONNX file under
        # the models directory; accept either an absolute path or a
        # filename relative to _model_path.
        onnx_path = options.get("model_path") or options.get("onnx")
        if not onnx_path:
            raise ValueError("OnnxDirectEngine requires `model_path: <file.onnx>` in options")
        if not os.path.isabs(onnx_path):
            onnx_path = os.path.join(options.get("_model_path", ""), onnx_path)
        if not os.path.isfile(onnx_path):
            raise FileNotFoundError(f"ONNX model not found: {onnx_path}")
        providers = options.get("providers")
        if providers:
            provider_list = [p.strip() for p in providers.split(",") if p.strip()]
        else:
            provider_list = ["CPUExecutionProvider"]
        self._session = ort.InferenceSession(onnx_path, providers=provider_list)
        self._input_name = self._session.get_inputs()[0].name
        self._expected_sr = int(options.get("sample_rate", "16000"))
        self._analysis = AnalysisHead(options)

    def _load_waveform(self, path: str):
        import numpy as np
        import soundfile as sf  # type: ignore

        audio, sr = sf.read(path, always_2d=False)
        # Mix to mono before resampling — np.interp only handles 1-D
        # arrays, so multi-channel input would otherwise fail below.
        if audio.ndim > 1:
            audio = audio.mean(axis=1)
        if sr != self._expected_sr:
            # Cheap linear resample — good enough for sanity; callers
            # should pre-resample for production.
            ratio = self._expected_sr / float(sr)
            n = int(round(len(audio) * ratio))
            audio = np.interp(
                np.linspace(0, len(audio), n, endpoint=False),
                np.arange(len(audio)),
                audio,
            )
        return audio.astype("float32")

    def embed(self, audio_path: str) -> list[float]:
        import numpy as np

        audio = self._load_waveform(audio_path)
        feed = audio.reshape(1, -1)
        out = self._session.run(None, {self._input_name: feed})
        vec = np.asarray(out[0]).reshape(-1)
        return [float(x) for x in vec]

    def compare(self, audio1: str, audio2: str) -> float:
        return _cosine_distance(self.embed(audio1), self.embed(audio2))

    def analyze(self, audio_path: str, actions):
        # AnalysisHead expects 16kHz mono; _load_waveform already
        # resamples to self._expected_sr. If the user configured a
        # non-16k expected rate, resample one more time for analyze.
        audio = self._load_waveform(audio_path)
        if self._expected_sr != 16000:
            import numpy as np

            ratio = 16000 / float(self._expected_sr)
            n = int(round(len(audio) * ratio))
            audio = np.interp(
                np.linspace(0, len(audio), n, endpoint=False),
                np.arange(len(audio)),
                audio,
            ).astype("float32")
        attrs = self._analysis.analyze(audio_path, audio, actions)
        if not attrs:
            raise NotImplementedError(
                "analyze head failed to load — install transformers + torch or pass age_gender_model/emotion_model options"
            )
        duration = float(len(audio)) / 16000.0 if len(audio) else 0.0
        return [dict(start=0.0, end=duration, **attrs)]


def build_engine(model_name: str, options: dict[str, str]) -> tuple[SpeakerEngine, str]:
    """Pick an engine based on the options.

    ONNX path takes priority: if the gallery has dropped a
    `model_path:` or `onnx:` option, run the direct ONNX engine.
    Otherwise, fall back to SpeechBrain.
    """
    engine_kind = (options.get("engine") or "").lower()
    if engine_kind == "onnx" or options.get("model_path") or options.get("onnx"):
        return OnnxDirectEngine(model_name, options), OnnxDirectEngine.name
    return SpeechBrainEngine(model_name, options), SpeechBrainEngine.name
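

# Illustrative usage — a sketch only; the model id and file names below
# are placeholders, not part of this module:
#
#     engine, name = build_engine("speechbrain/spkrec-ecapa-voxceleb", {})
#     vec = engine.embed("clip_16k.wav")       # 192-d L2-normalized vector
#     dist = engine.compare("a.wav", "b.wav")  # cosine distance; lower = closer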