feat: voice recognition (#9500)

* feat(voice-recognition): add /v1/voice/{verify,analyze,embed} + speaker-recognition backend

Audio analog to face recognition. Adds three gRPC RPCs
(VoiceVerify / VoiceAnalyze / VoiceEmbed), their Go service and HTTP
layers, a new FLAG_SPEAKER_RECOGNITION capability flag, and a Python
backend scaffold under backend/python/speaker-recognition/ wrapping
SpeechBrain ECAPA-TDNN with a parallel OnnxDirectEngine for
WeSpeaker / 3D-Speaker ONNX exports.
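
For orientation, a minimal Python client sketch against the generated
gRPC stubs (message and stub names are taken from the servicer code
further down in this diff; the address and file paths are illustrative):

    import grpc
    import backend_pb2
    import backend_pb2_grpc

    # Assumes a speaker-recognition backend is already listening locally.
    channel = grpc.insecure_channel("localhost:50051")
    stub = backend_pb2_grpc.BackendStub(channel)
    stub.LoadModel(backend_pb2.ModelOptions(Model="speechbrain/spkrec-ecapa-voxceleb"))
    resp = stub.VoiceVerify(
        backend_pb2.VoiceVerifyRequest(audio1="/tmp/a.wav", audio2="/tmp/b.wav")
    )
    print(resp.verified, resp.distance, resp.confidence)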

The kokoros Rust backend gets matching unimplemented trait stubs —
tonic's async_trait has no defaults, so adding an RPC without Rust
stubs breaks the build (same regression fixed by eb01c772 for face).

Swagger, /api/instructions, and the auth RouteFeatureRegistry /
APIFeatures list are updated so the endpoints surface everywhere a
client or admin UI looks.

Assisted-by: Claude:claude-opus-4-7

* feat(voice-recognition): add 1:N identify + register/forget endpoints

Mirrors the face-recognition register/identify/forget surface. New
package core/services/voicerecognition/ carries a Registry interface
and a local-store-backed implementation (same in-memory vector-store
plumbing facerecognition uses, separate instance so the embedding
spaces stay isolated).

Handlers under /v1/voice/{register,identify,forget} reuse
backend.VoiceEmbed to compute the probe vector, then delegate the
nearest-neighbour search to the registry. Default cosine-distance
threshold is tuned for ECAPA-TDNN on VoxCeleb (0.25, EER ~1.9%).
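
For reference, the decision rule in a minimal Python sketch (the real
comparison lives in the backend's _cosine_distance helper and the
registry's nearest-neighbour search):

    import numpy as np

    def cosine_distance(a, b):
        a = np.asarray(a, dtype=np.float32)
        b = np.asarray(b, dtype=np.float32)
        return float(1.0 - np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    # A probe is accepted as the registered speaker when the distance to
    # the nearest stored embedding is at or below the threshold (0.25).
    def is_match(probe, nearest, threshold=0.25):
        return cosine_distance(probe, nearest) <= threshold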

As with the face registry, the current backing is in-memory only — a
pgvector implementation is a future constructor-level swap.

Assisted-by: Claude:claude-opus-4-7

* feat(voice-recognition): gallery, docs, CI and e2e coverage

- backend/index.yaml: speaker-recognition backend entry + CPU and
  CUDA-12 image variants (plus matching development variants).
- gallery/index.yaml: speechbrain-ecapa-tdnn (default) and
  wespeaker-resnet34 model entries. The WeSpeaker SHA-256 is a
  deliberate placeholder — the HF URI must be curl'd and its hash
  filled in before the entry installs.
- docs/content/features/voice-recognition.md: API reference + quickstart,
  mirrors the face-recognition docs.
- React UI: CAP_SPEAKER_RECOGNITION flag export (consumers follow face's
  precedent — no dedicated tab yet).
- tests/e2e-backends: voice_embed / voice_verify / voice_analyze specs.
  Helper resolveFaceFixture is reused as-is — the only thing face/voice
  share is "download a file into workDir", so no need for a new helper.
- Makefile: docker-build-speaker-recognition + test-extra-backend-
  speaker-recognition-{ecapa,all} targets. Audio fixtures default to
  VCTK p225/p226 samples from HuggingFace.
- CI: test-extra.yml grows a tests-speaker-recognition-grpc job
  mirroring insightface. backend.yml matrix gains CPU + CUDA-12 image
  build entries — scripts/changed-backends.js auto-picks these up.

Assisted-by: Claude:claude-opus-4-7

* feat(voice-recognition): wire a working /v1/voice/analyze head

Adds AnalysisHead: a lazy-loading age / gender / emotion inference
wrapper that plugs into both SpeechBrainEngine and OnnxDirectEngine.

Defaults to two open-licence HuggingFace checkpoints:
  - audeering/wav2vec2-large-robust-24-ft-age-gender (Apache 2.0) —
    age regression + 3-way gender (female / male / child).
  - superb/wav2vec2-base-superb-er (Apache 2.0) — 4-way emotion.

Both are optional and degrade gracefully when transformers or a
model can't be loaded; when neither head loads, the engine raises
NotImplementedError so the call surfaces as 501 instead of a generic
500.
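
Condensed from the servicer code later in this diff, the failure path
is roughly as follows (the 501 itself is produced by the HTTP layer,
which is not shown here):

    try:
        segments = engine.analyze(request.audio, actions)
    except NotImplementedError:
        # No analysis head could be loaded: report UNIMPLEMENTED
        # instead of a generic internal error.
        context.set_code(grpc.StatusCode.UNIMPLEMENTED)
        context.set_details(f"analyze not supported by {self.engine_name}")
        return backend_pb2.VoiceAnalyzeResponse()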

Emotion classes pass through from the model (neutral/happy/angry/sad
on the default checkpoint); the e2e test now accepts any non-empty
dominant gender so custom age_gender_model overrides don't fail it.

Adds transformers to the backend's CPU and CUDA-12 requirements.

Assisted-by: Claude:claude-opus-4-7

* fix(voice-recognition): pin real WeSpeaker ResNet34 ONNX SHA-256

Replaces the placeholder hash in gallery/index.yaml with the actual
SHA-256 (7bb2f06e…) of the upstream
Wespeaker/wespeaker-voxceleb-resnet34-LM ONNX at ~25MB. `local-ai
models install wespeaker-resnet34` now succeeds.

Assisted-by: Claude:claude-opus-4-7

* fix(voice-recognition): soundfile loader + honest analyze default

Two issues surfaced during the first end-to-end smoke test against the
actual backend image:

1. torchaudio.load in torchaudio 2.8+ requires the torchcodec package
   for audio decoding. Switch SpeechBrainEngine._load_waveform to the
   already-present soundfile (listed in requirements.txt) plus a numpy
   linear resample to 16kHz. Drops a heavy ffmpeg-linked dep and the
   codepath we never exercise (torchaudio's ffmpeg backend).

2. The AnalysisHead was defaulting to audeering/wav2vec2-large-robust-
   24-ft-age-gender, but AutoModelForAudioClassification silently
   mangles that checkpoint — it reports the age head weights as
   UNEXPECTED and re-initialises the classifier head with random
   values, so the "gender" output is noise and there is no age output
   at all. Make age/gender opt-in instead (empty default; users wire
   a cleanly-loadable Wav2Vec2ForSequenceClassification checkpoint via
   age_gender_model: option, as sketched after this list). Emotion
   keeps its working Superb default.
   Also broaden _infer_age_gender's tensor-shape handling and catch
   runtime exceptions so a dodgy age/gender head never takes down the
   whole analyze call.
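
The opt-in wiring rides on the backend's plain key:value Options
strings, parsed by _parse_options in backend.py (the repo name below is
a placeholder, not a recommendation):

    # One "key:value" string per LoadModel option:
    options = _parse_options([
        "age_gender_model:your-org/age-gender-checkpoint",
        "emotion_model:superb/wav2vec2-base-superb-er",
    ])
    # -> {"age_gender_model": "your-org/age-gender-checkpoint",
    #     "emotion_model": "superb/wav2vec2-base-superb-er"}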

Docs and README updated to match the new policy.

Verified with the branch-scoped gallery on localhost:
- voice/embed    → 192-d ECAPA-TDNN vector
- voice/verify   → same-clip dist≈6e-08 verified=true; cross-speaker
                   dist 0.76–0.99 verified=false (as expected)
- voice/register/identify/forget → round-trip works, 404 on unknown id
- voice/analyze  → emotion populated, age/gender omitted (opt-in)

Assisted-by: Claude:claude-opus-4-7

* fix(voice-recognition): real CI audio fixtures + fixture-agnostic verify spec

Two issues surfaced after CI actually ran the speaker-recognition e2e
target (I'd curl-tested against a running server but hadn't run the
make target locally):

1. The default BACKEND_TEST_VOICE_AUDIO_* URLs pointed at
   huggingface.co/datasets/CSTR-Edinburgh/vctk paths that return 404
   (the dataset is gated). Swap them for the speechbrain test samples
   served from github.com/speechbrain/speechbrain/raw/develop/ —
   public, no auth, correct 16kHz mono format.

2. The VoiceVerify spec required d(file1,file2) < 0.4, assuming
   file1/file2 were same-speaker. The speechbrain samples are three
   different speakers (example1/2/5), and there is no easy un-gated
   source of true same-speaker audio pairs (VoxCeleb/VCTK/LibriSpeech
   are all license- or size-gated for CI use). Replace the ceiling
   check with a relative-ordering assertion (sketched below):
   d(pair) > d(same-clip)
   for both file2 and file3 — that's enough to prove the embeddings
   encode speaker info, and it works with any three non-identical
   clips. Actual speaker ordering d(1,2) vs d(1,3) is logged but not
   asserted.
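
A minimal sketch of the invariant (Python pseudocode; the actual spec
lives in the Go e2e suite, and verify() here stands in for a call to
/v1/voice/verify that returns the distance):

    d_same  = verify(file1, file1)   # identical clip vs itself, ~0
    d_pair2 = verify(file1, file2)
    d_pair3 = verify(file1, file3)
    assert d_pair2 > d_same
    assert d_pair3 > d_same
    # d_pair2 vs d_pair3 (the actual speaker ordering) is logged, not asserted.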

Local run: 4/4 voice specs pass (Health, LoadModel, VoiceEmbed,
VoiceVerify) on the built backend image. 12 non-voice specs skipped
as expected.

Assisted-by: Claude:claude-opus-4-7

* fix(ci): checkout with submodules in the reusable backend_build workflow

The kokoros Rust backend build fails with

    failed to read .../sources/Kokoros/kokoros/Cargo.toml: No such file

because the reusable backend_build.yml workflow's actions/checkout
step was missing `submodules: true`. Dockerfile.rust does `COPY .
/LocalAI`, and without the submodule files the subsequent `cargo
build` can't find the vendored Kokoros crate.

The bug pre-dates this PR — scripts/changed-backends.js only triggers
the kokoros image job when something under backend/rust/kokoros or
the shared proto changes, so master had been coasting past it. The
voice-recognition proto addition re-broke it.

Other checkouts in backend.yml (llama-cpp-darwin) and test-extra.yml
(insightface, kokoros, speaker-recognition) already pass
`submodules: true`; this brings the shared backend image builder in
line.

Assisted-by: Claude:claude-opus-4-7
Author: Ettore Di Giacinto
Date:   2026-04-23 12:07:14 +02:00 (committed by GitHub)
Parent: 1c59165d63
Commit: 181ebb6df4

53 changed files with 3747 additions and 6 deletions

View File

@@ -0,0 +1,13 @@
.DEFAULT_GOAL := install

.PHONY: install
install:
	bash install.sh

.PHONY: protogen-clean
protogen-clean:
	$(RM) backend_pb2_grpc.py backend_pb2.py

.PHONY: clean
clean: protogen-clean
	rm -rf venv __pycache__

View File

@@ -0,0 +1,40 @@
# speaker-recognition

Speaker (voice) recognition backend for LocalAI. The audio analog to
`insightface` — produces speaker embeddings and supports 1:1 voice
verification and voice demographic analysis.

## Engines

- **SpeechBrainEngine** (default): ECAPA-TDNN trained on VoxCeleb.
  192-d L2-normalised embeddings, cosine distance for verification.
  Auto-downloads from HuggingFace on first LoadModel.
- **OnnxDirectEngine**: Any pre-exported ONNX speaker encoder
  (WeSpeaker ResNet, 3D-Speaker ERes2Net, CAM++, …). Model path comes
  from the gallery `files:` entry.

Engine selection is gallery-driven: if the model config provides
`model_path:` / `onnx:` the ONNX engine is used, otherwise the
SpeechBrain engine.

## Endpoints

- `POST /v1/voice/verify` — 1:1 same-speaker check.
- `POST /v1/voice/embed` — extract a speaker embedding vector.
- `POST /v1/voice/analyze` — voice demographics, loaded lazily on
  the first analyze call:
  - **Emotion** (default, opt-out): `superb/wav2vec2-base-superb-er`
    (Apache-2.0), 4-way categorical (neutral / happy / angry / sad).
  - **Age + gender** (opt-in): no default — wire a checkpoint with a
    standard `Wav2Vec2ForSequenceClassification` head via
    `age_gender_model:<repo>` in options. The Audeering
    age-gender model is *not* usable as a drop-in because its
    multi-task head isn't loadable via `AutoModelForAudioClassification`.

Both heads are optional. When nothing loads, the engine returns 501.

## Audio input

Audio is materialised by the HTTP layer to a temp wav before calling
the gRPC backend. Accepted input forms on the HTTP side: URL, data-URI,
or raw base64. The backend itself always receives a filesystem path.

View File

@@ -0,0 +1,205 @@
#!/usr/bin/env python3
"""gRPC server for the LocalAI speaker-recognition backend.

Implements Health / LoadModel / Status plus the voice-specific methods:
VoiceVerify, VoiceAnalyze, VoiceEmbed. The heavy lifting lives in
engines.py — this file is just the gRPC plumbing, mirroring the
insightface backend's two-engine split (SpeechBrain + OnnxDirect).
"""
from __future__ import annotations

import argparse
import os
import signal
import sys
import time
from concurrent import futures

import backend_pb2
import backend_pb2_grpc
import grpc

sys.path.insert(0, os.path.join(os.path.dirname(__file__), "..", "common"))
sys.path.insert(0, os.path.join(os.path.dirname(__file__), "common"))

from grpc_auth import get_auth_interceptors  # noqa: E402
from engines import SpeakerEngine, build_engine  # noqa: E402

_ONE_DAY = 60 * 60 * 24
MAX_WORKERS = int(os.environ.get("PYTHON_GRPC_MAX_WORKERS", "1"))

# ECAPA-TDNN on VoxCeleb is the reference. Threshold is tuned for
# cosine distance (1 - cosine_similarity). Clients may override.
DEFAULT_VERIFY_THRESHOLD = 0.25


def _parse_options(raw: list[str]) -> dict[str, str]:
    out: dict[str, str] = {}
    for entry in raw:
        if ":" not in entry:
            continue
        k, v = entry.split(":", 1)
        out[k.strip()] = v.strip()
    return out

class BackendServicer(backend_pb2_grpc.BackendServicer):
    def __init__(self) -> None:
        self.engine: SpeakerEngine | None = None
        self.engine_name: str = ""
        self.model_name: str = ""
        self.verify_threshold: float = DEFAULT_VERIFY_THRESHOLD

    def Health(self, request, context):
        return backend_pb2.Reply(message=bytes("OK", "utf-8"))

    def LoadModel(self, request, context):
        options = _parse_options(list(request.Options))
        # Surface LocalAI's models directory (ModelPath) so engines can
        # anchor relative paths and auto-download into a writable spot
        # alongside every other gallery-managed asset.
        options["_model_path"] = request.ModelPath or ""
        try:
            engine, engine_name = build_engine(request.Model, options)
        except Exception as exc:  # noqa: BLE001
            return backend_pb2.Result(success=False, message=f"engine init failed: {exc}")
        self.engine = engine
        self.engine_name = engine_name
        self.model_name = request.Model
        threshold_opt = options.get("verify_threshold")
        if threshold_opt:
            try:
                self.verify_threshold = float(threshold_opt)
            except ValueError:
                pass
        return backend_pb2.Result(success=True, message=f"loaded {engine_name}")

    def Status(self, request, context):
        state = backend_pb2.StatusResponse.State.READY if self.engine else backend_pb2.StatusResponse.State.UNINITIALIZED
        return backend_pb2.StatusResponse(state=state)

    def _require_engine(self, context) -> SpeakerEngine | None:
        if self.engine is None:
            context.set_code(grpc.StatusCode.FAILED_PRECONDITION)
            context.set_details("no speaker-recognition model loaded")
            return None
        return self.engine

    def VoiceVerify(self, request, context):
        engine = self._require_engine(context)
        if engine is None:
            return backend_pb2.VoiceVerifyResponse()
        if not request.audio1 or not request.audio2:
            context.set_code(grpc.StatusCode.INVALID_ARGUMENT)
            context.set_details("audio1 and audio2 are required")
            return backend_pb2.VoiceVerifyResponse()
        threshold = request.threshold if request.threshold > 0 else self.verify_threshold
        started = time.time()
        try:
            distance = engine.compare(request.audio1, request.audio2)
        except Exception as exc:  # noqa: BLE001
            context.set_code(grpc.StatusCode.INTERNAL)
            context.set_details(f"voice verify failed: {exc}")
            return backend_pb2.VoiceVerifyResponse()
        elapsed_ms = (time.time() - started) * 1000.0
        # Confidence goes linearly from 100 at distance=0 to 0 at distance=threshold.
        confidence = max(0.0, min(100.0, (1.0 - distance / threshold) * 100.0))
        return backend_pb2.VoiceVerifyResponse(
            verified=distance <= threshold,
            distance=distance,
            threshold=threshold,
            confidence=confidence,
            model=self.model_name,
            processing_time_ms=elapsed_ms,
        )
    def VoiceEmbed(self, request, context):
        engine = self._require_engine(context)
        if engine is None:
            return backend_pb2.VoiceEmbedResponse()
        if not request.audio:
            context.set_code(grpc.StatusCode.INVALID_ARGUMENT)
            context.set_details("audio is required")
            return backend_pb2.VoiceEmbedResponse()
        try:
            vec = engine.embed(request.audio)
        except Exception as exc:  # noqa: BLE001
            context.set_code(grpc.StatusCode.INTERNAL)
            context.set_details(f"voice embed failed: {exc}")
            return backend_pb2.VoiceEmbedResponse()
        return backend_pb2.VoiceEmbedResponse(embedding=list(vec), model=self.model_name)

    def VoiceAnalyze(self, request, context):
        engine = self._require_engine(context)
        if engine is None:
            return backend_pb2.VoiceAnalyzeResponse()
        if not request.audio:
            context.set_code(grpc.StatusCode.INVALID_ARGUMENT)
            context.set_details("audio is required")
            return backend_pb2.VoiceAnalyzeResponse()
        actions = list(request.actions) or ["age", "gender", "emotion"]
        try:
            segments = engine.analyze(request.audio, actions)
        except NotImplementedError:
            context.set_code(grpc.StatusCode.UNIMPLEMENTED)
            context.set_details(f"analyze not supported by {self.engine_name}")
            return backend_pb2.VoiceAnalyzeResponse()
        except Exception as exc:  # noqa: BLE001
            context.set_code(grpc.StatusCode.INTERNAL)
            context.set_details(f"voice analyze failed: {exc}")
            return backend_pb2.VoiceAnalyzeResponse()
        proto_segments = []
        for seg in segments:
            proto_segments.append(
                backend_pb2.VoiceAnalysis(
                    start=seg.get("start", 0.0),
                    end=seg.get("end", 0.0),
                    age=seg.get("age", 0.0),
                    dominant_gender=seg.get("dominant_gender", ""),
                    gender=seg.get("gender", {}),
                    dominant_emotion=seg.get("dominant_emotion", ""),
                    emotion=seg.get("emotion", {}),
                )
            )
        return backend_pb2.VoiceAnalyzeResponse(segments=proto_segments)

def serve(address: str) -> None:
    interceptors = get_auth_interceptors()
    server = grpc.server(
        futures.ThreadPoolExecutor(max_workers=MAX_WORKERS),
        interceptors=interceptors,
        options=[
            ("grpc.max_send_message_length", 128 * 1024 * 1024),
            ("grpc.max_receive_message_length", 128 * 1024 * 1024),
        ],
    )
    backend_pb2_grpc.add_BackendServicer_to_server(BackendServicer(), server)
    server.add_insecure_port(address)
    server.start()
    print("speaker-recognition backend listening on", address, flush=True)

    def _stop(*_):
        server.stop(0)
        sys.exit(0)

    signal.signal(signal.SIGTERM, _stop)
    signal.signal(signal.SIGINT, _stop)
    try:
        while True:
            time.sleep(_ONE_DAY)
    except KeyboardInterrupt:
        server.stop(0)


if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    parser.add_argument("--addr", default="localhost:50051")
    args = parser.parse_args()
    serve(args.addr)

View File

@@ -0,0 +1,387 @@
"""Speaker-recognition engines.

Two engines are offered, mirroring the insightface backend's split:

* SpeechBrainEngine: full PyTorch / SpeechBrain path. Uses the
  ECAPA-TDNN recipe trained on VoxCeleb; 192-d L2-normalized
  embeddings, cosine distance for verification. Auto-downloads the
  checkpoint into LocalAI's models directory on first LoadModel.
* OnnxDirectEngine: CPU-friendly fallback that runs pre-exported
  ONNX speaker encoders (WeSpeaker ResNet34, 3D-Speaker ERes2Net,
  CAM++, etc.). Model paths come from the model config — the gallery
  `files:` flow drops them into the models directory.

Engine selection follows the same gallery-driven convention face
recognition uses (insightface commits 9c6da0f7 / 405fec0b): the
Python backend reads `engine` / `model_path` / `checkpoint` from the
options dict and picks an engine accordingly.
"""
from __future__ import annotations

import os
from typing import Any, Iterable, Protocol


class SpeakerEngine(Protocol):
    """Interface both concrete engines satisfy."""

    name: str

    def embed(self, audio_path: str) -> list[float]:  # pragma: no cover - interface
        ...

    def compare(self, audio1: str, audio2: str) -> float:  # pragma: no cover
        ...

    def analyze(self, audio_path: str, actions: Iterable[str]) -> list[dict[str, Any]]:  # pragma: no cover
        ...


def _cosine_distance(a, b) -> float:
    import numpy as np

    va = np.asarray(a, dtype=np.float32).reshape(-1)
    vb = np.asarray(b, dtype=np.float32).reshape(-1)
    na = float(np.linalg.norm(va))
    nb = float(np.linalg.norm(vb))
    if na == 0.0 or nb == 0.0:
        return 1.0
    return float(1.0 - np.dot(va, vb) / (na * nb))

class AnalysisHead:
    """Age / gender / emotion head, lazy-loaded on first analyze call.

    Wraps two open-licence HuggingFace checkpoints:

    * audeering/wav2vec2-large-robust-24-ft-age-gender — age
      regression (0–100 years) + 3-way gender (female/male/child).
      Apache 2.0.
    * superb/wav2vec2-base-superb-er — 4-way emotion classification
      (neutral / happy / angry / sad). Apache 2.0.

    Either model is optional — the head degrades gracefully to only the
    attributes it could load. Override the checkpoint with the
    `age_gender_model` / `emotion_model` option if you want something
    else. Set either to an empty string to disable that head.
    """

    # Age + gender is OFF by default: the high-accuracy Apache-2.0
    # checkpoint (Audeering wav2vec2-large-robust-24-ft-age-gender) uses a
    # custom multi-task head that AutoModelForAudioClassification silently
    # mangles — it drops the age weights as UNEXPECTED and re-initialises
    # the classifier head with random values, so the output is noise. Users
    # who have a cleanly loadable age/gender classifier can opt in with
    # `age_gender_model:<repo>` in options. The emotion default below
    # (superb/wav2vec2-base-superb-er) loads via the standard audio-
    # classification pipeline with no such caveat.
    DEFAULT_AGE_GENDER_MODEL = ""
    DEFAULT_EMOTION_MODEL = "superb/wav2vec2-base-superb-er"
    AGE_GENDER_LABELS = ("female", "male", "child")

    def __init__(self, options: dict[str, str]):
        self._options = options
        self._age_gender = None
        self._age_gender_processor = None
        self._age_gender_loaded = False
        self._age_gender_error: str | None = None
        self._emotion = None
        self._emotion_loaded = False
        self._emotion_error: str | None = None

    # --- age / gender -------------------------------------------------

    def _ensure_age_gender(self):
        if self._age_gender_loaded:
            return
        self._age_gender_loaded = True
        model_id = self._options.get(
            "age_gender_model", self.DEFAULT_AGE_GENDER_MODEL
        )
        if not model_id:
            self._age_gender_error = "disabled"
            return
        try:
            # Late imports — torch / transformers are heavy and only
            # pulled in when the analyze head actually runs.
            import torch  # type: ignore
            from transformers import AutoFeatureExtractor, AutoModelForAudioClassification  # type: ignore

            self._torch = torch
            self._age_gender_processor = AutoFeatureExtractor.from_pretrained(model_id)
            self._age_gender = AutoModelForAudioClassification.from_pretrained(model_id)
            self._age_gender.eval()
        except Exception as exc:  # noqa: BLE001
            self._age_gender_error = f"{type(exc).__name__}: {exc}"

    def _infer_age_gender(self, waveform_16k) -> dict[str, Any]:
        self._ensure_age_gender()
        if self._age_gender is None:
            return {}
        import numpy as np

        try:
            inputs = self._age_gender_processor(
                waveform_16k, sampling_rate=16000, return_tensors="pt"
            )
            with self._torch.no_grad():
                outputs = self._age_gender(**inputs)
            # Audeering's checkpoint is published with a custom head: the
            # official recipe exposes `(hidden_states, logits_age, logits_gender)`.
            # AutoModelForAudioClassification flattens that into a single
            # `logits` tensor of shape [batch, 4] — [age_regression, female, male, child].
            # Fall back gracefully when the shape is different (e.g. a
            # user-supplied age_gender_model checkpoint that returns a proper tuple).
            hidden = getattr(outputs, "logits", outputs)
            age_years = None
            gender_logits = None
            if isinstance(hidden, (tuple, list)) and len(hidden) >= 2:
                age_years = float(hidden[0].squeeze().item()) * 100.0
                gender_logits = hidden[1]
            else:
                flat = hidden.squeeze()
                if flat.ndim == 1 and flat.numel() >= 4:
                    age_years = float(flat[0].item()) * 100.0
                    gender_logits = flat[1:4]
                elif flat.ndim == 1 and flat.numel() == 1:
                    age_years = float(flat.item()) * 100.0
            if age_years is None and gender_logits is None:
                return {}
            result: dict[str, Any] = {}
            if age_years is not None:
                result["age"] = age_years
            if gender_logits is not None:
                probs = self._torch.softmax(gender_logits, dim=-1).cpu().numpy()
                probs = np.asarray(probs).reshape(-1)
                gender_map = {
                    label: float(probs[i])
                    for i, label in enumerate(self.AGE_GENDER_LABELS[: len(probs)])
                }
                result["gender"] = gender_map
                if gender_map:
                    dom = max(gender_map.items(), key=lambda kv: kv[1])[0]
                    result["dominant_gender"] = {
                        "female": "Female",
                        "male": "Male",
                        "child": "Child",
                    }.get(dom, dom.capitalize())
            return result
        except Exception as exc:  # noqa: BLE001
            # Analyze is a best-effort feature — never take down the
            # whole analyze call because the age/gender head had a bad
            # day. Mark the failure so the emotion branch still runs.
            self._age_gender_error = f"runtime: {type(exc).__name__}: {exc}"
            return {}
    # --- emotion ------------------------------------------------------

    def _ensure_emotion(self):
        if self._emotion_loaded:
            return
        self._emotion_loaded = True
        model_id = self._options.get("emotion_model", self.DEFAULT_EMOTION_MODEL)
        if not model_id:
            self._emotion_error = "disabled"
            return
        try:
            from transformers import pipeline  # type: ignore

            self._emotion = pipeline("audio-classification", model=model_id)
        except Exception as exc:  # noqa: BLE001
            self._emotion_error = f"{type(exc).__name__}: {exc}"

    def _infer_emotion(self, audio_path: str) -> dict[str, Any]:
        self._ensure_emotion()
        if self._emotion is None:
            return {}
        try:
            raw = self._emotion(audio_path, top_k=8)
        except Exception as exc:  # noqa: BLE001
            # Second-line defense: don't fail the whole analyze call
            # over a runtime inference hiccup.
            self._emotion_error = f"runtime: {type(exc).__name__}: {exc}"
            return {}
        emotion_map = {row["label"].lower(): float(row["score"]) for row in raw}
        if not emotion_map:
            return {}
        dom = max(emotion_map.items(), key=lambda kv: kv[1])[0]
        return {"emotion": emotion_map, "dominant_emotion": dom}

    # --- orchestrator -------------------------------------------------

    def analyze(self, audio_path: str, waveform_16k, actions: Iterable[str]) -> dict[str, Any]:
        wanted = {a.strip().lower() for a in actions} if actions else {"age", "gender", "emotion"}
        result: dict[str, Any] = {}
        if "age" in wanted or "gender" in wanted:
            ag = self._infer_age_gender(waveform_16k)
            if "age" in wanted and "age" in ag:
                result["age"] = ag["age"]
            if "gender" in wanted:
                if "gender" in ag:
                    result["gender"] = ag["gender"]
                if "dominant_gender" in ag:
                    result["dominant_gender"] = ag["dominant_gender"]
        if "emotion" in wanted:
            em = self._infer_emotion(audio_path)
            result.update(em)
        return result

class SpeechBrainEngine:
    """ECAPA-TDNN via SpeechBrain. Auto-downloads on first use."""

    name = "speechbrain-ecapa-tdnn"

    def __init__(self, model_name: str, options: dict[str, str]):
        # Late imports so the module can be introspected / tested
        # without torch / speechbrain being installed.
        from speechbrain.inference.speaker import EncoderClassifier  # type: ignore

        source = options.get("source") or model_name or "speechbrain/spkrec-ecapa-voxceleb"
        savedir = options.get("_model_path") or os.environ.get("HF_HOME") or "./pretrained_models"
        self._model = EncoderClassifier.from_hparams(source=source, savedir=savedir)
        self._analysis = AnalysisHead(options)

    def _load_waveform(self, path: str):
        # Use soundfile + torch directly — torchaudio.load in torchaudio
        # 2.8+ requires the torchcodec package for decoding, which adds
        # another heavy ffmpeg-linked dep. soundfile covers WAV/FLAC
        # which is what we care about here.
        import numpy as np
        import soundfile as sf  # type: ignore
        import torch  # type: ignore

        audio, sr = sf.read(path, always_2d=False)
        if audio.ndim > 1:
            audio = audio.mean(axis=1)
        audio = np.asarray(audio, dtype=np.float32)
        if sr != 16000:
            # Simple linear resample — good enough for 16kHz downsampling
            # from 44.1/48kHz, and we expect 16kHz inputs in practice.
            ratio = 16000 / float(sr)
            n = int(round(len(audio) * ratio))
            audio = np.interp(
                np.linspace(0, len(audio), n, endpoint=False),
                np.arange(len(audio)),
                audio,
            ).astype(np.float32)
        return torch.from_numpy(audio).unsqueeze(0)  # [1, T]

    def embed(self, audio_path: str) -> list[float]:
        waveform = self._load_waveform(audio_path)
        vec = self._model.encode_batch(waveform).squeeze().detach().cpu().numpy()
        return [float(x) for x in vec]

    def compare(self, audio1: str, audio2: str) -> float:
        return _cosine_distance(self.embed(audio1), self.embed(audio2))

    def analyze(self, audio_path: str, actions):
        # Age / gender / emotion aren't produced by ECAPA-TDNN itself;
        # delegate to AnalysisHead which wraps separate Apache-2.0
        # checkpoints. Returns a single segment spanning the clip —
        # segmentation / diarisation is a future enhancement.
        waveform = self._load_waveform(audio_path)
        mono = waveform.squeeze().detach().cpu().numpy()
        attrs = self._analysis.analyze(audio_path, mono, actions)
        if not attrs:
            raise NotImplementedError(
                "analyze head failed to load — install transformers + torch or pass age_gender_model/emotion_model options"
            )
        duration = float(mono.shape[-1]) / 16000.0 if mono.size else 0.0
        return [dict(start=0.0, end=duration, **attrs)]

class OnnxDirectEngine:
    """Run a pre-exported ONNX speaker encoder (WeSpeaker / 3D-Speaker)."""

    name = "onnx-direct"

    def __init__(self, model_name: str, options: dict[str, str]):
        import onnxruntime as ort  # type: ignore

        # The gallery is expected to have dropped the ONNX file under
        # the models directory; accept either an absolute path or a
        # filename relative to _model_path.
        onnx_path = options.get("model_path") or options.get("onnx")
        if not onnx_path:
            raise ValueError("OnnxDirectEngine requires `model_path: <file.onnx>` in options")
        if not os.path.isabs(onnx_path):
            onnx_path = os.path.join(options.get("_model_path", ""), onnx_path)
        if not os.path.isfile(onnx_path):
            raise FileNotFoundError(f"ONNX model not found: {onnx_path}")
        providers = options.get("providers")
        if providers:
            provider_list = [p.strip() for p in providers.split(",") if p.strip()]
        else:
            provider_list = ["CPUExecutionProvider"]
        self._session = ort.InferenceSession(onnx_path, providers=provider_list)
        self._input_name = self._session.get_inputs()[0].name
        self._expected_sr = int(options.get("sample_rate", "16000"))
        self._analysis = AnalysisHead(options)

    def _load_waveform(self, path: str):
        import numpy as np
        import soundfile as sf  # type: ignore

        audio, sr = sf.read(path, always_2d=False)
        # Mix down to mono before resampling — np.interp only handles 1-D input.
        if audio.ndim > 1:
            audio = audio.mean(axis=1)
        if sr != self._expected_sr:
            # Cheap linear resample — good enough for sanity; callers
            # should pre-resample for production.
            ratio = self._expected_sr / float(sr)
            n = int(round(len(audio) * ratio))
            audio = np.interp(
                np.linspace(0, len(audio), n, endpoint=False),
                np.arange(len(audio)),
                audio,
            )
        return audio.astype("float32")

    def embed(self, audio_path: str) -> list[float]:
        import numpy as np

        audio = self._load_waveform(audio_path)
        feed = audio.reshape(1, -1)
        out = self._session.run(None, {self._input_name: feed})
        vec = np.asarray(out[0]).reshape(-1)
        return [float(x) for x in vec]

    def compare(self, audio1: str, audio2: str) -> float:
        return _cosine_distance(self.embed(audio1), self.embed(audio2))

    def analyze(self, audio_path: str, actions):
        # AnalysisHead expects 16kHz mono; _load_waveform already
        # resamples to self._expected_sr. If the user configured a
        # non-16k expected rate, resample one more time for analyze.
        audio = self._load_waveform(audio_path)
        if self._expected_sr != 16000:
            import numpy as np

            ratio = 16000 / float(self._expected_sr)
            n = int(round(len(audio) * ratio))
            audio = np.interp(
                np.linspace(0, len(audio), n, endpoint=False),
                np.arange(len(audio)),
                audio,
            ).astype("float32")
        attrs = self._analysis.analyze(audio_path, audio, actions)
        if not attrs:
            raise NotImplementedError(
                "analyze head failed to load — install transformers + torch or pass age_gender_model/emotion_model options"
            )
        duration = float(len(audio)) / 16000.0 if len(audio) else 0.0
        return [dict(start=0.0, end=duration, **attrs)]

def build_engine(model_name: str, options: dict[str, str]) -> tuple[SpeakerEngine, str]:
    """Pick an engine based on the options. ONNX path takes priority:
    if the gallery has dropped a `model_path:` or `onnx:` option, run
    the direct ONNX engine. Otherwise, fall back to SpeechBrain.
    """
    engine_kind = (options.get("engine") or "").lower()
    if engine_kind == "onnx" or options.get("model_path") or options.get("onnx"):
        return OnnxDirectEngine(model_name, options), OnnxDirectEngine.name
    return SpeechBrainEngine(model_name, options), SpeechBrainEngine.name

View File

@@ -0,0 +1,19 @@
#!/bin/bash
set -e

backend_dir=$(dirname $0)
if [ -d $backend_dir/common ]; then
    source $backend_dir/common/libbackend.sh
else
    source $backend_dir/../common/libbackend.sh
fi

installRequirements

# No pre-baked model weights. Weights flow through LocalAI's gallery
# `files:` mechanism — see gallery entries for speechbrain-ecapa-tdnn
# and WeSpeaker / 3D-Speaker ONNX packs. SpeechBrain's
# EncoderClassifier.from_hparams also knows how to auto-download from
# HuggingFace into the configured savedir (we point it at ModelPath),
# so the first LoadModel call bootstraps the checkpoint if the gallery
# flow wasn't used.

View File

@@ -0,0 +1,5 @@
torch
torchaudio
speechbrain
transformers
onnxruntime

View File

@@ -0,0 +1,5 @@
torch
torchaudio
speechbrain
transformers
onnxruntime-gpu

View File

@@ -0,0 +1,5 @@
grpcio==1.71.0
protobuf
grpcio-tools
numpy
soundfile

View File

@@ -0,0 +1,9 @@
#!/bin/bash
backend_dir=$(dirname $0)
if [ -d $backend_dir/common ]; then
    source $backend_dir/common/libbackend.sh
else
    source $backend_dir/../common/libbackend.sh
fi

startBackend $@

View File

@@ -0,0 +1,78 @@
"""Unit tests for the speaker-recognition gRPC backend.

The servicer is instantiated in-process (no gRPC channel) and driven
directly. The default path exercises SpeechBrain's ECAPA-TDNN — the
first run downloads the checkpoint into a temp savedir. Tests are
skipped gracefully when the heavy optional dependencies (torch /
speechbrain / onnxruntime) are not installed, so the gRPC plumbing
can still be verified on a bare image.
"""
from __future__ import annotations

import importlib.util
import os
import sys
import tempfile
import unittest

sys.path.insert(0, os.path.dirname(__file__))

import backend_pb2  # noqa: E402
from backend import BackendServicer  # noqa: E402


def _have(*mods: str) -> bool:
    for m in mods:
        if importlib.util.find_spec(m) is None:
            return False
    return True


class _FakeCtx:
    """Minimal stand-in for a gRPC servicer context."""

    def __init__(self) -> None:
        self.code = None
        self.details = ""

    def set_code(self, c):
        self.code = c

    def set_details(self, d):
        self.details = d


class ServicerPlumbingTest(unittest.TestCase):
    """Checks that LoadModel returns a clear error when no engine deps
    are installed, and that Voice* calls on an uninitialised servicer
    surface FAILED_PRECONDITION — both verifying the gRPC wiring
    without requiring SpeechBrain or ONNX at test time."""

    def test_pre_load_voice_calls_are_rejected(self):
        svc = BackendServicer()
        ctx = _FakeCtx()
        svc.VoiceVerify(backend_pb2.VoiceVerifyRequest(audio1="/tmp/a.wav", audio2="/tmp/b.wav"), ctx)
        self.assertEqual(str(ctx.code), "StatusCode.FAILED_PRECONDITION")

    def test_load_without_deps_fails_cleanly(self):
        svc = BackendServicer()
        req = backend_pb2.ModelOptions(Model="speechbrain/spkrec-ecapa-voxceleb", ModelPath="")
        result = svc.LoadModel(req, _FakeCtx())
        # Either the deps are installed and it loaded, or they aren't
        # and we got a structured error instead of a crash.
        self.assertTrue(result.success or "engine init failed" in result.message)


@unittest.skipUnless(_have("speechbrain", "torch", "torchaudio"), "speechbrain / torch missing")
class SpeechBrainEngineSmokeTest(unittest.TestCase):
    def test_load_and_embed(self):
        svc = BackendServicer()
        with tempfile.TemporaryDirectory() as td:
            req = backend_pb2.ModelOptions(Model="speechbrain/spkrec-ecapa-voxceleb", ModelPath=td)
            result = svc.LoadModel(req, _FakeCtx())
            self.assertTrue(result.success, result.message)


if __name__ == "__main__":
    unittest.main()

View File

@@ -0,0 +1,11 @@
#!/bin/bash
set -e

backend_dir=$(dirname $0)
if [ -d $backend_dir/common ]; then
    source $backend_dir/common/libbackend.sh
else
    source $backend_dir/../common/libbackend.sh
fi

runUnittests