LocalAI/backend/python/speaker-recognition/engines.py
Ettore Di Giacinto 4906cbad04 feat: add biometrics UI (#9524)
* feat(react-ui): add Face & Voice Recognition pages

Expose the face and voice biometrics endpoints
(/v1/face/*, /v1/voice/*) through the React UI. Each page has four
tabs driving the six endpoints per modality: Analyze (demographics
with bounding boxes / waveform segments), Compare (verify with a
match gauge and live threshold slider), Enrollment (register /
identify / forget with a top-K matches view), Embedding (raw
vector inspector with sparkline + copy).

MediaInput supports file upload plus live capture: webcam
snap-to-canvas for face, MediaRecorder -> AudioContext ->
16-bit PCM mono WAV transcode for voice (libsndfile on the
backend only handles WAV/FLAC/OGG natively).

Sidebar gets a new Biometrics section feature-gated on
face_recognition / voice_recognition; routes are wrapped in
<RequireFeature>. No new dependencies -- Font Awesome icons
picked from the Free set.

Assisted-by: Claude:Opus 4.7

* fix(localai): accept data URI prefixes with codec/charset params

Browser MediaRecorder produces data URIs like
  data:audio/webm;codecs=opus;base64,...
so the pre-';base64,' section can carry multiple parameter
segments. The `^data:([^;]+);base64,` regex in pkg/utils/base64.go
and core/http/endpoints/localai/audio.go only matched exactly one
segment, so recordings straight from the React UI's live-capture
tab failed the strip and then tripped the base64 decoder on the
leading 'data:' literal, surfacing as
  "invalid audio base64: illegal base64 data at input byte 4"

Widened both regexes to `^data:[^,]+?;base64,` so any number of
';param=value' segments between the mime type and ';base64,' are
tolerated. Added a regression test covering the MediaRecorder
shape.
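
A quick way to see the behaviour difference (a Python `re` sketch;
the Go regexp semantics are identical for these two patterns):

    import re

    old = re.compile(r"^data:([^;]+);base64,")  # one segment only
    new = re.compile(r"^data:[^,]+?;base64,")   # any ;param=value segments

    uri = "data:audio/webm;codecs=opus;base64,AAAA"
    assert old.match(uri) is None      # strip fails, decoder sees 'data:'
    assert new.match(uri) is not None  # prefix stripped as intended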

Assisted-by: Claude:Opus 4.7

* fix(insightface): scope pack ONNX loading to known manifests

LocalAI's gallery extracts buffalo_* zips flat into the models
directory, which inevitably mixes with ONNX files from other
backends (opencv face engine, MiniFASNet antispoof, WeSpeaker
voice embedding) and older buffalo pack installs. Feeding those
foreign files into insightface's model_zoo.get_model() blows up
inside the router -- it assumes a 4-D NCHW input and indexes
`input_shape[2]` on tensors that aren't shaped like a face model,
raising IndexError mid-load and leaving the backend unusable.

The router's dispatch isn't amenable to per-file try/except alone
(first-file-wins picks det_10g.onnx from buffalo_l even when the
user asked for buffalo_sc -- alphabetical order happens to favour
the wrong pack). Instead, ship an explicit manifest of the
upstream v0.7 pack contents and scope the glob to that when the
requested pack is known. The manifest is small and stable; future
packs can be added alongside or fall through to the tolerance
loop, which also swallows any remaining IndexError / ValueError
from foreign files with a clear `[insightface] skipped` stderr
line for diagnostics.
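
Scoping in miniature (Python sketch; the single manifest entry shown
is abbreviated and partly hypothetical -- only det_10g.onnx is named
in this commit):

    import glob, os

    KNOWN_PACKS = {
        "buffalo_l": {"det_10g.onnx"},  # real manifest lists the full v0.7 pack
    }

    def pack_onnx_files(models_dir: str, pack: str) -> list[str]:
        files = sorted(glob.glob(os.path.join(models_dir, "*.onnx")))
        manifest = KNOWN_PACKS.get(pack)
        if manifest:  # known pack: load only its own files
            return [f for f in files if os.path.basename(f) in manifest]
        return files  # unknown pack: fall through to the tolerant loop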

Assisted-by: Claude:Opus 4.7

* fix(speaker-recognition): extract FBank features for rank-3 ONNX encoders

Pre-exported speaker-encoder ONNX graphs come in two shapes:

  rank-2  [batch, samples]           -- some 3D-Speaker exports,
                                        take raw waveform directly.
  rank-3  [batch, frames, n_mels]    -- WeSpeaker and most Kaldi-
                                        lineage encoders, expect
                                        pre-computed Kaldi FBank.

OnnxDirectEngine unconditionally fed `audio.reshape(1, -1)` --
correct for rank-2 graphs, but an IndexError on input_shape[3] for
rank-3 ones, which surfaced to the UI as
  "Invalid rank for input: feats Got: 2 Expected: 3"

Detect the input rank at session init and run Kaldi FBank
(80-dim, 25ms/10ms frames, dither=0.0, per-utterance CMN) before
the forward pass when rank>=3. All knobs are configurable via
backend options for encoders that deviate from defaults.

torchaudio.compliance.kaldi is already in the backend's
requirements (SpeechBrain pulls torchaudio in), so no new
dependency.
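
The rank probe itself is one call against onnxruntime's input
metadata (filename hypothetical):

    import onnxruntime as ort

    sess = ort.InferenceSession("speaker_encoder.onnx")  # hypothetical file
    rank = len(sess.get_inputs()[0].shape)  # e.g. [None, None, 80] -> 3
    # rank == 3 -> [batch, frames, n_mels]: run Kaldi FBank first
    # rank == 2 -> [batch, samples]:        feed the raw waveform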

Assisted-by: Claude:Opus 4.7

* fix(biometrics): isolate face and voice vector stores

Face (ArcFace, 512-D) and voice (ECAPA-TDNN 192-D / WeSpeaker
256-D) biometric embeddings were colliding inside a single
in-memory local-store instance. Enrolling one after the other
failed with
  "Try to add key with length N when existing length is M"
because local-store correctly refuses to mix dimensions in one
keyspace.
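
The collision in miniature (conceptual Python sketch, not the Go
local-store code):

    store_dims: dict[str, int] = {}  # one fixed dimension per keyspace

    def add(keyspace: str, vector: list[float]) -> None:
        dim = store_dims.setdefault(keyspace, len(vector))
        if len(vector) != dim:
            raise ValueError(
                f"Try to add key with length {len(vector)} "
                f"when existing length is {dim}"
            )

    add("", [0.0] * 512)  # face enrolls into the shared "" slot
    add("", [0.0] * 192)  # voice hits the same slot -> ValueError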

The registries were constructed with `storeName=""`, which in
StoreBackend() is just a WithModel() call. But ModelLoader's
cache is keyed on `modelID`, not `model` -- so both registries
collapsed to the same `modelID=""` slot and reused the same
backend process despite looking isolated on paper.

Three complementary fixes:

  1. application.go -- give each registry a distinct default
     namespace ("localai-face-biometrics" /
     "localai-voice-biometrics"). The comment claimed
     isolation, now it's actually enforced.

  2. stores.go -- pass the storeName as both WithModelID and
     WithModel so the ModelLoader cache key separates
     namespaces and the loader spawns distinct processes.

  3. local-store/store.go -- drop the Load() `opts.Model != ""`
     guard. It was there to prevent generic model-loading loops
     from picking up local-store by accident, but that auto-load
     path is being retired; the guard now just blocks legitimate
     namespace isolation. opts.Model is treated as a tag; the
     per-tuple process isolation upstream handles discrimination.

Assisted-by: Claude:Opus 4.7

* fix(gallery): stale-file cleanup and upgrade-tmp directory safety

Two related robustness fixes for backend install/upgrade:

pkg/downloader/uri.go
  OCI downloads passed through
      if filepath.Ext(filePath) != "" ...
          filePath = filepath.Dir(filePath)
  which was intended to redirect file-shaped download targets
  into their parent directory for OCI extraction. The heuristic
  misfires on directory-shaped paths with a dot-suffix --
  gallery.UpgradeBackend uses
      tmpPath = "<backendsPath>/<name>.upgrade-tmp"
  and Go's filepath.Ext treats ".upgrade-tmp" as an extension.
  The rewrite landed the extraction at "<backendsPath>/", which
  then **overwrote the real install** (backends/<name>/) with a
  flat-layout file and left a stray run.sh at the top level. The
  tmp dir itself stayed empty, so the validation step that
  checked "<tmpPath>/run.sh" predictably failed with
      "upgrade validation failed: run.sh not found in new backend"
  Every manual upgrade silently corrupted the backends tree this
  way. Guard the rewrite behind "target isn't already an existing
  directory" -- InstallBackend / UpgradeBackend both pre-create
  the target as a directory, so they get the correct behaviour;
  existing file-path callers with a genuine dot-extension still
  get the parent redirect.
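
The misfire is easy to reproduce with filepath.Ext's Python analogue
(illustration only -- the fix itself lives in the Go code):

    import os.path

    os.path.splitext("backends/foo.upgrade-tmp")[1]  # '.upgrade-tmp'
    os.path.splitext("backends/model.onnx")[1]       # '.onnx'
    # Both look extension-shaped; only an "is it already a directory?"
    # check (os.path.isdir here) tells the tmp dir apart from a file.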

core/gallery/backends.go
  InstallBackend's MkdirAll returned ENOTDIR when something at
  the target path was already a file (legacy dev builds dropped
  golang backend binaries directly at `<backendsPath>/<name>`
  instead of nesting them under their own subdir). That
  permanently blocked reinstall and upgrade for anyone carrying
  that state, since every retry hit the same error. Detect a
  pre-existing non-directory, warn, and remove it before the
  MkdirAll so the fresh install can write the correct nested
  layout with metadata.json + run.sh.

Assisted-by: Claude:Opus 4.7

* fix(galleryop): refresh upgrade cache after backend ops

UpgradeChecker caches the last upgrade-check result and only
refreshes on the 6-hour tick or after an auto-upgrade cycle.
Manual upgrades (POST /api/backends/upgrade/:name) go through
the async galleryop worker, which completes the upgrade
correctly but never tells UpgradeChecker to re-check -- so
/api/backends/upgrades continued to list a just-upgraded backend
as upgradeable, indistinguishable from a failed upgrade, for up
to six hours.

Add an optional `OnBackendOpCompleted func()` hook on
GalleryService that fires after every successful install /
upgrade / delete on the backend channel (async, so a slow
callback doesn't stall the queue). startup.go wires it to
UpgradeChecker.TriggerCheck after both services exist. Result:
the upgrade banner clears within milliseconds of the worker
finishing.
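
The hook's shape in miniature (Python sketch; the implementation is
Go, and the pythonised names here are illustrative):

    import threading

    class GalleryService:
        def __init__(self):
            self.on_backend_op_completed = None  # optional, set at startup

        def _op_finished(self, ok: bool) -> None:
            if ok and self.on_backend_op_completed:
                # async: a slow callback must not stall the op queue
                threading.Thread(
                    target=self.on_backend_op_completed, daemon=True
                ).start()

    # startup.go equivalent:
    # service.on_backend_op_completed = upgrade_checker.trigger_check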

Assisted-by: Claude:Opus 4.7

* build: prepend GOPATH/bin to PATH for protogen-go

install-go-tools runs `go install` for protoc-gen-go and
protoc-gen-go-grpc, which writes them into `go env GOPATH`/bin.
That directory isn't on every dev's PATH, and protoc resolves
its code-gen plugins via PATH, so the immediately-following
protoc invocation fails with
  "protoc-gen-go: program not found"
which in turn blocks `make build` and any
`make backends/%` target that depends on build.

Prepend `go env GOPATH`/bin to PATH for the protoc invocation
so the freshly-installed plugins are found without requiring a
shell-profile change.

Assisted-by: Claude:Opus 4.7

* refactor(ui-api): non-blocking backend upgrade handler with opcache

POST /api/backends/upgrade/:name used to send the ManagementOp
directly onto the unbuffered BackendGalleryChannel, which blocked
the HTTP request whenever the galleryop worker was busy with a
prior operation. The op also didn't show up in /api/operations,
so the Backends UI couldn't reflect upgrade progress on the
affected row.

Register the op in opcache immediately, wrap it in a cancellable
context, store the cancellation function on the GalleryService,
and push onto the channel from a goroutine so the handler
returns right away. Response gains a `jobID` field and a
`message` string so clients have a consistent handle regardless
of whether the op is queued or running.
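
The handler's flow in miniature (Python sketch; the real code is Go,
maxsize=1 stands in for the unbuffered channel, and the job-id
format is illustrative):

    import queue, threading, uuid

    ops: queue.Queue = queue.Queue(maxsize=1)  # ~unbuffered channel
    opcache: dict[str, str] = {}

    def handle_upgrade(name: str) -> dict:
        job_id = str(uuid.uuid4())
        opcache[job_id] = "queued"  # visible in /api/operations immediately
        threading.Thread(  # enqueue off-thread so the handler returns now
            target=ops.put, args=((job_id, name),), daemon=True
        ).start()
        return {"jobID": job_id, "message": f"upgrade of {name} queued"}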

Pairs with the OnBackendOpCompleted hook added in the galleryop
commit — together the UI sees the upgrade start, watches
progress via /api/operations, and drops the "upgradeable" flag
the moment the worker finishes.

Assisted-by: Claude:Opus 4.7
2026-04-24 08:50:34 +02:00

"""Speaker-recognition engines.
Two engines are offered, mirroring the insightface backend's split:
* SpeechBrainEngine: full PyTorch / SpeechBrain path. Uses the
ECAPA-TDNN recipe trained on VoxCeleb; 192-d L2-normalized
embeddings, cosine distance for verification. Auto-downloads the
checkpoint into LocalAI's models directory on first LoadModel.
* OnnxDirectEngine: CPU-friendly fallback that runs pre-exported
ONNX speaker encoders (WeSpeaker ResNet34, 3D-Speaker ERes2Net,
CAM++, etc.). Model paths come from the model config — the gallery
`files:` flow drops them into the models directory.
Engine selection follows the same gallery-driven convention face
recognition uses (insightface commits 9c6da0f7 / 405fec0b): the
Python backend reads `engine` / `model_path` / `checkpoint` from the
options dict and picks an engine accordingly.
"""

from __future__ import annotations

import os
from typing import Any, Iterable, Protocol


class SpeakerEngine(Protocol):
    """Interface both concrete engines satisfy."""

    name: str

    def embed(self, audio_path: str) -> list[float]:  # pragma: no cover - interface
        ...

    def compare(self, audio1: str, audio2: str) -> float:  # pragma: no cover
        ...

    def analyze(self, audio_path: str, actions: Iterable[str]) -> list[dict[str, Any]]:  # pragma: no cover
        ...


def _cosine_distance(a, b) -> float:
    import numpy as np

    va = np.asarray(a, dtype=np.float32).reshape(-1)
    vb = np.asarray(b, dtype=np.float32).reshape(-1)
    na = float(np.linalg.norm(va))
    nb = float(np.linalg.norm(vb))
    if na == 0.0 or nb == 0.0:
        return 1.0
    return float(1.0 - np.dot(va, vb) / (na * nb))

class AnalysisHead:
    """Age / gender / emotion head, lazy-loaded on first analyze call.

    Wraps two open-licence HuggingFace checkpoints:

    * audeering/wav2vec2-large-robust-24-ft-age-gender — age
      regression (0–100 years) + 3-way gender (female/male/child).
      Apache 2.0.
    * superb/wav2vec2-base-superb-er — 4-way emotion classification
      (neutral / happy / angry / sad). Apache 2.0.

    Either model is optional — the head degrades gracefully to only the
    attributes it could load. Override the checkpoint with the
    `age_gender_model` / `emotion_model` option if you want something
    else. Set either to an empty string to disable that head.
    """

    # Age + gender is OFF by default: the high-accuracy Apache-2.0
    # checkpoint (Audeering wav2vec2-large-robust-24-ft-age-gender) uses a
    # custom multi-task head that AutoModelForAudioClassification silently
    # mangles — it drops the age weights as UNEXPECTED and re-initialises
    # the classifier head with random values, so the output is noise. Users
    # who have a cleanly loadable age/gender classifier can opt in with
    # `age_gender_model:<repo>` in options. The emotion default below
    # (superb/wav2vec2-base-superb-er) loads via the standard audio-
    # classification pipeline with no such caveat.
    DEFAULT_AGE_GENDER_MODEL = ""
    DEFAULT_EMOTION_MODEL = "superb/wav2vec2-base-superb-er"

    AGE_GENDER_LABELS = ("female", "male", "child")

    def __init__(self, options: dict[str, str]):
        self._options = options
        self._age_gender = None
        self._age_gender_processor = None
        self._age_gender_loaded = False
        self._age_gender_error: str | None = None
        self._emotion = None
        self._emotion_loaded = False
        self._emotion_error: str | None = None

    # --- age / gender -------------------------------------------------

    def _ensure_age_gender(self):
        if self._age_gender_loaded:
            return
        self._age_gender_loaded = True
        model_id = self._options.get(
            "age_gender_model", self.DEFAULT_AGE_GENDER_MODEL
        )
        if not model_id:
            self._age_gender_error = "disabled"
            return
        try:
            # Late imports — torch / transformers are heavy and only
            # pulled in when the analyze head actually runs.
            import torch  # type: ignore
            from transformers import AutoFeatureExtractor, AutoModelForAudioClassification  # type: ignore

            self._torch = torch
            self._age_gender_processor = AutoFeatureExtractor.from_pretrained(model_id)
            self._age_gender = AutoModelForAudioClassification.from_pretrained(model_id)
            self._age_gender.eval()
        except Exception as exc:  # noqa: BLE001
            self._age_gender_error = f"{type(exc).__name__}: {exc}"

    def _infer_age_gender(self, waveform_16k) -> dict[str, Any]:
        self._ensure_age_gender()
        if self._age_gender is None:
            return {}
        import numpy as np

        try:
            inputs = self._age_gender_processor(
                waveform_16k, sampling_rate=16000, return_tensors="pt"
            )
            with self._torch.no_grad():
                outputs = self._age_gender(**inputs)
            # Audeering's checkpoint is published with a custom head: the
            # official recipe exposes `(hidden_states, logits_age, logits_gender)`.
            # AutoModelForAudioClassification flattens that into a single
            # `logits` tensor of shape [batch, 4] — [age_regression, female, male, child].
            # Fall back gracefully when the shape is different (e.g. a
            # user-supplied age_gender_model checkpoint that returns a proper tuple).
            hidden = getattr(outputs, "logits", outputs)
            age_years = None
            gender_logits = None
            if isinstance(hidden, (tuple, list)) and len(hidden) >= 2:
                age_years = float(hidden[0].squeeze().item()) * 100.0
                gender_logits = hidden[1]
            else:
                flat = hidden.squeeze()
                if flat.ndim == 1 and flat.numel() >= 4:
                    age_years = float(flat[0].item()) * 100.0
                    gender_logits = flat[1:4]
                elif flat.ndim == 1 and flat.numel() == 1:
                    age_years = float(flat.item()) * 100.0
            if age_years is None and gender_logits is None:
                return {}
            result: dict[str, Any] = {}
            if age_years is not None:
                result["age"] = age_years
            if gender_logits is not None:
                probs = self._torch.softmax(gender_logits, dim=-1).cpu().numpy()
                probs = np.asarray(probs).reshape(-1)
                gender_map = {
                    label: float(probs[i])
                    for i, label in enumerate(self.AGE_GENDER_LABELS[: len(probs)])
                }
                result["gender"] = gender_map
                if gender_map:
                    dom = max(gender_map.items(), key=lambda kv: kv[1])[0]
                    result["dominant_gender"] = {
                        "female": "Female",
                        "male": "Male",
                        "child": "Child",
                    }.get(dom, dom.capitalize())
            return result
        except Exception as exc:  # noqa: BLE001
            # Analyze is a best-effort feature — never take down the
            # whole analyze call because the age/gender head had a bad
            # day. Mark the failure so the emotion branch still runs.
            self._age_gender_error = f"runtime: {type(exc).__name__}: {exc}"
            return {}

    # --- emotion ------------------------------------------------------

    def _ensure_emotion(self):
        if self._emotion_loaded:
            return
        self._emotion_loaded = True
        model_id = self._options.get("emotion_model", self.DEFAULT_EMOTION_MODEL)
        if not model_id:
            self._emotion_error = "disabled"
            return
        try:
            from transformers import pipeline  # type: ignore

            self._emotion = pipeline("audio-classification", model=model_id)
        except Exception as exc:  # noqa: BLE001
            self._emotion_error = f"{type(exc).__name__}: {exc}"

    def _infer_emotion(self, audio_path: str) -> dict[str, Any]:
        self._ensure_emotion()
        if self._emotion is None:
            return {}
        try:
            raw = self._emotion(audio_path, top_k=8)
        except Exception as exc:  # noqa: BLE001
            # Second-line defense: don't fail the whole analyze call
            # over a runtime inference hiccup.
            self._emotion_error = f"runtime: {type(exc).__name__}: {exc}"
            return {}
        emotion_map = {row["label"].lower(): float(row["score"]) for row in raw}
        if not emotion_map:
            return {}
        dom = max(emotion_map.items(), key=lambda kv: kv[1])[0]
        return {"emotion": emotion_map, "dominant_emotion": dom}

    # --- orchestrator -------------------------------------------------

    def analyze(self, audio_path: str, waveform_16k, actions: Iterable[str]) -> dict[str, Any]:
        wanted = {a.strip().lower() for a in actions} if actions else {"age", "gender", "emotion"}
        result: dict[str, Any] = {}
        if "age" in wanted or "gender" in wanted:
            ag = self._infer_age_gender(waveform_16k)
            if "age" in wanted and "age" in ag:
                result["age"] = ag["age"]
            if "gender" in wanted:
                if "gender" in ag:
                    result["gender"] = ag["gender"]
                if "dominant_gender" in ag:
                    result["dominant_gender"] = ag["dominant_gender"]
        if "emotion" in wanted:
            em = self._infer_emotion(audio_path)
            result.update(em)
        return result

class SpeechBrainEngine:
    """ECAPA-TDNN via SpeechBrain. Auto-downloads on first use."""

    name = "speechbrain-ecapa-tdnn"

    def __init__(self, model_name: str, options: dict[str, str]):
        # Late imports so the module can be introspected / tested
        # without torch / speechbrain being installed.
        from speechbrain.inference.speaker import EncoderClassifier  # type: ignore

        source = options.get("source") or model_name or "speechbrain/spkrec-ecapa-voxceleb"
        savedir = options.get("_model_path") or os.environ.get("HF_HOME") or "./pretrained_models"
        self._model = EncoderClassifier.from_hparams(source=source, savedir=savedir)
        self._analysis = AnalysisHead(options)

    def _load_waveform(self, path: str):
        # Use soundfile + torch directly — torchaudio.load in torchaudio
        # 2.8+ requires the torchcodec package for decoding, which adds
        # another heavy ffmpeg-linked dep. soundfile covers WAV/FLAC
        # which is what we care about here.
        import numpy as np
        import soundfile as sf  # type: ignore
        import torch  # type: ignore

        audio, sr = sf.read(path, always_2d=False)
        if audio.ndim > 1:
            audio = audio.mean(axis=1)
        audio = np.asarray(audio, dtype=np.float32)
        if sr != 16000:
            # Simple linear resample — good enough for 16kHz downsampling
            # from 44.1/48kHz, and we expect 16kHz inputs in practice.
            ratio = 16000 / float(sr)
            n = int(round(len(audio) * ratio))
            audio = np.interp(
                np.linspace(0, len(audio), n, endpoint=False),
                np.arange(len(audio)),
                audio,
            ).astype(np.float32)
        return torch.from_numpy(audio).unsqueeze(0)  # [1, T]

    def embed(self, audio_path: str) -> list[float]:
        waveform = self._load_waveform(audio_path)
        vec = self._model.encode_batch(waveform).squeeze().detach().cpu().numpy()
        return [float(x) for x in vec]

    def compare(self, audio1: str, audio2: str) -> float:
        return _cosine_distance(self.embed(audio1), self.embed(audio2))

    def analyze(self, audio_path: str, actions):
        # Age / gender / emotion aren't produced by ECAPA-TDNN itself;
        # delegate to AnalysisHead which wraps separate Apache-2.0
        # checkpoints. Returns a single segment spanning the clip —
        # segmentation / diarisation is a future enhancement.
        waveform = self._load_waveform(audio_path)
        mono = waveform.squeeze().detach().cpu().numpy()
        attrs = self._analysis.analyze(audio_path, mono, actions)
        if not attrs:
            raise NotImplementedError(
                "analyze head failed to load — install transformers + torch or pass age_gender_model/emotion_model options"
            )
        duration = float(mono.shape[-1]) / 16000.0 if mono.size else 0.0
        return [dict(start=0.0, end=duration, **attrs)]

class OnnxDirectEngine:
    """Run a pre-exported ONNX speaker encoder (WeSpeaker / 3D-Speaker)."""

    name = "onnx-direct"

    def __init__(self, model_name: str, options: dict[str, str]):
        import onnxruntime as ort  # type: ignore

        # The gallery is expected to have dropped the ONNX file under
        # the models directory; accept either an absolute path or a
        # filename relative to _model_path.
        onnx_path = options.get("model_path") or options.get("onnx")
        if not onnx_path:
            raise ValueError("OnnxDirectEngine requires `model_path: <file.onnx>` in options")
        if not os.path.isabs(onnx_path):
            onnx_path = os.path.join(options.get("_model_path", ""), onnx_path)
        if not os.path.isfile(onnx_path):
            raise FileNotFoundError(f"ONNX model not found: {onnx_path}")
        providers = options.get("providers")
        if providers:
            provider_list = [p.strip() for p in providers.split(",") if p.strip()]
        else:
            provider_list = ["CPUExecutionProvider"]
        self._session = ort.InferenceSession(onnx_path, providers=provider_list)
        input_meta = self._session.get_inputs()[0]
        self._input_name = input_meta.name
        # Pre-exported speaker encoders come in two shapes:
        #   rank-2 [batch, samples] — some 3D-Speaker exports feed raw waveform.
        #   rank-3 [batch, frames, n_mels] — WeSpeaker and most Kaldi-lineage
        #   encoders expect pre-computed Kaldi FBank features.
        # We detect this at load time and branch in embed(), because feeding
        # raw audio into a rank-3 graph is exactly what triggered
        #   "Invalid rank for input: feats Got: 2 Expected: 3".
        self._input_rank = len(input_meta.shape) if input_meta.shape is not None else 2
        self._expected_sr = int(options.get("sample_rate", "16000"))
        self._fbank_mels = int(options.get("fbank_num_mel_bins", "80"))
        self._fbank_frame_length_ms = float(options.get("fbank_frame_length_ms", "25"))
        self._fbank_frame_shift_ms = float(options.get("fbank_frame_shift_ms", "10"))
        # Per-utterance cepstral mean normalisation — on for WeSpeaker by default,
        # toggleable for encoders that expect raw FBank.
        self._fbank_cmn = options.get("fbank_cmn", "true").lower() in ("1", "true", "yes")
        self._analysis = AnalysisHead(options)

    def _load_waveform(self, path: str):
        import numpy as np
        import soundfile as sf  # type: ignore

        audio, sr = sf.read(path, always_2d=False)
        # Mix down to mono before resampling — np.interp only handles
        # 1-D arrays, so stereo input must be collapsed first.
        if audio.ndim > 1:
            audio = audio.mean(axis=1)
        if sr != self._expected_sr:
            # Cheap linear resample — good enough for sanity; callers
            # should pre-resample for production.
            ratio = self._expected_sr / float(sr)
            n = int(round(len(audio) * ratio))
            audio = np.interp(
                np.linspace(0, len(audio), n, endpoint=False),
                np.arange(len(audio)),
                audio,
            )
        return audio.astype("float32")

    def embed(self, audio_path: str) -> list[float]:
        import numpy as np

        audio = self._load_waveform(audio_path)
        if self._input_rank >= 3:
            feats = self._extract_fbank(audio)  # [frames, n_mels]
            feed = feats[np.newaxis, :, :]      # [1, frames, n_mels]
        else:
            feed = audio.reshape(1, -1)         # [1, samples]
        out = self._session.run(None, {self._input_name: feed})
        vec = np.asarray(out[0]).reshape(-1)
        return [float(x) for x in vec]

    def _extract_fbank(self, audio):
        """Compute Kaldi-style 80-dim FBank features for speaker encoders that
        expect pre-featurised input (WeSpeaker, most 3D-Speaker exports).

        torchaudio is already a backend dependency for SpeechBrain — no new
        package required."""
        import numpy as np
        import torch  # type: ignore
        import torchaudio.compliance.kaldi as kaldi  # type: ignore

        tensor = torch.from_numpy(audio).unsqueeze(0)  # [1, samples]
        feats = kaldi.fbank(
            tensor,
            sample_frequency=self._expected_sr,
            num_mel_bins=self._fbank_mels,
            frame_length=self._fbank_frame_length_ms,
            frame_shift=self._fbank_frame_shift_ms,
            dither=0.0,
        )  # [frames, n_mels]
        if self._fbank_cmn:
            feats = feats - feats.mean(dim=0, keepdim=True)
        return feats.numpy().astype(np.float32)

    def compare(self, audio1: str, audio2: str) -> float:
        return _cosine_distance(self.embed(audio1), self.embed(audio2))

    def analyze(self, audio_path: str, actions):
        # AnalysisHead expects 16kHz mono; _load_waveform already
        # resamples to self._expected_sr. If the user configured a
        # non-16k expected rate, resample one more time for analyze.
        audio = self._load_waveform(audio_path)
        if self._expected_sr != 16000:
            import numpy as np

            ratio = 16000 / float(self._expected_sr)
            n = int(round(len(audio) * ratio))
            audio = np.interp(
                np.linspace(0, len(audio), n, endpoint=False),
                np.arange(len(audio)),
                audio,
            ).astype("float32")
        attrs = self._analysis.analyze(audio_path, audio, actions)
        if not attrs:
            raise NotImplementedError(
                "analyze head failed to load — install transformers + torch or pass age_gender_model/emotion_model options"
            )
        duration = float(len(audio)) / 16000.0 if len(audio) else 0.0
        return [dict(start=0.0, end=duration, **attrs)]

def build_engine(model_name: str, options: dict[str, str]) -> tuple[SpeakerEngine, str]:
    """Pick an engine based on the options. ONNX path takes priority:
    if the gallery has dropped a `model_path:` or `onnx:` option, run
    the direct ONNX engine. Otherwise, fall back to SpeechBrain.
    """
    engine_kind = (options.get("engine") or "").lower()
    if engine_kind == "onnx" or options.get("model_path") or options.get("onnx"):
        return OnnxDirectEngine(model_name, options), OnnxDirectEngine.name
    return SpeechBrainEngine(model_name, options), SpeechBrainEngine.name
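
# Example wiring (an illustrative sketch, not part of the backend API
# surface; the ONNX filename is hypothetical):
#
#     engine, kind = build_engine("speaker", {
#         "model_path": "speaker_encoder.onnx",  # -> OnnxDirectEngine
#         "_model_path": "/models",
#     })
#     dist = engine.compare("a.wav", "b.wav")  # cosine distance, lower = closer
#
#     engine, kind = build_engine("speaker", {})  # -> SpeechBrainEngine (ECAPA)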