mirror of
https://github.com/mudler/LocalAI.git
synced 2026-04-30 12:08:13 -04:00
* feat(voice-recognition): add /v1/voice/{verify,analyze,embed} + speaker-recognition backend
Audio analog to face recognition. Adds three gRPC RPCs
(VoiceVerify / VoiceAnalyze / VoiceEmbed), their Go service and HTTP
layers, a new FLAG_SPEAKER_RECOGNITION capability flag, and a Python
backend scaffold under backend/python/speaker-recognition/ wrapping
SpeechBrain ECAPA-TDNN with a parallel OnnxDirectEngine for
WeSpeaker / 3D-Speaker ONNX exports.
The kokoros Rust backend gets matching unimplemented trait stubs —
tonic's async_trait has no defaults, so adding an RPC without Rust
stubs breaks the build (same regression fixed by eb01c772 for face).
Swagger, /api/instructions, and the auth RouteFeatureRegistry /
APIFeatures list are updated so the endpoints surface everywhere a
client or admin UI looks.
Assisted-by: Claude:claude-opus-4-7
* feat(voice-recognition): add 1:N identify + register/forget endpoints
Mirrors the face-recognition register/identify/forget surface. New
package core/services/voicerecognition/ carries a Registry interface
and a local-store-backed implementation (same in-memory vector-store
plumbing facerecognition uses, separate instance so the embedding
spaces stay isolated).
Handlers under /v1/voice/{register,identify,forget} reuse
backend.VoiceEmbed to compute the probe vector, then delegate the
nearest-neighbour search to the registry. Default cosine-distance
threshold is tuned for ECAPA-TDNN on VoxCeleb (0.25, EER ~1.9%).
As with the face registry, the current backing is in-memory only — a
pgvector implementation is a future constructor-level swap.
Assisted-by: Claude:claude-opus-4-7
* feat(voice-recognition): gallery, docs, CI and e2e coverage
- backend/index.yaml: speaker-recognition backend entry + CPU and
CUDA-12 image variants (plus matching development variants).
- gallery/index.yaml: speechbrain-ecapa-tdnn (default) and
wespeaker-resnet34 model entries. The WeSpeaker SHA-256 is a
deliberate placeholder — the HF URI must be curl'd and its hash
filled in before the entry installs.
- docs/content/features/voice-recognition.md: API reference + quickstart,
mirrors the face-recognition docs.
- React UI: CAP_SPEAKER_RECOGNITION flag export (consumers follow face's
precedent — no dedicated tab yet).
- tests/e2e-backends: voice_embed / voice_verify / voice_analyze specs.
Helper resolveFaceFixture is reused as-is — the only thing face/voice
share is "download a file into workDir", so no need for a new helper.
- Makefile: docker-build-speaker-recognition + test-extra-backend-
speaker-recognition-{ecapa,all} targets. Audio fixtures default to
VCTK p225/p226 samples from HuggingFace.
- CI: test-extra.yml grows a tests-speaker-recognition-grpc job
mirroring insightface. backend.yml matrix gains CPU + CUDA-12 image
build entries — scripts/changed-backends.js auto-picks these up.
Assisted-by: Claude:claude-opus-4-7
* feat(voice-recognition): wire a working /v1/voice/analyze head
Adds AnalysisHead: a lazy-loading age / gender / emotion inference
wrapper that plugs into both SpeechBrainEngine and OnnxDirectEngine.
Defaults to two open-licence HuggingFace checkpoints:
- audeering/wav2vec2-large-robust-24-ft-age-gender (Apache 2.0) —
age regression + 3-way gender (female / male / child).
- superb/wav2vec2-base-superb-er (Apache 2.0) — 4-way emotion.
Both are optional and degrade gracefully when transformers or the
model can't be loaded — the engine raises NotImplementedError so the
gRPC layer returns 501 instead of a generic 500.
Emotion classes pass through from the model (neutral/happy/angry/sad
on the default checkpoint); the e2e test now accepts any non-empty
dominant gender so custom age_gender_model overrides don't fail it.
Adds transformers to the backend's CPU and CUDA-12 requirements.
Assisted-by: Claude:claude-opus-4-7
* fix(voice-recognition): pin real WeSpeaker ResNet34 ONNX SHA-256
Replaces the placeholder hash in gallery/index.yaml with the actual
SHA-256 (7bb2f06e…) of the upstream
Wespeaker/wespeaker-voxceleb-resnet34-LM ONNX at ~25MB. `local-ai
models install wespeaker-resnet34` now succeeds.
Assisted-by: Claude:claude-opus-4-7
* fix(voice-recognition): soundfile loader + honest analyze default
Two issues surfaced on first end-to-end smoke with the actual backend
image:
1. torchaudio.load in torchaudio 2.8+ requires the torchcodec package
for audio decoding. Switch SpeechBrainEngine._load_waveform to the
already-present soundfile (listed in requirements.txt) plus a numpy
linear resample to 16kHz. Drops a heavy ffmpeg-linked dep and the
codepath we never exercise (torchaudio's ffmpeg backend).
2. The AnalysisHead was defaulting to audeering/wav2vec2-large-robust-
24-ft-age-gender, but AutoModelForAudioClassification silently
mangles that checkpoint — it reports the age head weights as
UNEXPECTED and re-initialises the classifier head with random
values, so the "gender" output is noise and there is no age output
at all. Make age/gender opt-in instead (empty default; users wire
a cleanly-loadable Wav2Vec2ForSequenceClassification checkpoint via
age_gender_model: option). Emotion keeps its working Superb default.
Also broaden _infer_age_gender's tensor-shape handling and catch
runtime exceptions so a dodgy age/gender head never takes down the
whole analyze call.
Docs and README updated to match the new policy.
Verified with the branch-scoped gallery on localhost:
- voice/embed → 192-d ECAPA-TDNN vector
- voice/verify → same-clip dist≈6e-08 verified=true; cross-speaker
dist 0.76–0.99 verified=false (as expected)
- voice/register/identify/forget → round-trip works, 404 on unknown id
- voice/analyze → emotion populated, age/gender omitted (opt-in)
Assisted-by: Claude:claude-opus-4-7
* fix(voice-recognition): real CI audio fixtures + fixture-agnostic verify spec
Two issues surfaced after CI actually ran the speaker-recognition e2e
target (I'd curl-tested against a running server but hadn't run the
make target locally):
1. The default BACKEND_TEST_VOICE_AUDIO_* URLs pointed at
huggingface.co/datasets/CSTR-Edinburgh/vctk paths that return 404
(the dataset is gated). Swap them for the speechbrain test samples
served from github.com/speechbrain/speechbrain/raw/develop/ —
public, no auth, correct 16kHz mono format.
2. The VoiceVerify spec required d(file1,file2) < 0.4, assuming
file1/file2 were same-speaker. The speechbrain samples are three
different speakers (example1/2/5), and there is no easy un-gated
source of true same-speaker audio pairs (VoxCeleb/VCTK/LibriSpeech
are all license- or size-gated for CI use). Replace the ceiling
check with a relative-ordering assertion: d(pair) > d(same-clip)
for both file2 and file3 — that's enough to prove the embeddings
encode speaker info, and it works with any three non-identical
clips. Actual speaker ordering d(1,2) vs d(1,3) is logged but not
asserted.
Local run: 4/4 voice specs pass (Health, LoadModel, VoiceEmbed,
VoiceVerify) on the built backend image. 12 non-voice specs skipped
as expected.
Assisted-by: Claude:claude-opus-4-7
* fix(ci): checkout with submodules in the reusable backend_build workflow
The kokoros Rust backend build fails with
failed to read .../sources/Kokoros/kokoros/Cargo.toml: No such file
because the reusable backend_build.yml workflow's actions/checkout
step was missing `submodules: true`. Dockerfile.rust does `COPY .
/LocalAI`, and without the submodule files the subsequent `cargo
build` can't find the vendored Kokoros crate.
The bug pre-dates this PR — scripts/changed-backends.js only triggers
the kokoros image job when something under backend/rust/kokoros or
the shared proto changes, so master had been coasting past it. The
voice-recognition proto addition re-broke it.
Other checkouts in backend.yml (llama-cpp-darwin) and test-extra.yml
(insightface, kokoros, speaker-recognition) already pass
`submodules: true`; this brings the shared backend image builder in
line.
Assisted-by: Claude:claude-opus-4-7
206 lines
7.7 KiB
Python
206 lines
7.7 KiB
Python
#!/usr/bin/env python3
|
|
"""gRPC server for the LocalAI speaker-recognition backend.
|
|
|
|
Implements Health / LoadModel / Status plus the voice-specific methods:
|
|
VoiceVerify, VoiceAnalyze, VoiceEmbed. The heavy lifting lives in
|
|
engines.py — this file is just the gRPC plumbing, mirroring the
|
|
insightface backend's two-engine split (SpeechBrain + OnnxDirect).
|
|
"""
|
|
from __future__ import annotations
|
|
|
|
import argparse
|
|
import os
|
|
import signal
|
|
import sys
|
|
import time
|
|
from concurrent import futures
|
|
|
|
import backend_pb2
|
|
import backend_pb2_grpc
|
|
import grpc
|
|
|
|
sys.path.insert(0, os.path.join(os.path.dirname(__file__), "..", "common"))
|
|
sys.path.insert(0, os.path.join(os.path.dirname(__file__), "common"))
|
|
from grpc_auth import get_auth_interceptors # noqa: E402
|
|
|
|
from engines import SpeakerEngine, build_engine # noqa: E402
|
|
|
|
_ONE_DAY = 60 * 60 * 24
|
|
MAX_WORKERS = int(os.environ.get("PYTHON_GRPC_MAX_WORKERS", "1"))
|
|
|
|
# ECAPA-TDNN on VoxCeleb is the reference. Threshold is tuned for
|
|
# cosine distance (1 - cosine_similarity). Clients may override.
|
|
DEFAULT_VERIFY_THRESHOLD = 0.25
|
|
|
|
|
|
def _parse_options(raw: list[str]) -> dict[str, str]:
|
|
out: dict[str, str] = {}
|
|
for entry in raw:
|
|
if ":" not in entry:
|
|
continue
|
|
k, v = entry.split(":", 1)
|
|
out[k.strip()] = v.strip()
|
|
return out
|
|
|
|
|
|
class BackendServicer(backend_pb2_grpc.BackendServicer):
|
|
def __init__(self) -> None:
|
|
self.engine: SpeakerEngine | None = None
|
|
self.engine_name: str = ""
|
|
self.model_name: str = ""
|
|
self.verify_threshold: float = DEFAULT_VERIFY_THRESHOLD
|
|
|
|
def Health(self, request, context):
|
|
return backend_pb2.Reply(message=bytes("OK", "utf-8"))
|
|
|
|
def LoadModel(self, request, context):
|
|
options = _parse_options(list(request.Options))
|
|
# Surface LocalAI's models directory (ModelPath) so engines can
|
|
# anchor relative paths and auto-download into a writable spot
|
|
# alongside every other gallery-managed asset.
|
|
options["_model_path"] = request.ModelPath or ""
|
|
try:
|
|
engine, engine_name = build_engine(request.Model, options)
|
|
except Exception as exc: # noqa: BLE001
|
|
return backend_pb2.Result(success=False, message=f"engine init failed: {exc}")
|
|
|
|
self.engine = engine
|
|
self.engine_name = engine_name
|
|
self.model_name = request.Model
|
|
|
|
threshold_opt = options.get("verify_threshold")
|
|
if threshold_opt:
|
|
try:
|
|
self.verify_threshold = float(threshold_opt)
|
|
except ValueError:
|
|
pass
|
|
return backend_pb2.Result(success=True, message=f"loaded {engine_name}")
|
|
|
|
def Status(self, request, context):
|
|
state = backend_pb2.StatusResponse.State.READY if self.engine else backend_pb2.StatusResponse.State.UNINITIALIZED
|
|
return backend_pb2.StatusResponse(state=state)
|
|
|
|
def _require_engine(self, context) -> SpeakerEngine | None:
|
|
if self.engine is None:
|
|
context.set_code(grpc.StatusCode.FAILED_PRECONDITION)
|
|
context.set_details("no speaker-recognition model loaded")
|
|
return None
|
|
return self.engine
|
|
|
|
def VoiceVerify(self, request, context):
|
|
engine = self._require_engine(context)
|
|
if engine is None:
|
|
return backend_pb2.VoiceVerifyResponse()
|
|
if not request.audio1 or not request.audio2:
|
|
context.set_code(grpc.StatusCode.INVALID_ARGUMENT)
|
|
context.set_details("audio1 and audio2 are required")
|
|
return backend_pb2.VoiceVerifyResponse()
|
|
|
|
threshold = request.threshold if request.threshold > 0 else self.verify_threshold
|
|
started = time.time()
|
|
try:
|
|
distance = engine.compare(request.audio1, request.audio2)
|
|
except Exception as exc: # noqa: BLE001
|
|
context.set_code(grpc.StatusCode.INTERNAL)
|
|
context.set_details(f"voice verify failed: {exc}")
|
|
return backend_pb2.VoiceVerifyResponse()
|
|
|
|
elapsed_ms = (time.time() - started) * 1000.0
|
|
# Confidence goes linearly from 100 at distance=0 to 0 at distance=threshold.
|
|
confidence = max(0.0, min(100.0, (1.0 - distance / threshold) * 100.0))
|
|
return backend_pb2.VoiceVerifyResponse(
|
|
verified=distance <= threshold,
|
|
distance=distance,
|
|
threshold=threshold,
|
|
confidence=confidence,
|
|
model=self.model_name,
|
|
processing_time_ms=elapsed_ms,
|
|
)
|
|
|
|
def VoiceEmbed(self, request, context):
|
|
engine = self._require_engine(context)
|
|
if engine is None:
|
|
return backend_pb2.VoiceEmbedResponse()
|
|
if not request.audio:
|
|
context.set_code(grpc.StatusCode.INVALID_ARGUMENT)
|
|
context.set_details("audio is required")
|
|
return backend_pb2.VoiceEmbedResponse()
|
|
try:
|
|
vec = engine.embed(request.audio)
|
|
except Exception as exc: # noqa: BLE001
|
|
context.set_code(grpc.StatusCode.INTERNAL)
|
|
context.set_details(f"voice embed failed: {exc}")
|
|
return backend_pb2.VoiceEmbedResponse()
|
|
return backend_pb2.VoiceEmbedResponse(embedding=list(vec), model=self.model_name)
|
|
|
|
def VoiceAnalyze(self, request, context):
|
|
engine = self._require_engine(context)
|
|
if engine is None:
|
|
return backend_pb2.VoiceAnalyzeResponse()
|
|
if not request.audio:
|
|
context.set_code(grpc.StatusCode.INVALID_ARGUMENT)
|
|
context.set_details("audio is required")
|
|
return backend_pb2.VoiceAnalyzeResponse()
|
|
|
|
actions = list(request.actions) or ["age", "gender", "emotion"]
|
|
try:
|
|
segments = engine.analyze(request.audio, actions)
|
|
except NotImplementedError:
|
|
context.set_code(grpc.StatusCode.UNIMPLEMENTED)
|
|
context.set_details(f"analyze not supported by {self.engine_name}")
|
|
return backend_pb2.VoiceAnalyzeResponse()
|
|
except Exception as exc: # noqa: BLE001
|
|
context.set_code(grpc.StatusCode.INTERNAL)
|
|
context.set_details(f"voice analyze failed: {exc}")
|
|
return backend_pb2.VoiceAnalyzeResponse()
|
|
|
|
proto_segments = []
|
|
for seg in segments:
|
|
proto_segments.append(
|
|
backend_pb2.VoiceAnalysis(
|
|
start=seg.get("start", 0.0),
|
|
end=seg.get("end", 0.0),
|
|
age=seg.get("age", 0.0),
|
|
dominant_gender=seg.get("dominant_gender", ""),
|
|
gender=seg.get("gender", {}),
|
|
dominant_emotion=seg.get("dominant_emotion", ""),
|
|
emotion=seg.get("emotion", {}),
|
|
)
|
|
)
|
|
return backend_pb2.VoiceAnalyzeResponse(segments=proto_segments)
|
|
|
|
|
|
def serve(address: str) -> None:
|
|
interceptors = get_auth_interceptors()
|
|
server = grpc.server(
|
|
futures.ThreadPoolExecutor(max_workers=MAX_WORKERS),
|
|
interceptors=interceptors,
|
|
options=[
|
|
("grpc.max_send_message_length", 128 * 1024 * 1024),
|
|
("grpc.max_receive_message_length", 128 * 1024 * 1024),
|
|
],
|
|
)
|
|
backend_pb2_grpc.add_BackendServicer_to_server(BackendServicer(), server)
|
|
server.add_insecure_port(address)
|
|
server.start()
|
|
print("speaker-recognition backend listening on", address, flush=True)
|
|
|
|
def _stop(*_):
|
|
server.stop(0)
|
|
sys.exit(0)
|
|
|
|
signal.signal(signal.SIGTERM, _stop)
|
|
signal.signal(signal.SIGINT, _stop)
|
|
try:
|
|
while True:
|
|
time.sleep(_ONE_DAY)
|
|
except KeyboardInterrupt:
|
|
server.stop(0)
|
|
|
|
|
|
if __name__ == "__main__":
|
|
parser = argparse.ArgumentParser()
|
|
parser.add_argument("--addr", default="localhost:50051")
|
|
args = parser.parse_args()
|
|
serve(args.addr)
|