mirror of
https://github.com/mudler/LocalAI.git
synced 2026-05-16 20:52:08 -04:00
feat(audio-transform): add LocalVQE backend, bidi gRPC RPC, Studio UI
Introduce a generic "audio transform" capability for any audio-in / audio-out
operation (echo cancellation, noise suppression, dereverberation, voice
conversion, etc.) and ship LocalVQE as the first backend implementation.
Backend protocol:
- Two new gRPC RPCs in backend.proto: unary AudioTransform for batch and
bidirectional AudioTransformStream for low-latency frame-by-frame use.
This is the first bidi stream in the proto; per-frame unary at LocalVQE's
16 ms hop would be RTT-bound. Wire it through pkg/grpc/{client,server,
embed,interface,base} with paired-channel ergonomics.
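A rough sketch of what the proto additions look like; the request/reply message names below are placeholders, not the shipped schema:

```proto
service Backend {
  // Batch: one request in, one transformed clip out.
  rpc AudioTransform(TransformAudioRequest) returns (TransformAudioReply) {}
  // Low-latency: frames flow both ways on a single long-lived stream,
  // avoiding one RTT per 16 ms hop.
  rpc AudioTransformStream(stream TransformAudioRequest)
      returns (stream TransformAudioReply) {}
}
```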
LocalVQE backend (backend/go/localvqe/):
- Go purego wrapper around the upstream liblocalvqe.so. CMake builds the upstream

shared lib + its libggml-cpu-*.so runtime variants directly — no MODULE
wrapper needed because LocalVQE handles CPU feature selection internally
via GGML_BACKEND_DL.
- Sets GGML_NTHREADS from opts.Threads (or runtime.NumCPU()-1) — without it
LocalVQE runs single-threaded at ~1× realtime instead of the documented
~9.6×.
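The thread-count policy can be sketched as follows; the helper name and exact fallback are illustrative, not the actual backend code:

```go
package main

import (
	"fmt"
	"os"
	"runtime"
	"strconv"
)

// threadCount honors an explicit Threads option when set, otherwise
// leaves one core free for the rest of the process (never below 1).
func threadCount(optThreads int) int {
	if optThreads > 0 {
		return optThreads
	}
	if n := runtime.NumCPU() - 1; n > 0 {
		return n
	}
	return 1
}

func main() {
	// LocalVQE reads GGML_NTHREADS at load time, so it must be set
	// before the shared library spins up its thread pool.
	os.Setenv("GGML_NTHREADS", strconv.Itoa(threadCount(0)))
	fmt.Println("GGML_NTHREADS =", os.Getenv("GGML_NTHREADS"))
}
```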
- Reference-length policy: zero-pad short refs, truncate long ones (the
trailing portion can't have leaked into a mic that wasn't recording).
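The pad/truncate policy amounts to a single copy into a zeroed buffer; a minimal sketch (the real helper in backend/go/localvqe may use a different name and sample type):

```go
package main

import "fmt"

// fitReference pads a short reference with zeros or truncates a long one
// so it matches the mic capture length exactly.
func fitReference(ref []int16, micLen int) []int16 {
	out := make([]int16, micLen)
	copy(out, ref) // copies min(len(ref), micLen) samples; the rest stay zero
	return out
}

func main() {
	fmt.Println(fitReference([]int16{1, 2}, 4))    // zero-padded
	fmt.Println(fitReference([]int16{1, 2, 3}, 2)) // truncated
}
```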
- Ginkgo test suite (9 always-on specs + 2 model-gated).
HTTP layer:
- POST /audio/transformations (alias /audio/transform): multipart batch
endpoint, accepts audio + optional reference + params[*]=v form fields.
Persists inputs alongside the output in GeneratedContentDir/audio so the
React UI history can replay past (audio, reference, output) triples.
- GET /audio/transformations/stream: WebSocket bidi, 16 ms PCM frames
(interleaved stereo mic+ref in, mono out). JSON session.update envelope
for config; constants hoisted in core/schema/audio_transform.go.
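The per-frame framing works out to 256 samples per channel (16 kHz × 16 ms). A sketch of the interleaving; the channel order (left = mic, right = reference) is an assumption here, as the actual constants live in core/schema/audio_transform.go:

```go
package main

import "fmt"

const (
	sampleRate   = 16000
	frameMs      = 16
	frameSamples = sampleRate * frameMs / 1000 // 256 samples per channel
)

// interleaveFrame packs one 16 ms mic frame and the matching reference
// frame into interleaved stereo: mic on even indices, ref on odd.
func interleaveFrame(mic, ref []int16) []int16 {
	out := make([]int16, 2*frameSamples)
	for i := 0; i < frameSamples; i++ {
		out[2*i] = mic[i]
		out[2*i+1] = ref[i]
	}
	return out
}

func main() {
	mic := make([]int16, frameSamples)
	ref := make([]int16, frameSamples)
	mic[0], ref[0] = 7, 9
	out := interleaveFrame(mic, ref)
	fmt.Println(len(out), out[:2])
}
```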
- ffmpeg-based input normalisation to 16 kHz mono s16 WAV via the existing
utils.AudioToWav (with passthrough fast-path), so the user can upload any
format / rate without seeing the model's strict 16 kHz constraint.
- BackendTraceAudioTransform integration so /api/backend-traces and the
Traces UI light up with audio_snippet base64 and timing.
- Routes registered under routes/localai.go (LocalAI extension; OpenAI has
no /audio/transformations endpoint), traced via TraceMiddleware.
Auth + capability + importer:
- FLAG_AUDIO_TRANSFORM (model_config.go), FeatureAudioTransform (default-on,
in APIFeatures), three RouteFeatureRegistry rows.
- localvqe added to knownPrefOnlyBackends with modality "audio-transform".
- Gallery entry localvqe-v1-1.3m (sha256-pinned, hosted on
huggingface.co/LocalAI-io/LocalVQE).
React UI:
- New /app/transform page surfaced via a dedicated "Enhance" sidebar
section (sibling of Tools / Biometrics) — the page is enhancement, not
generation, so it lives outside Studio. Two AudioInput components
(Upload + Record tabs, drag-drop, mic capture).
- Echo-test button: records mic while playing the loaded reference through
the speakers — the mic naturally picks up speaker bleed, giving a real
(mic, ref) pair for AEC testing without leaving the UI.
- Reusable WaveformPlayer (canvas peaks + click-to-seek + audio controls)
and useAudioPeaks hook (shared module-scoped AudioContext to avoid
hitting browser context limits with three players on one page); migrated
TTS, Sound, Traces audio blocks to use it.
- Past runs saved in localStorage via useMediaHistory('audio-transform') —
the history entry stores all three URLs so clicking re-renders the full
triple, not just the output.
Build + e2e:
- 11 matrix entries removed from .github/workflows/backend.yml (CUDA, ROCm,
SYCL, Metal, L4T): upstream supports only CPU + Vulkan, so we ship those
two and let GPU-class hardware route through Vulkan in the gallery
capabilities map.
- tests-localvqe-grpc-transform job in test-extra.yml (gated on
detect-changes.outputs.localvqe).
- New audio_transform capability + 4 specs in tests/e2e-backends.
- Playwright spec suite in core/http/react-ui/e2e/audio-transform.spec.js
(8 specs covering tabs, file upload, multipart shape, history, errors).
Docs:
- New docs/content/features/audio-transform.md covering the (audio,
reference) mental model, batch + WebSocket wire formats, LocalVQE param
keys, and a YAML config example. Cross-links from text-to-audio and
audio-to-text feature pages.
Assisted-by: Claude:claude-opus-4-7 [Bash Read Edit Write Agent TaskCreate]
Signed-off-by: Richard Palethorpe <io@richiejp.com>
181 lines
6.4 KiB
Go
package auth

// RouteFeature maps a route pattern + HTTP method to a required feature.
type RouteFeature struct {
	Method  string // "POST", "GET", "*" (any)
	Pattern string // Echo route pattern, e.g. "/v1/chat/completions"
	Feature string // Feature constant, e.g. FeatureChat
}

// RouteFeatureRegistry is the single source of truth for endpoint -> feature mappings.
// To gate a new endpoint, add an entry here -- no other file changes needed.
var RouteFeatureRegistry = []RouteFeature{
	// Chat / Completions
	{"POST", "/v1/chat/completions", FeatureChat},
	{"POST", "/chat/completions", FeatureChat},
	{"POST", "/v1/completions", FeatureChat},
	{"POST", "/completions", FeatureChat},
	{"POST", "/v1/engines/:model/completions", FeatureChat},
	{"POST", "/v1/edits", FeatureChat},
	{"POST", "/edits", FeatureChat},

	// Anthropic
	{"POST", "/v1/messages", FeatureChat},
	{"POST", "/messages", FeatureChat},

	// Open Responses
	{"POST", "/v1/responses", FeatureChat},
	{"POST", "/responses", FeatureChat},
	{"GET", "/v1/responses", FeatureChat},
	{"GET", "/responses", FeatureChat},

	// Embeddings
	{"POST", "/v1/embeddings", FeatureEmbeddings},
	{"POST", "/embeddings", FeatureEmbeddings},
	{"POST", "/v1/engines/:model/embeddings", FeatureEmbeddings},

	// Images
	{"POST", "/v1/images/generations", FeatureImages},
	{"POST", "/images/generations", FeatureImages},
	{"POST", "/v1/images/inpainting", FeatureImages},
	{"POST", "/images/inpainting", FeatureImages},

	// Audio transcription
	{"POST", "/v1/audio/transcriptions", FeatureAudioTranscription},
	{"POST", "/audio/transcriptions", FeatureAudioTranscription},

	// Audio speech / TTS
	{"POST", "/v1/audio/speech", FeatureAudioSpeech},
	{"POST", "/audio/speech", FeatureAudioSpeech},
	{"POST", "/tts", FeatureAudioSpeech},
	{"POST", "/v1/text-to-speech/:voice-id", FeatureAudioSpeech},

	// VAD
	{"POST", "/vad", FeatureVAD},
	{"POST", "/v1/vad", FeatureVAD},

	// Detection
	{"POST", "/v1/detection", FeatureDetection},

	// Face recognition
	{"POST", "/v1/face/verify", FeatureFaceRecognition},
	{"POST", "/v1/face/analyze", FeatureFaceRecognition},
	{"POST", "/v1/face/embed", FeatureFaceRecognition},
	{"POST", "/v1/face/register", FeatureFaceRecognition},
	{"POST", "/v1/face/identify", FeatureFaceRecognition},
	{"POST", "/v1/face/forget", FeatureFaceRecognition},

	// Voice (speaker) recognition
	{"POST", "/v1/voice/verify", FeatureVoiceRecognition},
	{"POST", "/v1/voice/analyze", FeatureVoiceRecognition},
	{"POST", "/v1/voice/embed", FeatureVoiceRecognition},
	{"POST", "/v1/voice/register", FeatureVoiceRecognition},
	{"POST", "/v1/voice/identify", FeatureVoiceRecognition},
	{"POST", "/v1/voice/forget", FeatureVoiceRecognition},

	// Audio transform (echo cancellation, noise suppression, voice conversion, etc.)
	{"POST", "/audio/transformations", FeatureAudioTransform},
	{"POST", "/audio/transform", FeatureAudioTransform},
	{"GET", "/audio/transformations/stream", FeatureAudioTransform},

	// Video
	{"POST", "/video", FeatureVideo},

	// Sound generation
	{"POST", "/v1/sound-generation", FeatureSound},

	// Realtime
	{"GET", "/v1/realtime", FeatureRealtime},
	{"POST", "/v1/realtime/sessions", FeatureRealtime},
	{"POST", "/v1/realtime/transcription_session", FeatureRealtime},
	{"POST", "/v1/realtime/calls", FeatureRealtime},

	// MCP
	{"POST", "/v1/mcp/chat/completions", FeatureMCP},
	{"POST", "/mcp/v1/chat/completions", FeatureMCP},
	{"POST", "/mcp/chat/completions", FeatureMCP},

	// Tokenize
	{"POST", "/v1/tokenize", FeatureTokenize},

	// Rerank
	{"POST", "/v1/rerank", FeatureRerank},

	// Stores
	{"POST", "/stores/set", FeatureStores},
	{"POST", "/stores/delete", FeatureStores},
	{"POST", "/stores/get", FeatureStores},
	{"POST", "/stores/find", FeatureStores},

	// Fine-tuning
	{"POST", "/api/fine-tuning/jobs", FeatureFineTuning},
	{"GET", "/api/fine-tuning/jobs", FeatureFineTuning},
	{"GET", "/api/fine-tuning/jobs/:id", FeatureFineTuning},
	{"POST", "/api/fine-tuning/jobs/:id/stop", FeatureFineTuning},
	{"DELETE", "/api/fine-tuning/jobs/:id", FeatureFineTuning},
	{"GET", "/api/fine-tuning/jobs/:id/progress", FeatureFineTuning},
	{"GET", "/api/fine-tuning/jobs/:id/checkpoints", FeatureFineTuning},
	{"POST", "/api/fine-tuning/jobs/:id/export", FeatureFineTuning},
	{"GET", "/api/fine-tuning/jobs/:id/download", FeatureFineTuning},
	{"POST", "/api/fine-tuning/datasets", FeatureFineTuning},

	// Quantization
	{"POST", "/api/quantization/jobs", FeatureQuantization},
	{"GET", "/api/quantization/jobs", FeatureQuantization},
	{"GET", "/api/quantization/jobs/:id", FeatureQuantization},
	{"POST", "/api/quantization/jobs/:id/stop", FeatureQuantization},
	{"DELETE", "/api/quantization/jobs/:id", FeatureQuantization},
	{"GET", "/api/quantization/jobs/:id/progress", FeatureQuantization},
	{"POST", "/api/quantization/jobs/:id/import", FeatureQuantization},
	{"GET", "/api/quantization/jobs/:id/download", FeatureQuantization},
}

// FeatureMeta describes a feature for the admin API/UI.
type FeatureMeta struct {
	Key          string `json:"key"`
	Label        string `json:"label"`
	DefaultValue bool   `json:"default"`
}

// AgentFeatureMetas returns metadata for agent features.
func AgentFeatureMetas() []FeatureMeta {
	return []FeatureMeta{
		{FeatureAgents, "Agents", false},
		{FeatureSkills, "Skills", false},
		{FeatureCollections, "Collections", false},
		{FeatureMCPJobs, "MCP CI Jobs", false},
		{FeatureLocalAIAssistant, "LocalAI Assistant", false},
	}
}

// GeneralFeatureMetas returns metadata for general features.
func GeneralFeatureMetas() []FeatureMeta {
	return []FeatureMeta{
		{FeatureFineTuning, "Fine-Tuning", false},
		{FeatureQuantization, "Quantization", false},
	}
}

// APIFeatureMetas returns metadata for API endpoint features.
func APIFeatureMetas() []FeatureMeta {
	return []FeatureMeta{
		{FeatureChat, "Chat Completions", true},
		{FeatureImages, "Image Generation", true},
		{FeatureAudioSpeech, "Audio Speech / TTS", true},
		{FeatureAudioTranscription, "Audio Transcription", true},
		{FeatureVAD, "Voice Activity Detection", true},
		{FeatureDetection, "Detection", true},
		{FeatureVideo, "Video Generation", true},
		{FeatureEmbeddings, "Embeddings", true},
		{FeatureSound, "Sound Generation", true},
		{FeatureRealtime, "Realtime", true},
		{FeatureRerank, "Rerank", true},
		{FeatureTokenize, "Tokenize", true},
		{FeatureMCP, "MCP", true},
		{FeatureStores, "Stores", true},
		{FeatureFaceRecognition, "Face Recognition", true},
		{FeatureVoiceRecognition, "Voice Recognition", true},
		{FeatureAudioTransform, "Audio Transform", true},
	}
}