LocalAI/core/backend/audio_transform.go
Richard Palethorpe bb033b16a9 feat: add LocalVQE backend and audio transformations UI (#9640)
feat(audio-transform): add LocalVQE backend, bidi gRPC RPC, Studio UI

Introduce a generic "audio transform" capability for any audio-in / audio-out
operation (echo cancellation, noise suppression, dereverberation, voice
conversion, etc.) and ship LocalVQE as the first backend implementation.

Backend protocol:
- Two new gRPC RPCs in backend.proto: unary AudioTransform for batch and
  bidirectional AudioTransformStream for low-latency frame-by-frame use.
  This is the first bidi stream in the proto; per-frame unary at LocalVQE's
  16 ms hop would be RTT-bound. Wire it through pkg/grpc/{client,server,
  embed,interface,base} with paired-channel ergonomics.
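
As a rough Go sketch, the stream client surfaced through pkg/grpc looks
something like this (method set inferred from usage; the generated proto
message names are assumptions):

    // AudioTransformStreamClient pairs a send channel with a receive channel.
    // Illustrative shape only; see pkg/grpc for the real interface.
    type AudioTransformStreamClient interface {
        Send(*proto.AudioTransformStreamRequest) error   // Config first, then Frames
        Recv() (*proto.AudioTransformStreamReply, error) // io.EOF after the last frame
        CloseSend() error                                // signal end of input
    }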

LocalVQE backend (backend/go/localvqe/):
- purego-based Go wrapper around the upstream liblocalvqe.so. CMake builds the upstream
  shared lib + its libggml-cpu-*.so runtime variants directly — no MODULE
  wrapper needed because LocalVQE handles CPU feature selection internally
  via GGML_BACKEND_DL.
- Sets GGML_NTHREADS from opts.Threads (or runtime.NumCPU()-1) — without it
  LocalVQE runs single-threaded at ~1× realtime instead of the documented
  ~9.6×.
- Reference-length policy: zero-pad short refs, truncate long ones (the
  trailing portion can't have leaked into a mic that wasn't recording);
  see the sketch after this list.
- Ginkgo test suite (9 always-on specs + 2 model-gated).
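
A minimal Go sketch of that reference-length policy, assuming int16 PCM
buffers and an illustrative helper name (not the backend's actual code):

    // alignReference pads or truncates ref to match the mic capture length n.
    func alignReference(ref []int16, n int) []int16 {
        if len(ref) >= n {
            // Truncate: reference audio played after capture stopped
            // cannot have leaked into the mic signal.
            return ref[:n]
        }
        out := make([]int16, n) // zero-padded tail for short references
        copy(out, ref)
        return out
    }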

HTTP layer:
- POST /audio/transformations (alias /audio/transform): multipart batch
  endpoint, accepts audio + optional reference + params[*]=v form fields.
  Persists inputs alongside the output in GeneratedContentDir/audio so the
  React UI history can replay past (audio, reference, output) triples.
- GET /audio/transformations/stream: WebSocket bidi, 16 ms PCM frames
  (interleaved stereo mic+ref in, mono out). JSON session.update envelope
  for config (shape sketched after this list); constants hoisted in
  core/schema/audio_transform.go.
- ffmpeg-based input normalisation to 16 kHz mono s16 WAV via the existing
  utils.AudioToWav (with a passthrough fast-path), so users can upload any
  format or rate without running into the model's strict 16 kHz constraint.
- BackendTraceAudioTransform integration so /api/backend-traces and the
  Traces UI light up with audio_snippet base64 and timing.
- Routes registered under routes/localai.go (LocalAI extension; OpenAI has
  no /audio/transformations endpoint), traced via TraceMiddleware.
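
A hypothetical Go shape for the session.update envelope (the field names
here are assumptions; the real constants live in
core/schema/audio_transform.go):

    // sessionUpdate is an illustrative guess at the WebSocket config
    // envelope, not the actual schema.
    type sessionUpdate struct {
        Type   string            `json:"type"`             // "session.update"
        Model  string            `json:"model"`            // model to load
        Params map[string]string `json:"params,omitempty"` // backend tuning knobs
    }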

Auth + capability + importer:
- FLAG_AUDIO_TRANSFORM (model_config.go), FeatureAudioTransform (default-on,
  in APIFeatures), three RouteFeatureRegistry rows.
- localvqe added to knownPrefOnlyBackends with modality "audio-transform".
- Gallery entry localvqe-v1-1.3m (sha256-pinned, hosted on
  huggingface.co/LocalAI-io/LocalVQE).

React UI:
- New /app/transform page surfaced via a dedicated "Enhance" sidebar
  section (sibling of Tools / Biometrics) — the page is enhancement, not
  generation, so it lives outside Studio. Two AudioInput components
  (Upload + Record tabs, drag-drop, mic capture).
- Echo-test button: records mic while playing the loaded reference through
  the speakers — the mic naturally picks up speaker bleed, giving a real
  (mic, ref) pair for AEC testing without leaving the UI.
- Reusable WaveformPlayer (canvas peaks + click-to-seek + audio controls)
  and useAudioPeaks hook (shared module-scoped AudioContext to avoid
  hitting browser context limits with three players on one page); migrated
  TTS, Sound, Traces audio blocks to use it.
- Past runs saved in localStorage via useMediaHistory('audio-transform') —
  the history entry stores all three URLs so clicking re-renders the full
  triple, not just the output.

Build + e2e:
- 11 matrix entries removed from .github/workflows/backend.yml (CUDA, ROCm,
  SYCL, Metal, L4T): upstream supports only CPU + Vulkan, so we ship those
  two and let GPU-class hardware route through Vulkan in the gallery
  capabilities map.
- tests-localvqe-grpc-transform job in test-extra.yml (gated on
  detect-changes.outputs.localvqe).
- New audio_transform capability + 4 specs in tests/e2e-backends.
- Playwright spec suite in core/http/react-ui/e2e/audio-transform.spec.js
  (8 specs covering tabs, file upload, multipart shape, history, errors).

Docs:
- New docs/content/features/audio-transform.md covering the (audio,
  reference) mental model, batch + WebSocket wire formats, LocalVQE param
  keys, and a YAML config example. Cross-links from text-to-audio and
  audio-to-text feature pages.

Assisted-by: Claude:claude-opus-4-7 [Bash Read Edit Write Agent TaskCreate]

Signed-off-by: Richard Palethorpe <io@richiejp.com>
2026-05-04 22:07:11 +02:00

package backend

import (
	"context"
	"fmt"
	"io"
	"maps"
	"os"
	"path/filepath"
	"time"

	"github.com/mudler/LocalAI/core/config"
	"github.com/mudler/LocalAI/core/trace"
	"github.com/mudler/LocalAI/pkg/grpc"
	"github.com/mudler/LocalAI/pkg/grpc/proto"
	"github.com/mudler/LocalAI/pkg/model"
	"github.com/mudler/LocalAI/pkg/utils"
)

// AudioTransformOptions carries per-request tuning for the unary transform.
type AudioTransformOptions struct {
	// Params is forwarded verbatim to the backend (e.g. LocalVQE reads
	// params["noise_gate"] / params["noise_gate_threshold_dbfs"]).
	Params map[string]string
}

// AudioTransformOutputs are the on-disk paths of the persisted artifacts —
// the user-visible Dst plus copies of the inputs the backend actually saw.
// Inputs are persisted because the React UI history needs to display past
// runs; pointing at the originals after their temp dir is cleaned up would
// defeat the point.
type AudioTransformOutputs struct {
	Dst           string
	AudioPath     string
	ReferencePath string
}

// ModelAudioTransform runs the unary AudioTransform RPC and returns the
// generated output path plus the persisted input paths. `audioPath` is
// required; `referencePath` is optional (empty => backend zero-fills the
// reference channel).
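//
// A typical call looks like (sketch; the handler wiring and the param
// value are illustrative):
//
//	outs, res, err := ModelAudioTransform(audioPath, refPath,
//		AudioTransformOptions{Params: map[string]string{"noise_gate": "1"}},
//		loader, appConfig, modelConfig)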
func ModelAudioTransform(
	audioPath, referencePath string,
	opts AudioTransformOptions,
	loader *model.ModelLoader,
	appConfig *config.ApplicationConfig,
	modelConfig config.ModelConfig,
) (AudioTransformOutputs, *proto.AudioTransformResult, error) {
	mopts := ModelOptions(modelConfig, appConfig)
	transformModel, err := loader.Load(mopts...)
	if err != nil {
		recordModelLoadFailure(appConfig, modelConfig.Name, modelConfig.Backend, err, nil)
		return AudioTransformOutputs{}, nil, err
	}
	if transformModel == nil {
		return AudioTransformOutputs{}, nil, fmt.Errorf("could not load audio-transform model %q", modelConfig.Model)
	}
	audioDir := filepath.Join(appConfig.GeneratedContentDir, "audio")
	if err := os.MkdirAll(audioDir, 0750); err != nil {
		return AudioTransformOutputs{}, nil, fmt.Errorf("failed creating audio directory: %w", err)
	}
	dst := filepath.Join(audioDir, utils.GenerateUniqueFileName(audioDir, "transform", ".wav"))
	persistedAudio, err := persistAudioInput(audioPath, audioDir, "transform-input", ".wav")
	if err != nil {
		return AudioTransformOutputs{}, nil, fmt.Errorf("persist input audio: %w", err)
	}
	persistedRef := ""
	if referencePath != "" {
		persistedRef, err = persistAudioInput(referencePath, audioDir, "transform-ref", ".wav")
		if err != nil {
			return AudioTransformOutputs{}, nil, fmt.Errorf("persist reference: %w", err)
		}
	}
	var startTime time.Time
	if appConfig.EnableTracing {
		trace.InitBackendTracingIfEnabled(appConfig.TracingMaxItems)
		startTime = time.Now()
	}
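	// Note: the RPC receives the original input paths, not the persisted
	// copies; the copies made above exist purely so the UI history can
	// replay this run after the temp upload directory is cleaned.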
	res, err := transformModel.AudioTransform(context.Background(), &proto.AudioTransformRequest{
		AudioPath:     audioPath,
		ReferencePath: referencePath,
		Dst:           dst,
		Params:        opts.Params,
	})
	if appConfig.EnableTracing {
		errStr := ""
		if err != nil {
			errStr = err.Error()
		}
		data := map[string]any{
			"audio_path":     audioPath,
			"reference_path": referencePath,
			"dst":            dst,
			"params":         opts.Params,
		}
		if err == nil && res != nil {
			data["sample_rate"] = res.SampleRate
			data["samples"] = res.Samples
			data["reference_provided"] = res.ReferenceProvided
			if snippet := trace.AudioSnippet(dst); snippet != nil {
				maps.Copy(data, snippet)
			}
		}
		trace.RecordBackendTrace(trace.BackendTrace{
			Timestamp: startTime,
			Duration:  time.Since(startTime),
			Type:      trace.BackendTraceAudioTransform,
			ModelName: modelConfig.Name,
			Backend:   modelConfig.Backend,
			Summary:   trace.TruncateString(filepath.Base(audioPath), 200),
			Error:     errStr,
			Data:      data,
		})
	}
	if err != nil {
		return AudioTransformOutputs{}, nil, err
	}
	return AudioTransformOutputs{
		Dst:           dst,
		AudioPath:     persistedAudio,
		ReferencePath: persistedRef,
	}, res, nil
}

// ModelAudioTransformStream opens the bidirectional AudioTransformStream RPC
// and returns the underlying stream client. The caller is responsible for
// sending the initial Config message, subsequent Frame messages, and for
// calling CloseSend when input is done. The returned stream's Recv reports
// EOF when the backend has finished emitting frames.
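//
// Sketch of the caller contract (the message flow is as documented above;
// exact proto constructors are assumptions about the generated API):
//
//	stream, err := ModelAudioTransformStream(ctx, loader, appConfig, cfg)
//	// send the Config message, then Frame messages, then stream.CloseSend()
//	// call stream.Recv() in a loop until it returns io.EOF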
func ModelAudioTransformStream(
	ctx context.Context,
	loader *model.ModelLoader,
	appConfig *config.ApplicationConfig,
	modelConfig config.ModelConfig,
) (grpc.AudioTransformStreamClient, error) {
	mopts := ModelOptions(modelConfig, appConfig)
	transformModel, err := loader.Load(mopts...)
	if err != nil {
		recordModelLoadFailure(appConfig, modelConfig.Name, modelConfig.Backend, err, nil)
		return nil, err
	}
	if transformModel == nil {
		return nil, fmt.Errorf("could not load audio-transform model %q", modelConfig.Model)
	}
	return transformModel.AudioTransformStream(ctx)
}

// persistAudioInput copies a transient input file (typically a multipart
// upload that lives in an os.TempDir slated for cleanup) into the long-lived
// GeneratedContentDir under a unique name, so the React UI can replay it
// from history.
func persistAudioInput(srcPath, dir, prefix, ext string) (string, error) {
	src, err := os.Open(srcPath)
	if err != nil {
		return "", err
	}
	defer func() { _ = src.Close() }()
	dst := filepath.Join(dir, utils.GenerateUniqueFileName(dir, prefix, ext))
	out, err := os.Create(dst)
	if err != nil {
		return "", err
	}
	defer func() { _ = out.Close() }()
	if _, err := io.Copy(out, src); err != nil {
		return "", err
	}
	// Surface close errors explicitly: a failed flush here would silently
	// truncate the persisted copy.
	if err := out.Close(); err != nil {
		return "", err
	}
	return dst, nil
}