Mirror of https://github.com/mudler/LocalAI.git (synced 2026-05-17 13:10:23 -04:00)
Qwen3-ASR-0.6B encodes the jfk.wav fixture into 777 audio tokens via its mmproj, but the test harness defaulted BACKEND_TEST_CTX_SIZE to 512, so the llama.cpp server rejected every transcription request with "request (777 tokens) exceeds the available context size (512 tokens)".

Set BACKEND_TEST_CTX_SIZE=2048 on the llama-cpp transcription target only; the sherpa-onnx and vibevoice transcription targets don't go through llama.cpp's slot/n_ctx path and weren't failing.

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
Assisted-by: Claude:claude-opus-4-7 [Claude Code]
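The change described above amounts to a per-target override of the context-size variable. A minimal sketch, assuming BACKEND_TEST_CTX_SIZE is read from the environment by the test runner; the target and recipe names here are hypothetical, not the repository's actual Makefile:

```makefile
# Default context size for backend tests. 512 is too small for
# Qwen3-ASR-0.6B, whose mmproj turns jfk.wav into 777 audio tokens.
BACKEND_TEST_CTX_SIZE ?= 512

# llama-cpp transcription goes through llama.cpp's slot/n_ctx check,
# so it needs a context large enough for 777 tokens. (Hypothetical
# target name for illustration.)
test-transcription-llama-cpp:
	BACKEND_TEST_CTX_SIZE=2048 ./run-transcription-tests.sh llama-cpp

# sherpa-onnx and vibevoice never hit llama.cpp's n_ctx path and
# keep the default.
test-transcription-sherpa-onnx:
	./run-transcription-tests.sh sherpa-onnx

test-transcription-vibevoice:
	./run-transcription-tests.sh vibevoice
```

Scoping the override to the one target keeps the other backends' test environments unchanged, so a future regression in their context handling would still surface at the default size.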