openai-functions.md used to claim LocalAI tool calling worked only on llama.cpp-compatible models. That was true when it was written; it's not true now: vLLM (since PR #9328) and MLX/MLX-VLM both extract structured tool calls from model output.

- openai-functions.md: new 'Supported backends' matrix covering llama.cpp, vllm, vllm-omni, mlx, and mlx-vlm, with the key distinction that vllm needs an explicit tool_parser: option while mlx auto-detects from the chat template. Reasoning content (think tags) is extracted on the same set of backends. Added setup snippets for both the vllm and mlx paths, and noted that the gallery importer pre-fills tool_parser:/reasoning_parser: for known families.
- compatibility-table.md: fix the stale 'Streaming: no' for vllm, vllm-omni, mlx, and mlx-vlm (all four support streaming now), and add 'Functions' to their capabilities. Also widen the MLX Acceleration column to reflect the CPU/CUDA/Jetson L4T backends that already exist in backend/index.yaml; 'Metal' on its own was misleading.
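The vllm-vs-mlx distinction described above can be sketched as a pair of model configs. This is a minimal, hedged sketch: the model names and parser values are illustrative assumptions, not copied from the docs, and the exact parser to use depends on the model family.

```yaml
# Hypothetical vllm model config: the tool parser must be named explicitly.
name: my-vllm-model                      # illustrative name
backend: vllm
tool_parser: hermes                      # assumed value; pick the parser matching the model family
reasoning_parser: deepseek_r1            # optional think-tag extraction; assumed value
parameters:
  model: Qwen/Qwen2.5-7B-Instruct       # example model id, an assumption
---
# Hypothetical mlx model config: no tool_parser key needed,
# since the parser is auto-detected from the model's chat template.
name: my-mlx-model                       # illustrative name
backend: mlx
parameters:
  model: mlx-community/Qwen2.5-7B-Instruct-4bit   # example model id, an assumption
```

Per the commit, the gallery importer would pre-fill the tool_parser:/reasoning_parser: keys for known families, so a hand-written config like the first one is only needed for models the importer does not recognize.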