* feat(vllm): expose AsyncEngineArgs via generic engine_args YAML map
LocalAI's vLLM backend wraps a small typed subset of vLLM's
AsyncEngineArgs (quantization, tensor_parallel_size, dtype, etc.).
Anything outside that subset -- pipeline/data/expert parallelism,
speculative_config, kv_transfer_config, all2all_backend, prefix
caching, chunked prefill, etc. -- requires a new protobuf field, a
Go struct field, an options.go line, and a backend.py mapping per
feature. That cadence is the bottleneck on shipping vLLM's
production feature set.
Add a generic `engine_args:` map on the model YAML that is
JSON-serialised into a new ModelOptions.EngineArgs proto field and
applied verbatim to AsyncEngineArgs at LoadModel time. Validation
is done by the Python backend via dataclasses.fields(); unknown
keys fail with the closest valid name as a hint.
dataclasses.replace() is used so vLLM's __post_init__ re-runs and
auto-converts dict values into nested config dataclasses
(CompilationConfig, AttentionConfig, ...). speculative_config and
kv_transfer_config flow through as dicts; vLLM converts them at
engine init.
Operators can now write:
  engine_args:
    data_parallel_size: 8
    enable_expert_parallel: true
    all2all_backend: deepep_low_latency
    speculative_config:
      method: deepseek_mtp
      num_speculative_tokens: 3
    kv_cache_dtype: fp8
without further proto/Go/Python plumbing per field.
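For illustration, a minimal sketch of the validate-then-apply step described above (the helper name, import path, and error wording are assumptions, not the exact backend.py code):

  # Sketch only: hypothetical helper, not the actual backend.py implementation.
  import dataclasses
  import difflib
  import json

  from vllm.engine.arg_utils import AsyncEngineArgs  # assumed import path

  def apply_engine_args(base: AsyncEngineArgs, engine_args_json: str) -> AsyncEngineArgs:
      overrides = json.loads(engine_args_json)
      valid = {f.name for f in dataclasses.fields(AsyncEngineArgs)}
      for key in overrides:
          if key not in valid:
              hint = difflib.get_close_matches(key, valid, n=1)
              suffix = f" (did you mean '{hint[0]}'?)" if hint else ""
              raise ValueError(f"unknown engine_args key '{key}'{suffix}")
      # dataclasses.replace() re-runs __post_init__, so dict values such as
      # speculative_config are converted into nested config objects by vLLM.
      return dataclasses.replace(base, **overrides)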
Production defaults seeded by hooks_vllm.go: enable_prefix_caching
and enable_chunked_prefill default to true unless explicitly set.
Existing typed YAML fields (gpu_memory_utilization,
tensor_parallel_size, etc.) remain for back-compat; engine_args
overrides them when both are set.
Assisted-by: Claude:claude-opus-4-7 [Claude Code]
Signed-off-by: Richard Palethorpe <io@richiejp.com>
* chore(vllm): pin cublas13 to vLLM 0.20.0 cu130 wheel
vLLM's PyPI wheel is built against CUDA 12 (libcudart.so.12) and won't
load on a cu130 host. Switch the cublas13 build to vLLM's per-tag cu130
simple-index (https://wheels.vllm.ai/0.20.0/cu130/) and pin
vllm==0.20.0. The cu130-flavoured wheel ships libcudart.so.13 and
includes the DFlash speculative-decoding method that landed in 0.20.0.
cublas13 install gets --index-strategy=unsafe-best-match so uv consults
both the cu130 index and PyPI when resolving — PyPI also publishes
vllm==0.20.0, but with cu12 binaries that error at import time.
Verified: Qwen3.5-4B + z-lab/Qwen3.5-4B-DFlash loads and serves chat
completions on RTX 5070 Ti (sm_120, cu130).
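A smoke test along those lines can be as small as the following sketch (model name and port are assumptions; LocalAI exposes an OpenAI-compatible API):

  # Hypothetical check: assumes LocalAI on localhost:8080 and a model
  # configured under the name "qwen3.5-4b" with the DFlash drafter.
  from openai import OpenAI

  client = OpenAI(base_url="http://localhost:8080/v1", api_key="none")
  resp = client.chat.completions.create(
      model="qwen3.5-4b",
      messages=[{"role": "user", "content": "Say hello in one word."}],
  )
  print(resp.choices[0].message.content)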
Assisted-by: Claude:claude-opus-4-7 [Claude Code]
Signed-off-by: Richard Palethorpe <io@richiejp.com>
* ci(vllm): bot job to bump cublas13 vLLM wheel pin
vLLM's cu130 wheel index URL is itself version-locked
(wheels.vllm.ai/<TAG>/cu130/, no /latest/ alias upstream), so a vLLM
bump means rewriting two values atomically — the URL segment and the
version constraint. bump_deps.sh handles git-sha-in-Makefile only;
add a sibling bump_vllm_wheel.sh and a matching workflow job that
mirrors the existing matrix's PR-creation pattern.
The bumper queries /releases/latest (which excludes prereleases),
strips the leading 'v', and seds both lines unconditionally. When the
file is already on the latest tag the rewrite is a no-op and
peter-evans/create-pull-request opens no PR.
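The core of that rewrite, sketched in Python for illustration (the real bump_vllm_wheel.sh is a shell script using curl and sed; the target file path here is hypothetical):

  import json
  import re
  import urllib.request

  # /releases/latest excludes prereleases, so the tag is always a stable release.
  with urllib.request.urlopen(
      "https://api.github.com/repos/vllm-project/vllm/releases/latest"
  ) as resp:
      tag = json.load(resp)["tag_name"].removeprefix("v")

  path = "backend/python/vllm/requirements-cublas13.txt"  # hypothetical location
  with open(path) as f:
      text = f.read()
  # Rewrite the per-tag index URL segment and the version pin together.
  text = re.sub(r"wheels\.vllm\.ai/[^/]+/cu130/", f"wheels.vllm.ai/{tag}/cu130/", text)
  text = re.sub(r"vllm==[\d.]+", f"vllm=={tag}", text)
  with open(path, "w") as f:
      f.write(text)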
Assisted-by: Claude:claude-opus-4-7 [Claude Code]
Signed-off-by: Richard Palethorpe <io@richiejp.com>
* docs(vllm): document engine_args and speculative decoding
The new engine_args: map plumbs arbitrary AsyncEngineArgs through to
vLLM, but the public docs only covered the basic typed fields. Add a
short subsection in the vLLM section explaining the typed/generic
split and showing a worked DFlash speculative-decoding config, with
pointers to vLLM's SpeculativeConfig reference and z-lab's drafter
collection.
Assisted-by: Claude:claude-opus-4-7 [Claude Code]
Signed-off-by: Richard Palethorpe <io@richiejp.com>
---------
Signed-off-by: Richard Palethorpe <io@richiejp.com>
Co-authored-by: Ettore Di Giacinto <mudler@users.noreply.github.com>
93 lines · 4.1 KiB · Bash · Executable File
#!/bin/bash
set -e

EXTRA_PIP_INSTALL_FLAGS="--no-build-isolation"

# Avoid overcommitting the CPU during build
# https://github.com/vllm-project/vllm/issues/20079
# https://docs.vllm.ai/en/v0.8.3/serving/env_vars.html
# https://docs.redhat.com/it/documentation/red_hat_ai_inference_server/3.0/html/vllm_server_arguments/environment_variables-server-arguments
export NVCC_THREADS=2
export MAX_JOBS=1

backend_dir=$(dirname $0)

if [ -d $backend_dir/common ]; then
    source $backend_dir/common/libbackend.sh
else
    source $backend_dir/../common/libbackend.sh
fi

# This is here because the Intel pip index is broken and returns 200 status codes for every package name; it just doesn't return any package links.
# This makes uv think that the package exists in the Intel pip index, and by default it stops looking at other pip indexes once it finds a match.
# We need uv to continue falling through to the pypi default index to find optimum[openvino].
# The --upgrade actually allows us to *downgrade* torch to the version provided in the Intel pip index.
if [ "x${BUILD_PROFILE}" == "xintel" ]; then
    EXTRA_PIP_INSTALL_FLAGS+=" --upgrade --index-strategy=unsafe-first-match"
fi

# CPU builds need unsafe-best-match to pull torch==2.10.0+cpu from the
# pytorch test channel while still resolving transformers/vllm from pypi.
if [ "x${BUILD_PROFILE}" == "xcpu" ]; then
    EXTRA_PIP_INSTALL_FLAGS+=" --index-strategy=unsafe-best-match"
fi

# cublas13 pulls the vLLM wheel from a per-tag cu130 index (PyPI's vllm wheel
# is built against CUDA 12 and won't load on cu130). uv's default per-package
# first-match strategy would still pick the PyPI wheel, so allow it to consult
# every configured index when resolving.
if [ "x${BUILD_PROFILE}" == "xcublas13" ]; then
    EXTRA_PIP_INSTALL_FLAGS+=" --index-strategy=unsafe-best-match"
fi

# JetPack 7 / L4T arm64 wheels (torch, vllm, flash-attn) live on
# pypi.jetson-ai-lab.io and are built for cp312, so bump the venv Python
# accordingly. JetPack 6 keeps cp310 + USE_PIP=true. unsafe-best-match
# is required because the jetson-ai-lab index lists transitive deps at
# limited versions — without it uv pins to the first matching index and
# fails to resolve a compatible wheel from PyPI.
if [ "x${BUILD_PROFILE}" == "xl4t12" ]; then
    USE_PIP=true
fi
if [ "x${BUILD_PROFILE}" == "xl4t13" ]; then
    PYTHON_VERSION="3.12"
    PYTHON_PATCH="12"
    PY_STANDALONE_TAG="20251120"
    EXTRA_PIP_INSTALL_FLAGS+=" --index-strategy=unsafe-best-match"
fi

# FROM_SOURCE=true on a CPU build skips the prebuilt vllm wheel in
# requirements-cpu-after.txt and compiles vllm locally against the host's
# actual CPU. Not used by default because it takes ~30-40 minutes, but
# kept here for hosts where the prebuilt wheel SIGILLs (CPU without the
# required SIMD baseline, e.g. AVX-512 VNNI/BF16). Default CI uses a
# bigger-runner with compatible hardware instead.
if [ "x${BUILD_TYPE}" == "x" ] && [ "x${FROM_SOURCE:-}" == "xtrue" ]; then
    # Temporarily hide the prebuilt wheel so installRequirements doesn't
    # pull it — the rest of the requirements files (base deps, torch,
    # transformers) are still installed normally.
    _cpu_after="${backend_dir}/requirements-cpu-after.txt"
    _cpu_after_bak=""
    if [ -f "${_cpu_after}" ]; then
        _cpu_after_bak="${_cpu_after}.from-source.bak"
        mv "${_cpu_after}" "${_cpu_after_bak}"
    fi
    installRequirements
    if [ -n "${_cpu_after_bak}" ]; then
        mv "${_cpu_after_bak}" "${_cpu_after}"
    fi

    # Build vllm from source against the installed torch.
    # https://docs.vllm.ai/en/latest/getting_started/installation/cpu/
    _vllm_src=$(mktemp -d)
    trap 'rm -rf "${_vllm_src}"' EXIT
    git clone --depth 1 https://github.com/vllm-project/vllm "${_vllm_src}/vllm"
    pushd "${_vllm_src}/vllm"
    uv pip install ${EXTRA_PIP_INSTALL_FLAGS:-} wheel packaging ninja "setuptools>=49.4.0" numpy typing-extensions pillow setuptools-scm
    # Respect pre-installed torch version — skip vllm's own requirements-build.txt torch pin.
    VLLM_TARGET_DEVICE=cpu uv pip install ${EXTRA_PIP_INSTALL_FLAGS:-} --no-deps .
    popd
else
    installRequirements
fi