feat(vllm): parity with llama.cpp backend (#9328)

* fix(schema): serialize ToolCallID and Reasoning in Messages.ToProto

The ToProto conversion was dropping tool_call_id and reasoning_content
even though both proto and Go fields existed, breaking multi-turn tool
calling and reasoning passthrough to backends.

* refactor(config): introduce backend hook system and migrate llama-cpp defaults

Adds RegisterBackendHook/runBackendHooks so each backend can register
default-filling functions that run during ModelConfig.SetDefaults().

Migrates the existing GGUF guessing logic into hooks_llamacpp.go,
registered for both 'llama-cpp' and the empty backend (auto-detect).
Removes the old guesser.go shim.
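
The hook mechanism itself lands in Go; purely as an illustration of the
registry pattern, a minimal Python sketch (the hook body and the
context_size default below are hypothetical):

_BACKEND_HOOKS = {}

def register_backend_hook(backend, hook):
    """Register a default-filling function for a backend name ("" = auto-detect)."""
    _BACKEND_HOOKS.setdefault(backend, []).append(hook)

def run_backend_hooks(model_config):
    """Called from SetDefaults: run every hook registered for the config's backend."""
    for hook in _BACKEND_HOOKS.get(model_config.get("backend", ""), []):
        hook(model_config)

def llamacpp_defaults(cfg):
    # Hypothetical hook body: fill a GGUF-guessed default only if unset.
    cfg.setdefault("context_size", 4096)

# Registered for both 'llama-cpp' and the empty backend (auto-detect).
register_backend_hook("llama-cpp", llamacpp_defaults)
register_backend_hook("", llamacpp_defaults)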

* feat(config): add vLLM parser defaults hook and importer auto-detection

Introduces parser_defaults.json mapping model families to vLLM
tool_parser/reasoning_parser names, with longest-pattern-first matching.

The vllmDefaults hook auto-fills tool_parser and reasoning_parser
options at load time for known families, while the VLLMImporter writes
the same values into generated YAML so users can review and edit them.

Adds tests covering MatchParserDefaults, hook registration via
SetDefaults, and the user-override behavior.
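
For illustration, the longest-pattern-first lookup can be sketched in
Python as follows (the real MatchParserDefaults is Go; the two family
entries below are hypothetical):

import json

# Hypothetical stand-in for parser_defaults.json; real entries ship in the repo.
PARSER_DEFAULTS = json.loads("""
{
  "qwen3": {"tool_parser": "hermes", "reasoning_parser": "qwen3"},
  "qwen":  {"tool_parser": "hermes", "reasoning_parser": ""}
}
""")

def match_parser_defaults(model_name):
    """Return defaults for the longest family pattern found in model_name."""
    name = model_name.lower()
    # Longest pattern first, so "qwen3" wins over "qwen" for Qwen3 models.
    for pattern in sorted(PARSER_DEFAULTS, key=len, reverse=True):
        if pattern in name:
            return PARSER_DEFAULTS[pattern]
    return None

print(match_parser_defaults("Qwen/Qwen3-8B"))  # hermes + qwen3, not the bare "qwen" entry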

* feat(vllm): wire native tool/reasoning parsers + chat deltas + logprobs

- Use vLLM's ToolParserManager/ReasoningParserManager to extract structured
  output (tool calls, reasoning content) instead of reimplementing parsing
- Convert proto Messages to dicts and pass tools to apply_chat_template
- Emit ChatDelta with content/reasoning_content/tool_calls in Reply
- Extract prompt_tokens, completion_tokens, and logprobs from output
- Replace boolean GuidedDecoding with proper GuidedDecodingParams from Grammar
- Add TokenizeString and Free RPC methods
- Fix missing `time` import used by load_video()

* feat(vllm): CPU support + shared utils + vllm-omni feature parity

- Split the vllm install per acceleration: move the generic `vllm` out of
  requirements-after.txt into per-profile after files (cublas12, hipblas,
  intel) and add a CPU wheel URL for cpu-after.txt
- requirements-cpu.txt now pulls torch==2.7.0+cpu from the PyTorch CPU index
- backend/index.yaml: register cpu-vllm / cpu-vllm-development variants
- New backend/python/common/vllm_utils.py: shared parse_options,
  messages_to_dicts, setup_parsers helpers (used by both vllm backends)
- vllm-omni: replace hardcoded chat template with tokenizer.apply_chat_template,
  wire native parsers via shared utils, emit ChatDelta with token counts,
  add TokenizeString and Free RPCs, detect CPU and set VLLM_TARGET_DEVICE
- Add test_cpu_inference.py: standalone script to validate CPU build with
  a small model (Qwen2.5-0.5B-Instruct)

* fix(vllm): CPU build compatibility with vllm 0.14.1

Validated end-to-end on CPU with Qwen2.5-0.5B-Instruct (LoadModel, Predict,
TokenizeString, Free all working).

- requirements-cpu-after.txt: pin vllm to 0.14.1+cpu (pre-built wheel from
  GitHub releases) for x86_64 and aarch64. vllm 0.14.1 is the newest CPU
  wheel whose torch dependency resolves against published PyTorch builds
  (torch==2.9.1+cpu). Later vllm CPU wheels currently require
  torch==2.10.0+cpu which is only available on the PyTorch test channel
  with incompatible torchvision.
- requirements-cpu.txt: bump torch to 2.9.1+cpu, add torchvision/torchaudio
  so uv resolves them consistently from the PyTorch CPU index.
- install.sh: add --index-strategy=unsafe-best-match for CPU builds so uv
  can mix the PyTorch index and PyPI for transitive deps (matches the
  existing intel profile behaviour).
- backend.py LoadModel: vllm >= 0.14 removed AsyncLLMEngine.get_model_config
  so the old code path errored out with AttributeError on model load.
  Switch to the new get_tokenizer()/tokenizer accessor with a fallback
  to building the tokenizer directly from request.Model.

* fix(vllm): tool parser constructor compat + e2e tool calling test

Concrete vLLM tool parsers override the abstract base's __init__ and
drop the tools kwarg (e.g. Hermes2ProToolParser only takes tokenizer).
Instantiating with tools= raised TypeError which was silently caught,
leaving chat_deltas.tool_calls empty.

Retry the constructor without the tools kwarg on TypeError — tools
aren't required by these parsers since extract_tool_calls finds tool
syntax in the raw model output directly.
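
For context, Hermes-format models emit tool calls as tagged JSON right
in the generated text, which is why a tokenizer-only constructor is
enough. A minimal sketch of the retry (the raw output string below is
illustrative):

# Hermes-style raw model output: the tool call is plain tagged JSON in the
# text, so extract_tool_calls() can find it without the tool definitions.
raw_output = ('<tool_call>\n'
              '{"name": "get_weather", "arguments": {"location": "Paris, France"}}\n'
              '</tool_call>')

def construct_tool_parser(tool_parser_cls, tokenizer, tools):
    """Instantiate a vLLM tool parser, tolerating constructors without a tools kwarg."""
    try:
        return tool_parser_cls(tokenizer, tools=tools)
    except TypeError:
        # Concrete parsers such as Hermes2ProToolParser only take the tokenizer.
        return tool_parser_cls(tokenizer)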

Validated with Qwen/Qwen2.5-0.5B-Instruct + hermes parser on CPU:
the backend correctly returns ToolCallDelta{name='get_weather',
arguments='{"location": "Paris, France"}'} in ChatDelta.

test_tool_calls.py is a standalone smoke test that spawns the gRPC
backend, sends a chat completion with tools, and asserts the response
contains a structured tool call.

* ci(backend): build cpu-vllm container image

Add the cpu-vllm variant to the backend container build matrix so the
image registered in backend/index.yaml (cpu-vllm / cpu-vllm-development)
is actually produced by CI.

Follows the same pattern as the other CPU python backends
(cpu-diffusers, cpu-chatterbox, etc.) with build-type='' and no CUDA.
backend_pr.yml auto-picks this up via its matrix filter from backend.yml.

* test(e2e-backends): add tools capability + HF model name support

Extends tests/e2e-backends to cover backends that:
- Resolve HuggingFace model ids natively (vllm, vllm-omni) instead of
  loading a local file: BACKEND_TEST_MODEL_NAME is passed verbatim as
  ModelOptions.Model with no download/ModelFile.
- Parse tool calls into ChatDelta.tool_calls: new "tools" capability
  sends a Predict with a get_weather function definition and asserts
  the Reply contains a matching ToolCallDelta. Uses UseTokenizerTemplate
  with OpenAI-style Messages so the backend can wire tools into the
  model's chat template.
- Need backend-specific Options[]: BACKEND_TEST_OPTIONS lets a test set
  e.g. "tool_parser:hermes,reasoning_parser:qwen3" at LoadModel time.

Adds a make target test-extra-backend-vllm that:
- builds the backend image via docker-build-vllm
- loads Qwen/Qwen2.5-0.5B-Instruct
- runs health,load,predict,stream,tools with tool_parser:hermes

Drops backend/python/vllm/test_{cpu_inference,tool_calls}.py — those
standalone scripts were scaffolding used while bringing up the Python
backend; the e2e-backends harness now covers the same ground uniformly
alongside llama-cpp and ik-llama-cpp.

* ci(test-extra): run vllm e2e tests on CPU

Adds tests-vllm-grpc to the test-extra workflow, mirroring the
llama-cpp and ik-llama-cpp gRPC jobs. Triggers when files under
backend/python/vllm/ change (or on run-all), builds the local-ai
vllm container image, and runs the tests/e2e-backends harness with
BACKEND_TEST_MODEL_NAME=Qwen/Qwen2.5-0.5B-Instruct, tool_parser:hermes,
and the tools capability enabled.

Uses ubuntu-latest (no GPU) — vllm runs on CPU via the cpu-vllm
wheel we pinned in requirements-cpu-after.txt. Frees disk space
before the build since the docker image plus the torch and vllm
wheels are sizeable.

* fix(vllm): build from source on CI to avoid SIGILL on prebuilt wheel

The prebuilt vllm 0.14.1+cpu wheel from GitHub releases is compiled with
SIMD instructions (AVX-512 VNNI/BF16 or AMX-BF16) that not every CPU
supports. GitHub Actions ubuntu-latest runners SIGILL when vllm spawns
the model_executor.models.registry subprocess for introspection, so
LoadModel never reaches the actual inference path.

- install.sh: when FROM_SOURCE=true on a CPU build, temporarily hide
  requirements-cpu-after.txt so installRequirements installs the base
  deps + torch CPU without pulling the prebuilt wheel, then clone vllm
  and compile it with VLLM_TARGET_DEVICE=cpu. The resulting binaries
  target the host's actual CPU.
- backend/Dockerfile.python: accept a FROM_SOURCE build-arg and expose
  it as an ENV so install.sh sees it during `make`.
- Makefile docker-build-backend: forward FROM_SOURCE as --build-arg
  when set, so backends that need source builds can opt in.
- Makefile test-extra-backend-vllm: call docker-build-vllm via a
  recursive $(MAKE) invocation so FROM_SOURCE flows through.
- .github/workflows/test-extra.yml: set FROM_SOURCE=true on the
  tests-vllm-grpc job. Slower but reliable — the prebuilt wheel only
  works on hosts that share the build-time SIMD baseline.

Answers 'did you test locally?': yes, end-to-end on my local machine
with the prebuilt wheel (CPU supports AVX-512 VNNI). The CI runner CPU
gap was not covered locally — this commit plugs that gap.

* ci(vllm): use bigger-runner instead of source build

The prebuilt vllm 0.14.1+cpu wheel requires SIMD instructions (AVX-512
VNNI/BF16) that stock ubuntu-latest GitHub runners don't support —
vllm.model_executor.models.registry SIGILLs on import during LoadModel.

Source compilation works but takes 30-40 minutes per CI run, which is
too slow for an e2e smoke test. Instead, switch tests-vllm-grpc to the
bigger-runner self-hosted label (already used by backend.yml for the
llama-cpp CUDA build) — that hardware has the required SIMD baseline
and the prebuilt wheel runs cleanly.

FROM_SOURCE=true is kept as an opt-in escape hatch:
- install.sh still has the CPU source-build path for hosts that need it
- backend/Dockerfile.python still declares the ARG + ENV
- Makefile docker-build-backend still forwards the build-arg when set
Default CI path uses the fast prebuilt wheel; source build can be
re-enabled by exporting FROM_SOURCE=true in the environment.

* ci(vllm): install make + build deps on bigger-runner

bigger-runner is a bare self-hosted runner used by backend.yml for
docker image builds — it has docker but not the usual ubuntu-latest
toolchain. The make-based test target needs make, build-essential
(for cgo in 'go test'), and curl/unzip (the Makefile protoc target
downloads protoc from GitHub releases).

protoc-gen-go and protoc-gen-go-grpc come via 'go install' in the
install-go-tools target, which the setup-go action enables by
providing a Go toolchain.

* ci(vllm): install libnuma1 + libgomp1 on bigger-runner

The vllm 0.14.1+cpu wheel ships a _C C++ extension that dlopens
libnuma.so.1 at import time. When the runner host doesn't have it,
the extension silently fails to register its torch ops, so
EngineCore crashes on init_device with:

  AttributeError: '_OpNamespace' '_C_utils' object has no attribute
    'init_cpu_threads_env'

Also add libgomp1 (OpenMP runtime, used by torch CPU kernels) to be
safe on stripped-down runners.

* feat(vllm): bundle libnuma/libgomp via package.sh

The vllm CPU wheel ships a _C extension that dlopens libnuma.so.1 at
import time; torch's CPU kernels in turn use libgomp.so.1 (OpenMP).
Without these on the host, vllm._C silently fails to register its
torch ops and EngineCore crashes with:

  AttributeError: '_OpNamespace' '_C_utils' object has no attribute
    'init_cpu_threads_env'

Rather than asking every user to install libnuma1/libgomp1 on their
host (or every LocalAI base image to ship them), bundle them into
the backend image itself — same pattern fish-speech and the GPU libs
already use. libbackend.sh adds ${EDIR}/lib to LD_LIBRARY_PATH at
run time so the bundled copies are picked up automatically.
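
A quick standalone check (not part of the commit) that the bundled
copies actually load; the lib directory layout follows the package.sh
described below:

import ctypes
import os

# libbackend.sh prepends ${EDIR}/lib to LD_LIBRARY_PATH before Python starts;
# here we dlopen the bundled files by explicit path to prove they resolve.
backend_lib = os.path.join(os.path.dirname(os.path.abspath(__file__)), "lib")
for soname in ("libnuma.so.1", "libgomp.so.1"):
    ctypes.CDLL(os.path.join(backend_lib, soname))  # raises OSError if unloadable
    print(f"{soname}: OK")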

- backend/python/vllm/package.sh (new): copies libnuma.so.1 and
  libgomp.so.1 from the builder's multilib paths into ${BACKEND}/lib,
  preserving soname symlinks. Runs during Dockerfile.python's
  'Run backend-specific packaging' step (which already invokes
  package.sh if present).
- backend/Dockerfile.python: install libnuma1 + libgomp1 in the
  builder stage so package.sh has something to copy (the Ubuntu
  base image otherwise only has libgomp in the gcc dep chain).
- test-extra.yml: drop the workaround that installed these libs on
  the runner host — with the backend image self-contained, the
  runner no longer needs them, and the test now exercises the
  packaging path end-to-end the way a production host would.

* ci(vllm): disable tests-vllm-grpc job (heterogeneous runners)

Both ubuntu-latest and bigger-runner have inconsistent CPU baselines:
some instances support the AVX-512 VNNI/BF16 instructions the prebuilt
vllm 0.14.1+cpu wheel was compiled with, others SIGILL on import of
vllm.model_executor.models.registry. The libnuma packaging fix doesn't
help when the wheel itself can't be loaded.
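
One way to probe a host up front (a standalone sketch; the exact flag
set the wheel needs is an assumption inferred from the failure mode
above):

# Read /proc/cpuinfo and compare against the AVX-512 features the prebuilt
# CPU wheel is believed to target, so a mismatch fails with a readable error
# instead of a SIGILL deep inside vllm's import.
REQUIRED_FLAGS = {"avx512f", "avx512_vnni", "avx512_bf16"}  # assumed baseline

def missing_simd_flags(path="/proc/cpuinfo"):
    with open(path) as f:
        for line in f:
            if line.startswith("flags"):
                return REQUIRED_FLAGS - set(line.split(":", 1)[1].split())
    return set(REQUIRED_FLAGS)  # no flags line: treat everything as missing

missing = missing_simd_flags()
if missing:
    raise SystemExit(f"CPU lacks {sorted(missing)}; the prebuilt wheel would SIGILL")
print("CPU baseline OK for the prebuilt vllm CPU wheel")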

FROM_SOURCE=true compiles vllm against the actual host CPU and works
everywhere, but takes 30-50 minutes per run — too slow for a smoke
test on every PR.

Comment out the job for now. The test itself is intact and passes
locally; run it via 'make test-extra-backend-vllm' on a host with the
required SIMD baseline. Re-enable when:
  - we have a self-hosted runner label with guaranteed AVX-512 VNNI/BF16, or
  - vllm publishes a CPU wheel with a wider baseline, or
  - we set up a docker layer cache that makes FROM_SOURCE acceptable

The detect-changes vllm output, the test harness changes
(tests/e2e-backends + tools cap), the make target
(test-extra-backend-vllm), the package.sh, and the Dockerfile/install.sh
plumbing all stay in place.
Ettore Di Giacinto authored on 2026-04-13 11:00:29 +02:00, committed by GitHub.
commit d67623230f (parent 0f90d17aac)
28 changed files with 1245 additions and 122 deletions

backend/python/common/vllm_utils.py (new file)

@@ -0,0 +1,84 @@
"""Shared utilities for vLLM-based backends."""
import json
import sys
def parse_options(options_list):
"""Parse Options[] list of 'key:value' strings into a dict.
Supports type inference for common cases (bool, int, float).
Used by LoadModel to extract backend-specific options.
"""
opts = {}
for opt in options_list:
if ":" not in opt:
continue
key, value = opt.split(":", 1)
key = key.strip()
value = value.strip()
# Try type conversion
if value.lower() in ("true", "false"):
opts[key] = value.lower() == "true"
else:
try:
opts[key] = int(value)
except ValueError:
try:
opts[key] = float(value)
except ValueError:
opts[key] = value
return opts
def messages_to_dicts(proto_messages):
"""Convert proto Message objects to list of dicts for apply_chat_template().
Handles: role, content, name, tool_call_id, reasoning_content, tool_calls (JSON string -> list).
"""
result = []
for msg in proto_messages:
d = {"role": msg.role, "content": msg.content or ""}
if msg.name:
d["name"] = msg.name
if msg.tool_call_id:
d["tool_call_id"] = msg.tool_call_id
if msg.reasoning_content:
d["reasoning_content"] = msg.reasoning_content
if msg.tool_calls:
try:
d["tool_calls"] = json.loads(msg.tool_calls)
except json.JSONDecodeError:
pass
result.append(d)
return result
def setup_parsers(opts):
"""Return (tool_parser_cls, reasoning_parser_cls) tuple from opts dict.
Uses vLLM's native ToolParserManager and ReasoningParserManager.
Returns (None, None) if vLLM is not installed or parsers not available.
"""
tool_parser_cls = None
reasoning_parser_cls = None
tool_parser_name = opts.get("tool_parser")
reasoning_parser_name = opts.get("reasoning_parser")
if tool_parser_name:
try:
from vllm.tool_parsers import ToolParserManager
tool_parser_cls = ToolParserManager.get_tool_parser(tool_parser_name)
print(f"[vllm_utils] Loaded tool_parser: {tool_parser_name}", file=sys.stderr)
except Exception as e:
print(f"[vllm_utils] Failed to load tool_parser {tool_parser_name}: {e}", file=sys.stderr)
if reasoning_parser_name:
try:
from vllm.reasoning import ReasoningParserManager
reasoning_parser_cls = ReasoningParserManager.get_reasoning_parser(reasoning_parser_name)
print(f"[vllm_utils] Loaded reasoning_parser: {reasoning_parser_name}", file=sys.stderr)
except Exception as e:
print(f"[vllm_utils] Failed to load reasoning_parser {reasoning_parser_name}: {e}", file=sys.stderr)
return tool_parser_cls, reasoning_parser_cls

backend/python/vllm-omni/backend.py

@@ -17,6 +17,8 @@ import time
 import os
 import base64
 import io
+import json
+import gc
 from PIL import Image
 import torch
@@ -30,6 +32,7 @@ import grpc
 sys.path.insert(0, os.path.join(os.path.dirname(__file__), '..', 'common'))
 sys.path.insert(0, os.path.join(os.path.dirname(__file__), 'common'))
 from grpc_auth import get_auth_interceptors
+from vllm_utils import parse_options, messages_to_dicts, setup_parsers
 from vllm_omni.entrypoints.omni import Omni
@@ -148,23 +151,20 @@ class BackendServicer(backend_pb2_grpc.BackendServicer):
     def LoadModel(self, request, context):
         try:
+            # CPU detection: if no CUDA, default vLLM target device to CPU.
+            try:
+                if not torch.cuda.is_available():
+                    os.environ.setdefault("VLLM_TARGET_DEVICE", "cpu")
+                    os.environ.setdefault("VLLM_CPU_KVCACHE_SPACE", "4")
+            except Exception:
+                pass
             print(f"Loading model {request.Model}...", file=sys.stderr)
             print(f"Request {request}", file=sys.stderr)
-            # Parse options from request.Options (key:value pairs)
-            self.options = {}
-            for opt in request.Options:
-                if ":" not in opt:
-                    continue
-                key, value = opt.split(":", 1)
-                # Convert value to appropriate type
-                if is_float(value):
-                    value = float(value)
-                elif is_int(value):
-                    value = int(value)
-                elif value.lower() in ["true", "false"]:
-                    value = value.lower() == "true"
-                self.options[key] = value
+            # Parse options from request.Options using shared helper
+            self.options = parse_options(request.Options)
+            opts = self.options
             print(f"Options: {self.options}", file=sys.stderr)
@@ -244,6 +244,24 @@ class BackendServicer(backend_pb2_grpc.BackendServicer):
                 omni_kwargs["max_model_len"] = request.MaxModelLen
             self.omni = Omni(**omni_kwargs)
+            # Load tokenizer for LLM/TTS so chat templates work
+            if self.model_type in ("llm", "tts"):
+                try:
+                    from vllm.transformers_utils.tokenizer import get_tokenizer
+                    self.tokenizer = get_tokenizer(
+                        request.Model,
+                        trust_remote_code=opts.get("trust_remote_code", False),
+                    )
+                except Exception as e:
+                    print(f"Failed to load tokenizer: {e}", file=sys.stderr)
+                    self.tokenizer = None
+            else:
+                self.tokenizer = None
+
+            # Setup optional tool / reasoning parsers
+            self.tool_parser_cls, self.reasoning_parser_cls = setup_parsers(opts)
             print("Model loaded successfully", file=sys.stderr)
             return backend_pb2.Result(message="Model loaded successfully", success=True)
@@ -466,14 +484,32 @@ class BackendServicer(backend_pb2_grpc.BackendServicer):
             # Extract prompt
             if request.Prompt:
                 prompt = request.Prompt
-            elif request.Messages and request.UseTokenizerTemplate:
-                # Build prompt from messages (simplified - would need tokenizer for full template)
-                prompt = ""
-                for msg in request.Messages:
-                    role = msg.role
-                    content = msg.content
-                    prompt += f"<|im_start|>{role}\n{content}<|im_end|>\n"
-                prompt += "<|im_start|>assistant\n"
+            elif request.Messages:
+                if getattr(self, "tokenizer", None) is not None:
+                    messages_dicts = messages_to_dicts(request.Messages)
+                    template_kwargs = {"tokenize": False, "add_generation_prompt": True}
+                    if request.Tools:
+                        try:
+                            template_kwargs["tools"] = json.loads(request.Tools)
+                        except json.JSONDecodeError:
+                            pass
+                    try:
+                        if request.Metadata.get("enable_thinking", "").lower() == "true":
+                            template_kwargs["enable_thinking"] = True
+                    except Exception:
+                        pass
+                    try:
+                        prompt = self.tokenizer.apply_chat_template(messages_dicts, **template_kwargs)
+                    except TypeError:
+                        prompt = self.tokenizer.apply_chat_template(
+                            messages_dicts, tokenize=False, add_generation_prompt=True
+                        )
+                else:
+                    # Fallback: basic template
+                    prompt = ""
+                    for msg in request.Messages:
+                        prompt += f"<|im_start|>{msg.role}\n{msg.content}<|im_end|>\n"
+                    prompt += "<|im_start|>assistant\n"
             else:
                 yield backend_pb2.Reply(message=bytes("", 'utf-8'))
                 return
@@ -539,20 +575,79 @@ class BackendServicer(backend_pb2_grpc.BackendServicer):
             # Call omni.generate() (returns generator for LLM mode)
             omni_generator = self.omni.generate([inputs], sampling_params_list)
 
-            # Extract text from outputs
+            # Extract text from outputs and track token usage
             generated_text = ""
+            prompt_tokens = 0
+            completion_tokens = 0
             for stage_outputs in omni_generator:
                 if stage_outputs.final_output_type == "text":
                     for output in stage_outputs.request_output:
-                        text_output = output.outputs[0].text
+                        completion = output.outputs[0]
+                        text_output = completion.text
+                        # Track tokens when available
+                        try:
+                            if getattr(output, "prompt_token_ids", None) is not None:
+                                prompt_tokens = len(output.prompt_token_ids)
+                            if getattr(completion, "token_ids", None) is not None:
+                                completion_tokens = len(completion.token_ids)
+                        except Exception:
+                            pass
                         if streaming:
                             # Remove already sent text (vllm concatenates)
                             delta_text = text_output.removeprefix(generated_text)
-                            yield backend_pb2.Reply(message=bytes(delta_text, encoding='utf-8'))
+                            yield backend_pb2.Reply(
+                                message=bytes(delta_text, encoding='utf-8'),
+                                tokens=completion_tokens,
+                                prompt_tokens=prompt_tokens,
+                            )
                             generated_text = text_output
 
             if not streaming:
-                yield backend_pb2.Reply(message=bytes(generated_text, encoding='utf-8'))
+                # Build optional ChatDelta with parsed reasoning / tool calls
+                chat_deltas = []
+                content_text = generated_text
+                reasoning_text = ""
+                tool_call_deltas = []
+                if self.reasoning_parser_cls is not None:
+                    try:
+                        parser = self.reasoning_parser_cls(self.tokenizer) if self.tokenizer else self.reasoning_parser_cls()
+                        reasoning_text, content_text = parser.extract_reasoning_content(content_text, request=None)
+                        reasoning_text = reasoning_text or ""
+                        content_text = content_text or ""
+                    except Exception as e:
+                        print(f"reasoning_parser failed: {e}", file=sys.stderr)
+                if self.tool_parser_cls is not None:
+                    try:
+                        parser = self.tool_parser_cls(self.tokenizer) if self.tokenizer else self.tool_parser_cls()
+                        tool_info = parser.extract_tool_calls(content_text, request=None)
+                        if getattr(tool_info, "tools_called", False):
+                            content_text = tool_info.content or ""
+                            for tc in tool_info.tool_calls or []:
+                                fn = getattr(tc, "function", None)
+                                tool_call_deltas.append(backend_pb2.ToolCallDelta(
+                                    index=getattr(tc, "index", 0) or 0,
+                                    id=getattr(tc, "id", "") or "",
+                                    name=getattr(fn, "name", "") if fn else "",
+                                    arguments=getattr(fn, "arguments", "") if fn else "",
+                                ))
+                    except Exception as e:
+                        print(f"tool_parser failed: {e}", file=sys.stderr)
+                if self.tool_parser_cls is not None or self.reasoning_parser_cls is not None:
+                    chat_deltas.append(backend_pb2.ChatDelta(
+                        content=content_text,
+                        reasoning_content=reasoning_text,
+                        tool_calls=tool_call_deltas,
+                    ))
+                yield backend_pb2.Reply(
+                    message=bytes(generated_text, encoding='utf-8'),
+                    tokens=completion_tokens,
+                    prompt_tokens=prompt_tokens,
+                    chat_deltas=chat_deltas,
+                )
         except Exception as err:
             print(f"Error in Predict: {err}", file=sys.stderr)
@@ -647,6 +742,37 @@ class BackendServicer(backend_pb2_grpc.BackendServicer):
             traceback.print_exc()
             return backend_pb2.Result(success=False, message=f"Error generating TTS: {err}")
 
+    def TokenizeString(self, request, context):
+        if not hasattr(self, 'tokenizer') or self.tokenizer is None:
+            context.set_code(grpc.StatusCode.FAILED_PRECONDITION)
+            context.set_details("Model/tokenizer not loaded")
+            return backend_pb2.TokenizationResponse()
+        try:
+            tokens = self.tokenizer.encode(request.Prompt)
+            return backend_pb2.TokenizationResponse(length=len(tokens), tokens=tokens)
+        except Exception as e:
+            context.set_code(grpc.StatusCode.INTERNAL)
+            context.set_details(str(e))
+            return backend_pb2.TokenizationResponse()
+
+    def Free(self, request, context):
+        try:
+            if hasattr(self, 'omni'):
+                del self.omni
+            if hasattr(self, 'tokenizer'):
+                del self.tokenizer
+            self.tool_parser_cls = None
+            self.reasoning_parser_cls = None
+            gc.collect()
+            try:
+                if torch.cuda.is_available():
+                    torch.cuda.empty_cache()
+            except Exception:
+                pass
+            return backend_pb2.Result(success=True, message="Model freed")
+        except Exception as e:
+            return backend_pb2.Result(success=False, message=str(e))
+
 def serve(address):
     server = grpc.server(futures.ThreadPoolExecutor(max_workers=MAX_WORKERS),

backend/python/vllm/backend.py

@@ -5,6 +5,9 @@ import argparse
 import signal
 import sys
 import os
+import json
+import time
+import gc
 from typing import List
 from PIL import Image
@@ -26,6 +29,25 @@ from vllm.assets.video import VideoAsset
 import base64
 import io
 
+# Version-compat imports — wrap in try/except for older vLLM versions
+try:
+    from vllm.tool_parsers import ToolParserManager
+    HAS_TOOL_PARSERS = True
+except ImportError:
+    HAS_TOOL_PARSERS = False
+
+try:
+    from vllm.reasoning import ReasoningParserManager
+    HAS_REASONING_PARSERS = True
+except ImportError:
+    HAS_REASONING_PARSERS = False
+
+try:
+    from vllm.sampling_params import GuidedDecodingParams
+    HAS_GUIDED_DECODING = True
+except ImportError:
+    HAS_GUIDED_DECODING = False
+
 _ONE_DAY_IN_SECONDS = 60 * 60 * 24
 
 # If MAX_WORKERS are specified in the environment use it, otherwise default to 1
@@ -69,6 +91,35 @@ class BackendServicer(backend_pb2_grpc.BackendServicer):
                 break
         return decoded_text
 
+    def _parse_options(self, options_list):
+        """Parse Options[] key:value string list into a dict."""
+        opts = {}
+        for opt in options_list:
+            if ":" not in opt:
+                continue
+            key, value = opt.split(":", 1)
+            opts[key.strip()] = value.strip()
+        return opts
+
+    def _messages_to_dicts(self, messages):
+        """Convert proto Messages to list of dicts suitable for apply_chat_template()."""
+        result = []
+        for msg in messages:
+            d = {"role": msg.role, "content": msg.content or ""}
+            if msg.name:
+                d["name"] = msg.name
+            if msg.tool_call_id:
+                d["tool_call_id"] = msg.tool_call_id
+            if msg.reasoning_content:
+                d["reasoning_content"] = msg.reasoning_content
+            if msg.tool_calls:
+                try:
+                    d["tool_calls"] = json.loads(msg.tool_calls)
+                except json.JSONDecodeError:
+                    pass
+            result.append(d)
+        return result
+
     def Health(self, request, context):
        """
        Returns a health check message.
@@ -132,15 +183,49 @@ class BackendServicer(backend_pb2_grpc.BackendServicer):
             return backend_pb2.Result(success=False, message=f"Unexpected {err=}, {type(err)=}")
 
         try:
-            engine_model_config = await self.llm.get_model_config()
-            self.tokenizer = get_tokenizer(
-                engine_model_config.tokenizer,
-                tokenizer_mode=engine_model_config.tokenizer_mode,
-                trust_remote_code=engine_model_config.trust_remote_code,
-                truncation_side="left",
-            )
+            # vLLM >= 0.14 removed get_model_config() on AsyncLLM; the tokenizer
+            # is either already loaded on the engine or can be built from the
+            # Model name directly.
+            tokenizer = None
+            if hasattr(self.llm, "get_tokenizer"):
+                try:
+                    tokenizer = await self.llm.get_tokenizer()
+                except TypeError:
+                    tokenizer = self.llm.get_tokenizer()
+                except Exception:
+                    tokenizer = None
+            if tokenizer is None and hasattr(self.llm, "tokenizer"):
+                tokenizer = self.llm.tokenizer
+            if tokenizer is None:
+                tokenizer = get_tokenizer(
+                    request.Model,
+                    trust_remote_code=bool(request.TrustRemoteCode),
+                    truncation_side="left",
+                )
+            self.tokenizer = tokenizer
         except Exception as err:
             return backend_pb2.Result(success=False, message=f"Unexpected {err=}, {type(err)=}")
 
+        # Parse options for parser selection
+        opts = self._parse_options(request.Options)
+
+        # Instantiate tool/reasoning parser classes (they'll be instantiated per-request with tokenizer)
+        self.tool_parser_cls = None
+        self.reasoning_parser_cls = None
+        if HAS_TOOL_PARSERS and opts.get("tool_parser"):
+            try:
+                self.tool_parser_cls = ToolParserManager.get_tool_parser(opts["tool_parser"])
+                print(f"Loaded tool_parser: {opts['tool_parser']}", file=sys.stderr)
+            except Exception as e:
+                print(f"Failed to load tool_parser {opts.get('tool_parser')}: {e}", file=sys.stderr)
+        if HAS_REASONING_PARSERS and opts.get("reasoning_parser"):
+            try:
+                self.reasoning_parser_cls = ReasoningParserManager.get_reasoning_parser(opts["reasoning_parser"])
+                print(f"Loaded reasoning_parser: {opts['reasoning_parser']}", file=sys.stderr)
+            except Exception as e:
+                print(f"Failed to load reasoning_parser {opts.get('reasoning_parser')}: {e}", file=sys.stderr)
+
         print("Model loaded successfully", file=sys.stderr)
         return backend_pb2.Result(message="Model loaded successfully", success=True)
@@ -197,6 +282,38 @@ class BackendServicer(backend_pb2_grpc.BackendServicer):
             finally:
                 await iterations.aclose()
 
+    async def TokenizeString(self, request, context):
+        if not hasattr(self, 'tokenizer') or self.tokenizer is None:
+            context.set_code(grpc.StatusCode.FAILED_PRECONDITION)
+            context.set_details("Model/tokenizer not loaded")
+            return backend_pb2.TokenizationResponse()
+        try:
+            tokens = self.tokenizer.encode(request.Prompt)
+            return backend_pb2.TokenizationResponse(length=len(tokens), tokens=tokens)
+        except Exception as e:
+            context.set_code(grpc.StatusCode.INTERNAL)
+            context.set_details(str(e))
+            return backend_pb2.TokenizationResponse()
+
+    async def Free(self, request, context):
+        try:
+            if hasattr(self, 'llm'):
+                del self.llm
+            if hasattr(self, 'tokenizer'):
+                del self.tokenizer
+            self.tool_parser_cls = None
+            self.reasoning_parser_cls = None
+            gc.collect()
+            try:
+                import torch
+                if torch.cuda.is_available():
+                    torch.cuda.empty_cache()
+            except ImportError:
+                pass
+            return backend_pb2.Result(success=True, message="Model freed")
+        except Exception as e:
+            return backend_pb2.Result(success=False, message=str(e))
+
     async def _predict(self, request, context, streaming=False):
         # Build the sampling parameters
         # NOTE: this must stay in sync with the vllm backend
@@ -222,7 +339,6 @@ class BackendServicer(backend_pb2_grpc.BackendServicer):
             "SkipSpecialTokens": "skip_special_tokens",
             "SpacesBetweenSpecialTokens": "spaces_between_special_tokens",
             "TruncatePromptTokens": "truncate_prompt_tokens",
-            "GuidedDecoding": "guided_decoding",
         }
 
         sampling_params = SamplingParams(top_p=0.9, max_tokens=200)
@@ -233,6 +349,14 @@ class BackendServicer(backend_pb2_grpc.BackendServicer):
                 if value not in (None, 0, [], False):
                     setattr(sampling_params, param_field, value)
 
+        # Guided decoding: use Grammar field to pass JSON schema or BNF
+        if HAS_GUIDED_DECODING and request.Grammar:
+            try:
+                json.loads(request.Grammar)  # valid JSON = JSON schema
+                sampling_params.guided_decoding = GuidedDecodingParams(json=request.Grammar)
+            except json.JSONDecodeError:
+                sampling_params.guided_decoding = GuidedDecodingParams(grammar=request.Grammar)
+
         # Extract image paths and process images
         prompt = request.Prompt
@@ -244,7 +368,27 @@ class BackendServicer(backend_pb2_grpc.BackendServicer):
 
         # If tokenizer template is enabled and messages are provided instead of prompt, apply the tokenizer template
         if not request.Prompt and request.UseTokenizerTemplate and request.Messages:
-            prompt = self.tokenizer.apply_chat_template(request.Messages, tokenize=False, add_generation_prompt=True)
+            messages_dicts = self._messages_to_dicts(request.Messages)
+            template_kwargs = {"tokenize": False, "add_generation_prompt": True}
+            # Pass tools for tool calling
+            if request.Tools:
+                try:
+                    template_kwargs["tools"] = json.loads(request.Tools)
+                except json.JSONDecodeError:
+                    pass
+            # Enable thinking mode if requested
+            if request.Metadata.get("enable_thinking", "").lower() == "true":
+                template_kwargs["enable_thinking"] = True
+            try:
+                prompt = self.tokenizer.apply_chat_template(messages_dicts, **template_kwargs)
+            except TypeError:
+                # Some tokenizers don't support tools/enable_thinking kwargs — retry without them
+                prompt = self.tokenizer.apply_chat_template(
+                    messages_dicts, tokenize=False, add_generation_prompt=True
+                )
 
         # Generate text using the LLM engine
         request_id = random_uuid()
@@ -265,25 +409,26 @@ class BackendServicer(backend_pb2_grpc.BackendServicer):
         # Stream the results
         generated_text = ""
+        last_output = None
         try:
             async for request_output in outputs:
                 iteration_text = request_output.outputs[0].text
+                last_output = request_output
 
                 if streaming:
                     # Remove text already sent as vllm concatenates the text from previous yields
                     delta_iteration_text = iteration_text.removeprefix(generated_text)
                     # Send the partial result
-                    yield backend_pb2.Reply(message=bytes(delta_iteration_text, encoding='utf-8'))
+                    yield backend_pb2.Reply(
+                        message=bytes(delta_iteration_text, encoding='utf-8'),
+                        chat_deltas=[backend_pb2.ChatDelta(content=delta_iteration_text)],
+                    )
 
                 # Keep track of text generated
                 generated_text = iteration_text
         finally:
             await outputs.aclose()
 
-        # If streaming, we already sent everything
-        if streaming:
-            return
-
         # Remove the image files from /tmp folder
         for img_path in image_paths:
             try:
@@ -291,8 +436,99 @@ class BackendServicer(backend_pb2_grpc.BackendServicer):
             except Exception as e:
                 print(f"Error removing image file: {img_path}, {e}", file=sys.stderr)
 
-        # Sending the final generated text
-        yield backend_pb2.Reply(message=bytes(generated_text, encoding='utf-8'))
+        # Parse reasoning and tool calls from final text using vLLM's native parsers
+        content = generated_text
+        reasoning_content = ""
+        tool_calls_proto = []
+        if self.reasoning_parser_cls:
+            try:
+                rp = self.reasoning_parser_cls(self.tokenizer)
+                r, c = rp.extract_reasoning(generated_text, request=None)
+                reasoning_content = r or ""
+                content = c if c is not None else generated_text
+            except Exception as e:
+                print(f"Reasoning parser error: {e}", file=sys.stderr)
+
+        if self.tool_parser_cls and request.Tools:
+            try:
+                tools = json.loads(request.Tools)
+                # Some concrete parsers only accept the tokenizer; only the
+                # abstract base declares the tools kwarg. Try with tools first,
+                # fall back to tokenizer-only.
+                try:
+                    tp = self.tool_parser_cls(self.tokenizer, tools=tools)
+                except TypeError:
+                    tp = self.tool_parser_cls(self.tokenizer)
+                info = tp.extract_tool_calls(content, request=None)
+                if info.tools_called:
+                    content = info.content or ""
+                    for i, tc in enumerate(info.tool_calls):
+                        tool_calls_proto.append(backend_pb2.ToolCallDelta(
+                            index=i,
+                            id=tc.id,
+                            name=tc.function.name,
+                            arguments=tc.function.arguments,
+                        ))
+            except Exception as e:
+                print(f"Tool parser error: {e}", file=sys.stderr)
+
+        # Extract token counts
+        prompt_tokens = 0
+        completion_tokens = 0
+        if last_output is not None:
+            try:
+                prompt_tokens = len(last_output.prompt_token_ids or [])
+            except Exception:
+                pass
+            try:
+                completion_tokens = len(last_output.outputs[0].token_ids or [])
+            except Exception:
+                pass
+
+        # Extract logprobs if requested
+        logprobs_bytes = b""
+        if last_output is not None and request.Logprobs > 0:
+            try:
+                lp = last_output.outputs[0].logprobs
+                if lp:
+                    logprobs_data = {"content": []}
+                    for token_lp_dict in lp:
+                        if token_lp_dict:
+                            first_tok_id, first_lp = next(iter(token_lp_dict.items()))
+                            logprobs_data["content"].append({
+                                "token": getattr(first_lp, "decoded_token", str(first_tok_id)),
+                                "logprob": first_lp.logprob,
+                            })
+                    logprobs_bytes = json.dumps(logprobs_data).encode("utf-8")
+            except Exception as e:
+                print(f"Logprobs extraction error: {e}", file=sys.stderr)
+
+        chat_delta = backend_pb2.ChatDelta(
+            content=content,
+            reasoning_content=reasoning_content,
+            tool_calls=tool_calls_proto,
+        )
+
+        if streaming:
+            # Final chunk with structured data
+            yield backend_pb2.Reply(
+                message=b"",
+                prompt_tokens=prompt_tokens,
+                tokens=completion_tokens,
+                chat_deltas=[chat_delta],
+                logprobs=logprobs_bytes,
+            )
+            return
+
+        # Non-streaming: single Reply with everything
+        yield backend_pb2.Reply(
+            message=bytes(content, encoding='utf-8'),
+            prompt_tokens=prompt_tokens,
+            tokens=completion_tokens,
+            chat_deltas=[chat_delta],
+            logprobs=logprobs_bytes,
+        )
 
     def load_image(self, image_path: str):
         """

backend/python/vllm/install.sh

@@ -26,20 +26,43 @@ if [ "x${BUILD_PROFILE}" == "xintel" ]; then
     EXTRA_PIP_INSTALL_FLAGS+=" --upgrade --index-strategy=unsafe-first-match"
 fi
 
-# We don't embed this into the images as it is a large dependency and not always needed.
-# Besides, the speed inference are not actually usable in the current state for production use-cases.
-if [ "x${BUILD_TYPE}" == "x" ] && [ "x${FROM_SOURCE:-}" == "xtrue" ]; then
-    ensureVenv
-    # https://docs.vllm.ai/en/v0.6.1/getting_started/cpu-installation.html
-    if [ ! -d vllm ]; then
-        git clone https://github.com/vllm-project/vllm
-    fi
-    pushd vllm
-    uv pip install wheel packaging ninja "setuptools>=49.4.0" numpy typing-extensions pillow setuptools-scm grpcio==1.68.1 protobuf bitsandbytes
-    uv pip install -v -r requirements-cpu.txt --extra-index-url https://download.pytorch.org/whl/cpu
-    VLLM_TARGET_DEVICE=cpu python setup.py install
-    popd
-    rm -rf vllm
-else
-    installRequirements
-fi
+# CPU builds need unsafe-best-match to pull torch==2.10.0+cpu from the
+# pytorch test channel while still resolving transformers/vllm from pypi.
+if [ "x${BUILD_PROFILE}" == "xcpu" ]; then
+    EXTRA_PIP_INSTALL_FLAGS+=" --index-strategy=unsafe-best-match"
+fi
+
+# FROM_SOURCE=true on a CPU build skips the prebuilt vllm wheel in
+# requirements-cpu-after.txt and compiles vllm locally against the host's
+# actual CPU. Not used by default because it takes ~30-40 minutes, but
+# kept here for hosts where the prebuilt wheel SIGILLs (CPU without the
+# required SIMD baseline, e.g. AVX-512 VNNI/BF16). Default CI uses a
+# bigger-runner with compatible hardware instead.
+if [ "x${BUILD_TYPE}" == "x" ] && [ "x${FROM_SOURCE:-}" == "xtrue" ]; then
+    # Temporarily hide the prebuilt wheel so installRequirements doesn't
+    # pull it — the rest of the requirements files (base deps, torch,
+    # transformers) are still installed normally.
+    _cpu_after="${backend_dir}/requirements-cpu-after.txt"
+    _cpu_after_bak=""
+    if [ -f "${_cpu_after}" ]; then
+        _cpu_after_bak="${_cpu_after}.from-source.bak"
+        mv "${_cpu_after}" "${_cpu_after_bak}"
+    fi
+    installRequirements
+    if [ -n "${_cpu_after_bak}" ]; then
+        mv "${_cpu_after_bak}" "${_cpu_after}"
+    fi
+
+    # Build vllm from source against the installed torch.
+    # https://docs.vllm.ai/en/latest/getting_started/installation/cpu/
+    _vllm_src=$(mktemp -d)
+    trap 'rm -rf "${_vllm_src}"' EXIT
+    git clone --depth 1 https://github.com/vllm-project/vllm "${_vllm_src}/vllm"
+    pushd "${_vllm_src}/vllm"
+    uv pip install ${EXTRA_PIP_INSTALL_FLAGS:-} wheel packaging ninja "setuptools>=49.4.0" numpy typing-extensions pillow setuptools-scm
+    # Respect pre-installed torch version — skip vllm's own requirements-build.txt torch pin.
+    VLLM_TARGET_DEVICE=cpu uv pip install ${EXTRA_PIP_INSTALL_FLAGS:-} --no-deps .
+    popd
+else
+    installRequirements
+fi

backend/python/vllm/package.sh (new executable file, 49 lines)

@@ -0,0 +1,49 @@
#!/bin/bash
# Script to package runtime shared libraries for the vllm backend.
#
# The final Dockerfile.python stage is FROM scratch, so system libraries
# must be explicitly copied into ${BACKEND}/lib so the backend can run on
# any host without installing them. libbackend.sh automatically adds that
# directory to LD_LIBRARY_PATH at run time.
#
# vllm's CPU C++ extension (vllm._C) dlopens libnuma.so.1 at import time;
# if it's missing, the _C_utils torch ops are never registered and the
# engine crashes with AttributeError on init_cpu_threads_env. libgomp is
# used by torch's CPU kernels; on some stripped-down hosts it's also
# absent, so we bundle it too.

set -e

CURDIR=$(dirname "$(realpath "$0")")
LIB_DIR="${CURDIR}/lib"

mkdir -p "${LIB_DIR}"

copy_with_symlinks() {
    local soname="$1"
    local hit=""
    for dir in /usr/lib/x86_64-linux-gnu /usr/lib/aarch64-linux-gnu /lib/x86_64-linux-gnu /lib/aarch64-linux-gnu /usr/lib /lib; do
        if [ -e "${dir}/${soname}" ]; then
            hit="${dir}/${soname}"
            break
        fi
    done
    if [ -z "${hit}" ]; then
        echo "warning: ${soname} not found in standard lib paths" >&2
        return 0
    fi
    # Follow the symlink to the real file, copy it, then recreate the symlink.
    local real
    real=$(readlink -f "${hit}")
    cp -v "${real}" "${LIB_DIR}/"
    local real_base
    real_base=$(basename "${real}")
    if [ "${real_base}" != "${soname}" ]; then
        ln -sf "${real_base}" "${LIB_DIR}/${soname}"
    fi
}

copy_with_symlinks libnuma.so.1
copy_with_symlinks libgomp.so.1

echo "vllm packaging completed successfully"
ls -liah "${LIB_DIR}/"

backend/python/vllm/requirements-after.txt

@@ -1 +1,2 @@
-vllm
+# vllm is installed per-acceleration in requirements-{profile}-after.txt
+# (cublas12, hipblas, intel, cpu)

backend/python/vllm/requirements-cpu-after.txt (new file)

@@ -0,0 +1,2 @@
vllm @ https://github.com/vllm-project/vllm/releases/download/v0.14.1/vllm-0.14.1+cpu-cp38-abi3-manylinux_2_35_x86_64.whl ; platform_machine == "x86_64"
vllm @ https://github.com/vllm-project/vllm/releases/download/v0.14.1/vllm-0.14.1+cpu-cp38-abi3-manylinux_2_35_aarch64.whl ; platform_machine == "aarch64"

backend/python/vllm/requirements-cpu.txt

@@ -1,3 +1,6 @@
+--extra-index-url https://download.pytorch.org/whl/cpu
 accelerate
-torch==2.7.0
-transformers
+torch==2.9.1+cpu
+torchvision
+torchaudio
+transformers

backend/python/vllm/requirements-cublas12-after.txt

@@ -1 +1,2 @@
 https://github.com/Dao-AILab/flash-attention/releases/download/v2.8.3/flash_attn-2.8.3+cu12torch2.7cxx11abiTRUE-cp310-cp310-linux_x86_64.whl
+vllm

backend/python/vllm/requirements-hipblas-after.txt (new file)

@@ -0,0 +1 @@
vllm

backend/python/vllm/requirements-intel-after.txt (new file)

@@ -0,0 +1 @@
vllm

backend/python/vllm/test.py

@@ -122,6 +122,89 @@ class TestBackendServicer(unittest.TestCase):
             self.tearDown()
 
+    def test_messages_to_dicts(self):
+        """
+        Tests _messages_to_dicts conversion of proto Messages to dicts.
+        """
+        import sys, os
+        sys.path.insert(0, os.path.dirname(os.path.abspath(__file__)))
+        from backend import BackendServicer
+        servicer = BackendServicer()
+        msgs = [
+            backend_pb2.Message(role="user", content="hello"),
+            backend_pb2.Message(
+                role="assistant",
+                content="",
+                tool_calls='[{"id":"call_1","type":"function","function":{"name":"foo","arguments":"{}"}}]',
+                reasoning_content="thinking...",
+            ),
+            backend_pb2.Message(role="tool", content="result", name="foo", tool_call_id="call_1"),
+        ]
+        result = servicer._messages_to_dicts(msgs)
+        self.assertEqual(len(result), 3)
+        self.assertEqual(result[0], {"role": "user", "content": "hello"})
+        self.assertEqual(result[1]["reasoning_content"], "thinking...")
+        self.assertIsInstance(result[1]["tool_calls"], list)
+        self.assertEqual(result[1]["tool_calls"][0]["id"], "call_1")
+        self.assertEqual(result[2]["tool_call_id"], "call_1")
+        self.assertEqual(result[2]["name"], "foo")
+
+    def test_parse_options(self):
+        """
+        Tests _parse_options correctly parses key:value strings.
+        """
+        import sys, os
+        sys.path.insert(0, os.path.dirname(os.path.abspath(__file__)))
+        from backend import BackendServicer
+        servicer = BackendServicer()
+        opts = servicer._parse_options([
+            "tool_parser:hermes",
+            "reasoning_parser:deepseek_r1",
+            "invalid_no_colon",
+            "key_with_colons:a:b:c",
+        ])
+        self.assertEqual(opts["tool_parser"], "hermes")
+        self.assertEqual(opts["reasoning_parser"], "deepseek_r1")
+        self.assertEqual(opts["key_with_colons"], "a:b:c")
+        self.assertNotIn("invalid_no_colon", opts)
+
+    def test_tokenize_string(self):
+        """
+        Tests the TokenizeString RPC returns valid tokens.
+        """
+        try:
+            self.setUp()
+            with grpc.insecure_channel("localhost:50051") as channel:
+                stub = backend_pb2_grpc.BackendStub(channel)
+                response = stub.LoadModel(backend_pb2.ModelOptions(Model="facebook/opt-125m"))
+                self.assertTrue(response.success)
+                resp = stub.TokenizeString(backend_pb2.PredictOptions(Prompt="Hello world"))
+                self.assertGreater(resp.length, 0)
+                self.assertEqual(len(resp.tokens), resp.length)
+        except Exception as err:
+            print(err)
+            self.fail("TokenizeString service failed")
+        finally:
+            self.tearDown()
+
+    def test_free(self):
+        """
+        Tests the Free RPC doesn't crash.
+        """
+        try:
+            self.setUp()
+            with grpc.insecure_channel("localhost:50051") as channel:
+                stub = backend_pb2_grpc.BackendStub(channel)
+                response = stub.LoadModel(backend_pb2.ModelOptions(Model="facebook/opt-125m"))
+                self.assertTrue(response.success)
+                free_resp = stub.Free(backend_pb2.HealthMessage())
+                self.assertTrue(free_resp.success)
+        except Exception as err:
+            print(err)
+            self.fail("Free service failed")
+        finally:
+            self.tearDown()
+
     def test_embedding(self):
         """
         This method tests if the embeddings are generated successfully