* fix(schema): serialize ToolCallID and Reasoning in Messages.ToProto
The ToProto conversion was dropping tool_call_id and reasoning_content
even though both proto and Go fields existed, breaking multi-turn tool
calling and reasoning passthrough to backends.
* refactor(config): introduce backend hook system and migrate llama-cpp defaults
Adds RegisterBackendHook/runBackendHooks so each backend can register
default-filling functions that run during ModelConfig.SetDefaults().
Migrates the existing GGUF guessing logic into hooks_llamacpp.go,
registered for both 'llama-cpp' and the empty backend (auto-detect).
Removes the old guesser.go shim.
* feat(config): add vLLM parser defaults hook and importer auto-detection
Introduces parser_defaults.json mapping model families to vLLM
tool_parser/reasoning_parser names, with longest-pattern-first matching.
The vllmDefaults hook auto-fills tool_parser and reasoning_parser
options at load time for known families, while the VLLMImporter writes
the same values into generated YAML so users can review and edit them.
Adds tests covering MatchParserDefaults, hook registration via
SetDefaults, and the user-override behavior.
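A minimal sketch of the longest-pattern-first rule, written in Python for illustration only (the actual MatchParserDefaults lives in Go, and the family/parser pairs below are examples in the spirit of parser_defaults.json, not its real contents):

```python
# Illustrative sketch: map model-family substrings to vLLM parser defaults,
# matching the longest pattern first. Entries here are examples only.
PARSER_DEFAULTS = {
    "qwen3": {"tool_parser": "hermes", "reasoning_parser": "qwen3"},
    "qwen": {"tool_parser": "hermes", "reasoning_parser": ""},
}

def match_parser_defaults(model_name: str):
    name = model_name.lower()
    # Longest pattern wins, so "qwen3" beats "qwen" for Qwen3 models.
    for pattern in sorted(PARSER_DEFAULTS, key=len, reverse=True):
        if pattern in name:
            return PARSER_DEFAULTS[pattern]
    return None

# The hook only fills fields the user left empty, so explicit YAML values win.
```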
* feat(vllm): wire native tool/reasoning parsers + chat deltas + logprobs
- Use vLLM's ToolParserManager/ReasoningParserManager to extract structured
output (tool calls, reasoning content) instead of reimplementing parsing
- Convert proto Messages to dicts and pass tools to apply_chat_template
- Emit ChatDelta with content/reasoning_content/tool_calls in Reply
- Extract prompt_tokens, completion_tokens, and logprobs from output
- Replace boolean GuidedDecoding with proper GuidedDecodingParams from Grammar
- Add TokenizeString and Free RPC methods
- Fix missing `time` import used by load_video()
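A hedged sketch of what using the vLLM registries amounts to (import paths and constructor details vary across vLLM versions; the extract_* calls mirror how backend.py uses the parsers further down this page):

```python
# Hedged sketch: fetch parser classes by name from vLLM's registries and run
# them over the final model output. Import paths differ between vLLM versions.
from vllm.entrypoints.openai.tool_parsers import ToolParserManager
from vllm.reasoning import ReasoningParserManager

def parse_output(text, tokenizer, tool_parser="hermes", reasoning_parser="qwen3"):
    rp_cls = ReasoningParserManager.get_reasoning_parser(reasoning_parser)
    reasoning, text = rp_cls(tokenizer).extract_reasoning_content(text, request=None)
    tp_cls = ToolParserManager.get_tool_parser(tool_parser)
    info = tp_cls(tokenizer).extract_tool_calls(text or "", request=None)
    if info.tools_called:
        return reasoning or "", info.content or "", list(info.tool_calls)
    return reasoning or "", text or "", []
```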
* feat(vllm): CPU support + shared utils + vllm-omni feature parity
- Split vllm install per acceleration: move generic `vllm` out of
requirements-after.txt into per-profile after files (cublas12, hipblas,
intel) and add CPU wheel URL for cpu-after.txt
- requirements-cpu.txt now pulls torch==2.7.0+cpu from PyTorch CPU index
- backend/index.yaml: register cpu-vllm / cpu-vllm-development variants
- New backend/python/common/vllm_utils.py: shared parse_options,
messages_to_dicts, setup_parsers helpers (used by both vllm backends;
a parse_options sketch follows this list)
- vllm-omni: replace hardcoded chat template with tokenizer.apply_chat_template,
wire native parsers via shared utils, emit ChatDelta with token counts,
add TokenizeString and Free RPCs, detect CPU and set VLLM_TARGET_DEVICE
- Add test_cpu_inference.py: standalone script to validate CPU build with
a small model (Qwen2.5-0.5B-Instruct)
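The shared parse_options helper itself is not shown on this page; a hedged sketch of the key:value parsing it implies, consistent with how backend.py reads self.options below (the real helper in vllm_utils.py may differ):

```python
# Hedged sketch of a parse_options-style helper: LocalAI passes Options as
# "key:value" strings (e.g. "tool_parser:hermes"); values are coerced to
# bool/int/float where possible, otherwise kept as strings.
def parse_options(options):
    parsed = {}
    for opt in options:
        key, _, value = opt.partition(":")
        if value.lower() in ("true", "false"):
            parsed[key] = value.lower() == "true"
        else:
            try:
                parsed[key] = int(value)
            except ValueError:
                try:
                    parsed[key] = float(value)
                except ValueError:
                    parsed[key] = value
    return parsed
```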
* fix(vllm): CPU build compatibility with vllm 0.14.1
Validated end-to-end on CPU with Qwen2.5-0.5B-Instruct (LoadModel, Predict,
TokenizeString, Free all working).
- requirements-cpu-after.txt: pin vllm to 0.14.1+cpu (pre-built wheel from
GitHub releases) for x86_64 and aarch64. vllm 0.14.1 is the newest CPU
wheel whose torch dependency resolves against published PyTorch builds
(torch==2.9.1+cpu). Later vllm CPU wheels currently require
torch==2.10.0+cpu which is only available on the PyTorch test channel
with incompatible torchvision.
- requirements-cpu.txt: bump torch to 2.9.1+cpu, add torchvision/torchaudio
so uv resolves them consistently from the PyTorch CPU index.
- install.sh: add --index-strategy=unsafe-best-match for CPU builds so uv
can mix the PyTorch index and PyPI for transitive deps (matches the
existing intel profile behaviour).
- backend.py LoadModel: vllm >= 0.14 removed AsyncLLMEngine.get_model_config
so the old code path errored out with AttributeError on model load.
Switch to the new get_tokenizer()/tokenizer accessor with a fallback
to building the tokenizer directly from request.Model.
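A hedged sketch of the fallback described above; the engine accessor names (and whether they are sync or async) vary across vLLM versions, so treat the attribute probing as an assumption:

```python
# Hedged sketch: prefer a tokenizer exposed by the engine, fall back to
# building one from the requested model id. An async engine may require
# awaiting get_tokenizer(); this sketch only covers the synchronous case.
from vllm.transformers_utils.tokenizer import get_tokenizer

def resolve_tokenizer(engine, model_id, trust_remote_code=False):
    tok = getattr(engine, "tokenizer", None)
    if tok is not None:
        return tok
    # Last resort: build the tokenizer straight from the requested model id
    return get_tokenizer(model_id, trust_remote_code=trust_remote_code)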
* fix(vllm): tool parser constructor compat + e2e tool calling test
Concrete vLLM tool parsers override the abstract base's __init__ and
drop the tools kwarg (e.g. Hermes2ProToolParser only takes tokenizer).
Instantiating with tools= raised TypeError which was silently caught,
leaving chat_deltas.tool_calls empty.
Retry the constructor without the tools kwarg on TypeError — tools
aren't required by these parsers since extract_tool_calls finds tool
syntax in the raw model output directly.
Validated with Qwen/Qwen2.5-0.5B-Instruct + hermes parser on CPU:
the backend correctly returns ToolCallDelta{name='get_weather',
arguments='{"location": "Paris, France"}'} in ChatDelta.
test_tool_calls.py is a standalone smoke test that spawns the gRPC
backend, sends a chat completion with tools, and asserts the response
contains a structured tool call.
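The retry itself is small; a sketch of the idea (the helper name here is invented for illustration, the real logic sits in the setup_parsers/instantiation path):

```python
# Sketch of the constructor-compat retry: some concrete vLLM tool parsers
# (e.g. Hermes2ProToolParser) accept only the tokenizer, so passing tools=
# raises TypeError. Fall back to the tokenizer-only constructor; the tool
# definitions are not needed because extract_tool_calls() works on the raw
# model output.
def instantiate_tool_parser(tool_parser_cls, tokenizer, tools=None):
    try:
        return tool_parser_cls(tokenizer, tools=tools)
    except TypeError:
        return tool_parser_cls(tokenizer)
```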
* ci(backend): build cpu-vllm container image
Add the cpu-vllm variant to the backend container build matrix so the
image registered in backend/index.yaml (cpu-vllm / cpu-vllm-development)
is actually produced by CI.
Follows the same pattern as the other CPU python backends
(cpu-diffusers, cpu-chatterbox, etc.) with build-type='' and no CUDA.
backend_pr.yml auto-picks this up via its matrix filter from backend.yml.
* test(e2e-backends): add tools capability + HF model name support
Extends tests/e2e-backends to cover backends that:
- Resolve HuggingFace model ids natively (vllm, vllm-omni) instead of
loading a local file: BACKEND_TEST_MODEL_NAME is passed verbatim as
ModelOptions.Model with no download/ModelFile.
- Parse tool calls into ChatDelta.tool_calls: new "tools" capability
sends a Predict with a get_weather function definition and asserts
the Reply contains a matching ToolCallDelta. Uses UseTokenizerTemplate
with OpenAI-style Messages so the backend can wire tools into the
model's chat template.
- Need backend-specific Options[]: BACKEND_TEST_OPTIONS lets a test set
e.g. "tool_parser:hermes,reasoning_parser:qwen3" at LoadModel time.
Adds make target test-extra-backend-vllm that:
- docker-build-vllm
- loads Qwen/Qwen2.5-0.5B-Instruct
- runs health,load,predict,stream,tools with tool_parser:hermes
Drops backend/python/vllm/test_{cpu_inference,tool_calls}.py — those
standalone scripts were scaffolding used while bringing up the Python
backend; the e2e-backends harness now covers the same ground uniformly
alongside llama-cpp and ik-llama-cpp.
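For orientation, a hedged sketch of what the "tools" capability check amounts to, written as a Python gRPC client against the backend (the real harness is the Go tests/e2e-backends runner; message and field names follow backend_pb2 as used by the backend code below, and should be treated as assumptions if the proto differs):

```python
# Hedged sketch: send a chat Predict with a get_weather tool definition and
# assert the Reply carries a structured ToolCallDelta.
import json, grpc, backend_pb2, backend_pb2_grpc

def check_tools_capability(addr="localhost:50051"):
    stub = backend_pb2_grpc.BackendStub(grpc.insecure_channel(addr))
    tools = [{"type": "function", "function": {
        "name": "get_weather",
        "parameters": {"type": "object",
                       "properties": {"location": {"type": "string"}}}}}]
    reply = stub.Predict(backend_pb2.PredictOptions(
        Messages=[backend_pb2.Message(role="user",
                                      content="What is the weather in Paris?")],
        Tools=json.dumps(tools),
        UseTokenizerTemplate=True,
    ))
    calls = [tc for delta in reply.chat_deltas for tc in delta.tool_calls]
    assert any(tc.name == "get_weather" for tc in calls), "no structured tool call"
```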
* ci(test-extra): run vllm e2e tests on CPU
Adds tests-vllm-grpc to the test-extra workflow, mirroring the
llama-cpp and ik-llama-cpp gRPC jobs. Triggers when files under
backend/python/vllm/ change (or on run-all), builds the local-ai
vllm container image, and runs the tests/e2e-backends harness with
BACKEND_TEST_MODEL_NAME=Qwen/Qwen2.5-0.5B-Instruct, tool_parser:hermes,
and the tools capability enabled.
Uses ubuntu-latest (no GPU); vllm runs on CPU via the vllm CPU wheel
pinned in requirements-cpu-after.txt. Frees disk space before the
build since the docker image plus the torch and vllm wheels are
sizeable.
* fix(vllm): build from source on CI to avoid SIGILL on prebuilt wheel
The prebuilt vllm 0.14.1+cpu wheel from GitHub releases is compiled with
SIMD instructions (AVX-512 VNNI/BF16 or AMX-BF16) that not every CPU
supports. GitHub Actions ubuntu-latest runners SIGILL when vllm spawns
the model_executor.models.registry subprocess for introspection, so
LoadModel never reaches the actual inference path.
- install.sh: when FROM_SOURCE=true on a CPU build, temporarily hide
requirements-cpu-after.txt so installRequirements installs the base
deps + torch CPU without pulling the prebuilt wheel, then clone vllm
and compile it with VLLM_TARGET_DEVICE=cpu. The resulting binaries
target the host's actual CPU.
- backend/Dockerfile.python: accept a FROM_SOURCE build-arg and expose
it as an ENV so install.sh sees it during `make`.
- Makefile docker-build-backend: forward FROM_SOURCE as --build-arg
when set, so backends that need source builds can opt in.
- Makefile test-extra-backend-vllm: call docker-build-vllm via a
recursive $(MAKE) invocation so FROM_SOURCE flows through.
- .github/workflows/test-extra.yml: set FROM_SOURCE=true on the
tests-vllm-grpc job. Slower but reliable — the prebuilt wheel only
works on hosts that share the build-time SIMD baseline.
Answers 'did you test locally?': yes, end-to-end on my local machine
with the prebuilt wheel (CPU supports AVX-512 VNNI). The CI runner CPU
gap was not covered locally — this commit plugs that gap.
* ci(vllm): use bigger-runner instead of source build
The prebuilt vllm 0.14.1+cpu wheel requires SIMD instructions (AVX-512
VNNI/BF16) that stock ubuntu-latest GitHub runners don't support —
vllm.model_executor.models.registry SIGILLs on import during LoadModel.
Source compilation works but takes 30-40 minutes per CI run, which is
too slow for an e2e smoke test. Instead, switch tests-vllm-grpc to the
bigger-runner self-hosted label (already used by backend.yml for the
llama-cpp CUDA build) — that hardware has the required SIMD baseline
and the prebuilt wheel runs cleanly.
FROM_SOURCE=true is kept as an opt-in escape hatch:
- install.sh still has the CPU source-build path for hosts that need it
- backend/Dockerfile.python still declares the ARG + ENV
- Makefile docker-build-backend still forwards the build-arg when set
Default CI path uses the fast prebuilt wheel; source build can be
re-enabled by exporting FROM_SOURCE=true in the environment.
* ci(vllm): install make + build deps on bigger-runner
bigger-runner is a bare self-hosted runner used by backend.yml for
docker image builds — it has docker but not the usual ubuntu-latest
toolchain. The make-based test target needs make, build-essential
(for cgo in 'go test'), and curl/unzip (the Makefile protoc target
downloads protoc from GitHub releases).
protoc-gen-go and protoc-gen-go-grpc come via 'go install' in the
install-go-tools target, which works because setup-go provides the
Go toolchain.
* ci(vllm): install libnuma1 + libgomp1 on bigger-runner
The vllm 0.14.1+cpu wheel ships a _C C++ extension that dlopens
libnuma.so.1 at import time. When the runner host doesn't have it,
the extension silently fails to register its torch ops, so
EngineCore crashes on init_device with:
AttributeError: '_OpNamespace' '_C_utils' object has no attribute
'init_cpu_threads_env'
Also add libgomp1 (OpenMP runtime, used by torch CPU kernels) to be
safe on stripped-down runners.
* feat(vllm): bundle libnuma/libgomp via package.sh
The vllm CPU wheel ships a _C extension that dlopens libnuma.so.1 at
import time; torch's CPU kernels in turn use libgomp.so.1 (OpenMP).
Without these on the host, vllm._C silently fails to register its
torch ops and EngineCore crashes with:
AttributeError: '_OpNamespace' '_C_utils' object has no attribute
'init_cpu_threads_env'
Rather than asking every user to install libnuma1/libgomp1 on their
host (or every LocalAI base image to ship them), bundle them into
the backend image itself — same pattern fish-speech and the GPU libs
already use. libbackend.sh adds ${EDIR}/lib to LD_LIBRARY_PATH at
run time so the bundled copies are picked up automatically.
- backend/python/vllm/package.sh (new): copies libnuma.so.1 and
libgomp.so.1 from the builder's multilib paths into ${BACKEND}/lib,
preserving soname symlinks. Runs during Dockerfile.python's
'Run backend-specific packaging' step (which already invokes
package.sh if present).
- backend/Dockerfile.python: install libnuma1 + libgomp1 in the
builder stage so package.sh has something to copy (the Ubuntu
base image otherwise only has libgomp in the gcc dep chain).
- test-extra.yml: drop the workaround that installed these libs on
the runner host — with the backend image self-contained, the
runner no longer needs them, and the test now exercises the
packaging path end-to-end the way a production host would.
* ci(vllm): disable tests-vllm-grpc job (heterogeneous runners)
Both ubuntu-latest and bigger-runner have inconsistent CPU baselines:
some instances support the AVX-512 VNNI/BF16 instructions the prebuilt
vllm 0.14.1+cpu wheel was compiled with, others SIGILL on import of
vllm.model_executor.models.registry. The libnuma packaging fix doesn't
help when the wheel itself can't be loaded.
FROM_SOURCE=true compiles vllm against the actual host CPU and works
everywhere, but takes 30-50 minutes per run — too slow for a smoke
test on every PR.
Comment out the job for now. The test itself is intact and passes
locally; run it via 'make test-extra-backend-vllm' on a host with the
required SIMD baseline. Re-enable when:
- we have a self-hosted runner label with guaranteed AVX-512 VNNI/BF16, or
- vllm publishes a CPU wheel with a wider baseline, or
- we set up a docker layer cache that makes FROM_SOURCE acceptable
The detect-changes vllm output, the test harness changes (tests/
e2e-backends + tools cap), the make target (test-extra-backend-vllm),
the package.sh and the Dockerfile/install.sh plumbing all stay in
place.
#!/usr/bin/env python3
"""
LocalAI vLLM-Omni Backend

This backend provides gRPC access to vllm-omni for multimodal generation:
- Image generation (text-to-image, image editing)
- Video generation (text-to-video, image-to-video)
- Text generation with multimodal inputs (LLM)
- Text-to-speech generation
"""
from concurrent import futures
import traceback
import argparse
import signal
import sys
import time
import os
import base64
import io
import json
import gc

from PIL import Image
import torch
import numpy as np
import soundfile as sf

import backend_pb2
import backend_pb2_grpc

import grpc

sys.path.insert(0, os.path.join(os.path.dirname(__file__), '..', 'common'))
sys.path.insert(0, os.path.join(os.path.dirname(__file__), 'common'))
from grpc_auth import get_auth_interceptors
from vllm_utils import parse_options, messages_to_dicts, setup_parsers

from vllm_omni.entrypoints.omni import Omni
from vllm_omni.outputs import OmniRequestOutput
from vllm_omni.diffusion.data import DiffusionParallelConfig
from vllm_omni.utils.platform_utils import detect_device_type, is_npu
from vllm import SamplingParams
from diffusers.utils import export_to_video

_ONE_DAY_IN_SECONDS = 60 * 60 * 24

# If MAX_WORKERS is specified in the environment use it, otherwise default to 1
MAX_WORKERS = int(os.environ.get('PYTHON_GRPC_MAX_WORKERS', '1'))

def is_float(s):
    """Check if a string can be converted to float."""
    try:
        float(s)
        return True
    except ValueError:
        return False


def is_int(s):
    """Check if a string can be converted to int."""
    try:
        int(s)
        return True
    except ValueError:
        return False

# Implement the BackendServicer class with the service methods
class BackendServicer(backend_pb2_grpc.BackendServicer):

    def _detect_model_type(self, model_name):
        """Detect model type from model name."""
        model_lower = model_name.lower()
        if "tts" in model_lower or "qwen3-tts" in model_lower:
            return "tts"
        elif "omni" in model_lower and "qwen3" in model_lower:
            return "llm"
        elif "wan" in model_lower or "t2v" in model_lower or "i2v" in model_lower:
            return "video"
        elif "image" in model_lower or "z-image" in model_lower or "qwen-image" in model_lower:
            return "image"
        else:
            # Default to image for diffusion models, llm for others
            return "image"

    def _detect_tts_task_type(self):
        """Detect TTS task type from model name."""
        model_lower = self.model_name.lower()
        if "customvoice" in model_lower:
            return "CustomVoice"
        elif "voicedesign" in model_lower:
            return "VoiceDesign"
        elif "base" in model_lower:
            return "Base"
        else:
            # Default to CustomVoice
            return "CustomVoice"

    def _load_image(self, image_path):
        """Load an image from file path or base64 encoded data."""
        # Try file path first
        if os.path.exists(image_path):
            return Image.open(image_path)
        # Try base64 decode
        try:
            image_data = base64.b64decode(image_path)
            return Image.open(io.BytesIO(image_data))
        except Exception:
            return None

    def _load_video(self, video_path):
        """Load a video from file path or base64 encoded data."""
        from vllm.assets.video import VideoAsset, video_to_ndarrays
        if os.path.exists(video_path):
            return video_to_ndarrays(video_path, num_frames=16)
        # Try base64 decode
        try:
            timestamp = str(int(time.time() * 1000))
            p = f"/tmp/vl-{timestamp}.data"
            with open(p, "wb") as f:
                f.write(base64.b64decode(video_path))
            video = VideoAsset(name=p).np_ndarrays
            os.remove(p)
            return video
        except Exception:
            return None

    def _load_audio(self, audio_path):
        """Load audio from file path or base64 encoded data."""
        import librosa
        if os.path.exists(audio_path):
            audio_signal, sr = librosa.load(audio_path, sr=16000)
            return (audio_signal.astype(np.float32), sr)
        # Try base64 decode
        try:
            audio_data = base64.b64decode(audio_path)
            # Save to temp file and load
            timestamp = str(int(time.time() * 1000))
            p = f"/tmp/audio-{timestamp}.wav"
            with open(p, "wb") as f:
                f.write(audio_data)
            audio_signal, sr = librosa.load(p, sr=16000)
            os.remove(p)
            return (audio_signal.astype(np.float32), sr)
        except Exception:
            return None

    def Health(self, request, context):
        return backend_pb2.Reply(message=bytes("OK", 'utf-8'))

    def LoadModel(self, request, context):
        try:
            # CPU detection: if no CUDA, default vLLM target device to CPU.
            try:
                if not torch.cuda.is_available():
                    os.environ.setdefault("VLLM_TARGET_DEVICE", "cpu")
                    os.environ.setdefault("VLLM_CPU_KVCACHE_SPACE", "4")
            except Exception:
                pass

            print(f"Loading model {request.Model}...", file=sys.stderr)
            print(f"Request {request}", file=sys.stderr)

            # Parse options from request.Options using shared helper
            self.options = parse_options(request.Options)
            opts = self.options

            print(f"Options: {self.options}", file=sys.stderr)

            # Detect model type
            self.model_name = request.Model
            self.model_type = request.Type if request.Type else self._detect_model_type(request.Model)
            print(f"Detected model type: {self.model_type}", file=sys.stderr)

            # Build DiffusionParallelConfig if diffusion model (image or video)
            parallel_config = None
            if self.model_type in ["image", "video"]:
                parallel_config = DiffusionParallelConfig(
                    ulysses_degree=self.options.get("ulysses_degree", 1),
                    ring_degree=self.options.get("ring_degree", 1),
                    cfg_parallel_size=self.options.get("cfg_parallel_size", 1),
                    tensor_parallel_size=self.options.get("tensor_parallel_size", 1),
                )

            # Build cache_config dict if cache_backend specified
            cache_backend = self.options.get("cache_backend")  # "cache_dit" or "tea_cache"
            cache_config = None
            if cache_backend == "cache_dit":
                cache_config = {
                    "Fn_compute_blocks": self.options.get("cache_dit_fn_compute_blocks", 1),
                    "Bn_compute_blocks": self.options.get("cache_dit_bn_compute_blocks", 0),
                    "max_warmup_steps": self.options.get("cache_dit_max_warmup_steps", 4),
                    "residual_diff_threshold": self.options.get("cache_dit_residual_diff_threshold", 0.24),
                    "max_continuous_cached_steps": self.options.get("cache_dit_max_continuous_cached_steps", 3),
                    "enable_taylorseer": self.options.get("cache_dit_enable_taylorseer", False),
                    "taylorseer_order": self.options.get("cache_dit_taylorseer_order", 1),
                    "scm_steps_mask_policy": self.options.get("cache_dit_scm_steps_mask_policy"),
                    "scm_steps_policy": self.options.get("cache_dit_scm_steps_policy", "dynamic"),
                }
            elif cache_backend == "tea_cache":
                cache_config = {
                    "rel_l1_thresh": self.options.get("tea_cache_rel_l1_thresh", 0.2),
                }

            # Base Omni initialization parameters
            omni_kwargs = {
                "model": request.Model,
            }

            # Add diffusion-specific parameters (image/video models)
            if self.model_type in ["image", "video"]:
                omni_kwargs.update({
                    "vae_use_slicing": is_npu(),
                    "vae_use_tiling": is_npu(),
                    "cache_backend": cache_backend,
                    "cache_config": cache_config,
                    "parallel_config": parallel_config,
                    "enforce_eager": self.options.get("enforce_eager", request.EnforceEager),
                    "enable_cpu_offload": self.options.get("enable_cpu_offload", False),
                })
                # Video-specific parameters
                if self.model_type == "video":
                    omni_kwargs.update({
                        "boundary_ratio": self.options.get("boundary_ratio", 0.875),
                        "flow_shift": self.options.get("flow_shift", 5.0),
                    })

            # Add LLM/TTS-specific parameters
            if self.model_type in ["llm", "tts"]:
                omni_kwargs.update({
                    "stage_configs_path": self.options.get("stage_configs_path"),
                    "log_stats": self.options.get("enable_stats", False),
                    "stage_init_timeout": self.options.get("stage_init_timeout", 300),
                })
                # vllm engine options (passed through Omni for LLM/TTS)
                if request.GPUMemoryUtilization > 0:
                    omni_kwargs["gpu_memory_utilization"] = request.GPUMemoryUtilization
                if request.TensorParallelSize > 0:
                    omni_kwargs["tensor_parallel_size"] = request.TensorParallelSize
                if request.TrustRemoteCode:
                    omni_kwargs["trust_remote_code"] = request.TrustRemoteCode
                if request.MaxModelLen > 0:
                    omni_kwargs["max_model_len"] = request.MaxModelLen

            self.omni = Omni(**omni_kwargs)

            # Load tokenizer for LLM/TTS so chat templates work
            if self.model_type in ("llm", "tts"):
                try:
                    from vllm.transformers_utils.tokenizer import get_tokenizer
                    self.tokenizer = get_tokenizer(
                        request.Model,
                        trust_remote_code=opts.get("trust_remote_code", False),
                    )
                except Exception as e:
                    print(f"Failed to load tokenizer: {e}", file=sys.stderr)
                    self.tokenizer = None
            else:
                self.tokenizer = None

            # Setup optional tool / reasoning parsers
            self.tool_parser_cls, self.reasoning_parser_cls = setup_parsers(opts)

            print("Model loaded successfully", file=sys.stderr)
            return backend_pb2.Result(message="Model loaded successfully", success=True)

        except Exception as err:
            print(f"Unexpected {err=}, {type(err)=}", file=sys.stderr)
            traceback.print_exc()
            return backend_pb2.Result(success=False, message=f"Unexpected {err=}, {type(err)=}")

    def GenerateImage(self, request, context):
        try:
            # Validate model is loaded and is image/diffusion type
            if not hasattr(self, 'omni'):
                return backend_pb2.Result(success=False, message="Model not loaded. Call LoadModel first.")
            if self.model_type not in ["image"]:
                return backend_pb2.Result(success=False, message=f"Model type {self.model_type} does not support image generation")

            # Extract parameters
            prompt = request.positive_prompt
            negative_prompt = request.negative_prompt if request.negative_prompt else None
            width = request.width if request.width > 0 else 1024
            height = request.height if request.height > 0 else 1024
            seed = request.seed if request.seed > 0 else None
            num_inference_steps = request.step if request.step > 0 else 50
            cfg_scale = self.options.get("cfg_scale", 4.0)
            guidance_scale = self.options.get("guidance_scale", 1.0)

            # Create generator if seed provided
            generator = None
            if seed:
                device = detect_device_type()
                generator = torch.Generator(device=device).manual_seed(seed)

            # Handle image input for image editing
            pil_image = None
            if request.src or (request.ref_images and len(request.ref_images) > 0):
                image_path = request.ref_images[0] if request.ref_images else request.src
                pil_image = self._load_image(image_path)
                if pil_image is None:
                    return backend_pb2.Result(success=False, message=f"Invalid image source: {image_path}")
                pil_image = pil_image.convert("RGB")

            # Build generate kwargs
            generate_kwargs = {
                "prompt": prompt,
                "negative_prompt": negative_prompt,
                "height": height,
                "width": width,
                "generator": generator,
                "true_cfg_scale": cfg_scale,
                "guidance_scale": guidance_scale,
                "num_inference_steps": num_inference_steps,
            }
            if pil_image:
                generate_kwargs["pil_image"] = pil_image

            # Call omni.generate()
            outputs = self.omni.generate(**generate_kwargs)

            # Extract images (following example pattern)
            if not outputs or len(outputs) == 0:
                return backend_pb2.Result(success=False, message="No output generated")

            first_output = outputs[0]
            if not hasattr(first_output, "request_output") or not first_output.request_output:
                return backend_pb2.Result(success=False, message="Invalid output structure")

            req_out = first_output.request_output[0]
            if not isinstance(req_out, OmniRequestOutput) or not hasattr(req_out, "images"):
                return backend_pb2.Result(success=False, message="No images in output")

            images = req_out.images
            if not images or len(images) == 0:
                return backend_pb2.Result(success=False, message="Empty images list")

            # Save image
            output_image = images[0]
            output_image.save(request.dst)
            return backend_pb2.Result(message="Image generated successfully", success=True)

        except Exception as err:
            print(f"Error generating image: {err}", file=sys.stderr)
            traceback.print_exc()
            return backend_pb2.Result(success=False, message=f"Error generating image: {err}")

    def GenerateVideo(self, request, context):
        try:
            # Validate model is loaded and is video/diffusion type
            if not hasattr(self, 'omni'):
                return backend_pb2.Result(success=False, message="Model not loaded. Call LoadModel first.")
            if self.model_type not in ["video"]:
                return backend_pb2.Result(success=False, message=f"Model type {self.model_type} does not support video generation")

            # Extract parameters
            prompt = request.prompt
            negative_prompt = request.negative_prompt if request.negative_prompt else ""
            width = request.width if request.width > 0 else 1280
            height = request.height if request.height > 0 else 720
            num_frames = request.num_frames if request.num_frames > 0 else 81
            fps = request.fps if request.fps > 0 else 24
            seed = request.seed if request.seed > 0 else None
            guidance_scale = request.cfg_scale if request.cfg_scale > 0 else 4.0
            guidance_scale_high = self.options.get("guidance_scale_high")
            num_inference_steps = request.step if request.step > 0 else 40

            # Create generator
            generator = None
            if seed:
                device = detect_device_type()
                generator = torch.Generator(device=device).manual_seed(seed)

            # Handle image input for image-to-video
            pil_image = None
            if request.start_image:
                pil_image = self._load_image(request.start_image)
                if pil_image is None:
                    return backend_pb2.Result(success=False, message=f"Invalid start_image: {request.start_image}")
                pil_image = pil_image.convert("RGB")
                # Resize to target dimensions
                pil_image = pil_image.resize((width, height), Image.Resampling.LANCZOS)

            # Build generate kwargs
            generate_kwargs = {
                "prompt": prompt,
                "negative_prompt": negative_prompt,
                "height": height,
                "width": width,
                "generator": generator,
                "guidance_scale": guidance_scale,
                "num_inference_steps": num_inference_steps,
                "num_frames": num_frames,
            }
            if pil_image:
                generate_kwargs["pil_image"] = pil_image
            if guidance_scale_high:
                generate_kwargs["guidance_scale_2"] = guidance_scale_high

            # Call omni.generate()
            frames = self.omni.generate(**generate_kwargs)

            # Extract video frames (following example pattern)
            if isinstance(frames, list) and len(frames) > 0:
                first_item = frames[0]

                if hasattr(first_item, "final_output_type"):
                    if first_item.final_output_type != "image":
                        return backend_pb2.Result(success=False, message=f"Unexpected output type: {first_item.final_output_type}")

                # Pipeline mode: extract from nested request_output
                if hasattr(first_item, "is_pipeline_output") and first_item.is_pipeline_output:
                    if isinstance(first_item.request_output, list) and len(first_item.request_output) > 0:
                        inner_output = first_item.request_output[0]
                        if isinstance(inner_output, OmniRequestOutput) and hasattr(inner_output, "images"):
                            frames = inner_output.images[0] if inner_output.images else None
                # Diffusion mode: use direct images field
                elif hasattr(first_item, "images") and first_item.images:
                    frames = first_item.images
                else:
                    return backend_pb2.Result(success=False, message="No video frames found")

            if frames is None:
                return backend_pb2.Result(success=False, message="No video frames found in output")

            # Convert frames to numpy array (following example)
            if isinstance(frames, torch.Tensor):
                video_tensor = frames.detach().cpu()
                # Handle different tensor shapes [B, C, F, H, W] or [B, F, H, W, C]
                if video_tensor.dim() == 5:
                    if video_tensor.shape[1] in (3, 4):
                        video_tensor = video_tensor[0].permute(1, 2, 3, 0)
                    else:
                        video_tensor = video_tensor[0]
                elif video_tensor.dim() == 4 and video_tensor.shape[0] in (3, 4):
                    video_tensor = video_tensor.permute(1, 2, 3, 0)
                # Normalize from [-1,1] to [0,1] if float
                if video_tensor.is_floating_point():
                    video_tensor = video_tensor.clamp(-1, 1) * 0.5 + 0.5
                video_array = video_tensor.float().numpy()
            else:
                video_array = frames
                if hasattr(video_array, "shape") and video_array.ndim == 5:
                    video_array = video_array[0]

            # Convert 4D array (frames, H, W, C) to list of frames
            if isinstance(video_array, np.ndarray) and video_array.ndim == 4:
                video_array = list(video_array)

            # Save video
            export_to_video(video_array, request.dst, fps=fps)
            return backend_pb2.Result(message="Video generated successfully", success=True)

        except Exception as err:
            print(f"Error generating video: {err}", file=sys.stderr)
            traceback.print_exc()
            return backend_pb2.Result(success=False, message=f"Error generating video: {err}")

    def Predict(self, request, context):
        """Non-streaming text generation with multimodal inputs."""
        gen = self._predict(request, context, streaming=False)
        try:
            res = next(gen)
            return res
        except StopIteration:
            return backend_pb2.Reply(message=bytes("", 'utf-8'))

    def PredictStream(self, request, context):
        """Streaming text generation with multimodal inputs."""
        return self._predict(request, context, streaming=True)

    def _predict(self, request, context, streaming=False):
        """Internal method for text generation (streaming and non-streaming)."""
        try:
            # Validate model is loaded and is LLM type
            if not hasattr(self, 'omni'):
                yield backend_pb2.Reply(message=bytes("Model not loaded. Call LoadModel first.", 'utf-8'))
                return
            if self.model_type not in ["llm"]:
                yield backend_pb2.Reply(message=bytes(f"Model type {self.model_type} does not support text generation", 'utf-8'))
                return

            # Extract prompt
            if request.Prompt:
                prompt = request.Prompt
            elif request.Messages:
                if getattr(self, "tokenizer", None) is not None:
                    messages_dicts = messages_to_dicts(request.Messages)
                    template_kwargs = {"tokenize": False, "add_generation_prompt": True}
                    if request.Tools:
                        try:
                            template_kwargs["tools"] = json.loads(request.Tools)
                        except json.JSONDecodeError:
                            pass
                    try:
                        if request.Metadata.get("enable_thinking", "").lower() == "true":
                            template_kwargs["enable_thinking"] = True
                    except Exception:
                        pass
                    try:
                        prompt = self.tokenizer.apply_chat_template(messages_dicts, **template_kwargs)
                    except TypeError:
                        prompt = self.tokenizer.apply_chat_template(
                            messages_dicts, tokenize=False, add_generation_prompt=True
                        )
                else:
                    # Fallback: basic template
                    prompt = ""
                    for msg in request.Messages:
                        prompt += f"<|im_start|>{msg.role}\n{msg.content}<|im_end|>\n"
                    prompt += "<|im_start|>assistant\n"
            else:
                yield backend_pb2.Reply(message=bytes("", 'utf-8'))
                return

            # Build multi_modal_data dict
            multi_modal_data = {}

            # Process images
            if request.Images:
                image_data = []
                for img_path in request.Images:
                    img = self._load_image(img_path)
                    if img:
                        # Convert to format expected by vllm
                        from vllm.multimodal.image import convert_image_mode
                        img_data = convert_image_mode(img, "RGB")
                        image_data.append(img_data)
                if image_data:
                    multi_modal_data["image"] = image_data

            # Process videos
            if request.Videos:
                video_data = []
                for video_path in request.Videos:
                    video = self._load_video(video_path)
                    if video is not None:
                        video_data.append(video)
                if video_data:
                    multi_modal_data["video"] = video_data

            # Process audio
            if request.Audios:
                audio_data = []
                for audio_path in request.Audios:
                    audio = self._load_audio(audio_path)
                    if audio is not None:
                        audio_data.append(audio)
                if audio_data:
                    multi_modal_data["audio"] = audio_data

            # Build inputs dict
            inputs = {
                "prompt": prompt,
                "multi_modal_data": multi_modal_data if multi_modal_data else None,
            }

            # Build sampling params
            sampling_params = SamplingParams(
                temperature=request.Temperature if request.Temperature > 0 else 0.7,
                top_p=request.TopP if request.TopP > 0 else 0.9,
                top_k=request.TopK if request.TopK > 0 else -1,
                max_tokens=request.Tokens if request.Tokens > 0 else 200,
                presence_penalty=request.PresencePenalty if request.PresencePenalty != 0 else 0.0,
                frequency_penalty=request.FrequencyPenalty if request.FrequencyPenalty != 0 else 0.0,
                repetition_penalty=request.RepetitionPenalty if request.RepetitionPenalty != 0 else 1.0,
                seed=request.Seed if request.Seed > 0 else None,
                stop=request.StopPrompts if request.StopPrompts else None,
                stop_token_ids=request.StopTokenIds if request.StopTokenIds else None,
                ignore_eos=request.IgnoreEOS,
            )
            sampling_params_list = [sampling_params]

            # Call omni.generate() (returns generator for LLM mode)
            omni_generator = self.omni.generate([inputs], sampling_params_list)

            # Extract text from outputs and track token usage
            generated_text = ""
            prompt_tokens = 0
            completion_tokens = 0
            for stage_outputs in omni_generator:
                if stage_outputs.final_output_type == "text":
                    for output in stage_outputs.request_output:
                        completion = output.outputs[0]
                        text_output = completion.text
                        # Track tokens when available
                        try:
                            if getattr(output, "prompt_token_ids", None) is not None:
                                prompt_tokens = len(output.prompt_token_ids)
                            if getattr(completion, "token_ids", None) is not None:
                                completion_tokens = len(completion.token_ids)
                        except Exception:
                            pass
                        if streaming:
                            # Remove already sent text (vllm concatenates)
                            delta_text = text_output.removeprefix(generated_text)
                            yield backend_pb2.Reply(
                                message=bytes(delta_text, encoding='utf-8'),
                                tokens=completion_tokens,
                                prompt_tokens=prompt_tokens,
                            )
                        generated_text = text_output

            if not streaming:
                # Build optional ChatDelta with parsed reasoning / tool calls
                chat_deltas = []
                content_text = generated_text
                reasoning_text = ""
                tool_call_deltas = []

                if self.reasoning_parser_cls is not None:
                    try:
                        parser = self.reasoning_parser_cls(self.tokenizer) if self.tokenizer else self.reasoning_parser_cls()
                        reasoning_text, content_text = parser.extract_reasoning_content(content_text, request=None)
                        reasoning_text = reasoning_text or ""
                        content_text = content_text or ""
                    except Exception as e:
                        print(f"reasoning_parser failed: {e}", file=sys.stderr)

                if self.tool_parser_cls is not None:
                    try:
                        parser = self.tool_parser_cls(self.tokenizer) if self.tokenizer else self.tool_parser_cls()
                        tool_info = parser.extract_tool_calls(content_text, request=None)
                        if getattr(tool_info, "tools_called", False):
                            content_text = tool_info.content or ""
                            for tc in tool_info.tool_calls or []:
                                fn = getattr(tc, "function", None)
                                tool_call_deltas.append(backend_pb2.ToolCallDelta(
                                    index=getattr(tc, "index", 0) or 0,
                                    id=getattr(tc, "id", "") or "",
                                    name=getattr(fn, "name", "") if fn else "",
                                    arguments=getattr(fn, "arguments", "") if fn else "",
                                ))
                    except Exception as e:
                        print(f"tool_parser failed: {e}", file=sys.stderr)

                if self.tool_parser_cls is not None or self.reasoning_parser_cls is not None:
                    chat_deltas.append(backend_pb2.ChatDelta(
                        content=content_text,
                        reasoning_content=reasoning_text,
                        tool_calls=tool_call_deltas,
                    ))

                yield backend_pb2.Reply(
                    message=bytes(generated_text, encoding='utf-8'),
                    tokens=completion_tokens,
                    prompt_tokens=prompt_tokens,
                    chat_deltas=chat_deltas,
                )

        except Exception as err:
            print(f"Error in Predict: {err}", file=sys.stderr)
            traceback.print_exc()
            yield backend_pb2.Reply(message=bytes(f"Error: {err}", encoding='utf-8'))

    def TTS(self, request, context):
        try:
            # Validate model is loaded and is TTS type
            if not hasattr(self, 'omni'):
                return backend_pb2.Result(success=False, message="Model not loaded. Call LoadModel first.")
            if self.model_type not in ["tts"]:
                return backend_pb2.Result(success=False, message=f"Model type {self.model_type} does not support TTS")

            # Extract parameters
            text = request.text
            language = request.language if request.language else "Auto"
            voice = request.voice if request.voice else None
            task_type = self._detect_tts_task_type()

            # Build prompt with chat template
            # TODO: for now vllm-omni supports only qwen3-tts, so we hardcode it;
            # to support other models in the future we might need to use the chat template here.
            prompt = f"<|im_start|>assistant\n{text}<|im_end|>\n<|im_start|>assistant\n"

            # Build inputs dict
            inputs = {
                "prompt": prompt,
                "additional_information": {
                    "task_type": [task_type],
                    "text": [text],
                    "language": [language],
                    "max_new_tokens": [2048],
                }
            }

            # Add task-specific fields
            if task_type == "CustomVoice":
                if voice:
                    inputs["additional_information"]["speaker"] = [voice]
                # Add instruct if provided in options
                if "instruct" in self.options:
                    inputs["additional_information"]["instruct"] = [self.options["instruct"]]
            elif task_type == "VoiceDesign":
                if "instruct" in self.options:
                    inputs["additional_information"]["instruct"] = [self.options["instruct"]]
                inputs["additional_information"]["non_streaming_mode"] = [True]
            elif task_type == "Base":
                # Voice cloning requires ref_audio and ref_text
                if "ref_audio" in self.options:
                    inputs["additional_information"]["ref_audio"] = [self.options["ref_audio"]]
                if "ref_text" in self.options:
                    inputs["additional_information"]["ref_text"] = [self.options["ref_text"]]
                if "x_vector_only_mode" in self.options:
                    inputs["additional_information"]["x_vector_only_mode"] = [self.options["x_vector_only_mode"]]

            # Build sampling params
            sampling_params = SamplingParams(
                temperature=0.9,
                top_p=1.0,
                top_k=50,
                max_tokens=2048,
                seed=42,
                detokenize=False,
                repetition_penalty=1.05,
            )
            sampling_params_list = [sampling_params]

            # Call omni.generate()
            omni_generator = self.omni.generate(inputs, sampling_params_list)

            # Extract audio (following TTS example)
            for stage_outputs in omni_generator:
                for output in stage_outputs.request_output:
                    if "audio" in output.multimodal_output:
                        audio_tensor = output.multimodal_output["audio"]
                        audio_samplerate = output.multimodal_output["sr"].item()

                        # Convert to numpy
                        audio_numpy = audio_tensor.float().detach().cpu().numpy()
                        if audio_numpy.ndim > 1:
                            audio_numpy = audio_numpy.flatten()

                        # Save audio file
                        sf.write(request.dst, audio_numpy, samplerate=audio_samplerate, format="WAV")
                        return backend_pb2.Result(message="TTS audio generated successfully", success=True)

            return backend_pb2.Result(success=False, message="No audio output generated")

        except Exception as err:
            print(f"Error generating TTS: {err}", file=sys.stderr)
            traceback.print_exc()
            return backend_pb2.Result(success=False, message=f"Error generating TTS: {err}")

    def TokenizeString(self, request, context):
        if not hasattr(self, 'tokenizer') or self.tokenizer is None:
            context.set_code(grpc.StatusCode.FAILED_PRECONDITION)
            context.set_details("Model/tokenizer not loaded")
            return backend_pb2.TokenizationResponse()
        try:
            tokens = self.tokenizer.encode(request.Prompt)
            return backend_pb2.TokenizationResponse(length=len(tokens), tokens=tokens)
        except Exception as e:
            context.set_code(grpc.StatusCode.INTERNAL)
            context.set_details(str(e))
            return backend_pb2.TokenizationResponse()

    def Free(self, request, context):
        try:
            if hasattr(self, 'omni'):
                del self.omni
            if hasattr(self, 'tokenizer'):
                del self.tokenizer
            self.tool_parser_cls = None
            self.reasoning_parser_cls = None
            gc.collect()
            try:
                if torch.cuda.is_available():
                    torch.cuda.empty_cache()
            except Exception:
                pass
            return backend_pb2.Result(success=True, message="Model freed")
        except Exception as e:
            return backend_pb2.Result(success=False, message=str(e))


def serve(address):
    server = grpc.server(
        futures.ThreadPoolExecutor(max_workers=MAX_WORKERS),
        options=[
            ('grpc.max_message_length', 50 * 1024 * 1024),  # 50MB
            ('grpc.max_send_message_length', 50 * 1024 * 1024),
            ('grpc.max_receive_message_length', 50 * 1024 * 1024),
        ],
        interceptors=get_auth_interceptors(),
    )
    backend_pb2_grpc.add_BackendServicer_to_server(BackendServicer(), server)
    server.add_insecure_port(address)
    server.start()
    print("Server started. Listening on: " + address, file=sys.stderr)

    # Signal handlers for graceful shutdown
    def signal_handler(sig, frame):
        print("Received termination signal. Shutting down...")
        server.stop(0)
        sys.exit(0)

    signal.signal(signal.SIGINT, signal_handler)
    signal.signal(signal.SIGTERM, signal_handler)

    try:
        while True:
            time.sleep(_ONE_DAY_IN_SECONDS)
    except KeyboardInterrupt:
        server.stop(0)


if __name__ == "__main__":
    parser = argparse.ArgumentParser(description="Run the gRPC server.")
    parser.add_argument(
        "--addr", default="localhost:50051", help="The address to bind the server to."
    )
    args = parser.parse_args()

    serve(args.addr)
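For reference, a hedged sketch of driving this backend from a Python gRPC client. The field names mirror what backend.py reads above; the request message names (ModelOptions, GenerateImageRequest) and the example model id are assumptions based on LocalAI's backend.proto, not something stated on this page.

```python
# Hedged usage sketch: load an image model into the vllm-omni backend and
# request one image over gRPC.
import grpc, backend_pb2, backend_pb2_grpc

channel = grpc.insecure_channel("localhost:50051")
stub = backend_pb2_grpc.BackendStub(channel)

# LoadModel: Options are "key:value" strings parsed by parse_options at load time.
res = stub.LoadModel(backend_pb2.ModelOptions(
    Model="Qwen/Qwen-Image",        # example model id, not from this page
    Options=["cfg_scale:4.0"],
))
assert res.success, res.message

# GenerateImage: field names follow what GenerateImage() reads above.
res = stub.GenerateImage(backend_pb2.GenerateImageRequest(
    positive_prompt="a watercolor lighthouse at dusk",
    width=1024, height=1024, step=30,
    dst="/tmp/out.png",
))
print(res.success, res.message)
```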