Two unrelated CI breakages bundled together since both are one-liners:
- rerankers: bump torch 2.4.1 -> 2.7.1 on cpu/cublas12. The unpinned
transformers resolves to 5.x, whose moe.py registers a custom_op with
string-typed `'torch.Tensor'` annotations that torch 2.4.1's
infer_schema rejects (see the repro sketch after this list). That
blocks the gRPC server from starting and fails all 5 backend tests
with "Connection refused" on :50051. 2.7.1 matches the version used by
the transformers backend.
- vllm-omni: strip fa3-fwd from the upstream requirements/cuda.txt
before resolving on aarch64. fa3-fwd 0.0.3 ships only an x86_64 wheel
and has no sdist, making the cuda profile unsatisfiable on
Jetson/SBSA. fa3-fwd is a soft runtime dep: vllm-omni's attention
backends fall back to FA2 and then SDPA when it's missing (a sketch of
the strip also follows the list).
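For reference, a minimal repro sketch of the first item, assuming the
schema-inference behaviour described above; the op name
`demo::moe_gate` and its body are made up and only stand in for the
MoE kernel transformers registers:

```python
import torch

# String-typed annotations like the ones transformers 5.x uses in moe.py:
# per the breakage above, torch 2.4.1's infer_schema cannot map the
# 'torch.Tensor' strings to a schema and raises at import time, while
# torch 2.7.1 resolves them fine.
@torch.library.custom_op("demo::moe_gate", mutates_args=())
def moe_gate(hidden: "torch.Tensor", weight: "torch.Tensor") -> "torch.Tensor":
    # plain matmul stand-in for the real routing kernel
    return hidden @ weight

if __name__ == "__main__":
    out = moe_gate(torch.randn(4, 8), torch.randn(8, 2))
    print(out.shape)  # torch.Size([4, 2]) on 2.7.1; schema error on 2.4.1
```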
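And a sketch of the second item's strip step, assuming the upstream pin
file is checked out locally as `requirements/cuda.txt`; the actual
build scripting may do the same thing with sed:

```python
from pathlib import Path

# Drop the fa3-fwd pin before pip resolves the cuda profile on aarch64,
# since there is no aarch64 wheel or sdist to fall back to.
reqs = Path("requirements/cuda.txt")
kept = [
    line for line in reqs.read_text().splitlines()
    if not line.strip().lower().startswith("fa3-fwd")
]
reqs.write_text("\n".join(kept) + "\n")
```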
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
Co-authored-by: Ettore Di Giacinto <mudler@localai.io>
Some of the dependencies in `requirements.txt`, even though generic,
pull in CUDA libraries further down the dependency chain.
This change moves almost all GPU-specific libs to the build-type
requirements and takes a safer approach: `requirements.txt` now lists
only "first-level" dependencies (for instance, grpc), while their
library dependencies move down to the respective build-type
`requirements.txt` to avoid any mixing.
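As an illustration, a hypothetical layout after the split; the package
names below are examples, not the actual lists:

```
# requirements.txt: only "first-level" deps, nothing that drags in CUDA
grpcio
protobuf

# requirements-cublas12.txt: the build-type file owns the GPU stack
torch   # CUDA-enabled wheel, pulled only for this build type
```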
This should fix #2737 and #1592.
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>