Compare commits

...

103 Commits

Author SHA1 Message Date
Ettore Di Giacinto
478d2adfb7 Trigger CI
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-08-24 16:48:33 +02:00
Ettore Di Giacinto
909fdd1b0e feat(transformers): add support for CPU and MPS
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-08-24 16:47:51 +02:00
Ettore Di Giacinto
be132fe816 Revise latest project news in README
Signed-off-by: Ettore Di Giacinto <mudler@users.noreply.github.com>
2025-08-24 11:50:20 +02:00
Ettore Di Giacinto
ff5d2dc8be Revert "fix(rfdetr): use cpu torch for cpu builds" (#6131)
Revert "fix(rfdetr): use cpu torch for cpu builds (#6129)"

This reverts commit fec8a36b36.
2025-08-24 11:41:08 +02:00
Ettore Di Giacinto
c1cfa08226 chore(Dockerfile): drop python from images (#6130)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-08-24 11:40:32 +02:00
Ettore Di Giacinto
fec8a36b36 fix(rfdetr): use cpu torch for cpu builds (#6129)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-08-24 10:17:25 +02:00
Ettore Di Giacinto
5d4f5d2355 feat(backends): add CPU variant for diffusers backend (#6128)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-08-24 10:17:10 +02:00
LocalAI [bot]
057248008f chore: ⬆️ Update ggml-org/llama.cpp to 710dfc465a68f7443b87d9f792cffba00ed739fe (#6126)
⬆️ Update ggml-org/llama.cpp

Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2025-08-24 08:41:39 +02:00
Ettore Di Giacinto
9f2c9cd691 feat(llama.cpp): Add gfx1201 support (#6125)
Signed-off-by: Ettore Di Giacinto <mudler@users.noreply.github.com>
2025-08-23 23:06:01 +02:00
Ettore Di Giacinto
6971f71a6c Add mlx-vlm (#6119)
* Add mlx-vlm

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* Add to CI workflows

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* Add requirements-mps.txt

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* Simplify

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

---------

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-08-23 23:05:30 +02:00
Ettore Di Giacinto
1ba66d00f5 feat: bundle python inside backends (#6123)
* feat(backends): bundle python

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* test ci

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* vllm on self-hosted

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* Add clang

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* Try to fix it for Mac

* Relocate links only when the build is portable

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* Make sure to call macosPortableEnv

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* Use self-hosted for vllm

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* Fixups

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* CI

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

---------

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-08-23 22:36:39 +02:00
Ettore Di Giacinto
259383cf5e chore(deps): bump llama.cpp to '45363632cbd593537d541e81b600242e0b3d47fc' (#6122)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-08-23 08:39:10 +02:00
Ettore Di Giacinto
209c0694f5 Update backend.yml
Signed-off-by: Ettore Di Giacinto <mudler@users.noreply.github.com>
2025-08-22 23:36:24 +02:00
Ettore Di Giacinto
0fd395d6ec feat(diffusers): add MPS version (#6121)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-08-22 23:14:54 +02:00
Ettore Di Giacinto
d04bd47116 chore(Makefile): small fixup for darwin MLX builds
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-08-22 08:52:29 +02:00
Ettore Di Giacinto
1d830ce7dd feat(mlx): add mlx backend (#6049)
* chore: allow to install with pip

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* WIP

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* Make the backend build and actually work

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* List models from system only

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* Add script to build darwin python backends

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* Run protogen in libbackend

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* Detect if mps is available across python backends

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* CI: try to build backend

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* Debug CI

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* Fixups

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* Fixups

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* Index mlx-vlm

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* Remove mlx-vlm

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* Drop CI test

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

---------

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-08-22 08:42:29 +02:00
LocalAI [bot]
6dccfb09f8 chore: ⬆️ Update ggml-org/llama.cpp to cd36b5e5c7fed2a3ac671dd542d579ca40b48b54 (#6118)
⬆️ Update ggml-org/llama.cpp

Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2025-08-22 07:57:27 +02:00
LocalAI [bot]
e4d9cf8349 chore: ⬆️ Update ggml-org/llama.cpp to 7a6e91ad26160dd6dfb33d29ac441617422f28e7 (#6116)
⬆️ Update ggml-org/llama.cpp

Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2025-08-20 21:05:39 +00:00
Ettore Di Giacinto
c899e90277 Update image-generation.md
Signed-off-by: Ettore Di Giacinto <mudler@users.noreply.github.com>
2025-08-20 10:37:11 +02:00
Ettore Di Giacinto
8193d18c7c feat(img2img): Add support to Qwen Image Edit (#6113)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-08-20 10:18:55 +02:00
LocalAI [bot]
2e4dc6456f chore: ⬆️ Update ggml-org/llama.cpp to fb22dd07a639e81c7415e30b146f545f1a2f2caf (#6112)
⬆️ Update ggml-org/llama.cpp

Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2025-08-20 09:01:36 +02:00
LocalAI [bot]
4594430a3e feat(swagger): update swagger (#6111)
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2025-08-19 22:56:00 +02:00
Ettore Di Giacinto
9c7f92c81f feat(p2p): automatically sync installed models between instances (#6108)
* feat(p2p): sync models between federated nodes

This change makes sure that all models are kept in sync between
federated nodes.

Note: this works exclusively with models belonging to a gallery. It does
not sync files between the nodes, but rather syncs the node setup:
all nodes need to have the same galleries configured and to install
models without any local editing (see the sketch after this entry).

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* Make nodes stable

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* Fixups on syncing

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* ui: improve p2p view

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

---------

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-08-19 19:37:46 +02:00
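
A minimal, hypothetical sketch of the reconciliation idea from #6108, assuming the sync operates on gallery model IDs rather than files; the nodeState and missingModels names below are illustrative, not LocalAI's actual code:

// Illustrative sketch only, not LocalAI's implementation: federated nodes
// reconcile which gallery models are installed (the "node setup") and never
// copy model files between each other.
package main

import "fmt"

// nodeState is a hypothetical view of a single federated node.
type nodeState struct {
	Name            string
	InstalledModels map[string]bool // keyed by gallery model ID
}

// missingModels returns gallery IDs installed on the peer but not locally.
// A real node would then run an ordinary gallery install for each ID, which
// only works if both nodes have the same galleries configured.
func missingModels(local, peer nodeState) []string {
	var out []string
	for id := range peer.InstalledModels {
		if !local.InstalledModels[id] {
			out = append(out, id)
		}
	}
	return out
}

func main() {
	local := nodeState{Name: "node-a", InstalledModels: map[string]bool{
		"localai@llama-3.2-1b": true,
	}}
	peer := nodeState{Name: "node-b", InstalledModels: map[string]bool{
		"localai@llama-3.2-1b":        true,
		"localai@gemma-3-270m-it-qat": true,
	}}
	fmt.Println(missingModels(local, peer)) // [localai@gemma-3-270m-it-qat]
}
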
Ettore Di Giacinto
060037bcd4 Revert "chore(deps): bump transformers from 4.48.3 to 4.55.2 in /backend/python/coqui" (#6105)
Revert "chore(deps): bump transformers from 4.48.3 to 4.55.2 in /backend/pyth…"

This reverts commit 27ce570844.
2025-08-19 15:00:33 +02:00
Ettore Di Giacinto
d9da4676b4 Revert "chore(deps): bump torch from 2.3.1+cxx11.abi to 2.8.0 in /backend/python/coqui" (#6104)
Revert "chore(deps): bump torch from 2.3.1+cxx11.abi to 2.8.0 in /backend/pyt…"

This reverts commit 42c7859ab1.
2025-08-19 15:00:11 +02:00
Ettore Di Giacinto
5ef4c2e471 feat(diffusers): add torchvision to support qwen-image-edit (#6103)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-08-19 12:05:48 +02:00
dependabot[bot]
27ce570844 chore(deps): bump transformers from 4.48.3 to 4.55.2 in /backend/python/coqui (#6096)
chore(deps): bump transformers in /backend/python/coqui

Bumps [transformers](https://github.com/huggingface/transformers) from 4.48.3 to 4.55.2.
- [Release notes](https://github.com/huggingface/transformers/releases)
- [Commits](https://github.com/huggingface/transformers/compare/v4.48.3...v4.55.2)

---
updated-dependencies:
- dependency-name: transformers
  dependency-version: 4.55.2
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-08-19 09:44:01 +00:00
dependabot[bot]
42c7859ab1 chore(deps): bump torch from 2.3.1+cxx11.abi to 2.8.0 in /backend/python/coqui (#6099)
chore(deps): bump torch in /backend/python/coqui

Bumps [torch](https://github.com/pytorch/pytorch) from 2.3.1+cxx11.abi to 2.8.0.
- [Release notes](https://github.com/pytorch/pytorch/releases)
- [Changelog](https://github.com/pytorch/pytorch/blob/main/RELEASE.md)
- [Commits](https://github.com/pytorch/pytorch/commits/v2.8.0)

---
updated-dependencies:
- dependency-name: torch
  dependency-version: 2.8.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-08-19 08:42:52 +00:00
Ettore Di Giacinto
e7e83d0fa6 Revert "chore(deps): bump intel-extension-for-pytorch from 2.3.110+xpu to 2.8.10+xpu in /backend/python/coqui" (#6102)
Revert "chore(deps): bump intel-extension-for-pytorch from 2.3.110+xpu to 2.8…"

This reverts commit c6dc1d86f1.
2025-08-19 09:29:56 +02:00
dependabot[bot]
c6dc1d86f1 chore(deps): bump intel-extension-for-pytorch from 2.3.110+xpu to 2.8.10+xpu in /backend/python/coqui (#6095)
chore(deps): bump intel-extension-for-pytorch in /backend/python/coqui

Bumps intel-extension-for-pytorch from 2.3.110+xpu to 2.8.10+xpu.

---
updated-dependencies:
- dependency-name: intel-extension-for-pytorch
  dependency-version: 2.8.10+xpu
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-08-19 07:09:47 +00:00
dependabot[bot]
6fd2e1964d chore(deps): bump grpcio from 1.71.0 to 1.74.0 in /backend/python/coqui (#6097)
Bumps [grpcio](https://github.com/grpc/grpc) from 1.71.0 to 1.74.0.
- [Release notes](https://github.com/grpc/grpc/releases)
- [Changelog](https://github.com/grpc/grpc/blob/master/doc/grpc_release_schedule.md)
- [Commits](https://github.com/grpc/grpc/compare/v1.71.0...v1.74.0)

---
updated-dependencies:
- dependency-name: grpcio
  dependency-version: 1.74.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-08-19 08:11:58 +02:00
dependabot[bot]
49ae41b716 chore(deps): bump securego/gosec from 2.22.7 to 2.22.8 (#6098)
Bumps [securego/gosec](https://github.com/securego/gosec) from 2.22.7 to 2.22.8.
- [Release notes](https://github.com/securego/gosec/releases)
- [Changelog](https://github.com/securego/gosec/blob/master/.goreleaser.yml)
- [Commits](https://github.com/securego/gosec/compare/v2.22.7...v2.22.8)

---
updated-dependencies:
- dependency-name: securego/gosec
  dependency-version: 2.22.8
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-08-19 08:11:26 +02:00
dependabot[bot]
b3f0ed62fd chore(deps): bump actions/checkout from 4 to 5 (#6101)
Bumps [actions/checkout](https://github.com/actions/checkout) from 4 to 5.
- [Release notes](https://github.com/actions/checkout/releases)
- [Changelog](https://github.com/actions/checkout/blob/main/CHANGELOG.md)
- [Commits](https://github.com/actions/checkout/compare/v4...v5)

---
updated-dependencies:
- dependency-name: actions/checkout
  dependency-version: '5'
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-08-19 08:10:54 +02:00
LocalAI [bot]
4b9afc418b chore: ⬆️ Update ggml-org/whisper.cpp to fc45bb86251f774ef817e89878bb4c2636c8a58f (#6089)
⬆️ Update ggml-org/whisper.cpp

Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2025-08-19 08:10:25 +02:00
LocalAI [bot]
e44ff8514b chore: ⬆️ Update ggml-org/llama.cpp to 6d7f1117e3e3285d0c5c11b5ebb0439e27920082 (#6088)
⬆️ Update ggml-org/llama.cpp

Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2025-08-19 08:09:49 +02:00
dependabot[bot]
2b6be10b6b chore(deps): bump protobuf from 6.31.0 to 6.32.0 in /backend/python/transformers (#6100)
chore(deps): bump protobuf in /backend/python/transformers

Bumps [protobuf](https://github.com/protocolbuffers/protobuf) from 6.31.0 to 6.32.0.
- [Release notes](https://github.com/protocolbuffers/protobuf/releases)
- [Changelog](https://github.com/protocolbuffers/protobuf/blob/main/protobuf_release.bzl)
- [Commits](https://github.com/protocolbuffers/protobuf/commits)

---
updated-dependencies:
- dependency-name: protobuf
  dependency-version: 6.32.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-08-19 05:09:17 +00:00
Ettore Di Giacinto
1361d844a1 chore(model gallery): add lfm2-1.2b (#6092)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-08-18 22:24:50 +02:00
Ettore Di Giacinto
fcc521cae5 chore(model gallery): add lfm2-vl-1.6b (#6091)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-08-18 22:21:35 +02:00
Ettore Di Giacinto
8cad7138be chore(model gallery): add lfm2-vl-450m (#6090)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-08-18 22:17:03 +02:00
Richard Palethorpe
ebd1db2f09 chore(ci): Build modified backends on PR (#6086)
* chore(stablediffusion-ggml): rm redundant comment

Signed-off-by: Richard Palethorpe <io@richiejp.com>

* chore(ci): Build modified backends on PR

Signed-off-by: Richard Palethorpe <io@richiejp.com>

---------

Signed-off-by: Richard Palethorpe <io@richiejp.com>
2025-08-18 17:56:34 +02:00
LocalAI [bot]
7920d75805 chore: ⬆️ Update ggml-org/llama.cpp to 21c17b5befc5f6be5992bc87fc1ba99d388561df (#6084)
⬆️ Update ggml-org/llama.cpp

Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2025-08-18 08:26:58 +00:00
Ettore Di Giacinto
1d0e24a865 chore(model gallery): add impish_longtail_12b (#6082)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-08-17 09:41:01 +02:00
LocalAI [bot]
9eed5ef872 chore: ⬆️ Update ggml-org/llama.cpp to 1fe00296f587dfca0957e006d146f5875b61e43d (#6079)
⬆️ Update ggml-org/llama.cpp

Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2025-08-16 21:10:03 +00:00
Fabio Erculiani
39ab80442a Tune the "dark gray" font color in the webui to make it more readable (#6078)
Tune the "dark gray" font color in the LocalAI webui to make it more readable.
2025-08-16 21:16:25 +02:00
Ettore Di Giacinto
1b101df2c0 chore: InTrustedRoot -> VerifyPath
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-08-16 16:30:13 +02:00
Richard Palethorpe
784bd5db33 chore(build): Use Purego with stablediffusion backend (#6067)
Signed-off-by: Richard Palethorpe <io@richiejp.com>
2025-08-16 12:21:29 +02:00
Ettore Di Giacinto
b8b1ca782c chore(model gallery): add wingless_imp_8b-i1 (#6077)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-08-16 08:45:38 +02:00
Ettore Di Giacinto
1149fb66d3 chore(model gallery): add thedrummer_gemma-3-r1-4b-v1 (#6076)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-08-16 08:39:10 +02:00
LocalAI [bot]
243e86176e chore: ⬆️ Update ggml-org/llama.cpp to 5e6229a8409ac786e62cb133d09f1679a9aec13e (#6070)
⬆️ Update ggml-org/llama.cpp

Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2025-08-16 08:38:57 +02:00
Ettore Di Giacinto
8da38a0d10 chore(model gallery): add thedrummer_gemma-3-r1-12b-v1 (#6075)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-08-16 08:35:49 +02:00
Ettore Di Giacinto
60786fc876 chore(model gallery): add thedrummer_gemma-3-r1-27b-v1 (#6074)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-08-16 08:31:42 +02:00
LocalAI [bot]
9486b88a25 chore: ⬆️ Update ggml-org/whisper.cpp to 040510a132f0a9b51d4692b57a6abfd8c9660696 (#6069)
⬆️ Update ggml-org/whisper.cpp

Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2025-08-16 08:30:54 +02:00
Ettore Di Giacinto
bef4c10629 feat(ui): General improvements (#6072)
* wip

* Simplify stop

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* Improve UI

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* Show installed backends at the index

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* Improve UI

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

---------

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-08-16 07:44:50 +02:00
LocalAI [bot]
80f15851c5 chore(model-gallery): ⬆️ update checksum (#6071)
⬆️ Checksum updates in gallery/index.yaml

Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2025-08-15 22:29:17 +02:00
Ettore Di Giacinto
22067e3384 chore(rocm): bump rocm image, add gfx1200 support (#6065)
Fixes: https://github.com/mudler/LocalAI/issues/6044

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-08-15 16:36:54 +02:00
Ettore Di Giacinto
4fbd639463 chore(ci): fixup builds for darwin and hipblas
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-08-15 15:58:02 +02:00
Ettore Di Giacinto
70f7d0c25f Revert "chore(build): Convert stablediffusion-ggml backend to Purego (#5989)" (#6064)
This reverts commit 94cb20ae7f.
2025-08-15 15:18:40 +02:00
Ettore Di Giacinto
576e821298 chore(deps): bump llama.cpp to 'df36bce667bf14f8e538645547754386f9516326' (#6062)
chore(deps): bump llama.cpp to 'df36bce667bf14f8e538645547754386f9516326'

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-08-15 13:28:15 +02:00
Ettore Di Giacinto
7293f26fcf chore(ci): fix darwin image publish
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-08-15 08:51:44 +02:00
Ettore Di Giacinto
79973a28ad chore(model gallery): add gemma-3-270m-it-qat (#6063)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-08-15 08:41:38 +02:00
Ettore Di Giacinto
8ab51509cc Update Makefile
Signed-off-by: Ettore Di Giacinto <mudler@users.noreply.github.com>
2025-08-15 08:33:25 +02:00
Ettore Di Giacinto
b3384e5428 Update Makefile
Signed-off-by: Ettore Di Giacinto <mudler@users.noreply.github.com>
2025-08-15 08:08:24 +02:00
Ettore Di Giacinto
7050c9f69d feat(webui): add import/edit model page (#6050)
* feat(webui): add import/edit model page

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* Convert to a YAML editor

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* Pass by the baseurl

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* Fixups

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* Add tests

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* Simplify

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* Improve visibility of the yaml editor

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* Add test file

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* Make reset work

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* Emit error only if we can't delete the model yaml file

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

---------

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-08-14 23:48:09 +02:00
Ettore Di Giacinto
089efe05fd feat(backends): add system backend, refactor (#6059)
- Add a system backend path
- Refactor and consolidate system information in system state
- Use system state in all components to figure out the system paths
  to use whenever needed
- Refactor BackendConfig -> ModelConfig. The old name was misleading, as
  we now have a backend configuration that is distinct from the model
  config (a hypothetical sketch of the new shape follows this entry).

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-08-14 19:38:26 +02:00
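
A hypothetical sketch of the shape this refactor describes, assuming a single system-state value that carries the relevant paths (including the new system backend path) plus the renamed ModelConfig; all type and field names here are assumptions, not the actual LocalAI API:

// Illustrative only: hypothetical types for the refactor described in #6059.
package system

// State consolidates system information so components stop computing paths
// on their own.
type State struct {
	ModelPath         string // where model configuration files live
	BackendPath       string // user-installed backends
	SystemBackendPath string // new: backends shipped with the system
}

// ModelConfig is the renamed BackendConfig: it describes a model, while
// backend configuration is now a separate concept.
type ModelConfig struct {
	Name    string
	Backend string // which backend serves this model
}

// BackendSearchPaths shows components asking the shared state for paths
// instead of hard-coding them.
func (s *State) BackendSearchPaths() []string {
	return []string{s.BackendPath, s.SystemBackendPath}
}
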
Ettore Di Giacinto
253b7537dc fix(llama-cpp/darwin): make sure to bundle libutf8 libs (#6060)
fix(darwin): make sure to bundle libutf8_validity

Plus some refactoring; use the Makefile

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-08-14 17:56:35 +02:00
Ettore Di Giacinto
19c92c70c5 fix(backend-detection): default to CPU if there is less than 4GB of GPU available (#6057)
fix(gpu-detection): default to CPU if there is less than 4GB of GPU available

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-08-14 16:57:33 +02:00
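
As a rough illustration of the rule in #6057 (a sketch under assumptions, not the project's detection code): when no GPU is found, or less than 4GB of GPU memory is available, the CPU variant of a backend is preferred.

// Hypothetical sketch of the fallback rule in #6057; pickVariant is an
// illustrative helper, not LocalAI's actual function.
package main

import "fmt"

const minGPUMem = uint64(4) << 30 // 4 GiB threshold from the commit message

func pickVariant(hasGPU bool, gpuFreeBytes uint64) string {
	if !hasGPU || gpuFreeBytes < minGPUMem {
		return "cpu" // fall back to the CPU build of the backend
	}
	return "gpu"
}

func main() {
	fmt.Println(pickVariant(true, 2<<30))  // cpu: only 2 GiB free
	fmt.Println(pickVariant(true, 16<<30)) // gpu
	fmt.Println(pickVariant(false, 0))     // cpu: no GPU detected
}
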
Ettore Di Giacinto
b52bfaf1b3 fix: do not show invalid backends (#6058)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-08-14 13:01:56 +02:00
Ettore Di Giacinto
bf60ca5bf0 Update Makefile
Signed-off-by: Ettore Di Giacinto <mudler@users.noreply.github.com>
2025-08-14 11:53:43 +02:00
LocalAI [bot]
2b44467bd1 chore: ⬆️ Update ggml-org/llama.cpp to 29c8fbe4e05fd23c44950d0958299e25fbeabc5c (#6054)
⬆️ Update ggml-org/llama.cpp

Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2025-08-14 09:19:15 +02:00
LocalAI [bot]
8c1f4a131e chore: ⬆️ Update ggml-org/whisper.cpp to 16c2924cb2c4b5c9f79220aa7708eb5b346b029b (#6055)
⬆️ Update ggml-org/whisper.cpp

Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2025-08-13 21:08:22 +00:00
Ettore Di Giacinto
10a3f0bd92 fix: chmod grpc processes only if needed (#6051)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-08-13 12:08:53 +02:00
LocalAI [bot]
72f4d541d0 chore: ⬆️ Update ggml-org/llama.cpp to f4586ee5986d6f965becb37876d6f3666478a961 (#6048)
⬆️ Update ggml-org/llama.cpp

Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
Co-authored-by: Ettore Di Giacinto <mudler@users.noreply.github.com>
2025-08-13 08:33:48 +02:00
LocalAI [bot]
9f812fdb84 chore: ⬆️ Update ggml-org/whisper.cpp to 5527454cdb3e15d7e2b8a6e2afcb58cb61651fd2 (#6047)
⬆️ Update ggml-org/whisper.cpp

Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2025-08-12 21:12:07 +00:00
LocalAI [bot]
b70ee45fff docs: ⬆️ update docs version mudler/LocalAI (#6046)
⬆️ Update docs version mudler/LocalAI

Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2025-08-12 22:05:50 +02:00
dependabot[bot]
9d9c853541 chore(deps): bump grpcio from 1.71.0 to 1.74.0 in /backend/python/transformers (#6013)
chore(deps): bump grpcio in /backend/python/transformers

Bumps [grpcio](https://github.com/grpc/grpc) from 1.71.0 to 1.74.0.
- [Release notes](https://github.com/grpc/grpc/releases)
- [Changelog](https://github.com/grpc/grpc/blob/master/doc/grpc_release_schedule.md)
- [Commits](https://github.com/grpc/grpc/compare/v1.71.0...v1.74.0)

---
updated-dependencies:
- dependency-name: grpcio
  dependency-version: 1.74.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-08-12 22:05:16 +02:00
Ettore Di Giacinto
18fcd8557c fix(llama.cpp): support gfx1200 (#6045)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-08-12 22:04:30 +02:00
dependabot[bot]
d8e27c38d7 chore(deps): bump oneccl-bind-pt from 2.3.100+xpu to 2.8.0+xpu in /backend/python/common/template (#6016)
chore(deps): bump oneccl-bind-pt in /backend/python/common/template

Bumps oneccl-bind-pt from 2.3.100+xpu to 2.8.0+xpu.

---
updated-dependencies:
- dependency-name: oneccl-bind-pt
  dependency-version: 2.8.0+xpu
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-08-12 18:57:20 +00:00
dependabot[bot]
3b0dc87932 chore(deps): bump torch from 2.3.1+cxx11.abi to 2.8.0 in /backend/python/common/template (#6025)
chore(deps): bump torch in /backend/python/common/template

Bumps [torch](https://github.com/pytorch/pytorch) from 2.3.1+cxx11.abi to 2.8.0.
- [Release notes](https://github.com/pytorch/pytorch/releases)
- [Changelog](https://github.com/pytorch/pytorch/blob/main/RELEASE.md)
- [Commits](https://github.com/pytorch/pytorch/commits/v2.8.0)

---
updated-dependencies:
- dependency-name: torch
  dependency-version: 2.8.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-08-12 17:58:33 +00:00
dependabot[bot]
2374485222 chore(deps): bump actions/download-artifact from 4 to 5 (#6015)
Bumps [actions/download-artifact](https://github.com/actions/download-artifact) from 4 to 5.
- [Release notes](https://github.com/actions/download-artifact/releases)
- [Commits](https://github.com/actions/download-artifact/compare/v4...v5)

---
updated-dependencies:
- dependency-name: actions/download-artifact
  dependency-version: '5'
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-08-12 18:54:57 +02:00
dependabot[bot]
0ca1765c17 chore(deps): bump actions/checkout from 4 to 5 (#6014)
Bumps [actions/checkout](https://github.com/actions/checkout) from 4 to 5.
- [Release notes](https://github.com/actions/checkout/releases)
- [Changelog](https://github.com/actions/checkout/blob/main/CHANGELOG.md)
- [Commits](https://github.com/actions/checkout/compare/v4...v5)

---
updated-dependencies:
- dependency-name: actions/checkout
  dependency-version: '5'
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-08-12 18:54:39 +02:00
dependabot[bot]
90b5ed9a1e chore(deps): bump intel-extension-for-pytorch from 2.3.110+xpu to 2.8.10+xpu in /backend/python/common/template (#6034)
chore(deps): bump intel-extension-for-pytorch

Bumps intel-extension-for-pytorch from 2.3.110+xpu to 2.8.10+xpu.

---
updated-dependencies:
- dependency-name: intel-extension-for-pytorch
  dependency-version: 2.8.10+xpu
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-08-12 18:44:33 +02:00
dependabot[bot]
d438b769da chore(deps): bump intel-extension-for-pytorch from 2.3.110+xpu to 2.8.10+xpu in /backend/python/bark (#6043)
chore(deps): bump intel-extension-for-pytorch in /backend/python/bark

Bumps intel-extension-for-pytorch from 2.3.110+xpu to 2.8.10+xpu.

---
updated-dependencies:
- dependency-name: intel-extension-for-pytorch
  dependency-version: 2.8.10+xpu
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-08-12 18:44:05 +02:00
dependabot[bot]
2e4bd1e33d chore(deps): bump oneccl-bind-pt from 2.3.100+xpu to 2.8.0+xpu in /backend/python/rerankers (#6021)
chore(deps): bump oneccl-bind-pt in /backend/python/rerankers

Bumps oneccl-bind-pt from 2.3.100+xpu to 2.8.0+xpu.

---
updated-dependencies:
- dependency-name: oneccl-bind-pt
  dependency-version: 2.8.0+xpu
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-08-12 16:04:54 +00:00
dependabot[bot]
ff73800970 chore(deps): bump grpcio from 1.71.0 to 1.74.0 in /backend/python/exllama2 (#6019)
chore(deps): bump grpcio in /backend/python/exllama2

Bumps [grpcio](https://github.com/grpc/grpc) from 1.71.0 to 1.74.0.
- [Release notes](https://github.com/grpc/grpc/releases)
- [Changelog](https://github.com/grpc/grpc/blob/master/doc/grpc_release_schedule.md)
- [Commits](https://github.com/grpc/grpc/compare/v1.71.0...v1.74.0)

---
updated-dependencies:
- dependency-name: grpcio
  dependency-version: 1.74.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-08-12 16:42:46 +02:00
Richard Palethorpe
94cb20ae7f chore(build): Convert stablediffusion-ggml backend to Purego (#5989)
* Try converting SD to purego

* chore(build): Use Purego with stablediffusion backend

Signed-off-by: Richard Palethorpe <io@richiejp.com>

---------

Signed-off-by: Richard Palethorpe <io@richiejp.com>
2025-08-12 16:42:15 +02:00
dependabot[bot]
47c20f9adb chore(deps): bump grpcio from 1.71.0 to 1.74.0 in /backend/python/rerankers (#6022)
chore(deps): bump grpcio in /backend/python/rerankers

Bumps [grpcio](https://github.com/grpc/grpc) from 1.71.0 to 1.74.0.
- [Release notes](https://github.com/grpc/grpc/releases)
- [Changelog](https://github.com/grpc/grpc/blob/master/doc/grpc_release_schedule.md)
- [Commits](https://github.com/grpc/grpc/compare/v1.71.0...v1.74.0)

---
updated-dependencies:
- dependency-name: grpcio
  dependency-version: 1.74.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-08-12 15:24:48 +02:00
dependabot[bot]
a7fe153630 chore(deps): bump grpcio from 1.71.0 to 1.74.0 in /backend/python/bark (#6033)
Bumps [grpcio](https://github.com/grpc/grpc) from 1.71.0 to 1.74.0.
- [Release notes](https://github.com/grpc/grpc/releases)
- [Changelog](https://github.com/grpc/grpc/blob/master/doc/grpc_release_schedule.md)
- [Commits](https://github.com/grpc/grpc/compare/v1.71.0...v1.74.0)

---
updated-dependencies:
- dependency-name: grpcio
  dependency-version: 1.74.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-08-12 15:16:00 +02:00
dependabot[bot]
27519d2233 chore(deps): bump grpcio from 1.71.0 to 1.74.0 in /backend/python/common/template (#6035)
chore(deps): bump grpcio in /backend/python/common/template

Bumps [grpcio](https://github.com/grpc/grpc) from 1.71.0 to 1.74.0.
- [Release notes](https://github.com/grpc/grpc/releases)
- [Changelog](https://github.com/grpc/grpc/blob/master/doc/grpc_release_schedule.md)
- [Commits](https://github.com/grpc/grpc/compare/v1.71.0...v1.74.0)

---
updated-dependencies:
- dependency-name: grpcio
  dependency-version: 1.74.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-08-12 15:15:28 +02:00
dependabot[bot]
8cab0f880b chore(deps): bump sentence-transformers from 5.0.0 to 5.1.0 in /backend/python/transformers (#6028)
chore(deps): bump sentence-transformers in /backend/python/transformers

Bumps [sentence-transformers](https://github.com/UKPLab/sentence-transformers) from 5.0.0 to 5.1.0.
- [Release notes](https://github.com/UKPLab/sentence-transformers/releases)
- [Commits](https://github.com/UKPLab/sentence-transformers/compare/v5.0.0...v5.1.0)

---
updated-dependencies:
- dependency-name: sentence-transformers
  dependency-version: 5.1.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-08-12 15:15:07 +02:00
dependabot[bot]
8c48b250c4 chore(deps): bump grpcio from 1.71.0 to 1.74.0 in /backend/python/diffusers (#6037)
chore(deps): bump grpcio in /backend/python/diffusers

Bumps [grpcio](https://github.com/grpc/grpc) from 1.71.0 to 1.74.0.
- [Release notes](https://github.com/grpc/grpc/releases)
- [Changelog](https://github.com/grpc/grpc/blob/master/doc/grpc_release_schedule.md)
- [Commits](https://github.com/grpc/grpc/compare/v1.71.0...v1.74.0)

---
updated-dependencies:
- dependency-name: grpcio
  dependency-version: 1.74.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-08-12 15:14:35 +02:00
dependabot[bot]
ba802c2ee4 chore(deps): bump grpcio from 1.71.0 to 1.74.0 in /backend/python/vllm (#6036)
Bumps [grpcio](https://github.com/grpc/grpc) from 1.71.0 to 1.74.0.
- [Release notes](https://github.com/grpc/grpc/releases)
- [Changelog](https://github.com/grpc/grpc/blob/master/doc/grpc_release_schedule.md)
- [Commits](https://github.com/grpc/grpc/compare/v1.71.0...v1.74.0)

---
updated-dependencies:
- dependency-name: grpcio
  dependency-version: 1.74.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-08-12 15:14:15 +02:00
Ettore Di Giacinto
429bb7a88c chore(model gallery): add baichuan-inc_baichuan-m2-32 (#6042)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-08-12 09:43:13 +02:00
LocalAI [bot]
b2e8b6d1aa chore: ⬆️ Update ggml-org/llama.cpp to be48528b068111304e4a0bb82c028558b5705f05 (#6012)
⬆️ Update ggml-org/llama.cpp

Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2025-08-11 21:06:10 +00:00
LocalAI [bot]
fba5b557a1 chore: ⬆️ Update ggml-org/whisper.cpp to b02242d0adb5c6c4896d59ac86d9ec9fe0d0fe33 (#6009)
⬆️ Update ggml-org/whisper.cpp

Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2025-08-11 12:54:41 +02:00
LocalAI [bot]
6db19c5cb9 chore: ⬆️ Update ggml-org/llama.cpp to 79c1160b073b8148a404f3dd2584be1606dccc66 (#6006)
⬆️ Update ggml-org/llama.cpp

Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2025-08-11 12:54:21 +02:00
Ettore Di Giacinto
5428678209 chore(ci): more cleanup
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-08-11 10:10:38 +02:00
LocalAI [bot]
06129139eb chore(model-gallery): ⬆️ update checksum (#6010)
⬆️ Checksum updates in gallery/index.yaml

Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2025-08-11 07:54:01 +02:00
Ettore Di Giacinto
05757e2738 feat(backends install): allow to specify name and alias during manual installation (#5971)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-08-10 10:05:53 +02:00
Ettore Di Giacinto
240b790f29 chore(model gallery): add impish_nemo_12b (#6007)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-08-10 10:05:20 +02:00
Ettore Di Giacinto
5f221f5946 fix(l4t-diffusers): add sentencepiece (#6005)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-08-09 09:08:35 +02:00
LocalAI [bot]
def7cdc0bf chore: ⬆️ Update ggml-org/llama.cpp to cd6983d56d2cce94ecb86bb114ae8379a609073c (#6003)
⬆️ Update ggml-org/llama.cpp

Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2025-08-09 08:41:58 +02:00
Ettore Di Giacinto
ea9bf3dba2 Update backend.yml
Signed-off-by: Ettore Di Giacinto <mudler@users.noreply.github.com>
2025-08-08 23:00:47 +02:00
Ettore Di Giacinto
b8eca530b6 feat(diffusers): add builds for nvidia-l4t (#6004)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-08-08 22:48:38 +02:00
219 changed files with 7263 additions and 2034 deletions

View File

@@ -2,6 +2,7 @@
name: 'build backend container images'
on:
pull_request:
push:
branches:
- master
@@ -63,6 +64,18 @@ jobs:
backend: "llama-cpp"
dockerfile: "./backend/Dockerfile.llama-cpp"
context: "./"
- build-type: ''
cuda-major-version: ""
cuda-minor-version: ""
platforms: 'linux/amd64'
tag-latest: 'auto'
tag-suffix: '-cpu-transformers'
runs-on: 'ubuntu-latest'
base-image: "ubuntu:22.04"
skip-drivers: 'true'
backend: "transformers"
dockerfile: "./backend/Dockerfile.python"
context: "./backend"
- build-type: 'cublas'
cuda-major-version: "11"
cuda-minor-version: "7"
@@ -87,6 +100,30 @@ jobs:
backend: "diffusers"
dockerfile: "./backend/Dockerfile.python"
context: "./backend"
- build-type: 'l4t'
cuda-major-version: "12"
cuda-minor-version: "0"
platforms: 'linux/arm64'
tag-latest: 'auto'
tag-suffix: '-gpu-nvidia-l4t-diffusers'
runs-on: 'ubuntu-24.04-arm'
base-image: "nvcr.io/nvidia/l4t-jetpack:r36.4.0"
skip-drivers: 'true'
backend: "diffusers"
dockerfile: "./backend/Dockerfile.python"
context: "./backend"
- build-type: ''
cuda-major-version: ""
cuda-minor-version: ""
platforms: 'linux/amd64'
tag-latest: 'auto'
tag-suffix: '-cpu-diffusers'
runs-on: 'ubuntu-latest'
base-image: "ubuntu:22.04"
skip-drivers: 'true'
backend: "diffusers"
dockerfile: "./backend/Dockerfile.python"
context: "./backend"
# CUDA 11 additional backends
- build-type: 'cublas'
cuda-major-version: "11"
@@ -179,7 +216,7 @@ jobs:
platforms: 'linux/amd64'
tag-latest: 'auto'
tag-suffix: '-gpu-nvidia-cuda-12-vllm'
runs-on: 'ubuntu-latest'
runs-on: 'arc-runner-set'
base-image: "ubuntu:22.04"
skip-drivers: 'false'
backend: "vllm"
@@ -278,7 +315,7 @@ jobs:
tag-latest: 'auto'
tag-suffix: '-gpu-rocm-hipblas-rerankers'
runs-on: 'ubuntu-latest'
base-image: "rocm/dev-ubuntu-22.04:6.1"
base-image: "rocm/dev-ubuntu-22.04:6.4.3"
skip-drivers: 'false'
backend: "rerankers"
dockerfile: "./backend/Dockerfile.python"
@@ -290,7 +327,7 @@ jobs:
tag-latest: 'auto'
tag-suffix: '-gpu-rocm-hipblas-llama-cpp'
runs-on: 'ubuntu-latest'
base-image: "rocm/dev-ubuntu-22.04:6.1"
base-image: "rocm/dev-ubuntu-22.04:6.4.3"
skip-drivers: 'false'
backend: "llama-cpp"
dockerfile: "./backend/Dockerfile.llama-cpp"
@@ -301,8 +338,8 @@ jobs:
platforms: 'linux/amd64'
tag-latest: 'auto'
tag-suffix: '-gpu-rocm-hipblas-vllm'
runs-on: 'ubuntu-latest'
base-image: "rocm/dev-ubuntu-22.04:6.1"
runs-on: 'arc-runner-set'
base-image: "rocm/dev-ubuntu-22.04:6.4.3"
skip-drivers: 'false'
backend: "vllm"
dockerfile: "./backend/Dockerfile.python"
@@ -314,7 +351,7 @@ jobs:
tag-latest: 'auto'
tag-suffix: '-gpu-rocm-hipblas-transformers'
runs-on: 'arc-runner-set'
base-image: "rocm/dev-ubuntu-22.04:6.1"
base-image: "rocm/dev-ubuntu-22.04:6.4.3"
skip-drivers: 'false'
backend: "transformers"
dockerfile: "./backend/Dockerfile.python"
@@ -326,7 +363,7 @@ jobs:
tag-latest: 'auto'
tag-suffix: '-gpu-rocm-hipblas-diffusers'
runs-on: 'arc-runner-set'
base-image: "rocm/dev-ubuntu-22.04:6.1"
base-image: "rocm/dev-ubuntu-22.04:6.4.3"
skip-drivers: 'false'
backend: "diffusers"
dockerfile: "./backend/Dockerfile.python"
@@ -339,7 +376,7 @@ jobs:
tag-latest: 'auto'
tag-suffix: '-gpu-rocm-hipblas-kokoro'
runs-on: 'arc-runner-set'
base-image: "rocm/dev-ubuntu-22.04:6.1"
base-image: "rocm/dev-ubuntu-22.04:6.4.3"
skip-drivers: 'false'
backend: "kokoro"
dockerfile: "./backend/Dockerfile.python"
@@ -351,7 +388,7 @@ jobs:
tag-latest: 'auto'
tag-suffix: '-gpu-rocm-hipblas-faster-whisper'
runs-on: 'ubuntu-latest'
base-image: "rocm/dev-ubuntu-22.04:6.1"
base-image: "rocm/dev-ubuntu-22.04:6.4.3"
skip-drivers: 'false'
backend: "faster-whisper"
dockerfile: "./backend/Dockerfile.python"
@@ -363,7 +400,7 @@ jobs:
tag-latest: 'auto'
tag-suffix: '-gpu-rocm-hipblas-coqui'
runs-on: 'ubuntu-latest'
base-image: "rocm/dev-ubuntu-22.04:6.1"
base-image: "rocm/dev-ubuntu-22.04:6.4.3"
skip-drivers: 'false'
backend: "coqui"
dockerfile: "./backend/Dockerfile.python"
@@ -375,7 +412,7 @@ jobs:
tag-latest: 'auto'
tag-suffix: '-gpu-rocm-hipblas-bark'
runs-on: 'arc-runner-set'
base-image: "rocm/dev-ubuntu-22.04:6.1"
base-image: "rocm/dev-ubuntu-22.04:6.4.3"
skip-drivers: 'false'
backend: "bark"
dockerfile: "./backend/Dockerfile.python"
@@ -423,7 +460,7 @@ jobs:
platforms: 'linux/amd64'
tag-latest: 'auto'
tag-suffix: '-gpu-intel-vllm'
runs-on: 'ubuntu-latest'
runs-on: 'arc-runner-set'
base-image: "quay.io/go-skynet/intel-oneapi-base:latest"
skip-drivers: 'false'
backend: "vllm"
@@ -740,7 +777,7 @@ jobs:
platforms: 'linux/amd64'
tag-latest: 'auto'
tag-suffix: '-gpu-hipblas-whisper'
base-image: "rocm/dev-ubuntu-22.04:6.1"
base-image: "rocm/dev-ubuntu-22.04:6.4.3"
runs-on: 'ubuntu-latest'
skip-drivers: 'false'
backend: "whisper"
@@ -902,7 +939,7 @@ jobs:
skip-drivers: 'true'
tag-latest: 'auto'
tag-suffix: '-gpu-hipblas-exllama2'
base-image: "rocm/dev-ubuntu-22.04:6.1"
base-image: "rocm/dev-ubuntu-22.04:6.4.3"
runs-on: 'ubuntu-latest'
backend: "exllama2"
dockerfile: "./backend/Dockerfile.python"
@@ -914,7 +951,7 @@ jobs:
# platforms: 'linux/amd64'
# tag-latest: 'auto'
# tag-suffix: '-gpu-hipblas-rfdetr'
# base-image: "rocm/dev-ubuntu-22.04:6.1"
# base-image: "rocm/dev-ubuntu-22.04:6.4.3"
# runs-on: 'ubuntu-latest'
# skip-drivers: 'false'
# backend: "rfdetr"
@@ -933,6 +970,60 @@ jobs:
backend: "kitten-tts"
dockerfile: "./backend/Dockerfile.python"
context: "./backend"
transformers-darwin:
uses: ./.github/workflows/backend_build_darwin.yml
with:
backend: "transformers"
build-type: "mps"
go-version: "1.24.x"
tag-suffix: "-metal-darwin-arm64-transformers"
use-pip: true
runs-on: "macOS-14"
secrets:
dockerUsername: ${{ secrets.DOCKERHUB_USERNAME }}
dockerPassword: ${{ secrets.DOCKERHUB_PASSWORD }}
quayUsername: ${{ secrets.LOCALAI_REGISTRY_USERNAME }}
quayPassword: ${{ secrets.LOCALAI_REGISTRY_PASSWORD }}
diffusers-darwin:
uses: ./.github/workflows/backend_build_darwin.yml
with:
backend: "diffusers"
build-type: "mps"
go-version: "1.24.x"
tag-suffix: "-metal-darwin-arm64-diffusers"
use-pip: true
runs-on: "macOS-14"
secrets:
dockerUsername: ${{ secrets.DOCKERHUB_USERNAME }}
dockerPassword: ${{ secrets.DOCKERHUB_PASSWORD }}
quayUsername: ${{ secrets.LOCALAI_REGISTRY_USERNAME }}
quayPassword: ${{ secrets.LOCALAI_REGISTRY_PASSWORD }}
mlx-darwin:
uses: ./.github/workflows/backend_build_darwin.yml
with:
backend: "mlx"
build-type: "mps"
go-version: "1.24.x"
tag-suffix: "-metal-darwin-arm64-mlx"
runs-on: "macOS-14"
secrets:
dockerUsername: ${{ secrets.DOCKERHUB_USERNAME }}
dockerPassword: ${{ secrets.DOCKERHUB_PASSWORD }}
quayUsername: ${{ secrets.LOCALAI_REGISTRY_USERNAME }}
quayPassword: ${{ secrets.LOCALAI_REGISTRY_PASSWORD }}
mlx-vlm-darwin:
uses: ./.github/workflows/backend_build_darwin.yml
with:
backend: "mlx-vlm"
build-type: "mps"
go-version: "1.24.x"
tag-suffix: "-metal-darwin-arm64-mlx-vlm"
runs-on: "macOS-14"
secrets:
dockerUsername: ${{ secrets.DOCKERHUB_USERNAME }}
dockerPassword: ${{ secrets.DOCKERHUB_PASSWORD }}
quayUsername: ${{ secrets.LOCALAI_REGISTRY_USERNAME }}
quayPassword: ${{ secrets.LOCALAI_REGISTRY_PASSWORD }}
llama-cpp-darwin:
runs-on: macOS-14
strategy:
@@ -940,7 +1031,7 @@ jobs:
go-version: ['1.21.x']
steps:
- name: Clone
uses: actions/checkout@v4
uses: actions/checkout@v5
with:
submodules: true
- name: Setup Go ${{ matrix.go-version }}
@@ -957,22 +1048,19 @@ jobs:
- name: Build llama-cpp-darwin
run: |
make protogen-go
make build
bash scripts/build-llama-cpp-darwin.sh
ls -la build/darwin.tar
mv build/darwin.tar build/llama-cpp.tar
make backends/llama-cpp-darwin
- name: Upload llama-cpp.tar
uses: actions/upload-artifact@v4
with:
name: llama-cpp-tar
path: build/llama-cpp.tar
path: backend-images/llama-cpp.tar
llama-cpp-darwin-publish:
needs: llama-cpp-darwin
if: github.event_name != 'pull_request'
runs-on: ubuntu-latest
steps:
- name: Download llama-cpp.tar
uses: actions/download-artifact@v4
uses: actions/download-artifact@v5
with:
name: llama-cpp-tar
path: .
@@ -1029,7 +1117,7 @@ jobs:
go-version: ['1.21.x']
steps:
- name: Clone
uses: actions/checkout@v4
uses: actions/checkout@v5
with:
submodules: true
- name: Setup Go ${{ matrix.go-version }}
@@ -1048,21 +1136,19 @@ jobs:
make protogen-go
make build
export PLATFORMARCH=darwin/amd64
bash scripts/build-llama-cpp-darwin.sh
ls -la build/darwin.tar
mv build/darwin.tar build/llama-cpp.tar
make backends/llama-cpp-darwin
- name: Upload llama-cpp.tar
uses: actions/upload-artifact@v4
with:
name: llama-cpp-tar-x86
path: build/llama-cpp.tar
path: backend-images/llama-cpp.tar
llama-cpp-darwin-x86-publish:
if: github.event_name != 'pull_request'
needs: llama-cpp-darwin-x86
runs-on: ubuntu-latest
steps:
- name: Download llama-cpp.tar
uses: actions/download-artifact@v4
uses: actions/download-artifact@v5
with:
name: llama-cpp-tar-x86
path: .

View File

@@ -55,9 +55,9 @@ on:
type: string
secrets:
dockerUsername:
required: true
required: false
dockerPassword:
required: true
required: false
quayUsername:
required: true
quayPassword:
@@ -66,6 +66,8 @@ on:
jobs:
backend-build:
runs-on: ${{ inputs.runs-on }}
env:
quay_username: ${{ secrets.quayUsername }}
steps:
@@ -95,7 +97,7 @@ jobs:
&& sudo apt-get install -y git
- name: Checkout
uses: actions/checkout@v4
uses: actions/checkout@v5
- name: Release space from worker
if: inputs.runs-on == 'ubuntu-latest'
@@ -187,7 +189,7 @@ jobs:
password: ${{ secrets.dockerPassword }}
- name: Login to Quay.io
# if: github.event_name != 'pull_request'
if: ${{ env.quay_username != '' }}
uses: docker/login-action@v3
with:
registry: quay.io
@@ -230,7 +232,7 @@ jobs:
file: ${{ inputs.dockerfile }}
cache-from: type=gha
platforms: ${{ inputs.platforms }}
push: true
push: ${{ env.quay_username != '' }}
tags: ${{ steps.meta_pull_request.outputs.tags }}
labels: ${{ steps.meta_pull_request.outputs.labels }}
@@ -238,4 +240,4 @@ jobs:
- name: job summary
run: |
echo "Built image: ${{ steps.meta.outputs.labels }}" >> $GITHUB_STEP_SUMMARY
echo "Built image: ${{ steps.meta.outputs.labels }}" >> $GITHUB_STEP_SUMMARY

View File

@@ -0,0 +1,140 @@
---
name: 'build darwin python backend container images (reusable)'
on:
workflow_call:
inputs:
backend:
description: 'Backend to build'
required: true
type: string
build-type:
description: 'Build type (e.g., mps)'
default: ''
type: string
use-pip:
description: 'Use pip to install dependencies'
default: false
type: boolean
go-version:
description: 'Go version to use'
default: '1.24.x'
type: string
tag-suffix:
description: 'Tag suffix for the built image'
required: true
type: string
runs-on:
description: 'Runner to use'
default: 'macOS-14'
type: string
secrets:
dockerUsername:
required: false
dockerPassword:
required: false
quayUsername:
required: true
quayPassword:
required: true
jobs:
darwin-backend-build:
runs-on: ${{ inputs.runs-on }}
strategy:
matrix:
go-version: ['${{ inputs.go-version }}']
steps:
- name: Clone
uses: actions/checkout@v5
with:
submodules: true
- name: Setup Go ${{ matrix.go-version }}
uses: actions/setup-go@v5
with:
go-version: ${{ matrix.go-version }}
cache: false
# You can test your matrix by printing the current Go version
- name: Display Go version
run: go version
- name: Dependencies
run: |
brew install protobuf grpc make protoc-gen-go protoc-gen-go-grpc libomp llvm
- name: Build ${{ inputs.backend }}-darwin
run: |
make protogen-go
BACKEND=${{ inputs.backend }} BUILD_TYPE=${{ inputs.build-type }} USE_PIP=${{ inputs.use-pip }} make build-darwin-python-backend
- name: Upload ${{ inputs.backend }}.tar
uses: actions/upload-artifact@v4
with:
name: ${{ inputs.backend }}-tar
path: backend-images/${{ inputs.backend }}.tar
darwin-backend-publish:
needs: darwin-backend-build
if: github.event_name != 'pull_request'
runs-on: ubuntu-latest
steps:
- name: Download ${{ inputs.backend }}.tar
uses: actions/download-artifact@v5
with:
name: ${{ inputs.backend }}-tar
path: .
- name: Install crane
run: |
curl -L https://github.com/google/go-containerregistry/releases/latest/download/go-containerregistry_Linux_x86_64.tar.gz | tar -xz
sudo mv crane /usr/local/bin/
- name: Log in to DockerHub
run: |
echo "${{ secrets.dockerPassword }}" | crane auth login docker.io -u "${{ secrets.dockerUsername }}" --password-stdin
- name: Log in to quay.io
run: |
echo "${{ secrets.quayPassword }}" | crane auth login quay.io -u "${{ secrets.quayUsername }}" --password-stdin
- name: Docker meta
id: meta
uses: docker/metadata-action@v5
with:
images: |
localai/localai-backends
tags: |
type=ref,event=branch
type=semver,pattern={{raw}}
type=sha
flavor: |
latest=auto
suffix=${{ inputs.tag-suffix }},onlatest=true
- name: Docker meta
id: quaymeta
uses: docker/metadata-action@v5
with:
images: |
quay.io/go-skynet/local-ai-backends
tags: |
type=ref,event=branch
type=semver,pattern={{raw}}
type=sha
flavor: |
latest=auto
suffix=${{ inputs.tag-suffix }},onlatest=true
- name: Push Docker image (DockerHub)
run: |
for tag in $(echo "${{ steps.meta.outputs.tags }}" | tr ',' '\n'); do
crane push ${{ inputs.backend }}.tar $tag
done
- name: Push Docker image (Quay)
run: |
for tag in $(echo "${{ steps.quaymeta.outputs.tags }}" | tr ',' '\n'); do
crane push ${{ inputs.backend }}.tar $tag
done

.github/workflows/backend_pr.yml (new file, 58 lines added)
View File

@@ -0,0 +1,58 @@
name: 'build backend container images (PR-filtered)'
on:
pull_request:
concurrency:
group: ci-backends-pr-${{ github.head_ref || github.ref }}-${{ github.repository }}
cancel-in-progress: true
jobs:
generate-matrix:
runs-on: ubuntu-latest
outputs:
matrix: ${{ steps.set-matrix.outputs.matrix }}
has-backends: ${{ steps.set-matrix.outputs.has-backends }}
steps:
- name: Checkout repository
uses: actions/checkout@v5
- name: Setup Bun
uses: oven-sh/setup-bun@v2
- name: Install dependencies
run: |
bun add js-yaml
bun add @octokit/core
# filters the matrix in backend.yml
- name: Filter matrix for changed backends
id: set-matrix
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
GITHUB_EVENT_PATH: ${{ github.event_path }}
run: bun run scripts/changed-backends.js
backend-jobs:
needs: generate-matrix
uses: ./.github/workflows/backend_build.yml
if: needs.generate-matrix.outputs.has-backends == 'true'
with:
tag-latest: ${{ matrix.tag-latest }}
tag-suffix: ${{ matrix.tag-suffix }}
build-type: ${{ matrix.build-type }}
cuda-major-version: ${{ matrix.cuda-major-version }}
cuda-minor-version: ${{ matrix.cuda-minor-version }}
platforms: ${{ matrix.platforms }}
runs-on: ${{ matrix.runs-on }}
base-image: ${{ matrix.base-image }}
backend: ${{ matrix.backend }}
dockerfile: ${{ matrix.dockerfile }}
skip-drivers: ${{ matrix.skip-drivers }}
context: ${{ matrix.context }}
secrets:
quayUsername: ${{ secrets.LOCALAI_REGISTRY_USERNAME }}
quayPassword: ${{ secrets.LOCALAI_REGISTRY_PASSWORD }}
strategy:
fail-fast: true
matrix: ${{ fromJson(needs.generate-matrix.outputs.matrix) }}

View File

@@ -11,7 +11,7 @@ jobs:
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v4
uses: actions/checkout@v5
with:
fetch-depth: 0
- name: Set up Go

View File

@@ -31,7 +31,7 @@ jobs:
file: "backend/go/piper/Makefile"
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- uses: actions/checkout@v5
- name: Bump dependencies 🔧
id: bump
run: |

View File

@@ -12,7 +12,7 @@ jobs:
- repository: "mudler/LocalAI"
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- uses: actions/checkout@v5
- name: Bump dependencies 🔧
run: |
bash .github/bump_docs.sh ${{ matrix.repository }}

View File

@@ -15,7 +15,7 @@ jobs:
&& sudo add-apt-repository -y ppa:git-core/ppa \
&& sudo apt-get update \
&& sudo apt-get install -y git
- uses: actions/checkout@v4
- uses: actions/checkout@v5
- name: Install dependencies
run: |
sudo apt-get update

View File

@@ -20,7 +20,7 @@ jobs:
skip-commit-verification: true
- name: Checkout repository
uses: actions/checkout@v4
uses: actions/checkout@v5
- name: Approve a PR if not already approved
run: |

View File

@@ -15,7 +15,7 @@ jobs:
runs-on: ubuntu-latest
steps:
- name: Clone
uses: actions/checkout@v4
uses: actions/checkout@v5
with:
submodules: true
- uses: actions/setup-go@v5

View File

@@ -73,7 +73,7 @@ jobs:
uses: docker/setup-buildx-action@master
- name: Checkout
uses: actions/checkout@v4
uses: actions/checkout@v5
- name: Cache GRPC
uses: docker/build-push-action@v6

View File

@@ -43,7 +43,7 @@ jobs:
uses: docker/setup-buildx-action@master
- name: Checkout
uses: actions/checkout@v4
uses: actions/checkout@v5
- name: Cache Intel images
uses: docker/build-push-action@v6

View File

@@ -47,7 +47,7 @@ jobs:
platforms: 'linux/amd64'
tag-latest: 'false'
tag-suffix: '-hipblas'
base-image: "rocm/dev-ubuntu-22.04:6.1"
base-image: "rocm/dev-ubuntu-22.04:6.4.3"
grpc-base-image: "ubuntu:22.04"
runs-on: 'ubuntu-latest'
makeflags: "--jobs=3 --output-sync=target"

View File

@@ -39,7 +39,7 @@ jobs:
platforms: 'linux/amd64'
tag-latest: 'auto'
tag-suffix: '-gpu-hipblas'
base-image: "rocm/dev-ubuntu-22.04:6.1"
base-image: "rocm/dev-ubuntu-22.04:6.4.3"
grpc-base-image: "ubuntu:22.04"
runs-on: 'ubuntu-latest'
makeflags: "--jobs=3 --output-sync=target"

View File

@@ -94,7 +94,7 @@ jobs:
&& sudo apt-get update \
&& sudo apt-get install -y git
- name: Checkout
uses: actions/checkout@v4
uses: actions/checkout@v5
- name: Release space from worker
if: inputs.runs-on == 'ubuntu-latest'

View File

@@ -13,7 +13,7 @@ jobs:
if: ${{ github.actor == 'localai-bot' }}
steps:
- name: Checkout repository
uses: actions/checkout@v4
uses: actions/checkout@v5
- name: Approve a PR if not already approved
run: |

View File

@@ -11,7 +11,7 @@ jobs:
MODEL_NAME: gemma-3-12b-it
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- uses: actions/checkout@v5
with:
fetch-depth: 0 # needed to checkout all branches for this Action to work
- uses: mudler/localai-github-action@v1
@@ -90,7 +90,7 @@ jobs:
MODEL_NAME: gemma-3-12b-it
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- uses: actions/checkout@v5
with:
fetch-depth: 0 # needed to checkout all branches for this Action to work
- name: Start LocalAI

View File

@@ -10,7 +10,7 @@ jobs:
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v4
uses: actions/checkout@v5
with:
fetch-depth: 0
- name: Set up Go

View File

@@ -14,11 +14,11 @@ jobs:
GO111MODULE: on
steps:
- name: Checkout Source
uses: actions/checkout@v4
uses: actions/checkout@v5
if: ${{ github.actor != 'dependabot[bot]' }}
- name: Run Gosec Security Scanner
if: ${{ github.actor != 'dependabot[bot]' }}
uses: securego/gosec@v2.22.7
uses: securego/gosec@v2.22.8
with:
# we let the report trigger content trigger a failure using the GitHub Security features.
args: '-no-fail -fmt sarif -out results.sarif ./...'

View File

@@ -19,7 +19,7 @@ jobs:
# runs-on: ubuntu-latest
# steps:
# - name: Clone
# uses: actions/checkout@v4
# uses: actions/checkout@v5
# with:
# submodules: true
# - name: Dependencies
@@ -40,7 +40,7 @@ jobs:
runs-on: ubuntu-latest
steps:
- name: Clone
uses: actions/checkout@v4
uses: actions/checkout@v5
with:
submodules: true
- name: Dependencies
@@ -61,7 +61,7 @@ jobs:
runs-on: ubuntu-latest
steps:
- name: Clone
uses: actions/checkout@v4
uses: actions/checkout@v5
with:
submodules: true
- name: Dependencies
@@ -83,7 +83,7 @@ jobs:
runs-on: ubuntu-latest
steps:
- name: Clone
uses: actions/checkout@v4
uses: actions/checkout@v5
with:
submodules: true
- name: Dependencies
@@ -104,7 +104,7 @@ jobs:
# runs-on: ubuntu-latest
# steps:
# - name: Clone
# uses: actions/checkout@v4
# uses: actions/checkout@v5
# with:
# submodules: true
# - name: Dependencies
@@ -124,7 +124,7 @@ jobs:
# runs-on: ubuntu-latest
# steps:
# - name: Clone
# uses: actions/checkout@v4
# uses: actions/checkout@v5
# with:
# submodules: true
# - name: Dependencies
@@ -186,7 +186,7 @@ jobs:
# sudo rm -rf "$AGENT_TOOLSDIRECTORY" || true
# df -h
# - name: Clone
# uses: actions/checkout@v4
# uses: actions/checkout@v5
# with:
# submodules: true
# - name: Dependencies
@@ -211,7 +211,7 @@ jobs:
# runs-on: ubuntu-latest
# steps:
# - name: Clone
# uses: actions/checkout@v4
# uses: actions/checkout@v5
# with:
# submodules: true
# - name: Dependencies
@@ -232,7 +232,7 @@ jobs:
runs-on: ubuntu-latest
steps:
- name: Clone
uses: actions/checkout@v4
uses: actions/checkout@v5
with:
submodules: true
- name: Dependencies

View File

@@ -23,6 +23,20 @@ jobs:
matrix:
go-version: ['1.21.x']
steps:
- name: Free Disk Space (Ubuntu)
uses: jlumbroso/free-disk-space@main
with:
# this might remove tools that are actually needed,
# if set to "true" but frees about 6 GB
tool-cache: true
# all of these default to true, but feel free to set to
# "false" if necessary for your workflow
android: true
dotnet: true
haskell: true
large-packages: true
docker-images: true
swap-storage: true
- name: Release space from worker
run: |
echo "Listing top largest packages"
@@ -56,7 +70,7 @@ jobs:
sudo rm -rfv build || true
df -h
- name: Clone
uses: actions/checkout@v4
uses: actions/checkout@v5
with:
submodules: true
- name: Setup Go ${{ matrix.go-version }}
@@ -152,7 +166,7 @@ jobs:
sudo rm -rfv build || true
df -h
- name: Clone
uses: actions/checkout@v4
uses: actions/checkout@v5
with:
submodules: true
- name: Dependencies
@@ -182,7 +196,7 @@ jobs:
go-version: ['1.21.x']
steps:
- name: Clone
uses: actions/checkout@v4
uses: actions/checkout@v5
with:
submodules: true
- name: Setup Go ${{ matrix.go-version }}
@@ -200,11 +214,7 @@ jobs:
- name: Build llama-cpp-darwin
run: |
make protogen-go
make build
bash scripts/build-llama-cpp-darwin.sh
ls -la build/darwin.tar
mv build/darwin.tar build/llama-cpp.tar
./local-ai backends install "ocifile://$PWD/build/llama-cpp.tar"
make backends/llama-cpp-darwin
- name: Test
run: |
export C_INCLUDE_PATH=/usr/local/include

View File

@@ -9,7 +9,7 @@ jobs:
fail-fast: false
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- uses: actions/checkout@v5
- uses: actions/setup-go@v5
with:
go-version: 'stable'

View File

@@ -9,7 +9,7 @@ ENV DEBIAN_FRONTEND=noninteractive
RUN apt-get update && \
apt-get install -y --no-install-recommends \
ca-certificates curl wget espeak-ng libgomp1 \
python3 python-is-python3 ffmpeg && \
ffmpeg libopenblas-base libopenblas-dev && \
apt-get clean && \
rm -rf /var/lib/apt/lists/*

View File

@@ -132,36 +132,6 @@ test: test-models/testmodel.ggml protogen-go
$(MAKE) test-tts
$(MAKE) test-stablediffusion
backends/llama-cpp: docker-build-llama-cpp docker-save-llama-cpp build
./local-ai backends install "ocifile://$(abspath ./backend-images/llama-cpp.tar)"
backends/piper: docker-build-piper docker-save-piper build
./local-ai backends install "ocifile://$(abspath ./backend-images/piper.tar)"
backends/stablediffusion-ggml: docker-build-stablediffusion-ggml docker-save-stablediffusion-ggml build
./local-ai backends install "ocifile://$(abspath ./backend-images/stablediffusion-ggml.tar)"
backends/whisper: docker-build-whisper docker-save-whisper build
./local-ai backends install "ocifile://$(abspath ./backend-images/whisper.tar)"
backends/silero-vad: docker-build-silero-vad docker-save-silero-vad build
./local-ai backends install "ocifile://$(abspath ./backend-images/silero-vad.tar)"
backends/local-store: docker-build-local-store docker-save-local-store build
./local-ai backends install "ocifile://$(abspath ./backend-images/local-store.tar)"
backends/huggingface: docker-build-huggingface docker-save-huggingface build
./local-ai backends install "ocifile://$(abspath ./backend-images/huggingface.tar)"
backends/rfdetr: docker-build-rfdetr docker-save-rfdetr build
./local-ai backends install "ocifile://$(abspath ./backend-images/rfdetr.tar)"
backends/kitten-tts: docker-build-kitten-tts docker-save-kitten-tts build
./local-ai backends install "ocifile://$(abspath ./backend-images/kitten-tts.tar)"
backends/kokoro: docker-build-kokoro docker-save-kokoro build
./local-ai backends install "ocifile://$(abspath ./backend-images/kokoro.tar)"
########################################################
## AIO tests
########################################################
@@ -354,6 +324,59 @@ docker-image-intel:
## Backends
########################################################
backends/diffusers: docker-build-diffusers docker-save-diffusers build
./local-ai backends install "ocifile://$(abspath ./backend-images/diffusers.tar)"
backends/llama-cpp: docker-build-llama-cpp docker-save-llama-cpp build
./local-ai backends install "ocifile://$(abspath ./backend-images/llama-cpp.tar)"
backends/piper: docker-build-piper docker-save-piper build
./local-ai backends install "ocifile://$(abspath ./backend-images/piper.tar)"
backends/stablediffusion-ggml: docker-build-stablediffusion-ggml docker-save-stablediffusion-ggml build
./local-ai backends install "ocifile://$(abspath ./backend-images/stablediffusion-ggml.tar)"
backends/whisper: docker-build-whisper docker-save-whisper build
./local-ai backends install "ocifile://$(abspath ./backend-images/whisper.tar)"
backends/silero-vad: docker-build-silero-vad docker-save-silero-vad build
./local-ai backends install "ocifile://$(abspath ./backend-images/silero-vad.tar)"
backends/local-store: docker-build-local-store docker-save-local-store build
./local-ai backends install "ocifile://$(abspath ./backend-images/local-store.tar)"
backends/huggingface: docker-build-huggingface docker-save-huggingface build
./local-ai backends install "ocifile://$(abspath ./backend-images/huggingface.tar)"
backends/rfdetr: docker-build-rfdetr docker-save-rfdetr build
./local-ai backends install "ocifile://$(abspath ./backend-images/rfdetr.tar)"
backends/kitten-tts: docker-build-kitten-tts docker-save-kitten-tts build
./local-ai backends install "ocifile://$(abspath ./backend-images/kitten-tts.tar)"
backends/kokoro: docker-build-kokoro docker-save-kokoro build
./local-ai backends install "ocifile://$(abspath ./backend-images/kokoro.tar)"
backends/llama-cpp-darwin: build
bash ./scripts/build/llama-cpp-darwin.sh
./local-ai backends install "ocifile://$(abspath ./backend-images/llama-cpp.tar)"
build-darwin-python-backend: build
bash ./scripts/build/python-darwin.sh
backends/mlx:
BACKEND=mlx $(MAKE) build-darwin-python-backend
./local-ai backends install "ocifile://$(abspath ./backend-images/mlx.tar)"
backends/diffuser-darwin:
BACKEND=diffusers $(MAKE) build-darwin-python-backend
./local-ai backends install "ocifile://$(abspath ./backend-images/diffusers.tar)"
backends/mlx-vlm:
BACKEND=mlx-vlm $(MAKE) build-darwin-python-backend
./local-ai backends install "ocifile://$(abspath ./backend-images/mlx-vlm.tar)"
backend-images:
mkdir -p backend-images
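The new Darwin targets above follow the same pattern as the Docker-based ones: build a backend tarball, then install it through the ocifile scheme. A minimal local sketch on an Apple Silicon checkout (mlx is just one of the backends wired up above; paths follow the targets as written):

make backends/mlx
# equivalent to:
BACKEND=mlx make build-darwin-python-backend
./local-ai backends install "ocifile://$PWD/backend-images/mlx.tar"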
@@ -427,7 +450,10 @@ docker-build-transformers:
docker build --build-arg BUILD_TYPE=$(BUILD_TYPE) --build-arg BASE_IMAGE=$(BASE_IMAGE) -t local-ai-backend:transformers -f backend/Dockerfile.python --build-arg BACKEND=transformers .
docker-build-diffusers:
docker build --build-arg BUILD_TYPE=$(BUILD_TYPE) --build-arg BASE_IMAGE=$(BASE_IMAGE) -t local-ai-backend:diffusers -f backend/Dockerfile.python --build-arg BACKEND=diffusers .
docker build --progress=plain --build-arg BUILD_TYPE=$(BUILD_TYPE) --build-arg BASE_IMAGE=$(BASE_IMAGE) -t local-ai-backend:diffusers -f backend/Dockerfile.python --build-arg BACKEND=diffusers ./backend
docker-save-diffusers: backend-images
docker save local-ai-backend:diffusers -o backend-images/diffusers.tar
docker-build-whisper:
docker build --build-arg BUILD_TYPE=$(BUILD_TYPE) --build-arg BASE_IMAGE=$(BASE_IMAGE) -t local-ai-backend:whisper -f backend/Dockerfile.golang --build-arg BACKEND=whisper .

View File

@@ -191,6 +191,7 @@ For more information, see [💻 Getting started](https://localai.io/basics/getti
## 📰 Latest project news
- August 2025: MLX, MLX-VLM, Diffusers and llama.cpp are now supported on Mac M1/M2/M3+ chips (with the `development` suffix in the gallery): https://github.com/mudler/LocalAI/pull/6049 https://github.com/mudler/LocalAI/pull/6119 https://github.com/mudler/LocalAI/pull/6121 https://github.com/mudler/LocalAI/pull/6060
- July/August 2025: 🔍 [Object Detection](https://localai.io/features/object-detection/) added to the API featuring [rf-detr](https://github.com/roboflow/rf-detr)
- July 2025: All backends migrated outside of the main binary. LocalAI is now more lightweight, small, and automatically downloads the required backend to run the model. [Read the release notes](https://github.com/mudler/LocalAI/releases/tag/v3.2.0)
- June 2025: [Backend management](https://github.com/mudler/LocalAI/pull/5607) has been added. Attention: extras images are going to be deprecated from the next release! Read [the backend management PR](https://github.com/mudler/LocalAI/pull/5607).

View File

@@ -23,7 +23,7 @@ RUN apt-get update && \
libssl-dev \
git \
git-lfs \
unzip \
unzip clang \
upx-ucl \
curl python3-pip \
python-is-python3 \
@@ -116,7 +116,7 @@ COPY python/${BACKEND} /${BACKEND}
COPY backend.proto /${BACKEND}/backend.proto
COPY python/common/ /${BACKEND}/common
RUN cd /${BACKEND} && make
RUN cd /${BACKEND} && PORTABLE_PYTHON=true make
FROM scratch
ARG BACKEND=rerankers

View File

@@ -1,5 +1,5 @@
LLAMA_VERSION?=a0552c8beef74e843bb085c8ef0c63f9ed7a2b27
LLAMA_VERSION?=710dfc465a68f7443b87d9f792cffba00ed739fe
LLAMA_REPO?=https://github.com/ggerganov/llama.cpp
CMAKE_ARGS?=
@@ -32,10 +32,8 @@ else ifeq ($(BUILD_TYPE),hipblas)
ROCM_PATH ?= /opt/rocm
export CXX=$(ROCM_HOME)/llvm/bin/clang++
export CC=$(ROCM_HOME)/llvm/bin/clang
# GPU_TARGETS ?= gfx803,gfx900,gfx906,gfx908,gfx90a,gfx942,gfx1010,gfx1030,gfx1032,gfx1100,gfx1101,gfx1102
# AMDGPU_TARGETS ?= "$(GPU_TARGETS)"
CMAKE_ARGS+=-DGGML_HIP=ON
# CMAKE_ARGS+=-DGGML_HIP=ON -DAMDGPU_TARGETS="$(AMDGPU_TARGETS)" -DGPU_TARGETS="$(GPU_TARGETS)"
AMDGPU_TARGETS?=gfx803,gfx900,gfx906,gfx908,gfx90a,gfx942,gfx1010,gfx1030,gfx1032,gfx1100,gfx1101,gfx1102,gfx1200,gfx1201
CMAKE_ARGS+=-DGGML_HIP=ON -DAMDGPU_TARGETS=$(AMDGPU_TARGETS)
else ifeq ($(BUILD_TYPE),vulkan)
CMAKE_ARGS+=-DGGML_VULKAN=1
else ifeq ($(OS),Darwin)
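Because the expanded AMDGPU_TARGETS list (now including gfx1200/gfx1201) is only a ?= default, it can still be overridden at build time to compile for a single GPU and cut HIP build time. A sketch; the target value and the bare make invocation are illustrative:

BUILD_TYPE=hipblas AMDGPU_TARGETS=gfx1100 make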

View File

@@ -53,9 +53,9 @@ static void start_llama_server(server_context& ctx_server) {
LOG_INF("%s: model loaded\n", __func__);
// print sample chat example to make it clear which template is used
LOG_INF("%s: chat template, chat_template: %s, example_format: '%s'\n", __func__,
common_chat_templates_source(ctx_server.chat_templates.get()),
common_chat_format_example(ctx_server.chat_templates.get(), ctx_server.params_base.use_jinja).c_str());
// LOG_INF("%s: chat template, chat_template: %s, example_format: '%s'\n", __func__,
// common_chat_templates_source(ctx_server.chat_templates.get()),
// common_chat_format_example(ctx_server.chat_templates.get(), ctx_server.params_base.use_jinja).c_str(), ctx_server.params_base.default_template_kwargs);
// Reset the chat templates
// TODO: We should make this configurable by respecting the option that is already present in LocalAI for vLLM
@@ -437,24 +437,7 @@ public:
}
}
// process files
mtmd::bitmaps bitmaps;
const bool has_mtmd = ctx_server.mctx != nullptr;
{
if (!has_mtmd && !files.empty()) {
throw std::runtime_error("This server does not support multimodal");
}
for (auto & file : files) {
mtmd::bitmap bmp(mtmd_helper_bitmap_init_from_buf(ctx_server.mctx, file.data(), file.size()));
if (!bmp.ptr) {
throw std::runtime_error("Failed to load image/audio");
}
// calculate bitmap hash (for KV caching)
std::string hash = fnv_hash(bmp.data(), bmp.n_bytes());
bmp.set_id(hash.c_str());
bitmaps.entries.push_back(std::move(bmp));
}
}
// process prompt
std::vector<server_tokens> inputs;
@@ -464,32 +447,10 @@ public:
if (has_mtmd) {
// multimodal
std::string prompt_str = prompt.get<std::string>();
mtmd_input_text inp_txt = {
prompt_str.c_str(),
/* add_special */ true,
/* parse_special */ true,
};
mtmd::input_chunks chunks(mtmd_input_chunks_init());
auto bitmaps_c_ptr = bitmaps.c_ptr();
int32_t tokenized = mtmd_tokenize(ctx_server.mctx,
chunks.ptr.get(),
&inp_txt,
bitmaps_c_ptr.data(),
bitmaps_c_ptr.size());
if (tokenized != 0) {
throw std::runtime_error("Failed to tokenize prompt");
}
server_tokens tmp(chunks, true);
inputs.push_back(std::move(tmp));
inputs.push_back(process_mtmd_prompt(ctx_server.mctx, prompt.get<std::string>(), files));
} else {
// non-multimodal version
auto tokenized_prompts = tokenize_input_prompts(ctx_server.vocab, prompt, true, true);
for (auto & p : tokenized_prompts) {
auto tmp = server_tokens(p, ctx_server.mctx != nullptr);
inputs.push_back(std::move(tmp));
}
// Everything else, including multimodal completions.
inputs = tokenize_input_prompts(ctx_server.vocab, ctx_server.mctx, prompt, true, true);
}
tasks.reserve(inputs.size());
@@ -630,23 +591,7 @@ public:
}
// process files
mtmd::bitmaps bitmaps;
const bool has_mtmd = ctx_server.mctx != nullptr;
{
if (!has_mtmd && !files.empty()) {
throw std::runtime_error("This server does not support multimodal");
}
for (auto & file : files) {
mtmd::bitmap bmp(mtmd_helper_bitmap_init_from_buf(ctx_server.mctx, file.data(), file.size()));
if (!bmp.ptr) {
throw std::runtime_error("Failed to load image/audio");
}
// calculate bitmap hash (for KV caching)
std::string hash = fnv_hash(bmp.data(), bmp.n_bytes());
bmp.set_id(hash.c_str());
bitmaps.entries.push_back(std::move(bmp));
}
}
// process prompt
std::vector<server_tokens> inputs;
@@ -657,33 +602,10 @@ public:
if (has_mtmd) {
// multimodal
std::string prompt_str = prompt.get<std::string>();
mtmd_input_text inp_txt = {
prompt_str.c_str(),
/* add_special */ true,
/* parse_special */ true,
};
mtmd::input_chunks chunks(mtmd_input_chunks_init());
auto bitmaps_c_ptr = bitmaps.c_ptr();
int32_t tokenized = mtmd_tokenize(ctx_server.mctx,
chunks.ptr.get(),
&inp_txt,
bitmaps_c_ptr.data(),
bitmaps_c_ptr.size());
if (tokenized != 0) {
std::cout << "[PREDICT] Failed to tokenize prompt" << std::endl;
throw std::runtime_error("Failed to tokenize prompt");
}
server_tokens tmp(chunks, true);
inputs.push_back(std::move(tmp));
inputs.push_back(process_mtmd_prompt(ctx_server.mctx, prompt.get<std::string>(), files));
} else {
// non-multimodal version
auto tokenized_prompts = tokenize_input_prompts(ctx_server.vocab, prompt, true, true);
for (auto & p : tokenized_prompts) {
auto tmp = server_tokens(p, ctx_server.mctx != nullptr);
inputs.push_back(std::move(tmp));
}
// Everything else, including multimodal completions.
inputs = tokenize_input_prompts(ctx_server.vocab, ctx_server.mctx, prompt, true, true);
}
tasks.reserve(inputs.size());
@@ -774,7 +696,7 @@ public:
json prompt = body.at("prompt");
auto tokenized_prompts = tokenize_input_prompts(ctx_server.vocab, prompt, true, true);
auto tokenized_prompts = tokenize_input_prompts(ctx_server.vocab, ctx_server.mctx, prompt, true, true);
for (const auto & tokens : tokenized_prompts) {
// this check is necessary for models that do not add BOS token to the input
if (tokens.empty()) {
@@ -793,7 +715,7 @@ public:
task.id = ctx_server.queue_tasks.get_new_id();
task.index = i;
task.prompt_tokens = server_tokens(tokenized_prompts[i], ctx_server.mctx != nullptr);
task.prompt_tokens = std::move(tokenized_prompts[i]);
// OAI-compat
task.params.oaicompat = OAICOMPAT_TYPE_EMBEDDING;
@@ -849,8 +771,10 @@ public:
}
// Tokenize the query
llama_tokens tokenized_query = tokenize_input_prompts(ctx_server.vocab, request->query(), /* add_special */ false, true)[0];
auto tokenized_query = tokenize_input_prompts(ctx_server.vocab, ctx_server.mctx, request->query(), /* add_special */ false, true);
if (tokenized_query.size() != 1) {
return grpc::Status(grpc::StatusCode::INVALID_ARGUMENT, "\"query\" must contain only a single prompt");
}
// Create and queue the task
json responses = json::array();
bool error = false;
@@ -862,14 +786,14 @@ public:
documents.push_back(request->documents(i));
}
auto tokenized_docs = tokenize_input_prompts(ctx_server.vocab, documents, /* add_special */ false, true);
auto tokenized_docs = tokenize_input_prompts(ctx_server.vocab, ctx_server.mctx, documents, /* add_special */ false, true);
tasks.reserve(tokenized_docs.size());
for (size_t i = 0; i < tokenized_docs.size(); i++) {
auto tmp = format_rerank(ctx_server.vocab, tokenized_query, tokenized_docs[i]);
auto tmp = format_rerank(ctx_server.vocab, tokenized_query[0], tokenized_docs[i]);
server_task task = server_task(SERVER_TASK_TYPE_RERANK);
task.id = ctx_server.queue_tasks.get_new_id();
task.index = i;
task.prompt_tokens = server_tokens(tmp, ctx_server.mctx != nullptr);
task.prompt_tokens = std::move(tmp);
tasks.push_back(std::move(task));
}

View File

@@ -42,7 +42,8 @@ fi
# Extend ld library path with the dir where this script is located/lib
if [ "$(uname)" == "Darwin" ]; then
DYLD_FALLBACK_LIBRARY_PATH=$CURDIR/lib:$DYLD_FALLBACK_LIBRARY_PATH
export DYLD_LIBRARY_PATH=$CURDIR/lib:$DYLD_LIBRARY_PATH
#export DYLD_FALLBACK_LIBRARY_PATH=$CURDIR/lib:$DYLD_FALLBACK_LIBRARY_PATH
else
export LD_LIBRARY_PATH=$CURDIR/lib:$LD_LIBRARY_PATH
fi
@@ -57,5 +58,5 @@ fi
echo "Using binary: $BINARY"
exec $CURDIR/$BINARY "$@"
# In case we fail execing, just run fallback
# We should never reach this point, however just in case we do, run fallback
exec $CURDIR/llama-cpp-fallback "$@"
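To confirm that the launcher now resolves the bundled libraries from $CURDIR/lib on macOS after this change, dyld's standard debug variable can be used as a one-off check (a diagnostic sketch, not part of run.sh):

DYLD_PRINT_LIBRARIES=1 ./run.sh 2>&1 | grep "/lib/"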

View File

@@ -0,0 +1,16 @@
cmake_minimum_required(VERSION 3.12)
project(gosd LANGUAGES C CXX)
set(CMAKE_POSITION_INDEPENDENT_CODE ON)
add_subdirectory(./sources/stablediffusion-ggml.cpp)
add_library(gosd MODULE gosd.cpp)
target_link_libraries(gosd PRIVATE stable-diffusion ggml stdc++fs)
target_include_directories(gosd PUBLIC
stable-diffusion.cpp
stable-diffusion.cpp/thirdparty
)
set_property(TARGET gosd PROPERTY CXX_STANDARD 17)
set_target_properties(gosd PROPERTIES LIBRARY_OUTPUT_DIRECTORY ${CMAKE_BINARY_DIR})
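A minimal out-of-tree configure and build of the new gosd module, matching what the libgosd.so Makefile target further below does (a sketch: it assumes sources/stablediffusion-ggml.cpp is already cloned and that any backend-specific CMAKE_ARGS such as -DSD_METAL=ON are appended as needed):

cmake -B build .
cmake --build build --config Release -j
# build/libgosd.so is then dlopen()ed by the Go gRPC server at runtime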

View File

@@ -1,28 +1,16 @@
INCLUDE_PATH := $(abspath ./)
LIBRARY_PATH := $(abspath ./)
AR?=ar
CMAKE_ARGS?=
BUILD_TYPE?=
NATIVE?=false
CUDA_LIBPATH?=/usr/local/cuda/lib64/
ONEAPI_VARS?=/opt/intel/oneapi/setvars.sh
# keep standard at C11 and C++11
CXXFLAGS = -I. -I$(INCLUDE_PATH)/sources/stablediffusion-ggml.cpp/thirdparty -I$(INCLUDE_PATH)/sources/stablediffusion-ggml.cpp/ggml/include -I$(INCLUDE_PATH)/sources/stablediffusion-ggml.cpp -O3 -DNDEBUG -std=c++17 -fPIC
GOCMD?=go
CGO_LDFLAGS?=
# Avoid parent make file overwriting CGO_LDFLAGS which is needed for hipblas
CGO_LDFLAGS_SYCL=
GO_TAGS?=
LD_FLAGS?=
JOBS?=$(shell nproc --ignore=1)
# stablediffusion.cpp (ggml)
STABLEDIFFUSION_GGML_REPO?=https://github.com/leejet/stable-diffusion.cpp
STABLEDIFFUSION_GGML_VERSION?=5900ef6605c6fbf7934239f795c13c97bc993853
# Disable Shared libs as we are linking on static gRPC and we can't mix shared and static
CMAKE_ARGS+=-DBUILD_SHARED_LIBS=OFF -DGGML_MAX_NAME=128 -DSD_USE_SYSTEM_GGML=OFF
CMAKE_ARGS+=-DGGML_MAX_NAME=128
ifeq ($(NATIVE),false)
CMAKE_ARGS+=-DGGML_NATIVE=OFF
@@ -31,7 +19,6 @@ endif
# If build type is cublas, then we set -DGGML_CUDA=ON to CMAKE_ARGS automatically
ifeq ($(BUILD_TYPE),cublas)
CMAKE_ARGS+=-DSD_CUDA=ON -DGGML_CUDA=ON
CGO_LDFLAGS+=-lcublas -lcudart -L$(CUDA_LIBPATH) -L$(CUDA_LIBPATH)/stubs/ -lcuda
# If build type is openblas then we set -DGGML_BLAS=ON -DGGML_BLAS_VENDOR=OpenBLAS
# to CMAKE_ARGS automatically
else ifeq ($(BUILD_TYPE),openblas)
@@ -46,14 +33,12 @@ else ifeq ($(BUILD_TYPE),hipblas)
# But if it's OSX without metal, disable it here
else ifeq ($(BUILD_TYPE),vulkan)
CMAKE_ARGS+=-DSD_VULKAN=ON -DGGML_VULKAN=ON
CGO_LDFLAGS+=-lvulkan
else ifeq ($(OS),Darwin)
ifneq ($(BUILD_TYPE),metal)
CMAKE_ARGS+=-DSD_METAL=OFF -DGGML_METAL=OFF
else
CMAKE_ARGS+=-DSD_METAL=ON -DGGML_METAL=ON
CMAKE_ARGS+=-DGGML_METAL_EMBED_LIBRARY=ON
TARGET+=--target ggml-metal
endif
endif
@@ -63,12 +48,6 @@ ifeq ($(BUILD_TYPE),sycl_f16)
-DCMAKE_CXX_COMPILER=icpx \
-DSD_SYCL=ON \
-DGGML_SYCL_F16=ON
export CC=icx
export CXX=icpx
CGO_LDFLAGS_SYCL += -fsycl -L${DNNLROOT}/lib -ldnnl ${MKLROOT}/lib/intel64/libmkl_sycl.a -fiopenmp -fopenmp-targets=spir64 -lOpenCL
CGO_LDFLAGS_SYCL += $(shell pkg-config --libs mkl-static-lp64-gomp)
CGO_CXXFLAGS += -fiopenmp -fopenmp-targets=spir64
CGO_CXXFLAGS += $(shell pkg-config --cflags mkl-static-lp64-gomp )
endif
ifeq ($(BUILD_TYPE),sycl_f32)
@@ -76,73 +55,24 @@ ifeq ($(BUILD_TYPE),sycl_f32)
-DCMAKE_C_COMPILER=icx \
-DCMAKE_CXX_COMPILER=icpx \
-DSD_SYCL=ON
export CC=icx
export CXX=icpx
CGO_LDFLAGS_SYCL += -fsycl -L${DNNLROOT}/lib -ldnnl ${MKLROOT}/lib/intel64/libmkl_sycl.a -fiopenmp -fopenmp-targets=spir64 -lOpenCL
CGO_LDFLAGS_SYCL += $(shell pkg-config --libs mkl-static-lp64-gomp)
CGO_CXXFLAGS += -fiopenmp -fopenmp-targets=spir64
CGO_CXXFLAGS += $(shell pkg-config --cflags mkl-static-lp64-gomp )
endif
# warnings
# CXXFLAGS += -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function
# Find all .a archives in ARCHIVE_DIR
# (ggml can have different backends cpu, cuda, etc., each backend generates a .a archive)
GGML_ARCHIVE_DIR := build/ggml/src/
ALL_ARCHIVES := $(shell find $(GGML_ARCHIVE_DIR) -type f -name '*.a')
ALL_OBJS := $(shell find $(GGML_ARCHIVE_DIR) -type f -name '*.o')
# Name of the single merged library
COMBINED_LIB := libggmlall.a
# Instead of using the archives generated by GGML, use the object files directly to avoid overwriting objects with the same base name
$(COMBINED_LIB): $(ALL_ARCHIVES)
@echo "Merging all .o into $(COMBINED_LIB): $(ALL_OBJS)"
rm -f $@
ar -qc $@ $(ALL_OBJS)
# Ensure we have a proper index
ranlib $@
build/libstable-diffusion.a:
@echo "Building SD with $(BUILD_TYPE) build type and $(CMAKE_ARGS)"
ifneq (,$(findstring sycl,$(BUILD_TYPE)))
+bash -c "source $(ONEAPI_VARS); \
mkdir -p build && \
cd build && \
cmake $(CMAKE_ARGS) ../sources/stablediffusion-ggml.cpp && \
cmake --build . --config Release"
else
mkdir -p build && \
cd build && \
cmake $(CMAKE_ARGS) ../sources/stablediffusion-ggml.cpp && \
cmake --build . --config Release
endif
$(MAKE) $(COMBINED_LIB)
gosd.o:
ifneq (,$(findstring sycl,$(BUILD_TYPE)))
+bash -c "source $(ONEAPI_VARS); \
$(CXX) $(CXXFLAGS) gosd.cpp -o gosd.o -c"
else
$(CXX) $(CXXFLAGS) gosd.cpp -o gosd.o -c
endif
## stablediffusion (ggml)
sources/stablediffusion-ggml.cpp:
git clone --recursive $(STABLEDIFFUSION_GGML_REPO) sources/stablediffusion-ggml.cpp && \
cd sources/stablediffusion-ggml.cpp && \
git checkout $(STABLEDIFFUSION_GGML_VERSION) && \
git submodule update --init --recursive --depth 1 --single-branch
libsd.a: sources/stablediffusion-ggml.cpp build/libstable-diffusion.a gosd.o
cp $(INCLUDE_PATH)/build/libstable-diffusion.a ./libsd.a
$(AR) rcs libsd.a gosd.o
libgosd.so: sources/stablediffusion-ggml.cpp CMakeLists.txt gosd.cpp gosd.h
mkdir -p build && \
cd build && \
cmake .. $(CMAKE_ARGS) && \
cmake --build . --config Release -j$(JOBS) && \
cd .. && \
mv build/libgosd.so ./
stablediffusion-ggml: libsd.a
CGO_LDFLAGS="$(CGO_LDFLAGS) $(CGO_LDFLAGS_SYCL)" C_INCLUDE_PATH="$(INCLUDE_PATH)" LIBRARY_PATH="$(LIBRARY_PATH)" \
CC="$(CC)" CXX="$(CXX)" CGO_CXXFLAGS="$(CGO_CXXFLAGS)" \
$(GOCMD) build -ldflags "$(LD_FLAGS)" -tags "$(GO_TAGS)" -o stablediffusion-ggml ./
stablediffusion-ggml: main.go gosd.go libgosd.so
CGO_ENABLED=0 $(GOCMD) build -tags "$(GO_TAGS)" -o stablediffusion-ggml ./
package:
bash package.sh
@@ -150,4 +80,4 @@ package:
build: stablediffusion-ggml package
clean:
rm -rf gosd.o libsd.a build $(COMBINED_LIB)
rm -rf libgosd.o build stablediffusion-ggml

View File

@@ -1,3 +1,4 @@
#include <cstdint>
#define GGML_MAX_NAME 128
#include <stdio.h>
@@ -57,7 +58,7 @@ sd_ctx_t* sd_c;
sample_method_t sample_method;
// Copied from the upstream CLI
void sd_log_cb(enum sd_log_level_t level, const char* log, void* data) {
static void sd_log_cb(enum sd_log_level_t level, const char* log, void* data) {
//SDParams* params = (SDParams*)data;
const char* level_str;
@@ -88,33 +89,33 @@ void sd_log_cb(enum sd_log_level_t level, const char* log, void* data) {
fflush(stderr);
}
int load_model(char *model, char *model_path, char* options[], int threads, int diff) {
fprintf (stderr, "Loading model!\n");
int load_model(const char *model, char *model_path, char* options[], int threads, int diff) {
fprintf (stderr, "Loading model: %p=%s\n", model, model);
sd_set_log_callback(sd_log_cb, NULL);
char *stableDiffusionModel = "";
const char *stableDiffusionModel = "";
if (diff == 1 ) {
stableDiffusionModel = model;
model = "";
}
// decode options. Options are in the form optname:optvalue, or just optname for booleans.
char *clip_l_path = "";
char *clip_g_path = "";
char *t5xxl_path = "";
char *vae_path = "";
char *scheduler = "";
char *sampler = "";
const char *clip_l_path = "";
const char *clip_g_path = "";
const char *t5xxl_path = "";
const char *vae_path = "";
const char *scheduler = "";
const char *sampler = "";
char *lora_dir = model_path;
bool lora_dir_allocated = false;
fprintf(stderr, "parsing options\n");
fprintf(stderr, "parsing options: %p\n", options);
// If options is not NULL, parse options
for (int i = 0; options[i] != NULL; i++) {
char *optname = strtok(options[i], ":");
char *optval = strtok(NULL, ":");
const char *optname = strtok(options[i], ":");
const char *optval = strtok(NULL, ":");
if (optval == NULL) {
optval = "true";
}
@@ -147,7 +148,8 @@ int load_model(char *model, char *model_path, char* options[], int threads, int
lora_dir_allocated = true;
fprintf(stderr, "Lora dir resolved to: %s\n", lora_dir);
} else {
lora_dir = optval;
lora_dir = strdup(optval);
lora_dir_allocated = true;
fprintf(stderr, "No model path provided, using lora dir as-is: %s\n", lora_dir);
}
}
@@ -226,7 +228,7 @@ int load_model(char *model, char *model_path, char* options[], int threads, int
return 0;
}
int gen_image(char *text, char *negativeText, int width, int height, int steps, int seed , char *dst, float cfg_scale, char *src_image, float strength, char *mask_image, char **ref_images, int ref_images_count) {
int gen_image(char *text, char *negativeText, int width, int height, int steps, int64_t seed, char *dst, float cfg_scale, char *src_image, float strength, char *mask_image, char **ref_images, int ref_images_count) {
sd_image_t* results;
@@ -252,14 +254,14 @@ int gen_image(char *text, char *negativeText, int width, int height, int steps,
// Handle input image for img2img
bool has_input_image = (src_image != NULL && strlen(src_image) > 0);
bool has_mask_image = (mask_image != NULL && strlen(mask_image) > 0);
uint8_t* input_image_buffer = NULL;
uint8_t* mask_image_buffer = NULL;
std::vector<uint8_t> default_mask_image_vec;
if (has_input_image) {
fprintf(stderr, "Loading input image: %s\n", src_image);
int c = 0;
int img_width = 0;
int img_height = 0;
@@ -273,29 +275,29 @@ int gen_image(char *text, char *negativeText, int width, int height, int steps,
free(input_image_buffer);
return 1;
}
// Resize input image if dimensions don't match
if (img_width != width || img_height != height) {
fprintf(stderr, "Resizing input image from %dx%d to %dx%d\n", img_width, img_height, width, height);
uint8_t* resized_image_buffer = (uint8_t*)malloc(height * width * 3);
if (resized_image_buffer == NULL) {
fprintf(stderr, "Failed to allocate memory for resized image\n");
free(input_image_buffer);
return 1;
}
stbir_resize(input_image_buffer, img_width, img_height, 0,
resized_image_buffer, width, height, 0, STBIR_TYPE_UINT8,
3, STBIR_ALPHA_CHANNEL_NONE, 0,
STBIR_EDGE_CLAMP, STBIR_EDGE_CLAMP,
STBIR_FILTER_BOX, STBIR_FILTER_BOX,
STBIR_COLORSPACE_SRGB, nullptr);
free(input_image_buffer);
input_image_buffer = resized_image_buffer;
}
p.init_image = {(uint32_t)width, (uint32_t)height, 3, input_image_buffer};
p.strength = strength;
fprintf(stderr, "Using img2img with strength: %.2f\n", strength);
@@ -304,11 +306,11 @@ int gen_image(char *text, char *negativeText, int width, int height, int steps,
p.init_image = {(uint32_t)width, (uint32_t)height, 3, NULL};
p.strength = 0.0f;
}
// Handle mask image for inpainting
if (has_mask_image) {
fprintf(stderr, "Loading mask image: %s\n", mask_image);
int c = 0;
int mask_width = 0;
int mask_height = 0;
@@ -318,11 +320,11 @@ int gen_image(char *text, char *negativeText, int width, int height, int steps,
if (input_image_buffer) free(input_image_buffer);
return 1;
}
// Resize mask if dimensions don't match
if (mask_width != width || mask_height != height) {
fprintf(stderr, "Resizing mask image from %dx%d to %dx%d\n", mask_width, mask_height, width, height);
uint8_t* resized_mask_buffer = (uint8_t*)malloc(height * width);
if (resized_mask_buffer == NULL) {
fprintf(stderr, "Failed to allocate memory for resized mask\n");
@@ -330,18 +332,18 @@ int gen_image(char *text, char *negativeText, int width, int height, int steps,
if (input_image_buffer) free(input_image_buffer);
return 1;
}
stbir_resize(mask_image_buffer, mask_width, mask_height, 0,
resized_mask_buffer, width, height, 0, STBIR_TYPE_UINT8,
1, STBIR_ALPHA_CHANNEL_NONE, 0,
STBIR_EDGE_CLAMP, STBIR_EDGE_CLAMP,
STBIR_FILTER_BOX, STBIR_FILTER_BOX,
STBIR_COLORSPACE_SRGB, nullptr);
free(mask_image_buffer);
mask_image_buffer = resized_mask_buffer;
}
p.mask_image = {(uint32_t)width, (uint32_t)height, 1, mask_image_buffer};
fprintf(stderr, "Using inpainting with mask\n");
} else {
@@ -353,17 +355,17 @@ int gen_image(char *text, char *negativeText, int width, int height, int steps,
// Handle reference images
std::vector<sd_image_t> ref_images_vec;
std::vector<uint8_t*> ref_image_buffers;
if (ref_images_count > 0 && ref_images != NULL) {
fprintf(stderr, "Loading %d reference images\n", ref_images_count);
for (int i = 0; i < ref_images_count; i++) {
if (ref_images[i] == NULL || strlen(ref_images[i]) == 0) {
continue;
}
fprintf(stderr, "Loading reference image %d: %s\n", i + 1, ref_images[i]);
int c = 0;
int ref_width = 0;
int ref_height = 0;
@@ -377,33 +379,33 @@ int gen_image(char *text, char *negativeText, int width, int height, int steps,
free(ref_image_buffer);
continue;
}
// Resize reference image if dimensions don't match
if (ref_width != width || ref_height != height) {
fprintf(stderr, "Resizing reference image from %dx%d to %dx%d\n", ref_width, ref_height, width, height);
uint8_t* resized_ref_buffer = (uint8_t*)malloc(height * width * 3);
if (resized_ref_buffer == NULL) {
fprintf(stderr, "Failed to allocate memory for resized reference image\n");
free(ref_image_buffer);
continue;
}
stbir_resize(ref_image_buffer, ref_width, ref_height, 0,
resized_ref_buffer, width, height, 0, STBIR_TYPE_UINT8,
3, STBIR_ALPHA_CHANNEL_NONE, 0,
STBIR_EDGE_CLAMP, STBIR_EDGE_CLAMP,
STBIR_FILTER_BOX, STBIR_FILTER_BOX,
STBIR_COLORSPACE_SRGB, nullptr);
free(ref_image_buffer);
ref_image_buffer = resized_ref_buffer;
}
ref_image_buffers.push_back(ref_image_buffer);
ref_images_vec.push_back({(uint32_t)width, (uint32_t)height, 3, ref_image_buffer});
}
if (!ref_images_vec.empty()) {
p.ref_images = ref_images_vec.data();
p.ref_images_count = ref_images_vec.size();
@@ -454,12 +456,12 @@ int gen_image(char *text, char *negativeText, int width, int height, int steps,
for (auto buffer : ref_image_buffers) {
if (buffer) free(buffer);
}
fprintf (stderr, "gen_image is done", dst);
fprintf (stderr, "gen_image is done: %s", dst);
return 0;
}
int unload() {
free_sd_ctx(sd_c);
return 0;
}

View File

@@ -1,15 +1,10 @@
package main
// #cgo CXXFLAGS: -I${SRCDIR}/sources/stablediffusion-ggml.cpp/thirdparty -I${SRCDIR}/sources/stablediffusion-ggml.cpp -I${SRCDIR}/sources/stablediffusion-ggml.cpp/ggml/include
// #cgo LDFLAGS: -L${SRCDIR}/ -lsd -lstdc++ -lm -lggmlall -lgomp
// #include <gosd.h>
// #include <stdlib.h>
import "C"
import (
"fmt"
"os"
"path/filepath"
"runtime"
"strings"
"unsafe"
@@ -25,25 +20,34 @@ type SDGGML struct {
cfgScale float32
}
var (
LoadModel func(model, model_apth string, options []uintptr, threads int32, diff int) int
GenImage func(text, negativeText string, width, height, steps int, seed int64, dst string, cfgScale float32, srcImage string, strength float32, maskImage string, refImages []string, refImagesCount int) int
)
// Copied from Purego internal/strings
// TODO: We should upstream sending []string
func hasSuffix(s, suffix string) bool {
return len(s) >= len(suffix) && s[len(s)-len(suffix):] == suffix
}
func CString(name string) *byte {
if hasSuffix(name, "\x00") {
return &(*(*[]byte)(unsafe.Pointer(&name)))[0]
}
b := make([]byte, len(name)+1)
copy(b, name)
return &b[0]
}
func (sd *SDGGML) Load(opts *pb.ModelOptions) error {
sd.threads = int(opts.Threads)
modelPath := opts.ModelPath
modelFile := C.CString(opts.ModelFile)
defer C.free(unsafe.Pointer(modelFile))
modelPathC := C.CString(modelPath)
defer C.free(unsafe.Pointer(modelPathC))
var options **C.char
// prepare the options array to pass to C
size := C.size_t(unsafe.Sizeof((*C.char)(nil)))
length := C.size_t(len(opts.Options))
options = (**C.char)(C.malloc((length + 1) * size))
view := (*[1 << 30]*C.char)(unsafe.Pointer(options))[0 : len(opts.Options)+1 : len(opts.Options)+1]
modelFile := opts.ModelFile
modelPathC := modelPath
var diffusionModel int
@@ -68,81 +72,55 @@ func (sd *SDGGML) Load(opts *pb.ModelOptions) error {
fmt.Fprintf(os.Stderr, "Options: %+v\n", oo)
for i, x := range oo {
view[i] = C.CString(x)
// At the time of writing, purego doesn't recurse into slices to convert Go strings to pointers, so we need to do that ourselves
var keepAlive []any
options := make([]uintptr, len(oo), len(oo)+1)
for i, op := range oo {
bytep := CString(op)
options[i] = uintptr(unsafe.Pointer(bytep))
keepAlive = append(keepAlive, bytep)
}
view[len(oo)] = nil
sd.cfgScale = opts.CFGScale
ret := C.load_model(modelFile, modelPathC, options, C.int(opts.Threads), C.int(diffusionModel))
ret := LoadModel(modelFile, modelPathC, options, opts.Threads, diffusionModel)
if ret != 0 {
return fmt.Errorf("could not load model")
}
runtime.KeepAlive(keepAlive)
return nil
}
func (sd *SDGGML) GenerateImage(opts *pb.GenerateImageRequest) error {
t := C.CString(opts.PositivePrompt)
defer C.free(unsafe.Pointer(t))
t := opts.PositivePrompt
dst := opts.Dst
negative := opts.NegativePrompt
srcImage := opts.Src
dst := C.CString(opts.Dst)
defer C.free(unsafe.Pointer(dst))
negative := C.CString(opts.NegativePrompt)
defer C.free(unsafe.Pointer(negative))
// Handle source image path
var srcImage *C.char
if opts.Src != "" {
srcImage = C.CString(opts.Src)
defer C.free(unsafe.Pointer(srcImage))
}
// Handle mask image path
var maskImage *C.char
var maskImage string
if opts.EnableParameters != "" {
// Parse EnableParameters for mask path if provided
// This is a simple approach - in a real implementation you might want to parse JSON
if strings.Contains(opts.EnableParameters, "mask:") {
parts := strings.Split(opts.EnableParameters, "mask:")
if len(parts) > 1 {
maskPath := strings.TrimSpace(parts[1])
if maskPath != "" {
maskImage = C.CString(maskPath)
defer C.free(unsafe.Pointer(maskImage))
maskImage = maskPath
}
}
}
}
// Handle reference images
var refImages **C.char
var refImagesCount C.int
if len(opts.RefImages) > 0 {
refImagesCount = C.int(len(opts.RefImages))
// Allocate array of C strings
size := C.size_t(unsafe.Sizeof((*C.char)(nil)))
refImages = (**C.char)(C.malloc((C.size_t(len(opts.RefImages)) + 1) * size))
view := (*[1 << 30]*C.char)(unsafe.Pointer(refImages))[0 : len(opts.RefImages)+1 : len(opts.RefImages)+1]
for i, refImagePath := range opts.RefImages {
view[i] = C.CString(refImagePath)
defer C.free(unsafe.Pointer(view[i]))
}
view[len(opts.RefImages)] = nil
}
refImagesCount := len(opts.RefImages)
refImages := make([]string, refImagesCount, refImagesCount+1)
copy(refImages, opts.RefImages)
*(*uintptr)(unsafe.Add(unsafe.Pointer(&refImages), refImagesCount)) = 0
// Default strength for img2img (0.75 is a good default)
strength := C.float(0.75)
if opts.Src != "" {
// If we have a source image, use img2img mode
// You could also parse strength from EnableParameters if needed
strength = C.float(0.75)
}
strength := float32(0.75)
ret := C.gen_image(t, negative, C.int(opts.Width), C.int(opts.Height), C.int(opts.Step), C.int(opts.Seed), dst, C.float(sd.cfgScale), srcImage, strength, maskImage, refImages, refImagesCount)
ret := GenImage(t, negative, int(opts.Width), int(opts.Height), int(opts.Step), int64(opts.Seed), dst, sd.cfgScale, srcImage, strength, maskImage, refImages, refImagesCount)
if ret != 0 {
return fmt.Errorf("inference failed")
}

View File

@@ -1,8 +1,8 @@
#ifdef __cplusplus
extern "C" {
#endif
int load_model(char *model, char *model_path, char* options[], int threads, int diffusionModel);
int gen_image(char *text, char *negativeText, int width, int height, int steps, int seed, char *dst, float cfg_scale, char *src_image, float strength, char *mask_image, char **ref_images, int ref_images_count);
int load_model(const char *model, char *model_path, char* options[], int threads, int diffusionModel);
int gen_image(char *text, char *negativeText, int width, int height, int steps, int64_t seed, char *dst, float cfg_scale, char *src_image, float strength, char *mask_image, char **ref_images, int ref_images_count);
#ifdef __cplusplus
}
#endif
#endif

View File

@@ -1,9 +1,9 @@
package main
// Note: this is started internally by LocalAI and a server is allocated for each model
import (
"flag"
"github.com/ebitengine/purego"
grpc "github.com/mudler/LocalAI/pkg/grpc"
)
@@ -12,6 +12,14 @@ var (
)
func main() {
gosd, err := purego.Dlopen("./libgosd.so", purego.RTLD_NOW|purego.RTLD_GLOBAL)
if err != nil {
panic(err)
}
purego.RegisterLibFunc(&LoadModel, gosd, "load_model")
purego.RegisterLibFunc(&GenImage, gosd, "gen_image")
flag.Parse()
if err := grpc.StartServer(*addr, &SDGGML{}); err != nil {

View File

@@ -10,6 +10,7 @@ CURDIR=$(dirname "$(realpath $0)")
# Create lib directory
mkdir -p $CURDIR/package/lib
cp -avrf $CURDIR/libgosd.so $CURDIR/package/
cp -avrf $CURDIR/stablediffusion-ggml $CURDIR/package/
cp -rfv $CURDIR/run.sh $CURDIR/package/
@@ -47,6 +48,6 @@ else
exit 1
fi
echo "Packaging completed successfully"
echo "Packaging completed successfully"
ls -liah $CURDIR/package/
ls -liah $CURDIR/package/lib/
ls -liah $CURDIR/package/lib/

View File

@@ -6,7 +6,7 @@ CMAKE_ARGS?=
# whisper.cpp version
WHISPER_REPO?=https://github.com/ggml-org/whisper.cpp
WHISPER_CPP_VERSION?=4245c77b654cd384ad9f53a4a302be716b3e5861
WHISPER_CPP_VERSION?=fc45bb86251f774ef817e89878bb4c2636c8a58f
export WHISPER_CMAKE_ARGS?=-DBUILD_SHARED_LIBS=OFF
export WHISPER_DIR=$(abspath ./sources/whisper.cpp)

View File

@@ -127,6 +127,38 @@
nvidia: "cuda12-vllm"
amd: "rocm-vllm"
intel: "intel-vllm"
- &mlx
name: "mlx"
uri: "quay.io/go-skynet/local-ai-backends:latest-metal-darwin-arm64-mlx"
icon: https://avatars.githubusercontent.com/u/102832242?s=200&v=4
urls:
- https://github.com/ml-explore/mlx-lm
mirrors:
- localai/localai-backends:latest-metal-darwin-arm64-mlx
license: MIT
description: |
Run LLMs with MLX
tags:
- text-to-text
- LLM
- MLX
- &mlx-vlm
name: "mlx-vlm"
uri: "quay.io/go-skynet/local-ai-backends:latest-metal-darwin-arm64-mlx-vlm"
icon: https://avatars.githubusercontent.com/u/102832242?s=200&v=4
urls:
- https://github.com/ml-explore/mlx-vlm
mirrors:
- localai/localai-backends:latest-metal-darwin-arm64-mlx-vlm
license: MIT
description: |
Run Vision-Language Models with MLX
tags:
- text-to-text
- multimodal
- vision-language
- LLM
- MLX
- &rerankers
name: "rerankers"
alias: "rerankers"
@@ -151,6 +183,8 @@
nvidia: "cuda12-transformers"
intel: "intel-transformers"
amd: "rocm-transformers"
metal: "metal-transformers"
default: "cpu-transformers"
- &diffusers
name: "diffusers"
icon: https://raw.githubusercontent.com/huggingface/diffusers/main/docs/source/en/imgs/diffusers_library.jpg
@@ -168,6 +202,9 @@
nvidia: "cuda12-diffusers"
intel: "intel-diffusers"
amd: "rocm-diffusers"
nvidia-l4t: "nvidia-l4t-diffusers"
metal: "metal-diffusers"
default: "cpu-diffusers"
- &exllama2
name: "exllama2"
urls:
@@ -370,6 +407,16 @@
- text-to-speech
- TTS
license: apache-2.0
- !!merge <<: *mlx
name: "mlx-development"
uri: "quay.io/go-skynet/local-ai-backends:master-metal-darwin-arm64-mlx"
mirrors:
- localai/localai-backends:master-metal-darwin-arm64-mlx
- !!merge <<: *mlx-vlm
name: "mlx-vlm-development"
uri: "quay.io/go-skynet/local-ai-backends:master-metal-darwin-arm64-mlx-vlm"
mirrors:
- localai/localai-backends:master-metal-darwin-arm64-mlx-vlm
- !!merge <<: *kitten-tts
name: "kitten-tts-development"
uri: "quay.io/go-skynet/local-ai-backends:master-kitten-tts"
@@ -806,6 +853,28 @@
nvidia: "cuda12-transformers-development"
intel: "intel-transformers-development"
amd: "rocm-transformers-development"
default: "cpu-transformers-development"
metal: "metal-transformers-development"
- !!merge <<: *transformers
name: "cpu-transformers"
uri: "quay.io/go-skynet/local-ai-backends:latest-cpu-transformers"
mirrors:
- localai/localai-backends:latest-cpu-transformers
- !!merge <<: *transformers
name: "cpu-transformers-development"
uri: "quay.io/go-skynet/local-ai-backends:master-cpu-transformers"
mirrors:
- localai/localai-backends:master-cpu-transformers
- !!merge <<: *transformers
name: "metal-transformers"
uri: "quay.io/go-skynet/local-ai-backends:latest-metal-darwin-arm64-transformers"
mirrors:
- localai/localai-backends:latest-metal-darwin-arm64-transformers
- !!merge <<: *transformers
name: "metal-transformers-development"
uri: "quay.io/go-skynet/local-ai-backends:master-metal-darwin-arm64-transformers"
mirrors:
- localai/localai-backends:master-metal-darwin-arm64-transformers
- !!merge <<: *transformers
name: "cuda12-transformers"
uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-nvidia-cuda-12-transformers"
@@ -853,6 +922,29 @@
nvidia: "cuda12-diffusers-development"
intel: "intel-diffusers-development"
amd: "rocm-diffusers-development"
nvidia-l4t: "nvidia-l4t-diffusers-development"
metal: "metal-diffusers-development"
default: "cpu-diffusers-development"
- !!merge <<: *diffusers
name: "cpu-diffusers"
uri: "quay.io/go-skynet/local-ai-backends:latest-cpu-diffusers"
mirrors:
- localai/localai-backends:latest-cpu-diffusers
- !!merge <<: *diffusers
name: "cpu-diffusers-development"
uri: "quay.io/go-skynet/local-ai-backends:master-cpu-diffusers"
mirrors:
- localai/localai-backends:master-cpu-diffusers
- !!merge <<: *diffusers
name: "nvidia-l4t-diffusers"
uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-nvidia-l4t-diffusers"
mirrors:
- localai/localai-backends:latest-gpu-nvidia-l4t-diffusers
- !!merge <<: *diffusers
name: "nvidia-l4t-diffusers-development"
uri: "quay.io/go-skynet/local-ai-backends:master-gpu-nvidia-l4t-diffusers"
mirrors:
- localai/localai-backends:master-gpu-nvidia-l4t-diffusers
- !!merge <<: *diffusers
name: "cuda12-diffusers"
uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-nvidia-cuda-12-diffusers"
@@ -893,6 +985,16 @@
uri: "quay.io/go-skynet/local-ai-backends:master-gpu-intel-diffusers"
mirrors:
- localai/localai-backends:master-gpu-intel-diffusers
- !!merge <<: *diffusers
name: "metal-diffusers"
uri: "quay.io/go-skynet/local-ai-backends:latest-metal-darwin-arm64-diffusers"
mirrors:
- localai/localai-backends:latest-metal-darwin-arm64-diffusers
- !!merge <<: *diffusers
name: "metal-diffusers-development"
uri: "quay.io/go-skynet/local-ai-backends:master-metal-darwin-arm64-diffusers"
mirrors:
- localai/localai-backends:master-metal-darwin-arm64-diffusers
## exllama2
- !!merge <<: *exllama2
name: "exllama2-development"

View File

@@ -1,29 +1,23 @@
.PHONY: ttsbark
ttsbark: protogen
ttsbark:
bash install.sh
.PHONY: run
run: protogen
run: ttsbark
@echo "Running bark..."
bash run.sh
@echo "bark run."
.PHONY: test
test: protogen
test: ttsbark
@echo "Testing bark..."
bash test.sh
@echo "bark tested."
.PHONY: protogen
protogen: backend_pb2_grpc.py backend_pb2.py
.PHONY: protogen-clean
protogen-clean:
$(RM) backend_pb2_grpc.py backend_pb2.py
backend_pb2_grpc.py backend_pb2.py:
python3 -m grpc_tools.protoc -I../.. -I./ --python_out=. --grpc_python_out=. backend.proto
.PHONY: clean
clean: protogen-clean
rm -rf venv __pycache__

View File

@@ -1,5 +1,5 @@
--extra-index-url https://pytorch-extension.intel.com/release-whl/stable/xpu/us/
intel-extension-for-pytorch==2.3.110+xpu
intel-extension-for-pytorch==2.8.10+xpu
torch==2.3.1+cxx11.abi
torchaudio==2.3.1+cxx11.abi
oneccl_bind_pt==2.3.100+xpu

View File

@@ -1,4 +1,4 @@
bark==0.1.5
grpcio==1.71.0
grpcio==1.74.0
protobuf
certifi

View File

@@ -1,29 +1,23 @@
.PHONY: coqui
coqui: protogen
.PHONY: chatterbox
chatterbox:
bash install.sh
.PHONY: run
run: protogen
run: chatterbox
@echo "Running coqui..."
bash run.sh
@echo "coqui run."
.PHONY: test
test: protogen
test: chatterbox
@echo "Testing coqui..."
bash test.sh
@echo "coqui tested."
.PHONY: protogen
protogen: backend_pb2_grpc.py backend_pb2.py
.PHONY: protogen-clean
protogen-clean:
$(RM) backend_pb2_grpc.py backend_pb2.py
backend_pb2_grpc.py backend_pb2.py:
python3 -m grpc_tools.protoc -I../.. -I./ --python_out=. --grpc_python_out=. backend.proto
.PHONY: clean
clean: protogen-clean
rm -rf venv __pycache__

View File

@@ -41,7 +41,9 @@ class BackendServicer(backend_pb2_grpc.BackendServicer):
else:
print("CUDA is not available", file=sys.stderr)
device = "cpu"
mps_available = hasattr(torch.backends, "mps") and torch.backends.mps.is_available()
if mps_available:
device = "mps"
if not torch.cuda.is_available() and request.CUDA:
return backend_pb2.Result(success=False, message="CUDA is not available")

View File

@@ -1,6 +1,6 @@
#!/usr/bin/env bash
set -euo pipefail
# init handles the setup of the library
#
# use the library by adding the following line to a script:
# source $(dirname $0)/../common/libbackend.sh
@@ -17,29 +17,182 @@
# LIMIT_TARGETS="cublas12"
# source $(dirname $0)/../common/libbackend.sh
#
# You can switch between uv (the default) and plain pip installation by setting USE_PIP:
# USE_PIP=true source $(dirname $0)/../common/libbackend.sh
#
# ===================== user-configurable defaults =====================
PYTHON_VERSION="${PYTHON_VERSION:-3.10}" # e.g. 3.10 / 3.11 / 3.12 / 3.13
PYTHON_PATCH="${PYTHON_PATCH:-18}" # e.g. 18 -> 3.10.18 ; 13 -> 3.11.13
PY_STANDALONE_TAG="${PY_STANDALONE_TAG:-20250818}" # release tag date
# Enable/disable bundling of a portable Python build
PORTABLE_PYTHON="${PORTABLE_PYTHON:-false}"
PYTHON_VERSION="3.10"
# If you want to fully pin the filename (including tuned CPU targets), set:
# PORTABLE_PY_FILENAME="cpython-3.10.18+20250818-x86_64_v3-unknown-linux-gnu-install_only.tar.gz"
: "${PORTABLE_PY_FILENAME:=}"
: "${PORTABLE_PY_SHA256:=}" # optional; if set we verify the download
# =====================================================================
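# Example usage (a sketch based on the defaults above; backend/Dockerfile.python drives the same path):
#   PORTABLE_PYTHON=true make                                        # bundle a portable CPython under ${EDIR}/python
#   PYTHON_VERSION=3.11 PYTHON_PATCH=13 PORTABLE_PYTHON=true make    # pin a different release (3.11.13)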
# Default to uv if USE_PIP is not set
if [ "x${USE_PIP:-}" == "x" ]; then
USE_PIP=false
fi
# ----------------------- helpers -----------------------
function _is_musl() {
# detect musl (Alpine, etc)
if command -v ldd >/dev/null 2>&1; then
ldd --version 2>&1 | grep -qi musl && return 0
fi
# busybox-ish fallback
if command -v getconf >/dev/null 2>&1; then
getconf GNU_LIBC_VERSION >/dev/null 2>&1 || return 0
fi
return 1
}
function _triple() {
local os="" arch="" libc="gnu"
case "$(uname -s)" in
Linux*) os="unknown-linux" ;;
Darwin*) os="apple-darwin" ;;
MINGW*|MSYS*|CYGWIN*) os="pc-windows-msvc" ;; # best-effort for Git Bash
*) echo "Unsupported OS $(uname -s)"; exit 1;;
esac
case "$(uname -m)" in
x86_64) arch="x86_64" ;;
aarch64|arm64) arch="aarch64" ;;
armv7l) arch="armv7" ;;
i686|i386) arch="i686" ;;
ppc64le) arch="ppc64le" ;;
s390x) arch="s390x" ;;
riscv64) arch="riscv64" ;;
*) echo "Unsupported arch $(uname -m)"; exit 1;;
esac
if [[ "$os" == "unknown-linux" ]]; then
if _is_musl; then
libc="musl"
else
libc="gnu"
fi
echo "${arch}-${os}-${libc}"
else
echo "${arch}-${os}"
fi
}
function _portable_dir() {
echo "${EDIR}/python"
}
function _portable_bin() {
# python-build-standalone puts python in ./bin
echo "$(_portable_dir)/bin"
}
function _portable_python() {
if [ -x "$(_portable_bin)/python3" ]; then
echo "$(_portable_bin)/python3"
else
echo "$(_portable_bin)/python"
fi
}
# macOS loader env for the portable CPython
_macosPortableEnv() {
if [ "$(uname -s)" = "Darwin" ]; then
export DYLD_LIBRARY_PATH="$(_portable_dir)/lib${DYLD_LIBRARY_PATH:+:${DYLD_LIBRARY_PATH}}"
export DYLD_FALLBACK_LIBRARY_PATH="$(_portable_dir)/lib${DYLD_FALLBACK_LIBRARY_PATH:+:${DYLD_FALLBACK_LIBRARY_PATH}}"
fi
}
# Good hygiene on macOS for downloaded/extracted trees
_unquarantinePortablePython() {
if [ "$(uname -s)" = "Darwin" ]; then
command -v xattr >/dev/null 2>&1 && xattr -dr com.apple.quarantine "$(_portable_dir)" || true
fi
}
# ------------------ ### PORTABLE PYTHON ------------------
function ensurePortablePython() {
local pdir="$(_portable_dir)"
local pbin="$(_portable_bin)"
local pyexe
if [ -x "${pbin}/python3" ] || [ -x "${pbin}/python" ]; then
_macosPortableEnv
return 0
fi
mkdir -p "${pdir}"
local triple="$(_triple)"
local full_ver="${PYTHON_VERSION}.${PYTHON_PATCH}"
local fn=""
if [ -n "${PORTABLE_PY_FILENAME}" ]; then
fn="${PORTABLE_PY_FILENAME}"
else
# generic asset name: cpython-<full_ver>+<tag>-<triple>-install_only.tar.gz
fn="cpython-${full_ver}+${PY_STANDALONE_TAG}-${triple}-install_only.tar.gz"
fi
local url="https://github.com/astral-sh/python-build-standalone/releases/download/${PY_STANDALONE_TAG}/${fn}"
local tmp="${pdir}/${fn}"
echo "Downloading portable Python: ${fn}"
# curl with retries; fall back to wget if needed
if command -v curl >/dev/null 2>&1; then
curl -L --fail --retry 3 --retry-delay 1 -o "${tmp}" "${url}"
else
wget -O "${tmp}" "${url}"
fi
if [ -n "${PORTABLE_PY_SHA256}" ]; then
echo "${PORTABLE_PY_SHA256} ${tmp}" | sha256sum -c -
fi
echo "Extracting ${fn} -> ${pdir}"
# always a .tar.gz (we purposely choose install_only)
tar -xzf "${tmp}" -C "${pdir}"
rm -f "${tmp}"
# Some archives nest a directory; if so, flatten to ${pdir}
# Find the first dir with a 'bin/python*'
local inner
inner="$(find "${pdir}" -type f -path "*/bin/python*" -maxdepth 3 2>/dev/null | head -n1 || true)"
if [ -n "${inner}" ]; then
local inner_root
inner_root="$(dirname "$(dirname "${inner}")")" # .../bin -> root
if [ "${inner_root}" != "${pdir}" ]; then
# move contents up one level
shopt -s dotglob
mv "${inner_root}/"* "${pdir}/"
rm -rf "${inner_root}"
shopt -u dotglob
fi
fi
_unquarantinePortablePython
_macosPortableEnv
# Make sure it's runnable
pyexe="$(_portable_python)"
"${pyexe}" -V
}
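# For illustration only: with the defaults above on an x86_64 glibc Linux host, ensurePortablePython resolves
#   https://github.com/astral-sh/python-build-standalone/releases/download/20250818/cpython-3.10.18+20250818-x86_64-unknown-linux-gnu-install_only.tar.gz
# and extracts it into ${EDIR}/python.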
# init handles the setup of the library
function init() {
# Name of the backend (directory name)
BACKEND_NAME=${PWD##*/}
# Path where all backends files are
MY_DIR=$(realpath `dirname $0`)
# Build type
MY_DIR=$(realpath "$(dirname "$0")")
BUILD_PROFILE=$(getBuildProfile)
# Environment directory
EDIR=${MY_DIR}
# Allow to specify a custom env dir for shared environments
if [ "x${ENV_DIR}" != "x" ]; then
if [ "x${ENV_DIR:-}" != "x" ]; then
EDIR=${ENV_DIR}
fi
# If a backend has defined a list of valid build profiles...
if [ ! -z "${LIMIT_TARGETS}" ]; then
if [ ! -z "${LIMIT_TARGETS:-}" ]; then
isValidTarget=$(checkTargets ${LIMIT_TARGETS})
if [ ${isValidTarget} != true ]; then
echo "${BACKEND_NAME} can only be used on the following targets: ${LIMIT_TARGETS}"
@@ -50,6 +203,7 @@ function init() {
echo "Initializing libbackend for ${BACKEND_NAME}"
}
# getBuildProfile will inspect the system to determine which build profile is appropriate:
# returns one of the following:
# - cublas11
@@ -57,53 +211,139 @@ function init() {
# - hipblas
# - intel
function getBuildProfile() {
# First check if we are a cublas build, and if so report the correct build profile
if [ x"${BUILD_TYPE}" == "xcublas" ]; then
if [ ! -z ${CUDA_MAJOR_VERSION} ]; then
# If we have been given a CUDA version, we trust it
if [ x"${BUILD_TYPE:-}" == "xcublas" ]; then
if [ ! -z "${CUDA_MAJOR_VERSION:-}" ]; then
echo ${BUILD_TYPE}${CUDA_MAJOR_VERSION}
else
# We don't know what version of cuda we are, so we report ourselves as a generic cublas
echo ${BUILD_TYPE}
fi
return 0
fi
# If /opt/intel exists, then we are doing an intel/ARC build
if [ -d "/opt/intel" ]; then
echo "intel"
return 0
fi
# For any other value of BUILD_TYPE, no special handling/discovery is needed
if [ ! -z ${BUILD_TYPE} ]; then
if [ -n "${BUILD_TYPE:-}" ]; then
echo ${BUILD_TYPE}
return 0
fi
# If there is no BUILD_TYPE set at all, report a build profile of cpu; we aren't building for any GPU targets
echo "cpu"
}
# Make the venv relocatable:
# - rewrite venv/bin/python{,3} to relative symlinks into $(_portable_dir)
# - normalize entrypoint shebangs to /usr/bin/env python3
_makeVenvPortable() {
local venv_dir="${EDIR}/venv"
local vbin="${venv_dir}/bin"
[ -d "${vbin}" ] || return 0
# 1) Replace python symlinks with relative ones to ../../python/bin/python3
# (venv/bin -> venv -> EDIR -> python/bin)
local rel_py='../../python/bin/python3'
for name in python3 python; do
if [ -e "${vbin}/${name}" ] || [ -L "${vbin}/${name}" ]; then
rm -f "${vbin}/${name}"
fi
done
ln -s "${rel_py}" "${vbin}/python3"
ln -s "python3" "${vbin}/python"
# 2) Rewrite shebangs of entry points to use env, so the venv is relocatable
# Only touch text files that start with #! and reference the current venv.
local ve_abs="${vbin}/python"
local sed_i=(sed -i)
# macOS/BSD sed needs a backup suffix; GNU sed doesn't. Make it portable:
if sed --version >/dev/null 2>&1; then
sed_i=(sed -i)
else
sed_i=(sed -i '')
fi
for f in "${vbin}"/*; do
[ -f "$f" ] || continue
# Fast path: check first two bytes (#!)
head -c2 "$f" 2>/dev/null | grep -q '^#!' || continue
# Only rewrite if the shebang mentions the (absolute) venv python
if head -n1 "$f" | grep -Fq "${ve_abs}"; then
"${sed_i[@]}" '1s|^#!.*$|#!/usr/bin/env python3|' "$f"
chmod +x "$f" 2>/dev/null || true
fi
done
}
# ensureVenv makes sure that the venv for the backend both exists, and is activated.
#
# This function is idempotent, so you can call it as many times as you want and it will
# always result in an activated virtual environment
function ensureVenv() {
local interpreter=""
if [ "x${PORTABLE_PYTHON}" == "xtrue" ]; then
ensurePortablePython
interpreter="$(_portable_python)"
else
# Prefer system python${PYTHON_VERSION}, else python3, else fall back to bundled
if command -v python${PYTHON_VERSION} >/dev/null 2>&1; then
interpreter="python${PYTHON_VERSION}"
elif command -v python3 >/dev/null 2>&1; then
interpreter="python3"
else
echo "No suitable system Python found, bootstrapping portable build..."
ensurePortablePython
interpreter="$(_portable_python)"
fi
fi
if [ ! -d "${EDIR}/venv" ]; then
uv venv --python ${PYTHON_VERSION} ${EDIR}/venv
echo "virtualenv created"
if [ "x${USE_PIP}" == "xtrue" ]; then
"${interpreter}" -m venv --copies "${EDIR}/venv"
source "${EDIR}/venv/bin/activate"
"${interpreter}" -m pip install --upgrade pip
else
if [ "x${PORTABLE_PYTHON}" == "xtrue" ]; then
uv venv --python "${interpreter}" "${EDIR}/venv"
else
uv venv --python "${PYTHON_VERSION}" "${EDIR}/venv"
fi
fi
if [ "x${PORTABLE_PYTHON}" == "xtrue" ]; then
_makeVenvPortable
fi
fi
# Source if we are not already in a Virtual env
if [ "x${VIRTUAL_ENV}" != "x${EDIR}/venv" ]; then
source ${EDIR}/venv/bin/activate
echo "virtualenv activated"
# We call it here to make sure that when we source a venv we can still use python as expected
if [ -x "$(_portable_python)" ]; then
_macosPortableEnv
fi
echo "activated virtualenv has been ensured"
if [ "x${VIRTUAL_ENV:-}" != "x${EDIR}/venv" ]; then
source "${EDIR}/venv/bin/activate"
fi
}
function runProtogen() {
ensureVenv
if [ "x${USE_PIP}" == "xtrue" ]; then
pip install grpcio-tools
else
uv pip install grpcio-tools
fi
pushd "${EDIR}" >/dev/null
# use the venv python (ensures correct interpreter & sys.path)
python -m grpc_tools.protoc -I../../ -I./ --python_out=. --grpc_python_out=. backend.proto
popd >/dev/null
}
# installRequirements looks for several requirements files and if they exist runs the install for them in order
#
# - requirements-install.txt
@@ -111,7 +351,7 @@ function ensureVenv() {
# - requirements-${BUILD_TYPE}.txt
# - requirements-${BUILD_PROFILE}.txt
#
# BUILD_PROFILE is a pore specific version of BUILD_TYPE, ex: cuda-11 or cuda-12
# BUILD_PROFILE is a more specific version of BUILD_TYPE, ex: cuda-11 or cuda-12
# it can also include some options that we do not have BUILD_TYPES for, ex: intel
#
# NOTE: for BUILD_PROFILE==intel, this function does NOT automatically use the Intel python package index.
@@ -127,36 +367,36 @@ function ensureVenv() {
# installRequirements
function installRequirements() {
ensureVenv
# These are the requirements files we will attempt to install, in order
declare -a requirementFiles=(
"${EDIR}/requirements-install.txt"
"${EDIR}/requirements.txt"
"${EDIR}/requirements-${BUILD_TYPE}.txt"
"${EDIR}/requirements-${BUILD_TYPE:-}.txt"
)
if [ "x${BUILD_TYPE}" != "x${BUILD_PROFILE}" ]; then
if [ "x${BUILD_TYPE:-}" != "x${BUILD_PROFILE}" ]; then
requirementFiles+=("${EDIR}/requirements-${BUILD_PROFILE}.txt")
fi
# if BUILD_TYPE is empty, we are a CPU build, so we should try to install the CPU requirements
if [ "x${BUILD_TYPE}" == "x" ]; then
if [ "x${BUILD_TYPE:-}" == "x" ]; then
requirementFiles+=("${EDIR}/requirements-cpu.txt")
fi
requirementFiles+=("${EDIR}/requirements-after.txt")
if [ "x${BUILD_TYPE}" != "x${BUILD_PROFILE}" ]; then
if [ "x${BUILD_TYPE:-}" != "x${BUILD_PROFILE}" ]; then
requirementFiles+=("${EDIR}/requirements-${BUILD_PROFILE}-after.txt")
fi
for reqFile in ${requirementFiles[@]}; do
if [ -f ${reqFile} ]; then
if [ -f "${reqFile}" ]; then
echo "starting requirements install for ${reqFile}"
uv pip install ${EXTRA_PIP_INSTALL_FLAGS} --requirement ${reqFile}
if [ "x${USE_PIP}" == "xtrue" ]; then
pip install ${EXTRA_PIP_INSTALL_FLAGS:-} --requirement "${reqFile}"
else
uv pip install ${EXTRA_PIP_INSTALL_FLAGS:-} --requirement "${reqFile}"
fi
echo "finished requirements install for ${reqFile}"
fi
done
runProtogen
}
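To make the lookup order above easier to follow, this small Python sketch lists the requirements files installRequirements considers, in order, for a given BUILD_TYPE/BUILD_PROFILE pair; the helper and the example values ("cuda"/"cuda-12") are illustrative placeholders, not code used by the backends:

from typing import List

def requirement_files(edir: str, build_type: str, build_profile: str) -> List[str]:
    # Mirrors the ordering implemented by installRequirements above.
    files = [
        f"{edir}/requirements-install.txt",
        f"{edir}/requirements.txt",
        f"{edir}/requirements-{build_type}.txt",
    ]
    if build_type != build_profile:
        files.append(f"{edir}/requirements-{build_profile}.txt")
    if build_type == "":
        # An empty BUILD_TYPE means a CPU build.
        files.append(f"{edir}/requirements-cpu.txt")
    files.append(f"{edir}/requirements-after.txt")
    if build_type != build_profile:
        files.append(f"{edir}/requirements-{build_profile}-after.txt")
    return files

print(requirement_files("/backend", "cuda", "cuda-12"))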
# startBackend discovers and runs the backend GRPC server
@@ -174,18 +414,18 @@ function installRequirements() {
# - ${BACKEND_NAME}.py
function startBackend() {
ensureVenv
if [ ! -z ${BACKEND_FILE} ]; then
exec ${EDIR}/venv/bin/python ${BACKEND_FILE} $@
if [ ! -z "${BACKEND_FILE:-}" ]; then
exec "${EDIR}/venv/bin/python" "${BACKEND_FILE}" "$@"
elif [ -e "${MY_DIR}/server.py" ]; then
exec ${EDIR}/venv/bin/python ${MY_DIR}/server.py $@
exec "${EDIR}/venv/bin/python" "${MY_DIR}/server.py" "$@"
elif [ -e "${MY_DIR}/backend.py" ]; then
exec ${EDIR}/venv/bin/python ${MY_DIR}/backend.py $@
exec "${EDIR}/venv/bin/python" "${MY_DIR}/backend.py" "$@"
elif [ -e "${MY_DIR}/${BACKEND_NAME}.py" ]; then
exec ${EDIR}/venv/bin/python ${MY_DIR}/${BACKEND_NAME}.py $@
exec "${EDIR}/venv/bin/python" "${MY_DIR}/${BACKEND_NAME}.py" "$@"
fi
}
# runUnittests discovers and runs python unittests
#
# You can specify a specific test file to use by setting TEST_FILE before calling runUnittests.
@@ -198,41 +438,36 @@ function startBackend() {
# by default a file named test.py in the backend's directory will be used
function runUnittests() {
ensureVenv
if [ ! -z ${TEST_FILE} ]; then
testDir=$(dirname `realpath ${TEST_FILE}`)
testFile=$(basename ${TEST_FILE})
pushd ${testDir}
python -m unittest ${testFile}
popd
if [ ! -z "${TEST_FILE:-}" ]; then
testDir=$(dirname "$(realpath "${TEST_FILE}")")
testFile=$(basename "${TEST_FILE}")
pushd "${testDir}" >/dev/null
python -m unittest "${testFile}"
popd >/dev/null
elif [ -f "${MY_DIR}/test.py" ]; then
pushd ${MY_DIR}
pushd "${MY_DIR}" >/dev/null
python -m unittest test.py
popd
popd >/dev/null
else
echo "no tests defined for ${BACKEND_NAME}"
fi
}
##################################################################################
# Below here are helper functions not intended to be used outside of the library #
##################################################################################
# checkTargets determines if the current BUILD_TYPE or BUILD_PROFILE is in a list of valid targets
function checkTargets() {
# Collect all provided targets into a variable and...
targets=$@
# ...convert it into an array
declare -a targets=($targets)
for target in ${targets[@]}; do
if [ "x${BUILD_TYPE}" == "x${target}" ]; then
echo true
return 0
if [ "x${BUILD_TYPE:-}" == "x${target}" ]; then
echo true; return 0
fi
if [ "x${BUILD_PROFILE}" == "x${target}" ]; then
echo true
return 0
echo true; return 0
fi
done
echo false


@@ -3,18 +3,11 @@
.PHONY: install
install:
bash install.sh
$(MAKE) protogen
.PHONY: protogen
protogen: backend_pb2_grpc.py backend_pb2.py
.PHONY: protogen-clean
protogen-clean:
$(RM) backend_pb2_grpc.py backend_pb2.py
backend_pb2_grpc.py backend_pb2.py:
bash protogen.sh
.PHONY: clean
clean: protogen-clean
rm -rf venv __pycache__


@@ -8,6 +8,4 @@ else
source $backend_dir/../common/libbackend.sh
fi
ensureVenv
python3 -m grpc_tools.protoc -I../.. -I./ --python_out=. --grpc_python_out=. backend.proto
runProtogen


@@ -1,5 +1,5 @@
--extra-index-url https://pytorch-extension.intel.com/release-whl/stable/xpu/us/
intel-extension-for-pytorch==2.3.110+xpu
torch==2.3.1+cxx11.abi
oneccl_bind_pt==2.3.100+xpu
intel-extension-for-pytorch==2.8.10+xpu
torch==2.8.0
oneccl_bind_pt==2.8.0+xpu
optimum[openvino]


@@ -1,3 +1,3 @@
grpcio==1.71.0
grpcio==1.74.0
protobuf
grpcio-tools


@@ -1,29 +1,23 @@
.PHONY: coqui
coqui: protogen
coqui:
bash install.sh
.PHONY: run
run: protogen
run: coqui
@echo "Running coqui..."
bash run.sh
@echo "coqui run."
.PHONY: test
test: protogen
test: coqui
@echo "Testing coqui..."
bash test.sh
@echo "coqui tested."
.PHONY: protogen
protogen: backend_pb2_grpc.py backend_pb2.py
.PHONY: protogen-clean
protogen-clean:
$(RM) backend_pb2_grpc.py backend_pb2.py
backend_pb2_grpc.py backend_pb2.py:
python3 -m grpc_tools.protoc -I../.. -I./ --python_out=. --grpc_python_out=. backend.proto
.PHONY: clean
clean: protogen-clean
rm -rf venv __pycache__


@@ -40,7 +40,9 @@ class BackendServicer(backend_pb2_grpc.BackendServicer):
else:
print("CUDA is not available", file=sys.stderr)
device = "cpu"
mps_available = hasattr(torch.backends, "mps") and torch.backends.mps.is_available()
if mps_available:
device = "mps"
if not torch.cuda.is_available() and request.CUDA:
return backend_pb2.Result(success=False, message="CUDA is not available")
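This change (repeated in several of the Python backends in this diff) extends device selection so that Apple's MPS backend is picked when CUDA is absent. A compact, illustrative sketch of the resulting behaviour, simplified on the assumption that CUDA and MPS never coexist on the same machine; it is not the backends' actual helper:

import torch

def pick_device(cuda_requested: bool = False) -> str:
    # Prefer CUDA, then Apple MPS, then plain CPU.
    if torch.cuda.is_available():
        return "cuda"
    if hasattr(torch.backends, "mps") and torch.backends.mps.is_available():
        return "mps"
    if cuda_requested:
        # Mirrors the error path when CUDA was requested but is missing.
        raise RuntimeError("CUDA is not available")
    return "cpu"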


@@ -1,4 +1,4 @@
grpcio==1.71.0
grpcio==1.74.0
protobuf
certifi
packaging==24.1


@@ -12,28 +12,22 @@ export SKIP_CONDA=1
endif
.PHONY: diffusers
diffusers: protogen
diffusers:
bash install.sh
.PHONY: run
run: protogen
run: diffusers
@echo "Running diffusers..."
bash run.sh
@echo "Diffusers run."
test: protogen
test: diffusers
bash test.sh
.PHONY: protogen
protogen: backend_pb2_grpc.py backend_pb2.py
.PHONY: protogen-clean
protogen-clean:
$(RM) backend_pb2_grpc.py backend_pb2.py
backend_pb2_grpc.py backend_pb2.py:
python3 -m grpc_tools.protoc -I../.. -I./ --python_out=. --grpc_python_out=. backend.proto
.PHONY: clean
clean: protogen-clean
rm -rf venv __pycache__


@@ -18,7 +18,7 @@ import backend_pb2_grpc
import grpc
from diffusers import SanaPipeline, StableDiffusion3Pipeline, StableDiffusionXLPipeline, StableDiffusionDepth2ImgPipeline, DPMSolverMultistepScheduler, StableDiffusionPipeline, DiffusionPipeline, \
EulerAncestralDiscreteScheduler, FluxPipeline, FluxTransformer2DModel
EulerAncestralDiscreteScheduler, FluxPipeline, FluxTransformer2DModel, QwenImageEditPipeline
from diffusers import StableDiffusionImg2ImgPipeline, AutoPipelineForText2Image, ControlNetModel, StableVideoDiffusionPipeline, Lumina2Text2ImgPipeline
from diffusers.pipelines.stable_diffusion import safety_checker
from diffusers.utils import load_image, export_to_video
@@ -263,6 +263,9 @@ class BackendServicer(backend_pb2_grpc.BackendServicer):
elif request.PipelineType == "DiffusionPipeline":
self.pipe = DiffusionPipeline.from_pretrained(request.Model,
torch_dtype=torchType)
elif request.PipelineType == "QwenImageEditPipeline":
self.pipe = QwenImageEditPipeline.from_pretrained(request.Model,
torch_dtype=torchType)
elif request.PipelineType == "VideoDiffusionPipeline":
self.txt2vid = True
self.pipe = DiffusionPipeline.from_pretrained(request.Model,
@@ -365,6 +368,9 @@ class BackendServicer(backend_pb2_grpc.BackendServicer):
device = "cpu" if not request.CUDA else "cuda"
if XPU:
device = "xpu"
mps_available = hasattr(torch.backends, "mps") and torch.backends.mps.is_available()
if mps_available:
device = "mps"
self.device = device
if request.LoraAdapter:
# Check if it's a local file and not a directory (we load LoRA differently for a safetensors file)
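For context, the new QwenImageEditPipeline branch follows the same dispatch pattern as the existing pipeline types: the request's PipelineType string selects which diffusers class is instantiated. A trimmed, illustrative sketch of that dispatch; the mapping helper is hypothetical and only a subset of the classes imported above is shown:

import torch
from diffusers import (
    DiffusionPipeline,
    FluxPipeline,
    QwenImageEditPipeline,
    StableDiffusion3Pipeline,
    StableDiffusionXLPipeline,
)

PIPELINES = {
    "DiffusionPipeline": DiffusionPipeline,
    "FluxPipeline": FluxPipeline,
    "QwenImageEditPipeline": QwenImageEditPipeline,
    "StableDiffusion3Pipeline": StableDiffusion3Pipeline,
    "StableDiffusionXLPipeline": StableDiffusionXLPipeline,
}

def load_pipeline(pipeline_type: str, model: str, torch_dtype=torch.float16):
    # Select the diffusers class by name, as the backend's if/elif chain does.
    cls = PIPELINES.get(pipeline_type, DiffusionPipeline)
    return cls.from_pretrained(model, torch_dtype=torch_dtype)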


@@ -1,6 +1,8 @@
--extra-index-url https://download.pytorch.org/whl/cpu
git+https://github.com/huggingface/diffusers
opencv-python
transformers
torchvision==0.22.1
accelerate
compel
peft


@@ -1,5 +1,6 @@
--extra-index-url https://download.pytorch.org/whl/cu118
torch==2.7.1+cu118
torchvision==0.22.1+cu118
git+https://github.com/huggingface/diffusers
opencv-python
transformers


@@ -1,4 +1,5 @@
torch==2.7.1
torchvision==0.22.1
git+https://github.com/huggingface/diffusers
opencv-python
transformers


@@ -0,0 +1,11 @@
--extra-index-url https://pypi.jetson-ai-lab.io/jp6/cu126/
torch
diffusers
transformers
accelerate
compel
peft
optimum-quanto
numpy<2
sentencepiece
torchvision


@@ -0,0 +1,10 @@
torch==2.7.1
torchvision==0.22.1
git+https://github.com/huggingface/diffusers
opencv-python
transformers
accelerate
compel
peft
sentencepiece
optimum-quanto


@@ -1,5 +1,5 @@
setuptools
grpcio==1.71.0
grpcio==1.74.0
pillow
protobuf
certifi


@@ -12,4 +12,6 @@ if [ -d "/opt/intel" ]; then
export XPU=1
fi
export PYTORCH_ENABLE_MPS_FALLBACK=1
startBackend $@


@@ -1,23 +1,17 @@
.PHONY: exllama2
exllama2: protogen
exllama2:
bash install.sh
.PHONY: run
run: protogen
run: exllama2
@echo "Running exllama2..."
bash run.sh
@echo "exllama2 run."
.PHONY: protogen
protogen: backend_pb2_grpc.py backend_pb2.py
.PHONY: protogen-clean
protogen-clean:
$(RM) backend_pb2_grpc.py backend_pb2.py
backend_pb2_grpc.py backend_pb2.py:
python3 -m grpc_tools.protoc -I../.. -I./ --python_out=. --grpc_python_out=. backend.proto
.PHONY: clean
clean: protogen-clean
$(RM) -r venv source __pycache__


@@ -1,4 +1,4 @@
grpcio==1.71.0
grpcio==1.74.0
protobuf
certifi
wheel


@@ -3,18 +3,11 @@
.PHONY: install
install:
bash install.sh
$(MAKE) protogen
.PHONY: protogen
protogen: backend_pb2_grpc.py backend_pb2.py
.PHONY: protogen-clean
protogen-clean:
$(RM) backend_pb2_grpc.py backend_pb2.py
backend_pb2_grpc.py backend_pb2.py:
bash protogen.sh
.PHONY: clean
clean: protogen-clean
rm -rf venv __pycache__


@@ -10,7 +10,7 @@ import sys
import os
import backend_pb2
import backend_pb2_grpc
import torch
from faster_whisper import WhisperModel
import grpc
@@ -35,7 +35,9 @@ class BackendServicer(backend_pb2_grpc.BackendServicer):
# device = "cuda" if request.CUDA else "cpu"
if request.CUDA:
device = "cuda"
mps_available = hasattr(torch.backends, "mps") and torch.backends.mps.is_available()
if mps_available:
device = "mps"
try:
print("Preparing models, please wait", file=sys.stderr)
self.model = WhisperModel(request.Model, device=device, compute_type="float16")


@@ -1,29 +1,23 @@
.PHONY: kitten-tts
kitten-tts: protogen
kitten-tts:
bash install.sh
.PHONY: run
run: protogen
run: kitten-tts
@echo "Running kitten-tts..."
bash run.sh
@echo "kitten-tts run."
.PHONY: test
test: protogen
test: kitten-tts
@echo "Testing kitten-tts..."
bash test.sh
@echo "kitten-tts tested."
.PHONY: protogen
protogen: backend_pb2_grpc.py backend_pb2.py
.PHONY: protogen-clean
protogen-clean:
$(RM) backend_pb2_grpc.py backend_pb2.py
backend_pb2_grpc.py backend_pb2.py:
python3 -m grpc_tools.protoc -I../.. -I./ --python_out=. --grpc_python_out=. backend.proto
.PHONY: clean
clean: protogen-clean
rm -rf venv __pycache__


@@ -33,18 +33,6 @@ class BackendServicer(backend_pb2_grpc.BackendServicer):
return backend_pb2.Reply(message=bytes("OK", 'utf-8'))
def LoadModel(self, request, context):
# Get device
# device = "cuda" if request.CUDA else "cpu"
if torch.cuda.is_available():
print("CUDA is available", file=sys.stderr)
device = "cuda"
else:
print("CUDA is not available", file=sys.stderr)
device = "cpu"
if not torch.cuda.is_available() and request.CUDA:
return backend_pb2.Result(success=False, message="CUDA is not available")
self.AudioPath = None
# List available KittenTTS models
print("Available KittenTTS voices: expr-voice-2-m, expr-voice-2-f, expr-voice-3-m, expr-voice-3-f, expr-voice-4-m, expr-voice-4-f, expr-voice-5-m, expr-voice-5-f")


@@ -1,29 +1,23 @@
.PHONY: kokoro
kokoro: protogen
kokoro:
bash install.sh
.PHONY: run
run: protogen
run: kokoro
@echo "Running kokoro..."
bash run.sh
@echo "kokoro run."
.PHONY: test
test: protogen
test: kokoro
@echo "Testing kokoro..."
bash test.sh
@echo "kokoro tested."
.PHONY: protogen
protogen: backend_pb2_grpc.py backend_pb2.py
.PHONY: protogen-clean
protogen-clean:
$(RM) backend_pb2_grpc.py backend_pb2.py
backend_pb2_grpc.py backend_pb2.py:
python3 -m grpc_tools.protoc -I../.. -I./ --python_out=. --grpc_python_out=. backend.proto
.PHONY: clean
clean: protogen-clean
rm -rf venv __pycache__


@@ -33,17 +33,6 @@ class BackendServicer(backend_pb2_grpc.BackendServicer):
return backend_pb2.Reply(message=bytes("OK", 'utf-8'))
def LoadModel(self, request, context):
# Get device
if torch.cuda.is_available():
print("CUDA is available", file=sys.stderr)
device = "cuda"
else:
print("CUDA is not available", file=sys.stderr)
device = "cpu"
if not torch.cuda.is_available() and request.CUDA:
return backend_pb2.Result(success=False, message="CUDA is not available")
try:
print("Preparing Kokoro TTS pipeline, please wait", file=sys.stderr)
# empty dict


@@ -0,0 +1,23 @@
.PHONY: mlx-vlm
mlx-vlm:
bash install.sh
.PHONY: run
run: mlx-vlm
@echo "Running mlx-vlm..."
bash run.sh
@echo "mlx run."
.PHONY: test
test: mlx-vlm
@echo "Testing mlx-vlm..."
bash test.sh
@echo "mlx tested."
.PHONY: protogen-clean
protogen-clean:
$(RM) backend_pb2_grpc.py backend_pb2.py
.PHONY: clean
clean: protogen-clean
rm -rf venv __pycache__


@@ -0,0 +1,477 @@
#!/usr/bin/env python3
import asyncio
from concurrent import futures
import argparse
import signal
import sys
import os
from typing import List
import time
import backend_pb2
import backend_pb2_grpc
import grpc
from mlx_vlm import load, generate, stream_generate
from mlx_vlm.prompt_utils import apply_chat_template
from mlx_vlm.utils import load_config, load_image
import mlx.core as mx
import base64
import io
from PIL import Image
import tempfile
_ONE_DAY_IN_SECONDS = 60 * 60 * 24
# If MAX_WORKERS are specified in the environment use it, otherwise default to 1
MAX_WORKERS = int(os.environ.get('PYTHON_GRPC_MAX_WORKERS', '1'))
# Implement the BackendServicer class with the service methods
class BackendServicer(backend_pb2_grpc.BackendServicer):
"""
A gRPC servicer that implements the Backend service defined in backend.proto.
"""
def _is_float(self, s):
"""Check if a string can be converted to float."""
try:
float(s)
return True
except ValueError:
return False
def _is_int(self, s):
"""Check if a string can be converted to int."""
try:
int(s)
return True
except ValueError:
return False
def Health(self, request, context):
"""
Returns a health check message.
Args:
request: The health check request.
context: The gRPC context.
Returns:
backend_pb2.Reply: The health check reply.
"""
return backend_pb2.Reply(message=bytes("OK", 'utf-8'))
async def LoadModel(self, request, context):
"""
Loads a multimodal vision-language model using MLX-VLM.
Args:
request: The load model request.
context: The gRPC context.
Returns:
backend_pb2.Result: The load model result.
"""
try:
print(f"Loading MLX-VLM model: {request.Model}", file=sys.stderr)
print(f"Request: {request}", file=sys.stderr)
# Parse options like in the diffusers backend
options = request.Options
self.options = {}
# The options are a list of strings in this form optname:optvalue
# We store all the options in a dict for later use
for opt in options:
if ":" not in opt:
continue
key, value = opt.split(":", 1) # Split only on first colon to handle values with colons
# Convert numeric values to appropriate types
if self._is_float(value):
value = float(value)
elif self._is_int(value):
value = int(value)
elif value.lower() in ["true", "false"]:
value = value.lower() == "true"
self.options[key] = value
print(f"Options: {self.options}", file=sys.stderr)
# Load model and processor using MLX-VLM
# mlx-vlm load function returns (model, processor) instead of (model, tokenizer)
self.model, self.processor = load(request.Model)
# Load model config for chat template support
self.config = load_config(request.Model)
except Exception as err:
print(f"Error loading MLX-VLM model {err=}, {type(err)=}", file=sys.stderr)
return backend_pb2.Result(success=False, message=f"Error loading MLX-VLM model: {err}")
print("MLX-VLM model loaded successfully", file=sys.stderr)
return backend_pb2.Result(message="MLX-VLM model loaded successfully", success=True)
async def Predict(self, request, context):
"""
Generates text based on the given prompt and sampling parameters using MLX-VLM with multimodal support.
Args:
request: The predict request.
context: The gRPC context.
Returns:
backend_pb2.Reply: The predict result.
"""
temp_files = []
try:
# Process images and audios from request
image_paths = []
audio_paths = []
# Process images
if request.Images:
for img_data in request.Images:
img_path = self.load_image_from_base64(img_data)
if img_path:
image_paths.append(img_path)
temp_files.append(img_path)
# Process audios
if request.Audios:
for audio_data in request.Audios:
audio_path = self.load_audio_from_base64(audio_data)
if audio_path:
audio_paths.append(audio_path)
temp_files.append(audio_path)
# Prepare the prompt with multimodal information
prompt = self._prepare_prompt(request, num_images=len(image_paths), num_audios=len(audio_paths))
# Build generation parameters using request attributes and options
max_tokens, generation_params = self._build_generation_params(request)
print(f"Generating text with MLX-VLM - max_tokens: {max_tokens}, params: {generation_params}", file=sys.stderr)
print(f"Images: {len(image_paths)}, Audios: {len(audio_paths)}", file=sys.stderr)
# Generate text using MLX-VLM with multimodal inputs
response = generate(
model=self.model,
processor=self.processor,
prompt=prompt,
image=image_paths if image_paths else None,
audio=audio_paths if audio_paths else None,
max_tokens=max_tokens,
temperature=generation_params.get('temp', 0.6),
top_p=generation_params.get('top_p', 1.0),
verbose=False
)
return backend_pb2.Reply(message=bytes(response, encoding='utf-8'))
except Exception as e:
print(f"Error in MLX-VLM Predict: {e}", file=sys.stderr)
context.set_code(grpc.StatusCode.INTERNAL)
context.set_details(f"Generation failed: {str(e)}")
return backend_pb2.Reply(message=bytes("", encoding='utf-8'))
finally:
# Clean up temporary files
self.cleanup_temp_files(temp_files)
def Embedding(self, request, context):
"""
A gRPC method that calculates embeddings for a given sentence.
Note: MLX-VLM doesn't support embeddings directly. This method returns an error.
Args:
request: An EmbeddingRequest object that contains the request parameters.
context: A grpc.ServicerContext object that provides information about the RPC.
Returns:
An EmbeddingResult object that contains the calculated embeddings.
"""
print("Embeddings not supported in MLX-VLM backend", file=sys.stderr)
context.set_code(grpc.StatusCode.UNIMPLEMENTED)
context.set_details("Embeddings are not supported in the MLX-VLM backend.")
return backend_pb2.EmbeddingResult()
async def PredictStream(self, request, context):
"""
Generates text based on the given prompt and sampling parameters, and streams the results using MLX-VLM with multimodal support.
Args:
request: The predict stream request.
context: The gRPC context.
Yields:
backend_pb2.Reply: Streaming predict results.
"""
temp_files = []
try:
# Process images and audios from request
image_paths = []
audio_paths = []
# Process images
if request.Images:
for img_data in request.Images:
img_path = self.load_image_from_base64(img_data)
if img_path:
image_paths.append(img_path)
temp_files.append(img_path)
# Process audios
if request.Audios:
for audio_data in request.Audios:
audio_path = self.load_audio_from_base64(audio_data)
if audio_path:
audio_paths.append(audio_path)
temp_files.append(audio_path)
# Prepare the prompt with multimodal information
prompt = self._prepare_prompt(request, num_images=len(image_paths), num_audios=len(audio_paths))
# Build generation parameters using request attributes and options
max_tokens, generation_params = self._build_generation_params(request, default_max_tokens=512)
print(f"Streaming text with MLX-VLM - max_tokens: {max_tokens}, params: {generation_params}", file=sys.stderr)
print(f"Images: {len(image_paths)}, Audios: {len(audio_paths)}", file=sys.stderr)
# Stream text generation using MLX-VLM with multimodal inputs
for response in stream_generate(
model=self.model,
processor=self.processor,
prompt=prompt,
image=image_paths if image_paths else None,
audio=audio_paths if audio_paths else None,
max_tokens=max_tokens,
temperature=generation_params.get('temp', 0.6),
top_p=generation_params.get('top_p', 1.0),
):
yield backend_pb2.Reply(message=bytes(response.text, encoding='utf-8'))
except Exception as e:
print(f"Error in MLX-VLM PredictStream: {e}", file=sys.stderr)
context.set_code(grpc.StatusCode.INTERNAL)
context.set_details(f"Streaming generation failed: {str(e)}")
yield backend_pb2.Reply(message=bytes("", encoding='utf-8'))
finally:
# Clean up temporary files
self.cleanup_temp_files(temp_files)
def _prepare_prompt(self, request, num_images=0, num_audios=0):
"""
Prepare the prompt for MLX-VLM generation, handling chat templates and multimodal inputs.
Args:
request: The gRPC request containing prompt and message information.
num_images: Number of images in the request.
num_audios: Number of audio files in the request.
Returns:
str: The prepared prompt.
"""
# If tokenizer template is enabled and messages are provided instead of prompt, apply the tokenizer template
if not request.Prompt and request.UseTokenizerTemplate and request.Messages:
# Convert gRPC messages to the format expected by apply_chat_template
messages = []
for msg in request.Messages:
messages.append({"role": msg.role, "content": msg.content})
# Use mlx-vlm's apply_chat_template which handles multimodal inputs
prompt = apply_chat_template(
self.processor,
self.config,
messages,
num_images=num_images,
num_audios=num_audios
)
return prompt
elif request.Prompt:
# If we have a direct prompt but also have images/audio, we need to format it properly
if num_images > 0 or num_audios > 0:
# Create a simple message structure for multimodal prompt
messages = [{"role": "user", "content": request.Prompt}]
prompt = apply_chat_template(
self.processor,
self.config,
messages,
num_images=num_images,
num_audios=num_audios
)
return prompt
else:
return request.Prompt
else:
# Fallback to empty prompt with multimodal template if we have media
if num_images > 0 or num_audios > 0:
messages = [{"role": "user", "content": ""}]
prompt = apply_chat_template(
self.processor,
self.config,
messages,
num_images=num_images,
num_audios=num_audios
)
return prompt
else:
return ""
def _build_generation_params(self, request, default_max_tokens=200):
"""
Build generation parameters from request attributes and options for MLX-VLM.
Args:
request: The gRPC request.
default_max_tokens: Default max_tokens if not specified.
Returns:
tuple: (max_tokens, generation_params dict)
"""
# Extract max_tokens
max_tokens = getattr(request, 'Tokens', default_max_tokens)
if max_tokens == 0:
max_tokens = default_max_tokens
# Extract generation parameters from request attributes
temp = getattr(request, 'Temperature', 0.0)
if temp == 0.0:
temp = 0.6 # Default temperature
top_p = getattr(request, 'TopP', 0.0)
if top_p == 0.0:
top_p = 1.0 # Default top_p
# Initialize generation parameters for MLX-VLM
generation_params = {
'temp': temp,
'top_p': top_p,
}
# Add seed if specified
seed = getattr(request, 'Seed', 0)
if seed != 0:
mx.random.seed(seed)
# Override with options if available
if hasattr(self, 'options'):
# Max tokens from options
if 'max_tokens' in self.options:
max_tokens = self.options['max_tokens']
# Generation parameters from options
param_option_mapping = {
'temp': 'temp',
'temperature': 'temp', # alias
'top_p': 'top_p',
}
for option_key, param_key in param_option_mapping.items():
if option_key in self.options:
generation_params[param_key] = self.options[option_key]
# Handle seed from options
if 'seed' in self.options:
mx.random.seed(self.options['seed'])
return max_tokens, generation_params
def load_image_from_base64(self, image_data: str):
"""
Load an image from base64 encoded data.
Args:
image_data (str): Base64 encoded image data.
Returns:
PIL.Image or str: The loaded image or path to the image.
"""
try:
decoded_data = base64.b64decode(image_data)
image = Image.open(io.BytesIO(decoded_data))
# Save to temporary file for mlx-vlm
with tempfile.NamedTemporaryFile(delete=False, suffix='.jpg') as tmp_file:
image.save(tmp_file.name, format='JPEG')
return tmp_file.name
except Exception as e:
print(f"Error loading image from base64: {e}", file=sys.stderr)
return None
def load_audio_from_base64(self, audio_data: str):
"""
Load audio from base64 encoded data.
Args:
audio_data (str): Base64 encoded audio data.
Returns:
str: Path to the loaded audio file.
"""
try:
decoded_data = base64.b64decode(audio_data)
# Save to temporary file for mlx-vlm
with tempfile.NamedTemporaryFile(delete=False, suffix='.wav') as tmp_file:
tmp_file.write(decoded_data)
return tmp_file.name
except Exception as e:
print(f"Error loading audio from base64: {e}", file=sys.stderr)
return None
def cleanup_temp_files(self, file_paths: List[str]):
"""
Clean up temporary files.
Args:
file_paths (List[str]): List of file paths to clean up.
"""
for file_path in file_paths:
try:
if file_path and os.path.exists(file_path):
os.remove(file_path)
except Exception as e:
print(f"Error removing temporary file {file_path}: {e}", file=sys.stderr)
async def serve(address):
# Start asyncio gRPC server
server = grpc.aio.server(migration_thread_pool=futures.ThreadPoolExecutor(max_workers=MAX_WORKERS),
options=[
('grpc.max_message_length', 50 * 1024 * 1024), # 50MB
('grpc.max_send_message_length', 50 * 1024 * 1024), # 50MB
('grpc.max_receive_message_length', 50 * 1024 * 1024), # 50MB
])
# Add the servicer to the server
backend_pb2_grpc.add_BackendServicer_to_server(BackendServicer(), server)
# Bind the server to the address
server.add_insecure_port(address)
# Gracefully shutdown the server on SIGTERM or SIGINT
loop = asyncio.get_event_loop()
for sig in (signal.SIGINT, signal.SIGTERM):
loop.add_signal_handler(
sig, lambda: asyncio.ensure_future(server.stop(5))
)
# Start the server
await server.start()
print("Server started. Listening on: " + address, file=sys.stderr)
# Wait for the server to be terminated
await server.wait_for_termination()
if __name__ == "__main__":
parser = argparse.ArgumentParser(description="Run the gRPC server.")
parser.add_argument(
"--addr", default="localhost:50051", help="The address to bind the server to."
)
args = parser.parse_args()
asyncio.run(serve(args.addr))
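A minimal way to exercise this server by hand, sketched under the assumption that the generated backend_pb2/backend_pb2_grpc stubs are importable and that the field names match the ones the servicer reads (Model, Prompt, Images); the model name and image path are placeholders:

import base64
import grpc
import backend_pb2
import backend_pb2_grpc

def main():
    with grpc.insecure_channel("localhost:50051") as channel:
        stub = backend_pb2_grpc.BackendStub(channel)
        # Health check, then load a placeholder vision-language model.
        print(stub.Health(backend_pb2.HealthMessage()).message)
        res = stub.LoadModel(backend_pb2.ModelOptions(Model="mlx-community/Qwen2-VL-2B-Instruct-4bit"))
        print(res.success, res.message)
        # One-shot prediction with an image attached as a base64 string.
        with open("photo.jpg", "rb") as fh:
            img = base64.b64encode(fh.read()).decode()
        reply = stub.Predict(backend_pb2.PredictOptions(Prompt="Describe this image.", Images=[img]))
        print(reply.message.decode())

if __name__ == "__main__":
    main()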


@@ -1,13 +1,14 @@
#!/bin/bash
set -e
USE_PIP=true
backend_dir=$(dirname $0)
if [ -d $backend_dir/common ]; then
source $backend_dir/common/libbackend.sh
else
source $backend_dir/../common/libbackend.sh
fi
ensureVenv
python3 -m grpc_tools.protoc -I../.. -I./ --python_out=. --grpc_python_out=. backend.proto
installRequirements


@@ -0,0 +1 @@
git+https://github.com/Blaizzy/mlx-vlm


@@ -0,0 +1,4 @@
grpcio==1.71.0
protobuf
certifi
setuptools

backend/python/mlx-vlm/run.sh Executable file

@@ -0,0 +1,11 @@
#!/bin/bash
backend_dir=$(dirname $0)
if [ -d $backend_dir/common ]; then
source $backend_dir/common/libbackend.sh
else
source $backend_dir/../common/libbackend.sh
fi
startBackend $@


@@ -0,0 +1,146 @@
import unittest
import subprocess
import time
import grpc
import backend_pb2
import backend_pb2_grpc
class TestBackendServicer(unittest.TestCase):
"""
TestBackendServicer is the class that tests the gRPC service.
This class contains methods to test the startup and shutdown of the gRPC service.
"""
def setUp(self):
self.service = subprocess.Popen(["python", "backend.py", "--addr", "localhost:50051"])
time.sleep(10)
def tearDown(self) -> None:
self.service.terminate()
self.service.wait()
def test_server_startup(self):
try:
self.setUp()
with grpc.insecure_channel("localhost:50051") as channel:
stub = backend_pb2_grpc.BackendStub(channel)
response = stub.Health(backend_pb2.HealthMessage())
self.assertEqual(response.message, b'OK')
except Exception as err:
print(err)
self.fail("Server failed to start")
finally:
self.tearDown()
def test_load_model(self):
"""
This method tests if the model is loaded successfully
"""
try:
self.setUp()
with grpc.insecure_channel("localhost:50051") as channel:
stub = backend_pb2_grpc.BackendStub(channel)
response = stub.LoadModel(backend_pb2.ModelOptions(Model="facebook/opt-125m"))
self.assertTrue(response.success)
self.assertEqual(response.message, "Model loaded successfully")
except Exception as err:
print(err)
self.fail("LoadModel service failed")
finally:
self.tearDown()
def test_text(self):
"""
This method tests if text is generated successfully
"""
try:
self.setUp()
with grpc.insecure_channel("localhost:50051") as channel:
stub = backend_pb2_grpc.BackendStub(channel)
response = stub.LoadModel(backend_pb2.ModelOptions(Model="facebook/opt-125m"))
self.assertTrue(response.success)
req = backend_pb2.PredictOptions(Prompt="The capital of France is")
resp = stub.Predict(req)
self.assertIsNotNone(resp.message)
except Exception as err:
print(err)
self.fail("text service failed")
finally:
self.tearDown()
def test_sampling_params(self):
"""
This method tests if all sampling parameters are correctly processed
NOTE: this does NOT test for correctness, just that we received a compatible response
"""
try:
self.setUp()
with grpc.insecure_channel("localhost:50051") as channel:
stub = backend_pb2_grpc.BackendStub(channel)
response = stub.LoadModel(backend_pb2.ModelOptions(Model="facebook/opt-125m"))
self.assertTrue(response.success)
req = backend_pb2.PredictOptions(
Prompt="The capital of France is",
TopP=0.8,
Tokens=50,
Temperature=0.7,
TopK=40,
PresencePenalty=0.1,
FrequencyPenalty=0.2,
RepetitionPenalty=1.1,
MinP=0.05,
Seed=42,
StopPrompts=["\n"],
StopTokenIds=[50256],
BadWords=["badword"],
IncludeStopStrInOutput=True,
IgnoreEOS=True,
MinTokens=5,
Logprobs=5,
PromptLogprobs=5,
SkipSpecialTokens=True,
SpacesBetweenSpecialTokens=True,
TruncatePromptTokens=10,
GuidedDecoding=True,
N=2,
)
resp = stub.Predict(req)
self.assertIsNotNone(resp.message)
self.assertIsNotNone(resp.logprobs)
except Exception as err:
print(err)
self.fail("sampling params service failed")
finally:
self.tearDown()
def test_embedding(self):
"""
This method tests if the embeddings are generated successfully
"""
try:
self.setUp()
with grpc.insecure_channel("localhost:50051") as channel:
stub = backend_pb2_grpc.BackendStub(channel)
response = stub.LoadModel(backend_pb2.ModelOptions(Model="intfloat/e5-mistral-7b-instruct"))
self.assertTrue(response.success)
embedding_request = backend_pb2.PredictOptions(Embeddings="This is a test sentence.")
embedding_response = stub.Embedding(embedding_request)
self.assertIsNotNone(embedding_response.embeddings)
# assert that it is a list of floats
self.assertIsInstance(embedding_response.embeddings, list)
# assert that the list is not empty
self.assertTrue(len(embedding_response.embeddings) > 0)
except Exception as err:
print(err)
self.fail("Embedding service failed")
finally:
self.tearDown()

backend/python/mlx-vlm/test.sh Executable file

@@ -0,0 +1,12 @@
#!/bin/bash
set -e
backend_dir=$(dirname $0)
if [ -d $backend_dir/common ]; then
source $backend_dir/common/libbackend.sh
else
source $backend_dir/../common/libbackend.sh
fi
runUnittests


@@ -0,0 +1,23 @@
.PHONY: mlx
mlx:
bash install.sh
.PHONY: run
run:
@echo "Running mlx..."
bash run.sh
@echo "mlx run."
.PHONY: test
test:
@echo "Testing mlx..."
bash test.sh
@echo "mlx tested."
.PHONY: protogen-clean
protogen-clean:
$(RM) backend_pb2_grpc.py backend_pb2.py
.PHONY: clean
clean: protogen-clean
rm -rf venv __pycache__


@@ -0,0 +1,376 @@
#!/usr/bin/env python3
import asyncio
from concurrent import futures
import argparse
import signal
import sys
import os
from typing import List
import time
import backend_pb2
import backend_pb2_grpc
import grpc
from mlx_lm import load, generate, stream_generate
from mlx_lm.sample_utils import make_sampler
from mlx_lm.models.cache import make_prompt_cache
import mlx.core as mx
import base64
import io
_ONE_DAY_IN_SECONDS = 60 * 60 * 24
# If MAX_WORKERS are specified in the environment use it, otherwise default to 1
MAX_WORKERS = int(os.environ.get('PYTHON_GRPC_MAX_WORKERS', '1'))
# Implement the BackendServicer class with the service methods
class BackendServicer(backend_pb2_grpc.BackendServicer):
"""
A gRPC servicer that implements the Backend service defined in backend.proto.
"""
def _is_float(self, s):
"""Check if a string can be converted to float."""
try:
float(s)
return True
except ValueError:
return False
def _is_int(self, s):
"""Check if a string can be converted to int."""
try:
int(s)
return True
except ValueError:
return False
def Health(self, request, context):
"""
Returns a health check message.
Args:
request: The health check request.
context: The gRPC context.
Returns:
backend_pb2.Reply: The health check reply.
"""
return backend_pb2.Reply(message=bytes("OK", 'utf-8'))
async def LoadModel(self, request, context):
"""
Loads a language model using MLX.
Args:
request: The load model request.
context: The gRPC context.
Returns:
backend_pb2.Result: The load model result.
"""
try:
print(f"Loading MLX model: {request.Model}", file=sys.stderr)
print(f"Request: {request}", file=sys.stderr)
# Parse options like in the diffusers backend
options = request.Options
self.options = {}
# The options are a list of strings in this form optname:optvalue
# We store all the options in a dict for later use
for opt in options:
if ":" not in opt:
continue
key, value = opt.split(":", 1) # Split only on first colon to handle values with colons
# Convert numeric values to appropriate types
if self._is_float(value):
value = float(value)
elif self._is_int(value):
value = int(value)
elif value.lower() in ["true", "false"]:
value = value.lower() == "true"
self.options[key] = value
print(f"Options: {self.options}", file=sys.stderr)
# Build tokenizer config for MLX using options
tokenizer_config = {}
# Handle trust_remote_code from request or options
if request.TrustRemoteCode or self.options.get("trust_remote_code", False):
tokenizer_config["trust_remote_code"] = True
# Handle EOS token from options
if "eos_token" in self.options:
tokenizer_config["eos_token"] = self.options["eos_token"]
# Handle other tokenizer config options
for key in ["pad_token", "bos_token", "unk_token", "sep_token", "cls_token", "mask_token"]:
if key in self.options:
tokenizer_config[key] = self.options[key]
# Load model and tokenizer using MLX
if tokenizer_config:
print(f"Loading with tokenizer_config: {tokenizer_config}", file=sys.stderr)
self.model, self.tokenizer = load(request.Model, tokenizer_config=tokenizer_config)
else:
self.model, self.tokenizer = load(request.Model)
# Initialize prompt cache for efficient generation
max_kv_size = self.options.get("max_kv_size", None)
self.prompt_cache = make_prompt_cache(self.model, max_kv_size)
except Exception as err:
print(f"Error loading MLX model {err=}, {type(err)=}", file=sys.stderr)
return backend_pb2.Result(success=False, message=f"Error loading MLX model: {err}")
print("MLX model loaded successfully", file=sys.stderr)
return backend_pb2.Result(message="MLX model loaded successfully", success=True)
async def Predict(self, request, context):
"""
Generates text based on the given prompt and sampling parameters using MLX.
Args:
request: The predict request.
context: The gRPC context.
Returns:
backend_pb2.Reply: The predict result.
"""
try:
# Prepare the prompt
prompt = self._prepare_prompt(request)
# Build generation parameters using request attributes and options
max_tokens, sampler_params = self._build_generation_params(request)
print(f"Generating text with MLX - max_tokens: {max_tokens}, sampler_params: {sampler_params}", file=sys.stderr)
# Create sampler with parameters
sampler = make_sampler(**sampler_params)
# Generate text using MLX with proper parameters
response = generate(
self.model,
self.tokenizer,
prompt=prompt,
max_tokens=max_tokens,
sampler=sampler,
prompt_cache=self.prompt_cache,
verbose=False
)
return backend_pb2.Reply(message=bytes(response, encoding='utf-8'))
except Exception as e:
print(f"Error in MLX Predict: {e}", file=sys.stderr)
context.set_code(grpc.StatusCode.INTERNAL)
context.set_details(f"Generation failed: {str(e)}")
return backend_pb2.Reply(message=bytes("", encoding='utf-8'))
def Embedding(self, request, context):
"""
A gRPC method that calculates embeddings for a given sentence.
Note: MLX-LM doesn't support embeddings directly. This method returns an error.
Args:
request: An EmbeddingRequest object that contains the request parameters.
context: A grpc.ServicerContext object that provides information about the RPC.
Returns:
An EmbeddingResult object that contains the calculated embeddings.
"""
print("Embeddings not supported in MLX backend", file=sys.stderr)
context.set_code(grpc.StatusCode.UNIMPLEMENTED)
context.set_details("Embeddings are not supported in the MLX backend.")
return backend_pb2.EmbeddingResult()
async def PredictStream(self, request, context):
"""
Generates text based on the given prompt and sampling parameters, and streams the results using MLX.
Args:
request: The predict stream request.
context: The gRPC context.
Yields:
backend_pb2.Reply: Streaming predict results.
"""
try:
# Prepare the prompt
prompt = self._prepare_prompt(request)
# Build generation parameters using request attributes and options
max_tokens, sampler_params = self._build_generation_params(request, default_max_tokens=512)
print(f"Streaming text with MLX - max_tokens: {max_tokens}, sampler_params: {sampler_params}", file=sys.stderr)
# Create sampler with parameters
sampler = make_sampler(**sampler_params)
# Stream text generation using MLX with proper parameters
for response in stream_generate(
self.model,
self.tokenizer,
prompt=prompt,
max_tokens=max_tokens,
sampler=sampler,
prompt_cache=self.prompt_cache,
):
yield backend_pb2.Reply(message=bytes(response.text, encoding='utf-8'))
except Exception as e:
print(f"Error in MLX PredictStream: {e}", file=sys.stderr)
context.set_code(grpc.StatusCode.INTERNAL)
context.set_details(f"Streaming generation failed: {str(e)}")
yield backend_pb2.Reply(message=bytes("", encoding='utf-8'))
def _prepare_prompt(self, request):
"""
Prepare the prompt for MLX generation, handling chat templates if needed.
Args:
request: The gRPC request containing prompt and message information.
Returns:
str: The prepared prompt.
"""
# If tokenizer template is enabled and messages are provided instead of prompt, apply the tokenizer template
if not request.Prompt and request.UseTokenizerTemplate and request.Messages:
# Convert gRPC messages to the format expected by apply_chat_template
messages = []
for msg in request.Messages:
messages.append({"role": msg.role, "content": msg.content})
prompt = self.tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
return prompt
else:
return request.Prompt
def _build_generation_params(self, request, default_max_tokens=200):
"""
Build generation parameters from request attributes and options.
Args:
request: The gRPC request.
default_max_tokens: Default max_tokens if not specified.
Returns:
tuple: (max_tokens, sampler_params dict)
"""
# Extract max_tokens
max_tokens = getattr(request, 'Tokens', default_max_tokens)
if max_tokens == 0:
max_tokens = default_max_tokens
# Extract sampler parameters from request attributes
temp = getattr(request, 'Temperature', 0.0)
if temp == 0.0:
temp = 0.6 # Default temperature
top_p = getattr(request, 'TopP', 0.0)
if top_p == 0.0:
top_p = 1.0 # Default top_p
# Initialize sampler parameters
sampler_params = {
'temp': temp,
'top_p': top_p,
'xtc_threshold': 0.0,
'xtc_probability': 0.0,
}
# Add seed if specified
seed = getattr(request, 'Seed', 0)
if seed != 0:
mx.random.seed(seed)
# Override with options if available
if hasattr(self, 'options'):
# Max tokens from options
if 'max_tokens' in self.options:
max_tokens = self.options['max_tokens']
# Sampler parameters from options
sampler_option_mapping = {
'temp': 'temp',
'temperature': 'temp', # alias
'top_p': 'top_p',
'xtc_threshold': 'xtc_threshold',
'xtc_probability': 'xtc_probability',
}
for option_key, param_key in sampler_option_mapping.items():
if option_key in self.options:
sampler_params[param_key] = self.options[option_key]
# Handle seed from options
if 'seed' in self.options:
mx.random.seed(self.options['seed'])
# Special tokens for XTC sampling (if tokenizer has eos_token_ids)
xtc_special_tokens = []
if hasattr(self.tokenizer, 'eos_token_ids') and self.tokenizer.eos_token_ids:
xtc_special_tokens = list(self.tokenizer.eos_token_ids)
elif hasattr(self.tokenizer, 'eos_token_id') and self.tokenizer.eos_token_id is not None:
xtc_special_tokens = [self.tokenizer.eos_token_id]
# Add newline token if available
try:
newline_tokens = self.tokenizer.encode("\n")
xtc_special_tokens.extend(newline_tokens)
except:
pass # Skip if encoding fails
sampler_params['xtc_special_tokens'] = xtc_special_tokens
return max_tokens, sampler_params
async def serve(address):
# Start asyncio gRPC server
server = grpc.aio.server(migration_thread_pool=futures.ThreadPoolExecutor(max_workers=MAX_WORKERS),
options=[
('grpc.max_message_length', 50 * 1024 * 1024), # 50MB
('grpc.max_send_message_length', 50 * 1024 * 1024), # 50MB
('grpc.max_receive_message_length', 50 * 1024 * 1024), # 50MB
])
# Add the servicer to the server
backend_pb2_grpc.add_BackendServicer_to_server(BackendServicer(), server)
# Bind the server to the address
server.add_insecure_port(address)
# Gracefully shutdown the server on SIGTERM or SIGINT
loop = asyncio.get_event_loop()
for sig in (signal.SIGINT, signal.SIGTERM):
loop.add_signal_handler(
sig, lambda: asyncio.ensure_future(server.stop(5))
)
# Start the server
await server.start()
print("Server started. Listening on: " + address, file=sys.stderr)
# Wait for the server to be terminated
await server.wait_for_termination()
if __name__ == "__main__":
parser = argparse.ArgumentParser(description="Run the gRPC server.")
parser.add_argument(
"--addr", default="localhost:50051", help="The address to bind the server to."
)
args = parser.parse_args()
asyncio.run(serve(args.addr))
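Both MLX backends accept free-form options as "name:value" strings and coerce the values before use. A standalone, illustrative coercion helper; unlike the backend's _is_float/_is_int checks (where the float test runs first), this version tries int before float so plain integers stay ints:

def parse_options(options):
    """Turn strings like "temp:0.7" or "trust_remote_code:true" into a typed dict."""
    parsed = {}
    for opt in options:
        if ":" not in opt:
            continue
        key, value = opt.split(":", 1)  # split only on the first colon
        if value.lower() in ("true", "false"):
            parsed[key] = value.lower() == "true"
        else:
            try:
                parsed[key] = int(value)
            except ValueError:
                try:
                    parsed[key] = float(value)
                except ValueError:
                    parsed[key] = value
    return parsed

print(parse_options(["temp:0.7", "max_tokens:256", "trust_remote_code:true"]))
# -> {'temp': 0.7, 'max_tokens': 256, 'trust_remote_code': True}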

backend/python/mlx/install.sh Executable file

@@ -0,0 +1,15 @@
#!/bin/bash
set -e
USE_PIP=true
PYTHON_VERSION=""
backend_dir=$(dirname $0)
if [ -d $backend_dir/common ]; then
source $backend_dir/common/libbackend.sh
else
source $backend_dir/../common/libbackend.sh
fi
installRequirements


@@ -0,0 +1 @@
mlx-lm


@@ -0,0 +1,4 @@
grpcio==1.71.0
protobuf
certifi
setuptools

backend/python/mlx/run.sh Executable file

@@ -0,0 +1,11 @@
#!/bin/bash
backend_dir=$(dirname $0)
if [ -d $backend_dir/common ]; then
source $backend_dir/common/libbackend.sh
else
source $backend_dir/../common/libbackend.sh
fi
startBackend $@

backend/python/mlx/test.py Normal file

@@ -0,0 +1,146 @@
import unittest
import subprocess
import time
import grpc
import backend_pb2
import backend_pb2_grpc
class TestBackendServicer(unittest.TestCase):
"""
TestBackendServicer is the class that tests the gRPC service.
This class contains methods to test the startup and shutdown of the gRPC service.
"""
def setUp(self):
self.service = subprocess.Popen(["python", "backend.py", "--addr", "localhost:50051"])
time.sleep(10)
def tearDown(self) -> None:
self.service.terminate()
self.service.wait()
def test_server_startup(self):
try:
self.setUp()
with grpc.insecure_channel("localhost:50051") as channel:
stub = backend_pb2_grpc.BackendStub(channel)
response = stub.Health(backend_pb2.HealthMessage())
self.assertEqual(response.message, b'OK')
except Exception as err:
print(err)
self.fail("Server failed to start")
finally:
self.tearDown()
def test_load_model(self):
"""
This method tests if the model is loaded successfully
"""
try:
self.setUp()
with grpc.insecure_channel("localhost:50051") as channel:
stub = backend_pb2_grpc.BackendStub(channel)
response = stub.LoadModel(backend_pb2.ModelOptions(Model="facebook/opt-125m"))
self.assertTrue(response.success)
self.assertEqual(response.message, "Model loaded successfully")
except Exception as err:
print(err)
self.fail("LoadModel service failed")
finally:
self.tearDown()
def test_text(self):
"""
This method tests if text is generated successfully
"""
try:
self.setUp()
with grpc.insecure_channel("localhost:50051") as channel:
stub = backend_pb2_grpc.BackendStub(channel)
response = stub.LoadModel(backend_pb2.ModelOptions(Model="facebook/opt-125m"))
self.assertTrue(response.success)
req = backend_pb2.PredictOptions(Prompt="The capital of France is")
resp = stub.Predict(req)
self.assertIsNotNone(resp.message)
except Exception as err:
print(err)
self.fail("text service failed")
finally:
self.tearDown()
def test_sampling_params(self):
"""
This method tests if all sampling parameters are correctly processed
NOTE: this does NOT test for correctness, just that we received a compatible response
"""
try:
self.setUp()
with grpc.insecure_channel("localhost:50051") as channel:
stub = backend_pb2_grpc.BackendStub(channel)
response = stub.LoadModel(backend_pb2.ModelOptions(Model="facebook/opt-125m"))
self.assertTrue(response.success)
req = backend_pb2.PredictOptions(
Prompt="The capital of France is",
TopP=0.8,
Tokens=50,
Temperature=0.7,
TopK=40,
PresencePenalty=0.1,
FrequencyPenalty=0.2,
RepetitionPenalty=1.1,
MinP=0.05,
Seed=42,
StopPrompts=["\n"],
StopTokenIds=[50256],
BadWords=["badword"],
IncludeStopStrInOutput=True,
IgnoreEOS=True,
MinTokens=5,
Logprobs=5,
PromptLogprobs=5,
SkipSpecialTokens=True,
SpacesBetweenSpecialTokens=True,
TruncatePromptTokens=10,
GuidedDecoding=True,
N=2,
)
resp = stub.Predict(req)
self.assertIsNotNone(resp.message)
self.assertIsNotNone(resp.logprobs)
except Exception as err:
print(err)
self.fail("sampling params service failed")
finally:
self.tearDown()
def test_embedding(self):
"""
This method tests if the embeddings are generated successfully
"""
try:
self.setUp()
with grpc.insecure_channel("localhost:50051") as channel:
stub = backend_pb2_grpc.BackendStub(channel)
response = stub.LoadModel(backend_pb2.ModelOptions(Model="intfloat/e5-mistral-7b-instruct"))
self.assertTrue(response.success)
embedding_request = backend_pb2.PredictOptions(Embeddings="This is a test sentence.")
embedding_response = stub.Embedding(embedding_request)
self.assertIsNotNone(embedding_response.embeddings)
# assert that it is a list of floats
self.assertIsInstance(embedding_response.embeddings, list)
# assert that the list is not empty
self.assertTrue(len(embedding_response.embeddings) > 0)
except Exception as err:
print(err)
self.fail("Embedding service failed")
finally:
self.tearDown()

backend/python/mlx/test.sh Executable file

@@ -0,0 +1,12 @@
#!/bin/bash
set -e
backend_dir=$(dirname $0)
if [ -d $backend_dir/common ]; then
source $backend_dir/common/libbackend.sh
else
source $backend_dir/../common/libbackend.sh
fi
runUnittests


@@ -1,30 +1,24 @@
.PHONY: rerankers
rerankers: protogen
rerankers:
bash install.sh
.PHONY: run
run: protogen
run: rerankers
@echo "Running rerankers..."
bash run.sh
@echo "rerankers run."
# It does not work well from the command line. It only works with an IDE like VSCode.
.PHONY: test
test: protogen
test: rerankers
@echo "Testing rerankers..."
bash test.sh
@echo "rerankers tested."
.PHONY: protogen
protogen: backend_pb2_grpc.py backend_pb2.py
.PHONY: protogen-clean
protogen-clean:
$(RM) backend_pb2_grpc.py backend_pb2.py
backend_pb2_grpc.py backend_pb2.py:
python3 -m grpc_tools.protoc -I../.. -I./ --python_out=. --grpc_python_out=. backend.proto
.PHONY: clean
clean: protogen-clean
rm -rf venv __pycache__


@@ -3,7 +3,7 @@ intel-extension-for-pytorch==2.3.110+xpu
transformers
accelerate
torch==2.3.1+cxx11.abi
oneccl_bind_pt==2.3.100+xpu
oneccl_bind_pt==2.8.0+xpu
rerankers[transformers]
optimum[openvino]
setuptools


@@ -1,3 +1,3 @@
grpcio==1.71.0
grpcio==1.74.0
protobuf
certifi


@@ -3,18 +3,11 @@
.PHONY: install
install:
bash install.sh
$(MAKE) protogen
.PHONY: protogen
protogen: backend_pb2_grpc.py backend_pb2.py
.PHONY: protogen-clean
protogen-clean:
$(RM) backend_pb2_grpc.py backend_pb2.py
backend_pb2_grpc.py backend_pb2.py:
bash protogen.sh
.PHONY: clean
clean: protogen-clean
rm -rf venv __pycache__


@@ -1,30 +1,24 @@
.PHONY: transformers
transformers: protogen
transformers:
bash install.sh
.PHONY: run
run: protogen
run: transformers
@echo "Running transformers..."
bash run.sh
@echo "transformers run."
# It does not work well from the command line. It only works with an IDE like VSCode.
.PHONY: test
test: protogen
test: transformers
@echo "Testing transformers..."
bash test.sh
@echo "transformers tested."
.PHONY: protogen
protogen: backend_pb2_grpc.py backend_pb2.py
.PHONY: protogen-clean
protogen-clean:
$(RM) backend_pb2_grpc.py backend_pb2.py
backend_pb2_grpc.py backend_pb2.py:
python3 -m grpc_tools.protoc -I../.. -I./ --python_out=. --grpc_python_out=. backend.proto
.PHONY: clean
clean: protogen-clean
rm -rf venv __pycache__


@@ -94,7 +94,9 @@ class BackendServicer(backend_pb2_grpc.BackendServicer):
self.SentenceTransformer = False
device_map="cpu"
mps_available = hasattr(torch.backends, "mps") and torch.backends.mps.is_available()
if mps_available:
device_map = "mps"
quantization = None
autoTokenizer = True
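With the MPS branch above, device_map resolves to "cpu" or "mps" on Apple hardware (the CUDA and XPU cases are handled elsewhere in this file). A minimal, hypothetical loading example, assuming accelerate accepts an explicit device string for device_map as the backend's usage suggests; the model id is a placeholder:

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

device_map = "cpu"
if hasattr(torch.backends, "mps") and torch.backends.mps.is_available():
    device_map = "mps"

model_id = "Qwen/Qwen2.5-0.5B-Instruct"  # placeholder model
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map=device_map,  # accelerate places the weights on the chosen device
    torch_dtype=torch.float16 if device_map != "cpu" else torch.float32,
)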


@@ -1,3 +1,4 @@
--extra-index-url https://download.pytorch.org/whl/cpu
torch==2.7.1
llvmlite==0.43.0
numba==0.60.0
@@ -5,5 +6,5 @@ accelerate
transformers
bitsandbytes
outetts
sentence-transformers==5.0.0
protobuf==6.31.0
sentence-transformers==5.1.0
protobuf==6.32.0


@@ -6,5 +6,5 @@ accelerate
transformers
bitsandbytes
outetts
sentence-transformers==5.0.0
protobuf==6.31.0
sentence-transformers==5.1.0
protobuf==6.32.0


@@ -5,5 +5,5 @@ numba==0.60.0
transformers
bitsandbytes
outetts
sentence-transformers==5.0.0
protobuf==6.31.0
sentence-transformers==5.1.0
protobuf==6.32.0


@@ -7,5 +7,5 @@ numba==0.60.0
bitsandbytes
outetts
bitsandbytes
sentence-transformers==5.0.0
protobuf==6.31.0
sentence-transformers==5.1.0
protobuf==6.32.0


@@ -9,5 +9,5 @@ transformers
intel-extension-for-transformers
bitsandbytes
outetts
sentence-transformers==5.0.0
protobuf==6.31.0
sentence-transformers==5.1.0
protobuf==6.32.0


@@ -0,0 +1,9 @@
torch==2.7.1
accelerate
llvmlite==0.43.0
numba==0.60.0
transformers
bitsandbytes
outetts
sentence-transformers==5.1.0
protobuf==6.32.0


@@ -1,5 +1,5 @@
grpcio==1.71.0
protobuf==6.31.0
grpcio==1.74.0
protobuf==6.32.0
certifi
setuptools
scipy==1.15.1


@@ -1,29 +1,23 @@
.PHONY: vllm
vllm: protogen
vllm:
bash install.sh
.PHONY: run
run: protogen
run: vllm
@echo "Running vllm..."
bash run.sh
@echo "vllm run."
.PHONY: test
test: protogen
test: vllm
@echo "Testing vllm..."
bash test.sh
@echo "vllm tested."
.PHONY: protogen
protogen: backend_pb2_grpc.py backend_pb2.py
.PHONY: protogen-clean
protogen-clean:
$(RM) backend_pb2_grpc.py backend_pb2.py
backend_pb2_grpc.py backend_pb2.py:
python3 -m grpc_tools.protoc -I../.. -I./ --python_out=. --grpc_python_out=. backend.proto
.PHONY: clean
clean: protogen-clean
rm -rf venv __pycache__


@@ -1,4 +1,4 @@
grpcio==1.71.0
grpcio==1.74.0
protobuf
certifi
setuptools


@@ -2,27 +2,29 @@ package application
import (
"github.com/mudler/LocalAI/core/config"
"github.com/mudler/LocalAI/core/services"
"github.com/mudler/LocalAI/core/templates"
"github.com/mudler/LocalAI/pkg/model"
)
type Application struct {
backendLoader *config.BackendConfigLoader
backendLoader *config.ModelConfigLoader
modelLoader *model.ModelLoader
applicationConfig *config.ApplicationConfig
templatesEvaluator *templates.Evaluator
galleryService *services.GalleryService
}
func newApplication(appConfig *config.ApplicationConfig) *Application {
return &Application{
backendLoader: config.NewBackendConfigLoader(appConfig.ModelPath),
modelLoader: model.NewModelLoader(appConfig.ModelPath, appConfig.SingleBackend),
backendLoader: config.NewModelConfigLoader(appConfig.SystemState.Model.ModelsPath),
modelLoader: model.NewModelLoader(appConfig.SystemState, appConfig.SingleBackend),
applicationConfig: appConfig,
templatesEvaluator: templates.NewEvaluator(appConfig.ModelPath),
templatesEvaluator: templates.NewEvaluator(appConfig.SystemState.Model.ModelsPath),
}
}
func (a *Application) BackendLoader() *config.BackendConfigLoader {
func (a *Application) ModelConfigLoader() *config.ModelConfigLoader {
return a.backendLoader
}
@@ -37,3 +39,19 @@ func (a *Application) ApplicationConfig() *config.ApplicationConfig {
func (a *Application) TemplatesEvaluator() *templates.Evaluator {
return a.templatesEvaluator
}
func (a *Application) GalleryService() *services.GalleryService {
return a.galleryService
}
func (a *Application) start() error {
galleryService := services.NewGalleryService(a.ApplicationConfig(), a.ModelLoader())
err := galleryService.Start(a.ApplicationConfig().Context, a.ModelConfigLoader(), a.ApplicationConfig().SystemState)
if err != nil {
return err
}
a.galleryService = galleryService
return nil
}

Some files were not shown because too many files have changed in this diff.