Compare commits

271 Commits

Author SHA1 Message Date
Ettore Di Giacinto
9099d0c77e models(gallery): add tq2.5-14b-sugarquill-v1 (#4104)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-11-10 11:50:38 +01:00
Ettore Di Giacinto
b69614c2b3 models(gallery): add tissint-14b-128k-rp (#4103)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-11-10 10:38:19 +01:00
Ettore Di Giacinto
068b90a6dc models(gallery): add opencoder-1.5b instruct and base (#4102)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-11-10 10:32:12 +01:00
Ettore Di Giacinto
0586fe2d9c models(gallery): add opencoder-8b instruct and base (#4101)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-11-10 10:28:47 +01:00
LocalAI [bot]
f1e03bf474 chore: ⬆️ Update ggerganov/llama.cpp to 6423c65aa8be1b98f990cf207422505ac5a441a1 (#4100)
⬆️ Update ggerganov/llama.cpp

Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2024-11-09 22:13:13 +00:00
Ettore Di Giacinto
7f0093b2c9 models(gallery): add eva-qwen2.5-14b-v0.2 (#4099)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-11-09 09:01:15 +01:00
Ettore Di Giacinto
e8431d62a2 models(gallery): add llenn-v0.75-qwen2.5-72b-i1 (#4098)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-11-09 08:58:09 +01:00
LocalAI [bot]
adafd7cf23 chore: ⬆️ Update ggerganov/llama.cpp to ec450d3bbf9fdb3cd06b27c00c684fd1861cb0cf (#4097)
⬆️ Update ggerganov/llama.cpp

Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
Co-authored-by: Ettore Di Giacinto <mudler@users.noreply.github.com>
2024-11-08 23:00:05 +00:00
Ettore Di Giacinto
6daef00d30 chore(refactor): drop unnecessary code in loader (#4096)
* chore: simplify passing options to ModelOptions

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* chore(refactor): do not expose internal backend Loader

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

---------

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-11-08 21:54:25 +01:00
Ettore Di Giacinto
a0cdd19038 models(gallery): add tess-r1-limerick-llama-3.1-70b (#4095)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-11-08 11:54:40 +01:00
Ettore Di Giacinto
d454118887 fix(container-images): install uv as system package (#4094)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-11-08 11:47:43 +01:00
LocalAI [bot]
356f23bacb chore: ⬆️ Update ggerganov/whisper.cpp to 31aea563a83803c710691fed3e8d700e06ae6788 (#4092)
⬆️ Update ggerganov/whisper.cpp

Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
Co-authored-by: Ettore Di Giacinto <mudler@users.noreply.github.com>
2024-11-08 08:36:08 +01:00
LocalAI [bot]
196c249367 chore: ⬆️ Update ggerganov/llama.cpp to 97404c4a0374cac45c8c34a32d13819de1dd023d (#4093)
⬆️ Update ggerganov/llama.cpp

Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2024-11-07 22:55:56 +00:00
Ettore Di Giacinto
e2a8dd64db fix(tts): correctly pass backend config when generating model options (#4091)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-11-07 18:30:22 +01:00
Ettore Di Giacinto
20a5b20b59 chore(p2p): enhance logging (#4090)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-11-07 18:09:33 +01:00
Ettore Di Giacinto
06d0d00231 models(gallery): add valor-7b-v0.1 (#4089)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-11-07 10:05:50 +01:00
Ettore Di Giacinto
62c7f745ca models(gallery): add q25-1.5b-veolu (#4088)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-11-07 10:05:20 +01:00
LocalAI [bot]
551faa8ddb chore: ⬆️ Update ggerganov/llama.cpp to 5c333e014059122245c318e7ed4ec27d1085573c (#4087)
⬆️ Update ggerganov/llama.cpp

Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2024-11-06 21:48:57 +00:00
Ettore Di Giacinto
2c041a2077 feat(ui): move model detailed info to a modal (#4086)
* feat(ui): move model detailed info to a modal

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* chore: add static asset

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

---------

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-11-06 18:25:59 +01:00
Ettore Di Giacinto
c4af769d4f chore: hide raw safetensors files (#4085)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-11-06 12:04:39 +01:00
Ettore Di Giacinto
b425a870b0 fix(diffusers): correctly parse height and width request without parametrization (#4082)
* fix(diffusers): allow to specify width and height without enable-parameters

Let's simplify usage by not gating width and height by parameters

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* chore: use sane defaults

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

---------

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-11-06 08:53:02 +01:00
LocalAI [bot]
b59e16742e chore: ⬆️ Update ggerganov/llama.cpp to b8deef0ec0af5febac1d2cfd9119ff330ed0b762 (#4083)
⬆️ Update ggerganov/llama.cpp

Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2024-11-05 21:40:48 +00:00
Ettore Di Giacinto
947224b952 feat(diffusers): allow multiple lora adapters (#4081)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-11-05 15:14:33 +01:00
LocalAI [bot]
20cd8814c1 chore(model-gallery): ⬆️ update checksum (#4080)
⬆️ Checksum updates in gallery/index.yaml

Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2024-11-05 08:38:34 +01:00
LocalAI [bot]
ce8045f521 chore: ⬆️ Update ggerganov/llama.cpp to d5a409e57fe8bd24fef597ab8a31110d390a6392 (#4079)
⬆️ Update ggerganov/llama.cpp

Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
Co-authored-by: Ettore Di Giacinto <mudler@users.noreply.github.com>
2024-11-05 05:01:26 +00:00
Ettore Di Giacinto
1bf5a11437 models(gallery): add g2-9b-sugarquill-v0 (#4073)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-11-04 22:30:17 +01:00
Ettore Di Giacinto
2daa5e6be0 models(gallery): add cybertron-v4-qw7b-mgs (#4063)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-11-04 22:30:07 +01:00
Ettore Di Giacinto
b91aa288b5 models(gallery): add g2-9b-aletheia-v1 (#4056)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-11-04 19:59:14 +01:00
Ettore Di Giacinto
43187d1aba models(gallery): add llama-3.1-whiterabbitneo-2-8b (#4043)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-11-04 11:28:16 +01:00
Ettore Di Giacinto
97b730e238 models(gallery): add whiterabbitneo-2.5-qwen-2.5-coder-7b (#4042)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-11-04 11:23:17 +01:00
LocalAI [bot]
d11ed5287b chore: ⬆️ Update ggerganov/llama.cpp to 9f409893519b4a6def46ef80cd6f5d05ac0fb157 (#4041)
⬆️ Update ggerganov/llama.cpp

Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2024-11-04 09:30:04 +01:00
LocalAI [bot]
81ac490202 chore: ⬆️ Update mudler/go-piper to e10ca041a885d4a8f3871d52924b47792d5e5aa0 (#3949)
⬆️ Update mudler/go-piper

Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2024-11-03 21:39:43 +00:00
LocalAI [bot]
e53dd4a57b chore: ⬆️ Update ggerganov/llama.cpp to 9830b6923b61f1e652a35afeac77aa5f886dad09 (#4040)
⬆️ Update ggerganov/llama.cpp

Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2024-11-03 13:01:56 +00:00
Ettore Di Giacinto
d274df2fe2 models(gallery): add control-8b-v1.1 (#4039)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-11-03 10:00:20 +01:00
Arnaud A
0b3a55b9fe docs: Update documentation for text-to-audio feature regarding response_format (#4038)
2024-11-03 02:15:54 +00:00
LocalAI [bot]
abd5eea66d chore: ⬆️ Update ggerganov/llama.cpp to 42cadc74bda60afafb45b71b1a39d150ede0ed4d (#4037)
⬆️ Update ggerganov/llama.cpp

Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2024-11-02 22:33:55 +00:00
Arnaud A
65c3df392c feat(tts): Implement naive response_format for tts endpoint (#4035)
Signed-off-by: n-Arno <arnaud.alcabas@gmail.com>
2024-11-02 19:13:35 +00:00
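A minimal client-side sketch of the new knob, assuming a LocalAI instance listening on localhost:8080 and hypothetical model/voice names; the exact set of accepted response_format values depends on the server build:

```python
# Hedged sketch: ask LocalAI's OpenAI-compatible speech endpoint for audio and
# select the output container via response_format (the knob added in #4035).
# "voice-en-us" and "en-us" are hypothetical names; use an installed TTS model.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8080/v1", api_key="not-needed")
speech = client.audio.speech.create(
    model="voice-en-us",          # hypothetical TTS model name
    voice="en-us",                # hypothetical voice identifier
    input="Hello from LocalAI",
    response_format="wav",        # naive format selection per this commit
)
speech.write_to_file("out.wav")   # persist the returned audio bytes
```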
Ettore Di Giacinto
57908df956 chore(docs): add top-header partial (#4034)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-11-02 12:07:40 +01:00
Ettore Di Giacinto
26e522a558 models(gallery): add smollm2-1.7b-instruct (#4033)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-11-02 11:01:39 +01:00
Ettore Di Giacinto
817685e4c1 models(gallery): add starcannon-unleashed-12b-v1.0 (#4032)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-11-02 10:44:51 +01:00
LocalAI [bot]
bcad3f3018 chore: ⬆️ Update ggerganov/llama.cpp to 418f5eef262cea07c2af4f45ee6a88d882221fcb (#4030)
⬆️ Update ggerganov/llama.cpp

Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2024-11-02 09:06:06 +01:00
LocalAI [bot]
303370ad87 chore: ⬆️ Update ggerganov/whisper.cpp to 0377596b77a3602e36430320cbe45f8c305ef04a (#4031)
⬆️ Update ggerganov/whisper.cpp

Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2024-11-01 22:17:04 +00:00
Ettore Di Giacinto
a9fb7174ba models(gallery): add llama3.1-bestmix-chem-einstein-8b (#4028)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-11-01 17:36:31 +01:00
LocalAI [bot]
6d6f50340f chore: ⬆️ Update ggerganov/whisper.cpp to aa037a60f32018f32e54be3531ec6cc7802899eb (#4026)
⬆️ Update ggerganov/whisper.cpp

Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2024-11-01 11:22:22 +01:00
LocalAI [bot]
6a136b2a4b chore: ⬆️ Update ggerganov/llama.cpp to ab3d71f97f5b2915a229099777af00d3eada1d24 (#4025)
⬆️ Update ggerganov/llama.cpp

Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2024-10-31 21:40:33 +00:00
Ettore Di Giacinto
8f7045cfa6 chore(tests): bump timeouts (#4024)
To avoid flaky runs

Signed-off-by: Ettore Di Giacinto <mudler@users.noreply.github.com>
2024-10-31 15:40:43 +01:00
Ettore Di Giacinto
61c964dce7 fix(grpc): pass by modelpath (#4023)
Instead of trying to derive it from the model file: in backends that
specify an HF URL, that derivation results in fragile logic.

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-10-31 12:12:22 +01:00
Ettore Di Giacinto
48d621c64e models(gallery): add spiral-da-hyah-qwen2.5-72b-i1 (#4022)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-10-31 10:28:26 +01:00
LocalAI [bot]
661dbbf2b4 chore: ⬆️ Update ggerganov/whisper.cpp to 19dca2bb1464326587cbeb7af00f93c4a59b01fd (#4020)
⬆️ Update ggerganov/whisper.cpp

Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2024-10-31 09:56:06 +01:00
LocalAI [bot]
254f644c5f chore: ⬆️ Update ggerganov/llama.cpp to 61408e7fad082dc44a11c8a9f1398da4837aad44 (#4021)
⬆️ Update ggerganov/llama.cpp

Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2024-10-31 09:55:42 +01:00
Ettore Di Giacinto
88edb1e2af chore(tests): expand timeout (#4019)
Signed-off-by: Ettore Di Giacinto <mudler@users.noreply.github.com>
2024-10-30 15:34:44 +01:00
Ettore Di Giacinto
640a3f1bfe chore(embedded): modify phi-2 configuration URL
Signed-off-by: Ettore Di Giacinto <mudler@users.noreply.github.com>
2024-10-30 10:58:03 +01:00
Ettore Di Giacinto
b1243453f4 chore(tests): fix examples url
Signed-off-by: Ettore Di Giacinto <mudler@users.noreply.github.com>
2024-10-30 10:57:21 +01:00
Ettore Di Giacinto
dfc651f643 chore(readme): update examples link
Signed-off-by: Ettore Di Giacinto <mudler@users.noreply.github.com>
2024-10-30 09:12:45 +01:00
Ettore Di Giacinto
d4978383ff chore: create examples/README to redirect to the new repository
Signed-off-by: Ettore Di Giacinto <mudler@users.noreply.github.com>
2024-10-30 09:11:32 +01:00
Dave
cde0139363 chore: drop examples folder now that LocalAI-examples has been created (#4017)
Signed-off-by: Dave Lee <dave@gray101.com>
2024-10-30 09:10:33 +01:00
Ettore Di Giacinto
3d4bb757d2 chore(deps): bump llama-cpp to 8f275a7c4593aa34147595a90282cf950a853690 (#4016)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-10-30 08:31:13 +01:00
LocalAI [bot]
a4e749c22f chore: ⬆️ Update ggerganov/whisper.cpp to 55e422109b3504d1a824935cc2681ada7ee9fd38 (#4015)
⬆️ Update ggerganov/whisper.cpp

Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2024-10-29 22:01:46 +00:00
LocalAI [bot]
25a9685e2f chore: ⬆️ Update ggerganov/whisper.cpp to d4bc413505b2fba98dffbb9a176ddd1b165941d0 (#4005)
⬆️ Update ggerganov/whisper.cpp

Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
Co-authored-by: Ettore Di Giacinto <mudler@users.noreply.github.com>
2024-10-29 15:07:43 +01:00
LocalAI [bot]
94d417c2b7 chore: ⬆️ Update ggerganov/llama.cpp to 61715d5cc83a28181df6a641846e4f6a740f3c74 (#4006)
⬆️ Update ggerganov/llama.cpp

Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2024-10-29 15:06:57 +01:00
Ettore Di Giacinto
b897d47e0f chore(deps): bump grpcio to 1.67.1 (#4009)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-10-29 15:04:21 +01:00
dependabot[bot]
3422d21346 chore(deps): Bump openai from 1.52.0 to 1.52.2 in /examples/functions (#4000)
Bumps [openai](https://github.com/openai/openai-python) from 1.52.0 to 1.52.2.
- [Release notes](https://github.com/openai/openai-python/releases)
- [Changelog](https://github.com/openai/openai-python/blob/main/CHANGELOG.md)
- [Commits](https://github.com/openai/openai-python/compare/v1.52.0...v1.52.2)

---
updated-dependencies:
- dependency-name: openai
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-10-29 09:30:03 +01:00
dependabot[bot]
a7917a2150 chore(deps): Bump frozenlist from 1.4.1 to 1.5.0 in /examples/langchain/langchainpy-localai-example (#3992)
chore(deps): Bump frozenlist

Bumps [frozenlist](https://github.com/aio-libs/frozenlist) from 1.4.1 to 1.5.0.
- [Release notes](https://github.com/aio-libs/frozenlist/releases)
- [Changelog](https://github.com/aio-libs/frozenlist/blob/master/CHANGES.rst)
- [Commits](https://github.com/aio-libs/frozenlist/compare/v1.4.1...v1.5.0)

---
updated-dependencies:
- dependency-name: frozenlist
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-10-29 09:29:20 +01:00
dependabot[bot]
7b23b894b4 chore(deps): Bump tqdm from 4.66.5 to 4.66.6 in /examples/langchain/langchainpy-localai-example (#3991)
chore(deps): Bump tqdm

Bumps [tqdm](https://github.com/tqdm/tqdm) from 4.66.5 to 4.66.6.
- [Release notes](https://github.com/tqdm/tqdm/releases)
- [Commits](https://github.com/tqdm/tqdm/compare/v4.66.5...v4.66.6)

---
updated-dependencies:
- dependency-name: tqdm
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-10-29 09:28:10 +01:00
dependabot[bot]
15c083f731 chore(deps): Bump llama-index from 0.11.19 to 0.11.20 in /examples/chainlit (#3990)
chore(deps): Bump llama-index in /examples/chainlit

Bumps [llama-index](https://github.com/run-llama/llama_index) from 0.11.19 to 0.11.20.
- [Release notes](https://github.com/run-llama/llama_index/releases)
- [Changelog](https://github.com/run-llama/llama_index/blob/main/CHANGELOG.md)
- [Commits](https://github.com/run-llama/llama_index/compare/v0.11.19...v0.11.20)

---
updated-dependencies:
- dependency-name: llama-index
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-10-29 09:27:44 +01:00
dependabot[bot]
293eaad69d chore(deps): Bump openai from 1.52.0 to 1.52.2 in /examples/langchain-chroma (#3989)
chore(deps): Bump openai in /examples/langchain-chroma

Bumps [openai](https://github.com/openai/openai-python) from 1.52.0 to 1.52.2.
- [Release notes](https://github.com/openai/openai-python/releases)
- [Changelog](https://github.com/openai/openai-python/blob/main/CHANGELOG.md)
- [Commits](https://github.com/openai/openai-python/compare/v1.52.0...v1.52.2)

---
updated-dependencies:
- dependency-name: openai
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-10-29 09:26:45 +01:00
dependabot[bot]
605126db8a chore(deps): Bump llama-index from 0.11.19 to 0.11.20 in /examples/langchain-chroma (#3988)
chore(deps): Bump llama-index in /examples/langchain-chroma

Bumps [llama-index](https://github.com/run-llama/llama_index) from 0.11.19 to 0.11.20.
- [Release notes](https://github.com/run-llama/llama_index/releases)
- [Changelog](https://github.com/run-llama/llama_index/blob/main/CHANGELOG.md)
- [Commits](https://github.com/run-llama/llama_index/compare/v0.11.19...v0.11.20)

---
updated-dependencies:
- dependency-name: llama-index
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-10-29 09:26:12 +01:00
dependabot[bot]
3980beabd7 chore(deps): Bump docs/themes/hugo-theme-relearn from 06e70da to 28fce6b (#3986)
chore(deps): Bump docs/themes/hugo-theme-relearn

Bumps [docs/themes/hugo-theme-relearn](https://github.com/McShelby/hugo-theme-relearn) from `06e70da` to `28fce6b`.
- [Release notes](https://github.com/McShelby/hugo-theme-relearn/releases)
- [Commits](https://github.com/McShelby/hugo-theme-relearn/compare/06e70da8a6...28fce6b04c)

---
updated-dependencies:
- dependency-name: docs/themes/hugo-theme-relearn
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-10-29 09:25:42 +01:00
Ettore Di Giacinto
11d3ce9edb Revert "chore(deps): Bump torchvision from 0.18.1+rocm6.0 to 0.20.0+cu118 in /backend/python/diffusers" (#4008)
Revert "chore(deps): Bump torchvision from 0.18.1+rocm6.0 to 0.20.0+cu118 in …"

This reverts commit 14cb620cd8.
2024-10-29 09:25:17 +01:00
dependabot[bot]
14cb620cd8 chore(deps): Bump torchvision from 0.18.1+rocm6.0 to 0.20.0+cu118 in /backend/python/diffusers (#3997)
chore(deps): Bump torchvision in /backend/python/diffusers

Bumps torchvision from 0.18.1+rocm6.0 to 0.20.0+cu118.

---
updated-dependencies:
- dependency-name: torchvision
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-10-28 23:33:35 +00:00
Ettore Di Giacinto
841dfefd62 models(gallery): add moe-girl-800ma-3bt (#3995)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-10-28 19:41:34 +01:00
Ettore Di Giacinto
d1cb2467fd models(gallery): add granite-3.0-1b-a400m-instruct (#3994)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-10-28 19:33:52 +01:00
dependabot[bot]
a8e10f03e9 chore(deps): Bump openai from 1.51.2 to 1.52.2 in /examples/langchain/langchainpy-localai-example (#3993)
chore(deps): Bump openai

Bumps [openai](https://github.com/openai/openai-python) from 1.51.2 to 1.52.2.
- [Release notes](https://github.com/openai/openai-python/releases)
- [Changelog](https://github.com/openai/openai-python/blob/main/CHANGELOG.md)
- [Commits](https://github.com/openai/openai-python/compare/v1.51.2...v1.52.2)

---
updated-dependencies:
- dependency-name: openai
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-10-28 19:33:05 +01:00
Ettore Di Giacinto
94010a0a44 models(gallery): add meraj-mini (#3987)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-10-28 19:12:59 +01:00
Ettore Di Giacinto
75bc933dc4 models(gallery): add l3-nymeria-maid-8b (#3985)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-10-28 19:00:55 +01:00
Ettore Di Giacinto
8de0f21f7c models(gallery): add llama-3-whiterabbitneo-8b-v2.0 (#3984)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-10-28 16:35:24 +01:00
Ettore Di Giacinto
66b03b54cb models(gallery): add magnum-v4-9b (#3983)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-10-28 16:24:14 +01:00
Ettore Di Giacinto
9ea8159683 models(gallery): add delirium-v1 (#3981)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-10-28 10:09:53 +01:00
Ettore Di Giacinto
c33083aeca models(gallery): add quill-v1 (#3980)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-10-28 09:59:21 +01:00
LocalAI [bot]
eb34f838f8 chore: ⬆️ Update ggerganov/llama.cpp to 8841ce3f439de6e770f70319b7e08b6613197ea7 (#3979)
⬆️ Update ggerganov/llama.cpp

Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2024-10-27 21:43:51 +00:00
Ettore Di Giacinto
8327e85e34 models(gallery): add llama-3.1-hawkish-8b (#3978)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-10-27 09:08:34 +01:00
Ettore Di Giacinto
a8c08d83d0 models(gallery): add l3.1-70blivion-v0.1-rc1-70b-i1 (#3977)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-10-27 09:06:27 +01:00
LocalAI [bot]
e314cdcdde chore: ⬆️ Update ggerganov/llama.cpp to cc2983d3753c94a630ca7257723914d4c4f6122b (#3976)
⬆️ Update ggerganov/llama.cpp

Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2024-10-26 21:40:42 +00:00
Ettore Di Giacinto
4528e969c9 models(gallery): add thebeagle-v2beta-32b-mgs (#3975)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-10-26 14:56:41 +02:00
Ettore Di Giacinto
175ae751ba models(gallery): add llama-3.2-3b-instruct-uncensored (#3974)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-10-26 14:56:02 +02:00
Ettore Di Giacinto
43bfdc9561 models(gallery): add darkest-muse-v1 (#3973)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-10-26 14:52:55 +02:00
Ettore Di Giacinto
546dce68a6 chore: change url to github repository (#3972)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-10-26 14:50:18 +02:00
Ettore Di Giacinto
82db2fa425 models(gallery): add llama-3.2-sun-2.5b-chat (#3971)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-10-26 09:09:22 +02:00
Ettore Di Giacinto
a27af2d7ad models(gallery): add llama3.1-darkstorm-aspire-8b (#3970)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-10-26 09:05:18 +02:00
Ettore Di Giacinto
9f43f37150 models(gallery): add l3.1-moe-2x8b-v0.2 (#3969)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-10-26 09:02:27 +02:00
Ettore Di Giacinto
3ad920b50a fix(parler-tts): pin protobuf (#3963)
* fix(parler-tts): pin protobuf

Signed-off-by: Ettore Di Giacinto <mudler@users.noreply.github.com>
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* debug

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* Re-apply workaround

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

---------

Signed-off-by: Ettore Di Giacinto <mudler@users.noreply.github.com>
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-10-25 23:50:12 +02:00
LocalAI [bot]
dbe7ac484c chore: ⬆️ Update ggerganov/llama.cpp to 668750357e66bfa3d1504b65699f5a0dfe3cb7cb (#3965)
⬆️ Update ggerganov/llama.cpp

Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2024-10-25 21:42:18 +00:00
Ettore Di Giacinto
d9905ba050 fix(ci): drop grpcio-tools pin to apple CI test run (#3964)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-10-25 12:59:37 +02:00
Ettore Di Giacinto
dd2e243997 chore(python): update backend sample to consume grpcio from venv (#3961)
Backends may also depend on grpcio and require versions different from
the ones installed in the system.

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-10-25 12:32:48 +02:00
Ettore Di Giacinto
fd905b483b fix(gallery): overrides for parler-tts in the gallery (#3962)
chore(parler-tts): fix overrides in the gallery

Signed-off-by: Ettore Di Giacinto <mudler@users.noreply.github.com>
2024-10-25 12:32:37 +02:00
Ettore Di Giacinto
9c5cd9b38b fix(parler-tts): pin grpcio-tools (#3960)
It seems we require a specific version to build the backend files.

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-10-25 12:25:29 +02:00
Sertaç Özercan
07ce0a3c17 feat: add flux single file support (#3959)
feat: flux pipeline single file

Signed-off-by: Sertac Ozercan <sozercan@gmail.com>
2024-10-25 10:12:43 +02:00
LocalAI [bot]
5be2d22117 chore: ⬆️ Update ggerganov/llama.cpp to 958367bf530d943a902afa1ce1c342476098576b (#3956)
⬆️ Update ggerganov/llama.cpp

Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2024-10-24 22:45:26 +02:00
Ettore Di Giacinto
e88468640f fix(parler-tts): use latest audiotools (#3954)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-10-24 11:40:35 +02:00
LocalAI [bot]
81890e76a0 chore: ⬆️ Update ggerganov/llama.cpp to 0a1c750c80147687df267114c81956757cc14382 (#3948)
⬆️ Update ggerganov/llama.cpp

Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2024-10-24 10:08:55 +02:00
LocalAI [bot]
a91c2e7aaa chore: ⬆️ Update ggerganov/whisper.cpp to 0fbaac9c891055796456df7b9122a70c220f9ca1 (#3950)
⬆️ Update ggerganov/whisper.cpp

Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2024-10-24 10:08:20 +02:00
Mauro Morales
7748eb6553 docs: add Homebrew as an option to install on macOS (#3946)
Add Homebrew as an option to install on macOS

Signed-off-by: Mauro Morales <contact@mauromorales.com>
2024-10-23 20:02:08 +02:00
Ettore Di Giacinto
835932e95e feat: update proto file
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-10-23 15:46:06 +02:00
Ettore Di Giacinto
ae1ec4e096 feat(vllm): expose 'load_format' (#3943)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-10-23 15:34:57 +02:00
Ettore Di Giacinto
c75ecfa009 fix(phi3-vision): add multimodal template (#3944)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-10-23 15:34:45 +02:00
Ettore Di Giacinto
8737a65760 feat: allow to disable '/metrics' endpoints for local stats (#3945)
Seem the "/metrics" endpoint that is source of confusion as people tends
to believe we collect telemetry data just because we import
"opentelemetry", however it is still a good idea to allow to disable
even local metrics if not really required.

See also: https://github.com/mudler/LocalAI/issues/3942

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-10-23 15:34:32 +02:00
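Whether the endpoint is being served is easy to check from the outside; a small probe, assuming a LocalAI instance on localhost:8080 (this only touches the local HTTP route, no remote telemetry is involved):

```python
# Hedged sketch: probe the local /metrics endpoint to see if it is enabled.
import urllib.error
import urllib.request

try:
    with urllib.request.urlopen("http://localhost:8080/metrics", timeout=5) as resp:
        print("metrics endpoint enabled, HTTP", resp.status)
except urllib.error.HTTPError as err:
    # A 404 or similar here suggests the endpoint was disabled.
    print("metrics endpoint responded with HTTP", err.code)
except urllib.error.URLError as err:
    print("server unreachable:", err.reason)
```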
LocalAI [bot]
418c582430 chore: ⬆️ Update ggerganov/llama.cpp to c8c07d658a6cefc5a50cfdf6be7d726503612303 (#3940)
⬆️ Update ggerganov/llama.cpp

Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2024-10-23 11:17:21 +02:00
Dave
6fd0341eca chore: update go-piper to latest (#3939)
Signed-off-by: Dave Lee <dave@gray101.com>
2024-10-23 11:16:38 +02:00
Ettore Di Giacinto
ccc7cb0287 feat(templates): use a single template for multimodals messages (#3892)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-10-22 09:34:05 +02:00
LocalAI [bot]
a1d6cc93a8 chore: ⬆️ Update ggerganov/llama.cpp to e01c67affe450638162a1a457e2e57859ef6ebf0 (#3937)
⬆️ Update ggerganov/llama.cpp

Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2024-10-22 09:33:55 +02:00
LocalAI [bot]
dc14d80f51 docs: ⬆️ update docs version mudler/LocalAI (#3936)
⬆️ Update docs version mudler/LocalAI

Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2024-10-22 09:33:29 +02:00
dependabot[bot]
b8eb10b6b7 chore(deps): Bump yarl from 1.15.5 to 1.16.0 in /examples/langchain/langchainpy-localai-example (#3938)
chore(deps): Bump yarl

Bumps [yarl](https://github.com/aio-libs/yarl) from 1.15.5 to 1.16.0.
- [Release notes](https://github.com/aio-libs/yarl/releases)
- [Changelog](https://github.com/aio-libs/yarl/blob/master/CHANGES.rst)
- [Commits](https://github.com/aio-libs/yarl/compare/v1.15.5...v1.16.0)

---
updated-dependencies:
- dependency-name: yarl
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-10-22 09:33:14 +02:00
dependabot[bot]
0f6b4513bf chore(deps): Bump openai from 1.51.2 to 1.52.0 in /examples/functions (#3901)
Bumps [openai](https://github.com/openai/openai-python) from 1.51.2 to 1.52.0.
- [Release notes](https://github.com/openai/openai-python/releases)
- [Changelog](https://github.com/openai/openai-python/blob/main/CHANGELOG.md)
- [Commits](https://github.com/openai/openai-python/compare/v1.51.2...v1.52.0)

---
updated-dependencies:
- dependency-name: openai
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-10-22 09:32:55 +02:00
dependabot[bot]
6f0c936f74 chore(deps): Bump marshmallow from 3.22.0 to 3.23.0 in /examples/langchain/langchainpy-localai-example (#3917)
chore(deps): Bump marshmallow

Bumps [marshmallow](https://github.com/marshmallow-code/marshmallow) from 3.22.0 to 3.23.0.
- [Changelog](https://github.com/marshmallow-code/marshmallow/blob/dev/CHANGELOG.rst)
- [Commits](https://github.com/marshmallow-code/marshmallow/compare/3.22.0...3.23.0)

---
updated-dependencies:
- dependency-name: marshmallow
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-10-22 09:32:45 +02:00
dependabot[bot]
42136b6f27 chore(deps): Bump llama-index from 0.11.17 to 0.11.19 in /examples/langchain-chroma (#3907)
chore(deps): Bump llama-index in /examples/langchain-chroma

Bumps [llama-index](https://github.com/run-llama/llama_index) from 0.11.17 to 0.11.19.
- [Release notes](https://github.com/run-llama/llama_index/releases)
- [Changelog](https://github.com/run-llama/llama_index/blob/main/CHANGELOG.md)
- [Commits](https://github.com/run-llama/llama_index/compare/v0.11.17...v0.11.19)

---
updated-dependencies:
- dependency-name: llama-index
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-10-22 09:32:34 +02:00
dependabot[bot]
2810e3ea5c chore(deps): Bump openai from 1.51.2 to 1.52.0 in /examples/langchain-chroma (#3908)
chore(deps): Bump openai in /examples/langchain-chroma

Bumps [openai](https://github.com/openai/openai-python) from 1.51.2 to 1.52.0.
- [Release notes](https://github.com/openai/openai-python/releases)
- [Changelog](https://github.com/openai/openai-python/blob/main/CHANGELOG.md)
- [Commits](https://github.com/openai/openai-python/compare/v1.51.2...v1.52.0)

---
updated-dependencies:
- dependency-name: openai
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-10-22 09:32:14 +02:00
dependabot[bot]
11d34e38dc chore(deps): Bump yarl from 1.15.2 to 1.15.5 in /examples/langchain/langchainpy-localai-example (#3921)
chore(deps): Bump yarl

Bumps [yarl](https://github.com/aio-libs/yarl) from 1.15.2 to 1.15.5.
- [Release notes](https://github.com/aio-libs/yarl/releases)
- [Changelog](https://github.com/aio-libs/yarl/blob/master/CHANGES.rst)
- [Commits](https://github.com/aio-libs/yarl/compare/v1.15.2...v1.15.5)

---
updated-dependencies:
- dependency-name: yarl
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-10-22 09:31:47 +02:00
dependabot[bot]
06951cdd6b chore(deps): Bump sqlalchemy from 2.0.35 to 2.0.36 in /examples/langchain/langchainpy-localai-example (#3920)
chore(deps): Bump sqlalchemy

Bumps [sqlalchemy](https://github.com/sqlalchemy/sqlalchemy) from 2.0.35 to 2.0.36.
- [Release notes](https://github.com/sqlalchemy/sqlalchemy/releases)
- [Changelog](https://github.com/sqlalchemy/sqlalchemy/blob/main/CHANGES.rst)
- [Commits](https://github.com/sqlalchemy/sqlalchemy/commits)

---
updated-dependencies:
- dependency-name: sqlalchemy
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-10-22 09:31:30 +02:00
dependabot[bot]
103af480c7 chore(deps): Bump docs/themes/hugo-theme-relearn from 007cc20 to 06e70da (#3932)
chore(deps): Bump docs/themes/hugo-theme-relearn

Bumps [docs/themes/hugo-theme-relearn](https://github.com/McShelby/hugo-theme-relearn) from `007cc20` to `06e70da`.
- [Release notes](https://github.com/McShelby/hugo-theme-relearn/releases)
- [Commits](https://github.com/McShelby/hugo-theme-relearn/compare/007cc20686...06e70da8a6)

---
updated-dependencies:
- dependency-name: docs/themes/hugo-theme-relearn
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-10-22 09:31:15 +02:00
dependabot[bot]
db401b4d84 chore(deps): Bump langchain-community from 0.3.2 to 0.3.3 in /examples/langchain/langchainpy-localai-example (#3923)
chore(deps): Bump langchain-community

Bumps [langchain-community](https://github.com/langchain-ai/langchain) from 0.3.2 to 0.3.3.
- [Release notes](https://github.com/langchain-ai/langchain/releases)
- [Commits](https://github.com/langchain-ai/langchain/compare/langchain-community==0.3.2...langchain-community==0.3.3)

---
updated-dependencies:
- dependency-name: langchain-community
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-10-22 09:30:52 +02:00
dependabot[bot]
e0c876aae1 chore(deps): Bump langchain from 0.3.3 to 0.3.4 in /examples/functions (#3900)
Bumps [langchain](https://github.com/langchain-ai/langchain) from 0.3.3 to 0.3.4.
- [Release notes](https://github.com/langchain-ai/langchain/releases)
- [Commits](https://github.com/langchain-ai/langchain/compare/langchain==0.3.3...langchain==0.3.4)

---
updated-dependencies:
- dependency-name: langchain
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-10-22 09:30:28 +02:00
dependabot[bot]
5e0847b3d7 chore(deps): Bump weaviate-client from 4.8.1 to 4.9.0 in /examples/chainlit (#3894)
chore(deps): Bump weaviate-client in /examples/chainlit

Bumps [weaviate-client](https://github.com/weaviate/weaviate-python-client) from 4.8.1 to 4.9.0.
- [Release notes](https://github.com/weaviate/weaviate-python-client/releases)
- [Changelog](https://github.com/weaviate/weaviate-python-client/blob/main/docs/changelog.rst)
- [Commits](https://github.com/weaviate/weaviate-python-client/compare/v4.8.1...v4.9.0)

---
updated-dependencies:
- dependency-name: weaviate-client
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-10-22 09:30:16 +02:00
dependabot[bot]
ee5ca49bc1 chore(deps): Bump llama-index from 0.11.17 to 0.11.19 in /examples/chainlit (#3893)
chore(deps): Bump llama-index in /examples/chainlit

Bumps [llama-index](https://github.com/run-llama/llama_index) from 0.11.17 to 0.11.19.
- [Release notes](https://github.com/run-llama/llama_index/releases)
- [Changelog](https://github.com/run-llama/llama_index/blob/main/CHANGELOG.md)
- [Commits](https://github.com/run-llama/llama_index/compare/v0.11.17...v0.11.19)

---
updated-dependencies:
- dependency-name: llama-index
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-10-22 09:29:56 +02:00
Ettore Di Giacinto
015835dba2 models(gallery): add phi-3 vision (#3890)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-10-21 11:47:52 +02:00
LocalAI [bot]
313ea2c4d2 chore: ⬆️ Update ggerganov/llama.cpp to 45f097645efb11b6d09a5b4adbbfd7c312ac0126 (#3889)
⬆️ Update ggerganov/llama.cpp

Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2024-10-20 21:40:26 +00:00
Ettore Di Giacinto
26c4058be4 fix(vllm): do not set videos if we don't have any (#3885)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-10-20 11:44:28 +02:00
Ettore Di Giacinto
32db787991 chore(deps): bump llama-cpp to cda0e4b648dde8fac162b3430b14a99597d3d74f (#3884)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-10-20 00:26:49 +02:00
Ettore Di Giacinto
011565aaa3 chore(openvoice): pin faster-whisper in requirements-intel.txt
Signed-off-by: Ettore Di Giacinto <mudler@users.noreply.github.com>
2024-10-19 23:04:42 +02:00
Ettore Di Giacinto
c967ac37bc chore(openvoice/deps): pin numpy in requirements-intel.txt
Signed-off-by: Ettore Di Giacinto <mudler@users.noreply.github.com>
2024-10-19 16:01:31 +02:00
Ettore Di Giacinto
64721606b9 chore(deps): pin deps in requirements-intel.txt
Signed-off-by: Ettore Di Giacinto <mudler@users.noreply.github.com>
2024-10-19 13:56:46 +02:00
Ettore Di Giacinto
7c502ec209 Revert "chore(deps): Bump gradio from 3.48.0 to 5.0.0 in /backend/python/openvoice in the pip group" (#3881)
Revert "chore(deps): Bump gradio from 3.48.0 to 5.0.0 in /backend/python/open…"

This reverts commit 7ee25ecfb3.
2024-10-19 13:54:40 +02:00
dependabot[bot]
7ee25ecfb3 chore(deps): Bump gradio from 3.48.0 to 5.0.0 in /backend/python/openvoice in the pip group (#3880)
chore(deps): Bump gradio in /backend/python/openvoice in the pip group

Bumps the pip group in /backend/python/openvoice with 1 update: [gradio](https://github.com/gradio-app/gradio).

Updates `gradio` from 3.48.0 to 5.0.0
- [Release notes](https://github.com/gradio-app/gradio/releases)
- [Changelog](https://github.com/gradio-app/gradio/blob/main/CHANGELOG.md)
- [Commits](https://github.com/gradio-app/gradio/compare/gradio@3.48.0...gradio@5.0.0)

---
updated-dependencies:
- dependency-name: gradio
  dependency-type: direct:production
  dependency-group: pip
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-10-19 11:24:34 +00:00
Ettore Di Giacinto
cdbcac6a78 fix(sycl): drop gradio pin
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-10-19 11:16:23 +02:00
Ettore Di Giacinto
87f78ecfa9 chore(openvoice): pin gradio version in requirements.txt
Signed-off-by: Ettore Di Giacinto <mudler@users.noreply.github.com>
2024-10-19 09:00:25 +02:00
LocalAI [bot]
cffecda48c chore: ⬆️ Update ggerganov/llama.cpp to afd9909a6481402844aecefa8a8908afdd7f52f1 (#3879)
⬆️ Update ggerganov/llama.cpp

Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2024-10-18 21:43:38 +00:00
Ettore Di Giacinto
963e5903fc chore(deps): downgrade networkx
Signed-off-by: Ettore Di Giacinto <mudler@users.noreply.github.com>
2024-10-18 19:36:55 +02:00
Ettore Di Giacinto
9c425d55f6 chore(deps): pin networkx (#3878)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-10-18 18:21:48 +02:00
Ettore Di Giacinto
398a9efa3a chore(deps): pin numpy (#3876)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-10-18 16:59:31 +02:00
Ettore Di Giacinto
8f2cf52f3b chore(deps): pin packaging (#3875)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-10-18 15:18:56 +02:00
Ettore Di Giacinto
134ea1a37b fix(dependencies): move deps that bring pytorch (#3873)
* fix(dependencies): move deps that bring pytorch

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* chore(deps): pin llvmlite

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

---------

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-10-18 10:31:21 +02:00
Ettore Di Giacinto
3e77a17b26 fix(dependencies): pin pytorch version (#3872)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-10-18 09:11:59 +02:00
LocalAI [bot]
a26fb548b1 chore: ⬆️ Update ggerganov/whisper.cpp to a5abfe6a90495f7bf19fe70d016ecc255e97359c (#3870)
⬆️ Update ggerganov/whisper.cpp

Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2024-10-17 23:05:26 +02:00
LocalAI [bot]
08e1e2251e chore: ⬆️ Update ggerganov/llama.cpp to 99bd4ac28c32cd17c0e337ff5601393b033dc5fc (#3869)
⬆️ Update ggerganov/llama.cpp

Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2024-10-17 23:05:04 +02:00
Ettore Di Giacinto
dcabda42d1 fix(mamba): pin torch version (#3871)
causal-conv1d supports only torch 2.4.x, not torch 2.5.x

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-10-17 23:04:11 +02:00
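A small sketch of the constraint this pin encodes, with the 2.4.x window taken from the message above (the exact pinned release in the requirements file is an assumption):

```python
# Hedged sketch: assert the installed torch falls inside the 2.4.x window that
# causal-conv1d supports, per the commit above.
import torch
from packaging.version import Version

installed = Version(torch.__version__.split("+")[0])  # drop +cu118-style suffixes
assert Version("2.4.0") <= installed < Version("2.5.0"), (
    f"causal-conv1d needs torch 2.4.x, found {torch.__version__}"
)
```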
Ettore Di Giacinto
fd4043266b Update README.md
Signed-off-by: Ettore Di Giacinto <mudler@users.noreply.github.com>
2024-10-17 17:49:03 +02:00
Ettore Di Giacinto
e1db6dce82 feat(templates): add sprig to multimodal templates (#3868)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-10-17 17:34:20 +02:00
Ettore Di Giacinto
d5da8c3509 feat(templates): extract text from multimodal requests (#3866)
When offloading template construction to the backend, we want to keep
text around in case of multimodal requests.

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-10-17 17:33:50 +02:00
Ettore Di Giacinto
9db068388b fix(vllm): images and videos are base64 by default (#3867)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-10-17 17:32:57 +02:00
Ettore Di Giacinto
54c0f153e2 models(gallery): add meissa-qwen2.5-7b-instruct (#3865)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-10-17 11:25:32 +02:00
Ettore Di Giacinto
e45e8a58fc models(gallery): add baldur-8b (#3864)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-10-17 11:20:56 +02:00
Ettore Di Giacinto
52bc463a3f models(gallery): add darkens-8b (#3863)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-10-17 11:16:41 +02:00
Ettore Di Giacinto
0da16c73ba models(gallery): add tor-8b (#3862)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-10-17 11:10:36 +02:00
Ettore Di Giacinto
e416843f22 models(gallery): add theia-llama-3.1-8b-v1 (#3861)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-10-17 11:06:24 +02:00
Ettore Di Giacinto
e65e3253a3 models(gallery): add apollo2-9b (#3860)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-10-17 10:16:52 +02:00
Ettore Di Giacinto
bc7d4586ed models(gallery): add mn-lulanum-12b-fix-i1 (#3859)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-10-17 10:08:57 +02:00
Ettore Di Giacinto
056d4b4fc9 models(gallery): add phi-3.5-mini-titanfusion-0.2 (#3857)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-10-17 10:06:38 +02:00
Ettore Di Giacinto
5927f9e43e models(gallery): add l3.1-etherealrainbow-v1.0-rc1-8b (#3856)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-10-17 10:03:08 +02:00
Ettore Di Giacinto
98dfa363db models(gallery): add qevacot-7b-v2 (#3855)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-10-17 09:59:42 +02:00
Ettore Di Giacinto
92cd538829 models(gallery): add llama-3.1-nemotron-70b-instruct-hf (#3854)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-10-17 09:56:07 +02:00
Ettore Di Giacinto
cdcfb2617c Update README.md
Signed-off-by: Ettore Di Giacinto <mudler@users.noreply.github.com>
2024-10-17 09:46:26 +02:00
LocalAI [bot]
1a9299a7c0 chore: ⬆️ Update ggerganov/whisper.cpp to d3f7137cc9befa6d74dc4085de2b664b97b7c8bb (#3852)
⬆️ Update ggerganov/whisper.cpp

Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2024-10-17 09:21:54 +02:00
LocalAI [bot]
a60b9b7a38 chore: ⬆️ Update ggerganov/llama.cpp to 9e041024481f6b249ab8918e18b9477f873b5a5e (#3853)
⬆️ Update ggerganov/llama.cpp

Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2024-10-16 21:41:30 +00:00
Ettore Di Giacinto
1b44a5a3b7 chore(deps): bump grpcio to 1.67.0 (#3851)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-10-16 18:39:28 +02:00
Ettore Di Giacinto
fdf1452c6b models(gallery): add mahou-1.5-llama3.1-70b-i1 (#3850)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-10-16 18:37:01 +02:00
Ettore Di Giacinto
773cec77a2 models(gallery): add tsunami-0.5x-7b-instruct-i1 (#3849)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-10-16 18:31:50 +02:00
Ettore Di Giacinto
585e0745da models(gallery): add astral-fusion-neural-happy-l3.1-8b (#3848)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-10-16 18:28:51 +02:00
Ettore Di Giacinto
41db6668f0 models(gallery): add doctoraifinetune-3.1-8b-i1 (#3846)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-10-16 09:34:57 +02:00
Ettore Di Giacinto
c9f28e2b56 models(gallery): add ml-ms-etheris-123b (#3845)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-10-16 09:34:12 +02:00
Ettore Di Giacinto
6afe9c8fda models(gallery): add llama-3.2-3b-reasoning-time (#3844)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-10-16 09:15:10 +02:00
Ettore Di Giacinto
f166541ac3 models(gallery): add llama-3.2-chibi-3b (#3843)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-10-16 09:12:58 +02:00
LocalAI [bot]
7ddf486b37 chore: ⬆️ Update ggerganov/llama.cpp to 755a9b2bf00fbae988e03a47e852b66eaddd113a (#3841)
⬆️ Update ggerganov/llama.cpp

Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2024-10-16 09:01:29 +02:00
LocalAI [bot]
5f130febb8 chore: ⬆️ Update ggerganov/whisper.cpp to b6049060dd2341b7816d2bce7dc7451c1665828e (#3842)
⬆️ Update ggerganov/whisper.cpp

Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2024-10-15 21:41:29 +00:00
Ettore Di Giacinto
b82577d642 fix(llama.cpp): consider also native builds (#3839)
This is in order to also identify builds which are not using
capability-based alternatives.

For instance, there are cases where we build the backend only natively
on the host.

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-10-15 09:41:53 +02:00
Franco Lombardo
97cf028175 chore: update integrations.md with LLPhant (#3838)
Signed-off-by: Franco Lombardo <f.lombardo69@gmail.com>
2024-10-15 09:41:39 +02:00
LocalAI [bot]
094f808549 chore: ⬆️ Update ggerganov/whisper.cpp to 06a1da9daff94c1bf1b1d38950628264fe443f76 (#3836)
⬆️ Update ggerganov/whisper.cpp

Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2024-10-15 09:41:11 +02:00
dependabot[bot]
18f9e11f1a chore(deps): Bump docs/themes/hugo-theme-relearn from e1a1f01 to 007cc20 (#3835)
chore(deps): Bump docs/themes/hugo-theme-relearn

Bumps [docs/themes/hugo-theme-relearn](https://github.com/McShelby/hugo-theme-relearn) from `e1a1f01` to `007cc20`.
- [Release notes](https://github.com/McShelby/hugo-theme-relearn/releases)
- [Commits](https://github.com/McShelby/hugo-theme-relearn/compare/e1a1f01f4c...007cc20686)

---
updated-dependencies:
- dependency-name: docs/themes/hugo-theme-relearn
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-10-15 09:40:53 +02:00
dependabot[bot]
18c35ee86f chore(deps): Bump numpy from 2.1.1 to 2.1.2 in /examples/langchain/langchainpy-localai-example (#3833)
chore(deps): Bump numpy

Bumps [numpy](https://github.com/numpy/numpy) from 2.1.1 to 2.1.2.
- [Release notes](https://github.com/numpy/numpy/releases)
- [Changelog](https://github.com/numpy/numpy/blob/main/doc/RELEASE_WALKTHROUGH.rst)
- [Commits](https://github.com/numpy/numpy/compare/v2.1.1...v2.1.2)

---
updated-dependencies:
- dependency-name: numpy
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-10-15 09:40:30 +02:00
dependabot[bot]
53d1db1da0 chore(deps): Bump yarl from 1.15.1 to 1.15.2 in /examples/langchain/langchainpy-localai-example (#3832)
chore(deps): Bump yarl

Bumps [yarl](https://github.com/aio-libs/yarl) from 1.15.1 to 1.15.2.
- [Release notes](https://github.com/aio-libs/yarl/releases)
- [Changelog](https://github.com/aio-libs/yarl/blob/master/CHANGES.rst)
- [Commits](https://github.com/aio-libs/yarl/compare/v1.15.1...v1.15.2)

---
updated-dependencies:
- dependency-name: yarl
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-10-15 09:40:06 +02:00
dependabot[bot]
13e7432b89 chore(deps): Bump langchain-community from 0.3.1 to 0.3.2 in /examples/langchain/langchainpy-localai-example (#3831)
chore(deps): Bump langchain-community

Bumps [langchain-community](https://github.com/langchain-ai/langchain) from 0.3.1 to 0.3.2.
- [Release notes](https://github.com/langchain-ai/langchain/releases)
- [Commits](https://github.com/langchain-ai/langchain/compare/langchain-community==0.3.1...langchain-community==0.3.2)

---
updated-dependencies:
- dependency-name: langchain-community
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-10-15 09:39:55 +02:00
LocalAI [bot]
ddd289d1af chore: ⬆️ Update ggerganov/llama.cpp to a89f75e1b7b90cb2d4d4c52ca53ef9e9b466aa45 (#3837)
⬆️ Update ggerganov/llama.cpp

Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2024-10-14 22:03:40 +00:00
dependabot[bot]
f9903d850f chore(deps): Bump charset-normalizer from 3.3.2 to 3.4.0 in /examples/langchain/langchainpy-localai-example (#3834)
chore(deps): Bump charset-normalizer

Bumps [charset-normalizer](https://github.com/Ousret/charset_normalizer) from 3.3.2 to 3.4.0.
- [Release notes](https://github.com/Ousret/charset_normalizer/releases)
- [Changelog](https://github.com/jawah/charset_normalizer/blob/master/CHANGELOG.md)
- [Commits](https://github.com/Ousret/charset_normalizer/compare/3.3.2...3.4.0)

---
updated-dependencies:
- dependency-name: charset-normalizer
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-10-14 20:05:36 +00:00
Ettore Di Giacinto
1e3cef6774 models(gallery): add edgerunner-command-nested-i1 (#3830)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-10-14 11:22:29 +02:00
Ettore Di Giacinto
dcf28e6a28 models(gallery): add cursorcore-yi-9b (#3829)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-10-14 11:20:09 +02:00
Ettore Di Giacinto
cb47a03880 models(gallery): add cursorcore-ds-6.7b-i1 (#3828)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-10-14 11:14:14 +02:00
Ettore Di Giacinto
d2a5a58e11 models(gallery): add cursorcore-qw2.5-1.5b-lc-i1 (#3827)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-10-14 11:10:34 +02:00
Ettore Di Giacinto
88115e4ddb models(gallery): add cursorcore-qw2.5-7b-i1 (#3826)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-10-14 11:06:11 +02:00
Ettore Di Giacinto
0a198e32de models(gallery): add eva-qwen2.5-14b-v0.1-i1 (#3825)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-10-14 10:53:28 +02:00
Ettore Di Giacinto
61388317c1 models(gallery): add hermes-3-llama-3.1-8b-lorablated (#3824)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-10-14 10:28:56 +02:00
Ettore Di Giacinto
304484c59b models(gallery): add hermes-3-llama-3.1-70b-lorablated (#3823)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-10-14 10:17:23 +02:00
Ettore Di Giacinto
93ba5ea14f models(gallery): add supernova-medius (#3822)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-10-14 09:00:37 +02:00
Ettore Di Giacinto
8ec828a654 models(gallery): add llama-3.1-8b-arliai-formax-v1.0-iq-arm-imatrix (#3821)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-10-14 08:59:41 +02:00
Ettore Di Giacinto
b6f681315a models(gallery): add llama3.1-gutenberg-doppel-70b (#3820)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-10-14 08:54:31 +02:00
Ettore Di Giacinto
d53e71021f models(gallery): add llama3.1-flammades-70b (#3819)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-10-14 08:50:02 +02:00
LocalAI [bot]
43146fa607 chore: ⬆️ Update ggerganov/llama.cpp to d4c19c0f5cdb1e512573e8c86c79e8d0238c73c4 (#3817)
⬆️ Update ggerganov/llama.cpp

Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2024-10-14 08:29:14 +02:00
Ettore Di Giacinto
f4dab82919 models(gallery): add llama-3_8b_unaligned_beta (#3818)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-10-13 23:07:00 +02:00
dependabot[bot]
f659304227 chore(deps): Bump openai from 1.51.1 to 1.51.2 in /examples/langchain-chroma (#3810)
chore(deps): Bump openai in /examples/langchain-chroma

Bumps [openai](https://github.com/openai/openai-python) from 1.51.1 to 1.51.2.
- [Release notes](https://github.com/openai/openai-python/releases)
- [Changelog](https://github.com/openai/openai-python/blob/main/CHANGELOG.md)
- [Commits](https://github.com/openai/openai-python/compare/v1.51.1...v1.51.2)

---
updated-dependencies:
- dependency-name: openai
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-10-13 11:24:59 +02:00
dependabot[bot]
fd493a4451 chore(deps): Bump aiohttp from 3.10.9 to 3.10.10 in /examples/langchain/langchainpy-localai-example (#3812)
chore(deps): Bump aiohttp

Bumps [aiohttp](https://github.com/aio-libs/aiohttp) from 3.10.9 to 3.10.10.
- [Release notes](https://github.com/aio-libs/aiohttp/releases)
- [Changelog](https://github.com/aio-libs/aiohttp/blob/master/CHANGES.rst)
- [Commits](https://github.com/aio-libs/aiohttp/compare/v3.10.9...v3.10.10)

---
updated-dependencies:
- dependency-name: aiohttp
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-10-13 10:13:06 +02:00
dependabot[bot]
181fa93168 chore(deps): Bump debugpy from 1.8.6 to 1.8.7 in /examples/langchain/langchainpy-localai-example (#3814)
chore(deps): Bump debugpy

Bumps [debugpy](https://github.com/microsoft/debugpy) from 1.8.6 to 1.8.7.
- [Release notes](https://github.com/microsoft/debugpy/releases)
- [Commits](https://github.com/microsoft/debugpy/compare/v1.8.6...v1.8.7)

---
updated-dependencies:
- dependency-name: debugpy
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-10-13 10:12:38 +02:00
dependabot[bot]
d5d9e78983 chore(deps): Bump langchain from 0.3.2 to 0.3.3 in /examples/functions (#3802)
Bumps [langchain](https://github.com/langchain-ai/langchain) from 0.3.2 to 0.3.3.
- [Release notes](https://github.com/langchain-ai/langchain/releases)
- [Commits](https://github.com/langchain-ai/langchain/compare/langchain==0.3.2...langchain==0.3.3)

---
updated-dependencies:
- dependency-name: langchain
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-10-13 10:10:51 +02:00
dependabot[bot]
a1a86aa1f7 chore(deps): Bump chromadb from 0.5.11 to 0.5.13 in /examples/langchain-chroma (#3811)
chore(deps): Bump chromadb in /examples/langchain-chroma

Bumps [chromadb](https://github.com/chroma-core/chroma) from 0.5.11 to 0.5.13.
- [Release notes](https://github.com/chroma-core/chroma/releases)
- [Changelog](https://github.com/chroma-core/chroma/blob/main/RELEASE_PROCESS.md)
- [Commits](https://github.com/chroma-core/chroma/compare/0.5.11...0.5.13)

---
updated-dependencies:
- dependency-name: chromadb
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-10-13 10:10:26 +02:00
dependabot[bot]
9695969913 chore(deps): Bump yarl from 1.13.1 to 1.15.1 in /examples/langchain/langchainpy-localai-example (#3816)
chore(deps): Bump yarl

Bumps [yarl](https://github.com/aio-libs/yarl) from 1.13.1 to 1.15.1.
- [Release notes](https://github.com/aio-libs/yarl/releases)
- [Changelog](https://github.com/aio-libs/yarl/blob/master/CHANGES.rst)
- [Commits](https://github.com/aio-libs/yarl/compare/v1.13.1...v1.15.1)

---
updated-dependencies:
- dependency-name: yarl
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-10-13 10:09:48 +02:00
dependabot[bot]
975c579d44 chore(deps): Bump openai from 1.51.1 to 1.51.2 in /examples/langchain/langchainpy-localai-example (#3808)
chore(deps): Bump openai

Bumps [openai](https://github.com/openai/openai-python) from 1.51.1 to 1.51.2.
- [Release notes](https://github.com/openai/openai-python/releases)
- [Changelog](https://github.com/openai/openai-python/blob/main/CHANGELOG.md)
- [Commits](https://github.com/openai/openai-python/compare/v1.51.1...v1.51.2)

---
updated-dependencies:
- dependency-name: openai
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-10-13 10:09:05 +02:00
dependabot[bot]
814cc24b69 chore(deps): Bump langchain from 0.3.1 to 0.3.3 in /examples/langchain-chroma (#3809)
chore(deps): Bump langchain in /examples/langchain-chroma

Bumps [langchain](https://github.com/langchain-ai/langchain) from 0.3.1 to 0.3.3.
- [Release notes](https://github.com/langchain-ai/langchain/releases)
- [Commits](https://github.com/langchain-ai/langchain/compare/langchain==0.3.1...langchain==0.3.3)

---
updated-dependencies:
- dependency-name: langchain
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-10-13 10:08:14 +02:00
dependabot[bot]
086f9e1f07 chore(deps): Bump llama-index from 0.11.16 to 0.11.17 in /examples/chainlit (#3807)
chore(deps): Bump llama-index in /examples/chainlit

Bumps [llama-index](https://github.com/run-llama/llama_index) from 0.11.16 to 0.11.17.
- [Release notes](https://github.com/run-llama/llama_index/releases)
- [Changelog](https://github.com/run-llama/llama_index/blob/main/CHANGELOG.md)
- [Commits](https://github.com/run-llama/llama_index/compare/v0.11.16...v0.11.17)

---
updated-dependencies:
- dependency-name: llama-index
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-10-13 10:07:29 +02:00
dependabot[bot]
3f923bb2ce chore(deps): Bump openai from 1.51.1 to 1.51.2 in /examples/functions (#3806)
Bumps [openai](https://github.com/openai/openai-python) from 1.51.1 to 1.51.2.
- [Release notes](https://github.com/openai/openai-python/releases)
- [Changelog](https://github.com/openai/openai-python/blob/main/CHANGELOG.md)
- [Commits](https://github.com/openai/openai-python/compare/v1.51.1...v1.51.2)

---
updated-dependencies:
- dependency-name: openai
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-10-13 10:06:48 +02:00
dependabot[bot]
803e2db30b chore(deps): Bump python from 3.12-bullseye to 3.13-bullseye in /examples/langchain (#3805)
chore(deps): Bump python in /examples/langchain

Bumps python from 3.12-bullseye to 3.13-bullseye.

---
updated-dependencies:
- dependency-name: python
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-10-13 10:06:21 +02:00
dependabot[bot]
a282bd4969 chore(deps): Bump llama-index from 0.11.16 to 0.11.17 in /examples/langchain-chroma (#3804)
chore(deps): Bump llama-index in /examples/langchain-chroma

Bumps [llama-index](https://github.com/run-llama/llama_index) from 0.11.16 to 0.11.17.
- [Release notes](https://github.com/run-llama/llama_index/releases)
- [Changelog](https://github.com/run-llama/llama_index/blob/main/CHANGELOG.md)
- [Commits](https://github.com/run-llama/llama_index/compare/v0.11.16...v0.11.17)

---
updated-dependencies:
- dependency-name: llama-index
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-10-13 10:05:54 +02:00
dependabot[bot]
5bca02bad4 chore(deps): Bump langchain from 0.3.2 to 0.3.3 in /examples/langchain/langchainpy-localai-example (#3803)
chore(deps): Bump langchain

Bumps [langchain](https://github.com/langchain-ai/langchain) from 0.3.2 to 0.3.3.
- [Release notes](https://github.com/langchain-ai/langchain/releases)
- [Commits](https://github.com/langchain-ai/langchain/compare/langchain==0.3.2...langchain==0.3.3)

---
updated-dependencies:
- dependency-name: langchain
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-10-13 10:05:26 +02:00
dependabot[bot]
4858e72fd9 chore(deps): Bump sentence-transformers from 3.1.1 to 3.2.0 in /backend/python/sentencetransformers (#3801)
chore(deps): Bump sentence-transformers

Bumps [sentence-transformers](https://github.com/UKPLab/sentence-transformers) from 3.1.1 to 3.2.0.
- [Release notes](https://github.com/UKPLab/sentence-transformers/releases)
- [Commits](https://github.com/UKPLab/sentence-transformers/compare/v3.1.1...v3.2.0)

---
updated-dependencies:
- dependency-name: sentence-transformers
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-10-13 10:04:56 +02:00
dependabot[bot]
7eab6ba71b chore(deps): Bump mxschmitt/action-tmate from 3.18 to 3.19 (#3799)
Bumps [mxschmitt/action-tmate](https://github.com/mxschmitt/action-tmate) from 3.18 to 3.19.
- [Release notes](https://github.com/mxschmitt/action-tmate/releases)
- [Changelog](https://github.com/mxschmitt/action-tmate/blob/master/RELEASE.md)
- [Commits](https://github.com/mxschmitt/action-tmate/compare/v3.18...v3.19)

---
updated-dependencies:
- dependency-name: mxschmitt/action-tmate
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-10-13 10:04:31 +02:00
dependabot[bot]
a909f63fbe chore(deps): Bump docs/themes/hugo-theme-relearn from d5a0ee0 to e1a1f01 (#3798)
chore(deps): Bump docs/themes/hugo-theme-relearn

Bumps [docs/themes/hugo-theme-relearn](https://github.com/McShelby/hugo-theme-relearn) from `d5a0ee0` to `e1a1f01`.
- [Release notes](https://github.com/McShelby/hugo-theme-relearn/releases)
- [Commits](https://github.com/McShelby/hugo-theme-relearn/compare/d5a0ee04ad...e1a1f01f4c)

---
updated-dependencies:
- dependency-name: docs/themes/hugo-theme-relearn
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-10-13 00:50:00 +00:00
LocalAI [bot]
b46f36195f chore: ⬆️ Update ggerganov/llama.cpp to edc265661cd707327297b6ec4d83423c43cb50a5 (#3797)
⬆️ Update ggerganov/llama.cpp

Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2024-10-13 00:07:54 +02:00
Dave
465f1f14a7 chore: dependabot ignore generated grpc go package (#3795)
Signed-off-by: Dave Lee <dave@gray101.com>
2024-10-13 00:07:43 +02:00
LocalAI [bot]
b8b1e10f34 docs: ⬆️ update docs version mudler/LocalAI (#3796)
⬆️ Update docs version mudler/LocalAI

Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2024-10-12 21:41:06 +00:00
Dave
a1634b219a fix: roll out bluemonday Sanitize more widely (#3794)
* initial pass: roll out bluemonday sanitization more widely

Signed-off-by: Dave Lee <dave@gray101.com>

* add one additional sanitize - the overall modelslist used by the docs site

Signed-off-by: Dave Lee <dave@gray101.com>

---------

Signed-off-by: Dave Lee <dave@gray101.com>
2024-10-12 09:45:47 +02:00
Ettore Di Giacinto
6257e2f510 chore(deps): bump llama-cpp to 96776405a17034dcfd53d3ddf5d142d34bdbb657 (#3793)
This also adapts to upstream changes

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-10-12 01:25:03 +02:00
Dave
65ca754166 Fix: listmodelservice / welcome endpoint use LOOSE_ONLY (#3791)
* fix list model service and welcome

Signed-off-by: Dave Lee <dave@gray101.com>

* comment

Signed-off-by: Dave Lee <dave@gray101.com>

---------

Signed-off-by: Dave Lee <dave@gray101.com>
2024-10-11 23:49:00 +02:00
Ettore Di Giacinto
a0f0505f0d fix(welcome): do not list model twice if we have a config (#3790)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-10-11 17:30:14 +02:00
Ettore Di Giacinto
be6c4e6061 fix(llama-cpp): consistently select fallback (#3789)
* fix(llama-cpp): consistently select fallback

We didn't take into consideration the case where the host has the CPU
flagset, but the binaries were not actually present in the asset dir.

This made it possible, for instance, for models that specified the llama-cpp
backend directly in the config to never pick up the fallback
binary when the optimized binaries were not present (a sketch follows this entry).

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* chore: adjust and simplify selection

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* fix: move failure recovery to BackendLoader()

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* comments

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* minor fixups

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

---------

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-10-11 16:55:57 +02:00
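For context, the recovery described in the commit above can be sketched as follows — a minimal, hypothetical Go example (the asset-dir layout and binary names are illustrative assumptions, not LocalAI's actual internals):

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// selectBackendBinary prefers the CPU-optimized llama-cpp binary, but falls
// back to the plain build when the optimized one is missing from the asset
// dir -- even if the host CPU advertises the required flagset.
func selectBackendBinary(assetDir string, hasAVX2 bool) (string, error) {
	optimized := filepath.Join(assetDir, "llama-cpp-avx2")     // hypothetical name
	fallback := filepath.Join(assetDir, "llama-cpp-fallback") // hypothetical name

	if hasAVX2 {
		if _, err := os.Stat(optimized); err == nil {
			return optimized, nil
		}
		// Host supports AVX2, but the optimized binary was not shipped:
		// do not fail here, recover with the fallback instead.
	}
	if _, err := os.Stat(fallback); err == nil {
		return fallback, nil
	}
	return "", fmt.Errorf("no llama-cpp binary found in %s", assetDir)
}

func main() {
	bin, err := selectBackendBinary("./backend-assets/grpc", true)
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Println("selected backend:", bin)
}
```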
LocalAI [bot]
1996e6f4c9 chore: ⬆️ Update ggerganov/llama.cpp to 0e9f760eb12546704ef8fa72577bc1a3ffe1bc04 (#3786)
⬆️ Update ggerganov/llama.cpp

Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2024-10-10 21:46:50 +00:00
Ettore Di Giacinto
671cd42917 chore(gallery): do not specify backend with moondream
Signed-off-by: Ettore Di Giacinto <mudler@users.noreply.github.com>
2024-10-10 19:54:07 +02:00
Ettore Di Giacinto
568a01bf5c models(gallery): add gemma-2-ataraxy-v3i-9b (#3785)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-10-10 19:16:23 +02:00
Ettore Di Giacinto
164abb8c9f models(gallery): add fireball-meta-llama-3.2-8b-instruct-agent-003-128k-code-dpo (#3784)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-10-10 19:13:47 +02:00
Ettore Di Giacinto
ed2946feac models(gallery): add llama-3.2-3b-agent007-coder (#3783)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-10-10 19:11:50 +02:00
Ettore Di Giacinto
bdd351b372 models(gallery): add nihappy-l3.1-8b-v0.09 (#3782)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-10-10 19:09:49 +02:00
Ettore Di Giacinto
ad5e7d376a models(gallery): add llama-3.2-3b-agent007 (#3781)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-10-10 19:06:58 +02:00
Ettore Di Giacinto
6e78d8cd9d models(gallery): add dans-personalityengine-v1.0.0-8b (#3780)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-10-10 18:56:01 +02:00
Ettore Di Giacinto
614125f268 models(gallery): add qwen2.5-7b-ins-v3 (#3779)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-10-10 15:05:09 +02:00
Ettore Di Giacinto
f41965bfb5 models(gallery): add rombos-llm-v2.5.1-qwen-3b (#3778)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-10-10 10:47:41 +02:00
Josh Bennett
85a3cc8d8f feat(transformers): Use downloaded model for Transformers backend if it already exists. (#3777)
* signing commit

Signed-off-by: Josh Bennett <562773+joshbtn@users.noreply.github.com>

* Update transformers backend to check for existing model directory

Signed-off-by: Josh Bennett <562773+joshbtn@users.noreply.github.com>

---------

Signed-off-by: Josh Bennett <562773+joshbtn@users.noreply.github.com>
2024-10-10 08:42:59 +00:00
LocalAI [bot]
ea8675d473 chore: ⬆️ Update ggerganov/llama.cpp to c81f3bbb051f8b736e117dfc78c99d7c4e0450f6 (#3775)
⬆️ Update ggerganov/llama.cpp

Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2024-10-09 21:40:46 +00:00
Ettore Di Giacinto
08a54c1812 models(gallery): add llama-3.1-swallow-70b-v0.1-i1 (#3774)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-10-09 17:16:17 +02:00
Ettore Di Giacinto
8c7439b96e models(gallery): add llama3.2-3b-esper2 (#3773)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-10-09 17:08:13 +02:00
Ettore Di Giacinto
a9e42a76fa models(gallery): add llama3.2-3b-enigma (#3772)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-10-09 17:05:50 +02:00
Ettore Di Giacinto
1a3b3d3e67 models(gallery): add versatillama-llama-3.2-3b-instruct-abliterated (#3771)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-10-09 16:58:34 +02:00
LocalAI [bot]
759d35e6b5 chore: ⬆️ Update ggerganov/whisper.cpp to fdbfb460ed546452a5d53611bba66d10d842e719 (#3768)
⬆️ Update ggerganov/whisper.cpp

Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2024-10-09 09:42:44 +02:00
LocalAI [bot]
825e85bcc5 chore: ⬆️ Update ggerganov/llama.cpp to dca1d4b58a7f1acf1bd253be84e50d6367f492fd (#3769)
⬆️ Update ggerganov/llama.cpp

Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2024-10-08 21:41:05 +00:00
Ettore Di Giacinto
62165d556c models(gallery): add archfunctions template
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-10-08 18:52:21 +02:00
Ettore Di Giacinto
78459889d8 models(gallery): add archfunctions models (#3767)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-10-08 18:51:30 +02:00
Ettore Di Giacinto
0fdc6a92f6 models(gallery): add moe-girl-1ba-7bt-i1 (#3766)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-10-08 18:38:27 +02:00
LocalAI [bot]
8586a0167a chore: ⬆️ Update ggerganov/whisper.cpp to ebca09a3d1033417b0c630bbbe607b0f185b1488 (#3764)
⬆️ Update ggerganov/whisper.cpp

Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2024-10-08 09:35:18 +02:00
LocalAI [bot]
f1d16a45c5 chore: ⬆️ Update ggerganov/llama.cpp to 6374743747b14db4eb73ce82ae449a2978bc3b47 (#3763)
⬆️ Update ggerganov/llama.cpp

Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2024-10-08 09:35:01 +02:00
dependabot[bot]
2023627d7f chore(deps): Bump appleboy/ssh-action from 1.0.3 to 1.1.0 (#3762)
Bumps [appleboy/ssh-action](https://github.com/appleboy/ssh-action) from 1.0.3 to 1.1.0.
- [Release notes](https://github.com/appleboy/ssh-action/releases)
- [Changelog](https://github.com/appleboy/ssh-action/blob/master/.goreleaser.yaml)
- [Commits](https://github.com/appleboy/ssh-action/compare/v1.0.3...v1.1.0)

---
updated-dependencies:
- dependency-name: appleboy/ssh-action
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-10-08 09:34:43 +02:00
dependabot[bot]
d5e1958a1f chore(deps): Bump nginx from 1.27.0 to 1.27.2 in /examples/k8sgpt (#3761)
Bumps nginx from 1.27.0 to 1.27.2.

---
updated-dependencies:
- dependency-name: nginx
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-10-08 09:34:23 +02:00
dependabot[bot]
f9c58a01d3 chore(deps): Bump llama-index from 0.11.14 to 0.11.16 in /examples/langchain-chroma (#3760)
chore(deps): Bump llama-index in /examples/langchain-chroma

Bumps [llama-index](https://github.com/run-llama/llama_index) from 0.11.14 to 0.11.16.
- [Release notes](https://github.com/run-llama/llama_index/releases)
- [Changelog](https://github.com/run-llama/llama_index/blob/main/CHANGELOG.md)
- [Commits](https://github.com/run-llama/llama_index/compare/v0.11.14...v0.11.16)

---
updated-dependencies:
- dependency-name: llama-index
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-10-08 09:34:05 +02:00
dependabot[bot]
4500650000 chore(deps): Bump openai from 1.50.2 to 1.51.1 in /examples/langchain-chroma (#3758)
chore(deps): Bump openai in /examples/langchain-chroma

Bumps [openai](https://github.com/openai/openai-python) from 1.50.2 to 1.51.1.
- [Release notes](https://github.com/openai/openai-python/releases)
- [Changelog](https://github.com/openai/openai-python/blob/main/CHANGELOG.md)
- [Commits](https://github.com/openai/openai-python/compare/v1.50.2...v1.51.1)

---
updated-dependencies:
- dependency-name: openai
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-10-08 09:33:25 +02:00
dependabot[bot]
5674e671d0 chore(deps): Bump langchain from 0.3.1 to 0.3.2 in /examples/langchain/langchainpy-localai-example (#3752)
chore(deps): Bump langchain

Bumps [langchain](https://github.com/langchain-ai/langchain) from 0.3.1 to 0.3.2.
- [Release notes](https://github.com/langchain-ai/langchain/releases)
- [Commits](https://github.com/langchain-ai/langchain/compare/langchain==0.3.1...langchain==0.3.2)

---
updated-dependencies:
- dependency-name: langchain
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-10-08 09:33:13 +02:00
dependabot[bot]
0f44c3f69c chore(deps): Bump debugpy from 1.8.2 to 1.8.6 in /examples/langchain/langchainpy-localai-example (#3751)
chore(deps): Bump debugpy

Bumps [debugpy](https://github.com/microsoft/debugpy) from 1.8.2 to 1.8.6.
- [Release notes](https://github.com/microsoft/debugpy/releases)
- [Commits](https://github.com/microsoft/debugpy/compare/v1.8.2...v1.8.6)

---
updated-dependencies:
- dependency-name: debugpy
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-10-08 09:32:59 +02:00
dependabot[bot]
f9069daf03 chore(deps): Bump streamlit from 1.38.0 to 1.39.0 in /examples/streamlit-bot (#3757)
chore(deps): Bump streamlit in /examples/streamlit-bot

Bumps [streamlit](https://github.com/streamlit/streamlit) from 1.38.0 to 1.39.0.
- [Release notes](https://github.com/streamlit/streamlit/releases)
- [Commits](https://github.com/streamlit/streamlit/compare/1.38.0...1.39.0)

---
updated-dependencies:
- dependency-name: streamlit
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-10-08 09:32:40 +02:00
dependabot[bot]
5f58841a3a chore(deps): Bump llama-index from 0.11.14 to 0.11.16 in /examples/chainlit (#3753)
chore(deps): Bump llama-index in /examples/chainlit

Bumps [llama-index](https://github.com/run-llama/llama_index) from 0.11.14 to 0.11.16.
- [Release notes](https://github.com/run-llama/llama_index/releases)
- [Changelog](https://github.com/run-llama/llama_index/blob/main/CHANGELOG.md)
- [Commits](https://github.com/run-llama/llama_index/compare/v0.11.14...v0.11.16)

---
updated-dependencies:
- dependency-name: llama-index
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-10-08 09:32:30 +02:00
dependabot[bot]
287200e687 chore(deps): Bump aiohttp from 3.10.8 to 3.10.9 in /examples/langchain/langchainpy-localai-example (#3750)
chore(deps): Bump aiohttp

Bumps [aiohttp](https://github.com/aio-libs/aiohttp) from 3.10.8 to 3.10.9.
- [Release notes](https://github.com/aio-libs/aiohttp/releases)
- [Changelog](https://github.com/aio-libs/aiohttp/blob/master/CHANGES.rst)
- [Commits](https://github.com/aio-libs/aiohttp/compare/v3.10.8...v3.10.9)

---
updated-dependencies:
- dependency-name: aiohttp
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-10-08 09:32:18 +02:00
dependabot[bot]
b653883c0a chore(deps): Bump multidict from 6.0.5 to 6.1.0 in /examples/langchain/langchainpy-localai-example (#3749)
chore(deps): Bump multidict

Bumps [multidict](https://github.com/aio-libs/multidict) from 6.0.5 to 6.1.0.
- [Release notes](https://github.com/aio-libs/multidict/releases)
- [Changelog](https://github.com/aio-libs/multidict/blob/master/CHANGES.rst)
- [Commits](https://github.com/aio-libs/multidict/compare/v6.0.5...v6.1.0)

---
updated-dependencies:
- dependency-name: multidict
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-10-08 09:32:08 +02:00
dependabot[bot]
6b8a402353 chore(deps): Bump openai from 1.45.1 to 1.51.1 in /examples/langchain/langchainpy-localai-example (#3748)
chore(deps): Bump openai

Bumps [openai](https://github.com/openai/openai-python) from 1.45.1 to 1.51.1.
- [Release notes](https://github.com/openai/openai-python/releases)
- [Changelog](https://github.com/openai/openai-python/blob/main/CHANGELOG.md)
- [Commits](https://github.com/openai/openai-python/compare/v1.45.1...v1.51.1)

---
updated-dependencies:
- dependency-name: openai
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-10-08 09:24:56 +02:00
Ettore Di Giacinto
d9b63fae7c chore(tests): improve rwkv tests and consume TEST_FLAKES (#3765)
chore(tests): improve rwkv tests and consume TEST_FLAKES

Consistently use TEST_FLAKES and reduce the flakiness of the rwkv tests by
matching model output case-insensitively (sketched below).

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-10-08 09:24:19 +02:00
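A minimal sketch of the case-insensitive matching idea (illustrative only; the real suite uses Ginkgo/Gomega assertions, and TEST_FLAKES is consumed by the Makefile's `--flake-attempts` flag rather than by this snippet):

```go
package main

import (
	"fmt"
	"strings"
)

// containsFold reports whether output contains want regardless of case,
// so a model answering "Sure!" instead of "sure!" does not flake the test.
func containsFold(output, want string) bool {
	return strings.Contains(strings.ToLower(output), strings.ToLower(want))
}

func main() {
	fmt.Println(containsFold("Sure! Here is the answer.", "sure")) // true
}
```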
dependabot[bot]
377cdcabbf chore(deps): Bump openai from 1.50.2 to 1.51.1 in /examples/functions (#3754)
Bumps [openai](https://github.com/openai/openai-python) from 1.50.2 to 1.51.1.
- [Release notes](https://github.com/openai/openai-python/releases)
- [Changelog](https://github.com/openai/openai-python/blob/main/CHANGELOG.md)
- [Commits](https://github.com/openai/openai-python/compare/v1.50.2...v1.51.1)

---
updated-dependencies:
- dependency-name: openai
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-10-08 00:05:53 +00:00
dependabot[bot]
92a7f40141 chore(deps): Bump langchain from 0.3.1 to 0.3.2 in /examples/functions (#3755)
Bumps [langchain](https://github.com/langchain-ai/langchain) from 0.3.1 to 0.3.2.
- [Release notes](https://github.com/langchain-ai/langchain/releases)
- [Commits](https://github.com/langchain-ai/langchain/compare/langchain==0.3.1...langchain==0.3.2)

---
updated-dependencies:
- dependency-name: langchain
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-10-07 21:04:18 +00:00
Ettore Di Giacinto
e06daf437a chore(Dockerfile): default to cmake from package manager (#3746)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-10-07 16:42:56 +02:00
Ettore Di Giacinto
d19bea4af2 chore(vllm): do not install from source (#3745)
chore(vllm): do not install from source by default

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-10-07 12:27:37 +02:00
Ettore Di Giacinto
fbca9f82fd fix(vllm): bump cmake - vllm requires it (#3744)
* fix(vllm): bump cmake - vllm requires it

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* chore(tests): try to increase coqui timeout

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

---------

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-10-07 11:22:55 +02:00
Ettore Di Giacinto
04f284d202 models(gallery): add gemma-2-9b-it-abliterated (#3743)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-10-07 09:56:33 +02:00
Ettore Di Giacinto
cfd6112256 models(gallery): add violet_twilight-v0.2-iq-imatrix (#3742)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-10-07 09:52:23 +02:00
Ettore Di Giacinto
debc0974a6 models(gallery): add t.e-8.1-iq-imatrix-request (#3741)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-10-07 09:51:57 +02:00
Ettore Di Giacinto
03bbbea039 models(gallery): add mn-backyardai-party-12b-v1-iq-arm-imatrix (#3740)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-10-07 09:46:44 +02:00
LocalAI [bot]
55af0b1c68 chore: ⬆️ Update ggerganov/whisper.cpp to 9f346d00840bcd7af62794871109841af40cecfb (#3739)
⬆️ Update ggerganov/whisper.cpp

Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2024-10-07 09:33:28 +02:00
LocalAI [bot]
c8bfb72104 chore: ⬆️ Update ggerganov/llama.cpp to d5cb86844f26f600c48bf3643738ea68138f961d (#3738)
⬆️ Update ggerganov/llama.cpp

Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2024-10-06 21:40:25 +00:00
LocalAI [bot]
1b8a663001 chore: ⬆️ Update ggerganov/llama.cpp to 8c475b97b8ba7d678d4c9904b1161bd8811a9b44 (#3736)
⬆️ Update ggerganov/llama.cpp

Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2024-10-06 10:10:13 +02:00
LocalAI [bot]
a9abfa2b61 chore: ⬆️ Update ggerganov/whisper.cpp to 6a94163b913d8e974e60d9ac56c8930d19f45773 (#3735)
⬆️ Update ggerganov/whisper.cpp

Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2024-10-06 10:09:57 +02:00
Ettore Di Giacinto
092bb0bd6b fix(base-grpc): close channel in base grpc server (#3734)
If the LLM does not implement any logic for PredictStream, we close the
channel immediately so the process is not left hanging (see the sketch below).

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-10-05 15:14:27 +02:00
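A minimal Go sketch of that pattern, with hypothetical types standing in for the generated gRPC ones:

```go
package main

import "fmt"

// Token is a stand-in for the streamed prediction chunk type.
type Token string

// Base is a default backend implementation. Backends that do support
// streaming override PredictStream; the base version closes the results
// channel right away so callers ranging over it terminate instead of
// blocking forever.
type Base struct{}

func (b *Base) PredictStream(prompt string, results chan Token) error {
	close(results) // no streaming support: unblock the consumer immediately
	return fmt.Errorf("unimplemented")
}

func main() {
	results := make(chan Token)
	go (&Base{}).PredictStream("hello", results)
	for tok := range results { // returns immediately, no hang
		fmt.Print(tok)
	}
	fmt.Println("stream closed")
}
```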
Ettore Di Giacinto
e28e80857b feat(shutdown): allow force shutdown of backends (#3733)
We default to a soft kill; however, we might want to force-kill backends
after a while to avoid hanging requests (which may hallucinate
indefinitely). A sketch of the pattern follows this entry.

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-10-05 10:41:35 +02:00
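The soft-then-hard kill can be sketched like this — a hypothetical helper around `os/exec`, not LocalAI's actual process manager:

```go
package main

import (
	"fmt"
	"os/exec"
	"syscall"
	"time"
)

// stopBackend asks the backend process to exit gracefully, then force-kills
// it if it is still alive after the grace period.
func stopBackend(cmd *exec.Cmd, grace time.Duration) error {
	_ = cmd.Process.Signal(syscall.SIGTERM) // soft kill first

	done := make(chan error, 1)
	go func() { done <- cmd.Wait() }()

	select {
	case err := <-done:
		return err // exited on its own
	case <-time.After(grace):
		_ = cmd.Process.Kill() // SIGKILL: stop hanging generations
		return fmt.Errorf("backend force-killed after %s", grace)
	}
}

func main() {
	cmd := exec.Command("sleep", "60")
	if err := cmd.Start(); err != nil {
		panic(err)
	}
	fmt.Println(stopBackend(cmd, 2*time.Second))
}
```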
LocalAI [bot]
905473c739 chore: ⬆️ Update ggerganov/whisper.cpp to 2944cb72d95282378037cb0eb45c9e2b2529ff2c (#3730)
⬆️ Update ggerganov/whisper.cpp

Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2024-10-05 00:09:24 +02:00
LocalAI [bot]
aa0564a1c6 chore: ⬆️ Update ggerganov/llama.cpp to 71967c2a6d30da9f61580d3e2d4cb00e0223b6fa (#3731)
⬆️ Update ggerganov/llama.cpp

Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2024-10-05 00:09:02 +02:00
315 changed files with 3340 additions and 9813 deletions

View File

@@ -0,0 +1,11 @@
meta {
name: model delete
type: http
seq: 7
}
post {
url: {{PROTOCOL}}{{HOST}}:{{PORT}}/models/galleries
body: none
auth: none
}

View File

Binary file not shown.

View File

@@ -0,0 +1,16 @@
meta {
name: transcribe
type: http
seq: 1
}
post {
url: {{PROTOCOL}}{{HOST}}:{{PORT}}/v1/audio/transcriptions
body: multipartForm
auth: none
}
body:multipart-form {
file: @file(transcription/gb1.ogg)
model: whisper-1
}
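The same request the Bruno file above describes, as a self-contained Go client; the endpoint and form fields come from the file, while the host and local file path are placeholder assumptions:

```go
package main

import (
	"bytes"
	"fmt"
	"io"
	"mime/multipart"
	"net/http"
	"os"
)

func main() {
	// Build the multipart form: an audio file plus the model name,
	// mirroring the Bruno request above.
	var body bytes.Buffer
	w := multipart.NewWriter(&body)

	f, err := os.Open("gb1.ogg") // placeholder path
	if err != nil {
		panic(err)
	}
	defer f.Close()

	part, _ := w.CreateFormFile("file", "gb1.ogg")
	if _, err := io.Copy(part, f); err != nil {
		panic(err)
	}
	_ = w.WriteField("model", "whisper-1")
	w.Close() // finalize the multipart body before sending

	resp, err := http.Post("http://localhost:8080/v1/audio/transcriptions",
		w.FormDataContentType(), &body)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	out, _ := io.ReadAll(resp.Body)
	fmt.Println(string(out))
}
```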

View File

@@ -6,6 +6,7 @@ import (
"io/ioutil"
"os"
"github.com/microcosm-cc/bluemonday"
"gopkg.in/yaml.v3"
)
@@ -279,6 +280,12 @@ func main() {
return
}
// Ensure that all arbitrary text content is sanitized before display
for i, m := range models {
models[i].Name = bluemonday.StrictPolicy().Sanitize(m.Name)
models[i].Description = bluemonday.StrictPolicy().Sanitize(m.Description)
}
// render the template
data := struct {
Models []*GalleryModel
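bluemonday's StrictPolicy has an empty allowlist, which is why applying it to every arbitrary text field before rendering is enough to strip markup; a minimal standalone example of the call used in the hunk above:

```go
package main

import (
	"fmt"

	"github.com/microcosm-cc/bluemonday"
)

func main() {
	p := bluemonday.StrictPolicy() // allows no HTML elements or attributes

	name := `my-model <script>alert("xss")</script>`
	fmt.Println(p.Sanitize(name)) // markup and script payload are stripped
}
```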

View File

@@ -9,6 +9,8 @@ updates:
directory: "/"
schedule:
interval: "weekly"
ignore:
- dependency-name: "github.com/mudler/LocalAI/pkg/grpc/proto"
- package-ecosystem: "github-actions"
# Workflow files stored in the default location of `.github/workflows`. (You don't need to specify `/.github/workflows` for `directory`. You can use `directory: "/"`.)
directory: "/"

View File

@@ -33,7 +33,7 @@ jobs:
run: |
CGO_ENABLED=0 make build-api
- name: rm
uses: appleboy/ssh-action@v1.0.3
uses: appleboy/ssh-action@v1.1.0
with:
host: ${{ secrets.EXPLORER_SSH_HOST }}
username: ${{ secrets.EXPLORER_SSH_USERNAME }}
@@ -53,7 +53,7 @@ jobs:
rm: true
target: ./local-ai
- name: restarting
uses: appleboy/ssh-action@v1.0.3
uses: appleboy/ssh-action@v1.1.0
with:
host: ${{ secrets.EXPLORER_SSH_HOST }}
username: ${{ secrets.EXPLORER_SSH_USERNAME }}

View File

@@ -79,7 +79,7 @@ jobs:
args: ${{ steps.summarize.outputs.message }}
- name: Setup tmate session if fails
if: ${{ failure() }}
uses: mxschmitt/action-tmate@v3.18
uses: mxschmitt/action-tmate@v3.19
with:
detached: true
connect-timeout-seconds: 180
@@ -161,7 +161,7 @@ jobs:
TWITTER_ACCESS_TOKEN_SECRET: ${{ secrets.TWITTER_ACCESS_TOKEN_SECRET }}
- name: Setup tmate session if fails
if: ${{ failure() }}
uses: mxschmitt/action-tmate@v3.18
uses: mxschmitt/action-tmate@v3.19
with:
detached: true
connect-timeout-seconds: 180

View File

@@ -123,7 +123,7 @@ jobs:
release/*
- name: Setup tmate session if tests fail
if: ${{ failure() }}
uses: mxschmitt/action-tmate@v3.18
uses: mxschmitt/action-tmate@v3.19
with:
detached: true
connect-timeout-seconds: 180
@@ -232,7 +232,7 @@ jobs:
release/*
- name: Setup tmate session if tests fail
if: ${{ failure() }}
uses: mxschmitt/action-tmate@v3.18
uses: mxschmitt/action-tmate@v3.19
with:
detached: true
connect-timeout-seconds: 180
@@ -308,7 +308,7 @@ jobs:
release/*
- name: Setup tmate session if tests fail
if: ${{ failure() }}
uses: mxschmitt/action-tmate@v3.18
uses: mxschmitt/action-tmate@v3.19
with:
detached: true
connect-timeout-seconds: 180
@@ -350,7 +350,7 @@ jobs:
release/*
- name: Setup tmate session if tests fail
if: ${{ failure() }}
uses: mxschmitt/action-tmate@v3.18
uses: mxschmitt/action-tmate@v3.19
with:
detached: true
connect-timeout-seconds: 180

View File

@@ -123,6 +123,13 @@ jobs:
run: |
make --jobs=5 --output-sync=target -C backend/python/parler-tts
make --jobs=5 --output-sync=target -C backend/python/parler-tts test
- name: Setup tmate session if tests fail
if: ${{ failure() }}
uses: mxschmitt/action-tmate@v3.19
with:
detached: true
connect-timeout-seconds: 180
limit-access-to-actor: true
tests-openvoice:
runs-on: ubuntu-latest

View File

@@ -133,7 +133,7 @@ jobs:
PATH="$PATH:/root/go/bin" GO_TAGS="stablediffusion tts" make --jobs 5 --output-sync=target test
- name: Setup tmate session if tests fail
if: ${{ failure() }}
uses: mxschmitt/action-tmate@v3.18
uses: mxschmitt/action-tmate@v3.19
with:
detached: true
connect-timeout-seconds: 180
@@ -197,7 +197,7 @@ jobs:
make run-e2e-aio
- name: Setup tmate session if tests fail
if: ${{ failure() }}
uses: mxschmitt/action-tmate@v3.18
uses: mxschmitt/action-tmate@v3.19
with:
detached: true
connect-timeout-seconds: 180
@@ -224,7 +224,7 @@ jobs:
- name: Dependencies
run: |
brew install protobuf grpc make protoc-gen-go protoc-gen-go-grpc libomp llvm
pip install --user --no-cache-dir grpcio-tools==1.64.1
pip install --user --no-cache-dir grpcio-tools
- name: Test
run: |
export C_INCLUDE_PATH=/usr/local/include
@@ -235,7 +235,7 @@ jobs:
BUILD_TYPE="GITHUB_CI_HAS_BROKEN_METAL" CMAKE_ARGS="-DGGML_F16C=OFF -DGGML_AVX512=OFF -DGGML_AVX2=OFF -DGGML_FMA=OFF" make --jobs 4 --output-sync=target test
- name: Setup tmate session if tests fail
if: ${{ failure() }}
uses: mxschmitt/action-tmate@v3.18
uses: mxschmitt/action-tmate@v3.19
with:
detached: true
connect-timeout-seconds: 180

View File

@@ -9,6 +9,8 @@ FROM ${BASE_IMAGE} AS requirements-core
USER root
ARG GO_VERSION=1.22.6
ARG CMAKE_VERSION=3.26.4
ARG CMAKE_FROM_SOURCE=false
ARG TARGETARCH
ARG TARGETVARIANT
@@ -21,13 +23,25 @@ RUN apt-get update && \
build-essential \
ccache \
ca-certificates \
cmake \
curl \
curl libssl-dev \
git \
unzip upx-ucl && \
apt-get clean && \
rm -rf /var/lib/apt/lists/*
# Install CMake (the version in 22.04 is too old)
RUN <<EOT bash
if [ "${CMAKE_FROM_SOURCE}}" = "true" ]; then
curl -L -s https://github.com/Kitware/CMake/releases/download/v${CMAKE_VERSION}/cmake-${CMAKE_VERSION}.tar.gz -o cmake.tar.gz && tar xvf cmake.tar.gz && cd cmake-${CMAKE_VERSION} && ./configure && make && make install
else
apt-get update && \
apt-get install -y \
cmake && \
apt-get clean && \
rm -rf /var/lib/apt/lists/*
fi
EOT
# Install Go
RUN curl -L -s https://go.dev/dl/go${GO_VERSION}.linux-${TARGETARCH}.tar.gz | tar -C /usr/local -xz
ENV PATH=$PATH:/root/go/bin:/usr/local/go/bin
@@ -71,7 +85,8 @@ WORKDIR /build
# The requirements-extras target is for any builds with IMAGE_TYPE=extras. It should not be placed in this target unless every IMAGE_TYPE=extras build will use it
FROM requirements-core AS requirements-extras
RUN curl -LsSf https://astral.sh/uv/install.sh | sh
# Install uv as a system package
RUN curl -LsSf https://astral.sh/uv/install.sh | UV_INSTALL_DIR=/usr/bin sh
ENV PATH="/root/.cargo/bin:${PATH}"
RUN curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh -s -- -y
@@ -188,6 +203,8 @@ FROM ${GRPC_BASE_IMAGE} AS grpc
# This is a bit of a hack, but it's required in order to be able to effectively cache this layer in CI
ARG GRPC_MAKEFLAGS="-j4 -Otarget"
ARG GRPC_VERSION=v1.65.0
ARG CMAKE_FROM_SOURCE=false
ARG CMAKE_VERSION=3.26.4
ENV MAKEFLAGS=${GRPC_MAKEFLAGS}
@@ -196,12 +213,24 @@ WORKDIR /build
RUN apt-get update && \
apt-get install -y --no-install-recommends \
ca-certificates \
build-essential \
cmake \
build-essential curl libssl-dev \
git && \
apt-get clean && \
rm -rf /var/lib/apt/lists/*
# Install CMake (the version in 22.04 is too old)
RUN <<EOT bash
if [ "${CMAKE_FROM_SOURCE}}" = "true" ]; then
curl -L -s https://github.com/Kitware/CMake/releases/download/v${CMAKE_VERSION}/cmake-${CMAKE_VERSION}.tar.gz -o cmake.tar.gz && tar xvf cmake.tar.gz && cd cmake-${CMAKE_VERSION} && ./configure && make && make install
else
apt-get update && \
apt-get install -y \
cmake && \
apt-get clean && \
rm -rf /var/lib/apt/lists/*
fi
EOT
# We install GRPC to a different prefix here so that we can copy in only the build artifacts later
# saves several hundred MB on the final docker image size vs copying in the entire GRPC source tree
# and running make install in the target container

View File

@@ -8,7 +8,7 @@ DETECT_LIBS?=true
# llama.cpp versions
GOLLAMA_REPO?=https://github.com/go-skynet/go-llama.cpp
GOLLAMA_VERSION?=2b57a8ae43e4699d3dc5d1496a1ccd42922993be
CPPLLAMA_VERSION?=d5ed2b929d85bbd7dbeecb690880f07d9d7a6077
CPPLLAMA_VERSION?=6423c65aa8be1b98f990cf207422505ac5a441a1
# go-rwkv version
RWKV_REPO?=https://github.com/donomii/go-rwkv.cpp
@@ -16,7 +16,7 @@ RWKV_VERSION?=661e7ae26d442f5cfebd2a0881b44e8c55949ec6
# whisper.cpp version
WHISPER_REPO?=https://github.com/ggerganov/whisper.cpp
WHISPER_CPP_VERSION?=ccc2547210e09e3a1785817383ab770389bb442b
WHISPER_CPP_VERSION?=31aea563a83803c710691fed3e8d700e06ae6788
# bert.cpp version
BERT_REPO?=https://github.com/go-skynet/go-bert.cpp
@@ -24,7 +24,7 @@ BERT_VERSION?=710044b124545415f555e4260d16b146c725a6e4
# go-piper version
PIPER_REPO?=https://github.com/mudler/go-piper
PIPER_VERSION?=9d0100873a7dbb0824dfea40e8cec70a1b110759
PIPER_VERSION?=e10ca041a885d4a8f3871d52924b47792d5e5aa0
# stablediffusion version
STABLEDIFFUSION_REPO?=https://github.com/mudler/go-stable-diffusion
@@ -470,13 +470,13 @@ run-e2e-image:
run-e2e-aio: protogen-go
@echo 'Running e2e AIO tests'
$(GOCMD) run github.com/onsi/ginkgo/v2/ginkgo --flake-attempts 5 -v -r ./tests/e2e-aio
$(GOCMD) run github.com/onsi/ginkgo/v2/ginkgo --flake-attempts $(TEST_FLAKES) -v -r ./tests/e2e-aio
test-e2e:
@echo 'Running e2e tests'
BUILD_TYPE=$(BUILD_TYPE) \
LOCALAI_API=http://$(E2E_BRIDGE_IP):5390/v1 \
$(GOCMD) run github.com/onsi/ginkgo/v2/ginkgo --flake-attempts 5 -v -r ./tests/e2e
$(GOCMD) run github.com/onsi/ginkgo/v2/ginkgo --flake-attempts $(TEST_FLAKES) -v -r ./tests/e2e
teardown-e2e:
rm -rf $(TEST_DIR) || true
@@ -484,24 +484,24 @@ teardown-e2e:
test-llama: prepare-test
TEST_DIR=$(abspath ./)/test-dir/ FIXTURES=$(abspath ./)/tests/fixtures CONFIG_FILE=$(abspath ./)/test-models/config.yaml MODELS_PATH=$(abspath ./)/test-models \
$(GOCMD) run github.com/onsi/ginkgo/v2/ginkgo --label-filter="llama" --flake-attempts 5 -v -r $(TEST_PATHS)
$(GOCMD) run github.com/onsi/ginkgo/v2/ginkgo --label-filter="llama" --flake-attempts $(TEST_FLAKES) -v -r $(TEST_PATHS)
test-llama-gguf: prepare-test
TEST_DIR=$(abspath ./)/test-dir/ FIXTURES=$(abspath ./)/tests/fixtures CONFIG_FILE=$(abspath ./)/test-models/config.yaml MODELS_PATH=$(abspath ./)/test-models \
$(GOCMD) run github.com/onsi/ginkgo/v2/ginkgo --label-filter="llama-gguf" --flake-attempts 5 -v -r $(TEST_PATHS)
$(GOCMD) run github.com/onsi/ginkgo/v2/ginkgo --label-filter="llama-gguf" --flake-attempts $(TEST_FLAKES) -v -r $(TEST_PATHS)
test-tts: prepare-test
TEST_DIR=$(abspath ./)/test-dir/ FIXTURES=$(abspath ./)/tests/fixtures CONFIG_FILE=$(abspath ./)/test-models/config.yaml MODELS_PATH=$(abspath ./)/test-models \
$(GOCMD) run github.com/onsi/ginkgo/v2/ginkgo --label-filter="tts" --flake-attempts 1 -v -r $(TEST_PATHS)
$(GOCMD) run github.com/onsi/ginkgo/v2/ginkgo --label-filter="tts" --flake-attempts $(TEST_FLAKES) -v -r $(TEST_PATHS)
test-stablediffusion: prepare-test
TEST_DIR=$(abspath ./)/test-dir/ FIXTURES=$(abspath ./)/tests/fixtures CONFIG_FILE=$(abspath ./)/test-models/config.yaml MODELS_PATH=$(abspath ./)/test-models \
$(GOCMD) run github.com/onsi/ginkgo/v2/ginkgo --label-filter="stablediffusion" --flake-attempts 1 -v -r $(TEST_PATHS)
$(GOCMD) run github.com/onsi/ginkgo/v2/ginkgo --label-filter="stablediffusion" --flake-attempts $(TEST_FLAKES) -v -r $(TEST_PATHS)
test-stores: backend-assets/grpc/local-store
mkdir -p tests/integration/backend-assets/grpc
cp -f backend-assets/grpc/local-store tests/integration/backend-assets/grpc/
$(GOCMD) run github.com/onsi/ginkgo/v2/ginkgo --label-filter="stores" --flake-attempts 1 -v -r tests/integration
$(GOCMD) run github.com/onsi/ginkgo/v2/ginkgo --label-filter="stores" --flake-attempts $(TEST_FLAKES) -v -r tests/integration
test-container:
docker build --target requirements -t local-ai-test-container .

View File

@@ -40,7 +40,7 @@
> :bulb: Get help - [❓FAQ](https://localai.io/faq/) [💭Discussions](https://github.com/go-skynet/LocalAI/discussions) [:speech_balloon: Discord](https://discord.gg/uJAeKSAGDy) [:book: Documentation website](https://localai.io/)
>
> [💻 Quickstart](https://localai.io/basics/getting_started/) [🖼️ Models](https://models.localai.io/) [🚀 Roadmap](https://github.com/mudler/LocalAI/issues?q=is%3Aissue+is%3Aopen+label%3Aroadmap) [🥽 Demo](https://demo.localai.io) [🌍 Explorer](https://explorer.localai.io) [🛫 Examples](https://github.com/go-skynet/LocalAI/tree/master/examples/)
> [💻 Quickstart](https://localai.io/basics/getting_started/) [🖼️ Models](https://models.localai.io/) [🚀 Roadmap](https://github.com/mudler/LocalAI/issues?q=is%3Aissue+is%3Aopen+label%3Aroadmap) [🥽 Demo](https://demo.localai.io) [🌍 Explorer](https://explorer.localai.io) [🛫 Examples](https://github.com/mudler/LocalAI-examples)
[![tests](https://github.com/go-skynet/LocalAI/actions/workflows/test.yml/badge.svg)](https://github.com/go-skynet/LocalAI/actions/workflows/test.yml)[![Build and Release](https://github.com/go-skynet/LocalAI/actions/workflows/release.yaml/badge.svg)](https://github.com/go-skynet/LocalAI/actions/workflows/release.yaml)[![build container images](https://github.com/go-skynet/LocalAI/actions/workflows/image.yml/badge.svg)](https://github.com/go-skynet/LocalAI/actions/workflows/image.yml)[![Bump dependencies](https://github.com/go-skynet/LocalAI/actions/workflows/bump_deps.yaml/badge.svg)](https://github.com/go-skynet/LocalAI/actions/workflows/bump_deps.yaml)[![Artifact Hub](https://img.shields.io/endpoint?url=https://artifacthub.io/badge/repository/localai)](https://artifacthub.io/packages/search?repo=localai)
@@ -66,10 +66,26 @@ docker run -ti --name local-ai -p 8080:8080 localai/localai:latest-aio-cpu
# docker run -ti --name local-ai -p 8080:8080 --gpus all localai/localai:latest-gpu-nvidia-cuda-12
```
To load models:
```bash
# From the model gallery (see available models with `local-ai models list`, in the WebUI from the model tab, or visiting https://models.localai.io)
local-ai run llama-3.2-1b-instruct:q4_k_m
# Start LocalAI with the phi-2 model directly from huggingface
local-ai run huggingface://TheBloke/phi-2-GGUF/phi-2.Q8_0.gguf
# Install and run a model from the Ollama OCI registry
local-ai run ollama://gemma:2b
# Run a model from a configuration file
local-ai run https://gist.githubusercontent.com/.../phi-2.yaml
# Install and run a model from a standard OCI registry (e.g., Docker Hub)
local-ai run oci://localai/phi-2:latest
```
[💻 Getting started](https://localai.io/basics/getting_started/index.html)
## 📰 Latest project news
- Oct 2024: examples moved to [LocalAI-examples](https://github.com/mudler/LocalAI-examples)
- Aug 2024: 🆕 FLUX-1, [P2P Explorer](https://explorer.localai.io)
- July 2024: 🔥🔥 🆕 P2P Dashboard, LocalAI Federated mode and AI Swarms: https://github.com/mudler/LocalAI/pull/2723
- June 2024: 🆕 You can browse now the model gallery without LocalAI! Check out https://models.localai.io

View File

@@ -219,6 +219,7 @@ message ModelOptions {
int32 SwapSpace = 53;
int32 MaxModelLen = 54;
int32 TensorParallelSize = 55;
string LoadFormat = 58;
string MMProj = 41;
@@ -232,6 +233,11 @@ message ModelOptions {
bool FlashAttention = 56;
bool NoKVOffload = 57;
string ModelPath = 59;
repeated string LoraAdapters = 60;
repeated float LoraScales = 61;
}
message Result {
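On the Go side, the new fields would be populated roughly like this — a sketch that assumes the generated `proto` package (the one dependabot is told to ignore above); the field names come straight from the message definition, but the values are illustrative:

```go
package main

import (
	"fmt"

	pb "github.com/mudler/LocalAI/pkg/grpc/proto"
)

func main() {
	opts := &pb.ModelOptions{
		ModelPath:    "/models/llama-3.2-3b",       // field 59, placeholder path
		LoadFormat:   "safetensors",                // field 58
		LoraAdapters: []string{"/loras/style.bin"}, // field 60, placeholder adapter
		LoraScales:   []float32{0.8},               // field 61, one scale per adapter
	}
	fmt.Println(opts.GetLoraAdapters(), opts.GetLoraScales())
}
```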

View File

@@ -113,7 +113,7 @@ static std::string tokens_to_str(llama_context *ctx, Iter begin, Iter end)
std::string ret;
for (; begin != end; ++begin)
{
ret += llama_token_to_piece(ctx, *begin);
ret += common_token_to_piece(ctx, *begin);
}
return ret;
}
@@ -121,7 +121,7 @@ static std::string tokens_to_str(llama_context *ctx, Iter begin, Iter end)
// format incomplete utf-8 multibyte character for output
static std::string tokens_to_output_formatted_string(const llama_context *ctx, const llama_token token)
{
std::string out = token == -1 ? "" : llama_token_to_piece(ctx, token);
std::string out = token == -1 ? "" : common_token_to_piece(ctx, token);
// if the size is 1 and first bit is 1, meaning it's a partial character
// (size > 1 meaning it's already a known token)
if (out.size() == 1 && (out[0] & 0x80) == 0x80)
@@ -203,8 +203,8 @@ struct llama_client_slot
std::string stopping_word;
// sampling
struct gpt_sampler_params sparams;
gpt_sampler *ctx_sampling = nullptr;
struct common_sampler_params sparams;
common_sampler *ctx_sampling = nullptr;
int32_t ga_i = 0; // group-attention state
int32_t ga_n = 1; // group-attention factor
@@ -257,7 +257,7 @@ struct llama_client_slot
images.clear();
}
bool has_budget(gpt_params &global_params) {
bool has_budget(common_params &global_params) {
if (params.n_predict == -1 && global_params.n_predict == -1)
{
return true; // limitless
@@ -391,6 +391,39 @@ struct llama_metrics {
}
};
struct llava_embd_batch {
std::vector<llama_pos> pos;
std::vector<int32_t> n_seq_id;
std::vector<llama_seq_id> seq_id_0;
std::vector<llama_seq_id *> seq_ids;
std::vector<int8_t> logits;
llama_batch batch;
llava_embd_batch(float * embd, int32_t n_tokens, llama_pos pos_0, llama_seq_id seq_id) {
pos .resize(n_tokens);
n_seq_id.resize(n_tokens);
seq_ids .resize(n_tokens + 1);
logits .resize(n_tokens);
seq_id_0.resize(1);
seq_id_0[0] = seq_id;
seq_ids [n_tokens] = nullptr;
batch = {
/*n_tokens =*/ n_tokens,
/*tokens =*/ nullptr,
/*embd =*/ embd,
/*pos =*/ pos.data(),
/*n_seq_id =*/ n_seq_id.data(),
/*seq_id =*/ seq_ids.data(),
/*logits =*/ logits.data(),
};
for (int i = 0; i < n_tokens; i++) {
batch.pos [i] = pos_0 + i;
batch.n_seq_id[i] = 1;
batch.seq_id [i] = seq_id_0.data();
batch.logits [i] = false;
}
}
};
struct llama_server_context
{
llama_model *model = nullptr;
@@ -398,7 +431,7 @@ struct llama_server_context
clip_ctx *clp_ctx = nullptr;
gpt_params params;
common_params params;
llama_batch batch;
@@ -441,7 +474,7 @@ struct llama_server_context
}
}
bool load_model(const gpt_params &params_)
bool load_model(const common_params &params_)
{
params = params_;
if (!params.mmproj.empty()) {
@@ -458,9 +491,9 @@ struct llama_server_context
}
}
llama_init_result llama_init = llama_init_from_gpt_params(params);
model = llama_init.model;
ctx = llama_init.context;
common_init_result common_init = common_init_from_params(params);
model = common_init.model;
ctx = common_init.context;
if (model == nullptr)
{
LOG_ERR("unable to load model: %s", params.model.c_str());
@@ -578,12 +611,12 @@ struct llama_server_context
std::vector<llama_token> p;
if (first)
{
p = ::llama_tokenize(ctx, s, add_bos, TMP_FORCE_SPECIAL);
p = common_tokenize(ctx, s, add_bos, TMP_FORCE_SPECIAL);
first = false;
}
else
{
p = ::llama_tokenize(ctx, s, false, TMP_FORCE_SPECIAL);
p = common_tokenize(ctx, s, false, TMP_FORCE_SPECIAL);
}
prompt_tokens.insert(prompt_tokens.end(), p.begin(), p.end());
}
@@ -600,7 +633,7 @@ struct llama_server_context
else
{
auto s = json_prompt.template get<std::string>();
prompt_tokens = ::llama_tokenize(ctx, s, add_bos, TMP_FORCE_SPECIAL);
prompt_tokens = common_tokenize(ctx, s, add_bos, TMP_FORCE_SPECIAL);
}
return prompt_tokens;
@@ -629,7 +662,7 @@ struct llama_server_context
bool launch_slot_with_data(llama_client_slot* &slot, json data) {
slot_params default_params;
gpt_sampler_params default_sparams;
common_sampler_params default_sparams;
slot->params.stream = json_value(data, "stream", false);
slot->params.cache_prompt = json_value(data, "cache_prompt", false);
@@ -637,7 +670,6 @@ struct llama_server_context
slot->sparams.top_k = json_value(data, "top_k", default_sparams.top_k);
slot->sparams.top_p = json_value(data, "top_p", default_sparams.top_p);
slot->sparams.min_p = json_value(data, "min_p", default_sparams.min_p);
slot->sparams.tfs_z = json_value(data, "tfs_z", default_sparams.tfs_z);
slot->sparams.typ_p = json_value(data, "typical_p", default_sparams.typ_p);
slot->sparams.temp = json_value(data, "temperature", default_sparams.temp);
slot->sparams.dynatemp_range = json_value(data, "dynatemp_range", default_sparams.dynatemp_range);
@@ -769,7 +801,7 @@ struct llama_server_context
}
else if (el[0].is_string())
{
auto toks = llama_tokenize(model, el[0].get<std::string>(), false);
auto toks = common_tokenize(model, el[0].get<std::string>(), false);
for (auto tok : toks)
{
slot->sparams.logit_bias.push_back({tok, bias});
@@ -801,7 +833,7 @@ struct llama_server_context
sampler_names.emplace_back(name);
}
}
slot->sparams.samplers = gpt_sampler_types_from_names(sampler_names, false);
slot->sparams.samplers = common_sampler_types_from_names(sampler_names, false);
}
else
{
@@ -885,9 +917,9 @@ struct llama_server_context
if (slot->ctx_sampling != nullptr)
{
gpt_sampler_free(slot->ctx_sampling);
common_sampler_free(slot->ctx_sampling);
}
slot->ctx_sampling = gpt_sampler_init(model, slot->sparams);
slot->ctx_sampling = common_sampler_init(model, slot->sparams);
//llama_set_rng_seed(ctx, slot->params.seed);
slot->command = LOAD_PROMPT;
@@ -914,13 +946,13 @@ struct llama_server_context
system_tokens.clear();
if (!system_prompt.empty()) {
system_tokens = ::llama_tokenize(ctx, system_prompt, add_bos_token);
system_tokens = common_tokenize(ctx, system_prompt, add_bos_token);
llama_batch_clear(batch);
common_batch_clear(batch);
for (int i = 0; i < (int)system_tokens.size(); ++i)
{
llama_batch_add(batch, system_tokens[i], i, { 0 }, false);
common_batch_add(batch, system_tokens[i], i, { 0 }, false);
}
for (int32_t i = 0; i < (int32_t) batch.n_tokens; i += params.n_batch)
@@ -934,7 +966,6 @@ struct llama_server_context
batch.n_seq_id + i,
batch.seq_id + i,
batch.logits + i,
0, 0, 0, // unused
};
if (llama_decode(ctx, batch_view) != 0)
{
@@ -1009,7 +1040,7 @@ struct llama_server_context
bool process_token(completion_token_output &result, llama_client_slot &slot) {
// remember which tokens were sampled - used for repetition penalties during sampling
const std::string token_str = llama_token_to_piece(ctx, result.tok);
const std::string token_str = common_token_to_piece(ctx, result.tok);
slot.sampled = result.tok;
// search stop word and delete it
@@ -1160,7 +1191,7 @@ struct llama_server_context
samplers.reserve(slot.sparams.samplers.size());
for (const auto & sampler : slot.sparams.samplers)
{
samplers.emplace_back(gpt_sampler_type_to_str(sampler));
samplers.emplace_back(common_sampler_type_to_str(sampler));
}
return json {
@@ -1174,7 +1205,6 @@ struct llama_server_context
{"top_k", slot.sparams.top_k},
{"top_p", slot.sparams.top_p},
{"min_p", slot.sparams.min_p},
{"tfs_z", slot.sparams.tfs_z},
{"typical_p", slot.sparams.typ_p},
{"repeat_last_n", slot.sparams.penalty_last_n},
{"repeat_penalty", slot.sparams.penalty_repeat},
@@ -1216,7 +1246,7 @@ struct llama_server_context
if (slot.sparams.n_probs > 0)
{
std::vector<completion_token_output> probs_output = {};
const std::vector<llama_token> to_send_toks = llama_tokenize(ctx, tkn.text_to_send, false);
const std::vector<llama_token> to_send_toks = common_tokenize(ctx, tkn.text_to_send, false);
size_t probs_pos = std::min(slot.sent_token_probs_index, slot.generated_token_probs.size());
size_t probs_stop_pos = std::min(slot.sent_token_probs_index + to_send_toks.size(), slot.generated_token_probs.size());
if (probs_pos < probs_stop_pos)
@@ -1268,7 +1298,7 @@ struct llama_server_context
std::vector<completion_token_output> probs = {};
if (!slot.params.stream && slot.stopped_word)
{
const std::vector<llama_token> stop_word_toks = llama_tokenize(ctx, slot.stopping_word, false);
const std::vector<llama_token> stop_word_toks = common_tokenize(ctx, slot.stopping_word, false);
probs = std::vector<completion_token_output>(slot.generated_token_probs.begin(), slot.generated_token_probs.end() - stop_word_toks.size());
}
else
@@ -1379,7 +1409,6 @@ struct llama_server_context
batch.n_seq_id + i,
batch.seq_id + i,
batch.logits + i,
0, 0, 0, // unused
};
if (llama_decode(ctx, batch_view))
{
@@ -1398,8 +1427,9 @@ struct llama_server_context
}
const int n_embd = llama_n_embd(model);
llama_batch batch_img = { n_eval, nullptr, (img.image_embedding + i * n_embd), nullptr, nullptr, nullptr, nullptr, slot.n_past, 1, 0, };
if (llama_decode(ctx, batch_img))
float * embd = img.image_embedding + i * n_embd;
llava_embd_batch llava_batch = llava_embd_batch(embd, n_eval, slot.n_past, 0);
if (llama_decode(ctx, llava_batch.batch))
{
LOG("%s : failed to eval image\n", __func__);
return false;
@@ -1408,7 +1438,7 @@ struct llama_server_context
}
image_idx++;
llama_batch_clear(batch);
common_batch_clear(batch);
// append prefix of next image
const auto json_prompt = (image_idx >= (int) slot.images.size()) ?
@@ -1418,7 +1448,7 @@ struct llama_server_context
std::vector<llama_token> append_tokens = tokenize(json_prompt, false); // has next image
for (int i = 0; i < (int) append_tokens.size(); ++i)
{
llama_batch_add(batch, append_tokens[i], system_tokens.size() + slot.n_past, { slot.id }, true);
common_batch_add(batch, append_tokens[i], system_tokens.size() + slot.n_past, { slot.id }, true);
slot.n_past += 1;
}
}
@@ -1550,7 +1580,7 @@ struct llama_server_context
update_system_prompt();
}
llama_batch_clear(batch);
common_batch_clear(batch);
if (all_slots_are_idle)
{
@@ -1628,7 +1658,7 @@ struct llama_server_context
// TODO: we always have to take into account the "system_tokens"
// this is not great and needs to be improved somehow
llama_batch_add(batch, slot.sampled, system_tokens.size() + slot_npast, { slot.id }, true);
common_batch_add(batch, slot.sampled, system_tokens.size() + slot_npast, { slot.id }, true);
slot.n_past += 1;
}
@@ -1722,7 +1752,7 @@ struct llama_server_context
if (!slot.params.cache_prompt)
{
gpt_sampler_reset(slot.ctx_sampling);
common_sampler_reset(slot.ctx_sampling);
slot.n_past = 0;
slot.n_past_se = 0;
@@ -1734,7 +1764,7 @@ struct llama_server_context
// push the prompt into the sampling context (do not apply grammar)
for (auto &token : prompt_tokens)
{
gpt_sampler_accept(slot.ctx_sampling, token, false);
common_sampler_accept(slot.ctx_sampling, token, false);
}
slot.n_past = common_part(slot.cache_tokens, prompt_tokens);
@@ -1826,7 +1856,7 @@ struct llama_server_context
ga_i += ga_w/ga_n;
}
}
llama_batch_add(batch, prefix_tokens[slot.n_past], system_tokens.size() + slot_npast, {slot.id }, false);
common_batch_add(batch, prefix_tokens[slot.n_past], system_tokens.size() + slot_npast, {slot.id }, false);
slot_npast++;
}
@@ -1904,7 +1934,6 @@ struct llama_server_context
batch.n_seq_id + i,
batch.seq_id + i,
batch.logits + i,
0, 0, 0, // unused
};
const int ret = llama_decode(ctx, batch_view);
@@ -1943,9 +1972,9 @@ struct llama_server_context
}
completion_token_output result;
const llama_token id = gpt_sampler_sample(slot.ctx_sampling, ctx, slot.i_batch - i);
const llama_token id = common_sampler_sample(slot.ctx_sampling, ctx, slot.i_batch - i);
gpt_sampler_accept(slot.ctx_sampling, id, true);
common_sampler_accept(slot.ctx_sampling, id, true);
slot.n_decoded += 1;
if (slot.n_decoded == 1)
@@ -1956,7 +1985,7 @@ struct llama_server_context
}
result.tok = id;
const auto * cur_p = gpt_sampler_get_candidates(slot.ctx_sampling);
const auto * cur_p = common_sampler_get_candidates(slot.ctx_sampling);
for (size_t i = 0; i < (size_t) slot.sparams.n_probs; ++i) {
result.probs.push_back({
@@ -2009,7 +2038,7 @@ static json format_partial_response(
struct token_translator
{
llama_context * ctx;
std::string operator()(llama_token tok) const { return llama_token_to_piece(ctx, tok); }
std::string operator()(llama_token tok) const { return common_token_to_piece(ctx, tok); }
std::string operator()(const completion_token_output &cto) const { return (*this)(cto.tok); }
};
@@ -2074,7 +2103,6 @@ json parse_options(bool streaming, const backend::PredictOptions* predict, llama
// slot->params.n_predict = json_value(data, "n_predict", default_params.n_predict);
// slot->sparams.top_k = json_value(data, "top_k", default_sparams.top_k);
// slot->sparams.top_p = json_value(data, "top_p", default_sparams.top_p);
// slot->sparams.tfs_z = json_value(data, "tfs_z", default_sparams.tfs_z);
// slot->sparams.typical_p = json_value(data, "typical_p", default_sparams.typical_p);
// slot->sparams.temp = json_value(data, "temperature", default_sparams.temp);
// slot->sparams.penalty_last_n = json_value(data, "repeat_last_n", default_sparams.penalty_last_n);
@@ -2098,7 +2126,6 @@ json parse_options(bool streaming, const backend::PredictOptions* predict, llama
data["n_predict"] = predict->tokens() == 0 ? -1 : predict->tokens();
data["top_k"] = predict->topk();
data["top_p"] = predict->topp();
data["tfs_z"] = predict->tailfreesamplingz();
data["typical_p"] = predict->typicalp();
data["temperature"] = predict->temperature();
data["repeat_last_n"] = predict->repeat();
@@ -2145,7 +2172,6 @@ json parse_options(bool streaming, const backend::PredictOptions* predict, llama
// llama.params.n_predict = predict->tokens() == 0 ? -1 : predict->tokens();
// llama.params.sparams.top_k = predict->topk();
// llama.params.sparams.top_p = predict->topp();
// llama.params.sparams.tfs_z = predict->tailfreesamplingz();
// llama.params.sparams.typical_p = predict->typicalp();
// llama.params.sparams.penalty_last_n = predict->repeat();
// llama.params.sparams.temp = predict->temperature();
@@ -2203,7 +2229,7 @@ json parse_options(bool streaming, const backend::PredictOptions* predict, llama
// }
static void params_parse(const backend::ModelOptions* request,
gpt_params & params) {
common_params & params) {
// this is comparable to: https://github.com/ggerganov/llama.cpp/blob/d9b33fe95bd257b36c84ee5769cc048230067d6f/examples/server/server.cpp#L1809
@@ -2311,7 +2337,7 @@ public:
grpc::Status LoadModel(ServerContext* context, const backend::ModelOptions* request, backend::Result* result) {
// Implement LoadModel RPC
gpt_params params;
common_params params;
params_parse(request, params);
llama_backend_init();
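
Note: the server changes above track llama.cpp's rename of the gpt_sampler_*/llama_batch_* helpers to the common_* family, plus the upstream removal of tail-free sampling, which is why every tfs_z reference is dropped rather than renamed. A minimal client-side sketch of the surviving sampling knobs, assuming stubs generated from LocalAI's backend.proto (field names are inferred from the predict->...() accessors in this diff, so treat them as illustrative):

    import grpc

    import backend_pb2  # generated from backend.proto (see protogen.sh below)
    import backend_pb2_grpc

    def predict(prompt: str, addr: str = "localhost:50051") -> bytes:
        with grpc.insecure_channel(addr) as channel:
            stub = backend_pb2_grpc.BackendStub(channel)
            # TailFreeSamplingZ used to map to sparams.tfs_z; after this change
            # the server no longer forwards it, so only surviving samplers are set.
            opts = backend_pb2.PredictOptions(
                Prompt=prompt,
                TopK=40,
                TopP=0.95,
                Temperature=0.7,
            )
            return stub.Predict(opts).message
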

View File

@@ -1,2 +1,2 @@
--extra-index-url https://download.pytorch.org/whl/cu118
torch
torch==2.4.1+cu118

View File

@@ -1 +1 @@
torch
torch==2.4.1

View File

@@ -1,2 +1,2 @@
--extra-index-url https://download.pytorch.org/whl/rocm6.0
torch
torch==2.4.1+rocm6.0

View File

@@ -1,6 +1,6 @@
accelerate
auto-gptq==0.7.1
grpcio==1.66.2
grpcio==1.67.1
protobuf
certifi
transformers

View File

@@ -1,4 +1,4 @@
transformers
accelerate
torch
torchaudio
torch==2.4.1
torchaudio==2.4.1

View File

@@ -1,5 +1,5 @@
--extra-index-url https://download.pytorch.org/whl/cu118
torch
torchaudio
torch==2.4.1+cu118
torchaudio==2.4.1+cu118
transformers
accelerate

View File

@@ -1,4 +1,4 @@
torch
torchaudio
torch==2.4.1
torchaudio==2.4.1
transformers
accelerate

View File

@@ -1,5 +1,5 @@
--extra-index-url https://download.pytorch.org/whl/rocm6.0
torch
torchaudio
torch==2.4.1+rocm6.0
torchaudio==2.4.1+rocm6.0
transformers
accelerate

View File

@@ -1,4 +1,4 @@
bark==0.1.5
grpcio==1.66.2
grpcio==1.67.1
protobuf
certifi

View File

@@ -1,8 +1,9 @@
.DEFAULT_GOAL := install
.PHONY: install
install: protogen
install:
bash install.sh
$(MAKE) protogen
.PHONY: protogen
protogen: backend_pb2_grpc.py backend_pb2.py
@@ -12,7 +13,7 @@ protogen-clean:
$(RM) backend_pb2_grpc.py backend_pb2.py
backend_pb2_grpc.py backend_pb2.py:
python3 -m grpc_tools.protoc -I../.. --python_out=. --grpc_python_out=. backend.proto
bash protogen.sh
.PHONY: clean
clean: protogen-clean

View File

@@ -0,0 +1,6 @@
#!/bin/bash
set -e
source $(dirname $0)/../common/libbackend.sh
python3 -m grpc_tools.protoc -I../.. --python_out=. --grpc_python_out=. backend.proto
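
The new protogen.sh generates the gRPC stubs inside the backend's own environment, replacing the protoc invocation each Makefile used to run inline. For reference, the same step can be driven from Python via grpc_tools (a sketch equivalent to the shell line above):

    from grpc_tools import protoc

    # Mirrors: python3 -m grpc_tools.protoc -I../.. --python_out=. --grpc_python_out=. backend.proto
    ret = protoc.main([
        "grpc_tools.protoc",    # argv[0] placeholder, ignored by protoc
        "-I../..",              # search path containing backend.proto
        "--python_out=.",       # emits backend_pb2.py
        "--grpc_python_out=.",  # emits backend_pb2_grpc.py
        "backend.proto",
    ])
    if ret != 0:
        raise SystemExit(ret)
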

View File

@@ -1,2 +1,3 @@
grpcio==1.66.2
protobuf
grpcio==1.67.1
protobuf
grpcio-tools

View File

@@ -1,3 +1,4 @@
transformers
accelerate
torch
torch==2.4.1
coqui-tts

View File

@@ -1,5 +1,6 @@
--extra-index-url https://download.pytorch.org/whl/cu118
torch
torchaudio
torch==2.4.1+cu118
torchaudio==2.4.1+cu118
transformers
accelerate
accelerate
coqui-tts

View File

@@ -1,4 +1,5 @@
torch
torchaudio
torch==2.4.1
torchaudio==2.4.1
transformers
accelerate
accelerate
coqui-tts

View File

@@ -1,5 +1,6 @@
--extra-index-url https://download.pytorch.org/whl/rocm6.0
torch
torchaudio
torch==2.4.1+rocm6.0
torchaudio==2.4.1+rocm6.0
transformers
accelerate
accelerate
coqui-tts

View File

@@ -5,4 +5,5 @@ torchaudio
optimum[openvino]
setuptools==75.1.0 # https://github.com/mudler/LocalAI/issues/2406
transformers
accelerate
accelerate
coqui-tts

View File

@@ -1,4 +1,4 @@
coqui-tts
grpcio==1.66.2
grpcio==1.67.1
protobuf
certifi
certifi
packaging==24.1

View File

@@ -19,7 +19,7 @@ class TestBackendServicer(unittest.TestCase):
This method sets up the gRPC service by starting the server
"""
self.service = subprocess.Popen(["python3", "backend.py", "--addr", "localhost:50051"])
time.sleep(10)
time.sleep(30)
def tearDown(self) -> None:
"""

View File

@@ -247,11 +247,16 @@ class BackendServicer(backend_pb2_grpc.BackendServicer):
use_safetensors=True,
variant=variant)
elif request.PipelineType == "FluxPipeline":
if fromSingleFile:
self.pipe = FluxPipeline.from_single_file(modelFile,
torch_dtype=torchType,
use_safetensors=True)
else:
self.pipe = FluxPipeline.from_pretrained(
request.Model,
torch_dtype=torch.bfloat16)
if request.LowVRAM:
self.pipe.enable_model_cpu_offload()
if request.LowVRAM:
self.pipe.enable_model_cpu_offload()
elif request.PipelineType == "FluxTransformer2DModel":
dtype = torch.bfloat16
# specify from environment or default to "ChuckMcSneed/FLUX.1-dev"
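
The FluxPipeline fix wraps the two load paths in an explicit branch so the LowVRAM offload applies in either case: single .safetensors checkpoints go through from_single_file, hub-style model directories through from_pretrained. In plain form (a sketch of the logic above; dtype handling simplified to bfloat16 throughout):

    import torch
    from diffusers import FluxPipeline

    def load_flux(model_file: str, model_name: str,
                  from_single_file: bool, low_vram: bool) -> FluxPipeline:
        if from_single_file:
            # The whole checkpoint lives in one local .safetensors file.
            pipe = FluxPipeline.from_single_file(model_file,
                                                 torch_dtype=torch.bfloat16,
                                                 use_safetensors=True)
        else:
            pipe = FluxPipeline.from_pretrained(model_name,
                                                torch_dtype=torch.bfloat16)
        if low_vram:
            # Keeps submodules on CPU and moves them to GPU only while they run.
            pipe.enable_model_cpu_offload()
        return pipe
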
@@ -296,22 +301,34 @@ class BackendServicer(backend_pb2_grpc.BackendServicer):
self.pipe.controlnet = self.controlnet
else:
self.controlnet = None
# Assume directory from request.ModelFile.
# Only if request.LoraAdapter it's not an absolute path
if request.LoraAdapter and request.ModelFile != "" and not os.path.isabs(request.LoraAdapter) and request.LoraAdapter:
# get base path of modelFile
modelFileBase = os.path.dirname(request.ModelFile)
if request.LoraAdapter and not os.path.isabs(request.LoraAdapter):
# modify LoraAdapter to be relative to modelFileBase
request.LoraAdapter = os.path.join(modelFileBase, request.LoraAdapter)
request.LoraAdapter = os.path.join(request.ModelPath, request.LoraAdapter)
device = "cpu" if not request.CUDA else "cuda"
self.device = device
if request.LoraAdapter:
# Check if it's a local file and not a directory (we load LoRA differently for a safetensors file)
if os.path.exists(request.LoraAdapter) and not os.path.isdir(request.LoraAdapter):
# self.load_lora_weights(request.LoraAdapter, 1, device, torchType)
self.pipe.load_lora_weights(request.LoraAdapter)
else:
self.pipe.unet.load_attn_procs(request.LoraAdapter)
if len(request.LoraAdapters) > 0:
i = 0
adapters_name = []
adapters_weights = []
for adapter in request.LoraAdapters:
if not os.path.isabs(adapter):
adapter = os.path.join(request.ModelPath, adapter)
self.pipe.load_lora_weights(adapter, adapter_name=f"adapter_{i}")
adapters_name.append(f"adapter_{i}")
i += 1
for adapters_weight in request.LoraScales:
adapters_weights.append(adapters_weight)
self.pipe.set_adapters(adapters_name, adapter_weights=adapters_weights)
if request.CUDA:
self.pipe.to('cuda')
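
Adapter paths are now resolved against request.ModelPath rather than the directory of request.ModelFile, and a new loop loads several LoRA adapters at once, pairing each adapter_{i} name with a scale from request.LoraScales. The loop in isolation (a sketch; load_lora_weights and set_adapters are the diffusers PEFT-integration calls used above):

    import os

    def load_loras(pipe, model_path: str, adapters: list[str], scales: list[float]) -> None:
        names = []
        for i, adapter in enumerate(adapters):
            if not os.path.isabs(adapter):
                # Relative paths are resolved against the model directory.
                adapter = os.path.join(model_path, adapter)
            pipe.load_lora_weights(adapter, adapter_name=f"adapter_{i}")
            names.append(f"adapter_{i}")
        # One weight per adapter; mismatched lengths are left to diffusers to reject.
        pipe.set_adapters(names, adapter_weights=scales)
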
@@ -392,8 +409,6 @@ class BackendServicer(backend_pb2_grpc.BackendServicer):
# create a dictionary of values for the parameters
options = {
"negative_prompt": request.negative_prompt,
"width": request.width,
"height": request.height,
"num_inference_steps": steps,
}
@@ -411,13 +426,13 @@ class BackendServicer(backend_pb2_grpc.BackendServicer):
keys = options.keys()
if request.EnableParameters != "":
keys = request.EnableParameters.split(",")
keys = [key.strip() for key in request.EnableParameters.split(",")]
if request.EnableParameters == "none":
keys = []
# create a dictionary of parameters by using the keys from EnableParameters and the values from defaults
kwargs = {key: options[key] for key in keys}
kwargs = {key: options.get(key) for key in keys if key in options}
# Set seed
if request.seed > 0:
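
EnableParameters is a comma-separated whitelist of option names; the fix strips whitespace around each entry and switches to options.get with a membership test, so an unknown or mistyped key is dropped instead of raising KeyError. The behaviour in isolation (a sketch):

    def select_kwargs(options: dict, enable_parameters: str) -> dict:
        keys = options.keys()
        if enable_parameters != "":
            # " negative_prompt, num_inference_steps " -> ["negative_prompt", ...]
            keys = [key.strip() for key in enable_parameters.split(",")]
        if enable_parameters == "none":
            keys = []
        # Unknown keys are silently skipped rather than raising KeyError.
        return {key: options.get(key) for key in keys if key in options}

    opts = {"negative_prompt": "blurry", "num_inference_steps": 25}
    assert select_kwargs(opts, "negative_prompt, bogus") == {"negative_prompt": "blurry"}
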
@@ -428,6 +443,12 @@ class BackendServicer(backend_pb2_grpc.BackendServicer):
if self.PipelineType == "FluxPipeline":
kwargs["max_sequence_length"] = 256
if request.width:
kwargs["width"] = request.width
if request.height:
kwargs["height"] = request.height
if self.PipelineType == "FluxTransformer2DModel":
kwargs["output_type"] = "pil"
kwargs["generator"] = torch.Generator("cpu").manual_seed(0)
@@ -447,6 +468,7 @@ class BackendServicer(backend_pb2_grpc.BackendServicer):
export_to_video(video_frames, request.dst)
return backend_pb2.Result(message="Media generated successfully", success=True)
print(f"Generating image with {kwargs=}", file=sys.stderr)
image = {}
if COMPEL:
conditioning, pooled = self.compel.build_conditioning_tensor(prompt)

View File

@@ -5,5 +5,5 @@ accelerate
compel
peft
sentencepiece
torch
torch==2.4.1
optimum-quanto

View File

@@ -1,5 +1,5 @@
--extra-index-url https://download.pytorch.org/whl/cu118
torch
torch==2.4.1+cu118
diffusers
opencv-python
transformers

View File

@@ -1,4 +1,4 @@
torch
torch==2.4.1
diffusers
opencv-python
transformers

View File

@@ -1,5 +1,5 @@
setuptools
grpcio==1.66.2
grpcio==1.67.1
pillow
protobuf
certifi

View File

@@ -1,3 +1,3 @@
transformers
accelerate
torch
torch==2.4.1

View File

@@ -1,4 +1,4 @@
--extra-index-url https://download.pytorch.org/whl/cu118
torch
torch==2.4.1+cu118
transformers
accelerate

View File

@@ -1,3 +1,3 @@
torch
torch==2.4.1
transformers
accelerate

View File

@@ -1,4 +1,4 @@
grpcio==1.66.2
grpcio==1.67.1
protobuf
certifi
wheel

View File

@@ -1,2 +1,2 @@
torch
torch==2.4.1
transformers

View File

@@ -1,3 +1,3 @@
--extra-index-url https://download.pytorch.org/whl/cu118
torch
torch==2.4.1+cu118
transformers

View File

@@ -1,2 +1,2 @@
torch
torch==2.4.1
transformers

View File

@@ -1,3 +1,3 @@
grpcio==1.66.2
grpcio==1.67.1
protobuf
certifi

View File

@@ -1 +1,3 @@
torch
torch==2.4.1
git+https://github.com/myshell-ai/MeloTTS.git
git+https://github.com/myshell-ai/OpenVoice.git

View File

@@ -1,2 +1,4 @@
--extra-index-url https://download.pytorch.org/whl/cu118
torch
torch==2.4.1+cu118
git+https://github.com/myshell-ai/MeloTTS.git
git+https://github.com/myshell-ai/OpenVoice.git

View File

@@ -1 +1,3 @@
torch
torch==2.4.1
git+https://github.com/myshell-ai/MeloTTS.git
git+https://github.com/myshell-ai/OpenVoice.git

View File

@@ -1,2 +1,4 @@
--extra-index-url https://download.pytorch.org/whl/rocm6.0
torch
torch==2.4.1+rocm6.0
git+https://github.com/myshell-ai/MeloTTS.git
git+https://github.com/myshell-ai/OpenVoice.git

View File

@@ -2,22 +2,22 @@
intel-extension-for-pytorch
torch
optimum[openvino]
grpcio==1.66.2
grpcio==1.67.1
protobuf
librosa==0.9.1
faster-whisper==1.0.3
faster-whisper==0.9.0
pydub==0.25.1
wavmark==0.0.3
numpy==1.26.4
numpy==1.22.0
eng_to_ipa==0.0.2
inflect==7.0.0
unidecode==1.3.7
whisper-timestamped==1.15.4
whisper-timestamped==1.14.2
openai
python-dotenv
pypinyin==0.50.0
cn2an==0.5.22
jieba==0.42.1
gradio==4.44.1
langid==1.1.6
git+https://github.com/myshell-ai/MeloTTS.git
git+https://github.com/myshell-ai/OpenVoice.git

View File

@@ -1,10 +1,10 @@
grpcio==1.66.2
grpcio==1.67.1
protobuf
librosa
faster-whisper
pydub==0.25.1
wavmark==0.0.3
numpy
numpy==1.22.0
eng_to_ipa==0.0.2
inflect
unidecode
@@ -13,8 +13,8 @@ openai
python-dotenv
pypinyin
cn2an==0.5.22
networkx==2.8.8
jieba==0.42.1
gradio
gradio==3.48.0
langid==1.1.6
git+https://github.com/myshell-ai/MeloTTS.git
git+https://github.com/myshell-ai/OpenVoice.git
llvmlite==0.43.0

View File

@@ -19,7 +19,7 @@ class TestBackendServicer(unittest.TestCase):
This method sets up the gRPC service by starting the server
"""
self.service = subprocess.Popen(["python3", "backend.py", "--addr", "localhost:50051"])
time.sleep(10)
time.sleep(30)
def tearDown(self) -> None:
"""

View File

@@ -12,9 +12,10 @@ export SKIP_CONDA=1
endif
.PHONY: parler-tts
parler-tts: protogen
parler-tts:
@echo "Installing $(CONDA_ENV_PATH)..."
bash install.sh $(CONDA_ENV_PATH)
$(MAKE) protogen
.PHONY: run
run: protogen
@@ -36,7 +37,7 @@ protogen-clean:
$(RM) backend_pb2_grpc.py backend_pb2.py
backend_pb2_grpc.py backend_pb2.py:
python3 -m grpc_tools.protoc -I../.. --python_out=. --grpc_python_out=. backend.proto
bash protogen.sh
.PHONY: clean
clean: protogen-clean

View File

@@ -11,8 +11,10 @@ if [ "x${BUILD_PROFILE}" == "xintel" ]; then
EXTRA_PIP_INSTALL_FLAGS+=" --upgrade --index-strategy=unsafe-first-match"
fi
installRequirements
# https://github.com/descriptinc/audiotools/issues/101
# incompatible protobuf versions.
PYDIR=python3.10

View File

@@ -0,0 +1,6 @@
#!/bin/bash
set -e
source $(dirname $0)/../common/libbackend.sh
python3 -m grpc_tools.protoc -I../.. --python_out=. --grpc_python_out=. backend.proto

View File

@@ -1,3 +1,4 @@
git+https://github.com/huggingface/parler-tts.git@8e465f1b5fcd223478e07175cb40494d19ffbe17
llvmlite==0.43.0
numba==0.60.0
grpcio-tools==1.42.0

View File

@@ -1,3 +1,3 @@
transformers
accelerate
torch
torch==2.4.1

View File

@@ -1,5 +1,5 @@
--extra-index-url https://download.pytorch.org/whl/cu118
torch
torchaudio
torch==2.4.1+cu118
torchaudio==2.4.1+cu118
transformers
accelerate

View File

@@ -1,4 +1,4 @@
torch
torchaudio
torch==2.4.1
torchaudio==2.4.1
transformers
accelerate

View File

@@ -1,4 +1,3 @@
grpcio==1.66.2
protobuf
grpcio==1.67.1
certifi
llvmlite==0.43.0
llvmlite==0.43.0

View File

@@ -1,4 +1,4 @@
transformers
accelerate
torch
torch==2.4.1
rerankers[transformers]

View File

@@ -1,5 +1,5 @@
--extra-index-url https://download.pytorch.org/whl/cu118
transformers
accelerate
torch
torch==2.4.1+cu118
rerankers[transformers]

View File

@@ -1,4 +1,4 @@
transformers
accelerate
torch
torch==2.4.1
rerankers[transformers]

View File

@@ -1,5 +1,5 @@
--extra-index-url https://download.pytorch.org/whl/rocm6.0
transformers
accelerate
torch
torch==2.4.1+rocm6.0
rerankers[transformers]

View File

@@ -1,3 +1,3 @@
grpcio==1.66.2
grpcio==1.67.1
protobuf
certifi

View File

@@ -1,6 +1,6 @@
torch
torch==2.4.1
accelerate
transformers
bitsandbytes
sentence-transformers==3.1.1
sentence-transformers==3.2.0
transformers

View File

@@ -1,5 +1,5 @@
--extra-index-url https://download.pytorch.org/whl/cu118
torch
torch==2.4.1+cu118
accelerate
sentence-transformers==3.1.1
sentence-transformers==3.2.0
transformers

View File

@@ -1,4 +1,4 @@
torch
torch==2.4.1
accelerate
sentence-transformers==3.1.1
sentence-transformers==3.2.0
transformers

View File

@@ -1,5 +1,5 @@
--extra-index-url https://download.pytorch.org/whl/rocm6.0
torch
torch==2.4.1+rocm6.0
accelerate
sentence-transformers==3.1.1
sentence-transformers==3.2.0
transformers

View File

@@ -4,5 +4,5 @@ torch
optimum[openvino]
setuptools==69.5.1 # https://github.com/mudler/LocalAI/issues/2406
accelerate
sentence-transformers==3.1.1
sentence-transformers==3.2.0
transformers

View File

@@ -1,4 +1,4 @@
grpcio==1.66.2
grpcio==1.67.1
protobuf
certifi
datasets

View File

@@ -1,3 +1,3 @@
transformers
accelerate
torch
torch==2.4.1

View File

@@ -1,4 +1,4 @@
--extra-index-url https://download.pytorch.org/whl/cu118
transformers
accelerate
torch
torch==2.4.1+cu118

View File

@@ -1,3 +1,3 @@
transformers
accelerate
torch
torch==2.4.1

View File

@@ -1,4 +1,4 @@
--extra-index-url https://download.pytorch.org/whl/rocm6.0
transformers
accelerate
torch
torch==2.4.1+rocm6.0

View File

@@ -1,4 +1,4 @@
grpcio==1.66.2
grpcio==1.67.1
protobuf
scipy==1.14.0
certifi

View File

@@ -72,7 +72,12 @@ class BackendServicer(backend_pb2_grpc.BackendServicer):
Returns:
A Result object that contains the result of the LoadModel operation.
"""
model_name = request.Model
# Check to see if the Model exists in the filesystem already.
if os.path.exists(request.ModelFile):
model_name = request.ModelFile
compute = torch.float16
if request.F16Memory == True:
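
The whisper backend previously passed request.Model straight through; with this change a model that already exists on disk (request.ModelFile) takes precedence over a hub identifier. A compact sketch of the resulting resolution order (the float32 fallback is an assumption, since the diff is truncated after the F16Memory check):

    import os
    import torch

    def resolve_model(request) -> tuple[str, torch.dtype]:
        model_name = request.Model
        # A file that already exists locally wins over a remote model name.
        if os.path.exists(request.ModelFile):
            model_name = request.ModelFile
        compute = torch.float16 if request.F16Memory else torch.float32  # fallback assumed
        return model_name, compute
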

Some files were not shown because too many files have changed in this diff.