Compare commits


49 Commits

Author SHA1 Message Date
Ettore Di Giacinto
61a6e95f7d Additional thinking tags
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2026-01-20 12:02:35 +01:00
Ettore Di Giacinto
a352125726 chore: refactorings
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2026-01-20 11:48:00 +01:00
Ettore Di Giacinto
187e474daf fix(reasoning): handle only closing tags
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2026-01-20 11:40:29 +01:00
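The fix above keys reasoning extraction on the closing tag only, which copes with models that stream straight into "thinking" text without emitting the opening tag. A minimal, hypothetical Go sketch of that idea (the tag name and function are illustrative, not the actual LocalAI implementation):

```go
package main

import (
	"fmt"
	"strings"
)

// splitReasoning separates a model's "thinking" span from its final
// answer by scanning for the closing tag only, so output from models
// that omit the opening <think> tag is still handled.
// Hypothetical sketch; the tag name and API are illustrative.
func splitReasoning(s string) (reasoning, content string) {
	const closing = "</think>"
	idx := strings.Index(s, closing)
	if idx == -1 {
		return "", s // no closing tag: everything is content
	}
	reasoning = strings.TrimPrefix(strings.TrimSpace(s[:idx]), "<think>")
	content = strings.TrimSpace(s[idx+len(closing):])
	return strings.TrimSpace(reasoning), content
}

func main() {
	// Opening tag missing: keyed on the closing tag, the split still works.
	r, c := splitReasoning("let me check the units</think>It is 3 m/s.")
	fmt.Printf("reasoning=%q content=%q\n", r, c)
}
```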
Ettore Di Giacinto
4bf2f8bbd8 chore(docs): update docs with Anthropic API and openresponses
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2026-01-20 09:25:24 +01:00
LocalAI [bot]
d3525b7509 chore: ⬆️ Update ggml-org/llama.cpp to 959ecf7f234dc0bc0cd6829b25cb0ee1481aa78a (#8122)
⬆️ Update ggml-org/llama.cpp

Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2026-01-19 22:50:47 +01:00
LocalAI [bot]
c8aa821e0e chore: ⬆️ Update leejet/stable-diffusion.cpp to a48b4a3ade9972faf0adcad47e51c6fc03f0e46d (#8121)
⬆️ Update leejet/stable-diffusion.cpp

Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2026-01-19 22:27:46 +01:00
dependabot[bot]
b3191927ae chore(deps): bump github.com/mudler/cogito from 0.7.2 to 0.8.1 (#8124)
Bumps [github.com/mudler/cogito](https://github.com/mudler/cogito) from 0.7.2 to 0.8.1.
- [Release notes](https://github.com/mudler/cogito/releases)
- [Commits](https://github.com/mudler/cogito/compare/v0.7.2...v0.8.1)

---
updated-dependencies:
- dependency-name: github.com/mudler/cogito
  dependency-version: 0.8.1
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-01-19 22:26:26 +01:00
LocalAI [bot]
54c5a2d9ea docs: ⬆️ update docs version mudler/LocalAI (#8120)
⬆️ Update docs version mudler/LocalAI

Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2026-01-19 21:18:24 +00:00
Ettore Di Giacinto
0279591fec Enable reranking for Qwen3-VL-Reranker-8B
Signed-off-by: Ettore Di Giacinto <mudler@users.noreply.github.com>
2026-01-19 15:28:58 +01:00
LocalAI [bot]
8845186955 chore: ⬆️ Update leejet/stable-diffusion.cpp to 2efd19978dd4164e387bf226025c9666b6ef35e2 (#8099)
⬆️ Update leejet/stable-diffusion.cpp

Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2026-01-18 22:40:35 +01:00
LocalAI [bot]
ab8ed24358 chore: ⬆️ Update ggml-org/llama.cpp to 287a33017b32600bfc0e81feeb0ad6e81e0dd484 (#8100)
⬆️ Update ggml-org/llama.cpp

Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2026-01-18 22:40:14 +01:00
LocalAI [bot]
a021df5a88 feat(swagger): update swagger (#8098)
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2026-01-18 22:10:06 +01:00
Ettore Di Giacinto
5f403b1631 chore: drop neutts for l4t (#8101)
Builds currently exhaust CI, and there are better backends at this
point in time. We will probably deprecate it in the future.

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2026-01-18 21:55:56 +01:00
rampa3
897ad1729e chore(model gallery): add qwen3-coder-30b-a3b-instruct based on model request (#8082)
* chore(model gallery): add qwen3-coder-30b-a3b-instruct based on model request

Signed-off-by: rampa3 <68955305+rampa3@users.noreply.github.com>

* added missing model config import URL

Signed-off-by: rampa3 <68955305+rampa3@users.noreply.github.com>

---------

Signed-off-by: rampa3 <68955305+rampa3@users.noreply.github.com>
2026-01-18 09:23:07 +01:00
LocalAI [bot]
16a18a2e55 chore: ⬆️ Update leejet/stable-diffusion.cpp to 9565c7f6bd5fcff124c589147b2621244f2c4aa1 (#8086)
⬆️ Update leejet/stable-diffusion.cpp

Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2026-01-17 22:12:21 +01:00
Ettore Di Giacinto
3387bfaee0 feat(api): add support for open responses specification (#8063)
* feat: openresponses

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* Add ttl settings, fix tests

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* fix: register cors middleware by default

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* satisfy schema

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* Logitbias and logprobs

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* Add grammar

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* SSE compliance

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* tool JSON conversion

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* support background mode

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* swagger

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* drop code. This is handled in the handler

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* Small refactorings

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* background mode for MCP

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

---------

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2026-01-17 22:11:47 +01:00
LocalAI [bot]
1cd33047b4 chore: ⬆️ Update ggml-org/llama.cpp to 2fbde785bc106ae1c4102b0e82b9b41d9c466579 (#8087)
⬆️ Update ggml-org/llama.cpp

Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2026-01-17 21:10:18 +00:00
Ettore Di Giacinto
1de045311a chore(ui): add video generation link (#8079)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2026-01-17 09:49:47 +01:00
LocalAI [bot]
5fe9bf9f84 chore: ⬆️ Update ggml-org/whisper.cpp to f53dc74843e97f19f94a79241357f74ad5b691a6 (#8074)
⬆️ Update ggml-org/whisper.cpp

Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2026-01-17 08:32:53 +01:00
LocalAI [bot]
d4fd0c0609 chore: ⬆️ Update ggml-org/llama.cpp to 388ce822415f24c60fcf164a321455f1e008cafb (#8073)
⬆️ Update ggml-org/llama.cpp

Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2026-01-16 21:22:33 +00:00
Ettore Di Giacinto
d16722ee13 Revert "chore(deps): bump torch from 2.3.1+cxx11.abi to 2.8.0 in /backend/python/rerankers in the pip group across 1 directory" (#8072)
Revert "chore(deps): bump torch from 2.3.1+cxx11.abi to 2.8.0 in /backend/pyt…"

This reverts commit 1f10ab39a9.
2026-01-16 20:50:33 +01:00
dependabot[bot]
1f10ab39a9 chore(deps): bump torch from 2.3.1+cxx11.abi to 2.8.0 in /backend/python/rerankers in the pip group across 1 directory (#8066)
chore(deps): bump torch

Bumps the pip group with 1 update in the /backend/python/rerankers directory: [torch](https://github.com/pytorch/pytorch).


Updates `torch` from 2.3.1+cxx11.abi to 2.8.0
- [Release notes](https://github.com/pytorch/pytorch/releases)
- [Changelog](https://github.com/pytorch/pytorch/blob/main/RELEASE.md)
- [Commits](https://github.com/pytorch/pytorch/commits/v2.8.0)

---
updated-dependencies:
- dependency-name: torch
  dependency-version: 2.8.0
  dependency-type: direct:production
  dependency-group: pip
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-01-16 19:38:12 +00:00
Ettore Di Giacinto
4d36e393d1 fix(ci): use more beefy runner for expensive jobs (#8065)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2026-01-16 19:26:40 +01:00
LocalAI [bot]
cb8616c7d1 chore: ⬆️ Update ggml-org/llama.cpp to 785a71008573e2d84728fb0ba9e851d72d3f8fab (#8053)
⬆️ Update ggml-org/llama.cpp

Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2026-01-15 22:53:17 +01:00
LocalAI [bot]
ff31d50488 chore: ⬆️ Update ggml-org/whisper.cpp to 2eeeba56e9edd762b4b38467bab96c2517163158 (#8052)
⬆️ Update ggml-org/whisper.cpp

Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2026-01-15 22:52:56 +01:00
Divyanshupandey007
1a50717e33 fix: reduce log verbosity for /api/operations polling (#8050)
* fix: reduce log verbosity for /api/operations polling

Reduces log clutter by changing the log level from INFO to DEBUG for successful (200 OK) /api/operations requests. This endpoint is polled frequently by the Web UI, causing log spam. Fixes #7989.

* fix: reduce log verbosity for /api/operations polling

Reduces log clutter by changing the log level from INFO to DEBUG for successful (200 OK) /api/operations requests. This endpoint is polled frequently by the Web UI, causing log spam. Fixes #7989.
2026-01-15 21:13:13 +01:00
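The change above demotes successful `/api/operations` polls from INFO to DEBUG, since the Web UI hits that endpoint constantly. A self-contained Go sketch of the idea (names are illustrative; LocalAI's actual logging middleware differs):

```go
package main

import "fmt"

type logLevel int

const (
	debugLevel logLevel = iota
	infoLevel
)

// requestLogLevel demotes frequent, successful Web UI polls of
// /api/operations to DEBUG while every other request stays at INFO.
// Hypothetical sketch of the per-request level selection only.
func requestLogLevel(path string, status int) logLevel {
	if path == "/api/operations" && status == 200 {
		return debugLevel
	}
	return infoLevel
}

func main() {
	fmt.Println(requestLogLevel("/api/operations", 200) == debugLevel) // true
	fmt.Println(requestLogLevel("/api/operations", 500) == debugLevel) // false
}
```

Failed `/api/operations` requests deliberately stay at INFO, so real problems remain visible in the default log output.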
LocalAI [bot]
49d6305509 chore: ⬆️ Update ggml-org/llama.cpp to d98b548120eecf98f0f6eaa1ba7e29b3afda9f2e (#8040)
⬆️ Update ggml-org/llama.cpp

Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2026-01-15 08:39:46 +01:00
Ettore Di Giacinto
d20a113aef fix(functions): do not duplicate function when valid JSON is inside XML tags (#8043)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2026-01-14 23:42:00 +01:00
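The fix above avoids emitting the same tool call twice when a model wraps valid JSON in XML-style tags and also repeats it bare. A hypothetical Go sketch of such deduplication (the `<tool_call>` tag and all function names are illustrative, not LocalAI's actual parser):

```go
package main

import (
	"encoding/json"
	"fmt"
	"regexp"
)

// Sketch of deduplicating tool calls when a model emits the same JSON
// both bare and wrapped in XML-style tags, e.g.
//   <tool_call>{"name":"f","arguments":{"x":1}}</tool_call>{"name":"f",...}
// Calls are keyed on their canonical JSON so each call is kept once.
var tagRe = regexp.MustCompile(`(?s)<tool_call>(.*?)</tool_call>`)

func extractCalls(out string) []string {
	seen := map[string]bool{}
	var calls []string
	add := func(raw string) {
		var v map[string]any
		if json.Unmarshal([]byte(raw), &v) != nil {
			return // not valid JSON, skip
		}
		key, _ := json.Marshal(v) // canonical form: map keys are sorted
		if !seen[string(key)] {
			seen[string(key)] = true
			calls = append(calls, string(key))
		}
	}
	for _, m := range tagRe.FindAllStringSubmatch(out, -1) {
		add(m[1])
	}
	// A bare duplicate outside the tags maps to the same canonical key
	// and is therefore not added a second time.
	add(tagRe.ReplaceAllString(out, ""))
	return calls
}

func main() {
	out := `<tool_call>{"name":"f","arguments":{"x":1}}</tool_call>{"name":"f","arguments":{"x":1}}`
	fmt.Println(len(extractCalls(out))) // 1: the wrapped and bare copies collapse
}
```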
LocalAI [bot]
cbaa793520 chore: ⬆️ Update ggml-org/whisper.cpp to 47af2fb70f7e4ee1ba40c8bed513760fdfe7a704 (#8039)
⬆️ Update ggml-org/whisper.cpp

Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2026-01-14 22:12:32 +01:00
Ettore Di Giacinto
6fe3fc880f Update section headers in README.md for clarity
Signed-off-by: Ettore Di Giacinto <mudler@users.noreply.github.com>
2026-01-14 22:11:58 +01:00
Ettore Di Giacinto
752e641c48 Clarify Docker usage in README
Updated Docker section in README to clarify usage.

Signed-off-by: Ettore Di Giacinto <mudler@users.noreply.github.com>
2026-01-14 22:10:59 +01:00
Ettore Di Giacinto
44d78b4d15 chore(doc): put alert on install.sh until is fixed (#8042)
See: https://github.com/mudler/LocalAI/issues/8032

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2026-01-14 22:08:48 +01:00
Ettore Di Giacinto
64d0a96ba3 feat(ui): add video gen UI (#8020)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2026-01-14 11:43:32 +01:00
Ettore Di Giacinto
b19afc9e64 feat(diffusers): add support to LTX-2 (#8019)
* feat(diffusers): add support to LTX-2

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* Add to the gallery

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

---------

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2026-01-14 09:07:30 +01:00
LocalAI [bot]
d6e698876b chore: ⬆️ Update ggml-org/llama.cpp to e4832e3ae4d58ac0ecbdbf4ae055424d6e628c9f (#8015)
⬆️ Update ggml-org/llama.cpp

Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2026-01-14 08:09:37 +01:00
LocalAI [bot]
8962205546 chore: ⬆️ Update ggml-org/whisper.cpp to a96310871a3b294f026c3bcad4e715d17b5905fe (#8014)
⬆️ Update ggml-org/whisper.cpp

Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2026-01-14 08:09:00 +01:00
LocalAI [bot]
eddc460118 chore: ⬆️ Update leejet/stable-diffusion.cpp to 7010bb4dff7bd55b03d35ef9772142c21699eba9 (#8013)
⬆️ Update leejet/stable-diffusion.cpp

Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2026-01-14 08:08:31 +01:00
Ettore Di Giacinto
a6ff354c86 feat(tts): add pocket-tts backend (#8018)
* feat(pocket-tts): add new backend

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* Add to the gallery

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* fixups

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* Update docs

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

---------

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2026-01-13 23:35:19 +01:00
dependabot[bot]
3a2be4df48 chore(deps): bump github.com/onsi/ginkgo/v2 from 2.27.3 to 2.27.5 (#8004)
Bumps [github.com/onsi/ginkgo/v2](https://github.com/onsi/ginkgo) from 2.27.3 to 2.27.5.
- [Release notes](https://github.com/onsi/ginkgo/releases)
- [Changelog](https://github.com/onsi/ginkgo/blob/master/CHANGELOG.md)
- [Commits](https://github.com/onsi/ginkgo/compare/v2.27.3...v2.27.5)

---
updated-dependencies:
- dependency-name: github.com/onsi/ginkgo/v2
  dependency-version: 2.27.5
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-01-13 09:06:20 +01:00
dependabot[bot]
4e1f448e86 chore(deps): bump fyne.io/fyne/v2 from 2.7.1 to 2.7.2 (#8003)
Bumps [fyne.io/fyne/v2](https://github.com/fyne-io/fyne) from 2.7.1 to 2.7.2.
- [Release notes](https://github.com/fyne-io/fyne/releases)
- [Changelog](https://github.com/fyne-io/fyne/blob/master/CHANGELOG.md)
- [Commits](https://github.com/fyne-io/fyne/compare/v2.7.1...v2.7.2)

---
updated-dependencies:
- dependency-name: fyne.io/fyne/v2
  dependency-version: 2.7.2
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-01-13 08:45:58 +01:00
dependabot[bot]
3e0168360a chore(deps): bump github.com/gpustack/gguf-parser-go from 0.22.1 to 0.23.1 (#8001)
chore(deps): bump github.com/gpustack/gguf-parser-go

Bumps [github.com/gpustack/gguf-parser-go](https://github.com/gpustack/gguf-parser-go) from 0.22.1 to 0.23.1.
- [Release notes](https://github.com/gpustack/gguf-parser-go/releases)
- [Commits](https://github.com/gpustack/gguf-parser-go/compare/v0.22.1...v0.23.1)

---
updated-dependencies:
- dependency-name: github.com/gpustack/gguf-parser-go
  dependency-version: 0.23.1
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-01-13 08:45:35 +01:00
dependabot[bot]
ea4157887b chore(deps): bump github.com/onsi/gomega from 1.38.3 to 1.39.0 (#8000)
Bumps [github.com/onsi/gomega](https://github.com/onsi/gomega) from 1.38.3 to 1.39.0.
- [Release notes](https://github.com/onsi/gomega/releases)
- [Changelog](https://github.com/onsi/gomega/blob/master/CHANGELOG.md)
- [Commits](https://github.com/onsi/gomega/compare/v1.38.3...v1.39.0)

---
updated-dependencies:
- dependency-name: github.com/onsi/gomega
  dependency-version: 1.39.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-01-13 08:45:18 +01:00
dependabot[bot]
699c50be47 chore(deps): bump github.com/mudler/go-processmanager from 0.0.0-20240820160718-8b802d3ecf82 to 0.1.0 (#7992)
chore(deps): bump github.com/mudler/go-processmanager

Bumps [github.com/mudler/go-processmanager](https://github.com/mudler/go-processmanager) from 0.0.0-20240820160718-8b802d3ecf82 to 0.1.0.
- [Release notes](https://github.com/mudler/go-processmanager/releases)
- [Commits](https://github.com/mudler/go-processmanager/commits/v0.1.0)

---
updated-dependencies:
- dependency-name: github.com/mudler/go-processmanager
  dependency-version: 0.1.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-01-13 08:44:53 +01:00
dependabot[bot]
94eecc43a3 chore(deps): bump protobuf from 6.33.2 to 6.33.4 in /backend/python/transformers (#7993)
chore(deps): bump protobuf in /backend/python/transformers

Bumps [protobuf](https://github.com/protocolbuffers/protobuf) from 6.33.2 to 6.33.4.
- [Release notes](https://github.com/protocolbuffers/protobuf/releases)
- [Commits](https://github.com/protocolbuffers/protobuf/commits)

---
updated-dependencies:
- dependency-name: protobuf
  dependency-version: 6.33.4
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-01-12 23:46:32 +00:00
LocalAI [bot]
7e35ec6c4f chore: ⬆️ Update ggml-org/llama.cpp to bcf7546160982f56bc290d2e538544bbc0772f63 (#7991)
⬆️ Update ggml-org/llama.cpp

Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2026-01-12 21:14:33 +00:00
Ettore Di Giacinto
7891c33cb1 chore(vulkan): bump vulkan-sdk to 1.4.335.0 (#7981)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2026-01-12 07:51:26 +01:00
Ettore Di Giacinto
271cc79709 chore(backends): do not bundle cuda target directory (#7982)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2026-01-12 07:51:09 +01:00
LocalAI [bot]
3d12d5e70d chore: ⬆️ Update leejet/stable-diffusion.cpp to 885e62ea822e674c6837a8225d2d75f021b97a6a (#7979)
⬆️ Update leejet/stable-diffusion.cpp

Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2026-01-11 22:44:11 +01:00
LocalAI [bot]
bc180c2638 chore: ⬆️ Update ggml-org/llama.cpp to 0c3b7a9efebc73d206421c99b7eb6b6716231322 (#7978)
⬆️ Update ggml-org/llama.cpp

Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2026-01-11 22:06:30 +01:00
83 changed files with 10489 additions and 545 deletions

View File

@@ -105,6 +105,19 @@ jobs:
dockerfile: "./backend/Dockerfile.python"
context: "./"
ubuntu-version: '2404'
+- build-type: 'cublas'
+cuda-major-version: "12"
+cuda-minor-version: "9"
+platforms: 'linux/amd64'
+tag-latest: 'auto'
+tag-suffix: '-gpu-nvidia-cuda-12-pocket-tts'
+runs-on: 'ubuntu-latest'
+base-image: "ubuntu:24.04"
+skip-drivers: 'false'
+backend: "pocket-tts"
+dockerfile: "./backend/Dockerfile.python"
+context: "./"
+ubuntu-version: '2404'
- build-type: 'cublas'
cuda-major-version: "12"
cuda-minor-version: "0"
@@ -124,7 +137,7 @@ jobs:
platforms: 'linux/amd64'
tag-latest: 'auto'
tag-suffix: '-gpu-nvidia-cuda-12-llama-cpp'
-runs-on: 'ubuntu-latest'
+runs-on: 'bigger-runner'
base-image: "ubuntu:24.04"
skip-drivers: 'false'
backend: "llama-cpp"
@@ -340,6 +353,19 @@ jobs:
dockerfile: "./backend/Dockerfile.python"
context: "./"
ubuntu-version: '2404'
+- build-type: 'cublas'
+cuda-major-version: "13"
+cuda-minor-version: "0"
+platforms: 'linux/amd64'
+tag-latest: 'auto'
+tag-suffix: '-gpu-nvidia-cuda-13-pocket-tts'
+runs-on: 'ubuntu-latest'
+base-image: "ubuntu:24.04"
+skip-drivers: 'false'
+backend: "pocket-tts"
+dockerfile: "./backend/Dockerfile.python"
+context: "./"
+ubuntu-version: '2404'
- build-type: 'cublas'
cuda-major-version: "13"
cuda-minor-version: "0"
@@ -405,6 +431,19 @@ jobs:
backend: "vibevoice"
dockerfile: "./backend/Dockerfile.python"
context: "./"
+- build-type: 'l4t'
+cuda-major-version: "13"
+cuda-minor-version: "0"
+platforms: 'linux/arm64'
+tag-latest: 'auto'
+tag-suffix: '-nvidia-l4t-cuda-13-arm64-pocket-tts'
+runs-on: 'ubuntu-24.04-arm'
+base-image: "ubuntu:24.04"
+skip-drivers: 'false'
+ubuntu-version: '2404'
+backend: "pocket-tts"
+dockerfile: "./backend/Dockerfile.python"
+context: "./"
- build-type: 'l4t'
cuda-major-version: "13"
cuda-minor-version: "0"
@@ -641,13 +680,26 @@ jobs:
dockerfile: "./backend/Dockerfile.python"
context: "./"
ubuntu-version: '2404'
+- build-type: 'hipblas'
+cuda-major-version: ""
+cuda-minor-version: ""
+platforms: 'linux/amd64'
+tag-latest: 'auto'
+tag-suffix: '-gpu-rocm-hipblas-pocket-tts'
+runs-on: 'arc-runner-set'
+base-image: "rocm/dev-ubuntu-24.04:6.4.4"
+skip-drivers: 'false'
+backend: "pocket-tts"
+dockerfile: "./backend/Dockerfile.python"
+context: "./"
+ubuntu-version: '2404'
- build-type: 'hipblas'
cuda-major-version: ""
cuda-minor-version: ""
platforms: 'linux/amd64'
tag-latest: 'auto'
tag-suffix: '-gpu-rocm-hipblas-faster-whisper'
-runs-on: 'ubuntu-latest'
+runs-on: 'bigger-runner'
base-image: "rocm/dev-ubuntu-24.04:6.4.4"
skip-drivers: 'false'
backend: "faster-whisper"
@@ -660,7 +712,7 @@ jobs:
platforms: 'linux/amd64'
tag-latest: 'auto'
tag-suffix: '-gpu-rocm-hipblas-coqui'
-runs-on: 'ubuntu-latest'
+runs-on: 'bigger-runner'
base-image: "rocm/dev-ubuntu-24.04:6.4.4"
skip-drivers: 'false'
backend: "coqui"
@@ -772,6 +824,19 @@ jobs:
dockerfile: "./backend/Dockerfile.python"
context: "./"
ubuntu-version: '2204'
+- build-type: 'l4t'
+cuda-major-version: "12"
+cuda-minor-version: "0"
+platforms: 'linux/arm64'
+tag-latest: 'auto'
+tag-suffix: '-nvidia-l4t-pocket-tts'
+runs-on: 'ubuntu-24.04-arm'
+base-image: "nvcr.io/nvidia/l4t-jetpack:r36.4.0"
+skip-drivers: 'true'
+backend: "pocket-tts"
+dockerfile: "./backend/Dockerfile.python"
+context: "./"
+ubuntu-version: '2204'
- build-type: 'l4t'
cuda-major-version: "12"
cuda-minor-version: "0"
@@ -825,6 +890,19 @@ jobs:
dockerfile: "./backend/Dockerfile.python"
context: "./"
ubuntu-version: '2404'
+- build-type: 'intel'
+cuda-major-version: ""
+cuda-minor-version: ""
+platforms: 'linux/amd64'
+tag-latest: 'auto'
+tag-suffix: '-gpu-intel-pocket-tts'
+runs-on: 'arc-runner-set'
+base-image: "intel/oneapi-basekit:2025.3.0-0-devel-ubuntu24.04"
+skip-drivers: 'false'
+backend: "pocket-tts"
+dockerfile: "./backend/Dockerfile.python"
+context: "./"
+ubuntu-version: '2404'
- build-type: 'intel'
cuda-major-version: ""
cuda-minor-version: ""
@@ -885,7 +963,7 @@ jobs:
platforms: 'linux/amd64,linux/arm64'
tag-latest: 'auto'
tag-suffix: '-cpu-llama-cpp'
-runs-on: 'ubuntu-latest'
+runs-on: 'bigger-runner'
base-image: "ubuntu:24.04"
skip-drivers: 'false'
backend: "llama-cpp"
@@ -911,7 +989,7 @@ jobs:
platforms: 'linux/amd64,linux/arm64'
tag-latest: 'auto'
tag-suffix: '-gpu-vulkan-llama-cpp'
-runs-on: 'ubuntu-latest'
+runs-on: 'bigger-runner'
base-image: "ubuntu:24.04"
skip-drivers: 'false'
backend: "llama-cpp"
@@ -1252,19 +1330,6 @@ jobs:
dockerfile: "./backend/Dockerfile.python"
context: "./"
ubuntu-version: '2404'
-- build-type: 'l4t'
-cuda-major-version: "12"
-cuda-minor-version: "0"
-platforms: 'linux/arm64'
-skip-drivers: 'true'
-tag-latest: 'auto'
-tag-suffix: '-nvidia-l4t-arm64-neutts'
-base-image: "nvcr.io/nvidia/l4t-jetpack:r36.4.0"
-runs-on: 'ubuntu-24.04-arm'
-backend: "neutts"
-dockerfile: "./backend/Dockerfile.python"
-context: "./"
-ubuntu-version: '2204'
- build-type: ''
cuda-major-version: ""
cuda-minor-version: ""
@@ -1278,6 +1343,19 @@ jobs:
dockerfile: "./backend/Dockerfile.python"
context: "./"
ubuntu-version: '2404'
+- build-type: ''
+cuda-major-version: ""
+cuda-minor-version: ""
+platforms: 'linux/amd64,linux/arm64'
+tag-latest: 'auto'
+tag-suffix: '-cpu-pocket-tts'
+runs-on: 'ubuntu-latest'
+base-image: "ubuntu:24.04"
+skip-drivers: 'false'
+backend: "pocket-tts"
+dockerfile: "./backend/Dockerfile.python"
+context: "./"
+ubuntu-version: '2404'
backend-jobs-darwin:
uses: ./.github/workflows/backend_build_darwin.yml
strategy:

View File

@@ -265,4 +265,23 @@ jobs:
- name: Test moonshine
run: |
make --jobs=5 --output-sync=target -C backend/python/moonshine
-make --jobs=5 --output-sync=target -C backend/python/moonshine test
+make --jobs=5 --output-sync=target -C backend/python/moonshine test
+tests-pocket-tts:
+runs-on: ubuntu-latest
+steps:
+- name: Clone
+uses: actions/checkout@v6
+with:
+submodules: true
+- name: Dependencies
+run: |
+sudo apt-get update
+sudo apt-get install build-essential ffmpeg
+sudo apt-get install -y ca-certificates cmake curl patch python3-pip
+# Install UV
+curl -LsSf https://astral.sh/uv/install.sh | sh
+pip install --user --no-cache-dir grpcio-tools==1.64.1
+- name: Test pocket-tts
+run: |
+make --jobs=5 --output-sync=target -C backend/python/pocket-tts
+make --jobs=5 --output-sync=target -C backend/python/pocket-tts test

View File

@@ -42,22 +42,22 @@ RUN <<EOT bash
ocaml-core ninja-build pkg-config libxml2-dev wayland-protocols python3-jsonschema \
clang-format qtbase5-dev qt6-base-dev libxcb-glx0-dev sudo xz-utils mesa-vulkan-drivers
if [ "amd64" = "$TARGETARCH" ]; then
-wget "https://sdk.lunarg.com/sdk/download/1.4.328.1/linux/vulkansdk-linux-x86_64-1.4.328.1.tar.xz" && \
-tar -xf vulkansdk-linux-x86_64-1.4.328.1.tar.xz && \
-rm vulkansdk-linux-x86_64-1.4.328.1.tar.xz && \
+wget "https://sdk.lunarg.com/sdk/download/1.4.335.0/linux/vulkansdk-linux-x86_64-1.4.335.0.tar.xz" && \
+tar -xf vulkansdk-linux-x86_64-1.4.335.0.tar.xz && \
+rm vulkansdk-linux-x86_64-1.4.335.0.tar.xz && \
mkdir -p /opt/vulkan-sdk && \
-mv 1.4.328.1 /opt/vulkan-sdk/ && \
-cd /opt/vulkan-sdk/1.4.328.1 && \
+mv 1.4.335.0 /opt/vulkan-sdk/ && \
+cd /opt/vulkan-sdk/1.4.335.0 && \
./vulkansdk --no-deps --maxjobs \
vulkan-loader \
vulkan-validationlayers \
vulkan-extensionlayer \
vulkan-tools \
shaderc && \
-cp -rfv /opt/vulkan-sdk/1.4.328.1/x86_64/bin/* /usr/bin/ && \
-cp -rfv /opt/vulkan-sdk/1.4.328.1/x86_64/lib/* /usr/lib/x86_64-linux-gnu/ && \
-cp -rfv /opt/vulkan-sdk/1.4.328.1/x86_64/include/* /usr/include/ && \
-cp -rfv /opt/vulkan-sdk/1.4.328.1/x86_64/share/* /usr/share/ && \
+cp -rfv /opt/vulkan-sdk/1.4.335.0/x86_64/bin/* /usr/bin/ && \
+cp -rfv /opt/vulkan-sdk/1.4.335.0/x86_64/lib/* /usr/lib/x86_64-linux-gnu/ && \
+cp -rfv /opt/vulkan-sdk/1.4.335.0/x86_64/include/* /usr/include/ && \
+cp -rfv /opt/vulkan-sdk/1.4.335.0/x86_64/share/* /usr/share/ && \
rm -rf /opt/vulkan-sdk
fi
if [ "arm64" = "$TARGETARCH" ]; then

View File

@@ -1,5 +1,5 @@
# Disable parallel execution for backend builds
-.NOTPARALLEL: backends/diffusers backends/llama-cpp backends/piper backends/stablediffusion-ggml backends/whisper backends/faster-whisper backends/silero-vad backends/local-store backends/huggingface backends/rfdetr backends/kitten-tts backends/kokoro backends/chatterbox backends/llama-cpp-darwin backends/neutts build-darwin-python-backend build-darwin-go-backend backends/mlx backends/diffuser-darwin backends/mlx-vlm backends/mlx-audio backends/stablediffusion-ggml-darwin backends/vllm backends/moonshine
+.NOTPARALLEL: backends/diffusers backends/llama-cpp backends/piper backends/stablediffusion-ggml backends/whisper backends/faster-whisper backends/silero-vad backends/local-store backends/huggingface backends/rfdetr backends/kitten-tts backends/kokoro backends/chatterbox backends/llama-cpp-darwin backends/neutts build-darwin-python-backend build-darwin-go-backend backends/mlx backends/diffuser-darwin backends/mlx-vlm backends/mlx-audio backends/stablediffusion-ggml-darwin backends/vllm backends/moonshine backends/pocket-tts
GOCMD=go
GOTEST=$(GOCMD) test
@@ -9,7 +9,7 @@ LAUNCHER_BINARY_NAME=local-ai-launcher
CUDA_MAJOR_VERSION?=13
CUDA_MINOR_VERSION?=0
-UBUNTU_VERSION?=2204
+UBUNTU_VERSION?=2404
UBUNTU_CODENAME?=noble
GORELEASER?=
@@ -316,6 +316,7 @@ prepare-test-extra: protogen-python
$(MAKE) -C backend/python/vllm
$(MAKE) -C backend/python/vibevoice
$(MAKE) -C backend/python/moonshine
+$(MAKE) -C backend/python/pocket-tts
test-extra: prepare-test-extra
$(MAKE) -C backend/python/transformers test
@@ -324,6 +325,7 @@ test-extra: prepare-test-extra
$(MAKE) -C backend/python/vllm test
$(MAKE) -C backend/python/vibevoice test
$(MAKE) -C backend/python/moonshine test
+$(MAKE) -C backend/python/pocket-tts test
DOCKER_IMAGE?=local-ai
DOCKER_AIO_IMAGE?=local-ai-aio
@@ -447,17 +449,16 @@ BACKEND_FASTER_WHISPER = faster-whisper|python|.|false|true
BACKEND_COQUI = coqui|python|.|false|true
BACKEND_BARK = bark|python|.|false|true
BACKEND_EXLLAMA2 = exllama2|python|.|false|true
-# Python backends with ./backend context
-BACKEND_RFDETR = rfdetr|python|./backend|false|true
-BACKEND_KITTEN_TTS = kitten-tts|python|./backend|false|true
-BACKEND_NEUTTS = neutts|python|./backend|false|true
-BACKEND_KOKORO = kokoro|python|./backend|false|true
-BACKEND_VLLM = vllm|python|./backend|false|true
-BACKEND_DIFFUSERS = diffusers|python|./backend|--progress=plain|true
-BACKEND_CHATTERBOX = chatterbox|python|./backend|false|true
-BACKEND_VIBEVOICE = vibevoice|python|./backend|--progress=plain|true
-BACKEND_MOONSHINE = moonshine|python|./backend|false|true
+BACKEND_RFDETR = rfdetr|python|.|false|true
+BACKEND_KITTEN_TTS = kitten-tts|python|.|false|true
+BACKEND_NEUTTS = neutts|python|.|false|true
+BACKEND_KOKORO = kokoro|python|.|false|true
+BACKEND_VLLM = vllm|python|.|false|true
+BACKEND_DIFFUSERS = diffusers|python|.|--progress=plain|true
+BACKEND_CHATTERBOX = chatterbox|python|.|false|true
+BACKEND_VIBEVOICE = vibevoice|python|.|--progress=plain|true
+BACKEND_MOONSHINE = moonshine|python|.|false|true
+BACKEND_POCKET_TTS = pocket-tts|python|.|false|true
# Helper function to build docker image for a backend
# Usage: $(call docker-build-backend,BACKEND_NAME,DOCKERFILE_TYPE,BUILD_CONTEXT,PROGRESS_FLAG,NEEDS_BACKEND_ARG)
@@ -503,12 +504,13 @@ $(eval $(call generate-docker-build-target,$(BACKEND_DIFFUSERS)))
$(eval $(call generate-docker-build-target,$(BACKEND_CHATTERBOX)))
$(eval $(call generate-docker-build-target,$(BACKEND_VIBEVOICE)))
$(eval $(call generate-docker-build-target,$(BACKEND_MOONSHINE)))
+$(eval $(call generate-docker-build-target,$(BACKEND_POCKET_TTS)))
# Pattern rule for docker-save targets
docker-save-%: backend-images
docker save local-ai-backend:$* -o backend-images/$*.tar
-docker-build-backends: docker-build-llama-cpp docker-build-rerankers docker-build-vllm docker-build-transformers docker-build-diffusers docker-build-kokoro docker-build-faster-whisper docker-build-coqui docker-build-bark docker-build-chatterbox docker-build-vibevoice docker-build-exllama2 docker-build-moonshine
+docker-build-backends: docker-build-llama-cpp docker-build-rerankers docker-build-vllm docker-build-transformers docker-build-diffusers docker-build-kokoro docker-build-faster-whisper docker-build-coqui docker-build-bark docker-build-chatterbox docker-build-vibevoice docker-build-exllama2 docker-build-moonshine docker-build-pocket-tts
########################################################
### END Backends
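The Makefile hunk above encodes each backend as a pipe-separated spec; per the Makefile's own helper comment the fields are backend name, dockerfile type, build context, progress flag, and whether a backend build-arg is needed. A hypothetical Go sketch of decoding such a spec (struct and function names are my own, not code from the repo):

```go
package main

import (
	"fmt"
	"strings"
)

// Each BACKEND_* variable packs five fields into one pipe-separated
// string, matching the documented helper signature:
//   BACKEND_NAME|DOCKERFILE_TYPE|BUILD_CONTEXT|PROGRESS_FLAG|NEEDS_BACKEND_ARG
// This decoder is an illustration only.
type backendSpec struct {
	Name, DockerfileType, BuildContext, ProgressFlag string
	NeedsBackendArg                                  bool
}

func parseBackendSpec(s string) (backendSpec, error) {
	parts := strings.Split(s, "|")
	if len(parts) != 5 {
		return backendSpec{}, fmt.Errorf("expected 5 fields, got %d", len(parts))
	}
	return backendSpec{
		Name:            parts[0],
		DockerfileType:  parts[1],
		BuildContext:    parts[2],
		ProgressFlag:    parts[3],
		NeedsBackendArg: parts[4] == "true",
	}, nil
}

func main() {
	spec, err := parseBackendSpec("pocket-tts|python|.|false|true")
	if err != nil {
		panic(err)
	}
	fmt.Println(spec.Name, spec.BuildContext, spec.NeedsBackendArg)
}
```

Note how the diff normalizes every backend's build context from `./backend` to `.`, so the third field is now uniform across specs.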

View File

@@ -111,6 +111,8 @@
## 💻 Quickstart
+> ⚠️ **Note:** The `install.sh` script is currently experiencing issues due to the heavy changes currently undergoing in LocalAI and may produce broken or misconfigured installations. Please use Docker installation (see below) or manual binary installation until [issue #8032](https://github.com/mudler/LocalAI/issues/8032) is resolved.
Run the installer script:
```bash
@@ -128,7 +130,7 @@ For more installation options, see [Installer Options](https://localai.io/instal
> Note: the DMGs are not signed by Apple as quarantined. See https://github.com/mudler/LocalAI/issues/6268 for a workaround, fix is tracked here: https://github.com/mudler/LocalAI/issues/6244
-Or run with docker:
+### Containers (Docker, podman, ...)
> **💡 Docker Run vs Docker Start**
>
@@ -137,13 +139,13 @@ Or run with docker:
>
> If you've already run LocalAI before and want to start it again, use: `docker start -i local-ai`
-### CPU only image:
+#### CPU only image:
```bash
docker run -ti --name local-ai -p 8080:8080 localai/localai:latest
```
### NVIDIA GPU Images:
#### NVIDIA GPU Images:
```bash
# CUDA 13.0
@@ -160,25 +162,25 @@ docker run -ti --name local-ai -p 8080:8080 --gpus all localai/localai:latest-nv
docker run -ti --name local-ai -p 8080:8080 --gpus all localai/localai:latest-nvidia-l4t-arm64-cuda-13
```
### AMD GPU Images (ROCm):
#### AMD GPU Images (ROCm):
```bash
docker run -ti --name local-ai -p 8080:8080 --device=/dev/kfd --device=/dev/dri --group-add=video localai/localai:latest-gpu-hipblas
```
### Intel GPU Images (oneAPI):
#### Intel GPU Images (oneAPI):
```bash
docker run -ti --name local-ai -p 8080:8080 --device=/dev/dri/card1 --device=/dev/dri/renderD128 localai/localai:latest-gpu-intel
```
### Vulkan GPU Images:
#### Vulkan GPU Images:
```bash
docker run -ti --name local-ai -p 8080:8080 localai/localai:latest-gpu-vulkan
```
### AIO Images (pre-downloaded models):
#### AIO Images (pre-downloaded models):
```bash
# CPU version
@@ -295,6 +297,7 @@ LocalAI supports a comprehensive range of AI backends with multiple acceleration
| **silero-vad** | Voice Activity Detection | CPU |
| **neutts** | Text-to-speech with voice cloning | CUDA 12/13, ROCm, CPU |
| **vibevoice** | Real-time TTS with voice cloning | CUDA 12/13, ROCm, Intel, CPU |
| **pocket-tts** | Lightweight CPU-based TTS | CUDA 12/13, ROCm, Intel, CPU |
### Image & Video Generation
| Backend | Description | Acceleration Support |
@@ -316,8 +319,8 @@ LocalAI supports a comprehensive range of AI backends with multiple acceleration
|-------------------|-------------------|------------------|
| **NVIDIA CUDA 12** | All CUDA-compatible backends | Nvidia hardware |
| **NVIDIA CUDA 13** | All CUDA-compatible backends | Nvidia hardware |
| **AMD ROCm** | llama.cpp, whisper, vllm, transformers, diffusers, rerankers, coqui, kokoro, bark, neutts, vibevoice | AMD Graphics |
| **Intel oneAPI** | llama.cpp, whisper, stablediffusion, vllm, transformers, diffusers, rfdetr, rerankers, exllama2, coqui, kokoro, bark, vibevoice | Intel Arc, Intel iGPUs |
| **AMD ROCm** | llama.cpp, whisper, vllm, transformers, diffusers, rerankers, coqui, kokoro, bark, neutts, vibevoice, pocket-tts | AMD Graphics |
| **Intel oneAPI** | llama.cpp, whisper, stablediffusion, vllm, transformers, diffusers, rfdetr, rerankers, exllama2, coqui, kokoro, bark, vibevoice, pocket-tts | Intel Arc, Intel iGPUs |
| **Apple Metal** | llama.cpp, whisper, diffusers, MLX, MLX-VLM, bark-cpp | Apple M1/M2/M3+ |
| **Vulkan** | llama.cpp, whisper, stablediffusion | Cross-platform GPUs |
| **NVIDIA Jetson (CUDA 12)** | llama.cpp, whisper, stablediffusion, diffusers, rfdetr | ARM64 embedded AI (AGX Orin, etc.) |

View File

@@ -47,22 +47,22 @@ RUN <<EOT bash
ocaml-core ninja-build pkg-config libxml2-dev wayland-protocols python3-jsonschema \
clang-format qtbase5-dev qt6-base-dev libxcb-glx0-dev sudo xz-utils
if [ "amd64" = "$TARGETARCH" ]; then
wget "https://sdk.lunarg.com/sdk/download/1.4.328.1/linux/vulkansdk-linux-x86_64-1.4.328.1.tar.xz" && \
tar -xf vulkansdk-linux-x86_64-1.4.328.1.tar.xz && \
rm vulkansdk-linux-x86_64-1.4.328.1.tar.xz && \
wget "https://sdk.lunarg.com/sdk/download/1.4.335.0/linux/vulkansdk-linux-x86_64-1.4.335.0.tar.xz" && \
tar -xf vulkansdk-linux-x86_64-1.4.335.0.tar.xz && \
rm vulkansdk-linux-x86_64-1.4.335.0.tar.xz && \
mkdir -p /opt/vulkan-sdk && \
mv 1.4.328.1 /opt/vulkan-sdk/ && \
cd /opt/vulkan-sdk/1.4.328.1 && \
mv 1.4.335.0 /opt/vulkan-sdk/ && \
cd /opt/vulkan-sdk/1.4.335.0 && \
./vulkansdk --no-deps --maxjobs \
vulkan-loader \
vulkan-validationlayers \
vulkan-extensionlayer \
vulkan-tools \
shaderc && \
cp -rfv /opt/vulkan-sdk/1.4.328.1/x86_64/bin/* /usr/bin/ && \
cp -rfv /opt/vulkan-sdk/1.4.328.1/x86_64/lib/* /usr/lib/x86_64-linux-gnu/ && \
cp -rfv /opt/vulkan-sdk/1.4.328.1/x86_64/include/* /usr/include/ && \
cp -rfv /opt/vulkan-sdk/1.4.328.1/x86_64/share/* /usr/share/ && \
cp -rfv /opt/vulkan-sdk/1.4.335.0/x86_64/bin/* /usr/bin/ && \
cp -rfv /opt/vulkan-sdk/1.4.335.0/x86_64/lib/* /usr/lib/x86_64-linux-gnu/ && \
cp -rfv /opt/vulkan-sdk/1.4.335.0/x86_64/include/* /usr/include/ && \
cp -rfv /opt/vulkan-sdk/1.4.335.0/x86_64/share/* /usr/share/ && \
rm -rf /opt/vulkan-sdk
fi
if [ "arm64" = "$TARGETARCH" ]; then

View File

@@ -104,22 +104,22 @@ RUN <<EOT bash
ocaml-core ninja-build pkg-config libxml2-dev wayland-protocols python3-jsonschema \
clang-format qtbase5-dev qt6-base-dev libxcb-glx0-dev sudo xz-utils
if [ "amd64" = "$TARGETARCH" ]; then
wget "https://sdk.lunarg.com/sdk/download/1.4.328.1/linux/vulkansdk-linux-x86_64-1.4.328.1.tar.xz" && \
tar -xf vulkansdk-linux-x86_64-1.4.328.1.tar.xz && \
rm vulkansdk-linux-x86_64-1.4.328.1.tar.xz && \
wget "https://sdk.lunarg.com/sdk/download/1.4.335.0/linux/vulkansdk-linux-x86_64-1.4.335.0.tar.xz" && \
tar -xf vulkansdk-linux-x86_64-1.4.335.0.tar.xz && \
rm vulkansdk-linux-x86_64-1.4.335.0.tar.xz && \
mkdir -p /opt/vulkan-sdk && \
mv 1.4.328.1 /opt/vulkan-sdk/ && \
cd /opt/vulkan-sdk/1.4.328.1 && \
mv 1.4.335.0 /opt/vulkan-sdk/ && \
cd /opt/vulkan-sdk/1.4.335.0 && \
./vulkansdk --no-deps --maxjobs \
vulkan-loader \
vulkan-validationlayers \
vulkan-extensionlayer \
vulkan-tools \
shaderc && \
cp -rfv /opt/vulkan-sdk/1.4.328.1/x86_64/bin/* /usr/bin/ && \
cp -rfv /opt/vulkan-sdk/1.4.328.1/x86_64/lib/* /usr/lib/x86_64-linux-gnu/ && \
cp -rfv /opt/vulkan-sdk/1.4.328.1/x86_64/include/* /usr/include/ && \
cp -rfv /opt/vulkan-sdk/1.4.328.1/x86_64/share/* /usr/share/ && \
cp -rfv /opt/vulkan-sdk/1.4.335.0/x86_64/bin/* /usr/bin/ && \
cp -rfv /opt/vulkan-sdk/1.4.335.0/x86_64/lib/* /usr/lib/x86_64-linux-gnu/ && \
cp -rfv /opt/vulkan-sdk/1.4.335.0/x86_64/include/* /usr/include/ && \
cp -rfv /opt/vulkan-sdk/1.4.335.0/x86_64/share/* /usr/share/ && \
rm -rf /opt/vulkan-sdk
fi
if [ "arm64" = "$TARGETARCH" ]; then

View File

@@ -61,22 +61,22 @@ RUN <<EOT bash
ocaml-core ninja-build pkg-config libxml2-dev wayland-protocols python3-jsonschema \
clang-format qtbase5-dev qt6-base-dev libxcb-glx0-dev sudo xz-utils
if [ "amd64" = "$TARGETARCH" ]; then
wget "https://sdk.lunarg.com/sdk/download/1.4.328.1/linux/vulkansdk-linux-x86_64-1.4.328.1.tar.xz" && \
tar -xf vulkansdk-linux-x86_64-1.4.328.1.tar.xz && \
rm vulkansdk-linux-x86_64-1.4.328.1.tar.xz && \
wget "https://sdk.lunarg.com/sdk/download/1.4.335.0/linux/vulkansdk-linux-x86_64-1.4.335.0.tar.xz" && \
tar -xf vulkansdk-linux-x86_64-1.4.335.0.tar.xz && \
rm vulkansdk-linux-x86_64-1.4.335.0.tar.xz && \
mkdir -p /opt/vulkan-sdk && \
mv 1.4.328.1 /opt/vulkan-sdk/ && \
cd /opt/vulkan-sdk/1.4.328.1 && \
mv 1.4.335.0 /opt/vulkan-sdk/ && \
cd /opt/vulkan-sdk/1.4.335.0 && \
./vulkansdk --no-deps --maxjobs \
vulkan-loader \
vulkan-validationlayers \
vulkan-extensionlayer \
vulkan-tools \
shaderc && \
cp -rfv /opt/vulkan-sdk/1.4.328.1/x86_64/bin/* /usr/bin/ && \
cp -rfv /opt/vulkan-sdk/1.4.328.1/x86_64/lib/* /usr/lib/x86_64-linux-gnu/ && \
cp -rfv /opt/vulkan-sdk/1.4.328.1/x86_64/include/* /usr/include/ && \
cp -rfv /opt/vulkan-sdk/1.4.328.1/x86_64/share/* /usr/share/ && \
cp -rfv /opt/vulkan-sdk/1.4.335.0/x86_64/bin/* /usr/bin/ && \
cp -rfv /opt/vulkan-sdk/1.4.335.0/x86_64/lib/* /usr/lib/x86_64-linux-gnu/ && \
cp -rfv /opt/vulkan-sdk/1.4.335.0/x86_64/include/* /usr/include/ && \
cp -rfv /opt/vulkan-sdk/1.4.335.0/x86_64/share/* /usr/share/ && \
rm -rf /opt/vulkan-sdk
fi
if [ "arm64" = "$TARGETARCH" ]; then

View File

@@ -1,5 +1,5 @@
LLAMA_VERSION?=b1377188784f9aea26b8abde56d4aee8c733eec7
LLAMA_VERSION?=959ecf7f234dc0bc0cd6829b25cb0ee1481aa78a
LLAMA_REPO?=https://github.com/ggerganov/llama.cpp
CMAKE_ARGS?=

View File

@@ -8,7 +8,7 @@ JOBS?=$(shell nproc --ignore=1)
# stablediffusion.cpp (ggml)
STABLEDIFFUSION_GGML_REPO?=https://github.com/leejet/stable-diffusion.cpp
STABLEDIFFUSION_GGML_VERSION?=0e52afc6513cc2dea9a1a017afc4a008d5acf2b0
STABLEDIFFUSION_GGML_VERSION?=a48b4a3ade9972faf0adcad47e51c6fc03f0e46d
CMAKE_ARGS+=-DGGML_MAX_NAME=128

View File

@@ -8,7 +8,7 @@ JOBS?=$(shell nproc --ignore=1)
# whisper.cpp version
WHISPER_REPO?=https://github.com/ggml-org/whisper.cpp
WHISPER_CPP_VERSION?=679bdb53dbcbfb3e42685f50c7ff367949fd4d48
WHISPER_CPP_VERSION?=f53dc74843e97f19f94a79241357f74ad5b691a6
SO_TARGET?=libgowhisper.so
CMAKE_ARGS+=-DBUILD_SHARED_LIBS=OFF

View File

@@ -428,6 +428,28 @@
nvidia-l4t-cuda-12: "nvidia-l4t-vibevoice"
nvidia-l4t-cuda-13: "cuda13-nvidia-l4t-arm64-vibevoice"
icon: https://avatars.githubusercontent.com/u/6154722?s=200&v=4
- &pocket-tts
urls:
- https://github.com/kyutai-labs/pocket-tts
description: |
Pocket TTS is a lightweight text-to-speech model designed to run efficiently on CPUs.
tags:
- text-to-speech
- TTS
license: mit
name: "pocket-tts"
alias: "pocket-tts"
capabilities:
nvidia: "cuda12-pocket-tts"
intel: "intel-pocket-tts"
amd: "rocm-pocket-tts"
nvidia-l4t: "nvidia-l4t-pocket-tts"
default: "cpu-pocket-tts"
nvidia-cuda-13: "cuda13-pocket-tts"
nvidia-cuda-12: "cuda12-pocket-tts"
nvidia-l4t-cuda-12: "nvidia-l4t-pocket-tts"
nvidia-l4t-cuda-13: "cuda13-nvidia-l4t-arm64-pocket-tts"
icon: https://avatars.githubusercontent.com/u/6154722?s=200&v=4
- &piper
name: "piper"
uri: "quay.io/go-skynet/local-ai-backends:latest-piper"
@@ -515,18 +537,14 @@
default: "cpu-neutts"
nvidia: "cuda12-neutts"
amd: "rocm-neutts"
nvidia-l4t: "nvidia-l4t-neutts"
nvidia-cuda-12: "cuda12-neutts"
nvidia-l4t-cuda-12: "nvidia-l4t-arm64-neutts"
- !!merge <<: *neutts
name: "neutts-development"
capabilities:
default: "cpu-neutts-development"
nvidia: "cuda12-neutts-development"
amd: "rocm-neutts-development"
nvidia-l4t: "nvidia-l4t-neutts-development"
nvidia-cuda-12: "cuda12-neutts-development"
nvidia-l4t-cuda-12: "nvidia-l4t-arm64-neutts-development"
- !!merge <<: *llamacpp
name: "llama-cpp-development"
capabilities:
@@ -556,11 +574,6 @@
uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-rocm-hipblas-neutts"
mirrors:
- localai/localai-backends:latest-gpu-rocm-hipblas-neutts
- !!merge <<: *neutts
name: "nvidia-l4t-arm64-neutts"
uri: "quay.io/go-skynet/local-ai-backends:latest-nvidia-l4t-arm64-neutts"
mirrors:
- localai/localai-backends:latest-nvidia-l4t-arm64-neutts
- !!merge <<: *neutts
name: "cpu-neutts-development"
uri: "quay.io/go-skynet/local-ai-backends:master-cpu-neutts"
@@ -576,11 +589,6 @@
uri: "quay.io/go-skynet/local-ai-backends:master-gpu-rocm-hipblas-neutts"
mirrors:
- localai/localai-backends:master-gpu-rocm-hipblas-neutts
- !!merge <<: *neutts
name: "nvidia-l4t-arm64-neutts-development"
uri: "quay.io/go-skynet/local-ai-backends:master-nvidia-l4t-arm64-neutts"
mirrors:
- localai/localai-backends:master-nvidia-l4t-arm64-neutts
- !!merge <<: *mlx
name: "mlx-development"
uri: "quay.io/go-skynet/local-ai-backends:master-metal-darwin-arm64-mlx"
@@ -1605,3 +1613,86 @@
uri: "quay.io/go-skynet/local-ai-backends:master-nvidia-l4t-cuda-13-arm64-vibevoice"
mirrors:
- localai/localai-backends:master-nvidia-l4t-cuda-13-arm64-vibevoice
## pocket-tts
- !!merge <<: *pocket-tts
name: "pocket-tts-development"
capabilities:
nvidia: "cuda12-pocket-tts-development"
intel: "intel-pocket-tts-development"
amd: "rocm-pocket-tts-development"
nvidia-l4t: "nvidia-l4t-pocket-tts-development"
default: "cpu-pocket-tts-development"
nvidia-cuda-13: "cuda13-pocket-tts-development"
nvidia-cuda-12: "cuda12-pocket-tts-development"
nvidia-l4t-cuda-12: "nvidia-l4t-pocket-tts-development"
nvidia-l4t-cuda-13: "cuda13-nvidia-l4t-arm64-pocket-tts-development"
- !!merge <<: *pocket-tts
name: "cpu-pocket-tts"
uri: "quay.io/go-skynet/local-ai-backends:latest-cpu-pocket-tts"
mirrors:
- localai/localai-backends:latest-cpu-pocket-tts
- !!merge <<: *pocket-tts
name: "cpu-pocket-tts-development"
uri: "quay.io/go-skynet/local-ai-backends:master-cpu-pocket-tts"
mirrors:
- localai/localai-backends:master-cpu-pocket-tts
- !!merge <<: *pocket-tts
name: "cuda12-pocket-tts"
uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-nvidia-cuda-12-pocket-tts"
mirrors:
- localai/localai-backends:latest-gpu-nvidia-cuda-12-pocket-tts
- !!merge <<: *pocket-tts
name: "cuda12-pocket-tts-development"
uri: "quay.io/go-skynet/local-ai-backends:master-gpu-nvidia-cuda-12-pocket-tts"
mirrors:
- localai/localai-backends:master-gpu-nvidia-cuda-12-pocket-tts
- !!merge <<: *pocket-tts
name: "cuda13-pocket-tts"
uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-nvidia-cuda-13-pocket-tts"
mirrors:
- localai/localai-backends:latest-gpu-nvidia-cuda-13-pocket-tts
- !!merge <<: *pocket-tts
name: "cuda13-pocket-tts-development"
uri: "quay.io/go-skynet/local-ai-backends:master-gpu-nvidia-cuda-13-pocket-tts"
mirrors:
- localai/localai-backends:master-gpu-nvidia-cuda-13-pocket-tts
- !!merge <<: *pocket-tts
name: "intel-pocket-tts"
uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-intel-pocket-tts"
mirrors:
- localai/localai-backends:latest-gpu-intel-pocket-tts
- !!merge <<: *pocket-tts
name: "intel-pocket-tts-development"
uri: "quay.io/go-skynet/local-ai-backends:master-gpu-intel-pocket-tts"
mirrors:
- localai/localai-backends:master-gpu-intel-pocket-tts
- !!merge <<: *pocket-tts
name: "rocm-pocket-tts"
uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-rocm-hipblas-pocket-tts"
mirrors:
- localai/localai-backends:latest-gpu-rocm-hipblas-pocket-tts
- !!merge <<: *pocket-tts
name: "rocm-pocket-tts-development"
uri: "quay.io/go-skynet/local-ai-backends:master-gpu-rocm-hipblas-pocket-tts"
mirrors:
- localai/localai-backends:master-gpu-rocm-hipblas-pocket-tts
- !!merge <<: *pocket-tts
name: "nvidia-l4t-pocket-tts"
uri: "quay.io/go-skynet/local-ai-backends:latest-nvidia-l4t-pocket-tts"
mirrors:
- localai/localai-backends:latest-nvidia-l4t-pocket-tts
- !!merge <<: *pocket-tts
name: "nvidia-l4t-pocket-tts-development"
uri: "quay.io/go-skynet/local-ai-backends:master-nvidia-l4t-pocket-tts"
mirrors:
- localai/localai-backends:master-nvidia-l4t-pocket-tts
- !!merge <<: *pocket-tts
name: "cuda13-nvidia-l4t-arm64-pocket-tts"
uri: "quay.io/go-skynet/local-ai-backends:latest-nvidia-l4t-cuda-13-arm64-pocket-tts"
mirrors:
- localai/localai-backends:latest-nvidia-l4t-cuda-13-arm64-pocket-tts
- !!merge <<: *pocket-tts
name: "cuda13-nvidia-l4t-arm64-pocket-tts-development"
uri: "quay.io/go-skynet/local-ai-backends:master-nvidia-l4t-cuda-13-arm64-pocket-tts"
mirrors:
- localai/localai-backends:master-nvidia-l4t-cuda-13-arm64-pocket-tts

View File

@@ -41,6 +41,14 @@ from optimum.quanto import freeze, qfloat8, quantize
from transformers import T5EncoderModel
from safetensors.torch import load_file
# Import LTX-2 specific utilities
try:
from diffusers.pipelines.ltx2.export_utils import encode_video as ltx2_encode_video
LTX2_AVAILABLE = True
except ImportError:
LTX2_AVAILABLE = False
ltx2_encode_video = None
_ONE_DAY_IN_SECONDS = 60 * 60 * 24
COMPEL = os.environ.get("COMPEL", "0") == "1"
XPU = os.environ.get("XPU", "0") == "1"
@@ -290,6 +298,20 @@ class BackendServicer(backend_pb2_grpc.BackendServicer):
pipe.enable_model_cpu_offload()
return pipe
# LTX2ImageToVideoPipeline - needs img2vid flag, CPU offload, and special handling
if pipeline_type == "LTX2ImageToVideoPipeline":
self.img2vid = True
self.ltx2_pipeline = True
pipe = load_diffusers_pipeline(
class_name="LTX2ImageToVideoPipeline",
model_id=request.Model,
torch_dtype=torchType,
variant=variant
)
if not DISABLE_CPU_OFFLOAD:
pipe.enable_model_cpu_offload()
return pipe
# ================================================================
# Dynamic pipeline loading - the default path for most pipelines
# Uses the dynamic loader to instantiate any pipeline by class name
@@ -404,6 +426,7 @@ class BackendServicer(backend_pb2_grpc.BackendServicer):
fromSingleFile = request.Model.startswith("http") or request.Model.startswith("/") or local
self.img2vid = False
self.txt2vid = False
self.ltx2_pipeline = False
# Load pipeline using dynamic loader
# Special cases that require custom initialization are handled first
@@ -686,7 +709,44 @@ class BackendServicer(backend_pb2_grpc.BackendServicer):
print(f"Generating video with {kwargs=}", file=sys.stderr)
# Generate video frames based on pipeline type
if self.PipelineType == "WanPipeline":
if self.ltx2_pipeline or self.PipelineType == "LTX2ImageToVideoPipeline":
# LTX-2 image-to-video generation with audio
if not LTX2_AVAILABLE:
return backend_pb2.Result(success=False, message="LTX-2 pipeline requires diffusers.pipelines.ltx2.export_utils")
# LTX-2 uses 'image' parameter instead of 'start_image'
if request.start_image:
image = load_image(request.start_image)
kwargs["image"] = image
# Remove start_image if it was added
kwargs.pop("start_image", None)
# LTX-2 uses 'frame_rate' instead of 'fps'
frame_rate = float(fps)
kwargs["frame_rate"] = frame_rate
# LTX-2 requires output_type="np" and return_dict=False
kwargs["output_type"] = "np"
kwargs["return_dict"] = False
# Generate video and audio
video, audio = self.pipe(**kwargs)
# Convert video to uint8 format
video = (video * 255).round().astype("uint8")
video = torch.from_numpy(video)
# Use LTX-2's encode_video function which handles audio
ltx2_encode_video(
video[0],
fps=frame_rate,
audio=audio[0].float().cpu(),
audio_sample_rate=self.pipe.vocoder.config.output_sampling_rate,
output_path=request.dst,
)
return backend_pb2.Result(message="Video generated successfully", success=True)
elif self.PipelineType == "WanPipeline":
# WAN2.2 text-to-video generation
output = self.pipe(**kwargs)
frames = output.frames[0] # WAN2.2 returns frames in this format
@@ -727,7 +787,7 @@ class BackendServicer(backend_pb2_grpc.BackendServicer):
else:
return backend_pb2.Result(success=False, message=f"Pipeline {self.PipelineType} does not support video generation")
# Export video
# Export video (for non-LTX-2 pipelines)
export_to_video(frames, request.dst, fps=fps)
return backend_pb2.Result(message="Video generated successfully", success=True)
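The LTX-2 branch above converts the pipeline's normalized float frames to 8-bit pixels before encoding (`(video * 255).round().astype("uint8")`). A minimal, pure-Python sketch of that one conversion step — standalone for illustration, not the backend's actual NumPy/torch code:

```python
def frames_to_uint8(frames):
    """Scale normalized float pixels in [0, 1] to 8-bit values,
    mirroring the uint8 conversion step in the LTX-2 branch above.
    Values are clamped to [0, 255] for safety."""
    return [[min(255, max(0, round(px * 255))) for px in row] for row in frames]

print(frames_to_uint8([[0.0, 0.25, 1.0]]))  # [[0, 64, 255]]
```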

View File

@@ -0,0 +1,23 @@
.PHONY: pocket-tts
pocket-tts:
bash install.sh
.PHONY: run
run: pocket-tts
@echo "Running pocket-tts..."
bash run.sh
@echo "pocket-tts run."
.PHONY: test
test: pocket-tts
@echo "Testing pocket-tts..."
bash test.sh
@echo "pocket-tts tested."
.PHONY: protogen-clean
protogen-clean:
$(RM) backend_pb2_grpc.py backend_pb2.py
.PHONY: clean
clean: protogen-clean
rm -rf venv __pycache__

View File

@@ -0,0 +1,255 @@
#!/usr/bin/env python3
"""
This is an extra gRPC server of LocalAI for Pocket TTS
"""
from concurrent import futures
import time
import argparse
import signal
import sys
import os
import traceback
import scipy.io.wavfile
import backend_pb2
import backend_pb2_grpc
import torch
from pocket_tts import TTSModel
import grpc
def is_float(s):
"""Check if a string can be converted to float."""
try:
float(s)
return True
except ValueError:
return False
def is_int(s):
"""Check if a string can be converted to int."""
try:
int(s)
return True
except ValueError:
return False
_ONE_DAY_IN_SECONDS = 60 * 60 * 24
# If MAX_WORKERS is specified in the environment use it, otherwise default to 1
MAX_WORKERS = int(os.environ.get('PYTHON_GRPC_MAX_WORKERS', '1'))
# Implement the BackendServicer class with the service methods
class BackendServicer(backend_pb2_grpc.BackendServicer):
"""
BackendServicer is the class that implements the gRPC service
"""
def Health(self, request, context):
return backend_pb2.Reply(message=bytes("OK", 'utf-8'))
def LoadModel(self, request, context):
# Get device
if torch.cuda.is_available():
print("CUDA is available", file=sys.stderr)
device = "cuda"
else:
print("CUDA is not available", file=sys.stderr)
device = "cpu"
mps_available = hasattr(torch.backends, "mps") and torch.backends.mps.is_available()
if mps_available:
device = "mps"
if not torch.cuda.is_available() and request.CUDA:
return backend_pb2.Result(success=False, message="CUDA is not available")
# Normalize potential 'mpx' typo to 'mps'
if device == "mpx":
print("Note: device 'mpx' detected, treating it as 'mps'.", file=sys.stderr)
device = "mps"
# Validate mps availability if requested
if device == "mps" and not torch.backends.mps.is_available():
print("Warning: MPS not available. Falling back to CPU.", file=sys.stderr)
device = "cpu"
self.device = device
options = request.Options
# empty dict
self.options = {}
# The options are a list of strings in this form optname:optvalue
# We are storing all the options in a dict so we can use it later when
# generating the audio
for opt in options:
if ":" not in opt:
continue
key, value = opt.split(":", 1) # Split only on first colon
# if value is a number, convert it to the appropriate type
if is_float(value):
value = float(value)
elif is_int(value):
value = int(value)
elif value.lower() in ["true", "false"]:
value = value.lower() == "true"
self.options[key] = value
# Default voice for caching
self.default_voice_url = self.options.get("default_voice", None)
self._voice_cache = {}
try:
print("Loading Pocket TTS model", file=sys.stderr)
self.tts_model = TTSModel.load_model()
print(f"Model loaded successfully. Sample rate: {self.tts_model.sample_rate}", file=sys.stderr)
# Pre-load default voice if specified
if self.default_voice_url:
try:
print(f"Pre-loading default voice: {self.default_voice_url}", file=sys.stderr)
voice_state = self.tts_model.get_state_for_audio_prompt(self.default_voice_url)
self._voice_cache[self.default_voice_url] = voice_state
print("Default voice loaded successfully", file=sys.stderr)
except Exception as e:
print(f"Warning: Failed to pre-load default voice: {e}", file=sys.stderr)
except Exception as err:
return backend_pb2.Result(success=False, message=f"Unexpected {err=}, {type(err)=}")
return backend_pb2.Result(message="Model loaded successfully", success=True)
def _get_voice_state(self, voice_input):
"""
Get voice state from cache or load it.
voice_input can be:
- HuggingFace URL (e.g., hf://kyutai/tts-voices/alba-mackenna/casual.wav)
- Local file path
- None (use default)
"""
# Use default if no voice specified
if not voice_input:
voice_input = self.default_voice_url
if not voice_input:
return None
# Check cache first
if voice_input in self._voice_cache:
return self._voice_cache[voice_input]
# Load voice state
try:
print(f"Loading voice from: {voice_input}", file=sys.stderr)
voice_state = self.tts_model.get_state_for_audio_prompt(voice_input)
self._voice_cache[voice_input] = voice_state
return voice_state
except Exception as e:
print(f"Error loading voice from {voice_input}: {e}", file=sys.stderr)
return None
def TTS(self, request, context):
try:
# Determine voice input
# Priority: request.voice > AudioPath (from ModelOptions) > default
voice_input = None
if request.voice:
voice_input = request.voice
elif hasattr(request, 'AudioPath') and request.AudioPath:
# Use AudioPath as voice file
if os.path.isabs(request.AudioPath):
voice_input = request.AudioPath
elif hasattr(request, 'ModelFile') and request.ModelFile:
model_file_base = os.path.dirname(request.ModelFile)
voice_input = os.path.join(model_file_base, request.AudioPath)
elif hasattr(request, 'ModelPath') and request.ModelPath:
voice_input = os.path.join(request.ModelPath, request.AudioPath)
else:
voice_input = request.AudioPath
# Get voice state
voice_state = self._get_voice_state(voice_input)
if voice_state is None:
return backend_pb2.Result(
success=False,
message=f"Voice not found or failed to load: {voice_input}. Please provide a valid voice URL or file path."
)
# Prepare text
text = request.text.strip()
if not text:
return backend_pb2.Result(
success=False,
message="Text is empty"
)
print(f"Generating audio for text: {text[:50]}...", file=sys.stderr)
# Generate audio
audio = self.tts_model.generate_audio(voice_state, text)
# Audio is a 1D torch tensor containing PCM data
if audio is None or audio.numel() == 0:
return backend_pb2.Result(
success=False,
message="No audio generated"
)
# Save audio to file
output_path = request.dst
if not output_path:
output_path = "/tmp/pocket-tts-output.wav"
# Ensure output directory exists
output_dir = os.path.dirname(output_path)
if output_dir and not os.path.exists(output_dir):
os.makedirs(output_dir, exist_ok=True)
# Convert torch tensor to numpy and save
audio_numpy = audio.numpy()
scipy.io.wavfile.write(output_path, self.tts_model.sample_rate, audio_numpy)
print(f"Saved audio to {output_path}", file=sys.stderr)
except Exception as err:
print(f"Error in TTS: {err}", file=sys.stderr)
print(traceback.format_exc(), file=sys.stderr)
return backend_pb2.Result(success=False, message=f"Unexpected {err=}, {type(err)=}")
return backend_pb2.Result(success=True)
def serve(address):
server = grpc.server(futures.ThreadPoolExecutor(max_workers=MAX_WORKERS),
options=[
('grpc.max_message_length', 50 * 1024 * 1024), # 50MB
('grpc.max_send_message_length', 50 * 1024 * 1024), # 50MB
('grpc.max_receive_message_length', 50 * 1024 * 1024), # 50MB
])
backend_pb2_grpc.add_BackendServicer_to_server(BackendServicer(), server)
server.add_insecure_port(address)
server.start()
print("Server started. Listening on: " + address, file=sys.stderr)
# Define the signal handler function
def signal_handler(sig, frame):
print("Received termination signal. Shutting down...")
server.stop(0)
sys.exit(0)
# Set the signal handlers for SIGINT and SIGTERM
signal.signal(signal.SIGINT, signal_handler)
signal.signal(signal.SIGTERM, signal_handler)
try:
while True:
time.sleep(_ONE_DAY_IN_SECONDS)
except KeyboardInterrupt:
server.stop(0)
if __name__ == "__main__":
parser = argparse.ArgumentParser(description="Run the gRPC server.")
parser.add_argument(
"--addr", default="localhost:50051", help="The address to bind the server to."
)
args = parser.parse_args()
serve(args.addr)
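The `LoadModel` options loop above coerces `key:value` strings into typed values. A standalone sketch of that parsing logic — note that because every integer string also parses as a float and the float check runs first, `"2"` becomes `2.0`, a quirk mirrored here:

```python
def parse_options(options):
    """Turn option strings like ["temp:0.7", "debug:true"] into a typed
    dict, following the same coercion order as the LoadModel handler above."""
    parsed = {}
    for opt in options:
        if ":" not in opt:
            continue  # skip malformed entries
        key, value = opt.split(":", 1)  # split only on the first colon
        try:
            value = float(value)  # float is checked first, so "2" -> 2.0
        except ValueError:
            if value.lower() in ("true", "false"):
                value = value.lower() == "true"
        parsed[key] = value
    return parsed

print(parse_options(["temp:0.7", "debug:true", "default_voice:azelma"]))
# {'temp': 0.7, 'debug': True, 'default_voice': 'azelma'}
```

Splitting on the first colon only is what lets values such as `hf://...` voice URLs pass through intact.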

View File

@@ -0,0 +1,30 @@
#!/bin/bash
set -e
backend_dir=$(dirname $0)
if [ -d $backend_dir/common ]; then
source $backend_dir/common/libbackend.sh
else
source $backend_dir/../common/libbackend.sh
fi
# The Intel pip index is broken: it returns 200 status codes for every package name but never any package links.
# That makes uv believe the package exists in the Intel index, and by default uv stops looking at other indexes once it finds a match.
# We need uv to keep falling through to the default PyPI index to find optimum[openvino] there.
# Note: --upgrade actually allows us to *downgrade* torch to the version provided in the Intel pip index.
if [ "x${BUILD_PROFILE}" == "xintel" ]; then
EXTRA_PIP_INSTALL_FLAGS+=" --upgrade --index-strategy=unsafe-first-match"
fi
# Use python 3.12 for l4t
if [ "x${BUILD_PROFILE}" == "xl4t13" ]; then
PYTHON_VERSION="3.12"
PYTHON_PATCH="12"
PY_STANDALONE_TAG="20251120"
fi
if [ "x${BUILD_PROFILE}" == "xl4t12" ]; then
USE_PIP=true
fi
installRequirements

View File

@@ -0,0 +1,11 @@
#!/bin/bash
set -e
backend_dir=$(dirname $0)
if [ -d $backend_dir/common ]; then
source $backend_dir/common/libbackend.sh
else
source $backend_dir/../common/libbackend.sh
fi
python3 -m grpc_tools.protoc -I../.. -I./ --python_out=. --grpc_python_out=. backend.proto

View File

@@ -0,0 +1,4 @@
--extra-index-url https://download.pytorch.org/whl/cpu
pocket-tts
scipy
torch

View File

@@ -0,0 +1,4 @@
--extra-index-url https://download.pytorch.org/whl/cu121
pocket-tts
scipy
torch

View File

@@ -0,0 +1,4 @@
--extra-index-url https://download.pytorch.org/whl/cu130
pocket-tts
scipy
torch

View File

@@ -0,0 +1,4 @@
--extra-index-url https://download.pytorch.org/whl/rocm6.3
pocket-tts
scipy
torch==2.7.1+rocm6.3

View File

@@ -0,0 +1,4 @@
--extra-index-url https://pytorch-extension.intel.com/release-whl/stable/xpu/us/
pocket-tts
scipy
torch==2.5.1+cxx11.abi

View File

@@ -0,0 +1,4 @@
--extra-index-url https://pypi.jetson-ai-lab.io/jp6/cu129/
pocket-tts
scipy
torch

View File

@@ -0,0 +1,4 @@
--extra-index-url https://download.pytorch.org/whl/cu130
pocket-tts
scipy
torch

View File

@@ -0,0 +1,4 @@
pocket-tts
scipy
torch==2.7.1
torchvision==0.22.1

View File

@@ -0,0 +1,4 @@
grpcio==1.71.0
protobuf
certifi
packaging==24.1

View File

@@ -0,0 +1,9 @@
#!/bin/bash
backend_dir=$(dirname $0)
if [ -d $backend_dir/common ]; then
source $backend_dir/common/libbackend.sh
else
source $backend_dir/../common/libbackend.sh
fi
startBackend $@

View File

@@ -0,0 +1,141 @@
"""
A test script to test the gRPC service
"""
import unittest
import subprocess
import time
import os
import tempfile
import backend_pb2
import backend_pb2_grpc
import grpc
class TestBackendServicer(unittest.TestCase):
"""
TestBackendServicer is the class that tests the gRPC service
"""
def setUp(self):
"""
This method sets up the gRPC service by starting the server
"""
self.service = subprocess.Popen(["python3", "backend.py", "--addr", "localhost:50051"])
time.sleep(30)
def tearDown(self) -> None:
"""
This method tears down the gRPC service by terminating the server
"""
self.service.terminate()
self.service.wait()
def test_server_startup(self):
"""
This method tests if the server starts up successfully
"""
try:
self.setUp()
with grpc.insecure_channel("localhost:50051") as channel:
stub = backend_pb2_grpc.BackendStub(channel)
response = stub.Health(backend_pb2.HealthMessage())
self.assertEqual(response.message, b'OK')
except Exception as err:
print(err)
self.fail("Server failed to start")
finally:
self.tearDown()
def test_load_model(self):
"""
This method tests if the model is loaded successfully
"""
try:
self.setUp()
with grpc.insecure_channel("localhost:50051") as channel:
stub = backend_pb2_grpc.BackendStub(channel)
response = stub.LoadModel(backend_pb2.ModelOptions())
print(response)
self.assertTrue(response.success)
self.assertEqual(response.message, "Model loaded successfully")
except Exception as err:
print(err)
self.fail("LoadModel service failed")
finally:
self.tearDown()
def test_tts_with_hf_voice(self):
"""
This method tests TTS generation with a named voice
"""
try:
self.setUp()
with grpc.insecure_channel("localhost:50051") as channel:
stub = backend_pb2_grpc.BackendStub(channel)
# Load model
response = stub.LoadModel(backend_pb2.ModelOptions())
self.assertTrue(response.success)
# Create temporary output file
with tempfile.NamedTemporaryFile(suffix='.wav', delete=False) as tmp_file:
output_path = tmp_file.name
# Test TTS with a named voice
tts_request = backend_pb2.TTSRequest(
text="Hello world, this is a test.",
dst=output_path,
voice="azelma"
)
tts_response = stub.TTS(tts_request)
self.assertTrue(tts_response.success)
# Verify output file exists and is not empty
self.assertTrue(os.path.exists(output_path))
self.assertGreater(os.path.getsize(output_path), 0)
# Cleanup
os.unlink(output_path)
except Exception as err:
print(err)
self.fail("TTS service failed")
finally:
self.tearDown()
def test_tts_with_default_voice(self):
"""
This method tests TTS generation with a default voice (set via the default_voice option in LoadModel)
"""
try:
self.setUp()
with grpc.insecure_channel("localhost:50051") as channel:
stub = backend_pb2_grpc.BackendStub(channel)
# Load model with default voice
load_request = backend_pb2.ModelOptions(
Options=["default_voice:azelma"]
)
response = stub.LoadModel(load_request)
self.assertTrue(response.success)
# Create temporary output file
with tempfile.NamedTemporaryFile(suffix='.wav', delete=False) as tmp_file:
output_path = tmp_file.name
# Test TTS without specifying voice (should use default)
tts_request = backend_pb2.TTSRequest(
text="Hello world, this is a test.",
dst=output_path
)
tts_response = stub.TTS(tts_request)
self.assertTrue(tts_response.success)
# Verify output file exists and is not empty
self.assertTrue(os.path.exists(output_path))
self.assertGreater(os.path.getsize(output_path), 0)
# Cleanup
os.unlink(output_path)
except Exception as err:
print(err)
self.fail("TTS service with default voice failed")
finally:
self.tearDown()
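These tests exercise the voice-selection priority implemented in the backend's `TTS()` handler: an explicit `request.voice` wins, then `AudioPath` (used as-is if absolute, otherwise joined with the model's directory), then the configured default. A hedged standalone sketch of that resolution order — the function name and keyword arguments are illustrative, not part of the backend API:

```python
import os

def resolve_voice(voice=None, audio_path=None, model_file=None,
                  model_path=None, default=None):
    """Illustrative reimplementation of the voice-selection priority
    in the pocket-tts backend's TTS() handler (not the actual API)."""
    if voice:
        return voice  # explicit request voice wins
    if audio_path:
        if os.path.isabs(audio_path):
            return audio_path
        if model_file:  # relative path: resolve against the model file's directory
            return os.path.join(os.path.dirname(model_file), audio_path)
        if model_path:
            return os.path.join(model_path, audio_path)
        return audio_path
    return default  # fall back to the pre-loaded default voice, if any

print(resolve_voice(voice="azelma"))                          # azelma
print(resolve_voice(audio_path="v.wav", model_path="/models"))  # /models/v.wav
```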

View File

@@ -0,0 +1,11 @@
#!/bin/bash
set -e
backend_dir=$(dirname $0)
if [ -d $backend_dir/common ]; then
source $backend_dir/common/libbackend.sh
else
source $backend_dir/../common/libbackend.sh
fi
runUnittests

View File

@@ -6,4 +6,4 @@ transformers
bitsandbytes
outetts
sentence-transformers==5.2.0
protobuf==6.33.2
protobuf==6.33.4

View File

@@ -6,4 +6,4 @@ transformers
bitsandbytes
outetts
sentence-transformers==5.2.0
protobuf==6.33.2
protobuf==6.33.4

View File

@@ -6,4 +6,4 @@ transformers
bitsandbytes
outetts
sentence-transformers==5.2.0
protobuf==6.33.2
protobuf==6.33.4

View File

@@ -8,4 +8,4 @@ bitsandbytes
outetts
bitsandbytes
sentence-transformers==5.2.0
protobuf==6.33.2
protobuf==6.33.4

View File

@@ -10,4 +10,4 @@ intel-extension-for-transformers
bitsandbytes
outetts
sentence-transformers==5.2.0
protobuf==6.33.2
protobuf==6.33.4

View File

@@ -1,5 +1,5 @@
grpcio==1.76.0
protobuf==6.33.2
protobuf==6.33.4
certifi
setuptools
scipy==1.15.1

View File

@@ -56,7 +56,7 @@ type RunCMD struct {
UseSubtleKeyComparison bool `env:"LOCALAI_SUBTLE_KEY_COMPARISON" default:"false" help:"If true, API Key validation comparisons will be performed using constant-time comparisons rather than simple equality. This trades off performance on each request for resiliency against timing attacks." group:"hardening"`
DisableApiKeyRequirementForHttpGet bool `env:"LOCALAI_DISABLE_API_KEY_REQUIREMENT_FOR_HTTP_GET" default:"false" help:"If true, a valid API key is not required to issue GET requests to portions of the web ui. This should only be enabled in secure testing environments" group:"hardening"`
DisableMetricsEndpoint bool `env:"LOCALAI_DISABLE_METRICS_ENDPOINT,DISABLE_METRICS_ENDPOINT" default:"false" help:"Disable the /metrics endpoint" group:"api"`
HttpGetExemptedEndpoints []string `env:"LOCALAI_HTTP_GET_EXEMPTED_ENDPOINTS" default:"^/$,^/browse/?$,^/talk/?$,^/p2p/?$,^/chat/?$,^/text2image/?$,^/tts/?$,^/static/.*$,^/swagger.*$" help:"If LOCALAI_DISABLE_API_KEY_REQUIREMENT_FOR_HTTP_GET is overridden to true, this is the list of endpoints to exempt. Only adjust this in case of a security incident or as a result of a personal security posture review" group:"hardening"`
HttpGetExemptedEndpoints []string `env:"LOCALAI_HTTP_GET_EXEMPTED_ENDPOINTS" default:"^/$,^/browse/?$,^/talk/?$,^/p2p/?$,^/chat/?$,^/image/?$,^/text2image/?$,^/tts/?$,^/static/.*$,^/swagger.*$" help:"If LOCALAI_DISABLE_API_KEY_REQUIREMENT_FOR_HTTP_GET is overridden to true, this is the list of endpoints to exempt. Only adjust this in case of a security incident or as a result of a personal security posture review" group:"hardening"`
Peer2Peer bool `env:"LOCALAI_P2P,P2P" name:"p2p" default:"false" help:"Enable P2P mode" group:"p2p"`
Peer2PeerDHTInterval int `env:"LOCALAI_P2P_DHT_INTERVAL,P2P_DHT_INTERVAL" default:"360" name:"p2p-dht-interval" help:"Interval for DHT refresh (used during token generation)" group:"p2p"`
Peer2PeerOTPInterval int `env:"LOCALAI_P2P_OTP_INTERVAL,P2P_OTP_INTERVAL" default:"9000" name:"p2p-otp-interval" help:"Interval for OTP refresh (used during token generation)" group:"p2p"`
@@ -83,6 +83,7 @@ type RunCMD struct {
EnableTracing bool `env:"LOCALAI_ENABLE_TRACING,ENABLE_TRACING" help:"Enable API tracing" group:"api"`
TracingMaxItems int `env:"LOCALAI_TRACING_MAX_ITEMS" default:"1024" help:"Maximum number of traces to keep" group:"api"`
AgentJobRetentionDays int `env:"LOCALAI_AGENT_JOB_RETENTION_DAYS,AGENT_JOB_RETENTION_DAYS" default:"30" help:"Number of days to keep agent job history (default: 30)" group:"api"`
OpenResponsesStoreTTL string `env:"LOCALAI_OPEN_RESPONSES_STORE_TTL,OPEN_RESPONSES_STORE_TTL" default:"0" help:"TTL for Open Responses store (e.g., 1h, 30m, 0 = no expiration)" group:"api"`
Version bool
}
@@ -249,6 +250,15 @@ func (r *RunCMD) Run(ctx *cliContext.Context) error {
opts = append(opts, config.WithLRUEvictionRetryInterval(dur))
}
// Handle Open Responses store TTL
if r.OpenResponsesStoreTTL != "" && r.OpenResponsesStoreTTL != "0" {
dur, err := time.ParseDuration(r.OpenResponsesStoreTTL)
if err != nil {
return fmt.Errorf("invalid Open Responses store TTL: %w", err)
}
opts = append(opts, config.WithOpenResponsesStoreTTL(dur))
}
// split ":" to get backend name and the uri
for _, v := range r.ExternalGRPCBackends {
backend := v[:strings.IndexByte(v, ':')]

View File

@@ -86,6 +86,8 @@ type ApplicationConfig struct {
AgentJobRetentionDays int // Default: 30 days
OpenResponsesStoreTTL time.Duration // TTL for Open Responses store (0 = no expiration)
PathWithoutAuth []string
}
@@ -467,6 +469,12 @@ func WithAgentJobRetentionDays(days int) AppOption {
}
}
func WithOpenResponsesStoreTTL(ttl time.Duration) AppOption {
return func(o *ApplicationConfig) {
o.OpenResponsesStoreTTL = ttl
}
}
func WithEnforcedPredownloadScans(enforced bool) AppOption {
return func(o *ApplicationConfig) {
o.EnforcePredownloadScans = enforced
@@ -594,6 +602,12 @@ func (o *ApplicationConfig) ToRuntimeSettings() RuntimeSettings {
} else {
lruEvictionRetryInterval = "1s" // default
}
var openResponsesStoreTTL string
if o.OpenResponsesStoreTTL > 0 {
openResponsesStoreTTL = o.OpenResponsesStoreTTL.String()
} else {
openResponsesStoreTTL = "0" // default: no expiration
}
return RuntimeSettings{
WatchdogEnabled: &watchdogEnabled,
@@ -628,6 +642,7 @@ func (o *ApplicationConfig) ToRuntimeSettings() RuntimeSettings {
AutoloadBackendGalleries: &autoloadBackendGalleries,
ApiKeys: &apiKeys,
AgentJobRetentionDays: &agentJobRetentionDays,
OpenResponsesStoreTTL: &openResponsesStoreTTL,
}
}
@@ -769,6 +784,14 @@ func (o *ApplicationConfig) ApplyRuntimeSettings(settings *RuntimeSettings) (req
if settings.AgentJobRetentionDays != nil {
o.AgentJobRetentionDays = *settings.AgentJobRetentionDays
}
if settings.OpenResponsesStoreTTL != nil {
if *settings.OpenResponsesStoreTTL == "0" || *settings.OpenResponsesStoreTTL == "" {
o.OpenResponsesStoreTTL = 0 // No expiration
} else if dur, err := time.ParseDuration(*settings.OpenResponsesStoreTTL); err == nil {
o.OpenResponsesStoreTTL = dur
}
// This setting doesn't require a restart; it can be updated dynamically
}
// Note: ApiKeys requires special handling (merging with startup keys) - handled in caller
return requireRestart

View File

@@ -10,6 +10,7 @@ import (
"github.com/mudler/LocalAI/core/schema"
"github.com/mudler/LocalAI/pkg/downloader"
"github.com/mudler/LocalAI/pkg/functions"
"github.com/mudler/LocalAI/pkg/reasoning"
"github.com/mudler/cogito"
"gopkg.in/yaml.v3"
)
@@ -51,6 +52,7 @@ type ModelConfig struct {
ResponseFormatMap map[string]interface{} `yaml:"-" json:"-"`
FunctionsConfig functions.FunctionsConfig `yaml:"function,omitempty" json:"function,omitempty"`
ReasoningConfig reasoning.ReasoningConfig `yaml:"reasoning,omitempty" json:"reasoning,omitempty"`
FeatureFlag FeatureFlag `yaml:"feature_flags,omitempty" json:"feature_flags,omitempty"` // Feature Flag registry. We move fast, and features may break on a per model/backend basis. Registry for (usually temporary) flags that indicate aborting something early.
// LLM configs (GPT4ALL, Llama.cpp, ...)

View File

@@ -60,4 +60,7 @@ type RuntimeSettings struct {
// Agent settings
AgentJobRetentionDays *int `json:"agent_job_retention_days,omitempty"`
// Open Responses settings
OpenResponsesStoreTTL *string `json:"open_responses_store_ttl,omitempty"` // TTL for stored responses (e.g., "1h", "30m", "0" = no expiration)
}

View File

@@ -108,7 +108,15 @@ func API(application *application.Application) (*echo.Echo, error) {
req := c.Request()
res := c.Response()
err := next(c)
xlog.Info("HTTP request", "method", req.Method, "path", req.URL.Path, "status", res.Status)
// Fix for #7989: Reduce log verbosity of Web UI polling
// If the path is /api/operations and the request was successful (200),
// we log it at DEBUG level (hidden by default) instead of INFO.
if req.URL.Path == "/api/operations" && res.Status == 200 {
xlog.Debug("HTTP request", "method", req.Method, "path", req.URL.Path, "status", res.Status)
} else {
xlog.Info("HTTP request", "method", req.Method, "path", req.URL.Path, "status", res.Status)
}
return err
}
})
@@ -185,6 +193,8 @@ func API(application *application.Application) (*echo.Echo, error) {
corsConfig.AllowOrigins = strings.Split(application.ApplicationConfig().CORSAllowOrigins, ",")
}
e.Use(middleware.CORSWithConfig(corsConfig))
} else {
e.Use(middleware.CORS())
}
// CSRF middleware
@@ -206,6 +216,7 @@ func API(application *application.Application) (*echo.Echo, error) {
routes.RegisterLocalAIRoutes(e, requestExtractor, application.ModelConfigLoader(), application.ModelLoader(), application.ApplicationConfig(), application.GalleryService(), opcache, application.TemplatesEvaluator(), application)
routes.RegisterOpenAIRoutes(e, requestExtractor, application)
routes.RegisterAnthropicRoutes(e, requestExtractor, application)
routes.RegisterOpenResponsesRoutes(e, requestExtractor, application)
if !application.ApplicationConfig().DisableWebUI {
routes.RegisterUIAPIRoutes(e, application.ModelConfigLoader(), application.ModelLoader(), application.ApplicationConfig(), application.GalleryService(), opcache, application)
routes.RegisterUIRoutes(e, application.ModelConfigLoader(), application.ModelLoader(), application.ApplicationConfig(), application.GalleryService())

View File

@@ -11,6 +11,7 @@ import (
"github.com/labstack/echo/v4"
"github.com/mudler/LocalAI/core/application"
"github.com/mudler/LocalAI/core/config"
"github.com/mudler/LocalAI/core/http/endpoints/openresponses"
"github.com/mudler/LocalAI/core/p2p"
"github.com/mudler/LocalAI/core/schema"
"github.com/mudler/xlog"
@@ -84,6 +85,16 @@ func UpdateSettingsEndpoint(app *application.Application) echo.HandlerFunc {
})
}
}
if settings.OpenResponsesStoreTTL != nil {
if *settings.OpenResponsesStoreTTL != "0" && *settings.OpenResponsesStoreTTL != "" {
if _, err := time.ParseDuration(*settings.OpenResponsesStoreTTL); err != nil {
return c.JSON(http.StatusBadRequest, schema.SettingsResponse{
Success: false,
Error: "Invalid open_responses_store_ttl format: " + err.Error(),
})
}
}
}
// Save to file
if appConfig.DynamicConfigsDir == "" {
@@ -144,6 +155,22 @@ func UpdateSettingsEndpoint(app *application.Application) echo.HandlerFunc {
xlog.Info("Updated LRU eviction retry settings", "maxRetries", maxRetries, "retryInterval", retryInterval)
}
// Update Open Responses store TTL dynamically
if settings.OpenResponsesStoreTTL != nil {
ttl := time.Duration(0)
if *settings.OpenResponsesStoreTTL != "0" && *settings.OpenResponsesStoreTTL != "" {
if dur, err := time.ParseDuration(*settings.OpenResponsesStoreTTL); err == nil {
ttl = dur
} else {
xlog.Warn("Invalid Open Responses store TTL format", "ttl", *settings.OpenResponsesStoreTTL, "error", err)
}
}
// Apply the new TTL to the global response store
store := openresponses.GetGlobalStore()
store.SetTTL(ttl)
xlog.Info("Updated Open Responses store TTL", "ttl", ttl)
}
// Check if agent job retention changed
agentJobChanged := settings.AgentJobRetentionDays != nil

View File

@@ -13,6 +13,7 @@ import (
"github.com/mudler/LocalAI/core/http/middleware"
"github.com/mudler/LocalAI/core/schema"
"github.com/mudler/LocalAI/pkg/functions"
"github.com/mudler/LocalAI/pkg/reasoning"
"github.com/mudler/LocalAI/core/templates"
"github.com/mudler/LocalAI/pkg/model"
@@ -43,10 +44,19 @@ func ChatEndpoint(cl *config.ModelConfigLoader, ml *model.ModelLoader, evaluator
lastEmittedReasoning := ""
lastEmittedCleanedContent := ""
// Configure reasoning extraction options
// Auto-detect if prompt ends with thinking tag
// or use explicit config setting
thinkingForcedOpen := config.ReasoningConfig.ThinkingForcedOpen || reasoning.DetectThinkingForcedOpen(s)
_, _, err := ComputeChoices(req, s, config, cl, startupOptions, loader, func(s string, c *[]schema.Choice) {}, func(s string, tokenUsage backend.TokenUsage) bool {
accumulatedContent += s
// Extract reasoning from accumulated content
currentReasoning, cleanedContent := functions.ExtractReasoning(accumulatedContent)
opts := []reasoning.Option{}
if thinkingForcedOpen {
opts = append(opts, reasoning.WithThinkingForcedOpen())
}
currentReasoning, cleanedContent := reasoning.Extract(accumulatedContent, opts...)
// Calculate new reasoning delta (what we haven't emitted yet)
var reasoningDelta *string
@@ -230,7 +240,13 @@ func ChatEndpoint(cl *config.ModelConfigLoader, ml *model.ModelLoader, evaluator
return err
}
// Extract reasoning before processing tool calls
reasoning, cleanedResult := functions.ExtractReasoning(result)
// Auto-detect if prompt ends with thinking tag or use explicit config
toolsThinkingForcedOpen := config.ReasoningConfig.ThinkingForcedOpen || reasoning.DetectThinkingForcedOpen(prompt)
opts := []reasoning.Option{}
if toolsThinkingForcedOpen {
opts = append(opts, reasoning.WithThinkingForcedOpen())
}
extractedReasoning, cleanedResult := reasoning.Extract(result, opts...)
result = cleanedResult
textContentToReturn = functions.ParseTextContent(result, config.FunctionsConfig)
@@ -266,8 +282,8 @@ func ChatEndpoint(cl *config.ModelConfigLoader, ml *model.ModelLoader, evaluator
}
var deltaReasoning *string
if reasoning != "" {
deltaReasoning = &reasoning
if extractedReasoning != "" {
deltaReasoning = &extractedReasoning
}
delta := &schema.Message{Content: &result}
if deltaReasoning != nil {
@@ -618,17 +634,24 @@ func ChatEndpoint(cl *config.ModelConfigLoader, ml *model.ModelLoader, evaluator
// no streaming mode
default:
// Auto-detect if prompt ends with thinking tag for non-streaming mode
nonStreamThinkingForcedOpen := config.ReasoningConfig.ThinkingForcedOpen || reasoning.DetectThinkingForcedOpen(predInput)
tokenCallback := func(s string, c *[]schema.Choice) {
// Extract reasoning from the response
reasoning, cleanedS := functions.ExtractReasoning(s)
s = cleanedS
var extractedReasoning string
opts := []reasoning.Option{}
if nonStreamThinkingForcedOpen {
opts = append(opts, reasoning.WithThinkingForcedOpen())
}
extractedReasoning, s = reasoning.Extract(s, opts...)
if !shouldUseFn {
// no function is called, just reply and use stop as finish reason
stopReason := FinishReasonStop
message := &schema.Message{Role: "assistant", Content: &s}
if reasoning != "" {
message.Reasoning = &reasoning
if extractedReasoning != "" {
message.Reasoning = &extractedReasoning
}
*c = append(*c, schema.Choice{FinishReason: &stopReason, Index: 0, Message: message})
return
@@ -650,8 +673,8 @@ func ChatEndpoint(cl *config.ModelConfigLoader, ml *model.ModelLoader, evaluator
stopReason := FinishReasonStop
message := &schema.Message{Role: "assistant", Content: &result}
if reasoning != "" {
message.Reasoning = &reasoning
if extractedReasoning != "" {
message.Reasoning = &extractedReasoning
}
*c = append(*c, schema.Choice{
FinishReason: &stopReason,
@@ -664,8 +687,8 @@ func ChatEndpoint(cl *config.ModelConfigLoader, ml *model.ModelLoader, evaluator
Role: "assistant",
},
}
if reasoning != "" {
toolChoice.Message.Reasoning = &reasoning
if extractedReasoning != "" {
toolChoice.Message.Reasoning = &extractedReasoning
}
for _, ss := range results {
@@ -695,8 +718,8 @@ func ChatEndpoint(cl *config.ModelConfigLoader, ml *model.ModelLoader, evaluator
"arguments": args,
},
}
if reasoning != "" {
message.Reasoning = &reasoning
if extractedReasoning != "" {
message.Reasoning = &extractedReasoning
}
*c = append(*c, schema.Choice{
FinishReason: &functionCallReason,

View File

File diff suppressed because it is too large

View File

@@ -0,0 +1,453 @@
package openresponses
import (
"context"
"encoding/json"
"fmt"
"sync"
"time"
"github.com/mudler/LocalAI/core/schema"
"github.com/mudler/xlog"
)
// ResponseStore provides thread-safe storage for Open Responses API responses
type ResponseStore struct {
mu sync.RWMutex
responses map[string]*StoredResponse
ttl time.Duration // Time-to-live for stored responses (0 = no expiration)
cleanupCtx context.Context
cleanupCancel context.CancelFunc
}
// StreamedEvent represents a buffered SSE event for streaming resume
type StreamedEvent struct {
SequenceNumber int `json:"sequence_number"`
EventType string `json:"event_type"`
Data []byte `json:"data"` // JSON-serialized event
}
// StoredResponse contains a complete response with its input request and output items
type StoredResponse struct {
Request *schema.OpenResponsesRequest
Response *schema.ORResponseResource
Items map[string]*schema.ORItemField // item_id -> item mapping for quick lookup
StoredAt time.Time
ExpiresAt *time.Time // nil if no expiration
// Background execution support
CancelFunc context.CancelFunc // For cancellation of background tasks
StreamEvents []StreamedEvent // Buffered events for streaming resume
StreamEnabled bool // Was created with stream=true
IsBackground bool // Was created with background=true
EventsChan chan struct{} // Signals new events for live subscribers
mu sync.RWMutex // Protect concurrent access to this response
}
var (
globalStore *ResponseStore
storeOnce sync.Once
)
// GetGlobalStore returns the singleton response store instance
func GetGlobalStore() *ResponseStore {
storeOnce.Do(func() {
globalStore = NewResponseStore(0) // Default: no TTL, will be updated from appConfig
})
return globalStore
}
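`GetGlobalStore` is the classic `sync.Once` lazy singleton: initialization runs exactly once, no matter how many goroutines race to the first call. A stripped-down sketch of the idiom (the `store` type and names here are illustrative, not LocalAI's):

```go
package main

import (
	"fmt"
	"sync"
)

type store struct{ items map[string]string }

var (
	global *store
	once   sync.Once
)

// getStore lazily initializes the singleton exactly once,
// even when called from many goroutines concurrently.
func getStore() *store {
	once.Do(func() {
		global = &store{items: make(map[string]string)}
	})
	return global
}

func main() {
	var wg sync.WaitGroup
	seen := make([]*store, 8)
	for i := 0; i < 8; i++ {
		wg.Add(1)
		go func(i int) {
			defer wg.Done()
			seen[i] = getStore()
		}(i)
	}
	wg.Wait()
	// All goroutines observe the same instance.
	same := true
	for _, s := range seen {
		same = same && s == seen[0]
	}
	fmt.Println("same instance:", same)
}
```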
// SetTTL updates the TTL for the store
// This will affect all new responses stored after this call
func (s *ResponseStore) SetTTL(ttl time.Duration) {
s.mu.Lock()
defer s.mu.Unlock()
// Stop existing cleanup loop if running
if s.cleanupCancel != nil {
s.cleanupCancel()
s.cleanupCancel = nil
s.cleanupCtx = nil
}
s.ttl = ttl
// If TTL > 0, start cleanup loop
if ttl > 0 {
s.cleanupCtx, s.cleanupCancel = context.WithCancel(context.Background())
go s.cleanupLoop(s.cleanupCtx)
}
xlog.Debug("Updated Open Responses store TTL", "ttl", ttl, "cleanup_running", ttl > 0)
}
// NewResponseStore creates a new response store with optional TTL
// If ttl is 0, responses are stored indefinitely
func NewResponseStore(ttl time.Duration) *ResponseStore {
store := &ResponseStore{
responses: make(map[string]*StoredResponse),
ttl: ttl,
}
// Start cleanup goroutine if TTL is set
if ttl > 0 {
store.cleanupCtx, store.cleanupCancel = context.WithCancel(context.Background())
go store.cleanupLoop(store.cleanupCtx)
}
return store
}
// Store stores a response with its request and items
func (s *ResponseStore) Store(responseID string, request *schema.OpenResponsesRequest, response *schema.ORResponseResource) {
s.mu.Lock()
defer s.mu.Unlock()
// Build item index for quick lookup
items := make(map[string]*schema.ORItemField)
for i := range response.Output {
item := &response.Output[i]
if item.ID != "" {
items[item.ID] = item
}
}
stored := &StoredResponse{
Request: request,
Response: response,
Items: items,
StoredAt: time.Now(),
ExpiresAt: nil,
}
// Set expiration if TTL is configured
if s.ttl > 0 {
expiresAt := time.Now().Add(s.ttl)
stored.ExpiresAt = &expiresAt
}
s.responses[responseID] = stored
xlog.Debug("Stored Open Responses response", "response_id", responseID, "items_count", len(items))
}
// Get retrieves a stored response by ID
func (s *ResponseStore) Get(responseID string) (*StoredResponse, error) {
s.mu.RLock()
defer s.mu.RUnlock()
stored, exists := s.responses[responseID]
if !exists {
return nil, fmt.Errorf("response not found: %s", responseID)
}
// Check expiration
if stored.ExpiresAt != nil && time.Now().After(*stored.ExpiresAt) {
// Expired: report it as gone; the periodic cleanup loop removes the entry later
return nil, fmt.Errorf("response expired: %s", responseID)
}
return stored, nil
}
// GetItem retrieves a specific item from a stored response
func (s *ResponseStore) GetItem(responseID, itemID string) (*schema.ORItemField, error) {
stored, err := s.Get(responseID)
if err != nil {
return nil, err
}
item, exists := stored.Items[itemID]
if !exists {
return nil, fmt.Errorf("item not found: %s in response %s", itemID, responseID)
}
return item, nil
}
// FindItem searches for an item across all stored responses
// Returns the item and the response ID it was found in
func (s *ResponseStore) FindItem(itemID string) (*schema.ORItemField, string, error) {
s.mu.RLock()
defer s.mu.RUnlock()
now := time.Now()
for responseID, stored := range s.responses {
// Skip expired responses
if stored.ExpiresAt != nil && now.After(*stored.ExpiresAt) {
continue
}
if item, exists := stored.Items[itemID]; exists {
return item, responseID, nil
}
}
return nil, "", fmt.Errorf("item not found in any stored response: %s", itemID)
}
// Delete removes a response from storage
func (s *ResponseStore) Delete(responseID string) {
s.mu.Lock()
defer s.mu.Unlock()
delete(s.responses, responseID)
xlog.Debug("Deleted Open Responses response", "response_id", responseID)
}
// Cleanup removes expired responses
func (s *ResponseStore) Cleanup() int {
if s.ttl == 0 {
return 0
}
s.mu.Lock()
defer s.mu.Unlock()
now := time.Now()
count := 0
for id, stored := range s.responses {
if stored.ExpiresAt != nil && now.After(*stored.ExpiresAt) {
delete(s.responses, id)
count++
}
}
if count > 0 {
xlog.Debug("Cleaned up expired Open Responses", "count", count)
}
return count
}
// cleanupLoop runs periodic cleanup of expired responses
func (s *ResponseStore) cleanupLoop(ctx context.Context) {
if s.ttl == 0 {
return
}
ticker := time.NewTicker(s.ttl / 2) // Cleanup at half TTL interval
defer ticker.Stop()
for {
select {
case <-ctx.Done():
xlog.Debug("Stopped Open Responses store cleanup loop")
return
case <-ticker.C:
s.Cleanup()
}
}
}
// Count returns the number of stored responses
func (s *ResponseStore) Count() int {
s.mu.RLock()
defer s.mu.RUnlock()
return len(s.responses)
}
// StoreBackground stores a background response with cancel function and optional streaming support
func (s *ResponseStore) StoreBackground(responseID string, request *schema.OpenResponsesRequest, response *schema.ORResponseResource, cancelFunc context.CancelFunc, streamEnabled bool) {
s.mu.Lock()
defer s.mu.Unlock()
// Build item index for quick lookup
items := make(map[string]*schema.ORItemField)
for i := range response.Output {
item := &response.Output[i]
if item.ID != "" {
items[item.ID] = item
}
}
stored := &StoredResponse{
Request: request,
Response: response,
Items: items,
StoredAt: time.Now(),
ExpiresAt: nil,
CancelFunc: cancelFunc,
StreamEvents: []StreamedEvent{},
StreamEnabled: streamEnabled,
IsBackground: true,
EventsChan: make(chan struct{}, 100), // Buffered channel for event notifications
}
// Set expiration if TTL is configured
if s.ttl > 0 {
expiresAt := time.Now().Add(s.ttl)
stored.ExpiresAt = &expiresAt
}
s.responses[responseID] = stored
xlog.Debug("Stored background Open Responses response", "response_id", responseID, "stream_enabled", streamEnabled)
}
// UpdateStatus updates the status of a stored response
func (s *ResponseStore) UpdateStatus(responseID string, status string, completedAt *int64) error {
s.mu.RLock()
stored, exists := s.responses[responseID]
s.mu.RUnlock()
if !exists {
return fmt.Errorf("response not found: %s", responseID)
}
stored.mu.Lock()
defer stored.mu.Unlock()
stored.Response.Status = status
stored.Response.CompletedAt = completedAt
xlog.Debug("Updated response status", "response_id", responseID, "status", status)
return nil
}
// UpdateResponse updates the entire response object for a stored response
func (s *ResponseStore) UpdateResponse(responseID string, response *schema.ORResponseResource) error {
s.mu.RLock()
stored, exists := s.responses[responseID]
s.mu.RUnlock()
if !exists {
return fmt.Errorf("response not found: %s", responseID)
}
stored.mu.Lock()
defer stored.mu.Unlock()
// Rebuild item index
items := make(map[string]*schema.ORItemField)
for i := range response.Output {
item := &response.Output[i]
if item.ID != "" {
items[item.ID] = item
}
}
stored.Response = response
stored.Items = items
xlog.Debug("Updated response", "response_id", responseID, "status", response.Status, "items_count", len(items))
return nil
}
// AppendEvent appends a streaming event to the buffer for resume support
func (s *ResponseStore) AppendEvent(responseID string, event *schema.ORStreamEvent) error {
s.mu.RLock()
stored, exists := s.responses[responseID]
s.mu.RUnlock()
if !exists {
return fmt.Errorf("response not found: %s", responseID)
}
// Serialize the event
data, err := json.Marshal(event)
if err != nil {
return fmt.Errorf("failed to marshal event: %w", err)
}
stored.mu.Lock()
stored.StreamEvents = append(stored.StreamEvents, StreamedEvent{
SequenceNumber: event.SequenceNumber,
EventType: event.Type,
Data: data,
})
stored.mu.Unlock()
// Notify any subscribers of new event
select {
case stored.EventsChan <- struct{}{}:
default:
// Channel full, subscribers will catch up
}
return nil
}
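The `select` with an empty `default` at the end of `AppendEvent` is Go's non-blocking send idiom: when the buffered channel is full, the notification is simply dropped instead of blocking the writer, and subscribers catch up on the next signal. A minimal sketch:

```go
package main

import "fmt"

// notify performs a non-blocking send: it returns false instead of
// blocking when the channel's buffer is full.
func notify(ch chan struct{}) bool {
	select {
	case ch <- struct{}{}:
		return true
	default:
		return false
	}
}

func main() {
	ch := make(chan struct{}, 2)
	fmt.Println(notify(ch)) // true
	fmt.Println(notify(ch)) // true
	fmt.Println(notify(ch)) // false: buffer full, notification dropped
}
```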
// GetEventsAfter returns all events with sequence number greater than startingAfter
func (s *ResponseStore) GetEventsAfter(responseID string, startingAfter int) ([]StreamedEvent, error) {
s.mu.RLock()
stored, exists := s.responses[responseID]
s.mu.RUnlock()
if !exists {
return nil, fmt.Errorf("response not found: %s", responseID)
}
stored.mu.RLock()
defer stored.mu.RUnlock()
var result []StreamedEvent
for _, event := range stored.StreamEvents {
if event.SequenceNumber > startingAfter {
result = append(result, event)
}
}
return result, nil
}
// Cancel cancels a background response if it's still in progress
func (s *ResponseStore) Cancel(responseID string) (*schema.ORResponseResource, error) {
s.mu.RLock()
stored, exists := s.responses[responseID]
s.mu.RUnlock()
if !exists {
return nil, fmt.Errorf("response not found: %s", responseID)
}
stored.mu.Lock()
defer stored.mu.Unlock()
// If already in a terminal state, just return the response (idempotent)
status := stored.Response.Status
if status == schema.ORStatusCompleted || status == schema.ORStatusFailed ||
status == schema.ORStatusIncomplete || status == schema.ORStatusCancelled {
xlog.Debug("Response already in terminal state", "response_id", responseID, "status", status)
return stored.Response, nil
}
// Cancel the context if available
if stored.CancelFunc != nil {
stored.CancelFunc()
xlog.Debug("Cancelled background response", "response_id", responseID)
}
// Update status to cancelled
now := time.Now().Unix()
stored.Response.Status = schema.ORStatusCancelled
stored.Response.CompletedAt = &now
return stored.Response, nil
}
// GetEventsChan returns the events notification channel for a response
func (s *ResponseStore) GetEventsChan(responseID string) (chan struct{}, error) {
s.mu.RLock()
stored, exists := s.responses[responseID]
s.mu.RUnlock()
if !exists {
return nil, fmt.Errorf("response not found: %s", responseID)
}
return stored.EventsChan, nil
}
// IsStreamEnabled checks if a response was created with streaming enabled
func (s *ResponseStore) IsStreamEnabled(responseID string) (bool, error) {
s.mu.RLock()
stored, exists := s.responses[responseID]
s.mu.RUnlock()
if !exists {
return false, fmt.Errorf("response not found: %s", responseID)
}
stored.mu.RLock()
defer stored.mu.RUnlock()
return stored.StreamEnabled, nil
}

View File

@@ -0,0 +1,13 @@
package openresponses
import (
"testing"
. "github.com/onsi/ginkgo/v2"
. "github.com/onsi/gomega"
)
func TestStore(t *testing.T) {
RegisterFailHandler(Fail)
RunSpecs(t, "ResponseStore Suite")
}

View File

@@ -0,0 +1,626 @@
package openresponses
import (
"context"
"fmt"
"time"
"github.com/mudler/LocalAI/core/schema"
. "github.com/onsi/ginkgo/v2"
. "github.com/onsi/gomega"
)
var _ = Describe("ResponseStore", func() {
var store *ResponseStore
BeforeEach(func() {
store = NewResponseStore(0) // No TTL for most tests
})
AfterEach(func() {
// Clean up
})
Describe("Store and Get", func() {
It("should store and retrieve a response", func() {
responseID := "resp_test123"
request := &schema.OpenResponsesRequest{
Model: "test-model",
Input: "Hello",
}
response := &schema.ORResponseResource{
ID: responseID,
Object: "response",
CreatedAt: time.Now().Unix(),
Status: "completed",
Model: "test-model",
Output: []schema.ORItemField{
{
Type: "message",
ID: "msg_123",
Status: "completed",
Role: "assistant",
Content: []schema.ORContentPart{{
Type: "output_text",
Text: "Hello, world!",
Annotations: []schema.ORAnnotation{},
Logprobs: []schema.ORLogProb{},
}},
},
},
}
store.Store(responseID, request, response)
stored, err := store.Get(responseID)
Expect(err).ToNot(HaveOccurred())
Expect(stored).ToNot(BeNil())
Expect(stored.Response.ID).To(Equal(responseID))
Expect(stored.Request.Model).To(Equal("test-model"))
Expect(len(stored.Items)).To(Equal(1))
Expect(stored.Items["msg_123"]).ToNot(BeNil())
Expect(stored.Items["msg_123"].ID).To(Equal("msg_123"))
})
It("should return error for non-existent response", func() {
_, err := store.Get("nonexistent")
Expect(err).To(HaveOccurred())
Expect(err.Error()).To(ContainSubstring("not found"))
})
It("should index all items by ID", func() {
responseID := "resp_test456"
request := &schema.OpenResponsesRequest{
Model: "test-model",
Input: "Test",
}
response := &schema.ORResponseResource{
ID: responseID,
Object: "response",
Output: []schema.ORItemField{
{
Type: "message",
ID: "msg_1",
Status: "completed",
Role: "assistant",
},
{
Type: "function_call",
ID: "fc_1",
Status: "completed",
CallID: "fc_1",
Name: "test_function",
Arguments: `{"arg": "value"}`,
},
{
Type: "message",
ID: "msg_2",
Status: "completed",
Role: "assistant",
},
},
}
store.Store(responseID, request, response)
stored, err := store.Get(responseID)
Expect(err).ToNot(HaveOccurred())
Expect(len(stored.Items)).To(Equal(3))
Expect(stored.Items["msg_1"]).ToNot(BeNil())
Expect(stored.Items["fc_1"]).ToNot(BeNil())
Expect(stored.Items["msg_2"]).ToNot(BeNil())
})
It("should handle items without IDs", func() {
responseID := "resp_test789"
request := &schema.OpenResponsesRequest{
Model: "test-model",
Input: "Test",
}
response := &schema.ORResponseResource{
ID: responseID,
Object: "response",
Output: []schema.ORItemField{
{
Type: "message",
ID: "", // No ID
Status: "completed",
Role: "assistant",
},
{
Type: "message",
ID: "msg_with_id",
Status: "completed",
Role: "assistant",
},
},
}
store.Store(responseID, request, response)
stored, err := store.Get(responseID)
Expect(err).ToNot(HaveOccurred())
// Only items with IDs are indexed
Expect(len(stored.Items)).To(Equal(1))
Expect(stored.Items["msg_with_id"]).ToNot(BeNil())
})
})
Describe("GetItem", func() {
It("should retrieve a specific item by ID", func() {
responseID := "resp_item_test"
itemID := "msg_specific"
request := &schema.OpenResponsesRequest{
Model: "test-model",
Input: "Test",
}
response := &schema.ORResponseResource{
ID: responseID,
Object: "response",
Output: []schema.ORItemField{
{
Type: "message",
ID: itemID,
Status: "completed",
Role: "assistant",
Content: []schema.ORContentPart{{
Type: "output_text",
Text: "Specific message",
Annotations: []schema.ORAnnotation{},
Logprobs: []schema.ORLogProb{},
}},
},
},
}
store.Store(responseID, request, response)
item, err := store.GetItem(responseID, itemID)
Expect(err).ToNot(HaveOccurred())
Expect(item).ToNot(BeNil())
Expect(item.ID).To(Equal(itemID))
Expect(item.Type).To(Equal("message"))
})
It("should return error for non-existent item", func() {
responseID := "resp_item_test2"
request := &schema.OpenResponsesRequest{
Model: "test-model",
Input: "Test",
}
response := &schema.ORResponseResource{
ID: responseID,
Object: "response",
Output: []schema.ORItemField{
{
Type: "message",
ID: "msg_existing",
Status: "completed",
},
},
}
store.Store(responseID, request, response)
_, err := store.GetItem(responseID, "nonexistent_item")
Expect(err).To(HaveOccurred())
Expect(err.Error()).To(ContainSubstring("item not found"))
})
It("should return error for non-existent response when getting item", func() {
_, err := store.GetItem("nonexistent_response", "any_item")
Expect(err).To(HaveOccurred())
Expect(err.Error()).To(ContainSubstring("response not found"))
})
})
Describe("FindItem", func() {
It("should find an item across all stored responses", func() {
// Store first response
responseID1 := "resp_find_1"
itemID1 := "msg_find_1"
store.Store(responseID1, &schema.OpenResponsesRequest{Model: "test"}, &schema.ORResponseResource{
ID: responseID1,
Object: "response",
Output: []schema.ORItemField{
{Type: "message", ID: itemID1, Status: "completed"},
},
})
// Store second response
responseID2 := "resp_find_2"
itemID2 := "msg_find_2"
store.Store(responseID2, &schema.OpenResponsesRequest{Model: "test"}, &schema.ORResponseResource{
ID: responseID2,
Object: "response",
Output: []schema.ORItemField{
{Type: "message", ID: itemID2, Status: "completed"},
},
})
// Find item from first response
item, foundResponseID, err := store.FindItem(itemID1)
Expect(err).ToNot(HaveOccurred())
Expect(item).ToNot(BeNil())
Expect(item.ID).To(Equal(itemID1))
Expect(foundResponseID).To(Equal(responseID1))
// Find item from second response
item, foundResponseID, err = store.FindItem(itemID2)
Expect(err).ToNot(HaveOccurred())
Expect(item).ToNot(BeNil())
Expect(item.ID).To(Equal(itemID2))
Expect(foundResponseID).To(Equal(responseID2))
})
It("should return error when item not found in any response", func() {
_, _, err := store.FindItem("nonexistent_item")
Expect(err).To(HaveOccurred())
Expect(err.Error()).To(ContainSubstring("item not found in any stored response"))
})
})
Describe("Delete", func() {
It("should delete a stored response", func() {
responseID := "resp_delete_test"
request := &schema.OpenResponsesRequest{Model: "test"}
response := &schema.ORResponseResource{
ID: responseID,
Object: "response",
}
store.Store(responseID, request, response)
Expect(store.Count()).To(Equal(1))
store.Delete(responseID)
Expect(store.Count()).To(Equal(0))
_, err := store.Get(responseID)
Expect(err).To(HaveOccurred())
})
It("should handle deleting non-existent response gracefully", func() {
// Should not panic
store.Delete("nonexistent")
Expect(store.Count()).To(Equal(0))
})
})
Describe("Count", func() {
It("should return correct count of stored responses", func() {
Expect(store.Count()).To(Equal(0))
store.Store("resp_1", &schema.OpenResponsesRequest{Model: "test"}, &schema.ORResponseResource{ID: "resp_1", Object: "response"})
Expect(store.Count()).To(Equal(1))
store.Store("resp_2", &schema.OpenResponsesRequest{Model: "test"}, &schema.ORResponseResource{ID: "resp_2", Object: "response"})
Expect(store.Count()).To(Equal(2))
store.Delete("resp_1")
Expect(store.Count()).To(Equal(1))
})
})
Describe("TTL and Expiration", func() {
It("should set expiration when TTL is configured", func() {
ttlStore := NewResponseStore(100 * time.Millisecond)
responseID := "resp_ttl_test"
request := &schema.OpenResponsesRequest{Model: "test"}
response := &schema.ORResponseResource{ID: responseID, Object: "response"}
ttlStore.Store(responseID, request, response)
stored, err := ttlStore.Get(responseID)
Expect(err).ToNot(HaveOccurred())
Expect(stored.ExpiresAt).ToNot(BeNil())
Expect(stored.ExpiresAt.After(time.Now())).To(BeTrue())
})
It("should not set expiration when TTL is 0", func() {
responseID := "resp_no_ttl"
request := &schema.OpenResponsesRequest{Model: "test"}
response := &schema.ORResponseResource{ID: responseID, Object: "response"}
store.Store(responseID, request, response)
stored, err := store.Get(responseID)
Expect(err).ToNot(HaveOccurred())
Expect(stored.ExpiresAt).To(BeNil())
})
It("should clean up expired responses", func() {
ttlStore := NewResponseStore(50 * time.Millisecond)
responseID := "resp_expire_test"
request := &schema.OpenResponsesRequest{Model: "test"}
response := &schema.ORResponseResource{ID: responseID, Object: "response"}
ttlStore.Store(responseID, request, response)
Expect(ttlStore.Count()).To(Equal(1))
// Wait for expiration (longer than TTL and cleanup interval)
time.Sleep(150 * time.Millisecond)
// Cleanup should remove expired response (may have already been cleaned by goroutine)
count := ttlStore.Cleanup()
// Count might be 0 if cleanup goroutine already ran, or 1 if we're first
Expect(count).To(BeNumerically(">=", 0))
Expect(ttlStore.Count()).To(Equal(0))
_, err := ttlStore.Get(responseID)
Expect(err).To(HaveOccurred())
})
It("should return error for expired response", func() {
ttlStore := NewResponseStore(50 * time.Millisecond)
responseID := "resp_expire_error"
request := &schema.OpenResponsesRequest{Model: "test"}
response := &schema.ORResponseResource{ID: responseID, Object: "response"}
ttlStore.Store(responseID, request, response)
// Wait for expiration (but not long enough for cleanup goroutine to remove it)
time.Sleep(75 * time.Millisecond)
// Try to get before cleanup goroutine removes it
_, err := ttlStore.Get(responseID)
// Error could be "expired" or "not found" (if cleanup already ran)
Expect(err).To(HaveOccurred())
// Either error message is acceptable
errMsg := err.Error()
Expect(errMsg).To(Or(ContainSubstring("expired"), ContainSubstring("not found")))
})
})
Describe("Thread Safety", func() {
It("should handle concurrent stores and gets", func() {
// This is a basic concurrency test
done := make(chan bool, 10)
for i := 0; i < 10; i++ {
go func(id int) {
responseID := fmt.Sprintf("resp_concurrent_%d", id)
request := &schema.OpenResponsesRequest{Model: "test"}
response := &schema.ORResponseResource{
ID: responseID,
Object: "response",
Output: []schema.ORItemField{
{Type: "message", ID: fmt.Sprintf("msg_%d", id), Status: "completed"},
},
}
store.Store(responseID, request, response)
// Retrieve immediately
stored, err := store.Get(responseID)
Expect(err).ToNot(HaveOccurred())
Expect(stored).ToNot(BeNil())
done <- true
}(i)
}
// Wait for all goroutines
for i := 0; i < 10; i++ {
<-done
}
Expect(store.Count()).To(Equal(10))
})
})
Describe("GetGlobalStore", func() {
It("should return singleton instance", func() {
store1 := GetGlobalStore()
store2 := GetGlobalStore()
Expect(store1).To(Equal(store2))
})
It("should persist data across GetGlobalStore calls", func() {
globalStore := GetGlobalStore()
responseID := "resp_global_test"
request := &schema.OpenResponsesRequest{Model: "test"}
response := &schema.ORResponseResource{ID: responseID, Object: "response"}
globalStore.Store(responseID, request, response)
// Get store again
globalStore2 := GetGlobalStore()
stored, err := globalStore2.Get(responseID)
Expect(err).ToNot(HaveOccurred())
Expect(stored).ToNot(BeNil())
})
})
Describe("Background Mode Support", func() {
It("should store background response with cancel function", func() {
responseID := "resp_bg_test"
request := &schema.OpenResponsesRequest{Model: "test"}
response := &schema.ORResponseResource{
ID: responseID,
Object: "response",
Status: schema.ORStatusQueued,
}
_, cancel := context.WithCancel(context.Background())
defer cancel()
store.StoreBackground(responseID, request, response, cancel, true)
stored, err := store.Get(responseID)
Expect(err).ToNot(HaveOccurred())
Expect(stored).ToNot(BeNil())
Expect(stored.IsBackground).To(BeTrue())
Expect(stored.StreamEnabled).To(BeTrue())
Expect(stored.CancelFunc).ToNot(BeNil())
})
It("should update status of stored response", func() {
responseID := "resp_status_test"
request := &schema.OpenResponsesRequest{Model: "test"}
response := &schema.ORResponseResource{
ID: responseID,
Object: "response",
Status: schema.ORStatusQueued,
}
store.Store(responseID, request, response)
err := store.UpdateStatus(responseID, schema.ORStatusInProgress, nil)
Expect(err).ToNot(HaveOccurred())
stored, err := store.Get(responseID)
Expect(err).ToNot(HaveOccurred())
Expect(stored.Response.Status).To(Equal(schema.ORStatusInProgress))
})
It("should append and retrieve streaming events", func() {
responseID := "resp_events_test"
request := &schema.OpenResponsesRequest{Model: "test"}
response := &schema.ORResponseResource{
ID: responseID,
Object: "response",
Status: schema.ORStatusInProgress,
}
_, cancel := context.WithCancel(context.Background())
defer cancel()
store.StoreBackground(responseID, request, response, cancel, true)
// Append events
event1 := &schema.ORStreamEvent{
Type: "response.created",
SequenceNumber: 0,
}
event2 := &schema.ORStreamEvent{
Type: "response.in_progress",
SequenceNumber: 1,
}
event3 := &schema.ORStreamEvent{
Type: "response.output_text.delta",
SequenceNumber: 2,
}
err := store.AppendEvent(responseID, event1)
Expect(err).ToNot(HaveOccurred())
err = store.AppendEvent(responseID, event2)
Expect(err).ToNot(HaveOccurred())
err = store.AppendEvent(responseID, event3)
Expect(err).ToNot(HaveOccurred())
// Get all events after -1 (all events)
events, err := store.GetEventsAfter(responseID, -1)
Expect(err).ToNot(HaveOccurred())
Expect(events).To(HaveLen(3))
// Get events after sequence 1
events, err = store.GetEventsAfter(responseID, 1)
Expect(err).ToNot(HaveOccurred())
Expect(events).To(HaveLen(1))
Expect(events[0].SequenceNumber).To(Equal(2))
})
It("should cancel an in-progress response", func() {
responseID := "resp_cancel_test"
request := &schema.OpenResponsesRequest{Model: "test"}
response := &schema.ORResponseResource{
ID: responseID,
Object: "response",
Status: schema.ORStatusInProgress,
}
_, cancel := context.WithCancel(context.Background())
defer cancel()
store.StoreBackground(responseID, request, response, cancel, false)
// Cancel the response
cancelledResponse, err := store.Cancel(responseID)
Expect(err).ToNot(HaveOccurred())
Expect(cancelledResponse.Status).To(Equal(schema.ORStatusCancelled))
Expect(cancelledResponse.CompletedAt).ToNot(BeNil())
})
It("should be idempotent when cancelling already completed response", func() {
responseID := "resp_idempotent_cancel"
request := &schema.OpenResponsesRequest{Model: "test"}
completedAt := time.Now().Unix()
response := &schema.ORResponseResource{
ID: responseID,
Object: "response",
Status: schema.ORStatusCompleted,
CompletedAt: &completedAt,
}
store.Store(responseID, request, response)
// Try to cancel a completed response
cancelledResponse, err := store.Cancel(responseID)
Expect(err).ToNot(HaveOccurred())
// Status should remain completed (not changed to cancelled)
Expect(cancelledResponse.Status).To(Equal(schema.ORStatusCompleted))
})
It("should check if streaming is enabled", func() {
responseID := "resp_stream_check"
request := &schema.OpenResponsesRequest{Model: "test"}
response := &schema.ORResponseResource{
ID: responseID,
Object: "response",
Status: schema.ORStatusQueued,
}
_, cancel := context.WithCancel(context.Background())
defer cancel()
store.StoreBackground(responseID, request, response, cancel, true)
enabled, err := store.IsStreamEnabled(responseID)
Expect(err).ToNot(HaveOccurred())
Expect(enabled).To(BeTrue())
// Store another without streaming
responseID2 := "resp_no_stream"
store.StoreBackground(responseID2, request, response, cancel, false)
enabled2, err := store.IsStreamEnabled(responseID2)
Expect(err).ToNot(HaveOccurred())
Expect(enabled2).To(BeFalse())
})
It("should notify subscribers of new events", func() {
responseID := "resp_events_chan"
request := &schema.OpenResponsesRequest{Model: "test"}
response := &schema.ORResponseResource{
ID: responseID,
Object: "response",
Status: schema.ORStatusInProgress,
}
_, cancel := context.WithCancel(context.Background())
defer cancel()
store.StoreBackground(responseID, request, response, cancel, true)
eventsChan, err := store.GetEventsChan(responseID)
Expect(err).ToNot(HaveOccurred())
Expect(eventsChan).ToNot(BeNil())
// Append an event
event := &schema.ORStreamEvent{
Type: "response.output_text.delta",
SequenceNumber: 0,
}
go func() {
time.Sleep(10 * time.Millisecond)
store.AppendEvent(responseID, event)
}()
// Wait for notification
select {
case <-eventsChan:
// Event received
case <-time.After(1 * time.Second):
Fail("Timeout waiting for event notification")
}
})
})
})
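The TTL behaviour exercised in the specs above can be sketched as a mutex-guarded map with per-entry expiry. The following is a minimal, hypothetical reimplementation (the `miniStore` type and its methods are illustrative only, not LocalAI's actual `ResponseStore`):

```go
package main

import (
	"errors"
	"fmt"
	"sync"
	"time"
)

// miniStore is an illustrative TTL store; LocalAI's ResponseStore is richer.
type miniStore struct {
	mu  sync.Mutex
	ttl time.Duration
	m   map[string]entry
}

type entry struct {
	value     string
	expiresAt time.Time // zero value means "never expires"
}

func newMiniStore(ttl time.Duration) *miniStore {
	return &miniStore{ttl: ttl, m: map[string]entry{}}
}

// Store records a value, stamping an expiry only when a TTL is configured.
func (s *miniStore) Store(id, v string) {
	s.mu.Lock()
	defer s.mu.Unlock()
	e := entry{value: v}
	if s.ttl > 0 {
		e.expiresAt = time.Now().Add(s.ttl)
	}
	s.m[id] = e
}

// Get returns "not found" for unknown ids and "expired" for stale ones,
// matching the two error messages the specs accept.
func (s *miniStore) Get(id string) (string, error) {
	s.mu.Lock()
	defer s.mu.Unlock()
	e, ok := s.m[id]
	if !ok {
		return "", errors.New("response not found")
	}
	if !e.expiresAt.IsZero() && time.Now().After(e.expiresAt) {
		delete(s.m, id)
		return "", errors.New("response expired")
	}
	return e.value, nil
}

func main() {
	s := newMiniStore(50 * time.Millisecond)
	s.Store("resp_1", "hello")
	if v, err := s.Get("resp_1"); err == nil {
		fmt.Println("before TTL:", v)
	}
	time.Sleep(75 * time.Millisecond)
	_, err := s.Get("resp_1")
	fmt.Println("after TTL:", err)
}
```

Note how a zero TTL never stamps `expiresAt`, which is why the spec expects `stored.ExpiresAt` to be nil when the store is created with `NewResponseStore(0)`.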


@@ -1,13 +1,33 @@
package http_test
import (
"os"
"path/filepath"
"testing"
. "github.com/onsi/ginkgo/v2"
. "github.com/onsi/gomega"
)
var (
tmpdir string
modelDir string
)
func TestLocalAI(t *testing.T) {
RegisterFailHandler(Fail)
var err error
tmpdir, err = os.MkdirTemp("", "")
Expect(err).ToNot(HaveOccurred())
modelDir = filepath.Join(tmpdir, "models")
err = os.Mkdir(modelDir, 0750)
Expect(err).ToNot(HaveOccurred())
AfterSuite(func() {
err := os.RemoveAll(tmpdir)
Expect(err).ToNot(HaveOccurred())
})
RunSpecs(t, "LocalAI HTTP test suite")
}


@@ -484,3 +484,103 @@ func mergeOpenAIRequestAndModelConfig(config *config.ModelConfig, input *schema.
}
return fmt.Errorf("unable to validate configuration after merging")
}
func (re *RequestExtractor) SetOpenResponsesRequest(c echo.Context) error {
input, ok := c.Get(CONTEXT_LOCALS_KEY_LOCALAI_REQUEST).(*schema.OpenResponsesRequest)
if !ok || input.Model == "" {
return echo.ErrBadRequest
}
cfg, ok := c.Get(CONTEXT_LOCALS_KEY_MODEL_CONFIG).(*config.ModelConfig)
if !ok || cfg == nil {
return echo.ErrBadRequest
}
// Extract or generate the correlation ID (Open Responses uses x-request-id)
correlationID := c.Request().Header.Get("x-request-id")
if correlationID == "" {
correlationID = uuid.New().String()
}
c.Response().Header().Set("x-request-id", correlationID)
// Use the request context directly: Echo supports context cancellation
reqCtx := c.Request().Context()
c1, cancel := context.WithCancel(re.applicationConfig.Context)
// Cancel when request context is cancelled (client disconnects)
go func() {
select {
case <-reqCtx.Done():
cancel()
case <-c1.Done():
// Already cancelled
}
}()
// Add the correlation ID to the new context
ctxWithCorrelationID := context.WithValue(c1, CorrelationIDKey, correlationID)
input.Context = ctxWithCorrelationID
input.Cancel = cancel
err := mergeOpenResponsesRequestAndModelConfig(cfg, input)
if err != nil {
return err
}
if cfg.Model == "" {
xlog.Debug("replacing empty cfg.Model with input value", "input.Model", input.Model)
cfg.Model = input.Model
}
c.Set(CONTEXT_LOCALS_KEY_LOCALAI_REQUEST, input)
c.Set(CONTEXT_LOCALS_KEY_MODEL_CONFIG, cfg)
return nil
}
func mergeOpenResponsesRequestAndModelConfig(config *config.ModelConfig, input *schema.OpenResponsesRequest) error {
// Temperature
if input.Temperature != nil {
config.Temperature = input.Temperature
}
// TopP
if input.TopP != nil {
config.TopP = input.TopP
}
// MaxOutputTokens -> Maxtokens
if input.MaxOutputTokens != nil {
config.Maxtokens = input.MaxOutputTokens
}
// Converting tools to functions is handled in the endpoint handler;
// here we only validate tool_choice if present
// Handle tool_choice
if input.ToolChoice != nil {
switch tc := input.ToolChoice.(type) {
case string:
// "auto", "required", or "none"
if tc == "required" {
config.SetFunctionCallString("required")
} else if tc == "none" {
// Don't use tools - handled in endpoint
}
// "auto" is default - let model decide
case map[string]interface{}:
// Specific tool: {type:"function", name:"..."}
if tcType, ok := tc["type"].(string); ok && tcType == "function" {
if name, ok := tc["name"].(string); ok {
config.SetFunctionCallString(name)
}
}
}
}
if valid, _ := config.Validate(); valid {
return nil
}
return fmt.Errorf("unable to validate configuration after merging")
}


[File diff suppressed because it is too large]


@@ -0,0 +1,58 @@
package routes
import (
"github.com/labstack/echo/v4"
"github.com/mudler/LocalAI/core/application"
"github.com/mudler/LocalAI/core/config"
"github.com/mudler/LocalAI/core/http/endpoints/openresponses"
"github.com/mudler/LocalAI/core/http/middleware"
"github.com/mudler/LocalAI/core/schema"
)
func RegisterOpenResponsesRoutes(app *echo.Echo,
re *middleware.RequestExtractor,
application *application.Application) {
// Open Responses API endpoint
responsesHandler := openresponses.ResponsesEndpoint(
application.ModelConfigLoader(),
application.ModelLoader(),
application.TemplatesEvaluator(),
application.ApplicationConfig(),
)
responsesMiddleware := []echo.MiddlewareFunc{
middleware.TraceMiddleware(application),
re.BuildFilteredFirstAvailableDefaultModel(config.BuildUsecaseFilterFn(config.FLAG_CHAT)),
re.SetModelAndConfig(func() schema.LocalAIRequest { return new(schema.OpenResponsesRequest) }),
setOpenResponsesRequestContext(re),
}
// Main Open Responses endpoint
app.POST("/v1/responses", responsesHandler, responsesMiddleware...)
// Also support without version prefix for compatibility
app.POST("/responses", responsesHandler, responsesMiddleware...)
// GET /responses/:id - Retrieve a response (for polling background requests)
getResponseHandler := openresponses.GetResponseEndpoint()
app.GET("/v1/responses/:id", getResponseHandler, middleware.TraceMiddleware(application))
app.GET("/responses/:id", getResponseHandler, middleware.TraceMiddleware(application))
// POST /responses/:id/cancel - Cancel a background response
cancelResponseHandler := openresponses.CancelResponseEndpoint()
app.POST("/v1/responses/:id/cancel", cancelResponseHandler, middleware.TraceMiddleware(application))
app.POST("/responses/:id/cancel", cancelResponseHandler, middleware.TraceMiddleware(application))
}
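The routes above follow a create/poll/cancel lifecycle for background responses. The toy server below sketches the shape of that interaction with the standard library only; the handlers, the `resp_123` id, and the `"background"` request field are illustrative stand-ins, not LocalAI's actual implementation:

```go
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
	"net/http/httptest"
	"strings"
)

// newToyServer stands in for the routes registered above; the real
// handlers live in core/http/endpoints/openresponses.
func newToyServer() *httptest.Server {
	mux := http.NewServeMux()
	mux.HandleFunc("/v1/responses", func(w http.ResponseWriter, r *http.Request) {
		json.NewEncoder(w).Encode(map[string]string{"id": "resp_123", "status": "queued"})
	})
	mux.HandleFunc("/v1/responses/resp_123", func(w http.ResponseWriter, r *http.Request) {
		json.NewEncoder(w).Encode(map[string]string{"id": "resp_123", "status": "completed"})
	})
	return httptest.NewServer(mux)
}

// createThenPoll drives the flow a background client would use:
// POST to create, then GET by id to poll the status.
func createThenPoll(baseURL string) (created, polled string) {
	res, err := http.Post(baseURL+"/v1/responses", "application/json",
		strings.NewReader(`{"model":"test","input":"hi","background":true}`))
	if err != nil {
		return "", ""
	}
	var c map[string]string
	json.NewDecoder(res.Body).Decode(&c)

	res2, err := http.Get(baseURL + "/v1/responses/" + c["id"])
	if err != nil {
		return c["status"], ""
	}
	var p map[string]string
	json.NewDecoder(res2.Body).Decode(&p)
	return c["status"], p["status"]
}

func main() {
	srv := newToyServer()
	defer srv.Close()
	created, polled := createThenPoll(srv.URL)
	fmt.Println("created:", created)
	fmt.Println("polled:", polled)
}
```

Cancellation follows the same pattern with a `POST` to `/v1/responses/:id/cancel`, omitted here for brevity.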
// setOpenResponsesRequestContext sets up the context and cancel function for Open Responses requests
func setOpenResponsesRequestContext(re *middleware.RequestExtractor) echo.MiddlewareFunc {
return func(next echo.HandlerFunc) echo.HandlerFunc {
return func(c echo.Context) error {
if err := re.SetOpenResponsesRequest(c); err != nil {
return err
}
return next(c)
}
}
}


@@ -219,7 +219,7 @@ func RegisterUIRoutes(app *echo.Echo,
return c.Render(200, "views/chat", summary)
})
app.GET("/text2image/:model", func(c echo.Context) error {
app.GET("/image/:model", func(c echo.Context) error {
modelConfigs := cl.GetAllModelsConfigs()
modelsWithoutConfig, _ := services.ListModels(cl, ml, config.NoFilterFn, services.LOOSE_ONLY)
@@ -233,10 +233,10 @@ func RegisterUIRoutes(app *echo.Echo,
}
// Render index
return c.Render(200, "views/text2image", summary)
return c.Render(200, "views/image", summary)
})
app.GET("/text2image", func(c echo.Context) error {
app.GET("/image", func(c echo.Context) error {
modelConfigs := cl.GetAllModelsConfigs()
modelsWithoutConfig, _ := services.ListModels(cl, ml, config.NoFilterFn, services.LOOSE_ONLY)
@@ -266,7 +266,7 @@ func RegisterUIRoutes(app *echo.Echo,
}
// Render index
return c.Render(200, "views/text2image", summary)
return c.Render(200, "views/image", summary)
})
app.GET("/tts/:model", func(c echo.Context) error {
@@ -318,6 +318,56 @@ func RegisterUIRoutes(app *echo.Echo,
return c.Render(200, "views/tts", summary)
})
app.GET("/video/:model", func(c echo.Context) error {
modelConfigs := cl.GetAllModelsConfigs()
modelsWithoutConfig, _ := services.ListModels(cl, ml, config.NoFilterFn, services.LOOSE_ONLY)
summary := map[string]interface{}{
"Title": "LocalAI - Generate videos with " + c.Param("model"),
"BaseURL": middleware.BaseURL(c),
"ModelsConfig": modelConfigs,
"ModelsWithoutConfig": modelsWithoutConfig,
"Model": c.Param("model"),
"Version": internal.PrintableVersion(),
}
// Render index
return c.Render(200, "views/video", summary)
})
app.GET("/video", func(c echo.Context) error {
modelConfigs := cl.GetAllModelsConfigs()
modelsWithoutConfig, _ := services.ListModels(cl, ml, config.NoFilterFn, services.LOOSE_ONLY)
if len(modelConfigs)+len(modelsWithoutConfig) == 0 {
// If no model is available redirect to the index which suggests how to install models
return c.Redirect(302, middleware.BaseURL(c))
}
modelThatCanBeUsed := ""
title := "LocalAI - Generate videos"
for _, b := range modelConfigs {
if b.HasUsecases(config.FLAG_VIDEO) {
modelThatCanBeUsed = b.Name
title = "LocalAI - Generate videos with " + modelThatCanBeUsed
break
}
}
summary := map[string]interface{}{
"Title": title,
"BaseURL": middleware.BaseURL(c),
"ModelsConfig": modelConfigs,
"ModelsWithoutConfig": modelsWithoutConfig,
"Model": modelThatCanBeUsed,
"Version": internal.PrintableVersion(),
}
// Render index
return c.Render(200, "views/video", summary)
})
// Traces UI
app.GET("/traces", func(c echo.Context) error {
summary := map[string]interface{}{

core/http/static/video.js (new file, 300 lines)

@@ -0,0 +1,300 @@
// Helper function to convert file to base64
function fileToBase64(file) {
return new Promise((resolve, reject) => {
const reader = new FileReader();
reader.onload = () => {
// Remove data:image/...;base64, prefix if present
const base64 = reader.result.split(',')[1] || reader.result;
resolve(base64);
};
reader.onerror = reject;
reader.readAsDataURL(file);
});
}
function genVideo(event) {
event.preventDefault();
promptVideo();
}
async function promptVideo() {
const loader = document.getElementById("loader");
const input = document.getElementById("input");
const generateBtn = document.getElementById("generate-btn");
const resultDiv = document.getElementById("result");
const resultPlaceholder = document.getElementById("result-placeholder");
// Show loader and disable form
loader.classList.remove("hidden");
if (resultPlaceholder) {
resultPlaceholder.style.display = "none";
}
input.disabled = true;
generateBtn.disabled = true;
// Store the prompt for later restoration
const prompt = input.value.trim();
if (!prompt) {
alert("Please enter a prompt");
loader.classList.add("hidden");
if (resultPlaceholder) {
resultPlaceholder.style.display = "flex";
}
input.disabled = false;
generateBtn.disabled = false;
return;
}
// Collect all form values
const model = document.getElementById("video-model").value;
const size = document.getElementById("video-size").value;
const negativePrompt = document.getElementById("negative-prompt").value.trim();
// Parse size into width and height
const sizeParts = size.split("x");
let width = 512;
let height = 512;
if (sizeParts.length === 2) {
width = parseInt(sizeParts[0]) || 512;
height = parseInt(sizeParts[1]) || 512;
}
// Video-specific parameters
const secondsInput = document.getElementById("video-seconds").value.trim();
const seconds = secondsInput ? secondsInput : undefined;
const fpsInput = document.getElementById("video-fps").value.trim();
const fps = fpsInput ? parseInt(fpsInput) : 16;
const framesInput = document.getElementById("video-frames").value.trim();
const numFrames = framesInput ? parseInt(framesInput) : undefined;
// Advanced parameters
const stepInput = document.getElementById("video-steps").value.trim();
const step = stepInput ? parseInt(stepInput) : undefined;
const seedInput = document.getElementById("video-seed").value.trim();
const seed = seedInput ? parseInt(seedInput) : undefined;
const cfgScaleInput = document.getElementById("video-cfg-scale").value.trim();
const cfgScale = cfgScaleInput ? parseFloat(cfgScaleInput) : undefined;
// Prepare request body
const requestBody = {
model: model,
prompt: prompt,
width: width,
height: height,
fps: fps,
};
if (negativePrompt) {
requestBody.negative_prompt = negativePrompt;
}
if (seconds !== undefined) {
requestBody.seconds = seconds;
}
if (numFrames !== undefined) {
requestBody.num_frames = numFrames;
}
if (step !== undefined) {
requestBody.step = step;
}
if (seed !== undefined) {
requestBody.seed = seed;
}
if (cfgScale !== undefined) {
requestBody.cfg_scale = cfgScale;
}
// Handle file inputs
try {
// Start image (for img2video)
const startImageInput = document.getElementById("start-image");
if (startImageInput.files.length > 0) {
const base64 = await fileToBase64(startImageInput.files[0]);
requestBody.start_image = base64;
}
// End image
const endImageInput = document.getElementById("end-image");
if (endImageInput.files.length > 0) {
const base64 = await fileToBase64(endImageInput.files[0]);
requestBody.end_image = base64;
}
} catch (error) {
console.error("Error processing image files:", error);
resultDiv.innerHTML = '<p class="text-xs text-red-500 p-2">Error processing image files: ' + error.message + '</p>';
loader.classList.add("hidden");
if (resultPlaceholder) {
resultPlaceholder.style.display = "none";
}
input.disabled = false;
generateBtn.disabled = false;
return;
}
// Make API request
try {
const response = await fetch("v1/videos/generations", {
method: "POST",
headers: {
"Content-Type": "application/json",
},
body: JSON.stringify(requestBody),
});
const json = await response.json();
if (json.error) {
// Display error
resultDiv.innerHTML = '<p class="text-xs text-red-500 p-2">Error: ' + json.error.message + '</p>';
loader.classList.add("hidden");
if (resultPlaceholder) {
resultPlaceholder.style.display = "none";
}
input.disabled = false;
generateBtn.disabled = false;
return;
}
// Clear result div and hide placeholder
resultDiv.innerHTML = '';
if (resultPlaceholder) {
resultPlaceholder.style.display = "none";
}
// Display generated video
if (json.data && json.data.length > 0) {
json.data.forEach((item, index) => {
const videoContainer = document.createElement("div");
videoContainer.className = "flex flex-col";
// Create video element
const video = document.createElement("video");
video.controls = true;
video.className = "w-full h-auto rounded-lg";
video.preload = "metadata";
if (item.url) {
video.src = item.url;
} else if (item.b64_json) {
video.src = "data:video/mp4;base64," + item.b64_json;
} else {
return; // Skip invalid items
}
videoContainer.appendChild(video);
// Create caption container
const captionDiv = document.createElement("div");
captionDiv.className = "mt-2 p-2 bg-[var(--color-bg-secondary)] rounded-lg text-xs";
// Prompt caption
const promptCaption = document.createElement("p");
promptCaption.className = "text-[var(--color-text-primary)] mb-1.5 break-words";
promptCaption.innerHTML = '<strong>Prompt:</strong> ' + escapeHtml(prompt);
captionDiv.appendChild(promptCaption);
// Negative prompt if provided
if (negativePrompt) {
const negativeCaption = document.createElement("p");
negativeCaption.className = "text-[var(--color-text-secondary)] mb-1.5 break-words";
negativeCaption.innerHTML = '<strong>Negative Prompt:</strong> ' + escapeHtml(negativePrompt);
captionDiv.appendChild(negativeCaption);
}
// Generation details
const detailsDiv = document.createElement("div");
detailsDiv.className = "flex flex-wrap gap-3 text-[10px] text-[var(--color-text-secondary)] mt-1.5";
detailsDiv.innerHTML = `
<span><strong>Size:</strong> ${width}x${height}</span>
${fps ? `<span><strong>FPS:</strong> ${fps}</span>` : ''}
${numFrames !== undefined ? `<span><strong>Frames:</strong> ${numFrames}</span>` : ''}
${seconds !== undefined ? `<span><strong>Duration:</strong> ${seconds}s</span>` : ''}
${step !== undefined ? `<span><strong>Steps:</strong> ${step}</span>` : ''}
${seed !== undefined ? `<span><strong>Seed:</strong> ${seed}</span>` : ''}
${cfgScale !== undefined ? `<span><strong>CFG Scale:</strong> ${cfgScale}</span>` : ''}
`;
captionDiv.appendChild(detailsDiv);
// Copy prompt button
const copyBtn = document.createElement("button");
copyBtn.className = "mt-1.5 px-2 py-0.5 text-[10px] bg-[var(--color-primary)] text-white rounded hover:opacity-80";
copyBtn.innerHTML = '<i class="fas fa-copy mr-1"></i>Copy Prompt';
copyBtn.onclick = () => {
navigator.clipboard.writeText(prompt).then(() => {
copyBtn.innerHTML = '<i class="fas fa-check mr-1"></i>Copied!';
setTimeout(() => {
copyBtn.innerHTML = '<i class="fas fa-copy mr-1"></i>Copy Prompt';
}, 2000);
});
};
captionDiv.appendChild(copyBtn);
videoContainer.appendChild(captionDiv);
resultDiv.appendChild(videoContainer);
});
// Hide placeholder when videos are displayed
if (resultPlaceholder) {
resultPlaceholder.style.display = "none";
}
} else {
resultDiv.innerHTML = '<p class="text-xs text-[var(--color-text-secondary)] p-2">No videos were generated.</p>';
if (resultPlaceholder) {
resultPlaceholder.style.display = "none";
}
}
} catch (error) {
console.error("Error generating video:", error);
resultDiv.innerHTML = '<p class="text-xs text-red-500 p-2">Error: ' + error.message + '</p>';
if (resultPlaceholder) {
resultPlaceholder.style.display = "none";
}
} finally {
// Hide loader and re-enable form
loader.classList.add("hidden");
input.disabled = false;
generateBtn.disabled = false;
input.focus();
}
}
// Helper function to escape HTML
function escapeHtml(text) {
const div = document.createElement("div");
div.textContent = text;
return div.innerHTML;
}
// Initialize
document.addEventListener("DOMContentLoaded", function() {
const input = document.getElementById("input");
const form = document.getElementById("genvideo");
if (input) {
input.focus();
}
if (form) {
form.addEventListener("submit", genVideo);
}
// Handle Enter key press in the prompt input (but allow Shift+Enter for new lines)
if (input) {
input.addEventListener("keydown", function(event) {
if (event.key === "Enter" && !event.shiftKey) {
event.preventDefault();
genVideo(event);
}
});
}
// Hide loader initially
const loader = document.getElementById("loader");
if (loader) {
loader.classList.add("hidden");
}
});


@@ -28,19 +28,19 @@
{{ $cfg := . }}
{{ range .KnownUsecaseStrings }}
{{ if eq . "FLAG_IMAGE" }}
<option value="text2image/{{$cfg.Name}}" {{ if eq $cfg.Name $model }} selected {{end}} class="bg-[var(--color-bg-primary)] text-[var(--color-text-primary)]">{{$cfg.Name}}</option>
<option value="image/{{$cfg.Name}}" {{ if eq $cfg.Name $model }} selected {{end}} class="bg-[var(--color-bg-primary)] text-[var(--color-text-primary)]">{{$cfg.Name}}</option>
{{ end }}
{{ end }}
{{ end }}
{{ range .ModelsWithoutConfig }}
<option value="text2image/{{.}}" {{ if eq . $model }} selected {{ end }} class="bg-[var(--color-bg-primary)] text-[var(--color-text-primary)]">{{.}}</option>
<option value="image/{{.}}" {{ if eq . $model }} selected {{ end }} class="bg-[var(--color-bg-primary)] text-[var(--color-text-primary)]">{{.}}</option>
{{end}}
</select>
</div>
<div class="relative">
<input id="image-model" type="hidden" value="{{.Model}}">
<form id="genimage" action="text2image/{{.Model}}" method="get">
<form id="genimage" action="image/{{.Model}}" method="get">
<!-- Basic Settings -->
<div class="space-y-2">
<!-- Prompt -->
@@ -326,4 +326,4 @@
</script>
</body>
</html>
</html>


@@ -315,7 +315,7 @@
</a>
{{ end }}
{{ if eq . "FLAG_IMAGE" }}
<a href="text2image/{{$backendCfg.Name}}" class="inline-flex items-center px-1.5 py-0.5 rounded text-[10px] font-medium bg-[var(--color-success)]/10 text-green-300 hover:bg-[var(--color-success)]/20 transition-colors" title="Image">
<a href="image/{{$backendCfg.Name}}" class="inline-flex items-center px-1.5 py-0.5 rounded text-[10px] font-medium bg-[var(--color-success)]/10 text-green-300 hover:bg-[var(--color-success)]/20 transition-colors" title="Image">
<i class="fas fa-image text-[8px] mr-1"></i>Image
</a>
{{ end }}


@@ -25,9 +25,12 @@
<a href="chat/" class="text-[var(--color-text-secondary)] hover:text-[var(--color-text-primary)] px-2 py-2 rounded-lg transition duration-300 ease-in-out hover:bg-[var(--color-bg-secondary)] flex items-center group text-sm">
<i class="fa-solid fa-comments text-[var(--color-primary)] mr-1.5 text-sm group-hover:scale-110 transition-transform"></i>Chat
</a>
<a href="text2image/" class="text-[var(--color-text-secondary)] hover:text-[var(--color-text-primary)] px-2 py-2 rounded-lg transition duration-300 ease-in-out hover:bg-[var(--color-bg-secondary)] flex items-center group text-sm">
<a href="image/" class="text-[var(--color-text-secondary)] hover:text-[var(--color-text-primary)] px-2 py-2 rounded-lg transition duration-300 ease-in-out hover:bg-[var(--color-bg-secondary)] flex items-center group text-sm">
<i class="fas fa-image text-[var(--color-primary)] mr-1.5 text-sm group-hover:scale-110 transition-transform"></i>Images
</a>
<a href="video/" class="text-[var(--color-text-secondary)] hover:text-[var(--color-text-primary)] px-2 py-2 rounded-lg transition duration-300 ease-in-out hover:bg-[var(--color-bg-secondary)] flex items-center group text-sm">
<i class="fas fa-video text-[var(--color-primary)] mr-1.5 text-sm group-hover:scale-110 transition-transform"></i>Video
</a>
<a href="tts/" class="text-[var(--color-text-secondary)] hover:text-[var(--color-text-primary)] px-2 py-2 rounded-lg transition duration-300 ease-in-out hover:bg-[var(--color-bg-secondary)] flex items-center group text-sm">
<i class="fa-solid fa-music text-[var(--color-primary)] mr-1.5 text-sm group-hover:scale-110 transition-transform"></i>TTS
</a>
@@ -85,9 +88,12 @@
<a href="chat/" class="block text-[var(--color-text-secondary)] hover:text-[var(--color-text-primary)] hover:bg-[var(--color-bg-secondary)] px-3 py-2 rounded-lg transition duration-300 ease-in-out flex items-center text-sm">
<i class="fa-solid fa-comments text-[var(--color-primary)] mr-3 w-5 text-center text-sm"></i>Chat
</a>
<a href="text2image/" class="block text-[var(--color-text-secondary)] hover:text-[var(--color-text-primary)] hover:bg-[var(--color-bg-secondary)] px-3 py-2 rounded-lg transition duration-300 ease-in-out flex items-center text-sm">
<a href="image/" class="block text-[var(--color-text-secondary)] hover:text-[var(--color-text-primary)] hover:bg-[var(--color-bg-secondary)] px-3 py-2 rounded-lg transition duration-300 ease-in-out flex items-center text-sm">
<i class="fas fa-image text-[var(--color-primary)] mr-3 w-5 text-center text-sm"></i>Images
</a>
<a href="video/" class="block text-[var(--color-text-secondary)] hover:text-[var(--color-text-primary)] hover:bg-[var(--color-bg-secondary)] px-3 py-2 rounded-lg transition duration-300 ease-in-out flex items-center text-sm">
<i class="fas fa-video text-[var(--color-primary)] mr-3 w-5 text-center text-sm"></i>Video
</a>
<a href="tts/" class="block text-[var(--color-text-secondary)] hover:text-[var(--color-text-primary)] hover:bg-[var(--color-bg-secondary)] px-3 py-2 rounded-lg transition duration-300 ease-in-out flex items-center text-sm">
<i class="fa-solid fa-music text-[var(--color-primary)] mr-3 w-5 text-center text-sm"></i>TTS
</a>


@@ -485,6 +485,28 @@
</div>
</div>
<!-- Open Responses Settings Section -->
<div class="bg-[var(--color-bg-secondary)] border border-[var(--color-accent)]/20 rounded-lg p-6">
<h2 class="text-xl font-semibold text-[var(--color-text-primary)] mb-4 flex items-center">
<i class="fas fa-database mr-2 text-[var(--color-accent)] text-sm"></i>
Open Responses Settings
</h2>
<p class="text-xs text-[var(--color-text-secondary)] mb-4">
Configure Open Responses API response storage
</p>
<div class="space-y-4">
<!-- Store TTL -->
<div>
<label class="block text-sm font-medium text-[var(--color-text-primary)] mb-2">Response Store TTL</label>
<p class="text-xs text-[var(--color-text-secondary)] mb-2">Time-to-live for stored responses (e.g., 1h, 30m, 0 = no expiration)</p>
<input type="text" x-model="settings.open_responses_store_ttl"
placeholder="0"
class="w-full px-3 py-2 bg-[var(--color-bg-primary)] border border-[var(--color-accent)]/20 rounded text-sm text-[var(--color-text-primary)] focus:outline-none focus:ring-2 focus:ring-[var(--color-accent)]/50">
</div>
</div>
</div>
<!-- API Keys Settings Section -->
<div class="bg-[var(--color-bg-secondary)] border border-[var(--color-error-light)] rounded-lg p-6">
<h2 class="text-xl font-semibold text-[var(--color-text-primary)] mb-4 flex items-center">
@@ -633,7 +655,8 @@ function settingsDashboard() {
galleries_json: '[]',
backend_galleries_json: '[]',
api_keys_text: '',
agent_job_retention_days: 30,
open_responses_store_ttl: '0'
},
sourceInfo: '',
saving: false,
@@ -680,7 +703,8 @@ function settingsDashboard() {
galleries_json: JSON.stringify(data.galleries || [], null, 2),
backend_galleries_json: JSON.stringify(data.backend_galleries || [], null, 2),
api_keys_text: (data.api_keys || []).join('\n'),
agent_job_retention_days: data.agent_job_retention_days || 30,
open_responses_store_ttl: data.open_responses_store_ttl || '0'
};
this.sourceInfo = data.source || 'default';
} else {
@@ -838,6 +862,9 @@ function settingsDashboard() {
if (this.settings.agent_job_retention_days !== undefined) {
payload.agent_job_retention_days = parseInt(this.settings.agent_job_retention_days) || 30;
}
if (this.settings.open_responses_store_ttl !== undefined) {
payload.open_responses_store_ttl = this.settings.open_responses_store_ttl;
}
const response = await fetch('/api/settings', {
method: 'POST',

core/http/views/video.html Normal file

@@ -0,0 +1,315 @@
<!DOCTYPE html>
<html lang="en">
{{template "views/partials/head" .}}
<script defer src="static/video.js"></script>
<body class="bg-[var(--color-bg-primary)] text-[var(--color-text-primary)] flex flex-col h-screen">
<div class="flex flex-col flex-1 overflow-hidden">
{{template "views/partials/navbar" .}}
<div class="flex flex-1 overflow-hidden">
<!-- Two Column Layout: Settings on Left, Preview on Right -->
<div class="flex flex-col lg:flex-row flex-1 gap-4 p-4 overflow-hidden">
<!-- Left Column: Generation Settings -->
<div class="flex-shrink-0 lg:w-1/4 flex flex-col min-h-0">
<div class="card p-3 space-y-3 overflow-y-auto flex-1">
<!-- Model Selection - Compact -->
<div class="space-y-1.5">
<div class="flex items-center justify-between gap-2">
<label class="text-xs font-medium text-[var(--color-text-secondary)] uppercase tracking-wide flex-shrink-0">Model</label>
</div>
<select x-data="{ link : '' }" x-model="link" x-init="$watch('link', value => window.location = link)"
id="model-select"
class="input w-full p-1.5 text-xs"
>
<option value="" disabled class="text-[var(--color-text-secondary)]">Select a model</option>
{{ $model:=.Model}}
{{ range .ModelsConfig }}
{{ $cfg := . }}
{{ range .KnownUsecaseStrings }}
{{ if eq . "FLAG_VIDEO" }}
<option value="video/{{$cfg.Name}}" {{ if eq $cfg.Name $model }} selected {{end}} class="bg-[var(--color-bg-primary)] text-[var(--color-text-primary)]">{{$cfg.Name}}</option>
{{ end }}
{{ end }}
{{ end }}
{{ range .ModelsWithoutConfig }}
<option value="video/{{.}}" {{ if eq . $model }} selected {{ end }} class="bg-[var(--color-bg-primary)] text-[var(--color-text-primary)]">{{.}}</option>
{{end}}
</select>
</div>
<div class="relative">
<input id="video-model" type="hidden" value="{{.Model}}">
<form id="genvideo" action="video/{{.Model}}" method="get">
<!-- Basic Settings -->
<div class="space-y-2">
<!-- Prompt -->
<div class="space-y-1">
<label for="input" class="block text-xs font-medium text-[var(--color-text-secondary)] uppercase tracking-wide">
<i class="fas fa-magic mr-1.5 text-[var(--color-primary)]"></i>Prompt
</label>
<textarea
id="input"
name="input"
placeholder="Describe the video you want to generate..."
autocomplete="off"
rows="3"
class="input w-full p-1.5 text-xs resize-y"
required
></textarea>
</div>
<!-- Negative Prompt -->
<div class="space-y-1">
<label for="negative-prompt" class="block text-xs font-medium text-[var(--color-text-secondary)] uppercase tracking-wide">
<i class="fas fa-ban mr-1.5 text-[var(--color-primary)]"></i>Negative Prompt
</label>
<textarea
id="negative-prompt"
name="negative-prompt"
placeholder="Things to avoid in the video..."
rows="2"
class="input w-full p-1.5 text-xs resize-y"
></textarea>
</div>
<!-- Size Selection with Presets -->
<div class="space-y-1">
<label for="video-size" class="block text-xs font-medium text-[var(--color-text-secondary)] uppercase tracking-wide">
<i class="fas fa-expand-arrows-alt mr-1.5 text-[var(--color-primary)]"></i>Video Size
</label>
<div class="flex flex-wrap gap-1.5 mb-1.5">
<button type="button" class="size-preset px-2 py-0.5 text-[10px] rounded border border-[var(--color-border)] hover:bg-[var(--color-bg-secondary)]" data-size="256x256">256×256</button>
<button type="button" class="size-preset px-2 py-0.5 text-[10px] rounded border border-[var(--color-border)] hover:bg-[var(--color-bg-secondary)]" data-size="512x512">512×512</button>
<button type="button" class="size-preset px-2 py-0.5 text-[10px] rounded border border-[var(--color-border)] hover:bg-[var(--color-bg-secondary)]" data-size="768x768">768×768</button>
<button type="button" class="size-preset px-2 py-0.5 text-[10px] rounded border border-[var(--color-border)] hover:bg-[var(--color-bg-secondary)]" data-size="1024x1024">1024×1024</button>
</div>
<input
type="text"
id="video-size"
value="512x512"
placeholder="e.g., 256x256, 512x512, 1024x1024"
class="input p-1.5 text-xs w-full"
/>
</div>
<!-- Video Duration / FPS / Frames -->
<div class="space-y-1">
<label for="video-seconds" class="block text-xs font-medium text-[var(--color-text-secondary)] uppercase tracking-wide">
<i class="fas fa-clock mr-1.5 text-[var(--color-primary)]"></i>Duration (seconds)
</label>
<input
type="number"
id="video-seconds"
name="seconds"
min="1"
max="60"
placeholder="Leave empty for default"
class="input p-1.5 text-xs w-full"
/>
</div>
<div class="space-y-1">
<label for="video-fps" class="block text-xs font-medium text-[var(--color-text-secondary)] uppercase tracking-wide">
<i class="fas fa-film mr-1.5 text-[var(--color-primary)]"></i>FPS
</label>
<input
type="number"
id="video-fps"
name="fps"
min="1"
max="60"
value="16"
placeholder="Frames per second"
class="input p-1.5 text-xs w-full"
/>
</div>
<div class="space-y-1">
<label for="video-frames" class="block text-xs font-medium text-[var(--color-text-secondary)] uppercase tracking-wide">
<i class="fas fa-images mr-1.5 text-[var(--color-primary)]"></i>Number of Frames
</label>
<input
type="number"
id="video-frames"
name="num_frames"
min="1"
max="500"
placeholder="Leave empty for default"
class="input p-1.5 text-xs w-full"
/>
</div>
</div>
<!-- Advanced Settings (Collapsible) -->
<div class="space-y-2">
<button type="button" id="advanced-toggle" class="w-full flex items-center justify-between px-2 py-1.5 text-xs rounded text-[var(--color-text-secondary)] hover:text-[var(--color-text-primary)] hover:bg-[var(--color-bg-secondary)] transition-colors">
<span><i class="fa-solid fa-sliders mr-1.5 text-[var(--color-primary)]"></i> Advanced Settings</span>
<i class="fas fa-chevron-down text-[10px]" id="advanced-chevron"></i>
</button>
<div id="advanced-settings" class="hidden p-2 bg-[var(--color-bg-secondary)] border border-[var(--color-primary-border)]/20 rounded pl-4 border-l-2 border-[var(--color-bg-secondary)] space-y-2">
<!-- Steps -->
<div class="space-y-1">
<label for="video-steps" class="block text-xs text-[var(--color-text-secondary)]">
<i class="fas fa-step-forward mr-1.5 text-[var(--color-primary)]"></i>Steps
</label>
<input
type="number"
id="video-steps"
name="step"
min="1"
max="100"
placeholder="Leave empty for default"
class="input p-1.5 text-xs w-full"
/>
</div>
<!-- Seed -->
<div class="space-y-1">
<label for="video-seed" class="block text-xs text-[var(--color-text-secondary)]">
<i class="fas fa-seedling mr-1.5 text-[var(--color-primary)]"></i>Seed
</label>
<input
type="number"
id="video-seed"
name="seed"
min="0"
placeholder="Leave empty for random"
class="input p-1.5 text-xs w-full"
/>
</div>
<!-- CFG Scale -->
<div class="space-y-1">
<label for="video-cfg-scale" class="block text-xs text-[var(--color-text-secondary)]">
<i class="fas fa-sliders-h mr-1.5 text-[var(--color-primary)]"></i>CFG Scale
</label>
<input
type="number"
id="video-cfg-scale"
name="cfg_scale"
min="0"
max="20"
step="0.1"
placeholder="Leave empty for default"
class="input p-1.5 text-xs w-full"
/>
</div>
</div>
</div>
<!-- Image Inputs (Collapsible) -->
<div class="space-y-2">
<button type="button" id="image-inputs-toggle" class="w-full flex items-center justify-between px-2 py-1.5 text-xs rounded text-[var(--color-text-secondary)] hover:text-[var(--color-text-primary)] hover:bg-[var(--color-bg-secondary)] transition-colors">
<span><i class="fa-solid fa-image mr-1.5 text-[var(--color-primary)]"></i> Image Inputs</span>
<i class="fas fa-chevron-down text-[10px]" id="image-inputs-chevron"></i>
</button>
<div id="image-inputs-settings" class="hidden p-2 bg-[var(--color-bg-secondary)] border border-[var(--color-primary-border)]/20 rounded pl-4 border-l-2 border-[var(--color-bg-secondary)] space-y-2">
<!-- Start Image (img2video) -->
<div class="space-y-1">
<label for="start-image" class="block text-xs text-[var(--color-text-secondary)]">
<i class="fas fa-play-circle mr-1.5 text-[var(--color-primary)]"></i>Start Image (img2video)
</label>
<input
type="file"
id="start-image"
name="start_image"
accept="image/*"
class="input p-1.5 text-xs w-full"
/>
</div>
<!-- End Image -->
<div class="space-y-1">
<label for="end-image" class="block text-xs text-[var(--color-text-secondary)]">
<i class="fas fa-stop-circle mr-1.5 text-[var(--color-primary)]"></i>End Image
</label>
<input
type="file"
id="end-image"
name="end_image"
accept="image/*"
class="input p-1.5 text-xs w-full"
/>
</div>
</div>
</div>
<!-- Submit Button -->
<div>
<button
type="submit"
id="generate-btn"
class="w-full px-2 py-1.5 text-xs rounded text-[var(--color-bg-primary)] bg-[var(--color-primary)] hover:bg-[var(--color-primary)]/90 transition-colors font-medium"
>
<i class="fas fa-video mr-1.5"></i>Generate Video
</button>
</div>
</form>
</div>
</div>
</div>
<!-- Right Column: Video Preview -->
<div class="flex-grow lg:w-3/4 flex flex-col min-h-0">
<div class="relative flex-1 min-h-0 overflow-y-auto">
<!-- Loading Animation -->
<div id="loader" class="hidden absolute inset-0 flex items-center justify-center bg-[var(--color-bg-primary)]/80 rounded-xl z-10">
<div class="text-center">
<svg class="animate-spin h-10 w-10 text-[var(--color-primary)] mx-auto mb-3" xmlns="http://www.w3.org/2000/svg" fill="none" viewBox="0 0 24 24">
<circle class="opacity-25" cx="12" cy="12" r="10" stroke="currentColor" stroke-width="4"></circle>
<path class="opacity-75" fill="currentColor" d="M4 12a8 8 0 018-8V0C5.373 0 0 5.373 0 12h4zm2 5.291A7.962 7.962 0 014 12H0c0 3.042 1.135 5.824 3 7.938l3-2.647z"></path>
</svg>
<p class="text-xs text-[var(--color-text-secondary)]">Generating video...</p>
</div>
</div>
<!-- Placeholder when no videos -->
<div id="result-placeholder" class="min-h-[400px] flex items-center justify-center flex-shrink-0">
<p class="text-xs text-[var(--color-text-secondary)] italic text-center">Your generated videos will appear here</p>
</div>
<!-- Results container -->
<div id="result" class="grid grid-cols-1 gap-4 pb-4"></div>
</div>
</div>
</div>
</div>
</div>
<script>
// Collapsible sections
document.getElementById('advanced-toggle').addEventListener('click', function() {
const settings = document.getElementById('advanced-settings');
const chevron = document.getElementById('advanced-chevron');
settings.classList.toggle('hidden');
chevron.classList.toggle('fa-chevron-down');
chevron.classList.toggle('fa-chevron-up');
});
document.getElementById('image-inputs-toggle').addEventListener('click', function() {
const settings = document.getElementById('image-inputs-settings');
const chevron = document.getElementById('image-inputs-chevron');
settings.classList.toggle('hidden');
chevron.classList.toggle('fa-chevron-down');
chevron.classList.toggle('fa-chevron-up');
});
// Size preset buttons
document.querySelectorAll('.size-preset').forEach(button => {
button.addEventListener('click', function() {
const size = this.getAttribute('data-size');
document.getElementById('video-size').value = size;
// Update active state
document.querySelectorAll('.size-preset').forEach(btn => {
btn.classList.remove('bg-[var(--color-primary)]', 'text-white');
});
this.classList.add('bg-[var(--color-primary)]', 'text-white');
});
});
// Set initial active size preset
document.querySelector('.size-preset[data-size="512x512"]').classList.add('bg-[var(--color-primary)]', 'text-white');
</script>
</body>
</html>


@@ -0,0 +1,306 @@
package schema
import (
"context"
)
// Open Responses status constants
const (
ORStatusQueued = "queued"
ORStatusInProgress = "in_progress"
ORStatusCompleted = "completed"
ORStatusFailed = "failed"
ORStatusIncomplete = "incomplete"
ORStatusCancelled = "cancelled"
)
// OpenResponsesRequest represents a request to the Open Responses API
// https://www.openresponses.org/specification
type OpenResponsesRequest struct {
Model string `json:"model"`
Input interface{} `json:"input"` // string or []ORItemParam
Tools []ORFunctionTool `json:"tools,omitempty"`
ToolChoice interface{} `json:"tool_choice,omitempty"` // "auto"|"required"|"none"|{type:"function",name:"..."}
Stream bool `json:"stream,omitempty"`
MaxOutputTokens *int `json:"max_output_tokens,omitempty"`
Temperature *float64 `json:"temperature,omitempty"`
TopP *float64 `json:"top_p,omitempty"`
Truncation string `json:"truncation,omitempty"` // "auto"|"disabled"
Instructions string `json:"instructions,omitempty"`
Reasoning *ORReasoningParam `json:"reasoning,omitempty"`
Metadata map[string]string `json:"metadata,omitempty"`
PreviousResponseID string `json:"previous_response_id,omitempty"`
// Additional parameters from spec
TextFormat interface{} `json:"text_format,omitempty"` // TextResponseFormat or JsonSchemaResponseFormatParam
ServiceTier string `json:"service_tier,omitempty"` // "auto"|"default"|priority hint
AllowedTools []string `json:"allowed_tools,omitempty"` // Restrict which tools can be invoked
Store *bool `json:"store,omitempty"` // Whether to store the response
Include []string `json:"include,omitempty"` // What to include in response
ParallelToolCalls *bool `json:"parallel_tool_calls,omitempty"` // Allow parallel tool calls
PresencePenalty *float64 `json:"presence_penalty,omitempty"` // Presence penalty (-2.0 to 2.0)
FrequencyPenalty *float64 `json:"frequency_penalty,omitempty"` // Frequency penalty (-2.0 to 2.0)
TopLogprobs *int `json:"top_logprobs,omitempty"` // Number of top logprobs to return
Background *bool `json:"background,omitempty"` // Run request in background
MaxToolCalls *int `json:"max_tool_calls,omitempty"` // Maximum number of tool calls
// OpenAI-compatible extensions (not in Open Responses spec)
LogitBias map[string]float64 `json:"logit_bias,omitempty"` // Map of token IDs to bias values (-100 to 100)
// Internal fields (like OpenAIRequest)
Context context.Context `json:"-"`
Cancel context.CancelFunc `json:"-"`
}
// ModelName implements the LocalAIRequest interface
func (r *OpenResponsesRequest) ModelName(s *string) string {
if s != nil {
r.Model = *s
}
return r.Model
}
// ORFunctionTool represents a function tool definition
type ORFunctionTool struct {
Type string `json:"type"` // always "function"
Name string `json:"name"`
Description string `json:"description,omitempty"`
Parameters map[string]interface{} `json:"parameters,omitempty"`
Strict bool `json:"strict"` // Always include in response
}
// ORReasoningParam represents reasoning configuration
type ORReasoningParam struct {
Effort string `json:"effort,omitempty"` // "none"|"low"|"medium"|"high"|"xhigh"
Summary string `json:"summary,omitempty"` // "auto"|"concise"|"detailed"
}
// ORItemParam represents an input/output item (discriminated union by type)
type ORItemParam struct {
Type string `json:"type"` // message|function_call|function_call_output|reasoning|item_reference
ID string `json:"id,omitempty"` // Present for all output items
Status string `json:"status,omitempty"` // in_progress|completed|incomplete
// Message fields
Role string `json:"role,omitempty"` // user|assistant|system|developer
Content interface{} `json:"content,omitempty"` // string or []ORContentPart for messages
// Function call fields
CallID string `json:"call_id,omitempty"`
Name string `json:"name,omitempty"`
Arguments string `json:"arguments,omitempty"`
// Function call output fields
Output interface{} `json:"output,omitempty"` // string or []ORContentPart
// Note: For item_reference type, use the ID field above to reference the item
}
// ORContentPart represents a content block (discriminated union by type)
// For output_text: type, text, annotations, logprobs are ALL REQUIRED per Open Responses spec
type ORContentPart struct {
Type string `json:"type"` // input_text|input_image|input_file|output_text|refusal
Text string `json:"text"` // REQUIRED for output_text - must always be present (even if empty)
Annotations []ORAnnotation `json:"annotations"` // REQUIRED for output_text - must always be present (use [])
Logprobs []ORLogProb `json:"logprobs"` // REQUIRED for output_text - must always be present (use [])
ImageURL string `json:"image_url,omitempty"`
FileURL string `json:"file_url,omitempty"`
Filename string `json:"filename,omitempty"`
FileData string `json:"file_data,omitempty"`
Refusal string `json:"refusal,omitempty"`
Detail string `json:"detail,omitempty"` // low|high|auto for images
}
// OROutputTextContentPart is an alias for ORContentPart used specifically for output_text
type OROutputTextContentPart = ORContentPart
// ORItemField represents an output item (same structure as ORItemParam)
type ORItemField = ORItemParam
// ORResponseResource represents the main response object
type ORResponseResource struct {
ID string `json:"id"`
Object string `json:"object"` // always "response"
CreatedAt int64 `json:"created_at"`
CompletedAt *int64 `json:"completed_at"` // Required: present as number or null
Status string `json:"status"` // in_progress|completed|failed|incomplete
Model string `json:"model"`
Output []ORItemField `json:"output"`
Error *ORError `json:"error"` // Always present, null if no error
IncompleteDetails *ORIncompleteDetails `json:"incomplete_details"` // Always present, null if complete
PreviousResponseID *string `json:"previous_response_id"`
Instructions *string `json:"instructions"`
// Tool-related fields
Tools []ORFunctionTool `json:"tools"` // Always present, empty array if no tools
ToolChoice interface{} `json:"tool_choice"`
ParallelToolCalls bool `json:"parallel_tool_calls"`
MaxToolCalls *int `json:"max_tool_calls"` // nullable
// Sampling parameters (always required)
Temperature float64 `json:"temperature"`
TopP float64 `json:"top_p"`
PresencePenalty float64 `json:"presence_penalty"`
FrequencyPenalty float64 `json:"frequency_penalty"`
TopLogprobs int `json:"top_logprobs"` // Default to 0
MaxOutputTokens *int `json:"max_output_tokens"`
// Text format configuration
Text *ORTextConfig `json:"text"`
// Truncation and reasoning
Truncation string `json:"truncation"`
Reasoning *ORReasoning `json:"reasoning"` // nullable
// Usage statistics
Usage *ORUsage `json:"usage"` // nullable
// Metadata and operational flags
Metadata map[string]string `json:"metadata"`
Store bool `json:"store"`
Background bool `json:"background"`
ServiceTier string `json:"service_tier"`
// Safety and caching
SafetyIdentifier *string `json:"safety_identifier"` // nullable
PromptCacheKey *string `json:"prompt_cache_key"` // nullable
}
// ORTextConfig represents text format configuration
type ORTextConfig struct {
Format *ORTextFormat `json:"format,omitempty"`
}
// ORTextFormat represents the text format type
type ORTextFormat struct {
Type string `json:"type"` // "text" or "json_schema"
}
// ORError represents an error in the response
type ORError struct {
Type string `json:"type"` // invalid_request|not_found|server_error|model_error|too_many_requests
Code string `json:"code,omitempty"`
Message string `json:"message"`
Param string `json:"param,omitempty"`
}
// ORUsage represents token usage statistics
type ORUsage struct {
InputTokens int `json:"input_tokens"`
OutputTokens int `json:"output_tokens"`
TotalTokens int `json:"total_tokens"`
InputTokensDetails *ORInputTokensDetails `json:"input_tokens_details"` // Always present
OutputTokensDetails *OROutputTokensDetails `json:"output_tokens_details"` // Always present
}
// ORInputTokensDetails represents input token breakdown
type ORInputTokensDetails struct {
CachedTokens int `json:"cached_tokens"` // Always include, even if 0
}
// OROutputTokensDetails represents output token breakdown
type OROutputTokensDetails struct {
ReasoningTokens int `json:"reasoning_tokens"` // Always include, even if 0
}
// ORReasoning represents reasoning configuration and metadata
type ORReasoning struct {
Effort string `json:"effort,omitempty"`
Summary string `json:"summary,omitempty"`
}
// ORIncompleteDetails represents details about why a response was incomplete
type ORIncompleteDetails struct {
Reason string `json:"reason"`
}
// ORStreamEvent represents a streaming event
// Note: Fields like delta, text, logprobs should be set explicitly for events that require them
// The sendSSEEvent function uses a custom serializer to handle conditional field inclusion
type ORStreamEvent struct {
Type string `json:"type"`
SequenceNumber int `json:"sequence_number"`
Response *ORResponseResource `json:"response,omitempty"`
OutputIndex *int `json:"output_index,omitempty"`
ContentIndex *int `json:"content_index,omitempty"`
SummaryIndex *int `json:"summary_index,omitempty"`
ItemID string `json:"item_id,omitempty"`
Item *ORItemField `json:"item,omitempty"`
Part *ORContentPart `json:"part,omitempty"`
Delta *string `json:"delta,omitempty"` // Pointer to distinguish unset from empty
Text *string `json:"text,omitempty"` // Pointer to distinguish unset from empty
Arguments *string `json:"arguments,omitempty"` // Pointer to distinguish unset from empty
Refusal string `json:"refusal,omitempty"`
Error *ORErrorPayload `json:"error,omitempty"`
Logprobs *[]ORLogProb `json:"logprobs,omitempty"` // Pointer to distinguish unset from empty
Obfuscation string `json:"obfuscation,omitempty"`
Annotation *ORAnnotation `json:"annotation,omitempty"`
AnnotationIndex *int `json:"annotation_index,omitempty"`
}
// ORErrorPayload represents an error payload in streaming events
type ORErrorPayload struct {
Type string `json:"type"`
Code string `json:"code,omitempty"`
Message string `json:"message"`
Param string `json:"param,omitempty"`
Headers map[string]string `json:"headers,omitempty"`
}
// ORLogProb represents log probability information
type ORLogProb struct {
Token string `json:"token"`
Logprob float64 `json:"logprob"`
Bytes []int `json:"bytes"`
TopLogprobs []ORTopLogProb `json:"top_logprobs,omitempty"`
}
// ORTopLogProb represents a top log probability
type ORTopLogProb struct {
Token string `json:"token"`
Logprob float64 `json:"logprob"`
Bytes []int `json:"bytes"`
}
// ORAnnotation represents an annotation (e.g., URL citation)
type ORAnnotation struct {
Type string `json:"type"` // url_citation
StartIndex int `json:"start_index"`
EndIndex int `json:"end_index"`
URL string `json:"url"`
Title string `json:"title"`
}
// ORContentPartWithLogprobs creates an output_text content part with logprobs converted from OpenAI format
func ORContentPartWithLogprobs(text string, logprobs *Logprobs) ORContentPart {
orLogprobs := []ORLogProb{}
// Convert OpenAI-style logprobs to Open Responses format
if logprobs != nil && len(logprobs.Content) > 0 {
for _, lp := range logprobs.Content {
// Convert top logprobs
topLPs := []ORTopLogProb{}
for _, tlp := range lp.TopLogprobs {
topLPs = append(topLPs, ORTopLogProb{
Token: tlp.Token,
Logprob: tlp.Logprob,
Bytes: tlp.Bytes,
})
}
orLogprobs = append(orLogprobs, ORLogProb{
Token: lp.Token,
Logprob: lp.Logprob,
Bytes: lp.Bytes,
TopLogprobs: topLPs,
})
}
}
return ORContentPart{
Type: "output_text",
Text: text,
Annotations: []ORAnnotation{}, // REQUIRED - must always be present as array (empty if none)
Logprobs: orLogprobs, // REQUIRED - must always be present as array (empty if none)
}
}


@@ -72,6 +72,359 @@ You can list all the models available with:
curl http://localhost:8080/v1/models
```
### Anthropic Messages API
LocalAI implements the Anthropic Messages API, so Claude-compatible clients and SDKs can use LocalAI as a drop-in backend. The endpoint provides a structured way to send messages and receive responses, with support for tools, streaming, and multimodal content.
**Endpoint:** `POST /v1/messages` or `POST /messages`
**Reference:** https://docs.anthropic.com/claude/reference/messages_post
#### Basic Usage
```bash
curl http://localhost:8080/v1/messages \
-H "Content-Type: application/json" \
-H "anthropic-version: 2023-06-01" \
-d '{
"model": "ggml-koala-7b-model-q4_0-r2.bin",
"max_tokens": 1024,
"messages": [
{"role": "user", "content": "Say this is a test!"}
]
}'
```
#### Request Parameters
| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `model` | string | Yes | The model identifier |
| `messages` | array | Yes | Array of message objects with `role` and `content` |
| `max_tokens` | integer | Yes | Maximum number of tokens to generate (must be > 0) |
| `system` | string | No | System message to set the assistant's behavior |
| `temperature` | float | No | Sampling temperature (0.0 to 1.0) |
| `top_p` | float | No | Nucleus sampling parameter |
| `top_k` | integer | No | Top-k sampling parameter |
| `stop_sequences` | array | No | Array of strings that will stop generation |
| `stream` | boolean | No | Enable streaming responses |
| `tools` | array | No | Array of tool definitions for function calling |
| `tool_choice` | string/object | No | Tool choice strategy: "auto", "any", "none", or specific tool |
| `metadata` | object | No | Custom metadata to attach to the request |
#### Message Format
Messages can contain text or structured content blocks:
```bash
curl http://localhost:8080/v1/messages \
-H "Content-Type: application/json" \
-d '{
"model": "ggml-koala-7b-model-q4_0-r2.bin",
"max_tokens": 1024,
"messages": [
{
"role": "user",
"content": [
{
"type": "text",
"text": "What is in this image?"
},
{
"type": "image",
"source": {
"type": "base64",
"media_type": "image/jpeg",
"data": "base64_encoded_image_data"
}
}
]
}
]
}'
```
#### Tool Calling
The Anthropic API supports function calling through tools:
```bash
curl http://localhost:8080/v1/messages \
-H "Content-Type: application/json" \
-d '{
"model": "ggml-koala-7b-model-q4_0-r2.bin",
"max_tokens": 1024,
"tools": [
{
"name": "get_weather",
"description": "Get the current weather",
"input_schema": {
"type": "object",
"properties": {
"location": {
"type": "string",
"description": "The city and state"
}
},
"required": ["location"]
}
}
],
"tool_choice": "auto",
"messages": [
{"role": "user", "content": "What is the weather in San Francisco?"}
]
}'
```
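When the model decides to invoke a tool, the response's `content` array carries a `tool_use` block with the tool `name` and `input`; the client runs the tool locally and sends the result back as a `tool_result` block in a follow-up `user` message. A minimal Python sketch of that round trip (the `get_weather` handler and its return value are hypothetical):

```python
import json

def handle_tool_use(response):
    """Extract tool_use blocks from an Anthropic-style response and
    build the follow-up user message carrying the tool results."""
    results = []
    for block in response.get("content", []):
        if block.get("type") == "tool_use":
            # Dispatch to a local implementation; get_weather matches
            # the tool defined in the request above (result is made up).
            if block["name"] == "get_weather":
                output = {"forecast": "sunny", "location": block["input"]["location"]}
            else:
                output = {"error": f"unknown tool {block['name']}"}
            results.append({
                "type": "tool_result",
                "tool_use_id": block["id"],
                "content": json.dumps(output),
            })
    return {"role": "user", "content": results}

# Example response fragment containing a tool_use block:
resp = {
    "stop_reason": "tool_use",
    "content": [
        {"type": "tool_use", "id": "toolu_1", "name": "get_weather",
         "input": {"location": "San Francisco, CA"}}
    ],
}
follow_up = handle_tool_use(resp)
print(follow_up["content"][0]["tool_use_id"])  # toolu_1
```

The `follow_up` message is appended to the `messages` array of the next request, after the assistant turn that contained the `tool_use` block.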
#### Streaming
Enable streaming responses by setting `stream: true`:
```bash
curl http://localhost:8080/v1/messages \
-H "Content-Type: application/json" \
-d '{
"model": "ggml-koala-7b-model-q4_0-r2.bin",
"max_tokens": 1024,
"stream": true,
"messages": [
{"role": "user", "content": "Tell me a story"}
]
}'
```
Streaming responses use the Server-Sent Events (SSE) format with the event types `message_start`, `content_block_start`, `content_block_delta`, `content_block_stop`, `message_delta`, and `message_stop`.
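A client consuming this stream only needs a small SSE parser. The sketch below (plain Python, no server required) splits a captured stream into `(event, data)` pairs; the sample payloads are illustrative, not verbatim server output:

```python
def parse_sse(stream_text):
    """Parse Server-Sent Events into (event, data) pairs.
    Minimal sketch: assumes each event is an `event:` line followed
    by a `data:` line, separated by blank lines."""
    events = []
    current_event = None
    for line in stream_text.splitlines():
        if line.startswith("event:"):
            current_event = line[len("event:"):].strip()
        elif line.startswith("data:"):
            events.append((current_event, line[len("data:"):].strip()))
    return events

sample = (
    "event: message_start\n"
    'data: {"type": "message_start"}\n\n'
    "event: content_block_delta\n"
    'data: {"type": "content_block_delta", "delta": {"type": "text_delta", "text": "Hi"}}\n\n'
    "event: message_stop\n"
    'data: {"type": "message_stop"}\n\n'
)
event_names = [e for e, _ in parse_sse(sample)]
print(event_names)  # ['message_start', 'content_block_delta', 'message_stop']
```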
#### Response Format
```json
{
"id": "msg_abc123",
"type": "message",
"role": "assistant",
"content": [
{
"type": "text",
"text": "This is a test!"
}
],
"model": "ggml-koala-7b-model-q4_0-r2.bin",
"stop_reason": "end_turn",
"usage": {
"input_tokens": 10,
"output_tokens": 5
}
}
```
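Because `content` is an array of blocks rather than a plain string, extracting the assistant's text takes a small helper; a minimal Python sketch over the documented response shape:

```python
def assistant_text(message):
    """Concatenate the text blocks of an Anthropic-style message."""
    return "".join(
        block["text"] for block in message.get("content", [])
        if block.get("type") == "text"
    )

# The response format shown above:
msg = {
    "id": "msg_abc123", "type": "message", "role": "assistant",
    "content": [{"type": "text", "text": "This is a test!"}],
    "stop_reason": "end_turn",
}
print(assistant_text(msg))  # This is a test!
```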
### Open Responses API
LocalAI supports the Open Responses API specification, a standardized interface for model interactions with support for background processing, streaming, tool calling, and reasoning configuration.
**Endpoint:** `POST /v1/responses` or `POST /responses`
**Reference:** https://www.openresponses.org/specification
#### Basic Usage
```bash
curl http://localhost:8080/v1/responses \
-H "Content-Type: application/json" \
-d '{
"model": "ggml-koala-7b-model-q4_0-r2.bin",
"input": "Say this is a test!",
"max_output_tokens": 1024
}'
```
#### Request Parameters
| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `model` | string | Yes | The model identifier |
| `input` | string/array | Yes | Input text or array of input items |
| `max_output_tokens` | integer | No | Maximum number of tokens to generate |
| `temperature` | float | No | Sampling temperature |
| `top_p` | float | No | Nucleus sampling parameter |
| `instructions` | string | No | System instructions |
| `tools` | array | No | Array of tool definitions |
| `tool_choice` | string/object | No | Tool choice: "auto", "required", "none", or specific tool |
| `stream` | boolean | No | Enable streaming responses |
| `background` | boolean | No | Run request in background (returns immediately) |
| `store` | boolean | No | Whether to store the response |
| `reasoning` | object | No | Reasoning configuration with `effort` and `summary` |
| `parallel_tool_calls` | boolean | No | Allow parallel tool calls |
| `max_tool_calls` | integer | No | Maximum number of tool calls |
| `presence_penalty` | float | No | Presence penalty (-2.0 to 2.0) |
| `frequency_penalty` | float | No | Frequency penalty (-2.0 to 2.0) |
| `top_logprobs` | integer | No | Number of top logprobs to return |
| `truncation` | string | No | Truncation mode: "auto" or "disabled" |
| `text_format` | object | No | Text format configuration |
| `metadata` | object | No | Custom metadata |
#### Input Format
Input can be a simple string or an array of structured items:
```bash
curl http://localhost:8080/v1/responses \
-H "Content-Type: application/json" \
-d '{
"model": "ggml-koala-7b-model-q4_0-r2.bin",
"input": [
{
"type": "message",
"role": "user",
"content": "What is the weather?"
}
],
"max_output_tokens": 1024
}'
```
#### Background Processing
Run requests in the background for long-running tasks:
```bash
curl http://localhost:8080/v1/responses \
-H "Content-Type: application/json" \
-d '{
"model": "ggml-koala-7b-model-q4_0-r2.bin",
"input": "Generate a long story",
"max_output_tokens": 4096,
"background": true
}'
```
The response will include a response ID that can be used to poll for completion:
```json
{
"id": "resp_abc123",
"object": "response",
"status": "in_progress",
"created_at": 1234567890
}
```
#### Retrieving Background Responses
Use the GET endpoint to retrieve background responses:
```bash
# Get response by ID
curl http://localhost:8080/v1/responses/resp_abc123
# Resume streaming with query parameters
curl "http://localhost:8080/v1/responses/resp_abc123?stream=true&starting_after=10"
```
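A client typically polls the GET endpoint until the `status` field leaves `in_progress`. A minimal polling helper might look like this; the `fetch` callable stands in for an HTTP GET to `/v1/responses/<id>` and would be replaced by a real HTTP client:

```python
import time

def wait_for_response(fetch, response_id, poll_interval=0.0, max_polls=100):
    """Poll until the response is no longer queued or in progress.

    `fetch` is any callable mapping a response ID to the decoded JSON
    body of GET /v1/responses/<id>.
    """
    for _ in range(max_polls):
        resp = fetch(response_id)
        if resp.get("status") not in ("queued", "in_progress"):
            return resp
        time.sleep(poll_interval)
    raise TimeoutError(f"response {response_id} still in progress")

# Simulated server for illustration: completes on the third poll.
_states = iter(["in_progress", "in_progress", "completed"])
fake_fetch = lambda rid: {"id": rid, "object": "response", "status": next(_states)}

result = wait_for_response(fake_fetch, "resp_abc123")
print(result["status"])  # completed
```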
#### Canceling Background Responses
Cancel a background response that's still in progress:
```bash
curl -X POST http://localhost:8080/v1/responses/resp_abc123/cancel
```
#### Tool Calling
Open Responses API supports function calling with tools:
```bash
curl http://localhost:8080/v1/responses \
-H "Content-Type: application/json" \
-d '{
"model": "ggml-koala-7b-model-q4_0-r2.bin",
"input": "What is the weather in San Francisco?",
"tools": [
{
"type": "function",
"name": "get_weather",
"description": "Get the current weather",
"parameters": {
"type": "object",
"properties": {
"location": {
"type": "string",
"description": "The city and state"
}
},
"required": ["location"]
}
}
],
"tool_choice": "auto",
"max_output_tokens": 1024
}'
```
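When the model decides to call a tool, the response's `output` array carries function-call items with the tool name and JSON-encoded arguments, which the client is expected to execute locally. A sketch of dispatching such an item (the `get_weather` handler and the exact sample item are illustrative, assuming a `function_call` item shape with `name` and `arguments` fields):

```python
import json

def get_weather(location):
    # Hypothetical local implementation of the tool declared above.
    return {"location": location, "forecast": "sunny"}

TOOL_HANDLERS = {"get_weather": get_weather}

def dispatch_tool_calls(output_items):
    """Run every function-call item through its registered handler."""
    results = []
    for item in output_items:
        if item.get("type") != "function_call":
            continue
        handler = TOOL_HANDLERS[item["name"]]
        args = json.loads(item["arguments"])  # arguments arrive JSON-encoded
        results.append(handler(**args))
    return results

# A function-call item as it might appear in the `output` array.
sample_output = [
    {
        "type": "function_call",
        "name": "get_weather",
        "arguments": '{"location": "San Francisco, CA"}',
    }
]
tool_results = dispatch_tool_calls(sample_output)
print(tool_results[0]["forecast"])  # sunny
```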
#### Reasoning Configuration
Configure reasoning effort and summary style:
```bash
curl http://localhost:8080/v1/responses \
-H "Content-Type: application/json" \
-d '{
"model": "ggml-koala-7b-model-q4_0-r2.bin",
"input": "Solve this complex problem step by step",
"reasoning": {
"effort": "high",
"summary": "detailed"
},
"max_output_tokens": 2048
}'
```
#### Response Format
```json
{
"id": "resp_abc123",
"object": "response",
"created_at": 1234567890,
"completed_at": 1234567895,
"status": "completed",
"model": "ggml-koala-7b-model-q4_0-r2.bin",
"output": [
{
"type": "message",
"id": "msg_001",
"role": "assistant",
"content": [
{
"type": "output_text",
"text": "This is a test!",
"annotations": [],
"logprobs": []
}
],
"status": "completed"
}
],
"error": null,
"incomplete_details": null,
"temperature": 0.7,
"top_p": 1.0,
"presence_penalty": 0.0,
"frequency_penalty": 0.0,
"usage": {
"input_tokens": 10,
"output_tokens": 5,
"total_tokens": 15,
"input_tokens_details": {
"cached_tokens": 0
},
"output_tokens_details": {
"reasoning_tokens": 0
}
}
}
```
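The assistant's text lives inside `output` → message items → `output_text` content parts, so clients usually concatenate those parts. A small extraction helper, assuming the response shape shown above:

```python
def extract_output_text(response):
    """Concatenate all output_text parts from a response object."""
    parts = []
    for item in response.get("output", []):
        if item.get("type") != "message":
            continue
        for content in item.get("content", []):
            if content.get("type") == "output_text":
                parts.append(content["text"])
    return "".join(parts)

# Minimal response mirroring the documented shape.
response = {
    "id": "resp_abc123",
    "object": "response",
    "status": "completed",
    "output": [
        {
            "type": "message",
            "role": "assistant",
            "content": [{"type": "output_text", "text": "This is a test!"}],
        }
    ],
}
print(extract_output_text(response))  # This is a test!
```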
## Backends
### RWKV

View File

@@ -164,6 +164,57 @@ curl http://localhost:8080/tts -H "Content-Type: application/json" -d '{
}' | aplay
```
### Pocket TTS
[Pocket TTS](https://github.com/kyutai-labs/pocket-tts) is a lightweight text-to-speech model designed to run efficiently on CPUs. It supports voice cloning through HuggingFace voice URLs or local audio files.
#### Setup
Install the `pocket-tts` model from the Model gallery, or run `local-ai models install pocket-tts`.
#### Usage
Use the tts endpoint by specifying the pocket-tts backend:
```bash
curl http://localhost:8080/tts -H "Content-Type: application/json" -d '{
"model": "pocket-tts",
"input":"Hello world, this is a test."
}' | aplay
```
#### Voice cloning
Pocket TTS supports voice cloning through built-in voice names, HuggingFace URLs, or local audio files. You can configure a model with a specific voice:
```yaml
name: pocket-tts
backend: pocket-tts
tts:
voice: "azelma" # Built-in voice name
# Or use HuggingFace URL: "hf://kyutai/tts-voices/alba-mackenna/casual.wav"
# Or use local file path: "path/to/voice.wav"
# Available built-in voices: alba, marius, javert, jean, fantine, cosette, eponine, azelma
```
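The three voice forms can be told apart mechanically. A hypothetical resolver (not part of LocalAI; it only illustrates the precedence implied by the comments above):

```python
# Built-in voice names listed in the config comments above.
BUILTIN_VOICES = {"alba", "marius", "javert", "jean",
                  "fantine", "cosette", "eponine", "azelma"}

def classify_voice(voice):
    """Classify a `tts.voice` value into one of the three supported forms."""
    if voice.startswith("hf://"):
        return "huggingface_url"
    if voice in BUILTIN_VOICES:
        return "builtin"
    # Anything else is treated as a local audio file path
    # (relative to the model directory or absolute).
    return "local_file"

print(classify_voice("azelma"))            # builtin
print(classify_voice("voices/custom.wav")) # local_file
```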
You can also pre-load a default voice for faster first generation:
```yaml
name: pocket-tts
backend: pocket-tts
options:
- "default_voice:azelma" # Pre-load this voice when model loads
```
Then you can use the model:
```bash
curl http://localhost:8080/tts -H "Content-Type: application/json" -d '{
"model": "pocket-tts",
"input":"Hello world, this is a test."
}' | aplay
```
### Vall-E-X
[VALL-E-X](https://github.com/Plachtaa/VALL-E-X) is an open source implementation of Microsoft's VALL-E X zero-shot TTS model.

View File

@@ -112,6 +112,66 @@ curl http://localhost:8080/v1/chat/completions \
</details>
### Anthropic Messages API
LocalAI supports the Anthropic Messages API for Claude-compatible models. [Anthropic documentation](https://docs.anthropic.com/claude/reference/messages_post).
<details>
```bash
curl http://localhost:8080/v1/messages \
-H "Content-Type: application/json" \
-H "anthropic-version: 2023-06-01" \
-d '{
"model": "gpt-4",
"max_tokens": 1024,
"messages": [
{"role": "user", "content": "How are you doing?"}
],
"temperature": 0.7
}'
```
</details>
### Open Responses API
LocalAI supports the Open Responses API specification with support for background processing, streaming, and advanced features. [Open Responses documentation](https://www.openresponses.org/specification).
<details>
```bash
curl http://localhost:8080/v1/responses \
-H "Content-Type: application/json" \
-d '{
"model": "gpt-4",
"input": "Say this is a test!",
"max_output_tokens": 1024,
"temperature": 0.7
}'
```
For background processing:
```bash
curl http://localhost:8080/v1/responses \
-H "Content-Type: application/json" \
-d '{
"model": "gpt-4",
"input": "Generate a long story",
"max_output_tokens": 4096,
"background": true
}'
```
Then retrieve the response:
```bash
curl http://localhost:8080/v1/responses/<response_id>
```
</details>
### Image Generation
Creates an image given a prompt. [OpenAI documentation](https://platform.openai.com/docs/api-reference/images/create).

View File

@@ -20,7 +20,7 @@ Choose the installation method that best suits your needs:
1. **[Docker](docker/)** ⭐ **Recommended** - Works on all platforms, easiest setup
2. **[macOS](macos/)** - Download and install the DMG application
3. **[Linux](linux/)** - Install on Linux using the one-liner script or binaries
3. **[Linux](linux/)** - Install on Linux using binaries (install.sh script currently has issues - see [issue #8032](https://github.com/mudler/LocalAI/issues/8032))
4. **[Kubernetes](kubernetes/)** - Deploy LocalAI on Kubernetes clusters
5. **[Build from Source](build/)** - Build LocalAI from source code
@@ -36,6 +36,6 @@ This will start LocalAI. The API will be available at `http://localhost:8080`. F
For other platforms:
- **macOS**: Download the [DMG](macos/)
- **Linux**: Use the `curl https://localai.io/install.sh | sh` [one-liner](linux/)
- **Linux**: See the [Linux installation guide](linux/) for installation options. **Note:** The `install.sh` script is currently experiencing issues - see [issue #8032](https://github.com/mudler/LocalAI/issues/8032) for details.
For detailed instructions, see the [Docker installation guide](docker/).

View File

@@ -6,7 +6,11 @@ url: '/installation/linux/'
---
## One-Line Installer (Recommended)
## One-Line Installer
{{% notice warning %}}
**The `install.sh` script is currently experiencing issues and may produce broken or misconfigured installations. Please use alternative installation methods (Docker or manual binary installation) until [issue #8032](https://github.com/mudler/LocalAI/issues/8032) is resolved.**
{{% /notice %}}
The fastest way to install LocalAI on Linux is with the installation script:

View File

@@ -97,7 +97,7 @@ For more information on VRAM management, see [VRAM and Memory Management]({{%rel
| `--opaque-errors` | `false` | If true, all error responses are replaced with blank 500 errors. This is intended only for hardening against information leaks and is normally not recommended | `$LOCALAI_OPAQUE_ERRORS` |
| `--use-subtle-key-comparison` | `false` | If true, API Key validation comparisons will be performed using constant-time comparisons rather than simple equality. This trades off performance on each request for resilience against timing attacks | `$LOCALAI_SUBTLE_KEY_COMPARISON` |
| `--disable-api-key-requirement-for-http-get` | `false` | If true, a valid API key is not required to issue GET requests to portions of the web UI. This should only be enabled in secure testing environments | `$LOCALAI_DISABLE_API_KEY_REQUIREMENT_FOR_HTTP_GET` |
| `--http-get-exempted-endpoints` | `^/$,^/browse/?$,^/talk/?$,^/p2p/?$,^/chat/?$,^/text2image/?$,^/tts/?$,^/static/.*$,^/swagger.*$` | If `--disable-api-key-requirement-for-http-get` is overridden to true, this is the list of endpoints to exempt. Only adjust this in case of a security incident or as a result of a personal security posture review | `$LOCALAI_HTTP_GET_EXEMPTED_ENDPOINTS` |
| `--http-get-exempted-endpoints` | `^/$,^/browse/?$,^/talk/?$,^/p2p/?$,^/chat/?$,^/image/?$,^/text2image/?$,^/tts/?$,^/static/.*$,^/swagger.*$` | If `--disable-api-key-requirement-for-http-get` is overridden to true, this is the list of endpoints to exempt. Only adjust this in case of a security incident or as a result of a personal security posture review | `$LOCALAI_HTTP_GET_EXEMPTED_ENDPOINTS` |
## P2P Flags

View File

@@ -42,6 +42,7 @@ LocalAI will attempt to automatically load models which are not explicitly confi
| [silero-vad](https://github.com/snakers4/silero-vad) with [Golang bindings](https://github.com/streamer45/silero-vad-go) | Silero VAD | no | Voice Activity Detection | no | no | CPU |
| [neutts](https://github.com/neuphonic/neuttsair) | NeuTTSAir | no | Text-to-speech with voice cloning | no | no | CUDA 12/13, ROCm, CPU |
| [vibevoice](https://github.com/microsoft/VibeVoice) | VibeVoice-Realtime | no | Real-time text-to-speech with voice cloning | no | no | CUDA 12/13, ROCm, Intel, CPU |
| [pocket-tts](https://github.com/kyutai-labs/pocket-tts) | Pocket TTS | no | Lightweight CPU-based text-to-speech with voice cloning | no | no | CUDA 12/13, ROCm, Intel, CPU |
| [mlx-audio](https://github.com/Blaizzy/mlx-audio) | MLX | no | Text-to-speech | no | no | Metal (Apple Silicon) |
## Image & Video Generation

View File

@@ -1,3 +1,3 @@
{
"version": "v3.9.0"
"version": "v3.10.0"
}

View File

@@ -29,6 +29,7 @@
This description emphasizes its capabilities, efficiency, and versatility for multimodal search tasks.
overrides:
reranking: true
parameters:
model: llama-cpp/models/Qwen3-VL-Reranker-8B.Q4_K_M.gguf
name: Qwen3-VL-Reranker-8B-GGUF
@@ -478,6 +479,16 @@
- filename: voices/streaming_model/en-Davis_man.pt
uri: https://raw.githubusercontent.com/microsoft/VibeVoice/main/demo/voices/streaming_model/en-Davis_man.pt
sha256: 67561d63bfa2153616e4c02fd967007c182593fc53738a6ad94bf5f84e8832ac
- &pocket-tts
url: "github:mudler/LocalAI/gallery/pocket-tts.yaml@master"
icon: https://avatars.githubusercontent.com/u/6154722?s=200&v=4
license: mit
tags:
- text-to-speech
- TTS
name: "pocket-tts"
urls:
- https://github.com/kyutai-labs/pocket-tts
- &qwen3vl
url: "github:mudler/LocalAI/gallery/qwen3.yaml@master"
icon: https://cdn-avatars.huggingface.co/v1/production/uploads/620760a26e3b7210c2ff1943/-s1gyJfvbE1RgO5iBeNOi.png
@@ -1237,6 +1248,63 @@
cuda: true
pipeline_type: QwenImageEditPipeline
enable_parameters: num_inference_steps,image
- &ltx2
name: "ltx-2"
url: "github:mudler/LocalAI/gallery/virtual.yaml@master"
urls:
- https://huggingface.co/Lightricks/LTX-2
license: ltx-2-community-license-agreement
tags:
- diffusers
- gpu
- image-to-video
- video-generation
- audio-video
description: |
**LTX-2** is a DiT-based audio-video foundation model designed to generate synchronized video and audio within a single model. It brings together the core building blocks of modern video generation, with open weights and a focus on practical, local execution.
**Key Features:**
- **Joint Audio-Video Generation**: Generates synchronized video and audio in a single model
- **Image-to-Video**: Converts static images into dynamic videos with matching audio
- **High Quality**: Produces realistic video with natural motion and synchronized audio
- **Open Weights**: Available under the LTX-2 Community License Agreement
**Model Details:**
- **Model Type**: Diffusion-based audio-video foundation model
- **Architecture**: DiT (Diffusion Transformer) based
- **Developed by**: Lightricks
- **Paper**: [LTX-2: Efficient Joint Audio-Visual Foundation Model](https://arxiv.org/abs/2601.03233)
**Usage Tips:**
- Width & height settings must be divisible by 32
- Frame count must be a multiple of 8 plus 1 (e.g., 9, 17, 25, 33, 41, 49, 57, 65, 73, 81, 89, 97, 105, 113, 121)
- Recommended settings: width=768, height=512, num_frames=121, frame_rate=24.0
- For best results, use detailed prompts describing motion and scene dynamics
**Limitations:**
- This model is not intended or able to provide factual information
- Prompt following is heavily influenced by the prompting-style
- When generating audio without speech, the audio may be of lower quality
**Citation:**
```bibtex
@article{hacohen2025ltx2,
title={LTX-2: Efficient Joint Audio-Visual Foundation Model},
author={HaCohen, Yoav and Brazowski, Benny and Chiprut, Nisan and others},
journal={arXiv preprint arXiv:2601.03233},
year={2025}
}
```
overrides:
backend: diffusers
low_vram: true
parameters:
model: Lightricks/LTX-2
diffusers:
cuda: true
pipeline_type: LTX2ImageToVideoPipeline
options:
- torch_dtype:bf16
- &gptoss
name: "gpt-oss-20b"
url: "github:mudler/LocalAI/gallery/harmony.yaml@master"
@@ -3755,6 +3823,41 @@
- filename: boomerang-qwen3-4.9B.Q4_K_M.gguf
sha256: 11e6c068351d104dee31dd63550e5e2fc9be70467c1cfc07a6f84030cb701537
uri: huggingface://mradermacher/boomerang-qwen3-4.9B-GGUF/boomerang-qwen3-4.9B.Q4_K_M.gguf
- !!merge <<: *qwen3
name: "qwen3-coder-30b-a3b-instruct"
icon: https://cdn-avatars.huggingface.co/v1/production/uploads/620760a26e3b7210c2ff1943/-s1gyJfvbE1RgO5iBeNOi.png
url: "github:mudler/LocalAI/gallery/qwen3.yaml@master"
urls:
- https://huggingface.co/Qwen/Qwen3-Coder-30B-A3B-Instruct
- https://huggingface.co/unsloth/Qwen3-Coder-30B-A3B-Instruct-GGUF
description: |
Qwen3-Coder is available in multiple sizes. Today, we're excited to introduce Qwen3-Coder-30B-A3B-Instruct. This streamlined model maintains impressive performance and efficiency, featuring the following key enhancements:
- Significant Performance among open models on Agentic Coding, Agentic Browser-Use, and other foundational coding tasks.
- Long-context Capabilities with native support for 256K tokens, extendable up to 1M tokens using Yarn, optimized for repository-scale understanding.
- Agentic Coding support for most platforms such as Qwen Code and CLINE, featuring a specially designed function call format.
Model Overview:
Qwen3-Coder-30B-A3B-Instruct has the following features:
- Type: Causal Language Models
- Training Stage: Pretraining & Post-training
- Number of Parameters: 30.5B in total and 3.3B activated
- Number of Layers: 48
- Number of Attention Heads (GQA): 32 for Q and 4 for KV
- Number of Experts: 128
- Number of Activated Experts: 8
- Context Length: 262,144 natively.
NOTE: This model supports only non-thinking mode and does not generate <think></think> blocks in its output. Meanwhile, specifying enable_thinking=False is no longer required.
overrides:
parameters:
model: Qwen3-Coder-30B-A3B-Instruct-Q4_K_M.gguf
files:
- filename: Qwen3-Coder-30B-A3B-Instruct-Q4_K_M.gguf
sha256: fadc3e5f8d42bf7e894a785b05082e47daee4df26680389817e2093056f088ad
uri: huggingface://unsloth/Qwen3-Coder-30B-A3B-Instruct-GGUF/Qwen3-Coder-30B-A3B-Instruct-Q4_K_M.gguf
- &gemma3
url: "github:mudler/LocalAI/gallery/gemma.yaml@master"
name: "gemma-3-27b-it"

34
gallery/pocket-tts.yaml Normal file
View File

@@ -0,0 +1,34 @@
---
name: localai
config_file: |-
name: pocket-tts
backend: pocket-tts
description: |
Pocket TTS is a lightweight text-to-speech model designed to run efficiently on CPUs.
This model supports voice cloning through HuggingFace voice URLs or local audio files.
parameters:
model: ""
# TTS configuration
tts:
# Voice selection - can be:
# 1. Built-in voice name (e.g., "alba", "marius", "javert", "jean", "fantine", "cosette", "eponine", "azelma")
# 2. HuggingFace URL (e.g., "hf://kyutai/tts-voices/alba-mackenna/casual.wav")
# 3. Local file path (relative to model directory or absolute)
# voice: "azelma"
# Alternative: use audio_path to specify a voice file directly
# audio_path: "hf://kyutai/tts-voices/alba-mackenna/casual.wav"
known_usecases:
- tts
# Backend-specific options
# These are passed as "key:value" strings to the backend
options:
# Default voice to pre-load (optional)
# Can be a voice name or HuggingFace URL
# If set, this voice will be loaded when the model loads for faster first generation
- "default_voice:azelma"
# - "default_voice:hf://kyutai/tts-voices/alba-mackenna/casual.wav"

18
go.mod
View File

@@ -6,7 +6,7 @@ toolchain go1.24.5
require (
dario.cat/mergo v1.0.2
fyne.io/fyne/v2 v2.7.1
fyne.io/fyne/v2 v2.7.2
github.com/Masterminds/sprig/v3 v3.3.0
github.com/alecthomas/kong v1.13.0
github.com/anthropics/anthropic-sdk-go v1.19.0
@@ -20,7 +20,7 @@ require (
github.com/gofrs/flock v0.13.0
github.com/google/go-containerregistry v0.20.7
github.com/google/uuid v1.6.0
github.com/gpustack/gguf-parser-go v0.22.1
github.com/gpustack/gguf-parser-go v0.23.1
github.com/hpcloud/tail v1.0.0
github.com/ipfs/go-log v1.0.5
github.com/jaypipes/ghw v0.21.2
@@ -32,13 +32,13 @@ require (
github.com/mholt/archiver/v3 v3.5.1
github.com/microcosm-cc/bluemonday v1.0.27
github.com/modelcontextprotocol/go-sdk v1.2.0
github.com/mudler/cogito v0.7.2
github.com/mudler/cogito v0.8.1
github.com/mudler/edgevpn v0.31.1
github.com/mudler/go-processmanager v0.0.0-20240820160718-8b802d3ecf82
github.com/mudler/go-processmanager v0.1.0
github.com/mudler/memory v0.0.0-20251216220809-d1256471a6c2
github.com/mudler/xlog v0.0.5
github.com/onsi/ginkgo/v2 v2.27.3
github.com/onsi/gomega v1.38.3
github.com/onsi/ginkgo/v2 v2.27.5
github.com/onsi/gomega v1.39.0
github.com/otiai10/copy v1.14.1
github.com/otiai10/openaigo v1.7.0
github.com/phayes/freeport v0.0.0-20220201140144-74d24b5ae9f5
@@ -59,7 +59,6 @@ require (
go.opentelemetry.io/otel/metric v1.39.0
go.opentelemetry.io/otel/sdk/metric v1.39.0
google.golang.org/grpc v1.78.0
google.golang.org/protobuf v1.36.10
gopkg.in/yaml.v2 v2.4.0
gopkg.in/yaml.v3 v3.0.1
oras.land/oras-go/v2 v2.6.0
@@ -74,10 +73,11 @@ require (
github.com/tidwall/pretty v1.2.1 // indirect
github.com/tidwall/sjson v1.2.5 // indirect
github.com/valyala/fasttemplate v1.2.2 // indirect
google.golang.org/protobuf v1.36.10 // indirect
)
require (
fyne.io/systray v1.11.1-0.20250603113521-ca66a66d8b58 // indirect
fyne.io/systray v1.12.0 // indirect
github.com/BurntSushi/toml v1.5.0 // indirect
github.com/charmbracelet/colorprofile v0.2.3-0.20250311203215-f60798e515dc // indirect
github.com/charmbracelet/lipgloss v1.1.1-0.20250404203927-76690c660834 // indirect
@@ -324,7 +324,7 @@ require (
golang.org/x/exp v0.0.0-20250606033433-dcc06ee1d476 // indirect
golang.org/x/mod v0.30.0 // indirect
golang.org/x/sync v0.19.0 // indirect
golang.org/x/sys v0.39.0 // indirect
golang.org/x/sys v0.40.0 // indirect
golang.org/x/term v0.38.0 // indirect
golang.org/x/text v0.32.0 // indirect
golang.org/x/tools v0.39.0 // indirect

32
go.sum
View File

@@ -8,10 +8,10 @@ dmitri.shuralyov.com/app/changes v0.0.0-20180602232624-0a106ad413e3/go.mod h1:Yl
dmitri.shuralyov.com/html/belt v0.0.0-20180602232347-f7d459c86be0/go.mod h1:JLBrvjyP0v+ecvNYvCpyZgu5/xkfAUhi6wJj28eUfSU=
dmitri.shuralyov.com/service/change v0.0.0-20181023043359-a85b471d5412/go.mod h1:a1inKt/atXimZ4Mv927x+r7UpyzRUf4emIoiiSC2TN4=
dmitri.shuralyov.com/state v0.0.0-20180228185332-28bcc343414c/go.mod h1:0PRwlb0D6DFvNNtx+9ybjezNCa8XF0xaYcETyp6rHWU=
fyne.io/fyne/v2 v2.7.1 h1:ja7rNHWWEooha4XBIZNnPP8tVFwmTfwMJdpZmLxm2Zc=
fyne.io/fyne/v2 v2.7.1/go.mod h1:xClVlrhxl7D+LT+BWYmcrW4Nf+dJTvkhnPgji7spAwE=
fyne.io/systray v1.11.1-0.20250603113521-ca66a66d8b58 h1:eA5/u2XRd8OUkoMqEv3IBlFYSruNlXD8bRHDiqm0VNI=
fyne.io/systray v1.11.1-0.20250603113521-ca66a66d8b58/go.mod h1:RVwqP9nYMo7h5zViCBHri2FgjXF7H2cub7MAq4NSoLs=
fyne.io/fyne/v2 v2.7.2 h1:XiNpWkn0PzX43ZCjbb0QYGg1RCxVbugwfVgikWZBCMw=
fyne.io/fyne/v2 v2.7.2/go.mod h1:PXbqY3mQmJV3J1NRUR2VbVgUUx3vgvhuFJxyjRK/4Ug=
fyne.io/systray v1.12.0 h1:CA1Kk0e2zwFlxtc02L3QFSiIbxJ/P0n582YrZHT7aTM=
fyne.io/systray v1.12.0/go.mod h1:RVwqP9nYMo7h5zViCBHri2FgjXF7H2cub7MAq4NSoLs=
git.apache.org/thrift.git v0.0.0-20180902110319-2566ecd5d999/go.mod h1:fPE2ZNJGynbRyZ4dJvy6G277gSllfV2HJqblrnkyeyg=
github.com/AdaLogics/go-fuzz-headers v0.0.0-20240806141605-e8a1dd7889d6 h1:He8afgbRMd7mFxO99hRNu+6tazq8nFF9lIwo9JFroBk=
github.com/AdaLogics/go-fuzz-headers v0.0.0-20240806141605-e8a1dd7889d6/go.mod h1:8o94RPi1/7XTJvwPpRSzSUedZrtlirdB3r9Z20bi2f8=
@@ -292,8 +292,8 @@ github.com/gorilla/css v1.0.1 h1:ntNaBIghp6JmvWnxbZKANoLyuXTPZ4cAMlo6RyhlbO8=
github.com/gorilla/css v1.0.1/go.mod h1:BvnYkspnSzMmwRK+b8/xgNPLiIuNZr6vbZBTPQ2A3b0=
github.com/gorilla/websocket v1.5.3 h1:saDtZ6Pbx/0u+bgYQ3q96pZgCzfhKXGPqt7kZ72aNNg=
github.com/gorilla/websocket v1.5.3/go.mod h1:YR8l580nyteQvAITg2hZ9XVh4b55+EU/adAjf1fMHhE=
github.com/gpustack/gguf-parser-go v0.22.1 h1:FRnEDWqT0Rcplr/R9ctCRSN2+3DhVsf6dnR5/i9JA4E=
github.com/gpustack/gguf-parser-go v0.22.1/go.mod h1:y4TwTtDqFWTK+xvprOjRUh+dowgU2TKCX37vRKvGiZ0=
github.com/gpustack/gguf-parser-go v0.23.1 h1:0U7DOrsi7ryx2L/dlMy+BSQ5bJV4AuMEIgGBs4RK46A=
github.com/gpustack/gguf-parser-go v0.23.1/go.mod h1:y4TwTtDqFWTK+xvprOjRUh+dowgU2TKCX37vRKvGiZ0=
github.com/gregjones/httpcache v0.0.0-20180305231024-9cad4c3443a7/go.mod h1:FecbI9+v66THATjSRHfNgh1IVFe/9kFxbXtjV0ctIMA=
github.com/grpc-ecosystem/grpc-gateway v1.5.0/go.mod h1:RSKVYQBd5MCa4OVpNdGskqpgL2+G+NZTnrVHpWWfpdw=
github.com/grpc-ecosystem/grpc-gateway v1.16.0 h1:gmcG1KaJ57LophUzW0Hy8NmPhnMZb4M0+kPpLofRdBo=
@@ -507,14 +507,14 @@ github.com/morikuni/aec v1.0.0/go.mod h1:BbKIizmSmc5MMPqRYbxO4ZU0S0+P200+tUnFx7P
github.com/mr-tron/base58 v1.1.2/go.mod h1:BinMc/sQntlIE1frQmRFPUoPA1Zkr8VRgBdjWI2mNwc=
github.com/mr-tron/base58 v1.2.0 h1:T/HDJBh4ZCPbU39/+c3rRvE0uKBQlU27+QI8LJ4t64o=
github.com/mr-tron/base58 v1.2.0/go.mod h1:BinMc/sQntlIE1frQmRFPUoPA1Zkr8VRgBdjWI2mNwc=
github.com/mudler/cogito v0.7.2 h1:J5eHZPsxpoKcnYUfogje5u0nnzGww7ytv7nSn1DMpms=
github.com/mudler/cogito v0.7.2/go.mod h1:6sfja3lcu2nWRzEc0wwqGNu/eCG3EWgij+8s7xyUeQ4=
github.com/mudler/cogito v0.8.1 h1:66qPJkAMrq/Vo8AC/PvXWuVxYPhi7X2DQuJIilL8+3I=
github.com/mudler/cogito v0.8.1/go.mod h1:6sfja3lcu2nWRzEc0wwqGNu/eCG3EWgij+8s7xyUeQ4=
github.com/mudler/edgevpn v0.31.1 h1:7qegiDWd0kAg6ljhNHxqvp8hbo/6BbzSdbb7/2WZfiY=
github.com/mudler/edgevpn v0.31.1/go.mod h1:ftV5B0nKFzm4R8vR80UYnCb2nf7lxCRgAALxUEEgCf8=
github.com/mudler/go-piper v0.0.0-20241023091659-2494246fd9fc h1:RxwneJl1VgvikiX28EkpdAyL4yQVnJMrbquKospjHyA=
github.com/mudler/go-piper v0.0.0-20241023091659-2494246fd9fc/go.mod h1:O7SwdSWMilAWhBZMK9N9Y/oBDyMMzshE3ju8Xkexwig=
github.com/mudler/go-processmanager v0.0.0-20240820160718-8b802d3ecf82 h1:FVT07EI8njvsD4tC2Hw8Xhactp5AWhsQWD4oTeQuSAU=
github.com/mudler/go-processmanager v0.0.0-20240820160718-8b802d3ecf82/go.mod h1:Urp7LG5jylKoDq0663qeBh0pINGcRl35nXdKx82PSoU=
github.com/mudler/go-processmanager v0.1.0 h1:fcSKgF9U/a1Z7KofAFeZnke5YseadCI5GqL9oT0LS3E=
github.com/mudler/go-processmanager v0.1.0/go.mod h1:h6kmHUZeafr+k5hRYpGLMzJFH4hItHffgpRo2QIkP+o=
github.com/mudler/memory v0.0.0-20251216220809-d1256471a6c2 h1:+WHsL/j6EWOMUiMVIOJNKOwSKiQt/qDPc9fePCf87fA=
github.com/mudler/memory v0.0.0-20251216220809-d1256471a6c2/go.mod h1:EA8Ashhd56o32qN7ouPKFSRUs/Z+LrRCF4v6R2Oarm8=
github.com/mudler/water v0.0.0-20250808092830-dd90dcf09025 h1:WFLP5FHInarYGXi6B/Ze204x7Xy6q/I4nCZnWEyPHK0=
@@ -561,10 +561,10 @@ github.com/nxadm/tail v1.4.8 h1:nPr65rt6Y5JFSKQO7qToXr7pePgD6Gwiw05lkbyAQTE=
github.com/nxadm/tail v1.4.8/go.mod h1:+ncqLTQzXmGhMZNUePPaPqPvBxHAIsmXswZKocGu+AU=
github.com/onsi/ginkgo v1.16.5 h1:8xi0RTUf59SOSfEtZMvwTvXYMzG4gV23XVHOZiXNtnE=
github.com/onsi/ginkgo v1.16.5/go.mod h1:+E8gABHa3K6zRBolWtd+ROzc/U5bkGt0FwiG042wbpU=
github.com/onsi/ginkgo/v2 v2.27.3 h1:ICsZJ8JoYafeXFFlFAG75a7CxMsJHwgKwtO+82SE9L8=
github.com/onsi/ginkgo/v2 v2.27.3/go.mod h1:ArE1D/XhNXBXCBkKOLkbsb2c81dQHCRcF5zwn/ykDRo=
github.com/onsi/gomega v1.38.3 h1:eTX+W6dobAYfFeGC2PV6RwXRu/MyT+cQguijutvkpSM=
github.com/onsi/gomega v1.38.3/go.mod h1:ZCU1pkQcXDO5Sl9/VVEGlDyp+zm0m1cmeG5TOzLgdh4=
github.com/onsi/ginkgo/v2 v2.27.5 h1:ZeVgZMx2PDMdJm/+w5fE/OyG6ILo1Y3e+QX4zSR0zTE=
github.com/onsi/ginkgo/v2 v2.27.5/go.mod h1:ArE1D/XhNXBXCBkKOLkbsb2c81dQHCRcF5zwn/ykDRo=
github.com/onsi/gomega v1.39.0 h1:y2ROC3hKFmQZJNFeGAMeHZKkjBL65mIZcvrLQBF9k6Q=
github.com/onsi/gomega v1.39.0/go.mod h1:ZCU1pkQcXDO5Sl9/VVEGlDyp+zm0m1cmeG5TOzLgdh4=
github.com/opencontainers/go-digest v1.0.0 h1:apOUWs51W5PlhuyGyz9FCeeBIOUDA/6nW8Oi/yOhh5U=
github.com/opencontainers/go-digest v1.0.0/go.mod h1:0JzlMkj0TRzQZfJkVvzbP0HBR3IKzErnv2BNG4W4MAM=
github.com/opencontainers/image-spec v1.1.1 h1:y0fUlFfIZhPF1W537XOLg0/fcx6zcHCJwooC2xJA040=
@@ -978,8 +978,8 @@ golang.org/x/sys v0.8.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.10.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.11.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.16.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
golang.org/x/sys v0.39.0 h1:CvCKL8MeisomCi6qNZ+wbb0DN9E5AATixKsvNtMoMFk=
golang.org/x/sys v0.39.0/go.mod h1:OgkHotnGiDImocRcuBABYBEXf8A9a87e/uXjp9XT3ks=
golang.org/x/sys v0.40.0 h1:DBZZqJ2Rkml6QMQsZywtnjnnGvHza6BTfYFWY9kjEWQ=
golang.org/x/sys v0.40.0/go.mod h1:OgkHotnGiDImocRcuBABYBEXf8A9a87e/uXjp9XT3ks=
golang.org/x/telemetry v0.0.0-20251111182119-bc8e575c7b54 h1:E2/AqCUMZGgd73TQkxUMcMla25GB9i/5HOdLr+uH7Vo=
golang.org/x/telemetry v0.0.0-20251111182119-bc8e575c7b54/go.mod h1:hKdjCMrbv9skySur+Nek8Hd0uJ0GuxJIoIX2payrIdQ=
golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo=

View File

@@ -1492,7 +1492,7 @@ func ParseFunctionCall(llmresult string, functionConfig FunctionsConfig) []FuncC
results := []FuncCallResults{}
llmResults := []string{}
returnResult := func(results []string) (result []FuncCallResults, e error) {
extractJSON := func(results []string) (result []FuncCallResults, e error) {
// As we have to change the result before processing, we can't stream the answer token-by-token (yet?)
result = make([]FuncCallResults, 0)
@@ -1593,7 +1593,7 @@ func ParseFunctionCall(llmresult string, functionConfig FunctionsConfig) []FuncC
if len(llmResults) == 0 {
llmResults = append(llmResults, llmresult)
}
results, _ = returnResult(llmResults)
results, _ = extractJSON(llmResults)
}
// Determine which XML format to use (if any)
@@ -1632,8 +1632,16 @@ func ParseFunctionCall(llmresult string, functionConfig FunctionsConfig) []FuncC
// But skip if JSONRegexMatch or ResponseRegex was used (they already extracted the content)
xmlResults, err := ParseXML(llmresult, xmlFormat)
if err == nil && len(xmlResults) > 0 {
xlog.Debug("Found additional XML tool calls alongside JSON", "xml_count", len(xmlResults))
results = append(results, xmlResults...)
// Check if JSON is inside XML tags, if so, skip it
for _, result := range xmlResults {
jsonResults, _ := extractJSON([]string{result.Name})
if len(jsonResults) > 0 {
xlog.Debug("Found valid JSON inside XML tags, skipping XML parsing", "json_count", len(jsonResults))
} else {
xlog.Debug("Found additional XML tool calls alongside JSON", "xml_count", len(xmlResults))
results = append(results, xmlResults...)
}
}
}
}

View File

@@ -820,6 +820,23 @@ Final text`
Expect(results[0].Name).To(Equal("first"))
Expect(results[1].Name).To(Equal("second"))
})
It("should not duplicate parse JSON inside tool_call tags", func() {
// This test reproduces a bug where JSON inside <tool_call> tags
// gets parsed twice: once as JSON (correctly) and once as XML (incorrectly)
// The XML parser should not run when JSON parsing already found valid results
input := `<tool_call>
{"name": "get_current_weather", "arguments": {"location": "Beijing", "unit": "celsius"}}
</tool_call>`
results := ParseFunctionCall(input, functionConfig)
// Should only have 1 result, not 2 (one correct + one malformed)
Expect(results).To(HaveLen(1), "Should not create duplicate entries when JSON is inside XML tags")
Expect(results[0].Name).To(Equal("get_current_weather"))
Expect(results[0].Arguments).To(Equal(`{"location":"Beijing","unit":"celsius"}`))
// Verify the name is not the entire JSON object (which would indicate malformed XML parsing)
Expect(results[0].Name).NotTo(ContainSubstring(`{"name"`), "Function name should not contain JSON object")
})
})
Context("Iterative Parser (ChatMsgParser)", func() {

View File

@@ -1,114 +0,0 @@
package functions
import (
"strings"
)
// ExtractReasoning extracts reasoning content from thinking tags and returns
// both the extracted reasoning and the cleaned content (with tags removed).
// It handles <thinking>...</thinking> and <think>...</think> tags.
// Multiple reasoning blocks are concatenated with newlines.
func ExtractReasoning(content string) (reasoning string, cleanedContent string) {
if content == "" {
return "", content
}
var reasoningParts []string
var cleanedParts []string
remaining := content
// Define tag pairs to look for
tagPairs := []struct {
start string
end string
}{
{"<thinking>", "</thinking>"},
{"<think>", "</think>"},
}
// Track the last position we've processed
lastPos := 0
for {
// Find the earliest tag start
earliestStart := -1
earliestEnd := -1
isUnclosed := false
var matchedTag struct {
start string
end string
}
for _, tagPair := range tagPairs {
startIdx := strings.Index(remaining[lastPos:], tagPair.start)
if startIdx == -1 {
continue
}
startIdx += lastPos
// Find the corresponding end tag
endIdx := strings.Index(remaining[startIdx+len(tagPair.start):], tagPair.end)
if endIdx == -1 {
// Unclosed tag - extract what we have
if earliestStart == -1 || startIdx < earliestStart {
earliestStart = startIdx
earliestEnd = len(remaining)
isUnclosed = true
matchedTag = tagPair
}
continue
}
endIdx += startIdx + len(tagPair.start)
// Found a complete tag pair
if earliestStart == -1 || startIdx < earliestStart {
earliestStart = startIdx
earliestEnd = endIdx + len(tagPair.end)
isUnclosed = false
matchedTag = tagPair
}
}
if earliestStart == -1 {
// No more tags found, add remaining content
if lastPos < len(remaining) {
cleanedParts = append(cleanedParts, remaining[lastPos:])
}
break
}
// Add content before the tag
if earliestStart > lastPos {
cleanedParts = append(cleanedParts, remaining[lastPos:earliestStart])
}
// Extract reasoning content
reasoningStart := earliestStart + len(matchedTag.start)
// For unclosed tags, earliestEnd is already at the end of the string
// For closed tags, earliestEnd points to after the closing tag, so we subtract the end tag length
var reasoningEnd int
if isUnclosed {
// Unclosed tag - extract everything to the end
reasoningEnd = len(remaining)
} else {
// Closed tag - exclude the end tag
reasoningEnd = earliestEnd - len(matchedTag.end)
}
if reasoningEnd > reasoningStart {
reasoningContent := strings.TrimSpace(remaining[reasoningStart:reasoningEnd])
if reasoningContent != "" {
reasoningParts = append(reasoningParts, reasoningContent)
}
}
// Move past this tag
lastPos = earliestEnd
}
// Combine reasoning parts
reasoning = strings.Join(reasoningParts, "\n\n")
// Combine cleaned content parts
cleanedContent = strings.Join(cleanedParts, "")
return reasoning, cleanedContent
}

View File

@@ -1,261 +0,0 @@
package functions_test
import (
"strings"
. "github.com/mudler/LocalAI/pkg/functions"
. "github.com/onsi/ginkgo/v2"
. "github.com/onsi/gomega"
)
var _ = Describe("ExtractReasoning", func() {
Context("when content has no reasoning tags", func() {
It("should return empty reasoning and original content", func() {
content := "This is regular content without any tags."
reasoning, cleaned := ExtractReasoning(content)
Expect(reasoning).To(BeEmpty())
Expect(cleaned).To(Equal(content))
})
It("should handle empty string", func() {
content := ""
reasoning, cleaned := ExtractReasoning(content)
Expect(reasoning).To(BeEmpty())
Expect(cleaned).To(BeEmpty())
})
It("should handle content with only whitespace", func() {
content := " \n\t "
reasoning, cleaned := ExtractReasoning(content)
Expect(reasoning).To(BeEmpty())
Expect(cleaned).To(Equal(content))
})
})
Context("when content has <thinking> tags", func() {
It("should extract reasoning from single thinking block", func() {
content := "Some text <thinking>This is my reasoning</thinking> More text"
reasoning, cleaned := ExtractReasoning(content)
Expect(reasoning).To(Equal("This is my reasoning"))
Expect(cleaned).To(Equal("Some text More text"))
})
It("should extract reasoning and preserve surrounding content", func() {
content := "Before <thinking>Reasoning here</thinking> After"
reasoning, cleaned := ExtractReasoning(content)
Expect(reasoning).To(Equal("Reasoning here"))
Expect(cleaned).To(Equal("Before After"))
})
It("should handle thinking block at the start", func() {
content := "<thinking>Start reasoning</thinking> Regular content"
reasoning, cleaned := ExtractReasoning(content)
Expect(reasoning).To(Equal("Start reasoning"))
Expect(cleaned).To(Equal(" Regular content"))
})
It("should handle thinking block at the end", func() {
content := "Regular content <thinking>End reasoning</thinking>"
reasoning, cleaned := ExtractReasoning(content)
Expect(reasoning).To(Equal("End reasoning"))
Expect(cleaned).To(Equal("Regular content "))
})
It("should handle only thinking block", func() {
content := "<thinking>Only reasoning</thinking>"
reasoning, cleaned := ExtractReasoning(content)
Expect(reasoning).To(Equal("Only reasoning"))
Expect(cleaned).To(BeEmpty())
})
It("should trim whitespace from reasoning content", func() {
content := "Text <thinking> \n Reasoning with spaces \n </thinking> More"
reasoning, cleaned := ExtractReasoning(content)
Expect(reasoning).To(Equal("Reasoning with spaces"))
Expect(cleaned).To(Equal("Text More"))
})
})
Context("when content has <think> tags", func() {
It("should extract reasoning from redacted_reasoning block", func() {
content := "Text <think>Redacted reasoning</think> More"
reasoning, cleaned := ExtractReasoning(content)
Expect(reasoning).To(Equal("Redacted reasoning"))
Expect(cleaned).To(Equal("Text More"))
})
It("should handle redacted_reasoning with multiline content", func() {
content := "Before <think>Line 1\nLine 2\nLine 3</think> After"
reasoning, cleaned := ExtractReasoning(content)
Expect(reasoning).To(Equal("Line 1\nLine 2\nLine 3"))
Expect(cleaned).To(Equal("Before After"))
})
It("should handle redacted_reasoning with complex content", func() {
content := "Start <think>Complex reasoning\nwith\nmultiple\nlines</think> End"
reasoning, cleaned := ExtractReasoning(content)
Expect(reasoning).To(Equal("Complex reasoning\nwith\nmultiple\nlines"))
Expect(cleaned).To(Equal("Start End"))
})
})
Context("when content has multiple reasoning blocks", func() {
It("should concatenate multiple thinking blocks with newlines", func() {
content := "Text <thinking>First</thinking> Middle <thinking>Second</thinking> End"
reasoning, cleaned := ExtractReasoning(content)
Expect(reasoning).To(Equal("First\n\nSecond"))
Expect(cleaned).To(Equal("Text Middle End"))
})
It("should handle multiple different tag types", func() {
content := "A <thinking>One</thinking> B <think>Two</think> C <think>Three</think> D"
reasoning, cleaned := ExtractReasoning(content)
Expect(reasoning).To(ContainSubstring("One"))
Expect(reasoning).To(ContainSubstring("Two"))
Expect(reasoning).To(ContainSubstring("Three"))
Expect(cleaned).To(Equal("A B C D"))
})
It("should handle nested tags correctly (extracts first match)", func() {
content := "Text <thinking>Outer <think>Inner</think></thinking> More"
reasoning, cleaned := ExtractReasoning(content)
// Should extract the outer thinking block
Expect(reasoning).To(ContainSubstring("Outer"))
Expect(reasoning).To(ContainSubstring("Inner"))
Expect(cleaned).To(Equal("Text More"))
})
})
Context("when content has unclosed reasoning tags", func() {
It("should extract unclosed thinking block", func() {
content := "Text <thinking>Unclosed reasoning"
reasoning, cleaned := ExtractReasoning(content)
Expect(reasoning).To(Equal("Unclosed reasoning"))
Expect(cleaned).To(Equal("Text "))
})
It("should extract unclosed think block", func() {
content := "Before <think>Incomplete"
reasoning, cleaned := ExtractReasoning(content)
Expect(reasoning).To(Equal("Incomplete"))
Expect(cleaned).To(Equal("Before "))
})
It("should extract unclosed redacted_reasoning block", func() {
content := "Start <think>Partial reasoning content"
reasoning, cleaned := ExtractReasoning(content)
Expect(reasoning).To(Equal("Partial reasoning content"))
Expect(cleaned).To(Equal("Start "))
})
It("should handle unclosed tag at the end", func() {
content := "Regular content <thinking>Unclosed at end"
reasoning, cleaned := ExtractReasoning(content)
Expect(reasoning).To(Equal("Unclosed at end"))
Expect(cleaned).To(Equal("Regular content "))
})
})
Context("when content has empty reasoning blocks", func() {
It("should ignore empty thinking block", func() {
content := "Text <thinking></thinking> More"
reasoning, cleaned := ExtractReasoning(content)
Expect(reasoning).To(BeEmpty())
Expect(cleaned).To(Equal("Text More"))
})
It("should ignore thinking block with only whitespace", func() {
content := "Text <thinking> \n\t </thinking> More"
reasoning, cleaned := ExtractReasoning(content)
Expect(reasoning).To(BeEmpty())
Expect(cleaned).To(Equal("Text More"))
})
})
Context("when content has reasoning tags with special characters", func() {
It("should handle reasoning with newlines", func() {
content := "Before <thinking>Line 1\nLine 2\nLine 3</thinking> After"
reasoning, cleaned := ExtractReasoning(content)
Expect(reasoning).To(Equal("Line 1\nLine 2\nLine 3"))
Expect(cleaned).To(Equal("Before After"))
})
It("should handle reasoning with code blocks", func() {
content := "Text <thinking>Reasoning with ```code``` blocks</thinking> More"
reasoning, cleaned := ExtractReasoning(content)
Expect(reasoning).To(Equal("Reasoning with ```code``` blocks"))
Expect(cleaned).To(Equal("Text More"))
})
It("should handle reasoning with JSON", func() {
content := "Before <think>{\"key\": \"value\"}</think> After"
reasoning, cleaned := ExtractReasoning(content)
Expect(reasoning).To(Equal("{\"key\": \"value\"}"))
Expect(cleaned).To(Equal("Before After"))
})
It("should handle reasoning with HTML-like content", func() {
content := "Text <thinking>Reasoning with <tags> inside</thinking> More"
reasoning, cleaned := ExtractReasoning(content)
Expect(reasoning).To(Equal("Reasoning with <tags> inside"))
Expect(cleaned).To(Equal("Text More"))
})
})
Context("when content has reasoning mixed with regular content", func() {
It("should preserve content order correctly", func() {
content := "Start <thinking>Reasoning</thinking> Middle <think>More reasoning</think> End"
reasoning, cleaned := ExtractReasoning(content)
Expect(reasoning).To(ContainSubstring("Reasoning"))
Expect(reasoning).To(ContainSubstring("More reasoning"))
Expect(cleaned).To(Equal("Start Middle End"))
})
It("should handle reasoning in the middle of a sentence", func() {
content := "This is a <thinking>reasoning</thinking> sentence."
reasoning, cleaned := ExtractReasoning(content)
Expect(reasoning).To(Equal("reasoning"))
Expect(cleaned).To(Equal("This is a sentence."))
})
})
Context("edge cases", func() {
It("should handle content with only opening tag", func() {
content := "<thinking>"
reasoning, cleaned := ExtractReasoning(content)
Expect(reasoning).To(BeEmpty())
Expect(cleaned).To(Equal(""))
})
It("should handle content with only closing tag", func() {
content := "</thinking>"
reasoning, cleaned := ExtractReasoning(content)
Expect(reasoning).To(BeEmpty())
Expect(cleaned).To(Equal("</thinking>"))
})
It("should handle mismatched tags", func() {
content := "<thinking>Content</think>"
reasoning, cleaned := ExtractReasoning(content)
// Should extract unclosed thinking block
Expect(reasoning).To(ContainSubstring("Content"))
Expect(cleaned).To(Equal(""))
})
It("should handle very long reasoning content", func() {
longReasoning := strings.Repeat("This is reasoning content. ", 100)
content := "Text <thinking>" + longReasoning + "</thinking> More"
reasoning, cleaned := ExtractReasoning(content)
// TrimSpace is applied, so we need to account for that
Expect(reasoning).To(Equal(strings.TrimSpace(longReasoning)))
Expect(cleaned).To(Equal("Text More"))
})
It("should handle reasoning with unicode characters", func() {
content := "Text <thinking>Reasoning with 中文 and emoji 🧠</thinking> More"
reasoning, cleaned := ExtractReasoning(content)
Expect(reasoning).To(Equal("Reasoning with 中文 and emoji 🧠"))
Expect(cleaned).To(Equal("Text More"))
})
})
})

8
pkg/reasoning/config.go Normal file
View File

@@ -0,0 +1,8 @@
package reasoning
type ReasoningConfig struct {
// ThinkingForcedOpen indicates that the model outputs reasoning without an opening tag.
// When true, all content from the start is treated as reasoning until a closing tag is found.
// This is useful for models like GLM-4 that output reasoning without <think> but end with </think>.
ThinkingForcedOpen bool `yaml:"thinking_forced_open,omitempty" json:"thinking_forced_open,omitempty"`
}

18
pkg/reasoning/options.go Normal file
View File

@@ -0,0 +1,18 @@
package reasoning
// options holds the configuration for reasoning extraction
type options struct {
thinkingForcedOpen bool
}
// Option is a functional option for configuring reasoning extraction
type Option func(*options)
// WithThinkingForcedOpen configures the extractor to treat all content from the start
// as reasoning until a closing tag is found. This is useful for models like GLM-4
// that output reasoning without <think> but end with </think>.
func WithThinkingForcedOpen() Option {
return func(o *options) {
o.thinkingForcedOpen = true
}
}

256
pkg/reasoning/reasoning.go Normal file
View File

@@ -0,0 +1,256 @@
package reasoning
import (
"strings"
)
// Common thinking/reasoning opening tags used by various models.
// These match the tags detected by llama.cpp in common/chat.cpp
var thinkingOpenTags = []string{
// DeepSeek R1, V3.1, Nemotron V2, MiniMax M2, Hermes 2 Pro, Granite, Exaone MOE
"<think>\n",
"<think>",
// Generic thinking tags
"<thinking>\n",
"<thinking>",
// Apertus
"<|inner_prefix|>",
// Command R7B
"<|START_THINKING|>",
// Seed
"<seed:think>",
// Magistral (not in llama.cpp but common)
"[THINK]\n",
"[THINK]",
}
// DetectThinkingForcedOpen checks if a prompt ends with a thinking opening tag.
// This is used to automatically detect when the model template has already added
// the opening thinking tag, meaning the model will output reasoning content directly.
// Returns true if the prompt ends with a known thinking opening tag.
func DetectThinkingForcedOpen(prompt string) bool {
for _, tag := range thinkingOpenTags {
if strings.HasSuffix(prompt, tag) {
return true
}
}
return false
}
// Extract extracts reasoning content from thinking tags and returns
// both the extracted reasoning and the cleaned content (with tags removed).
// It handles <thinking>...</thinking> and <think>...</think> tags.
// Multiple reasoning blocks are concatenated with newlines.
// It also handles the case where only a closing tag is present (no opening tag),
// in which case everything before the closing tag is treated as reasoning.
//
// Use WithThinkingForcedOpen() option when all content from the start should be
// treated as reasoning until a closing tag is found.
func Extract(content string, opts ...Option) (reasoning string, cleanedContent string) {
if content == "" {
return "", content
}
cfg := &options{}
for _, opt := range opts {
opt(cfg)
}
if cfg.thinkingForcedOpen {
return extractForcedOpen(content)
}
return extractFromTags(content)
}
// extractForcedOpen handles the case where reasoning starts without an opening tag.
// All content from the start is treated as reasoning until a closing tag is found.
func extractForcedOpen(content string) (reasoning string, cleanedContent string) {
// Look for the earliest closing tag
// These match the closing tags used by llama.cpp for various models
closingTags := []string{
"</thinking>",
"</think>",
"<|END_THINKING|>", // Command R7B
"<|inner_suffix|>", // Apertus
"</seed:think>", // Seed
"[/THINK]", // Magistral
}
earliestCloseIdx := -1
var matchedCloseTag string
for _, closeTag := range closingTags {
idx := strings.Index(content, closeTag)
if idx != -1 && (earliestCloseIdx == -1 || idx < earliestCloseIdx) {
earliestCloseIdx = idx
matchedCloseTag = closeTag
}
}
if earliestCloseIdx == -1 {
// No closing tag found - all content is reasoning (still streaming)
return strings.TrimSpace(content), ""
}
// Found closing tag - everything before is reasoning, everything after is content
reasoning = strings.TrimSpace(content[:earliestCloseIdx])
cleanedContent = content[earliestCloseIdx+len(matchedCloseTag):]
// Continue processing the rest for any additional reasoning blocks
if cleanedContent != "" {
additionalReasoning, finalContent := extractFromTags(cleanedContent)
if additionalReasoning != "" {
if reasoning != "" {
reasoning = reasoning + "\n\n" + additionalReasoning
} else {
reasoning = additionalReasoning
}
}
cleanedContent = finalContent
}
return reasoning, cleanedContent
}
// extractFromTags extracts reasoning content from thinking tags.
// This is the core implementation that handles standard tag-based extraction.
func extractFromTags(content string) (reasoning string, cleanedContent string) {
if content == "" {
return "", content
}
var reasoningParts []string
var cleanedParts []string
remaining := content
// Define tag pairs to look for
// These match the tags used by llama.cpp for various models
tagPairs := []struct {
start string
end string
}{
{"<thinking>", "</thinking>"},
{"<think>", "</think>"},
{"<|START_THINKING|>", "<|END_THINKING|>"}, // Command R7B
{"<|inner_prefix|>", "<|inner_suffix|>"}, // Apertus
{"<seed:think>", "</seed:think>"}, // Seed
{"[THINK]", "[/THINK]"}, // Magistral
}
// Track the last position we've processed
lastPos := 0
for {
// Find the earliest tag start
earliestStart := -1
earliestEnd := -1
isUnclosed := false
isClosingOnly := false
var matchedTag struct {
start string
end string
}
for _, tagPair := range tagPairs {
startIdx := strings.Index(remaining[lastPos:], tagPair.start)
endIdx := strings.Index(remaining[lastPos:], tagPair.end)
// Check for closing-only tag (closing tag appears before or without opening tag)
if endIdx != -1 && (startIdx == -1 || endIdx < startIdx) {
// Found a closing tag without a preceding opening tag
closingTagPos := endIdx + lastPos
if earliestStart == -1 || closingTagPos < earliestStart || (isClosingOnly && closingTagPos < earliestEnd) {
earliestStart = lastPos
earliestEnd = closingTagPos + len(tagPair.end)
isClosingOnly = true
isUnclosed = false
matchedTag = tagPair
}
continue
}
if startIdx == -1 {
continue
}
startIdx += lastPos
// Find the corresponding end tag after the start tag
endIdxAfterStart := strings.Index(remaining[startIdx+len(tagPair.start):], tagPair.end)
if endIdxAfterStart == -1 {
// Unclosed tag - extract what we have
if earliestStart == -1 || startIdx < earliestStart {
earliestStart = startIdx
earliestEnd = len(remaining)
isUnclosed = true
isClosingOnly = false
matchedTag = tagPair
}
continue
}
endIdxAfterStart += startIdx + len(tagPair.start)
// Found a complete tag pair
if earliestStart == -1 || startIdx < earliestStart {
earliestStart = startIdx
earliestEnd = endIdxAfterStart + len(tagPair.end)
isUnclosed = false
isClosingOnly = false
matchedTag = tagPair
}
}
if earliestStart == -1 {
// No more tags found, add remaining content
if lastPos < len(remaining) {
cleanedParts = append(cleanedParts, remaining[lastPos:])
}
break
}
if isClosingOnly {
// Closing tag without opening tag - content before closing tag is reasoning
reasoningContent := strings.TrimSpace(remaining[lastPos : earliestEnd-len(matchedTag.end)])
if reasoningContent != "" {
reasoningParts = append(reasoningParts, reasoningContent)
}
// Move past the closing tag
lastPos = earliestEnd
continue
}
// Add content before the tag
if earliestStart > lastPos {
cleanedParts = append(cleanedParts, remaining[lastPos:earliestStart])
}
// Extract reasoning content
reasoningStart := earliestStart + len(matchedTag.start)
// For unclosed tags, earliestEnd is already at the end of the string
// For closed tags, earliestEnd points to after the closing tag, so we subtract the end tag length
var reasoningEnd int
if isUnclosed {
// Unclosed tag - extract everything to the end
reasoningEnd = len(remaining)
} else {
// Closed tag - exclude the end tag
reasoningEnd = earliestEnd - len(matchedTag.end)
}
if reasoningEnd > reasoningStart {
reasoningContent := strings.TrimSpace(remaining[reasoningStart:reasoningEnd])
if reasoningContent != "" {
reasoningParts = append(reasoningParts, reasoningContent)
}
}
// Move past this tag
lastPos = earliestEnd
}
// Combine reasoning parts
reasoning = strings.Join(reasoningParts, "\n\n")
// Combine cleaned content parts
cleanedContent = strings.Join(cleanedParts, "")
return reasoning, cleanedContent
}

View File

@@ -0,0 +1,13 @@
package reasoning_test
import (
"testing"
. "github.com/onsi/ginkgo/v2"
. "github.com/onsi/gomega"
)
func TestReasoning(t *testing.T) {
RegisterFailHandler(Fail)
RunSpecs(t, "Reasoning Suite")
}

View File

@@ -0,0 +1,499 @@
package reasoning_test
import (
"strings"
. "github.com/mudler/LocalAI/pkg/reasoning"
. "github.com/onsi/ginkgo/v2"
. "github.com/onsi/gomega"
)
var _ = Describe("DetectThinkingForcedOpen", func() {
It("should detect <think> at end of prompt", func() {
Expect(DetectThinkingForcedOpen("Some prompt<think>")).To(BeTrue())
Expect(DetectThinkingForcedOpen("Some prompt<think>\n")).To(BeTrue())
})
It("should detect <thinking> at end of prompt", func() {
Expect(DetectThinkingForcedOpen("Some prompt<thinking>")).To(BeTrue())
Expect(DetectThinkingForcedOpen("Some prompt<thinking>\n")).To(BeTrue())
})
It("should detect model-specific tags", func() {
Expect(DetectThinkingForcedOpen("Some prompt<|inner_prefix|>")).To(BeTrue())
Expect(DetectThinkingForcedOpen("Some prompt<|START_THINKING|>")).To(BeTrue())
Expect(DetectThinkingForcedOpen("Some prompt<seed:think>")).To(BeTrue())
Expect(DetectThinkingForcedOpen("Some prompt[THINK]")).To(BeTrue())
Expect(DetectThinkingForcedOpen("Some prompt[THINK]\n")).To(BeTrue())
})
It("should not detect if tag is in the middle", func() {
Expect(DetectThinkingForcedOpen("Some <think> prompt")).To(BeFalse())
Expect(DetectThinkingForcedOpen("<think>reasoning</think>")).To(BeFalse())
})
It("should not detect if no thinking tag", func() {
Expect(DetectThinkingForcedOpen("Some regular prompt")).To(BeFalse())
Expect(DetectThinkingForcedOpen("")).To(BeFalse())
})
})
var _ = Describe("Extract", func() {
Context("when content has no reasoning tags", func() {
It("should return empty reasoning and original content", func() {
content := "This is regular content without any tags."
reasoning, cleaned := Extract(content)
Expect(reasoning).To(BeEmpty())
Expect(cleaned).To(Equal(content))
})
It("should handle empty string", func() {
content := ""
reasoning, cleaned := Extract(content)
Expect(reasoning).To(BeEmpty())
Expect(cleaned).To(BeEmpty())
})
It("should handle content with only whitespace", func() {
content := " \n\t "
reasoning, cleaned := Extract(content)
Expect(reasoning).To(BeEmpty())
Expect(cleaned).To(Equal(content))
})
})
Context("when content has <thinking> tags", func() {
It("should extract reasoning from single thinking block", func() {
content := "Some text <thinking>This is my reasoning</thinking> More text"
reasoning, cleaned := Extract(content)
Expect(reasoning).To(Equal("This is my reasoning"))
Expect(cleaned).To(Equal("Some text More text"))
})
It("should extract reasoning and preserve surrounding content", func() {
content := "Before <thinking>Reasoning here</thinking> After"
reasoning, cleaned := Extract(content)
Expect(reasoning).To(Equal("Reasoning here"))
Expect(cleaned).To(Equal("Before After"))
})
It("should handle thinking block at the start", func() {
content := "<thinking>Start reasoning</thinking> Regular content"
reasoning, cleaned := Extract(content)
Expect(reasoning).To(Equal("Start reasoning"))
Expect(cleaned).To(Equal(" Regular content"))
})
It("should handle thinking block at the end", func() {
content := "Regular content <thinking>End reasoning</thinking>"
reasoning, cleaned := Extract(content)
Expect(reasoning).To(Equal("End reasoning"))
Expect(cleaned).To(Equal("Regular content "))
})
It("should handle only thinking block", func() {
content := "<thinking>Only reasoning</thinking>"
reasoning, cleaned := Extract(content)
Expect(reasoning).To(Equal("Only reasoning"))
Expect(cleaned).To(BeEmpty())
})
It("should trim whitespace from reasoning content", func() {
content := "Text <thinking> \n Reasoning with spaces \n </thinking> More"
reasoning, cleaned := Extract(content)
Expect(reasoning).To(Equal("Reasoning with spaces"))
Expect(cleaned).To(Equal("Text More"))
})
})
Context("when content has <think> tags", func() {
It("should extract reasoning from redacted_reasoning block", func() {
content := "Text <think>Redacted reasoning</think> More"
reasoning, cleaned := Extract(content)
Expect(reasoning).To(Equal("Redacted reasoning"))
Expect(cleaned).To(Equal("Text More"))
})
It("should handle redacted_reasoning with multiline content", func() {
content := "Before <think>Line 1\nLine 2\nLine 3</think> After"
reasoning, cleaned := Extract(content)
Expect(reasoning).To(Equal("Line 1\nLine 2\nLine 3"))
Expect(cleaned).To(Equal("Before After"))
})
It("should handle redacted_reasoning with complex content", func() {
content := "Start <think>Complex reasoning\nwith\nmultiple\nlines</think> End"
reasoning, cleaned := Extract(content)
Expect(reasoning).To(Equal("Complex reasoning\nwith\nmultiple\nlines"))
Expect(cleaned).To(Equal("Start End"))
})
})
Context("when content has multiple reasoning blocks", func() {
It("should concatenate multiple thinking blocks with newlines", func() {
content := "Text <thinking>First</thinking> Middle <thinking>Second</thinking> End"
reasoning, cleaned := Extract(content)
Expect(reasoning).To(Equal("First\n\nSecond"))
Expect(cleaned).To(Equal("Text Middle End"))
})
It("should handle multiple different tag types", func() {
content := "A <thinking>One</thinking> B <think>Two</think> C <think>Three</think> D"
reasoning, cleaned := Extract(content)
Expect(reasoning).To(ContainSubstring("One"))
Expect(reasoning).To(ContainSubstring("Two"))
Expect(reasoning).To(ContainSubstring("Three"))
Expect(cleaned).To(Equal("A B C D"))
})
It("should handle nested tags correctly (extracts first match)", func() {
content := "Text <thinking>Outer <think>Inner</think></thinking> More"
reasoning, cleaned := Extract(content)
// Should extract the outer thinking block
Expect(reasoning).To(ContainSubstring("Outer"))
Expect(reasoning).To(ContainSubstring("Inner"))
Expect(cleaned).To(Equal("Text More"))
})
})
Context("when content has unclosed reasoning tags", func() {
It("should extract unclosed thinking block", func() {
content := "Text <thinking>Unclosed reasoning"
reasoning, cleaned := Extract(content)
Expect(reasoning).To(Equal("Unclosed reasoning"))
Expect(cleaned).To(Equal("Text "))
})
It("should extract unclosed think block", func() {
content := "Before <think>Incomplete"
reasoning, cleaned := Extract(content)
Expect(reasoning).To(Equal("Incomplete"))
Expect(cleaned).To(Equal("Before "))
})
It("should extract unclosed redacted_reasoning block", func() {
content := "Start <think>Partial reasoning content"
reasoning, cleaned := Extract(content)
Expect(reasoning).To(Equal("Partial reasoning content"))
Expect(cleaned).To(Equal("Start "))
})
It("should handle unclosed tag at the end", func() {
content := "Regular content <thinking>Unclosed at end"
reasoning, cleaned := Extract(content)
Expect(reasoning).To(Equal("Unclosed at end"))
Expect(cleaned).To(Equal("Regular content "))
})
})
Context("when content has empty reasoning blocks", func() {
It("should ignore empty thinking block", func() {
content := "Text <thinking></thinking> More"
reasoning, cleaned := Extract(content)
Expect(reasoning).To(BeEmpty())
Expect(cleaned).To(Equal("Text More"))
})
It("should ignore thinking block with only whitespace", func() {
content := "Text <thinking> \n\t </thinking> More"
reasoning, cleaned := Extract(content)
Expect(reasoning).To(BeEmpty())
Expect(cleaned).To(Equal("Text More"))
})
})
Context("when content has reasoning tags with special characters", func() {
It("should handle reasoning with newlines", func() {
content := "Before <thinking>Line 1\nLine 2\nLine 3</thinking> After"
reasoning, cleaned := Extract(content)
Expect(reasoning).To(Equal("Line 1\nLine 2\nLine 3"))
Expect(cleaned).To(Equal("Before After"))
})
It("should handle reasoning with code blocks", func() {
content := "Text <thinking>Reasoning with ```code``` blocks</thinking> More"
reasoning, cleaned := Extract(content)
Expect(reasoning).To(Equal("Reasoning with ```code``` blocks"))
Expect(cleaned).To(Equal("Text More"))
})
It("should handle reasoning with JSON", func() {
content := "Before <think>{\"key\": \"value\"}</think> After"
reasoning, cleaned := Extract(content)
Expect(reasoning).To(Equal("{\"key\": \"value\"}"))
Expect(cleaned).To(Equal("Before After"))
})
It("should handle reasoning with HTML-like content", func() {
content := "Text <thinking>Reasoning with <tags> inside</thinking> More"
reasoning, cleaned := Extract(content)
Expect(reasoning).To(Equal("Reasoning with <tags> inside"))
Expect(cleaned).To(Equal("Text More"))
})
})
Context("when content has reasoning mixed with regular content", func() {
It("should preserve content order correctly", func() {
content := "Start <thinking>Reasoning</thinking> Middle <think>More reasoning</think> End"
reasoning, cleaned := Extract(content)
Expect(reasoning).To(ContainSubstring("Reasoning"))
Expect(reasoning).To(ContainSubstring("More reasoning"))
Expect(cleaned).To(Equal("Start Middle End"))
})
It("should handle reasoning in the middle of a sentence", func() {
content := "This is a <thinking>reasoning</thinking> sentence."
reasoning, cleaned := Extract(content)
Expect(reasoning).To(Equal("reasoning"))
Expect(cleaned).To(Equal("This is a sentence."))
})
})
Context("edge cases without WithThinkingForcedOpen", func() {
It("should handle content with only opening tag", func() {
content := "<thinking>"
reasoning, cleaned := Extract(content)
Expect(reasoning).To(BeEmpty())
Expect(cleaned).To(Equal(""))
})
It("should handle content with only closing tag (no content before)", func() {
content := "</thinking>"
reasoning, cleaned := Extract(content)
Expect(reasoning).To(BeEmpty())
Expect(cleaned).To(BeEmpty())
})
It("should extract reasoning when only closing tag is present", func() {
// GLM-4 style: reasoning content followed by closing tag without opening tag
content := "This is reasoning content</think>this is the actual response"
reasoning, cleaned := Extract(content)
Expect(reasoning).To(Equal("This is reasoning content"))
Expect(cleaned).To(Equal("this is the actual response"))
})
It("should handle closing-only tag with multiline reasoning", func() {
content := "1. First point\n2. Second point\n3. Third point</think>Final answer"
reasoning, cleaned := Extract(content)
Expect(reasoning).To(Equal("1. First point\n2. Second point\n3. Third point"))
Expect(cleaned).To(Equal("Final answer"))
})
It("should handle closing-only tag with complex reasoning (GLM-4 example)", func() {
content := "**Analyze the user's input:** The user says something.\n\n**Final Decision:** Output the text.</think>this is a test"
reasoning, cleaned := Extract(content)
Expect(reasoning).To(Equal("**Analyze the user's input:** The user says something.\n\n**Final Decision:** Output the text."))
Expect(cleaned).To(Equal("this is a test"))
})
It("should handle closing-only thinking tag", func() {
content := "Some reasoning here</thinking>actual content"
reasoning, cleaned := Extract(content)
Expect(reasoning).To(Equal("Some reasoning here"))
Expect(cleaned).To(Equal("actual content"))
})
It("should handle mismatched tags", func() {
content := "<thinking>Content</think>"
reasoning, cleaned := Extract(content)
// Should extract unclosed thinking block
Expect(reasoning).To(ContainSubstring("Content"))
Expect(cleaned).To(Equal(""))
})
It("should handle very long reasoning content", func() {
longReasoning := strings.Repeat("This is reasoning content. ", 100)
content := "Text <thinking>" + longReasoning + "</thinking> More"
reasoning, cleaned := Extract(content)
// TrimSpace is applied, so we need to account for that
Expect(reasoning).To(Equal(strings.TrimSpace(longReasoning)))
Expect(cleaned).To(Equal("Text More"))
})
It("should handle reasoning with unicode characters", func() {
content := "Text <thinking>Reasoning with 中文 and emoji 🧠</thinking> More"
reasoning, cleaned := Extract(content)
Expect(reasoning).To(Equal("Reasoning with 中文 and emoji 🧠"))
Expect(cleaned).To(Equal("Text More"))
})
})
Context("with WithThinkingForcedOpen option", func() {
It("should treat all content as reasoning until closing tag", func() {
content := "This is reasoning</think>this is content"
reasoning, cleaned := Extract(content, WithThinkingForcedOpen())
Expect(reasoning).To(Equal("This is reasoning"))
Expect(cleaned).To(Equal("this is content"))
})
It("should treat all content as reasoning when no closing tag (streaming)", func() {
content := "This is reasoning content still streaming"
reasoning, cleaned := Extract(content, WithThinkingForcedOpen())
Expect(reasoning).To(Equal("This is reasoning content still streaming"))
Expect(cleaned).To(BeEmpty())
})
It("should handle GLM-4 style output", func() {
content := "**Analyze:** The user says something.\n\n**Final Decision:** Output the text.</think>this is a test"
reasoning, cleaned := Extract(content, WithThinkingForcedOpen())
Expect(reasoning).To(Equal("**Analyze:** The user says something.\n\n**Final Decision:** Output the text."))
Expect(cleaned).To(Equal("this is a test"))
})
It("should handle multiline reasoning with closing tag", func() {
content := "1. First point\n2. Second point\n3. Third point</think>Final answer"
reasoning, cleaned := Extract(content, WithThinkingForcedOpen())
Expect(reasoning).To(Equal("1. First point\n2. Second point\n3. Third point"))
Expect(cleaned).To(Equal("Final answer"))
})
It("should handle </thinking> closing tag", func() {
content := "Some reasoning here</thinking>actual content"
reasoning, cleaned := Extract(content, WithThinkingForcedOpen())
Expect(reasoning).To(Equal("Some reasoning here"))
Expect(cleaned).To(Equal("actual content"))
})
It("should handle additional reasoning blocks after initial forced open", func() {
content := "Initial reasoning</think>content<think>more reasoning</think>final content"
reasoning, cleaned := Extract(content, WithThinkingForcedOpen())
Expect(reasoning).To(Equal("Initial reasoning\n\nmore reasoning"))
Expect(cleaned).To(Equal("contentfinal content"))
})
It("should handle empty content", func() {
reasoning, cleaned := Extract("", WithThinkingForcedOpen())
Expect(reasoning).To(BeEmpty())
Expect(cleaned).To(BeEmpty())
})
It("should handle only closing tag", func() {
content := "</think>only content"
reasoning, cleaned := Extract(content, WithThinkingForcedOpen())
Expect(reasoning).To(BeEmpty())
Expect(cleaned).To(Equal("only content"))
})
It("should find earliest closing tag", func() {
// </think> comes before </thinking>
content := "Reasoning</think>content</thinking>more"
reasoning, cleaned := Extract(content, WithThinkingForcedOpen())
Expect(reasoning).To(Equal("Reasoning"))
Expect(cleaned).To(Equal("content</thinking>more"))
})
It("should handle Command R7B closing tag", func() {
content := "Reasoning content<|END_THINKING|>actual response"
reasoning, cleaned := Extract(content, WithThinkingForcedOpen())
Expect(reasoning).To(Equal("Reasoning content"))
Expect(cleaned).To(Equal("actual response"))
})
It("should handle Apertus closing tag", func() {
content := "Reasoning content<|inner_suffix|>actual response"
reasoning, cleaned := Extract(content, WithThinkingForcedOpen())
Expect(reasoning).To(Equal("Reasoning content"))
Expect(cleaned).To(Equal("actual response"))
})
It("should handle Seed closing tag", func() {
content := "Reasoning content</seed:think>actual response"
reasoning, cleaned := Extract(content, WithThinkingForcedOpen())
Expect(reasoning).To(Equal("Reasoning content"))
Expect(cleaned).To(Equal("actual response"))
})
It("should handle Magistral closing tag", func() {
content := "Reasoning content[/THINK]actual response"
reasoning, cleaned := Extract(content, WithThinkingForcedOpen())
Expect(reasoning).To(Equal("Reasoning content"))
Expect(cleaned).To(Equal("actual response"))
})
})
Context("with model-specific tag pairs", func() {
It("should extract Command R7B reasoning tags", func() {
content := "Before <|START_THINKING|>reasoning here<|END_THINKING|> After"
reasoning, cleaned := Extract(content)
Expect(reasoning).To(Equal("reasoning here"))
Expect(cleaned).To(Equal("Before After"))
})
It("should extract Apertus reasoning tags", func() {
content := "Before <|inner_prefix|>reasoning here<|inner_suffix|> After"
reasoning, cleaned := Extract(content)
Expect(reasoning).To(Equal("reasoning here"))
Expect(cleaned).To(Equal("Before After"))
})
It("should extract Seed reasoning tags", func() {
content := "Before <seed:think>reasoning here</seed:think> After"
reasoning, cleaned := Extract(content)
Expect(reasoning).To(Equal("reasoning here"))
Expect(cleaned).To(Equal("Before After"))
})
It("should extract Magistral reasoning tags", func() {
content := "Before [THINK]reasoning here[/THINK] After"
reasoning, cleaned := Extract(content)
Expect(reasoning).To(Equal("reasoning here"))
Expect(cleaned).To(Equal("Before After"))
})
It("should handle unclosed Command R7B tag", func() {
content := "Before <|START_THINKING|>reasoning still streaming"
reasoning, cleaned := Extract(content)
Expect(reasoning).To(Equal("reasoning still streaming"))
Expect(cleaned).To(Equal("Before "))
})
It("should handle unclosed Apertus tag", func() {
content := "Before <|inner_prefix|>reasoning still streaming"
reasoning, cleaned := Extract(content)
Expect(reasoning).To(Equal("reasoning still streaming"))
Expect(cleaned).To(Equal("Before "))
})
It("should handle unclosed Seed tag", func() {
content := "Before <seed:think>reasoning still streaming"
reasoning, cleaned := Extract(content)
Expect(reasoning).To(Equal("reasoning still streaming"))
Expect(cleaned).To(Equal("Before "))
})
It("should handle unclosed Magistral tag", func() {
content := "Before [THINK]reasoning still streaming"
reasoning, cleaned := Extract(content)
Expect(reasoning).To(Equal("reasoning still streaming"))
Expect(cleaned).To(Equal("Before "))
})
It("should handle closing-only Command R7B tag", func() {
content := "Reasoning content<|END_THINKING|>actual response"
reasoning, cleaned := Extract(content)
Expect(reasoning).To(Equal("Reasoning content"))
Expect(cleaned).To(Equal("actual response"))
})
It("should handle closing-only Apertus tag", func() {
content := "Reasoning content<|inner_suffix|>actual response"
reasoning, cleaned := Extract(content)
Expect(reasoning).To(Equal("Reasoning content"))
Expect(cleaned).To(Equal("actual response"))
})
It("should handle closing-only Seed tag", func() {
content := "Reasoning content</seed:think>actual response"
reasoning, cleaned := Extract(content)
Expect(reasoning).To(Equal("Reasoning content"))
Expect(cleaned).To(Equal("actual response"))
})
It("should handle closing-only Magistral tag", func() {
content := "Reasoning content[/THINK]actual response"
reasoning, cleaned := Extract(content)
Expect(reasoning).To(Equal("Reasoning content"))
Expect(cleaned).To(Equal("actual response"))
})
})
})

View File

@@ -148,10 +148,10 @@ package_cuda_libs() {
done
# Copy CUDA target directory for runtime compilation support
if [ -d "/usr/local/cuda/targets" ]; then
mkdir -p "$TARGET_LIB_DIR/../cuda"
cp -arfL /usr/local/cuda/targets "$TARGET_LIB_DIR/../cuda/" 2>/dev/null || true
fi
# if [ -d "/usr/local/cuda/targets" ]; then
# mkdir -p "$TARGET_LIB_DIR/../cuda"
# cp -arfL /usr/local/cuda/targets "$TARGET_LIB_DIR/../cuda/" 2>/dev/null || true
# fi
echo "CUDA libraries packaged successfully"
}

View File

@@ -1259,6 +1259,116 @@ const docTemplate = `{
}
}
},
"/v1/responses": {
"post": {
"summary": "Create a response using the Open Responses API",
"parameters": [
{
"description": "Request body",
"name": "request",
"in": "body",
"required": true,
"schema": {
"$ref": "#/definitions/schema.OpenResponsesRequest"
}
}
],
"responses": {
"200": {
"description": "Response",
"schema": {
"$ref": "#/definitions/schema.ORResponseResource"
}
}
}
}
},
"/v1/responses/{id}": {
"get": {
"description": "Retrieve a response by ID. Can be used for polling background responses or resuming streaming responses.",
"summary": "Get a response by ID",
"parameters": [
{
"type": "string",
"description": "Response ID",
"name": "id",
"in": "path",
"required": true
},
{
"type": "string",
"description": "Set to 'true' to resume streaming",
"name": "stream",
"in": "query"
},
{
"type": "integer",
"description": "Sequence number to resume from (for streaming)",
"name": "starting_after",
"in": "query"
}
],
"responses": {
"200": {
"description": "Response",
"schema": {
"$ref": "#/definitions/schema.ORResponseResource"
}
},
"400": {
"description": "Bad Request",
"schema": {
"type": "object",
"additionalProperties": true
}
},
"404": {
"description": "Not Found",
"schema": {
"type": "object",
"additionalProperties": true
}
}
}
}
},
"/v1/responses/{id}/cancel": {
"post": {
"description": "Cancel a background response if it's still in progress",
"summary": "Cancel a response",
"parameters": [
{
"type": "string",
"description": "Response ID",
"name": "id",
"in": "path",
"required": true
}
],
"responses": {
"200": {
"description": "Response",
"schema": {
"$ref": "#/definitions/schema.ORResponseResource"
}
},
"400": {
"description": "Bad Request",
"schema": {
"type": "object",
"additionalProperties": true
}
},
"404": {
"description": "Not Found",
"schema": {
"type": "object",
"additionalProperties": true
}
}
}
}
},
"/v1/sound-generation": {
"post": {
"summary": "Generates audio from the input text.",
@@ -2507,6 +2617,322 @@ const docTemplate = `{
}
}
},
"schema.ORError": {
"type": "object",
"properties": {
"code": {
"type": "string"
},
"message": {
"type": "string"
},
"param": {
"type": "string"
},
"type": {
"description": "invalid_request|not_found|server_error|model_error|too_many_requests",
"type": "string"
}
}
},
"schema.ORFunctionTool": {
"type": "object",
"properties": {
"description": {
"type": "string"
},
"name": {
"type": "string"
},
"parameters": {
"type": "object",
"additionalProperties": true
},
"strict": {
"description": "Always include in response",
"type": "boolean"
},
"type": {
"description": "always \"function\"",
"type": "string"
}
}
},
"schema.ORIncompleteDetails": {
"type": "object",
"properties": {
"reason": {
"type": "string"
}
}
},
"schema.ORInputTokensDetails": {
"type": "object",
"properties": {
"cached_tokens": {
"description": "Always include, even if 0",
"type": "integer"
}
}
},
"schema.ORItemField": {
"type": "object",
"properties": {
"arguments": {
"type": "string"
},
"call_id": {
"description": "Function call fields",
"type": "string"
},
"content": {
"description": "string or []ORContentPart for messages"
},
"id": {
"description": "Present for all output items",
"type": "string"
},
"name": {
"type": "string"
},
"output": {
"description": "Function call output fields"
},
"role": {
"description": "Message fields",
"type": "string"
},
"status": {
"description": "in_progress|completed|incomplete",
"type": "string"
},
"type": {
"description": "message|function_call|function_call_output|reasoning|item_reference",
"type": "string"
}
}
},
"schema.OROutputTokensDetails": {
"type": "object",
"properties": {
"reasoning_tokens": {
"description": "Always include, even if 0",
"type": "integer"
}
}
},
"schema.ORReasoning": {
"type": "object",
"properties": {
"effort": {
"type": "string"
},
"summary": {
"type": "string"
}
}
},
"schema.ORReasoningParam": {
"type": "object",
"properties": {
"effort": {
"description": "\"none\"|\"low\"|\"medium\"|\"high\"|\"xhigh\"",
"type": "string"
},
"summary": {
"description": "\"auto\"|\"concise\"|\"detailed\"",
"type": "string"
}
}
},
"schema.ORResponseResource": {
"type": "object",
"properties": {
"background": {
"type": "boolean"
},
"completed_at": {
"description": "Required: present as number or null",
"type": "integer"
},
"created_at": {
"type": "integer"
},
"error": {
"description": "Always present, null if no error",
"allOf": [
{
"$ref": "#/definitions/schema.ORError"
}
]
},
"frequency_penalty": {
"type": "number"
},
"id": {
"type": "string"
},
"incomplete_details": {
"description": "Always present, null if complete",
"allOf": [
{
"$ref": "#/definitions/schema.ORIncompleteDetails"
}
]
},
"instructions": {
"type": "string"
},
"max_output_tokens": {
"type": "integer"
},
"max_tool_calls": {
"description": "nullable",
"type": "integer"
},
"metadata": {
"description": "Metadata and operational flags",
"type": "object",
"additionalProperties": {
"type": "string"
}
},
"model": {
"type": "string"
},
"object": {
"description": "always \"response\"",
"type": "string"
},
"output": {
"type": "array",
"items": {
"$ref": "#/definitions/schema.ORItemField"
}
},
"parallel_tool_calls": {
"type": "boolean"
},
"presence_penalty": {
"type": "number"
},
"previous_response_id": {
"type": "string"
},
"prompt_cache_key": {
"description": "nullable",
"type": "string"
},
"reasoning": {
"description": "nullable",
"allOf": [
{
"$ref": "#/definitions/schema.ORReasoning"
}
]
},
"safety_identifier": {
"description": "Safety and caching",
"type": "string"
},
"service_tier": {
"type": "string"
},
"status": {
"description": "in_progress|completed|failed|incomplete",
"type": "string"
},
"store": {
"type": "boolean"
},
"temperature": {
"description": "Sampling parameters (always required)",
"type": "number"
},
"text": {
"description": "Text format configuration",
"allOf": [
{
"$ref": "#/definitions/schema.ORTextConfig"
}
]
},
"tool_choice": {},
"tools": {
"description": "Tool-related fields",
"type": "array",
"items": {
"$ref": "#/definitions/schema.ORFunctionTool"
}
},
"top_logprobs": {
"description": "Default to 0",
"type": "integer"
},
"top_p": {
"type": "number"
},
"truncation": {
"description": "Truncation and reasoning",
"type": "string"
},
"usage": {
"description": "Usage statistics",
"allOf": [
{
"$ref": "#/definitions/schema.ORUsage"
}
]
}
}
},
"schema.ORTextConfig": {
"type": "object",
"properties": {
"format": {
"$ref": "#/definitions/schema.ORTextFormat"
}
}
},
"schema.ORTextFormat": {
"type": "object",
"properties": {
"type": {
"description": "\"text\" or \"json_schema\"",
"type": "string"
}
}
},
"schema.ORUsage": {
"type": "object",
"properties": {
"input_tokens": {
"type": "integer"
},
"input_tokens_details": {
"description": "Always present",
"allOf": [
{
"$ref": "#/definitions/schema.ORInputTokensDetails"
}
]
},
"output_tokens": {
"type": "integer"
},
"output_tokens_details": {
"description": "Always present",
"allOf": [
{
"$ref": "#/definitions/schema.OROutputTokensDetails"
}
]
},
"total_tokens": {
"type": "integer"
}
}
},
"schema.OpenAIModel": {
"type": "object",
"properties": {
@@ -2781,6 +3207,114 @@ const docTemplate = `{
}
}
},
"schema.OpenResponsesRequest": {
"type": "object",
"properties": {
"allowed_tools": {
"description": "Restrict which tools can be invoked",
"type": "array",
"items": {
"type": "string"
}
},
"background": {
"description": "Run request in background",
"type": "boolean"
},
"frequency_penalty": {
"description": "Frequency penalty (-2.0 to 2.0)",
"type": "number"
},
"include": {
"description": "What to include in response",
"type": "array",
"items": {
"type": "string"
}
},
"input": {
"description": "string or []ORItemParam"
},
"instructions": {
"type": "string"
},
"logit_bias": {
"description": "OpenAI-compatible extensions (not in Open Responses spec)",
"type": "object",
"additionalProperties": {
"type": "number",
"format": "float64"
}
},
"max_output_tokens": {
"type": "integer"
},
"max_tool_calls": {
"description": "Maximum number of tool calls",
"type": "integer"
},
"metadata": {
"type": "object",
"additionalProperties": {
"type": "string"
}
},
"model": {
"type": "string"
},
"parallel_tool_calls": {
"description": "Allow parallel tool calls",
"type": "boolean"
},
"presence_penalty": {
"description": "Presence penalty (-2.0 to 2.0)",
"type": "number"
},
"previous_response_id": {
"type": "string"
},
"reasoning": {
"$ref": "#/definitions/schema.ORReasoningParam"
},
"service_tier": {
"description": "\"auto\"|\"default\"|priority hint",
"type": "string"
},
"store": {
"description": "Whether to store the response",
"type": "boolean"
},
"stream": {
"type": "boolean"
},
"temperature": {
"type": "number"
},
"text_format": {
"description": "Additional parameters from spec"
},
"tool_choice": {
"description": "\"auto\"|\"required\"|\"none\"|{type:\"function\",name:\"...\"}"
},
"tools": {
"type": "array",
"items": {
"$ref": "#/definitions/schema.ORFunctionTool"
}
},
"top_logprobs": {
"description": "Number of top logprobs to return",
"type": "integer"
},
"top_p": {
"type": "number"
},
"truncation": {
"description": "\"auto\"|\"disabled\"",
"type": "string"
}
}
},
"schema.P2PNodesResponse": {
"type": "object",
"properties": {

View File

@@ -1252,6 +1252,116 @@
}
}
},
"/v1/responses": {
"post": {
"summary": "Create a response using the Open Responses API",
"parameters": [
{
"description": "Request body",
"name": "request",
"in": "body",
"required": true,
"schema": {
"$ref": "#/definitions/schema.OpenResponsesRequest"
}
}
],
"responses": {
"200": {
"description": "Response",
"schema": {
"$ref": "#/definitions/schema.ORResponseResource"
}
}
}
}
},
"/v1/responses/{id}": {
"get": {
"description": "Retrieve a response by ID. Can be used for polling background responses or resuming streaming responses.",
"summary": "Get a response by ID",
"parameters": [
{
"type": "string",
"description": "Response ID",
"name": "id",
"in": "path",
"required": true
},
{
"type": "string",
"description": "Set to 'true' to resume streaming",
"name": "stream",
"in": "query"
},
{
"type": "integer",
"description": "Sequence number to resume from (for streaming)",
"name": "starting_after",
"in": "query"
}
],
"responses": {
"200": {
"description": "Response",
"schema": {
"$ref": "#/definitions/schema.ORResponseResource"
}
},
"400": {
"description": "Bad Request",
"schema": {
"type": "object",
"additionalProperties": true
}
},
"404": {
"description": "Not Found",
"schema": {
"type": "object",
"additionalProperties": true
}
}
}
}
},
"/v1/responses/{id}/cancel": {
"post": {
"description": "Cancel a background response if it's still in progress",
"summary": "Cancel a response",
"parameters": [
{
"type": "string",
"description": "Response ID",
"name": "id",
"in": "path",
"required": true
}
],
"responses": {
"200": {
"description": "Response",
"schema": {
"$ref": "#/definitions/schema.ORResponseResource"
}
},
"400": {
"description": "Bad Request",
"schema": {
"type": "object",
"additionalProperties": true
}
},
"404": {
"description": "Not Found",
"schema": {
"type": "object",
"additionalProperties": true
}
}
}
}
},
"/v1/sound-generation": {
"post": {
"summary": "Generates audio from the input text.",
@@ -2500,6 +2610,322 @@
}
}
},
"schema.ORError": {
"type": "object",
"properties": {
"code": {
"type": "string"
},
"message": {
"type": "string"
},
"param": {
"type": "string"
},
"type": {
"description": "invalid_request|not_found|server_error|model_error|too_many_requests",
"type": "string"
}
}
},
"schema.ORFunctionTool": {
"type": "object",
"properties": {
"description": {
"type": "string"
},
"name": {
"type": "string"
},
"parameters": {
"type": "object",
"additionalProperties": true
},
"strict": {
"description": "Always include in response",
"type": "boolean"
},
"type": {
"description": "always \"function\"",
"type": "string"
}
}
},
"schema.ORIncompleteDetails": {
"type": "object",
"properties": {
"reason": {
"type": "string"
}
}
},
"schema.ORInputTokensDetails": {
"type": "object",
"properties": {
"cached_tokens": {
"description": "Always include, even if 0",
"type": "integer"
}
}
},
"schema.ORItemField": {
"type": "object",
"properties": {
"arguments": {
"type": "string"
},
"call_id": {
"description": "Function call fields",
"type": "string"
},
"content": {
"description": "string or []ORContentPart for messages"
},
"id": {
"description": "Present for all output items",
"type": "string"
},
"name": {
"type": "string"
},
"output": {
"description": "Function call output fields"
},
"role": {
"description": "Message fields",
"type": "string"
},
"status": {
"description": "in_progress|completed|incomplete",
"type": "string"
},
"type": {
"description": "message|function_call|function_call_output|reasoning|item_reference",
"type": "string"
}
}
},
"schema.OROutputTokensDetails": {
"type": "object",
"properties": {
"reasoning_tokens": {
"description": "Always include, even if 0",
"type": "integer"
}
}
},
"schema.ORReasoning": {
"type": "object",
"properties": {
"effort": {
"type": "string"
},
"summary": {
"type": "string"
}
}
},
"schema.ORReasoningParam": {
"type": "object",
"properties": {
"effort": {
"description": "\"none\"|\"low\"|\"medium\"|\"high\"|\"xhigh\"",
"type": "string"
},
"summary": {
"description": "\"auto\"|\"concise\"|\"detailed\"",
"type": "string"
}
}
},
"schema.ORResponseResource": {
"type": "object",
"properties": {
"background": {
"type": "boolean"
},
"completed_at": {
"description": "Required: present as number or null",
"type": "integer"
},
"created_at": {
"type": "integer"
},
"error": {
"description": "Always present, null if no error",
"allOf": [
{
"$ref": "#/definitions/schema.ORError"
}
]
},
"frequency_penalty": {
"type": "number"
},
"id": {
"type": "string"
},
"incomplete_details": {
"description": "Always present, null if complete",
"allOf": [
{
"$ref": "#/definitions/schema.ORIncompleteDetails"
}
]
},
"instructions": {
"type": "string"
},
"max_output_tokens": {
"type": "integer"
},
"max_tool_calls": {
"description": "nullable",
"type": "integer"
},
"metadata": {
"description": "Metadata and operational flags",
"type": "object",
"additionalProperties": {
"type": "string"
}
},
"model": {
"type": "string"
},
"object": {
"description": "always \"response\"",
"type": "string"
},
"output": {
"type": "array",
"items": {
"$ref": "#/definitions/schema.ORItemField"
}
},
"parallel_tool_calls": {
"type": "boolean"
},
"presence_penalty": {
"type": "number"
},
"previous_response_id": {
"type": "string"
},
"prompt_cache_key": {
"description": "nullable",
"type": "string"
},
"reasoning": {
"description": "nullable",
"allOf": [
{
"$ref": "#/definitions/schema.ORReasoning"
}
]
},
"safety_identifier": {
"description": "Safety and caching",
"type": "string"
},
"service_tier": {
"type": "string"
},
"status": {
"description": "in_progress|completed|failed|incomplete",
"type": "string"
},
"store": {
"type": "boolean"
},
"temperature": {
"description": "Sampling parameters (always required)",
"type": "number"
},
"text": {
"description": "Text format configuration",
"allOf": [
{
"$ref": "#/definitions/schema.ORTextConfig"
}
]
},
"tool_choice": {},
"tools": {
"description": "Tool-related fields",
"type": "array",
"items": {
"$ref": "#/definitions/schema.ORFunctionTool"
}
},
"top_logprobs": {
"description": "Default to 0",
"type": "integer"
},
"top_p": {
"type": "number"
},
"truncation": {
"description": "Truncation and reasoning",
"type": "string"
},
"usage": {
"description": "Usage statistics",
"allOf": [
{
"$ref": "#/definitions/schema.ORUsage"
}
]
}
}
},
"schema.ORTextConfig": {
"type": "object",
"properties": {
"format": {
"$ref": "#/definitions/schema.ORTextFormat"
}
}
},
"schema.ORTextFormat": {
"type": "object",
"properties": {
"type": {
"description": "\"text\" or \"json_schema\"",
"type": "string"
}
}
},
"schema.ORUsage": {
"type": "object",
"properties": {
"input_tokens": {
"type": "integer"
},
"input_tokens_details": {
"description": "Always present",
"allOf": [
{
"$ref": "#/definitions/schema.ORInputTokensDetails"
}
]
},
"output_tokens": {
"type": "integer"
},
"output_tokens_details": {
"description": "Always present",
"allOf": [
{
"$ref": "#/definitions/schema.OROutputTokensDetails"
}
]
},
"total_tokens": {
"type": "integer"
}
}
},
"schema.OpenAIModel": {
"type": "object",
"properties": {
@@ -2774,6 +3200,114 @@
}
}
},
"schema.OpenResponsesRequest": {
"type": "object",
"properties": {
"allowed_tools": {
"description": "Restrict which tools can be invoked",
"type": "array",
"items": {
"type": "string"
}
},
"background": {
"description": "Run request in background",
"type": "boolean"
},
"frequency_penalty": {
"description": "Frequency penalty (-2.0 to 2.0)",
"type": "number"
},
"include": {
"description": "What to include in response",
"type": "array",
"items": {
"type": "string"
}
},
"input": {
"description": "string or []ORItemParam"
},
"instructions": {
"type": "string"
},
"logit_bias": {
"description": "OpenAI-compatible extensions (not in Open Responses spec)",
"type": "object",
"additionalProperties": {
"type": "number",
"format": "float64"
}
},
"max_output_tokens": {
"type": "integer"
},
"max_tool_calls": {
"description": "Maximum number of tool calls",
"type": "integer"
},
"metadata": {
"type": "object",
"additionalProperties": {
"type": "string"
}
},
"model": {
"type": "string"
},
"parallel_tool_calls": {
"description": "Allow parallel tool calls",
"type": "boolean"
},
"presence_penalty": {
"description": "Presence penalty (-2.0 to 2.0)",
"type": "number"
},
"previous_response_id": {
"type": "string"
},
"reasoning": {
"$ref": "#/definitions/schema.ORReasoningParam"
},
"service_tier": {
"description": "\"auto\"|\"default\"|priority hint",
"type": "string"
},
"store": {
"description": "Whether to store the response",
"type": "boolean"
},
"stream": {
"type": "boolean"
},
"temperature": {
"type": "number"
},
"text_format": {
"description": "Additional parameters from spec"
},
"tool_choice": {
"description": "\"auto\"|\"required\"|\"none\"|{type:\"function\",name:\"...\"}"
},
"tools": {
"type": "array",
"items": {
"$ref": "#/definitions/schema.ORFunctionTool"
}
},
"top_logprobs": {
"description": "Number of top logprobs to return",
"type": "integer"
},
"top_p": {
"type": "number"
},
"truncation": {
"description": "\"auto\"|\"disabled\"",
"type": "string"
}
}
},
"schema.P2PNodesResponse": {
"type": "object",
"properties": {

View File

@@ -742,6 +742,212 @@ definitions:
tunnelAddress:
type: string
type: object
schema.ORError:
properties:
code:
type: string
message:
type: string
param:
type: string
type:
description: invalid_request|not_found|server_error|model_error|too_many_requests
type: string
type: object
schema.ORFunctionTool:
properties:
description:
type: string
name:
type: string
parameters:
additionalProperties: true
type: object
strict:
description: Always include in response
type: boolean
type:
description: always "function"
type: string
type: object
schema.ORIncompleteDetails:
properties:
reason:
type: string
type: object
schema.ORInputTokensDetails:
properties:
cached_tokens:
description: Always include, even if 0
type: integer
type: object
schema.ORItemField:
properties:
arguments:
type: string
call_id:
description: Function call fields
type: string
content:
description: string or []ORContentPart for messages
id:
description: Present for all output items
type: string
name:
type: string
output:
description: Function call output fields
role:
description: Message fields
type: string
status:
description: in_progress|completed|incomplete
type: string
type:
description: message|function_call|function_call_output|reasoning|item_reference
type: string
type: object
schema.OROutputTokensDetails:
properties:
reasoning_tokens:
description: Always include, even if 0
type: integer
type: object
schema.ORReasoning:
properties:
effort:
type: string
summary:
type: string
type: object
schema.ORReasoningParam:
properties:
effort:
description: '"none"|"low"|"medium"|"high"|"xhigh"'
type: string
summary:
description: '"auto"|"concise"|"detailed"'
type: string
type: object
schema.ORResponseResource:
properties:
background:
type: boolean
completed_at:
description: 'Required: present as number or null'
type: integer
created_at:
type: integer
error:
allOf:
- $ref: '#/definitions/schema.ORError'
description: Always present, null if no error
frequency_penalty:
type: number
id:
type: string
incomplete_details:
allOf:
- $ref: '#/definitions/schema.ORIncompleteDetails'
description: Always present, null if complete
instructions:
type: string
max_output_tokens:
type: integer
max_tool_calls:
description: nullable
type: integer
metadata:
additionalProperties:
type: string
description: Metadata and operational flags
type: object
model:
type: string
object:
description: always "response"
type: string
output:
items:
$ref: '#/definitions/schema.ORItemField'
type: array
parallel_tool_calls:
type: boolean
presence_penalty:
type: number
previous_response_id:
type: string
prompt_cache_key:
description: nullable
type: string
reasoning:
allOf:
- $ref: '#/definitions/schema.ORReasoning'
description: nullable
safety_identifier:
description: Safety and caching
type: string
service_tier:
type: string
status:
description: in_progress|completed|failed|incomplete
type: string
store:
type: boolean
temperature:
description: Sampling parameters (always required)
type: number
text:
allOf:
- $ref: '#/definitions/schema.ORTextConfig'
description: Text format configuration
tool_choice: {}
tools:
description: Tool-related fields
items:
$ref: '#/definitions/schema.ORFunctionTool'
type: array
top_logprobs:
description: Default to 0
type: integer
top_p:
type: number
truncation:
description: Truncation and reasoning
type: string
usage:
allOf:
- $ref: '#/definitions/schema.ORUsage'
description: Usage statistics
type: object
schema.ORTextConfig:
properties:
format:
$ref: '#/definitions/schema.ORTextFormat'
type: object
schema.ORTextFormat:
properties:
type:
description: '"text" or "json_schema"'
type: string
type: object
schema.ORUsage:
properties:
input_tokens:
type: integer
input_tokens_details:
allOf:
- $ref: '#/definitions/schema.ORInputTokensDetails'
description: Always present
output_tokens:
type: integer
output_tokens_details:
allOf:
- $ref: '#/definitions/schema.OROutputTokensDetails'
description: Always present
total_tokens:
type: integer
type: object
schema.OpenAIModel:
properties:
id:
@@ -936,6 +1142,82 @@ definitions:
total_tokens:
type: integer
type: object
schema.OpenResponsesRequest:
properties:
allowed_tools:
description: Restrict which tools can be invoked
items:
type: string
type: array
background:
description: Run request in background
type: boolean
frequency_penalty:
description: Frequency penalty (-2.0 to 2.0)
type: number
include:
description: What to include in response
items:
type: string
type: array
input:
description: string or []ORItemParam
instructions:
type: string
logit_bias:
additionalProperties:
format: float64
type: number
description: OpenAI-compatible extensions (not in Open Responses spec)
type: object
max_output_tokens:
type: integer
max_tool_calls:
description: Maximum number of tool calls
type: integer
metadata:
additionalProperties:
type: string
type: object
model:
type: string
parallel_tool_calls:
description: Allow parallel tool calls
type: boolean
presence_penalty:
description: Presence penalty (-2.0 to 2.0)
type: number
previous_response_id:
type: string
reasoning:
$ref: '#/definitions/schema.ORReasoningParam'
service_tier:
description: '"auto"|"default"|priority hint'
type: string
store:
description: Whether to store the response
type: boolean
stream:
type: boolean
temperature:
type: number
text_format:
description: Additional parameters from spec
tool_choice:
description: '"auto"|"required"|"none"|{type:"function",name:"..."}'
tools:
items:
$ref: '#/definitions/schema.ORFunctionTool'
type: array
top_logprobs:
description: Number of top logprobs to return
type: integer
top_p:
type: number
truncation:
description: '"auto"|"disabled"'
type: string
type: object
schema.P2PNodesResponse:
properties:
federated_nodes:
@@ -1962,6 +2244,80 @@ paths:
schema:
$ref: '#/definitions/schema.JINARerankResponse'
summary: Reranks a list of phrases by relevance to a given text query.
/v1/responses:
post:
parameters:
- description: Request body
in: body
name: request
required: true
schema:
$ref: '#/definitions/schema.OpenResponsesRequest'
responses:
"200":
description: Response
schema:
$ref: '#/definitions/schema.ORResponseResource'
summary: Create a response using the Open Responses API
/v1/responses/{id}:
get:
description: Retrieve a response by ID. Can be used for polling background responses
or resuming streaming responses.
parameters:
- description: Response ID
in: path
name: id
required: true
type: string
- description: Set to 'true' to resume streaming
in: query
name: stream
type: string
- description: Sequence number to resume from (for streaming)
in: query
name: starting_after
type: integer
responses:
"200":
description: Response
schema:
$ref: '#/definitions/schema.ORResponseResource'
"400":
description: Bad Request
schema:
additionalProperties: true
type: object
"404":
description: Not Found
schema:
additionalProperties: true
type: object
summary: Get a response by ID
/v1/responses/{id}/cancel:
post:
description: Cancel a background response if it's still in progress
parameters:
- description: Response ID
in: path
name: id
required: true
type: string
responses:
"200":
description: Response
schema:
$ref: '#/definitions/schema.ORResponseResource'
"400":
description: Bad Request
schema:
additionalProperties: true
type: object
"404":
description: Not Found
schema:
additionalProperties: true
type: object
summary: Cancel a response
/v1/sound-generation:
post:
parameters: