Compare commits


230 Commits

Author SHA1 Message Date
Ettore Di Giacinto
2adddef5fe Address feedback from review
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2026-01-02 21:34:23 +01:00
majiayu000
d89c7b731a fix: resolve duplicate MCP route registration causing 50% failure rate
Fixes #7772

The issue was caused by duplicate registration of the MCP endpoint
/mcp/v1/chat/completions in both openai.go and localai.go, leading
to a race condition where requests would randomly hit different
handlers with incompatible behaviors.

Changes:
- Removed duplicate MCP route registration from openai.go
- Kept the localai.MCPStreamEndpoint as the canonical handler
- Added all three MCP route patterns for backward compatibility:
  * /v1/mcp/chat/completions
  * /mcp/v1/chat/completions
  * /mcp/chat/completions
- Added comments to clarify route ownership and prevent future conflicts
- Fixed formatting in ui_api.go

The localai.MCPStreamEndpoint handler is more feature-complete as it
supports both streaming and non-streaming modes, while the removed
openai.MCPCompletionEndpoint only supported synchronous requests.

This eliminates the ~50% failure rate where the cogito library would
receive "Invalid http method" errors when internal HTTP requests were
routed to the wrong handler.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
Signed-off-by: majiayu000 <1835304752@qq.com>
2026-01-02 21:29:05 +01:00
Ettore Di Giacinto
5f6c941399 fix(llama.cpp/mmproj): fix loading mmproj in nested sub-dirs different from model path (#7832)
fix(mmproj): fix loading mmproj in nested sub-dirs

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2026-01-02 20:17:30 +01:00
LocalAI [bot]
1639fc6309 chore(model gallery): 🤖 add 1 new models via gallery agent (#7831)
chore(model gallery): 🤖 add new models via gallery agent

Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2026-01-02 15:10:00 +01:00
Ettore Di Giacinto
841e8f6d47 fix(image-gen): fix scrolling issues (#7829)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2026-01-02 09:05:49 +01:00
LocalAI [bot]
fd152c97c0 chore(model gallery): 🤖 add 1 new models via gallery agent (#7826)
chore(model gallery): 🤖 add new models via gallery agent

Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2026-01-02 08:45:43 +01:00
LocalAI [bot]
949de04052 chore: ⬆️ Update ggml-org/llama.cpp to ced765be44ce173c374f295b3c6f4175f8fd109b (#7822)
⬆️ Update ggml-org/llama.cpp

Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2026-01-02 08:44:49 +01:00
Ettore Di Giacinto
76cfe1f367 feat(image-gen/UI): move controls to the left, make the page more compact (#7823)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2026-01-01 22:07:42 +01:00
LocalAI [bot]
5ee6c1810b feat(swagger): update swagger (#7820)
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
Co-authored-by: Ettore Di Giacinto <mudler@users.noreply.github.com>
2026-01-01 21:16:38 +01:00
LocalAI [bot]
7db79aadfa chore(model-gallery): ⬆️ update checksum (#7821)
⬆️ Checksum updates in gallery/index.yaml

Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2026-01-01 21:16:11 +01:00
nold
dee48679b4 Fix(gallery): Updated checksums for qwen3-vl-30b instruct & thinking (#7819)
* Fix(gallery): SHA256 hashes for qwen3-vl-30b-instruct

Signed-off-by: nold <Nold360@users.noreply.github.com>

* Fix(gallery): SHA256 checksums for qwen3-vl-30b-thinking

Signed-off-by: nold <Nold360@users.noreply.github.com>

---------

Signed-off-by: nold <Nold360@users.noreply.github.com>
2026-01-01 20:33:55 +01:00
LocalAI [bot]
94b47a9310 chore(model gallery): 🤖 add 1 new models via gallery agent (#7816)
chore(model gallery): 🤖 add new models via gallery agent

Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2026-01-01 19:20:26 +01:00
LocalAI [bot]
bc3e8793ed chore: ⬆️ Update ggml-org/llama.cpp to 13814eb370d2f0b70e1830cc577b6155b17aee47 (#7809)
⬆️ Update ggml-org/llama.cpp

Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2025-12-31 23:04:01 +01:00
LocalAI [bot]
91978bb3a5 chore: ⬆️ Update ggml-org/whisper.cpp to e9898ddfb908ffaa7026c66852a023889a5a7202 (#7810)
⬆️ Update ggml-org/whisper.cpp

Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2025-12-31 22:59:05 +01:00
Ettore Di Giacinto
797f27f09f feat(UI): image generation improvements (#7804)
* chore: drop mode from image generation(unused)

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* feat(UI): improve image generation front-end

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* feat(UI): only ref images. files is to be deprecated

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* do not override default steps

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

---------

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-12-31 21:59:46 +01:00
LocalAI [bot]
3f1631aa87 chore(model gallery): 🤖 add 1 new models via gallery agent (#7807)
chore(model gallery): 🤖 add new models via gallery agent

Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2025-12-31 19:29:59 +01:00
LocalAI [bot]
dad509637e chore(model gallery): 🤖 add 1 new models via gallery agent (#7801)
chore(model gallery): 🤖 add new models via gallery agent

Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2025-12-31 09:18:35 +01:00
LocalAI [bot]
218f3a126a chore: ⬆️ Update ggml-org/llama.cpp to 0f89d2ecf14270f45f43c442e90ae433fd82dab1 (#7795)
⬆️ Update ggml-org/llama.cpp

Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2025-12-31 08:53:41 +01:00
Ettore Di Giacinto
be77a845fa fix(gallery agent): change model
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-12-30 22:34:25 +00:00
Ettore Di Giacinto
ca32286022 fix(gallery agent): change model
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-12-30 22:27:48 +00:00
Ettore Di Giacinto
1f592505dd fix(gallery agent): change model
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-12-30 22:22:45 +00:00
Ettore Di Giacinto
b3bc623eb3 fix(gallery agent): fixups
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-12-30 22:18:02 +00:00
Ettore Di Giacinto
e56391cf14 Add individual sponsors acknowledgment in README
Added a section to acknowledge individual sponsors and their contributions.

Signed-off-by: Ettore Di Giacinto <mudler@users.noreply.github.com>
2025-12-30 23:01:22 +01:00
Ettore Di Giacinto
ef3ffe4a4e fix(gallery agent): fixups
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-12-30 21:56:54 +00:00
Ettore Di Giacinto
3cffde2cd5 fix(gallery agent): skip model selection if only one
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-12-30 21:53:37 +00:00
LocalAI [bot]
234bf7e2ad feat(swagger): update swagger (#7794)
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2025-12-30 21:05:01 +00:00
lif
ba73d2e759 fix: Failed to download checksums.txt when using launch to install localai (#7788)
* fix: add retry logic and fallback for checksums.txt download

- Add HTTP client with 30s timeout to ReleaseManager
- Implement downloadFileWithRetry with 3 attempts and exponential backoff
- Allow manual checksum placement at ~/.localai/checksums/checksums-<version>.txt
- Continue installation with warning if checksum download/verification fails
- Add test for HTTPClient initialization
- Fix linter error in systray_manager.go

Fixes #7385

Signed-off-by: majiayu000 <1835304752@qq.com>

* fix: add retry logic and improve checksums.txt download handling

This commit addresses issue #7385 by implementing:
- Retry logic (3 attempts) for checksum file downloads
- Fallback to manually placed checksum files
- Option to proceed with installation if checksums unavailable (with warnings)
- Fixed resource leaks in download retry loop
- Added configurable HTTP client with 30s timeout

The installation will now be more resilient to network issues while
maintaining security through checksum verification when available.

Signed-off-by: majiayu000 <1835304752@qq.com>

* fix: check for existing checksum file before downloading

This commit addresses the review feedback from mudler on PR #7788.
The code now checks if there's already a checksum file (either manually
placed or previously downloaded) and honors that, skipping download
entirely in such case.

Changes:
- Check for existing checksum file at ~/.localai/checksums/checksums-<version>.txt first
- Check for existing downloaded checksum file at binary path
- Only attempt to download if no existing checksum file is found
- This prevents unnecessary network requests and honors user-placed checksums

Signed-off-by: majiayu000 <1835304752@qq.com>

🤖 Generated with Claude Code

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>

---------

Signed-off-by: majiayu000 <1835304752@qq.com>
Co-authored-by: Claude Sonnet 4.5 <noreply@anthropic.com>
2025-12-30 18:33:44 +01:00
Ettore Di Giacinto
592697216b Revert "chore(deps): bump securego/gosec from 2.22.9 to 2.22.11" (#7789)
Revert "chore(deps): bump securego/gosec from 2.22.9 to 2.22.11 (#7774)"

This reverts commit 0c16f55b45.
2025-12-30 09:58:13 +01:00
lif
8bd7143a44 fix: propagate validation errors (#7787)
fix: validate MCP configuration in model config

Fixes #7334

The Validate() function was not checking if MCP configuration
(mcp.stdio and mcp.remote) contains valid JSON. This caused
malformed JSON with missing commas to be silently accepted.

Changes:
- Add MCP configuration validation to ModelConfig.Validate()
- Properly report validation errors instead of discarding them
- Add test cases for valid and invalid MCP configurations

The fix ensures that malformed JSON in MCP config sections
will now be caught and reported during validation.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Signed-off-by: majiayu000 <1835304752@qq.com>
Co-authored-by: Claude Sonnet 4.5 <noreply@anthropic.com>
2025-12-30 09:54:27 +01:00
lif
0d0ef0121c fix: Usage for image generation is incorrect (and causes error in LiteLLM) (#7786)
* fix: Add usage fields to image generation response for OpenAI API compatibility

Fixes #7354

Added input_tokens, output_tokens, and input_tokens_details fields to the
image generation API response to comply with OpenAI's image generation API
specification. This resolves validation errors in LiteLLM and the OpenAI SDK.

Changes:
- Added InputTokensDetails struct with text_tokens and image_tokens fields
- Extended OpenAIUsage struct with input_tokens, output_tokens, and input_tokens_details
- Updated ImageEndpoint to populate usage object with required fields
- Updated InpaintingEndpoint to populate usage object with required fields
- All fields initialized to 0 as per current behavior

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
Signed-off-by: majiayu000 <1835304752@qq.com>

* fix: Correct usage field types for image generation API compatibility

Changed InputTokens and OutputTokens from pointer types (*int) to
regular int types to match OpenAI API specification. This fixes
validation errors with LiteLLM and OpenAI SDK when parsing image
generation responses.

Fixes #7354

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
Signed-off-by: majiayu000 <1835304752@qq.com>

---------

Signed-off-by: majiayu000 <1835304752@qq.com>
Co-authored-by: Claude Sonnet 4.5 <noreply@anthropic.com>
2025-12-30 09:53:05 +01:00
lif
d7b2eee08f fix: add nil checks before mergo.Merge to prevent panic in gallery model installation (#7785)
Fixes #7420

Added nil checks before calling mergo.Merge in InstallModelFromGallery and InstallModel
functions to prevent panic when req.Overrides or configOverrides are nil. The panic was
occurring at models.go:248 during Qwen-Image-Edit gallery model download.

Changes:
- Added nil check for req.Overrides before merging in InstallModelFromGallery (line 126)
- Added nil check for configOverrides before merging in InstallModel (line 248)
- Added test case to verify nil configOverrides are handled without panic

Signed-off-by: majiayu000 <1835304752@qq.com>
2025-12-30 09:51:45 +01:00
LocalAI [bot]
bc8ec5cb39 chore: ⬆️ Update ggml-org/llama.cpp to c9a3b40d6578f2381a1373d10249403d58c3c5bd (#7778)
⬆️ Update ggml-org/llama.cpp

Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2025-12-30 08:27:16 +01:00
dependabot[bot]
3f38fecdfc chore(deps): bump github.com/modelcontextprotocol/go-sdk from 1.1.0 to 1.2.0 (#7776)
chore(deps): bump github.com/modelcontextprotocol/go-sdk

Bumps [github.com/modelcontextprotocol/go-sdk](https://github.com/modelcontextprotocol/go-sdk) from 1.1.0 to 1.2.0.
- [Release notes](https://github.com/modelcontextprotocol/go-sdk/releases)
- [Commits](https://github.com/modelcontextprotocol/go-sdk/compare/v1.1.0...v1.2.0)

---
updated-dependencies:
- dependency-name: github.com/modelcontextprotocol/go-sdk
  dependency-version: 1.2.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-12-29 22:15:29 +01:00
dependabot[bot]
20a4199229 chore(deps): bump github.com/schollz/progressbar/v3 from 3.18.0 to 3.19.0 (#7775)
chore(deps): bump github.com/schollz/progressbar/v3

Bumps [github.com/schollz/progressbar/v3](https://github.com/schollz/progressbar) from 3.18.0 to 3.19.0.
- [Release notes](https://github.com/schollz/progressbar/releases)
- [Commits](https://github.com/schollz/progressbar/compare/v3.18.0...v3.19.0)

---
updated-dependencies:
- dependency-name: github.com/schollz/progressbar/v3
  dependency-version: 3.19.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-12-29 22:15:11 +01:00
Ettore Di Giacinto
ded9955881 chore(ci): do not select models if we have only 1 result
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-12-29 22:14:14 +01:00
dependabot[bot]
cf78f9a2a8 chore(deps): bump google.golang.org/grpc from 1.77.0 to 1.78.0 (#7777)
Bumps [google.golang.org/grpc](https://github.com/grpc/grpc-go) from 1.77.0 to 1.78.0.
- [Release notes](https://github.com/grpc/grpc-go/releases)
- [Commits](https://github.com/grpc/grpc-go/compare/v1.77.0...v1.78.0)

---
updated-dependencies:
- dependency-name: google.golang.org/grpc
  dependency-version: 1.78.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-12-29 21:03:57 +01:00
dependabot[bot]
0c16f55b45 chore(deps): bump securego/gosec from 2.22.9 to 2.22.11 (#7774)
Bumps [securego/gosec](https://github.com/securego/gosec) from 2.22.9 to 2.22.11.
- [Release notes](https://github.com/securego/gosec/releases)
- [Commits](https://github.com/securego/gosec/compare/v2.22.9...v2.22.11)

---
updated-dependencies:
- dependency-name: securego/gosec
  dependency-version: 2.22.11
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-12-29 19:18:29 +00:00
Richard Palethorpe
0b80167912 chore: ⬆️ Update leejet/stable-diffusion.cpp to 4ff2c8c74bd17c2cfffe3a01be77743fb3efba2f (#7771)
* ⬆️ Update leejet/stable-diffusion.cpp

Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>

* fix: Add KL_OPTIMAL scheduler, pass sampler to default scheduler for LCM and fixup other refactorings from upstream

Signed-off-by: Richard Palethorpe <io@richiejp.com>

* Delete backend/go/stablediffusion-ggml/compile_commands.json

Signed-off-by: Ettore Di Giacinto <mudler@users.noreply.github.com>

---------

Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Signed-off-by: Richard Palethorpe <io@richiejp.com>
Signed-off-by: Ettore Di Giacinto <mudler@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
Co-authored-by: Ettore Di Giacinto <mudler@users.noreply.github.com>
2025-12-29 19:06:35 +01:00
Richard Palethorpe
99b5c5f156 feat(api): Allow tracing of requests and responses (#7609)
* feat(api): Allow tracing of requests and responses

Signed-off-by: Richard Palethorpe <io@richiejp.com>

* feat(traces): Add traces UI

Signed-off-by: Richard Palethorpe <io@richiejp.com>

---------

Signed-off-by: Richard Palethorpe <io@richiejp.com>
2025-12-29 11:06:06 +01:00
Ettore Di Giacinto
9ab812a8e8 chore(ci): be more precise when detecting existing models (#7767)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-12-29 10:06:42 +01:00
Ettore Di Giacinto
185a685211 fix(amd-gpu): correctly show total and used vram (#7761)
An example output of `rocm-smi --showproductname --showmeminfo vram --showuniqueid --csv`:

```
device,Unique ID,VRAM Total Memory (B),VRAM Total Used Memory (B),Card Series,Card Model,Card Vendor,Card SKU,Subsystem ID,Device Rev,Node ID,GUID,GFX Version
card0,0x9246____________,17163091968,692142080,Navi 21 [Radeon RX 6800/6800 XT / 6900 XT],0x73bf,Advanced Micro Devices Inc. [AMD/ATI],001,0x2406,0xc1,1,45534,gfx1030
card1,N/A,67108864,26079232,Raphael,0x164e,Advanced Micro Devices Inc. [AMD/ATI],RAPHAEL,0x364e,0xc6,2,52156,gfx1036
```

Total memory is actually shown before the total used memory, as can be seen in https://github.com/LostRuins/koboldcpp/issues/1104#issuecomment-2321143507.

This PR fixes https://github.com/mudler/LocalAI/issues/7724

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-12-29 07:57:07 +01:00
LocalAI [bot]
1a6fd0f7fc chore: ⬆️ Update ggml-org/llama.cpp to 4ffc47cb2001e7d523f9ff525335bbe34b1a2858 (#7760)
⬆️ Update ggml-org/llama.cpp

Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2025-12-28 21:10:39 +00:00
LocalAI [bot]
c95c482f36 chore: ⬆️ Update ggml-org/llama.cpp to a4bf35889eda36d3597cd0f8f333f5b8a2fcaefc (#7751)
⬆️ Update ggml-org/llama.cpp

Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2025-12-27 21:09:12 +00:00
Ettore Di Giacinto
21c464c34f fix(cli): import via CLI needs system state (#7746)
pass system state to application config to avoid nil pointer exception
during import.

Fixes: https://github.com/mudler/LocalAI/issues/7728

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-12-27 11:10:28 +01:00
LocalAI [bot]
ddf0281785 chore: ⬆️ Update ggml-org/llama.cpp to 7ac8902133da6eb390c4d8368a7d252279123942 (#7740)
⬆️ Update ggml-org/llama.cpp

Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2025-12-26 21:44:34 +00:00
LocalAI [bot]
86c68c9623 chore: ⬆️ Update ggml-org/llama.cpp to 85c40c9b02941ebf1add1469af75f1796d513ef4 (#7731)
⬆️ Update ggml-org/llama.cpp

Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2025-12-25 21:10:28 +00:00
Ettore Di Giacinto
c844b7ac58 feat: disable force eviction (#7725)
* feat: allow to set forcing backends eviction while requests are in flight

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* feat: try to make the request sit and retry if eviction couldn't be done

Otherwise, calls that would need to shut down other backends in order to proceed would simply fail.

Instead, we now make the request wait and retry eviction until it
succeeds. The thresholds can be configured by the user.

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* add tests

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* expose settings to CLI

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* Update docs

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

---------

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-12-25 14:26:18 +01:00
Ettore Di Giacinto
bb459e671f fix(ui): correctly parse import errors (#7726)
errors are nested

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-12-25 10:43:12 +01:00
LocalAI [bot]
2fe6e278c8 chore: ⬆️ Update ggml-org/llama.cpp to c18428423018ed214c004e6ecaedb0cbdda06805 (#7718)
⬆️ Update ggml-org/llama.cpp

Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2025-12-25 10:00:40 +01:00
LocalAI [bot]
ae69921d77 chore: ⬆️ Update ggml-org/whisper.cpp to 6114e692136bea917dc88a5eb2e532c3d133d963 (#7717)
⬆️ Update ggml-org/whisper.cpp

Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2025-12-25 10:00:24 +01:00
Ettore Di Giacinto
bf2f95c684 chore(docs): update docs with cuda 13 instructions and the new vibevoice backend
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-12-25 10:00:07 +01:00
LocalAI [bot]
94069f2751 docs: ⬆️ update docs version mudler/LocalAI (#7716)
⬆️ Update docs version mudler/LocalAI

Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2025-12-24 21:06:02 +00:00
LocalAI [bot]
aadec0b8cb chore(model gallery): 🤖 add 1 new models via gallery agent (#7712)
chore(model gallery): 🤖 add new models via gallery agent

Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2025-12-24 13:00:54 +01:00
Ettore Di Giacinto
35d71cf25e fix: remove duplicate logging line
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-12-24 09:35:18 +01:00
Ettore Di Giacinto
39a5a84e64 fix: include virtual config
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-12-24 09:30:29 +01:00
Ettore Di Giacinto
83ed16f325 chore(logging): be consistent and do not emit logs from echo (#7710)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-12-24 09:22:27 +01:00
Ettore Di Giacinto
c8173f0f67 chore(gallery): cleanup old architectures
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-12-24 09:14:03 +01:00
LocalAI [bot]
6dc2dbc835 chore(model gallery): 🤖 add 1 new models via gallery agent (#7707)
chore(model gallery): 🤖 add new models via gallery agent

Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2025-12-24 08:34:18 +01:00
Ettore Di Giacinto
0a168830ea chore(deps): Bump llama.cpp to '5b6c9bc0f3c8f55598b9999b65aff7ce4119bc15' and refactor usage of base params (#7706)
* chore(deps): Bump llama.cpp to '5b6c9bc0f3c8f55598b9999b65aff7ce4119bc15' and refactor usage of base params

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* chore: update AGENTS.md

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

---------

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-12-24 00:28:27 +01:00
LocalAI [bot]
96d3f0ebc8 chore(model gallery): 🤖 add 1 new models via gallery agent (#7700)
chore(model gallery): 🤖 add new models via gallery agent

Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2025-12-23 08:53:18 +01:00
Ettore Di Giacinto
b8aacb39e8 Revert "chore(deps): bump securego/gosec from 2.22.9 to 2.22.11" (#7698)
Revert "chore(deps): bump securego/gosec from 2.22.9 to 2.22.11 (#7690)"

This reverts commit b698033ef9.
2025-12-22 23:58:42 +01:00
Ettore Di Giacinto
b36a7593fa chore(gallery): cleanup old (superseded) archs
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-12-22 22:55:53 +00:00
Ettore Di Giacinto
1ab91edc08 chore(gallery): cleanup old (superseded) archs
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-12-22 22:53:29 +00:00
Ettore Di Giacinto
31f4e0c46d chore(gallery agent): various fixups (#7697)
* chore(ci/agent): fix formatting issues

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* chore: get icon from readme/hf and prepend to the gallery file

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

---------

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-12-22 23:46:40 +01:00
dependabot[bot]
07c80fba88 chore(deps): bump github.com/containerd/containerd from 1.7.29 to 1.7.30 (#7692)
Bumps [github.com/containerd/containerd](https://github.com/containerd/containerd) from 1.7.29 to 1.7.30.
- [Release notes](https://github.com/containerd/containerd/releases)
- [Changelog](https://github.com/containerd/containerd/blob/main/RELEASES.md)
- [Commits](https://github.com/containerd/containerd/compare/v1.7.29...v1.7.30)

---
updated-dependencies:
- dependency-name: github.com/containerd/containerd
  dependency-version: 1.7.30
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-12-22 22:43:42 +01:00
dependabot[bot]
9256a21d2c chore(deps): bump github.com/jaypipes/ghw from 0.21.1 to 0.21.2 (#7694)
Bumps [github.com/jaypipes/ghw](https://github.com/jaypipes/ghw) from 0.21.1 to 0.21.2.
- [Release notes](https://github.com/jaypipes/ghw/releases)
- [Commits](https://github.com/jaypipes/ghw/compare/v0.21.1...v0.21.2)

---
updated-dependencies:
- dependency-name: github.com/jaypipes/ghw
  dependency-version: 0.21.2
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-12-22 22:43:00 +01:00
dependabot[bot]
b3a81292c1 chore(deps): bump github.com/mudler/cogito from 0.7.1 to 0.7.2 (#7691)
Bumps [github.com/mudler/cogito](https://github.com/mudler/cogito) from 0.7.1 to 0.7.2.
- [Release notes](https://github.com/mudler/cogito/releases)
- [Commits](https://github.com/mudler/cogito/compare/v0.7.1...v0.7.2)

---
updated-dependencies:
- dependency-name: github.com/mudler/cogito
  dependency-version: 0.7.2
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-12-22 22:42:35 +01:00
dependabot[bot]
5fc0cafd86 chore(deps): bump github.com/mudler/xlog from 0.0.3 to 0.0.4 (#7695)
Bumps [github.com/mudler/xlog](https://github.com/mudler/xlog) from 0.0.3 to 0.0.4.
- [Release notes](https://github.com/mudler/xlog/releases)
- [Commits](https://github.com/mudler/xlog/compare/v0.0.3...v0.0.4)

---
updated-dependencies:
- dependency-name: github.com/mudler/xlog
  dependency-version: 0.0.4
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-12-22 22:42:08 +01:00
Richard Palethorpe
9783aeaef5 chore: Add AGENTS.md (#7688)
Signed-off-by: Richard Palethorpe <io@richiejp.com>
2025-12-22 22:41:33 +01:00
dependabot[bot]
b698033ef9 chore(deps): bump securego/gosec from 2.22.9 to 2.22.11 (#7690)
Bumps [securego/gosec](https://github.com/securego/gosec) from 2.22.9 to 2.22.11.
- [Release notes](https://github.com/securego/gosec/releases)
- [Commits](https://github.com/securego/gosec/compare/v2.22.9...v2.22.11)

---
updated-dependencies:
- dependency-name: securego/gosec
  dependency-version: 2.22.11
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-12-22 19:09:06 +00:00
Ettore Di Giacinto
fc6057a952 chore(deps): bump llama.cpp to '0e1ccf15c7b6d05c720551b537857ecf6194d420' (#7684)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-12-22 09:50:42 +01:00
Ettore Di Giacinto
8b3e0ebf8a chore: allow to set local-ai log format, default to custom one (#7679)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-12-21 21:21:59 +01:00
Mikhail Khludnev
53b0530275 docs: Add langchain-localai integration package to documentation (#7677)
Add `langchain-localai` integration package to documentation

Signed-off-by: Mikhail Khludnev <mkhludnev@users.noreply.github.com>
2025-12-21 21:02:14 +01:00
Ettore Di Giacinto
99d301fcf9 chore(deps): bump xlog to v0.0.3 (#7675)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-12-21 19:36:54 +01:00
Ettore Di Giacinto
c37785b78c chore(refactor): move logging to common package based on slog (#7668)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-12-21 19:33:13 +01:00
LocalAI [bot]
38cde81ff4 chore: ⬆️ Update ggml-org/llama.cpp to 52ab19df633f3de5d4db171a16f2d9edd2342fec (#7665)
⬆️ Update ggml-org/llama.cpp

Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2025-12-20 21:09:15 +00:00
Ettore Di Giacinto
8ba5d6e796 chore(cogito): respect application-level logging and propagate (#7656)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-12-19 23:02:08 +01:00
Ettore Di Giacinto
8b6f443cd5 chore(deps): bump cogito to latest and adapt API changes (#7655)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-12-19 22:50:18 +01:00
LocalAI [bot]
626057bcca chore: ⬆️ Update ggml-org/llama.cpp to ce734a8a2f9fb6eb4f0383ab1370a1b0014ab787 (#7654)
⬆️ Update ggml-org/llama.cpp

Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2025-12-19 21:15:39 +00:00
LocalAI [bot]
aa0efeb0a8 chore: ⬆️ Update ggml-org/whisper.cpp to 6c22e792cb0ee155b6587ce71a8410c3aeb06949 (#7644)
⬆️ Update ggml-org/whisper.cpp

Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2025-12-19 09:26:41 +01:00
LocalAI [bot]
f25ac00bca chore: ⬆️ Update ggml-org/llama.cpp to f9ec8858edea4a0ecfea149d6815ebfb5ecc3bcd (#7642)
⬆️ Update ggml-org/llama.cpp

Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2025-12-18 21:17:14 +00:00
Richard Palethorpe
c3494a0927 chore: ⬆️ Update leejet/stable-diffusion.cpp to bda7fab9f208dff4b67179a68f694b6ddec13326 (#7639)
* ⬆️ Update leejet/stable-diffusion.cpp

Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>

* fix(stablediffusion-ggml): Don't set removed lora model dir

Signed-off-by: Richard Palethorpe <io@richiejp.com>

---------

Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Signed-off-by: Richard Palethorpe <io@richiejp.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2025-12-18 20:52:22 +01:00
Richard Palethorpe
716dba94b4 feat(whisper): Add prompt to condition transcription output (#7624)
* chore(makefile): Add buildargs for sd and cuda when building backend

Signed-off-by: Richard Palethorpe <io@richiejp.com>

* feat(whisper): Add prompt to condition transcription output

Signed-off-by: Richard Palethorpe <io@richiejp.com>

---------

Signed-off-by: Richard Palethorpe <io@richiejp.com>
2025-12-18 14:40:45 +01:00
mintyleaf
247983265d fix(uri): consider subfolders when expanding huggingface URLs (#7634)
Update uri.go

Signed-off-by: mintyleaf <mintyleafdev@gmail.com>
2025-12-18 09:12:16 +01:00
LocalAI [bot]
5515119a7e chore: ⬆️ Update ggml-org/llama.cpp to d37fc935059211454e9ad2e2a44e8ed78fd6d1ce (#7629)
⬆️ Update ggml-org/llama.cpp

Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2025-12-18 09:07:09 +01:00
LocalAI [bot]
4535e7dfc4 chore: ⬆️ Update ggml-org/whisper.cpp to 3e79e73eee32e924fbd34587f2f2ac5a45a26b61 (#7630)
⬆️ Update ggml-org/whisper.cpp

Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2025-12-18 09:06:48 +01:00
Ettore Di Giacinto
d8ee02e607 chore(tests): simplify tests and run intensive ones only once
Signed-off-by: Ettore Di Giacinto <mudler@users.noreply.github.com>
2025-12-18 09:05:58 +01:00
Ettore Di Giacinto
2d2e8759bb fix(ci): remove specific version for grpcio packages (#7627)
Updated grpcio-tools and grpcio installation to the latest version.

Signed-off-by: Ettore Di Giacinto <mudler@users.noreply.github.com>
2025-12-17 19:18:07 +01:00
LocalAI [bot]
14bb65b57b chore: ⬆️ Update ggml-org/llama.cpp to ef83fb8601229ff650d952985be47e82d644bfaa (#7611)
⬆️ Update ggml-org/llama.cpp

Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
Co-authored-by: Ettore Di Giacinto <mudler@users.noreply.github.com>
2025-12-17 08:32:42 +01:00
Ettore Di Giacinto
3ca90876f1 chore(memory detection): do not use go-sigar as requires CGO on darwin (#7618)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-12-16 23:10:42 +01:00
Ettore Di Giacinto
f251bdee64 chore: fixup tests with defaults from constants
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-12-16 21:26:55 +00:00
Ettore Di Giacinto
61afe4ca60 chore: drop darwin-x86_64 support (#7616)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-12-16 21:22:15 +01:00
Ettore Di Giacinto
424c95edba fix: correctly propagate error during model load (#7610)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-12-16 18:26:54 +01:00
Ettore Di Giacinto
b348a99b03 chore: move defaults to constants
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-12-16 17:40:51 +01:00
Ettore Di Giacinto
f3c70a96ba chore(memory-reclaimer): use saner defaults
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-12-16 16:25:09 +01:00
Ettore Di Giacinto
e3e5f59965 fix(ram): do not read from cgroup (#7606)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-12-16 13:28:11 +01:00
blightbow
67baf66555 feat(mlx): add thread-safe LRU prompt cache and min_p/top_k sampling (#7556)
* feat(mlx): add thread-safe LRU prompt cache

Port mlx-lm's LRUPromptCache to fix a race condition where concurrent
requests corrupt shared KV cache state. The previous implementation
used a single prompt_cache instance shared across all requests.

Changes:
- Add backend/python/common/mlx_cache.py with ThreadSafeLRUPromptCache
- Modify backend.py to use per-request cache isolation via fetch/insert
- Add prefix matching for cache reuse across similar prompts
- Add LRU eviction (default 10 entries, configurable)
- Add concurrency and cache unit tests

The cache uses a trie-based structure for efficient prefix matching,
allowing prompts that share common prefixes to reuse cached KV states.
Thread safety is provided via threading.Lock.

New configuration options:
- max_cache_entries: Maximum LRU cache entries (default: 10)
- max_kv_size: Maximum KV cache size per entry (default: None)

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Signed-off-by: Blightbow <blightbow@users.noreply.github.com>

* feat(mlx): add min_p and top_k sampler support

Add MinP field to proto (field 52) following the precedent set by
other non-OpenAI sampling parameters like TopK, TailFreeSamplingZ,
TypicalP, and Mirostat.

Changes:
- backend.proto: Add float MinP field for min-p sampling
- backend.py: Extract and pass min_p and top_k to mlx_lm sampler
  (top_k was in proto but not being passed)
- test.py: Fix test_sampling_params to use valid proto fields and
  switch to MLX-compatible model (mlx-community/Llama-3.2-1B-Instruct)

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Signed-off-by: Blightbow <blightbow@users.noreply.github.com>

* refactor(mlx): move mlx_cache.py from common to mlx backend

The ThreadSafeLRUPromptCache is only used by the mlx backend. After
evaluating mlx-vlm, it was determined that the cache cannot be shared
because mlx-vlm's generate/stream_generate functions don't support
the prompt_cache parameter that mlx_lm provides.

- Move mlx_cache.py from backend/python/common/ to backend/python/mlx/
- Remove sys.path manipulation from backend.py and test.py
- Fix test assertion to expect "MLX model loaded successfully"

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Signed-off-by: Blightbow <blightbow@users.noreply.github.com>

* test(mlx): add comprehensive cache tests and document upstream behavior

Added comprehensive unit tests (test_mlx_cache.py) covering all cache
operation modes:
- Exact match
- Shorter prefix match
- Longer prefix match with trimming
- No match scenarios
- LRU eviction and access order
- Reference counting and deep copy behavior
- Multi-model namespacing
- Thread safety with data integrity verification

Documents upstream mlx_lm/server.py behavior: single-token prefixes are
deliberately not matched (uses > 0, not >= 0) to allow longer cached
sequences to be preferred for trimming. This is acceptable because real
prompts with chat templates are always many tokens.

Removed weak unit tests from test.py that only verified "no exception
thrown" rather than correctness.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Signed-off-by: Blightbow <blightbow@users.noreply.github.com>

* chore(mlx): remove unused MinP proto field

The MinP field was added to PredictOptions but is not populated by the
Go frontend/API. The MLX backend uses getattr with a default value,
so it works without the proto field.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Signed-off-by: Blightbow <blightbow@users.noreply.github.com>

---------

Signed-off-by: Blightbow <blightbow@users.noreply.github.com>
Co-authored-by: Blightbow <blightbow@users.noreply.github.com>
Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-16 11:27:46 +01:00
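The per-request cache isolation described in the commit above can be sketched as follows. This is a minimal illustration, not the backend's actual code: the names (`LRUPromptCache`, `fetch`, `insert`, `max_entries`) are assumptions, and the real implementation uses a trie for prefix matching and stores MLX KV-cache state rather than plain values.

```python
import threading
from collections import OrderedDict


class LRUPromptCache:
    """Minimal sketch of a thread-safe LRU prompt cache with prefix reuse.

    Hypothetical names; the actual backend uses a trie-based structure
    and stores MLX KV-cache state instead of opaque values.
    """

    def __init__(self, max_entries=10):
        self.max_entries = max_entries
        self._lock = threading.Lock()
        self._entries = OrderedDict()  # token tuple -> cached state

    def fetch(self, tokens):
        """Return (matched_prefix_len, state) for the longest cached prefix."""
        with self._lock:
            best_key, best_len = None, 0
            for key in self._entries:
                n = 0
                for a, b in zip(key, tokens):
                    if a != b:
                        break
                    n += 1
                # mirror upstream behavior: trivial single-token prefixes
                # are deliberately not matched
                if n > 1 and n > best_len:
                    best_key, best_len = key, n
            if best_key is None:
                return 0, None
            self._entries.move_to_end(best_key)  # mark as recently used
            return best_len, self._entries[best_key]

    def insert(self, tokens, state):
        with self._lock:
            self._entries[tuple(tokens)] = state
            self._entries.move_to_end(tuple(tokens))
            while len(self._entries) > self.max_entries:
                self._entries.popitem(last=False)  # evict least recently used
```

Each request fetches its own copy of the cached state and inserts the updated state back afterwards, so no KV cache object is ever mutated by two requests at once.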
Ettore Di Giacinto
878c9d46d5 fix: improve ram estimation (#7603)
* fix: default to 10 seconds of watchdog if the runtime setting is malformed

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* fix: use gosigar for RAM estimation

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

---------

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-12-16 10:18:36 +01:00
Ettore Di Giacinto
b841a495da Revert "chore(deps): bump securego/gosec from 2.22.9 to 2.22.11" (#7602)
Revert "chore(deps): bump securego/gosec from 2.22.9 to 2.22.11 (#7588)"

This reverts commit 648dfc0389.
2025-12-16 09:48:46 +01:00
Ettore Di Giacinto
f75903d7f7 Update latest project news in README
Signed-off-by: Ettore Di Giacinto <mudler@users.noreply.github.com>
2025-12-16 09:16:42 +01:00
Ettore Di Giacinto
50f9c9a058 feat(watchdog): add Memory resource reclaimer (#7583)
* feat(watchdog): add GPU reclaimer

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* Handle vram calculation for unified memory devices

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* Support RAM eviction, set watchdog interval from runtime settings

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

---------

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-12-16 09:15:18 +01:00
dependabot[bot]
dbd25885c3 chore(deps): bump sentence-transformers from 5.1.0 to 5.2.0 in /backend/python/transformers (#7594)
chore(deps): bump sentence-transformers in /backend/python/transformers

Bumps [sentence-transformers](https://github.com/huggingface/sentence-transformers) from 5.1.0 to 5.2.0.
- [Release notes](https://github.com/huggingface/sentence-transformers/releases)
- [Commits](https://github.com/huggingface/sentence-transformers/compare/v5.1.0...v5.2.0)

---
updated-dependencies:
- dependency-name: sentence-transformers
  dependency-version: 5.2.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-12-16 09:12:57 +01:00
dependabot[bot]
3d55055126 chore(deps): bump github.com/jaypipes/ghw from 0.20.0 to 0.21.1 (#7591)
Bumps [github.com/jaypipes/ghw](https://github.com/jaypipes/ghw) from 0.20.0 to 0.21.1.
- [Release notes](https://github.com/jaypipes/ghw/releases)
- [Commits](https://github.com/jaypipes/ghw/compare/v0.20.0...v0.21.1)

---
updated-dependencies:
- dependency-name: github.com/jaypipes/ghw
  dependency-version: 0.21.1
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-12-16 08:16:05 +01:00
dependabot[bot]
af7ba2e3de chore(deps): bump github.com/labstack/echo/v4 from 4.13.4 to 4.14.0 (#7589)
Bumps [github.com/labstack/echo/v4](https://github.com/labstack/echo) from 4.13.4 to 4.14.0.
- [Release notes](https://github.com/labstack/echo/releases)
- [Changelog](https://github.com/labstack/echo/blob/master/CHANGELOG.md)
- [Commits](https://github.com/labstack/echo/compare/v4.13.4...v4.14.0)

---
updated-dependencies:
- dependency-name: github.com/labstack/echo/v4
  dependency-version: 4.14.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-12-16 08:15:41 +01:00
LocalAI [bot]
7a3b0bbfaa chore: ⬆️ Update leejet/stable-diffusion.cpp to 200cb6f2ca07e40fa83b610a4e595f4da06ec709 (#7597)
⬆️ Update leejet/stable-diffusion.cpp

Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2025-12-16 08:15:15 +01:00
dependabot[bot]
648dfc0389 chore(deps): bump securego/gosec from 2.22.9 to 2.22.11 (#7588)
Bumps [securego/gosec](https://github.com/securego/gosec) from 2.22.9 to 2.22.11.
- [Release notes](https://github.com/securego/gosec/releases)
- [Commits](https://github.com/securego/gosec/compare/v2.22.9...v2.22.11)

---
updated-dependencies:
- dependency-name: securego/gosec
  dependency-version: 2.22.11
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-12-16 01:49:11 +00:00
dependabot[bot]
b396413ad5 chore(deps): bump actions/download-artifact from 6 to 7 (#7587)
Bumps [actions/download-artifact](https://github.com/actions/download-artifact) from 6 to 7.
- [Release notes](https://github.com/actions/download-artifact/releases)
- [Commits](https://github.com/actions/download-artifact/compare/v6...v7)

---
updated-dependencies:
- dependency-name: actions/download-artifact
  dependency-version: '7'
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-12-16 00:14:02 +01:00
dependabot[bot]
2ad928678c chore(deps): bump peter-evans/create-pull-request from 7 to 8 (#7586)
Bumps [peter-evans/create-pull-request](https://github.com/peter-evans/create-pull-request) from 7 to 8.
- [Release notes](https://github.com/peter-evans/create-pull-request/releases)
- [Commits](https://github.com/peter-evans/create-pull-request/compare/v7...v8)

---
updated-dependencies:
- dependency-name: peter-evans/create-pull-request
  dependency-version: '8'
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-12-16 00:13:42 +01:00
dependabot[bot]
9b27b53a50 chore(deps): bump github.com/onsi/ginkgo/v2 from 2.27.2 to 2.27.3 (#7590)
Bumps [github.com/onsi/ginkgo/v2](https://github.com/onsi/ginkgo) from 2.27.2 to 2.27.3.
- [Release notes](https://github.com/onsi/ginkgo/releases)
- [Changelog](https://github.com/onsi/ginkgo/blob/master/CHANGELOG.md)
- [Commits](https://github.com/onsi/ginkgo/compare/v2.27.2...v2.27.3)

---
updated-dependencies:
- dependency-name: github.com/onsi/ginkgo/v2
  dependency-version: 2.27.3
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-12-15 22:58:45 +01:00
Ettore Di Giacinto
2387b266d8 chore(llama.cpp): Add Missing llama.cpp Options to gRPC Server (#7584)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-12-15 21:55:20 +01:00
dependabot[bot]
0f2df23c61 chore(deps): bump actions/upload-artifact from 5 to 6 (#7585)
Bumps [actions/upload-artifact](https://github.com/actions/upload-artifact) from 5 to 6.
- [Release notes](https://github.com/actions/upload-artifact/releases)
- [Commits](https://github.com/actions/upload-artifact/compare/v5...v6)

---
updated-dependencies:
- dependency-name: actions/upload-artifact
  dependency-version: '6'
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-12-15 19:33:48 +00:00
Ettore Di Giacinto
8ac7e8c299 fix(chat-ui): model selection toggle and new chat (#7574)
Fixes a minor glitch where the header was not updated when switching
models from the chat pane. It also allows creating a new chat directly
by clicking a model from the management pane.

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-12-14 22:29:11 +01:00
LocalAI [bot]
0f5cc4c07b chore: ⬆️ Update ggml-org/llama.cpp to 5c8a717128cc98aa9e5b1c44652f5cf458fd426e (#7573)
⬆️ Update ggml-org/llama.cpp

Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2025-12-14 22:21:54 +01:00
LocalAI [bot]
3e4e6777d8 chore: ⬆️ Update ggml-org/llama.cpp to 5266379bcae74214af397f36aa81b2a08b15d545 (#7563)
⬆️ Update ggml-org/llama.cpp

Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2025-12-14 11:41:10 +01:00
Simon Redman
5de539ab07 fix(7355): Update llama-cpp grpc for v3 interface (#7566)
* fix(7355): Update llama-cpp grpc for v3 interface

Signed-off-by: Simon Redman <simon@ergotech.com>

* feat(llama-gprc): Trim whitespace from servers list

Signed-off-by: Simon Redman <simon@ergotech.com>

* Trim trailing spaces in grpc-server.cpp

Signed-off-by: Simon Redman <simon@ergotech.com>

---------

Signed-off-by: Simon Redman <simon@ergotech.com>
2025-12-14 11:40:33 +01:00
LocalAI [bot]
3013d1c7b5 chore: ⬆️ Update leejet/stable-diffusion.cpp to 43a70e819b9254dee0d017305d6992f6bb27f850 (#7562)
⬆️ Update leejet/stable-diffusion.cpp

Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2025-12-13 22:52:20 +01:00
LocalAI [bot]
073b3855d9 chore: ⬆️ Update ggml-org/whisper.cpp to 2551e4ce98db69027d08bd99bcc3f1a4e2ad2cef (#7561)
⬆️ Update ggml-org/whisper.cpp

Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2025-12-13 21:22:14 +00:00
Ettore Di Giacinto
e1874cdb54 feat(ui): add mask to install custom backends (#7559)
* feat: allow installing backends from URL in the WebUI and API

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* tests

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* trace backends installations

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

---------

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-12-13 19:11:32 +01:00
Ettore Di Giacinto
7790a24682 Revert "chore(deps): bump torch from 2.5.1+cxx11.abi to 2.7.1+cpu in /backend/python/diffusers in the pip group across 1 directory" (#7558)
Revert "chore(deps): bump torch from 2.5.1+cxx11.abi to 2.7.1+cpu in /backend…"

This reverts commit 1b4aa6f1be.
2025-12-13 17:04:46 +01:00
dependabot[bot]
1b4aa6f1be chore(deps): bump torch from 2.5.1+cxx11.abi to 2.7.1+cpu in /backend/python/diffusers in the pip group across 1 directory (#7549)
chore(deps): bump torch

Bumps the pip group with 1 update in the /backend/python/diffusers directory: torch.


Updates `torch` from 2.5.1+cxx11.abi to 2.7.1+cpu

---
updated-dependencies:
- dependency-name: torch
  dependency-version: 2.7.1+cpu
  dependency-type: direct:production
  dependency-group: pip
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-12-13 13:12:18 +00:00
Ettore Di Giacinto
504d954aea Add chardet to requirements-l4t13.txt
Signed-off-by: Ettore Di Giacinto <mudler@users.noreply.github.com>
2025-12-13 12:59:03 +01:00
Ettore Di Giacinto
1383ad6d6d Change runner from macOS-14 to macos-latest
Signed-off-by: Ettore Di Giacinto <mudler@users.noreply.github.com>
2025-12-13 10:11:27 +01:00
Ettore Di Giacinto
5e270ba5bd Change runner from macOS-14 to macos-latest
Signed-off-by: Ettore Di Giacinto <mudler@users.noreply.github.com>
2025-12-13 10:10:47 +01:00
Ettore Di Giacinto
6d2a535813 chore(l4t13): use pytorch index (#7546)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-12-13 10:04:57 +01:00
Ettore Di Giacinto
abfb0ff8fe feat(stablediffusion-ggml): add lora support (#7542)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-12-13 08:29:06 +01:00
LocalAI [bot]
2bd6faaff5 chore: ⬆️ Update leejet/stable-diffusion.cpp to 11ab095230b2b67210f5da4d901588d56c71fe3a (#7539)
⬆️ Update leejet/stable-diffusion.cpp

Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2025-12-12 21:31:13 +00:00
Ettore Di Giacinto
1a9f5da1b7 Update Discord badge with dynamic member count
Signed-off-by: Ettore Di Giacinto <mudler@users.noreply.github.com>
2025-12-12 12:50:55 +01:00
Ettore Di Giacinto
7f823fce7c Update Discord badge in README.md
Signed-off-by: Ettore Di Giacinto <mudler@users.noreply.github.com>
2025-12-12 12:34:57 +01:00
Ettore Di Giacinto
fc5b9ebfcc feat(loader): enhance single active backend to support LRU eviction (#7535)
* feat(loader): refactor single active backend support to LRU

This changeset introduces LRU management of loaded backends. Users can
now set a maximum number of models to be loaded concurrently, and when
LocalAI is set to single active backend mode the LRU size is set to 1
for backward compatibility.

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* chore: add tests

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* Update docs

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* Fixups

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

---------

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-12-12 12:28:38 +01:00
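The LRU eviction of loaded backends described in the commit above can be illustrated with a short sketch. All names here (`ModelLoader`, `acquire`, `max_loaded`) are hypothetical stand-ins, and the sketch is in Python for brevity although the actual loader is Go; the point is only the mechanism: capacity 1 reproduces the old single-active-backend behavior.

```python
from collections import OrderedDict


class ModelLoader:
    """Sketch of LRU-managed backend loading (hypothetical names)."""

    def __init__(self, max_loaded, unload):
        self.max_loaded = max_loaded
        self.unload = unload          # callback that stops a backend
        self.loaded = OrderedDict()   # model name -> backend handle

    def acquire(self, name, load):
        if name in self.loaded:
            self.loaded.move_to_end(name)   # mark as recently used
            return self.loaded[name]
        while len(self.loaded) >= self.max_loaded:
            _, handle = self.loaded.popitem(last=False)
            self.unload(handle)             # evict least recently used
        self.loaded[name] = load(name)
        return self.loaded[name]
```

With `max_loaded=1`, loading a second model always unloads the first, which matches the single active backend mode the commit keeps backward compatible.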
LocalAI [bot]
c141a40e00 chore(model-gallery): ⬆️ update checksum (#7530)
⬆️ Checksum updates in gallery/index.yaml

Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2025-12-12 08:16:04 +01:00
Ettore Di Giacinto
0b130fb811 fix(llama.cpp): handle corner cases with tool array content (#7528)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-12-12 08:15:45 +01:00
LocalAI [bot]
0771a2d3ec chore: ⬆️ Update ggml-org/llama.cpp to a81a569577cc38b32558958b048228150be63eae (#7529)
⬆️ Update ggml-org/llama.cpp

Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2025-12-11 21:55:44 +00:00
Richard Palethorpe
9441eb509a chore(makefile): Add buildargs for sd and cuda when building backend (#7525)
Signed-off-by: Richard Palethorpe <io@richiejp.com>
2025-12-11 20:33:19 +01:00
Ettore Di Giacinto
8442f33712 chore(deps): bump stable-diffusion.cpp to '8823dc48bcc1598eb9671da7b69e45338d0cc5a5' (#7524)
* chore(deps): bump stable-diffusion.cpp to '8823dc48bcc1598eb9671da7b69e45338d0cc5a5'

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* fix(Dockerfile.golang): Make curl noisy to see when download fails

Signed-off-by: Richard Palethorpe <io@richiejp.com>

---------

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
Signed-off-by: Richard Palethorpe <io@richiejp.com>
Co-authored-by: Richard Palethorpe <io@richiejp.com>
2025-12-11 20:32:25 +01:00
Ettore Di Giacinto
5dde7e9ac6 fix: make sure to close on errors (#7521)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-12-11 14:03:20 +01:00
LocalAI [bot]
72621a1d1c chore: ⬆️ Update ggml-org/llama.cpp to 4dff236a522bd0ed949331d6cb1ee2a1b3615c35 (#7508)
⬆️ Update ggml-org/llama.cpp

Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2025-12-11 08:15:38 +01:00
Ettore Di Giacinto
3b5c2ea633 feat(ui): allow to order search results (#7507)
* feat(ui): improve table view and let items be sorted

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* refactorings

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* chore: add tests

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* chore: use constants

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

---------

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-12-11 00:11:33 +01:00
LocalAI [bot]
e1d060d147 chore: ⬆️ Update ggml-org/whisper.cpp to 9f5ed26e43c680bece09df7bdc8c1b7835f0e537 (#7509)
⬆️ Update ggml-org/whisper.cpp

Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2025-12-10 23:09:13 +01:00
Ettore Di Giacinto
32dcb58e89 feat(vibevoice): add new backend (#7494)
* feat(vibevoice): add backend

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* chore: add workflow and backend index

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* chore(gallery): add vibevoice

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* Use self-hosted for intel builds

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* Pin python version for l4t

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

---------

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-12-10 21:14:21 +01:00
LocalAI [bot]
ef44ace73f chore: ⬆️ Update ggml-org/llama.cpp to 086a63e3a5d2dbbb7183a74db453459e544eb55a (#7496)
⬆️ Update ggml-org/llama.cpp

Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2025-12-10 12:05:13 +01:00
Ettore Di Giacinto
f51d3e380b fix(config): make syncKnownUsecasesFromString idempotent (#7493)
fix(config): correctly parse usecases from strings

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-12-09 21:08:22 +01:00
Ettore Di Giacinto
6cc5cac7b0 fix(downloader): do not download model files if not necessary (#7492)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-12-09 19:08:10 +01:00
Ettore Di Giacinto
74ee1463fe chore(deps/llama-cpp): bump to '2fa51c19b028180b35d316e9ed06f5f0f7ada2c1' (#7484)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-12-09 15:41:37 +01:00
LocalAI [bot]
6c7b215687 chore: ⬆️ Update ggml-org/whisper.cpp to a8f45ab11d6731e591ae3d0230be3fec6c2efc91 (#7483)
⬆️ Update ggml-org/whisper.cpp

Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2025-12-09 08:33:30 +01:00
dependabot[bot]
5e0bc37de3 chore(deps): bump github.com/onsi/gomega from 1.38.2 to 1.38.3 (#7475)
Bumps [github.com/onsi/gomega](https://github.com/onsi/gomega) from 1.38.2 to 1.38.3.
- [Release notes](https://github.com/onsi/gomega/releases)
- [Changelog](https://github.com/onsi/gomega/blob/master/CHANGELOG.md)
- [Commits](https://github.com/onsi/gomega/compare/v1.38.2...v1.38.3)

---
updated-dependencies:
- dependency-name: github.com/onsi/gomega
  dependency-version: 1.38.3
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-12-09 01:24:08 +00:00
dependabot[bot]
e28a00c952 chore(deps): bump go.opentelemetry.io/otel/exporters/prometheus from 0.60.0 to 0.61.0 (#7477)
chore(deps): bump go.opentelemetry.io/otel/exporters/prometheus

Bumps [go.opentelemetry.io/otel/exporters/prometheus](https://github.com/open-telemetry/opentelemetry-go) from 0.60.0 to 0.61.0.
- [Release notes](https://github.com/open-telemetry/opentelemetry-go/releases)
- [Changelog](https://github.com/open-telemetry/opentelemetry-go/blob/main/CHANGELOG.md)
- [Commits](https://github.com/open-telemetry/opentelemetry-go/compare/exporters/prometheus/v0.60.0...exporters/prometheus/v0.61.0)

---
updated-dependencies:
- dependency-name: go.opentelemetry.io/otel/exporters/prometheus
  dependency-version: 0.61.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-12-08 23:43:13 +00:00
dependabot[bot]
08f9a52594 chore(deps): bump github.com/mudler/cogito from 0.5.1 to 0.6.0 (#7474)
Bumps [github.com/mudler/cogito](https://github.com/mudler/cogito) from 0.5.1 to 0.6.0.
- [Release notes](https://github.com/mudler/cogito/releases)
- [Commits](https://github.com/mudler/cogito/compare/v0.5.1...v0.6.0)

---
updated-dependencies:
- dependency-name: github.com/mudler/cogito
  dependency-version: 0.6.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-12-08 22:40:33 +01:00
dependabot[bot]
bbce461f57 chore(deps): bump protobuf from 6.33.1 to 6.33.2 in /backend/python/transformers (#7481)
chore(deps): bump protobuf in /backend/python/transformers

Bumps [protobuf](https://github.com/protocolbuffers/protobuf) from 6.33.1 to 6.33.2.
- [Release notes](https://github.com/protocolbuffers/protobuf/releases)
- [Commits](https://github.com/protocolbuffers/protobuf/commits)

---
updated-dependencies:
- dependency-name: protobuf
  dependency-version: 6.33.2
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-12-08 22:13:18 +01:00
dependabot[bot]
22e13c362a chore(deps): bump actions/stale from 10.1.0 to 10.1.1 (#7473)
Bumps [actions/stale](https://github.com/actions/stale) from 10.1.0 to 10.1.1.
- [Release notes](https://github.com/actions/stale/releases)
- [Changelog](https://github.com/actions/stale/blob/main/CHANGELOG.md)
- [Commits](5f858e3efb...997185467f)

---
updated-dependencies:
- dependency-name: actions/stale
  dependency-version: 10.1.1
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-12-08 21:15:37 +01:00
dependabot[bot]
6bd0442698 chore(deps): bump go.opentelemetry.io/otel/sdk/metric from 1.38.0 to 1.39.0 (#7476)
chore(deps): bump go.opentelemetry.io/otel/sdk/metric

Bumps [go.opentelemetry.io/otel/sdk/metric](https://github.com/open-telemetry/opentelemetry-go) from 1.38.0 to 1.39.0.
- [Release notes](https://github.com/open-telemetry/opentelemetry-go/releases)
- [Changelog](https://github.com/open-telemetry/opentelemetry-go/blob/main/CHANGELOG.md)
- [Commits](https://github.com/open-telemetry/opentelemetry-go/compare/v1.38.0...v1.39.0)

---
updated-dependencies:
- dependency-name: go.opentelemetry.io/otel/sdk/metric
  dependency-version: 1.39.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-12-08 19:30:21 +00:00
Ettore Di Giacinto
0380bfe006 Enhance README with video and screenshots
Added YouTube video link and screenshots section to README.

Signed-off-by: Ettore Di Giacinto <mudler@users.noreply.github.com>
2025-12-08 17:08:15 +01:00
Ettore Di Giacinto
00a05208bc chore(docs): center video
Signed-off-by: Ettore Di Giacinto <mudler@users.noreply.github.com>
2025-12-08 16:59:11 +01:00
Ettore Di Giacinto
4a7cd256c9 Revise 'Screenshots' section to include video
Updated section title and added video link for LocalAI.

Signed-off-by: Ettore Di Giacinto <mudler@users.noreply.github.com>
2025-12-08 16:56:34 +01:00
Ettore Di Giacinto
a27d0d151f Embed YouTube video in documentation
Added an embedded YouTube video to the documentation.

Signed-off-by: Ettore Di Giacinto <mudler@users.noreply.github.com>
2025-12-08 16:53:20 +01:00
Ettore Di Giacinto
03a17a2986 fix(paths): remove trailing slash from requests (#7451)
This removes any ambiguity from how paths are handled, and at the same
time it aligns the UI paths with the other paths that don't have a
trailing slash.

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-12-07 21:45:09 +01:00
Ettore Di Giacinto
8ca98c90ea chore(importers/llama.cpp): add models to 'llama-cpp' subfolder (#7450)
This makes paths predictable, and avoids multiple model files to show in
the main view

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-12-07 21:44:57 +01:00
Ettore Di Giacinto
18b8956bd9 chore(gallery agent): strip thinking tags (#7464)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-12-07 19:25:41 +01:00
Ettore Di Giacinto
262afd28a0 chore(gallery agent): summary now is at root of the git repository (#7463)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-12-07 19:23:27 +01:00
LocalAI [bot]
5610384d8a chore: ⬆️ Update ggml-org/llama.cpp to db97837385edfbc772230debbd49e5efae843a71 (#7447)
⬆️ Update ggml-org/llama.cpp

Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2025-12-07 08:32:35 +01:00
rampa3
6aee29d18f fix(ui): Update few links in web UI from 'browse' to '/browse/' (#7445)
* Update few links in web UI from 'browse' to '/browse/'

Signed-off-by: rampa3 <68955305+rampa3@users.noreply.github.com>

* Update core/http/views/404.html

Signed-off-by: Ettore Di Giacinto <mudler@users.noreply.github.com>

* Update core/http/views/error.html

Signed-off-by: Ettore Di Giacinto <mudler@users.noreply.github.com>

* Update core/http/views/manage.html

Signed-off-by: Ettore Di Giacinto <mudler@users.noreply.github.com>

---------

Signed-off-by: rampa3 <68955305+rampa3@users.noreply.github.com>
Signed-off-by: Ettore Di Giacinto <mudler@users.noreply.github.com>
Co-authored-by: Ettore Di Giacinto <mudler@users.noreply.github.com>
2025-12-06 22:40:26 +01:00
LocalAI [bot]
c3493e4917 chore: ⬆️ Update ggml-org/whisper.cpp to a88b93f85f08fc6045e5d8a8c3f94b7be0ac8bce (#7448)
⬆️ Update ggml-org/whisper.cpp

Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2025-12-06 21:26:25 +00:00
LocalAI [bot]
edf7141b9b chore: ⬆️ Update ggml-org/llama.cpp to 8160b38a5fa8a25490ca33ffdd200cda51405688 (#7438)
⬆️ Update ggml-org/llama.cpp

Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2025-12-06 13:35:24 +01:00
Ettore Di Giacinto
446b686470 Update model version in gallery-agent workflow
Signed-off-by: Ettore Di Giacinto <mudler@users.noreply.github.com>
2025-12-05 22:08:16 +01:00
Ettore Di Giacinto
b287944f07 Add Proto Dependencies installation step
Added steps to install protobuf and Go dependencies in the GitHub Actions workflow.

Signed-off-by: Ettore Di Giacinto <mudler@users.noreply.github.com>
2025-12-05 21:40:36 +01:00
LocalAI [bot]
f3ae358689 chore(model-gallery): ⬆️ update checksum (#7437)
⬆️ Checksum updates in gallery/index.yaml

Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2025-12-05 15:20:21 +01:00
Richard Palethorpe
c7aaeab683 fix(stablediffusion-ggml): Correct Z-Image model name (#7436)
Signed-off-by: Richard Palethorpe <io@richiejp.com>
2025-12-05 14:57:39 +01:00
Ettore Di Giacinto
024aa6a55b chore(deps): bump llama.cpp to 'bde188d60f58012ada0725c6dd5ba7c69fe4dd87' (#7434)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-12-05 00:17:35 +01:00
Ettore Di Giacinto
7ce8a56e96 chore(ci/agent): correctly invoke go run
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-12-04 23:12:04 +01:00
Ettore Di Giacinto
3e9ed48432 chore(ci/agent): support quantization
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-12-04 22:56:35 +01:00
Ettore Di Giacinto
963796ff51 Update localai-github-action to version 1.1
Signed-off-by: Ettore Di Giacinto <mudler@users.noreply.github.com>
2025-12-04 22:50:55 +01:00
Ettore Di Giacinto
6bd9a304bc Add local AI model to gallery agent workflow
Updated the GitHub Actions workflow to include the local AI model and modified environment variables for the gallery agent.

Signed-off-by: Ettore Di Giacinto <mudler@users.noreply.github.com>
2025-12-04 22:43:31 +01:00
Ettore Di Giacinto
7990c7a401 chore(agent): update gallery agent to use importers
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-12-04 22:23:43 +01:00
LocalAI [bot]
4bb93b1c4c chore(model-gallery): ⬆️ update checksum (#7433)
⬆️ Checksum updates in gallery/index.yaml

Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2025-12-04 21:23:26 +01:00
Copilot
1abbedd732 feat(diffusers): implement dynamic pipeline loader to remove per-pipeline conditionals (#7365)
* Initial plan

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* Add dynamic loader for diffusers pipelines and refactor backend.py

Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* Fix pipeline discovery error handling and test mock issue

Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* Address code review feedback: direct imports, better error handling, improved tests

Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* Address remaining code review feedback: specific exceptions, registry access, test imports

Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* Add defensive fallback for DiffusionPipeline registry access

Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* Actually use dynamic pipeline loading for all pipelines in backend

Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* Use dynamic loader consistently for all pipelines including AutoPipelineForText2Image

Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* Move dynamic loader tests into test.py for CI compatibility

Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* Extend dynamic loader to discover any diffusers class type, not just DiffusionPipeline

Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* Add AutoPipeline classes to pipeline registry for default model loading

Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* fix(python): set pyvenv python home

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* do pyenv update during start

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* Minor changes

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

---------

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
Co-authored-by: copilot-swe-agent[bot] <198982749+Copilot@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
Co-authored-by: Ettore Di Giacinto <mudler@localai.io>
Co-authored-by: Ettore Di Giacinto <mudler@users.noreply.github.com>
2025-12-04 19:02:06 +01:00
Ettore Di Giacinto
92ee8c2256 fix(ui): prevent box overflow in chat view (#7430)
Otherwise tool call and result might overflow the box

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-12-04 17:21:17 +01:00
Ettore Di Giacinto
78105e6b20 chore(ui): uniform buttons (#7429)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-12-04 17:18:51 +01:00
Richard Palethorpe
c2e4a1f29b feat(stablediffusion): Passthrough more parameters to support z-image and flux2 (#7419)
* feat(stablediffusion): Passthrough more parameters to support z-image and flux2

Signed-off-by: Richard Palethorpe <io@richiejp.com>

* chore(z-image): Add Z-Image-Turbo GGML to library

Signed-off-by: Richard Palethorpe <io@richiejp.com>

* fix(stablediffusion-ggml): flush stderr and check errors when writing PNG

Signed-off-by: Richard Palethorpe <io@richiejp.com>

* fix(stablediffusion-ggml): Re-allocate Go strings in C++

Signed-off-by: Richard Palethorpe <io@richiejp.com>

* fix(stablediffusion-ggml): Try to avoid segfaults

Signed-off-by: Richard Palethorpe <io@richiejp.com>

* fix(stablediffusion-ggml): Init sample and easycache params

Signed-off-by: Richard Palethorpe <io@richiejp.com>

---------

Signed-off-by: Richard Palethorpe <io@richiejp.com>
Co-authored-by: Ettore Di Giacinto <mudler@users.noreply.github.com>
2025-12-04 17:08:21 +01:00
Ettore Di Giacinto
100ebdfa2c chore(ci): do not overload the apple tests
Skip tests that are already run on other jobs and not really adding anything here. We have already functional tests that cover apple.

Signed-off-by: Ettore Di Giacinto <mudler@users.noreply.github.com>
2025-12-04 14:15:15 +01:00
LocalAI [bot]
ca2e878aaf chore: ⬆️ Update ggml-org/llama.cpp to e9f9483464e6f01d843d7f0293bd9c7bc6b2221c (#7421)
⬆️ Update ggml-org/llama.cpp

Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
Co-authored-by: Ettore Di Giacinto <mudler@users.noreply.github.com>
2025-12-04 11:54:01 +01:00
Igor B. Poretsky
96e123d53a Messages output fix (#7424)
The internal echo command in sh does not support "-e" and "-E" options
and interprets backslash escape sequences by default. So we prefer the
external echo command when it is available.
2025-12-04 11:30:02 +01:00
LocalAI [bot]
7c5a0cde64 chore: ⬆️ Update leejet/stable-diffusion.cpp to 5865b5e7034801af1a288a9584631730b25272c6 (#7422)
⬆️ Update leejet/stable-diffusion.cpp

Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
Co-authored-by: Ettore Di Giacinto <mudler@users.noreply.github.com>
2025-12-04 11:29:16 +01:00
Ettore Di Giacinto
edcbf82b31 chore(ci): add wget 2025-12-04 10:01:34 +01:00
Ettore Di Giacinto
6558caca85 chore(ci): adapt also golang-based backends docker images 2025-12-04 09:14:08 +01:00
Ettore Di Giacinto
b4172762d7 chore(ci): do override pip in 24.04
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-12-03 22:54:13 +01:00
Ettore Di Giacinto
dc6182bbb1 chore(ci): add wget to llama-cpp docker image builder
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-12-03 22:48:41 +01:00
Ettore Di Giacinto
1d1d52da59 chore(ci): small fixups to build arm64 images
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-12-03 21:42:33 +01:00
Ettore Di Giacinto
46b1a1848f chore(ci): minor fixup
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-12-03 16:47:31 +01:00
LocalAI [bot]
957eea3da3 chore: ⬆️ Update ggml-org/llama.cpp to 61bde8e21f4a1f9a98c9205831ca3e55457b4c78 (#7415)
⬆️ Update ggml-org/llama.cpp

Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
Co-authored-by: Ettore Di Giacinto <mudler@users.noreply.github.com>
2025-12-03 16:27:12 +01:00
Ettore Di Giacinto
ab4f2742a6 chore(ci): minor fixup
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-12-03 16:26:33 +01:00
Ettore Di Giacinto
03f3bf2d94 chore(ci): only install runtime libs needed on arm64
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-12-03 15:13:21 +01:00
Ettore Di Giacinto
774ddc60db chore(ci): specify ubuntu version in pipelines
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-12-03 11:10:18 +01:00
Ettore Di Giacinto
0ca1322b43 chore(ci): correctly pass ubuntu-version
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-12-03 09:58:10 +01:00
Ettore Di Giacinto
8dfeea2f55 fix: use ubuntu 24.04 for cuda13 l4t images (#7418)
* fix: use ubuntu 24.04 for cuda13 l4t images

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* Drop openblas from containers

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* Fixups

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* fixups

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

---------

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-12-03 09:47:03 +01:00
Ettore Di Giacinto
fea9018dc5 Revert "feat(stablediffusion): Passthrough more parameters to support z-image and flux2" (#7417)
Revert "feat(stablediffusion): Passthrough more parameters to support z-image…"

This reverts commit 4018e59b2a.
2025-12-02 22:14:28 +01:00
Ettore Di Giacinto
d8c7e90a69 Add Dockerfile for arm64 with nvpl installation (#7416)
Added installation of nvpl and updated apt-get commands for arm64 architecture.

Signed-off-by: Ettore Di Giacinto <mudler@users.noreply.github.com>
2025-12-02 21:55:42 +01:00
Ettore Di Giacinto
c045b7a6bb Update Dockerfile to install cudss package
Signed-off-by: Ettore Di Giacinto <mudler@users.noreply.github.com>
2025-12-02 21:23:21 +01:00
Ettore Di Giacinto
7a5c61b057 fix: configure sbsa packages for arm64 (#7413)
* fix: configure sbsa packages for arm64

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* tests

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* fixups

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* fixups

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

---------

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-12-02 18:59:36 +01:00
Richard Palethorpe
4018e59b2a feat(stablediffusion): Passthrough more parameters to support z-image and flux2 (#7414)
Signed-off-by: Richard Palethorpe <io@richiejp.com>
2025-12-02 18:28:26 +01:00
Richard Palethorpe
aaece6685f chore(deps/stable-diffusion-ggml): update stablediffusion-ggml (#7411)
* ⬆️ Update leejet/stable-diffusion.cpp

Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>

* fix(stablediffusion-ggml): fixup schedulers and samplers arrays, use default getters

Signed-off-by: Richard Palethorpe <io@richiejp.com>

---------

Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Signed-off-by: Richard Palethorpe <io@richiejp.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
Co-authored-by: Ettore Di Giacinto <mudler@users.noreply.github.com>
2025-12-02 16:35:39 +01:00
Ettore Di Giacinto
f5df806f35 Fixup tags
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-12-02 15:15:41 +01:00
Ettore Di Giacinto
cfd95745ed feat: add cuda13 images (#7404)
* chore(ci): add cuda13 jobs

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* Add to pipelines and to capabilities. Start to work on the gallery

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* gallery

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* capabilities: try to detect by looking at /usr/local

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* neutts

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* backends.yaml

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* add cuda13 l4t requirements.txt

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* add cuda13 requirements.txt

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* Fixups

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* Fixups

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* Pin vllm

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* Not all backends are compatible

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* add vllm to requirements

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* vllm is not pre-compiled for cuda 13

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

---------

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-12-02 14:24:35 +01:00
dependabot[bot]
9872bdf455 chore(deps): bump appleboy/ssh-action from 1.2.3 to 1.2.4 (#7410)
Bumps [appleboy/ssh-action](https://github.com/appleboy/ssh-action) from 1.2.3 to 1.2.4.
- [Release notes](https://github.com/appleboy/ssh-action/releases)
- [Commits](https://github.com/appleboy/ssh-action/compare/v1.2.3...v1.2.4)

---
updated-dependencies:
- dependency-name: appleboy/ssh-action
  dependency-version: 1.2.4
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-12-02 08:00:16 +01:00
LocalAI [bot]
665441ca94 chore: ⬆️ Update ggml-org/llama.cpp to ec18edfcba94dacb166e6523612fc0129cead67a (#7406)
⬆️ Update ggml-org/llama.cpp

Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2025-12-02 07:59:52 +01:00
dependabot[bot]
60f50a356f chore(deps): bump github.com/google/go-containerregistry from 0.19.2 to 0.20.7 (#7409)
chore(deps): bump github.com/google/go-containerregistry

Bumps [github.com/google/go-containerregistry](https://github.com/google/go-containerregistry) from 0.19.2 to 0.20.7.
- [Release notes](https://github.com/google/go-containerregistry/releases)
- [Commits](https://github.com/google/go-containerregistry/compare/v0.19.2...v0.20.7)

---
updated-dependencies:
- dependency-name: github.com/google/go-containerregistry
  dependency-version: 0.20.7
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-12-01 22:55:30 +00:00
Ettore Di Giacinto
045baf7fd2 fix(ui): navbar ordering and login icon (#7407)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-12-01 21:20:11 +01:00
Ettore Di Giacinto
8a54ffa668 fix: do not require auth for readyz/healthz endpoints (#7403)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-12-01 10:35:28 +01:00
Ettore Di Giacinto
e3bcba5c45 chore: ⬆️ Update ggml-org/llama.cpp to 7f8ef50cce40e3e7e4526a3696cb45658190e69a (#7402)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-12-01 07:50:40 +01:00
LocalAI [bot]
17d84c8556 feat(swagger): update swagger (#7400)
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2025-11-30 21:39:29 +00:00
Ettore Di Giacinto
a3423f33e1 feat(agent-jobs): add multimedia support (#7398)
* feat(agent-jobs): add multimedia support

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* Refactoring

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

---------

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-11-30 14:09:25 +01:00
Ettore Di Giacinto
45ee10ec50 feat(hf-api): return files in nested directories (#7396)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-11-30 09:06:54 +01:00
LocalAI [bot]
0824fd8efd chore: ⬆️ Update ggml-org/llama.cpp to 8c32d9d96d9ae345a0150cae8572859e9aafea0b (#7395)
⬆️ Update ggml-org/llama.cpp

Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2025-11-30 09:06:18 +01:00
LocalAI [bot]
a9b8869964 feat(swagger): update swagger (#7394)
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2025-11-30 09:05:46 +01:00
Ettore Di Giacinto
54b5dfa8e1 chore: refactor css, restyle to be slightly minimalistic (#7397)
restyle

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-11-29 22:11:44 +01:00
Ettore Di Giacinto
468ac608f3 chore(deps): bump llama.cpp to 'd82b7a7c1d73c0674698d9601b1bbb0200933f29' (#7392)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-11-29 08:58:07 +01:00
Ettore Di Giacinto
53e5b2d6be feat: agent jobs panel (#7390)
* feat(agent): agent jobs

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* Multiple webhooks, simplify

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* Do not use cron with seconds

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* Create separate pages for details

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* Detect if no models have MCP configuration, show wizard

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* Make services test to run

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

---------

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-11-28 23:05:39 +01:00
Ettore Di Giacinto
4b5977f535 chore: drop pinning of python 3.12 (#7389)
Update install.sh

Signed-off-by: Ettore Di Giacinto <mudler@users.noreply.github.com>
2025-11-28 11:02:56 +01:00
Ettore Di Giacinto
0d877b1e71 Revert "chore(l4t): Update extra index URL for requirements-l4t.txt" (#7388)
Revert "chore(l4t): Update extra index URL for requirements-l4t.txt (#7383)"

This reverts commit 0d781e6b7e.
2025-11-28 11:02:11 +01:00
Ettore Di Giacinto
e27f1370eb chore(diffusers): Add PY_STANDALONE_TAG for l4t Python version (#7387)
Signed-off-by: Ettore Di Giacinto <mudler@users.noreply.github.com>
2025-11-28 09:34:05 +01:00
LocalAI [bot]
1a53fd2b9b chore: ⬆️ Update ggml-org/llama.cpp to 4abef75f2cf2eee75eb5083b30a94cf981587394 (#7382)
⬆️ Update ggml-org/llama.cpp

Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2025-11-28 00:08:27 +01:00
Ettore Di Giacinto
e01d821314 chore: Add Python 3.12 support for l4t build profile (#7384)
Set Python version to 3.12 for l4t build profile.

Signed-off-by: Ettore Di Giacinto <mudler@users.noreply.github.com>
2025-11-27 23:00:09 +01:00
Ettore Di Giacinto
0d781e6b7e chore(l4t): Update extra index URL for requirements-l4t.txt (#7383)
Signed-off-by: Ettore Di Giacinto <mudler@users.noreply.github.com>
2025-11-27 22:02:06 +01:00
LocalAI [bot]
4c41f96157 docs: ⬆️ update docs version mudler/LocalAI (#7381)
⬆️ Update docs version mudler/LocalAI

Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2025-11-27 21:49:31 +01:00
Igor B. Poretsky
a8eb1c421b Clean data directory (#7378)
It seems to be no point to copy /etc/skel content to newly created data
directory.
2025-11-27 17:48:32 +01:00
Igor B. Poretsky
d27a281783 Correct user deletion with all its data (#7368)
Actually it is not necessary to remove particularly the local-ai data
directory before user deletion. It will be accomplished automatically by
the userdel command. But it is crucial to remove additional users from
the local-ai group to allow userdel command to delete the group itself.
2025-11-27 17:47:55 +01:00
Igor B. Poretsky
c411fe09fb Conventional way of adding extra apt repository (#7362) 2025-11-27 17:46:26 +01:00
Ettore Di Giacinto
7ccc383a8b chore(l4t/diffusers): bump nvidia l4t index for pytorch 2.9 (#7379)
Signed-off-by: Ettore Di Giacinto <mudler@users.noreply.github.com>
2025-11-27 17:42:01 +01:00
Ettore Di Giacinto
2f8a2b1297 chore(deps): update diffusers dependency to use GitHub repo for l4t (#7369)
Signed-off-by: Ettore Di Giacinto <mudler@users.noreply.github.com>
2025-11-27 16:02:48 +01:00
Igor B. Poretsky
acbcb44dbc Initialize sudo reference before its first actual use (#7367)
Unfortunately, in my previous pr I missed the fact that uninstall
procedure uses sudo as well. La colpa mia.
2025-11-27 15:20:46 +01:00
Igor B. Poretsky
ab022172a9 chore: switch from /usr/share to /var/lib for data storage (#7361)
* More appropriate place for data storing

The /usr/share subtree in Linux is used for data that generally are not
supposed to change. Conventional places for changeable data are usually
located under /var, so /var/lib seems to be a reasonable default here.

* Data paths consistency fix

* Directory name consistency fix
2025-11-27 09:18:28 +01:00
LocalAI [bot]
b5f4f4ac6d chore: ⬆️ Update ggml-org/llama.cpp to eec1e33a9ed71b79422e39cc489719cf4f8e0777 (#7363)
⬆️ Update ggml-org/llama.cpp

Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2025-11-27 09:17:25 +01:00
292 changed files with 24375 additions and 13268 deletions

9
.env
View File

@@ -32,15 +32,6 @@
# Forces shutdown of the backends if busy (only if LOCALAI_SINGLE_ACTIVE_BACKEND is set)
# LOCALAI_FORCE_BACKEND_SHUTDOWN=true
## Specify a build type. Available: cublas, openblas, clblas.
## cuBLAS: This is a GPU-accelerated version of the complete standard BLAS (Basic Linear Algebra Subprograms) library. It's provided by Nvidia and is part of their CUDA toolkit.
## OpenBLAS: This is an open-source implementation of the BLAS library that aims to provide highly optimized code for various platforms. It includes support for multi-threading and can be compiled to use hardware-specific features for additional performance. OpenBLAS can run on many kinds of hardware, including CPUs from Intel, AMD, and ARM.
## clBLAS: This is an open-source implementation of the BLAS library that uses OpenCL, a framework for writing programs that execute across heterogeneous platforms consisting of CPUs, GPUs, and other processors. clBLAS is designed to take advantage of the parallel computing power of GPUs but can also run on any hardware that supports OpenCL. This includes hardware from different vendors like Nvidia, AMD, and Intel.
# BUILD_TYPE=openblas
## Uncomment and set to true to enable rebuilding from source
# REBUILD=true
## Path where to store generated images
# LOCALAI_IMAGE_PATH=/tmp/generated/images

View File

@@ -2,11 +2,16 @@ package main
import (
"context"
"encoding/json"
"fmt"
"io"
"net/http"
"os"
"regexp"
"slices"
"strings"
"github.com/ghodss/yaml"
hfapi "github.com/mudler/LocalAI/pkg/huggingface-api"
cogito "github.com/mudler/cogito"
@@ -45,7 +50,12 @@ func cleanTextContent(text string) string {
}
// Remove trailing empty lines from the result
result := strings.Join(cleanedLines, "\n")
return strings.TrimRight(result, "\n")
return stripThinkingTags(strings.TrimRight(result, "\n"))
}
type galleryModel struct {
Name string `yaml:"name"`
Urls []string `yaml:"urls"`
}
// isModelExisting checks if a specific model ID exists in the gallery using text search
@@ -56,9 +66,20 @@ func isModelExisting(modelID string) (bool, error) {
return false, fmt.Errorf("failed to read %s: %w", indexPath, err)
}
contentStr := string(content)
// Simple text search - if the model ID appears anywhere in the file, it exists
return strings.Contains(contentStr, modelID), nil
var galleryModels []galleryModel
err = yaml.Unmarshal(content, &galleryModels)
if err != nil {
return false, fmt.Errorf("failed to unmarshal %s: %w", indexPath, err)
}
for _, galleryModel := range galleryModels {
if slices.Contains(galleryModel.Urls, modelID) {
return true, nil
}
}
return false, nil
}
// filterExistingModels removes models that already exist in the gallery
@@ -92,6 +113,16 @@ func getGalleryIndexPath() string {
return "gallery/index.yaml"
}
func stripThinkingTags(content string) string {
// Remove content between <thinking> and </thinking> (including multi-line)
content = regexp.MustCompile(`(?s)<thinking>.*?</thinking>`).ReplaceAllString(content, "")
// Remove content between <think> and </think> (including multi-line)
content = regexp.MustCompile(`(?s)<think>.*?</think>`).ReplaceAllString(content, "")
// Clean up any extra whitespace
content = strings.TrimSpace(content)
return content
}
func getRealReadme(ctx context.Context, repository string) (string, error) {
// Create a conversation fragment
fragment := cogito.NewEmptyFragment().
@@ -120,6 +151,11 @@ func getRealReadme(ctx context.Context, repository string) (string, error) {
}
func selectMostInterestingModels(ctx context.Context, searchResult *SearchResult) ([]ProcessedModel, error) {
if len(searchResult.Models) == 1 {
return searchResult.Models, nil
}
// Create a conversation fragment
fragment := cogito.NewEmptyFragment().
AddMessage("user",
@@ -218,71 +254,192 @@ Return your analysis and selection reasoning.`)
return filteredModels, nil
}
// ModelFamily represents a YAML anchor/family
type ModelFamily struct {
Anchor string `json:"anchor"`
Name string `json:"name"`
// ModelMetadata represents extracted metadata from a model
type ModelMetadata struct {
Tags []string `json:"tags"`
License string `json:"license"`
}
// selectModelFamily selects the appropriate model family/anchor for a given model
func selectModelFamily(ctx context.Context, model ProcessedModel, availableFamilies []ModelFamily) (string, error) {
// extractModelMetadata extracts tags and license from model README and documentation
func extractModelMetadata(ctx context.Context, model ProcessedModel) ([]string, string, error) {
// Create a conversation fragment
fragment := cogito.NewEmptyFragment().
AddMessage("user",
`Your task is to select the most appropriate model family/anchor for a given AI model. You will be provided with:
1. Information about the model (name, description, etc.)
2. A list of available model families/anchors
`Your task is to extract metadata from an AI model's README and documentation. You will be provided with:
1. Model information (ID, author, description)
2. README content
You need to select the family that best matches the model's architecture, capabilities, or characteristics. Consider:
- Model architecture (e.g., Llama, Qwen, Mistral, etc.)
- Model capabilities (e.g., vision, coding, chat, etc.)
- Model size/type (e.g., small, medium, large)
- Model purpose (e.g., general purpose, specialized, etc.)
You need to extract:
1. **Tags**: An array of relevant tags that describe the model. Use common tags from the gallery such as:
- llm, gguf, gpu, cpu, multimodal, image-to-text, text-to-text, text-to-speech, tts
- thinking, reasoning, chat, instruction-tuned, code, vision
- Model family names (e.g., llama, qwen, mistral, gemma) if applicable
- Any other relevant descriptive tags
Select 3-8 most relevant tags.
Return the anchor name that best fits the model.`)
2. **License**: The license identifier (e.g., "apache-2.0", "mit", "llama2", "gpl-3.0", "bsd", "cc-by-4.0").
If no license is found, return an empty string.
Return the extracted metadata in a structured format.`)
// Add model information
modelInfo := "Model Information:\n"
modelInfo += fmt.Sprintf(" ID: %s\n", model.ModelID)
modelInfo += fmt.Sprintf(" Author: %s\n", model.Author)
modelInfo += fmt.Sprintf(" Downloads: %d\n", model.Downloads)
modelInfo += fmt.Sprintf(" Description: %s\n", model.ReadmeContentPreview)
fragment = fragment.AddMessage("user", modelInfo)
// Add available families
familiesInfo := "Available Model Families:\n"
for _, family := range availableFamilies {
familiesInfo += fmt.Sprintf(" - %s (%s)\n", family.Anchor, family.Name)
if model.ReadmeContent != "" {
modelInfo += fmt.Sprintf(" README Content:\n%s\n", model.ReadmeContent)
} else if model.ReadmeContentPreview != "" {
modelInfo += fmt.Sprintf(" README Preview: %s\n", model.ReadmeContentPreview)
}
fragment = fragment.AddMessage("user", familiesInfo)
fragment = fragment.AddMessage("user", "Select the most appropriate family anchor for this model. Return just the anchor name.")
fragment = fragment.AddMessage("user", modelInfo)
fragment = fragment.AddMessage("user", "Extract the tags and license from the model information. Return the metadata as a JSON object with 'tags' (array of strings) and 'license' (string).")
// Get a response
newFragment, err := llm.Ask(ctx, fragment)
if err != nil {
return "", err
return nil, "", err
}
// Extract the selected family
selectedFamily := strings.TrimSpace(newFragment.LastMessage().Content)
// Extract structured metadata
metadata := ModelMetadata{}
// Validate that the selected family exists in our list
for _, family := range availableFamilies {
if family.Anchor == selectedFamily {
return selectedFamily, nil
}
s := structures.Structure{
Schema: jsonschema.Definition{
Type: jsonschema.Object,
AdditionalProperties: false,
Properties: map[string]jsonschema.Definition{
"tags": {
Type: jsonschema.Array,
Items: &jsonschema.Definition{Type: jsonschema.String},
Description: "Array of relevant tags describing the model",
},
"license": {
Type: jsonschema.String,
Description: "License identifier (e.g., apache-2.0, mit, llama2). Empty string if not found.",
},
},
Required: []string{"tags", "license"},
},
Object: &metadata,
}
// If no exact match, try to find a close match
for _, family := range availableFamilies {
if strings.Contains(strings.ToLower(family.Anchor), strings.ToLower(selectedFamily)) ||
strings.Contains(strings.ToLower(selectedFamily), strings.ToLower(family.Anchor)) {
return family.Anchor, nil
}
err = newFragment.ExtractStructure(ctx, llm, s)
if err != nil {
return nil, "", err
}
// Default fallback
return "llama3", nil
return metadata.Tags, metadata.License, nil
}
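The structured-output call above fills a `ModelMetadata` value from the LLM's JSON reply. A minimal standard-library sketch of just that decoding step (the payload is a made-up example, and `ModelMetadata` here is a local stand-in for the agent's type):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// ModelMetadata mirrors the shape enforced by the JSON schema:
// a required "tags" array and a required "license" string.
type ModelMetadata struct {
	Tags    []string `json:"tags"`
	License string   `json:"license"`
}

func decodeMetadata(raw []byte) (ModelMetadata, error) {
	var m ModelMetadata
	err := json.Unmarshal(raw, &m)
	return m, err
}

func main() {
	// Hypothetical LLM reply matching the schema.
	raw := []byte(`{"tags":["llm","gguf","chat"],"license":"apache-2.0"}`)
	m, err := decodeMetadata(raw)
	if err != nil {
		panic(err)
	}
	fmt.Println(m.Tags, m.License)
}
```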
// extractIconFromReadme scans the README content for image URLs and returns the first suitable icon URL found
func extractIconFromReadme(readmeContent string) string {
if readmeContent == "" {
return ""
}
// Regular expressions to match image URLs in various formats (case-insensitive)
// Match markdown image syntax: ![alt](url) - case insensitive extensions
markdownImageRegex := regexp.MustCompile(`(?i)!\[[^\]]*\]\(([^)]+\.(png|jpg|jpeg|svg|webp|gif))\)`)
// Match HTML img tags: <img src="url">
htmlImageRegex := regexp.MustCompile(`(?i)<img[^>]+src=["']([^"']+\.(png|jpg|jpeg|svg|webp|gif))["']`)
// Match plain URLs ending with image extensions
plainImageRegex := regexp.MustCompile(`(?i)https?://[^\s<>"']+\.(png|jpg|jpeg|svg|webp|gif)`)
// Try markdown format first
matches := markdownImageRegex.FindStringSubmatch(readmeContent)
if len(matches) > 1 && matches[1] != "" {
url := strings.TrimSpace(matches[1])
// Prefer HuggingFace CDN URLs or absolute URLs
if strings.HasPrefix(strings.ToLower(url), "http") {
return url
}
}
// Try HTML img tags
matches = htmlImageRegex.FindStringSubmatch(readmeContent)
if len(matches) > 1 && matches[1] != "" {
url := strings.TrimSpace(matches[1])
if strings.HasPrefix(strings.ToLower(url), "http") {
return url
}
}
// Try plain URLs
matches = plainImageRegex.FindStringSubmatch(readmeContent)
if len(matches) > 0 {
url := strings.TrimSpace(matches[0])
if strings.HasPrefix(strings.ToLower(url), "http") {
return url
}
}
return ""
}
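The three regexes above can be exercised in isolation. A standalone sketch of the markdown-image branch (the HTML and plain-URL branches follow the same pattern; the helper name is illustrative):

```go
package main

import (
	"fmt"
	"regexp"
	"strings"
)

// markdownImageRe matches ![alt](url.ext) with a case-insensitive
// image extension, capturing the URL in group 1.
var markdownImageRe = regexp.MustCompile(`(?i)!\[[^\]]*\]\(([^)]+\.(png|jpg|jpeg|svg|webp|gif))\)`)

// firstMarkdownIcon returns the first absolute image URL found in
// markdown image syntax, or "" if none qualifies.
func firstMarkdownIcon(readme string) string {
	m := markdownImageRe.FindStringSubmatch(readme)
	if len(m) > 1 {
		url := strings.TrimSpace(m[1])
		// Relative paths are skipped, matching the gallery-agent logic.
		if strings.HasPrefix(strings.ToLower(url), "http") {
			return url
		}
	}
	return ""
}

func main() {
	readme := "# Model\n![logo](https://example.com/logo.PNG)\n"
	fmt.Println(firstMarkdownIcon(readme))
}
```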
// getHuggingFaceAvatarURL attempts to get the HuggingFace avatar URL for a user
func getHuggingFaceAvatarURL(author string) string {
if author == "" {
return ""
}
// Try to fetch user info from HuggingFace API
// HuggingFace API endpoint: https://huggingface.co/api/users/{username}
baseURL := "https://huggingface.co"
userURL := fmt.Sprintf("%s/api/users/%s", baseURL, author)
req, err := http.NewRequest("GET", userURL, nil)
if err != nil {
return ""
}
client := &http.Client{}
resp, err := client.Do(req)
if err != nil {
return ""
}
defer resp.Body.Close()
if resp.StatusCode != http.StatusOK {
return ""
}
// Parse the response to get avatar URL
var userInfo map[string]interface{}
body, err := io.ReadAll(resp.Body)
if err != nil {
return ""
}
if err := json.Unmarshal(body, &userInfo); err != nil {
return ""
}
// Try to extract avatar URL from response
if avatar, ok := userInfo["avatarUrl"].(string); ok && avatar != "" {
return avatar
}
if avatar, ok := userInfo["avatar"].(string); ok && avatar != "" {
return avatar
}
return ""
}
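The avatar lookup tolerates two possible response keys. A small sketch of just that fallback over a decoded JSON body, with no network involved (the sample payloads are invented):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// avatarFromJSON tries the "avatarUrl" key first, then "avatar",
// mirroring the tolerant extraction in getHuggingFaceAvatarURL.
func avatarFromJSON(body []byte) string {
	var info map[string]interface{}
	if err := json.Unmarshal(body, &info); err != nil {
		return ""
	}
	if s, ok := info["avatarUrl"].(string); ok && s != "" {
		return s
	}
	if s, ok := info["avatar"].(string); ok && s != "" {
		return s
	}
	return ""
}

func main() {
	fmt.Println(avatarFromJSON([]byte(`{"avatarUrl":"https://cdn.example/a.png"}`)))
	fmt.Println(avatarFromJSON([]byte(`{"avatar":"https://cdn.example/b.png"}`)))
}
```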
// extractModelIcon extracts icon URL from README or falls back to HuggingFace avatar
func extractModelIcon(model ProcessedModel) string {
// First, try to extract icon from README
if icon := extractIconFromReadme(model.ReadmeContent); icon != "" {
return icon
}
// Fallback: Try to get HuggingFace user avatar
if model.Author != "" {
if avatar := getHuggingFaceAvatarURL(model.Author); avatar != "" {
return avatar
}
}
return ""
}


@@ -2,13 +2,61 @@ package main
import (
"context"
"encoding/json"
"fmt"
"os"
"strings"
"github.com/ghodss/yaml"
"github.com/mudler/LocalAI/core/gallery/importers"
)
func formatTextContent(text string) string {
return formatTextContentWithIndent(text, 4, 6)
}
// formatTextContentWithIndent formats text content with specified base and list item indentation
func formatTextContentWithIndent(text string, baseIndent int, listItemIndent int) string {
var formattedLines []string
lines := strings.Split(text, "\n")
for _, line := range lines {
trimmed := strings.TrimRight(line, " \t\r")
if trimmed == "" {
// Keep empty lines as empty (no indentation)
formattedLines = append(formattedLines, "")
} else {
// Preserve relative indentation from yaml.Marshal output
// Count existing leading spaces to preserve relative structure
leadingSpaces := len(trimmed) - len(strings.TrimLeft(trimmed, " \t"))
trimmedStripped := strings.TrimLeft(trimmed, " \t")
var totalIndent int
if strings.HasPrefix(trimmedStripped, "-") {
// List items: use listItemIndent (ignore existing leading spaces)
totalIndent = listItemIndent
} else {
// Regular lines: use baseIndent + preserve relative indentation
// This handles both top-level keys (leadingSpaces=0) and nested properties (leadingSpaces>0)
totalIndent = baseIndent + leadingSpaces
}
indentStr := strings.Repeat(" ", totalIndent)
formattedLines = append(formattedLines, indentStr+trimmedStripped)
}
}
formattedText := strings.Join(formattedLines, "\n")
// Remove any trailing spaces from the formatted description
formattedText = strings.TrimRight(formattedText, " \t")
return formattedText
}
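The re-indentation rules above (fixed indent for list items, base indent plus preserved relative indent for everything else) can be sketched as a standalone function:

```go
package main

import (
	"fmt"
	"strings"
)

// indentBlock re-indents marshalled YAML: list items get a fixed
// listIndent, other lines get baseIndent plus their own leading spaces.
func indentBlock(text string, baseIndent, listIndent int) string {
	var out []string
	for _, line := range strings.Split(text, "\n") {
		trimmed := strings.TrimRight(line, " \t\r")
		if trimmed == "" {
			out = append(out, "")
			continue
		}
		stripped := strings.TrimLeft(trimmed, " \t")
		leading := len(trimmed) - len(stripped)
		indent := baseIndent + leading
		if strings.HasPrefix(stripped, "-") {
			indent = listIndent // list items ignore their original indent
		}
		out = append(out, strings.Repeat(" ", indent)+stripped)
	}
	return strings.TrimRight(strings.Join(out, "\n"), " \t")
}

func main() {
	fmt.Println(indentBlock("name: x\n  nested: y\n- item\n", 4, 6))
}
```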
// generateYAMLEntry generates a YAML entry for a model using the specified anchor
func generateYAMLEntry(model ProcessedModel, familyAnchor string) string {
func generateYAMLEntry(model ProcessedModel, quantization string) string {
modelConfig, err := importers.DiscoverModelConfig("https://huggingface.co/"+model.ModelID, json.RawMessage(`{ "quantization": "`+quantization+`"}`))
if err != nil {
panic(err)
}
// Extract model name from ModelID
parts := strings.Split(model.ModelID, "/")
modelName := model.ModelID
@@ -22,18 +70,6 @@ func generateYAMLEntry(model ProcessedModel, familyAnchor string) string {
modelName = strings.ReplaceAll(modelName, "-q3_k_m", "")
modelName = strings.ReplaceAll(modelName, "-q2_k", "")
fileName := ""
checksum := ""
if model.PreferredModelFile != nil {
fileParts := strings.Split(model.PreferredModelFile.Path, "/")
if len(fileParts) > 0 {
fileName = fileParts[len(fileParts)-1]
}
checksum = model.PreferredModelFile.SHA256
} else {
fileName = model.ModelID
}
description := model.ReadmeContent
if description == "" {
description = fmt.Sprintf("AI model: %s", modelName)
@@ -41,142 +77,88 @@ func generateYAMLEntry(model ProcessedModel, familyAnchor string) string {
// Clean up description to prevent YAML linting issues
description = cleanTextContent(description)
formattedDescription := formatTextContent(description)
// Format description for YAML (indent each line and ensure no trailing spaces)
lines := strings.Split(description, "\n")
var formattedLines []string
for _, line := range lines {
if strings.TrimSpace(line) == "" {
// Keep empty lines as empty (no indentation)
formattedLines = append(formattedLines, "")
} else {
// Add indentation to non-empty lines
formattedLines = append(formattedLines, " "+line)
}
configFile := formatTextContent(modelConfig.ConfigFile)
filesYAML, _ := yaml.Marshal(modelConfig.Files)
// Files section: list items need 4 spaces (not 6), since files: is at 2 spaces
files := formatTextContentWithIndent(string(filesYAML), 4, 4)
// Build metadata sections
var metadataSections []string
// Add license if present
if model.License != "" {
metadataSections = append(metadataSections, fmt.Sprintf(` license: "%s"`, model.License))
}
formattedDescription := strings.Join(formattedLines, "\n")
// Remove any trailing spaces from the formatted description
formattedDescription = strings.TrimRight(formattedDescription, " \t")
// Add tags if present
if len(model.Tags) > 0 {
tagsYAML, _ := yaml.Marshal(model.Tags)
tagsFormatted := formatTextContentWithIndent(string(tagsYAML), 4, 4)
tagsFormatted = strings.TrimRight(tagsFormatted, "\n")
metadataSections = append(metadataSections, fmt.Sprintf(" tags:\n%s", tagsFormatted))
}
// Add icon if present
if model.Icon != "" {
metadataSections = append(metadataSections, fmt.Sprintf(` icon: %s`, model.Icon))
}
// Build the metadata block
metadataBlock := ""
if len(metadataSections) > 0 {
metadataBlock = strings.Join(metadataSections, "\n") + "\n"
}
yamlTemplate := ""
if checksum != "" {
yamlTemplate = `- !!merge <<: *%s
name: "%s"
yamlTemplate = `- name: "%s"
url: "github:mudler/LocalAI/gallery/virtual.yaml@master"
urls:
- https://huggingface.co/%s
description: |
%s
%s%s
overrides:
parameters:
model: %s
%s
files:
- filename: %s
sha256: %s
uri: huggingface://%s/%s`
return fmt.Sprintf(yamlTemplate,
familyAnchor,
modelName,
model.ModelID,
formattedDescription,
fileName,
fileName,
checksum,
model.ModelID,
fileName,
)
} else {
yamlTemplate = `- !!merge <<: *%s
name: "%s"
urls:
- https://huggingface.co/%s
description: |
%s
overrides:
parameters:
model: %s`
return fmt.Sprintf(yamlTemplate,
familyAnchor,
modelName,
model.ModelID,
formattedDescription,
fileName,
)
%s`
// Trim trailing newlines from formatted sections to prevent extra blank lines
formattedDescription = strings.TrimRight(formattedDescription, "\n")
configFile = strings.TrimRight(configFile, "\n")
files = strings.TrimRight(files, "\n")
// Add newline before metadata block if present
if metadataBlock != "" {
metadataBlock = "\n" + strings.TrimRight(metadataBlock, "\n")
}
}
// extractModelFamilies extracts all YAML anchors from the gallery index.yaml file
func extractModelFamilies() ([]ModelFamily, error) {
// Read the index.yaml file
indexPath := getGalleryIndexPath()
content, err := os.ReadFile(indexPath)
if err != nil {
return nil, fmt.Errorf("failed to read %s: %w", indexPath, err)
}
lines := strings.Split(string(content), "\n")
var families []ModelFamily
for _, line := range lines {
line = strings.TrimSpace(line)
// Look for YAML anchors (lines starting with "- &")
if strings.HasPrefix(line, "- &") {
// Extract the anchor name (everything after "- &")
anchor := strings.TrimPrefix(line, "- &")
// Remove any trailing colon or other characters
anchor = strings.Split(anchor, ":")[0]
anchor = strings.Split(anchor, " ")[0]
if anchor != "" {
families = append(families, ModelFamily{
Anchor: anchor,
Name: anchor, // Use anchor as name for now
})
}
}
}
return families, nil
return fmt.Sprintf(yamlTemplate,
modelName,
model.ModelID,
formattedDescription,
metadataBlock,
configFile,
files,
)
}
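The anchor scan in extractModelFamilies above is plain line parsing; a minimal sketch over an in-memory gallery snippet (the sample YAML is invented):

```go
package main

import (
	"fmt"
	"strings"
)

// anchorsIn returns every YAML anchor declared as "- &name" at the
// start of a (trimmed) line, the same heuristic extractModelFamilies uses.
func anchorsIn(index string) []string {
	var anchors []string
	for _, line := range strings.Split(index, "\n") {
		line = strings.TrimSpace(line)
		if !strings.HasPrefix(line, "- &") {
			continue
		}
		a := strings.TrimPrefix(line, "- &")
		// Strip any trailing ": ..." or " ..." after the anchor name.
		a = strings.Split(a, ":")[0]
		a = strings.Split(a, " ")[0]
		if a != "" {
			anchors = append(anchors, a)
		}
	}
	return anchors
}

func main() {
	index := "---\n- &llama3\n  name: llama3\n- &qwen3\n  name: qwen3\n"
	fmt.Println(anchorsIn(index)) // prints [llama3 qwen3]
}
```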
// generateYAMLForModels generates YAML entries for selected models and appends to index.yaml
func generateYAMLForModels(ctx context.Context, models []ProcessedModel) error {
// Extract available model families
families, err := extractModelFamilies()
if err != nil {
return fmt.Errorf("failed to extract model families: %w", err)
}
fmt.Printf("Found %d model families: %v\n", len(families),
func() []string {
var names []string
for _, f := range families {
names = append(names, f.Anchor)
}
return names
}())
func generateYAMLForModels(ctx context.Context, models []ProcessedModel, quantization string) error {
// Generate YAML entries for each model
var yamlEntries []string
for _, model := range models {
fmt.Printf("Selecting family for model: %s\n", model.ModelID)
// Select appropriate family for this model
familyAnchor, err := selectModelFamily(ctx, model, families)
if err != nil {
fmt.Printf("Error selecting family for %s: %v, using default\n", model.ModelID, err)
familyAnchor = "llama3" // Default fallback
}
fmt.Printf("Selected family '%s' for model %s\n", familyAnchor, model.ModelID)
fmt.Printf("Generating YAML entry for model: %s\n", model.ModelID)
// Generate YAML entry
yamlEntry := generateYAMLEntry(model, familyAnchor)
yamlEntry := generateYAMLEntry(model, quantization)
yamlEntries = append(yamlEntries, yamlEntry)
}
// Append to index.yaml
// Prepend to index.yaml (write at the top)
if len(yamlEntries) > 0 {
indexPath := getGalleryIndexPath()
fmt.Printf("Appending YAML entries to %s...\n", indexPath)
fmt.Printf("Prepending YAML entries to %s...\n", indexPath)
// Read current content
content, err := os.ReadFile(indexPath)
@@ -184,11 +166,26 @@ func generateYAMLForModels(ctx context.Context, models []ProcessedModel) error {
return fmt.Errorf("failed to read %s: %w", indexPath, err)
}
// Append new entries
// Remove trailing whitespace from existing content and join entries without extra newlines
existingContent := strings.TrimRight(string(content), " \t\n\r")
existingContent := string(content)
yamlBlock := strings.Join(yamlEntries, "\n")
newContent := existingContent + "\n" + yamlBlock + "\n"
// Check if file starts with "---"
var newContent string
if strings.HasPrefix(existingContent, "---\n") {
// File starts with "---", prepend new entries after it
restOfContent := strings.TrimPrefix(existingContent, "---\n")
// Ensure proper spacing: "---\n" + new entries + "\n" + rest of content
newContent = "---\n" + yamlBlock + "\n" + restOfContent
} else if strings.HasPrefix(existingContent, "---") {
// File starts with "---" but no newline after
restOfContent := strings.TrimPrefix(existingContent, "---")
newContent = "---\n" + yamlBlock + "\n" + strings.TrimPrefix(restOfContent, "\n")
} else {
// No "---" at start, prepend new entries at the very beginning
// Trim leading whitespace from existing content
existingContent = strings.TrimLeft(existingContent, " \t\n\r")
newContent = yamlBlock + "\n" + existingContent
}
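The three branches above reduce to one rule: keep a leading `---` document marker as the first line and splice the new entries directly after it. A compact sketch, assuming the same inputs:

```go
package main

import (
	"fmt"
	"strings"
)

// prependEntries inserts block at the top of a YAML index, keeping a
// leading "---" document marker (if any) as the first line.
func prependEntries(existing, block string) string {
	switch {
	case strings.HasPrefix(existing, "---\n"):
		rest := strings.TrimPrefix(existing, "---\n")
		return "---\n" + block + "\n" + rest
	case strings.HasPrefix(existing, "---"):
		// "---" present but without a trailing newline.
		rest := strings.TrimPrefix(existing, "---")
		return "---\n" + block + "\n" + strings.TrimPrefix(rest, "\n")
	default:
		return block + "\n" + strings.TrimLeft(existing, " \t\n\r")
	}
}

func main() {
	fmt.Print(prependEntries("---\nold: entry\n", "new: entry"))
}
```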
// Write back to file
err = os.WriteFile(indexPath, []byte(newContent), 0644)
@@ -196,7 +193,7 @@ func generateYAMLForModels(ctx context.Context, models []ProcessedModel) error {
return fmt.Errorf("failed to write %s: %w", indexPath, err)
}
fmt.Printf("Successfully added %d models to %s\n", len(yamlEntries), indexPath)
fmt.Printf("Successfully prepended %d models to %s\n", len(yamlEntries), indexPath)
}
return nil


@@ -34,6 +34,9 @@ type ProcessedModel struct {
ReadmeContentPreview string `json:"readme_content_preview,omitempty"`
QuantizationPreferences []string `json:"quantization_preferences"`
ProcessingError string `json:"processing_error,omitempty"`
Tags []string `json:"tags,omitempty"`
License string `json:"license,omitempty"`
Icon string `json:"icon,omitempty"`
}
// SearchResult represents the complete result of searching and processing models
@@ -116,14 +119,24 @@ func main() {
}
fmt.Println(result.FormattedOutput)
var models []ProcessedModel
// Use AI agent to select the most interesting models
fmt.Println("Using AI agent to select the most interesting models...")
models, err := selectMostInterestingModels(context.Background(), result)
if err != nil {
fmt.Fprintf(os.Stderr, "Error in model selection: %v\n", err)
// Continue with original result if selection fails
if len(result.Models) > 1 {
fmt.Println("More than one model found (", len(result.Models), "), using AI agent to select the most interesting models")
for _, model := range result.Models {
fmt.Println("Model: ", model.ModelID)
}
// Use AI agent to select the most interesting models
fmt.Println("Using AI agent to select the most interesting models...")
models, err = selectMostInterestingModels(context.Background(), result)
if err != nil {
fmt.Fprintf(os.Stderr, "Error in model selection: %v\n", err)
// Continue with original result if selection fails
models = result.Models
}
} else if len(result.Models) == 1 {
models = result.Models
fmt.Println("Only one model found, using it directly")
}
fmt.Print(models)
@@ -154,7 +167,7 @@ func main() {
addedModelURLs = append(addedModelURLs, modelURL)
}
fmt.Println("Generating YAML entries for selected models...")
err = generateYAMLForModels(context.Background(), models)
err = generateYAMLForModels(context.Background(), models, quantization)
if err != nil {
fmt.Fprintf(os.Stderr, "Error generating YAML entries: %v\n", err)
os.Exit(1)
@@ -312,9 +325,28 @@ func searchAndProcessModels(searchTerm string, limit int, quantization string) (
outputBuilder.WriteString(fmt.Sprintf(" README Content Preview: %s\n",
processedModel.ReadmeContentPreview))
} else {
continue
fmt.Printf(" Warning: Failed to get real readme: %v\n", err)
}
fmt.Println("Real readme got", readmeContent)
// Extract metadata (tags, license) from README using LLM
fmt.Println("Extracting metadata for", model.ModelID, "waiting...")
tags, license, err := extractModelMetadata(context.Background(), processedModel)
if err == nil {
processedModel.Tags = tags
processedModel.License = license
outputBuilder.WriteString(fmt.Sprintf(" Tags: %v\n", tags))
outputBuilder.WriteString(fmt.Sprintf(" License: %s\n", license))
} else {
fmt.Printf(" Warning: Failed to extract metadata: %v\n", err)
}
// Extract icon from README or use HuggingFace avatar
icon := extractModelIcon(processedModel)
if icon != "" {
processedModel.Icon = icon
outputBuilder.WriteString(fmt.Sprintf(" Icon: %s\n", icon))
}
// Get README content
// readmeContent, err := client.GetReadmeContent(model.ModelID, details.ReadmeFile.Path)
// if err == nil {


@@ -25,7 +25,7 @@ func runSyntheticMode() error {
// Generate YAML entries and append to gallery/index.yaml
fmt.Println("Generating YAML entries for synthetic models...")
err := generateYAMLForModels(context.Background(), models)
err := generateYAMLForModels(context.Background(), models, "Q4_K_M")
if err != nil {
return fmt.Errorf("error generating YAML entries: %w", err)
}
@@ -138,6 +138,25 @@ func (g *SyntheticDataGenerator) GenerateProcessedModel() ProcessedModel {
readmeContent := g.generateReadmeContent(modelName, author)
// Generate sample metadata
licenses := []string{"apache-2.0", "mit", "llama2", "gpl-3.0", "bsd", ""}
license := licenses[g.rand.Intn(len(licenses))]
sampleTags := []string{"llm", "gguf", "gpu", "cpu", "text-to-text", "chat", "instruction-tuned"}
numTags := g.rand.Intn(4) + 3 // 3-6 tags
tags := make([]string, numTags)
for i := 0; i < numTags; i++ {
tags[i] = sampleTags[g.rand.Intn(len(sampleTags))]
}
// Remove duplicates
tags = g.removeDuplicates(tags)
// Optionally include icon (50% chance)
icon := ""
if g.rand.Intn(2) == 0 {
icon = fmt.Sprintf("https://cdn-avatars.huggingface.co/v1/production/uploads/%s.png", g.randomString(24))
}
return ProcessedModel{
ModelID: modelID,
Author: author,
@@ -150,6 +169,9 @@ func (g *SyntheticDataGenerator) GenerateProcessedModel() ProcessedModel {
ReadmeContentPreview: truncateString(readmeContent, 200),
QuantizationPreferences: []string{"Q4_K_M", "Q4_K_S", "Q3_K_M", "Q2_K"},
ProcessingError: "",
Tags: tags,
License: license,
Icon: icon,
}
}
@@ -179,6 +201,18 @@ func (g *SyntheticDataGenerator) randomDate() string {
return pastDate.Format("2006-01-02T15:04:05.000Z")
}
func (g *SyntheticDataGenerator) removeDuplicates(slice []string) []string {
keys := make(map[string]bool)
result := []string{}
for _, item := range slice {
if !keys[item] {
keys[item] = true
result = append(result, item)
}
}
return result
}
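removeDuplicates keeps the first occurrence of each tag in order; a standalone sketch of the same map-backed dedupe:

```go
package main

import "fmt"

// dedupe drops later duplicates while preserving first-seen order,
// the same approach as SyntheticDataGenerator.removeDuplicates.
func dedupe(slice []string) []string {
	seen := make(map[string]bool)
	result := []string{}
	for _, item := range slice {
		if !seen[item] {
			seen[item] = true
			result = append(result, item)
		}
	}
	return result
}

func main() {
	fmt.Println(dedupe([]string{"llm", "gguf", "llm", "chat"})) // prints [llm gguf chat]
}
```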
func (g *SyntheticDataGenerator) generateReadmeContent(modelName, author string) string {
templates := []string{
fmt.Sprintf("# %s Model\n\nThis is a %s model developed by %s. It's designed for various natural language processing tasks including text generation, question answering, and conversation.\n\n## Features\n\n- High-quality text generation\n- Efficient inference\n- Multiple quantization options\n- Easy to use with LocalAI\n\n## Usage\n\nUse this model with LocalAI for various AI tasks.", strings.Title(modelName), modelName, author),


File diff suppressed because it is too large


@@ -1,5 +1,5 @@
---
name: 'build python backend container images (reusable)'
name: 'build backend container images (reusable)'
on:
workflow_call:
@@ -53,6 +53,11 @@ on:
description: 'Skip drivers'
default: 'false'
type: string
ubuntu-version:
description: 'Ubuntu version'
required: false
default: '2204'
type: string
secrets:
dockerUsername:
required: false
@@ -208,6 +213,7 @@ jobs:
CUDA_MINOR_VERSION=${{ inputs.cuda-minor-version }}
BASE_IMAGE=${{ inputs.base-image }}
BACKEND=${{ inputs.backend }}
UBUNTU_VERSION=${{ inputs.ubuntu-version }}
context: ${{ inputs.context }}
file: ${{ inputs.dockerfile }}
cache-from: type=gha
@@ -228,6 +234,7 @@ jobs:
CUDA_MINOR_VERSION=${{ inputs.cuda-minor-version }}
BASE_IMAGE=${{ inputs.base-image }}
BACKEND=${{ inputs.backend }}
UBUNTU_VERSION=${{ inputs.ubuntu-version }}
context: ${{ inputs.context }}
file: ${{ inputs.dockerfile }}
cache-from: type=gha


@@ -74,7 +74,7 @@ jobs:
BACKEND=${{ inputs.backend }} BUILD_TYPE=${{ inputs.build-type }} USE_PIP=${{ inputs.use-pip }} make build-darwin-${{ inputs.lang }}-backend
- name: Upload ${{ inputs.backend }}.tar
uses: actions/upload-artifact@v5
uses: actions/upload-artifact@v6
with:
name: ${{ inputs.backend }}-tar
path: backend-images/${{ inputs.backend }}.tar
@@ -85,7 +85,7 @@ jobs:
runs-on: ubuntu-latest
steps:
- name: Download ${{ inputs.backend }}.tar
uses: actions/download-artifact@v6
uses: actions/download-artifact@v7
with:
name: ${{ inputs.backend }}-tar
path: .


@@ -52,6 +52,7 @@ jobs:
dockerfile: ${{ matrix.dockerfile }}
skip-drivers: ${{ matrix.skip-drivers }}
context: ${{ matrix.context }}
ubuntu-version: ${{ matrix.ubuntu-version }}
secrets:
quayUsername: ${{ secrets.LOCALAI_REGISTRY_USERNAME }}
quayPassword: ${{ secrets.LOCALAI_REGISTRY_PASSWORD }}
@@ -69,7 +70,7 @@ jobs:
tag-suffix: ${{ matrix.tag-suffix }}
lang: ${{ matrix.lang || 'python' }}
use-pip: ${{ matrix.backend == 'diffusers' }}
runs-on: "macOS-14"
runs-on: "macos-latest"
secrets:
quayUsername: ${{ secrets.LOCALAI_REGISTRY_USERNAME }}
quayPassword: ${{ secrets.LOCALAI_REGISTRY_PASSWORD }}


@@ -37,7 +37,7 @@ jobs:
make build-launcher-darwin
ls -liah dist
- name: Upload macOS launcher artifacts
uses: actions/upload-artifact@v5
uses: actions/upload-artifact@v6
with:
name: launcher-macos
path: dist/
@@ -60,7 +60,7 @@ jobs:
sudo apt-get install golang gcc libgl1-mesa-dev xorg-dev libxkbcommon-dev
make build-launcher-linux
- name: Upload Linux launcher artifacts
uses: actions/upload-artifact@v5
uses: actions/upload-artifact@v6
with:
name: launcher-linux
path: local-ai-launcher-linux.tar.xz


@@ -49,7 +49,7 @@ jobs:
rm -rfv ${{ matrix.variable }}_message.txt
rm -rfv ${{ matrix.variable }}_commit.txt
- name: Create Pull Request
uses: peter-evans/create-pull-request@v7
uses: peter-evans/create-pull-request@v8
with:
token: ${{ secrets.UPDATE_BOT_TOKEN }}
push-to-fork: ci-forks/LocalAI


@@ -17,7 +17,7 @@ jobs:
run: |
bash .github/bump_docs.sh ${{ matrix.repository }}
- name: Create Pull Request
uses: peter-evans/create-pull-request@v7
uses: peter-evans/create-pull-request@v8
with:
token: ${{ secrets.UPDATE_BOT_TOKEN }}
push-to-fork: ci-forks/LocalAI


@@ -35,7 +35,7 @@ jobs:
sudo chmod 777 /hf_cache
bash .github/checksum_checker.sh gallery/index.yaml
- name: Create Pull Request
uses: peter-evans/create-pull-request@v7
uses: peter-evans/create-pull-request@v8
with:
token: ${{ secrets.UPDATE_BOT_TOKEN }}
push-to-fork: ci-forks/LocalAI


@@ -33,7 +33,7 @@ jobs:
run: |
CGO_ENABLED=0 make build
- name: rm
uses: appleboy/ssh-action@v1.2.3
uses: appleboy/ssh-action@v1.2.4
with:
host: ${{ secrets.EXPLORER_SSH_HOST }}
username: ${{ secrets.EXPLORER_SSH_USERNAME }}
@@ -53,7 +53,7 @@ jobs:
rm: true
target: ./local-ai
- name: restarting
uses: appleboy/ssh-action@v1.2.3
uses: appleboy/ssh-action@v1.2.4
with:
host: ${{ secrets.EXPLORER_SSH_HOST }}
username: ${{ secrets.EXPLORER_SSH_USERNAME }}


@@ -38,20 +38,33 @@ jobs:
uses: actions/setup-go@v5
with:
go-version: '1.21'
- name: Proto Dependencies
run: |
# Install protoc
curl -L -s https://github.com/protocolbuffers/protobuf/releases/download/v26.1/protoc-26.1-linux-x86_64.zip -o protoc.zip && \
unzip -j -d /usr/local/bin protoc.zip bin/protoc && \
rm protoc.zip
go install google.golang.org/protobuf/cmd/protoc-gen-go@v1.34.2
go install google.golang.org/grpc/cmd/protoc-gen-go-grpc@1958fcbe2ca8bd93af633f11e97d44e567e945af
PATH="$PATH:$HOME/go/bin" make protogen-go
- uses: mudler/localai-github-action@v1.1
with:
model: 'https://huggingface.co/bartowski/Qwen_Qwen3-1.7B-GGUF'
- name: Run gallery agent
env:
OPENAI_MODEL: ${{ secrets.OPENAI_MODEL }}
#OPENAI_MODEL: ${{ secrets.OPENAI_MODEL }}
OPENAI_MODE: Qwen_Qwen3-1.7B-GGUF
OPENAI_BASE_URL: "http://localhost:8080"
OPENAI_KEY: ${{ secrets.OPENAI_KEY }}
OPENAI_BASE_URL: ${{ secrets.OPENAI_BASE_URL }}
#OPENAI_BASE_URL: ${{ secrets.OPENAI_BASE_URL }}
SEARCH_TERM: ${{ github.event.inputs.search_term || 'GGUF' }}
LIMIT: ${{ github.event.inputs.limit || '15' }}
QUANTIZATION: ${{ github.event.inputs.quantization || 'Q4_K_M' }}
MAX_MODELS: ${{ github.event.inputs.max_models || '1' }}
run: |
export GALLERY_INDEX_PATH=$PWD/gallery/index.yaml
go run .github/gallery-agent
go run ./.github/gallery-agent
- name: Check for changes
id: check_changes
@@ -69,28 +82,28 @@ jobs:
id: read_summary
if: steps.check_changes.outputs.changes == 'true'
run: |
if [ -f ".github/gallery-agent/gallery-agent-summary.json" ]; then
if [ -f "./gallery-agent-summary.json" ]; then
echo "summary_exists=true" >> $GITHUB_OUTPUT
# Extract summary data using jq
echo "search_term=$(jq -r '.search_term' .github/gallery-agent/gallery-agent-summary.json)" >> $GITHUB_OUTPUT
echo "total_found=$(jq -r '.total_found' .github/gallery-agent/gallery-agent-summary.json)" >> $GITHUB_OUTPUT
echo "models_added=$(jq -r '.models_added' .github/gallery-agent/gallery-agent-summary.json)" >> $GITHUB_OUTPUT
echo "quantization=$(jq -r '.quantization' .github/gallery-agent/gallery-agent-summary.json)" >> $GITHUB_OUTPUT
echo "processing_time=$(jq -r '.processing_time' .github/gallery-agent/gallery-agent-summary.json)" >> $GITHUB_OUTPUT
echo "search_term=$(jq -r '.search_term' ./gallery-agent-summary.json)" >> $GITHUB_OUTPUT
echo "total_found=$(jq -r '.total_found' ./gallery-agent-summary.json)" >> $GITHUB_OUTPUT
echo "models_added=$(jq -r '.models_added' ./gallery-agent-summary.json)" >> $GITHUB_OUTPUT
echo "quantization=$(jq -r '.quantization' ./gallery-agent-summary.json)" >> $GITHUB_OUTPUT
echo "processing_time=$(jq -r '.processing_time' ./gallery-agent-summary.json)" >> $GITHUB_OUTPUT
# Create a formatted list of added models with URLs
added_models=$(jq -r 'range(0; .added_model_ids | length) as $i | "- [\(.added_model_ids[$i])](\(.added_model_urls[$i]))"' .github/gallery-agent/gallery-agent-summary.json | tr '\n' '\n')
added_models=$(jq -r 'range(0; .added_model_ids | length) as $i | "- [\(.added_model_ids[$i])](\(.added_model_urls[$i]))"' ./gallery-agent-summary.json | tr '\n' '\n')
echo "added_models<<EOF" >> $GITHUB_OUTPUT
echo "$added_models" >> $GITHUB_OUTPUT
echo "EOF" >> $GITHUB_OUTPUT
rm -f .github/gallery-agent/gallery-agent-summary.json
rm -f ./gallery-agent-summary.json
else
echo "summary_exists=false" >> $GITHUB_OUTPUT
fi
- name: Create Pull Request
if: steps.check_changes.outputs.changes == 'true'
uses: peter-evans/create-pull-request@v7
uses: peter-evans/create-pull-request@v8
with:
token: ${{ secrets.UPDATE_BOT_TOKEN }}
push-to-fork: ci-forks/LocalAI


@@ -22,6 +22,7 @@ jobs:
base-image: ${{ matrix.base-image }}
grpc-base-image: ${{ matrix.grpc-base-image }}
makeflags: ${{ matrix.makeflags }}
ubuntu-version: ${{ matrix.ubuntu-version }}
secrets:
dockerUsername: ${{ secrets.DOCKERHUB_USERNAME }}
dockerPassword: ${{ secrets.DOCKERHUB_PASSWORD }}
@@ -43,6 +44,17 @@ jobs:
runs-on: 'ubuntu-latest'
base-image: "ubuntu:22.04"
makeflags: "--jobs=3 --output-sync=target"
ubuntu-version: '2204'
- build-type: 'cublas'
cuda-major-version: "13"
cuda-minor-version: "0"
platforms: 'linux/amd64'
tag-latest: 'false'
tag-suffix: '-gpu-nvidia-cuda-13'
runs-on: 'ubuntu-latest'
base-image: "ubuntu:22.04"
makeflags: "--jobs=3 --output-sync=target"
ubuntu-version: '2204'
- build-type: 'hipblas'
platforms: 'linux/amd64'
tag-latest: 'false'
@@ -51,6 +63,7 @@ jobs:
grpc-base-image: "ubuntu:22.04"
runs-on: 'ubuntu-latest'
makeflags: "--jobs=3 --output-sync=target"
ubuntu-version: '2204'
- build-type: 'sycl'
platforms: 'linux/amd64'
tag-latest: 'false'
@@ -59,6 +72,7 @@ jobs:
tag-suffix: 'sycl'
runs-on: 'ubuntu-latest'
makeflags: "--jobs=3 --output-sync=target"
ubuntu-version: '2204'
- build-type: 'vulkan'
platforms: 'linux/amd64'
tag-latest: 'false'
@@ -66,3 +80,15 @@ jobs:
runs-on: 'ubuntu-latest'
base-image: "ubuntu:22.04"
makeflags: "--jobs=4 --output-sync=target"
ubuntu-version: '2204'
- build-type: 'cublas'
cuda-major-version: "13"
cuda-minor-version: "0"
platforms: 'linux/arm64'
tag-latest: 'false'
tag-suffix: '-nvidia-l4t-arm64-cuda-13'
base-image: "ubuntu:24.04"
runs-on: 'ubuntu-24.04-arm'
makeflags: "--jobs=4 --output-sync=target"
skip-drivers: 'false'
ubuntu-version: '2404'


@@ -27,6 +27,7 @@ jobs:
grpc-base-image: ${{ matrix.grpc-base-image }}
aio: ${{ matrix.aio }}
makeflags: ${{ matrix.makeflags }}
ubuntu-version: ${{ matrix.ubuntu-version }}
secrets:
dockerUsername: ${{ secrets.DOCKERHUB_USERNAME }}
dockerPassword: ${{ secrets.DOCKERHUB_PASSWORD }}
@@ -44,6 +45,7 @@ jobs:
runs-on: 'ubuntu-latest'
makeflags: "--jobs=3 --output-sync=target"
aio: "-aio-gpu-hipblas"
ubuntu-version: '2204'
core-image-build:
uses: ./.github/workflows/image_build.yml
@@ -60,6 +62,7 @@ jobs:
grpc-base-image: ${{ matrix.grpc-base-image }}
makeflags: ${{ matrix.makeflags }}
skip-drivers: ${{ matrix.skip-drivers }}
ubuntu-version: ${{ matrix.ubuntu-version }}
secrets:
dockerUsername: ${{ secrets.DOCKERHUB_USERNAME }}
dockerPassword: ${{ secrets.DOCKERHUB_PASSWORD }}
@@ -78,6 +81,7 @@ jobs:
aio: "-aio-cpu"
makeflags: "--jobs=4 --output-sync=target"
skip-drivers: 'false'
ubuntu-version: '2204'
- build-type: 'cublas'
cuda-major-version: "11"
cuda-minor-version: "7"
@@ -89,6 +93,7 @@ jobs:
makeflags: "--jobs=4 --output-sync=target"
skip-drivers: 'false'
aio: "-aio-gpu-nvidia-cuda-11"
ubuntu-version: '2204'
- build-type: 'cublas'
cuda-major-version: "12"
cuda-minor-version: "0"
@@ -100,6 +105,19 @@ jobs:
skip-drivers: 'false'
makeflags: "--jobs=4 --output-sync=target"
aio: "-aio-gpu-nvidia-cuda-12"
ubuntu-version: '2204'
- build-type: 'cublas'
cuda-major-version: "13"
cuda-minor-version: "0"
platforms: 'linux/amd64'
tag-latest: 'auto'
tag-suffix: '-gpu-nvidia-cuda-13'
runs-on: 'ubuntu-latest'
base-image: "ubuntu:22.04"
skip-drivers: 'false'
makeflags: "--jobs=4 --output-sync=target"
aio: "-aio-gpu-nvidia-cuda-13"
ubuntu-version: '2204'
- build-type: 'vulkan'
platforms: 'linux/amd64'
tag-latest: 'auto'
@@ -109,6 +127,7 @@ jobs:
skip-drivers: 'false'
makeflags: "--jobs=4 --output-sync=target"
aio: "-aio-gpu-vulkan"
ubuntu-version: '2204'
- build-type: 'intel'
platforms: 'linux/amd64'
tag-latest: 'auto'
@@ -118,6 +137,7 @@ jobs:
runs-on: 'ubuntu-latest'
makeflags: "--jobs=3 --output-sync=target"
aio: "-aio-gpu-intel"
ubuntu-version: '2204'
gh-runner:
uses: ./.github/workflows/image_build.yml
@@ -134,6 +154,7 @@ jobs:
grpc-base-image: ${{ matrix.grpc-base-image }}
makeflags: ${{ matrix.makeflags }}
skip-drivers: ${{ matrix.skip-drivers }}
ubuntu-version: ${{ matrix.ubuntu-version }}
secrets:
dockerUsername: ${{ secrets.DOCKERHUB_USERNAME }}
dockerPassword: ${{ secrets.DOCKERHUB_PASSWORD }}
@@ -152,3 +173,15 @@ jobs:
runs-on: 'ubuntu-24.04-arm'
makeflags: "--jobs=4 --output-sync=target"
skip-drivers: 'true'
ubuntu-version: "2204"
- build-type: 'cublas'
cuda-major-version: "13"
cuda-minor-version: "0"
platforms: 'linux/arm64'
tag-latest: 'auto'
tag-suffix: '-nvidia-l4t-arm64-cuda-13'
base-image: "ubuntu:24.04"
runs-on: 'ubuntu-24.04-arm'
makeflags: "--jobs=4 --output-sync=target"
skip-drivers: 'false'
ubuntu-version: '2404'

View File

@@ -56,6 +56,11 @@ on:
required: false
default: ''
type: string
ubuntu-version:
description: 'Ubuntu version'
required: false
default: '2204'
type: string
secrets:
dockerUsername:
required: true
@@ -238,6 +243,7 @@ jobs:
GRPC_VERSION=v1.65.0
MAKEFLAGS=${{ inputs.makeflags }}
SKIP_DRIVERS=${{ inputs.skip-drivers }}
UBUNTU_VERSION=${{ inputs.ubuntu-version }}
context: .
file: ./Dockerfile
cache-from: type=gha
@@ -265,6 +271,7 @@ jobs:
GRPC_VERSION=v1.65.0
MAKEFLAGS=${{ inputs.makeflags }}
SKIP_DRIVERS=${{ inputs.skip-drivers }}
UBUNTU_VERSION=${{ inputs.ubuntu-version }}
context: .
file: ./Dockerfile
cache-from: type=gha

View File

@@ -10,7 +10,7 @@ jobs:
stale:
runs-on: ubuntu-latest
steps:
- uses: actions/stale@5f858e3efba33a5ca4407a664cc011ad407f2008 # v9
- uses: actions/stale@997185467fa4f803885201cee163a9f38240193d # v9
with:
stale-issue-message: 'This issue is stale because it has been open 90 days with no activity. Remove stale label or comment or this will be closed in 5 days.'
stale-pr-message: 'This PR is stale because it has been open 90 days with no activity. Remove stale label or comment or this will be closed in 10 days.'

View File

@@ -109,11 +109,6 @@ jobs:
sudo apt-get update
sudo apt-get install -y cuda-nvcc-${CUDA_VERSION} libcublas-dev-${CUDA_VERSION}
export CUDACXX=/usr/local/cuda/bin/nvcc
# The python3-grpc-tools package in 22.04 is too old
pip install --user grpcio-tools==1.71.0 grpcio==1.71.0
make -C backend/python/transformers
make backends/huggingface backends/llama-cpp backends/local-store backends/silero-vad backends/piper backends/whisper backends/stablediffusion-ggml
@@ -190,7 +185,7 @@ jobs:
limit-access-to-actor: true
tests-apple:
runs-on: macOS-14
runs-on: macos-latest
strategy:
matrix:
go-version: ['1.25.x']
@@ -210,7 +205,7 @@ jobs:
- name: Dependencies
run: |
brew install protobuf grpc make protoc-gen-go protoc-gen-go-grpc libomp llvm
pip install --user --no-cache-dir grpcio-tools==1.71.0 grpcio==1.71.0
pip install --user --no-cache-dir grpcio-tools grpcio
- name: Build llama-cpp-darwin
run: |
make protogen-go

View File

@@ -25,7 +25,7 @@ jobs:
run: |
make protogen-go swagger
- name: Create Pull Request
uses: peter-evans/create-pull-request@v7
uses: peter-evans/create-pull-request@v8
with:
token: ${{ secrets.UPDATE_BOT_TOKEN }}
push-to-fork: ci-forks/LocalAI

View File

@@ -22,6 +22,9 @@ builds:
goarch:
- amd64
- arm64
ignore:
- goos: darwin
goarch: amd64
archives:
- formats: [ 'binary' ] # this removes the tar of the archives, leaving the binaries alone
name_template: local-ai-{{ .Tag }}-{{ .Os }}-{{ .Arch }}{{ if .Arm }}v{{ .Arm }}{{ end }}

AGENTS.md Normal file
View File

@@ -0,0 +1,79 @@
# Build and testing
Building and testing the project depend on the components involved and the platform where development is taking place. Due to the amount of context required, it's usually best not to build or test the project unless the user requests it. If you must build the project, inspect the Makefile in the project root and the Makefiles of any backends affected by the changes you are making. The workflows in .github/workflows can also be used as a reference when it is unclear how to build or test a component. The primary Makefile contains targets for building inside or outside Docker; if the user has not previously specified a preference, ask which they would like to use.
# Coding style
- The project has the following .editorconfig
```
root = true
[*]
indent_style = space
indent_size = 2
end_of_line = lf
charset = utf-8
trim_trailing_whitespace = true
insert_final_newline = true
[*.go]
indent_style = tab
[Makefile]
indent_style = tab
[*.proto]
indent_size = 2
[*.py]
indent_size = 4
[*.js]
indent_size = 2
[*.yaml]
indent_size = 2
[*.md]
trim_trailing_whitespace = false
```
- Use comments sparingly to explain why code does something, not what it does. Comments are there to add context that would be difficult to deduce from reading the code.
- Prefer modern Go e.g. use `any` not `interface{}`
# Logging
Use `github.com/mudler/xlog` for logging which has the same API as slog.
# llama.cpp Backend
The llama.cpp backend (`backend/cpp/llama-cpp/grpc-server.cpp`) is a gRPC adaptation of the upstream HTTP server (`llama.cpp/tools/server/server.cpp`). It uses the same underlying server infrastructure from `llama.cpp/tools/server/server-context.cpp`.
## Building and Testing
- Test llama.cpp backend compilation: `make backends/llama-cpp`
- The backend is built as part of the main build process
- Check `backend/cpp/llama-cpp/Makefile` for build configuration
## Architecture
- **grpc-server.cpp**: gRPC server implementation, adapts HTTP server patterns to gRPC
- Uses shared server infrastructure: `server-context.cpp`, `server-task.cpp`, `server-queue.cpp`, `server-common.cpp`
- The gRPC server mirrors the HTTP server's functionality but uses gRPC instead of HTTP
## Common Issues When Updating llama.cpp
When fixing compilation errors after upstream changes:
1. Check how `server.cpp` (HTTP server) handles the same change
2. Look for new public APIs or getter methods
3. Store copies of needed data instead of accessing private members
4. Update function calls to match new signatures
5. Test with `make backends/llama-cpp`
## Key Differences from HTTP Server
- gRPC uses `BackendServiceImpl` class with gRPC service methods
- HTTP server uses `server_routes` with HTTP handlers
- Both use the same `server_context` and task queue infrastructure
- gRPC methods: `LoadModel`, `Predict`, `PredictStream`, `Embedding`, `Rerank`, `TokenizeString`, `GetMetrics`, `Health`

View File

@@ -9,7 +9,7 @@ ENV DEBIAN_FRONTEND=noninteractive
RUN apt-get update && \
apt-get install -y --no-install-recommends \
ca-certificates curl wget espeak-ng libgomp1 \
ffmpeg libopenblas-base libopenblas-dev && \
ffmpeg && \
apt-get clean && \
rm -rf /var/lib/apt/lists/*
@@ -23,6 +23,7 @@ ARG SKIP_DRIVERS=false
ARG TARGETARCH
ARG TARGETVARIANT
ENV BUILD_TYPE=${BUILD_TYPE}
ARG UBUNTU_VERSION=2204
RUN mkdir -p /run/localai
RUN echo "default" > /run/localai/capability
@@ -46,15 +47,19 @@ EOT
# CuBLAS requirements
RUN <<EOT bash
if [ "${BUILD_TYPE}" = "cublas" ] && [ "${SKIP_DRIVERS}" = "false" ]; then
if ( [ "${BUILD_TYPE}" = "cublas" ] || [ "${BUILD_TYPE}" = "l4t" ] ) && [ "${SKIP_DRIVERS}" = "false" ]; then
apt-get update && \
apt-get install -y --no-install-recommends \
software-properties-common pciutils
if [ "amd64" = "$TARGETARCH" ]; then
curl -O https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2204/x86_64/cuda-keyring_1.1-1_all.deb
curl -O https://developer.download.nvidia.com/compute/cuda/repos/ubuntu${UBUNTU_VERSION}/x86_64/cuda-keyring_1.1-1_all.deb
fi
if [ "arm64" = "$TARGETARCH" ]; then
curl -O https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2204/arm64/cuda-keyring_1.1-1_all.deb
if [ "${CUDA_MAJOR_VERSION}" = "13" ]; then
curl -O https://developer.download.nvidia.com/compute/cuda/repos/ubuntu${UBUNTU_VERSION}/sbsa/cuda-keyring_1.1-1_all.deb
else
curl -O https://developer.download.nvidia.com/compute/cuda/repos/ubuntu${UBUNTU_VERSION}/arm64/cuda-keyring_1.1-1_all.deb
fi
fi
dpkg -i cuda-keyring_1.1-1_all.deb && \
rm -f cuda-keyring_1.1-1_all.deb && \
@@ -65,26 +70,34 @@ RUN <<EOT bash
libcurand-dev-${CUDA_MAJOR_VERSION}-${CUDA_MINOR_VERSION} \
libcublas-dev-${CUDA_MAJOR_VERSION}-${CUDA_MINOR_VERSION} \
libcusparse-dev-${CUDA_MAJOR_VERSION}-${CUDA_MINOR_VERSION} \
libcusolver-dev-${CUDA_MAJOR_VERSION}-${CUDA_MINOR_VERSION} && \
libcusolver-dev-${CUDA_MAJOR_VERSION}-${CUDA_MINOR_VERSION}
if [ "${CUDA_MAJOR_VERSION}" = "13" ] && [ "arm64" = "$TARGETARCH" ]; then
apt-get install -y --no-install-recommends \
libcufile-${CUDA_MAJOR_VERSION}-${CUDA_MINOR_VERSION} libcudnn9-cuda-${CUDA_MAJOR_VERSION} cuda-cupti-${CUDA_MAJOR_VERSION}-${CUDA_MINOR_VERSION} libnvjitlink-${CUDA_MAJOR_VERSION}-${CUDA_MINOR_VERSION}
fi
apt-get clean && \
rm -rf /var/lib/apt/lists/* && \
echo "nvidia" > /run/localai/capability
echo "nvidia-cuda-${CUDA_MAJOR_VERSION}" > /run/localai/capability
fi
EOT
RUN <<EOT bash
if [ "${BUILD_TYPE}" = "cublas" ] && [ "${TARGETARCH}" = "arm64" ]; then
echo "nvidia-l4t" > /run/localai/capability
echo "nvidia-l4t-cuda-${CUDA_MAJOR_VERSION}" > /run/localai/capability
fi
EOT
# https://github.com/NVIDIA/Isaac-GR00T/issues/343
RUN <<EOT bash
if [ "${BUILD_TYPE}" = "cublas" ] && [ "${TARGETARCH}" = "arm64" ]; then
wget https://developer.download.nvidia.com/compute/cudss/0.6.0/local_installers/cudss-local-tegra-repo-ubuntu2204-0.6.0_0.6.0-1_arm64.deb && \
dpkg -i cudss-local-tegra-repo-ubuntu2204-0.6.0_0.6.0-1_arm64.deb && \
cp /var/cudss-local-tegra-repo-ubuntu2204-0.6.0/cudss-*-keyring.gpg /usr/share/keyrings/ && \
apt-get update && apt-get -y install cudss
wget https://developer.download.nvidia.com/compute/cudss/0.6.0/local_installers/cudss-local-tegra-repo-ubuntu${UBUNTU_VERSION}-0.6.0_0.6.0-1_arm64.deb && \
dpkg -i cudss-local-tegra-repo-ubuntu${UBUNTU_VERSION}-0.6.0_0.6.0-1_arm64.deb && \
cp /var/cudss-local-tegra-repo-ubuntu${UBUNTU_VERSION}-0.6.0/cudss-*-keyring.gpg /usr/share/keyrings/ && \
apt-get update && apt-get -y install cudss cudss-cuda-${CUDA_MAJOR_VERSION} && \
wget https://developer.download.nvidia.com/compute/nvpl/25.5/local_installers/nvpl-local-repo-ubuntu${UBUNTU_VERSION}-25.5_1.0-1_arm64.deb && \
dpkg -i nvpl-local-repo-ubuntu${UBUNTU_VERSION}-25.5_1.0-1_arm64.deb && \
cp /var/nvpl-local-repo-ubuntu${UBUNTU_VERSION}-25.5/nvpl-*-keyring.gpg /usr/share/keyrings/ && \
apt-get update && apt-get install -y nvpl
fi
EOT
@@ -171,14 +184,6 @@ RUN go install google.golang.org/protobuf/cmd/protoc-gen-go@v1.34.2 && \
COPY --chmod=644 custom-ca-certs/* /usr/local/share/ca-certificates/
RUN update-ca-certificates
# OpenBLAS requirements and stable diffusion
RUN apt-get update && \
apt-get install -y --no-install-recommends \
libopenblas-dev && \
apt-get clean && \
rm -rf /var/lib/apt/lists/*
RUN test -n "$TARGETARCH" \
|| (echo 'warn: missing $TARGETARCH, either set this `ARG` manually, or run using `docker buildkit`')

View File

@@ -4,6 +4,9 @@ GOVET=$(GOCMD) vet
BINARY_NAME=local-ai
LAUNCHER_BINARY_NAME=local-ai-launcher
CUDA_MAJOR_VERSION?=13
CUDA_MINOR_VERSION?=0
GORELEASER?=
export BUILD_TYPE?=
@@ -265,7 +268,7 @@ protoc:
echo "Unsupported OS: $$OS_NAME"; exit 1; \
fi; \
URL=https://github.com/protocolbuffers/protobuf/releases/download/v31.1/$$FILE; \
curl -L -s $$URL -o protoc.zip && \
curl -L $$URL -o protoc.zip && \
unzip -j -d $(CURDIR) protoc.zip bin/protoc && rm protoc.zip
.PHONY: protogen-go
@@ -284,12 +287,14 @@ prepare-test-extra: protogen-python
$(MAKE) -C backend/python/diffusers
$(MAKE) -C backend/python/chatterbox
$(MAKE) -C backend/python/vllm
$(MAKE) -C backend/python/vibevoice
test-extra: prepare-test-extra
$(MAKE) -C backend/python/transformers test
$(MAKE) -C backend/python/diffusers test
$(MAKE) -C backend/python/chatterbox test
$(MAKE) -C backend/python/vllm test
$(MAKE) -C backend/python/vibevoice test
DOCKER_IMAGE?=local-ai
DOCKER_AIO_IMAGE?=local-ai-aio
@@ -383,6 +388,12 @@ backends/llama-cpp-darwin: build
backends/neutts: docker-build-neutts docker-save-neutts build
./local-ai backends install "ocifile://$(abspath ./backend-images/neutts.tar)"
backends/vllm: docker-build-vllm docker-save-vllm build
./local-ai backends install "ocifile://$(abspath ./backend-images/vllm.tar)"
backends/vibevoice: docker-build-vibevoice docker-save-vibevoice build
./local-ai backends install "ocifile://$(abspath ./backend-images/vibevoice.tar)"
build-darwin-python-backend: build
bash ./scripts/build/python-darwin.sh
@@ -439,6 +450,9 @@ docker-save-kitten-tts: backend-images
docker-save-chatterbox: backend-images
docker save local-ai-backend:chatterbox -o backend-images/chatterbox.tar
docker-save-vibevoice: backend-images
docker save local-ai-backend:vibevoice -o backend-images/vibevoice.tar
docker-build-neutts:
docker build --build-arg BUILD_TYPE=$(BUILD_TYPE) --build-arg BASE_IMAGE=$(BASE_IMAGE) -t local-ai-backend:neutts -f backend/Dockerfile.python --build-arg BACKEND=neutts ./backend
@@ -448,6 +462,12 @@ docker-save-neutts: backend-images
docker-build-kokoro:
docker build --build-arg BUILD_TYPE=$(BUILD_TYPE) --build-arg BASE_IMAGE=$(BASE_IMAGE) -t local-ai-backend:kokoro -f backend/Dockerfile.python --build-arg BACKEND=kokoro ./backend
docker-build-vllm:
docker build --build-arg CUDA_MAJOR_VERSION=$(CUDA_MAJOR_VERSION) --build-arg CUDA_MINOR_VERSION=$(CUDA_MINOR_VERSION) --build-arg BUILD_TYPE=$(BUILD_TYPE) --build-arg BASE_IMAGE=$(BASE_IMAGE) -t local-ai-backend:vllm -f backend/Dockerfile.python --build-arg BACKEND=vllm ./backend
docker-save-vllm: backend-images
docker save local-ai-backend:vllm -o backend-images/vllm.tar
docker-save-kokoro: backend-images
docker save local-ai-backend:kokoro -o backend-images/kokoro.tar
@@ -476,7 +496,7 @@ docker-save-bark-cpp: backend-images
docker save local-ai-backend:bark-cpp -o backend-images/bark-cpp.tar
docker-build-stablediffusion-ggml:
docker build --build-arg BUILD_TYPE=$(BUILD_TYPE) --build-arg BASE_IMAGE=$(BASE_IMAGE) -t local-ai-backend:stablediffusion-ggml -f backend/Dockerfile.golang --build-arg BACKEND=stablediffusion-ggml .
docker build --progress=plain --build-arg BUILD_TYPE=$(BUILD_TYPE) --build-arg BASE_IMAGE=$(BASE_IMAGE) --build-arg CUDA_MAJOR_VERSION=$(CUDA_MAJOR_VERSION) --build-arg CUDA_MINOR_VERSION=$(CUDA_MINOR_VERSION) -t local-ai-backend:stablediffusion-ggml -f backend/Dockerfile.golang --build-arg BACKEND=stablediffusion-ggml .
docker-save-stablediffusion-ggml: backend-images
docker save local-ai-backend:stablediffusion-ggml -o backend-images/stablediffusion-ggml.tar
@@ -484,9 +504,6 @@ docker-save-stablediffusion-ggml: backend-images
docker-build-rerankers:
docker build --build-arg BUILD_TYPE=$(BUILD_TYPE) --build-arg BASE_IMAGE=$(BASE_IMAGE) -t local-ai-backend:rerankers -f backend/Dockerfile.python --build-arg BACKEND=rerankers .
docker-build-vllm:
docker build --build-arg BUILD_TYPE=$(BUILD_TYPE) --build-arg BASE_IMAGE=$(BASE_IMAGE) -t local-ai-backend:vllm -f backend/Dockerfile.python --build-arg BACKEND=vllm .
docker-build-transformers:
docker build --build-arg BUILD_TYPE=$(BUILD_TYPE) --build-arg BASE_IMAGE=$(BASE_IMAGE) -t local-ai-backend:transformers -f backend/Dockerfile.python --build-arg BACKEND=transformers .
@@ -497,7 +514,7 @@ docker-save-diffusers: backend-images
docker save local-ai-backend:diffusers -o backend-images/diffusers.tar
docker-build-whisper:
docker build --build-arg BUILD_TYPE=$(BUILD_TYPE) --build-arg BASE_IMAGE=$(BASE_IMAGE) -t local-ai-backend:whisper -f backend/Dockerfile.golang --build-arg BACKEND=whisper .
docker build --build-arg BUILD_TYPE=$(BUILD_TYPE) --build-arg BASE_IMAGE=$(BASE_IMAGE) --build-arg CUDA_MAJOR_VERSION=$(CUDA_MAJOR_VERSION) --build-arg CUDA_MINOR_VERSION=$(CUDA_MINOR_VERSION) -t local-ai-backend:whisper -f backend/Dockerfile.golang --build-arg BACKEND=whisper .
docker-save-whisper: backend-images
docker save local-ai-backend:whisper -o backend-images/whisper.tar
@@ -514,10 +531,13 @@ docker-build-bark:
docker-build-chatterbox:
docker build --build-arg BUILD_TYPE=$(BUILD_TYPE) --build-arg BASE_IMAGE=$(BASE_IMAGE) -t local-ai-backend:chatterbox -f backend/Dockerfile.python --build-arg BACKEND=chatterbox ./backend
docker-build-vibevoice:
docker build --progress=plain --build-arg BUILD_TYPE=$(BUILD_TYPE) --build-arg BASE_IMAGE=$(BASE_IMAGE) -t local-ai-backend:vibevoice -f backend/Dockerfile.python --build-arg BACKEND=vibevoice ./backend
docker-build-exllama2:
docker build --build-arg BUILD_TYPE=$(BUILD_TYPE) --build-arg BASE_IMAGE=$(BASE_IMAGE) -t local-ai-backend:exllama2 -f backend/Dockerfile.python --build-arg BACKEND=exllama2 .
docker-build-backends: docker-build-llama-cpp docker-build-rerankers docker-build-vllm docker-build-transformers docker-build-diffusers docker-build-kokoro docker-build-faster-whisper docker-build-coqui docker-build-bark docker-build-chatterbox docker-build-exllama2
docker-build-backends: docker-build-llama-cpp docker-build-rerankers docker-build-vllm docker-build-transformers docker-build-diffusers docker-build-kokoro docker-build-faster-whisper docker-build-coqui docker-build-bark docker-build-chatterbox docker-build-vibevoice docker-build-exllama2
########################################################
### END Backends

View File

@@ -33,7 +33,7 @@
<img src="https://img.shields.io/badge/X-%23000000.svg?style=for-the-badge&logo=X&logoColor=white&label=LocalAI_API" alt="Follow LocalAI_API"/>
</a>
<a href="https://discord.gg/uJAeKSAGDy" target="blank">
<img src="https://dcbadge.vercel.app/api/server/uJAeKSAGDy?style=flat-square&theme=default-inverted" alt="Join LocalAI Discord Community"/>
<img src="https://img.shields.io/badge/dynamic/json?color=blue&label=Discord&style=for-the-badge&query=approximate_member_count&url=https%3A%2F%2Fdiscordapp.com%2Fapi%2Finvites%2FuJAeKSAGDy%3Fwith_counts%3Dtrue&logo=discord" alt="Join LocalAI Discord Community"/>
</a>
</p>
@@ -80,8 +80,18 @@
</tr>
</table>
## Screenshots
## Screenshots / Video
### Youtube video
<h1 align="center">
<br>
<a href="https://www.youtube.com/watch?v=PDqYhB9nNHA" target="_blank"> <img width="300" src="https://img.youtube.com/vi/PDqYhB9nNHA/0.jpg"> </a><br>
<br>
</h1>
### Screenshots
| Talk Interface | Generate Audio |
| --- | --- |
@@ -136,6 +146,9 @@ docker run -ti --name local-ai -p 8080:8080 localai/localai:latest
### NVIDIA GPU Images:
```bash
# CUDA 13.0
docker run -ti --name local-ai -p 8080:8080 --gpus all localai/localai:latest-gpu-nvidia-cuda-13
# CUDA 12.0
docker run -ti --name local-ai -p 8080:8080 --gpus all localai/localai:latest-gpu-nvidia-cuda-12
@@ -143,7 +156,11 @@ docker run -ti --name local-ai -p 8080:8080 --gpus all localai/localai:latest-gp
docker run -ti --name local-ai -p 8080:8080 --gpus all localai/localai:latest-gpu-nvidia-cuda-11
# NVIDIA Jetson (L4T) ARM64
# CUDA 12 (for Nvidia AGX Orin and similar platforms)
docker run -ti --name local-ai -p 8080:8080 --gpus all localai/localai:latest-nvidia-l4t-arm64
# CUDA 13 (for Nvidia DGX Spark)
docker run -ti --name local-ai -p 8080:8080 --gpus all localai/localai:latest-nvidia-l4t-arm64-cuda-13
```
### AMD GPU Images (ROCm):
@@ -170,6 +187,9 @@ docker run -ti --name local-ai -p 8080:8080 localai/localai:latest-gpu-vulkan
# CPU version
docker run -ti --name local-ai -p 8080:8080 localai/localai:latest-aio-cpu
# NVIDIA CUDA 13 version
docker run -ti --name local-ai -p 8080:8080 --gpus all localai/localai:latest-aio-gpu-nvidia-cuda-13
# NVIDIA CUDA 12 version
docker run -ti --name local-ai -p 8080:8080 --gpus all localai/localai:latest-aio-gpu-nvidia-cuda-12
@@ -206,6 +226,7 @@ For more information, see [💻 Getting started](https://localai.io/basics/getti
## 📰 Latest project news
- December 2025: [Dynamic Memory Resource reclaimer](https://github.com/mudler/LocalAI/pull/7583), [Automatic fitting of models to multiple GPUS(llama.cpp)](https://github.com/mudler/LocalAI/pull/7584), [Added Vibevoice backend](https://github.com/mudler/LocalAI/pull/7494)
- November 2025: Major improvements to the UX. Among these: [Import models via URL](https://github.com/mudler/LocalAI/pull/7245) and [Multiple chats and history](https://github.com/mudler/LocalAI/pull/7325)
- October 2025: 🔌 [Model Context Protocol (MCP)](https://localai.io/docs/features/mcp/) support added for agentic capabilities with external tools
- September 2025: New Launcher application for MacOS and Linux, extended support for many backends on Mac and Nvidia L4T devices. Models: added MLX-Audio and WAN 2.2. WebUI improvements; Python-based backends now ship portable Python environments.
@@ -258,39 +279,40 @@ LocalAI supports a comprehensive range of AI backends with multiple acceleration
### Text Generation & Language Models
| Backend | Description | Acceleration Support |
|---------|-------------|---------------------|
| **llama.cpp** | LLM inference in C/C++ | CUDA 11/12, ROCm, Intel SYCL, Vulkan, Metal, CPU |
| **vLLM** | Fast LLM inference with PagedAttention | CUDA 12, ROCm, Intel |
| **transformers** | HuggingFace transformers framework | CUDA 11/12, ROCm, Intel, CPU |
| **exllama2** | GPTQ inference library | CUDA 12 |
| **llama.cpp** | LLM inference in C/C++ | CUDA 11/12/13, ROCm, Intel SYCL, Vulkan, Metal, CPU |
| **vLLM** | Fast LLM inference with PagedAttention | CUDA 12/13, ROCm, Intel |
| **transformers** | HuggingFace transformers framework | CUDA 11/12/13, ROCm, Intel, CPU |
| **exllama2** | GPTQ inference library | CUDA 12/13 |
| **MLX** | Apple Silicon LLM inference | Metal (M1/M2/M3+) |
| **MLX-VLM** | Apple Silicon Vision-Language Models | Metal (M1/M2/M3+) |
### Audio & Speech Processing
| Backend | Description | Acceleration Support |
|---------|-------------|---------------------|
| **whisper.cpp** | OpenAI Whisper in C/C++ | CUDA 12, ROCm, Intel SYCL, Vulkan, CPU |
| **faster-whisper** | Fast Whisper with CTranslate2 | CUDA 12, ROCm, Intel, CPU |
| **bark** | Text-to-audio generation | CUDA 12, ROCm, Intel |
| **whisper.cpp** | OpenAI Whisper in C/C++ | CUDA 12/13, ROCm, Intel SYCL, Vulkan, CPU |
| **faster-whisper** | Fast Whisper with CTranslate2 | CUDA 12/13, ROCm, Intel, CPU |
| **bark** | Text-to-audio generation | CUDA 12/13, ROCm, Intel |
| **bark-cpp** | C++ implementation of Bark | CUDA, Metal, CPU |
| **coqui** | Advanced TTS with 1100+ languages | CUDA 12, ROCm, Intel, CPU |
| **kokoro** | Lightweight TTS model | CUDA 12, ROCm, Intel, CPU |
| **chatterbox** | Production-grade TTS | CUDA 11/12, CPU |
| **coqui** | Advanced TTS with 1100+ languages | CUDA 12/13, ROCm, Intel, CPU |
| **kokoro** | Lightweight TTS model | CUDA 12/13, ROCm, Intel, CPU |
| **chatterbox** | Production-grade TTS | CUDA 11/12/13, CPU |
| **piper** | Fast neural TTS system | CPU |
| **kitten-tts** | Kitten TTS models | CPU |
| **silero-vad** | Voice Activity Detection | CPU |
| **neutts** | Text-to-speech with voice cloning | CUDA 12, ROCm, CPU |
| **neutts** | Text-to-speech with voice cloning | CUDA 12/13, ROCm, CPU |
| **vibevoice** | Real-time TTS with voice cloning | CUDA 12/13, ROCm, Intel, CPU |
### Image & Video Generation
| Backend | Description | Acceleration Support |
|---------|-------------|---------------------|
| **stablediffusion.cpp** | Stable Diffusion in C/C++ | CUDA 12, Intel SYCL, Vulkan, CPU |
| **diffusers** | HuggingFace diffusion models | CUDA 11/12, ROCm, Intel, Metal, CPU |
| **stablediffusion.cpp** | Stable Diffusion in C/C++ | CUDA 12/13, Intel SYCL, Vulkan, CPU |
| **diffusers** | HuggingFace diffusion models | CUDA 11/12/13, ROCm, Intel, Metal, CPU |
### Specialized AI Tasks
| Backend | Description | Acceleration Support |
|---------|-------------|---------------------|
| **rfdetr** | Real-time object detection | CUDA 12, Intel, CPU |
| **rerankers** | Document reranking API | CUDA 11/12, ROCm, Intel, CPU |
| **rfdetr** | Real-time object detection | CUDA 12/13, Intel, CPU |
| **rerankers** | Document reranking API | CUDA 11/12/13, ROCm, Intel, CPU |
| **local-store** | Vector database | CPU |
| **huggingface** | HuggingFace API integration | API-based |
@@ -300,11 +322,13 @@ LocalAI supports a comprehensive range of AI backends with multiple acceleration
|-------------------|-------------------|------------------|
| **NVIDIA CUDA 11** | llama.cpp, whisper, stablediffusion, diffusers, rerankers, bark, chatterbox | Nvidia hardware |
| **NVIDIA CUDA 12** | All CUDA-compatible backends | Nvidia hardware |
| **AMD ROCm** | llama.cpp, whisper, vllm, transformers, diffusers, rerankers, coqui, kokoro, bark, neutts | AMD Graphics |
| **Intel oneAPI** | llama.cpp, whisper, stablediffusion, vllm, transformers, diffusers, rfdetr, rerankers, exllama2, coqui, kokoro, bark | Intel Arc, Intel iGPUs |
| **NVIDIA CUDA 13** | All CUDA-compatible backends | Nvidia hardware |
| **AMD ROCm** | llama.cpp, whisper, vllm, transformers, diffusers, rerankers, coqui, kokoro, bark, neutts, vibevoice | AMD Graphics |
| **Intel oneAPI** | llama.cpp, whisper, stablediffusion, vllm, transformers, diffusers, rfdetr, rerankers, exllama2, coqui, kokoro, bark, vibevoice | Intel Arc, Intel iGPUs |
| **Apple Metal** | llama.cpp, whisper, diffusers, MLX, MLX-VLM, bark-cpp | Apple M1/M2/M3+ |
| **Vulkan** | llama.cpp, whisper, stablediffusion | Cross-platform GPUs |
| **NVIDIA Jetson** | llama.cpp, whisper, stablediffusion, diffusers, rfdetr | ARM64 embedded AI |
| **NVIDIA Jetson (CUDA 12)** | llama.cpp, whisper, stablediffusion, diffusers, rfdetr | ARM64 embedded AI (AGX Orin, etc.) |
| **NVIDIA Jetson (CUDA 13)** | llama.cpp, whisper, stablediffusion, diffusers, rfdetr | ARM64 embedded AI (DGX Spark) |
| **CPU Optimized** | All backends | AVX/AVX2/AVX512, quantization support |
### 🔗 Community and integrations
@@ -397,6 +421,10 @@ A huge thank you to our generous sponsors who support this project covering CI e
</a>
</p>
### Individual sponsors
A special thanks to the individual sponsors who have contributed to the project. A full list is available on [Github](https://github.com/sponsors/mudler) and [buymeacoffee](https://buymeacoffee.com/mudler); a special shout-out goes to [drikster80](https://github.com/drikster80) for his generosity. Thank you, everyone!
## 🌟 Star history
[![LocalAI Star history Chart](https://api.star-history.com/svg?repos=go-skynet/LocalAI&type=Date)](https://star-history.com/#go-skynet/LocalAI&Date)

View File

@@ -13,13 +13,14 @@ ENV DEBIAN_FRONTEND=noninteractive
ARG TARGETARCH
ARG TARGETVARIANT
ARG GO_VERSION=1.22.6
ARG UBUNTU_VERSION=2204
RUN apt-get update && \
apt-get install -y --no-install-recommends \
build-essential \
git ccache \
ca-certificates \
make cmake \
make cmake wget \
curl unzip \
libssl-dev && \
apt-get clean && \
@@ -32,6 +33,7 @@ ENV PATH=/usr/local/cuda/bin:${PATH}
# HipBLAS requirements
ENV PATH=/opt/rocm/bin:${PATH}
# Vulkan requirements
RUN <<EOT bash
if [ "${BUILD_TYPE}" = "vulkan" ] && [ "${SKIP_DRIVERS}" = "false" ]; then
@@ -50,15 +52,19 @@ EOT
# CuBLAS requirements
RUN <<EOT bash
if [ "${BUILD_TYPE}" = "cublas" ] && [ "${SKIP_DRIVERS}" = "false" ]; then
if ( [ "${BUILD_TYPE}" = "cublas" ] || [ "${BUILD_TYPE}" = "l4t" ] ) && [ "${SKIP_DRIVERS}" = "false" ]; then
apt-get update && \
apt-get install -y --no-install-recommends \
software-properties-common pciutils
if [ "amd64" = "$TARGETARCH" ]; then
curl -O https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2204/x86_64/cuda-keyring_1.1-1_all.deb
curl -O https://developer.download.nvidia.com/compute/cuda/repos/ubuntu${UBUNTU_VERSION}/x86_64/cuda-keyring_1.1-1_all.deb
fi
if [ "arm64" = "$TARGETARCH" ]; then
curl -O https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2204/arm64/cuda-keyring_1.1-1_all.deb
if [ "${CUDA_MAJOR_VERSION}" = "13" ]; then
curl -O https://developer.download.nvidia.com/compute/cuda/repos/ubuntu${UBUNTU_VERSION}/sbsa/cuda-keyring_1.1-1_all.deb
else
curl -O https://developer.download.nvidia.com/compute/cuda/repos/ubuntu${UBUNTU_VERSION}/arm64/cuda-keyring_1.1-1_all.deb
fi
fi
dpkg -i cuda-keyring_1.1-1_all.deb && \
rm -f cuda-keyring_1.1-1_all.deb && \
@@ -69,12 +75,31 @@ RUN <<EOT bash
libcurand-dev-${CUDA_MAJOR_VERSION}-${CUDA_MINOR_VERSION} \
libcublas-dev-${CUDA_MAJOR_VERSION}-${CUDA_MINOR_VERSION} \
libcusparse-dev-${CUDA_MAJOR_VERSION}-${CUDA_MINOR_VERSION} \
libcusolver-dev-${CUDA_MAJOR_VERSION}-${CUDA_MINOR_VERSION} && \
libcusolver-dev-${CUDA_MAJOR_VERSION}-${CUDA_MINOR_VERSION}
if [ "${CUDA_MAJOR_VERSION}" = "13" ] && [ "arm64" = "$TARGETARCH" ]; then
apt-get install -y --no-install-recommends \
libcufile-${CUDA_MAJOR_VERSION}-${CUDA_MINOR_VERSION} libcudnn9-cuda-${CUDA_MAJOR_VERSION} cuda-cupti-${CUDA_MAJOR_VERSION}-${CUDA_MINOR_VERSION} libnvjitlink-${CUDA_MAJOR_VERSION}-${CUDA_MINOR_VERSION}
fi
apt-get clean && \
rm -rf /var/lib/apt/lists/*
fi
EOT
# https://github.com/NVIDIA/Isaac-GR00T/issues/343
RUN <<EOT bash
if [ "${BUILD_TYPE}" = "cublas" ] && [ "${TARGETARCH}" = "arm64" ]; then
wget https://developer.download.nvidia.com/compute/cudss/0.6.0/local_installers/cudss-local-tegra-repo-ubuntu${UBUNTU_VERSION}-0.6.0_0.6.0-1_arm64.deb && \
dpkg -i cudss-local-tegra-repo-ubuntu${UBUNTU_VERSION}-0.6.0_0.6.0-1_arm64.deb && \
cp /var/cudss-local-tegra-repo-ubuntu${UBUNTU_VERSION}-0.6.0/cudss-*-keyring.gpg /usr/share/keyrings/ && \
apt-get update && apt-get -y install cudss cudss-cuda-${CUDA_MAJOR_VERSION} && \
wget https://developer.download.nvidia.com/compute/nvpl/25.5/local_installers/nvpl-local-repo-ubuntu${UBUNTU_VERSION}-25.5_1.0-1_arm64.deb && \
dpkg -i nvpl-local-repo-ubuntu${UBUNTU_VERSION}-25.5_1.0-1_arm64.deb && \
cp /var/nvpl-local-repo-ubuntu${UBUNTU_VERSION}-25.5/nvpl-*-keyring.gpg /usr/share/keyrings/ && \
apt-get update && apt-get install -y nvpl
fi
EOT
# If we are building with clblas support, we need the libraries for the builds
RUN if [ "${BUILD_TYPE}" = "clblas" ] && [ "${SKIP_DRIVERS}" = "false" ]; then \
apt-get update && \

View File

@@ -20,7 +20,7 @@ RUN apt-get update && \
apt-get install -y --no-install-recommends \
ca-certificates \
build-essential curl libssl-dev \
git && \
git wget && \
apt-get clean && \
rm -rf /var/lib/apt/lists/*
@@ -62,6 +62,7 @@ ENV DEBIAN_FRONTEND=noninteractive
ARG TARGETARCH
ARG TARGETVARIANT
ARG GO_VERSION=1.22.6
ARG UBUNTU_VERSION=2204
RUN apt-get update && \
apt-get install -y --no-install-recommends \
@@ -70,7 +71,7 @@ RUN apt-get update && \
ca-certificates \
make \
curl unzip \
libssl-dev && \
libssl-dev wget && \
apt-get clean && \
rm -rf /var/lib/apt/lists/*
@@ -80,6 +81,7 @@ ENV PATH=/usr/local/cuda/bin:${PATH}
# HipBLAS requirements
ENV PATH=/opt/rocm/bin:${PATH}
# Vulkan requirements
RUN <<EOT bash
if [ "${BUILD_TYPE}" = "vulkan" ] && [ "${SKIP_DRIVERS}" = "false" ]; then
@@ -98,15 +100,19 @@ EOT
# CuBLAS requirements
RUN <<EOT bash
if [ "${BUILD_TYPE}" = "cublas" ] && [ "${SKIP_DRIVERS}" = "false" ]; then
if ( [ "${BUILD_TYPE}" = "cublas" ] || [ "${BUILD_TYPE}" = "l4t" ] ) && [ "${SKIP_DRIVERS}" = "false" ]; then
apt-get update && \
apt-get install -y --no-install-recommends \
software-properties-common pciutils
if [ "amd64" = "$TARGETARCH" ]; then
curl -O https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2204/x86_64/cuda-keyring_1.1-1_all.deb
curl -O https://developer.download.nvidia.com/compute/cuda/repos/ubuntu${UBUNTU_VERSION}/x86_64/cuda-keyring_1.1-1_all.deb
fi
if [ "arm64" = "$TARGETARCH" ]; then
curl -O https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2204/arm64/cuda-keyring_1.1-1_all.deb
if [ "${CUDA_MAJOR_VERSION}" = "13" ]; then
curl -O https://developer.download.nvidia.com/compute/cuda/repos/ubuntu${UBUNTU_VERSION}/sbsa/cuda-keyring_1.1-1_all.deb
else
curl -O https://developer.download.nvidia.com/compute/cuda/repos/ubuntu${UBUNTU_VERSION}/arm64/cuda-keyring_1.1-1_all.deb
fi
fi
dpkg -i cuda-keyring_1.1-1_all.deb && \
rm -f cuda-keyring_1.1-1_all.deb && \
@@ -117,12 +123,31 @@ RUN <<EOT bash
libcurand-dev-${CUDA_MAJOR_VERSION}-${CUDA_MINOR_VERSION} \
libcublas-dev-${CUDA_MAJOR_VERSION}-${CUDA_MINOR_VERSION} \
libcusparse-dev-${CUDA_MAJOR_VERSION}-${CUDA_MINOR_VERSION} \
libcusolver-dev-${CUDA_MAJOR_VERSION}-${CUDA_MINOR_VERSION} && \
libcusolver-dev-${CUDA_MAJOR_VERSION}-${CUDA_MINOR_VERSION}
if [ "${CUDA_MAJOR_VERSION}" = "13" ] && [ "arm64" = "$TARGETARCH" ]; then
apt-get install -y --no-install-recommends \
libcufile-${CUDA_MAJOR_VERSION}-${CUDA_MINOR_VERSION} libcudnn9-cuda-${CUDA_MAJOR_VERSION} cuda-cupti-${CUDA_MAJOR_VERSION}-${CUDA_MINOR_VERSION} libnvjitlink-${CUDA_MAJOR_VERSION}-${CUDA_MINOR_VERSION}
fi
apt-get clean && \
rm -rf /var/lib/apt/lists/*
fi
EOT
# https://github.com/NVIDIA/Isaac-GR00T/issues/343
RUN <<EOT bash
if [ "${BUILD_TYPE}" = "cublas" ] && [ "${TARGETARCH}" = "arm64" ]; then
wget https://developer.download.nvidia.com/compute/cudss/0.6.0/local_installers/cudss-local-tegra-repo-ubuntu${UBUNTU_VERSION}-0.6.0_0.6.0-1_arm64.deb && \
dpkg -i cudss-local-tegra-repo-ubuntu${UBUNTU_VERSION}-0.6.0_0.6.0-1_arm64.deb && \
cp /var/cudss-local-tegra-repo-ubuntu${UBUNTU_VERSION}-0.6.0/cudss-*-keyring.gpg /usr/share/keyrings/ && \
apt-get update && apt-get -y install cudss cudss-cuda-${CUDA_MAJOR_VERSION} && \
wget https://developer.download.nvidia.com/compute/nvpl/25.5/local_installers/nvpl-local-repo-ubuntu${UBUNTU_VERSION}-25.5_1.0-1_arm64.deb && \
dpkg -i nvpl-local-repo-ubuntu${UBUNTU_VERSION}-25.5_1.0-1_arm64.deb && \
cp /var/nvpl-local-repo-ubuntu${UBUNTU_VERSION}-25.5/nvpl-*-keyring.gpg /usr/share/keyrings/ && \
apt-get update && apt-get install -y nvpl
fi
EOT
# If we are building with clblas support, we need the libraries for the builds
RUN if [ "${BUILD_TYPE}" = "clblas" ] && [ "${SKIP_DRIVERS}" = "false" ]; then \
apt-get update && \


@@ -12,6 +12,7 @@ ENV CUDA_MINOR_VERSION=${CUDA_MINOR_VERSION}
ENV DEBIAN_FRONTEND=noninteractive
ARG TARGETARCH
ARG TARGETVARIANT
ARG UBUNTU_VERSION=2204
RUN apt-get update && \
apt-get install -y --no-install-recommends \
@@ -21,7 +22,7 @@ RUN apt-get update && \
espeak-ng \
curl \
libssl-dev \
-git \
+git wget \
git-lfs \
unzip clang \
upx-ucl \
@@ -30,8 +31,15 @@ RUN apt-get update && \
python3-dev llvm \
python3-venv make cmake && \
apt-get clean && \
-rm -rf /var/lib/apt/lists/* && \
-pip install --upgrade pip
+rm -rf /var/lib/apt/lists/*
+RUN <<EOT bash
+if [ "${UBUNTU_VERSION}" = "2404" ]; then
+pip install --break-system-packages --user --upgrade pip
+else
+pip install --upgrade pip
+fi
+EOT
# Cuda
@@ -58,15 +66,19 @@ EOT
# CuBLAS requirements
RUN <<EOT bash
-if [ "${BUILD_TYPE}" = "cublas" ] && [ "${SKIP_DRIVERS}" = "false" ]; then
+if ( [ "${BUILD_TYPE}" = "cublas" ] || [ "${BUILD_TYPE}" = "l4t" ] ) && [ "${SKIP_DRIVERS}" = "false" ]; then
apt-get update && \
apt-get install -y --no-install-recommends \
software-properties-common pciutils
if [ "amd64" = "$TARGETARCH" ]; then
-curl -O https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2204/x86_64/cuda-keyring_1.1-1_all.deb
+curl -O https://developer.download.nvidia.com/compute/cuda/repos/ubuntu${UBUNTU_VERSION}/x86_64/cuda-keyring_1.1-1_all.deb
fi
if [ "arm64" = "$TARGETARCH" ]; then
-curl -O https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2204/arm64/cuda-keyring_1.1-1_all.deb
+if [ "${CUDA_MAJOR_VERSION}" = "13" ]; then
+curl -O https://developer.download.nvidia.com/compute/cuda/repos/ubuntu${UBUNTU_VERSION}/sbsa/cuda-keyring_1.1-1_all.deb
+else
+curl -O https://developer.download.nvidia.com/compute/cuda/repos/ubuntu${UBUNTU_VERSION}/arm64/cuda-keyring_1.1-1_all.deb
+fi
fi
dpkg -i cuda-keyring_1.1-1_all.deb && \
rm -f cuda-keyring_1.1-1_all.deb && \
@@ -77,12 +89,31 @@ RUN <<EOT bash
libcurand-dev-${CUDA_MAJOR_VERSION}-${CUDA_MINOR_VERSION} \
libcublas-dev-${CUDA_MAJOR_VERSION}-${CUDA_MINOR_VERSION} \
libcusparse-dev-${CUDA_MAJOR_VERSION}-${CUDA_MINOR_VERSION} \
-libcusolver-dev-${CUDA_MAJOR_VERSION}-${CUDA_MINOR_VERSION} && \
+libcusolver-dev-${CUDA_MAJOR_VERSION}-${CUDA_MINOR_VERSION}
+if [ "${CUDA_MAJOR_VERSION}" = "13" ] && [ "arm64" = "$TARGETARCH" ]; then
+apt-get install -y --no-install-recommends \
+libcufile-${CUDA_MAJOR_VERSION}-${CUDA_MINOR_VERSION} libcudnn9-cuda-${CUDA_MAJOR_VERSION} cuda-cupti-${CUDA_MAJOR_VERSION}-${CUDA_MINOR_VERSION} libnvjitlink-${CUDA_MAJOR_VERSION}-${CUDA_MINOR_VERSION}
+fi
apt-get clean && \
rm -rf /var/lib/apt/lists/*
fi
EOT
# https://github.com/NVIDIA/Isaac-GR00T/issues/343
RUN <<EOT bash
if [ "${BUILD_TYPE}" = "cublas" ] && [ "${TARGETARCH}" = "arm64" ]; then
wget https://developer.download.nvidia.com/compute/cudss/0.6.0/local_installers/cudss-local-tegra-repo-ubuntu${UBUNTU_VERSION}-0.6.0_0.6.0-1_arm64.deb && \
dpkg -i cudss-local-tegra-repo-ubuntu${UBUNTU_VERSION}-0.6.0_0.6.0-1_arm64.deb && \
cp /var/cudss-local-tegra-repo-ubuntu${UBUNTU_VERSION}-0.6.0/cudss-*-keyring.gpg /usr/share/keyrings/ && \
apt-get update && apt-get -y install cudss cudss-cuda-${CUDA_MAJOR_VERSION} && \
wget https://developer.download.nvidia.com/compute/nvpl/25.5/local_installers/nvpl-local-repo-ubuntu${UBUNTU_VERSION}-25.5_1.0-1_arm64.deb && \
dpkg -i nvpl-local-repo-ubuntu${UBUNTU_VERSION}-25.5_1.0-1_arm64.deb && \
cp /var/nvpl-local-repo-ubuntu${UBUNTU_VERSION}-25.5/nvpl-*-keyring.gpg /usr/share/keyrings/ && \
apt-get update && apt-get install -y nvpl
fi
EOT
# If we are building with clblas support, we need the libraries for the builds
RUN if [ "${BUILD_TYPE}" = "clblas" ] && [ "${SKIP_DRIVERS}" = "false" ]; then \
apt-get update && \
@@ -103,6 +134,11 @@ RUN if [ "${BUILD_TYPE}" = "hipblas" ] && [ "${SKIP_DRIVERS}" = "false" ]; then
# to locate the libraries. We run ldconfig ourselves to work around this packaging deficiency
ldconfig \
; fi
RUN if [ "${BUILD_TYPE}" = "hipblas" ]; then \
ln -s /opt/rocm-**/lib/llvm/lib/libomp.so /usr/lib/libomp.so \
; fi
# Install uv as a system package
RUN curl -LsSf https://astral.sh/uv/install.sh | UV_INSTALL_DIR=/usr/bin sh
ENV PATH="/root/.cargo/bin:${PATH}"
@@ -110,7 +146,14 @@ ENV PATH="/root/.cargo/bin:${PATH}"
RUN curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh -s -- -y
# Install grpcio-tools (the version in 22.04 is too old)
-RUN pip install --user grpcio-tools==1.71.0 grpcio==1.71.0
+RUN <<EOT bash
+if [ "${UBUNTU_VERSION}" = "2404" ]; then
+pip install --break-system-packages --user grpcio-tools==1.71.0 grpcio==1.71.0
+else
+pip install grpcio-tools==1.71.0 grpcio==1.71.0
+fi
+EOT
COPY python/${BACKEND} /${BACKEND}
COPY backend.proto /${BACKEND}/backend.proto


@@ -282,6 +282,7 @@ message TranscriptRequest {
uint32 threads = 4;
bool translate = 5;
bool diarize = 6;
string prompt = 7;
}
message TranscriptResult {
@@ -300,7 +301,6 @@ message TranscriptSegment {
message GenerateImageRequest {
int32 height = 1;
int32 width = 2;
int32 mode = 3;
int32 step = 4;
int32 seed = 5;
string positive_prompt = 6;


@@ -1,5 +1,5 @@
-LLAMA_VERSION?=583cb83416467e8abf9b37349dcf1f6a0083745a
+LLAMA_VERSION?=ced765be44ce173c374f295b3c6f4175f8fd109b
LLAMA_REPO?=https://github.com/ggerganov/llama.cpp
CMAKE_ARGS?=


File diff suppressed because it is too large.


@@ -1,13 +0,0 @@
diff --git a/tools/mtmd/clip.cpp b/tools/mtmd/clip.cpp
index 3cd0d2fa..6c5e811a 100644
--- a/tools/mtmd/clip.cpp
+++ b/tools/mtmd/clip.cpp
@@ -2608,7 +2608,7 @@ bool clip_image_batch_encode(clip_ctx * ctx, const int n_threads, const clip_ima
struct ggml_tensor * patches = ggml_graph_get_tensor(gf, "patches");
int* patches_data = (int*)malloc(ggml_nbytes(patches));
for (int i = 0; i < num_patches; i++) {
- patches_data[i] = i + 1;
+ patches_data[i] = i;
}
ggml_backend_tensor_set(patches, patches_data, 0, ggml_nbytes(patches));
free(patches_data);


@@ -1,11 +1,14 @@
#!/bin/bash
## Patches
## Apply patches from the `patches` directory
-for patch in $(ls patches); do
-echo "Applying patch $patch"
-patch -d llama.cpp/ -p1 < patches/$patch
-done
+if [ -d "patches" ]; then
+for patch in $(ls patches); do
+echo "Applying patch $patch"
+patch -d llama.cpp/ -p1 < patches/$patch
+done
+fi
set -e
@@ -26,30 +29,3 @@ else
fi
set -e
# Now to keep maximum compatibility with the original server.cpp, we need to remove the index.html.gz.hpp and loading.html.hpp includes
# and remove the main function
# TODO: upstream this to the original server.cpp by extracting the upstream main function to a separate file
awk '
/int[ \t]+main[ \t]*\(/ { # If the line starts the main function
in_main=1; # Set a flag
open_braces=0; # Track number of open braces
}
in_main {
open_braces += gsub(/\{/, "{"); # Count opening braces
open_braces -= gsub(/\}/, "}"); # Count closing braces
if (open_braces == 0) { # If all braces are closed
in_main=0; # End skipping
}
next; # Skip lines inside main
}
!in_main # Print lines not inside main
' "llama.cpp/tools/server/server.cpp" > llama.cpp/tools/grpc-server/server.cpp
# remove index.html.gz.hpp and loading.html.hpp includes
if [[ "$OSTYPE" == "darwin"* ]]; then
# macOS
sed -i '' '/#include "index\.html\.gz\.hpp"/d; /#include "loading\.html\.hpp"/d' llama.cpp/tools/grpc-server/server.cpp
else
# Linux and others
sed -i '/#include "index\.html\.gz\.hpp"/d; /#include "loading\.html\.hpp"/d' llama.cpp/tools/grpc-server/server.cpp
fi
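The awk filter above strips the entire `main` function by counting opening and closing braces until they balance. That brace-counting technique can be sketched in Python (a hypothetical illustration, not part of the repo; like the awk version, it assumes braces only occur as code tokens, not inside strings or comments):

```python
import re

# Drop the body of main() by tracking brace depth, keeping all other lines.
def strip_main(source: str) -> str:
    out, in_main, depth = [], False, 0
    for line in source.splitlines():
        if not in_main and re.search(r"int\s+main\s*\(", line):
            in_main = True
            depth = 0
        if in_main:
            depth += line.count("{") - line.count("}")
            # main ends once every opened brace has been closed
            if depth == 0 and ("{" in line or "}" in line):
                in_main = False
            continue  # skip lines inside main (and the signature itself)
        out.append(line)
    return "\n".join(out)
```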


@@ -4,11 +4,11 @@
package main
import (
-"github.com/rs/zerolog/log"
+"github.com/mudler/xlog"
)
func assert(cond bool, msg string) {
if !cond {
-log.Fatal().Stack().Msg(msg)
+xlog.Fatal().Stack().Msg(msg)
}
}


@@ -7,8 +7,7 @@ import (
"os"
grpc "github.com/mudler/LocalAI/pkg/grpc"
-"github.com/rs/zerolog"
-"github.com/rs/zerolog/log"
+"github.com/mudler/xlog"
)
var (
@@ -16,7 +15,7 @@ var (
)
func main() {
-log.Logger = log.Output(zerolog.ConsoleWriter{Out: os.Stderr})
+xlog.SetLogger(xlog.NewLogger(xlog.LogLevel(os.Getenv("LOCALAI_LOG_LEVEL")), os.Getenv("LOCALAI_LOG_FORMAT")))
flag.Parse()


@@ -12,7 +12,7 @@ import (
"github.com/mudler/LocalAI/pkg/grpc/base"
pb "github.com/mudler/LocalAI/pkg/grpc/proto"
-"github.com/rs/zerolog/log"
+"github.com/mudler/xlog"
)
type Store struct {
@@ -135,7 +135,7 @@ func (s *Store) StoresSet(opts *pb.StoresSetOptions) error {
} else {
sample = k.Floats
}
-log.Debug().Msgf("Key is not normalized: %v", sample)
+xlog.Debug("Key is not normalized", "sample", sample)
}
kvs[i] = Pair{
@@ -238,7 +238,7 @@ func (s *Store) StoresDelete(opts *pb.StoresDeleteOptions) error {
assert(!hasKey(s.keys, k), fmt.Sprintf("Key exists, but was not found: t=%d, %v", len(tail_ks), k))
}
-log.Debug().Msgf("Delete: found = %v, t = %d, j = %d, len(merge_ks) = %d, len(merge_vs) = %d", found, len(tail_ks), j, len(merge_ks), len(merge_vs))
+xlog.Debug("Delete", "found", found, "tailLen", len(tail_ks), "j", j, "mergeKeysLen", len(merge_ks), "mergeValuesLen", len(merge_vs))
}
merge_ks = append(merge_ks, tail_ks...)
@@ -261,7 +261,7 @@ func (s *Store) StoresDelete(opts *pb.StoresDeleteOptions) error {
}(), "Keys to delete still present")
if len(s.keys) != l {
-log.Debug().Msgf("Delete: Some keys not found: len(s.keys) = %d, l = %d", len(s.keys), l)
+xlog.Debug("Delete: Some keys not found", "keysLen", len(s.keys), "expectedLen", l)
}
return nil
@@ -273,7 +273,7 @@ func (s *Store) StoresGet(opts *pb.StoresGetOptions) (pb.StoresGetResult, error)
ks := sortIntoKeySlicese(opts.Keys)
if len(s.keys) == 0 {
-log.Debug().Msgf("Get: No keys in store")
+xlog.Debug("Get: No keys in store")
}
if s.keyLen == -1 {
@@ -305,7 +305,7 @@ func (s *Store) StoresGet(opts *pb.StoresGetOptions) (pb.StoresGetResult, error)
}
if len(pbKeys) != len(opts.Keys) {
-log.Debug().Msgf("Get: Some keys not found: len(pbKeys) = %d, len(opts.Keys) = %d, len(s.Keys) = %d", len(pbKeys), len(opts.Keys), len(s.keys))
+xlog.Debug("Get: Some keys not found", "pbKeysLen", len(pbKeys), "optsKeysLen", len(opts.Keys), "storeKeysLen", len(s.keys))
}
return pb.StoresGetResult{
@@ -507,7 +507,7 @@ func (s *Store) StoresFind(opts *pb.StoresFindOptions) (pb.StoresFindResult, err
} else {
sample = tk
}
-log.Debug().Msgf("Trying to compare non-normalized key with normalized keys: %v", sample)
+xlog.Debug("Trying to compare non-normalized key with normalized keys", "sample", sample)
}
return s.StoresFindFallback(opts)


@@ -8,7 +8,7 @@ JOBS?=$(shell nproc --ignore=1)
# stablediffusion.cpp (ggml)
STABLEDIFFUSION_GGML_REPO?=https://github.com/leejet/stable-diffusion.cpp
-STABLEDIFFUSION_GGML_VERSION?=0ebe6fe118f125665939b27c89f34ed38716bff8
+STABLEDIFFUSION_GGML_VERSION?=4ff2c8c74bd17c2cfffe3a01be77743fb3efba2f
CMAKE_ARGS+=-DGGML_MAX_NAME=128


@@ -1,3 +1,5 @@
#include "stable-diffusion.h"
#include <cmath>
#include <cstdint>
#define GGML_MAX_NAME 128
@@ -6,7 +8,9 @@
#include <time.h>
#include <string>
#include <vector>
#include <map>
#include <filesystem>
#include <algorithm>
#include "gosd.h"
#define STB_IMAGE_IMPLEMENTATION
@@ -20,11 +24,13 @@
#define STB_IMAGE_RESIZE_IMPLEMENTATION
#define STB_IMAGE_RESIZE_STATIC
#include "stb_image_resize.h"
#include <stdlib.h>
#include <regex>
// Names of the sampler method, same order as enum sample_method in stable-diffusion.h
const char* sample_method_str[] = {
"default",
"euler",
"euler_a",
"heun",
"dpm2",
"dpm++2s_a",
@@ -35,29 +41,384 @@ const char* sample_method_str[] = {
"lcm",
"ddim_trailing",
"tcd",
"euler_a",
};
static_assert(std::size(sample_method_str) == SAMPLE_METHOD_COUNT, "sample method mismatch");
// Names of the sigma schedule overrides, same order as sample_schedule in stable-diffusion.h
const char* schedulers[] = {
"default",
"discrete",
"karras",
"exponential",
"ays",
"gits",
"sgm_uniform",
"simple",
"smoothstep",
"kl_optimal",
"lcm",
};
-static_assert(std::size(schedulers) == SCHEDULE_COUNT, "schedulers mismatch");
+static_assert(std::size(schedulers) == SCHEDULER_COUNT, "schedulers mismatch");
// New enum string arrays
const char* rng_type_str[] = {
"std_default",
"cuda",
"cpu",
};
static_assert(std::size(rng_type_str) == RNG_TYPE_COUNT, "rng type mismatch");
const char* prediction_str[] = {
"epsilon",
"v",
"edm_v",
"flow",
"flux_flow",
"flux2_flow",
};
static_assert(std::size(prediction_str) == PREDICTION_COUNT, "prediction mismatch");
const char* lora_apply_mode_str[] = {
"auto",
"immediately",
"at_runtime",
};
static_assert(std::size(lora_apply_mode_str) == LORA_APPLY_MODE_COUNT, "lora apply mode mismatch");
constexpr const char* sd_type_str[] = {
"f32", // 0
"f16", // 1
"q4_0", // 2
"q4_1", // 3
nullptr, // 4
nullptr, // 5
"q5_0", // 6
"q5_1", // 7
"q8_0", // 8
"q8_1", // 9
"q2_k", // 10
"q3_k", // 11
"q4_k", // 12
"q5_k", // 13
"q6_k", // 14
"q8_k", // 15
"iq2_xxs", // 16
"iq2_xs", // 17
"iq3_xxs", // 18
"iq1_s", // 19
"iq4_nl", // 20
"iq3_s", // 21
"iq2_s", // 22
"iq4_xs", // 23
"i8", // 24
"i16", // 25
"i32", // 26
"i64", // 27
"f64", // 28
"iq1_m", // 29
"bf16", // 30
nullptr, nullptr, nullptr, nullptr, // 31-34
"tq1_0", // 35
"tq2_0", // 36
nullptr, nullptr, // 37-38
"mxfp4" // 39
};
static_assert(std::size(sd_type_str) == SD_TYPE_COUNT, "sd type mismatch");
sd_ctx_params_t ctx_params;
sd_ctx_t* sd_c;
// Moved from the context (load time) to generation time params
-scheduler_t scheduler = scheduler_t::DEFAULT;
+scheduler_t scheduler = SCHEDULER_COUNT;
-sample_method_t sample_method;
+sample_method_t sample_method = SAMPLE_METHOD_COUNT;
// Storage for embeddings (needs to persist for the lifetime of ctx_params)
static std::vector<sd_embedding_t> embedding_vec;
// Storage for embedding strings (needs to persist as long as embedding_vec references them)
static std::vector<std::string> embedding_strings;
// Storage for LoRAs (needs to persist for the lifetime of generation params)
static std::vector<sd_lora_t> lora_vec;
// Storage for LoRA strings (needs to persist as long as lora_vec references them)
static std::vector<std::string> lora_strings;
// Storage for lora_dir path
static std::string lora_dir_path;
// Build embeddings vector from directory, similar to upstream CLI
static void build_embedding_vec(const char* embedding_dir) {
embedding_vec.clear();
embedding_strings.clear();
if (!embedding_dir || strlen(embedding_dir) == 0) {
return;
}
if (!std::filesystem::exists(embedding_dir) || !std::filesystem::is_directory(embedding_dir)) {
fprintf(stderr, "Embedding directory does not exist or is not a directory: %s\n", embedding_dir);
return;
}
static const std::vector<std::string> valid_ext = {".pt", ".safetensors", ".gguf"};
for (const auto& entry : std::filesystem::directory_iterator(embedding_dir)) {
if (!entry.is_regular_file()) {
continue;
}
auto path = entry.path();
std::string ext = path.extension().string();
bool valid = false;
for (const auto& e : valid_ext) {
if (ext == e) {
valid = true;
break;
}
}
if (!valid) {
continue;
}
std::string name = path.stem().string();
std::string full_path = path.string();
// Store strings in persistent storage
embedding_strings.push_back(name);
embedding_strings.push_back(full_path);
sd_embedding_t item;
item.name = embedding_strings[embedding_strings.size() - 2].c_str();
item.path = embedding_strings[embedding_strings.size() - 1].c_str();
embedding_vec.push_back(item);
fprintf(stderr, "Found embedding: %s -> %s\n", item.name, item.path);
}
fprintf(stderr, "Loaded %zu embeddings from %s\n", embedding_vec.size(), embedding_dir);
}
// Discover LoRA files in directory and build a map of name -> path
static std::map<std::string, std::string> discover_lora_files(const char* lora_dir) {
std::map<std::string, std::string> lora_map;
if (!lora_dir || strlen(lora_dir) == 0) {
fprintf(stderr, "LoRA directory not specified\n");
return lora_map;
}
if (!std::filesystem::exists(lora_dir) || !std::filesystem::is_directory(lora_dir)) {
fprintf(stderr, "LoRA directory does not exist or is not a directory: %s\n", lora_dir);
return lora_map;
}
static const std::vector<std::string> valid_ext = {".safetensors", ".ckpt", ".pt", ".gguf"};
fprintf(stderr, "Discovering LoRA files in: %s\n", lora_dir);
for (const auto& entry : std::filesystem::directory_iterator(lora_dir)) {
if (!entry.is_regular_file()) {
continue;
}
auto path = entry.path();
std::string ext = path.extension().string();
bool valid = false;
for (const auto& e : valid_ext) {
if (ext == e) {
valid = true;
break;
}
}
if (!valid) {
continue;
}
std::string name = path.stem().string(); // stem() already removes extension
std::string full_path = path.string();
// Store the name (without extension) -> full path mapping
// This allows users to specify just the name in <lora:name:strength>
lora_map[name] = full_path;
fprintf(stderr, "Found LoRA file: %s -> %s\n", name.c_str(), full_path.c_str());
}
fprintf(stderr, "Discovered %zu LoRA files in %s\n", lora_map.size(), lora_dir);
return lora_map;
}
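Both discovery scans above (embeddings and LoRAs) follow the same pattern: walk one directory, keep regular files with an allowed extension, and key them by stem so prompts can reference a bare name. A minimal Python sketch of that pattern (hypothetical, for illustration only):

```python
from pathlib import Path

# Map file stem -> full path for regular files with an allowed extension.
def discover(dirpath: str, exts=(".safetensors", ".ckpt", ".pt", ".gguf")) -> dict:
    p = Path(dirpath)
    if not p.is_dir():
        return {}
    result = {}
    for f in p.iterdir():
        if f.is_file() and f.suffix in exts:
            result[f.stem] = str(f)  # stem drops the extension, like path.stem()
    return result
```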
// Helper function to check if a path is absolute (matches upstream)
static bool is_absolute_path(const std::string& p) {
#ifdef _WIN32
// Windows: C:/path or C:\path
return p.size() > 1 && std::isalpha(static_cast<unsigned char>(p[0])) && p[1] == ':';
#else
// Unix: /path
return !p.empty() && p[0] == '/';
#endif
}
// Parse LoRAs from prompt string (e.g., "<lora:name:1.0>" or "<lora:name>")
// Returns a vector of LoRA info and the cleaned prompt with LoRA tags removed
// Matches upstream implementation more closely
static std::pair<std::vector<sd_lora_t>, std::string> parse_loras_from_prompt(const std::string& prompt, const char* lora_dir) {
std::vector<sd_lora_t> loras;
std::string cleaned_prompt = prompt;
if (!lora_dir || strlen(lora_dir) == 0) {
fprintf(stderr, "LoRA directory not set, cannot parse LoRAs from prompt\n");
return {loras, cleaned_prompt};
}
// Discover LoRA files for name-based lookup
std::map<std::string, std::string> discovered_lora_map = discover_lora_files(lora_dir);
// Map to accumulate multipliers for the same LoRA (matches upstream)
std::map<std::string, float> lora_map;
std::map<std::string, float> high_noise_lora_map;
static const std::regex re(R"(<lora:([^:>]+):([^>]+)>)");
static const std::vector<std::string> valid_ext = {".pt", ".safetensors", ".gguf"};
std::smatch m;
std::string tmp = prompt;
fprintf(stderr, "Parsing LoRAs from prompt: %s\n", prompt.c_str());
while (std::regex_search(tmp, m, re)) {
std::string raw_path = m[1].str();
const std::string raw_mul = m[2].str();
float mul = 0.f;
try {
mul = std::stof(raw_mul);
} catch (...) {
tmp = m.suffix().str();
cleaned_prompt = std::regex_replace(cleaned_prompt, re, "", std::regex_constants::format_first_only);
fprintf(stderr, "Invalid LoRA multiplier '%s', skipping\n", raw_mul.c_str());
continue;
}
bool is_high_noise = false;
static const std::string prefix = "|high_noise|";
if (raw_path.rfind(prefix, 0) == 0) {
raw_path.erase(0, prefix.size());
is_high_noise = true;
}
std::filesystem::path final_path;
if (is_absolute_path(raw_path)) {
final_path = raw_path;
} else {
// Try name-based lookup first
auto it = discovered_lora_map.find(raw_path);
if (it != discovered_lora_map.end()) {
final_path = it->second;
} else {
// Try case-insensitive lookup
bool found = false;
for (const auto& pair : discovered_lora_map) {
std::string lower_name = raw_path;
std::string lower_key = pair.first;
std::transform(lower_name.begin(), lower_name.end(), lower_name.begin(), ::tolower);
std::transform(lower_key.begin(), lower_key.end(), lower_key.begin(), ::tolower);
if (lower_name == lower_key) {
final_path = pair.second;
found = true;
break;
}
}
if (!found) {
// Try as relative path in lora_dir
final_path = std::filesystem::path(lora_dir) / raw_path;
}
}
}
// Try adding extensions if file doesn't exist
if (!std::filesystem::exists(final_path)) {
bool found = false;
for (const auto& ext : valid_ext) {
std::filesystem::path try_path = final_path;
try_path += ext;
if (std::filesystem::exists(try_path)) {
final_path = try_path;
found = true;
break;
}
}
if (!found) {
fprintf(stderr, "WARNING: LoRA file not found: %s\n", final_path.lexically_normal().string().c_str());
tmp = m.suffix().str();
cleaned_prompt = std::regex_replace(cleaned_prompt, re, "", std::regex_constants::format_first_only);
continue;
}
}
// Normalize path (matches upstream)
const std::string key = final_path.lexically_normal().string();
// Accumulate multiplier if same LoRA appears multiple times (matches upstream)
if (is_high_noise) {
high_noise_lora_map[key] += mul;
} else {
lora_map[key] += mul;
}
fprintf(stderr, "Parsed LoRA: path='%s', multiplier=%.2f, is_high_noise=%s\n",
key.c_str(), mul, is_high_noise ? "true" : "false");
cleaned_prompt = std::regex_replace(cleaned_prompt, re, "", std::regex_constants::format_first_only);
tmp = m.suffix().str();
}
// Build final LoRA vector from accumulated maps (matches upstream)
// Store all path strings first to ensure they persist
for (const auto& kv : lora_map) {
lora_strings.push_back(kv.first);
}
for (const auto& kv : high_noise_lora_map) {
lora_strings.push_back(kv.first);
}
// Now build the LoRA vector with pointers to the stored strings
size_t string_idx = 0;
for (const auto& kv : lora_map) {
sd_lora_t item;
item.is_high_noise = false;
item.path = lora_strings[string_idx].c_str();
item.multiplier = kv.second;
loras.push_back(item);
string_idx++;
}
for (const auto& kv : high_noise_lora_map) {
sd_lora_t item;
item.is_high_noise = true;
item.path = lora_strings[string_idx].c_str();
item.multiplier = kv.second;
loras.push_back(item);
string_idx++;
}
// Clean up extra spaces
std::regex space_regex(R"(\s+)");
cleaned_prompt = std::regex_replace(cleaned_prompt, space_regex, " ");
// Trim leading/trailing spaces
size_t first = cleaned_prompt.find_first_not_of(" \t");
if (first != std::string::npos) {
cleaned_prompt.erase(0, first);
}
size_t last = cleaned_prompt.find_last_not_of(" \t");
if (last != std::string::npos) {
cleaned_prompt.erase(last + 1);
}
fprintf(stderr, "Parsed %zu LoRA(s) from prompt. Cleaned prompt: %s\n", loras.size(), cleaned_prompt.c_str());
return {loras, cleaned_prompt};
}
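The core of `parse_loras_from_prompt` above — extract `<lora:name:multiplier>` tags, accumulate multipliers when the same LoRA appears more than once, strip the tags, and collapse whitespace — can be sketched in Python (a simplified illustration that omits the file lookup and the `|high_noise|` prefix handling):

```python
import re

LORA_RE = re.compile(r"<lora:([^:>]+):([^>]+)>")

# Returns (name -> accumulated multiplier, cleaned prompt).
def parse_loras(prompt: str):
    loras = {}
    for name, mult in LORA_RE.findall(prompt):
        try:
            loras[name] = loras.get(name, 0.0) + float(mult)
        except ValueError:
            continue  # skip tags with a non-numeric multiplier
    cleaned = re.sub(r"\s+", " ", LORA_RE.sub("", prompt)).strip()
    return loras, cleaned
```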
// Copied from the upstream CLI
static void sd_log_cb(enum sd_log_level_t level, const char* log, void* data) {
@@ -98,7 +459,7 @@ int load_model(const char *model, char *model_path, char* options[], int threads
const char *stableDiffusionModel = "";
if (diff == 1 ) {
-stableDiffusionModel = model;
+stableDiffusionModel = strdup(model);
model = "";
}
@@ -109,8 +470,38 @@ int load_model(const char *model, char *model_path, char* options[], int threads
const char *vae_path = "";
const char *scheduler_str = "";
const char *sampler = "";
const char *clip_vision_path = "";
const char *llm_path = "";
const char *llm_vision_path = "";
const char *diffusion_model_path = stableDiffusionModel;
const char *high_noise_diffusion_model_path = "";
const char *taesd_path = "";
const char *control_net_path = "";
const char *embedding_dir = "";
const char *photo_maker_path = "";
const char *tensor_type_rules = "";
char *lora_dir = model_path;
bool lora_dir_allocated = false;
bool vae_decode_only = true;
int n_threads = threads;
enum sd_type_t wtype = SD_TYPE_COUNT;
enum rng_type_t rng_type = CUDA_RNG;
enum rng_type_t sampler_rng_type = RNG_TYPE_COUNT;
enum prediction_t prediction = PREDICTION_COUNT;
enum lora_apply_mode_t lora_apply_mode = LORA_APPLY_AUTO;
bool offload_params_to_cpu = false;
bool keep_clip_on_cpu = false;
bool keep_control_net_on_cpu = false;
bool keep_vae_on_cpu = false;
bool diffusion_flash_attn = false;
bool tae_preview_only = false;
bool diffusion_conv_direct = false;
bool vae_conv_direct = false;
bool force_sdxl_vae_conv_scale = false;
bool chroma_use_dit_mask = true;
bool chroma_use_t5_mask = false;
int chroma_t5_mask_pad = 1;
float flow_shift = INFINITY;
fprintf(stderr, "parsing options: %p\n", options);
@@ -123,16 +514,16 @@ int load_model(const char *model, char *model_path, char* options[], int threads
}
if (!strcmp(optname, "clip_l_path")) {
-clip_l_path = optval;
+clip_l_path = strdup(optval);
}
if (!strcmp(optname, "clip_g_path")) {
-clip_g_path = optval;
+clip_g_path = strdup(optval);
}
if (!strcmp(optname, "t5xxl_path")) {
-t5xxl_path = optval;
+t5xxl_path = strdup(optval);
}
if (!strcmp(optname, "vae_path")) {
-vae_path = optval;
+vae_path = strdup(optval);
}
if (!strcmp(optname, "scheduler")) {
scheduler_str = optval;
@@ -147,18 +538,201 @@ int load_model(const char *model, char *model_path, char* options[], int threads
std::filesystem::path lora_path(optval);
std::filesystem::path full_lora_path = model_path_str / lora_path;
lora_dir = strdup(full_lora_path.string().c_str());
lora_dir_allocated = true;
-fprintf(stderr, "Lora dir resolved to: %s\n", lora_dir);
+lora_dir_path = full_lora_path.string();
+fprintf(stderr, "LoRA dir resolved to: %s\n", lora_dir);
} else {
lora_dir = strdup(optval);
lora_dir_allocated = true;
lora_dir_path = std::string(optval);
fprintf(stderr, "No model path provided, using lora dir as-is: %s\n", lora_dir);
}
// Discover LoRAs immediately when directory is set
if (lora_dir && strlen(lora_dir) > 0) {
discover_lora_files(lora_dir);
}
}
// New parsing
if (!strcmp(optname, "clip_vision_path")) clip_vision_path = strdup(optval);
if (!strcmp(optname, "llm_path")) llm_path = strdup(optval);
if (!strcmp(optname, "llm_vision_path")) llm_vision_path = strdup(optval);
if (!strcmp(optname, "diffusion_model_path")) diffusion_model_path = strdup(optval);
if (!strcmp(optname, "high_noise_diffusion_model_path")) high_noise_diffusion_model_path = strdup(optval);
if (!strcmp(optname, "taesd_path")) taesd_path = strdup(optval);
if (!strcmp(optname, "control_net_path")) control_net_path = strdup(optval);
if (!strcmp(optname, "embedding_dir")) {
// Path join with model dir
if (model_path && strlen(model_path) > 0) {
std::filesystem::path model_path_str(model_path);
std::filesystem::path embedding_path(optval);
std::filesystem::path full_embedding_path = model_path_str / embedding_path;
embedding_dir = strdup(full_embedding_path.string().c_str());
fprintf(stderr, "Embedding dir resolved to: %s\n", embedding_dir);
} else {
embedding_dir = strdup(optval);
fprintf(stderr, "No model path provided, using embedding dir as-is: %s\n", embedding_dir);
}
}
if (!strcmp(optname, "photo_maker_path")) photo_maker_path = strdup(optval);
if (!strcmp(optname, "tensor_type_rules")) tensor_type_rules = strdup(optval);
if (!strcmp(optname, "vae_decode_only")) vae_decode_only = (strcmp(optval, "true") == 0 || strcmp(optval, "1") == 0);
if (!strcmp(optname, "offload_params_to_cpu")) offload_params_to_cpu = (strcmp(optval, "true") == 0 || strcmp(optval, "1") == 0);
if (!strcmp(optname, "keep_clip_on_cpu")) keep_clip_on_cpu = (strcmp(optval, "true") == 0 || strcmp(optval, "1") == 0);
if (!strcmp(optname, "keep_control_net_on_cpu")) keep_control_net_on_cpu = (strcmp(optval, "true") == 0 || strcmp(optval, "1") == 0);
if (!strcmp(optname, "keep_vae_on_cpu")) keep_vae_on_cpu = (strcmp(optval, "true") == 0 || strcmp(optval, "1") == 0);
if (!strcmp(optname, "diffusion_flash_attn")) diffusion_flash_attn = (strcmp(optval, "true") == 0 || strcmp(optval, "1") == 0);
if (!strcmp(optname, "tae_preview_only")) tae_preview_only = (strcmp(optval, "true") == 0 || strcmp(optval, "1") == 0);
if (!strcmp(optname, "diffusion_conv_direct")) diffusion_conv_direct = (strcmp(optval, "true") == 0 || strcmp(optval, "1") == 0);
if (!strcmp(optname, "vae_conv_direct")) vae_conv_direct = (strcmp(optval, "true") == 0 || strcmp(optval, "1") == 0);
if (!strcmp(optname, "force_sdxl_vae_conv_scale")) force_sdxl_vae_conv_scale = (strcmp(optval, "true") == 0 || strcmp(optval, "1") == 0);
if (!strcmp(optname, "chroma_use_dit_mask")) chroma_use_dit_mask = (strcmp(optval, "true") == 0 || strcmp(optval, "1") == 0);
if (!strcmp(optname, "chroma_use_t5_mask")) chroma_use_t5_mask = (strcmp(optval, "true") == 0 || strcmp(optval, "1") == 0);
if (!strcmp(optname, "n_threads")) n_threads = atoi(optval);
if (!strcmp(optname, "chroma_t5_mask_pad")) chroma_t5_mask_pad = atoi(optval);
if (!strcmp(optname, "flow_shift")) flow_shift = atof(optval);
if (!strcmp(optname, "rng_type")) {
int found = -1;
for (int m = 0; m < RNG_TYPE_COUNT; m++) {
if (!strcmp(optval, rng_type_str[m])) {
found = m;
break;
}
}
if (found != -1) {
rng_type = (rng_type_t)found;
fprintf(stderr, "Found rng_type: %s\n", optval);
} else {
fprintf(stderr, "Invalid rng_type: %s, using default\n", optval);
}
}
if (!strcmp(optname, "sampler_rng_type")) {
int found = -1;
for (int m = 0; m < RNG_TYPE_COUNT; m++) {
if (!strcmp(optval, rng_type_str[m])) {
found = m;
break;
}
}
if (found != -1) {
sampler_rng_type = (rng_type_t)found;
fprintf(stderr, "Found sampler_rng_type: %s\n", optval);
} else {
fprintf(stderr, "Invalid sampler_rng_type: %s, using default\n", optval);
}
}
if (!strcmp(optname, "prediction")) {
int found = -1;
for (int m = 0; m < PREDICTION_COUNT; m++) {
if (!strcmp(optval, prediction_str[m])) {
found = m;
break;
}
}
if (found != -1) {
prediction = (prediction_t)found;
fprintf(stderr, "Found prediction: %s\n", optval);
} else {
fprintf(stderr, "Invalid prediction: %s, using default\n", optval);
}
}
if (!strcmp(optname, "lora_apply_mode")) {
int found = -1;
for (int m = 0; m < LORA_APPLY_MODE_COUNT; m++) {
if (!strcmp(optval, lora_apply_mode_str[m])) {
found = m;
break;
}
}
if (found != -1) {
lora_apply_mode = (lora_apply_mode_t)found;
fprintf(stderr, "Found lora_apply_mode: %s\n", optval);
} else {
fprintf(stderr, "Invalid lora_apply_mode: %s, using default\n", optval);
}
}
if (!strcmp(optname, "wtype")) {
int found = -1;
for (int m = 0; m < SD_TYPE_COUNT; m++) {
if (sd_type_str[m] && !strcmp(optval, sd_type_str[m])) {
found = m;
break;
}
}
if (found != -1) {
wtype = (sd_type_t)found;
fprintf(stderr, "Found wtype: %s\n", optval);
} else {
fprintf(stderr, "Invalid wtype: %s, using default\n", optval);
}
}
}
fprintf(stderr, "parsed options\n");
// Build embeddings vector from directory if provided
build_embedding_vec(embedding_dir);
fprintf (stderr, "Creating context\n");
sd_ctx_params_init(&ctx_params);
ctx_params.model_path = model;
ctx_params.clip_l_path = clip_l_path;
ctx_params.clip_g_path = clip_g_path;
ctx_params.clip_vision_path = clip_vision_path;
ctx_params.t5xxl_path = t5xxl_path;
ctx_params.llm_path = llm_path;
ctx_params.llm_vision_path = llm_vision_path;
ctx_params.diffusion_model_path = diffusion_model_path;
ctx_params.high_noise_diffusion_model_path = high_noise_diffusion_model_path;
ctx_params.vae_path = vae_path;
ctx_params.taesd_path = taesd_path;
ctx_params.control_net_path = control_net_path;
if (lora_dir && strlen(lora_dir) > 0) {
lora_dir_path = std::string(lora_dir);
fprintf(stderr, "LoRA model directory set to: %s\n", lora_dir);
// Discover LoRAs at load time for logging
discover_lora_files(lora_dir);
} else {
fprintf(stderr, "WARNING: LoRA model directory not set. LoRAs in prompts will not be loaded.\n");
}
// Set embeddings array and count
ctx_params.embeddings = embedding_vec.empty() ? NULL : embedding_vec.data();
ctx_params.embedding_count = static_cast<uint32_t>(embedding_vec.size());
ctx_params.photo_maker_path = photo_maker_path;
ctx_params.tensor_type_rules = tensor_type_rules;
ctx_params.vae_decode_only = vae_decode_only;
// XXX: Setting to true causes a segfault on the second run
ctx_params.free_params_immediately = false;
ctx_params.n_threads = n_threads;
ctx_params.rng_type = rng_type;
ctx_params.keep_clip_on_cpu = keep_clip_on_cpu;
if (wtype != SD_TYPE_COUNT) ctx_params.wtype = wtype;
if (sampler_rng_type != RNG_TYPE_COUNT) ctx_params.sampler_rng_type = sampler_rng_type;
if (prediction != PREDICTION_COUNT) ctx_params.prediction = prediction;
if (lora_apply_mode != LORA_APPLY_MODE_COUNT) ctx_params.lora_apply_mode = lora_apply_mode;
ctx_params.offload_params_to_cpu = offload_params_to_cpu;
ctx_params.keep_control_net_on_cpu = keep_control_net_on_cpu;
ctx_params.keep_vae_on_cpu = keep_vae_on_cpu;
ctx_params.diffusion_flash_attn = diffusion_flash_attn;
ctx_params.tae_preview_only = tae_preview_only;
ctx_params.diffusion_conv_direct = diffusion_conv_direct;
ctx_params.vae_conv_direct = vae_conv_direct;
ctx_params.force_sdxl_vae_conv_scale = force_sdxl_vae_conv_scale;
ctx_params.chroma_use_dit_mask = chroma_use_dit_mask;
ctx_params.chroma_use_t5_mask = chroma_use_t5_mask;
ctx_params.chroma_t5_mask_pad = chroma_t5_mask_pad;
ctx_params.flow_shift = flow_shift;
sd_ctx_t* sd_ctx = new_sd_ctx(&ctx_params);
if (sd_ctx == NULL) {
fprintf (stderr, "failed loading model (generic error)\n");
// TODO: Clean up allocated memory
return 1;
}
fprintf (stderr, "Created context: OK\n");
int sample_method_found = -1;
for (int m = 0; m < SAMPLE_METHOD_COUNT; m++) {
if (!strcmp(sampler, sample_method_str[m])) {
@@ -167,54 +741,24 @@ int load_model(const char *model, char *model_path, char* options[], int threads
}
}
if (sample_method_found == -1) {
fprintf(stderr, "Invalid sample method, default to EULER_A!\n");
sample_method_found = sample_method_t::SAMPLE_METHOD_DEFAULT;
sample_method_found = sd_get_default_sample_method(sd_ctx);
fprintf(stderr, "Invalid sample method, using default: %s\n", sample_method_str[sample_method_found]);
}
sample_method = (sample_method_t)sample_method_found;
for (int d = 0; d < SCHEDULE_COUNT; d++) {
for (int d = 0; d < SCHEDULER_COUNT; d++) {
if (!strcmp(scheduler_str, schedulers[d])) {
scheduler = (scheduler_t)d;
fprintf (stderr, "Found scheduler: %s\n", scheduler_str);
}
}
fprintf (stderr, "Creating context\n");
sd_ctx_params_t ctx_params;
sd_ctx_params_init(&ctx_params);
ctx_params.model_path = model;
ctx_params.clip_l_path = clip_l_path;
ctx_params.clip_g_path = clip_g_path;
ctx_params.t5xxl_path = t5xxl_path;
ctx_params.diffusion_model_path = stableDiffusionModel;
ctx_params.vae_path = vae_path;
ctx_params.taesd_path = "";
ctx_params.control_net_path = "";
ctx_params.lora_model_dir = lora_dir;
ctx_params.embedding_dir = "";
ctx_params.vae_decode_only = false;
ctx_params.free_params_immediately = false;
ctx_params.n_threads = threads;
ctx_params.rng_type = STD_DEFAULT_RNG;
sd_ctx_t* sd_ctx = new_sd_ctx(&ctx_params);
if (sd_ctx == NULL) {
fprintf (stderr, "failed loading model (generic error)\n");
// Clean up allocated memory
if (lora_dir_allocated && lora_dir) {
free(lora_dir);
}
return 1;
if (scheduler == SCHEDULER_COUNT) {
scheduler = sd_get_default_scheduler(sd_ctx, sample_method);
fprintf(stderr, "Invalid scheduler, using default: %s\n", schedulers[scheduler]);
}
fprintf (stderr, "Created context: OK\n");
sd_c = sd_ctx;
// Clean up allocated memory
if (lora_dir_allocated && lora_dir) {
free(lora_dir);
}
return 0;
}
@@ -243,12 +787,66 @@ sd_tiling_params_t* sd_img_gen_params_get_vae_tiling_params(sd_img_gen_params_t
sd_img_gen_params_t* sd_img_gen_params_new(void) {
sd_img_gen_params_t *params = (sd_img_gen_params_t *)std::malloc(sizeof(sd_img_gen_params_t));
sd_img_gen_params_init(params);
sd_sample_params_init(&params->sample_params);
sd_cache_params_init(&params->cache);
params->control_strength = 0.9f;
return params;
}
// Storage for cleaned prompt strings (needs to persist)
static std::string cleaned_prompt_storage;
static std::string cleaned_negative_prompt_storage;
void sd_img_gen_params_set_prompts(sd_img_gen_params_t *params, const char *prompt, const char *negative_prompt) {
params->prompt = prompt;
params->negative_prompt = negative_prompt;
// Clear previous LoRA data
lora_vec.clear();
lora_strings.clear();
// Parse LoRAs from prompt
std::string prompt_str = prompt ? prompt : "";
std::string negative_prompt_str = negative_prompt ? negative_prompt : "";
// Get lora_dir from ctx_params if available, otherwise use stored path
const char* lora_dir_to_use = lora_dir_path.empty() ? nullptr : lora_dir_path.c_str();
auto [loras, cleaned_prompt] = parse_loras_from_prompt(prompt_str, lora_dir_to_use);
lora_vec = loras;
cleaned_prompt_storage = cleaned_prompt;
// Also check negative prompt for LoRAs (though this is less common)
auto [neg_loras, cleaned_negative] = parse_loras_from_prompt(negative_prompt_str, lora_dir_to_use);
// Negative-prompt LoRAs are only logged here, not merged into lora_vec
if (!neg_loras.empty()) {
fprintf(stderr, "Note: Found %zu LoRAs in negative prompt (may not be supported)\n", neg_loras.size());
}
cleaned_negative_prompt_storage = cleaned_negative;
// Set the cleaned prompts
params->prompt = cleaned_prompt_storage.c_str();
params->negative_prompt = cleaned_negative_prompt_storage.c_str();
// Set LoRAs in params
params->loras = lora_vec.empty() ? nullptr : lora_vec.data();
params->lora_count = static_cast<uint32_t>(lora_vec.size());
fprintf(stderr, "Set prompts with %zu LoRAs. Original prompt: %s\n", lora_vec.size(), prompt ? prompt : "(null)");
fprintf(stderr, "Cleaned prompt: %s\n", cleaned_prompt_storage.c_str());
// Debug: Verify LoRAs are set correctly
if (params->loras && params->lora_count > 0) {
fprintf(stderr, "DEBUG: LoRAs set in params structure:\n");
for (uint32_t i = 0; i < params->lora_count; i++) {
fprintf(stderr, " params->loras[%u]: path='%s' (ptr=%p), multiplier=%.2f, is_high_noise=%s\n",
i,
params->loras[i].path ? params->loras[i].path : "(null)",
(void*)params->loras[i].path,
params->loras[i].multiplier,
params->loras[i].is_high_noise ? "true" : "false");
}
} else {
fprintf(stderr, "DEBUG: No LoRAs set in params structure (loras=%p, lora_count=%u)\n",
(void*)params->loras, params->lora_count);
}
}
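The prompt-side LoRA handling above relies on `parse_loras_from_prompt`, which is not shown in this hunk. A minimal sketch of that idea in Go, assuming the common `<lora:name:multiplier>` tag convention (the real C++ parser and its tag syntax may differ):

```go
package main

import (
	"fmt"
	"regexp"
	"strconv"
	"strings"
)

type loraSpec struct {
	name       string
	multiplier float64
}

// loraTag matches <lora:name> or <lora:name:multiplier> tags embedded in a prompt.
var loraTag = regexp.MustCompile(`<lora:([^:>]+)(?::([0-9.]+))?>`)

// parseLoras extracts LoRA tags from a prompt and returns the specs plus the
// prompt with the tags stripped, mirroring what parse_loras_from_prompt is
// assumed to do (multiplier defaults to 1.0 when omitted).
func parseLoras(prompt string) ([]loraSpec, string) {
	var specs []loraSpec
	for _, m := range loraTag.FindAllStringSubmatch(prompt, -1) {
		mult := 1.0
		if m[2] != "" {
			if v, err := strconv.ParseFloat(m[2], 64); err == nil {
				mult = v
			}
		}
		specs = append(specs, loraSpec{name: m[1], multiplier: mult})
	}
	cleaned := strings.TrimSpace(loraTag.ReplaceAllString(prompt, ""))
	return specs, cleaned
}

func main() {
	specs, cleaned := parseLoras("a castle at dusk <lora:style_v2:0.8>")
	fmt.Println(len(specs), specs[0].name, specs[0].multiplier, cleaned)
}
```

As in the C++ code, the cleaned prompt (not the raw one) is what should reach the diffusion pipeline, while the extracted specs are resolved against the LoRA directory.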
void sd_img_gen_params_set_dimensions(sd_img_gen_params_t *params, int width, int height) {
@@ -260,7 +858,7 @@ void sd_img_gen_params_set_seed(sd_img_gen_params_t *params, int64_t seed) {
params->seed = seed;
}
int gen_image(sd_img_gen_params_t *p, int steps, char *dst, float cfg_scale, char *src_image, float strength, char *mask_image, char **ref_images, int ref_images_count) {
int gen_image(sd_img_gen_params_t *p, int steps, char *dst, float cfg_scale, char *src_image, float strength, char *mask_image, char* ref_images[], int ref_images_count) {
sd_image_t* results;
@@ -440,6 +1038,24 @@ int gen_image(sd_img_gen_params_t *p, int steps, char *dst, float cfg_scale, cha
}
}
// Log LoRA information
if (p->loras && p->lora_count > 0) {
fprintf(stderr, "Using %u LoRA(s) in generation:\n", p->lora_count);
for (uint32_t i = 0; i < p->lora_count; i++) {
fprintf(stderr, " LoRA[%u]: path='%s', multiplier=%.2f, is_high_noise=%s\n",
i,
p->loras[i].path ? p->loras[i].path : "(null)",
p->loras[i].multiplier,
p->loras[i].is_high_noise ? "true" : "false");
}
} else {
fprintf(stderr, "No LoRAs specified for this generation\n");
}
fprintf(stderr, "Generating image with params: \nctx\n---\n%s\ngen\n---\n%s\n",
sd_ctx_params_to_str(&ctx_params),
sd_img_gen_params_to_str(p));
results = generate_image(sd_c, p);
std::free(p);
@@ -472,9 +1088,12 @@ int gen_image(sd_img_gen_params_t *p, int steps, char *dst, float cfg_scale, cha
fprintf (stderr, "Channel: %d\n", results[0].channel);
fprintf (stderr, "Data: %p\n", results[0].data);
stbi_write_png(dst, results[0].width, results[0].height, results[0].channel,
results[0].data, 0, NULL);
fprintf (stderr, "Saved resulting image to '%s'\n", dst);
int ret = stbi_write_png(dst, results[0].width, results[0].height, results[0].channel,
results[0].data, 0, NULL);
if (ret)
fprintf (stderr, "Saved resulting image to '%s'\n", dst);
else
fprintf(stderr, "Failed to write image to '%s'\n", dst);
// Clean up
free(results[0].data);
@@ -485,12 +1104,14 @@ int gen_image(sd_img_gen_params_t *p, int steps, char *dst, float cfg_scale, cha
for (auto buffer : ref_image_buffers) {
if (buffer) free(buffer);
}
fprintf (stderr, "gen_image is done: %s", dst);
fprintf (stderr, "gen_image is done: %s\n", dst);
fflush(stderr);
return 0;
return !ret;
}
int unload() {
free_sd_ctx(sd_c);
return 0;
}

View File

@@ -22,7 +22,7 @@ type SDGGML struct {
var (
LoadModel func(model, model_path string, options []uintptr, threads int32, diff int) int
GenImage func(params uintptr, steps int, dst string, cfgScale float32, srcImage string, strength float32, maskImage string, refImages []string, refImagesCount int) int
GenImage func(params uintptr, steps int, dst string, cfgScale float32, srcImage string, strength float32, maskImage string, refImages []uintptr, refImagesCount int) int
TilingParamsSetEnabled func(params uintptr, enabled bool)
TilingParamsSetTileSizes func(params uintptr, tileSizeX int, tileSizeY int)
@@ -95,12 +95,12 @@ func (sd *SDGGML) Load(opts *pb.ModelOptions) error {
sd.cfgScale = opts.CFGScale
ret := LoadModel(modelFile, modelPathC, options, opts.Threads, diffusionModel)
runtime.KeepAlive(keepAlive)
fmt.Fprintf(os.Stderr, "LoadModel: %d\n", ret)
if ret != 0 {
return fmt.Errorf("could not load model")
}
runtime.KeepAlive(keepAlive)
return nil
}
@@ -123,10 +123,15 @@ func (sd *SDGGML) GenerateImage(opts *pb.GenerateImageRequest) error {
}
}
// At the time of writing Purego doesn't recurse into slices and convert Go strings to pointers so we need to do that
var keepAlive []any
refImagesCount := len(opts.RefImages)
refImages := make([]string, refImagesCount, refImagesCount+1)
copy(refImages, opts.RefImages)
*(*uintptr)(unsafe.Add(unsafe.Pointer(&refImages), refImagesCount)) = 0
refImages := make([]uintptr, refImagesCount, refImagesCount+1)
for i, ri := range opts.RefImages {
bytep := CString(ri)
refImages[i] = uintptr(unsafe.Pointer(bytep))
keepAlive = append(keepAlive, bytep)
}
// Default strength for img2img (0.75 is a good default)
strength := float32(0.75)
@@ -140,6 +145,8 @@ func (sd *SDGGML) GenerateImage(opts *pb.GenerateImageRequest) error {
TilingParamsSetEnabled(vaep, false)
ret := GenImage(p, int(opts.Step), dst, sd.cfgScale, srcImage, strength, maskImage, refImages, refImagesCount)
runtime.KeepAlive(keepAlive)
fmt.Fprintf(os.Stderr, "GenImage: %d\n", ret)
if ret != 0 {
return fmt.Errorf("inference failed")
}

View File

@@ -17,7 +17,7 @@ void sd_img_gen_params_set_dimensions(sd_img_gen_params_t *params, int width, in
void sd_img_gen_params_set_seed(sd_img_gen_params_t *params, int64_t seed);
int load_model(const char *model, char *model_path, char* options[], int threads, int diffusionModel);
int gen_image(sd_img_gen_params_t *p, int steps, char *dst, float cfg_scale, char *src_image, float strength, char *mask_image, char **ref_images, int ref_images_count);
int gen_image(sd_img_gen_params_t *p, int steps, char *dst, float cfg_scale, char *src_image, float strength, char *mask_image, char* ref_images[], int ref_images_count);
#ifdef __cplusplus
}
#endif

View File

@@ -3,5 +3,5 @@ sources/
build/
package/
whisper
libgowhisper.so
*.so
compile_commands.json

View File

@@ -8,7 +8,7 @@ JOBS?=$(shell nproc --ignore=1)
# whisper.cpp version
WHISPER_REPO?=https://github.com/ggml-org/whisper.cpp
WHISPER_CPP_VERSION?=19ceec8eac980403b714d603e5ca31653cd42a3f
WHISPER_CPP_VERSION?=e9898ddfb908ffaa7026c66852a023889a5a7202
SO_TARGET?=libgowhisper.so
CMAKE_ARGS+=-DBUILD_SHARED_LIBS=OFF

View File

@@ -107,7 +107,7 @@ int vad(float pcmf32[], size_t pcmf32_len, float **segs_out,
}
int transcribe(uint32_t threads, char *lang, bool translate, bool tdrz,
float pcmf32[], size_t pcmf32_len, size_t *segs_out_len) {
float pcmf32[], size_t pcmf32_len, size_t *segs_out_len, char *prompt) {
whisper_full_params wparams =
whisper_full_default_params(WHISPER_SAMPLING_GREEDY);
@@ -122,8 +122,10 @@ int transcribe(uint32_t threads, char *lang, bool translate, bool tdrz,
wparams.debug_mode = true;
wparams.print_progress = true;
wparams.tdrz_enable = tdrz;
wparams.initial_prompt = prompt;
fprintf(stderr, "info: Enable tdrz: %d\n", tdrz);
fprintf(stderr, "info: Initial prompt: \"%s\"\n", prompt ? prompt : "(null)");
if (whisper_full(ctx, wparams, pcmf32, pcmf32_len)) {
fprintf(stderr, "error: transcription failed\n");

View File

@@ -17,7 +17,7 @@ var (
CppLoadModel func(modelPath string) int
CppLoadModelVAD func(modelPath string) int
CppVAD func(pcmf32 []float32, pcmf32Size uintptr, segsOut unsafe.Pointer, segsOutLen unsafe.Pointer) int
CppTranscribe func(threads uint32, lang string, translate bool, diarize bool, pcmf32 []float32, pcmf32Len uintptr, segsOutLen unsafe.Pointer) int
CppTranscribe func(threads uint32, lang string, translate bool, diarize bool, pcmf32 []float32, pcmf32Len uintptr, segsOutLen unsafe.Pointer, prompt string) int
CppGetSegmentText func(i int) string
CppGetSegmentStart func(i int) int64
CppGetSegmentEnd func(i int) int64
@@ -123,7 +123,7 @@ func (w *Whisper) AudioTranscription(opts *pb.TranscriptRequest) (pb.TranscriptR
segsLen := uintptr(0xdeadbeef)
segsLenPtr := unsafe.Pointer(&segsLen)
if ret := CppTranscribe(opts.Threads, opts.Language, opts.Translate, opts.Diarize, data, uintptr(len(data)), segsLenPtr); ret != 0 {
if ret := CppTranscribe(opts.Threads, opts.Language, opts.Translate, opts.Diarize, data, uintptr(len(data)), segsLenPtr, opts.Prompt); ret != 0 {
return pb.TranscriptResult{}, fmt.Errorf("Failed Transcribe")
}

View File

@@ -7,7 +7,8 @@ int load_model_vad(const char *const model_path);
int vad(float pcmf32[], size_t pcmf32_size, float **segs_out,
size_t *segs_out_len);
int transcribe(uint32_t threads, char *lang, bool translate, bool tdrz,
float pcmf32[], size_t pcmf32_len, size_t *segs_out_len);
float pcmf32[], size_t pcmf32_len, size_t *segs_out_len,
char *prompt);
const char *get_segment_text(int i);
int64_t get_segment_t0(int i);
int64_t get_segment_t1(int i);

View File

@@ -25,7 +25,10 @@
metal: "metal-llama-cpp"
vulkan: "vulkan-llama-cpp"
nvidia-l4t: "nvidia-l4t-arm64-llama-cpp"
darwin-x86: "darwin-x86-llama-cpp"
nvidia-cuda-13: "cuda13-llama-cpp"
nvidia-cuda-12: "cuda12-llama-cpp"
nvidia-l4t-cuda-12: "nvidia-l4t-arm64-llama-cpp"
nvidia-l4t-cuda-13: "cuda13-nvidia-l4t-arm64-llama-cpp"
- &whispercpp
name: "whisper"
alias: "whisper"
@@ -49,6 +52,10 @@
amd: "rocm-whisper"
vulkan: "vulkan-whisper"
nvidia-l4t: "nvidia-l4t-arm64-whisper"
nvidia-cuda-13: "cuda13-whisper"
nvidia-cuda-12: "cuda12-whisper"
nvidia-l4t-cuda-12: "nvidia-l4t-arm64-whisper"
nvidia-l4t-cuda-13: "cuda13-nvidia-l4t-arm64-whisper"
- &stablediffusionggml
name: "stablediffusion-ggml"
alias: "stablediffusion-ggml"
@@ -73,7 +80,10 @@
vulkan: "vulkan-stablediffusion-ggml"
nvidia-l4t: "nvidia-l4t-arm64-stablediffusion-ggml"
metal: "metal-stablediffusion-ggml"
# darwin-x86: "darwin-x86-stablediffusion-ggml"
nvidia-cuda-13: "cuda13-stablediffusion-ggml"
nvidia-cuda-12: "cuda12-stablediffusion-ggml"
nvidia-l4t-cuda-12: "nvidia-l4t-arm64-stablediffusion-ggml"
nvidia-l4t-cuda-13: "cuda13-nvidia-l4t-arm64-stablediffusion-ggml"
- &rfdetr
name: "rfdetr"
alias: "rfdetr"
@@ -96,6 +106,9 @@
#amd: "rocm-rfdetr"
nvidia-l4t: "nvidia-l4t-arm64-rfdetr"
default: "cpu-rfdetr"
nvidia-cuda-13: "cuda13-rfdetr"
nvidia-cuda-12: "cuda12-rfdetr"
nvidia-l4t-cuda-12: "nvidia-l4t-arm64-rfdetr"
- &vllm
name: "vllm"
license: apache-2.0
@@ -128,6 +141,7 @@
nvidia: "cuda12-vllm"
amd: "rocm-vllm"
intel: "intel-vllm"
nvidia-cuda-12: "cuda12-vllm"
- &mlx
name: "mlx"
uri: "quay.io/go-skynet/local-ai-backends:latest-metal-darwin-arm64-mlx"
@@ -201,6 +215,8 @@
nvidia: "cuda12-transformers"
intel: "intel-transformers"
amd: "rocm-transformers"
nvidia-cuda-13: "cuda13-transformers"
nvidia-cuda-12: "cuda12-transformers"
- &diffusers
name: "diffusers"
icon: https://raw.githubusercontent.com/huggingface/diffusers/main/docs/source/en/imgs/diffusers_library.jpg
@@ -221,6 +237,10 @@
nvidia-l4t: "nvidia-l4t-diffusers"
metal: "metal-diffusers"
default: "cpu-diffusers"
nvidia-cuda-13: "cuda13-diffusers"
nvidia-cuda-12: "cuda12-diffusers"
nvidia-l4t-cuda-12: "nvidia-l4t-arm64-diffusers"
nvidia-l4t-cuda-13: "cuda13-nvidia-l4t-arm64-diffusers"
- &exllama2
name: "exllama2"
urls:
@@ -236,6 +256,7 @@
capabilities:
nvidia: "cuda12-exllama2"
intel: "intel-exllama2"
nvidia-cuda-12: "cuda12-exllama2"
- &faster-whisper
icon: https://avatars.githubusercontent.com/u/1520500?s=200&v=4
description: |
@@ -252,6 +273,8 @@
nvidia: "cuda12-faster-whisper"
intel: "intel-faster-whisper"
amd: "rocm-faster-whisper"
nvidia-cuda-13: "cuda13-faster-whisper"
nvidia-cuda-12: "cuda12-faster-whisper"
- &kokoro
icon: https://avatars.githubusercontent.com/u/166769057?v=4
description: |
@@ -271,6 +294,9 @@
intel: "intel-kokoro"
amd: "rocm-kokoro"
nvidia-l4t: "nvidia-l4t-kokoro"
nvidia-cuda-13: "cuda13-kokoro"
nvidia-cuda-12: "cuda12-kokoro"
nvidia-l4t-cuda-12: "nvidia-l4t-arm64-kokoro"
- &coqui
urls:
- https://github.com/idiap/coqui-ai-TTS
@@ -292,6 +318,8 @@
nvidia: "cuda12-coqui"
intel: "intel-coqui"
amd: "rocm-coqui"
nvidia-cuda-13: "cuda13-coqui"
nvidia-cuda-12: "cuda12-coqui"
icon: https://avatars.githubusercontent.com/u/1338804?s=200&v=4
- &bark
urls:
@@ -308,6 +336,8 @@
cuda: "cuda12-bark"
intel: "intel-bark"
rocm: "rocm-bark"
nvidia-cuda-13: "cuda13-bark"
nvidia-cuda-12: "cuda12-bark"
icon: https://avatars.githubusercontent.com/u/99442120?s=200&v=4
- &barkcpp
urls:
@@ -354,6 +384,32 @@
metal: "metal-chatterbox"
default: "cpu-chatterbox"
nvidia-l4t: "nvidia-l4t-arm64-chatterbox"
nvidia-cuda-13: "cuda13-chatterbox"
nvidia-cuda-12: "cuda12-chatterbox"
nvidia-l4t-cuda-12: "nvidia-l4t-arm64-chatterbox"
nvidia-l4t-cuda-13: "cuda13-nvidia-l4t-arm64-chatterbox"
- &vibevoice
urls:
- https://github.com/microsoft/VibeVoice
description: |
VibeVoice-Realtime is a real-time text-to-speech model that generates natural-sounding speech.
tags:
- text-to-speech
- TTS
license: mit
name: "vibevoice"
alias: "vibevoice"
capabilities:
nvidia: "cuda12-vibevoice"
intel: "intel-vibevoice"
amd: "rocm-vibevoice"
nvidia-l4t: "nvidia-l4t-vibevoice"
default: "cpu-vibevoice"
nvidia-cuda-13: "cuda13-vibevoice"
nvidia-cuda-12: "cuda12-vibevoice"
nvidia-l4t-cuda-12: "nvidia-l4t-vibevoice"
nvidia-l4t-cuda-13: "cuda13-nvidia-l4t-arm64-vibevoice"
icon: https://avatars.githubusercontent.com/u/6154722?s=200&v=4
- &piper
name: "piper"
uri: "quay.io/go-skynet/local-ai-backends:latest-piper"
@@ -442,6 +498,8 @@
nvidia: "cuda12-neutts"
amd: "rocm-neutts"
nvidia-l4t: "nvidia-l4t-neutts"
nvidia-cuda-12: "cuda12-neutts"
nvidia-l4t-cuda-12: "nvidia-l4t-arm64-neutts"
- !!merge <<: *neutts
name: "neutts-development"
capabilities:
@@ -449,6 +507,22 @@
nvidia: "cuda12-neutts-development"
amd: "rocm-neutts-development"
nvidia-l4t: "nvidia-l4t-neutts-development"
nvidia-cuda-12: "cuda12-neutts-development"
nvidia-l4t-cuda-12: "nvidia-l4t-arm64-neutts-development"
- !!merge <<: *llamacpp
name: "llama-cpp-development"
capabilities:
default: "cpu-llama-cpp-development"
nvidia: "cuda12-llama-cpp-development"
intel: "intel-sycl-f16-llama-cpp-development"
amd: "rocm-llama-cpp-development"
metal: "metal-llama-cpp-development"
vulkan: "vulkan-llama-cpp-development"
nvidia-l4t: "nvidia-l4t-arm64-llama-cpp-development"
nvidia-cuda-13: "cuda13-llama-cpp-development"
nvidia-cuda-12: "cuda12-llama-cpp-development"
nvidia-l4t-cuda-12: "nvidia-l4t-arm64-llama-cpp-development"
nvidia-l4t-cuda-13: "cuda13-nvidia-l4t-arm64-llama-cpp-development"
- !!merge <<: *neutts
name: "cpu-neutts"
uri: "quay.io/go-skynet/local-ai-backends:latest-cpu-neutts"
@@ -465,7 +539,7 @@
mirrors:
- localai/localai-backends:latest-gpu-rocm-hipblas-neutts
- !!merge <<: *neutts
name: "nvidia-l4t-neutts"
name: "nvidia-l4t-arm64-neutts"
uri: "quay.io/go-skynet/local-ai-backends:latest-nvidia-l4t-arm64-neutts"
mirrors:
- localai/localai-backends:latest-nvidia-l4t-arm64-neutts
@@ -485,7 +559,7 @@
mirrors:
- localai/localai-backends:master-gpu-rocm-hipblas-neutts
- !!merge <<: *neutts
name: "nvidia-l4t-neutts-development"
name: "nvidia-l4t-arm64-neutts-development"
uri: "quay.io/go-skynet/local-ai-backends:master-nvidia-l4t-arm64-neutts"
mirrors:
- localai/localai-backends:master-nvidia-l4t-arm64-neutts
@@ -530,16 +604,6 @@
mirrors:
- localai/localai-backends:master-piper
## llama-cpp
- !!merge <<: *llamacpp
name: "darwin-x86-llama-cpp"
uri: "quay.io/go-skynet/local-ai-backends:latest-darwin-x86-llama-cpp"
mirrors:
- localai/localai-backends:latest-darwin-x86-llama-cpp
- !!merge <<: *llamacpp
name: "darwin-x86-llama-cpp-development"
uri: "quay.io/go-skynet/local-ai-backends:master-darwin-x86-llama-cpp"
mirrors:
- localai/localai-backends:master-darwin-x86-llama-cpp
- !!merge <<: *llamacpp
name: "nvidia-l4t-arm64-llama-cpp"
uri: "quay.io/go-skynet/local-ai-backends:latest-nvidia-l4t-arm64-llama-cpp"
@@ -550,6 +614,16 @@
uri: "quay.io/go-skynet/local-ai-backends:master-nvidia-l4t-arm64-llama-cpp"
mirrors:
- localai/localai-backends:master-nvidia-l4t-arm64-llama-cpp
- !!merge <<: *llamacpp
name: "cuda13-nvidia-l4t-arm64-llama-cpp"
uri: "quay.io/go-skynet/local-ai-backends:latest-nvidia-l4t-cuda-13-arm64-llama-cpp"
mirrors:
- localai/localai-backends:latest-nvidia-l4t-cuda-13-arm64-llama-cpp
- !!merge <<: *llamacpp
name: "cuda13-nvidia-l4t-arm64-llama-cpp-development"
uri: "quay.io/go-skynet/local-ai-backends:master-nvidia-l4t-cuda-13-arm64-llama-cpp"
mirrors:
- localai/localai-backends:master-nvidia-l4t-cuda-13-arm64-llama-cpp
- !!merge <<: *llamacpp
name: "cpu-llama-cpp"
uri: "quay.io/go-skynet/local-ai-backends:latest-cpu-llama-cpp"
@@ -630,6 +704,16 @@
uri: "quay.io/go-skynet/local-ai-backends:master-gpu-intel-sycl-f16-llama-cpp"
mirrors:
- localai/localai-backends:master-gpu-intel-sycl-f16-llama-cpp
- !!merge <<: *llamacpp
name: "cuda13-llama-cpp"
uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-nvidia-cuda-13-llama-cpp"
mirrors:
- localai/localai-backends:latest-gpu-nvidia-cuda-13-llama-cpp
- !!merge <<: *llamacpp
name: "cuda13-llama-cpp-development"
uri: "quay.io/go-skynet/local-ai-backends:master-gpu-nvidia-cuda-13-llama-cpp"
mirrors:
- localai/localai-backends:master-gpu-nvidia-cuda-13-llama-cpp
## whisper
- !!merge <<: *whispercpp
name: "nvidia-l4t-arm64-whisper"
@@ -641,6 +725,16 @@
uri: "quay.io/go-skynet/local-ai-backends:master-nvidia-l4t-arm64-whisper"
mirrors:
- localai/localai-backends:master-nvidia-l4t-arm64-whisper
- !!merge <<: *whispercpp
name: "cuda13-nvidia-l4t-arm64-whisper"
uri: "quay.io/go-skynet/local-ai-backends:latest-nvidia-l4t-cuda-13-arm64-whisper"
mirrors:
- localai/localai-backends:latest-nvidia-l4t-cuda-13-arm64-whisper
- !!merge <<: *whispercpp
name: "cuda13-nvidia-l4t-arm64-whisper-development"
uri: "quay.io/go-skynet/local-ai-backends:master-nvidia-l4t-cuda-13-arm64-whisper"
mirrors:
- localai/localai-backends:master-nvidia-l4t-cuda-13-arm64-whisper
- !!merge <<: *whispercpp
name: "cpu-whisper"
uri: "quay.io/go-skynet/local-ai-backends:latest-cpu-whisper"
@@ -731,6 +825,16 @@
uri: "quay.io/go-skynet/local-ai-backends:master-gpu-intel-sycl-f16-whisper"
mirrors:
- localai/localai-backends:master-gpu-intel-sycl-f16-whisper
- !!merge <<: *whispercpp
name: "cuda13-whisper"
uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-nvidia-cuda-13-whisper"
mirrors:
- localai/localai-backends:latest-gpu-nvidia-cuda-13-whisper
- !!merge <<: *whispercpp
name: "cuda13-whisper-development"
uri: "quay.io/go-skynet/local-ai-backends:master-gpu-nvidia-cuda-13-whisper"
mirrors:
- localai/localai-backends:master-gpu-nvidia-cuda-13-whisper
## stablediffusion-ggml
- !!merge <<: *stablediffusionggml
name: "cpu-stablediffusion-ggml"
@@ -810,6 +914,26 @@
uri: "quay.io/go-skynet/local-ai-backends:latest-nvidia-l4t-arm64-stablediffusion-ggml"
mirrors:
- localai/localai-backends:latest-nvidia-l4t-arm64-stablediffusion-ggml
- !!merge <<: *stablediffusionggml
name: "cuda13-nvidia-l4t-arm64-stablediffusion-ggml"
uri: "quay.io/go-skynet/local-ai-backends:latest-nvidia-l4t-cuda-13-arm64-stablediffusion-ggml"
mirrors:
- localai/localai-backends:latest-nvidia-l4t-cuda-13-arm64-stablediffusion-ggml
- !!merge <<: *stablediffusionggml
name: "cuda13-nvidia-l4t-arm64-stablediffusion-ggml-development"
uri: "quay.io/go-skynet/local-ai-backends:master-nvidia-l4t-cuda-13-arm64-stablediffusion-ggml"
mirrors:
- localai/localai-backends:master-nvidia-l4t-cuda-13-arm64-stablediffusion-ggml
- !!merge <<: *stablediffusionggml
name: "cuda13-stablediffusion-ggml"
uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-nvidia-cuda-13-stablediffusion-ggml"
mirrors:
- localai/localai-backends:latest-gpu-nvidia-cuda-13-stablediffusion-ggml
- !!merge <<: *stablediffusionggml
name: "cuda13-stablediffusion-ggml-development"
uri: "quay.io/go-skynet/local-ai-backends:master-gpu-nvidia-cuda-13-stablediffusion-ggml"
mirrors:
- localai/localai-backends:master-gpu-nvidia-cuda-13-stablediffusion-ggml
# vllm
- !!merge <<: *vllm
name: "vllm-development"
@@ -856,6 +980,7 @@
#amd: "rocm-rfdetr-development"
nvidia-l4t: "nvidia-l4t-arm64-rfdetr-development"
default: "cpu-rfdetr-development"
nvidia-cuda-13: "cuda13-rfdetr-development"
- !!merge <<: *rfdetr
name: "cuda12-rfdetr"
uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-nvidia-cuda-12-rfdetr"
@@ -876,6 +1001,11 @@
uri: "quay.io/go-skynet/local-ai-backends:latest-nvidia-l4t-arm64-rfdetr"
mirrors:
- localai/localai-backends:latest-nvidia-l4t-arm64-rfdetr
- !!merge <<: *rfdetr
name: "nvidia-l4t-arm64-rfdetr-development"
uri: "quay.io/go-skynet/local-ai-backends:master-nvidia-l4t-arm64-rfdetr"
mirrors:
- localai/localai-backends:master-nvidia-l4t-arm64-rfdetr
- !!merge <<: *rfdetr
name: "cpu-rfdetr"
uri: "quay.io/go-skynet/local-ai-backends:latest-cpu-rfdetr"
@@ -906,6 +1036,16 @@
uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-intel-rfdetr"
mirrors:
- localai/localai-backends:latest-gpu-intel-rfdetr
- !!merge <<: *rfdetr
name: "cuda13-rfdetr"
uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-nvidia-cuda-13-rfdetr"
mirrors:
- localai/localai-backends:latest-gpu-nvidia-cuda-13-rfdetr
- !!merge <<: *rfdetr
name: "cuda13-rfdetr-development"
uri: "quay.io/go-skynet/local-ai-backends:master-gpu-nvidia-cuda-13-rfdetr"
mirrors:
- localai/localai-backends:master-gpu-nvidia-cuda-13-rfdetr
## Rerankers
- !!merge <<: *rerankers
name: "rerankers-development"
@@ -913,6 +1053,7 @@
nvidia: "cuda12-rerankers-development"
intel: "intel-rerankers-development"
amd: "rocm-rerankers-development"
nvidia-cuda-13: "cuda13-rerankers-development"
- !!merge <<: *rerankers
name: "cuda11-rerankers"
uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-nvidia-cuda-11-rerankers"
@@ -953,6 +1094,16 @@
uri: "quay.io/go-skynet/local-ai-backends:master-gpu-intel-rerankers"
mirrors:
- localai/localai-backends:master-gpu-intel-rerankers
- !!merge <<: *rerankers
name: "cuda13-rerankers"
uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-nvidia-cuda-13-rerankers"
mirrors:
- localai/localai-backends:latest-gpu-nvidia-cuda-13-rerankers
- !!merge <<: *rerankers
name: "cuda13-rerankers-development"
uri: "quay.io/go-skynet/local-ai-backends:master-gpu-nvidia-cuda-13-rerankers"
mirrors:
- localai/localai-backends:master-gpu-nvidia-cuda-13-rerankers
## Transformers
- !!merge <<: *transformers
name: "transformers-development"
@@ -960,6 +1111,7 @@
nvidia: "cuda12-transformers-development"
intel: "intel-transformers-development"
amd: "rocm-transformers-development"
nvidia-cuda-13: "cuda13-transformers-development"
- !!merge <<: *transformers
name: "cuda12-transformers"
uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-nvidia-cuda-12-transformers"
@@ -1000,6 +1152,16 @@
uri: "quay.io/go-skynet/local-ai-backends:master-gpu-intel-transformers"
mirrors:
- localai/localai-backends:master-gpu-intel-transformers
- !!merge <<: *transformers
name: "cuda13-transformers"
uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-nvidia-cuda-13-transformers"
mirrors:
- localai/localai-backends:latest-gpu-nvidia-cuda-13-transformers
- !!merge <<: *transformers
name: "cuda13-transformers-development"
uri: "quay.io/go-skynet/local-ai-backends:master-gpu-nvidia-cuda-13-transformers"
mirrors:
- localai/localai-backends:master-gpu-nvidia-cuda-13-transformers
## Diffusers
- !!merge <<: *diffusers
name: "diffusers-development"
@@ -1010,6 +1172,7 @@
nvidia-l4t: "nvidia-l4t-diffusers-development"
metal: "metal-diffusers-development"
default: "cpu-diffusers-development"
nvidia-cuda-13: "cuda13-diffusers-development"
- !!merge <<: *diffusers
name: "cpu-diffusers"
uri: "quay.io/go-skynet/local-ai-backends:latest-cpu-diffusers"
@@ -1022,14 +1185,24 @@
- localai/localai-backends:master-cpu-diffusers
- !!merge <<: *diffusers
name: "nvidia-l4t-diffusers"
uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-nvidia-l4t-diffusers"
uri: "quay.io/go-skynet/local-ai-backends:latest-nvidia-l4t-diffusers"
mirrors:
- localai/localai-backends:latest-gpu-nvidia-l4t-diffusers
- localai/localai-backends:latest-nvidia-l4t-diffusers
- !!merge <<: *diffusers
name: "nvidia-l4t-diffusers-development"
uri: "quay.io/go-skynet/local-ai-backends:master-gpu-nvidia-l4t-diffusers"
uri: "quay.io/go-skynet/local-ai-backends:master-nvidia-l4t-diffusers"
mirrors:
- localai/localai-backends:master-gpu-nvidia-l4t-diffusers
- localai/localai-backends:master-nvidia-l4t-diffusers
- !!merge <<: *diffusers
name: "cuda13-nvidia-l4t-arm64-diffusers"
uri: "quay.io/go-skynet/local-ai-backends:latest-nvidia-l4t-cuda-13-arm64-diffusers"
mirrors:
- localai/localai-backends:latest-nvidia-l4t-cuda-13-arm64-diffusers
- !!merge <<: *diffusers
name: "cuda13-nvidia-l4t-arm64-diffusers-development"
uri: "quay.io/go-skynet/local-ai-backends:master-nvidia-l4t-cuda-13-arm64-diffusers"
mirrors:
- localai/localai-backends:master-nvidia-l4t-cuda-13-arm64-diffusers
- !!merge <<: *diffusers
name: "cuda12-diffusers"
uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-nvidia-cuda-12-diffusers"
@@ -1070,6 +1243,16 @@
uri: "quay.io/go-skynet/local-ai-backends:master-gpu-intel-diffusers"
mirrors:
- localai/localai-backends:master-gpu-intel-diffusers
- !!merge <<: *diffusers
name: "cuda13-diffusers"
uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-nvidia-cuda-13-diffusers"
mirrors:
- localai/localai-backends:latest-gpu-nvidia-cuda-13-diffusers
- !!merge <<: *diffusers
name: "cuda13-diffusers-development"
uri: "quay.io/go-skynet/local-ai-backends:master-gpu-nvidia-cuda-13-diffusers"
mirrors:
- localai/localai-backends:master-gpu-nvidia-cuda-13-diffusers
- !!merge <<: *diffusers
name: "metal-diffusers"
uri: "quay.io/go-skynet/local-ai-backends:latest-metal-darwin-arm64-diffusers"
@@ -1141,14 +1324,14 @@
- localai/localai-backends:master-gpu-intel-kokoro
- !!merge <<: *kokoro
name: "nvidia-l4t-kokoro"
uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-nvidia-l4t-kokoro"
uri: "quay.io/go-skynet/local-ai-backends:latest-nvidia-l4t-kokoro"
mirrors:
- localai/localai-backends:latest-gpu-nvidia-l4t-kokoro
- localai/localai-backends:latest-nvidia-l4t-kokoro
- !!merge <<: *kokoro
name: "nvidia-l4t-kokoro-development"
uri: "quay.io/go-skynet/local-ai-backends:master-gpu-nvidia-l4t-kokoro"
uri: "quay.io/go-skynet/local-ai-backends:master-nvidia-l4t-kokoro"
mirrors:
- localai/localai-backends:master-gpu-nvidia-l4t-kokoro
- localai/localai-backends:master-nvidia-l4t-kokoro
- !!merge <<: *kokoro
name: "cuda11-kokoro"
uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-nvidia-cuda-11-kokoro"
@@ -1164,6 +1347,16 @@
uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-rocm-hipblas-kokoro"
mirrors:
- localai/localai-backends:latest-gpu-rocm-hipblas-kokoro
- !!merge <<: *kokoro
name: "cuda13-kokoro"
uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-nvidia-cuda-13-kokoro"
mirrors:
- localai/localai-backends:latest-gpu-nvidia-cuda-13-kokoro
- !!merge <<: *kokoro
name: "cuda13-kokoro-development"
uri: "quay.io/go-skynet/local-ai-backends:master-gpu-nvidia-cuda-13-kokoro"
mirrors:
- localai/localai-backends:master-gpu-nvidia-cuda-13-kokoro
## faster-whisper
- !!merge <<: *faster-whisper
name: "faster-whisper-development"
@@ -1171,6 +1364,7 @@
nvidia: "cuda12-faster-whisper-development"
intel: "intel-faster-whisper-development"
amd: "rocm-faster-whisper-development"
nvidia-cuda-13: "cuda13-faster-whisper-development"
- !!merge <<: *faster-whisper
name: "cuda11-faster-whisper"
uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-nvidia-cuda-11-faster-whisper"
@@ -1196,6 +1390,16 @@
uri: "quay.io/go-skynet/local-ai-backends:master-gpu-intel-faster-whisper"
mirrors:
- localai/localai-backends:master-gpu-intel-faster-whisper
- !!merge <<: *faster-whisper
name: "cuda13-faster-whisper"
uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-nvidia-cuda-13-faster-whisper"
mirrors:
- localai/localai-backends:latest-gpu-nvidia-cuda-13-faster-whisper
- !!merge <<: *faster-whisper
name: "cuda13-faster-whisper-development"
uri: "quay.io/go-skynet/local-ai-backends:master-gpu-nvidia-cuda-13-faster-whisper"
mirrors:
- localai/localai-backends:master-gpu-nvidia-cuda-13-faster-whisper
## coqui
- !!merge <<: *coqui
@@ -1303,6 +1507,10 @@
metal: "metal-chatterbox-development"
default: "cpu-chatterbox-development"
nvidia-l4t: "nvidia-l4t-arm64-chatterbox"
nvidia-cuda-13: "cuda13-chatterbox-development"
nvidia-cuda-12: "cuda12-chatterbox-development"
nvidia-l4t-cuda-12: "nvidia-l4t-arm64-chatterbox"
nvidia-l4t-cuda-13: "cuda13-nvidia-l4t-arm64-chatterbox"
- !!merge <<: *chatterbox
name: "cpu-chatterbox"
uri: "quay.io/go-skynet/local-ai-backends:latest-cpu-chatterbox"
@@ -1315,14 +1523,14 @@
- localai/localai-backends:master-cpu-chatterbox
- !!merge <<: *chatterbox
name: "nvidia-l4t-arm64-chatterbox"
-uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-nvidia-l4t-arm64-chatterbox"
+uri: "quay.io/go-skynet/local-ai-backends:latest-nvidia-l4t-arm64-chatterbox"
mirrors:
-- localai/localai-backends:latest-gpu-nvidia-l4t-arm64-chatterbox
+- localai/localai-backends:latest-nvidia-l4t-arm64-chatterbox
- !!merge <<: *chatterbox
name: "nvidia-l4t-arm64-chatterbox-development"
-uri: "quay.io/go-skynet/local-ai-backends:master-gpu-nvidia-l4t-arm64-chatterbox"
+uri: "quay.io/go-skynet/local-ai-backends:master-nvidia-l4t-arm64-chatterbox"
mirrors:
-- localai/localai-backends:master-gpu-nvidia-l4t-arm64-chatterbox
+- localai/localai-backends:master-nvidia-l4t-arm64-chatterbox
- !!merge <<: *chatterbox
name: "metal-chatterbox"
uri: "quay.io/go-skynet/local-ai-backends:latest-metal-darwin-arm64-chatterbox"
@@ -1353,3 +1561,106 @@
uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-nvidia-cuda-12-chatterbox"
mirrors:
- localai/localai-backends:latest-gpu-nvidia-cuda-12-chatterbox
- !!merge <<: *chatterbox
name: "cuda13-chatterbox"
uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-nvidia-cuda-13-chatterbox"
mirrors:
- localai/localai-backends:latest-gpu-nvidia-cuda-13-chatterbox
- !!merge <<: *chatterbox
name: "cuda13-chatterbox-development"
uri: "quay.io/go-skynet/local-ai-backends:master-gpu-nvidia-cuda-13-chatterbox"
mirrors:
- localai/localai-backends:master-gpu-nvidia-cuda-13-chatterbox
- !!merge <<: *chatterbox
name: "cuda13-nvidia-l4t-arm64-chatterbox"
uri: "quay.io/go-skynet/local-ai-backends:latest-nvidia-l4t-cuda-13-arm64-chatterbox"
mirrors:
- localai/localai-backends:latest-nvidia-l4t-cuda-13-arm64-chatterbox
- !!merge <<: *chatterbox
name: "cuda13-nvidia-l4t-arm64-chatterbox-development"
uri: "quay.io/go-skynet/local-ai-backends:master-nvidia-l4t-cuda-13-arm64-chatterbox"
mirrors:
- localai/localai-backends:master-nvidia-l4t-cuda-13-arm64-chatterbox
## vibevoice
- !!merge <<: *vibevoice
name: "vibevoice-development"
capabilities:
nvidia: "cuda12-vibevoice-development"
intel: "intel-vibevoice-development"
amd: "rocm-vibevoice-development"
nvidia-l4t: "nvidia-l4t-vibevoice-development"
default: "cpu-vibevoice-development"
nvidia-cuda-13: "cuda13-vibevoice-development"
nvidia-cuda-12: "cuda12-vibevoice-development"
nvidia-l4t-cuda-12: "nvidia-l4t-vibevoice-development"
nvidia-l4t-cuda-13: "cuda13-nvidia-l4t-arm64-vibevoice-development"
- !!merge <<: *vibevoice
name: "cpu-vibevoice"
uri: "quay.io/go-skynet/local-ai-backends:latest-cpu-vibevoice"
mirrors:
- localai/localai-backends:latest-cpu-vibevoice
- !!merge <<: *vibevoice
name: "cpu-vibevoice-development"
uri: "quay.io/go-skynet/local-ai-backends:master-cpu-vibevoice"
mirrors:
- localai/localai-backends:master-cpu-vibevoice
- !!merge <<: *vibevoice
name: "cuda12-vibevoice"
uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-nvidia-cuda-12-vibevoice"
mirrors:
- localai/localai-backends:latest-gpu-nvidia-cuda-12-vibevoice
- !!merge <<: *vibevoice
name: "cuda12-vibevoice-development"
uri: "quay.io/go-skynet/local-ai-backends:master-gpu-nvidia-cuda-12-vibevoice"
mirrors:
- localai/localai-backends:master-gpu-nvidia-cuda-12-vibevoice
- !!merge <<: *vibevoice
name: "cuda13-vibevoice"
uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-nvidia-cuda-13-vibevoice"
mirrors:
- localai/localai-backends:latest-gpu-nvidia-cuda-13-vibevoice
- !!merge <<: *vibevoice
name: "cuda13-vibevoice-development"
uri: "quay.io/go-skynet/local-ai-backends:master-gpu-nvidia-cuda-13-vibevoice"
mirrors:
- localai/localai-backends:master-gpu-nvidia-cuda-13-vibevoice
- !!merge <<: *vibevoice
name: "intel-vibevoice"
uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-intel-vibevoice"
mirrors:
- localai/localai-backends:latest-gpu-intel-vibevoice
- !!merge <<: *vibevoice
name: "intel-vibevoice-development"
uri: "quay.io/go-skynet/local-ai-backends:master-gpu-intel-vibevoice"
mirrors:
- localai/localai-backends:master-gpu-intel-vibevoice
- !!merge <<: *vibevoice
name: "rocm-vibevoice"
uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-rocm-hipblas-vibevoice"
mirrors:
- localai/localai-backends:latest-gpu-rocm-hipblas-vibevoice
- !!merge <<: *vibevoice
name: "rocm-vibevoice-development"
uri: "quay.io/go-skynet/local-ai-backends:master-gpu-rocm-hipblas-vibevoice"
mirrors:
- localai/localai-backends:master-gpu-rocm-hipblas-vibevoice
- !!merge <<: *vibevoice
name: "nvidia-l4t-vibevoice"
uri: "quay.io/go-skynet/local-ai-backends:latest-nvidia-l4t-vibevoice"
mirrors:
- localai/localai-backends:latest-nvidia-l4t-vibevoice
- !!merge <<: *vibevoice
name: "nvidia-l4t-vibevoice-development"
uri: "quay.io/go-skynet/local-ai-backends:master-nvidia-l4t-vibevoice"
mirrors:
- localai/localai-backends:master-nvidia-l4t-vibevoice
- !!merge <<: *vibevoice
name: "cuda13-nvidia-l4t-arm64-vibevoice"
uri: "quay.io/go-skynet/local-ai-backends:latest-nvidia-l4t-cuda-13-arm64-vibevoice"
mirrors:
- localai/localai-backends:latest-nvidia-l4t-cuda-13-arm64-vibevoice
- !!merge <<: *vibevoice
name: "cuda13-nvidia-l4t-arm64-vibevoice-development"
uri: "quay.io/go-skynet/local-ai-backends:master-nvidia-l4t-cuda-13-arm64-vibevoice"
mirrors:
- localai/localai-backends:master-nvidia-l4t-cuda-13-arm64-vibevoice


@@ -0,0 +1,8 @@
--extra-index-url https://download.pytorch.org/whl/cu130
torch
torchaudio
transformers
numpy>=1.24.0,<1.26.0
# https://github.com/mudler/LocalAI/pull/6240#issuecomment-3329518289
chatterbox-tts@git+https://git@github.com/mudler/chatterbox.git@faster
accelerate


@@ -0,0 +1,7 @@
--extra-index-url https://download.pytorch.org/whl/cu130
torch
torchaudio
transformers
numpy>=1.24.0,<1.26.0
chatterbox-tts@git+https://git@github.com/mudler/chatterbox.git@faster
accelerate


@@ -211,7 +211,7 @@ function init() {
# - hipblas
# - intel
function getBuildProfile() {
-if [ x"${BUILD_TYPE:-}" == "xcublas" ]; then
+if [ x"${BUILD_TYPE:-}" == "xcublas" ] || [ x"${BUILD_TYPE:-}" == "xl4t" ]; then
if [ ! -z "${CUDA_MAJOR_VERSION:-}" ]; then
echo ${BUILD_TYPE}${CUDA_MAJOR_VERSION}
else
@@ -237,7 +237,14 @@ function getBuildProfile() {
# Make the venv relocatable:
# - rewrite venv/bin/python{,3} to relative symlinks into $(_portable_dir)
# - normalize entrypoint shebangs to /usr/bin/env python3
# - optionally update pyvenv.cfg to point to the portable Python directory (only at runtime)
# Usage: _makeVenvPortable [--update-pyvenv-cfg]
_makeVenvPortable() {
local update_pyvenv_cfg=false
if [ "${1:-}" = "--update-pyvenv-cfg" ]; then
update_pyvenv_cfg=true
fi
local venv_dir="${EDIR}/venv"
local vbin="${venv_dir}/bin"
@@ -255,7 +262,39 @@ _makeVenvPortable() {
ln -s "${rel_py}" "${vbin}/python3"
ln -s "python3" "${vbin}/python"
-# 2) Rewrite shebangs of entry points to use env, so the venv is relocatable
+# 2) Update pyvenv.cfg to point to the portable Python directory (only at runtime)
# Use absolute path resolved at runtime so it works when the venv is copied
if [ "$update_pyvenv_cfg" = "true" ]; then
local pyvenv_cfg="${venv_dir}/pyvenv.cfg"
if [ -f "${pyvenv_cfg}" ]; then
local portable_dir="$(_portable_dir)"
# Resolve to absolute path - this ensures it works when the backend is copied
# Only resolve if the directory exists (it should if ensurePortablePython was called)
if [ -d "${portable_dir}" ]; then
portable_dir="$(cd "${portable_dir}" && pwd)"
else
# Fallback to relative path if directory doesn't exist yet
portable_dir="../python"
fi
local sed_i=(sed -i)
# macOS/BSD sed needs a backup suffix; GNU sed doesn't. Make it portable:
if sed --version >/dev/null 2>&1; then
sed_i=(sed -i)
else
sed_i=(sed -i '')
fi
# Update the home field in pyvenv.cfg
# Handle both absolute paths (starting with /) and relative paths
if grep -q "^home = " "${pyvenv_cfg}"; then
"${sed_i[@]}" "s|^home = .*|home = ${portable_dir}|" "${pyvenv_cfg}"
else
# If home field doesn't exist, add it
echo "home = ${portable_dir}" >> "${pyvenv_cfg}"
fi
fi
fi
# 3) Rewrite shebangs of entry points to use env, so the venv is relocatable
# Only touch text files that start with #! and reference the current venv.
local ve_abs="${vbin}/python"
local sed_i=(sed -i)
@@ -316,6 +355,7 @@ function ensureVenv() {
fi
fi
if [ "x${PORTABLE_PYTHON}" == "xtrue" ]; then
# During install, only update symlinks and shebangs, not pyvenv.cfg
_makeVenvPortable
fi
fi
@@ -420,6 +460,11 @@ function installRequirements() {
# - ${BACKEND_NAME}.py
function startBackend() {
ensureVenv
# Update pyvenv.cfg before running to ensure paths are correct for current location
# This is critical when the backend position is dynamic (e.g., copied from container)
if [ "x${PORTABLE_PYTHON}" == "xtrue" ] || [ -x "$(_portable_python)" ]; then
_makeVenvPortable --update-pyvenv-cfg
fi
if [ ! -z "${BACKEND_FILE:-}" ]; then
exec "${EDIR}/venv/bin/python" "${BACKEND_FILE}" "$@"
elif [ -e "${MY_DIR}/server.py" ]; then


@@ -1,5 +1,136 @@
-# Creating a separate environment for the diffusers project
+# LocalAI Diffusers Backend
This backend provides gRPC access to Hugging Face diffusers pipelines with dynamic pipeline loading.
## Creating a separate environment for the diffusers project
```
make diffusers
```
## Dynamic Pipeline Loader
The diffusers backend includes a dynamic pipeline loader (`diffusers_dynamic_loader.py`) that automatically discovers and loads diffusers pipelines at runtime. This eliminates the need for per-pipeline conditional statements - new pipelines added to diffusers become available automatically without code changes.
### How It Works
1. **Pipeline Discovery**: On first use, the loader scans the `diffusers` package to find all classes that inherit from `DiffusionPipeline`.
2. **Registry Caching**: Discovery results are cached for the lifetime of the process to avoid repeated scanning.
3. **Task Aliases**: The loader automatically derives task aliases from class names (e.g., "text-to-image", "image-to-image", "inpainting") without hardcoding.
4. **Multiple Resolution Methods**: Pipelines can be resolved by:
- Exact class name (e.g., `StableDiffusionPipeline`)
- Task alias (e.g., `text-to-image`, `img2img`)
- Model ID (uses HuggingFace Hub to infer pipeline type)
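The discovery pass described above amounts to scanning a module's attributes for subclasses of a base class. A minimal stand-alone sketch of that idea, using hypothetical stand-in classes instead of the real `diffusers` package:

```python
import types

# Stand-ins for diffusers classes; the real loader scans the installed
# `diffusers` package for subclasses of DiffusionPipeline.
class DiffusionPipeline: ...
class StableDiffusionPipeline(DiffusionPipeline): ...
class FluxPipeline(DiffusionPipeline): ...

fake_diffusers = types.ModuleType("fake_diffusers")
for cls in (DiffusionPipeline, StableDiffusionPipeline, FluxPipeline):
    setattr(fake_diffusers, cls.__name__, cls)

def discover(module, base):
    """Collect every attribute of `module` that is a proper subclass of `base`."""
    return {
        name: obj
        for name in dir(module)
        if isinstance(obj := getattr(module, name, None), type)
        and issubclass(obj, base)
        and obj is not base
    }

print(sorted(discover(fake_diffusers, DiffusionPipeline)))
# ['FluxPipeline', 'StableDiffusionPipeline']
```

The real registry is built once and cached for the process lifetime, as described in step 2.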
### Usage Examples
```python
from diffusers_dynamic_loader import (
load_diffusers_pipeline,
get_available_pipelines,
get_available_tasks,
resolve_pipeline_class,
discover_diffusers_classes,
get_available_classes,
)
# List all available pipelines
pipelines = get_available_pipelines()
print(f"Available pipelines: {pipelines[:10]}...")
# List all task aliases
tasks = get_available_tasks()
print(f"Available tasks: {tasks}")
# Resolve a pipeline class by name
cls = resolve_pipeline_class(class_name="StableDiffusionPipeline")
# Resolve by task alias
cls = resolve_pipeline_class(task="stable-diffusion")
# Load and instantiate a pipeline
pipe = load_diffusers_pipeline(
class_name="StableDiffusionPipeline",
model_id="runwayml/stable-diffusion-v1-5",
torch_dtype=torch.float16
)
# Load from single file
pipe = load_diffusers_pipeline(
class_name="StableDiffusionPipeline",
model_id="/path/to/model.safetensors",
from_single_file=True,
torch_dtype=torch.float16
)
# Discover other diffusers classes (schedulers, models, etc.)
schedulers = discover_diffusers_classes("SchedulerMixin")
print(f"Available schedulers: {list(schedulers.keys())[:5]}...")
# Get list of available scheduler classes
scheduler_list = get_available_classes("SchedulerMixin")
```
### Generic Class Discovery
The dynamic loader can discover not just pipelines but any class type from diffusers:
```python
# Discover all scheduler classes
schedulers = discover_diffusers_classes("SchedulerMixin")
# Discover all model classes
models = discover_diffusers_classes("ModelMixin")
# Get a sorted list of available classes
scheduler_names = get_available_classes("SchedulerMixin")
```
### Special Pipeline Handling
Most pipelines are loaded dynamically through `load_diffusers_pipeline()`. Only pipelines requiring truly custom initialization logic are handled explicitly:
- `FluxTransformer2DModel`: Requires quantization and custom transformer loading (cannot use dynamic loader)
- `WanPipeline` / `WanImageToVideoPipeline`: Uses dynamic loader with special VAE (float32 dtype)
- `SanaPipeline`: Uses dynamic loader with post-load dtype conversion for VAE/text encoder
- `StableVideoDiffusionPipeline`: Uses dynamic loader with CPU offload handling
- `VideoDiffusionPipeline`: Alias for DiffusionPipeline with video flags
All other pipelines (StableDiffusionPipeline, StableDiffusionXLPipeline, FluxPipeline, etc.) are loaded purely through the dynamic loader.
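The dispatch order above can be sketched as a simple check against the special-case set (class names taken from the list; the actual initialization bodies are omitted for illustration):

```python
# Pipelines that need bespoke initialization (quantization, VAE dtype,
# CPU offload); everything else goes through load_diffusers_pipeline().
SPECIAL_CASES = {
    "FluxTransformer2DModel",
    "WanPipeline",
    "WanImageToVideoPipeline",
    "SanaPipeline",
    "StableVideoDiffusionPipeline",
    "VideoDiffusionPipeline",
}

def choose_loader(pipeline_type: str) -> str:
    """Return which path the backend would take for a given PipelineType."""
    return "custom" if pipeline_type in SPECIAL_CASES else "dynamic"

print(choose_loader("SanaPipeline"))             # custom
print(choose_loader("StableDiffusionPipeline"))  # dynamic
```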
### Error Handling
When a pipeline cannot be resolved, the loader provides helpful error messages listing available pipelines and tasks:
```
ValueError: Unknown pipeline class 'NonExistentPipeline'.
Available pipelines: AnimateDiffPipeline, AnimateDiffVideoToVideoPipeline, ...
```
## Environment Variables
| Variable | Default | Description |
|----------|---------|-------------|
| `COMPEL` | `0` | Enable Compel for prompt weighting |
| `XPU` | `0` | Enable Intel XPU support |
| `CLIPSKIP` | `1` | Enable CLIP skip support |
| `SAFETENSORS` | `1` | Use safetensors format |
| `CHUNK_SIZE` | `8` | Decode chunk size for video |
| `FPS` | `7` | Video frames per second |
| `DISABLE_CPU_OFFLOAD` | `0` | Disable CPU offload |
| `FRAMES` | `64` | Number of video frames |
| `BFL_REPO` | `ChuckMcSneed/FLUX.1-dev` | Flux base repo |
| `PYTHON_GRPC_MAX_WORKERS` | `1` | Max gRPC workers |
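A minimal sketch of how such flags are typically read (an assumed `os.environ` pattern with the defaults from the table above; the backend's actual parsing lives in its startup code):

```python
import os

# "1"/"0" strings toggle booleans; numeric flags are parsed with int().
COMPEL = os.environ.get("COMPEL", "0") == "1"
SAFETENSORS = os.environ.get("SAFETENSORS", "1") == "1"
DISABLE_CPU_OFFLOAD = os.environ.get("DISABLE_CPU_OFFLOAD", "0") == "1"
CHUNK_SIZE = int(os.environ.get("CHUNK_SIZE", "8"))
FPS = int(os.environ.get("FPS", "7"))
FRAMES = int(os.environ.get("FRAMES", "64"))

print(COMPEL, SAFETENSORS, CHUNK_SIZE, FPS, FRAMES)
```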
## Running Tests
```bash
./test.sh
```
The test suite includes:
- Unit tests for the dynamic loader (`test_dynamic_loader.py`)
- Integration tests for the gRPC backend (`test.py`)


@@ -1,4 +1,10 @@
#!/usr/bin/env python3
"""
LocalAI Diffusers Backend
This backend provides gRPC access to diffusers pipelines with dynamic pipeline loading.
New pipelines added to diffusers become available automatically without code changes.
"""
from concurrent import futures
import traceback
import argparse
@@ -17,14 +23,22 @@ import backend_pb2_grpc
import grpc
-from diffusers import SanaPipeline, StableDiffusion3Pipeline, StableDiffusionXLPipeline, StableDiffusionDepth2ImgPipeline, DPMSolverMultistepScheduler, StableDiffusionPipeline, DiffusionPipeline, \
-    EulerAncestralDiscreteScheduler, FluxPipeline, FluxTransformer2DModel, QwenImageEditPipeline, AutoencoderKLWan, WanPipeline, WanImageToVideoPipeline
-from diffusers import StableDiffusionImg2ImgPipeline, AutoPipelineForText2Image, ControlNetModel, StableVideoDiffusionPipeline, Lumina2Text2ImgPipeline
+# Import dynamic loader for pipeline discovery
+from diffusers_dynamic_loader import (
+    get_pipeline_registry,
+    resolve_pipeline_class,
+    get_available_pipelines,
+    load_diffusers_pipeline,
+)
+# Import specific items still needed for special cases and safety checker
+from diffusers import DiffusionPipeline, ControlNetModel
+from diffusers import FluxPipeline, FluxTransformer2DModel, AutoencoderKLWan
from diffusers.pipelines.stable_diffusion import safety_checker
from diffusers.utils import load_image, export_to_video
from compel import Compel, ReturnedEmbeddingsType
from optimum.quanto import freeze, qfloat8, quantize
-from transformers import CLIPTextModel, T5EncoderModel
+from transformers import T5EncoderModel
from safetensors.torch import load_file
_ONE_DAY_IN_SECONDS = 60 * 60 * 24
@@ -158,6 +172,165 @@ def get_scheduler(name: str, config: dict = {}):
# Implement the BackendServicer class with the service methods
class BackendServicer(backend_pb2_grpc.BackendServicer):
def _load_pipeline(self, request, modelFile, fromSingleFile, torchType, variant):
"""
Load a diffusers pipeline dynamically using the dynamic loader.
This method uses load_diffusers_pipeline() for most pipelines, falling back
to explicit handling only for pipelines requiring custom initialization
(e.g., quantization, special VAE handling).
Args:
request: The gRPC request containing pipeline configuration
modelFile: Path to the model file (for single file loading)
fromSingleFile: Whether to use from_single_file() vs from_pretrained()
torchType: The torch dtype to use
variant: Model variant (e.g., "fp16")
Returns:
The loaded pipeline instance
"""
pipeline_type = request.PipelineType
# Handle IMG2IMG request flag with default pipeline
if request.IMG2IMG and pipeline_type == "":
pipeline_type = "StableDiffusionImg2ImgPipeline"
# ================================================================
# Special cases requiring custom initialization logic
# Only handle pipelines that truly need custom code (quantization,
# special VAE handling, etc.). All other pipelines use dynamic loading.
# ================================================================
# FluxTransformer2DModel - requires quantization and custom transformer loading
if pipeline_type == "FluxTransformer2DModel":
dtype = torch.bfloat16
bfl_repo = os.environ.get("BFL_REPO", "ChuckMcSneed/FLUX.1-dev")
transformer = FluxTransformer2DModel.from_single_file(modelFile, torch_dtype=dtype)
quantize(transformer, weights=qfloat8)
freeze(transformer)
text_encoder_2 = T5EncoderModel.from_pretrained(bfl_repo, subfolder="text_encoder_2", torch_dtype=dtype)
quantize(text_encoder_2, weights=qfloat8)
freeze(text_encoder_2)
pipe = FluxPipeline.from_pretrained(bfl_repo, transformer=None, text_encoder_2=None, torch_dtype=dtype)
pipe.transformer = transformer
pipe.text_encoder_2 = text_encoder_2
if request.LowVRAM:
pipe.enable_model_cpu_offload()
return pipe
# WanPipeline - requires special VAE with float32 dtype
if pipeline_type == "WanPipeline":
vae = AutoencoderKLWan.from_pretrained(
request.Model,
subfolder="vae",
torch_dtype=torch.float32
)
pipe = load_diffusers_pipeline(
class_name="WanPipeline",
model_id=request.Model,
vae=vae,
torch_dtype=torchType
)
self.txt2vid = True
return pipe
# WanImageToVideoPipeline - requires special VAE with float32 dtype
if pipeline_type == "WanImageToVideoPipeline":
vae = AutoencoderKLWan.from_pretrained(
request.Model,
subfolder="vae",
torch_dtype=torch.float32
)
pipe = load_diffusers_pipeline(
class_name="WanImageToVideoPipeline",
model_id=request.Model,
vae=vae,
torch_dtype=torchType
)
self.img2vid = True
return pipe
# SanaPipeline - requires special VAE and text encoder dtype conversion
if pipeline_type == "SanaPipeline":
pipe = load_diffusers_pipeline(
class_name="SanaPipeline",
model_id=request.Model,
variant="bf16",
torch_dtype=torch.bfloat16
)
pipe.vae.to(torch.bfloat16)
pipe.text_encoder.to(torch.bfloat16)
return pipe
# VideoDiffusionPipeline - alias for DiffusionPipeline with txt2vid flag
if pipeline_type == "VideoDiffusionPipeline":
self.txt2vid = True
pipe = load_diffusers_pipeline(
class_name="DiffusionPipeline",
model_id=request.Model,
torch_dtype=torchType
)
return pipe
# StableVideoDiffusionPipeline - needs img2vid flag and CPU offload
if pipeline_type == "StableVideoDiffusionPipeline":
self.img2vid = True
pipe = load_diffusers_pipeline(
class_name="StableVideoDiffusionPipeline",
model_id=request.Model,
torch_dtype=torchType,
variant=variant
)
if not DISABLE_CPU_OFFLOAD:
pipe.enable_model_cpu_offload()
return pipe
# ================================================================
# Dynamic pipeline loading - the default path for most pipelines
# Uses the dynamic loader to instantiate any pipeline by class name
# ================================================================
# Build kwargs for dynamic loading
load_kwargs = {"torch_dtype": torchType}
# Add variant if not loading from single file
if not fromSingleFile and variant:
load_kwargs["variant"] = variant
# Add use_safetensors for from_pretrained
if not fromSingleFile:
load_kwargs["use_safetensors"] = SAFETENSORS
# Determine pipeline class name - default to AutoPipelineForText2Image
effective_pipeline_type = pipeline_type if pipeline_type else "AutoPipelineForText2Image"
# Use dynamic loader for all pipelines
try:
pipe = load_diffusers_pipeline(
class_name=effective_pipeline_type,
model_id=modelFile if fromSingleFile else request.Model,
from_single_file=fromSingleFile,
**load_kwargs
)
except Exception as e:
# Provide helpful error with available pipelines
available = get_available_pipelines()
raise ValueError(
f"Failed to load pipeline '{effective_pipeline_type}': {e}\n"
f"Available pipelines: {', '.join(available[:30])}..."
) from e
# Apply LowVRAM optimization if supported and requested
if request.LowVRAM and hasattr(pipe, 'enable_model_cpu_offload'):
pipe.enable_model_cpu_offload()
return pipe
def Health(self, request, context):
return backend_pb2.Reply(message=bytes("OK", 'utf-8'))
@@ -231,139 +404,16 @@ class BackendServicer(backend_pb2_grpc.BackendServicer):
fromSingleFile = request.Model.startswith("http") or request.Model.startswith("/") or local
self.img2vid = False
self.txt2vid = False
## img2img
if (request.PipelineType == "StableDiffusionImg2ImgPipeline") or (request.IMG2IMG and request.PipelineType == ""):
if fromSingleFile:
self.pipe = StableDiffusionImg2ImgPipeline.from_single_file(modelFile,
torch_dtype=torchType)
else:
self.pipe = StableDiffusionImg2ImgPipeline.from_pretrained(request.Model,
torch_dtype=torchType)
elif request.PipelineType == "StableDiffusionDepth2ImgPipeline":
self.pipe = StableDiffusionDepth2ImgPipeline.from_pretrained(request.Model,
torch_dtype=torchType)
## img2vid
elif request.PipelineType == "StableVideoDiffusionPipeline":
self.img2vid = True
self.pipe = StableVideoDiffusionPipeline.from_pretrained(
request.Model, torch_dtype=torchType, variant=variant
)
if not DISABLE_CPU_OFFLOAD:
self.pipe.enable_model_cpu_offload()
## text2img
elif request.PipelineType == "AutoPipelineForText2Image" or request.PipelineType == "":
self.pipe = AutoPipelineForText2Image.from_pretrained(request.Model,
torch_dtype=torchType,
use_safetensors=SAFETENSORS,
variant=variant)
elif request.PipelineType == "StableDiffusionPipeline":
if fromSingleFile:
self.pipe = StableDiffusionPipeline.from_single_file(modelFile,
torch_dtype=torchType)
else:
self.pipe = StableDiffusionPipeline.from_pretrained(request.Model,
torch_dtype=torchType)
elif request.PipelineType == "DiffusionPipeline":
self.pipe = DiffusionPipeline.from_pretrained(request.Model,
torch_dtype=torchType)
elif request.PipelineType == "QwenImageEditPipeline":
self.pipe = QwenImageEditPipeline.from_pretrained(request.Model,
torch_dtype=torchType)
elif request.PipelineType == "VideoDiffusionPipeline":
self.txt2vid = True
self.pipe = DiffusionPipeline.from_pretrained(request.Model,
torch_dtype=torchType)
elif request.PipelineType == "StableDiffusionXLPipeline":
if fromSingleFile:
self.pipe = StableDiffusionXLPipeline.from_single_file(modelFile,
torch_dtype=torchType,
use_safetensors=True)
else:
self.pipe = StableDiffusionXLPipeline.from_pretrained(
request.Model,
torch_dtype=torchType,
use_safetensors=True,
variant=variant)
elif request.PipelineType == "StableDiffusion3Pipeline":
if fromSingleFile:
self.pipe = StableDiffusion3Pipeline.from_single_file(modelFile,
torch_dtype=torchType,
use_safetensors=True)
else:
self.pipe = StableDiffusion3Pipeline.from_pretrained(
request.Model,
torch_dtype=torchType,
use_safetensors=True,
variant=variant)
elif request.PipelineType == "FluxPipeline":
if fromSingleFile:
self.pipe = FluxPipeline.from_single_file(modelFile,
torch_dtype=torchType,
use_safetensors=True)
else:
self.pipe = FluxPipeline.from_pretrained(
request.Model,
torch_dtype=torch.bfloat16)
if request.LowVRAM:
self.pipe.enable_model_cpu_offload()
elif request.PipelineType == "FluxTransformer2DModel":
dtype = torch.bfloat16
# specify from environment or default to "ChuckMcSneed/FLUX.1-dev"
bfl_repo = os.environ.get("BFL_REPO", "ChuckMcSneed/FLUX.1-dev")
transformer = FluxTransformer2DModel.from_single_file(modelFile, torch_dtype=dtype)
quantize(transformer, weights=qfloat8)
freeze(transformer)
text_encoder_2 = T5EncoderModel.from_pretrained(bfl_repo, subfolder="text_encoder_2", torch_dtype=dtype)
quantize(text_encoder_2, weights=qfloat8)
freeze(text_encoder_2)
self.pipe = FluxPipeline.from_pretrained(bfl_repo, transformer=None, text_encoder_2=None, torch_dtype=dtype)
self.pipe.transformer = transformer
self.pipe.text_encoder_2 = text_encoder_2
if request.LowVRAM:
self.pipe.enable_model_cpu_offload()
elif request.PipelineType == "Lumina2Text2ImgPipeline":
self.pipe = Lumina2Text2ImgPipeline.from_pretrained(
request.Model,
torch_dtype=torch.bfloat16)
if request.LowVRAM:
self.pipe.enable_model_cpu_offload()
elif request.PipelineType == "SanaPipeline":
self.pipe = SanaPipeline.from_pretrained(
request.Model,
variant="bf16",
torch_dtype=torch.bfloat16)
self.pipe.vae.to(torch.bfloat16)
self.pipe.text_encoder.to(torch.bfloat16)
elif request.PipelineType == "WanPipeline":
# WAN2.2 pipeline requires special VAE handling
vae = AutoencoderKLWan.from_pretrained(
request.Model,
subfolder="vae",
torch_dtype=torch.float32
)
self.pipe = WanPipeline.from_pretrained(
request.Model,
vae=vae,
torch_dtype=torchType
)
self.txt2vid = True # WAN2.2 is a text-to-video pipeline
elif request.PipelineType == "WanImageToVideoPipeline":
# WAN2.2 image-to-video pipeline
vae = AutoencoderKLWan.from_pretrained(
request.Model,
subfolder="vae",
torch_dtype=torch.float32
)
self.pipe = WanImageToVideoPipeline.from_pretrained(
request.Model,
vae=vae,
torch_dtype=torchType
)
self.img2vid = True # WAN2.2 image-to-video pipeline
# Load pipeline using dynamic loader
# Special cases that require custom initialization are handled first
self.pipe = self._load_pipeline(
request=request,
modelFile=modelFile,
fromSingleFile=fromSingleFile,
torchType=torchType,
variant=variant
)
if CLIPSKIP and request.CLIPSkip != 0:
self.clip_skip = request.CLIPSkip
@@ -501,10 +551,12 @@ class BackendServicer(backend_pb2_grpc.BackendServicer):
# create a dictionary of values for the parameters
options = {
-"negative_prompt": request.negative_prompt,
"num_inference_steps": steps,
}
+if hasattr(request, 'negative_prompt') and request.negative_prompt != "":
+    options["negative_prompt"] = request.negative_prompt
# Handle image source: prioritize RefImages over request.src
image_src = None
if hasattr(request, 'ref_images') and request.ref_images and len(request.ref_images) > 0:
@@ -528,17 +580,7 @@ class BackendServicer(backend_pb2_grpc.BackendServicer):
if CLIPSKIP and self.clip_skip != 0:
options["clip_skip"] = self.clip_skip
-# Get the keys that we will build the args for our pipe for
-keys = options.keys()
-if request.EnableParameters != "":
-    keys = [key.strip() for key in request.EnableParameters.split(",")]
-if request.EnableParameters == "none":
-    keys = []
-# create a dictionary of parameters by using the keys from EnableParameters and the values from defaults
-kwargs = {key: options.get(key) for key in keys if key in options}
+kwargs = {}
+# populate kwargs from self.options.
+kwargs.update(self.options)


@@ -0,0 +1,538 @@
"""
Dynamic Diffusers Pipeline Loader
This module provides dynamic discovery and loading of diffusers pipelines at runtime,
eliminating the need for per-pipeline conditional statements. New pipelines added to
diffusers become available automatically without code changes.
The module also supports discovering other diffusers classes like schedulers, models,
and other components, making it a generic solution for dynamic class loading.
Usage:
from diffusers_dynamic_loader import load_diffusers_pipeline, get_available_pipelines
# Load by class name
pipe = load_diffusers_pipeline(class_name="StableDiffusionPipeline", model_id="...", torch_dtype=torch.float16)
# Load by task alias
pipe = load_diffusers_pipeline(task="text-to-image", model_id="...", torch_dtype=torch.float16)
# Load using model_id (infers from HuggingFace Hub if possible)
pipe = load_diffusers_pipeline(model_id="runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16)
# Get list of available pipelines
available = get_available_pipelines()
# Discover other diffusers classes (schedulers, models, etc.)
schedulers = discover_diffusers_classes("SchedulerMixin")
models = discover_diffusers_classes("ModelMixin")
"""
import importlib
import re
import sys
from typing import Any, Dict, List, Optional, Tuple, Type
# Global cache for discovered pipelines - computed once per process
_pipeline_registry: Optional[Dict[str, Type]] = None
_task_aliases: Optional[Dict[str, List[str]]] = None
# Global cache for other discovered class types
_class_registries: Dict[str, Dict[str, Type]] = {}
def _camel_to_kebab(name: str) -> str:
"""
Convert CamelCase to kebab-case.
Examples:
StableDiffusionPipeline -> stable-diffusion-pipeline
StableDiffusionXLImg2ImgPipeline -> stable-diffusion-xl-img2-img-pipeline
"""
# Insert hyphen before uppercase letters (but not at the start)
s1 = re.sub('(.)([A-Z][a-z]+)', r'\1-\2', name)
# Insert hyphen before uppercase letters following lowercase letters or numbers
s2 = re.sub('([a-z0-9])([A-Z])', r'\1-\2', s1)
return s2.lower()
def _extract_task_keywords(class_name: str) -> List[str]:
"""
Extract task-related keywords from a pipeline class name.
This function derives useful task aliases from the class name without
hardcoding per-pipeline branches.
Returns a list of potential task aliases for this pipeline.
"""
aliases = []
name_lower = class_name.lower()
# Direct task mappings based on common patterns in class names
task_patterns = {
'text2image': ['text-to-image', 'txt2img', 'text2image'],
'texttoimage': ['text-to-image', 'txt2img', 'text2image'],
'txt2img': ['text-to-image', 'txt2img', 'text2image'],
'img2img': ['image-to-image', 'img2img', 'image2image'],
'image2image': ['image-to-image', 'img2img', 'image2image'],
'imagetoimage': ['image-to-image', 'img2img', 'image2image'],
'img2video': ['image-to-video', 'img2vid', 'img2video'],
'imagetovideo': ['image-to-video', 'img2vid', 'img2video'],
'text2video': ['text-to-video', 'txt2vid', 'text2video'],
'texttovideo': ['text-to-video', 'txt2vid', 'text2video'],
'inpaint': ['inpainting', 'inpaint'],
'depth2img': ['depth-to-image', 'depth2img'],
'depthtoimage': ['depth-to-image', 'depth2img'],
'controlnet': ['controlnet', 'control-net'],
'upscale': ['upscaling', 'upscale', 'super-resolution'],
'superresolution': ['upscaling', 'upscale', 'super-resolution'],
}
# Check for each pattern in the class name
for pattern, task_aliases in task_patterns.items():
if pattern in name_lower:
aliases.extend(task_aliases)
# Also detect general pipeline types from the class name structure
# E.g., StableDiffusionPipeline -> stable-diffusion, flux -> flux
# Remove "Pipeline" suffix and convert to kebab case
if class_name.endswith('Pipeline'):
base_name = class_name[:-8] # Remove "Pipeline"
kebab_name = _camel_to_kebab(base_name)
aliases.append(kebab_name)
# Extract model family name (e.g., "stable-diffusion" from "stable-diffusion-xl-img-2-img")
parts = kebab_name.split('-')
if len(parts) >= 2:
# Try the first two words as a family name
family = '-'.join(parts[:2])
if family not in aliases:
aliases.append(family)
# If no specific task pattern matched, no default alias is added: most
# diffusion pipelines do support text-to-image, but defaulting would also
# mislabel non-generation classes (schedulers, processors, encoders), so
# task aliases are left explicit.
return list(set(aliases)) # Remove duplicates
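The two-pass regex conversion used by `_camel_to_kebab` can be sketched standalone. Note the first substitution is an assumption here, since only the second pass appears above; it follows the common CamelCase-splitting idiom:

```python
import re

def camel_to_kebab(name: str) -> str:
    # First pass (assumed): split an uppercase-led word from what precedes it,
    # e.g. "StableDiffusion" -> "Stable-Diffusion"
    s1 = re.sub('(.)([A-Z][a-z]+)', r'\1-\2', name)
    # Second pass (as above): hyphen before an uppercase letter that follows
    # a lowercase letter or digit, then lowercase everything
    return re.sub('([a-z0-9])([A-Z])', r'\1-\2', s1).lower()

print(camel_to_kebab("StableDiffusionXLPipeline"))  # stable-diffusion-xl-pipeline
```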
def discover_diffusers_classes(
base_class_name: str,
include_base: bool = True
) -> Dict[str, Type]:
"""
Discover all subclasses of a given base class from diffusers.
This function provides a generic way to discover any type of diffusers class,
not just pipelines. It can be used to discover schedulers, models, processors,
and other components.
Args:
base_class_name: Name of the base class to search for subclasses
(e.g., "DiffusionPipeline", "SchedulerMixin", "ModelMixin")
include_base: Whether to include the base class itself in results
Returns:
Dict mapping class names to class objects
Examples:
# Discover all pipeline classes
pipelines = discover_diffusers_classes("DiffusionPipeline")
# Discover all scheduler classes
schedulers = discover_diffusers_classes("SchedulerMixin")
# Discover all model classes
models = discover_diffusers_classes("ModelMixin")
# Discover AutoPipeline classes
auto_pipelines = discover_diffusers_classes("AutoPipelineForText2Image")
"""
global _class_registries
# Check cache first
if base_class_name in _class_registries:
return _class_registries[base_class_name]
import diffusers
# Try to get the base class from diffusers
base_class = None
try:
base_class = getattr(diffusers, base_class_name)
except AttributeError:
# Try to find in submodules
for submodule in ['schedulers', 'models', 'pipelines']:
try:
module = importlib.import_module(f'diffusers.{submodule}')
if hasattr(module, base_class_name):
base_class = getattr(module, base_class_name)
break
except (ImportError, ModuleNotFoundError):
continue
if base_class is None:
raise ValueError(f"Could not find base class '{base_class_name}' in diffusers")
registry: Dict[str, Type] = {}
# Include base class if requested
if include_base:
registry[base_class_name] = base_class
# Scan diffusers module for subclasses
for attr_name in dir(diffusers):
try:
attr = getattr(diffusers, attr_name)
if (isinstance(attr, type) and
issubclass(attr, base_class) and
(include_base or attr is not base_class)):
registry[attr_name] = attr
except (ImportError, AttributeError, TypeError, RuntimeError, ModuleNotFoundError):
continue
# Cache the results
_class_registries[base_class_name] = registry
return registry
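The `dir()`-scan at the heart of `discover_diffusers_classes` can be illustrated against a toy namespace rather than the real diffusers package; all class names below are hypothetical stand-ins:

```python
import types

# Hypothetical class hierarchy standing in for diffusers classes
class Base: ...
class TextToImage(Base): ...
class ImageToImage(Base): ...
class Unrelated: ...

ns = types.SimpleNamespace(Base=Base, TextToImage=TextToImage,
                           ImageToImage=ImageToImage, Unrelated=Unrelated)

def discover(namespace, base, include_base=True):
    # Same pattern as above: scan attributes, keep subclasses of the base
    registry = {}
    for attr_name in dir(namespace):
        attr = getattr(namespace, attr_name)
        if (isinstance(attr, type) and issubclass(attr, base)
                and (include_base or attr is not base)):
            registry[attr_name] = attr
    return registry

print(sorted(discover(ns, Base)))  # ['Base', 'ImageToImage', 'TextToImage']
```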
def get_available_classes(base_class_name: str) -> List[str]:
"""
Get a sorted list of all discovered class names for a given base class.
Args:
base_class_name: Name of the base class (e.g., "SchedulerMixin")
Returns:
Sorted list of discovered class names
"""
return sorted(discover_diffusers_classes(base_class_name).keys())
def _discover_pipelines() -> Tuple[Dict[str, Type], Dict[str, List[str]]]:
"""
Discover all subclasses of DiffusionPipeline from diffusers.
This function uses the generic discover_diffusers_classes() internally
and adds pipeline-specific task alias generation. It also includes
AutoPipeline classes which are special utility classes for automatic
pipeline selection.
Returns:
A tuple of (pipeline_registry, task_aliases) where:
- pipeline_registry: Dict mapping class names to class objects
- task_aliases: Dict mapping task aliases to lists of class names
"""
# Use the generic discovery function
pipeline_registry = discover_diffusers_classes("DiffusionPipeline", include_base=True)
# Also add AutoPipeline classes - these are special utility classes that are
# NOT subclasses of DiffusionPipeline but are commonly used
import diffusers
auto_pipeline_classes = [
"AutoPipelineForText2Image",
"AutoPipelineForImage2Image",
"AutoPipelineForInpainting",
]
for cls_name in auto_pipeline_classes:
try:
cls = getattr(diffusers, cls_name)
if cls is not None:
pipeline_registry[cls_name] = cls
except AttributeError:
# Class not available in this version of diffusers
pass
# Generate task aliases for pipelines
task_aliases: Dict[str, List[str]] = {}
for attr_name in pipeline_registry:
if attr_name == "DiffusionPipeline":
continue # Skip base class for alias generation
aliases = _extract_task_keywords(attr_name)
for alias in aliases:
if alias not in task_aliases:
task_aliases[alias] = []
if attr_name not in task_aliases[alias]:
task_aliases[alias].append(attr_name)
return pipeline_registry, task_aliases
def get_pipeline_registry() -> Dict[str, Type]:
"""
Get the cached pipeline registry.
Returns a dictionary mapping pipeline class names to their class objects.
The registry is built on first access and cached for subsequent calls.
"""
global _pipeline_registry, _task_aliases
if _pipeline_registry is None:
_pipeline_registry, _task_aliases = _discover_pipelines()
return _pipeline_registry
def get_task_aliases() -> Dict[str, List[str]]:
"""
Get the cached task aliases dictionary.
Returns a dictionary mapping task aliases (e.g., "text-to-image") to
lists of pipeline class names that support that task.
"""
global _pipeline_registry, _task_aliases
if _task_aliases is None:
_pipeline_registry, _task_aliases = _discover_pipelines()
return _task_aliases
def get_available_pipelines() -> List[str]:
"""
Get a sorted list of all discovered pipeline class names.
Returns:
List of pipeline class names available for loading.
"""
return sorted(get_pipeline_registry().keys())
def get_available_tasks() -> List[str]:
"""
Get a sorted list of all available task aliases.
Returns:
List of task aliases (e.g., ["text-to-image", "image-to-image", ...])
"""
return sorted(get_task_aliases().keys())
def resolve_pipeline_class(
class_name: Optional[str] = None,
task: Optional[str] = None,
model_id: Optional[str] = None
) -> Type:
"""
Resolve a pipeline class from class_name, task, or model_id.
Priority:
1. If class_name is provided, look it up directly
2. If task is provided, resolve through task aliases
3. If model_id is provided, try to infer from HuggingFace Hub
Args:
class_name: Exact pipeline class name (e.g., "StableDiffusionPipeline")
task: Task alias (e.g., "text-to-image", "img2img")
model_id: HuggingFace model ID (e.g., "runwayml/stable-diffusion-v1-5")
Returns:
The resolved pipeline class.
Raises:
ValueError: If no pipeline could be resolved.
"""
registry = get_pipeline_registry()
aliases = get_task_aliases()
# 1. Direct class name lookup
if class_name:
if class_name in registry:
return registry[class_name]
# Try case-insensitive match
for name, cls in registry.items():
if name.lower() == class_name.lower():
return cls
raise ValueError(
f"Unknown pipeline class '{class_name}'. "
f"Available pipelines: {', '.join(sorted(registry.keys())[:20])}..."
)
# 2. Task alias lookup
if task:
task_lower = task.lower().replace('_', '-')
if task_lower in aliases:
# Return the first matching pipeline for this task
matching_classes = aliases[task_lower]
if matching_classes:
return registry[matching_classes[0]]
# Try partial matching
for alias, classes in aliases.items():
if task_lower in alias or alias in task_lower:
if classes:
return registry[classes[0]]
raise ValueError(
f"Unknown task '{task}'. "
f"Available tasks: {', '.join(sorted(aliases.keys())[:20])}..."
)
# 3. Try to infer from HuggingFace Hub
if model_id:
try:
from huggingface_hub import model_info
info = model_info(model_id)
# Check pipeline_tag
if hasattr(info, 'pipeline_tag') and info.pipeline_tag:
tag = info.pipeline_tag.lower().replace('_', '-')
if tag in aliases:
matching_classes = aliases[tag]
if matching_classes:
return registry[matching_classes[0]]
# Check model card for hints
if hasattr(info, 'cardData') and info.cardData:
card = info.cardData
if 'pipeline_tag' in card:
tag = card['pipeline_tag'].lower().replace('_', '-')
if tag in aliases:
matching_classes = aliases[tag]
if matching_classes:
return registry[matching_classes[0]]
except ImportError:
# huggingface_hub not available
pass
except (KeyError, AttributeError, ValueError, OSError):
# Model info lookup failed - common cases:
# - KeyError: Missing keys in model card
# - AttributeError: Missing attributes on model info
# - ValueError: Invalid model data
# - OSError: Network or file access issues
pass
# Fallback: use DiffusionPipeline.from_pretrained which auto-detects
# DiffusionPipeline is always added to registry in _discover_pipelines (line 132)
# but use .get() with import fallback for extra safety
from diffusers import DiffusionPipeline
return registry.get('DiffusionPipeline', DiffusionPipeline)
raise ValueError(
"Must provide at least one of: class_name, task, or model_id. "
f"Available pipelines: {', '.join(sorted(registry.keys())[:20])}... "
f"Available tasks: {', '.join(sorted(aliases.keys())[:20])}..."
)
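The three-tier resolution order above (exact or case-insensitive class name, then task alias, then hub lookup with a `DiffusionPipeline` fallback) reduces to a small sketch with strings standing in for pipeline classes; the registry contents here are toy data:

```python
# Toy registry and aliases standing in for the cached discovery results
registry = {"StableDiffusionPipeline": "sd", "DiffusionPipeline": "base"}
aliases = {"text-to-image": ["StableDiffusionPipeline"]}

def resolve(class_name=None, task=None, model_id=None):
    # 1. Direct (case-insensitive) class name lookup
    if class_name:
        for name, cls in registry.items():
            if name.lower() == class_name.lower():
                return cls
        raise ValueError(f"Unknown pipeline class {class_name!r}")
    # 2. Task alias lookup, normalizing underscores to hyphens
    if task:
        key = task.lower().replace('_', '-')
        if key in aliases:
            return registry[aliases[key][0]]
        raise ValueError(f"Unknown task {task!r}")
    # 3. model_id only: fall back to the auto-detecting base pipeline
    if model_id:
        return registry["DiffusionPipeline"]
    raise ValueError("Must provide class_name, task, or model_id")

assert resolve(class_name="stablediffusionpipeline") == "sd"
assert resolve(task="text_to_image") == "sd"
assert resolve(model_id="org/model") == "base"
```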
def load_diffusers_pipeline(
class_name: Optional[str] = None,
task: Optional[str] = None,
model_id: Optional[str] = None,
from_single_file: bool = False,
**kwargs
) -> Any:
"""
Load a diffusers pipeline dynamically.
This function resolves the appropriate pipeline class based on the provided
parameters and instantiates it with the given kwargs.
Args:
class_name: Exact pipeline class name (e.g., "StableDiffusionPipeline")
task: Task alias (e.g., "text-to-image", "img2img")
model_id: HuggingFace model ID or local path
from_single_file: If True, use from_single_file() instead of from_pretrained()
**kwargs: Additional arguments passed to from_pretrained() or from_single_file()
Returns:
An instantiated pipeline object.
Raises:
ValueError: If no pipeline could be resolved.
Exception: If pipeline loading fails.
Examples:
# Load by class name
pipe = load_diffusers_pipeline(
class_name="StableDiffusionPipeline",
model_id="runwayml/stable-diffusion-v1-5",
torch_dtype=torch.float16
)
# Load by task
pipe = load_diffusers_pipeline(
task="text-to-image",
model_id="runwayml/stable-diffusion-v1-5",
torch_dtype=torch.float16
)
# Load from single file
pipe = load_diffusers_pipeline(
class_name="StableDiffusionPipeline",
model_id="/path/to/model.safetensors",
from_single_file=True,
torch_dtype=torch.float16
)
"""
# Resolve the pipeline class
pipeline_class = resolve_pipeline_class(
class_name=class_name,
task=task,
model_id=model_id
)
# If no model_id provided but we have a class, we can't load
if model_id is None:
raise ValueError("model_id is required to load a pipeline")
# Load the pipeline
try:
if from_single_file:
# Check if the class has from_single_file method
if hasattr(pipeline_class, 'from_single_file'):
return pipeline_class.from_single_file(model_id, **kwargs)
else:
raise ValueError(
f"Pipeline class {pipeline_class.__name__} does not support from_single_file(). "
f"Use from_pretrained() instead."
)
else:
return pipeline_class.from_pretrained(model_id, **kwargs)
except Exception as e:
# Provide helpful error message
available = get_available_pipelines()
raise RuntimeError(
f"Failed to load pipeline '{pipeline_class.__name__}' from '{model_id}': {e}\n"
f"Available pipelines: {', '.join(available[:20])}..."
) from e
def get_pipeline_info(class_name: str) -> Dict[str, Any]:
"""
Get information about a specific pipeline class.
Args:
class_name: The pipeline class name
Returns:
Dictionary with pipeline information including:
- name: Class name
- aliases: List of task aliases
- supports_single_file: Whether from_single_file() is available
- docstring: Class docstring (if available)
"""
registry = get_pipeline_registry()
aliases = get_task_aliases()
if class_name not in registry:
raise ValueError(f"Unknown pipeline: {class_name}")
cls = registry[class_name]
# Find all aliases for this pipeline
pipeline_aliases = []
for alias, classes in aliases.items():
if class_name in classes:
pipeline_aliases.append(alias)
return {
'name': class_name,
'aliases': pipeline_aliases,
'supports_single_file': hasattr(cls, 'from_single_file'),
'docstring': cls.__doc__[:200] if cls.__doc__ else None
}


@@ -16,4 +16,11 @@ if [ "x${BUILD_PROFILE}" == "xintel" ]; then
EXTRA_PIP_INSTALL_FLAGS+=" --upgrade --index-strategy=unsafe-first-match"
fi
# Use python 3.12 for l4t
if [ "x${BUILD_PROFILE}" == "xl4t12" ] || [ "x${BUILD_PROFILE}" == "xl4t13" ]; then
PYTHON_VERSION="3.12"
PYTHON_PATCH="12"
PY_STANDALONE_TAG="20251120"
fi
installRequirements


@@ -0,0 +1,12 @@
--extra-index-url https://download.pytorch.org/whl/cu130
git+https://github.com/huggingface/diffusers
opencv-python
transformers
torchvision
accelerate
compel
peft
sentencepiece
torch
ftfy
optimum-quanto


@@ -1,12 +0,0 @@
--extra-index-url https://pypi.jetson-ai-lab.io/jp6/cu126/
torch
diffusers
transformers
accelerate
compel
peft
optimum-quanto
numpy<2
sentencepiece
torchvision
ftfy


@@ -0,0 +1,12 @@
--extra-index-url https://pypi.jetson-ai-lab.io/jp6/cu129/
torch
git+https://github.com/huggingface/diffusers
transformers
accelerate
compel
peft
optimum-quanto
numpy<2
sentencepiece
torchvision
ftfy


@@ -0,0 +1,13 @@
--extra-index-url https://download.pytorch.org/whl/cu130
torch
git+https://github.com/huggingface/diffusers
transformers
accelerate
compel
peft
optimum-quanto
numpy<2
sentencepiece
torchvision
ftfy
chardet


@@ -1,15 +1,26 @@
"""
A test script to test the gRPC service
A test script to test the gRPC service and dynamic loader
"""
import unittest
import subprocess
import time
import backend_pb2
import backend_pb2_grpc
from unittest.mock import patch, MagicMock
import grpc
# Import dynamic loader for testing (these don't need gRPC)
import diffusers_dynamic_loader as loader
from diffusers import DiffusionPipeline, StableDiffusionPipeline
# Try to import gRPC modules - may not be available during unit testing
try:
import grpc
import backend_pb2
import backend_pb2_grpc
GRPC_AVAILABLE = True
except ImportError:
GRPC_AVAILABLE = False
@unittest.skipUnless(GRPC_AVAILABLE, "gRPC modules not available")
class TestBackendServicer(unittest.TestCase):
"""
TestBackendServicer is the class that tests the gRPC service
@@ -82,3 +93,222 @@ class TestBackendServicer(unittest.TestCase):
self.fail("Image gen service failed")
finally:
self.tearDown()
class TestDiffusersDynamicLoader(unittest.TestCase):
"""Test cases for the diffusers dynamic loader functionality."""
@classmethod
def setUpClass(cls):
"""Set up test fixtures - clear caches to ensure fresh discovery."""
# Reset the caches to ensure fresh discovery
loader._pipeline_registry = None
loader._task_aliases = None
def test_camel_to_kebab_conversion(self):
"""Test CamelCase to kebab-case conversion."""
test_cases = [
("StableDiffusionPipeline", "stable-diffusion-pipeline"),
("StableDiffusionXLPipeline", "stable-diffusion-xl-pipeline"),
("FluxPipeline", "flux-pipeline"),
("DiffusionPipeline", "diffusion-pipeline"),
]
for input_val, expected in test_cases:
with self.subTest(input=input_val):
result = loader._camel_to_kebab(input_val)
self.assertEqual(result, expected)
def test_extract_task_keywords(self):
"""Test task keyword extraction from class names."""
# Test text-to-image detection
aliases = loader._extract_task_keywords("StableDiffusionPipeline")
self.assertIn("stable-diffusion", aliases)
# Test img2img detection
aliases = loader._extract_task_keywords("StableDiffusionImg2ImgPipeline")
self.assertIn("image-to-image", aliases)
self.assertIn("img2img", aliases)
# Test inpainting detection
aliases = loader._extract_task_keywords("StableDiffusionInpaintPipeline")
self.assertIn("inpainting", aliases)
self.assertIn("inpaint", aliases)
# Test depth2img detection
aliases = loader._extract_task_keywords("StableDiffusionDepth2ImgPipeline")
self.assertIn("depth-to-image", aliases)
def test_discover_pipelines_finds_known_classes(self):
"""Test that pipeline discovery finds at least one known pipeline class."""
registry = loader.get_pipeline_registry()
# Check that the registry is not empty
self.assertGreater(len(registry), 0, "Pipeline registry should not be empty")
# Check for known pipeline classes
known_pipelines = [
"StableDiffusionPipeline",
"DiffusionPipeline",
]
for pipeline_name in known_pipelines:
with self.subTest(pipeline=pipeline_name):
self.assertIn(
pipeline_name,
registry,
f"Expected to find {pipeline_name} in registry"
)
def test_discover_pipelines_caches_results(self):
"""Test that pipeline discovery results are cached."""
# Get registry twice
registry1 = loader.get_pipeline_registry()
registry2 = loader.get_pipeline_registry()
# Should be the same object (cached)
self.assertIs(registry1, registry2, "Registry should be cached")
def test_get_available_pipelines(self):
"""Test getting list of available pipelines."""
available = loader.get_available_pipelines()
# Should return a list
self.assertIsInstance(available, list)
# Should contain known pipelines
self.assertIn("StableDiffusionPipeline", available)
self.assertIn("DiffusionPipeline", available)
# Should be sorted
self.assertEqual(available, sorted(available))
def test_get_available_tasks(self):
"""Test getting list of available task aliases."""
tasks = loader.get_available_tasks()
# Should return a list
self.assertIsInstance(tasks, list)
# Should be sorted
self.assertEqual(tasks, sorted(tasks))
def test_resolve_pipeline_class_by_name(self):
"""Test resolving pipeline class by exact name."""
cls = loader.resolve_pipeline_class(class_name="StableDiffusionPipeline")
self.assertEqual(cls, StableDiffusionPipeline)
def test_resolve_pipeline_class_by_name_case_insensitive(self):
"""Test that class name resolution is case-insensitive."""
cls1 = loader.resolve_pipeline_class(class_name="StableDiffusionPipeline")
cls2 = loader.resolve_pipeline_class(class_name="stablediffusionpipeline")
self.assertEqual(cls1, cls2)
def test_resolve_pipeline_class_by_task(self):
"""Test resolving pipeline class by task alias."""
# Get the registry to find available tasks
aliases = loader.get_task_aliases()
# Test with a common task that should be available
if "stable-diffusion" in aliases:
cls = loader.resolve_pipeline_class(task="stable-diffusion")
self.assertIsNotNone(cls)
def test_resolve_pipeline_class_unknown_name_raises(self):
"""Test that resolving unknown class name raises ValueError with helpful message."""
with self.assertRaises(ValueError) as ctx:
loader.resolve_pipeline_class(class_name="NonExistentPipeline")
# Check that error message includes available pipelines
error_msg = str(ctx.exception)
self.assertIn("Unknown pipeline class", error_msg)
self.assertIn("Available pipelines", error_msg)
def test_resolve_pipeline_class_unknown_task_raises(self):
"""Test that resolving unknown task raises ValueError with helpful message."""
with self.assertRaises(ValueError) as ctx:
loader.resolve_pipeline_class(task="nonexistent-task-xyz")
# Check that error message includes available tasks
error_msg = str(ctx.exception)
self.assertIn("Unknown task", error_msg)
self.assertIn("Available tasks", error_msg)
def test_resolve_pipeline_class_no_params_raises(self):
"""Test that calling with no parameters raises helpful ValueError."""
with self.assertRaises(ValueError) as ctx:
loader.resolve_pipeline_class()
error_msg = str(ctx.exception)
self.assertIn("Must provide at least one of", error_msg)
def test_get_pipeline_info(self):
"""Test getting pipeline information."""
info = loader.get_pipeline_info("StableDiffusionPipeline")
self.assertEqual(info['name'], "StableDiffusionPipeline")
self.assertIsInstance(info['aliases'], list)
self.assertIsInstance(info['supports_single_file'], bool)
def test_get_pipeline_info_unknown_raises(self):
"""Test that getting info for unknown pipeline raises ValueError."""
with self.assertRaises(ValueError) as ctx:
loader.get_pipeline_info("NonExistentPipeline")
self.assertIn("Unknown pipeline", str(ctx.exception))
def test_discover_diffusers_classes_pipelines(self):
"""Test generic class discovery for DiffusionPipeline."""
classes = loader.discover_diffusers_classes("DiffusionPipeline")
# Should return a dict
self.assertIsInstance(classes, dict)
# Should contain known pipeline classes
self.assertIn("DiffusionPipeline", classes)
self.assertIn("StableDiffusionPipeline", classes)
def test_discover_diffusers_classes_caches_results(self):
"""Test that class discovery results are cached."""
classes1 = loader.discover_diffusers_classes("DiffusionPipeline")
classes2 = loader.discover_diffusers_classes("DiffusionPipeline")
# Should be the same object (cached)
self.assertIs(classes1, classes2)
def test_discover_diffusers_classes_exclude_base(self):
"""Test discovering classes without base class."""
classes = loader.discover_diffusers_classes("DiffusionPipeline", include_base=False)
# Should still contain subclasses
self.assertIn("StableDiffusionPipeline", classes)
def test_get_available_classes(self):
"""Test getting list of available classes for a base class."""
classes = loader.get_available_classes("DiffusionPipeline")
# Should return a sorted list
self.assertIsInstance(classes, list)
self.assertEqual(classes, sorted(classes))
# Should contain known classes
self.assertIn("StableDiffusionPipeline", classes)
class TestDiffusersDynamicLoaderWithMocks(unittest.TestCase):
"""Test cases using mocks to test edge cases."""
def test_load_pipeline_requires_model_id(self):
"""Test that load_diffusers_pipeline requires model_id."""
with self.assertRaises(ValueError) as ctx:
loader.load_diffusers_pipeline(class_name="StableDiffusionPipeline")
self.assertIn("model_id is required", str(ctx.exception))
def test_resolve_with_model_id_uses_diffusion_pipeline_fallback(self):
"""Test that resolving with only model_id falls back to DiffusionPipeline."""
# When model_id is provided, if hub lookup is not successful,
# should fall back to DiffusionPipeline.
# This tests the fallback behavior - the actual hub lookup may succeed
# or fail depending on network, but the fallback path should work.
cls = loader.resolve_pipeline_class(model_id="some/nonexistent/model")
self.assertEqual(cls, DiffusionPipeline)


@@ -0,0 +1,9 @@
--extra-index-url https://download.pytorch.org/whl/cu130
torch==2.9.1
faster-whisper
opencv-python
accelerate
compel
peft
sentencepiece
optimum-quanto


@@ -0,0 +1,7 @@
--extra-index-url https://download.pytorch.org/whl/cu130
torch==2.9.1
torchaudio==2.9.1
transformers
accelerate
kokoro
soundfile


@@ -14,11 +14,13 @@ import backend_pb2_grpc
import grpc
from mlx_lm import load, generate, stream_generate
from mlx_lm.sample_utils import make_sampler
from mlx_lm.models.cache import make_prompt_cache
from mlx_lm.models.cache import make_prompt_cache, can_trim_prompt_cache, trim_prompt_cache
import mlx.core as mx
import base64
import io
from mlx_cache import ThreadSafeLRUPromptCache
_ONE_DAY_IN_SECONDS = 60 * 60 * 24
# If MAX_WORKERS are specified in the environment use it, otherwise default to 1
@@ -118,10 +120,16 @@ class BackendServicer(backend_pb2_grpc.BackendServicer):
self.model, self.tokenizer = load(request.Model, tokenizer_config=tokenizer_config)
else:
self.model, self.tokenizer = load(request.Model)
# Initialize prompt cache for efficient generation
max_kv_size = self.options.get("max_kv_size", None)
self.prompt_cache = make_prompt_cache(self.model, max_kv_size)
# Initialize thread-safe LRU prompt cache for efficient generation
max_cache_entries = self.options.get("max_cache_entries", 10)
self.max_kv_size = self.options.get("max_kv_size", None)
self.model_key = request.Model
self.lru_cache = ThreadSafeLRUPromptCache(
max_size=max_cache_entries,
can_trim_fn=can_trim_prompt_cache,
trim_fn=trim_prompt_cache,
)
except Exception as err:
print(f"Error loading MLX model {err=}, {type(err)=}", file=sys.stderr)
@@ -134,6 +142,8 @@ class BackendServicer(backend_pb2_grpc.BackendServicer):
"""
Generates text based on the given prompt and sampling parameters using MLX.
Uses thread-safe LRU prompt cache for efficient prefix reuse across requests.
Args:
request: The predict request.
context: The gRPC context.
@@ -141,31 +151,48 @@ class BackendServicer(backend_pb2_grpc.BackendServicer):
Returns:
backend_pb2.Reply: The predict result.
"""
prompt_cache = None
cache_key = None
try:
# Prepare the prompt
prompt = self._prepare_prompt(request)
# Prepare the prompt and tokenize for cache key
prompt_text = self._prepare_prompt(request)
cache_key = self._get_tokens_from_prompt(prompt_text)
# Fetch nearest cache (exact, shorter prefix, or create new)
prompt_cache, remaining_tokens = self.lru_cache.fetch_nearest_cache(
self.model_key, cache_key
)
if prompt_cache is None:
prompt_cache = make_prompt_cache(self.model, self.max_kv_size)
remaining_tokens = cache_key
# Build generation parameters using request attributes and options
max_tokens, sampler_params = self._build_generation_params(request)
print(f"Generating text with MLX - max_tokens: {max_tokens}, sampler_params: {sampler_params}", file=sys.stderr)
print(f"Generating text with MLX - max_tokens: {max_tokens}, cache_hit: {len(remaining_tokens) < len(cache_key)}", file=sys.stderr)
# Create sampler with parameters
sampler = make_sampler(**sampler_params)
# Generate text using MLX with proper parameters
response = generate(
# Use stream_generate to track generated tokens for cache key
generated_text = []
for response in stream_generate(
self.model,
self.tokenizer,
prompt=prompt,
prompt=remaining_tokens if remaining_tokens else cache_key,
max_tokens=max_tokens,
sampler=sampler,
prompt_cache=self.prompt_cache,
verbose=False
)
return backend_pb2.Reply(message=bytes(response, encoding='utf-8'))
prompt_cache=prompt_cache,
):
generated_text.append(response.text)
cache_key.append(response.token)
# Insert completed cache
self.lru_cache.insert_cache(self.model_key, cache_key, prompt_cache)
return backend_pb2.Reply(message=bytes(''.join(generated_text), encoding='utf-8'))
except Exception as e:
print(f"Error in MLX Predict: {e}", file=sys.stderr)
context.set_code(grpc.StatusCode.INTERNAL)
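The prefix-reuse idea behind `fetch_nearest_cache` — feed the model only the tokens not already covered by a cached prefix — reduces to a small comparison. This is a simplified sketch; the real cache matches prefixes via a trie and handles trimming:

```python
def remaining_after_cache(prompt_tokens, cached_prefix):
    # Count how many leading tokens the cached prefix already covers,
    # then return only the uncovered tail for the forward pass.
    n = 0
    while (n < len(cached_prefix) and n < len(prompt_tokens)
           and cached_prefix[n] == prompt_tokens[n]):
        n += 1
    return prompt_tokens[n:]

assert remaining_after_cache([1, 2, 3, 4], [1, 2]) == [3, 4]
# A mismatching cache covers nothing: the full prompt must be processed
assert remaining_after_cache([1, 2], [9]) == [1, 2]
```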
@@ -194,6 +221,8 @@ class BackendServicer(backend_pb2_grpc.BackendServicer):
"""
Generates text based on the given prompt and sampling parameters, and streams the results using MLX.
Uses thread-safe LRU prompt cache for efficient prefix reuse across requests.
Args:
request: The predict stream request.
context: The gRPC context.
@@ -201,35 +230,56 @@ class BackendServicer(backend_pb2_grpc.BackendServicer):
Yields:
backend_pb2.Reply: Streaming predict results.
"""
prompt_cache = None
cache_key = None
try:
# Prepare the prompt
prompt = self._prepare_prompt(request)
# Prepare the prompt and tokenize for cache key
prompt_text = self._prepare_prompt(request)
cache_key = self._get_tokens_from_prompt(prompt_text)
# Fetch nearest cache (exact, shorter prefix, or create new)
prompt_cache, remaining_tokens = self.lru_cache.fetch_nearest_cache(
self.model_key, cache_key
)
if prompt_cache is None:
prompt_cache = make_prompt_cache(self.model, self.max_kv_size)
remaining_tokens = cache_key
# Build generation parameters using request attributes and options
max_tokens, sampler_params = self._build_generation_params(request, default_max_tokens=512)
print(f"Streaming text with MLX - max_tokens: {max_tokens}, sampler_params: {sampler_params}", file=sys.stderr)
print(f"Streaming text with MLX - max_tokens: {max_tokens}, cache_hit: {len(remaining_tokens) < len(cache_key)}", file=sys.stderr)
# Create sampler with parameters
sampler = make_sampler(**sampler_params)
# Stream text generation using MLX with proper parameters
for response in stream_generate(
self.model,
self.tokenizer,
prompt=prompt,
prompt=remaining_tokens if remaining_tokens else cache_key,
max_tokens=max_tokens,
sampler=sampler,
prompt_cache=self.prompt_cache,
prompt_cache=prompt_cache,
):
cache_key.append(response.token)
yield backend_pb2.Reply(message=bytes(response.text, encoding='utf-8'))
except Exception as e:
print(f"Error in MLX PredictStream: {e}", file=sys.stderr)
context.set_code(grpc.StatusCode.INTERNAL)
context.set_details(f"Streaming generation failed: {str(e)}")
yield backend_pb2.Reply(message=bytes("", encoding='utf-8'))
finally:
# Always insert cache, even on interruption
if prompt_cache is not None and cache_key is not None:
try:
self.lru_cache.insert_cache(self.model_key, cache_key, prompt_cache)
except Exception as e:
print(f"Error inserting cache: {e}", file=sys.stderr)
def _prepare_prompt(self, request):
"""
Prepare the prompt for MLX generation, handling chat templates if needed.
@@ -246,16 +296,31 @@ class BackendServicer(backend_pb2_grpc.BackendServicer):
messages = []
for msg in request.Messages:
messages.append({"role": msg.role, "content": msg.content})
prompt = self.tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
return prompt
else:
return request.Prompt
def _get_tokens_from_prompt(self, prompt_text: str) -> List[int]:
"""
Tokenize prompt text for cache key generation.
Args:
prompt_text: The prompt string to tokenize.
Returns:
List[int]: List of token IDs.
"""
tokens = self.tokenizer.encode(prompt_text)
if hasattr(tokens, 'tolist'):
return tokens.tolist()
return list(tokens)
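The `.tolist()` check above normalizes encoders that return array-likes as well as plain sequences to a plain `List[int]`. A minimal sketch, using a hypothetical stand-in array type in place of a real tensor:

```python
class FakeArray:
    """Hypothetical array-like exposing .tolist(), like a tensor would."""
    def __init__(self, data):
        self._data = data

    def tolist(self):
        return list(self._data)

def to_token_list(tokens):
    # Same normalization as _get_tokens_from_prompt: prefer .tolist(),
    # otherwise coerce any sequence to a list
    if hasattr(tokens, 'tolist'):
        return tokens.tolist()
    return list(tokens)

assert to_token_list(FakeArray((1, 2, 3))) == [1, 2, 3]
assert to_token_list((4, 5)) == [4, 5]
```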
@@ -284,11 +349,19 @@ class BackendServicer(backend_pb2_grpc.BackendServicer):
top_p = getattr(request, 'TopP', 0.0)
if top_p == 0.0:
top_p = 1.0 # Default top_p
min_p = getattr(request, 'MinP', 0.0)
# min_p default of 0.0 means disabled (no filtering)
top_k = getattr(request, 'TopK', 0)
# top_k default of 0 means disabled (no filtering)
# Initialize sampler parameters
sampler_params = {
'temp': temp,
'top_p': top_p,
'min_p': min_p,
'top_k': top_k,
'xtc_threshold': 0.0,
'xtc_probability': 0.0,
}
@@ -308,7 +381,9 @@ class BackendServicer(backend_pb2_grpc.BackendServicer):
sampler_option_mapping = {
'temp': 'temp',
'temperature': 'temp', # alias
'top_p': 'top_p',
'min_p': 'min_p',
'top_k': 'top_k',
'xtc_threshold': 'xtc_threshold',
'xtc_probability': 'xtc_probability',
}


@@ -0,0 +1,266 @@
"""
Thread-safe LRU prompt cache for MLX-based backends.
Ported from mlx_lm/server.py (MIT License, Copyright 2023-2024 Apple Inc.)
with thread-safety additions for LocalAI's gRPC backend.
Usage:
from mlx_cache import ThreadSafeLRUPromptCache
# In LoadModel:
self.lru_cache = ThreadSafeLRUPromptCache(max_size=10)
# In Predict/PredictStream:
prompt_cache, remaining_tokens = self.lru_cache.fetch_nearest_cache(model_key, tokens)
# ... generate ...
self.lru_cache.insert_cache(model_key, tokens, prompt_cache)
"""
import copy
import threading
from collections import deque
from dataclasses import dataclass
from typing import Any, List, Optional, Tuple
@dataclass
class CacheEntry:
"""A cache entry with reference counting."""
prompt_cache: List[Any]
count: int
@dataclass
class SearchResult:
"""Result of searching the cache trie."""
model: Any
exact: Optional[List[int]]
shorter: Optional[List[int]]
longer: Optional[List[int]]
common_prefix: int
class ThreadSafeLRUPromptCache:
"""
Thread-safe LRU cache with prefix matching for prompt KV caches.
This cache stores KV caches keyed by token sequences and supports:
- Exact match: Return the cache for the exact token sequence
- Shorter prefix match: Return a cache for a prefix of the tokens
- Longer prefix match: If a longer sequence is cached and can be trimmed
- LRU eviction: When max_size is exceeded, evict least recently used
Thread safety is provided via a threading.Lock that protects all
cache operations.
Args:
max_size: Maximum number of cache entries (default: 10)
can_trim_fn: Optional function to check if a cache can be trimmed
trim_fn: Optional function to trim a cache
"""
def __init__(
self,
max_size: int = 10,
can_trim_fn: Optional[Any] = None,
trim_fn: Optional[Any] = None,
):
self.max_size = max_size
self._cache = {}
self._lru = deque()
self._lock = threading.Lock()
# Optional trim functions (for longer prefix reuse)
self._can_trim_fn = can_trim_fn
self._trim_fn = trim_fn
def _search(self, model, tokens: List[int]) -> SearchResult:
"""
Search the cache for a prompt cache. Return exact or close match.
The cache is organized as a trie where each node is keyed by a token.
This allows efficient prefix matching.
"""
if model not in self._cache:
return SearchResult(model, None, None, None, 0)
current = self._cache[model]
last_cache_index = -1
index = 0
# Traverse the trie following the token sequence
while index < len(tokens) and tokens[index] in current:
current = current[tokens[index]]
if "cache" in current:
last_cache_index = index
index += 1
# Exact match - no need to search for longer or shorter caches
if last_cache_index == len(tokens) - 1:
return SearchResult(model, tuple(tokens), None, None, 0)
# Find the shorter cache (a prefix that has a cache)
# Note: Uses > 0 (not >= 0) to match upstream mlx_lm/server.py behavior.
# Single-token prefixes are not matched, which allows longer cached
# sequences to be preferred for trimming. This is acceptable because
# real prompts with chat templates are always many tokens.
shorter = None
if last_cache_index > 0:
shorter = tuple(tokens[: last_cache_index + 1])
# Check for caches that are longer than our token sequence
longer = None
common_prefix = index
if index > 0 and last_cache_index <= 0:
best = None
stack = [(current, [])]
while stack:
current, extra = stack.pop()
if "cache" in current:
if best is None or len(extra) < len(best):
best = extra
else:
for tok in current:
stack.append((current[tok], extra + [tok]))
if best is not None:
longer = tuple(tokens[:index] + best)
return SearchResult(model, None, shorter, longer, common_prefix)
def _get(self, model, tokens: Tuple[int, ...]) -> CacheEntry:
"""Get a cache entry by traversing the trie."""
current = self._cache[model]
for tok in tokens:
current = current[tok]
return current["cache"]
def _delete(self, model, tokens: Tuple[int, ...]) -> None:
"""Delete a cache entry and clean up empty trie nodes."""
path = [self._cache[model]]
for tok in tokens:
path.append(path[-1][tok])
del path[-1]["cache"]
# Clean up empty nodes bottom-up
for i in reversed(range(len(tokens))):
d_prev, d, t = path[i], path[i + 1], tokens[i]
if len(d) > 0:
break
del d_prev[t]
def _extract(self, model, tokens: Tuple[int, ...]) -> CacheEntry:
"""
Extract a cache entry for exclusive use.
If the entry has count > 1, deep copy and decrement.
If count == 1, remove from cache entirely.
"""
cache_entry = self._get(model, tokens)
if cache_entry.count == 1:
self._delete(model, tokens)
self._lru.remove((model, tokens))
return cache_entry
cache_entry.count -= 1
return CacheEntry(
copy.deepcopy(cache_entry.prompt_cache),
1,
)
def fetch_nearest_cache(
self, model, tokens: List[int]
) -> Tuple[Optional[List[Any]], List[int]]:
"""
Fetch the nearest cache for the given token sequence.
Thread-safe. Returns (cache, remaining_tokens) where:
- cache: The KV cache to use (or None if no cache found)
- remaining_tokens: Tokens that still need to be processed
Args:
model: Model identifier (used to namespace caches)
tokens: The full token sequence for the prompt
Returns:
Tuple of (prompt_cache, remaining_tokens)
"""
with self._lock:
result = self._search(model, tokens)
# Exact match - extract and return
if result.exact is not None:
cache_entry = self._extract(result.model, result.exact)
return cache_entry.prompt_cache, []
# Shorter prefix match - extract and return remaining
if result.shorter is not None:
cache_entry = self._extract(result.model, result.shorter)
prefix_len = len(result.shorter)
return cache_entry.prompt_cache, list(tokens[prefix_len:])
# Longer prefix match - try to trim if possible
if result.longer is not None and self._can_trim_fn is not None:
cache_entry = self._get(result.model, result.longer)
if self._can_trim_fn(cache_entry.prompt_cache):
# Deep copy and trim
trimmed_cache = copy.deepcopy(cache_entry.prompt_cache)
prefix = min(len(tokens) - 1, result.common_prefix)
num_to_trim = len(result.longer) - prefix
if self._trim_fn is not None:
self._trim_fn(trimmed_cache, num_to_trim)
return trimmed_cache, list(tokens[prefix:])
# No match found
return None, list(tokens)
def insert_cache(
self, model, tokens: List[int], prompt_cache: List[Any]
) -> None:
"""
Insert a cache entry after generation completes.
Thread-safe. Handles LRU eviction if max_size is exceeded.
Args:
model: Model identifier (used to namespace caches)
tokens: The full token sequence (prompt + generated)
prompt_cache: The KV cache to store
"""
with self._lock:
tokens_tuple = tuple(tokens)
if model not in self._cache:
self._cache[model] = {}
current = self._cache[model]
# Build trie path
for tok in tokens_tuple:
if tok not in current:
current[tok] = {}
current = current[tok]
# Update or create entry
if "cache" in current:
current["cache"].count += 1
self._lru.remove((model, tokens_tuple))
else:
current["cache"] = CacheEntry(prompt_cache, 1)
# Update LRU order
self._lru.append((model, tokens_tuple))
# Evict if over capacity
if len(self._lru) > self.max_size:
evict_model, evict_tokens = self._lru.popleft()
self._delete(evict_model, evict_tokens)
def clear(self) -> None:
"""Clear all cache entries. Thread-safe."""
with self._lock:
self._cache.clear()
self._lru.clear()
def __len__(self) -> int:
"""Return the number of cache entries. Thread-safe."""
with self._lock:
return len(self._lru)
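The trie walk at the heart of `_search` above can be illustrated with a minimal standalone sketch. This is an illustrative simplification (no locking, LRU bookkeeping, or reference counts); `longest_cached_prefix` and `insert` are hypothetical helpers, not methods of the class, but they use the same node layout: a dict mapping token to child dict, with a `"cache"` key marking the end of a cached sequence.

```python
def longest_cached_prefix(trie, tokens):
    """Return (prefix_length, cache) for the longest cached prefix of tokens."""
    best_len, best_cache = 0, None
    node = trie
    for i, tok in enumerate(tokens):
        if tok not in node:
            break
        node = node[tok]
        if "cache" in node:
            # Remember the deepest node that actually ends a cached sequence
            best_len, best_cache = i + 1, node["cache"]
    return best_len, best_cache

def insert(trie, tokens, cache):
    node = trie
    for tok in tokens:
        node = node.setdefault(tok, {})
    node["cache"] = cache

trie = {}
insert(trie, [1, 2, 3], "kv-A")
insert(trie, [1, 2, 3, 4, 5], "kv-B")

print(longest_cached_prefix(trie, [1, 2, 3, 4, 9]))  # (3, 'kv-A')
print(longest_cached_prefix(trie, [1, 2, 3, 4, 5]))  # (5, 'kv-B')
print(longest_cached_prefix(trie, [7, 8]))           # (0, None)
```

Because lookup follows one trie path, a request that diverges mid-sequence (the `[1, 2, 3, 4, 9]` case) still recovers the deepest cached prefix it passed through.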


@@ -1,17 +1,10 @@
import unittest
import subprocess
import time
import backend_pb2
import backend_pb2_grpc
import grpc
import unittest
import subprocess
import time
import grpc
import backend_pb2_grpc
import backend_pb2
import backend_pb2_grpc
class TestBackendServicer(unittest.TestCase):
"""
@@ -47,9 +40,9 @@ class TestBackendServicer(unittest.TestCase):
self.setUp()
with grpc.insecure_channel("localhost:50051") as channel:
stub = backend_pb2_grpc.BackendStub(channel)
response = stub.LoadModel(backend_pb2.ModelOptions(Model="facebook/opt-125m"))
response = stub.LoadModel(backend_pb2.ModelOptions(Model="mlx-community/Llama-3.2-1B-Instruct-4bit"))
self.assertTrue(response.success)
self.assertEqual(response.message, "Model loaded successfully")
self.assertEqual(response.message, "MLX model loaded successfully")
except Exception as err:
print(err)
self.fail("LoadModel service failed")
@@ -64,7 +57,7 @@ class TestBackendServicer(unittest.TestCase):
self.setUp()
with grpc.insecure_channel("localhost:50051") as channel:
stub = backend_pb2_grpc.BackendStub(channel)
response = stub.LoadModel(backend_pb2.ModelOptions(Model="facebook/opt-125m"))
response = stub.LoadModel(backend_pb2.ModelOptions(Model="mlx-community/Llama-3.2-1B-Instruct-4bit"))
self.assertTrue(response.success)
req = backend_pb2.PredictOptions(Prompt="The capital of France is")
resp = stub.Predict(req)
@@ -84,7 +77,7 @@ class TestBackendServicer(unittest.TestCase):
self.setUp()
with grpc.insecure_channel("localhost:50051") as channel:
stub = backend_pb2_grpc.BackendStub(channel)
response = stub.LoadModel(backend_pb2.ModelOptions(Model="facebook/opt-125m"))
response = stub.LoadModel(backend_pb2.ModelOptions(Model="mlx-community/Llama-3.2-1B-Instruct-4bit"))
self.assertTrue(response.success)
req = backend_pb2.PredictOptions(
@@ -95,26 +88,13 @@ class TestBackendServicer(unittest.TestCase):
TopK=40,
PresencePenalty=0.1,
FrequencyPenalty=0.2,
RepetitionPenalty=1.1,
MinP=0.05,
Seed=42,
StopPrompts=["\n"],
StopTokenIds=[50256],
BadWords=["badword"],
IncludeStopStrInOutput=True,
IgnoreEOS=True,
MinTokens=5,
Logprobs=5,
PromptLogprobs=5,
SkipSpecialTokens=True,
SpacesBetweenSpecialTokens=True,
TruncatePromptTokens=10,
GuidedDecoding=True,
N=2,
)
resp = stub.Predict(req)
self.assertIsNotNone(resp.message)
self.assertIsNotNone(resp.logprobs)
except Exception as err:
print(err)
self.fail("sampling params service failed")
@@ -143,4 +123,112 @@ class TestBackendServicer(unittest.TestCase):
print(err)
self.fail("Embedding service failed")
finally:
self.tearDown()
self.tearDown()
def test_concurrent_requests(self):
"""
This method tests that concurrent requests don't corrupt each other's cache state.
This is a regression test for the race condition in the original implementation.
"""
import concurrent.futures
try:
self.setUp()
with grpc.insecure_channel("localhost:50051") as channel:
stub = backend_pb2_grpc.BackendStub(channel)
response = stub.LoadModel(backend_pb2.ModelOptions(Model="mlx-community/Llama-3.2-1B-Instruct-4bit"))
self.assertTrue(response.success)
def make_request(prompt):
req = backend_pb2.PredictOptions(Prompt=prompt, Tokens=20)
return stub.Predict(req)
# Run 5 concurrent requests with different prompts
prompts = [
"The capital of France is",
"The capital of Germany is",
"The capital of Italy is",
"The capital of Spain is",
"The capital of Portugal is",
]
with concurrent.futures.ThreadPoolExecutor(max_workers=5) as executor:
futures = [executor.submit(make_request, p) for p in prompts]
results = [f.result() for f in concurrent.futures.as_completed(futures)]
# All results should be non-empty
messages = [r.message for r in results]
self.assertTrue(all(len(m) > 0 for m in messages), "All requests should return non-empty responses")
print(f"Concurrent test passed: {len(messages)} responses received")
except Exception as err:
print(err)
self.fail("Concurrent requests test failed")
finally:
self.tearDown()
def test_cache_reuse(self):
"""
This method tests that repeated prompts reuse cached KV states.
The second request should benefit from the cached prompt processing.
"""
try:
self.setUp()
with grpc.insecure_channel("localhost:50051") as channel:
stub = backend_pb2_grpc.BackendStub(channel)
response = stub.LoadModel(backend_pb2.ModelOptions(Model="mlx-community/Llama-3.2-1B-Instruct-4bit"))
self.assertTrue(response.success)
prompt = "The quick brown fox jumps over the lazy dog. "
# First request - populates cache
req1 = backend_pb2.PredictOptions(Prompt=prompt, Tokens=10)
resp1 = stub.Predict(req1)
self.assertIsNotNone(resp1.message)
# Second request with same prompt - should reuse cache
req2 = backend_pb2.PredictOptions(Prompt=prompt, Tokens=10)
resp2 = stub.Predict(req2)
self.assertIsNotNone(resp2.message)
print(f"Cache reuse test passed: first={len(resp1.message)} bytes, second={len(resp2.message)} bytes")
except Exception as err:
print(err)
self.fail("Cache reuse test failed")
finally:
self.tearDown()
def test_prefix_cache_reuse(self):
"""
This method tests that prompts sharing a common prefix benefit from cached KV states.
"""
try:
self.setUp()
with grpc.insecure_channel("localhost:50051") as channel:
stub = backend_pb2_grpc.BackendStub(channel)
response = stub.LoadModel(backend_pb2.ModelOptions(Model="mlx-community/Llama-3.2-1B-Instruct-4bit"))
self.assertTrue(response.success)
# First request with base prompt
prompt_base = "Once upon a time in a land far away, "
req1 = backend_pb2.PredictOptions(Prompt=prompt_base, Tokens=10)
resp1 = stub.Predict(req1)
self.assertIsNotNone(resp1.message)
# Second request with extended prompt (same prefix)
prompt_extended = prompt_base + "there lived a brave knight who "
req2 = backend_pb2.PredictOptions(Prompt=prompt_extended, Tokens=10)
resp2 = stub.Predict(req2)
self.assertIsNotNone(resp2.message)
print(f"Prefix cache test passed: base={len(resp1.message)} bytes, extended={len(resp2.message)} bytes")
except Exception as err:
print(err)
self.fail("Prefix cache reuse test failed")
finally:
self.tearDown()
# Unit tests for ThreadSafeLRUPromptCache are in test_mlx_cache.py
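The cache-reuse behavior that `test_cache_reuse` and `test_prefix_cache_reuse` check end-to-end can be pictured with a toy stand-in (no MLX, no gRPC; `serve` is a hypothetical helper, not the real backend). It counts how many tokens each request actually has to process after consulting the cache:

```python
cache = {}  # full token tuple -> opaque KV state

def serve(tokens):
    """Return the number of tokens processed after cache lookup."""
    # Find the longest cached prefix of the request.
    best = ()
    for key in cache:
        if tokens[: len(key)] == list(key) and len(key) > len(best):
            best = key
    remaining = tokens[len(best):]
    # Store the full sequence for future requests.
    cache[tuple(tokens)] = "kv"
    return len(remaining)

base = [1, 2, 3, 4, 5]
print(serve(base))           # 5  (cold: all tokens processed)
print(serve(base))           # 0  (exact hit: nothing left to process)
print(serve(base + [6, 7]))  # 2  (prefix hit: only the new suffix)
```

The second identical request and the extended-prefix request both skip the shared prompt work, which is exactly the speedup the gRPC-level tests are probing for.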


@@ -0,0 +1,480 @@
"""
Comprehensive unit tests for ThreadSafeLRUPromptCache.
Tests all cache operation modes:
- Exact match
- Shorter prefix match
- Longer prefix match (with trimming)
- No match
- LRU eviction
- Reference counting
- Multi-model namespacing
- Thread safety with data integrity verification
"""
import unittest
import concurrent.futures
import threading
import copy
from mlx_cache import ThreadSafeLRUPromptCache
class TestCacheExactMatch(unittest.TestCase):
"""Tests for exact match cache behavior."""
def setUp(self):
self.cache = ThreadSafeLRUPromptCache(max_size=10)
def test_exact_match_returns_cache_and_empty_remaining(self):
"""Exact match should return the cache with no remaining tokens."""
tokens = [1, 2, 3, 4, 5]
mock_cache = ["kv_cache_data"]
self.cache.insert_cache("model1", tokens, mock_cache)
result_cache, remaining = self.cache.fetch_nearest_cache("model1", tokens)
self.assertEqual(result_cache, mock_cache)
self.assertEqual(remaining, [])
def test_exact_match_extracts_and_removes_from_cache(self):
"""Fetching exact match with count=1 should remove entry from cache."""
tokens = [1, 2, 3]
self.cache.insert_cache("model1", tokens, ["cache"])
self.assertEqual(len(self.cache), 1)
# First fetch extracts the entry
self.cache.fetch_nearest_cache("model1", tokens)
# Cache should now be empty
self.assertEqual(len(self.cache), 0)
# Second fetch should return None (no match)
result_cache, remaining = self.cache.fetch_nearest_cache("model1", tokens)
self.assertIsNone(result_cache)
self.assertEqual(remaining, tokens)
class TestCacheShorterPrefix(unittest.TestCase):
"""Tests for shorter prefix match behavior."""
def setUp(self):
self.cache = ThreadSafeLRUPromptCache(max_size=10)
def test_shorter_prefix_returns_cache_with_remaining_tokens(self):
"""When cached prefix is shorter, return cache and remaining suffix."""
short_tokens = [1, 2, 3]
long_tokens = [1, 2, 3, 4, 5, 6]
mock_cache = ["prefix_cache"]
self.cache.insert_cache("model1", short_tokens, mock_cache)
result_cache, remaining = self.cache.fetch_nearest_cache("model1", long_tokens)
self.assertEqual(result_cache, mock_cache)
self.assertEqual(remaining, [4, 5, 6])
def test_shorter_prefix_correct_remaining_calculation(self):
"""Verify remaining tokens are calculated correctly for various prefix lengths."""
# Note: Single-token prefixes ([1] -> [1,2,3]) are deliberately not matched
# to allow longer cached sequences to be preferred for trimming.
# This matches upstream mlx_lm/server.py behavior.
test_cases = [
# (cached_tokens, requested_tokens, expected_remaining)
([1, 2], [1, 2, 3, 4, 5], [3, 4, 5]),
([10, 20, 30, 40], [10, 20, 30, 40, 50], [50]),
]
for cached, requested, expected_remaining in test_cases:
with self.subTest(cached=cached, requested=requested):
cache = ThreadSafeLRUPromptCache(max_size=10)
cache.insert_cache("model", cached, ["cache"])
result_cache, remaining = cache.fetch_nearest_cache("model", requested)
self.assertIsNotNone(result_cache)
self.assertEqual(remaining, expected_remaining)
def test_single_token_prefix_not_matched(self):
"""Single-token prefixes are not matched (by design, matches upstream).
This allows longer cached sequences to be preferred for trimming,
which provides better KV cache reuse. Single-token caches are rare
in practice since real prompts with chat templates are many tokens.
"""
cache = ThreadSafeLRUPromptCache(max_size=10)
cache.insert_cache("model", [1], ["cache"])
result_cache, remaining = cache.fetch_nearest_cache("model", [1, 2, 3])
# Single-token prefix is NOT matched
self.assertIsNone(result_cache)
self.assertEqual(remaining, [1, 2, 3])
class TestCacheLongerPrefix(unittest.TestCase):
"""Tests for longer prefix match behavior (trimming)."""
def setUp(self):
# Track trim calls for verification
self.trim_calls = []
def mock_can_trim(cache):
return True
def mock_trim(cache, num_to_trim):
self.trim_calls.append(num_to_trim)
# Simulate trimming by modifying the cache
cache.append(f"trimmed_{num_to_trim}")
self.cache = ThreadSafeLRUPromptCache(
max_size=10,
can_trim_fn=mock_can_trim,
trim_fn=mock_trim,
)
def test_longer_prefix_triggers_trim(self):
"""When cached sequence is longer, should trim to match requested prefix."""
long_tokens = [1, 2, 3, 4, 5]
short_tokens = [1, 2, 3]
self.cache.insert_cache("model1", long_tokens, ["original_cache"])
result_cache, remaining = self.cache.fetch_nearest_cache("model1", short_tokens)
# Should have called trim
self.assertTrue(len(self.trim_calls) > 0, "trim_fn should have been called")
# Result should be a trimmed copy, not the original
self.assertIn("trimmed_", str(result_cache))
def test_longer_prefix_without_trim_fn_returns_no_match(self):
"""Without trim functions, longer prefix should not match."""
cache_no_trim = ThreadSafeLRUPromptCache(max_size=10)
long_tokens = [1, 2, 3, 4, 5]
short_tokens = [1, 2, 3]
cache_no_trim.insert_cache("model1", long_tokens, ["cache"])
result_cache, remaining = cache_no_trim.fetch_nearest_cache("model1", short_tokens)
# Without trim_fn, should return no match
self.assertIsNone(result_cache)
self.assertEqual(remaining, short_tokens)
def test_longer_prefix_can_trim_false_returns_no_match(self):
"""When can_trim_fn returns False, should not attempt trim."""
cache = ThreadSafeLRUPromptCache(
max_size=10,
can_trim_fn=lambda c: False,
trim_fn=lambda c, n: None,
)
cache.insert_cache("model1", [1, 2, 3, 4, 5], ["cache"])
result_cache, remaining = cache.fetch_nearest_cache("model1", [1, 2, 3])
self.assertIsNone(result_cache)
self.assertEqual(remaining, [1, 2, 3])
class TestCacheNoMatch(unittest.TestCase):
"""Tests for no match behavior."""
def setUp(self):
self.cache = ThreadSafeLRUPromptCache(max_size=10)
def test_empty_cache_returns_none(self):
"""Empty cache should return None and all tokens as remaining."""
tokens = [1, 2, 3]
result_cache, remaining = self.cache.fetch_nearest_cache("model1", tokens)
self.assertIsNone(result_cache)
self.assertEqual(remaining, tokens)
def test_different_prefix_returns_none(self):
"""Tokens with different prefix should not match."""
self.cache.insert_cache("model1", [1, 2, 3], ["cache"])
# Completely different tokens
result_cache, remaining = self.cache.fetch_nearest_cache("model1", [4, 5, 6])
self.assertIsNone(result_cache)
self.assertEqual(remaining, [4, 5, 6])
def test_partial_prefix_mismatch_returns_none(self):
"""Tokens that diverge mid-sequence should not match."""
self.cache.insert_cache("model1", [1, 2, 3], ["cache"])
# Same start but diverges
result_cache, remaining = self.cache.fetch_nearest_cache("model1", [1, 2, 99])
self.assertIsNone(result_cache)
self.assertEqual(remaining, [1, 2, 99])
def test_wrong_model_returns_none(self):
"""Different model key should not match."""
self.cache.insert_cache("model1", [1, 2, 3], ["cache"])
result_cache, remaining = self.cache.fetch_nearest_cache("model2", [1, 2, 3])
self.assertIsNone(result_cache)
self.assertEqual(remaining, [1, 2, 3])
class TestCacheLRUEviction(unittest.TestCase):
"""Tests for LRU eviction behavior."""
def setUp(self):
self.cache = ThreadSafeLRUPromptCache(max_size=3)
def test_evicts_oldest_when_full(self):
"""Should evict least recently used entry when capacity exceeded."""
self.cache.insert_cache("model", [1], ["cache1"])
self.cache.insert_cache("model", [2], ["cache2"])
self.cache.insert_cache("model", [3], ["cache3"])
self.assertEqual(len(self.cache), 3)
# Insert 4th entry - should evict [1]
self.cache.insert_cache("model", [4], ["cache4"])
self.assertEqual(len(self.cache), 3)
# [1] should be evicted
result, _ = self.cache.fetch_nearest_cache("model", [1])
self.assertIsNone(result)
# [2], [3], [4] should still exist
for tokens in [[2], [3], [4]]:
# Re-insert since fetch extracts
self.cache.insert_cache("model", tokens, [f"cache{tokens[0]}"])
result2, _ = self.cache.fetch_nearest_cache("model", [2])
self.assertIsNotNone(result2)
def test_access_updates_lru_order(self):
"""Accessing an entry should move it to most recently used."""
self.cache.insert_cache("model", [1], ["cache1"])
self.cache.insert_cache("model", [2], ["cache2"])
self.cache.insert_cache("model", [3], ["cache3"])
# Access [1] to make it most recently used
cache1, _ = self.cache.fetch_nearest_cache("model", [1])
# Re-insert it (simulating normal usage pattern)
self.cache.insert_cache("model", [1], cache1)
# Now insert two more entries - should evict [2] then [3], not [1]
self.cache.insert_cache("model", [4], ["cache4"])
self.cache.insert_cache("model", [5], ["cache5"])
# [1] should still exist (was accessed, so not evicted)
result1, _ = self.cache.fetch_nearest_cache("model", [1])
self.assertIsNotNone(result1)
# [2] should be evicted (was oldest after [1] was accessed)
result2, _ = self.cache.fetch_nearest_cache("model", [2])
self.assertIsNone(result2)
class TestCacheReferenceCount(unittest.TestCase):
"""Tests for reference counting behavior."""
def setUp(self):
self.cache = ThreadSafeLRUPromptCache(max_size=10)
def test_multiple_inserts_increment_count(self):
"""Inserting same tokens multiple times should increment count."""
tokens = [1, 2, 3]
self.cache.insert_cache("model", tokens, ["cache"])
self.cache.insert_cache("model", tokens, ["cache"])
self.cache.insert_cache("model", tokens, ["cache"])
# Should still be one entry (with count=3 internally)
self.assertEqual(len(self.cache), 1)
# First two fetches should return copies (count decremented)
result1, _ = self.cache.fetch_nearest_cache("model", tokens)
self.assertIsNotNone(result1)
result2, _ = self.cache.fetch_nearest_cache("model", tokens)
self.assertIsNotNone(result2)
# Third fetch extracts the last reference
result3, _ = self.cache.fetch_nearest_cache("model", tokens)
self.assertIsNotNone(result3)
# Fourth fetch should return None (entry fully extracted)
result4, _ = self.cache.fetch_nearest_cache("model", tokens)
self.assertIsNone(result4)
def test_extract_with_high_count_returns_deep_copy(self):
"""When count > 1, extract should return a deep copy."""
tokens = [1, 2, 3]
original_cache = [{"nested": "data"}]
self.cache.insert_cache("model", tokens, original_cache)
self.cache.insert_cache("model", tokens, original_cache) # count=2
result1, _ = self.cache.fetch_nearest_cache("model", tokens)
# Modify the returned cache
result1[0]["nested"] = "modified"
# Second fetch should get unmodified copy
result2, _ = self.cache.fetch_nearest_cache("model", tokens)
self.assertEqual(result2[0]["nested"], "data")
class TestCacheMultiModel(unittest.TestCase):
"""Tests for multi-model namespacing."""
def setUp(self):
self.cache = ThreadSafeLRUPromptCache(max_size=10)
def test_same_tokens_different_models_are_separate(self):
"""Same token sequence under different models should be independent."""
tokens = [1, 2, 3]
self.cache.insert_cache("model_a", tokens, ["cache_a"])
self.cache.insert_cache("model_b", tokens, ["cache_b"])
self.assertEqual(len(self.cache), 2)
result_a, _ = self.cache.fetch_nearest_cache("model_a", tokens)
result_b, _ = self.cache.fetch_nearest_cache("model_b", tokens)
self.assertEqual(result_a, ["cache_a"])
self.assertEqual(result_b, ["cache_b"])
def test_eviction_across_models(self):
"""LRU eviction should work across different models."""
cache = ThreadSafeLRUPromptCache(max_size=3)
cache.insert_cache("model_a", [1], ["a1"])
cache.insert_cache("model_b", [1], ["b1"])
cache.insert_cache("model_a", [2], ["a2"])
self.assertEqual(len(cache), 3)
# Insert 4th - should evict model_a:[1] (oldest)
cache.insert_cache("model_b", [2], ["b2"])
result, _ = cache.fetch_nearest_cache("model_a", [1])
self.assertIsNone(result)
class TestCacheThreadSafety(unittest.TestCase):
"""Tests for thread safety with data integrity verification."""
def test_concurrent_inserts_no_data_loss(self):
"""Concurrent inserts should not lose data."""
cache = ThreadSafeLRUPromptCache(max_size=100)
num_threads = 10
inserts_per_thread = 20
def insert_entries(thread_id):
for i in range(inserts_per_thread):
tokens = [thread_id, i]
cache.insert_cache("model", tokens, [f"cache_{thread_id}_{i}"])
with concurrent.futures.ThreadPoolExecutor(max_workers=num_threads) as executor:
futures = [executor.submit(insert_entries, tid) for tid in range(num_threads)]
concurrent.futures.wait(futures)
        # 10 threads x 20 unique token keys = 200 inserts with max_size=100,
        # so after LRU eviction the cache must hold exactly 100 entries
self.assertEqual(len(cache), 100)
def test_concurrent_fetch_and_insert_no_corruption(self):
"""Concurrent fetches and inserts should not corrupt data."""
cache = ThreadSafeLRUPromptCache(max_size=50)
errors = []
lock = threading.Lock()
# Pre-populate with known data
for i in range(20):
cache.insert_cache("model", [i], [f"original_{i}"])
def fetch_and_verify(thread_id):
try:
for _ in range(50):
token_id = thread_id % 20
result, remaining = cache.fetch_nearest_cache("model", [token_id])
if result is not None:
# Verify data integrity
                        expected_prefix = f"original_{token_id}"
                        if not str(result[0]).startswith(expected_prefix):
with lock:
errors.append(f"Corrupted data: {result}")
# Re-insert to keep cache populated
cache.insert_cache("model", [token_id], result)
except Exception as e:
with lock:
errors.append(str(e))
with concurrent.futures.ThreadPoolExecutor(max_workers=10) as executor:
futures = [executor.submit(fetch_and_verify, tid) for tid in range(10)]
concurrent.futures.wait(futures)
self.assertEqual(errors, [], f"Thread safety errors: {errors}")
def test_concurrent_operations_maintain_cache_bounds(self):
"""Cache size should never exceed max_size under concurrent operations."""
max_size = 10
cache = ThreadSafeLRUPromptCache(max_size=max_size)
size_violations = []
lock = threading.Lock()
def random_operations(thread_id):
import random
for i in range(100):
tokens = [random.randint(0, 50)]
if random.random() < 0.7:
cache.insert_cache("model", tokens, [f"cache_{thread_id}_{i}"])
else:
cache.fetch_nearest_cache("model", tokens)
current_size = len(cache)
if current_size > max_size:
with lock:
size_violations.append(current_size)
with concurrent.futures.ThreadPoolExecutor(max_workers=10) as executor:
futures = [executor.submit(random_operations, tid) for tid in range(10)]
concurrent.futures.wait(futures)
self.assertEqual(size_violations, [], f"Size exceeded max: {size_violations}")
self.assertLessEqual(len(cache), max_size)
class TestCacheClear(unittest.TestCase):
"""Tests for cache clear operation."""
def setUp(self):
self.cache = ThreadSafeLRUPromptCache(max_size=10)
def test_clear_removes_all_entries(self):
"""Clear should remove all entries."""
self.cache.insert_cache("model1", [1, 2], ["cache1"])
self.cache.insert_cache("model2", [3, 4], ["cache2"])
self.cache.insert_cache("model1", [5, 6], ["cache3"])
self.assertEqual(len(self.cache), 3)
self.cache.clear()
self.assertEqual(len(self.cache), 0)
def test_clear_allows_new_inserts(self):
"""After clear, new inserts should work normally."""
self.cache.insert_cache("model", [1], ["cache1"])
self.cache.clear()
self.cache.insert_cache("model", [2], ["cache2"])
self.assertEqual(len(self.cache), 1)
result, _ = self.cache.fetch_nearest_cache("model", [2])
self.assertEqual(result, ["cache2"])
if __name__ == "__main__":
unittest.main()
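The reference-counting contract that `TestCacheReferenceCount` exercises (N inserts of the same key allow N fetches; shared entries are handed out as independent deep copies, the last reference as the original) can be sketched standalone. This is a simplified stand-in for `insert_cache`/`_extract`, not the real class:

```python
import copy

store = {}  # key -> [prompt_cache, count]

def insert(key, prompt_cache):
    if key in store:
        store[key][1] += 1        # same key: bump the reference count
    else:
        store[key] = [prompt_cache, 1]

def extract(key):
    entry = store.get(key)
    if entry is None:
        return None
    if entry[1] == 1:
        del store[key]            # last reference: hand over the original
        return entry[0]
    entry[1] -= 1                 # shared: hand over an independent copy
    return copy.deepcopy(entry[0])

insert("k", [{"kv": "data"}])
insert("k", [{"kv": "data"}])     # count == 2, still one entry
a = extract("k")                  # deep copy
a[0]["kv"] = "modified"
b = extract("k")                  # original, unaffected by the copy
print(b[0]["kv"])                 # data
print(extract("k"))               # None (fully extracted)
```

Handing out deep copies while the count is above one is what lets concurrent requests mutate their KV caches without corrupting each other, which is the property `test_extract_with_high_count_returns_deep_copy` pins down.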


@@ -0,0 +1,5 @@
--extra-index-url https://download.pytorch.org/whl/cu130
transformers
accelerate
torch==2.9.1
rerankers[transformers]


@@ -0,0 +1,8 @@
--extra-index-url https://download.pytorch.org/whl/cu130
torch==2.9.1
rfdetr
opencv-python
accelerate
inference
peft
optimum-quanto


@@ -5,5 +5,5 @@ accelerate
transformers
bitsandbytes
outetts
sentence-transformers==5.1.0
protobuf==6.33.1
sentence-transformers==5.2.0
protobuf==6.33.2


@@ -6,5 +6,5 @@ accelerate
transformers
bitsandbytes
outetts
sentence-transformers==5.1.0
protobuf==6.33.1
sentence-transformers==5.2.0
protobuf==6.33.2


@@ -5,5 +5,5 @@ numba==0.60.0
transformers
bitsandbytes
outetts
sentence-transformers==5.1.0
protobuf==6.33.1
sentence-transformers==5.2.0
protobuf==6.33.2


@@ -0,0 +1,9 @@
--extra-index-url https://download.pytorch.org/whl/cu130
torch==2.9.0
llvmlite==0.43.0
numba==0.60.0
transformers
bitsandbytes
outetts
sentence-transformers==5.2.0
protobuf==6.33.2


@@ -7,5 +7,5 @@ numba==0.60.0
bitsandbytes
outetts
bitsandbytes
sentence-transformers==5.1.0
protobuf==6.33.1
sentence-transformers==5.2.0
protobuf==6.33.2


@@ -9,5 +9,5 @@ transformers
intel-extension-for-transformers
bitsandbytes
outetts
sentence-transformers==5.1.0
protobuf==6.33.1
sentence-transformers==5.2.0
protobuf==6.33.2


@@ -1,5 +1,5 @@
grpcio==1.76.0
protobuf==6.33.1
protobuf==6.33.2
certifi
setuptools
scipy==1.15.1


@@ -0,0 +1,23 @@
.PHONY: vibevoice
vibevoice:
bash install.sh
.PHONY: run
run: vibevoice
@echo "Running vibevoice..."
bash run.sh
@echo "vibevoice run."
.PHONY: test
test: vibevoice
@echo "Testing vibevoice..."
bash test.sh
@echo "vibevoice tested."
.PHONY: protogen-clean
protogen-clean:
$(RM) backend_pb2_grpc.py backend_pb2.py
.PHONY: clean
clean: protogen-clean
rm -rf venv __pycache__


@@ -0,0 +1,485 @@
#!/usr/bin/env python3
"""
This is an extra gRPC server of LocalAI for VibeVoice
"""
from concurrent import futures
import time
import argparse
import signal
import sys
import os
import copy
import traceback
from pathlib import Path
import backend_pb2
import backend_pb2_grpc
import torch
from vibevoice.modular.modeling_vibevoice_streaming_inference import VibeVoiceStreamingForConditionalGenerationInference
from vibevoice.processor.vibevoice_streaming_processor import VibeVoiceStreamingProcessor
import grpc
def is_float(s):
"""Check if a string can be converted to float."""
try:
float(s)
return True
except ValueError:
return False
def is_int(s):
"""Check if a string can be converted to int."""
try:
int(s)
return True
except ValueError:
return False
_ONE_DAY_IN_SECONDS = 60 * 60 * 24
# If MAX_WORKERS is specified in the environment, use it; otherwise default to 1
MAX_WORKERS = int(os.environ.get('PYTHON_GRPC_MAX_WORKERS', '1'))
# Implement the BackendServicer class with the service methods
class BackendServicer(backend_pb2_grpc.BackendServicer):
"""
BackendServicer is the class that implements the gRPC service
"""
def Health(self, request, context):
return backend_pb2.Reply(message=bytes("OK", 'utf-8'))
def LoadModel(self, request, context):
# Get device
if torch.cuda.is_available():
print("CUDA is available", file=sys.stderr)
device = "cuda"
else:
print("CUDA is not available", file=sys.stderr)
device = "cpu"
mps_available = hasattr(torch.backends, "mps") and torch.backends.mps.is_available()
if mps_available:
device = "mps"
if not torch.cuda.is_available() and request.CUDA:
return backend_pb2.Result(success=False, message="CUDA is not available")
# Normalize potential 'mpx' typo to 'mps'
if device == "mpx":
print("Note: device 'mpx' detected, treating it as 'mps'.", file=sys.stderr)
device = "mps"
# Validate mps availability if requested
if device == "mps" and not torch.backends.mps.is_available():
print("Warning: MPS not available. Falling back to CPU.", file=sys.stderr)
device = "cpu"
self.device = device
self._torch_device = torch.device(device)
options = request.Options
# empty dict
self.options = {}
# The options are a list of strings in this form optname:optvalue
# We are storing all the options in a dict so we can use it later when
# generating the audio
for opt in options:
if ":" not in opt:
continue
key, value = opt.split(":", 1) # Split only on first colon
# if value is a number, convert it to the appropriate type
if is_float(value):
value = float(value)
elif is_int(value):
value = int(value)
elif value.lower() in ["true", "false"]:
value = value.lower() == "true"
self.options[key] = value
# Get model path from request
model_path = request.Model
if not model_path:
model_path = "microsoft/VibeVoice-Realtime-0.5B"
# Get inference steps from options, default to 5
self.inference_steps = self.options.get("inference_steps", 5)
if not isinstance(self.inference_steps, int) or self.inference_steps <= 0:
self.inference_steps = 5
# Get cfg_scale from options, default to 1.5
self.cfg_scale = self.options.get("cfg_scale", 1.5)
if not isinstance(self.cfg_scale, (int, float)) or self.cfg_scale <= 0:
self.cfg_scale = 1.5
# Determine voices directory
# Priority order:
# 1. voices_dir option (explicitly set by user - highest priority)
# 2. Relative to ModelFile if provided
# 3. Relative to ModelPath (models directory) if provided
# 4. Backend directory
# 5. Absolute path from AudioPath if provided
voices_dir = None
# First check if voices_dir is explicitly set in options
if "voices_dir" in self.options:
voices_dir_option = self.options["voices_dir"]
if isinstance(voices_dir_option, str) and voices_dir_option.strip():
voices_dir = voices_dir_option.strip()
# If relative path, try to resolve it relative to ModelPath or ModelFile
if not os.path.isabs(voices_dir):
if hasattr(request, 'ModelPath') and request.ModelPath:
voices_dir = os.path.join(request.ModelPath, voices_dir)
elif request.ModelFile:
model_file_base = os.path.dirname(request.ModelFile)
voices_dir = os.path.join(model_file_base, voices_dir)
# If still relative, make it absolute from current working directory
if not os.path.isabs(voices_dir):
voices_dir = os.path.abspath(voices_dir)
# Check if the directory exists
if not os.path.exists(voices_dir):
print(f"Warning: voices_dir option specified but directory does not exist: {voices_dir}", file=sys.stderr)
voices_dir = None
# If not set via option, try relative to ModelFile if provided
if not voices_dir and request.ModelFile:
model_file_base = os.path.dirname(request.ModelFile)
voices_dir = os.path.join(model_file_base, "voices", "streaming_model")
if not os.path.exists(voices_dir):
voices_dir = None
# If not found, try relative to ModelPath (models directory)
if not voices_dir and hasattr(request, 'ModelPath') and request.ModelPath:
voices_dir = os.path.join(request.ModelPath, "voices", "streaming_model")
if not os.path.exists(voices_dir):
voices_dir = None
# If not found, try relative to backend directory
if not voices_dir:
backend_dir = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
voices_dir = os.path.join(backend_dir, "vibevoice", "voices", "streaming_model")
if not os.path.exists(voices_dir):
# Try absolute path from AudioPath if provided
if request.AudioPath and os.path.isabs(request.AudioPath):
voices_dir = os.path.dirname(request.AudioPath)
else:
voices_dir = None
self.voices_dir = voices_dir
self.voice_presets = {}
self._voice_cache = {}
self.default_voice_key = None
# Load voice presets if directory exists
if self.voices_dir and os.path.exists(self.voices_dir):
self._load_voice_presets()
else:
print("Warning: Voices directory not found. Voice presets will not be available.", file=sys.stderr)
try:
print(f"Loading processor & model from {model_path}", file=sys.stderr)
self.processor = VibeVoiceStreamingProcessor.from_pretrained(model_path)
# Decide dtype & attention implementation
if self.device == "mps":
load_dtype = torch.float32 # MPS requires float32
device_map = None
attn_impl_primary = "sdpa" # flash_attention_2 not supported on MPS
elif self.device == "cuda":
load_dtype = torch.bfloat16
device_map = "cuda"
attn_impl_primary = "flash_attention_2"
else: # cpu
load_dtype = torch.float32
device_map = "cpu"
attn_impl_primary = "sdpa"
print(f"Using device: {self.device}, torch_dtype: {load_dtype}, attn_implementation: {attn_impl_primary}", file=sys.stderr)
# Load model with device-specific logic
try:
if self.device == "mps":
self.model = VibeVoiceStreamingForConditionalGenerationInference.from_pretrained(
model_path,
torch_dtype=load_dtype,
attn_implementation=attn_impl_primary,
device_map=None, # load then move
)
self.model.to("mps")
elif self.device == "cuda":
self.model = VibeVoiceStreamingForConditionalGenerationInference.from_pretrained(
model_path,
torch_dtype=load_dtype,
device_map="cuda",
attn_implementation=attn_impl_primary,
)
else: # cpu
self.model = VibeVoiceStreamingForConditionalGenerationInference.from_pretrained(
model_path,
torch_dtype=load_dtype,
device_map="cpu",
attn_implementation=attn_impl_primary,
)
except Exception as e:
if attn_impl_primary == 'flash_attention_2':
print(f"[ERROR] : {type(e).__name__}: {e}", file=sys.stderr)
print(traceback.format_exc(), file=sys.stderr)
print("Error loading the model. Trying to use SDPA. However, note that only flash_attention_2 has been fully tested, and using SDPA may result in lower audio quality.", file=sys.stderr)
self.model = VibeVoiceStreamingForConditionalGenerationInference.from_pretrained(
model_path,
torch_dtype=load_dtype,
device_map=(self.device if self.device in ("cuda", "cpu") else None),
attn_implementation='sdpa'
)
if self.device == "mps":
self.model.to("mps")
else:
raise e
self.model.eval()
self.model.set_ddpm_inference_steps(num_steps=self.inference_steps)
# Set default voice key
if self.voice_presets:
# Try to get default from environment or use first available
preset_name = os.environ.get("VOICE_PRESET")
self.default_voice_key = self._determine_voice_key(preset_name)
print(f"Default voice preset: {self.default_voice_key}", file=sys.stderr)
else:
print("Warning: No voice presets available. Voice selection will not work.", file=sys.stderr)
except Exception as err:
return backend_pb2.Result(success=False, message=f"Unexpected {err=}, {type(err)=}")
return backend_pb2.Result(message="Model loaded successfully", success=True)
def _load_voice_presets(self):
"""Load voice presets from the voices directory."""
if not self.voices_dir or not os.path.exists(self.voices_dir):
self.voice_presets = {}
return
self.voice_presets = {}
# Get all .pt files in the voices directory
pt_files = [f for f in os.listdir(self.voices_dir)
if f.lower().endswith('.pt') and os.path.isfile(os.path.join(self.voices_dir, f))]
# Create dictionary with filename (without extension) as key
for pt_file in pt_files:
# Remove .pt extension to get the name
name = os.path.splitext(pt_file)[0]
# Create full path
full_path = os.path.join(self.voices_dir, pt_file)
self.voice_presets[name] = full_path
# Sort the voice presets alphabetically by name
self.voice_presets = dict(sorted(self.voice_presets.items()))
print(f"Found {len(self.voice_presets)} voice files in {self.voices_dir}", file=sys.stderr)
if self.voice_presets:
print(f"Available voices: {', '.join(self.voice_presets.keys())}", file=sys.stderr)
def _determine_voice_key(self, name):
"""Determine voice key from name or use default."""
if name and name in self.voice_presets:
return name
# Try default key
default_key = "en-WHTest_man"
if default_key in self.voice_presets:
return default_key
# Use first available
if self.voice_presets:
first_key = next(iter(self.voice_presets))
print(f"Using fallback voice preset: {first_key}", file=sys.stderr)
return first_key
return None
def _get_voice_path(self, speaker_name):
"""Get voice file path for a given speaker name."""
if not self.voice_presets:
return None
# First try exact match
if speaker_name and speaker_name in self.voice_presets:
return self.voice_presets[speaker_name]
# Try partial matching (case insensitive)
if speaker_name:
speaker_lower = speaker_name.lower()
for preset_name, path in self.voice_presets.items():
if preset_name.lower() in speaker_lower or speaker_lower in preset_name.lower():
return path
# Default to first voice if no match found
if self.default_voice_key and self.default_voice_key in self.voice_presets:
return self.voice_presets[self.default_voice_key]
elif self.voice_presets:
default_voice = list(self.voice_presets.values())[0]
print(f"Warning: No voice preset found for '{speaker_name}', using default voice: {default_voice}", file=sys.stderr)
return default_voice
return None
def _ensure_voice_cached(self, voice_path):
"""Load and cache voice preset."""
if not voice_path or not os.path.exists(voice_path):
return None
# Use path as cache key
if voice_path not in self._voice_cache:
print(f"Loading prefilled prompt from {voice_path}", file=sys.stderr)
prefilled_outputs = torch.load(
voice_path,
map_location=self._torch_device,
weights_only=False,
)
self._voice_cache[voice_path] = prefilled_outputs
return self._voice_cache[voice_path]
def TTS(self, request, context):
try:
# Get voice selection
# Priority: request.voice > AudioPath > default
voice_path = None
voice_key = None
if request.voice:
# Try to get voice by name
voice_path = self._get_voice_path(request.voice)
if voice_path:
voice_key = request.voice
elif request.AudioPath:
# Use AudioPath as voice file
if os.path.isabs(request.AudioPath):
voice_path = request.AudioPath
elif request.ModelFile:
model_file_base = os.path.dirname(request.ModelFile)
voice_path = os.path.join(model_file_base, request.AudioPath)
elif hasattr(request, 'ModelPath') and request.ModelPath:
voice_path = os.path.join(request.ModelPath, request.AudioPath)
else:
voice_path = request.AudioPath
elif self.default_voice_key:
voice_path = self._get_voice_path(self.default_voice_key)
voice_key = self.default_voice_key
if not voice_path or not os.path.exists(voice_path):
return backend_pb2.Result(
success=False,
message=f"Voice file not found: {voice_path}. Please provide a valid voice preset or AudioPath."
)
# Load voice preset
prefilled_outputs = self._ensure_voice_cached(voice_path)
if prefilled_outputs is None:
return backend_pb2.Result(
success=False,
message=f"Failed to load voice preset from {voice_path}"
)
# Get generation parameters from options
cfg_scale = self.options.get("cfg_scale", self.cfg_scale)
inference_steps = self.options.get("inference_steps", self.inference_steps)
do_sample = self.options.get("do_sample", False)
temperature = self.options.get("temperature", 0.9)
top_p = self.options.get("top_p", 0.9)
# Update inference steps if needed
if inference_steps != self.inference_steps:
self.model.set_ddpm_inference_steps(num_steps=inference_steps)
self.inference_steps = inference_steps
# Prepare text
text = request.text.strip().replace("\u2019", "'").replace("\u201c", '"').replace("\u201d", '"')  # normalize curly quotes to ASCII
# Prepare inputs
inputs = self.processor.process_input_with_cached_prompt(
text=text,
cached_prompt=prefilled_outputs,
padding=True,
return_tensors="pt",
return_attention_mask=True,
)
# Move tensors to target device
target_device = self._torch_device
for k, v in inputs.items():
if torch.is_tensor(v):
inputs[k] = v.to(target_device)
print(f"Generating audio with cfg_scale: {cfg_scale}, inference_steps: {inference_steps}", file=sys.stderr)
# Generate audio
outputs = self.model.generate(
**inputs,
max_new_tokens=None,
cfg_scale=cfg_scale,
tokenizer=self.processor.tokenizer,
generation_config={
'do_sample': do_sample,
'temperature': temperature if do_sample else 1.0,
'top_p': top_p if do_sample else 1.0,
},
verbose=False,
all_prefilled_outputs=copy.deepcopy(prefilled_outputs) if prefilled_outputs is not None else None,
)
# Save output
if outputs.speech_outputs and outputs.speech_outputs[0] is not None:
self.processor.save_audio(
outputs.speech_outputs[0], # First (and only) batch item
output_path=request.dst,
)
print(f"Saved output to {request.dst}", file=sys.stderr)
else:
return backend_pb2.Result(
success=False,
message="No audio output generated"
)
except Exception as err:
print(f"Error in TTS: {err}", file=sys.stderr)
print(traceback.format_exc(), file=sys.stderr)
return backend_pb2.Result(success=False, message=f"Unexpected {err=}, {type(err)=}")
return backend_pb2.Result(success=True)
def serve(address):
server = grpc.server(futures.ThreadPoolExecutor(max_workers=MAX_WORKERS),
options=[
('grpc.max_message_length', 50 * 1024 * 1024), # 50MB
('grpc.max_send_message_length', 50 * 1024 * 1024), # 50MB
('grpc.max_receive_message_length', 50 * 1024 * 1024), # 50MB
])
backend_pb2_grpc.add_BackendServicer_to_server(BackendServicer(), server)
server.add_insecure_port(address)
server.start()
print("Server started. Listening on: " + address, file=sys.stderr)
# Define the signal handler function
def signal_handler(sig, frame):
print("Received termination signal. Shutting down...")
server.stop(0)
sys.exit(0)
# Set the signal handlers for SIGINT and SIGTERM
signal.signal(signal.SIGINT, signal_handler)
signal.signal(signal.SIGTERM, signal_handler)
try:
while True:
time.sleep(_ONE_DAY_IN_SECONDS)
except KeyboardInterrupt:
server.stop(0)
if __name__ == "__main__":
parser = argparse.ArgumentParser(description="Run the gRPC server.")
parser.add_argument(
"--addr", default="localhost:50051", help="The address to bind the server to."
)
args = parser.parse_args()
serve(args.addr)

View File
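For reference, the `optname:optvalue` parsing loop in backend.py's LoadModel above can be sketched standalone. This is a sketch, not the backend's exact helpers: `is_float`/`is_int` are not shown in the diff, and this version checks integers before floats so integer-valued options keep their type.

```python
def parse_options(options):
    """Parse ["key:value", ...] strings into a dict with basic type coercion,
    mirroring the option loop in backend.py's LoadModel."""
    parsed = {}
    for opt in options:
        if ":" not in opt:
            continue  # skip malformed entries, as the backend does
        key, value = opt.split(":", 1)  # split only on the first colon
        if value.lstrip("-").isdigit():
            value = int(value)
        else:
            try:
                value = float(value)
            except ValueError:
                if value.lower() in ("true", "false"):
                    value = value.lower() == "true"
        parsed[key] = value
    return parsed
```

For example, `parse_options(["inference_steps:5", "cfg_scale:1.5", "do_sample:true"])` yields an `int`, a `float`, and a `bool`, which is what the later `isinstance` checks on `inference_steps` and `cfg_scale` rely on.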

@@ -0,0 +1,35 @@
#!/bin/bash
set -e
backend_dir=$(dirname "$0")
if [ -d "$backend_dir/common" ]; then
source "$backend_dir/common/libbackend.sh"
else
source "$backend_dir/../common/libbackend.sh"
fi
# This is here because the Intel pip index is broken: it returns a 200 status code for every
# package name but never returns any package links. This makes uv think the package exists in
# the Intel pip index, and by default uv stops looking at other indexes once it finds a match.
# We need uv to continue falling through to the default PyPI index to find optimum[openvino].
# Note: --upgrade actually allows us to *downgrade* torch to the version provided in the Intel pip index.
if [ "x${BUILD_PROFILE}" == "xintel" ]; then
EXTRA_PIP_INSTALL_FLAGS+=" --upgrade --index-strategy=unsafe-first-match"
fi
# Use python 3.12 for l4t
if [ "x${BUILD_PROFILE}" == "xl4t12" ] || [ "x${BUILD_PROFILE}" == "xl4t13" ]; then
PYTHON_VERSION="3.12"
PYTHON_PATCH="12"
PY_STANDALONE_TAG="20251120"
fi
installRequirements
git clone https://github.com/microsoft/VibeVoice.git
cd VibeVoice/
if [ "x${USE_PIP}" == "xtrue" ]; then
pip install ${EXTRA_PIP_INSTALL_FLAGS:-} .
else
uv pip install ${EXTRA_PIP_INSTALL_FLAGS:-} .
fi

View File

@@ -0,0 +1,22 @@
--extra-index-url https://download.pytorch.org/whl/cpu
git+https://github.com/huggingface/diffusers
opencv-python
transformers==4.51.3
torchvision==0.22.1
accelerate
compel
peft
sentencepiece
torch==2.7.1
optimum-quanto
ftfy
llvmlite>=0.40.0
numba>=0.57.0
tqdm
numpy
scipy
librosa
ml-collections
absl-py
gradio
av

View File

@@ -0,0 +1,22 @@
--extra-index-url https://download.pytorch.org/whl/cu118
git+https://github.com/huggingface/diffusers
opencv-python
transformers==4.51.3
torchvision==0.22.1
accelerate
compel
peft
sentencepiece
torch==2.7.1
optimum-quanto
ftfy
llvmlite>=0.40.0
numba>=0.57.0
tqdm
numpy
scipy
librosa
ml-collections
absl-py
gradio
av

View File

@@ -0,0 +1,22 @@
--extra-index-url https://download.pytorch.org/whl/cu121
git+https://github.com/huggingface/diffusers
opencv-python
transformers==4.51.3
torchvision
accelerate
compel
peft
sentencepiece
torch
ftfy
optimum-quanto
llvmlite>=0.40.0
numba>=0.57.0
tqdm
numpy
scipy
librosa
ml-collections
absl-py
gradio
av

View File

@@ -0,0 +1,22 @@
--extra-index-url https://download.pytorch.org/whl/cu130
git+https://github.com/huggingface/diffusers
opencv-python
transformers==4.51.3
torchvision
accelerate
compel
peft
sentencepiece
torch
ftfy
optimum-quanto
llvmlite>=0.40.0
numba>=0.57.0
tqdm
numpy
scipy
librosa
ml-collections
absl-py
gradio
av

View File

@@ -0,0 +1,22 @@
--extra-index-url https://download.pytorch.org/whl/rocm6.3
torch==2.7.1+rocm6.3
torchvision==0.22.1+rocm6.3
git+https://github.com/huggingface/diffusers
opencv-python
transformers==4.51.3
accelerate
compel
peft
sentencepiece
optimum-quanto
ftfy
llvmlite>=0.40.0
numba>=0.57.0
tqdm
numpy
scipy
librosa
ml-collections
absl-py
gradio
av

View File

@@ -0,0 +1,26 @@
--extra-index-url https://pytorch-extension.intel.com/release-whl/stable/xpu/us/
intel-extension-for-pytorch==2.3.110+xpu
torch==2.5.1+cxx11.abi
torchvision==0.20.1+cxx11.abi
oneccl_bind_pt==2.8.0+xpu
optimum[openvino]
setuptools
git+https://github.com/huggingface/diffusers
opencv-python
transformers==4.51.3
accelerate
compel
peft
sentencepiece
optimum-quanto
ftfy
llvmlite>=0.40.0
numba>=0.57.0
tqdm
numpy
scipy
librosa
ml-collections
absl-py
gradio
av

View File

@@ -0,0 +1,22 @@
--extra-index-url https://pypi.jetson-ai-lab.io/jp6/cu129/
torch
git+https://github.com/huggingface/diffusers
transformers==4.51.3
accelerate
compel
peft
optimum-quanto
numpy<2
sentencepiece
torchvision
ftfy
llvmlite>=0.40.0
numba>=0.57.0
tqdm
numpy
scipy
librosa
ml-collections
absl-py
gradio
av

View File

@@ -0,0 +1,22 @@
--extra-index-url https://download.pytorch.org/whl/cu130
torch
git+https://github.com/huggingface/diffusers
transformers==4.51.3
accelerate
compel
peft
optimum-quanto
numpy<2
sentencepiece
torchvision
ftfy
llvmlite>=0.40.0
numba>=0.57.0
tqdm
numpy
scipy
librosa
ml-collections
absl-py
gradio
av

View File

@@ -0,0 +1,21 @@
torch==2.7.1
torchvision==0.22.1
git+https://github.com/huggingface/diffusers
opencv-python
transformers==4.51.3
accelerate
compel
peft
sentencepiece
optimum-quanto
ftfy
llvmlite>=0.40.0
numba>=0.57.0
tqdm
numpy
scipy
librosa
ml-collections
absl-py
gradio
av

View File

@@ -0,0 +1,4 @@
grpcio==1.71.0
protobuf
certifi
packaging==24.1

View File

@@ -0,0 +1,9 @@
#!/bin/bash
backend_dir=$(dirname "$0")
if [ -d "$backend_dir/common" ]; then
source "$backend_dir/common/libbackend.sh"
else
source "$backend_dir/../common/libbackend.sh"
fi
startBackend "$@"

View File

@@ -0,0 +1,82 @@
"""
A test script to test the gRPC service
"""
import unittest
import subprocess
import time
import backend_pb2
import backend_pb2_grpc
import grpc
class TestBackendServicer(unittest.TestCase):
"""
TestBackendServicer is the class that tests the gRPC service
"""
def setUp(self):
"""
This method sets up the gRPC service by starting the server
"""
self.service = subprocess.Popen(["python3", "backend.py", "--addr", "localhost:50051"])
time.sleep(30)
def tearDown(self) -> None:
"""
This method tears down the gRPC service by terminating the server
"""
self.service.terminate()
self.service.wait()
def test_server_startup(self):
"""
This method tests if the server starts up successfully
"""
try:
self.setUp()
with grpc.insecure_channel("localhost:50051") as channel:
stub = backend_pb2_grpc.BackendStub(channel)
response = stub.Health(backend_pb2.HealthMessage())
self.assertEqual(response.message, b'OK')
except Exception as err:
print(err)
self.fail("Server failed to start")
finally:
self.tearDown()
def test_load_model(self):
"""
This method tests if the model is loaded successfully
"""
try:
self.setUp()
with grpc.insecure_channel("localhost:50051") as channel:
stub = backend_pb2_grpc.BackendStub(channel)
response = stub.LoadModel(backend_pb2.ModelOptions(Model="tts_models/en/vctk/vits"))
print(response)
self.assertTrue(response.success)
self.assertEqual(response.message, "Model loaded successfully")
except Exception as err:
print(err)
self.fail("LoadModel service failed")
finally:
self.tearDown()
def test_tts(self):
"""
This method tests if TTS audio is generated successfully
"""
try:
self.setUp()
with grpc.insecure_channel("localhost:50051") as channel:
stub = backend_pb2_grpc.BackendStub(channel)
response = stub.LoadModel(backend_pb2.ModelOptions(Model="tts_models/en/vctk/vits"))
self.assertTrue(response.success)
tts_request = backend_pb2.TTSRequest(text="80s TV news production music hit for tonight's biggest story")
tts_response = stub.TTS(tts_request)
self.assertIsNotNone(tts_response)
except Exception as err:
print(err)
self.fail("TTS service failed")
finally:
self.tearDown()

View File

@@ -0,0 +1,11 @@
#!/bin/bash
set -e
backend_dir=$(dirname "$0")
if [ -d "$backend_dir/common" ]; then
source "$backend_dir/common/libbackend.sh"
else
source "$backend_dir/../common/libbackend.sh"
fi
runUnittests

View File

@@ -49,6 +49,8 @@ type ReleaseManager struct {
ChecksumsPath string
// MetadataPath is where version metadata is stored
MetadataPath string
// HTTPClient is the HTTP client used for downloads
HTTPClient *http.Client
}
// NewReleaseManager creates a new release manager
@@ -65,6 +67,9 @@ func NewReleaseManager() *ReleaseManager {
CurrentVersion: internal.PrintableVersion(),
ChecksumsPath: checksumsPath,
MetadataPath: metadataPath,
HTTPClient: &http.Client{
Timeout: 30 * time.Second,
},
}
}
@@ -72,7 +77,7 @@ func NewReleaseManager() *ReleaseManager {
func (rm *ReleaseManager) GetLatestRelease() (*Release, error) {
url := fmt.Sprintf("https://api.github.com/repos/%s/%s/releases/latest", rm.GitHubOwner, rm.GitHubRepo)
resp, err := http.Get(url)
resp, err := rm.HTTPClient.Get(url)
if err != nil {
return nil, fmt.Errorf("failed to fetch latest release: %w", err)
}
@@ -125,18 +130,43 @@ func (rm *ReleaseManager) DownloadRelease(version string, progressCallback func(
rm.GitHubOwner, rm.GitHubRepo, version, version)
checksumPath := filepath.Join(rm.BinaryPath, "checksums.txt")
if err := rm.downloadFile(checksumURL, checksumPath, nil); err != nil {
return fmt.Errorf("failed to download checksums: %w", err)
manualChecksumPath := filepath.Join(rm.ChecksumsPath, fmt.Sprintf("checksums-%s.txt", version))
// First, check if there's already a checksum file (either manually placed or previously downloaded)
// and honor it, skipping the download entirely in that case
var downloadErr error
if _, err := os.Stat(manualChecksumPath); err == nil {
log.Printf("Using existing checksums from: %s", manualChecksumPath)
checksumPath = manualChecksumPath
} else if _, err := os.Stat(checksumPath); err == nil {
log.Printf("Using existing checksums from: %s", checksumPath)
} else {
// No existing checksum file found, try to download
downloadErr = rm.downloadFile(checksumURL, checksumPath, nil)
if downloadErr != nil {
log.Printf("Warning: failed to download checksums: %v", downloadErr)
log.Printf("Warning: Checksum verification will be skipped. For security, you can manually place checksums at: %s", manualChecksumPath)
log.Printf("Download checksums from: %s", checksumURL)
// Continue without verification - log warning but don't fail
}
}
// Verify the checksum
if err := rm.VerifyChecksum(localPath, checksumPath, binaryName); err != nil {
return fmt.Errorf("checksum verification failed: %w", err)
}
// Verify the checksum if we have a checksum file
if _, err := os.Stat(checksumPath); err == nil {
if err := rm.VerifyChecksum(localPath, checksumPath, binaryName); err != nil {
return fmt.Errorf("checksum verification failed: %w", err)
}
log.Printf("Checksum verification successful")
// Save checksums persistently for future verification
if err := rm.saveChecksums(version, checksumPath, binaryName); err != nil {
log.Printf("Warning: failed to save checksums: %v", err)
// Save checksums persistently for future verification
if downloadErr == nil {
if err := rm.saveChecksums(version, checksumPath, binaryName); err != nil {
log.Printf("Warning: failed to save checksums: %v", err)
}
}
} else {
log.Printf("Warning: Proceeding without checksum verification")
}
// Make the binary executable
@@ -168,34 +198,61 @@ func (rm *ReleaseManager) GetBinaryName(version string) string {
// downloadFile downloads a file from a URL to a local path with optional progress callback
func (rm *ReleaseManager) downloadFile(url, filepath string, progressCallback func(float64)) error {
resp, err := http.Get(url)
if err != nil {
return err
}
defer resp.Body.Close()
return rm.downloadFileWithRetry(url, filepath, progressCallback, 3)
}
if resp.StatusCode != http.StatusOK {
return fmt.Errorf("bad status: %s", resp.Status)
}
// downloadFileWithRetry downloads a file from a URL with retry logic
func (rm *ReleaseManager) downloadFileWithRetry(url, filepath string, progressCallback func(float64), maxRetries int) error {
var lastErr error
out, err := os.Create(filepath)
if err != nil {
return err
}
defer out.Close()
// Create a progress reader if callback is provided
var reader io.Reader = resp.Body
if progressCallback != nil && resp.ContentLength > 0 {
reader = &progressReader{
Reader: resp.Body,
Total: resp.ContentLength,
Callback: progressCallback,
for attempt := 1; attempt <= maxRetries; attempt++ {
if attempt > 1 {
log.Printf("Retrying download (attempt %d/%d): %s", attempt, maxRetries, url)
time.Sleep(time.Duration(attempt) * time.Second)
}
resp, err := rm.HTTPClient.Get(url)
if err != nil {
lastErr = err
continue
}
if resp.StatusCode != http.StatusOK {
resp.Body.Close()
lastErr = fmt.Errorf("bad status: %s", resp.Status)
continue
}
out, err := os.Create(filepath)
if err != nil {
resp.Body.Close()
return err
}
// Create a progress reader if callback is provided
var reader io.Reader = resp.Body
if progressCallback != nil && resp.ContentLength > 0 {
reader = &progressReader{
Reader: resp.Body,
Total: resp.ContentLength,
Callback: progressCallback,
}
}
_, err = io.Copy(out, reader)
resp.Body.Close()
out.Close()
if err != nil {
lastErr = err
os.Remove(filepath)
continue
}
return nil
}
_, err = io.Copy(out, reader)
return err
return fmt.Errorf("failed after %d attempts: %w", maxRetries, lastErr)
}
// saveChecksums saves checksums persistently for future verification

View File
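The retry logic added in `downloadFileWithRetry` above (linear backoff, keep the last error, give up after N attempts) is a standard pattern. A minimal Python sketch under the same assumptions — `fetch` is a caller-supplied callable standing in for the HTTP GET plus file write, and the sleep function is injectable for testing:

```python
import time

def with_retry(fetch, max_retries=3, sleep=time.sleep):
    """Call fetch() up to max_retries times, sleeping attempt-seconds before
    each retry (linear backoff, mirroring time.Duration(attempt) * time.Second
    in the Go code) and raising a summary error wrapping the last failure."""
    last_err = None
    for attempt in range(1, max_retries + 1):
        if attempt > 1:
            sleep(attempt)  # 2s before the 2nd attempt, 3s before the 3rd, ...
        try:
            return fetch()
        except Exception as err:
            last_err = err  # remember the most recent failure
    raise RuntimeError(f"failed after {max_retries} attempts") from last_err
```

The same shape as the Go version: transient failures (connection errors, bad status) retry; only a hard error such as being unable to create the output file aborts immediately in the real code.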

@@ -4,6 +4,7 @@ import (
"os"
"path/filepath"
"runtime"
"time"
. "github.com/onsi/ginkgo/v2"
. "github.com/onsi/gomega"
@@ -37,6 +38,8 @@ var _ = Describe("ReleaseManager", func() {
Expect(newRM.GitHubOwner).To(Equal("mudler"))
Expect(newRM.GitHubRepo).To(Equal("LocalAI"))
Expect(newRM.BinaryPath).To(ContainSubstring(".localai"))
Expect(newRM.HTTPClient).ToNot(BeNil())
Expect(newRM.HTTPClient.Timeout).To(Equal(30 * time.Second))
})
})

View File

@@ -382,7 +382,7 @@ func (sm *SystrayManager) showStatusDetails(status, version string) {
// showErrorDialog shows a simple error dialog
func (sm *SystrayManager) showErrorDialog(title, message string) {
fyne.DoAndWait(func() {
dialog.ShowError(fmt.Errorf(message), sm.window)
dialog.ShowError(fmt.Errorf("%s", message), sm.window)
})
}

View File

@@ -8,9 +8,7 @@ import (
"github.com/joho/godotenv"
"github.com/mudler/LocalAI/core/cli"
"github.com/mudler/LocalAI/internal"
"github.com/rs/zerolog"
"github.com/rs/zerolog/log"
"github.com/mudler/xlog"
_ "github.com/mudler/LocalAI/swagger"
)
@@ -18,9 +16,8 @@ import (
func main() {
var err error
// Initialize zerolog at a level of INFO, we will set the desired level after we parse the CLI options
log.Logger = log.Output(zerolog.ConsoleWriter{Out: os.Stderr})
zerolog.SetGlobalLevel(zerolog.InfoLevel)
// Initialize xlog at a level of INFO, we will set the desired level after we parse the CLI options
xlog.SetLogger(xlog.NewLogger(xlog.LogLevel("info"), "text"))
// handle loading environment variables from .env files
envFiles := []string{".env", "localai.env"}
@@ -32,10 +29,10 @@ func main() {
for _, envFile := range envFiles {
if _, err := os.Stat(envFile); err == nil {
log.Debug().Str("envFile", envFile).Msg("env file found, loading environment variables from file")
xlog.Debug("env file found, loading environment variables from file", "envFile", envFile)
err = godotenv.Load(envFile)
if err != nil {
log.Error().Err(err).Str("envFile", envFile).Msg("failed to load environment variables from file")
xlog.Error("failed to load environment variables from file", "error", err, "envFile", envFile)
continue
}
}
@@ -46,16 +43,7 @@ func main() {
kong.Description(
` LocalAI is a drop-in replacement OpenAI API for running LLM, GPT and genAI models locally on CPU, GPUs with consumer grade hardware.
Some of the models compatible are:
- Vicuna
- Koala
- GPT4ALL
- GPT4ALL-J
- Cerebras
- Alpaca
- StableLM (ggml quantized)
For a list of all available models for one-click install, check out: https://models.localai.io
For a list of all available models run local-ai models list
Copyright: Ettore Di Giacinto
@@ -76,7 +64,6 @@ Version: ${version}
logLevel := "info"
if cli.CLI.Debug && cli.CLI.LogLevel == nil {
logLevel = "debug"
zerolog.SetGlobalLevel(zerolog.DebugLevel)
cli.CLI.LogLevel = &logLevel
}
@@ -84,27 +71,12 @@ Version: ${version}
cli.CLI.LogLevel = &logLevel
}
switch *cli.CLI.LogLevel {
case "error":
zerolog.SetGlobalLevel(zerolog.ErrorLevel)
log.Debug().Msg("Setting logging to error")
case "warn":
zerolog.SetGlobalLevel(zerolog.WarnLevel)
log.Debug().Msg("Setting logging to warn")
case "info":
zerolog.SetGlobalLevel(zerolog.InfoLevel)
log.Debug().Msg("Setting logging to info")
case "debug":
zerolog.SetGlobalLevel(zerolog.DebugLevel)
log.Debug().Msg("Setting logging to debug")
case "trace":
zerolog.SetGlobalLevel(zerolog.TraceLevel)
log.Debug().Msg("Setting logging to trace")
}
// Set xlog logger with the desired level and text format
xlog.SetLogger(xlog.NewLogger(xlog.LogLevel(*cli.CLI.LogLevel), *cli.CLI.LogFormat))
// Run the thing!
err = ctx.Run(&cli.CLI.Context)
if err != nil {
log.Fatal().Err(err).Msg("Error running the application")
xlog.Fatal("Error running the application", "error", err)
}
}

View File

@@ -0,0 +1,42 @@
package application
import (
"time"
"github.com/mudler/LocalAI/core/services"
"github.com/mudler/xlog"
)
// RestartAgentJobService restarts the agent job service with current ApplicationConfig settings
func (a *Application) RestartAgentJobService() error {
a.agentJobMutex.Lock()
defer a.agentJobMutex.Unlock()
// Stop existing service if running
if a.agentJobService != nil {
if err := a.agentJobService.Stop(); err != nil {
xlog.Warn("Error stopping agent job service", "error", err)
}
// Wait a bit for shutdown to complete
time.Sleep(200 * time.Millisecond)
}
// Create new service instance
agentJobService := services.NewAgentJobService(
a.ApplicationConfig(),
a.ModelLoader(),
a.ModelConfigLoader(),
a.TemplatesEvaluator(),
)
// Start the service
err := agentJobService.Start(a.ApplicationConfig().Context)
if err != nil {
xlog.Error("Failed to start agent job service", "error", err)
return err
}
a.agentJobService = agentJobService
xlog.Info("Agent job service restarted")
return nil
}

View File

@@ -17,17 +17,19 @@ type Application struct {
startupConfig *config.ApplicationConfig // Stores original config from env vars (before file loading)
templatesEvaluator *templates.Evaluator
galleryService *services.GalleryService
agentJobService *services.AgentJobService
watchdogMutex sync.Mutex
watchdogStop chan bool
p2pMutex sync.Mutex
p2pCtx context.Context
p2pCancel context.CancelFunc
agentJobMutex sync.Mutex
}
func newApplication(appConfig *config.ApplicationConfig) *Application {
return &Application{
backendLoader: config.NewModelConfigLoader(appConfig.SystemState.Model.ModelsPath),
modelLoader: model.NewModelLoader(appConfig.SystemState, appConfig.SingleBackend),
modelLoader: model.NewModelLoader(appConfig.SystemState),
applicationConfig: appConfig,
templatesEvaluator: templates.NewEvaluator(appConfig.SystemState.Model.ModelsPath),
}
@@ -53,6 +55,10 @@ func (a *Application) GalleryService() *services.GalleryService {
return a.galleryService
}
func (a *Application) AgentJobService() *services.AgentJobService {
return a.agentJobService
}
// StartupConfig returns the original startup configuration (from env vars, before file loading)
func (a *Application) StartupConfig() *config.ApplicationConfig {
return a.startupConfig
@@ -67,5 +73,20 @@ func (a *Application) start() error {
a.galleryService = galleryService
// Initialize agent job service
agentJobService := services.NewAgentJobService(
a.ApplicationConfig(),
a.ModelLoader(),
a.ModelConfigLoader(),
a.TemplatesEvaluator(),
)
err = agentJobService.Start(a.ApplicationConfig().Context)
if err != nil {
return err
}
a.agentJobService = agentJobService
return nil
}

View File

@@ -11,7 +11,7 @@ import (
"dario.cat/mergo"
"github.com/fsnotify/fsnotify"
"github.com/mudler/LocalAI/core/config"
"github.com/rs/zerolog/log"
"github.com/mudler/xlog"
)
type fileHandler func(fileContent []byte, appConfig *config.ApplicationConfig) error
@@ -33,16 +33,18 @@ func newConfigFileHandler(appConfig *config.ApplicationConfig) configFileHandler
}
err := c.Register("api_keys.json", readApiKeysJson(*appConfig), true)
if err != nil {
log.Error().Err(err).Str("file", "api_keys.json").Msg("unable to register config file handler")
xlog.Error("unable to register config file handler", "error", err, "file", "api_keys.json")
}
err = c.Register("external_backends.json", readExternalBackendsJson(*appConfig), true)
if err != nil {
log.Error().Err(err).Str("file", "external_backends.json").Msg("unable to register config file handler")
xlog.Error("unable to register config file handler", "error", err, "file", "external_backends.json")
}
err = c.Register("runtime_settings.json", readRuntimeSettingsJson(*appConfig), true)
if err != nil {
log.Error().Err(err).Str("file", "runtime_settings.json").Msg("unable to register config file handler")
xlog.Error("unable to register config file handler", "error", err, "file", "runtime_settings.json")
}
// Note: agent_tasks.json and agent_jobs.json are handled by AgentJobService directly
// The service watches and reloads these files internally
return c
}
@@ -60,14 +62,14 @@ func (c *configFileHandler) Register(filename string, handler fileHandler, runNo
func (c *configFileHandler) callHandler(filename string, handler fileHandler) {
rootedFilePath := filepath.Join(c.appConfig.DynamicConfigsDir, filepath.Clean(filename))
log.Trace().Str("filename", rootedFilePath).Msg("reading file for dynamic config update")
xlog.Debug("reading file for dynamic config update", "filename", rootedFilePath)
fileContent, err := os.ReadFile(rootedFilePath)
if err != nil && !os.IsNotExist(err) {
log.Error().Err(err).Str("filename", rootedFilePath).Msg("could not read file")
xlog.Error("could not read file", "error", err, "filename", rootedFilePath)
}
if err = handler(fileContent, c.appConfig); err != nil {
log.Error().Err(err).Msg("WatchConfigDirectory goroutine failed to update options")
xlog.Error("WatchConfigDirectory goroutine failed to update options", "error", err)
}
}
@@ -79,13 +81,13 @@ func (c *configFileHandler) Watch() error {
}
if c.appConfig.DynamicConfigsDirPollInterval > 0 {
log.Debug().Msg("Poll interval set, falling back to polling for configuration changes")
xlog.Debug("Poll interval set, falling back to polling for configuration changes")
ticker := time.NewTicker(c.appConfig.DynamicConfigsDirPollInterval)
go func() {
for {
<-ticker.C
for file, handler := range c.handlers {
log.Debug().Str("file", file).Msg("polling config file")
xlog.Debug("polling config file", "file", file)
c.callHandler(file, handler)
}
}
@@ -109,7 +111,7 @@ func (c *configFileHandler) Watch() error {
c.callHandler(filepath.Base(event.Name), handler)
}
case err, ok := <-c.watcher.Errors:
log.Error().Err(err).Msg("config watcher error received")
xlog.Error("config watcher error received", "error", err)
if !ok {
return
}
@@ -133,8 +135,7 @@ func (c *configFileHandler) Stop() error {
func readApiKeysJson(startupAppConfig config.ApplicationConfig) fileHandler {
handler := func(fileContent []byte, appConfig *config.ApplicationConfig) error {
log.Debug().Msg("processing api keys runtime update")
log.Trace().Int("numKeys", len(startupAppConfig.ApiKeys)).Msg("api keys provided at startup")
xlog.Debug("processing api keys runtime update", "numKeys", len(startupAppConfig.ApiKeys))
if len(fileContent) > 0 {
// Parse JSON content from the file
@@ -144,14 +145,14 @@ func readApiKeysJson(startupAppConfig config.ApplicationConfig) fileHandler {
return err
}
log.Trace().Int("numKeys", len(fileKeys)).Msg("discovered API keys from api keys dynamic config dile")
xlog.Debug("discovered API keys from api keys dynamic config file", "numKeys", len(fileKeys))
appConfig.ApiKeys = append(startupAppConfig.ApiKeys, fileKeys...)
} else {
log.Trace().Msg("no API keys discovered from dynamic config file")
xlog.Debug("no API keys discovered from dynamic config file")
appConfig.ApiKeys = startupAppConfig.ApiKeys
}
log.Trace().Int("numKeys", len(appConfig.ApiKeys)).Msg("total api keys after processing")
xlog.Debug("total api keys after processing", "numKeys", len(appConfig.ApiKeys))
return nil
}
@@ -160,7 +161,7 @@ func readApiKeysJson(startupAppConfig config.ApplicationConfig) fileHandler {
func readExternalBackendsJson(startupAppConfig config.ApplicationConfig) fileHandler {
handler := func(fileContent []byte, appConfig *config.ApplicationConfig) error {
log.Debug().Msg("processing external_backends.json")
xlog.Debug("processing external_backends.json")
if len(fileContent) > 0 {
// Parse JSON content from the file
@@ -177,40 +178,15 @@ func readExternalBackendsJson(startupAppConfig config.ApplicationConfig) fileHan
} else {
appConfig.ExternalGRPCBackends = startupAppConfig.ExternalGRPCBackends
}
log.Debug().Msg("external backends loaded from external_backends.json")
xlog.Debug("external backends loaded from external_backends.json")
return nil
}
return handler
}
type runtimeSettings struct {
WatchdogEnabled *bool `json:"watchdog_enabled,omitempty"`
WatchdogIdleEnabled *bool `json:"watchdog_idle_enabled,omitempty"`
WatchdogBusyEnabled *bool `json:"watchdog_busy_enabled,omitempty"`
WatchdogIdleTimeout *string `json:"watchdog_idle_timeout,omitempty"`
WatchdogBusyTimeout *string `json:"watchdog_busy_timeout,omitempty"`
SingleBackend *bool `json:"single_backend,omitempty"`
ParallelBackendRequests *bool `json:"parallel_backend_requests,omitempty"`
Threads *int `json:"threads,omitempty"`
ContextSize *int `json:"context_size,omitempty"`
F16 *bool `json:"f16,omitempty"`
Debug *bool `json:"debug,omitempty"`
CORS *bool `json:"cors,omitempty"`
CSRF *bool `json:"csrf,omitempty"`
CORSAllowOrigins *string `json:"cors_allow_origins,omitempty"`
P2PToken *string `json:"p2p_token,omitempty"`
P2PNetworkID *string `json:"p2p_network_id,omitempty"`
Federated *bool `json:"federated,omitempty"`
Galleries *[]config.Gallery `json:"galleries,omitempty"`
BackendGalleries *[]config.Gallery `json:"backend_galleries,omitempty"`
AutoloadGalleries *bool `json:"autoload_galleries,omitempty"`
AutoloadBackendGalleries *bool `json:"autoload_backend_galleries,omitempty"`
ApiKeys *[]string `json:"api_keys,omitempty"`
}
func readRuntimeSettingsJson(startupAppConfig config.ApplicationConfig) fileHandler {
handler := func(fileContent []byte, appConfig *config.ApplicationConfig) error {
log.Debug().Msg("processing runtime_settings.json")
xlog.Debug("processing runtime_settings.json")
// Determine if settings came from env vars by comparing with startup config
// startupAppConfig contains the original values set from env vars at startup.
@@ -221,7 +197,10 @@ func readRuntimeSettingsJson(startupAppConfig config.ApplicationConfig) fileHand
envWatchdogIdleTimeout := appConfig.WatchDogIdleTimeout == startupAppConfig.WatchDogIdleTimeout
envWatchdogBusyTimeout := appConfig.WatchDogBusyTimeout == startupAppConfig.WatchDogBusyTimeout
envSingleBackend := appConfig.SingleBackend == startupAppConfig.SingleBackend
envMaxActiveBackends := appConfig.MaxActiveBackends == startupAppConfig.MaxActiveBackends
envParallelRequests := appConfig.ParallelBackendRequests == startupAppConfig.ParallelBackendRequests
envMemoryReclaimerEnabled := appConfig.MemoryReclaimerEnabled == startupAppConfig.MemoryReclaimerEnabled
envMemoryReclaimerThreshold := appConfig.MemoryReclaimerThreshold == startupAppConfig.MemoryReclaimerThreshold
envThreads := appConfig.Threads == startupAppConfig.Threads
envContextSize := appConfig.ContextSize == startupAppConfig.ContextSize
envF16 := appConfig.F16 == startupAppConfig.F16
@@ -234,9 +213,13 @@ func readRuntimeSettingsJson(startupAppConfig config.ApplicationConfig) fileHand
envFederated := appConfig.Federated == startupAppConfig.Federated
envAutoloadGalleries := appConfig.AutoloadGalleries == startupAppConfig.AutoloadGalleries
envAutoloadBackendGalleries := appConfig.AutoloadBackendGalleries == startupAppConfig.AutoloadBackendGalleries
envAgentJobRetentionDays := appConfig.AgentJobRetentionDays == startupAppConfig.AgentJobRetentionDays
envForceEvictionWhenBusy := appConfig.ForceEvictionWhenBusy == startupAppConfig.ForceEvictionWhenBusy
envLRUEvictionMaxRetries := appConfig.LRUEvictionMaxRetries == startupAppConfig.LRUEvictionMaxRetries
envLRUEvictionRetryInterval := appConfig.LRUEvictionRetryInterval == startupAppConfig.LRUEvictionRetryInterval
if len(fileContent) > 0 {
var settings runtimeSettings
var settings config.RuntimeSettings
err := json.Unmarshal(fileContent, &settings)
if err != nil {
return err
@@ -260,7 +243,7 @@ func readRuntimeSettingsJson(startupAppConfig config.ApplicationConfig) fileHand
if err == nil {
appConfig.WatchDogIdleTimeout = dur
} else {
log.Warn().Err(err).Str("timeout", *settings.WatchdogIdleTimeout).Msg("invalid watchdog idle timeout in runtime_settings.json")
xlog.Warn("invalid watchdog idle timeout in runtime_settings.json", "error", err, "timeout", *settings.WatchdogIdleTimeout)
}
}
if settings.WatchdogBusyTimeout != nil && !envWatchdogBusyTimeout {
@@ -268,15 +251,49 @@ func readRuntimeSettingsJson(startupAppConfig config.ApplicationConfig) fileHand
if err == nil {
appConfig.WatchDogBusyTimeout = dur
} else {
log.Warn().Err(err).Str("timeout", *settings.WatchdogBusyTimeout).Msg("invalid watchdog busy timeout in runtime_settings.json")
xlog.Warn("invalid watchdog busy timeout in runtime_settings.json", "error", err, "timeout", *settings.WatchdogBusyTimeout)
}
}
if settings.SingleBackend != nil && !envSingleBackend {
// Handle MaxActiveBackends (new) and SingleBackend (deprecated)
if settings.MaxActiveBackends != nil && !envMaxActiveBackends {
appConfig.MaxActiveBackends = *settings.MaxActiveBackends
// For backward compatibility, also set SingleBackend if MaxActiveBackends == 1
appConfig.SingleBackend = (*settings.MaxActiveBackends == 1)
} else if settings.SingleBackend != nil && !envSingleBackend {
// Legacy: SingleBackend maps to MaxActiveBackends = 1
appConfig.SingleBackend = *settings.SingleBackend
if *settings.SingleBackend {
appConfig.MaxActiveBackends = 1
} else {
appConfig.MaxActiveBackends = 0
}
}
if settings.ParallelBackendRequests != nil && !envParallelRequests {
appConfig.ParallelBackendRequests = *settings.ParallelBackendRequests
}
if settings.MemoryReclaimerEnabled != nil && !envMemoryReclaimerEnabled {
appConfig.MemoryReclaimerEnabled = *settings.MemoryReclaimerEnabled
if appConfig.MemoryReclaimerEnabled {
appConfig.WatchDog = true // Memory reclaimer requires watchdog
}
}
if settings.MemoryReclaimerThreshold != nil && !envMemoryReclaimerThreshold {
appConfig.MemoryReclaimerThreshold = *settings.MemoryReclaimerThreshold
}
if settings.ForceEvictionWhenBusy != nil && !envForceEvictionWhenBusy {
appConfig.ForceEvictionWhenBusy = *settings.ForceEvictionWhenBusy
}
if settings.LRUEvictionMaxRetries != nil && !envLRUEvictionMaxRetries {
appConfig.LRUEvictionMaxRetries = *settings.LRUEvictionMaxRetries
}
if settings.LRUEvictionRetryInterval != nil && !envLRUEvictionRetryInterval {
dur, err := time.ParseDuration(*settings.LRUEvictionRetryInterval)
if err == nil {
appConfig.LRUEvictionRetryInterval = dur
} else {
xlog.Warn("invalid LRU eviction retry interval in runtime_settings.json", "error", err, "interval", *settings.LRUEvictionRetryInterval)
}
}
if settings.Threads != nil && !envThreads {
appConfig.Threads = *settings.Threads
}
@@ -328,6 +345,9 @@ func readRuntimeSettingsJson(startupAppConfig config.ApplicationConfig) fileHand
// Replace all runtime keys with what's in runtime_settings.json
appConfig.ApiKeys = append(envKeys, runtimeKeys...)
}
if settings.AgentJobRetentionDays != nil && !envAgentJobRetentionDays {
appConfig.AgentJobRetentionDays = *settings.AgentJobRetentionDays
}
// If watchdog is enabled via file but not via env, ensure WatchDog flag is set
if !envWatchdogIdle && !envWatchdogBusy {
@@ -336,7 +356,7 @@ func readRuntimeSettingsJson(startupAppConfig config.ApplicationConfig) fileHand
}
}
}
log.Debug().Msg("runtime settings loaded from runtime_settings.json")
xlog.Debug("runtime settings loaded from runtime_settings.json")
return nil
}
return handler
