* feat(face-recognition): add insightface backend for 1:1 verify, 1:N identify, embedding, detection, analysis
Adds face recognition as a new first-class capability in LocalAI via the
`insightface` Python backend, with a pluggable two-engine design so
non-commercial (insightface model packs) and commercial-safe
(OpenCV Zoo YuNet + SFace) models share the same gRPC/HTTP surface.
New gRPC RPCs (backend/backend.proto):
* FaceVerify(FaceVerifyRequest) returns FaceVerifyResponse
* FaceAnalyze(FaceAnalyzeRequest) returns FaceAnalyzeResponse
Existing Embedding and Detect RPCs are reused (face image in
PredictOptions.Images / DetectOptions.src) for face embedding and
face detection respectively.
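For illustration, calling the new RPCs from Python looks roughly like the sketch below. The request fields are deliberately left empty because they are not spelled out here; backend/backend.proto is the source of truth.
  import grpc
  import backend_pb2, backend_pb2_grpc  # stubs generated from backend/backend.proto

  channel = grpc.insecure_channel("127.0.0.1:50051")
  stub = backend_pb2_grpc.BackendStub(channel)
  # FaceVerifyRequest fields are intentionally omitted -- consult backend.proto
  # for the actual message layout before filling them in.
  reply = stub.FaceVerify(backend_pb2.FaceVerifyRequest())
  print(reply)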
New HTTP endpoints under /v1/face/:
* verify — 1:1 image pair same-person decision
* analyze — per-face age + gender (emotion/race reserved)
* register — 1:N enrollment; stores embedding in vector store
* identify — 1:N recognition; detect → embed → StoresFind
* forget — remove a registered face by opaque ID
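A rough Python client sketch of the 1:1 and 1:N flows. The JSON field names below are assumptions for illustration only; the feature docs define the real request schema.
  import base64, requests

  BASE = "http://localhost:8080/v1/face"

  def b64(path):
      return base64.b64encode(open(path, "rb").read()).decode()

  # 1:1 -- same-person decision on an image pair (field names assumed)
  print(requests.post(f"{BASE}/verify", json={
      "model": "insightface-opencv", "image_a": b64("a.jpg"), "image_b": b64("b.jpg"),
  }).json())

  # 1:N -- enroll once, then identify (field names assumed)
  requests.post(f"{BASE}/register", json={
      "model": "insightface-opencv", "image": b64("alice.jpg"), "name": "alice"})
  print(requests.post(f"{BASE}/identify", json={
      "model": "insightface-opencv", "image": b64("unknown.jpg")}).json())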
Service layer (core/services/facerecognition/) introduces a
`Registry` interface with one in-memory `storeRegistry` impl backed
by LocalAI's existing local-store gRPC vector backend. HTTP handlers
depend on the interface, not on StoresSet/StoresFind directly, so a
persistent PostgreSQL/pgvector implementation can be slotted in via a
single constructor change in core/application (TODO marker in the
package doc).
New usecase flag FLAG_FACE_RECOGNITION; insightface is also wired
into FLAG_DETECTION so /v1/detection works for face bounding boxes.
Gallery (backend/index.yaml) ships three entries:
* insightface-buffalo-l — SCRFD-10GF + ArcFace R50 + genderage
(~326MB pre-baked; non-commercial research use only)
* insightface-opencv — YuNet + SFace (~40MB pre-baked; Apache 2.0)
* insightface-buffalo-s — SCRFD-500MF + MBF (runtime download; non-commercial)
Python backend (backend/python/insightface/):
* engines.py — FaceEngine protocol with InsightFaceEngine and
OnnxDirectEngine; resolves model paths relative to the backend
directory so the same gallery config works in docker-scratch and
in the e2e-backends rootfs-extraction harness.
* backend.py — gRPC servicer implementing Health, LoadModel, Status,
Embedding, Detect, FaceVerify, FaceAnalyze.
* install.sh — pre-bakes buffalo_l + OpenCV YuNet/SFace inside the
backend directory so first-run is offline-clean (the final scratch
image only preserves files under /<backend>/).
* test.py — parametrized unit tests over both engines.
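The two-engine shape in engines.py is roughly the following Protocol; method names and signatures here are illustrative, not a verbatim copy of the real interface.
  from typing import Optional, Protocol, Sequence
  import numpy as np

  class FaceEngine(Protocol):
      def prepare(self, options: dict, model_dir: Optional[str]) -> None: ...
      def detect(self, img: np.ndarray) -> Sequence[dict]: ...   # boxes (+ landmarks where available)
      def embed(self, img: np.ndarray) -> np.ndarray: ...        # L2-normalized face vector
      def analyze(self, img: np.ndarray) -> Sequence[dict]: ...  # age/gender per detected face

  # Both InsightFaceEngine and OnnxDirectEngine satisfy this shape, so the gRPC
  # servicer can stay engine-agnostic.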
Tests:
* Registry unit tests (go test -race ./core/services/facerecognition/...)
— in-memory fake grpc.Backend, table-driven, covers register/
identify/forget/error paths + concurrent access.
* tests/e2e-backends/backend_test.go extended with face caps
(face_detect, face_embed, face_verify, face_analyze); relative
ordering + configurable verifyCeiling per engine.
* Makefile targets: test-extra-backend-insightface-buffalo-l,
-opencv, and the -all aggregate.
* CI: .github/workflows/test-extra.yml gains tests-insightface-grpc,
auto-triggered by changes under backend/python/insightface/.
Docs:
* docs/content/features/face-recognition.md — feature page with
license table, quickstart (defaults to the commercial-safe model),
models matrix, API reference, 1:N workflow, storage caveats.
* Cross-refs in object-detection.md, stores.md, embeddings.md, and
whats-new.md.
* Contributor README at backend/python/insightface/README.md.
Verified end-to-end:
* buffalo_l: 6/6 specs (health, load, face_detect, face_embed,
face_verify, face_analyze).
* opencv: 5/5 specs (same minus face_analyze — SFace has no
demographic head; correctly skipped via BACKEND_TEST_CAPS).
Assisted-by: Claude:claude-opus-4-7
* fix(face-recognition): move engine selection to model gallery, collapse backend entries
The previous commit put engine/model_pack options on backend gallery
entries (`backend/index.yaml`). That was wrong — `GalleryBackend`
(core/gallery/backend_types.go:32) has no `options` field, so the
YAML decoder silently dropped those keys and all three "different
insightface-*" backend entries resolved to the same container image
with no distinguishing configuration.
Correct split:
* `backend/index.yaml` now has ONE `insightface` backend entry
shipping the CPU + CUDA 12 container images. The Python backend
bundles both the non-commercial insightface model packs
(buffalo_l / buffalo_s) and the commercial-safe OpenCV Zoo
weights (YuNet + SFace); the active engine is selected at
LoadModel time via `options: ["engine:..."]`.
* `gallery/index.yaml` gains three model entries —
`insightface-buffalo-l`, `insightface-opencv`,
`insightface-buffalo-s` — each setting the appropriate
`overrides.backend` + `overrides.options` so installing one
actually gives the user the intended engine. This matches how
`rfdetr-base` lives in the model gallery against the `rfdetr`
backend.
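For reference, the `options: ["key:value"]` convention boils down to something like the following sketch (not the literal backend.py parsing; the default engine name is an assumption):
  def parse_options(options):
      parsed = {}
      for opt in options or []:
          key, _, value = opt.partition(":")
          parsed[key] = value
      return parsed

  opts = parse_options(["engine:insightface", "model_pack:buffalo_l"])
  engine = opts.get("engine", "insightface")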
The earlier e2e tests passed despite this bug because the Makefile
targets pass `BACKEND_TEST_OPTIONS` directly to LoadModel via gRPC,
bypassing any gallery resolution entirely. No code changes needed.
Assisted-by: Claude:claude-opus-4-7
* feat(face-recognition): cover all supported models in the gallery + drop weight baking
Follows up on the model-gallery split: adds entries for every model
configuration either engine actually supports, and switches weight
delivery from image-baked to LocalAI's standard gallery mechanism.
Gallery now has seven `insightface-*` model entries (gallery/index.yaml):
insightface (family) — non-commercial research use
• buffalo-l (326MB) — SCRFD-10GF + ResNet50 + genderage, default
• buffalo-m (313MB) — SCRFD-2.5GF + ResNet50 + genderage
• buffalo-s (159MB) — SCRFD-500MF + MBF + genderage
• buffalo-sc (16MB) — SCRFD-500MF + MBF, recognition only
(no landmarks, no demographics — analyze
returns empty attributes)
• antelopev2 (407MB) — SCRFD-10GF + ResNet100@Glint360K + genderage
OpenCV Zoo family — Apache 2.0 commercial-safe
• opencv — YuNet + SFace fp32 (~40MB)
• opencv-int8 — YuNet + SFace int8 (~12MB, ~3x smaller, faster on CPU)
Model weights are no longer baked into the backend image. The image
now ships only the Python runtime + libraries (~275MB content size,
~1.18GB disk vs ~1.21GB when weights were baked). Weights flow through
LocalAI's gallery mechanism:
* OpenCV variants list `files:` with ONNX URIs + SHA-256, so
`local-ai models install insightface-opencv` pulls them into the
models directory exactly like any other gallery-managed model.
* insightface packs (upstream distributes .zip archives only, not
individual ONNX files) auto-download on first LoadModel via
FaceAnalysis' built-in machinery, rooted at the LocalAI models
directory so they live alongside everything else — same pattern
`rfdetr` uses with `inference.get_model()`.
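Per file, the gallery `files:` handling (and the smoke test below) amounts to a checksum-verified fetch into the models directory. A stdlib-only sketch; the URL/hash are placeholders, the real URIs and SHA-256 values live in gallery/index.yaml:
  import hashlib, pathlib, urllib.request

  def fetch_verified(url: str, sha256: str, dest: pathlib.Path) -> pathlib.Path:
      dest.parent.mkdir(parents=True, exist_ok=True)
      if not dest.exists():
          urllib.request.urlretrieve(url, dest)      # skip re-download on re-runs
      digest = hashlib.sha256(dest.read_bytes()).hexdigest()
      if digest != sha256:
          raise RuntimeError(f"checksum mismatch for {dest.name}: {digest}")
      return dest

  # fetch_verified("https://example.com/yunet.onnx", "<sha256>", pathlib.Path("models/yunet.onnx"))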
Backend changes (backend/python/insightface/):
* backend.py — LoadModel propagates `ModelOptions.ModelPath` (the
LocalAI models directory) to engines via a `_model_dir` hint.
This replaces the earlier ModelFile-dirname approach; ModelPath
is the canonical "models directory" variable set by the Go loader
(pkg/model/initializers.go:144) and is always populated.
* engines.py::_resolve_model_path — picks up `model_dir` and searches
it (plus basename-in-model-dir) before falling back to the dev
script-dir. This is how OnnxDirectEngine finds gallery-downloaded
YuNet/SFace files by filename only.
* engines.py::_flatten_insightface_pack — new helper that works
around an upstream packaging inconsistency: buffalo_l/s/sc zips
expand flat, but buffalo_m and antelopev2 zips wrap their ONNX
files in a redundant `<name>/` directory. insightface's own
loader looks one level too shallow and fails. We call
`ensure_available()` explicitly, flatten if nested, then hand to
FaceAnalysis.
* engines.py::InsightFaceEngine.prepare — root-resolution order now
includes the `_model_dir` hint so packs download into the LocalAI
models directory by default.
* install.sh — no longer pre-downloads any weights. Everything is
gallery-managed now.
* smoke.py (new) — parametrized smoke test that iterates over every
gallery configuration, simulating the LocalAI install flow
(creates a models dir, fetches OpenCV files with checksum
verification, lets insightface auto-download its packs), then
runs detect + embed + verify (+ analyze where supported) through
the in-process BackendServicer.
* test.py — OnnxDirectEngineTest no longer hardcodes `/models/opencv/`
paths; downloads ONNX files to a temp dir at setUpClass time and
passes ModelPath accordingly.
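The search order described above, sketched (illustrative only, not the literal _resolve_model_path body):
  import os

  def resolve_model_path(path: str, model_dir, script_dir: str) -> str:
      if os.path.isabs(path) and os.path.exists(path):
          return path                                                        # absolute paths win outright
      candidates = []
      if model_dir:
          candidates.append(os.path.join(model_dir, path))                   # relative under the models dir
          candidates.append(os.path.join(model_dir, os.path.basename(path))) # basename-only lookup
      candidates.append(os.path.join(script_dir, path))                      # dev fallback: backend directory
      for candidate in candidates:
          if os.path.exists(candidate):
              return candidate
      raise FileNotFoundError(path)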
Registry change (core/services/facerecognition/store_registry.go):
* `dim=0` in NewStoreRegistry now means "accept whatever dimension
arrives" — needed because the backend supports 512-d ArcFace/MBF
and 128-d SFace via the same Registry. A non-zero dim still fails
fast with ErrDimensionMismatch.
* core/application plumbs `faceEmbeddingDim = 0`, explaining the
rationale in the comment.
Backend gallery description updated to reflect that the image carries
no weights — it's just Python + engines.
Smoke-tested all 7 configurations against the rebuilt image (with the
flatten fix applied), exit 0:
PASS: insightface-buffalo-l faces=6 dim=512 same-dist=0.000
PASS: insightface-buffalo-sc faces=6 dim=512 same-dist=0.000
PASS: insightface-buffalo-s faces=6 dim=512 same-dist=0.000
PASS: insightface-buffalo-m faces=6 dim=512 same-dist=0.000
PASS: insightface-antelopev2 faces=6 dim=512 same-dist=0.000
PASS: insightface-opencv faces=6 dim=128 same-dist=0.000
PASS: insightface-opencv-int8 faces=6 dim=128 same-dist=0.000
7/7 passed
Assisted-by: Claude:claude-opus-4-7
* fix(face-recognition): pre-fetch OpenCV ONNX for e2e target; drop stale pre-baked claim
CI regression from the previous commit: I moved OpenCV Zoo weight
delivery to LocalAI's gallery `files:` mechanism, but the
test-extra-backend-insightface-opencv target was still passing
relative paths `detector_onnx:models/opencv/yunet.onnx` in
BACKEND_TEST_OPTIONS. The e2e suite drives LoadModel directly over
gRPC without going through the gallery, so those relative paths
resolved to nothing and OpenCV's ONNXImporter failed:
LoadModel failed: Failed to load face engine:
OpenCV(4.13.0) ... Can't read ONNX file: models/opencv/yunet.onnx
Fix: add an `insightface-opencv-models` prerequisite target that
fetches the two ONNX files (YuNet + SFace) to a deterministic host
cache at /tmp/localai-insightface-opencv-cache/, verifies SHA-256,
and skips the download on re-runs. The opencv test target depends on
it and passes absolute paths in BACKEND_TEST_OPTIONS, so the backend
finds the files via its normal absolute-path resolution branch.
Also refresh the buffalo_l comment: it no longer says "pre-baked"
(nothing is — the pack auto-downloads from upstream's GitHub release
on first LoadModel, same as in CI).
Locally verified: `make test-extra-backend-insightface-opencv` passes
5/5 specs (health, load, face_detect, face_embed, face_verify).
Assisted-by: Claude:claude-opus-4-7
* feat(face-recognition): add POST /v1/face/embed + correct /v1/embeddings docs
The docs promised that /v1/embeddings returns face vectors when you
send an image data-URI. That was never true: /v1/embeddings is
OpenAI-compatible and text-only by contract — its handler goes
through `core/backend/embeddings.go::ModelEmbedding`, which sets
`predictOptions.Embeddings = s` (a string of TEXT to embed) and never
populates `predictOptions.Images[]`. The Python backend's Embedding
gRPC method does handle Images[] (that's how /v1/face/register reaches
it internally via `backend.FaceEmbed`), but the HTTP embeddings
endpoint wasn't wired to populate it.
Rather than overload /v1/embeddings with image-vs-text detection —
messy, and the endpoint is OpenAI-compatible by design — add a
dedicated /v1/face/embed endpoint that wraps `backend.FaceEmbed`
(already used internally by /v1/face/register and /v1/face/identify).
Matches LocalAI's convention of a dedicated path per non-standard flow
(/v1/rerank, /v1/detection, /v1/face/verify etc.).
Response:
{
"embedding": [<dim> floats, L2-normed],
"dim": int, // 512 for ArcFace R50 / MBF, 128 for SFace
"model": "<name>"
}
Live-tested on the opencv engine: returns a 128-d L2-normalized vector
(sum(x^2) = 1.0000). Sentinel in docs updated to note /v1/embeddings
is text-only and point image users at /v1/face/embed instead.
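The live check amounts to roughly the following (request field names assumed, as above):
  import base64, requests
  import numpy as np

  img = base64.b64encode(open("face.jpg", "rb").read()).decode()
  r = requests.post("http://localhost:8080/v1/face/embed",
                    json={"model": "insightface-opencv", "image": img})
  vec = np.asarray(r.json()["embedding"], dtype=np.float64)
  print(r.json()["dim"], float(np.sum(vec ** 2)))   # expect 128 and ~1.0 (L2-normalized)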
Assisted-by: Claude:claude-opus-4-7
* fix(http): map malformed image input + gRPC status codes to proper 4xx
Image-input failures on LocalAI's single-image endpoints (/v1/detection,
/v1/face/{verify,analyze,embed,register,identify}) have historically
returned 500 — even when the client was the one who sent garbage.
Classic example: you POST an "image" that isn't a URL, isn't a
data-URI, and isn't a valid JPEG/PNG — the server shouldn't claim
that's its fault.
Two helpers land in core/http/endpoints/localai/images.go and every
single-image handler is switched over:
* decodeImageInput(s)
Wraps utils.GetContentURIAsBase64 and turns any failure
(invalid URL, not a data-URI, download error, etc.) into
echo.NewHTTPError(400, "invalid image input: ...").
* mapBackendError(err)
Inspects the gRPC status on a backend call error and maps:
INVALID_ARGUMENT → 400 Bad Request
NOT_FOUND → 404 Not Found
FAILED_PRECONDITION → 412 Precondition Failed
UNIMPLEMENTED → 501 Not Implemented
All other codes fall through unchanged (still 500).
Before, my 1×1 PNG error-path test returned:
HTTP 500 "rpc error: code = InvalidArgument desc = failed to decode one or both images"
After:
HTTP 400 "failed to decode one or both images"
Scope-limited to the LocalAI single-image endpoints. The multi-modal
paths (middleware/request.go, openresponses/responses.go,
openai/realtime.go) intentionally log-and-skip individual media parts
when decoding fails — different design intent (graceful degradation
of a multi-part message), not a 400-worthy failure. Left untouched.
Live-verified: every error case in /tmp/face_errors.py now returns
4xx with a meaningful message; the "image with no face (1x1 PNG)"
case specifically went from 500 → 400.
Assisted-by: Claude:claude-opus-4-7
* refactor(face-recognition): insightface packs go through gallery files:, drop FaceAnalysis
Follows up on the discovery that LocalAI's gallery `files:` mechanism
handles archives (zip, tar.gz, …) via mholt/archiver/v3 — the rhasspy
piper voices use exactly this pattern. Insightface packs are zip
archives, so we can now deliver them the same way every other
gallery-managed model gets delivered: declaratively, checksum-verified,
through LocalAI's standard download+extract pipeline.
Two changes:
1. Gallery (gallery/index.yaml) — every insightface-* entry gains a
`files:` list with the pack zip's URI + SHA-256. `local-ai models
install insightface-buffalo-l` now fetches the zip, verifies the
hash, and extracts it into the models directory. No more reliance
on insightface's library-internal `ensure_available()` auto-download
or its hardcoded `BASE_REPO_URL`.
2. InsightFaceEngine (backend/python/insightface/engines.py) — drops
the FaceAnalysis wrapper and drives insightface's `model_zoo`
directly. The ~50 lines FaceAnalysis provides — glob ONNX files,
route each through `model_zoo.get_model()`, build a
`{taskname: model}` dict, loop per-face at inference — are
reimplemented in `InsightFaceEngine`. The actual inference classes
(RetinaFace, ArcFaceONNX, Attribute, Landmark) are still
insightface's — we only replicate the glue, so drift risk against
upstream is minimal.
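The replicated glue is essentially the sketch below (condensed; the real InsightFaceEngine adds prepare/threshold handling, and `onnx_files` is assumed to already point at the pack's ONNX files):
  from insightface import model_zoo

  def build_models(onnx_files):
      models = {}
      for f in sorted(onnx_files):
          m = model_zoo.get_model(f)       # RetinaFace / ArcFaceONNX / Attribute / Landmark
          if m is not None and m.taskname not in models:
              models[m.taskname] = m       # e.g. 'detection', 'recognition', 'genderage'
      return models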
Why drop FaceAnalysis: it hard-codes a `<root>/models/<name>/*.onnx`
layout that doesn't match what LocalAI's zip extraction produces.
LocalAI unpacks archives flat into `<models_dir>`. Upstream packs
are inconsistent — buffalo_l/s/sc ship ONNX at the zip root (lands
at `<models_dir>/*.onnx`), buffalo_m/antelopev2 wrap in a redundant
`<name>/` dir (lands at `<models_dir>/<name>/*.onnx`). The new
`_locate_insightface_pack` helper searches both locations plus
legacy paths and returns whichever has ONNX files. Replaces the
earlier `_flatten_insightface_pack` helper (which tried to fight
FaceAnalysis's layout expectations; now we just find the files
wherever they are).
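The lookup itself reduces to checking both layouts; a sketch of the idea rather than the exact helper:
  import glob, os

  def locate_pack(models_dir: str, pack: str):
      # flat layout first (buffalo_l/s/sc), then the nested one (buffalo_m, antelopev2)
      for candidate in (models_dir, os.path.join(models_dir, pack)):
          onnx = glob.glob(os.path.join(candidate, "*.onnx"))
          if onnx:
              return candidate, sorted(onnx)
      raise FileNotFoundError(f"no insightface pack found under {models_dir}")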
Net effect for users: install once via LocalAI's managed flow,
weights live alongside every other model, progress shows in the
jobs endpoint, no first-load network call. Same API surface,
cleaner plumbing.
Assisted-by: Claude:claude-opus-4-7
* fix(face-recognition): CI's insightface e2e path needs the pack pre-fetched
The e2e suite drives LoadModel over gRPC without going through LocalAI's
gallery flow, so the engine's `_model_dir` option (normally populated
from ModelPath) is empty. Previously the insightface target relied on
FaceAnalysis auto-download to paper over this, but we dropped
FaceAnalysis in favor of direct model_zoo calls — so the buffalo_l
target started failing at LoadModel with "no insightface pack found".
Mirror the opencv target's pre-fetch pattern: download buffalo_sc.zip
(same SHA as the gallery entry), extract it on the host, and pass
`root:<dir>` so the engine locates the pack without needing
ModelPath. Switched to buffalo_sc (smallest pack, ~16MB) to keep CI
fast; it covers the same insightface engine code path as buffalo_l.
Face analyze cap dropped since buffalo_sc has no age/gender head.
Assisted-by: Claude:claude-opus-4-7[1m]
* feat(face-recognition): surface face-recognition in advertised feature maps
The six /v1/face/* endpoints were missing from every place LocalAI
advertises its feature surface to clients:
* api_instructions — the machine-readable capability index at
GET /api/instructions. Added `face-recognition` as a dedicated
instruction area with an intro that calls out the in-memory
registry caveat and the /v1/face/embed vs /v1/embeddings split.
* auth/permissions — added FeatureFaceRecognition constant, routed
all six face endpoints through it so admins can gate them per-user
like any other API feature. Default ON (matches the other API
features).
* React UI capabilities — CAP_FACE_RECOGNITION symbol mapped to
FLAG_FACE_RECOGNITION. Declared only for now; the Face page is a
follow-up (noted in the plan).
Instruction count bumped 9 → 10; test updated.
Assisted-by: Claude:claude-opus-4-7[1m]
* docs(agents): capture advertising-surface steps in the endpoint guide
Before this change, adding a new /v1/* endpoint reliably missed one or
more of: the swagger @Tags annotation, the /api/instructions registry,
the auth RouteFeatureRegistry, and the React UI CAP_* symbol. The
endpoint would work but be invisible to API consumers, admins, and the
UI — and nothing in the existing docs said to look in those places.
Extend .agents/api-endpoints-and-auth.md with a new "Advertising
surfaces" section covering all four surfaces (swagger tags, /api/
instructions, capabilities.js, docs/), and expand the closing checklist
so it's impossible to ship a feature without visiting each one. Hoist a
one-liner reminder into AGENTS.md's Quick Reference so agents skim it
before diving in.
Assisted-by: Claude:claude-opus-4-7[1m]
+++
disableToc = false
title = "News"
weight = 7
url = '/basics/news/'
icon = "newspaper"
+++
Release notes have now moved completely over to GitHub releases.
You can see the release notes here.
2026 Highlights
- April 2026: Face recognition backend — insightface-powered 1:1 verification, 1:N identification, face embedding, face detection, and demographic analysis. Ships both a non-commercial `buffalo_l` model and an Apache 2.0 OpenCV Zoo alternative.
2024 Highlights
- April 2024: Reranker API
- May 2024: Distributed inferencing, Decentralized P2P llama.cpp — Docs
- July/August 2024: P2P Dashboard, Federated mode and AI Swarms, P2P Global community pools, FLUX-1 support, P2P Explorer
- October 2024: Examples moved to LocalAI-examples
- November 2024: Voice Activity Detection (VAD), Bark.cpp backend
- December 2024: stablediffusion.cpp backend (ggml)
04-12-2023: v2.0.0
This release brings a major overhaul in some backends.
Breaking/important changes:
- Backend rename: `llama-stable` renamed to `llama-ggml` {{< pr "1287" >}}
- Prompt template changes: {{< pr "1254" >}} (extra space in roles)
- Apple metal bugfixes: {{< pr "1365" >}}
New:
- Added support for LLaVa and OpenAI Vision API support ({{< pr "1254" >}})
- Python based backends are now using conda to track env dependencies ( {{< pr "1144" >}} )
- Support for parallel requests ( {{< pr "1290" >}} )
- Support for transformers-embeddings ( {{< pr "1308" >}})
- Watchdog for backends ( {{< pr "1341" >}}). As https://github.com/ggerganov/llama.cpp/issues/3969 is hitting LocalAI's llama-cpp implementation, we have now a watchdog that can be used to make sure backends are not stalling. This is a generic mechanism that can be enabled for all the backends now.
- Whisper.cpp updates ( {{< pr "1302" >}} )
- Petals backend ( {{< pr "1350" >}} )
- Full LLM fine-tuning example to use with LocalAI: https://localai.io/advanced/fine-tuning/
Due to the Python dependencies, the images grew in size.
If you still want smaller images without Python dependencies, you can use the corresponding image tags ending with `-core`.
Full changelog: https://github.com/mudler/LocalAI/releases/tag/v2.0.0
30-10-2023: v1.40.0
This release is a preparation before v2 - the efforts now will be to refactor, polish and add new backends. Follow up on: https://github.com/mudler/LocalAI/issues/1126
Hot topics
This release now brings the llama-cpp backend, a C++ backend tied to llama.cpp. It follows more closely and tracks recent versions of llama.cpp. It is not feature compatible with the current llama backend, but the plan is to sunset the current llama backend in favor of this one. This will probably be the last release containing the older llama backend written in Go and C++. The major improvement with this change is that there are fewer layers that could be exposed to potential bugs, and it also eases maintenance.
Support for ROCm/HIPBLAS
This release brings support for AMD thanks to @65a. See more details in {{< pr "1100" >}}
More CLI commands
Thanks to @jespino, the local-ai binary now has more subcommands, allowing you to manage the gallery or try out inferencing directly. Check it out!
25-09-2023: v1.30.0
This is an exciting LocalAI release! Besides bug-fixes and enhancements this release brings the new backend to a whole new level by extending support to vllm and vall-e-x for audio generation!
Check out the documentation for vllm here and Vall-E-X here
26-08-2023: v1.25.0
Hey everyone, Ettore here, I'm so happy to share this release out - while this summer is hot, it apparently doesn't stop LocalAI development :)
This release brings a lot of new features, bugfixes and updates! Also a big shout out to the community, this was a great release!
Attention 🚨
From this release the llama backend supports only gguf files (see {{< pr "943" >}}). LocalAI however still supports ggml files. We ship a version of llama.cpp before that change in a separate backend, named llama-stable to allow still loading ggml files. If you were specifying the llama backend manually to load ggml files from this release you should use llama-stable instead, or do not specify a backend at all (LocalAI will automatically handle this).
Image generation enhancements
The [Diffusers]({{%relref "features/image-generation" %}}) backend got now various enhancements, including support to generate images from images, longer prompts, and support for more kernels schedulers. See the [Diffusers]({{%relref "features/image-generation" %}}) documentation for more information.
Lora adapters
Now it's possible to load lora adapters for llama.cpp. See {{< pr "955" >}} for more information.
Device management
It is now possible for single devices with one GPU to specify --single-active-backend to allow only one backend to be active at a time {{< pr "925" >}}.
Community spotlight
Resources management
Thanks to the continuous community efforts (another cool contribution from {{< github "dave-gray101" >}} ) it's now possible to shut down a backend programmatically via the API. There is an ongoing effort in the community to better handle resources. See also the 🔥Roadmap.
New how-to section
Thanks to the community efforts now we have a new how-to website with various examples on how to use LocalAI. This is a great starting point for new users! We are currently working on improving it, a huge shout out to {{< github "lunamidori5" >}} from the community for the impressive efforts on this!
💡 More examples!
- Open source autopilot? See the new addition by {{< github "gruberdev" >}} in our examples on how to use Continue with LocalAI!
- Want to try LocalAI with Insomnia? Check out the new Insomnia example by {{< github "dave-gray101" >}}!
LocalAGI in discord!
Did you know that we now have a few cool bots in our Discord? Come check them out! We also have an instance of LocalAGI ready to help you out!
Changelog summary
Breaking Changes 🛠
- feat: bump llama.cpp, add gguf support by {{< github "mudler" >}} in {{< pr "943" >}}
Exciting New Features 🎉
- feat(Makefile): allow to restrict backend builds by {{< github "mudler" >}} in {{< pr "890" >}}
- feat(diffusers): various enhancements by {{< github "mudler" >}} in {{< pr "895" >}}
- feat: make initializer accept gRPC delay times by {{< github "mudler" >}} in {{< pr "900" >}}
- feat(diffusers): add DPMSolverMultistepScheduler++, DPMSolverMultistepSchedulerSDE++, guidance_scale by {{< github "mudler" >}} in {{< pr "903" >}}
- feat(diffusers): overcome prompt limit by {{< github "mudler" >}} in {{< pr "904" >}}
- feat(diffusers): add img2img and clip_skip, support more kernels schedulers by {{< github "mudler" >}} in {{< pr "906" >}}
- Usage Features by {{< github "dave-gray101" >}} in {{< pr "863" >}}
- feat(diffusers): be consistent with pipelines, support also depthimg2img by {{< github "mudler" >}} in {{< pr "926" >}}
- feat: add --single-active-backend to allow only one backend active at the time by {{< github "mudler" >}} in {{< pr "925" >}}
- feat: add llama-stable backend by {{< github "mudler" >}} in {{< pr "932" >}}
- feat: allow to customize rwkv tokenizer by {{< github "dave-gray101" >}} in {{< pr "937" >}}
- feat: backend monitor shutdown endpoint, process based by {{< github "dave-gray101" >}} in {{< pr "938" >}}
- feat: Allow to load lora adapters for llama.cpp by {{< github "mudler" >}} in {{< pr "955" >}}
Join our Discord community! our vibrant community is growing fast, and we are always happy to help! https://discord.gg/uJAeKSAGDy
The full changelog is available here.
🔥🔥🔥🔥 12-08-2023: v1.24.0 🔥🔥🔥🔥
This release brings four(!) new backends to LocalAI: [🐶 Bark]({{%relref "features/text-to-audio#bark" %}}), 🦙 [AutoGPTQ]({{%relref "features/text-generation#autogptq" %}}), [🧨 Diffusers]({{%relref "features/image-generation" %}}), 🦙 [exllama]({{%relref "features/text-generation#exllama" %}}) and a lot of improvements!
Major improvements:
- feat: add bark and AutoGPTQ by {{< github "mudler" >}} in {{< pr "871" >}}
- feat: Add Diffusers by {{< github "mudler" >}} in {{< pr "874" >}}
- feat: add API_KEY list support by {{< github "neboman11" >}} and {{< github "bnusunny" >}} in {{< pr "877" >}}
- feat: Add exllama by {{< github "mudler" >}} in {{< pr "881" >}}
- feat: pre-configure LocalAI galleries by {{< github "mudler" >}} in {{< pr "886" >}}
🐶 Bark
[Bark]({{%relref "features/text-to-audio#bark" %}}) is a text-prompted generative audio model - it combines GPT techniques to generate Audio from text. It is a great addition to LocalAI, and it's available in the container images by default.
It can also generate music, see the example: lion.webm
🦙 AutoGPTQ
[AutoGPTQ]({{%relref "features/text-generation#autogptq" %}}) is an easy-to-use LLMs quantization package with user-friendly apis, based on GPTQ algorithm.
It is targeted mainly at GPU usage. Check out the [documentation]({{%relref "features/text-generation" %}}) for usage.
🦙 Exllama
[Exllama]({{%relref "features/text-generation#exllama" %}}) is "a more memory-efficient rewrite of the HF transformers implementation of Llama for use with quantized weights". It is a faster alternative for running LLaMA models on GPU. Check out the [Exllama documentation]({{%relref "features/text-generation#exllama" %}}) for usage.
🧨 Diffusers
[Diffusers]({{%relref "features/image-generation#diffusers" %}}) is the go-to library for state-of-the-art pretrained diffusion models for generating images, audio, and even 3D structures of molecules. It is currently experimental and supports generation of images only, so you might encounter issues with models that haven't been tested yet. Check out the [Diffusers documentation]({{%relref "features/image-generation" %}}) for usage.
🔑 API Keys
Thanks to the community contributions now it's possible to specify a list of API keys that can be used to gate API requests.
API Keys can be specified with the API_KEY environment variable as a comma-separated list of keys.
🖼️ Galleries
Now by default the model-gallery repositories are configured in the container images
💡 New project
LocalAGI is a simple agent that uses LocalAI functions to have a full locally runnable assistant (with no API keys needed).
See it here in action planning a trip for San Francisco!
The full changelog is available here.
🔥🔥 29-07-2023: v1.23.0 🚀
This release focuses mostly on bugfixing and updates, with just a couple of new features:
- feat: add rope settings and negative prompt, drop grammar backend by {{< github "mudler" >}} in {{< pr "797" >}}
- Added CPU information to entrypoint.sh by @finger42 in {{< pr "794" >}}
- feat: cancel stream generation if client disappears by @tmm1 in {{< pr "792" >}}
Most notably, this release brings important fixes for CUDA (and not only):
- fix: add rope settings during model load, fix CUDA by {{< github "mudler" >}} in {{< pr "821" >}}
- fix: select function calls if 'name' is set in the request by {{< github "mudler" >}} in {{< pr "827" >}}
- fix: symlink libphonemize in the container by {{< github "mudler" >}} in {{< pr "831" >}}
{{% notice note %}}
From this release [OpenAI functions]({{%relref "features/openai-functions" %}}) are available in the llama backend. The llama-grammar has been deprecated. See also [OpenAI functions]({{%relref "features/openai-functions" %}}).
{{% /notice %}}
The full changelog is available here
🔥🔥🔥 23-07-2023: v1.22.0 🚀
- feat: add llama-master backend by {{< github "mudler" >}} in {{< pr "752" >}}
- [build] pass build type to cmake on libtransformers.a build by @TonDar0n in {{< pr "741" >}}
- feat: resolve JSONSchema refs (planners) by {{< github "mudler" >}} in {{< pr "774" >}}
- feat: backends improvements by {{< github "mudler" >}} in {{< pr "778" >}}
- feat(llama2): add template for chat messages by {{< github "dave-gray101" >}} in {{< pr "782" >}}
{{% notice note %}}
From this release to use the OpenAI functions you need to use the llama-grammar backend. It has been added a llama backend for tracking llama.cpp master and llama-grammar for the grammar functionalities that have not been merged yet upstream. See also [OpenAI functions]({{%relref "features/openai-functions" %}}). Until the feature is merged we will have two llama backends.
{{% /notice %}}
Huggingface embeddings
In this release it is now possible to specify to LocalAI external gRPC backends that can be used for inferencing {{< pr "778" >}}. It is now possible to write internal backends in any language, and a huggingface-embeddings backend is now available in the container image to be used with https://github.com/UKPLab/sentence-transformers. See also [Embeddings]({{%relref "features/embeddings" %}}).
LLaMa 2 has been released!
Thanks to the community effort, LocalAI now supports templating for LLaMa 2! More at {{< pr "782" >}}, until we update the model gallery with LLaMa 2 models!
Official langchain integration
Progress has been made to support LocalAI with langchain. See: https://github.com/langchain-ai/langchain/pull/8134
🔥🔥🔥 17-07-2023: v1.21.0 🚀
- [whisper] Partial support for verbose_json format in transcribe endpoint by @ldotlopez in {{< pr "721" >}}
- LocalAI functions by @mudler in {{< pr "726" >}}
- gRPC-based backends by @mudler in {{< pr "743" >}}
- falcon support (7b and 40b) with `ggllm.cpp` by @mudler in {{< pr "743" >}}
LocalAI functions
This allows running OpenAI functions as described in the OpenAI blog post and documentation: https://openai.com/blog/function-calling-and-other-api-updates.
This is a video of running the same example, locally with LocalAI:
And here when it actually picks to reply to the user instead of using functions!
Note: functions are supported only with llama.cpp-compatible models.
A full example is available here: https://github.com/mudler/LocalAI-examples/tree/main/functions
gRPC backends
This is an internal refactor which is not user-facing; however, it eases maintenance and the addition of new backends to LocalAI!
falcon support
Now Falcon 7b and 40b models compatible with https://github.com/cmp-nct/ggllm.cpp are supported as well.
The former, ggml-based backend has been renamed to falcon-ggml.
Default pre-compiled binaries
From this release the default behavior of the images has changed: compilation is no longer triggered automatically on start. To recompile local-ai from scratch on start and switch back to the old behavior, set REBUILD=true in the environment variables. Rebuilding can be necessary if your CPU and/or architecture is old and the pre-compiled binaries are not compatible with your platform. See the [build section]({{%relref "installation/build" %}}) for more information.
🔥🔥🔥 28-06-2023: v1.20.0 🚀
Exciting New Features 🎉
- Add Text-to-Audio generation with `go-piper` by {{< github "mudler" >}} in {{< pr "649" >}}. See [API endpoints]({{%relref "features/text-to-audio" %}}) in our documentation.
- Add gallery repository by {{< github "mudler" >}} in {{< pr "663" >}}. See [models]({{%relref "features/model-gallery" %}}) for documentation.
Container images
- Standard (GPT + `stablediffusion`): `quay.io/go-skynet/local-ai:v1.20.0`
- FFmpeg: `quay.io/go-skynet/local-ai:v1.20.0-ffmpeg`
- CUDA 11 + FFmpeg: `quay.io/go-skynet/local-ai:v1.20.0-gpu-nvidia-cuda11-ffmpeg`
- CUDA 12 + FFmpeg: `quay.io/go-skynet/local-ai:v1.20.0-gpu-nvidia-cuda12-ffmpeg`
Updates
Updates to llama.cpp, go-transformers, gpt4all.cpp and rwkv.cpp.
The NUMA option was enabled by {{< github "mudler" >}} in {{< pr "684" >}}, along with many new parameters (`mmap`, `mlock`, ...). See [advanced]({{%relref "advanced" %}}) for the full list of parameters.
Gallery repositories
In this release there is support for gallery repositories. These are repositories that contain models, and can be used to install models. The default gallery which contains only freely licensed models is in Github: https://github.com/go-skynet/model-gallery, but you can use your own gallery by setting the GALLERIES environment variable. An automatic index of huggingface models is available as well.
For example, now you can start LocalAI with the following environment variable to use both galleries:
GALLERIES=[{"name":"model-gallery", "url":"github:go-skynet/model-gallery/index.yaml"}, {"url": "github:ci-robbot/localai-huggingface-zoo/index.yaml","name":"huggingface"}]
And in runtime you can install a model from huggingface now with:
curl http://localhost:8000/models/apply -H "Content-Type: application/json" -d '{ "id": "huggingface@thebloke__open-llama-7b-open-instruct-ggml__open-llama-7b-open-instruct.ggmlv3.q4_0.bin" }'
or a tts voice with:
curl http://localhost:8080/models/apply -H "Content-Type: application/json" -d '{ "id": "model-gallery@voice-en-us-kathleen-low" }'
See also [models]({{%relref "features/model-gallery" %}}) for a complete documentation.
Text to Audio
Now LocalAI uses piper and go-piper to generate audio from text. This is an experimental feature, and it requires GO_TAGS=tts to be set during build. It is enabled by default in the pre-built container images.
To setup audio models, you can use the new galleries, or setup the models manually as described in [the API section of the documentation]({{%relref "features/text-to-audio" %}}).
You can check the full changelog in Github
🔥🔥🔥 19-06-2023: v1.19.0 🚀
- Full CUDA GPU offload support ( PR by mudler. Thanks to chnyda for handing over the GPU access, and lu-zero to help in debugging )
- Full GPU Metal Support is now fully functional. Thanks to Soleblaze to iron out the Metal Apple silicon support!
Container images:
- Standard (GPT + `stablediffusion`): `quay.io/go-skynet/local-ai:v1.19.2`
- FFmpeg: `quay.io/go-skynet/local-ai:v1.19.2-ffmpeg`
- CUDA 11 + FFmpeg: `quay.io/go-skynet/local-ai:v1.19.2-gpu-nvidia-cuda11-ffmpeg`
- CUDA 12 + FFmpeg: `quay.io/go-skynet/local-ai:v1.19.2-gpu-nvidia-cuda12-ffmpeg`
🔥🔥🔥 06-06-2023: v1.18.0 🚀
This LocalAI release is packed with new features, bugfixes and updates! Thanks to the community for the help, this was a great community release!
We now support a vast variety of models while staying backward compatible with prior quantization formats: this new release still allows loading older formats as well as the new k-quants!
New features
- ✨ Added support for `falcon`-based model families (7b) ( mudler )
- ✨ Experimental support for Metal Apple Silicon GPU ( mudler and thanks to Soleblaze for testing! ). See the [build section]({{%relref "installation/build#Acceleration" %}}).
- ✨ Support for token stream in the `/v1/completions` endpoint ( samm81 )
- ✨ Added huggingface backend ( Evilfreelancer )
- 📷 Stablediffusion can now output `2048x2048` images with `esrgan`! ( mudler )
Container images
- 🐋 CUDA container images (arm64, x86_64) ( sebastien-prudhomme )
- 🐋 FFmpeg container images (arm64, x86_64) ( mudler )
Dependencies updates
- 🆙 Bloomz has been updated to the latest ggml changes, including new quantization format ( mudler )
- 🆙 RWKV has been updated to the new quantization format ( mudler )
- 🆙 k-quants format support for the `llama` models ( mudler )
- 🆙 gpt4all has been updated, incorporating upstream changes allowing to load older models, and with different CPU instruction set (AVX only, AVX2) from the same binary! ( mudler )
Generic
- 🐧 Fully Linux static binary releases ( mudler )
- 📷 Stablediffusion has been enabled on container images by default ( mudler )
Note: You can disable container image rebuilds with `REBUILD=false`
Examples
Two new projects now offer direct integration with LocalAI!
29-05-2023: v1.17.0
Support for OpenCL has been added while building from sources.
You can now build LocalAI from source with BUILD_TYPE=clblas to have an OpenCL build. See also the [build section]({{%relref "getting-started/build#Acceleration" %}}).
For instructions on how to install OpenCL/CLBlast see here.
rwkv.cpp has been updated to the new ggml format commit.
27-05-2023: v1.16.0
Now it's possible to automatically download pre-configured models before starting the API.
Start local-ai with the PRELOAD_MODELS containing a list of models from the gallery, for instance to install gpt4all-j as gpt-3.5-turbo:
PRELOAD_MODELS=[{"url": "github:go-skynet/model-gallery/gpt4all-j.yaml", "name": "gpt-3.5-turbo"}]
llama.cpp models can now also automatically save the prompt cache state, by specifying it in the model YAML configuration file:
prompt_cache_path: "alpaca-cache"
prompt_cache_all: true
See also the [advanced section]({{%relref "advanced" %}}).
Media, Blogs, Social
- Create a slackbot for teams and OSS projects that answer to documentation
- LocalAI meets k8sgpt - CNCF Webinar showcasing LocalAI and k8sgpt.
- Question Answering on Documents locally with LangChain, LocalAI, Chroma, and GPT4All by Ettore Di Giacinto
- Tutorial to use k8sgpt with LocalAI - an excellent use case for LocalAI, using AI to analyse Kubernetes clusters, by Tyler Gillson
Previous
- 23-05-2023: v1.15.0 released. `go-gpt2.cpp` backend got renamed to `go-ggml-transformers.cpp`, updated including https://github.com/ggerganov/llama.cpp/pull/1508 which breaks compatibility with older models. This impacts RedPajama, GptNeoX, MPT (not `gpt4all-mpt`), Dolly, GPT2 and Starcoder based models. Binary releases available, various fixes, including {{< pr "341" >}}.
- 21-05-2023: v1.14.0 released. Minor updates to the `/models/apply` endpoint, `llama.cpp` backend updated including https://github.com/ggerganov/llama.cpp/pull/1508 which breaks compatibility with older models. `gpt4all` is still compatible with the old format.
- 19-05-2023: v1.13.0 released! 🔥🔥 Updates to the `gpt4all` and `llama` backends, consolidated CUDA support ( {{< pr "310" >}} thanks to @bubthegreat and @Thireus ), preliminary support for [installing models via API]({{%relref "advanced#" %}}).
- 17-05-2023: v1.12.0 released! 🔥🔥 Minor fixes, plus CUDA ({{< pr "258" >}}) support for `llama.cpp`-compatible models and image generation ({{< pr "272" >}}).
- 16-05-2023: 🔥🔥🔥 Experimental support for CUDA ({{< pr "258" >}}) in the `llama.cpp` backend and Stable diffusion CPU image generation ({{< pr "272" >}}) in `master`.
Now LocalAI can generate images too:
(example output images: mode=0 vs mode=1 (winograd/sgemm))
- 14-05-2023: v1.11.1 released! `rwkv` backend patch release
- 13-05-2023: v1.11.0 released! 🔥 Updated `llama.cpp` bindings: this update includes a breaking change in the model files ( https://github.com/ggerganov/llama.cpp/pull/1405 ) - old models should still work with the `gpt4all-llama` backend.
- 12-05-2023: v1.10.0 released! 🔥🔥 Updated `gpt4all` bindings. Added support for GPTNeox (experimental), RedPajama (experimental), Starcoder (experimental), Replit (experimental), MosaicML MPT. Also the `embeddings` endpoint now supports tokens arrays. See the langchain-chroma example! Note - this update does NOT include https://github.com/ggerganov/llama.cpp/pull/1405 which makes models incompatible.
- 11-05-2023: v1.9.0 released! 🔥 Important whisper updates ( {{< pr "233" >}} {{< pr "229" >}} ) and extended gpt4all model families support ( {{< pr "232" >}} ). Redpajama/dolly experimental ( {{< pr "214" >}} )
- 10-05-2023: v1.8.0 released! 🔥 Added support for fast and accurate embeddings with `bert.cpp` ( {{< pr "222" >}} )
- 09-05-2023: Added experimental support for the transcriptions endpoint ( {{< pr "211" >}} )
- 08-05-2023: Support for embeddings with models using the `llama.cpp` backend ( {{< pr "207" >}} )
- 02-05-2023: Support for `rwkv.cpp` models ( {{< pr "158" >}} ) and for the `/edits` endpoint
- 01-05-2023: Support for SSE stream of tokens in `llama.cpp` backends ( {{< pr "152" >}} )