Mirror of https://github.com/mudler/LocalAI.git (synced 2026-04-29 11:37:40 -04:00)
* feat(insightface): add antispoofing (liveness) detection
Light up the anti_spoofing flag that was parked during the first pass.
Both FaceVerify and FaceAnalyze now run the Silent-Face MiniFASNetV2 +
MiniFASNetV1SE ensemble (~4 MB, Apache 2.0, CPU <10ms) when the flag is
set. Failed liveness on either image vetoes FaceVerify regardless of
embedding similarity. Every insightface* gallery entry now ships the
MiniFASNet ONNX weights so existing packs light up after reinstall.
Setting the flag against a model without the MiniFASNet files returns
FAILED_PRECONDITION (HTTP 412) with a clear install message — no
silent is_real=false.
FaceVerifyResponse gained per-image img{1,2}_is_real and
img{1,2}_antispoof_score (proto 9-12); FaceAnalysis's existing
is_real/antispoof_score fields are now populated. Schema fields are
pointers so they are fully absent from the JSON response when
anti_spoofing was not requested — avoids collapsing "not checked" with
"checked and fake" under Go's omitempty on bool.
Validated end-to-end over HTTP against a local install:
- verify + anti_spoofing, both real -> verified=true, score ~0.76
- verify + anti_spoofing, img2 spoof -> verified=false, img2_is_real=false
- analyze + anti_spoofing -> is_real and score per face
- flag against model without MiniFASNet -> HTTP 412 fail-loud
Assisted-by: Claude:claude-opus-4-7 go vet
* test(insightface): wire test target into test-extra
The root Makefile's `test-extra` already runs
`$(MAKE) -C backend/python/insightface test`, but the backend's
Makefile never defined the target — so the command silently errored
and the suite was never executed in CI. Adding the two-line target
(matching ace-step/Makefile) hooks `test.sh` → `runUnittests` →
`python -m unittest test.py`, which discovers both the pre-existing
engine classes (InsightFaceEngineTest, OnnxDirectEngineTest) and the
new AntispoofingTest. Each class skips gracefully when its weights
can't be downloaded from a network-restricted runner.
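The two-line target in question can be sketched as follows (modeled on the description of ace-step/Makefile above; the exact recipe in the real tree may differ):

```
# Sketch of the missing target: delegate to test.sh, which calls
# runUnittests -> python -m unittest test.py.
.PHONY: test
test:
	bash test.sh
```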
Assisted-by: Claude:claude-opus-4-7
* test(insightface): exercise antispoofing in e2e-backends (both paths)
Add a `face_antispoof` capability to the Ginkgo e2e suite and extend
the existing FaceVerify + FaceAnalyze specs with liveness assertions
covering BOTH paths:
real fixture -> is_real=true, score>0, verified stays true
spoof fixture -> is_real=false, verified vetoed to false
The spoof fixture is upstream's own `image_F2.jpg` (via the yakhyo
mirror) — verified locally against the MiniFASNetV2+V1SE ensemble to
classify as is_real=false with score ~0.013. That makes the assertion
deterministic across CI runs; synthetic/derived spoofs fool the model
unpredictably and would be flaky.
Makefile wires it up end-to-end:
- New INSIGHTFACE_ANTISPOOF_* cache dir + two ONNX downloads with
pinned SHAs, matching the gallery entries.
- insightface-antispoof-models target shared by both backend configs.
- FACE_SPOOF_IMAGE_URL passed via BACKEND_TEST_FACE_SPOOF_IMAGE_URL.
- Both e2e targets (buffalo-sc + opencv) now:
* depend on insightface-antispoof-models
* pass antispoof_v2_onnx / antispoof_v1se_onnx in BACKEND_TEST_OPTIONS
* include face_antispoof in BACKEND_TEST_CAPS
backend_test.go adds the new capability constant and a faceSpoofFile
fixture resolved the same way as faceFile1/2/3. Spoof assertions are
gated on both capFaceAntispoof AND faceSpoofFile being set, so a test
config that omits the spoof fixture degrades gracefully to "real path
only" instead of failing.
Assisted-by: Claude:claude-opus-4-7 go vet
# insightface backend (LocalAI)
Face recognition backend backed by ONNX Runtime. Provides face verification (1:1), face analysis (age/gender), face detection, face embedding, and — via LocalAI's built-in vector store — 1:N identification.
## Engines
This backend ships with two interchangeable engines selected via
LoadModel.Options["engine"]:
| engine | Implementation | Models | License |
|---|---|---|---|
| `insightface` (default) | `insightface.app.FaceAnalysis` | `buffalo_l`, `buffalo_s`, `antelopev2` | Non-commercial research use only |
| `onnx_direct` | OpenCV `FaceDetectorYN` + `FaceRecognizerSF` | OpenCV Zoo YuNet + SFace | Apache 2.0 (commercial-safe) |
Both engines implement the same `FaceEngine` protocol in `engines.py`,
so the gRPC servicer in `backend.py` doesn't need to know which one is
active.
## LoadModel options
Common:
| option | default | description |
|---|---|---|
| `engine` | `insightface` | one of `insightface`, `onnx_direct` |
| `det_size` | `640x640` (insightface), `320x320` (onnx_direct) | detector input size |
| `det_thresh` | `0.5` | detector confidence threshold |
| `verify_threshold` | `0.35` | default cosine distance cutoff for FaceVerify |
insightface engine:
| option | default | description |
|---|---|---|
| `model_pack` | `buffalo_l` | which insightface pack to load |
onnx_direct engine:
| option | default | description |
|---|---|---|
| `detector_onnx` | (required) | path to YuNet-compatible ONNX |
| `recognizer_onnx` | (required) | path to SFace-compatible ONNX |
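Tying the options together, a model config selecting the `onnx_direct` engine might look like the following sketch (the `options:` list of `key:value` strings follows LocalAI's usual convention; the name and ONNX paths are placeholders, not shipped files):

```yaml
# Hypothetical model config sketch; paths are placeholders.
name: face-onnx
backend: insightface
options:
  - "engine:onnx_direct"
  - "detector_onnx:/models/opencv/yunet.onnx"
  - "recognizer_onnx:/models/opencv/sface.onnx"
  - "verify_threshold:0.35"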
## Adding a new model pack
- If it's an insightface pack (auto-downloadable or manually extracted into `~/.insightface/models/<name>/`), just add a new gallery entry in `backend/index.yaml` with `options: ["engine:insightface", "model_pack:<name>"]`. No code change.
- If it's an Apache-licensed ONNX pair, add a gallery entry with `options: ["engine:onnx_direct", "detector_onnx:...", "recognizer_onnx:..."]`. If the detector or recognizer has a different input-tensor shape than YuNet/SFace, you may need a new engine implementation in `engines.py`; the two-engine seam makes that a self-contained change.
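For the first case, the gallery entry could look like this sketch (the entry name and any surrounding fields are illustrative; only the `options` values come from the description above):

```yaml
# Hypothetical backend/index.yaml entry sketch.
- name: insightface-antelopev2
  options:
    - "engine:insightface"
    - "model_pack:antelopev2"
```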
## Running tests locally
```
make -C backend/python/insightface        # install deps + bake models
make -C backend/python/insightface test   # run test.py
```

The OpenCV Zoo tests skip gracefully when `/models/opencv/*.onnx` is
absent (e.g. on dev boxes where `install.sh` wasn't run).