Compare commits


1 Commit

Author SHA1 Message Date
dependabot[bot]
0b2bfe655c Bump python-multipart from 0.0.12 to 0.0.20 in /docker/main
Bumps [python-multipart](https://github.com/Kludex/python-multipart) from 0.0.12 to 0.0.20.
- [Release notes](https://github.com/Kludex/python-multipart/releases)
- [Changelog](https://github.com/Kludex/python-multipart/blob/master/CHANGELOG.md)
- [Commits](https://github.com/Kludex/python-multipart/compare/0.0.12...0.0.20)

---
updated-dependencies:
- dependency-name: python-multipart
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-03-11 23:48:15 +00:00
1676 changed files with 23421 additions and 185813 deletions
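
The bump itself is presumably a single version-pin change under docker/main; a hypothetical spot-check (not part of this diff) of a locally built image, assuming the `frigate:latest` tag produced by the Makefile's `local` target, would be:

```bash
# Hypothetical check: print the python-multipart version baked into a locally
# built frigate:latest image. Expected to print 0.0.20 after this bump.
docker run --rm --entrypoint=python3 frigate:latest -c \
  "import importlib.metadata as m; print(m.version('python-multipart'))"
```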

View File

@@ -22,7 +22,6 @@ autotrack
autotracked
autotracker
autotracking
backchannel
balena
Beelink
BGRA
@@ -109,8 +108,8 @@ imagestream
imdecode
imencode
imread
imutils
imwrite
inpoint
interp
iostat
iotop
@@ -192,7 +191,6 @@ ONVIF
openai
opencv
openvino
overfitting
OWASP
paddleocr
paho
@@ -267,7 +265,6 @@ tensorrt
tflite
thresholded
timelapse
titlecase
tmpfs
tobytes
toggleable
@@ -317,4 +314,4 @@ yolo
yolonas
yolox
zeep
zerolatency
zerolatency

View File

@@ -1,6 +0,0 @@
---
globs: ["**/*.ts", "**/*.tsx"]
alwaysApply: false
---
Never write strings in the frontend directly, always write to and reference the relevant translations file.

View File

@@ -1,129 +0,0 @@
title: "[Beta Support]: "
labels: ["support", "triage", "beta"]
body:
- type: markdown
attributes:
value: |
Thank you for testing Frigate beta versions! Use this form for support with beta releases.
**Note:** Beta versions may have incomplete features, known issues, or unexpected behavior. Please check the [release notes](https://github.com/blakeblackshear/frigate/releases) and [recent discussions][discussions] for known beta issues before submitting.
Before submitting, read the [beta documentation][docs].
[docs]: https://deploy-preview-19787--frigate-docs.netlify.app/
- type: textarea
id: description
attributes:
label: Describe the problem you are having
description: Please be as detailed as possible. Include what you expected to happen vs what actually happened.
validations:
required: true
- type: input
id: version
attributes:
label: Beta Version
description: Visible on the System page in the Web UI. Please include the full version including the build identifier (eg. 0.17.0-beta1)
placeholder: "0.17.0-beta1"
validations:
required: true
- type: dropdown
id: issue-category
attributes:
label: Issue Category
description: What area is your issue related to? This helps us understand the context.
options:
- Object Detection / Detectors
- Hardware Acceleration
- Configuration / Setup
- WebUI / Frontend
- Recordings / Storage
- Notifications / Events
- Integration (Home Assistant, etc)
- Performance / Stability
- Installation / Updates
- Other
validations:
required: true
- type: textarea
id: config
attributes:
label: Frigate config file
description: This will be automatically formatted into code, so no need for backticks. Remove any sensitive information like passwords or URLs.
render: yaml
validations:
required: true
- type: textarea
id: frigatelogs
attributes:
label: Relevant Frigate log output
description: Please copy and paste any relevant Frigate log output. Include logs before and after your exact error when possible. This will be automatically formatted into code, so no need for backticks.
render: shell
validations:
required: true
- type: textarea
id: go2rtclogs
attributes:
label: Relevant go2rtc log output (if applicable)
description: If your issue involves cameras, streams, or playback, please include go2rtc logs. Logs can be viewed via the Frigate UI, Docker, or the go2rtc dashboard. This will be automatically formatted into code, so no need for backticks.
render: shell
- type: dropdown
id: install-method
attributes:
label: Install method
options:
- Home Assistant Add-on
- Docker Compose
- Docker CLI
- Proxmox via Docker
- Proxmox via TTeck Script
- Windows WSL2
validations:
required: true
- type: textarea
id: docker
attributes:
label: docker-compose file or Docker CLI command
description: This will be automatically formatted into code, so no need for backticks. Include relevant environment variables and device mappings.
render: yaml
validations:
required: true
- type: dropdown
id: os
attributes:
label: Operating system
options:
- Home Assistant OS
- Debian
- Ubuntu
- Other Linux
- Proxmox
- UNRAID
- Windows
- Other
validations:
required: true
- type: input
id: hardware
attributes:
label: CPU / GPU / Hardware
description: Provide details about your hardware (e.g., Intel i5-9400, NVIDIA RTX 3060, Raspberry Pi 4, etc)
placeholder: "Intel i7-10700, NVIDIA GTX 1660"
- type: textarea
id: screenshots
attributes:
label: Screenshots
description: Screenshots of the issue, System metrics pages, or any relevant UI. Drag and drop or paste images directly.
- type: textarea
id: steps-to-reproduce
attributes:
label: Steps to reproduce
description: If applicable, provide detailed steps to reproduce the issue
placeholder: |
1. Go to '...'
2. Click on '...'
3. See error
- type: textarea
id: other
attributes:
label: Any other information that may be helpful
description: Additional context, related issues, when the problem started appearing, etc.

View File

@@ -73,7 +73,7 @@ body:
attributes:
label: Operating system
options:
- Home Assistant OS
- HassOS
- Debian
- Other Linux
- Proxmox
@@ -87,7 +87,7 @@ body:
attributes:
label: Install method
options:
- Home Assistant Add-on
- HassOS Addon
- Docker Compose
- Docker CLI
- Proxmox via Docker

View File

@@ -59,7 +59,7 @@ body:
attributes:
label: Operating system
options:
- Home Assistant OS
- HassOS
- Debian
- Other Linux
- Proxmox
@@ -73,7 +73,7 @@ body:
attributes:
label: Install method
options:
- Home Assistant Add-on
- HassOS Addon
- Docker Compose
- Docker CLI
- Proxmox via Docker

View File

@@ -53,7 +53,7 @@ body:
attributes:
label: Install method
options:
- Home Assistant Add-on
- HassOS Addon
- Docker Compose
- Docker CLI
- Proxmox via Docker

View File

@@ -73,7 +73,7 @@ body:
attributes:
label: Install method
options:
- Home Assistant Add-on
- HassOS Addon
- Docker Compose
- Docker CLI
- Proxmox via Docker

View File

@@ -69,7 +69,7 @@ body:
attributes:
label: Install method
options:
- Home Assistant Add-on
- HassOS Addon
- Docker Compose
- Docker CLI
- Proxmox via Docker

View File

@@ -6,9 +6,7 @@ body:
value: |
Use this form to submit a reproducible bug in Frigate or Frigate's UI.
**⚠️ If you are running a beta version (0.17.0-beta or similar), please use the [Beta Support template](https://github.com/blakeblackshear/frigate/discussions/new?category=beta-support) instead.**
Before submitting your bug report, please ask the AI with the "Ask AI" button on the [official documentation site][ai] about your issue, [search the discussions][discussions], look at recent open and closed [pull requests][prs], read the [official Frigate documentation][docs], and read the [Frigate FAQ][faq] pinned at the Discussion page to see if your bug has already been fixed by the developers or reported by the community.
Before submitting your bug report, please [search the discussions][discussions], look at recent open and closed [pull requests][prs], read the [official Frigate documentation][docs], and read the [Frigate FAQ][faq] pinned at the Discussion page to see if your bug has already been fixed by the developers or reported by the community.
**If you are unsure if your issue is actually a bug or not, please submit a support request first.**
@@ -16,7 +14,6 @@ body:
[prs]: https://www.github.com/blakeblackshear/frigate/pulls
[docs]: https://docs.frigate.video
[faq]: https://github.com/blakeblackshear/frigate/discussions/12724
[ai]: https://docs.frigate.video
- type: checkboxes
attributes:
label: Checklist
@@ -29,8 +26,6 @@ body:
- label: I have tried a different browser to see if it is related to my browser.
required: true
- label: I have tried reproducing the issue in [incognito mode](https://www.computerworld.com/article/1719851/how-to-go-incognito-in-chrome-firefox-safari-and-edge.html) to rule out problems with any third party extensions or plugins I have installed.
- label: I have asked the AI at https://docs.frigate.video about my issue.
required: true
- type: textarea
id: description
attributes:
@@ -102,7 +97,7 @@ body:
attributes:
label: Operating system
options:
- Home Assistant OS
- HassOS
- Debian
- Other Linux
- Proxmox
@@ -116,7 +111,7 @@ body:
attributes:
label: Install method
options:
- Home Assistant Add-on
- HassOS Addon
- Docker Compose
- Docker CLI
validations:

View File

@@ -1,2 +0,0 @@
Never write strings in the frontend directly, always write to and reference the relevant translations file.
Always conform new and refactored code to the existing coding style in the project.

View File

@@ -2,12 +2,12 @@
<!--
Thank you!
If you're introducing a new feature or significantly refactoring existing functionality,
we encourage you to start a discussion first. This helps ensure your idea aligns with
If you're introducing a new feature or significantly refactoring existing functionality,
we encourage you to start a discussion first. This helps ensure your idea aligns with
Frigate's development goals.
Describe what this pull request does and how it will benefit users of Frigate.
Please describe in detail any considerations, breaking changes, etc. that are
Please describe in detail any considerations, breaking changes, etc. that are
made in this pull request.
-->
@@ -24,7 +24,7 @@
## Additional information
- This PR fixes or closes issue: fixes #
- This PR is related to issue:
- This PR is related to issue:
## Checklist
@@ -35,5 +35,4 @@
- [ ] The code change is tested and works locally.
- [ ] Local tests pass. **Your PR cannot be merged unless tests pass**
- [ ] There is no commented out code in this PR.
- [ ] UI changes including text have used i18n keys and have been added to the `en` locale.
- [ ] The code has been formatted using Ruff (`ruff format frigate`)

View File

@@ -15,7 +15,7 @@ concurrency:
cancel-in-progress: true
env:
PYTHON_VERSION: 3.11
PYTHON_VERSION: 3.9
jobs:
amd64_build:
@@ -23,7 +23,7 @@ jobs:
name: AMD64 Build
steps:
- name: Check out code
uses: actions/checkout@v6
uses: actions/checkout@v4
with:
persist-credentials: false
- name: Set up QEMU and Buildx
@@ -41,13 +41,12 @@ jobs:
target: frigate
tags: ${{ steps.setup.outputs.image-name }}-amd64
cache-from: type=registry,ref=${{ steps.setup.outputs.cache-name }}-amd64
cache-to: type=registry,ref=${{ steps.setup.outputs.cache-name }}-amd64,mode=max
arm64_build:
runs-on: ubuntu-22.04-arm
name: ARM Build
steps:
- name: Check out code
uses: actions/checkout@v6
uses: actions/checkout@v4
with:
persist-credentials: false
- name: Set up QEMU and Buildx
@@ -77,12 +76,42 @@ jobs:
rpi.tags=${{ steps.setup.outputs.image-name }}-rpi
*.cache-from=type=registry,ref=${{ steps.setup.outputs.cache-name }}-arm64
*.cache-to=type=registry,ref=${{ steps.setup.outputs.cache-name }}-arm64,mode=max
jetson_jp5_build:
if: false
runs-on: ubuntu-22.04
name: Jetson Jetpack 5
steps:
- name: Check out code
uses: actions/checkout@v4
with:
persist-credentials: false
- name: Set up QEMU and Buildx
id: setup
uses: ./.github/actions/setup
with:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
- name: Build and push TensorRT (Jetson, Jetpack 5)
env:
ARCH: arm64
BASE_IMAGE: nvcr.io/nvidia/l4t-tensorrt:r8.5.2-runtime
SLIM_BASE: nvcr.io/nvidia/l4t-tensorrt:r8.5.2-runtime
TRT_BASE: nvcr.io/nvidia/l4t-tensorrt:r8.5.2-runtime
uses: docker/bake-action@v6
with:
source: .
push: true
targets: tensorrt
files: docker/tensorrt/trt.hcl
set: |
tensorrt.tags=${{ steps.setup.outputs.image-name }}-tensorrt-jp5
*.cache-from=type=registry,ref=${{ steps.setup.outputs.cache-name }}-jp5
*.cache-to=type=registry,ref=${{ steps.setup.outputs.cache-name }}-jp5,mode=max
jetson_jp6_build:
runs-on: ubuntu-22.04-arm
name: Jetson Jetpack 6
steps:
- name: Check out code
uses: actions/checkout@v6
uses: actions/checkout@v4
with:
persist-credentials: false
- name: Set up QEMU and Buildx
@@ -113,7 +142,7 @@ jobs:
- amd64_build
steps:
- name: Check out code
uses: actions/checkout@v6
uses: actions/checkout@v4
with:
persist-credentials: false
- name: Set up QEMU and Buildx
@@ -132,10 +161,11 @@ jobs:
files: docker/tensorrt/trt.hcl
set: |
tensorrt.tags=${{ steps.setup.outputs.image-name }}-tensorrt
*.cache-from=type=registry,ref=${{ steps.setup.outputs.cache-name }}-tensorrt
*.cache-to=type=registry,ref=${{ steps.setup.outputs.cache-name }}-tensorrt,mode=max
*.cache-from=type=registry,ref=${{ steps.setup.outputs.cache-name }}-amd64
*.cache-to=type=registry,ref=${{ steps.setup.outputs.cache-name }}-amd64,mode=max
- name: AMD/ROCm general build
env:
AMDGPU: gfx
HSA_OVERRIDE: 0
uses: docker/bake-action@v6
with:
@@ -146,7 +176,7 @@ jobs:
set: |
rocm.tags=${{ steps.setup.outputs.image-name }}-rocm
*.cache-to=type=registry,ref=${{ steps.setup.outputs.cache-name }}-rocm,mode=max
*.cache-from=type=registry,ref=${{ steps.setup.outputs.cache-name }}-rocm
*.cache-from=type=gha
arm64_extra_builds:
runs-on: ubuntu-22.04-arm
name: ARM Extra Build
@@ -154,7 +184,7 @@ jobs:
- arm64_build
steps:
- name: Check out code
uses: actions/checkout@v6
uses: actions/checkout@v4
with:
persist-credentials: false
- name: Set up QEMU and Buildx
@@ -172,31 +202,6 @@ jobs:
set: |
rk.tags=${{ steps.setup.outputs.image-name }}-rk
*.cache-from=type=gha
synaptics_build:
runs-on: ubuntu-22.04-arm
name: Synaptics Build
needs:
- arm64_build
steps:
- name: Check out code
uses: actions/checkout@v6
with:
persist-credentials: false
- name: Set up QEMU and Buildx
id: setup
uses: ./.github/actions/setup
with:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
- name: Build and push Synaptics build
uses: docker/bake-action@v6
with:
source: .
push: true
targets: synaptics
files: docker/synaptics/synaptics.hcl
set: |
synaptics.tags=${{ steps.setup.outputs.image-name }}-synaptics
*.cache-from=type=gha
# The majority of users running arm64 are rpi users, so the rpi
# build should be the primary arm64 image
assemble_default_build:
@@ -211,7 +216,7 @@ jobs:
with:
string: ${{ github.repository }}
- name: Log in to the Container registry
uses: docker/login-action@184bdaa0721073962dff0199f1fb9940f07167d1
uses: docker/login-action@9780b0c442fbb1117ed29e0efdff1e18412f7567
with:
registry: ghcr.io
username: ${{ github.actor }}

View File

@@ -4,24 +4,48 @@ on:
pull_request:
paths-ignore:
- "docs/**"
- ".github/*.yml"
- ".github/DISCUSSION_TEMPLATE/**"
- ".github/ISSUE_TEMPLATE/**"
- ".github/**"
env:
DEFAULT_PYTHON: 3.11
jobs:
build_devcontainer:
runs-on: ubuntu-latest
name: Build Devcontainer
# The Dockerfile contains features that requires buildkit, and since the
# devcontainer cli uses docker-compose to build the image, the only way to
# ensure docker-compose uses buildkit is to explicitly enable it.
env:
DOCKER_BUILDKIT: "1"
steps:
- uses: actions/checkout@v4
with:
persist-credentials: false
- uses: actions/setup-node@master
with:
node-version: 20.x
- name: Install devcontainer cli
run: npm install --global @devcontainers/cli
- name: Build devcontainer
run: devcontainer build --workspace-folder .
# It would be nice to also test the following commands, but for some
# reason they don't work even though in VS Code devcontainer works.
# - name: Start devcontainer
# run: devcontainer up --workspace-folder .
# - name: Run devcontainer scripts
# run: devcontainer run-user-commands --workspace-folder .
web_lint:
name: Web - Lint
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v6
- uses: actions/checkout@v4
with:
persist-credentials: false
- uses: actions/setup-node@v6
- uses: actions/setup-node@master
with:
node-version: 20.x
node-version: 16.x
- run: npm install
working-directory: ./web
- name: Lint
@@ -32,10 +56,10 @@ jobs:
name: Web - Test
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v6
- uses: actions/checkout@v4
with:
persist-credentials: false
- uses: actions/setup-node@v6
- uses: actions/setup-node@master
with:
node-version: 20.x
- run: npm install
@@ -52,7 +76,7 @@ jobs:
name: Python Checks
steps:
- name: Check out the repository
uses: actions/checkout@v6
uses: actions/checkout@v4
with:
persist-credentials: false
- name: Set up Python ${{ env.DEFAULT_PYTHON }}
@@ -75,21 +99,16 @@ jobs:
name: Python Tests
steps:
- name: Check out code
uses: actions/checkout@v6
uses: actions/checkout@v4
with:
persist-credentials: false
- uses: actions/setup-node@v6
with:
node-version: 20.x
- name: Install devcontainer cli
run: npm install --global @devcontainers/cli
- name: Build devcontainer
env:
DOCKER_BUILDKIT: "1"
run: devcontainer build --workspace-folder .
- name: Start devcontainer
run: devcontainer up --workspace-folder .
- name: Run mypy in devcontainer
run: devcontainer exec --workspace-folder . bash -lc "python3 -u -m mypy --config-file frigate/mypy.ini frigate"
- name: Run unit tests in devcontainer
run: devcontainer exec --workspace-folder . bash -lc "python3 -u -m unittest"
- name: Set up QEMU
uses: docker/setup-qemu-action@v3
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
- name: Build
run: make
- name: Run mypy
run: docker run --rm --entrypoint=python3 frigate:latest -u -m mypy --config-file frigate/mypy.ini frigate
- name: Run tests
run: docker run --rm --entrypoint=python3 frigate:latest -u -m unittest
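
For reference, a minimal local sketch of the same checks, assuming Docker and GNU make are available and a `frigate:latest` image built via the Makefile's `local` target (shown later in this compare); the `docker run` commands mirror the workflow steps above:

```bash
# Build the local image, then run the same type checks and unit tests the workflow runs.
make local
docker run --rm --entrypoint=python3 frigate:latest -u -m mypy --config-file frigate/mypy.ini frigate
docker run --rm --entrypoint=python3 frigate:latest -u -m unittest
```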

View File

@@ -10,7 +10,7 @@ jobs:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v6
- uses: actions/checkout@v4
with:
persist-credentials: false
- id: lowercaseRepo
@@ -18,7 +18,7 @@ jobs:
with:
string: ${{ github.repository }}
- name: Log in to the Container registry
uses: docker/login-action@184bdaa0721073962dff0199f1fb9940f07167d1
uses: docker/login-action@9780b0c442fbb1117ed29e0efdff1e18412f7567
with:
registry: ghcr.io
username: ${{ github.actor }}
@@ -39,14 +39,14 @@ jobs:
STABLE_TAG=${BASE}:stable
PULL_TAG=${BASE}:${BUILD_TAG}
docker run --rm -v $HOME/.docker/config.json:/config.json quay.io/skopeo/stable:latest copy --authfile /config.json --multi-arch all docker://${PULL_TAG} docker://${VERSION_TAG}
for variant in standard-arm64 tensorrt tensorrt-jp6 rk rocm; do
for variant in standard-arm64 tensorrt tensorrt-jp5 tensorrt-jp6 rk h8l rocm; do
docker run --rm -v $HOME/.docker/config.json:/config.json quay.io/skopeo/stable:latest copy --authfile /config.json --multi-arch all docker://${PULL_TAG}-${variant} docker://${VERSION_TAG}-${variant}
done
# stable tag
if [[ "${BUILD_TYPE}" == "stable" ]]; then
docker run --rm -v $HOME/.docker/config.json:/config.json quay.io/skopeo/stable:latest copy --authfile /config.json --multi-arch all docker://${PULL_TAG} docker://${STABLE_TAG}
for variant in standard-arm64 tensorrt tensorrt-jp6 rk rocm; do
for variant in standard-arm64 tensorrt tensorrt-jp5 tensorrt-jp6 rk h8l rocm; do
docker run --rm -v $HOME/.docker/config.json:/config.json quay.io/skopeo/stable:latest copy --authfile /config.json --multi-arch all docker://${PULL_TAG}-${variant} docker://${STABLE_TAG}-${variant}
done
fi

.gitignore
View File

@@ -15,7 +15,6 @@ frigate/version.py
web/build
web/node_modules
web/coverage
web/.env
core
!/web/**/*.ts
.idea/*

View File

@@ -1,6 +1,6 @@
The MIT License
Copyright (c) 2025 Frigate LLC (Frigate™)
Copyright (c) 2020 Blake Blackshear
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
@@ -18,4 +18,4 @@ FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
SOFTWARE.

View File

@@ -1,7 +1,7 @@
default_target: local
COMMIT_HASH := $(shell git log -1 --pretty=format:"%h"|tail -1)
VERSION = 0.17.0
VERSION = 0.16.0
IMAGE_REPO ?= ghcr.io/blakeblackshear/frigate
GITHUB_REF_NAME ?= $(shell git rev-parse --abbrev-ref HEAD)
BOARDS= #Initialized empty
@@ -14,19 +14,12 @@ push-boards: $(BOARDS:%=push-%)
version:
echo 'VERSION = "$(VERSION)-$(COMMIT_HASH)"' > frigate/version.py
echo 'VITE_GIT_COMMIT_HASH=$(COMMIT_HASH)' > web/.env
local: version
docker buildx build --target=frigate --file docker/main/Dockerfile . \
--tag frigate:latest \
--load
debug: version
docker buildx build --target=frigate --file docker/main/Dockerfile . \
--build-arg DEBUG=true \
--tag frigate:latest \
--load
amd64:
docker buildx build --target=frigate --file docker/main/Dockerfile . \
--tag $(IMAGE_REPO):$(VERSION)-$(COMMIT_HASH) \

View File

@@ -1,20 +1,12 @@
<p align="center">
<img align="center" alt="logo" src="docs/static/img/branding/frigate.png">
<img align="center" alt="logo" src="docs/static/img/frigate.png">
</p>
# Frigate NVR™ - Realtime Object Detection for IP Cameras
[![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)
<a href="https://hosted.weblate.org/engage/frigate-nvr/">
<img src="https://hosted.weblate.org/widget/frigate-nvr/language-badge.svg" alt="Translation status" />
</a>
\[English\] | [简体中文](https://github.com/blakeblackshear/frigate/blob/dev/README_CN.md)
# Frigate - NVR With Realtime Object Detection for IP Cameras
A complete and local NVR designed for [Home Assistant](https://www.home-assistant.io) with AI object detection. Uses OpenCV and Tensorflow to perform realtime object detection locally for IP cameras.
Use of a GPU or AI accelerator is highly recommended. AI accelerators will outperform even the best CPUs with very little overhead. See Frigate's supported [object detectors](https://docs.frigate.video/configuration/object_detectors/).
Use of a [Google Coral Accelerator](https://coral.ai/products/) is optional, but highly recommended. The Coral will outperform even the best CPUs and can process 100+ FPS with very little overhead.
- Tight integration with Home Assistant via a [custom component](https://github.com/blakeblackshear/frigate-hass-integration)
- Designed to minimize resource use and maximize performance by only looking for objects when and where it is necessary
@@ -35,49 +27,24 @@ View the documentation at https://docs.frigate.video
If you would like to make a donation to support development, please use [Github Sponsors](https://github.com/sponsors/blakeblackshear).
## License
This project is licensed under the **MIT License**.
- **Code:** The source code, configuration files, and documentation in this repository are available under the [MIT License](LICENSE). You are free to use, modify, and distribute the code as long as you include the original copyright notice.
- **Trademarks:** The "Frigate" name, the "Frigate NVR" brand, and the Frigate logo are **trademarks of Frigate LLC** and are **not** covered by the MIT License.
Please see our [Trademark Policy](TRADEMARK.md) for details on acceptable use of our brand assets.
## Screenshots
### Live dashboard
<div>
<img width="800" alt="Live dashboard" src="https://github.com/blakeblackshear/frigate/assets/569905/5e713cb9-9db5-41dc-947a-6937c3bc376e">
</div>
### Streamlined review workflow
<div>
<img width="800" alt="Streamlined review workflow" src="https://github.com/blakeblackshear/frigate/assets/569905/6fed96e8-3b18-40e5-9ddc-31e6f3c9f2ff">
</div>
### Multi-camera scrubbing
<div>
<img width="800" alt="Multi-camera scrubbing" src="https://github.com/blakeblackshear/frigate/assets/569905/d6788a15-0eeb-4427-a8d4-80b93cae3d74">
</div>
### Built-in mask and zone editor
<div>
<img width="800" alt="Multi-camera scrubbing" src="https://github.com/blakeblackshear/frigate/assets/569905/d7885fc3-bfe6-452f-b7d0-d957cb3e31f5">
</div>
## Translations
We use [Weblate](https://hosted.weblate.org/projects/frigate-nvr/) to support language translations. Contributions are always welcome.
<a href="https://hosted.weblate.org/engage/frigate-nvr/">
<img src="https://hosted.weblate.org/widget/frigate-nvr/multi-auto.svg" alt="Translation status" />
</a>
---
**Copyright © 2025 Frigate LLC.**

View File

@@ -1,90 +0,0 @@
<p align="center">
<img align="center" alt="logo" src="docs/static/img/branding/frigate.png">
</p>
# Frigate NVR™ - A local NVR with realtime object detection
<a href="https://hosted.weblate.org/engage/frigate-nvr/-/zh_Hans/">
<img src="https://hosted.weblate.org/widget/frigate-nvr/-/zh_Hans/svg-badge.svg" alt="Translation status" />
</a>
[English](https://github.com/blakeblackshear/frigate) | \[简体中文\]
[![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)
A complete and local network video recorder (NVR) designed for [Home Assistant](https://www.home-assistant.io) with AI object detection. Uses OpenCV and TensorFlow to perform realtime object detection locally for IP cameras.
Use of a GPU or an AI accelerator (such as a [Google Coral Accelerator](https://coral.ai/products/) or [Hailo](https://hailo.ai/)) is highly recommended. They run far more efficiently than even today's best CPUs while drawing very little power.
- Tight integration with Home Assistant via a [custom component](https://github.com/blakeblackshear/frigate-hass-integration)
- Designed to minimize resource use and maximize performance by only looking for objects when and where it is necessary
- Leverages multiprocessing heavily with an emphasis on realtime over processing every frame
- Uses very low-overhead motion detection to determine where to run object detection
- Object detection with TensorFlow runs in separate processes for maximum FPS
- Communicates over MQTT for easy integration into other systems
- Records video with retention settings based on detected objects
- 24/7 recording
- Re-streaming via RTSP to reduce the number of connections to your camera
- WebRTC & MSE support for low-latency live view
## Community Chinese documentation
You can view the documentation at https://docs.frigate-cn.video
## Donations
If you would like to make a donation to support development, please use [Github Sponsors](https://github.com/sponsors/blakeblackshear).
## License
This project is licensed under the **MIT License**.
**Code:** The source code, configuration files, and documentation in this repository are available under the [MIT License](LICENSE). You are free to use, modify, and distribute the code as long as you retain the original copyright notice.
**Trademarks:** The "Frigate" name, the "Frigate NVR" brand, and the Frigate logo are **trademarks of Frigate LLC** and are **not** covered by the MIT License.
For details on acceptable use of our brand assets, please see our [Trademark Policy](TRADEMARK.md).
## Screenshots
### Live dashboard
<div>
<img width="800" alt="Live dashboard" src="https://github.com/blakeblackshear/frigate/assets/569905/5e713cb9-9db5-41dc-947a-6937c3bc376e">
</div>
### Streamlined review workflow
<div>
<img width="800" alt="Streamlined review workflow" src="https://github.com/blakeblackshear/frigate/assets/569905/6fed96e8-3b18-40e5-9ddc-31e6f3c9f2ff">
</div>
### Multi-camera scrubbing
<div>
<img width="800" alt="Multi-camera scrubbing" src="https://github.com/blakeblackshear/frigate/assets/569905/d6788a15-0eeb-4427-a8d4-80b93cae3d74">
</div>
### Built-in mask and zone editor
<div>
<img width="800" alt="Built-in mask and zone editor" src="https://github.com/blakeblackshear/frigate/assets/569905/d7885fc3-bfe6-452f-b7d0-d957cb3e31f5">
</div>
## Translations
We use [Weblate](https://hosted.weblate.org/projects/frigate-nvr/) to support language translations. Contributions are always welcome.
## Unofficial Chinese community
Join the Chinese discussion QQ group: [1043861059](https://qm.qq.com/q/7vQKsTmSz)
Bilibili: https://space.bilibili.com/3546894915602564
## Chinese community sponsor
[![EdgeOne](https://edgeone.ai/media/34fe3a45-492d-4ea4-ae5d-ea1087ca7b4b.png)](https://edgeone.ai/zh?from=github)
CDN acceleration and security protection for this project are sponsored by Tencent EdgeOne
---
**Copyright © 2025 Frigate LLC.**

View File

@@ -1,58 +0,0 @@
# Trademark Policy
**Last Updated:** November 2025
This document outlines the policy regarding the use of the trademarks associated with the Frigate NVR project.
## 1. Our Trademarks
The following terms and visual assets are trademarks (the "Marks") of **Frigate LLC**:
- **Frigate™**
- **Frigate NVR™**
- **Frigate+™**
- **The Frigate Logo**
**Note on Common Law Rights:**
Frigate LLC asserts all common law rights in these Marks. The absence of a federal registration symbol (®) does not constitute a waiver of our intellectual property rights.
## 2. Interaction with the MIT License
The software in this repository is licensed under the [MIT License](LICENSE).
**Crucial Distinction:**
- The **Code** is free to use, modify, and distribute under the MIT terms.
- The **Brand (Trademarks)** is **NOT** licensed under MIT.
You may not use the Marks in any way that is not explicitly permitted by this policy or by written agreement with Frigate LLC.
## 3. Acceptable Use
You may use the Marks without prior written permission in the following specific contexts:
- **Referential Use:** To truthfully refer to the software (e.g., _"I use Frigate NVR for my home security"_).
- **Compatibility:** To indicate that your product or project works with the software (e.g., _"MyPlugin for Frigate NVR"_ or _"Compatible with Frigate"_).
- **Commentary:** In news articles, blog posts, or tutorials discussing the software.
## 4. Prohibited Use
You may **NOT** use the Marks in the following ways:
- **Commercial Products:** You may not use "Frigate" in the name of a commercial product, service, or app (e.g., selling an app named _"Frigate Viewer"_ is prohibited).
- **Implying Affiliation:** You may not use the Marks in a way that suggests your project is official, sponsored by, or endorsed by Frigate LLC.
- **Confusing Forks:** If you fork this repository to create a derivative work, you **must** remove the Frigate logo and rename your project to avoid user confusion. You cannot distribute a modified version of the software under the name "Frigate".
- **Domain Names:** You may not register domain names containing "Frigate" that are likely to confuse users (e.g., `frigate-official-support.com`).
## 5. The Logo
The Frigate logo (the bird icon) is a visual trademark.
- You generally **cannot** use the logo on your own website or product packaging without permission.
- If you are building a dashboard or integration that interfaces with Frigate, you may use the logo only to represent the Frigate node/service, provided it does not imply you _are_ Frigate.
## 6. Questions & Permissions
If you are unsure if your intended use violates this policy, or if you wish to request a specific license to use the Marks (e.g., for a partnership), please contact us at:
**help@frigate.video**

View File

@@ -4,13 +4,13 @@ from statistics import mean
import numpy as np
import frigate.util as util
from frigate.config import DetectorTypeEnum
from frigate.object_detection.base import (
from frigate.object_detection import (
ObjectDetectProcess,
RemoteObjectDetector,
load_labels,
)
from frigate.util.process import FrigateProcess
my_frame = np.expand_dims(np.full((300, 300, 3), 1, np.uint8), axis=0)
labels = load_labels("/labelmap.txt")
@@ -91,7 +91,7 @@ edgetpu_process_2 = ObjectDetectProcess(
)
for x in range(0, 10):
camera_process = FrigateProcess(
camera_process = util.Process(
target=start, args=(x, 300, detection_queue, events[str(x)])
)
camera_process.daemon = True

View File

@@ -1,8 +1,8 @@
version: "3"
services:
devcontainer:
container_name: frigate-devcontainer
# Check host system's actual render/video/plugdev group IDs with 'getent group render', 'getent group video', and 'getent group plugdev'
# Must add these exact IDs in container's group_add section or OpenVINO GPU acceleration will fail
# add groups from host for render, plugdev, video
group_add:
- "109" # render
- "110" # render
@@ -24,8 +24,8 @@ services:
# capabilities: [gpu]
environment:
YOLO_MODELS: ""
# devices:
# - /dev/bus/usb:/dev/bus/usb # Uncomment for Google Coral USB
devices:
- /dev/bus/usb:/dev/bus/usb
# - /dev/dri:/dev/dri # for intel hwaccel, needs to be updated for your hardware
volumes:
- .:/workspace/frigate:cached
@@ -33,10 +33,9 @@ services:
- /etc/localtime:/etc/localtime:ro
- ./config:/config
- ./debug:/media/frigate
# - /dev/bus/usb:/dev/bus/usb # Uncomment for Google Coral USB
- /dev/bus/usb:/dev/bus/usb
mqtt:
container_name: mqtt
image: eclipse-mosquitto:2.0
command: mosquitto -c /mosquitto-no-auth.conf # enable no-auth mode
image: eclipse-mosquitto:1.6
ports:
- "1883:1883"
- "1883:1883"

View File

@@ -4,7 +4,7 @@
sudo apt-get update
sudo apt-get install -y build-essential cmake git wget
hailo_version="4.21.0"
hailo_version="4.20.0"
arch=$(uname -m)
if [[ $arch == "x86_64" ]]; then

View File

@@ -55,7 +55,7 @@ RUN --mount=type=tmpfs,target=/tmp --mount=type=tmpfs,target=/var/cache/apt \
FROM scratch AS go2rtc
ARG TARGETARCH
WORKDIR /rootfs/usr/local/go2rtc/bin
ADD --link --chmod=755 "https://github.com/AlexxIT/go2rtc/releases/download/v1.9.10/go2rtc_linux_${TARGETARCH}" go2rtc
ADD --link --chmod=755 "https://github.com/AlexxIT/go2rtc/releases/download/v1.9.2/go2rtc_linux_${TARGETARCH}" go2rtc
FROM wget AS tempio
ARG TARGETARCH
@@ -78,9 +78,8 @@ COPY docker/main/requirements-ov.txt /requirements-ov.txt
RUN apt-get -qq update \
&& apt-get -qq install -y wget python3 python3-dev python3-distutils gcc pkg-config libhdf5-dev \
&& wget -q https://bootstrap.pypa.io/get-pip.py -O get-pip.py \
&& sed -i 's/args.append("setuptools")/args.append("setuptools==77.0.3")/' get-pip.py \
&& python3 get-pip.py "pip" \
&& pip3 install -r /requirements-ov.txt
&& pip install -r /requirements-ov.txt
# Get OpenVino Model
RUN --mount=type=bind,source=docker/main/build_ov_model.py,target=/build_ov_model.py \
@@ -148,12 +147,11 @@ RUN --mount=type=bind,source=docker/main/install_s6_overlay.sh,target=/deps/inst
FROM base AS wheels
ARG DEBIAN_FRONTEND
ARG TARGETARCH
ARG DEBUG=false
# Use a separate container to build wheels to prevent build dependencies in final image
RUN apt-get -qq update \
&& apt-get -qq install -y \
apt-transport-https wget unzip \
apt-transport-https wget \
&& apt-get -qq update \
&& apt-get -qq install -y \
python3.11 \
@@ -174,12 +172,9 @@ RUN apt-get -qq update \
RUN update-alternatives --install /usr/bin/python3 python3 /usr/bin/python3.11 1
RUN wget -q https://bootstrap.pypa.io/get-pip.py -O get-pip.py \
&& sed -i 's/args.append("setuptools")/args.append("setuptools==77.0.3")/' get-pip.py \
&& python3 get-pip.py "pip"
COPY docker/main/requirements.txt /requirements.txt
COPY docker/main/requirements-dev.txt /requirements-dev.txt
RUN pip3 install -r /requirements.txt
# Build pysqlite3 from source
@@ -187,10 +182,7 @@ COPY docker/main/build_pysqlite3.sh /build_pysqlite3.sh
RUN /build_pysqlite3.sh
COPY docker/main/requirements-wheels.txt /requirements-wheels.txt
RUN pip3 wheel --wheel-dir=/wheels -r /requirements-wheels.txt && \
if [ "$DEBUG" = "true" ]; then \
pip3 wheel --wheel-dir=/wheels -r /requirements-dev.txt; \
fi
RUN pip3 wheel --wheel-dir=/wheels -r /requirements-wheels.txt
# Install HailoRT & Wheels
RUN --mount=type=bind,source=docker/main/install_hailort.sh,target=/deps/install_hailort.sh \
@@ -212,7 +204,6 @@ COPY docker/main/rootfs/ /
# Frigate deps (ffmpeg, python, nginx, go2rtc, s6-overlay, etc)
FROM slim-base AS deps
ARG TARGETARCH
ARG BASE_IMAGE
ARG DEBIAN_FRONTEND
# http://stackoverflow.com/questions/48162574/ddg#49462622
@@ -231,25 +222,9 @@ ENV TRANSFORMERS_NO_ADVISORY_WARNINGS=1
# Set OpenCV ffmpeg loglevel to fatal: https://ffmpeg.org/doxygen/trunk/log_8h.html
ENV OPENCV_FFMPEG_LOGLEVEL=8
# Set NumPy to ignore getlimits warning
ENV PYTHONWARNINGS="ignore:::numpy.core.getlimits"
# Set HailoRT to disable logging
ENV HAILORT_LOGGER_PATH=NONE
# TensorFlow C++ logging suppression (must be set before import)
# TF_CPP_MIN_LOG_LEVEL: 0=all, 1=INFO+, 2=WARNING+, 3=ERROR+ (we use 3 for errors only)
ENV TF_CPP_MIN_LOG_LEVEL=3
# Suppress verbose logging from TensorFlow C++ code
ENV TF_CPP_MIN_VLOG_LEVEL=3
# Disable oneDNN optimization messages ("optimized with oneDNN...")
ENV TF_ENABLE_ONEDNN_OPTS=0
# Suppress AutoGraph verbosity during conversion
ENV AUTOGRAPH_VERBOSITY=0
# Google Logging (GLOG) suppression for TensorFlow components
ENV GLOG_minloglevel=3
ENV GLOG_logtostderr=0
ENV PATH="/usr/local/go2rtc/bin:/usr/local/tempio/bin:/usr/local/nginx/sbin:${PATH}"
# Install dependencies
@@ -260,16 +235,11 @@ ENV DEFAULT_FFMPEG_VERSION="7.0"
ENV INCLUDED_FFMPEG_VERSIONS="${DEFAULT_FFMPEG_VERSION}:5.0"
RUN wget -q https://bootstrap.pypa.io/get-pip.py -O get-pip.py \
&& sed -i 's/args.append("setuptools")/args.append("setuptools==77.0.3")/' get-pip.py \
&& python3 get-pip.py "pip"
RUN --mount=type=bind,from=wheels,source=/wheels,target=/deps/wheels \
pip3 install -U /deps/wheels/*.whl
# Install MemryX runtime (requires libgomp (OpenMP) in the final docker image)
RUN --mount=type=bind,source=docker/main/install_memryx.sh,target=/deps/install_memryx.sh \
bash -c "bash /deps/install_memryx.sh"
COPY --from=deps-rootfs / /
RUN ldconfig
@@ -287,12 +257,12 @@ ENTRYPOINT ["/init"]
CMD []
HEALTHCHECK --start-period=300s --start-interval=5s --interval=15s --timeout=5s --retries=3 \
CMD test -f /dev/shm/.frigate-is-stopping && exit 0; curl --fail --silent --show-error http://127.0.0.1:5000/api/version || exit 1
CMD curl --fail --silent --show-error http://127.0.0.1:5000/api/version || exit 1
# Frigate deps with Node.js and NPM for devcontainer
FROM deps AS devcontainer
# Do not start the actual Frigate service on devcontainer as it will be started by VS Code
# Do not start the actual Frigate service on devcontainer as it will be started by VSCode
# But start a fake service for simulating the logs
COPY docker/main/fake_frigate_run /etc/s6-overlay/s6-rc.d/frigate/run

View File

@@ -2,7 +2,7 @@
set -euxo pipefail
NGINX_VERSION="1.27.4"
NGINX_VERSION="1.25.3"
VOD_MODULE_VERSION="1.31"
SECURE_TOKEN_MODULE_VERSION="1.5"
SET_MISC_MODULE_VERSION="v0.33"

View File

@@ -2,31 +2,18 @@
set -euxo pipefail
SQLITE3_VERSION="3.46.1"
SQLITE3_VERSION="96c92aba00c8375bc32fafcdf12429c58bd8aabfcadab6683e35bbb9cdebf19e" # 3.46.0
PYSQLITE3_VERSION="0.5.3"
# Install libsqlite3-dev if not present (needed for some base images like NVIDIA TensorRT)
if ! dpkg -l | grep -q libsqlite3-dev; then
echo "Installing libsqlite3-dev for compilation..."
apt-get update && apt-get install -y libsqlite3-dev && rm -rf /var/lib/apt/lists/*
fi
# Fetch the pre-built sqlite amalgamation instead of building from source
# Fetch the source code for the latest release of Sqlite.
if [[ ! -d "sqlite" ]]; then
mkdir sqlite
cd sqlite
# Download the pre-built amalgamation from sqlite.org
# For SQLite 3.46.1, the amalgamation version is 3460100
SQLITE_AMALGAMATION_VERSION="3460100"
wget https://www.sqlite.org/2024/sqlite-amalgamation-${SQLITE_AMALGAMATION_VERSION}.zip -O sqlite-amalgamation.zip
unzip sqlite-amalgamation.zip
mv sqlite-amalgamation-${SQLITE_AMALGAMATION_VERSION}/* .
rmdir sqlite-amalgamation-${SQLITE_AMALGAMATION_VERSION}
rm sqlite-amalgamation.zip
wget https://www.sqlite.org/src/tarball/sqlite.tar.gz?r=${SQLITE3_VERSION} -O sqlite.tar.gz
tar xzf sqlite.tar.gz
cd sqlite/
LIBS="-lm" ./configure --disable-tcl --enable-tempstore=always
make sqlite3.c
cd ../
rm sqlite.tar.gz
fi
# Grab the pysqlite3 source code.

View File

@@ -19,9 +19,7 @@ apt-get -qq install --no-install-recommends -y \
nethogs \
libgl1 \
libglib2.0-0 \
libusb-1.0.0 \
python3-h2 \
libgomp1 # memryx detector
libusb-1.0.0
update-alternatives --install /usr/bin/python3 python3 /usr/bin/python3.11 1
@@ -33,18 +31,6 @@ unset DEBIAN_FRONTEND
yes | dpkg -i /tmp/libedgetpu1-max.deb && export DEBIAN_FRONTEND=noninteractive
rm /tmp/libedgetpu1-max.deb
# install mesa-teflon-delegate from bookworm-backports
# Only available for arm64 at the moment
if [[ "${TARGETARCH}" == "arm64" ]]; then
if [[ "${BASE_IMAGE}" == *"nvcr.io/nvidia/tensorrt"* ]]; then
echo "Info: Skipping apt-get commands because BASE_IMAGE includes 'nvcr.io/nvidia/tensorrt' for arm64."
else
echo "deb http://deb.debian.org/debian bookworm-backports main" | tee /etc/apt/sources.list.d/bookworm-backbacks.list
apt-get -qq update
apt-get -qq install --no-install-recommends --no-install-suggests -y mesa-teflon-delegate/bookworm-backports
fi
fi
# ffmpeg -> amd64
if [[ "${TARGETARCH}" == "amd64" ]]; then
mkdir -p /usr/lib/ffmpeg/5.0
@@ -71,16 +57,9 @@ fi
# arch specific packages
if [[ "${TARGETARCH}" == "amd64" ]]; then
# Install non-free version of i965 driver
sed -i -E "/^Components: main$/s/main/main contrib non-free non-free-firmware/" "/etc/apt/sources.list.d/debian.sources" \
&& apt-get -qq update \
&& apt-get install --no-install-recommends --no-install-suggests -y i965-va-driver-shaders \
&& sed -i -E "/^Components: main contrib non-free non-free-firmware$/s/main contrib non-free non-free-firmware/main/" "/etc/apt/sources.list.d/debian.sources" \
&& apt-get update
# install amd / intel-i965 driver packages
apt-get -qq install --no-install-recommends --no-install-suggests -y \
intel-gpu-tools onevpl-tools \
i965-va-driver intel-gpu-tools onevpl-tools \
libva-drm2 \
mesa-va-drivers radeontop
@@ -92,41 +71,11 @@ if [[ "${TARGETARCH}" == "amd64" ]]; then
echo "deb [arch=amd64 signed-by=/usr/share/keyrings/intel-graphics.gpg] https://repositories.intel.com/gpu/ubuntu jammy client" | tee /etc/apt/sources.list.d/intel-gpu-jammy.list
apt-get -qq update
apt-get -qq install --no-install-recommends --no-install-suggests -y \
intel-media-va-driver-non-free libmfx1 libmfxgen1 libvpl2
apt-get -qq install -y ocl-icd-libopencl1
# install libtbb12 for NPU support
apt-get -qq install -y libtbb12
intel-opencl-icd=24.35.30872.31-996~22.04 intel-level-zero-gpu=1.3.29735.27-914~22.04 intel-media-va-driver-non-free=24.3.3-996~22.04 \
libmfx1=23.2.2-880~22.04 libmfxgen1=24.2.4-914~22.04 libvpl2=1:2.13.0.0-996~22.04
rm -f /usr/share/keyrings/intel-graphics.gpg
rm -f /etc/apt/sources.list.d/intel-gpu-jammy.list
# install legacy and standard intel icd and level-zero-gpu
# see https://github.com/intel/compute-runtime/blob/master/LEGACY_PLATFORMS.md for more info
# needed core package
wget https://github.com/intel/compute-runtime/releases/download/24.52.32224.5/libigdgmm12_22.5.5_amd64.deb
dpkg -i libigdgmm12_22.5.5_amd64.deb
rm libigdgmm12_22.5.5_amd64.deb
# legacy packages
wget https://github.com/intel/compute-runtime/releases/download/24.35.30872.36/intel-opencl-icd-legacy1_24.35.30872.36_amd64.deb
wget https://github.com/intel/compute-runtime/releases/download/24.35.30872.36/intel-level-zero-gpu-legacy1_1.5.30872.36_amd64.deb
wget https://github.com/intel/intel-graphics-compiler/releases/download/igc-1.0.17537.24/intel-igc-opencl_1.0.17537.24_amd64.deb
wget https://github.com/intel/intel-graphics-compiler/releases/download/igc-1.0.17537.24/intel-igc-core_1.0.17537.24_amd64.deb
# standard packages
wget https://github.com/intel/compute-runtime/releases/download/24.52.32224.5/intel-opencl-icd_24.52.32224.5_amd64.deb
wget https://github.com/intel/compute-runtime/releases/download/24.52.32224.5/intel-level-zero-gpu_1.6.32224.5_amd64.deb
wget https://github.com/intel/intel-graphics-compiler/releases/download/v2.5.6/intel-igc-opencl-2_2.5.6+18417_amd64.deb
wget https://github.com/intel/intel-graphics-compiler/releases/download/v2.5.6/intel-igc-core-2_2.5.6+18417_amd64.deb
# npu packages
wget https://github.com/oneapi-src/level-zero/releases/download/v1.21.9/level-zero_1.21.9+u22.04_amd64.deb
wget https://github.com/intel/linux-npu-driver/releases/download/v1.17.0/intel-driver-compiler-npu_1.17.0.20250508-14912879441_ubuntu22.04_amd64.deb
wget https://github.com/intel/linux-npu-driver/releases/download/v1.17.0/intel-fw-npu_1.17.0.20250508-14912879441_ubuntu22.04_amd64.deb
wget https://github.com/intel/linux-npu-driver/releases/download/v1.17.0/intel-level-zero-npu_1.17.0.20250508-14912879441_ubuntu22.04_amd64.deb
dpkg -i *.deb
rm *.deb
fi
if [[ "${TARGETARCH}" == "arm64" ]]; then
@@ -145,6 +94,6 @@ rm -rf /var/lib/apt/lists/*
# Install yq, for frigate-prepare and go2rtc echo source
curl -fsSL \
"https://github.com/mikefarah/yq/releases/download/v4.48.2/yq_linux_$(dpkg --print-architecture)" \
"https://github.com/mikefarah/yq/releases/download/v4.33.3/yq_linux_$(dpkg --print-architecture)" \
--output /usr/local/bin/yq
chmod +x /usr/local/bin/yq

View File

@@ -2,7 +2,7 @@
set -euxo pipefail
hailo_version="4.21.0"
hailo_version="4.20.0"
if [[ "${TARGETARCH}" == "amd64" ]]; then
arch="x86_64"
@@ -10,5 +10,5 @@ elif [[ "${TARGETARCH}" == "arm64" ]]; then
arch="aarch64"
fi
wget -qO- "https://github.com/frigate-nvr/hailort/releases/download/v${hailo_version}/hailort-debian12-${TARGETARCH}.tar.gz" | tar -C / -xzf -
wget -qO- "https://github.com/frigate-nvr/hailort/releases/download/v${hailo_version}/hailort-${TARGETARCH}.tar.gz" | tar -C / -xzf -
wget -P /wheels/ "https://github.com/frigate-nvr/hailort/releases/download/v${hailo_version}/hailort-${hailo_version}-cp311-cp311-linux_${arch}.whl"

View File

@@ -1,31 +0,0 @@
#!/bin/bash
set -e
# Download the MxAccl for Frigate github release
wget https://github.com/memryx/mx_accl_frigate/archive/refs/tags/v2.1.0.zip -O /tmp/mxaccl.zip
unzip /tmp/mxaccl.zip -d /tmp
mv /tmp/mx_accl_frigate-2.1.0 /opt/mx_accl_frigate
rm /tmp/mxaccl.zip
# Install Python dependencies
pip3 install -r /opt/mx_accl_frigate/freeze
# Link the Python package dynamically
SITE_PACKAGES=$(python3 -c "import site; print(site.getsitepackages()[0])")
ln -s /opt/mx_accl_frigate/memryx "$SITE_PACKAGES/memryx"
# Copy architecture-specific shared libraries
ARCH=$(uname -m)
if [[ "$ARCH" == "x86_64" ]]; then
cp /opt/mx_accl_frigate/memryx/x86/libmemx.so* /usr/lib/x86_64-linux-gnu/
cp /opt/mx_accl_frigate/memryx/x86/libmx_accl.so* /usr/lib/x86_64-linux-gnu/
elif [[ "$ARCH" == "aarch64" ]]; then
cp /opt/mx_accl_frigate/memryx/arm/libmemx.so* /usr/lib/aarch64-linux-gnu/
cp /opt/mx_accl_frigate/memryx/arm/libmx_accl.so* /usr/lib/aarch64-linux-gnu/
else
echo "Unsupported architecture: $ARCH"
exit 1
fi
# Refresh linker cache
ldconfig

View File

@@ -2,7 +2,7 @@
set -euxo pipefail
s6_version="3.2.1.0"
s6_version="3.1.5.0"
if [[ "${TARGETARCH}" == "amd64" ]]; then
s6_arch="x86_64"

View File

@@ -1,4 +1 @@
ruff
# types
types-peewee == 3.17.*

View File

@@ -1,28 +1,25 @@
aiofiles == 24.1.*
click == 8.1.*
# FastAPI
aiohttp == 3.12.*
starlette == 0.47.*
starlette-context == 0.4.*
fastapi[standard-no-fastapi-cloud-cli] == 0.116.*
uvicorn == 0.35.*
aiohttp == 3.11.3
starlette == 0.41.2
starlette-context == 0.3.6
fastapi == 0.115.*
uvicorn == 0.30.*
slowapi == 0.1.*
joserfc == 1.2.*
cryptography == 44.0.*
pathvalidate == 3.3.*
imutils == 0.5.*
joserfc == 1.0.*
pathvalidate == 3.2.*
markupsafe == 3.0.*
python-multipart == 0.0.20
# Classification Model Training
tensorflow == 2.19.* ; platform_machine == 'aarch64'
tensorflow-cpu == 2.19.* ; platform_machine == 'x86_64'
# General
mypy == 1.6.1
onvif-zeep-async == 4.0.*
onvif-zeep-async == 3.1.*
paho-mqtt == 2.1.*
pandas == 2.2.*
peewee == 3.17.*
peewee_migrate == 1.14.*
psutil == 7.1.*
peewee_migrate == 1.13.*
psutil == 6.1.*
pydantic == 2.10.*
git+https://github.com/fbcotter/py3nvml#egg=py3nvml
pytz == 2025.*
@@ -31,24 +28,24 @@ ruamel.yaml == 0.18.*
tzlocal == 5.2
requests == 2.32.*
types-requests == 2.32.*
norfair == 2.3.*
norfair == 2.2.*
setproctitle == 1.3.*
ws4py == 0.5.*
unidecode == 1.3.*
titlecase == 2.4.*
# Image Manipulation
numpy == 1.26.*
opencv-python-headless == 4.11.0.*
opencv-contrib-python == 4.11.0.*
scipy == 1.16.*
scipy == 1.14.*
# OpenVino & ONNX
openvino == 2025.3.*
onnxruntime == 1.22.*
openvino == 2024.4.*
onnxruntime-openvino == 1.20.* ; platform_machine == 'x86_64'
onnxruntime == 1.20.* ; platform_machine == 'aarch64'
# Embeddings
transformers == 4.45.*
# Generative AI
google-generativeai == 0.8.*
ollama == 0.5.*
ollama == 0.3.*
openai == 1.65.*
# push notifications
py-vapid == 1.9.*
@@ -56,7 +53,7 @@ pywebpush == 2.0.*
# alpr
pyclipper == 1.3.*
shapely == 2.0.*
rapidfuzz==3.12.*
Levenshtein==0.26.*
# HailoRT Wheels
appdirs==1.4.*
argcomplete==2.0.*
@@ -74,12 +71,3 @@ prometheus-client == 0.21.*
# TFLite
tflite_runtime @ https://github.com/frigate-nvr/TFlite-builds/releases/download/v2.17.1/tflite_runtime-2.17.1-cp311-cp311-linux_x86_64.whl; platform_machine == 'x86_64'
tflite_runtime @ https://github.com/feranick/TFlite-builds/releases/download/v2.17.1/tflite_runtime-2.17.1-cp311-cp311-linux_aarch64.whl; platform_machine == 'aarch64'
# audio transcription
sherpa-onnx==1.12.*
faster-whisper==1.1.*
librosa==0.11.*
soundfile==0.13.*
# DeGirum detector
degirum == 0.16.*
# Memory profiling
memray == 1.15.*

View File

@@ -1 +1,2 @@
scikit-build == 0.18.*
nvidia-pyindex

View File

@@ -10,7 +10,7 @@ echo "[INFO] Starting certsync..."
lefile="/etc/letsencrypt/live/frigate/fullchain.pem"
tls_enabled=`python3 /usr/local/nginx/get_listen_settings.py | jq -r .tls.enabled`
tls_enabled=`python3 /usr/local/nginx/get_tls_settings.py | jq -r .enabled`
while true
do

View File

@@ -4,16 +4,44 @@
set -o errexit -o nounset -o pipefail
# opt out of openvino telemetry
if [ -e /usr/local/bin/opt_in_out ]; then
/usr/local/bin/opt_in_out --opt_out > /dev/null 2>&1
fi
# Logs should be sent to stdout so that s6 can collect them
# Tell S6-Overlay not to restart this service
s6-svc -O .
function migrate_db_path() {
# Find config file in yaml or yml, but prefer yaml
local config_file="${CONFIG_FILE:-"/config/config.yml"}"
local config_file_yaml="${config_file//.yml/.yaml}"
if [[ -f "${config_file_yaml}" ]]; then
config_file="${config_file_yaml}"
elif [[ ! -f "${config_file}" ]]; then
# Frigate will create the config file on startup
return 0
fi
unset config_file_yaml
# Use yq to check if database.path is set
local user_db_path
user_db_path=$(yq eval '.database.path' "${config_file}")
if [[ "${user_db_path}" == "null" ]]; then
local previous_db_path="/media/frigate/frigate.db"
local new_db_dir="/config"
if [[ -f "${previous_db_path}" ]]; then
if mountpoint --quiet "${new_db_dir}"; then
# /config is a mount point, move the db
echo "[INFO] Moving db from '${previous_db_path}' to the '${new_db_dir}' dir..."
# Move all files that starts with frigate.db to the new directory
mv -vf "${previous_db_path}"* "${new_db_dir}"
else
echo "[ERROR] Trying to migrate the db path from '${previous_db_path}' to the '${new_db_dir}' dir, but '${new_db_dir}' is not a mountpoint, please mount the '${new_db_dir}' dir"
return 1
fi
fi
fi
}
function set_libva_version() {
local ffmpeg_path
ffmpeg_path=$(python3 /usr/local/ffmpeg/get_ffmpeg_path.py)
@@ -22,8 +50,8 @@ function set_libva_version() {
}
echo "[INFO] Preparing Frigate..."
migrate_db_path
set_libva_version
echo "[INFO] Starting Frigate..."
cd /opt/frigate || echo "[ERROR] Failed to change working directory to /opt/frigate"

View File

@@ -50,40 +50,6 @@ function set_libva_version() {
export LIBAVFORMAT_VERSION_MAJOR
}
function setup_homekit_config() {
local config_path="$1"
if [[ ! -f "${config_path}" ]]; then
echo "[INFO] Creating empty HomeKit config file..."
echo 'homekit: {}' > "${config_path}"
fi
# Convert YAML to JSON for jq processing
local temp_json="/tmp/cache/homekit_config.json"
yq eval -o=json "${config_path}" > "${temp_json}" 2>/dev/null || {
echo "[WARNING] Failed to convert HomeKit config to JSON, skipping cleanup"
return 0
}
# Use jq to filter and keep only the homekit section
local cleaned_json="/tmp/cache/homekit_cleaned.json"
jq '
# Keep only the homekit section if it exists, otherwise empty object
if has("homekit") then {homekit: .homekit} else {homekit: {}} end
' "${temp_json}" > "${cleaned_json}" 2>/dev/null || {
echo '{"homekit": {}}' > "${cleaned_json}"
}
# Convert back to YAML and write to the config file
yq eval -P "${cleaned_json}" > "${config_path}" 2>/dev/null || {
echo "[WARNING] Failed to convert cleaned config to YAML, creating minimal config"
echo 'homekit: {}' > "${config_path}"
}
# Clean up temp files
rm -f "${temp_json}" "${cleaned_json}"
}
set_libva_version
if [[ -f "/dev/shm/go2rtc.yaml" ]]; then
@@ -95,7 +61,7 @@ if [[ ! -f "/dev/shm/go2rtc.yaml" ]]; then
echo "[INFO] Preparing new go2rtc config..."
if [[ -n "${SUPERVISOR_TOKEN:-}" ]]; then
# Running as a Home Assistant Add-on, infer the IP address and port
# Running as a Home Assistant add-on, infer the IP address and port
get_ip_and_port_from_supervisor
fi
@@ -104,10 +70,6 @@ else
echo "[WARNING] Unable to remove existing go2rtc config. Changes made to your frigate config file may not be recognized. Please remove the /dev/shm/go2rtc.yaml from your docker host manually."
fi
# HomeKit configuration persistence setup
readonly homekit_config_path="/config/go2rtc_homekit.yml"
setup_homekit_config "${homekit_config_path}"
readonly config_path="/config"
if [[ -x "${config_path}/go2rtc" ]]; then
@@ -120,7 +82,5 @@ fi
echo "[INFO] Starting go2rtc..."
# Replace the bash process with the go2rtc process, redirecting stderr to stdout
# Use HomeKit config as the primary config so writebacks go there
# The main config from Frigate will be loaded as a secondary config
exec 2>&1
exec "${binary_path}" -config="${homekit_config_path}" -config=/dev/shm/go2rtc.yaml
exec "${binary_path}" -config=/dev/shm/go2rtc.yaml

View File

@@ -79,13 +79,8 @@ if [ ! \( -f "$letsencrypt_path/privkey.pem" -a -f "$letsencrypt_path/fullchain.
-keyout "$letsencrypt_path/privkey.pem" -out "$letsencrypt_path/fullchain.pem" 2>/dev/null
fi
# build templates for optional FRIGATE_BASE_PATH environment variable
python3 /usr/local/nginx/get_base_path.py | \
tempio -template /usr/local/nginx/templates/base_path.gotmpl \
-out /usr/local/nginx/conf/base_path.conf
# build templates for optional TLS support
python3 /usr/local/nginx/get_listen_settings.py | \
python3 /usr/local/nginx/get_tls_settings.py | \
tempio -template /usr/local/nginx/templates/listen.gotmpl \
-out /usr/local/nginx/conf/listen.conf

View File

@@ -1,146 +0,0 @@
#!/command/with-contenv bash
# shellcheck shell=bash
# Do preparation tasks before starting the main services
set -o errexit -o nounset -o pipefail
function migrate_addon_config_dir() {
local home_assistant_config_dir="/homeassistant"
if ! mountpoint --quiet "${home_assistant_config_dir}"; then
# Not running as a Home Assistant Add-on
return 0
fi
local config_dir="/config"
local new_config_file="${config_dir}/config.yml"
local new_config_file_yaml="${new_config_file//.yml/.yaml}"
if [[ -f "${new_config_file_yaml}" || -f "${new_config_file}" ]]; then
# Already migrated
return 0
fi
local old_config_file="${home_assistant_config_dir}/frigate.yml"
local old_config_file_yaml="${old_config_file//.yml/.yaml}"
if [[ -f "${old_config_file}" ]]; then
:
elif [[ -f "${old_config_file_yaml}" ]]; then
old_config_file="${old_config_file_yaml}"
new_config_file="${new_config_file_yaml}"
else
# Nothing to migrate
return 0
fi
unset old_config_file_yaml new_config_file_yaml
echo "[INFO] Starting migration from Home Assistant config dir to Add-on config dir..." >&2
local db_path
db_path=$(yq -r '.database.path' "${old_config_file}")
if [[ "${db_path}" == "null" ]]; then
db_path="${config_dir}/frigate.db"
fi
if [[ "${db_path}" == "${config_dir}/"* ]]; then
# replace /config/ prefix with /homeassistant/
local old_db_path="${home_assistant_config_dir}/${db_path:8}"
if [[ -f "${old_db_path}" ]]; then
local new_db_dir
new_db_dir="$(dirname "${db_path}")"
echo "[INFO] Migrating database from '${old_db_path}' to '${new_db_dir}' dir..." >&2
mkdir -vp "${new_db_dir}"
mv -vf "${old_db_path}" "${new_db_dir}"
local db_file
for db_file in "${old_db_path}"-shm "${old_db_path}"-wal; do
if [[ -f "${db_file}" ]]; then
mv -vf "${db_file}" "${new_db_dir}"
fi
done
unset db_file
fi
fi
local config_entry
for config_entry in .model.path .model.labelmap_path .ffmpeg.path .mqtt.tls_ca_certs .mqtt.tls_client_cert .mqtt.tls_client_key; do
local config_entry_path
config_entry_path=$(yq -r "${config_entry}" "${old_config_file}")
if [[ "${config_entry_path}" == "${config_dir}/"* ]]; then
# replace /config/ prefix with /homeassistant/
local old_config_entry_path="${home_assistant_config_dir}/${config_entry_path:8}"
if [[ -f "${old_config_entry_path}" ]]; then
local new_config_entry_entry
new_config_entry_entry="$(dirname "${config_entry_path}")"
echo "[INFO] Migrating ${config_entry} from '${old_config_entry_path}' to '${config_entry_path}'..." >&2
mkdir -vp "${new_config_entry_entry}"
mv -vf "${old_config_entry_path}" "${config_entry_path}"
fi
fi
done
local old_model_cache_path="${home_assistant_config_dir}/model_cache"
if [[ -d "${old_model_cache_path}" ]]; then
echo "[INFO] Migrating '${old_model_cache_path}' to '${config_dir}'..." >&2
mv -f "${old_model_cache_path}" "${config_dir}"
fi
echo "[INFO] Migrating other files from '${home_assistant_config_dir}' to '${config_dir}'..." >&2
local file
for file in .exports .jwt_secret .timeline .vacuum go2rtc; do
file="${home_assistant_config_dir}/${file}"
if [[ -f "${file}" ]]; then
mv -vf "${file}" "${config_dir}"
fi
done
echo "[INFO] Migrating config file from '${old_config_file}' to '${new_config_file}'..." >&2
mv -vf "${old_config_file}" "${new_config_file}"
echo "[INFO] Migration from Home Assistant config dir to Add-on config dir completed." >&2
}
function migrate_db_from_media_to_config() {
# Find config file in yml or yaml, but prefer yml
local config_file="${CONFIG_FILE:-"/config/config.yml"}"
local config_file_yaml="${config_file//.yml/.yaml}"
if [[ -f "${config_file}" ]]; then
:
elif [[ -f "${config_file_yaml}" ]]; then
config_file="${config_file_yaml}"
else
# Frigate will create the config file on startup
return 0
fi
unset config_file_yaml
local user_db_path
user_db_path=$(yq -r '.database.path' "${config_file}")
if [[ "${user_db_path}" == "null" ]]; then
local old_db_path="/media/frigate/frigate.db"
local new_db_dir="/config"
if [[ -f "${old_db_path}" ]]; then
echo "[INFO] Migrating database from '${old_db_path}' to '${new_db_dir}' dir..." >&2
if mountpoint --quiet "${new_db_dir}"; then
# /config is a mount point, move the db
mv -vf "${old_db_path}" "${new_db_dir}"
local db_file
for db_file in "${old_db_path}"-shm "${old_db_path}"-wal; do
if [[ -f "${db_file}" ]]; then
mv -vf "${db_file}" "${new_db_dir}"
fi
done
unset db_file
else
echo "[ERROR] Trying to migrate the database path from '${old_db_path}' to '${new_db_dir}' dir, but '${new_db_dir}' is not a mountpoint, please mount the '${new_db_dir}' dir" >&2
return 1
fi
fi
fi
}
# remove leftover from last run, not normally needed, but just in case
# used by the docker healthcheck
rm -f /dev/shm/.frigate-is-stopping
migrate_addon_config_dir
migrate_db_from_media_to_config

View File

@@ -1 +0,0 @@
oneshot

View File

@@ -1 +0,0 @@
/etc/s6-overlay/s6-rc.d/prepare/run

View File

@@ -1,6 +1,6 @@
import json
import os
import sys
from typing import Any
from ruamel.yaml import YAML
@@ -9,24 +9,28 @@ from frigate.const import (
DEFAULT_FFMPEG_VERSION,
INCLUDED_FFMPEG_VERSIONS,
)
from frigate.util.config import find_config_file
sys.path.remove("/opt/frigate")
yaml = YAML()
config_file = find_config_file()
config_file = os.environ.get("CONFIG_FILE", "/config/config.yml")
# Check if we can use .yaml instead of .yml
config_file_yaml = config_file.replace(".yml", ".yaml")
if os.path.isfile(config_file_yaml):
config_file = config_file_yaml
try:
with open(config_file) as f:
raw_config = f.read()
if config_file.endswith((".yaml", ".yml")):
config: dict[str, Any] = yaml.load(raw_config)
config: dict[str, any] = yaml.load(raw_config)
elif config_file.endswith(".json"):
config: dict[str, Any] = json.loads(raw_config)
config: dict[str, any] = json.loads(raw_config)
except FileNotFoundError:
config: dict[str, Any] = {}
config: dict[str, any] = {}
path = config.get("ffmpeg", {}).get("path", "default")
if path == "default":

View File

@@ -4,7 +4,6 @@ import json
import os
import sys
from pathlib import Path
from typing import Any
from ruamel.yaml import YAML
@@ -16,7 +15,6 @@ from frigate.const import (
LIBAVFORMAT_VERSION_MAJOR,
)
from frigate.ffmpeg_presets import parse_preset_hardware_acceleration_encode
from frigate.util.config import find_config_file
sys.path.remove("/opt/frigate")
@@ -31,20 +29,25 @@ if os.path.isdir("/run/secrets"):
Path(os.path.join("/run/secrets", secret_file)).read_text().strip()
)
config_file = find_config_file()
config_file = os.environ.get("CONFIG_FILE", "/config/config.yml")
# Check if we can use .yaml instead of .yml
config_file_yaml = config_file.replace(".yml", ".yaml")
if os.path.isfile(config_file_yaml):
config_file = config_file_yaml
try:
with open(config_file) as f:
raw_config = f.read()
if config_file.endswith((".yaml", ".yml")):
config: dict[str, Any] = yaml.load(raw_config)
config: dict[str, any] = yaml.load(raw_config)
elif config_file.endswith(".json"):
config: dict[str, Any] = json.loads(raw_config)
config: dict[str, any] = json.loads(raw_config)
except FileNotFoundError:
config: dict[str, Any] = {}
config: dict[str, any] = {}
go2rtc_config: dict[str, Any] = config.get("go2rtc", {})
go2rtc_config: dict[str, any] = config.get("go2rtc", {})
# Need to enable CORS for go2rtc so the frigate integration / card work automatically
if go2rtc_config.get("api") is None:
@@ -54,7 +57,7 @@ elif go2rtc_config["api"].get("origin") is None:
# Need to set default location for HA config
if go2rtc_config.get("hass") is None:
go2rtc_config["hass"] = {"config": "/homeassistant"}
go2rtc_config["hass"] = {"config": "/config"}
# we want to ensure that logs are easy to read
if go2rtc_config.get("log") is None:
@@ -66,6 +69,10 @@ elif go2rtc_config["log"].get("format") is None:
if go2rtc_config.get("webrtc") is None:
go2rtc_config["webrtc"] = {}
# go2rtc should listen on 8555 tcp & udp by default
if go2rtc_config["webrtc"].get("listen") is None:
go2rtc_config["webrtc"]["listen"] = ":8555"
if go2rtc_config["webrtc"].get("candidates") is None:
default_candidates = []
# use internal candidate if it was discovered when running through the add-on
@@ -77,15 +84,33 @@ if go2rtc_config["webrtc"].get("candidates") is None:
go2rtc_config["webrtc"]["candidates"] = default_candidates
if go2rtc_config.get("rtsp", {}).get("username") is not None:
go2rtc_config["rtsp"]["username"] = go2rtc_config["rtsp"]["username"].format(
**FRIGATE_ENV_VARS
)
# This prevents WebRTC from attempting to establish a connection to the internal
# docker IPs which are not accessible from outside the container itself and just
# wastes time during negotiation. Note that this is only necessary because
# Frigate container doesn't run in host network mode.
if go2rtc_config["webrtc"].get("filter") is None:
go2rtc_config["webrtc"]["filter"] = {"candidates": []}
elif go2rtc_config["webrtc"]["filter"].get("candidates") is None:
go2rtc_config["webrtc"]["filter"]["candidates"] = []
if go2rtc_config.get("rtsp", {}).get("password") is not None:
go2rtc_config["rtsp"]["password"] = go2rtc_config["rtsp"]["password"].format(
**FRIGATE_ENV_VARS
)
# sets default RTSP response to be equivalent to ?video=h264,h265&audio=aac
# this means user does not need to specify audio codec when using restream
# as source for frigate and the integration supports HLS playback
if go2rtc_config.get("rtsp") is None:
go2rtc_config["rtsp"] = {"default_query": "mp4"}
else:
if go2rtc_config["rtsp"].get("default_query") is None:
go2rtc_config["rtsp"]["default_query"] = "mp4"
if go2rtc_config["rtsp"].get("username") is not None:
go2rtc_config["rtsp"]["username"] = go2rtc_config["rtsp"]["username"].format(
**FRIGATE_ENV_VARS
)
if go2rtc_config["rtsp"].get("password") is not None:
go2rtc_config["rtsp"]["password"] = go2rtc_config["rtsp"]["password"].format(
**FRIGATE_ENV_VARS
)
# ensure ffmpeg path is set correctly
path = config.get("ffmpeg", {}).get("path", "default")
@@ -103,7 +128,7 @@ elif go2rtc_config["ffmpeg"].get("bin") is None:
# need to replace ffmpeg command when using ffmpeg4
if LIBAVFORMAT_VERSION_MAJOR < 59:
rtsp_args = "-fflags nobuffer -flags low_delay -stimeout 10000000 -user_agent go2rtc/ffmpeg -rtsp_transport tcp -i {input}"
rtsp_args = "-fflags nobuffer -flags low_delay -stimeout 5000000 -user_agent go2rtc/ffmpeg -rtsp_transport tcp -i {input}"
if go2rtc_config.get("ffmpeg") is None:
go2rtc_config["ffmpeg"] = {"rtsp": rtsp_args}
elif go2rtc_config["ffmpeg"].get("rtsp") is None:
@@ -135,7 +160,7 @@ for name in go2rtc_config.get("streams", {}):
# add birdseye restream stream if enabled
if config.get("birdseye", {}).get("restream", False):
birdseye: dict[str, Any] = config.get("birdseye")
birdseye: dict[str, any] = config.get("birdseye")
input = f"-f rawvideo -pix_fmt yuv420p -video_size {birdseye.get('width', 1280)}x{birdseye.get('height', 720)} -r 10 -i {BIRDSEYE_PIPE}"
ffmpeg_cmd = f"exec:{parse_preset_hardware_acceleration_encode(ffmpeg_path, config.get('ffmpeg', {}).get('hwaccel_args', ''), input, '-rtsp_transport tcp -f rtsp {output}')}"

View File

@@ -17,9 +17,7 @@ http {
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for" '
'request_time="$request_time" upstream_response_time="$upstream_response_time"';
'"$http_user_agent" "$http_x_forwarded_for"';
access_log /dev/stdout main;
@@ -32,7 +30,7 @@ http {
gzip on;
gzip_comp_level 6;
gzip_types text/plain text/css application/json application/x-javascript application/javascript text/javascript image/svg+xml image/x-icon image/bmp;
gzip_types text/plain text/css application/json application/x-javascript application/javascript text/javascript image/svg+xml image/x-icon image/bmp image/png image/gif image/jpeg image/jpg;
gzip_proxied no-cache no-store private expired auth;
gzip_vary on;
@@ -73,8 +71,6 @@ http {
vod_manifest_segment_durations_mode accurate;
vod_ignore_edit_list on;
vod_segment_duration 10000;
# MPEG-TS settings (not used when fMP4 is enabled, kept for reference)
vod_hls_mpegts_align_frames off;
vod_hls_mpegts_interleave_frames on;
@@ -86,7 +82,7 @@ http {
aio on;
# file upload size
client_max_body_size 20M;
client_max_body_size 10M;
# https://github.com/kaltura/nginx-vod-module#vod_open_file_thread_pool
vod_open_file_thread_pool default;
@@ -100,17 +96,12 @@ http {
gzip_types application/vnd.apple.mpegurl;
include auth_location.conf;
include base_path.conf;
location /vod/ {
include auth_request.conf;
aio threads;
vod hls;
# Use fMP4 (fragmented MP4) instead of MPEG-TS for better performance
# Smaller segments, faster generation, better browser compatibility
vod_hls_container_format fmp4;
secure_token $args;
secure_token_types application/vnd.apple.mpegurl;
@@ -280,18 +271,6 @@ http {
include proxy.conf;
}
# Allow unauthenticated access to the first_time_login endpoint
# so the login page can load help text before authentication.
location /api/auth/first_time_login {
auth_request off;
limit_except GET {
deny all;
}
rewrite ^/api(/.*)$ $1 break;
proxy_pass http://frigate_api;
include proxy.conf;
}
location /api/stats {
include auth_request.conf;
access_log off;
@@ -320,35 +299,11 @@ http {
add_header Cache-Control "public";
}
location /fonts/ {
access_log off;
expires 1y;
add_header Cache-Control "public";
}
location /locales/ {
access_log off;
add_header Cache-Control "public";
}
location ~ ^/.*-([A-Za-z0-9]+)\.webmanifest$ {
access_log off;
expires 1y;
add_header Cache-Control "public";
default_type application/json;
proxy_set_header Accept-Encoding "";
sub_filter_once off;
sub_filter_types application/json;
sub_filter '"start_url": "/BASE_PATH/"' '"start_url" : "$http_x_ingress_path/"';
sub_filter '"src": "/BASE_PATH/' '"src": "$http_x_ingress_path/';
}
sub_filter 'href="/BASE_PATH/' 'href="$http_x_ingress_path/';
sub_filter 'url(/BASE_PATH/' 'url($http_x_ingress_path/';
sub_filter '"/BASE_PATH/dist/' '"$http_x_ingress_path/dist/';
sub_filter '"/BASE_PATH/js/' '"$http_x_ingress_path/js/';
sub_filter '"/BASE_PATH/assets/' '"$http_x_ingress_path/assets/';
sub_filter '"/BASE_PATH/locales/' '"$http_x_ingress_path/locales/';
sub_filter '"/BASE_PATH/monacoeditorwork/' '"$http_x_ingress_path/assets/';
sub_filter 'return"/BASE_PATH/"' 'return window.baseUrl';
sub_filter '<body>' '<body><script>window.baseUrl="$http_x_ingress_path/";</script>';

View File

@@ -18,10 +18,6 @@ proxy_set_header X-Forwarded-User $http_x_forwarded_user;
proxy_set_header X-Forwarded-Groups $http_x_forwarded_groups;
proxy_set_header X-Forwarded-Email $http_x_forwarded_email;
proxy_set_header X-Forwarded-Preferred-Username $http_x_forwarded_preferred_username;
proxy_set_header X-Auth-Request-User $http_x_auth_request_user;
proxy_set_header X-Auth-Request-Groups $http_x_auth_request_groups;
proxy_set_header X-Auth-Request-Email $http_x_auth_request_email;
proxy_set_header X-Auth-Request-Preferred-Username $http_x_auth_request_preferred_username;
proxy_set_header X-authentik-username $http_x_authentik_username;
proxy_set_header X-authentik-groups $http_x_authentik_groups;
proxy_set_header X-authentik-email $http_x_authentik_email;

View File

@@ -1,11 +0,0 @@
"""Prints the base path as json to stdout."""
import json
import os
from typing import Any
base_path = os.environ.get("FRIGATE_BASE_PATH", "")
result: dict[str, Any] = {"base_path": base_path}
print(json.dumps(result))

View File

@@ -1,35 +0,0 @@
"""Prints the tls config as json to stdout."""
import json
import sys
from typing import Any
from ruamel.yaml import YAML
sys.path.insert(0, "/opt/frigate")
from frigate.util.config import find_config_file
sys.path.remove("/opt/frigate")
yaml = YAML()
config_file = find_config_file()
try:
with open(config_file) as f:
raw_config = f.read()
if config_file.endswith((".yaml", ".yml")):
config: dict[str, Any] = yaml.load(raw_config)
elif config_file.endswith(".json"):
config: dict[str, Any] = json.loads(raw_config)
except FileNotFoundError:
config: dict[str, Any] = {}
tls_config: dict[str, any] = config.get("tls", {"enabled": True})
networking_config = config.get("networking", {})
ipv6_config = networking_config.get("ipv6", {"enabled": False})
output = {"tls": tls_config, "ipv6": ipv6_config}
print(json.dumps(output))

View File

@@ -0,0 +1,30 @@
"""Prints the tls config as json to stdout."""
import json
import os
from ruamel.yaml import YAML
yaml = YAML()
config_file = os.environ.get("CONFIG_FILE", "/config/config.yml")
# Check if we can use .yaml instead of .yml
config_file_yaml = config_file.replace(".yml", ".yaml")
if os.path.isfile(config_file_yaml):
config_file = config_file_yaml
try:
with open(config_file) as f:
raw_config = f.read()
if config_file.endswith((".yaml", ".yml")):
config: dict[str, any] = yaml.load(raw_config)
elif config_file.endswith(".json"):
config: dict[str, any] = json.loads(raw_config)
except FileNotFoundError:
config: dict[str, any] = {}
tls_config: dict[str, any] = config.get("tls", {"enabled": True})
print(json.dumps(tls_config))

View File

@@ -1,19 +0,0 @@
{{ if .base_path }}
location = {{ .base_path }} {
return 302 {{ .base_path }}/;
}
location ^~ {{ .base_path }}/ {
# remove base_url from the path before passing upstream
rewrite ^{{ .base_path }}/(.*) /$1 break;
proxy_pass $scheme://127.0.0.1:8971;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
proxy_set_header Host $host;
proxy_set_header X-Ingress-Path {{ .base_path }};
access_log off;
}
{{ end }}

View File

@@ -1,45 +1,33 @@
# Internal (IPv4 always; IPv6 optional)
# intended for internal traffic, not protected by auth
listen 5000;
{{ if .ipv6 }}{{ if .ipv6.enabled }}listen [::]:5000;{{ end }}{{ end }}
{{ if not .enabled }}
# intended for external traffic, protected by auth
{{ if .tls }}
{{ if .tls.enabled }}
# external HTTPS (IPv4 always; IPv6 optional)
listen 8971 ssl;
{{ if .ipv6 }}{{ if .ipv6.enabled }}listen [::]:8971 ssl;{{ end }}{{ end }}
ssl_certificate /etc/letsencrypt/live/frigate/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/frigate/privkey.pem;
# generated 2024-06-01, Mozilla Guideline v5.7, nginx 1.25.3, OpenSSL 1.1.1w, modern configuration, no OCSP
# https://ssl-config.mozilla.org/#server=nginx&version=1.25.3&config=modern&openssl=1.1.1w&ocsp=false&guideline=5.7
ssl_session_timeout 1d;
ssl_session_cache shared:MozSSL:10m; # about 40000 sessions
ssl_session_tickets off;
# modern configuration
ssl_protocols TLSv1.3;
ssl_prefer_server_ciphers off;
# HSTS (ngx_http_headers_module is required) (63072000 seconds)
add_header Strict-Transport-Security "max-age=63072000" always;
# ACME challenge location
location /.well-known/acme-challenge/ {
default_type "text/plain";
root /etc/letsencrypt/www;
}
{{ else }}
# external HTTP (IPv4 always; IPv6 optional)
listen 8971;
{{ if .ipv6 }}{{ if .ipv6.enabled }}listen [::]:8971;{{ end }}{{ end }}
{{ end }}
listen 8971;
{{ else }}
# (No tls section) default to HTTP (IPv4 always; IPv6 optional)
listen 8971;
{{ if .ipv6 }}{{ if .ipv6.enabled }}listen [::]:8971;{{ end }}{{ end }}
# intended for external traffic, protected by auth
listen 8971 ssl;
ssl_certificate /etc/letsencrypt/live/frigate/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/frigate/privkey.pem;
# generated 2024-06-01, Mozilla Guideline v5.7, nginx 1.25.3, OpenSSL 1.1.1w, modern configuration, no OCSP
# https://ssl-config.mozilla.org/#server=nginx&version=1.25.3&config=modern&openssl=1.1.1w&ocsp=false&guideline=5.7
ssl_session_timeout 1d;
ssl_session_cache shared:MozSSL:10m; # about 40000 sessions
ssl_session_tickets off;
# modern configuration
ssl_protocols TLSv1.3;
ssl_prefer_server_ciphers off;
# HSTS (ngx_http_headers_module is required) (63072000 seconds)
add_header Strict-Transport-Security "max-age=63072000" always;
# ACME challenge location
location /.well-known/acme-challenge/ {
default_type "text/plain";
root /etc/letsencrypt/www;
}
{{ end }}

View File

@@ -1,44 +0,0 @@
#!/bin/bash
set -e # Exit immediately if any command fails
set -o pipefail
echo "Starting MemryX driver and runtime installation..."
# Detect architecture
arch=$(uname -m)
# Purge existing packages and repo
echo "Removing old MemryX installations..."
# Remove any holds on MemryX packages (if they exist)
sudo apt-mark unhold memx-* mxa-manager || true
sudo apt purge -y memx-* mxa-manager || true
sudo rm -f /etc/apt/sources.list.d/memryx.list /etc/apt/trusted.gpg.d/memryx.asc
# Install kernel headers
echo "Installing kernel headers for: $(uname -r)"
sudo apt update
sudo apt install -y dkms linux-headers-$(uname -r)
# Add MemryX key and repo
echo "Adding MemryX GPG key and repository..."
wget -qO- https://developer.memryx.com/deb/memryx.asc | sudo tee /etc/apt/trusted.gpg.d/memryx.asc >/dev/null
echo 'deb https://developer.memryx.com/deb stable main' | sudo tee /etc/apt/sources.list.d/memryx.list >/dev/null
# Update and install specific SDK 2.1 packages
echo "Installing MemryX SDK 2.1 packages..."
sudo apt update
sudo apt install -y memx-drivers=2.1.* memx-accl=2.1.* mxa-manager=2.1.*
# Hold packages to prevent automatic upgrades
sudo apt-mark hold memx-drivers memx-accl mxa-manager
# ARM-specific board setup
if [[ "$arch" == "aarch64" || "$arch" == "arm64" ]]; then
echo "Running ARM board setup..."
sudo mx_arm_setup
fi
echo -e "\n\n\033[1;31mYOU MUST RESTART YOUR COMPUTER NOW\033[0m\n\n"
echo "MemryX SDK 2.1 installation complete!"

View File

@@ -11,10 +11,8 @@ COPY docker/main/requirements-wheels.txt /requirements-wheels.txt
COPY docker/rockchip/requirements-wheels-rk.txt /requirements-wheels-rk.txt
RUN sed -i "/https:\/\//d" /requirements-wheels.txt
RUN sed -i "/onnxruntime/d" /requirements-wheels.txt
RUN sed -i '/\[.*\]/d' /requirements-wheels.txt \
&& pip3 wheel --wheel-dir=/rk-wheels -c /requirements-wheels.txt -r /requirements-wheels-rk.txt
RUN pip3 wheel --wheel-dir=/rk-wheels -c /requirements-wheels.txt -r /requirements-wheels-rk.txt
RUN rm -rf /rk-wheels/opencv_python-*
RUN rm -rf /rk-wheels/torch-*
FROM deps AS rk-frigate
ARG TARGETARCH
@@ -28,11 +26,9 @@ COPY --from=rootfs / /
COPY docker/rockchip/COCO /COCO
COPY docker/rockchip/conv2rknn.py /opt/conv2rknn.py
ADD https://github.com/MarcA711/rknn-toolkit2/releases/download/v2.3.2/librknnrt.so /usr/lib/
ADD https://github.com/MarcA711/rknn-toolkit2/releases/download/v2.3.0/librknnrt.so /usr/lib/
ADD --chmod=111 https://github.com/MarcA711/Rockchip-FFmpeg-Builds/releases/download/6.1-11/ffmpeg /usr/lib/ffmpeg/6.0/bin/
ADD --chmod=111 https://github.com/MarcA711/Rockchip-FFmpeg-Builds/releases/download/6.1-11/ffprobe /usr/lib/ffmpeg/6.0/bin/
ADD --chmod=111 https://github.com/MarcA711/Rockchip-FFmpeg-Builds/releases/download/7.1-1/ffmpeg /usr/lib/ffmpeg/7.0/bin/
ADD --chmod=111 https://github.com/MarcA711/Rockchip-FFmpeg-Builds/releases/download/7.1-1/ffprobe /usr/lib/ffmpeg/7.0/bin/
ADD --chmod=111 https://github.com/MarcA711/Rockchip-FFmpeg-Builds/releases/download/6.1-7/ffmpeg /usr/lib/ffmpeg/6.0/bin/
ADD --chmod=111 https://github.com/MarcA711/Rockchip-FFmpeg-Builds/releases/download/6.1-7/ffprobe /usr/lib/ffmpeg/6.0/bin/
ENV DEFAULT_FFMPEG_VERSION="6.0"
ENV INCLUDED_FFMPEG_VERSIONS="${DEFAULT_FFMPEG_VERSION}:${INCLUDED_FFMPEG_VERSIONS}"

View File

@@ -14,7 +14,7 @@ try:
with open("/config/conv2rknn.yaml", "r") as config_file:
configuration = yaml.safe_load(config_file)
except FileNotFoundError:
raise Exception("Please place a config file at /config/conv2rknn.yaml")
raise Exception("Please place a config.yaml file in /config/conv2rknn.yaml")
if configuration["config"] != None:
rknn_config = configuration["config"]

View File

@@ -1,2 +1,2 @@
rknn-toolkit2 == 2.3.2
rknn-toolkit-lite2 == 2.3.2
rknn-toolkit2 == 2.3.0
rknn-toolkit-lite2 == 2.3.0

View File

@@ -2,7 +2,8 @@
# https://askubuntu.com/questions/972516/debian-frontend-environment-variable
ARG DEBIAN_FRONTEND=noninteractive
ARG ROCM=1
ARG ROCM=6.3.3
ARG AMDGPU=gfx900
ARG HSA_OVERRIDE_GFX_VERSION
ARG HSA_OVERRIDE
@@ -10,17 +11,18 @@ ARG HSA_OVERRIDE
FROM wget AS rocm
ARG ROCM
ARG AMDGPU
RUN apt update -qq && \
RUN apt update && \
apt install -y wget gpg && \
wget -O rocm.deb https://repo.radeon.com/amdgpu-install/7.1.1/ubuntu/jammy/amdgpu-install_7.1.1.70101-1_all.deb && \
wget -O rocm.deb https://repo.radeon.com/amdgpu-install/$ROCM/ubuntu/jammy/amdgpu-install_6.3.60303-1_all.deb && \
apt install -y ./rocm.deb && \
apt update && \
apt install -qq -y rocm
apt install -y rocm
RUN mkdir -p /opt/rocm-dist/opt/rocm-$ROCM/lib
RUN cd /opt/rocm-$ROCM/lib && \
cp -dpr libMIOpen*.so* libamd*.so* libhip*.so* libhsa*.so* libmigraphx*.so* librocm*.so* librocblas*.so* libroctracer*.so* librocsolver*.so* librocfft*.so* librocprofiler*.so* libroctx*.so* librocroller.so* /opt/rocm-dist/opt/rocm-$ROCM/lib/ && \
cp -dpr libMIOpen*.so* libamd*.so* libhip*.so* libhsa*.so* libmigraphx*.so* librocm*.so* librocblas*.so* libroctracer*.so* librocfft*.so* librocprofiler*.so* libroctx*.so* /opt/rocm-dist/opt/rocm-$ROCM/lib/ && \
mkdir -p /opt/rocm-dist/opt/rocm-$ROCM/lib/migraphx/lib && \
cp -dpr migraphx/lib/* /opt/rocm-dist/opt/rocm-$ROCM/lib/migraphx/lib
RUN cd /opt/rocm-dist/opt/ && ln -s rocm-$ROCM rocm
@@ -31,46 +33,35 @@ RUN echo /opt/rocm/lib|tee /opt/rocm-dist/etc/ld.so.conf.d/rocm.conf
#######################################################################
FROM deps AS deps-prelim
COPY docker/rocm/debian-backports.sources /etc/apt/sources.list.d/debian-backports.sources
RUN apt-get update && \
apt-get install -y libnuma1 && \
apt-get install -qq -y -t bookworm-backports mesa-va-drivers mesa-vulkan-drivers && \
# Install C++ standard library headers for HIPRTC kernel compilation fallback
apt-get install -qq -y libstdc++-12-dev && \
rm -rf /var/lib/apt/lists/*
RUN apt-get update && apt-get install -y libnuma1
WORKDIR /opt/frigate
COPY --from=rootfs / /
RUN wget -q https://bootstrap.pypa.io/get-pip.py -O get-pip.py \
&& sed -i 's/args.append("setuptools")/args.append("setuptools==77.0.3")/' get-pip.py \
&& python3 get-pip.py "pip" --break-system-packages
RUN python3 -m pip config set global.break-system-packages true
COPY docker/rocm/requirements-wheels-rocm.txt /requirements.txt
RUN pip3 uninstall -y onnxruntime \
RUN pip3 uninstall -y onnxruntime-openvino \
&& pip3 install -r /requirements.txt
#######################################################################
FROM scratch AS rocm-dist
ARG ROCM
ARG AMDGPU
COPY --from=rocm /opt/rocm-$ROCM/bin/rocminfo /opt/rocm-$ROCM/bin/migraphx-driver /opt/rocm-$ROCM/bin/
# Copy MIOpen database files for gfx10xx and gfx11xx only (RDNA2/RDNA3)
COPY --from=rocm /opt/rocm-$ROCM/share/miopen/db/*gfx10* /opt/rocm-$ROCM/share/miopen/db/
COPY --from=rocm /opt/rocm-$ROCM/share/miopen/db/*gfx11* /opt/rocm-$ROCM/share/miopen/db/
# Copy rocBLAS library files for gfx10xx and gfx11xx only
COPY --from=rocm /opt/rocm-$ROCM/lib/rocblas/library/*gfx10* /opt/rocm-$ROCM/lib/rocblas/library/
COPY --from=rocm /opt/rocm-$ROCM/lib/rocblas/library/*gfx11* /opt/rocm-$ROCM/lib/rocblas/library/
COPY --from=rocm /opt/rocm-$ROCM/share/miopen/db/*$AMDGPU* /opt/rocm-$ROCM/share/miopen/db/
COPY --from=rocm /opt/rocm-$ROCM/share/miopen/db/*gfx908* /opt/rocm-$ROCM/share/miopen/db/
COPY --from=rocm /opt/rocm-$ROCM/lib/rocblas/library/*$AMDGPU* /opt/rocm-$ROCM/lib/rocblas/library/
COPY --from=rocm /opt/rocm-dist/ /
#######################################################################
FROM deps-prelim AS rocm-prelim-hsa-override0
ENV MIGRAPHX_DISABLE_MIOPEN_FUSION=1
ENV MIGRAPHX_DISABLE_SCHEDULE_PASS=1
ENV MIGRAPHX_DISABLE_REDUCE_FUSION=1
ENV MIGRAPHX_ENABLE_HIPRTC_WORKAROUNDS=1
ENV HSA_ENABLE_SDMA=0
ENV MIGRAPHX_ENABLE_NHWC=1
COPY --from=rocm-dist / /

View File

@@ -1,6 +0,0 @@
Types: deb
URIs: http://deb.debian.org/debian
Suites: bookworm-backports
Components: main
Enabled: yes
Signed-By: /usr/share/keyrings/debian-archive-keyring.gpg

View File

@@ -1 +1 @@
onnxruntime-migraphx @ https://github.com/NickM-27/frigate-onnxruntime-rocm/releases/download/v7.1.0/onnxruntime_migraphx-1.23.1-cp311-cp311-linux_x86_64.whl
onnxruntime-rocm @ https://github.com/NickM-27/frigate-onnxruntime-rocm/releases/download/v6.3.3/onnxruntime_rocm-1.20.1-cp311-cp311-linux_x86_64.whl

View File

@@ -1,5 +1,8 @@
variable "AMDGPU" {
default = "gfx900"
}
variable "ROCM" {
default = "7.1.1"
default = "6.3.3"
}
variable "HSA_OVERRIDE_GFX_VERSION" {
default = ""
@@ -35,6 +38,7 @@ target rocm {
}
platforms = ["linux/amd64"]
args = {
AMDGPU = AMDGPU,
ROCM = ROCM,
HSA_OVERRIDE_GFX_VERSION = HSA_OVERRIDE_GFX_VERSION,
HSA_OVERRIDE = HSA_OVERRIDE

View File

@@ -1,15 +1,53 @@
BOARDS += rocm
# AMD/ROCm is chunky so we build couple of smaller images for specific chipsets
ROCM_CHIPSETS:=gfx900:9.0.0 gfx1030:10.3.0 gfx1100:11.0.0
local-rocm: version
$(foreach chipset,$(ROCM_CHIPSETS), \
AMDGPU=$(word 1,$(subst :, ,$(chipset))) \
HSA_OVERRIDE_GFX_VERSION=$(word 2,$(subst :, ,$(chipset))) \
HSA_OVERRIDE=1 \
docker buildx bake --file=docker/rocm/rocm.hcl rocm \
--set rocm.tags=frigate:latest-rocm-$(word 1,$(subst :, ,$(chipset))) \
--load \
&&) true
unset HSA_OVERRIDE_GFX_VERSION && \
HSA_OVERRIDE=0 \
AMDGPU=gfx \
docker buildx bake --file=docker/rocm/rocm.hcl rocm \
--set rocm.tags=frigate:latest-rocm \
--load
build-rocm: version
$(foreach chipset,$(ROCM_CHIPSETS), \
AMDGPU=$(word 1,$(subst :, ,$(chipset))) \
HSA_OVERRIDE_GFX_VERSION=$(word 2,$(subst :, ,$(chipset))) \
HSA_OVERRIDE=1 \
docker buildx bake --file=docker/rocm/rocm.hcl rocm \
--set rocm.tags=$(IMAGE_REPO):${GITHUB_REF_NAME}-$(COMMIT_HASH)-rocm-$(chipset) \
&&) true
unset HSA_OVERRIDE_GFX_VERSION && \
HSA_OVERRIDE=0 \
AMDGPU=gfx \
docker buildx bake --file=docker/rocm/rocm.hcl rocm \
--set rocm.tags=$(IMAGE_REPO):${GITHUB_REF_NAME}-$(COMMIT_HASH)-rocm
push-rocm: build-rocm
$(foreach chipset,$(ROCM_CHIPSETS), \
AMDGPU=$(word 1,$(subst :, ,$(chipset))) \
HSA_OVERRIDE_GFX_VERSION=$(word 2,$(subst :, ,$(chipset))) \
HSA_OVERRIDE=1 \
docker buildx bake --file=docker/rocm/rocm.hcl rocm \
--set rocm.tags=$(IMAGE_REPO):${GITHUB_REF_NAME}-$(COMMIT_HASH)-rocm-$(chipset) \
--push \
&&) true
unset HSA_OVERRIDE_GFX_VERSION && \
HSA_OVERRIDE=0 \
AMDGPU=gfx \
docker buildx bake --file=docker/rocm/rocm.hcl rocm \
--set rocm.tags=$(IMAGE_REPO):${GITHUB_REF_NAME}-$(COMMIT_HASH)-rocm \
--push

View File

@@ -1,28 +0,0 @@
# syntax=docker/dockerfile:1.6
# https://askubuntu.com/questions/972516/debian-frontend-environment-variable
ARG DEBIAN_FRONTEND=noninteractive
# Globally set pip break-system-packages option to avoid having to specify it every time
ARG PIP_BREAK_SYSTEM_PACKAGES=1
FROM wheels AS synap1680-wheels
ARG TARGETARCH
# Install dependencies
RUN wget -qO- "https://github.com/GaryHuang-ASUS/synaptics_astra_sdk/releases/download/v1.5.0/Synaptics-SL1680-v1.5.0-rt.tar" | tar -C / -xzf -
RUN wget -P /wheels/ "https://github.com/synaptics-synap/synap-python/releases/download/v0.0.4-preview/synap_python-0.0.4-cp311-cp311-manylinux_2_35_aarch64.whl"
FROM deps AS synap1680-deps
ARG TARGETARCH
ARG PIP_BREAK_SYSTEM_PACKAGES
RUN --mount=type=bind,from=synap1680-wheels,source=/wheels,target=/deps/synap-wheels \
pip3 install --no-deps -U /deps/synap-wheels/*.whl
WORKDIR /opt/frigate/
COPY --from=rootfs / /
COPY --from=synap1680-wheels /rootfs/usr/local/lib/*.so /usr/lib
ADD https://raw.githubusercontent.com/synaptics-astra/synap-release/v1.5.0/models/dolphin/object_detection/coco/model/mobilenet224_full80/model.synap /synaptics/mobilenet.synap

View File

@@ -1,27 +0,0 @@
target wheels {
dockerfile = "docker/main/Dockerfile"
platforms = ["linux/arm64"]
target = "wheels"
}
target deps {
dockerfile = "docker/main/Dockerfile"
platforms = ["linux/arm64"]
target = "deps"
}
target rootfs {
dockerfile = "docker/main/Dockerfile"
platforms = ["linux/arm64"]
target = "rootfs"
}
target synaptics {
dockerfile = "docker/synaptics/Dockerfile"
contexts = {
wheels = "target:wheels",
deps = "target:deps",
rootfs = "target:rootfs"
}
platforms = ["linux/arm64"]
}

View File

@@ -1,15 +0,0 @@
BOARDS += synaptics
local-synaptics: version
docker buildx bake --file=docker/synaptics/synaptics.hcl synaptics \
--set synaptics.tags=frigate:latest-synaptics \
--load
build-synaptics: version
docker buildx bake --file=docker/synaptics/synaptics.hcl synaptics \
--set synaptics.tags=$(IMAGE_REPO):${GITHUB_REF_NAME}-$(COMMIT_HASH)-synaptics
push-synaptics: build-synaptics
docker buildx bake --file=docker/synaptics/synaptics.hcl synaptics \
--set synaptics.tags=$(IMAGE_REPO):${GITHUB_REF_NAME}-$(COMMIT_HASH)-synaptics \
--push

View File

@@ -6,32 +6,24 @@ ARG DEBIAN_FRONTEND=noninteractive
# Globally set pip break-system-packages option to avoid having to specify it every time
ARG PIP_BREAK_SYSTEM_PACKAGES=1
FROM wheels AS trt-wheels
FROM tensorrt-base AS frigate-tensorrt
ARG PIP_BREAK_SYSTEM_PACKAGES
ENV TRT_VER=8.6.1
# Install TensorRT wheels
COPY docker/tensorrt/requirements-amd64.txt /requirements-tensorrt.txt
COPY docker/main/requirements-wheels.txt /requirements-wheels.txt
# remove dependencies from the requirements that have type constraints
RUN sed -i '/\[.*\]/d' /requirements-wheels.txt \
&& pip3 wheel --wheel-dir=/trt-wheels -c /requirements-wheels.txt -r /requirements-tensorrt.txt
FROM deps AS frigate-tensorrt
ARG PIP_BREAK_SYSTEM_PACKAGES
RUN --mount=type=bind,from=trt-wheels,source=/trt-wheels,target=/deps/trt-wheels \
pip3 uninstall -y onnxruntime \
&& pip3 install -U /deps/trt-wheels/*.whl
COPY --from=rootfs / /
COPY docker/tensorrt/detector/rootfs/etc/ld.so.conf.d /etc/ld.so.conf.d
RUN ldconfig
RUN pip3 install -U -r /requirements-tensorrt.txt && ldconfig
WORKDIR /opt/frigate/
COPY --from=rootfs / /
# Dev Container w/ TRT
FROM devcontainer AS devcontainer-trt
COPY --from=trt-deps /usr/local/lib/libyolo_layer.so /usr/local/lib/libyolo_layer.so
COPY --from=trt-deps /usr/local/src/tensorrt_demos /usr/local/src/tensorrt_demos
COPY --from=trt-deps /usr/local/cuda-12.1 /usr/local/cuda
COPY docker/tensorrt/detector/rootfs/ /
COPY --from=trt-deps /usr/local/lib/libyolo_layer.so /usr/local/lib/libyolo_layer.so
RUN --mount=type=bind,from=trt-wheels,source=/trt-wheels,target=/deps/trt-wheels \
pip3 install -U /deps/trt-wheels/*.whl

View File

@@ -1,69 +1,17 @@
# syntax=docker/dockerfile:1.6
# syntax=docker/dockerfile:1.4
# https://askubuntu.com/questions/972516/debian-frontend-environment-variable
ARG DEBIAN_FRONTEND=noninteractive
ARG BASE_IMAGE
ARG TRT_BASE=nvcr.io/nvidia/tensorrt:23.12-py3
# Build TensorRT-specific library
FROM ${TRT_BASE} AS trt-deps
ARG TARGETARCH
ARG COMPUTE_LEVEL
RUN apt-get update \
&& apt-get install -y git build-essential cuda-nvcc-* cuda-nvtx-* libnvinfer-dev libnvinfer-plugin-dev libnvparsers-dev libnvonnxparsers-dev \
&& rm -rf /var/lib/apt/lists/*
RUN --mount=type=bind,source=docker/tensorrt/detector/tensorrt_libyolo.sh,target=/tensorrt_libyolo.sh \
/tensorrt_libyolo.sh
# COPY required individual CUDA deps
RUN mkdir -p /usr/local/cuda-deps
RUN if [ "$TARGETARCH" = "amd64" ]; then \
cp /usr/local/cuda-12.3/targets/x86_64-linux/lib/libcurand.so.* /usr/local/cuda-deps/ && \
cp /usr/local/cuda-12.3/targets/x86_64-linux/lib/libnvrtc.so.* /usr/local/cuda-deps/ && \
cd /usr/local/cuda-deps/ && \
for lib in libnvrtc.so.*; do \
if [[ "$lib" =~ libnvrtc.so\.([0-9]+\.[0-9]+\.[0-9]+) ]]; then \
version="${BASH_REMATCH[1]}"; \
ln -sf "libnvrtc.so.$version" libnvrtc.so; \
fi; \
done && \
for lib in libcurand.so.*; do \
if [[ "$lib" =~ libcurand.so\.([0-9]+\.[0-9]+\.[0-9]+\.[0-9]+) ]]; then \
version="${BASH_REMATCH[1]}"; \
ln -sf "libcurand.so.$version" libcurand.so; \
fi; \
done; \
fi
# Frigate w/ TensorRT Support as separate image
FROM deps AS tensorrt-base
#Disable S6 Global timeout
ENV S6_CMD_WAIT_FOR_SERVICES_MAXTIME=0
# COPY TensorRT Model Generation Deps
COPY --from=trt-deps /usr/local/lib/libyolo_layer.so /usr/local/lib/libyolo_layer.so
COPY --from=trt-deps /usr/local/src/tensorrt_demos /usr/local/src/tensorrt_demos
# COPY Individual CUDA deps folder
COPY --from=trt-deps /usr/local/cuda-deps /usr/local/cuda
COPY docker/tensorrt/detector/rootfs/ /
ENV YOLO_MODELS=""
HEALTHCHECK --start-period=600s --start-interval=5s --interval=15s --timeout=5s --retries=3 \
CMD curl --fail --silent --show-error http://127.0.0.1:5000/api/version || exit 1
FROM ${BASE_IMAGE} AS build-wheels
ARG DEBIAN_FRONTEND
# Add deadsnakes PPA for python3.11
RUN apt-get -qq update && \
apt-get -qq install -y --no-install-recommends \
software-properties-common \
&& add-apt-repository ppa:deadsnakes/ppa
apt-get -qq install -y --no-install-recommends \
software-properties-common \
&& add-apt-repository ppa:deadsnakes/ppa
# Use a separate container to build wheels to prevent build dependencies in final image
RUN apt-get -qq update \
@@ -76,7 +24,6 @@ RUN apt-get -qq update \
RUN update-alternatives --install /usr/bin/python3 python3 /usr/bin/python3.11 1
RUN wget -q https://bootstrap.pypa.io/get-pip.py -O get-pip.py \
&& sed -i 's/args.append("setuptools")/args.append("setuptools==77.0.3")/' get-pip.py \
&& python3 get-pip.py "pip"
FROM build-wheels AS trt-wheels
@@ -99,11 +46,12 @@ RUN --mount=type=bind,source=docker/tensorrt/detector/build_python_tensorrt.sh,t
&& TENSORRT_VER=$(cat /etc/TENSORRT_VER) /deps/build_python_tensorrt.sh
COPY docker/tensorrt/requirements-arm64.txt /requirements-tensorrt.txt
RUN pip3 wheel --wheel-dir=/trt-wheels -r /requirements-tensorrt.txt
# See https://elinux.org/Jetson_Zoo#ONNX_Runtime
ADD https://nvidia.box.com/shared/static/9yvw05k6u343qfnkhdv2x6xhygze0aq1.whl /trt-wheels/onnxruntime_gpu-1.19.0-cp311-cp311-linux_aarch64.whl
ADD https://nvidia.box.com/shared/static/9yvw05k6u343qfnkhdv2x6xhygze0aq1.whl /tmp/onnxruntime_gpu-1.19.0-cp311-cp311-linux_aarch64.whl
RUN pip3 uninstall -y onnxruntime-openvino \
&& pip3 wheel --wheel-dir=/trt-wheels -r /requirements-tensorrt.txt \
&& pip3 install --no-deps /tmp/onnxruntime_gpu-1.19.0-cp311-cp311-linux_aarch64.whl
FROM build-wheels AS trt-model-wheels
ARG DEBIAN_FRONTEND
@@ -112,7 +60,7 @@ RUN apt-get update \
&& apt-get install -y protobuf-compiler libprotobuf-dev \
&& rm -rf /var/lib/apt/lists/*
RUN --mount=type=bind,source=docker/tensorrt/requirements-models-arm64.txt,target=/requirements-tensorrt-models.txt \
pip3 wheel --wheel-dir=/trt-model-wheels --no-deps -r /requirements-tensorrt-models.txt
pip3 wheel --wheel-dir=/trt-model-wheels -r /requirements-tensorrt-models.txt
FROM wget AS jetson-ffmpeg
ARG DEBIAN_FRONTEND
@@ -144,13 +92,11 @@ RUN mkdir -p /etc/ld.so.conf.d && echo /usr/lib/ffmpeg/jetson/lib/ > /etc/ld.so.
COPY --from=trt-wheels /etc/TENSORRT_VER /etc/TENSORRT_VER
RUN --mount=type=bind,from=trt-wheels,source=/trt-wheels,target=/deps/trt-wheels \
--mount=type=bind,from=trt-model-wheels,source=/trt-model-wheels,target=/deps/trt-model-wheels \
pip3 uninstall -y onnxruntime \
&& pip3 install -U /deps/trt-wheels/*.whl \
&& pip3 install -U /deps/trt-model-wheels/*.whl \
pip3 install -U /deps/trt-wheels/*.whl /deps/trt-model-wheels/*.whl \
&& ldconfig
WORKDIR /opt/frigate/
COPY --from=rootfs / /
# Fixes "Error importing detector runtime: /usr/lib/aarch64-linux-gnu/libstdc++.so.6: cannot allocate memory in static TLS block"
ENV LD_PRELOAD /usr/lib/aarch64-linux-gnu/libstdc++.so.6
ENV LD_PRELOAD /usr/lib/aarch64-linux-gnu/libstdc++.so.6

View File

@@ -0,0 +1,44 @@
# syntax=docker/dockerfile:1.6
# https://askubuntu.com/questions/972516/debian-frontend-environment-variable
ARG DEBIAN_FRONTEND=noninteractive
ARG TRT_BASE=nvcr.io/nvidia/tensorrt:23.12-py3
# Build TensorRT-specific library
FROM ${TRT_BASE} AS trt-deps
ARG TARGETARCH
ARG COMPUTE_LEVEL
RUN apt-get update \
&& apt-get install -y git build-essential cuda-nvcc-* cuda-nvtx-* libnvinfer-dev libnvinfer-plugin-dev libnvparsers-dev libnvonnxparsers-dev \
&& rm -rf /var/lib/apt/lists/*
RUN --mount=type=bind,source=docker/tensorrt/detector/tensorrt_libyolo.sh,target=/tensorrt_libyolo.sh \
/tensorrt_libyolo.sh
# COPY required individual CUDA deps
RUN mkdir -p /usr/local/cuda-deps
RUN if [ "$TARGETARCH" = "amd64" ]; then \
cp /usr/local/cuda-12.3/targets/x86_64-linux/lib/libcurand.so.* /usr/local/cuda-deps/ && \
cp /usr/local/cuda-12.3/targets/x86_64-linux/lib/libnvrtc.so.* /usr/local/cuda-deps/ ; \
fi
# Frigate w/ TensorRT Support as separate image
FROM deps AS tensorrt-base
#Disable S6 Global timeout
ENV S6_CMD_WAIT_FOR_SERVICES_MAXTIME=0
# COPY TensorRT Model Generation Deps
COPY --from=trt-deps /usr/local/lib/libyolo_layer.so /usr/local/lib/libyolo_layer.so
COPY --from=trt-deps /usr/local/src/tensorrt_demos /usr/local/src/tensorrt_demos
# COPY Individual CUDA deps folder
COPY --from=trt-deps /usr/local/cuda-deps /usr/local/cuda
COPY docker/tensorrt/detector/rootfs/ /
ENV YOLO_MODELS=""
HEALTHCHECK --start-period=600s --start-interval=5s --interval=15s --timeout=5s --retries=3 \
CMD curl --fail --silent --show-error http://127.0.0.1:5000/api/version || exit 1

View File

@@ -1,6 +1,8 @@
/usr/local/lib
/usr/local/cuda
/usr/local/lib/python3.11/dist-packages/nvidia/cudnn/lib
/usr/local/lib/python3.11/dist-packages/nvidia/cuda_runtime/lib
/usr/local/lib/python3.11/dist-packages/nvidia/cublas/lib
/usr/local/lib/python3.11/dist-packages/nvidia/cufft/lib
/usr/local/lib/python3.11/dist-packages/nvidia/curand/lib/
/usr/local/lib/python3.11/dist-packages/nvidia/cuda_nvrtc/lib/
/usr/local/lib/python3.11/dist-packages/nvidia/cuda_nvrtc/lib
/usr/local/lib/python3.11/dist-packages/tensorrt
/usr/local/lib/python3.11/dist-packages/nvidia/cufft/lib

View File

@@ -1,18 +1,17 @@
# NVidia TensorRT Support (amd64 only)
--extra-index-url 'https://pypi.nvidia.com'
cython==3.0.*; platform_machine == 'x86_64'
nvidia_cuda_cupti_cu12==12.5.82; platform_machine == 'x86_64'
nvidia-cublas-cu12==12.5.3.*; platform_machine == 'x86_64'
nvidia-cudnn-cu12==9.3.0.*; platform_machine == 'x86_64'
nvidia-cufft-cu12==11.2.3.*; platform_machine == 'x86_64'
nvidia-curand-cu12==10.3.6.*; platform_machine == 'x86_64'
nvidia_cuda_nvcc_cu12==12.5.82; platform_machine == 'x86_64'
nvidia-cuda-nvrtc-cu12==12.5.82; platform_machine == 'x86_64'
nvidia_cuda_runtime_cu12==12.5.82; platform_machine == 'x86_64'
nvidia_cusolver_cu12==11.6.3.*; platform_machine == 'x86_64'
nvidia_cusparse_cu12==12.5.1.*; platform_machine == 'x86_64'
nvidia_nccl_cu12==2.23.4; platform_machine == 'x86_64'
nvidia_nvjitlink_cu12==12.5.82; platform_machine == 'x86_64'
numpy < 1.24; platform_machine == 'x86_64'
tensorrt == 8.6.1; platform_machine == 'x86_64'
tensorrt_bindings == 8.6.1; platform_machine == 'x86_64'
cuda-python == 11.8.*; platform_machine == 'x86_64'
cython == 3.0.*; platform_machine == 'x86_64'
nvidia-cuda-runtime-cu12 == 12.1.*; platform_machine == 'x86_64'
nvidia-cuda-runtime-cu11 == 11.8.*; platform_machine == 'x86_64'
nvidia-cublas-cu11 == 11.11.3.6; platform_machine == 'x86_64'
nvidia-cudnn-cu11 == 8.6.0.*; platform_machine == 'x86_64'
nvidia-cudnn-cu12 == 9.5.0.*; platform_machine == 'x86_64'
nvidia-cufft-cu11==10.*; platform_machine == 'x86_64'
nvidia-cufft-cu12==11.*; platform_machine == 'x86_64'
onnx==1.16.*; platform_machine == 'x86_64'
onnxruntime-gpu==1.22.*; platform_machine == 'x86_64'
onnxruntime-gpu==1.20.*; platform_machine == 'x86_64'
protobuf==3.20.3; platform_machine == 'x86_64'

View File

@@ -1,2 +1 @@
cuda-python == 12.6.*; platform_machine == 'aarch64'
numpy == 1.26.*; platform_machine == 'aarch64'

View File

@@ -1,2 +1,3 @@
onnx == 1.14.0; platform_machine == 'aarch64'
protobuf == 3.20.3; platform_machine == 'aarch64'
numpy == 1.23.*; platform_machine == 'aarch64' # required by python-tensorrt 8.2.1 (Jetpack 4.6)

View File

@@ -79,13 +79,21 @@ target "trt-deps" {
inherits = ["_build_args"]
}
target "tensorrt-base" {
dockerfile = "docker/tensorrt/Dockerfile.base"
context = "."
contexts = {
deps = "target:deps",
}
inherits = ["_build_args"]
}
target "tensorrt" {
dockerfile = "docker/tensorrt/Dockerfile.${ARCH}"
context = "."
contexts = {
wget = "target:wget",
wheels = "target:wheels",
deps = "target:deps",
tensorrt-base = "target:tensorrt-base",
rootfs = "target:rootfs"
}
target = "frigate-tensorrt"

View File

@@ -25,7 +25,7 @@ Examples of available modules are:
- `frigate.app`
- `frigate.mqtt`
- `frigate.object_detection.base`
- `frigate.object_detection`
- `detector.<detector_name>`
- `watchdog.<camera_name>`
- `ffmpeg.<camera_name>.<sorted_roles>` NOTE: All FFmpeg logs are sent as `error` level.
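For reference, a minimal sketch of how these module names are typically used (this assumes the standard `logger` section with a `default` level and per-module overrides under `logs`; `front_door` is a hypothetical camera name):

```yaml
logger:
  default: info
  logs:
    frigate.mqtt: debug
    watchdog.front_door: debug
```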
@@ -44,7 +44,7 @@ go2rtc:
### `environment_vars`
This section can be used to set environment variables for those unable to modify the environment of the container, like within Home Assistant OS.
This section can be used to set environment variables for those unable to modify the environment of the container (ie. within HassOS)
Example:
@@ -53,17 +53,6 @@ environment_vars:
VARIABLE_NAME: variable_value
```
#### TensorFlow Thread Configuration
If you encounter thread creation errors during classification model training, you can limit TensorFlow's thread usage:
```yaml
environment_vars:
TF_INTRA_OP_PARALLELISM_THREADS: "2" # Threads within operations (0 = use default)
TF_INTER_OP_PARALLELISM_THREADS: "2" # Threads between operations (0 = use default)
TF_DATASET_THREAD_POOL_SIZE: "2" # Data pipeline threads (0 = use default)
```
### `database`
Tracked object and recording information is managed in an SQLite database at `/config/frigate.db`. If that database is deleted, recordings will be orphaned and will need to be cleaned up manually. They also won't show up in the Media Browser within Home Assistant.
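If you need the database somewhere other than the default, the path can be overridden; a minimal sketch, simply making the documented default explicit:

```yaml
database:
  path: /config/frigate.db
```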
@@ -183,43 +172,6 @@ listen [::]:8971 ipv6only=off ssl;
listen [::]:5000 ipv6only=off;
```
## Base path
By default, Frigate runs at the root path (`/`). However, some setups require running Frigate under a custom path prefix (e.g. `/frigate`), especially when Frigate is located behind a reverse proxy that requires path-based routing.
### Set Base Path via HTTP Header
The preferred way to configure the base path is through the `X-Ingress-Path` HTTP header, which needs to be set to the desired base path in an upstream reverse proxy.
For example, in Nginx:
```
location /frigate {
proxy_set_header X-Ingress-Path /frigate;
proxy_pass http://frigate_backend;
}
```
### Set Base Path via Environment Variable
When it is not feasible to set the base path via a HTTP header, it can also be set via the `FRIGATE_BASE_PATH` environment variable in the Docker Compose file.
For example:
```
services:
frigate:
image: blakeblackshear/frigate:latest
environment:
- FRIGATE_BASE_PATH=/frigate
```
This can be used, for example, to access Frigate via a Tailscale agent (https), by simply forwarding all requests to the base path (http):
```
tailscale serve --https=443 --bg --set-path /frigate http://localhost:5000/frigate
```
## Custom Dependencies
### Custom ffmpeg build
@@ -234,7 +186,7 @@ To do this:
### Custom go2rtc version
Frigate currently includes go2rtc v1.9.10; there may be certain cases where you want to run a different version of go2rtc.
Frigate currently includes go2rtc v1.9.2; there may be certain cases where you want to run a different version of go2rtc.
To do this:
@@ -258,7 +210,7 @@ curl -X POST http://frigate_host:5000/api/config/save -d @config.json
if you'd like you can use your yaml config directly by using [`yq`](https://github.com/mikefarah/yq) to convert it to json:
```bash
yq -o=json '.' config.yaml | curl -X POST 'http://frigate_host:5000/api/config/save?save_option=saveonly' --data-binary @-
yq r -j config.yml | curl -X POST http://frigate_host:5000/api/config/save -d @-
```
### Via Command Line

View File

@@ -50,7 +50,7 @@ cameras:
### Configuring Minimum Volume
The audio detector uses volume levels in the same way that motion in a camera feed is used for object detection. This means that Frigate will not run audio detection unless the audio volume is above the configured level, in order to reduce resource usage. Audio levels can vary widely between camera models, so it is important to run tests to see what the volume levels are. The Debug view in the Frigate UI has an Audio tab for cameras that have the `audio` role assigned, where a graph and the current levels are displayed. The `min_volume` parameter should be set to the minimum `RMS` level required to run audio detection.
The audio detector uses volume levels in the same way that motion in a camera feed is used for object detection. This means that frigate will not run audio detection unless the audio volume is above the configured level in order to reduce resource usage. Audio levels can vary widely between camera models so it is important to run tests to see what volume levels are. MQTT explorer can be used on the audio topic to see what volume level is being detected.
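A minimal sketch of setting that threshold (the `500` value is only an illustrative assumption; use the level your own testing shows):

```yaml
audio:
  min_volume: 500
```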
:::tip
@@ -72,106 +72,3 @@ audio:
- speech
- yell
```
### Audio Transcription
Frigate supports fully local audio transcription using either `sherpa-onnx` or OpenAI's open-source Whisper models via `faster-whisper`. The goal of this feature is to support Semantic Search for `speech` audio events. Frigate is not intended to act as a continuous, fully-automatic speech transcription service — automatically transcribing all speech (or queuing many audio events for transcription) requires substantial CPU (or GPU) resources and is impractical on most systems. For this reason, transcriptions for events are initiated manually from the UI or the API rather than being run continuously in the background.
Transcription accuracy also depends heavily on the quality of your camera's microphone and recording conditions. Many cameras use inexpensive microphones, and distance to the speaker, low audio bitrate, or background noise can significantly reduce transcription quality. If you need higher accuracy, more robust long-running queues, or large-scale automatic transcription, consider using the HTTP API in combination with an automation platform and a cloud transcription service.
#### Configuration
To enable transcription, enable it in your config. Note that audio detection must also be enabled as described above in order to use audio transcription features.
```yaml
audio_transcription:
enabled: True
device: ...
model_size: ...
```
Disable audio transcription for select cameras at the camera level:
```yaml
cameras:
back_yard:
...
audio_transcription:
enabled: False
```
:::note
Audio detection must be enabled and configured as described above in order to use audio transcription features.
:::
The optional config parameters that can be set at the global level include:
- **`enabled`**: Enable or disable the audio transcription feature.
- Default: `False`
- It is recommended to only configure the features at the global level, and enable it at the individual camera level.
- **`device`**: Device to use to run transcription and translation models.
- Default: `CPU`
- This can be `CPU` or `GPU`. The `sherpa-onnx` models are lightweight and run on the CPU only. The `whisper` models can run on GPU but are only supported on CUDA hardware.
- **`model_size`**: The size of the model used for live transcription.
- Default: `small`
- This can be `small` or `large`. The `small` setting uses `sherpa-onnx` models that are fast, lightweight, and always run on the CPU but are not as accurate as the `whisper` model.
- This config option applies to **live transcription only**. Recorded `speech` events will always use a different `whisper` model (and can be accelerated for CUDA hardware if available with `device: GPU`).
- **`language`**: Defines the language used by `whisper` to translate `speech` audio events (and live audio only if using the `large` model).
- Default: `en`
- You must use a valid [language code](https://github.com/openai/whisper/blob/main/whisper/tokenizer.py#L10).
- Transcriptions for `speech` events are translated.
- Live audio is translated only if you are using the `large` model. The `small` `sherpa-onnx` model is English-only.
The only field that is valid at the camera level is `enabled`.
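Putting the options above together, a minimal sketch (the values simply pick from the documented choices and are not a recommendation):

```yaml
audio_transcription:
  enabled: True
  device: GPU        # CPU (default) or GPU (CUDA hardware, whisper models only)
  model_size: large  # small (sherpa-onnx, CPU-only) or large (whisper)
  language: en       # any valid whisper language code
```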
#### Live transcription
The single camera Live view in the Frigate UI supports live transcription of audio for streams defined with the `audio` role. Use the Enable/Disable Live Audio Transcription button/switch to toggle transcription processing. When speech is heard, the UI will display a black box over the top of the camera stream with text. The MQTT topic `frigate/<camera_name>/audio/transcription` will also be updated in real-time with transcribed text.
Results can be error-prone due to a number of factors, including:
- Poor quality camera microphone
- Distance of the audio source to the camera microphone
- Low audio bitrate setting in the camera
- Background noise
- Using the `small` model - it's fast, but not accurate for poor quality audio
For speech sources close to the camera with minimal background noise, use the `small` model.
If you have CUDA hardware, you can experiment with the `large` `whisper` model on GPU. Performance is not quite as fast as the `sherpa-onnx` `small` model, but live transcription is far more accurate. Using the `large` model with CPU will likely be too slow for real-time transcription.
#### Transcription and translation of `speech` audio events
Any `speech` events in Explore can be transcribed and/or translated through the Transcribe button in the Tracked Object Details pane.
In order to use transcription and translation for past events, you must enable audio detection and define `speech` as an audio type to listen for in your config. To have `speech` events translated into the language of your choice, set the `language` config parameter with the correct [language code](https://github.com/openai/whisper/blob/main/whisper/tokenizer.py#L10).
The transcribed/translated speech will appear in the description box in the Tracked Object Details pane. If Semantic Search is enabled, embeddings are generated for the transcription text and are fully searchable using the description search type.
:::note
Only one `speech` event may be transcribed at a time. Frigate does not automatically transcribe `speech` events or implement a queue for long-running transcription model inference.
:::
Recorded `speech` events will always use a `whisper` model, regardless of the `model_size` config setting. Without a supported Nvidia GPU, generating transcriptions for longer `speech` events may take a fair amount of time, so be patient.
#### FAQ
1. Why doesn't Frigate automatically transcribe all `speech` events?
Frigate does not implement a queue mechanism for speech transcription, and adding one is not trivial. A proper queue would need backpressure, prioritization, memory/disk buffering, retry logic, crash recovery, and safeguards to prevent unbounded growth when events outpace processing. That's a significant amount of complexity for a feature that, in most real-world environments, would mostly just churn through low-value noise.
Because transcription is **serialized (one event at a time)** and speech events can be generated far faster than they can be processed, an auto-transcribe toggle would very quickly create an ever-growing backlog and degrade core functionality. For the amount of engineering and risk involved, it adds **very little practical value** for the majority of deployments, which are often on low-powered, edge hardware.
If you hear speech that's actually important and worth saving/indexing for the future, **just press the transcribe button in Explore** on that specific `speech` event; that keeps things explicit, reliable, and under your control.
Other options are being considered for future versions of Frigate to add transcription options that support external `whisper` Docker containers. A single transcription service could then be shared by Frigate and other applications (for example, Home Assistant Voice), and run on more powerful machines when available.
2. Why don't you save live transcription text and use that for `speech` events?
There's no guarantee that a `speech` event is even created from the exact audio that went through the transcription model. Live transcription and `speech` event creation are **separate, asynchronous processes**. Even when both are correctly configured, trying to align the **precise start and end time of a speech event** with whatever audio the model happened to be processing at that moment is unreliable.
Automatically persisting that data would often result in **misaligned, partial, or irrelevant transcripts**, while still incurring all of the CPU, storage, and privacy costs of transcription. That's why Frigate treats transcription as an **explicit, user-initiated action** rather than an automatic side-effect of every `speech` event.

View File

@@ -43,29 +43,13 @@ Restarting Frigate will reset the rate limits.
If you are running Frigate behind a proxy, you will want to set `trusted_proxies` or these rate limits will apply to the upstream proxy IP address. This means that a brute force attack will rate limit login attempts from other devices and could temporarily lock you out of your instance. In order to ensure rate limits only apply to the actual IP address where the requests are coming from, you will need to list the upstream networks that you want to trust. These trusted proxies are checked against the `X-Forwarded-For` header when looking for the IP address where the request originated.
If you are running a reverse proxy in the same Docker Compose file as Frigate, here is an example of how your auth config might look:
If you are running a reverse proxy in the same docker compose file as Frigate, here is an example of how your auth config might look:
```yaml
auth:
failed_login_rate_limit: "1/second;5/minute;20/hour"
trusted_proxies:
- 172.18.0.0/16 # <---- this is the subnet for the internal Docker Compose network
```
## Session Length
The default session length for user authentication in Frigate is 24 hours. This setting determines how long a user's authenticated session remains active before a token refresh is required — otherwise, the user will need to log in again.
While the default provides a balance of security and convenience, you can customize this duration to suit your specific security requirements and user experience preferences. The session length is configured in seconds.
The default value of `86400` will expire the authentication session after 24 hours. Some other examples:
- `0`: Setting the session length to 0 will require a user to log in every time they access the application or after a very short, immediate timeout.
- `604800`: Setting the session length to 604800 will require a user to log in if the token is not refreshed for 7 days.
```yaml
auth:
session_length: 86400
- 172.18.0.0/16 # <---- this is the subnet for the internal docker compose network
```
## JWT Token Secret
@@ -81,8 +65,8 @@ python3 -c 'import secrets; print(secrets.token_hex(64))'
Frigate looks for a JWT token secret in the following order:
1. An environment variable named `FRIGATE_JWT_SECRET`
2. A file named `FRIGATE_JWT_SECRET` in the directory specified by the `CREDENTIALS_DIRECTORY` environment variable (defaults to the Docker Secrets directory: `/run/secrets/`)
3. A `jwt_secret` option from the Home Assistant Add-on options
2. A docker secret named `FRIGATE_JWT_SECRET` in `/run/secrets/`
3. A `jwt_secret` option from the Home Assistant Addon options
4. A `.jwt_secret` file in the config directory
If no secret is found on startup, Frigate generates one and stores it in a `.jwt_secret` file in the config directory.
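For option 1, a minimal Docker Compose sketch (the compose layout mirrors the earlier base-path example and is only an assumption; the variable name comes from the list above):

```yaml
services:
  frigate:
    image: blakeblackshear/frigate:latest
    environment:
      - FRIGATE_JWT_SECRET=<paste the output of the python3 command above>
```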
@@ -93,7 +77,7 @@ Changing the secret will invalidate current tokens.
Frigate can be configured to leverage features of common upstream authentication proxies such as Authelia, Authentik, oauth2_proxy, or traefik-forward-auth.
If you are leveraging the authentication of an upstream proxy, you likely want to disable Frigate's authentication as there is no correspondence between users in Frigate's database and users authenticated via the proxy. Optionally, if communication between the reverse proxy and Frigate is over an untrusted network, you should set an `auth_secret` in the `proxy` config and configure the proxy to send the secret value as a header named `X-Proxy-Secret`. Assuming this is an untrusted network, you will also want to [configure a real TLS certificate](tls.md) to ensure the traffic can't simply be sniffed to steal the secret.
If you are leveraging the authentication of an upstream proxy, you likely want to disable Frigate's authentication. Optionally, if communication between the reverse proxy and Frigate is over an untrusted network, you should set an `auth_secret` in the `proxy` config and configure the proxy to send the secret value as a header named `X-Proxy-Secret`. Assuming this is an untrusted network, you will also want to [configure a real TLS certificate](tls.md) to ensure the traffic can't simply be sniffed to steal the secret.
Here is an example of how to disable Frigate's authentication and also ensure the requests come only from your known proxy.
@@ -113,54 +97,17 @@ python3 -c 'import secrets; print(secrets.token_hex(64))'
### Header mapping
If you have disabled Frigate's authentication and your proxy supports passing a header with authenticated usernames and/or roles, you can use the `header_map` config to specify the header name so it is passed to Frigate. For example, the following will map the `X-Forwarded-User` and `X-Forwarded-Groups` values. Header names are not case sensitive. Multiple values can be included in the role header. Frigate expects that the character separating the roles is a comma, but this can be specified using the `separator` config entry.
```yaml
proxy:
  ...
  separator: "|" # This value defaults to a comma, but Authentik uses a pipe, for example.
  header_map:
    user: x-forwarded-user
    role: x-forwarded-groups
```
Frigate supports `admin`, `viewer`, and custom roles (see below). When using port `8971`, Frigate validates these headers and subsequent requests use the headers `remote-user` and `remote-role` for authorization.
A default role can be provided. Any value in the mapped `role` header will override the default.
```yaml
proxy:
  ...
  default_role: viewer
```
## Role mapping
In some environments, upstream identity providers (OIDC, SAML, LDAP, etc.) do not pass a Frigate-compatible role directly, but instead pass one or more group claims. To handle this, Frigate supports a `role_map` that translates upstream group names into Frigate's internal roles (`admin`, `viewer`, or custom).
If you have disabled Frigate's authentication and your proxy supports passing a header with authenticated usernames and/or roles, you can use the `header_map` config to specify the header name so it is passed to Frigate. For example, the following will map the `X-Forwarded-User` and `X-Forwarded-Role` values. Header names are not case sensitive.
```yaml
proxy:
...
header_map:
user: x-forwarded-user
role: x-forwarded-groups
role_map:
admin:
- sysadmins
- access-level-security
viewer:
- camera-viewer
operator: # Custom role mapping
- operators
role: x-forwarded-role
```
In this example:
- If the proxy passes a role header containing `sysadmins` or `access-level-security`, the user is assigned the `admin` role.
- If the proxy passes a role header containing `camera-viewer`, the user is assigned the `viewer` role.
- If the proxy passes a role header containing `operators`, the user is assigned the `operator` custom role.
- If no mapping matches, Frigate falls back to `default_role` if configured.
- If `role_map` is not defined, Frigate assumes the role header directly contains `admin`, `viewer`, or a custom role name.
Frigate supports both `admin` and `viewer` roles (see below). When using port `8971`, Frigate validates these headers and subsequent requests use the headers `remote-user` and `remote-role` for authorization.
#### Port Considerations
@@ -170,7 +117,6 @@ In this example:
- The `remote-role` header determines the user's privileges:
- **admin** → Full access (user management, configuration changes).
- **viewer** → Read-only access.
- **Custom roles** → Read-only access limited to the cameras defined in `auth.roles[role]`.
- Ensure your **proxy sends both user and role headers** for proper role enforcement.
**Unauthenticated Port (5000)**
@@ -216,41 +162,6 @@ Frigate supports user roles to control access to certain features in the UI and
- **admin**: Full access to all features, including user management and configuration.
- **viewer**: Read-only access to the UI and API, including viewing cameras, review items, and historical footage. Configuration editor and settings in the UI are inaccessible.
- **Custom Roles**: Arbitrary role names (alphanumeric, dots/underscores) with specific camera permissions. These extend the system for granular access (e.g., "operator" for select cameras).
### Custom Roles and Camera Access
The viewer role provides read-only access to all cameras in the UI and API. Custom roles allow admins to limit read-only access to specific cameras. Each role specifies an array of allowed camera names. If a user is assigned a custom role, their account is like the **viewer** role - they can only view Live, Review/History, Explore, and Export for the designated cameras. Backend API endpoints enforce this server-side (e.g., returning 403 for unauthorized cameras), and the frontend UI filters content accordingly (e.g., camera dropdowns show only permitted options).
### Role Configuration Example
```yaml
cameras:
  front_door:
    # ... camera config
  side_yard:
    # ... camera config
  garage:
    # ... camera config
auth:
  enabled: true
  roles:
    operator: # Custom role
      - front_door
      - garage # Operator can access front and garage
    neighbor:
      - side_yard
```
If you want to provide access to all cameras to a specific user, just use the **viewer** role.
### Managing User Roles
1. Log in as an **admin** user via port `8971` (preferred), or unauthenticated via port `5000`.
2. Navigate to **Settings**.
3. In the **Users** section, edit a user's role by selecting from available roles (admin, viewer, or custom).
4. In the **Roles** section, add/edit/delete custom roles (select cameras via switches). Deleting a role auto-reassigns users to "viewer".
### Role Enforcement
@@ -270,42 +181,3 @@ To use role-based access control, you must connect to Frigate via the **authenti
1. Log in as an **admin** user via port `8971`.
2. Navigate to **Settings > Users**.
3. Edit a user's role by selecting **admin** or **viewer**.
## API Authentication Guide
### Getting a Bearer Token
To use the Frigate API, you need to authenticate first. Follow these steps to obtain a Bearer token:
#### 1. Login
Make a POST request to `/login` with your credentials:
```bash
curl -i -X POST https://frigate_ip:8971/api/login \
-H "Content-Type: application/json" \
-d '{"user": "admin", "password": "your_password"}'
```
:::note
You may need to include `-k` in the argument list in these steps (eg: `curl -k -i -X POST ...`) if your Frigate instance is using a self-signed certificate.
:::
The response will contain a cookie with the JWT token.
#### 2. Using the Bearer Token
Once you have the token, include it in the Authorization header for subsequent requests:
```bash
curl -H "Authorization: Bearer <your_token>" https://frigate_ip:8971/api/profile
```
#### 3. Token Lifecycle
- Tokens are valid for the configured session length
- Tokens are automatically refreshed when you visit the `/auth` endpoint
- Tokens are invalidated when the user's password is changed
- Use `/logout` to clear your session cookie

View File

@@ -21,7 +21,7 @@ Frigate autotracking functions with PTZ cameras capable of relative movement wit
Many cheaper or older PTZs may not support this standard. Frigate will report an error message in the log and disable autotracking if your PTZ is unsupported.
The FeatureList on the [ONVIF Conformant Products Database](https://www.onvif.org/conformant-products/) can provide a starting point to determine a camera's compatibility with Frigate's autotracking. Look to see if a camera lists `PTZRelative`, `PTZRelativePanTilt` and/or `PTZRelativeZoom`. These features are required for autotracking, but some cameras still fail to respond even if they claim support.
Alternatively, you can download and run [this simple Python script](https://gist.github.com/hawkeye217/152a1d4ba80760dac95d46e143d37112), replacing the details on line 4 with your camera's IP address, ONVIF port, username, and password to check your camera.
A growing list of cameras and brands that have been reported by users to work with Frigate's autotracking can be found [here](cameras.md).

View File

@@ -1,31 +0,0 @@
---
id: bird_classification
title: Bird Classification
---
Bird classification identifies known birds using a quantized Tensorflow model. When a known bird is recognized, its common name will be added as a `sub_label`. This information is included in the UI, filters, as well as in notifications.
## Minimum System Requirements
Bird classification runs a lightweight tflite model on the CPU, so the system requirements are not significantly different from running Frigate itself.
## Model
The classification model used is the MobileNet INat Bird Classification, [available identifiers can be found here.](https://raw.githubusercontent.com/google-coral/test_data/master/inat_bird_labels.txt)
## Configuration
Bird classification is disabled by default and must be enabled in your config file before it can be used. Bird classification is a global configuration setting.
```yaml
classification:
  bird:
    enabled: true
```
## Advanced Configuration
Fine-tune bird classification with these optional parameters:
- `threshold`: Classification confidence score required to set the sub label on the object.
- Default: `0.9`.
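For instance, a config sketch with the threshold raised slightly above the default might look like this:
```yaml
classification:
  bird:
    enabled: true
    threshold: 0.95
```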

View File

@@ -4,7 +4,7 @@ In addition to Frigate's Live camera dashboard, Birdseye allows a portable heads
Birdseye can be viewed by adding the "Birdseye" camera to a Camera Group in the Web UI. Add a Camera Group by pressing the "+" icon on the Live page, and choose "Birdseye" as one of the cameras.
Birdseye can also be used in Home Assistant dashboards, cast to media devices, etc.
Birdseye can also be used in HomeAssistant dashboards, cast to media devices, etc.
## Birdseye Behavior

View File

@@ -15,17 +15,6 @@ Many cameras support encoding options which greatly affect the live view experie
:::
## H.265 Cameras via Safari
Some cameras support h265 with different formats, but Safari only supports the annexb format. When using h265 camera streams for recording with devices that use the Safari browser, the `apple_compatibility` option should be used.
```yaml
cameras:
  h265_cam: # <------ Doesn't matter what the camera is called
    ffmpeg:
      apple_compatibility: true # <- Adds compatibility with MacOS and iPhone
```
## MJPEG Cameras
Note that mjpeg cameras require encoding the video into h264 for the record and restream roles. This will use significantly more CPU than if the cameras supported h264 feeds directly. It is recommended to use the restream role to create an h264 restream and then use that as the source for ffmpeg.
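A minimal sketch of that approach, assuming the camera exposes its MJPEG feed at `http://camera_ip/mjpeg` (adjust the URL and stream name for your camera):
```yaml
go2rtc:
  streams:
    mjpeg_cam:
      # go2rtc encodes the MJPEG feed to h264 once, Frigate then consumes the restream
      - "ffmpeg:http://camera_ip/mjpeg#video=h264"

cameras:
  mjpeg_cam:
    ffmpeg:
      inputs:
        - path: rtsp://127.0.0.1:8554/mjpeg_cam
          input_args: preset-rtsp-restream
          roles:
            - detect
            - record
```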
@@ -144,14 +133,7 @@ WEB Digest Algorithm - MD5
### Reolink Cameras
Reolink has many different camera models with inconsistently supported features and behavior. The below table shows a summary of various features and recommendations.
| Camera Resolution | Camera Generation | Recommended Stream Type | Additional Notes |
| ----------------- | ------------------------- | --------------------------------- | ----------------------------------------------------------------------- |
| 5MP or lower | All | http-flv | Stream is h264 |
| 6MP or higher | Latest (ex: Duo3, CX-8##) | http-flv with ffmpeg 8.0, or rtsp | This uses the new http-flv-enhanced over H265 which requires ffmpeg 8.0 |
| 6MP or higher | Older (ex: RLC-8##) | rtsp | |
Reolink has older cameras (ex: 410 & 520) as well as newer cameras (ex: 520a & 511wa) which support different subsets of options. In both cases using the http stream is recommended.
Frigate works much better with newer Reolink cameras that are set up with the below options:
If available, recommended settings are:
@@ -164,35 +146,19 @@ According to [this discussion](https://github.com/blakeblackshear/frigate/issues
Cameras connected via a Reolink NVR can be connected with the http stream, use `channel[0..15]` in the stream url for the additional channels.
The main stream can also be set up via RTSP, but this isn't always reliable on all hardware versions. The example configuration is working with the oldest HW version RLN16-410 device with multiple types of cameras.
<details>
<summary>Example Config</summary>
:::warning
:::tip
Reolink's latest cameras support two way audio via go2rtc and other applications. It is important that the http-flv stream is still used for stability; a secondary rtsp stream can be added that will be used for the two way audio only.
NOTE: The RTSP stream can not be prefixed with `ffmpeg:`, as go2rtc needs to handle the stream to support two way audio.
Ensure HTTP is enabled in the camera's advanced network settings. To use two way talk with Frigate, see the [Live view documentation](/configuration/live#two-way-talk).
The below configuration only works for reolink cameras with stream resolution of 5MP or lower, 8MP+ cameras need to use RTSP as http-flv is not supported in this case.
:::
```yaml
go2rtc:
streams:
# example for connecting to a standard Reolink camera
your_reolink_camera:
- "ffmpeg:http://reolink_ip/flv?port=1935&app=bcs&stream=channel0_main.bcs&user=username&password=password#video=copy#audio=copy#audio=opus"
your_reolink_camera_sub:
- "ffmpeg:http://reolink_ip/flv?port=1935&app=bcs&stream=channel0_ext.bcs&user=username&password=password"
# example for connecting to a Reolink camera that supports two way talk
your_reolink_camera_twt:
- "ffmpeg:http://reolink_ip/flv?port=1935&app=bcs&stream=channel0_main.bcs&user=username&password=password#video=copy#audio=copy#audio=opus"
- "rtsp://username:password@reolink_ip/Preview_01_sub
your_reolink_camera_twt_sub:
- "ffmpeg:http://reolink_ip/flv?port=1935&app=bcs&stream=channel0_ext.bcs&user=username&password=password"
- "rtsp://username:password@reolink_ip/Preview_01_sub
# example for connecting to a Reolink NVR
your_reolink_camera_via_nvr:
- "ffmpeg:http://reolink_nvr_ip/flv?port=1935&app=bcs&stream=channel3_main.bcs&user=username&password=password" # channel numbers are 0-15
- "ffmpeg:your_reolink_camera_via_nvr#audio=aac"
@@ -223,7 +189,22 @@ cameras:
roles:
- detect
```
</details>
#### Reolink Doorbell
The Reolink doorbell supports two way audio via go2rtc and other applications. It is important that the http-flv stream is still used for stability; a secondary rtsp stream can be added that will be used for the two way audio only.
Ensure HTTP is enabled in the camera's advanced network settings. To use two way talk with Frigate, see the [Live view documentation](/configuration/live#two-way-talk).
```yaml
go2rtc:
  streams:
    your_reolink_doorbell:
      - "ffmpeg:http://reolink_ip/flv?port=1935&app=bcs&stream=channel0_main.bcs&user=username&password=password#video=copy#audio=copy#audio=opus"
      - rtsp://reolink_ip/Preview_01_sub
    your_reolink_doorbell_sub:
      - "ffmpeg:http://reolink_ip/flv?port=1935&app=bcs&stream=channel0_ext.bcs&user=username&password=password"
```
### Unifi Protect Cameras
@@ -238,7 +219,7 @@ go2rtc:
- rtspx://192.168.1.1:7441/abcdefghijk
```
[See the go2rtc docs for more information](https://github.com/AlexxIT/go2rtc/tree/v1.9.10#source-rtsp)
[See the go2rtc docs for more information](https://github.com/AlexxIT/go2rtc/tree/v1.9.2#source-rtsp)
In the Unifi 2.0 update Unifi Protect Cameras had a change in audio sample rate which causes issues for ffmpeg. The input rate needs to be set for record if used directly with unifi protect.
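A minimal sketch of how this is typically handled, using the `preset-record-ubiquiti` output preset listed in the FFmpeg presets documentation (shown here at the global level; it can also be set per camera):
```yaml
ffmpeg:
  output_args:
    record: preset-record-ubiquiti
```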
@@ -251,37 +232,3 @@ ffmpeg:
### TP-Link VIGI Cameras
TP-Link VIGI cameras need some adjustments to the main stream settings on the camera itself to avoid issues. The stream needs to be configured as `H264` with `Smart Coding` set to `off`. Without these settings you may have problems when trying to watch recorded footage. For example Firefox will stop playback after a few seconds and show the following error message: `The media playback was aborted due to a corruption problem or because the media used features your browser did not support.`.
## USB Cameras (aka Webcams)
To use a USB camera (webcam) with Frigate, the recommendation is to use go2rtc's [FFmpeg Device](https://github.com/AlexxIT/go2rtc?tab=readme-ov-file#source-ffmpeg-device) support:
- Preparation outside of Frigate:
- Get USB camera path. Run `v4l2-ctl --list-devices` to get a listing of locally-connected cameras available. (You may need to install `v4l-utils` in a way appropriate for your Linux distribution). In the sample configuration below, we use `video=0` to correlate with a detected device path of `/dev/video0`
- Get USB camera formats & resolutions. Run `ffmpeg -f v4l2 -list_formats all -i /dev/video0` to get an idea of what formats and resolutions the USB Camera supports. In the sample configuration below, we use a width of 1024 and height of 576 in the stream and detection settings based on what was reported back.
- If using Frigate in a container (e.g. Docker on TrueNAS), ensure you have USB Passthrough support enabled, along with a specific Host Device (`/dev/video0`) + Container Device (`/dev/video0`) listed.
- In your Frigate Configuration File, add the go2rtc stream and roles as appropriate:
```
go2rtc:
  streams:
    usb_camera:
      - "ffmpeg:device?video=0&video_size=1024x576#video=h264"
cameras:
  usb_camera:
    enabled: true
    ffmpeg:
      inputs:
        - path: rtsp://127.0.0.1:8554/usb_camera
          input_args: preset-rtsp-restream
          roles:
            - detect
            - record
    detect:
      enabled: false # <---- disable detection until you have a working camera feed
      width: 1024
      height: 576
```

View File

@@ -89,35 +89,31 @@ An ONVIF-capable camera that supports relative movement within the field of view
## ONVIF PTZ camera recommendations
This list of working and non-working PTZ cameras is based on user feedback. If you'd like to report specific quirks or issues with a manufacturer or camera that would be helpful for other users, open a pull request to add to this list.
This list of working and non-working PTZ cameras is based on user feedback.
The FeatureList on the [ONVIF Conformant Products Database](https://www.onvif.org/conformant-products/) can provide a starting point to determine a camera's compatibility with Frigate's autotracking. Look to see if a camera lists `PTZRelative`, `PTZRelativePanTilt` and/or `PTZRelativeZoom`. These features are required for autotracking, but some cameras still fail to respond even if they claim support. If they are missing, autotracking will not work (though basic PTZ in the WebUI might). Avoid cameras with no database entry unless they are confirmed as working below.
| Brand or specific camera | PTZ Controls | Autotracking | Notes |
| ---------------------------- | :----------: | :----------: | ------------------------------------------------------------------------------------------------------------------------------------------------ |
| Amcrest | ✅ | | ⛔️ Generally, Amcrest should work, but some older models (like the common IP2M-841) don't support autotracking |
| Amcrest ASH21 | ✅ | ❌ | ONVIF service port: 80 |
| Amcrest IP4M-S2112EW-AI | ✅ | ❌ | FOV relative movement not supported. |
| Amcrest IP5M-1190EW | ✅ | ❌ | ONVIF Port: 80. FOV relative movement not supported. |
| Annke CZ504 | ✅ | | Annke support provide specific firmware ([V5.7.1 build 250227](https://github.com/pierrepinon/annke_cz504/raw/refs/heads/main/digicap_V5-7-1_build_250227.dav)) to fix issue with ONVIF "TranslationSpaceFov" |
| Ctronics PTZ | ✅ | ❌ | |
| Dahua | ✅ | | Some low-end Dahuas (lite series, picoo series (commonly), among others) have been reported to not support autotracking. These models usually don't have a four digit model number with chassis prefix and options postfix (e.g. DH-P5AE-PV vs DH-SD49825GB-HNR). |
| Dahua DH-SD2A500HB | | | |
| Dahua DH-SD49825GB-HNR | ✅ | ✅ | |
| Dahua DH-P5AE-PV | ❌ | | |
| Foscam | ✅ | ❌ | In general Foscam cameras support PTZ, but not relative move. There are no official ONVIF certifications or tests available on the ONVIF Conformant Products Database. |
| Foscam R5 | ✅ | ❌ | |
| Foscam SD4 | ✅ | ❌ | |
| Hanwha XNP-6550RH | ✅ | | |
| Hikvision | ✅ | ❌ | Incomplete ONVIF support (MoveStatus won't update even on latest firmware) - reported with HWP-N4215IH-DE and DS-2DE3304W-DE, but likely others |
| Hikvision DS-2DE3A404IWG-E/W | | | |
| Reolink | ✅ | ❌ | |
| Speco O8P32X | ✅ | | |
| Sunba 405-D20X | | ❌ | Incomplete ONVIF support reported on original and 4k models. All models are suspected incompatible. |
| Tapo | ✅ | ❌ | Many models supported, ONVIF Service Port: 2020 |
| Uniview IPC672LR-AX4DUPK | ✅ | ❌ | Firmware says FOV relative movement is supported, but camera doesn't actually move when sending ONVIF commands |
| Uniview IPC6612SR-X33-VG | ✅ | ✅ | Leave `calibrate_on_startup` as `False`. A user has reported that zooming with `absolute` is working. |
| Vikylin PTZ-2804X-I2 | ❌ | ❌ | Incomplete ONVIF support |
| Brand or specific camera | PTZ Controls | Autotracking | Notes |
| ---------------------------- | :----------: | :----------: | ----------------------------------------------------------------------------------------------------------------------------------------------- |
| Amcrest | | | ⛔️ Generally, Amcrest should work, but some older models (like the common IP2M-841) don't support autotracking |
| Amcrest ASH21 | ✅ | ❌ | ONVIF service port: 80 |
| Amcrest IP4M-S2112EW-AI | ✅ | | FOV relative movement not supported. |
| Amcrest IP5M-1190EW | ✅ | ❌ | ONVIF Port: 80. FOV relative movement not supported. |
| Ctronics PTZ | | | |
| Dahua | | | |
| Dahua DH-SD2A500HB | ✅ | | |
| Foscam R5 | ✅ | ❌ | |
| Hanwha XNP-6550RH | ✅ | | |
| Hikvision | | ❌ | Incomplete ONVIF support (MoveStatus won't update even on latest firmware) - reported with HWP-N4215IH-DE and DS-2DE3304W-DE, but likely others |
| Hikvision DS-2DE3A404IWG-E/W | ✅ | ✅ | |
| Reolink 511WA | | ❌ | Zoom only |
| Reolink E1 Pro | ✅ | ❌ | |
| Reolink E1 Zoom | ✅ | ❌ | |
| Reolink RLC-823A 16x | ✅ | ❌ | |
| Speco O8P32X | | ❌ | |
| Sunba 405-D20X | ✅ | ❌ | Incomplete ONVIF support reported on original and 4k models. All models are suspected incompatible. |
| Tapo | | | Many models supported, ONVIF Service Port: 2020 |
| Uniview IPC672LR-AX4DUPK | ✅ | ❌ | Firmware says FOV relative movement is supported, but camera doesn't actually move when sending ONVIF commands |
| Uniview IPC6612SR-X33-VG | ✅ | | Leave `calibrate_on_startup` as `False`. A user has reported that zooming with `absolute` is working. |
| Vikylin PTZ-2804X-I2 | | ❌ | Incomplete ONVIF support |
## Setting up camera groups
@@ -138,7 +134,3 @@ camera_groups:
icon: LuCar
order: 0
```
## Two-Way Audio
See the guide [here](/configuration/live/#two-way-talk)

View File

@@ -1,130 +0,0 @@
---
id: object_classification
title: Object Classification
---
Object classification allows you to train a custom MobileNetV2 classification model to run on tracked objects (persons, cars, animals, etc.) to identify a finer category or attribute for that object. Classification results are visible in the Tracked Object Details pane in Explore, through the `frigate/tracked_object_details` MQTT topic, in Home Assistant sensors via the official Frigate integration, or through the event endpoints in the HTTP API.
## Minimum System Requirements
Object classification models are lightweight and run very fast on CPU. Inference should be usable on virtually any machine that can run Frigate.
Training the model does briefly use a high amount of system resources for about 1-3 minutes per training run. On lower-power devices, training may take longer.
A CPU with AVX instructions is required for training and inference.
## Classes
Classes are the categories your model will learn to distinguish between. Each class represents a distinct visual category that the model will predict.
For object classification:
- Define classes that represent different types or attributes of the detected object
- Examples: For `person` objects, classes might be `delivery_person`, `resident`, `stranger`
- Include a `none` class for objects that don't fit any specific category
- Keep classes visually distinct to improve accuracy
### Classification Type
- **Sub label**:
- Applied to the object's `sub_label` field.
- Ideal for a single, more specific identity or type.
- Example: `cat` → `Leo`, `Charlie`, `None`.
- **Attribute**:
- Added as metadata to the object, visible in the Tracked Object Details pane in Explore, `frigate/events` MQTT messages, and the HTTP API response as `<model_name>: <predicted_value>`.
- Ideal when multiple attributes can coexist independently.
- Example: Detecting if a `person` in a construction yard is wearing a helmet or not, and if they are wearing a yellow vest or not.
:::note
A tracked object can only have a single sub label. If you are using Triggers or Face Recognition and you configure an object classification model for `person` using the sub label type, your sub label may not be assigned correctly as it depends on which enrichment completes its analysis first. Consider using the `attribute` type instead.
:::
## Assignment Requirements
Sub labels and attributes are only assigned when both conditions are met:
1. **Threshold**: Each classification attempt must have a confidence score that meets or exceeds the configured `threshold` (default: `0.8`).
2. **Class Consensus**: After at least 3 classification attempts, 60% of attempts must agree on the same class label. If the consensus class is `none`, no assignment is made.
This two-step verification prevents false positives by requiring consistent predictions across multiple frames before assigning a sub label or attribute.
## Example use cases
### Sub label
- **Known pet vs unknown**: For `dog` objects, set sub label to your pet's name (e.g., `buddy`) or `none` for others.
- **Mail truck vs normal car**: For `car`, classify as `mail_truck` vs `car` to filter important arrivals.
- **Delivery vs non-delivery person**: For `person`, classify `delivery` vs `visitor` based on uniform/props.
### Attributes
- **Backpack**: For `person`, add attribute `backpack: yes/no`.
- **Helmet**: For `person` (worksite), add `helmet: yes/no`.
- **Leash**: For `dog`, add `leash: yes/no` (useful for park or yard rules).
- **Ladder rack**: For `truck`, add `ladder_rack: yes/no` to flag service vehicles.
## Configuration
Object classification is configured as a custom classification model. Each model has its own name and settings. You must list which object labels should be classified.
```yaml
classification:
  custom:
    dog:
      threshold: 0.8
      object_config:
        objects: [dog] # object labels to classify
        classification_type: sub_label # or: attribute
```
An optional config, `save_attempts`, can be set as a key under the model name. This defines the number of classification attempts to save in the Recent Classifications tab. For object classification models, the default is 200.
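As a sketch, lowering the number of saved attempts for the model above might look like this (other keys omitted for brevity):
```yaml
classification:
  custom:
    dog:
      save_attempts: 100 # default is 200 for object classification models
```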
## Training the model
Creating and training the model is done within the Frigate UI using the `Classification` page. The process consists of two steps:
### Step 1: Name and Define
Enter a name for your model, select the object label to classify (e.g., `person`, `dog`, `car`), choose the classification type (sub label or attribute), and define your classes. Frigate will automatically include a `none` class for objects that don't fit any specific category.
For example: To classify your two cats, create a model named "Our Cats" and create two classes, "Charlie" and "Leo". A third class, "none", will be created automatically for other neighborhood cats that are not your own.
### Step 2: Assign Training Examples
The system will automatically generate example images from detected objects matching your selected label. You'll be guided through each class one at a time to select which images represent that class. Any images not assigned to a specific class will automatically be assigned to `none` when you complete the last class. Once all images are processed, training will begin automatically.
When choosing which objects to classify, start with a small number of visually distinct classes and ensure your training samples match camera viewpoints and distances typical for those objects.
If examples for some of your classes do not appear in the grid, you can continue configuring the model without them. New images will begin to appear in the Recent Classifications view. When your missing classes are seen, classify them from this view and retrain your model.
### Improving the Model
- **Problem framing**: Keep classes visually distinct and relevant to the chosen object types.
- **Data collection**: Use the model's Recent Classifications tab to gather balanced examples across times of day, weather, and distances.
- **Preprocessing**: Ensure examples reflect object crops similar to Frigate's boxes; keep the subject centered.
- **Labels**: Keep label names short and consistent; include a `none` class if you plan to ignore uncertain predictions for sub labels.
- **Threshold**: Tune `threshold` per model to reduce false assignments. Start at `0.8` and adjust based on validation.
## Debugging Classification Models
To troubleshoot issues with object classification models, enable debug logging to see detailed information about classification attempts, scores, and consensus calculations.
Enable debug logs for classification models by adding `frigate.data_processing.real_time.custom_classification: debug` to your `logger` configuration. These logs are verbose, so only keep this enabled when necessary. Restart Frigate after this change.
```yaml
logger:
  default: info
  logs:
    frigate.data_processing.real_time.custom_classification: debug
```
The debug logs will show:
- Classification probabilities for each attempt
- Whether scores meet the threshold requirement
- Consensus calculations and when assignments are made
- Object classification history and weighted scores

View File

@@ -1,107 +0,0 @@
---
id: state_classification
title: State Classification
---
State classification allows you to train a custom MobileNetV2 classification model on a fixed region of your camera frame(s) to determine a current state. The model can be configured to run on a schedule and/or when motion is detected in that region. Classification results are available through the `frigate/<camera_name>/classification/<model_name>` MQTT topic and in Home Assistant sensors via the official Frigate integration.
## Minimum System Requirements
State classification models are lightweight and run very fast on CPU. Inference should be usable on virtually any machine that can run Frigate.
Training the model does briefly use a high amount of system resources for about 1-3 minutes per training run. On lower-power devices, training may take longer.
A CPU with AVX instructions is required for training and inference.
## Classes
Classes are the different states an area on your camera can be in. Each class represents a distinct visual state that the model will learn to recognize.
For state classification:
- Define classes that represent mutually exclusive states
- Examples: `open` and `closed` for a garage door, `on` and `off` for lights
- Use at least 2 classes (typically binary states work best)
- Keep class names clear and descriptive
## Example use cases
- **Door state**: Detect if a garage or front door is open vs closed.
- **Gate state**: Track if a driveway gate is open or closed.
- **Trash day**: Bins at curb vs no bins present.
- **Pool cover**: Cover on vs off.
## Configuration
State classification is configured as a custom classification model. Each model has its own name and settings. You must provide at least one camera crop under `state_config.cameras`.
```yaml
classification:
  custom:
    front_door:
      threshold: 0.8
      state_config:
        motion: true # run when motion overlaps the crop
        interval: 10 # also run every N seconds (optional)
        cameras:
          front:
            crop: [0, 180, 220, 400]
```
An optional config, `save_attempts`, can be set as a key under the model name. This defines the number of classification attempts to save in the Recent Classifications tab. For state classification models, the default is 100.
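As a sketch, adjusting this for the model above might look like the following (other keys omitted for brevity):
```yaml
classification:
  custom:
    front_door:
      save_attempts: 50 # default is 100 for state classification models
```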
## Training the model
Creating and training the model is done within the Frigate UI using the `Classification` page. The process consists of three steps:
### Step 1: Name and Define
Enter a name for your model and define at least 2 classes (states) that represent mutually exclusive states. For example, `open` and `closed` for a door, or `on` and `off` for lights.
### Step 2: Select the Crop Area
Choose one or more cameras and draw a rectangle over the area of interest for each camera. The crop should be tight around the region you want to classify to avoid extra signals unrelated to what is being classified. You can drag and resize the rectangle to adjust the crop area.
### Step 3: Assign Training Examples
The system will automatically generate example images from your camera feeds. You'll be guided through each class one at a time to select which images represent that state. It's not strictly required to select all images you see. If a state is missing from the samples, you can train it from the Recent tab later.
Once some images are assigned, training will begin automatically.
### Improving the Model
- **Problem framing**: Keep classes visually distinct and state-focused (e.g., `open`, `closed`, `unknown`). Avoid combining object identity with state in a single model unless necessary.
- **Data collection**: Use the model's Recent Classifications tab to gather balanced examples across times of day and weather.
- **When to train**: Focus on cases where the model is entirely incorrect or flips between states when it should not. There's no need to train additional images when the model is already working consistently.
- **Selecting training images**: Images scoring below 100% due to new conditions (e.g., first snow of the year, seasonal changes) or variations (e.g., objects temporarily in view, insects at night) are good candidates for training, as they represent scenarios different from the default state. Training these lower-scoring images that differ from existing training data helps prevent overfitting. Avoid training large quantities of images that look very similar, especially if they already score 100% as this can lead to overfitting.
## Debugging Classification Models
To troubleshoot issues with state classification models, enable debug logging to see detailed information about classification attempts, scores, and state verification.
Enable debug logs for classification models by adding `frigate.data_processing.real_time.custom_classification: debug` to your `logger` configuration. These logs are verbose, so only keep this enabled when necessary. Restart Frigate after this change.
```yaml
logger:
  default: info
  logs:
    frigate.data_processing.real_time.custom_classification: debug
```
The debug logs will show:
- Classification probabilities for each attempt
- Whether scores meet the threshold requirement
- State verification progress (consecutive detections needed)
- When state changes are published
### Recent Classifications
For state classification, images are only added to recent classifications under specific circumstances:
- **First detection**: The first classification attempt for a camera is always saved
- **State changes**: Images are saved when the detected state differs from the current verified state
- **Pending verification**: Images are saved when there's a pending state change being verified (requires 3 consecutive identical states)
- **Low confidence**: Images with scores below 100% are saved even if the state matches the current state (useful for training)
Images are **not** saved when the state is stable (detected state matches current state) **and** the score is 100%. This prevents unnecessary storage of redundant high-confidence classifications.

View File

@@ -3,90 +3,20 @@ id: face_recognition
title: Face Recognition
---
Face recognition identifies known individuals by matching detected faces with previously learned facial data. When a known `person` is recognized, their name will be added as a `sub_label`. This information is included in the UI, filters, as well as in notifications.
Face recognition allows people to be assigned names and when their face is recognized Frigate will assign the person's name as a sub label. This information is included in the UI, filters, as well as in notifications.
## Model Requirements
### Face Detection
When running a Frigate+ model (or any custom model that natively detects faces), you should ensure that `face` is added to the [list of objects to track](../plus/#available-label-types) either globally or for a specific camera. This will allow face detection to run at the same time as object detection and be more efficient.
When running a default COCO model or another model that does not include `face` as a detectable label, face detection will run via CV2 using a lightweight DNN model that runs on the CPU. In this case, you should _not_ define `face` in your list of objects to track.
:::note
Frigate needs to first detect a `person` before it can detect and recognize a face.
:::
### Face Recognition
Frigate has support for two face recognition model types:
- **small**: Frigate will run a FaceNet embedding model to recognize faces, which runs locally on the CPU. This model is optimized for efficiency and is not as accurate.
- **large**: Frigate will run a large ArcFace embedding model that is optimized for accuracy. It is only recommended to be run when an integrated or dedicated GPU / NPU is available.
In both cases, a lightweight face landmark detection model is also used to align faces before running recognition.
All of these features run locally on your system.
## Minimum System Requirements
The `small` model is optimized for efficiency and runs on the CPU, most CPUs should run the model efficiently.
The `large` model is optimized for accuracy, an integrated or discrete GPU / NPU is required. See the [Hardware Accelerated Enrichments](/configuration/hardware_acceleration_enrichments.md) documentation.
Frigate has support for CV2 Local Binary Pattern Face Recognizer to recognize faces, which runs locally. A lightweight face landmark detection model is also used to align faces before running them through the face recognizer.
## Configuration
Face recognition is disabled by default and must be enabled in the UI or in your config file before it can be used. Face recognition is a global configuration setting.
Face recognition is disabled by default and must be enabled in your config file before it can be used. Face recognition is a global configuration setting.
```yaml
face_recognition:
  enabled: true
```
Like the other real-time processors in Frigate, face recognition runs on the camera stream defined by the `detect` role in your config. To ensure optimal performance, select a suitable resolution for this stream in your camera's firmware that fits your specific scene and requirements.
## Advanced Configuration
Fine-tune face recognition with these optional parameters at the global level of your config. The only optional parameters that can be set at the camera level are `enabled` and `min_area`.
### Detection
- `detection_threshold`: Face detection confidence score required before recognition runs:
- Default: `0.7`
- Note: This field only applies to the standalone face detection model; `min_score` should be used to filter for models that have face detection built in.
- `min_area`: Defines the minimum size (in pixels) a face must be before recognition runs.
- Default: `500` pixels.
- Depending on the resolution of your camera's `detect` stream, you can increase this value to ignore small or distant faces.
### Recognition
- `model_size`: Which model size to use, options are `small` or `large`
- `unknown_score`: Min score to mark a person as a potential match; matches at or below this will be marked as unknown.
- Default: `0.8`.
- `recognition_threshold`: Recognition confidence score required to add the face to the object as a sub label.
- Default: `0.9`.
- `min_faces`: Min face recognitions for the sub label to be applied to the person object.
- Default: `1`
- `save_attempts`: Number of images of recognized faces to save for training.
- Default: `200`.
- `blur_confidence_filter`: Enables a filter that calculates how blurry the face is and adjusts the confidence based on this.
- Default: `True`.
- `device`: Target a specific device to run the face recognition model on (multi-GPU installation).
- Default: `None`.
- Note: This setting is only applicable when using the `large` model. See [onnxruntime's provider options](https://onnxruntime.ai/docs/execution-providers/)
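As an illustration, a sketch combining a few of these options might look like this (the values are examples, not recommendations):
```yaml
face_recognition:
  enabled: true
  model_size: large
  min_area: 750
  recognition_threshold: 0.9
  blur_confidence_filter: true
```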
## Usage
Follow these steps to begin:
1. **Enable face recognition** in your configuration file and restart Frigate.
2. **Upload one face** using the **Add Face** button's wizard in the Face Library section of the Frigate UI. Read below for the best practices on expanding your training set.
3. When Frigate detects and attempts to recognize a face, it will appear in the **Train** tab of the Face Library, along with its associated recognition confidence.
4. From the **Train** tab, you can **assign the face** to a new or existing person to improve recognition accuracy for the future.
## Creating a Robust Training Set
## Dataset
The number of images needed for a sufficient training set for face recognition varies depending on several factors:
@@ -95,9 +25,11 @@ The number of images needed for a sufficient training set for face recognition v
However, here are some general guidelines:
- Minimum: For basic face recognition tasks, a minimum of 5-10 images per person is often recommended.
- Recommended: For more robust and accurate systems, 20-30 images per person is a good starting point.
- Ideal: For optimal performance, especially in challenging conditions, 50-100 images per person can be beneficial.
- Minimum: For basic face recognition tasks, a minimum of 10-20 images per person is often recommended.
- Recommended: For more robust and accurate systems, 30-50 images per person is a good starting point.
- Ideal: For optimal performance, especially in challenging conditions, 100 or more images per person can be beneficial.
## Creating a Robust Training Set
The accuracy of face recognition is heavily dependent on the quality of data given to it for training. It is recommended to build the face training library in phases.
@@ -106,108 +38,19 @@ The accuracy of face recognition is heavily dependent on the quality of data giv
When choosing images to include in the face training set it is recommended to always follow these recommendations:
- If it is difficult to make out details in a person's face, it will not be helpful in training.
- Avoid images with extreme under/over-exposure.
- Avoid images with under/over-exposure.
- Avoid blurry / pixelated images.
- Avoid training on infrared (gray-scale). The models are trained on color images and will still be able to extract features from gray-scale images.
- Using images of people wearing hats / sunglasses may confuse the model.
- Do not upload too many similar images at the same time; it is recommended to train no more than 4-6 similar images for each person to avoid over-fitting.
- Be careful when uploading images of people when they are wearing clothing that covers a lot of their face as this may confuse the training.
- Do not upload too many images at the same time; it is recommended to train 4-6 images for each person each day so it is easier to know if the previously added images helped or hurt performance.
:::
### Understanding the Recent Recognitions Tab
The Recent Recognitions tab in the face library displays recent face recognition attempts. Detected face images are grouped according to the person they were identified as potentially matching.
Each face image is labeled with a name (or `Unknown`) along with the confidence score of the recognition attempt. While each image can be used to train the system for a specific person, not all images are suitable for training.
Refer to the guidelines below for best practices on selecting images for training.
### Step 1 - Building a Strong Foundation
When first enabling face recognition it is important to build a foundation of strong images. It is recommended to start by uploading 1-5 photos containing just this person's face. It is important that the person's face in the photo is front-facing and not turned, this will ensure a good starting point.
When first enabling face recognition it is important to build a foundation of strong images. It is recommended to start by uploading 1-2 photos taken by a smartphone for each person. It is important that the person's face in the photo is straight-on and not turned which will ensure a good starting point.
Then it is recommended to use the `Face Library` tab in Frigate to select and train images for each person as they are detected. When building a strong foundation it is strongly recommended to only train on images that are front-facing. Ignore images from cameras that recognize faces from an angle. Aim to strike a balance between image quality and a range of conditions (day / night, different weather conditions, different times of day, etc.) so that there is diversity in the images used for each person without over-fitting.
You do not want to train on images that score 90%+ as these are already being confidently recognized. In this step the goal is to train on clear, lower scoring front-facing images until the majority of front-facing images for a given person are consistently recognized correctly. Then it is time to move on to step 2.
Then it is recommended to use the `Face Library` tab in Frigate to select and train images for each person as they are detected. When building a strong foundation it is strongly recommended to only train on images that are straight-on. Ignore images from cameras that recognize faces from an angle. Once a person starts to be consistently recognized correctly on images that are straight-on, it is time to move on to the next step.
### Step 2 - Expanding The Dataset
Once front-facing images are performing well, start choosing slightly off-angle images to include for training. It is important to still choose images where enough face detail is visible to recognize someone, and you still only want to train on images that score lower.
## FAQ
### How do I debug Face Recognition issues?
Start with the [Usage](#usage) section and re-read the [Model Requirements](#model-requirements) above.
1. Ensure `person` is being _detected_. A `person` will automatically be scanned by Frigate for a face. Any detected faces will appear in the Recent Recognitions tab in the Frigate UI's Face Library.
If you are using a Frigate+ or `face` detecting model:
- Watch the debug view (Settings --> Debug) to ensure that `face` is being detected along with `person`.
- You may need to adjust the `min_score` for the `face` object if faces are not being detected.
If you are **not** using a Frigate+ or `face` detecting model:
- Check your `detect` stream resolution and ensure it is sufficiently high enough to capture face details on `person` objects.
- You may need to lower your `detection_threshold` if faces are not being detected.
2. Any detected faces will then be _recognized_.
- Make sure you have trained at least one face per the recommendations above.
- Adjust `recognition_threshold` settings per the suggestions [above](#advanced-configuration).
### Detection does not work well with blurry images?
Accuracy is definitely going to be improved with higher quality cameras / streams. It is important to look at the DORI (Detection Observation Recognition Identification) range of your camera, if that specification is posted. This specification explains the distance from the camera that a person can be detected, observed, recognized, and identified. The identification range is the most relevant here, and the distance listed by the camera is the furthest that face recognition will realistically work.
Some users have also noted that setting the stream in camera firmware to a constant bit rate (CBR) leads to better image clarity than with a variable bit rate (VBR).
### Why can't I bulk upload photos?
It is important to methodically add photos to the library; bulk importing photos (especially from a general photo library) will lead to over-fitting in that particular scenario and hurt recognition performance.
### Why can't I bulk reprocess faces?
Face embedding models work by breaking apart faces into different features. This means that when reprocessing an image, only images from a similar angle will have their scores affected.
### Why do unknown people score similarly to known people?
This can happen for a few different reasons, but this is usually an indicator that the training set needs to be improved. This is often related to over-fitting:
- If you train with only a few images per person, especially if those images are very similar, the recognition model becomes overly specialized to those specific images.
- When you provide images with different poses, lighting, and expressions, the algorithm extracts features that are consistent across those variations.
- By training on a diverse set of images, the algorithm becomes less sensitive to minor variations and noise in the input image.
Review your face collections and remove most of the unclear or low-quality images. Then, use the **Reprocess** button on each face in the **Train** tab to evaluate how the changes affect recognition scores.
Avoid training on images that already score highly, as this can lead to over-fitting. Instead, focus on relatively clear images that score lower, ideally with different lighting, angles, and conditions, to help the model generalize more effectively.
### Frigate misidentified a face. Can I tell it that a face is "not" a specific person?
No, face recognition does not support negative training (i.e., explicitly telling it who someone is _not_). Instead, the best approach is to improve the training data by using a more diverse and representative set of images for each person.
For more guidance, refer to the section above on improving recognition accuracy.
### I see scores above the threshold in the Recent Recognitions tab, but a sub label wasn't assigned?
Frigate considers the recognition scores across all recognition attempts for each person object. The scores are continually weighted based on the area of the face, and a sub label will only be assigned if the person is confidently and consistently recognized. This avoids cases where a single high confidence recognition would throw off the results.
### Can I use other face recognition software like DoubleTake at the same time as the built in face recognition?
No, using another face recognition service will interfere with Frigate's built-in face recognition. When using Double Take, its sub_label feature must be disabled if the built-in face recognition is also desired.
### Does face recognition run on the recording stream?
Face recognition does not run on the recording stream, as this would be suboptimal for many reasons:
1. The latency of accessing the recordings means the notifications would not include the names of recognized people because recognition would not complete until after the notification had been sent.
2. The embedding models used run on a set image size, so larger images will be scaled down to match this anyway.
3. Motion clarity is much more important than extra pixels, over-compression and motion blur are much more detrimental to results than resolution.
### I get an unknown error when taking a photo directly with my iPhone
By default iOS devices will use HEIC (High Efficiency Image Container) for images, but this format is not supported for uploads. Choosing `large` as the format instead of `original` will use JPG which will work correctly.
### How can I delete the face database and start over?
Frigate does not store anything in its database related to face recognition. You can simply delete all of your faces through the Frigate UI or remove the contents of the `/media/frigate/clips/faces` directory.
Once straight-on images are performing well, start choosing slightly off-angle images to include for training. It is important to still choose images where enough face detail is visible to recognize someone.

View File

@@ -9,7 +9,7 @@ Some presets of FFmpeg args are provided by default to make the configuration ea
It is highly recommended to use hwaccel presets in the config. These presets not only replace the longer args, but they also give Frigate hints of what hardware is available and allows Frigate to make other optimizations using the GPU such as when encoding the birdseye restream or when scaling a stream that has a size different than the native stream size.
See [the hwaccel docs](/configuration/hardware_acceleration_video.md) for more info on how to setup hwaccel for your GPU / iGPU.
See [the hwaccel docs](/configuration/hardware_acceleration.md) for more info on how to setup hwaccel for your GPU / iGPU.
| Preset | Usage | Other Notes |
| --------------------- | ------------------------------ | ----------------------------------------------------- |
@@ -21,7 +21,8 @@ See [the hwaccel docs](/configuration/hardware_acceleration_video.md) for more i
| preset-nvidia | Nvidia GPU | |
| preset-jetson-h264 | Nvidia Jetson with h264 stream | |
| preset-jetson-h265 | Nvidia Jetson with h265 stream | |
| preset-rkmpp | Rockchip MPP | Use image with \*-rk suffix and privileged mode |
| preset-rk-h264 | Rockchip MPP with h264 stream | Use image with \*-rk suffix and privileged mode |
| preset-rk-h265 | Rockchip MPP with h265 stream | Use image with \*-rk suffix and privileged mode |
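As a sketch, applying a preset globally looks something like the following, assuming an Nvidia GPU (substitute the preset for your hardware from the table above):
```yaml
ffmpeg:
  hwaccel_args: preset-nvidia
```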
### Input Args Presets
@@ -70,11 +71,11 @@ cameras:
Output args presets help make the config more readable and handle use cases for different types of streams to ensure consistent recordings.
| Preset | Usage | Other Notes |
| -------------------------------- | --------------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| preset-record-generic | Record WITHOUT audio | If your camera doesn't have audio, or if you don't want to record audio, use this option |
| preset-record-generic-audio-copy | Record WITH original audio | Use this to enable audio in recordings |
| preset-record-generic-audio-aac | Record WITH transcoded aac audio | This is the default when no option is specified. Use it to transcode audio to AAC. If the source is already in AAC format, use preset-record-generic-audio-copy instead to avoid unnecessary re-encoding |
| preset-record-mjpeg | Record an mjpeg stream | Recommend restreaming mjpeg stream instead |
| preset-record-jpeg | Record live jpeg | Recommend restreaming live jpeg instead |
| preset-record-ubiquiti | Record ubiquiti stream with audio | Recordings with ubiquiti non-standard audio |
| Preset | Usage | Other Notes |
| -------------------------------- | --------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------ |
| preset-record-generic | Record WITHOUT audio | This is the default when nothing is specified |
| preset-record-generic-audio-copy | Record WITH original audio | Use this to enable audio in recordings |
| preset-record-generic-audio-aac | Record WITH transcoded aac audio | Use this to transcode to aac audio. If your source is already aac, use preset-record-generic-audio-copy instead to avoid re-encoding |
| preset-record-mjpeg | Record an mjpeg stream | Recommend restreaming mjpeg stream instead |
| preset-record-jpeg | Record live jpeg | Recommend restreaming live jpeg instead |
| preset-record-ubiquiti | Record ubiquiti stream with audio | Recordings with ubiquiti non-standard audio |
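As a sketch, selecting a record output preset, for example to keep the camera's original audio in recordings, might look like this:
```yaml
ffmpeg:
  output_args:
    record: preset-record-generic-audio-copy
```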

View File

@@ -9,37 +9,24 @@ Requests for a description are sent off automatically to your AI provider at the
## Configuration
Generative AI can be enabled for all cameras or only for specific cameras. If GenAI is disabled for a camera, you can still manually generate descriptions for events using the HTTP API. There are currently 3 native providers available to integrate with Frigate. Other providers that support the OpenAI standard API can also be used. See the OpenAI section below.
Generative AI can be enabled for all cameras or only for specific cameras. There are currently 3 native providers available to integrate with Frigate. Other providers that support the OpenAI standard API can also be used. See the OpenAI section below.
To use Generative AI, you must define a single provider at the global level of your Frigate configuration. If the provider you choose requires an API key, you may either directly paste it in your configuration, or store it in an environment variable prefixed with `FRIGATE_`.
```yaml
genai:
enabled: True
provider: gemini
api_key: "{FRIGATE_GEMINI_API_KEY}"
model: gemini-2.0-flash
model: gemini-1.5-flash
cameras:
front_camera:
genai:
enabled: True # <- enable GenAI for your front camera
use_snapshot: True
objects:
- person
required_zones:
- steps
front_camera: ...
indoor_camera:
objects:
genai:
enabled: False # <- disable GenAI for your indoor camera
genai: # <- disable GenAI for your indoor camera
enabled: False
```
By default, descriptions will be generated for all tracked objects and all zones. But you can also optionally specify `objects` and `required_zones` to only generate descriptions for certain tracked objects or zones.
Optionally, you can generate the description using a snapshot (if enabled) by setting `use_snapshot` to `True`. By default, this is set to `False`, which sends the uncompressed images from the `detect` stream collected over the object's lifetime to the model. Once the object lifecycle ends, only a single compressed and cropped thumbnail is saved with the tracked object. Using a snapshot might be useful when you want to _regenerate_ a tracked object's description as it will provide the AI with a higher-quality image (typically downscaled by the AI itself) than the cropped/compressed thumbnail. Using a snapshot otherwise has a trade-off in that only a single image is sent to your provider, which will limit the model's ability to determine object movement or direction.
Generative AI can also be toggled dynamically for a camera via MQTT with the topic `frigate/<camera_name>/object_descriptions/set`. See the [MQTT documentation](/integrations/mqtt/#frigatecamera_nameobjectdescriptionsset).
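For example, a Home Assistant automation can publish to this topic to disable descriptions overnight. This is only a minimal sketch: it assumes a camera named `front_camera` and the `ON`/`OFF` payloads used by Frigate's other toggle topics.

```yaml
# Sketch of a Home Assistant automation that turns GenAI object descriptions
# off at night for a hypothetical camera named front_camera. Assumes the topic
# accepts the same ON/OFF payloads as Frigate's other */set toggle topics.
automation:
  - alias: "Disable GenAI object descriptions overnight"
    trigger:
      - platform: time
        at: "23:00:00"
    action:
      - service: mqtt.publish
        data:
          topic: "frigate/front_camera/object_descriptions/set"
          payload: "OFF"
```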
## Ollama
:::warning
@@ -56,7 +43,7 @@ Parallel requests also come with some caveats. You will need to set `OLLAMA_NUM_
### Supported Models
You must use a vision capable model with Frigate. Current model variants can be found [in their model library](https://ollama.com/library). Note that Frigate will not automatically download the model you specify in your config; you must download the model to your local instance of Ollama first, e.g. by running `ollama pull llava:7b` on your Ollama server/Docker container. Note that the model specified in Frigate's config must match the downloaded model tag.
You must use a vision capable model with Frigate. Current model variants can be found [in their model library](https://ollama.com/library). At the time of writing, this includes `llava`, `llava-llama3`, `llava-phi3`, and `moondream`. Note that Frigate will not automatically download the model you specify in your config; you must download the model to your local instance of Ollama first, e.g. by running `ollama pull llava:7b` on your Ollama server/Docker container. Note that the model specified in Frigate's config must match the downloaded model tag.
:::note
@@ -64,17 +51,14 @@ You should have at least 8 GB of RAM available (or VRAM if running on GPU) to ru
:::
#### Ollama Cloud models
Ollama also supports [cloud models](https://ollama.com/cloud), where your local Ollama instance handles requests from Frigate, but model inference is performed in the cloud. Set up Ollama locally, sign in with your Ollama account, and specify the cloud model name in your Frigate config. For more details, see the Ollama cloud model [docs](https://docs.ollama.com/cloud).
### Configuration
```yaml
genai:
enabled: True
provider: ollama
base_url: http://localhost:11434
model: qwen3-vl:4b
model: llava:7b
```
## Google Gemini
@@ -83,7 +67,7 @@ Google Gemini has a free tier allowing [15 queries per minute](https://ai.google
### Supported Models
You must use a vision capable model with Frigate. Current model variants can be found [in their documentation](https://ai.google.dev/gemini-api/docs/models/gemini).
You must use a vision capable model with Frigate. Current model variants can be found [in their documentation](https://ai.google.dev/gemini-api/docs/models/gemini). At the time of writing, this includes `gemini-1.5-pro` and `gemini-1.5-flash`.
### Get API Key
@@ -98,24 +82,19 @@ To start using Gemini, you must first get an API key from [Google AI Studio](htt
```yaml
genai:
enabled: True
provider: gemini
api_key: "{FRIGATE_GEMINI_API_KEY}"
model: gemini-2.0-flash
model: gemini-1.5-flash
```
:::note
To use a different Gemini-compatible API endpoint, set the `GEMINI_BASE_URL` environment variable to your provider's API URL.
:::
## OpenAI
OpenAI does not have a free tier for their API. With the release of gpt-4o, pricing has been reduced and each generation should cost fractions of a cent if you choose to go this route.
### Supported Models
You must use a vision capable model with Frigate. Current model variants can be found [in their documentation](https://platform.openai.com/docs/models).
You must use a vision capable model with Frigate. Current model variants can be found [in their documentation](https://platform.openai.com/docs/models). At the time of writing, this includes `gpt-4o` and `gpt-4-turbo`.
### Get API Key
@@ -125,6 +104,7 @@ To start using OpenAI, you must first [create an API key](https://platform.opena
```yaml
genai:
enabled: True
provider: openai
api_key: "{FRIGATE_OPENAI_API_KEY}"
model: gpt-4o
@@ -142,19 +122,19 @@ Microsoft offers several vision models through Azure OpenAI. A subscription is r
### Supported Models
You must use a vision capable model with Frigate. Current model variants can be found [in their documentation](https://learn.microsoft.com/en-us/azure/ai-services/openai/concepts/models).
You must use a vision capable model with Frigate. Current model variants can be found [in their documentation](https://learn.microsoft.com/en-us/azure/ai-services/openai/concepts/models). At the time of writing, this includes `gpt-4o` and `gpt-4-turbo`.
### Create Resource and Get API Key
To start using Azure OpenAI, you must first [create a resource](https://learn.microsoft.com/azure/cognitive-services/openai/how-to/create-resource?pivots=web-portal#create-a-resource). You'll need your API key, model name, and resource URL, which must include the `api-version` parameter (see the example below).
To start using Azure OpenAI, you must first [create a resource](https://learn.microsoft.com/azure/cognitive-services/openai/how-to/create-resource?pivots=web-portal#create-a-resource). You'll need your API key and resource URL, which must include the `api-version` parameter (see the example below). The model field is not required in your configuration as the model is part of the deployment name you chose when deploying the resource.
### Configuration
```yaml
genai:
enabled: True
provider: azure_openai
base_url: https://instance.cognitiveservices.azure.com/openai/responses?api-version=2025-04-01-preview
model: gpt-5-mini
base_url: https://example-endpoint.openai.azure.com/openai/deployments/gpt-4o/chat/completions?api-version=2023-03-15-preview
api_key: "{FRIGATE_OPENAI_API_KEY}"
```
@@ -187,7 +167,7 @@ Analyze the sequence of images containing the {label}. Focus on the likely inten
:::tip
Prompts can use variable replacements `{label}`, `{sub_label}`, and `{camera}` to substitute information from the tracked object as part of the prompt.
Prompts can use variable replacements like `{label}`, `{sub_label}`, and `{camera}` to substitute information from the tracked object as part of the prompt.
:::
@@ -195,35 +175,34 @@ You are also able to define custom prompts in your configuration.
```yaml
genai:
enabled: True
provider: ollama
base_url: http://localhost:11434
model: llava
objects:
prompt: "Analyze the {label} in these images from the {camera} security camera. Focus on the actions, behavior, and potential intent of the {label}, rather than just describing its appearance."
object_prompts:
person: "Examine the main person in these images. What are they doing and what might their actions suggest about their intent (e.g., approaching a door, leaving an area, standing still)? Do not describe the surroundings or static details."
car: "Observe the primary vehicle in these images. Focus on its movement, direction, or purpose (e.g., parking, approaching, circling). If it's a delivery vehicle, mention the company."
```
Prompts can also be overridden at the camera level to provide a more detailed prompt to the model about your specific camera, if you desire.
Prompts can also be overridden at the camera level to provide a more detailed prompt to the model about your specific camera, if you desire. By default, descriptions will be generated for all tracked objects and all zones. But you can also optionally specify `objects` and `required_zones` to only generate descriptions for certain tracked objects or zones.
Optionally, you can generate the description using a snapshot (if enabled) by setting `use_snapshot` to `True`. By default, this is set to `False`, which sends the uncompressed images from the `detect` stream collected over the object's lifetime to the model. Once the object lifecycle ends, only a single compressed and cropped thumbnail is saved with the tracked object. Using a snapshot might be useful when you want to _regenerate_ a tracked object's description as it will provide the AI with a higher-quality image (typically downscaled by the AI itself) than the cropped/compressed thumbnail. Using a snapshot otherwise has a trade-off in that only a single image is sent to your provider, which will limit the model's ability to determine object movement or direction.
```yaml
cameras:
front_door:
objects:
genai:
enabled: True
use_snapshot: True
prompt: "Analyze the {label} in these images from the {camera} security camera at the front door. Focus on the actions and potential intent of the {label}."
object_prompts:
person: "Examine the person in these images. What are they doing, and how might their actions suggest their purpose (e.g., delivering something, approaching, leaving)? If they are carrying or interacting with a package, include details about its source or destination."
cat: "Observe the cat in these images. Focus on its movement and intent (e.g., wandering, hunting, interacting with objects). If the cat is near the flower pots or engaging in any specific actions, mention it."
objects:
- person
- cat
required_zones:
- steps
genai:
use_snapshot: True
prompt: "Analyze the {label} in these images from the {camera} security camera at the front door. Focus on the actions and potential intent of the {label}."
object_prompts:
person: "Examine the person in these images. What are they doing, and how might their actions suggest their purpose (e.g., delivering something, approaching, leaving)? If they are carrying or interacting with a package, include details about its source or destination."
cat: "Observe the cat in these images. Focus on its movement and intent (e.g., wandering, hunting, interacting with objects). If the cat is near the flower pots or engaging in any specific actions, mention it."
objects:
- person
- cat
required_zones:
- steps
```
### Experiment with prompts

View File

@@ -1,142 +0,0 @@
---
id: genai_config
title: Configuring Generative AI
---
## Configuration
A Generative AI provider can be configured in the global config, which will make the Generative AI features available for use. There are currently 3 native providers available to integrate with Frigate. Other providers that support the OpenAI standard API can also be used. See the OpenAI section below.
To use Generative AI, you must define a single provider at the global level of your Frigate configuration. If the provider you choose requires an API key, you may either directly paste it in your configuration, or store it in an environment variable prefixed with `FRIGATE_`.
## Ollama
:::warning
Using Ollama on CPU is not recommended; high inference times make using Generative AI impractical.
:::
[Ollama](https://ollama.com/) allows you to self-host large language models and keep everything running locally. It provides a nice API over [llama.cpp](https://github.com/ggerganov/llama.cpp). It is highly recommended to host this server on a machine with an Nvidia graphics card, or on an Apple silicon Mac, for best performance.
Most of the 7b parameter 4-bit vision models will fit inside 8GB of VRAM. There is also a [Docker container](https://hub.docker.com/r/ollama/ollama) available.
Parallel requests also come with some caveats. You will need to set `OLLAMA_NUM_PARALLEL=1` and choose a `OLLAMA_MAX_QUEUE` and `OLLAMA_MAX_LOADED_MODELS` values that are appropriate for your hardware and preferences. See the [Ollama documentation](https://github.com/ollama/ollama/blob/main/docs/faq.md#how-does-ollama-handle-concurrent-requests).
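A minimal Docker Compose sketch of the Ollama service with those variables set is shown below; the values and volume path are illustrative, not recommendations, so adjust them for your hardware.

```yaml
# Illustrative Ollama service with the concurrency variables discussed above.
services:
  ollama:
    image: ollama/ollama
    ports:
      - "11434:11434"
    volumes:
      - ./ollama:/root/.ollama # persist downloaded models between restarts
    environment:
      - OLLAMA_NUM_PARALLEL=1
      - OLLAMA_MAX_QUEUE=64 # example value, tune for your setup
      - OLLAMA_MAX_LOADED_MODELS=1 # example value, tune for your setup
```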
### Supported Models
You must use a vision capable model with Frigate. Current model variants can be found [in their model library](https://ollama.com/library). Note that Frigate will not automatically download the model you specify in your config. Ollama will try to download it, but the download may take longer than the request timeout, so it is recommended to pull the model beforehand by running `ollama pull your_model` on your Ollama server/Docker container. Note that the model specified in Frigate's config must match the downloaded model tag.
:::info
Each model is available in multiple parameter sizes (3b, 4b, 8b, etc.). Larger sizes are more capable of complex tasks and situational understanding, but require more memory and computational resources. It is recommended to try multiple models and experiment to see which performs best.
:::
:::tip
If you are trying to use a single model for both Frigate and Home Assistant, it will need to support vision and tool calling. qwen3-vl supports vision and tools simultaneously in Ollama.
:::
The following models are recommended:
| Model | Notes |
| ----------------- | -------------------------------------------------------------------- |
| `qwen3-vl` | Strong visual and situational understanding, higher vram requirement |
| `Intern3.5VL` | Relatively fast with good vision comprehension |
| `gemma3` | Strong frame-to-frame understanding, slower inference times |
| `qwen2.5-vl` | Fast but capable model with good vision comprehension |
:::note
You should have at least 8 GB of RAM available (or VRAM if running on GPU) to run the 7B models, 16 GB to run the 13B models, and 32 GB to run the 33B models.
:::
### Configuration
```yaml
genai:
provider: ollama
base_url: http://localhost:11434
model: minicpm-v:8b
provider_options: # other Ollama client options can be defined
keep_alive: -1
options:
num_ctx: 8192 # make sure the context matches other services that are using ollama
```
## Google Gemini
Google Gemini has a free tier allowing [15 queries per minute](https://ai.google.dev/pricing) to the API, which is more than sufficient for standard Frigate usage.
### Supported Models
You must use a vision capable model with Frigate. Current model variants can be found [in their documentation](https://ai.google.dev/gemini-api/docs/models/gemini). At the time of writing, this includes `gemini-1.5-pro` and `gemini-1.5-flash`.
### Get API Key
To start using Gemini, you must first get an API key from [Google AI Studio](https://aistudio.google.com).
1. Accept the Terms of Service
2. Click "Get API Key" from the right hand navigation
3. Click "Create API key in new project"
4. Copy the API key for use in your config
### Configuration
```yaml
genai:
provider: gemini
api_key: "{FRIGATE_GEMINI_API_KEY}"
model: gemini-1.5-flash
```
## OpenAI
OpenAI does not have a free tier for their API. With the release of gpt-4o, pricing has been reduced and each generation should cost fractions of a cent if you choose to go this route.
### Supported Models
You must use a vision capable model with Frigate. Current model variants can be found [in their documentation](https://platform.openai.com/docs/models). At the time of writing, this includes `gpt-4o` and `gpt-4-turbo`.
### Get API Key
To start using OpenAI, you must first [create an API key](https://platform.openai.com/api-keys) and [configure billing](https://platform.openai.com/settings/organization/billing/overview).
### Configuration
```yaml
genai:
provider: openai
api_key: "{FRIGATE_OPENAI_API_KEY}"
model: gpt-4o
```
:::note
To use a different OpenAI-compatible API endpoint, set the `OPENAI_BASE_URL` environment variable to your provider's API URL.
:::
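For example, with a Docker Compose install the variable can be set on the Frigate service. This is only a sketch; the URL is a placeholder for whatever OpenAI-compatible endpoint you actually run.

```yaml
services:
  frigate:
    image: ghcr.io/blakeblackshear/frigate:stable
    environment:
      - OPENAI_BASE_URL=http://192.168.1.50:8080/v1 # placeholder endpoint
```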
## Azure OpenAI
Microsoft offers several vision models through Azure OpenAI. A subscription is required.
### Supported Models
You must use a vision capable model with Frigate. Current model variants can be found [in their documentation](https://learn.microsoft.com/en-us/azure/ai-services/openai/concepts/models). At the time of writing, this includes `gpt-4o` and `gpt-4-turbo`.
### Create Resource and Get API Key
To start using Azure OpenAI, you must first [create a resource](https://learn.microsoft.com/azure/cognitive-services/openai/how-to/create-resource?pivots=web-portal#create-a-resource). You'll need your API key and resource URL, which must include the `api-version` parameter (see the example below). The model field is not required in your configuration as the model is part of the deployment name you chose when deploying the resource.
### Configuration
```yaml
genai:
provider: azure_openai
base_url: https://example-endpoint.openai.azure.com/openai/deployments/gpt-4o/chat/completions?api-version=2023-03-15-preview
api_key: "{FRIGATE_OPENAI_API_KEY}"
```

View File

@@ -1,77 +0,0 @@
---
id: genai_objects
title: Object Descriptions
---
Generative AI can be used to automatically generate descriptive text based on the thumbnails of your tracked objects. This helps with [Semantic Search](/configuration/semantic_search) in Frigate to provide more context about your tracked objects. Descriptions are accessed via the _Explore_ view in the Frigate UI by clicking on a tracked object's thumbnail.
Requests for a description are sent off automatically to your AI provider at the end of the tracked object's lifecycle, or can optionally be sent earlier after a number of significantly changed frames, for example for use in more real-time notifications. Descriptions can also be regenerated manually via the Frigate UI. Note that if you are manually entering a description for a tracked object prior to its end, it will be overwritten by the generated response.
By default, descriptions will be generated for all tracked objects and all zones. But you can also optionally specify `objects` and `required_zones` to only generate descriptions for certain tracked objects or zones.
Optionally, you can generate the description using a snapshot (if enabled) by setting `use_snapshot` to `True`. By default, this is set to `False`, which sends the uncompressed images from the `detect` stream collected over the object's lifetime to the model. Once the object lifecycle ends, only a single compressed and cropped thumbnail is saved with the tracked object. Using a snapshot might be useful when you want to _regenerate_ a tracked object's description as it will provide the AI with a higher-quality image (typically downscaled by the AI itself) than the cropped/compressed thumbnail. Using a snapshot otherwise has a trade-off in that only a single image is sent to your provider, which will limit the model's ability to determine object movement or direction.
Generative AI object descriptions can also be toggled dynamically for a camera via MQTT with the topic `frigate/<camera_name>/object_descriptions/set`. See the [MQTT documentation](/integrations/mqtt/#frigatecamera_nameobjectdescriptionsset).
## Usage and Best Practices
Frigate's thumbnail search excels at identifying specific details about tracked objects; for example, using an "image caption" approach to find a "person wearing a yellow vest," "a white dog running across the lawn," or "a red car on a residential street." To enhance this further, Frigate's default prompts are designed to ask your AI provider about the intent behind the object's actions, rather than just describing its appearance.
While generating simple descriptions of detected objects is useful, understanding intent provides a deeper layer of insight. Instead of just recognizing "what" is in a scene, Frigate's default prompts aim to infer "why" it might be there or "what" it could do next. Descriptions tell you what's happening, but intent gives context. For instance, a person walking toward a door might seem like a visitor, but if they're moving quickly after hours, you can infer a potential break-in attempt. Detecting a person loitering near a door at night can trigger an alert sooner than simply noting "a person standing by the door," helping you respond based on the situation's context.
## Custom Prompts
Frigate sends multiple frames from the tracked object along with a prompt to your Generative AI provider asking it to generate a description. The default prompt is as follows:
```
Analyze the sequence of images containing the {label}. Focus on the likely intent or behavior of the {label} based on its actions and movement, rather than describing its appearance or the surroundings. Consider what the {label} is doing, why, and what it might do next.
```
:::tip
Prompts can use variable replacements `{label}`, `{sub_label}`, and `{camera}` to substitute information from the tracked object as part of the prompt.
:::
You are also able to define custom prompts in your configuration.
```yaml
genai:
provider: ollama
base_url: http://localhost:11434
model: llava
objects:
prompt: "Analyze the {label} in these images from the {camera} security camera. Focus on the actions, behavior, and potential intent of the {label}, rather than just describing its appearance."
object_prompts:
person: "Examine the main person in these images. What are they doing and what might their actions suggest about their intent (e.g., approaching a door, leaving an area, standing still)? Do not describe the surroundings or static details."
car: "Observe the primary vehicle in these images. Focus on its movement, direction, or purpose (e.g., parking, approaching, circling). If it's a delivery vehicle, mention the company."
```
Prompts can also be overridden at the camera level to provide a more detailed prompt to the model about your specific camera, if you desire.
```yaml
cameras:
front_door:
objects:
genai:
enabled: True
use_snapshot: True
prompt: "Analyze the {label} in these images from the {camera} security camera at the front door. Focus on the actions and potential intent of the {label}."
object_prompts:
person: "Examine the person in these images. What are they doing, and how might their actions suggest their purpose (e.g., delivering something, approaching, leaving)? If they are carrying or interacting with a package, include details about its source or destination."
cat: "Observe the cat in these images. Focus on its movement and intent (e.g., wandering, hunting, interacting with objects). If the cat is near the flower pots or engaging in any specific actions, mention it."
objects:
- person
- cat
required_zones:
- steps
```
### Experiment with prompts
Many providers also have a public facing chat interface for their models. Download a couple of different thumbnails or snapshots from Frigate and try new things in the playground to get descriptions to your liking before updating the prompt in Frigate.
- OpenAI - [ChatGPT](https://chatgpt.com)
- Gemini - [Google AI Studio](https://aistudio.google.com)
- Ollama - [Open WebUI](https://docs.openwebui.com/)

View File

@@ -1,120 +0,0 @@
---
id: genai_review
title: Review Summaries
---
Generative AI can be used to automatically generate structured summaries of review items. These summaries will show up in Frigate's native notifications as well as in the UI. Generative AI can also be used to take a collection of summaries over a period of time and provide a report, which may be useful for getting a quick overview of everything that happened while you were away.
Requests for a summary are sent automatically to your AI provider for alert review items when the activity has ended; they can optionally be enabled for detections as well.
Generative AI review summaries can also be toggled dynamically for a [camera via MQTT](/integrations/mqtt/#frigatecamera_namereviewdescriptionsset).
## Review Summary Usage and Best Practices
Review summaries provide structured JSON responses that are saved for each review item:
```
- `title` (string): A concise, direct title that describes the purpose or overall action (e.g., "Person taking out trash", "Joe walking dog").
- `scene` (string): A narrative description of what happens across the sequence from start to finish, including setting, detected objects, and their observable actions.
- `shortSummary` (string): A brief 2-sentence summary of the scene, suitable for notifications. This is a condensed version of the scene description.
- `confidence` (float): 0-1 confidence in the analysis. Higher confidence when objects/actions are clearly visible and context is unambiguous.
- `other_concerns` (list): List of user-defined concerns that may need additional investigation.
- `potential_threat_level` (integer): 0, 1, or 2 as defined below.
```
This will show in multiple places in the UI to give additional context about each activity, and allow viewing more details when extra attention is required. Frigate's built in notifications will automatically show the title and `shortSummary` when the data is available, while the full `scene` description is available in the UI for detailed review.
### Defining Typical Activity
Each installation, and even each camera, can have different parameters for what is considered suspicious activity. Frigate allows the `activity_context_prompt` to be defined globally and at the camera level, so you can specify more precisely what should be considered normal activity. It is important that this is not overly specific, as it can sway the output of the response.
<details>
<summary>Default Activity Context Prompt</summary>
```
### Normal Activity Indicators (Level 0)
- Known/verified people in any zone at any time
- People with pets in residential areas
- Deliveries or services during daytime/evening (6 AM - 10 PM): carrying packages to doors/porches, placing items, leaving
- Services/maintenance workers with visible tools, uniforms, or service vehicles during daytime
- Activity confined to public areas only (sidewalks, streets) without entering property at any time
### Suspicious Activity Indicators (Level 1)
- **Testing or attempting to open doors/windows/handles on vehicles or buildings** — ALWAYS Level 1 regardless of time or duration
- **Unidentified person in private areas (driveways, near vehicles/buildings) during late night/early morning (11 PM - 5 AM)** — ALWAYS Level 1 regardless of activity or duration
- Taking items that don't belong to them (packages, objects from porches/driveways)
- Climbing or jumping fences/barriers to access property
- Attempting to conceal actions or items from view
- Prolonged loitering: remaining in same area without visible purpose throughout most of the sequence
### Critical Threat Indicators (Level 2)
- Holding break-in tools (crowbars, pry bars, bolt cutters)
- Weapons visible (guns, knives, bats used aggressively)
- Forced entry in progress
- Physical aggression or violence
- Active property damage or theft in progress
### Assessment Guidance
Evaluate in this order:
1. **If person is verified/known** → Level 0 regardless of time or activity
2. **If person is unidentified:**
- Check time: If late night/early morning (11 PM - 5 AM) AND in private areas (driveways, near vehicles/buildings) → Level 1
- Check actions: If testing doors/handles, taking items, climbing → Level 1
- Otherwise, if daytime/evening (6 AM - 10 PM) with clear legitimate purpose (delivery, service worker) → Level 0
3. **Escalate to Level 2 if:** Weapons, break-in tools, forced entry in progress, violence, or active property damage visible (escalates from Level 0 or 1)
The mere presence of an unidentified person in private areas during late night hours is inherently suspicious and warrants human review, regardless of what activity they appear to be doing or how brief the sequence is.
```
</details>
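A camera-level override might look like the sketch below. The key placement under `review.genai` mirrors the other options shown on this page, but treat the exact path as an assumption and confirm it against the reference config; the prompt text is only an example.

```yaml
cameras:
  driveway: # hypothetical camera name
    review:
      genai:
        activity_context_prompt: |
          Deliveries to the side door are normal at any time of day.
          A vehicle idling in the driveway for a few minutes is expected.
```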
### Image Source
By default, review summaries use preview images (cached preview frames) which have a lower resolution but use fewer tokens per image. For better image quality and more detailed analysis, you can configure Frigate to extract frames directly from recordings at a higher resolution:
```yaml
review:
genai:
enabled: true
image_source: recordings # Options: "preview" (default) or "recordings"
```
When using `recordings`, frames are extracted at 480px height while maintaining the camera's original aspect ratio, providing better detail for the LLM while being mindful of context window size. This is particularly useful for scenarios where fine details matter, such as identifying license plates, reading text, or analyzing distant objects.
The number of frames sent to the LLM is dynamically calculated based on:
- Your LLM provider's context window size
- The camera's resolution and aspect ratio (ultrawide cameras like 32:9 use more tokens per image)
- The image source (recordings use more tokens than preview images)
Frame counts are automatically optimized to use ~98% of the available context window while capping at 20 frames maximum to ensure reasonable inference times. Note that using recordings will:
- Provide higher quality images to the LLM (480p vs 180p preview images)
- Use more tokens per image due to higher resolution
- Result in fewer frames being sent for ultrawide cameras due to larger image size
- Require that recordings are enabled for the camera
If recordings are not available for a given time period, the system will automatically fall back to using preview frames.
### Additional Concerns
Along with suspicious activity or immediate threats, you may have other concerns, such as animals in your garden or a gate being left open. These concerns can be configured so that the review summaries will make note of them if the activity requires additional review. For example:
```yaml
review:
genai:
enabled: true
additional_concerns:
- animals in the garden
```
## Review Reports
Along with individual review item summaries, Generative AI provides the ability to request a report of a given time period. For example, while away on vacation you can get a daily report of any suspicious activity or other concerns that may require review.
### Requesting Reports Programmatically
Review reports can be requested via the [API](/integrations/api#review-summarization) by sending a POST request to `/api/review/summarize/start/{start_ts}/end/{end_ts}` with Unix timestamps.
For Home Assistant users, there is a built-in service (`frigate.review_summarize`) that makes it easy to request review reports as part of automations or scripts. This allows you to automatically generate daily summaries, vacation reports, or custom time period reports based on your specific needs.
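If you prefer to call the API directly, a Home Assistant `rest_command` pointing at the documented endpoint is one option. The host, port, and timestamps below are placeholders; in practice you would template the timestamps for the period you want summarized.

```yaml
# Sketch: Home Assistant rest_command that requests a review report from
# Frigate's API. Host/port and the Unix timestamps are placeholders.
rest_command:
  frigate_review_report:
    url: "http://frigate.local:5000/api/review/summarize/start/1700000000/end/1700086400"
    method: POST
```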

View File

@@ -1,19 +1,20 @@
---
id: hardware_acceleration_video
title: Video Decoding
id: hardware_acceleration
title: Hardware Acceleration
---
# Video Decoding
# Hardware Acceleration
It is highly recommended to use a GPU for hardware acceleration video decoding in Frigate. Some types of hardware acceleration are detected and used automatically, but you may need to update your configuration to enable hardware accelerated decoding in ffmpeg.
It is highly recommended to use a GPU for hardware acceleration in Frigate. Some types of hardware acceleration are detected and used automatically, but you may need to update your configuration to enable hardware accelerated decoding in ffmpeg.
Depending on your system, these parameters may not be compatible. More information on hardware accelerated decoding for ffmpeg can be found here: https://trac.ffmpeg.org/wiki/HWAccelIntro
# Officially Supported
## Raspberry Pi 3/4
Ensure you increase the allocated RAM for your GPU to at least 128 MB (`raspi-config` > Performance Options > GPU Memory).
If you are using the HA Add-on, you may need to use the full access variant and turn off _Protection mode_ for hardware acceleration.
If you are using the HA addon, you may need to use the full access variant and turn off `Protection mode` for hardware acceleration.
```yaml
# if you want to decode a h264 stream
@@ -27,8 +28,8 @@ ffmpeg:
:::note
If running Frigate through Docker, you either need to run in privileged mode or
map the `/dev/video*` devices to Frigate. With Docker Compose add:
If running Frigate in Docker, you either need to run in privileged mode or
map the `/dev/video*` devices to Frigate. With Docker compose add:
```yaml
services:
@@ -68,19 +69,18 @@ Or map in all the `/dev/video*` devices.
**Recommended hwaccel Preset**
| CPU Generation | Intel Driver | Recommended Preset | Notes |
| -------------- | ------------ | ------------------- | ------------------------------------ |
| gen1 - gen5 | i965 | preset-vaapi | qsv is not supported |
| gen6 - gen7 | iHD | preset-vaapi | qsv is not supported |
| gen8 - gen12 | iHD | preset-vaapi | preset-intel-qsv-\* can also be used |
| gen13+ | iHD / Xe | preset-intel-qsv-\* | |
| Intel Arc GPU | iHD / Xe | preset-intel-qsv-\* | |
| CPU Generation | Intel Driver | Recommended Preset | Notes |
| -------------- | ------------ | ------------------ | ----------------------------------- |
| gen1 - gen7 | i965 | preset-vaapi | qsv is not supported |
| gen8 - gen12 | iHD | preset-vaapi | preset-intel-qsv-* can also be used |
| gen13+ | iHD / Xe | preset-intel-qsv-* | |
| Intel Arc GPU | iHD / Xe | preset-intel-qsv-* | |
:::
:::note
The default driver is `iHD`. You may need to change the driver to `i965` by adding the following environment variable `LIBVA_DRIVER_NAME=i965` to your docker-compose file or [in the `config.yml` for HA Add-on users](advanced.md#environment_vars).
The default driver is `iHD`. You may need to change the driver to `i965` by adding the following environment variable `LIBVA_DRIVER_NAME=i965` to your docker-compose file or [in the `frigate.yaml` for HA OS users](advanced.md#environment_vars).
See [The Intel Docs](https://www.intel.com/content/www/us/en/support/articles/000005505/processors.html) to figure out what generation your CPU is.
@@ -175,33 +175,23 @@ For more information on the various values across different distributions, see h
Depending on your OS and kernel configuration, you may need to change the `/proc/sys/kernel/perf_event_paranoid` kernel tunable. You can test the change by running `sudo sh -c 'echo 2 >/proc/sys/kernel/perf_event_paranoid'` which will persist until a reboot. Make it permanent by running `sudo sh -c 'echo kernel.perf_event_paranoid=2 >> /etc/sysctl.d/local.conf'`
#### Stats for SR-IOV or other devices
#### Stats for SR-IOV devices
When using virtualized GPUs via SR-IOV, you need to specify the device path to use to gather stats from `intel_gpu_top`. This example may work for some systems using SR-IOV:
When using virtualized GPUs via SR-IOV, additional args are needed for GPU stats to function. This can be enabled with the following config:
```yaml
telemetry:
stats:
intel_gpu_device: "sriov"
sriov: True
```
For other virtualized GPUs, try specifying the direct path to the device instead:
```yaml
telemetry:
stats:
intel_gpu_device: "drm:/dev/dri/card0"
```
If you are passing in a device path, make sure you've passed the device through to the container.
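A minimal Docker Compose sketch of passing a render device through to the Frigate container is shown below; the exact `/dev/dri` node may differ on your system.

```yaml
services:
  frigate:
    ...
    devices:
      - /dev/dri/card0:/dev/dri/card0 # device referenced by intel_gpu_device above
```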
## AMD/ATI GPUs (Radeon HD 2000 and newer GPUs) via libva-mesa-driver
VAAPI supports automatic profile selection so it will work automatically with both H.264 and H.265 streams.
:::note
You need to change the driver to `radeonsi` by adding the following environment variable `LIBVA_DRIVER_NAME=radeonsi` to your docker-compose file or [in the `config.yml` for HA Add-on users](advanced.md#environment_vars).
You need to change the driver to `radeonsi` by adding the following environment variable `LIBVA_DRIVER_NAME=radeonsi` to your docker-compose file or [in the `frigate.yaml` for HA OS users](advanced.md#environment_vars).
:::
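With Docker Compose, this is a small sketch of adding the variable to the Frigate service; HA Add-on users should instead set it as described in the linked environment variable docs.

```yaml
services:
  frigate:
    ...
    environment:
      - LIBVA_DRIVER_NAME=radeonsi
```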
@@ -228,7 +218,7 @@ Additional configuration is needed for the Docker container to be able to access
services:
frigate:
...
image: ghcr.io/blakeblackshear/frigate:stable-tensorrt
image: ghcr.io/blakeblackshear/frigate:stable
deploy: # <------------- Add this section
resources:
reservations:
@@ -246,7 +236,7 @@ docker run -d \
--name frigate \
...
--gpus=all \
ghcr.io/blakeblackshear/frigate:stable-tensorrt
ghcr.io/blakeblackshear/frigate:stable
```
### Setup Decoder
@@ -305,7 +295,8 @@ These instructions were originally based on the [Jellyfin documentation](https:/
## NVIDIA Jetson (Orin AGX, Orin NX, Orin Nano\*, Xavier AGX, Xavier NX, TX2, TX1, Nano)
A separate set of docker images is available that is based on Jetpack/L4T. They come with an `ffmpeg` build
with codecs that use the Jetson's dedicated media engine. If your Jetson host is running Jetpack 6.0+ use the `stable-tensorrt-jp6` tagged image. Note that the Orin Nano has no video encoder, so frigate will use software encoding on this platform, but the image will still allow hardware decoding and tensorrt object detection.
with codecs that use the Jetson's dedicated media engine. If your Jetson host is running Jetpack 5.0+ use the `stable-tensorrt-jp5`
tagged image, or if your Jetson host is running Jetpack 6.0+ use the `stable-tensorrt-jp6` tagged image. Note that the Orin Nano has no video encoder, so frigate will use software encoding on this platform, but the image will still allow hardware decoding and tensorrt object detection.
You will need to use the image with the nvidia container runtime:
@@ -315,16 +306,17 @@ You will need to use the image with the nvidia container runtime:
docker run -d \
...
--runtime nvidia
ghcr.io/blakeblackshear/frigate:stable-tensorrt-jp6
ghcr.io/blakeblackshear/frigate:stable-tensorrt-jp5
```
### Docker Compose - Jetson
```yaml
version: '2.4'
services:
frigate:
...
image: ghcr.io/blakeblackshear/frigate:stable-tensorrt-jp6
image: ghcr.io/blakeblackshear/frigate:stable-tensorrt-jp5
runtime: nvidia # Add this
```
@@ -385,8 +377,13 @@ Make sure to follow the [Rockchip specific installation instructions](/frigate/i
Add one of the following FFmpeg presets to your `config.yml` to enable hardware video processing:
```yaml
# if you try to decode a h264 encoded stream
ffmpeg:
hwaccel_args: preset-rkmpp
hwaccel_args: preset-rk-h264
# if you try to decode a h265 (hevc) encoded stream
ffmpeg:
hwaccel_args: preset-rk-h265
```
:::note
@@ -394,62 +391,3 @@ ffmpeg:
Make sure that your SoC supports hardware acceleration for your input stream. For example, if your camera streams with h265 encoding and a 4k resolution, your SoC must be able to de- and encode h265 with a 4k resolution or higher. If you are unsure whether your SoC meets the requirements, take a look at the datasheet.
:::
:::warning
If one or more of your cameras are not properly processed and this error is shown in the logs:
```
[segment @ 0xaaaaff694790] Timestamps are unset in a packet for stream 0. This is deprecated and will stop working in the future. Fix your code to set the timestamps properly
[Parsed_scale_rkrga_0 @ 0xaaaaff819070] No hw context provided on input
[Parsed_scale_rkrga_0 @ 0xaaaaff819070] Failed to configure output pad on Parsed_scale_rkrga_0
Error initializing filters!
Error marking filters as finished
[out#1/rawvideo @ 0xaaaaff3d8730] Nothing was written into output file, because at least one of its streams received no packets.
Restarting ffmpeg...
```
you should try to upgrade to FFmpeg 7. This can be done using this config option:
```
ffmpeg:
path: "7.0"
```
You can set this option globally to use FFmpeg 7 for all cameras, or at the camera level to use it only for specific cameras. Do not confuse this option with:
```
cameras:
name:
ffmpeg:
inputs:
- path: rtsp://viewer:{FRIGATE_RTSP_PASSWORD}@10.0.10.10:554/cam/realmonitor?channel=1&subtype=2
```
:::
## Synaptics
Hardware accelerated video de-/encoding is supported on the Synaptics SL-series SoC.
### Prerequisites
Make sure to follow the [Synaptics specific installation instructions](/frigate/installation#synaptics).
### Configuration
Add one of the following FFmpeg presets to your `config.yml` to enable hardware video processing:
```yaml
ffmpeg:
hwaccel_args: -c:v h264_v4l2m2m
input_args: preset-rtsp-restream
output_args:
record: preset-record-generic-audio-aac
```
:::warning
Make sure that your SoC supports hardware acceleration for your input stream and that your input stream is h264 encoded. For example, if your camera streams with h264 encoding, your SoC must be able to de- and encode it. If you are unsure whether your SoC meets the requirements, take a look at the datasheet.
:::

View File

@@ -1,37 +0,0 @@
---
id: hardware_acceleration_enrichments
title: Enrichments
---
# Enrichments
Some of Frigate's enrichments can use a discrete GPU or integrated GPU for accelerated processing.
## Requirements
Object detection and enrichments (like Semantic Search, Face Recognition, and License Plate Recognition) are independent features. To use a GPU / NPU for object detection, see the [Object Detectors](/configuration/object_detectors.md) documentation. If you want to use your GPU for any supported enrichments, you must choose the appropriate Frigate Docker image for your GPU / NPU and configure the enrichment according to its specific documentation.
- **AMD**
- ROCm support in the `-rocm` Frigate image is automatically detected for enrichments, but only some enrichment models are available due to ROCm's focus on LLMs and limited stability with certain neural network models. Frigate disables models that perform poorly or are unstable to ensure reliable operation, so only compatible enrichments may be active.
- **Intel**
- OpenVINO will automatically be detected and used for enrichments in the default Frigate image.
- **Note:** Intel NPUs have limited model support for enrichments. GPU is recommended for enrichments when available.
- **Nvidia**
- Nvidia GPUs will automatically be detected and used for enrichments in the `-tensorrt` Frigate image.
- Jetson devices will automatically be detected and used for enrichments in the `-tensorrt-jp6` Frigate image.
- **RockChip**
- RockChip NPU will automatically be detected and used for semantic search v1 and face recognition in the `-rk` Frigate image.
Utilizing a GPU for enrichments does not require you to use the same GPU for object detection. For example, you can run the `tensorrt` Docker image for enrichments and still use other dedicated hardware like a Coral or Hailo for object detection. However, one combination that is not supported is TensorRT for object detection and OpenVINO for enrichments.
:::note
A Google Coral is a TPU (Tensor Processing Unit), not a dedicated GPU (Graphics Processing Unit) and therefore does not provide any kind of acceleration for Frigate's enrichments.
:::

View File

@@ -3,12 +3,10 @@ id: index
title: Frigate Configuration
---
For Home Assistant Add-on installations, the config file should be at `/addon_configs/<addon_directory>/config.yml`, where `<addon_directory>` is specific to the variant of the Frigate Add-on you are running. See the list of directories [here](#accessing-add-on-config-dir).
For Home Assistant Addon installations, the config file needs to be in the root of your Home Assistant config directory (same location as `configuration.yaml`). It can be named `frigate.yaml` or `frigate.yml`, but if both files exist `frigate.yaml` will be preferred and `frigate.yml` will be ignored.
For all other installation types, the config file should be mapped to `/config/config.yml` inside the container.
It can be named `config.yml` or `config.yaml`, but if both files exist `config.yml` will be preferred and `config.yaml` will be ignored.
It is recommended to start with a minimal configuration, add to it as described in [this guide](../guides/getting_started.md), and use the built-in configuration editor in Frigate's UI, which supports validation.
```yaml
@@ -25,24 +23,9 @@ cameras:
- detect
```
## Accessing the Home Assistant Add-on configuration directory {#accessing-add-on-config-dir}
## VSCode Configuration Schema
When running Frigate through the HA Add-on, the Frigate `/config` directory is mapped to `/addon_configs/<addon_directory>` in the host, where `<addon_directory>` is specific to the variant of the Frigate Add-on you are running.
| Add-on Variant | Configuration directory |
| -------------------------- | -------------------------------------------- |
| Frigate | `/addon_configs/ccab4aaf_frigate` |
| Frigate (Full Access) | `/addon_configs/ccab4aaf_frigate-fa` |
| Frigate Beta | `/addon_configs/ccab4aaf_frigate-beta` |
| Frigate Beta (Full Access) | `/addon_configs/ccab4aaf_frigate-fa-beta` |
**Whenever you see `/config` in the documentation, it refers to this directory.**
If for example you are running the standard Add-on variant and use the [VS Code Add-on](https://github.com/hassio-addons/addon-vscode) to browse your files, you can click _File_ > _Open folder..._ and navigate to `/addon_configs/ccab4aaf_frigate` to access the Frigate `/config` directory and edit the `config.yaml` file. You can also use the built-in file editor in the Frigate UI to edit the configuration file.
## VS Code Configuration Schema
VS Code supports JSON schemas for automatically validating configuration files. You can enable this feature by adding `# yaml-language-server: $schema=http://frigate_host:5000/api/config/schema.json` to the beginning of the configuration file. Replace `frigate_host` with the IP address or hostname of your Frigate server. If you're using both VS Code and Frigate as an Add-on, you should use `ccab4aaf-frigate` instead. Make sure to expose the internal unauthenticated port `5000` when accessing the config from VS Code on another machine.
VSCode supports JSON schemas for automatically validating configuration files. You can enable this feature by adding `# yaml-language-server: $schema=http://frigate_host:5000/api/config/schema.json` to the beginning of the configuration file. Replace `frigate_host` with the IP address or hostname of your Frigate server. If you're using both VSCode and Frigate as an add-on, you should use `ccab4aaf-frigate` instead. Make sure to expose the internal unauthenticated port `5000` when accessing the config from VSCode on another machine.
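As a sketch, the first line of your config file would then look like this (replace the host as described above; the rest of the config is unchanged):

```yaml
# yaml-language-server: $schema=http://frigate_host:5000/api/config/schema.json
cameras:
  ...
```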
## Environment Variable Substitution
@@ -82,10 +65,10 @@ genai:
Here are some common starter configuration examples. Refer to the [reference config](./reference.md) for detailed information about all the config values.
### Raspberry Pi Home Assistant Add-on with USB Coral
### Raspberry Pi Home Assistant Addon with USB Coral
- Single camera with 720p, 5fps stream for detect
- MQTT connected to the Home Assistant Mosquitto Add-on
- MQTT connected to home assistant mosquitto addon
- Hardware acceleration for decoding video
- USB Coral detector
- Save all video with any detectable motion for 7 days regardless of whether any objects were detected or not

View File

@@ -3,34 +3,32 @@ id: license_plate_recognition
title: License Plate Recognition (LPR)
---
Frigate can recognize license plates on vehicles and automatically add the detected characters to the `recognized_license_plate` field or a [known](#matching) name as a `sub_label` to tracked objects of type `car` or `motorcycle`. A common use case may be to read the license plates of cars pulling into a driveway or cars passing by on a street.
Frigate can recognize license plates on vehicles and automatically add the detected characters or recognized name as a `sub_label` to objects that are of type `car`. A common use case may be to read the license plates of cars pulling into a driveway or cars passing by on a street.
LPR works best when the license plate is clearly visible to the camera. For moving vehicles, Frigate continuously refines the recognition process, keeping the most confident result. When a vehicle becomes stationary, LPR continues to run for a short time after to attempt recognition.
LPR works best when the license plate is clearly visible to the camera. For moving vehicles, Frigate continuously refines the recognition process, keeping the most confident result. However, LPR does not run on stationary vehicles.
When a plate is recognized, the details are:
When a plate is recognized, the detected characters or recognized name is:
- Added as a `sub_label` (if [known](#matching)) or the `recognized_license_plate` field (if unknown) to a tracked object.
- Viewable in the Details pane in Review/History.
- Viewable in the Tracked Object Details pane in Explore (sub labels and recognized license plates).
- Added as a `sub_label` to the `car` tracked object.
- Viewable in the Review Item Details pane in Review and the Tracked Object Details pane in Explore.
- Filterable through the More Filters menu in Explore.
- Published via the `frigate/events` MQTT topic as a `sub_label` ([known](#matching)) or `recognized_license_plate` (unknown) for the `car` or `motorcycle` tracked object.
- Published via the `frigate/tracked_object_update` MQTT topic with `name` (if [known](#matching)) and `plate`.
- Published via the `frigate/events` MQTT topic as a `sub_label` for the tracked object.
## Model Requirements
Users running a Frigate+ model (or any custom model that natively detects license plates) should ensure that `license_plate` is added to the [list of objects to track](https://docs.frigate.video/plus/#available-label-types) either globally or for a specific camera. This will improve the accuracy and performance of the LPR model.
Users without a model that detects license plates can still run LPR. Frigate uses a lightweight YOLOv9 license plate detection model that can be configured to run on your CPU or GPU. In this case, you should _not_ define `license_plate` in your list of objects to track.
Users without a model that detects license plates can still run LPR. Frigate uses a lightweight YOLOv9 license plate detection model that runs on your CPU. In this case, you should _not_ define `license_plate` in your list of objects to track.
:::note
In the default mode, Frigate's LPR needs to first detect a `car` or `motorcycle` before it can recognize a license plate. If you're using a dedicated LPR camera and have a zoomed-in view where a `car` or `motorcycle` will not be detected, you can still run LPR, but the configuration parameters will differ from the default mode. See the [Dedicated LPR Cameras](#dedicated-lpr-cameras) section below.
Frigate needs to first detect a `car` before it can recognize a license plate. If you're using a dedicated LPR camera or have a zoomed-in view, make sure the camera captures enough of the `car` for Frigate to detect it reliably.
:::
## Minimum System Requirements
License plate recognition works by running AI models locally on your system. The YOLOv9 plate detector model and the OCR models ([PaddleOCR](https://github.com/PaddlePaddle/PaddleOCR)) are relatively lightweight and can run on your CPU or GPU, depending on your configuration. At least 4GB of RAM is required.
License plate recognition works by running AI models locally on your system. The models are relatively lightweight and run on your CPU. At least 4GB of RAM is required.
## Configuration
@@ -41,47 +39,28 @@ lpr:
enabled: True
```
Like other enrichments in Frigate, LPR **must be enabled globally** to use the feature. You should disable it for specific cameras at the camera level if you don't want to run LPR on cars on those cameras:
```yaml
cameras:
garage:
...
lpr:
enabled: False
```
For non-dedicated LPR cameras, ensure that your camera is configured to detect objects of type `car` or `motorcycle`, and that a car or motorcycle is actually being detected by Frigate. Otherwise, LPR will not run.
Ensure that your camera is configured to detect objects of type `car`, and that a car is actually being detected by Frigate. Otherwise, LPR will not run.
Like the other real-time processors in Frigate, license plate recognition runs on the camera stream defined by the `detect` role in your config. To ensure optimal performance, select a suitable resolution for this stream in your camera's firmware that fits your specific scene and requirements.
## Advanced Configuration
Fine-tune the LPR feature using these optional parameters at the global level of your config. The only optional parameters that can be set at the camera level are `enabled`, `min_area`, and `enhancement`.
Fine-tune the LPR feature using these optional parameters:
### Detection
- **`detection_threshold`**: License plate object detection confidence score required before recognition runs.
- Default: `0.7`
- Note: This field only applies to the standalone license plate detection model; `threshold` and `min_score` object filters should be used for models like Frigate+ that have license plate detection built in.
- **`min_area`**: Defines the minimum area (in pixels) a license plate must be before recognition runs.
- Default: `1000` pixels. Note: this is intentionally set very low as it is an _area_ measurement (length x width). For reference, 1000 pixels represents a ~32x32 pixel square in your camera image.
- Note: If you are using a Frigate+ model and you set the `threshold` in your objects config for `license_plate` higher than this value, recognition will never run. It's best to ensure these values match, or that this `detection_threshold` is lower than your object config `threshold`.
- **`min_area`**: Defines the minimum size (in pixels) a license plate must be before recognition runs.
- Default: `1000` pixels.
- Depending on the resolution of your camera's `detect` stream, you can increase this value to ignore small or distant plates.
- **`device`**: Device to use to run license plate detection _and_ recognition models.
- Default: `CPU`
- This can be `CPU`, `GPU`, or the GPU's device number. For users without a model that detects license plates natively, using a GPU may increase performance of the YOLOv9 license plate detector model. See the [Hardware Accelerated Enrichments](/configuration/hardware_acceleration_enrichments.md) documentation. However, for users who run a model that detects `license_plate` natively, there is little to no performance gain reported with running LPR on GPU compared to the CPU.
- **`model_size`**: The size of the model used to identify regions of text on plates.
- Default: `small`
- This can be `small` or `large`.
- The `small` model is fast and identifies groups of Latin and Chinese characters.
- The `large` model identifies Latin characters only, and uses an enhanced text detector to find characters on multi-line plates. It is significantly slower than the `small` model.
- If your country or region does not use multi-line plates, you should use the `small` model as performance is much better for single-line plates.
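Putting the detection options above together, a global config tuned for a GPU-backed plate detector might look like this sketch; the values are illustrative, not recommendations.

```yaml
lpr:
  enabled: True
  detection_threshold: 0.75 # illustrative
  min_area: 2000 # illustrative
  device: GPU
  model_size: small
```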
### Recognition
- **`recognition_threshold`**: Recognition confidence score required to add the plate to the object as a `recognized_license_plate` and/or `sub_label`.
- **`recognition_threshold`**: Recognition confidence score required to add the plate to the object as a sub label.
- Default: `0.9`.
- **`min_plate_length`**: Specifies the minimum number of characters a detected license plate must have to be added as a `recognized_license_plate` and/or `sub_label` to an object.
- **`min_plate_length`**: Specifies the minimum number of characters a detected license plate must have to be added as a sub label to an object.
- Use this to filter out short, incomplete, or incorrect detections.
- **`format`**: A regular expression defining the expected format of detected plates. Plates that do not match this format will be discarded.
- `"^[A-Z]{1,3} [A-Z]{1,2} [0-9]{1,4}$"` matches plates like "B AB 1234" or "M X 7"
@@ -90,61 +69,18 @@ Fine-tune the LPR feature using these optional parameters at the global level of
### Matching
- **`known_plates`**: List of strings or regular expressions that assign a custom `sub_label` to `car` and `motorcycle` objects when a recognized plate matches a known value.
- **`known_plates`**: List of strings or regular expressions that assign a custom `sub_label` to `car` objects when a recognized plate matches a known value.
- These labels appear in the UI, filters, and notifications.
- Unknown plates are still saved but are added to the `recognized_license_plate` field rather than the `sub_label`.
- **`match_distance`**: Allows for minor variations (missing/incorrect characters) when matching a detected plate to a known plate.
- For example, setting `match_distance: 1` allows a plate `ABCDE` to match `ABCBE` or `ABCD`.
- This parameter will _not_ operate on known plates that are defined as regular expressions. You should define the full string of your plate in `known_plates` in order to use `match_distance`.
### Image Enhancement
- **`enhancement`**: A value between 0 and 10 that adjusts the level of image enhancement applied to captured license plates before they are processed for recognition. This preprocessing step can sometimes improve accuracy but may also have the opposite effect.
- Default: `0` (no enhancement)
- Higher values increase contrast, sharpen details, and reduce noise, but excessive enhancement can blur or distort characters, actually making them much harder for Frigate to recognize.
- This setting is best adjusted at the camera level if running LPR on multiple cameras.
- If Frigate is already recognizing plates correctly, leave this setting at the default of `0`. However, if you're experiencing frequent character issues or incomplete plates and you can already easily read the plates yourself, try increasing the value gradually, starting at 5 and adjusting as needed. To see how different enhancement levels affect your plates, use the `debug_save_plates` configuration option (see below).
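A camera-level sketch, using an illustrative starting value:

```yaml
cameras:
  driveway: # hypothetical camera name
    lpr:
      enhancement: 5 # start here and adjust while reviewing saved debug plates
```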
### Normalization Rules
- **`replace_rules`**: List of regex replacement rules to normalize detected plates. These rules are applied sequentially and are applied _before_ the `format` regex, if specified. Each rule must have a `pattern` (which can be a string or a regex) and `replacement` (a string, which also supports [backrefs](https://docs.python.org/3/library/re.html#re.sub) like `\1`). These rules are useful for dealing with common OCR issues like noise characters, separators, or confusions (e.g., 'O'→'0').
These rules must be defined at the global level of your `lpr` config.
```yaml
lpr:
replace_rules:
- pattern: "[%#*?]" # Remove noise symbols
replacement: ""
- pattern: "[= ]" # Normalize = or space to dash
replacement: "-"
- pattern: "O" # Swap 'O' to '0' (common OCR error)
replacement: "0"
- pattern: "I" # Swap 'I' to '1'
replacement: "1"
- pattern: '(\w{3})(\w{3})' # Split 6 chars into groups (e.g., ABC123 → ABC-123) - use single quotes to preserve backslashes
replacement: '\1-\2'
```
- Rules fire in order. In the example above: clean noise first, then separators, then swaps, then splits.
- Backrefs (`\1`, `\2`) allow dynamic replacements (e.g., capture groups).
- Any changes made by the rules are printed to the LPR debug log.
- Tip: You can test patterns with tools like regex101.com.
### Debugging
- **`debug_save_plates`**: Set to `True` to save captured text on plates for debugging. These images are stored in `/media/frigate/clips/lpr`, organized into subdirectories by `<camera>/<event_id>`, and named based on the capture timestamp.
- These saved images are not full plates but rather the specific areas of text detected on the plates. It is normal for the text detection model to sometimes find multiple areas of text on the plate. Use them to analyze what text Frigate recognized and how image enhancement affects detection.
- **Note:** Frigate does **not** automatically delete these debug images. Once LPR is functioning correctly, you should disable this option and manually remove the saved files to free up storage.
## Configuration Examples
These configuration parameters are available at the global level of your config. The only optional parameters that should be set at the camera level are `enabled`, `min_area`, and `enhancement`.
```yaml
lpr:
enabled: True
min_area: 1500 # Ignore plates with an area (length x width) smaller than 1500 pixels
min_plate_length: 4 # Only recognize plates with 4 or more characters
known_plates:
    Wife's Car:
      - ... # list this label's plate strings or regular expressions here
```
A more advanced example, using a plate `format`, `match_distance`, and a replacement rule:
```yaml
lpr:
enabled: True
min_area: 4000 # Run recognition on larger plates only (4000 pixels represents a 63x63 pixel square in your image)
recognition_threshold: 0.85
format: "^[A-Z]{2} [A-Z][0-9]{4}$" # Only recognize plates that are two letters, followed by a space, followed by a single letter and 4 numbers
match_distance: 1 # Allow one character variation in plate matching
replace_rules:
- pattern: "O"
replacement: "0" # Replace the letter O with the number 0 in every plate
known_plates:
Delivery Van:
- "RJ K5678"
- "MN D3163"
```
:::note
If a camera is configured to detect `car` or `motorcycle` but you don't want Frigate to run LPR for that camera, disable LPR at the camera level:
```yaml
cameras:
side_yard:
lpr:
enabled: False
...
```
:::
## Dedicated LPR Cameras
Dedicated LPR cameras are single-purpose cameras with powerful optical zoom to capture license plates on distant vehicles, often with fine-tuned settings to capture plates at night.
To mark a camera as a dedicated LPR camera, add `type: "lpr"` to the camera configuration.
:::note
Frigate's dedicated LPR mode is optimized for cameras with a narrow field of view, specifically positioned and zoomed to capture license plates exclusively. If your camera provides a general overview of a scene rather than a tightly focused view, this mode is not recommended.
:::
Users can configure Frigate's dedicated LPR mode in two different ways depending on whether a Frigate+ (or native `license_plate` detecting) model is used:
### Using a Frigate+ (or Native `license_plate` Detecting) Model
Users running a Frigate+ model (or any model that natively detects `license_plate`) can take advantage of `license_plate` detection. This allows license plates to be treated as standard objects in dedicated LPR mode, meaning that alerts, detections, snapshots, and other Frigate features work as usual, and plates are detected efficiently through your configured object detector.
An example configuration for a dedicated LPR camera using a `license_plate`-detecting model:
```yaml
# LPR global configuration
lpr:
enabled: True
device: CPU # can also be GPU if available
# Dedicated LPR camera configuration
cameras:
dedicated_lpr_camera:
type: "lpr" # required to use dedicated LPR camera mode
ffmpeg: ... # add your streams
detect:
enabled: True
fps: 5 # increase to 10 if vehicles move quickly across your frame. Higher than 10 is unnecessary and is not recommended.
min_initialized: 2
width: 1920
height: 1080
objects:
track:
- license_plate
filters:
license_plate:
threshold: 0.7
motion:
threshold: 30
contour_area: 60 # use an increased value to tune out small motion changes
improve_contrast: false
mask: 0.704,0.007,0.709,0.052,0.989,0.055,0.993,0.001 # ensure your camera's timestamp is masked
record:
enabled: True # disable recording if you only want snapshots
snapshots:
enabled: True
review:
detections:
labels:
- license_plate
```
With this setup:
- License plates are treated as normal objects in Frigate.
- Scores, alerts, detections, and snapshots work as expected.
- Snapshots will have license plate bounding boxes on them.
- The `frigate/events` MQTT topic will publish tracked object updates.
- Debug view will display `license_plate` bounding boxes.
- If you are using a Frigate+ model and want to submit images from your dedicated LPR camera for model training and fine-tuning, annotate both the `car` / `motorcycle` and the `license_plate` in the snapshots on the Frigate+ website, even if the car is barely visible.
### Using the Secondary LPR Pipeline (Without Frigate+)
If you are not running a Frigate+ model, you can use Frigate's built-in secondary dedicated LPR pipeline. In this mode, Frigate bypasses the standard object detection pipeline and runs a local license plate detector model on the full frame whenever motion activity occurs.
An example configuration for a dedicated LPR camera using the secondary pipeline:
```yaml
# LPR global configuration
lpr:
enabled: True
device: CPU # can also be GPU if available and correct Docker image is used
detection_threshold: 0.7 # change if necessary
# Dedicated LPR camera configuration
cameras:
dedicated_lpr_camera:
type: "lpr" # required to use dedicated LPR camera mode
lpr:
enabled: True
enhancement: 3 # optional, enhance the image before trying to recognize characters
ffmpeg: ... # add your streams
detect:
enabled: False # disable Frigate's standard object detection pipeline
fps: 5 # increase if necessary, though high values may slow down Frigate's enrichments pipeline and use considerable CPU
width: 1920
height: 1080
objects:
track: [] # required when not using a Frigate+ model for dedicated LPR mode
motion:
threshold: 30
contour_area: 60 # use an increased value here to tune out small motion changes
improve_contrast: false
mask: 0.704,0.007,0.709,0.052,0.989,0.055,0.993,0.001 # ensure your camera's timestamp is masked
record:
enabled: True # disable recording if you only want snapshots
review:
detections:
enabled: True
retain:
default: 7
```
With this setup:
- The standard object detection pipeline is bypassed. Any detected license plates on dedicated LPR cameras are treated similarly to manual events in Frigate. You must **not** specify `license_plate` as an object to track.
- The license plate detector runs on the full frame whenever motion is detected and processes frames according to your detect `fps` setting.
- Review items will always be classified as a `detection`.
- Snapshots will always be saved.
- Zones and object masks are **not** used.
- The `frigate/events` MQTT topic will **not** publish tracked object updates with the license plate bounding box and score, though `frigate/reviews` will publish if recordings are enabled. If a plate is recognized as a [known](#matching) plate, publishing will occur with an updated `sub_label` field. If characters are recognized, publishing will occur with an updated `recognized_license_plate` field.
- License plate snapshots are saved at the highest-scoring moment and appear in Explore.
- Debug view will not show `license_plate` bounding boxes.
### Summary
| Feature | Native `license_plate` detecting Model (like Frigate+) | Secondary Pipeline (without native model or Frigate+) |
| ----------------------- | ------------------------------------------------------ | --------------------------------------------------------------- |
| License Plate Detection | Uses `license_plate` as a tracked object | Runs a dedicated LPR pipeline |
| FPS Setting | 5 (increase for fast-moving cars) | 5 (increase for fast-moving cars, but it may use much more CPU) |
| Object Detection | Standard Frigate+ detection applies | Bypasses standard object detection |
| Debug View | May show `license_plate` bounding boxes | May **not** show `license_plate` bounding boxes |
| MQTT `frigate/events` | Publishes tracked object updates | Publishes limited updates |
| Explore | Recognized plates available in More Filters | Recognized plates available in More Filters |
By selecting the appropriate configuration, users can optimize their dedicated LPR cameras based on whether they are using a Frigate+ model or the secondary LPR pipeline.
### Best practices for using Dedicated LPR camera mode
- Tune your motion detection and increase the `contour_area` until you see only larger motion boxes being created as cars pass through the frame (likely somewhere between 50-90 for a 1920x1080 detect stream). Increasing the `contour_area` filters out small areas of motion and will prevent excessive resource use from looking for license plates in frames that don't even have a car passing through them.
- Disable the `improve_contrast` motion setting, especially if you are running LPR at night and the frame is mostly dark. This will prevent small pixel changes and smaller areas of motion from triggering license plate detection.
- Ensure your camera's timestamp is covered with a motion mask so that it's not incorrectly detected as a license plate.
- For non-Frigate+ users, you may need to change your camera settings for a clearer image or decrease your global `recognition_threshold` config if your plates are not being accurately recognized at night.
- The secondary pipeline mode runs a local AI model on your CPU or GPU (depending on how `device` is configured) to detect plates. Increasing detect `fps` will increase resource usage proportionally.
## FAQ
### Why isn't my license plate being detected and recognized?
Ensure that:
- Your camera has a clear, human-readable, well-lit view of the plate. If you can't read the plate's characters, Frigate certainly won't be able to, even if the model is recognizing a `license_plate`. This may require changing video size, quality, or frame rate settings on your camera, depending on your scene and how fast the vehicles are traveling.
- The plate is large enough in the image (try adjusting `min_area`) or increasing the resolution of your camera's stream.
- Your `enhancement` level (if you've changed it from the default of `0`) is not too high. Too much enhancement will run too much denoising and cause the plate characters to become blurry and unreadable.
- A `car` or `motorcycle` is detected first, as LPR only runs on recognized vehicles.
If you are using a Frigate+ model or a custom model that detects license plates, ensure that `license_plate` is added to your list of objects to track.
If you are using the free model that ships with Frigate, you should _not_ add `license_plate` to the list of objects to track.
Recognized plates will show as object labels in the debug view and will appear in the "Recognized License Plates" select box in the More Filters popout in Explore.
If you are still having issues detecting plates, start with a basic configuration and see the debugging tips below.
### Can I run LPR without detecting `car` or `motorcycle` objects?
In normal LPR mode, Frigate requires a `car` or `motorcycle` to be detected first before recognizing a license plate. If you have a dedicated LPR camera, you can change the camera `type` to `"lpr"` to use the Dedicated LPR Camera algorithm. This comes with important caveats, though. See the [Dedicated LPR Cameras](#dedicated-lpr-cameras) section above.
### How can I improve detection accuracy?
- Make sure plates are large, legible, and well lit in the frame; adjust your camera's resolution, zoom, or stream quality if needed.
- Tune `detection_threshold` and `recognition_threshold` as described above.
- If characters are being misread, increase `enhancement` gradually and review the images saved by `debug_save_plates`.
### Does LPR work at night?
Yes, but performance depends on camera quality, lighting, and infrared capabilities. Make sure your camera can capture clear images of plates at night.
### Can I limit LPR to specific zones?
LPR, like other Frigate enrichments, runs at the camera level rather than the zone level. While you can't restrict LPR to specific zones directly, you can control when recognition runs by setting a `min_area` value to filter out smaller detections.
### How can I match known plates with minor variations?
Use `match_distance` to allow small character mismatches. Alternatively, define multiple variations in `known_plates`.
### How do I debug LPR issues?
Start with ["Why isn't my license plate being detected and recognized?"](#why-isnt-my-license-plate-being-detected-and-recognized). If you are still having issues, work through these steps.
1. Start with a simplified LPR config.
- Remove or comment out everything in your LPR config, including `min_area`, `min_plate_length`, `format`, `known_plates`, or `enhancement` values so that the only values left are `enabled` and `debug_save_plates`. This will run LPR with Frigate's default values.
```yaml
lpr:
enabled: true
debug_save_plates: true
```
2. Enable debug logs to see exactly what Frigate is doing.
- Enable debug logs for LPR by adding `frigate.data_processing.common.license_plate: debug` to your `logger` configuration. These logs are _very_ verbose, so only keep this enabled when necessary. Restart Frigate after this change.
```yaml
logger:
default: info
logs:
frigate.data_processing.common.license_plate: debug
```
3. Ensure your plates are being _detected_.
If you are using a Frigate+ or `license_plate` detecting model:
- Watch the debug view (Settings --> Debug) to ensure that `license_plate` is being detected.
- View MQTT messages for `frigate/events` to verify detected plates.
- You may need to adjust your `min_score` and/or `threshold` for the `license_plate` object if your plates are not being detected.
If you are **not** using a Frigate+ or `license_plate` detecting model:
- Watch the debug logs for messages from the YOLOv9 plate detector.
- You may need to adjust your `detection_threshold` if your plates are not being detected.
4. Ensure the characters on detected plates are being _recognized_.
- Enable `debug_save_plates` to save images of detected text on plates to the clips directory (`/media/frigate/clips/lpr`). Ensure these images are readable and the text is clear.
- Watch the debug view to see plates recognized in real-time. For non-dedicated LPR cameras, the `car` or `motorcycle` label will change to the recognized plate when LPR is enabled and working.
- Adjust `recognition_threshold` settings per the suggestions [above](#advanced-configuration).
### Will LPR slow down my system?
LPR's performance impact depends on your hardware. Ensure you have at least 4GB RAM and a capable CPU or GPU for optimal results. If you are running the Dedicated LPR Camera mode, resource usage will be higher compared to users who run a model that natively detects license plates. Tune your motion detection settings for your dedicated LPR camera so that the license plate detection model runs only when necessary.
### I am seeing a YOLOv9 plate detection metric in Enrichment Metrics, but I have a Frigate+ or custom model that detects `license_plate`. Why is the YOLOv9 model running?
The YOLOv9 license plate detector model will run (and the metric will appear) if you've enabled LPR but haven't defined `license_plate` as an object to track, either at the global or camera level.
If you are detecting `car` or `motorcycle` on cameras where you don't want to run LPR, make sure you disable LPR at the camera level. If you do want to run LPR on those cameras, make sure you define `license_plate` as an object to track.
### It looks like Frigate picked up my camera's timestamp or overlay text as the license plate. How can I prevent this?
This could happen if cars or motorcycles travel close to your camera's timestamp or overlay text. You could either move the text through your camera's firmware, or apply a mask to it in Frigate.
If you are using a model that natively detects `license_plate`, add an _object mask_ of type `license_plate` and a _motion mask_ over your text.
If you are not using a model that natively detects `license_plate` or you are using dedicated LPR camera mode, only a _motion mask_ over your text is required.
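A rough sketch of both mask types covering overlay text in the top-left corner (the coordinates and camera name are placeholders — use the mask editor in the UI to generate real values):
```yaml
cameras:
  driveway:
    motion:
      mask:
        - 0.000,0.000,0.300,0.000,0.300,0.060,0.000,0.060 # motion mask over the timestamp/overlay text
    objects:
      filters:
        license_plate:
          mask:
            - 0.000,0.000,0.300,0.000,0.300,0.060,0.000,0.060 # object mask, only needed with a native license_plate model
```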
### I see "Error running ... model" in my logs. How can I fix this?
This usually happens when your GPU is unable to compile or use one of the LPR models. Set your `device` to `CPU` and try again. GPU acceleration only provides a slight performance increase, and the models are lightweight enough to run without issue on most CPUs.
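For example, a minimal global override to keep the LPR models on the CPU:
```yaml
lpr:
  enabled: True
  device: CPU
```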

If you are using go2rtc, you should adjust the following settings in your camera:
- Video codec: **H.264** - provides the most compatible video codec with all Live view technologies and browsers. Avoid any kind of "smart codec" or "+" codec like _H.264+_ or _H.265+_. as these non-standard codecs remove keyframes (see below).
- Audio codec: **AAC** - provides the most compatible audio codec with all Live view technologies and browsers that support audio.
- I-frame interval (sometimes called the keyframe interval, the interframe space, or the GOP length): match your camera's frame rate, or choose "1x" (for interframe space on Reolink cameras). For example, if your stream outputs 20fps, your i-frame interval should be 20 (or 1x on Reolink). Values higher than the frame rate will cause the stream to take longer to begin playback. See [this page](https://gardinal.net/understanding-the-keyframe-interval/) for more on keyframes. For many users this may not be an issue, but it should be noted that a 1x i-frame interval will cause more storage utilization if you are using the stream for the `record` role as well.
The default video and audio codec on your camera may not always be compatible with your browser, which is why setting them to H.264 and AAC is recommended. See the [go2rtc docs](https://github.com/AlexxIT/go2rtc?tab=readme-ov-file#codecs-madness) for codec support information.
If your camera does not support AAC audio or you are having problems with Live view, try transcoding to AAC audio directly:
```yaml
go2rtc:
streams:
rtsp_cam: # <- for RTSP streams
- "ffmpeg:rtsp://192.168.1.5:554/live0#video=copy#audio=aac" # <- copies video stream and transcodes to aac audio
- "ffmpeg:rtsp_cam#audio=opus" # <- provides support for WebRTC
```
If your camera does not have audio and you are having problems with Live view, you should have go2rtc send video only. A minimal sketch, reusing the stream path from the example above (adapt it to your camera):
```yaml
go2rtc:
  streams:
    rtsp_cam: # <- for RTSP streams
      - "ffmpeg:rtsp://192.168.1.5:554/live0#video=copy" # <- copies the video stream and omits audio
```
WebRTC works by creating a TCP or UDP connection on port `8555`. However, it requires additional configuration:
- For external access, over the internet, set up your router to forward port `8555` to port `8555` on the Frigate device, for both TCP and UDP.
- For internal/local access, unless you are running through the HA Add-on, you will also need to set the WebRTC candidates list in the go2rtc config. For example, if `192.168.1.10` is the local IP of the device running Frigate:
```yaml title="config.yml"
go2rtc:
  streams:
    test_cam: ...
  webrtc:
    candidates:
      - 192.168.1.10:8555 # <- the local IP and port of the device running Frigate
      - stun:8555
```
:::tip
This extra configuration may not be required if Frigate has been installed as a Home Assistant Add-on, as Frigate uses the Supervisor's API to generate a WebRTC candidate.
However, it is recommended if issues occur to define the candidates manually. You should do this if the Frigate Add-on fails to generate a valid candidate. If an error occurs you will see some warnings like the below in the Add-on logs page during the initialization:
```log
[WARN] Failed to get IP address from supervisor
```
:::
For devices that support two way talk, Frigate can be configured to use the feature from the camera's Live view in the Web UI. You will need to:
- Set up go2rtc with [WebRTC](#webrtc-extra-configuration).
- Ensure you access Frigate via https (may require [opening port 8971](/frigate/installation/#ports)).
- For the Home Assistant Frigate card, [follow the docs](http://card.camera/#/usage/2-way-audio) for the correct source.
To use the Reolink Doorbell with two way talk, you should use the [recommended Reolink configuration](/configuration/camera_specific#reolink-cameras)
As a starting point to check compatibility for your camera, view the list of cameras supported for two-way talk on the [go2rtc repository](https://github.com/AlexxIT/go2rtc?tab=readme-ov-file#two-way-audio). For cameras in the category `ONVIF Profile T`, you can use the [ONVIF Conformant Products Database](https://www.onvif.org/conformant-products/)'s FeatureList to check for the presence of `AudioOutput`. A camera that supports `ONVIF Profile T` _usually_ supports this, but due to inconsistent support, a camera that explicitly lists this feature may still not work. If no entry for your camera exists on the database, it is recommended not to buy it or to consult with the manufacturer's support on the feature availability.
To prevent go2rtc from blocking other applications from accessing your camera's two-way audio, you must configure your stream with `#backchannel=0`. See [preventing go2rtc from blocking two-way audio](/configuration/restream#two-way-talk-restream) in the restream documentation.
### Streaming options on camera group dashboards
Frigate provides a dialog in the Camera Group Edit pane with several options for streaming on a camera group's dashboard.
:::note
The default dashboard ("All Cameras") will always use:
- Smart Streaming, unless you've disabled the global Automatic Live View in Settings.
- The first entry set in your `streams` configuration, if defined.
Use a camera group if you want to change any of these settings from the defaults.
:::
Cameras can be temporarily disabled through the Frigate UI and through [MQTT](/integrations/mqtt#frigatecamera_nameenabledset) to conserve system resources. When disabled, Frigate's ffmpeg processes are terminated — recording stops, object detection is paused, and the Live dashboard displays a blank image with a disabled message. Review items, tracked objects, and historical footage for disabled cameras can still be accessed via the UI.
:::note
Disabling a camera via the Frigate UI or MQTT is temporary and does not persist through restarts of Frigate.
:::
For restreamed cameras, go2rtc remains active but does not use system resources for decoding or processing unless there are active external consumers (such as the Advanced Camera Card in Home Assistant using a go2rtc source).
Note that disabling a camera through the config file (`enabled: False`) removes all related UI elements, including historical footage access. To retain access while disabling the camera, keep it enabled in the config and use the UI or MQTT to disable it temporarily.
### Live player error messages
When your browser runs into problems playing back your camera streams, it will log short error messages to the browser console. They indicate playback, codec, or network issues on the client/browser side, not something server side with Frigate itself. Below are the common messages you may see and simple actions you can take to try to resolve them.
- **startup**
- What it means: The player failed to initialize or connect to the live stream (network or startup error).
- What to try: Reload the Live view or click _Reset_. Verify `go2rtc` is running and the camera stream is reachable. Try switching to a different stream from the Live UI dropdown (if available) or use a different browser.
- Possible console messages from the player code:
- `Error opening MediaSource.`
- `Browser reported a network error.`
- `Max error count ${errorCount} exceeded.` (the numeric value will vary)
- **mse-decode**
- What it means: The browser reported a decoding error while trying to play the stream, which usually is a result of a codec incompatibility or corrupted frames.
- What to try: Check the browser console for the supported and negotiated codecs. Ensure your camera/restream is using H.264 video and AAC audio (these are the most compatible). If your camera uses a non-standard audio codec, configure `go2rtc` to transcode the stream to AAC. Try another browser (some browsers have stricter MSE/codec support) and, for iPhone, ensure you're on iOS 17.1 or newer.
- Possible console messages from the player code:
- `Safari cannot open MediaSource.`
- `Safari reported InvalidStateError.`
- `Safari reported decoding errors.`
- **stalled**
- What it means: Playback has stalled because the player has fallen too far behind live (extended buffering or no data arriving).
- What to try: This is usually indicative of the browser struggling to decode too many high-resolution streams at once. Try selecting a lower-bandwidth stream (substream), reduce the number of live streams open, improve the network connection, or lower the camera resolution. Also check your camera's keyframe (I-frame) interval — shorter intervals make playback start and recover faster. You can also try increasing the timeout value in the UI pane of Frigate's settings.
- Possible console messages from the player code:
- `Buffer time (10 seconds) exceeded, browser may not be playing media correctly.`
- `Media playback has stalled after <n> seconds due to insufficient buffering or a network interruption.` (the seconds value will vary)
## Live view FAQ
1. **Why don't I have audio in my Live view?**
Audio is only supported when go2rtc is configured for the camera and the stream uses an audio codec your browser supports (AAC for MSE, OPUS for WebRTC). If the player has fallen back to low bandwidth mode (jsmpeg), the stream is video-only. See the camera settings recommendations above for transcoding options.
2. **Frigate shows that my live stream is in "low bandwidth mode". What does this mean?**
Frigate intelligently selects the live streaming technology based on a number of factors (user-selected modes like two-way talk, camera settings, browser capabilities, available bandwidth) and prioritizes showing an actual up-to-date live view of your camera's stream as quickly as possible.
When you have go2rtc configured, Live view initially attempts to load and play back your stream with a clearer, fluent stream technology (MSE). An initial timeout, a low bandwidth condition that would cause buffering of the stream, or decoding errors in the stream will cause Frigate to switch to the stream defined by the `detect` role, using the jsmpeg format. This is what the UI labels as "low bandwidth mode". On Live dashboards, the mode will automatically reset when smart streaming is configured and activity stops. Continuous streaming mode does not have an automatic reset mechanism, but you can use the _Reset_ option to force a reload of your stream.
If you are using continuous streaming or you are loading more than a few high resolution streams at once on the dashboard, your browser may struggle to begin playback of your streams before the timeout. Frigate always prioritizes showing a live stream as quickly as possible, even if it is a lower quality jsmpeg stream. You can use the "Reset" link/button to try loading your high resolution stream again.
Errors in stream playback (e.g., connection failures, codec issues, or buffering timeouts) that cause the fallback to low bandwidth mode (jsmpeg) are logged to the browser console for easier debugging. These errors may include:
- Network issues (e.g., MSE or WebRTC network connection problems).
- Unsupported codecs or stream formats (e.g., H.265 in WebRTC, which is not supported in some browsers).
- Buffering timeouts or low bandwidth conditions causing fallback to jsmpeg.
- Browser compatibility problems (e.g., iOS Safari limitations with MSE).
To view browser console logs:
1. Open the Frigate Live View in your browser.
2. Open the browser's Developer Tools (F12 or right-click > Inspect > Console tab).
3. Reproduce the error (e.g., load a problematic stream or simulate network issues).
4. Look for messages prefixed with the camera name.
These logs help identify if the issue is player-specific (MSE vs. WebRTC) or related to camera configuration (e.g., go2rtc streams, codecs). If you see frequent errors:
- Verify your camera's H.264/AAC settings (see [Frigate's camera settings recommendations](#camera_settings_recommendations)).
- Check go2rtc configuration for transcoding (e.g., audio to AAC/OPUS).
- Test with a different stream via the UI dropdown (if `live -> streams` is configured).
- For WebRTC-specific issues, ensure port 8555 is forwarded and candidates are set (see [WebRTC Extra Configuration](#webrtc-extra-configuration)).
- If your cameras are streaming at a high resolution, your browser may be struggling to load all of the streams before the buffering timeout occurs. Frigate prioritizes showing a true live view as quickly as possible. If the fallback occurs often, change your live view settings to use a lower bandwidth substream.
If you are still experiencing Frigate falling back to low bandwidth mode, you may need to adjust your camera's settings per the recommendations above or ensure you have enough bandwidth available.
3. **It doesn't seem like my cameras are streaming on the Live dashboard. Why?**
This static image is pulled from the stream defined in your config with the `detect` role. When activity is detected, images from the `detect` stream immediately begin updating at ~5 frames per second so you can see the activity until the live player is loaded and begins playing. This usually only takes a second or two. If the live player times out, buffers, or has streaming errors, the jsmpeg player is loaded and plays a video-only stream from the `detect` role. When activity ends, the players are destroyed and a static image is displayed until activity is detected again, and the process repeats.
Smart streaming depends on having your camera's motion `threshold` and `contour_area` config values dialed in. Use the Motion Tuner in Settings in the UI to tune these values in real-time.
This is Frigate's default and recommended setting because it results in a significant bandwidth savings, especially for high resolution cameras.
6. **I have unmuted some cameras on my dashboard, but I do not hear sound. Why?**
If your camera is streaming (as indicated by a red dot in the upper right, or if it has been set to continuous streaming mode), your browser may be blocking audio until you interact with the page. This is an intentional browser limitation. See [this article](https://developer.mozilla.org/en-US/docs/Web/Media/Autoplay_guide#autoplay_availability). Many browsers have a whitelist feature to change this behavior.
7. **My camera streams have lots of visual artifacts / distortion.**
Some cameras don't include the hardware to support multiple connections to the high resolution stream, and this can cause unexpected behavior. In this case it is recommended to [restream](./restream.md) the high resolution stream so that it can be used for live view and recordings.
8. **Why does my camera stream switch aspect ratios on the Live dashboard?**
Your camera may change aspect ratios on the dashboard because Frigate uses different streams for different purposes. With go2rtc and Smart Streaming, Frigate shows a static image from the `detect` stream when no activity is present, and switches to the live stream when motion is detected. The camera image will change size if your streams use different aspect ratios.
To prevent this, make the `detect` stream match the go2rtc live stream's aspect ratio (resolution does not need to match, just the aspect ratio). You can either adjust the camera's output resolution or set the `width` and `height` values in your config's `detect` section to a resolution with an aspect ratio that matches.
Example: Resolutions from two streams
- Mismatched (may cause aspect ratio switching on the dashboard):
- Live/go2rtc stream: 1920x1080 (16:9)
- Detect stream: 640x352 (~1.82:1, not 16:9)
- Matched (prevents switching):
- Live/go2rtc stream: 1920x1080 (16:9)
- Detect stream: 640x360 (16:9)
You can update the detect settings in your camera config to match the aspect ratio of your go2rtc live stream. For example:
```yaml
cameras:
front_door:
detect:
width: 640
height: 360 # set this to 360 instead of 352
ffmpeg:
inputs:
- path: rtsp://127.0.0.1:8554/front_door # main stream 1920x1080
roles:
- record
- path: rtsp://127.0.0.1:8554/front_door_sub # sub stream 640x352
roles:
- detect
```

To create a poly mask:
5. Click the plus icon under the type of mask or zone you would like to create
6. Click on the camera's latest image to create the points for a masked area. Click the first point again to close the polygon.
7. When you've finished creating your mask, press Save.
8. Restart Frigate to apply your changes.
Your config file will be updated with the relative coordinates of the mask/zone:
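As a rough illustration of what that looks like (the camera name and coordinates are placeholders — yours will reflect the polygon you drew):
```yaml
cameras:
  front_door:
    motion:
      mask:
        - 0.704,0.007,0.709,0.052,0.989,0.055,0.993,0.001
```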

At this point, if motion is working as desired, there is no reason to continue with further tuning.
Once daytime motion detection is tuned, there is a chance that the settings will work well for motion detection during the night as well. If this is the case then the preferred settings can be written to the config file and left alone.
However, if the preferred day settings do not work well at night it is recommended to use Home Assistant or some other solution to automate changing the settings. That way completely separate sets of motion settings can be used for optimal day and night motion detection.
## Tuning For Large Changes In Motion
:::note
Lightning threshold does not stop motion based recordings from being saved.
:::
Large changes in motion like PTZ moves and camera switches between Color and IR mode should result in a pause in object detection. This is done via the `lightning_threshold` configuration. It is defined as the percentage of the image used to detect lightning or other substantial changes where motion detection needs to recalibrate. Increasing this value will make motion detection more likely to consider lightning or IR mode changes as valid motion. Decreasing this value will make motion detection more likely to ignore large amounts of motion such as a person approaching a doorbell camera.
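A minimal sketch of tuning this at the camera level (the value shown is illustrative, not a recommendation):
```yaml
cameras:
  doorbell:
    motion:
      lightning_threshold: 0.8 # raise to treat large frame changes as valid motion, lower to ignore them
```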

Installation varies slightly based on the device that is being used:
- Desktop: Use the install button typically found in right edge of the address bar
- Android: Use the `Install as App` button in the more options menu for Chrome, and the `Add app to Home screen` button for Firefox
- iOS: Use the `Add to Homescreen` button in the share menu

H265 recordings can be viewed in Chrome 108+, Edge and Safari only. All other browsers do not support playback of H.265 recordings.
### Most conservative: Ensure all video is saved
For users deploying Frigate in environments where it is important to have contiguous video stored even if there was no detectable motion, the following config will store all video for 3 days. After 3 days, only video containing motion will be saved for 7 days. After 7 days, only video containing motion and overlapping with alerts or detections will be retained until 30 days have passed.
```yaml
record:
  enabled: True
  continuous:
    days: 3
  motion:
    days: 7
  alerts:
    retain:
      days: 30
      mode: all
  detections:
    retain:
      days: 30
      mode: all
```
### Reduced storage: Only saving video when motion is detected
In order to reduce storage requirements, you can adjust your config to only retain video where motion / activity was detected.
```yaml
record:
  enabled: True
  motion:
    days: 3
  alerts:
    retain:
      days: 30
```
### Minimum: Alerts only
If you only want to retain video that occurs during activity caused by tracked object(s), this config will discard video unless an alert is ongoing.
```yaml
record:
enabled: True
  continuous:
    days: 0
  alerts:
    retain:
      days: 30
```
:::note
Retention configs support decimals, meaning they can be configured to retain `0.5` days, for example.
:::
### Continuous and Motion Recording
The number of days to retain continuous and motion recordings can be set via the following config, where X is a number; by default, continuous recording is disabled.
```yaml
record:
  enabled: True
  continuous:
    days: 1 # <- number of days to keep continuous recordings
  motion:
    days: 2 # <- number of days to keep motion recordings
```
Continuous recording supports different retention modes [which are described below](#what-do-the-different-retain-modes-mean)
**WARNING**: Recordings still must be enabled in the config. If a camera has recordings disabled in the config, enabling via the methods listed above will have no effect.
## What do the different retain modes mean?
Frigate saves from the stream with the `record` role in 10 second segments. These options determine which recording segments are kept for continuous recording (but can also affect tracked objects).
Let's say you have Frigate configured so that your doorbell camera would retain the last **2** days of continuous recording.
- With the `all` option all 48 hours of those two days would be kept and viewable.
- With the `motion` option the only parts of those 48 hours would be segments that Frigate detected motion. This is the middle ground option that won't keep all 48 hours, but will likely keep all segments of interest along with the potential for some extra segments.
- With the `active_objects` option the only segments that would be kept are those where there was a true positive object that was not considered stationary.
The same options are available with alerts and detections, except it will only save the recordings when it overlaps with a review item of that type.
A configuration example of the above retain modes where all `motion` segments are stored for 7 days and `active objects` are stored for 14 days would be as follows:
```yaml
record:
enabled: True
retain:
days: 7
mode: motion
alerts:
retain:
days: 14
mode: active_objects
detections:
retain:
days: 14
mode: active_objects
```
The above configuration example can be added globally or on a per camera basis.
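For instance, the same retain options can be scoped to a single camera (the camera name is illustrative):
```yaml
cameras:
  back_yard:
    record:
      enabled: True
      alerts:
        retain:
          days: 14
          mode: active_objects
```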
## Can I have "continuous" recordings, but only at certain times?
Using Frigate UI, Home Assistant, or MQTT, cameras can be automated to only record in certain situations or at certain times.
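As one hedged sketch using a Home Assistant automation (the entity name is an assumption — it depends on how the Frigate integration names your camera's recordings switch):
```yaml
automation:
  - alias: "Enable Frigate recording at night only"
    trigger:
      - platform: time
        at: "22:00:00"
    action:
      - service: switch.turn_on
        target:
          entity_id: switch.front_door_recordings # hypothetical entity created by the Frigate integration
```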
## How do I export recordings?
Recordings can be exported as video clips directly from the Frigate UI; exported files are written to the exports folder under your media directory (`/media/frigate/exports`).
## Apple Compatibility with H.265 Streams
Apple devices running the Safari browser may fail to playback h.265 recordings. The [apple compatibility option](../configuration/camera_specific.md#h265-cameras-via-safari) should be used to ensure seamless playback on Apple devices.
## Syncing Recordings With Disk
In some cases the recordings files may be deleted but Frigate will not know this has happened. Recordings sync can be enabled which will tell Frigate to check the file system and delete any db entries for files which don't exist.
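Enabling it is a single option under `record` (the option also appears in the reference config later on this page):
```yaml
record:
  sync_recordings: True
```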

tls:
# Optional: Enable TLS for port 8971 (default: shown below)
enabled: True
# Optional: IPv6 configuration
networking:
# Optional: Enable IPv6 on 5000, and 8971 if tls is configured (default: shown below)
ipv6:
enabled: False
# Optional: Proxy configuration
proxy:
# Optional: Mapping for headers from upstream proxies. Only used if Frigate's auth
# is disabled.
# NOTE: Many authentication proxies pass a header downstream with the authenticated
# user name and role. Not all values are supported. It must be a whitelisted header.
# See the docs for more info.
header_map:
user: x-forwarded-user
role: x-forwarded-groups
role_map:
admin:
- sysadmins
- access-level-security
viewer:
- camera-viewer
# Optional: Url for logging out a user. This sets the location of the logout url in
# the UI.
logout_url: /api/logout
# Optional: Auth secret that is checked against the X-Proxy-Secret header sent from
# the proxy. If not set, all requests are trusted regardless of origin.
auth_secret: None
# Optional: The default role to use for proxy auth. Must be "admin" or "viewer"
default_role: viewer
# Optional: The character used to separate multiple values in the proxy headers. (default: shown below)
separator: ","
# Optional: Authentication configuration
auth:
@@ -123,7 +106,7 @@ auth:
# Optional: Refresh time in seconds (default: shown below)
# When the session is going to expire in less time than this setting,
# it will be refreshed back to the session_length.
refresh_time: 1800 # 30 minutes
# Optional: Rate limiting for login failures to help prevent brute force
# login attacks (default: shown below)
# See the docs for more information on valid values
# NOTE: The default values are for the EdgeTPU detector.
# Other detectors will require the model config to be set.
model:
# Required: path to the model. Frigate+ models use plus://<model_id> (default: automatic based on detector)
path: /edgetpu_model.tflite
# Required: path to the labelmap (default: shown below)
labelmap_path: /labelmap.txt
scaling_factor: 2.0
# Optional: Maximum number of cameras to show at one time, showing the most recent (default: show all cameras)
max_cameras: 1
# Optional: Frames-per-second to re-send the last composed Birdseye frame when idle (no motion or active updates). (default: shown below)
idle_heartbeat_fps: 0.0
# Optional: ffmpeg configuration
# More information about presets at https://docs.frigate.video/configuration/ffmpeg_presets
ffmpeg:
# Optional: ffmpeg binary path (default: shown below)
# can also be set to `7.0` or `5.0` to specify one of the included versions
# or can be set to any path that holds `bin/ffmpeg` & `bin/ffprobe`
path: "default"
retry_interval: 10
# Optional: Set tag on HEVC (H.265) recording stream to improve compatibility with Apple players. (default: shown below)
apple_compatibility: false
# Optional: Set the index of the GPU to use for hardware acceleration. (default: shown below)
gpu: 0
# Optional: Detect configuration
# NOTE: Can be overridden at the camera level
max_disappeared: 25
# Optional: Configuration for stationary object tracking
stationary:
# Optional: Stationary classifier that uses visual characteristics to determine if an object
# is stationary even if the box changes enough to be considered motion (default: shown below).
classifier: True
# Optional: Frequency for confirming stationary objects (default: same as threshold)
# When set to 1, object detection will run to confirm the object still exists on every frame.
# If set to 10, object detection will run to confirm the object still exists on every 10th frame.
# Optional: mask to prevent this object type from being detected in certain areas (default: no mask)
# Checks based on the bottom center of the bounding box of the object
mask: 0.000,0.000,0.781,0.000,0.781,0.278,0.000,0.278
# Optional: Configuration for AI generated tracked object descriptions
genai:
# Optional: Enable AI object description generation (default: shown below)
enabled: False
# Optional: Use the object snapshot instead of thumbnails for description generation (default: shown below)
use_snapshot: False
# Optional: The default prompt for generating descriptions. Can use replacement
# variables like "label", "sub_label", "camera" to make more dynamic. (default: shown below)
prompt: "Describe the {label} in the sequence of images with as much detail as possible. Do not describe the background."
# Optional: Object specific prompts to customize description results
# Format: {label}: {prompt}
object_prompts:
person: "My special person prompt."
# Optional: objects to generate descriptions for (default: all objects that are tracked)
objects:
- person
- cat
# Optional: Restrict generation to objects that entered any of the listed zones (default: none, all zones qualify)
required_zones: []
# Optional: What triggers to use to send frames for a tracked object to generative AI (default: shown below)
send_triggers:
# Once the object is no longer tracked
tracked_object_end: True
# Optional: After X many significant updates are received (default: shown below)
after_significant_updates: None
# Optional: Save thumbnails sent to generative AI for review/debugging purposes (default: shown below)
debug_save_thumbnails: False
# Optional: Review configuration
# NOTE: Can be overridden at the camera level
labels:
- car
- person
# Time to cutoff alerts after no alert-causing activity has occurred (default: shown below)
cutoff_time: 40
# Optional: required zones for an object to be marked as an alert (default: none)
# NOTE: when settings required zones globally, this zone must exist on all cameras
# or the config will be considered invalid. In that case the required_zones
labels:
- car
- person
# Time to cutoff detections after no detection-causing activity has occurred (default: shown below)
cutoff_time: 30
# Optional: required zones for an object to be marked as a detection (default: none)
# NOTE: when settings required zones globally, this zone must exist on all cameras
# or the config will be considered invalid. In that case the required_zones
# should be configured at the camera level.
required_zones:
- driveway
# Optional: GenAI Review Summary Configuration
genai:
# Optional: Enable the GenAI review summary feature (default: shown below)
enabled: False
# Optional: Enable GenAI review summaries for alerts (default: shown below)
alerts: True
# Optional: Enable GenAI review summaries for detections (default: shown below)
detections: False
# Optional: Activity Context Prompt to give context to the GenAI what activity is and is not suspicious.
# It is important to be direct and detailed. See documentation for the default prompt structure.
  activity_context_prompt: |
    Define what is and is not suspicious
# Optional: Image source for GenAI (default: preview)
# Options: "preview" (uses cached preview frames at ~180p) or "recordings" (extracts frames from recordings at 480p)
# Using "recordings" provides better image quality but uses more tokens per image.
# Frame count is automatically calculated based on context window size, aspect ratio, and image source (capped at 20 frames).
image_source: preview
# Optional: Additional concerns that the GenAI should make note of (default: None)
additional_concerns:
- Animals in the garden
# Optional: Preferred response language (default: English)
preferred_language: English
# Optional: Motion configuration
# NOTE: Can be overridden at the camera level
# Optional: Number of minutes to wait between cleanup runs (default: shown below)
# This can be used to reduce the frequency of deleting recording segments from disk if you want to minimize i/o
expire_interval: 60
# Optional: Two-way sync recordings database with disk on startup and once a day (default: shown below).
sync_recordings: False
  # Optional: Continuous retention settings
  continuous:
    # Optional: Number of days to retain recordings regardless of tracked objects or motion (default: shown below)
    # NOTE: This should be set to 0 and retention should be defined in alerts and detections section below
    # if you only want to retain recordings of alerts and detections.
    days: 0
  # Optional: Motion retention settings
  motion:
    # Optional: Number of days to retain recordings regardless of tracked objects (default: shown below)
    days: 0
# Optional: Recording Export Settings
export:
# Optional: Timelapse Output Args (default: shown below).
# Optional: Retention settings for recordings of alerts
retain:
# Required: Retention days (default: shown below)
days: 10
# Optional: Mode for retention. (default: shown below)
# all - save all recording segments for alerts regardless of activity
# motion - save all recordings segments for alerts with any detected motion
# Optional: Retention settings for recordings of detections
retain:
# Required: Retention days (default: shown below)
days: 10
# Optional: Mode for retention. (default: shown below)
# all - save all recording segments for detections regardless of activity
# motion - save all recordings segments for detections with any detected motion
snapshots:
# Optional: Enable writing jpg snapshot to /media/frigate/clips (default: shown below)
enabled: False
# Optional: save a clean copy of the snapshot image (default: shown below)
clean_copy: True
# Optional: print a timestamp on the snapshots (default: shown below)
timestamp: False
# Optional: Set the model size used for embeddings. (default: shown below)
# NOTE: small model runs on CPU and large model runs on GPU
model_size: "small"
# Optional: Target a specific device to run the model (default: shown below)
# NOTE: See https://onnxruntime.ai/docs/execution-providers/ for more information
device: None
# Optional: Configuration for face recognition capability
# NOTE: enabled, min_area can be overridden at the camera level
face_recognition:
# Optional: Enable face recognition (default: shown below)
enabled: False
# Optional: Minimum face distance score required to mark as a potential match (default: shown below)
unknown_score: 0.8
# Optional: Minimum face detection score required to detect a face (default: shown below)
# NOTE: This only applies when not running a Frigate+ model
detection_threshold: 0.7
# Optional: Minimum face distance score required to be considered a match (default: shown below)
recognition_threshold: 0.9
# Optional: Min area of detected face box to consider running face recognition (default: shown below)
min_area: 500
# Optional: Min face recognitions for the sub label to be applied to the person object (default: shown below)
min_faces: 1
# Optional: Number of images of recognized faces to save for training (default: shown below)
save_attempts: 200
# Optional: Apply a blur quality filter to adjust confidence based on the blur level of the image (default: shown below)
blur_confidence_filter: True
# Optional: Set the model size used for face recognition. (default: shown below)
model_size: small
# Optional: Target a specific device to run the model (default: shown below)
# NOTE: See https://onnxruntime.ai/docs/execution-providers/ for more information
device: None
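# A short sketch of a typical face recognition setup; the thresholds above can usually
# stay at their defaults, and the min_area value here is illustrative:
#
#   face_recognition:
#     enabled: True
#     min_area: 1000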
# Optional: Configuration for license plate recognition capability
# NOTE: enabled, min_area, and enhancement can be overridden at the camera level
lpr:
# Optional: Enable license plate recognition (default: shown below)
enabled: False
# Optional: The device to run the models on (default: shown below)
# NOTE: See https://onnxruntime.ai/docs/execution-providers/ for more information
device: CPU
# Optional: Set the model size used for text detection. (default: shown below)
model_size: small
# Optional: License plate object confidence score required to begin running recognition (default: shown below)
detection_threshold: 0.7
# Optional: Minimum area of license plate to begin running recognition (default: shown below)
@@ -672,84 +568,30 @@ lpr:
match_distance: 1
# Optional: Known plates to track (strings or regular expressions) (default: shown below)
known_plates: {}
# Optional: Enhance the detected plate image with contrast adjustment and denoising (default: shown below)
# A value between 0 and 10. Higher values are not always better and may perform worse than lower values.
enhancement: 0
# Optional: Save plate images to /media/frigate/clips/lpr for debugging purposes (default: shown below)
debug_save_plates: False
# Optional: List of regex replacement rules to normalize detected plates (default: shown below)
replace_rules: {}
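# A short sketch of enabling LPR with a known plate; the plate name and pattern are
# illustrative:
#
#   lpr:
#     enabled: True
#     known_plates:
#       delivery_van:
#         - "ABC1234"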
# Optional: Configuration for the AI / LLM provider used to generate tracked object descriptions
# WARNING: Depending on the provider, this will send thumbnails over the internet
# to Google or OpenAI's LLMs to generate descriptions. GenAI can be disabled at
# the camera level (enabled: False) to enhance privacy for indoor cameras.
genai:
# Optional: Enable AI description generation (default: shown below)
enabled: False
# Required if enabled: Provider must be one of ollama, gemini, or openai
provider: ollama
# Required if provider is ollama. May also be used for an OpenAI API compatible backend with the openai provider.
base_url: http://localhost:11434
# Required if gemini or openai
api_key: "{FRIGATE_GENAI_API_KEY}"
# Required: The model to use with the provider.
model: gemini-1.5-flash
# Optional additional args to pass to the GenAI Provider (default: None)
provider_options:
keep_alive: -1
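# A short sketch of an Ollama-backed provider; the URL and model name are illustrative:
#
#   genai:
#     enabled: True
#     provider: ollama
#     base_url: http://192.168.1.5:11434
#     model: llava:7b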
# Optional: Configuration for audio transcription
# NOTE: only the enabled option can be overridden at the camera level
audio_transcription:
# Optional: Enable live and speech event audio transcription (default: shown below)
enabled: False
# Optional: The device to run the models on for live transcription. (default: shown below)
device: CPU
# Optional: Set the model size used for live transcription. (default: shown below)
model_size: small
# Optional: Set the language used for transcription translation. (default: shown below)
# List of language codes: https://github.com/openai/whisper/blob/main/whisper/tokenizer.py#L10
language: en
# Optional: Configuration for classification models
classification:
# Optional: Configuration for bird classification
bird:
# Optional: Enable bird classification (default: shown below)
enabled: False
# Optional: Minimum classification score required to be considered a match (default: shown below)
threshold: 0.9
custom:
# Required: name of the classification model
model_name:
# Optional: Enable running the model (default: shown below)
enabled: True
# Optional: Name of classification model (default: shown below)
name: None
# Optional: Classification score threshold to change the state (default: shown below)
threshold: 0.8
# Optional: Number of classification attempts to save in the recent classifications tab (default: shown below)
# NOTE: Defaults to 200 for object classification and 100 for state classification if not specified
save_attempts: None
# Optional: Object classification configuration
object_config:
# Required: Object types to classify
objects: [dog]
# Optional: Type of classification that is applied (default: shown below)
classification_type: sub_label
# Optional: State classification configuration
state_config:
# Required: Cameras to run classification on
cameras:
camera_name:
# Required: Crop of image frame on this camera to run classification on
crop: [0, 180, 220, 400]
# Optional: If classification should be run when motion is detected in the crop (default: shown below)
motion: False
# Optional: Interval to run classification on in seconds (default: shown below)
interval: None
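# A short sketch of a custom state classification model watching a crop of one camera;
# the model name, camera name, and crop are illustrative:
#
#   classification:
#     custom:
#       gate_state:
#         threshold: 0.8
#         state_config:
#           cameras:
#             driveway:
#               crop: [0, 180, 220, 400]
#               motion: True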
# Optional: Restream configuration
# Uses https://github.com/AlexxIT/go2rtc (v1.9.10)
# NOTE: The default go2rtc API port (1984) must be used,
# changing this port for the integrated go2rtc instance is not supported.
go2rtc:
@@ -803,9 +645,6 @@ cameras:
# If disabled: config is used but no live stream and no capture etc.
# Events/Recordings are still viewable.
enabled: True
# Optional: camera type used for some Frigate features (default: shown below)
# Options are "generic" and "lpr"
type: "generic"
# Required: ffmpeg settings for the camera
ffmpeg:
# Required: A list of input streams for the camera. See documentation for more information.
@@ -848,8 +687,6 @@ cameras:
# NOTE: This must be different than any camera names, but can match with another zone on another
# camera.
front_steps:
# Optional: A friendly name or descriptive text for the zones
friendly_name: ""
# Required: List of x,y coordinates to define the polygon of the zone.
# NOTE: Presence in a zone is evaluated only based on the bottom center of the objects bounding box.
coordinates: 0.033,0.306,0.324,0.138,0.439,0.185,0.042,0.428
@@ -911,7 +748,7 @@ cameras:
user: admin
# Optional: password for login.
password: admin
# Optional: Skip TLS verification and disable digest authentication for the ONVIF server (default: shown below)
tls_insecure: False
# Optional: Ignores time synchronization mismatches between the camera and the server during authentication.
# Using NTP on both ends is recommended and this should only be set to True in a "safe" environment due to the security risk it represents.
@@ -957,27 +794,33 @@ cameras:
# By default the cameras are sorted alphabetically.
order: 0
# Optional: Configuration for triggers to automate actions based on semantic search results.
triggers:
# Required: Unique identifier for the trigger (generated automatically from friendly_name if not specified).
trigger_name:
# Required: Enable or disable the trigger. (default: shown below)
enabled: true
# Optional: A friendly name or descriptive text for the trigger
friendly_name: Unique name or descriptive text
# Type of trigger, either `thumbnail` for image-based matching or `description` for text-based matching. (default: none)
type: thumbnail
# Reference data for matching, either an event ID for `thumbnail` or a text string for `description`. (default: none)
data: 1751565549.853251-b69j73
# Similarity threshold for triggering. (default: shown below)
threshold: 0.8
# List of actions to perform when the trigger fires. (default: none)
# Available options:
# - `notification` (send a webpush notification)
# - `sub_label` (add trigger friendly name as a sub label to the triggering tracked object)
# - `attribute` (add trigger's name and similarity score as a data attribute to the triggering tracked object)
actions:
- notification
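# A short sketch of a thumbnail trigger that sends a notification; the trigger name and
# event ID are illustrative:
#
#   triggers:
#     package_delivery:
#       enabled: true
#       type: thumbnail
#       data: 1751565549.853251-b69j73
#       threshold: 0.75
#       actions:
#         - notification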
# Optional: Configuration for AI generated tracked object descriptions
genai:
# Optional: Enable AI description generation (default: shown below)
enabled: False
# Optional: Use the object snapshot instead of thumbnails for description generation (default: shown below)
use_snapshot: False
# Optional: The default prompt for generating descriptions. Can use replacement
# variables like "label", "sub_label", "camera" to make more dynamic. (default: shown below)
prompt: "Describe the {label} in the sequence of images with as much detail as possible. Do not describe the background."
# Optional: Object specific prompts to customize description results
# Format: {label}: {prompt}
object_prompts:
person: "My special person prompt."
# Optional: objects to generate descriptions for (default: all objects that are tracked)
objects:
- person
- cat
# Optional: Restrict generation to objects that entered any of the listed zones (default: none, all zones qualify)
required_zones: []
# Optional: What triggers to use to send frames for a tracked object to generative AI (default: shown below)
send_triggers:
# Once the object is no longer tracked
tracked_object_end: True
# Optional: After X many significant updates are received (default: shown below)
after_significant_updates: None
# Optional: Save thumbnails sent to generative AI for review/debugging purposes (default: shown below)
debug_save_thumbnails: False
# Optional
ui:
@@ -1002,6 +845,10 @@ ui:
# full: 8:15:22 PM Mountain Standard Time
# (default: shown below).
time_style: medium
# Optional: Ability to manually override the date / time styling to use strftime format
# https://www.gnu.org/software/libc/manual/html_node/Formatting-Calendar-Time.html
# possible values are shown above (default: not set)
strftime_fmt: "%Y/%m/%d %H:%M"
# Optional: Set the unit system to either "imperial" or "metric" (default: metric)
# Used in the UI and in MQTT topics
unit_system: metric
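# A short sketch combining the options above to force a custom 24-hour layout with
# metric units (when strftime_fmt is set it overrides the date/time styles):
#
#   ui:
#     strftime_fmt: "%Y/%m/%d %H:%M"
#     unit_system: metric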
@@ -1023,12 +870,12 @@ telemetry:
# Optional: Enable Intel GPU stats (default: shown below)
intel_gpu_stats: True
# Optional: Treat GPU as SR-IOV to fix GPU stats (default: shown below)
sriov: False
# Optional: Enable network bandwidth stats monitoring for camera ffmpeg processes, go2rtc, and object detectors. (default: shown below)
# NOTE: The container must either be privileged or have cap_net_admin, cap_net_raw capabilities enabled.
network_bandwidth: False
# Optional: Enable the latest version outbound check (default: shown below)
# NOTE: If you use the Home Assistant integration, disabling this will prevent it from reporting new versions
version_check: True
# Optional: Camera groups (default: no groups are setup)

View File

@@ -7,7 +7,7 @@ title: Restream
Frigate can restream your video feed as an RTSP feed for other applications, such as Home Assistant, to utilize at `rtsp://<frigate_host>:8554/<camera_name>`. Port 8554 must be open. [This allows you to use a video feed for detection in Frigate and Home Assistant live view at the same time without having to make two separate connections to the camera](#reduce-connections-to-camera). The video is copied directly from the original feed to avoid re-encoding, so the restream does not include any annotations added by Frigate.
Frigate uses [go2rtc](https://github.com/AlexxIT/go2rtc/tree/v1.9.10) to provide its restream and MSE/WebRTC capabilities. The go2rtc config is hosted under the `go2rtc` section of the Frigate config; see the [go2rtc docs](https://github.com/AlexxIT/go2rtc/tree/v1.9.10#configuration) for more advanced configurations and features.
:::note
@@ -24,12 +24,6 @@ birdseye:
restream: True
```
:::tip
To improve connection speed when using Birdseye via restream, you can enable a small idle heartbeat by setting `birdseye.idle_heartbeat_fps` to a low value (e.g. `12`). This makes Frigate periodically push the last frame even when no motion is detected, reducing initial connection latency.
:::
### Securing Restream With Authentication
The go2rtc restream can be secured with RTSP based username / password authentication. Ex:
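A minimal sketch, assuming go2rtc's standard `rtsp` module options (`username`/`password`; the credentials and stream here are placeholders):

```yaml
go2rtc:
  rtsp:
    # Credentials required by clients connecting to the restream
    username: "admin"
    password: "supersecret"
  streams:
    front_door:
      - rtsp://user:password@10.0.10.10:554/cam/realmonitor?channel=1&subtype=2
```

Clients would then need to supply these credentials when connecting, e.g. `rtsp://admin:supersecret@<frigate_host>:8554/front_door`.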
@@ -140,7 +134,7 @@ cameras:
## Handling Complex Passwords
go2rtc expects URL-encoded passwords in the config; [urlencoder.org](https://urlencoder.org) can be used for this purpose.
For example:
@@ -158,36 +152,11 @@ go2rtc:
my_camera: rtsp://username:$%40foo%25@192.168.1.100
```
See [this comment](https://github.com/AlexxIT/go2rtc/issues/1217#issuecomment-2242296489) for more information.
## Preventing go2rtc from blocking two-way audio {#two-way-talk-restream}
For cameras that support two-way talk, go2rtc will automatically establish an audio output backchannel when connecting to an RTSP stream. This backchannel blocks access to the camera's audio output for two-way talk functionality, preventing both Frigate and other applications from using it.
To prevent this, you must configure two separate stream instances:
1. One stream instance with `#backchannel=0` for Frigate's viewing, recording, and detection (prevents go2rtc from establishing the blocking backchannel)
2. A second stream instance without `#backchannel=0` for two-way talk functionality (can be used by Frigate's WebRTC viewer or other applications)
Configuration example:
```yaml
go2rtc:
streams:
front_door:
- rtsp://user:password@10.0.10.10:554/cam/realmonitor?channel=1&subtype=2#backchannel=0
front_door_twoway:
- rtsp://user:password@10.0.10.10:554/cam/realmonitor?channel=1&subtype=2
```
In this configuration:
- `front_door` stream is used by Frigate for viewing, recording, and detection. The `#backchannel=0` parameter prevents go2rtc from establishing the audio output backchannel, so it won't block two-way talk access.
- `front_door_twoway` stream is used for two-way talk functionality. This stream can be used by Frigate's WebRTC viewer when two-way talk is enabled, or by other applications (like Home Assistant Advanced Camera Card) that need access to the camera's audio output channel.
## Advanced Restream Configurations
The [exec](https://github.com/AlexxIT/go2rtc/tree/v1.9.10#source-exec) source in go2rtc can be used for custom ffmpeg commands. An example is below:
NOTE: The output will need to be passed with two curly braces `{{output}}`
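A sketch of what such an exec source might look like, looping a local file into the restream (the stream name and file path are illustrative):

```yaml
go2rtc:
  streams:
    # -re reads the file at native speed, -stream_loop -1 loops it forever,
    # and -c copy avoids re-encoding; {{output}} is filled in by go2rtc.
    looped_clip: exec:ffmpeg -hide_banner -re -stream_loop -1 -i /media/sample.mp4 -c copy -rtsp_transport tcp -f rtsp {{output}}
```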

View File

@@ -21,21 +21,6 @@ In 0.14 and later, all of that is bundled into a single review item which starts
Not every segment of video captured by Frigate may be of the same level of interest to you. Video of people who enter your property may be a different priority than those walking by on the sidewalk. For this reason, Frigate 0.14 categorizes review items as _alerts_ and _detections_. By default, all person and car objects are considered alerts. You can refine categorization of your review items by configuring required zones for them.
:::note
Alerts and detections categorize the tracked objects in review items, but Frigate must first detect those objects with your configured object detector (Coral, OpenVINO, etc). By default, the object tracker only detects `person`. Setting `labels` for `alerts` and `detections` does not automatically enable detection of new objects. To detect more than `person`, you should add the following to your config:
```yaml
objects:
track:
- person
- car
- ...
```
See the [objects documentation](objects.md) for the list of objects that Frigate's default model tracks.
:::
## Restricting alerts to specific labels
By default a review item will only be marked as an alert if a person or car is detected. This can be configured to include any object or audio label using the following config:
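A sketch of that config, assuming the `review` → `alerts` → `labels` structure (the label list is illustrative and can include audio labels such as `speech`):

```yaml
review:
  alerts:
    labels:
      - car
      - person
      - speech
```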

Some files were not shown because too many files have changed in this diff Show More