Compare commits


15 Commits

Author SHA1 Message Date
Josh Hawkins
a8b90834b0 use same logging pattern in sync_recordings as the other sync functions 2026-01-06 10:13:32 -06:00
Josh Hawkins
f1a19128ed Media sync API refactor and UI (#21542)
* generic job infrastructure

* types and dispatcher changes for jobs

* save data in memory only for completed jobs

* implement media sync job and endpoints

* change logs to debug

* websocket hook and types

* frontend

* i18n

* docs tweaks

* endpoint descriptions

* tweak docs
2026-01-06 08:20:19 -07:00
Josh Hawkins
a77b0a7c4b Add media sync API endpoint (#21526)
* add media cleanup functions

* add endpoint

* remove scheduled sync recordings from cleanup

* move to utils dir

* tweak import

* remove sync_recordings and add config migrator

* remove sync_recordings

* docs

* remove key

* clean up docs

* docs fix

* docs tweak
2026-01-04 11:21:55 -07:00
Nicolas Mowen
1c95eb2c39 Add API to handle deleting recordings (#21520)
* Add recording delete API

* Re-organize recordings apis

* Fix import

* Consolidate query types
2026-01-03 08:19:41 -07:00
Nicolas Mowen
26744efb1e Exports Improvements (#21521)
* Add images to case folder view

* Add ability to select case in export dialog

* Add to mobile review too
2026-01-03 08:03:33 -07:00
Nicolas Mowen
aa0b082184 Add support for GPU and NPU temperatures (#21495)
* Add rockchip temps

* Add support for GPU and NPU temperatures in the frontend

* Add support for Nvidia temperature

* Improve separation

* Adjust graph scaling
2025-12-31 13:32:07 -07:00
Andrew Roberts
7fb8d9b050 Camera-specific hwaccel settings for timelapse exports (correct base) (#21386)
* added hwaccel_args to camera.record.export config struct

* populate camera.record.export.hwaccel_args with a cascade up to camera then global if 'auto'

* use new hwaccel args in export

* added documentation for camera-specific hwaccel export

* fix c/p error

* missed an import

* fleshed out the docs and comments a bit

* ruff lint

* separated out the tips in the doc

* fix documentation

* fix and simplify reference config doc
2025-12-22 09:10:40 -07:00
Nicolas Mowen
b8bc98a423 Refactor temperature reporting for detectors and implement Hailo temp reading (#21395)
* Add Hailo temperature retrieval

* Refactor `get_hailo_temps()` to use ctxmanager

* Show Hailo temps in system UI

* Move hailo_platform import to get_hailo_temps

* Refactor temperatures calculations to use within detector block

* Adjust webUI to handle new location

---------

Co-authored-by: tigattack <10629864+tigattack@users.noreply.github.com>
2025-12-22 08:25:38 -07:00
Nicolas Mowen
f9e06bb7b7 Export filter UI (#21322)
* Get started on export filters

* implement basic filter

* Implement filtering and adjust api

* Improve filter handling

* Improve navigation

* Cleanup

* handle scrolling
2025-12-16 16:10:48 -06:00
Josh Hawkins
7cc16161b3 Camera connection quality indicator (#21297)
* add camera connection quality metrics and indicator

* formatting

* move stall calcs to watchdog

* clean up

* change watchdog to 1s and separately track time for ffmpeg retry_interval

* implement status caching to reduce message volume
2025-12-15 14:02:03 -07:00
Nicolas Mowen
08311a6ee2 Case management UI (#21299)
* Refactor export cards to match existing cards in other UI pages

* Show cases separately from exports

* Add proper filtering and display of cases

* Add ability to edit and select cases for exports

* Cleanup typing

* Hide if no unassigned

* Cleanup hiding logic

* fix scrolling

* Improve layout
2025-12-15 13:10:50 -07:00
Josh Hawkins
a08c044144 refactor vainfo to search for first GPU (#21296)
use existing LibvaGpuSelector to pick appropriate libva device 2025-12-15 08:58:50 -07:00
2025-12-15 08:58:50 -07:00
Nicolas Mowen
5cced22f65 implement case management for export apis (#21295) 2025-12-15 08:54:13 -07:00
Nicolas Mowen
b962c95725 Create scaffolding for case management (#21293) 2025-12-15 08:28:52 -07:00
Nicolas Mowen
0cbec25494 Update version 2025-12-15 07:46:31 -07:00
449 changed files with 5730 additions and 8892 deletions

View File

@@ -19,9 +19,9 @@ jobs:
- uses: actions/checkout@v6
with:
persist-credentials: false
- uses: actions/setup-node@v6
- uses: actions/setup-node@master
with:
node-version: 20.x
node-version: 16.x
- run: npm install
working-directory: ./web
- name: Lint
@@ -35,7 +35,7 @@ jobs:
- uses: actions/checkout@v6
with:
persist-credentials: false
- uses: actions/setup-node@v6
- uses: actions/setup-node@master
with:
node-version: 20.x
- run: npm install
@@ -78,7 +78,7 @@ jobs:
uses: actions/checkout@v6
with:
persist-credentials: false
- uses: actions/setup-node@v6
- uses: actions/setup-node@master
with:
node-version: 20.x
- name: Install devcontainer cli

View File

@@ -1,6 +1,6 @@
The MIT License
Copyright (c) 2026 Frigate, Inc. (Frigate™)
Copyright (c) 2025 Frigate LLC (Frigate™)
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal

View File

@@ -1,7 +1,7 @@
default_target: local
COMMIT_HASH := $(shell git log -1 --pretty=format:"%h"|tail -1)
VERSION = 0.17.0
VERSION = 0.18.0
IMAGE_REPO ?= ghcr.io/blakeblackshear/frigate
GITHUB_REF_NAME ?= $(shell git rev-parse --abbrev-ref HEAD)
BOARDS= #Initialized empty

View File

@@ -40,7 +40,7 @@ If you would like to make a donation to support development, please use [Github
This project is licensed under the **MIT License**.
- **Code:** The source code, configuration files, and documentation in this repository are available under the [MIT License](LICENSE). You are free to use, modify, and distribute the code as long as you include the original copyright notice.
- **Trademarks:** The "Frigate" name, the "Frigate NVR" brand, and the Frigate logo are **trademarks of Frigate, Inc.** and are **not** covered by the MIT License.
- **Trademarks:** The "Frigate" name, the "Frigate NVR" brand, and the Frigate logo are **trademarks of Frigate LLC** and are **not** covered by the MIT License.
Please see our [Trademark Policy](TRADEMARK.md) for details on acceptable use of our brand assets.
@@ -67,7 +67,7 @@ Please see our [Trademark Policy](TRADEMARK.md) for details on acceptable use of
### Built-in mask and zone editor
<div>
<img width="800" alt="Built-in mask and zone editor" src="https://github.com/blakeblackshear/frigate/assets/569905/d7885fc3-bfe6-452f-b7d0-d957cb3e31f5">
<img width="800" alt="Multi-camera scrubbing" src="https://github.com/blakeblackshear/frigate/assets/569905/d7885fc3-bfe6-452f-b7d0-d957cb3e31f5">
</div>
## Translations
@@ -80,4 +80,4 @@ We use [Weblate](https://hosted.weblate.org/projects/frigate-nvr/) to support la
---
**Copyright © 2026 Frigate, Inc.**
**Copyright © 2025 Frigate LLC.**

View File

@@ -4,14 +4,14 @@
# Frigate NVR™ - 一个具有实时目标检测的本地 NVR
<a href="https://hosted.weblate.org/engage/frigate-nvr/-/zh_Hans/">
<img src="https://hosted.weblate.org/widget/frigate-nvr/-/zh_Hans/svg-badge.svg" alt="翻译状态" />
</a>
[English](https://github.com/blakeblackshear/frigate) | \[简体中文\]
[![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)
<a href="https://hosted.weblate.org/engage/frigate-nvr/-/zh_Hans/">
<img src="https://hosted.weblate.org/widget/frigate-nvr/-/zh_Hans/svg-badge.svg" alt="翻译状态" />
</a>
一个完整的本地网络视频录像机NVR专为[Home Assistant](https://www.home-assistant.io)设计,具备 AI 目标/物体检测功能。使用 OpenCV 和 TensorFlow 在本地为 IP 摄像头执行实时物体检测。
强烈推荐使用 GPU 或者 AI 加速器(例如[Google Coral 加速器](https://coral.ai/products/) 或者 [Hailo](https://hailo.ai/)等)。它们的运行效率远远高于现在的顶级 CPU并且功耗也极低。
@@ -38,10 +38,9 @@
## 协议
本项目采用 **MIT 许可证**授权。
**代码部分**:本代码库中的源代码、配置文件和文档均遵循 [MIT 许可证](LICENSE)。您可以自由使用、修改和分发这些代码,但必须保留原始版权声明。
**商标部分**“Frigate”名称、“Frigate NVR”品牌以及 Frigate 的 Logo 为 **Frigate, Inc. 的商标****不在** MIT 许可证覆盖范围内。
**商标部分**“Frigate”名称、“Frigate NVR”品牌以及 Frigate 的 Logo 为 **Frigate LLC 的商标****不在** MIT 许可证覆盖范围内。
有关品牌资产的规范使用详情,请参阅我们的[《商标政策》](TRADEMARK.md)。
## 截图
@@ -87,4 +86,4 @@ Bilibilihttps://space.bilibili.com/3546894915602564
---
**Copyright © 2026 Frigate, Inc.**
**Copyright © 2025 Frigate LLC.**

View File

@@ -6,7 +6,7 @@ This document outlines the policy regarding the use of the trademarks associated
## 1. Our Trademarks
The following terms and visual assets are trademarks (the "Marks") of **Frigate, Inc.**:
The following terms and visual assets are trademarks (the "Marks") of **Frigate LLC**:
- **Frigate™**
- **Frigate NVR™**
@@ -14,7 +14,7 @@ The following terms and visual assets are trademarks (the "Marks") of **Frigate,
- **The Frigate Logo**
**Note on Common Law Rights:**
Frigate, Inc. asserts all common law rights in these Marks. The absence of a federal registration symbol (®) does not constitute a waiver of our intellectual property rights.
Frigate LLC asserts all common law rights in these Marks. The absence of a federal registration symbol (®) does not constitute a waiver of our intellectual property rights.
## 2. Interaction with the MIT License
@@ -25,7 +25,7 @@ The software in this repository is licensed under the [MIT License](LICENSE).
- The **Code** is free to use, modify, and distribute under the MIT terms.
- The **Brand (Trademarks)** is **NOT** licensed under MIT.
You may not use the Marks in any way that is not explicitly permitted by this policy or by written agreement with Frigate, Inc.
You may not use the Marks in any way that is not explicitly permitted by this policy or by written agreement with Frigate LLC.
## 3. Acceptable Use
@@ -40,7 +40,7 @@ You may use the Marks without prior written permission in the following specific
You may **NOT** use the Marks in the following ways:
- **Commercial Products:** You may not use "Frigate" in the name of a commercial product, service, or app (e.g., selling an app named _"Frigate Viewer"_ is prohibited).
- **Implying Affiliation:** You may not use the Marks in a way that suggests your project is official, sponsored by, or endorsed by Frigate, Inc.
- **Implying Affiliation:** You may not use the Marks in a way that suggests your project is official, sponsored by, or endorsed by Frigate LLC.
- **Confusing Forks:** If you fork this repository to create a derivative work, you **must** remove the Frigate logo and rename your project to avoid user confusion. You cannot distribute a modified version of the software under the name "Frigate".
- **Domain Names:** You may not register domain names containing "Frigate" that are likely to confuse users (e.g., `frigate-official-support.com`).

View File

@@ -237,18 +237,8 @@ ENV PYTHONWARNINGS="ignore:::numpy.core.getlimits"
# Set HailoRT to disable logging
ENV HAILORT_LOGGER_PATH=NONE
# TensorFlow C++ logging suppression (must be set before import)
# TF_CPP_MIN_LOG_LEVEL: 0=all, 1=INFO+, 2=WARNING+, 3=ERROR+ (we use 3 for errors only)
# TensorFlow error only
ENV TF_CPP_MIN_LOG_LEVEL=3
# Suppress verbose logging from TensorFlow C++ code
ENV TF_CPP_MIN_VLOG_LEVEL=3
# Disable oneDNN optimization messages ("optimized with oneDNN...")
ENV TF_ENABLE_ONEDNN_OPTS=0
# Suppress AutoGraph verbosity during conversion
ENV AUTOGRAPH_VERBOSITY=0
# Google Logging (GLOG) suppression for TensorFlow components
ENV GLOG_minloglevel=3
ENV GLOG_logtostderr=0
ENV PATH="/usr/local/go2rtc/bin:/usr/local/tempio/bin:/usr/local/nginx/sbin:${PATH}"

View File

@@ -48,7 +48,7 @@ onnxruntime == 1.22.*
transformers == 4.45.*
# Generative AI
google-generativeai == 0.8.*
ollama == 0.6.*
ollama == 0.5.*
openai == 1.65.*
# push notifications
py-vapid == 1.9.*

View File

@@ -55,7 +55,7 @@ function setup_homekit_config() {
if [[ ! -f "${config_path}" ]]; then
echo "[INFO] Creating empty HomeKit config file..."
echo 'homekit: {}' > "${config_path}"
echo '{}' > "${config_path}"
fi
# Convert YAML to JSON for jq processing
@@ -70,14 +70,12 @@ function setup_homekit_config() {
jq '
# Keep only the homekit section if it exists, otherwise empty object
if has("homekit") then {homekit: .homekit} else {homekit: {}} end
' "${temp_json}" > "${cleaned_json}" 2>/dev/null || {
echo '{"homekit": {}}' > "${cleaned_json}"
}
' "${temp_json}" > "${cleaned_json}" 2>/dev/null || echo '{"homekit": {}}' > "${cleaned_json}"
# Convert back to YAML and write to the config file
yq eval -P "${cleaned_json}" > "${config_path}" 2>/dev/null || {
echo "[WARNING] Failed to convert cleaned config to YAML, creating minimal config"
echo 'homekit: {}' > "${config_path}"
echo '{"homekit": {}}' > "${config_path}"
}
# Clean up temp files

View File

@@ -22,11 +22,6 @@ sys.path.remove("/opt/frigate")
yaml = YAML()
# Check if arbitrary exec sources are allowed (defaults to False for security)
ALLOW_ARBITRARY_EXEC = os.environ.get(
"GO2RTC_ALLOW_ARBITRARY_EXEC", "false"
).lower() in ("true", "1", "yes")
FRIGATE_ENV_VARS = {k: v for k, v in os.environ.items() if k.startswith("FRIGATE_")}
# read docker secret files as env vars too
if os.path.isdir("/run/secrets"):
@@ -114,26 +109,14 @@ if LIBAVFORMAT_VERSION_MAJOR < 59:
elif go2rtc_config["ffmpeg"].get("rtsp") is None:
go2rtc_config["ffmpeg"]["rtsp"] = rtsp_args
def is_restricted_source(stream_source: str) -> bool:
"""Check if a stream source is restricted (echo, expr, or exec)."""
return stream_source.strip().startswith(("echo:", "expr:", "exec:"))
for name in list(go2rtc_config.get("streams", {})):
for name in go2rtc_config.get("streams", {}):
stream = go2rtc_config["streams"][name]
if isinstance(stream, str):
try:
formatted_stream = stream.format(**FRIGATE_ENV_VARS)
if not ALLOW_ARBITRARY_EXEC and is_restricted_source(formatted_stream):
print(
f"[ERROR] Stream '{name}' uses a restricted source (echo/expr/exec) which is disabled by default for security. "
f"Set GO2RTC_ALLOW_ARBITRARY_EXEC=true to enable arbitrary exec sources."
)
del go2rtc_config["streams"][name]
continue
go2rtc_config["streams"][name] = formatted_stream
go2rtc_config["streams"][name] = go2rtc_config["streams"][name].format(
**FRIGATE_ENV_VARS
)
except KeyError as e:
print(
"[ERROR] Invalid substitution found, see https://docs.frigate.video/configuration/restream#advanced-restream-configurations for more info."
@@ -141,33 +124,15 @@ for name in list(go2rtc_config.get("streams", {})):
sys.exit(e)
elif isinstance(stream, list):
filtered_streams = []
for i, stream_item in enumerate(stream):
for i, stream in enumerate(stream):
try:
formatted_stream = stream_item.format(**FRIGATE_ENV_VARS)
if not ALLOW_ARBITRARY_EXEC and is_restricted_source(formatted_stream):
print(
f"[ERROR] Stream '{name}' item {i + 1} uses a restricted source (echo/expr/exec) which is disabled by default for security. "
f"Set GO2RTC_ALLOW_ARBITRARY_EXEC=true to enable arbitrary exec sources."
)
continue
filtered_streams.append(formatted_stream)
go2rtc_config["streams"][name][i] = stream.format(**FRIGATE_ENV_VARS)
except KeyError as e:
print(
"[ERROR] Invalid substitution found, see https://docs.frigate.video/configuration/restream#advanced-restream-configurations for more info."
)
sys.exit(e)
if filtered_streams:
go2rtc_config["streams"][name] = filtered_streams
else:
print(
f"[ERROR] Stream '{name}' was removed because all sources were restricted (echo/expr/exec). "
f"Set GO2RTC_ALLOW_ARBITRARY_EXEC=true to enable arbitrary exec sources."
)
del go2rtc_config["streams"][name]
# add birdseye restream stream if enabled
if config.get("birdseye", {}).get("restream", False):
birdseye: dict[str, Any] = config.get("birdseye")
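As a hedged sketch of how the `GO2RTC_ALLOW_ARBITRARY_EXEC` flag referenced in this script could be supplied to the container (the Compose layout is illustrative and not taken from this diff):

```yaml
services:
  frigate:
    image: ghcr.io/blakeblackshear/frigate:stable
    environment:
      # opt in to echo:/expr:/exec: go2rtc stream sources, which the script filters out by default
      GO2RTC_ALLOW_ARBITRARY_EXEC: "true"
```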

View File

@@ -18,10 +18,6 @@ proxy_set_header X-Forwarded-User $http_x_forwarded_user;
proxy_set_header X-Forwarded-Groups $http_x_forwarded_groups;
proxy_set_header X-Forwarded-Email $http_x_forwarded_email;
proxy_set_header X-Forwarded-Preferred-Username $http_x_forwarded_preferred_username;
proxy_set_header X-Auth-Request-User $http_x_auth_request_user;
proxy_set_header X-Auth-Request-Groups $http_x_auth_request_groups;
proxy_set_header X-Auth-Request-Email $http_x_auth_request_email;
proxy_set_header X-Auth-Request-Preferred-Username $http_x_auth_request_preferred_username;
proxy_set_header X-authentik-username $http_x_authentik_username;
proxy_set_header X-authentik-groups $http_x_authentik_groups;
proxy_set_header X-authentik-email $http_x_authentik_email;

View File

@@ -50,7 +50,7 @@ cameras:
### Configuring Minimum Volume
The audio detector uses volume levels in the same way that motion in a camera feed is used for object detection. This means that Frigate will not run audio detection unless the audio volume is above the configured level in order to reduce resource usage. Audio levels can vary widely between camera models, so it is important to run tests to see what volume levels are. The Debug view in the Frigate UI has an Audio tab for cameras that have the `audio` role assigned, where a graph and the current levels are displayed. The `min_volume` parameter should be set to the minimum `RMS` level required to run audio detection.
The audio detector uses volume levels in the same way that motion in a camera feed is used for object detection. This means that frigate will not run audio detection unless the audio volume is above the configured level in order to reduce resource usage. Audio levels can vary widely between camera models, so it is important to run tests to see what volume levels are. The Debug view in the Frigate UI has an Audio tab for cameras that have the `audio` role assigned, where a graph and the current levels are displayed. The `min_volume` parameter should be set to the minimum `RMS` level required to run audio detection.
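As an illustration only, a minimal camera-level sketch of that setting (the camera name and the `700` value are examples; use the RMS levels observed in the Debug view):

```yaml
cameras:
  back_yard:
    audio:
      enabled: true
      # run audio detection only when the RMS volume exceeds this level
      min_volume: 700
```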
:::tip

View File

@@ -188,10 +188,10 @@ go2rtc:
# example for connecting to a Reolink camera that supports two way talk
your_reolink_camera_twt:
- "ffmpeg:http://reolink_ip/flv?port=1935&app=bcs&stream=channel0_main.bcs&user=username&password=password#video=copy#audio=copy#audio=opus"
- "rtsp://username:password@reolink_ip/Preview_01_sub"
- "rtsp://username:password@reolink_ip/Preview_01_sub
your_reolink_camera_twt_sub:
- "ffmpeg:http://reolink_ip/flv?port=1935&app=bcs&stream=channel0_ext.bcs&user=username&password=password"
- "rtsp://username:password@reolink_ip/Preview_01_sub"
- "rtsp://username:password@reolink_ip/Preview_01_sub
# example for connecting to a Reolink NVR
your_reolink_camera_via_nvr:
- "ffmpeg:http://reolink_nvr_ip/flv?port=1935&app=bcs&stream=channel3_main.bcs&user=username&password=password" # channel numbers are 0-15
@@ -227,12 +227,6 @@ cameras:
### Unifi Protect Cameras
:::note
Unifi G5s cameras and newer need a Unifi Protect server to enable the rtsps stream; it's not possible to enable it in standalone mode.
:::
Unifi protect cameras require the rtspx stream to be used with go2rtc.
To utilize a Unifi protect camera, modify the rtsps link to begin with rtspx.
Additionally, remove the "?enableSrtp" from the end of the Unifi link.
@@ -258,10 +252,6 @@ ffmpeg:
TP-Link VIGI cameras need some adjustments to the main stream settings on the camera itself to avoid issues. The stream needs to be configured as `H264` with `Smart Coding` set to `off`. Without these settings you may have problems when trying to watch recorded footage. For example Firefox will stop playback after a few seconds and show the following error message: `The media playback was aborted due to a corruption problem or because the media used features your browser did not support.`.
### Wyze Wireless Cameras
Some community members have found better performance on Wyze cameras by using an alternative firmware known as [Thingino](https://thingino.com/).
## USB Cameras (aka Webcams)
To use a USB camera (webcam) with Frigate, the recommendation is to use go2rtc's [FFmpeg Device](https://github.com/AlexxIT/go2rtc?tab=readme-ov-file#source-ffmpeg-device) support:

View File

@@ -94,19 +94,18 @@ This list of working and non-working PTZ cameras is based on user feedback. If y
The FeatureList on the [ONVIF Conformant Products Database](https://www.onvif.org/conformant-products/) can provide a starting point to determine a camera's compatibility with Frigate's autotracking. Look to see if a camera lists `PTZRelative`, `PTZRelativePanTilt` and/or `PTZRelativeZoom`. These features are required for autotracking, but some cameras still fail to respond even if they claim support. If they are missing, autotracking will not work (though basic PTZ in the WebUI might). Avoid cameras with no database entry unless they are confirmed as working below.
| Brand or specific camera | PTZ Controls | Autotracking | Notes |
| ---------------------------- | :----------: | :----------: | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| ---------------------------- | :----------: | :----------: | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | --- |
| Amcrest | ✅ | ✅ | ⛔️ Generally, Amcrest should work, but some older models (like the common IP2M-841) don't support autotracking |
| Amcrest ASH21 | ✅ | ❌ | ONVIF service port: 80 |
| Amcrest IP4M-S2112EW-AI | ✅ | ❌ | FOV relative movement not supported. |
| Amcrest IP5M-1190EW | ✅ | ❌ | ONVIF Port: 80. FOV relative movement not supported. |
| Annke CZ504 | ✅ | ✅ | Annke support provide specific firmware ([V5.7.1 build 250227](https://github.com/pierrepinon/annke_cz504/raw/refs/heads/main/digicap_V5-7-1_build_250227.dav)) to fix issue with ONVIF "TranslationSpaceFov" |
| Axis Q-6155E | ✅ | ❌ | ONVIF service port: 80; Camera does not support MoveStatus. |
| Ctronics PTZ | ✅ | ❌ | |
| Dahua | ✅ | ✅ | Some low-end Dahuas (lite series, picoo series (commonly), among others) have been reported to not support autotracking. These models usually don't have a four digit model number with chassis prefix and options postfix (e.g. DH-P5AE-PV vs DH-SD49825GB-HNR). |
| Dahua DH-SD2A500HB | ✅ | ❌ | |
| Dahua DH-SD49825GB-HNR | ✅ | ✅ | |
| Dahua DH-P5AE-PV | ❌ | ❌ | |
| Foscam | ✅ | ❌ | In general support PTZ, but not relative move. There are no official ONVIF certifications and tests available on the ONVIF Conformant Products Database |
| Foscam | ✅ | ❌ | In general support PTZ, but not relative move. There are no official ONVIF certifications and tests available on the ONVIF Conformant Products Database | |
| Foscam R5 | ✅ | ❌ | |
| Foscam SD4 | ✅ | ❌ | |
| Hanwha XNP-6550RH | ✅ | ❌ | |

View File

@@ -3,7 +3,7 @@ id: object_classification
title: Object Classification
---
Object classification allows you to train a custom MobileNetV2 classification model to run on tracked objects (persons, cars, animals, etc.) to identify a finer category or attribute for that object. Classification results are visible in the Tracked Object Details pane in Explore, through the `frigate/tracked_object_details` MQTT topic, in Home Assistant sensors via the official Frigate integration, or through the event endpoints in the HTTP API.
Object classification allows you to train a custom MobileNetV2 classification model to run on tracked objects (persons, cars, animals, etc.) to identify a finer category or attribute for that object.
## Minimum System Requirements
@@ -11,8 +11,6 @@ Object classification models are lightweight and run very fast on CPU. Inference
Training the model does briefly use a high amount of system resources for about 13 minutes per training run. On lower-power devices, training may take longer.
A CPU with AVX instructions is required for training and inference.
## Classes
Classes are the categories your model will learn to distinguish between. Each class represents a distinct visual category that the model will predict.
@@ -33,15 +31,9 @@ For object classification:
- Example: `cat` → `Leo`, `Charlie`, `None`.
- **Attribute**:
- Added as metadata to the object, visible in the Tracked Object Details pane in Explore, `frigate/events` MQTT messages, and the HTTP API response as `<model_name>: <predicted_value>`.
- Added as metadata to the object (visible in /events): `<model_name>: <predicted_value>`.
- Ideal when multiple attributes can coexist independently.
- Example: Detecting if a `person` in a construction yard is wearing a helmet or not, and if they are wearing a yellow vest or not.
:::note
A tracked object can only have a single sub label. If you are using Triggers or Face Recognition and you configure an object classification model for `person` using the sub label type, your sub label may not be assigned correctly as it depends on which enrichment completes its analysis first. This could also occur with `car` objects that are assigned a sub label for a delivery carrier. Consider using the `attribute` type instead.
:::
- Example: Detecting if a `person` in a construction yard is wearing a helmet or not.
## Assignment Requirements
@@ -81,17 +73,13 @@ classification:
classification_type: sub_label # or: attribute
```
An optional config, `save_attempts`, can be set as a key under the model name. This defines the number of classification attempts to save in the Recent Classifications tab. For object classification models, the default is 200.
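A hedged sketch of where `save_attempts` would sit, assuming a `classification.custom.<model_name>` layout (the `custom` key, model name, and value are assumptions for illustration, not taken from this excerpt):

```yaml
classification:
  custom:
    my_model:
      # number of recent classification attempts to keep (object models default to 200)
      save_attempts: 300
```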
## Training the model
Creating and training the model is done within the Frigate UI using the `Classification` page. The process consists of two steps:
### Step 1: Name and Define
Enter a name for your model, select the object label to classify (e.g., `person`, `dog`, `car`), choose the classification type (sub label or attribute), and define your classes. Frigate will automatically include a `none` class for objects that don't fit any specific category.
For example: To classify your two cats, create a model named "Our Cats" and create two classes, "Charlie" and "Leo". A third class, "none", will be created automatically for other neighborhood cats that are not your own.
Enter a name for your model, select the object label to classify (e.g., `person`, `dog`, `car`), choose the classification type (sub label or attribute), and define your classes. Include a `none` class for objects that don't fit any specific category.
### Step 2: Assign Training Examples
@@ -99,8 +87,6 @@ The system will automatically generate example images from detected objects matc
When choosing which objects to classify, start with a small number of visually distinct classes and ensure your training samples match camera viewpoints and distances typical for those objects.
If examples for some of your classes do not appear in the grid, you can continue configuring the model without them. New images will begin to appear in the Recent Classifications view. When your missing classes are seen, classify them from this view and retrain your model.
### Improving the Model
- **Problem framing**: Keep classes visually distinct and relevant to the chosen object types.
@@ -108,23 +94,3 @@ If examples for some of your classes do not appear in the grid, you can continue
- **Preprocessing**: Ensure examples reflect object crops similar to Frigate's boxes; keep the subject centered.
- **Labels**: Keep label names short and consistent; include a `none` class if you plan to ignore uncertain predictions for sub labels.
- **Threshold**: Tune `threshold` per model to reduce false assignments. Start at `0.8` and adjust based on validation.
## Debugging Classification Models
To troubleshoot issues with object classification models, enable debug logging to see detailed information about classification attempts, scores, and consensus calculations.
Enable debug logs for classification models by adding `frigate.data_processing.real_time.custom_classification: debug` to your `logger` configuration. These logs are verbose, so only keep this enabled when necessary. Restart Frigate after this change.
```yaml
logger:
default: info
logs:
frigate.data_processing.real_time.custom_classification: debug
```
The debug logs will show:
- Classification probabilities for each attempt
- Whether scores meet the threshold requirement
- Consensus calculations and when assignments are made
- Object classification history and weighted scores

View File

@@ -3,7 +3,7 @@ id: state_classification
title: State Classification
---
State classification allows you to train a custom MobileNetV2 classification model on a fixed region of your camera frame(s) to determine a current state. The model can be configured to run on a schedule and/or when motion is detected in that region. Classification results are available through the `frigate/<camera_name>/classification/<model_name>` MQTT topic and in Home Assistant sensors via the official Frigate integration.
State classification allows you to train a custom MobileNetV2 classification model on a fixed region of your camera frame(s) to determine a current state. The model can be configured to run on a schedule and/or when motion is detected in that region.
## Minimum System Requirements
@@ -11,8 +11,6 @@ State classification models are lightweight and run very fast on CPU. Inference
Training the model does briefly use a high amount of system resources for about 13 minutes per training run. On lower-power devices, training may take longer.
A CPU with AVX instructions is required for training and inference.
## Classes
Classes are the different states an area on your camera can be in. Each class represents a distinct visual state that the model will learn to recognize.
@@ -48,8 +46,6 @@ classification:
crop: [0, 180, 220, 400]
```
An optional config, `save_attempts`, can be set as a key under the model name. This defines the number of classification attempts to save in the Recent Classifications tab. For state classification models, the default is 100.
## Training the model
Creating and training the model is done within the Frigate UI using the `Classification` page. The process consists of three steps:
@@ -74,34 +70,3 @@ Once some images are assigned, training will begin automatically.
- **Data collection**: Use the model's Recent Classifications tab to gather balanced examples across times of day and weather.
- **When to train**: Focus on cases where the model is entirely incorrect or flips between states when it should not. There's no need to train additional images when the model is already working consistently.
- **Selecting training images**: Images scoring below 100% due to new conditions (e.g., first snow of the year, seasonal changes) or variations (e.g., objects temporarily in view, insects at night) are good candidates for training, as they represent scenarios different from the default state. Training these lower-scoring images that differ from existing training data helps prevent overfitting. Avoid training large quantities of images that look very similar, especially if they already score 100% as this can lead to overfitting.
## Debugging Classification Models
To troubleshoot issues with state classification models, enable debug logging to see detailed information about classification attempts, scores, and state verification.
Enable debug logs for classification models by adding `frigate.data_processing.real_time.custom_classification: debug` to your `logger` configuration. These logs are verbose, so only keep this enabled when necessary. Restart Frigate after this change.
```yaml
logger:
default: info
logs:
frigate.data_processing.real_time.custom_classification: debug
```
The debug logs will show:
- Classification probabilities for each attempt
- Whether scores meet the threshold requirement
- State verification progress (consecutive detections needed)
- When state changes are published
### Recent Classifications
For state classification, images are only added to recent classifications under specific circumstances:
- **First detection**: The first classification attempt for a camera is always saved
- **State changes**: Images are saved when the detected state differs from the current verified state
- **Pending verification**: Images are saved when there's a pending state change being verified (requires 3 consecutive identical states)
- **Low confidence**: Images with scores below 100% are saved even if the state matches the current state (useful for training)
Images are **not** saved when the state is stable (detected state matches current state) **and** the score is 100%. This prevents unnecessary storage of redundant high-confidence classifications.

View File

@@ -48,29 +48,15 @@ Using Ollama on CPU is not recommended, high inference times make using Generati
:::
[Ollama](https://ollama.com/) allows you to self-host large language models and keep everything running locally. It is highly recommended to host this server on a machine with an Nvidia graphics card, or on an Apple silicon Mac for best performance.
[Ollama](https://ollama.com/) allows you to self-host large language models and keep everything running locally. It provides a nice API over [llama.cpp](https://github.com/ggerganov/llama.cpp). It is highly recommended to host this server on a machine with an Nvidia graphics card, or on an Apple silicon Mac for best performance.
Most of the 7b parameter 4-bit vision models will fit inside 8GB of VRAM. There is also a [Docker container](https://hub.docker.com/r/ollama/ollama) available.
Parallel requests also come with some caveats. You will need to set `OLLAMA_NUM_PARALLEL=1` and choose a `OLLAMA_MAX_QUEUE` and `OLLAMA_MAX_LOADED_MODELS` values that are appropriate for your hardware and preferences. See the [Ollama documentation](https://docs.ollama.com/faq#how-does-ollama-handle-concurrent-requests).
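As a hedged illustration of those Ollama-side variables using the Docker image mentioned above (the queue and loaded-model values are purely illustrative; tune them for your hardware and preferences):

```yaml
services:
  ollama:
    image: ollama/ollama
    environment:
      OLLAMA_NUM_PARALLEL: "1"
      OLLAMA_MAX_QUEUE: "64"          # illustrative value
      OLLAMA_MAX_LOADED_MODELS: "1"   # illustrative value
```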
### Model Types: Instruct vs Thinking
Most vision-language models are available as **instruct** models, which are fine-tuned to follow instructions and respond concisely to prompts. However, some models (such as certain Qwen-VL or minigpt variants) offer both **instruct** and **thinking** versions.
- **Instruct models** are always recommended for use with Frigate. These models generate direct, relevant, actionable descriptions that best fit Frigate's object and event summary use case.
- **Thinking models** are fine-tuned for more free-form, open-ended, and speculative outputs, which are typically not concise and may not provide the practical summaries Frigate expects. For this reason, Frigate does **not** recommend or support using thinking models.
Some models are labeled as **hybrid** (capable of both thinking and instruct tasks). In these cases, Frigate will always use instruct-style prompts and specifically disables thinking-mode behaviors to ensure concise, useful responses.
**Recommendation:**
Always select the `-instruct` or documented instruct/tagged variant of any model you use in your Frigate configuration. If in doubt, refer to your model provider's documentation or model library for guidance on the correct model variant to use.
Parallel requests also come with some caveats. You will need to set `OLLAMA_NUM_PARALLEL=1` and choose a `OLLAMA_MAX_QUEUE` and `OLLAMA_MAX_LOADED_MODELS` values that are appropriate for your hardware and preferences. See the [Ollama documentation](https://github.com/ollama/ollama/blob/main/docs/faq.md#how-does-ollama-handle-concurrent-requests).
### Supported Models
You must use a vision capable model with Frigate. Current model variants can be found [in their model library](https://ollama.com/search?c=vision). Note that Frigate will not automatically download the model you specify in your config, you must download the model to your local instance of Ollama first i.e. by running `ollama pull qwen3-vl:2b-instruct` on your Ollama server/Docker container. Note that the model specified in Frigate's config must match the downloaded model tag.
You must use a vision capable model with Frigate. Current model variants can be found [in their model library](https://ollama.com/library). At the time of writing, this includes `llava`, `llava-llama3`, `llava-phi3`, and `moondream`. Note that Frigate will not automatically download the model you specify in your config, you must download the model to your local instance of Ollama first i.e. by running `ollama pull llava:7b` on your Ollama server/Docker container. Note that the model specified in Frigate's config must match the downloaded model tag.
:::note
@@ -78,10 +64,6 @@ You should have at least 8 GB of RAM available (or VRAM if running on GPU) to ru
:::
#### Ollama Cloud models
Ollama also supports [cloud models](https://ollama.com/cloud), where your local Ollama instance handles requests from Frigate, but model inference is performed in the cloud. Set up Ollama locally, sign in with your Ollama account, and specify the cloud model name in your Frigate config. For more details, see the Ollama cloud model [docs](https://docs.ollama.com/cloud).
### Configuration
```yaml
@@ -211,7 +193,7 @@ You are also able to define custom prompts in your configuration.
genai:
provider: ollama
base_url: http://localhost:11434
model: qwen3-vl:8b-instruct
model: llava
objects:
prompt: "Analyze the {label} in these images from the {camera} security camera. Focus on the actions, behavior, and potential intent of the {label}, rather than just describing its appearance."

View File

@@ -39,10 +39,9 @@ You are also able to define custom prompts in your configuration.
genai:
provider: ollama
base_url: http://localhost:11434
model: qwen3-vl:8b-instruct
model: llava
objects:
genai:
prompt: "Analyze the {label} in these images from the {camera} security camera. Focus on the actions, behavior, and potential intent of the {label}, rather than just describing its appearance."
object_prompts:
person: "Examine the main person in these images. What are they doing and what might their actions suggest about their intent (e.g., approaching a door, leaving an area, standing still)? Do not describe the surroundings or static details."

View File

@@ -16,13 +16,12 @@ Review summaries provide structured JSON responses that are saved for each revie
```
- `title` (string): A concise, direct title that describes the purpose or overall action (e.g., "Person taking out trash", "Joe walking dog").
- `scene` (string): A narrative description of what happens across the sequence from start to finish, including setting, detected objects, and their observable actions.
- `shortSummary` (string): A brief 2-sentence summary of the scene, suitable for notifications. This is a condensed version of the scene description.
- `confidence` (float): 0-1 confidence in the analysis. Higher confidence when objects/actions are clearly visible and context is unambiguous.
- `other_concerns` (list): List of user-defined concerns that may need additional investigation.
- `potential_threat_level` (integer): 0, 1, or 2 as defined below.
```
This will show in multiple places in the UI to give additional context about each activity, and allow viewing more details when extra attention is required. Frigate's built in notifications will automatically show the title and `shortSummary` when the data is available, while the full `scene` description is available in the UI for detailed review.
This will show in multiple places in the UI to give additional context about each activity, and allow viewing more details when extra attention is required. Frigate's built in notifications will also automatically show the title and description when the data is available.
### Defining Typical Activity
@@ -31,43 +30,40 @@ Each installation and even camera can have different parameters for what is cons
<details>
<summary>Default Activity Context Prompt</summary>
```yaml
review:
genai:
activity_context_prompt: |
### Normal Activity Indicators (Level 0)
- Known/verified people in any zone at any time
- People with pets in residential areas
- Deliveries or services during daytime/evening (6 AM - 10 PM): carrying packages to doors/porches, placing items, leaving
- Services/maintenance workers with visible tools, uniforms, or service vehicles during daytime
- Activity confined to public areas only (sidewalks, streets) without entering property at any time
```
### Normal Activity Indicators (Level 0)
- Known/verified people in any zone at any time
- People with pets in residential areas
- Deliveries or services during daytime/evening (6 AM - 10 PM): carrying packages to doors/porches, placing items, leaving
- Services/maintenance workers with visible tools, uniforms, or service vehicles during daytime
- Activity confined to public areas only (sidewalks, streets) without entering property at any time
### Suspicious Activity Indicators (Level 1)
- **Testing or attempting to open doors/windows/handles on vehicles or buildings** — ALWAYS Level 1 regardless of time or duration
- **Unidentified person in private areas (driveways, near vehicles/buildings) during late night/early morning (11 PM - 5 AM)** — ALWAYS Level 1 regardless of activity or duration
- Taking items that don't belong to them (packages, objects from porches/driveways)
- Climbing or jumping fences/barriers to access property
- Attempting to conceal actions or items from view
- Prolonged loitering: remaining in same area without visible purpose throughout most of the sequence
### Suspicious Activity Indicators (Level 1)
- **Testing or attempting to open doors/windows/handles on vehicles or buildings** — ALWAYS Level 1 regardless of time or duration
- **Unidentified person in private areas (driveways, near vehicles/buildings) during late night/early morning (11 PM - 5 AM)** — ALWAYS Level 1 regardless of activity or duration
- Taking items that don't belong to them (packages, objects from porches/driveways)
- Climbing or jumping fences/barriers to access property
- Attempting to conceal actions or items from view
- Prolonged loitering: remaining in same area without visible purpose throughout most of the sequence
### Critical Threat Indicators (Level 2)
- Holding break-in tools (crowbars, pry bars, bolt cutters)
- Weapons visible (guns, knives, bats used aggressively)
- Forced entry in progress
- Physical aggression or violence
- Active property damage or theft in progress
### Critical Threat Indicators (Level 2)
- Holding break-in tools (crowbars, pry bars, bolt cutters)
- Weapons visible (guns, knives, bats used aggressively)
- Forced entry in progress
- Physical aggression or violence
- Active property damage or theft in progress
### Assessment Guidance
Evaluate in this order:
### Assessment Guidance
Evaluate in this order:
1. **If person is verified/known** → Level 0 regardless of time or activity
2. **If person is unidentified:**
- Check time: If late night/early morning (11 PM - 5 AM) AND in private areas (driveways, near vehicles/buildings) → Level 1
- Check actions: If testing doors/handles, taking items, climbing → Level 1
- Otherwise, if daytime/evening (6 AM - 10 PM) with clear legitimate purpose (delivery, service worker) → Level 0
3. **Escalate to Level 2 if:** Weapons, break-in tools, forced entry in progress, violence, or active property damage visible (escalates from Level 0 or 1)
1. **If person is verified/known** → Level 0 regardless of time or activity
2. **If person is unidentified:**
- Check time: If late night/early morning (11 PM - 5 AM) AND in private areas (driveways, near vehicles/buildings) → Level 1
- Check actions: If testing doors/handles, taking items, climbing → Level 1
- Otherwise, if daytime/evening (6 AM - 10 PM) with clear legitimate purpose (delivery, service worker) → Level 0
3. **Escalate to Level 2 if:** Weapons, break-in tools, forced entry in progress, violence, or active property damage visible (escalates from Level 0 or 1)
The mere presence of an unidentified person in private areas during late night hours is inherently suspicious and warrants human review, regardless of what activity they appear to be doing or how brief the sequence is.
The mere presence of an unidentified person in private areas during late night hours is inherently suspicious and warrants human review, regardless of what activity they appear to be doing or how brief the sequence is.
```
</details>
@@ -112,17 +108,6 @@ review:
- animals in the garden
```
### Preferred Language
By default, review summaries are generated in English. You can configure Frigate to generate summaries in your preferred language by setting the `preferred_language` option:
```yaml
review:
genai:
enabled: true
preferred_language: Spanish
```
## Review Reports
Along with individual review item summaries, Generative AI provides the ability to request a report of a given time period. For example, you can get a daily report while on a vacation of any suspicious activity or other concerns that may require review.

View File

@@ -13,7 +13,7 @@ Object detection and enrichments (like Semantic Search, Face Recognition, and Li
- **AMD**
- ROCm support in the `-rocm` Frigate image is automatically detected for enrichments, but only some enrichment models are available due to ROCm's focus on LLMs and limited stability with certain neural network models. Frigate disables models that perform poorly or are unstable to ensure reliable operation, so only compatible enrichments may be active.
- ROCm will automatically be detected and used for enrichments in the `-rocm` Frigate image.
- **Intel**

View File

@@ -3,65 +3,78 @@ id: hardware_acceleration_video
title: Video Decoding
---
import CommunityBadge from '@site/src/components/CommunityBadge';
# Video Decoding
It is highly recommended to use an integrated or discrete GPU for hardware acceleration video decoding in Frigate.
It is highly recommended to use a GPU for hardware acceleration video decoding in Frigate. Some types of hardware acceleration are detected and used automatically, but you may need to update your configuration to enable hardware accelerated decoding in ffmpeg.
Some types of hardware acceleration are detected and used automatically, but you may need to update your configuration to enable hardware accelerated decoding in ffmpeg. To verify that hardware acceleration is working:
- Check the logs: A message will either say that hardware acceleration was automatically detected, or there will be a warning that no hardware acceleration was automatically detected
- If hardware acceleration is specified in the config, verification can be done by ensuring the logs are free from errors. There is no CPU fallback for hardware acceleration.
Depending on your system, these parameters may not be compatible. More information on hardware accelerated decoding for ffmpeg can be found here: https://trac.ffmpeg.org/wiki/HWAccelIntro
:::info
Frigate supports presets for optimal hardware accelerated video decoding:
## Raspberry Pi 3/4
**AMD**
Ensure you increase the allocated RAM for your GPU to at least 128 (`raspi-config` > Performance Options > GPU Memory).
If you are using the HA Add-on, you may need to use the full access variant and turn off _Protection mode_ for hardware acceleration.
- [AMD](#amd-based-cpus): Frigate can utilize modern AMD integrated GPUs and AMD discrete GPUs to accelerate video decoding.
```yaml
# if you want to decode a h264 stream
ffmpeg:
hwaccel_args: preset-rpi-64-h264
**Intel**
# if you want to decode a h265 (hevc) stream
ffmpeg:
hwaccel_args: preset-rpi-64-h265
```
- [Intel](#intel-based-cpus): Frigate can utilize most Intel integrated GPUs and Arc GPUs to accelerate video decoding.
:::note
**Nvidia GPU**
If running Frigate through Docker, you either need to run in privileged mode or
map the `/dev/video*` devices to Frigate. With Docker Compose add:
- [Nvidia GPU](#nvidia-gpus): Frigate can utilize most modern Nvidia GPUs to accelerate video decoding.
```yaml
services:
frigate:
...
devices:
- /dev/video11:/dev/video11
```
**Raspberry Pi 3/4**
Or with `docker run`:
- [Raspberry Pi](#raspberry-pi-34): Frigate can utilize the media engine in the Raspberry Pi 3 and 4 to slightly accelerate video decoding.
```bash
docker run -d \
--name frigate \
...
--device /dev/video11 \
ghcr.io/blakeblackshear/frigate:stable
```
**Nvidia Jetson** <CommunityBadge />
`/dev/video11` is the correct device (on Raspberry Pi 4B). You can check
by running the following and looking for `H264`:
- [Jetson](#nvidia-jetson): Frigate can utilize the media engine in Jetson hardware to accelerate video decoding.
```bash
for d in /dev/video*; do
echo -e "---\n$d"
v4l2-ctl --list-formats-ext -d $d
done
```
**Rockchip** <CommunityBadge />
- [RKNN](#rockchip-platform): Frigate can utilize the media engine in RockChip SOCs to accelerate video decoding.
**Other Hardware**
Depending on your system, these presets may not be compatible, and you may need to use manual hwaccel args to take advantage of your hardware. More information on hardware accelerated decoding for ffmpeg can be found here: https://trac.ffmpeg.org/wiki/HWAccelIntro
Or map in all the `/dev/video*` devices.
:::
## Intel-based CPUs
Frigate can utilize most Intel integrated GPUs and Arc GPUs to accelerate video decoding.
:::info
**Recommended hwaccel Preset**
| CPU Generation | Intel Driver | Recommended Preset | Notes |
| -------------- | ------------ | ------------------- | ------------------------------------------- |
| gen1 - gen5 | i965 | preset-vaapi | qsv is not supported, may not support H.265 |
| gen6 - gen7 | iHD | preset-vaapi | qsv is not supported |
| gen8 - gen12 | iHD | preset-vaapi | preset-intel-qsv-\* can also be used |
| gen13+ | iHD / Xe | preset-intel-qsv-\* | |
| Intel Arc GPU | iHD / Xe | preset-intel-qsv-\* | |
| CPU Generation | Intel Driver | Recommended Preset | Notes |
| -------------- | ------------ | ------------------- | ------------------------------------ |
| gen1 - gen5 | i965 | preset-vaapi | qsv is not supported |
| gen6 - gen7 | iHD | preset-vaapi | qsv is not supported |
| gen8 - gen12 | iHD | preset-vaapi | preset-intel-qsv-\* can also be used |
| gen13+ | iHD / Xe | preset-intel-qsv-\* | |
| Intel Arc GPU | iHD / Xe | preset-intel-qsv-\* | |
:::
@@ -182,17 +195,15 @@ telemetry:
If you are passing in a device path, make sure you've passed the device through to the container.
## AMD-based CPUs
## AMD/ATI GPUs (Radeon HD 2000 and newer GPUs) via libva-mesa-driver
Frigate can utilize modern AMD integrated GPUs and AMD GPUs to accelerate video decoding using VAAPI.
VAAPI supports automatic profile selection so it will work automatically with both H.264 and H.265 streams.
### Configuring Radeon Driver
:::note
You need to change the driver to `radeonsi` by adding the following environment variable `LIBVA_DRIVER_NAME=radeonsi` to your docker-compose file or [in the `config.yml` for HA Add-on users](advanced.md#environment_vars).
### Via VAAPI
VAAPI supports automatic profile selection so it will work automatically with both H.264 and H.265 streams.
:::
```yaml
ffmpeg:
@@ -253,7 +264,7 @@ processes:
:::note
`nvidia-smi` will not show `ffmpeg` processes when run inside the container [due to docker limitations](https://github.com/NVIDIA/nvidia-docker/issues/179#issuecomment-645579458).
`nvidia-smi` may not show `ffmpeg` processes when run inside the container [due to docker limitations](https://github.com/NVIDIA/nvidia-docker/issues/179#issuecomment-645579458).
:::
@@ -289,63 +300,12 @@ If you do not see these processes, check the `docker logs` for the container and
These instructions were originally based on the [Jellyfin documentation](https://jellyfin.org/docs/general/administration/hardware-acceleration.html#nvidia-hardware-acceleration-on-docker-linux).
## Raspberry Pi 3/4
Ensure you increase the allocated RAM for your GPU to at least 128 (`raspi-config` > Performance Options > GPU Memory).
If you are using the HA Add-on, you may need to use the full access variant and turn off _Protection mode_ for hardware acceleration.
```yaml
# if you want to decode a h264 stream
ffmpeg:
hwaccel_args: preset-rpi-64-h264
# if you want to decode a h265 (hevc) stream
ffmpeg:
hwaccel_args: preset-rpi-64-h265
```
:::note
If running Frigate through Docker, you either need to run in privileged mode or
map the `/dev/video*` devices to Frigate. With Docker Compose add:
```yaml
services:
frigate:
...
devices:
- /dev/video11:/dev/video11
```
Or with `docker run`:
```bash
docker run -d \
--name frigate \
...
--device /dev/video11 \
ghcr.io/blakeblackshear/frigate:stable
```
`/dev/video11` is the correct device (on Raspberry Pi 4B). You can check
by running the following and looking for `H264`:
```bash
for d in /dev/video*; do
echo -e "---\n$d"
v4l2-ctl --list-formats-ext -d $d
done
```
Or map in all the `/dev/video*` devices.
:::
# Community Supported
## NVIDIA Jetson
## NVIDIA Jetson (Orin AGX, Orin NX, Orin Nano\*, Xavier AGX, Xavier NX, TX2, TX1, Nano)
A separate set of docker images is available for Jetson devices. They come with an `ffmpeg` build with codecs that use the Jetson's dedicated media engine. If your Jetson host is running Jetpack 6.0+ use the `stable-tensorrt-jp6` tagged image. Note that the Orin Nano has no video encoder, so frigate will use software encoding on this platform, but the image will still allow hardware decoding and tensorrt object detection.
A separate set of docker images is available that is based on Jetpack/L4T. They come with an `ffmpeg` build
with codecs that use the Jetson's dedicated media engine. If your Jetson host is running Jetpack 6.0+ use the `stable-tensorrt-jp6` tagged image. Note that the Orin Nano has no video encoder, so frigate will use software encoding on this platform, but the image will still allow hardware decoding and tensorrt object detection.
You will need to use the image with the nvidia container runtime:
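A minimal Compose sketch under those assumptions (the image tag comes from the text above; the `runtime` key selects the NVIDIA container runtime, and the rest of the service definition is elided):

```yaml
services:
  frigate:
    image: ghcr.io/blakeblackshear/frigate:stable-tensorrt-jp6
    runtime: nvidia
```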

View File

@@ -15,7 +15,7 @@ The jsmpeg live view will use more browser and client GPU resources. Using go2rt
| ------ | ------------------------------------- | ---------- | ---------------------------- | --------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| jsmpeg | same as `detect -> fps`, capped at 10 | 720p | no | no | Resolution is configurable, but go2rtc is recommended if you want higher resolutions and better frame rates. jsmpeg is Frigate's default without go2rtc configured. |
| mse | native | native | yes (depends on audio codec) | yes | iPhone requires iOS 17.1+, Firefox is h.264 only. This is Frigate's default when go2rtc is configured. |
| webrtc | native | native | yes (depends on audio codec) | yes | Requires extra configuration. Frigate attempts to use WebRTC when MSE fails or when using a camera's two-way talk feature. |
| webrtc | native | native | yes (depends on audio codec) | yes | Requires extra configuration, doesn't support h.265. Frigate attempts to use WebRTC when MSE fails or when using a camera's two-way talk feature. |
### Camera Settings Recommendations
@@ -127,8 +127,7 @@ WebRTC works by creating a TCP or UDP connection on port `8555`. However, it req
```
- For access through Tailscale, the Frigate system's Tailscale IP must be added as a WebRTC candidate. Tailscale IPs all start with `100.`, and are reserved within the `100.64.0.0/10` CIDR block.
- Note that some browsers may not support H.265 (HEVC). You can check your browser's current version for H.265 compatibility [here](https://github.com/AlexxIT/go2rtc?tab=readme-ov-file#codecs-madness).
- Note that WebRTC does not support H.265.
:::tip

View File

@@ -146,18 +146,18 @@ detectors:
### EdgeTPU Supported Models
| Model | Notes |
| ----------------------- | ------------------------------------------- |
| [Mobiledet](#mobiledet) | Default model |
| [YOLOv9](#yolov9) | More accurate but slower than default model |
| Model | Notes |
| ------------------------------------- | ------------------------------------------- |
| [MobileNet v2](#ssdlite-mobilenet-v2) | Default model |
| [YOLOv9](#yolo-v9) | More accurate but slower than default model |
#### Mobiledet
#### SSDLite MobileNet v2
A TensorFlow Lite model is provided in the container at `/edgetpu_model.tflite` and is used by this detector type by default. To provide your own model, bind mount the file into the container and provide the path with `model.path`.
#### YOLOv9
#### YOLO v9
YOLOv9 models that are compiled for TensorFlow Lite and properly quantized are supported, but not included by default. [Download the model](https://github.com/dbro/frigate-detector-edgetpu-yolo9/releases/download/v1.0/yolov9-s-relu6-best_320_int8_edgetpu.tflite), bind mount the file into the container, and provide the path with `model.path`. Note that the linked model requires a 17-label [labelmap file](https://raw.githubusercontent.com/dbro/frigate-detector-edgetpu-yolo9/refs/heads/main/labels-coco17.txt) that includes only 17 COCO classes.
[YOLOv9](https://github.com/dbro/frigate-detector-edgetpu-yolo9/releases/download/v1.0/yolov9-s-relu6-best_320_int8_edgetpu.tflite) models that are compiled for Tensorflow Lite and properly quantized are supported, but not included by default. To provide your own model, bind mount the file into the container and provide the path with `model.path`. Note that the model may require a custom label file (eg. [use this 17 label file](https://raw.githubusercontent.com/dbro/frigate-detector-edgetpu-yolo9/refs/heads/main/labels-coco17.txt) for the model linked above.)
<details>
<summary>YOLOv9 Setup & Config</summary>
@@ -178,7 +178,7 @@ model:
labelmap_path: /config/labels-coco17.txt
```
Note that due to hardware limitations of the Coral, the labelmap is a subset of the COCO labels and includes only 17 object classes.
Note that the labelmap uses a subset of the complete COCO label set that has only 17 objects.
</details>
@@ -477,7 +477,7 @@ After placing the downloaded onnx model in your config/model_cache folder, you c
detectors:
ov:
type: openvino
device: CPU
device: GPU
model:
model_type: dfine
@@ -569,10 +569,10 @@ When using Docker Compose:
```yaml
services:
frigate:
...
devices:
- /dev/dri
- /dev/kfd
---
devices:
- /dev/dri
- /dev/kfd
```
For reference on recommended settings see [running ROCm/pytorch in Docker](https://rocm.docs.amd.com/projects/install-on-linux/en/develop/how-to/3rd-party/pytorch-install.html#using-docker-with-pytorch-pre-installed).
@@ -600,9 +600,9 @@ When using Docker Compose:
```yaml
services:
frigate:
...
environment:
HSA_OVERRIDE_GFX_VERSION: "10.0.0"
environment:
HSA_OVERRIDE_GFX_VERSION: "10.0.0"
```
Figuring out which version you need can be complicated, because the AMD brand name doesn't tell you the chipset or driver.
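One way to identify the chipset (a rough sketch that assumes the ROCm userspace tools are installed on the host) is to query `rocminfo` and map the reported `gfx` target to an override version, for example `gfx1030` maps to `10.3.0`:
```bash
# list the gfx target(s) reported by the ROCm runtime
rocminfo | grep -i gfx
```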
@@ -1508,17 +1508,17 @@ COPY --from=build /dfine/output/dfine_${MODEL_SIZE}_obj2coco.onnx /dfine-${MODEL
EOF
```
### Downloading RF-DETR Model
### Download RF-DETR Model
RF-DETR can be exported as ONNX by running the command below. You can copy and paste the whole thing into your terminal and execute it, changing `MODEL_SIZE=Nano` in the first line to `Nano`, `Small`, or `Medium` as desired.
```sh
docker build . --build-arg MODEL_SIZE=Nano --rm --output . -f- <<'EOF'
docker build . --build-arg MODEL_SIZE=Nano --output . -f- <<'EOF'
FROM python:3.11 AS build
RUN apt-get update && apt-get install --no-install-recommends -y libgl1 && rm -rf /var/lib/apt/lists/*
COPY --from=ghcr.io/astral-sh/uv:0.8.0 /uv /bin/
WORKDIR /rfdetr
RUN uv pip install --system rfdetr[onnxexport] torch==2.8.0 onnx==1.19.1 onnxscript
RUN uv pip install --system rfdetr[onnxexport] torch==2.8.0 onnxscript
ARG MODEL_SIZE
RUN python3 -c "from rfdetr import RFDETR${MODEL_SIZE}; x = RFDETR${MODEL_SIZE}(resolution=320); x.export(simplify=True)"
FROM scratch


@@ -11,7 +11,7 @@ This adds features including the ability to deep link directly into the app.
In order to install Frigate as a PWA, the following requirements must be met:
- Frigate must be accessed via a secure context (localhost, secure https, VPN, etc.)
- Frigate must be accessed via a secure context (localhost, secure https, etc.)
- On Android, Firefox, Chrome, Edge, Opera, and Samsung Internet Browser all support installing PWAs.
- On iOS 16.4 and later, PWAs can be installed from the Share menu in Safari, Chrome, Edge, Firefox, and Orion.
@@ -22,7 +22,3 @@ Installation varies slightly based on the device that is being used:
- Desktop: Use the install button typically found at the right edge of the address bar
- Android: Use the `Install as App` button in the more options menu for Chrome, and the `Add app to Home screen` button for Firefox
- iOS: Use the `Add to Homescreen` button in the share menu
## Usage
Once set up, the Frigate app can be used wherever it has access to Frigate. This means it can be set up as local-only, VPN-only, or fully accessible, depending on your needs.


@@ -139,7 +139,13 @@ record:
:::tip
When using `hwaccel_args` globally, hardware encoding is used for time lapse generation. The encoder determines its own behavior so the resulting file size may be undesirably large.
When using `hwaccel_args`, hardware encoding is used for timelapse generation. This setting can be overridden for a specific camera (e.g., when camera resolution exceeds hardware encoder limits); set `cameras.<camera>.record.export.hwaccel_args` with the appropriate settings. Using an unrecognized value or empty string will fall back to software encoding (libx264).
:::
:::tip
The encoder determines its own behavior so the resulting file size may be undesirably large.
To reduce the output file size, the ffmpeg parameter `-qp n` can be used (where `n` is the value of the quantisation parameter). The value can be adjusted to get an acceptable tradeoff between quality and file size for the given scenario.
:::
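As an illustrative sketch (the camera name and `-qp` value are hypothetical), a per-camera override combined with a `-qp` setting for smaller files could look like this:
```yaml
record:
  export:
    # append -qp to trade a little quality for a smaller timelapse file
    timelapse_args: "-vf setpts=0.04*PTS -r 30 -qp 28"

cameras:
  driveway_cam:
    record:
      export:
        # an empty string falls back to software encoding (libx264) for this camera only
        hwaccel_args: ""
```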
@@ -148,19 +154,16 @@ To reduce the output file size the ffmpeg parameter `-qp n` can be utilized (whe
Apple devices running the Safari browser may fail to play back h.265 recordings. The [apple compatibility option](../configuration/camera_specific.md#h265-cameras-via-safari) should be used to ensure seamless playback on Apple devices.
## Syncing Recordings With Disk
## Syncing Media Files With Disk
In some cases, recording files may be deleted but Frigate will not know this has happened. Recordings sync can be enabled, which tells Frigate to check the file system and delete any db entries for files which don't exist.
Media files (event snapshots, event thumbnails, review thumbnails, previews, exports, and recordings) can become orphaned when database entries are deleted but the corresponding files remain on disk.
```yaml
record:
sync_recordings: True
```
Normal operation may leave small numbers of orphaned files until Frigate's scheduled cleanup, but crashes, configuration changes, or upgrades may cause more orphaned files that Frigate does not clean up. This feature checks the file system for media files and removes any that are not referenced in the database.
This feature is meant to fix discrepancies between the database and the files on disk, not to completely delete entries in the database. If you delete all of your media, don't use `sync_recordings`; just stop Frigate, delete the `frigate.db` database, and restart.
The Maintenance pane in the Frigate UI or an API endpoint `POST /api/media/sync` can be used to trigger a media sync. When using the API, a job ID is returned and the operation continues on the server. Status can be checked with the `/api/media/sync/status/{job_id}` endpoint.
:::warning
The sync operation uses considerable CPU resources and in most cases is not needed; only enable it when necessary.
This operation uses considerable CPU resources and includes a safety threshold that aborts if more than 50% of files would be deleted. Only run it when necessary. If you set `force: true`, the safety threshold is bypassed; do not use `force` unless you are certain the deletions are intended.
:::
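A rough example of driving the API (host, port, and authentication handling are illustrative; the endpoints require an admin user):
```bash
# queue a dry-run sync of recordings only; the response includes a job id
curl -X POST https://frigate_ip:8971/api/media/sync \
  -H "Content-Type: application/json" \
  -d '{"dry_run": true, "media_types": ["recordings"]}'

# poll the job status using the returned id
curl https://frigate_ip:8971/api/media/sync/status/<job_id>
```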


@@ -510,8 +510,6 @@ record:
# Optional: Number of minutes to wait between cleanup runs (default: shown below)
# This can be used to reduce the frequency of deleting recording segments from disk if you want to minimize i/o
expire_interval: 60
# Optional: Two-way sync recordings database with disk on startup and once a day (default: shown below).
sync_recordings: False
# Optional: Continuous retention settings
continuous:
# Optional: Number of days to retain recordings regardless of tracked objects or motion (default: shown below)
@@ -534,6 +532,8 @@ record:
# The -r (framerate) dictates how smooth the output video is.
# So the args would be -vf setpts=0.02*PTS -r 30 in that case.
timelapse_args: "-vf setpts=0.04*PTS -r 30"
# Optional: Global hardware acceleration settings for timelapse exports. (default: inherit)
hwaccel_args: auto
# Optional: Recording Preview Settings
preview:
# Optional: Quality of recording preview (default: shown below).
@@ -835,6 +835,11 @@ cameras:
# Optional: camera specific output args (default: inherit)
# output_args:
# Optional: camera specific hwaccel args for timelapse export (default: inherit)
# record:
# export:
# hwaccel_args:
# Optional: timeout for highest scoring image before allowing it
# to be replaced by a newer image. (default: shown below)
best_image_timeout: 60


@@ -185,35 +185,10 @@ In this configuration:
- `front_door` stream is used by Frigate for viewing, recording, and detection. The `#backchannel=0` parameter prevents go2rtc from establishing the audio output backchannel, so it won't block two-way talk access.
- `front_door_twoway` stream is used for two-way talk functionality. This stream can be used by Frigate's WebRTC viewer when two-way talk is enabled, or by other applications (like Home Assistant Advanced Camera Card) that need access to the camera's audio output channel.
## Security: Restricted Stream Sources
For security reasons, the `echo:`, `expr:`, and `exec:` stream sources are disabled by default in go2rtc. These sources allow arbitrary command execution and can pose security risks if misconfigured.
If you attempt to use these sources in your configuration, the streams will be removed and an error message will be printed in the logs.
To enable these sources, you must set the environment variable `GO2RTC_ALLOW_ARBITRARY_EXEC=true`. This can be done in your Docker Compose file or container environment:
```yaml
environment:
- GO2RTC_ALLOW_ARBITRARY_EXEC=true
```
:::warning
Enabling arbitrary exec sources allows execution of arbitrary commands through go2rtc stream configurations. Only enable this if you understand the security implications and trust all sources of your configuration.
:::
## Advanced Restream Configurations
The [exec](https://github.com/AlexxIT/go2rtc/tree/v1.9.10#source-exec) source in go2rtc can be used for custom ffmpeg commands. An example is below:
:::warning
The `exec:`, `echo:`, and `expr:` sources are disabled by default for security. You must set `GO2RTC_ALLOW_ARBITRARY_EXEC=true` to use them. See [Security: Restricted Stream Sources](#security-restricted-stream-sources) for more information.
:::
NOTE: The output will need to be passed with two curly braces `{{output}}`
```yaml


@@ -20,7 +20,7 @@ Here are some of the cameras I recommend:
- <a href="https://amzn.to/4fwoNWA" target="_blank" rel="nofollow noopener sponsored">Loryta(Dahua) IPC-T549M-ALED-S3</a> (affiliate link)
- <a href="https://amzn.to/3YXpcMw" target="_blank" rel="nofollow noopener sponsored">Loryta(Dahua) IPC-T54IR-AS</a> (affiliate link)
- <a href="https://amzn.to/3AvBHoY" target="_blank" rel="nofollow noopener sponsored">Amcrest IP5M-T1179EW-AI-V3</a> (affiliate link)
- <a href="https://www.bhphotovideo.com/c/product/1705511-REG/hikvision_colorvu_ds_2cd2387g2p_lsu_sl_8mp_network.html" target="_blank" rel="nofollow noopener">HIKVISION DS-2CD2387G2P-LSU/SL ColorVu 8MP Panoramic Turret IP Camera</a> (affiliate link)
- <a href="https://amzn.to/4ltOpaC" target="_blank" rel="nofollow noopener sponsored">HIKVISION DS-2CD2387G2P-LSU/SL ColorVu 8MP Panoramic Turret IP Camera</a> (affiliate link)
I may earn a small commission for my endorsement, recommendation, testimonial, or link to any products or services from this website.
@@ -38,11 +38,9 @@ If the EQ13 is out of stock, the link below may take you to a suggested alternat
:::
| Name | Capabilities | Notes |
| ------------------------------------------------------------------------------------------------------------- | -------------------------------------------------------------------------- | --------------------------------------------------- |
| Beelink EQ13 (<a href="https://amzn.to/4jn2qVr" target="_blank" rel="nofollow noopener sponsored">Amazon</a>) | Can run object detection on several 1080p cameras with low-medium activity | Dual gigabit NICs for easy isolated camera network. |
| Intel i3-1220P ([Amazon](https://www.amazon.com/Beelink-i3-1220P-Computer-Display-Gigabit/dp/B0DDCKT9YP)) | Can handle a large number of 1080p cameras with high activity | |
| Intel 125H ([Amazon](https://www.amazon.com/MINISFORUM-Pro-125H-Barebone-Computer-HDMI2-1/dp/B0FH21FSZM)) | Can handle a significant number of 1080p cameras with high activity | Includes NPU for more efficient detection in 0.17+ |
| Name | Coral Inference Speed | Coral Compatibility | Notes |
| ------------------------------------------------------------------------------------------------------------- | --------------------- | ------------------- | ----------------------------------------------------------------------------------------- |
| Beelink EQ13 (<a href="https://amzn.to/4jn2qVr" target="_blank" rel="nofollow noopener sponsored">Amazon</a>) | 5-10ms | USB | Dual gigabit NICs for easy isolated camera network. Easily handles several 1080p cameras. |
## Detectors
@@ -127,16 +125,10 @@ In real-world deployments, even with multiple cameras running concurrently, Frig
### Google Coral TPU
:::warning
The Coral is no longer recommended for new Frigate installations, except in deployments with particularly low power requirements or hardware incapable of utilizing alternative AI accelerators for object detection. Instead, we suggest using one of the numerous other supported object detectors. Frigate will continue to provide support for the Coral TPU for as long as practicably possible, given that it's still one of the most power-efficient devices for executing object detection models.
:::
Frigate supports both the USB and M.2 versions of the Google Coral.
- The USB version is compatible with the widest variety of hardware and does not require a driver on the host machine. However, it does lack the automatic throttling features of the other versions.
- The PCIe and M.2 versions require installation of a driver on the host. https://github.com/jnicolson/gasket-builder should be used.
- The PCIe and M.2 versions require installation of a driver on the host. Follow the instructions for your version from https://coral.ai
A single Coral can handle many cameras using the default model and will be sufficient for the majority of users. You can calculate the maximum performance of your Coral based on the inference speed reported by Frigate. With an inference speed of 10 ms, your Coral will top out at `1000/10=100`, or 100 frames per second. If your detection fps is regularly getting close to that, you should first consider tuning motion masks. If those are already properly configured, a second Coral may be needed.


@@ -94,10 +94,6 @@ $ python -c 'print("{:.2f}MB".format(((1280 * 720 * 1.5 * 20 + 270480) / 1048576
The shm size cannot be set per container for Home Assistant add-ons. However, this is probably not required since by default Home Assistant Supervisor allocates `/dev/shm` with half the size of your total memory. If your machine has 8GB of memory, chances are that Frigate will have access to up to 4GB without any additional configuration.
## Extra Steps for Specific Hardware
The following sections contain additional setup steps that are only required if you are using specific hardware. If you are not using any of these hardware types, you can skip to the [Docker](#docker) installation section.
### Raspberry Pi 3/4
By default, the Raspberry Pi limits the amount of memory available to the GPU. In order to use ffmpeg hardware acceleration, you must increase the available memory by setting `gpu_mem` to the maximum recommended value in `config.txt` as described in the [official docs](https://www.raspberrypi.org/documentation/computers/config_txt.html#memory-options).
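For example (the value is illustrative; use the maximum recommended value from the linked docs for your board and memory size), in `config.txt`:
```
gpu_mem=256
```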
@@ -110,107 +106,14 @@ The Hailo-8 and Hailo-8L AI accelerators are available in both M.2 and HAT form
#### Installation
:::warning
For Raspberry Pi 5 users with the AI Kit, installation is straightforward. Simply follow this [guide](https://www.raspberrypi.com/documentation/accessories/ai-kit.html#ai-kit-installation) to install the driver and software.
The Raspberry Pi kernel includes an older version of the Hailo driver that is incompatible with Frigate. You **must** follow the installation steps below to install the correct driver version, and you **must** disable the built-in kernel driver as described in step 1.
For other installations, follow these steps for installation:
:::
1. **Disable the built-in Hailo driver (Raspberry Pi only)**:
:::note
If you are **not** using a Raspberry Pi, skip this step and proceed directly to step 2.
:::
If you are using a Raspberry Pi, you need to blacklist the built-in kernel Hailo driver to prevent conflicts. First, check if the driver is currently loaded:
```bash
lsmod | grep hailo
```
If it shows `hailo_pci`, unload it:
```bash
sudo rmmod hailo_pci
```
Now blacklist the driver to prevent it from loading on boot:
```bash
echo "blacklist hailo_pci" | sudo tee /etc/modprobe.d/blacklist-hailo_pci.conf
```
Update initramfs to ensure the blacklist takes effect:
```bash
sudo update-initramfs -u
```
Reboot your Raspberry Pi:
```bash
sudo reboot
```
After rebooting, verify the built-in driver is not loaded:
```bash
lsmod | grep hailo
```
This command should return no results. If it still shows `hailo_pci`, the blacklist did not take effect properly and you may need to check for other Hailo packages installed via apt that are loading the driver.
2. **Run the installation script**:
Download the installation script:
```bash
wget https://raw.githubusercontent.com/blakeblackshear/frigate/dev/docker/hailo8l/user_installation.sh
```
Make it executable:
```bash
sudo chmod +x user_installation.sh
```
Run the script:
```bash
./user_installation.sh
```
The script will:
- Install necessary build dependencies
- Clone and build the Hailo driver from the official repository
- Install the driver
- Download and install the required firmware
- Set up udev rules
3. **Reboot your system**:
After the script completes successfully, reboot to load the firmware:
```bash
sudo reboot
```
4. **Verify the installation**:
After rebooting, verify that the Hailo device is available:
```bash
ls -l /dev/hailo0
```
You should see the device listed. You can also verify the driver is loaded:
```bash
lsmod | grep hailo_pci
```
1. Install the driver from the [Hailo GitHub repository](https://github.com/hailo-ai/hailort-drivers). A convenient script for Linux is available to clone the repository, build the driver, and install it.
2. Copy or download [this script](https://github.com/blakeblackshear/frigate/blob/dev/docker/hailo8l/user_installation.sh).
3. Ensure it has execution permissions with `sudo chmod +x user_installation.sh`
4. Run the script with `./user_installation.sh`
#### Setup
@@ -399,7 +302,7 @@ services:
shm_size: "512mb" # update for your cameras based on calculation above
devices:
- /dev/bus/usb:/dev/bus/usb # Passes the USB Coral, needs to be modified for other versions
- /dev/apex_0:/dev/apex_0 # Passes a PCIe Coral, follow driver instructions here https://github.com/jnicolson/gasket-builder
- /dev/apex_0:/dev/apex_0 # Passes a PCIe Coral, follow driver instructions here https://coral.ai/docs/m2/get-started/#2a-on-linux
- /dev/video11:/dev/video11 # For Raspberry Pi 4B
- /dev/dri/renderD128:/dev/dri/renderD128 # AMD / Intel GPU, needs to be updated for your hardware
- /dev/accel:/dev/accel # Intel NPU
@@ -465,7 +368,6 @@ There are important limitations in HA OS to be aware of:
- Separate local storage for media is not yet supported by Home Assistant
- AMD GPUs are not supported because HA OS does not include the mesa driver.
- Intel NPUs are not supported because HA OS does not include the NPU firmware.
- Nvidia GPUs are not supported because addons do not support the nvidia runtime.
:::


@@ -5,7 +5,7 @@ title: Updating
# Updating Frigate
The current stable version of Frigate is **0.17.0**. The release notes and any breaking changes for this version can be found on the [Frigate GitHub releases page](https://github.com/blakeblackshear/frigate/releases/tag/v0.17.0).
The current stable version of Frigate is **0.16.2**. The release notes and any breaking changes for this version can be found on the [Frigate GitHub releases page](https://github.com/blakeblackshear/frigate/releases/tag/v0.16.2).
Keeping Frigate up to date ensures you benefit from the latest features, performance improvements, and bug fixes. The update process varies slightly depending on your installation method (Docker, Home Assistant Addon, etc.). Below are instructions for the most common setups.
@@ -33,21 +33,21 @@ If you're running Frigate via Docker (recommended method), follow these steps:
2. **Update and Pull the Latest Image**:
- If using Docker Compose:
- Edit your `docker-compose.yml` file to specify the desired version tag (e.g., `0.17.0` instead of `0.16.3`). For example:
- Edit your `docker-compose.yml` file to specify the desired version tag (e.g., `0.16.2` instead of `0.15.2`). For example:
```yaml
services:
frigate:
image: ghcr.io/blakeblackshear/frigate:0.17.0
image: ghcr.io/blakeblackshear/frigate:0.16.2
```
- Then pull the image:
```bash
docker pull ghcr.io/blakeblackshear/frigate:0.17.0
docker pull ghcr.io/blakeblackshear/frigate:0.16.2
```
- **Note for `stable` Tag Users**: If your `docker-compose.yml` uses the `stable` tag (e.g., `ghcr.io/blakeblackshear/frigate:stable`), you don't need to update the tag manually. The `stable` tag always points to the latest stable release after pulling.
- If using `docker run`:
- Pull the image with the appropriate tag (e.g., `0.17.0`, `0.17.0-tensorrt`, or `stable`):
- Pull the image with the appropriate tag (e.g., `0.16.2`, `0.16.2-tensorrt`, or `stable`):
```bash
docker pull ghcr.io/blakeblackshear/frigate:0.17.0
docker pull ghcr.io/blakeblackshear/frigate:0.16.2
```
3. **Start the Container**:
@@ -105,8 +105,8 @@ If an update causes issues:
1. Stop Frigate.
2. Restore your backed-up config file and database.
3. Revert to the previous image version:
- For Docker: Specify an older tag (e.g., `ghcr.io/blakeblackshear/frigate:0.16.3`) in your `docker run` command.
- For Docker Compose: Edit your `docker-compose.yml`, specify the older version tag (e.g., `ghcr.io/blakeblackshear/frigate:0.16.3`), and re-run `docker compose up -d`.
- For Docker: Specify an older tag (e.g., `ghcr.io/blakeblackshear/frigate:0.15.2`) in your `docker run` command.
- For Docker Compose: Edit your `docker-compose.yml`, specify the older version tag (e.g., `ghcr.io/blakeblackshear/frigate:0.15.2`), and re-run `docker compose up -d`.
- For Home Assistant: Reinstall the previous addon version manually via the repository if needed and restart the addon.
4. Verify the old version is running again.


@@ -134,13 +134,31 @@ Now you should be able to start Frigate by running `docker compose up -d` from w
This section assumes that you already have an environment setup as described in [Installation](../frigate/installation.md). You should also configure your cameras according to the [camera setup guide](/frigate/camera_setup). Pay particular attention to the section on choosing a detect resolution.
### Step 1: Start Frigate
### Step 1: Add a detect stream
At this point you should be able to start Frigate and a basic config will be created automatically.
First we will add the detect stream for the camera:
### Step 2: Add a camera
```yaml
mqtt:
enabled: False
You can click the `Add Camera` button to use the camera setup wizard to get your first camera added into Frigate.
cameras:
name_of_your_camera: # <------ Name the camera
enabled: True
ffmpeg:
inputs:
- path: rtsp://10.0.10.10:554/rtsp # <----- The stream you want to use for detection
roles:
- detect
```
### Step 2: Start Frigate
At this point you should be able to start Frigate and see the video feed in the UI.
If you get an error image from the camera, this means ffmpeg was not able to get the video feed from your camera. Check the logs for error messages from ffmpeg. The default ffmpeg arguments are designed to work with H264 RTSP cameras that support TCP connections.
FFmpeg arguments for other types of cameras can be found [here](../configuration/camera_specific.md).
### Step 3: Configure hardware acceleration (recommended)
@@ -155,7 +173,7 @@ services:
frigate:
...
devices:
- /dev/dri/renderD128:/dev/dri/renderD128 # for intel & amd hwaccel, needs to be updated for your hardware
- /dev/dri/renderD128:/dev/dri/renderD128 # for intel hwaccel, needs to be updated for your hardware
...
```
@@ -184,7 +202,7 @@ services:
...
devices:
- /dev/bus/usb:/dev/bus/usb # passes the USB Coral, needs to be modified for other versions
- /dev/apex_0:/dev/apex_0 # passes a PCIe Coral, follow driver instructions here https://github.com/jnicolson/gasket-builder
- /dev/apex_0:/dev/apex_0 # passes a PCIe Coral, follow driver instructions here https://coral.ai/docs/m2/get-started/#2a-on-linux
...
```


@@ -245,12 +245,6 @@ To load a preview gif of a review item:
https://HA_URL/api/frigate/notifications/<review-id>/review_preview.gif
```
To load the thumbnail of a review item:
```
https://HA_URL/api/frigate/notifications/<review-id>/<camera>/review_thumbnail.webp
```
<a name="streams"></a>
## RTSP stream


@@ -280,7 +280,7 @@ Topic with current state of notifications. Published values are `ON` and `OFF`.
## Frigate Camera Topics
### `frigate/<camera_name>/status/<role>`
### `frigate/<camera_name>/<role>/status`
Publishes the current health status of each role that is enabled (`audio`, `detect`, `record`). Possible values are:


@@ -38,7 +38,3 @@ This is a fork (with fixed errors and new features) of [original Double Take](ht
## [Periscope](https://github.com/maksz42/periscope)
[Periscope](https://github.com/maksz42/periscope) is a lightweight Android app that turns old devices into live viewers for Frigate. It works on Android 2.2 and above, including Android TV. It supports authentication and HTTPS.
## [Scrypted - Frigate bridge plugin](https://github.com/apocaliss92/scrypted-frigate-bridge)
[Scrypted - Frigate bridge](https://github.com/apocaliss92/scrypted-frigate-bridge) is a plugin that allows you to ingest Frigate detections, motion, and video clips into Scrypted, as well as provides templates for exporting rebroadcast configurations to Frigate.


@@ -15,11 +15,13 @@ There are three model types offered in Frigate+, `mobiledet`, `yolonas`, and `yo
Not all model types are supported by all detectors, so it's important to choose a model type to match your detector as shown in the table under [supported detector types](#supported-detector-types). You can test model types for compatibility and speed on your hardware by using the base models.
| Model Type | Description |
| ----------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
| `mobiledet` | Based on the same architecture as the default model included with Frigate. Runs on Google Coral devices and CPUs. |
| `yolonas` | A newer architecture that offers slightly higher accuracy and improved detection of small objects. Runs on Intel, NVidia GPUs, and AMD GPUs. |
| `yolov9` | A leading SOTA (state of the art) object detection model with similar performance to yolonas, but on a wider range of hardware options. Runs on Intel, NVidia GPUs, AMD GPUs, Hailo, MemryX, Apple Silicon, and Rockchip NPUs. |
| Model Type | Description |
| ----------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `mobiledet` | Based on the same architecture as the default model included with Frigate. Runs on Google Coral devices and CPUs. |
| `yolonas` | A newer architecture that offers slightly higher accuracy and improved detection of small objects. Runs on Intel, NVidia GPUs, and AMD GPUs. |
| `yolov9` | A leading SOTA (state of the art) object detection model with similar performance to yolonas, but on a wider range of hardware options. Runs on Intel, NVidia GPUs, AMD GPUs, Hailo, MemryX\*, Apple Silicon\*, and Rockchip NPUs. |
_\* Support coming in 0.17_
### YOLOv9 Details
@@ -37,7 +39,7 @@ If you have a Hailo device, you will need to specify the hardware you have when
#### Rockchip (RKNN) Support
For 0.16, YOLOv9 onnx models will need to be manually converted. First, you will need to configure Frigate to use the model id for your YOLOv9 onnx model so it downloads the model to your `model_cache` directory. From there, you can follow the [documentation](/configuration/object_detectors.md#converting-your-own-onnx-model-to-rknn-format) to convert it. Automatic conversion is available in 0.17 and later.
For 0.16, YOLOv9 onnx models will need to be manually converted. First, you will need to configure Frigate to use the model id for your YOLOv9 onnx model so it downloads the model to your `model_cache` directory. From there, you can follow the [documentation](/configuration/object_detectors.md#converting-your-own-onnx-model-to-rknn-format) to convert it. Automatic conversion is coming in 0.17.
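As a sketch (the model id is a placeholder), pointing Frigate at your Frigate+ model so it is downloaded into `model_cache` before conversion might look like:
```yaml
model:
  path: plus://<your_model_id>
```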
## Supported detector types
@@ -53,7 +55,7 @@ Currently, Frigate+ models support CPU (`cpu`), Google Coral (`edgetpu`), OpenVi
| [Hailo8/Hailo8L/Hailo8R](/configuration/object_detectors#hailo-8) | `hailo8l` | `yolov9` |
| [Rockchip NPU](/configuration/object_detectors#rockchip-platform)\* | `rknn` | `yolov9` |
_\* Requires manual conversion in 0.16. Automatic conversion available in 0.17 and later._
_\* Requires manual conversion in 0.16. Automatic conversion coming in 0.17._
## Improving your model


@@ -1,73 +0,0 @@
---
id: cpu
title: High CPU Usage
---
High CPU usage can impact Frigate's performance and responsiveness. This guide outlines the most effective configuration changes to help reduce CPU consumption and optimize resource usage.
## 1. Hardware Acceleration for Video Decoding
**Priority: Critical**
Video decoding is one of the most CPU-intensive tasks in Frigate. While an AI accelerator handles object detection, it does not assist with decoding video streams. Hardware acceleration (hwaccel) offloads this work to your GPU or specialized video decode hardware, significantly reducing CPU usage and enabling you to support more cameras on the same hardware.
### Key Concepts
**Resolution & FPS Impact:** The decoding burden grows quickly with resolution and frame rate. A 4K stream at 30 FPS requires roughly 4 times the processing power of a 1080p stream at the same frame rate (it has four times the pixels), and doubling the frame rate doubles the decode workload. This is why hardware acceleration becomes critical when working with multiple high-resolution cameras.
**Hardware Acceleration Benefits:** By using dedicated video decode hardware, you can:
- Significantly reduce CPU usage per camera stream
- Support 2-3x more cameras on the same hardware
- Free up CPU resources for motion detection and other Frigate processes
- Reduce system heat and power consumption
### Configuration
Frigate provides preset configurations for common hardware acceleration scenarios. Set up `hwaccel_args` based on your hardware in your [configuration](../configuration/reference) as described in the [getting started guide](../guides/getting_started).
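For instance, a minimal sketch assuming an Intel or AMD GPU with VAAPI support (choose the preset that matches your hardware):
```yaml
ffmpeg:
  hwaccel_args: preset-vaapi
```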
### Troubleshooting Hardware Acceleration
If hardware acceleration isn't working:
1. Check Frigate logs for FFmpeg errors related to hwaccel
2. Verify the hardware device is accessible inside the container
3. Ensure your camera streams use H.264 or H.265 codecs (most common)
4. Try different presets if the automatic detection fails
5. Check that your GPU drivers are properly installed on the host system
## 2. Detector Selection and Configuration
**Priority: Critical**
Choosing the right detector for your hardware is the single most important factor for detection performance. The detector is responsible for running the AI model that identifies objects in video frames. Different detector types have vastly different performance characteristics and hardware requirements, as detailed in the [hardware documentation](../frigate/hardware).
### Understanding Detector Performance
Frigate uses motion detection as a first-line check before running expensive object detection, as explained in the [motion detection documentation](../configuration/motion_detection). When motion is detected, Frigate creates a "region" (the green boxes in the debug viewer) and sends it to the detector. The detector's inference speed determines how many detections per second your system can handle.
**Calculating Detector Capacity:** Your detector has a finite capacity measured in detections per second. With an inference speed of 10ms, your detector can handle approximately 100 detections per second (1000ms / 10ms = 100). If your cameras collectively require more than this capacity, you'll experience delays, missed detections, or the system will fall behind.
### Choosing the Right Detector
Different detectors have vastly different performance characteristics; see the expected performance for object detectors in [the hardware docs](../frigate/hardware).
### Multiple Detector Instances
When a single detector cannot keep up with your camera count, some detector types (`openvino`, `onnx`) allow you to define multiple detector instances to share the workload. This is particularly useful with GPU-based detectors that have sufficient VRAM to run multiple inference processes.
For detailed instructions on configuring multiple detectors, see the [Object Detectors documentation](../configuration/object_detectors).
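A rough sketch of what this could look like with OpenVINO (detector names are arbitrary; whether two instances help depends on your GPU's capacity):
```yaml
detectors:
  ov_0:
    type: openvino
    device: GPU
  ov_1:
    type: openvino
    device: GPU
```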
**When to add a second detector:**
- Skipped FPS is consistently > 0 even during normal activity
### Model Selection and Optimization
The model you use significantly impacts detector performance. Frigate provides default models optimized for each detector type, but you can customize them as described in the [detector documentation](../configuration/object_detectors).
**Model Size Trade-offs:**
- Smaller models (320x320): Faster inference; Frigate is specifically optimized for a 320x320 model.
- Larger models (640x640): Slower inference; can sometimes have higher accuracy on very large objects that take up a majority of the frame.


@@ -1,60 +0,0 @@
---
id: dummy-camera
title: Analyzing Object Detection
---
When investigating object detection or tracking problems, it can be helpful to replay an exported video as a temporary "dummy" camera. This lets you reproduce issues locally, iterate on configuration (detections, zones, enrichment settings), and capture logs and clips for analysis.
## When to use
- Replaying an exported clip to reproduce incorrect detections
- Testing configuration changes (model settings, trackers, filters) against a known clip
- Gathering deterministic logs and recordings for debugging or issue reports
## Example Config
Place the clip you want to replay in a location accessible to Frigate (for example `/media/frigate/` or the repository `debug/` folder when developing). Then add a temporary camera to your `config/config.yml` like this:
```yaml
cameras:
test:
ffmpeg:
inputs:
- path: /media/frigate/car-stopping.mp4
input_args: -re -stream_loop -1 -fflags +genpts
roles:
- detect
detect:
enabled: true
record:
enabled: false
snapshots:
enabled: false
```
- `-re -stream_loop -1` tells `ffmpeg` to play the file in realtime and loop indefinitely, which is useful for long debugging sessions.
- `-fflags +genpts` helps generate presentation timestamps when they are missing in the file.
## Steps
1. Export or copy the clip you want to replay to the Frigate host (e.g., `/media/frigate/` or `debug/clips/`).
2. Add the temporary camera to `config/config.yml` (example above). Use a unique name such as `test` or `replay_camera` so it's easy to remove later.
- If you're debugging a specific camera, copy the settings from that camera (frame rate, model/enrichment settings, zones, etc.) into the temporary camera so the replay closely matches the original environment. Leave `record` and `snapshots` disabled unless you are specifically debugging recording or snapshot behavior.
3. Restart Frigate.
4. Observe the Debug view in the UI and logs as the clip is replayed. Watch detections, zones, or any feature you're looking to debug, and note any errors in the logs to reproduce the issue.
5. Iterate on camera or enrichment settings (model, fps, zones, filters) and re-check the replay until the behavior is resolved.
6. Remove the temporary camera from your config after debugging to avoid spurious telemetry or recordings.
## Variables to consider in object tracking
- The exported video will not always line up exactly with how it originally ran through Frigate (or even with the last loop). Different frames may be used on replay, which can change detections and tracking.
- Motion detection depends on the frames used; small frame shifts can change motion regions and therefore what gets passed to the detector.
- Object detection is not deterministic: models and post-processing can yield different results across runs, so you may not get identical detections or track IDs every time.
When debugging, treat the replay as a close approximation rather than a byte-for-byte replay. Capture multiple runs, enable recording if helpful, and examine logs and saved event clips to understand variability.
## Troubleshooting
- No video: verify the path is correct and accessible from the Frigate process/container.
- FFmpeg errors: check the log output for ffmpeg-specific flags and adjust `input_args` accordingly for your file/container. You may also need to disable hardware acceleration (`hwaccel_args: ""`) for the dummy camera.
- No detections: confirm the camera `roles` include `detect`, and model/detector configuration is enabled.


@@ -1,6 +1,6 @@
---
id: edgetpu
title: EdgeTPU Errors
title: Troubleshooting EdgeTPU
---
## USB Coral Not Detected
@@ -68,7 +68,8 @@ The USB Coral can become stuck and need to be restarted, this can happen for a n
The most common reason for the PCIe Coral not being detected is that the driver has not been installed. The installation process varies based on which OS and kernel are being run.
- In most cases https://github.com/jnicolson/gasket-builder can be used to build and install the latest version of the driver.
- In most cases [the Coral docs](https://coral.ai/docs/m2/get-started/#2-install-the-pcie-driver-and-edge-tpu-runtime) show how to install the driver for the PCIe based Coral.
- For some newer Linux distros (for example, Ubuntu 22.04+), https://github.com/jnicolson/gasket-builder can be used to build and install the latest version of the driver.
## Attempting to load TPU as pci & Fatal Python error: Illegal instruction


@@ -1,6 +1,6 @@
---
id: gpu
title: GPU Errors
title: Troubleshooting GPU
---
## OpenVINO


@@ -1,6 +1,6 @@
---
id: memory
title: Memory Usage
title: Memory Troubleshooting
---
Frigate includes built-in memory profiling using [memray](https://bloomberg.github.io/memray/) to help diagnose memory issues. This feature allows you to profile specific Frigate modules to identify memory leaks, excessive allocations, or other memory-related problems.
@@ -9,20 +9,8 @@ Frigate includes built-in memory profiling using [memray](https://bloomberg.gith
Memory profiling is controlled via the `FRIGATE_MEMRAY_MODULES` environment variable. Set it to a comma-separated list of module names you want to profile:
```yaml
# docker-compose example
services:
frigate:
...
environment:
- FRIGATE_MEMRAY_MODULES=frigate.embeddings,frigate.capture
```
```bash
# docker run example
docker run -e FRIGATE_MEMRAY_MODULES="frigate.embeddings" \
...
--name frigate <frigate_image>
export FRIGATE_MEMRAY_MODULES="frigate.review_segment_manager,frigate.capture"
```
### Module Names
@@ -40,7 +28,7 @@ Frigate processes are named using a module-based naming scheme. Common module na
You can also specify the full process name (including camera-specific identifiers) if you want to profile a specific camera:
```bash
FRIGATE_MEMRAY_MODULES=frigate.capture:front_door
export FRIGATE_MEMRAY_MODULES="frigate.capture:front_door"
```
When you specify a module name (e.g., `frigate.capture`), all processes with that module prefix will be profiled. For example, `frigate.capture` will profile all camera capture processes.
@@ -67,20 +55,11 @@ After a process exits normally, you'll find HTML reports in `/config/memray_repo
If a process crashes or you want to generate a report from an existing binary file, you can manually create the HTML report:
- Run `memray` inside the Frigate container:
```bash
docker-compose exec frigate memray flamegraph /config/memray_reports/<module_name>.bin
# or
docker exec -it <container_name_or_id> memray flamegraph /config/memray_reports/<module_name>.bin
memray flamegraph /config/memray_reports/<module_name>.bin
```
- You can also copy the `.bin` file to the host and run `memray` locally if you have it installed:
```bash
docker cp <container_name_or_id>:/config/memray_reports/<module_name>.bin /tmp/
memray flamegraph /tmp/<module_name>.bin
```
This will generate an HTML file that you can open in your browser.
## Understanding the Reports
@@ -131,4 +110,20 @@ The interactive HTML reports allow you to:
- Check that memray is properly installed (included by default in Frigate)
- Verify the process actually started and ran (check process logs)
## Example Usage
```bash
# Enable profiling for review and capture modules
export FRIGATE_MEMRAY_MODULES="frigate.review_segment_manager,frigate.capture"
# Start Frigate
# ... let it run for a while ...
# Check for reports
ls -lh /config/memray_reports/
# If a process crashed, manually generate report
memray flamegraph /config/memray_reports/frigate_capture_front_door.bin
```
For more information about memray and interpreting reports, see the [official memray documentation](https://bloomberg.github.io/memray/).


@@ -1,6 +1,6 @@
---
id: recordings
title: Recordings Errors
title: Troubleshooting Recordings
---
## I have Frigate configured for motion recording only, but it still seems to be recording even with no motion. Why?


@@ -170,7 +170,7 @@ const config: Config = {
],
},
],
copyright: `Copyright © ${new Date().getFullYear()} Frigate, Inc.`,
copyright: `Copyright © ${new Date().getFullYear()} Frigate LLC`,
},
},
plugins: [


@@ -129,27 +129,9 @@ const sidebars: SidebarsConfig = {
Troubleshooting: [
"troubleshooting/faqs",
"troubleshooting/recordings",
"troubleshooting/dummy-camera",
{
type: "category",
label: "Troubleshooting Hardware",
link: {
type: "generated-index",
title: "Troubleshooting Hardware",
description: "Troubleshooting Problems with Hardware",
},
items: ["troubleshooting/gpu", "troubleshooting/edgetpu"],
},
{
type: "category",
label: "Troubleshooting Resource Usage",
link: {
type: "generated-index",
title: "Troubleshooting Resource Usage",
description: "Troubleshooting issues with resource usage",
},
items: ["troubleshooting/cpu", "troubleshooting/memory"],
},
"troubleshooting/gpu",
"troubleshooting/edgetpu",
"troubleshooting/memory",
],
Development: [
"development/contributing",


@@ -17,25 +17,20 @@ paths:
summary: Authenticate request
description: |-
Authenticates the current request based on proxy headers or JWT token.
Returns user role and permissions for camera access.
This endpoint verifies authentication credentials and manages JWT token refresh.
On success, no JSON body is returned; authentication state is communicated via response headers and cookies.
operationId: auth_auth_get
responses:
"200":
description: Successful Response
content:
application/json:
schema: {}
"202":
description: Authentication Accepted (no response body, different headers depending on auth method)
headers:
remote-user:
description: Authenticated username or "viewer" in proxy-only mode
schema:
type: string
remote-role:
description: Resolved role (e.g., admin, viewer, or custom)
schema:
type: string
Set-Cookie:
description: May include refreshed JWT cookie ("frigate-token") when applicable
schema:
type: string
description: Authentication Accepted
content:
application/json:
schema: {}
"401":
description: Authentication Failed
/profile:
@@ -331,6 +326,59 @@ paths:
application/json:
schema:
$ref: "#/components/schemas/HTTPValidationError"
/media/sync:
post:
tags:
- App
summary: Start media sync job
description: |-
Start an asynchronous media sync job to find and (optionally) remove orphaned media files.
Returns 202 with job details when queued, or 409 if a job is already running.
operationId: sync_media_media_sync_post
requestBody:
required: true
content:
application/json:
responses:
"202":
description: Accepted - Job queued
"409":
description: Conflict - Job already running
"422":
description: Validation Error
/media/sync/current:
get:
tags:
- App
summary: Get current media sync job
description: |-
Retrieve the current running media sync job, if any. Returns the job details or null when no job is active.
operationId: get_media_sync_current_media_sync_current_get
responses:
"200":
description: Successful Response
"422":
description: Validation Error
/media/sync/status/{job_id}:
get:
tags:
- App
summary: Get media sync job status
description: |-
Get status and results for the specified media sync job id. Returns 200 with job details including results, or 404 if the job is not found.
operationId: get_media_sync_status_media_sync_status__job_id__get
parameters:
- name: job_id
in: path
responses:
"200":
description: Successful Response
"404":
description: Not Found - Job not found
"422":
description: Validation Error
/faces/train/{name}/classify:
post:
tags:
@@ -616,32 +664,6 @@ paths:
application/json:
schema:
$ref: "#/components/schemas/HTTPValidationError"
/classification/attributes:
get:
tags:
- Classification
summary: Get custom classification attributes
description: |-
Returns custom classification attributes for a given object type.
Only includes models with classification_type set to 'attribute'.
By default returns a flat sorted list of all attribute labels.
If group_by_model is true, returns attributes grouped by model name.
operationId: get_custom_attributes_classification_attributes_get
parameters:
- name: object_type
in: query
schema:
type: string
- name: group_by_model
in: query
schema:
type: boolean
default: false
responses:
"200":
description: Successful Response
"422":
description: Validation Error
/classification/{name}/dataset:
get:
tags:
@@ -2938,42 +2960,6 @@ paths:
application/json:
schema:
$ref: "#/components/schemas/HTTPValidationError"
/events/{event_id}/attributes:
post:
tags:
- Events
summary: Set custom classification attributes
description: |-
Sets an event's custom classification attributes for all attribute-type
models that apply to the event's object type.
Returns a success message or an error if the event is not found.
operationId: set_attributes_events__event_id__attributes_post
parameters:
- name: event_id
in: path
required: true
schema:
type: string
title: Event Id
requestBody:
required: true
content:
application/json:
schema:
$ref: "#/components/schemas/EventsAttributesBody"
responses:
"200":
description: Successful Response
content:
application/json:
schema:
$ref: "#/components/schemas/GenericResponse"
"422":
description: Validation Error
content:
application/json:
schema:
$ref: "#/components/schemas/HTTPValidationError"
/events/{event_id}/description:
post:
tags:
@@ -5021,18 +5007,6 @@ components:
required:
- subLabel
title: EventsSubLabelBody
EventsAttributesBody:
properties:
attributes:
type: object
title: Attributes
description: Object with model names as keys and attribute values
additionalProperties:
type: string
type: object
required:
- attributes
title: EventsAttributesBody
ExportModel:
properties:
id:


@@ -1,12 +1,12 @@
# COPYRIGHT AND TRADEMARK NOTICE
The images, logos, and icons contained in this directory (the "Brand Assets") are
proprietary to Frigate, Inc. and are NOT covered by the MIT License governing the
proprietary to Frigate LLC and are NOT covered by the MIT License governing the
rest of this repository.
1. TRADEMARK STATUS
The "Frigate" name and the accompanying logo are common law trademarks™ of
Frigate, Inc. Frigate, Inc. reserves all rights to these marks.
Frigate LLC. Frigate LLC reserves all rights to these marks.
2. LIMITED PERMISSION FOR USE
Permission is hereby granted to display these Brand Assets strictly for the
@@ -17,9 +17,9 @@ rest of this repository.
3. RESTRICTIONS
You may NOT:
a. Use these Brand Assets to represent a derivative work (fork) as an official
product of Frigate, Inc.
product of Frigate LLC.
b. Use these Brand Assets in a way that implies endorsement, sponsorship, or
commercial affiliation with Frigate, Inc.
commercial affiliation with Frigate LLC.
c. Modify or alter the Brand Assets.
If you fork this repository with the intent to distribute a modified or competing
@@ -27,4 +27,4 @@ version of the software, you must replace these Brand Assets with your own
original content.
ALL RIGHTS RESERVED.
Copyright (c) 2026 Frigate, Inc.
Copyright (c) 2025 Frigate LLC.


@@ -25,15 +25,22 @@ from pydantic import ValidationError
from frigate.api.auth import allow_any_authenticated, allow_public, require_role
from frigate.api.defs.query.app_query_parameters import AppTimelineHourlyQueryParameters
from frigate.api.defs.request.app_body import AppConfigSetBody
from frigate.api.defs.request.app_body import AppConfigSetBody, MediaSyncBody
from frigate.api.defs.tags import Tags
from frigate.config import FrigateConfig
from frigate.config.camera.updater import (
CameraConfigUpdateEnum,
CameraConfigUpdateTopic,
)
from frigate.ffmpeg_presets import FFMPEG_HWACCEL_VAAPI, _gpu_selector
from frigate.jobs.media_sync import (
get_current_media_sync_job,
get_media_sync_job_by_id,
start_media_sync_job,
)
from frigate.models import Event, Timeline
from frigate.stats.prometheus import get_metrics, update_metrics
from frigate.types import JobStatusTypesEnum
from frigate.util.builtin import (
clean_camera_user_pass,
flatten_config_data,
@@ -458,7 +465,15 @@ def config_set(request: Request, body: AppConfigSetBody):
@router.get("/vainfo", dependencies=[Depends(allow_any_authenticated())])
def vainfo():
vainfo = vainfo_hwaccel()
# Use LibvaGpuSelector to pick an appropriate libva device (if available)
selected_gpu = ""
try:
selected_gpu = _gpu_selector.get_gpu_arg(FFMPEG_HWACCEL_VAAPI, 0) or ""
except Exception:
selected_gpu = ""
# If selected_gpu is empty, pass None to vainfo_hwaccel to run plain `vainfo`.
vainfo = vainfo_hwaccel(device_name=selected_gpu or None)
return JSONResponse(
content={
"return_code": vainfo.returncode,
@@ -593,6 +608,98 @@ def restart():
)
@router.post(
"/media/sync",
dependencies=[Depends(require_role(["admin"]))],
summary="Start media sync job",
description="""Start an asynchronous media sync job to find and (optionally) remove orphaned media files.
Returns 202 with job details when queued, or 409 if a job is already running.""",
)
def sync_media(body: MediaSyncBody = Body(...)):
"""Start async media sync job - remove orphaned files.
Syncs specified media types: event snapshots, event thumbnails, review thumbnails,
previews, exports, and/or recordings. Job runs in background; use /media/sync/current
or /media/sync/status/{job_id} to check status.
Args:
body: MediaSyncBody with dry_run flag and media_types list.
media_types can include: 'all', 'event_snapshots', 'event_thumbnails',
'review_thumbnails', 'previews', 'exports', 'recordings'
Returns:
202 Accepted with job_id, or 409 Conflict if job already running.
"""
job_id = start_media_sync_job(
dry_run=body.dry_run, media_types=body.media_types, force=body.force
)
if job_id is None:
# A job is already running
current = get_current_media_sync_job()
return JSONResponse(
content={
"error": "A media sync job is already running",
"current_job_id": current.id if current else None,
},
status_code=409,
)
return JSONResponse(
content={
"job": {
"job_type": "media_sync",
"status": JobStatusTypesEnum.queued,
"id": job_id,
}
},
status_code=202,
)
@router.get(
"/media/sync/current",
dependencies=[Depends(require_role(["admin"]))],
summary="Get current media sync job",
description="""Retrieve the current running media sync job, if any. Returns the job details
or null when no job is active.""",
)
def get_media_sync_current():
"""Get the current running media sync job, if any."""
job = get_current_media_sync_job()
if job is None:
return JSONResponse(content={"job": None}, status_code=200)
return JSONResponse(
content={"job": job.to_dict()},
status_code=200,
)
@router.get(
"/media/sync/status/{job_id}",
dependencies=[Depends(require_role(["admin"]))],
summary="Get media sync job status",
description="""Get status and results for the specified media sync job id. Returns 200 with
job details including results, or 404 if the job is not found.""",
)
def get_media_sync_status(job_id: str):
"""Get the status of a specific media sync job."""
job = get_media_sync_job_by_id(job_id)
if job is None:
return JSONResponse(
content={"error": "Job not found"},
status_code=404,
)
return JSONResponse(
content={"job": job.to_dict()},
status_code=200,
)
@router.get("/labels", dependencies=[Depends(allow_any_authenticated())])
def get_labels(camera: str = ""):
try:


@@ -143,6 +143,17 @@ def require_admin_by_default():
return admin_checker
def _is_authenticated(request: Request) -> bool:
"""
Helper to determine if a request is from an authenticated user.
Returns True if the request has a valid authenticated user (not anonymous).
Port 5000 internal requests are considered anonymous despite having admin role.
"""
username = request.headers.get("remote-user")
return username is not None and username != "anonymous"
def allow_public():
"""
Override dependency to allow unauthenticated access to an endpoint.
@@ -162,24 +173,27 @@ def allow_public():
def allow_any_authenticated():
"""
Override dependency to allow any request that passed through the /auth endpoint.
Override dependency to allow any authenticated user (bypass admin requirement).
Allows:
- Port 5000 internal requests (remote-user: "anonymous", remote-role: "admin")
- Authenticated users with JWT tokens (remote-user: username)
- Unauthenticated requests when auth is disabled (remote-user: "viewer")
- Port 5000 internal requests (have admin role despite anonymous user)
- Any authenticated user with a real username (not "anonymous")
Rejects:
- Requests with no remote-user header (did not pass through /auth endpoint)
- Port 8971 requests with anonymous user (auth disabled, no proxy auth)
Example:
@router.get("/authenticated-endpoint", dependencies=[Depends(allow_any_authenticated())])
"""
async def auth_checker(request: Request):
# Ensure a remote-user has been set by the /auth endpoint
username = request.headers.get("remote-user")
if username is None:
# Port 5000 requests have admin role and should be allowed
role = request.headers.get("remote-role")
if role == "admin":
return
# Otherwise require a real authenticated user (not anonymous)
if not _is_authenticated(request):
raise HTTPException(status_code=401, detail="Authentication required")
return
@@ -539,32 +553,7 @@ def resolve_role(
"/auth",
dependencies=[Depends(allow_public())],
summary="Authenticate request",
description=(
"Authenticates the current request based on proxy headers or JWT token. "
"This endpoint verifies authentication credentials and manages JWT token refresh. "
"On success, no JSON body is returned; authentication state is communicated via response headers and cookies."
),
status_code=202,
responses={
202: {
"description": "Authentication Accepted (no response body)",
"headers": {
"remote-user": {
"description": 'Authenticated username or "viewer" in proxy-only mode',
"schema": {"type": "string"},
},
"remote-role": {
"description": "Resolved role (e.g., admin, viewer, or custom)",
"schema": {"type": "string"},
},
"Set-Cookie": {
"description": "May include refreshed JWT cookie when applicable",
"schema": {"type": "string"},
},
},
},
401: {"description": "Authentication Failed"},
},
description="Authenticates the current request based on proxy headers or JWT token. Returns user role and permissions for camera access.",
)
def auth(request: Request):
auth_config: AuthConfig = request.app.frigate_config.auth
@@ -592,12 +581,12 @@ def auth(request: Request):
# if auth is disabled, just apply the proxy header map and return success
if not auth_config.enabled:
# pass the user header value from the upstream proxy if a mapping is specified
# or use viewer if none are specified
# or use anonymous if none are specified
user_header = proxy_config.header_map.user
success_response.headers["remote-user"] = (
request.headers.get(user_header, default="viewer")
request.headers.get(user_header, default="anonymous")
if user_header
else "viewer"
else "anonymous"
)
# parse header and resolve a valid role
@@ -709,10 +698,10 @@ def auth(request: Request):
"/profile",
dependencies=[Depends(allow_any_authenticated())],
summary="Get user profile",
description="Returns the current authenticated user's profile including username, role, and allowed cameras. This endpoint requires authentication and returns information about the user's permissions.",
description="Returns the current authenticated user's profile including username, role, and allowed cameras.",
)
def profile(request: Request):
username = request.headers.get("remote-user", "viewer")
username = request.headers.get("remote-user", "anonymous")
role = request.headers.get("remote-role", "viewer")
all_camera_names = set(request.app.frigate_config.cameras.keys())
@@ -728,7 +717,7 @@ def profile(request: Request):
"/logout",
dependencies=[Depends(allow_public())],
summary="Logout user",
description="Logs out the current user by clearing the session cookie. After logout, subsequent requests will require re-authentication.",
description="Logs out the current user by clearing the session cookie.",
)
def logout(request: Request):
auth_config: AuthConfig = request.app.frigate_config.auth
@@ -744,7 +733,7 @@ limiter = Limiter(key_func=get_remote_addr)
"/login",
dependencies=[Depends(allow_public())],
summary="Login with credentials",
description='Authenticates a user with username and password. Returns a JWT token as a secure HTTP-only cookie that can be used for subsequent API requests. The JWT token can also be retrieved from the response and used as a Bearer token in the Authorization header.\n\nExample using Bearer token:\n```\ncurl -H "Authorization: Bearer <token_value>" https://frigate_ip:8971/api/profile\n```',
description="Authenticates a user with username and password. Returns a JWT token as a secure HTTP-only cookie that can be used for subsequent API requests. The token can also be retrieved and used as a Bearer token in the Authorization header.",
)
@limiter.limit(limit_value=rateLimiter.get_limit)
def login(request: Request, body: AppPostLoginBody):
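The Bearer-token flow that the original description illustrated with curl can be sketched in Python as well (the requests library is assumed to be available; host and token value are placeholders):

import requests  # assumed HTTP client

# Call the profile endpoint with the JWT returned by /api/login as a Bearer token.
resp = requests.get(
    "https://frigate_ip:8971/api/profile",
    headers={"Authorization": "Bearer <token_value>"},
)
print(resp.status_code, resp.json())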
@@ -787,7 +776,7 @@ def login(request: Request, body: AppPostLoginBody):
"/users",
dependencies=[Depends(require_role(["admin"]))],
summary="Get all users",
description="Returns a list of all users with their usernames and roles. Requires admin role. Each user object contains the username and assigned role.",
description="Returns a list of all users with their usernames and roles. Requires admin role.",
)
def get_users():
exports = (
@@ -800,7 +789,7 @@ def get_users():
"/users",
dependencies=[Depends(require_role(["admin"]))],
summary="Create new user",
description='Creates a new user with the specified username, password, and role. Requires admin role. Password must meet strength requirements: minimum 8 characters, at least one uppercase letter, at least one digit, and at least one special character (!@#$%^&*(),.?":{} |<>).',
description="Creates a new user with the specified username, password, and role. Requires admin role. Password must meet strength requirements.",
)
def create_user(
request: Request,
@@ -834,7 +823,7 @@ def create_user(
"/users/{username}",
dependencies=[Depends(require_role(["admin"]))],
summary="Delete user",
description="Deletes a user by username. The built-in admin user cannot be deleted. Requires admin role. Returns success message or error if user not found.",
description="Deletes a user by username. The built-in admin user cannot be deleted. Requires admin role.",
)
def delete_user(request: Request, username: str):
# Prevent deletion of the built-in admin user
@@ -851,7 +840,7 @@ def delete_user(request: Request, username: str):
"/users/{username}/password",
dependencies=[Depends(allow_any_authenticated())],
summary="Update user password",
description="Updates a user's password. Users can only change their own password unless they have admin role. Requires the current password to verify identity for non-admin users. Password must meet strength requirements: minimum 8 characters, at least one uppercase letter, at least one digit, and at least one special character (!@#$%^&*(),.?\":{} |<>). If user changes their own password, a new JWT cookie is automatically issued.",
description="Updates a user's password. Users can only change their own password unless they have admin role. Requires the current password to verify identity. Password must meet strength requirements (minimum 8 characters, uppercase letter, digit, and special character).",
)
async def update_password(
request: Request,
@@ -879,9 +868,13 @@ async def update_password(
except DoesNotExist:
return JSONResponse(content={"message": "User not found"}, status_code=404)
# Require old_password when non-admin user is changing any password
# Admin users changing passwords do NOT need to provide the current password
if current_role != "admin":
# Require old_password when:
# 1. Non-admin user is changing another user's password (admin only action)
# 2. Any user is changing their own password
is_changing_own_password = current_username == username
is_non_admin = current_role != "admin"
if is_changing_own_password or is_non_admin:
if not body.old_password:
return JSONResponse(
content={"message": "Current password is required"},
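The condition introduced above reduces to a small predicate; a sketch with illustrative names:

# Illustration of when the current password must be supplied, per the logic above.
def old_password_required(current_username: str, current_role: str, target_username: str) -> bool:
    is_changing_own_password = current_username == target_username
    is_non_admin = current_role != "admin"
    return is_changing_own_password or is_non_admin

assert old_password_required("alice", "admin", "alice")     # admin changing their own password
assert old_password_required("bob", "viewer", "bob")        # user changing their own password
assert not old_password_required("alice", "admin", "bob")   # admin resetting another user's password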
@@ -933,7 +926,7 @@ async def update_password(
"/users/{username}/role",
dependencies=[Depends(require_role(["admin"]))],
summary="Update user role",
description="Updates a user's role. The built-in admin user's role cannot be modified. Requires admin role. Valid roles are defined in the configuration.",
description="Updates a user's role. The built-in admin user's role cannot be modified. Requires admin role.",
)
async def update_role(
request: Request,

View File

@@ -31,7 +31,6 @@ from frigate.api.defs.response.generic_response import GenericResponse
from frigate.api.defs.tags import Tags
from frigate.config import FrigateConfig
from frigate.config.camera import DetectConfig
from frigate.config.classification import ObjectClassificationType
from frigate.const import CLIPS_DIR, FACE_DIR, MODEL_CACHE_DIR
from frigate.embeddings import EmbeddingsContext
from frigate.models import Event
@@ -40,7 +39,6 @@ from frigate.util.classification import (
collect_state_classification_examples,
get_dataset_image_count,
read_training_metadata,
write_training_metadata,
)
from frigate.util.file import get_event_snapshot
@@ -624,59 +622,6 @@ def get_classification_dataset(name: str):
)
@router.get(
"/classification/attributes",
summary="Get custom classification attributes",
description="""Returns custom classification attributes for a given object type.
Only includes models with classification_type set to 'attribute'.
By default returns a flat sorted list of all attribute labels.
If group_by_model is true, returns attributes grouped by model name.""",
)
def get_custom_attributes(
request: Request, object_type: str = None, group_by_model: bool = False
):
models_with_attributes = {}
for (
model_key,
model_config,
) in request.app.frigate_config.classification.custom.items():
if (
not model_config.enabled
or not model_config.object_config
or model_config.object_config.classification_type
!= ObjectClassificationType.attribute
):
continue
model_objects = getattr(model_config.object_config, "objects", []) or []
if object_type is not None and object_type not in model_objects:
continue
dataset_dir = os.path.join(CLIPS_DIR, sanitize_filename(model_key), "dataset")
if not os.path.exists(dataset_dir):
continue
attributes = []
for category_name in os.listdir(dataset_dir):
category_dir = os.path.join(dataset_dir, category_name)
if os.path.isdir(category_dir) and category_name != "none":
attributes.append(category_name)
if attributes:
model_name = model_config.name or model_key
models_with_attributes[model_name] = sorted(attributes)
if group_by_model:
return JSONResponse(content=models_with_attributes)
else:
# Flatten to a unique sorted list
all_attributes = set()
for attributes in models_with_attributes.values():
all_attributes.update(attributes)
return JSONResponse(content=sorted(list(all_attributes)))
@router.get(
"/classification/{name}/train",
summary="Get classification train images",
@@ -843,12 +788,6 @@ def rename_classification_category(
try:
os.rename(old_folder, new_folder)
# Mark dataset as ready to train by resetting training metadata
# This ensures the dataset is marked as changed after renaming
sanitized_name = sanitize_filename(name)
write_training_metadata(sanitized_name, 0)
return JSONResponse(
content=(
{

View File

@@ -12,7 +12,6 @@ class EventsQueryParams(BaseModel):
labels: Optional[str] = "all"
sub_label: Optional[str] = "all"
sub_labels: Optional[str] = "all"
attributes: Optional[str] = "all"
zone: Optional[str] = "all"
zones: Optional[str] = "all"
limit: Optional[int] = 100
@@ -59,8 +58,6 @@ class EventsSearchQueryParams(BaseModel):
limit: Optional[int] = 50
cameras: Optional[str] = "all"
labels: Optional[str] = "all"
sub_labels: Optional[str] = "all"
attributes: Optional[str] = "all"
zones: Optional[str] = "all"
after: Optional[float] = None
before: Optional[float] = None

View File

@@ -1,8 +1,7 @@
from enum import Enum
from typing import Optional, Union
from typing import Optional
from pydantic import BaseModel
from pydantic.json_schema import SkipJsonSchema
class Extension(str, Enum):
@@ -48,15 +47,3 @@ class MediaMjpegFeedQueryParams(BaseModel):
mask: Optional[int] = None
motion: Optional[int] = None
regions: Optional[int] = None
class MediaRecordingsSummaryQueryParams(BaseModel):
timezone: str = "utc"
cameras: Optional[str] = "all"
class MediaRecordingsAvailabilityQueryParams(BaseModel):
cameras: str = "all"
before: Union[float, SkipJsonSchema[None]] = None
after: Union[float, SkipJsonSchema[None]] = None
scale: int = 30

View File

@@ -0,0 +1,21 @@
from typing import Optional, Union
from pydantic import BaseModel
from pydantic.json_schema import SkipJsonSchema
class MediaRecordingsSummaryQueryParams(BaseModel):
timezone: str = "utc"
cameras: Optional[str] = "all"
class MediaRecordingsAvailabilityQueryParams(BaseModel):
cameras: str = "all"
before: Union[float, SkipJsonSchema[None]] = None
after: Union[float, SkipJsonSchema[None]] = None
scale: int = 30
class RecordingsDeleteQueryParams(BaseModel):
keep: Optional[str] = None
cameras: Optional[str] = "all"

View File

@@ -1,6 +1,6 @@
from typing import Any, Dict, Optional
from typing import Any, Dict, List, Optional
from pydantic import BaseModel
from pydantic import BaseModel, Field
class AppConfigSetBody(BaseModel):
@@ -27,3 +27,16 @@ class AppPostLoginBody(BaseModel):
class AppPutRoleBody(BaseModel):
role: str
class MediaSyncBody(BaseModel):
dry_run: bool = Field(
default=True, description="If True, only report orphans without deleting them"
)
media_types: List[str] = Field(
default=["all"],
description="Types of media to sync: 'all', 'event_snapshots', 'event_thumbnails', 'review_thumbnails', 'previews', 'exports', 'recordings'",
)
force: bool = Field(
default=False, description="If True, bypass safety threshold checks"
)
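A short example of what a media sync request body looks like when validated against this model (the import path is assumed from the surrounding diff):

from frigate.api.defs.request.app_body import MediaSyncBody  # module path assumed

body = MediaSyncBody(media_types=["recordings", "exports"], dry_run=True)
print(body.model_dump())
# {'dry_run': True, 'media_types': ['recordings', 'exports'], 'force': False}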

View File

@@ -24,13 +24,6 @@ class EventsLPRBody(BaseModel):
)
class EventsAttributesBody(BaseModel):
attributes: List[str] = Field(
title="Selected classification attributes for the event",
default_factory=list,
)
class EventsDescriptionBody(BaseModel):
description: Union[str, None] = Field(title="The description of the event")

View File

@@ -0,0 +1,35 @@
from typing import Optional
from pydantic import BaseModel, Field
class ExportCaseCreateBody(BaseModel):
"""Request body for creating a new export case."""
name: str = Field(max_length=100, description="Friendly name of the export case")
description: Optional[str] = Field(
default=None, description="Optional description of the export case"
)
class ExportCaseUpdateBody(BaseModel):
"""Request body for updating an existing export case."""
name: Optional[str] = Field(
default=None,
max_length=100,
description="Updated friendly name of the export case",
)
description: Optional[str] = Field(
default=None, description="Updated description of the export case"
)
class ExportCaseAssignBody(BaseModel):
"""Request body for assigning or unassigning an export to a case."""
export_case_id: Optional[str] = Field(
default=None,
max_length=30,
description="Case ID to assign to the export, or null to unassign",
)

View File

@@ -16,5 +16,11 @@ class ExportRecordingsBody(BaseModel):
source: PlaybackSourceEnum = Field(
default=PlaybackSourceEnum.recordings, title="Playback source"
)
name: Optional[str] = Field(title="Friendly name", default=None, max_length=256)
name: str = Field(title="Friendly name", default=None, max_length=256)
image_path: Union[str, SkipJsonSchema[None]] = None
export_case_id: Optional[str] = Field(
default=None,
title="Export case ID",
max_length=30,
description="ID of the export case to assign this export to",
)

View File

@@ -0,0 +1,22 @@
from typing import List, Optional
from pydantic import BaseModel, Field
class ExportCaseModel(BaseModel):
"""Model representing a single export case."""
id: str = Field(description="Unique identifier for the export case")
name: str = Field(description="Friendly name of the export case")
description: Optional[str] = Field(
default=None, description="Optional description of the export case"
)
created_at: float = Field(
description="Unix timestamp when the export case was created"
)
updated_at: float = Field(
description="Unix timestamp when the export case was last updated"
)
ExportCasesResponse = List[ExportCaseModel]

View File

@@ -15,6 +15,9 @@ class ExportModel(BaseModel):
in_progress: bool = Field(
description="Whether the export is currently being processed"
)
export_case_id: Optional[str] = Field(
default=None, description="ID of the export case this export belongs to"
)
class StartExportResponse(BaseModel):

View File

@@ -3,13 +3,14 @@ from enum import Enum
class Tags(Enum):
app = "App"
auth = "Auth"
camera = "Camera"
preview = "Preview"
events = "Events"
export = "Export"
classification = "Classification"
logs = "Logs"
media = "Media"
notifications = "Notifications"
preview = "Preview"
recordings = "Recordings"
review = "Review"
export = "Export"
events = "Events"
classification = "Classification"
auth = "Auth"

View File

@@ -37,7 +37,6 @@ from frigate.api.defs.query.regenerate_query_parameters import (
RegenerateQueryParameters,
)
from frigate.api.defs.request.events_body import (
EventsAttributesBody,
EventsCreateBody,
EventsDeleteBody,
EventsDescriptionBody,
@@ -56,7 +55,6 @@ from frigate.api.defs.response.event_response import (
from frigate.api.defs.response.generic_response import GenericResponse
from frigate.api.defs.tags import Tags
from frigate.comms.event_metadata_updater import EventMetadataTypeEnum
from frigate.config.classification import ObjectClassificationType
from frigate.const import CLIPS_DIR, TRIGGER_DIR
from frigate.embeddings import EmbeddingsContext
from frigate.models import Event, ReviewSegment, Timeline, Trigger
@@ -101,8 +99,6 @@ def events(
if sub_labels == "all" and sub_label != "all":
sub_labels = sub_label
attributes = unquote(params.attributes)
zone = params.zone
zones = params.zones
@@ -191,17 +187,6 @@ def events(
sub_label_clause = reduce(operator.or_, sub_label_clauses)
clauses.append((sub_label_clause))
if attributes != "all":
# Custom classification results are stored as data[model_name] = result_value
filtered_attributes = attributes.split(",")
attribute_clauses = []
for attr in filtered_attributes:
attribute_clauses.append(Event.data.cast("text") % f'*:"{attr}"*')
attribute_clause = reduce(operator.or_, attribute_clauses)
clauses.append(attribute_clause)
if recognized_license_plate != "all":
filtered_recognized_license_plates = recognized_license_plate.split(",")
@@ -507,8 +492,6 @@ def events_search(
# Filters
cameras = params.cameras
labels = params.labels
sub_labels = params.sub_labels
attributes = params.attributes
zones = params.zones
after = params.after
before = params.before
@@ -583,38 +566,6 @@ def events_search(
if labels != "all":
event_filters.append((Event.label << labels.split(",")))
if sub_labels != "all":
# use matching so joined sub labels are included
# for example a sub label 'bob' would get events
# with sub labels 'bob' and 'bob, john'
sub_label_clauses = []
filtered_sub_labels = sub_labels.split(",")
if "None" in filtered_sub_labels:
filtered_sub_labels.remove("None")
sub_label_clauses.append((Event.sub_label.is_null()))
for label in filtered_sub_labels:
sub_label_clauses.append(
(Event.sub_label.cast("text") == label)
) # include exact matches
# include this label when part of a list
sub_label_clauses.append((Event.sub_label.cast("text") % f"*{label},*"))
sub_label_clauses.append((Event.sub_label.cast("text") % f"*, {label}*"))
event_filters.append((reduce(operator.or_, sub_label_clauses)))
if attributes != "all":
# Custom classification results are stored as data[model_name] = result_value
filtered_attributes = attributes.split(",")
attribute_clauses = []
for attr in filtered_attributes:
attribute_clauses.append(Event.data.cast("text") % f'*:"{attr}"*')
event_filters.append(reduce(operator.or_, attribute_clauses))
if zones != "all":
zone_clauses = []
filtered_zones = zones.split(",")
@@ -1400,107 +1351,6 @@ async def set_plate(
)
@router.post(
"/events/{event_id}/attributes",
response_model=GenericResponse,
dependencies=[Depends(require_role(["admin"]))],
summary="Set custom classification attributes",
description=(
"Sets an event's custom classification attributes for all attribute-type "
"models that apply to the event's object type."
),
)
async def set_attributes(
request: Request,
event_id: str,
body: EventsAttributesBody,
):
try:
event: Event = Event.get(Event.id == event_id)
await require_camera_access(event.camera, request=request)
except DoesNotExist:
return JSONResponse(
content=({"success": False, "message": f"Event {event_id} not found."}),
status_code=404,
)
object_type = event.label
selected_attributes = set(body.attributes or [])
applied_updates: list[dict[str, str | float | None]] = []
for (
model_key,
model_config,
) in request.app.frigate_config.classification.custom.items():
# Only apply to enabled attribute classifiers that target this object type
if (
not model_config.enabled
or not model_config.object_config
or model_config.object_config.classification_type
!= ObjectClassificationType.attribute
or object_type not in (model_config.object_config.objects or [])
):
continue
# Get available labels from dataset directory
dataset_dir = os.path.join(CLIPS_DIR, sanitize_filename(model_key), "dataset")
available_labels = set()
if os.path.exists(dataset_dir):
for category_name in os.listdir(dataset_dir):
category_dir = os.path.join(dataset_dir, category_name)
if os.path.isdir(category_dir):
available_labels.add(category_name)
if not available_labels:
logger.warning(
"No dataset found for custom attribute model %s at %s",
model_key,
dataset_dir,
)
continue
# Find all selected attributes that apply to this model
model_name = model_config.name or model_key
matching_attrs = selected_attributes & available_labels
if matching_attrs:
# Publish updates for each selected attribute
for attr in matching_attrs:
request.app.event_metadata_updater.publish(
(event_id, model_name, attr, 1.0),
EventMetadataTypeEnum.attribute.value,
)
applied_updates.append(
{"model": model_name, "label": attr, "score": 1.0}
)
else:
# Clear this model's attribute
request.app.event_metadata_updater.publish(
(event_id, model_name, None, None),
EventMetadataTypeEnum.attribute.value,
)
applied_updates.append({"model": model_name, "label": None, "score": None})
if len(applied_updates) == 0:
return JSONResponse(
content={
"success": False,
"message": "No matching attributes found for this object type.",
},
status_code=400,
)
return JSONResponse(
content={
"success": True,
"message": f"Updated {len(applied_updates)} attribute(s)",
"applied": applied_updates,
},
status_code=200,
)
@router.post(
"/events/{event_id}/description",
response_model=GenericResponse,

View File

@@ -4,10 +4,10 @@ import logging
import random
import string
from pathlib import Path
from typing import List
from typing import List, Optional
import psutil
from fastapi import APIRouter, Depends, Request
from fastapi import APIRouter, Depends, Query, Request
from fastapi.responses import JSONResponse
from pathvalidate import sanitize_filepath
from peewee import DoesNotExist
@@ -19,8 +19,17 @@ from frigate.api.auth import (
require_camera_access,
require_role,
)
from frigate.api.defs.request.export_case_body import (
ExportCaseAssignBody,
ExportCaseCreateBody,
ExportCaseUpdateBody,
)
from frigate.api.defs.request.export_recordings_body import ExportRecordingsBody
from frigate.api.defs.request.export_rename_body import ExportRenameBody
from frigate.api.defs.response.export_case_response import (
ExportCaseModel,
ExportCasesResponse,
)
from frigate.api.defs.response.export_response import (
ExportModel,
ExportsResponse,
@@ -29,7 +38,7 @@ from frigate.api.defs.response.export_response import (
from frigate.api.defs.response.generic_response import GenericResponse
from frigate.api.defs.tags import Tags
from frigate.const import CLIPS_DIR, EXPORT_DIR
from frigate.models import Export, Previews, Recordings
from frigate.models import Export, ExportCase, Previews, Recordings
from frigate.record.export import (
PlaybackFactorEnum,
PlaybackSourceEnum,
@@ -52,17 +61,182 @@ router = APIRouter(tags=[Tags.export])
)
def get_exports(
allowed_cameras: List[str] = Depends(get_allowed_cameras_for_filter),
export_case_id: Optional[str] = None,
cameras: Optional[str] = Query(default="all"),
start_date: Optional[float] = None,
end_date: Optional[float] = None,
):
exports = (
Export.select()
.where(Export.camera << allowed_cameras)
.order_by(Export.date.desc())
.dicts()
.iterator()
)
query = Export.select().where(Export.camera << allowed_cameras)
if export_case_id is not None:
if export_case_id == "unassigned":
query = query.where(Export.export_case.is_null(True))
else:
query = query.where(Export.export_case == export_case_id)
if cameras and cameras != "all":
requested = set(cameras.split(","))
filtered_cameras = list(requested.intersection(allowed_cameras))
if not filtered_cameras:
return JSONResponse(content=[])
query = query.where(Export.camera << filtered_cameras)
if start_date is not None:
query = query.where(Export.date >= start_date)
if end_date is not None:
query = query.where(Export.date <= end_date)
exports = query.order_by(Export.date.desc()).dicts().iterator()
return JSONResponse(content=[e for e in exports])
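An illustrative request using the new filter parameters (endpoint path, host, and token are assumptions or placeholders):

import requests  # assumed HTTP client

resp = requests.get(
    "https://frigate_ip:8971/api/exports",  # path assumed from the route above
    params={
        "export_case_id": "unassigned",      # only exports not assigned to a case
        "cameras": "front_door,back_yard",   # intersected with the caller's allowed cameras
        "start_date": 1767225600,            # optional unix timestamps
    },
    headers={"Authorization": "Bearer <token_value>"},
)
print(resp.json())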
@router.get(
"/cases",
response_model=ExportCasesResponse,
dependencies=[Depends(allow_any_authenticated())],
summary="Get export cases",
description="Gets all export cases from the database.",
)
def get_export_cases():
cases = (
ExportCase.select().order_by(ExportCase.created_at.desc()).dicts().iterator()
)
return JSONResponse(content=[c for c in cases])
@router.post(
"/cases",
response_model=ExportCaseModel,
dependencies=[Depends(require_role(["admin"]))],
summary="Create export case",
description="Creates a new export case.",
)
def create_export_case(body: ExportCaseCreateBody):
case = ExportCase.create(
id="".join(random.choices(string.ascii_lowercase + string.digits, k=12)),
name=body.name,
description=body.description,
created_at=Path().stat().st_mtime,
updated_at=Path().stat().st_mtime,
)
return JSONResponse(content=model_to_dict(case))
@router.get(
"/cases/{case_id}",
response_model=ExportCaseModel,
dependencies=[Depends(allow_any_authenticated())],
summary="Get a single export case",
description="Gets a specific export case by ID.",
)
def get_export_case(case_id: str):
try:
case = ExportCase.get(ExportCase.id == case_id)
return JSONResponse(content=model_to_dict(case))
except DoesNotExist:
return JSONResponse(
content={"success": False, "message": "Export case not found"},
status_code=404,
)
@router.patch(
"/cases/{case_id}",
response_model=GenericResponse,
dependencies=[Depends(require_role(["admin"]))],
summary="Update export case",
description="Updates an existing export case.",
)
def update_export_case(case_id: str, body: ExportCaseUpdateBody):
try:
case = ExportCase.get(ExportCase.id == case_id)
except DoesNotExist:
return JSONResponse(
content={"success": False, "message": "Export case not found"},
status_code=404,
)
if body.name is not None:
case.name = body.name
if body.description is not None:
case.description = body.description
case.save()
return JSONResponse(
content={"success": True, "message": "Successfully updated export case."}
)
@router.delete(
"/cases/{case_id}",
response_model=GenericResponse,
dependencies=[Depends(require_role(["admin"]))],
summary="Delete export case",
description="""Deletes an export case.\n Exports that reference this case will have their export_case set to null.\n """,
)
def delete_export_case(case_id: str):
try:
case = ExportCase.get(ExportCase.id == case_id)
except DoesNotExist:
return JSONResponse(
content={"success": False, "message": "Export case not found"},
status_code=404,
)
# Unassign exports from this case but keep the exports themselves
Export.update(export_case=None).where(Export.export_case == case).execute()
case.delete_instance()
return JSONResponse(
content={"success": True, "message": "Successfully deleted export case."}
)
@router.patch(
"/export/{export_id}/case",
response_model=GenericResponse,
dependencies=[Depends(require_role(["admin"]))],
summary="Assign export to case",
description=(
"Assigns an export to a case, or unassigns it if export_case_id is null."
),
)
async def assign_export_case(
export_id: str,
body: ExportCaseAssignBody,
request: Request,
):
try:
export: Export = Export.get(Export.id == export_id)
await require_camera_access(export.camera, request=request)
except DoesNotExist:
return JSONResponse(
content={"success": False, "message": "Export not found."},
status_code=404,
)
if body.export_case_id is not None:
try:
ExportCase.get(ExportCase.id == body.export_case_id)
except DoesNotExist:
return JSONResponse(
content={"success": False, "message": "Export case not found."},
status_code=404,
)
export.export_case = body.export_case_id
else:
export.export_case = None
export.save()
return JSONResponse(
content={"success": True, "message": "Successfully updated export case."}
)
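Putting the new case endpoints together, an admin could create a case and attach an existing export to it roughly as follows (host, export ID, and camera names are placeholders; the /api prefix follows the curl example used elsewhere in these docs):

import requests  # assumed HTTP client

base = "https://frigate_ip:8971/api"
headers = {"Authorization": "Bearer <token_value>"}

# Create a case (ExportCaseCreateBody)
case = requests.post(
    f"{base}/cases",
    json={"name": "Porch incident", "description": "Clips for the insurance claim"},
    headers=headers,
).json()

# Assign an existing export to it (ExportCaseAssignBody); send null to unassign
requests.patch(
    f"{base}/export/<export_id>/case",
    json={"export_case_id": case["id"]},
    headers=headers,
)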
@router.post(
"/export/{camera_name}/start/{start_time}/end/{end_time}",
response_model=StartExportResponse,
@@ -93,6 +267,16 @@ def export_recording(
friendly_name = body.name
existing_image = sanitize_filepath(body.image_path) if body.image_path else None
export_case_id = body.export_case_id
if export_case_id is not None:
try:
ExportCase.get(ExportCase.id == export_case_id)
except DoesNotExist:
return JSONResponse(
content={"success": False, "message": "Export case not found"},
status_code=404,
)
# Ensure that existing_image is a valid path
if existing_image and not existing_image.startswith(CLIPS_DIR):
return JSONResponse(
@@ -161,6 +345,7 @@ def export_recording(
if playback_source in PlaybackSourceEnum.__members__.values()
else PlaybackSourceEnum.recordings
),
export_case_id,
)
exporter.start()
return JSONResponse(

View File

@@ -22,6 +22,7 @@ from frigate.api import (
media,
notification,
preview,
record,
review,
)
from frigate.api.auth import get_jwt_secret, limiter, require_admin_by_default
@@ -128,6 +129,7 @@ def create_fastapi_app(
app.include_router(export.router)
app.include_router(event.router)
app.include_router(media.router)
app.include_router(record.router)
# App Properties
app.frigate_config = frigate_config
app.embeddings = embeddings

View File

@@ -8,9 +8,8 @@ import os
import subprocess as sp
import time
from datetime import datetime, timedelta, timezone
from functools import reduce
from pathlib import Path as FilePath
from typing import Any, List
from typing import Any
from urllib.parse import unquote
import cv2
@@ -19,12 +18,11 @@ import pytz
from fastapi import APIRouter, Depends, Path, Query, Request, Response
from fastapi.responses import FileResponse, JSONResponse, StreamingResponse
from pathvalidate import sanitize_filename
from peewee import DoesNotExist, fn, operator
from peewee import DoesNotExist, fn
from tzlocal import get_localzone_name
from frigate.api.auth import (
allow_any_authenticated,
get_allowed_cameras_for_filter,
require_camera_access,
)
from frigate.api.defs.query.media_query_parameters import (
@@ -32,8 +30,6 @@ from frigate.api.defs.query.media_query_parameters import (
MediaEventsSnapshotQueryParams,
MediaLatestFrameQueryParams,
MediaMjpegFeedQueryParams,
MediaRecordingsAvailabilityQueryParams,
MediaRecordingsSummaryQueryParams,
)
from frigate.api.defs.tags import Tags
from frigate.camera.state import CameraState
@@ -44,13 +40,11 @@ from frigate.const import (
INSTALL_DIR,
MAX_SEGMENT_DURATION,
PREVIEW_FRAME_TYPE,
RECORD_DIR,
)
from frigate.models import Event, Previews, Recordings, Regions, ReviewSegment
from frigate.track.object_processing import TrackedObjectProcessor
from frigate.util.file import get_event_thumbnail_bytes
from frigate.util.image import get_image_from_recording
from frigate.util.time import get_dst_transitions
logger = logging.getLogger(__name__)
@@ -397,333 +391,6 @@ async def submit_recording_snapshot_to_plus(
)
@router.get("/recordings/storage", dependencies=[Depends(allow_any_authenticated())])
def get_recordings_storage_usage(request: Request):
recording_stats = request.app.stats_emitter.get_latest_stats()["service"][
"storage"
][RECORD_DIR]
if not recording_stats:
return JSONResponse({})
total_mb = recording_stats["total"]
camera_usages: dict[str, dict] = (
request.app.storage_maintainer.calculate_camera_usages()
)
for camera_name in camera_usages.keys():
if camera_usages.get(camera_name, {}).get("usage"):
camera_usages[camera_name]["usage_percent"] = (
camera_usages.get(camera_name, {}).get("usage", 0) / total_mb
) * 100
return JSONResponse(content=camera_usages)
@router.get("/recordings/summary", dependencies=[Depends(allow_any_authenticated())])
def all_recordings_summary(
request: Request,
params: MediaRecordingsSummaryQueryParams = Depends(),
allowed_cameras: List[str] = Depends(get_allowed_cameras_for_filter),
):
"""Returns true/false by day indicating if recordings exist"""
cameras = params.cameras
if cameras != "all":
requested = set(unquote(cameras).split(","))
filtered = requested.intersection(allowed_cameras)
if not filtered:
return JSONResponse(content={})
camera_list = list(filtered)
else:
camera_list = allowed_cameras
time_range_query = (
Recordings.select(
fn.MIN(Recordings.start_time).alias("min_time"),
fn.MAX(Recordings.start_time).alias("max_time"),
)
.where(Recordings.camera << camera_list)
.dicts()
.get()
)
min_time = time_range_query.get("min_time")
max_time = time_range_query.get("max_time")
if min_time is None or max_time is None:
return JSONResponse(content={})
dst_periods = get_dst_transitions(params.timezone, min_time, max_time)
days: dict[str, bool] = {}
for period_start, period_end, period_offset in dst_periods:
hours_offset = int(period_offset / 60 / 60)
minutes_offset = int(period_offset / 60 - hours_offset * 60)
period_hour_modifier = f"{hours_offset} hour"
period_minute_modifier = f"{minutes_offset} minute"
period_query = (
Recordings.select(
fn.strftime(
"%Y-%m-%d",
fn.datetime(
Recordings.start_time,
"unixepoch",
period_hour_modifier,
period_minute_modifier,
),
).alias("day")
)
.where(
(Recordings.camera << camera_list)
& (Recordings.end_time >= period_start)
& (Recordings.start_time <= period_end)
)
.group_by(
fn.strftime(
"%Y-%m-%d",
fn.datetime(
Recordings.start_time,
"unixepoch",
period_hour_modifier,
period_minute_modifier,
),
)
)
.order_by(Recordings.start_time.desc())
.namedtuples()
)
for g in period_query:
days[g.day] = True
return JSONResponse(content=dict(sorted(days.items())))
@router.get(
"/{camera_name}/recordings/summary", dependencies=[Depends(require_camera_access)]
)
async def recordings_summary(camera_name: str, timezone: str = "utc"):
"""Returns hourly summary for recordings of given camera"""
time_range_query = (
Recordings.select(
fn.MIN(Recordings.start_time).alias("min_time"),
fn.MAX(Recordings.start_time).alias("max_time"),
)
.where(Recordings.camera == camera_name)
.dicts()
.get()
)
min_time = time_range_query.get("min_time")
max_time = time_range_query.get("max_time")
days: dict[str, dict] = {}
if min_time is None or max_time is None:
return JSONResponse(content=list(days.values()))
dst_periods = get_dst_transitions(timezone, min_time, max_time)
for period_start, period_end, period_offset in dst_periods:
hours_offset = int(period_offset / 60 / 60)
minutes_offset = int(period_offset / 60 - hours_offset * 60)
period_hour_modifier = f"{hours_offset} hour"
period_minute_modifier = f"{minutes_offset} minute"
recording_groups = (
Recordings.select(
fn.strftime(
"%Y-%m-%d %H",
fn.datetime(
Recordings.start_time,
"unixepoch",
period_hour_modifier,
period_minute_modifier,
),
).alias("hour"),
fn.SUM(Recordings.duration).alias("duration"),
fn.SUM(Recordings.motion).alias("motion"),
fn.SUM(Recordings.objects).alias("objects"),
)
.where(
(Recordings.camera == camera_name)
& (Recordings.end_time >= period_start)
& (Recordings.start_time <= period_end)
)
.group_by((Recordings.start_time + period_offset).cast("int") / 3600)
.order_by(Recordings.start_time.desc())
.namedtuples()
)
event_groups = (
Event.select(
fn.strftime(
"%Y-%m-%d %H",
fn.datetime(
Event.start_time,
"unixepoch",
period_hour_modifier,
period_minute_modifier,
),
).alias("hour"),
fn.COUNT(Event.id).alias("count"),
)
.where(Event.camera == camera_name, Event.has_clip)
.where(
(Event.start_time >= period_start) & (Event.start_time <= period_end)
)
.group_by((Event.start_time + period_offset).cast("int") / 3600)
.namedtuples()
)
event_map = {g.hour: g.count for g in event_groups}
for recording_group in recording_groups:
parts = recording_group.hour.split()
hour = parts[1]
day = parts[0]
events_count = event_map.get(recording_group.hour, 0)
hour_data = {
"hour": hour,
"events": events_count,
"motion": recording_group.motion,
"objects": recording_group.objects,
"duration": round(recording_group.duration),
}
if day in days:
# merge counts if already present (edge-case at DST boundary)
days[day]["events"] += events_count or 0
days[day]["hours"].append(hour_data)
else:
days[day] = {
"events": events_count or 0,
"hours": [hour_data],
"day": day,
}
return JSONResponse(content=list(days.values()))
@router.get("/{camera_name}/recordings", dependencies=[Depends(require_camera_access)])
async def recordings(
camera_name: str,
after: float = (datetime.now() - timedelta(hours=1)).timestamp(),
before: float = datetime.now().timestamp(),
):
"""Return specific camera recordings between the given 'after'/'end' times. If not provided the last hour will be used"""
recordings = (
Recordings.select(
Recordings.id,
Recordings.start_time,
Recordings.end_time,
Recordings.segment_size,
Recordings.motion,
Recordings.objects,
Recordings.duration,
)
.where(
Recordings.camera == camera_name,
Recordings.end_time >= after,
Recordings.start_time <= before,
)
.order_by(Recordings.start_time)
.dicts()
.iterator()
)
return JSONResponse(content=list(recordings))
@router.get(
"/recordings/unavailable",
response_model=list[dict],
dependencies=[Depends(allow_any_authenticated())],
)
async def no_recordings(
request: Request,
params: MediaRecordingsAvailabilityQueryParams = Depends(),
allowed_cameras: List[str] = Depends(get_allowed_cameras_for_filter),
):
"""Get time ranges with no recordings."""
cameras = params.cameras
if cameras != "all":
requested = set(unquote(cameras).split(","))
filtered = requested.intersection(allowed_cameras)
if not filtered:
return JSONResponse(content=[])
cameras = ",".join(filtered)
else:
cameras = allowed_cameras
before = params.before or datetime.datetime.now().timestamp()
after = (
params.after
or (datetime.datetime.now() - datetime.timedelta(hours=1)).timestamp()
)
scale = params.scale
clauses = [(Recordings.end_time >= after) & (Recordings.start_time <= before)]
if cameras != "all":
camera_list = cameras.split(",")
clauses.append((Recordings.camera << camera_list))
else:
camera_list = allowed_cameras
# Get recording start times
data: list[Recordings] = (
Recordings.select(Recordings.start_time, Recordings.end_time)
.where(reduce(operator.and_, clauses))
.order_by(Recordings.start_time.asc())
.dicts()
.iterator()
)
# Convert recordings to list of (start, end) tuples
recordings = [(r["start_time"], r["end_time"]) for r in data]
# Iterate through time segments and check if each has any recording
no_recording_segments = []
current = after
current_gap_start = None
while current < before:
segment_end = min(current + scale, before)
# Check if this segment overlaps with any recording
has_recording = any(
rec_start < segment_end and rec_end > current
for rec_start, rec_end in recordings
)
if not has_recording:
# This segment has no recordings
if current_gap_start is None:
current_gap_start = current # Start a new gap
else:
# This segment has recordings
if current_gap_start is not None:
# End the current gap and append it
no_recording_segments.append(
{"start_time": int(current_gap_start), "end_time": int(current)}
)
current_gap_start = None
current = segment_end
# Append the last gap if it exists
if current_gap_start is not None:
no_recording_segments.append(
{"start_time": int(current_gap_start), "end_time": int(before)}
)
return JSONResponse(content=no_recording_segments)
@router.get(
"/{camera_name}/start/{start_ts}/end/{end_ts}/clip.mp4",
dependencies=[Depends(require_camera_access)],
@@ -1935,7 +1602,7 @@ async def label_clip(request: Request, camera_name: str, label: str):
try:
event = event_query.get()
return await event_clip(request, event.id, 0)
return await event_clip(request, event.id)
except DoesNotExist:
return JSONResponse(
content={"success": False, "message": "Event not found"}, status_code=404

frigate/api/record.py (new file, 479 lines)
View File

@@ -0,0 +1,479 @@
"""Recording APIs."""
import logging
from datetime import datetime, timedelta
from functools import reduce
from pathlib import Path
from typing import List
from urllib.parse import unquote
from fastapi import APIRouter, Depends, Request
from fastapi import Path as PathParam
from fastapi.responses import JSONResponse
from peewee import fn, operator
from frigate.api.auth import (
allow_any_authenticated,
get_allowed_cameras_for_filter,
require_camera_access,
require_role,
)
from frigate.api.defs.query.recordings_query_parameters import (
MediaRecordingsAvailabilityQueryParams,
MediaRecordingsSummaryQueryParams,
RecordingsDeleteQueryParams,
)
from frigate.api.defs.response.generic_response import GenericResponse
from frigate.api.defs.tags import Tags
from frigate.const import RECORD_DIR
from frigate.models import Event, Recordings
from frigate.util.time import get_dst_transitions
logger = logging.getLogger(__name__)
router = APIRouter(tags=[Tags.recordings])
@router.get("/recordings/storage", dependencies=[Depends(allow_any_authenticated())])
def get_recordings_storage_usage(request: Request):
recording_stats = request.app.stats_emitter.get_latest_stats()["service"][
"storage"
][RECORD_DIR]
if not recording_stats:
return JSONResponse({})
total_mb = recording_stats["total"]
camera_usages: dict[str, dict] = (
request.app.storage_maintainer.calculate_camera_usages()
)
for camera_name in camera_usages.keys():
if camera_usages.get(camera_name, {}).get("usage"):
camera_usages[camera_name]["usage_percent"] = (
camera_usages.get(camera_name, {}).get("usage", 0) / total_mb
) * 100
return JSONResponse(content=camera_usages)
@router.get("/recordings/summary", dependencies=[Depends(allow_any_authenticated())])
def all_recordings_summary(
request: Request,
params: MediaRecordingsSummaryQueryParams = Depends(),
allowed_cameras: List[str] = Depends(get_allowed_cameras_for_filter),
):
"""Returns true/false by day indicating if recordings exist"""
cameras = params.cameras
if cameras != "all":
requested = set(unquote(cameras).split(","))
filtered = requested.intersection(allowed_cameras)
if not filtered:
return JSONResponse(content={})
camera_list = list(filtered)
else:
camera_list = allowed_cameras
time_range_query = (
Recordings.select(
fn.MIN(Recordings.start_time).alias("min_time"),
fn.MAX(Recordings.start_time).alias("max_time"),
)
.where(Recordings.camera << camera_list)
.dicts()
.get()
)
min_time = time_range_query.get("min_time")
max_time = time_range_query.get("max_time")
if min_time is None or max_time is None:
return JSONResponse(content={})
dst_periods = get_dst_transitions(params.timezone, min_time, max_time)
days: dict[str, bool] = {}
for period_start, period_end, period_offset in dst_periods:
hours_offset = int(period_offset / 60 / 60)
minutes_offset = int(period_offset / 60 - hours_offset * 60)
period_hour_modifier = f"{hours_offset} hour"
period_minute_modifier = f"{minutes_offset} minute"
period_query = (
Recordings.select(
fn.strftime(
"%Y-%m-%d",
fn.datetime(
Recordings.start_time,
"unixepoch",
period_hour_modifier,
period_minute_modifier,
),
).alias("day")
)
.where(
(Recordings.camera << camera_list)
& (Recordings.end_time >= period_start)
& (Recordings.start_time <= period_end)
)
.group_by(
fn.strftime(
"%Y-%m-%d",
fn.datetime(
Recordings.start_time,
"unixepoch",
period_hour_modifier,
period_minute_modifier,
),
)
)
.order_by(Recordings.start_time.desc())
.namedtuples()
)
for g in period_query:
days[g.day] = True
return JSONResponse(content=dict(sorted(days.items())))
@router.get(
"/{camera_name}/recordings/summary", dependencies=[Depends(require_camera_access)]
)
async def recordings_summary(camera_name: str, timezone: str = "utc"):
"""Returns hourly summary for recordings of given camera"""
time_range_query = (
Recordings.select(
fn.MIN(Recordings.start_time).alias("min_time"),
fn.MAX(Recordings.start_time).alias("max_time"),
)
.where(Recordings.camera == camera_name)
.dicts()
.get()
)
min_time = time_range_query.get("min_time")
max_time = time_range_query.get("max_time")
days: dict[str, dict] = {}
if min_time is None or max_time is None:
return JSONResponse(content=list(days.values()))
dst_periods = get_dst_transitions(timezone, min_time, max_time)
for period_start, period_end, period_offset in dst_periods:
hours_offset = int(period_offset / 60 / 60)
minutes_offset = int(period_offset / 60 - hours_offset * 60)
period_hour_modifier = f"{hours_offset} hour"
period_minute_modifier = f"{minutes_offset} minute"
recording_groups = (
Recordings.select(
fn.strftime(
"%Y-%m-%d %H",
fn.datetime(
Recordings.start_time,
"unixepoch",
period_hour_modifier,
period_minute_modifier,
),
).alias("hour"),
fn.SUM(Recordings.duration).alias("duration"),
fn.SUM(Recordings.motion).alias("motion"),
fn.SUM(Recordings.objects).alias("objects"),
)
.where(
(Recordings.camera == camera_name)
& (Recordings.end_time >= period_start)
& (Recordings.start_time <= period_end)
)
.group_by((Recordings.start_time + period_offset).cast("int") / 3600)
.order_by(Recordings.start_time.desc())
.namedtuples()
)
event_groups = (
Event.select(
fn.strftime(
"%Y-%m-%d %H",
fn.datetime(
Event.start_time,
"unixepoch",
period_hour_modifier,
period_minute_modifier,
),
).alias("hour"),
fn.COUNT(Event.id).alias("count"),
)
.where(Event.camera == camera_name, Event.has_clip)
.where(
(Event.start_time >= period_start) & (Event.start_time <= period_end)
)
.group_by((Event.start_time + period_offset).cast("int") / 3600)
.namedtuples()
)
event_map = {g.hour: g.count for g in event_groups}
for recording_group in recording_groups:
parts = recording_group.hour.split()
hour = parts[1]
day = parts[0]
events_count = event_map.get(recording_group.hour, 0)
hour_data = {
"hour": hour,
"events": events_count,
"motion": recording_group.motion,
"objects": recording_group.objects,
"duration": round(recording_group.duration),
}
if day in days:
# merge counts if already present (edge-case at DST boundary)
days[day]["events"] += events_count or 0
days[day]["hours"].append(hour_data)
else:
days[day] = {
"events": events_count or 0,
"hours": [hour_data],
"day": day,
}
return JSONResponse(content=list(days.values()))
@router.get("/{camera_name}/recordings", dependencies=[Depends(require_camera_access)])
async def recordings(
camera_name: str,
after: float = (datetime.now() - timedelta(hours=1)).timestamp(),
before: float = datetime.now().timestamp(),
):
"""Return specific camera recordings between the given 'after'/'end' times. If not provided the last hour will be used"""
recordings = (
Recordings.select(
Recordings.id,
Recordings.start_time,
Recordings.end_time,
Recordings.segment_size,
Recordings.motion,
Recordings.objects,
Recordings.duration,
)
.where(
Recordings.camera == camera_name,
Recordings.end_time >= after,
Recordings.start_time <= before,
)
.order_by(Recordings.start_time)
.dicts()
.iterator()
)
return JSONResponse(content=list(recordings))
@router.get(
"/recordings/unavailable",
response_model=list[dict],
dependencies=[Depends(allow_any_authenticated())],
)
async def no_recordings(
request: Request,
params: MediaRecordingsAvailabilityQueryParams = Depends(),
allowed_cameras: List[str] = Depends(get_allowed_cameras_for_filter),
):
"""Get time ranges with no recordings."""
cameras = params.cameras
if cameras != "all":
requested = set(unquote(cameras).split(","))
filtered = requested.intersection(allowed_cameras)
if not filtered:
return JSONResponse(content=[])
cameras = ",".join(filtered)
else:
cameras = allowed_cameras
before = params.before or datetime.datetime.now().timestamp()
after = (
params.after
or (datetime.datetime.now() - datetime.timedelta(hours=1)).timestamp()
)
scale = params.scale
clauses = [(Recordings.end_time >= after) & (Recordings.start_time <= before)]
if cameras != "all":
camera_list = cameras.split(",")
clauses.append((Recordings.camera << camera_list))
else:
camera_list = allowed_cameras
# Get recording start times
data: list[Recordings] = (
Recordings.select(Recordings.start_time, Recordings.end_time)
.where(reduce(operator.and_, clauses))
.order_by(Recordings.start_time.asc())
.dicts()
.iterator()
)
# Convert recordings to list of (start, end) tuples
recordings = [(r["start_time"], r["end_time"]) for r in data]
# Iterate through time segments and check if each has any recording
no_recording_segments = []
current = after
current_gap_start = None
while current < before:
segment_end = min(current + scale, before)
# Check if this segment overlaps with any recording
has_recording = any(
rec_start < segment_end and rec_end > current
for rec_start, rec_end in recordings
)
if not has_recording:
# This segment has no recordings
if current_gap_start is None:
current_gap_start = current # Start a new gap
else:
# This segment has recordings
if current_gap_start is not None:
# End the current gap and append it
no_recording_segments.append(
{"start_time": int(current_gap_start), "end_time": int(current)}
)
current_gap_start = None
current = segment_end
# Append the last gap if it exists
if current_gap_start is not None:
no_recording_segments.append(
{"start_time": int(current_gap_start), "end_time": int(before)}
)
return JSONResponse(content=no_recording_segments)
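The segment scan above can be read as a small standalone routine; a sketch (function and variable names chosen for illustration):

# Walk the requested window in fixed-size segments and merge consecutive
# segments that no recording overlaps, as /recordings/unavailable does above.
def find_gaps(recordings, after, before, scale=30):
    gaps, current, gap_start = [], after, None
    while current < before:
        segment_end = min(current + scale, before)
        covered = any(s < segment_end and e > current for s, e in recordings)
        if not covered and gap_start is None:
            gap_start = current
        elif covered and gap_start is not None:
            gaps.append((int(gap_start), int(current)))
            gap_start = None
        current = segment_end
    if gap_start is not None:
        gaps.append((int(gap_start), int(before)))
    return gaps

print(find_gaps([(0, 100), (200, 300)], after=0, before=400, scale=50))
# [(100, 200), (300, 400)]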
@router.delete(
"/recordings/start/{start}/end/{end}",
response_model=GenericResponse,
dependencies=[Depends(require_role(["admin"]))],
summary="Delete recordings",
description="""Deletes recordings within the specified time range.
Recordings can be filtered by cameras and kept based on motion, objects, or audio attributes.
""",
)
async def delete_recordings(
start: float = PathParam(..., description="Start timestamp (unix)"),
end: float = PathParam(..., description="End timestamp (unix)"),
params: RecordingsDeleteQueryParams = Depends(),
allowed_cameras: List[str] = Depends(get_allowed_cameras_for_filter),
):
"""Delete recordings in the specified time range."""
if start >= end:
return JSONResponse(
content={
"success": False,
"message": "Start time must be less than end time.",
},
status_code=400,
)
cameras = params.cameras
if cameras != "all":
requested = set(cameras.split(","))
filtered = requested.intersection(allowed_cameras)
if not filtered:
return JSONResponse(
content={
"success": False,
"message": "No valid cameras found in the request.",
},
status_code=400,
)
camera_list = list(filtered)
else:
camera_list = allowed_cameras
# Parse keep parameter
keep_set = set()
if params.keep:
keep_set = set(params.keep.split(","))
# Build query to find overlapping recordings
clauses = [
(
Recordings.start_time.between(start, end)
| Recordings.end_time.between(start, end)
| ((start > Recordings.start_time) & (end < Recordings.end_time))
),
(Recordings.camera << camera_list),
]
keep_clauses = []
if "motion" in keep_set:
keep_clauses.append(Recordings.motion.is_null(False) & (Recordings.motion > 0))
if "object" in keep_set:
keep_clauses.append(
Recordings.objects.is_null(False) & (Recordings.objects > 0)
)
if "audio" in keep_set:
keep_clauses.append(Recordings.dBFS.is_null(False))
if keep_clauses:
keep_condition = reduce(operator.or_, keep_clauses)
clauses.append(~keep_condition)
recordings_to_delete = (
Recordings.select(Recordings.id, Recordings.path)
.where(reduce(operator.and_, clauses))
.dicts()
.iterator()
)
recording_ids = []
deleted_count = 0
error_count = 0
for recording in recordings_to_delete:
recording_ids.append(recording["id"])
try:
Path(recording["path"]).unlink(missing_ok=True)
deleted_count += 1
except Exception as e:
logger.error(f"Failed to delete recording file {recording['path']}: {e}")
error_count += 1
if recording_ids:
max_deletes = 100000
recording_ids_list = list(recording_ids)
for i in range(0, len(recording_ids_list), max_deletes):
Recordings.delete().where(
Recordings.id << recording_ids_list[i : i + max_deletes]
).execute()
message = f"Successfully deleted {deleted_count} recording(s)."
if error_count > 0:
message += f" {error_count} file deletion error(s) occurred."
return JSONResponse(
content={"success": True, "message": message},
status_code=200,
)
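An illustrative call to the new delete endpoint that keeps segments with motion or detected objects (host, /api prefix, and token are placeholders; the endpoint requires the admin role):

import requests  # assumed HTTP client

start, end = 1767225600, 1767312000  # unix timestamps bounding the range to purge
resp = requests.delete(
    f"https://frigate_ip:8971/api/recordings/start/{start}/end/{end}",
    params={"cameras": "front_door", "keep": "motion,object"},
    headers={"Authorization": "Bearer <token_value>"},
)
print(resp.json())  # {"success": true, "message": "Successfully deleted N recording(s)."}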

View File

@@ -100,10 +100,6 @@ class FrigateApp:
)
if (
config.semantic_search.enabled
or any(
c.objects.genai.enabled or c.review.genai.enabled
for c in config.cameras.values()
)
or config.lpr.enabled
or config.face_recognition.enabled
or len(config.classification.custom) > 0

View File

@@ -19,6 +19,8 @@ class CameraMetrics:
process_pid: Synchronized
capture_process_pid: Synchronized
ffmpeg_pid: Synchronized
reconnects_last_hour: Synchronized
stalls_last_hour: Synchronized
def __init__(self, manager: SyncManager):
self.camera_fps = manager.Value("d", 0)
@@ -35,6 +37,8 @@ class CameraMetrics:
self.process_pid = manager.Value("i", 0)
self.capture_process_pid = manager.Value("i", 0)
self.ffmpeg_pid = manager.Value("i", 0)
self.reconnects_last_hour = manager.Value("i", 0)
self.stalls_last_hour = manager.Value("i", 0)
class PTZMetrics:

View File

@@ -28,6 +28,7 @@ from frigate.const import (
UPDATE_CAMERA_ACTIVITY,
UPDATE_EMBEDDINGS_REINDEX_PROGRESS,
UPDATE_EVENT_DESCRIPTION,
UPDATE_JOB_STATE,
UPDATE_MODEL_STATE,
UPDATE_REVIEW_DESCRIPTION,
UPSERT_REVIEW_SEGMENT,
@@ -60,6 +61,7 @@ class Dispatcher:
self.camera_activity = CameraActivityManager(config, self.publish)
self.audio_activity = AudioActivityManager(config, self.publish)
self.model_state: dict[str, ModelStatusTypesEnum] = {}
self.job_state: dict[str, dict[str, Any]] = {} # {job_type: job_data}
self.embeddings_reindex: dict[str, Any] = {}
self.birdseye_layout: dict[str, Any] = {}
self.audio_transcription_state: str = "idle"
@@ -180,6 +182,19 @@ class Dispatcher:
def handle_model_state() -> None:
self.publish("model_state", json.dumps(self.model_state.copy()))
def handle_update_job_state() -> None:
if payload and isinstance(payload, dict):
job_type = payload.get("job_type")
if job_type:
self.job_state[job_type] = payload
self.publish(
"job_state",
json.dumps(self.job_state),
)
def handle_job_state() -> None:
self.publish("job_state", json.dumps(self.job_state.copy()))
def handle_update_audio_transcription_state() -> None:
if payload:
self.audio_transcription_state = payload
@@ -277,6 +292,7 @@ class Dispatcher:
UPDATE_EVENT_DESCRIPTION: handle_update_event_description,
UPDATE_REVIEW_DESCRIPTION: handle_update_review_description,
UPDATE_MODEL_STATE: handle_update_model_state,
UPDATE_JOB_STATE: handle_update_job_state,
UPDATE_EMBEDDINGS_REINDEX_PROGRESS: handle_update_embeddings_reindex_progress,
UPDATE_BIRDSEYE_LAYOUT: handle_update_birdseye_layout,
UPDATE_AUDIO_TRANSCRIPTION_STATE: handle_update_audio_transcription_state,
@@ -284,6 +300,7 @@ class Dispatcher:
"restart": handle_restart,
"embeddingsReindexProgress": handle_embeddings_reindex_progress,
"modelState": handle_model_state,
"jobState": handle_job_state,
"audioTranscriptionState": handle_audio_transcription_state,
"birdseyeLayout": handle_birdseye_layout,
"onConnect": handle_on_connect,

View File

@@ -225,8 +225,7 @@ class MqttClient(Communicator):
"birdseye_mode",
"review_alerts",
"review_detections",
"object_descriptions",
"review_descriptions",
"genai",
]
for name in self.config.cameras.keys():

View File

@@ -388,7 +388,7 @@ class WebPushClient(Communicator):
else:
title = base_title
message = payload["after"]["data"]["metadata"]["shortSummary"]
message = payload["after"]["data"]["metadata"]["scene"]
else:
zone_names = payload["after"]["data"]["zones"]
formatted_zone_names = []

View File

@@ -1,5 +1,5 @@
from enum import Enum
from typing import Optional
from typing import Optional, Union
from pydantic import Field
@@ -70,13 +70,13 @@ class RecordExportConfig(FrigateBaseModel):
timelapse_args: str = Field(
default=DEFAULT_TIME_LAPSE_FFMPEG_ARGS, title="Timelapse Args"
)
hwaccel_args: Union[str, list[str]] = Field(
default="auto", title="Export-specific FFmpeg hardware acceleration arguments."
)
class RecordConfig(FrigateBaseModel):
enabled: bool = Field(default=False, title="Enable record on all cameras.")
sync_recordings: bool = Field(
default=False, title="Sync recordings with disk on startup and once a day."
)
expire_interval: int = Field(
default=60,
title="Number of minutes to wait between cleanup runs.",

View File

@@ -28,7 +28,6 @@ from frigate.util.builtin import (
get_ffmpeg_arg_list,
)
from frigate.util.config import (
CURRENT_CONFIG_VERSION,
StreamInfoRetriever,
convert_area_to_pixels,
find_config_file,
@@ -77,12 +76,11 @@ logger = logging.getLogger(__name__)
yaml = YAML()
DEFAULT_CONFIG = f"""
DEFAULT_CONFIG = """
mqtt:
enabled: False
cameras: {{}} # No cameras defined, UI wizard should be used
version: {CURRENT_CONFIG_VERSION}
cameras: {} # No cameras defined, UI wizard should be used
"""
DEFAULT_DETECTORS = {"cpu": {"type": "cpu"}}
@@ -525,6 +523,14 @@ class FrigateConfig(FrigateBaseModel):
if camera_config.ffmpeg.hwaccel_args == "auto":
camera_config.ffmpeg.hwaccel_args = self.ffmpeg.hwaccel_args
# Resolve export hwaccel_args: camera export -> camera ffmpeg -> global ffmpeg
# This allows per-camera override for exports (e.g., when camera resolution
# exceeds hardware encoder limits)
if camera_config.record.export.hwaccel_args == "auto":
camera_config.record.export.hwaccel_args = (
camera_config.ffmpeg.hwaccel_args
)
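The export hwaccel cascade added here boils down to a first-non-"auto"-wins resolution; a sketch with illustrative preset values (not the actual implementation):

# Resolution order: camera export args, then camera ffmpeg args, then global args.
def resolve_export_hwaccel(export_args, camera_args, global_args):
    if export_args != "auto":
        return export_args
    # camera ffmpeg args have already fallen back to the global args when "auto"
    return camera_args if camera_args != "auto" else global_args

print(resolve_export_hwaccel("auto", "auto", "preset-vaapi"))      # -> preset-vaapi
print(resolve_export_hwaccel([], "preset-vaapi", "preset-vaapi"))  # -> [] (per-camera export override)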
for input in camera_config.ffmpeg.inputs:
need_detect_dimensions = "detect" in input.roles and (
camera_config.detect.height is None
@@ -755,7 +761,8 @@ class FrigateConfig(FrigateBaseModel):
if new_config and f.tell() == 0:
f.write(DEFAULT_CONFIG)
logger.info(
"Created default config file, see the getting started docs for configuration: https://docs.frigate.video/guides/getting_started"
"Created default config file, see the getting started docs \
for configuration https://docs.frigate.video/guides/getting_started"
)
f.seek(0)

View File

@@ -77,9 +77,6 @@ FFMPEG_HWACCEL_RKMPP = "preset-rkmpp"
FFMPEG_HWACCEL_AMF = "preset-amd-amf"
FFMPEG_HVC1_ARGS = ["-tag:v", "hvc1"]
# RKNN constants
SUPPORTED_RK_SOCS = ["rk3562", "rk3566", "rk3568", "rk3576", "rk3588"]
# Regex constants
REGEX_CAMERA_NAME = r"^[a-zA-Z0-9_-]+$"
@@ -122,6 +119,7 @@ UPDATE_REVIEW_DESCRIPTION = "update_review_description"
UPDATE_MODEL_STATE = "update_model_state"
UPDATE_EMBEDDINGS_REINDEX_PROGRESS = "handle_embeddings_reindex_progress"
UPDATE_BIRDSEYE_LAYOUT = "update_birdseye_layout"
UPDATE_JOB_STATE = "update_job_state"
NOTIFICATION_TEST = "notification_test"
# IO Nice Values

View File

@@ -374,9 +374,6 @@ class LicensePlateProcessingMixin:
combined_plate = re.sub(
pattern, replacement, combined_plate
)
logger.debug(
f"{camera}: Processing replace rule: '{pattern}' -> '{replacement}', result: '{combined_plate}'"
)
except re.error as e:
logger.warning(
f"{camera}: Invalid regex in replace_rules '{pattern}': {e}"
@@ -384,7 +381,7 @@ class LicensePlateProcessingMixin:
if combined_plate != original_combined:
logger.debug(
f"{camera}: All rules applied: '{original_combined}' -> '{combined_plate}'"
f"{camera}: Rules applied: '{original_combined}' -> '{combined_plate}'"
)
# Compute the combined area for qualifying boxes

View File

@@ -131,9 +131,8 @@ class AudioTranscriptionPostProcessor(PostProcessorApi):
},
)
# Embed the description if semantic search is enabled
if self.config.semantic_search.enabled:
self.embeddings.embed_description(event_id, transcription)
# Embed the description
self.embeddings.embed_description(event_id, transcription)
except DoesNotExist:
logger.debug("No recording found for audio transcription post-processing")

View File

@@ -86,11 +86,7 @@ class ObjectDescriptionProcessor(PostProcessorApi):
and data["id"] not in self.early_request_sent
):
if data["has_clip"] and data["has_snapshot"]:
try:
event: Event = Event.get(Event.id == data["id"])
except DoesNotExist:
logger.error(f"Event {data['id']} not found")
return
event: Event = Event.get(Event.id == data["id"])
if (
not camera_config.objects.genai.objects
@@ -135,8 +131,6 @@ class ObjectDescriptionProcessor(PostProcessorApi):
)
):
self._process_genai_description(event, camera_config, thumbnail)
else:
self.cleanup_event(event.id)
def __regenerate_description(self, event_id: str, source: str, force: bool) -> None:
"""Regenerate the description for an event."""
@@ -210,17 +204,6 @@ class ObjectDescriptionProcessor(PostProcessorApi):
)
return None
def cleanup_event(self, event_id: str) -> None:
"""Clean up tracked event data to prevent memory leaks.
This should be called when an event ends, regardless of whether
genai processing is triggered.
"""
if event_id in self.tracked_events:
del self.tracked_events[event_id]
if event_id in self.early_request_sent:
del self.early_request_sent[event_id]
def _read_and_crop_snapshot(self, event: Event) -> bytes | None:
"""Read, decode, and crop the snapshot image."""
@@ -316,8 +299,9 @@ class ObjectDescriptionProcessor(PostProcessorApi):
),
).start()
# Clean up tracked events and early request state
self.cleanup_event(event.id)
# Delete tracked events based on the event_id
if event.id in self.tracked_events:
del self.tracked_events[event.id]
def _genai_embed_description(self, event: Event, thumbnails: list[bytes]) -> None:
"""Embed the description for an event."""

View File

@@ -92,7 +92,7 @@ class ReviewDescriptionProcessor(PostProcessorApi):
pixels_per_image = width * height
tokens_per_image = pixels_per_image / 1250
prompt_tokens = 3800
prompt_tokens = 3500
response_tokens = 300
available_tokens = context_size - prompt_tokens - response_tokens
max_frames = int(available_tokens / tokens_per_image)
@@ -311,7 +311,6 @@ class ReviewDescriptionProcessor(PostProcessorApi):
start_ts,
end_ts,
events_with_context,
self.config.review.genai.preferred_language,
self.config.review.genai.debug_save_thumbnails,
)
else:
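
Worked example of the frame-budget arithmetic at the top of this hunk, using illustrative numbers (the diff changes the prompt-token constant between 3800 and 3500; 3500 is used here):

```python
width, height = 640, 360
context_size = 8192

pixels_per_image = width * height                                   # 230400
tokens_per_image = pixels_per_image / 1250                          # ~184.3
prompt_tokens = 3500
response_tokens = 300
available_tokens = context_size - prompt_tokens - response_tokens   # 4392
max_frames = int(available_tokens / tokens_per_image)               # 23
```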

View File

@@ -8,9 +8,6 @@ class ReviewMetadata(BaseModel):
scene: str = Field(
description="A comprehensive description of the setting and entities, including relevant context and plausible inferences if supported by visual evidence."
)
shortSummary: str = Field(
description="A brief 2-sentence summary of the scene, suitable for notifications. Should capture the key activity and context without full detail."
)
confidence: float = Field(
description="A float between 0 and 1 representing your overall confidence in this analysis."
)

View File

@@ -13,7 +13,7 @@ from frigate.comms.event_metadata_updater import (
)
from frigate.config import FrigateConfig
from frigate.const import MODEL_CACHE_DIR
from frigate.log import suppress_stderr_during
from frigate.log import redirect_output_to_logger
from frigate.util.object import calculate_region
from ..types import DataProcessorMetrics
@@ -80,14 +80,13 @@ class BirdRealTimeProcessor(RealTimeProcessorApi):
except Exception as e:
logger.error(f"Failed to download {path}: {e}")
@redirect_output_to_logger(logger, logging.DEBUG)
def __build_detector(self) -> None:
# Suppress TFLite delegate creation messages that bypass Python logging
with suppress_stderr_during("tflite_interpreter_init"):
self.interpreter = Interpreter(
model_path=os.path.join(MODEL_CACHE_DIR, "bird/bird.tflite"),
num_threads=2,
)
self.interpreter.allocate_tensors()
self.interpreter = Interpreter(
model_path=os.path.join(MODEL_CACHE_DIR, "bird/bird.tflite"),
num_threads=2,
)
self.interpreter.allocate_tensors()
self.tensor_input_details = self.interpreter.get_input_details()
self.tensor_output_details = self.interpreter.get_output_details()

View File

@@ -21,7 +21,7 @@ from frigate.config.classification import (
ObjectClassificationType,
)
from frigate.const import CLIPS_DIR, MODEL_CACHE_DIR
from frigate.log import suppress_stderr_during
from frigate.log import redirect_output_to_logger
from frigate.types import TrackedObjectUpdateTypesEnum
from frigate.util.builtin import EventsPerSecond, InferenceSpeed, load_labels
from frigate.util.object import box_overlaps, calculate_region
@@ -52,7 +52,7 @@ class CustomStateClassificationProcessor(RealTimeProcessorApi):
self.requestor = requestor
self.model_dir = os.path.join(MODEL_CACHE_DIR, self.model_config.name)
self.train_dir = os.path.join(CLIPS_DIR, self.model_config.name, "train")
self.interpreter: Interpreter = None
self.interpreter: Interpreter | None = None
self.tensor_input_details: dict[str, Any] | None = None
self.tensor_output_details: dict[str, Any] | None = None
self.labelmap: dict[int, str] = {}
@@ -72,12 +72,8 @@ class CustomStateClassificationProcessor(RealTimeProcessorApi):
self.last_run = datetime.datetime.now().timestamp()
self.__build_detector()
@redirect_output_to_logger(logger, logging.DEBUG)
def __build_detector(self) -> None:
try:
from tflite_runtime.interpreter import Interpreter
except ModuleNotFoundError:
from tensorflow.lite.python.interpreter import Interpreter
model_path = os.path.join(self.model_dir, "model.tflite")
labelmap_path = os.path.join(self.model_dir, "labelmap.txt")
@@ -88,13 +84,11 @@ class CustomStateClassificationProcessor(RealTimeProcessorApi):
self.labelmap = {}
return
# Suppress TFLite delegate creation messages that bypass Python logging
with suppress_stderr_during("tflite_interpreter_init"):
self.interpreter = Interpreter(
model_path=model_path,
num_threads=2,
)
self.interpreter.allocate_tensors()
self.interpreter = Interpreter(
model_path=model_path,
num_threads=2,
)
self.interpreter.allocate_tensors()
self.tensor_input_details = self.interpreter.get_input_details()
self.tensor_output_details = self.interpreter.get_output_details()
self.labelmap = load_labels(labelmap_path, prefill=0)
@@ -230,34 +224,28 @@ class CustomStateClassificationProcessor(RealTimeProcessorApi):
if not should_run:
return
x, y, x2, y2 = calculate_region(
frame.shape,
crop[0],
crop[1],
crop[2],
crop[3],
224,
1.0,
)
rgb = cv2.cvtColor(frame, cv2.COLOR_YUV2RGB_I420)
height, width = rgb.shape[:2]
frame = rgb[
y:y2,
x:x2,
]
# Convert normalized crop coordinates to pixel values
x1 = int(camera_config.crop[0] * width)
y1 = int(camera_config.crop[1] * height)
x2 = int(camera_config.crop[2] * width)
y2 = int(camera_config.crop[3] * height)
# Clip coordinates to frame boundaries
x1 = max(0, min(x1, width))
y1 = max(0, min(y1, height))
x2 = max(0, min(x2, width))
y2 = max(0, min(y2, height))
if x2 <= x1 or y2 <= y1:
logger.warning(
f"Invalid crop coordinates for {camera}: [{x1}, {y1}, {x2}, {y2}]"
)
return
frame = rgb[y1:y2, x1:x2]
try:
resized_frame = cv2.resize(frame, (224, 224))
except Exception:
logger.warning("Failed to resize image for state classification")
return
if frame.shape != (224, 224):
try:
resized_frame = cv2.resize(frame, (224, 224))
except Exception:
logger.warning("Failed to resize image for state classification")
return
if self.interpreter is None:
# When interpreter is None, always save (score is 0.0, which is < 1.0)
@@ -357,7 +345,7 @@ class CustomObjectClassificationProcessor(RealTimeProcessorApi):
self.model_config = model_config
self.model_dir = os.path.join(MODEL_CACHE_DIR, self.model_config.name)
self.train_dir = os.path.join(CLIPS_DIR, self.model_config.name, "train")
self.interpreter: Interpreter = None
self.interpreter: Interpreter | None = None
self.sub_label_publisher = sub_label_publisher
self.requestor = requestor
self.tensor_input_details: dict[str, Any] | None = None
@@ -378,6 +366,7 @@ class CustomObjectClassificationProcessor(RealTimeProcessorApi):
self.__build_detector()
@redirect_output_to_logger(logger, logging.DEBUG)
def __build_detector(self) -> None:
model_path = os.path.join(self.model_dir, "model.tflite")
labelmap_path = os.path.join(self.model_dir, "labelmap.txt")
@@ -389,13 +378,11 @@ class CustomObjectClassificationProcessor(RealTimeProcessorApi):
self.labelmap = {}
return
# Suppress TFLite delegate creation messages that bypass Python logging
with suppress_stderr_during("tflite_interpreter_init"):
self.interpreter = Interpreter(
model_path=model_path,
num_threads=2,
)
self.interpreter.allocate_tensors()
self.interpreter = Interpreter(
model_path=model_path,
num_threads=2,
)
self.interpreter.allocate_tensors()
self.tensor_input_details = self.interpreter.get_input_details()
self.tensor_output_details = self.interpreter.get_output_details()
self.labelmap = load_labels(labelmap_path, prefill=0)
@@ -521,13 +508,6 @@ class CustomObjectClassificationProcessor(RealTimeProcessorApi):
0.0,
max_files=save_attempts,
)
# Still track history even when model doesn't exist to respect MAX_OBJECT_CLASSIFICATIONS
# Add an entry with "unknown" label so the history limit is enforced
if object_id not in self.classification_history:
self.classification_history[object_id] = []
self.classification_history[object_id].append(("unknown", 0.0, now))
return
input = np.expand_dims(resized_crop, axis=0)
@@ -669,5 +649,5 @@ def write_classification_attempt(
if len(files) > max_files:
os.unlink(os.path.join(folder, files[-1]))
except (FileNotFoundError, OSError):
except FileNotFoundError:
pass
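
Worked example of the normalized-crop conversion and clipping shown above (frame size and crop values are illustrative):

```python
width, height = 1280, 720
crop = [0.25, 0.25, 0.75, 0.75]  # normalized [x1, y1, x2, y2]

# convert to pixel coordinates and clip to the frame boundaries
x1 = max(0, min(int(crop[0] * width), width))    # 320
y1 = max(0, min(int(crop[1] * height), height))  # 180
x2 = max(0, min(int(crop[2] * width), width))    # 960
y2 = max(0, min(int(crop[3] * height), height))  # 540

assert x2 > x1 and y2 > y1  # otherwise the frame is skipped with a warning
```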

View File

@@ -131,7 +131,6 @@ class ONNXModelRunner(BaseModelRunner):
return model_type in [
EnrichmentModelTypeEnum.paddleocr.value,
EnrichmentModelTypeEnum.yolov9_license_plate.value,
EnrichmentModelTypeEnum.jina_v1.value,
EnrichmentModelTypeEnum.jina_v2.value,
EnrichmentModelTypeEnum.facenet.value,
@@ -139,31 +138,8 @@ class ONNXModelRunner(BaseModelRunner):
ModelTypeEnum.dfine.value,
]
@staticmethod
def is_concurrent_model(model_type: str | None) -> bool:
"""Check if model requires thread locking for concurrent inference.
Some models (like JinaV2) share one runner between text and vision embeddings
called from different threads, requiring thread synchronization.
"""
if not model_type:
return False
# Import here to avoid circular imports
from frigate.embeddings.types import EnrichmentModelTypeEnum
return model_type == EnrichmentModelTypeEnum.jina_v2.value
def __init__(self, ort: ort.InferenceSession, model_type: str | None = None):
def __init__(self, ort: ort.InferenceSession):
self.ort = ort
self.model_type = model_type
# Thread lock to prevent concurrent inference (needed for JinaV2 which shares
# one runner between text and vision embeddings called from different threads)
if self.is_concurrent_model(model_type):
self._inference_lock = threading.Lock()
else:
self._inference_lock = None
def get_input_names(self) -> list[str]:
return [input.name for input in self.ort.get_inputs()]
@@ -173,10 +149,6 @@ class ONNXModelRunner(BaseModelRunner):
return self.ort.get_inputs()[0].shape[3]
def run(self, input: dict[str, Any]) -> Any | None:
if self._inference_lock:
with self._inference_lock:
return self.ort.run(None, input)
return self.ort.run(None, input)
@@ -197,7 +169,6 @@ class CudaGraphRunner(BaseModelRunner):
return model_type not in [
ModelTypeEnum.yolonas.value,
ModelTypeEnum.dfine.value,
EnrichmentModelTypeEnum.paddleocr.value,
EnrichmentModelTypeEnum.jina_v1.value,
EnrichmentModelTypeEnum.jina_v2.value,
@@ -603,6 +574,5 @@ def get_optimized_runner(
),
providers=providers,
provider_options=options,
),
model_type=model_type,
)
)

View File

@@ -5,7 +5,7 @@ from typing_extensions import Literal
from frigate.detectors.detection_api import DetectionApi
from frigate.detectors.detector_config import BaseDetectorConfig
from frigate.log import suppress_stderr_during
from frigate.log import redirect_output_to_logger
from ..detector_utils import tflite_detect_raw, tflite_init
@@ -28,13 +28,12 @@ class CpuDetectorConfig(BaseDetectorConfig):
class CpuTfl(DetectionApi):
type_key = DETECTOR_KEY
@redirect_output_to_logger(logger, logging.DEBUG)
def __init__(self, detector_config: CpuDetectorConfig):
# Suppress TFLite delegate creation messages that bypass Python logging
with suppress_stderr_during("tflite_interpreter_init"):
interpreter = Interpreter(
model_path=detector_config.model.path,
num_threads=detector_config.num_threads or 3,
)
interpreter = Interpreter(
model_path=detector_config.model.path,
num_threads=detector_config.num_threads or 3,
)
tflite_init(self, interpreter)

View File

@@ -8,7 +8,7 @@ import cv2
import numpy as np
from pydantic import Field
from frigate.const import MODEL_CACHE_DIR, SUPPORTED_RK_SOCS
from frigate.const import MODEL_CACHE_DIR
from frigate.detectors.detection_api import DetectionApi
from frigate.detectors.detection_runners import RKNNModelRunner
from frigate.detectors.detector_config import BaseDetectorConfig, ModelTypeEnum
@@ -19,6 +19,8 @@ logger = logging.getLogger(__name__)
DETECTOR_KEY = "rknn"
supported_socs = ["rk3562", "rk3566", "rk3568", "rk3576", "rk3588"]
supported_models = {
ModelTypeEnum.yologeneric: "^frigate-fp16-yolov9-[cemst]$",
ModelTypeEnum.yolonas: "^deci-fp16-yolonas_[sml]$",
@@ -80,9 +82,9 @@ class Rknn(DetectionApi):
except FileNotFoundError:
raise Exception("Make sure to run docker in privileged mode.")
if soc not in SUPPORTED_RK_SOCS:
if soc not in supported_socs:
raise Exception(
f"Your SoC is not supported. Your SoC is: {soc}. Currently these SoCs are supported: {SUPPORTED_RK_SOCS}."
f"Your SoC is not supported. Your SoC is: {soc}. Currently these SoCs are supported: {supported_socs}."
)
return soc

View File

@@ -203,9 +203,7 @@ class EmbeddingMaintainer(threading.Thread):
# post processors
self.post_processors: list[PostProcessorApi] = []
if self.genai_client is not None and any(
c.review.genai.enabled_in_config for c in self.config.cameras.values()
):
if any(c.review.genai.enabled_in_config for c in self.config.cameras.values()):
self.post_processors.append(
ReviewDescriptionProcessor(
self.config, self.requestor, self.metrics, self.genai_client
@@ -246,9 +244,7 @@ class EmbeddingMaintainer(threading.Thread):
)
self.post_processors.append(semantic_trigger_processor)
if self.genai_client is not None and any(
c.objects.genai.enabled_in_config for c in self.config.cameras.values()
):
if any(c.objects.genai.enabled_in_config for c in self.config.cameras.values()):
self.post_processors.append(
ObjectDescriptionProcessor(
self.config,
@@ -526,8 +522,6 @@ class EmbeddingMaintainer(threading.Thread):
)
elif isinstance(processor, ObjectDescriptionProcessor):
if not updated_db:
# Still need to cleanup tracked events even if not processing
processor.cleanup_event(event_id)
continue
processor.process_data(
@@ -633,7 +627,7 @@ class EmbeddingMaintainer(threading.Thread):
camera, frame_name, _, _, motion_boxes, _ = data
if not camera or len(motion_boxes) == 0 or camera not in self.config.cameras:
if not camera or len(motion_boxes) == 0:
return
camera_config = self.config.cameras[camera]

View File

@@ -8,7 +8,7 @@ import numpy as np
from frigate.const import MODEL_CACHE_DIR
from frigate.detectors.detection_runners import get_optimized_runner
from frigate.embeddings.types import EnrichmentModelTypeEnum
from frigate.log import suppress_stderr_during
from frigate.log import redirect_output_to_logger
from frigate.util.downloader import ModelDownloader
from ...config import FaceRecognitionConfig
@@ -57,18 +57,17 @@ class FaceNetEmbedding(BaseEmbedding):
self._load_model_and_utils()
logger.debug(f"models are already downloaded for {self.model_name}")
@redirect_output_to_logger(logger, logging.DEBUG)
def _load_model_and_utils(self):
if self.runner is None:
if self.downloader:
self.downloader.wait_for_download()
# Suppress TFLite delegate creation messages that bypass Python logging
with suppress_stderr_during("tflite_interpreter_init"):
self.runner = Interpreter(
model_path=os.path.join(MODEL_CACHE_DIR, "facedet/facenet.tflite"),
num_threads=2,
)
self.runner.allocate_tensors()
self.runner = Interpreter(
model_path=os.path.join(MODEL_CACHE_DIR, "facedet/facenet.tflite"),
num_threads=2,
)
self.runner.allocate_tensors()
self.tensor_input_details = self.runner.get_input_details()
self.tensor_output_details = self.runner.get_output_details()

View File

@@ -186,9 +186,6 @@ class JinaV1ImageEmbedding(BaseEmbedding):
download_func=self._download_model,
)
self.downloader.ensure_model_files()
# Avoid lazy loading in worker threads: block until downloads complete
# and load the model on the main thread during initialization.
self._load_model_and_utils()
else:
self.downloader = None
ModelDownloader.mark_files_state(

View File

@@ -3,7 +3,6 @@
import io
import logging
import os
import threading
import numpy as np
from PIL import Image
@@ -54,11 +53,6 @@ class JinaV2Embedding(BaseEmbedding):
self.tokenizer = None
self.image_processor = None
self.runner = None
# Lock to prevent concurrent calls (text and vision share this instance)
self._call_lock = threading.Lock()
# download the model and tokenizer
files_names = list(self.download_urls.keys()) + [self.tokenizer_file]
if not all(
os.path.exists(os.path.join(self.download_path, n)) for n in files_names
@@ -71,9 +65,6 @@ class JinaV2Embedding(BaseEmbedding):
download_func=self._download_model,
)
self.downloader.ensure_model_files()
# Avoid lazy loading in worker threads: block until downloads complete
# and load the model on the main thread during initialization.
self._load_model_and_utils()
else:
self.downloader = None
ModelDownloader.mark_files_state(
@@ -206,40 +197,37 @@ class JinaV2Embedding(BaseEmbedding):
def __call__(
self, inputs: list[str] | list[Image.Image] | list[str], embedding_type=None
) -> list[np.ndarray]:
# Lock the entire call to prevent race conditions when text and vision
# embeddings are called concurrently from different threads
with self._call_lock:
self.embedding_type = embedding_type
if not self.embedding_type:
raise ValueError(
"embedding_type must be specified either in __init__ or __call__"
)
self.embedding_type = embedding_type
if not self.embedding_type:
raise ValueError(
"embedding_type must be specified either in __init__ or __call__"
)
self._load_model_and_utils()
processed = self._preprocess_inputs(inputs)
batch_size = len(processed)
self._load_model_and_utils()
processed = self._preprocess_inputs(inputs)
batch_size = len(processed)
# Prepare ONNX inputs with matching batch sizes
onnx_inputs = {}
if self.embedding_type == "text":
onnx_inputs["input_ids"] = np.stack([x[0] for x in processed])
onnx_inputs["pixel_values"] = np.zeros(
(batch_size, 3, 512, 512), dtype=np.float32
)
elif self.embedding_type == "vision":
onnx_inputs["input_ids"] = np.zeros((batch_size, 16), dtype=np.int64)
onnx_inputs["pixel_values"] = np.stack([x[0] for x in processed])
else:
raise ValueError("Invalid embedding type")
# Prepare ONNX inputs with matching batch sizes
onnx_inputs = {}
if self.embedding_type == "text":
onnx_inputs["input_ids"] = np.stack([x[0] for x in processed])
onnx_inputs["pixel_values"] = np.zeros(
(batch_size, 3, 512, 512), dtype=np.float32
)
elif self.embedding_type == "vision":
onnx_inputs["input_ids"] = np.zeros((batch_size, 16), dtype=np.int64)
onnx_inputs["pixel_values"] = np.stack([x[0] for x in processed])
else:
raise ValueError("Invalid embedding type")
# Run inference
outputs = self.runner.run(onnx_inputs)
if self.embedding_type == "text":
embeddings = outputs[2] # text embeddings
elif self.embedding_type == "vision":
embeddings = outputs[3] # image embeddings
else:
raise ValueError("Invalid embedding type")
# Run inference
outputs = self.runner.run(onnx_inputs)
if self.embedding_type == "text":
embeddings = outputs[2] # text embeddings
elif self.embedding_type == "vision":
embeddings = outputs[3] # image embeddings
else:
raise ValueError("Invalid embedding type")
embeddings = self._postprocess_outputs(embeddings)
return [embedding for embedding in embeddings]
embeddings = self._postprocess_outputs(embeddings)
return [embedding for embedding in embeddings]

View File

@@ -34,7 +34,7 @@ from frigate.data_processing.real_time.audio_transcription import (
AudioTranscriptionRealTimeProcessor,
)
from frigate.ffmpeg_presets import parse_preset_input
from frigate.log import LogPipe, suppress_stderr_during
from frigate.log import LogPipe, redirect_output_to_logger
from frigate.object_detection.base import load_labels
from frigate.util.builtin import get_ffmpeg_arg_list
from frigate.util.process import FrigateProcess
@@ -367,17 +367,17 @@ class AudioEventMaintainer(threading.Thread):
class AudioTfl:
@redirect_output_to_logger(logger, logging.DEBUG)
def __init__(self, stop_event: threading.Event, num_threads=2):
self.stop_event = stop_event
self.num_threads = num_threads
self.labels = load_labels("/audio-labelmap.txt", prefill=521)
# Suppress TFLite delegate creation messages that bypass Python logging
with suppress_stderr_during("tflite_interpreter_init"):
self.interpreter = Interpreter(
model_path="/cpu_audio_model.tflite",
num_threads=self.num_threads,
)
self.interpreter.allocate_tensors()
self.interpreter = Interpreter(
model_path="/cpu_audio_model.tflite",
num_threads=self.num_threads,
)
self.interpreter.allocate_tensors()
self.tensor_input_details = self.interpreter.get_input_details()
self.tensor_output_details = self.interpreter.get_output_details()

View File

@@ -46,7 +46,7 @@ def should_update_state(prev_event: Event, current_event: Event) -> bool:
if prev_event["sub_label"] != current_event["sub_label"]:
return True
if set(prev_event["current_zones"]) != set(current_event["current_zones"]):
if len(prev_event["current_zones"]) < len(current_event["current_zones"]):
return True
return False

View File

@@ -153,7 +153,7 @@ PRESETS_HW_ACCEL_ENCODE_BIRDSEYE = {
FFMPEG_HWACCEL_VAAPI: "{0} -hide_banner -hwaccel vaapi -hwaccel_output_format vaapi -hwaccel_device {3} {1} -c:v h264_vaapi -g 50 -bf 0 -profile:v high -level:v 4.1 -sei:v 0 -an -vf format=vaapi|nv12,hwupload {2}",
"preset-intel-qsv-h264": "{0} -hide_banner {1} -c:v h264_qsv -g 50 -bf 0 -profile:v high -level:v 4.1 -async_depth:v 1 {2}",
"preset-intel-qsv-h265": "{0} -hide_banner {1} -c:v h264_qsv -g 50 -bf 0 -profile:v main -level:v 4.1 -async_depth:v 1 {2}",
FFMPEG_HWACCEL_NVIDIA: "{0} -hide_banner {1} -c:v h264_nvenc -g 50 -profile:v high -level:v auto -preset:v p2 -tune:v ll {2}",
FFMPEG_HWACCEL_NVIDIA: "{0} -hide_banner {1} -hwaccel cuda -hwaccel_device {3} -c:v h264_nvenc -g 50 -profile:v high -level:v auto -preset:v p2 -tune:v ll {2}",
"preset-jetson-h264": "{0} -hide_banner {1} -c:v h264_nvmpi -profile high {2}",
"preset-jetson-h265": "{0} -hide_banner {1} -c:v h264_nvmpi -profile main {2}",
FFMPEG_HWACCEL_RKMPP: "{0} -hide_banner {1} -c:v h264_rkmpp -profile:v high {2}",

View File

@@ -101,7 +101,6 @@ When forming your description:
Your response MUST be a flat JSON object with:
- `title` (string): A concise, direct title that describes the primary action or event in the sequence, not just what you literally see. Use spatial context when available to make titles more meaningful. When multiple objects/actions are present, prioritize whichever is most prominent or occurs first. Use names from "Objects in Scene" based on what you visually observe. If you see both a name and an unidentified object of the same type but visually observe only one person/object, use ONLY the name. Examples: "Joe walking dog", "Person taking out trash", "Vehicle arriving in driveway", "Joe accessing vehicle", "Person leaving porch for driveway".
- `scene` (string): A narrative description of what happens across the sequence from start to finish, in chronological order. Start by describing how the sequence begins, then describe the progression of events. **Describe all significant movements and actions in the order they occur.** For example, if a vehicle arrives and then a person exits, describe both actions sequentially. **Only describe actions you can actually observe happening in the frames provided.** Do not infer or assume actions that aren't visible (e.g., if you see someone walking but never see them sit, don't say they sat down). Include setting, detected objects, and their observable actions. Avoid speculation or filling in assumed behaviors. Your description should align with and support the threat level you assign.
- `shortSummary` (string): A brief 2-sentence summary of the scene, suitable for notifications. Should capture the key activity and context without full detail. This should be a condensed version of the scene description above.
- `confidence` (float): 0-1 confidence in your analysis. Higher confidence when objects/actions are clearly visible and context is unambiguous. Lower confidence when the sequence is unclear, objects are partially obscured, or context is ambiguous.
- `potential_threat_level` (integer): 0, 1, or 2 as defined in "Normal Activity Patterns for This Property" above. Your threat level must be consistent with your scene description and the guidance above.
{get_concern_prompt()}
@@ -179,7 +178,6 @@ Each line represents a detection state, not necessarily unique individuals. Pare
start_ts: float,
end_ts: float,
events: list[dict[str, Any]],
preferred_language: str | None,
debug_save: bool,
) -> str | None:
"""Generate a summary of review item descriptions over a period of time."""
@@ -193,8 +191,6 @@ Input format: Each event is a JSON object with:
- "title", "scene", "confidence", "potential_threat_level" (0-2), "other_concerns", "camera", "time", "start_time", "end_time"
- "context": array of related events from other cameras that occurred during overlapping time periods
**Note: Use the "scene" field for event descriptions in the report. Ignore any "shortSummary" field if present.**
Report Structure - Use this EXACT format:
# Security Summary - {time_range}
@@ -236,9 +232,6 @@ Guidelines:
for event in events:
timeline_summary_prompt += f"\n{event}\n"
if preferred_language:
timeline_summary_prompt += f"\nProvide your answer in {preferred_language}"
if debug_save:
with open(
os.path.join(

View File

@@ -3,7 +3,7 @@
import logging
from typing import Any, Optional
from httpx import RemoteProtocolError, TimeoutException
from httpx import TimeoutException
from ollama import Client as ApiClient
from ollama import ResponseError
@@ -68,12 +68,7 @@ class OllamaClient(GenAIClient):
f"Ollama tokens used: eval_count={result.get('eval_count')}, prompt_eval_count={result.get('prompt_eval_count')}"
)
return result["response"].strip()
except (
TimeoutException,
ResponseError,
RemoteProtocolError,
ConnectionError,
) as e:
except (TimeoutException, ResponseError, ConnectionError) as e:
logger.warning("Ollama returned an error: %s", str(e))
return None

frigate/jobs/__init__.py Normal file (0 lines added)
View File

frigate/jobs/job.py Normal file (21 lines added)
View File

@@ -0,0 +1,21 @@
"""Generic base class for long-running background jobs."""
from dataclasses import asdict, dataclass, field
from typing import Any, Optional
@dataclass
class Job:
"""Base class for long-running background jobs."""
id: str = field(default_factory=lambda: __import__("uuid").uuid4().__str__()[:12])
job_type: str = "" # Must be set by subclasses
status: str = "queued" # queued, running, success, failed, cancelled
results: Optional[dict[str, Any]] = None
start_time: Optional[float] = None
end_time: Optional[float] = None
error_message: Optional[str] = None
def to_dict(self) -> dict[str, Any]:
"""Convert to dictionary for WebSocket transmission."""
return asdict(self)

frigate/jobs/manager.py Normal file (70 lines added)
View File

@@ -0,0 +1,70 @@
"""Generic job management for long-running background tasks."""
import threading
from typing import Optional
from frigate.jobs.job import Job
from frigate.types import JobStatusTypesEnum
# Global state and locks for enforcing single concurrent job per job type
_job_locks: dict[str, threading.Lock] = {}
_current_jobs: dict[str, Optional[Job]] = {}
# Keep completed jobs for retrieval, keyed by (job_type, job_id)
_completed_jobs: dict[tuple[str, str], Job] = {}
def _get_lock(job_type: str) -> threading.Lock:
"""Get or create a lock for the specified job type."""
if job_type not in _job_locks:
_job_locks[job_type] = threading.Lock()
return _job_locks[job_type]
def set_current_job(job: Job) -> None:
"""Set the current job for a given job type."""
lock = _get_lock(job.job_type)
with lock:
# Store the previous job if it was completed
old_job = _current_jobs.get(job.job_type)
if old_job and old_job.status in (
JobStatusTypesEnum.success,
JobStatusTypesEnum.failed,
JobStatusTypesEnum.cancelled,
):
_completed_jobs[(job.job_type, old_job.id)] = old_job
_current_jobs[job.job_type] = job
def clear_current_job(job_type: str, job_id: Optional[str] = None) -> None:
"""Clear the current job for a given job type, optionally checking the ID."""
lock = _get_lock(job_type)
with lock:
if job_type in _current_jobs:
current = _current_jobs[job_type]
if current is None or (job_id is None or current.id == job_id):
_current_jobs[job_type] = None
def get_current_job(job_type: str) -> Optional[Job]:
"""Get the current running/queued job for a given job type, if any."""
lock = _get_lock(job_type)
with lock:
return _current_jobs.get(job_type)
def get_job_by_id(job_type: str, job_id: str) -> Optional[Job]:
"""Get job by ID. Checks current job first, then completed jobs."""
lock = _get_lock(job_type)
with lock:
# Check if it's the current job
current = _current_jobs.get(job_type)
if current and current.id == job_id:
return current
# Check if it's a completed job
return _completed_jobs.get((job_type, job_id))
def job_is_running(job_type: str) -> bool:
"""Check if a job of the given type is currently running or queued."""
job = get_current_job(job_type)
return job is not None and job.status in ("queued", "running")
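
A minimal sketch of how these helpers compose with the Job dataclass from frigate/jobs/job.py (status strings follow the dataclass defaults; JobStatusTypesEnum itself is not shown in this diff):

```python
from frigate.jobs.job import Job
from frigate.jobs.manager import (
    get_current_job,
    get_job_by_id,
    job_is_running,
    set_current_job,
)

job = Job(job_type="media_sync")  # id defaults to the first 12 characters of a uuid4
if not job_is_running("media_sync"):
    set_current_job(job)          # becomes the current "media_sync" job

job.status = "running"
# ... perform the work ...
job.status = "success"

# The finished job stays the "current" job (still retrievable by id) until the
# next set_current_job() call moves it into the completed-jobs map.
assert get_current_job("media_sync") is job
assert get_job_by_id("media_sync", job.id) is job
```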

frigate/jobs/media_sync.py Normal file (135 lines added)
View File

@@ -0,0 +1,135 @@
"""Media sync job management with background execution."""
import logging
import threading
from dataclasses import dataclass, field
from datetime import datetime
from typing import Optional
from frigate.comms.inter_process import InterProcessRequestor
from frigate.const import UPDATE_JOB_STATE
from frigate.jobs.job import Job
from frigate.jobs.manager import (
get_current_job,
get_job_by_id,
job_is_running,
set_current_job,
)
from frigate.types import JobStatusTypesEnum
from frigate.util.media import sync_all_media
logger = logging.getLogger(__name__)
@dataclass
class MediaSyncJob(Job):
"""In-memory job state for media sync operations."""
job_type: str = "media_sync"
dry_run: bool = False
media_types: list[str] = field(default_factory=lambda: ["all"])
force: bool = False
class MediaSyncRunner(threading.Thread):
"""Thread-based runner for media sync jobs."""
def __init__(self, job: MediaSyncJob) -> None:
super().__init__(daemon=True, name="media_sync")
self.job = job
self.requestor = InterProcessRequestor()
def run(self) -> None:
"""Execute the media sync job and broadcast status updates."""
try:
# Update job status to running
self.job.status = JobStatusTypesEnum.running
self.job.start_time = datetime.now().timestamp()
self._broadcast_status()
# Execute sync with provided parameters
logger.debug(
f"Starting media sync job {self.job.id}: "
f"media_types={self.job.media_types}, "
f"dry_run={self.job.dry_run}, "
f"force={self.job.force}"
)
results = sync_all_media(
dry_run=self.job.dry_run,
media_types=self.job.media_types,
force=self.job.force,
)
# Store results and mark as complete
self.job.results = results.to_dict()
self.job.status = JobStatusTypesEnum.success
self.job.end_time = datetime.now().timestamp()
logger.debug(f"Media sync job {self.job.id} completed successfully")
self._broadcast_status()
except Exception as e:
logger.error(f"Media sync job {self.job.id} failed: {e}", exc_info=True)
self.job.status = JobStatusTypesEnum.failed
self.job.error_message = str(e)
self.job.end_time = datetime.now().timestamp()
self._broadcast_status()
finally:
if self.requestor:
self.requestor.stop()
def _broadcast_status(self) -> None:
"""Broadcast job status update via IPC to all WebSocket subscribers."""
try:
self.requestor.send_data(
UPDATE_JOB_STATE,
self.job.to_dict(),
)
except Exception as e:
logger.warning(f"Failed to broadcast media sync status: {e}")
def start_media_sync_job(
dry_run: bool = False,
media_types: Optional[list[str]] = None,
force: bool = False,
) -> Optional[str]:
"""Start a new media sync job if none is currently running.
Returns job ID on success, None if job already running.
"""
# Check if a job is already running
if job_is_running("media_sync"):
current = get_current_job("media_sync")
logger.warning(
f"Media sync job {current.id} is already running. Rejecting new request."
)
return None
# Create and start new job
job = MediaSyncJob(
dry_run=dry_run,
media_types=media_types or ["all"],
force=force,
)
logger.debug(f"Creating new media sync job: {job.id}")
set_current_job(job)
# Start the background runner
runner = MediaSyncRunner(job)
runner.start()
return job.id
def get_current_media_sync_job() -> Optional[MediaSyncJob]:
"""Get the current running/queued media sync job, if any."""
return get_current_job("media_sync")
def get_media_sync_job_by_id(job_id: str) -> Optional[MediaSyncJob]:
"""Get media sync job by ID. Currently only tracks the current job."""
return get_job_by_id("media_sync", job_id)
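
Hedged sketch of how a caller might use this module (the actual API endpoint wiring lives elsewhere in this PR and is not shown in this hunk):

```python
from frigate.jobs.media_sync import (
    get_media_sync_job_by_id,
    start_media_sync_job,
)

job_id = start_media_sync_job(dry_run=True, media_types=["all"])
if job_id is None:
    # a media_sync job is already queued or running; the request was rejected
    ...
else:
    job = get_media_sync_job_by_id(job_id)
    # status starts as "queued"/"running"; the final state is broadcast to
    # websocket subscribers on the UPDATE_JOB_STATE topic by MediaSyncRunner
    print(job.status)
```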

View File

@@ -80,15 +80,10 @@ def apply_log_levels(default: str, log_levels: dict[str, LogLevel]) -> None:
log_levels = {
"absl": LogLevel.error,
"httpx": LogLevel.error,
"h5py": LogLevel.error,
"keras": LogLevel.error,
"matplotlib": LogLevel.error,
"tensorflow": LogLevel.error,
"tensorflow.python": LogLevel.error,
"werkzeug": LogLevel.error,
"ws4py": LogLevel.error,
"PIL": LogLevel.warning,
"numba": LogLevel.warning,
**log_levels,
}
@@ -323,31 +318,3 @@ def suppress_os_output(func: Callable) -> Callable:
return result
return wrapper
@contextmanager
def suppress_stderr_during(operation_name: str) -> Generator[None, None, None]:
"""
Context manager to suppress stderr output during a specific operation.
Useful for silencing LLVM debug output, CUDA messages, and other native
library logging that cannot be controlled via Python logging or environment
variables. Completely redirects file descriptor 2 (stderr) to /dev/null.
Usage:
with suppress_stderr_during("model_conversion"):
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()
Args:
operation_name: Name of the operation for debugging purposes
"""
original_stderr_fd = os.dup(2)
devnull = os.open(os.devnull, os.O_WRONLY)
try:
os.dup2(devnull, 2)
yield
finally:
os.dup2(original_stderr_fd, 2)
os.close(devnull)
os.close(original_stderr_fd)
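
The `redirect_output_to_logger(logger, logging.DEBUG)` decorator that replaces `suppress_stderr_during` throughout this diff is not itself shown. A rough sketch of one possible shape, assuming it captures the wrapped call's output and re-emits it through the logger (the real frigate.log implementation may differ, for example by redirecting at the file-descriptor level):

```python
import functools
import io
import logging
from contextlib import redirect_stderr, redirect_stdout


def redirect_output_to_logger(log: logging.Logger, level: int):
    """Hypothetical sketch: forward stdout/stderr from the wrapped call to `log`."""

    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            buf = io.StringIO()
            with redirect_stdout(buf), redirect_stderr(buf):
                result = func(*args, **kwargs)
            for line in buf.getvalue().splitlines():
                if line.strip():
                    log.log(level, line)
            return result

        return wrapper

    return decorator
```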

View File

@@ -80,6 +80,14 @@ class Recordings(Model):
regions = IntegerField(null=True)
class ExportCase(Model):
id = CharField(null=False, primary_key=True, max_length=30)
name = CharField(index=True, max_length=100)
description = TextField(null=True)
created_at = DateTimeField()
updated_at = DateTimeField()
class Export(Model):
id = CharField(null=False, primary_key=True, max_length=30)
camera = CharField(index=True, max_length=20)
@@ -88,6 +96,12 @@ class Export(Model):
video_path = CharField(unique=True)
thumb_path = CharField(unique=True)
in_progress = BooleanField()
export_case = ForeignKeyField(
ExportCase,
null=True,
backref="exports",
column_name="export_case_id",
)
class ReviewSegment(Model):
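
Hedged sketch of querying exports through the new foreign key (field and backref names as declared above; the id value is made up):

```python
from frigate.models import Export, ExportCase

case = ExportCase.get(ExportCase.id == "case_id_123")

# explicit filter on the FK column...
exports_for_case = Export.select().where(Export.export_case == case)

# ...or through the backref declared on the foreign key
same_exports = case.exports
```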

View File

@@ -139,11 +139,9 @@ class OutputProcess(FrigateProcess):
if CameraConfigUpdateEnum.add in updates:
for camera in updates["add"]:
jsmpeg_cameras[camera] = JsmpegCamera(
self.config.cameras[camera], self.stop_event, websocket_server
)
preview_recorders[camera] = PreviewRecorder(
self.config.cameras[camera]
cam_config, self.stop_event, websocket_server
)
preview_recorders[camera] = PreviewRecorder(cam_config)
preview_write_times[camera] = 0
if (

View File

@@ -13,9 +13,8 @@ from playhouse.sqlite_ext import SqliteExtDatabase
from frigate.config import CameraConfig, FrigateConfig, RetainModeEnum
from frigate.const import CACHE_DIR, CLIPS_DIR, MAX_WAL_SIZE, RECORD_DIR
from frigate.models import Previews, Recordings, ReviewSegment, UserReviewStatus
from frigate.record.util import remove_empty_directories, sync_recordings
from frigate.util.builtin import clear_and_unlink
from frigate.util.time import get_tomorrow_at_time
from frigate.util.media import remove_empty_directories
logger = logging.getLogger(__name__)
@@ -119,7 +118,6 @@ class RecordingCleanup(threading.Thread):
Recordings.path,
Recordings.objects,
Recordings.motion,
Recordings.dBFS,
)
.where(
(Recordings.camera == config.name)
@@ -127,7 +125,6 @@ class RecordingCleanup(threading.Thread):
(
(Recordings.end_time < continuous_expire_date)
& (Recordings.motion == 0)
& (Recordings.dBFS == 0)
)
| (Recordings.end_time < motion_expire_date)
)
@@ -187,7 +184,6 @@ class RecordingCleanup(threading.Thread):
mode == RetainModeEnum.motion
and recording.motion == 0
and recording.objects == 0
and recording.dBFS == 0
)
or (mode == RetainModeEnum.active_objects and recording.objects == 0)
):
@@ -350,11 +346,6 @@ class RecordingCleanup(threading.Thread):
logger.debug("End expire recordings.")
def run(self) -> None:
# on startup sync recordings with disk if enabled
if self.config.record.sync_recordings:
sync_recordings(limited=False)
next_sync = get_tomorrow_at_time(3)
# Expire tmp clips every minute, recordings and clean directories every hour.
for counter in itertools.cycle(range(self.config.record.expire_interval)):
if self.stop_event.wait(60):
@@ -363,14 +354,6 @@ class RecordingCleanup(threading.Thread):
self.clean_tmp_previews()
if (
self.config.record.sync_recordings
and datetime.datetime.now().astimezone(datetime.timezone.utc)
> next_sync
):
sync_recordings(limited=True)
next_sync = get_tomorrow_at_time(3)
if counter == 0:
self.clean_tmp_clips()
self.expire_recordings()

View File

@@ -64,6 +64,7 @@ class RecordingExporter(threading.Thread):
end_time: int,
playback_factor: PlaybackFactorEnum,
playback_source: PlaybackSourceEnum,
export_case_id: Optional[str] = None,
) -> None:
super().__init__()
self.config = config
@@ -75,6 +76,7 @@ class RecordingExporter(threading.Thread):
self.end_time = end_time
self.playback_factor = playback_factor
self.playback_source = playback_source
self.export_case_id = export_case_id
# ensure export thumb dir
Path(os.path.join(CLIPS_DIR, "export")).mkdir(exist_ok=True)
@@ -226,7 +228,7 @@ class RecordingExporter(threading.Thread):
ffmpeg_cmd = (
parse_preset_hardware_acceleration_encode(
self.config.ffmpeg.ffmpeg_path,
self.config.ffmpeg.hwaccel_args,
self.config.cameras[self.camera].record.export.hwaccel_args,
f"-an {ffmpeg_input}",
f"{self.config.cameras[self.camera].record.export.timelapse_args} -movflags +faststart",
EncodeTypeEnum.timelapse,
@@ -317,7 +319,7 @@ class RecordingExporter(threading.Thread):
ffmpeg_cmd = (
parse_preset_hardware_acceleration_encode(
self.config.ffmpeg.ffmpeg_path,
self.config.ffmpeg.hwaccel_args,
self.config.cameras[self.camera].record.export.hwaccel_args,
f"{TIMELAPSE_DATA_INPUT_ARGS} {ffmpeg_input}",
f"{self.config.cameras[self.camera].record.export.timelapse_args} -movflags +faststart {video_path}",
EncodeTypeEnum.timelapse,
@@ -348,17 +350,20 @@ class RecordingExporter(threading.Thread):
video_path = f"{EXPORT_DIR}/{self.camera}_{filename_start_datetime}-{filename_end_datetime}_{cleaned_export_id}.mp4"
thumb_path = self.save_thumbnail(self.export_id)
Export.insert(
{
Export.id: self.export_id,
Export.camera: self.camera,
Export.name: export_name,
Export.date: self.start_time,
Export.video_path: video_path,
Export.thumb_path: thumb_path,
Export.in_progress: True,
}
).execute()
export_values = {
Export.id: self.export_id,
Export.camera: self.camera,
Export.name: export_name,
Export.date: self.start_time,
Export.video_path: video_path,
Export.thumb_path: thumb_path,
Export.in_progress: True,
}
if self.export_case_id is not None:
export_values[Export.export_case] = self.export_case_id
Export.insert(export_values).execute()
try:
if self.playback_source == PlaybackSourceEnum.recordings:

View File

@@ -67,7 +67,7 @@ class SegmentInfo:
if (
not keep
and retain_mode == RetainModeEnum.motion
and (self.motion_count > 0 or self.average_dBFS != 0)
and (self.motion_count > 0 or self.average_dBFS > 0)
):
keep = True

Some files were not shown because too many files have changed in this diff.