Compare commits


1 Commits

Author SHA1 Message Date
dependabot[bot]
1e5375cb68 Bump actions/setup-python from 5.4.0 to 6.1.0
Bumps [actions/setup-python](https://github.com/actions/setup-python) from 5.4.0 to 6.1.0.
- [Release notes](https://github.com/actions/setup-python/releases)
- [Commits](https://github.com/actions/setup-python/compare/v5.4.0...v6.1.0)

---
updated-dependencies:
- dependency-name: actions/setup-python
  dependency-version: 6.1.0
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-11-25 11:04:49 +00:00
516 changed files with 5136 additions and 15994 deletions

View File

@@ -22,7 +22,6 @@ autotrack
autotracked
autotracker
autotracking
backchannel
balena
Beelink
BGRA
@@ -192,7 +191,6 @@ ONVIF
openai
opencv
openvino
overfitting
OWASP
paddleocr
paho
@@ -317,4 +315,4 @@ yolo
yolonas
yolox
zeep
zerolatency
zerolatency

View File

@@ -1,129 +0,0 @@
title: "[Beta Support]: "
labels: ["support", "triage", "beta"]
body:
- type: markdown
attributes:
value: |
Thank you for testing Frigate beta versions! Use this form for support with beta releases.
**Note:** Beta versions may have incomplete features, known issues, or unexpected behavior. Please check the [release notes](https://github.com/blakeblackshear/frigate/releases) and [recent discussions][discussions] for known beta issues before submitting.
Before submitting, read the [beta documentation][docs].
[docs]: https://deploy-preview-19787--frigate-docs.netlify.app/
- type: textarea
id: description
attributes:
label: Describe the problem you are having
description: Please be as detailed as possible. Include what you expected to happen vs what actually happened.
validations:
required: true
- type: input
id: version
attributes:
label: Beta Version
description: Visible on the System page in the Web UI. Please include the full version including the build identifier (eg. 0.17.0-beta1)
placeholder: "0.17.0-beta1"
validations:
required: true
- type: dropdown
id: issue-category
attributes:
label: Issue Category
description: What area is your issue related to? This helps us understand the context.
options:
- Object Detection / Detectors
- Hardware Acceleration
- Configuration / Setup
- WebUI / Frontend
- Recordings / Storage
- Notifications / Events
- Integration (Home Assistant, etc)
- Performance / Stability
- Installation / Updates
- Other
validations:
required: true
- type: textarea
id: config
attributes:
label: Frigate config file
description: This will be automatically formatted into code, so no need for backticks. Remove any sensitive information like passwords or URLs.
render: yaml
validations:
required: true
- type: textarea
id: frigatelogs
attributes:
label: Relevant Frigate log output
description: Please copy and paste any relevant Frigate log output. Include logs before and after your exact error when possible. This will be automatically formatted into code, so no need for backticks.
render: shell
validations:
required: true
- type: textarea
id: go2rtclogs
attributes:
label: Relevant go2rtc log output (if applicable)
description: If your issue involves cameras, streams, or playback, please include go2rtc logs. Logs can be viewed via the Frigate UI, Docker, or the go2rtc dashboard. This will be automatically formatted into code, so no need for backticks.
render: shell
- type: dropdown
id: install-method
attributes:
label: Install method
options:
- Home Assistant Add-on
- Docker Compose
- Docker CLI
- Proxmox via Docker
- Proxmox via TTeck Script
- Windows WSL2
validations:
required: true
- type: textarea
id: docker
attributes:
label: docker-compose file or Docker CLI command
description: This will be automatically formatted into code, so no need for backticks. Include relevant environment variables and device mappings.
render: yaml
validations:
required: true
- type: dropdown
id: os
attributes:
label: Operating system
options:
- Home Assistant OS
- Debian
- Ubuntu
- Other Linux
- Proxmox
- UNRAID
- Windows
- Other
validations:
required: true
- type: input
id: hardware
attributes:
label: CPU / GPU / Hardware
description: Provide details about your hardware (e.g., Intel i5-9400, NVIDIA RTX 3060, Raspberry Pi 4, etc)
placeholder: "Intel i7-10700, NVIDIA GTX 1660"
- type: textarea
id: screenshots
attributes:
label: Screenshots
description: Screenshots of the issue, System metrics pages, or any relevant UI. Drag and drop or paste images directly.
- type: textarea
id: steps-to-reproduce
attributes:
label: Steps to reproduce
description: If applicable, provide detailed steps to reproduce the issue
placeholder: |
1. Go to '...'
2. Click on '...'
3. See error
- type: textarea
id: other
attributes:
label: Any other information that may be helpful
description: Additional context, related issues, when the problem started appearing, etc.

View File

@@ -6,8 +6,6 @@ body:
value: |
Use this form to submit a reproducible bug in Frigate or Frigate's UI.
**⚠️ If you are running a beta version (0.17.0-beta or similar), please use the [Beta Support template](https://github.com/blakeblackshear/frigate/discussions/new?category=beta-support) instead.**
Before submitting your bug report, please ask the AI with the "Ask AI" button on the [official documentation site][ai] about your issue, [search the discussions][discussions], look at recent open and closed [pull requests][prs], read the [official Frigate documentation][docs], and read the [Frigate FAQ][faq] pinned at the Discussion page to see if your bug has already been fixed by the developers or reported by the community.
**If you are unsure if your issue is actually a bug or not, please submit a support request first.**

View File

@@ -1,2 +0,0 @@
Never write strings in the frontend directly, always write to and reference the relevant translations file.
Always conform new and refactored code to the existing coding style in the project.

View File

@@ -15,7 +15,7 @@ concurrency:
cancel-in-progress: true
env:
PYTHON_VERSION: 3.11
PYTHON_VERSION: 3.9
jobs:
amd64_build:
@@ -23,7 +23,7 @@ jobs:
name: AMD64 Build
steps:
- name: Check out code
uses: actions/checkout@v6
uses: actions/checkout@v5
with:
persist-credentials: false
- name: Set up QEMU and Buildx
@@ -47,7 +47,7 @@ jobs:
name: ARM Build
steps:
- name: Check out code
uses: actions/checkout@v6
uses: actions/checkout@v5
with:
persist-credentials: false
- name: Set up QEMU and Buildx
@@ -82,7 +82,7 @@ jobs:
name: Jetson Jetpack 6
steps:
- name: Check out code
uses: actions/checkout@v6
uses: actions/checkout@v5
with:
persist-credentials: false
- name: Set up QEMU and Buildx
@@ -113,7 +113,7 @@ jobs:
- amd64_build
steps:
- name: Check out code
uses: actions/checkout@v6
uses: actions/checkout@v5
with:
persist-credentials: false
- name: Set up QEMU and Buildx
@@ -136,6 +136,7 @@ jobs:
*.cache-to=type=registry,ref=${{ steps.setup.outputs.cache-name }}-tensorrt,mode=max
- name: AMD/ROCm general build
env:
AMDGPU: gfx
HSA_OVERRIDE: 0
uses: docker/bake-action@v6
with:
@@ -154,7 +155,7 @@ jobs:
- arm64_build
steps:
- name: Check out code
uses: actions/checkout@v6
uses: actions/checkout@v5
with:
persist-credentials: false
- name: Set up QEMU and Buildx
@@ -179,7 +180,7 @@ jobs:
- arm64_build
steps:
- name: Check out code
uses: actions/checkout@v6
uses: actions/checkout@v5
with:
persist-credentials: false
- name: Set up QEMU and Buildx

View File

@@ -16,12 +16,12 @@ jobs:
name: Web - Lint
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v6
- uses: actions/checkout@v5
with:
persist-credentials: false
- uses: actions/setup-node@v6
- uses: actions/setup-node@master
with:
node-version: 20.x
node-version: 16.x
- run: npm install
working-directory: ./web
- name: Lint
@@ -32,10 +32,10 @@ jobs:
name: Web - Test
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v6
- uses: actions/checkout@v5
with:
persist-credentials: false
- uses: actions/setup-node@v6
- uses: actions/setup-node@master
with:
node-version: 20.x
- run: npm install
@@ -52,11 +52,11 @@ jobs:
name: Python Checks
steps:
- name: Check out the repository
uses: actions/checkout@v6
uses: actions/checkout@v5
with:
persist-credentials: false
- name: Set up Python ${{ env.DEFAULT_PYTHON }}
uses: actions/setup-python@v5.4.0
uses: actions/setup-python@v6.1.0
with:
python-version: ${{ env.DEFAULT_PYTHON }}
- name: Install requirements
@@ -75,10 +75,10 @@ jobs:
name: Python Tests
steps:
- name: Check out code
uses: actions/checkout@v6
uses: actions/checkout@v5
with:
persist-credentials: false
- uses: actions/setup-node@v6
- uses: actions/setup-node@master
with:
node-version: 20.x
- name: Install devcontainer cli

View File

@@ -10,7 +10,7 @@ jobs:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v6
- uses: actions/checkout@v5
with:
persist-credentials: false
- id: lowercaseRepo

View File

@@ -1,31 +1,28 @@
<p align="center">
<img align="center" alt="logo" src="docs/static/img/branding/frigate.png">
<img align="center" alt="logo" src="docs/static/img/frigate.png">
</p>
# Frigate NVR™ - 一个具有实时目标检测的本地 NVR
# Frigate - 一个具有实时目标检测的本地NVR
[English](https://github.com/blakeblackshear/frigate) | \[简体中文\]
<a href="https://hosted.weblate.org/engage/frigate-nvr/-/zh_Hans/">
<img src="https://hosted.weblate.org/widget/frigate-nvr/-/zh_Hans/svg-badge.svg" alt="翻译状态" />
</a>
[English](https://github.com/blakeblackshear/frigate) | \[简体中文\]
一个完整的本地网络视频录像机NVR专为[Home Assistant](https://www.home-assistant.io)设计具备AI物体检测功能。使用OpenCV和TensorFlow在本地为IP摄像头执行实时物体检测。
[![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)
一个完整的本地网络视频录像机NVR专为[Home Assistant](https://www.home-assistant.io)设计,具备 AI 目标/物体检测功能。使用 OpenCV 和 TensorFlow 在本地为 IP 摄像头执行实时物体检测。
强烈推荐使用 GPU 或者 AI 加速器(例如[Google Coral 加速器](https://coral.ai/products/) 或者 [Hailo](https://hailo.ai/)等)。它们的运行效率远远高于现在的顶级 CPU并且功耗也极低。
- 通过[自定义组件](https://github.com/blakeblackshear/frigate-hass-integration)与 Home Assistant 紧密集成
- 设计上通过仅在必要时和必要地点寻找目标,最大限度地减少资源使用并最大化性能
强烈推荐使用GPU或者AI加速器例如[Google Coral加速器](https://coral.ai/products/) 或者 [Hailo](https://hailo.ai/)。它们的性能甚至超过目前的顶级CPU并且可以以极低的耗电实现更优的性能。
- 通过[自定义组件](https://github.com/blakeblackshear/frigate-hass-integration)与Home Assistant紧密集成
- 设计上通过仅在必要时和必要地点寻找物体,最大限度地减少资源使用并最大化性能
- 大量利用多进程处理,强调实时性而非处理每一帧
- 使用非常低开销的画面变动检测(也叫运动检测来确定运行目标检测的位置
- 使用 TensorFlow 进行目标检测,运行在单独的进程中以达到最大 FPS
- 通过 MQTT 进行通信,便于集成到其他系统中
- 使用非常低开销的运动检测来确定运行物体检测的位置
- 使用TensorFlow进行物体检测运行在单独的进程中以达到最大FPS
- 通过MQTT进行通信便于集成到其他系统中
- 根据检测到的物体设置保留时间进行视频录制
- 24/7 全天候录制
- 通过 RTSP 重新流传输以减少摄像头的连接数
- 支持 WebRTCMSE实现低延迟的实时观看
- 24/7全天候录制
- 通过RTSP重新流传输以减少摄像头的连接数
- 支持WebRTCMSE实现低延迟的实时观看
## 社区中文翻译文档
@@ -35,56 +32,39 @@
如果您想通过捐赠支持开发,请使用 [Github Sponsors](https://github.com/sponsors/blakeblackshear)。
## 协议
本项目采用 **MIT 许可证**授权。
**代码部分**:本代码库中的源代码、配置文件和文档均遵循 [MIT 许可证](LICENSE)。您可以自由使用、修改和分发这些代码,但必须保留原始版权声明。
**商标部分**“Frigate”名称、“Frigate NVR”品牌以及 Frigate 的 Logo 为 **Frigate LLC 的商标****不在** MIT 许可证覆盖范围内。
有关品牌资产的规范使用详情,请参阅我们的[《商标政策》](TRADEMARK.md)。
## 截图
### 实时监控面板
<div>
<img width="800" alt="实时监控面板" src="https://github.com/blakeblackshear/frigate/assets/569905/5e713cb9-9db5-41dc-947a-6937c3bc376e">
</div>
### 简单的核查工作流程
<div>
<img width="800" alt="简单的审查工作流程" src="https://github.com/blakeblackshear/frigate/assets/569905/6fed96e8-3b18-40e5-9ddc-31e6f3c9f2ff">
</div>
### 多摄像头可按时间轴查看
<div>
<img width="800" alt="多摄像头可按时间轴查看" src="https://github.com/blakeblackshear/frigate/assets/569905/d6788a15-0eeb-4427-a8d4-80b93cae3d74">
</div>
### 内置遮罩和区域编辑器
<div>
<img width="800" alt="内置遮罩和区域编辑器" src="https://github.com/blakeblackshear/frigate/assets/569905/d7885fc3-bfe6-452f-b7d0-d957cb3e31f5">
</div>
## 翻译
## 翻译
我们使用 [Weblate](https://hosted.weblate.org/projects/frigate-nvr/) 平台提供翻译支持,欢迎参与进来一起完善。
## 非官方中文讨论社区
欢迎加入中文讨论 QQ 群:[1043861059](https://qm.qq.com/q/7vQKsTmSz)
## 非官方中文讨论社区
欢迎加入中文讨论QQ群[1043861059](https://qm.qq.com/q/7vQKsTmSz)
Bilibili：https://space.bilibili.com/3546894915602564
## 中文社区赞助商
## 中文社区赞助商
[![EdgeOne](https://edgeone.ai/media/34fe3a45-492d-4ea4-ae5d-ea1087ca7b4b.png)](https://edgeone.ai/zh?from=github)
本项目 CDN 加速及安全防护由 Tencent EdgeOne 赞助
---
**Copyright © 2025 Frigate LLC.**

View File

@@ -237,18 +237,8 @@ ENV PYTHONWARNINGS="ignore:::numpy.core.getlimits"
# Set HailoRT to disable logging
ENV HAILORT_LOGGER_PATH=NONE
# TensorFlow C++ logging suppression (must be set before import)
# TF_CPP_MIN_LOG_LEVEL: 0=all, 1=INFO+, 2=WARNING+, 3=ERROR+ (we use 3 for errors only)
# TensorFlow error only
ENV TF_CPP_MIN_LOG_LEVEL=3
# Suppress verbose logging from TensorFlow C++ code
ENV TF_CPP_MIN_VLOG_LEVEL=3
# Disable oneDNN optimization messages ("optimized with oneDNN...")
ENV TF_ENABLE_ONEDNN_OPTS=0
# Suppress AutoGraph verbosity during conversion
ENV AUTOGRAPH_VERBOSITY=0
# Google Logging (GLOG) suppression for TensorFlow components
ENV GLOG_minloglevel=3
ENV GLOG_logtostderr=0
ENV PATH="/usr/local/go2rtc/bin:/usr/local/tempio/bin:/usr/local/nginx/sbin:${PATH}"

View File

@@ -21,7 +21,7 @@ onvif-zeep-async == 4.0.*
paho-mqtt == 2.1.*
pandas == 2.2.*
peewee == 3.17.*
peewee_migrate == 1.14.*
peewee_migrate == 1.13.*
psutil == 7.1.*
pydantic == 2.10.*
git+https://github.com/fbcotter/py3nvml#egg=py3nvml
@@ -81,5 +81,3 @@ librosa==0.11.*
soundfile==0.13.*
# DeGirum detector
degirum == 0.16.*
# Memory profiling
memray == 1.15.*

View File

@@ -55,7 +55,7 @@ function setup_homekit_config() {
if [[ ! -f "${config_path}" ]]; then
echo "[INFO] Creating empty HomeKit config file..."
echo 'homekit: {}' > "${config_path}"
echo '{}' > "${config_path}"
fi
# Convert YAML to JSON for jq processing
@@ -70,14 +70,12 @@ function setup_homekit_config() {
jq '
# Keep only the homekit section if it exists, otherwise empty object
if has("homekit") then {homekit: .homekit} else {homekit: {}} end
' "${temp_json}" > "${cleaned_json}" 2>/dev/null || {
echo '{"homekit": {}}' > "${cleaned_json}"
}
' "${temp_json}" > "${cleaned_json}" 2>/dev/null || echo '{"homekit": {}}' > "${cleaned_json}"
# Convert back to YAML and write to the config file
yq eval -P "${cleaned_json}" > "${config_path}" 2>/dev/null || {
echo "[WARNING] Failed to convert cleaned config to YAML, creating minimal config"
echo 'homekit: {}' > "${config_path}"
echo '{"homekit": {}}' > "${config_path}"
}
# Clean up temp files

View File

@@ -3,6 +3,7 @@
# https://askubuntu.com/questions/972516/debian-frontend-environment-variable
ARG DEBIAN_FRONTEND=noninteractive
ARG ROCM=1
ARG AMDGPU=gfx900
ARG HSA_OVERRIDE_GFX_VERSION
ARG HSA_OVERRIDE
@@ -10,10 +11,11 @@ ARG HSA_OVERRIDE
FROM wget AS rocm
ARG ROCM
ARG AMDGPU
RUN apt update -qq && \
apt install -y wget gpg && \
wget -O rocm.deb https://repo.radeon.com/amdgpu-install/7.1.1/ubuntu/jammy/amdgpu-install_7.1.1.70101-1_all.deb && \
wget -O rocm.deb https://repo.radeon.com/amdgpu-install/7.1/ubuntu/jammy/amdgpu-install_7.1.70100-1_all.deb && \
apt install -y ./rocm.deb && \
apt update && \
apt install -qq -y rocm
@@ -34,10 +36,7 @@ FROM deps AS deps-prelim
COPY docker/rocm/debian-backports.sources /etc/apt/sources.list.d/debian-backports.sources
RUN apt-get update && \
apt-get install -y libnuma1 && \
apt-get install -qq -y -t bookworm-backports mesa-va-drivers mesa-vulkan-drivers && \
# Install C++ standard library headers for HIPRTC kernel compilation fallback
apt-get install -qq -y libstdc++-12-dev && \
rm -rf /var/lib/apt/lists/*
apt-get install -qq -y -t bookworm-backports mesa-va-drivers mesa-vulkan-drivers
WORKDIR /opt/frigate
COPY --from=rootfs / /
@@ -55,14 +54,12 @@ RUN pip3 uninstall -y onnxruntime \
FROM scratch AS rocm-dist
ARG ROCM
ARG AMDGPU
COPY --from=rocm /opt/rocm-$ROCM/bin/rocminfo /opt/rocm-$ROCM/bin/migraphx-driver /opt/rocm-$ROCM/bin/
# Copy MIOpen database files for gfx10xx and gfx11xx only (RDNA2/RDNA3)
COPY --from=rocm /opt/rocm-$ROCM/share/miopen/db/*gfx10* /opt/rocm-$ROCM/share/miopen/db/
COPY --from=rocm /opt/rocm-$ROCM/share/miopen/db/*gfx11* /opt/rocm-$ROCM/share/miopen/db/
# Copy rocBLAS library files for gfx10xx and gfx11xx only
COPY --from=rocm /opt/rocm-$ROCM/lib/rocblas/library/*gfx10* /opt/rocm-$ROCM/lib/rocblas/library/
COPY --from=rocm /opt/rocm-$ROCM/lib/rocblas/library/*gfx11* /opt/rocm-$ROCM/lib/rocblas/library/
COPY --from=rocm /opt/rocm-$ROCM/share/miopen/db/*$AMDGPU* /opt/rocm-$ROCM/share/miopen/db/
COPY --from=rocm /opt/rocm-$ROCM/share/miopen/db/*gfx908* /opt/rocm-$ROCM/share/miopen/db/
COPY --from=rocm /opt/rocm-$ROCM/lib/rocblas/library/*$AMDGPU* /opt/rocm-$ROCM/lib/rocblas/library/
COPY --from=rocm /opt/rocm-dist/ /
#######################################################################

View File

@@ -1,5 +1,8 @@
variable "AMDGPU" {
default = "gfx900"
}
variable "ROCM" {
default = "7.1.1"
default = "7.1"
}
variable "HSA_OVERRIDE_GFX_VERSION" {
default = ""
@@ -35,6 +38,7 @@ target rocm {
}
platforms = ["linux/amd64"]
args = {
AMDGPU = AMDGPU,
ROCM = ROCM,
HSA_OVERRIDE_GFX_VERSION = HSA_OVERRIDE_GFX_VERSION,
HSA_OVERRIDE = HSA_OVERRIDE

View File

@@ -1,15 +1,53 @@
BOARDS += rocm
# AMD/ROCm is chunky so we build couple of smaller images for specific chipsets
ROCM_CHIPSETS:=gfx900:9.0.0 gfx1030:10.3.0 gfx1100:11.0.0
local-rocm: version
$(foreach chipset,$(ROCM_CHIPSETS), \
AMDGPU=$(word 1,$(subst :, ,$(chipset))) \
HSA_OVERRIDE_GFX_VERSION=$(word 2,$(subst :, ,$(chipset))) \
HSA_OVERRIDE=1 \
docker buildx bake --file=docker/rocm/rocm.hcl rocm \
--set rocm.tags=frigate:latest-rocm-$(word 1,$(subst :, ,$(chipset))) \
--load \
&&) true
unset HSA_OVERRIDE_GFX_VERSION && \
HSA_OVERRIDE=0 \
AMDGPU=gfx \
docker buildx bake --file=docker/rocm/rocm.hcl rocm \
--set rocm.tags=frigate:latest-rocm \
--load
build-rocm: version
$(foreach chipset,$(ROCM_CHIPSETS), \
AMDGPU=$(word 1,$(subst :, ,$(chipset))) \
HSA_OVERRIDE_GFX_VERSION=$(word 2,$(subst :, ,$(chipset))) \
HSA_OVERRIDE=1 \
docker buildx bake --file=docker/rocm/rocm.hcl rocm \
--set rocm.tags=$(IMAGE_REPO):${GITHUB_REF_NAME}-$(COMMIT_HASH)-rocm-$(chipset) \
&&) true
unset HSA_OVERRIDE_GFX_VERSION && \
HSA_OVERRIDE=0 \
AMDGPU=gfx \
docker buildx bake --file=docker/rocm/rocm.hcl rocm \
--set rocm.tags=$(IMAGE_REPO):${GITHUB_REF_NAME}-$(COMMIT_HASH)-rocm
push-rocm: build-rocm
$(foreach chipset,$(ROCM_CHIPSETS), \
AMDGPU=$(word 1,$(subst :, ,$(chipset))) \
HSA_OVERRIDE_GFX_VERSION=$(word 2,$(subst :, ,$(chipset))) \
HSA_OVERRIDE=1 \
docker buildx bake --file=docker/rocm/rocm.hcl rocm \
--set rocm.tags=$(IMAGE_REPO):${GITHUB_REF_NAME}-$(COMMIT_HASH)-rocm-$(chipset) \
--push \
&&) true
unset HSA_OVERRIDE_GFX_VERSION && \
HSA_OVERRIDE=0 \
AMDGPU=gfx \
docker buildx bake --file=docker/rocm/rocm.hcl rocm \
--set rocm.tags=$(IMAGE_REPO):${GITHUB_REF_NAME}-$(COMMIT_HASH)-rocm \
--push

View File

@@ -75,13 +75,7 @@ audio:
### Audio Transcription
Frigate supports fully local audio transcription using either `sherpa-onnx` or OpenAI's open-source Whisper models via `faster-whisper`. The goal of this feature is to support Semantic Search for `speech` audio events. Frigate is not intended to act as a continuous, fully-automatic speech transcription service — automatically transcribing all speech (or queuing many audio events for transcription) requires substantial CPU (or GPU) resources and is impractical on most systems. For this reason, transcriptions for events are initiated manually from the UI or the API rather than being run continuously in the background.
Transcription accuracy also depends heavily on the quality of your camera's microphone and recording conditions. Many cameras use inexpensive microphones, and distance to the speaker, low audio bitrate, or background noise can significantly reduce transcription quality. If you need higher accuracy, more robust long-running queues, or large-scale automatic transcription, consider using the HTTP API in combination with an automation platform and a cloud transcription service.
#### Configuration
To enable transcription, enable it in your config. Note that audio detection must also be enabled as described above in order to use audio transcription features.
Frigate supports fully local audio transcription using either `sherpa-onnx` or OpenAI's open-source Whisper models via `faster-whisper`. To enable transcription, enable it in your config. Note that audio detection must also be enabled as described above in order to use audio transcription features.
```yaml
audio_transcription:
@@ -157,21 +151,3 @@ Only one `speech` event may be transcribed at a time. Frigate does not automatic
:::
Recorded `speech` events will always use a `whisper` model, regardless of the `model_size` config setting. Without a supported Nvidia GPU, generating transcriptions for longer `speech` events may take a fair amount of time, so be patient.
#### FAQ
1. Why doesn't Frigate automatically transcribe all `speech` events?
Frigate does not implement a queue mechanism for speech transcription, and adding one is not trivial. A proper queue would need backpressure, prioritization, memory/disk buffering, retry logic, crash recovery, and safeguards to prevent unbounded growth when events outpace processing. That's a significant amount of complexity for a feature that, in most real-world environments, would mostly just churn through low-value noise.
Because transcription is **serialized (one event at a time)** and speech events can be generated far faster than they can be processed, an auto-transcribe toggle would very quickly create an ever-growing backlog and degrade core functionality. For the amount of engineering and risk involved, it adds **very little practical value** for the majority of deployments, which are often on low-powered, edge hardware.
If you hear speech that's actually important and worth saving/indexing for the future, **just press the transcribe button in Explore** on that specific `speech` event - that keeps things explicit, reliable, and under your control.
Other approaches are being considered for future versions of Frigate, such as transcription options that support external `whisper` Docker containers. A single transcription service could then be shared by Frigate and other applications (for example, Home Assistant Voice), and run on more powerful machines when available.
2. Why don't you save live transcription text and use that for `speech` events?
There's no guarantee that a `speech` event is even created from the exact audio that went through the transcription model. Live transcription and `speech` event creation are **separate, asynchronous processes**. Even when both are correctly configured, trying to align the **precise start and end time of a speech event** with whatever audio the model happened to be processing at that moment is unreliable.
Automatically persisting that data would often result in **misaligned, partial, or irrelevant transcripts**, while still incurring all of the CPU, storage, and privacy costs of transcription. That's why Frigate treats transcription as an **explicit, user-initiated action** rather than an automatic side-effect of every `speech` event.

View File

@@ -270,42 +270,3 @@ To use role-based access control, you must connect to Frigate via the **authenti
1. Log in as an **admin** user via port `8971`.
2. Navigate to **Settings > Users**.
3. Edit a user's role by selecting **admin** or **viewer**.
## API Authentication Guide
### Getting a Bearer Token
To use the Frigate API, you need to authenticate first. Follow these steps to obtain a Bearer token:
#### 1. Login
Make a POST request to `/login` with your credentials:
```bash
curl -i -X POST https://frigate_ip:8971/api/login \
-H "Content-Type: application/json" \
-d '{"user": "admin", "password": "your_password"}'
```
:::note
You may need to include `-k` in the argument list in these steps (eg: `curl -k -i -X POST ...`) if your Frigate instance is using a self-signed certificate.
:::
The response will contain a cookie with the JWT token.
#### 2. Using the Bearer Token
Once you have the token, include it in the Authorization header for subsequent requests:
```bash
curl -H "Authorization: Bearer <your_token>" https://frigate_ip:8971/api/profile
```
#### 3. Token Lifecycle
- Tokens are valid for the configured session length
- Tokens are automatically refreshed when you visit the `/auth` endpoint
- Tokens are invalidated when the user's password is changed
- Use `/logout` to clear your session cookie

View File

@@ -3,7 +3,7 @@ id: object_classification
title: Object Classification
---
Object classification allows you to train a custom MobileNetV2 classification model to run on tracked objects (persons, cars, animals, etc.) to identify a finer category or attribute for that object. Classification results are visible in the Tracked Object Details pane in Explore, through the `frigate/tracked_object_details` MQTT topic, in Home Assistant sensors via the official Frigate integration, or through the event endpoints in the HTTP API.
Object classification allows you to train a custom MobileNetV2 classification model to run on tracked objects (persons, cars, animals, etc.) to identify a finer category or attribute for that object.
## Minimum System Requirements
@@ -11,8 +11,6 @@ Object classification models are lightweight and run very fast on CPU. Inference
Training the model does briefly use a high amount of system resources for about 13 minutes per training run. On lower-power devices, training may take longer.
A CPU with AVX instructions is required for training and inference.
## Classes
Classes are the categories your model will learn to distinguish between. Each class represents a distinct visual category that the model will predict.
@@ -33,15 +31,9 @@ For object classification:
- Example: `cat``Leo`, `Charlie`, `None`.
- **Attribute**:
- Added as metadata to the object, visible in the Tracked Object Details pane in Explore, `frigate/events` MQTT messages, and the HTTP API response as `<model_name>: <predicted_value>`.
- Added as metadata to the object (visible in /events): `<model_name>: <predicted_value>`.
- Ideal when multiple attributes can coexist independently.
- Example: Detecting if a `person` in a construction yard is wearing a helmet or not, and if they are wearing a yellow vest or not.
:::note
A tracked object can only have a single sub label. If you are using Triggers or Face Recognition and you configure an object classification model for `person` using the sub label type, your sub label may not be assigned correctly as it depends on which enrichment completes its analysis first. Consider using the `attribute` type instead.
:::
- Example: Detecting if a `person` in a construction yard is wearing a helmet or not.
## Assignment Requirements
@@ -81,17 +73,13 @@ classification:
classification_type: sub_label # or: attribute
```
An optional config, `save_attempts`, can be set as a key under the model name. This defines the number of classification attempts to save in the Recent Classifications tab. For object classification models, the default is 200.
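For example, a minimal sketch that places `save_attempts` under a hypothetical model key (`our_cats`, matching the cat example below; the surrounding keys follow the classification section of the configuration reference):

```yaml
classification:
  custom:
    our_cats:
      threshold: 0.8
      # keep 250 recent attempts instead of the object-classification default of 200
      save_attempts: 250
      object_config:
        objects: [cat]
        classification_type: sub_label
```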
## Training the model
Creating and training the model is done within the Frigate UI using the `Classification` page. The process consists of two steps:
### Step 1: Name and Define
Enter a name for your model, select the object label to classify (e.g., `person`, `dog`, `car`), choose the classification type (sub label or attribute), and define your classes. Frigate will automatically include a `none` class for objects that don't fit any specific category.
For example: To classify your two cats, create a model named "Our Cats" and create two classes, "Charlie" and "Leo". A third class, "none", will be created automatically for other neighborhood cats that are not your own.
Enter a name for your model, select the object label to classify (e.g., `person`, `dog`, `car`), choose the classification type (sub label or attribute), and define your classes. Include a `none` class for objects that don't fit any specific category.
### Step 2: Assign Training Examples
@@ -99,8 +87,6 @@ The system will automatically generate example images from detected objects matc
When choosing which objects to classify, start with a small number of visually distinct classes and ensure your training samples match camera viewpoints and distances typical for those objects.
If examples for some of your classes do not appear in the grid, you can continue configuring the model without them. New images will begin to appear in the Recent Classifications view. When your missing classes are seen, classify them from this view and retrain your model.
### Improving the Model
- **Problem framing**: Keep classes visually distinct and relevant to the chosen object types.
@@ -108,23 +94,3 @@ If examples for some of your classes do not appear in the grid, you can continue
- **Preprocessing**: Ensure examples reflect object crops similar to Frigate's boxes; keep the subject centered.
- **Labels**: Keep label names short and consistent; include a `none` class if you plan to ignore uncertain predictions for sub labels.
- **Threshold**: Tune `threshold` per model to reduce false assignments. Start at `0.8` and adjust based on validation.
## Debugging Classification Models
To troubleshoot issues with object classification models, enable debug logging to see detailed information about classification attempts, scores, and consensus calculations.
Enable debug logs for classification models by adding `frigate.data_processing.real_time.custom_classification: debug` to your `logger` configuration. These logs are verbose, so only keep this enabled when necessary. Restart Frigate after this change.
```yaml
logger:
default: info
logs:
frigate.data_processing.real_time.custom_classification: debug
```
The debug logs will show:
- Classification probabilities for each attempt
- Whether scores meet the threshold requirement
- Consensus calculations and when assignments are made
- Object classification history and weighted scores

View File

@@ -3,7 +3,7 @@ id: state_classification
title: State Classification
---
State classification allows you to train a custom MobileNetV2 classification model on a fixed region of your camera frame(s) to determine a current state. The model can be configured to run on a schedule and/or when motion is detected in that region. Classification results are available through the `frigate/<camera_name>/classification/<model_name>` MQTT topic and in Home Assistant sensors via the official Frigate integration.
State classification allows you to train a custom MobileNetV2 classification model on a fixed region of your camera frame(s) to determine a current state. The model can be configured to run on a schedule and/or when motion is detected in that region.
## Minimum System Requirements
@@ -11,8 +11,6 @@ State classification models are lightweight and run very fast on CPU. Inference
Training the model does briefly use a high amount of system resources for about 13 minutes per training run. On lower-power devices, training may take longer.
A CPU with AVX instructions is required for training and inference.
## Classes
Classes are the different states an area on your camera can be in. Each class represents a distinct visual state that the model will learn to recognize.
@@ -48,8 +46,6 @@ classification:
crop: [0, 180, 220, 400]
```
An optional config, `save_attempts`, can be set as a key under the model name. This defines the number of classification attempts to save in the Recent Classifications tab. For state classification models, the default is 100.
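A minimal sketch of where `save_attempts` sits for a state model (the `garage_door` model name and `garage_cam` camera are hypothetical; the crop matches the example above and the other keys follow the configuration reference):

```yaml
classification:
  custom:
    garage_door:
      threshold: 0.8
      # keep 150 recent attempts instead of the state-classification default of 100
      save_attempts: 150
      state_config:
        cameras:
          garage_cam:
            crop: [0, 180, 220, 400]
            motion: True
```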
## Training the model
Creating and training the model is done within the Frigate UI using the `Classification` page. The process consists of three steps:
@@ -64,44 +60,13 @@ Choose one or more cameras and draw a rectangle over the area of interest for ea
### Step 3: Assign Training Examples
The system will automatically generate example images from your camera feeds. You'll be guided through each class one at a time to select which images represent that state. It's not strictly required to select all images you see. If a state is missing from the samples, you can train it from the Recent tab later.
The system will automatically generate example images from your camera feeds. You'll be guided through each class one at a time to select which images represent that state.
Once some images are assigned, training will begin automatically.
**Important**: All images must be assigned to a state before training can begin. This includes images that may not be optimal, such as when people temporarily block the view, sun glare is present, or other distractions occur. Assign these images to the state that is actually present (based on what you know the state to be), not based on the distraction. This training helps the model correctly identify the state even when such conditions occur during inference.
Once all images are assigned, training will begin automatically.
### Improving the Model
- **Problem framing**: Keep classes visually distinct and state-focused (e.g., `open`, `closed`, `unknown`). Avoid combining object identity with state in a single model unless necessary.
- **Data collection**: Use the model's Recent Classifications tab to gather balanced examples across times of day and weather.
- **When to train**: Focus on cases where the model is entirely incorrect or flips between states when it should not. There's no need to train additional images when the model is already working consistently.
- **Selecting training images**: Images scoring below 100% due to new conditions (e.g., first snow of the year, seasonal changes) or variations (e.g., objects temporarily in view, insects at night) are good candidates for training, as they represent scenarios different from the default state. Training these lower-scoring images that differ from existing training data helps prevent overfitting. Avoid training large quantities of images that look very similar, especially if they already score 100% as this can lead to overfitting.
## Debugging Classification Models
To troubleshoot issues with state classification models, enable debug logging to see detailed information about classification attempts, scores, and state verification.
Enable debug logs for classification models by adding `frigate.data_processing.real_time.custom_classification: debug` to your `logger` configuration. These logs are verbose, so only keep this enabled when necessary. Restart Frigate after this change.
```yaml
logger:
default: info
logs:
frigate.data_processing.real_time.custom_classification: debug
```
The debug logs will show:
- Classification probabilities for each attempt
- Whether scores meet the threshold requirement
- State verification progress (consecutive detections needed)
- When state changes are published
### Recent Classifications
For state classification, images are only added to recent classifications under specific circumstances:
- **First detection**: The first classification attempt for a camera is always saved
- **State changes**: Images are saved when the detected state differs from the current verified state
- **Pending verification**: Images are saved when there's a pending state change being verified (requires 3 consecutive identical states)
- **Low confidence**: Images with scores below 100% are saved even if the state matches the current state (useful for training)
Images are **not** saved when the state is stable (detected state matches current state) **and** the score is 100%. This prevents unnecessary storage of redundant high-confidence classifications.
- **Data collection**: Use the model's Recent Classifications tab to gather balanced examples across times of day and weather.

View File

@@ -56,7 +56,7 @@ Parallel requests also come with some caveats. You will need to set `OLLAMA_NUM_
### Supported Models
You must use a vision-capable model with Frigate. Current model variants can be found [in their model library](https://ollama.com/library). Note that Frigate will not automatically download the model you specify in your config; you must download the model to your local instance of Ollama first, e.g. by running `ollama pull llava:7b` on your Ollama server/Docker container. Note that the model specified in Frigate's config must match the downloaded model tag.
You must use a vision-capable model with Frigate. Current model variants can be found [in their model library](https://ollama.com/library). At the time of writing, this includes `llava`, `llava-llama3`, `llava-phi3`, and `moondream`. Note that Frigate will not automatically download the model you specify in your config; you must download the model to your local instance of Ollama first, e.g. by running `ollama pull llava:7b` on your Ollama server/Docker container. Note that the model specified in Frigate's config must match the downloaded model tag.
:::note
@@ -64,10 +64,6 @@ You should have at least 8 GB of RAM available (or VRAM if running on GPU) to ru
:::
#### Ollama Cloud models
Ollama also supports [cloud models](https://ollama.com/cloud), where your local Ollama instance handles requests from Frigate, but model inference is performed in the cloud. Set up Ollama locally, sign in with your Ollama account, and specify the cloud model name in your Frigate config. For more details, see the Ollama cloud model [docs](https://docs.ollama.com/cloud).
### Configuration
```yaml

View File

@@ -16,13 +16,12 @@ Review summaries provide structured JSON responses that are saved for each revie
```
- `title` (string): A concise, direct title that describes the purpose or overall action (e.g., "Person taking out trash", "Joe walking dog").
- `scene` (string): A narrative description of what happens across the sequence from start to finish, including setting, detected objects, and their observable actions.
- `shortSummary` (string): A brief 2-sentence summary of the scene, suitable for notifications. This is a condensed version of the scene description.
- `confidence` (float): 0-1 confidence in the analysis. Higher confidence when objects/actions are clearly visible and context is unambiguous.
- `other_concerns` (list): List of user-defined concerns that may need additional investigation.
- `potential_threat_level` (integer): 0, 1, or 2 as defined below.
```
This will show in multiple places in the UI to give additional context about each activity, and allow viewing more details when extra attention is required. Frigate's built in notifications will automatically show the title and `shortSummary` when the data is available, while the full `scene` description is available in the UI for detailed review.
This will show in multiple places in the UI to give additional context about each activity, and allow viewing more details when extra attention is required. Frigate's built in notifications will also automatically show the title and description when the data is available.
### Defining Typical Activity
@@ -112,9 +111,3 @@ review:
## Review Reports
Along with individual review item summaries, Generative AI provides the ability to request a report of a given time period. For example, you can get a daily report while on a vacation of any suspicious activity or other concerns that may require review.
### Requesting Reports Programmatically
Review reports can be requested via the [API](/integrations/api#review-summarization) by sending a POST request to `/api/review/summarize/start/{start_ts}/end/{end_ts}` with Unix timestamps.
For Home Assistant users, there is a built-in service (`frigate.review_summarize`) that makes it easy to request review reports as part of automations or scripts. This allows you to automatically generate daily summaries, vacation reports, or custom time period reports based on your specific needs.

View File

@@ -13,7 +13,7 @@ Object detection and enrichments (like Semantic Search, Face Recognition, and Li
- **AMD**
- ROCm support in the `-rocm` Frigate image is automatically detected for enrichments, but only some enrichment models are available due to ROCm's focus on LLMs and limited stability with certain neural network models. Frigate disables models that perform poorly or are unstable to ensure reliable operation, so only compatible enrichments may be active.
- ROCm will automatically be detected and used for enrichments in the `-rocm` Frigate image.
- **Intel**

View File

@@ -107,23 +107,23 @@ Fine-tune the LPR feature using these optional parameters at the global level of
### Normalization Rules
- **`replace_rules`**: List of regex replacement rules to normalize detected plates. These rules are applied sequentially and are applied _before_ the `format` regex, if specified. Each rule must have a `pattern` (which can be a string or a regex) and `replacement` (a string, which also supports [backrefs](https://docs.python.org/3/library/re.html#re.sub) like `\1`). These rules are useful for dealing with common OCR issues like noise characters, separators, or confusions (e.g., 'O'→'0').
- **`replace_rules`**: List of regex replacement rules to normalize detected plates. These rules are applied sequentially. Each rule must have a `pattern` (which can be a string or a regex, prepended by `r`) and `replacement` (a string, which also supports [backrefs](https://docs.python.org/3/library/re.html#re.sub) like `\1`). These rules are useful for dealing with common OCR issues like noise characters, separators, or confusions (e.g., 'O'→'0').
These rules must be defined at the global level of your `lpr` config.
```yaml
lpr:
replace_rules:
- pattern: "[%#*?]" # Remove noise symbols
- pattern: r'[%#*?]' # Remove noise symbols
replacement: ""
- pattern: "[= ]" # Normalize = or space to dash
- pattern: r'[= ]' # Normalize = or space to dash
replacement: "-"
- pattern: "O" # Swap 'O' to '0' (common OCR error)
replacement: "0"
- pattern: "I" # Swap 'I' to '1'
- pattern: r'I' # Swap 'I' to '1'
replacement: "1"
- pattern: '(\w{3})(\w{3})' # Split 6 chars into groups (e.g., ABC123 → ABC-123) - use single quotes to preserve backslashes
replacement: '\1-\2'
- pattern: r'(\w{3})(\w{3})' # Split 6 chars into groups (e.g., ABC123 → ABC-123)
replacement: r'\1-\2'
```
- Rules fire in order: In the example above: clean noise first, then separators, then swaps, then splits.
@@ -374,19 +374,9 @@ Use `match_distance` to allow small character mismatches. Alternatively, define
Start with ["Why isn't my license plate being detected and recognized?"](#why-isnt-my-license-plate-being-detected-and-recognized). If you are still having issues, work through these steps.
1. Start with a simplified LPR config.
1. Enable debug logs to see exactly what Frigate is doing.
- Remove or comment out everything in your LPR config, including `min_area`, `min_plate_length`, `format`, `known_plates`, or `enhancement` values so that the only values left are `enabled` and `debug_save_plates`. This will run LPR with Frigate's default values.
```yaml
lpr:
enabled: true
debug_save_plates: true
```
2. Enable debug logs to see exactly what Frigate is doing.
- Enable debug logs for LPR by adding `frigate.data_processing.common.license_plate: debug` to your `logger` configuration. These logs are _very_ verbose, so only keep this enabled when necessary. Restart Frigate after this change.
- Enable debug logs for LPR by adding `frigate.data_processing.common.license_plate: debug` to your `logger` configuration. These logs are _very_ verbose, so only keep this enabled when necessary.
```yaml
logger:
@@ -395,7 +385,7 @@ Start with ["Why isn't my license plate being detected and recognized?"](#why-is
frigate.data_processing.common.license_plate: debug
```
3. Ensure your plates are being _detected_.
2. Ensure your plates are being _detected_.
If you are using a Frigate+ or `license_plate` detecting model:
@@ -408,7 +398,7 @@ Start with ["Why isn't my license plate being detected and recognized?"](#why-is
- Watch the debug logs for messages from the YOLOv9 plate detector.
- You may need to adjust your `detection_threshold` if your plates are not being detected.
4. Ensure the characters on detected plates are being _recognized_.
3. Ensure the characters on detected plates are being _recognized_.
- Enable `debug_save_plates` to save images of detected text on plates to the clips directory (`/media/frigate/clips/lpr`). Ensure these images are readable and the text is clear.
- Watch the debug view to see plates recognized in real-time. For non-dedicated LPR cameras, the `car` or `motorcycle` label will change to the recognized plate when LPR is enabled and working.

View File

@@ -178,8 +178,6 @@ To use the Reolink Doorbell with two way talk, you should use the [recommended R
As a starting point to check compatibility for your camera, view the list of cameras supported for two-way talk on the [go2rtc repository](https://github.com/AlexxIT/go2rtc?tab=readme-ov-file#two-way-audio). For cameras in the category `ONVIF Profile T`, you can use the [ONVIF Conformant Products Database](https://www.onvif.org/conformant-products/)'s FeatureList to check for the presence of `AudioOutput`. A camera that supports `ONVIF Profile T` _usually_ supports this, but due to inconsistent support, a camera that explicitly lists this feature may still not work. If your camera has no entry in the database, it is recommended not to buy it, or to first consult the manufacturer's support about feature availability.
To prevent go2rtc from blocking other applications from accessing your camera's two-way audio, you must configure your stream with `#backchannel=0`. See [preventing go2rtc from blocking two-way audio](/configuration/restream#two-way-talk-restream) in the restream documentation.
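A minimal sketch of such a stream definition (the camera address and credentials are placeholders), mirroring the fuller example in the restream documentation:

```yaml
go2rtc:
  streams:
    front_door:
      # backchannel=0 keeps go2rtc from claiming the camera's audio output channel
      - rtsp://user:password@10.0.10.10:554/cam/realmonitor?channel=1&subtype=2#backchannel=0
```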
### Streaming options on camera group dashboards
Frigate provides a dialog in the Camera Group Edit pane with several options for streaming on a camera group's dashboard. These settings are _per device_ and are saved in your device's local storage.
@@ -234,7 +232,7 @@ When your browser runs into problems playing back your camera streams, it will l
- **mse-decode**
- What it means: The browser reported a decoding error while trying to play the stream, which usually is a result of a codec incompatibility or corrupted frames.
- What to try: Check the browser console for the supported and negotiated codecs. Ensure your camera/restream is using H.264 video and AAC audio (these are the most compatible). If your camera uses a non-standard audio codec, configure `go2rtc` to transcode the stream to AAC. Try another browser (some browsers have stricter MSE/codec support) and, for iPhone, ensure you're on iOS 17.1 or newer.
- What to try: Ensure your camera/restream is using H.264 video and AAC audio (these are the most compatible). If your camera uses a non-standard audio codec, configure `go2rtc` to transcode the stream to AAC. Try another browser (some browsers have stricter MSE/codec support) and, for iPhone, ensure you're on iOS 17.1 or newer.
- Possible console messages from the player code:

View File

@@ -28,6 +28,7 @@ To create a poly mask:
5. Click the plus icon under the type of mask or zone you would like to create
6. Click on the camera's latest image to create the points for a masked area. Click the first point again to close the polygon.
7. When you've finished creating your mask, press Save.
8. Restart Frigate to apply your changes.
Your config file will be updated with the relative coordinates of the mask/zone:
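For illustration only (the camera name and coordinate values below are invented), the saved mask appears as a comma-separated list of relative x,y points under the camera:

```yaml
cameras:
  front_door:
    motion:
      mask:
        - 0.000,0.427,0.002,0.000,0.999,0.000,0.999,0.781
```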

View File

@@ -13,7 +13,7 @@ Frigate supports multiple different detectors that work on different types of ha
**Most Hardware**
- [Coral EdgeTPU](#edge-tpu-detector): The Google Coral EdgeTPU is available in USB, Mini PCIe, and m.2 formats allowing for a wide range of compatibility with devices.
- [Coral EdgeTPU](#edge-tpu-detector): The Google Coral EdgeTPU is available in USB and m.2 format allowing for a wide range of compatibility with devices.
- [Hailo](#hailo-8): The Hailo8 and Hailo8L AI Acceleration module is available in m.2 format with a HAT for RPi devices, offering a wide range of compatibility with devices.
- <CommunityBadge /> [MemryX](#memryx-mx3): The MX3 Acceleration module is available in m.2 format, offering broad compatibility across various platforms.
- <CommunityBadge /> [DeGirum](#degirum): Service for using hardware devices in the cloud or locally. Hardware and models provided on the cloud on [their website](https://hub.degirum.com).
@@ -69,10 +69,12 @@ Frigate provides the following builtin detector types: `cpu`, `edgetpu`, `hailo8
## Edge TPU Detector
The Edge TPU detector type runs TensorFlow Lite models utilizing the Google Coral delegate for hardware acceleration. To configure an Edge TPU detector, set the `"type"` attribute to `"edgetpu"`.
The Edge TPU detector type runs a TensorFlow Lite model utilizing the Google Coral delegate for hardware acceleration. To configure an Edge TPU detector, set the `"type"` attribute to `"edgetpu"`.
The Edge TPU device can be specified using the `"device"` attribute according to the [Documentation for the TensorFlow Lite Python API](https://coral.ai/docs/edgetpu/multiple-edgetpu/#using-the-tensorflow-lite-python-api). If not set, the delegate will use the first device it finds.
A TensorFlow Lite model is provided in the container at `/edgetpu_model.tflite` and is used by this detector type by default. To provide your own model, bind mount the file into the container and provide the path with `model.path`.
:::tip
See [common Edge TPU troubleshooting steps](/troubleshooting/edgetpu) if the Edge TPU is not detected.
@@ -144,44 +146,6 @@ detectors:
device: pci
```
### EdgeTPU Supported Models
| Model | Notes |
| ----------------------- | ------------------------------------------- |
| [Mobiledet](#mobiledet) | Default model |
| [YOLOv9](#yolov9) | More accurate but slower than default model |
#### Mobiledet
A TensorFlow Lite model is provided in the container at `/edgetpu_model.tflite` and is used by this detector type by default. To provide your own model, bind mount the file into the container and provide the path with `model.path`.
#### YOLOv9
[YOLOv9](https://github.com/dbro/frigate-detector-edgetpu-yolo9/releases/download/v1.0/yolov9-s-relu6-best_320_int8_edgetpu.tflite) models that are compiled for Tensorflow Lite and properly quantized are supported, but not included by default. To provide your own model, bind mount the file into the container and provide the path with `model.path`. Note that the model may require a custom label file (eg. [use this 17 label file](https://raw.githubusercontent.com/dbro/frigate-detector-edgetpu-yolo9/refs/heads/main/labels-coco17.txt) for the model linked above.)
<details>
<summary>YOLOv9 Setup & Config</summary>
After placing the downloaded files for the tflite model and labels in your config folder, you can use the following configuration:
```yaml
detectors:
coral:
type: edgetpu
device: usb
model:
model_type: yolo-generic
width: 320 # <--- should match the imgsize of the model, typically 320
height: 320 # <--- should match the imgsize of the model, typically 320
path: /config/model_cache/yolov9-s-relu6-best_320_int8_edgetpu.tflite
labelmap_path: /config/labels-coco17.txt
```
Note that the labelmap uses a subset of the complete COCO label set that has only 17 objects.
</details>
---
## Hailo-8
@@ -400,7 +364,7 @@ The YOLO detector has been designed to support YOLOv3, YOLOv4, YOLOv7, and YOLOv
:::warning
If you are using a Frigate+ model, you should not define any of the below `model` parameters in your config except for `path`. See [the Frigate+ model docs](/plus/first_model#step-3-set-your-model-id-in-the-config) for more information on setting up your model.
If you are using a Frigate+ YOLOv9 model, you should not define any of the below `model` parameters in your config except for `path`. See [the Frigate+ model docs](/plus/first_model#step-3-set-your-model-id-in-the-config) for more information on setting up your model.
:::
@@ -740,7 +704,7 @@ The YOLO detector has been designed to support YOLOv3, YOLOv4, YOLOv7, and YOLOv
:::warning
If you are using a Frigate+ model, you should not define any of the below `model` parameters in your config except for `path`. See [the Frigate+ model docs](/plus/first_model#step-3-set-your-model-id-in-the-config) for more information on setting up your model.
If you are using a Frigate+ YOLOv9 model, you should not define any of the below `model` parameters in your config except for `path`. See [the Frigate+ model docs](/plus/first_model#step-3-set-your-model-id-in-the-config) for more information on setting up your model.
:::

View File

@@ -123,7 +123,7 @@ auth:
# Optional: Refresh time in seconds (default: shown below)
# When the session is going to expire in less time than this setting,
# it will be refreshed back to the session_length.
refresh_time: 1800 # 30 minutes
refresh_time: 43200 # 12 hours
# Optional: Rate limiting for login failures to help prevent brute force
# login attacks (default: shown below)
# See the docs for more information on valid values
@@ -710,44 +710,6 @@ audio_transcription:
# List of language codes: https://github.com/openai/whisper/blob/main/whisper/tokenizer.py#L10
language: en
# Optional: Configuration for classification models
classification:
# Optional: Configuration for bird classification
bird:
# Optional: Enable bird classification (default: shown below)
enabled: False
# Optional: Minimum classification score required to be considered a match (default: shown below)
threshold: 0.9
custom:
# Required: name of the classification model
model_name:
# Optional: Enable running the model (default: shown below)
enabled: True
# Optional: Name of classification model (default: shown below)
name: None
# Optional: Classification score threshold to change the state (default: shown below)
threshold: 0.8
# Optional: Number of classification attempts to save in the recent classifications tab (default: shown below)
# NOTE: Defaults to 200 for object classification and 100 for state classification if not specified
save_attempts: None
# Optional: Object classification configuration
object_config:
# Required: Object types to classify
objects: [dog]
# Optional: Type of classification that is applied (default: shown below)
classification_type: sub_label
# Optional: State classification configuration
state_config:
# Required: Cameras to run classification on
cameras:
camera_name:
# Required: Crop of image frame on this camera to run classification on
crop: [0, 180, 220, 400]
# Optional: If classification should be run when motion is detected in the crop (default: shown below)
motion: False
# Optional: Interval to run classification on in seconds (default: shown below)
interval: None
# Optional: Restream configuration
# Uses https://github.com/AlexxIT/go2rtc (v1.9.10)
# NOTE: The default go2rtc API port (1984) must be used,
@@ -911,7 +873,7 @@ cameras:
user: admin
# Optional: password for login.
password: admin
# Optional: Skip TLS verification and disable digest authentication for the ONVIF server (default: shown below)
# Optional: Skip TLS verification from the ONVIF server (default: shown below)
tls_insecure: False
# Optional: Ignores time synchronization mismatches between the camera and the server during authentication.
# Using NTP on both ends is recommended and this should only be set to True in a "safe" environment due to the security risk it represents.
@@ -1002,6 +964,10 @@ ui:
# full: 8:15:22 PM Mountain Standard Time
# (default: shown below).
time_style: medium
# Optional: Ability to manually override the date / time styling to use strftime format
# https://www.gnu.org/software/libc/manual/html_node/Formatting-Calendar-Time.html
# possible values are shown above (default: not set)
strftime_fmt: "%Y/%m/%d %H:%M"
# Optional: Set the unit system to either "imperial" or "metric" (default: metric)
# Used in the UI and in MQTT topics
unit_system: metric

View File

@@ -24,12 +24,11 @@ birdseye:
restream: True
```
:::tip
:::tip
To improve connection speed when using Birdseye via restream you can enable a small idle heartbeat by setting `birdseye.idle_heartbeat_fps` to a low value (e.g. `12`). This makes Frigate periodically push the last frame even when no motion is detected, reducing initial connection latency.
To improve connection speed when using Birdseye via restream you can enable a small idle heartbeat by setting `birdseye.idle_heartbeat_fps` to a low value (e.g. `12`). This makes Frigate periodically push the last frame even when no motion is detected, reducing initial connection latency.
:::
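A small sketch, assuming `idle_heartbeat_fps` sits directly under `birdseye` alongside `restream` (the value is the one suggested in the tip above):

```yaml
birdseye:
  restream: True
  idle_heartbeat_fps: 12
```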
### Securing Restream With Authentication
The go2rtc restream can be secured with RTSP based username / password authentication. Ex:
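A minimal sketch, assuming go2rtc's standard `rtsp` module settings (the credentials shown are placeholders):

```yaml
go2rtc:
  rtsp:
    username: "admin"
    password: "changeme"
```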
@@ -160,31 +159,6 @@ go2rtc:
See [this comment](https://github.com/AlexxIT/go2rtc/issues/1217#issuecomment-2242296489) for more information.
## Preventing go2rtc from blocking two-way audio {#two-way-talk-restream}
For cameras that support two-way talk, go2rtc will automatically establish an audio output backchannel when connecting to an RTSP stream. This backchannel blocks access to the camera's audio output for two-way talk functionality, preventing both Frigate and other applications from using it.
To prevent this, you must configure two separate stream instances:
1. One stream instance with `#backchannel=0` for Frigate's viewing, recording, and detection (prevents go2rtc from establishing the blocking backchannel)
2. A second stream instance without `#backchannel=0` for two-way talk functionality (can be used by Frigate's WebRTC viewer or other applications)
Configuration example:
```yaml
go2rtc:
streams:
front_door:
- rtsp://user:password@10.0.10.10:554/cam/realmonitor?channel=1&subtype=2#backchannel=0
front_door_twoway:
- rtsp://user:password@10.0.10.10:554/cam/realmonitor?channel=1&subtype=2
```
In this configuration:
- `front_door` stream is used by Frigate for viewing, recording, and detection. The `#backchannel=0` parameter prevents go2rtc from establishing the audio output backchannel, so it won't block two-way talk access.
- `front_door_twoway` stream is used for two-way talk functionality. This stream can be used by Frigate's WebRTC viewer when two-way talk is enabled, or by other applications (like Home Assistant Advanced Camera Card) that need access to the camera's audio output channel.
## Advanced Restream Configurations
The [exec](https://github.com/AlexxIT/go2rtc/tree/v1.9.10#source-exec) source in go2rtc can be used for custom ffmpeg commands. An example is below:
@@ -195,4 +169,4 @@ NOTE: The output will need to be passed with two curly braces `{{output}}`
go2rtc:
streams:
stream1: exec:ffmpeg -hide_banner -re -stream_loop -1 -i /media/BigBuckBunny.mp4 -c copy -rtsp_transport tcp -f rtsp {{output}}
```

View File

@@ -159,7 +159,7 @@ Inference speeds vary greatly depending on the CPU or GPU used, some known examp
| Intel HD 530 | 15 - 35 ms | | | | Can only run one detector instance |
| Intel HD 620 | 15 - 25 ms | | 320: ~ 35 ms | | |
| Intel HD 630 | ~ 15 ms | | 320: ~ 30 ms | | |
| Intel UHD 730 | ~ 10 ms | t-320: 14ms s-320: 24ms t-640: 34ms s-640: 65ms | 320: ~ 19 ms 640: ~ 54 ms | | |
| Intel UHD 730 | ~ 10 ms | | 320: ~ 19 ms 640: ~ 54 ms | | |
| Intel UHD 770 | ~ 15 ms | t-320: ~ 16 ms s-320: ~ 20 ms s-640: ~ 40 ms | 320: ~ 20 ms 640: ~ 46 ms | | |
| Intel N100 | ~ 15 ms | s-320: 30 ms | 320: ~ 25 ms | | Can only run one detector instance |
| Intel N150 | ~ 15 ms | t-320: 16 ms s-320: 24 ms | | | |

View File

@@ -135,7 +135,6 @@ Finally, configure [hardware object detection](/configuration/object_detectors#h
### MemryX MX3
The MemryX MX3 Accelerator is available in the M.2 2280 form factor (like an NVMe SSD), and supports a variety of configurations:
- x86 (Intel/AMD) PCs
- Raspberry Pi 5
- Orange Pi 5 Plus/Max
@@ -143,6 +142,7 @@ The MemryX MX3 Accelerator is available in the M.2 2280 form factor (like an NVM
#### Configuration
#### Installation
To get started with MX3 hardware setup for your system, refer to the [Hardware Setup Guide](https://developer.memryx.com/get_started/hardware_setup.html).
@@ -156,7 +156,7 @@ Then follow these steps for installing the correct driver/runtime configuration:
#### Setup
To set up Frigate, follow the default installation instructions, for example: `ghcr.io/blakeblackshear/frigate:stable`
Next, grant Docker permissions to access your hardware by adding the following lines to your `docker-compose.yml` file:
@@ -173,7 +173,7 @@ In your `docker-compose.yml`, also add:
privileged: true
volumes:
- /run/mxa_manager:/run/mxa_manager
/run/mxa_manager:/run/mxa_manager
```
If you can't use Docker Compose, you can run the container with something similar to this:
@@ -411,7 +411,7 @@ To install make sure you have the [community app plugin here](https://forums.unr
## Proxmox
[According to Proxmox documentation](https://pve.proxmox.com/pve-docs/pve-admin-guide.html#chapter_pct) it is recommended that you run application containers like Frigate inside a Proxmox QEMU VM. This will give you all the advantages of application containerization, while also providing the benefits that VMs offer, such as strong isolation from the host and the ability to live-migrate, which otherwise isn't possible with containers. Ensure that ballooning is **disabled**, especially if you are passing through a GPU to the VM.
[According to Proxmox documentation](https://pve.proxmox.com/pve-docs/pve-admin-guide.html#chapter_pct) it is recommended that you run application containers like Frigate inside a Proxmox QEMU VM. This will give you all the advantages of application containerization, while also providing the benefits that VMs offer, such as strong isolation from the host and the ability to live-migrate, which otherwise isn't possible with containers.
:::warning

View File

@@ -5,7 +5,7 @@ title: Updating
# Updating Frigate
The current stable version of Frigate is **0.17.0**. The release notes and any breaking changes for this version can be found on the [Frigate GitHub releases page](https://github.com/blakeblackshear/frigate/releases/tag/v0.17.0).
The current stable version of Frigate is **0.16.2**. The release notes and any breaking changes for this version can be found on the [Frigate GitHub releases page](https://github.com/blakeblackshear/frigate/releases/tag/v0.16.2).
Keeping Frigate up to date ensures you benefit from the latest features, performance improvements, and bug fixes. The update process varies slightly depending on your installation method (Docker, Home Assistant Addon, etc.). Below are instructions for the most common setups.
@@ -33,21 +33,21 @@ If you're running Frigate via Docker (recommended method), follow these steps:
2. **Update and Pull the Latest Image**:
- If using Docker Compose:
- Edit your `docker-compose.yml` file to specify the desired version tag (e.g., `0.17.0` instead of `0.16.3`). For example:
- Edit your `docker-compose.yml` file to specify the desired version tag (e.g., `0.16.2` instead of `0.15.2`). For example:
```yaml
services:
frigate:
image: ghcr.io/blakeblackshear/frigate:0.17.0
image: ghcr.io/blakeblackshear/frigate:0.16.2
```
- Then pull the image:
```bash
docker pull ghcr.io/blakeblackshear/frigate:0.17.0
docker pull ghcr.io/blakeblackshear/frigate:0.16.2
```
- **Note for `stable` Tag Users**: If your `docker-compose.yml` uses the `stable` tag (e.g., `ghcr.io/blakeblackshear/frigate:stable`), you don't need to update the tag manually. The `stable` tag always points to the latest stable release after pulling.
- If using `docker run`:
- Pull the image with the appropriate tag (e.g., `0.17.0`, `0.17.0-tensorrt`, or `stable`):
- Pull the image with the appropriate tag (e.g., `0.16.2`, `0.16.2-tensorrt`, or `stable`):
```bash
docker pull ghcr.io/blakeblackshear/frigate:0.17.0
docker pull ghcr.io/blakeblackshear/frigate:0.16.2
```
3. **Start the Container**:
@@ -105,8 +105,8 @@ If an update causes issues:
1. Stop Frigate.
2. Restore your backed-up config file and database.
3. Revert to the previous image version:
- For Docker: Specify an older tag (e.g., `ghcr.io/blakeblackshear/frigate:0.16.3`) in your `docker run` command.
- For Docker Compose: Edit your `docker-compose.yml`, specify the older version tag (e.g., `ghcr.io/blakeblackshear/frigate:0.16.3`), and re-run `docker compose up -d`. A compose sketch follows this list.
- For Docker: Specify an older tag (e.g., `ghcr.io/blakeblackshear/frigate:0.15.2`) in your `docker run` command.
- For Docker Compose: Edit your `docker-compose.yml`, specify the older version tag (e.g., `ghcr.io/blakeblackshear/frigate:0.15.2`), and re-run `docker compose up -d`.
- For Home Assistant: Reinstall the previous addon version manually via the repository if needed and restart the addon.
4. Verify the old version is running again.
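For step 3 with Docker Compose, a minimal sketch (assuming `0.16.3` was your previous working release):

```yaml
services:
  frigate:
    # Pin the previous known-good tag instead of the newer one
    image: ghcr.io/blakeblackshear/frigate:0.16.3
```

Then re-run `docker compose up -d` and confirm the UI reports the old version.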

View File

@@ -113,8 +113,7 @@ section.
1. If the stream you added to go2rtc is also used by Frigate for the `record` or `detect` role, you can migrate your config to pull from the RTSP restream to reduce the number of connections to your camera as shown [here](/configuration/restream#reduce-connections-to-camera). A configuration sketch follows this list.
2. You can [set up WebRTC](/configuration/live#webrtc-extra-configuration) if your camera supports two-way talk. Note that WebRTC only supports specific audio formats and may require opening ports on your router.
3. If your camera supports two-way talk, you must configure your stream with `#backchannel=0` to prevent go2rtc from blocking other applications from accessing the camera's audio output. See [preventing go2rtc from blocking two-way audio](/configuration/restream#two-way-talk-restream) in the restream documentation.
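For item 1, a minimal sketch of pulling the `record` and `detect` roles from the local go2rtc restream rather than the camera itself (the `front_door` name mirrors the example above; the `127.0.0.1:8554` address and `preset-rtsp-restream` input preset are the usual defaults, assumed here):

```yaml
go2rtc:
  streams:
    front_door:
      - rtsp://user:password@10.0.10.10:554/cam/realmonitor?channel=1&subtype=2

cameras:
  front_door:
    ffmpeg:
      inputs:
        # Only go2rtc connects to the camera; Frigate pulls from the restream.
        - path: rtsp://127.0.0.1:8554/front_door
          input_args: preset-rtsp-restream
          roles:
            - record
            - detect
```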
## Homekit Configuration
To add camera streams to Homekit, Frigate must be configured in Docker to use `host` networking mode. Once that is done, you can use the go2rtc WebUI (accessed via port 1984, which is disabled by default) to export a camera to Homekit. Any changes made will automatically be saved to `/config/go2rtc_homekit.yml`.
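A minimal compose sketch of the networking requirement (only the relevant key is shown; the rest of the service definition stays as-is):

```yaml
services:
  frigate:
    # Host networking is required for sharing camera streams to Homekit
    network_mode: host
```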

View File

@@ -245,12 +245,6 @@ To load a preview gif of a review item:
https://HA_URL/api/frigate/notifications/<review-id>/review_preview.gif
```
To load the thumbnail of a review item:
```
https://HA_URL/api/frigate/notifications/<review-id>/<camera>/review_thumbnail.webp
```
<a name="streams"></a>
## RTSP stream

View File

@@ -38,7 +38,3 @@ This is a fork (with fixed errors and new features) of [original Double Take](ht
## [Periscope](https://github.com/maksz42/periscope)
[Periscope](https://github.com/maksz42/periscope) is a lightweight Android app that turns old devices into live viewers for Frigate. It works on Android 2.2 and above, including Android TV. It supports authentication and HTTPS.
## [Scrypted - Frigate bridge plugin](https://github.com/apocaliss92/scrypted-frigate-bridge)
[Scrypted - Frigate bridge](https://github.com/apocaliss92/scrypted-frigate-bridge) is a plugin that ingests Frigate detections, motion, and video clips into Scrypted, and provides templates for exporting rebroadcast configurations to Frigate.

View File

@@ -15,11 +15,13 @@ There are three model types offered in Frigate+, `mobiledet`, `yolonas`, and `yo
Not all model types are supported by all detectors, so it's important to choose a model type to match your detector as shown in the table under [supported detector types](#supported-detector-types). You can test model types for compatibility and speed on your hardware by using the base models.
| Model Type | Description |
| ----------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
| `mobiledet` | Based on the same architecture as the default model included with Frigate. Runs on Google Coral devices and CPUs. |
| `yolonas` | A newer architecture that offers slightly higher accuracy and improved detection of small objects. Runs on Intel, NVidia GPUs, and AMD GPUs. |
| `yolov9` | A leading SOTA (state of the art) object detection model with similar performance to yolonas, but on a wider range of hardware options. Runs on Intel, NVidia GPUs, AMD GPUs, Hailo, MemryX, Apple Silicon, and Rockchip NPUs. |
| Model Type | Description |
| ----------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `mobiledet` | Based on the same architecture as the default model included with Frigate. Runs on Google Coral devices and CPUs. |
| `yolonas` | A newer architecture that offers slightly higher accuracy and improved detection of small objects. Runs on Intel, NVidia GPUs, and AMD GPUs. |
| `yolov9` | A leading SOTA (state of the art) object detection model with similar performance to yolonas, but on a wider range of hardware options. Runs on Intel, NVidia GPUs, AMD GPUs, Hailo, MemryX\*, Apple Silicon\*, and Rockchip NPUs. |
_\* Support coming in 0.17_
### YOLOv9 Details
@@ -37,7 +39,7 @@ If you have a Hailo device, you will need to specify the hardware you have when
#### Rockchip (RKNN) Support
For 0.16, YOLOv9 onnx models will need to be manually converted. First, you will need to configure Frigate to use the model id for your YOLOv9 onnx model so it downloads the model to your `model_cache` directory. From there, you can follow the [documentation](/configuration/object_detectors.md#converting-your-own-onnx-model-to-rknn-format) to convert it. Automatic conversion is available in 0.17 and later.
For 0.16, YOLOv9 onnx models will need to be manually converted. First, you will need to configure Frigate to use the model id for your YOLOv9 onnx model so it downloads the model to your `model_cache` directory. From there, you can follow the [documentation](/configuration/object_detectors.md#converting-your-own-onnx-model-to-rknn-format) to convert it. Automatic conversion is coming in 0.17.
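A minimal sketch of that first step, assuming the `plus://` model path used for Frigate+ models and a placeholder model id:

```yaml
model:
  # Hypothetical id - substitute the id of your YOLOv9 onnx model from Frigate+.
  # Frigate will download it into the model_cache directory on startup.
  path: "plus://<your_model_id>"
```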
## Supported detector types
@@ -53,7 +55,7 @@ Currently, Frigate+ models support CPU (`cpu`), Google Coral (`edgetpu`), OpenVi
| [Hailo8/Hailo8L/Hailo8R](/configuration/object_detectors#hailo-8) | `hailo8l` | `yolov9` |
| [Rockchip NPU](/configuration/object_detectors#rockchip-platform)\* | `rknn` | `yolov9` |
_\* Requires manual conversion in 0.16. Automatic conversion available in 0.17 and later._
_\* Requires manual conversion in 0.16. Automatic conversion coming in 0.17._
## Improving your model

View File

@@ -1,60 +0,0 @@
---
id: dummy-camera
title: Troubleshooting Detection
---
When investigating object detection or tracking problems, it can be helpful to replay an exported video as a temporary "dummy" camera. This lets you reproduce issues locally, iterate on configuration (detections, zones, enrichment settings), and capture logs and clips for analysis.
## When to use
- Replaying an exported clip to reproduce incorrect detections
- Testing configuration changes (model settings, trackers, filters) against a known clip
- Gathering deterministic logs and recordings for debugging or issue reports
## Example Config
Place the clip you want to replay in a location accessible to Frigate (for example `/media/frigate/` or the repository `debug/` folder when developing). Then add a temporary camera to your `config/config.yml` like this:
```yaml
cameras:
test:
ffmpeg:
inputs:
- path: /media/frigate/car-stopping.mp4
input_args: -re -stream_loop -1 -fflags +genpts
roles:
- detect
detect:
enabled: true
record:
enabled: false
snapshots:
enabled: false
```
- `-re -stream_loop -1` tells `ffmpeg` to play the file in realtime and loop indefinitely, which is useful for long debugging sessions.
- `-fflags +genpts` helps generate presentation timestamps when they are missing in the file.
## Steps
1. Export or copy the clip you want to replay to the Frigate host (e.g., `/media/frigate/` or `debug/clips/`).
2. Add the temporary camera to `config/config.yml` (example above). Use a unique name such as `test` or `replay_camera` so it's easy to remove later.
- If you're debugging a specific camera, copy the settings from that camera (frame rate, model/enrichment settings, zones, etc.) into the temporary camera so the replay closely matches the original environment. Leave `record` and `snapshots` disabled unless you are specifically debugging recording or snapshot behavior.
3. Restart Frigate.
4. Observe the Debug view in the UI and logs as the clip is replayed. Watch detections, zones, or any feature you're looking to debug, and note any errors in the logs to reproduce the issue.
5. Iterate on camera or enrichment settings (model, fps, zones, filters) and re-check the replay until the behavior is resolved.
6. Remove the temporary camera from your config after debugging to avoid spurious telemetry or recordings.
## Variables to consider in object tracking
- The exported video will not always line up exactly with how it originally ran through Frigate (or even with the last loop). Different frames may be used on replay, which can change detections and tracking.
- Motion detection depends on the frames used; small frame shifts can change motion regions and therefore what gets passed to the detector.
- Object detection is not deterministic: models and post-processing can yield different results across runs, so you may not get identical detections or track IDs every time.
When debugging, treat the replay as a close approximation rather than a byte-for-byte replay. Capture multiple runs, enable recording if helpful, and examine logs and saved event clips to understand variability.
## Troubleshooting
- No video: verify the path is correct and accessible from the Frigate process/container.
- FFmpeg errors: check the log output for ffmpeg-specific flags and adjust `input_args` accordingly for your file/container. You may also need to disable hardware acceleration (`hwaccel_args: ""`) for the dummy camera.
- No detections: confirm the camera `roles` include `detect`, and model/detector configuration is enabled.

View File

@@ -1,135 +0,0 @@
---
id: memory
title: Memory Troubleshooting
---
Frigate includes built-in memory profiling using [memray](https://bloomberg.github.io/memray/) to help diagnose memory issues. This feature allows you to profile specific Frigate modules to identify memory leaks, excessive allocations, or other memory-related problems.
## Enabling Memory Profiling
Memory profiling is controlled via the `FRIGATE_MEMRAY_MODULES` environment variable. Set it to a comma-separated list of module names you want to profile:
```yaml
# docker-compose example
services:
frigate:
...
environment:
- FRIGATE_MEMRAY_MODULES=frigate.embeddings,frigate.capture
```
```bash
# docker run example
docker run -e FRIGATE_MEMRAY_MODULES="frigate.embeddings" \
...
--name frigate <frigate_image>
```
### Module Names
Frigate processes are named using a module-based naming scheme. Common module names include:
- `frigate.review_segment_manager` - Review segment processing
- `frigate.recording_manager` - Recording management
- `frigate.capture` - Camera capture processes (all cameras with this module name)
- `frigate.process` - Camera processing/tracking (all cameras with this module name)
- `frigate.output` - Output processing
- `frigate.audio_manager` - Audio processing
- `frigate.embeddings` - Embeddings processing
- `frigate.embeddings_manager` - Embeddings manager
You can also specify the full process name (including camera-specific identifiers) if you want to profile a specific camera:
```bash
FRIGATE_MEMRAY_MODULES=frigate.capture:front_door
```
When you specify a module name (e.g., `frigate.capture`), all processes with that module prefix will be profiled. For example, `frigate.capture` will profile all camera capture processes.
## How It Works
1. **Binary File Creation**: When profiling is enabled, memray creates a binary file (`.bin`) in `/config/memray_reports/` that is updated continuously in real-time as the process runs.
2. **Automatic HTML Generation**: On normal process exit, Frigate automatically:
- Stops memray tracking
- Generates an HTML flamegraph report
- Saves it to `/config/memray_reports/<module_name>.html`
3. **Crash Recovery**: If a process crashes (SIGKILL, segfault, etc.), the binary file is preserved with all data up to the crash point. You can manually generate the HTML report from the binary file.
## Viewing Reports
### Automatic Reports
After a process exits normally, you'll find HTML reports in `/config/memray_reports/`. Open these files in a web browser to view interactive flamegraphs showing memory usage patterns.
### Manual Report Generation
If a process crashes or you want to generate a report from an existing binary file, you can manually create the HTML report:
- Run `memray` inside the Frigate container:
```bash
docker-compose exec frigate memray flamegraph /config/memray_reports/<module_name>.bin
# or
docker exec -it <container_name_or_id> memray flamegraph /config/memray_reports/<module_name>.bin
```
- You can also copy the `.bin` file to the host and run `memray` locally if you have it installed:
```bash
docker cp <container_name_or_id>:/config/memray_reports/<module_name>.bin /tmp/
memray flamegraph /tmp/<module_name>.bin
```
## Understanding the Reports
Memray flamegraphs show:
- **Memory allocations over time**: See where memory is being allocated in your code
- **Call stacks**: Understand the full call chain leading to allocations
- **Memory hotspots**: Identify functions or code paths that allocate the most memory
- **Memory leaks**: Spot patterns where memory is allocated but not freed
The interactive HTML reports allow you to:
- Zoom into specific time ranges
- Filter by function names
- View detailed allocation information
- Export data for further analysis
## Best Practices
1. **Profile During Issues**: Enable profiling when you're experiencing memory issues, not all the time, as it adds some overhead.
2. **Profile Specific Modules**: Instead of profiling everything, focus on the modules you suspect are causing issues.
3. **Let Processes Run**: Allow processes to run for a meaningful duration to capture representative memory usage patterns.
4. **Check Binary Files**: If HTML reports aren't generated automatically (e.g., after a crash), check for `.bin` files in `/config/memray_reports/` and generate reports manually.
5. **Compare Reports**: Generate reports at different times to compare memory usage patterns and identify trends.
## Troubleshooting
### No Reports Generated
- Check that the environment variable is set correctly
- Verify the module name matches exactly (case-sensitive)
- Check logs for memray-related errors
- Ensure `/config/memray_reports/` directory exists and is writable
### Process Crashed Before Report Generation
- Look for `.bin` files in `/config/memray_reports/`
- Manually generate HTML reports using: `memray flamegraph <file>.bin`
- The binary file contains all data up to the crash point
### Reports Show No Data
- Ensure the process ran long enough to generate meaningful data
- Check that memray is properly installed (included by default in Frigate)
- Verify the process actually started and ran (check process logs)
For more information about memray and interpreting reports, see the [official memray documentation](https://bloomberg.github.io/memray/).

3199
docs/package-lock.json generated
View File

File diff suppressed because it is too large

View File

@@ -18,14 +18,14 @@
},
"dependencies": {
"@docusaurus/core": "^3.7.0",
"@docusaurus/plugin-content-docs": "^3.7.0",
"@docusaurus/plugin-content-docs": "^3.6.3",
"@docusaurus/preset-classic": "^3.7.0",
"@docusaurus/theme-mermaid": "^3.7.0",
"@docusaurus/theme-mermaid": "^3.6.3",
"@inkeep/docusaurus": "^2.0.16",
"@mdx-js/react": "^3.1.0",
"clsx": "^2.1.1",
"docusaurus-plugin-openapi-docs": "^4.5.1",
"docusaurus-theme-openapi-docs": "^4.5.1",
"docusaurus-plugin-openapi-docs": "^4.3.1",
"docusaurus-theme-openapi-docs": "^4.3.1",
"prism-react-renderer": "^2.4.1",
"raw-loader": "^4.0.2",
"react": "^18.3.1",
@@ -44,9 +44,9 @@
]
},
"devDependencies": {
"@docusaurus/module-type-aliases": "^3.7.0",
"@docusaurus/types": "^3.7.0",
"@types/react": "^18.3.27"
"@docusaurus/module-type-aliases": "^3.4.0",
"@docusaurus/types": "^3.4.0",
"@types/react": "^18.3.7"
},
"engines": {
"node": ">=18.0"

View File

@@ -131,8 +131,6 @@ const sidebars: SidebarsConfig = {
"troubleshooting/recordings",
"troubleshooting/gpu",
"troubleshooting/edgetpu",
"troubleshooting/memory",
"troubleshooting/dummy-camera",
],
Development: [
"development/contributing",

View File

@@ -1,18 +1,13 @@
.alert {
padding: 12px;
background: #fff8e6;
border-bottom: 1px solid #ffd166;
text-align: center;
font-size: 15px;
}
[data-theme="dark"] .alert {
background: #3b2f0b;
border-bottom: 1px solid #665c22;
}
.alert a {
color: #1890ff;
font-weight: 500;
margin-left: 6px;
}
padding: 12px;
background: #fff8e6;
border-bottom: 1px solid #ffd166;
text-align: center;
font-size: 15px;
}
.alert a {
color: #1890ff;
font-weight: 500;
margin-left: 6px;
}

View File

File diff suppressed because it is too large

View File

@@ -23,7 +23,7 @@ from markupsafe import escape
from peewee import SQL, fn, operator
from pydantic import ValidationError
from frigate.api.auth import allow_any_authenticated, allow_public, require_role
from frigate.api.auth import require_role
from frigate.api.defs.query.app_query_parameters import AppTimelineHourlyQueryParameters
from frigate.api.defs.request.app_body import AppConfigSetBody
from frigate.api.defs.tags import Tags
@@ -56,33 +56,29 @@ logger = logging.getLogger(__name__)
router = APIRouter(tags=[Tags.app])
@router.get(
"/", response_class=PlainTextResponse, dependencies=[Depends(allow_public())]
)
@router.get("/", response_class=PlainTextResponse)
def is_healthy():
return "Frigate is running. Alive and healthy!"
@router.get("/config/schema.json", dependencies=[Depends(allow_public())])
@router.get("/config/schema.json")
def config_schema(request: Request):
return Response(
content=request.app.frigate_config.schema_json(), media_type="application/json"
)
@router.get(
"/version", response_class=PlainTextResponse, dependencies=[Depends(allow_public())]
)
@router.get("/version", response_class=PlainTextResponse)
def version():
return VERSION
@router.get("/stats", dependencies=[Depends(allow_any_authenticated())])
@router.get("/stats")
def stats(request: Request):
return JSONResponse(content=request.app.stats_emitter.get_latest_stats())
@router.get("/stats/history", dependencies=[Depends(allow_any_authenticated())])
@router.get("/stats/history")
def stats_history(request: Request, keys: str = None):
if keys:
keys = keys.split(",")
@@ -90,7 +86,7 @@ def stats_history(request: Request, keys: str = None):
return JSONResponse(content=request.app.stats_emitter.get_stats_history(keys))
@router.get("/metrics", dependencies=[Depends(allow_any_authenticated())])
@router.get("/metrics")
def metrics(request: Request):
"""Expose Prometheus metrics endpoint and update metrics with latest stats"""
# Retrieve the latest statistics and update the Prometheus metrics
@@ -107,7 +103,7 @@ def metrics(request: Request):
return Response(content=content, media_type=content_type)
@router.get("/config", dependencies=[Depends(allow_any_authenticated())])
@router.get("/config")
def config(request: Request):
config_obj: FrigateConfig = request.app.frigate_config
config: dict[str, dict[str, Any]] = config_obj.model_dump(
@@ -213,7 +209,7 @@ def config_raw_paths(request: Request):
return JSONResponse(content=raw_paths)
@router.get("/config/raw", dependencies=[Depends(allow_any_authenticated())])
@router.get("/config/raw")
def config_raw():
config_file = find_config_file()
@@ -456,7 +452,7 @@ def config_set(request: Request, body: AppConfigSetBody):
)
@router.get("/vainfo", dependencies=[Depends(allow_any_authenticated())])
@router.get("/vainfo")
def vainfo():
vainfo = vainfo_hwaccel()
return JSONResponse(
@@ -476,16 +472,12 @@ def vainfo():
)
@router.get("/nvinfo", dependencies=[Depends(allow_any_authenticated())])
@router.get("/nvinfo")
def nvinfo():
return JSONResponse(content=get_nvidia_driver_info())
@router.get(
"/logs/{service}",
tags=[Tags.logs],
dependencies=[Depends(allow_any_authenticated())],
)
@router.get("/logs/{service}", tags=[Tags.logs])
async def logs(
service: str = Path(enum=["frigate", "nginx", "go2rtc"]),
download: Optional[str] = None,
@@ -593,7 +585,7 @@ def restart():
)
@router.get("/labels", dependencies=[Depends(allow_any_authenticated())])
@router.get("/labels")
def get_labels(camera: str = ""):
try:
if camera:
@@ -611,7 +603,7 @@ def get_labels(camera: str = ""):
return JSONResponse(content=labels)
@router.get("/sub_labels", dependencies=[Depends(allow_any_authenticated())])
@router.get("/sub_labels")
def get_sub_labels(split_joined: Optional[int] = None):
try:
events = Event.select(Event.sub_label).distinct()
@@ -642,7 +634,7 @@ def get_sub_labels(split_joined: Optional[int] = None):
return JSONResponse(content=sub_labels)
@router.get("/plus/models", dependencies=[Depends(allow_any_authenticated())])
@router.get("/plus/models")
def plusModels(request: Request, filterByCurrentModelDetector: bool = False):
if not request.app.frigate_config.plus_api.is_active():
return JSONResponse(
@@ -684,9 +676,7 @@ def plusModels(request: Request, filterByCurrentModelDetector: bool = False):
return JSONResponse(content=validModels)
@router.get(
"/recognized_license_plates", dependencies=[Depends(allow_any_authenticated())]
)
@router.get("/recognized_license_plates")
def get_recognized_license_plates(split_joined: Optional[int] = None):
try:
query = (
@@ -720,7 +710,7 @@ def get_recognized_license_plates(split_joined: Optional[int] = None):
return JSONResponse(content=recognized_license_plates)
@router.get("/timeline", dependencies=[Depends(allow_any_authenticated())])
@router.get("/timeline")
def timeline(camera: str = "all", limit: int = 100, source_id: Optional[str] = None):
clauses = []
@@ -757,7 +747,7 @@ def timeline(camera: str = "all", limit: int = 100, source_id: Optional[str] = N
return JSONResponse(content=[t for t in timeline])
@router.get("/timeline/hourly", dependencies=[Depends(allow_any_authenticated())])
@router.get("/timeline/hourly")
def hourly_timeline(params: AppTimelineHourlyQueryParameters = Depends()):
"""Get hourly summary for timeline."""
cameras = params.cameras

View File

@@ -32,164 +32,10 @@ from frigate.models import User
logger = logging.getLogger(__name__)
def require_admin_by_default():
"""
Global admin requirement dependency for all endpoints by default.
This is set as the default dependency on the FastAPI app to ensure all
endpoints require admin access unless explicitly overridden with
allow_public(), allow_any_authenticated(), or require_role().
Port 5000 (internal) always has admin role set by the /auth endpoint,
so this check passes automatically for internal requests.
Certain paths are exempted from the global admin check because they must
be accessible before authentication (login, auth) or they have their own
route-level authorization dependencies that handle access control.
"""
# Paths that have route-level auth dependencies and should bypass global admin check
# These paths still have authorization - it's handled by their route-level dependencies
EXEMPT_PATHS = {
# Public auth endpoints (allow_public)
"/auth",
"/auth/first_time_login",
"/login",
"/logout",
# Authenticated user endpoints (allow_any_authenticated)
"/profile",
# Public info endpoints (allow_public)
"/",
"/version",
"/config/schema.json",
# Authenticated user endpoints (allow_any_authenticated)
"/metrics",
"/stats",
"/stats/history",
"/config",
"/config/raw",
"/vainfo",
"/nvinfo",
"/labels",
"/sub_labels",
"/plus/models",
"/recognized_license_plates",
"/timeline",
"/timeline/hourly",
"/recordings/storage",
"/recordings/summary",
"/recordings/unavailable",
"/go2rtc/streams",
"/event_ids",
"/events",
"/exports",
}
# Path prefixes that should be exempt (for paths with parameters)
EXEMPT_PREFIXES = (
"/logs/", # /logs/{service}
"/review", # /review, /review/{id}, /review/summary, /review_ids, etc.
"/reviews/", # /reviews/viewed, /reviews/delete
"/events/", # /events/{id}/thumbnail, /events/summary, etc. (camera-scoped)
"/export/", # /export/{camera}/start/..., /export/{id}/rename, /export/{id}
"/go2rtc/streams/", # /go2rtc/streams/{camera}
"/users/", # /users/{username}/password (has own auth)
"/preview/", # /preview/{file}/thumbnail.jpg
"/exports/", # /exports/{export_id}
"/vod/", # /vod/{camera_name}/...
"/notifications/", # /notifications/pubkey, /notifications/register
)
async def admin_checker(request: Request):
path = request.url.path
# Check exact path matches
if path in EXEMPT_PATHS:
return
# Check prefix matches for parameterized paths
if path.startswith(EXEMPT_PREFIXES):
return
# Dynamic camera path exemption:
# Any path whose first segment matches a configured camera name should
# bypass the global admin requirement. These endpoints enforce access
# via route-level dependencies (e.g. require_camera_access) to ensure
# per-camera authorization. This allows non-admin authenticated users
# (e.g. viewer role) to access camera-specific resources without
# needing admin privileges.
try:
if path.startswith("/"):
first_segment = path.split("/", 2)[1]
if (
first_segment
and first_segment in request.app.frigate_config.cameras
):
return
except Exception:
pass
# For all other paths, require admin role
# Port 5000 (internal) requests have admin role set automatically
role = request.headers.get("remote-role")
if role == "admin":
return
raise HTTPException(
status_code=403,
detail="Access denied. A user with the admin role is required.",
)
return admin_checker
def allow_public():
"""
Override dependency to allow unauthenticated access to an endpoint.
Use this for endpoints that should be publicly accessible without
authentication, such as login page, health checks, or pre-auth info.
Example:
@router.get("/public-endpoint", dependencies=[Depends(allow_public())])
"""
async def public_checker(request: Request):
return # Always allow
return public_checker
def allow_any_authenticated():
"""
Override dependency to allow any request that passed through the /auth endpoint.
Allows:
- Port 5000 internal requests (remote-user: "anonymous", remote-role: "admin")
- Authenticated users with JWT tokens (remote-user: username)
- Unauthenticated requests when auth is disabled (remote-user: "viewer")
Rejects:
- Requests with no remote-user header (did not pass through /auth endpoint)
Example:
@router.get("/authenticated-endpoint", dependencies=[Depends(allow_any_authenticated())])
"""
async def auth_checker(request: Request):
# Ensure a remote-user has been set by the /auth endpoint
username = request.headers.get("remote-user")
if username is None:
raise HTTPException(status_code=401, detail="Authentication required")
return
return auth_checker
router = APIRouter(tags=[Tags.auth])
@router.get("/auth/first_time_login", dependencies=[Depends(allow_public())])
@router.get("/auth/first_time_login")
def first_time_login(request: Request):
"""Return whether the admin first-time login help flag is set in config.
@@ -297,10 +143,7 @@ def get_jwt_secret() -> str:
)
jwt_secret = secrets.token_hex(64)
try:
fd = os.open(
jwt_secret_file, os.O_WRONLY | os.O_CREAT | os.O_EXCL, 0o600
)
with os.fdopen(fd, "w") as f:
with open(jwt_secret_file, "w") as f:
f.write(str(jwt_secret))
except Exception:
logger.warning(
@@ -345,35 +188,9 @@ def verify_password(password, password_hash):
return secrets.compare_digest(password_hash, compare_hash)
def validate_password_strength(password: str) -> tuple[bool, Optional[str]]:
"""
Validate password strength.
Returns a tuple of (is_valid, error_message).
"""
if not password:
return False, "Password cannot be empty"
if len(password) < 8:
return False, "Password must be at least 8 characters long"
if not any(c.isupper() for c in password):
return False, "Password must contain at least one uppercase letter"
if not any(c.isdigit() for c in password):
return False, "Password must contain at least one digit"
if not any(c in '!@#$%^&*(),.?":{}|<>' for c in password):
return False, "Password must contain at least one special character"
return True, None
def create_encoded_jwt(user, role, expiration, secret):
return jwt.encode(
{"alg": "HS256"},
{"sub": user, "role": role, "exp": expiration, "iat": int(time.time())},
secret,
{"alg": "HS256"}, {"sub": user, "role": role, "exp": expiration}, secret
)
@@ -535,37 +352,7 @@ def resolve_role(
# Endpoints
@router.get(
"/auth",
dependencies=[Depends(allow_public())],
summary="Authenticate request",
description=(
"Authenticates the current request based on proxy headers or JWT token. "
"This endpoint verifies authentication credentials and manages JWT token refresh. "
"On success, no JSON body is returned; authentication state is communicated via response headers and cookies."
),
status_code=202,
responses={
202: {
"description": "Authentication Accepted (no response body)",
"headers": {
"remote-user": {
"description": 'Authenticated username or "viewer" in proxy-only mode',
"schema": {"type": "string"},
},
"remote-role": {
"description": "Resolved role (e.g., admin, viewer, or custom)",
"schema": {"type": "string"},
},
"Set-Cookie": {
"description": "May include refreshed JWT cookie when applicable",
"schema": {"type": "string"},
},
},
},
401: {"description": "Authentication Failed"},
},
)
@router.get("/auth")
def auth(request: Request):
auth_config: AuthConfig = request.app.frigate_config.auth
proxy_config: ProxyConfig = request.app.frigate_config.proxy
@@ -592,12 +379,12 @@ def auth(request: Request):
# if auth is disabled, just apply the proxy header map and return success
if not auth_config.enabled:
# pass the user header value from the upstream proxy if a mapping is specified
# or use viewer if none are specified
# or use anonymous if none are specified
user_header = proxy_config.header_map.user
success_response.headers["remote-user"] = (
request.headers.get(user_header, default="viewer")
request.headers.get(user_header, default="anonymous")
if user_header
else "viewer"
else "anonymous"
)
# parse header and resolve a valid role
@@ -664,27 +451,13 @@ def auth(request: Request):
return fail_response
# if the jwt cookie is expiring soon
if jwt_source == "cookie" and expiration - JWT_REFRESH <= current_time:
elif jwt_source == "cookie" and expiration - JWT_REFRESH <= current_time:
logger.debug("jwt token expiring soon, refreshing cookie")
# Check if password has been changed since token was issued
# If so, force re-login by rejecting the refresh
# ensure the user hasn't been deleted
try:
user_obj = User.get_by_id(user)
if user_obj.password_changed_at is not None:
token_iat = int(token.claims.get("iat", 0))
password_changed_timestamp = int(
user_obj.password_changed_at.timestamp()
)
if token_iat < password_changed_timestamp:
logger.debug(
"jwt token issued before password change, rejecting refresh"
)
return fail_response
User.get_by_id(user)
except DoesNotExist:
logger.debug("user not found")
return fail_response
new_expiration = current_time + JWT_SESSION_LENGTH
new_encoded_jwt = create_encoded_jwt(
user, role, new_expiration, request.app.jwt_token
@@ -705,14 +478,9 @@ def auth(request: Request):
return fail_response
@router.get(
"/profile",
dependencies=[Depends(allow_any_authenticated())],
summary="Get user profile",
description="Returns the current authenticated user's profile including username, role, and allowed cameras. This endpoint requires authentication and returns information about the user's permissions.",
)
@router.get("/profile")
def profile(request: Request):
username = request.headers.get("remote-user", "viewer")
username = request.headers.get("remote-user", "anonymous")
role = request.headers.get("remote-role", "viewer")
all_camera_names = set(request.app.frigate_config.cameras.keys())
@@ -724,12 +492,7 @@ def profile(request: Request):
)
@router.get(
"/logout",
dependencies=[Depends(allow_public())],
summary="Logout user",
description="Logs out the current user by clearing the session cookie. After logout, subsequent requests will require re-authentication.",
)
@router.get("/logout")
def logout(request: Request):
auth_config: AuthConfig = request.app.frigate_config.auth
response = RedirectResponse("/login", status_code=303)
@@ -740,12 +503,7 @@ def logout(request: Request):
limiter = Limiter(key_func=get_remote_addr)
@router.post(
"/login",
dependencies=[Depends(allow_public())],
summary="Login with credentials",
description='Authenticates a user with username and password. Returns a JWT token as a secure HTTP-only cookie that can be used for subsequent API requests. The JWT token can also be retrieved from the response and used as a Bearer token in the Authorization header.\n\nExample using Bearer token:\n```\ncurl -H "Authorization: Bearer <token_value>" https://frigate_ip:8971/api/profile\n```',
)
@router.post("/login")
@limiter.limit(limit_value=rateLimiter.get_limit)
def login(request: Request, body: AppPostLoginBody):
JWT_COOKIE_NAME = request.app.frigate_config.auth.cookie_name
@@ -783,12 +541,7 @@ def login(request: Request, body: AppPostLoginBody):
return JSONResponse(content={"message": "Login failed"}, status_code=401)
@router.get(
"/users",
dependencies=[Depends(require_role(["admin"]))],
summary="Get all users",
description="Returns a list of all users with their usernames and roles. Requires admin role. Each user object contains the username and assigned role.",
)
@router.get("/users", dependencies=[Depends(require_role(["admin"]))])
def get_users():
exports = (
User.select(User.username, User.role).order_by(User.username).dicts().iterator()
@@ -796,12 +549,7 @@ def get_users():
return JSONResponse([e for e in exports])
@router.post(
"/users",
dependencies=[Depends(require_role(["admin"]))],
summary="Create new user",
description='Creates a new user with the specified username, password, and role. Requires admin role. Password must meet strength requirements: minimum 8 characters, at least one uppercase letter, at least one digit, and at least one special character (!@#$%^&*(),.?":{} |<>).',
)
@router.post("/users", dependencies=[Depends(require_role(["admin"]))])
def create_user(
request: Request,
body: AppPostUsersBody,
@@ -830,29 +578,13 @@ def create_user(
return JSONResponse(content={"username": body.username})
@router.delete(
"/users/{username}",
dependencies=[Depends(require_role(["admin"]))],
summary="Delete user",
description="Deletes a user by username. The built-in admin user cannot be deleted. Requires admin role. Returns success message or error if user not found.",
)
def delete_user(request: Request, username: str):
# Prevent deletion of the built-in admin user
if username == "admin":
return JSONResponse(
content={"message": "Cannot delete admin user"}, status_code=403
)
@router.delete("/users/{username}")
def delete_user(username: str):
User.delete_by_id(username)
return JSONResponse(content={"success": True})
@router.put(
"/users/{username}/password",
dependencies=[Depends(allow_any_authenticated())],
summary="Update user password",
description="Updates a user's password. Users can only change their own password unless they have admin role. Requires the current password to verify identity for non-admin users. Password must meet strength requirements: minimum 8 characters, at least one uppercase letter, at least one digit, and at least one special character (!@#$%^&*(),.?\":{} |<>). If user changes their own password, a new JWT cookie is automatically issued.",
)
@router.put("/users/{username}/password")
async def update_password(
request: Request,
username: str,
@@ -874,66 +606,15 @@ async def update_password(
HASH_ITERATIONS = request.app.frigate_config.auth.hash_iterations
try:
user = User.get_by_id(username)
except DoesNotExist:
return JSONResponse(content={"message": "User not found"}, status_code=404)
# Require old_password when non-admin user is changing any password
# Admin users changing passwords do NOT need to provide the current password
if current_role != "admin":
if not body.old_password:
return JSONResponse(
content={"message": "Current password is required"},
status_code=400,
)
if not verify_password(body.old_password, user.password_hash):
return JSONResponse(
content={"message": "Current password is incorrect"},
status_code=401,
)
# Validate new password strength
is_valid, error_message = validate_password_strength(body.password)
if not is_valid:
return JSONResponse(
content={"message": error_message},
status_code=400,
)
password_hash = hash_password(body.password, iterations=HASH_ITERATIONS)
User.update(
{
User.password_hash: password_hash,
User.password_changed_at: datetime.now(),
}
).where(User.username == username).execute()
User.set_by_id(username, {User.password_hash: password_hash})
response = JSONResponse(content={"success": True})
# If user changed their own password, issue a new JWT to keep them logged in
if current_username == username:
JWT_COOKIE_NAME = request.app.frigate_config.auth.cookie_name
JWT_COOKIE_SECURE = request.app.frigate_config.auth.cookie_secure
JWT_SESSION_LENGTH = request.app.frigate_config.auth.session_length
expiration = int(time.time()) + JWT_SESSION_LENGTH
encoded_jwt = create_encoded_jwt(
username, current_role, expiration, request.app.jwt_token
)
# Set new JWT cookie on response
set_jwt_cookie(
response, JWT_COOKIE_NAME, encoded_jwt, expiration, JWT_COOKIE_SECURE
)
return response
return JSONResponse(content={"success": True})
@router.put(
"/users/{username}/role",
dependencies=[Depends(require_role(["admin"]))],
summary="Update user role",
description="Updates a user's role. The built-in admin user's role cannot be modified. Requires admin role. Valid roles are defined in the configuration.",
)
async def update_role(
request: Request,

View File

@@ -15,11 +15,7 @@ from onvif import ONVIFCamera, ONVIFError
from zeep.exceptions import Fault, TransportError
from zeep.transports import AsyncTransport
from frigate.api.auth import (
allow_any_authenticated,
require_camera_access,
require_role,
)
from frigate.api.auth import require_role
from frigate.api.defs.tags import Tags
from frigate.config.config import FrigateConfig
from frigate.util.builtin import clean_camera_user_pass
@@ -54,7 +50,7 @@ def _is_valid_host(host: str) -> bool:
return False
@router.get("/go2rtc/streams", dependencies=[Depends(allow_any_authenticated())])
@router.get("/go2rtc/streams")
def go2rtc_streams():
r = requests.get("http://127.0.0.1:1984/api/streams")
if not r.ok:
@@ -70,9 +66,7 @@ def go2rtc_streams():
return JSONResponse(content=stream_data)
@router.get(
"/go2rtc/streams/{camera_name}", dependencies=[Depends(require_camera_access)]
)
@router.get("/go2rtc/streams/{camera_name}")
def go2rtc_camera_stream(request: Request, camera_name: str):
r = requests.get(
f"http://127.0.0.1:1984/api/streams?src={camera_name}&video=all&audio=all&microphone"
@@ -167,7 +161,7 @@ def go2rtc_delete_stream(stream_name: str):
)
@router.get("/ffprobe", dependencies=[Depends(require_role(["admin"]))])
@router.get("/ffprobe")
def ffprobe(request: Request, paths: str = "", detailed: bool = False):
path_param = paths

View File

@@ -31,7 +31,6 @@ from frigate.api.defs.response.generic_response import GenericResponse
from frigate.api.defs.tags import Tags
from frigate.config import FrigateConfig
from frigate.config.camera import DetectConfig
from frigate.config.classification import ObjectClassificationType
from frigate.const import CLIPS_DIR, FACE_DIR, MODEL_CACHE_DIR
from frigate.embeddings import EmbeddingsContext
from frigate.models import Event
@@ -40,7 +39,6 @@ from frigate.util.classification import (
collect_state_classification_examples,
get_dataset_image_count,
read_training_metadata,
write_training_metadata,
)
from frigate.util.file import get_event_snapshot
@@ -624,59 +622,6 @@ def get_classification_dataset(name: str):
)
@router.get(
"/classification/attributes",
summary="Get custom classification attributes",
description="""Returns custom classification attributes for a given object type.
Only includes models with classification_type set to 'attribute'.
By default returns a flat sorted list of all attribute labels.
If group_by_model is true, returns attributes grouped by model name.""",
)
def get_custom_attributes(
request: Request, object_type: str = None, group_by_model: bool = False
):
models_with_attributes = {}
for (
model_key,
model_config,
) in request.app.frigate_config.classification.custom.items():
if (
not model_config.enabled
or not model_config.object_config
or model_config.object_config.classification_type
!= ObjectClassificationType.attribute
):
continue
model_objects = getattr(model_config.object_config, "objects", []) or []
if object_type is not None and object_type not in model_objects:
continue
dataset_dir = os.path.join(CLIPS_DIR, sanitize_filename(model_key), "dataset")
if not os.path.exists(dataset_dir):
continue
attributes = []
for category_name in os.listdir(dataset_dir):
category_dir = os.path.join(dataset_dir, category_name)
if os.path.isdir(category_dir) and category_name != "none":
attributes.append(category_name)
if attributes:
model_name = model_config.name or model_key
models_with_attributes[model_name] = sorted(attributes)
if group_by_model:
return JSONResponse(content=models_with_attributes)
else:
# Flatten to a unique sorted list
all_attributes = set()
for attributes in models_with_attributes.values():
all_attributes.update(attributes)
return JSONResponse(content=sorted(list(all_attributes)))
@router.get(
"/classification/{name}/train",
summary="Get classification train images",
@@ -765,7 +710,7 @@ def delete_classification_dataset_images(
if os.path.isfile(file_path):
os.unlink(file_path)
if os.path.exists(folder) and not os.listdir(folder) and category.lower() != "none":
if os.path.exists(folder) and not os.listdir(folder):
os.rmdir(folder)
return JSONResponse(
@@ -843,12 +788,6 @@ def rename_classification_category(
try:
os.rename(old_folder, new_folder)
# Mark dataset as ready to train by resetting training metadata
# This ensures the dataset is marked as changed after renaming
sanitized_name = sanitize_filename(name)
write_training_metadata(sanitized_name, 0)
return JSONResponse(
content=(
{
@@ -931,46 +870,6 @@ def categorize_classification_image(request: Request, name: str, body: dict = No
)
@router.post(
"/classification/{name}/dataset/{category}/create",
response_model=GenericResponse,
dependencies=[Depends(require_role(["admin"]))],
summary="Create an empty classification category folder",
description="""Creates an empty folder for a classification category.
This is used to create folders for categories that don't have images yet.
Returns a success message or an error if the name is invalid.""",
)
def create_classification_category(request: Request, name: str, category: str):
config: FrigateConfig = request.app.frigate_config
if name not in config.classification.custom:
return JSONResponse(
content=(
{
"success": False,
"message": f"{name} is not a known classification model.",
}
),
status_code=404,
)
category_folder = os.path.join(
CLIPS_DIR, sanitize_filename(name), "dataset", sanitize_filename(category)
)
os.makedirs(category_folder, exist_ok=True)
return JSONResponse(
content=(
{
"success": True,
"message": f"Successfully created category folder: {category}",
}
),
status_code=200,
)
@router.post(
"/classification/{name}/train/delete",
response_model=GenericResponse,

View File

@@ -12,7 +12,6 @@ class EventsQueryParams(BaseModel):
labels: Optional[str] = "all"
sub_label: Optional[str] = "all"
sub_labels: Optional[str] = "all"
attributes: Optional[str] = "all"
zone: Optional[str] = "all"
zones: Optional[str] = "all"
limit: Optional[int] = 100
@@ -59,8 +58,6 @@ class EventsSearchQueryParams(BaseModel):
limit: Optional[int] = 50
cameras: Optional[str] = "all"
labels: Optional[str] = "all"
sub_labels: Optional[str] = "all"
attributes: Optional[str] = "all"
zones: Optional[str] = "all"
after: Optional[float] = None
before: Optional[float] = None

View File

@@ -11,7 +11,6 @@ class AppConfigSetBody(BaseModel):
class AppPutPasswordBody(BaseModel):
password: str
old_password: Optional[str] = None
class AppPostUsersBody(BaseModel):

View File

@@ -24,18 +24,12 @@ class EventsLPRBody(BaseModel):
)
class EventsAttributesBody(BaseModel):
attributes: List[str] = Field(
title="Selected classification attributes for the event",
default_factory=list,
)
class EventsDescriptionBody(BaseModel):
description: Union[str, None] = Field(title="The description of the event")
class EventsCreateBody(BaseModel):
source_type: Optional[str] = "api"
sub_label: Optional[str] = None
score: Optional[float] = 0
duration: Optional[int] = 30

View File

@@ -1,4 +1,4 @@
from typing import Optional, Union
from typing import Union
from pydantic import BaseModel, Field
from pydantic.json_schema import SkipJsonSchema
@@ -16,5 +16,5 @@ class ExportRecordingsBody(BaseModel):
source: PlaybackSourceEnum = Field(
default=PlaybackSourceEnum.recordings, title="Playback source"
)
name: Optional[str] = Field(title="Friendly name", default=None, max_length=256)
name: str = Field(title="Friendly name", default=None, max_length=256)
image_path: Union[str, SkipJsonSchema[None]] = None

View File

@@ -22,7 +22,6 @@ from peewee import JOIN, DoesNotExist, fn, operator
from playhouse.shortcuts import model_to_dict
from frigate.api.auth import (
allow_any_authenticated,
get_allowed_cameras_for_filter,
require_camera_access,
require_role,
@@ -37,7 +36,6 @@ from frigate.api.defs.query.regenerate_query_parameters import (
RegenerateQueryParameters,
)
from frigate.api.defs.request.events_body import (
EventsAttributesBody,
EventsCreateBody,
EventsDeleteBody,
EventsDescriptionBody,
@@ -56,7 +54,6 @@ from frigate.api.defs.response.event_response import (
from frigate.api.defs.response.generic_response import GenericResponse
from frigate.api.defs.tags import Tags
from frigate.comms.event_metadata_updater import EventMetadataTypeEnum
from frigate.config.classification import ObjectClassificationType
from frigate.const import CLIPS_DIR, TRIGGER_DIR
from frigate.embeddings import EmbeddingsContext
from frigate.models import Event, ReviewSegment, Timeline, Trigger
@@ -72,7 +69,6 @@ router = APIRouter(tags=[Tags.events])
@router.get(
"/events",
response_model=list[EventResponse],
dependencies=[Depends(allow_any_authenticated())],
summary="Get events",
description="Returns a list of events.",
)
@@ -101,8 +97,6 @@ def events(
if sub_labels == "all" and sub_label != "all":
sub_labels = sub_label
attributes = unquote(params.attributes)
zone = params.zone
zones = params.zones
@@ -191,17 +185,6 @@ def events(
sub_label_clause = reduce(operator.or_, sub_label_clauses)
clauses.append((sub_label_clause))
if attributes != "all":
# Custom classification results are stored as data[model_name] = result_value
filtered_attributes = attributes.split(",")
attribute_clauses = []
for attr in filtered_attributes:
attribute_clauses.append(Event.data.cast("text") % f'*:"{attr}"*')
attribute_clause = reduce(operator.or_, attribute_clauses)
clauses.append(attribute_clause)
if recognized_license_plate != "all":
filtered_recognized_license_plates = recognized_license_plate.split(",")
@@ -360,8 +343,7 @@ def events(
@router.get(
"/events/explore",
response_model=list[EventResponse],
dependencies=[Depends(allow_any_authenticated())],
summary="Get summary of objects",
summary="Get summary of objects.",
description="""Gets a summary of objects from the database.
Returns a list of objects with a max of `limit` objects for each label.
""",
@@ -453,8 +435,7 @@ def events_explore(
@router.get(
"/event_ids",
response_model=list[EventResponse],
dependencies=[Depends(allow_any_authenticated())],
summary="Get events by ids",
summary="Get events by ids.",
description="""Gets events by a list of ids.
Returns a list of events.
""",
@@ -487,8 +468,7 @@ async def event_ids(ids: str, request: Request):
@router.get(
"/events/search",
dependencies=[Depends(allow_any_authenticated())],
summary="Search events",
summary="Search events.",
description="""Searches for events in the database.
Returns a list of events.
""",
@@ -507,8 +487,6 @@ def events_search(
# Filters
cameras = params.cameras
labels = params.labels
sub_labels = params.sub_labels
attributes = params.attributes
zones = params.zones
after = params.after
before = params.before
@@ -583,38 +561,6 @@ def events_search(
if labels != "all":
event_filters.append((Event.label << labels.split(",")))
if sub_labels != "all":
# use matching so joined sub labels are included
# for example a sub label 'bob' would get events
# with sub labels 'bob' and 'bob, john'
sub_label_clauses = []
filtered_sub_labels = sub_labels.split(",")
if "None" in filtered_sub_labels:
filtered_sub_labels.remove("None")
sub_label_clauses.append((Event.sub_label.is_null()))
for label in filtered_sub_labels:
sub_label_clauses.append(
(Event.sub_label.cast("text") == label)
) # include exact matches
# include this label when part of a list
sub_label_clauses.append((Event.sub_label.cast("text") % f"*{label},*"))
sub_label_clauses.append((Event.sub_label.cast("text") % f"*, {label}*"))
event_filters.append((reduce(operator.or_, sub_label_clauses)))
if attributes != "all":
# Custom classification results are stored as data[model_name] = result_value
filtered_attributes = attributes.split(",")
attribute_clauses = []
for attr in filtered_attributes:
attribute_clauses.append(Event.data.cast("text") % f'*:"{attr}"*')
event_filters.append(reduce(operator.or_, attribute_clauses))
if zones != "all":
zone_clauses = []
filtered_zones = zones.split(",")
@@ -862,7 +808,7 @@ def events_search(
return JSONResponse(content=processed_events)
@router.get("/events/summary", dependencies=[Depends(allow_any_authenticated())])
@router.get("/events/summary")
def events_summary(
params: EventsSummaryQueryParams = Depends(),
allowed_cameras: List[str] = Depends(get_allowed_cameras_for_filter),
@@ -972,8 +918,7 @@ def events_summary(
@router.get(
"/events/{event_id}",
response_model=EventResponse,
dependencies=[Depends(allow_any_authenticated())],
summary="Get event by id",
summary="Get event by id.",
description="Gets an event by its id.",
)
async def event(event_id: str, request: Request):
@@ -1016,8 +961,7 @@ def set_retain(event_id: str):
@router.post(
"/events/{event_id}/plus",
response_model=EventUploadPlusResponse,
dependencies=[Depends(require_role(["admin"]))],
summary="Send event to Frigate+",
summary="Send event to Frigate+.",
description="""Sends an event to Frigate+.
Returns a success message or an error if the event is not found.
""",
@@ -1157,7 +1101,6 @@ async def send_to_plus(request: Request, event_id: str, body: SubmitPlusBody = N
@router.put(
"/events/{event_id}/false_positive",
response_model=EventUploadPlusResponse,
dependencies=[Depends(require_role(["admin"]))],
summary="Submit false positive to Frigate+",
description="""Submit an event as a false positive to Frigate+.
This endpoint is the same as the standard Frigate+ submission endpoint,
@@ -1256,7 +1199,7 @@ async def false_positive(request: Request, event_id: str):
"/events/{event_id}/retain",
response_model=GenericResponse,
dependencies=[Depends(require_role(["admin"]))],
summary="Stop event from being retained indefinitely",
summary="Stop event from being retained indefinitely.",
description="""Stops an event from being retained indefinitely.
Returns a success message or an error if the event is not found.
NOTE: This is a legacy endpoint and is not supported in the frontend.
@@ -1285,7 +1228,7 @@ async def delete_retain(event_id: str, request: Request):
"/events/{event_id}/sub_label",
response_model=GenericResponse,
dependencies=[Depends(require_role(["admin"]))],
summary="Set event sub label",
summary="Set event sub label.",
description="""Sets an event's sub label.
Returns a success message or an error if the event is not found.
""",
@@ -1344,7 +1287,7 @@ async def set_sub_label(
"/events/{event_id}/recognized_license_plate",
response_model=GenericResponse,
dependencies=[Depends(require_role(["admin"]))],
summary="Set event license plate",
summary="Set event license plate.",
description="""Sets an event's license plate.
Returns a success message or an error if the event is not found.
""",
@@ -1400,112 +1343,11 @@ async def set_plate(
)
@router.post(
"/events/{event_id}/attributes",
response_model=GenericResponse,
dependencies=[Depends(require_role(["admin"]))],
summary="Set custom classification attributes",
description=(
"Sets an event's custom classification attributes for all attribute-type "
"models that apply to the event's object type."
),
)
async def set_attributes(
request: Request,
event_id: str,
body: EventsAttributesBody,
):
try:
event: Event = Event.get(Event.id == event_id)
await require_camera_access(event.camera, request=request)
except DoesNotExist:
return JSONResponse(
content=({"success": False, "message": f"Event {event_id} not found."}),
status_code=404,
)
object_type = event.label
selected_attributes = set(body.attributes or [])
applied_updates: list[dict[str, str | float | None]] = []
for (
model_key,
model_config,
) in request.app.frigate_config.classification.custom.items():
# Only apply to enabled attribute classifiers that target this object type
if (
not model_config.enabled
or not model_config.object_config
or model_config.object_config.classification_type
!= ObjectClassificationType.attribute
or object_type not in (model_config.object_config.objects or [])
):
continue
# Get available labels from dataset directory
dataset_dir = os.path.join(CLIPS_DIR, sanitize_filename(model_key), "dataset")
available_labels = set()
if os.path.exists(dataset_dir):
for category_name in os.listdir(dataset_dir):
category_dir = os.path.join(dataset_dir, category_name)
if os.path.isdir(category_dir):
available_labels.add(category_name)
if not available_labels:
logger.warning(
"No dataset found for custom attribute model %s at %s",
model_key,
dataset_dir,
)
continue
# Find all selected attributes that apply to this model
model_name = model_config.name or model_key
matching_attrs = selected_attributes & available_labels
if matching_attrs:
# Publish updates for each selected attribute
for attr in matching_attrs:
request.app.event_metadata_updater.publish(
(event_id, model_name, attr, 1.0),
EventMetadataTypeEnum.attribute.value,
)
applied_updates.append(
{"model": model_name, "label": attr, "score": 1.0}
)
else:
# Clear this model's attribute
request.app.event_metadata_updater.publish(
(event_id, model_name, None, None),
EventMetadataTypeEnum.attribute.value,
)
applied_updates.append({"model": model_name, "label": None, "score": None})
if len(applied_updates) == 0:
return JSONResponse(
content={
"success": False,
"message": "No matching attributes found for this object type.",
},
status_code=400,
)
return JSONResponse(
content={
"success": True,
"message": f"Updated {len(applied_updates)} attribute(s)",
"applied": applied_updates,
},
status_code=200,
)
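A hypothetical client-side call against this endpoint; the host, port, path prefix, and event id are assumptions, and only the attributes field comes from EventsAttributesBody as used above:

import requests  # assumption: a simple HTTP client on the caller's side

resp = requests.post(
    "http://frigate.local:5000/api/events/1718000000.123456-abc123/attributes",  # hypothetical host and id
    json={"attributes": ["ups"]},  # labels must exist in the model's dataset directory
)
print(resp.status_code, resp.json())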
@router.post(
"/events/{event_id}/description",
response_model=GenericResponse,
dependencies=[Depends(require_role(["admin"]))],
summary="Set event description",
summary="Set event description.",
description="""Sets an event's description.
Returns a success message or an error if the event is not found.
""",
@@ -1561,7 +1403,7 @@ async def set_description(
"/events/{event_id}/description/regenerate",
response_model=GenericResponse,
dependencies=[Depends(require_role(["admin"]))],
summary="Regenerate event description",
summary="Regenerate event description.",
description="""Regenerates an event's description.
Returns a success message or an error if the event is not found.
""",
@@ -1613,8 +1455,8 @@ async def regenerate_description(
@router.post(
"/description/generate",
response_model=GenericResponse,
dependencies=[Depends(require_role(["admin"]))],
summary="Generate description embedding",
# dependencies=[Depends(require_role(["admin"]))],
summary="Generate description embedding.",
description="""Generates an embedding for an event's description.
Returns a success message or an error if the event is not found.
""",
@@ -1679,7 +1521,7 @@ async def delete_single_event(event_id: str, request: Request) -> dict:
"/events/{event_id}",
response_model=GenericResponse,
dependencies=[Depends(require_role(["admin"]))],
summary="Delete event",
summary="Delete event.",
description="""Deletes an event from the database.
Returns a success message or an error if the event is not found.
""",
@@ -1694,7 +1536,7 @@ async def delete_event(request: Request, event_id: str):
"/events/",
response_model=EventMultiDeleteResponse,
dependencies=[Depends(require_role(["admin"]))],
summary="Delete events",
summary="Delete events.",
description="""Deletes a list of events from the database.
Returns a success message or an error if the events are not found.
""",
@@ -1728,7 +1570,7 @@ async def delete_events(request: Request, body: EventsDeleteBody):
"/events/{camera_name}/{label}/create",
response_model=EventCreateResponse,
dependencies=[Depends(require_role(["admin"]))],
summary="Create manual event",
summary="Create manual event.",
description="""Creates a manual event in the database.
Returns a success message or an error if the event is not found.
NOTES:
@@ -1770,7 +1612,7 @@ def create_event(
body.score,
body.sub_label,
body.duration,
"api",
body.source_type,
body.draw,
),
EventMetadataTypeEnum.manual_event_create.value,
@@ -1792,7 +1634,7 @@ def create_event(
"/events/{event_id}/end",
response_model=GenericResponse,
dependencies=[Depends(require_role(["admin"]))],
summary="End manual event",
summary="End manual event.",
description="""Ends a manual event.
Returns a success message or an error if the event is not found.
NOTE: This should only be used for manual events.
@@ -1802,27 +1644,10 @@ async def end_event(request: Request, event_id: str, body: EventsEndBody):
try:
event: Event = Event.get(Event.id == event_id)
await require_camera_access(event.camera, request=request)
if body.end_time is not None and body.end_time < event.start_time:
return JSONResponse(
content=(
{
"success": False,
"message": f"end_time ({body.end_time}) cannot be before start_time ({event.start_time}).",
}
),
status_code=400,
)
end_time = body.end_time or datetime.datetime.now().timestamp()
request.app.event_metadata_updater.publish(
(event_id, end_time), EventMetadataTypeEnum.manual_event_end.value
)
except DoesNotExist:
return JSONResponse(
content=({"success": False, "message": f"Event {event_id} not found."}),
status_code=404,
)
except Exception:
return JSONResponse(
content=(
@@ -1841,7 +1666,7 @@ async def end_event(request: Request, event_id: str, body: EventsEndBody):
"/trigger/embedding",
response_model=dict,
dependencies=[Depends(require_role(["admin"]))],
summary="Create trigger embedding",
summary="Create trigger embedding.",
description="""Creates a trigger embedding for a specific trigger.
Returns a success message or an error if the trigger is not found.
""",
@@ -1898,40 +1723,37 @@ def create_trigger_embedding(
if event.data.get("type") != "object":
return
# Get the thumbnail
thumbnail = get_event_thumbnail_bytes(event)
if thumbnail is None:
return JSONResponse(
content={
"success": False,
"message": f"Failed to get thumbnail for {body.data} for {body.type} trigger",
},
status_code=400,
if thumbnail := get_event_thumbnail_bytes(event):
cursor = context.db.execute_sql(
"""
SELECT thumbnail_embedding FROM vec_thumbnails WHERE id = ?
""",
[body.data],
)
# Try to reuse existing embedding from database
cursor = context.db.execute_sql(
"""
SELECT thumbnail_embedding FROM vec_thumbnails WHERE id = ?
""",
[body.data],
)
row = cursor.fetchone() if cursor else None
row = cursor.fetchone() if cursor else None
if row:
query_embedding = row[0]
embedding = np.frombuffer(query_embedding, dtype=np.float32)
if row:
query_embedding = row[0]
embedding = np.frombuffer(query_embedding, dtype=np.float32)
else:
# Generate new embedding
# Extract valid thumbnail
thumbnail = get_event_thumbnail_bytes(event)
if thumbnail is None:
return JSONResponse(
content={
"success": False,
"message": f"Failed to get thumbnail for {body.data} for {body.type} trigger",
},
status_code=400,
)
embedding = context.generate_image_embedding(
body.data, (base64.b64encode(thumbnail).decode("ASCII"))
)
if embedding is None or (
isinstance(embedding, (list, np.ndarray)) and len(embedding) == 0
):
if embedding is None:
return JSONResponse(
content={
"success": False,
@@ -1999,7 +1821,7 @@ def create_trigger_embedding(
"/trigger/embedding/{camera_name}/{name}",
response_model=dict,
dependencies=[Depends(require_role(["admin"]))],
summary="Update trigger embedding",
summary="Update trigger embedding.",
description="""Updates a trigger embedding for a specific trigger.
Returns a success message or an error if the trigger is not found.
""",
@@ -2066,9 +1888,7 @@ def update_trigger_embedding(
body.data, (base64.b64encode(thumbnail).decode("ASCII"))
)
if embedding is None or (
isinstance(embedding, (list, np.ndarray)) and len(embedding) == 0
):
if embedding is None:
return JSONResponse(
content={
"success": False,
@@ -2164,7 +1984,7 @@ def update_trigger_embedding(
"/trigger/embedding/{camera_name}/{name}",
response_model=dict,
dependencies=[Depends(require_role(["admin"]))],
summary="Delete trigger embedding",
summary="Delete trigger embedding.",
description="""Deletes a trigger embedding for a specific trigger.
Returns a success message or an error if the trigger is not found.
""",
@@ -2238,7 +2058,7 @@ def delete_trigger_embedding(
"/triggers/status/{camera_name}",
response_model=dict,
dependencies=[Depends(require_role(["admin"]))],
summary="Get triggers status",
summary="Get triggers status.",
description="""Gets the status of all triggers for a specific camera.
Returns a success message or an error if the camera is not found.
""",
View File
@@ -14,7 +14,6 @@ from peewee import DoesNotExist
from playhouse.shortcuts import model_to_dict
from frigate.api.auth import (
allow_any_authenticated,
get_allowed_cameras_for_filter,
require_camera_access,
require_role,
@@ -45,7 +44,6 @@ router = APIRouter(tags=[Tags.export])
@router.get(
"/exports",
response_model=ExportsResponse,
dependencies=[Depends(allow_any_authenticated())],
summary="Get exports",
description="""Gets all exports from the database for cameras the user has access to.
Returns a list of exports ordered by date (most recent first).""",
@@ -274,7 +272,6 @@ async def export_delete(event_id: str, request: Request):
@router.get(
"/exports/{export_id}",
response_model=ExportModel,
dependencies=[Depends(allow_any_authenticated())],
summary="Get a single export",
description="""Gets a specific export by ID. The user must have access to the camera
associated with the export.""",
View File
@@ -2,7 +2,7 @@ import logging
import re
from typing import Optional
from fastapi import Depends, FastAPI, Request
from fastapi import FastAPI, Request
from fastapi.responses import JSONResponse
from joserfc.jwk import OctKey
from playhouse.sqliteq import SqliteQueueDatabase
@@ -24,7 +24,7 @@ from frigate.api import (
preview,
review,
)
from frigate.api.auth import get_jwt_secret, limiter, require_admin_by_default
from frigate.api.auth import get_jwt_secret, limiter
from frigate.comms.event_metadata_updater import (
EventMetadataPublisher,
)
@@ -62,15 +62,11 @@ def create_fastapi_app(
stats_emitter: StatsEmitter,
event_metadata_updater: EventMetadataPublisher,
config_publisher: CameraConfigUpdatePublisher,
enforce_default_admin: bool = True,
):
logger.info("Starting FastAPI app")
app = FastAPI(
debug=False,
swagger_ui_parameters={"apisSorter": "alpha", "operationsSorter": "alpha"},
dependencies=[Depends(require_admin_by_default())]
if enforce_default_admin
else [],
)
# update the request_address with the x-forwarded-for header from nginx
View File
@@ -22,11 +22,7 @@ from pathvalidate import sanitize_filename
from peewee import DoesNotExist, fn, operator
from tzlocal import get_localzone_name
from frigate.api.auth import (
allow_any_authenticated,
get_allowed_cameras_for_filter,
require_camera_access,
)
from frigate.api.auth import get_allowed_cameras_for_filter, require_camera_access
from frigate.api.defs.query.media_query_parameters import (
Extension,
MediaEventsSnapshotQueryParams,
@@ -397,7 +393,7 @@ async def submit_recording_snapshot_to_plus(
)
@router.get("/recordings/storage", dependencies=[Depends(allow_any_authenticated())])
@router.get("/recordings/storage")
def get_recordings_storage_usage(request: Request):
recording_stats = request.app.stats_emitter.get_latest_stats()["service"][
"storage"
@@ -421,7 +417,7 @@ def get_recordings_storage_usage(request: Request):
return JSONResponse(content=camera_usages)
@router.get("/recordings/summary", dependencies=[Depends(allow_any_authenticated())])
@router.get("/recordings/summary")
def all_recordings_summary(
request: Request,
params: MediaRecordingsSummaryQueryParams = Depends(),
@@ -639,11 +635,7 @@ async def recordings(
return JSONResponse(content=list(recordings))
@router.get(
"/recordings/unavailable",
response_model=list[dict],
dependencies=[Depends(allow_any_authenticated())],
)
@router.get("/recordings/unavailable", response_model=list[dict])
async def no_recordings(
request: Request,
params: MediaRecordingsAvailabilityQueryParams = Depends(),
@@ -837,19 +829,7 @@ async def recording_clip(
dependencies=[Depends(require_camera_access)],
description="Returns an HLS playlist for the specified timestamp-range on the specified camera. Append /master.m3u8 or /index.m3u8 for HLS playback.",
)
async def vod_ts(
camera_name: str,
start_ts: float,
end_ts: float,
force_discontinuity: bool = False,
):
logger.debug(
"VOD: Generating VOD for %s from %s to %s with force_discontinuity=%s",
camera_name,
start_ts,
end_ts,
force_discontinuity,
)
async def vod_ts(camera_name: str, start_ts: float, end_ts: float):
recordings = (
Recordings.select(
Recordings.path,
@@ -874,14 +854,6 @@ async def vod_ts(
recording: Recordings
for recording in recordings:
logger.debug(
"VOD: processing recording: %s start=%s end=%s duration=%s",
recording.path,
recording.start_time,
recording.end_time,
recording.duration,
)
clip = {"type": "source", "path": recording.path}
duration = int(recording.duration * 1000)
@@ -890,11 +862,6 @@ async def vod_ts(
inpoint = int((start_ts - recording.start_time) * 1000)
clip["clipFrom"] = inpoint
duration -= inpoint
logger.debug(
"VOD: applied clipFrom %sms to %s",
inpoint,
recording.path,
)
# adjust end if recording.end_time is after end_ts
if recording.end_time > end_ts:
@@ -902,23 +869,12 @@ async def vod_ts(
if duration < min_duration_ms:
# skip if the clip has no valid duration (too short to contain frames)
logger.debug(
"VOD: skipping recording %s - resulting duration %sms too short",
recording.path,
duration,
)
continue
if min_duration_ms <= duration < max_duration_ms:
clip["keyFrameDurations"] = [duration]
clips.append(clip)
durations.append(duration)
logger.debug(
"VOD: added clip %s duration_ms=%s clipFrom=%s",
recording.path,
duration,
clip.get("clipFrom"),
)
else:
logger.warning(f"Recording clip is missing or empty: {recording.path}")
@@ -938,7 +894,7 @@ async def vod_ts(
return JSONResponse(
content={
"cache": hour_ago.timestamp() > start_ts,
"discontinuity": force_discontinuity,
"discontinuity": False,
"consistentSequenceMediaInfo": True,
"durations": durations,
"segment_duration": max(durations),
@@ -981,7 +937,6 @@ async def vod_hour(
@router.get(
"/vod/event/{event_id}",
dependencies=[Depends(allow_any_authenticated())],
description="Returns an HLS playlist for the specified object. Append /master.m3u8 or /index.m3u8 for HLS playback.",
)
async def vod_event(
@@ -1022,19 +977,6 @@ async def vod_event(
return vod_response
@router.get(
"/vod/clip/{camera_name}/start/{start_ts}/end/{end_ts}",
dependencies=[Depends(require_camera_access)],
description="Returns an HLS playlist for a timestamp range with HLS discontinuity enabled. Append /master.m3u8 or /index.m3u8 for HLS playback.",
)
async def vod_clip(
camera_name: str,
start_ts: float,
end_ts: float,
):
return await vod_ts(camera_name, start_ts, end_ts, force_discontinuity=True)
@router.get(
"/events/{event_id}/snapshot.jpg",
description="Returns a snapshot image for the specified object id. NOTE: The query params only take affect while the event is in-progress. Once the event has ended the snapshot configuration is used.",
@@ -1111,10 +1053,7 @@ async def event_snapshot(
)
@router.get(
"/events/{event_id}/thumbnail.{extension}",
dependencies=[Depends(require_camera_access)],
)
@router.get("/events/{event_id}/thumbnail.{extension}")
async def event_thumbnail(
request: Request,
event_id: str,
@@ -1312,10 +1251,7 @@ def grid_snapshot(
)
@router.get(
"/events/{event_id}/snapshot-clean.webp",
dependencies=[Depends(require_camera_access)],
)
@router.get("/events/{event_id}/snapshot-clean.webp")
def event_snapshot_clean(request: Request, event_id: str, download: bool = False):
webp_bytes = None
try:
@@ -1439,9 +1375,7 @@ def event_snapshot_clean(request: Request, event_id: str, download: bool = False
)
@router.get(
"/events/{event_id}/clip.mp4", dependencies=[Depends(require_camera_access)]
)
@router.get("/events/{event_id}/clip.mp4")
async def event_clip(
request: Request,
event_id: str,
@@ -1469,9 +1403,7 @@ async def event_clip(
)
@router.get(
"/events/{event_id}/preview.gif", dependencies=[Depends(require_camera_access)]
)
@router.get("/events/{event_id}/preview.gif")
def event_preview(request: Request, event_id: str):
try:
event: Event = Event.get(Event.id == event_id)
@@ -1824,7 +1756,7 @@ def preview_mp4(
)
@router.get("/review/{event_id}/preview", dependencies=[Depends(require_camera_access)])
@router.get("/review/{event_id}/preview")
def review_preview(
request: Request,
event_id: str,
@@ -1850,12 +1782,8 @@ def review_preview(
return preview_mp4(request, review.camera, start_ts, end_ts)
@router.get(
"/preview/{file_name}/thumbnail.jpg", dependencies=[Depends(require_camera_access)]
)
@router.get(
"/preview/{file_name}/thumbnail.webp", dependencies=[Depends(require_camera_access)]
)
@router.get("/preview/{file_name}/thumbnail.jpg")
@router.get("/preview/{file_name}/thumbnail.webp")
def preview_thumbnail(file_name: str):
"""Get a thumbnail from the cached preview frames."""
if len(file_name) > 1000:
View File
@@ -5,12 +5,11 @@ import os
from typing import Any
from cryptography.hazmat.primitives import serialization
from fastapi import APIRouter, Depends, Request
from fastapi import APIRouter, Request
from fastapi.responses import JSONResponse
from peewee import DoesNotExist
from py_vapid import Vapid01, utils
from frigate.api.auth import allow_any_authenticated
from frigate.api.defs.tags import Tags
from frigate.const import CONFIG_DIR
from frigate.models import User
@@ -22,7 +21,6 @@ router = APIRouter(tags=[Tags.notifications])
@router.get(
"/notifications/pubkey",
dependencies=[Depends(allow_any_authenticated())],
summary="Get VAPID public key",
description="""Gets the VAPID public key for the notifications.
Returns the public key or an error if notifications are not enabled.
@@ -49,7 +47,6 @@ def get_vapid_pub_key(request: Request):
@router.post(
"/notifications/register",
dependencies=[Depends(allow_any_authenticated())],
summary="Register notifications",
description="""Registers a notifications subscription.
Returns a success message or an error if the subscription is not provided.
View File
@@ -5,14 +5,10 @@ import os
from datetime import datetime, timedelta, timezone
import pytz
from fastapi import APIRouter, Depends, HTTPException
from fastapi import APIRouter, Depends
from fastapi.responses import JSONResponse
from frigate.api.auth import (
allow_any_authenticated,
get_allowed_cameras_for_filter,
require_camera_access,
)
from frigate.api.auth import require_camera_access
from frigate.api.defs.response.preview_response import (
PreviewFramesResponse,
PreviewsResponse,
@@ -30,32 +26,19 @@ router = APIRouter(tags=[Tags.preview])
@router.get(
"/preview/{camera_name}/start/{start_ts}/end/{end_ts}",
response_model=PreviewsResponse,
dependencies=[Depends(allow_any_authenticated())],
dependencies=[Depends(require_camera_access)],
summary="Get preview clips for time range",
description="""Gets all preview clips for a specified camera and time range.
Returns a list of preview video clips that overlap with the requested time period,
ordered by start time. Use camera_name='all' to get previews from all cameras.
Returns an error if no previews are found.""",
)
def preview_ts(
camera_name: str,
start_ts: float,
end_ts: float,
allowed_cameras: list[str] = Depends(get_allowed_cameras_for_filter),
):
def preview_ts(camera_name: str, start_ts: float, end_ts: float):
"""Get all mp4 previews relevant for time period."""
if camera_name != "all":
if camera_name not in allowed_cameras:
raise HTTPException(status_code=403, detail="Access denied for camera")
camera_list = [camera_name]
camera_clause = Previews.camera == camera_name
else:
camera_list = allowed_cameras
if not camera_list:
return JSONResponse(
content={"success": False, "message": "No previews found."},
status_code=404,
)
camera_clause = True
previews = (
Previews.select(
@@ -70,7 +53,7 @@ def preview_ts(
| Previews.end_time.between(start_ts, end_ts)
| ((start_ts > Previews.start_time) & (end_ts < Previews.end_time))
)
.where(Previews.camera << camera_list)
.where(camera_clause)
.order_by(Previews.start_time.asc())
.dicts()
.iterator()
@@ -105,21 +88,14 @@ def preview_ts(
@router.get(
"/preview/{year_month}/{day}/{hour}/{camera_name}/{tz_name}",
response_model=PreviewsResponse,
dependencies=[Depends(allow_any_authenticated())],
dependencies=[Depends(require_camera_access)],
summary="Get preview clips for specific hour",
description="""Gets all preview clips for a specific hour in a given timezone.
Converts the provided date/time from the specified timezone to UTC and retrieves
all preview clips for that hour. Use camera_name='all' to get previews from all cameras.
The tz_name should be a timezone like 'America/New_York' (use commas instead of slashes).""",
)
def preview_hour(
year_month: str,
day: int,
hour: int,
camera_name: str,
tz_name: str,
allowed_cameras: list[str] = Depends(get_allowed_cameras_for_filter),
):
def preview_hour(year_month: str, day: int, hour: int, camera_name: str, tz_name: str):
"""Get all mp4 previews relevant for time period given the timezone"""
parts = year_month.split("-")
start_date = (
@@ -130,7 +106,7 @@ def preview_hour(
start_ts = start_date.timestamp()
end_ts = end_date.timestamp()
return preview_ts(camera_name, start_ts, end_ts, allowed_cameras)
return preview_ts(camera_name, start_ts, end_ts)
@router.get(
View File
@@ -14,7 +14,6 @@ from peewee import Case, DoesNotExist, IntegrityError, fn, operator
from playhouse.shortcuts import model_to_dict
from frigate.api.auth import (
allow_any_authenticated,
get_allowed_cameras_for_filter,
get_current_user,
require_camera_access,
@@ -44,11 +43,7 @@ logger = logging.getLogger(__name__)
router = APIRouter(tags=[Tags.review])
@router.get(
"/review",
response_model=list[ReviewSegmentResponse],
dependencies=[Depends(allow_any_authenticated())],
)
@router.get("/review", response_model=list[ReviewSegmentResponse])
async def review(
params: ReviewQueryParams = Depends(),
current_user: dict = Depends(get_current_user),
@@ -157,11 +152,7 @@ async def review(
return JSONResponse(content=[r for r in review_query])
@router.get(
"/review_ids",
response_model=list[ReviewSegmentResponse],
dependencies=[Depends(allow_any_authenticated())],
)
@router.get("/review_ids", response_model=list[ReviewSegmentResponse])
async def review_ids(request: Request, ids: str):
ids = ids.split(",")
@@ -195,11 +186,7 @@ async def review_ids(request: Request, ids: str):
)
@router.get(
"/review/summary",
response_model=ReviewSummaryResponse,
dependencies=[Depends(allow_any_authenticated())],
)
@router.get("/review/summary", response_model=ReviewSummaryResponse)
async def review_summary(
params: ReviewSummaryQueryParams = Depends(),
current_user: dict = Depends(get_current_user),
@@ -474,11 +461,7 @@ async def review_summary(
return JSONResponse(content=data)
@router.post(
"/reviews/viewed",
response_model=GenericResponse,
dependencies=[Depends(allow_any_authenticated())],
)
@router.post("/reviews/viewed", response_model=GenericResponse)
async def set_multiple_reviewed(
request: Request,
body: ReviewModifyMultipleBody,
@@ -577,9 +560,7 @@ def delete_reviews(body: ReviewModifyMultipleBody):
@router.get(
"/review/activity/motion",
response_model=list[ReviewActivityMotionResponse],
dependencies=[Depends(allow_any_authenticated())],
"/review/activity/motion", response_model=list[ReviewActivityMotionResponse]
)
def motion_activity(
params: ReviewActivityMotionQueryParams = Depends(),
@@ -663,11 +644,7 @@ def motion_activity(
return JSONResponse(content=normalized)
@router.get(
"/review/event/{event_id}",
response_model=ReviewSegmentResponse,
dependencies=[Depends(allow_any_authenticated())],
)
@router.get("/review/event/{event_id}", response_model=ReviewSegmentResponse)
async def get_review_from_event(request: Request, event_id: str):
try:
review = ReviewSegment.get(
@@ -682,11 +659,7 @@ async def get_review_from_event(request: Request, event_id: str):
)
@router.get(
"/review/{review_id}",
response_model=ReviewSegmentResponse,
dependencies=[Depends(allow_any_authenticated())],
)
@router.get("/review/{review_id}", response_model=ReviewSegmentResponse)
async def get_review(request: Request, review_id: str):
try:
review = ReviewSegment.get(ReviewSegment.id == review_id)
@@ -699,11 +672,7 @@ async def get_review(request: Request, review_id: str):
)
@router.delete(
"/review/{review_id}/viewed",
response_model=GenericResponse,
dependencies=[Depends(allow_any_authenticated())],
)
@router.delete("/review/{review_id}/viewed", response_model=GenericResponse)
async def set_not_reviewed(
review_id: str,
current_user: dict = Depends(get_current_user),
@@ -741,7 +710,6 @@ async def set_not_reviewed(
@router.post(
"/review/summarize/start/{start_ts}/end/{end_ts}",
dependencies=[Depends(allow_any_authenticated())],
description="Use GenAI to summarize review items over a period of time.",
)
def generate_review_summary(request: Request, start_ts: float, end_ts: float):
View File
@@ -100,10 +100,6 @@ class FrigateApp:
)
if (
config.semantic_search.enabled
or any(
c.objects.genai.enabled or c.review.genai.enabled
for c in config.cameras.values()
)
or config.lpr.enabled
or config.face_recognition.enabled
or len(config.classification.custom) > 0
View File
@@ -607,27 +607,23 @@ class Dispatcher:
)
self.publish(f"{camera_name}/snapshots/state", payload, retain=True)
def _on_ptz_command(self, camera_name: str, payload: str | bytes) -> None:
def _on_ptz_command(self, camera_name: str, payload: str) -> None:
"""Callback for ptz topic."""
try:
preset: str = (
payload.decode("utf-8") if isinstance(payload, bytes) else payload
).lower()
if "preset" in preset:
if "preset" in payload.lower():
command = OnvifCommandEnum.preset
param = preset[preset.index("_") + 1 :]
elif "move_relative" in preset:
param = payload.lower()[payload.index("_") + 1 :]
elif "move_relative" in payload.lower():
command = OnvifCommandEnum.move_relative
param = preset[preset.index("_") + 1 :]
param = payload.lower()[payload.index("_") + 1 :]
else:
command = OnvifCommandEnum[preset]
command = OnvifCommandEnum[payload.lower()]
param = ""
self.onvif.handle_command(camera_name, command, param)
logger.info(f"Setting ptz command to {command} for {camera_name}")
except KeyError as k:
logger.error(f"Invalid PTZ command {preset}: {k}")
logger.error(f"Invalid PTZ command {payload}: {k}")
def _on_birdseye_command(self, camera_name: str, payload: str) -> None:
"""Callback for birdseye topic."""
View File
@@ -225,8 +225,7 @@ class MqttClient(Communicator):
"birdseye_mode",
"review_alerts",
"review_detections",
"object_descriptions",
"review_descriptions",
"genai",
]
for name in self.config.cameras.keys():
View File
@@ -21,7 +21,7 @@ from frigate.config.camera.updater import (
CameraConfigUpdateEnum,
CameraConfigUpdateSubscriber,
)
from frigate.const import BASE_DIR, CONFIG_DIR
from frigate.const import CONFIG_DIR
from frigate.models import User
logger = logging.getLogger(__name__)
@@ -371,39 +371,14 @@ class WebPushClient(Communicator):
sorted_objects.update(payload["after"]["data"]["sub_labels"])
image = f"{payload['after']['thumb_path'].replace(BASE_DIR, '')}"
image = f"{payload['after']['thumb_path'].replace('/media/frigate', '')}"
ended = state == "end" or state == "genai"
if state == "genai" and payload["after"]["data"]["metadata"]:
base_title = payload["after"]["data"]["metadata"]["title"]
threat_level = payload["after"]["data"]["metadata"].get(
"potential_threat_level", 0
)
# Add prefix for threat levels 1 and 2
if threat_level == 1:
title = f"Needs Review: {base_title}"
elif threat_level == 2:
title = f"Security Concern: {base_title}"
else:
title = base_title
message = payload["after"]["data"]["metadata"]["shortSummary"]
title = payload["after"]["data"]["metadata"]["title"]
message = payload["after"]["data"]["metadata"]["scene"]
else:
zone_names = payload["after"]["data"]["zones"]
formatted_zone_names = []
for zone_name in zone_names:
if zone_name in self.config.cameras[camera].zones:
formatted_zone_names.append(
self.config.cameras[camera]
.zones[zone_name]
.get_formatted_name(zone_name)
)
else:
formatted_zone_names.append(titlecase(zone_name.replace("_", " ")))
title = f"{titlecase(', '.join(sorted_objects).replace('_', ' '))}{' was' if state == 'end' else ''} detected in {', '.join(formatted_zone_names)}"
title = f"{titlecase(', '.join(sorted_objects).replace('_', ' '))}{' was' if state == 'end' else ''} detected in {titlecase(', '.join(payload['after']['data']['zones']).replace('_', ' '))}"
message = f"Detected on {camera_name}"
if ended:
View File
@@ -20,7 +20,7 @@ class AuthConfig(FrigateBaseModel):
default=86400, title="Session length for jwt session tokens", ge=60
)
refresh_time: int = Field(
default=1800,
default=43200,
title="Refresh the session if it is going to expire in this many seconds",
ge=30,
)
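For reference, the two refresh_time defaults in this hunk work out as follows (simple arithmetic, no assumptions beyond the values shown):

print(43200 / 3600, "hours;", 1800 / 60, "minutes")  # 12.0 hours; 30.0 minutes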
View File
@@ -105,11 +105,6 @@ class CustomClassificationConfig(FrigateBaseModel):
threshold: float = Field(
default=0.8, title="Classification score threshold to change the state."
)
save_attempts: int | None = Field(
default=None,
title="Number of classification attempts to save in the recent classifications tab. If not specified, defaults to 200 for object classification and 100 for state classification.",
ge=0,
)
object_config: CustomClassificationObjectConfig | None = Field(default=None)
state_config: CustomClassificationStateConfig | None = Field(default=None)
View File
@@ -28,7 +28,6 @@ from frigate.util.builtin import (
get_ffmpeg_arg_list,
)
from frigate.util.config import (
CURRENT_CONFIG_VERSION,
StreamInfoRetriever,
convert_area_to_pixels,
find_config_file,
@@ -77,12 +76,11 @@ logger = logging.getLogger(__name__)
yaml = YAML()
DEFAULT_CONFIG = f"""
DEFAULT_CONFIG = """
mqtt:
enabled: False
cameras: {{}} # No cameras defined, UI wizard should be used
version: {CURRENT_CONFIG_VERSION}
cameras: {} # No cameras defined, UI wizard should be used
"""
DEFAULT_DETECTORS = {"cpu": {"type": "cpu"}}
@@ -755,7 +753,8 @@ class FrigateConfig(FrigateBaseModel):
if new_config and f.tell() == 0:
f.write(DEFAULT_CONFIG)
logger.info(
"Created default config file, see the getting started docs for configuration: https://docs.frigate.video/guides/getting_started"
"Created default config file, see the getting started docs \
for configuration https://docs.frigate.video/guides/getting_started"
)
f.seek(0)
View File
@@ -37,6 +37,9 @@ class UIConfig(FrigateBaseModel):
time_style: DateTimeStyleEnum = Field(
default=DateTimeStyleEnum.medium, title="Override UI timeStyle."
)
strftime_fmt: Optional[str] = Field(
default=None, title="Override date and time format using strftime syntax."
)
unit_system: UnitSystemEnum = Field(
default=UnitSystemEnum.metric, title="The unit system to use for measurements."
)
View File
@@ -77,9 +77,6 @@ FFMPEG_HWACCEL_RKMPP = "preset-rkmpp"
FFMPEG_HWACCEL_AMF = "preset-amd-amf"
FFMPEG_HVC1_ARGS = ["-tag:v", "hvc1"]
# RKNN constants
SUPPORTED_RK_SOCS = ["rk3562", "rk3566", "rk3568", "rk3576", "rk3588"]
# Regex constants
REGEX_CAMERA_NAME = r"^[a-zA-Z0-9_-]+$"
View File
@@ -374,9 +374,6 @@ class LicensePlateProcessingMixin:
combined_plate = re.sub(
pattern, replacement, combined_plate
)
logger.debug(
f"{camera}: Processing replace rule: '{pattern}' -> '{replacement}', result: '{combined_plate}'"
)
except re.error as e:
logger.warning(
f"{camera}: Invalid regex in replace_rules '{pattern}': {e}"
@@ -384,7 +381,7 @@ class LicensePlateProcessingMixin:
if combined_plate != original_combined:
logger.debug(
f"{camera}: All rules applied: '{original_combined}' -> '{combined_plate}'"
f"{camera}: Rules applied: '{original_combined}' -> '{combined_plate}'"
)
# Compute the combined area for qualifying boxes
View File
@@ -131,9 +131,8 @@ class AudioTranscriptionPostProcessor(PostProcessorApi):
},
)
# Embed the description if semantic search is enabled
if self.config.semantic_search.enabled:
self.embeddings.embed_description(event_id, transcription)
# Embed the description
self.embeddings.embed_description(event_id, transcription)
except DoesNotExist:
logger.debug("No recording found for audio transcription post-processing")
View File
@@ -86,11 +86,7 @@ class ObjectDescriptionProcessor(PostProcessorApi):
and data["id"] not in self.early_request_sent
):
if data["has_clip"] and data["has_snapshot"]:
try:
event: Event = Event.get(Event.id == data["id"])
except DoesNotExist:
logger.error(f"Event {data['id']} not found")
return
event: Event = Event.get(Event.id == data["id"])
if (
not camera_config.objects.genai.objects
@@ -135,8 +131,6 @@ class ObjectDescriptionProcessor(PostProcessorApi):
)
):
self._process_genai_description(event, camera_config, thumbnail)
else:
self.cleanup_event(event.id)
def __regenerate_description(self, event_id: str, source: str, force: bool) -> None:
"""Regenerate the description for an event."""
@@ -210,17 +204,6 @@ class ObjectDescriptionProcessor(PostProcessorApi):
)
return None
def cleanup_event(self, event_id: str) -> None:
"""Clean up tracked event data to prevent memory leaks.
This should be called when an event ends, regardless of whether
genai processing is triggered.
"""
if event_id in self.tracked_events:
del self.tracked_events[event_id]
if event_id in self.early_request_sent:
del self.early_request_sent[event_id]
def _read_and_crop_snapshot(self, event: Event) -> bytes | None:
"""Read, decode, and crop the snapshot image."""
@@ -316,8 +299,9 @@ class ObjectDescriptionProcessor(PostProcessorApi):
),
).start()
# Clean up tracked events and early request state
self.cleanup_event(event.id)
# Delete tracked events based on the event_id
if event.id in self.tracked_events:
del self.tracked_events[event.id]
def _genai_embed_description(self, event: Event, thumbnails: list[bytes]) -> None:
"""Embed the description for an event."""
View File
@@ -12,7 +12,6 @@ from typing import Any
import cv2
from peewee import DoesNotExist
from titlecase import titlecase
from frigate.comms.embeddings_updater import EmbeddingsRequestEnum
from frigate.comms.inter_process import InterProcessRequestor
@@ -92,7 +91,7 @@ class ReviewDescriptionProcessor(PostProcessorApi):
pixels_per_image = width * height
tokens_per_image = pixels_per_image / 1250
prompt_tokens = 3800
prompt_tokens = 3500
response_tokens = 300
available_tokens = context_size - prompt_tokens - response_tokens
max_frames = int(available_tokens / tokens_per_image)
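A worked example of the frame budget above; the 8192-token context size and the 640x360 frame size are assumptions, and the prompt reserve uses the 3500 figure from this hunk:

width, height, context_size = 640, 360, 8192

tokens_per_image = (width * height) / 1250        # 184.32 tokens per frame
available_tokens = context_size - 3500 - 300      # prompt and response reserves
print(int(available_tokens / tokens_per_image))   # 23 frames fit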
@@ -209,22 +208,10 @@ class ReviewDescriptionProcessor(PostProcessorApi):
logger.debug(
f"Found GenAI Review Summary request for {start_ts} to {end_ts}"
)
# Query all review segments with camera and time information
segments: list[dict[str, Any]] = [
{
"camera": r["camera"].replace("_", " ").title(),
"start_time": r["start_time"],
"end_time": r["end_time"],
"metadata": r["data"]["metadata"],
}
items: list[dict[str, Any]] = [
r["data"]["metadata"]
for r in (
ReviewSegment.select(
ReviewSegment.camera,
ReviewSegment.start_time,
ReviewSegment.end_time,
ReviewSegment.data,
)
ReviewSegment.select(ReviewSegment.data)
.where(
(ReviewSegment.data["metadata"].is_null(False))
& (ReviewSegment.start_time < end_ts)
@@ -236,72 +223,21 @@ class ReviewDescriptionProcessor(PostProcessorApi):
)
]
if len(segments) == 0:
if len(items) == 0:
logger.debug("No review items with metadata found during time period")
return "No activity was found during this time period."
return "No activity was found during this time."
# Identify primary items (important items that need review)
primary_segments = [
seg
for seg in segments
if seg["metadata"].get("potential_threat_level", 0) > 0
or seg["metadata"].get("other_concerns")
]
important_items = list(
filter(
lambda item: item.get("potential_threat_level", 0) > 0
or item.get("other_concerns"),
items,
)
)
if not primary_segments:
if not important_items:
return "No concerns were found during this time period."
# Build hierarchical structure: each primary event with its contextual items
events_with_context = []
for primary_seg in primary_segments:
# Start building the primary event structure
primary_item = copy.deepcopy(primary_seg["metadata"])
primary_item["camera"] = primary_seg["camera"]
primary_item["start_time"] = primary_seg["start_time"]
primary_item["end_time"] = primary_seg["end_time"]
# Find overlapping contextual items from other cameras
primary_start = primary_seg["start_time"]
primary_end = primary_seg["end_time"]
primary_camera = primary_seg["camera"]
contextual_items = []
seen_contextual_cameras = set()
for seg in segments:
seg_camera = seg["camera"]
if seg_camera == primary_camera:
continue
if seg in primary_segments:
continue
seg_start = seg["start_time"]
seg_end = seg["end_time"]
if seg_start < primary_end and primary_start < seg_end:
# Avoid duplicates if same camera has multiple overlapping segments
if seg_camera not in seen_contextual_cameras:
contextual_item = copy.deepcopy(seg["metadata"])
contextual_item["camera"] = seg_camera
contextual_item["start_time"] = seg_start
contextual_item["end_time"] = seg_end
contextual_items.append(contextual_item)
seen_contextual_cameras.add(seg_camera)
# Add context array to primary item
primary_item["context"] = contextual_items
events_with_context.append(primary_item)
total_context_items = sum(
len(event.get("context", [])) for event in events_with_context
)
logger.debug(
f"Summary includes {len(events_with_context)} primary events with "
f"{total_context_items} total contextual items"
)
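The contextual-item pass above uses the standard open-interval overlap test; a standalone sketch:

def overlaps(a_start, a_end, b_start, b_end):
    # Two time ranges overlap when each starts before the other ends.
    return a_start < b_end and b_start < a_end

print(overlaps(100, 110, 105, 120))  # True  - they share 105-110
print(overlaps(100, 110, 110, 120))  # False - they only touch at the boundary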
if self.config.review.genai.debug_save_thumbnails:
Path(
os.path.join(CLIPS_DIR, "genai-requests", f"{start_ts}-{end_ts}")
@@ -310,8 +246,7 @@ class ReviewDescriptionProcessor(PostProcessorApi):
return self.genai_client.generate_review_summary(
start_ts,
end_ts,
events_with_context,
self.config.review.genai.preferred_language,
important_items,
self.config.review.genai.debug_save_thumbnails,
)
else:
@@ -520,14 +455,14 @@ def run_analysis(
for i, verified_label in enumerate(final_data["data"]["verified_objects"]):
object_type = verified_label.replace("-verified", "").replace("_", " ")
name = titlecase(sub_labels_list[i].replace("_", " "))
name = sub_labels_list[i].replace("_", " ").title()
unified_objects.append(f"{name} ({object_type})")
for label in objects_list:
if "-verified" in label:
continue
elif label in labelmap_objects:
object_type = titlecase(label.replace("_", " "))
object_type = label.replace("_", " ").title()
if label in attribute_labels:
unified_objects.append(f"{object_type} (delivery/service)")
View File
@@ -8,9 +8,6 @@ class ReviewMetadata(BaseModel):
scene: str = Field(
description="A comprehensive description of the setting and entities, including relevant context and plausible inferences if supported by visual evidence."
)
shortSummary: str = Field(
description="A brief 2-sentence summary of the scene, suitable for notifications. Should capture the key activity and context without full detail."
)
confidence: float = Field(
description="A float between 0 and 1 representing your overall confidence in this analysis."
)
View File
@@ -13,7 +13,7 @@ from frigate.comms.event_metadata_updater import (
)
from frigate.config import FrigateConfig
from frigate.const import MODEL_CACHE_DIR
from frigate.log import suppress_stderr_during
from frigate.log import redirect_output_to_logger
from frigate.util.object import calculate_region
from ..types import DataProcessorMetrics
@@ -80,14 +80,13 @@ class BirdRealTimeProcessor(RealTimeProcessorApi):
except Exception as e:
logger.error(f"Failed to download {path}: {e}")
@redirect_output_to_logger(logger, logging.DEBUG)
def __build_detector(self) -> None:
# Suppress TFLite delegate creation messages that bypass Python logging
with suppress_stderr_during("tflite_interpreter_init"):
self.interpreter = Interpreter(
model_path=os.path.join(MODEL_CACHE_DIR, "bird/bird.tflite"),
num_threads=2,
)
self.interpreter.allocate_tensors()
self.interpreter = Interpreter(
model_path=os.path.join(MODEL_CACHE_DIR, "bird/bird.tflite"),
num_threads=2,
)
self.interpreter.allocate_tensors()
self.tensor_input_details = self.interpreter.get_input_details()
self.tensor_output_details = self.interpreter.get_output_details()
View File
@@ -21,7 +21,7 @@ from frigate.config.classification import (
ObjectClassificationType,
)
from frigate.const import CLIPS_DIR, MODEL_CACHE_DIR
from frigate.log import suppress_stderr_during
from frigate.log import redirect_output_to_logger
from frigate.types import TrackedObjectUpdateTypesEnum
from frigate.util.builtin import EventsPerSecond, InferenceSpeed, load_labels
from frigate.util.object import box_overlaps, calculate_region
@@ -52,7 +52,7 @@ class CustomStateClassificationProcessor(RealTimeProcessorApi):
self.requestor = requestor
self.model_dir = os.path.join(MODEL_CACHE_DIR, self.model_config.name)
self.train_dir = os.path.join(CLIPS_DIR, self.model_config.name, "train")
self.interpreter: Interpreter = None
self.interpreter: Interpreter | None = None
self.tensor_input_details: dict[str, Any] | None = None
self.tensor_output_details: dict[str, Any] | None = None
self.labelmap: dict[int, str] = {}
@@ -72,12 +72,8 @@ class CustomStateClassificationProcessor(RealTimeProcessorApi):
self.last_run = datetime.datetime.now().timestamp()
self.__build_detector()
@redirect_output_to_logger(logger, logging.DEBUG)
def __build_detector(self) -> None:
try:
from tflite_runtime.interpreter import Interpreter
except ModuleNotFoundError:
from tensorflow.lite.python.interpreter import Interpreter
model_path = os.path.join(self.model_dir, "model.tflite")
labelmap_path = os.path.join(self.model_dir, "labelmap.txt")
@@ -88,13 +84,11 @@ class CustomStateClassificationProcessor(RealTimeProcessorApi):
self.labelmap = {}
return
# Suppress TFLite delegate creation messages that bypass Python logging
with suppress_stderr_during("tflite_interpreter_init"):
self.interpreter = Interpreter(
model_path=model_path,
num_threads=2,
)
self.interpreter.allocate_tensors()
self.interpreter = Interpreter(
model_path=model_path,
num_threads=2,
)
self.interpreter.allocate_tensors()
self.tensor_input_details = self.interpreter.get_input_details()
self.tensor_output_details = self.interpreter.get_output_details()
self.labelmap = load_labels(labelmap_path, prefill=0)
@@ -105,42 +99,6 @@ class CustomStateClassificationProcessor(RealTimeProcessorApi):
if self.inference_speed:
self.inference_speed.update(duration)
def _should_save_image(
self, camera: str, detected_state: str, score: float = 1.0
) -> bool:
"""
Determine if we should save the image for training.
Save when:
- State is changing or being verified (regardless of score)
- Score is less than 100% (even if state matches, useful for training)
Don't save when:
- State is stable (matches current_state) AND score is 100%
"""
if camera not in self.state_history:
# First detection for this camera, save it
return True
verification = self.state_history[camera]
current_state = verification.get("current_state")
pending_state = verification.get("pending_state")
# Save if there's a pending state change being verified
if pending_state is not None:
return True
# Save if the detected state differs from the current verified state
# (state is changing)
if current_state is not None and detected_state != current_state:
return True
# If score is less than 100%, save even if state matches
# (useful for training to improve confidence)
if score < 1.0:
return True
# Don't save if state is stable (detected_state == current_state) AND score is 100%
return False
def verify_state_change(self, camera: str, detected_state: str) -> str | None:
"""
Verify state change requires 3 consecutive identical states before publishing.
@@ -230,52 +188,38 @@ class CustomStateClassificationProcessor(RealTimeProcessorApi):
if not should_run:
return
x, y, x2, y2 = calculate_region(
frame.shape,
crop[0],
crop[1],
crop[2],
crop[3],
224,
1.0,
)
rgb = cv2.cvtColor(frame, cv2.COLOR_YUV2RGB_I420)
height, width = rgb.shape[:2]
frame = rgb[
y:y2,
x:x2,
]
# Convert normalized crop coordinates to pixel values
x1 = int(camera_config.crop[0] * width)
y1 = int(camera_config.crop[1] * height)
x2 = int(camera_config.crop[2] * width)
y2 = int(camera_config.crop[3] * height)
# Clip coordinates to frame boundaries
x1 = max(0, min(x1, width))
y1 = max(0, min(y1, height))
x2 = max(0, min(x2, width))
y2 = max(0, min(y2, height))
if x2 <= x1 or y2 <= y1:
logger.warning(
f"Invalid crop coordinates for {camera}: [{x1}, {y1}, {x2}, {y2}]"
)
return
frame = rgb[y1:y2, x1:x2]
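A worked example of the normalized-crop conversion above, with a made-up frame size and crop:

width, height = 1280, 720
crop = [0.25, 0.2, 0.75, 0.8]                        # normalized [x1, y1, x2, y2]

x1 = max(0, min(int(crop[0] * width), width))        # 320
y1 = max(0, min(int(crop[1] * height), height))      # 144
x2 = max(0, min(int(crop[2] * width), width))        # 960
y2 = max(0, min(int(crop[3] * height), height))      # 576
print((x1, y1, x2, y2))                              # (320, 144, 960, 576)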
try:
resized_frame = cv2.resize(frame, (224, 224))
except Exception:
logger.warning("Failed to resize image for state classification")
return
if frame.shape != (224, 224):
try:
resized_frame = cv2.resize(frame, (224, 224))
except Exception:
logger.warning("Failed to resize image for state classification")
return
if self.interpreter is None:
# When interpreter is None, always save (score is 0.0, which is < 1.0)
if self._should_save_image(camera, "unknown", 0.0):
save_attempts = (
self.model_config.save_attempts
if self.model_config.save_attempts is not None
else 100
)
write_classification_attempt(
self.train_dir,
cv2.cvtColor(frame, cv2.COLOR_RGB2BGR),
"none-none",
now,
"unknown",
0.0,
max_files=save_attempts,
)
write_classification_attempt(
self.train_dir,
cv2.cvtColor(frame, cv2.COLOR_RGB2BGR),
"none-none",
now,
"unknown",
0.0,
)
return
input = np.expand_dims(resized_frame, axis=0)
@@ -292,23 +236,14 @@ class CustomStateClassificationProcessor(RealTimeProcessorApi):
score = round(probs[best_id], 2)
self.__update_metrics(datetime.datetime.now().timestamp() - now)
detected_state = self.labelmap[best_id]
if self._should_save_image(camera, detected_state, score):
save_attempts = (
self.model_config.save_attempts
if self.model_config.save_attempts is not None
else 100
)
write_classification_attempt(
self.train_dir,
cv2.cvtColor(frame, cv2.COLOR_RGB2BGR),
"none-none",
now,
detected_state,
score,
max_files=save_attempts,
)
write_classification_attempt(
self.train_dir,
cv2.cvtColor(frame, cv2.COLOR_RGB2BGR),
"none-none",
now,
self.labelmap[best_id],
score,
)
if score < self.model_config.threshold:
logger.debug(
@@ -316,6 +251,7 @@ class CustomStateClassificationProcessor(RealTimeProcessorApi):
)
return
detected_state = self.labelmap[best_id]
verified_state = self.verify_state_change(camera, detected_state)
if verified_state is not None:
@@ -357,7 +293,7 @@ class CustomObjectClassificationProcessor(RealTimeProcessorApi):
self.model_config = model_config
self.model_dir = os.path.join(MODEL_CACHE_DIR, self.model_config.name)
self.train_dir = os.path.join(CLIPS_DIR, self.model_config.name, "train")
self.interpreter: Interpreter = None
self.interpreter: Interpreter | None = None
self.sub_label_publisher = sub_label_publisher
self.requestor = requestor
self.tensor_input_details: dict[str, Any] | None = None
@@ -378,6 +314,7 @@ class CustomObjectClassificationProcessor(RealTimeProcessorApi):
self.__build_detector()
@redirect_output_to_logger(logger, logging.DEBUG)
def __build_detector(self) -> None:
model_path = os.path.join(self.model_dir, "model.tflite")
labelmap_path = os.path.join(self.model_dir, "labelmap.txt")
@@ -389,13 +326,11 @@ class CustomObjectClassificationProcessor(RealTimeProcessorApi):
self.labelmap = {}
return
# Suppress TFLite delegate creation messages that bypass Python logging
with suppress_stderr_during("tflite_interpreter_init"):
self.interpreter = Interpreter(
model_path=model_path,
num_threads=2,
)
self.interpreter.allocate_tensors()
self.interpreter = Interpreter(
model_path=model_path,
num_threads=2,
)
self.interpreter.allocate_tensors()
self.tensor_input_details = self.interpreter.get_input_details()
self.tensor_output_details = self.interpreter.get_output_details()
self.labelmap = load_labels(labelmap_path, prefill=0)
@@ -470,6 +405,9 @@ class CustomObjectClassificationProcessor(RealTimeProcessorApi):
if obj_data.get("end_time") is not None:
return
if obj_data.get("stationary"):
return
object_id = obj_data["id"]
if (
@@ -507,11 +445,6 @@ class CustomObjectClassificationProcessor(RealTimeProcessorApi):
return
if self.interpreter is None:
save_attempts = (
self.model_config.save_attempts
if self.model_config.save_attempts is not None
else 200
)
write_classification_attempt(
self.train_dir,
cv2.cvtColor(crop, cv2.COLOR_RGB2BGR),
@@ -519,15 +452,7 @@ class CustomObjectClassificationProcessor(RealTimeProcessorApi):
now,
"unknown",
0.0,
max_files=save_attempts,
)
# Still track history even when model doesn't exist to respect MAX_OBJECT_CLASSIFICATIONS
# Add an entry with "unknown" label so the history limit is enforced
if object_id not in self.classification_history:
self.classification_history[object_id] = []
self.classification_history[object_id].append(("unknown", 0.0, now))
return
input = np.expand_dims(resized_crop, axis=0)
@@ -544,11 +469,6 @@ class CustomObjectClassificationProcessor(RealTimeProcessorApi):
score = round(probs[best_id], 2)
self.__update_metrics(datetime.datetime.now().timestamp() - now)
save_attempts = (
self.model_config.save_attempts
if self.model_config.save_attempts is not None
else 200
)
write_classification_attempt(
self.train_dir,
cv2.cvtColor(crop, cv2.COLOR_RGB2BGR),
@@ -556,7 +476,7 @@ class CustomObjectClassificationProcessor(RealTimeProcessorApi):
now,
self.labelmap[best_id],
score,
max_files=save_attempts,
max_files=200,
)
if score < self.model_config.threshold:
@@ -659,15 +579,15 @@ def write_classification_attempt(
os.makedirs(folder, exist_ok=True)
cv2.imwrite(file, frame)
files = sorted(
filter(lambda f: (f.endswith(".webp")), os.listdir(folder)),
key=lambda f: os.path.getctime(os.path.join(folder, f)),
reverse=True,
)
# delete oldest face image if maximum is reached
try:
files = sorted(
filter(lambda f: (f.endswith(".webp")), os.listdir(folder)),
key=lambda f: os.path.getctime(os.path.join(folder, f)),
reverse=True,
)
if len(files) > max_files:
os.unlink(os.path.join(folder, files[-1]))
except (FileNotFoundError, OSError):
except FileNotFoundError:
pass
View File
@@ -131,7 +131,6 @@ class ONNXModelRunner(BaseModelRunner):
return model_type in [
EnrichmentModelTypeEnum.paddleocr.value,
EnrichmentModelTypeEnum.yolov9_license_plate.value,
EnrichmentModelTypeEnum.jina_v1.value,
EnrichmentModelTypeEnum.jina_v2.value,
EnrichmentModelTypeEnum.facenet.value,
@@ -170,7 +169,6 @@ class CudaGraphRunner(BaseModelRunner):
return model_type not in [
ModelTypeEnum.yolonas.value,
ModelTypeEnum.dfine.value,
EnrichmentModelTypeEnum.paddleocr.value,
EnrichmentModelTypeEnum.jina_v1.value,
EnrichmentModelTypeEnum.jina_v2.value,
View File
@@ -5,7 +5,7 @@ from typing_extensions import Literal
from frigate.detectors.detection_api import DetectionApi
from frigate.detectors.detector_config import BaseDetectorConfig
from frigate.log import suppress_stderr_during
from frigate.log import redirect_output_to_logger
from ..detector_utils import tflite_detect_raw, tflite_init
@@ -28,13 +28,12 @@ class CpuDetectorConfig(BaseDetectorConfig):
class CpuTfl(DetectionApi):
type_key = DETECTOR_KEY
@redirect_output_to_logger(logger, logging.DEBUG)
def __init__(self, detector_config: CpuDetectorConfig):
# Suppress TFLite delegate creation messages that bypass Python logging
with suppress_stderr_during("tflite_interpreter_init"):
interpreter = Interpreter(
model_path=detector_config.model.path,
num_threads=detector_config.num_threads or 3,
)
interpreter = Interpreter(
model_path=detector_config.model.path,
num_threads=detector_config.num_threads or 3,
)
tflite_init(self, interpreter)
View File
@@ -1,20 +1,19 @@
import logging
import math
import os
import cv2
import numpy as np
from pydantic import Field
from typing_extensions import Literal
from frigate.detectors.detection_api import DetectionApi
from frigate.detectors.detector_config import BaseDetectorConfig, ModelTypeEnum
from frigate.detectors.detector_config import BaseDetectorConfig
try:
from tflite_runtime.interpreter import Interpreter, load_delegate
except ModuleNotFoundError:
from tensorflow.lite.python.interpreter import Interpreter, load_delegate
logger = logging.getLogger(__name__)
DETECTOR_KEY = "edgetpu"
@@ -27,10 +26,6 @@ class EdgeTpuDetectorConfig(BaseDetectorConfig):
class EdgeTpuTfl(DetectionApi):
type_key = DETECTOR_KEY
supported_models = [
ModelTypeEnum.ssd,
ModelTypeEnum.yologeneric,
]
def __init__(self, detector_config: EdgeTpuDetectorConfig):
device_config = {}
@@ -68,294 +63,31 @@ class EdgeTpuTfl(DetectionApi):
self.tensor_input_details = self.interpreter.get_input_details()
self.tensor_output_details = self.interpreter.get_output_details()
self.model_width = detector_config.model.width
self.model_height = detector_config.model.height
self.min_score = 0.4
self.max_detections = 20
self.model_type = detector_config.model.model_type
self.model_requires_int8 = self.tensor_input_details[0]["dtype"] == np.int8
if self.model_type == ModelTypeEnum.yologeneric:
logger.debug("Using YOLO preprocessing/postprocessing")
if len(self.tensor_output_details) not in [2, 3]:
logger.error(
f"Invalid count of output tensors in YOLO model. Found {len(self.tensor_output_details)}, expecting 2 or 3."
)
raise
self.reg_max = 16 # = 64 dfl_channels // 4 # YOLO standard
self.min_logit_value = np.log(
self.min_score / (1 - self.min_score)
) # for filtering
self._generate_anchors_and_strides() # decode bounding box DFL
self.project = np.arange(
self.reg_max, dtype=np.float32
) # for decoding bounding box DFL information
# Determine YOLO tensor indices and quantization scales for
# boxes and class_scores. The tensor ordering and names are
# not reliable, so use tensor shape to detect which tensor
# holds boxes or class scores.
# The tensors have shapes (B, N, C)
# where N is the number of candidates (=2100 for 320x320)
# this may guess wrong if the number of classes is exactly 64
output_boxes_index = None
output_classes_index = None
for i, x in enumerate(self.tensor_output_details):
# the nominal index seems to start at 1 instead of 0
if len(x["shape"]) == 3 and x["shape"][2] == 64:
output_boxes_index = i
elif len(x["shape"]) == 3 and x["shape"][2] > 1:
# require the number of classes to be more than 1
# to differentiate from (not used) max score tensor
output_classes_index = i
if output_boxes_index is None or output_classes_index is None:
logger.warning("Unrecognized model output, unexpected tensor shapes.")
output_classes_index = (
0
if (output_boxes_index is None or output_classes_index == 1)
else 1
) # 0 is default guess
output_boxes_index = 1 if (output_boxes_index == 0) else 0
scores_details = self.tensor_output_details[output_classes_index]
self.scores_tensor_index = scores_details["index"]
self.scores_scale, self.scores_zero_point = scores_details["quantization"]
# calculate the quantized version of the min_score
self.min_score_quantized = int(
(self.min_logit_value / self.scores_scale) + self.scores_zero_point
)
self.logit_shift_to_positive_values = (
max(0, math.ceil((128 + self.scores_zero_point) * self.scores_scale))
+ 1
) # round up
boxes_details = self.tensor_output_details[output_boxes_index]
self.boxes_tensor_index = boxes_details["index"]
self.boxes_scale, self.boxes_zero_point = boxes_details["quantization"]
elif self.model_type == ModelTypeEnum.ssd:
logger.debug("Using SSD preprocessing/postprocessing")
# SSD model indices (4 outputs: boxes, class_ids, scores, count)
for x in self.tensor_output_details:
if len(x["shape"]) == 3:
self.output_boxes_index = x["index"]
elif len(x["shape"]) == 1:
self.output_count_index = x["index"]
self.output_class_ids_index = None
self.output_class_scores_index = None
else:
raise Exception(
f"{self.model_type} is currently not supported for edgetpu. See the docs for more info on supported models."
)
def _generate_anchors_and_strides(self):
# for decoding the bounding box DFL information into xy coordinates
all_anchors = []
all_strides = []
strides = (8, 16, 32) # YOLO's small, medium, large detection heads
for stride in strides:
feat_h, feat_w = self.model_height // stride, self.model_width // stride
grid_y, grid_x = np.meshgrid(
np.arange(feat_h, dtype=np.float32),
np.arange(feat_w, dtype=np.float32),
indexing="ij",
)
grid_coords = np.stack((grid_x.flatten(), grid_y.flatten()), axis=1)
anchor_points = grid_coords + 0.5
all_anchors.append(anchor_points)
all_strides.append(np.full((feat_h * feat_w, 1), stride, dtype=np.float32))
self.anchors = np.concatenate(all_anchors, axis=0)
self.anchor_strides = np.concatenate(all_strides, axis=0)
def determine_indexes_for_non_yolo_models(self):
"""Legacy method for SSD models."""
if (
self.output_class_ids_index is None
or self.output_class_scores_index is None
):
for i in range(4):
index = self.tensor_output_details[i]["index"]
if (
index != self.output_boxes_index
and index != self.output_count_index
):
if (
np.mod(np.float32(self.interpreter.tensor(index)()[0][0]), 1)
== 0.0
):
self.output_class_ids_index = index
else:
self.output_scores_index = index
def pre_process(self, tensor_input):
if self.model_requires_int8:
tensor_input = np.bitwise_xor(tensor_input, 128).view(
np.int8
) # shift by -128
return tensor_input
def detect_raw(self, tensor_input):
tensor_input = self.pre_process(tensor_input)
self.interpreter.set_tensor(self.tensor_input_details[0]["index"], tensor_input)
self.interpreter.invoke()
if self.model_type == ModelTypeEnum.yologeneric:
# Multi-tensor YOLO model with (non-standard B(H*W)C output format).
# (the comments indicate the shape of tensors,
# using "2100" as the anchor count (for image size of 320x320),
# "NC" as number of classes,
# "N" as the count that survive after min-score filtering)
# TENSOR A) class scores (1, 2100, NC) with logit values
# TENSOR B) box coordinates (1, 2100, 64) encoded as dfl scores
# Recommend that the model clamp the logit values in tensor (A)
# to the range [-4,+4] to preserve precision from [2%,98%]
# and because NMS requires the min_score parameter to be >= 0
boxes = self.interpreter.tensor(self.tensor_output_details[0]["index"])()[0]
class_ids = self.interpreter.tensor(self.tensor_output_details[1]["index"])()[0]
scores = self.interpreter.tensor(self.tensor_output_details[2]["index"])()[0]
count = int(
self.interpreter.tensor(self.tensor_output_details[3]["index"])()[0]
)
# don't dequantize scores data yet, wait until the low-confidence
# candidates are filtered out from the overall result set.
# This reduces the work and makes post-processing faster.
# this method works with raw quantized numbers when possible,
# which relies on the value of the scale factor to be >0.
# This speeds up max and argmax operations.
# Get max confidence for each detection and create the mask
detections = np.zeros(
(self.max_detections, 6), np.float32
) # initialize zero results
scores_output_quantized = self.interpreter.get_tensor(
self.scores_tensor_index
)[0] # (2100, NC)
max_scores_quantized = np.max(scores_output_quantized, axis=1) # (2100,)
mask = max_scores_quantized >= self.min_score_quantized # (2100,)
detections = np.zeros((20, 6), np.float32)
if not np.any(mask):
return detections # empty results
max_scores_filtered_shiftedpositive = (
(max_scores_quantized[mask] - self.scores_zero_point)
* self.scores_scale
) + self.logit_shift_to_positive_values # (N,1) shifted logit values
scores_output_quantized_filtered = scores_output_quantized[mask]
# dequantize boxes. NMS needs them to be in float format
# remove candidates with probabilities < threshold
boxes_output_quantized_filtered = (
self.interpreter.get_tensor(self.boxes_tensor_index)[0]
)[mask] # (N, 64)
boxes_output_filtered = (
boxes_output_quantized_filtered.astype(np.float32)
- self.boxes_zero_point
) * self.boxes_scale
# 2. Decode DFL to distances (ltrb)
dfl_distributions = boxes_output_filtered.reshape(
-1, 4, self.reg_max
) # (N, 4, 16)
# Softmax over the 16 bins
dfl_max = np.max(dfl_distributions, axis=2, keepdims=True)
dfl_exp = np.exp(dfl_distributions - dfl_max)
dfl_probs = dfl_exp / np.sum(dfl_exp, axis=2, keepdims=True) # (N, 4, 16)
# Weighted sum: (N, 4, 16) * (16,) -> (N, 4)
distances = np.einsum("pcr,r->pc", dfl_probs, self.project)
# Calculate box corners in pixel coordinates
anchors_filtered = self.anchors[mask]
anchor_strides_filtered = self.anchor_strides[mask]
x1y1 = (
anchors_filtered - distances[:, [0, 1]]
) * anchor_strides_filtered # (N, 2)
x2y2 = (
anchors_filtered + distances[:, [2, 3]]
) * anchor_strides_filtered # (N, 2)
boxes_filtered_decoded = np.concatenate((x1y1, x2y2), axis=-1) # (N, 4)
# 9. Apply NMS. Use logit scores here to defer sigmoid()
# until after filtering out redundant boxes
# Shift the logit scores to be non-negative (required by cv2)
indices = cv2.dnn.NMSBoxes(
bboxes=boxes_filtered_decoded,
scores=max_scores_filtered_shiftedpositive,
score_threshold=(
self.min_logit_value + self.logit_shift_to_positive_values
),
nms_threshold=0.4, # should this be a model config setting?
)
num_detections = len(indices)
if num_detections == 0:
return detections # empty results
nms_indices = np.array(indices, dtype=np.int32).ravel() # or .flatten()
if num_detections > self.max_detections:
nms_indices = nms_indices[: self.max_detections]
num_detections = self.max_detections
kept_logits_quantized = scores_output_quantized_filtered[nms_indices]
class_ids_post_nms = np.argmax(kept_logits_quantized, axis=1)
# Extract the final boxes and scores using fancy indexing
final_boxes = boxes_filtered_decoded[nms_indices]
final_scores_logits = (
max_scores_filtered_shiftedpositive[nms_indices]
- self.logit_shift_to_positive_values
) # Unshifted logits
# Detections array format: [class_id, score, ymin, xmin, ymax, xmax]
detections[:num_detections, 0] = class_ids_post_nms
detections[:num_detections, 1] = 1.0 / (
1.0 + np.exp(-final_scores_logits)
) # sigmoid
detections[:num_detections, 2] = final_boxes[:, 1] / self.model_height
detections[:num_detections, 3] = final_boxes[:, 0] / self.model_width
detections[:num_detections, 4] = final_boxes[:, 3] / self.model_height
detections[:num_detections, 5] = final_boxes[:, 2] / self.model_width
return detections
elif self.model_type == ModelTypeEnum.ssd:
self.determine_indexes_for_non_yolo_models()
boxes = self.interpreter.tensor(self.tensor_output_details[0]["index"])()[0]
class_ids = self.interpreter.tensor(
self.tensor_output_details[1]["index"]
)()[0]
scores = self.interpreter.tensor(self.tensor_output_details[2]["index"])()[
0
]
for i in range(count):
if scores[i] < 0.4 or i == 20:
break
detections[i] = [
class_ids[i],
float(scores[i]),
boxes[i][0],
boxes[i][1],
boxes[i][2],
boxes[i][3],
]
count = int(
self.interpreter.tensor(self.tensor_output_details[3]["index"])()[0]
)
detections = np.zeros((self.max_detections, 6), np.float32)
for i in range(count):
if scores[i] < self.min_score:
break
if i == self.max_detections:
logger.debug(f"Too many detections ({count})!")
break
detections[i] = [
class_ids[i],
float(scores[i]),
boxes[i][0],
boxes[i][1],
boxes[i][2],
boxes[i][3],
]
return detections
else:
raise Exception(
f"{self.model_type} is currently not supported for edgetpu. See the docs for more info on supported models."
)
return detections
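
As a worked illustration of the DFL decode performed above (softmax over the 16 bins, weighted sum against the projection vector, then offsetting the anchor by the predicted distances), here is a standalone sketch for a single anchor. Shapes and values are made up for demonstration and do not come from this diff.

import numpy as np

reg_max = 16
project = np.arange(reg_max, dtype=np.float32)        # bin centers 0..15

# one candidate, four sides (l, t, r, b), each a 16-bin distribution of logits
dfl = np.random.randn(1, 4, reg_max).astype(np.float32)

# softmax over the 16 bins (numerically stabilised)
dfl_max = dfl.max(axis=2, keepdims=True)
probs = np.exp(dfl - dfl_max)
probs /= probs.sum(axis=2, keepdims=True)

# expected distance per side, in grid units
distances = np.einsum("pcr,r->pc", probs, project)    # shape (1, 4)

anchor = np.array([[10.5, 7.5]], dtype=np.float32)    # grid-cell centre (x, y)
stride = 16.0                                         # detection-head stride

x1y1 = (anchor - distances[:, [0, 1]]) * stride
x2y2 = (anchor + distances[:, [2, 3]]) * stride
box = np.concatenate((x1y1, x2y2), axis=-1)           # [x1, y1, x2, y2] in pixels
print(box)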

View File

@@ -2,6 +2,7 @@ import glob
import logging
import os
import shutil
import time
import urllib.request
import zipfile
from queue import Queue
@@ -54,9 +55,6 @@ class MemryXDetector(DetectionApi):
)
return
# Initialize stop_event as None, will be set later by set_stop_event()
self.stop_event = None
model_cfg = getattr(detector_config, "model", None)
# Check if model_type was explicitly set by the user
@@ -365,43 +363,26 @@ class MemryXDetector(DetectionApi):
def process_input(self):
"""Input callback function: wait for frames in the input queue, preprocess, and send to MX3 (return)"""
while True:
# Check if shutdown is requested
if self.stop_event and self.stop_event.is_set():
logger.debug("[process_input] Stop event detected, returning None")
return None
try:
# Wait for a frame from the queue with timeout to check stop_event periodically
frame = self.capture_queue.get(block=True, timeout=0.5)
# Wait for a frame from the queue (blocking call)
frame = self.capture_queue.get(
block=True
) # Blocks until data is available
return frame
except Exception as e:
# Silently handle queue.Empty timeouts (expected during normal operation)
# Log any other unexpected exceptions
if "Empty" not in str(type(e).__name__):
logger.warning(f"[process_input] Unexpected error: {e}")
# Loop continues and will check stop_event at the top
logger.info(f"[process_input] Error processing input: {e}")
time.sleep(0.1) # Prevent busy waiting in case of error
def receive_output(self):
"""Retrieve processed results from MemryX output queue + a copy of the original frame"""
try:
# Get connection ID with timeout
connection_id = self.capture_id_queue.get(
block=True, timeout=1.0
) # Get the corresponding connection ID
detections = self.output_queue.get() # Get detections from MemryX
connection_id = (
self.capture_id_queue.get()
) # Get the corresponding connection ID
detections = self.output_queue.get() # Get detections from MemryX
return connection_id, detections
except Exception as e:
# On timeout or stop event, return None
if self.stop_event and self.stop_event.is_set():
logger.debug("[receive_output] Stop event detected, exiting")
# Silently handle queue.Empty timeouts, they're expected during normal operation
elif "Empty" not in str(type(e).__name__):
logger.warning(f"[receive_output] Error receiving output: {e}")
return None, None
return connection_id, detections
def post_process_yolonas(self, output):
predictions = output[0]
@@ -850,19 +831,6 @@ class MemryXDetector(DetectionApi):
f"{self.memx_model_type} is currently not supported for memryx. See the docs for more info on supported models."
)
def set_stop_event(self, stop_event):
"""Set the stop event for graceful shutdown."""
self.stop_event = stop_event
def shutdown(self):
"""Gracefully shutdown the MemryX accelerator"""
try:
if hasattr(self, "accl") and self.accl is not None:
self.accl.shutdown()
logger.info("MemryX accelerator shutdown complete")
except Exception as e:
logger.error(f"Error during MemryX shutdown: {e}")
def detect_raw(self, tensor_input: np.ndarray):
"""Removed synchronous detect_raw() function so that we only use async"""
return 0
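
The input-callback change above swaps a fully blocking queue.get() for one with a timeout so the loop can notice a stop event. A generic sketch of that pattern (names are placeholders, not Frigate's):

import queue
import threading

def worker(frames: queue.Queue, stop_event: threading.Event) -> None:
    while not stop_event.is_set():
        try:
            frame = frames.get(block=True, timeout=0.5)
        except queue.Empty:
            continue  # nothing yet; loop back and re-check the stop event
        print("processing", frame)

q = queue.Queue()
stop = threading.Event()
t = threading.Thread(target=worker, args=(q, stop))
t.start()
q.put("frame-1")
stop.set()
t.join()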

View File

@@ -8,7 +8,7 @@ import cv2
import numpy as np
from pydantic import Field
from frigate.const import MODEL_CACHE_DIR, SUPPORTED_RK_SOCS
from frigate.const import MODEL_CACHE_DIR
from frigate.detectors.detection_api import DetectionApi
from frigate.detectors.detection_runners import RKNNModelRunner
from frigate.detectors.detector_config import BaseDetectorConfig, ModelTypeEnum
@@ -19,6 +19,8 @@ logger = logging.getLogger(__name__)
DETECTOR_KEY = "rknn"
supported_socs = ["rk3562", "rk3566", "rk3568", "rk3576", "rk3588"]
supported_models = {
ModelTypeEnum.yologeneric: "^frigate-fp16-yolov9-[cemst]$",
ModelTypeEnum.yolonas: "^deci-fp16-yolonas_[sml]$",
@@ -80,9 +82,9 @@ class Rknn(DetectionApi):
except FileNotFoundError:
raise Exception("Make sure to run docker in privileged mode.")
if soc not in SUPPORTED_RK_SOCS:
if soc not in supported_socs:
raise Exception(
f"Your SoC is not supported. Your SoC is: {soc}. Currently these SoCs are supported: {SUPPORTED_RK_SOCS}."
f"Your SoC is not supported. Your SoC is: {soc}. Currently these SoCs are supported: {supported_socs}."
)
return soc

View File

@@ -12,6 +12,9 @@ from peewee import DoesNotExist, IntegrityError
from PIL import Image
from playhouse.shortcuts import model_to_dict
from frigate.comms.embeddings_updater import (
EmbeddingsRequestEnum,
)
from frigate.comms.inter_process import InterProcessRequestor
from frigate.config import FrigateConfig
from frigate.config.classification import SemanticSearchModelEnum
@@ -492,49 +495,44 @@ class Embeddings:
or thumbnail_missing
):
existing_trigger.embedding = self._calculate_trigger_embedding(
trigger, trigger_name, camera.name
trigger
)
needs_embedding_update = True
if needs_embedding_update:
existing_trigger.save()
continue
else:
# Create new trigger
try:
# For thumbnail triggers, validate the event exists
if trigger.type == "thumbnail":
try:
event: Event = Event.get(Event.id == trigger.data)
except DoesNotExist:
logger.warning(
f"Event ID {trigger.data} for trigger {trigger_name} does not exist."
)
continue
# Skip the event if not an object
if event.data.get("type") != "object":
logger.warning(
f"Event ID {trigger.data} for trigger {trigger_name} is not a tracked object."
)
continue
thumbnail = get_event_thumbnail_bytes(event)
if not thumbnail:
logger.warning(
f"Unable to retrieve thumbnail for event ID {trigger.data} for {trigger_name}."
)
continue
self.write_trigger_thumbnail(
camera.name, trigger.data, thumbnail
try:
event: Event = Event.get(Event.id == trigger.data)
except DoesNotExist:
logger.warning(
f"Event ID {trigger.data} for trigger {trigger_name} does not exist."
)
continue
# Skip the event if not an object
if event.data.get("type") != "object":
logger.warning(
f"Event ID {trigger.data} for trigger {trigger_name} is not a tracked object."
)
continue
thumbnail = get_event_thumbnail_bytes(event)
if not thumbnail:
logger.warning(
f"Unable to retrieve thumbnail for event ID {trigger.data} for {trigger_name}."
)
continue
self.write_trigger_thumbnail(
camera.name, trigger.data, thumbnail
)
# Calculate embedding for new trigger
embedding = self._calculate_trigger_embedding(
trigger, trigger_name, camera.name
)
embedding = self._calculate_trigger_embedding(trigger)
Trigger.create(
camera=camera.name,
@@ -560,11 +558,7 @@ class Embeddings:
Trigger.camera == camera.name, Trigger.name.in_(triggers_to_remove)
).execute()
for trigger_name in triggers_to_remove:
# Only remove thumbnail files for thumbnail triggers
if existing_triggers[trigger_name].type == "thumbnail":
self.remove_trigger_thumbnail(
camera.name, existing_triggers[trigger_name].data
)
self.remove_trigger_thumbnail(camera.name, trigger_name)
def write_trigger_thumbnail(
self, camera: str, event_id: str, thumbnail: bytes
@@ -594,13 +588,14 @@ class Embeddings:
f"Failed to delete thumbnail for trigger with data {event_id} in {camera}: {e}"
)
def _calculate_trigger_embedding(
self, trigger, trigger_name: str, camera_name: str
) -> bytes:
def _calculate_trigger_embedding(self, trigger) -> bytes:
"""Calculate embedding for a trigger based on its type and data."""
if trigger.type == "description":
logger.debug(f"Generating embedding for trigger description {trigger_name}")
embedding = self.embed_description(None, trigger.data, upsert=False)
logger.debug(f"Generating embedding for trigger description {trigger.name}")
embedding = self.requestor.send_data(
EmbeddingsRequestEnum.embed_description.value,
{"id": None, "description": trigger.data, "upsert": False},
)
return embedding.astype(np.float32).tobytes()
elif trigger.type == "thumbnail":
@@ -620,21 +615,28 @@ class Embeddings:
try:
with open(
os.path.join(TRIGGER_DIR, camera_name, f"{trigger.data}.webp"),
os.path.join(
TRIGGER_DIR, trigger.camera, f"{trigger.data}.webp"
),
"rb",
) as f:
thumbnail = f.read()
except Exception as e:
logger.error(
f"Failed to read thumbnail for trigger {trigger_name} with ID {trigger.data}: {e}"
f"Failed to read thumbnail for trigger {trigger.name} with ID {trigger.data}: {e}"
)
return b""
logger.debug(
f"Generating embedding for trigger thumbnail {trigger_name} with ID {trigger.data}"
f"Generating embedding for trigger thumbnail {trigger.name} with ID {trigger.data}"
)
embedding = self.embed_thumbnail(
str(trigger.data), thumbnail, upsert=False
embedding = self.requestor.send_data(
EmbeddingsRequestEnum.embed_thumbnail.value,
{
"id": str(trigger.data),
"thumbnail": str(thumbnail),
"upsert": False,
},
)
return embedding.astype(np.float32).tobytes()
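
Trigger embeddings are stored as raw float32 bytes, as in the return statements above. A small round-trip sketch (the vector and its length are arbitrary):

import numpy as np

embedding = np.random.rand(768).astype(np.float32)

blob = embedding.tobytes()                         # what gets persisted
restored = np.frombuffer(blob, dtype=np.float32)   # what gets read back

assert np.array_equal(embedding, restored)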

View File

@@ -203,9 +203,7 @@ class EmbeddingMaintainer(threading.Thread):
# post processors
self.post_processors: list[PostProcessorApi] = []
if self.genai_client is not None and any(
c.review.genai.enabled_in_config for c in self.config.cameras.values()
):
if any(c.review.genai.enabled_in_config for c in self.config.cameras.values()):
self.post_processors.append(
ReviewDescriptionProcessor(
self.config, self.requestor, self.metrics, self.genai_client
@@ -246,9 +244,7 @@ class EmbeddingMaintainer(threading.Thread):
)
self.post_processors.append(semantic_trigger_processor)
if self.genai_client is not None and any(
c.objects.genai.enabled_in_config for c in self.config.cameras.values()
):
if any(c.objects.genai.enabled_in_config for c in self.config.cameras.values()):
self.post_processors.append(
ObjectDescriptionProcessor(
self.config,
@@ -526,8 +522,6 @@ class EmbeddingMaintainer(threading.Thread):
)
elif isinstance(processor, ObjectDescriptionProcessor):
if not updated_db:
# Still need to cleanup tracked events even if not processing
processor.cleanup_event(event_id)
continue
processor.process_data(
@@ -633,7 +627,7 @@ class EmbeddingMaintainer(threading.Thread):
camera, frame_name, _, _, motion_boxes, _ = data
if not camera or len(motion_boxes) == 0 or camera not in self.config.cameras:
if not camera or len(motion_boxes) == 0:
return
camera_config = self.config.cameras[camera]

View File

@@ -8,7 +8,7 @@ import numpy as np
from frigate.const import MODEL_CACHE_DIR
from frigate.detectors.detection_runners import get_optimized_runner
from frigate.embeddings.types import EnrichmentModelTypeEnum
from frigate.log import suppress_stderr_during
from frigate.log import redirect_output_to_logger
from frigate.util.downloader import ModelDownloader
from ...config import FaceRecognitionConfig
@@ -57,18 +57,17 @@ class FaceNetEmbedding(BaseEmbedding):
self._load_model_and_utils()
logger.debug(f"models are already downloaded for {self.model_name}")
@redirect_output_to_logger(logger, logging.DEBUG)
def _load_model_and_utils(self):
if self.runner is None:
if self.downloader:
self.downloader.wait_for_download()
# Suppress TFLite delegate creation messages that bypass Python logging
with suppress_stderr_during("tflite_interpreter_init"):
self.runner = Interpreter(
model_path=os.path.join(MODEL_CACHE_DIR, "facedet/facenet.tflite"),
num_threads=2,
)
self.runner.allocate_tensors()
self.runner = Interpreter(
model_path=os.path.join(MODEL_CACHE_DIR, "facedet/facenet.tflite"),
num_threads=2,
)
self.runner.allocate_tensors()
self.tensor_input_details = self.runner.get_input_details()
self.tensor_output_details = self.runner.get_output_details()

View File

@@ -186,9 +186,6 @@ class JinaV1ImageEmbedding(BaseEmbedding):
download_func=self._download_model,
)
self.downloader.ensure_model_files()
# Avoid lazy loading in worker threads: block until downloads complete
# and load the model on the main thread during initialization.
self._load_model_and_utils()
else:
self.downloader = None
ModelDownloader.mark_files_state(

View File

@@ -65,9 +65,6 @@ class JinaV2Embedding(BaseEmbedding):
download_func=self._download_model,
)
self.downloader.ensure_model_files()
# Avoid lazy loading in worker threads: block until downloads complete
# and load the model on the main thread during initialization.
self._load_model_and_utils()
else:
self.downloader = None
ModelDownloader.mark_files_state(

View File

@@ -34,7 +34,7 @@ from frigate.data_processing.real_time.audio_transcription import (
AudioTranscriptionRealTimeProcessor,
)
from frigate.ffmpeg_presets import parse_preset_input
from frigate.log import LogPipe, suppress_stderr_during
from frigate.log import LogPipe, redirect_output_to_logger
from frigate.object_detection.base import load_labels
from frigate.util.builtin import get_ffmpeg_arg_list
from frigate.util.process import FrigateProcess
@@ -367,17 +367,17 @@ class AudioEventMaintainer(threading.Thread):
class AudioTfl:
@redirect_output_to_logger(logger, logging.DEBUG)
def __init__(self, stop_event: threading.Event, num_threads=2):
self.stop_event = stop_event
self.num_threads = num_threads
self.labels = load_labels("/audio-labelmap.txt", prefill=521)
# Suppress TFLite delegate creation messages that bypass Python logging
with suppress_stderr_during("tflite_interpreter_init"):
self.interpreter = Interpreter(
model_path="/cpu_audio_model.tflite",
num_threads=self.num_threads,
)
self.interpreter.allocate_tensors()
self.interpreter = Interpreter(
model_path="/cpu_audio_model.tflite",
num_threads=self.num_threads,
)
self.interpreter.allocate_tensors()
self.tensor_input_details = self.interpreter.get_input_details()
self.tensor_output_details = self.interpreter.get_output_details()

View File

@@ -46,7 +46,7 @@ def should_update_state(prev_event: Event, current_event: Event) -> bool:
if prev_event["sub_label"] != current_event["sub_label"]:
return True
if set(prev_event["current_zones"]) != set(current_event["current_zones"]):
if len(prev_event["current_zones"]) < len(current_event["current_zones"]):
return True
return False
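
The zone check above is the difference between comparing lengths and comparing set membership; only the latter notices an object moving between zones of equal count. A two-line illustration:

prev_zones = ["porch"]
curr_zones = ["driveway"]

print(len(prev_zones) < len(curr_zones))    # False -> no state update
print(set(prev_zones) != set(curr_zones))   # True  -> state update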

View File

@@ -153,7 +153,7 @@ PRESETS_HW_ACCEL_ENCODE_BIRDSEYE = {
FFMPEG_HWACCEL_VAAPI: "{0} -hide_banner -hwaccel vaapi -hwaccel_output_format vaapi -hwaccel_device {3} {1} -c:v h264_vaapi -g 50 -bf 0 -profile:v high -level:v 4.1 -sei:v 0 -an -vf format=vaapi|nv12,hwupload {2}",
"preset-intel-qsv-h264": "{0} -hide_banner {1} -c:v h264_qsv -g 50 -bf 0 -profile:v high -level:v 4.1 -async_depth:v 1 {2}",
"preset-intel-qsv-h265": "{0} -hide_banner {1} -c:v h264_qsv -g 50 -bf 0 -profile:v main -level:v 4.1 -async_depth:v 1 {2}",
FFMPEG_HWACCEL_NVIDIA: "{0} -hide_banner {1} -c:v h264_nvenc -g 50 -profile:v high -level:v auto -preset:v p2 -tune:v ll {2}",
FFMPEG_HWACCEL_NVIDIA: "{0} -hide_banner {1} -hwaccel device {3} -c:v h264_nvenc -g 50 -profile:v high -level:v auto -preset:v p2 -tune:v ll {2}",
"preset-jetson-h264": "{0} -hide_banner {1} -c:v h264_nvmpi -profile high {2}",
"preset-jetson-h265": "{0} -hide_banner {1} -c:v h264_nvmpi -profile main {2}",
FFMPEG_HWACCEL_RKMPP: "{0} -hide_banner {1} -c:v h264_rkmpp -profile:v high {2}",
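
These presets are Python format strings; a plausible way they get filled is with positional slots for the global args, input args, and output target (and, where present, the hardware device). The slot meanings below are an assumption for illustration, not taken from Frigate's call sites.

preset = (
    "{0} -hide_banner {1} -c:v h264_nvenc -g 50 -profile:v high "
    "-level:v auto -preset:v p2 -tune:v ll {2}"
)
cmd = preset.format("ffmpeg", "-i pipe:", "output.mp4")   # hypothetical arguments
print(cmd)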

View File

@@ -101,7 +101,6 @@ When forming your description:
Your response MUST be a flat JSON object with:
- `title` (string): A concise, direct title that describes the primary action or event in the sequence, not just what you literally see. Use spatial context when available to make titles more meaningful. When multiple objects/actions are present, prioritize whichever is most prominent or occurs first. Use names from "Objects in Scene" based on what you visually observe. If you see both a name and an unidentified object of the same type but visually observe only one person/object, use ONLY the name. Examples: "Joe walking dog", "Person taking out trash", "Vehicle arriving in driveway", "Joe accessing vehicle", "Person leaving porch for driveway".
- `scene` (string): A narrative description of what happens across the sequence from start to finish, in chronological order. Start by describing how the sequence begins, then describe the progression of events. **Describe all significant movements and actions in the order they occur.** For example, if a vehicle arrives and then a person exits, describe both actions sequentially. **Only describe actions you can actually observe happening in the frames provided.** Do not infer or assume actions that aren't visible (e.g., if you see someone walking but never see them sit, don't say they sat down). Include setting, detected objects, and their observable actions. Avoid speculation or filling in assumed behaviors. Your description should align with and support the threat level you assign.
- `shortSummary` (string): A brief 2-sentence summary of the scene, suitable for notifications. Should capture the key activity and context without full detail. This should be a condensed version of the scene description above.
- `confidence` (float): 0-1 confidence in your analysis. Higher confidence when objects/actions are clearly visible and context is unambiguous. Lower confidence when the sequence is unclear, objects are partially obscured, or context is ambiguous.
- `potential_threat_level` (integer): 0, 1, or 2 as defined in "Normal Activity Patterns for This Property" above. Your threat level must be consistent with your scene description and the guidance above.
{get_concern_prompt()}
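
The response contract above is a flat JSON object; a minimal sketch of consuming it with the standard library (the sample payload is invented, and only a subset of the fields is shown):

import json

raw = (
    '{"title": "Person taking out trash", '
    '"scene": "A person carries a bag from the porch to the curb.", '
    '"confidence": 0.82, "potential_threat_level": 0}'
)

result = json.loads(raw)
assert 0.0 <= result["confidence"] <= 1.0
assert result["potential_threat_level"] in (0, 1, 2)
print(result["title"])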
@@ -178,66 +177,50 @@ Each line represents a detection state, not necessarily unique individuals. Pare
self,
start_ts: float,
end_ts: float,
events: list[dict[str, Any]],
preferred_language: str | None,
segments: list[dict[str, Any]],
debug_save: bool,
) -> str | None:
"""Generate a summary of review item descriptions over a period of time."""
time_range = f"{datetime.datetime.fromtimestamp(start_ts).strftime('%B %d, %Y at %I:%M %p')} to {datetime.datetime.fromtimestamp(end_ts).strftime('%B %d, %Y at %I:%M %p')}"
timeline_summary_prompt = f"""
You are a security officer writing a concise security report.
You are a security officer.
Time range: {time_range}.
Input: JSON list with "title", "scene", "confidence", "potential_threat_level" (1-2), "other_concerns".
Time range: {time_range}
Task: Write a concise, human-presentable security report in markdown format.
Input format: Each event is a JSON object with:
- "title", "scene", "confidence", "potential_threat_level" (0-2), "other_concerns", "camera", "time", "start_time", "end_time"
- "context": array of related events from other cameras that occurred during overlapping time periods
Rules for the report:
**Note: Use the "scene" field for event descriptions in the report. Ignore any "shortSummary" field if present.**
- Title & overview
- Start with:
# Security Summary - {time_range}
- Write a 1-2 sentence situational overview capturing the general pattern of the period.
Report Structure - Use this EXACT format:
- Event details
- Present events in chronological order as a bullet list.
- **If multiple events occur within the same minute or overlapping time range, COMBINE them into a single bullet.**
- Summarize the distinct activities as sub-points under the shared timestamp.
- If no timestamp is given, preserve order but label as “Time not specified.”
- Use bold timestamps for clarity.
- Group bullets under subheadings when multiple events fall into the same category (e.g., Vehicle Activity, Porch Activity, Unusual Behavior).
# Security Summary - {time_range}
- Threat levels
- Always show (threat level: X) for each event.
- If multiple events at the same time share the same threat level, only state it once.
## Overview
[Write 1-2 sentences summarizing the overall activity pattern during this period.]
- Final assessment
- End with a Final Assessment section.
- If all events are threat level 1 with no escalation:
Final assessment: Only normal residential activity observed during this period.
- If threat level 2+ events are present, clearly summarize them as Potential concerns requiring review.
---
## Timeline
[Group events by time periods (e.g., "Morning (6:00 AM - 12:00 PM)", "Afternoon (12:00 PM - 5:00 PM)", "Evening (5:00 PM - 9:00 PM)", "Night (9:00 PM - 6:00 AM)"). Use appropriate time blocks based on when events occurred.]
### [Time Block Name]
**HH:MM AM/PM** | [Camera Name] | [Threat Level Indicator]
- [Event title]: [Clear description incorporating contextual information from the "context" array]
- Context: [If context array has items, mention them here, e.g., "Delivery truck present on Front Driveway Cam (HH:MM AM/PM)"]
- Assessment: [Brief assessment incorporating context - if context explains the event, note it here]
[Repeat for each event in chronological order within the time block]
---
## Summary
[One sentence summarizing the period. If all events are normal/explained: "Routine activity observed." If review needed: "Some activity requires review but no security concerns." If security concerns: "Security concerns requiring immediate attention."]
Guidelines:
- List ALL events in chronological order, grouped by time blocks
- Threat level indicators: ✓ Normal, ⚠️ Needs review, 🔴 Security concern
- Integrate contextual information naturally - use the "context" array to enrich each event's description
- If context explains the event (e.g., delivery truck explains person at door), describe it accordingly (e.g., "delivery person" not "unidentified person")
- Be concise but informative - focus on what happened and what it means
- If contextual information makes an event clearly normal, reflect that in your assessment
- Only create time blocks that have events - don't create empty sections
- Conciseness
- Do not repeat benign clothing/appearance details unless they distinguish individuals.
- Summarize similar routine events instead of restating full scene descriptions.
"""
timeline_summary_prompt += "\n\nEvents:\n"
for event in events:
timeline_summary_prompt += f"\n{event}\n"
if preferred_language:
timeline_summary_prompt += f"\nProvide your answer in {preferred_language}"
for item in segments:
timeline_summary_prompt += f"\n{item}"
if debug_save:
with open(

View File

@@ -3,7 +3,7 @@
import logging
from typing import Any, Optional
from httpx import RemoteProtocolError, TimeoutException
from httpx import TimeoutException
from ollama import Client as ApiClient
from ollama import ResponseError
@@ -68,12 +68,7 @@ class OllamaClient(GenAIClient):
f"Ollama tokens used: eval_count={result.get('eval_count')}, prompt_eval_count={result.get('prompt_eval_count')}"
)
return result["response"].strip()
except (
TimeoutException,
ResponseError,
RemoteProtocolError,
ConnectionError,
) as e:
except (TimeoutException, ResponseError, ConnectionError) as e:
logger.warning("Ollama returned an error: %s", str(e))
return None
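
The change above widens the exceptions caught around the Ollama call. A generic sketch of guarding an HTTP request against those transport errors with httpx (the URL is a placeholder):

import httpx

try:
    resp = httpx.get("http://localhost:11434/api/tags", timeout=10.0)
    resp.raise_for_status()
except (httpx.TimeoutException, httpx.RemoteProtocolError, ConnectionError) as e:
    print(f"endpoint unreachable: {e}")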

View File

@@ -80,15 +80,10 @@ def apply_log_levels(default: str, log_levels: dict[str, LogLevel]) -> None:
log_levels = {
"absl": LogLevel.error,
"httpx": LogLevel.error,
"h5py": LogLevel.error,
"keras": LogLevel.error,
"matplotlib": LogLevel.error,
"tensorflow": LogLevel.error,
"tensorflow.python": LogLevel.error,
"werkzeug": LogLevel.error,
"ws4py": LogLevel.error,
"PIL": LogLevel.warning,
"numba": LogLevel.warning,
**log_levels,
}
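
Applying a mapping like the one above is plain standard-library logging; for example:

import logging

levels = {"httpx": logging.ERROR, "werkzeug": logging.ERROR, "PIL": logging.WARNING}
for module, level in levels.items():
    logging.getLogger(module).setLevel(level)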
@@ -323,31 +318,3 @@ def suppress_os_output(func: Callable) -> Callable:
return result
return wrapper
@contextmanager
def suppress_stderr_during(operation_name: str) -> Generator[None, None, None]:
"""
Context manager to suppress stderr output during a specific operation.
Useful for silencing LLVM debug output, CUDA messages, and other native
library logging that cannot be controlled via Python logging or environment
variables. Completely redirects file descriptor 2 (stderr) to /dev/null.
Usage:
with suppress_stderr_during("model_conversion"):
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()
Args:
operation_name: Name of the operation for debugging purposes
"""
original_stderr_fd = os.dup(2)
devnull = os.open(os.devnull, os.O_WRONLY)
try:
os.dup2(devnull, 2)
yield
finally:
os.dup2(original_stderr_fd, 2)
os.close(devnull)
os.close(original_stderr_fd)

View File

@@ -133,7 +133,6 @@ class User(Model):
default="admin",
)
password_hash = CharField(null=False, max_length=120)
password_changed_at = DateTimeField(null=True)
notification_tokens = JSONField()
@classmethod

View File

@@ -239,12 +239,6 @@ class ImprovedMotionDetector(MotionDetector):
)
self.mask = np.where(resized_mask == [0])
# Reset motion detection state when mask changes
# so motion detection can quickly recalibrate with the new mask
self.avg_frame = np.zeros(self.motion_frame_size, np.float32)
self.calibrating = True
self.motion_frame_count = 0
def stop(self) -> None:
"""stop the motion detector."""
pass

View File

@@ -43,7 +43,6 @@ class BaseLocalDetector(ObjectDetector):
self,
detector_config: BaseDetectorConfig = None,
labels: str = None,
stop_event: MpEvent = None,
):
self.fps = EventsPerSecond()
if labels is None:
@@ -61,10 +60,6 @@ class BaseLocalDetector(ObjectDetector):
self.detect_api = create_detector(detector_config)
# If the detector supports stop_event, pass it
if hasattr(self.detect_api, "set_stop_event") and stop_event:
self.detect_api.set_stop_event(stop_event)
def _transform_input(self, tensor_input: np.ndarray) -> np.ndarray:
if self.input_transform:
tensor_input = np.transpose(tensor_input, self.input_transform)
@@ -245,10 +240,6 @@ class AsyncDetectorRunner(FrigateProcess):
while not self.stop_event.is_set():
connection_id, detections = self._detector.async_receive_output()
# Handle timeout case (queue.Empty) - just continue
if connection_id is None:
continue
if not self.send_times:
# guard; shouldn't happen if send/recv are balanced
continue
@@ -275,38 +266,21 @@ class AsyncDetectorRunner(FrigateProcess):
self._frame_manager = SharedMemoryFrameManager()
self._publisher = ObjectDetectorPublisher()
self._detector = AsyncLocalObjectDetector(
detector_config=self.detector_config, stop_event=self.stop_event
)
self._detector = AsyncLocalObjectDetector(detector_config=self.detector_config)
for name in self.cameras:
self.create_output_shm(name)
t_detect = threading.Thread(target=self._detect_worker, daemon=False)
t_result = threading.Thread(target=self._result_worker, daemon=False)
t_detect = threading.Thread(target=self._detect_worker, daemon=True)
t_result = threading.Thread(target=self._result_worker, daemon=True)
t_detect.start()
t_result.start()
try:
while not self.stop_event.is_set():
time.sleep(0.5)
while not self.stop_event.is_set():
time.sleep(0.5)
logger.info(
"Stop event detected, waiting for detector threads to finish..."
)
# Wait for threads to finish processing
t_detect.join(timeout=5)
t_result.join(timeout=5)
# Shutdown the AsyncDetector
self._detector.detect_api.shutdown()
self._publisher.stop()
except Exception as e:
logger.error(f"Error during async detector shutdown: {e}")
finally:
logger.info("Exited Async detection process...")
self._publisher.stop()
logger.info("Exited async detection process...")
class ObjectDetectProcess:
@@ -334,7 +308,7 @@ class ObjectDetectProcess:
# if the process has already exited on its own, just return
if self.detect_process and self.detect_process.exitcode:
return
self.detect_process.terminate()
logging.info("Waiting for detection process to exit gracefully...")
self.detect_process.join(timeout=30)
if self.detect_process.exitcode is None:
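
The shutdown path above relies on non-daemon worker threads joined with a timeout so the process exits promptly without hanging. A reduced sketch of that pattern:

import threading
import time

stop_event = threading.Event()

def worker() -> None:
    while not stop_event.is_set():
        time.sleep(0.1)

t = threading.Thread(target=worker, daemon=False)
t.start()

stop_event.set()
t.join(timeout=5)                 # bounded wait instead of joining forever
print("worker still alive:", t.is_alive())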

View File

@@ -139,11 +139,9 @@ class OutputProcess(FrigateProcess):
if CameraConfigUpdateEnum.add in updates:
for camera in updates["add"]:
jsmpeg_cameras[camera] = JsmpegCamera(
self.config.cameras[camera], self.stop_event, websocket_server
)
preview_recorders[camera] = PreviewRecorder(
self.config.cameras[camera]
cam_config, self.stop_event, websocket_server
)
preview_recorders[camera] = PreviewRecorder(cam_config)
preview_write_times[camera] = 0
if (

View File

@@ -95,21 +95,12 @@ class OnvifController:
cam = self.camera_configs[cam_name]
try:
user = cam.onvif.user
password = cam.onvif.password
if user is not None and isinstance(user, bytes):
user = user.decode("utf-8")
if password is not None and isinstance(password, bytes):
password = password.decode("utf-8")
self.cams[cam_name] = {
"onvif": ONVIFCamera(
cam.onvif.host,
cam.onvif.port,
user,
password,
cam.onvif.user,
cam.onvif.password,
wsdl_dir=str(Path(find_spec("onvif").origin).parent / "wsdl"),
adjust_time=cam.onvif.ignore_time_mismatch,
encrypt=not cam.onvif.tls_insecure,
@@ -199,11 +190,7 @@ class OnvifController:
ptz: ONVIFService = await onvif.create_ptz_service()
self.cams[camera_name]["ptz"] = ptz
try:
imaging: ONVIFService = await onvif.create_imaging_service()
except (Fault, ONVIFError, TransportError, Exception) as e:
logger.debug(f"Imaging service not supported for {camera_name}: {e}")
imaging = None
imaging: ONVIFService = await onvif.create_imaging_service()
self.cams[camera_name]["imaging"] = imaging
try:
video_sources = await media.GetVideoSources()
@@ -334,15 +321,9 @@ class OnvifController:
presets = []
for preset in presets:
# Ensure preset name is a Unicode string and handle UTF-8 characters correctly
preset_name = getattr(preset, "Name") or f"preset {preset['token']}"
if isinstance(preset_name, bytes):
preset_name = preset_name.decode("utf-8")
# Convert to lowercase while preserving UTF-8 characters
preset_name_lower = preset_name.lower()
self.cams[camera_name]["presets"][preset_name_lower] = preset["token"]
self.cams[camera_name]["presets"][
(getattr(preset, "Name") or f"preset {preset['token']}").lower()
] = preset["token"]
# get list of supported features
supported_features = []
@@ -400,10 +381,7 @@ class OnvifController:
f"Disabling autotracking zooming for {camera_name}: Absolute zoom not supported. Exception: {e}"
)
if (
self.cams[camera_name]["video_source_token"] is not None
and imaging is not None
):
if self.cams[camera_name]["video_source_token"] is not None:
try:
imaging_capabilities = await imaging.GetImagingSettings(
{"VideoSourceToken": self.cams[camera_name]["video_source_token"]}
@@ -443,7 +421,6 @@ class OnvifController:
if (
"focus" in self.cams[camera_name]["features"]
and self.cams[camera_name]["video_source_token"]
and self.cams[camera_name]["imaging"] is not None
):
try:
stop_request = self.cams[camera_name]["imaging"].create_type("Stop")
@@ -578,11 +555,6 @@ class OnvifController:
self.cams[camera_name]["active"] = False
async def _move_to_preset(self, camera_name: str, preset: str) -> None:
if isinstance(preset, bytes):
preset = preset.decode("utf-8")
preset = preset.lower()
if preset not in self.cams[camera_name]["presets"]:
logger.error(f"{preset} is not a valid preset for {camera_name}")
return
@@ -676,7 +648,6 @@ class OnvifController:
if (
"focus" not in self.cams[camera_name]["features"]
or not self.cams[camera_name]["video_source_token"]
or self.cams[camera_name]["imaging"] is None
):
logger.error(f"{camera_name} does not support ONVIF continuous focus.")
return
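
Several of the ONVIF changes above normalise values that some cameras return as bytes before lower-casing or passing them on. A tiny helper in the same spirit (the helper name is illustrative):

def ensure_text(value):
    # some ONVIF stacks hand back bytes; decode before treating as str
    if isinstance(value, bytes):
        return value.decode("utf-8")
    return value

print(ensure_text(b"Entr\xc3\xa9e").lower())   # 'entrée'
print(ensure_text("Front Gate").lower())       # 'front gate'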

View File

@@ -119,7 +119,6 @@ class RecordingCleanup(threading.Thread):
Recordings.path,
Recordings.objects,
Recordings.motion,
Recordings.dBFS,
)
.where(
(Recordings.camera == config.name)
@@ -127,7 +126,6 @@ class RecordingCleanup(threading.Thread):
(
(Recordings.end_time < continuous_expire_date)
& (Recordings.motion == 0)
& (Recordings.dBFS == 0)
)
| (Recordings.end_time < motion_expire_date)
)
@@ -187,7 +185,6 @@ class RecordingCleanup(threading.Thread):
mode == RetainModeEnum.motion
and recording.motion == 0
and recording.objects == 0
and recording.dBFS == 0
)
or (mode == RetainModeEnum.active_objects and recording.objects == 0)
):

View File

@@ -67,7 +67,7 @@ class SegmentInfo:
if (
not keep
and retain_mode == RetainModeEnum.motion
and (self.motion_count > 0 or self.average_dBFS != 0)
and (self.motion_count > 0 or self.average_dBFS > 0)
):
keep = True
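
Context for the comparison above: dBFS is zero or negative (0 dBFS is digital full scale), so an audio-presence check written as > 0 can never pass, while != 0 keeps segments with any recorded audio.

average_dBFS = -32           # a typical level for audible sound
print(average_dBFS > 0)      # False -> segment would be discarded
print(average_dBFS != 0)     # True  -> segment is kept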

View File

@@ -42,10 +42,11 @@ def get_latest_version(config: FrigateConfig) -> str:
"https://api.github.com/repos/blakeblackshear/frigate/releases/latest",
timeout=10,
)
response = request.json()
except (RequestException, JSONDecodeError):
return "unknown"
response = request.json()
if request.ok and response and "tag_name" in response:
return str(response.get("tag_name").replace("v", ""))
else:
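
The fix above keeps both the HTTP call and the JSON parse inside the try block so a malformed body is handled like a network failure. A condensed sketch, assuming the requests library as in the surrounding code:

import requests
from requests.exceptions import JSONDecodeError, RequestException

def latest_tag() -> str:
    try:
        resp = requests.get(
            "https://api.github.com/repos/blakeblackshear/frigate/releases/latest",
            timeout=10,
        )
        data = resp.json()
    except (RequestException, JSONDecodeError):
        return "unknown"
    if resp.ok and data and "tag_name" in data:
        return str(data["tag_name"]).replace("v", "")
    return "unknown"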

View File

@@ -5,7 +5,7 @@ import shutil
import threading
from pathlib import Path
from peewee import SQL, fn
from peewee import fn
from frigate.config import FrigateConfig
from frigate.const import RECORD_DIR
@@ -44,19 +44,13 @@ class StorageMaintainer(threading.Thread):
)
}
# calculate MB/hr from last 100 segments
# calculate MB/hr
try:
# Subquery to get last 100 segments, then average their bandwidth
last_100 = (
Recordings.select(bandwidth_equation.alias("bw"))
.where(Recordings.camera == camera, Recordings.segment_size > 0)
.order_by(Recordings.start_time.desc())
.limit(100)
.alias("recent")
)
bandwidth = round(
Recordings.select(fn.AVG(SQL("bw"))).from_(last_100).scalar()
Recordings.select(fn.AVG(bandwidth_equation))
.where(Recordings.camera == camera, Recordings.segment_size > 0)
.limit(100)
.scalar()
* 3600,
2,
)
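
The MB/hr figure above averages per-segment bandwidth over the most recent 100 segments. The same arithmetic in plain Python, with made-up (size_mb, duration_s) tuples:

segments = [(5.2, 10.0), (4.8, 10.0), (6.1, 10.0)]   # newest first

recent = segments[:100]
bandwidth = round(
    sum(size / (dur / 3600) for size, dur in recent) / len(recent), 2
)
print(bandwidth, "MB/hr")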

View File

@@ -3,8 +3,6 @@ import logging
import os
import unittest
from fastapi import Request
from fastapi.testclient import TestClient
from peewee_migrate import Router
from playhouse.sqlite_ext import SqliteExtDatabase
from playhouse.sqliteq import SqliteQueueDatabase
@@ -18,20 +16,6 @@ from frigate.review.types import SeverityEnum
from frigate.test.const import TEST_DB, TEST_DB_CLEANUPS
class AuthTestClient(TestClient):
"""TestClient that automatically adds auth headers to all requests."""
def request(self, *args, **kwargs):
# Add default auth headers if not already present
headers = kwargs.get("headers") or {}
if "remote-user" not in headers:
headers["remote-user"] = "admin"
if "remote-role" not in headers:
headers["remote-role"] = "admin"
kwargs["headers"] = headers
return super().request(*args, **kwargs)
class BaseTestHttp(unittest.TestCase):
def setUp(self, models):
# setup clean database for each test run
@@ -129,9 +113,7 @@ class BaseTestHttp(unittest.TestCase):
pass
def create_app(self, stats=None, event_metadata_publisher=None):
from frigate.api.auth import get_allowed_cameras_for_filter, get_current_user
app = create_fastapi_app(
return create_fastapi_app(
FrigateConfig(**self.minimal_config),
self.db,
None,
@@ -141,33 +123,8 @@ class BaseTestHttp(unittest.TestCase):
stats,
event_metadata_publisher,
None,
enforce_default_admin=False,
)
# Default test mocks for authentication
# Tests can override these in their setUp if needed
# This mock uses headers set by AuthTestClient
async def mock_get_current_user(request: Request):
username = request.headers.get("remote-user")
role = request.headers.get("remote-role")
if not username or not role:
from fastapi.responses import JSONResponse
return JSONResponse(
content={"message": "No authorization headers."}, status_code=401
)
return {"username": username, "role": role}
async def mock_get_allowed_cameras_for_filter(request: Request):
return list(self.minimal_config.get("cameras", {}).keys())
app.dependency_overrides[get_current_user] = mock_get_current_user
app.dependency_overrides[get_allowed_cameras_for_filter] = (
mock_get_allowed_cameras_for_filter
)
return app
def insert_mock_event(
self,
id: str,
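
The auth mocking above uses FastAPI's dependency_overrides; a minimal, self-contained example of the mechanism (the app and dependency are stand-ins, not Frigate's):

from fastapi import Depends, FastAPI
from fastapi.testclient import TestClient

app = FastAPI()

def get_current_user():
    raise RuntimeError("real auth is unavailable in tests")

@app.get("/whoami")
def whoami(user: dict = Depends(get_current_user)):
    return user

app.dependency_overrides[get_current_user] = lambda: {"username": "admin", "role": "admin"}

with TestClient(app) as client:
    assert client.get("/whoami").json() == {"username": "admin", "role": "admin"}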

View File

@@ -1,8 +1,10 @@
from unittest.mock import Mock
from fastapi.testclient import TestClient
from frigate.models import Event, Recordings, ReviewSegment
from frigate.stats.emitter import StatsEmitter
from frigate.test.http_api.base_http_test import AuthTestClient, BaseTestHttp
from frigate.test.http_api.base_http_test import BaseTestHttp
class TestHttpApp(BaseTestHttp):
@@ -18,7 +20,7 @@ class TestHttpApp(BaseTestHttp):
stats.get_latest_stats.return_value = self.test_stats
app = super().create_app(stats)
with AuthTestClient(app) as client:
with TestClient(app) as client:
response = client.get("/stats")
response_json = response.json()
assert response_json == self.test_stats

View File

@@ -1,13 +1,14 @@
from unittest.mock import patch
from fastapi import HTTPException, Request
from fastapi.testclient import TestClient
from frigate.api.auth import (
get_allowed_cameras_for_filter,
get_current_user,
)
from frigate.models import Event, Recordings, ReviewSegment
from frigate.test.http_api.base_http_test import AuthTestClient, BaseTestHttp
from frigate.test.http_api.base_http_test import BaseTestHttp
class TestCameraAccessEventReview(BaseTestHttp):
@@ -15,17 +16,9 @@ class TestCameraAccessEventReview(BaseTestHttp):
super().setUp([Event, ReviewSegment, Recordings])
self.app = super().create_app()
# Mock get_current_user for all tests
async def mock_get_current_user(request: Request):
username = request.headers.get("remote-user")
role = request.headers.get("remote-role")
if not username or not role:
from fastapi.responses import JSONResponse
return JSONResponse(
content={"message": "No authorization headers."}, status_code=401
)
return {"username": username, "role": role}
# Mock get_current_user to return valid user for all tests
async def mock_get_current_user():
return {"username": "test_user", "role": "user"}
self.app.dependency_overrides[get_current_user] = mock_get_current_user
@@ -37,25 +30,21 @@ class TestCameraAccessEventReview(BaseTestHttp):
super().insert_mock_event("event1", camera="front_door")
super().insert_mock_event("event2", camera="back_door")
async def mock_cameras(request: Request):
return ["front_door"]
self.app.dependency_overrides[get_allowed_cameras_for_filter] = mock_cameras
with AuthTestClient(self.app) as client:
self.app.dependency_overrides[get_allowed_cameras_for_filter] = lambda: [
"front_door"
]
with TestClient(self.app) as client:
resp = client.get("/events")
assert resp.status_code == 200
ids = [e["id"] for e in resp.json()]
assert "event1" in ids
assert "event2" not in ids
async def mock_cameras(request: Request):
return [
"front_door",
"back_door",
]
self.app.dependency_overrides[get_allowed_cameras_for_filter] = mock_cameras
with AuthTestClient(self.app) as client:
self.app.dependency_overrides[get_allowed_cameras_for_filter] = lambda: [
"front_door",
"back_door",
]
with TestClient(self.app) as client:
resp = client.get("/events")
assert resp.status_code == 200
ids = [e["id"] for e in resp.json()]
@@ -65,25 +54,21 @@ class TestCameraAccessEventReview(BaseTestHttp):
super().insert_mock_review_segment("rev1", camera="front_door")
super().insert_mock_review_segment("rev2", camera="back_door")
async def mock_cameras(request: Request):
return ["front_door"]
self.app.dependency_overrides[get_allowed_cameras_for_filter] = mock_cameras
with AuthTestClient(self.app) as client:
self.app.dependency_overrides[get_allowed_cameras_for_filter] = lambda: [
"front_door"
]
with TestClient(self.app) as client:
resp = client.get("/review")
assert resp.status_code == 200
ids = [r["id"] for r in resp.json()]
assert "rev1" in ids
assert "rev2" not in ids
async def mock_cameras(request: Request):
return [
"front_door",
"back_door",
]
self.app.dependency_overrides[get_allowed_cameras_for_filter] = mock_cameras
with AuthTestClient(self.app) as client:
self.app.dependency_overrides[get_allowed_cameras_for_filter] = lambda: [
"front_door",
"back_door",
]
with TestClient(self.app) as client:
resp = client.get("/review")
assert resp.status_code == 200
ids = [r["id"] for r in resp.json()]
@@ -99,7 +84,7 @@ class TestCameraAccessEventReview(BaseTestHttp):
raise HTTPException(status_code=403, detail="Access denied")
with patch("frigate.api.event.require_camera_access", mock_require_allowed):
with AuthTestClient(self.app) as client:
with TestClient(self.app) as client:
resp = client.get("/events/event1")
assert resp.status_code == 200
assert resp.json()["id"] == "event1"
@@ -109,7 +94,7 @@ class TestCameraAccessEventReview(BaseTestHttp):
raise HTTPException(status_code=403, detail="Access denied")
with patch("frigate.api.event.require_camera_access", mock_require_disallowed):
with AuthTestClient(self.app) as client:
with TestClient(self.app) as client:
resp = client.get("/events/event1")
assert resp.status_code == 403
@@ -123,7 +108,7 @@ class TestCameraAccessEventReview(BaseTestHttp):
raise HTTPException(status_code=403, detail="Access denied")
with patch("frigate.api.review.require_camera_access", mock_require_allowed):
with AuthTestClient(self.app) as client:
with TestClient(self.app) as client:
resp = client.get("/review/rev1")
assert resp.status_code == 200
assert resp.json()["id"] == "rev1"
@@ -133,7 +118,7 @@ class TestCameraAccessEventReview(BaseTestHttp):
raise HTTPException(status_code=403, detail="Access denied")
with patch("frigate.api.review.require_camera_access", mock_require_disallowed):
with AuthTestClient(self.app) as client:
with TestClient(self.app) as client:
resp = client.get("/review/rev1")
assert resp.status_code == 403
@@ -141,25 +126,21 @@ class TestCameraAccessEventReview(BaseTestHttp):
super().insert_mock_event("event1", camera="front_door")
super().insert_mock_event("event2", camera="back_door")
async def mock_cameras(request: Request):
return ["front_door"]
self.app.dependency_overrides[get_allowed_cameras_for_filter] = mock_cameras
with AuthTestClient(self.app) as client:
self.app.dependency_overrides[get_allowed_cameras_for_filter] = lambda: [
"front_door"
]
with TestClient(self.app) as client:
resp = client.get("/events", params={"cameras": "all"})
assert resp.status_code == 200
ids = [e["id"] for e in resp.json()]
assert "event1" in ids
assert "event2" not in ids
async def mock_cameras(request: Request):
return [
"front_door",
"back_door",
]
self.app.dependency_overrides[get_allowed_cameras_for_filter] = mock_cameras
with AuthTestClient(self.app) as client:
self.app.dependency_overrides[get_allowed_cameras_for_filter] = lambda: [
"front_door",
"back_door",
]
with TestClient(self.app) as client:
resp = client.get("/events", params={"cameras": "all"})
assert resp.status_code == 200
ids = [e["id"] for e in resp.json()]
@@ -169,24 +150,20 @@ class TestCameraAccessEventReview(BaseTestHttp):
super().insert_mock_event("event1", camera="front_door")
super().insert_mock_event("event2", camera="back_door")
async def mock_cameras(request: Request):
return ["front_door"]
self.app.dependency_overrides[get_allowed_cameras_for_filter] = mock_cameras
with AuthTestClient(self.app) as client:
self.app.dependency_overrides[get_allowed_cameras_for_filter] = lambda: [
"front_door"
]
with TestClient(self.app) as client:
resp = client.get("/events/summary")
assert resp.status_code == 200
summary_list = resp.json()
assert len(summary_list) == 1
async def mock_cameras(request: Request):
return [
"front_door",
"back_door",
]
self.app.dependency_overrides[get_allowed_cameras_for_filter] = mock_cameras
with AuthTestClient(self.app) as client:
self.app.dependency_overrides[get_allowed_cameras_for_filter] = lambda: [
"front_door",
"back_door",
]
with TestClient(self.app) as client:
resp = client.get("/events/summary")
summary_list = resp.json()
assert len(summary_list) == 2

Some files were not shown because too many files have changed in this diff Show More