Mirror of https://github.com/blakeblackshear/frigate.git (synced 2026-01-09 22:08:21 -05:00)

Compare commits: dependabot ... media-sync (97 commits)
97 commits (SHA1):

a8b90834b0, f1a19128ed, a77b0a7c4b, 1c95eb2c39, 26744efb1e, aa0b082184, 7fb8d9b050, b8bc98a423, f9e06bb7b7, 7cc16161b3,
08311a6ee2, a08c044144, 5cced22f65, b962c95725, 0cbec25494, e1545a8db8, 51ee6f26e6, 430cebecda, af7af33645, 9822716df5,
66bc4e6f96, 23dbe098a5, c7e6c23f59, de968de878, b951cd9aef, 6286010014, 1a0111a683, 5716674703, dd22c8e2a3, 61ca688b14,
68f5eab839, 2e125dcab2, 83753fb2ce, 95368d39d1, 8cb1fafed3, b75c85114e, e1a1baa98d, 4098db2441, 71f4417314, 4002feb9f8,
7a399c842d, bc5d6cf1c2, dde02cadb2, 8ddcbf9a8d, 6b9b3778f5, 308e692732, 67e18eff94, 649ca49e55, fa6dda6735, 9cdc10008d,
4cf4520ea7, dfd837cfb0, 152e585206, 28b0ad782a, 644c7fa6b4, 88a8de0b1c, c136e5e8bd, 9ab78f496c, 8a360eecf8, 1f9669bbe5,
9d4aac2b8e, aa09132dfd, 24766ce427, 97b29d177a, 1a75251ffb, 048475e750, 1b57fb15a7, cd606ad240, de2144f158, e79ff9a079,
fe47620153, 8520ade5c4, 1c7ed45f21, 130c7c9eec, 26e630aa8c, a478da45a3, 694f72d577, 1e42cedf9e, a35a0fc8ba, 10b7ffe3d1,
42c6cfc9a2, e8bf570d21, cdbd9038b8, 1e05abb0ea, 70d1c2e041, f4d128b3ee, dd64ffca6c, fce1f78bdc, 69ca63d608, 111b83e8e3,
198733b729, 03d9fd6f19, f90a54f1d9, bbec4c4a60, 9fe16d7b17, dc886b11f3, 3bbe24f5f8
@@ -22,6 +22,7 @@ autotrack
autotracked
autotracker
autotracking
backchannel
balena
Beelink
BGRA
@@ -191,6 +192,7 @@ ONVIF
openai
opencv
openvino
overfitting
OWASP
paddleocr
paho
@@ -315,4 +317,4 @@ yolo
yolonas
yolox
zeep
zerolatency
.github/DISCUSSION_TEMPLATE/beta-support.yml (vendored, new file, 129 lines)

@@ -0,0 +1,129 @@
title: "[Beta Support]: "
labels: ["support", "triage", "beta"]
body:
  - type: markdown
    attributes:
      value: |
        Thank you for testing Frigate beta versions! Use this form for support with beta releases.

        **Note:** Beta versions may have incomplete features, known issues, or unexpected behavior. Please check the [release notes](https://github.com/blakeblackshear/frigate/releases) and [recent discussions][discussions] for known beta issues before submitting.

        Before submitting, read the [beta documentation][docs].

        [docs]: https://deploy-preview-19787--frigate-docs.netlify.app/
  - type: textarea
    id: description
    attributes:
      label: Describe the problem you are having
      description: Please be as detailed as possible. Include what you expected to happen vs what actually happened.
    validations:
      required: true
  - type: input
    id: version
    attributes:
      label: Beta Version
      description: Visible on the System page in the Web UI. Please include the full version including the build identifier (eg. 0.17.0-beta1)
      placeholder: "0.17.0-beta1"
    validations:
      required: true
  - type: dropdown
    id: issue-category
    attributes:
      label: Issue Category
      description: What area is your issue related to? This helps us understand the context.
      options:
        - Object Detection / Detectors
        - Hardware Acceleration
        - Configuration / Setup
        - WebUI / Frontend
        - Recordings / Storage
        - Notifications / Events
        - Integration (Home Assistant, etc)
        - Performance / Stability
        - Installation / Updates
        - Other
    validations:
      required: true
  - type: textarea
    id: config
    attributes:
      label: Frigate config file
      description: This will be automatically formatted into code, so no need for backticks. Remove any sensitive information like passwords or URLs.
      render: yaml
    validations:
      required: true
  - type: textarea
    id: frigatelogs
    attributes:
      label: Relevant Frigate log output
      description: Please copy and paste any relevant Frigate log output. Include logs before and after your exact error when possible. This will be automatically formatted into code, so no need for backticks.
      render: shell
    validations:
      required: true
  - type: textarea
    id: go2rtclogs
    attributes:
      label: Relevant go2rtc log output (if applicable)
      description: If your issue involves cameras, streams, or playback, please include go2rtc logs. Logs can be viewed via the Frigate UI, Docker, or the go2rtc dashboard. This will be automatically formatted into code, so no need for backticks.
      render: shell
  - type: dropdown
    id: install-method
    attributes:
      label: Install method
      options:
        - Home Assistant Add-on
        - Docker Compose
        - Docker CLI
        - Proxmox via Docker
        - Proxmox via TTeck Script
        - Windows WSL2
    validations:
      required: true
  - type: textarea
    id: docker
    attributes:
      label: docker-compose file or Docker CLI command
      description: This will be automatically formatted into code, so no need for backticks. Include relevant environment variables and device mappings.
      render: yaml
    validations:
      required: true
  - type: dropdown
    id: os
    attributes:
      label: Operating system
      options:
        - Home Assistant OS
        - Debian
        - Ubuntu
        - Other Linux
        - Proxmox
        - UNRAID
        - Windows
        - Other
    validations:
      required: true
  - type: input
    id: hardware
    attributes:
      label: CPU / GPU / Hardware
      description: Provide details about your hardware (e.g., Intel i5-9400, NVIDIA RTX 3060, Raspberry Pi 4, etc)
      placeholder: "Intel i7-10700, NVIDIA GTX 1660"
  - type: textarea
    id: screenshots
    attributes:
      label: Screenshots
      description: Screenshots of the issue, System metrics pages, or any relevant UI. Drag and drop or paste images directly.
  - type: textarea
    id: steps-to-reproduce
    attributes:
      label: Steps to reproduce
      description: If applicable, provide detailed steps to reproduce the issue
      placeholder: |
        1. Go to '...'
        2. Click on '...'
        3. See error
  - type: textarea
    id: other
    attributes:
      label: Any other information that may be helpful
      description: Additional context, related issues, when the problem started appearing, etc.
.github/DISCUSSION_TEMPLATE/report-a-bug.yml (vendored, 2 lines changed)

@@ -6,6 +6,8 @@ body:
      value: |
        Use this form to submit a reproducible bug in Frigate or Frigate's UI.

        **⚠️ If you are running a beta version (0.17.0-beta or similar), please use the [Beta Support template](https://github.com/blakeblackshear/frigate/discussions/new?category=beta-support) instead.**

        Before submitting your bug report, please ask the AI with the "Ask AI" button on the [official documentation site][ai] about your issue, [search the discussions][discussions], look at recent open and closed [pull requests][prs], read the [official Frigate documentation][docs], and read the [Frigate FAQ][faq] pinned at the Discussion page to see if your bug has already been fixed by the developers or reported by the community.

        **If you are unsure if your issue is actually a bug or not, please submit a support request first.**
.github/copilot-instructions.md (vendored, new file, 2 lines)

@@ -0,0 +1,2 @@
Never write strings in the frontend directly, always write to and reference the relevant translations file.
Always conform new and refactored code to the existing coding style in the project.
.github/workflows/ci.yml (vendored, 15 lines changed)

@@ -15,7 +15,7 @@ concurrency:
  cancel-in-progress: true

env:
  PYTHON_VERSION: 3.9
  PYTHON_VERSION: 3.11

jobs:
  amd64_build:
@@ -23,7 +23,7 @@ jobs:
    name: AMD64 Build
    steps:
      - name: Check out code
        uses: actions/checkout@v5
        uses: actions/checkout@v6
        with:
          persist-credentials: false
      - name: Set up QEMU and Buildx
@@ -47,7 +47,7 @@ jobs:
    name: ARM Build
    steps:
      - name: Check out code
        uses: actions/checkout@v5
        uses: actions/checkout@v6
        with:
          persist-credentials: false
      - name: Set up QEMU and Buildx
@@ -82,7 +82,7 @@ jobs:
    name: Jetson Jetpack 6
    steps:
      - name: Check out code
        uses: actions/checkout@v5
        uses: actions/checkout@v6
        with:
          persist-credentials: false
      - name: Set up QEMU and Buildx
@@ -113,7 +113,7 @@ jobs:
      - amd64_build
    steps:
      - name: Check out code
        uses: actions/checkout@v5
        uses: actions/checkout@v6
        with:
          persist-credentials: false
      - name: Set up QEMU and Buildx
@@ -136,7 +136,6 @@ jobs:
            *.cache-to=type=registry,ref=${{ steps.setup.outputs.cache-name }}-tensorrt,mode=max
      - name: AMD/ROCm general build
        env:
          AMDGPU: gfx
          HSA_OVERRIDE: 0
        uses: docker/bake-action@v6
        with:
@@ -155,7 +154,7 @@ jobs:
      - arm64_build
    steps:
      - name: Check out code
        uses: actions/checkout@v5
        uses: actions/checkout@v6
        with:
          persist-credentials: false
      - name: Set up QEMU and Buildx
@@ -180,7 +179,7 @@ jobs:
      - arm64_build
    steps:
      - name: Check out code
        uses: actions/checkout@v5
        uses: actions/checkout@v6
        with:
          persist-credentials: false
      - name: Set up QEMU and Buildx
.github/workflows/pull_request.yml (vendored, 10 lines changed)

@@ -16,7 +16,7 @@ jobs:
    name: Web - Lint
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v5
      - uses: actions/checkout@v6
        with:
          persist-credentials: false
      - uses: actions/setup-node@master
@@ -32,7 +32,7 @@ jobs:
    name: Web - Test
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v5
      - uses: actions/checkout@v6
        with:
          persist-credentials: false
      - uses: actions/setup-node@master
@@ -52,11 +52,11 @@ jobs:
    name: Python Checks
    steps:
      - name: Check out the repository
        uses: actions/checkout@v5
        uses: actions/checkout@v6
        with:
          persist-credentials: false
      - name: Set up Python ${{ env.DEFAULT_PYTHON }}
        uses: actions/setup-python@v6.1.0
        uses: actions/setup-python@v5.4.0
        with:
          python-version: ${{ env.DEFAULT_PYTHON }}
      - name: Install requirements
@@ -75,7 +75,7 @@ jobs:
    name: Python Tests
    steps:
      - name: Check out code
        uses: actions/checkout@v5
        uses: actions/checkout@v6
        with:
          persist-credentials: false
      - uses: actions/setup-node@master
.github/workflows/release.yml (vendored, 2 lines changed)

@@ -10,7 +10,7 @@ jobs:
    runs-on: ubuntu-latest

    steps:
      - uses: actions/checkout@v5
      - uses: actions/checkout@v6
        with:
          persist-credentials: false
      - id: lowercaseRepo
Makefile (2 lines changed)

@@ -1,7 +1,7 @@
default_target: local

COMMIT_HASH := $(shell git log -1 --pretty=format:"%h"|tail -1)
VERSION = 0.17.0
VERSION = 0.18.0
IMAGE_REPO ?= ghcr.io/blakeblackshear/frigate
GITHUB_REF_NAME ?= $(shell git rev-parse --abbrev-ref HEAD)
BOARDS= #Initialized empty
README_CN.md (53 lines changed)

@@ -1,28 +1,31 @@
<p align="center">
  <img align="center" alt="logo" src="docs/static/img/frigate.png">
  <img align="center" alt="logo" src="docs/static/img/branding/frigate.png">
</p>

# Frigate - 一个具有实时目标检测的本地NVR
# Frigate NVR™ - 一个具有实时目标检测的本地 NVR

[English](https://github.com/blakeblackshear/frigate) | \[简体中文\]

[](https://opensource.org/licenses/MIT)

<a href="https://hosted.weblate.org/engage/frigate-nvr/-/zh_Hans/">
  <img src="https://hosted.weblate.org/widget/frigate-nvr/-/zh_Hans/svg-badge.svg" alt="翻译状态" />
</a>

一个完整的本地网络视频录像机(NVR),专为[Home Assistant](https://www.home-assistant.io)设计,具备AI物体检测功能。使用OpenCV和TensorFlow在本地为IP摄像头执行实时物体检测。
一个完整的本地网络视频录像机(NVR),专为[Home Assistant](https://www.home-assistant.io)设计,具备 AI 目标/物体检测功能。使用 OpenCV 和 TensorFlow 在本地为 IP 摄像头执行实时物体检测。

强烈推荐使用GPU或者AI加速器(例如[Google Coral加速器](https://coral.ai/products/) 或者 [Hailo](https://hailo.ai/))。它们的性能甚至超过目前的顶级CPU,并且可以以极低的耗电实现更优的性能。
- 通过[自定义组件](https://github.com/blakeblackshear/frigate-hass-integration)与Home Assistant紧密集成
- 设计上通过仅在必要时和必要地点寻找物体,最大限度地减少资源使用并最大化性能
强烈推荐使用 GPU 或者 AI 加速器(例如[Google Coral 加速器](https://coral.ai/products/) 或者 [Hailo](https://hailo.ai/)等)。它们的运行效率远远高于现在的顶级 CPU,并且功耗也极低。

- 通过[自定义组件](https://github.com/blakeblackshear/frigate-hass-integration)与 Home Assistant 紧密集成
- 设计上通过仅在必要时和必要地点寻找目标,最大限度地减少资源使用并最大化性能
- 大量利用多进程处理,强调实时性而非处理每一帧
- 使用非常低开销的运动检测来确定运行物体检测的位置
- 使用TensorFlow进行物体检测,运行在单独的进程中以达到最大FPS
- 通过MQTT进行通信,便于集成到其他系统中
- 使用非常低开销的画面变动检测(也叫运动检测)来确定运行目标检测的位置
- 使用 TensorFlow 进行目标检测,并运行在单独的进程中以达到最大 FPS
- 通过 MQTT 进行通信,便于集成到其他系统中
- 根据检测到的物体设置保留时间进行视频录制
- 24/7全天候录制
- 通过RTSP重新流传输以减少摄像头的连接数
- 支持WebRTC和MSE,实现低延迟的实时观看
- 24/7 全天候录制
- 通过 RTSP 重新流传输以减少摄像头的连接数
- 支持 WebRTC 和 MSE,实现低延迟的实时观看

## 社区中文翻译文档

@@ -32,39 +35,55 @@

如果您想通过捐赠支持开发,请使用 [Github Sponsors](https://github.com/sponsors/blakeblackshear)。

## 协议

本项目采用 **MIT 许可证**授权。
**代码部分**:本代码库中的源代码、配置文件和文档均遵循 [MIT 许可证](LICENSE)。您可以自由使用、修改和分发这些代码,但必须保留原始版权声明。

**商标部分**:“Frigate”名称、“Frigate NVR”品牌以及 Frigate 的 Logo 为 **Frigate LLC 的商标**,**不在** MIT 许可证覆盖范围内。
有关品牌资产的规范使用详情,请参阅我们的[《商标政策》](TRADEMARK.md)。

## 截图

### 实时监控面板

<div>
<img width="800" alt="实时监控面板" src="https://github.com/blakeblackshear/frigate/assets/569905/5e713cb9-9db5-41dc-947a-6937c3bc376e">
</div>

### 简单的核查工作流程

<div>
<img width="800" alt="简单的审查工作流程" src="https://github.com/blakeblackshear/frigate/assets/569905/6fed96e8-3b18-40e5-9ddc-31e6f3c9f2ff">
</div>

### 多摄像头可按时间轴查看

<div>
<img width="800" alt="多摄像头可按时间轴查看" src="https://github.com/blakeblackshear/frigate/assets/569905/d6788a15-0eeb-4427-a8d4-80b93cae3d74">
</div>

### 内置遮罩和区域编辑器

<div>
<img width="800" alt="内置遮罩和区域编辑器" src="https://github.com/blakeblackshear/frigate/assets/569905/d7885fc3-bfe6-452f-b7d0-d957cb3e31f5">
</div>

## 翻译

我们使用 [Weblate](https://hosted.weblate.org/projects/frigate-nvr/) 平台提供翻译支持,欢迎参与进来一起完善。

## 非官方中文讨论社区

欢迎加入中文讨论QQ群:[1043861059](https://qm.qq.com/q/7vQKsTmSz)
欢迎加入中文讨论 QQ 群:[1043861059](https://qm.qq.com/q/7vQKsTmSz)

Bilibili:https://space.bilibili.com/3546894915602564

## 中文社区赞助商

[](https://edgeone.ai/zh?from=github)
本项目 CDN 加速及安全防护由 Tencent EdgeOne 赞助

---

**Copyright © 2025 Frigate LLC.**
@@ -21,7 +21,7 @@ onvif-zeep-async == 4.0.*
paho-mqtt == 2.1.*
pandas == 2.2.*
peewee == 3.17.*
peewee_migrate == 1.13.*
peewee_migrate == 1.14.*
psutil == 7.1.*
pydantic == 2.10.*
git+https://github.com/fbcotter/py3nvml#egg=py3nvml
@@ -81,3 +81,5 @@ librosa==0.11.*
soundfile==0.13.*
# DeGirum detector
degirum == 0.16.*
# Memory profiling
memray == 1.15.*
@@ -3,7 +3,6 @@
# https://askubuntu.com/questions/972516/debian-frontend-environment-variable
ARG DEBIAN_FRONTEND=noninteractive
ARG ROCM=1
ARG AMDGPU=gfx900
ARG HSA_OVERRIDE_GFX_VERSION
ARG HSA_OVERRIDE

@@ -11,11 +10,10 @@ ARG HSA_OVERRIDE
FROM wget AS rocm

ARG ROCM
ARG AMDGPU

RUN apt update -qq && \
    apt install -y wget gpg && \
    wget -O rocm.deb https://repo.radeon.com/amdgpu-install/7.1/ubuntu/jammy/amdgpu-install_7.1.70100-1_all.deb && \
    wget -O rocm.deb https://repo.radeon.com/amdgpu-install/7.1.1/ubuntu/jammy/amdgpu-install_7.1.1.70101-1_all.deb && \
    apt install -y ./rocm.deb && \
    apt update && \
    apt install -qq -y rocm
@@ -36,7 +34,10 @@ FROM deps AS deps-prelim
COPY docker/rocm/debian-backports.sources /etc/apt/sources.list.d/debian-backports.sources
RUN apt-get update && \
    apt-get install -y libnuma1 && \
    apt-get install -qq -y -t bookworm-backports mesa-va-drivers mesa-vulkan-drivers
    apt-get install -qq -y -t bookworm-backports mesa-va-drivers mesa-vulkan-drivers && \
    # Install C++ standard library headers for HIPRTC kernel compilation fallback
    apt-get install -qq -y libstdc++-12-dev && \
    rm -rf /var/lib/apt/lists/*

WORKDIR /opt/frigate
COPY --from=rootfs / /
@@ -54,12 +55,14 @@ RUN pip3 uninstall -y onnxruntime \
FROM scratch AS rocm-dist

ARG ROCM
ARG AMDGPU

COPY --from=rocm /opt/rocm-$ROCM/bin/rocminfo /opt/rocm-$ROCM/bin/migraphx-driver /opt/rocm-$ROCM/bin/
COPY --from=rocm /opt/rocm-$ROCM/share/miopen/db/*$AMDGPU* /opt/rocm-$ROCM/share/miopen/db/
COPY --from=rocm /opt/rocm-$ROCM/share/miopen/db/*gfx908* /opt/rocm-$ROCM/share/miopen/db/
COPY --from=rocm /opt/rocm-$ROCM/lib/rocblas/library/*$AMDGPU* /opt/rocm-$ROCM/lib/rocblas/library/
# Copy MIOpen database files for gfx10xx and gfx11xx only (RDNA2/RDNA3)
COPY --from=rocm /opt/rocm-$ROCM/share/miopen/db/*gfx10* /opt/rocm-$ROCM/share/miopen/db/
COPY --from=rocm /opt/rocm-$ROCM/share/miopen/db/*gfx11* /opt/rocm-$ROCM/share/miopen/db/
# Copy rocBLAS library files for gfx10xx and gfx11xx only
COPY --from=rocm /opt/rocm-$ROCM/lib/rocblas/library/*gfx10* /opt/rocm-$ROCM/lib/rocblas/library/
COPY --from=rocm /opt/rocm-$ROCM/lib/rocblas/library/*gfx11* /opt/rocm-$ROCM/lib/rocblas/library/
COPY --from=rocm /opt/rocm-dist/ /

#######################################################################
@@ -1,8 +1,5 @@
variable "AMDGPU" {
  default = "gfx900"
}
variable "ROCM" {
  default = "7.1"
  default = "7.1.1"
}
variable "HSA_OVERRIDE_GFX_VERSION" {
  default = ""
@@ -38,7 +35,6 @@ target rocm {
  }
  platforms = ["linux/amd64"]
  args = {
    AMDGPU = AMDGPU,
    ROCM = ROCM,
    HSA_OVERRIDE_GFX_VERSION = HSA_OVERRIDE_GFX_VERSION,
    HSA_OVERRIDE = HSA_OVERRIDE
@@ -1,53 +1,15 @@
BOARDS += rocm

# AMD/ROCm is chunky so we build couple of smaller images for specific chipsets
ROCM_CHIPSETS:=gfx900:9.0.0 gfx1030:10.3.0 gfx1100:11.0.0

local-rocm: version
	$(foreach chipset,$(ROCM_CHIPSETS), \
		AMDGPU=$(word 1,$(subst :, ,$(chipset))) \
		HSA_OVERRIDE_GFX_VERSION=$(word 2,$(subst :, ,$(chipset))) \
		HSA_OVERRIDE=1 \
		docker buildx bake --file=docker/rocm/rocm.hcl rocm \
		--set rocm.tags=frigate:latest-rocm-$(word 1,$(subst :, ,$(chipset))) \
		--load \
		&&) true

	unset HSA_OVERRIDE_GFX_VERSION && \
	HSA_OVERRIDE=0 \
	AMDGPU=gfx \
	docker buildx bake --file=docker/rocm/rocm.hcl rocm \
	--set rocm.tags=frigate:latest-rocm \
	--load

build-rocm: version
	$(foreach chipset,$(ROCM_CHIPSETS), \
		AMDGPU=$(word 1,$(subst :, ,$(chipset))) \
		HSA_OVERRIDE_GFX_VERSION=$(word 2,$(subst :, ,$(chipset))) \
		HSA_OVERRIDE=1 \
		docker buildx bake --file=docker/rocm/rocm.hcl rocm \
		--set rocm.tags=$(IMAGE_REPO):${GITHUB_REF_NAME}-$(COMMIT_HASH)-rocm-$(chipset) \
		&&) true

	unset HSA_OVERRIDE_GFX_VERSION && \
	HSA_OVERRIDE=0 \
	AMDGPU=gfx \
	docker buildx bake --file=docker/rocm/rocm.hcl rocm \
	--set rocm.tags=$(IMAGE_REPO):${GITHUB_REF_NAME}-$(COMMIT_HASH)-rocm

push-rocm: build-rocm
	$(foreach chipset,$(ROCM_CHIPSETS), \
		AMDGPU=$(word 1,$(subst :, ,$(chipset))) \
		HSA_OVERRIDE_GFX_VERSION=$(word 2,$(subst :, ,$(chipset))) \
		HSA_OVERRIDE=1 \
		docker buildx bake --file=docker/rocm/rocm.hcl rocm \
		--set rocm.tags=$(IMAGE_REPO):${GITHUB_REF_NAME}-$(COMMIT_HASH)-rocm-$(chipset) \
		--push \
		&&) true

	unset HSA_OVERRIDE_GFX_VERSION && \
	HSA_OVERRIDE=0 \
	AMDGPU=gfx \
	docker buildx bake --file=docker/rocm/rocm.hcl rocm \
	--set rocm.tags=$(IMAGE_REPO):${GITHUB_REF_NAME}-$(COMMIT_HASH)-rocm \
	--push
@@ -75,7 +75,13 @@ audio:

### Audio Transcription

Frigate supports fully local audio transcription using either `sherpa-onnx` or OpenAI's open-source Whisper models via `faster-whisper`. To enable transcription, enable it in your config. Note that audio detection must also be enabled as described above in order to use audio transcription features.
Frigate supports fully local audio transcription using either `sherpa-onnx` or OpenAI's open-source Whisper models via `faster-whisper`. The goal of this feature is to support Semantic Search for `speech` audio events. Frigate is not intended to act as a continuous, fully-automatic speech transcription service — automatically transcribing all speech (or queuing many audio events for transcription) requires substantial CPU (or GPU) resources and is impractical on most systems. For this reason, transcriptions for events are initiated manually from the UI or the API rather than being run continuously in the background.

Transcription accuracy also depends heavily on the quality of your camera's microphone and recording conditions. Many cameras use inexpensive microphones, and distance to the speaker, low audio bitrate, or background noise can significantly reduce transcription quality. If you need higher accuracy, more robust long-running queues, or large-scale automatic transcription, consider using the HTTP API in combination with an automation platform and a cloud transcription service.

#### Configuration

To enable transcription, enable it in your config. Note that audio detection must also be enabled as described above in order to use audio transcription features.

```yaml
audio_transcription:
```

@@ -151,3 +157,21 @@ Only one `speech` event may be transcribed at a time. Frigate does not automatic

:::

Recorded `speech` events will always use a `whisper` model, regardless of the `model_size` config setting. Without a supported Nvidia GPU, generating transcriptions for longer `speech` events may take a fair amount of time, so be patient.

#### FAQ

1. Why doesn't Frigate automatically transcribe all `speech` events?

   Frigate does not implement a queue mechanism for speech transcription, and adding one is not trivial. A proper queue would need backpressure, prioritization, memory/disk buffering, retry logic, crash recovery, and safeguards to prevent unbounded growth when events outpace processing. That's a significant amount of complexity for a feature that, in most real-world environments, would mostly just churn through low-value noise.

   Because transcription is **serialized (one event at a time)** and speech events can be generated far faster than they can be processed, an auto-transcribe toggle would very quickly create an ever-growing backlog and degrade core functionality. For the amount of engineering and risk involved, it adds **very little practical value** for the majority of deployments, which are often on low-powered, edge hardware.

   If you hear speech that's actually important and worth saving/indexing for the future, **just press the transcribe button in Explore** on that specific `speech` event - that keeps things explicit, reliable, and under your control.

   Other options are being considered for future versions of Frigate to add transcription options that support external `whisper` Docker containers. A single transcription service could then be shared by Frigate and other applications (for example, Home Assistant Voice), and run on more powerful machines when available.

2. Why don't you save live transcription text and use that for `speech` events?

   There's no guarantee that a `speech` event is even created from the exact audio that went through the transcription model. Live transcription and `speech` event creation are **separate, asynchronous processes**. Even when both are correctly configured, trying to align the **precise start and end time of a speech event** with whatever audio the model happened to be processing at that moment is unreliable.

   Automatically persisting that data would often result in **misaligned, partial, or irrelevant transcripts**, while still incurring all of the CPU, storage, and privacy costs of transcription. That's why Frigate treats transcription as an **explicit, user-initiated action** rather than an automatic side-effect of every `speech` event.
@@ -270,3 +270,42 @@ To use role-based access control, you must connect to Frigate via the **authenti
1. Log in as an **admin** user via port `8971`.
2. Navigate to **Settings > Users**.
3. Edit a user's role by selecting **admin** or **viewer**.

## API Authentication Guide

### Getting a Bearer Token

To use the Frigate API, you need to authenticate first. Follow these steps to obtain a Bearer token:

#### 1. Login

Make a POST request to `/login` with your credentials:

```bash
curl -i -X POST https://frigate_ip:8971/api/login \
  -H "Content-Type: application/json" \
  -d '{"user": "admin", "password": "your_password"}'
```

:::note

You may need to include `-k` in the argument list in these steps (eg: `curl -k -i -X POST ...`) if your Frigate instance is using a self-signed certificate.

:::

The response will contain a cookie with the JWT token.

#### 2. Using the Bearer Token

Once you have the token, include it in the Authorization header for subsequent requests:

```bash
curl -H "Authorization: Bearer <your_token>" https://frigate_ip:8971/api/profile
```

#### 3. Token Lifecycle

- Tokens are valid for the configured session length
- Tokens are automatically refreshed when you visit the `/auth` endpoint
- Tokens are invalidated when the user's password is changed
- Use `/logout` to clear your session cookie
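
A minimal end-to-end sketch of the flow above, using a curl cookie jar so the `Set-Cookie` header does not have to be parsed by hand. The cookie name can vary by version, so check `cookies.txt` after logging in:

```bash
# 1. Log in and store the session cookie in a local cookie jar
curl -k -c cookies.txt -X POST https://frigate_ip:8971/api/login \
  -H "Content-Type: application/json" \
  -d '{"user": "admin", "password": "your_password"}'

# 2. Reuse the stored cookie for subsequent requests
curl -k -b cookies.txt https://frigate_ip:8971/api/profile

# 3. Or pull the token value out of the cookie jar (in Netscape cookie-file
#    format, field 6 is the cookie name and field 7 its value) and send it
#    as a Bearer header instead
TOKEN=$(awk '$6 ~ /token/ {print $7}' cookies.txt)
curl -k -H "Authorization: Bearer $TOKEN" https://frigate_ip:8971/api/profile
```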
@@ -60,13 +60,13 @@ Choose one or more cameras and draw a rectangle over the area of interest for ea

### Step 3: Assign Training Examples

The system will automatically generate example images from your camera feeds. You'll be guided through each class one at a time to select which images represent that state.
The system will automatically generate example images from your camera feeds. You'll be guided through each class one at a time to select which images represent that state. It's not strictly required to select all images you see. If a state is missing from the samples, you can train it from the Recent tab later.

**Important**: All images must be assigned to a state before training can begin. This includes images that may not be optimal, such as when people temporarily block the view, sun glare is present, or other distractions occur. Assign these images to the state that is actually present (based on what you know the state to be), not based on the distraction. This training helps the model correctly identify the state even when such conditions occur during inference.

Once all images are assigned, training will begin automatically.
Once some images are assigned, training will begin automatically.

### Improving the Model

- **Problem framing**: Keep classes visually distinct and state-focused (e.g., `open`, `closed`, `unknown`). Avoid combining object identity with state in a single model unless necessary.
- **Data collection**: Use the model's Recent Classifications tab to gather balanced examples across times of day and weather.
- **Data collection**: Use the model's Recent Classifications tab to gather balanced examples across times of day and weather.
- **When to train**: Focus on cases where the model is entirely incorrect or flips between states when it should not. There's no need to train additional images when the model is already working consistently.
- **Selecting training images**: Images scoring below 100% due to new conditions (e.g., first snow of the year, seasonal changes) or variations (e.g., objects temporarily in view, insects at night) are good candidates for training, as they represent scenarios different from the default state. Training these lower-scoring images that differ from existing training data helps prevent overfitting. Avoid training large quantities of images that look very similar, especially if they already score 100% as this can lead to overfitting.
@@ -111,3 +111,9 @@ review:
## Review Reports

Along with individual review item summaries, Generative AI provides the ability to request a report of a given time period. For example, while on vacation you can get a daily report of any suspicious activity or other concerns that may require review.

### Requesting Reports Programmatically

Review reports can be requested via the [API](/integrations/api#review-summarization) by sending a POST request to `/api/review/summarize/start/{start_ts}/end/{end_ts}` with Unix timestamps.

For Home Assistant users, there is a built-in service (`frigate.review_summarize`) that makes it easy to request review reports as part of automations or scripts. This allows you to automatically generate daily summaries, vacation reports, or custom time period reports based on your specific needs.
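
As a sketch, a report covering the previous 24 hours could be requested like this (adjust the host, port, and authentication to your setup):

```bash
# Request a review report for the last 24 hours using Unix timestamps
END_TS=$(date +%s)
START_TS=$((END_TS - 86400))

curl -k -b cookies.txt -X POST \
  "https://frigate_ip:8971/api/review/summarize/start/${START_TS}/end/${END_TS}"
```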
@@ -107,23 +107,23 @@ Fine-tune the LPR feature using these optional parameters at the global level of

### Normalization Rules

- **`replace_rules`**: List of regex replacement rules to normalize detected plates. These rules are applied sequentially. Each rule must have a `pattern` (which can be a string or a regex, prepended by `r`) and `replacement` (a string, which also supports [backrefs](https://docs.python.org/3/library/re.html#re.sub) like `\1`). These rules are useful for dealing with common OCR issues like noise characters, separators, or confusions (e.g., 'O'→'0').
- **`replace_rules`**: List of regex replacement rules to normalize detected plates. These rules are applied sequentially and are applied _before_ the `format` regex, if specified. Each rule must have a `pattern` (which can be a string or a regex) and `replacement` (a string, which also supports [backrefs](https://docs.python.org/3/library/re.html#re.sub) like `\1`). These rules are useful for dealing with common OCR issues like noise characters, separators, or confusions (e.g., 'O'→'0').

These rules must be defined at the global level of your `lpr` config.

```yaml
lpr:
  replace_rules:
    - pattern: r'[%#*?]' # Remove noise symbols
    - pattern: "[%#*?]" # Remove noise symbols
      replacement: ""
    - pattern: r'[= ]' # Normalize = or space to dash
    - pattern: "[= ]" # Normalize = or space to dash
      replacement: "-"
    - pattern: "O" # Swap 'O' to '0' (common OCR error)
      replacement: "0"
    - pattern: r'I' # Swap 'I' to '1'
    - pattern: "I" # Swap 'I' to '1'
      replacement: "1"
    - pattern: r'(\w{3})(\w{3})' # Split 6 chars into groups (e.g., ABC123 → ABC-123)
      replacement: r'\1-\2'
    - pattern: '(\w{3})(\w{3})' # Split 6 chars into groups (e.g., ABC123 → ABC-123) - use single quotes to preserve backslashes
      replacement: '\1-\2'
```

- Rules fire in order: In the example above: clean noise first, then separators, then swaps, then splits.
@@ -374,9 +374,19 @@ Use `match_distance` to allow small character mismatches. Alternatively, define

Start with ["Why isn't my license plate being detected and recognized?"](#why-isnt-my-license-plate-being-detected-and-recognized). If you are still having issues, work through these steps.

1. Enable debug logs to see exactly what Frigate is doing.
1. Start with a simplified LPR config.

   - Enable debug logs for LPR by adding `frigate.data_processing.common.license_plate: debug` to your `logger` configuration. These logs are _very_ verbose, so only keep this enabled when necessary.
   - Remove or comment out everything in your LPR config, including `min_area`, `min_plate_length`, `format`, `known_plates`, or `enhancement` values so that the only values left are `enabled` and `debug_save_plates`. This will run LPR with Frigate's default values.

   ```yaml
   lpr:
     enabled: true
     debug_save_plates: true
   ```

2. Enable debug logs to see exactly what Frigate is doing.

   - Enable debug logs for LPR by adding `frigate.data_processing.common.license_plate: debug` to your `logger` configuration. These logs are _very_ verbose, so only keep this enabled when necessary. Restart Frigate after this change.

   ```yaml
   logger:
@@ -385,7 +395,7 @@ Start with ["Why isn't my license plate being detected and recognized?"](#why-is
       frigate.data_processing.common.license_plate: debug
   ```

2. Ensure your plates are being _detected_.
3. Ensure your plates are being _detected_.

   If you are using a Frigate+ or `license_plate` detecting model:

@@ -398,7 +408,7 @@ Start with ["Why isn't my license plate being detected and recognized?"](#why-is
   - Watch the debug logs for messages from the YOLOv9 plate detector.
   - You may need to adjust your `detection_threshold` if your plates are not being detected.

3. Ensure the characters on detected plates are being _recognized_.
4. Ensure the characters on detected plates are being _recognized_.

   - Enable `debug_save_plates` to save images of detected text on plates to the clips directory (`/media/frigate/clips/lpr`). Ensure these images are readable and the text is clear.
   - Watch the debug view to see plates recognized in real-time. For non-dedicated LPR cameras, the `car` or `motorcycle` label will change to the recognized plate when LPR is enabled and working.
@@ -178,6 +178,8 @@ To use the Reolink Doorbell with two way talk, you should use the [recommended R

As a starting point to check compatibility for your camera, view the list of cameras supported for two-way talk on the [go2rtc repository](https://github.com/AlexxIT/go2rtc?tab=readme-ov-file#two-way-audio). For cameras in the category `ONVIF Profile T`, you can use the [ONVIF Conformant Products Database](https://www.onvif.org/conformant-products/)'s FeatureList to check for the presence of `AudioOutput`. A camera that supports `ONVIF Profile T` _usually_ supports this, but due to inconsistent support, a camera that explicitly lists this feature may still not work. If no entry for your camera exists on the database, it is recommended not to buy it or to consult with the manufacturer's support on the feature availability.

To prevent go2rtc from blocking other applications from accessing your camera's two-way audio, you must configure your stream with `#backchannel=0`. See [preventing go2rtc from blocking two-way audio](/configuration/restream#two-way-talk-restream) in the restream documentation.

### Streaming options on camera group dashboards

Frigate provides a dialog in the Camera Group Edit pane with several options for streaming on a camera group's dashboard. These settings are _per device_ and are saved in your device's local storage.

@@ -232,7 +234,7 @@ When your browser runs into problems playing back your camera streams, it will l
- **mse-decode**

  - What it means: The browser reported a decoding error while trying to play the stream, which usually is a result of a codec incompatibility or corrupted frames.
  - What to try: Ensure your camera/restream is using H.264 video and AAC audio (these are the most compatible). If your camera uses a non-standard audio codec, configure `go2rtc` to transcode the stream to AAC. Try another browser (some browsers have stricter MSE/codec support) and, for iPhone, ensure you're on iOS 17.1 or newer.
  - What to try: Check the browser console for the supported and negotiated codecs. Ensure your camera/restream is using H.264 video and AAC audio (these are the most compatible). If your camera uses a non-standard audio codec, configure `go2rtc` to transcode the stream to AAC. Try another browser (some browsers have stricter MSE/codec support) and, for iPhone, ensure you're on iOS 17.1 or newer.

  - Possible console messages from the player code:
@@ -28,7 +28,6 @@ To create a poly mask:
5. Click the plus icon under the type of mask or zone you would like to create
6. Click on the camera's latest image to create the points for a masked area. Click the first point again to close the polygon.
7. When you've finished creating your mask, press Save.
8. Restart Frigate to apply your changes.

Your config file will be updated with the relative coordinates of the mask/zone:
@@ -13,7 +13,7 @@ Frigate supports multiple different detectors that work on different types of ha

**Most Hardware**

- [Coral EdgeTPU](#edge-tpu-detector): The Google Coral EdgeTPU is available in USB and m.2 format allowing for a wide range of compatibility with devices.
- [Coral EdgeTPU](#edge-tpu-detector): The Google Coral EdgeTPU is available in USB, Mini PCIe, and m.2 formats allowing for a wide range of compatibility with devices.
- [Hailo](#hailo-8): The Hailo8 and Hailo8L AI Acceleration module is available in m.2 format with a HAT for RPi devices, offering a wide range of compatibility with devices.
- <CommunityBadge /> [MemryX](#memryx-mx3): The MX3 Acceleration module is available in m.2 format, offering broad compatibility across various platforms.
- <CommunityBadge /> [DeGirum](#degirum): Service for using hardware devices in the cloud or locally. Hardware and models provided on the cloud on [their website](https://hub.degirum.com).
@@ -69,12 +69,10 @@ Frigate provides the following builtin detector types: `cpu`, `edgetpu`, `hailo8

## Edge TPU Detector

The Edge TPU detector type runs a TensorFlow Lite model utilizing the Google Coral delegate for hardware acceleration. To configure an Edge TPU detector, set the `"type"` attribute to `"edgetpu"`.
The Edge TPU detector type runs TensorFlow Lite models utilizing the Google Coral delegate for hardware acceleration. To configure an Edge TPU detector, set the `"type"` attribute to `"edgetpu"`.

The Edge TPU device can be specified using the `"device"` attribute according to the [Documentation for the TensorFlow Lite Python API](https://coral.ai/docs/edgetpu/multiple-edgetpu/#using-the-tensorflow-lite-python-api). If not set, the delegate will use the first device it finds.

A TensorFlow Lite model is provided in the container at `/edgetpu_model.tflite` and is used by this detector type by default. To provide your own model, bind mount the file into the container and provide the path with `model.path`.

:::tip

See [common Edge TPU troubleshooting steps](/troubleshooting/edgetpu) if the Edge TPU is not detected.
@@ -146,6 +144,44 @@ detectors:
```yaml
    device: pci
```

### EdgeTPU Supported Models

| Model                                 | Notes                                       |
| ------------------------------------- | ------------------------------------------- |
| [MobileNet v2](#ssdlite-mobilenet-v2) | Default model                               |
| [YOLOv9](#yolo-v9)                    | More accurate but slower than default model |

#### SSDLite MobileNet v2

A TensorFlow Lite model is provided in the container at `/edgetpu_model.tflite` and is used by this detector type by default. To provide your own model, bind mount the file into the container and provide the path with `model.path`.

#### YOLO v9

[YOLOv9](https://github.com/dbro/frigate-detector-edgetpu-yolo9/releases/download/v1.0/yolov9-s-relu6-best_320_int8_edgetpu.tflite) models that are compiled for Tensorflow Lite and properly quantized are supported, but not included by default. To provide your own model, bind mount the file into the container and provide the path with `model.path`. Note that the model may require a custom label file (eg. [use this 17 label file](https://raw.githubusercontent.com/dbro/frigate-detector-edgetpu-yolo9/refs/heads/main/labels-coco17.txt) for the model linked above.)

<details>
<summary>YOLOv9 Setup & Config</summary>

After placing the downloaded files for the tflite model and labels in your config folder, you can use the following configuration:

```yaml
detectors:
  coral:
    type: edgetpu
    device: usb

model:
  model_type: yolo-generic
  width: 320 # <--- should match the imgsize of the model, typically 320
  height: 320 # <--- should match the imgsize of the model, typically 320
  path: /config/model_cache/yolov9-s-relu6-best_320_int8_edgetpu.tflite
  labelmap_path: /config/labels-coco17.txt
```

Note that the labelmap uses a subset of the complete COCO label set that has only 17 objects.

</details>

---

## Hailo-8
@@ -364,7 +400,7 @@ The YOLO detector has been designed to support YOLOv3, YOLOv4, YOLOv7, and YOLOv

:::warning

If you are using a Frigate+ YOLOv9 model, you should not define any of the below `model` parameters in your config except for `path`. See [the Frigate+ model docs](/plus/first_model#step-3-set-your-model-id-in-the-config) for more information on setting up your model.
If you are using a Frigate+ model, you should not define any of the below `model` parameters in your config except for `path`. See [the Frigate+ model docs](/plus/first_model#step-3-set-your-model-id-in-the-config) for more information on setting up your model.

:::

@@ -704,7 +740,7 @@ The YOLO detector has been designed to support YOLOv3, YOLOv4, YOLOv7, and YOLOv

:::warning

If you are using a Frigate+ YOLOv9 model, you should not define any of the below `model` parameters in your config except for `path`. See [the Frigate+ model docs](/plus/first_model#step-3-set-your-model-id-in-the-config) for more information on setting up your model.
If you are using a Frigate+ model, you should not define any of the below `model` parameters in your config except for `path`. See [the Frigate+ model docs](/plus/first_model#step-3-set-your-model-id-in-the-config) for more information on setting up your model.

:::
@@ -139,7 +139,13 @@ record:

:::tip

When using `hwaccel_args` globally, hardware encoding is used for timelapse generation. The encoder determines its own behavior so the resulting file size may be undesirably large.
When using `hwaccel_args`, hardware encoding is used for timelapse generation. This setting can be overridden for a specific camera (e.g., when camera resolution exceeds hardware encoder limits); set `cameras.<camera>.record.export.hwaccel_args` with the appropriate settings. Using an unrecognized value or empty string will fall back to software encoding (libx264).

:::

:::tip

The encoder determines its own behavior so the resulting file size may be undesirably large.
To reduce the output file size the ffmpeg parameter `-qp n` can be utilized (where `n` stands for the value of the quantisation parameter). The value can be adjusted to get an acceptable tradeoff between quality and file size for the given scenario.

:::
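
As a rough sketch (not the exact command Frigate runs internally), you can experiment with `-qp` values on an existing export before adding the chosen value to `timelapse_args`:

```bash
# Try a quantisation parameter on an exported clip to judge the quality/size
# tradeoff; lower values mean higher quality and larger files.
ffmpeg -i export.mp4 -vf "setpts=0.04*PTS" -r 30 -c:v libx264 -qp 28 -an timelapse_test.mp4
```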
@@ -148,19 +154,16 @@ To reduce the output file size the ffmpeg parameter `-qp n` can be utilized (whe

Apple devices running the Safari browser may fail to playback h.265 recordings. The [apple compatibility option](../configuration/camera_specific.md#h265-cameras-via-safari) should be used to ensure seamless playback on Apple devices.

## Syncing Recordings With Disk
## Syncing Media Files With Disk

In some cases the recordings files may be deleted but Frigate will not know this has happened. Recordings sync can be enabled which will tell Frigate to check the file system and delete any db entries for files which don't exist.
Media files (event snapshots, event thumbnails, review thumbnails, previews, exports, and recordings) can become orphaned when database entries are deleted but the corresponding files remain on disk.

```yaml
record:
  sync_recordings: True
```
Normal operation may leave small numbers of orphaned files until Frigate's scheduled cleanup, but crashes, configuration changes, or upgrades may cause more orphaned files that Frigate does not clean up. This feature checks the file system for media files and removes any that are not referenced in the database.

This feature is meant to fix variations in files, not completely delete entries in the database. If you delete all of your media, don't use `sync_recordings`, just stop Frigate, delete the `frigate.db` database, and restart.
The Maintenance pane in the Frigate UI or an API endpoint `POST /api/media/sync` can be used to trigger a media sync. When using the API, a job ID is returned and the operation continues on the server. Status can be checked with the `/api/media/sync/status/{job_id}` endpoint.

:::warning

The sync operation uses considerable CPU resources and in most cases is not needed, only enable when necessary.
This operation uses considerable CPU resources and includes a safety threshold that aborts if more than 50% of files would be deleted. Only run when necessary. If you set `force: true` the safety threshold will be bypassed; do not use `force` unless you are certain the deletions are intended.

:::
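
A sketch of the API flow described above (endpoint paths as documented here; authentication and the exact shape of the response may vary by version):

```bash
# Trigger a media sync; the response includes a job ID
curl -k -b cookies.txt -X POST https://frigate_ip:8971/api/media/sync

# Poll the status endpoint using the job ID returned by the call above
JOB_ID="<job_id_from_response>"
curl -k -b cookies.txt "https://frigate_ip:8971/api/media/sync/status/${JOB_ID}"
```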
@@ -123,7 +123,7 @@ auth:
  # Optional: Refresh time in seconds (default: shown below)
  # When the session is going to expire in less time than this setting,
  # it will be refreshed back to the session_length.
  refresh_time: 43200 # 12 hours
  refresh_time: 1800 # 30 minutes
  # Optional: Rate limiting for login failures to help prevent brute force
  # login attacks (default: shown below)
  # See the docs for more information on valid values
@@ -510,8 +510,6 @@ record:
  # Optional: Number of minutes to wait between cleanup runs (default: shown below)
  # This can be used to reduce the frequency of deleting recording segments from disk if you want to minimize i/o
  expire_interval: 60
  # Optional: Two-way sync recordings database with disk on startup and once a day (default: shown below).
  sync_recordings: False
  # Optional: Continuous retention settings
  continuous:
    # Optional: Number of days to retain recordings regardless of tracked objects or motion (default: shown below)
@@ -534,6 +532,8 @@ record:
    # The -r (framerate) dictates how smooth the output video is.
    # So the args would be -vf setpts=0.02*PTS -r 30 in that case.
    timelapse_args: "-vf setpts=0.04*PTS -r 30"
    # Optional: Global hardware acceleration settings for timelapse exports. (default: inherit)
    hwaccel_args: auto
  # Optional: Recording Preview Settings
  preview:
    # Optional: Quality of recording preview (default: shown below).
@@ -710,6 +710,44 @@ audio_transcription:
  # List of language codes: https://github.com/openai/whisper/blob/main/whisper/tokenizer.py#L10
  language: en

# Optional: Configuration for classification models
classification:
  # Optional: Configuration for bird classification
  bird:
    # Optional: Enable bird classification (default: shown below)
    enabled: False
    # Optional: Minimum classification score required to be considered a match (default: shown below)
    threshold: 0.9
  custom:
    # Required: name of the classification model
    model_name:
      # Optional: Enable running the model (default: shown below)
      enabled: True
      # Optional: Name of classification model (default: shown below)
      name: None
      # Optional: Classification score threshold to change the state (default: shown below)
      threshold: 0.8
      # Optional: Number of classification attempts to save in the recent classifications tab (default: shown below)
      # NOTE: Defaults to 200 for object classification and 100 for state classification if not specified
      save_attempts: None
      # Optional: Object classification configuration
      object_config:
        # Required: Object types to classify
        objects: [dog]
        # Optional: Type of classification that is applied (default: shown below)
        classification_type: sub_label
      # Optional: State classification configuration
      state_config:
        # Required: Cameras to run classification on
        cameras:
          camera_name:
            # Required: Crop of image frame on this camera to run classification on
            crop: [0, 180, 220, 400]
            # Optional: If classification should be run when motion is detected in the crop (default: shown below)
            motion: False
            # Optional: Interval to run classification on in seconds (default: shown below)
            interval: None

# Optional: Restream configuration
# Uses https://github.com/AlexxIT/go2rtc (v1.9.10)
# NOTE: The default go2rtc API port (1984) must be used,
@@ -797,6 +835,11 @@ cameras:
    # Optional: camera specific output args (default: inherit)
    # output_args:

    # Optional: camera specific hwaccel args for timelapse export (default: inherit)
    # record:
    #   export:
    #     hwaccel_args:

    # Optional: timeout for highest scoring image before allowing it
    # to be replaced by a newer image. (default: shown below)
    best_image_timeout: 60
@@ -873,7 +916,7 @@ cameras:
      user: admin
      # Optional: password for login.
      password: admin
      # Optional: Skip TLS verification from the ONVIF server (default: shown below)
      # Optional: Skip TLS verification and disable digest authentication for the ONVIF server (default: shown below)
      tls_insecure: False
      # Optional: Ignores time synchronization mismatches between the camera and the server during authentication.
      # Using NTP on both ends is recommended and this should only be set to True in a "safe" environment due to the security risk it represents.
@@ -964,10 +1007,6 @@ ui:
  #   full: 8:15:22 PM Mountain Standard Time
  #   (default: shown below).
  time_style: medium
  # Optional: Ability to manually override the date / time styling to use strftime format
  # https://www.gnu.org/software/libc/manual/html_node/Formatting-Calendar-Time.html
  # possible values are shown above (default: not set)
  strftime_fmt: "%Y/%m/%d %H:%M"
  # Optional: Set the unit system to either "imperial" or "metric" (default: metric)
  # Used in the UI and in MQTT topics
  unit_system: metric
@@ -24,11 +24,12 @@ birdseye:
```yaml
  restream: True
```

:::tip

To improve connection speed when using Birdseye via restream you can enable a small idle heartbeat by setting `birdseye.idle_heartbeat_fps` to a low value (e.g. `1–2`). This makes Frigate periodically push the last frame even when no motion is detected, reducing initial connection latency.

:::

### Securing Restream With Authentication

The go2rtc restream can be secured with RTSP based username / password authentication. Ex:

@@ -159,6 +160,31 @@ go2rtc:

See [this comment](https://github.com/AlexxIT/go2rtc/issues/1217#issuecomment-2242296489) for more information.

## Preventing go2rtc from blocking two-way audio {#two-way-talk-restream}

For cameras that support two-way talk, go2rtc will automatically establish an audio output backchannel when connecting to an RTSP stream. This backchannel blocks access to the camera's audio output for two-way talk functionality, preventing both Frigate and other applications from using it.

To prevent this, you must configure two separate stream instances:

1. One stream instance with `#backchannel=0` for Frigate's viewing, recording, and detection (prevents go2rtc from establishing the blocking backchannel)
2. A second stream instance without `#backchannel=0` for two-way talk functionality (can be used by Frigate's WebRTC viewer or other applications)

Configuration example:

```yaml
go2rtc:
  streams:
    front_door:
      - rtsp://user:password@10.0.10.10:554/cam/realmonitor?channel=1&subtype=2#backchannel=0
    front_door_twoway:
      - rtsp://user:password@10.0.10.10:554/cam/realmonitor?channel=1&subtype=2
```

In this configuration:

- `front_door` stream is used by Frigate for viewing, recording, and detection. The `#backchannel=0` parameter prevents go2rtc from establishing the audio output backchannel, so it won't block two-way talk access.
- `front_door_twoway` stream is used for two-way talk functionality. This stream can be used by Frigate's WebRTC viewer when two-way talk is enabled, or by other applications (like Home Assistant Advanced Camera Card) that need access to the camera's audio output channel.

## Advanced Restream Configurations

The [exec](https://github.com/AlexxIT/go2rtc/tree/v1.9.10#source-exec) source in go2rtc can be used for custom ffmpeg commands. An example is below:

@@ -169,4 +195,4 @@ NOTE: The output will need to be passed with two curly braces `{{output}}`
```yaml
go2rtc:
  streams:
    stream1: exec:ffmpeg -hide_banner -re -stream_loop -1 -i /media/BigBuckBunny.mp4 -c copy -rtsp_transport tcp -f rtsp {{output}}
```
@@ -159,7 +159,7 @@ Inference speeds vary greatly depending on the CPU or GPU used, some known examp
|
||||
| Intel HD 530 | 15 - 35 ms | | | | Can only run one detector instance |
|
||||
| Intel HD 620 | 15 - 25 ms | | 320: ~ 35 ms | | |
|
||||
| Intel HD 630 | ~ 15 ms | | 320: ~ 30 ms | | |
|
||||
| Intel UHD 730 | ~ 10 ms | | 320: ~ 19 ms 640: ~ 54 ms | | |
|
||||
| Intel UHD 730 | ~ 10 ms | t-320: 14ms s-320: 24ms t-640: 34ms s-640: 65ms | 320: ~ 19 ms 640: ~ 54 ms | | |
|
||||
| Intel UHD 770 | ~ 15 ms | t-320: ~ 16 ms s-320: ~ 20 ms s-640: ~ 40 ms | 320: ~ 20 ms 640: ~ 46 ms | | |
|
||||
| Intel N100 | ~ 15 ms | s-320: 30 ms | 320: ~ 25 ms | | Can only run one detector instance |
|
||||
| Intel N150 | ~ 15 ms | t-320: 16 ms s-320: 24 ms | | | |
|
||||
|
||||
@@ -135,6 +135,7 @@ Finally, configure [hardware object detection](/configuration/object_detectors#h
|
||||
### MemryX MX3
|
||||
|
||||
The MemryX MX3 Accelerator is available in the M.2 2280 form factor (like an NVMe SSD), and supports a variety of configurations:
|
||||
|
||||
- x86 (Intel/AMD) PCs
|
||||
- Raspberry Pi 5
|
||||
- Orange Pi 5 Plus/Max
|
||||
@@ -142,7 +143,6 @@ The MemryX MX3 Accelerator is available in the M.2 2280 form factor (like an NVM
|
||||
|
||||
#### Configuration
|
||||
|
||||
|
||||
#### Installation
|
||||
|
||||
To get started with MX3 hardware setup for your system, refer to the [Hardware Setup Guide](https://developer.memryx.com/get_started/hardware_setup.html).
|
||||
@@ -156,7 +156,7 @@ Then follow these steps for installing the correct driver/runtime configuration:
|
||||
|
||||
#### Setup
|
||||
|
||||
To set up Frigate, follow the default installation instructions, for example: `ghcr.io/blakeblackshear/frigate:stable`
|
||||
|
||||
Next, grant Docker permissions to access your hardware by adding the following lines to your `docker-compose.yml` file:
|
||||
|
||||
@@ -173,7 +173,7 @@ In your `docker-compose.yml`, also add:
|
||||
privileged: true
|
||||
|
||||
volumes:
|
||||
/run/mxa_manager:/run/mxa_manager
|
||||
- /run/mxa_manager:/run/mxa_manager
|
||||
```
|
||||
|
||||
If you can't use Docker Compose, you can run the container with something similar to this:
|
||||
@@ -411,7 +411,7 @@ To install make sure you have the [community app plugin here](https://forums.unr
|
||||
|
||||
## Proxmox
|
||||
|
||||
[According to Proxmox documentation](https://pve.proxmox.com/pve-docs/pve-admin-guide.html#chapter_pct) it is recommended that you run application containers like Frigate inside a Proxmox QEMU VM. This will give you all the advantages of application containerization, while also providing the benefits that VMs offer, such as strong isolation from the host and the ability to live-migrate, which otherwise isn’t possible with containers.
|
||||
[According to Proxmox documentation](https://pve.proxmox.com/pve-docs/pve-admin-guide.html#chapter_pct) it is recommended that you run application containers like Frigate inside a Proxmox QEMU VM. This will give you all the advantages of application containerization, while also providing the benefits that VMs offer, such as strong isolation from the host and the ability to live-migrate, which otherwise isn’t possible with containers. Ensure that ballooning is **disabled**, especially if you are passing through a GPU to the VM.
|
||||
|
||||
:::warning
|
||||
|
||||
|
||||
@@ -113,7 +113,8 @@ section.
|
||||
|
||||
1. If the stream you added to go2rtc is also used by Frigate for the `record` or `detect` role, you can migrate your config to pull from the RTSP restream to reduce the number of connections to your camera as shown [here](/configuration/restream#reduce-connections-to-camera).
|
||||
2. You can [set up WebRTC](/configuration/live#webrtc-extra-configuration) if your camera supports two-way talk. Note that WebRTC only supports specific audio formats and may require opening ports on your router.
|
||||
3. If your camera supports two-way talk, you must configure your stream with `#backchannel=0` to prevent go2rtc from blocking other applications from accessing the camera's audio output. See [preventing go2rtc from blocking two-way audio](/configuration/restream#two-way-talk-restream) in the restream documentation.
|
||||
|
||||
## Homekit Configuration
|
||||
|
||||
To add camera streams to Homekit, Frigate must be configured in Docker to use `host` networking mode. Once that is done, you can use the go2rtc WebUI (accessed via port 1984, which is disabled by default) to export a camera to Homekit. Any changes made will automatically be saved to `/config/go2rtc_homekit.yml`.
|
||||
|
||||
129
docs/docs/troubleshooting/memory.md
Normal file
@@ -0,0 +1,129 @@
|
||||
---
|
||||
id: memory
|
||||
title: Memory Troubleshooting
|
||||
---
|
||||
|
||||
Frigate includes built-in memory profiling using [memray](https://bloomberg.github.io/memray/) to help diagnose memory issues. This feature allows you to profile specific Frigate modules to identify memory leaks, excessive allocations, or other memory-related problems.
|
||||
|
||||
## Enabling Memory Profiling
|
||||
|
||||
Memory profiling is controlled via the `FRIGATE_MEMRAY_MODULES` environment variable. Set it to a comma-separated list of module names you want to profile:
|
||||
|
||||
```bash
|
||||
export FRIGATE_MEMRAY_MODULES="frigate.review_segment_manager,frigate.capture"
|
||||
```
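If Frigate runs under Docker Compose, the same variable can be set on the container instead; a minimal sketch (service name and image tag are assumptions):

```yaml
services:
  frigate:
    image: ghcr.io/blakeblackshear/frigate:stable
    environment:
      # Profile the review segment manager and all camera capture processes
      - FRIGATE_MEMRAY_MODULES=frigate.review_segment_manager,frigate.capture
```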
|
||||
|
||||
### Module Names
|
||||
|
||||
Frigate processes are named using a module-based naming scheme. Common module names include:
|
||||
|
||||
- `frigate.review_segment_manager` - Review segment processing
|
||||
- `frigate.recording_manager` - Recording management
|
||||
- `frigate.capture` - Camera capture processes (all cameras with this module name)
|
||||
- `frigate.process` - Camera processing/tracking (all cameras with this module name)
|
||||
- `frigate.output` - Output processing
|
||||
- `frigate.audio_manager` - Audio processing
|
||||
- `frigate.embeddings` - Embeddings processing
|
||||
|
||||
You can also specify the full process name (including camera-specific identifiers) if you want to profile a specific camera:
|
||||
|
||||
```bash
|
||||
export FRIGATE_MEMRAY_MODULES="frigate.capture:front_door"
|
||||
```
|
||||
|
||||
When you specify a module name (e.g., `frigate.capture`), all processes with that module prefix will be profiled. For example, `frigate.capture` will profile all camera capture processes.
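With cameras named `front_door` and `back_yard`, for instance, this setting would cover both `frigate.capture:front_door` and `frigate.capture:back_yard`.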
|
||||
|
||||
## How It Works
|
||||
|
||||
1. **Binary File Creation**: When profiling is enabled, memray creates a binary file (`.bin`) in `/config/memray_reports/` that is updated continuously in real-time as the process runs.
|
||||
|
||||
2. **Automatic HTML Generation**: On normal process exit, Frigate automatically:
|
||||
|
||||
- Stops memray tracking
|
||||
- Generates an HTML flamegraph report
|
||||
- Saves it to `/config/memray_reports/<module_name>.html`
|
||||
|
||||
3. **Crash Recovery**: If a process crashes (SIGKILL, segfault, etc.), the binary file is preserved with all data up to the crash point. You can manually generate the HTML report from the binary file.
|
||||
|
||||
## Viewing Reports
|
||||
|
||||
### Automatic Reports
|
||||
|
||||
After a process exits normally, you'll find HTML reports in `/config/memray_reports/`. Open these files in a web browser to view interactive flamegraphs showing memory usage patterns.
|
||||
|
||||
### Manual Report Generation
|
||||
|
||||
If a process crashes or you want to generate a report from an existing binary file, you can manually create the HTML report:
|
||||
|
||||
```bash
|
||||
memray flamegraph /config/memray_reports/<module_name>.bin
|
||||
```
|
||||
|
||||
This will generate an HTML file that you can open in your browser.
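If needed, you can choose the output path explicitly, or run the command inside the Frigate container; a sketch assuming a container named `frigate` and a module named `frigate.capture`:

```bash
# Write the flamegraph to an explicit output file
memray flamegraph -o /config/memray_reports/frigate.capture.html /config/memray_reports/frigate.capture.bin

# Or generate it from inside the running container
docker exec frigate memray flamegraph /config/memray_reports/frigate.capture.bin
```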
|
||||
|
||||
## Understanding the Reports
|
||||
|
||||
Memray flamegraphs show:
|
||||
|
||||
- **Memory allocations over time**: See where memory is being allocated in your code
|
||||
- **Call stacks**: Understand the full call chain leading to allocations
|
||||
- **Memory hotspots**: Identify functions or code paths that allocate the most memory
|
||||
- **Memory leaks**: Spot patterns where memory is allocated but not freed
|
||||
|
||||
The interactive HTML reports allow you to:
|
||||
|
||||
- Zoom into specific time ranges
|
||||
- Filter by function names
|
||||
- View detailed allocation information
|
||||
- Export data for further analysis
|
||||
|
||||
## Best Practices
|
||||
|
||||
1. **Profile During Issues**: Enable profiling when you're experiencing memory issues, not all the time, as it adds some overhead.
|
||||
|
||||
2. **Profile Specific Modules**: Instead of profiling everything, focus on the modules you suspect are causing issues.
|
||||
|
||||
3. **Let Processes Run**: Allow processes to run for a meaningful duration to capture representative memory usage patterns.
|
||||
|
||||
4. **Check Binary Files**: If HTML reports aren't generated automatically (e.g., after a crash), check for `.bin` files in `/config/memray_reports/` and generate reports manually.
|
||||
|
||||
5. **Compare Reports**: Generate reports at different times to compare memory usage patterns and identify trends.
|
||||
|
||||
## Troubleshooting
|
||||
|
||||
### No Reports Generated
|
||||
|
||||
- Check that the environment variable is set correctly
|
||||
- Verify the module name matches exactly (case-sensitive)
|
||||
- Check logs for memray-related errors
|
||||
- Ensure `/config/memray_reports/` directory exists and is writable
|
||||
|
||||
### Process Crashed Before Report Generation
|
||||
|
||||
- Look for `.bin` files in `/config/memray_reports/`
|
||||
- Manually generate HTML reports using: `memray flamegraph <file>.bin`
|
||||
- The binary file contains all data up to the crash point
|
||||
|
||||
### Reports Show No Data
|
||||
|
||||
- Ensure the process ran long enough to generate meaningful data
|
||||
- Check that memray is properly installed (included by default in Frigate)
|
||||
- Verify the process actually started and ran (check process logs)
|
||||
|
||||
## Example Usage
|
||||
|
||||
```bash
|
||||
# Enable profiling for review and capture modules
|
||||
export FRIGATE_MEMRAY_MODULES="frigate.review_segment_manager,frigate.capture"
|
||||
|
||||
# Start Frigate
|
||||
# ... let it run for a while ...
|
||||
|
||||
# Check for reports
|
||||
ls -lh /config/memray_reports/
|
||||
|
||||
# If a process crashed, manually generate report
|
||||
memray flamegraph /config/memray_reports/frigate_capture_front_door.bin
|
||||
```
|
||||
|
||||
For more information about memray and interpreting reports, see the [official memray documentation](https://bloomberg.github.io/memray/).
|
||||
3199
docs/package-lock.json
generated
File diff suppressed because it is too large
@@ -18,14 +18,14 @@
|
||||
},
|
||||
"dependencies": {
|
||||
"@docusaurus/core": "^3.7.0",
|
||||
"@docusaurus/plugin-content-docs": "^3.6.3",
|
||||
"@docusaurus/plugin-content-docs": "^3.7.0",
|
||||
"@docusaurus/preset-classic": "^3.7.0",
|
||||
"@docusaurus/theme-mermaid": "^3.6.3",
|
||||
"@docusaurus/theme-mermaid": "^3.7.0",
|
||||
"@inkeep/docusaurus": "^2.0.16",
|
||||
"@mdx-js/react": "^3.1.0",
|
||||
"clsx": "^2.1.1",
|
||||
"docusaurus-plugin-openapi-docs": "^4.3.1",
|
||||
"docusaurus-theme-openapi-docs": "^4.3.1",
|
||||
"docusaurus-plugin-openapi-docs": "^4.5.1",
|
||||
"docusaurus-theme-openapi-docs": "^4.5.1",
|
||||
"prism-react-renderer": "^2.4.1",
|
||||
"raw-loader": "^4.0.2",
|
||||
"react": "^18.3.1",
|
||||
@@ -44,9 +44,9 @@
|
||||
]
|
||||
},
|
||||
"devDependencies": {
|
||||
"@docusaurus/module-type-aliases": "^3.4.0",
|
||||
"@docusaurus/types": "^3.4.0",
|
||||
"@types/react": "^18.3.7"
|
||||
"@docusaurus/module-type-aliases": "^3.7.0",
|
||||
"@docusaurus/types": "^3.7.0",
|
||||
"@types/react": "^18.3.27"
|
||||
},
|
||||
"engines": {
|
||||
"node": ">=18.0"
|
||||
|
||||
@@ -131,6 +131,7 @@ const sidebars: SidebarsConfig = {
|
||||
"troubleshooting/recordings",
|
||||
"troubleshooting/gpu",
|
||||
"troubleshooting/edgetpu",
|
||||
"troubleshooting/memory",
|
||||
],
|
||||
Development: [
|
||||
"development/contributing",
|
||||
|
||||
@@ -1,13 +1,18 @@
|
||||
.alert {
|
||||
padding: 12px;
|
||||
background: #fff8e6;
|
||||
border-bottom: 1px solid #ffd166;
|
||||
text-align: center;
|
||||
font-size: 15px;
|
||||
}
|
||||
|
||||
.alert a {
|
||||
color: #1890ff;
|
||||
font-weight: 500;
|
||||
margin-left: 6px;
|
||||
}
|
||||
padding: 12px;
|
||||
background: #fff8e6;
|
||||
border-bottom: 1px solid #ffd166;
|
||||
text-align: center;
|
||||
font-size: 15px;
|
||||
}
|
||||
|
||||
[data-theme="dark"] .alert {
|
||||
background: #3b2f0b;
|
||||
border-bottom: 1px solid #665c22;
|
||||
}
|
||||
|
||||
.alert a {
|
||||
color: #1890ff;
|
||||
font-weight: 500;
|
||||
margin-left: 6px;
|
||||
}
|
||||
|
||||
1293
docs/static/frigate-api.yaml
vendored
File diff suppressed because it is too large
@@ -23,17 +23,24 @@ from markupsafe import escape
|
||||
from peewee import SQL, fn, operator
|
||||
from pydantic import ValidationError
|
||||
|
||||
from frigate.api.auth import require_role
|
||||
from frigate.api.auth import allow_any_authenticated, allow_public, require_role
|
||||
from frigate.api.defs.query.app_query_parameters import AppTimelineHourlyQueryParameters
|
||||
from frigate.api.defs.request.app_body import AppConfigSetBody
|
||||
from frigate.api.defs.request.app_body import AppConfigSetBody, MediaSyncBody
|
||||
from frigate.api.defs.tags import Tags
|
||||
from frigate.config import FrigateConfig
|
||||
from frigate.config.camera.updater import (
|
||||
CameraConfigUpdateEnum,
|
||||
CameraConfigUpdateTopic,
|
||||
)
|
||||
from frigate.ffmpeg_presets import FFMPEG_HWACCEL_VAAPI, _gpu_selector
|
||||
from frigate.jobs.media_sync import (
|
||||
get_current_media_sync_job,
|
||||
get_media_sync_job_by_id,
|
||||
start_media_sync_job,
|
||||
)
|
||||
from frigate.models import Event, Timeline
|
||||
from frigate.stats.prometheus import get_metrics, update_metrics
|
||||
from frigate.types import JobStatusTypesEnum
|
||||
from frigate.util.builtin import (
|
||||
clean_camera_user_pass,
|
||||
flatten_config_data,
|
||||
@@ -56,29 +63,33 @@ logger = logging.getLogger(__name__)
|
||||
router = APIRouter(tags=[Tags.app])
|
||||
|
||||
|
||||
@router.get("/", response_class=PlainTextResponse)
|
||||
@router.get(
|
||||
"/", response_class=PlainTextResponse, dependencies=[Depends(allow_public())]
|
||||
)
|
||||
def is_healthy():
|
||||
return "Frigate is running. Alive and healthy!"
|
||||
|
||||
|
||||
@router.get("/config/schema.json")
|
||||
@router.get("/config/schema.json", dependencies=[Depends(allow_public())])
|
||||
def config_schema(request: Request):
|
||||
return Response(
|
||||
content=request.app.frigate_config.schema_json(), media_type="application/json"
|
||||
)
|
||||
|
||||
|
||||
@router.get("/version", response_class=PlainTextResponse)
|
||||
@router.get(
|
||||
"/version", response_class=PlainTextResponse, dependencies=[Depends(allow_public())]
|
||||
)
|
||||
def version():
|
||||
return VERSION
|
||||
|
||||
|
||||
@router.get("/stats")
|
||||
@router.get("/stats", dependencies=[Depends(allow_any_authenticated())])
|
||||
def stats(request: Request):
|
||||
return JSONResponse(content=request.app.stats_emitter.get_latest_stats())
|
||||
|
||||
|
||||
@router.get("/stats/history")
|
||||
@router.get("/stats/history", dependencies=[Depends(allow_any_authenticated())])
|
||||
def stats_history(request: Request, keys: str = None):
|
||||
if keys:
|
||||
keys = keys.split(",")
|
||||
@@ -86,7 +97,7 @@ def stats_history(request: Request, keys: str = None):
|
||||
return JSONResponse(content=request.app.stats_emitter.get_stats_history(keys))
|
||||
|
||||
|
||||
@router.get("/metrics")
|
||||
@router.get("/metrics", dependencies=[Depends(allow_any_authenticated())])
|
||||
def metrics(request: Request):
|
||||
"""Expose Prometheus metrics endpoint and update metrics with latest stats"""
|
||||
# Retrieve the latest statistics and update the Prometheus metrics
|
||||
@@ -103,7 +114,7 @@ def metrics(request: Request):
|
||||
return Response(content=content, media_type=content_type)
|
||||
|
||||
|
||||
@router.get("/config")
|
||||
@router.get("/config", dependencies=[Depends(allow_any_authenticated())])
|
||||
def config(request: Request):
|
||||
config_obj: FrigateConfig = request.app.frigate_config
|
||||
config: dict[str, dict[str, Any]] = config_obj.model_dump(
|
||||
@@ -209,7 +220,7 @@ def config_raw_paths(request: Request):
|
||||
return JSONResponse(content=raw_paths)
|
||||
|
||||
|
||||
@router.get("/config/raw")
|
||||
@router.get("/config/raw", dependencies=[Depends(allow_any_authenticated())])
|
||||
def config_raw():
|
||||
config_file = find_config_file()
|
||||
|
||||
@@ -452,9 +463,17 @@ def config_set(request: Request, body: AppConfigSetBody):
|
||||
)
|
||||
|
||||
|
||||
@router.get("/vainfo")
|
||||
@router.get("/vainfo", dependencies=[Depends(allow_any_authenticated())])
|
||||
def vainfo():
|
||||
vainfo = vainfo_hwaccel()
|
||||
# Use LibvaGpuSelector to pick an appropriate libva device (if available)
|
||||
selected_gpu = ""
|
||||
try:
|
||||
selected_gpu = _gpu_selector.get_gpu_arg(FFMPEG_HWACCEL_VAAPI, 0) or ""
|
||||
except Exception:
|
||||
selected_gpu = ""
|
||||
|
||||
# If selected_gpu is empty, pass None to vainfo_hwaccel to run plain `vainfo`.
|
||||
vainfo = vainfo_hwaccel(device_name=selected_gpu or None)
|
||||
return JSONResponse(
|
||||
content={
|
||||
"return_code": vainfo.returncode,
|
||||
@@ -472,12 +491,16 @@ def vainfo():
|
||||
)
|
||||
|
||||
|
||||
@router.get("/nvinfo")
|
||||
@router.get("/nvinfo", dependencies=[Depends(allow_any_authenticated())])
|
||||
def nvinfo():
|
||||
return JSONResponse(content=get_nvidia_driver_info())
|
||||
|
||||
|
||||
@router.get("/logs/{service}", tags=[Tags.logs])
|
||||
@router.get(
|
||||
"/logs/{service}",
|
||||
tags=[Tags.logs],
|
||||
dependencies=[Depends(allow_any_authenticated())],
|
||||
)
|
||||
async def logs(
|
||||
service: str = Path(enum=["frigate", "nginx", "go2rtc"]),
|
||||
download: Optional[str] = None,
|
||||
@@ -585,7 +608,99 @@ def restart():
|
||||
)
|
||||
|
||||
|
||||
@router.get("/labels")
|
||||
@router.post(
|
||||
"/media/sync",
|
||||
dependencies=[Depends(require_role(["admin"]))],
|
||||
summary="Start media sync job",
|
||||
description="""Start an asynchronous media sync job to find and (optionally) remove orphaned media files.
|
||||
Returns 202 with job details when queued, or 409 if a job is already running.""",
|
||||
)
|
||||
def sync_media(body: MediaSyncBody = Body(...)):
|
||||
"""Start async media sync job - remove orphaned files.
|
||||
|
||||
Syncs specified media types: event snapshots, event thumbnails, review thumbnails,
|
||||
previews, exports, and/or recordings. Job runs in background; use /media/sync/current
|
||||
or /media/sync/status/{job_id} to check status.
|
||||
|
||||
Args:
|
||||
body: MediaSyncBody with dry_run flag and media_types list.
|
||||
media_types can include: 'all', 'event_snapshots', 'event_thumbnails',
|
||||
'review_thumbnails', 'previews', 'exports', 'recordings'
|
||||
|
||||
Returns:
|
||||
202 Accepted with job_id, or 409 Conflict if job already running.
|
||||
"""
|
||||
job_id = start_media_sync_job(
|
||||
dry_run=body.dry_run, media_types=body.media_types, force=body.force
|
||||
)
|
||||
|
||||
if job_id is None:
|
||||
# A job is already running
|
||||
current = get_current_media_sync_job()
|
||||
return JSONResponse(
|
||||
content={
|
||||
"error": "A media sync job is already running",
|
||||
"current_job_id": current.id if current else None,
|
||||
},
|
||||
status_code=409,
|
||||
)
|
||||
|
||||
return JSONResponse(
|
||||
content={
|
||||
"job": {
|
||||
"job_type": "media_sync",
|
||||
"status": JobStatusTypesEnum.queued,
|
||||
"id": job_id,
|
||||
}
|
||||
},
|
||||
status_code=202,
|
||||
)
|
||||
|
||||
|
||||
@router.get(
|
||||
"/media/sync/current",
|
||||
dependencies=[Depends(require_role(["admin"]))],
|
||||
summary="Get current media sync job",
|
||||
description="""Retrieve the current running media sync job, if any. Returns the job details
|
||||
or null when no job is active.""",
|
||||
)
|
||||
def get_media_sync_current():
|
||||
"""Get the current running media sync job, if any."""
|
||||
job = get_current_media_sync_job()
|
||||
|
||||
if job is None:
|
||||
return JSONResponse(content={"job": None}, status_code=200)
|
||||
|
||||
return JSONResponse(
|
||||
content={"job": job.to_dict()},
|
||||
status_code=200,
|
||||
)
|
||||
|
||||
|
||||
@router.get(
|
||||
"/media/sync/status/{job_id}",
|
||||
dependencies=[Depends(require_role(["admin"]))],
|
||||
summary="Get media sync job status",
|
||||
description="""Get status and results for the specified media sync job id. Returns 200 with
|
||||
job details including results, or 404 if the job is not found.""",
|
||||
)
|
||||
def get_media_sync_status(job_id: str):
|
||||
"""Get the status of a specific media sync job."""
|
||||
job = get_media_sync_job_by_id(job_id)
|
||||
|
||||
if job is None:
|
||||
return JSONResponse(
|
||||
content={"error": "Job not found"},
|
||||
status_code=404,
|
||||
)
|
||||
|
||||
return JSONResponse(
|
||||
content={"job": job.to_dict()},
|
||||
status_code=200,
|
||||
)
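A usage sketch for these endpoints (the host, port, and `/api` prefix are assumptions about how Frigate is exposed; an admin session or token is required):

```bash
# Queue a dry-run sync of all media types (returns 202 with a job id)
curl -X POST http://frigate.local:5000/api/media/sync \
  -H "Content-Type: application/json" \
  -d '{"dry_run": true, "media_types": ["all"]}'

# Check the currently running job, or a specific job by id
curl http://frigate.local:5000/api/media/sync/current
curl http://frigate.local:5000/api/media/sync/status/<job_id>
```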
|
||||
|
||||
|
||||
@router.get("/labels", dependencies=[Depends(allow_any_authenticated())])
|
||||
def get_labels(camera: str = ""):
|
||||
try:
|
||||
if camera:
|
||||
@@ -603,7 +718,7 @@ def get_labels(camera: str = ""):
|
||||
return JSONResponse(content=labels)
|
||||
|
||||
|
||||
@router.get("/sub_labels")
|
||||
@router.get("/sub_labels", dependencies=[Depends(allow_any_authenticated())])
|
||||
def get_sub_labels(split_joined: Optional[int] = None):
|
||||
try:
|
||||
events = Event.select(Event.sub_label).distinct()
|
||||
@@ -634,7 +749,7 @@ def get_sub_labels(split_joined: Optional[int] = None):
|
||||
return JSONResponse(content=sub_labels)
|
||||
|
||||
|
||||
@router.get("/plus/models")
|
||||
@router.get("/plus/models", dependencies=[Depends(allow_any_authenticated())])
|
||||
def plusModels(request: Request, filterByCurrentModelDetector: bool = False):
|
||||
if not request.app.frigate_config.plus_api.is_active():
|
||||
return JSONResponse(
|
||||
@@ -676,7 +791,9 @@ def plusModels(request: Request, filterByCurrentModelDetector: bool = False):
|
||||
return JSONResponse(content=validModels)
|
||||
|
||||
|
||||
@router.get("/recognized_license_plates")
|
||||
@router.get(
|
||||
"/recognized_license_plates", dependencies=[Depends(allow_any_authenticated())]
|
||||
)
|
||||
def get_recognized_license_plates(split_joined: Optional[int] = None):
|
||||
try:
|
||||
query = (
|
||||
@@ -710,7 +827,7 @@ def get_recognized_license_plates(split_joined: Optional[int] = None):
|
||||
return JSONResponse(content=recognized_license_plates)
|
||||
|
||||
|
||||
@router.get("/timeline")
|
||||
@router.get("/timeline", dependencies=[Depends(allow_any_authenticated())])
|
||||
def timeline(camera: str = "all", limit: int = 100, source_id: Optional[str] = None):
|
||||
clauses = []
|
||||
|
||||
@@ -747,7 +864,7 @@ def timeline(camera: str = "all", limit: int = 100, source_id: Optional[str] = N
|
||||
return JSONResponse(content=[t for t in timeline])
|
||||
|
||||
|
||||
@router.get("/timeline/hourly")
|
||||
@router.get("/timeline/hourly", dependencies=[Depends(allow_any_authenticated())])
|
||||
def hourly_timeline(params: AppTimelineHourlyQueryParameters = Depends()):
|
||||
"""Get hourly summary for timeline."""
|
||||
cameras = params.cameras
|
||||
|
||||
@@ -32,10 +32,178 @@ from frigate.models import User
|
||||
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
|
||||
def require_admin_by_default():
|
||||
"""
|
||||
Global admin requirement dependency for all endpoints by default.
|
||||
|
||||
This is set as the default dependency on the FastAPI app to ensure all
|
||||
endpoints require admin access unless explicitly overridden with
|
||||
allow_public(), allow_any_authenticated(), or require_role().
|
||||
|
||||
Port 5000 (internal) always has admin role set by the /auth endpoint,
|
||||
so this check passes automatically for internal requests.
|
||||
|
||||
Certain paths are exempted from the global admin check because they must
|
||||
be accessible before authentication (login, auth) or they have their own
|
||||
route-level authorization dependencies that handle access control.
|
||||
"""
|
||||
# Paths that have route-level auth dependencies and should bypass global admin check
|
||||
# These paths still have authorization - it's handled by their route-level dependencies
|
||||
EXEMPT_PATHS = {
|
||||
# Public auth endpoints (allow_public)
|
||||
"/auth",
|
||||
"/auth/first_time_login",
|
||||
"/login",
|
||||
"/logout",
|
||||
# Authenticated user endpoints (allow_any_authenticated)
|
||||
"/profile",
|
||||
# Public info endpoints (allow_public)
|
||||
"/",
|
||||
"/version",
|
||||
"/config/schema.json",
|
||||
# Authenticated user endpoints (allow_any_authenticated)
|
||||
"/metrics",
|
||||
"/stats",
|
||||
"/stats/history",
|
||||
"/config",
|
||||
"/config/raw",
|
||||
"/vainfo",
|
||||
"/nvinfo",
|
||||
"/labels",
|
||||
"/sub_labels",
|
||||
"/plus/models",
|
||||
"/recognized_license_plates",
|
||||
"/timeline",
|
||||
"/timeline/hourly",
|
||||
"/recordings/storage",
|
||||
"/recordings/summary",
|
||||
"/recordings/unavailable",
|
||||
"/go2rtc/streams",
|
||||
"/event_ids",
|
||||
"/events",
|
||||
"/exports",
|
||||
}
|
||||
|
||||
# Path prefixes that should be exempt (for paths with parameters)
|
||||
EXEMPT_PREFIXES = (
|
||||
"/logs/", # /logs/{service}
|
||||
"/review", # /review, /review/{id}, /review/summary, /review_ids, etc.
|
||||
"/reviews/", # /reviews/viewed, /reviews/delete
|
||||
"/events/", # /events/{id}/thumbnail, /events/summary, etc. (camera-scoped)
|
||||
"/export/", # /export/{camera}/start/..., /export/{id}/rename, /export/{id}
|
||||
"/go2rtc/streams/", # /go2rtc/streams/{camera}
|
||||
"/users/", # /users/{username}/password (has own auth)
|
||||
"/preview/", # /preview/{file}/thumbnail.jpg
|
||||
"/exports/", # /exports/{export_id}
|
||||
"/vod/", # /vod/{camera_name}/...
|
||||
"/notifications/", # /notifications/pubkey, /notifications/register
|
||||
)
|
||||
|
||||
async def admin_checker(request: Request):
|
||||
path = request.url.path
|
||||
|
||||
# Check exact path matches
|
||||
if path in EXEMPT_PATHS:
|
||||
return
|
||||
|
||||
# Check prefix matches for parameterized paths
|
||||
if path.startswith(EXEMPT_PREFIXES):
|
||||
return
|
||||
|
||||
# Dynamic camera path exemption:
|
||||
# Any path whose first segment matches a configured camera name should
|
||||
# bypass the global admin requirement. These endpoints enforce access
|
||||
# via route-level dependencies (e.g. require_camera_access) to ensure
|
||||
# per-camera authorization. This allows non-admin authenticated users
|
||||
# (e.g. viewer role) to access camera-specific resources without
|
||||
# needing admin privileges.
|
||||
try:
|
||||
if path.startswith("/"):
|
||||
first_segment = path.split("/", 2)[1]
|
||||
if (
|
||||
first_segment
|
||||
and first_segment in request.app.frigate_config.cameras
|
||||
):
|
||||
return
|
||||
except Exception:
|
||||
pass
|
||||
|
||||
# For all other paths, require admin role
|
||||
# Port 5000 (internal) requests have admin role set automatically
|
||||
role = request.headers.get("remote-role")
|
||||
if role == "admin":
|
||||
return
|
||||
|
||||
raise HTTPException(
|
||||
status_code=403,
|
||||
detail="Access denied. A user with the admin role is required.",
|
||||
)
|
||||
|
||||
return admin_checker
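For illustration only, a factory like this is typically attached as a global dependency so every route is admin-gated unless it opts out; the actual wiring in Frigate's app setup may differ:

```python
from fastapi import Depends, FastAPI

# Hypothetical sketch: every route runs admin_checker unless it overrides it
# with allow_public() or allow_any_authenticated() at the route level.
app = FastAPI(dependencies=[Depends(require_admin_by_default())])
```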
|
||||
|
||||
|
||||
def _is_authenticated(request: Request) -> bool:
|
||||
"""
|
||||
Helper to determine if a request is from an authenticated user.
|
||||
|
||||
Returns True if the request has a valid authenticated user (not anonymous).
|
||||
Port 5000 internal requests are considered anonymous despite having admin role.
|
||||
"""
|
||||
username = request.headers.get("remote-user")
|
||||
return username is not None and username != "anonymous"
|
||||
|
||||
|
||||
def allow_public():
|
||||
"""
|
||||
Override dependency to allow unauthenticated access to an endpoint.
|
||||
|
||||
Use this for endpoints that should be publicly accessible without
|
||||
authentication, such as login page, health checks, or pre-auth info.
|
||||
|
||||
Example:
|
||||
@router.get("/public-endpoint", dependencies=[Depends(allow_public())])
|
||||
"""
|
||||
|
||||
async def public_checker(request: Request):
|
||||
return # Always allow
|
||||
|
||||
return public_checker
|
||||
|
||||
|
||||
def allow_any_authenticated():
|
||||
"""
|
||||
Override dependency to allow any authenticated user (bypass admin requirement).
|
||||
|
||||
Allows:
|
||||
- Port 5000 internal requests (have admin role despite anonymous user)
|
||||
- Any authenticated user with a real username (not "anonymous")
|
||||
|
||||
Rejects:
|
||||
- Port 8971 requests with anonymous user (auth disabled, no proxy auth)
|
||||
|
||||
Example:
|
||||
@router.get("/authenticated-endpoint", dependencies=[Depends(allow_any_authenticated())])
|
||||
"""
|
||||
|
||||
async def auth_checker(request: Request):
|
||||
# Port 5000 requests have admin role and should be allowed
|
||||
role = request.headers.get("remote-role")
|
||||
if role == "admin":
|
||||
return
|
||||
|
||||
# Otherwise require a real authenticated user (not anonymous)
|
||||
if not _is_authenticated(request):
|
||||
raise HTTPException(status_code=401, detail="Authentication required")
|
||||
return
|
||||
|
||||
return auth_checker
|
||||
|
||||
|
||||
router = APIRouter(tags=[Tags.auth])
|
||||
|
||||
|
||||
@router.get("/auth/first_time_login")
|
||||
@router.get("/auth/first_time_login", dependencies=[Depends(allow_public())])
|
||||
def first_time_login(request: Request):
|
||||
"""Return whether the admin first-time login help flag is set in config.
|
||||
|
||||
@@ -143,7 +311,10 @@ def get_jwt_secret() -> str:
|
||||
)
|
||||
jwt_secret = secrets.token_hex(64)
|
||||
try:
|
||||
with open(jwt_secret_file, "w") as f:
|
||||
fd = os.open(
|
||||
jwt_secret_file, os.O_WRONLY | os.O_CREAT | os.O_EXCL, 0o600
|
||||
)
|
||||
with os.fdopen(fd, "w") as f:
|
||||
f.write(str(jwt_secret))
|
||||
except Exception:
|
||||
logger.warning(
|
||||
@@ -188,9 +359,35 @@ def verify_password(password, password_hash):
|
||||
return secrets.compare_digest(password_hash, compare_hash)
|
||||
|
||||
|
||||
def validate_password_strength(password: str) -> tuple[bool, Optional[str]]:
|
||||
"""
|
||||
Validate password strength.
|
||||
|
||||
Returns a tuple of (is_valid, error_message).
|
||||
"""
|
||||
if not password:
|
||||
return False, "Password cannot be empty"
|
||||
|
||||
if len(password) < 8:
|
||||
return False, "Password must be at least 8 characters long"
|
||||
|
||||
if not any(c.isupper() for c in password):
|
||||
return False, "Password must contain at least one uppercase letter"
|
||||
|
||||
if not any(c.isdigit() for c in password):
|
||||
return False, "Password must contain at least one digit"
|
||||
|
||||
if not any(c in '!@#$%^&*(),.?":{}|<>' for c in password):
|
||||
return False, "Password must contain at least one special character"
|
||||
|
||||
return True, None
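For example, under these rules:

```python
# Fails the minimum-length check
assert validate_password_strength("abc") == (
    False,
    "Password must be at least 8 characters long",
)

# Meets length, uppercase, digit, and special-character requirements
assert validate_password_strength("Str0ng!Pass") == (True, None)
```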
|
||||
|
||||
|
||||
def create_encoded_jwt(user, role, expiration, secret):
|
||||
return jwt.encode(
|
||||
{"alg": "HS256"}, {"sub": user, "role": role, "exp": expiration}, secret
|
||||
{"alg": "HS256"},
|
||||
{"sub": user, "role": role, "exp": expiration, "iat": int(time.time())},
|
||||
secret,
|
||||
)
|
||||
|
||||
|
||||
@@ -352,7 +549,12 @@ def resolve_role(
|
||||
|
||||
|
||||
# Endpoints
|
||||
@router.get("/auth")
|
||||
@router.get(
|
||||
"/auth",
|
||||
dependencies=[Depends(allow_public())],
|
||||
summary="Authenticate request",
|
||||
description="Authenticates the current request based on proxy headers or JWT token. Returns user role and permissions for camera access.",
|
||||
)
|
||||
def auth(request: Request):
|
||||
auth_config: AuthConfig = request.app.frigate_config.auth
|
||||
proxy_config: ProxyConfig = request.app.frigate_config.proxy
|
||||
@@ -451,13 +653,27 @@ def auth(request: Request):
|
||||
return fail_response
|
||||
|
||||
# if the jwt cookie is expiring soon
|
||||
elif jwt_source == "cookie" and expiration - JWT_REFRESH <= current_time:
|
||||
if jwt_source == "cookie" and expiration - JWT_REFRESH <= current_time:
|
||||
logger.debug("jwt token expiring soon, refreshing cookie")
|
||||
# ensure the user hasn't been deleted
|
||||
|
||||
# Check if password has been changed since token was issued
|
||||
# If so, force re-login by rejecting the refresh
|
||||
try:
|
||||
User.get_by_id(user)
|
||||
user_obj = User.get_by_id(user)
|
||||
if user_obj.password_changed_at is not None:
|
||||
token_iat = int(token.claims.get("iat", 0))
|
||||
password_changed_timestamp = int(
|
||||
user_obj.password_changed_at.timestamp()
|
||||
)
|
||||
if token_iat < password_changed_timestamp:
|
||||
logger.debug(
|
||||
"jwt token issued before password change, rejecting refresh"
|
||||
)
|
||||
return fail_response
|
||||
except DoesNotExist:
|
||||
logger.debug("user not found")
|
||||
return fail_response
|
||||
|
||||
new_expiration = current_time + JWT_SESSION_LENGTH
|
||||
new_encoded_jwt = create_encoded_jwt(
|
||||
user, role, new_expiration, request.app.jwt_token
|
||||
@@ -478,7 +694,12 @@ def auth(request: Request):
|
||||
return fail_response
|
||||
|
||||
|
||||
@router.get("/profile")
|
||||
@router.get(
|
||||
"/profile",
|
||||
dependencies=[Depends(allow_any_authenticated())],
|
||||
summary="Get user profile",
|
||||
description="Returns the current authenticated user's profile including username, role, and allowed cameras.",
|
||||
)
|
||||
def profile(request: Request):
|
||||
username = request.headers.get("remote-user", "anonymous")
|
||||
role = request.headers.get("remote-role", "viewer")
|
||||
@@ -492,7 +713,12 @@ def profile(request: Request):
|
||||
)
|
||||
|
||||
|
||||
@router.get("/logout")
|
||||
@router.get(
|
||||
"/logout",
|
||||
dependencies=[Depends(allow_public())],
|
||||
summary="Logout user",
|
||||
description="Logs out the current user by clearing the session cookie.",
|
||||
)
|
||||
def logout(request: Request):
|
||||
auth_config: AuthConfig = request.app.frigate_config.auth
|
||||
response = RedirectResponse("/login", status_code=303)
|
||||
@@ -503,7 +729,12 @@ def logout(request: Request):
|
||||
limiter = Limiter(key_func=get_remote_addr)
|
||||
|
||||
|
||||
@router.post("/login")
|
||||
@router.post(
|
||||
"/login",
|
||||
dependencies=[Depends(allow_public())],
|
||||
summary="Login with credentials",
|
||||
description="Authenticates a user with username and password. Returns a JWT token as a secure HTTP-only cookie that can be used for subsequent API requests. The token can also be retrieved and used as a Bearer token in the Authorization header.",
|
||||
)
|
||||
@limiter.limit(limit_value=rateLimiter.get_limit)
|
||||
def login(request: Request, body: AppPostLoginBody):
|
||||
JWT_COOKIE_NAME = request.app.frigate_config.auth.cookie_name
|
||||
@@ -541,7 +772,12 @@ def login(request: Request, body: AppPostLoginBody):
|
||||
return JSONResponse(content={"message": "Login failed"}, status_code=401)
|
||||
|
||||
|
||||
@router.get("/users", dependencies=[Depends(require_role(["admin"]))])
|
||||
@router.get(
|
||||
"/users",
|
||||
dependencies=[Depends(require_role(["admin"]))],
|
||||
summary="Get all users",
|
||||
description="Returns a list of all users with their usernames and roles. Requires admin role.",
|
||||
)
|
||||
def get_users():
|
||||
exports = (
|
||||
User.select(User.username, User.role).order_by(User.username).dicts().iterator()
|
||||
@@ -549,7 +785,12 @@ def get_users():
|
||||
return JSONResponse([e for e in exports])
|
||||
|
||||
|
||||
@router.post("/users", dependencies=[Depends(require_role(["admin"]))])
|
||||
@router.post(
|
||||
"/users",
|
||||
dependencies=[Depends(require_role(["admin"]))],
|
||||
summary="Create new user",
|
||||
description="Creates a new user with the specified username, password, and role. Requires admin role. Password must meet strength requirements.",
|
||||
)
|
||||
def create_user(
|
||||
request: Request,
|
||||
body: AppPostUsersBody,
|
||||
@@ -578,13 +819,29 @@ def create_user(
|
||||
return JSONResponse(content={"username": body.username})
|
||||
|
||||
|
||||
@router.delete("/users/{username}")
|
||||
def delete_user(username: str):
|
||||
@router.delete(
|
||||
"/users/{username}",
|
||||
dependencies=[Depends(require_role(["admin"]))],
|
||||
summary="Delete user",
|
||||
description="Deletes a user by username. The built-in admin user cannot be deleted. Requires admin role.",
|
||||
)
|
||||
def delete_user(request: Request, username: str):
|
||||
# Prevent deletion of the built-in admin user
|
||||
if username == "admin":
|
||||
return JSONResponse(
|
||||
content={"message": "Cannot delete admin user"}, status_code=403
|
||||
)
|
||||
|
||||
User.delete_by_id(username)
|
||||
return JSONResponse(content={"success": True})
|
||||
|
||||
|
||||
@router.put("/users/{username}/password")
|
||||
@router.put(
|
||||
"/users/{username}/password",
|
||||
dependencies=[Depends(allow_any_authenticated())],
|
||||
summary="Update user password",
|
||||
description="Updates a user's password. Users can only change their own password unless they have admin role. Requires the current password to verify identity. Password must meet strength requirements (minimum 8 characters, uppercase letter, digit, and special character).",
|
||||
)
|
||||
async def update_password(
|
||||
request: Request,
|
||||
username: str,
|
||||
@@ -606,15 +863,70 @@ async def update_password(
|
||||
|
||||
HASH_ITERATIONS = request.app.frigate_config.auth.hash_iterations
|
||||
|
||||
password_hash = hash_password(body.password, iterations=HASH_ITERATIONS)
|
||||
User.set_by_id(username, {User.password_hash: password_hash})
|
||||
try:
|
||||
user = User.get_by_id(username)
|
||||
except DoesNotExist:
|
||||
return JSONResponse(content={"message": "User not found"}, status_code=404)
|
||||
|
||||
return JSONResponse(content={"success": True})
|
||||
# Require old_password when:
|
||||
# 1. Non-admin user is changing another user's password (admin only action)
|
||||
# 2. Any user is changing their own password
|
||||
is_changing_own_password = current_username == username
|
||||
is_non_admin = current_role != "admin"
|
||||
|
||||
if is_changing_own_password or is_non_admin:
|
||||
if not body.old_password:
|
||||
return JSONResponse(
|
||||
content={"message": "Current password is required"},
|
||||
status_code=400,
|
||||
)
|
||||
if not verify_password(body.old_password, user.password_hash):
|
||||
return JSONResponse(
|
||||
content={"message": "Current password is incorrect"},
|
||||
status_code=401,
|
||||
)
|
||||
|
||||
# Validate new password strength
|
||||
is_valid, error_message = validate_password_strength(body.password)
|
||||
if not is_valid:
|
||||
return JSONResponse(
|
||||
content={"message": error_message},
|
||||
status_code=400,
|
||||
)
|
||||
|
||||
password_hash = hash_password(body.password, iterations=HASH_ITERATIONS)
|
||||
User.update(
|
||||
{
|
||||
User.password_hash: password_hash,
|
||||
User.password_changed_at: datetime.now(),
|
||||
}
|
||||
).where(User.username == username).execute()
|
||||
|
||||
response = JSONResponse(content={"success": True})
|
||||
|
||||
# If user changed their own password, issue a new JWT to keep them logged in
|
||||
if current_username == username:
|
||||
JWT_COOKIE_NAME = request.app.frigate_config.auth.cookie_name
|
||||
JWT_COOKIE_SECURE = request.app.frigate_config.auth.cookie_secure
|
||||
JWT_SESSION_LENGTH = request.app.frigate_config.auth.session_length
|
||||
|
||||
expiration = int(time.time()) + JWT_SESSION_LENGTH
|
||||
encoded_jwt = create_encoded_jwt(
|
||||
username, current_role, expiration, request.app.jwt_token
|
||||
)
|
||||
# Set new JWT cookie on response
|
||||
set_jwt_cookie(
|
||||
response, JWT_COOKIE_NAME, encoded_jwt, expiration, JWT_COOKIE_SECURE
|
||||
)
|
||||
|
||||
return response
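A usage sketch for this endpoint (base URL and username are assumptions; the new password must satisfy the strength rules above):

```bash
curl -X PUT http://frigate.local:5000/api/users/alice/password \
  -H "Content-Type: application/json" \
  -d '{"old_password": "OldPass1!", "password": "NewPass1!"}'
```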
|
||||
|
||||
|
||||
@router.put(
|
||||
"/users/{username}/role",
|
||||
dependencies=[Depends(require_role(["admin"]))],
|
||||
summary="Update user role",
|
||||
description="Updates a user's role. The built-in admin user's role cannot be modified. Requires admin role.",
|
||||
)
|
||||
async def update_role(
|
||||
request: Request,
|
||||
|
||||
@@ -15,7 +15,11 @@ from onvif import ONVIFCamera, ONVIFError
|
||||
from zeep.exceptions import Fault, TransportError
|
||||
from zeep.transports import AsyncTransport
|
||||
|
||||
from frigate.api.auth import require_role
|
||||
from frigate.api.auth import (
|
||||
allow_any_authenticated,
|
||||
require_camera_access,
|
||||
require_role,
|
||||
)
|
||||
from frigate.api.defs.tags import Tags
|
||||
from frigate.config.config import FrigateConfig
|
||||
from frigate.util.builtin import clean_camera_user_pass
|
||||
@@ -50,7 +54,7 @@ def _is_valid_host(host: str) -> bool:
|
||||
return False
|
||||
|
||||
|
||||
@router.get("/go2rtc/streams")
|
||||
@router.get("/go2rtc/streams", dependencies=[Depends(allow_any_authenticated())])
|
||||
def go2rtc_streams():
|
||||
r = requests.get("http://127.0.0.1:1984/api/streams")
|
||||
if not r.ok:
|
||||
@@ -66,7 +70,9 @@ def go2rtc_streams():
|
||||
return JSONResponse(content=stream_data)
|
||||
|
||||
|
||||
@router.get("/go2rtc/streams/{camera_name}")
|
||||
@router.get(
|
||||
"/go2rtc/streams/{camera_name}", dependencies=[Depends(require_camera_access)]
|
||||
)
|
||||
def go2rtc_camera_stream(request: Request, camera_name: str):
|
||||
r = requests.get(
|
||||
f"http://127.0.0.1:1984/api/streams?src={camera_name}&video=all&audio=allµphone"
|
||||
@@ -161,7 +167,7 @@ def go2rtc_delete_stream(stream_name: str):
|
||||
)
|
||||
|
||||
|
||||
@router.get("/ffprobe")
|
||||
@router.get("/ffprobe", dependencies=[Depends(require_role(["admin"]))])
|
||||
def ffprobe(request: Request, paths: str = "", detailed: bool = False):
|
||||
path_param = paths
|
||||
|
||||
|
||||
@@ -710,7 +710,7 @@ def delete_classification_dataset_images(
|
||||
if os.path.isfile(file_path):
|
||||
os.unlink(file_path)
|
||||
|
||||
if os.path.exists(folder) and not os.listdir(folder):
|
||||
if os.path.exists(folder) and not os.listdir(folder) and category.lower() != "none":
|
||||
os.rmdir(folder)
|
||||
|
||||
return JSONResponse(
|
||||
@@ -870,6 +870,46 @@ def categorize_classification_image(request: Request, name: str, body: dict = No
|
||||
)
|
||||
|
||||
|
||||
@router.post(
|
||||
"/classification/{name}/dataset/{category}/create",
|
||||
response_model=GenericResponse,
|
||||
dependencies=[Depends(require_role(["admin"]))],
|
||||
summary="Create an empty classification category folder",
|
||||
description="""Creates an empty folder for a classification category.
|
||||
This is used to create folders for categories that don't have images yet.
|
||||
Returns a success message or an error if the name is invalid.""",
|
||||
)
|
||||
def create_classification_category(request: Request, name: str, category: str):
|
||||
config: FrigateConfig = request.app.frigate_config
|
||||
|
||||
if name not in config.classification.custom:
|
||||
return JSONResponse(
|
||||
content=(
|
||||
{
|
||||
"success": False,
|
||||
"message": f"{name} is not a known classification model.",
|
||||
}
|
||||
),
|
||||
status_code=404,
|
||||
)
|
||||
|
||||
category_folder = os.path.join(
|
||||
CLIPS_DIR, sanitize_filename(name), "dataset", sanitize_filename(category)
|
||||
)
|
||||
|
||||
os.makedirs(category_folder, exist_ok=True)
|
||||
|
||||
return JSONResponse(
|
||||
content=(
|
||||
{
|
||||
"success": True,
|
||||
"message": f"Successfully created category folder: {category}",
|
||||
}
|
||||
),
|
||||
status_code=200,
|
||||
)
|
||||
|
||||
|
||||
@router.post(
|
||||
"/classification/{name}/train/delete",
|
||||
response_model=GenericResponse,
|
||||
|
||||
@@ -1,8 +1,7 @@
|
||||
from enum import Enum
|
||||
from typing import Optional, Union
|
||||
from typing import Optional
|
||||
|
||||
from pydantic import BaseModel
|
||||
from pydantic.json_schema import SkipJsonSchema
|
||||
|
||||
|
||||
class Extension(str, Enum):
|
||||
@@ -48,15 +47,3 @@ class MediaMjpegFeedQueryParams(BaseModel):
|
||||
mask: Optional[int] = None
|
||||
motion: Optional[int] = None
|
||||
regions: Optional[int] = None
|
||||
|
||||
|
||||
class MediaRecordingsSummaryQueryParams(BaseModel):
|
||||
timezone: str = "utc"
|
||||
cameras: Optional[str] = "all"
|
||||
|
||||
|
||||
class MediaRecordingsAvailabilityQueryParams(BaseModel):
|
||||
cameras: str = "all"
|
||||
before: Union[float, SkipJsonSchema[None]] = None
|
||||
after: Union[float, SkipJsonSchema[None]] = None
|
||||
scale: int = 30
|
||||
|
||||
21
frigate/api/defs/query/recordings_query_parameters.py
Normal file
@@ -0,0 +1,21 @@
|
||||
from typing import Optional, Union
|
||||
|
||||
from pydantic import BaseModel
|
||||
from pydantic.json_schema import SkipJsonSchema
|
||||
|
||||
|
||||
class MediaRecordingsSummaryQueryParams(BaseModel):
|
||||
timezone: str = "utc"
|
||||
cameras: Optional[str] = "all"
|
||||
|
||||
|
||||
class MediaRecordingsAvailabilityQueryParams(BaseModel):
|
||||
cameras: str = "all"
|
||||
before: Union[float, SkipJsonSchema[None]] = None
|
||||
after: Union[float, SkipJsonSchema[None]] = None
|
||||
scale: int = 30
|
||||
|
||||
|
||||
class RecordingsDeleteQueryParams(BaseModel):
|
||||
keep: Optional[str] = None
|
||||
cameras: Optional[str] = "all"
|
||||
@@ -1,6 +1,6 @@
|
||||
from typing import Any, Dict, Optional
|
||||
from typing import Any, Dict, List, Optional
|
||||
|
||||
from pydantic import BaseModel
|
||||
from pydantic import BaseModel, Field
|
||||
|
||||
|
||||
class AppConfigSetBody(BaseModel):
|
||||
@@ -11,6 +11,7 @@ class AppConfigSetBody(BaseModel):
|
||||
|
||||
class AppPutPasswordBody(BaseModel):
|
||||
password: str
|
||||
old_password: Optional[str] = None
|
||||
|
||||
|
||||
class AppPostUsersBody(BaseModel):
|
||||
@@ -26,3 +27,16 @@ class AppPostLoginBody(BaseModel):
|
||||
|
||||
class AppPutRoleBody(BaseModel):
|
||||
role: str
|
||||
|
||||
|
||||
class MediaSyncBody(BaseModel):
|
||||
dry_run: bool = Field(
|
||||
default=True, description="If True, only report orphans without deleting them"
|
||||
)
|
||||
media_types: List[str] = Field(
|
||||
default=["all"],
|
||||
description="Types of media to sync: 'all', 'event_snapshots', 'event_thumbnails', 'review_thumbnails', 'previews', 'exports', 'recordings'",
|
||||
)
|
||||
force: bool = Field(
|
||||
default=False, description="If True, bypass safety threshold checks"
|
||||
)
|
||||
|
||||
@@ -29,7 +29,6 @@ class EventsDescriptionBody(BaseModel):
|
||||
|
||||
|
||||
class EventsCreateBody(BaseModel):
|
||||
source_type: Optional[str] = "api"
|
||||
sub_label: Optional[str] = None
|
||||
score: Optional[float] = 0
|
||||
duration: Optional[int] = 30
|
||||
|
||||
35
frigate/api/defs/request/export_case_body.py
Normal file
@@ -0,0 +1,35 @@
|
||||
from typing import Optional
|
||||
|
||||
from pydantic import BaseModel, Field
|
||||
|
||||
|
||||
class ExportCaseCreateBody(BaseModel):
|
||||
"""Request body for creating a new export case."""
|
||||
|
||||
name: str = Field(max_length=100, description="Friendly name of the export case")
|
||||
description: Optional[str] = Field(
|
||||
default=None, description="Optional description of the export case"
|
||||
)
|
||||
|
||||
|
||||
class ExportCaseUpdateBody(BaseModel):
|
||||
"""Request body for updating an existing export case."""
|
||||
|
||||
name: Optional[str] = Field(
|
||||
default=None,
|
||||
max_length=100,
|
||||
description="Updated friendly name of the export case",
|
||||
)
|
||||
description: Optional[str] = Field(
|
||||
default=None, description="Updated description of the export case"
|
||||
)
|
||||
|
||||
|
||||
class ExportCaseAssignBody(BaseModel):
|
||||
"""Request body for assigning or unassigning an export to a case."""
|
||||
|
||||
export_case_id: Optional[str] = Field(
|
||||
default=None,
|
||||
max_length=30,
|
||||
description="Case ID to assign to the export, or null to unassign",
|
||||
)
|
||||
@@ -1,4 +1,4 @@
|
||||
from typing import Union
|
||||
from typing import Optional, Union
|
||||
|
||||
from pydantic import BaseModel, Field
|
||||
from pydantic.json_schema import SkipJsonSchema
|
||||
@@ -18,3 +18,9 @@ class ExportRecordingsBody(BaseModel):
|
||||
)
|
||||
name: str = Field(title="Friendly name", default=None, max_length=256)
|
||||
image_path: Union[str, SkipJsonSchema[None]] = None
|
||||
export_case_id: Optional[str] = Field(
|
||||
default=None,
|
||||
title="Export case ID",
|
||||
max_length=30,
|
||||
description="ID of the export case to assign this export to",
|
||||
)
|
||||
|
||||
22
frigate/api/defs/response/export_case_response.py
Normal file
@@ -0,0 +1,22 @@
|
||||
from typing import List, Optional
|
||||
|
||||
from pydantic import BaseModel, Field
|
||||
|
||||
|
||||
class ExportCaseModel(BaseModel):
|
||||
"""Model representing a single export case."""
|
||||
|
||||
id: str = Field(description="Unique identifier for the export case")
|
||||
name: str = Field(description="Friendly name of the export case")
|
||||
description: Optional[str] = Field(
|
||||
default=None, description="Optional description of the export case"
|
||||
)
|
||||
created_at: float = Field(
|
||||
description="Unix timestamp when the export case was created"
|
||||
)
|
||||
updated_at: float = Field(
|
||||
description="Unix timestamp when the export case was last updated"
|
||||
)
|
||||
|
||||
|
||||
ExportCasesResponse = List[ExportCaseModel]
|
||||
@@ -15,6 +15,9 @@ class ExportModel(BaseModel):
|
||||
in_progress: bool = Field(
|
||||
description="Whether the export is currently being processed"
|
||||
)
|
||||
export_case_id: Optional[str] = Field(
|
||||
default=None, description="ID of the export case this export belongs to"
|
||||
)
|
||||
|
||||
|
||||
class StartExportResponse(BaseModel):
|
||||
|
||||
@@ -3,13 +3,14 @@ from enum import Enum
|
||||
|
||||
class Tags(Enum):
|
||||
app = "App"
|
||||
auth = "Auth"
|
||||
camera = "Camera"
|
||||
preview = "Preview"
|
||||
events = "Events"
|
||||
export = "Export"
|
||||
classification = "Classification"
|
||||
logs = "Logs"
|
||||
media = "Media"
|
||||
notifications = "Notifications"
|
||||
preview = "Preview"
|
||||
recordings = "Recordings"
|
||||
review = "Review"
|
||||
export = "Export"
|
||||
events = "Events"
|
||||
classification = "Classification"
|
||||
auth = "Auth"
|
||||
|
||||
@@ -22,6 +22,7 @@ from peewee import JOIN, DoesNotExist, fn, operator
|
||||
from playhouse.shortcuts import model_to_dict
|
||||
|
||||
from frigate.api.auth import (
|
||||
allow_any_authenticated,
|
||||
get_allowed_cameras_for_filter,
|
||||
require_camera_access,
|
||||
require_role,
|
||||
@@ -69,6 +70,7 @@ router = APIRouter(tags=[Tags.events])
|
||||
@router.get(
|
||||
"/events",
|
||||
response_model=list[EventResponse],
|
||||
dependencies=[Depends(allow_any_authenticated())],
|
||||
summary="Get events",
|
||||
description="Returns a list of events.",
|
||||
)
|
||||
@@ -343,7 +345,8 @@ def events(
|
||||
@router.get(
|
||||
"/events/explore",
|
||||
response_model=list[EventResponse],
|
||||
summary="Get summary of objects.",
|
||||
dependencies=[Depends(allow_any_authenticated())],
|
||||
summary="Get summary of objects",
|
||||
description="""Gets a summary of objects from the database.
|
||||
Returns a list of objects with a max of `limit` objects for each label.
|
||||
""",
|
||||
@@ -435,7 +438,8 @@ def events_explore(
|
||||
@router.get(
|
||||
"/event_ids",
|
||||
response_model=list[EventResponse],
|
||||
summary="Get events by ids.",
|
||||
dependencies=[Depends(allow_any_authenticated())],
|
||||
summary="Get events by ids",
|
||||
description="""Gets events by a list of ids.
|
||||
Returns a list of events.
|
||||
""",
|
||||
@@ -468,7 +472,8 @@ async def event_ids(ids: str, request: Request):
|
||||
|
||||
@router.get(
|
||||
"/events/search",
|
||||
summary="Search events.",
|
||||
dependencies=[Depends(allow_any_authenticated())],
|
||||
summary="Search events",
|
||||
description="""Searches for events in the database.
|
||||
Returns a list of events.
|
||||
""",
|
||||
@@ -808,7 +813,7 @@ def events_search(
|
||||
return JSONResponse(content=processed_events)
|
||||
|
||||
|
||||
@router.get("/events/summary")
|
||||
@router.get("/events/summary", dependencies=[Depends(allow_any_authenticated())])
|
||||
def events_summary(
|
||||
params: EventsSummaryQueryParams = Depends(),
|
||||
allowed_cameras: List[str] = Depends(get_allowed_cameras_for_filter),
|
||||
@@ -918,7 +923,8 @@ def events_summary(
|
||||
@router.get(
|
||||
"/events/{event_id}",
|
||||
response_model=EventResponse,
|
||||
summary="Get event by id.",
|
||||
dependencies=[Depends(allow_any_authenticated())],
|
||||
summary="Get event by id",
|
||||
description="Gets an event by its id.",
|
||||
)
|
||||
async def event(event_id: str, request: Request):
|
||||
@@ -961,7 +967,8 @@ def set_retain(event_id: str):
|
||||
@router.post(
|
||||
"/events/{event_id}/plus",
|
||||
response_model=EventUploadPlusResponse,
|
||||
summary="Send event to Frigate+.",
|
||||
dependencies=[Depends(require_role(["admin"]))],
|
||||
summary="Send event to Frigate+",
|
||||
description="""Sends an event to Frigate+.
|
||||
Returns a success message or an error if the event is not found.
|
||||
""",
|
||||
@@ -1101,6 +1108,7 @@ async def send_to_plus(request: Request, event_id: str, body: SubmitPlusBody = N
|
||||
@router.put(
|
||||
"/events/{event_id}/false_positive",
|
||||
response_model=EventUploadPlusResponse,
|
||||
dependencies=[Depends(require_role(["admin"]))],
|
||||
summary="Submit false positive to Frigate+",
|
||||
description="""Submit an event as a false positive to Frigate+.
|
||||
This endpoint is the same as the standard Frigate+ submission endpoint,
|
||||
@@ -1199,7 +1207,7 @@ async def false_positive(request: Request, event_id: str):
|
||||
"/events/{event_id}/retain",
|
||||
response_model=GenericResponse,
|
||||
dependencies=[Depends(require_role(["admin"]))],
|
||||
summary="Stop event from being retained indefinitely.",
|
||||
summary="Stop event from being retained indefinitely",
|
||||
description="""Stops an event from being retained indefinitely.
|
||||
Returns a success message or an error if the event is not found.
|
||||
NOTE: This is a legacy endpoint and is not supported in the frontend.
|
||||
@@ -1228,7 +1236,7 @@ async def delete_retain(event_id: str, request: Request):
|
||||
"/events/{event_id}/sub_label",
|
||||
response_model=GenericResponse,
|
||||
dependencies=[Depends(require_role(["admin"]))],
|
||||
summary="Set event sub label.",
|
||||
summary="Set event sub label",
|
||||
description="""Sets an event's sub label.
|
||||
Returns a success message or an error if the event is not found.
|
||||
""",
|
||||
@@ -1287,7 +1295,7 @@ async def set_sub_label(
|
||||
"/events/{event_id}/recognized_license_plate",
|
||||
response_model=GenericResponse,
|
||||
dependencies=[Depends(require_role(["admin"]))],
|
||||
summary="Set event license plate.",
|
||||
summary="Set event license plate",
|
||||
description="""Sets an event's license plate.
|
||||
Returns a success message or an error if the event is not found.
|
||||
""",
|
||||
@@ -1347,7 +1355,7 @@ async def set_plate(
|
||||
"/events/{event_id}/description",
|
||||
response_model=GenericResponse,
|
||||
dependencies=[Depends(require_role(["admin"]))],
|
||||
summary="Set event description.",
|
||||
summary="Set event description",
|
||||
description="""Sets an event's description.
|
||||
Returns a success message or an error if the event is not found.
|
||||
""",
|
||||
@@ -1403,7 +1411,7 @@ async def set_description(
|
||||
"/events/{event_id}/description/regenerate",
|
||||
response_model=GenericResponse,
|
||||
dependencies=[Depends(require_role(["admin"]))],
|
||||
summary="Regenerate event description.",
|
||||
summary="Regenerate event description",
|
||||
description="""Regenerates an event's description.
|
||||
Returns a success message or an error if the event is not found.
|
||||
""",
|
||||
@@ -1455,8 +1463,8 @@ async def regenerate_description(
|
||||
@router.post(
|
||||
"/description/generate",
|
||||
response_model=GenericResponse,
|
||||
# dependencies=[Depends(require_role(["admin"]))],
|
||||
summary="Generate description embedding.",
|
||||
dependencies=[Depends(require_role(["admin"]))],
|
||||
summary="Generate description embedding",
|
||||
description="""Generates an embedding for an event's description.
|
||||
Returns a success message or an error if the event is not found.
|
||||
""",
|
||||
@@ -1521,7 +1529,7 @@ async def delete_single_event(event_id: str, request: Request) -> dict:
|
||||
"/events/{event_id}",
|
||||
response_model=GenericResponse,
|
||||
dependencies=[Depends(require_role(["admin"]))],
|
||||
summary="Delete event.",
|
||||
summary="Delete event",
|
||||
description="""Deletes an event from the database.
|
||||
Returns a success message or an error if the event is not found.
|
||||
""",
|
||||
@@ -1536,7 +1544,7 @@ async def delete_event(request: Request, event_id: str):
|
||||
"/events/",
|
||||
response_model=EventMultiDeleteResponse,
|
||||
dependencies=[Depends(require_role(["admin"]))],
|
||||
summary="Delete events.",
|
||||
summary="Delete events",
|
||||
description="""Deletes a list of events from the database.
|
||||
Returns a success message or an error if the events are not found.
|
||||
""",
|
||||
@@ -1570,7 +1578,7 @@ async def delete_events(request: Request, body: EventsDeleteBody):
|
||||
"/events/{camera_name}/{label}/create",
|
||||
response_model=EventCreateResponse,
|
||||
dependencies=[Depends(require_role(["admin"]))],
|
||||
summary="Create manual event.",
|
||||
summary="Create manual event",
|
||||
description="""Creates a manual event in the database.
|
||||
Returns a success message or an error if the event is not found.
|
||||
NOTES:
|
||||
@@ -1612,7 +1620,7 @@ def create_event(
|
||||
body.score,
|
||||
body.sub_label,
|
||||
body.duration,
|
||||
body.source_type,
|
||||
"api",
|
||||
body.draw,
|
||||
),
|
||||
EventMetadataTypeEnum.manual_event_create.value,
|
||||
@@ -1634,7 +1642,7 @@ def create_event(
|
||||
"/events/{event_id}/end",
|
||||
response_model=GenericResponse,
|
||||
dependencies=[Depends(require_role(["admin"]))],
|
||||
summary="End manual event.",
|
||||
summary="End manual event",
|
||||
description="""Ends a manual event.
|
||||
Returns a success message or an error if the event is not found.
|
||||
NOTE: This should only be used for manual events.
|
||||
@@ -1644,10 +1652,27 @@ async def end_event(request: Request, event_id: str, body: EventsEndBody):
    try:
        event: Event = Event.get(Event.id == event_id)
        await require_camera_access(event.camera, request=request)

        if body.end_time is not None and body.end_time < event.start_time:
            return JSONResponse(
                content=(
                    {
                        "success": False,
                        "message": f"end_time ({body.end_time}) cannot be before start_time ({event.start_time}).",
                    }
                ),
                status_code=400,
            )

        end_time = body.end_time or datetime.datetime.now().timestamp()
        request.app.event_metadata_updater.publish(
            (event_id, end_time), EventMetadataTypeEnum.manual_event_end.value
        )
    except DoesNotExist:
        return JSONResponse(
            content=({"success": False, "message": f"Event {event_id} not found."}),
            status_code=404,
        )
    except Exception:
        return JSONResponse(
            content=(
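The hunk above adds server-side validation so a manual event's end_time can no longer precede its start_time. A minimal client-side sketch of the new behavior, not part of the diff: the base URL, the example event id, and the use of the requests library are assumptions, and the HTTP method is assumed to match the existing manual-event end endpoint (PUT).

import requests

BASE = "http://frigate.local:5000/api"  # assumed local instance
event_id = "1718000000.123456-abc123"   # hypothetical manual event id

# An end_time earlier than the event's start_time should now return HTTP 400
# instead of being published to the event metadata updater.
resp = requests.put(f"{BASE}/events/{event_id}/end", json={"end_time": 1.0})

if resp.status_code == 400:
    print(resp.json()["message"])  # "end_time (...) cannot be before start_time (...)"
elif resp.status_code == 404:
    print("event not found")       # unchanged behavior for unknown ids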
@@ -1666,7 +1691,7 @@ async def end_event(request: Request, event_id: str, body: EventsEndBody):
|
||||
"/trigger/embedding",
|
||||
response_model=dict,
|
||||
dependencies=[Depends(require_role(["admin"]))],
|
||||
summary="Create trigger embedding.",
|
||||
summary="Create trigger embedding",
|
||||
description="""Creates a trigger embedding for a specific trigger.
|
||||
Returns a success message or an error if the trigger is not found.
|
||||
""",
|
||||
@@ -1723,37 +1748,40 @@ def create_trigger_embedding(
|
||||
if event.data.get("type") != "object":
|
||||
return
|
||||
|
||||
if thumbnail := get_event_thumbnail_bytes(event):
|
||||
cursor = context.db.execute_sql(
|
||||
"""
|
||||
SELECT thumbnail_embedding FROM vec_thumbnails WHERE id = ?
|
||||
""",
|
||||
[body.data],
|
||||
# Get the thumbnail
|
||||
thumbnail = get_event_thumbnail_bytes(event)
|
||||
|
||||
if thumbnail is None:
|
||||
return JSONResponse(
|
||||
content={
|
||||
"success": False,
|
||||
"message": f"Failed to get thumbnail for {body.data} for {body.type} trigger",
|
||||
},
|
||||
status_code=400,
|
||||
)
|
||||
|
||||
row = cursor.fetchone() if cursor else None
|
||||
# Try to reuse existing embedding from database
|
||||
cursor = context.db.execute_sql(
|
||||
"""
|
||||
SELECT thumbnail_embedding FROM vec_thumbnails WHERE id = ?
|
||||
""",
|
||||
[body.data],
|
||||
)
|
||||
|
||||
if row:
|
||||
query_embedding = row[0]
|
||||
embedding = np.frombuffer(query_embedding, dtype=np.float32)
|
||||
row = cursor.fetchone() if cursor else None
|
||||
|
||||
if row:
|
||||
query_embedding = row[0]
|
||||
embedding = np.frombuffer(query_embedding, dtype=np.float32)
|
||||
else:
|
||||
# Extract valid thumbnail
|
||||
thumbnail = get_event_thumbnail_bytes(event)
|
||||
|
||||
if thumbnail is None:
|
||||
return JSONResponse(
|
||||
content={
|
||||
"success": False,
|
||||
"message": f"Failed to get thumbnail for {body.data} for {body.type} trigger",
|
||||
},
|
||||
status_code=400,
|
||||
)
|
||||
|
||||
# Generate new embedding
|
||||
embedding = context.generate_image_embedding(
|
||||
body.data, (base64.b64encode(thumbnail).decode("ASCII"))
|
||||
)
|
||||
|
||||
if embedding is None:
|
||||
if embedding is None or (
|
||||
isinstance(embedding, (list, np.ndarray)) and len(embedding) == 0
|
||||
):
|
||||
return JSONResponse(
|
||||
content={
|
||||
"success": False,
|
||||
@@ -1821,7 +1849,7 @@ def create_trigger_embedding(
|
||||
"/trigger/embedding/{camera_name}/{name}",
|
||||
response_model=dict,
|
||||
dependencies=[Depends(require_role(["admin"]))],
|
||||
summary="Update trigger embedding.",
|
||||
summary="Update trigger embedding",
|
||||
description="""Updates a trigger embedding for a specific trigger.
|
||||
Returns a success message or an error if the trigger is not found.
|
||||
""",
|
||||
@@ -1888,7 +1916,9 @@ def update_trigger_embedding(
|
||||
body.data, (base64.b64encode(thumbnail).decode("ASCII"))
|
||||
)
|
||||
|
||||
if embedding is None:
|
||||
if embedding is None or (
|
||||
isinstance(embedding, (list, np.ndarray)) and len(embedding) == 0
|
||||
):
|
||||
return JSONResponse(
|
||||
content={
|
||||
"success": False,
|
||||
@@ -1984,7 +2014,7 @@ def update_trigger_embedding(
|
||||
"/trigger/embedding/{camera_name}/{name}",
|
||||
response_model=dict,
|
||||
dependencies=[Depends(require_role(["admin"]))],
|
||||
summary="Delete trigger embedding.",
|
||||
summary="Delete trigger embedding",
|
||||
description="""Deletes a trigger embedding for a specific trigger.
|
||||
Returns a success message or an error if the trigger is not found.
|
||||
""",
|
||||
@@ -2058,7 +2088,7 @@ def delete_trigger_embedding(
|
||||
"/triggers/status/{camera_name}",
|
||||
response_model=dict,
|
||||
dependencies=[Depends(require_role(["admin"]))],
|
||||
summary="Get triggers status.",
|
||||
summary="Get triggers status",
|
||||
description="""Gets the status of all triggers for a specific camera.
|
||||
Returns a success message or an error if the camera is not found.
|
||||
""",
|
||||
|
||||
@@ -4,22 +4,32 @@ import logging
|
||||
import random
|
||||
import string
|
||||
from pathlib import Path
|
||||
from typing import List
|
||||
from typing import List, Optional
|
||||
|
||||
import psutil
|
||||
from fastapi import APIRouter, Depends, Request
|
||||
from fastapi import APIRouter, Depends, Query, Request
|
||||
from fastapi.responses import JSONResponse
|
||||
from pathvalidate import sanitize_filepath
|
||||
from peewee import DoesNotExist
|
||||
from playhouse.shortcuts import model_to_dict
|
||||
|
||||
from frigate.api.auth import (
|
||||
allow_any_authenticated,
|
||||
get_allowed_cameras_for_filter,
|
||||
require_camera_access,
|
||||
require_role,
|
||||
)
|
||||
from frigate.api.defs.request.export_case_body import (
|
||||
ExportCaseAssignBody,
|
||||
ExportCaseCreateBody,
|
||||
ExportCaseUpdateBody,
|
||||
)
|
||||
from frigate.api.defs.request.export_recordings_body import ExportRecordingsBody
|
||||
from frigate.api.defs.request.export_rename_body import ExportRenameBody
|
||||
from frigate.api.defs.response.export_case_response import (
|
||||
ExportCaseModel,
|
||||
ExportCasesResponse,
|
||||
)
|
||||
from frigate.api.defs.response.export_response import (
|
||||
ExportModel,
|
||||
ExportsResponse,
|
||||
@@ -28,7 +38,7 @@ from frigate.api.defs.response.export_response import (
|
||||
from frigate.api.defs.response.generic_response import GenericResponse
|
||||
from frigate.api.defs.tags import Tags
|
||||
from frigate.const import CLIPS_DIR, EXPORT_DIR
|
||||
from frigate.models import Export, Previews, Recordings
|
||||
from frigate.models import Export, ExportCase, Previews, Recordings
|
||||
from frigate.record.export import (
|
||||
PlaybackFactorEnum,
|
||||
PlaybackSourceEnum,
|
||||
@@ -44,23 +54,189 @@ router = APIRouter(tags=[Tags.export])
@router.get(
    "/exports",
    response_model=ExportsResponse,
    dependencies=[Depends(allow_any_authenticated())],
    summary="Get exports",
    description="""Gets all exports from the database for cameras the user has access to.
    Returns a list of exports ordered by date (most recent first).""",
)
def get_exports(
    allowed_cameras: List[str] = Depends(get_allowed_cameras_for_filter),
    export_case_id: Optional[str] = None,
    cameras: Optional[str] = Query(default="all"),
    start_date: Optional[float] = None,
    end_date: Optional[float] = None,
):
    exports = (
        Export.select()
        .where(Export.camera << allowed_cameras)
        .order_by(Export.date.desc())
        .dicts()
        .iterator()
    )
    query = Export.select().where(Export.camera << allowed_cameras)

    if export_case_id is not None:
        if export_case_id == "unassigned":
            query = query.where(Export.export_case.is_null(True))
        else:
            query = query.where(Export.export_case == export_case_id)

    if cameras and cameras != "all":
        requested = set(cameras.split(","))
        filtered_cameras = list(requested.intersection(allowed_cameras))
        if not filtered_cameras:
            return JSONResponse(content=[])
        query = query.where(Export.camera << filtered_cameras)

    if start_date is not None:
        query = query.where(Export.date >= start_date)

    if end_date is not None:
        query = query.where(Export.date <= end_date)

    exports = query.order_by(Export.date.desc()).dicts().iterator()
    return JSONResponse(content=[e for e in exports])


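The reworked get_exports handler above layers optional filters (export_case_id, cameras, start_date, end_date) on top of the existing camera-access restriction. A small client-side sketch follows; the base URL (including the /api prefix) and the requests library are assumptions for illustration only.

import requests

BASE = "http://frigate.local:5000/api"  # assumed local instance

# Exports for two cameras in a time window; the server still intersects
# the requested cameras with those the authenticated user may access.
params = {
    "cameras": "front_door,driveway",
    "start_date": 1718000000.0,
    "end_date": 1718086400.0,
}
exports = requests.get(f"{BASE}/exports", params=params).json()

# Only exports that are not assigned to any case (special value from the handler).
unassigned = requests.get(f"{BASE}/exports", params={"export_case_id": "unassigned"}).json()
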
@router.get(
|
||||
"/cases",
|
||||
response_model=ExportCasesResponse,
|
||||
dependencies=[Depends(allow_any_authenticated())],
|
||||
summary="Get export cases",
|
||||
description="Gets all export cases from the database.",
|
||||
)
|
||||
def get_export_cases():
|
||||
cases = (
|
||||
ExportCase.select().order_by(ExportCase.created_at.desc()).dicts().iterator()
|
||||
)
|
||||
return JSONResponse(content=[c for c in cases])
|
||||
|
||||
|
||||
@router.post(
|
||||
"/cases",
|
||||
response_model=ExportCaseModel,
|
||||
dependencies=[Depends(require_role(["admin"]))],
|
||||
summary="Create export case",
|
||||
description="Creates a new export case.",
|
||||
)
|
||||
def create_export_case(body: ExportCaseCreateBody):
|
||||
case = ExportCase.create(
|
||||
id="".join(random.choices(string.ascii_lowercase + string.digits, k=12)),
|
||||
name=body.name,
|
||||
description=body.description,
|
||||
created_at=Path().stat().st_mtime,
|
||||
updated_at=Path().stat().st_mtime,
|
||||
)
|
||||
return JSONResponse(content=model_to_dict(case))
|
||||
|
||||
|
||||
@router.get(
|
||||
"/cases/{case_id}",
|
||||
response_model=ExportCaseModel,
|
||||
dependencies=[Depends(allow_any_authenticated())],
|
||||
summary="Get a single export case",
|
||||
description="Gets a specific export case by ID.",
|
||||
)
|
||||
def get_export_case(case_id: str):
|
||||
try:
|
||||
case = ExportCase.get(ExportCase.id == case_id)
|
||||
return JSONResponse(content=model_to_dict(case))
|
||||
except DoesNotExist:
|
||||
return JSONResponse(
|
||||
content={"success": False, "message": "Export case not found"},
|
||||
status_code=404,
|
||||
)
|
||||
|
||||
|
||||
@router.patch(
|
||||
"/cases/{case_id}",
|
||||
response_model=GenericResponse,
|
||||
dependencies=[Depends(require_role(["admin"]))],
|
||||
summary="Update export case",
|
||||
description="Updates an existing export case.",
|
||||
)
|
||||
def update_export_case(case_id: str, body: ExportCaseUpdateBody):
|
||||
try:
|
||||
case = ExportCase.get(ExportCase.id == case_id)
|
||||
except DoesNotExist:
|
||||
return JSONResponse(
|
||||
content={"success": False, "message": "Export case not found"},
|
||||
status_code=404,
|
||||
)
|
||||
|
||||
if body.name is not None:
|
||||
case.name = body.name
|
||||
if body.description is not None:
|
||||
case.description = body.description
|
||||
|
||||
case.save()
|
||||
|
||||
return JSONResponse(
|
||||
content={"success": True, "message": "Successfully updated export case."}
|
||||
)
|
||||
|
||||
|
||||
@router.delete(
|
||||
"/cases/{case_id}",
|
||||
response_model=GenericResponse,
|
||||
dependencies=[Depends(require_role(["admin"]))],
|
||||
summary="Delete export case",
|
||||
description="""Deletes an export case.\n Exports that reference this case will have their export_case set to null.\n """,
|
||||
)
|
||||
def delete_export_case(case_id: str):
|
||||
try:
|
||||
case = ExportCase.get(ExportCase.id == case_id)
|
||||
except DoesNotExist:
|
||||
return JSONResponse(
|
||||
content={"success": False, "message": "Export case not found"},
|
||||
status_code=404,
|
||||
)
|
||||
|
||||
# Unassign exports from this case but keep the exports themselves
|
||||
Export.update(export_case=None).where(Export.export_case == case).execute()
|
||||
|
||||
case.delete_instance()
|
||||
|
||||
return JSONResponse(
|
||||
content={"success": True, "message": "Successfully deleted export case."}
|
||||
)
|
||||
|
||||
|
||||
@router.patch(
|
||||
"/export/{export_id}/case",
|
||||
response_model=GenericResponse,
|
||||
dependencies=[Depends(require_role(["admin"]))],
|
||||
summary="Assign export to case",
|
||||
description=(
|
||||
"Assigns an export to a case, or unassigns it if export_case_id is null."
|
||||
),
|
||||
)
|
||||
async def assign_export_case(
|
||||
export_id: str,
|
||||
body: ExportCaseAssignBody,
|
||||
request: Request,
|
||||
):
|
||||
try:
|
||||
export: Export = Export.get(Export.id == export_id)
|
||||
await require_camera_access(export.camera, request=request)
|
||||
except DoesNotExist:
|
||||
return JSONResponse(
|
||||
content={"success": False, "message": "Export not found."},
|
||||
status_code=404,
|
||||
)
|
||||
|
||||
if body.export_case_id is not None:
|
||||
try:
|
||||
ExportCase.get(ExportCase.id == body.export_case_id)
|
||||
except DoesNotExist:
|
||||
return JSONResponse(
|
||||
content={"success": False, "message": "Export case not found."},
|
||||
status_code=404,
|
||||
)
|
||||
export.export_case = body.export_case_id
|
||||
else:
|
||||
export.export_case = None
|
||||
|
||||
export.save()
|
||||
|
||||
return JSONResponse(
|
||||
content={"success": True, "message": "Successfully updated export case."}
|
||||
)
|
||||
|
||||
|
||||
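Taken together, the new /cases endpoints and the PATCH /export/{export_id}/case route above form a small workflow for grouping exports into cases. A hedged sketch of that flow, not part of the diff: the base URL, the example export id, and an admin-authenticated session are assumptions; the payload fields come from ExportCaseCreateBody and ExportCaseAssignBody as shown above.

import requests

BASE = "http://frigate.local:5000/api"  # assumed local instance; admin session assumed
export_id = "abc123def456"              # hypothetical existing export id

# 1. Create a case (admin only).
case = requests.post(
    f"{BASE}/cases",
    json={"name": "Backyard incident", "description": "Clips gathered for a report"},
).json()

# 2. Assign an existing export to it; sending {"export_case_id": None} unassigns it.
requests.patch(f"{BASE}/export/{export_id}/case", json={"export_case_id": case["id"]})

# 3. List exports belonging to the case.
grouped = requests.get(f"{BASE}/exports", params={"export_case_id": case["id"]}).json()
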
@router.post(
|
||||
"/export/{camera_name}/start/{start_time}/end/{end_time}",
|
||||
response_model=StartExportResponse,
|
||||
@@ -91,6 +267,16 @@ def export_recording(
|
||||
friendly_name = body.name
|
||||
existing_image = sanitize_filepath(body.image_path) if body.image_path else None
|
||||
|
||||
export_case_id = body.export_case_id
|
||||
if export_case_id is not None:
|
||||
try:
|
||||
ExportCase.get(ExportCase.id == export_case_id)
|
||||
except DoesNotExist:
|
||||
return JSONResponse(
|
||||
content={"success": False, "message": "Export case not found"},
|
||||
status_code=404,
|
||||
)
|
||||
|
||||
# Ensure that existing_image is a valid path
|
||||
if existing_image and not existing_image.startswith(CLIPS_DIR):
|
||||
return JSONResponse(
|
||||
@@ -159,6 +345,7 @@ def export_recording(
|
||||
if playback_source in PlaybackSourceEnum.__members__.values()
|
||||
else PlaybackSourceEnum.recordings
|
||||
),
|
||||
export_case_id,
|
||||
)
|
||||
exporter.start()
|
||||
return JSONResponse(
|
||||
@@ -272,6 +459,7 @@ async def export_delete(event_id: str, request: Request):
|
||||
@router.get(
|
||||
"/exports/{export_id}",
|
||||
response_model=ExportModel,
|
||||
dependencies=[Depends(allow_any_authenticated())],
|
||||
summary="Get a single export",
|
||||
description="""Gets a specific export by ID. The user must have access to the camera
|
||||
associated with the export.""",
|
||||
|
||||
@@ -2,7 +2,7 @@ import logging
 import re
 from typing import Optional

-from fastapi import FastAPI, Request
+from fastapi import Depends, FastAPI, Request
 from fastapi.responses import JSONResponse
 from joserfc.jwk import OctKey
 from playhouse.sqliteq import SqliteQueueDatabase
@@ -22,9 +22,10 @@ from frigate.api import (
     media,
     notification,
     preview,
+    record,
     review,
 )
-from frigate.api.auth import get_jwt_secret, limiter
+from frigate.api.auth import get_jwt_secret, limiter, require_admin_by_default
 from frigate.comms.event_metadata_updater import (
     EventMetadataPublisher,
 )
@@ -62,11 +63,15 @@ def create_fastapi_app(
     stats_emitter: StatsEmitter,
     event_metadata_updater: EventMetadataPublisher,
     config_publisher: CameraConfigUpdatePublisher,
+    enforce_default_admin: bool = True,
 ):
     logger.info("Starting FastAPI app")
     app = FastAPI(
         debug=False,
         swagger_ui_parameters={"apisSorter": "alpha", "operationsSorter": "alpha"},
+        dependencies=[Depends(require_admin_by_default())]
+        if enforce_default_admin
+        else [],
     )

     # update the request_address with the x-forwarded-for header from nginx
@@ -124,6 +129,7 @@ def create_fastapi_app(
     app.include_router(export.router)
     app.include_router(event.router)
     app.include_router(media.router)
+    app.include_router(record.router)
     # App Properties
     app.frigate_config = frigate_config
     app.embeddings = embeddings

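The app.py hunks above wire an app-wide require_admin_by_default() dependency into the FastAPI constructor unless enforce_default_admin is disabled, with individual routes opting in to broader access via allow_any_authenticated(), require_role(), or require_camera_access. A minimal standalone sketch of that pattern, assuming nothing about Frigate's actual auth internals (the dependency below is an illustrative stub, not the project's implementation):

from fastapi import Depends, FastAPI, HTTPException, Request


def require_admin_stub():
    # Illustrative stand-in for an auth dependency: reject the request
    # unless the proxy (or test client) marked it as coming from an admin.
    async def dependency(request: Request):
        if request.headers.get("remote-role") != "admin":
            raise HTTPException(status_code=403, detail="admin required")

    return dependency


def build_app(enforce_default_admin: bool = True) -> FastAPI:
    # Same shape as the hunk above: a dependency applied to every route,
    # unless the caller (for example a unit test) opts out at creation time.
    return FastAPI(
        dependencies=[Depends(require_admin_stub())] if enforce_default_admin else [],
    )


app = build_app()


@app.get("/ping")
def ping():
    return {"ok": True}  # reachable only when the app-wide dependency passes
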
@@ -8,9 +8,8 @@ import os
|
||||
import subprocess as sp
|
||||
import time
|
||||
from datetime import datetime, timedelta, timezone
|
||||
from functools import reduce
|
||||
from pathlib import Path as FilePath
|
||||
from typing import Any, List
|
||||
from typing import Any
|
||||
from urllib.parse import unquote
|
||||
|
||||
import cv2
|
||||
@@ -19,17 +18,18 @@ import pytz
|
||||
from fastapi import APIRouter, Depends, Path, Query, Request, Response
|
||||
from fastapi.responses import FileResponse, JSONResponse, StreamingResponse
|
||||
from pathvalidate import sanitize_filename
|
||||
from peewee import DoesNotExist, fn, operator
|
||||
from peewee import DoesNotExist, fn
|
||||
from tzlocal import get_localzone_name
|
||||
|
||||
from frigate.api.auth import get_allowed_cameras_for_filter, require_camera_access
|
||||
from frigate.api.auth import (
|
||||
allow_any_authenticated,
|
||||
require_camera_access,
|
||||
)
|
||||
from frigate.api.defs.query.media_query_parameters import (
|
||||
Extension,
|
||||
MediaEventsSnapshotQueryParams,
|
||||
MediaLatestFrameQueryParams,
|
||||
MediaMjpegFeedQueryParams,
|
||||
MediaRecordingsAvailabilityQueryParams,
|
||||
MediaRecordingsSummaryQueryParams,
|
||||
)
|
||||
from frigate.api.defs.tags import Tags
|
||||
from frigate.camera.state import CameraState
|
||||
@@ -40,13 +40,11 @@ from frigate.const import (
|
||||
INSTALL_DIR,
|
||||
MAX_SEGMENT_DURATION,
|
||||
PREVIEW_FRAME_TYPE,
|
||||
RECORD_DIR,
|
||||
)
|
||||
from frigate.models import Event, Previews, Recordings, Regions, ReviewSegment
|
||||
from frigate.track.object_processing import TrackedObjectProcessor
|
||||
from frigate.util.file import get_event_thumbnail_bytes
|
||||
from frigate.util.image import get_image_from_recording
|
||||
from frigate.util.time import get_dst_transitions
|
||||
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
@@ -393,329 +391,6 @@ async def submit_recording_snapshot_to_plus(
|
||||
)
|
||||
|
||||
|
||||
@router.get("/recordings/storage")
|
||||
def get_recordings_storage_usage(request: Request):
|
||||
recording_stats = request.app.stats_emitter.get_latest_stats()["service"][
|
||||
"storage"
|
||||
][RECORD_DIR]
|
||||
|
||||
if not recording_stats:
|
||||
return JSONResponse({})
|
||||
|
||||
total_mb = recording_stats["total"]
|
||||
|
||||
camera_usages: dict[str, dict] = (
|
||||
request.app.storage_maintainer.calculate_camera_usages()
|
||||
)
|
||||
|
||||
for camera_name in camera_usages.keys():
|
||||
if camera_usages.get(camera_name, {}).get("usage"):
|
||||
camera_usages[camera_name]["usage_percent"] = (
|
||||
camera_usages.get(camera_name, {}).get("usage", 0) / total_mb
|
||||
) * 100
|
||||
|
||||
return JSONResponse(content=camera_usages)
|
||||
|
||||
|
||||
@router.get("/recordings/summary")
|
||||
def all_recordings_summary(
|
||||
request: Request,
|
||||
params: MediaRecordingsSummaryQueryParams = Depends(),
|
||||
allowed_cameras: List[str] = Depends(get_allowed_cameras_for_filter),
|
||||
):
|
||||
"""Returns true/false by day indicating if recordings exist"""
|
||||
|
||||
cameras = params.cameras
|
||||
if cameras != "all":
|
||||
requested = set(unquote(cameras).split(","))
|
||||
filtered = requested.intersection(allowed_cameras)
|
||||
if not filtered:
|
||||
return JSONResponse(content={})
|
||||
camera_list = list(filtered)
|
||||
else:
|
||||
camera_list = allowed_cameras
|
||||
|
||||
time_range_query = (
|
||||
Recordings.select(
|
||||
fn.MIN(Recordings.start_time).alias("min_time"),
|
||||
fn.MAX(Recordings.start_time).alias("max_time"),
|
||||
)
|
||||
.where(Recordings.camera << camera_list)
|
||||
.dicts()
|
||||
.get()
|
||||
)
|
||||
|
||||
min_time = time_range_query.get("min_time")
|
||||
max_time = time_range_query.get("max_time")
|
||||
|
||||
if min_time is None or max_time is None:
|
||||
return JSONResponse(content={})
|
||||
|
||||
dst_periods = get_dst_transitions(params.timezone, min_time, max_time)
|
||||
|
||||
days: dict[str, bool] = {}
|
||||
|
||||
for period_start, period_end, period_offset in dst_periods:
|
||||
hours_offset = int(period_offset / 60 / 60)
|
||||
minutes_offset = int(period_offset / 60 - hours_offset * 60)
|
||||
period_hour_modifier = f"{hours_offset} hour"
|
||||
period_minute_modifier = f"{minutes_offset} minute"
|
||||
|
||||
period_query = (
|
||||
Recordings.select(
|
||||
fn.strftime(
|
||||
"%Y-%m-%d",
|
||||
fn.datetime(
|
||||
Recordings.start_time,
|
||||
"unixepoch",
|
||||
period_hour_modifier,
|
||||
period_minute_modifier,
|
||||
),
|
||||
).alias("day")
|
||||
)
|
||||
.where(
|
||||
(Recordings.camera << camera_list)
|
||||
& (Recordings.end_time >= period_start)
|
||||
& (Recordings.start_time <= period_end)
|
||||
)
|
||||
.group_by(
|
||||
fn.strftime(
|
||||
"%Y-%m-%d",
|
||||
fn.datetime(
|
||||
Recordings.start_time,
|
||||
"unixepoch",
|
||||
period_hour_modifier,
|
||||
period_minute_modifier,
|
||||
),
|
||||
)
|
||||
)
|
||||
.order_by(Recordings.start_time.desc())
|
||||
.namedtuples()
|
||||
)
|
||||
|
||||
for g in period_query:
|
||||
days[g.day] = True
|
||||
|
||||
return JSONResponse(content=dict(sorted(days.items())))
|
||||
|
||||
|
||||
@router.get(
|
||||
"/{camera_name}/recordings/summary", dependencies=[Depends(require_camera_access)]
|
||||
)
|
||||
async def recordings_summary(camera_name: str, timezone: str = "utc"):
|
||||
"""Returns hourly summary for recordings of given camera"""
|
||||
|
||||
time_range_query = (
|
||||
Recordings.select(
|
||||
fn.MIN(Recordings.start_time).alias("min_time"),
|
||||
fn.MAX(Recordings.start_time).alias("max_time"),
|
||||
)
|
||||
.where(Recordings.camera == camera_name)
|
||||
.dicts()
|
||||
.get()
|
||||
)
|
||||
|
||||
min_time = time_range_query.get("min_time")
|
||||
max_time = time_range_query.get("max_time")
|
||||
|
||||
days: dict[str, dict] = {}
|
||||
|
||||
if min_time is None or max_time is None:
|
||||
return JSONResponse(content=list(days.values()))
|
||||
|
||||
dst_periods = get_dst_transitions(timezone, min_time, max_time)
|
||||
|
||||
for period_start, period_end, period_offset in dst_periods:
|
||||
hours_offset = int(period_offset / 60 / 60)
|
||||
minutes_offset = int(period_offset / 60 - hours_offset * 60)
|
||||
period_hour_modifier = f"{hours_offset} hour"
|
||||
period_minute_modifier = f"{minutes_offset} minute"
|
||||
|
||||
recording_groups = (
|
||||
Recordings.select(
|
||||
fn.strftime(
|
||||
"%Y-%m-%d %H",
|
||||
fn.datetime(
|
||||
Recordings.start_time,
|
||||
"unixepoch",
|
||||
period_hour_modifier,
|
||||
period_minute_modifier,
|
||||
),
|
||||
).alias("hour"),
|
||||
fn.SUM(Recordings.duration).alias("duration"),
|
||||
fn.SUM(Recordings.motion).alias("motion"),
|
||||
fn.SUM(Recordings.objects).alias("objects"),
|
||||
)
|
||||
.where(
|
||||
(Recordings.camera == camera_name)
|
||||
& (Recordings.end_time >= period_start)
|
||||
& (Recordings.start_time <= period_end)
|
||||
)
|
||||
.group_by((Recordings.start_time + period_offset).cast("int") / 3600)
|
||||
.order_by(Recordings.start_time.desc())
|
||||
.namedtuples()
|
||||
)
|
||||
|
||||
event_groups = (
|
||||
Event.select(
|
||||
fn.strftime(
|
||||
"%Y-%m-%d %H",
|
||||
fn.datetime(
|
||||
Event.start_time,
|
||||
"unixepoch",
|
||||
period_hour_modifier,
|
||||
period_minute_modifier,
|
||||
),
|
||||
).alias("hour"),
|
||||
fn.COUNT(Event.id).alias("count"),
|
||||
)
|
||||
.where(Event.camera == camera_name, Event.has_clip)
|
||||
.where(
|
||||
(Event.start_time >= period_start) & (Event.start_time <= period_end)
|
||||
)
|
||||
.group_by((Event.start_time + period_offset).cast("int") / 3600)
|
||||
.namedtuples()
|
||||
)
|
||||
|
||||
event_map = {g.hour: g.count for g in event_groups}
|
||||
|
||||
for recording_group in recording_groups:
|
||||
parts = recording_group.hour.split()
|
||||
hour = parts[1]
|
||||
day = parts[0]
|
||||
events_count = event_map.get(recording_group.hour, 0)
|
||||
hour_data = {
|
||||
"hour": hour,
|
||||
"events": events_count,
|
||||
"motion": recording_group.motion,
|
||||
"objects": recording_group.objects,
|
||||
"duration": round(recording_group.duration),
|
||||
}
|
||||
if day in days:
|
||||
# merge counts if already present (edge-case at DST boundary)
|
||||
days[day]["events"] += events_count or 0
|
||||
days[day]["hours"].append(hour_data)
|
||||
else:
|
||||
days[day] = {
|
||||
"events": events_count or 0,
|
||||
"hours": [hour_data],
|
||||
"day": day,
|
||||
}
|
||||
|
||||
return JSONResponse(content=list(days.values()))
|
||||
|
||||
|
||||
@router.get("/{camera_name}/recordings", dependencies=[Depends(require_camera_access)])
|
||||
async def recordings(
|
||||
camera_name: str,
|
||||
after: float = (datetime.now() - timedelta(hours=1)).timestamp(),
|
||||
before: float = datetime.now().timestamp(),
|
||||
):
|
||||
"""Return specific camera recordings between the given 'after'/'end' times. If not provided the last hour will be used"""
|
||||
recordings = (
|
||||
Recordings.select(
|
||||
Recordings.id,
|
||||
Recordings.start_time,
|
||||
Recordings.end_time,
|
||||
Recordings.segment_size,
|
||||
Recordings.motion,
|
||||
Recordings.objects,
|
||||
Recordings.duration,
|
||||
)
|
||||
.where(
|
||||
Recordings.camera == camera_name,
|
||||
Recordings.end_time >= after,
|
||||
Recordings.start_time <= before,
|
||||
)
|
||||
.order_by(Recordings.start_time)
|
||||
.dicts()
|
||||
.iterator()
|
||||
)
|
||||
|
||||
return JSONResponse(content=list(recordings))
|
||||
|
||||
|
||||
@router.get("/recordings/unavailable", response_model=list[dict])
|
||||
async def no_recordings(
|
||||
request: Request,
|
||||
params: MediaRecordingsAvailabilityQueryParams = Depends(),
|
||||
allowed_cameras: List[str] = Depends(get_allowed_cameras_for_filter),
|
||||
):
|
||||
"""Get time ranges with no recordings."""
|
||||
cameras = params.cameras
|
||||
if cameras != "all":
|
||||
requested = set(unquote(cameras).split(","))
|
||||
filtered = requested.intersection(allowed_cameras)
|
||||
if not filtered:
|
||||
return JSONResponse(content=[])
|
||||
cameras = ",".join(filtered)
|
||||
else:
|
||||
cameras = allowed_cameras
|
||||
|
||||
before = params.before or datetime.datetime.now().timestamp()
|
||||
after = (
|
||||
params.after
|
||||
or (datetime.datetime.now() - datetime.timedelta(hours=1)).timestamp()
|
||||
)
|
||||
scale = params.scale
|
||||
|
||||
clauses = [(Recordings.end_time >= after) & (Recordings.start_time <= before)]
|
||||
if cameras != "all":
|
||||
camera_list = cameras.split(",")
|
||||
clauses.append((Recordings.camera << camera_list))
|
||||
else:
|
||||
camera_list = allowed_cameras
|
||||
|
||||
# Get recording start times
|
||||
data: list[Recordings] = (
|
||||
Recordings.select(Recordings.start_time, Recordings.end_time)
|
||||
.where(reduce(operator.and_, clauses))
|
||||
.order_by(Recordings.start_time.asc())
|
||||
.dicts()
|
||||
.iterator()
|
||||
)
|
||||
|
||||
# Convert recordings to list of (start, end) tuples
|
||||
recordings = [(r["start_time"], r["end_time"]) for r in data]
|
||||
|
||||
# Iterate through time segments and check if each has any recording
|
||||
no_recording_segments = []
|
||||
current = after
|
||||
current_gap_start = None
|
||||
|
||||
while current < before:
|
||||
segment_end = min(current + scale, before)
|
||||
|
||||
# Check if this segment overlaps with any recording
|
||||
has_recording = any(
|
||||
rec_start < segment_end and rec_end > current
|
||||
for rec_start, rec_end in recordings
|
||||
)
|
||||
|
||||
if not has_recording:
|
||||
# This segment has no recordings
|
||||
if current_gap_start is None:
|
||||
current_gap_start = current # Start a new gap
|
||||
else:
|
||||
# This segment has recordings
|
||||
if current_gap_start is not None:
|
||||
# End the current gap and append it
|
||||
no_recording_segments.append(
|
||||
{"start_time": int(current_gap_start), "end_time": int(current)}
|
||||
)
|
||||
current_gap_start = None
|
||||
|
||||
current = segment_end
|
||||
|
||||
# Append the last gap if it exists
|
||||
if current_gap_start is not None:
|
||||
no_recording_segments.append(
|
||||
{"start_time": int(current_gap_start), "end_time": int(before)}
|
||||
)
|
||||
|
||||
return JSONResponse(content=no_recording_segments)
|
||||
|
||||
|
||||
@router.get(
|
||||
"/{camera_name}/start/{start_ts}/end/{end_ts}/clip.mp4",
|
||||
dependencies=[Depends(require_camera_access)],
|
||||
@@ -829,7 +504,19 @@ async def recording_clip(
     dependencies=[Depends(require_camera_access)],
     description="Returns an HLS playlist for the specified timestamp-range on the specified camera. Append /master.m3u8 or /index.m3u8 for HLS playback.",
 )
-async def vod_ts(camera_name: str, start_ts: float, end_ts: float):
+async def vod_ts(
+    camera_name: str,
+    start_ts: float,
+    end_ts: float,
+    force_discontinuity: bool = False,
+):
+    logger.debug(
+        "VOD: Generating VOD for %s from %s to %s with force_discontinuity=%s",
+        camera_name,
+        start_ts,
+        end_ts,
+        force_discontinuity,
+    )
     recordings = (
         Recordings.select(
             Recordings.path,
@@ -854,6 +541,14 @@ async def vod_ts(camera_name: str, start_ts: float, end_ts: float):
|
||||
|
||||
recording: Recordings
|
||||
for recording in recordings:
|
||||
logger.debug(
|
||||
"VOD: processing recording: %s start=%s end=%s duration=%s",
|
||||
recording.path,
|
||||
recording.start_time,
|
||||
recording.end_time,
|
||||
recording.duration,
|
||||
)
|
||||
|
||||
clip = {"type": "source", "path": recording.path}
|
||||
duration = int(recording.duration * 1000)
|
||||
|
||||
@@ -862,6 +557,11 @@ async def vod_ts(camera_name: str, start_ts: float, end_ts: float):
|
||||
inpoint = int((start_ts - recording.start_time) * 1000)
|
||||
clip["clipFrom"] = inpoint
|
||||
duration -= inpoint
|
||||
logger.debug(
|
||||
"VOD: applied clipFrom %sms to %s",
|
||||
inpoint,
|
||||
recording.path,
|
||||
)
|
||||
|
||||
# adjust end if recording.end_time is after end_ts
|
||||
if recording.end_time > end_ts:
|
||||
@@ -869,12 +569,23 @@ async def vod_ts(camera_name: str, start_ts: float, end_ts: float):
|
||||
|
||||
if duration < min_duration_ms:
|
||||
# skip if the clip has no valid duration (too short to contain frames)
|
||||
logger.debug(
|
||||
"VOD: skipping recording %s - resulting duration %sms too short",
|
||||
recording.path,
|
||||
duration,
|
||||
)
|
||||
continue
|
||||
|
||||
if min_duration_ms <= duration < max_duration_ms:
|
||||
clip["keyFrameDurations"] = [duration]
|
||||
clips.append(clip)
|
||||
durations.append(duration)
|
||||
logger.debug(
|
||||
"VOD: added clip %s duration_ms=%s clipFrom=%s",
|
||||
recording.path,
|
||||
duration,
|
||||
clip.get("clipFrom"),
|
||||
)
|
||||
else:
|
||||
logger.warning(f"Recording clip is missing or empty: {recording.path}")
|
||||
|
||||
@@ -894,7 +605,7 @@ async def vod_ts(camera_name: str, start_ts: float, end_ts: float):
     return JSONResponse(
         content={
             "cache": hour_ago.timestamp() > start_ts,
-            "discontinuity": False,
+            "discontinuity": force_discontinuity,
             "consistentSequenceMediaInfo": True,
             "durations": durations,
             "segment_duration": max(durations),
@@ -937,6 +648,7 @@ async def vod_hour(

 @router.get(
     "/vod/event/{event_id}",
+    dependencies=[Depends(allow_any_authenticated())],
     description="Returns an HLS playlist for the specified object. Append /master.m3u8 or /index.m3u8 for HLS playback.",
 )
 async def vod_event(
@@ -977,6 +689,19 @@ async def vod_event(
     return vod_response


+@router.get(
+    "/vod/clip/{camera_name}/start/{start_ts}/end/{end_ts}",
+    dependencies=[Depends(require_camera_access)],
+    description="Returns an HLS playlist for a timestamp range with HLS discontinuity enabled. Append /master.m3u8 or /index.m3u8 for HLS playback.",
+)
+async def vod_clip(
+    camera_name: str,
+    start_ts: float,
+    end_ts: float,
+):
+    return await vod_ts(camera_name, start_ts, end_ts, force_discontinuity=True)

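The new /vod/clip route simply delegates to vod_ts with force_discontinuity=True, so an arbitrary time range can be played back as an HLS clip even when the underlying segments are not contiguous. A hedged fetch sketch, not part of the diff: the host and the exact URL prefix depend on how the API is proxied in a given deployment, and both are assumptions here.

import requests

BASE = "http://frigate.local:5000"  # assumed local instance and proxy layout
camera, start_ts, end_ts = "front_door", 1718000000.0, 1718000120.0

# Per the endpoint description above, append /master.m3u8 for HLS playback.
url = f"{BASE}/vod/clip/{camera}/start/{start_ts}/end/{end_ts}/master.m3u8"
playlist = requests.get(url).text
print(playlist.splitlines()[0])  # expect "#EXTM3U" if recordings exist for the range
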
@router.get(
|
||||
"/events/{event_id}/snapshot.jpg",
|
||||
description="Returns a snapshot image for the specified object id. NOTE: The query params only take affect while the event is in-progress. Once the event has ended the snapshot configuration is used.",
|
||||
@@ -1053,7 +778,10 @@ async def event_snapshot(
|
||||
)
|
||||
|
||||
|
||||
@router.get("/events/{event_id}/thumbnail.{extension}")
|
||||
@router.get(
|
||||
"/events/{event_id}/thumbnail.{extension}",
|
||||
dependencies=[Depends(require_camera_access)],
|
||||
)
|
||||
async def event_thumbnail(
|
||||
request: Request,
|
||||
event_id: str,
|
||||
@@ -1251,7 +979,10 @@ def grid_snapshot(
|
||||
)
|
||||
|
||||
|
||||
@router.get("/events/{event_id}/snapshot-clean.webp")
|
||||
@router.get(
|
||||
"/events/{event_id}/snapshot-clean.webp",
|
||||
dependencies=[Depends(require_camera_access)],
|
||||
)
|
||||
def event_snapshot_clean(request: Request, event_id: str, download: bool = False):
|
||||
webp_bytes = None
|
||||
try:
|
||||
@@ -1375,7 +1106,9 @@ def event_snapshot_clean(request: Request, event_id: str, download: bool = False
|
||||
)
|
||||
|
||||
|
||||
@router.get("/events/{event_id}/clip.mp4")
|
||||
@router.get(
|
||||
"/events/{event_id}/clip.mp4", dependencies=[Depends(require_camera_access)]
|
||||
)
|
||||
async def event_clip(
|
||||
request: Request,
|
||||
event_id: str,
|
||||
@@ -1403,7 +1136,9 @@ async def event_clip(
|
||||
)
|
||||
|
||||
|
||||
@router.get("/events/{event_id}/preview.gif")
|
||||
@router.get(
|
||||
"/events/{event_id}/preview.gif", dependencies=[Depends(require_camera_access)]
|
||||
)
|
||||
def event_preview(request: Request, event_id: str):
|
||||
try:
|
||||
event: Event = Event.get(Event.id == event_id)
|
||||
@@ -1756,7 +1491,7 @@ def preview_mp4(
|
||||
)
|
||||
|
||||
|
||||
@router.get("/review/{event_id}/preview")
|
||||
@router.get("/review/{event_id}/preview", dependencies=[Depends(require_camera_access)])
|
||||
def review_preview(
|
||||
request: Request,
|
||||
event_id: str,
|
||||
@@ -1782,8 +1517,12 @@ def review_preview(
|
||||
return preview_mp4(request, review.camera, start_ts, end_ts)
|
||||
|
||||
|
||||
@router.get("/preview/{file_name}/thumbnail.jpg")
|
||||
@router.get("/preview/{file_name}/thumbnail.webp")
|
||||
@router.get(
|
||||
"/preview/{file_name}/thumbnail.jpg", dependencies=[Depends(require_camera_access)]
|
||||
)
|
||||
@router.get(
|
||||
"/preview/{file_name}/thumbnail.webp", dependencies=[Depends(require_camera_access)]
|
||||
)
|
||||
def preview_thumbnail(file_name: str):
|
||||
"""Get a thumbnail from the cached preview frames."""
|
||||
if len(file_name) > 1000:
|
||||
|
||||
@@ -5,11 +5,12 @@ import os
|
||||
from typing import Any
|
||||
|
||||
from cryptography.hazmat.primitives import serialization
|
||||
from fastapi import APIRouter, Request
|
||||
from fastapi import APIRouter, Depends, Request
|
||||
from fastapi.responses import JSONResponse
|
||||
from peewee import DoesNotExist
|
||||
from py_vapid import Vapid01, utils
|
||||
|
||||
from frigate.api.auth import allow_any_authenticated
|
||||
from frigate.api.defs.tags import Tags
|
||||
from frigate.const import CONFIG_DIR
|
||||
from frigate.models import User
|
||||
@@ -21,6 +22,7 @@ router = APIRouter(tags=[Tags.notifications])
|
||||
|
||||
@router.get(
|
||||
"/notifications/pubkey",
|
||||
dependencies=[Depends(allow_any_authenticated())],
|
||||
summary="Get VAPID public key",
|
||||
description="""Gets the VAPID public key for the notifications.
|
||||
Returns the public key or an error if notifications are not enabled.
|
||||
@@ -47,6 +49,7 @@ def get_vapid_pub_key(request: Request):
|
||||
|
||||
@router.post(
|
||||
"/notifications/register",
|
||||
dependencies=[Depends(allow_any_authenticated())],
|
||||
summary="Register notifications",
|
||||
description="""Registers a notifications subscription.
|
||||
Returns a success message or an error if the subscription is not provided.
|
||||
|
||||
@@ -5,10 +5,14 @@ import os
|
||||
from datetime import datetime, timedelta, timezone
|
||||
|
||||
import pytz
|
||||
from fastapi import APIRouter, Depends
|
||||
from fastapi import APIRouter, Depends, HTTPException
|
||||
from fastapi.responses import JSONResponse
|
||||
|
||||
from frigate.api.auth import require_camera_access
|
||||
from frigate.api.auth import (
|
||||
allow_any_authenticated,
|
||||
get_allowed_cameras_for_filter,
|
||||
require_camera_access,
|
||||
)
|
||||
from frigate.api.defs.response.preview_response import (
|
||||
PreviewFramesResponse,
|
||||
PreviewsResponse,
|
||||
@@ -26,19 +30,32 @@ router = APIRouter(tags=[Tags.preview])
|
||||
@router.get(
|
||||
"/preview/{camera_name}/start/{start_ts}/end/{end_ts}",
|
||||
response_model=PreviewsResponse,
|
||||
dependencies=[Depends(require_camera_access)],
|
||||
dependencies=[Depends(allow_any_authenticated())],
|
||||
summary="Get preview clips for time range",
|
||||
description="""Gets all preview clips for a specified camera and time range.
|
||||
Returns a list of preview video clips that overlap with the requested time period,
|
||||
ordered by start time. Use camera_name='all' to get previews from all cameras.
|
||||
Returns an error if no previews are found.""",
|
||||
)
|
||||
def preview_ts(camera_name: str, start_ts: float, end_ts: float):
|
||||
def preview_ts(
|
||||
camera_name: str,
|
||||
start_ts: float,
|
||||
end_ts: float,
|
||||
allowed_cameras: list[str] = Depends(get_allowed_cameras_for_filter),
|
||||
):
|
||||
"""Get all mp4 previews relevant for time period."""
|
||||
if camera_name != "all":
|
||||
camera_clause = Previews.camera == camera_name
|
||||
if camera_name not in allowed_cameras:
|
||||
raise HTTPException(status_code=403, detail="Access denied for camera")
|
||||
camera_list = [camera_name]
|
||||
else:
|
||||
camera_clause = True
|
||||
camera_list = allowed_cameras
|
||||
|
||||
if not camera_list:
|
||||
return JSONResponse(
|
||||
content={"success": False, "message": "No previews found."},
|
||||
status_code=404,
|
||||
)
|
||||
|
||||
previews = (
|
||||
Previews.select(
|
||||
@@ -53,7 +70,7 @@ def preview_ts(camera_name: str, start_ts: float, end_ts: float):
|
||||
| Previews.end_time.between(start_ts, end_ts)
|
||||
| ((start_ts > Previews.start_time) & (end_ts < Previews.end_time))
|
||||
)
|
||||
.where(camera_clause)
|
||||
.where(Previews.camera << camera_list)
|
||||
.order_by(Previews.start_time.asc())
|
||||
.dicts()
|
||||
.iterator()
|
||||
@@ -88,14 +105,21 @@ def preview_ts(camera_name: str, start_ts: float, end_ts: float):
|
||||
@router.get(
|
||||
"/preview/{year_month}/{day}/{hour}/{camera_name}/{tz_name}",
|
||||
response_model=PreviewsResponse,
|
||||
dependencies=[Depends(require_camera_access)],
|
||||
dependencies=[Depends(allow_any_authenticated())],
|
||||
summary="Get preview clips for specific hour",
|
||||
description="""Gets all preview clips for a specific hour in a given timezone.
|
||||
Converts the provided date/time from the specified timezone to UTC and retrieves
|
||||
all preview clips for that hour. Use camera_name='all' to get previews from all cameras.
|
||||
The tz_name should be a timezone like 'America/New_York' (use commas instead of slashes).""",
|
||||
)
|
||||
def preview_hour(year_month: str, day: int, hour: int, camera_name: str, tz_name: str):
|
||||
def preview_hour(
|
||||
year_month: str,
|
||||
day: int,
|
||||
hour: int,
|
||||
camera_name: str,
|
||||
tz_name: str,
|
||||
allowed_cameras: list[str] = Depends(get_allowed_cameras_for_filter),
|
||||
):
|
||||
"""Get all mp4 previews relevant for time period given the timezone"""
|
||||
parts = year_month.split("-")
|
||||
start_date = (
|
||||
@@ -106,7 +130,7 @@ def preview_hour(year_month: str, day: int, hour: int, camera_name: str, tz_name
|
||||
start_ts = start_date.timestamp()
|
||||
end_ts = end_date.timestamp()
|
||||
|
||||
return preview_ts(camera_name, start_ts, end_ts)
|
||||
return preview_ts(camera_name, start_ts, end_ts, allowed_cameras)
|
||||
|
||||
|
||||
@router.get(
|
||||
|
||||
frigate/api/record.py (new file, 479 lines)
@@ -0,0 +1,479 @@
|
||||
"""Recording APIs."""
|
||||
|
||||
import logging
|
||||
from datetime import datetime, timedelta
|
||||
from functools import reduce
|
||||
from pathlib import Path
|
||||
from typing import List
|
||||
from urllib.parse import unquote
|
||||
|
||||
from fastapi import APIRouter, Depends, Request
|
||||
from fastapi import Path as PathParam
|
||||
from fastapi.responses import JSONResponse
|
||||
from peewee import fn, operator
|
||||
|
||||
from frigate.api.auth import (
|
||||
allow_any_authenticated,
|
||||
get_allowed_cameras_for_filter,
|
||||
require_camera_access,
|
||||
require_role,
|
||||
)
|
||||
from frigate.api.defs.query.recordings_query_parameters import (
|
||||
MediaRecordingsAvailabilityQueryParams,
|
||||
MediaRecordingsSummaryQueryParams,
|
||||
RecordingsDeleteQueryParams,
|
||||
)
|
||||
from frigate.api.defs.response.generic_response import GenericResponse
|
||||
from frigate.api.defs.tags import Tags
|
||||
from frigate.const import RECORD_DIR
|
||||
from frigate.models import Event, Recordings
|
||||
from frigate.util.time import get_dst_transitions
|
||||
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
router = APIRouter(tags=[Tags.recordings])
|
||||
|
||||
|
||||
@router.get("/recordings/storage", dependencies=[Depends(allow_any_authenticated())])
|
||||
def get_recordings_storage_usage(request: Request):
|
||||
recording_stats = request.app.stats_emitter.get_latest_stats()["service"][
|
||||
"storage"
|
||||
][RECORD_DIR]
|
||||
|
||||
if not recording_stats:
|
||||
return JSONResponse({})
|
||||
|
||||
total_mb = recording_stats["total"]
|
||||
|
||||
camera_usages: dict[str, dict] = (
|
||||
request.app.storage_maintainer.calculate_camera_usages()
|
||||
)
|
||||
|
||||
for camera_name in camera_usages.keys():
|
||||
if camera_usages.get(camera_name, {}).get("usage"):
|
||||
camera_usages[camera_name]["usage_percent"] = (
|
||||
camera_usages.get(camera_name, {}).get("usage", 0) / total_mb
|
||||
) * 100
|
||||
|
||||
return JSONResponse(content=camera_usages)
|
||||
|
||||
|
||||
@router.get("/recordings/summary", dependencies=[Depends(allow_any_authenticated())])
|
||||
def all_recordings_summary(
|
||||
request: Request,
|
||||
params: MediaRecordingsSummaryQueryParams = Depends(),
|
||||
allowed_cameras: List[str] = Depends(get_allowed_cameras_for_filter),
|
||||
):
|
||||
"""Returns true/false by day indicating if recordings exist"""
|
||||
|
||||
cameras = params.cameras
|
||||
if cameras != "all":
|
||||
requested = set(unquote(cameras).split(","))
|
||||
filtered = requested.intersection(allowed_cameras)
|
||||
if not filtered:
|
||||
return JSONResponse(content={})
|
||||
camera_list = list(filtered)
|
||||
else:
|
||||
camera_list = allowed_cameras
|
||||
|
||||
time_range_query = (
|
||||
Recordings.select(
|
||||
fn.MIN(Recordings.start_time).alias("min_time"),
|
||||
fn.MAX(Recordings.start_time).alias("max_time"),
|
||||
)
|
||||
.where(Recordings.camera << camera_list)
|
||||
.dicts()
|
||||
.get()
|
||||
)
|
||||
|
||||
min_time = time_range_query.get("min_time")
|
||||
max_time = time_range_query.get("max_time")
|
||||
|
||||
if min_time is None or max_time is None:
|
||||
return JSONResponse(content={})
|
||||
|
||||
dst_periods = get_dst_transitions(params.timezone, min_time, max_time)
|
||||
|
||||
days: dict[str, bool] = {}
|
||||
|
||||
for period_start, period_end, period_offset in dst_periods:
|
||||
hours_offset = int(period_offset / 60 / 60)
|
||||
minutes_offset = int(period_offset / 60 - hours_offset * 60)
|
||||
period_hour_modifier = f"{hours_offset} hour"
|
||||
period_minute_modifier = f"{minutes_offset} minute"
|
||||
|
||||
period_query = (
|
||||
Recordings.select(
|
||||
fn.strftime(
|
||||
"%Y-%m-%d",
|
||||
fn.datetime(
|
||||
Recordings.start_time,
|
||||
"unixepoch",
|
||||
period_hour_modifier,
|
||||
period_minute_modifier,
|
||||
),
|
||||
).alias("day")
|
||||
)
|
||||
.where(
|
||||
(Recordings.camera << camera_list)
|
||||
& (Recordings.end_time >= period_start)
|
||||
& (Recordings.start_time <= period_end)
|
||||
)
|
||||
.group_by(
|
||||
fn.strftime(
|
||||
"%Y-%m-%d",
|
||||
fn.datetime(
|
||||
Recordings.start_time,
|
||||
"unixepoch",
|
||||
period_hour_modifier,
|
||||
period_minute_modifier,
|
||||
),
|
||||
)
|
||||
)
|
||||
.order_by(Recordings.start_time.desc())
|
||||
.namedtuples()
|
||||
)
|
||||
|
||||
for g in period_query:
|
||||
days[g.day] = True
|
||||
|
||||
return JSONResponse(content=dict(sorted(days.items())))
|
||||
|
||||
|
||||
@router.get(
|
||||
"/{camera_name}/recordings/summary", dependencies=[Depends(require_camera_access)]
|
||||
)
|
||||
async def recordings_summary(camera_name: str, timezone: str = "utc"):
|
||||
"""Returns hourly summary for recordings of given camera"""
|
||||
|
||||
time_range_query = (
|
||||
Recordings.select(
|
||||
fn.MIN(Recordings.start_time).alias("min_time"),
|
||||
fn.MAX(Recordings.start_time).alias("max_time"),
|
||||
)
|
||||
.where(Recordings.camera == camera_name)
|
||||
.dicts()
|
||||
.get()
|
||||
)
|
||||
|
||||
min_time = time_range_query.get("min_time")
|
||||
max_time = time_range_query.get("max_time")
|
||||
|
||||
days: dict[str, dict] = {}
|
||||
|
||||
if min_time is None or max_time is None:
|
||||
return JSONResponse(content=list(days.values()))
|
||||
|
||||
dst_periods = get_dst_transitions(timezone, min_time, max_time)
|
||||
|
||||
for period_start, period_end, period_offset in dst_periods:
|
||||
hours_offset = int(period_offset / 60 / 60)
|
||||
minutes_offset = int(period_offset / 60 - hours_offset * 60)
|
||||
period_hour_modifier = f"{hours_offset} hour"
|
||||
period_minute_modifier = f"{minutes_offset} minute"
|
||||
|
||||
recording_groups = (
|
||||
Recordings.select(
|
||||
fn.strftime(
|
||||
"%Y-%m-%d %H",
|
||||
fn.datetime(
|
||||
Recordings.start_time,
|
||||
"unixepoch",
|
||||
period_hour_modifier,
|
||||
period_minute_modifier,
|
||||
),
|
||||
).alias("hour"),
|
||||
fn.SUM(Recordings.duration).alias("duration"),
|
||||
fn.SUM(Recordings.motion).alias("motion"),
|
||||
fn.SUM(Recordings.objects).alias("objects"),
|
||||
)
|
||||
.where(
|
||||
(Recordings.camera == camera_name)
|
||||
& (Recordings.end_time >= period_start)
|
||||
& (Recordings.start_time <= period_end)
|
||||
)
|
||||
.group_by((Recordings.start_time + period_offset).cast("int") / 3600)
|
||||
.order_by(Recordings.start_time.desc())
|
||||
.namedtuples()
|
||||
)
|
||||
|
||||
event_groups = (
|
||||
Event.select(
|
||||
fn.strftime(
|
||||
"%Y-%m-%d %H",
|
||||
fn.datetime(
|
||||
Event.start_time,
|
||||
"unixepoch",
|
||||
period_hour_modifier,
|
||||
period_minute_modifier,
|
||||
),
|
||||
).alias("hour"),
|
||||
fn.COUNT(Event.id).alias("count"),
|
||||
)
|
||||
.where(Event.camera == camera_name, Event.has_clip)
|
||||
.where(
|
||||
(Event.start_time >= period_start) & (Event.start_time <= period_end)
|
||||
)
|
||||
.group_by((Event.start_time + period_offset).cast("int") / 3600)
|
||||
.namedtuples()
|
||||
)
|
||||
|
||||
event_map = {g.hour: g.count for g in event_groups}
|
||||
|
||||
for recording_group in recording_groups:
|
||||
parts = recording_group.hour.split()
|
||||
hour = parts[1]
|
||||
day = parts[0]
|
||||
events_count = event_map.get(recording_group.hour, 0)
|
||||
hour_data = {
|
||||
"hour": hour,
|
||||
"events": events_count,
|
||||
"motion": recording_group.motion,
|
||||
"objects": recording_group.objects,
|
||||
"duration": round(recording_group.duration),
|
||||
}
|
||||
if day in days:
|
||||
# merge counts if already present (edge-case at DST boundary)
|
||||
days[day]["events"] += events_count or 0
|
||||
days[day]["hours"].append(hour_data)
|
||||
else:
|
||||
days[day] = {
|
||||
"events": events_count or 0,
|
||||
"hours": [hour_data],
|
||||
"day": day,
|
||||
}
|
||||
|
||||
return JSONResponse(content=list(days.values()))
|
||||
|
||||
|
||||
@router.get("/{camera_name}/recordings", dependencies=[Depends(require_camera_access)])
|
||||
async def recordings(
|
||||
camera_name: str,
|
||||
after: float = (datetime.now() - timedelta(hours=1)).timestamp(),
|
||||
before: float = datetime.now().timestamp(),
|
||||
):
|
||||
"""Return specific camera recordings between the given 'after'/'end' times. If not provided the last hour will be used"""
|
||||
recordings = (
|
||||
Recordings.select(
|
||||
Recordings.id,
|
||||
Recordings.start_time,
|
||||
Recordings.end_time,
|
||||
Recordings.segment_size,
|
||||
Recordings.motion,
|
||||
Recordings.objects,
|
||||
Recordings.duration,
|
||||
)
|
||||
.where(
|
||||
Recordings.camera == camera_name,
|
||||
Recordings.end_time >= after,
|
||||
Recordings.start_time <= before,
|
||||
)
|
||||
.order_by(Recordings.start_time)
|
||||
.dicts()
|
||||
.iterator()
|
||||
)
|
||||
|
||||
return JSONResponse(content=list(recordings))
|
||||
|
||||
|
||||
@router.get(
|
||||
"/recordings/unavailable",
|
||||
response_model=list[dict],
|
||||
dependencies=[Depends(allow_any_authenticated())],
|
||||
)
|
||||
async def no_recordings(
|
||||
request: Request,
|
||||
params: MediaRecordingsAvailabilityQueryParams = Depends(),
|
||||
allowed_cameras: List[str] = Depends(get_allowed_cameras_for_filter),
|
||||
):
|
||||
"""Get time ranges with no recordings."""
|
||||
cameras = params.cameras
|
||||
if cameras != "all":
|
||||
requested = set(unquote(cameras).split(","))
|
||||
filtered = requested.intersection(allowed_cameras)
|
||||
if not filtered:
|
||||
return JSONResponse(content=[])
|
||||
cameras = ",".join(filtered)
|
||||
else:
|
||||
cameras = allowed_cameras
|
||||
|
||||
before = params.before or datetime.datetime.now().timestamp()
|
||||
after = (
|
||||
params.after
|
||||
or (datetime.datetime.now() - datetime.timedelta(hours=1)).timestamp()
|
||||
)
|
||||
scale = params.scale
|
||||
|
||||
clauses = [(Recordings.end_time >= after) & (Recordings.start_time <= before)]
|
||||
if cameras != "all":
|
||||
camera_list = cameras.split(",")
|
||||
clauses.append((Recordings.camera << camera_list))
|
||||
else:
|
||||
camera_list = allowed_cameras
|
||||
|
||||
# Get recording start times
|
||||
data: list[Recordings] = (
|
||||
Recordings.select(Recordings.start_time, Recordings.end_time)
|
||||
.where(reduce(operator.and_, clauses))
|
||||
.order_by(Recordings.start_time.asc())
|
||||
.dicts()
|
||||
.iterator()
|
||||
)
|
||||
|
||||
# Convert recordings to list of (start, end) tuples
|
||||
recordings = [(r["start_time"], r["end_time"]) for r in data]
|
||||
|
||||
# Iterate through time segments and check if each has any recording
|
||||
no_recording_segments = []
|
||||
current = after
|
||||
current_gap_start = None
|
||||
|
||||
while current < before:
|
||||
segment_end = min(current + scale, before)
|
||||
|
||||
# Check if this segment overlaps with any recording
|
||||
has_recording = any(
|
||||
rec_start < segment_end and rec_end > current
|
||||
for rec_start, rec_end in recordings
|
||||
)
|
||||
|
||||
if not has_recording:
|
||||
# This segment has no recordings
|
||||
if current_gap_start is None:
|
||||
current_gap_start = current # Start a new gap
|
||||
else:
|
||||
# This segment has recordings
|
||||
if current_gap_start is not None:
|
||||
# End the current gap and append it
|
||||
no_recording_segments.append(
|
||||
{"start_time": int(current_gap_start), "end_time": int(current)}
|
||||
)
|
||||
current_gap_start = None
|
||||
|
||||
current = segment_end
|
||||
|
||||
# Append the last gap if it exists
|
||||
if current_gap_start is not None:
|
||||
no_recording_segments.append(
|
||||
{"start_time": int(current_gap_start), "end_time": int(before)}
|
||||
)
|
||||
|
||||
return JSONResponse(content=no_recording_segments)
|
||||
|
||||
|
||||
@router.delete(
    "/recordings/start/{start}/end/{end}",
    response_model=GenericResponse,
    dependencies=[Depends(require_role(["admin"]))],
    summary="Delete recordings",
    description="""Deletes recordings within the specified time range.
Recordings can be filtered by cameras and kept based on motion, objects, or audio attributes.
    """,
)
async def delete_recordings(
    start: float = PathParam(..., description="Start timestamp (unix)"),
    end: float = PathParam(..., description="End timestamp (unix)"),
    params: RecordingsDeleteQueryParams = Depends(),
    allowed_cameras: List[str] = Depends(get_allowed_cameras_for_filter),
):
    """Delete recordings in the specified time range."""
    if start >= end:
        return JSONResponse(
            content={
                "success": False,
                "message": "Start time must be less than end time.",
            },
            status_code=400,
        )

    cameras = params.cameras

    if cameras != "all":
        requested = set(cameras.split(","))
        filtered = requested.intersection(allowed_cameras)

        if not filtered:
            return JSONResponse(
                content={
                    "success": False,
                    "message": "No valid cameras found in the request.",
                },
                status_code=400,
            )

        camera_list = list(filtered)
    else:
        camera_list = allowed_cameras

    # Parse keep parameter
    keep_set = set()

    if params.keep:
        keep_set = set(params.keep.split(","))

    # Build query to find overlapping recordings
    clauses = [
        (
            Recordings.start_time.between(start, end)
            | Recordings.end_time.between(start, end)
            | ((start > Recordings.start_time) & (end < Recordings.end_time))
        ),
        (Recordings.camera << camera_list),
    ]

    keep_clauses = []

    if "motion" in keep_set:
        keep_clauses.append(Recordings.motion.is_null(False) & (Recordings.motion > 0))

    if "object" in keep_set:
        keep_clauses.append(
            Recordings.objects.is_null(False) & (Recordings.objects > 0)
        )

    if "audio" in keep_set:
        keep_clauses.append(Recordings.dBFS.is_null(False))

    if keep_clauses:
        keep_condition = reduce(operator.or_, keep_clauses)
        clauses.append(~keep_condition)

    recordings_to_delete = (
        Recordings.select(Recordings.id, Recordings.path)
        .where(reduce(operator.and_, clauses))
        .dicts()
        .iterator()
    )

    recording_ids = []
    deleted_count = 0
    error_count = 0

    for recording in recordings_to_delete:
        recording_ids.append(recording["id"])

        try:
            Path(recording["path"]).unlink(missing_ok=True)
            deleted_count += 1
        except Exception as e:
            logger.error(f"Failed to delete recording file {recording['path']}: {e}")
            error_count += 1

    if recording_ids:
        max_deletes = 100000
        recording_ids_list = list(recording_ids)

        for i in range(0, len(recording_ids_list), max_deletes):
            Recordings.delete().where(
                Recordings.id << recording_ids_list[i : i + max_deletes]
            ).execute()

    message = f"Successfully deleted {deleted_count} recording(s)."

    if error_count > 0:
        message += f" {error_count} file deletion error(s) occurred."

    return JSONResponse(
        content={"success": True, "message": message},
        status_code=200,
    )
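Note how the keep clauses compose: they are OR-ed together and then negated, so any recording that matches at least one requested keep attribute survives. A hedged usage sketch against a hypothetical local instance (the host, port, path prefix, and camera names are assumptions, not part of this diff):

import requests  # assumed available in the caller's environment

FRIGATE = "http://frigate.local:5000"  # hypothetical base URL

# Delete recordings between two unix timestamps on two cameras,
# but keep any segment that recorded motion or detected objects.
resp = requests.delete(
    f"{FRIGATE}/api/recordings/start/1735689600/end/1735693200",
    params={"cameras": "front_door,driveway", "keep": "motion,object"},
    timeout=10,
)
print(resp.json())  # e.g. {"success": true, "message": "Successfully deleted N recording(s)."}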
|
||||
@@ -14,6 +14,7 @@ from peewee import Case, DoesNotExist, IntegrityError, fn, operator
from playhouse.shortcuts import model_to_dict

from frigate.api.auth import (
    allow_any_authenticated,
    get_allowed_cameras_for_filter,
    get_current_user,
    require_camera_access,
@@ -43,7 +44,11 @@ logger = logging.getLogger(__name__)
router = APIRouter(tags=[Tags.review])


@router.get("/review", response_model=list[ReviewSegmentResponse])
@router.get(
    "/review",
    response_model=list[ReviewSegmentResponse],
    dependencies=[Depends(allow_any_authenticated())],
)
async def review(
    params: ReviewQueryParams = Depends(),
    current_user: dict = Depends(get_current_user),
@@ -152,7 +157,11 @@ async def review(
    return JSONResponse(content=[r for r in review_query])


@router.get("/review_ids", response_model=list[ReviewSegmentResponse])
@router.get(
    "/review_ids",
    response_model=list[ReviewSegmentResponse],
    dependencies=[Depends(allow_any_authenticated())],
)
async def review_ids(request: Request, ids: str):
    ids = ids.split(",")

@@ -186,7 +195,11 @@ async def review_ids(request: Request, ids: str):
    )


@router.get("/review/summary", response_model=ReviewSummaryResponse)
@router.get(
    "/review/summary",
    response_model=ReviewSummaryResponse,
    dependencies=[Depends(allow_any_authenticated())],
)
async def review_summary(
    params: ReviewSummaryQueryParams = Depends(),
    current_user: dict = Depends(get_current_user),
@@ -461,7 +474,11 @@ async def review_summary(
    return JSONResponse(content=data)


@router.post("/reviews/viewed", response_model=GenericResponse)
@router.post(
    "/reviews/viewed",
    response_model=GenericResponse,
    dependencies=[Depends(allow_any_authenticated())],
)
async def set_multiple_reviewed(
    request: Request,
    body: ReviewModifyMultipleBody,
@@ -560,7 +577,9 @@ def delete_reviews(body: ReviewModifyMultipleBody):


@router.get(
    "/review/activity/motion", response_model=list[ReviewActivityMotionResponse]
    "/review/activity/motion",
    response_model=list[ReviewActivityMotionResponse],
    dependencies=[Depends(allow_any_authenticated())],
)
def motion_activity(
    params: ReviewActivityMotionQueryParams = Depends(),
@@ -644,7 +663,11 @@ def motion_activity(
    return JSONResponse(content=normalized)


@router.get("/review/event/{event_id}", response_model=ReviewSegmentResponse)
@router.get(
    "/review/event/{event_id}",
    response_model=ReviewSegmentResponse,
    dependencies=[Depends(allow_any_authenticated())],
)
async def get_review_from_event(request: Request, event_id: str):
    try:
        review = ReviewSegment.get(
@@ -659,7 +682,11 @@ async def get_review_from_event(request: Request, event_id: str):
    )


@router.get("/review/{review_id}", response_model=ReviewSegmentResponse)
@router.get(
    "/review/{review_id}",
    response_model=ReviewSegmentResponse,
    dependencies=[Depends(allow_any_authenticated())],
)
async def get_review(request: Request, review_id: str):
    try:
        review = ReviewSegment.get(ReviewSegment.id == review_id)
@@ -672,7 +699,11 @@ async def get_review(request: Request, review_id: str):
    )


@router.delete("/review/{review_id}/viewed", response_model=GenericResponse)
@router.delete(
    "/review/{review_id}/viewed",
    response_model=GenericResponse,
    dependencies=[Depends(allow_any_authenticated())],
)
async def set_not_reviewed(
    review_id: str,
    current_user: dict = Depends(get_current_user),
@@ -710,6 +741,7 @@ async def set_not_reviewed(

@router.post(
    "/review/summarize/start/{start_ts}/end/{end_ts}",
    dependencies=[Depends(allow_any_authenticated())],
    description="Use GenAI to summarize review items over a period of time.",
)
def generate_review_summary(request: Request, start_ts: float, end_ts: float):
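The recurring change in these hunks is the same pattern: a route-level `dependencies=[Depends(allow_any_authenticated())]` entry so the request is rejected before the handler runs if the caller is not authenticated. A minimal sketch of that FastAPI pattern with a stand-in dependency (the real `allow_any_authenticated` lives in `frigate.api.auth`; its internals are not shown in this diff, so the check below is an assumption):

from fastapi import APIRouter, Depends, HTTPException, Request

def allow_any_authenticated():
    # Stand-in: assume upstream auth middleware attached the user to request.state.
    async def check(request: Request):
        if not getattr(request.state, "user", None):
            raise HTTPException(status_code=401, detail="Not authenticated")
    return check

router = APIRouter()

@router.get(
    "/example",
    dependencies=[Depends(allow_any_authenticated())],  # runs before the handler body
)
async def example():
    return {"ok": True}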
|
||||
|
||||
@@ -19,6 +19,8 @@ class CameraMetrics:
|
||||
process_pid: Synchronized
|
||||
capture_process_pid: Synchronized
|
||||
ffmpeg_pid: Synchronized
|
||||
reconnects_last_hour: Synchronized
|
||||
stalls_last_hour: Synchronized
|
||||
|
||||
def __init__(self, manager: SyncManager):
|
||||
self.camera_fps = manager.Value("d", 0)
|
||||
@@ -35,6 +37,8 @@ class CameraMetrics:
|
||||
self.process_pid = manager.Value("i", 0)
|
||||
self.capture_process_pid = manager.Value("i", 0)
|
||||
self.ffmpeg_pid = manager.Value("i", 0)
|
||||
self.reconnects_last_hour = manager.Value("i", 0)
|
||||
self.stalls_last_hour = manager.Value("i", 0)
|
||||
|
||||
|
||||
class PTZMetrics:
|
||||
|
||||
@@ -28,6 +28,7 @@ from frigate.const import (
|
||||
UPDATE_CAMERA_ACTIVITY,
|
||||
UPDATE_EMBEDDINGS_REINDEX_PROGRESS,
|
||||
UPDATE_EVENT_DESCRIPTION,
|
||||
UPDATE_JOB_STATE,
|
||||
UPDATE_MODEL_STATE,
|
||||
UPDATE_REVIEW_DESCRIPTION,
|
||||
UPSERT_REVIEW_SEGMENT,
|
||||
@@ -60,6 +61,7 @@ class Dispatcher:
|
||||
self.camera_activity = CameraActivityManager(config, self.publish)
|
||||
self.audio_activity = AudioActivityManager(config, self.publish)
|
||||
self.model_state: dict[str, ModelStatusTypesEnum] = {}
|
||||
self.job_state: dict[str, dict[str, Any]] = {} # {job_type: job_data}
|
||||
self.embeddings_reindex: dict[str, Any] = {}
|
||||
self.birdseye_layout: dict[str, Any] = {}
|
||||
self.audio_transcription_state: str = "idle"
|
||||
@@ -180,6 +182,19 @@ class Dispatcher:
|
||||
def handle_model_state() -> None:
|
||||
self.publish("model_state", json.dumps(self.model_state.copy()))
|
||||
|
||||
def handle_update_job_state() -> None:
|
||||
if payload and isinstance(payload, dict):
|
||||
job_type = payload.get("job_type")
|
||||
if job_type:
|
||||
self.job_state[job_type] = payload
|
||||
self.publish(
|
||||
"job_state",
|
||||
json.dumps(self.job_state),
|
||||
)
|
||||
|
||||
def handle_job_state() -> None:
|
||||
self.publish("job_state", json.dumps(self.job_state.copy()))
|
||||
|
||||
def handle_update_audio_transcription_state() -> None:
|
||||
if payload:
|
||||
self.audio_transcription_state = payload
|
||||
@@ -277,6 +292,7 @@ class Dispatcher:
|
||||
UPDATE_EVENT_DESCRIPTION: handle_update_event_description,
|
||||
UPDATE_REVIEW_DESCRIPTION: handle_update_review_description,
|
||||
UPDATE_MODEL_STATE: handle_update_model_state,
|
||||
UPDATE_JOB_STATE: handle_update_job_state,
|
||||
UPDATE_EMBEDDINGS_REINDEX_PROGRESS: handle_update_embeddings_reindex_progress,
|
||||
UPDATE_BIRDSEYE_LAYOUT: handle_update_birdseye_layout,
|
||||
UPDATE_AUDIO_TRANSCRIPTION_STATE: handle_update_audio_transcription_state,
|
||||
@@ -284,6 +300,7 @@ class Dispatcher:
|
||||
"restart": handle_restart,
|
||||
"embeddingsReindexProgress": handle_embeddings_reindex_progress,
|
||||
"modelState": handle_model_state,
|
||||
"jobState": handle_job_state,
|
||||
"audioTranscriptionState": handle_audio_transcription_state,
|
||||
"birdseyeLayout": handle_birdseye_layout,
|
||||
"onConnect": handle_on_connect,
|
||||
@@ -607,23 +624,27 @@
        )
        self.publish(f"{camera_name}/snapshots/state", payload, retain=True)

    def _on_ptz_command(self, camera_name: str, payload: str) -> None:
    def _on_ptz_command(self, camera_name: str, payload: str | bytes) -> None:
        """Callback for ptz topic."""
        try:
            if "preset" in payload.lower():
            preset: str = (
                payload.decode("utf-8") if isinstance(payload, bytes) else payload
            ).lower()

            if "preset" in preset:
                command = OnvifCommandEnum.preset
                param = payload.lower()[payload.index("_") + 1 :]
            elif "move_relative" in payload.lower():
                param = preset[preset.index("_") + 1 :]
            elif "move_relative" in preset:
                command = OnvifCommandEnum.move_relative
                param = payload.lower()[payload.index("_") + 1 :]
                param = preset[preset.index("_") + 1 :]
            else:
                command = OnvifCommandEnum[payload.lower()]
                command = OnvifCommandEnum[preset]
                param = ""

            self.onvif.handle_command(camera_name, command, param)
            logger.info(f"Setting ptz command to {command} for {camera_name}")
        except KeyError as k:
            logger.error(f"Invalid PTZ command {payload}: {k}")
            logger.error(f"Invalid PTZ command {preset}: {k}")

    def _on_birdseye_command(self, camera_name: str, payload: str) -> None:
        """Callback for birdseye topic."""
|
||||
|
||||
@@ -21,7 +21,7 @@ from frigate.config.camera.updater import (
|
||||
CameraConfigUpdateEnum,
|
||||
CameraConfigUpdateSubscriber,
|
||||
)
|
||||
from frigate.const import CONFIG_DIR
|
||||
from frigate.const import BASE_DIR, CONFIG_DIR
|
||||
from frigate.models import User
|
||||
|
||||
logger = logging.getLogger(__name__)
|
||||
@@ -371,14 +371,39 @@ class WebPushClient(Communicator):
|
||||
|
||||
sorted_objects.update(payload["after"]["data"]["sub_labels"])
|
||||
|
||||
image = f"{payload['after']['thumb_path'].replace('/media/frigate', '')}"
|
||||
image = f"{payload['after']['thumb_path'].replace(BASE_DIR, '')}"
|
||||
ended = state == "end" or state == "genai"
|
||||
|
||||
if state == "genai" and payload["after"]["data"]["metadata"]:
|
||||
title = payload["after"]["data"]["metadata"]["title"]
|
||||
base_title = payload["after"]["data"]["metadata"]["title"]
|
||||
threat_level = payload["after"]["data"]["metadata"].get(
|
||||
"potential_threat_level", 0
|
||||
)
|
||||
|
||||
# Add prefix for threat levels 1 and 2
|
||||
if threat_level == 1:
|
||||
title = f"Needs Review: {base_title}"
|
||||
elif threat_level == 2:
|
||||
title = f"Security Concern: {base_title}"
|
||||
else:
|
||||
title = base_title
|
||||
|
||||
message = payload["after"]["data"]["metadata"]["scene"]
|
||||
else:
|
||||
title = f"{titlecase(', '.join(sorted_objects).replace('_', ' '))}{' was' if state == 'end' else ''} detected in {titlecase(', '.join(payload['after']['data']['zones']).replace('_', ' '))}"
|
||||
zone_names = payload["after"]["data"]["zones"]
|
||||
formatted_zone_names = []
|
||||
|
||||
for zone_name in zone_names:
|
||||
if zone_name in self.config.cameras[camera].zones:
|
||||
formatted_zone_names.append(
|
||||
self.config.cameras[camera]
|
||||
.zones[zone_name]
|
||||
.get_formatted_name(zone_name)
|
||||
)
|
||||
else:
|
||||
formatted_zone_names.append(titlecase(zone_name.replace("_", " ")))
|
||||
|
||||
title = f"{titlecase(', '.join(sorted_objects).replace('_', ' '))}{' was' if state == 'end' else ''} detected in {', '.join(formatted_zone_names)}"
|
||||
message = f"Detected on {camera_name}"
|
||||
|
||||
if ended:
|
||||
|
||||
@@ -20,7 +20,7 @@ class AuthConfig(FrigateBaseModel):
|
||||
default=86400, title="Session length for jwt session tokens", ge=60
|
||||
)
|
||||
refresh_time: int = Field(
|
||||
default=43200,
|
||||
default=1800,
|
||||
title="Refresh the session if it is going to expire in this many seconds",
|
||||
ge=30,
|
||||
)
|
||||
|
||||
@@ -1,5 +1,5 @@
|
||||
from enum import Enum
|
||||
from typing import Optional
|
||||
from typing import Optional, Union
|
||||
|
||||
from pydantic import Field
|
||||
|
||||
@@ -70,13 +70,13 @@ class RecordExportConfig(FrigateBaseModel):
|
||||
timelapse_args: str = Field(
|
||||
default=DEFAULT_TIME_LAPSE_FFMPEG_ARGS, title="Timelapse Args"
|
||||
)
|
||||
hwaccel_args: Union[str, list[str]] = Field(
|
||||
default="auto", title="Export-specific FFmpeg hardware acceleration arguments."
|
||||
)
|
||||
|
||||
|
||||
class RecordConfig(FrigateBaseModel):
|
||||
enabled: bool = Field(default=False, title="Enable record on all cameras.")
|
||||
sync_recordings: bool = Field(
|
||||
default=False, title="Sync recordings with disk on startup and once a day."
|
||||
)
|
||||
expire_interval: int = Field(
|
||||
default=60,
|
||||
title="Number of minutes to wait between cleanup runs.",
|
||||
|
||||
@@ -105,6 +105,11 @@ class CustomClassificationConfig(FrigateBaseModel):
|
||||
threshold: float = Field(
|
||||
default=0.8, title="Classification score threshold to change the state."
|
||||
)
|
||||
save_attempts: int | None = Field(
|
||||
default=None,
|
||||
title="Number of classification attempts to save in the recent classifications tab. If not specified, defaults to 200 for object classification and 100 for state classification.",
|
||||
ge=0,
|
||||
)
|
||||
object_config: CustomClassificationObjectConfig | None = Field(default=None)
|
||||
state_config: CustomClassificationStateConfig | None = Field(default=None)
|
||||
|
||||
|
||||
@@ -523,6 +523,14 @@ class FrigateConfig(FrigateBaseModel):
|
||||
if camera_config.ffmpeg.hwaccel_args == "auto":
|
||||
camera_config.ffmpeg.hwaccel_args = self.ffmpeg.hwaccel_args
|
||||
|
||||
# Resolve export hwaccel_args: camera export -> camera ffmpeg -> global ffmpeg
|
||||
# This allows per-camera override for exports (e.g., when camera resolution
|
||||
# exceeds hardware encoder limits)
|
||||
if camera_config.record.export.hwaccel_args == "auto":
|
||||
camera_config.record.export.hwaccel_args = (
|
||||
camera_config.ffmpeg.hwaccel_args
|
||||
)
|
||||
|
||||
for input in camera_config.ffmpeg.inputs:
|
||||
need_detect_dimensions = "detect" in input.roles and (
|
||||
camera_config.detect.height is None
|
||||
|
||||
@@ -37,9 +37,6 @@ class UIConfig(FrigateBaseModel):
|
||||
time_style: DateTimeStyleEnum = Field(
|
||||
default=DateTimeStyleEnum.medium, title="Override UI timeStyle."
|
||||
)
|
||||
strftime_fmt: Optional[str] = Field(
|
||||
default=None, title="Override date and time format using strftime syntax."
|
||||
)
|
||||
unit_system: UnitSystemEnum = Field(
|
||||
default=UnitSystemEnum.metric, title="The unit system to use for measurements."
|
||||
)
|
||||
|
||||
@@ -119,6 +119,7 @@ UPDATE_REVIEW_DESCRIPTION = "update_review_description"
|
||||
UPDATE_MODEL_STATE = "update_model_state"
|
||||
UPDATE_EMBEDDINGS_REINDEX_PROGRESS = "handle_embeddings_reindex_progress"
|
||||
UPDATE_BIRDSEYE_LAYOUT = "update_birdseye_layout"
|
||||
UPDATE_JOB_STATE = "update_job_state"
|
||||
NOTIFICATION_TEST = "notification_test"
|
||||
|
||||
# IO Nice Values
|
||||
|
||||
@@ -12,6 +12,7 @@ from typing import Any
|
||||
|
||||
import cv2
|
||||
from peewee import DoesNotExist
|
||||
from titlecase import titlecase
|
||||
|
||||
from frigate.comms.embeddings_updater import EmbeddingsRequestEnum
|
||||
from frigate.comms.inter_process import InterProcessRequestor
|
||||
@@ -208,10 +209,22 @@ class ReviewDescriptionProcessor(PostProcessorApi):
|
||||
logger.debug(
|
||||
f"Found GenAI Review Summary request for {start_ts} to {end_ts}"
|
||||
)
|
||||
items: list[dict[str, Any]] = [
|
||||
r["data"]["metadata"]
|
||||
|
||||
# Query all review segments with camera and time information
|
||||
segments: list[dict[str, Any]] = [
|
||||
{
|
||||
"camera": r["camera"].replace("_", " ").title(),
|
||||
"start_time": r["start_time"],
|
||||
"end_time": r["end_time"],
|
||||
"metadata": r["data"]["metadata"],
|
||||
}
|
||||
for r in (
|
||||
ReviewSegment.select(ReviewSegment.data)
|
||||
ReviewSegment.select(
|
||||
ReviewSegment.camera,
|
||||
ReviewSegment.start_time,
|
||||
ReviewSegment.end_time,
|
||||
ReviewSegment.data,
|
||||
)
|
||||
.where(
|
||||
(ReviewSegment.data["metadata"].is_null(False))
|
||||
& (ReviewSegment.start_time < end_ts)
|
||||
@@ -223,21 +236,72 @@ class ReviewDescriptionProcessor(PostProcessorApi):
|
||||
)
|
||||
]
|
||||
|
||||
if len(items) == 0:
|
||||
if len(segments) == 0:
|
||||
logger.debug("No review items with metadata found during time period")
|
||||
return "No activity was found during this time."
|
||||
return "No activity was found during this time period."
|
||||
|
||||
important_items = list(
|
||||
filter(
|
||||
lambda item: item.get("potential_threat_level", 0) > 0
|
||||
or item.get("other_concerns"),
|
||||
items,
|
||||
)
|
||||
)
|
||||
# Identify primary items (important items that need review)
|
||||
primary_segments = [
|
||||
seg
|
||||
for seg in segments
|
||||
if seg["metadata"].get("potential_threat_level", 0) > 0
|
||||
or seg["metadata"].get("other_concerns")
|
||||
]
|
||||
|
||||
if not important_items:
|
||||
if not primary_segments:
|
||||
return "No concerns were found during this time period."
|
||||
|
||||
# Build hierarchical structure: each primary event with its contextual items
|
||||
events_with_context = []
|
||||
|
||||
for primary_seg in primary_segments:
|
||||
# Start building the primary event structure
|
||||
primary_item = copy.deepcopy(primary_seg["metadata"])
|
||||
primary_item["camera"] = primary_seg["camera"]
|
||||
primary_item["start_time"] = primary_seg["start_time"]
|
||||
primary_item["end_time"] = primary_seg["end_time"]
|
||||
|
||||
# Find overlapping contextual items from other cameras
|
||||
primary_start = primary_seg["start_time"]
|
||||
primary_end = primary_seg["end_time"]
|
||||
primary_camera = primary_seg["camera"]
|
||||
contextual_items = []
|
||||
seen_contextual_cameras = set()
|
||||
|
||||
for seg in segments:
|
||||
seg_camera = seg["camera"]
|
||||
|
||||
if seg_camera == primary_camera:
|
||||
continue
|
||||
|
||||
if seg in primary_segments:
|
||||
continue
|
||||
|
||||
seg_start = seg["start_time"]
|
||||
seg_end = seg["end_time"]
|
||||
|
||||
if seg_start < primary_end and primary_start < seg_end:
|
||||
# Avoid duplicates if same camera has multiple overlapping segments
|
||||
if seg_camera not in seen_contextual_cameras:
|
||||
contextual_item = copy.deepcopy(seg["metadata"])
|
||||
contextual_item["camera"] = seg_camera
|
||||
contextual_item["start_time"] = seg_start
|
||||
contextual_item["end_time"] = seg_end
|
||||
contextual_items.append(contextual_item)
|
||||
seen_contextual_cameras.add(seg_camera)
|
||||
|
||||
# Add context array to primary item
|
||||
primary_item["context"] = contextual_items
|
||||
events_with_context.append(primary_item)
|
||||
|
||||
total_context_items = sum(
|
||||
len(event.get("context", [])) for event in events_with_context
|
||||
)
|
||||
logger.debug(
|
||||
f"Summary includes {len(events_with_context)} primary events with "
|
||||
f"{total_context_items} total contextual items"
|
||||
)
|
||||
|
||||
if self.config.review.genai.debug_save_thumbnails:
|
||||
Path(
|
||||
os.path.join(CLIPS_DIR, "genai-requests", f"{start_ts}-{end_ts}")
|
||||
@@ -246,7 +310,7 @@ class ReviewDescriptionProcessor(PostProcessorApi):
|
||||
return self.genai_client.generate_review_summary(
|
||||
start_ts,
|
||||
end_ts,
|
||||
important_items,
|
||||
events_with_context,
|
||||
self.config.review.genai.debug_save_thumbnails,
|
||||
)
|
||||
else:
|
||||
@@ -455,14 +519,14 @@ def run_analysis(
|
||||
|
||||
for i, verified_label in enumerate(final_data["data"]["verified_objects"]):
|
||||
object_type = verified_label.replace("-verified", "").replace("_", " ")
|
||||
name = sub_labels_list[i].replace("_", " ").title()
|
||||
name = titlecase(sub_labels_list[i].replace("_", " "))
|
||||
unified_objects.append(f"{name} ({object_type})")
|
||||
|
||||
for label in objects_list:
|
||||
if "-verified" in label:
|
||||
continue
|
||||
elif label in labelmap_objects:
|
||||
object_type = label.replace("_", " ").title()
|
||||
object_type = titlecase(label.replace("_", " "))
|
||||
|
||||
if label in attribute_labels:
|
||||
unified_objects.append(f"{object_type} (delivery/service)")
|
||||
|
||||
@@ -99,6 +99,42 @@ class CustomStateClassificationProcessor(RealTimeProcessorApi):
|
||||
if self.inference_speed:
|
||||
self.inference_speed.update(duration)
|
||||
|
||||
def _should_save_image(
|
||||
self, camera: str, detected_state: str, score: float = 1.0
|
||||
) -> bool:
|
||||
"""
|
||||
Determine if we should save the image for training.
|
||||
Save when:
|
||||
- State is changing or being verified (regardless of score)
|
||||
- Score is less than 100% (even if state matches, useful for training)
|
||||
Don't save when:
|
||||
- State is stable (matches current_state) AND score is 100%
|
||||
"""
|
||||
if camera not in self.state_history:
|
||||
# First detection for this camera, save it
|
||||
return True
|
||||
|
||||
verification = self.state_history[camera]
|
||||
current_state = verification.get("current_state")
|
||||
pending_state = verification.get("pending_state")
|
||||
|
||||
# Save if there's a pending state change being verified
|
||||
if pending_state is not None:
|
||||
return True
|
||||
|
||||
# Save if the detected state differs from the current verified state
|
||||
# (state is changing)
|
||||
if current_state is not None and detected_state != current_state:
|
||||
return True
|
||||
|
||||
# If score is less than 100%, save even if state matches
|
||||
# (useful for training to improve confidence)
|
||||
if score < 1.0:
|
||||
return True
|
||||
|
||||
# Don't save if state is stable (detected_state == current_state) AND score is 100%
|
||||
return False
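In plain terms, `_should_save_image` skips a write only when the camera already has history, no state change is pending, the detected state matches the verified one, and the score is exactly 1.0. A hedged restatement of those rules as a pure function, for illustration only (the dict shape mirrors `self.state_history` as used above):

def should_save(history: dict, camera: str, detected_state: str, score: float = 1.0) -> bool:
    """history maps camera -> {"current_state": ..., "pending_state": ...}."""
    if camera not in history:
        return True  # first detection for this camera
    verification = history[camera]
    if verification.get("pending_state") is not None:
        return True  # a state change is being verified
    current = verification.get("current_state")
    if current is not None and detected_state != current:
        return True  # state is changing
    if score < 1.0:
        return True  # useful training sample even though the state matches
    return False  # stable state with 100% confidence

# should_save({}, "yard", "open")                                                      -> True
# should_save({"yard": {"current_state": "open", "pending_state": None}}, "yard", "open", 1.0)  -> False
# should_save({"yard": {"current_state": "open", "pending_state": None}}, "yard", "open", 0.93) -> True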
|
||||
|
||||
def verify_state_change(self, camera: str, detected_state: str) -> str | None:
|
||||
"""
|
||||
Verify state change requires 3 consecutive identical states before publishing.
|
||||
@@ -212,14 +248,22 @@ class CustomStateClassificationProcessor(RealTimeProcessorApi):
|
||||
return
|
||||
|
||||
if self.interpreter is None:
|
||||
write_classification_attempt(
|
||||
self.train_dir,
|
||||
cv2.cvtColor(frame, cv2.COLOR_RGB2BGR),
|
||||
"none-none",
|
||||
now,
|
||||
"unknown",
|
||||
0.0,
|
||||
)
|
||||
# When interpreter is None, always save (score is 0.0, which is < 1.0)
|
||||
if self._should_save_image(camera, "unknown", 0.0):
|
||||
save_attempts = (
|
||||
self.model_config.save_attempts
|
||||
if self.model_config.save_attempts is not None
|
||||
else 100
|
||||
)
|
||||
write_classification_attempt(
|
||||
self.train_dir,
|
||||
cv2.cvtColor(frame, cv2.COLOR_RGB2BGR),
|
||||
"none-none",
|
||||
now,
|
||||
"unknown",
|
||||
0.0,
|
||||
max_files=save_attempts,
|
||||
)
|
||||
return
|
||||
|
||||
input = np.expand_dims(resized_frame, axis=0)
|
||||
@@ -236,14 +280,23 @@ class CustomStateClassificationProcessor(RealTimeProcessorApi):
|
||||
score = round(probs[best_id], 2)
|
||||
self.__update_metrics(datetime.datetime.now().timestamp() - now)
|
||||
|
||||
write_classification_attempt(
|
||||
self.train_dir,
|
||||
cv2.cvtColor(frame, cv2.COLOR_RGB2BGR),
|
||||
"none-none",
|
||||
now,
|
||||
self.labelmap[best_id],
|
||||
score,
|
||||
)
|
||||
detected_state = self.labelmap[best_id]
|
||||
|
||||
if self._should_save_image(camera, detected_state, score):
|
||||
save_attempts = (
|
||||
self.model_config.save_attempts
|
||||
if self.model_config.save_attempts is not None
|
||||
else 100
|
||||
)
|
||||
write_classification_attempt(
|
||||
self.train_dir,
|
||||
cv2.cvtColor(frame, cv2.COLOR_RGB2BGR),
|
||||
"none-none",
|
||||
now,
|
||||
detected_state,
|
||||
score,
|
||||
max_files=save_attempts,
|
||||
)
|
||||
|
||||
if score < self.model_config.threshold:
|
||||
logger.debug(
|
||||
@@ -251,7 +304,6 @@ class CustomStateClassificationProcessor(RealTimeProcessorApi):
|
||||
)
|
||||
return
|
||||
|
||||
detected_state = self.labelmap[best_id]
|
||||
verified_state = self.verify_state_change(camera, detected_state)
|
||||
|
||||
if verified_state is not None:
|
||||
@@ -405,9 +457,6 @@ class CustomObjectClassificationProcessor(RealTimeProcessorApi):
|
||||
if obj_data.get("end_time") is not None:
|
||||
return
|
||||
|
||||
if obj_data.get("stationary"):
|
||||
return
|
||||
|
||||
object_id = obj_data["id"]
|
||||
|
||||
if (
|
||||
@@ -445,6 +494,11 @@ class CustomObjectClassificationProcessor(RealTimeProcessorApi):
|
||||
return
|
||||
|
||||
if self.interpreter is None:
|
||||
save_attempts = (
|
||||
self.model_config.save_attempts
|
||||
if self.model_config.save_attempts is not None
|
||||
else 200
|
||||
)
|
||||
write_classification_attempt(
|
||||
self.train_dir,
|
||||
cv2.cvtColor(crop, cv2.COLOR_RGB2BGR),
|
||||
@@ -452,6 +506,7 @@ class CustomObjectClassificationProcessor(RealTimeProcessorApi):
|
||||
now,
|
||||
"unknown",
|
||||
0.0,
|
||||
max_files=save_attempts,
|
||||
)
|
||||
return
|
||||
|
||||
@@ -469,6 +524,11 @@ class CustomObjectClassificationProcessor(RealTimeProcessorApi):
|
||||
score = round(probs[best_id], 2)
|
||||
self.__update_metrics(datetime.datetime.now().timestamp() - now)
|
||||
|
||||
save_attempts = (
|
||||
self.model_config.save_attempts
|
||||
if self.model_config.save_attempts is not None
|
||||
else 200
|
||||
)
|
||||
write_classification_attempt(
|
||||
self.train_dir,
|
||||
cv2.cvtColor(crop, cv2.COLOR_RGB2BGR),
|
||||
@@ -476,7 +536,7 @@ class CustomObjectClassificationProcessor(RealTimeProcessorApi):
|
||||
now,
|
||||
self.labelmap[best_id],
|
||||
score,
|
||||
max_files=200,
|
||||
max_files=save_attempts,
|
||||
)
|
||||
|
||||
if score < self.model_config.threshold:
|
||||
@@ -579,14 +639,14 @@ def write_classification_attempt(
|
||||
os.makedirs(folder, exist_ok=True)
|
||||
cv2.imwrite(file, frame)
|
||||
|
||||
files = sorted(
|
||||
filter(lambda f: (f.endswith(".webp")), os.listdir(folder)),
|
||||
key=lambda f: os.path.getctime(os.path.join(folder, f)),
|
||||
reverse=True,
|
||||
)
|
||||
|
||||
# delete oldest face image if maximum is reached
|
||||
try:
|
||||
files = sorted(
|
||||
filter(lambda f: (f.endswith(".webp")), os.listdir(folder)),
|
||||
key=lambda f: os.path.getctime(os.path.join(folder, f)),
|
||||
reverse=True,
|
||||
)
|
||||
|
||||
if len(files) > max_files:
|
||||
os.unlink(os.path.join(folder, files[-1]))
|
||||
except FileNotFoundError:
|
||||
|
||||
@@ -1,19 +1,20 @@
|
||||
import logging
|
||||
import math
|
||||
import os
|
||||
|
||||
import cv2
|
||||
import numpy as np
|
||||
from pydantic import Field
|
||||
from typing_extensions import Literal
|
||||
|
||||
from frigate.detectors.detection_api import DetectionApi
|
||||
from frigate.detectors.detector_config import BaseDetectorConfig
|
||||
from frigate.detectors.detector_config import BaseDetectorConfig, ModelTypeEnum
|
||||
|
||||
try:
|
||||
from tflite_runtime.interpreter import Interpreter, load_delegate
|
||||
except ModuleNotFoundError:
|
||||
from tensorflow.lite.python.interpreter import Interpreter, load_delegate
|
||||
|
||||
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
DETECTOR_KEY = "edgetpu"
|
||||
@@ -26,6 +27,10 @@ class EdgeTpuDetectorConfig(BaseDetectorConfig):
|
||||
|
||||
class EdgeTpuTfl(DetectionApi):
|
||||
type_key = DETECTOR_KEY
|
||||
supported_models = [
|
||||
ModelTypeEnum.ssd,
|
||||
ModelTypeEnum.yologeneric,
|
||||
]
|
||||
|
||||
def __init__(self, detector_config: EdgeTpuDetectorConfig):
|
||||
device_config = {}
|
||||
@@ -63,31 +68,294 @@ class EdgeTpuTfl(DetectionApi):
|
||||
|
||||
self.tensor_input_details = self.interpreter.get_input_details()
|
||||
self.tensor_output_details = self.interpreter.get_output_details()
|
||||
self.model_width = detector_config.model.width
|
||||
self.model_height = detector_config.model.height
|
||||
|
||||
self.min_score = 0.4
|
||||
self.max_detections = 20
|
||||
|
||||
self.model_type = detector_config.model.model_type
|
||||
self.model_requires_int8 = self.tensor_input_details[0]["dtype"] == np.int8
|
||||
|
||||
if self.model_type == ModelTypeEnum.yologeneric:
|
||||
logger.debug("Using YOLO preprocessing/postprocessing")
|
||||
|
||||
if len(self.tensor_output_details) not in [2, 3]:
|
||||
logger.error(
|
||||
f"Invalid count of output tensors in YOLO model. Found {len(self.tensor_output_details)}, expecting 2 or 3."
|
||||
)
|
||||
raise
|
||||
|
||||
self.reg_max = 16 # = 64 dfl_channels // 4 # YOLO standard
|
||||
self.min_logit_value = np.log(
|
||||
self.min_score / (1 - self.min_score)
|
||||
) # for filtering
|
||||
self._generate_anchors_and_strides() # decode bounding box DFL
|
||||
self.project = np.arange(
|
||||
self.reg_max, dtype=np.float32
|
||||
) # for decoding bounding box DFL information
|
||||
|
||||
# Determine YOLO tensor indices and quantization scales for
|
||||
# boxes and class_scores the tensor ordering and names are
|
||||
# not reliable, so use tensor shape to detect which tensor
|
||||
# holds boxes or class scores.
|
||||
# The tensors have shapes (B, N, C)
|
||||
# where N is the number of candidates (=2100 for 320x320)
|
||||
# this may guess wrong if the number of classes is exactly 64
|
||||
output_boxes_index = None
|
||||
output_classes_index = None
|
||||
for i, x in enumerate(self.tensor_output_details):
|
||||
# the nominal index seems to start at 1 instead of 0
|
||||
if len(x["shape"]) == 3 and x["shape"][2] == 64:
|
||||
output_boxes_index = i
|
||||
elif len(x["shape"]) == 3 and x["shape"][2] > 1:
|
||||
# require the number of classes to be more than 1
|
||||
# to differentiate from (not used) max score tensor
|
||||
output_classes_index = i
|
||||
if output_boxes_index is None or output_classes_index is None:
|
||||
logger.warning("Unrecognized model output, unexpected tensor shapes.")
|
||||
output_classes_index = (
|
||||
0
|
||||
if (output_boxes_index is None or output_classes_index == 1)
|
||||
else 1
|
||||
) # 0 is default guess
|
||||
output_boxes_index = 1 if (output_boxes_index == 0) else 0
|
||||
|
||||
scores_details = self.tensor_output_details[output_classes_index]
|
||||
self.scores_tensor_index = scores_details["index"]
|
||||
self.scores_scale, self.scores_zero_point = scores_details["quantization"]
|
||||
# calculate the quantized version of the min_score
|
||||
self.min_score_quantized = int(
|
||||
(self.min_logit_value / self.scores_scale) + self.scores_zero_point
|
||||
)
|
||||
self.logit_shift_to_positive_values = (
|
||||
max(0, math.ceil((128 + self.scores_zero_point) * self.scores_scale))
|
||||
+ 1
|
||||
) # round up
|
||||
|
||||
boxes_details = self.tensor_output_details[output_boxes_index]
|
||||
self.boxes_tensor_index = boxes_details["index"]
|
||||
self.boxes_scale, self.boxes_zero_point = boxes_details["quantization"]
|
||||
|
||||
elif self.model_type == ModelTypeEnum.ssd:
|
||||
logger.debug("Using SSD preprocessing/postprocessing")
|
||||
|
||||
# SSD model indices (4 outputs: boxes, class_ids, scores, count)
|
||||
for x in self.tensor_output_details:
|
||||
if len(x["shape"]) == 3:
|
||||
self.output_boxes_index = x["index"]
|
||||
elif len(x["shape"]) == 1:
|
||||
self.output_count_index = x["index"]
|
||||
|
||||
self.output_class_ids_index = None
|
||||
self.output_class_scores_index = None
|
||||
|
||||
else:
|
||||
raise Exception(
|
||||
f"{self.model_type} is currently not supported for edgetpu. See the docs for more info on supported models."
|
||||
)
|
||||
|
||||
def _generate_anchors_and_strides(self):
|
||||
# for decoding the bounding box DFL information into xy coordinates
|
||||
all_anchors = []
|
||||
all_strides = []
|
||||
strides = (8, 16, 32) # YOLO's small, medium, large detection heads
|
||||
|
||||
for stride in strides:
|
||||
feat_h, feat_w = self.model_height // stride, self.model_width // stride
|
||||
|
||||
grid_y, grid_x = np.meshgrid(
|
||||
np.arange(feat_h, dtype=np.float32),
|
||||
np.arange(feat_w, dtype=np.float32),
|
||||
indexing="ij",
|
||||
)
|
||||
|
||||
grid_coords = np.stack((grid_x.flatten(), grid_y.flatten()), axis=1)
|
||||
anchor_points = grid_coords + 0.5
|
||||
|
||||
all_anchors.append(anchor_points)
|
||||
all_strides.append(np.full((feat_h * feat_w, 1), stride, dtype=np.float32))
|
||||
|
||||
self.anchors = np.concatenate(all_anchors, axis=0)
|
||||
self.anchor_strides = np.concatenate(all_strides, axis=0)
|
||||
|
||||
def determine_indexes_for_non_yolo_models(self):
|
||||
"""Legacy method for SSD models."""
|
||||
if (
|
||||
self.output_class_ids_index is None
|
||||
or self.output_class_scores_index is None
|
||||
):
|
||||
for i in range(4):
|
||||
index = self.tensor_output_details[i]["index"]
|
||||
if (
|
||||
index != self.output_boxes_index
|
||||
and index != self.output_count_index
|
||||
):
|
||||
if (
|
||||
np.mod(np.float32(self.interpreter.tensor(index)()[0][0]), 1)
|
||||
== 0.0
|
||||
):
|
||||
self.output_class_ids_index = index
|
||||
else:
|
||||
self.output_scores_index = index
|
||||
|
||||
def pre_process(self, tensor_input):
|
||||
if self.model_requires_int8:
|
||||
tensor_input = np.bitwise_xor(tensor_input, 128).view(
|
||||
np.int8
|
||||
) # shift by -128
|
||||
return tensor_input
|
||||
|
||||
def detect_raw(self, tensor_input):
|
||||
tensor_input = self.pre_process(tensor_input)
|
||||
|
||||
self.interpreter.set_tensor(self.tensor_input_details[0]["index"], tensor_input)
|
||||
self.interpreter.invoke()
|
||||
|
||||
boxes = self.interpreter.tensor(self.tensor_output_details[0]["index"])()[0]
|
||||
class_ids = self.interpreter.tensor(self.tensor_output_details[1]["index"])()[0]
|
||||
scores = self.interpreter.tensor(self.tensor_output_details[2]["index"])()[0]
|
||||
count = int(
|
||||
self.interpreter.tensor(self.tensor_output_details[3]["index"])()[0]
|
||||
)
|
||||
if self.model_type == ModelTypeEnum.yologeneric:
|
||||
# Multi-tensor YOLO model with (non-standard B(H*W)C output format).
|
||||
# (the comments indicate the shape of tensors,
|
||||
# using "2100" as the anchor count (for image size of 320x320),
|
||||
# "NC" as number of classes,
|
||||
# "N" as the count that survive after min-score filtering)
|
||||
# TENSOR A) class scores (1, 2100, NC) with logit values
|
||||
# TENSOR B) box coordinates (1, 2100, 64) encoded as dfl scores
|
||||
# Recommend that the model clamp the logit values in tensor (A)
|
||||
# to the range [-4,+4] to preserve precision from [2%,98%]
|
||||
# and because NMS requires the min_score parameter to be >= 0
|
||||
|
||||
detections = np.zeros((20, 6), np.float32)
|
||||
# don't dequantize scores data yet, wait until the low-confidence
|
||||
# candidates are filtered out from the overall result set.
|
||||
# This reduces the work and makes post-processing faster.
|
||||
# this method works with raw quantized numbers when possible,
|
||||
# which relies on the value of the scale factor to be >0.
|
||||
# This speeds up max and argmax operations.
|
||||
# Get max confidence for each detection and create the mask
|
||||
detections = np.zeros(
|
||||
(self.max_detections, 6), np.float32
|
||||
) # initialize zero results
|
||||
scores_output_quantized = self.interpreter.get_tensor(
|
||||
self.scores_tensor_index
|
||||
)[0] # (2100, NC)
|
||||
max_scores_quantized = np.max(scores_output_quantized, axis=1) # (2100,)
|
||||
mask = max_scores_quantized >= self.min_score_quantized # (2100,)
|
||||
|
||||
for i in range(count):
|
||||
if scores[i] < 0.4 or i == 20:
|
||||
break
|
||||
detections[i] = [
|
||||
class_ids[i],
|
||||
float(scores[i]),
|
||||
boxes[i][0],
|
||||
boxes[i][1],
|
||||
boxes[i][2],
|
||||
boxes[i][3],
|
||||
if not np.any(mask):
|
||||
return detections # empty results
|
||||
|
||||
max_scores_filtered_shiftedpositive = (
|
||||
(max_scores_quantized[mask] - self.scores_zero_point)
|
||||
* self.scores_scale
|
||||
) + self.logit_shift_to_positive_values # (N,1) shifted logit values
|
||||
scores_output_quantized_filtered = scores_output_quantized[mask]
|
||||
|
||||
# dequantize boxes. NMS needs them to be in float format
|
||||
# remove candidates with probabilities < threshold
|
||||
boxes_output_quantized_filtered = (
|
||||
self.interpreter.get_tensor(self.boxes_tensor_index)[0]
|
||||
)[mask] # (N, 64)
|
||||
boxes_output_filtered = (
|
||||
boxes_output_quantized_filtered.astype(np.float32)
|
||||
- self.boxes_zero_point
|
||||
) * self.boxes_scale
|
||||
|
||||
# 2. Decode DFL to distances (ltrb)
|
||||
dfl_distributions = boxes_output_filtered.reshape(
|
||||
-1, 4, self.reg_max
|
||||
) # (N, 4, 16)
|
||||
|
||||
# Softmax over the 16 bins
|
||||
dfl_max = np.max(dfl_distributions, axis=2, keepdims=True)
|
||||
dfl_exp = np.exp(dfl_distributions - dfl_max)
|
||||
dfl_probs = dfl_exp / np.sum(dfl_exp, axis=2, keepdims=True) # (N, 4, 16)
|
||||
|
||||
# Weighted sum: (N, 4, 16) * (16,) -> (N, 4)
|
||||
distances = np.einsum("pcr,r->pc", dfl_probs, self.project)
|
||||
|
||||
# Calculate box corners in pixel coordinates
|
||||
anchors_filtered = self.anchors[mask]
|
||||
anchor_strides_filtered = self.anchor_strides[mask]
|
||||
x1y1 = (
|
||||
anchors_filtered - distances[:, [0, 1]]
|
||||
) * anchor_strides_filtered # (N, 2)
|
||||
x2y2 = (
|
||||
anchors_filtered + distances[:, [2, 3]]
|
||||
) * anchor_strides_filtered # (N, 2)
|
||||
boxes_filtered_decoded = np.concatenate((x1y1, x2y2), axis=-1) # (N, 4)
|
||||
|
||||
# 9. Apply NMS. Use logit scores here to defer sigmoid()
|
||||
# until after filtering out redundant boxes
|
||||
# Shift the logit scores to be non-negative (required by cv2)
|
||||
indices = cv2.dnn.NMSBoxes(
|
||||
bboxes=boxes_filtered_decoded,
|
||||
scores=max_scores_filtered_shiftedpositive,
|
||||
score_threshold=(
|
||||
self.min_logit_value + self.logit_shift_to_positive_values
|
||||
),
|
||||
nms_threshold=0.4, # should this be a model config setting?
|
||||
)
|
||||
num_detections = len(indices)
|
||||
if num_detections == 0:
|
||||
return detections # empty results
|
||||
|
||||
nms_indices = np.array(indices, dtype=np.int32).ravel() # or .flatten()
|
||||
if num_detections > self.max_detections:
|
||||
nms_indices = nms_indices[: self.max_detections]
|
||||
num_detections = self.max_detections
|
||||
kept_logits_quantized = scores_output_quantized_filtered[nms_indices]
|
||||
class_ids_post_nms = np.argmax(kept_logits_quantized, axis=1)
|
||||
|
||||
# Extract the final boxes and scores using fancy indexing
|
||||
final_boxes = boxes_filtered_decoded[nms_indices]
|
||||
final_scores_logits = (
|
||||
max_scores_filtered_shiftedpositive[nms_indices]
|
||||
- self.logit_shift_to_positive_values
|
||||
) # Unshifted logits
|
||||
|
||||
# Detections array format: [class_id, score, ymin, xmin, ymax, xmax]
|
||||
detections[:num_detections, 0] = class_ids_post_nms
|
||||
detections[:num_detections, 1] = 1.0 / (
|
||||
1.0 + np.exp(-final_scores_logits)
|
||||
) # sigmoid
|
||||
detections[:num_detections, 2] = final_boxes[:, 1] / self.model_height
|
||||
detections[:num_detections, 3] = final_boxes[:, 0] / self.model_width
|
||||
detections[:num_detections, 4] = final_boxes[:, 3] / self.model_height
|
||||
detections[:num_detections, 5] = final_boxes[:, 2] / self.model_width
|
||||
return detections
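The densest step in the YOLO branch above is the DFL box decode: each of the four box sides is predicted as a softmax over reg_max = 16 bins, the expected bin index is taken as a distance in grid cells, and the distances are turned into corners around the anchor point. A hedged, standalone numpy sketch of just that decode (shapes follow the comments above; inputs are assumed to be already dequantized and score-filtered):

import numpy as np

def decode_dfl_boxes(dfl, anchors, strides, reg_max=16):
    """dfl: (N, 4 * reg_max) DFL logits; anchors: (N, 2) grid centers in cells;
    strides: (N, 1) stride per anchor. Returns (N, 4) boxes as x1, y1, x2, y2 in pixels."""
    dist = dfl.reshape(-1, 4, reg_max)                        # (N, 4, 16)
    dist = dist - dist.max(axis=2, keepdims=True)             # stabilize softmax
    probs = np.exp(dist) / np.exp(dist).sum(axis=2, keepdims=True)
    distances = probs @ np.arange(reg_max, dtype=np.float32)  # expected bin -> (N, 4) ltrb
    x1y1 = (anchors - distances[:, [0, 1]]) * strides         # left/top corner
    x2y2 = (anchors + distances[:, [2, 3]]) * strides         # right/bottom corner
    return np.concatenate((x1y1, x2y2), axis=-1)

boxes = decode_dfl_boxes(
    np.zeros((1, 64), np.float32),        # flat logits -> uniform softmax per side
    np.array([[10.5, 6.5]], np.float32),  # anchor center in grid cells
    np.array([[8.0]], np.float32),        # stride of the 8x detection head
)
# boxes -> [[24., -8., 144., 112.]]  (uniform bins give an expected distance of 7.5 cells per side)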
|
||||
|
||||
elif self.model_type == ModelTypeEnum.ssd:
|
||||
self.determine_indexes_for_non_yolo_models()
|
||||
boxes = self.interpreter.tensor(self.tensor_output_details[0]["index"])()[0]
|
||||
class_ids = self.interpreter.tensor(
|
||||
self.tensor_output_details[1]["index"]
|
||||
)()[0]
|
||||
scores = self.interpreter.tensor(self.tensor_output_details[2]["index"])()[
|
||||
0
|
||||
]
|
||||
count = int(
|
||||
self.interpreter.tensor(self.tensor_output_details[3]["index"])()[0]
|
||||
)
|
||||
|
||||
return detections
|
||||
detections = np.zeros((self.max_detections, 6), np.float32)
|
||||
|
||||
for i in range(count):
|
||||
if scores[i] < self.min_score:
|
||||
break
|
||||
if i == self.max_detections:
|
||||
logger.debug(f"Too many detections ({count})!")
|
||||
break
|
||||
detections[i] = [
|
||||
class_ids[i],
|
||||
float(scores[i]),
|
||||
boxes[i][0],
|
||||
boxes[i][1],
|
||||
boxes[i][2],
|
||||
boxes[i][3],
|
||||
]
|
||||
|
||||
return detections
|
||||
|
||||
else:
|
||||
raise Exception(
|
||||
f"{self.model_type} is currently not supported for edgetpu. See the docs for more info on supported models."
|
||||
)
|
||||
|
||||
@@ -2,7 +2,6 @@ import glob
|
||||
import logging
|
||||
import os
|
||||
import shutil
|
||||
import time
|
||||
import urllib.request
|
||||
import zipfile
|
||||
from queue import Queue
|
||||
@@ -55,6 +54,9 @@ class MemryXDetector(DetectionApi):
|
||||
)
|
||||
return
|
||||
|
||||
# Initialize stop_event as None, will be set later by set_stop_event()
|
||||
self.stop_event = None
|
||||
|
||||
model_cfg = getattr(detector_config, "model", None)
|
||||
|
||||
# Check if model_type was explicitly set by the user
|
||||
@@ -363,26 +365,43 @@ class MemryXDetector(DetectionApi):
|
||||
def process_input(self):
|
||||
"""Input callback function: wait for frames in the input queue, preprocess, and send to MX3 (return)"""
|
||||
while True:
|
||||
# Check if shutdown is requested
|
||||
if self.stop_event and self.stop_event.is_set():
|
||||
logger.debug("[process_input] Stop event detected, returning None")
|
||||
return None
|
||||
try:
|
||||
# Wait for a frame from the queue (blocking call)
|
||||
frame = self.capture_queue.get(
|
||||
block=True
|
||||
) # Blocks until data is available
|
||||
# Wait for a frame from the queue with timeout to check stop_event periodically
|
||||
frame = self.capture_queue.get(block=True, timeout=0.5)
|
||||
|
||||
return frame
|
||||
|
||||
except Exception as e:
|
||||
logger.info(f"[process_input] Error processing input: {e}")
|
||||
time.sleep(0.1) # Prevent busy waiting in case of error
|
||||
# Silently handle queue.Empty timeouts (expected during normal operation)
|
||||
# Log any other unexpected exceptions
|
||||
if "Empty" not in str(type(e).__name__):
|
||||
logger.warning(f"[process_input] Unexpected error: {e}")
|
||||
# Loop continues and will check stop_event at the top
|
||||
|
||||
def receive_output(self):
|
||||
"""Retrieve processed results from MemryX output queue + a copy of the original frame"""
|
||||
connection_id = (
|
||||
self.capture_id_queue.get()
|
||||
) # Get the corresponding connection ID
|
||||
detections = self.output_queue.get() # Get detections from MemryX
|
||||
try:
|
||||
# Get connection ID with timeout
|
||||
connection_id = self.capture_id_queue.get(
|
||||
block=True, timeout=1.0
|
||||
) # Get the corresponding connection ID
|
||||
detections = self.output_queue.get() # Get detections from MemryX
|
||||
|
||||
return connection_id, detections
|
||||
return connection_id, detections
|
||||
|
||||
except Exception as e:
|
||||
# On timeout or stop event, return None
|
||||
if self.stop_event and self.stop_event.is_set():
|
||||
logger.debug("[receive_output] Stop event detected, exiting")
|
||||
# Silently handle queue.Empty timeouts, they're expected during normal operation
|
||||
elif "Empty" not in str(type(e).__name__):
|
||||
logger.warning(f"[receive_output] Error receiving output: {e}")
|
||||
|
||||
return None, None
|
||||
|
||||
def post_process_yolonas(self, output):
|
||||
predictions = output[0]
|
||||
@@ -831,6 +850,19 @@ class MemryXDetector(DetectionApi):
|
||||
f"{self.memx_model_type} is currently not supported for memryx. See the docs for more info on supported models."
|
||||
)
|
||||
|
||||
def set_stop_event(self, stop_event):
|
||||
"""Set the stop event for graceful shutdown."""
|
||||
self.stop_event = stop_event
|
||||
|
||||
def shutdown(self):
|
||||
"""Gracefully shutdown the MemryX accelerator"""
|
||||
try:
|
||||
if hasattr(self, "accl") and self.accl is not None:
|
||||
self.accl.shutdown()
|
||||
logger.info("MemryX accelerator shutdown complete")
|
||||
except Exception as e:
|
||||
logger.error(f"Error during MemryX shutdown: {e}")
|
||||
|
||||
def detect_raw(self, tensor_input: np.ndarray):
|
||||
"""Removed synchronous detect_raw() function so that we only use async"""
|
||||
return 0
|
||||
|
||||
@@ -12,9 +12,6 @@ from peewee import DoesNotExist, IntegrityError
|
||||
from PIL import Image
|
||||
from playhouse.shortcuts import model_to_dict
|
||||
|
||||
from frigate.comms.embeddings_updater import (
|
||||
EmbeddingsRequestEnum,
|
||||
)
|
||||
from frigate.comms.inter_process import InterProcessRequestor
|
||||
from frigate.config import FrigateConfig
|
||||
from frigate.config.classification import SemanticSearchModelEnum
|
||||
@@ -495,44 +492,49 @@ class Embeddings:
|
||||
or thumbnail_missing
|
||||
):
|
||||
existing_trigger.embedding = self._calculate_trigger_embedding(
|
||||
trigger
|
||||
trigger, trigger_name, camera.name
|
||||
)
|
||||
needs_embedding_update = True
|
||||
|
||||
if needs_embedding_update:
|
||||
existing_trigger.save()
|
||||
continue
|
||||
else:
|
||||
# Create new trigger
|
||||
try:
|
||||
try:
|
||||
event: Event = Event.get(Event.id == trigger.data)
|
||||
except DoesNotExist:
|
||||
logger.warning(
|
||||
f"Event ID {trigger.data} for trigger {trigger_name} does not exist."
|
||||
# For thumbnail triggers, validate the event exists
|
||||
if trigger.type == "thumbnail":
|
||||
try:
|
||||
event: Event = Event.get(Event.id == trigger.data)
|
||||
except DoesNotExist:
|
||||
logger.warning(
|
||||
f"Event ID {trigger.data} for trigger {trigger_name} does not exist."
|
||||
)
|
||||
continue
|
||||
|
||||
# Skip the event if not an object
|
||||
if event.data.get("type") != "object":
|
||||
logger.warning(
|
||||
f"Event ID {trigger.data} for trigger {trigger_name} is not a tracked object."
|
||||
)
|
||||
continue
|
||||
|
||||
thumbnail = get_event_thumbnail_bytes(event)
|
||||
|
||||
if not thumbnail:
|
||||
logger.warning(
|
||||
f"Unable to retrieve thumbnail for event ID {trigger.data} for {trigger_name}."
|
||||
)
|
||||
continue
|
||||
|
||||
self.write_trigger_thumbnail(
|
||||
camera.name, trigger.data, thumbnail
|
||||
)
|
||||
continue
|
||||
|
||||
# Skip the event if not an object
|
||||
if event.data.get("type") != "object":
|
||||
logger.warning(
|
||||
f"Event ID {trigger.data} for trigger {trigger_name} is not a tracked object."
|
||||
)
|
||||
continue
|
||||
|
||||
thumbnail = get_event_thumbnail_bytes(event)
|
||||
|
||||
if not thumbnail:
|
||||
logger.warning(
|
||||
f"Unable to retrieve thumbnail for event ID {trigger.data} for {trigger_name}."
|
||||
)
|
||||
continue
|
||||
|
||||
self.write_trigger_thumbnail(
|
||||
camera.name, trigger.data, thumbnail
|
||||
)
|
||||
|
||||
# Calculate embedding for new trigger
|
||||
embedding = self._calculate_trigger_embedding(trigger)
|
||||
embedding = self._calculate_trigger_embedding(
|
||||
trigger, trigger_name, camera.name
|
||||
)
|
||||
|
||||
Trigger.create(
|
||||
camera=camera.name,
|
||||
@@ -558,7 +560,11 @@ class Embeddings:
|
||||
Trigger.camera == camera.name, Trigger.name.in_(triggers_to_remove)
|
||||
).execute()
|
||||
for trigger_name in triggers_to_remove:
|
||||
self.remove_trigger_thumbnail(camera.name, trigger_name)
|
||||
# Only remove thumbnail files for thumbnail triggers
|
||||
if existing_triggers[trigger_name].type == "thumbnail":
|
||||
self.remove_trigger_thumbnail(
|
||||
camera.name, existing_triggers[trigger_name].data
|
||||
)
|
||||
|
||||
def write_trigger_thumbnail(
|
||||
self, camera: str, event_id: str, thumbnail: bytes
|
||||
@@ -588,14 +594,13 @@ class Embeddings:
|
||||
f"Failed to delete thumbnail for trigger with data {event_id} in {camera}: {e}"
|
||||
)
|
||||
|
||||
def _calculate_trigger_embedding(self, trigger) -> bytes:
|
||||
def _calculate_trigger_embedding(
|
||||
self, trigger, trigger_name: str, camera_name: str
|
||||
) -> bytes:
|
||||
"""Calculate embedding for a trigger based on its type and data."""
|
||||
if trigger.type == "description":
|
||||
logger.debug(f"Generating embedding for trigger description {trigger.name}")
|
||||
embedding = self.requestor.send_data(
|
||||
EmbeddingsRequestEnum.embed_description.value,
|
||||
{"id": None, "description": trigger.data, "upsert": False},
|
||||
)
|
||||
logger.debug(f"Generating embedding for trigger description {trigger_name}")
|
||||
embedding = self.embed_description(None, trigger.data, upsert=False)
|
||||
return embedding.astype(np.float32).tobytes()
|
||||
|
||||
elif trigger.type == "thumbnail":
|
||||
@@ -615,28 +620,21 @@ class Embeddings:
|
||||
|
||||
try:
|
||||
with open(
|
||||
os.path.join(
|
||||
TRIGGER_DIR, trigger.camera, f"{trigger.data}.webp"
|
||||
),
|
||||
os.path.join(TRIGGER_DIR, camera_name, f"{trigger.data}.webp"),
|
||||
"rb",
|
||||
) as f:
|
||||
thumbnail = f.read()
|
||||
except Exception as e:
|
||||
logger.error(
|
||||
f"Failed to read thumbnail for trigger {trigger.name} with ID {trigger.data}: {e}"
|
||||
f"Failed to read thumbnail for trigger {trigger_name} with ID {trigger.data}: {e}"
|
||||
)
|
||||
return b""
|
||||
|
||||
logger.debug(
|
||||
f"Generating embedding for trigger thumbnail {trigger.name} with ID {trigger.data}"
|
||||
f"Generating embedding for trigger thumbnail {trigger_name} with ID {trigger.data}"
|
||||
)
|
||||
embedding = self.requestor.send_data(
|
||||
EmbeddingsRequestEnum.embed_thumbnail.value,
|
||||
{
|
||||
"id": str(trigger.data),
|
||||
"thumbnail": str(thumbnail),
|
||||
"upsert": False,
|
||||
},
|
||||
embedding = self.embed_thumbnail(
|
||||
str(trigger.data), thumbnail, upsert=False
|
||||
)
|
||||
return embedding.astype(np.float32).tobytes()
|
||||
|
||||
|
||||
@@ -153,7 +153,7 @@ PRESETS_HW_ACCEL_ENCODE_BIRDSEYE = {
|
||||
FFMPEG_HWACCEL_VAAPI: "{0} -hide_banner -hwaccel vaapi -hwaccel_output_format vaapi -hwaccel_device {3} {1} -c:v h264_vaapi -g 50 -bf 0 -profile:v high -level:v 4.1 -sei:v 0 -an -vf format=vaapi|nv12,hwupload {2}",
|
||||
"preset-intel-qsv-h264": "{0} -hide_banner {1} -c:v h264_qsv -g 50 -bf 0 -profile:v high -level:v 4.1 -async_depth:v 1 {2}",
|
||||
"preset-intel-qsv-h265": "{0} -hide_banner {1} -c:v h264_qsv -g 50 -bf 0 -profile:v main -level:v 4.1 -async_depth:v 1 {2}",
|
||||
FFMPEG_HWACCEL_NVIDIA: "{0} -hide_banner {1} -hwaccel device {3} -c:v h264_nvenc -g 50 -profile:v high -level:v auto -preset:v p2 -tune:v ll {2}",
|
||||
FFMPEG_HWACCEL_NVIDIA: "{0} -hide_banner {1} -hwaccel cuda -hwaccel_device {3} -c:v h264_nvenc -g 50 -profile:v high -level:v auto -preset:v p2 -tune:v ll {2}",
|
||||
"preset-jetson-h264": "{0} -hide_banner {1} -c:v h264_nvmpi -profile high {2}",
|
||||
"preset-jetson-h265": "{0} -hide_banner {1} -c:v h264_nvmpi -profile main {2}",
|
||||
FFMPEG_HWACCEL_RKMPP: "{0} -hide_banner {1} -c:v h264_rkmpp -profile:v high {2}",
|
||||
|
||||
@@ -177,50 +177,60 @@ Each line represents a detection state, not necessarily unique individuals. Pare
        self,
        start_ts: float,
        end_ts: float,
        segments: list[dict[str, Any]],
        events: list[dict[str, Any]],
        debug_save: bool,
    ) -> str | None:
        """Generate a summary of review item descriptions over a period of time."""
        time_range = f"{datetime.datetime.fromtimestamp(start_ts).strftime('%B %d, %Y at %I:%M %p')} to {datetime.datetime.fromtimestamp(end_ts).strftime('%B %d, %Y at %I:%M %p')}"
        timeline_summary_prompt = f"""
You are a security officer.
Time range: {time_range}.
Input: JSON list with "title", "scene", "confidence", "potential_threat_level" (1-2), "other_concerns".
You are a security officer writing a concise security report.

Task: Write a concise, human-presentable security report in markdown format.
Time range: {time_range}

Rules for the report:
Input format: Each event is a JSON object with:
- "title", "scene", "confidence", "potential_threat_level" (0-2), "other_concerns", "camera", "time", "start_time", "end_time"
- "context": array of related events from other cameras that occurred during overlapping time periods

- Title & overview
- Start with:
# Security Summary - {time_range}
- Write a 1-2 sentence situational overview capturing the general pattern of the period.
Report Structure - Use this EXACT format:

- Event details
- Present events in chronological order as a bullet list.
- **If multiple events occur within the same minute or overlapping time range, COMBINE them into a single bullet.**
- Summarize the distinct activities as sub-points under the shared timestamp.
- If no timestamp is given, preserve order but label as “Time not specified.”
- Use bold timestamps for clarity.
- Group bullets under subheadings when multiple events fall into the same category (e.g., Vehicle Activity, Porch Activity, Unusual Behavior).
# Security Summary - {time_range}

- Threat levels
- Always show (threat level: X) for each event.
- If multiple events at the same time share the same threat level, only state it once.
## Overview
[Write 1-2 sentences summarizing the overall activity pattern during this period.]

- Final assessment
- End with a Final Assessment section.
- If all events are threat level 1 with no escalation:
Final assessment: Only normal residential activity observed during this period.
- If threat level 2+ events are present, clearly summarize them as Potential concerns requiring review.
---

- Conciseness
- Do not repeat benign clothing/appearance details unless they distinguish individuals.
- Summarize similar routine events instead of restating full scene descriptions.
## Timeline

[Group events by time periods (e.g., "Morning (6:00 AM - 12:00 PM)", "Afternoon (12:00 PM - 5:00 PM)", "Evening (5:00 PM - 9:00 PM)", "Night (9:00 PM - 6:00 AM)"). Use appropriate time blocks based on when events occurred.]

### [Time Block Name]

**HH:MM AM/PM** | [Camera Name] | [Threat Level Indicator]
- [Event title]: [Clear description incorporating contextual information from the "context" array]
- Context: [If context array has items, mention them here, e.g., "Delivery truck present on Front Driveway Cam (HH:MM AM/PM)"]
- Assessment: [Brief assessment incorporating context - if context explains the event, note it here]

[Repeat for each event in chronological order within the time block]

---

## Summary
[One sentence summarizing the period. If all events are normal/explained: "Routine activity observed." If review needed: "Some activity requires review but no security concerns." If security concerns: "Security concerns requiring immediate attention."]

Guidelines:
- List ALL events in chronological order, grouped by time blocks
- Threat level indicators: ✓ Normal, ⚠️ Needs review, 🔴 Security concern
- Integrate contextual information naturally - use the "context" array to enrich each event's description
- If context explains the event (e.g., delivery truck explains person at door), describe it accordingly (e.g., "delivery person" not "unidentified person")
- Be concise but informative - focus on what happened and what it means
- If contextual information makes an event clearly normal, reflect that in your assessment
- Only create time blocks that have events - don't create empty sections
"""

        for item in segments:
            timeline_summary_prompt += f"\n{item}"
        timeline_summary_prompt += "\n\nEvents:\n"
        for event in events:
            timeline_summary_prompt += f"\n{event}\n"

        if debug_save:
            with open(
frigate/jobs/__init__.py (new file, 0 lines)
frigate/jobs/job.py (new file, 21 lines)
@@ -0,0 +1,21 @@
"""Generic base class for long-running background jobs."""

from dataclasses import asdict, dataclass, field
from typing import Any, Optional


@dataclass
class Job:
    """Base class for long-running background jobs."""

    id: str = field(default_factory=lambda: __import__("uuid").uuid4().__str__()[:12])
    job_type: str = ""  # Must be set by subclasses
    status: str = "queued"  # queued, running, success, failed, cancelled
    results: Optional[dict[str, Any]] = None
    start_time: Optional[float] = None
    end_time: Optional[float] = None
    error_message: Optional[str] = None

    def to_dict(self) -> dict[str, Any]:
        """Convert to dictionary for WebSocket transmission."""
        return asdict(self)
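For orientation, a minimal sketch (not part of this changeset) of how a subclass of the new Job base class might be declared; the ThumbnailRebuildJob name and its cameras field are hypothetical:

from dataclasses import dataclass, field

from frigate.jobs.job import Job


@dataclass
class ThumbnailRebuildJob(Job):
    """Hypothetical subclass: its own parameters sit next to the inherited bookkeeping fields."""

    job_type: str = "thumbnail_rebuild"
    cameras: list[str] = field(default_factory=list)


job = ThumbnailRebuildJob(cameras=["front_door"])
print(job.to_dict())  # id, status, timestamps and cameras as a JSON-friendly dict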
frigate/jobs/manager.py (new file, 70 lines)
@@ -0,0 +1,70 @@
"""Generic job management for long-running background tasks."""

import threading
from typing import Optional

from frigate.jobs.job import Job
from frigate.types import JobStatusTypesEnum

# Global state and locks for enforcing single concurrent job per job type
_job_locks: dict[str, threading.Lock] = {}
_current_jobs: dict[str, Optional[Job]] = {}
# Keep completed jobs for retrieval, keyed by (job_type, job_id)
_completed_jobs: dict[tuple[str, str], Job] = {}


def _get_lock(job_type: str) -> threading.Lock:
    """Get or create a lock for the specified job type."""
    if job_type not in _job_locks:
        _job_locks[job_type] = threading.Lock()
    return _job_locks[job_type]


def set_current_job(job: Job) -> None:
    """Set the current job for a given job type."""
    lock = _get_lock(job.job_type)
    with lock:
        # Store the previous job if it was completed
        old_job = _current_jobs.get(job.job_type)
        if old_job and old_job.status in (
            JobStatusTypesEnum.success,
            JobStatusTypesEnum.failed,
            JobStatusTypesEnum.cancelled,
        ):
            _completed_jobs[(job.job_type, old_job.id)] = old_job
        _current_jobs[job.job_type] = job


def clear_current_job(job_type: str, job_id: Optional[str] = None) -> None:
    """Clear the current job for a given job type, optionally checking the ID."""
    lock = _get_lock(job_type)
    with lock:
        if job_type in _current_jobs:
            current = _current_jobs[job_type]
            if current is None or (job_id is None or current.id == job_id):
                _current_jobs[job_type] = None


def get_current_job(job_type: str) -> Optional[Job]:
    """Get the current running/queued job for a given job type, if any."""
    lock = _get_lock(job_type)
    with lock:
        return _current_jobs.get(job_type)


def get_job_by_id(job_type: str, job_id: str) -> Optional[Job]:
    """Get job by ID. Checks current job first, then completed jobs."""
    lock = _get_lock(job_type)
    with lock:
        # Check if it's the current job
        current = _current_jobs.get(job_type)
        if current and current.id == job_id:
            return current
        # Check if it's a completed job
        return _completed_jobs.get((job_type, job_id))


def job_is_running(job_type: str) -> bool:
    """Check if a job of the given type is currently running or queued."""
    job = get_current_job(job_type)
    return job is not None and job.status in ("queued", "running")
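A hedged sketch of the single-concurrent-job pattern these helpers enforce; the "example_task" job type and the try_start helper are illustrative only and do not appear in this branch:

from frigate.jobs.job import Job
from frigate.jobs.manager import get_current_job, job_is_running, set_current_job


def try_start(job: Job) -> bool:
    """Register a job only if no job of the same type is queued or running."""
    if job_is_running(job.job_type):
        existing = get_current_job(job.job_type)
        print(f"{job.job_type} is already busy with job {existing.id}")
        return False
    set_current_job(job)
    return True


# Hypothetical job type; a real runner would start a worker thread after registering.
accepted = try_start(Job(job_type="example_task"))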
frigate/jobs/media_sync.py (new file, 135 lines)
@@ -0,0 +1,135 @@
"""Media sync job management with background execution."""

import logging
import threading
from dataclasses import dataclass, field
from datetime import datetime
from typing import Optional

from frigate.comms.inter_process import InterProcessRequestor
from frigate.const import UPDATE_JOB_STATE
from frigate.jobs.job import Job
from frigate.jobs.manager import (
    get_current_job,
    get_job_by_id,
    job_is_running,
    set_current_job,
)
from frigate.types import JobStatusTypesEnum
from frigate.util.media import sync_all_media

logger = logging.getLogger(__name__)


@dataclass
class MediaSyncJob(Job):
    """In-memory job state for media sync operations."""

    job_type: str = "media_sync"
    dry_run: bool = False
    media_types: list[str] = field(default_factory=lambda: ["all"])
    force: bool = False


class MediaSyncRunner(threading.Thread):
    """Thread-based runner for media sync jobs."""

    def __init__(self, job: MediaSyncJob) -> None:
        super().__init__(daemon=True, name="media_sync")
        self.job = job
        self.requestor = InterProcessRequestor()

    def run(self) -> None:
        """Execute the media sync job and broadcast status updates."""
        try:
            # Update job status to running
            self.job.status = JobStatusTypesEnum.running
            self.job.start_time = datetime.now().timestamp()
            self._broadcast_status()

            # Execute sync with provided parameters
            logger.debug(
                f"Starting media sync job {self.job.id}: "
                f"media_types={self.job.media_types}, "
                f"dry_run={self.job.dry_run}, "
                f"force={self.job.force}"
            )

            results = sync_all_media(
                dry_run=self.job.dry_run,
                media_types=self.job.media_types,
                force=self.job.force,
            )

            # Store results and mark as complete
            self.job.results = results.to_dict()
            self.job.status = JobStatusTypesEnum.success
            self.job.end_time = datetime.now().timestamp()

            logger.debug(f"Media sync job {self.job.id} completed successfully")
            self._broadcast_status()

        except Exception as e:
            logger.error(f"Media sync job {self.job.id} failed: {e}", exc_info=True)
            self.job.status = JobStatusTypesEnum.failed
            self.job.error_message = str(e)
            self.job.end_time = datetime.now().timestamp()
            self._broadcast_status()

        finally:
            if self.requestor:
                self.requestor.stop()

    def _broadcast_status(self) -> None:
        """Broadcast job status update via IPC to all WebSocket subscribers."""
        try:
            self.requestor.send_data(
                UPDATE_JOB_STATE,
                self.job.to_dict(),
            )
        except Exception as e:
            logger.warning(f"Failed to broadcast media sync status: {e}")


def start_media_sync_job(
    dry_run: bool = False,
    media_types: Optional[list[str]] = None,
    force: bool = False,
) -> Optional[str]:
    """Start a new media sync job if none is currently running.

    Returns job ID on success, None if job already running.
    """
    # Check if a job is already running
    if job_is_running("media_sync"):
        current = get_current_job("media_sync")
        logger.warning(
            f"Media sync job {current.id} is already running. Rejecting new request."
        )
        return None

    # Create and start new job
    job = MediaSyncJob(
        dry_run=dry_run,
        media_types=media_types or ["all"],
        force=force,
    )

    logger.debug(f"Creating new media sync job: {job.id}")
    set_current_job(job)

    # Start the background runner
    runner = MediaSyncRunner(job)
    runner.start()

    return job.id


def get_current_media_sync_job() -> Optional[MediaSyncJob]:
    """Get the current running/queued media sync job, if any."""
    return get_current_job("media_sync")


def get_media_sync_job_by_id(job_id: str) -> Optional[MediaSyncJob]:
    """Get media sync job by ID. Currently only tracks the current job."""
    return get_job_by_id("media_sync", job_id)
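A rough usage sketch for these entry points, assuming an API-style caller; the "recordings" media type value and the polling loop are assumptions rather than part of this diff, and the status strings mirror the comparison already used in job_is_running:

import time

from frigate.jobs.media_sync import get_media_sync_job_by_id, start_media_sync_job

# "recordings" is an assumed media type; the diff itself only shows the default ["all"].
job_id = start_media_sync_job(dry_run=True, media_types=["recordings"])

if job_id is None:
    print("A media sync job is already running")
else:
    while True:
        job = get_media_sync_job_by_id(job_id)
        if job and job.status in ("success", "failed", "cancelled"):
            print(job.status, job.results or job.error_message)
            break
        time.sleep(1)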
@@ -80,6 +80,14 @@ class Recordings(Model):
    regions = IntegerField(null=True)


class ExportCase(Model):
    id = CharField(null=False, primary_key=True, max_length=30)
    name = CharField(index=True, max_length=100)
    description = TextField(null=True)
    created_at = DateTimeField()
    updated_at = DateTimeField()


class Export(Model):
    id = CharField(null=False, primary_key=True, max_length=30)
    camera = CharField(index=True, max_length=20)
@@ -88,6 +96,12 @@ class Export(Model):
    video_path = CharField(unique=True)
    thumb_path = CharField(unique=True)
    in_progress = BooleanField()
    export_case = ForeignKeyField(
        ExportCase,
        null=True,
        backref="exports",
        column_name="export_case_id",
    )


class ReviewSegment(Model):
@@ -133,6 +147,7 @@ class User(Model):
        default="admin",
    )
    password_hash = CharField(null=False, max_length=120)
    password_changed_at = DateTimeField(null=True)
    notification_tokens = JSONField()

    @classmethod
@@ -239,6 +239,12 @@ class ImprovedMotionDetector(MotionDetector):
        )
        self.mask = np.where(resized_mask == [0])

        # Reset motion detection state when mask changes
        # so motion detection can quickly recalibrate with the new mask
        self.avg_frame = np.zeros(self.motion_frame_size, np.float32)
        self.calibrating = True
        self.motion_frame_count = 0

    def stop(self) -> None:
        """stop the motion detector."""
        pass
@@ -43,6 +43,7 @@ class BaseLocalDetector(ObjectDetector):
|
||||
self,
|
||||
detector_config: BaseDetectorConfig = None,
|
||||
labels: str = None,
|
||||
stop_event: MpEvent = None,
|
||||
):
|
||||
self.fps = EventsPerSecond()
|
||||
if labels is None:
|
||||
@@ -60,6 +61,10 @@ class BaseLocalDetector(ObjectDetector):
|
||||
|
||||
self.detect_api = create_detector(detector_config)
|
||||
|
||||
# If the detector supports stop_event, pass it
|
||||
if hasattr(self.detect_api, "set_stop_event") and stop_event:
|
||||
self.detect_api.set_stop_event(stop_event)
|
||||
|
||||
def _transform_input(self, tensor_input: np.ndarray) -> np.ndarray:
|
||||
if self.input_transform:
|
||||
tensor_input = np.transpose(tensor_input, self.input_transform)
|
||||
@@ -240,6 +245,10 @@ class AsyncDetectorRunner(FrigateProcess):
|
||||
while not self.stop_event.is_set():
|
||||
connection_id, detections = self._detector.async_receive_output()
|
||||
|
||||
# Handle timeout case (queue.Empty) - just continue
|
||||
if connection_id is None:
|
||||
continue
|
||||
|
||||
if not self.send_times:
|
||||
# guard; shouldn't happen if send/recv are balanced
|
||||
continue
|
||||
@@ -266,21 +275,38 @@ class AsyncDetectorRunner(FrigateProcess):
|
||||
|
||||
self._frame_manager = SharedMemoryFrameManager()
|
||||
self._publisher = ObjectDetectorPublisher()
|
||||
self._detector = AsyncLocalObjectDetector(detector_config=self.detector_config)
|
||||
self._detector = AsyncLocalObjectDetector(
|
||||
detector_config=self.detector_config, stop_event=self.stop_event
|
||||
)
|
||||
|
||||
for name in self.cameras:
|
||||
self.create_output_shm(name)
|
||||
|
||||
t_detect = threading.Thread(target=self._detect_worker, daemon=True)
|
||||
t_result = threading.Thread(target=self._result_worker, daemon=True)
|
||||
t_detect = threading.Thread(target=self._detect_worker, daemon=False)
|
||||
t_result = threading.Thread(target=self._result_worker, daemon=False)
|
||||
t_detect.start()
|
||||
t_result.start()
|
||||
|
||||
while not self.stop_event.is_set():
|
||||
time.sleep(0.5)
|
||||
try:
|
||||
while not self.stop_event.is_set():
|
||||
time.sleep(0.5)
|
||||
|
||||
self._publisher.stop()
|
||||
logger.info("Exited async detection process...")
|
||||
logger.info(
|
||||
"Stop event detected, waiting for detector threads to finish..."
|
||||
)
|
||||
|
||||
# Wait for threads to finish processing
|
||||
t_detect.join(timeout=5)
|
||||
t_result.join(timeout=5)
|
||||
|
||||
# Shutdown the AsyncDetector
|
||||
self._detector.detect_api.shutdown()
|
||||
|
||||
self._publisher.stop()
|
||||
except Exception as e:
|
||||
logger.error(f"Error during async detector shutdown: {e}")
|
||||
finally:
|
||||
logger.info("Exited Async detection process...")
|
||||
|
||||
|
||||
class ObjectDetectProcess:
|
||||
@@ -308,7 +334,7 @@ class ObjectDetectProcess:
|
||||
# if the process has already exited on its own, just return
|
||||
if self.detect_process and self.detect_process.exitcode:
|
||||
return
|
||||
self.detect_process.terminate()
|
||||
|
||||
logging.info("Waiting for detection process to exit gracefully...")
|
||||
self.detect_process.join(timeout=30)
|
||||
if self.detect_process.exitcode is None:
|
||||
|
||||
@@ -95,12 +95,21 @@ class OnvifController:
|
||||
|
||||
cam = self.camera_configs[cam_name]
|
||||
try:
|
||||
user = cam.onvif.user
|
||||
password = cam.onvif.password
|
||||
|
||||
if user is not None and isinstance(user, bytes):
|
||||
user = user.decode("utf-8")
|
||||
|
||||
if password is not None and isinstance(password, bytes):
|
||||
password = password.decode("utf-8")
|
||||
|
||||
self.cams[cam_name] = {
|
||||
"onvif": ONVIFCamera(
|
||||
cam.onvif.host,
|
||||
cam.onvif.port,
|
||||
cam.onvif.user,
|
||||
cam.onvif.password,
|
||||
user,
|
||||
password,
|
||||
wsdl_dir=str(Path(find_spec("onvif").origin).parent / "wsdl"),
|
||||
adjust_time=cam.onvif.ignore_time_mismatch,
|
||||
encrypt=not cam.onvif.tls_insecure,
|
||||
@@ -190,7 +199,11 @@ class OnvifController:
|
||||
ptz: ONVIFService = await onvif.create_ptz_service()
|
||||
self.cams[camera_name]["ptz"] = ptz
|
||||
|
||||
imaging: ONVIFService = await onvif.create_imaging_service()
|
||||
try:
|
||||
imaging: ONVIFService = await onvif.create_imaging_service()
|
||||
except (Fault, ONVIFError, TransportError, Exception) as e:
|
||||
logger.debug(f"Imaging service not supported for {camera_name}: {e}")
|
||||
imaging = None
|
||||
self.cams[camera_name]["imaging"] = imaging
|
||||
try:
|
||||
video_sources = await media.GetVideoSources()
|
||||
@@ -321,9 +334,15 @@ class OnvifController:
|
||||
presets = []
|
||||
|
||||
for preset in presets:
|
||||
self.cams[camera_name]["presets"][
|
||||
(getattr(preset, "Name") or f"preset {preset['token']}").lower()
|
||||
] = preset["token"]
|
||||
# Ensure preset name is a Unicode string and handle UTF-8 characters correctly
|
||||
preset_name = getattr(preset, "Name") or f"preset {preset['token']}"
|
||||
|
||||
if isinstance(preset_name, bytes):
|
||||
preset_name = preset_name.decode("utf-8")
|
||||
|
||||
# Convert to lowercase while preserving UTF-8 characters
|
||||
preset_name_lower = preset_name.lower()
|
||||
self.cams[camera_name]["presets"][preset_name_lower] = preset["token"]
|
||||
|
||||
# get list of supported features
|
||||
supported_features = []
|
||||
@@ -381,7 +400,10 @@ class OnvifController:
|
||||
f"Disabling autotracking zooming for {camera_name}: Absolute zoom not supported. Exception: {e}"
|
||||
)
|
||||
|
||||
if self.cams[camera_name]["video_source_token"] is not None:
|
||||
if (
|
||||
self.cams[camera_name]["video_source_token"] is not None
|
||||
and imaging is not None
|
||||
):
|
||||
try:
|
||||
imaging_capabilities = await imaging.GetImagingSettings(
|
||||
{"VideoSourceToken": self.cams[camera_name]["video_source_token"]}
|
||||
@@ -421,6 +443,7 @@ class OnvifController:
|
||||
if (
|
||||
"focus" in self.cams[camera_name]["features"]
|
||||
and self.cams[camera_name]["video_source_token"]
|
||||
and self.cams[camera_name]["imaging"] is not None
|
||||
):
|
||||
try:
|
||||
stop_request = self.cams[camera_name]["imaging"].create_type("Stop")
|
||||
@@ -555,6 +578,11 @@ class OnvifController:
|
||||
self.cams[camera_name]["active"] = False
|
||||
|
||||
async def _move_to_preset(self, camera_name: str, preset: str) -> None:
|
||||
if isinstance(preset, bytes):
|
||||
preset = preset.decode("utf-8")
|
||||
|
||||
preset = preset.lower()
|
||||
|
||||
if preset not in self.cams[camera_name]["presets"]:
|
||||
logger.error(f"{preset} is not a valid preset for {camera_name}")
|
||||
return
|
||||
@@ -648,6 +676,7 @@ class OnvifController:
|
||||
if (
|
||||
"focus" not in self.cams[camera_name]["features"]
|
||||
or not self.cams[camera_name]["video_source_token"]
|
||||
or self.cams[camera_name]["imaging"] is None
|
||||
):
|
||||
logger.error(f"{camera_name} does not support ONVIF continuous focus.")
|
||||
return
|
||||
|
||||
@@ -13,9 +13,8 @@ from playhouse.sqlite_ext import SqliteExtDatabase
|
||||
from frigate.config import CameraConfig, FrigateConfig, RetainModeEnum
|
||||
from frigate.const import CACHE_DIR, CLIPS_DIR, MAX_WAL_SIZE, RECORD_DIR
|
||||
from frigate.models import Previews, Recordings, ReviewSegment, UserReviewStatus
|
||||
from frigate.record.util import remove_empty_directories, sync_recordings
|
||||
from frigate.util.builtin import clear_and_unlink
|
||||
from frigate.util.time import get_tomorrow_at_time
|
||||
from frigate.util.media import remove_empty_directories
|
||||
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
@@ -347,11 +346,6 @@ class RecordingCleanup(threading.Thread):
|
||||
logger.debug("End expire recordings.")
|
||||
|
||||
def run(self) -> None:
|
||||
# on startup sync recordings with disk if enabled
|
||||
if self.config.record.sync_recordings:
|
||||
sync_recordings(limited=False)
|
||||
next_sync = get_tomorrow_at_time(3)
|
||||
|
||||
# Expire tmp clips every minute, recordings and clean directories every hour.
|
||||
for counter in itertools.cycle(range(self.config.record.expire_interval)):
|
||||
if self.stop_event.wait(60):
|
||||
@@ -360,14 +354,6 @@ class RecordingCleanup(threading.Thread):
|
||||
|
||||
self.clean_tmp_previews()
|
||||
|
||||
if (
|
||||
self.config.record.sync_recordings
|
||||
and datetime.datetime.now().astimezone(datetime.timezone.utc)
|
||||
> next_sync
|
||||
):
|
||||
sync_recordings(limited=True)
|
||||
next_sync = get_tomorrow_at_time(3)
|
||||
|
||||
if counter == 0:
|
||||
self.clean_tmp_clips()
|
||||
self.expire_recordings()
|
||||
|
||||
@@ -64,6 +64,7 @@ class RecordingExporter(threading.Thread):
|
||||
end_time: int,
|
||||
playback_factor: PlaybackFactorEnum,
|
||||
playback_source: PlaybackSourceEnum,
|
||||
export_case_id: Optional[str] = None,
|
||||
) -> None:
|
||||
super().__init__()
|
||||
self.config = config
|
||||
@@ -75,6 +76,7 @@ class RecordingExporter(threading.Thread):
|
||||
self.end_time = end_time
|
||||
self.playback_factor = playback_factor
|
||||
self.playback_source = playback_source
|
||||
self.export_case_id = export_case_id
|
||||
|
||||
# ensure export thumb dir
|
||||
Path(os.path.join(CLIPS_DIR, "export")).mkdir(exist_ok=True)
|
||||
@@ -226,7 +228,7 @@ class RecordingExporter(threading.Thread):
|
||||
ffmpeg_cmd = (
|
||||
parse_preset_hardware_acceleration_encode(
|
||||
self.config.ffmpeg.ffmpeg_path,
|
||||
self.config.ffmpeg.hwaccel_args,
|
||||
self.config.cameras[self.camera].record.export.hwaccel_args,
|
||||
f"-an {ffmpeg_input}",
|
||||
f"{self.config.cameras[self.camera].record.export.timelapse_args} -movflags +faststart",
|
||||
EncodeTypeEnum.timelapse,
|
||||
@@ -317,7 +319,7 @@ class RecordingExporter(threading.Thread):
|
||||
ffmpeg_cmd = (
|
||||
parse_preset_hardware_acceleration_encode(
|
||||
self.config.ffmpeg.ffmpeg_path,
|
||||
self.config.ffmpeg.hwaccel_args,
|
||||
self.config.cameras[self.camera].record.export.hwaccel_args,
|
||||
f"{TIMELAPSE_DATA_INPUT_ARGS} {ffmpeg_input}",
|
||||
f"{self.config.cameras[self.camera].record.export.timelapse_args} -movflags +faststart {video_path}",
|
||||
EncodeTypeEnum.timelapse,
|
||||
@@ -348,17 +350,20 @@ class RecordingExporter(threading.Thread):
|
||||
video_path = f"{EXPORT_DIR}/{self.camera}_{filename_start_datetime}-{filename_end_datetime}_{cleaned_export_id}.mp4"
|
||||
thumb_path = self.save_thumbnail(self.export_id)
|
||||
|
||||
Export.insert(
|
||||
{
|
||||
Export.id: self.export_id,
|
||||
Export.camera: self.camera,
|
||||
Export.name: export_name,
|
||||
Export.date: self.start_time,
|
||||
Export.video_path: video_path,
|
||||
Export.thumb_path: thumb_path,
|
||||
Export.in_progress: True,
|
||||
}
|
||||
).execute()
|
||||
export_values = {
|
||||
Export.id: self.export_id,
|
||||
Export.camera: self.camera,
|
||||
Export.name: export_name,
|
||||
Export.date: self.start_time,
|
||||
Export.video_path: video_path,
|
||||
Export.thumb_path: thumb_path,
|
||||
Export.in_progress: True,
|
||||
}
|
||||
|
||||
if self.export_case_id is not None:
|
||||
export_values[Export.export_case] = self.export_case_id
|
||||
|
||||
Export.insert(export_values).execute()
|
||||
|
||||
try:
|
||||
if self.playback_source == PlaybackSourceEnum.recordings:
|
||||
|
||||
@@ -1,147 +0,0 @@
|
||||
"""Recordings Utilities."""
|
||||
|
||||
import datetime
|
||||
import logging
|
||||
import os
|
||||
|
||||
from peewee import DatabaseError, chunked
|
||||
|
||||
from frigate.const import RECORD_DIR
|
||||
from frigate.models import Recordings, RecordingsToDelete
|
||||
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
|
||||
def remove_empty_directories(directory: str) -> None:
|
||||
# list all directories recursively and sort them by path,
|
||||
# longest first
|
||||
paths = sorted(
|
||||
[x[0] for x in os.walk(directory)],
|
||||
key=lambda p: len(str(p)),
|
||||
reverse=True,
|
||||
)
|
||||
for path in paths:
|
||||
# don't delete the parent
|
||||
if path == directory:
|
||||
continue
|
||||
if len(os.listdir(path)) == 0:
|
||||
os.rmdir(path)
|
||||
|
||||
|
||||
def sync_recordings(limited: bool) -> None:
|
||||
"""Check the db for stale recordings entries that don't exist in the filesystem."""
|
||||
|
||||
def delete_db_entries_without_file(check_timestamp: float) -> bool:
|
||||
"""Delete db entries where file was deleted outside of frigate."""
|
||||
|
||||
if limited:
|
||||
recordings = Recordings.select(Recordings.id, Recordings.path).where(
|
||||
Recordings.start_time >= check_timestamp
|
||||
)
|
||||
else:
|
||||
# get all recordings in the db
|
||||
recordings = Recordings.select(Recordings.id, Recordings.path)
|
||||
|
||||
# Use pagination to process records in chunks
|
||||
page_size = 1000
|
||||
num_pages = (recordings.count() + page_size - 1) // page_size
|
||||
recordings_to_delete = set()
|
||||
|
||||
for page in range(num_pages):
|
||||
for recording in recordings.paginate(page, page_size):
|
||||
if not os.path.exists(recording.path):
|
||||
recordings_to_delete.add(recording.id)
|
||||
|
||||
if len(recordings_to_delete) == 0:
|
||||
return True
|
||||
|
||||
logger.info(
|
||||
f"Deleting {len(recordings_to_delete)} recording DB entries with missing files"
|
||||
)
|
||||
|
||||
# convert back to list of dictionaries for insertion
|
||||
recordings_to_delete = [
|
||||
{"id": recording_id} for recording_id in recordings_to_delete
|
||||
]
|
||||
|
||||
if float(len(recordings_to_delete)) / max(1, recordings.count()) > 0.5:
|
||||
logger.warning(
|
||||
f"Deleting {(len(recordings_to_delete) / max(1, recordings.count()) * 100):.2f}% of recordings DB entries, could be due to configuration error. Aborting..."
|
||||
)
|
||||
return False
|
||||
|
||||
# create a temporary table for deletion
|
||||
RecordingsToDelete.create_table(temporary=True)
|
||||
|
||||
# insert ids to the temporary table
|
||||
max_inserts = 1000
|
||||
for batch in chunked(recordings_to_delete, max_inserts):
|
||||
RecordingsToDelete.insert_many(batch).execute()
|
||||
|
||||
try:
|
||||
# delete records in the main table that exist in the temporary table
|
||||
query = Recordings.delete().where(
|
||||
Recordings.id.in_(RecordingsToDelete.select(RecordingsToDelete.id))
|
||||
)
|
||||
query.execute()
|
||||
except DatabaseError as e:
|
||||
logger.error(f"Database error during recordings db cleanup: {e}")
|
||||
|
||||
return True
|
||||
|
||||
def delete_files_without_db_entry(files_on_disk: list[str]):
|
||||
"""Delete files where file is not inside frigate db."""
|
||||
files_to_delete = []
|
||||
|
||||
for file in files_on_disk:
|
||||
if not Recordings.select().where(Recordings.path == file).exists():
|
||||
files_to_delete.append(file)
|
||||
|
||||
if len(files_to_delete) == 0:
|
||||
return True
|
||||
|
||||
logger.info(
|
||||
f"Deleting {len(files_to_delete)} recordings files with missing DB entries"
|
||||
)
|
||||
|
||||
if float(len(files_to_delete)) / max(1, len(files_on_disk)) > 0.5:
|
||||
logger.debug(
|
||||
f"Deleting {(len(files_to_delete) / max(1, len(files_on_disk)) * 100):.2f}% of recordings DB entries, could be due to configuration error. Aborting..."
|
||||
)
|
||||
return False
|
||||
|
||||
for file in files_to_delete:
|
||||
os.unlink(file)
|
||||
|
||||
return True
|
||||
|
||||
logger.debug("Start sync recordings.")
|
||||
|
||||
# start checking on the hour 36 hours ago
|
||||
check_point = datetime.datetime.now().replace(
|
||||
minute=0, second=0, microsecond=0
|
||||
).astimezone(datetime.timezone.utc) - datetime.timedelta(hours=36)
|
||||
db_success = delete_db_entries_without_file(check_point.timestamp())
|
||||
|
||||
# only try to cleanup files if db cleanup was successful
|
||||
if db_success:
|
||||
if limited:
|
||||
# get recording files from last 36 hours
|
||||
hour_check = f"{RECORD_DIR}/{check_point.strftime('%Y-%m-%d/%H')}"
|
||||
files_on_disk = {
|
||||
os.path.join(root, file)
|
||||
for root, _, files in os.walk(RECORD_DIR)
|
||||
for file in files
|
||||
if root > hour_check
|
||||
}
|
||||
else:
|
||||
# get all recordings files on disk and put them in a set
|
||||
files_on_disk = {
|
||||
os.path.join(root, file)
|
||||
for root, _, files in os.walk(RECORD_DIR)
|
||||
for file in files
|
||||
}
|
||||
|
||||
delete_files_without_db_entry(files_on_disk)
|
||||
|
||||
logger.debug("End sync recordings.")
|
||||
@@ -22,6 +22,7 @@ from frigate.util.services import (
|
||||
get_bandwidth_stats,
|
||||
get_cpu_stats,
|
||||
get_fs_type,
|
||||
get_hailo_temps,
|
||||
get_intel_gpu_stats,
|
||||
get_jetson_stats,
|
||||
get_nvidia_gpu_stats,
|
||||
@@ -91,9 +92,80 @@ def get_temperatures() -> dict[str, float]:
|
||||
if temp is not None:
|
||||
temps[apex] = temp
|
||||
|
||||
# Get temperatures for Hailo devices
|
||||
temps.update(get_hailo_temps())
|
||||
|
||||
return temps
|
||||
|
||||
|
||||
def get_detector_temperature(
|
||||
detector_type: str,
|
||||
detector_index_by_type: dict[str, int],
|
||||
) -> Optional[float]:
|
||||
"""Get temperature for a specific detector based on its type."""
|
||||
if detector_type == "edgetpu":
|
||||
# Get temperatures for all attached Corals
|
||||
base = "/sys/class/apex/"
|
||||
if os.path.isdir(base):
|
||||
apex_devices = sorted(os.listdir(base))
|
||||
index = detector_index_by_type.get("edgetpu", 0)
|
||||
if index < len(apex_devices):
|
||||
apex_name = apex_devices[index]
|
||||
temp = read_temperature(os.path.join(base, apex_name, "temp"))
|
||||
if temp is not None:
|
||||
return temp
|
||||
elif detector_type == "hailo8l":
|
||||
# Get temperatures for Hailo devices
|
||||
hailo_temps = get_hailo_temps()
|
||||
if hailo_temps:
|
||||
hailo_device_names = sorted(hailo_temps.keys())
|
||||
index = detector_index_by_type.get("hailo8l", 0)
|
||||
if index < len(hailo_device_names):
|
||||
device_name = hailo_device_names[index]
|
||||
return hailo_temps[device_name]
|
||||
elif detector_type == "rknn":
|
||||
# Rockchip temperatures are handled by the GPU / NPU stats
|
||||
# as there are not detector specific temperatures
|
||||
pass
|
||||
|
||||
return None
|
||||
|
||||
|
||||
def get_detector_stats(
|
||||
stats_tracking: StatsTrackingTypes,
|
||||
) -> dict[str, dict[str, Any]]:
|
||||
"""Get stats for all detectors, including temperatures based on detector type."""
|
||||
detector_stats: dict[str, dict[str, Any]] = {}
|
||||
detector_type_indices: dict[str, int] = {}
|
||||
|
||||
for name, detector in stats_tracking["detectors"].items():
|
||||
pid = detector.detect_process.pid if detector.detect_process else None
|
||||
detector_type = detector.detector_config.type
|
||||
|
||||
# Keep track of the index for each detector type to match temperatures correctly
|
||||
current_index = detector_type_indices.get(detector_type, 0)
|
||||
detector_type_indices[detector_type] = current_index + 1
|
||||
|
||||
detector_stat = {
|
||||
"inference_speed": round(detector.avg_inference_speed.value * 1000, 2), # type: ignore[attr-defined]
|
||||
# issue https://github.com/python/typeshed/issues/8799
|
||||
# from mypy 0.981 onwards
|
||||
"detection_start": detector.detection_start.value, # type: ignore[attr-defined]
|
||||
# issue https://github.com/python/typeshed/issues/8799
|
||||
# from mypy 0.981 onwards
|
||||
"pid": pid,
|
||||
}
|
||||
|
||||
temp = get_detector_temperature(detector_type, {detector_type: current_index})
|
||||
|
||||
if temp is not None:
|
||||
detector_stat["temperature"] = round(temp, 1)
|
||||
|
||||
detector_stats[name] = detector_stat
|
||||
|
||||
return detector_stats
|
||||
|
||||
|
||||
def get_processing_stats(
|
||||
config: FrigateConfig, stats: dict[str, str], hwaccel_errors: list[str]
|
||||
) -> None:
|
||||
@@ -174,6 +246,7 @@ async def set_gpu_stats(
|
||||
"mem": str(round(float(nvidia_usage[i]["mem"]), 2)) + "%",
|
||||
"enc": str(round(float(nvidia_usage[i]["enc"]), 2)) + "%",
|
||||
"dec": str(round(float(nvidia_usage[i]["dec"]), 2)) + "%",
|
||||
"temp": str(nvidia_usage[i]["temp"]),
|
||||
}
|
||||
|
||||
else:
|
||||
@@ -279,6 +352,32 @@ def stats_snapshot(
|
||||
if camera_stats.capture_process_pid.value
|
||||
else None
|
||||
)
|
||||
# Calculate connection quality based on current state
|
||||
# This is computed at stats-collection time so offline cameras
|
||||
# correctly show as unusable rather than excellent
|
||||
expected_fps = config.cameras[name].detect.fps
|
||||
current_fps = camera_stats.camera_fps.value
|
||||
reconnects = camera_stats.reconnects_last_hour.value
|
||||
stalls = camera_stats.stalls_last_hour.value
|
||||
|
||||
if current_fps < 0.1:
|
||||
quality_str = "unusable"
|
||||
elif reconnects == 0 and current_fps >= 0.9 * expected_fps and stalls < 5:
|
||||
quality_str = "excellent"
|
||||
elif reconnects <= 2 and current_fps >= 0.6 * expected_fps:
|
||||
quality_str = "fair"
|
||||
elif reconnects > 10 or current_fps < 1.0 or stalls > 100:
|
||||
quality_str = "unusable"
|
||||
else:
|
||||
quality_str = "poor"
|
||||
|
||||
connection_quality = {
|
||||
"connection_quality": quality_str,
|
||||
"expected_fps": expected_fps,
|
||||
"reconnects_last_hour": reconnects,
|
||||
"stalls_last_hour": stalls,
|
||||
}
|
||||
|
||||
stats["cameras"][name] = {
|
||||
"camera_fps": round(camera_stats.camera_fps.value, 2),
|
||||
"process_fps": round(camera_stats.process_fps.value, 2),
|
||||
@@ -290,20 +389,10 @@ def stats_snapshot(
|
||||
"ffmpeg_pid": ffmpeg_pid,
|
||||
"audio_rms": round(camera_stats.audio_rms.value, 4),
|
||||
"audio_dBFS": round(camera_stats.audio_dBFS.value, 4),
|
||||
**connection_quality,
|
||||
}
|
||||
|
||||
stats["detectors"] = {}
|
||||
for name, detector in stats_tracking["detectors"].items():
|
||||
pid = detector.detect_process.pid if detector.detect_process else None
|
||||
stats["detectors"][name] = {
|
||||
"inference_speed": round(detector.avg_inference_speed.value * 1000, 2), # type: ignore[attr-defined]
|
||||
# issue https://github.com/python/typeshed/issues/8799
|
||||
# from mypy 0.981 onwards
|
||||
"detection_start": detector.detection_start.value, # type: ignore[attr-defined]
|
||||
# issue https://github.com/python/typeshed/issues/8799
|
||||
# from mypy 0.981 onwards
|
||||
"pid": pid,
|
||||
}
|
||||
stats["detectors"] = get_detector_stats(stats_tracking)
|
||||
stats["camera_fps"] = round(total_camera_fps, 2)
|
||||
stats["process_fps"] = round(total_process_fps, 2)
|
||||
stats["skipped_fps"] = round(total_skipped_fps, 2)
|
||||
@@ -389,7 +478,6 @@ def stats_snapshot(
|
||||
"version": VERSION,
|
||||
"latest_version": stats_tracking["latest_frigate_version"],
|
||||
"storage": {},
|
||||
"temperatures": get_temperatures(),
|
||||
"last_updated": int(time.time()),
|
||||
}
|
||||
|
||||
|
||||
@@ -5,7 +5,7 @@ import shutil
|
||||
import threading
|
||||
from pathlib import Path
|
||||
|
||||
from peewee import fn
|
||||
from peewee import SQL, fn
|
||||
|
||||
from frigate.config import FrigateConfig
|
||||
from frigate.const import RECORD_DIR
|
||||
@@ -44,13 +44,19 @@ class StorageMaintainer(threading.Thread):
|
||||
)
|
||||
}
|
||||
|
||||
# calculate MB/hr
|
||||
# calculate MB/hr from last 100 segments
|
||||
try:
|
||||
bandwidth = round(
|
||||
Recordings.select(fn.AVG(bandwidth_equation))
|
||||
# Subquery to get last 100 segments, then average their bandwidth
|
||||
last_100 = (
|
||||
Recordings.select(bandwidth_equation.alias("bw"))
|
||||
.where(Recordings.camera == camera, Recordings.segment_size > 0)
|
||||
.order_by(Recordings.start_time.desc())
|
||||
.limit(100)
|
||||
.scalar()
|
||||
.alias("recent")
|
||||
)
|
||||
|
||||
bandwidth = round(
|
||||
Recordings.select(fn.AVG(SQL("bw"))).from_(last_100).scalar()
|
||||
* 3600,
|
||||
2,
|
||||
)
|
||||
|
||||
@@ -3,6 +3,8 @@ import logging
|
||||
import os
|
||||
import unittest
|
||||
|
||||
from fastapi import Request
|
||||
from fastapi.testclient import TestClient
|
||||
from peewee_migrate import Router
|
||||
from playhouse.sqlite_ext import SqliteExtDatabase
|
||||
from playhouse.sqliteq import SqliteQueueDatabase
|
||||
@@ -16,6 +18,20 @@ from frigate.review.types import SeverityEnum
|
||||
from frigate.test.const import TEST_DB, TEST_DB_CLEANUPS
|
||||
|
||||
|
||||
class AuthTestClient(TestClient):
|
||||
"""TestClient that automatically adds auth headers to all requests."""
|
||||
|
||||
def request(self, *args, **kwargs):
|
||||
# Add default auth headers if not already present
|
||||
headers = kwargs.get("headers") or {}
|
||||
if "remote-user" not in headers:
|
||||
headers["remote-user"] = "admin"
|
||||
if "remote-role" not in headers:
|
||||
headers["remote-role"] = "admin"
|
||||
kwargs["headers"] = headers
|
||||
return super().request(*args, **kwargs)
|
||||
|
||||
|
||||
class BaseTestHttp(unittest.TestCase):
|
||||
def setUp(self, models):
|
||||
# setup clean database for each test run
|
||||
@@ -113,7 +129,9 @@ class BaseTestHttp(unittest.TestCase):
|
||||
pass
|
||||
|
||||
def create_app(self, stats=None, event_metadata_publisher=None):
|
||||
return create_fastapi_app(
|
||||
from frigate.api.auth import get_allowed_cameras_for_filter, get_current_user
|
||||
|
||||
app = create_fastapi_app(
|
||||
FrigateConfig(**self.minimal_config),
|
||||
self.db,
|
||||
None,
|
||||
@@ -123,8 +141,33 @@ class BaseTestHttp(unittest.TestCase):
|
||||
stats,
|
||||
event_metadata_publisher,
|
||||
None,
|
||||
enforce_default_admin=False,
|
||||
)
|
||||
|
||||
# Default test mocks for authentication
|
||||
# Tests can override these in their setUp if needed
|
||||
# This mock uses headers set by AuthTestClient
|
||||
async def mock_get_current_user(request: Request):
|
||||
username = request.headers.get("remote-user")
|
||||
role = request.headers.get("remote-role")
|
||||
if not username or not role:
|
||||
from fastapi.responses import JSONResponse
|
||||
|
||||
return JSONResponse(
|
||||
content={"message": "No authorization headers."}, status_code=401
|
||||
)
|
||||
return {"username": username, "role": role}
|
||||
|
||||
async def mock_get_allowed_cameras_for_filter(request: Request):
|
||||
return list(self.minimal_config.get("cameras", {}).keys())
|
||||
|
||||
app.dependency_overrides[get_current_user] = mock_get_current_user
|
||||
app.dependency_overrides[get_allowed_cameras_for_filter] = (
|
||||
mock_get_allowed_cameras_for_filter
|
||||
)
|
||||
|
||||
return app
|
||||
|
||||
def insert_mock_event(
|
||||
self,
|
||||
id: str,
|
||||
|
||||
@@ -1,10 +1,8 @@
|
||||
from unittest.mock import Mock
|
||||
|
||||
from fastapi.testclient import TestClient
|
||||
|
||||
from frigate.models import Event, Recordings, ReviewSegment
|
||||
from frigate.stats.emitter import StatsEmitter
|
||||
from frigate.test.http_api.base_http_test import BaseTestHttp
|
||||
from frigate.test.http_api.base_http_test import AuthTestClient, BaseTestHttp
|
||||
|
||||
|
||||
class TestHttpApp(BaseTestHttp):
|
||||
@@ -20,7 +18,7 @@ class TestHttpApp(BaseTestHttp):
|
||||
stats.get_latest_stats.return_value = self.test_stats
|
||||
app = super().create_app(stats)
|
||||
|
||||
with TestClient(app) as client:
|
||||
with AuthTestClient(app) as client:
|
||||
response = client.get("/stats")
|
||||
response_json = response.json()
|
||||
assert response_json == self.test_stats
|
||||
|
||||
@@ -1,14 +1,13 @@
|
||||
from unittest.mock import patch
|
||||
|
||||
from fastapi import HTTPException, Request
|
||||
from fastapi.testclient import TestClient
|
||||
|
||||
from frigate.api.auth import (
|
||||
get_allowed_cameras_for_filter,
|
||||
get_current_user,
|
||||
)
|
||||
from frigate.models import Event, Recordings, ReviewSegment
|
||||
from frigate.test.http_api.base_http_test import BaseTestHttp
|
||||
from frigate.test.http_api.base_http_test import AuthTestClient, BaseTestHttp
|
||||
|
||||
|
||||
class TestCameraAccessEventReview(BaseTestHttp):
|
||||
@@ -16,9 +15,17 @@ class TestCameraAccessEventReview(BaseTestHttp):
|
||||
super().setUp([Event, ReviewSegment, Recordings])
|
||||
self.app = super().create_app()
|
||||
|
||||
# Mock get_current_user to return valid user for all tests
|
||||
async def mock_get_current_user():
|
||||
return {"username": "test_user", "role": "user"}
|
||||
# Mock get_current_user for all tests
|
||||
async def mock_get_current_user(request: Request):
|
||||
username = request.headers.get("remote-user")
|
||||
role = request.headers.get("remote-role")
|
||||
if not username or not role:
|
||||
from fastapi.responses import JSONResponse
|
||||
|
||||
return JSONResponse(
|
||||
content={"message": "No authorization headers."}, status_code=401
|
||||
)
|
||||
return {"username": username, "role": role}
|
||||
|
||||
self.app.dependency_overrides[get_current_user] = mock_get_current_user
|
||||
|
||||
@@ -30,21 +37,25 @@ class TestCameraAccessEventReview(BaseTestHttp):
|
||||
super().insert_mock_event("event1", camera="front_door")
|
||||
super().insert_mock_event("event2", camera="back_door")
|
||||
|
||||
self.app.dependency_overrides[get_allowed_cameras_for_filter] = lambda: [
|
||||
"front_door"
|
||||
]
|
||||
with TestClient(self.app) as client:
|
||||
async def mock_cameras(request: Request):
|
||||
return ["front_door"]
|
||||
|
||||
self.app.dependency_overrides[get_allowed_cameras_for_filter] = mock_cameras
|
||||
with AuthTestClient(self.app) as client:
|
||||
resp = client.get("/events")
|
||||
assert resp.status_code == 200
|
||||
ids = [e["id"] for e in resp.json()]
|
||||
assert "event1" in ids
|
||||
assert "event2" not in ids
|
||||
|
||||
self.app.dependency_overrides[get_allowed_cameras_for_filter] = lambda: [
|
||||
"front_door",
|
||||
"back_door",
|
||||
]
|
||||
with TestClient(self.app) as client:
|
||||
async def mock_cameras(request: Request):
|
||||
return [
|
||||
"front_door",
|
||||
"back_door",
|
||||
]
|
||||
|
||||
self.app.dependency_overrides[get_allowed_cameras_for_filter] = mock_cameras
|
||||
with AuthTestClient(self.app) as client:
|
||||
resp = client.get("/events")
|
||||
assert resp.status_code == 200
|
||||
ids = [e["id"] for e in resp.json()]
|
||||
@@ -54,21 +65,25 @@ class TestCameraAccessEventReview(BaseTestHttp):
|
||||
super().insert_mock_review_segment("rev1", camera="front_door")
|
||||
super().insert_mock_review_segment("rev2", camera="back_door")
|
||||
|
||||
self.app.dependency_overrides[get_allowed_cameras_for_filter] = lambda: [
|
||||
"front_door"
|
||||
]
|
||||
with TestClient(self.app) as client:
|
||||
async def mock_cameras(request: Request):
|
||||
return ["front_door"]
|
||||
|
||||
self.app.dependency_overrides[get_allowed_cameras_for_filter] = mock_cameras
|
||||
with AuthTestClient(self.app) as client:
|
||||
resp = client.get("/review")
|
||||
assert resp.status_code == 200
|
||||
ids = [r["id"] for r in resp.json()]
|
||||
assert "rev1" in ids
|
||||
assert "rev2" not in ids
|
||||
|
||||
self.app.dependency_overrides[get_allowed_cameras_for_filter] = lambda: [
|
||||
"front_door",
|
||||
"back_door",
|
||||
]
|
||||
with TestClient(self.app) as client:
|
||||
async def mock_cameras(request: Request):
|
||||
return [
|
||||
"front_door",
|
||||
"back_door",
|
||||
]
|
||||
|
||||
self.app.dependency_overrides[get_allowed_cameras_for_filter] = mock_cameras
|
||||
with AuthTestClient(self.app) as client:
|
||||
resp = client.get("/review")
|
||||
assert resp.status_code == 200
|
||||
ids = [r["id"] for r in resp.json()]
|
||||
@@ -84,7 +99,7 @@ class TestCameraAccessEventReview(BaseTestHttp):
|
||||
raise HTTPException(status_code=403, detail="Access denied")
|
||||
|
||||
with patch("frigate.api.event.require_camera_access", mock_require_allowed):
|
||||
with TestClient(self.app) as client:
|
||||
with AuthTestClient(self.app) as client:
|
||||
resp = client.get("/events/event1")
|
||||
assert resp.status_code == 200
|
||||
assert resp.json()["id"] == "event1"
|
||||
@@ -94,7 +109,7 @@ class TestCameraAccessEventReview(BaseTestHttp):
|
||||
raise HTTPException(status_code=403, detail="Access denied")
|
||||
|
||||
with patch("frigate.api.event.require_camera_access", mock_require_disallowed):
|
||||
with TestClient(self.app) as client:
|
||||
with AuthTestClient(self.app) as client:
|
||||
resp = client.get("/events/event1")
|
||||
assert resp.status_code == 403
|
||||
|
||||
@@ -108,7 +123,7 @@ class TestCameraAccessEventReview(BaseTestHttp):
|
||||
raise HTTPException(status_code=403, detail="Access denied")
|
||||
|
||||
with patch("frigate.api.review.require_camera_access", mock_require_allowed):
|
||||
with TestClient(self.app) as client:
|
||||
with AuthTestClient(self.app) as client:
|
||||
resp = client.get("/review/rev1")
|
||||
assert resp.status_code == 200
|
||||
assert resp.json()["id"] == "rev1"
|
||||
@@ -118,7 +133,7 @@ class TestCameraAccessEventReview(BaseTestHttp):
|
||||
raise HTTPException(status_code=403, detail="Access denied")
|
||||
|
||||
with patch("frigate.api.review.require_camera_access", mock_require_disallowed):
|
||||
with TestClient(self.app) as client:
|
||||
with AuthTestClient(self.app) as client:
|
||||
resp = client.get("/review/rev1")
|
||||
assert resp.status_code == 403
|
||||
|
||||
@@ -126,21 +141,25 @@ class TestCameraAccessEventReview(BaseTestHttp):
|
||||
super().insert_mock_event("event1", camera="front_door")
|
||||
super().insert_mock_event("event2", camera="back_door")
|
||||
|
||||
self.app.dependency_overrides[get_allowed_cameras_for_filter] = lambda: [
|
||||
"front_door"
|
||||
]
|
||||
with TestClient(self.app) as client:
|
||||
async def mock_cameras(request: Request):
|
||||
return ["front_door"]
|
||||
|
||||
self.app.dependency_overrides[get_allowed_cameras_for_filter] = mock_cameras
|
||||
with AuthTestClient(self.app) as client:
|
||||
resp = client.get("/events", params={"cameras": "all"})
|
||||
assert resp.status_code == 200
|
||||
ids = [e["id"] for e in resp.json()]
|
||||
assert "event1" in ids
|
||||
assert "event2" not in ids
|
||||
|
||||
self.app.dependency_overrides[get_allowed_cameras_for_filter] = lambda: [
|
||||
"front_door",
|
||||
"back_door",
|
||||
]
|
||||
with TestClient(self.app) as client:
|
||||
async def mock_cameras(request: Request):
|
||||
return [
|
||||
"front_door",
|
||||
"back_door",
|
||||
]
|
||||
|
||||
self.app.dependency_overrides[get_allowed_cameras_for_filter] = mock_cameras
|
||||
with AuthTestClient(self.app) as client:
|
||||
resp = client.get("/events", params={"cameras": "all"})
|
||||
assert resp.status_code == 200
|
||||
ids = [e["id"] for e in resp.json()]
|
||||
@@ -150,20 +169,24 @@ class TestCameraAccessEventReview(BaseTestHttp):
|
||||
super().insert_mock_event("event1", camera="front_door")
|
||||
super().insert_mock_event("event2", camera="back_door")
|
||||
|
||||
self.app.dependency_overrides[get_allowed_cameras_for_filter] = lambda: [
|
||||
"front_door"
|
||||
]
|
||||
with TestClient(self.app) as client:
|
||||
async def mock_cameras(request: Request):
|
||||
return ["front_door"]
|
||||
|
||||
self.app.dependency_overrides[get_allowed_cameras_for_filter] = mock_cameras
|
||||
with AuthTestClient(self.app) as client:
|
||||
resp = client.get("/events/summary")
|
||||
assert resp.status_code == 200
|
||||
summary_list = resp.json()
|
||||
assert len(summary_list) == 1
|
||||
|
||||
self.app.dependency_overrides[get_allowed_cameras_for_filter] = lambda: [
|
||||
"front_door",
|
||||
"back_door",
|
||||
]
|
||||
with TestClient(self.app) as client:
|
||||
async def mock_cameras(request: Request):
|
||||
return [
|
||||
"front_door",
|
||||
"back_door",
|
||||
]
|
||||
|
||||
self.app.dependency_overrides[get_allowed_cameras_for_filter] = mock_cameras
|
||||
with AuthTestClient(self.app) as client:
|
||||
resp = client.get("/events/summary")
|
||||
summary_list = resp.json()
|
||||
assert len(summary_list) == 2
|
||||
|
||||
@@ -2,14 +2,13 @@ from datetime import datetime
|
||||
from typing import Any
|
||||
from unittest.mock import Mock
|
||||
|
||||
from fastapi.testclient import TestClient
|
||||
from playhouse.shortcuts import model_to_dict
|
||||
|
||||
from frigate.api.auth import get_allowed_cameras_for_filter, get_current_user
|
||||
from frigate.comms.event_metadata_updater import EventMetadataPublisher
|
||||
from frigate.models import Event, Recordings, ReviewSegment, Timeline
|
||||
from frigate.stats.emitter import StatsEmitter
|
||||
from frigate.test.http_api.base_http_test import BaseTestHttp
|
||||
from frigate.test.http_api.base_http_test import AuthTestClient, BaseTestHttp, Request
|
||||
from frigate.test.test_storage import _insert_mock_event
|
||||
|
||||
|
||||
@@ -18,14 +17,26 @@ class TestHttpApp(BaseTestHttp):
|
||||
super().setUp([Event, Recordings, ReviewSegment, Timeline])
|
||||
self.app = super().create_app()
|
||||
|
||||
# Mock auth to bypass camera access for tests
|
||||
async def mock_get_current_user(request: Any):
|
||||
return {"username": "test_user", "role": "admin"}
|
||||
# Mock get_current_user for all tests
|
||||
async def mock_get_current_user(request: Request):
|
||||
username = request.headers.get("remote-user")
|
||||
role = request.headers.get("remote-role")
|
||||
if not username or not role:
|
||||
from fastapi.responses import JSONResponse
|
||||
|
||||
return JSONResponse(
|
||||
content={"message": "No authorization headers."}, status_code=401
|
||||
)
|
||||
return {"username": username, "role": role}
|
||||
|
||||
self.app.dependency_overrides[get_current_user] = mock_get_current_user
|
||||
self.app.dependency_overrides[get_allowed_cameras_for_filter] = lambda: [
|
||||
"front_door"
|
||||
]
|
||||
|
||||
async def mock_get_allowed_cameras_for_filter(request: Request):
|
||||
return ["front_door"]
|
||||
|
||||
self.app.dependency_overrides[get_allowed_cameras_for_filter] = (
|
||||
mock_get_allowed_cameras_for_filter
|
||||
)
|
||||
|
||||
def tearDown(self):
|
||||
self.app.dependency_overrides.clear()
|
||||
@@ -35,20 +46,20 @@ class TestHttpApp(BaseTestHttp):
|
||||
################################### GET /events Endpoint #########################################################
|
||||
####################################################################################################################
|
||||
def test_get_event_list_no_events(self):
|
||||
with TestClient(self.app) as client:
|
||||
with AuthTestClient(self.app) as client:
|
||||
events = client.get("/events").json()
|
||||
assert len(events) == 0
|
||||
|
||||
def test_get_event_list_no_match_event_id(self):
|
||||
id = "123456.random"
|
||||
with TestClient(self.app) as client:
|
||||
with AuthTestClient(self.app) as client:
|
||||
super().insert_mock_event(id)
|
||||
events = client.get("/events", params={"event_id": "abc"}).json()
|
||||
assert len(events) == 0
|
||||
|
||||
def test_get_event_list_match_event_id(self):
|
||||
id = "123456.random"
|
||||
with TestClient(self.app) as client:
|
||||
with AuthTestClient(self.app) as client:
|
||||
super().insert_mock_event(id)
|
||||
events = client.get("/events", params={"event_id": id}).json()
|
||||
assert len(events) == 1
|
||||
@@ -58,7 +69,7 @@ class TestHttpApp(BaseTestHttp):
|
||||
now = int(datetime.now().timestamp())
|
||||
|
||||
id = "123456.random"
|
||||
with TestClient(self.app) as client:
|
||||
with AuthTestClient(self.app) as client:
|
||||
super().insert_mock_event(id, now, now + 1)
|
||||
events = client.get(
|
||||
"/events", params={"max_length": 1, "min_length": 1}
|
||||
@@ -69,7 +80,7 @@ class TestHttpApp(BaseTestHttp):
|
||||
def test_get_event_list_no_match_max_length(self):
|
||||
now = int(datetime.now().timestamp())
|
||||
|
||||
with TestClient(self.app) as client:
|
||||
with AuthTestClient(self.app) as client:
|
||||
id = "123456.random"
|
||||
super().insert_mock_event(id, now, now + 2)
|
||||
events = client.get("/events", params={"max_length": 1}).json()
|
||||
@@ -78,7 +89,7 @@ class TestHttpApp(BaseTestHttp):
|
||||
def test_get_event_list_no_match_min_length(self):
|
||||
now = int(datetime.now().timestamp())
|
||||
|
||||
with TestClient(self.app) as client:
|
||||
with AuthTestClient(self.app) as client:
|
||||
id = "123456.random"
|
||||
super().insert_mock_event(id, now, now + 2)
|
||||
events = client.get("/events", params={"min_length": 3}).json()
|
||||
@@ -88,7 +99,7 @@ class TestHttpApp(BaseTestHttp):
|
||||
id = "123456.random"
|
||||
id2 = "54321.random"
|
||||
|
||||
with TestClient(self.app) as client:
|
||||
with AuthTestClient(self.app) as client:
|
||||
super().insert_mock_event(id)
|
||||
events = client.get("/events").json()
|
||||
assert len(events) == 1
|
||||
@@ -108,14 +119,14 @@ class TestHttpApp(BaseTestHttp):
|
||||
def test_get_event_list_no_match_has_clip(self):
|
||||
now = int(datetime.now().timestamp())
|
||||
|
||||
with TestClient(self.app) as client:
|
||||
with AuthTestClient(self.app) as client:
|
||||
id = "123456.random"
|
||||
super().insert_mock_event(id, now, now + 2)
|
||||
events = client.get("/events", params={"has_clip": 0}).json()
|
||||
assert len(events) == 0
|
||||
|
||||
def test_get_event_list_has_clip(self):
|
||||
with TestClient(self.app) as client:
|
||||
with AuthTestClient(self.app) as client:
|
||||
id = "123456.random"
|
||||
super().insert_mock_event(id, has_clip=True)
|
||||
events = client.get("/events", params={"has_clip": 1}).json()
|
||||
@@ -123,7 +134,7 @@ class TestHttpApp(BaseTestHttp):
|
||||
assert events[0]["id"] == id
|
||||
|
||||
def test_get_event_list_sort_score(self):
|
||||
with TestClient(self.app) as client:
|
||||
with AuthTestClient(self.app) as client:
|
||||
id = "123456.random"
|
||||
id2 = "54321.random"
|
||||
super().insert_mock_event(id, top_score=37, score=37, data={"score": 50})
|
||||
@@ -141,7 +152,7 @@ class TestHttpApp(BaseTestHttp):
|
||||
def test_get_event_list_sort_start_time(self):
|
||||
now = int(datetime.now().timestamp())
|
||||
|
||||
with TestClient(self.app) as client:
|
||||
with AuthTestClient(self.app) as client:
|
||||
id = "123456.random"
|
||||
id2 = "54321.random"
|
||||
super().insert_mock_event(id, start_time=now + 3)
|
||||
@@ -159,7 +170,7 @@ class TestHttpApp(BaseTestHttp):
|
||||
def test_get_good_event(self):
|
||||
id = "123456.random"
|
||||
|
||||
with TestClient(self.app) as client:
|
||||
with AuthTestClient(self.app) as client:
|
||||
super().insert_mock_event(id)
|
||||
event = client.get(f"/events/{id}").json()
|
||||
|
||||
@@ -171,7 +182,7 @@ class TestHttpApp(BaseTestHttp):
|
||||
id = "123456.random"
|
||||
bad_id = "654321.other"
|
||||
|
||||
with TestClient(self.app) as client:
|
||||
with AuthTestClient(self.app) as client:
|
||||
super().insert_mock_event(id)
|
||||
event_response = client.get(f"/events/{bad_id}")
|
||||
assert event_response.status_code == 404
|
||||
@@ -180,7 +191,7 @@ class TestHttpApp(BaseTestHttp):
|
||||
def test_delete_event(self):
|
||||
id = "123456.random"
|
||||
|
||||
with TestClient(self.app) as client:
|
||||
with AuthTestClient(self.app) as client:
|
||||
super().insert_mock_event(id)
|
||||
event = client.get(f"/events/{id}").json()
|
||||
assert event
|
||||
@@ -193,7 +204,7 @@ class TestHttpApp(BaseTestHttp):
|
||||
def test_event_retention(self):
|
||||
id = "123456.random"
|
||||
|
||||
with TestClient(self.app) as client:
|
||||
with AuthTestClient(self.app) as client:
|
||||
super().insert_mock_event(id)
|
||||
client.post(f"/events/{id}/retain", headers={"remote-role": "admin"})
|
||||
event = client.get(f"/events/{id}").json()
|
||||
@@ -212,12 +223,11 @@ class TestHttpApp(BaseTestHttp):
|
||||
morning = 1656590400 # 06/30/2022 6 am (GMT)
|
||||
evening = 1656633600 # 06/30/2022 6 pm (GMT)
|
||||
|
||||
with TestClient(self.app) as client:
|
||||
with AuthTestClient(self.app) as client:
|
||||
super().insert_mock_event(morning_id, morning)
|
||||
super().insert_mock_event(evening_id, evening)
|
||||
# both events come back
|
||||
events = client.get("/events").json()
|
||||
print("events!!!", events)
|
||||
assert events
|
||||
assert len(events) == 2
|
||||
# morning event is excluded
|
||||
@@ -248,7 +258,7 @@ class TestHttpApp(BaseTestHttp):
|
||||
|
||||
mock_event_updater.publish.side_effect = update_event
|
||||
|
||||
with TestClient(app) as client:
|
||||
with AuthTestClient(app) as client:
|
||||
super().insert_mock_event(id)
|
||||
new_sub_label_response = client.post(
|
||||
f"/events/{id}/sub_label",
|
||||
@@ -285,7 +295,7 @@ class TestHttpApp(BaseTestHttp):
|
||||
|
||||
mock_event_updater.publish.side_effect = update_event
|
||||
|
||||
with TestClient(app) as client:
|
||||
with AuthTestClient(app) as client:
|
||||
super().insert_mock_event(id)
|
||||
client.post(
|
||||
f"/events/{id}/sub_label",
|
||||
@@ -301,7 +311,7 @@ class TestHttpApp(BaseTestHttp):
|
||||
####################################################################################################################
|
||||
def test_get_metrics(self):
|
||||
"""ensure correct prometheus metrics api response"""
|
||||
with TestClient(self.app) as client:
|
||||
with AuthTestClient(self.app) as client:
|
||||
ts_start = datetime.now().timestamp()
|
||||
ts_end = ts_start + 30
|
||||
_insert_mock_event(
|
||||
|
||||
@@ -1,14 +1,13 @@
"""Unit tests for recordings/media API endpoints."""

from datetime import datetime, timezone
from typing import Any

import pytz
from fastapi.testclient import TestClient
from fastapi import Request

from frigate.api.auth import get_allowed_cameras_for_filter, get_current_user
from frigate.models import Recordings
from frigate.test.http_api.base_http_test import BaseTestHttp
from frigate.test.http_api.base_http_test import AuthTestClient, BaseTestHttp


class TestHttpMedia(BaseTestHttp):
|
||||
@@ -19,15 +18,26 @@ class TestHttpMedia(BaseTestHttp):
|
||||
super().setUp([Recordings])
|
||||
self.app = super().create_app()
|
||||
|
||||
# Mock auth to bypass camera access for tests
|
||||
async def mock_get_current_user(request: Any):
|
||||
return {"username": "test_user", "role": "admin"}
|
||||
# Mock get_current_user for all tests
|
||||
async def mock_get_current_user(request: Request):
|
||||
username = request.headers.get("remote-user")
|
||||
role = request.headers.get("remote-role")
|
||||
if not username or not role:
|
||||
from fastapi.responses import JSONResponse
|
||||
|
||||
return JSONResponse(
|
||||
content={"message": "No authorization headers."}, status_code=401
|
||||
)
|
||||
return {"username": username, "role": role}
|
||||
|
||||
self.app.dependency_overrides[get_current_user] = mock_get_current_user
|
||||
self.app.dependency_overrides[get_allowed_cameras_for_filter] = lambda: [
|
||||
"front_door",
|
||||
"back_door",
|
||||
]
|
||||
|
||||
async def mock_get_allowed_cameras_for_filter(request: Request):
|
||||
return ["front_door"]
|
||||
|
||||
self.app.dependency_overrides[get_allowed_cameras_for_filter] = (
|
||||
mock_get_allowed_cameras_for_filter
|
||||
)
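
The replacement mock reads the remote-user and remote-role headers, so the switch from TestClient to AuthTestClient only works if AuthTestClient (added to base_http_test but not shown in this diff) injects those headers on each request. A minimal stand-in, purely illustrative and not the PR's actual helper, could look like:

class AuthTestClient(TestClient):
    """Hypothetical sketch: send admin auth headers with every request by default."""

    def request(self, method, url, **kwargs):
        # explicit per-test headers still win over the defaults
        headers = {"remote-user": "admin", "remote-role": "admin", **(kwargs.pop("headers", None) or {})}
        return super().request(method, url, headers=headers, **kwargs)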
|
||||
|
||||
def tearDown(self):
|
||||
"""Clean up after tests."""
|
||||
@@ -52,7 +62,7 @@ class TestHttpMedia(BaseTestHttp):
|
||||
# March 11, 2024 at 12:00 PM EDT (after DST)
|
||||
march_11_noon = tz.localize(datetime(2024, 3, 11, 12, 0, 0)).timestamp()
|
||||
|
||||
with TestClient(self.app) as client:
|
||||
with AuthTestClient(self.app) as client:
|
||||
# Insert recordings for each day
|
||||
Recordings.insert(
|
||||
id="recording_march_9",
|
||||
@@ -128,7 +138,7 @@ class TestHttpMedia(BaseTestHttp):
|
||||
# November 4, 2024 at 12:00 PM EST (after DST)
|
||||
nov_4_noon = tz.localize(datetime(2024, 11, 4, 12, 0, 0)).timestamp()
|
||||
|
||||
with TestClient(self.app) as client:
|
||||
with AuthTestClient(self.app) as client:
|
||||
# Insert recordings for each day
|
||||
Recordings.insert(
|
||||
id="recording_nov_2",
|
||||
@@ -195,7 +205,15 @@ class TestHttpMedia(BaseTestHttp):
|
||||
# March 10, 2024 at 3:00 PM EDT (after DST transition)
|
||||
march_10_afternoon = tz.localize(datetime(2024, 3, 10, 15, 0, 0)).timestamp()
|
||||
|
||||
with TestClient(self.app) as client:
|
||||
with AuthTestClient(self.app) as client:
|
||||
# Override allowed cameras for this test to include both
|
||||
async def mock_get_allowed_cameras_for_filter(_request: Request):
|
||||
return ["front_door", "back_door"]
|
||||
|
||||
self.app.dependency_overrides[get_allowed_cameras_for_filter] = (
|
||||
mock_get_allowed_cameras_for_filter
|
||||
)
|
||||
|
||||
# Insert recordings for front_door on March 9
|
||||
Recordings.insert(
|
||||
id="front_march_9",
|
||||
@@ -236,6 +254,14 @@ class TestHttpMedia(BaseTestHttp):
|
||||
assert summary["2024-03-09"] is True
|
||||
assert summary["2024-03-10"] is True
|
||||
|
||||
# Reset dependency override back to default single camera for other tests
|
||||
async def reset_allowed_cameras(_request: Request):
|
||||
return ["front_door"]
|
||||
|
||||
self.app.dependency_overrides[get_allowed_cameras_for_filter] = (
|
||||
reset_allowed_cameras
|
||||
)
|
||||
|
||||
def test_recordings_summary_at_dst_transition_time(self):
|
||||
"""
|
||||
Test recordings that span the exact DST transition time.
|
||||
@@ -250,7 +276,7 @@ class TestHttpMedia(BaseTestHttp):
|
||||
# This is 1.5 hours of actual time but spans the "missing" hour
|
||||
after_transition = tz.localize(datetime(2024, 3, 10, 3, 30, 0)).timestamp()
|
||||
|
||||
with TestClient(self.app) as client:
|
||||
with AuthTestClient(self.app) as client:
|
||||
Recordings.insert(
|
||||
id="recording_during_transition",
|
||||
path="/media/recordings/transition.mp4",
|
||||
@@ -283,7 +309,7 @@ class TestHttpMedia(BaseTestHttp):
|
||||
march_9_utc = datetime(2024, 3, 9, 17, 0, 0, tzinfo=timezone.utc).timestamp()
|
||||
march_10_utc = datetime(2024, 3, 10, 17, 0, 0, tzinfo=timezone.utc).timestamp()
|
||||
|
||||
with TestClient(self.app) as client:
|
||||
with AuthTestClient(self.app) as client:
|
||||
Recordings.insert(
|
||||
id="recording_march_9_utc",
|
||||
path="/media/recordings/march_9_utc.mp4",
|
||||
@@ -325,7 +351,7 @@ class TestHttpMedia(BaseTestHttp):
|
||||
"""
|
||||
Test recordings summary when no recordings exist.
|
||||
"""
|
||||
with TestClient(self.app) as client:
|
||||
with AuthTestClient(self.app) as client:
|
||||
response = client.get(
|
||||
"/recordings/summary",
|
||||
params={"timezone": "America/New_York", "cameras": "all"},
|
||||
@@ -342,7 +368,7 @@ class TestHttpMedia(BaseTestHttp):
|
||||
tz = pytz.timezone("America/New_York")
|
||||
march_10_noon = tz.localize(datetime(2024, 3, 10, 12, 0, 0)).timestamp()
|
||||
|
||||
with TestClient(self.app) as client:
|
||||
with AuthTestClient(self.app) as client:
|
||||
# Insert recordings for both cameras
|
||||
Recordings.insert(
|
||||
id="front_recording",
|
||||
|
||||
@@ -1,12 +1,12 @@
from datetime import datetime, timedelta

from fastapi.testclient import TestClient
from fastapi import Request
from peewee import DoesNotExist

from frigate.api.auth import get_allowed_cameras_for_filter, get_current_user
from frigate.models import Event, Recordings, ReviewSegment, UserReviewStatus
from frigate.review.types import SeverityEnum
from frigate.test.http_api.base_http_test import BaseTestHttp
from frigate.test.http_api.base_http_test import AuthTestClient, BaseTestHttp


class TestHttpReview(BaseTestHttp):
|
||||
@@ -16,14 +16,26 @@ class TestHttpReview(BaseTestHttp):
|
||||
self.user_id = "admin"
|
||||
|
||||
# Mock get_current_user for all tests
|
||||
async def mock_get_current_user():
|
||||
return {"username": self.user_id, "role": "admin"}
|
||||
# This mock uses headers set by AuthTestClient
|
||||
async def mock_get_current_user(request: Request):
|
||||
username = request.headers.get("remote-user")
|
||||
role = request.headers.get("remote-role")
|
||||
if not username or not role:
|
||||
from fastapi.responses import JSONResponse
|
||||
|
||||
return JSONResponse(
|
||||
content={"message": "No authorization headers."}, status_code=401
|
||||
)
|
||||
return {"username": username, "role": role}
|
||||
|
||||
self.app.dependency_overrides[get_current_user] = mock_get_current_user
|
||||
|
||||
self.app.dependency_overrides[get_allowed_cameras_for_filter] = lambda: [
|
||||
"front_door"
|
||||
]
|
||||
async def mock_get_allowed_cameras_for_filter(request: Request):
|
||||
return ["front_door"]
|
||||
|
||||
self.app.dependency_overrides[get_allowed_cameras_for_filter] = (
|
||||
mock_get_allowed_cameras_for_filter
|
||||
)
|
||||
|
||||
def tearDown(self):
|
||||
self.app.dependency_overrides.clear()
|
||||
@@ -57,7 +69,7 @@ class TestHttpReview(BaseTestHttp):
|
||||
but ends after is included in the results."""
|
||||
now = datetime.now().timestamp()
|
||||
|
||||
with TestClient(self.app) as client:
|
||||
with AuthTestClient(self.app) as client:
|
||||
super().insert_mock_review_segment("123456.random", now, now + 2)
|
||||
response = client.get("/review")
|
||||
assert response.status_code == 200
|
||||
@@ -67,7 +79,7 @@ class TestHttpReview(BaseTestHttp):
|
||||
def test_get_review_no_filters(self):
|
||||
now = datetime.now().timestamp()
|
||||
|
||||
with TestClient(self.app) as client:
|
||||
with AuthTestClient(self.app) as client:
|
||||
id = "123456.random"
|
||||
super().insert_mock_review_segment(id, now - 2, now - 1)
|
||||
response = client.get("/review")
|
||||
@@ -81,7 +93,7 @@ class TestHttpReview(BaseTestHttp):
|
||||
"""Test that review items outside the range are not returned."""
|
||||
now = datetime.now().timestamp()
|
||||
|
||||
with TestClient(self.app) as client:
|
||||
with AuthTestClient(self.app) as client:
|
||||
id = "123456.random"
|
||||
super().insert_mock_review_segment(id, now - 2, now - 1)
|
||||
super().insert_mock_review_segment(f"{id}2", now + 4, now + 5)
|
||||
@@ -97,7 +109,7 @@ class TestHttpReview(BaseTestHttp):
|
||||
def test_get_review_with_time_filter(self):
|
||||
now = datetime.now().timestamp()
|
||||
|
||||
with TestClient(self.app) as client:
|
||||
with AuthTestClient(self.app) as client:
|
||||
id = "123456.random"
|
||||
super().insert_mock_review_segment(id, now, now + 2)
|
||||
params = {
|
||||
@@ -113,7 +125,7 @@ class TestHttpReview(BaseTestHttp):
|
||||
def test_get_review_with_limit_filter(self):
|
||||
now = datetime.now().timestamp()
|
||||
|
||||
with TestClient(self.app) as client:
|
||||
with AuthTestClient(self.app) as client:
|
||||
id = "123456.random"
|
||||
id2 = "654321.random"
|
||||
super().insert_mock_review_segment(id, now, now + 2)
|
||||
@@ -132,7 +144,7 @@ class TestHttpReview(BaseTestHttp):
|
||||
def test_get_review_with_severity_filters_no_matches(self):
|
||||
now = datetime.now().timestamp()
|
||||
|
||||
with TestClient(self.app) as client:
|
||||
with AuthTestClient(self.app) as client:
|
||||
id = "123456.random"
|
||||
super().insert_mock_review_segment(id, now, now + 2, SeverityEnum.detection)
|
||||
params = {
|
||||
@@ -149,7 +161,7 @@ class TestHttpReview(BaseTestHttp):
|
||||
def test_get_review_with_severity_filters(self):
|
||||
now = datetime.now().timestamp()
|
||||
|
||||
with TestClient(self.app) as client:
|
||||
with AuthTestClient(self.app) as client:
|
||||
id = "123456.random"
|
||||
super().insert_mock_review_segment(id, now, now + 2, SeverityEnum.detection)
|
||||
params = {
|
||||
@@ -165,7 +177,7 @@ class TestHttpReview(BaseTestHttp):
|
||||
def test_get_review_with_all_filters(self):
|
||||
now = datetime.now().timestamp()
|
||||
|
||||
with TestClient(self.app) as client:
|
||||
with AuthTestClient(self.app) as client:
|
||||
id = "123456.random"
|
||||
super().insert_mock_review_segment(id, now, now + 2)
|
||||
params = {
|
||||
@@ -188,7 +200,7 @@ class TestHttpReview(BaseTestHttp):
|
||||
################################### GET /review/summary Endpoint #################################################
|
||||
####################################################################################################################
|
||||
def test_get_review_summary_all_filters(self):
|
||||
with TestClient(self.app) as client:
|
||||
with AuthTestClient(self.app) as client:
|
||||
super().insert_mock_review_segment("123456.random")
|
||||
params = {
|
||||
"cameras": "front_door",
|
||||
@@ -219,7 +231,7 @@ class TestHttpReview(BaseTestHttp):
|
||||
self.assertEqual(response_json, expected_response)
|
||||
|
||||
def test_get_review_summary_no_filters(self):
|
||||
with TestClient(self.app) as client:
|
||||
with AuthTestClient(self.app) as client:
|
||||
super().insert_mock_review_segment("123456.random")
|
||||
response = client.get("/review/summary")
|
||||
assert response.status_code == 200
|
||||
@@ -247,7 +259,7 @@ class TestHttpReview(BaseTestHttp):
|
||||
now = datetime.now()
|
||||
five_days_ago = datetime.today() - timedelta(days=5)
|
||||
|
||||
with TestClient(self.app) as client:
|
||||
with AuthTestClient(self.app) as client:
|
||||
super().insert_mock_review_segment(
|
||||
"123456.random", now.timestamp() - 2, now.timestamp() - 1
|
||||
)
|
||||
@@ -291,7 +303,7 @@ class TestHttpReview(BaseTestHttp):
|
||||
now = datetime.now()
|
||||
five_days_ago = datetime.today() - timedelta(days=5)
|
||||
|
||||
with TestClient(self.app) as client:
|
||||
with AuthTestClient(self.app) as client:
|
||||
super().insert_mock_review_segment("123456.random", now.timestamp())
|
||||
five_days_ago_ts = five_days_ago.timestamp()
|
||||
for i in range(20):
|
||||
@@ -342,7 +354,7 @@ class TestHttpReview(BaseTestHttp):
|
||||
def test_get_review_summary_multiple_in_same_day_with_reviewed(self):
|
||||
five_days_ago = datetime.today() - timedelta(days=5)
|
||||
|
||||
with TestClient(self.app) as client:
|
||||
with AuthTestClient(self.app) as client:
|
||||
five_days_ago_ts = five_days_ago.timestamp()
|
||||
for i in range(10):
|
||||
id = f"123456_{i}.random_alert_not_reviewed"
|
||||
@@ -393,14 +405,14 @@ class TestHttpReview(BaseTestHttp):
|
||||
####################################################################################################################
|
||||
|
||||
def test_post_reviews_viewed_no_body(self):
|
||||
with TestClient(self.app) as client:
|
||||
with AuthTestClient(self.app) as client:
|
||||
super().insert_mock_review_segment("123456.random")
|
||||
response = client.post("/reviews/viewed")
|
||||
# Missing ids
|
||||
assert response.status_code == 422
|
||||
|
||||
def test_post_reviews_viewed_no_body_ids(self):
|
||||
with TestClient(self.app) as client:
|
||||
with AuthTestClient(self.app) as client:
|
||||
super().insert_mock_review_segment("123456.random")
|
||||
body = {"ids": [""]}
|
||||
response = client.post("/reviews/viewed", json=body)
|
||||
@@ -408,7 +420,7 @@ class TestHttpReview(BaseTestHttp):
|
||||
assert response.status_code == 422
|
||||
|
||||
def test_post_reviews_viewed_non_existent_id(self):
|
||||
with TestClient(self.app) as client:
|
||||
with AuthTestClient(self.app) as client:
|
||||
id = "123456.random"
|
||||
super().insert_mock_review_segment(id)
|
||||
body = {"ids": ["1"]}
|
||||
@@ -425,7 +437,7 @@ class TestHttpReview(BaseTestHttp):
|
||||
)
|
||||
|
||||
def test_post_reviews_viewed(self):
|
||||
with TestClient(self.app) as client:
|
||||
with AuthTestClient(self.app) as client:
|
||||
id = "123456.random"
|
||||
super().insert_mock_review_segment(id)
|
||||
body = {"ids": [id]}
|
||||
@@ -445,14 +457,14 @@ class TestHttpReview(BaseTestHttp):
|
||||
################################### POST reviews/delete Endpoint ################################################
|
||||
####################################################################################################################
|
||||
def test_post_reviews_delete_no_body(self):
|
||||
with TestClient(self.app) as client:
|
||||
with AuthTestClient(self.app) as client:
|
||||
super().insert_mock_review_segment("123456.random")
|
||||
response = client.post("/reviews/delete", headers={"remote-role": "admin"})
|
||||
# Missing ids
|
||||
assert response.status_code == 422
|
||||
|
||||
def test_post_reviews_delete_no_body_ids(self):
|
||||
with TestClient(self.app) as client:
|
||||
with AuthTestClient(self.app) as client:
|
||||
super().insert_mock_review_segment("123456.random")
|
||||
body = {"ids": [""]}
|
||||
response = client.post(
|
||||
@@ -462,7 +474,7 @@ class TestHttpReview(BaseTestHttp):
|
||||
assert response.status_code == 422
|
||||
|
||||
def test_post_reviews_delete_non_existent_id(self):
|
||||
with TestClient(self.app) as client:
|
||||
with AuthTestClient(self.app) as client:
|
||||
id = "123456.random"
|
||||
super().insert_mock_review_segment(id)
|
||||
body = {"ids": ["1"]}
|
||||
@@ -479,7 +491,7 @@ class TestHttpReview(BaseTestHttp):
|
||||
assert review_ids_in_db_after[0].id == id
|
||||
|
||||
def test_post_reviews_delete(self):
|
||||
with TestClient(self.app) as client:
|
||||
with AuthTestClient(self.app) as client:
|
||||
id = "123456.random"
|
||||
super().insert_mock_review_segment(id)
|
||||
body = {"ids": [id]}
|
||||
@@ -495,7 +507,7 @@ class TestHttpReview(BaseTestHttp):
|
||||
assert len(review_ids_in_db_after) == 0
|
||||
|
||||
def test_post_reviews_delete_many(self):
|
||||
with TestClient(self.app) as client:
|
||||
with AuthTestClient(self.app) as client:
|
||||
ids = ["123456.random", "654321.random"]
|
||||
for id in ids:
|
||||
super().insert_mock_review_segment(id)
|
||||
@@ -527,7 +539,7 @@ class TestHttpReview(BaseTestHttp):
|
||||
def test_review_activity_motion_no_data_for_time_range(self):
|
||||
now = datetime.now().timestamp()
|
||||
|
||||
with TestClient(self.app) as client:
|
||||
with AuthTestClient(self.app) as client:
|
||||
params = {
|
||||
"after": now,
|
||||
"before": now + 3,
|
||||
@@ -540,7 +552,7 @@ class TestHttpReview(BaseTestHttp):
|
||||
def test_review_activity_motion(self):
|
||||
now = int(datetime.now().timestamp())
|
||||
|
||||
with TestClient(self.app) as client:
|
||||
with AuthTestClient(self.app) as client:
|
||||
one_m = int((datetime.now() + timedelta(minutes=1)).timestamp())
|
||||
id = "123456.random"
|
||||
id2 = "123451.random"
|
||||
@@ -573,7 +585,7 @@ class TestHttpReview(BaseTestHttp):
|
||||
################################### GET /review/event/{event_id} Endpoint #######################################
|
||||
####################################################################################################################
|
||||
def test_review_event_not_found(self):
|
||||
with TestClient(self.app) as client:
|
||||
with AuthTestClient(self.app) as client:
|
||||
response = client.get("/review/event/123456.random")
|
||||
assert response.status_code == 404
|
||||
response_json = response.json()
|
||||
@@ -585,7 +597,7 @@ class TestHttpReview(BaseTestHttp):
|
||||
def test_review_event_not_found_in_data(self):
|
||||
now = datetime.now().timestamp()
|
||||
|
||||
with TestClient(self.app) as client:
|
||||
with AuthTestClient(self.app) as client:
|
||||
id = "123456.random"
|
||||
super().insert_mock_review_segment(id, now + 1, now + 2)
|
||||
response = client.get(f"/review/event/{id}")
|
||||
@@ -599,7 +611,7 @@ class TestHttpReview(BaseTestHttp):
|
||||
def test_review_get_specific_event(self):
|
||||
now = datetime.now().timestamp()
|
||||
|
||||
with TestClient(self.app) as client:
|
||||
with AuthTestClient(self.app) as client:
|
||||
event_id = "123456.event.random"
|
||||
super().insert_mock_event(event_id)
|
||||
review_id = "123456.review.random"
|
||||
@@ -626,7 +638,7 @@ class TestHttpReview(BaseTestHttp):
|
||||
################################### GET /review/{review_id} Endpoint #######################################
|
||||
####################################################################################################################
|
||||
def test_review_not_found(self):
|
||||
with TestClient(self.app) as client:
|
||||
with AuthTestClient(self.app) as client:
|
||||
response = client.get("/review/123456.random")
|
||||
assert response.status_code == 404
|
||||
response_json = response.json()
|
||||
@@ -638,7 +650,7 @@ class TestHttpReview(BaseTestHttp):
|
||||
def test_get_review(self):
|
||||
now = datetime.now().timestamp()
|
||||
|
||||
with TestClient(self.app) as client:
|
||||
with AuthTestClient(self.app) as client:
|
||||
review_id = "123456.review.random"
|
||||
super().insert_mock_review_segment(review_id, now + 1, now + 2)
|
||||
response = client.get(f"/review/{review_id}")
|
||||
@@ -662,7 +674,7 @@ class TestHttpReview(BaseTestHttp):
|
||||
####################################################################################################################
|
||||
|
||||
def test_delete_review_viewed_review_not_found(self):
|
||||
with TestClient(self.app) as client:
|
||||
with AuthTestClient(self.app) as client:
|
||||
review_id = "123456.random"
|
||||
response = client.delete(f"/review/{review_id}/viewed")
|
||||
assert response.status_code == 404
|
||||
@@ -675,7 +687,7 @@ class TestHttpReview(BaseTestHttp):
|
||||
def test_delete_review_viewed(self):
|
||||
now = datetime.now().timestamp()
|
||||
|
||||
with TestClient(self.app) as client:
|
||||
with AuthTestClient(self.app) as client:
|
||||
review_id = "123456.review.random"
|
||||
super().insert_mock_review_segment(review_id, now + 1, now + 2)
|
||||
self._insert_user_review_status(review_id, reviewed=True)
|
||||
|
||||
@@ -78,6 +78,8 @@ class TrackedObjectProcessor(threading.Thread):
|
||||
[
|
||||
CameraConfigUpdateEnum.add,
|
||||
CameraConfigUpdateEnum.enabled,
|
||||
CameraConfigUpdateEnum.motion,
|
||||
CameraConfigUpdateEnum.objects,
|
||||
CameraConfigUpdateEnum.remove,
|
||||
CameraConfigUpdateEnum.zones,
|
||||
],
|
||||
|
||||
@@ -26,6 +26,15 @@ class ModelStatusTypesEnum(str, Enum):
|
||||
failed = "failed"
|
||||
|
||||
|
||||
class JobStatusTypesEnum(str, Enum):
|
||||
pending = "pending"
|
||||
queued = "queued"
|
||||
running = "running"
|
||||
success = "success"
|
||||
failed = "failed"
|
||||
cancelled = "cancelled"
|
||||
|
||||
|
||||
class TrackedObjectUpdateTypesEnum(str, Enum):
|
||||
description = "description"
|
||||
face = "face"
|
||||
|
||||
@@ -330,7 +330,7 @@ def collect_state_classification_examples(
|
||||
1. Queries review items from specified cameras
|
||||
2. Selects 100 balanced timestamps across the data
|
||||
3. Extracts keyframes from recordings (cropped to specified regions)
|
||||
4. Selects 20 most visually distinct images
|
||||
4. Selects 24 most visually distinct images
|
||||
5. Saves them to the dataset directory
|
||||
|
||||
Args:
|
||||
@@ -660,7 +660,6 @@ def collect_object_classification_examples(
|
||||
Args:
|
||||
model_name: Name of the classification model
|
||||
label: Object label to collect (e.g., "person", "car")
|
||||
cameras: List of camera names to collect examples from
|
||||
"""
|
||||
dataset_dir = os.path.join(CLIPS_DIR, model_name, "dataset")
|
||||
temp_dir = os.path.join(dataset_dir, "temp")
|
||||
|
||||
@@ -13,7 +13,7 @@ from frigate.util.services import get_video_properties
|
||||
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
CURRENT_CONFIG_VERSION = "0.17-0"
|
||||
CURRENT_CONFIG_VERSION = "0.18-0"
|
||||
DEFAULT_CONFIG_FILE = os.path.join(CONFIG_DIR, "config.yml")
|
||||
|
||||
|
||||
@@ -98,6 +98,13 @@ def migrate_frigate_config(config_file: str):
|
||||
yaml.dump(new_config, f)
|
||||
previous_version = "0.17-0"
|
||||
|
||||
if previous_version < "0.18-0":
|
||||
logger.info(f"Migrating frigate config from {previous_version} to 0.18-0...")
|
||||
new_config = migrate_018_0(config)
|
||||
with open(config_file, "w") as f:
|
||||
yaml.dump(new_config, f)
|
||||
previous_version = "0.18-0"
|
||||
|
||||
logger.info("Finished frigate config migration...")
|
||||
|
||||
|
||||
@@ -348,7 +355,7 @@ def migrate_016_0(config: dict[str, dict[str, Any]]) -> dict[str, dict[str, Any]
|
||||
|
||||
|
||||
def migrate_017_0(config: dict[str, dict[str, Any]]) -> dict[str, dict[str, Any]]:
|
||||
"""Handle migrating frigate config to 0.16-0"""
|
||||
"""Handle migrating frigate config to 0.17-0"""
|
||||
new_config = config.copy()
|
||||
|
||||
# migrate global to new recording configuration
|
||||
@@ -380,7 +387,7 @@ def migrate_017_0(config: dict[str, dict[str, Any]]) -> dict[str, dict[str, Any]
|
||||
|
||||
if global_genai:
|
||||
new_genai_config = {}
|
||||
new_object_config = config.get("objects", {})
|
||||
new_object_config = new_config.get("objects", {})
|
||||
new_object_config["genai"] = {}
|
||||
|
||||
for key in global_genai.keys():
|
||||
@@ -389,7 +396,8 @@ def migrate_017_0(config: dict[str, dict[str, Any]]) -> dict[str, dict[str, Any]
|
||||
else:
|
||||
new_object_config["genai"][key] = global_genai[key]
|
||||
|
||||
config["genai"] = new_genai_config
|
||||
new_config["genai"] = new_genai_config
|
||||
new_config["objects"] = new_object_config
|
||||
|
||||
for name, camera in config.get("cameras", {}).items():
|
||||
camera_config: dict[str, dict[str, Any]] = camera.copy()
|
||||
@@ -415,8 +423,9 @@ def migrate_017_0(config: dict[str, dict[str, Any]]) -> dict[str, dict[str, Any]
|
||||
camera_genai = camera_config.get("genai", {})
|
||||
|
||||
if camera_genai:
|
||||
new_object_config = config.get("objects", {})
|
||||
new_object_config["genai"] = camera_genai
|
||||
camera_object_config = camera_config.get("objects", {})
|
||||
camera_object_config["genai"] = camera_genai
|
||||
camera_config["objects"] = camera_object_config
|
||||
del camera_config["genai"]
|
||||
|
||||
new_config["cameras"][name] = camera_config
|
||||
@@ -425,6 +434,27 @@ def migrate_017_0(config: dict[str, dict[str, Any]]) -> dict[str, dict[str, Any]
|
||||
return new_config
|
||||
|
||||
|
||||
def migrate_018_0(config: dict[str, dict[str, Any]]) -> dict[str, dict[str, Any]]:
|
||||
"""Handle migrating frigate config to 0.18-0"""
|
||||
new_config = config.copy()
|
||||
|
||||
# Remove deprecated sync_recordings from global record config
|
||||
if new_config.get("record", {}).get("sync_recordings") is not None:
|
||||
del new_config["record"]["sync_recordings"]
|
||||
|
||||
# Remove deprecated sync_recordings from camera-specific record configs
|
||||
for name, camera in config.get("cameras", {}).items():
|
||||
camera_config: dict[str, dict[str, Any]] = camera.copy()
|
||||
|
||||
if camera_config.get("record", {}).get("sync_recordings") is not None:
|
||||
del camera_config["record"]["sync_recordings"]
|
||||
|
||||
new_config["cameras"][name] = camera_config
|
||||
|
||||
new_config["version"] = "0.18-0"
|
||||
return new_config
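
A quick illustration of what this migration does to a config dict (values are made up for the example; the real migration runs against the loaded YAML):

old = {
    "version": "0.17-0",
    "record": {"enabled": True, "sync_recordings": True},
    "cameras": {"front": {"record": {"sync_recordings": False, "retain": {"days": 7}}}},
}
new = migrate_018_0(old)
# new["record"] == {"enabled": True}  (deprecated key dropped globally)
# new["cameras"]["front"]["record"] == {"retain": {"days": 7}}  (and per camera)
# new["version"] == "0.18-0"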
|
||||
|
||||
|
||||
def get_relative_coordinates(
|
||||
mask: Optional[Union[str, list]], frame_shape: tuple[int, int]
|
||||
) -> Union[str, list]:
|
||||
|
||||
frigate/util/media.py (new file, 789 lines)
@@ -0,0 +1,789 @@
"""Recordings Utilities."""

import datetime
import logging
import os
from dataclasses import dataclass, field

from peewee import DatabaseError, chunked

from frigate.const import CLIPS_DIR, EXPORT_DIR, RECORD_DIR, THUMB_DIR
from frigate.models import (
    Event,
    Export,
    Previews,
    Recordings,
    RecordingsToDelete,
    ReviewSegment,
)

logger = logging.getLogger(__name__)


# Safety threshold - abort if more than 50% of files would be deleted
SAFETY_THRESHOLD = 0.5


@dataclass
|
||||
class SyncResult:
|
||||
"""Result of a sync operation."""
|
||||
|
||||
media_type: str
|
||||
files_checked: int = 0
|
||||
orphans_found: int = 0
|
||||
orphans_deleted: int = 0
|
||||
orphan_paths: list[str] = field(default_factory=list)
|
||||
aborted: bool = False
|
||||
error: str | None = None
|
||||
|
||||
def to_dict(self) -> dict:
|
||||
return {
|
||||
"media_type": self.media_type,
|
||||
"files_checked": self.files_checked,
|
||||
"orphans_found": self.orphans_found,
|
||||
"orphans_deleted": self.orphans_deleted,
|
||||
"aborted": self.aborted,
|
||||
"error": self.error,
|
||||
}
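
Every sync helper below applies the same guard: if the orphan ratio exceeds SAFETY_THRESHOLD it aborts instead of deleting, unless force=True is passed. A hedged sketch of the arithmetic, not taken from the diff:

files_checked, orphans_found = 1000, 600
if files_checked and orphans_found / files_checked > SAFETY_THRESHOLD:
    # 0.6 > 0.5, so the sync reports aborted=True and deletes nothing unless force was requested
    pass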
|
||||
|
||||
|
||||
def remove_empty_directories(directory: str) -> None:
|
||||
# list all directories recursively and sort them by path,
|
||||
# longest first
|
||||
paths = sorted(
|
||||
[x[0] for x in os.walk(directory)],
|
||||
key=lambda p: len(str(p)),
|
||||
reverse=True,
|
||||
)
|
||||
for path in paths:
|
||||
# don't delete the parent
|
||||
if path == directory:
|
||||
continue
|
||||
if len(os.listdir(path)) == 0:
|
||||
os.rmdir(path)
|
||||
|
||||
|
||||
def sync_recordings(
|
||||
limited: bool = False, dry_run: bool = False, force: bool = False
|
||||
) -> SyncResult:
|
||||
"""Sync recordings between the database and disk using the SyncResult format."""
|
||||
|
||||
result = SyncResult(media_type="recordings")
|
||||
|
||||
try:
|
||||
logger.debug("Start sync recordings.")
|
||||
|
||||
# start checking on the hour 36 hours ago
|
||||
check_point = datetime.datetime.now().replace(
|
||||
minute=0, second=0, microsecond=0
|
||||
).astimezone(datetime.timezone.utc) - datetime.timedelta(hours=36)
|
||||
|
||||
# Gather DB recordings to inspect
|
||||
if limited:
|
||||
recordings_query = Recordings.select(Recordings.id, Recordings.path).where(
|
||||
Recordings.start_time >= check_point.timestamp()
|
||||
)
|
||||
else:
|
||||
recordings_query = Recordings.select(Recordings.id, Recordings.path)
|
||||
|
||||
recordings_count = recordings_query.count()
|
||||
page_size = 1000
|
||||
num_pages = (recordings_count + page_size - 1) // page_size
|
||||
recordings_to_delete: list[dict] = []
|
||||
|
||||
for page in range(num_pages):
|
||||
for recording in recordings_query.paginate(page, page_size):
|
||||
if not os.path.exists(recording.path):
|
||||
recordings_to_delete.append(
|
||||
{"id": recording.id, "path": recording.path}
|
||||
)
|
||||
|
||||
result.files_checked += recordings_count
|
||||
result.orphans_found += len(recordings_to_delete)
|
||||
result.orphan_paths.extend(
|
||||
[
|
||||
recording["path"]
|
||||
for recording in recordings_to_delete
|
||||
if recording.get("path")
|
||||
]
|
||||
)
|
||||
|
||||
if (
|
||||
recordings_count
|
||||
and len(recordings_to_delete) / recordings_count > SAFETY_THRESHOLD
|
||||
):
|
||||
if force:
|
||||
logger.warning(
|
||||
f"Deleting {(len(recordings_to_delete) / max(1, recordings_count) * 100):.2f}% of recordings DB entries (force=True, bypassing safety threshold)"
|
||||
)
|
||||
else:
|
||||
logger.warning(
|
||||
f"Deleting {(len(recordings_to_delete) / max(1, recordings_count) * 100):.2f}% of recordings DB entries, could be due to configuration error. Aborting..."
|
||||
)
|
||||
result.aborted = True
|
||||
return result
|
||||
|
||||
if recordings_to_delete and not dry_run:
|
||||
logger.info(
|
||||
f"Deleting {len(recordings_to_delete)} recording DB entries with missing files"
|
||||
)
|
||||
|
||||
RecordingsToDelete.create_table(temporary=True)
|
||||
|
||||
max_inserts = 1000
|
||||
for batch in chunked(recordings_to_delete, max_inserts):
|
||||
RecordingsToDelete.insert_many(batch).execute()
|
||||
|
||||
try:
|
||||
deleted = (
|
||||
Recordings.delete()
|
||||
.where(
|
||||
Recordings.id.in_(
|
||||
RecordingsToDelete.select(RecordingsToDelete.id)
|
||||
)
|
||||
)
|
||||
.execute()
|
||||
)
|
||||
result.orphans_deleted += int(deleted)
|
||||
except DatabaseError as e:
|
||||
logger.error(f"Database error during recordings db cleanup: {e}")
|
||||
result.error = str(e)
|
||||
result.aborted = True
|
||||
return result
|
||||
|
||||
if result.aborted:
|
||||
logger.warning("Recording DB sync aborted; skipping file cleanup.")
|
||||
return result
|
||||
|
||||
# Only try to cleanup files if db cleanup was successful or dry_run
|
||||
if limited:
|
||||
# get recording files from last 36 hours
|
||||
hour_check = f"{RECORD_DIR}/{check_point.strftime('%Y-%m-%d/%H')}"
|
||||
files_on_disk = {
|
||||
os.path.join(root, file)
|
||||
for root, _, files in os.walk(RECORD_DIR)
|
||||
for file in files
|
||||
if root > hour_check
|
||||
}
|
||||
else:
|
||||
# get all recordings files on disk and put them in a set
|
||||
files_on_disk = {
|
||||
os.path.join(root, file)
|
||||
for root, _, files in os.walk(RECORD_DIR)
|
||||
for file in files
|
||||
}
|
||||
|
||||
result.files_checked += len(files_on_disk)
|
||||
|
||||
files_to_delete: list[str] = []
|
||||
for file in files_on_disk:
|
||||
if not Recordings.select().where(Recordings.path == file).exists():
|
||||
files_to_delete.append(file)
|
||||
|
||||
result.orphans_found += len(files_to_delete)
|
||||
result.orphan_paths.extend(files_to_delete)
|
||||
|
||||
if (
|
||||
files_on_disk
|
||||
and len(files_to_delete) / len(files_on_disk) > SAFETY_THRESHOLD
|
||||
):
|
||||
if force:
|
||||
logger.warning(
|
||||
f"Deleting {(len(files_to_delete) / max(1, len(files_on_disk)) * 100):.2f}% of recordings files (force=True, bypassing safety threshold)"
|
||||
)
|
||||
else:
|
||||
logger.warning(
|
||||
f"Deleting {(len(files_to_delete) / max(1, len(files_on_disk)) * 100):.2f}% of recordings files, could be due to configuration error. Aborting..."
|
||||
)
|
||||
result.aborted = True
|
||||
return result
|
||||
|
||||
if dry_run:
|
||||
logger.info(
|
||||
f"Recordings sync (dry run): Found {len(files_to_delete)} orphaned files"
|
||||
)
|
||||
return result
|
||||
|
||||
# Delete orphans
|
||||
logger.info(f"Deleting {len(files_to_delete)} orphaned recordings files")
|
||||
for file in files_to_delete:
|
||||
try:
|
||||
os.unlink(file)
|
||||
result.orphans_deleted += 1
|
||||
except OSError as e:
|
||||
logger.error(f"Failed to delete {file}: {e}")
|
||||
|
||||
logger.debug("End sync recordings.")
|
||||
|
||||
except Exception as e:
|
||||
logger.error(f"Error syncing recordings: {e}")
|
||||
result.error = str(e)
|
||||
|
||||
return result
|
||||
|
||||
|
||||
def sync_event_snapshots(dry_run: bool = False, force: bool = False) -> SyncResult:
|
||||
"""Sync event snapshots - delete files not referenced by any event.
|
||||
|
||||
Event snapshots are stored at: CLIPS_DIR/{camera}-{event_id}.jpg
|
||||
Also checks for clean variants: {camera}-{event_id}-clean.webp and -clean.png
|
||||
"""
|
||||
result = SyncResult(media_type="event_snapshots")
|
||||
|
||||
try:
|
||||
# Get all event IDs with snapshots from DB
|
||||
events_with_snapshots = set(
|
||||
f"{e.camera}-{e.id}"
|
||||
for e in Event.select(Event.id, Event.camera).where(
|
||||
Event.has_snapshot == True
|
||||
)
|
||||
)
|
||||
|
||||
# Find snapshot files on disk (directly in CLIPS_DIR, not subdirectories)
|
||||
snapshot_files: list[tuple[str, str]] = [] # (full_path, base_name)
|
||||
if os.path.isdir(CLIPS_DIR):
|
||||
for file in os.listdir(CLIPS_DIR):
|
||||
file_path = os.path.join(CLIPS_DIR, file)
|
||||
if os.path.isfile(file_path) and file.endswith(
|
||||
(".jpg", "-clean.webp", "-clean.png")
|
||||
):
|
||||
# Extract base name (camera-event_id) from filename
|
||||
base_name = file
|
||||
for suffix in ["-clean.webp", "-clean.png", ".jpg"]:
|
||||
if file.endswith(suffix):
|
||||
base_name = file[: -len(suffix)]
|
||||
break
|
||||
snapshot_files.append((file_path, base_name))
|
||||
|
||||
result.files_checked = len(snapshot_files)
|
||||
|
||||
# Find orphans
|
||||
orphans: list[str] = []
|
||||
for file_path, base_name in snapshot_files:
|
||||
if base_name not in events_with_snapshots:
|
||||
orphans.append(file_path)
|
||||
|
||||
result.orphans_found = len(orphans)
|
||||
result.orphan_paths = orphans
|
||||
|
||||
if len(orphans) == 0:
|
||||
return result
|
||||
|
||||
# Safety check
|
||||
if (
|
||||
result.files_checked > 0
|
||||
and len(orphans) / result.files_checked > SAFETY_THRESHOLD
|
||||
):
|
||||
if force:
|
||||
logger.warning(
|
||||
f"Event snapshots sync: Would delete {len(orphans)}/{result.files_checked} "
|
||||
f"({len(orphans) / result.files_checked * 100:.2f}%) files (force=True, bypassing safety threshold)."
|
||||
)
|
||||
else:
|
||||
logger.warning(
|
||||
f"Event snapshots sync: Would delete {len(orphans)}/{result.files_checked} "
|
||||
f"({len(orphans) / result.files_checked * 100:.2f}%) files. "
|
||||
"Aborting due to safety threshold."
|
||||
)
|
||||
result.aborted = True
|
||||
return result
|
||||
|
||||
if dry_run:
|
||||
logger.info(
|
||||
f"Event snapshots sync (dry run): Found {len(orphans)} orphaned files"
|
||||
)
|
||||
return result
|
||||
|
||||
# Delete orphans
|
||||
logger.info(f"Deleting {len(orphans)} orphaned event snapshot files")
|
||||
for file_path in orphans:
|
||||
try:
|
||||
os.unlink(file_path)
|
||||
result.orphans_deleted += 1
|
||||
except OSError as e:
|
||||
logger.error(f"Failed to delete {file_path}: {e}")
|
||||
|
||||
except Exception as e:
|
||||
logger.error(f"Error syncing event snapshots: {e}")
|
||||
result.error = str(e)
|
||||
|
||||
return result
|
||||
|
||||
|
||||
def sync_event_thumbnails(dry_run: bool = False, force: bool = False) -> SyncResult:
|
||||
"""Sync event thumbnails - delete files not referenced by any event.
|
||||
|
||||
Event thumbnails are stored at: THUMB_DIR/{camera}/{event_id}.webp
|
||||
Only events without inline thumbnail (thumbnail field is None/empty) use files.
|
||||
"""
|
||||
result = SyncResult(media_type="event_thumbnails")
|
||||
|
||||
try:
|
||||
# Get all events that use file-based thumbnails
|
||||
# Events with thumbnail field populated don't need files
|
||||
events_with_file_thumbs = set(
|
||||
(e.camera, e.id)
|
||||
for e in Event.select(Event.id, Event.camera, Event.thumbnail).where(
|
||||
(Event.thumbnail.is_null(True)) | (Event.thumbnail == "")
|
||||
)
|
||||
)
|
||||
|
||||
# Find thumbnail files on disk
|
||||
thumbnail_files: list[
|
||||
tuple[str, str, str]
|
||||
] = [] # (full_path, camera, event_id)
|
||||
if os.path.isdir(THUMB_DIR):
|
||||
for camera_dir in os.listdir(THUMB_DIR):
|
||||
camera_path = os.path.join(THUMB_DIR, camera_dir)
|
||||
if not os.path.isdir(camera_path):
|
||||
continue
|
||||
for file in os.listdir(camera_path):
|
||||
if file.endswith(".webp"):
|
||||
event_id = file[:-5] # Remove .webp
|
||||
file_path = os.path.join(camera_path, file)
|
||||
thumbnail_files.append((file_path, camera_dir, event_id))
|
||||
|
||||
result.files_checked = len(thumbnail_files)
|
||||
|
||||
# Find orphans - files where event doesn't exist or event has inline thumbnail
|
||||
orphans: list[str] = []
|
||||
for file_path, camera, event_id in thumbnail_files:
|
||||
if (camera, event_id) not in events_with_file_thumbs:
|
||||
# Check if event exists with inline thumbnail
|
||||
event_exists = Event.select().where(Event.id == event_id).exists()
|
||||
if not event_exists:
|
||||
orphans.append(file_path)
|
||||
# If event exists with inline thumbnail, the file is also orphaned
|
||||
elif event_exists:
|
||||
event = Event.get_or_none(Event.id == event_id)
|
||||
if event and event.thumbnail:
|
||||
orphans.append(file_path)
|
||||
|
||||
result.orphans_found = len(orphans)
|
||||
result.orphan_paths = orphans
|
||||
|
||||
if len(orphans) == 0:
|
||||
return result
|
||||
|
||||
# Safety check
|
||||
if (
|
||||
result.files_checked > 0
|
||||
and len(orphans) / result.files_checked > SAFETY_THRESHOLD
|
||||
):
|
||||
if force:
|
||||
logger.warning(
|
||||
f"Event thumbnails sync: Would delete {len(orphans)}/{result.files_checked} "
|
||||
f"({len(orphans) / result.files_checked * 100:.2f}%) files (force=True, bypassing safety threshold)."
|
||||
)
|
||||
else:
|
||||
logger.warning(
|
||||
f"Event thumbnails sync: Would delete {len(orphans)}/{result.files_checked} "
|
||||
f"({len(orphans) / result.files_checked * 100:.2f}%) files. "
|
||||
"Aborting due to safety threshold."
|
||||
)
|
||||
result.aborted = True
|
||||
return result
|
||||
|
||||
if dry_run:
|
||||
logger.info(
|
||||
f"Event thumbnails sync (dry run): Found {len(orphans)} orphaned files"
|
||||
)
|
||||
return result
|
||||
|
||||
# Delete orphans
|
||||
logger.info(f"Deleting {len(orphans)} orphaned event thumbnail files")
|
||||
for file_path in orphans:
|
||||
try:
|
||||
os.unlink(file_path)
|
||||
result.orphans_deleted += 1
|
||||
except OSError as e:
|
||||
logger.error(f"Failed to delete {file_path}: {e}")
|
||||
|
||||
except Exception as e:
|
||||
logger.error(f"Error syncing event thumbnails: {e}")
|
||||
result.error = str(e)
|
||||
|
||||
return result
|
||||
|
||||
|
||||
def sync_review_thumbnails(dry_run: bool = False, force: bool = False) -> SyncResult:
|
||||
"""Sync review segment thumbnails - delete files not referenced by any review segment.
|
||||
|
||||
Review thumbnails are stored at: CLIPS_DIR/review/thumb-{camera}-{review_id}.webp
|
||||
The full path is stored in ReviewSegment.thumb_path
|
||||
"""
|
||||
result = SyncResult(media_type="review_thumbnails")
|
||||
|
||||
try:
|
||||
# Get all thumb paths from DB
|
||||
review_thumb_paths = set(
|
||||
r.thumb_path
|
||||
for r in ReviewSegment.select(ReviewSegment.thumb_path)
|
||||
if r.thumb_path
|
||||
)
|
||||
|
||||
# Find review thumbnail files on disk
|
||||
review_dir = os.path.join(CLIPS_DIR, "review")
|
||||
thumbnail_files: list[str] = []
|
||||
if os.path.isdir(review_dir):
|
||||
for file in os.listdir(review_dir):
|
||||
if file.startswith("thumb-") and file.endswith(".webp"):
|
||||
file_path = os.path.join(review_dir, file)
|
||||
thumbnail_files.append(file_path)
|
||||
|
||||
result.files_checked = len(thumbnail_files)
|
||||
|
||||
# Find orphans
|
||||
orphans: list[str] = []
|
||||
for file_path in thumbnail_files:
|
||||
if file_path not in review_thumb_paths:
|
||||
orphans.append(file_path)
|
||||
|
||||
result.orphans_found = len(orphans)
|
||||
result.orphan_paths = orphans
|
||||
|
||||
if len(orphans) == 0:
|
||||
return result
|
||||
|
||||
# Safety check
|
||||
if (
|
||||
result.files_checked > 0
|
||||
and len(orphans) / result.files_checked > SAFETY_THRESHOLD
|
||||
):
|
||||
if force:
|
||||
logger.warning(
|
||||
f"Review thumbnails sync: Would delete {len(orphans)}/{result.files_checked} "
|
||||
f"({len(orphans) / result.files_checked * 100:.2f}%) files (force=True, bypassing safety threshold)."
|
||||
)
|
||||
else:
|
||||
logger.warning(
|
||||
f"Review thumbnails sync: Would delete {len(orphans)}/{result.files_checked} "
|
||||
f"({len(orphans) / result.files_checked * 100:.2f}%) files. "
|
||||
"Aborting due to safety threshold."
|
||||
)
|
||||
result.aborted = True
|
||||
return result
|
||||
|
||||
if dry_run:
|
||||
logger.info(
|
||||
f"Review thumbnails sync (dry run): Found {len(orphans)} orphaned files"
|
||||
)
|
||||
return result
|
||||
|
||||
# Delete orphans
|
||||
logger.info(f"Deleting {len(orphans)} orphaned review thumbnail files")
|
||||
for file_path in orphans:
|
||||
try:
|
||||
os.unlink(file_path)
|
||||
result.orphans_deleted += 1
|
||||
except OSError as e:
|
||||
logger.error(f"Failed to delete {file_path}: {e}")
|
||||
|
||||
except Exception as e:
|
||||
logger.error(f"Error syncing review thumbnails: {e}")
|
||||
result.error = str(e)
|
||||
|
||||
return result
|
||||
|
||||
|
||||
def sync_previews(dry_run: bool = False, force: bool = False) -> SyncResult:
|
||||
"""Sync preview files - delete files not referenced by any preview record.
|
||||
|
||||
Previews are stored at: CLIPS_DIR/previews/{camera}/*.mp4
|
||||
The full path is stored in Previews.path
|
||||
"""
|
||||
result = SyncResult(media_type="previews")
|
||||
|
||||
try:
|
||||
# Get all preview paths from DB
|
||||
preview_paths = set(p.path for p in Previews.select(Previews.path) if p.path)
|
||||
|
||||
# Find preview files on disk
|
||||
previews_dir = os.path.join(CLIPS_DIR, "previews")
|
||||
preview_files: list[str] = []
|
||||
if os.path.isdir(previews_dir):
|
||||
for camera_dir in os.listdir(previews_dir):
|
||||
camera_path = os.path.join(previews_dir, camera_dir)
|
||||
if not os.path.isdir(camera_path):
|
||||
continue
|
||||
for file in os.listdir(camera_path):
|
||||
if file.endswith(".mp4"):
|
||||
file_path = os.path.join(camera_path, file)
|
||||
preview_files.append(file_path)
|
||||
|
||||
result.files_checked = len(preview_files)
|
||||
|
||||
# Find orphans
|
||||
orphans: list[str] = []
|
||||
for file_path in preview_files:
|
||||
if file_path not in preview_paths:
|
||||
orphans.append(file_path)
|
||||
|
||||
result.orphans_found = len(orphans)
|
||||
result.orphan_paths = orphans
|
||||
|
||||
if len(orphans) == 0:
|
||||
return result
|
||||
|
||||
# Safety check
|
||||
if (
|
||||
result.files_checked > 0
|
||||
and len(orphans) / result.files_checked > SAFETY_THRESHOLD
|
||||
):
|
||||
if force:
|
||||
logger.warning(
|
||||
f"Previews sync: Would delete {len(orphans)}/{result.files_checked} "
|
||||
f"({len(orphans) / result.files_checked * 100:.2f}%) files (force=True, bypassing safety threshold)."
|
||||
)
|
||||
else:
|
||||
logger.warning(
|
||||
f"Previews sync: Would delete {len(orphans)}/{result.files_checked} "
|
||||
f"({len(orphans) / result.files_checked * 100:.2f}%) files. "
|
||||
"Aborting due to safety threshold."
|
||||
)
|
||||
result.aborted = True
|
||||
return result
|
||||
|
||||
if dry_run:
|
||||
logger.info(f"Previews sync (dry run): Found {len(orphans)} orphaned files")
|
||||
return result
|
||||
|
||||
# Delete orphans
|
||||
logger.info(f"Deleting {len(orphans)} orphaned preview files")
|
||||
for file_path in orphans:
|
||||
try:
|
||||
os.unlink(file_path)
|
||||
result.orphans_deleted += 1
|
||||
except OSError as e:
|
||||
logger.error(f"Failed to delete {file_path}: {e}")
|
||||
|
||||
except Exception as e:
|
||||
logger.error(f"Error syncing previews: {e}")
|
||||
result.error = str(e)
|
||||
|
||||
return result
|
||||
|
||||
|
||||
def sync_exports(dry_run: bool = False, force: bool = False) -> SyncResult:
|
||||
"""Sync export files - delete files not referenced by any export record.
|
||||
|
||||
Export videos are stored at: EXPORT_DIR/*.mp4
|
||||
Export thumbnails are stored at: CLIPS_DIR/export/*.jpg
|
||||
The paths are stored in Export.video_path and Export.thumb_path
|
||||
"""
|
||||
result = SyncResult(media_type="exports")
|
||||
|
||||
try:
|
||||
# Get all export paths from DB
|
||||
export_video_paths = set()
|
||||
export_thumb_paths = set()
|
||||
for e in Export.select(Export.video_path, Export.thumb_path):
|
||||
if e.video_path:
|
||||
export_video_paths.add(e.video_path)
|
||||
if e.thumb_path:
|
||||
export_thumb_paths.add(e.thumb_path)
|
||||
|
||||
# Find export video files on disk
|
||||
export_files: list[str] = []
|
||||
if os.path.isdir(EXPORT_DIR):
|
||||
for file in os.listdir(EXPORT_DIR):
|
||||
if file.endswith(".mp4"):
|
||||
file_path = os.path.join(EXPORT_DIR, file)
|
||||
export_files.append(file_path)
|
||||
|
||||
# Find export thumbnail files on disk
|
||||
export_thumb_dir = os.path.join(CLIPS_DIR, "export")
|
||||
thumb_files: list[str] = []
|
||||
if os.path.isdir(export_thumb_dir):
|
||||
for file in os.listdir(export_thumb_dir):
|
||||
if file.endswith(".jpg"):
|
||||
file_path = os.path.join(export_thumb_dir, file)
|
||||
thumb_files.append(file_path)
|
||||
|
||||
result.files_checked = len(export_files) + len(thumb_files)
|
||||
|
||||
# Find orphans
|
||||
orphans: list[str] = []
|
||||
for file_path in export_files:
|
||||
if file_path not in export_video_paths:
|
||||
orphans.append(file_path)
|
||||
for file_path in thumb_files:
|
||||
if file_path not in export_thumb_paths:
|
||||
orphans.append(file_path)
|
||||
|
||||
result.orphans_found = len(orphans)
|
||||
result.orphan_paths = orphans
|
||||
|
||||
if len(orphans) == 0:
|
||||
return result
|
||||
|
||||
# Safety check
|
||||
if (
|
||||
result.files_checked > 0
|
||||
and len(orphans) / result.files_checked > SAFETY_THRESHOLD
|
||||
):
|
||||
if force:
|
||||
logger.warning(
|
||||
f"Exports sync: Would delete {len(orphans)}/{result.files_checked} "
|
||||
f"({len(orphans) / result.files_checked * 100:.2f}%) files (force=True, bypassing safety threshold)."
|
||||
)
|
||||
else:
|
||||
logger.warning(
|
||||
f"Exports sync: Would delete {len(orphans)}/{result.files_checked} "
|
||||
f"({len(orphans) / result.files_checked * 100:.2f}%) files. "
|
||||
"Aborting due to safety threshold."
|
||||
)
|
||||
result.aborted = True
|
||||
return result
|
||||
|
||||
if dry_run:
|
||||
logger.info(f"Exports sync (dry run): Found {len(orphans)} orphaned files")
|
||||
return result
|
||||
|
||||
# Delete orphans
|
||||
logger.info(f"Deleting {len(orphans)} orphaned export files")
|
||||
for file_path in orphans:
|
||||
try:
|
||||
os.unlink(file_path)
|
||||
result.orphans_deleted += 1
|
||||
except OSError as e:
|
||||
logger.error(f"Failed to delete {file_path}: {e}")
|
||||
|
||||
except Exception as e:
|
||||
logger.error(f"Error syncing exports: {e}")
|
||||
result.error = str(e)
|
||||
|
||||
return result
|
||||
|
||||
|
||||
@dataclass
|
||||
class MediaSyncResults:
|
||||
"""Combined results from all media sync operations."""
|
||||
|
||||
event_snapshots: SyncResult | None = None
|
||||
event_thumbnails: SyncResult | None = None
|
||||
review_thumbnails: SyncResult | None = None
|
||||
previews: SyncResult | None = None
|
||||
exports: SyncResult | None = None
|
||||
recordings: SyncResult | None = None
|
||||
|
||||
@property
|
||||
def total_files_checked(self) -> int:
|
||||
total = 0
|
||||
for result in [
|
||||
self.event_snapshots,
|
||||
self.event_thumbnails,
|
||||
self.review_thumbnails,
|
||||
self.previews,
|
||||
self.exports,
|
||||
self.recordings,
|
||||
]:
|
||||
if result:
|
||||
total += result.files_checked
|
||||
return total
|
||||
|
||||
@property
|
||||
def total_orphans_found(self) -> int:
|
||||
total = 0
|
||||
for result in [
|
||||
self.event_snapshots,
|
||||
self.event_thumbnails,
|
||||
self.review_thumbnails,
|
||||
self.previews,
|
||||
self.exports,
|
||||
self.recordings,
|
||||
]:
|
||||
if result:
|
||||
total += result.orphans_found
|
||||
return total
|
||||
|
||||
@property
|
||||
def total_orphans_deleted(self) -> int:
|
||||
total = 0
|
||||
for result in [
|
||||
self.event_snapshots,
|
||||
self.event_thumbnails,
|
||||
self.review_thumbnails,
|
||||
self.previews,
|
||||
self.exports,
|
||||
self.recordings,
|
||||
]:
|
||||
if result:
|
||||
total += result.orphans_deleted
|
||||
return total
|
||||
|
||||
def to_dict(self) -> dict:
|
||||
"""Convert results to dictionary for API response."""
|
||||
results = {}
|
||||
for name, result in [
|
||||
("event_snapshots", self.event_snapshots),
|
||||
("event_thumbnails", self.event_thumbnails),
|
||||
("review_thumbnails", self.review_thumbnails),
|
||||
("previews", self.previews),
|
||||
("exports", self.exports),
|
||||
("recordings", self.recordings),
|
||||
]:
|
||||
if result:
|
||||
results[name] = {
|
||||
"files_checked": result.files_checked,
|
||||
"orphans_found": result.orphans_found,
|
||||
"orphans_deleted": result.orphans_deleted,
|
||||
"aborted": result.aborted,
|
||||
"error": result.error,
|
||||
}
|
||||
results["totals"] = {
|
||||
"files_checked": self.total_files_checked,
|
||||
"orphans_found": self.total_orphans_found,
|
||||
"orphans_deleted": self.total_orphans_deleted,
|
||||
}
|
||||
return results
|
||||
|
||||
|
||||
def sync_all_media(
|
||||
dry_run: bool = False, media_types: list[str] = ["all"], force: bool = False
|
||||
) -> MediaSyncResults:
|
||||
"""Sync specified media types with the database.
|
||||
|
||||
Args:
|
||||
dry_run: If True, only report orphans without deleting them.
|
||||
media_types: List of media types to sync. Can include: 'all', 'event_snapshots',
|
||||
'event_thumbnails', 'review_thumbnails', 'previews', 'exports', 'recordings'
|
||||
force: If True, bypass safety threshold checks.
|
||||
|
||||
Returns:
|
||||
MediaSyncResults with details of each sync operation.
|
||||
"""
|
||||
logger.debug(
|
||||
f"Starting media sync (dry_run={dry_run}, media_types={media_types}, force={force})"
|
||||
)
|
||||
|
||||
results = MediaSyncResults()
|
||||
|
||||
# Determine which media types to sync
|
||||
sync_all = "all" in media_types
|
||||
|
||||
if sync_all or "event_snapshots" in media_types:
|
||||
results.event_snapshots = sync_event_snapshots(dry_run=dry_run, force=force)
|
||||
|
||||
if sync_all or "event_thumbnails" in media_types:
|
||||
results.event_thumbnails = sync_event_thumbnails(dry_run=dry_run, force=force)
|
||||
|
||||
if sync_all or "review_thumbnails" in media_types:
|
||||
results.review_thumbnails = sync_review_thumbnails(dry_run=dry_run, force=force)
|
||||
|
||||
if sync_all or "previews" in media_types:
|
||||
results.previews = sync_previews(dry_run=dry_run, force=force)
|
||||
|
||||
if sync_all or "exports" in media_types:
|
||||
results.exports = sync_exports(dry_run=dry_run, force=force)
|
||||
|
||||
if sync_all or "recordings" in media_types:
|
||||
results.recordings = sync_recordings(dry_run=dry_run, force=force)
|
||||
|
||||
logger.info(
|
||||
f"Media sync complete: checked {results.total_files_checked} files, "
|
||||
f"found {results.total_orphans_found} orphans, "
|
||||
f"deleted {results.total_orphans_deleted}"
|
||||
)
|
||||
|
||||
return results
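
A minimal usage sketch for the new helper (illustrative only, not part of the diff shown here):

results = sync_all_media(dry_run=True, media_types=["recordings", "exports"])
print(results.to_dict()["totals"])  # files checked, orphans found, orphans deleted
if results.recordings and results.recordings.aborted:
    # more than SAFETY_THRESHOLD of the files looked orphaned; rerun with force=True only if that is expected
    pass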
|
||||
@@ -1,7 +1,10 @@
|
||||
import atexit
|
||||
import faulthandler
|
||||
import logging
|
||||
import multiprocessing as mp
|
||||
import os
|
||||
import pathlib
|
||||
import subprocess
|
||||
import threading
|
||||
from logging.handlers import QueueHandler
|
||||
from multiprocessing.synchronize import Event as MpEvent
|
||||
@@ -11,6 +14,7 @@ from setproctitle import setproctitle
|
||||
|
||||
import frigate.log
|
||||
from frigate.config.logger import LoggerConfig
|
||||
from frigate.const import CONFIG_DIR
|
||||
|
||||
|
||||
class BaseProcess(mp.Process):
|
||||
@@ -48,6 +52,7 @@ class FrigateProcess(BaseProcess):
|
||||
|
||||
def before_start(self) -> None:
|
||||
self.__log_queue = frigate.log.log_listener.queue
|
||||
self.__memray_tracker = None
|
||||
|
||||
def pre_run_setup(self, logConfig: LoggerConfig | None = None) -> None:
|
||||
os.nice(self.priority)
|
||||
@@ -64,3 +69,86 @@ class FrigateProcess(BaseProcess):
|
||||
frigate.log.apply_log_levels(
|
||||
logConfig.default.value.upper(), logConfig.logs
|
||||
)
|
||||
|
||||
self._setup_memray()
|
||||
|
||||
def _setup_memray(self) -> None:
|
||||
"""Setup memray profiling if enabled via environment variable."""
|
||||
memray_modules = os.environ.get("FRIGATE_MEMRAY_MODULES", "")
|
||||
|
||||
if not memray_modules:
|
||||
return
|
||||
|
||||
# Extract module name from process name (e.g., "frigate.capture:camera" -> "frigate.capture")
|
||||
process_name = self.name
|
||||
module_name = (
|
||||
process_name.split(":")[0] if ":" in process_name else process_name
|
||||
)
|
||||
|
||||
enabled_modules = [m.strip() for m in memray_modules.split(",")]
|
||||
|
||||
if module_name not in enabled_modules and process_name not in enabled_modules:
|
||||
return
|
||||
|
||||
try:
|
||||
import memray
|
||||
|
||||
reports_dir = pathlib.Path(CONFIG_DIR) / "memray_reports"
|
||||
reports_dir.mkdir(parents=True, exist_ok=True)
|
||||
safe_name = (
|
||||
process_name.replace(":", "_").replace("/", "_").replace("\\", "_")
|
||||
)
|
||||
|
||||
binary_file = reports_dir / f"{safe_name}.bin"
|
||||
|
||||
self.__memray_tracker = memray.Tracker(str(binary_file))
|
||||
self.__memray_tracker.__enter__()
|
||||
|
||||
# Register cleanup handler to stop tracking and generate HTML report
|
||||
# atexit runs on normal exits and most signal-based terminations (SIGTERM, SIGINT)
|
||||
# For hard kills (SIGKILL) or segfaults, the binary file is preserved for manual generation
|
||||
atexit.register(self._cleanup_memray, safe_name, binary_file)
|
||||
|
||||
self.logger.info(
|
||||
f"Memray profiling enabled for module {module_name} (process: {self.name}). "
|
||||
f"Binary file (updated continuously): {binary_file}. "
|
||||
f"HTML report will be generated on exit: {reports_dir}/{safe_name}.html. "
|
||||
f"If process crashes, manually generate with: memray flamegraph {binary_file}"
|
||||
)
|
||||
except Exception as e:
|
||||
self.logger.error(f"Failed to setup memray profiling: {e}", exc_info=True)
|
||||
|
||||
def _cleanup_memray(self, safe_name: str, binary_file: pathlib.Path) -> None:
|
||||
"""Stop memray tracking and generate HTML report."""
|
||||
if self.__memray_tracker is None:
|
||||
return
|
||||
|
||||
try:
|
||||
self.__memray_tracker.__exit__(None, None, None)
|
||||
self.__memray_tracker = None
|
||||
|
||||
reports_dir = pathlib.Path(CONFIG_DIR) / "memray_reports"
|
||||
html_file = reports_dir / f"{safe_name}.html"
|
||||
|
||||
result = subprocess.run(
|
||||
["memray", "flamegraph", "--output", str(html_file), str(binary_file)],
|
||||
capture_output=True,
|
||||
text=True,
|
||||
timeout=10,
|
||||
)
|
||||
|
||||
if result.returncode == 0:
|
||||
self.logger.info(f"Memray report generated: {html_file}")
|
||||
else:
|
||||
self.logger.error(
|
||||
f"Failed to generate memray report: {result.stderr}. "
|
||||
f"Binary file preserved at {binary_file} for manual generation."
|
||||
)
|
||||
|
||||
# Keep the binary file for manual report generation if needed
|
||||
# Users can run: memray flamegraph {binary_file}
|
||||
|
||||
except subprocess.TimeoutExpired:
|
||||
self.logger.error("Memray report generation timed out")
|
||||
except Exception as e:
|
||||
self.logger.error(f"Failed to cleanup memray profiling: {e}", exc_info=True)
|
||||
|
||||
@@ -417,12 +417,12 @@ def get_openvino_npu_stats() -> Optional[dict[str, str]]:
        else:
            usage = 0.0

        return {"npu": f"{round(usage, 2)}", "mem": "-"}
        return {"npu": f"{round(usage, 2)}", "mem": "-%"}
    except (FileNotFoundError, PermissionError, ValueError):
        return None


def get_rockchip_gpu_stats() -> Optional[dict[str, str]]:
def get_rockchip_gpu_stats() -> Optional[dict[str, str | float]]:
    """Get GPU stats using rk."""
    try:
        with open("/sys/kernel/debug/rkrga/load", "r") as f:
@@ -440,7 +440,16 @@ def get_rockchip_gpu_stats() -> Optional[dict[str, str]]:
            return None

        average_load = f"{round(sum(load_values) / len(load_values), 2)}%"
        return {"gpu": average_load, "mem": "-"}
        stats: dict[str, str | float] = {"gpu": average_load, "mem": "-%"}

        try:
            with open("/sys/class/thermal/thermal_zone5/temp", "r") as f:
                line = f.readline().strip()
                stats["temp"] = round(int(line) / 1000, 1)
        except (FileNotFoundError, OSError, ValueError):
            pass

        return stats


def get_rockchip_npu_stats() -> Optional[dict[str, float | str]]:
@@ -463,13 +472,25 @@ def get_rockchip_npu_stats() -> Optional[dict[str, float | str]]:

    percentages = [int(load) for load in core_loads]
    mean = round(sum(percentages) / len(percentages), 2)
    return {"npu": mean, "mem": "-"}
    stats: dict[str, float | str] = {"npu": mean, "mem": "-%"}

    try:
        with open("/sys/class/thermal/thermal_zone6/temp", "r") as f:
            line = f.readline().strip()
            stats["temp"] = round(int(line) / 1000, 1)
    except (FileNotFoundError, OSError, ValueError):
        pass

    return stats
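Both Rockchip helpers now read a sysfs thermal zone and convert millidegrees Celsius to degrees. A small standalone sketch of that conversion; the zone index varies by board, so `thermal_zone0` here is only an example (the diff above uses zone5 for the GPU and zone6 for the NPU):

```python
# Sketch: read a sysfs thermal zone and convert millidegrees C to degrees C.
from pathlib import Path
from typing import Optional


def read_thermal_zone(zone: int = 0) -> Optional[float]:
    path = Path(f"/sys/class/thermal/thermal_zone{zone}/temp")
    try:
        millidegrees = int(path.read_text().strip())
    except (FileNotFoundError, OSError, ValueError):
        return None
    return round(millidegrees / 1000, 1)


if __name__ == "__main__":
    print(read_thermal_zone(0))  # e.g. 42.5, or None if the zone is missing
```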
def try_get_info(f, h, default="N/A"):
def try_get_info(f, h, default="N/A", sensor=None):
    try:
        if h:
            v = f(h)
            if sensor is not None:
                v = f(h, sensor)
            else:
                v = f(h)
        else:
            v = f()
    except nvml.NVMLError_NotSupported:
@@ -498,6 +519,9 @@ def get_nvidia_gpu_stats() -> dict[int, dict]:
            util = try_get_info(nvml.nvmlDeviceGetUtilizationRates, handle)
            enc = try_get_info(nvml.nvmlDeviceGetEncoderUtilization, handle)
            dec = try_get_info(nvml.nvmlDeviceGetDecoderUtilization, handle)
            temp = try_get_info(
                nvml.nvmlDeviceGetTemperature, handle, default=None, sensor=0
            )
            pstate = try_get_info(nvml.nvmlDeviceGetPowerState, handle, default=None)

            if util != "N/A":
@@ -510,6 +534,11 @@ def get_nvidia_gpu_stats() -> dict[int, dict]:
            else:
                gpu_mem_util = -1

            if temp != "N/A" and temp is not None:
                temp = float(temp)
            else:
                temp = None

            if enc != "N/A":
                enc_util = enc[0]
            else:
@@ -527,6 +556,7 @@ def get_nvidia_gpu_stats() -> dict[int, dict]:
                "enc": enc_util,
                "dec": dec_util,
                "pstate": pstate or "unknown",
                "temp": temp,
            }
        except Exception:
            pass
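The new `sensor` parameter lets `try_get_info` forward a sensor index to NVML calls that require one, such as `nvmlDeviceGetTemperature`, where sensor 0 is the GPU die temperature (`NVML_TEMPERATURE_GPU`). A hedged standalone sketch of the underlying call using pynvml directly, assuming pynvml is installed and a GPU is present:

```python
# Sketch of the NVML temperature query; error handling reduced to essentials.
import pynvml


def gpu_temperature(index: int = 0) -> float | None:
    try:
        pynvml.nvmlInit()
        handle = pynvml.nvmlDeviceGetHandleByIndex(index)
        # Sensor 0 == NVML_TEMPERATURE_GPU (the die temperature)
        return float(
            pynvml.nvmlDeviceGetTemperature(handle, pynvml.NVML_TEMPERATURE_GPU)
        )
    except pynvml.NVMLError:
        return None
    finally:
        try:
            pynvml.nvmlShutdown()
        except pynvml.NVMLError:
            pass


if __name__ == "__main__":
    print(gpu_temperature())
```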
@@ -549,6 +579,53 @@ def get_jetson_stats() -> Optional[dict[int, dict]]:
    return results


def get_hailo_temps() -> dict[str, float]:
    """Get temperatures for Hailo devices."""
    try:
        from hailo_platform import Device
    except ModuleNotFoundError:
        return {}

    temps = {}

    try:
        device_ids = Device.scan()
        for i, device_id in enumerate(device_ids):
            try:
                with Device(device_id) as device:
                    temp_info = device.control.get_chip_temperature()

                    # Get board name and normalise it
                    identity = device.control.identify()
                    board_name = None
                    for line in str(identity).split("\n"):
                        if line.startswith("Board Name:"):
                            board_name = (
                                line.split(":", 1)[1].strip().lower().replace("-", "")
                            )
                            break

                    if not board_name:
                        board_name = f"hailo{i}"

                    # Use indexed name if multiple devices, otherwise just the board name
                    device_name = (
                        f"{board_name}-{i}" if len(device_ids) > 1 else board_name
                    )

                    # ts1_temperature is also available, but appeared to be the same as ts0 in testing.
                    temps[device_name] = round(temp_info.ts0_temperature, 1)
            except Exception as e:
                logger.debug(
                    f"Failed to get temperature for Hailo device {device_id}: {e}"
                )
                continue
    except Exception as e:
        logger.debug(f"Failed to scan for Hailo devices: {e}")

    return temps
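The board-name handling above parses the device identity text and normalises something like `Board Name: Hailo-8` to `hailo8`, falling back to an indexed `hailoN` name. A small sketch of just that parsing step; the identity text below is a made-up example of what a device might report:

```python
# Sketch of the board-name normalisation used for the temperature keys.
def normalise_board_name(identity_text: str, index: int = 0) -> str:
    board_name = None
    for line in identity_text.split("\n"):
        if line.startswith("Board Name:"):
            board_name = line.split(":", 1)[1].strip().lower().replace("-", "")
            break
    return board_name or f"hailo{index}"


if __name__ == "__main__":
    sample = "Protocol Version: 2\nBoard Name: Hailo-8\nSerial Number: 000"  # assumed text
    print(normalise_board_name(sample))                     # hailo8
    print(normalise_board_name("no board line", index=1))   # hailo1
```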
def ffprobe_stream(ffmpeg, path: str, detailed: bool = False) -> sp.CompletedProcess:
    """Run ffprobe on stream."""
    clean_path = escape_special_characters(path)
@@ -584,12 +661,17 @@ def ffprobe_stream(ffmpeg, path: str, detailed: bool = False) -> sp.CompletedPro

def vainfo_hwaccel(device_name: Optional[str] = None) -> sp.CompletedProcess:
    """Run vainfo."""
    ffprobe_cmd = (
        ["vainfo"]
        if not device_name
        else ["vainfo", "--display", "drm", "--device", f"/dev/dri/{device_name}"]
    )
    return sp.run(ffprobe_cmd, capture_output=True)
    if not device_name:
        cmd = ["vainfo"]
    else:
        if os.path.isabs(device_name) and device_name.startswith("/dev/dri/"):
            device_path = device_name
        else:
            device_path = f"/dev/dri/{device_name}"

        cmd = ["vainfo", "--display", "drm", "--device", device_path]

    return sp.run(cmd, capture_output=True)


def get_nvidia_driver_info() -> dict[str, Any]:
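The refactored `vainfo_hwaccel` accepts either a bare render node name or a full `/dev/dri/...` path. A sketch of that path-resolution rule in isolation, without actually invoking vainfo:

```python
# Sketch of the device-path resolution only; vainfo is not executed here.
import os


def resolve_vainfo_cmd(device_name: str | None = None) -> list[str]:
    if not device_name:
        return ["vainfo"]
    if os.path.isabs(device_name) and device_name.startswith("/dev/dri/"):
        device_path = device_name
    else:
        device_path = f"/dev/dri/{device_name}"
    return ["vainfo", "--display", "drm", "--device", device_path]


if __name__ == "__main__":
    print(resolve_vainfo_cmd())                       # ['vainfo']
    print(resolve_vainfo_cmd("renderD128"))           # resolved to /dev/dri/renderD128
    print(resolve_vainfo_cmd("/dev/dri/renderD129"))  # absolute path kept as-is
```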
frigate/video.py (197 changed lines)
@@ -3,6 +3,7 @@ import queue
import subprocess as sp
import threading
import time
from collections import deque
from datetime import datetime, timedelta, timezone
from multiprocessing import Queue, Value
from multiprocessing.synchronize import Event as MpEvent
@@ -115,6 +116,7 @@ def capture_frames(
frame_rate.start()
skipped_eps = EventsPerSecond()
skipped_eps.start()

config_subscriber = CameraConfigUpdateSubscriber(
None, {config.name: config}, [CameraConfigUpdateEnum.enabled]
)
@@ -124,45 +126,50 @@ def capture_frames(
config_subscriber.check_for_updates()
return config.enabled

while not stop_event.is_set():
if not get_enabled_state():
logger.debug(f"Stopping capture thread for disabled {config.name}")
break

fps.value = frame_rate.eps()
skipped_fps.value = skipped_eps.eps()
current_frame.value = datetime.now().timestamp()
frame_name = f"{config.name}_frame{frame_index}"
frame_buffer = frame_manager.write(frame_name)
try:
frame_buffer[:] = ffmpeg_process.stdout.read(frame_size)
except Exception:
# shutdown has been initiated
if stop_event.is_set():
try:
while not stop_event.is_set():
if not get_enabled_state():
logger.debug(f"Stopping capture thread for disabled {config.name}")
break

logger.error(f"{config.name}: Unable to read frames from ffmpeg process.")
fps.value = frame_rate.eps()
skipped_fps.value = skipped_eps.eps()
current_frame.value = datetime.now().timestamp()
frame_name = f"{config.name}_frame{frame_index}"
frame_buffer = frame_manager.write(frame_name)
try:
frame_buffer[:] = ffmpeg_process.stdout.read(frame_size)
except Exception:
# shutdown has been initiated
if stop_event.is_set():
break

if ffmpeg_process.poll() is not None:
logger.error(
f"{config.name}: ffmpeg process is not running. exiting capture thread..."
f"{config.name}: Unable to read frames from ffmpeg process."
)
break

continue
if ffmpeg_process.poll() is not None:
logger.error(
f"{config.name}: ffmpeg process is not running. exiting capture thread..."
)
break

frame_rate.update()
continue

# don't lock the queue to check, just try since it should rarely be full
try:
# add to the queue
frame_queue.put((frame_name, current_frame.value), False)
frame_manager.close(frame_name)
except queue.Full:
# if the queue is full, skip this frame
skipped_eps.update()
frame_rate.update()

frame_index = 0 if frame_index == shm_frame_count - 1 else frame_index + 1
# don't lock the queue to check, just try since it should rarely be full
try:
# add to the queue
frame_queue.put((frame_name, current_frame.value), False)
frame_manager.close(frame_name)
except queue.Full:
# if the queue is full, skip this frame
skipped_eps.update()

frame_index = 0 if frame_index == shm_frame_count - 1 else frame_index + 1
finally:
config_subscriber.stop()

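Reviewer note: the capture loop is now wrapped in `try`/`finally` so the config subscriber is always stopped, whatever causes the loop to exit. A generic sketch of that pattern; the subscriber class here is a stand-in for the real `CameraConfigUpdateSubscriber`, not Frigate's type:

```python
# Generic sketch of the cleanup pattern with a dummy subscriber.
import threading


class DummySubscriber:
    def check_for_updates(self) -> None:
        pass

    def stop(self) -> None:
        print("subscriber stopped")


def capture_loop(stop_event: threading.Event) -> None:
    subscriber = DummySubscriber()
    try:
        while not stop_event.is_set():
            subscriber.check_for_updates()
            # ... read a frame, push it to the queue ...
            break  # keep the sketch finite
    finally:
        # runs on normal exit, break, or an unexpected exception
        subscriber.stop()


if __name__ == "__main__":
    capture_loop(threading.Event())
```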
class CameraWatchdog(threading.Thread):
@@ -174,6 +181,9 @@ class CameraWatchdog(threading.Thread):
camera_fps,
skipped_fps,
ffmpeg_pid,
stalls,
reconnects,
detection_frame,
stop_event,
):
threading.Thread.__init__(self)
@@ -194,6 +204,10 @@ class CameraWatchdog(threading.Thread):
self.frame_index = 0
self.stop_event = stop_event
self.sleeptime = self.config.ffmpeg.retry_interval
self.reconnect_timestamps = deque()
self.stalls = stalls
self.reconnects = reconnects
self.detection_frame = detection_frame

self.config_subscriber = CameraConfigUpdateSubscriber(
None,
@@ -208,6 +222,35 @@ class CameraWatchdog(threading.Thread):
self.latest_invalid_segment_time: float = 0
self.latest_cache_segment_time: float = 0

# Stall tracking (based on last processed frame)
self._stall_timestamps: deque[float] = deque()
self._stall_active: bool = False

# Status caching to reduce message volume
self._last_detect_status: str | None = None
self._last_record_status: str | None = None
self._last_status_update_time: float = 0.0

def _send_detect_status(self, status: str, now: float) -> None:
"""Send detect status only if changed or retry_interval has elapsed."""
if (
status != self._last_detect_status
or (now - self._last_status_update_time) >= self.sleeptime
):
self.requestor.send_data(f"{self.config.name}/status/detect", status)
self._last_detect_status = status
self._last_status_update_time = now

def _send_record_status(self, status: str, now: float) -> None:
"""Send record status only if changed or retry_interval has elapsed."""
if (
status != self._last_record_status
or (now - self._last_status_update_time) >= self.sleeptime
):
self.requestor.send_data(f"{self.config.name}/status/record", status)
self._last_record_status = status
self._last_status_update_time = now

def _update_enabled_state(self) -> bool:
"""Fetch the latest config and update enabled state."""
self.config_subscriber.check_for_updates()
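Both helpers implement the same rule: publish on change, or at most once per retry interval otherwise. A condensed sketch of that rule, with a plain callable standing in for `requestor.send_data` and the interval standing in for `config.ffmpeg.retry_interval`:

```python
# Sketch of the change-or-interval publishing rule with stand-in names.
import time
from typing import Callable


class StatusPublisher:
    def __init__(self, send: Callable[[str], None], interval: float) -> None:
        self.send = send
        self.interval = interval
        self.last_status: str | None = None
        self.last_sent: float = 0.0

    def update(self, status: str, now: float | None = None) -> None:
        now = time.monotonic() if now is None else now
        if status != self.last_status or (now - self.last_sent) >= self.interval:
            self.send(status)
            self.last_status = status
            self.last_sent = now


if __name__ == "__main__":
    pub = StatusPublisher(print, interval=10)
    pub.update("online", now=0)    # sent (first value)
    pub.update("online", now=5)    # suppressed (unchanged, interval not elapsed)
    pub.update("online", now=12)   # sent (interval elapsed)
    pub.update("offline", now=13)  # sent (changed)
```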
@@ -234,6 +277,24 @@ class CameraWatchdog(threading.Thread):
else:
self.ffmpeg_detect_process.wait()

# Update reconnects
now = datetime.now().timestamp()
self.reconnect_timestamps.append(now)
while self.reconnect_timestamps and self.reconnect_timestamps[0] < now - 3600:
self.reconnect_timestamps.popleft()
if self.reconnects:
self.reconnects.value = len(self.reconnect_timestamps)

# Wait for old capture thread to fully exit before starting a new one
if self.capture_thread is not None and self.capture_thread.is_alive():
self.logger.info("Waiting for capture thread to exit...")
self.capture_thread.join(timeout=5)

if self.capture_thread.is_alive():
self.logger.warning(
f"Capture thread for {self.config.name} did not exit in time"
)

self.logger.error(
"The following ffmpeg logs include the last 100 lines prior to exit."
)
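Reconnects are counted over a rolling one-hour window by appending timestamps to a deque and dropping entries older than 3600 seconds; the same pattern is reused for stalls further down. A minimal sketch of that counter:

```python
# Sketch of the rolling one-hour counter used for reconnects and stalls.
import time
from collections import deque


class RollingCounter:
    def __init__(self, window_seconds: float = 3600.0) -> None:
        self.window = window_seconds
        self.timestamps: deque[float] = deque()

    def record(self, now: float | None = None) -> int:
        now = time.time() if now is None else now
        self.timestamps.append(now)
        # drop anything older than the window
        while self.timestamps and self.timestamps[0] < now - self.window:
            self.timestamps.popleft()
        return len(self.timestamps)


if __name__ == "__main__":
    counter = RollingCounter()
    print(counter.record(now=0))     # 1
    print(counter.record(now=1800))  # 2
    print(counter.record(now=4000))  # 2 (the event at t=0 has aged out)
```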
@@ -246,7 +307,10 @@ class CameraWatchdog(threading.Thread):
self.start_all_ffmpeg()

time.sleep(self.sleeptime)
while not self.stop_event.wait(self.sleeptime):
last_restart_time = datetime.now().timestamp()

# 1 second watchdog loop
while not self.stop_event.wait(1):
enabled = self._update_enabled_state()
if enabled != self.was_enabled:
if enabled:
@@ -262,12 +326,9 @@ class CameraWatchdog(threading.Thread):
self.stop_all_ffmpeg()

# update camera status
self.requestor.send_data(
f"{self.config.name}/status/detect", "disabled"
)
self.requestor.send_data(
f"{self.config.name}/status/record", "disabled"
)
now = datetime.now().timestamp()
self._send_detect_status("disabled", now)
self._send_record_status("disabled", now)
self.was_enabled = enabled
continue
@@ -306,36 +367,44 @@ class CameraWatchdog(threading.Thread):

now = datetime.now().timestamp()

# Check if enough time has passed to allow ffmpeg restart (backoff pacing)
time_since_last_restart = now - last_restart_time
can_restart = time_since_last_restart >= self.sleeptime

if not self.capture_thread.is_alive():
self.requestor.send_data(f"{self.config.name}/status/detect", "offline")
self._send_detect_status("offline", now)
self.camera_fps.value = 0
self.logger.error(
f"Ffmpeg process crashed unexpectedly for {self.config.name}."
)
self.reset_capture_thread(terminate=False)
if can_restart:
self.reset_capture_thread(terminate=False)
last_restart_time = now
elif self.camera_fps.value >= (self.config.detect.fps + 10):
self.fps_overflow_count += 1

if self.fps_overflow_count == 3:
self.requestor.send_data(
f"{self.config.name}/status/detect", "offline"
)
self._send_detect_status("offline", now)
self.fps_overflow_count = 0
self.camera_fps.value = 0
self.logger.info(
f"{self.config.name} exceeded fps limit. Exiting ffmpeg..."
)
self.reset_capture_thread(drain_output=False)
if can_restart:
self.reset_capture_thread(drain_output=False)
last_restart_time = now
elif now - self.capture_thread.current_frame.value > 20:
self.requestor.send_data(f"{self.config.name}/status/detect", "offline")
self._send_detect_status("offline", now)
self.camera_fps.value = 0
self.logger.info(
f"No frames received from {self.config.name} in 20 seconds. Exiting ffmpeg..."
)
self.reset_capture_thread()
if can_restart:
self.reset_capture_thread()
last_restart_time = now
else:
# process is running normally
self.requestor.send_data(f"{self.config.name}/status/detect", "online")
self._send_detect_status("online", now)
self.fps_overflow_count = 0

for p in self.ffmpeg_other_processes:
@@ -406,9 +475,7 @@ class CameraWatchdog(threading.Thread):

continue
else:
self.requestor.send_data(
f"{self.config.name}/status/record", "online"
)
self._send_record_status("online", now)
p["latest_segment_time"] = self.latest_cache_segment_time

if poll is None:
@@ -424,6 +491,34 @@ class CameraWatchdog(threading.Thread):
p["cmd"], self.logger, p["logpipe"], ffmpeg_process=p["process"]
)

# Update stall metrics based on last processed frame timestamp
now = datetime.now().timestamp()
processed_ts = (
float(self.detection_frame.value) if self.detection_frame else 0.0
)
if processed_ts > 0:
delta = now - processed_ts
observed_fps = (
self.camera_fps.value
if self.camera_fps.value > 0
else self.config.detect.fps
)
interval = 1.0 / max(observed_fps, 0.1)
stall_threshold = max(2.0 * interval, 2.0)

if delta > stall_threshold:
if not self._stall_active:
self._stall_timestamps.append(now)
self._stall_active = True
else:
self._stall_active = False

while self._stall_timestamps and self._stall_timestamps[0] < now - 3600:
self._stall_timestamps.popleft()

if self.stalls:
self.stalls.value = len(self._stall_timestamps)

self.stop_all_ffmpeg()
self.logpipe.close()
self.config_subscriber.stop()
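The stall check above compares the age of the last processed frame against two frame intervals, floored at 2 seconds. A short worked sketch of that threshold for a few frame rates:

```python
# Sketch of the stall threshold: two frame intervals, never less than 2 seconds.
def stall_threshold(observed_fps: float) -> float:
    interval = 1.0 / max(observed_fps, 0.1)
    return max(2.0 * interval, 2.0)


if __name__ == "__main__":
    print(stall_threshold(5.0))  # 2.0  (2 * 0.2s is below the 2s floor)
    print(stall_threshold(0.5))  # 4.0  (2 * 2.0s)
    print(stall_threshold(0.0))  # 20.0 (fps clamped to 0.1)
```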
@@ -561,6 +656,9 @@ class CameraCapture(FrigateProcess):
self.camera_metrics.camera_fps,
self.camera_metrics.skipped_fps,
self.camera_metrics.ffmpeg_pid,
self.camera_metrics.stalls_last_hour,
self.camera_metrics.reconnects_last_hour,
self.camera_metrics.detection_frame,
self.stop_event,
)
camera_watchdog.start()
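The watchdog now receives `stalls_last_hour` and `reconnects_last_hour` from the camera metrics, shared values it updates as it observes restarts and stalls. A simplified sketch of sharing an integer counter that way; the metric name is taken from the diff, but the setup here is a stand-in, not Frigate's metrics wiring:

```python
# Simplified sketch: share an integer metric with a watchdog via multiprocessing.Value.
import threading
from multiprocessing import Value


def watchdog(reconnects_last_hour) -> None:
    # in the real code this is driven by ffmpeg restarts within a 1-hour window
    reconnects_last_hour.value = 3


if __name__ == "__main__":
    reconnects_last_hour = Value("i", 0)
    t = threading.Thread(target=watchdog, args=(reconnects_last_hour,))
    t.start()
    t.join()
    print(reconnects_last_hour.value)  # 3
```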
@@ -756,6 +854,7 @@ def process_frames(
camera_enabled = camera_config.enabled

if "motion" in updated_configs:
motion_detector.config = camera_config.motion
motion_detector.update_mask()

if (
@@ -5,7 +5,7 @@ Some examples (model - class or model name)::
> Model = migrator.orm['model_name'] # Return model in current state by name

> migrator.sql(sql) # Run custom SQL
> migrator.python(func, *args, **kwargs) # Run python code
> migrator.run(func, *args, **kwargs) # Run python code
> migrator.create_model(Model) # Create a model (could be used as decorator)
> migrator.remove_model(model, cascade=True) # Remove a model
> migrator.add_fields(model, **fields) # Add fields to a model

@@ -5,7 +5,7 @@ Some examples (model - class or model name)::
> Model = migrator.orm['model_name'] # Return model in current state by name

> migrator.sql(sql) # Run custom SQL
> migrator.python(func, *args, **kwargs) # Run python code
> migrator.run(func, *args, **kwargs) # Run python code
> migrator.create_model(Model) # Create a model (could be used as decorator)
> migrator.remove_model(model, cascade=True) # Remove a model
> migrator.add_fields(model, **fields) # Add fields to a model

@@ -5,7 +5,7 @@ Some examples (model - class or model name)::
> Model = migrator.orm['model_name'] # Return model in current state by name

> migrator.sql(sql) # Run custom SQL
> migrator.python(func, *args, **kwargs) # Run python code
> migrator.run(func, *args, **kwargs) # Run python code
> migrator.create_model(Model) # Create a model (could be used as decorator)
> migrator.remove_model(model, cascade=True) # Remove a model
> migrator.add_fields(model, **fields) # Add fields to a model
Some files were not shown because too many files have changed in this diff.