Compare commits

...

112 Commits

Author SHA1 Message Date
Josh Hawkins
6dfc9cbf0f fix ExportRecordingsBody to allow optional name field
fixes https://github.com/blakeblackshear/frigate/discussions/21413 because of https://github.com/blakeblackshear/frigate-hass-integration/pull/1021
2025-12-23 22:09:03 -06:00
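For context, a minimal sketch of what this fix likely amounts to, assuming Frigate's API uses Pydantic request models (the `playback` field and its default here are illustrative assumptions, not taken from the source):

```python
# Hypothetical sketch: `name` becomes optional on the export request body
# so the Home Assistant integration can omit it.
from typing import Optional

from pydantic import BaseModel


class ExportRecordingsBody(BaseModel):
    playback: str = "realtime"  # illustrative field, assumed to exist
    name: Optional[str] = None  # previously required; now optional
```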
Josh Hawkins
8c1071355c version bump on updating page 2025-12-23 12:06:43 -06:00
Josh Hawkins
46033cce8f clarify docs for none class 2025-12-23 11:43:36 -06:00
Nicolas Mowen
aaec0f6e92 Correctly catch JSONDecodeError 2025-12-23 06:57:12 -07:00
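The pattern behind this one-liner, as a hedged sketch (the function name is hypothetical): `json.JSONDecodeError` is a subclass of `ValueError`, so catching a narrower or unrelated exception type silently misses malformed payloads.

```python
import json


def parse_response(payload: str) -> dict:
    """Return the decoded payload, or an empty dict on malformed JSON."""
    try:
        return json.loads(payload)
    except json.JSONDecodeError:
        # JSONDecodeError subclasses ValueError; catching it explicitly
        # documents intent and avoids masking unrelated ValueErrors.
        return {}
```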
Nicolas Mowen
ec6fd2289c Ensure genai client exists 2025-12-23 06:10:36 -07:00
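A minimal guard-clause sketch of the idea, with hypothetical attribute and method names (the source does not show the actual call sites):

```python
class EventDescriber:
    def __init__(self, genai_client=None):
        # genai_client may be None when generative AI is disabled in config
        self.genai_client = genai_client

    def describe(self, event) -> str | None:
        if self.genai_client is None:
            return None  # bail early instead of raising AttributeError
        return self.genai_client.generate_description(event)
```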
Josh Hawkins
92c6138feb add slovak 2025-12-22 22:07:52 -06:00
Josh Hawkins
d84aab09b1 add triggers to note 2025-12-22 18:33:17 -06:00
Josh Hawkins
2bf7ecb0b6 0.17 2025-12-22 17:56:00 -06:00
Josh Hawkins
bcc2d37a4c remove footnote about 0.17 2025-12-22 17:53:11 -06:00
Nicolas Mowen
bf4007d66a Reset the wizard state after closing with model 2025-12-22 16:01:20 -07:00
Josh Hawkins
6e1b2447ac only show allowed cameras and groups in camera filter button 2025-12-22 15:39:55 -06:00
Josh Hawkins
bf74e74696 fix weekday starting point on explore when set to monday in UI settings 2025-12-22 13:41:28 -06:00
Nicolas Mowen
3fb9abc97d Add review thumbnail URL to integration docs 2025-12-22 10:53:25 -07:00
Josh Hawkins
fb0838558f use fallback timeout for opening media source
covers the case where there is no active connection to the go2rtc stream and the camera takes a long time to start
2025-12-21 22:09:21 -06:00
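The actual change is in Frigate's web UI, but the fallback-timeout pattern it describes can be sketched generically in Python (names and timeout values below are assumptions):

```python
import asyncio

DEFAULT_TIMEOUT = 5.0    # normal case: go2rtc connection already active
FALLBACK_TIMEOUT = 30.0  # slow-starting camera, no active connection


async def open_with_fallback(open_media_source, has_active_connection: bool):
    """Open a media source, allowing extra time when the stream is cold."""
    timeout = DEFAULT_TIMEOUT if has_active_connection else FALLBACK_TIMEOUT
    return await asyncio.wait_for(open_media_source(), timeout=timeout)
```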
Nicolas Mowen
54f4af3c6a Miscellaneous fixes (#21373)
* Send preferred language for report service

* make object lifecycle scrollable in tracking details

* fix info popovers in live camera drawer

* ensure metrics are initialized if genai is enabled

* docs

* ollama cloud model docs

* Ensure object descriptions get cleaned up

---------

Co-authored-by: Josh Hawkins <32435876+hawkeye217@users.noreply.github.com>
2025-12-20 18:30:34 -06:00
GuoQing Liu
8a4d5f34da fix: fix system enrichments view classification i18n (#21366) 2025-12-20 05:45:31 -07:00
Josh Hawkins
60052e5f9f Miscellaneous Fixes (0.17 beta) (#21355)
* remove footer messages and add update topic to motion tuner view

restart after changing values is no longer required

* add cache key and activity indicator for loading classification wizard images

* Always mark model as untrained when a classname is changed

* clarify object classification docs

* add debug logs for individual lpr replace_rules

* update memray docs

* memray tweaks

* Don't fail for audio transcription when semantic search is not enabled

* Fix incorrect mismatch for object vs sub label

* Check if the video is currently playing when deciding to seek due to misalignment

* Refactor timeline event handling to allow multiple timeline entries per update

* Check if zones have actually changed (not just count) for event state update

* show event icon on mobile

* move div inside conditional

---------

Co-authored-by: Nicolas Mowen <nickmowen213@gmail.com>
2025-12-19 18:59:26 -06:00
Nicolas Mowen
e636449d56 Miscellaneous fixes (0.17 beta) (#21350)
* Fix genai callbacks in MQTT

* Cleanup cursor pointer for classification cards

* Cleanup

* Handle unknown SOCs for RKNN converter by only using known SOCs

* don't allow "none" as a classification class name

* change internal port user to admin and default unspecified username to viewer

* keep 5000 as anonymous user

* suppress tensorflow logging during classification training

* Always apply base log level suppressions for noisy third-party libraries even if no specific logConfig is provided (see the sketch after this entry)

* remove decorator and specifically suppress TFLite delegate creation messages

---------

Co-authored-by: Josh Hawkins <32435876+hawkeye217@users.noreply.github.com>
2025-12-18 15:12:10 -07:00
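A hedged sketch of the log-suppression approach mentioned above; the logger names and levels are illustrative assumptions rather than Frigate's actual defaults, though `TF_CPP_MIN_LOG_LEVEL` is a real TensorFlow environment variable:

```python
import logging
import os

# Silence TensorFlow's C++ INFO/WARNING output; must be set before import.
os.environ.setdefault("TF_CPP_MIN_LOG_LEVEL", "2")

# Base suppressions applied unconditionally, even with no user logConfig.
NOISY_LIBRARIES = ("tensorflow", "absl", "matplotlib")

for name in NOISY_LIBRARIES:
    logging.getLogger(name).setLevel(logging.ERROR)
```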
Josh Hawkins
6a0e31dcf9 Add object classification attributes to Tracked Object Details (#21348)
* attributes endpoint

* event endpoints

* add attributes to more filters

* add to suggestions and query in explore

* support attributes in search input

* i18n

* add object type filter to endpoint

* add attributes to tracked object details pane

* add generic multi select dialog

* save object attributes endpoint

* add group by param to fetch attributes endpoint

* add attribute editing to tracked object details

* docs

* fix docs

* update openapi spec to match python
2025-12-18 08:35:47 -06:00
GuoQing Liu
074b060e9c fix: temp directory is only created when there are review_items. (#21344) 2025-12-18 07:08:45 -07:00
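As a sketch, this fix amounts to a guard before directory creation (function and parameter names below are hypothetical):

```python
import tempfile


def export_review_previews(review_items: list) -> str | None:
    """Create a temp working directory only when there is work to do."""
    if not review_items:
        return None  # previously an empty directory was created here
    temp_dir = tempfile.mkdtemp(prefix="frigate_review_")
    # ... write per-item preview files into temp_dir ...
    return temp_dir
```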
Josh Hawkins
ae009b9861 Miscellaneous Fixes (0.17 beta) (#21336)
* fix coral docs

* add note about sub label object classification with person

* Catch OSError for deleting classification image

* add docs for dummy camera debugging

* add to sidebar

* fix formatting

* fix

* avx instructions are required for classification

* break text on classification card to prevent button overflow

* Ensure there is no NameError when processing

* Don't use region for state classification models

* fix spelling

* Handle attribute based models

* Catch case of non-trained model so it doesn't add an infinite number of classification images

* Actually train object classification models automatically

---------

Co-authored-by: Nicolas Mowen <nickmowen213@gmail.com>
2025-12-17 16:52:27 -07:00
GuoQing Liu
13957fec00 classification i18n fix (#21331)
* fix: fix classification pages none label i18n

* fix: fix README_CN formatting issue
2025-12-17 15:26:11 -07:00
Blake Blackshear
3edfd905de consider anonymous user authenticated (#21335)
* consider anonymous user authenticated

* simplify and update comments
2025-12-17 08:01:20 -06:00
Nicolas Mowen
78eace258e Miscellaneous Fixes (0.17 Beta) (#21320)
* Exclude D-FINE from using CUDA Graphs

* fix objects count in detail stream

* Add debugging for classification models

* validate idb stored stream name and reset if invalid

fixes https://github.com/blakeblackshear/frigate/discussions/21311

* ensure jina loading takes place in the main thread to prevent lazily importing tensorflow in another thread later

reverts atexit changes in https://github.com/blakeblackshear/frigate/pull/21301 and fixes https://github.com/blakeblackshear/frigate/discussions/21306

* revert old atexit change in bird too

* revert types

* ensure we bail in the live mode hook for empty camera groups

prevent infinite rendering on camera groups with no cameras

---------

Co-authored-by: Josh Hawkins <32435876+hawkeye217@users.noreply.github.com>
2025-12-16 22:35:43 -06:00
Issy Szemeti
c292cd207d Align node versions used in GHA PR workflow (#21302)
* Add node/npm version config to package.json

* Bump npm version/fix node version format

* Version range

* Use package.json for github actions node version

* Unification

* Move it all to the bottom

* Remove this

* Bump versions in docs

* Add volta config here too

* Revert changes

* Revert this
2025-12-16 20:28:35 -07:00
Josh Hawkins
e7d047715d Miscellaneous Fixes (0.17 beta) (#21301)
* Wait for config to load before evaluating route access

Fix race condition where custom role users are temporarily denied access after login while config is still loading. Defer route rendering in DefaultAppView until config is available so the complete role list is known before ProtectedRoute evaluates permissions.

* Use batching for state classification generation

* Ignore incorrect scoring images if they make it through the deletion

* Delete unclassified images

* mitigate tensorflow atexit crash by pre-importing tflite/tensorflow on main thread

Pre-import Interpreter in embeddings maintainer and add defensive lazy imports in classification processors to avoid worker-thread tensorflow imports causing "can't register atexit after shutdown" (see the sketch after this entry)

* don't require old password for users with admin role when changing passwords

* don't render actions menu if no options are available

* Remove hwaccel arg as it is not used for encoding

* change password button text

---------

Co-authored-by: Nicolas Mowen <nickmowen213@gmail.com>
2025-12-16 08:11:53 -06:00
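A hedged sketch of the pre-import mitigation described above: importing the TFLite Interpreter once on the main thread registers TensorFlow's atexit hooks there, so later worker-thread use no longer triggers "can't register atexit after shutdown". The module path follows the public tflite-runtime package layout; the surrounding function is hypothetical:

```python
import threading

# Import on the main thread at startup so tflite registers its atexit
# handlers before any worker threads exist.
from tflite_runtime.interpreter import Interpreter


def classify_in_worker(model_path: str, run) -> None:
    def work() -> None:
        # Safe: the module (and its atexit registration) already loaded above.
        interpreter = Interpreter(model_path=model_path)
        interpreter.allocate_tensors()
        run(interpreter)

    threading.Thread(target=work, daemon=True).start()
```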
Issy Szemeti
818cccb2e3 Settings page layout shift - follow up (#21300)
* Fix layout shift with camera filter

* Move min height
2025-12-15 11:42:11 -07:00
Issy Szemeti
f543d0ab31 Fix layout shift with camera filter (#21298) 2025-12-15 11:18:41 -07:00
GuoQing Liu
39af85625e feat: add train classification download weights file endpoint (#21294)
* feat: add train classification download weights file endpoint: "TF_KERAS_MOBILENET_V2_ENDPOINT"

* refactor: custom weights file url
2025-12-15 08:59:13 -07:00
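Per the commit message, the weights file location is configurable via `TF_KERAS_MOBILENET_V2_ENDPOINT`; whether that is an environment variable or an internal constant isn't shown, so this sketch assumes an environment override, and the default URL is a placeholder:

```python
import os

# Placeholder default; the real URL lives in the Frigate source.
DEFAULT_WEIGHTS_URL = "https://example.com/mobilenet_v2_weights.h5"

weights_url = os.environ.get(
    "TF_KERAS_MOBILENET_V2_ENDPOINT", DEFAULT_WEIGHTS_URL
)
```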
Nicolas Mowen
fa16539429 Miscellaneous Fixes (#21289)
* Exclude yolov9 license plate from migraphx runner

* clarify auth endpoint return in openapi schema

* Clarify ROCm enrichments

* fix object mask creation

* Consider audio activity when deciding if recording segments should be kept due to motion

* ensure python defs match openapi spec for auth endpoints

* Fix check for audio activity to keep a segment (see the sketch after this entry)

* fix calendar popover modal bug on export dialog

---------

Co-authored-by: Josh Hawkins <32435876+hawkeye217@users.noreply.github.com>
2025-12-15 09:32:11 -06:00
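A sketch of the keep-segment logic touched by two of the items above, with hypothetical field names: a recording segment survives cleanup when either motion or audio activity overlapped it.

```python
from dataclasses import dataclass


@dataclass
class SegmentActivity:
    motion_frames: int = 0     # hypothetical counters
    audio_detections: int = 0


def should_keep(segment: SegmentActivity) -> bool:
    """Keep the segment when motion OR audio activity overlapped it."""
    return segment.motion_frames > 0 or segment.audio_detections > 0
```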
Josh Hawkins
e1545a8db8 Miscellaneous Fixes (0.17 beta) (#21279)
* Fix Safari popover issue in classification wizard

* use name for key instead of title

prevents duplicate key warnings when users mix vaapi and qsv

* update auth api endpoint descriptions and docs

* tweak headings

* fix note

* clarify classification docs

* Fix cuda birdseye

---------

Co-authored-by: Nicolas Mowen <nickmowen213@gmail.com>
2025-12-14 16:41:38 -07:00
Josh Hawkins
51ee6f26e6 Fix yolov9 coral docs labelmap path (#21278) 2025-12-14 11:00:48 -07:00
Josh Hawkins
430cebecda Fix trigger sync (#21264)
- don't look for event ids when trigger type is description
- don't try to delete thumbnail when trigger type is description
- pass correct event ID into thumbnail deletion
2025-12-13 12:15:25 -06:00
Hosted Weblate
af7af33645 Translated using Weblate (Norwegian Bokmål)
Currently translated at 100.0% (120 of 120 strings)

Translated using Weblate (Norwegian Bokmål)

Currently translated at 100.0% (214 of 214 strings)

Translated using Weblate (Norwegian Bokmål)

Currently translated at 100.0% (130 of 130 strings)

Translated using Weblate (Norwegian Bokmål)

Currently translated at 100.0% (25 of 25 strings)

Translated using Weblate (Norwegian Bokmål)

Currently translated at 100.0% (128 of 128 strings)

Translated using Weblate (Norwegian Bokmål)

Currently translated at 100.0% (72 of 72 strings)

Translated using Weblate (Norwegian Bokmål)

Currently translated at 100.0% (654 of 654 strings)

Translated using Weblate (Norwegian Bokmål)

Currently translated at 100.0% (92 of 92 strings)

Translated using Weblate (Norwegian Bokmål)

Currently translated at 100.0% (41 of 41 strings)

Translated using Weblate (Norwegian Bokmål)

Currently translated at 100.0% (46 of 46 strings)

Translated using Weblate (Norwegian Bokmål)

Currently translated at 100.0% (41 of 41 strings)

Translated using Weblate (Norwegian Bokmål)

Currently translated at 100.0% (130 of 130 strings)

Translated using Weblate (Norwegian Bokmål)

Currently translated at 100.0% (48 of 48 strings)

Translated using Weblate (Norwegian Bokmål)

Currently translated at 100.0% (72 of 72 strings)

Translated using Weblate (Norwegian Bokmål)

Currently translated at 100.0% (92 of 92 strings)

Translated using Weblate (Norwegian Bokmål)

Currently translated at 100.0% (53 of 53 strings)

Translated using Weblate (Norwegian Bokmål)

Currently translated at 100.0% (652 of 652 strings)

Translated using Weblate (Norwegian Bokmål)

Currently translated at 100.0% (120 of 120 strings)

Translated using Weblate (Norwegian Bokmål)

Currently translated at 99.1% (119 of 120 strings)

Translated using Weblate (Norwegian Bokmål)

Currently translated at 100.0% (128 of 128 strings)

Translated using Weblate (Norwegian Bokmål)

Currently translated at 100.0% (119 of 119 strings)

Translated using Weblate (Norwegian Bokmål)

Currently translated at 100.0% (40 of 40 strings)

Translated using Weblate (Norwegian Bokmål)

Currently translated at 100.0% (72 of 72 strings)

Translated using Weblate (Norwegian Bokmål)

Currently translated at 100.0% (639 of 639 strings)

Translated using Weblate (Norwegian Bokmål)

Currently translated at 100.0% (52 of 52 strings)

Translated using Weblate (Norwegian Bokmål)

Currently translated at 100.0% (116 of 116 strings)

Translated using Weblate (Norwegian Bokmål)

Currently translated at 100.0% (55 of 55 strings)

Translated using Weblate (Norwegian Bokmål)

Currently translated at 100.0% (128 of 128 strings)

Translated using Weblate (Norwegian Bokmål)

Currently translated at 100.0% (48 of 48 strings)

Co-authored-by: Hosted Weblate <hosted@weblate.org>
Co-authored-by: OverTheHillsAndFarAway <prosjektx@users.noreply.hosted.weblate.org>
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/common/nb_NO/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-camera/nb_NO/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-dialog/nb_NO/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-filter/nb_NO/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-player/nb_NO/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-classificationmodel/nb_NO/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-events/nb_NO/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-explore/nb_NO/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-facelibrary/nb_NO/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-live/nb_NO/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-search/nb_NO/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-settings/nb_NO/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-system/nb_NO/
Translation: Frigate NVR/common
Translation: Frigate NVR/components-camera
Translation: Frigate NVR/components-dialog
Translation: Frigate NVR/components-filter
Translation: Frigate NVR/components-player
Translation: Frigate NVR/views-classificationmodel
Translation: Frigate NVR/views-events
Translation: Frigate NVR/views-explore
Translation: Frigate NVR/views-facelibrary
Translation: Frigate NVR/views-live
Translation: Frigate NVR/views-search
Translation: Frigate NVR/views-settings
Translation: Frigate NVR/views-system
2025-12-13 08:02:05 -07:00
Hosted Weblate
9822716df5 Translated using Weblate (Chinese (Simplified Han script))
Currently translated at 100.0% (41 of 41 strings)

Translated using Weblate (Chinese (Simplified Han script))

Currently translated at 100.0% (53 of 53 strings)

Translated using Weblate (Chinese (Simplified Han script))

Currently translated at 100.0% (654 of 654 strings)

Translated using Weblate (Chinese (Simplified Han script))

Currently translated at 100.0% (120 of 120 strings)

Translated using Weblate (Chinese (Simplified Han script))

Currently translated at 100.0% (652 of 652 strings)

Translated using Weblate (Chinese (Simplified Han script))

Currently translated at 100.0% (130 of 130 strings)

Translated using Weblate (Chinese (Simplified Han script))

Currently translated at 100.0% (639 of 639 strings)

Translated using Weblate (Chinese (Simplified Han script))

Currently translated at 100.0% (120 of 120 strings)

Translated using Weblate (Chinese (Simplified Han script))

Currently translated at 100.0% (119 of 119 strings)

Translated using Weblate (Chinese (Simplified Han script))

Currently translated at 100.0% (40 of 40 strings)

Translated using Weblate (Chinese (Simplified Han script))

Currently translated at 100.0% (92 of 92 strings)

Translated using Weblate (Chinese (Simplified Han script))

Currently translated at 100.0% (130 of 130 strings)

Translated using Weblate (Chinese (Simplified Han script))

Currently translated at 100.0% (46 of 46 strings)

Translated using Weblate (Chinese (Simplified Han script))

Currently translated at 100.0% (639 of 639 strings)

Translated using Weblate (Chinese (Simplified Han script))

Currently translated at 100.0% (118 of 118 strings)

Translated using Weblate (Chinese (Simplified Han script))

Currently translated at 100.0% (72 of 72 strings)

Translated using Weblate (Chinese (Simplified Han script))

Currently translated at 100.0% (501 of 501 strings)

Translated using Weblate (Chinese (Simplified Han script))

Currently translated at 100.0% (130 of 130 strings)

Translated using Weblate (Chinese (Simplified Han script))

Currently translated at 100.0% (55 of 55 strings)

Translated using Weblate (Chinese (Simplified Han script))

Currently translated at 100.0% (40 of 40 strings)

Translated using Weblate (Chinese (Simplified Han script))

Currently translated at 100.0% (128 of 128 strings)

Translated using Weblate (Chinese (Simplified Han script))

Currently translated at 100.0% (128 of 128 strings)

Translated using Weblate (Chinese (Simplified Han script))

Currently translated at 100.0% (92 of 92 strings)

Translated using Weblate (Chinese (Simplified Han script))

Currently translated at 100.0% (119 of 119 strings)

Co-authored-by: GuoQing Liu <842607283@qq.com>
Co-authored-by: Hosted Weblate <hosted@weblate.org>
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/audio/zh_Hans/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-camera/zh_Hans/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-dialog/zh_Hans/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-filter/zh_Hans/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/objects/zh_Hans/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-classificationmodel/zh_Hans/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-events/zh_Hans/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-explore/zh_Hans/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-facelibrary/zh_Hans/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-live/zh_Hans/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-settings/zh_Hans/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-system/zh_Hans/
Translation: Frigate NVR/audio
Translation: Frigate NVR/components-camera
Translation: Frigate NVR/components-dialog
Translation: Frigate NVR/components-filter
Translation: Frigate NVR/objects
Translation: Frigate NVR/views-classificationmodel
Translation: Frigate NVR/views-events
Translation: Frigate NVR/views-explore
Translation: Frigate NVR/views-facelibrary
Translation: Frigate NVR/views-live
Translation: Frigate NVR/views-settings
Translation: Frigate NVR/views-system
2025-12-13 08:02:05 -07:00
Hosted Weblate
66bc4e6f96 Translated using Weblate (Chinese (Traditional Han script))
Currently translated at 92.3% (85 of 92 strings)

Translated using Weblate (Chinese (Traditional Han script))

Currently translated at 44.5% (53 of 119 strings)

Translated using Weblate (Chinese (Traditional Han script))

Currently translated at 44.5% (53 of 119 strings)

Translated using Weblate (Chinese (Traditional Han script))

Currently translated at 100.0% (52 of 52 strings)

Translated using Weblate (Chinese (Traditional Han script))

Currently translated at 100.0% (10 of 10 strings)

Translated using Weblate (Chinese (Traditional Han script))

Currently translated at 100.0% (48 of 48 strings)

Translated using Weblate (Chinese (Traditional Han script))

Currently translated at 100.0% (2 of 2 strings)

Translated using Weblate (Chinese (Traditional Han script))

Currently translated at 100.0% (214 of 214 strings)

Translated using Weblate (Chinese (Traditional Han script))

Currently translated at 100.0% (40 of 40 strings)

Translated using Weblate (Chinese (Traditional Han script))

Currently translated at 15.7% (79 of 501 strings)

Translated using Weblate (Chinese (Traditional Han script))

Currently translated at 100.0% (13 of 13 strings)

Translated using Weblate (Chinese (Traditional Han script))

Currently translated at 11.8% (76 of 639 strings)

Translated using Weblate (Chinese (Traditional Han script))

Currently translated at 92.1% (118 of 128 strings)

Translated using Weblate (Chinese (Traditional Han script))

Currently translated at 98.1% (54 of 55 strings)

Translated using Weblate (Chinese (Traditional Han script))

Currently translated at 85.3% (111 of 130 strings)

Co-authored-by: Ban <3637117+Ban921@users.noreply.github.com>
Co-authored-by: Hosted Weblate <hosted@weblate.org>
Co-authored-by: Yu Chun Huang <yujun@bo2.tw>
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/audio/zh_Hant/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/common/zh_Hant/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-auth/zh_Hant/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-dialog/zh_Hant/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-input/zh_Hant/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-classificationmodel/zh_Hant/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-events/zh_Hant/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-explore/zh_Hant/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-exports/zh_Hant/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-facelibrary/zh_Hant/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-live/zh_Hant/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-search/zh_Hant/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-settings/zh_Hant/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-system/zh_Hant/
Translation: Frigate NVR/audio
Translation: Frigate NVR/common
Translation: Frigate NVR/components-auth
Translation: Frigate NVR/components-dialog
Translation: Frigate NVR/components-input
Translation: Frigate NVR/views-classificationmodel
Translation: Frigate NVR/views-events
Translation: Frigate NVR/views-explore
Translation: Frigate NVR/views-exports
Translation: Frigate NVR/views-facelibrary
Translation: Frigate NVR/views-live
Translation: Frigate NVR/views-search
Translation: Frigate NVR/views-settings
Translation: Frigate NVR/views-system
2025-12-13 08:02:05 -07:00
Hosted Weblate
23dbe098a5 Translated using Weblate (Slovenian)
Currently translated at 98.0% (51 of 52 strings)

Co-authored-by: Emircanos <emircan368@gmail.com>
Co-authored-by: Hosted Weblate <hosted@weblate.org>
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-facelibrary/sl/
Translation: Frigate NVR/views-facelibrary
2025-12-13 08:02:05 -07:00
Hosted Weblate
c7e6c23f59 Translated using Weblate (Slovak)
Currently translated at 100.0% (40 of 40 strings)

Translated using Weblate (Slovak)

Currently translated at 100.0% (130 of 130 strings)

Translated using Weblate (Slovak)

Currently translated at 99.1% (118 of 119 strings)

Translated using Weblate (Slovak)

Currently translated at 100.0% (116 of 116 strings)

Translated using Weblate (Slovak)

Currently translated at 100.0% (55 of 55 strings)

Translated using Weblate (Slovak)

Currently translated at 100.0% (128 of 128 strings)

Translated using Weblate (Slovak)

Currently translated at 100.0% (501 of 501 strings)

Translated using Weblate (Slovak)

Currently translated at 100.0% (214 of 214 strings)

Translated using Weblate (Slovak)

Currently translated at 100.0% (52 of 52 strings)

Translated using Weblate (Slovak)

Currently translated at 100.0% (639 of 639 strings)

Translated using Weblate (Slovak)

Currently translated at 100.0% (128 of 128 strings)

Co-authored-by: Hosted Weblate <hosted@weblate.org>
Co-authored-by: Jakub K <klacanjakub0@gmail.com>
Co-authored-by: Michal Klacan <mkbebe@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/audio/sk/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/common/sk/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-dialog/sk/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-classificationmodel/sk/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-events/sk/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-explore/sk/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-facelibrary/sk/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-settings/sk/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-system/sk/
Translation: Frigate NVR/audio
Translation: Frigate NVR/common
Translation: Frigate NVR/components-dialog
Translation: Frigate NVR/views-classificationmodel
Translation: Frigate NVR/views-events
Translation: Frigate NVR/views-explore
Translation: Frigate NVR/views-facelibrary
Translation: Frigate NVR/views-settings
Translation: Frigate NVR/views-system
2025-12-13 08:02:05 -07:00
Hosted Weblate
de968de878 Translated using Weblate (Swedish)
Currently translated at 100.0% (654 of 654 strings)

Translated using Weblate (Swedish)

Currently translated at 100.0% (120 of 120 strings)

Translated using Weblate (Swedish)

Currently translated at 100.0% (53 of 53 strings)

Translated using Weblate (Swedish)

Currently translated at 100.0% (92 of 92 strings)

Translated using Weblate (Swedish)

Currently translated at 100.0% (41 of 41 strings)

Translated using Weblate (Swedish)

Currently translated at 100.0% (652 of 652 strings)

Translated using Weblate (Swedish)

Currently translated at 100.0% (120 of 120 strings)

Translated using Weblate (Swedish)

Currently translated at 100.0% (130 of 130 strings)

Translated using Weblate (Swedish)

Currently translated at 100.0% (652 of 652 strings)

Translated using Weblate (Swedish)

Currently translated at 100.0% (118 of 118 strings)

Translated using Weblate (Swedish)

Currently translated at 100.0% (40 of 40 strings)

Translated using Weblate (Swedish)

Currently translated at 100.0% (128 of 128 strings)

Translated using Weblate (Swedish)

Currently translated at 98.3% (117 of 119 strings)

Translated using Weblate (Swedish)

Currently translated at 100.0% (55 of 55 strings)

Co-authored-by: Hosted Weblate <hosted@weblate.org>
Co-authored-by: Kristian Johansson <knmjohansson@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-dialog/sv/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/objects/sv/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-classificationmodel/sv/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-events/sv/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-explore/sv/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-facelibrary/sv/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-live/sv/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-settings/sv/
Translation: Frigate NVR/components-dialog
Translation: Frigate NVR/objects
Translation: Frigate NVR/views-classificationmodel
Translation: Frigate NVR/views-events
Translation: Frigate NVR/views-explore
Translation: Frigate NVR/views-facelibrary
Translation: Frigate NVR/views-live
Translation: Frigate NVR/views-settings
2025-12-13 08:02:05 -07:00
Hosted Weblate
b951cd9aef Translated using Weblate (French)
Currently translated at 100.0% (654 of 654 strings)

Translated using Weblate (French)

Currently translated at 100.0% (128 of 128 strings)

Translated using Weblate (French)

Currently translated at 100.0% (53 of 53 strings)

Translated using Weblate (French)

Currently translated at 100.0% (41 of 41 strings)

Translated using Weblate (French)

Currently translated at 100.0% (652 of 652 strings)

Translated using Weblate (French)

Currently translated at 100.0% (639 of 639 strings)

Translated using Weblate (French)

Currently translated at 100.0% (120 of 120 strings)

Translated using Weblate (French)

Currently translated at 100.0% (128 of 128 strings)

Translated using Weblate (French)

Currently translated at 100.0% (214 of 214 strings)

Translated using Weblate (French)

Currently translated at 100.0% (72 of 72 strings)

Translated using Weblate (French)

Currently translated at 100.0% (40 of 40 strings)

Translated using Weblate (French)

Currently translated at 100.0% (55 of 55 strings)

Translated using Weblate (French)

Currently translated at 100.0% (2 of 2 strings)

Translated using Weblate (French)

Currently translated at 100.0% (639 of 639 strings)

Translated using Weblate (French)

Currently translated at 100.0% (130 of 130 strings)

Translated using Weblate (French)

Currently translated at 100.0% (130 of 130 strings)

Translated using Weblate (French)

Currently translated at 100.0% (119 of 119 strings)

Translated using Weblate (French)

Currently translated at 100.0% (40 of 40 strings)

Translated using Weblate (French)

Currently translated at 100.0% (55 of 55 strings)

Translated using Weblate (French)

Currently translated at 100.0% (128 of 128 strings)

Co-authored-by: Apocoloquintose <bertrand.moreux@gmail.com>
Co-authored-by: Hosted Weblate <hosted@weblate.org>
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/common/fr/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-dialog/fr/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-filter/fr/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-input/fr/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-classificationmodel/fr/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-events/fr/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-explore/fr/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-facelibrary/fr/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-settings/fr/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-system/fr/
Translation: Frigate NVR/common
Translation: Frigate NVR/components-dialog
Translation: Frigate NVR/components-filter
Translation: Frigate NVR/components-input
Translation: Frigate NVR/views-classificationmodel
Translation: Frigate NVR/views-events
Translation: Frigate NVR/views-explore
Translation: Frigate NVR/views-facelibrary
Translation: Frigate NVR/views-settings
Translation: Frigate NVR/views-system
2025-12-13 08:02:05 -07:00
Hosted Weblate
6286010014 Translated using Weblate (Spanish)
Currently translated at 90.2% (83 of 92 strings)

Co-authored-by: Hernán Rossetto <hmronline@gmail.com>
Co-authored-by: Hosted Weblate <hosted@weblate.org>
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-live/es/
Translation: Frigate NVR/views-live
2025-12-13 08:02:05 -07:00
Hosted Weblate
1a0111a683 Translated using Weblate (Dutch)
Currently translated at 100.0% (41 of 41 strings)

Translated using Weblate (Dutch)

Currently translated at 100.0% (53 of 53 strings)

Translated using Weblate (Dutch)

Currently translated at 100.0% (652 of 652 strings)

Translated using Weblate (Dutch)

Currently translated at 100.0% (120 of 120 strings)

Translated using Weblate (Dutch)

Currently translated at 100.0% (639 of 639 strings)

Translated using Weblate (Dutch)

Currently translated at 100.0% (119 of 119 strings)

Translated using Weblate (Dutch)

Currently translated at 100.0% (130 of 130 strings)

Translated using Weblate (Dutch)

Currently translated at 100.0% (639 of 639 strings)

Translated using Weblate (Dutch)

Currently translated at 100.0% (119 of 119 strings)

Translated using Weblate (Dutch)

Currently translated at 100.0% (130 of 130 strings)

Translated using Weblate (Dutch)

Currently translated at 100.0% (40 of 40 strings)

Translated using Weblate (Dutch)

Currently translated at 100.0% (119 of 119 strings)

Translated using Weblate (Dutch)

Currently translated at 100.0% (55 of 55 strings)

Translated using Weblate (Dutch)

Currently translated at 100.0% (128 of 128 strings)

Co-authored-by: Hosted Weblate <hosted@weblate.org>
Co-authored-by: Marijn <168113859+Marijn0@users.noreply.github.com>
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-dialog/nl/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-classificationmodel/nl/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-events/nl/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-explore/nl/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-facelibrary/nl/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-settings/nl/
Translation: Frigate NVR/components-dialog
Translation: Frigate NVR/views-classificationmodel
Translation: Frigate NVR/views-events
Translation: Frigate NVR/views-explore
Translation: Frigate NVR/views-facelibrary
Translation: Frigate NVR/views-settings
2025-12-13 08:02:05 -07:00
Hosted Weblate
5716674703 Translated using Weblate (Italian)
Currently translated at 100.0% (128 of 128 strings)

Translated using Weblate (Italian)

Currently translated at 100.0% (40 of 40 strings)

Translated using Weblate (Italian)

Currently translated at 100.0% (119 of 119 strings)

Translated using Weblate (Italian)

Currently translated at 100.0% (55 of 55 strings)

Co-authored-by: Gringo <ita.translations@tiscali.it>
Co-authored-by: Hosted Weblate <hosted@weblate.org>
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-dialog/it/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-classificationmodel/it/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-events/it/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-explore/it/
Translation: Frigate NVR/components-dialog
Translation: Frigate NVR/views-classificationmodel
Translation: Frigate NVR/views-events
Translation: Frigate NVR/views-explore
2025-12-13 08:02:05 -07:00
Hosted Weblate
dd22c8e2a3 Translated using Weblate (Polish)
Currently translated at 55.8% (67 of 120 strings)

Translated using Weblate (Polish)

Currently translated at 100.0% (52 of 52 strings)

Translated using Weblate (Polish)

Currently translated at 84.6% (110 of 130 strings)

Translated using Weblate (Polish)

Currently translated at 95.0% (38 of 40 strings)

Translated using Weblate (Polish)

Currently translated at 83.0% (108 of 130 strings)

Translated using Weblate (Polish)

Currently translated at 38.6% (46 of 119 strings)

Translated using Weblate (Polish)

Currently translated at 98.1% (54 of 55 strings)

Translated using Weblate (Polish)

Currently translated at 89.2% (570 of 639 strings)

Translated using Weblate (Polish)

Currently translated at 92.9% (119 of 128 strings)

Translated using Weblate (Polish)

Currently translated at 98.1% (210 of 214 strings)

Translated using Weblate (Polish)

Currently translated at 85.4% (546 of 639 strings)

Translated using Weblate (Polish)

Currently translated at 95.0% (38 of 40 strings)

Translated using Weblate (Polish)

Currently translated at 83.5% (107 of 128 strings)

Translated using Weblate (Polish)

Currently translated at 98.0% (51 of 52 strings)

Translated using Weblate (Polish)

Currently translated at 100.0% (92 of 92 strings)

Translated using Weblate (Polish)

Currently translated at 37.8% (45 of 119 strings)

Co-authored-by: Artur <wy66m6xm@anonaddy.me>
Co-authored-by: Bartlomiej Puls <bartlomiej.puls@gmail.com>
Co-authored-by: Hosted Weblate <hosted@weblate.org>
Co-authored-by: Mateusz Kulis <kulis.matis@gmail.com>
Co-authored-by: Paweł Bauer <pawol87@gmail.com>
Co-authored-by: piesu <dogiiee@proton.me>
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/common/pl/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-dialog/pl/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-classificationmodel/pl/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-events/pl/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-explore/pl/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-facelibrary/pl/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-live/pl/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-settings/pl/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-system/pl/
Translation: Frigate NVR/common
Translation: Frigate NVR/components-dialog
Translation: Frigate NVR/views-classificationmodel
Translation: Frigate NVR/views-events
Translation: Frigate NVR/views-explore
Translation: Frigate NVR/views-facelibrary
Translation: Frigate NVR/views-live
Translation: Frigate NVR/views-settings
Translation: Frigate NVR/views-system
2025-12-13 08:02:05 -07:00
Hosted Weblate
61ca688b14 Translated using Weblate (Vietnamese)
Currently translated at 59.0% (385 of 652 strings)

Translated using Weblate (Vietnamese)

Currently translated at 100.0% (130 of 130 strings)

Translated using Weblate (Vietnamese)

Currently translated at 100.0% (92 of 92 strings)

Translated using Weblate (Vietnamese)

Currently translated at 100.0% (52 of 52 strings)

Translated using Weblate (Vietnamese)

Currently translated at 32.5% (39 of 120 strings)

Translated using Weblate (Vietnamese)

Currently translated at 100.0% (214 of 214 strings)

Translated using Weblate (Vietnamese)

Currently translated at 100.0% (40 of 40 strings)

Translated using Weblate (Vietnamese)

Currently translated at 100.0% (13 of 13 strings)

Translated using Weblate (Vietnamese)

Currently translated at 56.4% (368 of 652 strings)

Translated using Weblate (Vietnamese)

Currently translated at 100.0% (55 of 55 strings)

Translated using Weblate (Vietnamese)

Currently translated at 23.3% (28 of 120 strings)

Translated using Weblate (Vietnamese)

Currently translated at 72.3% (94 of 130 strings)

Translated using Weblate (Vietnamese)

Currently translated at 96.1% (50 of 52 strings)

Translated using Weblate (Vietnamese)

Currently translated at 100.0% (128 of 128 strings)

Co-authored-by: Hosted Weblate <hosted@weblate.org>
Co-authored-by: Hào Nguyễn Xuân <it.xuanhao@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/common/vi/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-dialog/vi/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-classificationmodel/vi/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-events/vi/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-explore/vi/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-exports/vi/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-facelibrary/vi/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-live/vi/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-settings/vi/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-system/vi/
Translation: Frigate NVR/common
Translation: Frigate NVR/components-dialog
Translation: Frigate NVR/views-classificationmodel
Translation: Frigate NVR/views-events
Translation: Frigate NVR/views-explore
Translation: Frigate NVR/views-exports
Translation: Frigate NVR/views-facelibrary
Translation: Frigate NVR/views-live
Translation: Frigate NVR/views-settings
Translation: Frigate NVR/views-system
2025-12-13 08:02:05 -07:00
Hosted Weblate
68f5eab839 Translated using Weblate (Czech)
Currently translated at 63.5% (406 of 639 strings)

Translated using Weblate (Czech)

Currently translated at 92.3% (48 of 52 strings)

Translated using Weblate (Czech)

Currently translated at 75.0% (30 of 40 strings)

Translated using Weblate (Czech)

Currently translated at 16.8% (20 of 119 strings)

Translated using Weblate (Czech)

Currently translated at 63.2% (404 of 639 strings)

Co-authored-by: Hosted Weblate <hosted@weblate.org>
Co-authored-by: Libor Vymětalík <vymetalik.libor@gmail.com>
Co-authored-by: Martin Brož <code@martin-broz.cz>
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-classificationmodel/cs/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-events/cs/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-facelibrary/cs/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-settings/cs/
Translation: Frigate NVR/views-classificationmodel
Translation: Frigate NVR/views-events
Translation: Frigate NVR/views-facelibrary
Translation: Frigate NVR/views-settings
2025-12-13 08:02:05 -07:00
Hosted Weblate
2e125dcab2 Translated using Weblate (Catalan)
Currently translated at 100.0% (654 of 654 strings)

Translated using Weblate (Catalan)

Currently translated at 100.0% (41 of 41 strings)

Translated using Weblate (Catalan)

Currently translated at 100.0% (53 of 53 strings)

Translated using Weblate (Catalan)

Currently translated at 100.0% (652 of 652 strings)

Translated using Weblate (Catalan)

Currently translated at 100.0% (120 of 120 strings)

Translated using Weblate (Catalan)

Currently translated at 100.0% (130 of 130 strings)

Translated using Weblate (Catalan)

Currently translated at 100.0% (119 of 119 strings)

Translated using Weblate (Catalan)

Currently translated at 100.0% (639 of 639 strings)

Translated using Weblate (Catalan)

Currently translated at 100.0% (119 of 119 strings)

Translated using Weblate (Catalan)

Currently translated at 100.0% (639 of 639 strings)

Translated using Weblate (Catalan)

Currently translated at 100.0% (130 of 130 strings)

Translated using Weblate (Catalan)

Currently translated at 100.0% (40 of 40 strings)

Translated using Weblate (Catalan)

Currently translated at 100.0% (119 of 119 strings)

Translated using Weblate (Catalan)

Currently translated at 100.0% (128 of 128 strings)

Translated using Weblate (Catalan)

Currently translated at 100.0% (55 of 55 strings)

Co-authored-by: Eduardo Pastor Fernández <123eduardoneko123@gmail.com>
Co-authored-by: Gerard Ricart Castells <gerard.ricart@gmail.com>
Co-authored-by: Hosted Weblate <hosted@weblate.org>
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-dialog/ca/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-classificationmodel/ca/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-events/ca/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-explore/ca/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-facelibrary/ca/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-settings/ca/
Translation: Frigate NVR/components-dialog
Translation: Frigate NVR/views-classificationmodel
Translation: Frigate NVR/views-events
Translation: Frigate NVR/views-explore
Translation: Frigate NVR/views-facelibrary
Translation: Frigate NVR/views-settings
2025-12-13 08:02:05 -07:00
Hosted Weblate
83753fb2ce Translated using Weblate (Ukrainian)
Currently translated at 100.0% (654 of 654 strings)

Translated using Weblate (Ukrainian)

Currently translated at 100.0% (41 of 41 strings)

Translated using Weblate (Ukrainian)

Currently translated at 100.0% (53 of 53 strings)

Translated using Weblate (Ukrainian)

Currently translated at 100.0% (652 of 652 strings)

Translated using Weblate (Ukrainian)

Currently translated at 100.0% (120 of 120 strings)

Translated using Weblate (Ukrainian)

Currently translated at 100.0% (130 of 130 strings)

Translated using Weblate (Ukrainian)

Currently translated at 100.0% (639 of 639 strings)

Translated using Weblate (Ukrainian)

Currently translated at 100.0% (639 of 639 strings)

Translated using Weblate (Ukrainian)

Currently translated at 100.0% (119 of 119 strings)

Translated using Weblate (Ukrainian)

Currently translated at 100.0% (130 of 130 strings)

Translated using Weblate (Ukrainian)

Currently translated at 100.0% (40 of 40 strings)

Translated using Weblate (Ukrainian)

Currently translated at 100.0% (119 of 119 strings)

Translated using Weblate (Ukrainian)

Currently translated at 100.0% (55 of 55 strings)

Translated using Weblate (Ukrainian)

Currently translated at 100.0% (128 of 128 strings)

Co-authored-by: Hosted Weblate <hosted@weblate.org>
Co-authored-by: Максим Горпиніч <gorpinicmaksim0@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-dialog/uk/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-classificationmodel/uk/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-events/uk/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-explore/uk/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-facelibrary/uk/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-settings/uk/
Translation: Frigate NVR/components-dialog
Translation: Frigate NVR/views-classificationmodel
Translation: Frigate NVR/views-events
Translation: Frigate NVR/views-explore
Translation: Frigate NVR/views-facelibrary
Translation: Frigate NVR/views-settings
2025-12-13 08:02:05 -07:00
Hosted Weblate
95368d39d1 Translated using Weblate (Bulgarian)
Currently translated at 100.0% (2 of 2 strings)

Translated using Weblate (Bulgarian)

Currently translated at 9.0% (5 of 55 strings)

Translated using Weblate (Bulgarian)

Currently translated at 0.7% (5 of 639 strings)

Translated using Weblate (Bulgarian)

Currently translated at 15.3% (2 of 13 strings)

Translated using Weblate (Bulgarian)

Currently translated at 31.5% (29 of 92 strings)

Translated using Weblate (Bulgarian)

Currently translated at 2.3% (3 of 128 strings)

Translated using Weblate (Bulgarian)

Currently translated at 20.0% (2 of 10 strings)

Translated using Weblate (Bulgarian)

Currently translated at 9.6% (5 of 52 strings)

Translated using Weblate (Bulgarian)

Currently translated at 22.5% (9 of 40 strings)

Translated using Weblate (Bulgarian)

Currently translated at 20.0% (2 of 10 strings)

Translated using Weblate (Bulgarian)

Currently translated at 6.2% (3 of 48 strings)

Translated using Weblate (Bulgarian)

Currently translated at 0.8% (1 of 119 strings)

Translated using Weblate (Bulgarian)

Currently translated at 100.0% (2 of 2 strings)

Translated using Weblate (Bulgarian)

Currently translated at 2.3% (3 of 128 strings)

Co-authored-by: Hosted Weblate <hosted@weblate.org>
Co-authored-by: Skye Fox <mardymcfly1985@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-auth/bg/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-dialog/bg/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-icons/bg/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-input/bg/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-classificationmodel/bg/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-configeditor/bg/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-events/bg/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-explore/bg/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-exports/bg/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-facelibrary/bg/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-live/bg/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-search/bg/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-settings/bg/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-system/bg/
Translation: Frigate NVR/components-auth
Translation: Frigate NVR/components-dialog
Translation: Frigate NVR/components-icons
Translation: Frigate NVR/components-input
Translation: Frigate NVR/views-classificationmodel
Translation: Frigate NVR/views-configeditor
Translation: Frigate NVR/views-events
Translation: Frigate NVR/views-explore
Translation: Frigate NVR/views-exports
Translation: Frigate NVR/views-facelibrary
Translation: Frigate NVR/views-live
Translation: Frigate NVR/views-search
Translation: Frigate NVR/views-settings
Translation: Frigate NVR/views-system
2025-12-13 08:02:05 -07:00
Hosted Weblate
8cb1fafed3 Translated using Weblate (Romanian)
Currently translated at 100.0% (41 of 41 strings)

Translated using Weblate (Romanian)

Currently translated at 100.0% (53 of 53 strings)

Translated using Weblate (Romanian)

Currently translated at 100.0% (120 of 120 strings)

Translated using Weblate (Romanian)

Currently translated at 100.0% (120 of 120 strings)

Translated using Weblate (Romanian)

Currently translated at 100.0% (130 of 130 strings)

Translated using Weblate (Romanian)

Currently translated at 100.0% (652 of 652 strings)

Translated using Weblate (Romanian)

Currently translated at 100.0% (119 of 119 strings)

Translated using Weblate (Romanian)

Currently translated at 100.0% (639 of 639 strings)

Translated using Weblate (Romanian)

Currently translated at 100.0% (130 of 130 strings)

Translated using Weblate (Romanian)

Currently translated at 100.0% (40 of 40 strings)

Translated using Weblate (Romanian)

Currently translated at 100.0% (119 of 119 strings)

Translated using Weblate (Romanian)

Currently translated at 100.0% (55 of 55 strings)

Translated using Weblate (Romanian)

Currently translated at 100.0% (128 of 128 strings)

Co-authored-by: Hosted Weblate <hosted@weblate.org>
Co-authored-by: lukasig <lukasig@hotmail.com>
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-dialog/ro/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-classificationmodel/ro/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-events/ro/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-explore/ro/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-facelibrary/ro/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-settings/ro/
Translation: Frigate NVR/components-dialog
Translation: Frigate NVR/views-classificationmodel
Translation: Frigate NVR/views-events
Translation: Frigate NVR/views-explore
Translation: Frigate NVR/views-facelibrary
Translation: Frigate NVR/views-settings
2025-12-13 08:02:05 -07:00
Hosted Weblate
b75c85114e Translated using Weblate (Russian)
Currently translated at 92.9% (608 of 654 strings)

Co-authored-by: Artem Vladimirov <artyomka71@mail.ru>
Co-authored-by: Hosted Weblate <hosted@weblate.org>
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-settings/ru/
Translation: Frigate NVR/views-settings
2025-12-13 08:02:05 -07:00
Hosted Weblate
e1a1baa98d Translated using Weblate (Estonian)
Currently translated at 63.0% (29 of 46 strings)

Translated using Weblate (Estonian)

Currently translated at 41.3% (19 of 46 strings)

Translated using Weblate (Estonian)

Currently translated at 42.3% (50 of 118 strings)

Translated using Weblate (Estonian)

Currently translated at 1.8% (1 of 53 strings)

Translated using Weblate (Estonian)

Currently translated at 5.1% (26 of 501 strings)

Translated using Weblate (Estonian)

Currently translated at 4.1% (2 of 48 strings)

Translated using Weblate (Estonian)

Currently translated at 9.5% (62 of 652 strings)

Translated using Weblate (Estonian)

Currently translated at 39.2% (84 of 214 strings)

Translated using Weblate (Estonian)

Currently translated at 0.7% (1 of 128 strings)

Translated using Weblate (Estonian)

Currently translated at 51.2% (21 of 41 strings)

Translated using Weblate (Estonian)

Currently translated at 100.0% (13 of 13 strings)

Translated using Weblate (Estonian)

Currently translated at 100.0% (10 of 10 strings)

Translated using Weblate (Estonian)

Currently translated at 36.4% (43 of 118 strings)

Translated using Weblate (Estonian)

Currently translated at 2.8% (6 of 214 strings)

Translated using Weblate (Estonian)

Currently translated at 9.0% (59 of 652 strings)

Translated using Weblate (Estonian)

Currently translated at 4.6% (6 of 130 strings)

Translated using Weblate (Estonian)

Currently translated at 7.5% (3 of 40 strings)

Translated using Weblate (Estonian)

Currently translated at 8.6% (8 of 92 strings)

Translated using Weblate (Estonian)

Currently translated at 29.1% (21 of 72 strings)

Translated using Weblate (Estonian)

Currently translated at 8.6% (4 of 46 strings)

Translated using Weblate (Estonian)

Currently translated at 4.9% (25 of 501 strings)

Translated using Weblate (Estonian)

Currently translated at 2.0% (1 of 48 strings)

Translated using Weblate (Estonian)

Currently translated at 35.5% (42 of 118 strings)

Translated using Weblate (Estonian)

Currently translated at 2.3% (12 of 501 strings)

Translated using Weblate (Estonian)

Currently translated at 0.8% (1 of 120 strings)

Translated using Weblate (Estonian)

Currently translated at 29.6% (35 of 118 strings)

Translated using Weblate (Estonian)

Currently translated at 2.1% (11 of 501 strings)

Translated using Weblate (Estonian)

Currently translated at 1.4% (3 of 214 strings)

Translated using Weblate (Estonian)

Currently translated at 4.3% (28 of 639 strings)

Translated using Weblate (Estonian)

Currently translated at 100.0% (2 of 2 strings)

Translated using Weblate (Estonian)

Currently translated at 7.6% (1 of 13 strings)

Translated using Weblate (Estonian)

Currently translated at 2.1% (1 of 46 strings)

Translated using Weblate (Estonian)

Currently translated at 2.5% (1 of 40 strings)

Translated using Weblate (Estonian)

Currently translated at 1.3% (1 of 72 strings)

Translated using Weblate (Estonian)

Currently translated at 3.6% (2 of 55 strings)

Translated using Weblate (Estonian)

Currently translated at 20.0% (2 of 10 strings)

Translated using Weblate (Estonian)

Currently translated at 100.0% (6 of 6 strings)

Translated using Weblate (Estonian)

Currently translated at 100.0% (2 of 2 strings)

Translated using Weblate (Estonian)

Currently translated at 4.0% (1 of 25 strings)

Added translation using Weblate (Estonian)

Added translation using Weblate (Estonian)

Added translation using Weblate (Estonian)

Added translation using Weblate (Estonian)

Added translation using Weblate (Estonian)

Added translation using Weblate (Estonian)

Added translation using Weblate (Estonian)

Added translation using Weblate (Estonian)

Added translation using Weblate (Estonian)

Added translation using Weblate (Estonian)

Added translation using Weblate (Estonian)

Added translation using Weblate (Estonian)

Added translation using Weblate (Estonian)

Added translation using Weblate (Estonian)

Added translation using Weblate (Estonian)

Added translation using Weblate (Estonian)

Added translation using Weblate (Estonian)

Added translation using Weblate (Estonian)

Added translation using Weblate (Estonian)

Added translation using Weblate (Estonian)

Added translation using Weblate (Estonian)

Co-authored-by: Hosted Weblate <hosted@weblate.org>
Co-authored-by: Priit Jõerüüt <jrthwlate@users.noreply.hosted.weblate.org>
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/audio/et/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/common/et/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-auth/et/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-camera/et/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-dialog/et/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-filter/et/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-icons/et/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-input/et/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-player/et/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/objects/et/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-classificationmodel/et/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-events/et/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-explore/et/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-exports/et/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-facelibrary/et/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-live/et/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-recording/et/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-search/et/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-settings/et/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-system/et/
Translation: Frigate NVR/audio
Translation: Frigate NVR/common
Translation: Frigate NVR/components-auth
Translation: Frigate NVR/components-camera
Translation: Frigate NVR/components-dialog
Translation: Frigate NVR/components-filter
Translation: Frigate NVR/components-icons
Translation: Frigate NVR/components-input
Translation: Frigate NVR/components-player
Translation: Frigate NVR/objects
Translation: Frigate NVR/views-classificationmodel
Translation: Frigate NVR/views-events
Translation: Frigate NVR/views-explore
Translation: Frigate NVR/views-exports
Translation: Frigate NVR/views-facelibrary
Translation: Frigate NVR/views-live
Translation: Frigate NVR/views-recording
Translation: Frigate NVR/views-search
Translation: Frigate NVR/views-settings
Translation: Frigate NVR/views-system
2025-12-13 08:02:05 -07:00
Hosted Weblate
4098db2441 Translated using Weblate (Danish)
Currently translated at 85.5% (183 of 214 strings)

Translated using Weblate (Danish)

Currently translated at 36.1% (26 of 72 strings)

Co-authored-by: Hosted Weblate <hosted@weblate.org>
Co-authored-by: kklar <karred.larsen@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/common/da/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-filter/da/
Translation: Frigate NVR/common
Translation: Frigate NVR/components-filter
2025-12-13 08:02:05 -07:00
Hosted Weblate
71f4417314 Translated using Weblate (German)
Currently translated at 100.0% (120 of 120 strings)

Translated using Weblate (German)

Currently translated at 100.0% (53 of 53 strings)

Translated using Weblate (German)

Currently translated at 100.0% (41 of 41 strings)

Translated using Weblate (German)

Currently translated at 100.0% (652 of 652 strings)

Translated using Weblate (German)

Currently translated at 100.0% (120 of 120 strings)

Translated using Weblate (German)

Currently translated at 100.0% (639 of 639 strings)

Translated using Weblate (German)

Currently translated at 100.0% (130 of 130 strings)

Translated using Weblate (German)

Currently translated at 100.0% (52 of 52 strings)

Translated using Weblate (German)

Currently translated at 100.0% (130 of 130 strings)

Translated using Weblate (German)

Currently translated at 100.0% (119 of 119 strings)

Translated using Weblate (German)

Currently translated at 100.0% (639 of 639 strings)

Translated using Weblate (German)

Currently translated at 100.0% (501 of 501 strings)

Translated using Weblate (German)

Currently translated at 100.0% (52 of 52 strings)

Translated using Weblate (German)

Currently translated at 99.8% (638 of 639 strings)

Translated using Weblate (German)

Currently translated at 99.8% (638 of 639 strings)

Translated using Weblate (German)

Currently translated at 100.0% (119 of 119 strings)

Translated using Weblate (German)

Currently translated at 100.0% (119 of 119 strings)

Translated using Weblate (German)

Currently translated at 100.0% (10 of 10 strings)

Translated using Weblate (German)

Currently translated at 100.0% (119 of 119 strings)

Translated using Weblate (German)

Currently translated at 100.0% (639 of 639 strings)

Translated using Weblate (German)

Currently translated at 100.0% (119 of 119 strings)

Translated using Weblate (German)

Currently translated at 100.0% (214 of 214 strings)

Translated using Weblate (German)

Currently translated at 100.0% (40 of 40 strings)

Translated using Weblate (German)

Currently translated at 100.0% (92 of 92 strings)

Translated using Weblate (German)

Currently translated at 100.0% (501 of 501 strings)

Translated using Weblate (German)

Currently translated at 100.0% (128 of 128 strings)

Translated using Weblate (German)

Currently translated at 100.0% (128 of 128 strings)

Translated using Weblate (German)

Currently translated at 99.5% (213 of 214 strings)

Translated using Weblate (German)

Currently translated at 99.5% (213 of 214 strings)

Translated using Weblate (German)

Currently translated at 83.5% (534 of 639 strings)

Translated using Weblate (German)

Currently translated at 93.8% (470 of 501 strings)

Translated using Weblate (German)

Currently translated at 98.9% (91 of 92 strings)

Translated using Weblate (German)

Currently translated at 100.0% (52 of 52 strings)

Translated using Weblate (German)

Currently translated at 100.0% (39 of 39 strings)

Translated using Weblate (German)

Currently translated at 100.0% (128 of 128 strings)

Translated using Weblate (German)

Currently translated at 100.0% (128 of 128 strings)

Translated using Weblate (German)

Currently translated at 100.0% (116 of 116 strings)

Translated using Weblate (German)

Currently translated at 100.0% (116 of 116 strings)

Translated using Weblate (German)

Currently translated at 34.4% (40 of 116 strings)

Translated using Weblate (German)

Currently translated at 94.8% (37 of 39 strings)

Translated using Weblate (German)

Currently translated at 100.0% (55 of 55 strings)

Translated using Weblate (German)

Currently translated at 78.0% (499 of 639 strings)

Translated using Weblate (German)

Currently translated at 98.4% (126 of 128 strings)

Translated using Weblate (German)

Currently translated at 29.3% (34 of 116 strings)

Translated using Weblate (German)

Currently translated at 96.0% (123 of 128 strings)

Translated using Weblate (German)

Currently translated at 78.0% (499 of 639 strings)

Co-authored-by: Emircanos <emircan368@gmail.com>
Co-authored-by: Fuxle <moritz.hofmann2005@gmail.com>
Co-authored-by: Hosted Weblate <hosted@weblate.org>
Co-authored-by: Sebastian Sie <sebastian.neuplanitz@googlemail.com>
Co-authored-by: bgriese0 <kontakt@bjoern-griese.de>
Co-authored-by: jmtatsch <julian@tatsch.it>
Co-authored-by: mvdberge <micha.vordemberge@christmann.info>
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/audio/de/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/common/de/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-auth/de/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-dialog/de/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-classificationmodel/de/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-events/de/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-explore/de/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-facelibrary/de/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-live/de/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-settings/de/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-system/de/
Translation: Frigate NVR/audio
Translation: Frigate NVR/common
Translation: Frigate NVR/components-auth
Translation: Frigate NVR/components-dialog
Translation: Frigate NVR/views-classificationmodel
Translation: Frigate NVR/views-events
Translation: Frigate NVR/views-explore
Translation: Frigate NVR/views-facelibrary
Translation: Frigate NVR/views-live
Translation: Frigate NVR/views-settings
Translation: Frigate NVR/views-system
2025-12-13 08:02:05 -07:00
Hosted Weblate
4002feb9f8 Translated using Weblate (Portuguese (Brazil))
Currently translated at 29.3% (34 of 116 strings)

Co-authored-by: Hosted Weblate <hosted@weblate.org>
Co-authored-by: Jose Machado <machado.jm4@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-classificationmodel/pt_BR/
Translation: Frigate NVR/views-classificationmodel
2025-12-13 08:02:05 -07:00
Hosted Weblate
7a399c842d Translated using Weblate (Turkish)
Currently translated at 100.0% (41 of 41 strings)

Translated using Weblate (Turkish)

Currently translated at 100.0% (53 of 53 strings)

Translated using Weblate (Turkish)

Currently translated at 100.0% (652 of 652 strings)

Translated using Weblate (Turkish)

Currently translated at 100.0% (130 of 130 strings)

Translated using Weblate (Turkish)

Currently translated at 100.0% (639 of 639 strings)

Translated using Weblate (Turkish)

Currently translated at 100.0% (501 of 501 strings)

Translated using Weblate (Turkish)

Currently translated at 100.0% (120 of 120 strings)

Translated using Weblate (Turkish)

Currently translated at 100.0% (214 of 214 strings)

Translated using Weblate (Turkish)

Currently translated at 100.0% (92 of 92 strings)

Translated using Weblate (Turkish)

Currently translated at 100.0% (128 of 128 strings)

Translated using Weblate (Turkish)

Currently translated at 100.0% (52 of 52 strings)

Translated using Weblate (Turkish)

Currently translated at 100.0% (130 of 130 strings)

Translated using Weblate (Turkish)

Currently translated at 100.0% (40 of 40 strings)

Translated using Weblate (Turkish)

Currently translated at 100.0% (119 of 119 strings)

Translated using Weblate (Turkish)

Currently translated at 64.6% (413 of 639 strings)

Translated using Weblate (Turkish)

Currently translated at 98.5% (211 of 214 strings)

Translated using Weblate (Turkish)

Currently translated at 66.3% (77 of 116 strings)

Translated using Weblate (Turkish)

Currently translated at 63.7% (74 of 116 strings)

Translated using Weblate (Turkish)

Currently translated at 97.6% (209 of 214 strings)

Translated using Weblate (Turkish)

Currently translated at 100.0% (55 of 55 strings)

Translated using Weblate (Turkish)

Currently translated at 94.5% (121 of 128 strings)

Translated using Weblate (Turkish)

Currently translated at 93.7% (120 of 128 strings)

Translated using Weblate (Turkish)

Currently translated at 94.5% (87 of 92 strings)

Translated using Weblate (Turkish)

Currently translated at 100.0% (39 of 39 strings)

Translated using Weblate (Turkish)

Currently translated at 58.9% (377 of 639 strings)

Translated using Weblate (Turkish)

Currently translated at 100.0% (52 of 52 strings)

Co-authored-by: Emircanos <emircan368@gmail.com>
Co-authored-by: Hosted Weblate <hosted@weblate.org>
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/audio/tr/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/common/tr/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-dialog/tr/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-classificationmodel/tr/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-events/tr/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-explore/tr/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-facelibrary/tr/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-live/tr/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-settings/tr/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-system/tr/
Translation: Frigate NVR/audio
Translation: Frigate NVR/common
Translation: Frigate NVR/components-dialog
Translation: Frigate NVR/views-classificationmodel
Translation: Frigate NVR/views-events
Translation: Frigate NVR/views-explore
Translation: Frigate NVR/views-facelibrary
Translation: Frigate NVR/views-live
Translation: Frigate NVR/views-settings
Translation: Frigate NVR/views-system
2025-12-13 08:02:05 -07:00
Josh Hawkins
bc5d6cf1c2 docs tweaks (#21261) 2025-12-13 07:59:49 -07:00
Martin Weinelt
dde02cadb2 Update peewee-migrate to 0.14.x (#21243)
Replaces two function calls that were deprecated aliases for the new function names:

- Migrator.python -> Migration.run
- Migrator.change_column -> Migrator.change_field
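
A minimal sketch of the rename, assuming peewee-migrate's usual migrate(migrator, database) entry point; the model, field, and backfill function here are illustrative placeholders, not Frigate's actual migrations:

    import peewee as pw

    class Event(pw.Model):  # placeholder model for illustration
        score = pw.FloatField(default=0.0)

    def backfill(migrator, database):
        ...  # illustrative data-migration step

    def migrate(migrator, database, *, fake=False):
        # old: migrator.python(backfill)   -> new name per peewee-migrate 0.14
        migrator.run(backfill)
        # old: migrator.change_column(Event, "score", ...) -> change_field
        migrator.change_field(Event, "score", pw.FloatField(default=0.0))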
2025-12-13 07:13:04 -07:00
Nicolas Mowen
8ddcbf9a8d Improve handling of backchannel audio in camera wizard (#21250)
* Improve handling of backchannel audio in camera wizard

* Cleanup

* look for backchannel on all registered streams on save

avoids potential issues with a timeout in stream registration

---------

Co-authored-by: Josh Hawkins <32435876+hawkeye217@users.noreply.github.com>
2025-12-13 07:12:37 -07:00
wozz
6b9b3778f5 fix: attribute error in embedding maintainer ini… (#21252)
* fix: Resolve deadlock and attribute error in embedding maintainer initialization

Updates the trigger embedding calculation to call embedding methods directly instead of using ZMQ. This prevents a deadlock during initialization where the ZMQ responder is not yet polling for requests.

Also updates sync_triggers to pass the camera name and trigger name to the calculation method, fixing an AttributeError where trigger.name was accessed on a TriggerConfig object.
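
A rough sketch of the deadlock shape described above (all names are illustrative, not Frigate's actual identifiers): during __init__ the process's own ZMQ responder is not yet polling, so a request to it would block forever, while a direct in-process call cannot; passing camera and trigger names explicitly avoids reading a nonexistent trigger.name attribute:

    class EmbeddingMaintainer:
        def __init__(self, embeddings, trigger_configs):
            self.embeddings = embeddings
            for camera, trigger_name, trigger in trigger_configs:
                # Deadlock: a ZMQ request to this process's own responder would
                # block forever here, because its polling loop isn't running yet.
                # Safe: call the embedding method directly, in-process.
                vector = self.embeddings.embed_text(trigger.data)
                # Pass camera/trigger_name explicitly; TriggerConfig has no .name.
                self.save_trigger_embedding(camera, trigger_name, vector)

        def save_trigger_embedding(self, camera, trigger_name, vector):
            ...  # persist the vector for (camera, trigger_name); stub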

* mocked repro

* Revert "mocked repro"

This reverts commit dea5b5d4db.

* fix formatting

* Update embeddings.py

new line
2025-12-13 07:12:09 -07:00
Josh Hawkins
308e692732 Miscellaneous Fixes (#21241)
* only show jwt secret tip for admin users

* fix preview endpoint 403 for viewer role when "all" param is used

* Update docs dependencies

* add warning if ffmpeg isn't selected for reolink http streams

* Update the motion for motion masks

* Also update objects

* Add docs about backchannel and two way talk takeover

* don't require restart when deleting zone or mask

* Ensure motion is correctly set when adjusting masks

* don't use Python-style raw prefixes in YAML examples in LPR docs

* wording

---------

Co-authored-by: Nicolas Mowen <nickmowen213@gmail.com>
2025-12-12 07:45:03 -06:00
Martin Weinelt
67e18eff94 Replace stringy paths with constants (#21247) 2025-12-12 06:22:09 -07:00
Nicolas Mowen
649ca49e55 Beta discussion template (#21239)
* Add beta support template for discussions

* Add note to bug
2025-12-11 09:37:46 -06:00
Josh Hawkins
fa6dda6735 Miscellaneous Fixes (#21208)
* conditionally display actions for admin role only

* only allow admins to save annotation offset

* Fix classification reset filter

* fix explore context menu from blocking pointer events on the body element after dialog close

applying modal=false to the menu (not to the dialog) to fix this in the same way as elsewhere in the codebase

* add select all link to face library, classification, and explore

* Disable iOS image dragging for classification card

* add proxmox ballooning comment

* lpr docs tweaks

* yaml list

* clarify tls_insecure

* Improve security summary format and usefulness

---------

Co-authored-by: Nicolas Mowen <nickmowen213@gmail.com>
2025-12-11 07:23:34 -07:00
dependabot[bot]
9cdc10008d Bump actions/checkout from 5 to 6 (#20987)
Bumps [actions/checkout](https://github.com/actions/checkout) from 5 to 6.
- [Release notes](https://github.com/actions/checkout/releases)
- [Changelog](https://github.com/actions/checkout/blob/main/CHANGELOG.md)
- [Commits](https://github.com/actions/checkout/compare/v5...v6)

---
updated-dependencies:
- dependency-name: actions/checkout
  dependency-version: '6'
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-12-09 12:52:14 -07:00
Nicolas Mowen
4cf4520ea7 Miscellaneous Fixes (#21193)
* Fix saving zone friendly name when it wasn't set

* Fix UTF-8 handling for Onvif

* Don't remove none directory for classes

* Lookup all event IDs for review item immediately

* Cleanup typing

* Only fetch events when review group is open

* Cleanup

* disable debug paths switch for autotracking cameras

* fix clickable birdseye

---------

Co-authored-by: Josh Hawkins <32435876+hawkeye217@users.noreply.github.com>
2025-12-09 12:08:44 -06:00
Josh Hawkins
dfd837cfb0 refactor to use react-hook-form and zod (#21195) 2025-12-08 09:19:34 -07:00
Josh Hawkins
152e585206 Authentication improvements (#21194)
* jwt permissions

* add old password to body req

* add model and migration

need to track the datetime that passwords were changed for the jwt

* auth api backend changes

- use os.open to create jwt secret with restrictive permissions (0o600: read/write for owner only)
- add backend validation for password strength
- add iat claim to jwt so the server can determine when a token was issued and reject any jwts issued before a user's password_changed_at timestamp, ensuring old tokens are invalidated after a password change
- set logout route to public to avoid 401 when logging out
- issue new jwt for users who change their own password so they stay logged in
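
Two of the changes above (the 0o600 secret file and the iat staleness check), sketched minimally; the helper names are illustrative, not Frigate's actual code:

    import os

    def write_jwt_secret(path: str, secret: bytes) -> None:
        # os.open with mode 0o600 creates the file readable/writable by the
        # owner only (the mode applies at creation, subject to the umask).
        fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o600)
        try:
            os.write(fd, secret)
        finally:
            os.close(fd)

    def token_is_stale(claims: dict, password_changed_at: float) -> bool:
        # Reject any JWT issued (iat) before the user's last password change,
        # so old tokens are invalidated after a password change.
        return claims.get("iat", 0) < password_changed_at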

* improve set password dialog

- add field to verify old password
- add password strength requirements

* frontend tweaks for password dialog

* i18n

* use verify endpoint for existing password verification

avoid /login side effects (creating a new session)

* public logout

* only check if password has changed on jwt refresh

* fix tests

Fix migration 030 by using raw SQL to select usernames (avoids the ORM selecting nonexistent columns)

* add multi device warning to password dialog

* remove password verification endpoint

Just send old_password + new password in one request, let the backend handle verification in a single operation
2025-12-08 09:02:28 -07:00
Josh Hawkins
28b0ad782a Fix intermittent hangs in Tracking Details videos (#21185)
* remove extra gap controller overrides

* new vod endpoint for clips to set discontinuity

ensure tracking-detail playlists emit #EXT-X-DISCONTINUITY (avoids fMP4 timestamp rewrites and playback stalls) while leaving standard recordings behavior unchanged
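
An illustrative excerpt of what such a tracking-detail playlist could look like (segment and init-file names are made up); the #EXT-X-DISCONTINUITY tag tells the player to reset its timestamp/decoder state between clips rather than rewriting fMP4 timestamps:

    #EXTM3U
    #EXT-X-VERSION:7
    #EXT-X-TARGETDURATION:10
    #EXT-X-MAP:URI="clip-a-init.mp4"
    #EXTINF:10.0,
    clip-a-0.m4s
    #EXT-X-DISCONTINUITY
    #EXT-X-MAP:URI="clip-b-init.mp4"
    #EXTINF:8.5,
    clip-b-0.m4s
    #EXT-X-ENDLIST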

* use new endpoint
2025-12-07 12:58:33 -06:00
GuoQing Liu
644c7fa6b4 fix: fix classification missing i18n (#21179) 2025-12-07 11:35:48 -07:00
Josh Hawkins
88a8de0b1c Miscellaneous Fixes (#21166)
* Improve model titles

* remove deprecated strftime_fmt

* remove

* remove restart wording

* add copilot instructions

* fix docs

* Move files into try for classification rollover

* Use friendly names for zones in notifications

---------

Co-authored-by: Nicolas Mowen <nickmowen213@gmail.com>
2025-12-07 07:57:46 -07:00
Nicolas Mowen
c136e5e8bd Miscellaneous fixes (#21141)
* Remove source_type from API

* Don't require state classification models to select all classes

* Specifically validate provided end_time for manual events

* Remove yolov9 specification for warning

* Remove warning for coral

* clarify zone name tip

* clarify replace rules in lpr docs

* remove periods

* Add explanation for review report

* adjust HLS gap controller params

defaults to false; should help recover from hangs and stalling in Tracking Details videos on Chrome

* only redirect to login page once on 401

attempt to fix iOS PWA Safari redirect storm

* Use contextual information from other cameras to inform report summary

* Formatting and prompt improvements for review summary report

* More improvements to prompt

* Remove examples

* Don't show admin action buttons on export card

* fix redirect race condition

Coordinate 401 redirect logic between ApiProvider and ProtectedRoute using a shared flag to prevent multiple simultaneous redirects that caused UI flashing. Ensure both auth error paths check and set the redirect flag before navigating to login, eliminating race conditions where both mechanisms could trigger at once

---------

Co-authored-by: Josh Hawkins <32435876+hawkeye217@users.noreply.github.com>
2025-12-04 12:19:07 -06:00
Dan Brown
9ab78f496c Adds support for YOLO v9 models running on Google Coral (#21124)
* Adds support for YOLO v9 models running on Google Coral

* fix format by using ruff instead of black

* Remove comment

Co-authored-by: Nicolas Mowen <nickmowen213@gmail.com>

* Remove log message

Co-authored-by: Nicolas Mowen <nickmowen213@gmail.com>

* revert to hard-coded settings. use ModelTypeEnum directly

* remove log messages. detect invalid output tensor count

* remove 1-tensor processing. add pre_process() function

* check for valid model type

* fix formatting

* remove unused import and variable

* remove tip that indicates other YOLO models may be supported.

---------

Co-authored-by: Nicolas Mowen <nickmowen213@gmail.com>
2025-12-02 13:26:57 -07:00
Nicolas Mowen
8a360eecf8 Refactor ROCm Support (#21132)
* Remove gfx 900 support and only keep ROCm build with all variants by default

* Include C++ for JIT header compilation
2025-12-02 09:41:02 -07:00
Josh Hawkins
1f9669bbe5 Miscellaneous Fixes (#21102)
* ensure audio events display timeline entries in tracking details

* tweak tracking details layout for small desktop sizes

* update transcription docs

* Update classification docs for training recommendations

* Make number of classification images to be kept configurable

* Add bird to classification reference

* Fix incorrect averaging of the segments so it correctly only uses the most recent segments

* fix trigger logic

* add ability to download clean snapshot

---------

Co-authored-by: Nicolas Mowen <nickmowen213@gmail.com>
2025-12-02 07:21:15 -07:00
GuoQing Liu
9d4aac2b8e Revise the README_CN (#21048)
* docs: update chinese readme

* style: Improve the styling of the Chinese document jump tips bar in dark mode

* docs: add license translation
2025-12-01 10:52:30 -07:00
Nicolas Mowen
aa09132dfd Update ROCm to 7.1.1 (#21113)
* Update ROCm to 7.1.1

* testing for build

* Fix

* remove debug
2025-12-01 08:07:35 -07:00
Josh Hawkins
24766ce427 Use user-namespaced keys for idb persistence (#21110)
* add new hooks

* use new hooks for user based keys

* fix layout race condition
2025-12-01 07:59:54 -06:00
Nicolas Mowen
97b29d177a Miscellaneous Fixes (#21072)
* Implement renaming in model editing dialog

* add transcription faq

* remove incorrect constraint for viewer as username

should be able to change anyone's role other than admin

* Don't save redundant state changes

* prevent crash when a camera doesn't support onvif imaging service required for focus support

* Fine tune behavior

* Stop redundant go2rtc stream metadata requests and defer audio information to allow bandwidth for image requests

* Improve cleanup logic for capture process

---------

Co-authored-by: Josh Hawkins <32435876+hawkeye217@users.noreply.github.com>
2025-11-30 06:54:42 -06:00
Ryan Hass
1a75251ffb Add yolov9 inference speeds for UHD 730 GPU. (#21090)
This adds the inference speeds measured on an i5-11400T with a UHD 730
GPU running at nominal temperatures.
2025-11-29 07:32:16 -06:00
Josh Hawkins
048475e750 API admin exemptions and route guard updates (#21094)
* update exempt paths and add missing guard to api endpoints

* admin only frigate+ submission
2025-11-29 07:30:04 -06:00
Nicolas Mowen
1b57fb15a7 Miscellaneous Fixes (#21063)
* Fix history management failing when updating URL

* Handle case where user doesn't have images that represent all states

If a user selects all images and can't proceed, we show a warning that they can still proceed, but the model won't be trained until they get at least one image for every state.

* Still create all classes

We still need to create all classes even if the user didn't assign images to them.

* fix camera group access for non admin users

changes from previous PR wrongly included users from the standard viewer role (but excluded custom viewer roles)

* Adjust threat level interaction to be less strict

* use base path when fetching go2rtc data

* show config error message when starting in safe mode

* fix genai migration

* fix genai

* Fix genai migration

---------

Co-authored-by: Josh Hawkins <32435876+hawkeye217@users.noreply.github.com>
2025-11-27 07:58:35 -06:00
Josh Hawkins
cd606ad240 Enforce default admin role requirement for API endpoints (#21065)
* require admin role by default

* update all endpoint access guards

* explicit paths and prefixes exception lists

* fix tests to use mock auth

* add helper and simplify auth conditions

* add missing exempt path

* fix test

* make metrics endpoint require auth
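
A minimal sketch of a deny-by-default guard with explicit exemption lists (FastAPI-style middleware; the paths and the role header here are hypothetical, not Frigate's actual lists):

    from fastapi import FastAPI, Request
    from fastapi.responses import JSONResponse

    EXEMPT_PATHS = {"/api/login", "/api/logout"}   # hypothetical examples
    EXEMPT_PREFIXES = ("/api/public/",)            # hypothetical example

    app = FastAPI()

    @app.middleware("http")
    async def require_admin_by_default(request: Request, call_next):
        path = request.url.path
        exempt = path in EXEMPT_PATHS or path.startswith(EXEMPT_PREFIXES)
        if not exempt and request.headers.get("remote-role") != "admin":
            # Endpoints are admin-only unless explicitly exempted.
            return JSONResponse({"message": "Forbidden"}, status_code=403)
        return await call_next(request)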
2025-11-26 15:07:28 -06:00
Nicolas Mowen
de2144f158 Miscellaneous Fixes (#21050)
* Don't add to history when opening search dialog

* Update caniuse

* Revamp the history handling for dialog components

* clarify audio transcription docs

* Use titlecase helper

* Allow running object classification on stationary objects

* small spacing tweaks for tablets

* require admin role to delete users

* explicitly prevent deletion of admin user

---------

Co-authored-by: Josh Hawkins <32435876+hawkeye217@users.noreply.github.com>
2025-11-26 07:23:51 -06:00
Nicolas Mowen
e79ff9a079 Add built in support for memray memory debugging (#21057) 2025-11-25 16:34:01 -06:00
Abinila Siva
fe47620153 [MemryX] Clean shutdown of detector process (#21035)
* update code for clean exit

* ruff format

* remove unused time import

* update stop_event handling

* remove hasattr check
2025-11-25 10:25:07 -07:00
Hosted Weblate
8520ade5c4 Translated using Weblate (Norwegian Bokmål)
Currently translated at 100.0% (128 of 128 strings)

Translated using Weblate (Norwegian Bokmål)

Currently translated at 100.0% (92 of 92 strings)

Translated using Weblate (Norwegian Bokmål)

Currently translated at 100.0% (116 of 116 strings)

Translated using Weblate (Norwegian Bokmål)

Currently translated at 100.0% (639 of 639 strings)

Translated using Weblate (Norwegian Bokmål)

Currently translated at 100.0% (116 of 116 strings)

Translated using Weblate (Norwegian Bokmål)

Currently translated at 100.0% (639 of 639 strings)

Translated using Weblate (Norwegian Bokmål)

Currently translated at 93.1% (108 of 116 strings)

Translated using Weblate (Norwegian Bokmål)

Currently translated at 100.0% (52 of 52 strings)

Translated using Weblate (Norwegian Bokmål)

Currently translated at 100.0% (127 of 127 strings)

Translated using Weblate (Norwegian Bokmål)

Currently translated at 100.0% (39 of 39 strings)

Translated using Weblate (Norwegian Bokmål)

Currently translated at 100.0% (214 of 214 strings)

Translated using Weblate (Norwegian Bokmål)

Currently translated at 100.0% (118 of 118 strings)

Co-authored-by: Hosted Weblate <hosted@weblate.org>
Co-authored-by: OverTheHillsAndFarAway <prosjektx@users.noreply.hosted.weblate.org>
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/common/nb_NO/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/objects/nb_NO/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-classificationmodel/nb_NO/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-events/nb_NO/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-explore/nb_NO/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-facelibrary/nb_NO/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-live/nb_NO/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-settings/nb_NO/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-system/nb_NO/
Translation: Frigate NVR/common
Translation: Frigate NVR/objects
Translation: Frigate NVR/views-classificationmodel
Translation: Frigate NVR/views-events
Translation: Frigate NVR/views-explore
Translation: Frigate NVR/views-facelibrary
Translation: Frigate NVR/views-live
Translation: Frigate NVR/views-settings
Translation: Frigate NVR/views-system
2025-11-25 07:06:47 -07:00
Hosted Weblate
1c7ed45f21 Translated using Weblate (Chinese (Simplified Han script))
Currently translated at 100.0% (125 of 125 strings)

Translated using Weblate (Chinese (Simplified Han script))

Currently translated at 100.0% (639 of 639 strings)

Translated using Weblate (Chinese (Simplified Han script))

Currently translated at 100.0% (639 of 639 strings)

Translated using Weblate (Chinese (Simplified Han script))

Currently translated at 100.0% (90 of 90 strings)

Translated using Weblate (Chinese (Simplified Han script))

Currently translated at 100.0% (127 of 127 strings)

Translated using Weblate (Chinese (Simplified Han script))

Currently translated at 100.0% (214 of 214 strings)

Translated using Weblate (Chinese (Simplified Han script))

Currently translated at 100.0% (52 of 52 strings)

Translated using Weblate (Chinese (Simplified Han script))

Currently translated at 100.0% (116 of 116 strings)

Translated using Weblate (Chinese (Simplified Han script))

Currently translated at 100.0% (639 of 639 strings)

Translated using Weblate (Chinese (Simplified Han script))

Currently translated at 100.0% (127 of 127 strings)

Translated using Weblate (Chinese (Simplified Han script))

Currently translated at 100.0% (39 of 39 strings)

Translated using Weblate (Chinese (Simplified Han script))

Currently translated at 100.0% (214 of 214 strings)

Co-authored-by: GuoQing Liu <842607283@qq.com>
Co-authored-by: Hosted Weblate <hosted@weblate.org>
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/common/zh_Hans/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-classificationmodel/zh_Hans/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-events/zh_Hans/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-explore/zh_Hans/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-facelibrary/zh_Hans/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-live/zh_Hans/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-settings/zh_Hans/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-system/zh_Hans/
Translation: Frigate NVR/common
Translation: Frigate NVR/views-classificationmodel
Translation: Frigate NVR/views-events
Translation: Frigate NVR/views-explore
Translation: Frigate NVR/views-facelibrary
Translation: Frigate NVR/views-live
Translation: Frigate NVR/views-settings
Translation: Frigate NVR/views-system
2025-11-25 07:06:47 -07:00
Hosted Weblate
130c7c9eec Translated using Weblate (Slovenian)
Currently translated at 100.0% (214 of 214 strings)

Co-authored-by: Hosted Weblate <hosted@weblate.org>
Co-authored-by: Kaboom <kaboom083@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/common/sl/
Translation: Frigate NVR/common
2025-11-25 07:06:47 -07:00
Hosted Weblate
26e630aa8c Translated using Weblate (Slovak)
Currently translated at 97.6% (125 of 128 strings)

Translated using Weblate (Slovak)

Currently translated at 99.1% (115 of 116 strings)

Translated using Weblate (Slovak)

Currently translated at 99.5% (213 of 214 strings)

Translated using Weblate (Slovak)

Currently translated at 83.8% (536 of 639 strings)

Translated using Weblate (Slovak)

Currently translated at 100.0% (127 of 127 strings)

Translated using Weblate (Slovak)

Currently translated at 100.0% (92 of 92 strings)

Co-authored-by: Hosted Weblate <hosted@weblate.org>
Co-authored-by: Michal K <michal@totaljs.com>
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/common/sk/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-classificationmodel/sk/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-explore/sk/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-live/sk/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-settings/sk/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-system/sk/
Translation: Frigate NVR/common
Translation: Frigate NVR/views-classificationmodel
Translation: Frigate NVR/views-explore
Translation: Frigate NVR/views-live
Translation: Frigate NVR/views-settings
Translation: Frigate NVR/views-system
2025-11-25 07:06:47 -07:00
Hosted Weblate
a478da45a3 Translated using Weblate (Swedish)
Currently translated at 100.0% (128 of 128 strings)

Translated using Weblate (Swedish)

Currently translated at 100.0% (127 of 127 strings)

Translated using Weblate (Swedish)

Currently translated at 100.0% (125 of 125 strings)

Translated using Weblate (Swedish)

Currently translated at 100.0% (639 of 639 strings)

Translated using Weblate (Swedish)

Currently translated at 100.0% (214 of 214 strings)

Translated using Weblate (Swedish)

Currently translated at 100.0% (92 of 92 strings)

Translated using Weblate (Swedish)

Currently translated at 100.0% (54 of 54 strings)

Translated using Weblate (Swedish)

Currently translated at 100.0% (639 of 639 strings)

Translated using Weblate (Swedish)

Currently translated at 100.0% (639 of 639 strings)

Translated using Weblate (Swedish)

Currently translated at 100.0% (116 of 116 strings)

Translated using Weblate (Swedish)

Currently translated at 100.0% (639 of 639 strings)

Translated using Weblate (Swedish)

Currently translated at 100.0% (127 of 127 strings)

Translated using Weblate (Swedish)

Currently translated at 100.0% (39 of 39 strings)

Translated using Weblate (Swedish)

Currently translated at 100.0% (214 of 214 strings)

Translated using Weblate (Norwegian Bokmål)

Currently translated at 100.0% (118 of 118 strings)

Co-authored-by: Hosted Weblate <hosted@weblate.org>
Co-authored-by: Kristian Johansson <knmjohansson@gmail.com>
Co-authored-by: Noah <noah@hack.se>
Co-authored-by: OverTheHillsAndFarAway <prosjektx@users.noreply.hosted.weblate.org>
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/common/sv/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-dialog/sv/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/objects/nb_NO/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-classificationmodel/sv/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-events/sv/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-explore/sv/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-live/sv/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-settings/sv/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-system/sv/
Translation: Frigate NVR/common
Translation: Frigate NVR/components-dialog
Translation: Frigate NVR/objects
Translation: Frigate NVR/views-classificationmodel
Translation: Frigate NVR/views-events
Translation: Frigate NVR/views-explore
Translation: Frigate NVR/views-live
Translation: Frigate NVR/views-settings
Translation: Frigate NVR/views-system
2025-11-25 07:06:47 -07:00
Hosted Weblate
694f72d577 Translated using Weblate (French)
Currently translated at 100.0% (127 of 127 strings)

Translated using Weblate (French)

Currently translated at 100.0% (128 of 128 strings)

Translated using Weblate (French)

Currently translated at 100.0% (125 of 125 strings)

Translated using Weblate (French)

Currently translated at 100.0% (92 of 92 strings)

Translated using Weblate (French)

Currently translated at 100.0% (639 of 639 strings)

Translated using Weblate (French)

Currently translated at 100.0% (127 of 127 strings)

Translated using Weblate (French)

Currently translated at 100.0% (39 of 39 strings)

Translated using Weblate (French)

Currently translated at 100.0% (214 of 214 strings)

Translated using Weblate (French)

Currently translated at 100.0% (116 of 116 strings)

Translated using Weblate (French)

Currently translated at 100.0% (118 of 118 strings)

Translated using Weblate (French)

Currently translated at 100.0% (635 of 635 strings)

Translated using Weblate (French)

Currently translated at 100.0% (113 of 113 strings)

Translated using Weblate (French)

Currently translated at 100.0% (108 of 108 strings)

Translated using Weblate (French)

Currently translated at 100.0% (52 of 52 strings)

Translated using Weblate (French)

Currently translated at 100.0% (209 of 209 strings)

Translated using Weblate (French)

Currently translated at 100.0% (106 of 106 strings)

Translated using Weblate (French)

Currently translated at 97.1% (103 of 106 strings)

Translated using Weblate (French)

Currently translated at 97.1% (103 of 106 strings)

Translated using Weblate (French)

Currently translated at 100.0% (598 of 598 strings)

Translated using Weblate (French)

Currently translated at 100.0% (127 of 127 strings)

Co-authored-by: Anonymous <noreply@weblate.org>
Co-authored-by: Apocoloquintose <bertrand.moreux@gmail.com>
Co-authored-by: Hosted Weblate <hosted@weblate.org>
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/common/fr/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-classificationmodel/fr/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-events/fr/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-explore/fr/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-facelibrary/fr/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-live/fr/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-settings/fr/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-system/fr/
Translation: Frigate NVR/common
Translation: Frigate NVR/views-classificationmodel
Translation: Frigate NVR/views-events
Translation: Frigate NVR/views-explore
Translation: Frigate NVR/views-facelibrary
Translation: Frigate NVR/views-live
Translation: Frigate NVR/views-settings
Translation: Frigate NVR/views-system
2025-11-25 07:06:47 -07:00
Hosted Weblate
1e42cedf9e Translated using Weblate (Spanish)
Currently translated at 90.2% (83 of 92 strings)

Translated using Weblate (Spanish)

Currently translated at 30.1% (35 of 116 strings)

Translated using Weblate (Spanish)

Currently translated at 64.0% (409 of 639 strings)

Translated using Weblate (Spanish)

Currently translated at 100.0% (39 of 39 strings)

Translated using Weblate (Spanish)

Currently translated at 76.3% (97 of 127 strings)

Translated using Weblate (Spanish)

Currently translated at 29.3% (34 of 116 strings)

Translated using Weblate (Spanish)

Currently translated at 24.1% (28 of 116 strings)

Translated using Weblate (Spanish)

Currently translated at 25.4% (27 of 106 strings)

Translated using Weblate (Spanish)

Currently translated at 26.4% (28 of 106 strings)

Translated using Weblate (Spanish)

Currently translated at 76.3% (97 of 127 strings)

Translated using Weblate (Spanish)

Currently translated at 100.0% (39 of 39 strings)

Co-authored-by: Adrian C <adriancuervo@gmail.com>
Co-authored-by: Gerard Ricart Castells <gerard.ricart@gmail.com>
Co-authored-by: Hosted Weblate <hosted@weblate.org>
Co-authored-by: Josep Olivé <josepolive89@gmail.com>
Co-authored-by: Ramòn Rueda <virem1@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-classificationmodel/es/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-events/es/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-explore/es/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-live/es/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-settings/es/
Translation: Frigate NVR/views-classificationmodel
Translation: Frigate NVR/views-events
Translation: Frigate NVR/views-explore
Translation: Frigate NVR/views-live
Translation: Frigate NVR/views-settings
2025-11-25 07:06:47 -07:00
Hosted Weblate
a35a0fc8ba Translated using Weblate (Dutch)
Currently translated at 100.0% (127 of 127 strings)

Translated using Weblate (Dutch)

Currently translated at 100.0% (128 of 128 strings)

Translated using Weblate (Dutch)

Currently translated at 100.0% (116 of 116 strings)

Translated using Weblate (Dutch)

Currently translated at 100.0% (92 of 92 strings)

Translated using Weblate (Dutch)

Currently translated at 100.0% (125 of 125 strings)

Translated using Weblate (Dutch)

Currently translated at 100.0% (639 of 639 strings)

Translated using Weblate (Dutch)

Currently translated at 100.0% (127 of 127 strings)

Translated using Weblate (Dutch)

Currently translated at 100.0% (39 of 39 strings)

Translated using Weblate (Dutch)

Currently translated at 100.0% (214 of 214 strings)

Translated using Weblate (Dutch)

Currently translated at 100.0% (116 of 116 strings)

Translated using Weblate (Dutch)

Currently translated at 100.0% (635 of 635 strings)

Translated using Weblate (Dutch)

Currently translated at 100.0% (113 of 113 strings)

Translated using Weblate (Dutch)

Currently translated at 100.0% (108 of 108 strings)

Translated using Weblate (Dutch)

Currently translated at 100.0% (52 of 52 strings)

Translated using Weblate (Dutch)

Currently translated at 100.0% (209 of 209 strings)

Translated using Weblate (Dutch)

Currently translated at 100.0% (106 of 106 strings)

Translated using Weblate (Dutch)

Currently translated at 100.0% (106 of 106 strings)

Translated using Weblate (Dutch)

Currently translated at 100.0% (598 of 598 strings)

Translated using Weblate (Dutch)

Currently translated at 100.0% (207 of 207 strings)

Translated using Weblate (Dutch)

Currently translated at 97.1% (103 of 106 strings)

Translated using Weblate (Dutch)

Currently translated at 97.1% (103 of 106 strings)

Translated using Weblate (Dutch)

Currently translated at 100.0% (598 of 598 strings)

Translated using Weblate (Dutch)

Currently translated at 100.0% (127 of 127 strings)

Co-authored-by: Anonymous <noreply@weblate.org>
Co-authored-by: Hosted Weblate <hosted@weblate.org>
Co-authored-by: Marijn <168113859+Marijn0@users.noreply.github.com>
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/common/nl/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-classificationmodel/nl/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-events/nl/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-explore/nl/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-facelibrary/nl/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-live/nl/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-settings/nl/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-system/nl/
Translation: Frigate NVR/common
Translation: Frigate NVR/views-classificationmodel
Translation: Frigate NVR/views-events
Translation: Frigate NVR/views-explore
Translation: Frigate NVR/views-facelibrary
Translation: Frigate NVR/views-live
Translation: Frigate NVR/views-settings
Translation: Frigate NVR/views-system
2025-11-25 07:06:47 -07:00
Hosted Weblate
10b7ffe3d1 Translated using Weblate (Italian)
Currently translated at 100.0% (127 of 127 strings)

Translated using Weblate (Italian)

Currently translated at 100.0% (92 of 92 strings)

Translated using Weblate (Italian)

Currently translated at 100.0% (128 of 128 strings)

Translated using Weblate (Italian)

Currently translated at 100.0% (116 of 116 strings)

Translated using Weblate (Italian)

Currently translated at 100.0% (118 of 118 strings)

Translated using Weblate (Italian)

Currently translated at 100.0% (639 of 639 strings)

Translated using Weblate (Italian)

Currently translated at 100.0% (39 of 39 strings)

Translated using Weblate (Italian)

Currently translated at 100.0% (52 of 52 strings)

Translated using Weblate (Italian)

Currently translated at 100.0% (127 of 127 strings)

Translated using Weblate (Italian)

Currently translated at 100.0% (54 of 54 strings)

Translated using Weblate (Italian)

Currently translated at 100.0% (214 of 214 strings)

Translated using Weblate (Italian)

Currently translated at 100.0% (501 of 501 strings)

Co-authored-by: Gringo <ita.translations@tiscali.it>
Co-authored-by: Hosted Weblate <hosted@weblate.org>
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/audio/it/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/common/it/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-dialog/it/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-classificationmodel/it/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-events/it/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-explore/it/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-facelibrary/it/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-live/it/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-settings/it/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-system/it/
Translation: Frigate NVR/audio
Translation: Frigate NVR/common
Translation: Frigate NVR/components-dialog
Translation: Frigate NVR/views-classificationmodel
Translation: Frigate NVR/views-events
Translation: Frigate NVR/views-explore
Translation: Frigate NVR/views-facelibrary
Translation: Frigate NVR/views-live
Translation: Frigate NVR/views-settings
Translation: Frigate NVR/views-system
2025-11-25 07:06:47 -07:00
Hosted Weblate
42c6cfc9a2 Translated using Weblate (Polish)
Currently translated at 63.8% (408 of 639 strings)

Translated using Weblate (Polish)

Currently translated at 30.0% (34 of 113 strings)

Translated using Weblate (Polish)

Currently translated at 75.5% (96 of 127 strings)

Translated using Weblate (Polish)

Currently translated at 27.3% (29 of 106 strings)

Translated using Weblate (Polish)

Currently translated at 68.3% (409 of 598 strings)

Translated using Weblate (Polish)

Currently translated at 100.0% (52 of 52 strings)

Translated using Weblate (Polish)

Currently translated at 98.1% (53 of 54 strings)

Translated using Weblate (Polish)

Currently translated at 100.0% (13 of 13 strings)

Translated using Weblate (Polish)

Currently translated at 100.0% (39 of 39 strings)

Translated using Weblate (Polish)

Currently translated at 74.8% (95 of 127 strings)

Translated using Weblate (Polish)

Currently translated at 100.0% (10 of 10 strings)

Co-authored-by: Hosted Weblate <hosted@weblate.org>
Co-authored-by: Mateusz Paś <piciuok@gmail.com>
Co-authored-by: Wojciech Niziński <niziak-weblate@spox.org>
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-auth/pl/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-dialog/pl/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-classificationmodel/pl/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-events/pl/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-explore/pl/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-exports/pl/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-facelibrary/pl/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-settings/pl/
Translation: Frigate NVR/components-auth
Translation: Frigate NVR/components-dialog
Translation: Frigate NVR/views-classificationmodel
Translation: Frigate NVR/views-events
Translation: Frigate NVR/views-explore
Translation: Frigate NVR/views-exports
Translation: Frigate NVR/views-facelibrary
Translation: Frigate NVR/views-settings
2025-11-25 07:06:47 -07:00
Hosted Weblate
e8bf570d21 Translated using Weblate (Hungarian)
Currently translated at 7.7% (9 of 116 strings)

Co-authored-by: Hosted Weblate <hosted@weblate.org>
Co-authored-by: ugfus1630 <katona.ta@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-classificationmodel/hu/
Translation: Frigate NVR/views-classificationmodel
2025-11-25 07:06:47 -07:00
Hosted Weblate
cdbd9038b8 Translated using Weblate (Croatian)
Currently translated at 33.3% (2 of 6 strings)

Translated using Weblate (Croatian)

Currently translated at 21.1% (11 of 52 strings)

Translated using Weblate (Croatian)

Currently translated at 2.7% (2 of 72 strings)

Co-authored-by: Hosted Weblate <hosted@weblate.org>
Co-authored-by: Josip <josipmiki54@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-filter/hr/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-facelibrary/hr/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-recording/hr/
Translation: Frigate NVR/components-filter
Translation: Frigate NVR/views-facelibrary
Translation: Frigate NVR/views-recording
2025-11-25 07:06:47 -07:00
Hosted Weblate
1e05abb0ea Translated using Weblate (Czech)
Currently translated at 14.6% (17 of 116 strings)

Translated using Weblate (Czech)

Currently translated at 13.7% (16 of 116 strings)

Translated using Weblate (Czech)

Currently translated at 100.0% (13 of 13 strings)

Translated using Weblate (Czech)

Currently translated at 63.0% (403 of 639 strings)

Translated using Weblate (Czech)

Currently translated at 76.9% (30 of 39 strings)

Translated using Weblate (Czech)

Currently translated at 4.7% (5 of 106 strings)

Co-authored-by: Hosted Weblate <hosted@weblate.org>
Co-authored-by: Jakub Sojka <sojkubu@seznam.cz>
Co-authored-by: Martin Janda <janda@chilliit.cz>
Co-authored-by: Michal K <michal@totaljs.com>
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-classificationmodel/cs/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-events/cs/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-exports/cs/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-settings/cs/
Translation: Frigate NVR/views-classificationmodel
Translation: Frigate NVR/views-events
Translation: Frigate NVR/views-exports
Translation: Frigate NVR/views-settings
2025-11-25 07:06:47 -07:00
Hosted Weblate
70d1c2e041 Translated using Weblate (Catalan)
Currently translated at 100.0% (127 of 127 strings)

Translated using Weblate (Catalan)

Currently translated at 100.0% (128 of 128 strings)

Translated using Weblate (Catalan)

Currently translated at 100.0% (125 of 125 strings)

Translated using Weblate (Catalan)

Currently translated at 100.0% (639 of 639 strings)

Translated using Weblate (Catalan)

Currently translated at 100.0% (92 of 92 strings)

Translated using Weblate (Catalan)

Currently translated at 100.0% (639 of 639 strings)

Translated using Weblate (Catalan)

Currently translated at 100.0% (127 of 127 strings)

Translated using Weblate (Catalan)

Currently translated at 100.0% (39 of 39 strings)

Translated using Weblate (Catalan)

Currently translated at 100.0% (214 of 214 strings)

Translated using Weblate (Catalan)

Currently translated at 100.0% (116 of 116 strings)

Translated using Weblate (Catalan)

Currently translated at 100.0% (113 of 113 strings)

Translated using Weblate (Catalan)

Currently translated at 100.0% (108 of 108 strings)

Translated using Weblate (Catalan)

Currently translated at 100.0% (52 of 52 strings)

Translated using Weblate (Catalan)

Currently translated at 100.0% (209 of 209 strings)

Translated using Weblate (Catalan)

Currently translated at 100.0% (106 of 106 strings)

Translated using Weblate (Catalan)

Currently translated at 100.0% (598 of 598 strings)

Translated using Weblate (Catalan)

Currently translated at 97.1% (103 of 106 strings)

Translated using Weblate (Catalan)

Currently translated at 100.0% (106 of 106 strings)

Translated using Weblate (Catalan)

Currently translated at 100.0% (598 of 598 strings)

Translated using Weblate (Catalan)

Currently translated at 100.0% (127 of 127 strings)

Co-authored-by: Eduardo Pastor Fernández <123eduardoneko123@gmail.com>
Co-authored-by: Gerard Ricart Castells <gerard.ricart@gmail.com>
Co-authored-by: Hosted Weblate <hosted@weblate.org>
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/common/ca/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-classificationmodel/ca/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-events/ca/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-explore/ca/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-facelibrary/ca/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-live/ca/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-settings/ca/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-system/ca/
Translation: Frigate NVR/common
Translation: Frigate NVR/views-classificationmodel
Translation: Frigate NVR/views-events
Translation: Frigate NVR/views-explore
Translation: Frigate NVR/views-facelibrary
Translation: Frigate NVR/views-live
Translation: Frigate NVR/views-settings
Translation: Frigate NVR/views-system
2025-11-25 07:06:47 -07:00
Hosted Weblate
f4d128b3ee Translated using Weblate (Ukrainian)
Currently translated at 100.0% (127 of 127 strings)

Translated using Weblate (Ukrainian)

Currently translated at 100.0% (128 of 128 strings)

Translated using Weblate (Ukrainian)

Currently translated at 100.0% (125 of 125 strings)

Translated using Weblate (Ukrainian)

Currently translated at 100.0% (92 of 92 strings)

Translated using Weblate (Ukrainian)

Currently translated at 100.0% (639 of 639 strings)

Translated using Weblate (Ukrainian)

Currently translated at 100.0% (127 of 127 strings)

Translated using Weblate (Ukrainian)

Currently translated at 100.0% (39 of 39 strings)

Translated using Weblate (Ukrainian)

Currently translated at 100.0% (214 of 214 strings)

Translated using Weblate (Ukrainian)

Currently translated at 100.0% (214 of 214 strings)

Translated using Weblate (Ukrainian)

Currently translated at 100.0% (116 of 116 strings)

Translated using Weblate (Ukrainian)

Currently translated at 100.0% (635 of 635 strings)

Translated using Weblate (Ukrainian)

Currently translated at 100.0% (113 of 113 strings)

Translated using Weblate (Ukrainian)

Currently translated at 100.0% (108 of 108 strings)

Translated using Weblate (Ukrainian)

Currently translated at 100.0% (52 of 52 strings)

Translated using Weblate (Ukrainian)

Currently translated at 100.0% (209 of 209 strings)

Translated using Weblate (Ukrainian)

Currently translated at 100.0% (106 of 106 strings)

Translated using Weblate (Ukrainian)

Currently translated at 97.1% (103 of 106 strings)

Translated using Weblate (Ukrainian)

Currently translated at 100.0% (598 of 598 strings)

Translated using Weblate (Ukrainian)

Currently translated at 100.0% (127 of 127 strings)

Co-authored-by: Alex Taran <oleksii.taran@gmail.com>
Co-authored-by: Hosted Weblate <hosted@weblate.org>
Co-authored-by: Максим Горпиніч <gorpinicmaksim0@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/common/uk/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-classificationmodel/uk/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-events/uk/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-explore/uk/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-facelibrary/uk/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-live/uk/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-settings/uk/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-system/uk/
Translation: Frigate NVR/common
Translation: Frigate NVR/views-classificationmodel
Translation: Frigate NVR/views-events
Translation: Frigate NVR/views-explore
Translation: Frigate NVR/views-facelibrary
Translation: Frigate NVR/views-live
Translation: Frigate NVR/views-settings
Translation: Frigate NVR/views-system
2025-11-25 07:06:47 -07:00
Hosted Weblate
dd64ffca6c Translated using Weblate (Bulgarian)
Currently translated at 31.1% (28 of 90 strings)

Translated using Weblate (Bulgarian)

Currently translated at 7.6% (1 of 13 strings)

Translated using Weblate (Bulgarian)

Currently translated at 50.0% (1 of 2 strings)

Translated using Weblate (Bulgarian)

Currently translated at 50.0% (1 of 2 strings)

Translated using Weblate (Bulgarian)

Currently translated at 7.4% (4 of 54 strings)

Translated using Weblate (Bulgarian)

Currently translated at 10.0% (1 of 10 strings)

Translated using Weblate (Bulgarian)

Currently translated at 17.7% (21 of 118 strings)

Co-authored-by: Borislav <sartheris@gmail.com>
Co-authored-by: Hosted Weblate <hosted@weblate.org>
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-auth/bg/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-dialog/bg/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-icons/bg/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-input/bg/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/objects/bg/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-exports/bg/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-live/bg/
Translation: Frigate NVR/components-auth
Translation: Frigate NVR/components-dialog
Translation: Frigate NVR/components-icons
Translation: Frigate NVR/components-input
Translation: Frigate NVR/objects
Translation: Frigate NVR/views-exports
Translation: Frigate NVR/views-live
2025-11-25 07:06:47 -07:00
Hosted Weblate
fce1f78bdc Translated using Weblate (Romanian)
Currently translated at 100.0% (639 of 639 strings)

Translated using Weblate (Romanian)

Currently translated at 100.0% (127 of 127 strings)

Translated using Weblate (Romanian)

Currently translated at 100.0% (128 of 128 strings)

Translated using Weblate (Romanian)

Currently translated at 100.0% (125 of 125 strings)

Translated using Weblate (Romanian)

Currently translated at 100.0% (92 of 92 strings)

Translated using Weblate (Romanian)

Currently translated at 100.0% (118 of 118 strings)

Translated using Weblate (Romanian)

Currently translated at 100.0% (639 of 639 strings)

Translated using Weblate (Romanian)

Currently translated at 100.0% (127 of 127 strings)

Translated using Weblate (Romanian)

Currently translated at 100.0% (39 of 39 strings)

Translated using Weblate (Romanian)

Currently translated at 100.0% (214 of 214 strings)

Translated using Weblate (Romanian)

Currently translated at 100.0% (116 of 116 strings)

Translated using Weblate (Romanian)

Currently translated at 100.0% (113 of 113 strings)

Translated using Weblate (Romanian)

Currently translated at 100.0% (108 of 108 strings)

Translated using Weblate (Romanian)

Currently translated at 100.0% (52 of 52 strings)

Translated using Weblate (Romanian)

Currently translated at 100.0% (106 of 106 strings)

Translated using Weblate (Romanian)

Currently translated at 100.0% (598 of 598 strings)

Translated using Weblate (Romanian)

Currently translated at 100.0% (127 of 127 strings)

Translated using Weblate (Romanian)

Currently translated at 100.0% (209 of 209 strings)

Translated using Weblate (Romanian)

Currently translated at 97.1% (103 of 106 strings)

Co-authored-by: Hosted Weblate <hosted@weblate.org>
Co-authored-by: lukasig <lukasig@hotmail.com>
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/common/ro/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-classificationmodel/ro/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-events/ro/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-explore/ro/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-facelibrary/ro/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-live/ro/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-settings/ro/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-system/ro/
Translation: Frigate NVR/common
Translation: Frigate NVR/views-classificationmodel
Translation: Frigate NVR/views-events
Translation: Frigate NVR/views-explore
Translation: Frigate NVR/views-facelibrary
Translation: Frigate NVR/views-live
Translation: Frigate NVR/views-settings
Translation: Frigate NVR/views-system
2025-11-25 07:06:47 -07:00
Hosted Weblate
69ca63d608 Translated using Weblate (Russian)
Currently translated at 68.7% (439 of 639 strings)

Translated using Weblate (Russian)

Currently translated at 98.5% (211 of 214 strings)

Translated using Weblate (Russian)

Currently translated at 95.5% (108 of 113 strings)

Translated using Weblate (Russian)

Currently translated at 100.0% (52 of 52 strings)

Translated using Weblate (Russian)

Currently translated at 100.0% (209 of 209 strings)

Translated using Weblate (Russian)

Currently translated at 100.0% (501 of 501 strings)

Translated using Weblate (Russian)

Currently translated at 100.0% (108 of 108 strings)

Translated using Weblate (Russian)

Currently translated at 100.0% (54 of 54 strings)

Translated using Weblate (Russian)

Currently translated at 100.0% (106 of 106 strings)

Translated using Weblate (Russian)

Currently translated at 78.0% (467 of 598 strings)

Translated using Weblate (Russian)

Currently translated at 100.0% (52 of 52 strings)

Translated using Weblate (Russian)

Currently translated at 100.0% (127 of 127 strings)

Translated using Weblate (Russian)

Currently translated at 100.0% (39 of 39 strings)

Translated using Weblate (Russian)

Currently translated at 73.9% (442 of 598 strings)

Translated using Weblate (Russian)

Currently translated at 95.5% (86 of 90 strings)

Translated using Weblate (Russian)

Currently translated at 98.0% (51 of 52 strings)

Translated using Weblate (Russian)

Currently translated at 71.6% (91 of 127 strings)

Translated using Weblate (Russian)

Currently translated at 86.4% (433 of 501 strings)

Co-authored-by: Hosted Weblate <hosted@weblate.org>
Co-authored-by: Артём Владимиров <artyomka71@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/audio/ru/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/common/ru/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-dialog/ru/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-classificationmodel/ru/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-events/ru/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-explore/ru/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-facelibrary/ru/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-live/ru/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-settings/ru/
Translation: Frigate NVR/audio
Translation: Frigate NVR/common
Translation: Frigate NVR/components-dialog
Translation: Frigate NVR/views-classificationmodel
Translation: Frigate NVR/views-events
Translation: Frigate NVR/views-explore
Translation: Frigate NVR/views-facelibrary
Translation: Frigate NVR/views-live
Translation: Frigate NVR/views-settings
2025-11-25 07:06:47 -07:00
Hosted Weblate
111b83e8e3 Translated using Weblate (Greek)
Currently translated at 100.0% (10 of 10 strings)

Co-authored-by: Christos Sidiropoulos <dev@csidirop.de>
Co-authored-by: Hosted Weblate <hosted@weblate.org>
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-auth/el/
Translation: Frigate NVR/components-auth
2025-11-25 07:06:47 -07:00
Hosted Weblate
198733b729 Translated using Weblate (Danish)
Currently translated at 48.3% (57 of 118 strings)

Translated using Weblate (Danish)

Currently translated at 16.6% (9 of 54 strings)

Translated using Weblate (Danish)

Currently translated at 7.7% (9 of 116 strings)

Translated using Weblate (Danish)

Currently translated at 6.7% (8 of 118 strings)

Translated using Weblate (Danish)

Currently translated at 1.4% (9 of 639 strings)

Translated using Weblate (Danish)

Currently translated at 16.6% (8 of 48 strings)

Translated using Weblate (Danish)

Currently translated at 100.0% (6 of 6 strings)

Translated using Weblate (Danish)

Currently translated at 10.0% (9 of 90 strings)

Translated using Weblate (Danish)

Currently translated at 17.3% (9 of 52 strings)

Translated using Weblate (Danish)

Currently translated at 61.5% (8 of 13 strings)

Translated using Weblate (Danish)

Currently translated at 9.4% (12 of 127 strings)

Translated using Weblate (Danish)

Currently translated at 25.6% (10 of 39 strings)

Translated using Weblate (Danish)

Currently translated at 80.0% (8 of 10 strings)

Translated using Weblate (Danish)

Currently translated at 36.0% (9 of 25 strings)

Translated using Weblate (Danish)

Currently translated at 100.0% (2 of 2 strings)

Translated using Weblate (Danish)

Currently translated at 12.5% (9 of 72 strings)

Translated using Weblate (Danish)

Currently translated at 14.8% (8 of 54 strings)

Translated using Weblate (Danish)

Currently translated at 19.5% (9 of 46 strings)

Translated using Weblate (Danish)

Currently translated at 90.0% (9 of 10 strings)

Translated using Weblate (Danish)

Currently translated at 16.9% (85 of 501 strings)

Co-authored-by: Hosted Weblate <hosted@weblate.org>
Co-authored-by: demention666 <anders+GITHUB@familien-harder.dk>
Co-authored-by: dinf60 <dinf60@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/audio/da/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-auth/da/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-camera/da/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-dialog/da/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-filter/da/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-input/da/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-player/da/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-classificationmodel/da/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-configeditor/da/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-events/da/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-explore/da/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-exports/da/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-facelibrary/da/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-live/da/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-recording/da/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-search/da/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-settings/da/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-system/da/
Translation: Frigate NVR/audio
Translation: Frigate NVR/components-auth
Translation: Frigate NVR/components-camera
Translation: Frigate NVR/components-dialog
Translation: Frigate NVR/components-filter
Translation: Frigate NVR/components-input
Translation: Frigate NVR/components-player
Translation: Frigate NVR/views-classificationmodel
Translation: Frigate NVR/views-configeditor
Translation: Frigate NVR/views-events
Translation: Frigate NVR/views-explore
Translation: Frigate NVR/views-exports
Translation: Frigate NVR/views-facelibrary
Translation: Frigate NVR/views-live
Translation: Frigate NVR/views-recording
Translation: Frigate NVR/views-search
Translation: Frigate NVR/views-settings
Translation: Frigate NVR/views-system
2025-11-25 07:06:47 -07:00
Hosted Weblate
03d9fd6f19 Translated using Weblate (German)
Currently translated at 21.5% (25 of 116 strings)

Translated using Weblate (German)

Currently translated at 92.3% (36 of 39 strings)

Translated using Weblate (German)

Currently translated at 93.7% (119 of 127 strings)

Translated using Weblate (German)

Currently translated at 19.8% (23 of 116 strings)

Translated using Weblate (German)

Currently translated at 89.7% (35 of 39 strings)

Translated using Weblate (German)

Currently translated at 100.0% (13 of 13 strings)

Translated using Weblate (German)

Currently translated at 77.7% (497 of 639 strings)

Translated using Weblate (German)

Currently translated at 98.0% (51 of 52 strings)

Translated using Weblate (German)

Currently translated at 18.1% (21 of 116 strings)

Translated using Weblate (German)

Currently translated at 84.6% (33 of 39 strings)

Translated using Weblate (German)

Currently translated at 6.0% (7 of 116 strings)

Translated using Weblate (German)

Currently translated at 92.3% (48 of 52 strings)

Translated using Weblate (German)

Currently translated at 93.7% (119 of 127 strings)

Translated using Weblate (German)

Currently translated at 71.6% (91 of 127 strings)

Translated using Weblate (German)

Currently translated at 100.0% (54 of 54 strings)

Translated using Weblate (Dutch)

Currently translated at 100.0% (635 of 635 strings)

Translated using Weblate (German)

Currently translated at 100.0% (209 of 209 strings)

Translated using Weblate (German)

Currently translated at 88.4% (443 of 501 strings)

Co-authored-by: Christos Sidiropoulos <dev@csidirop.de>
Co-authored-by: Fuxle <moritz.hofmann2005@gmail.com>
Co-authored-by: Hosted Weblate <hosted@weblate.org>
Co-authored-by: Marijn <168113859+Marijn0@users.noreply.github.com>
Co-authored-by: mvdberge <micha.vordemberge@christmann.info>
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/audio/de/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/common/de/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-dialog/de/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-classificationmodel/de/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-events/de/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-explore/de/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-exports/de/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-facelibrary/de/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-settings/de/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-settings/nl/
Translation: Frigate NVR/audio
Translation: Frigate NVR/common
Translation: Frigate NVR/components-dialog
Translation: Frigate NVR/views-classificationmodel
Translation: Frigate NVR/views-events
Translation: Frigate NVR/views-explore
Translation: Frigate NVR/views-exports
Translation: Frigate NVR/views-facelibrary
Translation: Frigate NVR/views-settings
2025-11-25 07:06:47 -07:00
Hosted Weblate
f90a54f1d9 Translated using Weblate (Portuguese (Brazil))
Currently translated at 96.7% (89 of 92 strings)

Translated using Weblate (Portuguese (Brazil))

Currently translated at 24.1% (28 of 116 strings)

Translated using Weblate (Portuguese (Brazil))

Currently translated at 68.7% (439 of 639 strings)

Translated using Weblate (Portuguese (Brazil))

Currently translated at 97.4% (38 of 39 strings)

Co-authored-by: Felipe Santos <felipecassiors@gmail.com>
Co-authored-by: Hosted Weblate <hosted@weblate.org>
Co-authored-by: Marcelo Popper Costa <marcelo_popper@hotmail.com>
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-classificationmodel/pt_BR/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-events/pt_BR/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-live/pt_BR/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-settings/pt_BR/
Translation: Frigate NVR/views-classificationmodel
Translation: Frigate NVR/views-events
Translation: Frigate NVR/views-live
Translation: Frigate NVR/views-settings
2025-11-25 07:06:47 -07:00
Hosted Weblate
bbec4c4a60 Translated using Weblate (Lithuanian)
Currently translated at 30.1% (32 of 106 strings)

Co-authored-by: Hosted Weblate <hosted@weblate.org>
Co-authored-by: MaBeniu <runnerm@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-classificationmodel/lt/
Translation: Frigate NVR/views-classificationmodel
2025-11-25 07:06:47 -07:00
Hosted Weblate
9fe16d7b17 Added translation using Weblate (Latvian)
Added translation using Weblate (Latvian)

Added translation using Weblate (Latvian)

Added translation using Weblate (Latvian)

Added translation using Weblate (Latvian)

Added translation using Weblate (Latvian)

Added translation using Weblate (Latvian)

Added translation using Weblate (Latvian)

Added translation using Weblate (Latvian)

Added translation using Weblate (Latvian)

Added translation using Weblate (Latvian)

Update translation files

Updated by "Squash Git commits" add-on in Weblate.

Co-authored-by: Hosted Weblate <hosted@weblate.org>
Co-authored-by: Languages add-on <noreply-addon-languages@weblate.org>
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/common/
Translation: Frigate NVR/common
2025-11-25 07:06:47 -07:00
Hosted Weblate
dc886b11f3 Translated using Weblate (Turkish)
Currently translated at 35.8% (38 of 106 strings)

Co-authored-by: Emircanos <emircan368@gmail.com>
Co-authored-by: Hosted Weblate <hosted@weblate.org>
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-classificationmodel/tr/
Translation: Frigate NVR/views-classificationmodel
2025-11-25 07:06:47 -07:00
Josh Hawkins
3bbe24f5f8 Miscellaneous Fixes (#21033)
* catch failed image embedding in triggers

* move scrollbar to edge on platform aware dialog drawers

* add i18n key

* show negotiated mse codecs in console on error

* try changing rocm

* Improve toast consistency

* add attribute area and score to detail stream tooltip

---------

Co-authored-by: Nicolas Mowen <nickmowen213@gmail.com>
2025-11-25 06:34:20 -07:00
508 changed files with 15918 additions and 5101 deletions


@@ -22,6 +22,7 @@ autotrack
autotracked
autotracker
autotracking
backchannel
balena
Beelink
BGRA
@@ -191,6 +192,7 @@ ONVIF
openai
opencv
openvino
overfitting
OWASP
paddleocr
paho
@@ -315,4 +317,4 @@ yolo
yolonas
yolox
zeep
zerolatency
zerolatency


@@ -0,0 +1,129 @@
title: "[Beta Support]: "
labels: ["support", "triage", "beta"]
body:
- type: markdown
attributes:
value: |
Thank you for testing Frigate beta versions! Use this form for support with beta releases.
**Note:** Beta versions may have incomplete features, known issues, or unexpected behavior. Please check the [release notes](https://github.com/blakeblackshear/frigate/releases) and [recent discussions][discussions] for known beta issues before submitting.
Before submitting, read the [beta documentation][docs].
[docs]: https://deploy-preview-19787--frigate-docs.netlify.app/
- type: textarea
id: description
attributes:
label: Describe the problem you are having
description: Please be as detailed as possible. Include what you expected to happen vs what actually happened.
validations:
required: true
- type: input
id: version
attributes:
label: Beta Version
description: Visible on the System page in the Web UI. Please include the full version including the build identifier (e.g., 0.17.0-beta1)
placeholder: "0.17.0-beta1"
validations:
required: true
- type: dropdown
id: issue-category
attributes:
label: Issue Category
description: What area is your issue related to? This helps us understand the context.
options:
- Object Detection / Detectors
- Hardware Acceleration
- Configuration / Setup
- WebUI / Frontend
- Recordings / Storage
- Notifications / Events
- Integration (Home Assistant, etc)
- Performance / Stability
- Installation / Updates
- Other
validations:
required: true
- type: textarea
id: config
attributes:
label: Frigate config file
description: This will be automatically formatted into code, so no need for backticks. Remove any sensitive information like passwords or URLs.
render: yaml
validations:
required: true
- type: textarea
id: frigatelogs
attributes:
label: Relevant Frigate log output
description: Please copy and paste any relevant Frigate log output. Include logs before and after your exact error when possible. This will be automatically formatted into code, so no need for backticks.
render: shell
validations:
required: true
- type: textarea
id: go2rtclogs
attributes:
label: Relevant go2rtc log output (if applicable)
description: If your issue involves cameras, streams, or playback, please include go2rtc logs. Logs can be viewed via the Frigate UI, Docker, or the go2rtc dashboard. This will be automatically formatted into code, so no need for backticks.
render: shell
- type: dropdown
id: install-method
attributes:
label: Install method
options:
- Home Assistant Add-on
- Docker Compose
- Docker CLI
- Proxmox via Docker
- Proxmox via TTeck Script
- Windows WSL2
validations:
required: true
- type: textarea
id: docker
attributes:
label: docker-compose file or Docker CLI command
description: This will be automatically formatted into code, so no need for backticks. Include relevant environment variables and device mappings.
render: yaml
validations:
required: true
- type: dropdown
id: os
attributes:
label: Operating system
options:
- Home Assistant OS
- Debian
- Ubuntu
- Other Linux
- Proxmox
- UNRAID
- Windows
- Other
validations:
required: true
- type: input
id: hardware
attributes:
label: CPU / GPU / Hardware
description: Provide details about your hardware (e.g., Intel i5-9400, NVIDIA RTX 3060, Raspberry Pi 4, etc)
placeholder: "Intel i7-10700, NVIDIA GTX 1660"
- type: textarea
id: screenshots
attributes:
label: Screenshots
description: Screenshots of the issue, System metrics pages, or any relevant UI. Drag and drop or paste images directly.
- type: textarea
id: steps-to-reproduce
attributes:
label: Steps to reproduce
description: If applicable, provide detailed steps to reproduce the issue
placeholder: |
1. Go to '...'
2. Click on '...'
3. See error
- type: textarea
id: other
attributes:
label: Any other information that may be helpful
description: Additional context, related issues, when the problem started appearing, etc.


@@ -6,6 +6,8 @@ body:
value: |
Use this form to submit a reproducible bug in Frigate or Frigate's UI.
**⚠️ If you are running a beta version (0.17.0-beta or similar), please use the [Beta Support template](https://github.com/blakeblackshear/frigate/discussions/new?category=beta-support) instead.**
Before submitting your bug report, please ask the AI with the "Ask AI" button on the [official documentation site][ai] about your issue, [search the discussions][discussions], look at recent open and closed [pull requests][prs], read the [official Frigate documentation][docs], and read the [Frigate FAQ][faq] pinned at the Discussion page to see if your bug has already been fixed by the developers or reported by the community.
**If you are unsure if your issue is actually a bug or not, please submit a support request first.**

2 .github/copilot-instructions.md vendored Normal file

@@ -0,0 +1,2 @@
Never write strings in the frontend directly, always write to and reference the relevant translations file.
Always conform new and refactored code to the existing coding style in the project.


@@ -15,7 +15,7 @@ concurrency:
cancel-in-progress: true
env:
PYTHON_VERSION: 3.9
PYTHON_VERSION: 3.11
jobs:
amd64_build:
@@ -23,7 +23,7 @@ jobs:
name: AMD64 Build
steps:
- name: Check out code
uses: actions/checkout@v5
uses: actions/checkout@v6
with:
persist-credentials: false
- name: Set up QEMU and Buildx
@@ -47,7 +47,7 @@ jobs:
name: ARM Build
steps:
- name: Check out code
uses: actions/checkout@v5
uses: actions/checkout@v6
with:
persist-credentials: false
- name: Set up QEMU and Buildx
@@ -82,7 +82,7 @@ jobs:
name: Jetson Jetpack 6
steps:
- name: Check out code
uses: actions/checkout@v5
uses: actions/checkout@v6
with:
persist-credentials: false
- name: Set up QEMU and Buildx
@@ -113,7 +113,7 @@ jobs:
- amd64_build
steps:
- name: Check out code
uses: actions/checkout@v5
uses: actions/checkout@v6
with:
persist-credentials: false
- name: Set up QEMU and Buildx
@@ -136,7 +136,6 @@ jobs:
*.cache-to=type=registry,ref=${{ steps.setup.outputs.cache-name }}-tensorrt,mode=max
- name: AMD/ROCm general build
env:
AMDGPU: gfx
HSA_OVERRIDE: 0
uses: docker/bake-action@v6
with:
@@ -155,7 +154,7 @@ jobs:
- arm64_build
steps:
- name: Check out code
uses: actions/checkout@v5
uses: actions/checkout@v6
with:
persist-credentials: false
- name: Set up QEMU and Buildx
@@ -180,7 +179,7 @@ jobs:
- arm64_build
steps:
- name: Check out code
uses: actions/checkout@v5
uses: actions/checkout@v6
with:
persist-credentials: false
- name: Set up QEMU and Buildx


@@ -16,12 +16,12 @@ jobs:
name: Web - Lint
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v5
- uses: actions/checkout@v6
with:
persist-credentials: false
- uses: actions/setup-node@master
- uses: actions/setup-node@v6
with:
node-version: 16.x
node-version: 20.x
- run: npm install
working-directory: ./web
- name: Lint
@@ -32,10 +32,10 @@ jobs:
name: Web - Test
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v5
- uses: actions/checkout@v6
with:
persist-credentials: false
- uses: actions/setup-node@master
- uses: actions/setup-node@v6
with:
node-version: 20.x
- run: npm install
@@ -52,7 +52,7 @@ jobs:
name: Python Checks
steps:
- name: Check out the repository
uses: actions/checkout@v5
uses: actions/checkout@v6
with:
persist-credentials: false
- name: Set up Python ${{ env.DEFAULT_PYTHON }}
@@ -75,10 +75,10 @@ jobs:
name: Python Tests
steps:
- name: Check out code
uses: actions/checkout@v5
uses: actions/checkout@v6
with:
persist-credentials: false
- uses: actions/setup-node@master
- uses: actions/setup-node@v6
with:
node-version: 20.x
- name: Install devcontainer cli


@@ -10,7 +10,7 @@ jobs:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v5
- uses: actions/checkout@v6
with:
persist-credentials: false
- id: lowercaseRepo


@@ -1,28 +1,31 @@
<p align="center">
<img align="center" alt="logo" src="docs/static/img/frigate.png">
<img align="center" alt="logo" src="docs/static/img/branding/frigate.png">
</p>
# Frigate - 一个具有实时目标检测的本地NVR
[English](https://github.com/blakeblackshear/frigate) | \[简体中文\]
# Frigate NVR™ - 一个具有实时目标检测的本地 NVR
<a href="https://hosted.weblate.org/engage/frigate-nvr/-/zh_Hans/">
<img src="https://hosted.weblate.org/widget/frigate-nvr/-/zh_Hans/svg-badge.svg" alt="翻译状态" />
</a>
一个完整的本地网络视频录像机NVR专为[Home Assistant](https://www.home-assistant.io)设计具备AI物体检测功能。使用OpenCV和TensorFlow在本地为IP摄像头执行实时物体检测。
[English](https://github.com/blakeblackshear/frigate) | \[简体中文\]
强烈推荐使用GPU或者AI加速器例如[Google Coral加速器](https://coral.ai/products/) 或者 [Hailo](https://hailo.ai/)。它们的性能甚至超过目前的顶级CPU并且可以以极低的耗电实现更优的性能。
- 通过[自定义组件](https://github.com/blakeblackshear/frigate-hass-integration)与Home Assistant紧密集成
- 设计上通过仅在必要时和必要地点寻找物体,最大限度地减少资源使用并最大化性能
[![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)
一个完整的本地网络视频录像机NVR专为[Home Assistant](https://www.home-assistant.io)设计,具备 AI 目标/物体检测功能。使用 OpenCV 和 TensorFlow 在本地为 IP 摄像头执行实时物体检测。
强烈推荐使用 GPU 或者 AI 加速器(例如[Google Coral 加速器](https://coral.ai/products/) 或者 [Hailo](https://hailo.ai/)等)。它们的运行效率远远高于现在的顶级 CPU并且功耗也极低。
- 通过[自定义组件](https://github.com/blakeblackshear/frigate-hass-integration)与 Home Assistant 紧密集成
- 设计上通过仅在必要时和必要地点寻找目标,最大限度地减少资源使用并最大化性能
- 大量利用多进程处理,强调实时性而非处理每一帧
- 使用非常低开销的运动检测来确定运行物体检测的位置
- 使用TensorFlow进行物体检测运行在单独的进程中以达到最大FPS
- 通过MQTT进行通信便于集成到其他系统中
- 使用非常低开销的画面变动检测(也叫运动检测来确定运行目标检测的位置
- 使用 TensorFlow 进行目标检测,运行在单独的进程中以达到最大 FPS
- 通过 MQTT 进行通信,便于集成到其他系统中
- 根据检测到的物体设置保留时间进行视频录制
- 24/7全天候录制
- 通过RTSP重新流传输以减少摄像头的连接数
- 支持WebRTCMSE实现低延迟的实时观看
- 24/7 全天候录制
- 通过 RTSP 重新流传输以减少摄像头的连接数
- 支持 WebRTCMSE实现低延迟的实时观看
## 社区中文翻译文档
@@ -32,39 +35,56 @@
如果您想通过捐赠支持开发,请使用 [Github Sponsors](https://github.com/sponsors/blakeblackshear)。
## 协议
本项目采用 **MIT 许可证**授权。
**代码部分**:本代码库中的源代码、配置文件和文档均遵循 [MIT 许可证](LICENSE)。您可以自由使用、修改和分发这些代码,但必须保留原始版权声明。
**商标部分**“Frigate”名称、“Frigate NVR”品牌以及 Frigate 的 Logo 为 **Frigate LLC 的商标****不在** MIT 许可证覆盖范围内。
有关品牌资产的规范使用详情,请参阅我们的[《商标政策》](TRADEMARK.md)。
## 截图
### 实时监控面板
<div>
<img width="800" alt="实时监控面板" src="https://github.com/blakeblackshear/frigate/assets/569905/5e713cb9-9db5-41dc-947a-6937c3bc376e">
</div>
### 简单的核查工作流程
<div>
<img width="800" alt="简单的审查工作流程" src="https://github.com/blakeblackshear/frigate/assets/569905/6fed96e8-3b18-40e5-9ddc-31e6f3c9f2ff">
</div>
### 多摄像头可按时间轴查看
<div>
<img width="800" alt="多摄像头可按时间轴查看" src="https://github.com/blakeblackshear/frigate/assets/569905/d6788a15-0eeb-4427-a8d4-80b93cae3d74">
</div>
### 内置遮罩和区域编辑器
<div>
<img width="800" alt="内置遮罩和区域编辑器" src="https://github.com/blakeblackshear/frigate/assets/569905/d7885fc3-bfe6-452f-b7d0-d957cb3e31f5">
</div>
## 翻译
我们使用 [Weblate](https://hosted.weblate.org/projects/frigate-nvr/) 平台提供翻译支持,欢迎参与进来一起完善。
## 非官方中文讨论社区
欢迎加入中文讨论QQ群[1043861059](https://qm.qq.com/q/7vQKsTmSz)
欢迎加入中文讨论 QQ 群:[1043861059](https://qm.qq.com/q/7vQKsTmSz)
Bilibilihttps://space.bilibili.com/3546894915602564
## 中文社区赞助商
[![EdgeOne](https://edgeone.ai/media/34fe3a45-492d-4ea4-ae5d-ea1087ca7b4b.png)](https://edgeone.ai/zh?from=github)
本项目 CDN 加速及安全防护由 Tencent EdgeOne 赞助
---
**Copyright © 2025 Frigate LLC.**


@@ -237,8 +237,18 @@ ENV PYTHONWARNINGS="ignore:::numpy.core.getlimits"
# Set HailoRT to disable logging
ENV HAILORT_LOGGER_PATH=NONE
# TensorFlow error only
# TensorFlow C++ logging suppression (must be set before import)
# TF_CPP_MIN_LOG_LEVEL: 0=all, 1=INFO+, 2=WARNING+, 3=ERROR+ (we use 3 for errors only)
ENV TF_CPP_MIN_LOG_LEVEL=3
# Suppress verbose logging from TensorFlow C++ code
ENV TF_CPP_MIN_VLOG_LEVEL=3
# Disable oneDNN optimization messages ("optimized with oneDNN...")
ENV TF_ENABLE_ONEDNN_OPTS=0
# Suppress AutoGraph verbosity during conversion
ENV AUTOGRAPH_VERBOSITY=0
# Google Logging (GLOG) suppression for TensorFlow components
ENV GLOG_minloglevel=3
ENV GLOG_logtostderr=0
ENV PATH="/usr/local/go2rtc/bin:/usr/local/tempio/bin:/usr/local/nginx/sbin:${PATH}"


@@ -21,7 +21,7 @@ onvif-zeep-async == 4.0.*
paho-mqtt == 2.1.*
pandas == 2.2.*
peewee == 3.17.*
peewee_migrate == 1.13.*
peewee_migrate == 1.14.*
psutil == 7.1.*
pydantic == 2.10.*
git+https://github.com/fbcotter/py3nvml#egg=py3nvml
@@ -81,3 +81,5 @@ librosa==0.11.*
soundfile==0.13.*
# DeGirum detector
degirum == 0.16.*
# Memory profiling
memray == 1.15.*


@@ -3,7 +3,6 @@
# https://askubuntu.com/questions/972516/debian-frontend-environment-variable
ARG DEBIAN_FRONTEND=noninteractive
ARG ROCM=1
ARG AMDGPU=gfx900
ARG HSA_OVERRIDE_GFX_VERSION
ARG HSA_OVERRIDE
@@ -11,11 +10,10 @@ ARG HSA_OVERRIDE
FROM wget AS rocm
ARG ROCM
ARG AMDGPU
RUN apt update -qq && \
apt install -y wget gpg && \
wget -O rocm.deb https://repo.radeon.com/amdgpu-install/7.1/ubuntu/jammy/amdgpu-install_7.1.70100-1_all.deb && \
wget -O rocm.deb https://repo.radeon.com/amdgpu-install/7.1.1/ubuntu/jammy/amdgpu-install_7.1.1.70101-1_all.deb && \
apt install -y ./rocm.deb && \
apt update && \
apt install -qq -y rocm
@@ -36,7 +34,10 @@ FROM deps AS deps-prelim
COPY docker/rocm/debian-backports.sources /etc/apt/sources.list.d/debian-backports.sources
RUN apt-get update && \
apt-get install -y libnuma1 && \
apt-get install -qq -y -t bookworm-backports mesa-va-drivers mesa-vulkan-drivers
apt-get install -qq -y -t bookworm-backports mesa-va-drivers mesa-vulkan-drivers && \
# Install C++ standard library headers for HIPRTC kernel compilation fallback
apt-get install -qq -y libstdc++-12-dev && \
rm -rf /var/lib/apt/lists/*
WORKDIR /opt/frigate
COPY --from=rootfs / /
@@ -54,12 +55,14 @@ RUN pip3 uninstall -y onnxruntime \
FROM scratch AS rocm-dist
ARG ROCM
ARG AMDGPU
COPY --from=rocm /opt/rocm-$ROCM/bin/rocminfo /opt/rocm-$ROCM/bin/migraphx-driver /opt/rocm-$ROCM/bin/
COPY --from=rocm /opt/rocm-$ROCM/share/miopen/db/*$AMDGPU* /opt/rocm-$ROCM/share/miopen/db/
COPY --from=rocm /opt/rocm-$ROCM/share/miopen/db/*gfx908* /opt/rocm-$ROCM/share/miopen/db/
COPY --from=rocm /opt/rocm-$ROCM/lib/rocblas/library/*$AMDGPU* /opt/rocm-$ROCM/lib/rocblas/library/
# Copy MIOpen database files for gfx10xx and gfx11xx only (RDNA2/RDNA3)
COPY --from=rocm /opt/rocm-$ROCM/share/miopen/db/*gfx10* /opt/rocm-$ROCM/share/miopen/db/
COPY --from=rocm /opt/rocm-$ROCM/share/miopen/db/*gfx11* /opt/rocm-$ROCM/share/miopen/db/
# Copy rocBLAS library files for gfx10xx and gfx11xx only
COPY --from=rocm /opt/rocm-$ROCM/lib/rocblas/library/*gfx10* /opt/rocm-$ROCM/lib/rocblas/library/
COPY --from=rocm /opt/rocm-$ROCM/lib/rocblas/library/*gfx11* /opt/rocm-$ROCM/lib/rocblas/library/
COPY --from=rocm /opt/rocm-dist/ /
#######################################################################


@@ -1,8 +1,5 @@
variable "AMDGPU" {
default = "gfx900"
}
variable "ROCM" {
default = "7.1"
default = "7.1.1"
}
variable "HSA_OVERRIDE_GFX_VERSION" {
default = ""
@@ -38,7 +35,6 @@ target rocm {
}
platforms = ["linux/amd64"]
args = {
AMDGPU = AMDGPU,
ROCM = ROCM,
HSA_OVERRIDE_GFX_VERSION = HSA_OVERRIDE_GFX_VERSION,
HSA_OVERRIDE = HSA_OVERRIDE


@@ -1,53 +1,15 @@
BOARDS += rocm
# AMD/ROCm is chunky so we build couple of smaller images for specific chipsets
ROCM_CHIPSETS:=gfx900:9.0.0 gfx1030:10.3.0 gfx1100:11.0.0
local-rocm: version
$(foreach chipset,$(ROCM_CHIPSETS), \
AMDGPU=$(word 1,$(subst :, ,$(chipset))) \
HSA_OVERRIDE_GFX_VERSION=$(word 2,$(subst :, ,$(chipset))) \
HSA_OVERRIDE=1 \
docker buildx bake --file=docker/rocm/rocm.hcl rocm \
--set rocm.tags=frigate:latest-rocm-$(word 1,$(subst :, ,$(chipset))) \
--load \
&&) true
unset HSA_OVERRIDE_GFX_VERSION && \
HSA_OVERRIDE=0 \
AMDGPU=gfx \
docker buildx bake --file=docker/rocm/rocm.hcl rocm \
--set rocm.tags=frigate:latest-rocm \
--load
build-rocm: version
$(foreach chipset,$(ROCM_CHIPSETS), \
AMDGPU=$(word 1,$(subst :, ,$(chipset))) \
HSA_OVERRIDE_GFX_VERSION=$(word 2,$(subst :, ,$(chipset))) \
HSA_OVERRIDE=1 \
docker buildx bake --file=docker/rocm/rocm.hcl rocm \
--set rocm.tags=$(IMAGE_REPO):${GITHUB_REF_NAME}-$(COMMIT_HASH)-rocm-$(chipset) \
&&) true
unset HSA_OVERRIDE_GFX_VERSION && \
HSA_OVERRIDE=0 \
AMDGPU=gfx \
docker buildx bake --file=docker/rocm/rocm.hcl rocm \
--set rocm.tags=$(IMAGE_REPO):${GITHUB_REF_NAME}-$(COMMIT_HASH)-rocm
push-rocm: build-rocm
$(foreach chipset,$(ROCM_CHIPSETS), \
AMDGPU=$(word 1,$(subst :, ,$(chipset))) \
HSA_OVERRIDE_GFX_VERSION=$(word 2,$(subst :, ,$(chipset))) \
HSA_OVERRIDE=1 \
docker buildx bake --file=docker/rocm/rocm.hcl rocm \
--set rocm.tags=$(IMAGE_REPO):${GITHUB_REF_NAME}-$(COMMIT_HASH)-rocm-$(chipset) \
--push \
&&) true
unset HSA_OVERRIDE_GFX_VERSION && \
HSA_OVERRIDE=0 \
AMDGPU=gfx \
docker buildx bake --file=docker/rocm/rocm.hcl rocm \
--set rocm.tags=$(IMAGE_REPO):${GITHUB_REF_NAME}-$(COMMIT_HASH)-rocm \
--push


@@ -75,7 +75,13 @@ audio:
### Audio Transcription
Frigate supports fully local audio transcription using either `sherpa-onnx` or OpenAI's open-source Whisper models via `faster-whisper`. To enable transcription, enable it in your config. Note that audio detection must also be enabled as described above in order to use audio transcription features.
Frigate supports fully local audio transcription using either `sherpa-onnx` or OpenAI's open-source Whisper models via `faster-whisper`. The goal of this feature is to support Semantic Search for `speech` audio events. Frigate is not intended to act as a continuous, fully-automatic speech transcription service — automatically transcribing all speech (or queuing many audio events for transcription) requires substantial CPU (or GPU) resources and is impractical on most systems. For this reason, transcriptions for events are initiated manually from the UI or the API rather than being run continuously in the background.
Transcription accuracy also depends heavily on the quality of your camera's microphone and recording conditions. Many cameras use inexpensive microphones, and distance to the speaker, low audio bitrate, or background noise can significantly reduce transcription quality. If you need higher accuracy, more robust long-running queues, or large-scale automatic transcription, consider using the HTTP API in combination with an automation platform and a cloud transcription service.
#### Configuration
To enable transcription, enable it in your config. Note that audio detection must also be enabled as described above in order to use audio transcription features.
```yaml
audio_transcription:
@@ -151,3 +157,21 @@ Only one `speech` event may be transcribed at a time. Frigate does not automatic
:::
Recorded `speech` events will always use a `whisper` model, regardless of the `model_size` config setting. Without a supported Nvidia GPU, generating transcriptions for longer `speech` events may take a fair amount of time, so be patient.
#### FAQ
1. Why doesn't Frigate automatically transcribe all `speech` events?
Frigate does not implement a queue mechanism for speech transcription, and adding one is not trivial. A proper queue would need backpressure, prioritization, memory/disk buffering, retry logic, crash recovery, and safeguards to prevent unbounded growth when events outpace processing. That's a significant amount of complexity for a feature that, in most real-world environments, would mostly just churn through low-value noise.
Because transcription is **serialized (one event at a time)** and speech events can be generated far faster than they can be processed, an auto-transcribe toggle would very quickly create an ever-growing backlog and degrade core functionality. For the amount of engineering and risk involved, it adds **very little practical value** for the majority of deployments, which are often on low-powered, edge hardware.
If you hear speech that's actually important and worth saving/indexing for the future, **just press the transcribe button in Explore** on that specific `speech` event - that keeps things explicit, reliable, and under your control.
Other options are being considered for future versions of Frigate to add transcription options that support external `whisper` Docker containers. A single transcription service could then be shared by Frigate and other applications (for example, Home Assistant Voice), and run on more powerful machines when available.
2. Why don't you save live transcription text and use that for `speech` events?
There's no guarantee that a `speech` event is even created from the exact audio that went through the transcription model. Live transcription and `speech` event creation are **separate, asynchronous processes**. Even when both are correctly configured, trying to align the **precise start and end time of a speech event** with whatever audio the model happened to be processing at that moment is unreliable.
Automatically persisting that data would often result in **misaligned, partial, or irrelevant transcripts**, while still incurring all of the CPU, storage, and privacy costs of transcription. That's why Frigate treats transcription as an **explicit, user-initiated action** rather than an automatic side-effect of every `speech` event.


@@ -270,3 +270,42 @@ To use role-based access control, you must connect to Frigate via the **authenti
1. Log in as an **admin** user via port `8971`.
2. Navigate to **Settings > Users**.
3. Edit a user's role by selecting **admin** or **viewer**.
## API Authentication Guide
### Getting a Bearer Token
To use the Frigate API, you need to authenticate first. Follow these steps to obtain a Bearer token:
#### 1. Login
Make a POST request to `/login` with your credentials:
```bash
curl -i -X POST https://frigate_ip:8971/api/login \
-H "Content-Type: application/json" \
-d '{"user": "admin", "password": "your_password"}'
```
:::note
You may need to include `-k` in the argument list in these steps (e.g., `curl -k -i -X POST ...`) if your Frigate instance is using a self-signed certificate.
:::
The response will contain a cookie with the JWT token.
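If you'd rather not copy the token out of the response by hand, curl can store and replay the session cookie for you. A minimal sketch using a cookie jar with the same endpoints as above (add `-k` for self-signed certificates, as noted); this is an alternative to the Bearer header described next:
```bash
# Log in and save the session cookie to a local jar file.
curl -c cookies.txt -X POST https://frigate_ip:8971/api/login \
  -H "Content-Type: application/json" \
  -d '{"user": "admin", "password": "your_password"}'

# Reuse the stored cookie on later requests.
curl -b cookies.txt https://frigate_ip:8971/api/profile
```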
#### 2. Using the Bearer Token
Once you have the token, include it in the Authorization header for subsequent requests:
```bash
curl -H "Authorization: Bearer <your_token>" https://frigate_ip:8971/api/profile
```
#### 3. Token Lifecycle
- Tokens are valid for the configured session length
- Tokens are automatically refreshed when you visit the `/auth` endpoint
- Tokens are invalidated when the user's password is changed
- Use `/logout` to clear your session cookie


@@ -3,7 +3,7 @@ id: object_classification
title: Object Classification
---
Object classification allows you to train a custom MobileNetV2 classification model to run on tracked objects (persons, cars, animals, etc.) to identify a finer category or attribute for that object.
Object classification allows you to train a custom MobileNetV2 classification model to run on tracked objects (persons, cars, animals, etc.) to identify a finer category or attribute for that object. Classification results are visible in the Tracked Object Details pane in Explore, through the `frigate/tracked_object_details` MQTT topic, in Home Assistant sensors via the official Frigate integration, or through the event endpoints in the HTTP API.
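To watch these results arrive in real time, subscribing to the MQTT topic from a shell is a quick sanity check. A minimal sketch, assuming the `mosquitto-clients` package is installed and a broker at `192.168.1.10` (host and credentials are placeholders for your own setup):
```bash
# Print each tracked object details payload as Frigate publishes it.
# -v prefixes each message with its topic; add -u/-P if your broker requires auth.
mosquitto_sub -h 192.168.1.10 -t frigate/tracked_object_details -v
```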
## Minimum System Requirements
@@ -11,6 +11,8 @@ Object classification models are lightweight and run very fast on CPU. Inference
Training the model does briefly use a high amount of system resources for about 1-3 minutes per training run. On lower-power devices, training may take longer.
A CPU with AVX instructions is required for training and inference.
## Classes
Classes are the categories your model will learn to distinguish between. Each class represents a distinct visual category that the model will predict.
@@ -31,9 +33,15 @@ For object classification:
- Example: `cat``Leo`, `Charlie`, `None`.
- **Attribute**:
- Added as metadata to the object (visible in /events): `<model_name>: <predicted_value>`.
- Added as metadata to the object, visible in the Tracked Object Details pane in Explore, `frigate/events` MQTT messages, and the HTTP API response as `<model_name>: <predicted_value>`.
- Ideal when multiple attributes can coexist independently.
- Example: Detecting if a `person` in a construction yard is wearing a helmet or not.
- Example: Detecting if a `person` in a construction yard is wearing a helmet or not, and if they are wearing a yellow vest or not.
:::note
A tracked object can only have a single sub label. If you are using Triggers or Face Recognition and you configure an object classification model for `person` using the sub label type, your sub label may not be assigned correctly as it depends on which enrichment completes its analysis first. Consider using the `attribute` type instead.
:::
## Assignment Requirements
@@ -73,13 +81,17 @@ classification:
classification_type: sub_label # or: attribute
```
An optional config, `save_attempts`, can be set as a key under the model name. This defines the number of classification attempts to save in the Recent Classifications tab. For object classification models, the default is 200.
## Training the model
Creating and training the model is done within the Frigate UI using the `Classification` page. The process consists of two steps:
### Step 1: Name and Define
Enter a name for your model, select the object label to classify (e.g., `person`, `dog`, `car`), choose the classification type (sub label or attribute), and define your classes. Include a `none` class for objects that don't fit any specific category.
Enter a name for your model, select the object label to classify (e.g., `person`, `dog`, `car`), choose the classification type (sub label or attribute), and define your classes. Frigate will automatically include a `none` class for objects that don't fit any specific category.
For example: To classify your two cats, create a model named "Our Cats" and create two classes, "Charlie" and "Leo". A third class, "none", will be created automatically for other neighborhood cats that are not your own.
### Step 2: Assign Training Examples
@@ -87,6 +99,8 @@ The system will automatically generate example images from detected objects matc
When choosing which objects to classify, start with a small number of visually distinct classes and ensure your training samples match camera viewpoints and distances typical for those objects.
If examples for some of your classes do not appear in the grid, you can continue configuring the model without them. New images will begin to appear in the Recent Classifications view. When your missing classes are seen, classify them from this view and retrain your model.
### Improving the Model
- **Problem framing**: Keep classes visually distinct and relevant to the chosen object types.
@@ -94,3 +108,23 @@ When choosing which objects to classify, start with a small number of visually d
- **Preprocessing**: Ensure examples reflect object crops similar to Frigate's boxes; keep the subject centered.
- **Labels**: Keep label names short and consistent; include a `none` class if you plan to ignore uncertain predictions for sub labels.
- **Threshold**: Tune `threshold` per model to reduce false assignments. Start at `0.8` and adjust based on validation.
## Debugging Classification Models
To troubleshoot issues with object classification models, enable debug logging to see detailed information about classification attempts, scores, and consensus calculations.
Enable debug logs for classification models by adding `frigate.data_processing.real_time.custom_classification: debug` to your `logger` configuration. These logs are verbose, so only keep this enabled when necessary. Restart Frigate after this change.
```yaml
logger:
default: info
logs:
frigate.data_processing.real_time.custom_classification: debug
```
The debug logs will show:
- Classification probabilities for each attempt
- Whether scores meet the threshold requirement
- Consensus calculations and when assignments are made
- Object classification history and weighted scores


@@ -3,7 +3,7 @@ id: state_classification
title: State Classification
---
State classification allows you to train a custom MobileNetV2 classification model on a fixed region of your camera frame(s) to determine a current state. The model can be configured to run on a schedule and/or when motion is detected in that region.
State classification allows you to train a custom MobileNetV2 classification model on a fixed region of your camera frame(s) to determine a current state. The model can be configured to run on a schedule and/or when motion is detected in that region. Classification results are available through the `frigate/<camera_name>/classification/<model_name>` MQTT topic and in Home Assistant sensors via the official Frigate integration.
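As a quick way to confirm state results are being published, you can subscribe to that topic pattern with MQTT wildcards. A sketch in which the broker address, camera name, and model name are placeholders:
```bash
# Subscribe across all cameras and models; `+` matches one topic level.
mosquitto_sub -h 192.168.1.10 -t "frigate/+/classification/+" -v

# Or watch a single hypothetical model on one camera:
mosquitto_sub -h 192.168.1.10 -t frigate/front_door/classification/door_state
```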
## Minimum System Requirements
@@ -11,6 +11,8 @@ State classification models are lightweight and run very fast on CPU. Inference
Training the model does briefly use a high amount of system resources for about 1-3 minutes per training run. On lower-power devices, training may take longer.
A CPU with AVX instructions is required for training and inference.
## Classes
Classes are the different states an area on your camera can be in. Each class represents a distinct visual state that the model will learn to recognize.
@@ -46,6 +48,8 @@ classification:
crop: [0, 180, 220, 400]
```
An optional config, `save_attempts`, can be set as a key under the model name. This defines the number of classification attempts to save in the Recent Classifications tab. For state classification models, the default is 100.
## Training the model
Creating and training the model is done within the Frigate UI using the `Classification` page. The process consists of three steps:
@@ -60,13 +64,44 @@ Choose one or more cameras and draw a rectangle over the area of interest for ea
### Step 3: Assign Training Examples
The system will automatically generate example images from your camera feeds. You'll be guided through each class one at a time to select which images represent that state.
The system will automatically generate example images from your camera feeds. You'll be guided through each class one at a time to select which images represent that state. It's not strictly required to select all images you see. If a state is missing from the samples, you can train it from the Recent tab later.
**Important**: All images must be assigned to a state before training can begin. This includes images that may not be optimal, such as when people temporarily block the view, sun glare is present, or other distractions occur. Assign these images to the state that is actually present (based on what you know the state to be), not based on the distraction. This training helps the model correctly identify the state even when such conditions occur during inference.
Once all images are assigned, training will begin automatically.
Once some images are assigned, training will begin automatically.
### Improving the Model
- **Problem framing**: Keep classes visually distinct and state-focused (e.g., `open`, `closed`, `unknown`). Avoid combining object identity with state in a single model unless necessary.
- **Data collection**: Use the models Recent Classifications tab to gather balanced examples across times of day and weather.
- **Data collection**: Use the model's Recent Classifications tab to gather balanced examples across times of day and weather.
- **When to train**: Focus on cases where the model is entirely incorrect or flips between states when it should not. There's no need to train additional images when the model is already working consistently.
- **Selecting training images**: Images scoring below 100% due to new conditions (e.g., first snow of the year, seasonal changes) or variations (e.g., objects temporarily in view, insects at night) are good candidates for training, as they represent scenarios different from the default state. Training these lower-scoring images that differ from existing training data helps prevent overfitting. Avoid training large quantities of images that look very similar, especially if they already score 100%, as this can lead to overfitting.
## Debugging Classification Models
To troubleshoot issues with state classification models, enable debug logging to see detailed information about classification attempts, scores, and state verification.
Enable debug logs for classification models by adding `frigate.data_processing.real_time.custom_classification: debug` to your `logger` configuration. These logs are verbose, so only keep this enabled when necessary. Restart Frigate after this change.
```yaml
logger:
default: info
logs:
frigate.data_processing.real_time.custom_classification: debug
```
The debug logs will show:
- Classification probabilities for each attempt
- Whether scores meet the threshold requirement
- State verification progress (consecutive detections needed)
- When state changes are published
### Recent Classifications
For state classification, images are only added to recent classifications under specific circumstances:
- **First detection**: The first classification attempt for a camera is always saved
- **State changes**: Images are saved when the detected state differs from the current verified state
- **Pending verification**: Images are saved when there's a pending state change being verified (requires 3 consecutive identical states)
- **Low confidence**: Images with scores below 100% are saved even if the state matches the current state (useful for training)
Images are **not** saved when the state is stable (detected state matches current state) **and** the score is 100%. This prevents unnecessary storage of redundant high-confidence classifications.


@@ -56,7 +56,7 @@ Parallel requests also come with some caveats. You will need to set `OLLAMA_NUM_
### Supported Models
You must use a vision capable model with Frigate. Current model variants can be found [in their model library](https://ollama.com/library). At the time of writing, this includes `llava`, `llava-llama3`, `llava-phi3`, and `moondream`. Note that Frigate will not automatically download the model you specify in your config, you must download the model to your local instance of Ollama first i.e. by running `ollama pull llava:7b` on your Ollama server/Docker container. Note that the model specified in Frigate's config must match the downloaded model tag.
You must use a vision capable model with Frigate. Current model variants can be found [in their model library](https://ollama.com/library). Note that Frigate will not automatically download the model you specify in your config; you must download the model to your local instance of Ollama first, e.g. by running `ollama pull llava:7b` on your Ollama server/Docker container. Note that the model specified in Frigate's config must match the downloaded model tag.
:::note
@@ -64,6 +64,10 @@ You should have at least 8 GB of RAM available (or VRAM if running on GPU) to ru
:::
#### Ollama Cloud models
Ollama also supports [cloud models](https://ollama.com/cloud), where your local Ollama instance handles requests from Frigate, but model inference is performed in the cloud. Set up Ollama locally, sign in with your Ollama account, and specify the cloud model name in your Frigate config. For more details, see the Ollama cloud model [docs](https://docs.ollama.com/cloud).
### Configuration
```yaml


@@ -111,3 +111,9 @@ review:
## Review Reports
Along with individual review item summaries, Generative AI provides the ability to request a report of a given time period. For example, while on vacation you can get a daily report of any suspicious activity or other concerns that may require review.
### Requesting Reports Programmatically
Review reports can be requested via the [API](/integrations/api#review-summarization) by sending a POST request to `/api/review/summarize/start/{start_ts}/end/{end_ts}` with Unix timestamps.
For Home Assistant users, there is a built-in service (`frigate.review_summarize`) that makes it easy to request review reports as part of automations or scripts. This allows you to automatically generate daily summaries, vacation reports, or custom time period reports based on your specific needs.
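For example, a report covering one full day might be requested like this. A sketch with placeholder epoch timestamps (2025-01-01 to 2025-01-02 UTC); authenticate with a Bearer token as described in the authentication docs:
```bash
# Request a review report for the 24 hours starting 2025-01-01 00:00 UTC.
curl -X POST \
  -H "Authorization: Bearer <your_token>" \
  "https://frigate_ip:8971/api/review/summarize/start/1735689600/end/1735776000"
```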


@@ -13,7 +13,7 @@ Object detection and enrichments (like Semantic Search, Face Recognition, and Li
- **AMD**
- ROCm will automatically be detected and used for enrichments in the `-rocm` Frigate image.
- ROCm support in the `-rocm` Frigate image is automatically detected for enrichments, but only some enrichment models are available due to ROCm's focus on LLMs and limited stability with certain neural network models. Frigate disables models that perform poorly or are unstable to ensure reliable operation, so only compatible enrichments may be active.
- **Intel**


@@ -107,23 +107,23 @@ Fine-tune the LPR feature using these optional parameters at the global level of
### Normalization Rules
- **`replace_rules`**: List of regex replacement rules to normalize detected plates. These rules are applied sequentially. Each rule must have a `pattern` (which can be a string or a regex, prepended by `r`) and `replacement` (a string, which also supports [backrefs](https://docs.python.org/3/library/re.html#re.sub) like `\1`). These rules are useful for dealing with common OCR issues like noise characters, separators, or confusions (e.g., 'O'→'0').
- **`replace_rules`**: List of regex replacement rules to normalize detected plates. These rules are applied sequentially and are applied _before_ the `format` regex, if specified. Each rule must have a `pattern` (which can be a string or a regex) and `replacement` (a string, which also supports [backrefs](https://docs.python.org/3/library/re.html#re.sub) like `\1`). These rules are useful for dealing with common OCR issues like noise characters, separators, or confusions (e.g., 'O'→'0').
These rules must be defined at the global level of your `lpr` config.
```yaml
lpr:
replace_rules:
- pattern: r'[%#*?]' # Remove noise symbols
- pattern: "[%#*?]" # Remove noise symbols
replacement: ""
- pattern: r'[= ]' # Normalize = or space to dash
- pattern: "[= ]" # Normalize = or space to dash
replacement: "-"
- pattern: "O" # Swap 'O' to '0' (common OCR error)
replacement: "0"
- pattern: r'I' # Swap 'I' to '1'
- pattern: "I" # Swap 'I' to '1'
replacement: "1"
- pattern: r'(\w{3})(\w{3})' # Split 6 chars into groups (e.g., ABC123 → ABC-123)
replacement: r'\1-\2'
- pattern: '(\w{3})(\w{3})' # Split 6 chars into groups (e.g., ABC123 → ABC-123) - use single quotes to preserve backslashes
replacement: '\1-\2'
```
- Rules fire in order: in the example above, noise is cleaned first, then separators, then swaps, then splits.
@@ -374,9 +374,19 @@ Use `match_distance` to allow small character mismatches. Alternatively, define
Start with ["Why isn't my license plate being detected and recognized?"](#why-isnt-my-license-plate-being-detected-and-recognized). If you are still having issues, work through these steps.
1. Enable debug logs to see exactly what Frigate is doing.
1. Start with a simplified LPR config.
- Enable debug logs for LPR by adding `frigate.data_processing.common.license_plate: debug` to your `logger` configuration. These logs are _very_ verbose, so only keep this enabled when necessary.
- Remove or comment out everything in your LPR config, including `min_area`, `min_plate_length`, `format`, `known_plates`, or `enhancement` values so that the only values left are `enabled` and `debug_save_plates`. This will run LPR with Frigate's default values.
```yaml
lpr:
enabled: true
debug_save_plates: true
```
2. Enable debug logs to see exactly what Frigate is doing.
- Enable debug logs for LPR by adding `frigate.data_processing.common.license_plate: debug` to your `logger` configuration. These logs are _very_ verbose, so only keep this enabled when necessary. Restart Frigate after this change.
```yaml
logger:
@@ -385,7 +395,7 @@ Start with ["Why isn't my license plate being detected and recognized?"](#why-is
frigate.data_processing.common.license_plate: debug
```
2. Ensure your plates are being _detected_.
3. Ensure your plates are being _detected_.
If you are using a Frigate+ or `license_plate` detecting model:
@@ -398,7 +408,7 @@ Start with ["Why isn't my license plate being detected and recognized?"](#why-is
- Watch the debug logs for messages from the YOLOv9 plate detector.
- You may need to adjust your `detection_threshold` if your plates are not being detected.
3. Ensure the characters on detected plates are being _recognized_.
4. Ensure the characters on detected plates are being _recognized_.
- Enable `debug_save_plates` to save images of detected text on plates to the clips directory (`/media/frigate/clips/lpr`). Ensure these images are readable and the text is clear.
- Watch the debug view to see plates recognized in real-time. For non-dedicated LPR cameras, the `car` or `motorcycle` label will change to the recognized plate when LPR is enabled and working.


@@ -178,6 +178,8 @@ To use the Reolink Doorbell with two way talk, you should use the [recommended R
As a starting point to check compatibility for your camera, view the list of cameras supported for two-way talk on the [go2rtc repository](https://github.com/AlexxIT/go2rtc?tab=readme-ov-file#two-way-audio). For cameras in the `ONVIF Profile T` category, you can check the FeatureList in the [ONVIF Conformant Products Database](https://www.onvif.org/conformant-products/) for the presence of `AudioOutput`. A camera that supports `ONVIF Profile T` _usually_ supports this, but support is inconsistent, and even a camera that explicitly lists the feature may still not work. If no entry for your camera exists in the database, it is recommended not to buy it, or to first consult the manufacturer's support about feature availability.
To prevent go2rtc from blocking other applications from accessing your camera's two-way audio, you must configure your stream with `#backchannel=0`. See [preventing go2rtc from blocking two-way audio](/configuration/restream#two-way-talk-restream) in the restream documentation.
### Streaming options on camera group dashboards
Frigate provides a dialog in the Camera Group Edit pane with several options for streaming on a camera group's dashboard. These settings are _per device_ and are saved in your device's local storage.
@@ -232,7 +234,7 @@ When your browser runs into problems playing back your camera streams, it will l
- **mse-decode**
- What it means: The browser reported a decoding error while trying to play the stream, which usually is a result of a codec incompatibility or corrupted frames.
- What to try: Check the browser console for the supported and negotiated codecs. Ensure your camera/restream is using H.264 video and AAC audio (these are the most compatible). If your camera uses a non-standard audio codec, configure `go2rtc` to transcode the stream to AAC. Try another browser (some browsers have stricter MSE/codec support) and, for iPhone, ensure you're on iOS 17.1 or newer.
- Possible console messages from the player code:


@@ -28,7 +28,6 @@ To create a poly mask:
5. Click the plus icon under the type of mask or zone you would like to create
6. Click on the camera's latest image to create the points for a masked area. Click the first point again to close the polygon.
7. When you've finished creating your mask, press Save.
Your config file will be updated with the relative coordinates of the mask/zone:


@@ -13,7 +13,7 @@ Frigate supports multiple different detectors that work on different types of ha
**Most Hardware**
- [Coral EdgeTPU](#edge-tpu-detector): The Google Coral EdgeTPU is available in USB, Mini PCIe, and m.2 formats allowing for a wide range of compatibility with devices.
- [Hailo](#hailo-8): The Hailo8 and Hailo8L AI Acceleration module is available in m.2 format with a HAT for RPi devices, offering a wide range of compatibility with devices.
- <CommunityBadge /> [MemryX](#memryx-mx3): The MX3 Acceleration module is available in m.2 format, offering broad compatibility across various platforms.
- <CommunityBadge /> [DeGirum](#degirum): Service for using hardware devices in the cloud or locally. Hardware and models provided on the cloud on [their website](https://hub.degirum.com).
@@ -69,12 +69,10 @@ Frigate provides the following builtin detector types: `cpu`, `edgetpu`, `hailo8
## Edge TPU Detector
The Edge TPU detector type runs TensorFlow Lite models utilizing the Google Coral delegate for hardware acceleration. To configure an Edge TPU detector, set the `"type"` attribute to `"edgetpu"`.
The Edge TPU device can be specified using the `"device"` attribute according to the [Documentation for the TensorFlow Lite Python API](https://coral.ai/docs/edgetpu/multiple-edgetpu/#using-the-tensorflow-lite-python-api). If not set, the delegate will use the first device it finds.
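For example, a minimal detector entry might look like the following; `usb` is illustrative, and omitting `device` lets the delegate pick the first device it finds:
```yaml
detectors:
  coral:
    type: edgetpu
    device: usb
```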
:::tip
See [common Edge TPU troubleshooting steps](/troubleshooting/edgetpu) if the Edge TPU is not detected.
@@ -146,6 +144,44 @@ detectors:
device: pci
```
### EdgeTPU Supported Models
| Model | Notes |
| ----------------------- | ------------------------------------------- |
| [Mobiledet](#mobiledet) | Default model |
| [YOLOv9](#yolov9) | More accurate but slower than default model |
#### Mobiledet
A TensorFlow Lite model is provided in the container at `/edgetpu_model.tflite` and is used by this detector type by default. To provide your own model, bind mount the file into the container and provide the path with `model.path`.
#### YOLOv9
[YOLOv9](https://github.com/dbro/frigate-detector-edgetpu-yolo9/releases/download/v1.0/yolov9-s-relu6-best_320_int8_edgetpu.tflite) models that are compiled for TensorFlow Lite and properly quantized are supported, but not included by default. To provide your own model, bind mount the file into the container and provide the path with `model.path`. Note that the model may require a custom label file (e.g., [use this 17-label file](https://raw.githubusercontent.com/dbro/frigate-detector-edgetpu-yolo9/refs/heads/main/labels-coco17.txt) for the model linked above).
<details>
<summary>YOLOv9 Setup & Config</summary>
After placing the downloaded files for the tflite model and labels in your config folder, you can use the following configuration:
```yaml
detectors:
  coral:
    type: edgetpu
    device: usb

model:
  model_type: yolo-generic
  width: 320 # <--- should match the imgsize of the model, typically 320
  height: 320 # <--- should match the imgsize of the model, typically 320
  path: /config/model_cache/yolov9-s-relu6-best_320_int8_edgetpu.tflite
  labelmap_path: /config/labels-coco17.txt
```
Note that the labelmap uses a subset of the complete COCO label set that has only 17 objects.
</details>
---
## Hailo-8
@@ -364,7 +400,7 @@ The YOLO detector has been designed to support YOLOv3, YOLOv4, YOLOv7, and YOLOv
:::warning
If you are using a Frigate+ model, you should not define any of the below `model` parameters in your config except for `path`. See [the Frigate+ model docs](/plus/first_model#step-3-set-your-model-id-in-the-config) for more information on setting up your model.
:::
@@ -704,7 +740,7 @@ The YOLO detector has been designed to support YOLOv3, YOLOv4, YOLOv7, and YOLOv
:::warning
If you are using a Frigate+ model, you should not define any of the below `model` parameters in your config except for `path`. See [the Frigate+ model docs](/plus/first_model#step-3-set-your-model-id-in-the-config) for more information on setting up your model.
:::


@@ -123,7 +123,7 @@ auth:
# Optional: Refresh time in seconds (default: shown below)
# When the session is going to expire in less time than this setting,
# it will be refreshed back to the session_length.
refresh_time: 1800 # 30 minutes
# Optional: Rate limiting for login failures to help prevent brute force
# login attacks (default: shown below)
# See the docs for more information on valid values
@@ -710,6 +710,44 @@ audio_transcription:
# List of language codes: https://github.com/openai/whisper/blob/main/whisper/tokenizer.py#L10
language: en
# Optional: Configuration for classification models
classification:
# Optional: Configuration for bird classification
bird:
# Optional: Enable bird classification (default: shown below)
enabled: False
# Optional: Minimum classification score required to be considered a match (default: shown below)
threshold: 0.9
custom:
# Required: name of the classification model
model_name:
# Optional: Enable running the model (default: shown below)
enabled: True
# Optional: Name of classification model (default: shown below)
name: None
# Optional: Classification score threshold to change the state (default: shown below)
threshold: 0.8
# Optional: Number of classification attempts to save in the recent classifications tab (default: shown below)
# NOTE: Defaults to 200 for object classification and 100 for state classification if not specified
save_attempts: None
# Optional: Object classification configuration
object_config:
# Required: Object types to classify
objects: [dog]
# Optional: Type of classification that is applied (default: shown below)
classification_type: sub_label
# Optional: State classification configuration
state_config:
# Required: Cameras to run classification on
cameras:
camera_name:
# Required: Crop of image frame on this camera to run classification on
crop: [0, 180, 220, 400]
# Optional: If classification should be run when motion is detected in the crop (default: shown below)
motion: False
# Optional: Interval to run classification on in seconds (default: shown below)
interval: None
# Optional: Restream configuration
# Uses https://github.com/AlexxIT/go2rtc (v1.9.10)
# NOTE: The default go2rtc API port (1984) must be used,
@@ -873,7 +911,7 @@ cameras:
user: admin
# Optional: password for login.
password: admin
# Optional: Skip TLS verification and disable digest authentication for the ONVIF server (default: shown below)
tls_insecure: False
# Optional: Ignores time synchronization mismatches between the camera and the server during authentication.
# Using NTP on both ends is recommended and this should only be set to True in a "safe" environment due to the security risk it represents.
@@ -964,10 +1002,6 @@ ui:
# full: 8:15:22 PM Mountain Standard Time
# (default: shown below).
time_style: medium
# Optional: Set the unit system to either "imperial" or "metric" (default: metric)
# Used in the UI and in MQTT topics
unit_system: metric


@@ -24,11 +24,12 @@ birdseye:
restream: True
```
:::tip
To improve connection speed when using Birdseye via restream you can enable a small idle heartbeat by setting `birdseye.idle_heartbeat_fps` to a low value (e.g. `12`). This makes Frigate periodically push the last frame even when no motion is detected, reducing initial connection latency.
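A minimal sketch (values illustrative):
```yaml
birdseye:
  enabled: True
  restream: True
  idle_heartbeat_fps: 12 # periodically push the last frame while idle
```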
:::
### Securing Restream With Authentication
The go2rtc restream can be secured with RTSP based username / password authentication. Ex:
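A minimal sketch, assuming go2rtc's `rtsp` module options (credentials are placeholders):
```yaml
go2rtc:
  rtsp:
    username: "admin"
    password: "secret"
```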
@@ -159,6 +160,31 @@ go2rtc:
See [this comment](https://github.com/AlexxIT/go2rtc/issues/1217#issuecomment-2242296489) for more information.
## Preventing go2rtc from blocking two-way audio {#two-way-talk-restream}
For cameras that support two-way talk, go2rtc will automatically establish an audio output backchannel when connecting to an RTSP stream. This backchannel blocks access to the camera's audio output for two-way talk functionality, preventing both Frigate and other applications from using it.
To prevent this, you must configure two separate stream instances:
1. One stream instance with `#backchannel=0` for Frigate's viewing, recording, and detection (prevents go2rtc from establishing the blocking backchannel)
2. A second stream instance without `#backchannel=0` for two-way talk functionality (can be used by Frigate's WebRTC viewer or other applications)
Configuration example:
```yaml
go2rtc:
  streams:
    front_door:
      - rtsp://user:password@10.0.10.10:554/cam/realmonitor?channel=1&subtype=2#backchannel=0
    front_door_twoway:
      - rtsp://user:password@10.0.10.10:554/cam/realmonitor?channel=1&subtype=2
```
In this configuration:
- `front_door` stream is used by Frigate for viewing, recording, and detection. The `#backchannel=0` parameter prevents go2rtc from establishing the audio output backchannel, so it won't block two-way talk access.
- `front_door_twoway` stream is used for two-way talk functionality. This stream can be used by Frigate's WebRTC viewer when two-way talk is enabled, or by other applications (like Home Assistant Advanced Camera Card) that need access to the camera's audio output channel.
## Advanced Restream Configurations
The [exec](https://github.com/AlexxIT/go2rtc/tree/v1.9.10#source-exec) source in go2rtc can be used for custom ffmpeg commands. An example is below:
@@ -169,4 +195,4 @@ NOTE: The output will need to be passed with two curly braces `{{output}}`
go2rtc:
streams:
stream1: exec:ffmpeg -hide_banner -re -stream_loop -1 -i /media/BigBuckBunny.mp4 -c copy -rtsp_transport tcp -f rtsp {{output}}
```
```


@@ -159,7 +159,7 @@ Inference speeds vary greatly depending on the CPU or GPU used, some known examp
| Intel HD 530 | 15 - 35 ms | | | | Can only run one detector instance |
| Intel HD 620 | 15 - 25 ms | | 320: ~ 35 ms | | |
| Intel HD 630 | ~ 15 ms | | 320: ~ 30 ms | | |
| Intel UHD 730 | ~ 10 ms | t-320: 14ms s-320: 24ms t-640: 34ms s-640: 65ms | 320: ~ 19 ms 640: ~ 54 ms | | |
| Intel UHD 770 | ~ 15 ms | t-320: ~ 16 ms s-320: ~ 20 ms s-640: ~ 40 ms | 320: ~ 20 ms 640: ~ 46 ms | | |
| Intel N100 | ~ 15 ms | s-320: 30 ms | 320: ~ 25 ms | | Can only run one detector instance |
| Intel N150 | ~ 15 ms | t-320: 16 ms s-320: 24 ms | | | |


@@ -135,6 +135,7 @@ Finally, configure [hardware object detection](/configuration/object_detectors#h
### MemryX MX3
The MemryX MX3 Accelerator is available in the M.2 2280 form factor (like an NVMe SSD), and supports a variety of configurations:
- x86 (Intel/AMD) PCs
- Raspberry Pi 5
- Orange Pi 5 Plus/Max
@@ -142,7 +143,6 @@ The MemryX MX3 Accelerator is available in the M.2 2280 form factor (like an NVM
#### Installation
To get started with MX3 hardware setup for your system, refer to the [Hardware Setup Guide](https://developer.memryx.com/get_started/hardware_setup.html).
@@ -156,7 +156,7 @@ Then follow these steps for installing the correct driver/runtime configuration:
#### Setup
To set up Frigate, follow the default installation instructions, for example: `ghcr.io/blakeblackshear/frigate:stable`
Next, grant Docker permissions to access your hardware by adding the following lines to your `docker-compose.yml` file:
@@ -173,7 +173,7 @@ In your `docker-compose.yml`, also add:
privileged: true
volumes:
  - /run/mxa_manager:/run/mxa_manager
```
If you can't use Docker Compose, you can run the container with something similar to this:
@@ -411,7 +411,7 @@ To install make sure you have the [community app plugin here](https://forums.unr
## Proxmox
[According to Proxmox documentation](https://pve.proxmox.com/pve-docs/pve-admin-guide.html#chapter_pct) it is recommended that you run application containers like Frigate inside a Proxmox QEMU VM. This gives you all the advantages of application containerization along with the benefits that VMs offer, such as strong isolation from the host and the ability to live-migrate, which otherwise isn't possible with containers. Ensure that ballooning is **disabled**, especially if you are passing through a GPU to the VM.
:::warning


@@ -5,7 +5,7 @@ title: Updating
# Updating Frigate
The current stable version of Frigate is **0.17.0**. The release notes and any breaking changes for this version can be found on the [Frigate GitHub releases page](https://github.com/blakeblackshear/frigate/releases/tag/v0.17.0).
Keeping Frigate up to date ensures you benefit from the latest features, performance improvements, and bug fixes. The update process varies slightly depending on your installation method (Docker, Home Assistant Addon, etc.). Below are instructions for the most common setups.
@@ -33,21 +33,21 @@ If you're running Frigate via Docker (recommended method), follow these steps:
2. **Update and Pull the Latest Image**:
- If using Docker Compose:
- Edit your `docker-compose.yml` file to specify the desired version tag (e.g., `0.17.0` instead of `0.16.3`). For example:
```yaml
services:
  frigate:
    image: ghcr.io/blakeblackshear/frigate:0.17.0
```
- Then pull the image:
```bash
docker pull ghcr.io/blakeblackshear/frigate:0.17.0
```
- **Note for `stable` Tag Users**: If your `docker-compose.yml` uses the `stable` tag (e.g., `ghcr.io/blakeblackshear/frigate:stable`), you don't need to update the tag manually. The `stable` tag always points to the latest stable release after pulling.
- If using `docker run`:
- Pull the image with the appropriate tag (e.g., `0.17.0`, `0.17.0-tensorrt`, or `stable`):
```bash
docker pull ghcr.io/blakeblackshear/frigate:0.17.0
```
3. **Start the Container**:
@@ -105,8 +105,8 @@ If an update causes issues:
1. Stop Frigate.
2. Restore your backed-up config file and database.
3. Revert to the previous image version:
- For Docker: Specify an older tag (e.g., `ghcr.io/blakeblackshear/frigate:0.16.3`) in your `docker run` command.
- For Docker Compose: Edit your `docker-compose.yml`, specify the older version tag (e.g., `ghcr.io/blakeblackshear/frigate:0.16.3`), and re-run `docker compose up -d`.
- For Home Assistant: Reinstall the previous addon version manually via the repository if needed and restart the addon.
4. Verify the old version is running again.


@@ -113,7 +113,8 @@ section.
1. If the stream you added to go2rtc is also used by Frigate for the `record` or `detect` role, you can migrate your config to pull from the RTSP restream to reduce the number of connections to your camera as shown [here](/configuration/restream#reduce-connections-to-camera) and sketched below.
2. You can [set up WebRTC](/configuration/live#webrtc-extra-configuration) if your camera supports two-way talk. Note that WebRTC only supports specific audio formats and may require opening ports on your router.
3. If your camera supports two-way talk, you must configure your stream with `#backchannel=0` to prevent go2rtc from blocking other applications from accessing the camera's audio output. See [preventing go2rtc from blocking two-way audio](/configuration/restream#two-way-talk-restream) in the restream documentation.
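A minimal sketch of the migration described in step 1 (stream name, credentials, and URL are illustrative):
```yaml
go2rtc:
  streams:
    front_door:
      - rtsp://user:password@10.0.10.10:554/stream#backchannel=0
cameras:
  front_door:
    ffmpeg:
      inputs:
        # pull from the local restream instead of opening a second camera connection
        - path: rtsp://127.0.0.1:8554/front_door
          input_args: preset-rtsp-restream
          roles:
            - record
            - detect
```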
## Homekit Configuration
To add camera streams to Homekit, Frigate must be configured in Docker to use `host` networking mode. Once that is done, you can use the go2rtc WebUI (accessed via port 1984, which is disabled by default) to share/export a camera to Homekit. Any changes made will automatically be saved to `/config/go2rtc_homekit.yml`.


@@ -245,6 +245,12 @@ To load a preview gif of a review item:
https://HA_URL/api/frigate/notifications/<review-id>/review_preview.gif
```
To load the thumbnail of a review item:
```
https://HA_URL/api/frigate/notifications/<review-id>/<camera>/review_thumbnail.webp
```
<a name="streams"></a>
## RTSP stream


@@ -15,13 +15,11 @@ There are three model types offered in Frigate+, `mobiledet`, `yolonas`, and `yo
Not all model types are supported by all detectors, so it's important to choose a model type to match your detector as shown in the table under [supported detector types](#supported-detector-types). You can test model types for compatibility and speed on your hardware by using the base models.
| Model Type | Description |
| ----------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
| `mobiledet` | Based on the same architecture as the default model included with Frigate. Runs on Google Coral devices and CPUs. |
| `yolonas` | A newer architecture that offers slightly higher accuracy and improved detection of small objects. Runs on Intel, NVidia GPUs, and AMD GPUs. |
| `yolov9` | A leading SOTA (state of the art) object detection model with similar performance to yolonas, but on a wider range of hardware options. Runs on Intel, NVidia GPUs, AMD GPUs, Hailo, MemryX, Apple Silicon, and Rockchip NPUs. |
### YOLOv9 Details
@@ -39,7 +37,7 @@ If you have a Hailo device, you will need to specify the hardware you have when
#### Rockchip (RKNN) Support
For 0.16, YOLOv9 onnx models will need to be manually converted. First, you will need to configure Frigate to use the model id for your YOLOv9 onnx model so it downloads the model to your `model_cache` directory. From there, you can follow the [documentation](/configuration/object_detectors.md#converting-your-own-onnx-model-to-rknn-format) to convert it. Automatic conversion is available in 0.17 and later.
## Supported detector types
@@ -55,7 +53,7 @@ Currently, Frigate+ models support CPU (`cpu`), Google Coral (`edgetpu`), OpenVi
| [Hailo8/Hailo8L/Hailo8R](/configuration/object_detectors#hailo-8) | `hailo8l` | `yolov9` |
| [Rockchip NPU](/configuration/object_detectors#rockchip-platform)\* | `rknn` | `yolov9` |
_\* Requires manual conversion in 0.16. Automatic conversion available in 0.17 and later._
## Improving your model


@@ -0,0 +1,60 @@
---
id: dummy-camera
title: Troubleshooting Detection
---
When investigating object detection or tracking problems, it can be helpful to replay an exported video as a temporary "dummy" camera. This lets you reproduce issues locally, iterate on configuration (detections, zones, enrichment settings), and capture logs and clips for analysis.
## When to use
- Replaying an exported clip to reproduce incorrect detections
- Testing configuration changes (model settings, trackers, filters) against a known clip
- Gathering deterministic logs and recordings for debugging or issue reports
## Example Config
Place the clip you want to replay in a location accessible to Frigate (for example `/media/frigate/` or the repository `debug/` folder when developing). Then add a temporary camera to your `config/config.yml` like this:
```yaml
cameras:
  test:
    ffmpeg:
      inputs:
        - path: /media/frigate/car-stopping.mp4
          input_args: -re -stream_loop -1 -fflags +genpts
          roles:
            - detect
    detect:
      enabled: true
    record:
      enabled: false
    snapshots:
      enabled: false
```
- `-re -stream_loop -1` tells `ffmpeg` to play the file in realtime and loop indefinitely, which is useful for long debugging sessions.
- `-fflags +genpts` helps generate presentation timestamps when they are missing in the file.
## Steps
1. Export or copy the clip you want to replay to the Frigate host (e.g., `/media/frigate/` or `debug/clips/`).
2. Add the temporary camera to `config/config.yml` (example above). Use a unique name such as `test` or `replay_camera` so it's easy to remove later.
- If you're debugging a specific camera, copy the settings from that camera (frame rate, model/enrichment settings, zones, etc.) into the temporary camera so the replay closely matches the original environment. Leave `record` and `snapshots` disabled unless you are specifically debugging recording or snapshot behavior.
3. Restart Frigate.
4. Observe the Debug view in the UI and logs as the clip is replayed. Watch detections, zones, or any feature you're looking to debug, and note any errors in the logs to reproduce the issue.
5. Iterate on camera or enrichment settings (model, fps, zones, filters) and re-check the replay until the behavior is resolved.
6. Remove the temporary camera from your config after debugging to avoid spurious telemetry or recordings.
## Variables to consider in object tracking
- The exported video will not always line up exactly with how it originally ran through Frigate (or even with the last loop). Different frames may be used on replay, which can change detections and tracking.
- Motion detection depends on the frames used; small frame shifts can change motion regions and therefore what gets passed to the detector.
- Object detection is not deterministic: models and post-processing can yield different results across runs, so you may not get identical detections or track IDs every time.
When debugging, treat the replay as a close approximation rather than a byte-for-byte replay. Capture multiple runs, enable recording if helpful, and examine logs and saved event clips to understand variability.
## Troubleshooting
- No video: verify the path is correct and accessible from the Frigate process/container.
- FFmpeg errors: check the log output for ffmpeg-specific flags and adjust `input_args` accordingly for your file/container. You may also need to disable hardware acceleration (`hwaccel_args: ""`) for the dummy camera, as sketched below.
- No detections: confirm the camera `roles` include `detect`, and model/detector configuration is enabled.
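For the FFmpeg case in the list above, a minimal sketch of forcing software decoding on the replay camera (reusing the example camera and clip from this page):
```yaml
cameras:
  test:
    ffmpeg:
      hwaccel_args: "" # override any global hwaccel preset with software decoding
      inputs:
        - path: /media/frigate/car-stopping.mp4
          input_args: -re -stream_loop -1 -fflags +genpts
          roles:
            - detect
```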


@@ -0,0 +1,135 @@
---
id: memory
title: Memory Troubleshooting
---
Frigate includes built-in memory profiling using [memray](https://bloomberg.github.io/memray/) to help diagnose memory issues. This feature allows you to profile specific Frigate modules to identify memory leaks, excessive allocations, or other memory-related problems.
## Enabling Memory Profiling
Memory profiling is controlled via the `FRIGATE_MEMRAY_MODULES` environment variable. Set it to a comma-separated list of module names you want to profile:
```yaml
# docker-compose example
services:
  frigate:
    ...
    environment:
      - FRIGATE_MEMRAY_MODULES=frigate.embeddings,frigate.capture
```
```bash
# docker run example
docker run -e FRIGATE_MEMRAY_MODULES="frigate.embeddings" \
...
--name frigate <frigate_image>
```
### Module Names
Frigate processes are named using a module-based naming scheme. Common module names include:
- `frigate.review_segment_manager` - Review segment processing
- `frigate.recording_manager` - Recording management
- `frigate.capture` - Camera capture processes (all cameras with this module name)
- `frigate.process` - Camera processing/tracking (all cameras with this module name)
- `frigate.output` - Output processing
- `frigate.audio_manager` - Audio processing
- `frigate.embeddings` - Embeddings processing
- `frigate.embeddings_manager` - Embeddings manager
You can also specify the full process name (including camera-specific identifiers) if you want to profile a specific camera:
```bash
FRIGATE_MEMRAY_MODULES=frigate.capture:front_door
```
When you specify a module name (e.g., `frigate.capture`), all processes with that module prefix will be profiled. For example, `frigate.capture` will profile all camera capture processes.
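For example, profiling all capture processes plus a single camera's tracking process might look like the following; the `frigate.process:front_door` form is an assumption by analogy with the `frigate.capture:front_door` example above:
```yaml
services:
  frigate:
    environment:
      # modules are comma-separated; camera-specific names use module:camera
      - FRIGATE_MEMRAY_MODULES=frigate.capture,frigate.process:front_door
```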
## How It Works
1. **Binary File Creation**: When profiling is enabled, memray creates a binary file (`.bin`) in `/config/memray_reports/` that is updated continuously in real-time as the process runs.
2. **Automatic HTML Generation**: On normal process exit, Frigate automatically:
- Stops memray tracking
- Generates an HTML flamegraph report
- Saves it to `/config/memray_reports/<module_name>.html`
3. **Crash Recovery**: If a process crashes (SIGKILL, segfault, etc.), the binary file is preserved with all data up to the crash point. You can manually generate the HTML report from the binary file.
## Viewing Reports
### Automatic Reports
After a process exits normally, you'll find HTML reports in `/config/memray_reports/`. Open these files in a web browser to view interactive flamegraphs showing memory usage patterns.
### Manual Report Generation
If a process crashes or you want to generate a report from an existing binary file, you can manually create the HTML report:
- Run `memray` inside the Frigate container:
```bash
docker-compose exec frigate memray flamegraph /config/memray_reports/<module_name>.bin
# or
docker exec -it <container_name_or_id> memray flamegraph /config/memray_reports/<module_name>.bin
```
- You can also copy the `.bin` file to the host and run `memray` locally if you have it installed:
```bash
docker cp <container_name_or_id>:/config/memray_reports/<module_name>.bin /tmp/
memray flamegraph /tmp/<module_name>.bin
```
## Understanding the Reports
Memray flamegraphs show:
- **Memory allocations over time**: See where memory is being allocated in your code
- **Call stacks**: Understand the full call chain leading to allocations
- **Memory hotspots**: Identify functions or code paths that allocate the most memory
- **Memory leaks**: Spot patterns where memory is allocated but not freed
The interactive HTML reports allow you to:
- Zoom into specific time ranges
- Filter by function names
- View detailed allocation information
- Export data for further analysis
## Best Practices
1. **Profile During Issues**: Enable profiling when you're experiencing memory issues, not all the time, as it adds some overhead.
2. **Profile Specific Modules**: Instead of profiling everything, focus on the modules you suspect are causing issues.
3. **Let Processes Run**: Allow processes to run for a meaningful duration to capture representative memory usage patterns.
4. **Check Binary Files**: If HTML reports aren't generated automatically (e.g., after a crash), check for `.bin` files in `/config/memray_reports/` and generate reports manually.
5. **Compare Reports**: Generate reports at different times to compare memory usage patterns and identify trends.
## Troubleshooting
### No Reports Generated
- Check that the environment variable is set correctly
- Verify the module name matches exactly (case-sensitive)
- Check logs for memray-related errors
- Ensure `/config/memray_reports/` directory exists and is writable
### Process Crashed Before Report Generation
- Look for `.bin` files in `/config/memray_reports/`
- Manually generate HTML reports using: `memray flamegraph <file>.bin`
- The binary file contains all data up to the crash point
### Reports Show No Data
- Ensure the process ran long enough to generate meaningful data
- Check that memray is properly installed (included by default in Frigate)
- Verify the process actually started and ran (check process logs)
For more information about memray and interpreting reports, see the [official memray documentation](https://bloomberg.github.io/memray/).

docs/package-lock.json (generated): file diff suppressed because it is too large.


@@ -18,14 +18,14 @@
},
"dependencies": {
"@docusaurus/core": "^3.7.0",
"@docusaurus/plugin-content-docs": "^3.6.3",
"@docusaurus/plugin-content-docs": "^3.7.0",
"@docusaurus/preset-classic": "^3.7.0",
"@docusaurus/theme-mermaid": "^3.6.3",
"@docusaurus/theme-mermaid": "^3.7.0",
"@inkeep/docusaurus": "^2.0.16",
"@mdx-js/react": "^3.1.0",
"clsx": "^2.1.1",
"docusaurus-plugin-openapi-docs": "^4.3.1",
"docusaurus-theme-openapi-docs": "^4.3.1",
"docusaurus-plugin-openapi-docs": "^4.5.1",
"docusaurus-theme-openapi-docs": "^4.5.1",
"prism-react-renderer": "^2.4.1",
"raw-loader": "^4.0.2",
"react": "^18.3.1",
@@ -44,9 +44,9 @@
]
},
"devDependencies": {
"@docusaurus/module-type-aliases": "^3.4.0",
"@docusaurus/types": "^3.4.0",
"@types/react": "^18.3.7"
"@docusaurus/module-type-aliases": "^3.7.0",
"@docusaurus/types": "^3.7.0",
"@types/react": "^18.3.27"
},
"engines": {
"node": ">=18.0"


@@ -131,6 +131,8 @@ const sidebars: SidebarsConfig = {
"troubleshooting/recordings",
"troubleshooting/gpu",
"troubleshooting/edgetpu",
"troubleshooting/memory",
"troubleshooting/dummy-camera",
],
Development: [
"development/contributing",


@@ -1,13 +1,18 @@
.alert {
  padding: 12px;
  background: #fff8e6;
  border-bottom: 1px solid #ffd166;
  text-align: center;
  font-size: 15px;
}

[data-theme="dark"] .alert {
  background: #3b2f0b;
  border-bottom: 1px solid #665c22;
}

.alert a {
  color: #1890ff;
  font-weight: 500;
  margin-left: 6px;
}


File diff suppressed because it is too large.


@@ -23,7 +23,7 @@ from markupsafe import escape
from peewee import SQL, fn, operator
from pydantic import ValidationError
from frigate.api.auth import allow_any_authenticated, allow_public, require_role
from frigate.api.defs.query.app_query_parameters import AppTimelineHourlyQueryParameters
from frigate.api.defs.request.app_body import AppConfigSetBody
from frigate.api.defs.tags import Tags
@@ -56,29 +56,33 @@ logger = logging.getLogger(__name__)
router = APIRouter(tags=[Tags.app])
@router.get("/", response_class=PlainTextResponse)
@router.get(
"/", response_class=PlainTextResponse, dependencies=[Depends(allow_public())]
)
def is_healthy():
return "Frigate is running. Alive and healthy!"
@router.get("/config/schema.json")
@router.get("/config/schema.json", dependencies=[Depends(allow_public())])
def config_schema(request: Request):
return Response(
content=request.app.frigate_config.schema_json(), media_type="application/json"
)
@router.get("/version", response_class=PlainTextResponse)
@router.get(
"/version", response_class=PlainTextResponse, dependencies=[Depends(allow_public())]
)
def version():
return VERSION
@router.get("/stats")
@router.get("/stats", dependencies=[Depends(allow_any_authenticated())])
def stats(request: Request):
return JSONResponse(content=request.app.stats_emitter.get_latest_stats())
@router.get("/stats/history")
@router.get("/stats/history", dependencies=[Depends(allow_any_authenticated())])
def stats_history(request: Request, keys: str = None):
if keys:
keys = keys.split(",")
@@ -86,7 +90,7 @@ def stats_history(request: Request, keys: str = None):
return JSONResponse(content=request.app.stats_emitter.get_stats_history(keys))
@router.get("/metrics")
@router.get("/metrics", dependencies=[Depends(allow_any_authenticated())])
def metrics(request: Request):
"""Expose Prometheus metrics endpoint and update metrics with latest stats"""
# Retrieve the latest statistics and update the Prometheus metrics
@@ -103,7 +107,7 @@ def metrics(request: Request):
return Response(content=content, media_type=content_type)
@router.get("/config")
@router.get("/config", dependencies=[Depends(allow_any_authenticated())])
def config(request: Request):
config_obj: FrigateConfig = request.app.frigate_config
config: dict[str, dict[str, Any]] = config_obj.model_dump(
@@ -209,7 +213,7 @@ def config_raw_paths(request: Request):
return JSONResponse(content=raw_paths)
@router.get("/config/raw")
@router.get("/config/raw", dependencies=[Depends(allow_any_authenticated())])
def config_raw():
config_file = find_config_file()
@@ -452,7 +456,7 @@ def config_set(request: Request, body: AppConfigSetBody):
)
@router.get("/vainfo")
@router.get("/vainfo", dependencies=[Depends(allow_any_authenticated())])
def vainfo():
vainfo = vainfo_hwaccel()
return JSONResponse(
@@ -472,12 +476,16 @@ def vainfo():
)
@router.get("/nvinfo")
@router.get("/nvinfo", dependencies=[Depends(allow_any_authenticated())])
def nvinfo():
return JSONResponse(content=get_nvidia_driver_info())
@router.get("/logs/{service}", tags=[Tags.logs])
@router.get(
"/logs/{service}",
tags=[Tags.logs],
dependencies=[Depends(allow_any_authenticated())],
)
async def logs(
service: str = Path(enum=["frigate", "nginx", "go2rtc"]),
download: Optional[str] = None,
@@ -585,7 +593,7 @@ def restart():
)
@router.get("/labels")
@router.get("/labels", dependencies=[Depends(allow_any_authenticated())])
def get_labels(camera: str = ""):
try:
if camera:
@@ -603,7 +611,7 @@ def get_labels(camera: str = ""):
return JSONResponse(content=labels)
@router.get("/sub_labels")
@router.get("/sub_labels", dependencies=[Depends(allow_any_authenticated())])
def get_sub_labels(split_joined: Optional[int] = None):
try:
events = Event.select(Event.sub_label).distinct()
@@ -634,7 +642,7 @@ def get_sub_labels(split_joined: Optional[int] = None):
return JSONResponse(content=sub_labels)
@router.get("/plus/models")
@router.get("/plus/models", dependencies=[Depends(allow_any_authenticated())])
def plusModels(request: Request, filterByCurrentModelDetector: bool = False):
if not request.app.frigate_config.plus_api.is_active():
return JSONResponse(
@@ -676,7 +684,9 @@ def plusModels(request: Request, filterByCurrentModelDetector: bool = False):
return JSONResponse(content=validModels)
@router.get("/recognized_license_plates")
@router.get(
"/recognized_license_plates", dependencies=[Depends(allow_any_authenticated())]
)
def get_recognized_license_plates(split_joined: Optional[int] = None):
try:
query = (
@@ -710,7 +720,7 @@ def get_recognized_license_plates(split_joined: Optional[int] = None):
return JSONResponse(content=recognized_license_plates)
@router.get("/timeline")
@router.get("/timeline", dependencies=[Depends(allow_any_authenticated())])
def timeline(camera: str = "all", limit: int = 100, source_id: Optional[str] = None):
clauses = []
@@ -747,7 +757,7 @@ def timeline(camera: str = "all", limit: int = 100, source_id: Optional[str] = N
return JSONResponse(content=[t for t in timeline])
@router.get("/timeline/hourly")
@router.get("/timeline/hourly", dependencies=[Depends(allow_any_authenticated())])
def hourly_timeline(params: AppTimelineHourlyQueryParameters = Depends()):
"""Get hourly summary for timeline."""
cameras = params.cameras


@@ -32,10 +32,164 @@ from frigate.models import User
logger = logging.getLogger(__name__)
def require_admin_by_default():
"""
Global admin requirement dependency for all endpoints by default.
This is set as the default dependency on the FastAPI app to ensure all
endpoints require admin access unless explicitly overridden with
allow_public(), allow_any_authenticated(), or require_role().
Port 5000 (internal) always has admin role set by the /auth endpoint,
so this check passes automatically for internal requests.
Certain paths are exempted from the global admin check because they must
be accessible before authentication (login, auth) or they have their own
route-level authorization dependencies that handle access control.
"""
# Paths that have route-level auth dependencies and should bypass global admin check
# These paths still have authorization - it's handled by their route-level dependencies
EXEMPT_PATHS = {
# Public auth endpoints (allow_public)
"/auth",
"/auth/first_time_login",
"/login",
"/logout",
# Authenticated user endpoints (allow_any_authenticated)
"/profile",
# Public info endpoints (allow_public)
"/",
"/version",
"/config/schema.json",
# Authenticated user endpoints (allow_any_authenticated)
"/metrics",
"/stats",
"/stats/history",
"/config",
"/config/raw",
"/vainfo",
"/nvinfo",
"/labels",
"/sub_labels",
"/plus/models",
"/recognized_license_plates",
"/timeline",
"/timeline/hourly",
"/recordings/storage",
"/recordings/summary",
"/recordings/unavailable",
"/go2rtc/streams",
"/event_ids",
"/events",
"/exports",
}
# Path prefixes that should be exempt (for paths with parameters)
EXEMPT_PREFIXES = (
"/logs/", # /logs/{service}
"/review", # /review, /review/{id}, /review/summary, /review_ids, etc.
"/reviews/", # /reviews/viewed, /reviews/delete
"/events/", # /events/{id}/thumbnail, /events/summary, etc. (camera-scoped)
"/export/", # /export/{camera}/start/..., /export/{id}/rename, /export/{id}
"/go2rtc/streams/", # /go2rtc/streams/{camera}
"/users/", # /users/{username}/password (has own auth)
"/preview/", # /preview/{file}/thumbnail.jpg
"/exports/", # /exports/{export_id}
"/vod/", # /vod/{camera_name}/...
"/notifications/", # /notifications/pubkey, /notifications/register
)
async def admin_checker(request: Request):
path = request.url.path
# Check exact path matches
if path in EXEMPT_PATHS:
return
# Check prefix matches for parameterized paths
if path.startswith(EXEMPT_PREFIXES):
return
# Dynamic camera path exemption:
# Any path whose first segment matches a configured camera name should
# bypass the global admin requirement. These endpoints enforce access
# via route-level dependencies (e.g. require_camera_access) to ensure
# per-camera authorization. This allows non-admin authenticated users
# (e.g. viewer role) to access camera-specific resources without
# needing admin privileges.
try:
if path.startswith("/"):
first_segment = path.split("/", 2)[1]
if (
first_segment
and first_segment in request.app.frigate_config.cameras
):
return
except Exception:
pass
# For all other paths, require admin role
# Port 5000 (internal) requests have admin role set automatically
role = request.headers.get("remote-role")
if role == "admin":
return
raise HTTPException(
status_code=403,
detail="Access denied. A user with the admin role is required.",
)
return admin_checker
def allow_public():
"""
Override dependency to allow unauthenticated access to an endpoint.
Use this for endpoints that should be publicly accessible without
authentication, such as login page, health checks, or pre-auth info.
Example:
@router.get("/public-endpoint", dependencies=[Depends(allow_public())])
"""
async def public_checker(request: Request):
return # Always allow
return public_checker
def allow_any_authenticated():
"""
Override dependency to allow any request that passed through the /auth endpoint.
Allows:
- Port 5000 internal requests (remote-user: "anonymous", remote-role: "admin")
- Authenticated users with JWT tokens (remote-user: username)
- Unauthenticated requests when auth is disabled (remote-user: "viewer")
Rejects:
- Requests with no remote-user header (did not pass through /auth endpoint)
Example:
@router.get("/authenticated-endpoint", dependencies=[Depends(allow_any_authenticated())])
"""
async def auth_checker(request: Request):
# Ensure a remote-user has been set by the /auth endpoint
username = request.headers.get("remote-user")
if username is None:
raise HTTPException(status_code=401, detail="Authentication required")
return
return auth_checker
router = APIRouter(tags=[Tags.auth])
@router.get("/auth/first_time_login")
@router.get("/auth/first_time_login", dependencies=[Depends(allow_public())])
def first_time_login(request: Request):
"""Return whether the admin first-time login help flag is set in config.
@@ -143,7 +297,10 @@ def get_jwt_secret() -> str:
)
jwt_secret = secrets.token_hex(64)
try:
fd = os.open(
jwt_secret_file, os.O_WRONLY | os.O_CREAT | os.O_EXCL, 0o600
)
with os.fdopen(fd, "w") as f:
f.write(str(jwt_secret))
except Exception:
logger.warning(
@@ -188,9 +345,35 @@ def verify_password(password, password_hash):
return secrets.compare_digest(password_hash, compare_hash)
def validate_password_strength(password: str) -> tuple[bool, Optional[str]]:
"""
Validate password strength.
Returns a tuple of (is_valid, error_message).
"""
if not password:
return False, "Password cannot be empty"
if len(password) < 8:
return False, "Password must be at least 8 characters long"
if not any(c.isupper() for c in password):
return False, "Password must contain at least one uppercase letter"
if not any(c.isdigit() for c in password):
return False, "Password must contain at least one digit"
if not any(c in '!@#$%^&*(),.?":{}|<>' for c in password):
return False, "Password must contain at least one special character"
return True, None
def create_encoded_jwt(user, role, expiration, secret):
return jwt.encode(
{"alg": "HS256"}, {"sub": user, "role": role, "exp": expiration}, secret
{"alg": "HS256"},
{"sub": user, "role": role, "exp": expiration, "iat": int(time.time())},
secret,
)
@@ -352,7 +535,37 @@ def resolve_role(
# Endpoints
@router.get("/auth")
@router.get(
"/auth",
dependencies=[Depends(allow_public())],
summary="Authenticate request",
description=(
"Authenticates the current request based on proxy headers or JWT token. "
"This endpoint verifies authentication credentials and manages JWT token refresh. "
"On success, no JSON body is returned; authentication state is communicated via response headers and cookies."
),
status_code=202,
responses={
202: {
"description": "Authentication Accepted (no response body)",
"headers": {
"remote-user": {
"description": 'Authenticated username or "viewer" in proxy-only mode',
"schema": {"type": "string"},
},
"remote-role": {
"description": "Resolved role (e.g., admin, viewer, or custom)",
"schema": {"type": "string"},
},
"Set-Cookie": {
"description": "May include refreshed JWT cookie when applicable",
"schema": {"type": "string"},
},
},
},
401: {"description": "Authentication Failed"},
},
)
def auth(request: Request):
auth_config: AuthConfig = request.app.frigate_config.auth
proxy_config: ProxyConfig = request.app.frigate_config.proxy
@@ -379,12 +592,12 @@ def auth(request: Request):
# if auth is disabled, just apply the proxy header map and return success
if not auth_config.enabled:
# pass the user header value from the upstream proxy if a mapping is specified
# or use viewer if none are specified
user_header = proxy_config.header_map.user
success_response.headers["remote-user"] = (
request.headers.get(user_header, default="anonymous")
request.headers.get(user_header, default="viewer")
if user_header
else "anonymous"
else "viewer"
)
# parse header and resolve a valid role
@@ -451,13 +664,27 @@ def auth(request: Request):
return fail_response
# if the jwt cookie is expiring soon
elif jwt_source == "cookie" and expiration - JWT_REFRESH <= current_time:
if jwt_source == "cookie" and expiration - JWT_REFRESH <= current_time:
logger.debug("jwt token expiring soon, refreshing cookie")
# Check if password has been changed since token was issued
# If so, force re-login by rejecting the refresh
try:
user_obj = User.get_by_id(user)
if user_obj.password_changed_at is not None:
token_iat = int(token.claims.get("iat", 0))
password_changed_timestamp = int(
user_obj.password_changed_at.timestamp()
)
if token_iat < password_changed_timestamp:
logger.debug(
"jwt token issued before password change, rejecting refresh"
)
return fail_response
except DoesNotExist:
logger.debug("user not found")
return fail_response
new_expiration = current_time + JWT_SESSION_LENGTH
new_encoded_jwt = create_encoded_jwt(
user, role, new_expiration, request.app.jwt_token
@@ -478,9 +705,14 @@ def auth(request: Request):
return fail_response
@router.get("/profile")
@router.get(
"/profile",
dependencies=[Depends(allow_any_authenticated())],
summary="Get user profile",
description="Returns the current authenticated user's profile including username, role, and allowed cameras. This endpoint requires authentication and returns information about the user's permissions.",
)
def profile(request: Request):
username = request.headers.get("remote-user", "anonymous")
username = request.headers.get("remote-user", "viewer")
role = request.headers.get("remote-role", "viewer")
all_camera_names = set(request.app.frigate_config.cameras.keys())
@@ -492,7 +724,12 @@ def profile(request: Request):
)
@router.get("/logout")
@router.get(
"/logout",
dependencies=[Depends(allow_public())],
summary="Logout user",
description="Logs out the current user by clearing the session cookie. After logout, subsequent requests will require re-authentication.",
)
def logout(request: Request):
auth_config: AuthConfig = request.app.frigate_config.auth
response = RedirectResponse("/login", status_code=303)
@@ -503,7 +740,12 @@ def logout(request: Request):
limiter = Limiter(key_func=get_remote_addr)
@router.post("/login")
@router.post(
"/login",
dependencies=[Depends(allow_public())],
summary="Login with credentials",
description='Authenticates a user with username and password. Returns a JWT token as a secure HTTP-only cookie that can be used for subsequent API requests. The JWT token can also be retrieved from the response and used as a Bearer token in the Authorization header.\n\nExample using Bearer token:\n```\ncurl -H "Authorization: Bearer <token_value>" https://frigate_ip:8971/api/profile\n```',
)
@limiter.limit(limit_value=rateLimiter.get_limit)
def login(request: Request, body: AppPostLoginBody):
JWT_COOKIE_NAME = request.app.frigate_config.auth.cookie_name
@@ -541,7 +783,12 @@ def login(request: Request, body: AppPostLoginBody):
return JSONResponse(content={"message": "Login failed"}, status_code=401)
@router.get("/users", dependencies=[Depends(require_role(["admin"]))])
@router.get(
"/users",
dependencies=[Depends(require_role(["admin"]))],
summary="Get all users",
description="Returns a list of all users with their usernames and roles. Requires admin role. Each user object contains the username and assigned role.",
)
def get_users():
exports = (
User.select(User.username, User.role).order_by(User.username).dicts().iterator()
@@ -549,7 +796,12 @@ def get_users():
return JSONResponse([e for e in exports])
@router.post("/users", dependencies=[Depends(require_role(["admin"]))])
@router.post(
"/users",
dependencies=[Depends(require_role(["admin"]))],
summary="Create new user",
description='Creates a new user with the specified username, password, and role. Requires admin role. Password must meet strength requirements: minimum 8 characters, at least one uppercase letter, at least one digit, and at least one special character (!@#$%^&*(),.?":{} |<>).',
)
def create_user(
request: Request,
body: AppPostUsersBody,
@@ -578,13 +830,29 @@ def create_user(
return JSONResponse(content={"username": body.username})
@router.delete("/users/{username}")
def delete_user(username: str):
@router.delete(
"/users/{username}",
dependencies=[Depends(require_role(["admin"]))],
summary="Delete user",
description="Deletes a user by username. The built-in admin user cannot be deleted. Requires admin role. Returns success message or error if user not found.",
)
def delete_user(request: Request, username: str):
# Prevent deletion of the built-in admin user
if username == "admin":
return JSONResponse(
content={"message": "Cannot delete admin user"}, status_code=403
)
User.delete_by_id(username)
return JSONResponse(content={"success": True})
@router.put("/users/{username}/password")
@router.put(
"/users/{username}/password",
dependencies=[Depends(allow_any_authenticated())],
summary="Update user password",
description="Updates a user's password. Users can only change their own password unless they have admin role. Requires the current password to verify identity for non-admin users. Password must meet strength requirements: minimum 8 characters, at least one uppercase letter, at least one digit, and at least one special character (!@#$%^&*(),.?\":{} |<>). If user changes their own password, a new JWT cookie is automatically issued.",
)
async def update_password(
request: Request,
username: str,
@@ -606,15 +874,66 @@ async def update_password(
HASH_ITERATIONS = request.app.frigate_config.auth.hash_iterations
try:
user = User.get_by_id(username)
except DoesNotExist:
return JSONResponse(content={"message": "User not found"}, status_code=404)
return JSONResponse(content={"success": True})
# Require old_password when non-admin user is changing any password
# Admin users changing passwords do NOT need to provide the current password
if current_role != "admin":
if not body.old_password:
return JSONResponse(
content={"message": "Current password is required"},
status_code=400,
)
if not verify_password(body.old_password, user.password_hash):
return JSONResponse(
content={"message": "Current password is incorrect"},
status_code=401,
)
# Validate new password strength
is_valid, error_message = validate_password_strength(body.password)
if not is_valid:
return JSONResponse(
content={"message": error_message},
status_code=400,
)
password_hash = hash_password(body.password, iterations=HASH_ITERATIONS)
User.update(
{
User.password_hash: password_hash,
User.password_changed_at: datetime.now(),
}
).where(User.username == username).execute()
response = JSONResponse(content={"success": True})
# If user changed their own password, issue a new JWT to keep them logged in
if current_username == username:
JWT_COOKIE_NAME = request.app.frigate_config.auth.cookie_name
JWT_COOKIE_SECURE = request.app.frigate_config.auth.cookie_secure
JWT_SESSION_LENGTH = request.app.frigate_config.auth.session_length
expiration = int(time.time()) + JWT_SESSION_LENGTH
encoded_jwt = create_encoded_jwt(
username, current_role, expiration, request.app.jwt_token
)
# Set new JWT cookie on response
set_jwt_cookie(
response, JWT_COOKIE_NAME, encoded_jwt, expiration, JWT_COOKIE_SECURE
)
return response
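The strength rules quoted in the endpoint descriptions map directly onto the validate_password_strength call above. A minimal sketch of such a validator, assuming the (is_valid, error_message) return shape used by the endpoint (exact messages are illustrative):

import re

def validate_password_strength(password: str) -> tuple[bool, str | None]:
    # Rules per the endpoint description: >= 8 chars, one uppercase,
    # one digit, one special character.
    if len(password) < 8:
        return False, "Password must be at least 8 characters long"
    if not re.search(r"[A-Z]", password):
        return False, "Password must contain at least one uppercase letter"
    if not re.search(r"\d", password):
        return False, "Password must contain at least one digit"
    if not re.search(r'[!@#$%^&*(),.?":{}|<>]', password):
        return False, "Password must contain at least one special character"
    return True, None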
@router.put(
"/users/{username}/role",
dependencies=[Depends(require_role(["admin"]))],
summary="Update user role",
description="Updates a user's role. The built-in admin user's role cannot be modified. Requires admin role. Valid roles are defined in the configuration.",
)
async def update_role(
request: Request,

View File

@@ -15,7 +15,11 @@ from onvif import ONVIFCamera, ONVIFError
from zeep.exceptions import Fault, TransportError
from zeep.transports import AsyncTransport
from frigate.api.auth import require_role
from frigate.api.auth import (
allow_any_authenticated,
require_camera_access,
require_role,
)
from frigate.api.defs.tags import Tags
from frigate.config.config import FrigateConfig
from frigate.util.builtin import clean_camera_user_pass
@@ -50,7 +54,7 @@ def _is_valid_host(host: str) -> bool:
return False
@router.get("/go2rtc/streams")
@router.get("/go2rtc/streams", dependencies=[Depends(allow_any_authenticated())])
def go2rtc_streams():
r = requests.get("http://127.0.0.1:1984/api/streams")
if not r.ok:
@@ -66,7 +70,9 @@ def go2rtc_streams():
return JSONResponse(content=stream_data)
@router.get("/go2rtc/streams/{camera_name}")
@router.get(
"/go2rtc/streams/{camera_name}", dependencies=[Depends(require_camera_access)]
)
def go2rtc_camera_stream(request: Request, camera_name: str):
r = requests.get(
f"http://127.0.0.1:1984/api/streams?src={camera_name}&video=all&audio=all&microphone"
@@ -161,7 +167,7 @@ def go2rtc_delete_stream(stream_name: str):
)
@router.get("/ffprobe")
@router.get("/ffprobe", dependencies=[Depends(require_role(["admin"]))])
def ffprobe(request: Request, paths: str = "", detailed: bool = False):
path_param = paths

View File

@@ -31,6 +31,7 @@ from frigate.api.defs.response.generic_response import GenericResponse
from frigate.api.defs.tags import Tags
from frigate.config import FrigateConfig
from frigate.config.camera import DetectConfig
from frigate.config.classification import ObjectClassificationType
from frigate.const import CLIPS_DIR, FACE_DIR, MODEL_CACHE_DIR
from frigate.embeddings import EmbeddingsContext
from frigate.models import Event
@@ -39,6 +40,7 @@ from frigate.util.classification import (
collect_state_classification_examples,
get_dataset_image_count,
read_training_metadata,
write_training_metadata,
)
from frigate.util.file import get_event_snapshot
@@ -622,6 +624,59 @@ def get_classification_dataset(name: str):
)
@router.get(
"/classification/attributes",
summary="Get custom classification attributes",
description="""Returns custom classification attributes for a given object type.
Only includes models with classification_type set to 'attribute'.
By default returns a flat sorted list of all attribute labels.
If group_by_model is true, returns attributes grouped by model name.""",
)
def get_custom_attributes(
request: Request, object_type: str = None, group_by_model: bool = False
):
models_with_attributes = {}
for (
model_key,
model_config,
) in request.app.frigate_config.classification.custom.items():
if (
not model_config.enabled
or not model_config.object_config
or model_config.object_config.classification_type
!= ObjectClassificationType.attribute
):
continue
model_objects = getattr(model_config.object_config, "objects", []) or []
if object_type is not None and object_type not in model_objects:
continue
dataset_dir = os.path.join(CLIPS_DIR, sanitize_filename(model_key), "dataset")
if not os.path.exists(dataset_dir):
continue
attributes = []
for category_name in os.listdir(dataset_dir):
category_dir = os.path.join(dataset_dir, category_name)
if os.path.isdir(category_dir) and category_name != "none":
attributes.append(category_name)
if attributes:
model_name = model_config.name or model_key
models_with_attributes[model_name] = sorted(attributes)
if group_by_model:
return JSONResponse(content=models_with_attributes)
else:
# Flatten to a unique sorted list
all_attributes = set()
for attributes in models_with_attributes.values():
all_attributes.update(attributes)
return JSONResponse(content=sorted(list(all_attributes)))
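For reference, the two response shapes of /classification/attributes differ only in grouping; a sketch with illustrative model and label names:

# group_by_model=false (default): flat, de-duplicated, sorted list
["amazon", "fedex", "ups"]

# group_by_model=true: labels grouped under each model's display name
{"delivery_classifier": ["amazon", "fedex", "ups"]}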
@router.get(
"/classification/{name}/train",
summary="Get classification train images",
@@ -710,7 +765,7 @@ def delete_classification_dataset_images(
if os.path.isfile(file_path):
os.unlink(file_path)
if os.path.exists(folder) and not os.listdir(folder):
if os.path.exists(folder) and not os.listdir(folder) and category.lower() != "none":
os.rmdir(folder)
return JSONResponse(
@@ -788,6 +843,12 @@ def rename_classification_category(
try:
os.rename(old_folder, new_folder)
# Mark dataset as ready to train by resetting training metadata
# This ensures the dataset is marked as changed after renaming
sanitized_name = sanitize_filename(name)
write_training_metadata(sanitized_name, 0)
return JSONResponse(
content=(
{
@@ -870,6 +931,46 @@ def categorize_classification_image(request: Request, name: str, body: dict = No
)
@router.post(
"/classification/{name}/dataset/{category}/create",
response_model=GenericResponse,
dependencies=[Depends(require_role(["admin"]))],
summary="Create an empty classification category folder",
description="""Creates an empty folder for a classification category.
This is used to create folders for categories that don't have images yet.
Returns a success message or an error if the name is invalid.""",
)
def create_classification_category(request: Request, name: str, category: str):
config: FrigateConfig = request.app.frigate_config
if name not in config.classification.custom:
return JSONResponse(
content=(
{
"success": False,
"message": f"{name} is not a known classification model.",
}
),
status_code=404,
)
category_folder = os.path.join(
CLIPS_DIR, sanitize_filename(name), "dataset", sanitize_filename(category)
)
os.makedirs(category_folder, exist_ok=True)
return JSONResponse(
content=(
{
"success": True,
"message": f"Successfully created category folder: {category}",
}
),
status_code=200,
)
@router.post(
"/classification/{name}/train/delete",
response_model=GenericResponse,

View File

@@ -12,6 +12,7 @@ class EventsQueryParams(BaseModel):
labels: Optional[str] = "all"
sub_label: Optional[str] = "all"
sub_labels: Optional[str] = "all"
attributes: Optional[str] = "all"
zone: Optional[str] = "all"
zones: Optional[str] = "all"
limit: Optional[int] = 100
@@ -58,6 +59,8 @@ class EventsSearchQueryParams(BaseModel):
limit: Optional[int] = 50
cameras: Optional[str] = "all"
labels: Optional[str] = "all"
sub_labels: Optional[str] = "all"
attributes: Optional[str] = "all"
zones: Optional[str] = "all"
after: Optional[float] = None
before: Optional[float] = None

View File

@@ -11,6 +11,7 @@ class AppConfigSetBody(BaseModel):
class AppPutPasswordBody(BaseModel):
password: str
old_password: Optional[str] = None
class AppPostUsersBody(BaseModel):

View File

@@ -24,12 +24,18 @@ class EventsLPRBody(BaseModel):
)
class EventsAttributesBody(BaseModel):
attributes: List[str] = Field(
title="Selected classification attributes for the event",
default_factory=list,
)
class EventsDescriptionBody(BaseModel):
description: Union[str, None] = Field(title="The description of the event")
class EventsCreateBody(BaseModel):
source_type: Optional[str] = "api"
sub_label: Optional[str] = None
score: Optional[float] = 0
duration: Optional[int] = 30

View File

@@ -1,4 +1,4 @@
from typing import Union
from typing import Optional, Union
from pydantic import BaseModel, Field
from pydantic.json_schema import SkipJsonSchema
@@ -16,5 +16,5 @@ class ExportRecordingsBody(BaseModel):
source: PlaybackSourceEnum = Field(
default=PlaybackSourceEnum.recordings, title="Playback source"
)
name: str = Field(title="Friendly name", default=None, max_length=256)
name: Optional[str] = Field(title="Friendly name", default=None, max_length=256)
image_path: Union[str, SkipJsonSchema[None]] = None
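The Optional change above is more than cosmetic: with pydantic v2, a bare str field with default=None accepts an omitted value but rejects an explicit null. A minimal sketch, assuming pydantic v2 semantics:

from typing import Optional
from pydantic import BaseModel, Field, ValidationError

class Strict(BaseModel):
    name: str = Field(default=None, max_length=256)

class Relaxed(BaseModel):
    name: Optional[str] = Field(default=None, max_length=256)

Relaxed(name=None)  # accepted: None is a valid value
try:
    Strict(name=None)  # rejected: explicit None fails str validation
except ValidationError as e:
    print(e.errors()[0]["type"])  # "string_type"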

View File

@@ -22,6 +22,7 @@ from peewee import JOIN, DoesNotExist, fn, operator
from playhouse.shortcuts import model_to_dict
from frigate.api.auth import (
allow_any_authenticated,
get_allowed_cameras_for_filter,
require_camera_access,
require_role,
@@ -36,6 +37,7 @@ from frigate.api.defs.query.regenerate_query_parameters import (
RegenerateQueryParameters,
)
from frigate.api.defs.request.events_body import (
EventsAttributesBody,
EventsCreateBody,
EventsDeleteBody,
EventsDescriptionBody,
@@ -54,6 +56,7 @@ from frigate.api.defs.response.event_response import (
from frigate.api.defs.response.generic_response import GenericResponse
from frigate.api.defs.tags import Tags
from frigate.comms.event_metadata_updater import EventMetadataTypeEnum
from frigate.config.classification import ObjectClassificationType
from frigate.const import CLIPS_DIR, TRIGGER_DIR
from frigate.embeddings import EmbeddingsContext
from frigate.models import Event, ReviewSegment, Timeline, Trigger
@@ -69,6 +72,7 @@ router = APIRouter(tags=[Tags.events])
@router.get(
"/events",
response_model=list[EventResponse],
dependencies=[Depends(allow_any_authenticated())],
summary="Get events",
description="Returns a list of events.",
)
@@ -97,6 +101,8 @@ def events(
if sub_labels == "all" and sub_label != "all":
sub_labels = sub_label
attributes = unquote(params.attributes)
zone = params.zone
zones = params.zones
@@ -185,6 +191,17 @@ def events(
sub_label_clause = reduce(operator.or_, sub_label_clauses)
clauses.append((sub_label_clause))
if attributes != "all":
# Custom classification results are stored as data[model_name] = result_value
filtered_attributes = attributes.split(",")
attribute_clauses = []
for attr in filtered_attributes:
attribute_clauses.append(Event.data.cast("text") % f'*:"{attr}"*')
attribute_clause = reduce(operator.or_, attribute_clauses)
clauses.append(attribute_clause)
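A note on the matching here: peewee's % operator compiles to GLOB on SQLite, so * is the wildcard; the pattern targets the JSON text form of Event.data, assuming SQLite's compact serialization with no space after the colon. Illustrative:

# Event.data as text: {"delivery_classifier":"ups","top_score":0.92}
Event.data.cast("text") % '*:"ups"*'  # matches any key valued exactly "ups"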
if recognized_license_plate != "all":
filtered_recognized_license_plates = recognized_license_plate.split(",")
@@ -343,7 +360,8 @@ def events(
@router.get(
"/events/explore",
response_model=list[EventResponse],
summary="Get summary of objects.",
dependencies=[Depends(allow_any_authenticated())],
summary="Get summary of objects",
description="""Gets a summary of objects from the database.
Returns a list of objects with a max of `limit` objects for each label.
""",
@@ -435,7 +453,8 @@ def events_explore(
@router.get(
"/event_ids",
response_model=list[EventResponse],
summary="Get events by ids.",
dependencies=[Depends(allow_any_authenticated())],
summary="Get events by ids",
description="""Gets events by a list of ids.
Returns a list of events.
""",
@@ -468,7 +487,8 @@ async def event_ids(ids: str, request: Request):
@router.get(
"/events/search",
summary="Search events.",
dependencies=[Depends(allow_any_authenticated())],
summary="Search events",
description="""Searches for events in the database.
Returns a list of events.
""",
@@ -487,6 +507,8 @@ def events_search(
# Filters
cameras = params.cameras
labels = params.labels
sub_labels = params.sub_labels
attributes = params.attributes
zones = params.zones
after = params.after
before = params.before
@@ -561,6 +583,38 @@ def events_search(
if labels != "all":
event_filters.append((Event.label << labels.split(",")))
if sub_labels != "all":
# use matching so joined sub labels are included
# for example a sub label 'bob' would get events
# with sub labels 'bob' and 'bob, john'
sub_label_clauses = []
filtered_sub_labels = sub_labels.split(",")
if "None" in filtered_sub_labels:
filtered_sub_labels.remove("None")
sub_label_clauses.append((Event.sub_label.is_null()))
for label in filtered_sub_labels:
sub_label_clauses.append(
(Event.sub_label.cast("text") == label)
) # include exact matches
# include this label when part of a list
sub_label_clauses.append((Event.sub_label.cast("text") % f"*{label},*"))
sub_label_clauses.append((Event.sub_label.cast("text") % f"*, {label}*"))
event_filters.append((reduce(operator.or_, sub_label_clauses)))
if attributes != "all":
# Custom classification results are stored as data[model_name] = result_value
filtered_attributes = attributes.split(",")
attribute_clauses = []
for attr in filtered_attributes:
attribute_clauses.append(Event.data.cast("text") % f'*:"{attr}"*')
event_filters.append(reduce(operator.or_, attribute_clauses))
if zones != "all":
zone_clauses = []
filtered_zones = zones.split(",")
@@ -808,7 +862,7 @@ def events_search(
return JSONResponse(content=processed_events)
@router.get("/events/summary")
@router.get("/events/summary", dependencies=[Depends(allow_any_authenticated())])
def events_summary(
params: EventsSummaryQueryParams = Depends(),
allowed_cameras: List[str] = Depends(get_allowed_cameras_for_filter),
@@ -918,7 +972,8 @@ def events_summary(
@router.get(
"/events/{event_id}",
response_model=EventResponse,
summary="Get event by id.",
dependencies=[Depends(allow_any_authenticated())],
summary="Get event by id",
description="Gets an event by its id.",
)
async def event(event_id: str, request: Request):
@@ -961,7 +1016,8 @@ def set_retain(event_id: str):
@router.post(
"/events/{event_id}/plus",
response_model=EventUploadPlusResponse,
summary="Send event to Frigate+.",
dependencies=[Depends(require_role(["admin"]))],
summary="Send event to Frigate+",
description="""Sends an event to Frigate+.
Returns a success message or an error if the event is not found.
""",
@@ -1101,6 +1157,7 @@ async def send_to_plus(request: Request, event_id: str, body: SubmitPlusBody = N
@router.put(
"/events/{event_id}/false_positive",
response_model=EventUploadPlusResponse,
dependencies=[Depends(require_role(["admin"]))],
summary="Submit false positive to Frigate+",
description="""Submit an event as a false positive to Frigate+.
This endpoint is the same as the standard Frigate+ submission endpoint,
@@ -1199,7 +1256,7 @@ async def false_positive(request: Request, event_id: str):
"/events/{event_id}/retain",
response_model=GenericResponse,
dependencies=[Depends(require_role(["admin"]))],
summary="Stop event from being retained indefinitely.",
summary="Stop event from being retained indefinitely",
description="""Stops an event from being retained indefinitely.
Returns a success message or an error if the event is not found.
NOTE: This is a legacy endpoint and is not supported in the frontend.
@@ -1228,7 +1285,7 @@ async def delete_retain(event_id: str, request: Request):
"/events/{event_id}/sub_label",
response_model=GenericResponse,
dependencies=[Depends(require_role(["admin"]))],
summary="Set event sub label.",
summary="Set event sub label",
description="""Sets an event's sub label.
Returns a success message or an error if the event is not found.
""",
@@ -1287,7 +1344,7 @@ async def set_sub_label(
"/events/{event_id}/recognized_license_plate",
response_model=GenericResponse,
dependencies=[Depends(require_role(["admin"]))],
summary="Set event license plate.",
summary="Set event license plate",
description="""Sets an event's license plate.
Returns a success message or an error if the event is not found.
""",
@@ -1343,11 +1400,112 @@ async def set_plate(
)
@router.post(
"/events/{event_id}/attributes",
response_model=GenericResponse,
dependencies=[Depends(require_role(["admin"]))],
summary="Set custom classification attributes",
description=(
"Sets an event's custom classification attributes for all attribute-type "
"models that apply to the event's object type."
),
)
async def set_attributes(
request: Request,
event_id: str,
body: EventsAttributesBody,
):
try:
event: Event = Event.get(Event.id == event_id)
await require_camera_access(event.camera, request=request)
except DoesNotExist:
return JSONResponse(
content=({"success": False, "message": f"Event {event_id} not found."}),
status_code=404,
)
object_type = event.label
selected_attributes = set(body.attributes or [])
applied_updates: list[dict[str, str | float | None]] = []
for (
model_key,
model_config,
) in request.app.frigate_config.classification.custom.items():
# Only apply to enabled attribute classifiers that target this object type
if (
not model_config.enabled
or not model_config.object_config
or model_config.object_config.classification_type
!= ObjectClassificationType.attribute
or object_type not in (model_config.object_config.objects or [])
):
continue
# Get available labels from dataset directory
dataset_dir = os.path.join(CLIPS_DIR, sanitize_filename(model_key), "dataset")
available_labels = set()
if os.path.exists(dataset_dir):
for category_name in os.listdir(dataset_dir):
category_dir = os.path.join(dataset_dir, category_name)
if os.path.isdir(category_dir):
available_labels.add(category_name)
if not available_labels:
logger.warning(
"No dataset found for custom attribute model %s at %s",
model_key,
dataset_dir,
)
continue
# Find all selected attributes that apply to this model
model_name = model_config.name or model_key
matching_attrs = selected_attributes & available_labels
if matching_attrs:
# Publish updates for each selected attribute
for attr in matching_attrs:
request.app.event_metadata_updater.publish(
(event_id, model_name, attr, 1.0),
EventMetadataTypeEnum.attribute.value,
)
applied_updates.append(
{"model": model_name, "label": attr, "score": 1.0}
)
else:
# Clear this model's attribute
request.app.event_metadata_updater.publish(
(event_id, model_name, None, None),
EventMetadataTypeEnum.attribute.value,
)
applied_updates.append({"model": model_name, "label": None, "score": None})
if len(applied_updates) == 0:
return JSONResponse(
content={
"success": False,
"message": "No matching attributes found for this object type.",
},
status_code=400,
)
return JSONResponse(
content={
"success": True,
"message": f"Updated {len(applied_updates)} attribute(s)",
"applied": applied_updates,
},
status_code=200,
)
@router.post(
"/events/{event_id}/description",
response_model=GenericResponse,
dependencies=[Depends(require_role(["admin"]))],
summary="Set event description.",
summary="Set event description",
description="""Sets an event's description.
Returns a success message or an error if the event is not found.
""",
@@ -1403,7 +1561,7 @@ async def set_description(
"/events/{event_id}/description/regenerate",
response_model=GenericResponse,
dependencies=[Depends(require_role(["admin"]))],
summary="Regenerate event description.",
summary="Regenerate event description",
description="""Regenerates an event's description.
Returns a success message or an error if the event is not found.
""",
@@ -1455,8 +1613,8 @@ async def regenerate_description(
@router.post(
"/description/generate",
response_model=GenericResponse,
# dependencies=[Depends(require_role(["admin"]))],
summary="Generate description embedding.",
dependencies=[Depends(require_role(["admin"]))],
summary="Generate description embedding",
description="""Generates an embedding for an event's description.
Returns a success message or an error if the event is not found.
""",
@@ -1521,7 +1679,7 @@ async def delete_single_event(event_id: str, request: Request) -> dict:
"/events/{event_id}",
response_model=GenericResponse,
dependencies=[Depends(require_role(["admin"]))],
summary="Delete event.",
summary="Delete event",
description="""Deletes an event from the database.
Returns a success message or an error if the event is not found.
""",
@@ -1536,7 +1694,7 @@ async def delete_event(request: Request, event_id: str):
"/events/",
response_model=EventMultiDeleteResponse,
dependencies=[Depends(require_role(["admin"]))],
summary="Delete events.",
summary="Delete events",
description="""Deletes a list of events from the database.
Returns a success message or an error if the events are not found.
""",
@@ -1570,7 +1728,7 @@ async def delete_events(request: Request, body: EventsDeleteBody):
"/events/{camera_name}/{label}/create",
response_model=EventCreateResponse,
dependencies=[Depends(require_role(["admin"]))],
summary="Create manual event.",
summary="Create manual event",
description="""Creates a manual event in the database.
Returns a success message or an error if the event is not found.
NOTES:
@@ -1612,7 +1770,7 @@ def create_event(
body.score,
body.sub_label,
body.duration,
body.source_type,
"api",
body.draw,
),
EventMetadataTypeEnum.manual_event_create.value,
@@ -1634,7 +1792,7 @@ def create_event(
"/events/{event_id}/end",
response_model=GenericResponse,
dependencies=[Depends(require_role(["admin"]))],
summary="End manual event.",
summary="End manual event",
description="""Ends a manual event.
Returns a success message or an error if the event is not found.
NOTE: This should only be used for manual events.
@@ -1644,10 +1802,27 @@ async def end_event(request: Request, event_id: str, body: EventsEndBody):
try:
event: Event = Event.get(Event.id == event_id)
await require_camera_access(event.camera, request=request)
if body.end_time is not None and body.end_time < event.start_time:
return JSONResponse(
content=(
{
"success": False,
"message": f"end_time ({body.end_time}) cannot be before start_time ({event.start_time}).",
}
),
status_code=400,
)
end_time = body.end_time or datetime.datetime.now().timestamp()
request.app.event_metadata_updater.publish(
(event_id, end_time), EventMetadataTypeEnum.manual_event_end.value
)
except DoesNotExist:
return JSONResponse(
content=({"success": False, "message": f"Event {event_id} not found."}),
status_code=404,
)
except Exception:
return JSONResponse(
content=(
@@ -1666,7 +1841,7 @@ async def end_event(request: Request, event_id: str, body: EventsEndBody):
"/trigger/embedding",
response_model=dict,
dependencies=[Depends(require_role(["admin"]))],
summary="Create trigger embedding.",
summary="Create trigger embedding",
description="""Creates a trigger embedding for a specific trigger.
Returns a success message or an error if the trigger is not found.
""",
@@ -1723,37 +1898,40 @@ def create_trigger_embedding(
if event.data.get("type") != "object":
return
if thumbnail := get_event_thumbnail_bytes(event):
cursor = context.db.execute_sql(
"""
SELECT thumbnail_embedding FROM vec_thumbnails WHERE id = ?
""",
[body.data],
# Get the thumbnail
thumbnail = get_event_thumbnail_bytes(event)
if thumbnail is None:
return JSONResponse(
content={
"success": False,
"message": f"Failed to get thumbnail for {body.data} for {body.type} trigger",
},
status_code=400,
)
row = cursor.fetchone() if cursor else None
# Try to reuse existing embedding from database
cursor = context.db.execute_sql(
"""
SELECT thumbnail_embedding FROM vec_thumbnails WHERE id = ?
""",
[body.data],
)
if row:
query_embedding = row[0]
embedding = np.frombuffer(query_embedding, dtype=np.float32)
row = cursor.fetchone() if cursor else None
if row:
query_embedding = row[0]
embedding = np.frombuffer(query_embedding, dtype=np.float32)
else:
# Extract valid thumbnail
thumbnail = get_event_thumbnail_bytes(event)
if thumbnail is None:
return JSONResponse(
content={
"success": False,
"message": f"Failed to get thumbnail for {body.data} for {body.type} trigger",
},
status_code=400,
)
# Generate new embedding
embedding = context.generate_image_embedding(
body.data, (base64.b64encode(thumbnail).decode("ASCII"))
)
if embedding is None:
if embedding is None or (
isinstance(embedding, (list, np.ndarray)) and len(embedding) == 0
):
return JSONResponse(
content={
"success": False,
@@ -1821,7 +1999,7 @@ def create_trigger_embedding(
"/trigger/embedding/{camera_name}/{name}",
response_model=dict,
dependencies=[Depends(require_role(["admin"]))],
summary="Update trigger embedding.",
summary="Update trigger embedding",
description="""Updates a trigger embedding for a specific trigger.
Returns a success message or an error if the trigger is not found.
""",
@@ -1888,7 +2066,9 @@ def update_trigger_embedding(
body.data, (base64.b64encode(thumbnail).decode("ASCII"))
)
if embedding is None:
if embedding is None or (
isinstance(embedding, (list, np.ndarray)) and len(embedding) == 0
):
return JSONResponse(
content={
"success": False,
@@ -1984,7 +2164,7 @@ def update_trigger_embedding(
"/trigger/embedding/{camera_name}/{name}",
response_model=dict,
dependencies=[Depends(require_role(["admin"]))],
summary="Delete trigger embedding.",
summary="Delete trigger embedding",
description="""Deletes a trigger embedding for a specific trigger.
Returns a success message or an error if the trigger is not found.
""",
@@ -2058,7 +2238,7 @@ def delete_trigger_embedding(
"/triggers/status/{camera_name}",
response_model=dict,
dependencies=[Depends(require_role(["admin"]))],
summary="Get triggers status.",
summary="Get triggers status",
description="""Gets the status of all triggers for a specific camera.
Returns a success message or an error if the camera is not found.
""",

View File

@@ -14,6 +14,7 @@ from peewee import DoesNotExist
from playhouse.shortcuts import model_to_dict
from frigate.api.auth import (
allow_any_authenticated,
get_allowed_cameras_for_filter,
require_camera_access,
require_role,
@@ -44,6 +45,7 @@ router = APIRouter(tags=[Tags.export])
@router.get(
"/exports",
response_model=ExportsResponse,
dependencies=[Depends(allow_any_authenticated())],
summary="Get exports",
description="""Gets all exports from the database for cameras the user has access to.
Returns a list of exports ordered by date (most recent first).""",
@@ -272,6 +274,7 @@ async def export_delete(event_id: str, request: Request):
@router.get(
"/exports/{export_id}",
response_model=ExportModel,
dependencies=[Depends(allow_any_authenticated())],
summary="Get a single export",
description="""Gets a specific export by ID. The user must have access to the camera
associated with the export.""",

View File

@@ -2,7 +2,7 @@ import logging
import re
from typing import Optional
from fastapi import FastAPI, Request
from fastapi import Depends, FastAPI, Request
from fastapi.responses import JSONResponse
from joserfc.jwk import OctKey
from playhouse.sqliteq import SqliteQueueDatabase
@@ -24,7 +24,7 @@ from frigate.api import (
preview,
review,
)
from frigate.api.auth import get_jwt_secret, limiter
from frigate.api.auth import get_jwt_secret, limiter, require_admin_by_default
from frigate.comms.event_metadata_updater import (
EventMetadataPublisher,
)
@@ -62,11 +62,15 @@ def create_fastapi_app(
stats_emitter: StatsEmitter,
event_metadata_updater: EventMetadataPublisher,
config_publisher: CameraConfigUpdatePublisher,
enforce_default_admin: bool = True,
):
logger.info("Starting FastAPI app")
app = FastAPI(
debug=False,
swagger_ui_parameters={"apisSorter": "alpha", "operationsSorter": "alpha"},
dependencies=[Depends(require_admin_by_default())]
if enforce_default_admin
else [],
)
# update the request_address with the x-forwarded-for header from nginx

View File

@@ -22,7 +22,11 @@ from pathvalidate import sanitize_filename
from peewee import DoesNotExist, fn, operator
from tzlocal import get_localzone_name
from frigate.api.auth import get_allowed_cameras_for_filter, require_camera_access
from frigate.api.auth import (
allow_any_authenticated,
get_allowed_cameras_for_filter,
require_camera_access,
)
from frigate.api.defs.query.media_query_parameters import (
Extension,
MediaEventsSnapshotQueryParams,
@@ -393,7 +397,7 @@ async def submit_recording_snapshot_to_plus(
)
@router.get("/recordings/storage")
@router.get("/recordings/storage", dependencies=[Depends(allow_any_authenticated())])
def get_recordings_storage_usage(request: Request):
recording_stats = request.app.stats_emitter.get_latest_stats()["service"][
"storage"
@@ -417,7 +421,7 @@ def get_recordings_storage_usage(request: Request):
return JSONResponse(content=camera_usages)
@router.get("/recordings/summary")
@router.get("/recordings/summary", dependencies=[Depends(allow_any_authenticated())])
def all_recordings_summary(
request: Request,
params: MediaRecordingsSummaryQueryParams = Depends(),
@@ -635,7 +639,11 @@ async def recordings(
return JSONResponse(content=list(recordings))
@router.get("/recordings/unavailable", response_model=list[dict])
@router.get(
"/recordings/unavailable",
response_model=list[dict],
dependencies=[Depends(allow_any_authenticated())],
)
async def no_recordings(
request: Request,
params: MediaRecordingsAvailabilityQueryParams = Depends(),
@@ -829,7 +837,19 @@ async def recording_clip(
dependencies=[Depends(require_camera_access)],
description="Returns an HLS playlist for the specified timestamp-range on the specified camera. Append /master.m3u8 or /index.m3u8 for HLS playback.",
)
async def vod_ts(camera_name: str, start_ts: float, end_ts: float):
async def vod_ts(
camera_name: str,
start_ts: float,
end_ts: float,
force_discontinuity: bool = False,
):
logger.debug(
"VOD: Generating VOD for %s from %s to %s with force_discontinuity=%s",
camera_name,
start_ts,
end_ts,
force_discontinuity,
)
recordings = (
Recordings.select(
Recordings.path,
@@ -854,6 +874,14 @@ async def vod_ts(camera_name: str, start_ts: float, end_ts: float):
recording: Recordings
for recording in recordings:
logger.debug(
"VOD: processing recording: %s start=%s end=%s duration=%s",
recording.path,
recording.start_time,
recording.end_time,
recording.duration,
)
clip = {"type": "source", "path": recording.path}
duration = int(recording.duration * 1000)
@@ -862,6 +890,11 @@ async def vod_ts(camera_name: str, start_ts: float, end_ts: float):
inpoint = int((start_ts - recording.start_time) * 1000)
clip["clipFrom"] = inpoint
duration -= inpoint
logger.debug(
"VOD: applied clipFrom %sms to %s",
inpoint,
recording.path,
)
# adjust end if recording.end_time is after end_ts
if recording.end_time > end_ts:
@@ -869,12 +902,23 @@ async def vod_ts(camera_name: str, start_ts: float, end_ts: float):
if duration < min_duration_ms:
# skip if the clip has no valid duration (too short to contain frames)
logger.debug(
"VOD: skipping recording %s - resulting duration %sms too short",
recording.path,
duration,
)
continue
if min_duration_ms <= duration < max_duration_ms:
clip["keyFrameDurations"] = [duration]
clips.append(clip)
durations.append(duration)
logger.debug(
"VOD: added clip %s duration_ms=%s clipFrom=%s",
recording.path,
duration,
clip.get("clipFrom"),
)
else:
logger.warning(f"Recording clip is missing or empty: {recording.path}")
@@ -894,7 +938,7 @@ async def vod_ts(camera_name: str, start_ts: float, end_ts: float):
return JSONResponse(
content={
"cache": hour_ago.timestamp() > start_ts,
"discontinuity": False,
"discontinuity": force_discontinuity,
"consistentSequenceMediaInfo": True,
"durations": durations,
"segment_duration": max(durations),
@@ -937,6 +981,7 @@ async def vod_hour(
@router.get(
"/vod/event/{event_id}",
dependencies=[Depends(allow_any_authenticated())],
description="Returns an HLS playlist for the specified object. Append /master.m3u8 or /index.m3u8 for HLS playback.",
)
async def vod_event(
@@ -977,6 +1022,19 @@ async def vod_event(
return vod_response
@router.get(
"/vod/clip/{camera_name}/start/{start_ts}/end/{end_ts}",
dependencies=[Depends(require_camera_access)],
description="Returns an HLS playlist for a timestamp range with HLS discontinuity enabled. Append /master.m3u8 or /index.m3u8 for HLS playback.",
)
async def vod_clip(
camera_name: str,
start_ts: float,
end_ts: float,
):
return await vod_ts(camera_name, start_ts, end_ts, force_discontinuity=True)
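A hedged usage sketch of the new clip endpoint; host, port, camera, and timestamps are illustrative, and the path prefix may differ by deployment:

import requests

base = "http://frigate.local:5000"  # illustrative
url = f"{base}/vod/clip/front_door/start/1735500000/end/1735500060/master.m3u8"
resp = requests.get(url)
print(resp.status_code, resp.headers.get("content-type"))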
@router.get(
"/events/{event_id}/snapshot.jpg",
description="Returns a snapshot image for the specified object id. NOTE: The query params only take affect while the event is in-progress. Once the event has ended the snapshot configuration is used.",
@@ -1053,7 +1111,10 @@ async def event_snapshot(
)
@router.get("/events/{event_id}/thumbnail.{extension}")
@router.get(
"/events/{event_id}/thumbnail.{extension}",
dependencies=[Depends(require_camera_access)],
)
async def event_thumbnail(
request: Request,
event_id: str,
@@ -1251,7 +1312,10 @@ def grid_snapshot(
)
@router.get("/events/{event_id}/snapshot-clean.webp")
@router.get(
"/events/{event_id}/snapshot-clean.webp",
dependencies=[Depends(require_camera_access)],
)
def event_snapshot_clean(request: Request, event_id: str, download: bool = False):
webp_bytes = None
try:
@@ -1375,7 +1439,9 @@ def event_snapshot_clean(request: Request, event_id: str, download: bool = False
)
@router.get("/events/{event_id}/clip.mp4")
@router.get(
"/events/{event_id}/clip.mp4", dependencies=[Depends(require_camera_access)]
)
async def event_clip(
request: Request,
event_id: str,
@@ -1403,7 +1469,9 @@ async def event_clip(
)
@router.get("/events/{event_id}/preview.gif")
@router.get(
"/events/{event_id}/preview.gif", dependencies=[Depends(require_camera_access)]
)
def event_preview(request: Request, event_id: str):
try:
event: Event = Event.get(Event.id == event_id)
@@ -1756,7 +1824,7 @@ def preview_mp4(
)
@router.get("/review/{event_id}/preview")
@router.get("/review/{event_id}/preview", dependencies=[Depends(require_camera_access)])
def review_preview(
request: Request,
event_id: str,
@@ -1782,8 +1850,12 @@ def review_preview(
return preview_mp4(request, review.camera, start_ts, end_ts)
@router.get("/preview/{file_name}/thumbnail.jpg")
@router.get("/preview/{file_name}/thumbnail.webp")
@router.get(
"/preview/{file_name}/thumbnail.jpg", dependencies=[Depends(require_camera_access)]
)
@router.get(
"/preview/{file_name}/thumbnail.webp", dependencies=[Depends(require_camera_access)]
)
def preview_thumbnail(file_name: str):
"""Get a thumbnail from the cached preview frames."""
if len(file_name) > 1000:

View File

@@ -5,11 +5,12 @@ import os
from typing import Any
from cryptography.hazmat.primitives import serialization
from fastapi import APIRouter, Request
from fastapi import APIRouter, Depends, Request
from fastapi.responses import JSONResponse
from peewee import DoesNotExist
from py_vapid import Vapid01, utils
from frigate.api.auth import allow_any_authenticated
from frigate.api.defs.tags import Tags
from frigate.const import CONFIG_DIR
from frigate.models import User
@@ -21,6 +22,7 @@ router = APIRouter(tags=[Tags.notifications])
@router.get(
"/notifications/pubkey",
dependencies=[Depends(allow_any_authenticated())],
summary="Get VAPID public key",
description="""Gets the VAPID public key for the notifications.
Returns the public key or an error if notifications are not enabled.
@@ -47,6 +49,7 @@ def get_vapid_pub_key(request: Request):
@router.post(
"/notifications/register",
dependencies=[Depends(allow_any_authenticated())],
summary="Register notifications",
description="""Registers a notifications subscription.
Returns a success message or an error if the subscription is not provided.

View File

@@ -5,10 +5,14 @@ import os
from datetime import datetime, timedelta, timezone
import pytz
from fastapi import APIRouter, Depends
from fastapi import APIRouter, Depends, HTTPException
from fastapi.responses import JSONResponse
from frigate.api.auth import require_camera_access
from frigate.api.auth import (
allow_any_authenticated,
get_allowed_cameras_for_filter,
require_camera_access,
)
from frigate.api.defs.response.preview_response import (
PreviewFramesResponse,
PreviewsResponse,
@@ -26,19 +30,32 @@ router = APIRouter(tags=[Tags.preview])
@router.get(
"/preview/{camera_name}/start/{start_ts}/end/{end_ts}",
response_model=PreviewsResponse,
dependencies=[Depends(require_camera_access)],
dependencies=[Depends(allow_any_authenticated())],
summary="Get preview clips for time range",
description="""Gets all preview clips for a specified camera and time range.
Returns a list of preview video clips that overlap with the requested time period,
ordered by start time. Use camera_name='all' to get previews from all cameras.
Returns an error if no previews are found.""",
)
def preview_ts(camera_name: str, start_ts: float, end_ts: float):
def preview_ts(
camera_name: str,
start_ts: float,
end_ts: float,
allowed_cameras: list[str] = Depends(get_allowed_cameras_for_filter),
):
"""Get all mp4 previews relevant for time period."""
if camera_name != "all":
camera_clause = Previews.camera == camera_name
if camera_name not in allowed_cameras:
raise HTTPException(status_code=403, detail="Access denied for camera")
camera_list = [camera_name]
else:
camera_clause = True
camera_list = allowed_cameras
if not camera_list:
return JSONResponse(
content={"success": False, "message": "No previews found."},
status_code=404,
)
previews = (
Previews.select(
@@ -53,7 +70,7 @@ def preview_ts(camera_name: str, start_ts: float, end_ts: float):
| Previews.end_time.between(start_ts, end_ts)
| ((start_ts > Previews.start_time) & (end_ts < Previews.end_time))
)
.where(camera_clause)
.where(Previews.camera << camera_list)
.order_by(Previews.start_time.asc())
.dicts()
.iterator()
@@ -88,14 +105,21 @@ def preview_ts(camera_name: str, start_ts: float, end_ts: float):
@router.get(
"/preview/{year_month}/{day}/{hour}/{camera_name}/{tz_name}",
response_model=PreviewsResponse,
dependencies=[Depends(require_camera_access)],
dependencies=[Depends(allow_any_authenticated())],
summary="Get preview clips for specific hour",
description="""Gets all preview clips for a specific hour in a given timezone.
Converts the provided date/time from the specified timezone to UTC and retrieves
all preview clips for that hour. Use camera_name='all' to get previews from all cameras.
The tz_name should be a timezone like 'America/New_York' (use commas instead of slashes).""",
)
def preview_hour(year_month: str, day: int, hour: int, camera_name: str, tz_name: str):
def preview_hour(
year_month: str,
day: int,
hour: int,
camera_name: str,
tz_name: str,
allowed_cameras: list[str] = Depends(get_allowed_cameras_for_filter),
):
"""Get all mp4 previews relevant for time period given the timezone"""
parts = year_month.split("-")
start_date = (
@@ -106,7 +130,7 @@ def preview_hour(year_month: str, day: int, hour: int, camera_name: str, tz_name
start_ts = start_date.timestamp()
end_ts = end_date.timestamp()
return preview_ts(camera_name, start_ts, end_ts)
return preview_ts(camera_name, start_ts, end_ts, allowed_cameras)
@router.get(

View File

@@ -14,6 +14,7 @@ from peewee import Case, DoesNotExist, IntegrityError, fn, operator
from playhouse.shortcuts import model_to_dict
from frigate.api.auth import (
allow_any_authenticated,
get_allowed_cameras_for_filter,
get_current_user,
require_camera_access,
@@ -43,7 +44,11 @@ logger = logging.getLogger(__name__)
router = APIRouter(tags=[Tags.review])
@router.get("/review", response_model=list[ReviewSegmentResponse])
@router.get(
"/review",
response_model=list[ReviewSegmentResponse],
dependencies=[Depends(allow_any_authenticated())],
)
async def review(
params: ReviewQueryParams = Depends(),
current_user: dict = Depends(get_current_user),
@@ -152,7 +157,11 @@ async def review(
return JSONResponse(content=[r for r in review_query])
@router.get("/review_ids", response_model=list[ReviewSegmentResponse])
@router.get(
"/review_ids",
response_model=list[ReviewSegmentResponse],
dependencies=[Depends(allow_any_authenticated())],
)
async def review_ids(request: Request, ids: str):
ids = ids.split(",")
@@ -186,7 +195,11 @@ async def review_ids(request: Request, ids: str):
)
@router.get("/review/summary", response_model=ReviewSummaryResponse)
@router.get(
"/review/summary",
response_model=ReviewSummaryResponse,
dependencies=[Depends(allow_any_authenticated())],
)
async def review_summary(
params: ReviewSummaryQueryParams = Depends(),
current_user: dict = Depends(get_current_user),
@@ -461,7 +474,11 @@ async def review_summary(
return JSONResponse(content=data)
@router.post("/reviews/viewed", response_model=GenericResponse)
@router.post(
"/reviews/viewed",
response_model=GenericResponse,
dependencies=[Depends(allow_any_authenticated())],
)
async def set_multiple_reviewed(
request: Request,
body: ReviewModifyMultipleBody,
@@ -560,7 +577,9 @@ def delete_reviews(body: ReviewModifyMultipleBody):
@router.get(
"/review/activity/motion", response_model=list[ReviewActivityMotionResponse]
"/review/activity/motion",
response_model=list[ReviewActivityMotionResponse],
dependencies=[Depends(allow_any_authenticated())],
)
def motion_activity(
params: ReviewActivityMotionQueryParams = Depends(),
@@ -644,7 +663,11 @@ def motion_activity(
return JSONResponse(content=normalized)
@router.get("/review/event/{event_id}", response_model=ReviewSegmentResponse)
@router.get(
"/review/event/{event_id}",
response_model=ReviewSegmentResponse,
dependencies=[Depends(allow_any_authenticated())],
)
async def get_review_from_event(request: Request, event_id: str):
try:
review = ReviewSegment.get(
@@ -659,7 +682,11 @@ async def get_review_from_event(request: Request, event_id: str):
)
@router.get("/review/{review_id}", response_model=ReviewSegmentResponse)
@router.get(
"/review/{review_id}",
response_model=ReviewSegmentResponse,
dependencies=[Depends(allow_any_authenticated())],
)
async def get_review(request: Request, review_id: str):
try:
review = ReviewSegment.get(ReviewSegment.id == review_id)
@@ -672,7 +699,11 @@ async def get_review(request: Request, review_id: str):
)
@router.delete("/review/{review_id}/viewed", response_model=GenericResponse)
@router.delete(
"/review/{review_id}/viewed",
response_model=GenericResponse,
dependencies=[Depends(allow_any_authenticated())],
)
async def set_not_reviewed(
review_id: str,
current_user: dict = Depends(get_current_user),
@@ -710,6 +741,7 @@ async def set_not_reviewed(
@router.post(
"/review/summarize/start/{start_ts}/end/{end_ts}",
dependencies=[Depends(allow_any_authenticated())],
description="Use GenAI to summarize review items over a period of time.",
)
def generate_review_summary(request: Request, start_ts: float, end_ts: float):

View File

@@ -100,6 +100,10 @@ class FrigateApp:
)
if (
config.semantic_search.enabled
or any(
c.objects.genai.enabled or c.review.genai.enabled
for c in config.cameras.values()
)
or config.lpr.enabled
or config.face_recognition.enabled
or len(config.classification.custom) > 0

View File

@@ -607,23 +607,27 @@ class Dispatcher:
)
self.publish(f"{camera_name}/snapshots/state", payload, retain=True)
def _on_ptz_command(self, camera_name: str, payload: str) -> None:
def _on_ptz_command(self, camera_name: str, payload: str | bytes) -> None:
"""Callback for ptz topic."""
try:
if "preset" in payload.lower():
preset: str = (
payload.decode("utf-8") if isinstance(payload, bytes) else payload
).lower()
if "preset" in preset:
command = OnvifCommandEnum.preset
param = payload.lower()[payload.index("_") + 1 :]
elif "move_relative" in payload.lower():
param = preset[preset.index("_") + 1 :]
elif "move_relative" in preset:
command = OnvifCommandEnum.move_relative
param = payload.lower()[payload.index("_") + 1 :]
param = preset[preset.index("_") + 1 :]
else:
command = OnvifCommandEnum[payload.lower()]
command = OnvifCommandEnum[preset]
param = ""
self.onvif.handle_command(camera_name, command, param)
logger.info(f"Setting ptz command to {command} for {camera_name}")
except KeyError as k:
logger.error(f"Invalid PTZ command {payload}: {k}")
logger.error(f"Invalid PTZ command {preset}: {k}")
def _on_birdseye_command(self, camera_name: str, payload: str) -> None:
"""Callback for birdseye topic."""

View File

@@ -225,7 +225,8 @@ class MqttClient(Communicator):
"birdseye_mode",
"review_alerts",
"review_detections",
"genai",
"object_descriptions",
"review_descriptions",
]
for name in self.config.cameras.keys():

View File

@@ -21,7 +21,7 @@ from frigate.config.camera.updater import (
CameraConfigUpdateEnum,
CameraConfigUpdateSubscriber,
)
from frigate.const import CONFIG_DIR
from frigate.const import BASE_DIR, CONFIG_DIR
from frigate.models import User
logger = logging.getLogger(__name__)
@@ -371,14 +371,39 @@ class WebPushClient(Communicator):
sorted_objects.update(payload["after"]["data"]["sub_labels"])
image = f"{payload['after']['thumb_path'].replace('/media/frigate', '')}"
image = f"{payload['after']['thumb_path'].replace(BASE_DIR, '')}"
ended = state == "end" or state == "genai"
if state == "genai" and payload["after"]["data"]["metadata"]:
title = payload["after"]["data"]["metadata"]["title"]
base_title = payload["after"]["data"]["metadata"]["title"]
threat_level = payload["after"]["data"]["metadata"].get(
"potential_threat_level", 0
)
# Add prefix for threat levels 1 and 2
if threat_level == 1:
title = f"Needs Review: {base_title}"
elif threat_level == 2:
title = f"Security Concern: {base_title}"
else:
title = base_title
message = payload["after"]["data"]["metadata"]["scene"]
else:
title = f"{titlecase(', '.join(sorted_objects).replace('_', ' '))}{' was' if state == 'end' else ''} detected in {titlecase(', '.join(payload['after']['data']['zones']).replace('_', ' '))}"
zone_names = payload["after"]["data"]["zones"]
formatted_zone_names = []
for zone_name in zone_names:
if zone_name in self.config.cameras[camera].zones:
formatted_zone_names.append(
self.config.cameras[camera]
.zones[zone_name]
.get_formatted_name(zone_name)
)
else:
formatted_zone_names.append(titlecase(zone_name.replace("_", " ")))
title = f"{titlecase(', '.join(sorted_objects).replace('_', ' '))}{' was' if state == 'end' else ''} detected in {', '.join(formatted_zone_names)}"
message = f"Detected on {camera_name}"
if ended:

View File

@@ -20,7 +20,7 @@ class AuthConfig(FrigateBaseModel):
default=86400, title="Session length for jwt session tokens", ge=60
)
refresh_time: int = Field(
default=43200,
default=1800,
title="Refresh the session if it is going to expire in this many seconds",
ge=30,
)

View File

@@ -105,6 +105,11 @@ class CustomClassificationConfig(FrigateBaseModel):
threshold: float = Field(
default=0.8, title="Classification score threshold to change the state."
)
save_attempts: int | None = Field(
default=None,
title="Number of classification attempts to save in the recent classifications tab. If not specified, defaults to 200 for object classification and 100 for state classification.",
ge=0,
)
object_config: CustomClassificationObjectConfig | None = Field(default=None)
state_config: CustomClassificationStateConfig | None = Field(default=None)

View File

@@ -37,9 +37,6 @@ class UIConfig(FrigateBaseModel):
time_style: DateTimeStyleEnum = Field(
default=DateTimeStyleEnum.medium, title="Override UI timeStyle."
)
strftime_fmt: Optional[str] = Field(
default=None, title="Override date and time format using strftime syntax."
)
unit_system: UnitSystemEnum = Field(
default=UnitSystemEnum.metric, title="The unit system to use for measurements."
)

View File

@@ -77,6 +77,9 @@ FFMPEG_HWACCEL_RKMPP = "preset-rkmpp"
FFMPEG_HWACCEL_AMF = "preset-amd-amf"
FFMPEG_HVC1_ARGS = ["-tag:v", "hvc1"]
# RKNN constants
SUPPORTED_RK_SOCS = ["rk3562", "rk3566", "rk3568", "rk3576", "rk3588"]
# Regex constants
REGEX_CAMERA_NAME = r"^[a-zA-Z0-9_-]+$"

View File

@@ -374,6 +374,9 @@ class LicensePlateProcessingMixin:
combined_plate = re.sub(
pattern, replacement, combined_plate
)
logger.debug(
f"{camera}: Processing replace rule: '{pattern}' -> '{replacement}', result: '{combined_plate}'"
)
except re.error as e:
logger.warning(
f"{camera}: Invalid regex in replace_rules '{pattern}': {e}"
@@ -381,7 +384,7 @@ class LicensePlateProcessingMixin:
if combined_plate != original_combined:
logger.debug(
f"{camera}: Rules applied: '{original_combined}' -> '{combined_plate}'"
f"{camera}: All rules applied: '{original_combined}' -> '{combined_plate}'"
)
# Compute the combined area for qualifying boxes

View File

@@ -131,8 +131,9 @@ class AudioTranscriptionPostProcessor(PostProcessorApi):
},
)
# Embed the description
self.embeddings.embed_description(event_id, transcription)
# Embed the description if semantic search is enabled
if self.config.semantic_search.enabled:
self.embeddings.embed_description(event_id, transcription)
except DoesNotExist:
logger.debug("No recording found for audio transcription post-processing")

View File

@@ -131,6 +131,8 @@ class ObjectDescriptionProcessor(PostProcessorApi):
)
):
self._process_genai_description(event, camera_config, thumbnail)
else:
self.cleanup_event(event.id)
def __regenerate_description(self, event_id: str, source: str, force: bool) -> None:
"""Regenerate the description for an event."""
@@ -204,6 +206,17 @@ class ObjectDescriptionProcessor(PostProcessorApi):
)
return None
def cleanup_event(self, event_id: str) -> None:
"""Clean up tracked event data to prevent memory leaks.
This should be called when an event ends, regardless of whether
genai processing is triggered.
"""
if event_id in self.tracked_events:
del self.tracked_events[event_id]
if event_id in self.early_request_sent:
del self.early_request_sent[event_id]
def _read_and_crop_snapshot(self, event: Event) -> bytes | None:
"""Read, decode, and crop the snapshot image."""
@@ -299,9 +312,8 @@ class ObjectDescriptionProcessor(PostProcessorApi):
),
).start()
# Delete tracked events based on the event_id
if event.id in self.tracked_events:
del self.tracked_events[event.id]
# Clean up tracked events and early request state
self.cleanup_event(event.id)
def _genai_embed_description(self, event: Event, thumbnails: list[bytes]) -> None:
"""Embed the description for an event."""

View File

@@ -12,6 +12,7 @@ from typing import Any
import cv2
from peewee import DoesNotExist
from titlecase import titlecase
from frigate.comms.embeddings_updater import EmbeddingsRequestEnum
from frigate.comms.inter_process import InterProcessRequestor
@@ -208,10 +209,22 @@ class ReviewDescriptionProcessor(PostProcessorApi):
logger.debug(
f"Found GenAI Review Summary request for {start_ts} to {end_ts}"
)
items: list[dict[str, Any]] = [
r["data"]["metadata"]
# Query all review segments with camera and time information
segments: list[dict[str, Any]] = [
{
"camera": r["camera"].replace("_", " ").title(),
"start_time": r["start_time"],
"end_time": r["end_time"],
"metadata": r["data"]["metadata"],
}
for r in (
ReviewSegment.select(ReviewSegment.data)
ReviewSegment.select(
ReviewSegment.camera,
ReviewSegment.start_time,
ReviewSegment.end_time,
ReviewSegment.data,
)
.where(
(ReviewSegment.data["metadata"].is_null(False))
& (ReviewSegment.start_time < end_ts)
@@ -223,21 +236,72 @@ class ReviewDescriptionProcessor(PostProcessorApi):
)
]
if len(items) == 0:
if len(segments) == 0:
logger.debug("No review items with metadata found during time period")
return "No activity was found during this time."
return "No activity was found during this time period."
important_items = list(
filter(
lambda item: item.get("potential_threat_level", 0) > 0
or item.get("other_concerns"),
items,
)
)
# Identify primary items (important items that need review)
primary_segments = [
seg
for seg in segments
if seg["metadata"].get("potential_threat_level", 0) > 0
or seg["metadata"].get("other_concerns")
]
if not important_items:
if not primary_segments:
return "No concerns were found during this time period."
# Build hierarchical structure: each primary event with its contextual items
events_with_context = []
for primary_seg in primary_segments:
# Start building the primary event structure
primary_item = copy.deepcopy(primary_seg["metadata"])
primary_item["camera"] = primary_seg["camera"]
primary_item["start_time"] = primary_seg["start_time"]
primary_item["end_time"] = primary_seg["end_time"]
# Find overlapping contextual items from other cameras
primary_start = primary_seg["start_time"]
primary_end = primary_seg["end_time"]
primary_camera = primary_seg["camera"]
contextual_items = []
seen_contextual_cameras = set()
for seg in segments:
seg_camera = seg["camera"]
if seg_camera == primary_camera:
continue
if seg in primary_segments:
continue
seg_start = seg["start_time"]
seg_end = seg["end_time"]
if seg_start < primary_end and primary_start < seg_end:
# Avoid duplicates if same camera has multiple overlapping segments
if seg_camera not in seen_contextual_cameras:
contextual_item = copy.deepcopy(seg["metadata"])
contextual_item["camera"] = seg_camera
contextual_item["start_time"] = seg_start
contextual_item["end_time"] = seg_end
contextual_items.append(contextual_item)
seen_contextual_cameras.add(seg_camera)
# Add context array to primary item
primary_item["context"] = contextual_items
events_with_context.append(primary_item)
total_context_items = sum(
len(event.get("context", [])) for event in events_with_context
)
logger.debug(
f"Summary includes {len(events_with_context)} primary events with "
f"{total_context_items} total contextual items"
)
if self.config.review.genai.debug_save_thumbnails:
Path(
os.path.join(CLIPS_DIR, "genai-requests", f"{start_ts}-{end_ts}")
@@ -246,7 +310,8 @@ class ReviewDescriptionProcessor(PostProcessorApi):
return self.genai_client.generate_review_summary(
start_ts,
end_ts,
important_items,
events_with_context,
self.config.review.genai.preferred_language,
self.config.review.genai.debug_save_thumbnails,
)
else:
@@ -455,14 +520,14 @@ def run_analysis(
for i, verified_label in enumerate(final_data["data"]["verified_objects"]):
object_type = verified_label.replace("-verified", "").replace("_", " ")
name = sub_labels_list[i].replace("_", " ").title()
name = titlecase(sub_labels_list[i].replace("_", " "))
unified_objects.append(f"{name} ({object_type})")
for label in objects_list:
if "-verified" in label:
continue
elif label in labelmap_objects:
object_type = label.replace("_", " ").title()
object_type = titlecase(label.replace("_", " "))
if label in attribute_labels:
unified_objects.append(f"{object_type} (delivery/service)")

View File

@@ -13,7 +13,7 @@ from frigate.comms.event_metadata_updater import (
)
from frigate.config import FrigateConfig
from frigate.const import MODEL_CACHE_DIR
from frigate.log import redirect_output_to_logger
from frigate.log import suppress_stderr_during
from frigate.util.object import calculate_region
from ..types import DataProcessorMetrics
@@ -80,13 +80,14 @@ class BirdRealTimeProcessor(RealTimeProcessorApi):
except Exception as e:
logger.error(f"Failed to download {path}: {e}")
@redirect_output_to_logger(logger, logging.DEBUG)
def __build_detector(self) -> None:
self.interpreter = Interpreter(
model_path=os.path.join(MODEL_CACHE_DIR, "bird/bird.tflite"),
num_threads=2,
)
self.interpreter.allocate_tensors()
# Suppress TFLite delegate creation messages that bypass Python logging
with suppress_stderr_during("tflite_interpreter_init"):
self.interpreter = Interpreter(
model_path=os.path.join(MODEL_CACHE_DIR, "bird/bird.tflite"),
num_threads=2,
)
self.interpreter.allocate_tensors()
self.tensor_input_details = self.interpreter.get_input_details()
self.tensor_output_details = self.interpreter.get_output_details()
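suppress_stderr_during itself is not shown in this diff; a minimal sketch of the fd-level technique such a helper typically relies on (the label argument is assumed to be for debugging only):

import contextlib
import os
import sys

@contextlib.contextmanager
def suppress_stderr_during(label: str):
    # Redirect fd 2 at the OS level so native-library output (e.g. TFLite
    # delegate messages) is silenced, not just Python's sys.stderr.
    stderr_fd = sys.stderr.fileno()
    saved_fd = os.dup(stderr_fd)
    devnull = os.open(os.devnull, os.O_WRONLY)
    try:
        os.dup2(devnull, stderr_fd)
        yield
    finally:
        os.dup2(saved_fd, stderr_fd)
        os.close(devnull)
        os.close(saved_fd)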

View File

@@ -21,7 +21,7 @@ from frigate.config.classification import (
ObjectClassificationType,
)
from frigate.const import CLIPS_DIR, MODEL_CACHE_DIR
from frigate.log import redirect_output_to_logger
from frigate.log import suppress_stderr_during
from frigate.types import TrackedObjectUpdateTypesEnum
from frigate.util.builtin import EventsPerSecond, InferenceSpeed, load_labels
from frigate.util.object import box_overlaps, calculate_region
@@ -52,7 +52,7 @@ class CustomStateClassificationProcessor(RealTimeProcessorApi):
self.requestor = requestor
self.model_dir = os.path.join(MODEL_CACHE_DIR, self.model_config.name)
self.train_dir = os.path.join(CLIPS_DIR, self.model_config.name, "train")
self.interpreter: Interpreter | None = None
self.interpreter: Interpreter = None
self.tensor_input_details: dict[str, Any] | None = None
self.tensor_output_details: dict[str, Any] | None = None
self.labelmap: dict[int, str] = {}
@@ -72,8 +72,12 @@ class CustomStateClassificationProcessor(RealTimeProcessorApi):
self.last_run = datetime.datetime.now().timestamp()
self.__build_detector()
@redirect_output_to_logger(logger, logging.DEBUG)
def __build_detector(self) -> None:
try:
from tflite_runtime.interpreter import Interpreter
except ModuleNotFoundError:
from tensorflow.lite.python.interpreter import Interpreter
model_path = os.path.join(self.model_dir, "model.tflite")
labelmap_path = os.path.join(self.model_dir, "labelmap.txt")
@@ -84,11 +88,13 @@ class CustomStateClassificationProcessor(RealTimeProcessorApi):
self.labelmap = {}
return
self.interpreter = Interpreter(
model_path=model_path,
num_threads=2,
)
self.interpreter.allocate_tensors()
# Suppress TFLite delegate creation messages that bypass Python logging
with suppress_stderr_during("tflite_interpreter_init"):
self.interpreter = Interpreter(
model_path=model_path,
num_threads=2,
)
self.interpreter.allocate_tensors()
self.tensor_input_details = self.interpreter.get_input_details()
self.tensor_output_details = self.interpreter.get_output_details()
self.labelmap = load_labels(labelmap_path, prefill=0)
@@ -99,6 +105,42 @@ class CustomStateClassificationProcessor(RealTimeProcessorApi):
if self.inference_speed:
self.inference_speed.update(duration)
def _should_save_image(
self, camera: str, detected_state: str, score: float = 1.0
) -> bool:
"""
Determine if we should save the image for training.
Save when:
- State is changing or being verified (regardless of score)
- Score is less than 100% (even if state matches, useful for training)
Don't save when:
- State is stable (matches current_state) AND score is 100%
"""
if camera not in self.state_history:
# First detection for this camera, save it
return True
verification = self.state_history[camera]
current_state = verification.get("current_state")
pending_state = verification.get("pending_state")
# Save if there's a pending state change being verified
if pending_state is not None:
return True
# Save if the detected state differs from the current verified state
# (state is changing)
if current_state is not None and detected_state != current_state:
return True
# If score is less than 100%, save even if state matches
# (useful for training to improve confidence)
if score < 1.0:
return True
# Don't save if state is stable (detected_state == current_state) AND score is 100%
return False
def verify_state_change(self, camera: str, detected_state: str) -> str | None:
"""
Verifying a state change requires 3 consecutive identical states before publishing.
@@ -188,38 +230,52 @@ class CustomStateClassificationProcessor(RealTimeProcessorApi):
if not should_run:
return
x, y, x2, y2 = calculate_region(
frame.shape,
crop[0],
crop[1],
crop[2],
crop[3],
224,
1.0,
)
rgb = cv2.cvtColor(frame, cv2.COLOR_YUV2RGB_I420)
frame = rgb[
y:y2,
x:x2,
]
height, width = rgb.shape[:2]
if frame.shape != (224, 224):
try:
resized_frame = cv2.resize(frame, (224, 224))
except Exception:
logger.warning("Failed to resize image for state classification")
return
# Convert normalized crop coordinates to pixel values
x1 = int(camera_config.crop[0] * width)
y1 = int(camera_config.crop[1] * height)
x2 = int(camera_config.crop[2] * width)
y2 = int(camera_config.crop[3] * height)
# Clip coordinates to frame boundaries
x1 = max(0, min(x1, width))
y1 = max(0, min(y1, height))
x2 = max(0, min(x2, width))
y2 = max(0, min(y2, height))
if x2 <= x1 or y2 <= y1:
logger.warning(
f"Invalid crop coordinates for {camera}: [{x1}, {y1}, {x2}, {y2}]"
)
return
frame = rgb[y1:y2, x1:x2]
try:
resized_frame = cv2.resize(frame, (224, 224))
except Exception:
logger.warning("Failed to resize image for state classification")
return
if self.interpreter is None:
write_classification_attempt(
self.train_dir,
cv2.cvtColor(frame, cv2.COLOR_RGB2BGR),
"none-none",
now,
"unknown",
0.0,
)
# When interpreter is None, always save (score is 0.0, which is < 1.0)
if self._should_save_image(camera, "unknown", 0.0):
save_attempts = (
self.model_config.save_attempts
if self.model_config.save_attempts is not None
else 100
)
write_classification_attempt(
self.train_dir,
cv2.cvtColor(frame, cv2.COLOR_RGB2BGR),
"none-none",
now,
"unknown",
0.0,
max_files=save_attempts,
)
return
input = np.expand_dims(resized_frame, axis=0)
@@ -236,14 +292,23 @@ class CustomStateClassificationProcessor(RealTimeProcessorApi):
score = round(probs[best_id], 2)
self.__update_metrics(datetime.datetime.now().timestamp() - now)
write_classification_attempt(
self.train_dir,
cv2.cvtColor(frame, cv2.COLOR_RGB2BGR),
"none-none",
now,
self.labelmap[best_id],
score,
)
detected_state = self.labelmap[best_id]
if self._should_save_image(camera, detected_state, score):
save_attempts = (
self.model_config.save_attempts
if self.model_config.save_attempts is not None
else 100
)
write_classification_attempt(
self.train_dir,
cv2.cvtColor(frame, cv2.COLOR_RGB2BGR),
"none-none",
now,
detected_state,
score,
max_files=save_attempts,
)
if score < self.model_config.threshold:
logger.debug(
@@ -251,7 +316,6 @@ class CustomStateClassificationProcessor(RealTimeProcessorApi):
)
return
detected_state = self.labelmap[best_id]
verified_state = self.verify_state_change(camera, detected_state)
if verified_state is not None:
@@ -293,7 +357,7 @@ class CustomObjectClassificationProcessor(RealTimeProcessorApi):
self.model_config = model_config
self.model_dir = os.path.join(MODEL_CACHE_DIR, self.model_config.name)
self.train_dir = os.path.join(CLIPS_DIR, self.model_config.name, "train")
self.interpreter: Interpreter | None = None
self.interpreter: Interpreter = None
self.sub_label_publisher = sub_label_publisher
self.requestor = requestor
self.tensor_input_details: dict[str, Any] | None = None
@@ -314,7 +378,6 @@ class CustomObjectClassificationProcessor(RealTimeProcessorApi):
self.__build_detector()
@redirect_output_to_logger(logger, logging.DEBUG)
def __build_detector(self) -> None:
model_path = os.path.join(self.model_dir, "model.tflite")
labelmap_path = os.path.join(self.model_dir, "labelmap.txt")
@@ -326,11 +389,13 @@ class CustomObjectClassificationProcessor(RealTimeProcessorApi):
self.labelmap = {}
return
self.interpreter = Interpreter(
model_path=model_path,
num_threads=2,
)
self.interpreter.allocate_tensors()
# Suppress TFLite delegate creation messages that bypass Python logging
with suppress_stderr_during("tflite_interpreter_init"):
self.interpreter = Interpreter(
model_path=model_path,
num_threads=2,
)
self.interpreter.allocate_tensors()
self.tensor_input_details = self.interpreter.get_input_details()
self.tensor_output_details = self.interpreter.get_output_details()
self.labelmap = load_labels(labelmap_path, prefill=0)
@@ -405,9 +470,6 @@ class CustomObjectClassificationProcessor(RealTimeProcessorApi):
if obj_data.get("end_time") is not None:
return
if obj_data.get("stationary"):
return
object_id = obj_data["id"]
if (
@@ -445,6 +507,11 @@ class CustomObjectClassificationProcessor(RealTimeProcessorApi):
return
if self.interpreter is None:
save_attempts = (
self.model_config.save_attempts
if self.model_config.save_attempts is not None
else 200
)
write_classification_attempt(
self.train_dir,
cv2.cvtColor(crop, cv2.COLOR_RGB2BGR),
@@ -452,7 +519,15 @@ class CustomObjectClassificationProcessor(RealTimeProcessorApi):
now,
"unknown",
0.0,
max_files=save_attempts,
)
# Still track history even when model doesn't exist to respect MAX_OBJECT_CLASSIFICATIONS
# Add an entry with "unknown" label so the history limit is enforced
if object_id not in self.classification_history:
self.classification_history[object_id] = []
self.classification_history[object_id].append(("unknown", 0.0, now))
return
input = np.expand_dims(resized_crop, axis=0)
@@ -469,6 +544,11 @@ class CustomObjectClassificationProcessor(RealTimeProcessorApi):
score = round(probs[best_id], 2)
self.__update_metrics(datetime.datetime.now().timestamp() - now)
save_attempts = (
self.model_config.save_attempts
if self.model_config.save_attempts is not None
else 200
)
write_classification_attempt(
self.train_dir,
cv2.cvtColor(crop, cv2.COLOR_RGB2BGR),
@@ -476,7 +556,7 @@ class CustomObjectClassificationProcessor(RealTimeProcessorApi):
now,
self.labelmap[best_id],
score,
max_files=200,
max_files=save_attempts,
)
if score < self.model_config.threshold:
@@ -579,15 +659,15 @@ def write_classification_attempt(
os.makedirs(folder, exist_ok=True)
cv2.imwrite(file, frame)
files = sorted(
filter(lambda f: (f.endswith(".webp")), os.listdir(folder)),
key=lambda f: os.path.getctime(os.path.join(folder, f)),
reverse=True,
)
# delete oldest classification image if maximum is reached
try:
files = sorted(
filter(lambda f: (f.endswith(".webp")), os.listdir(folder)),
key=lambda f: os.path.getctime(os.path.join(folder, f)),
reverse=True,
)
if len(files) > max_files:
os.unlink(os.path.join(folder, files[-1]))
except FileNotFoundError:
except (FileNotFoundError, OSError):
pass
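The save-or-skip rules above are compact enough to restate as a standalone function. A minimal sketch for illustration only (should_save and its flat arguments are hypothetical stand-ins for _should_save_image and its per-camera history lookup):

def should_save(current_state, pending_state, detected_state, score, first_detection=False):
    # standalone restatement of _should_save_image's rules, for illustration
    if first_detection:
        return True                 # no history yet for this camera
    if pending_state is not None:
        return True                 # a state change is mid-verification
    if current_state is not None and detected_state != current_state:
        return True                 # detected state differs: state is changing
    if score < 1.0:
        return True                 # stable state, but confidence is imperfect
    return False                    # stable state at 100% confidence: skip

assert should_save(None, None, "open", 0.0, first_detection=True)
assert should_save("open", "closed", "closed", 1.0)   # pending verification
assert should_save("open", None, "closed", 1.0)       # state changing
assert should_save("open", None, "open", 0.87)        # score < 1.0
assert not should_save("open", None, "open", 1.0)     # stable and certain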

View File

@@ -131,6 +131,7 @@ class ONNXModelRunner(BaseModelRunner):
return model_type in [
EnrichmentModelTypeEnum.paddleocr.value,
EnrichmentModelTypeEnum.yolov9_license_plate.value,
EnrichmentModelTypeEnum.jina_v1.value,
EnrichmentModelTypeEnum.jina_v2.value,
EnrichmentModelTypeEnum.facenet.value,
@@ -169,6 +170,7 @@ class CudaGraphRunner(BaseModelRunner):
return model_type not in [
ModelTypeEnum.yolonas.value,
ModelTypeEnum.dfine.value,
EnrichmentModelTypeEnum.paddleocr.value,
EnrichmentModelTypeEnum.jina_v1.value,
EnrichmentModelTypeEnum.jina_v2.value,

View File

@@ -5,7 +5,7 @@ from typing_extensions import Literal
from frigate.detectors.detection_api import DetectionApi
from frigate.detectors.detector_config import BaseDetectorConfig
from frigate.log import redirect_output_to_logger
from frigate.log import suppress_stderr_during
from ..detector_utils import tflite_detect_raw, tflite_init
@@ -28,12 +28,13 @@ class CpuDetectorConfig(BaseDetectorConfig):
class CpuTfl(DetectionApi):
type_key = DETECTOR_KEY
@redirect_output_to_logger(logger, logging.DEBUG)
def __init__(self, detector_config: CpuDetectorConfig):
interpreter = Interpreter(
model_path=detector_config.model.path,
num_threads=detector_config.num_threads or 3,
)
# Suppress TFLite delegate creation messages that bypass Python logging
with suppress_stderr_during("tflite_interpreter_init"):
interpreter = Interpreter(
model_path=detector_config.model.path,
num_threads=detector_config.num_threads or 3,
)
tflite_init(self, interpreter)

View File

@@ -1,19 +1,20 @@
import logging
import math
import os
import cv2
import numpy as np
from pydantic import Field
from typing_extensions import Literal
from frigate.detectors.detection_api import DetectionApi
from frigate.detectors.detector_config import BaseDetectorConfig
from frigate.detectors.detector_config import BaseDetectorConfig, ModelTypeEnum
try:
from tflite_runtime.interpreter import Interpreter, load_delegate
except ModuleNotFoundError:
from tensorflow.lite.python.interpreter import Interpreter, load_delegate
logger = logging.getLogger(__name__)
DETECTOR_KEY = "edgetpu"
@@ -26,6 +27,10 @@ class EdgeTpuDetectorConfig(BaseDetectorConfig):
class EdgeTpuTfl(DetectionApi):
type_key = DETECTOR_KEY
supported_models = [
ModelTypeEnum.ssd,
ModelTypeEnum.yologeneric,
]
def __init__(self, detector_config: EdgeTpuDetectorConfig):
device_config = {}
@@ -63,31 +68,294 @@ class EdgeTpuTfl(DetectionApi):
self.tensor_input_details = self.interpreter.get_input_details()
self.tensor_output_details = self.interpreter.get_output_details()
self.model_width = detector_config.model.width
self.model_height = detector_config.model.height
self.min_score = 0.4
self.max_detections = 20
self.model_type = detector_config.model.model_type
self.model_requires_int8 = self.tensor_input_details[0]["dtype"] == np.int8
if self.model_type == ModelTypeEnum.yologeneric:
logger.debug("Using YOLO preprocessing/postprocessing")
if len(self.tensor_output_details) not in [2, 3]:
logger.error(
f"Invalid count of output tensors in YOLO model. Found {len(self.tensor_output_details)}, expecting 2 or 3."
)
raise ValueError("Invalid count of output tensors in YOLO model.")
self.reg_max = 16 # = 64 dfl_channels // 4 # YOLO standard
self.min_logit_value = np.log(
self.min_score / (1 - self.min_score)
) # for filtering
self._generate_anchors_and_strides() # decode bounding box DFL
self.project = np.arange(
self.reg_max, dtype=np.float32
) # for decoding bounding box DFL information
# Determine YOLO tensor indices and quantization scales for
# boxes and class_scores the tensor ordering and names are
# not reliable, so use tensor shape to detect which tensor
# holds boxes or class scores.
# The tensors have shapes (B, N, C)
# where N is the number of candidates (=2100 for 320x320)
# this may guess wrong if the number of classes is exactly 64
output_boxes_index = None
output_classes_index = None
for i, x in enumerate(self.tensor_output_details):
# the nominal index seems to start at 1 instead of 0
if len(x["shape"]) == 3 and x["shape"][2] == 64:
output_boxes_index = i
elif len(x["shape"]) == 3 and x["shape"][2] > 1:
# require the number of classes to be more than 1
# to differentiate from (not used) max score tensor
output_classes_index = i
if output_boxes_index is None or output_classes_index is None:
logger.warning("Unrecognized model output, unexpected tensor shapes.")
output_classes_index = (
0
if (output_boxes_index is None or output_classes_index == 1)
else 1
) # 0 is default guess
output_boxes_index = 1 if (output_boxes_index == 0) else 0
scores_details = self.tensor_output_details[output_classes_index]
self.scores_tensor_index = scores_details["index"]
self.scores_scale, self.scores_zero_point = scores_details["quantization"]
# calculate the quantized version of the min_score
self.min_score_quantized = int(
(self.min_logit_value / self.scores_scale) + self.scores_zero_point
)
self.logit_shift_to_positive_values = (
max(0, math.ceil((128 + self.scores_zero_point) * self.scores_scale))
+ 1
) # round up
boxes_details = self.tensor_output_details[output_boxes_index]
self.boxes_tensor_index = boxes_details["index"]
self.boxes_scale, self.boxes_zero_point = boxes_details["quantization"]
elif self.model_type == ModelTypeEnum.ssd:
logger.debug("Using SSD preprocessing/postprocessing")
# SSD model indices (4 outputs: boxes, class_ids, scores, count)
for x in self.tensor_output_details:
if len(x["shape"]) == 3:
self.output_boxes_index = x["index"]
elif len(x["shape"]) == 1:
self.output_count_index = x["index"]
self.output_class_ids_index = None
self.output_class_scores_index = None
else:
raise Exception(
f"{self.model_type} is currently not supported for edgetpu. See the docs for more info on supported models."
)
def _generate_anchors_and_strides(self):
# for decoding the bounding box DFL information into xy coordinates
all_anchors = []
all_strides = []
strides = (8, 16, 32) # YOLO's small, medium, large detection heads
for stride in strides:
feat_h, feat_w = self.model_height // stride, self.model_width // stride
grid_y, grid_x = np.meshgrid(
np.arange(feat_h, dtype=np.float32),
np.arange(feat_w, dtype=np.float32),
indexing="ij",
)
grid_coords = np.stack((grid_x.flatten(), grid_y.flatten()), axis=1)
anchor_points = grid_coords + 0.5
all_anchors.append(anchor_points)
all_strides.append(np.full((feat_h * feat_w, 1), stride, dtype=np.float32))
self.anchors = np.concatenate(all_anchors, axis=0)
self.anchor_strides = np.concatenate(all_strides, axis=0)
def determine_indexes_for_non_yolo_models(self):
"""Legacy method for SSD models."""
if (
self.output_class_ids_index is None
or self.output_class_scores_index is None
):
for i in range(4):
index = self.tensor_output_details[i]["index"]
if (
index != self.output_boxes_index
and index != self.output_count_index
):
if (
np.mod(np.float32(self.interpreter.tensor(index)()[0][0]), 1)
== 0.0
):
self.output_class_ids_index = index
else:
self.output_class_scores_index = index
def pre_process(self, tensor_input):
if self.model_requires_int8:
tensor_input = np.bitwise_xor(tensor_input, 128).view(
np.int8
) # shift by -128
return tensor_input
def detect_raw(self, tensor_input):
tensor_input = self.pre_process(tensor_input)
self.interpreter.set_tensor(self.tensor_input_details[0]["index"], tensor_input)
self.interpreter.invoke()
boxes = self.interpreter.tensor(self.tensor_output_details[0]["index"])()[0]
class_ids = self.interpreter.tensor(self.tensor_output_details[1]["index"])()[0]
scores = self.interpreter.tensor(self.tensor_output_details[2]["index"])()[0]
count = int(
self.interpreter.tensor(self.tensor_output_details[3]["index"])()[0]
)
if self.model_type == ModelTypeEnum.yologeneric:
# Multi-tensor YOLO model with non-standard B(H*W)C output format.
# (the comments indicate the shape of tensors,
# using "2100" as the anchor count (for image size of 320x320),
# "NC" as number of classes,
# "N" as the count that survive after min-score filtering)
# TENSOR A) class scores (1, 2100, NC) with logit values
# TENSOR B) box coordinates (1, 2100, 64) encoded as dfl scores
# Recommend that the model clamp the logit values in tensor (A)
# to the range [-4,+4] to preserve precision from [2%,98%]
# and because NMS requires the min_score parameter to be >= 0
detections = np.zeros((20, 6), np.float32)
# don't dequantize scores data yet, wait until the low-confidence
# candidates are filtered out from the overall result set.
# This reduces the work and makes post-processing faster.
# this method works with raw quantized numbers when possible,
# which relies on the value of the scale factor to be >0.
# This speeds up max and argmax operations.
# Get max confidence for each detection and create the mask
detections = np.zeros(
(self.max_detections, 6), np.float32
) # initialize zero results
scores_output_quantized = self.interpreter.get_tensor(
self.scores_tensor_index
)[0] # (2100, NC)
max_scores_quantized = np.max(scores_output_quantized, axis=1) # (2100,)
mask = max_scores_quantized >= self.min_score_quantized # (2100,)
for i in range(count):
if scores[i] < 0.4 or i == 20:
break
detections[i] = [
class_ids[i],
float(scores[i]),
boxes[i][0],
boxes[i][1],
boxes[i][2],
boxes[i][3],
if not np.any(mask):
return detections # empty results
max_scores_filtered_shiftedpositive = (
(max_scores_quantized[mask] - self.scores_zero_point)
* self.scores_scale
) + self.logit_shift_to_positive_values # (N,1) shifted logit values
scores_output_quantized_filtered = scores_output_quantized[mask]
# dequantize boxes. NMS needs them to be in float format
# remove candidates with probabilities < threshold
boxes_output_quantized_filtered = (
self.interpreter.get_tensor(self.boxes_tensor_index)[0]
)[mask] # (N, 64)
boxes_output_filtered = (
boxes_output_quantized_filtered.astype(np.float32)
- self.boxes_zero_point
) * self.boxes_scale
# 2. Decode DFL to distances (ltrb)
dfl_distributions = boxes_output_filtered.reshape(
-1, 4, self.reg_max
) # (N, 4, 16)
# Softmax over the 16 bins
dfl_max = np.max(dfl_distributions, axis=2, keepdims=True)
dfl_exp = np.exp(dfl_distributions - dfl_max)
dfl_probs = dfl_exp / np.sum(dfl_exp, axis=2, keepdims=True) # (N, 4, 16)
# Weighted sum: (N, 4, 16) * (16,) -> (N, 4)
distances = np.einsum("pcr,r->pc", dfl_probs, self.project)
# Calculate box corners in pixel coordinates
anchors_filtered = self.anchors[mask]
anchor_strides_filtered = self.anchor_strides[mask]
x1y1 = (
anchors_filtered - distances[:, [0, 1]]
) * anchor_strides_filtered # (N, 2)
x2y2 = (
anchors_filtered + distances[:, [2, 3]]
) * anchor_strides_filtered # (N, 2)
boxes_filtered_decoded = np.concatenate((x1y1, x2y2), axis=-1) # (N, 4)
# 9. Apply NMS. Use logit scores here to defer sigmoid()
# until after filtering out redundant boxes
# Shift the logit scores to be non-negative (required by cv2)
indices = cv2.dnn.NMSBoxes(
bboxes=boxes_filtered_decoded,
scores=max_scores_filtered_shiftedpositive,
score_threshold=(
self.min_logit_value + self.logit_shift_to_positive_values
),
nms_threshold=0.4, # should this be a model config setting?
)
num_detections = len(indices)
if num_detections == 0:
return detections # empty results
nms_indices = np.array(indices, dtype=np.int32).ravel() # or .flatten()
if num_detections > self.max_detections:
nms_indices = nms_indices[: self.max_detections]
num_detections = self.max_detections
kept_logits_quantized = scores_output_quantized_filtered[nms_indices]
class_ids_post_nms = np.argmax(kept_logits_quantized, axis=1)
# Extract the final boxes and scores using fancy indexing
final_boxes = boxes_filtered_decoded[nms_indices]
final_scores_logits = (
max_scores_filtered_shiftedpositive[nms_indices]
- self.logit_shift_to_positive_values
) # Unshifted logits
# Detections array format: [class_id, score, ymin, xmin, ymax, xmax]
detections[:num_detections, 0] = class_ids_post_nms
detections[:num_detections, 1] = 1.0 / (
1.0 + np.exp(-final_scores_logits)
) # sigmoid
detections[:num_detections, 2] = final_boxes[:, 1] / self.model_height
detections[:num_detections, 3] = final_boxes[:, 0] / self.model_width
detections[:num_detections, 4] = final_boxes[:, 3] / self.model_height
detections[:num_detections, 5] = final_boxes[:, 2] / self.model_width
return detections
elif self.model_type == ModelTypeEnum.ssd:
self.determine_indexes_for_non_yolo_models()
boxes = self.interpreter.tensor(self.tensor_output_details[0]["index"])()[0]
class_ids = self.interpreter.tensor(
self.tensor_output_details[1]["index"]
)()[0]
scores = self.interpreter.tensor(self.tensor_output_details[2]["index"])()[
0
]
count = int(
self.interpreter.tensor(self.tensor_output_details[3]["index"])()[0]
)
return detections
detections = np.zeros((self.max_detections, 6), np.float32)
for i in range(count):
if scores[i] < self.min_score:
break
if i == self.max_detections:
logger.debug(f"Too many detections ({count})!")
break
detections[i] = [
class_ids[i],
float(scores[i]),
boxes[i][0],
boxes[i][1],
boxes[i][2],
boxes[i][3],
]
return detections
else:
raise Exception(
f"{self.model_type} is currently not supported for edgetpu. See the docs for more info on supported models."
)
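Two techniques in the new YOLO path benefit from a worked sketch: filtering candidates by comparing raw int8 logits against a pre-quantized threshold (so most boxes are discarded before any dequantization), and decoding DFL box distributions via a softmax expectation against the projection vector. A minimal NumPy sketch under toy values; the scale, zero point, anchors, and strides below are assumptions, not the model's real parameters:

import numpy as np

# --- score filtering in quantized space (cf. min_score_quantized above) ---
min_score = 0.4
min_logit = np.log(min_score / (1 - min_score))            # inverse sigmoid
scale, zero_point = 0.05, -10                              # assumed int8 params
min_score_quantized = int(min_logit / scale + zero_point)  # -> -18 here
raw_logits = np.array([-25, -14, 3], dtype=np.int8)        # toy per-box maxima
mask = raw_logits >= min_score_quantized                   # no dequantize needed
print(mask)                                                # [False  True  True]

# --- DFL decode: softmax expectation over reg_max bins per box side ---
reg_max = 16
project = np.arange(reg_max, dtype=np.float32)
rng = np.random.default_rng(0)
dfl = rng.normal(size=(2, 4, reg_max)).astype(np.float32)  # (N, ltrb, bins)
probs = np.exp(dfl - dfl.max(axis=2, keepdims=True))       # stable softmax
probs /= probs.sum(axis=2, keepdims=True)
distances = np.einsum("pcr,r->pc", probs, project)         # expected distances (N, 4)

anchors = np.array([[10.5, 20.5], [3.5, 7.5]], np.float32)  # toy cell centers
strides = np.array([[8.0], [16.0]], np.float32)             # toy head strides
x1y1 = (anchors - distances[:, [0, 1]]) * strides
x2y2 = (anchors + distances[:, [2, 3]]) * strides
print(np.concatenate((x1y1, x2y2), axis=-1))                # pixel-space xyxy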

View File

@@ -2,7 +2,6 @@ import glob
import logging
import os
import shutil
import time
import urllib.request
import zipfile
from queue import Queue
@@ -55,6 +54,9 @@ class MemryXDetector(DetectionApi):
)
return
# Initialize stop_event as None, will be set later by set_stop_event()
self.stop_event = None
model_cfg = getattr(detector_config, "model", None)
# Check if model_type was explicitly set by the user
@@ -363,26 +365,43 @@ class MemryXDetector(DetectionApi):
def process_input(self):
"""Input callback function: wait for frames in the input queue, preprocess, and send to MX3 (return)"""
while True:
# Check if shutdown is requested
if self.stop_event and self.stop_event.is_set():
logger.debug("[process_input] Stop event detected, returning None")
return None
try:
# Wait for a frame from the queue (blocking call)
frame = self.capture_queue.get(
block=True
) # Blocks until data is available
# Wait for a frame from the queue with timeout to check stop_event periodically
frame = self.capture_queue.get(block=True, timeout=0.5)
return frame
except Exception as e:
logger.info(f"[process_input] Error processing input: {e}")
time.sleep(0.1) # Prevent busy waiting in case of error
# Silently handle queue.Empty timeouts (expected during normal operation)
# Log any other unexpected exceptions
if "Empty" not in str(type(e).__name__):
logger.warning(f"[process_input] Unexpected error: {e}")
# Loop continues and will check stop_event at the top
def receive_output(self):
"""Retrieve processed results from MemryX output queue + a copy of the original frame"""
connection_id = (
self.capture_id_queue.get()
) # Get the corresponding connection ID
detections = self.output_queue.get() # Get detections from MemryX
try:
# Get connection ID with timeout
connection_id = self.capture_id_queue.get(
block=True, timeout=1.0
) # Get the corresponding connection ID
detections = self.output_queue.get() # Get detections from MemryX
return connection_id, detections
return connection_id, detections
except Exception as e:
# On timeout or stop event, return None
if self.stop_event and self.stop_event.is_set():
logger.debug("[receive_output] Stop event detected, exiting")
# Silently handle queue.Empty timeouts; they're expected during normal operation
elif "Empty" not in str(type(e).__name__):
logger.warning(f"[receive_output] Error receiving output: {e}")
return None, None
def post_process_yolonas(self, output):
predictions = output[0]
@@ -831,6 +850,19 @@ class MemryXDetector(DetectionApi):
f"{self.memx_model_type} is currently not supported for memryx. See the docs for more info on supported models."
)
def set_stop_event(self, stop_event):
"""Set the stop event for graceful shutdown."""
self.stop_event = stop_event
def shutdown(self):
"""Gracefully shutdown the MemryX accelerator"""
try:
if hasattr(self, "accl") and self.accl is not None:
self.accl.shutdown()
logger.info("MemryX accelerator shutdown complete")
except Exception as e:
logger.error(f"Error during MemryX shutdown: {e}")
def detect_raw(self, tensor_input: np.ndarray):
"""Removed synchronous detect_raw() function so that we only use async"""
return 0
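The queue changes above replace indefinitely blocking get() calls with short timeouts so the loops can periodically observe stop_event. A minimal sketch of that poll-with-timeout pattern, independent of the MemryX APIs:

import queue
import threading
import time

def worker(q: queue.Queue, stop_event: threading.Event) -> None:
    # wake up every 0.5s so a stop request is never missed
    while not stop_event.is_set():
        try:
            item = q.get(block=True, timeout=0.5)  # bounded wait, not forever
        except queue.Empty:
            continue  # expected while idle; the loop re-checks stop_event
        print(f"processed {item}")

q = queue.Queue()
stop = threading.Event()
t = threading.Thread(target=worker, args=(q, stop))
t.start()
q.put(1)
time.sleep(1)
stop.set()  # a plain blocking get() would never let the loop see this
t.join()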

View File

@@ -8,7 +8,7 @@ import cv2
import numpy as np
from pydantic import Field
from frigate.const import MODEL_CACHE_DIR
from frigate.const import MODEL_CACHE_DIR, SUPPORTED_RK_SOCS
from frigate.detectors.detection_api import DetectionApi
from frigate.detectors.detection_runners import RKNNModelRunner
from frigate.detectors.detector_config import BaseDetectorConfig, ModelTypeEnum
@@ -19,8 +19,6 @@ logger = logging.getLogger(__name__)
DETECTOR_KEY = "rknn"
supported_socs = ["rk3562", "rk3566", "rk3568", "rk3576", "rk3588"]
supported_models = {
ModelTypeEnum.yologeneric: "^frigate-fp16-yolov9-[cemst]$",
ModelTypeEnum.yolonas: "^deci-fp16-yolonas_[sml]$",
@@ -82,9 +80,9 @@ class Rknn(DetectionApi):
except FileNotFoundError:
raise Exception("Make sure to run docker in privileged mode.")
if soc not in supported_socs:
if soc not in SUPPORTED_RK_SOCS:
raise Exception(
f"Your SoC is not supported. Your SoC is: {soc}. Currently these SoCs are supported: {supported_socs}."
f"Your SoC is not supported. Your SoC is: {soc}. Currently these SoCs are supported: {SUPPORTED_RK_SOCS}."
)
return soc

View File

@@ -12,9 +12,6 @@ from peewee import DoesNotExist, IntegrityError
from PIL import Image
from playhouse.shortcuts import model_to_dict
from frigate.comms.embeddings_updater import (
EmbeddingsRequestEnum,
)
from frigate.comms.inter_process import InterProcessRequestor
from frigate.config import FrigateConfig
from frigate.config.classification import SemanticSearchModelEnum
@@ -495,44 +492,49 @@ class Embeddings:
or thumbnail_missing
):
existing_trigger.embedding = self._calculate_trigger_embedding(
trigger
trigger, trigger_name, camera.name
)
needs_embedding_update = True
if needs_embedding_update:
existing_trigger.save()
continue
else:
# Create new trigger
try:
try:
event: Event = Event.get(Event.id == trigger.data)
except DoesNotExist:
logger.warning(
f"Event ID {trigger.data} for trigger {trigger_name} does not exist."
# For thumbnail triggers, validate the event exists
if trigger.type == "thumbnail":
try:
event: Event = Event.get(Event.id == trigger.data)
except DoesNotExist:
logger.warning(
f"Event ID {trigger.data} for trigger {trigger_name} does not exist."
)
continue
# Skip the event if not an object
if event.data.get("type") != "object":
logger.warning(
f"Event ID {trigger.data} for trigger {trigger_name} is not a tracked object."
)
continue
thumbnail = get_event_thumbnail_bytes(event)
if not thumbnail:
logger.warning(
f"Unable to retrieve thumbnail for event ID {trigger.data} for {trigger_name}."
)
continue
self.write_trigger_thumbnail(
camera.name, trigger.data, thumbnail
)
continue
# Skip the event if not an object
if event.data.get("type") != "object":
logger.warning(
f"Event ID {trigger.data} for trigger {trigger_name} is not a tracked object."
)
continue
thumbnail = get_event_thumbnail_bytes(event)
if not thumbnail:
logger.warning(
f"Unable to retrieve thumbnail for event ID {trigger.data} for {trigger_name}."
)
continue
self.write_trigger_thumbnail(
camera.name, trigger.data, thumbnail
)
# Calculate embedding for new trigger
embedding = self._calculate_trigger_embedding(trigger)
embedding = self._calculate_trigger_embedding(
trigger, trigger_name, camera.name
)
Trigger.create(
camera=camera.name,
@@ -558,7 +560,11 @@ class Embeddings:
Trigger.camera == camera.name, Trigger.name.in_(triggers_to_remove)
).execute()
for trigger_name in triggers_to_remove:
self.remove_trigger_thumbnail(camera.name, trigger_name)
# Only remove thumbnail files for thumbnail triggers
if existing_triggers[trigger_name].type == "thumbnail":
self.remove_trigger_thumbnail(
camera.name, existing_triggers[trigger_name].data
)
def write_trigger_thumbnail(
self, camera: str, event_id: str, thumbnail: bytes
@@ -588,14 +594,13 @@ class Embeddings:
f"Failed to delete thumbnail for trigger with data {event_id} in {camera}: {e}"
)
def _calculate_trigger_embedding(self, trigger) -> bytes:
def _calculate_trigger_embedding(
self, trigger, trigger_name: str, camera_name: str
) -> bytes:
"""Calculate embedding for a trigger based on its type and data."""
if trigger.type == "description":
logger.debug(f"Generating embedding for trigger description {trigger.name}")
embedding = self.requestor.send_data(
EmbeddingsRequestEnum.embed_description.value,
{"id": None, "description": trigger.data, "upsert": False},
)
logger.debug(f"Generating embedding for trigger description {trigger_name}")
embedding = self.embed_description(None, trigger.data, upsert=False)
return embedding.astype(np.float32).tobytes()
elif trigger.type == "thumbnail":
@@ -615,28 +620,21 @@ class Embeddings:
try:
with open(
os.path.join(
TRIGGER_DIR, trigger.camera, f"{trigger.data}.webp"
),
os.path.join(TRIGGER_DIR, camera_name, f"{trigger.data}.webp"),
"rb",
) as f:
thumbnail = f.read()
except Exception as e:
logger.error(
f"Failed to read thumbnail for trigger {trigger.name} with ID {trigger.data}: {e}"
f"Failed to read thumbnail for trigger {trigger_name} with ID {trigger.data}: {e}"
)
return b""
logger.debug(
f"Generating embedding for trigger thumbnail {trigger.name} with ID {trigger.data}"
f"Generating embedding for trigger thumbnail {trigger_name} with ID {trigger.data}"
)
embedding = self.requestor.send_data(
EmbeddingsRequestEnum.embed_thumbnail.value,
{
"id": str(trigger.data),
"thumbnail": str(thumbnail),
"upsert": False,
},
embedding = self.embed_thumbnail(
str(trigger.data), thumbnail, upsert=False
)
return embedding.astype(np.float32).tobytes()

View File

@@ -203,7 +203,9 @@ class EmbeddingMaintainer(threading.Thread):
# post processors
self.post_processors: list[PostProcessorApi] = []
if any(c.review.genai.enabled_in_config for c in self.config.cameras.values()):
if self.genai_client is not None and any(
c.review.genai.enabled_in_config for c in self.config.cameras.values()
):
self.post_processors.append(
ReviewDescriptionProcessor(
self.config, self.requestor, self.metrics, self.genai_client
@@ -244,7 +246,9 @@ class EmbeddingMaintainer(threading.Thread):
)
self.post_processors.append(semantic_trigger_processor)
if any(c.objects.genai.enabled_in_config for c in self.config.cameras.values()):
if self.genai_client is not None and any(
c.objects.genai.enabled_in_config for c in self.config.cameras.values()
):
self.post_processors.append(
ObjectDescriptionProcessor(
self.config,
@@ -522,6 +526,8 @@ class EmbeddingMaintainer(threading.Thread):
)
elif isinstance(processor, ObjectDescriptionProcessor):
if not updated_db:
# Still need to cleanup tracked events even if not processing
processor.cleanup_event(event_id)
continue
processor.process_data(

View File

@@ -8,7 +8,7 @@ import numpy as np
from frigate.const import MODEL_CACHE_DIR
from frigate.detectors.detection_runners import get_optimized_runner
from frigate.embeddings.types import EnrichmentModelTypeEnum
from frigate.log import redirect_output_to_logger
from frigate.log import suppress_stderr_during
from frigate.util.downloader import ModelDownloader
from ...config import FaceRecognitionConfig
@@ -57,17 +57,18 @@ class FaceNetEmbedding(BaseEmbedding):
self._load_model_and_utils()
logger.debug(f"models are already downloaded for {self.model_name}")
@redirect_output_to_logger(logger, logging.DEBUG)
def _load_model_and_utils(self):
if self.runner is None:
if self.downloader:
self.downloader.wait_for_download()
self.runner = Interpreter(
model_path=os.path.join(MODEL_CACHE_DIR, "facedet/facenet.tflite"),
num_threads=2,
)
self.runner.allocate_tensors()
# Suppress TFLite delegate creation messages that bypass Python logging
with suppress_stderr_during("tflite_interpreter_init"):
self.runner = Interpreter(
model_path=os.path.join(MODEL_CACHE_DIR, "facedet/facenet.tflite"),
num_threads=2,
)
self.runner.allocate_tensors()
self.tensor_input_details = self.runner.get_input_details()
self.tensor_output_details = self.runner.get_output_details()

View File

@@ -186,6 +186,9 @@ class JinaV1ImageEmbedding(BaseEmbedding):
download_func=self._download_model,
)
self.downloader.ensure_model_files()
# Avoid lazy loading in worker threads: block until downloads complete
# and load the model on the main thread during initialization.
self._load_model_and_utils()
else:
self.downloader = None
ModelDownloader.mark_files_state(

View File

@@ -65,6 +65,9 @@ class JinaV2Embedding(BaseEmbedding):
download_func=self._download_model,
)
self.downloader.ensure_model_files()
# Avoid lazy loading in worker threads: block until downloads complete
# and load the model on the main thread during initialization.
self._load_model_and_utils()
else:
self.downloader = None
ModelDownloader.mark_files_state(
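Loading the model eagerly in __init__ sidesteps the usual hazards of lazy initialization from worker threads: two workers can both observe an unloaded model and load it twice unless the lazy path is carefully locked. A hedged sketch of the difference (load_model and both runner classes are hypothetical stand-ins, not Frigate code):

import threading

def load_model():
    return lambda x: x  # stand-in for an expensive download + deserialize

class LazyRunner:
    # loads on first use: needs double-checked locking to stay correct
    def __init__(self) -> None:
        self.model = None
        self._lock = threading.Lock()

    def run(self, x):
        if self.model is None:            # unsynchronized fast path
            with self._lock:
                if self.model is None:    # re-check under the lock
                    self.model = load_model()
        return self.model(x)

class EagerRunner:
    # loads in __init__ on the main thread: workers never observe None
    def __init__(self) -> None:
        self.model = load_model()

    def run(self, x):
        return self.model(x)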

View File

@@ -34,7 +34,7 @@ from frigate.data_processing.real_time.audio_transcription import (
AudioTranscriptionRealTimeProcessor,
)
from frigate.ffmpeg_presets import parse_preset_input
from frigate.log import LogPipe, redirect_output_to_logger
from frigate.log import LogPipe, suppress_stderr_during
from frigate.object_detection.base import load_labels
from frigate.util.builtin import get_ffmpeg_arg_list
from frigate.util.process import FrigateProcess
@@ -367,17 +367,17 @@ class AudioEventMaintainer(threading.Thread):
class AudioTfl:
@redirect_output_to_logger(logger, logging.DEBUG)
def __init__(self, stop_event: threading.Event, num_threads=2):
self.stop_event = stop_event
self.num_threads = num_threads
self.labels = load_labels("/audio-labelmap.txt", prefill=521)
self.interpreter = Interpreter(
model_path="/cpu_audio_model.tflite",
num_threads=self.num_threads,
)
self.interpreter.allocate_tensors()
# Suppress TFLite delegate creation messages that bypass Python logging
with suppress_stderr_during("tflite_interpreter_init"):
self.interpreter = Interpreter(
model_path="/cpu_audio_model.tflite",
num_threads=self.num_threads,
)
self.interpreter.allocate_tensors()
self.tensor_input_details = self.interpreter.get_input_details()
self.tensor_output_details = self.interpreter.get_output_details()

View File

@@ -46,7 +46,7 @@ def should_update_state(prev_event: Event, current_event: Event) -> bool:
if prev_event["sub_label"] != current_event["sub_label"]:
return True
if len(prev_event["current_zones"]) < len(current_event["current_zones"]):
if set(prev_event["current_zones"]) != set(current_event["current_zones"]):
return True
return False
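The length comparison missed changes where one zone replaces another, since the list stays the same size; comparing as sets catches any membership change. A two-line illustration:

prev = {"current_zones": ["porch"]}
curr = {"current_zones": ["driveway"]}

# old check: only fired when the zone list grew
assert not (len(prev["current_zones"]) < len(curr["current_zones"]))

# new check: fires on any membership change, even at equal length
assert set(prev["current_zones"]) != set(curr["current_zones"])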

View File

@@ -153,7 +153,7 @@ PRESETS_HW_ACCEL_ENCODE_BIRDSEYE = {
FFMPEG_HWACCEL_VAAPI: "{0} -hide_banner -hwaccel vaapi -hwaccel_output_format vaapi -hwaccel_device {3} {1} -c:v h264_vaapi -g 50 -bf 0 -profile:v high -level:v 4.1 -sei:v 0 -an -vf format=vaapi|nv12,hwupload {2}",
"preset-intel-qsv-h264": "{0} -hide_banner {1} -c:v h264_qsv -g 50 -bf 0 -profile:v high -level:v 4.1 -async_depth:v 1 {2}",
"preset-intel-qsv-h265": "{0} -hide_banner {1} -c:v h264_qsv -g 50 -bf 0 -profile:v main -level:v 4.1 -async_depth:v 1 {2}",
FFMPEG_HWACCEL_NVIDIA: "{0} -hide_banner {1} -hwaccel device {3} -c:v h264_nvenc -g 50 -profile:v high -level:v auto -preset:v p2 -tune:v ll {2}",
FFMPEG_HWACCEL_NVIDIA: "{0} -hide_banner {1} -c:v h264_nvenc -g 50 -profile:v high -level:v auto -preset:v p2 -tune:v ll {2}",
"preset-jetson-h264": "{0} -hide_banner {1} -c:v h264_nvmpi -profile high {2}",
"preset-jetson-h265": "{0} -hide_banner {1} -c:v h264_nvmpi -profile main {2}",
FFMPEG_HWACCEL_RKMPP: "{0} -hide_banner {1} -c:v h264_rkmpp -profile:v high {2}",

View File

@@ -177,50 +177,64 @@ Each line represents a detection state, not necessarily unique individuals. Pare
self,
start_ts: float,
end_ts: float,
segments: list[dict[str, Any]],
events: list[dict[str, Any]],
preferred_language: str | None,
debug_save: bool,
) -> str | None:
"""Generate a summary of review item descriptions over a period of time."""
time_range = f"{datetime.datetime.fromtimestamp(start_ts).strftime('%B %d, %Y at %I:%M %p')} to {datetime.datetime.fromtimestamp(end_ts).strftime('%B %d, %Y at %I:%M %p')}"
timeline_summary_prompt = f"""
You are a security officer.
Time range: {time_range}.
Input: JSON list with "title", "scene", "confidence", "potential_threat_level" (1-2), "other_concerns".
You are a security officer writing a concise security report.
Task: Write a concise, human-presentable security report in markdown format.
Time range: {time_range}
Rules for the report:
Input format: Each event is a JSON object with:
- "title", "scene", "confidence", "potential_threat_level" (0-2), "other_concerns", "camera", "time", "start_time", "end_time"
- "context": array of related events from other cameras that occurred during overlapping time periods
- Title & overview
- Start with:
# Security Summary - {time_range}
- Write a 1-2 sentence situational overview capturing the general pattern of the period.
Report Structure - Use this EXACT format:
- Event details
- Present events in chronological order as a bullet list.
- **If multiple events occur within the same minute or overlapping time range, COMBINE them into a single bullet.**
- Summarize the distinct activities as sub-points under the shared timestamp.
- If no timestamp is given, preserve order but label as “Time not specified.”
- Use bold timestamps for clarity.
- Group bullets under subheadings when multiple events fall into the same category (e.g., Vehicle Activity, Porch Activity, Unusual Behavior).
# Security Summary - {time_range}
- Threat levels
- Always show (threat level: X) for each event.
- If multiple events at the same time share the same threat level, only state it once.
## Overview
[Write 1-2 sentences summarizing the overall activity pattern during this period.]
- Final assessment
- End with a Final Assessment section.
- If all events are threat level 1 with no escalation:
Final assessment: Only normal residential activity observed during this period.
- If threat level 2+ events are present, clearly summarize them as Potential concerns requiring review.
---
- Conciseness
- Do not repeat benign clothing/appearance details unless they distinguish individuals.
- Summarize similar routine events instead of restating full scene descriptions.
## Timeline
[Group events by time periods (e.g., "Morning (6:00 AM - 12:00 PM)", "Afternoon (12:00 PM - 5:00 PM)", "Evening (5:00 PM - 9:00 PM)", "Night (9:00 PM - 6:00 AM)"). Use appropriate time blocks based on when events occurred.]
### [Time Block Name]
**HH:MM AM/PM** | [Camera Name] | [Threat Level Indicator]
- [Event title]: [Clear description incorporating contextual information from the "context" array]
- Context: [If context array has items, mention them here, e.g., "Delivery truck present on Front Driveway Cam (HH:MM AM/PM)"]
- Assessment: [Brief assessment incorporating context - if context explains the event, note it here]
[Repeat for each event in chronological order within the time block]
---
## Summary
[One sentence summarizing the period. If all events are normal/explained: "Routine activity observed." If review needed: "Some activity requires review but no security concerns." If security concerns: "Security concerns requiring immediate attention."]
Guidelines:
- List ALL events in chronological order, grouped by time blocks
- Threat level indicators: ✓ Normal, ⚠️ Needs review, 🔴 Security concern
- Integrate contextual information naturally - use the "context" array to enrich each event's description
- If context explains the event (e.g., delivery truck explains person at door), describe it accordingly (e.g., "delivery person" not "unidentified person")
- Be concise but informative - focus on what happened and what it means
- If contextual information makes an event clearly normal, reflect that in your assessment
- Only create time blocks that have events - don't create empty sections
"""
for item in segments:
timeline_summary_prompt += f"\n{item}"
timeline_summary_prompt += "\n\nEvents:\n"
for event in events:
timeline_summary_prompt += f"\n{event}\n"
if preferred_language:
timeline_summary_prompt += f"\nProvide your answer in {preferred_language}"
if debug_save:
with open(

View File

@@ -80,10 +80,15 @@ def apply_log_levels(default: str, log_levels: dict[str, LogLevel]) -> None:
log_levels = {
"absl": LogLevel.error,
"httpx": LogLevel.error,
"h5py": LogLevel.error,
"keras": LogLevel.error,
"matplotlib": LogLevel.error,
"tensorflow": LogLevel.error,
"tensorflow.python": LogLevel.error,
"werkzeug": LogLevel.error,
"ws4py": LogLevel.error,
"PIL": LogLevel.warning,
"numba": LogLevel.warning,
**log_levels,
}
@@ -318,3 +323,31 @@ def suppress_os_output(func: Callable) -> Callable:
return result
return wrapper
@contextmanager
def suppress_stderr_during(operation_name: str) -> Generator[None, None, None]:
"""
Context manager to suppress stderr output during a specific operation.
Useful for silencing LLVM debug output, CUDA messages, and other native
library logging that cannot be controlled via Python logging or environment
variables. Completely redirects file descriptor 2 (stderr) to /dev/null.
Usage:
with suppress_stderr_during("model_conversion"):
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()
Args:
operation_name: Name of the operation for debugging purposes
"""
original_stderr_fd = os.dup(2)
devnull = os.open(os.devnull, os.O_WRONLY)
try:
os.dup2(devnull, 2)
yield
finally:
os.dup2(original_stderr_fd, 2)
os.close(devnull)
os.close(original_stderr_fd)
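Python-level redirection (sys.stderr or contextlib.redirect_stderr) cannot intercept native libraries that write straight to file descriptor 2, which is why the context manager duplicates and rewires the descriptor itself. A self-contained sketch; the local _suppress_fd2 mirrors the implementation above for illustration only:

import contextlib
import io
import os
from contextlib import contextmanager

@contextmanager
def _suppress_fd2():
    # standalone copy of the dup2 dance above, for illustration only
    saved = os.dup(2)
    devnull = os.open(os.devnull, os.O_WRONLY)
    try:
        os.dup2(devnull, 2)
        yield
    finally:
        os.dup2(saved, 2)
        os.close(devnull)
        os.close(saved)

buf = io.StringIO()
with contextlib.redirect_stderr(buf):
    os.write(2, b"native-style message\n")  # bypasses sys.stderr: still printed

with _suppress_fd2():
    os.write(2, b"native-style message\n")  # fd 2 -> /dev/null: silenced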

View File

@@ -133,6 +133,7 @@ class User(Model):
default="admin",
)
password_hash = CharField(null=False, max_length=120)
password_changed_at = DateTimeField(null=True)
notification_tokens = JSONField()
@classmethod

View File

@@ -239,6 +239,12 @@ class ImprovedMotionDetector(MotionDetector):
)
self.mask = np.where(resized_mask == [0])
# Reset motion detection state when mask changes
# so motion detection can quickly recalibrate with the new mask
self.avg_frame = np.zeros(self.motion_frame_size, np.float32)
self.calibrating = True
self.motion_frame_count = 0
def stop(self) -> None:
"""stop the motion detector."""
pass

View File

@@ -43,6 +43,7 @@ class BaseLocalDetector(ObjectDetector):
self,
detector_config: BaseDetectorConfig = None,
labels: str = None,
stop_event: MpEvent = None,
):
self.fps = EventsPerSecond()
if labels is None:
@@ -60,6 +61,10 @@ class BaseLocalDetector(ObjectDetector):
self.detect_api = create_detector(detector_config)
# If the detector supports stop_event, pass it
if hasattr(self.detect_api, "set_stop_event") and stop_event:
self.detect_api.set_stop_event(stop_event)
def _transform_input(self, tensor_input: np.ndarray) -> np.ndarray:
if self.input_transform:
tensor_input = np.transpose(tensor_input, self.input_transform)
@@ -240,6 +245,10 @@ class AsyncDetectorRunner(FrigateProcess):
while not self.stop_event.is_set():
connection_id, detections = self._detector.async_receive_output()
# Handle timeout case (queue.Empty) - just continue
if connection_id is None:
continue
if not self.send_times:
# guard; shouldn't happen if send/recv are balanced
continue
@@ -266,21 +275,38 @@ class AsyncDetectorRunner(FrigateProcess):
self._frame_manager = SharedMemoryFrameManager()
self._publisher = ObjectDetectorPublisher()
self._detector = AsyncLocalObjectDetector(detector_config=self.detector_config)
self._detector = AsyncLocalObjectDetector(
detector_config=self.detector_config, stop_event=self.stop_event
)
for name in self.cameras:
self.create_output_shm(name)
t_detect = threading.Thread(target=self._detect_worker, daemon=True)
t_result = threading.Thread(target=self._result_worker, daemon=True)
t_detect = threading.Thread(target=self._detect_worker, daemon=False)
t_result = threading.Thread(target=self._result_worker, daemon=False)
t_detect.start()
t_result.start()
while not self.stop_event.is_set():
time.sleep(0.5)
try:
while not self.stop_event.is_set():
time.sleep(0.5)
self._publisher.stop()
logger.info("Exited async detection process...")
logger.info(
"Stop event detected, waiting for detector threads to finish..."
)
# Wait for threads to finish processing
t_detect.join(timeout=5)
t_result.join(timeout=5)
# Shutdown the AsyncDetector
self._detector.detect_api.shutdown()
self._publisher.stop()
except Exception as e:
logger.error(f"Error during async detector shutdown: {e}")
finally:
logger.info("Exited Async detection process...")
class ObjectDetectProcess:
@@ -308,7 +334,7 @@ class ObjectDetectProcess:
# if the process has already exited on its own, just return
if self.detect_process and self.detect_process.exitcode:
return
self.detect_process.terminate()
logging.info("Waiting for detection process to exit gracefully...")
self.detect_process.join(timeout=30)
if self.detect_process.exitcode is None:
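The shutdown rework swaps daemon threads (killed abruptly at interpreter exit) for non-daemon threads that are signalled and joined with a timeout. The pattern in isolation, as a minimal sketch:

import threading
import time

stop_event = threading.Event()

def worker() -> None:
    while not stop_event.is_set():
        time.sleep(0.1)  # placeholder for real detection work

# daemon=False: the interpreter will not kill this thread mid-operation at
# exit; instead we signal it and wait, with a timeout as a safety valve.
t = threading.Thread(target=worker, daemon=False)
t.start()
stop_event.set()
t.join(timeout=5)
if t.is_alive():
    print("worker did not stop in time")  # mirrors the logged warning path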

View File

@@ -95,12 +95,21 @@ class OnvifController:
cam = self.camera_configs[cam_name]
try:
user = cam.onvif.user
password = cam.onvif.password
if user is not None and isinstance(user, bytes):
user = user.decode("utf-8")
if password is not None and isinstance(password, bytes):
password = password.decode("utf-8")
self.cams[cam_name] = {
"onvif": ONVIFCamera(
cam.onvif.host,
cam.onvif.port,
cam.onvif.user,
cam.onvif.password,
user,
password,
wsdl_dir=str(Path(find_spec("onvif").origin).parent / "wsdl"),
adjust_time=cam.onvif.ignore_time_mismatch,
encrypt=not cam.onvif.tls_insecure,
@@ -190,7 +199,11 @@ class OnvifController:
ptz: ONVIFService = await onvif.create_ptz_service()
self.cams[camera_name]["ptz"] = ptz
imaging: ONVIFService = await onvif.create_imaging_service()
try:
imaging: ONVIFService = await onvif.create_imaging_service()
except (Fault, ONVIFError, TransportError, Exception) as e:
logger.debug(f"Imaging service not supported for {camera_name}: {e}")
imaging = None
self.cams[camera_name]["imaging"] = imaging
try:
video_sources = await media.GetVideoSources()
@@ -321,9 +334,15 @@ class OnvifController:
presets = []
for preset in presets:
self.cams[camera_name]["presets"][
(getattr(preset, "Name") or f"preset {preset['token']}").lower()
] = preset["token"]
# Ensure preset name is a Unicode string and handle UTF-8 characters correctly
preset_name = getattr(preset, "Name") or f"preset {preset['token']}"
if isinstance(preset_name, bytes):
preset_name = preset_name.decode("utf-8")
# Convert to lowercase while preserving UTF-8 characters
preset_name_lower = preset_name.lower()
self.cams[camera_name]["presets"][preset_name_lower] = preset["token"]
# get list of supported features
supported_features = []
@@ -381,7 +400,10 @@ class OnvifController:
f"Disabling autotracking zooming for {camera_name}: Absolute zoom not supported. Exception: {e}"
)
if self.cams[camera_name]["video_source_token"] is not None:
if (
self.cams[camera_name]["video_source_token"] is not None
and imaging is not None
):
try:
imaging_capabilities = await imaging.GetImagingSettings(
{"VideoSourceToken": self.cams[camera_name]["video_source_token"]}
@@ -421,6 +443,7 @@ class OnvifController:
if (
"focus" in self.cams[camera_name]["features"]
and self.cams[camera_name]["video_source_token"]
and self.cams[camera_name]["imaging"] is not None
):
try:
stop_request = self.cams[camera_name]["imaging"].create_type("Stop")
@@ -555,6 +578,11 @@ class OnvifController:
self.cams[camera_name]["active"] = False
async def _move_to_preset(self, camera_name: str, preset: str) -> None:
if isinstance(preset, bytes):
preset = preset.decode("utf-8")
preset = preset.lower()
if preset not in self.cams[camera_name]["presets"]:
logger.error(f"{preset} is not a valid preset for {camera_name}")
return
@@ -648,6 +676,7 @@ class OnvifController:
if (
"focus" not in self.cams[camera_name]["features"]
or not self.cams[camera_name]["video_source_token"]
or self.cams[camera_name]["imaging"] is None
):
logger.error(f"{camera_name} does not support ONVIF continuous focus.")
return
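The credential, preset-name, and _move_to_preset changes all apply the same bytes-to-str normalization, since some ONVIF stacks hand back bytes. A hypothetical helper (not part of the change) expressing the pattern once:

def ensure_text(value, encoding="utf-8"):
    # hypothetical helper, not part of the change: str for str-or-bytes input
    if isinstance(value, bytes):
        return value.decode(encoding)
    return value

user = ensure_text(b"admin")            # -> "admin"
preset = ensure_text("Entr\u00e9e")     # already str: returned unchanged
print(user, preset.lower())             # lower() preserves UTF-8 characters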

View File

@@ -119,6 +119,7 @@ class RecordingCleanup(threading.Thread):
Recordings.path,
Recordings.objects,
Recordings.motion,
Recordings.dBFS,
)
.where(
(Recordings.camera == config.name)
@@ -126,6 +127,7 @@ class RecordingCleanup(threading.Thread):
(
(Recordings.end_time < continuous_expire_date)
& (Recordings.motion == 0)
& (Recordings.dBFS == 0)
)
| (Recordings.end_time < motion_expire_date)
)
@@ -185,6 +187,7 @@ class RecordingCleanup(threading.Thread):
mode == RetainModeEnum.motion
and recording.motion == 0
and recording.objects == 0
and recording.dBFS == 0
)
or (mode == RetainModeEnum.active_objects and recording.objects == 0)
):

View File

@@ -67,7 +67,7 @@ class SegmentInfo:
if (
not keep
and retain_mode == RetainModeEnum.motion
and (self.motion_count > 0 or self.average_dBFS > 0)
and (self.motion_count > 0 or self.average_dBFS != 0)
):
keep = True
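dBFS (decibels relative to full scale) peaks at 0 and goes negative for anything quieter, so average_dBFS > 0 could never match a segment with real audio; != 0 is the meaningful test. Illustrated:

# dBFS tops out at 0 and is negative for everything quieter, so real audio
# averages to values like -35.2 while a truly silent segment stores 0.
average_dBFS = -35.2
assert not (average_dBFS > 0)   # old check: audio-only segments never kept
assert average_dBFS != 0        # new check: any detected audio keeps the segment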

View File

@@ -42,11 +42,10 @@ def get_latest_version(config: FrigateConfig) -> str:
"https://api.github.com/repos/blakeblackshear/frigate/releases/latest",
timeout=10,
)
response = request.json()
except (RequestException, JSONDecodeError):
return "unknown"
response = request.json()
if request.ok and response and "tag_name" in response:
return str(response.get("tag_name").replace("v", ""))
else:
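Moving request.json() inside the try block matters because JSONDecodeError is raised by the .json() call itself; placed after the except clause, it was never caught. A hedged sketch of the corrected flow (requests.JSONDecodeError assumes requests >= 2.27):

import requests
from requests import JSONDecodeError  # requests >= 2.27
from requests.exceptions import RequestException

def get_latest_tag() -> str:
    try:
        resp = requests.get(
            "https://api.github.com/repos/blakeblackshear/frigate/releases/latest",
            timeout=10,
        )
        data = resp.json()  # raises JSONDecodeError on a non-JSON body
    except (RequestException, JSONDecodeError):
        return "unknown"
    if resp.ok and "tag_name" in data:
        return str(data["tag_name"]).replace("v", "")
    return "unknown"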

View File

@@ -5,7 +5,7 @@ import shutil
import threading
from pathlib import Path
from peewee import fn
from peewee import SQL, fn
from frigate.config import FrigateConfig
from frigate.const import RECORD_DIR
@@ -44,13 +44,19 @@ class StorageMaintainer(threading.Thread):
)
}
# calculate MB/hr
# calculate MB/hr from last 100 segments
try:
bandwidth = round(
Recordings.select(fn.AVG(bandwidth_equation))
# Subquery to get last 100 segments, then average their bandwidth
last_100 = (
Recordings.select(bandwidth_equation.alias("bw"))
.where(Recordings.camera == camera, Recordings.segment_size > 0)
.order_by(Recordings.start_time.desc())
.limit(100)
.scalar()
.alias("recent")
)
bandwidth = round(
Recordings.select(fn.AVG(SQL("bw"))).from_(last_100).scalar()
* 3600,
2,
)
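The subquery is required because SQL applies LIMIT after aggregation: AVG over the table with a trailing LIMIT 100 still averages every row and merely limits the already single-row result. A runnable sqlite3 demonstration with a toy bandwidth column (the schema below is a simplification, not Frigate's):

import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE recordings (camera TEXT, start_time REAL, bw REAL)")
con.executemany(
    "INSERT INTO recordings VALUES ('front', ?, ?)",
    [(float(i), float(i)) for i in range(200)],  # bw grows over time: 0..199
)

# naive: LIMIT runs after AVG collapses everything to one row -> 99.5
naive = con.execute(
    "SELECT AVG(bw) FROM recordings WHERE camera = 'front' "
    "ORDER BY start_time DESC LIMIT 100"
).fetchone()[0]

# subquery: keep only the newest 100 rows, then average -> 149.5
recent = con.execute(
    "SELECT AVG(bw) FROM (SELECT bw FROM recordings WHERE camera = 'front' "
    "ORDER BY start_time DESC LIMIT 100)"
).fetchone()[0]

print(naive, recent)  # 99.5 149.5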

View File

@@ -3,6 +3,8 @@ import logging
import os
import unittest
from fastapi import Request
from fastapi.testclient import TestClient
from peewee_migrate import Router
from playhouse.sqlite_ext import SqliteExtDatabase
from playhouse.sqliteq import SqliteQueueDatabase
@@ -16,6 +18,20 @@ from frigate.review.types import SeverityEnum
from frigate.test.const import TEST_DB, TEST_DB_CLEANUPS
class AuthTestClient(TestClient):
"""TestClient that automatically adds auth headers to all requests."""
def request(self, *args, **kwargs):
# Add default auth headers if not already present
headers = kwargs.get("headers") or {}
if "remote-user" not in headers:
headers["remote-user"] = "admin"
if "remote-role" not in headers:
headers["remote-role"] = "admin"
kwargs["headers"] = headers
return super().request(*args, **kwargs)
class BaseTestHttp(unittest.TestCase):
def setUp(self, models):
# setup clean database for each test run
@@ -113,7 +129,9 @@ class BaseTestHttp(unittest.TestCase):
pass
def create_app(self, stats=None, event_metadata_publisher=None):
return create_fastapi_app(
from frigate.api.auth import get_allowed_cameras_for_filter, get_current_user
app = create_fastapi_app(
FrigateConfig(**self.minimal_config),
self.db,
None,
@@ -123,8 +141,33 @@ class BaseTestHttp(unittest.TestCase):
stats,
event_metadata_publisher,
None,
enforce_default_admin=False,
)
# Default test mocks for authentication
# Tests can override these in their setUp if needed
# This mock uses headers set by AuthTestClient
async def mock_get_current_user(request: Request):
username = request.headers.get("remote-user")
role = request.headers.get("remote-role")
if not username or not role:
from fastapi.responses import JSONResponse
return JSONResponse(
content={"message": "No authorization headers."}, status_code=401
)
return {"username": username, "role": role}
async def mock_get_allowed_cameras_for_filter(request: Request):
return list(self.minimal_config.get("cameras", {}).keys())
app.dependency_overrides[get_current_user] = mock_get_current_user
app.dependency_overrides[get_allowed_cameras_for_filter] = (
mock_get_allowed_cameras_for_filter
)
return app
def insert_mock_event(
self,
id: str,

View File

@@ -1,10 +1,8 @@
from unittest.mock import Mock
from fastapi.testclient import TestClient
from frigate.models import Event, Recordings, ReviewSegment
from frigate.stats.emitter import StatsEmitter
from frigate.test.http_api.base_http_test import BaseTestHttp
from frigate.test.http_api.base_http_test import AuthTestClient, BaseTestHttp
class TestHttpApp(BaseTestHttp):
@@ -20,7 +18,7 @@ class TestHttpApp(BaseTestHttp):
stats.get_latest_stats.return_value = self.test_stats
app = super().create_app(stats)
with TestClient(app) as client:
with AuthTestClient(app) as client:
response = client.get("/stats")
response_json = response.json()
assert response_json == self.test_stats

View File

@@ -1,14 +1,13 @@
from unittest.mock import patch
from fastapi import HTTPException, Request
from fastapi.testclient import TestClient
from frigate.api.auth import (
get_allowed_cameras_for_filter,
get_current_user,
)
from frigate.models import Event, Recordings, ReviewSegment
from frigate.test.http_api.base_http_test import BaseTestHttp
from frigate.test.http_api.base_http_test import AuthTestClient, BaseTestHttp
class TestCameraAccessEventReview(BaseTestHttp):
@@ -16,9 +15,17 @@ class TestCameraAccessEventReview(BaseTestHttp):
super().setUp([Event, ReviewSegment, Recordings])
self.app = super().create_app()
# Mock get_current_user to return valid user for all tests
async def mock_get_current_user():
return {"username": "test_user", "role": "user"}
# Mock get_current_user for all tests
async def mock_get_current_user(request: Request):
username = request.headers.get("remote-user")
role = request.headers.get("remote-role")
if not username or not role:
from fastapi.responses import JSONResponse
return JSONResponse(
content={"message": "No authorization headers."}, status_code=401
)
return {"username": username, "role": role}
self.app.dependency_overrides[get_current_user] = mock_get_current_user
@@ -30,21 +37,25 @@ class TestCameraAccessEventReview(BaseTestHttp):
super().insert_mock_event("event1", camera="front_door")
super().insert_mock_event("event2", camera="back_door")
self.app.dependency_overrides[get_allowed_cameras_for_filter] = lambda: [
"front_door"
]
with TestClient(self.app) as client:
async def mock_cameras(request: Request):
return ["front_door"]
self.app.dependency_overrides[get_allowed_cameras_for_filter] = mock_cameras
with AuthTestClient(self.app) as client:
resp = client.get("/events")
assert resp.status_code == 200
ids = [e["id"] for e in resp.json()]
assert "event1" in ids
assert "event2" not in ids
self.app.dependency_overrides[get_allowed_cameras_for_filter] = lambda: [
"front_door",
"back_door",
]
with TestClient(self.app) as client:
async def mock_cameras(request: Request):
return [
"front_door",
"back_door",
]
self.app.dependency_overrides[get_allowed_cameras_for_filter] = mock_cameras
with AuthTestClient(self.app) as client:
resp = client.get("/events")
assert resp.status_code == 200
ids = [e["id"] for e in resp.json()]
@@ -54,21 +65,25 @@ class TestCameraAccessEventReview(BaseTestHttp):
super().insert_mock_review_segment("rev1", camera="front_door")
super().insert_mock_review_segment("rev2", camera="back_door")
self.app.dependency_overrides[get_allowed_cameras_for_filter] = lambda: [
"front_door"
]
with TestClient(self.app) as client:
async def mock_cameras(request: Request):
return ["front_door"]
self.app.dependency_overrides[get_allowed_cameras_for_filter] = mock_cameras
with AuthTestClient(self.app) as client:
resp = client.get("/review")
assert resp.status_code == 200
ids = [r["id"] for r in resp.json()]
assert "rev1" in ids
assert "rev2" not in ids
self.app.dependency_overrides[get_allowed_cameras_for_filter] = lambda: [
"front_door",
"back_door",
]
with TestClient(self.app) as client:
async def mock_cameras(request: Request):
return [
"front_door",
"back_door",
]
self.app.dependency_overrides[get_allowed_cameras_for_filter] = mock_cameras
with AuthTestClient(self.app) as client:
resp = client.get("/review")
assert resp.status_code == 200
ids = [r["id"] for r in resp.json()]
@@ -84,7 +99,7 @@ class TestCameraAccessEventReview(BaseTestHttp):
raise HTTPException(status_code=403, detail="Access denied")
with patch("frigate.api.event.require_camera_access", mock_require_allowed):
with TestClient(self.app) as client:
with AuthTestClient(self.app) as client:
resp = client.get("/events/event1")
assert resp.status_code == 200
assert resp.json()["id"] == "event1"
@@ -94,7 +109,7 @@ class TestCameraAccessEventReview(BaseTestHttp):
raise HTTPException(status_code=403, detail="Access denied")
with patch("frigate.api.event.require_camera_access", mock_require_disallowed):
with TestClient(self.app) as client:
with AuthTestClient(self.app) as client:
resp = client.get("/events/event1")
assert resp.status_code == 403
@@ -108,7 +123,7 @@ class TestCameraAccessEventReview(BaseTestHttp):
raise HTTPException(status_code=403, detail="Access denied")
with patch("frigate.api.review.require_camera_access", mock_require_allowed):
with TestClient(self.app) as client:
with AuthTestClient(self.app) as client:
resp = client.get("/review/rev1")
assert resp.status_code == 200
assert resp.json()["id"] == "rev1"
@@ -118,7 +133,7 @@ class TestCameraAccessEventReview(BaseTestHttp):
raise HTTPException(status_code=403, detail="Access denied")
with patch("frigate.api.review.require_camera_access", mock_require_disallowed):
with TestClient(self.app) as client:
with AuthTestClient(self.app) as client:
resp = client.get("/review/rev1")
assert resp.status_code == 403
@@ -126,21 +141,25 @@ class TestCameraAccessEventReview(BaseTestHttp):
super().insert_mock_event("event1", camera="front_door")
super().insert_mock_event("event2", camera="back_door")
self.app.dependency_overrides[get_allowed_cameras_for_filter] = lambda: [
"front_door"
]
with TestClient(self.app) as client:
async def mock_cameras(request: Request):
return ["front_door"]
self.app.dependency_overrides[get_allowed_cameras_for_filter] = mock_cameras
with AuthTestClient(self.app) as client:
resp = client.get("/events", params={"cameras": "all"})
assert resp.status_code == 200
ids = [e["id"] for e in resp.json()]
assert "event1" in ids
assert "event2" not in ids
self.app.dependency_overrides[get_allowed_cameras_for_filter] = lambda: [
"front_door",
"back_door",
]
with TestClient(self.app) as client:
async def mock_cameras(request: Request):
return [
"front_door",
"back_door",
]
self.app.dependency_overrides[get_allowed_cameras_for_filter] = mock_cameras
with AuthTestClient(self.app) as client:
resp = client.get("/events", params={"cameras": "all"})
assert resp.status_code == 200
ids = [e["id"] for e in resp.json()]
@@ -150,20 +169,24 @@ class TestCameraAccessEventReview(BaseTestHttp):
super().insert_mock_event("event1", camera="front_door")
super().insert_mock_event("event2", camera="back_door")
self.app.dependency_overrides[get_allowed_cameras_for_filter] = lambda: [
"front_door"
]
with TestClient(self.app) as client:
async def mock_cameras(request: Request):
return ["front_door"]
self.app.dependency_overrides[get_allowed_cameras_for_filter] = mock_cameras
with AuthTestClient(self.app) as client:
resp = client.get("/events/summary")
assert resp.status_code == 200
summary_list = resp.json()
assert len(summary_list) == 1
self.app.dependency_overrides[get_allowed_cameras_for_filter] = lambda: [
"front_door",
"back_door",
]
with TestClient(self.app) as client:
async def mock_cameras(request: Request):
return [
"front_door",
"back_door",
]
self.app.dependency_overrides[get_allowed_cameras_for_filter] = mock_cameras
with AuthTestClient(self.app) as client:
resp = client.get("/events/summary")
summary_list = resp.json()
assert len(summary_list) == 2


@@ -2,14 +2,13 @@ from datetime import datetime
from typing import Any
from unittest.mock import Mock
from fastapi.testclient import TestClient
from playhouse.shortcuts import model_to_dict
from frigate.api.auth import get_allowed_cameras_for_filter, get_current_user
from frigate.comms.event_metadata_updater import EventMetadataPublisher
from frigate.models import Event, Recordings, ReviewSegment, Timeline
from frigate.stats.emitter import StatsEmitter
from frigate.test.http_api.base_http_test import BaseTestHttp
from frigate.test.http_api.base_http_test import AuthTestClient, BaseTestHttp, Request
from frigate.test.test_storage import _insert_mock_event
@@ -18,14 +17,26 @@ class TestHttpApp(BaseTestHttp):
super().setUp([Event, Recordings, ReviewSegment, Timeline])
self.app = super().create_app()
# Mock auth to bypass camera access for tests
async def mock_get_current_user(request: Any):
return {"username": "test_user", "role": "admin"}
# Mock get_current_user for all tests
async def mock_get_current_user(request: Request):
username = request.headers.get("remote-user")
role = request.headers.get("remote-role")
if not username or not role:
from fastapi.responses import JSONResponse
return JSONResponse(
content={"message": "No authorization headers."}, status_code=401
)
return {"username": username, "role": role}
self.app.dependency_overrides[get_current_user] = mock_get_current_user
self.app.dependency_overrides[get_allowed_cameras_for_filter] = lambda: [
"front_door"
]
async def mock_get_allowed_cameras_for_filter(request: Request):
return ["front_door"]
self.app.dependency_overrides[get_allowed_cameras_for_filter] = (
mock_get_allowed_cameras_for_filter
)
def tearDown(self):
self.app.dependency_overrides.clear()
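AuthTestClient itself is not shown in this diff; given that the mocked get_current_user reads the remote-user and remote-role headers, it presumably wraps TestClient so that every request carries those headers by default, while per-request headers (as in the retain and delete tests below) can still override them. A hedged sketch of one possible implementation:
from fastapi.testclient import TestClient

class AuthTestClient(TestClient):
    """Hypothetical test client that attaches auth headers to every request."""

    def __init__(self, app, username: str = "admin", role: str = "admin", **kwargs):
        super().__init__(app, **kwargs)
        # Default headers are merged into each request; explicit per-request
        # headers such as headers={"remote-role": "admin"} take precedence.
        self.headers.update({"remote-user": username, "remote-role": role})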
@@ -35,20 +46,20 @@ class TestHttpApp(BaseTestHttp):
################################### GET /events Endpoint #########################################################
####################################################################################################################
def test_get_event_list_no_events(self):
with TestClient(self.app) as client:
with AuthTestClient(self.app) as client:
events = client.get("/events").json()
assert len(events) == 0
def test_get_event_list_no_match_event_id(self):
id = "123456.random"
with TestClient(self.app) as client:
with AuthTestClient(self.app) as client:
super().insert_mock_event(id)
events = client.get("/events", params={"event_id": "abc"}).json()
assert len(events) == 0
def test_get_event_list_match_event_id(self):
id = "123456.random"
with TestClient(self.app) as client:
with AuthTestClient(self.app) as client:
super().insert_mock_event(id)
events = client.get("/events", params={"event_id": id}).json()
assert len(events) == 1
@@ -58,7 +69,7 @@ class TestHttpApp(BaseTestHttp):
now = int(datetime.now().timestamp())
id = "123456.random"
with TestClient(self.app) as client:
with AuthTestClient(self.app) as client:
super().insert_mock_event(id, now, now + 1)
events = client.get(
"/events", params={"max_length": 1, "min_length": 1}
@@ -69,7 +80,7 @@ class TestHttpApp(BaseTestHttp):
def test_get_event_list_no_match_max_length(self):
now = int(datetime.now().timestamp())
with TestClient(self.app) as client:
with AuthTestClient(self.app) as client:
id = "123456.random"
super().insert_mock_event(id, now, now + 2)
events = client.get("/events", params={"max_length": 1}).json()
@@ -78,7 +89,7 @@ class TestHttpApp(BaseTestHttp):
def test_get_event_list_no_match_min_length(self):
now = int(datetime.now().timestamp())
with TestClient(self.app) as client:
with AuthTestClient(self.app) as client:
id = "123456.random"
super().insert_mock_event(id, now, now + 2)
events = client.get("/events", params={"min_length": 3}).json()
@@ -88,7 +99,7 @@ class TestHttpApp(BaseTestHttp):
id = "123456.random"
id2 = "54321.random"
with TestClient(self.app) as client:
with AuthTestClient(self.app) as client:
super().insert_mock_event(id)
events = client.get("/events").json()
assert len(events) == 1
@@ -108,14 +119,14 @@ class TestHttpApp(BaseTestHttp):
def test_get_event_list_no_match_has_clip(self):
now = int(datetime.now().timestamp())
with TestClient(self.app) as client:
with AuthTestClient(self.app) as client:
id = "123456.random"
super().insert_mock_event(id, now, now + 2)
events = client.get("/events", params={"has_clip": 0}).json()
assert len(events) == 0
def test_get_event_list_has_clip(self):
with TestClient(self.app) as client:
with AuthTestClient(self.app) as client:
id = "123456.random"
super().insert_mock_event(id, has_clip=True)
events = client.get("/events", params={"has_clip": 1}).json()
@@ -123,7 +134,7 @@ class TestHttpApp(BaseTestHttp):
assert events[0]["id"] == id
def test_get_event_list_sort_score(self):
with TestClient(self.app) as client:
with AuthTestClient(self.app) as client:
id = "123456.random"
id2 = "54321.random"
super().insert_mock_event(id, top_score=37, score=37, data={"score": 50})
@@ -141,7 +152,7 @@ class TestHttpApp(BaseTestHttp):
def test_get_event_list_sort_start_time(self):
now = int(datetime.now().timestamp())
with TestClient(self.app) as client:
with AuthTestClient(self.app) as client:
id = "123456.random"
id2 = "54321.random"
super().insert_mock_event(id, start_time=now + 3)
@@ -159,7 +170,7 @@ class TestHttpApp(BaseTestHttp):
def test_get_good_event(self):
id = "123456.random"
with TestClient(self.app) as client:
with AuthTestClient(self.app) as client:
super().insert_mock_event(id)
event = client.get(f"/events/{id}").json()
@@ -171,7 +182,7 @@ class TestHttpApp(BaseTestHttp):
id = "123456.random"
bad_id = "654321.other"
with TestClient(self.app) as client:
with AuthTestClient(self.app) as client:
super().insert_mock_event(id)
event_response = client.get(f"/events/{bad_id}")
assert event_response.status_code == 404
@@ -180,7 +191,7 @@ class TestHttpApp(BaseTestHttp):
def test_delete_event(self):
id = "123456.random"
with TestClient(self.app) as client:
with AuthTestClient(self.app) as client:
super().insert_mock_event(id)
event = client.get(f"/events/{id}").json()
assert event
@@ -193,7 +204,7 @@ class TestHttpApp(BaseTestHttp):
def test_event_retention(self):
id = "123456.random"
with TestClient(self.app) as client:
with AuthTestClient(self.app) as client:
super().insert_mock_event(id)
client.post(f"/events/{id}/retain", headers={"remote-role": "admin"})
event = client.get(f"/events/{id}").json()
@@ -212,12 +223,11 @@ class TestHttpApp(BaseTestHttp):
morning = 1656590400 # 06/30/2022 6 am (GMT)
evening = 1656633600 # 06/30/2022 6 pm (GMT)
with TestClient(self.app) as client:
with AuthTestClient(self.app) as client:
super().insert_mock_event(morning_id, morning)
super().insert_mock_event(evening_id, evening)
# both events come back
events = client.get("/events").json()
print("events!!!", events)
assert events
assert len(events) == 2
# morning event is excluded
@@ -248,7 +258,7 @@ class TestHttpApp(BaseTestHttp):
mock_event_updater.publish.side_effect = update_event
with TestClient(app) as client:
with AuthTestClient(app) as client:
super().insert_mock_event(id)
new_sub_label_response = client.post(
f"/events/{id}/sub_label",
@@ -285,7 +295,7 @@ class TestHttpApp(BaseTestHttp):
mock_event_updater.publish.side_effect = update_event
with TestClient(app) as client:
with AuthTestClient(app) as client:
super().insert_mock_event(id)
client.post(
f"/events/{id}/sub_label",
@@ -301,7 +311,7 @@ class TestHttpApp(BaseTestHttp):
####################################################################################################################
def test_get_metrics(self):
"""ensure correct prometheus metrics api response"""
with TestClient(self.app) as client:
with AuthTestClient(self.app) as client:
ts_start = datetime.now().timestamp()
ts_end = ts_start + 30
_insert_mock_event(


@@ -1,14 +1,13 @@
"""Unit tests for recordings/media API endpoints."""
from datetime import datetime, timezone
from typing import Any
import pytz
from fastapi.testclient import TestClient
from fastapi import Request
from frigate.api.auth import get_allowed_cameras_for_filter, get_current_user
from frigate.models import Recordings
from frigate.test.http_api.base_http_test import BaseTestHttp
from frigate.test.http_api.base_http_test import AuthTestClient, BaseTestHttp
class TestHttpMedia(BaseTestHttp):
@@ -19,15 +18,26 @@ class TestHttpMedia(BaseTestHttp):
super().setUp([Recordings])
self.app = super().create_app()
# Mock auth to bypass camera access for tests
async def mock_get_current_user(request: Any):
return {"username": "test_user", "role": "admin"}
# Mock get_current_user for all tests
async def mock_get_current_user(request: Request):
username = request.headers.get("remote-user")
role = request.headers.get("remote-role")
if not username or not role:
from fastapi.responses import JSONResponse
return JSONResponse(
content={"message": "No authorization headers."}, status_code=401
)
return {"username": username, "role": role}
self.app.dependency_overrides[get_current_user] = mock_get_current_user
self.app.dependency_overrides[get_allowed_cameras_for_filter] = lambda: [
"front_door",
"back_door",
]
async def mock_get_allowed_cameras_for_filter(request: Request):
return ["front_door"]
self.app.dependency_overrides[get_allowed_cameras_for_filter] = (
mock_get_allowed_cameras_for_filter
)
def tearDown(self):
"""Clean up after tests."""
@@ -52,7 +62,7 @@ class TestHttpMedia(BaseTestHttp):
# March 11, 2024 at 12:00 PM EDT (after DST)
march_11_noon = tz.localize(datetime(2024, 3, 11, 12, 0, 0)).timestamp()
with TestClient(self.app) as client:
with AuthTestClient(self.app) as client:
# Insert recordings for each day
Recordings.insert(
id="recording_march_9",
@@ -128,7 +138,7 @@ class TestHttpMedia(BaseTestHttp):
# November 4, 2024 at 12:00 PM EST (after DST)
nov_4_noon = tz.localize(datetime(2024, 11, 4, 12, 0, 0)).timestamp()
with TestClient(self.app) as client:
with AuthTestClient(self.app) as client:
# Insert recordings for each day
Recordings.insert(
id="recording_nov_2",
@@ -195,7 +205,15 @@ class TestHttpMedia(BaseTestHttp):
# March 10, 2024 at 3:00 PM EDT (after DST transition)
march_10_afternoon = tz.localize(datetime(2024, 3, 10, 15, 0, 0)).timestamp()
with TestClient(self.app) as client:
with AuthTestClient(self.app) as client:
# Override allowed cameras for this test to include both
async def mock_get_allowed_cameras_for_filter(_request: Request):
return ["front_door", "back_door"]
self.app.dependency_overrides[get_allowed_cameras_for_filter] = (
mock_get_allowed_cameras_for_filter
)
# Insert recordings for front_door on March 9
Recordings.insert(
id="front_march_9",
@@ -236,6 +254,14 @@ class TestHttpMedia(BaseTestHttp):
assert summary["2024-03-09"] is True
assert summary["2024-03-10"] is True
# Reset dependency override back to default single camera for other tests
async def reset_allowed_cameras(_request: Request):
return ["front_door"]
self.app.dependency_overrides[get_allowed_cameras_for_filter] = (
reset_allowed_cameras
)
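The test above swaps the allowed-cameras override mid-test and resets it by hand, so a failing assertion before the reset would leave the widened override in place for the rest of the method (tearDown still clears everything between tests). An exception-safe alternative, sketched here rather than taken from the diff, is a small context manager over dependency_overrides:
from contextlib import contextmanager

@contextmanager
def override_dependency(app, dependency, replacement):
    """Temporarily replace a FastAPI dependency, restoring any prior override."""
    previous = app.dependency_overrides.get(dependency)
    app.dependency_overrides[dependency] = replacement
    try:
        yield
    finally:
        if previous is None:
            app.dependency_overrides.pop(dependency, None)
        else:
            app.dependency_overrides[dependency] = previous
A test would then wrap the widened section in with override_dependency(self.app, get_allowed_cameras_for_filter, mock_both_cameras): and the reset happens even if an assertion fails.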
def test_recordings_summary_at_dst_transition_time(self):
"""
Test recordings that span the exact DST transition time.
@@ -250,7 +276,7 @@ class TestHttpMedia(BaseTestHttp):
# This is 1.5 hours of actual time but spans the "missing" hour
after_transition = tz.localize(datetime(2024, 3, 10, 3, 30, 0)).timestamp()
with TestClient(self.app) as client:
with AuthTestClient(self.app) as client:
Recordings.insert(
id="recording_during_transition",
path="/media/recordings/transition.mp4",
@@ -283,7 +309,7 @@ class TestHttpMedia(BaseTestHttp):
march_9_utc = datetime(2024, 3, 9, 17, 0, 0, tzinfo=timezone.utc).timestamp()
march_10_utc = datetime(2024, 3, 10, 17, 0, 0, tzinfo=timezone.utc).timestamp()
with TestClient(self.app) as client:
with AuthTestClient(self.app) as client:
Recordings.insert(
id="recording_march_9_utc",
path="/media/recordings/march_9_utc.mp4",
@@ -325,7 +351,7 @@ class TestHttpMedia(BaseTestHttp):
"""
Test recordings summary when no recordings exist.
"""
with TestClient(self.app) as client:
with AuthTestClient(self.app) as client:
response = client.get(
"/recordings/summary",
params={"timezone": "America/New_York", "cameras": "all"},
@@ -342,7 +368,7 @@ class TestHttpMedia(BaseTestHttp):
tz = pytz.timezone("America/New_York")
march_10_noon = tz.localize(datetime(2024, 3, 10, 12, 0, 0)).timestamp()
with TestClient(self.app) as client:
with AuthTestClient(self.app) as client:
# Insert recordings for both cameras
Recordings.insert(
id="front_recording",


@@ -1,12 +1,12 @@
from datetime import datetime, timedelta
from fastapi.testclient import TestClient
from fastapi import Request
from peewee import DoesNotExist
from frigate.api.auth import get_allowed_cameras_for_filter, get_current_user
from frigate.models import Event, Recordings, ReviewSegment, UserReviewStatus
from frigate.review.types import SeverityEnum
from frigate.test.http_api.base_http_test import BaseTestHttp
from frigate.test.http_api.base_http_test import AuthTestClient, BaseTestHttp
class TestHttpReview(BaseTestHttp):
@@ -16,14 +16,26 @@ class TestHttpReview(BaseTestHttp):
self.user_id = "admin"
# Mock get_current_user for all tests
async def mock_get_current_user():
return {"username": self.user_id, "role": "admin"}
# This mock uses headers set by AuthTestClient
async def mock_get_current_user(request: Request):
username = request.headers.get("remote-user")
role = request.headers.get("remote-role")
if not username or not role:
from fastapi.responses import JSONResponse
return JSONResponse(
content={"message": "No authorization headers."}, status_code=401
)
return {"username": username, "role": role}
self.app.dependency_overrides[get_current_user] = mock_get_current_user
self.app.dependency_overrides[get_allowed_cameras_for_filter] = lambda: [
"front_door"
]
async def mock_get_allowed_cameras_for_filter(request: Request):
return ["front_door"]
self.app.dependency_overrides[get_allowed_cameras_for_filter] = (
mock_get_allowed_cameras_for_filter
)
def tearDown(self):
self.app.dependency_overrides.clear()
@@ -57,7 +69,7 @@ class TestHttpReview(BaseTestHttp):
but ends after is included in the results."""
now = datetime.now().timestamp()
with TestClient(self.app) as client:
with AuthTestClient(self.app) as client:
super().insert_mock_review_segment("123456.random", now, now + 2)
response = client.get("/review")
assert response.status_code == 200
@@ -67,7 +79,7 @@ class TestHttpReview(BaseTestHttp):
def test_get_review_no_filters(self):
now = datetime.now().timestamp()
with TestClient(self.app) as client:
with AuthTestClient(self.app) as client:
id = "123456.random"
super().insert_mock_review_segment(id, now - 2, now - 1)
response = client.get("/review")
@@ -81,7 +93,7 @@ class TestHttpReview(BaseTestHttp):
"""Test that review items outside the range are not returned."""
now = datetime.now().timestamp()
with TestClient(self.app) as client:
with AuthTestClient(self.app) as client:
id = "123456.random"
super().insert_mock_review_segment(id, now - 2, now - 1)
super().insert_mock_review_segment(f"{id}2", now + 4, now + 5)
@@ -97,7 +109,7 @@ class TestHttpReview(BaseTestHttp):
def test_get_review_with_time_filter(self):
now = datetime.now().timestamp()
with TestClient(self.app) as client:
with AuthTestClient(self.app) as client:
id = "123456.random"
super().insert_mock_review_segment(id, now, now + 2)
params = {
@@ -113,7 +125,7 @@ class TestHttpReview(BaseTestHttp):
def test_get_review_with_limit_filter(self):
now = datetime.now().timestamp()
with TestClient(self.app) as client:
with AuthTestClient(self.app) as client:
id = "123456.random"
id2 = "654321.random"
super().insert_mock_review_segment(id, now, now + 2)
@@ -132,7 +144,7 @@ class TestHttpReview(BaseTestHttp):
def test_get_review_with_severity_filters_no_matches(self):
now = datetime.now().timestamp()
with TestClient(self.app) as client:
with AuthTestClient(self.app) as client:
id = "123456.random"
super().insert_mock_review_segment(id, now, now + 2, SeverityEnum.detection)
params = {
@@ -149,7 +161,7 @@ class TestHttpReview(BaseTestHttp):
def test_get_review_with_severity_filters(self):
now = datetime.now().timestamp()
with TestClient(self.app) as client:
with AuthTestClient(self.app) as client:
id = "123456.random"
super().insert_mock_review_segment(id, now, now + 2, SeverityEnum.detection)
params = {
@@ -165,7 +177,7 @@ class TestHttpReview(BaseTestHttp):
def test_get_review_with_all_filters(self):
now = datetime.now().timestamp()
with TestClient(self.app) as client:
with AuthTestClient(self.app) as client:
id = "123456.random"
super().insert_mock_review_segment(id, now, now + 2)
params = {
@@ -188,7 +200,7 @@ class TestHttpReview(BaseTestHttp):
################################### GET /review/summary Endpoint #################################################
####################################################################################################################
def test_get_review_summary_all_filters(self):
with TestClient(self.app) as client:
with AuthTestClient(self.app) as client:
super().insert_mock_review_segment("123456.random")
params = {
"cameras": "front_door",
@@ -219,7 +231,7 @@ class TestHttpReview(BaseTestHttp):
self.assertEqual(response_json, expected_response)
def test_get_review_summary_no_filters(self):
with TestClient(self.app) as client:
with AuthTestClient(self.app) as client:
super().insert_mock_review_segment("123456.random")
response = client.get("/review/summary")
assert response.status_code == 200
@@ -247,7 +259,7 @@ class TestHttpReview(BaseTestHttp):
now = datetime.now()
five_days_ago = datetime.today() - timedelta(days=5)
with TestClient(self.app) as client:
with AuthTestClient(self.app) as client:
super().insert_mock_review_segment(
"123456.random", now.timestamp() - 2, now.timestamp() - 1
)
@@ -291,7 +303,7 @@ class TestHttpReview(BaseTestHttp):
now = datetime.now()
five_days_ago = datetime.today() - timedelta(days=5)
with TestClient(self.app) as client:
with AuthTestClient(self.app) as client:
super().insert_mock_review_segment("123456.random", now.timestamp())
five_days_ago_ts = five_days_ago.timestamp()
for i in range(20):
@@ -342,7 +354,7 @@ class TestHttpReview(BaseTestHttp):
def test_get_review_summary_multiple_in_same_day_with_reviewed(self):
five_days_ago = datetime.today() - timedelta(days=5)
with TestClient(self.app) as client:
with AuthTestClient(self.app) as client:
five_days_ago_ts = five_days_ago.timestamp()
for i in range(10):
id = f"123456_{i}.random_alert_not_reviewed"
@@ -393,14 +405,14 @@ class TestHttpReview(BaseTestHttp):
####################################################################################################################
def test_post_reviews_viewed_no_body(self):
with TestClient(self.app) as client:
with AuthTestClient(self.app) as client:
super().insert_mock_review_segment("123456.random")
response = client.post("/reviews/viewed")
# Missing ids
assert response.status_code == 422
def test_post_reviews_viewed_no_body_ids(self):
with TestClient(self.app) as client:
with AuthTestClient(self.app) as client:
super().insert_mock_review_segment("123456.random")
body = {"ids": [""]}
response = client.post("/reviews/viewed", json=body)
@@ -408,7 +420,7 @@ class TestHttpReview(BaseTestHttp):
assert response.status_code == 422
def test_post_reviews_viewed_non_existent_id(self):
with TestClient(self.app) as client:
with AuthTestClient(self.app) as client:
id = "123456.random"
super().insert_mock_review_segment(id)
body = {"ids": ["1"]}
@@ -425,7 +437,7 @@ class TestHttpReview(BaseTestHttp):
)
def test_post_reviews_viewed(self):
with TestClient(self.app) as client:
with AuthTestClient(self.app) as client:
id = "123456.random"
super().insert_mock_review_segment(id)
body = {"ids": [id]}
@@ -445,14 +457,14 @@ class TestHttpReview(BaseTestHttp):
################################### POST reviews/delete Endpoint ################################################
####################################################################################################################
def test_post_reviews_delete_no_body(self):
with TestClient(self.app) as client:
with AuthTestClient(self.app) as client:
super().insert_mock_review_segment("123456.random")
response = client.post("/reviews/delete", headers={"remote-role": "admin"})
# Missing ids
assert response.status_code == 422
def test_post_reviews_delete_no_body_ids(self):
with TestClient(self.app) as client:
with AuthTestClient(self.app) as client:
super().insert_mock_review_segment("123456.random")
body = {"ids": [""]}
response = client.post(
@@ -462,7 +474,7 @@ class TestHttpReview(BaseTestHttp):
assert response.status_code == 422
def test_post_reviews_delete_non_existent_id(self):
with TestClient(self.app) as client:
with AuthTestClient(self.app) as client:
id = "123456.random"
super().insert_mock_review_segment(id)
body = {"ids": ["1"]}
@@ -479,7 +491,7 @@ class TestHttpReview(BaseTestHttp):
assert review_ids_in_db_after[0].id == id
def test_post_reviews_delete(self):
with TestClient(self.app) as client:
with AuthTestClient(self.app) as client:
id = "123456.random"
super().insert_mock_review_segment(id)
body = {"ids": [id]}
@@ -495,7 +507,7 @@ class TestHttpReview(BaseTestHttp):
assert len(review_ids_in_db_after) == 0
def test_post_reviews_delete_many(self):
with TestClient(self.app) as client:
with AuthTestClient(self.app) as client:
ids = ["123456.random", "654321.random"]
for id in ids:
super().insert_mock_review_segment(id)
@@ -527,7 +539,7 @@ class TestHttpReview(BaseTestHttp):
def test_review_activity_motion_no_data_for_time_range(self):
now = datetime.now().timestamp()
with TestClient(self.app) as client:
with AuthTestClient(self.app) as client:
params = {
"after": now,
"before": now + 3,
@@ -540,7 +552,7 @@ class TestHttpReview(BaseTestHttp):
def test_review_activity_motion(self):
now = int(datetime.now().timestamp())
with TestClient(self.app) as client:
with AuthTestClient(self.app) as client:
one_m = int((datetime.now() + timedelta(minutes=1)).timestamp())
id = "123456.random"
id2 = "123451.random"
@@ -573,7 +585,7 @@ class TestHttpReview(BaseTestHttp):
################################### GET /review/event/{event_id} Endpoint #######################################
####################################################################################################################
def test_review_event_not_found(self):
with TestClient(self.app) as client:
with AuthTestClient(self.app) as client:
response = client.get("/review/event/123456.random")
assert response.status_code == 404
response_json = response.json()
@@ -585,7 +597,7 @@ class TestHttpReview(BaseTestHttp):
def test_review_event_not_found_in_data(self):
now = datetime.now().timestamp()
with TestClient(self.app) as client:
with AuthTestClient(self.app) as client:
id = "123456.random"
super().insert_mock_review_segment(id, now + 1, now + 2)
response = client.get(f"/review/event/{id}")
@@ -599,7 +611,7 @@ class TestHttpReview(BaseTestHttp):
def test_review_get_specific_event(self):
now = datetime.now().timestamp()
with TestClient(self.app) as client:
with AuthTestClient(self.app) as client:
event_id = "123456.event.random"
super().insert_mock_event(event_id)
review_id = "123456.review.random"
@@ -626,7 +638,7 @@ class TestHttpReview(BaseTestHttp):
################################### GET /review/{review_id} Endpoint #######################################
####################################################################################################################
def test_review_not_found(self):
with TestClient(self.app) as client:
with AuthTestClient(self.app) as client:
response = client.get("/review/123456.random")
assert response.status_code == 404
response_json = response.json()
@@ -638,7 +650,7 @@ class TestHttpReview(BaseTestHttp):
def test_get_review(self):
now = datetime.now().timestamp()
with TestClient(self.app) as client:
with AuthTestClient(self.app) as client:
review_id = "123456.review.random"
super().insert_mock_review_segment(review_id, now + 1, now + 2)
response = client.get(f"/review/{review_id}")
@@ -662,7 +674,7 @@ class TestHttpReview(BaseTestHttp):
####################################################################################################################
def test_delete_review_viewed_review_not_found(self):
with TestClient(self.app) as client:
with AuthTestClient(self.app) as client:
review_id = "123456.random"
response = client.delete(f"/review/{review_id}/viewed")
assert response.status_code == 404
@@ -675,7 +687,7 @@ class TestHttpReview(BaseTestHttp):
def test_delete_review_viewed(self):
now = datetime.now().timestamp()
with TestClient(self.app) as client:
with AuthTestClient(self.app) as client:
review_id = "123456.review.random"
super().insert_mock_review_segment(review_id, now + 1, now + 2)
self._insert_user_review_status(review_id, reviewed=True)


@@ -86,11 +86,11 @@ class TimelineProcessor(threading.Thread):
event_data: dict[Any, Any],
) -> bool:
"""Handle object detection."""
save = False
camera_config = self.config.cameras[camera]
event_id = event_data["id"]
timeline_entry = {
# Base timeline entry data that all entries will share
base_entry = {
Timeline.timestamp: event_data["frame_time"],
Timeline.camera: camera,
Timeline.source: "tracked_object",
@@ -123,40 +123,64 @@ class TimelineProcessor(threading.Thread):
e[Timeline.data]["sub_label"] = event_data["sub_label"]
if event_type == EventStateEnum.start:
timeline_entry = base_entry.copy()
timeline_entry[Timeline.class_type] = "visible"
save = True
self.insert_or_save(timeline_entry, prev_event_data, event_data)
elif event_type == EventStateEnum.update:
# Check all conditions and create timeline entries for each change
entries_to_save = []
# Check for zone changes
prev_zones = set(prev_event_data["current_zones"])
current_zones = set(event_data["current_zones"])
zones_changed = prev_zones != current_zones
# Only save "entered_zone" events when the object is actually IN zones
if (
len(prev_event_data["current_zones"]) < len(event_data["current_zones"])
zones_changed
and not event_data["stationary"]
and len(current_zones) > 0
):
timeline_entry[Timeline.class_type] = "entered_zone"
timeline_entry[Timeline.data]["zones"] = event_data["current_zones"]
save = True
elif prev_event_data["stationary"] != event_data["stationary"]:
timeline_entry[Timeline.class_type] = (
zone_entry = base_entry.copy()
zone_entry[Timeline.class_type] = "entered_zone"
zone_entry[Timeline.data] = base_entry[Timeline.data].copy()
zone_entry[Timeline.data]["zones"] = event_data["current_zones"]
entries_to_save.append(zone_entry)
# Check for stationary status change
if prev_event_data["stationary"] != event_data["stationary"]:
stationary_entry = base_entry.copy()
stationary_entry[Timeline.class_type] = (
"stationary" if event_data["stationary"] else "active"
)
save = True
elif prev_event_data["attributes"] == {} and event_data["attributes"] != {}:
timeline_entry[Timeline.class_type] = "attribute"
timeline_entry[Timeline.data]["attribute"] = list(
stationary_entry[Timeline.data] = base_entry[Timeline.data].copy()
entries_to_save.append(stationary_entry)
# Check for new attributes
if prev_event_data["attributes"] == {} and event_data["attributes"] != {}:
attribute_entry = base_entry.copy()
attribute_entry[Timeline.class_type] = "attribute"
attribute_entry[Timeline.data] = base_entry[Timeline.data].copy()
attribute_entry[Timeline.data]["attribute"] = list(
event_data["attributes"].keys()
)[0]
if len(event_data["current_attributes"]) > 0:
timeline_entry[Timeline.data]["attribute_box"] = to_relative_box(
attribute_entry[Timeline.data]["attribute_box"] = to_relative_box(
camera_config.detect.width,
camera_config.detect.height,
event_data["current_attributes"][0]["box"],
)
save = True
elif event_type == EventStateEnum.end:
timeline_entry[Timeline.class_type] = "gone"
save = True
entries_to_save.append(attribute_entry)
if save:
# Save all entries
for entry in entries_to_save:
self.insert_or_save(entry, prev_event_data, event_data)
elif event_type == EventStateEnum.end:
timeline_entry = base_entry.copy()
timeline_entry[Timeline.class_type] = "gone"
self.insert_or_save(timeline_entry, prev_event_data, event_data)
def handle_api_entry(
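One detail worth calling out in the refactor above: dict.copy() is shallow, which is why each branch copies base_entry[Timeline.data] explicitly in addition to base_entry itself; without the nested copy, every timeline entry produced for a single update would share, and mutate, the same data dict. A quick illustration of the pitfall:
base = {"data": {"zones": []}}

entry_a = base.copy()  # shallow: entry_a["data"] is the same dict object
entry_a["data"]["zones"] = ["porch"]
assert base["data"]["zones"] == ["porch"]  # base was mutated too

entry_b = base.copy()
entry_b["data"] = base["data"].copy()  # copy the nested dict as well
entry_b["data"]["zones"] = ["driveway"]
assert base["data"]["zones"] == ["porch"]  # base untouched this time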


@@ -78,6 +78,8 @@ class TrackedObjectProcessor(threading.Thread):
[
CameraConfigUpdateEnum.add,
CameraConfigUpdateEnum.enabled,
CameraConfigUpdateEnum.motion,
CameraConfigUpdateEnum.objects,
CameraConfigUpdateEnum.remove,
CameraConfigUpdateEnum.zones,
],


@@ -19,9 +19,10 @@ from frigate.const import (
PROCESS_PRIORITY_LOW,
UPDATE_MODEL_STATE,
)
from frigate.log import redirect_output_to_logger
from frigate.log import redirect_output_to_logger, suppress_stderr_during
from frigate.models import Event, Recordings, ReviewSegment
from frigate.types import ModelStatusTypesEnum
from frigate.util.downloader import ModelDownloader
from frigate.util.file import get_event_thumbnail_bytes
from frigate.util.image import get_image_from_recording
from frigate.util.process import FrigateProcess
@@ -121,6 +122,10 @@ def get_dataset_image_count(model_name: str) -> int:
class ClassificationTrainingProcess(FrigateProcess):
def __init__(self, model_name: str) -> None:
self.BASE_WEIGHT_URL = os.environ.get(
"TF_KERAS_MOBILENET_V2_WEIGHTS_URL",
"",
)
super().__init__(
stop_event=None,
priority=PROCESS_PRIORITY_LOW,
@@ -179,11 +184,23 @@ class ClassificationTrainingProcess(FrigateProcess):
)
return False
weights_path = "imagenet"
# Download MobileNetV2 weights if not present
if self.BASE_WEIGHT_URL:
weights_path = os.path.join(
MODEL_CACHE_DIR, "MobileNet", "mobilenet_v2_weights.h5"
)
if not os.path.exists(weights_path):
logger.info("Downloading MobileNet V2 weights file")
ModelDownloader.download_from_url(
self.BASE_WEIGHT_URL, weights_path
)
# Start with imagenet base model with 35% of channels in each layer
base_model = MobileNetV2(
input_shape=(224, 224, 3),
include_top=False,
weights="imagenet",
weights=weights_path,
alpha=0.35,
)
base_model.trainable = False # Freeze pre-trained layers
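Keras accepts either the string "imagenet" or a filesystem path for MobileNetV2's weights argument, which is what makes this escape hatch work: when TF_KERAS_MOBILENET_V2_WEIGHTS_URL is set, the file is fetched once into the model cache and the local path is passed to the constructor; otherwise the bundled "imagenet" weights are used. A rough standalone equivalent of the resolution logic (urllib stands in for ModelDownloader.download_from_url):
import os
import urllib.request

def resolve_mobilenet_weights(cache_dir: str) -> str:
    """Return "imagenet" or a local path to mirrored MobileNetV2 weights."""
    url = os.environ.get("TF_KERAS_MOBILENET_V2_WEIGHTS_URL", "")
    if not url:
        return "imagenet"
    path = os.path.join(cache_dir, "MobileNet", "mobilenet_v2_weights.h5")
    if not os.path.exists(path):
        os.makedirs(os.path.dirname(path), exist_ok=True)
        urllib.request.urlretrieve(url, path)  # stand-in for ModelDownloader
    return path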
@@ -233,15 +250,20 @@ class ClassificationTrainingProcess(FrigateProcess):
logger.debug(f"Converting {self.model_name} to TFLite...")
# convert model to tflite
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = (
self.__generate_representative_dataset_factory(dataset_dir)
)
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.uint8
converter.inference_output_type = tf.uint8
tflite_model = converter.convert()
# Suppress stderr during conversion to avoid LLVM debug output
# (fully_quantize, inference_type, MLIR optimization messages, etc)
with suppress_stderr_during("tflite_conversion"):
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = (
self.__generate_representative_dataset_factory(dataset_dir)
)
converter.target_spec.supported_ops = [
tf.lite.OpsSet.TFLITE_BUILTINS_INT8
]
converter.inference_input_type = tf.uint8
converter.inference_output_type = tf.uint8
tflite_model = converter.convert()
# write model
model_path = os.path.join(model_dir, "model.tflite")
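suppress_stderr_during comes from frigate.log and its body is not part of this diff. The LLVM/MLIR messages it silences are written by native code directly to file descriptor 2, which a Python-level sys.stderr swap cannot intercept, so the usual technique is to dup the fd and point it at /dev/null for the duration. A hedged sketch of such a context manager:
import os
from contextlib import contextmanager

@contextmanager
def suppress_stderr_during(label: str):
    """Redirect fd 2 to /dev/null so native-code output (e.g. MLIR/LLVM
    chatter during TFLite conversion) is dropped; label is informational."""
    saved_fd = os.dup(2)
    devnull_fd = os.open(os.devnull, os.O_WRONLY)
    try:
        os.dup2(devnull_fd, 2)
        yield
    finally:
        os.dup2(saved_fd, 2)
        os.close(saved_fd)
        os.close(devnull_fd)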
@@ -330,7 +352,7 @@ def collect_state_classification_examples(
1. Queries review items from specified cameras
2. Selects 100 balanced timestamps across the data
3. Extracts keyframes from recordings (cropped to specified regions)
4. Selects 20 most visually distinct images
4. Selects 24 most visually distinct images
5. Saves them to the dataset directory
Args:
@@ -338,8 +360,6 @@ def collect_state_classification_examples(
cameras: Dict mapping camera names to normalized crop coordinates [x1, y1, x2, y2] (0-1)
"""
dataset_dir = os.path.join(CLIPS_DIR, model_name, "dataset")
temp_dir = os.path.join(dataset_dir, "temp")
os.makedirs(temp_dir, exist_ok=True)
# Step 1: Get review items for the cameras
camera_names = list(cameras.keys())
@@ -354,6 +374,10 @@ def collect_state_classification_examples(
logger.warning(f"No review items found for cameras: {camera_names}")
return
# The temp directory is only created when there are review_items.
temp_dir = os.path.join(dataset_dir, "temp")
os.makedirs(temp_dir, exist_ok=True)
# Step 2: Create balanced timestamp selection (100 samples)
timestamps = _select_balanced_timestamps(review_items, target_count=100)
@@ -482,6 +506,10 @@ def _extract_keyframes(
"""
Extract keyframes from recordings at specified timestamps and crop to specified regions.
This implementation batches work by running multiple ffmpeg snapshot commands
concurrently, which significantly reduces total runtime compared to
processing each timestamp serially.
Args:
ffmpeg_path: Path to ffmpeg binary
timestamps: List of timestamp dicts from _select_balanced_timestamps
@@ -491,15 +519,21 @@ def _extract_keyframes(
Returns:
List of paths to successfully extracted and cropped keyframe images
"""
keyframe_paths = []
from concurrent.futures import ThreadPoolExecutor, as_completed
for idx, ts_info in enumerate(timestamps):
if not timestamps:
return []
# Limit the number of concurrent ffmpeg processes so we don't overload the host.
max_workers = min(5, len(timestamps))
def _process_timestamp(idx: int, ts_info: dict) -> tuple[int, str | None]:
camera = ts_info["camera"]
timestamp = ts_info["timestamp"]
if camera not in camera_crops:
logger.warning(f"No crop coordinates for camera {camera}")
continue
return idx, None
norm_x1, norm_y1, norm_x2, norm_y2 = camera_crops[camera]
@@ -516,7 +550,7 @@ def _extract_keyframes(
.get()
)
except Exception:
continue
return idx, None
relative_time = timestamp - recording.start_time
@@ -530,38 +564,57 @@ def _extract_keyframes(
height=None,
)
if image_data:
nparr = np.frombuffer(image_data, np.uint8)
img = cv2.imdecode(nparr, cv2.IMREAD_COLOR)
if not image_data:
return idx, None
if img is not None:
height, width = img.shape[:2]
nparr = np.frombuffer(image_data, np.uint8)
img = cv2.imdecode(nparr, cv2.IMREAD_COLOR)
x1 = int(norm_x1 * width)
y1 = int(norm_y1 * height)
x2 = int(norm_x2 * width)
y2 = int(norm_y2 * height)
if img is None:
return idx, None
x1_clipped = max(0, min(x1, width))
y1_clipped = max(0, min(y1, height))
x2_clipped = max(0, min(x2, width))
y2_clipped = max(0, min(y2, height))
height, width = img.shape[:2]
if x2_clipped > x1_clipped and y2_clipped > y1_clipped:
cropped = img[y1_clipped:y2_clipped, x1_clipped:x2_clipped]
resized = cv2.resize(cropped, (224, 224))
x1 = int(norm_x1 * width)
y1 = int(norm_y1 * height)
x2 = int(norm_x2 * width)
y2 = int(norm_y2 * height)
output_path = os.path.join(output_dir, f"frame_{idx:04d}.jpg")
cv2.imwrite(output_path, resized)
keyframe_paths.append(output_path)
x1_clipped = max(0, min(x1, width))
y1_clipped = max(0, min(y1, height))
x2_clipped = max(0, min(x2, width))
y2_clipped = max(0, min(y2, height))
if x2_clipped <= x1_clipped or y2_clipped <= y1_clipped:
return idx, None
cropped = img[y1_clipped:y2_clipped, x1_clipped:x2_clipped]
resized = cv2.resize(cropped, (224, 224))
output_path = os.path.join(output_dir, f"frame_{idx:04d}.jpg")
cv2.imwrite(output_path, resized)
return idx, output_path
except Exception as e:
logger.debug(
f"Failed to extract frame from {recording.path} at {relative_time}s: {e}"
)
continue
return idx, None
return keyframe_paths
keyframes_with_index: list[tuple[int, str]] = []
with ThreadPoolExecutor(max_workers=max_workers) as executor:
future_to_idx = {
executor.submit(_process_timestamp, idx, ts_info): idx
for idx, ts_info in enumerate(timestamps)
}
for future in as_completed(future_to_idx):
_, path = future.result()
if path:
keyframes_with_index.append((future_to_idx[future], path))
keyframes_with_index.sort(key=lambda item: item[0])
return [path for _, path in keyframes_with_index]
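Since _process_timestamp returns its own index and the results are sorted before returning, an equivalent formulation could use executor.map, which yields results in submission order and removes the index bookkeeping entirely; as_completed only buys something when results are acted on as they finish, which is not the case here. A sketch of the alternative under the same assumptions:
from concurrent.futures import ThreadPoolExecutor

# executor.map preserves input order, so no sorting or index tracking is needed.
with ThreadPoolExecutor(max_workers=max_workers) as executor:
    results = executor.map(_process_timestamp, range(len(timestamps)), timestamps)
    keyframe_paths = [path for _, path in results if path]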
def _select_distinct_images(
@@ -660,7 +713,6 @@ def collect_object_classification_examples(
Args:
model_name: Name of the classification model
label: Object label to collect (e.g., "person", "car")
cameras: List of camera names to collect examples from
"""
dataset_dir = os.path.join(CLIPS_DIR, model_name, "dataset")
temp_dir = os.path.join(dataset_dir, "temp")
