* improve chip tooltip display
- use formatList to use i18n separators instead of commas
- ensure the correct event type is used so sublabels are not run through normalization
- remove smart-capitalization classes as translated labels use i18n (which includes capitalization)
- give icons an optional key so that the console doesn't complain about duplication when rendering
* Add grace period for recording segment checks to prevent spurious ffmpeg restarts
* add admin precedence to proxy role_map resolution to prevent downgrade
* clean up
* formatting
* work around radix pointer events issue when dialog is opened from drawer
fixes https://github.com/blakeblackshear/frigate/discussions/21940
* prevent console warnings about missing titles and descriptions
make these invisible with sr-only
* remove duplicate language
* Adjust handling for device sizes
* Cleanup
---------
Co-authored-by: Nicolas Mowen <nickmowen213@gmail.com>
* fix display of custom sublabels in review item chip
use "manual" as the type so it's not run through translation and normalization, which produced "Josh S Car" instead of "Josh's Car"
* use css instead of js for reviewed button hover state in filmstrip
* Update installation.md for Raspberry Pi and Hailo
Updated Hailo installation instructions to cover both Bookworm and Trixie OS on Raspberry Pi.
Referenced discussions: #21177, #20621, #20062, #19531
* Update user_installation.sh for Raspberry Pi (Bookworm and Trixie)
Simplified and improved the user installation script for Hailo to support Raspberry Pi OS Bookworm, Trixie, and x86 platforms.
Referenced discussions: #21177, #20621, #20062, #19531
* Update installation.md
* Update user_installation.sh
* Update installation.md
* Update installation.md
Added optional fix for PCIe descriptor page size error.
Related discussion: #19481
* Update installation.md
Changed kernel driver version check from modinfo to /sys/module for correct post-reboot output
Translated using Weblate (Serbian); incremental translation progress across multiple components.
Co-authored-by: Aleksandar Jevremovic <aleksandar@jevremovic.org>
Co-authored-by: Hosted Weblate <hosted@weblate.org>
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/audio/sr/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/common/sr/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-auth/sr/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-camera/sr/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-dialog/sr/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-filter/sr/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-player/sr/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/objects/sr/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-classificationmodel/sr/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-configeditor/sr/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-events/sr/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-explore/sr/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-exports/sr/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-facelibrary/sr/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-live/sr/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-search/sr/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-settings/sr/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-system/sr/
* Adjust title prompt to have less rigidity
* Improve motion boxes handling for features that don't require motion
* Improve handling of classes starting with digits
* Improve vehicle nuance
* tweak lpr docs
* Improve grammar
* Don't allow # in face name
* add password requirements to new user dialog
* change password requirements
* Cleanup
---------
Co-authored-by: Josh Hawkins <32435876+hawkeye217@users.noreply.github.com>
* version update
* Restrict go2rtc exec sources by default (#21543)
* Restrict go2rtc exec sources by default
* add docs
* check for addon value too
---------
Co-authored-by: Nicolas Mowen <nickmowen213@gmail.com>
* Add 640x640 Intel NPU stats
* use css instead of js for reviewed button hover state in filmstrip
* update copilot instructions to copy HA's format
* Set json schema for genai
---------
Co-authored-by: Nicolas Mowen <nickmowen213@gmail.com>
* use default stable api version for gemini genai client
* update gemini docs
* remove outdated genai.md and update correct file
* Classification fixes
* Mutate when a date is selected and marked as reviewed
---------
Co-authored-by: Nicolas Mowen <nickmowen213@gmail.com>
* tracking details tweaks
- fix 4:3 layout
- get and use aspect of record stream if different from detect stream
* aspect ratio docs tip
* spacing
* fix
* i18n fix
* additional logs on ffmpeg exit
* improve no camera view
instead of showing an "add camera" message, show a specific message for empty camera groups when frigate already has cameras added
* add note about separate onvif accounts in some camera firmware
* clarify review summary report docs
* review settings tweaks
- remove horizontal divider
- update description language for switches
- keep save button disabled until review classification settings change
* use correct Toaster component from shadcn
* clarify support for intel b-series (battlemage) gpus
* add clarifying comment to dummy camera docs
* misc triggers tweaks
i18n fixes
fix toaster color
fix clicking on labels selecting incorrect checkbox
* update copilot instructions
* lpr docs tweaks
* add retry params to gemini
* i18n fix
* ensure users only see recognized plates from accessible cameras in explore
* ensure all zone filters are converted to pixels
zone-level filters were never converted from percentage area to pixels. RuntimeFilterConfig was only applied to filters at the camera level, not zone.filters.
Fixes https://github.com/blakeblackshear/frigate/discussions/21694
* add test for percentage based zone filters
* use export id for key instead of name
* update gemini docs
* fix(recording): handle unexpected filenames in cache maintainer to prevent crash
* test(recording): add test for maintainer cache file parsing
* Prevent log spam from unexpected cache files
Addresses PR review feedback: Add deduplication to prevent warning
messages from being logged repeatedly for the same unexpected file
in the cache directory. Each unexpected filename is only logged once
per RecordingMaintainer instance lifecycle.
Also adds test to verify warning is only emitted once per filename.
* Fix code formatting for test_maintainer.py
* fixes + ruff
* Fix jetson stats reading
* Return result
* Avoid unknown class for cover image
* fix double encoding of passwords in camera wizard
* formatting
* empty homekit config fixes
* add locks to jina v1 embeddings
protect tokenizer and feature extractor in jina_v1_embedding with per-instance thread lock to avoid the "Already borrowed" RuntimeError during concurrent tokenization
* Capitalize correctly
* replace deprecated google-generativeai with google-genai
update gemini genai provider with new calls from SDK
provider_options specifies any http options
suppress unneeded info logging
* fix attribute area on detail stream hover
---------
Co-authored-by: Josh Hawkins <32435876+hawkeye217@users.noreply.github.com>
Translated using Weblate (Persian); most components now fully translated.
Co-authored-by: Hosted Weblate <hosted@weblate.org>
Co-authored-by: حمید ملک محمدی <hmmftg@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/audio/fa/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/common/fa/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-auth/fa/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-camera/fa/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-dialog/fa/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-filter/fa/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-icons/fa/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-player/fa/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/objects/fa/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-classificationmodel/fa/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-configeditor/fa/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-events/fa/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-explore/fa/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-exports/fa/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-facelibrary/fa/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-live/fa/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-recording/fa/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-search/fa/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-settings/fa/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-system/fa/
Translated using Weblate (Croatian); incremental translation progress across multiple components.
Co-authored-by: Hosted Weblate <hosted@weblate.org>
Co-authored-by: stipe-jurkovic <sjurko00@fesb.hr>
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/audio/hr/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/common/hr/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-auth/hr/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-camera/hr/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-dialog/hr/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-filter/hr/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-player/hr/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/objects/hr/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-classificationmodel/hr/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-configeditor/hr/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-events/hr/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-explore/hr/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-exports/hr/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-facelibrary/hr/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-live/hr/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-search/hr/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-settings/hr/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-system/hr/
* Strip model name before training
* Handle options file for go2rtc option
* Make reviewed optional and add null to API call
* Send reviewed for dashboard
* Allow setting context size for openai compatible endpoints
* push empty go2rtc config to avoid homekit error in log
* Add option to set runtime options for LLM providers
* Docs
---------
Co-authored-by: Josh Hawkins <32435876+hawkeye217@users.noreply.github.com>
Never write strings in the frontend directly, always write to and reference the relevant translations file.
Always conform new and refactored code to the existing coding style in the project.
# GitHub Copilot Instructions for Frigate NVR
This document provides coding guidelines and best practices for contributing to Frigate NVR, a complete and local NVR designed for Home Assistant with AI object detection.
## Project Overview
Frigate NVR is a realtime object detection system for IP cameras that uses:
- **Backend**: Python 3.13+ with FastAPI, OpenCV, TensorFlow/ONNX
- **Frontend**: React with TypeScript, Vite, TailwindCSS
- **Architecture**: Multiprocessing design with ZMQ and MQTT communication
- **Focus**: Minimal resource usage with maximum performance
## Code Review Guidelines
When reviewing code, do NOT comment on:
- Missing imports - Static analysis tooling catches these
Constructing secure passwords and managing them properly is important. Frigate requires a minimum length of 12 characters. For guidance on password standards see [NIST SP 800-63B](https://pages.nist.gov/800-63-3/sp800-63b.html). To learn what makes a password truly secure, read this [article](https://medium.com/peerio/how-to-build-a-billion-dollar-password-3d92568d9277).
## Login failure rate limiting
In order to limit the risk of brute force attacks, rate limiting is available for login failures. This is implemented with SlowApi, and the string notation for valid values is available in [the documentation](https://limits.readthedocs.io/en/stable/quickstart.html#examples).
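As a concrete example, Frigate exposes this notation through the `auth.failed_login_rate_limit` option (the limit values below are illustrative, not recommendations):

```yaml
auth:
  failed_login_rate_limit: "1/second;5/minute;20/hour"
```

With this setting, a client that exceeds one failed login per second, five per minute, or twenty per hour is temporarily blocked.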
In this example:
- If no mapping matches, Frigate falls back to `default_role` if configured.
- If `role_map` is not defined, Frigate assumes the role header directly contains `admin`, `viewer`, or a custom role name.
**Note on matching semantics:**
- Admin precedence: if the `admin` mapping matches, Frigate resolves the session to `admin` to avoid accidental downgrade when a user belongs to multiple groups (for example both `admin` and `viewer` groups).
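A sketch of what this looks like in practice (header and group names are illustrative):

```yaml
proxy:
  header_map:
    user: x-forwarded-user
    role: x-forwarded-groups
    role_map:
      admin:
        - sysadmins
      viewer:
        - camera-viewers
```

With admin precedence, a user whose role header contains both `sysadmins` and `camera-viewers` resolves to `admin` rather than being downgraded to `viewer`.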
If the ONVIF connection is successful, PTZ controls will be available in the camera's WebUI.
:::note
Some cameras use a separate ONVIF/service account that is distinct from the device administrator credentials. If ONVIF authentication fails with the admin account, try creating or using an ONVIF/service user in the camera's firmware. Refer to your camera manufacturer's documentation for more.
:::
:::tip
If your ONVIF camera does not require authentication credentials, you may still need to specify an empty string for `user` and `password`, eg: `user: ""` and `password: ""`.
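For example (the camera name, address, and port are placeholders):

```yaml
cameras:
  back_yard:
    onvif:
      host: 192.168.1.10
      port: 8000
      user: ""
      password: ""
```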
The FeatureList on the [ONVIF Conformant Products Database](https://www.onvif.or
| Brand or specific camera | PTZ Controls | Autotracking | Notes |
Generative AI can be used to automatically generate descriptive text based on the thumbnails of your tracked objects. This helps with [Semantic Search](/configuration/semantic_search) in Frigate to provide more context about your tracked objects. Descriptions are accessed via the _Explore_ view in the Frigate UI by clicking on a tracked object's thumbnail.
Requests for a description are sent to your AI provider automatically at the end of the tracked object's lifecycle, or can optionally be sent earlier after a number of significantly changed frames, for example for use in more real-time notifications. Descriptions can also be regenerated manually via the Frigate UI. Note that if you manually enter a description for a tracked object before the end of its lifecycle, it will be overwritten by the generated response.
## Configuration
Generative AI can be enabled for all cameras or only for specific cameras. If GenAI is disabled for a camera, you can still manually generate descriptions for events using the HTTP API. There are currently three native providers available to integrate with Frigate; other providers that support the OpenAI standard API can also be used (see the OpenAI section below).
To use Generative AI, you must define a single provider at the global level of your Frigate configuration. If the provider you choose requires an API key, you may either directly paste it in your configuration, or store it in an environment variable prefixed with `FRIGATE_`.
```yaml
genai:
  provider: gemini
  api_key: "{FRIGATE_GEMINI_API_KEY}"
  model: gemini-2.0-flash

cameras:
  front_camera:
    objects:
      genai:
        enabled: True # <- enable GenAI for your front camera
        use_snapshot: True
        objects:
          - person
        required_zones:
          - steps
  indoor_camera:
    objects:
      genai:
        enabled: False # <- disable GenAI for your indoor camera
```
By default, descriptions will be generated for all tracked objects and all zones. But you can also optionally specify `objects` and `required_zones` to only generate descriptions for certain tracked objects or zones.
Optionally, you can generate the description using a snapshot (if enabled) by setting `use_snapshot` to `True`. By default, this is set to `False`, which sends the uncompressed images from the `detect` stream collected over the object's lifetime to the model. Once the object lifecycle ends, only a single compressed and cropped thumbnail is saved with the tracked object. Using a snapshot might be useful when you want to _regenerate_ a tracked object's description, as it provides the AI with a higher-quality image (typically downscaled by the AI itself) than the cropped/compressed thumbnail. The trade-off is that only a single image is sent to your provider, which limits the model's ability to determine object movement or direction.
Generative AI can also be toggled dynamically for a camera via MQTT with the topic `frigate/<camera_name>/object_descriptions/set`. See the [MQTT documentation](/integrations/mqtt/#frigatecamera_nameobjectdescriptionsset).
## Ollama
:::warning
Using Ollama on CPU is not recommended; high inference times make using Generative AI impractical.
:::
[Ollama](https://ollama.com/) allows you to self-host large language models and keep everything running locally. For best performance, it is highly recommended to host this server on a machine with an Nvidia graphics card or an Apple silicon Mac.
Most of the 7b parameter 4-bit vision models will fit inside 8GB of VRAM. There is also a [Docker container](https://hub.docker.com/r/ollama/ollama) available.
Parallel requests also come with some caveats. You will need to set `OLLAMA_NUM_PARALLEL=1` and choose `OLLAMA_MAX_QUEUE` and `OLLAMA_MAX_LOADED_MODELS` values that are appropriate for your hardware and preferences. See the [Ollama documentation](https://docs.ollama.com/faq#how-does-ollama-handle-concurrent-requests).
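A minimal sketch of these settings as a Docker Compose fragment (the queue value is illustrative; the environment variables are documented in the Ollama FAQ):

```yaml
services:
  ollama:
    image: ollama/ollama
    ports:
      - "11434:11434"
    volumes:
      - ollama:/root/.ollama
    environment:
      OLLAMA_NUM_PARALLEL: "1"      # serve one request at a time per model
      OLLAMA_MAX_QUEUE: "64"        # requests queued beyond this are rejected
      OLLAMA_MAX_LOADED_MODELS: "1" # keep a single model resident in memory
volumes:
  ollama:
```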
### Model Types: Instruct vs Thinking
Most vision-language models are available as **instruct** models, which are fine-tuned to follow instructions and respond concisely to prompts. However, some models (such as certain Qwen-VL or minigpt variants) offer both **instruct** and **thinking** versions.
- **Instruct models** are always recommended for use with Frigate. These models generate direct, relevant, actionable descriptions that best fit Frigate's object and event summary use case.
- **Thinking models** are fine-tuned for more free-form, open-ended, and speculative outputs, which are typically not concise and may not provide the practical summaries Frigate expects. For this reason, Frigate does **not** recommend or support using thinking models.
Some models are labeled as **hybrid** (capable of both thinking and instruct tasks). In these cases, Frigate will always use instruct-style prompts and specifically disables thinking-mode behaviors to ensure concise, useful responses.
**Recommendation:**
Always select the `-instruct` (or documented instruct-tagged) variant of any model you use in your Frigate configuration. If in doubt, refer to your model provider's documentation or model library for guidance on the correct variant to use.
### Supported Models
You must use a vision capable model with Frigate. Current model variants can be found [in their model library](https://ollama.com/search?c=vision). Note that Frigate will not automatically download the model you specify in your config; you must first download the model to your local instance of Ollama, e.g. by running `ollama pull qwen3-vl:2b-instruct` on your Ollama server/Docker container. The model specified in Frigate's config must match the downloaded model tag.
:::note
You should have at least 8 GB of RAM available (or VRAM if running on GPU) to run the 7B models, 16 GB to run the 13B models, and 32 GB to run the 33B models.
:::
#### Ollama Cloud models
Ollama also supports [cloud models](https://ollama.com/cloud), where your local Ollama instance handles requests from Frigate, but model inference is performed in the cloud. Set up Ollama locally, sign in with your Ollama account, and specify the cloud model name in your Frigate config. For more details, see the Ollama cloud model [docs](https://docs.ollama.com/cloud).
### Configuration
```yaml
genai:
  provider: ollama
  base_url: http://localhost:11434
  model: qwen3-vl:4b
```
## Google Gemini
Google Gemini has a free tier allowing [15 queries per minute](https://ai.google.dev/pricing) to the API, which is more than sufficient for standard Frigate usage.
### Supported Models
You must use a vision capable model with Frigate. Current model variants can be found [in their documentation](https://ai.google.dev/gemini-api/docs/models/gemini).
### Get API Key
To start using Gemini, you must first get an API key from [Google AI Studio](https://aistudio.google.com).
1. Accept the Terms of Service
2. Click "Get API Key" from the right hand navigation
3. Click "Create API key in new project"
4. Copy the API key for use in your config
### Configuration
```yaml
genai:
  provider: gemini
  api_key: "{FRIGATE_GEMINI_API_KEY}"
  model: gemini-2.0-flash
```
:::note
To use a different Gemini-compatible API endpoint, set the `GEMINI_BASE_URL` environment variable to your provider's API URL.
:::
## OpenAI
OpenAI does not have a free tier for their API. With the release of gpt-4o, pricing has been reduced and each generation should cost fractions of a cent if you choose to go this route.
### Supported Models
You must use a vision capable model with Frigate. Current model variants can be found [in their documentation](https://platform.openai.com/docs/models).
### Get API Key
To start using OpenAI, you must first [create an API key](https://platform.openai.com/api-keys) and [configure billing](https://platform.openai.com/settings/organization/billing/overview).
### Configuration
```yaml
genai:
  provider: openai
  api_key: "{FRIGATE_OPENAI_API_KEY}"
  model: gpt-4o
```
:::note
To use a different OpenAI-compatible API endpoint, set the `OPENAI_BASE_URL` environment variable to your provider's API URL.
:::
## Azure OpenAI
Microsoft offers several vision models through Azure OpenAI. A subscription is required.
### Supported Models
You must use a vision capable model with Frigate. Current model variants can be found [in their documentation](https://learn.microsoft.com/en-us/azure/ai-services/openai/concepts/models).
### Create Resource and Get API Key
To start using Azure OpenAI, you must first [create a resource](https://learn.microsoft.com/azure/cognitive-services/openai/how-to/create-resource?pivots=web-portal#create-a-resource). You'll need your API key, model name, and resource URL, which must include the `api-version` parameter (see the example below).
Frigate's thumbnail search excels at identifying specific details about tracked objects – for example, using an "image caption" approach to find a "person wearing a yellow vest," "a white dog running across the lawn," or "a red car on a residential street." To enhance this further, Frigate’s default prompts are designed to ask your AI provider about the intent behind the object's actions, rather than just describing its appearance.
While generating simple descriptions of detected objects is useful, understanding intent provides a deeper layer of insight. Instead of just recognizing "what" is in a scene, Frigate’s default prompts aim to infer "why" it might be there or "what" it could do next. Descriptions tell you what’s happening, but intent gives context. For instance, a person walking toward a door might seem like a visitor, but if they’re moving quickly after hours, you can infer a potential break-in attempt. Detecting a person loitering near a door at night can trigger an alert sooner than simply noting "a person standing by the door," helping you respond based on the situation’s context.
### Using GenAI for notifications
Frigate provides an [MQTT topic](/integrations/mqtt), `frigate/tracked_object_update`, that is updated with a JSON payload containing `event_id` and `description` when your AI provider returns a description for a tracked object. This description could be used directly in notifications, such as sending alerts to your phone or making audio announcements. If additional details from the tracked object are needed, you can query the [HTTP API](/integrations/api/event-events-event-id-get) using the `event_id`, eg: `http://frigate_ip:5000/api/events/<event_id>`.
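As a rough sketch (not Frigate code), a subscriber could parse that payload and build the event-details URL like this; the host and port are placeholders for your deployment:

```python
import json

def handle_tracked_object_update(payload: bytes) -> str:
    """Parse a frigate/tracked_object_update payload and build a details URL."""
    data = json.loads(payload)
    event_id = data["event_id"]
    description = data["description"]
    # Placeholder host/port; point this at your Frigate instance.
    details_url = f"http://frigate_ip:5000/api/events/{event_id}"
    return f"{description} ({details_url})"

message = handle_tracked_object_update(
    b'{"event_id": "1700000000.123-abcd", "description": "A person approaches the door."}'
)
```

The same parsing applies whether the message arrives through an MQTT client library or a Home Assistant automation.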
To receive notifications before an object ceases to be tracked, an additional send trigger, `after_significant_updates`, can be configured.
```yaml
genai:
  send_triggers:
    tracked_object_end: true # default
    after_significant_updates: 3 # how many updates to a tracked object before we should send an image
```
## Custom Prompts
Frigate sends multiple frames from the tracked object along with a prompt to your Generative AI provider asking it to generate a description. The default prompt is as follows:
```
Analyze the sequence of images containing the {label}. Focus on the likely intent or behavior of the {label} based on its actions and movement, rather than describing its appearance or the surroundings. Consider what the {label} is doing, why, and what it might do next.
```
:::tip
Prompts can use variable replacements `{label}`, `{sub_label}`, and `{camera}` to substitute information from the tracked object as part of the prompt.
:::
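Frigate performs this substitution internally; as a simple illustration of how the placeholders expand:

```python
# The prompt template uses Frigate's replacement variables.
prompt = "Analyze the {label} in these images from the {camera} security camera."

# Values come from the tracked object at generation time (illustrative only).
filled = prompt.format(label="person", camera="front_door")
# filled == "Analyze the person in these images from the front_door security camera."
```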
You are also able to define custom prompts in your configuration.
```yaml
genai:
provider:ollama
base_url:http://localhost:11434
model:qwen3-vl:8b-instruct
objects:
prompt:"Analyze the {label} in these images from the {camera} security camera. Focus on the actions, behavior, and potential intent of the {label}, rather than just describing its appearance."
object_prompts:
person:"Examine the main person in these images. What are they doing and what might their actions suggest about their intent (e.g., approaching a door, leaving an area, standing still)? Do not describe the surroundings or static details."
car:"Observe the primary vehicle in these images. Focus on its movement, direction, or purpose (e.g., parking, approaching, circling). If it's a delivery vehicle, mention the company."
```
Prompts can also be overridden at the camera level to provide a more detailed prompt to the model about your specific camera, if you desire.
```yaml
cameras:
  front_door:
    objects:
      genai:
        enabled: True
        use_snapshot: True
        prompt: "Analyze the {label} in these images from the {camera} security camera at the front door. Focus on the actions and potential intent of the {label}."
        object_prompts:
          person: "Examine the person in these images. What are they doing, and how might their actions suggest their purpose (e.g., delivering something, approaching, leaving)? If they are carrying or interacting with a package, include details about its source or destination."
          cat: "Observe the cat in these images. Focus on its movement and intent (e.g., wandering, hunting, interacting with objects). If the cat is near the flower pots or engaging in any specific actions, mention it."
        objects:
          - person
          - cat
        required_zones:
          - steps
```
### Experiment with prompts
Many providers also have a public facing chat interface for their models. Download a couple of different thumbnails or snapshots from Frigate and try new things in the playground to get descriptions to your liking before updating the prompt in Frigate.
- OpenAI - [ChatGPT](https://chatgpt.com)
- Gemini - [Google AI Studio](https://aistudio.google.com)
:::warning
Using Ollama on CPU is not recommended, high inference times make using Generative AI impractical.
:::
[Ollama](https://ollama.com/) allows you to self-host large language models and keep everything running locally. It is highly recommended to host this server on a machine with an Nvidia graphics card, or on a Apple silicon Mac for best performance.
Most of the 7b parameter 4-bit vision models will fit inside 8GB of VRAM. There is also a [Docker container](https://hub.docker.com/r/ollama/ollama) available.
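As a back-of-envelope check on that claim (weights only; the KV cache and activations add more):

```python
params = 7e9               # 7B parameters
bytes_per_param = 0.5      # 4-bit quantization
weights_gib = params * bytes_per_param / 1024**3
# Roughly 3.3 GiB for the weights alone, leaving headroom in 8 GB of VRAM
# for the KV cache, activations, and the vision encoder.
```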
Parallel requests also come with some caveats. You will need to set `OLLAMA_NUM_PARALLEL=1` and choose a `OLLAMA_MAX_QUEUE` and `OLLAMA_MAX_LOADED_MODELS` values that are appropriate for your hardware and preferences. See the [Ollama documentation](https://docs.ollama.com/faq#how-does-ollama-handle-concurrent-requests).
### Model Types: Instruct vs Thinking
Most vision-language models are available as **instruct** models, which are fine-tuned to follow instructions and respond concisely to prompts. However, some models (such as certain Qwen-VL or minigpt variants) offer both **instruct** and **thinking** versions.
- **Instruct models** are always recommended for use with Frigate. These models generate direct, relevant, actionable descriptions that best fit Frigate's object and event summary use case.
- **Thinking models** are fine-tuned for more free-form, open-ended, and speculative outputs, which are typically not concise and may not provide the practical summaries Frigate expects. For this reason, Frigate does **not** recommend or support using thinking models.
### Supported Models
| Model | Notes |
| ----- | ----- |
| `Intern3.5VL` | Relatively fast with good vision comprehension |
| `gemma3` | Strong frame-to-frame understanding, slower inference times |
| `qwen2.5-vl` | Fast but capable model with good vision comprehension |
### Configuration
```yaml
genai:
  provider: ollama
  base_url: http://localhost:11434
  model: qwen3-vl:4b
  provider_options: # other Ollama client options can be defined
    keep_alive: -1
    options:
      num_ctx: 8192 # make sure the context matches other services that are using ollama
```
## Google Gemini
Google Gemini has a [free tier](https://ai.google.dev/pricing) for the API, however the limits may not be sufficient for standard Frigate usage. Choose a plan appropriate for your installation.
### Supported Models
You must use a vision capable model with Frigate. Current model variants can be found [in their documentation](https://ai.google.dev/gemini-api/docs/models/gemini).
### Get API Key
### Configuration
```yaml
genai:
  provider: gemini
  api_key: "{FRIGATE_GEMINI_API_KEY}"
  model: gemini-2.5-flash
```
:::note
To use a different Gemini-compatible API endpoint, set the `provider_options` with the `base_url` key to your provider's API URL. For example:
```yaml
genai:
  provider: gemini
  ...
  provider_options:
    base_url: https://...
```
Other HTTP options are available, see the [python-genai documentation](https://github.com/googleapis/python-genai).
:::
## OpenAI
OpenAI does not have a free tier for their API. With the release of gpt-4o, pricing has been reduced and each generation should cost fractions of a cent if you choose to go this route.
### Supported Models
You must use a vision capable model with Frigate. Current model variants can be found [in their documentation](https://platform.openai.com/docs/models).
### Get API Key
:::note
To use a different OpenAI-compatible API endpoint, set the `OPENAI_BASE_URL` environment variable to your provider's API URL.
:::
:::tip
For OpenAI-compatible servers (such as llama.cpp) that don't expose the configured context size in the API response, you can manually specify the context size in `provider_options`:
```yaml
genai:
  provider: openai
  base_url: http://your-llama-server
  model: your-model-name
  provider_options:
    context_size: 8192 # Specify the configured context size
```
This ensures Frigate uses the correct context window size when generating prompts.
:::
## Azure OpenAI
Microsoft offers several vision models through Azure OpenAI. A subscription is required.
### Supported Models
You must use a vision capable model with Frigate. Current model variants can be found [in their documentation](https://learn.microsoft.com/en-us/azure/ai-services/openai/concepts/models).
### Create Resource and Get API Key
To start using Azure OpenAI, you must first [create a resource](https://learn.microsoft.com/azure/cognitive-services/openai/how-to/create-resource?pivots=web-portal#create-a-resource). You'll need your API key, model name, and resource URL, which must include the `api-version` parameter (see the example below).
Along with individual review item summaries, Generative AI can also produce a single report of review items from all cameras marked "suspicious" over a specified time period (for example, a daily summary of suspicious activity while you're on vacation).
### Requesting Reports Programmatically
Review reports can be requested via the [API](/integrations/api/generate-review-summary-review-summarize-start-start-ts-end-end-ts-post) by sending a POST request to `/api/review/summarize/start/{start_ts}/end/{end_ts}` with Unix timestamps.
For Home Assistant users, there is a built-in service (`frigate.review_summarize`) that makes it easy to request review reports as part of automations or scripts. This allows you to automatically generate daily summaries, vacation reports, or custom time period reports based on your specific needs.
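A minimal sketch of building such a request with Unix timestamps (the base URL is a placeholder for your Frigate instance):

```python
from datetime import datetime, timedelta, timezone

def summarize_url(base: str, start: datetime, end: datetime) -> str:
    # The endpoint expects integer Unix timestamps in the path.
    return (
        f"{base}/api/review/summarize/start/"
        f"{int(start.timestamp())}/end/{int(end.timestamp())}"
    )

end = datetime(2024, 1, 2, tzinfo=timezone.utc)
start = end - timedelta(days=1)  # previous 24 hours
url = summarize_url("http://frigate_ip:5000", start, end)
# url == "http://frigate_ip:5000/api/review/summarize/start/1704067200/end/1704153600"
```

Send a POST request to that URL; in Home Assistant, the `frigate.review_summarize` service wraps the same call.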
- Default: `1000` pixels. Note: this is intentionally set very low as it is an _area_ measurement (length x width). For reference, 1000 pixels represents a ~32x32 pixel square in your camera image.
- Depending on the resolution of your camera's `detect` stream, you can increase this value to ignore small or distant plates.
- **`device`**: Device to use to run license plate detection _and_ recognition models.
- Default: `None`
- This is auto-selected by Frigate and can be `CPU`, `GPU`, or the GPU's device number. For users without a model that detects license plates natively, using a GPU may increase performance of the YOLOv9 license plate detector model. See the [Hardware Accelerated Enrichments](/configuration/hardware_acceleration_enrichments.md) documentation. However, for users who run a model that detects `license_plate` natively, there is little to no performance gain reported with running LPR on GPU compared to the CPU.
- **`model_size`**: The size of the model used to identify regions of text on plates.
- Default: `small`
- This can be `small` or `large`.
```yaml
lpr:
enabled: true
device: CPU
debug_save_plates: true
```
If you are not using a model that natively detects `license_plate` or you are using dedicated LPR camera mode, only a _motion mask_ over your text is required.
### I see "Error running ... model" in my logs, or my inference time is very high. How can I fix this?
This usually happens when your GPU is unable to compile or use one of the LPR models. Set your `device` to `CPU` and try again. GPU acceleration only provides a slight performance increase, and the models are lightweight enough to run without issue on most CPUs.
:::warning
The `exec:`, `echo:`, and `expr:` sources are disabled by default for security. You must set `GO2RTC_ALLOW_ARBITRARY_EXEC=true` to use them. See [Security: Restricted Stream Sources](#security-restricted-stream-sources) for more information.
:::
NOTE: The output will need to be passed with two curly braces `{{output}}`
- **Stream Viewing**: This stream will be rebroadcast as is to Home Assistant for viewing with the stream component. Setting this resolution too high will use significant bandwidth when viewing streams in Home Assistant, and they may not load reliably over slower connections.
:::tip
For the best experience in Frigate's UI, configure your camera so that the detection and recording streams use the same aspect ratio. For example, if your main stream is 3840x2160 (16:9), set your substream to 640x360 (also 16:9) instead of 640x480 (4:3). While not strictly required, matching aspect ratios helps ensure seamless live stream display and preview/recordings playback.
:::
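Aspect ratios can be checked by reducing width and height by their greatest common divisor; this is a convenience check, not something Frigate requires you to run:

```python
from math import gcd

def aspect_ratio(width: int, height: int) -> tuple[int, int]:
    """Reduce a resolution to its simplest width:height ratio."""
    g = gcd(width, height)
    return (width // g, height // g)

aspect_ratio(3840, 2160)  # (16, 9): main stream
aspect_ratio(640, 360)    # (16, 9): matching substream
aspect_ratio(640, 480)    # (4, 3): mismatched substream
```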
### Choosing a detect resolution
The ideal resolution for detection is one where the objects you want to detect fit inside the dimensions of the model used by Frigate (320x320). Frigate does not pass the entire camera frame to object detection. It will crop an area of motion from the full frame and look in that portion of the frame. If the area being inspected is larger than 320x320, Frigate must resize it before running object detection. Higher resolutions do not improve the detection accuracy because the additional detail is lost in the resize. Below you can see a reference for how large a 320x320 area is against common resolutions.
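As a simplified illustration of that behavior (not Frigate's actual cropping code): a motion region smaller than the model input fits without loss, while a larger one must be downscaled before detection.

```python
MODEL_SIZE = 320  # dimensions of the model input used by Frigate

def inspect_region(x1: int, y1: int, x2: int, y2: int) -> tuple[int, bool]:
    """Return the side of the square region inspected and whether a
    lossy resize is needed before object detection (illustrative only)."""
    side = max(x2 - x1, y2 - y1, MODEL_SIZE)
    return side, side > MODEL_SIZE

inspect_region(0, 0, 100, 80)   # (320, False): fits the model input as-is
inspect_region(0, 0, 500, 400)  # (500, True): must be downscaled, detail lost
```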
| Beelink EQ13 (<a href="https://amzn.to/4jn2qVr" target="_blank" rel="nofollow noopener sponsored">Amazon</a>) | Can run object detection on several 1080p cameras with low-medium activity | Dual gigabit NICs for easy isolated camera network. |
| Intel i3-1220P ([Amazon](https://www.amazon.com/Beelink-i3-1220P-Computer-Display-Gigabit/dp/B0DDCKT9YP)) | Can handle a large number of 1080p cameras with high activity | |
| Intel 125H ([Amazon](https://www.amazon.com/MINISFORUM-Pro-125H-Barebone-Computer-HDMI2-1/dp/B0FH21FSZM)) | Can handle a significant number of 1080p cameras with high activity | Includes NPU for more efficient detection in 0.17+ |
## Detectors
**Most Hardware**
- [Hailo](#hailo-8): The Hailo8 and Hailo8L AI Acceleration modules are available in m.2 format, with a HAT for RPi devices, offering a wide range of compatibility with devices.
- [Supports many model architectures](../../configuration/object_detectors#configuration)
- Runs best with tiny or small size models
- [Google Coral EdgeTPU](#google-coral-tpu): The Google Coral EdgeTPU is available in USB and m.2 format allowing for a wide range of compatibility with devices.
- [Supports primarily ssdlite and mobilenet model architectures](../../configuration/object_detectors#edge-tpu-detector)
- <CommunityBadge /> [MemryX](#memryx-mx3): The MX3 M.2 accelerator module is available in m.2 format allowing for a wide range of compatibility with devices.
**Nvidia**
- [TensorRT](#tensorrt---nvidia-gpu): TensorRT can run on Nvidia GPUs to provide efficient object detection.
- [Supports majority of model architectures via ONNX](../../configuration/object_detectors#onnx-supported-models)
- Runs well with any size models including large
:::note
Intel NPUs have seen [limited success in community deployments](https://github.com/blakeblackshear/frigate/discussions/13248#discussioncomment-12347357), although they remain officially unsupported.
In testing, the NPU delivered performance that was only comparable to — or in some cases worse than — the integrated GPU.
Intel B-series (Battlemage) GPUs are not officially supported with Frigate 0.17, though a user has [provided steps to rebuild the Frigate container](https://github.com/blakeblackshear/frigate/discussions/21257) with support for them.
:::
Inference speeds vary greatly depending on the CPU or GPU used; some known examples:
| Intel N100 | ~ 15 ms | s-320: 30 ms | 320: ~ 25 ms | | Can only run one detector instance |
| Intel N150 | ~ 15 ms | t-320: 16 ms s-320: 24 ms | | | |
| Intel Iris XE | ~ 10 ms | t-320: 6 ms t-640: 14 ms s-320: 8 ms s-640: 16 ms | 320: ~ 10 ms 640: ~ 20 ms | 320-n: 33 ms | |
| Intel NPU | ~ 6 ms | s-320: 11 ms s-640: 30 ms | 320: ~ 14 ms 640: ~ 34 ms | 320-n: 40 ms | |
| Intel Arc A310 | ~ 5 ms | t-320: 7 ms t-640: 11 ms s-320: 8 ms s-640: 15 ms | 320: ~ 8 ms 640: ~ 14 ms | | |
| Intel Arc A380 | ~ 6 ms | | 320: ~ 10 ms 640: ~ 22 ms | 336: 20 ms 448: 27 ms | |
| Intel Arc A750 | ~ 4 ms | | 320: ~ 8 ms | | |
#### Compatibility References:
[NVIDIA TensorRT Support Matrix](https://docs.nvidia.com/deeplearning/tensorrt-rtx/latest/getting-started/support-matrix.html)
[NVIDIA CUDA Compatibility](https://docs.nvidia.com/deploy/cuda-compatibility/index.html)
:::warning
On Raspberry Pi OS **Bookworm**, the kernel includes an older version of the Hailo driver that is incompatible with Frigate. You **must** follow the installation steps below to install the correct driver version, and you **must** disable the built-in kernel driver as described in step 1.
On Raspberry Pi OS **Trixie**, the Hailo driver is no longer shipped with the kernel. It is installed via DKMS, and the conflict described below does not apply. You can simply run the installation script.
:::
1. **Disable the built-in Hailo driver (Raspberry Pi Bookworm OS only)**:
:::note
If you are **not** using a Raspberry Pi with **Bookworm OS**, skip this step and proceed directly to step 2.
If you are using Raspberry Pi with **Trixie OS**, also skip this step and proceed directly to step 2.
:::
First, check if the driver is currently loaded:
```bash
lsmod | grep hailo
```
If it shows `hailo_pci`, unload it:
```bash
sudo modprobe -r hailo_pci
```
Then locate the built-in kernel driver and rename it so it cannot be loaded.
Renaming allows the original driver to be restored later if needed.
First, locate the currently installed kernel module:
```bash
modinfo -n hailo_pci
```
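The rename itself can be done with the path `modinfo` reports; this is a sketch, and keeping the original file under a `.disabled` suffix lets you restore it later:

```shell
# Rename the in-kernel driver so modprobe can no longer find it.
driver_path="$(modinfo -n hailo_pci)"
sudo mv "$driver_path" "${driver_path}.disabled"
```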
Now refresh the kernel module map so the system recognizes the change:
```bash
sudo depmod -a
```
Reboot your Raspberry Pi:
```bash
sudo reboot
```
After rebooting, check that the driver is no longer loaded:
```bash
lsmod | grep hailo
```
This command should return no results.
3. **Run the installation script**:
Download the installation script:
- Download and install the required firmware
- Set up udev rules
4. **Reboot your system**:
After the script completes successfully, reboot to load the firmware:
```bash
sudo reboot
```
5. **Verify the installation**:
After rebooting, verify that the Hailo device is available:
```bash
lsmod | grep hailo_pci
```
Verify the driver version:
```bash
cat /sys/module/hailo_pci/version
```
Verify that the firmware was installed correctly:
```bash
ls -l /lib/firmware/hailo/hailo8_fw.bin
```
**Optional: Fix PCIe descriptor page size error**
If you encounter the following error:
```
[HailoRT] [error] CHECK failed - max_desc_page_size given 16384 is bigger than hw max desc page size 4096
```
Create a configuration file to force the correct descriptor page size:
```bash
echo 'options hailo_pci force_desc_page_size=4096' | sudo tee /etc/modprobe.d/hailo_pci.conf
```
and reboot:
```bash
sudo reboot
```
#### Setup
To set up Frigate, follow the default installation instructions, for example: `ghcr.io/blakeblackshear/frigate:stable`
Log into QNAP, open Container Station. Frigate docker container should be listed under 'Overview' and running. Visit Frigate Web UI by clicking Frigate docker, and then clicking the URL shown at the top of the detail page.
## macOS - Apple Silicon
:::warning
macOS uses port 5000 for its AirPlay Receiver service. If you want to expose port 5000 in Frigate for local app and API access, the port will need to be mapped to another port on the host, e.g. 5001.
Failure to remap port 5000 on the host will result in the WebUI and all API endpoints on port 5000 being unreachable, even if port 5000 is exposed correctly in Docker.
:::
Docker containers on macOS can be orchestrated by either [Docker Desktop](https://docs.docker.com/desktop/setup/install/mac-install/) or [OrbStack](https://orbstack.dev). The difference in inference speeds is negligible; however, CPU usage, power consumption, and container start times will be lower on OrbStack because it is a native Swift application.
To allow Frigate to use the Apple Silicon Neural Engine / Processing Unit (NPU), you must run the [Apple Silicon Detector](../configuration/object_detectors.md#apple-silicon-detector) on the host (outside Docker).
1. Export or copy the clip you want to replay to the Frigate host (e.g., `/media/frigate/` or `debug/clips/`). Depending on what you are looking to debug, it is often helpful to add some "pre-capture" time (where the tracked object is not yet visible) to the clip when exporting.
2. Add the temporary camera to `config/config.yml` (example above). Use a unique name such as `test` or `replay_camera` so it's easy to remove later.
- If you're debugging a specific camera, copy the settings from that camera (frame rate, model/enrichment settings, zones, etc.) into the temporary camera so the replay closely matches the original environment. Leave `record` and `snapshots` disabled unless you are specifically debugging recording or snapshot behavior.
description="Creates a new user with the specified username, password, and role. Requires admin role. Password must be at least 12 characters long.",
)
def create_user(
    request: Request,
content={"message":f"Role must be one of: {', '.join(config_roles)}"},
description="Updates a user's password. Users can only change their own password unless they have admin role. Requires the current password to verify identity for non-admin users. Password must be at least 12 characters long. If user changes their own password, a new JWT cookie is automatically issued.",
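The length requirement described above can be sketched as follows (hypothetical helper, not Frigate's actual validation code):

```python
MIN_PASSWORD_LENGTH = 12

def password_meets_policy(password: str) -> bool:
    # The updated policy only requires a minimum length of 12 characters.
    return len(password) >= MIN_PASSWORD_LENGTH

password_meets_policy("short")                     # False
password_meets_policy("a-much-longer-passphrase")  # True
```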
default="""### Normal Activity Indicators (Level 0)
- Known/verified people in any zone at any time
- People with pets in residential areas
- Routine residential vehicle access during daytime/evening (6 AM - 10 PM): entering, exiting, loading/unloading items — normal commute and travel patterns
- Deliveries or services during daytime/evening (6 AM - 10 PM): carrying packages to doors/porches, placing items, leaving
- Services/maintenance workers with visible tools, uniforms, or service vehicles during daytime
- Activity confined to public areas only (sidewalks, streets) without entering property at any time
### Suspicious Activity Indicators (Level 1)
- **Testing or attempting to open doors/windows/handles on vehicles or buildings** — ALWAYS Level 1 regardless of time or duration
- **Checking or probing vehicle/building access**: trying handles without entering, peering through windows, examining multiple vehicles, or possessing break-in tools — Level 1
- **Unidentified person in private areas (driveways, near vehicles/buildings) during late night/early morning (11 PM - 5 AM)** — ALWAYS Level 1 regardless of activity or duration
- Taking items that don't belong to them (packages, objects from porches/driveways)
- Climbing or jumping fences/barriers to access property
@@ -133,8 +134,8 @@ Evaluate in this order:
1. **If person is verified/known** → Level 0 regardless of time or activity
2. **If person is unidentified:**
- Check time: If late night/early morning (11 PM - 5 AM) AND in private areas (driveways, near vehicles/buildings) → Level 1
- Otherwise, if daytime/evening (6 AM - 10 PM) with clear legitimate purpose (delivery, service worker) → Level 0
- Check actions: If probing access (trying handles without entering, checking multiple vehicles), taking items, climbing → Level 1
- Otherwise, if daytime/evening (6 AM - 10 PM) with clear legitimate purpose (delivery, service, routine vehicle access) → Level 0
3. **Escalate to Level 2 if:** Weapons, break-in tools, forced entry in progress, violence, or active property damage visible (escalates from Level 0 or 1)
The mere presence of an unidentified person in private areas during late night hours is inherently suspicious and warrants human review, regardless of what activity they appear to be doing or how brief the sequence is.""",
- `title` (string): A concise, direct title that describes the primary action or event in the sequence, not just what you literally see. Use spatial context when available to make titles more meaningful. When multiple objects/actions are present, prioritize whichever is most prominent or occurs first. Use names from "Objects in Scene" based on what you visually observe. If you see both a name and an unidentified object of the same type but visually observe only one person/object, use ONLY the name. Examples: "Joe walking dog", "Person taking out trash", "Vehicle arriving in driveway", "Joe accessing vehicle", "Person leaving porch for driveway".
- `scene` (string): A narrative description of what happens across the sequence from start to finish, in chronological order. Start by describing how the sequence begins, then describe the progression of events. **Describe all significant movements and actions in the order they occur.** For example, if a vehicle arrives and then a person exits, describe both actions sequentially. **Only describe actions you can actually observe happening in the frames provided.** Do not infer or assume actions that aren't visible (e.g., if you see someone walking but never see them sit, don't say they sat down). Include setting, detected objects, and their observable actions. Avoid speculation or filling in assumed behaviors. Your description should align with and support the threat level you assign.
- `title` (string): A concise, grammatically complete title in the format "[Subject] [action verb] [context]" that matches your scene description. Use names from "Objects in Scene" when you visually observe them.
- `shortSummary` (string): A brief 2-sentence summary of the scene, suitable for notifications. Should capture the key activity and context without full detail. This should be a condensed version of the scene description above.
- `confidence` (float): 0-1 confidence in your analysis. Higher confidence when objects/actions are clearly visible and context is unambiguous. Lower confidence when the sequence is unclear, objects are partially obscured, or context is ambiguous.
- `potential_threat_level` (integer): 0, 1, or 2 as defined in "Normal Activity Patterns for This Property" above. Your threat level must be consistent with your scene description and the guidance above.
"addFace":"قم بإضافة مجموعة جديدة لمكتبة الأوجه.",
"addFace":"أضف مجموعة جديدة إلى مكتبة الوجوه عن طريق رفع صورتك الأولى.",
"invalidName":"أسم غير صالح. يجب أن يشمل الأسم فقط على الحروف، الأرقام، المسافات، الفاصلة العليا، الشرطة التحتية، والشرطة الواصلة.",
"placeholder":"أدخل أسم لهذه المجموعة"
},
@@ -21,6 +21,88 @@
"collections":"المجموعات",
"createFaceLibrary":{
"title":"إنشاء المجاميع",
"desc":"إنشاء مجموعة جديدة"
"desc":"إنشاء مجموعة جديدة",
"new":"إضافة وجه جديد",
"nextSteps":"لبناء أساس قوي:<li>استخدم علامة التبويب \"التعرّفات الأخيرة\" لاختيار الصور والتدريب عليها لكل شخص تم اكتشافه.</li> <li>ركّز على الصور الأمامية المباشرة للحصول على أفضل النتائج؛ وتجنّب صور التدريب التي تُظهر الوجوه بزاوية.</li>"
},
"steps":{
"faceName":"ادخل اسم للوجه",
"uploadFace":"ارفع صورة للوجه",
"nextSteps":"الخطوة التالية",
"description":{
"uploadFace":"قم برفع صورة لـ {{name}} تُظهر وجهه من زاوية أمامية مباشرة. لا يلزم أن تكون الصورة مقتصرة على الوجه فقط."
}
},
"train":{
"title":"التعرّفات الأخيرة",
"titleShort":"الأخيرة",
"aria":"اختر التعرّفات الأخيرة",
"empty":"لا توجد أي محاولات حديثة للتعرّف على الوجوه"
},
"deleteFaceLibrary":{
"title":"احذف الاسم",
"desc":"هل أنت متأكد أنك تريد حذف المجموعة {{name}}؟ سيؤدي هذا إلى حذف جميع الوجوه المرتبطة بها نهائيًا."
},
"deleteFaceAttempts":{
"title":"احذف الوجوه",
"desc_zero":"وجه",
"desc_one":"وجه",
"desc_two":"وجهان",
"desc_few":"وجوه",
"desc_many":"وجهًا",
"desc_other":"وجه"
},
"renameFace":{
"title":"اعادة تسمية الوجه",
"desc":"ادخل اسم جديد لـ{{name}}"
},
"button":{
"deleteFaceAttempts":"احذف الوجوه",
"addFace":"اظف وجهًا",
"renameFace":"اعد تسمية وجه",
"deleteFace":"احذف وجهًا",
"uploadImage":"ارفع صورة",
"reprocessFace":"إعادة معالجة الوجه"
},
"imageEntry":{
"validation":{
"selectImage":"يرجى اختيار ملف صورة."
},
"dropActive":"اسحب الصورة إلى هنا…",
"dropInstructions":"اسحب وأفلت أو الصق صورة هنا، أو انقر للاختيار",
"maxSize":"الحجم الأقصى: {{size}} ميغابايت"
},
"nofaces":"لا توجد وجوه متاحة",
"trainFaceAs":"درّب الوجه كـ:",
"trainFace":"درّب الوجه",
"toast":{
"success":{
"uploadedImage":"تم رفع الصورة بنجاح.",
"addFaceLibrary":"تمت إضافة {{name}} بنجاح إلى مكتبة الوجوه!",
"deletedFace_zero":"وجه",
"deletedFace_one":"وجه",
"deletedFace_two":"وجهين",
"deletedFace_few":"وجوه",
"deletedFace_many":"وجهًا",
"deletedFace_other":"وجه",
"deletedName_zero":"وجه",
"deletedName_one":"وجه",
"deletedName_two":"وجهين",
"deletedName_few":"وجوه",
"deletedName_many":"وجهًا",
"deletedName_other":"وجه",
"renamedFace":"تمت إعادة تسمية الوجه بنجاح إلى {{name}}",
"trainedFace":"تم تدريب الوجه بنجاح.",
"updatedFaceScore":"تم تحديث درجة الوجه بنجاح إلى {{name}} ({{score}})."
},
"error":{
"uploadingImageFailed":"فشل في رفع الصورة: {{errorMessage}}",
"addFaceLibraryFailed":"فشل في تعيين اسم الوجه: {{errorMessage}}",
"deleteFaceFailed":"فشل الحذف: {{errorMessage}}",
"deleteNameFailed":"فشل في حذف الاسم: {{errorMessage}}",
"renameFaceFailed":"فشل في إعادة تسمية الوجه: {{errorMessage}}",
"trainFailed":"فشل التدريب: {{errorMessage}}",
"updateFaceScoreFailed":"فشل في تحديث درجة الوجه: {{errorMessage}}"
"mustBeFinished":"El dibuix del polígon s'ha d'acabar abans de desar."
},
"type":{
"zone":"zona",
"motion_mask":"màscara de moviment",
"object_mask":"màscara d'objecte"
}
},
"zoneName":{
@@ -532,7 +537,7 @@
"hide":"Amaga contrasenya",
"requirements":{
"title":"Requisits contrasenya:",
"length":"Com a mínim 8 carácters",
"length":"Com a mínim 12 carácters",
"uppercase":"Com a mínim una majúscula",
"digit":"Com a mínim un digit",
"special":"Com a mínim un carácter especial (!@#$%^&*(),.?\":{}|<>)"
@@ -954,7 +959,7 @@
"useDigestAuthDescription":"Usa l'autenticació de resum HTTP per a ONVIF. Algunes càmeres poden requerir un nom d'usuari/contrasenya ONVIF dedicat en lloc de l'usuari administrador estàndard."
},
"save":{
"failure":"SS'ha produït un error en desar {{cameraName}}.",
"failure":"S'ha produït un error en desar {{cameraName}}.",
"success":"S'ha desat correctament la càmera nova {{cameraName}}."
"desc":"Activa/desactiva temporalment les descripcions d'objectes generatius d'IA per a aquesta càmera. Quan està desactivat, les descripcions generades per IA no se sol·licitaran per als objectes rastrejats en aquesta càmera."
"desc":"Activa/desactiva temporalment les descripcions d'objectes generatius d'IA per a aquesta càmera fins que es reiniciï Frigate. Quan està desactivat, les descripcions generades per IA no se sol·licitaran per als objectes rastrejats en aquesta càmera."
},
"review_descriptions":{
"title":"Descripcions de la IA generativa",
"desc":"Activa/desactiva temporalment les descripcions de revisió de la IA generativa per a aquesta càmera. Quan està desactivat, les descripcions generades per IA no se sol·licitaran per als elements de revisió d'aquesta càmera."
"desc":"Activa/desactiva temporalment les descripcions de la IA Generativa per a aquesta càmera fins que es reiniciï Frigate. Quan està desactivat, les descripcions generades per IA no se sol·licitaran per als elements de revisió d'aquesta càmera."
"desc":"Správci mají plný přístup ke všem funkcím v uživatelském rozhraní Frigate. Diváci jsou omezeni na sledování kamer, položek přehledu a historických záznamů v UI."
},
"readTheDocumentation":"Přečtěte si dokumentaci"
"readTheDocumentation":"Přečtěte si dokumentaci",
"list":{
"two":"{{0}} a {{1}}",
"many":"{{items}}, a {{last}}",
"separatorWithSpace":", "
},
"field":{
"optional":"Volitelné",
"internalID":"Interní ID Frigate používá v konfiguraci a databázi"
"desc_one":"Jste si jistí, že chcete odstranit {{count}} model? Tím trvale odstraníte všechny související data včetně obrázků a tréninkových dat. Tato akce je nevratná.",
"desc_few":"Jste si jistí, že chcete odstranit {{count}} modely? Tím trvale odstraníte všechny související data včetně obrázků a tréninkových dat. Tato akce je nevratná.",
"desc_other":"Jste si jistí, že chcete odstranit {{count}} modelů? Tím trvale odstraníte všechny související data včetně obrázků a tréninkových dat. Tato akce je nevratná."
},
"deleteDatasetImages":{
"desc_one":"Opravdu chcete odstranit {{count}} obrázek z {{dataset}}? Tato akce je nevratná a vyžaduje přetrénování modelu.",
"desc_few":"Opravdu chcete odstranit {{count}} obrázky z {{dataset}}? Tato akce je nevratná a vyžaduje přetrénování modelu.",
"desc_other":"Opravdu chcete odstranit {{count}} obrázků z {{dataset}}? Tato akce je nevratná a vyžaduje přetrénování modelu.",
"title":"Smazat obrázky datové sady"
},
"deleteTrainImages":{
"desc_one":"Opravdu chcete odstranit {{count}} obrázek? Tato akce je nevratná.",
"desc_few":"Opravdu chcete odstranit {{count}} obrázky? Tato akce je nevratná.",
"desc_other":"Opravdu chcete odstranit {{count}} obrázků? Tato akce je nevratná.",
"title":"Odstranit tréninkové obrázky"
},
"wizard":{
"step3":{
"allImagesRequired_one":"Prosím, zařaďte všechny obrázky. Zbývá {{count}} obrázek.",
"allImagesRequired_few":"Prosím, zařaďte všechny obrázky. Zbývají {{count}} obrázky.",
"allImagesRequired_other":"Prosím, zařaďte všechny obrázky. Zbývá {{count}} obrázků.",
"trainingStarted":"Trénování úspěšně spuštěno",
"generateSuccess":"Vzorové obrázky byly úspěšně vytvořeny"
}
},
"deleteCategory":{
"title":"Smazat Třídu",
"desc":"Opravdu chcete odstranit třídu {{name}}? Tím se na trvalo odstraní všechny související obrázky a bude potřeba přetrénovat model.",
"minClassesTitle":"Nemůžete smazat třídu",
"minClassesDesc":"Klasifikační model musí mít alespoň 2 třídy. Než tuto třídu odstraníte přidejte další třídu."
},
"edit":{
"descriptionObject":"Upravte typ objektu a typ klasifikace pro tento model klasifikace.",
"stateClassesInfo":"Poznámka: Změna tříd stavů vyžaduje přetrénování modelu s aktualizovanými třídami."
},
"renameCategory":{
"title":"Přejmenovat třídu",
"desc":"Vložte nové jméno pro {{name}}. Aby se změna názvu projevila, bude nutné model znovu natrénovat."
},
"description":{
"invalidName":"Neplatné jméno. Jméno můžou obsahovat pouze písmena, čísla, mezery, apostrofy, podtržítka a spojovníky."
"audioTranscription":"Požádání o přepis zvuku bylo úspěšné."
"audioTranscription":"Požádání o přepis zvuku bylo úspěšné. V závislosti na rychlosti Vašeho Frigate serveru může přepis trvat nějaký čas než bude dokončen.",
"updatedAttributes":"Atributy byly úspěšně aktualizovány."
},
"error":{
"regenerate":"Chyba volání {{provider}} pro nový popis: {{errorMessage}}",
@@ -206,7 +207,7 @@
"dialog":{
"confirmDelete":{
"title":"Potvrdit smazání",
"desc":"Odstraněním tohoto sledovaného objektu se odstraní snímek, všechna uložená vložení a všechny související položky životního cyklu objektu. Zaznamenaný záznam tohoto sledovaného objektu v zobrazení Historie <em>NEBUDE</em> smazán.<br /><br />Opravdu chcete pokračovat?"
"desc":"Odstraněním tohoto sledovaného objektu se odstraní snímek, všechna uložená vložení a všechny související položky s podrobnostmi o sledování. Zaznamenaný záznam tohoto sledovaného objektu v zobrazení Historie <em>NEBUDE</em> smazán.<br /><br />Opravdu chcete pokračovat?"
"desc":"Vždy zobrazovat zóny na snímcích, na kterých objekty vstoupili do zóny."
},
"offset":{
"label":"Odsazení anotace",
"desc":"Tato data pocházejí z detekčního kanálu vaší kamery, ale překrývají se s obrázky ze záznamového kanálu. Je nepravděpodobné, že by oba streamy byly dokonale synchronizované. V důsledku toho se ohraničovací rámeček a záznam nebudou dokonale srovnávat. Toto nastavení můžete použít k časovému posunutí anotací dopředu nebo dozadu, abyste je lépe zarovnali se zaznamenaným záznamem.",
"millisecondsToOffset":"Milisekundy na posunutí detekce anotací. <em>Výchozí: 0</em>",
"tips":"Snižte hodnotu, pokud je přehrávané video před ohraničením a body cesty, nebo zvyšte hodnotu, pokud je přehrávané video za nimi. Hodnota může být i záporná.",
"toast":{
"success":"Odsazení anotací pro {{camera}} bylo uloženo do konfiguračního souboru."
"dropInstructions":"Přetáhněte obrázek zde, nebo klikněte na výběr",
"dropInstructions":"Přetáhněte obrázek sem, nebo klikněte na výběr",
"maxSize":"Maximální velikost: {{size}}MB",
"dropActive":"Přetáhněte obrázek zde…",
"validation":{
@@ -10,7 +10,7 @@
"createFaceLibrary":{
"new":"Vytvořit nový obličej",
"desc":"Vytvořit novou kolekci",
"nextSteps":"Chcete-li vybudovat pevný základ:<li>Použijte kartu Trénování k výběru a trénování na snímcích pro každou detekovanou osobu.</li><li>Pro nejlepší výsledky se zaměřte na přímé snímky; vyhněte se trénování snímků, které zachycují obličeje pod úhlem.</li></ul>",
"nextSteps":"Chcete-li vybudovat pevný základ:<li>Použijte kartu Nedávná Rozpoznání k výběru a trénování na snímcích pro každou detekovanou osobu.</li><li>Pro nejlepší výsledky se zaměřte na přímé snímky; vyhněte se trénování snímků, které zachycují obličeje pod úhlem.</li></ul>",
"title":"Vytvořit kolekci"
},
"details":{
@@ -44,7 +44,7 @@
"description":{
"addFace":"Přidejte novou kolekci do Knihovny obličejů nahráním prvního obrázku.",
"placeholder":"Zadejte název pro tuto kolekci",
"invalidName":"Neplatný název. Názvy mohou obsahovat pouze písmena, čísla, mezery, apostrofy, podtržítka a pomlčky."
"invalidName":"Neplatné jméno. Jméno můžou obsahovat pouze písmena, čísla, mezery, apostrofy, podtržítka a spojovníky."
"tips":"Název musí mít alespoň 2 znaky a nesmí být shodný s názvem kamery nebo jiné zóny."
"tips":"Název musí mít alespoň 2 znaky, musí obsahovat alespoň jedno písmeno a nesmí být shodný s názvem kamery nebo jiné zóny této kamery."
},
"inertia":{
"title":"Setrvačnost",
@@ -160,7 +160,7 @@
}
},
"toast":{
"success":"Zóna {{zoneName}} byla uložena. Restartujte Frigate pro aplikování změn."
"success":"Zóna {{zoneName}} byla uložena."
},
"label":"Zóny",
"desc":{
@@ -199,8 +199,8 @@
"clickDrawPolygon":"Kliknutím nakreslíte polygon do obrázku.",
"toast":{
"success":{
"title":"{{polygonName}} byl uložen. Restartujte Frigate pro aplikování změn.",
"noName":"Maska Detekce pohybu byla uložena. Restartujte Frigate pro aplikování změn."
"title":"{{polygonName}} byl uložen.",
"noName":"Maska Detekce pohybu byla uložena."
}
}
},
@@ -284,8 +284,8 @@
"clickDrawPolygon":"Kliknutím nakreslete polygon do obrázku.",
"toast":{
"success":{
"title":"{{polygonName}} byl uložen. Restartujte Frigate pro aplikování změn.",
"noName":"Maska Objektu byla uložena. Restartujte Frigate pro aplikování změn."
"title":"{{polygonName}} byl uložen.",
"noName":"Maska Objektu byla uložena."
}
},
"point_one":"{{count}} bod",
@@ -322,7 +322,7 @@
"noCamera":"Žádná Kamera"
},
"general":{
"title":"Hlavní nastavení",
"title":"Nastavení rozhraní",
"liveDashboard":{
"title":"Živý dashboard",
"automaticLiveView":{
@@ -332,6 +332,13 @@
"playAlertVideos":{
"label":"Přehrát videa s výstrahou",
"desc":"Ve výchozím nastavení se nedávná upozornění na ovládacím panelu Živě přehrávají jako malá opakující se videa. Vypněte tuto možnost, chcete-li na tomto zařízení/prohlížeči zobrazovat pouze statický obrázek nedávných výstrah."
},
"displayCameraNames":{
"label":"Vždy zobrazit názvy kamer",
"desc":"Vždy zobrazit názvy kamer v čipu na ovládacím panelu živého náhledu s více kamerami."
},
"liveFallbackTimeout":{
"label":"Časový limit pádu živého přehrávání"
}
},
"storedLayouts":{
@@ -629,11 +636,11 @@
"actions":"Akce",
"noUsers":"Žádní uživatelé nebyli nalezeni.",
"changeRole":"Změnit roli uživatele",
"password":"Heslo",
"password":"Resetovat Heslo",
"deleteUser":"Smazat uživatele",
"role":"Role"
},
"updatePassword":"Aktualizovat heslo",
"updatePassword":"Resetovat heslo",
"toast":{
"success":{
"createUser":"Uživatel {{user}} úspěšně vytvořen",
@@ -743,7 +750,7 @@
"triggers":{
"documentTitle":"Spouštěče",
"management":{
"title":"Správa spouštěčů",
"title":"Spouštěče",
"desc":"Spravovat spouštěče pro {{camera}}. Použít typ miniatury ke spuštění u miniatur podobných vybranému sledovanému objektu a typ popisu ke spuštění u popisů podobných zadanému textu."
},
"addTrigger":"Přidat spouštěč",
@@ -782,10 +789,10 @@
"form":{
"name":{
"title":"Název",
"placeholder":"Zadejte název spouštěče",
"placeholder":"Pojmenujte tento spouštěč",
"error":{
"minLength":"Název musí mít alespoň 2 znaky.",
"invalidCharacters":"Jméno může obsahovat pouze písmena, číslice, podtržítka a pomlčky.",
"minLength":"Pole musí mít alespoň 2 znaky.",
"invalidCharacters":"Pole může obsahovat pouze písmena, číslice, podtržítka a pomlčky.",
"alreadyExists":"Spouštěč s tímto názvem již pro tuto kameru existuje."
}
},
@@ -798,9 +805,9 @@
},
"content":{
"title":"Obsah",
"imagePlaceholder":"Vybrat obrázek",
"imagePlaceholder":"Vyberte miniaturu",
"textPlaceholder":"Zadat textový obsah",
"imageDesc":"Vybrat obrázek, který spustí tuto akci, když bude detekován podobný obrázek.",
"imageDesc":"Je zobrazeno pouze posledních 100 miniatur. Pokud nemůžete najít požadovanou miniaturu, prosím zkontrolujte dřívější objekty v Prozkoumat a nastavte spouštěč ze tamějšího menu.",
"textDesc":"Zadejte text, který spustí tuto akci, když bude zjištěn podobný popis sledovaného objektu.",
"error":{
"required":"Obsah je povinný."
@@ -808,7 +815,7 @@
},
"actions":{
"title":"Akce",
"desc":"Ve výchozím nastavení Frigate odesílá MQTT zprávu pro všechny spouštěče. Zvolte dodatečnou akci, která se má provést, když se tento spouštěč aktivuje.",
"desc":"Ve výchozím nastavení Frigate odesílá MQTT zprávu pro všechny spouštěče. Podřazené popisky přidávají název spouštěče k popisku objektu. Atributy jsou prohledávatelná metadata uložená samostatně v metadatech sledovaného objektu.",
"error":{
"min":"Musí být vybrána alespoň jedna akce."
}
@@ -850,9 +857,9 @@
"createRole":"Role {{role}} byla úspěšně vytvořena",
"updateCameras":"Kamery byly aktualizovány pro roli {{role}}",
"deleteRole":"Role {{role}} byla úspěšně smazána",
"userRolesUpdated_one":"{{count}} uživatel(ů) přiřazených k této roli bylo aktualizováno na „Divák“, který má přístup ke všem kamerám.",
"userRolesUpdated_few":"",
"userRolesUpdated_other":""
"userRolesUpdated_one":"{{count}} uživatel přiřazený k této roli byl aktualizován na „diváka“, který má přístup ke všem kamerám.",
"userRolesUpdated_few":"{{count}} uživatelé přiřazení k této roli bylo aktualizováno na „diváky“, kteří mají přístup ke všem kamerám.",
"userRolesUpdated_other":"{{count}} uživatelů přiřazených k této roli bylo aktualizováno na „diváky“, kteří mají přístup ke všem kamerám."
},
"error":{
"createRoleFailed":"Nepodařilo se vytvořit roli: {{errorMessage}}",
@@ -896,5 +903,36 @@
"title":"Správa role diváka",
"desc":"Spravujte vlastní role diváků a jejich oprávnění k přístupu ke kamerám pro tuto instanci Frigate."
}
},
"cameraWizard":{
"save":{
"success":"Nová kamera {{cameraName}} úspěšně uložena."
},
"step2":{
"testSuccess":"Test připojení v pořádku!",
"probeSuccessful":"Sonda úspěšná",
"probeNoSuccess":"Sonda neúspěšná"
},
"step3":{
"testSuccess":"Test streamu v pořádku!"
},
"step4":{
"reconnectionSuccess":"Opakované připojení úspěšné.",
"streamValidated":"Stream {{number}} úspěšně ověřený"
}
},
"cameraManagement":{
"cameraConfig":{
"toast":{
"success":"Kamera {{cameraName}} úspěšně uložena"
}
}
},
"cameraReview":{
"reviewClassification":{
"toast":{
"success":"Konfigurace Klasifikací Revizí byla uložena. Restartujte Frigate pro aplikování změn."
"description":"Toto je známá chyba v nástrojích Intel pro hlášení statistik GPU (intel_gpu_top), která selhává a opakovaně vrací využití GPU 0 %, a to i v případech, kdy na (i)GPU správně běží hardwarová akcelerace a detekce objektů. Nejedná se o chybu Frigate. Můžete restartovat hostitele, abyste problém dočasně vyřešili a potvrdili, že GPU funguje správně. Toto neovlivňuje výkon."
"mustLeastCharacters":"Kameragruppens navn skal være mindst 2 tegn."
"mustLeastCharacters":"Kameragruppens navn skal være mindst 2 tegn.",
"exists":"Kameragruppenavn findes allerede.",
"nameMustNotPeriod":"Kameragruppenavn må ikke indeholde en periode.",
"invalid":"Ugyldigt kamera gruppenavn."
}
},
"cameras":{
"label":"Kameraer",
"desc":"Vælg kameraer til denne gruppe."
},
"icon":"Ikon",
"success":"Kameragruppe ({{name}}) er blevet gemt.",
"camera":{
"birdseye":"Fugleøje",
"setting":{
"label":"Kamera Streaming Indstillinger",
"title":"{{cameraName}} Streaming Indstillinger",
"desc":"Skift de live streaming muligheder for denne kameragruppes dashboard. <em> Disse indstillinger er enheds- og browserspecifikke.</em>",
"audioIsAvailable":"Lyd er tilgængelig for denne stream",
"audioIsUnavailable":"Lyd er ikke tilgængelig for denne strøm",
"audio":{
"tips":{
"title":"Lyd skal komme fra dit kamera og konfigureret i go2rtc til denne stream."
}
},
"stream":"Stream",
"placeholder":"Vælg en stream",
"streamMethod":{
"label":"Streaming Metode",
"placeholder":"Vælg en streaming metode",
"method":{
"noStreaming":{
"label":"Ingen Streaming",
"desc":"Kamerabilleder vil kun opdatere én gang i minuttet og ingen live streaming vil forekomme."
},
"smartStreaming":{
"label":"Smart Streaming (anbefalet)",
"desc":"Smart streaming vil opdatere dit kamerabillede én gang i minuttet, når der ikke sker noget, for at spare båndbredde og ressourcer. Når der registreres aktivitet, skifter billedet problemfrit til en live stream."
},
"continuousStreaming":{
"label":"Kontinuerlig Streaming",
"desc":{
"title":"Kamerabillede vil altid være en live stream, når det er synligt på instrumentbrættet, selv om der ikke registreres nogen aktivitet.",
"warning":"Kontinuerlig streaming kan forårsage højt båndbreddeforbrug og ydelsesproblemer. Brug med omtanke."
}
}
}
},
"compatibilityMode":{
"label":"Kompatibilitetstilstand",
"desc":"Aktivér kun denne mulighed, hvis kameraets live stream viser farve artefakter og har en diagonal linje på højre side af billedet."
"trainingInProgress":"Modellen er ved at blive trænet",
"noNewImages":"Der er ingen nye billeder at lære af. Kategorisér flere billeder i datasættet først.",
"noChanges":"Ingen ændringer i datasættet siden sidste træning.",
"modelNotReady":"Modellen er ikke klar til træning"
},
"toast":{
"success":{
"deletedCategory":"Slettet kategori",
"deletedImage":"Slettede billeder",
"deletedModel_one":"{{count}} model er nu slettet",
"deletedModel_other":"{{count}} modeller er nu slettet",
"categorizedImage":"Billedet er nu kategoriseret",
"trainedModel":"Modellen er klar.",
"trainingModel":"Modeltræning er started.",
"updatedModel":"Modellens indstillinger er opdateret",
"renamedCategory":"Kategorien er omdøbt til {{name}}"
},
"error":{
"deleteImageFailed":"Fejl under sletning: {{errorMessage}}",
"deleteCategoryFailed":"Sletning af kategori fejlede: {{errorMessage}}",
"deleteModelFailed":"Sletning af model fejlede: {{errorMessage}}",
"categorizeFailed":"Kategorisering af billedet fejlede: {{errorMessage}}",
"trainingFailed":"Træning af modellen fejlede. Check Frigate loggen.",
"trainingFailedToStart":"Opstart af modeltræning fejlede: {{errorMessage}}",
"updateModelFailed":"Ændring af modellen fejlede: {{errorMessage}}",
"renameCategoryFailed":"Kan ikke omdøbe kategorien: {{errorMessage}}"
}
},
"deleteCategory":{
"title":"Slet kategori",
"desc":"Er du sikker på at du vil slette kategorien {{name}}? Dette kan ikke fortrydes og sletter alle tilhørende billeder samt træning af modellen.",
"minClassesTitle":"Kan ikke slette Kategori",
"minClassesDesc":"Modellen skal have mindst 2 kategorier. Tilføj en kategori, før du sletter denne."
},
"deleteModel":{
"title":"Slet Kategoriseringsmodellen",
"desc_one":"Er du sikker på, at du vil slette {{count}} model? Dette vil permanent slette alle tilknyttede data, inkl. billeder og træningsdata. Denne handling kan ikke fortrydes.",
"desc_other":"Er du sikker på, at du vil slette {{count}} modeller? Dette vil permanent slette alle tilknyttede data, inkl. billeder og træningsdata. Denne handling kan ikke fortrydes.",
"single":"Er du sikker på, at du vil slette {{name}}? Dette vil permanent slette alle tilknyttede data, inklusive billeder og træningsdata. Denne handling kan ikke fortrydes."
},
"train":{
"title":"Nyeste kategorier",
"titleShort":"Nyeste",
"aria":"Vælg de nyeste kategorier"
},
"categories":"Kategorier",
"createCategory":{
"new":"Opret en ny kategori"
},
"categorizeImageAs":"Kategoriser billedet som:",
"categorizeImage":"Kategoriser billedet",
"menu":{
"objects":"Genstande",
"states":"Statestik"
},
"noModels":{
"object":{
"title":"Ingen kategoriseringsmodeller for genstande",
"description":"Opret en model, der kan kategorisere genstande.",
"buttonText":"Opret Genstands Model"
},
"state":{
"title":"Ingen modeller til genstandstilstande",
"description":"Opret en brugerdefineret model til at overvåge og kategorisere tilstandsændringer i specifikke kamerområder.",
"nameLength":"Modellens navn må højst være 64 tegn",
"nameOnlyNumbers":"Modellens navn skal indeholde bogstaver",
"classRequired":"Der mangler en kategori",
"classesUnique":"Kategorinavne skal være unikke",
"noneNotAllowed":"Kategorinavnet 'none' er ikke tilladt",
"stateRequiresTwoClasses":"Tilstandsmodeller har brug for 2 kategorier",
"objectLabelRequired":"Vælg genstands mærkat",
"objectTypeRequired":"Vælg kategoriseringstype",
"nameRequired":"Modelnavn er påkrævet"
},
"description":"Tilstandsmodeller overvåger faste kameraområder for ændringer (f.eks. dør åben/lukket). Genstandsmodeller tilføjer kategoriseringer til detekterede genstande (f.eks. kendte dyr, leveringspersoner osv.).",
"name":"Navn",
"namePlaceholder":"Skriv modelnavn...",
"classificationTypeDesc":"Underetiketter tilføjer ekstra tekst til genstandens etiket (f.eks. 'Person: UPS'). Attributter er søgbare metadata, der opbevares separat i genstandens metadata.",
"classificationSubLabel":"Underetiketter",
"classificationAttribute":"Attribut",
"classes":"Kategori",
"states":"Tilstande",
"classesTip":"Lær om kategorier",
"classesStateDesc":"Definér de forskellige tilstande, dit kameraområde kan være i. For eksempel: 'åben' og 'lukket' for en garageport.",
"classesObjectDesc":"Definér de forskellige kategorier, som detekterede genstande skal kategoriseres i. For eksempel: 'leveringsperson', 'beboer', 'fremmed' til kategorisering af personer.",
"classPlaceholder":"Skriv kategorinavn..."
},
"step2":{
"description":"Vælg kameraer, og definer det område, der skal overvåges for hvert kamera. Modellen vil kategorisere tilstanden i disse områder.",
"cameras":"Kameraer",
"selectCamera":"Vælg Kamera",
"noCameras":"Klik + for at tilføje kamera",
"selectCameraPrompt":"Vælg et kamera fra listen for at definere dets overvågningsområde"
},
"step3":{
"selectImagesPrompt":"Vælg alle billeder med: {{className}}",
"selectImagesDescription":"Klik på billederne for at vælge dem. Klik på Fortsæt, når du er færdig med denne kategori.",
"allImagesRequired_one":"Venligst kategoriser alle billeder. {{count}} billede tilbage.",
"allImagesRequired_other":"Venligst kategoriser alle billeder. {{count}} billeder tilbage.",
"generating":{
"title":"Genererer testbilleder",
"description":"Frigate henter repræsentative billeder fra dine optagelser. Det kan tage et øjeblik..."
},
"training":{
"title":"Træningsmodel",
"description":"Din model trænes i baggrunden. Luk denne dialog, og din model vil begynde at køre, så snart træningen er færdig."
},
"retryGenerate":"Forsøg at generere igen",
"noImages":"Ingen prøvebilleder blev genereret",
"classifying":"Kategoriserer og træner...",
"trainingStarted":"Træningen er startet",
"modelCreated":"Model er oprettet. Brug visningen af nylige kategoriseringer til at tilføje billeder for de manglende tilstande, og træn modellen derefter.",
"errors":{
"noCameras":"Ingen kamera konfigureret",
"noObjectLabel":"Ingen genstandsmærkat valgt",
"generateFailed":"Kunne ikke generere eksempler: {{error}}",
"generationFailed":"Der opstod en fejl under genereringen. Prøv igen.",
"classifyFailed":"Kunne ikke kategorisere billederne: {{error}}"
},
"generateSuccess":"Eksempelbilleder er nu genereret",
"missingStatesWarning":{
"title":"Manglende tilstandseksempler",
"description":"Det anbefales at vælge eksempler for alle tilstande for at opnå de bedste resultater. Du kan fortsætte uden at vælge alle tilstande, men modellen bliver ikke trænet, før alle tilstande har billeder. Efter du fortsætter, kan du bruge visningen Seneste kategoriseringer til at kategorisere billeder for de manglende tilstande og derefter træne modellen."
}
},
"title":"Opret ny kategorisering",
"steps":{
"nameAndDefine":"Navn og definition",
"stateArea":"Tilstandsområde",
"chooseExamples":"Vælg Eksempler"
}
},
"edit":{
"title":"Rediger kategoriseringsmodel",
"descriptionState":"Rediger kategorierne for denne model til genstandstilstande. Ændringer kræver, at modellen trænes igen.",
"descriptionObject":"Rediger genstandstypen og kategoriseringstypen for denne genstandskategoriseringsmodel.",
"stateClassesInfo":"Bemærk: Ændring af tilstandskategorier kræver, at modellen trænes igen med de opdaterede kategorier."
},
"deleteDatasetImages":{
"title":"Slet billeder i datasættet",
"desc_one":"Er du sikker på, at du vil slette {{count}} billede fra {{dataset}}? Denne handling kan ikke fortrydes og kræver, at modellen trænes igen.",
"desc_other":"Er du sikker på, at du vil slette {{count}} billeder fra {{dataset}}? Denne handling kan ikke fortrydes og kræver, at modellen trænes igen."
},
"deleteTrainImages":{
"title":"Slet trænings billeder",
"desc_one":"Er du sikker på, at du vil slette {{count}} billede? Denne handling kan ikke fortrydes.",
"desc_other":"Er du sikker på, at du vil slette {{count}} billeder? Denne handling kan ikke fortrydes."
},
"renameCategory":{
"title":"Omdøb Kategori",
"desc":"Indtast et nyt navn til {{name}}. Modellen skal trænes igen, før navneændringen træder i kraft."
"detection":"Der er ingen registreringer at gennemgå",
"motion":"Ingen bevægelsesdata fundet"
}
"motion":"Ingen bevægelsesdata fundet",
"recordingsDisabled":{
"title":"Optagelser skal være aktiveret"
}
},
"documentTitle":"Gennemse - Frigate",
"recordings":{
"documentTitle":"Optagelser - Frigate"
},
"calendarFilter":{
"last24Hours":"Sidste 24 timer"
},
"markAsReviewed":"Marker som gennemset",
"markTheseItemsAsReviewed":"Marker disse som gennemset",
"detail":{
"aria":"Skift til detaljevisning"
},
"timeline.aria":"Vælg tidslinje"
}
Some files were not shown because too many files have changed in this diff
Show More
Reference in New Issue
Block a user
Blocking a user prevents them from interacting with repositories, such as opening or commenting on pull requests or issues. Learn more about blocking a user.