Compare commits

..

74 Commits

Author SHA1 Message Date
Nicolas Mowen
88bad3423b Set model 2026-02-26 15:28:58 -07:00
Nicolas Mowen
f3cda9020b Don't require download check 2026-02-26 14:33:08 -07:00
Nicolas Mowen
0c333ec28a Fix sending images 2026-02-26 14:33:08 -07:00
Nicolas Mowen
de986c7430 undo 2026-02-26 14:33:08 -07:00
Nicolas Mowen
dd2d7aca19 Basic docs 2026-02-26 14:33:08 -07:00
Nicolas Mowen
3f1bf1ae12 Add support for embedding via genai 2026-02-26 14:33:08 -07:00
Nicolas Mowen
d6e8cad32f Add embed API support 2026-02-26 14:33:07 -07:00
Nicolas Mowen
699d5ffa28 Support GenAI for embeddings 2026-02-26 14:32:33 -07:00
Nicolas Mowen
f400e91ede Add a starting state for chat 2026-02-26 08:38:59 -07:00
Nicolas Mowen
3bac4b15ae Add thumbnail images to object results 2026-02-26 08:38:59 -07:00
Nicolas Mowen
b2c424ad73 Add support for markdown tables 2026-02-26 08:38:59 -07:00
Nicolas Mowen
c18846ac62 Fix loading 2026-02-26 08:38:59 -07:00
Nicolas Mowen
b65ae76f0c Cleanup UI bubbles 2026-02-26 08:38:59 -07:00
Nicolas Mowen
5faf5e0d84 Cleanup UI and prompt 2026-02-26 08:38:59 -07:00
Nicolas Mowen
6837b9c89a Cleanup 2026-02-26 08:38:59 -07:00
Nicolas Mowen
f04df4a144 Add sub label to event tool filtering 2026-02-26 08:38:59 -07:00
Nicolas Mowen
e42f70eeec Implement message editing 2026-02-26 08:38:59 -07:00
Nicolas Mowen
e7b2b919d5 Improve default behavior 2026-02-26 08:38:59 -07:00
Nicolas Mowen
c68b7c9f46 Improvements to UI 2026-02-26 08:38:59 -07:00
Nicolas Mowen
8184ec5c8f Add copy button 2026-02-26 08:38:59 -07:00
Nicolas Mowen
ef448a7f7c Fix tool calling 2026-02-26 08:38:58 -07:00
Nicolas Mowen
f841ccdb63 Undo 2026-02-26 08:38:58 -07:00
Nicolas Mowen
4b6228acd9 Full streaming support 2026-02-26 08:38:58 -07:00
Nicolas Mowen
0b8d1ce568 Support streaming 2026-02-26 08:38:58 -07:00
Nicolas Mowen
9ad7a2639f Improve UI handling 2026-02-26 08:38:58 -07:00
Nicolas Mowen
089c2c1018 Add title 2026-02-26 08:38:58 -07:00
Nicolas Mowen
3e97f9e985 Show tool calls separately from message 2026-02-26 08:38:58 -07:00
Nicolas Mowen
eb9f16b4fa More time parsing improvements 2026-02-26 08:38:58 -07:00
Nicolas Mowen
45c6be47d2 Reduce fields in response 2026-02-26 08:38:58 -07:00
Nicolas Mowen
5a6c62a844 Adjust timing format 2026-02-26 08:38:58 -07:00
Nicolas Mowen
f29fbe14ca Improvements 2026-02-26 08:38:58 -07:00
Nicolas Mowen
cc941ab2db Add markdown 2026-02-26 08:38:58 -07:00
Nicolas Mowen
56b3ebe791 processing 2026-02-26 08:38:58 -07:00
Nicolas Mowen
6fdfe22f8c Add chat history 2026-02-26 08:38:58 -07:00
Nicolas Mowen
0cf713985f Add basic chat page with entry 2026-02-26 08:38:58 -07:00
Nicolas Mowen
dc39d2f0ef Set model in llama.cpp config 2026-02-26 08:38:52 -07:00
Nicolas Mowen
e6387dac05 Fix import issues 2026-02-26 08:38:52 -07:00
Nicolas Mowen
c870ebea37 Cleanup 2026-02-26 08:38:52 -07:00
Nicolas Mowen
56a1a0f5e3 Support getting client via manager 2026-02-26 08:38:52 -07:00
Nicolas Mowen
67a245c8ef Convert to roles list 2026-02-26 08:38:52 -07:00
Nicolas Mowen
a072600c94 Add config migration 2026-02-26 08:38:52 -07:00
Nicolas Mowen
b603678b26 GenAI client manager 2026-02-26 08:38:52 -07:00
Nicolas Mowen
8793650c2f Fix frame time access 2026-02-26 08:38:42 -07:00
Nicolas Mowen
9c8dd9a6ba Adapt to new Gemini format 2026-02-25 09:19:56 -07:00
nulledy
507b495b90 ffmpeg Preview Segment Optimization for "high" and "very_high" (#21996)
* Introduce qmax parameter for ffmpeg preview encoding

Added PREVIEW_QMAX_PARAM to control ffmpeg encoding quality.

* formatting

* Fix spacing in qmax parameters for preview quality
2026-02-25 09:02:08 -07:00
nulledy
3525f32bc2 Allow API Events to be Detections or Alerts, depending on the Event Label (#21923)
* - API created events will be alerts OR detections, depending on the event label, defaulting to alerts
- Indefinite API events will extend the recording segment until those events are ended
- API event start time is the actual start time, instead of having a pre-buffer of record.event_pre_capture

* Instead of checking for indefinite events on a camera before deciding if we should end the segment, only update last_detection_time and last_alert_time if frame_time is greater, which should have the same effect

* Add the ability to set a pre_capture number of seconds when creating a manual event via the API. Default behavior unchanged

* Remove unnecessary _publish_segment_start() call

* Formatting

* handle last_alert_time or last_detection_time being None when checking them against the frame_time

* comment manual_info["label"].split(": ")[0] for clarity
2026-02-25 09:02:08 -07:00
Josh Hawkins
ac142449f1 Improve jsmpeg player websocket handling (#21943)
* improve jsmpeg player websocket handling

prevent websocket console messages from appearing when player is destroyed

* reformat files after ruff upgrade
2026-02-25 09:02:08 -07:00
FL42
47b89a1d60 feat: add X-Frame-Time when returning snapshot (#21932)
Co-authored-by: Florent MORICONI <170678386+fmcloudconsulting@users.noreply.github.com>
2026-02-25 09:02:08 -07:00
Eric Work
cdcf56092c Add networking options for configuring listening ports (#21779) 2026-02-25 09:02:08 -07:00
Nicolas Mowen
08ee2e21de Add live context tool to LLM (#21754)
* Add live context tool

* Improve handling of images in request

* Improve prompt caching
2026-02-25 09:02:08 -07:00
Nicolas Mowen
9ab4dd4538 Update to ROCm 7.2.0 (#21753)
* Update to ROCm 7.2.0

* ROCm now works properly with JinaV1

* Arcface has compilation error
2026-02-25 09:02:08 -07:00
Josh Hawkins
fe5441349b Offline preview image (#21752)
* use latest preview frame for latest image when camera is offline

* remove frame extraction logic

* tests

* frontend

* add description to api endpoint
2026-02-25 09:02:08 -07:00
Nicolas Mowen
a4b1cc3a54 Implement LLM Chat API with tool calling support (#21731)
* Implement initial tools definiton APIs

* Add initial chat completion API with tool support

* Implement other providers

* Cleanup
2026-02-25 09:02:08 -07:00
John Shaw
99e25661b2 Remove parents in remove_empty_directories (#21726)
The original implementation did a full directory tree walk to find and remove
empty directories, so this implementation should remove the parents as well,
like the original did.
2026-02-25 09:02:08 -07:00
Nicolas Mowen
20360db2c9 Implement llama.cpp GenAI Provider (#21690)
* Implement llama.cpp GenAI Provider

* Add docs

* Update links

* Fix broken mqtt links

* Fix more broken anchors
2026-02-25 09:02:08 -07:00
John Shaw
3826d72c2a Optimize empty directory cleanup for recordings (#21695)
The previous empty directory cleanup did a full recursive directory
walk, which can be extremely slow. This new implementation only removes
directories which have a chance of being empty due to a recent file
deletion.
2026-02-25 09:02:08 -07:00
Nicolas Mowen
3d5757c640 Refactor Time-Lapse Export (#21668)
* refactor time lapse creation to be a separate API call with ability to pass arbitrary ffmpeg args

* Add CPU fallback
2026-02-25 09:02:08 -07:00
Eugeny Tulupov
86100fde6f Update go2rtc to v1.9.13 (#21648)
Co-authored-by: Eugeny Tulupov <eugeny.tulupov@spirent.com>
2026-02-25 09:02:08 -07:00
Josh Hawkins
28b1195a79 Fix incorrect counting in sync_recordings (#21626) 2026-02-25 09:02:08 -07:00
Josh Hawkins
b6db38bd4e use same logging pattern in sync_recordings as the other sync functions (#21625) 2026-02-25 09:02:08 -07:00
Josh Hawkins
92c6b8e484 Media sync API refactor and UI (#21542)
* generic job infrastructure

* types and dispatcher changes for jobs

* save data in memory only for completed jobs

* implement media sync job and endpoints

* change logs to debug

* websocket hook and types

* frontend

* i18n

* docs tweaks

* endpoint descriptions

* tweak docs
2026-02-25 09:02:07 -07:00
Josh Hawkins
9381f26352 Add media sync API endpoint (#21526)
* add media cleanup functions

* add endpoint

* remove scheduled sync recordings from cleanup

* move to utils dir

* tweak import

* remove sync_recordings and add config migrator

* remove sync_recordings

* docs

* remove key

* clean up docs

* docs fix

* docs tweak
2026-02-25 09:02:07 -07:00
Nicolas Mowen
e0180005be Add API to handle deleting recordings (#21520)
* Add recording delete API

* Re-organize recordings apis

* Fix import

* Consolidate query types
2026-02-25 09:02:07 -07:00
Nicolas Mowen
2041798702 Exports Improvements (#21521)
* Add images to case folder view

* Add ability to select case in export dialog

* Add to mobile review too
2026-02-25 09:02:07 -07:00
Nicolas Mowen
3d23b5de30 Add support for GPU and NPU temperatures (#21495)
* Add rockchip temps

* Add support for GPU and NPU temperatures in the frontend

* Add support for Nvidia temperature

* Improve separation

* Adjust graph scaling
2026-02-25 09:02:07 -07:00
Andrew Roberts
209bb44518 Camera-specific hwaccel settings for timelapse exports (correct base) (#21386)
* added hwaccel_args to camera.record.export config struct

* populate camera.record.export.hwaccel_args with a cascade up to camera then global if 'auto'

* use new hwaccel args in export

* added documentation for camera-specific hwaccel export

* fix c/p error

* missed an import

* fleshed out the docs and comments a bit

* ruff lint

* separated out the tips in the doc

* fix documentation

* fix and simplify reference config doc
2026-02-25 09:02:07 -07:00
Nicolas Mowen
88462cd6c3 Refactor temperature reporting for detectors and implement Hailo temp reading (#21395)
* Add Hailo temperature retrieval

* Refactor `get_hailo_temps()` to use ctxmanager

* Show Hailo temps in system UI

* Move hailo_platform import to get_hailo_temps

* Refactor temperatures calculations to use within detector block

* Adjust webUI to handle new location

---------

Co-authored-by: tigattack <10629864+tigattack@users.noreply.github.com>
2026-02-25 09:02:07 -07:00
Nicolas Mowen
c2cc23861a Export filter UI (#21322)
* Get started on export filters

* implement basic filter

* Implement filtering and adjust api

* Improve filter handling

* Improve navigation

* Cleanup

* handle scrolling
2026-02-25 09:02:07 -07:00
Josh Hawkins
2b46084260 Camera connection quality indicator (#21297)
* add camera connection quality metrics and indicator

* formatting

* move stall calcs to watchdog

* clean up

* change watchdog to 1s and separately track time for ffmpeg retry_interval

* implement status caching to reduce message volume
2026-02-25 09:02:07 -07:00
Nicolas Mowen
67466f215c Case management UI (#21299)
* Refactor export cards to match existing cards in other UI pages

* Show cases separately from exports

* Add proper filtering and display of cases

* Add ability to edit and select cases for exports

* Cleanup typing

* Hide if no unassigned

* Cleanup hiding logic

* fix scrolling

* Improve layout
2026-02-25 09:02:07 -07:00
Josh Hawkins
e011424947 refactor vainfo to search for first GPU (#21296)
use existing LibvaGpuSelector to pick appropritate libva device
2026-02-25 09:02:07 -07:00
Nicolas Mowen
a1a0051dd7 implement case management for export apis (#21295) 2026-02-25 09:02:07 -07:00
Nicolas Mowen
ff331060c3 Create scaffolding for case management (#21293) 2026-02-25 09:02:07 -07:00
Nicolas Mowen
7aab1f02ec Update version 2026-02-25 09:02:07 -07:00
263 changed files with 5778 additions and 24981 deletions


@@ -229,7 +229,6 @@ Reolink
restream
restreamed
restreaming
RJSF
rkmpp
rknn
rkrga


@@ -5,96 +5,72 @@ title: Configuring Generative AI
## Configuration
A Generative AI provider can be configured in the global config, which will make the Generative AI features available for use. There are currently 4 native providers available to integrate with Frigate. Other providers that support the OpenAI standard API can also be used. See the OpenAI-Compatible section below.
A Generative AI provider can be configured in the global config, which will make the Generative AI features available for use. There are currently 4 native providers available to integrate with Frigate. Other providers that support the OpenAI standard API can also be used. See the OpenAI section below.
To use Generative AI, you must define a single provider at the global level of your Frigate configuration. If the provider you choose requires an API key, you may either directly paste it in your configuration, or store it in an environment variable prefixed with `FRIGATE_`.
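For example, a minimal sketch of the environment variable approach (the variable name `FRIGATE_GENAI_API_KEY` is illustrative; any name prefixed with `FRIGATE_` works):

```yaml
genai:
  provider: openai
  model: gpt-4o
  # Substituted at runtime from the FRIGATE_GENAI_API_KEY environment variable
  api_key: "{FRIGATE_GENAI_API_KEY}"
```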
## Local Providers
Local providers run on your own hardware and keep all data processing private. These require a GPU or dedicated hardware for best performance.
## Ollama
:::warning
Running Generative AI models on CPU is not recommended, as high inference times make using Generative AI impractical.
Using Ollama on CPU is not recommended, as high inference times make using Generative AI impractical.
:::
### Recommended Local Models
You must use a vision-capable model with Frigate. The following models are recommended for local deployment:
| Model | Notes |
| ------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `qwen3-vl`    | Strong visual and situational understanding; strong ability to identify smaller objects and interactions with objects. |
| `qwen3.5`     | Strong situational understanding, but missing DeepStack from qwen3-vl, leading to worse performance when identifying objects in people's hands and other small details. |
| `Intern3.5VL` | Relatively fast with good vision comprehension |
| `gemma3` | Slower model with good vision and temporal understanding |
| `qwen2.5-vl` | Fast but capable model with good vision comprehension |
:::info
Each model is available in multiple parameter sizes (3b, 4b, 8b, etc.). Larger sizes are more capable of complex tasks and understanding of situations, but require more memory and computational resources. It is recommended to try multiple models and experiment to see which performs best.
:::
:::note
You should have at least 8 GB of RAM available (or VRAM if running on GPU) to run the 7B models, 16 GB to run the 13B models, and 24 GB to run the 33B models.
:::
### Model Types: Instruct vs Thinking
Most vision-language models are available as **instruct** models, which are fine-tuned to follow instructions and respond concisely to prompts. However, some models (such as certain Qwen-VL or minigpt variants) offer both **instruct** and **thinking** versions.
- **Instruct models** are always recommended for use with Frigate. These models generate direct, relevant, actionable descriptions that best fit Frigate's object and event summary use case.
- **Reasoning / Thinking models** are fine-tuned for more free-form, open-ended, and speculative outputs, which are typically not concise and may not provide the practical summaries Frigate expects. For this reason, Frigate does **not** recommend or support using thinking models.
Some models are labeled as **hybrid** (capable of both thinking and instruct tasks). In these cases, it is recommended to disable reasoning / thinking, which is generally model specific (see your model's documentation).
**Recommendation:**
Always select the `-instruct` or documented instruct/tagged variant of any model you use in your Frigate configuration. If in doubt, refer to your model provider's documentation or model library for guidance on the correct model variant to use.
### llama.cpp
[llama.cpp](https://github.com/ggml-org/llama.cpp) is a C++ implementation of LLaMA that provides a high-performance inference server.
It is highly recommended to host the llama.cpp server on a machine with a discrete graphics card, or on an Apple silicon Mac for best performance.
#### Supported Models
You must use a vision capable model with Frigate. The llama.cpp server supports various vision models in GGUF format.
#### Configuration
All llama.cpp native options can be passed through `provider_options`, including `temperature`, `top_k`, `top_p`, `min_p`, `repeat_penalty`, `repeat_last_n`, `seed`, `grammar`, and more. See the [llama.cpp server documentation](https://github.com/ggml-org/llama.cpp/blob/master/tools/server/README.md) for a complete list of available parameters.
```yaml
genai:
provider: llamacpp
base_url: http://localhost:8080
model: your-model-name
provider_options:
context_size: 16000 # Tell Frigate your context size so it can send the appropriate amount of information.
```
### Ollama
[Ollama](https://ollama.com/) allows you to self-host large language models and keep everything running locally. It is highly recommended to host this server on a machine with an Nvidia graphics card, or on an Apple silicon Mac for best performance.
Most of the 7b parameter 4-bit vision models will fit inside 8GB of VRAM. There is also a [Docker container](https://hub.docker.com/r/ollama/ollama) available.
Parallel requests also come with some caveats. You will need to set `OLLAMA_NUM_PARALLEL=1` and choose `OLLAMA_MAX_QUEUE` and `OLLAMA_MAX_LOADED_MODELS` values that are appropriate for your hardware and preferences. See the [Ollama documentation](https://docs.ollama.com/faq#how-does-ollama-handle-concurrent-requests).
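As a sketch, these can be set on the Ollama container; the queue and loaded-model values below are illustrative and should be tuned for your hardware:

```yaml
services:
  ollama:
    image: ollama/ollama
    environment:
      - OLLAMA_NUM_PARALLEL=1       # required: serialize requests for Frigate
      - OLLAMA_MAX_QUEUE=512        # illustrative: queue depth for pending requests
      - OLLAMA_MAX_LOADED_MODELS=1  # illustrative: limit concurrently loaded models
```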
### Model Types: Instruct vs Thinking
Most vision-language models are available as **instruct** models, which are fine-tuned to follow instructions and respond concisely to prompts. However, some models (such as certain Qwen-VL or minigpt variants) offer both **instruct** and **thinking** versions.
- **Instruct models** are always recommended for use with Frigate. These models generate direct, relevant, actionable descriptions that best fit Frigate's object and event summary use case.
- **Thinking models** are fine-tuned for more free-form, open-ended, and speculative outputs, which are typically not concise and may not provide the practical summaries Frigate expects. For this reason, Frigate does **not** recommend or support using thinking models.
Some models are labeled as **hybrid** (capable of both thinking and instruct tasks). In these cases, Frigate will always use instruct-style prompts and specifically disables thinking-mode behaviors to ensure concise, useful responses.
**Recommendation:**
Always select the `-instruct` or documented instruct/tagged variant of any model you use in your Frigate configuration. If in doubt, refer to your model provider's documentation or model library for guidance on the correct model variant to use.
### Supported Models
You must use a vision capable model with Frigate. Current model variants can be found [in their model library](https://ollama.com/library). Note that Frigate will not automatically download the model you specify in your config; Ollama will try to download the model, but it may take longer than the timeout, so it is recommended to pull the model beforehand by running `ollama pull your_model` on your Ollama server/Docker container. Note that the model specified in Frigate's config must match the downloaded model tag.
:::info
Each model is available in multiple parameter sizes (3b, 4b, 8b, etc.). Larger sizes are more capable of complex tasks and understanding of situations, but require more memory and computational resources. It is recommended to try multiple models and experiment to see which performs best.
:::
:::tip
If you are trying to use a single model for Frigate and Home Assistant, it will need to support vision and tool calling. qwen3-VL supports vision and tools simultaneously in Ollama.
:::
Note that Frigate will not automatically download the model you specify in your config. Ollama will try to download the model but it may take longer than the timeout, so it is recommended to pull the model beforehand by running `ollama pull your_model` on your Ollama server/Docker container. The model specified in Frigate's config must match the downloaded model tag.
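For example, assuming the `qwen2.5-vl:7b` tag (illustrative; substitute the exact tag you plan to reference in your config):

```bash
# Pull the model ahead of time so Frigate's first request doesn't hit the download timeout
ollama pull qwen2.5-vl:7b

# Confirm the downloaded tag; the model value in Frigate's config must match it exactly
ollama list
```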
The following models are recommended:
#### Configuration
| Model | Notes |
| ------------- | -------------------------------------------------------------------- |
| `qwen3-vl`    | Strong visual and situational understanding, higher VRAM requirement |
| `Intern3.5VL` | Relatively fast with good vision comprehension |
| `gemma3` | Strong frame-to-frame understanding, slower inference times |
| `qwen2.5-vl` | Fast but capable model with good vision comprehension |
:::note
You should have at least 8 GB of RAM available (or VRAM if running on GPU) to run the 7B models, 16 GB to run the 13B models, and 32 GB to run the 33B models.
:::
#### Ollama Cloud models
Ollama also supports [cloud models](https://ollama.com/cloud), where your local Ollama instance handles requests from Frigate, but model inference is performed in the cloud. Set up Ollama locally, sign in with your Ollama account, and specify the cloud model name in your Frigate config. For more details, see the Ollama cloud model [docs](https://docs.ollama.com/cloud).
### Configuration
```yaml
genai:
@@ -107,65 +83,49 @@ genai:
num_ctx: 8192 # make sure the context matches other services that are using ollama
```
### OpenAI-Compatible
## llama.cpp
Frigate supports any provider that implements the OpenAI API standard. This includes self-hosted solutions like [vLLM](https://docs.vllm.ai/), [LocalAI](https://localai.io/), and other OpenAI-compatible servers.
[llama.cpp](https://github.com/ggml-org/llama.cpp) is a C++ implementation of LLaMA that provides a high-performance inference server. Using llama.cpp directly gives you access to all native llama.cpp options and parameters.
:::tip
:::warning
For OpenAI-compatible servers (such as llama.cpp) that don't expose the configured context size in the API response, you can manually specify the context size in `provider_options`:
```yaml
genai:
provider: openai
base_url: http://your-llama-server
model: your-model-name
provider_options:
context_size: 8192 # Specify the configured context size
```
This ensures Frigate uses the correct context window size when generating prompts.
Using llama.cpp on CPU is not recommended, as high inference times make using Generative AI impractical.
:::
#### Configuration
It is highly recommended to host the llama.cpp server on a machine with a discrete graphics card, or on an Apple silicon Mac for best performance.
### Supported Models
You must use a vision capable model with Frigate. The llama.cpp server supports various vision models in GGUF format.
### Configuration
```yaml
genai:
provider: openai
base_url: http://your-server:port
api_key: your-api-key # May not be required for local servers
provider: llamacpp
base_url: http://localhost:8080
model: your-model-name
provider_options:
temperature: 0.7
repeat_penalty: 1.05
top_p: 0.8
top_k: 40
min_p: 0.05
seed: -1
```
To use a different OpenAI-compatible API endpoint, set the `OPENAI_BASE_URL` environment variable to your provider's API URL.
All llama.cpp native options can be passed through `provider_options`, including `temperature`, `top_k`, `top_p`, `min_p`, `repeat_penalty`, `repeat_last_n`, `seed`, `grammar`, and more. See the [llama.cpp server documentation](https://github.com/ggml-org/llama.cpp/blob/master/tools/server/README.md) for a complete list of available parameters.
## Cloud Providers
Cloud providers run on remote infrastructure and require an API key for authentication. These services handle all model inference on their servers.
### Ollama Cloud
Ollama also supports [cloud models](https://ollama.com/cloud), where your local Ollama instance handles requests from Frigate, but model inference is performed in the cloud. Set up Ollama locally, sign in with your Ollama account, and specify the cloud model name in your Frigate config. For more details, see the Ollama cloud model [docs](https://docs.ollama.com/cloud).
#### Configuration
```yaml
genai:
provider: ollama
base_url: http://localhost:11434
model: cloud-model-name
```
### Google Gemini
## Google Gemini
Google Gemini has a [free tier](https://ai.google.dev/pricing) for the API; however, the limits may not be sufficient for standard Frigate usage. Choose a plan appropriate for your installation.
#### Supported Models
### Supported Models
You must use a vision capable model with Frigate. Current model variants can be found [in their documentation](https://ai.google.dev/gemini-api/docs/models/gemini).
#### Get API Key
### Get API Key
To start using Gemini, you must first get an API key from [Google AI Studio](https://aistudio.google.com).
@@ -174,7 +134,7 @@ To start using Gemini, you must first get an API key from [Google AI Studio](htt
3. Click "Create API key in new project"
4. Copy the API key for use in your config
#### Configuration
### Configuration
```yaml
genai:
@@ -199,19 +159,19 @@ Other HTTP options are available, see the [python-genai documentation](https://g
:::
### OpenAI
## OpenAI
OpenAI does not have a free tier for their API. With the release of gpt-4o, pricing has been reduced and each generation should cost fractions of a cent if you choose to go this route.
#### Supported Models
### Supported Models
You must use a vision capable model with Frigate. Current model variants can be found [in their documentation](https://platform.openai.com/docs/models).
#### Get API Key
### Get API Key
To start using OpenAI, you must first [create an API key](https://platform.openai.com/api-keys) and [configure billing](https://platform.openai.com/settings/organization/billing/overview).
#### Configuration
### Configuration
```yaml
genai:
@@ -220,19 +180,42 @@ genai:
model: gpt-4o
```
### Azure OpenAI
:::note
To use a different OpenAI-compatible API endpoint, set the `OPENAI_BASE_URL` environment variable to your provider's API URL.
:::
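As a sketch, assuming a Docker Compose deployment (the endpoint URL is a placeholder):

```yaml
services:
  frigate:
    environment:
      # Placeholder endpoint; point this at your OpenAI-compatible provider
      - OPENAI_BASE_URL=https://api.your-provider.example/v1
```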
:::tip
For OpenAI-compatible servers (such as llama.cpp) that don't expose the configured context size in the API response, you can manually specify the context size in `provider_options`:
```yaml
genai:
provider: openai
base_url: http://your-llama-server
model: your-model-name
provider_options:
context_size: 8192 # Specify the configured context size
```
This ensures Frigate uses the correct context window size when generating prompts.
:::
## Azure OpenAI
Microsoft offers several vision models through Azure OpenAI. A subscription is required.
#### Supported Models
### Supported Models
You must use a vision capable model with Frigate. Current model variants can be found [in their documentation](https://learn.microsoft.com/en-us/azure/ai-services/openai/concepts/models).
#### Create Resource and Get API Key
### Create Resource and Get API Key
To start using Azure OpenAI, you must first [create a resource](https://learn.microsoft.com/azure/cognitive-services/openai/how-to/create-resource?pivots=web-portal#create-a-resource). You'll need your API key, model name, and resource URL, which must include the `api-version` parameter (see the example below).
#### Configuration
### Configuration
```yaml
genai:
@@ -240,4 +223,4 @@ genai:
base_url: https://instance.cognitiveservices.azure.com/openai/responses?api-version=2025-04-01-preview
model: gpt-5-mini
api_key: "{FRIGATE_OPENAI_API_KEY}"
```


@@ -12,20 +12,23 @@ Some of Frigate's enrichments can use a discrete GPU or integrated GPU for accel
Object detection and enrichments (like Semantic Search, Face Recognition, and License Plate Recognition) are independent features. To use a GPU / NPU for object detection, see the [Object Detectors](/configuration/object_detectors.md) documentation. If you want to use your GPU for any supported enrichments, you must choose the appropriate Frigate Docker image for your GPU / NPU and configure the enrichment according to its specific documentation.
- **AMD**
- ROCm support in the `-rocm` Frigate image is automatically detected for enrichments, but only some enrichment models are available due to ROCm's focus on LLMs and limited stability with certain neural network models. Frigate disables models that perform poorly or are unstable to ensure reliable operation, so only compatible enrichments may be active.
- **Intel**
- OpenVINO will automatically be detected and used for enrichments in the default Frigate image.
- **Note:** Intel NPUs have limited model support for enrichments. GPU is recommended for enrichments when available.
- **Nvidia**
- Nvidia GPUs will automatically be detected and used for enrichments in the `-tensorrt` Frigate image.
- Jetson devices will automatically be detected and used for enrichments in the `-tensorrt-jp6` Frigate image.
- **RockChip**
- RockChip NPU will automatically be detected and used for semantic search v1 and face recognition in the `-rk` Frigate image.
Utilizing a GPU for enrichments does not require you to use the same GPU for object detection. For example, you can run the `tensorrt` Docker image to run enrichments on an Nvidia GPU and still use other dedicated hardware like a Coral or Hailo for object detection. However, one combination that is not supported is the `tensorrt` image for object detection on an Nvidia GPU and Intel iGPU for enrichments.
Utilizing a GPU for enrichments does not require you to use the same GPU for object detection. For example, you can run the `tensorrt` Docker image for enrichments and still use other dedicated hardware like a Coral or Hailo for object detection. However, one combination that is not supported is TensorRT for object detection and OpenVINO for enrichments.
:::note


@@ -29,12 +29,12 @@ cameras:
When running Frigate through the HA Add-on, the Frigate `/config` directory is mapped to `/addon_configs/<addon_directory>` in the host, where `<addon_directory>` is specific to the variant of the Frigate Add-on you are running.
| Add-on Variant | Configuration directory |
| -------------------------- | ----------------------------------------- |
| Frigate | `/addon_configs/ccab4aaf_frigate` |
| Frigate (Full Access) | `/addon_configs/ccab4aaf_frigate-fa` |
| Frigate Beta | `/addon_configs/ccab4aaf_frigate-beta` |
| Frigate Beta (Full Access) | `/addon_configs/ccab4aaf_frigate-fa-beta` |
| Add-on Variant | Configuration directory |
| -------------------------- | -------------------------------------------- |
| Frigate | `/addon_configs/ccab4aaf_frigate` |
| Frigate (Full Access) | `/addon_configs/ccab4aaf_frigate-fa` |
| Frigate Beta | `/addon_configs/ccab4aaf_frigate-beta` |
| Frigate Beta (Full Access) | `/addon_configs/ccab4aaf_frigate-fa-beta` |
**Whenever you see `/config` in the documentation, it refers to this directory.**
@@ -109,16 +109,15 @@ detectors:
record:
enabled: True
motion:
retain:
days: 7
mode: motion
alerts:
retain:
days: 30
mode: motion
detections:
retain:
days: 30
mode: motion
snapshots:
enabled: True
@@ -138,10 +137,7 @@ cameras:
- detect
motion:
mask:
timestamp:
friendly_name: "Camera timestamp"
enabled: true
coordinates: "0.000,0.427,0.002,0.000,0.999,0.000,0.999,0.781,0.885,0.456,0.700,0.424,0.701,0.311,0.507,0.294,0.453,0.347,0.451,0.400"
- 0.000,0.427,0.002,0.000,0.999,0.000,0.999,0.781,0.885,0.456,0.700,0.424,0.701,0.311,0.507,0.294,0.453,0.347,0.451,0.400
```
### Standalone Intel Mini PC with USB Coral
@@ -169,16 +165,15 @@ detectors:
record:
enabled: True
motion:
retain:
days: 7
mode: motion
alerts:
retain:
days: 30
mode: motion
detections:
retain:
days: 30
mode: motion
snapshots:
enabled: True
@@ -198,10 +193,7 @@ cameras:
- detect
motion:
mask:
timestamp:
friendly_name: "Camera timestamp"
enabled: true
coordinates: "0.000,0.427,0.002,0.000,0.999,0.000,0.999,0.781,0.885,0.456,0.700,0.424,0.701,0.311,0.507,0.294,0.453,0.347,0.451,0.400"
- 0.000,0.427,0.002,0.000,0.999,0.000,0.999,0.781,0.885,0.456,0.700,0.424,0.701,0.311,0.507,0.294,0.453,0.347,0.451,0.400
```
### Home Assistant integrated Intel Mini PC with OpenVino
@@ -239,16 +231,15 @@ model:
record:
enabled: True
motion:
retain:
days: 7
mode: motion
alerts:
retain:
days: 30
mode: motion
detections:
retain:
days: 30
mode: motion
snapshots:
enabled: True
@@ -268,8 +259,5 @@ cameras:
- detect
motion:
mask:
timestamp:
friendly_name: "Camera timestamp"
enabled: true
coordinates: "0.000,0.427,0.002,0.000,0.999,0.000,0.999,0.781,0.885,0.456,0.700,0.424,0.701,0.311,0.507,0.294,0.453,0.347,0.451,0.400"
- 0.000,0.427,0.002,0.000,0.999,0.000,0.999,0.781,0.885,0.456,0.700,0.424,0.701,0.311,0.507,0.294,0.453,0.347,0.451,0.400
```


@@ -33,55 +33,18 @@ Your config file will be updated with the relative coordinates of the mask/zone:
```yaml
motion:
mask:
# Motion mask name (required)
mask1:
# Optional: A friendly name for the mask
friendly_name: "Timestamp area"
# Optional: Whether this mask is active (default: true)
enabled: true
# Required: Coordinates polygon for the mask
coordinates: "0.000,0.427,0.002,0.000,0.999,0.000,0.999,0.781,0.885,0.456,0.700,0.424,0.701,0.311,0.507,0.294,0.453,0.347,0.451,0.400"
mask: "0.000,0.427,0.002,0.000,0.999,0.000,0.999,0.781,0.885,0.456,0.700,0.424,0.701,0.311,0.507,0.294,0.453,0.347,0.451,0.400"
```
Multiple motion masks can be listed in your config:
Multiple masks can be listed in your config.
```yaml
motion:
mask:
mask1:
friendly_name: "Timestamp area"
enabled: true
coordinates: "0.239,1.246,0.175,0.901,0.165,0.805,0.195,0.802"
mask2:
friendly_name: "Tree area"
enabled: true
coordinates: "0.000,0.427,0.002,0.000,0.999,0.000,0.999,0.781,0.885,0.456"
- 0.239,1.246,0.175,0.901,0.165,0.805,0.195,0.802
- 0.000,0.427,0.002,0.000,0.999,0.000,0.999,0.781,0.885,0.456
```
Object filter masks can also be created through the UI or manually in the config. They are configured under the object filters section for each object type:
```yaml
objects:
filters:
person:
mask:
person_filter1:
friendly_name: "Roof area"
enabled: true
coordinates: "0.000,0.000,1.000,0.000,1.000,0.400,0.000,0.400"
car:
mask:
car_filter1:
friendly_name: "Sidewalk area"
enabled: true
coordinates: "0.000,0.700,1.000,0.700,1.000,1.000,0.000,1.000"
```
## Enabling/Disabling Masks
Both motion masks and object filter masks can be toggled on or off without removing them from the configuration. Disabled masks are completely ignored at runtime - they will not affect motion detection or object filtering. This is useful for temporarily disabling a mask during certain seasons or times of day without modifying the configuration.
### Further Clarification
This is a response to a [question posed on reddit](https://www.reddit.com/r/homeautomation/comments/ppxdve/replacing_my_doorbell_with_a_security_camera_a_6/hd876w4?utm_source=share&utm_medium=web2x&context=3):


@@ -34,7 +34,7 @@ Frigate supports multiple different detectors that work on different types of ha
**Nvidia GPU**
- [ONNX](#onnx): Nvidia GPUs will automatically be detected and used as a detector in the `-tensorrt` Frigate image when a supported ONNX model is configured.
- [ONNX](#onnx): TensorRT will automatically be detected and used as a detector in the `-tensorrt` Frigate image when a supported ONNX model is configured.
**Nvidia Jetson** <CommunityBadge />
@@ -65,7 +65,7 @@ This does not affect using hardware for accelerating other tasks such as [semant
# Officially Supported Detectors
Frigate provides a number of builtin detector types. By default, Frigate will use a single CPU detector. Other detectors may require additional configuration as described below. When using multiple detectors they will run in dedicated processes, but pull from a common queue of detection requests from across all cameras.
Frigate provides the following builtin detector types: `cpu`, `edgetpu`, `hailo8l`, `memryx`, `onnx`, `openvino`, `rknn`, and `tensorrt`. By default, Frigate will use a single CPU detector. Other detectors may require additional configuration as described below. When using multiple detectors they will run in dedicated processes, but pull from a common queue of detection requests from across all cameras.
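For example, a minimal sketch selecting one of these types (assuming an Intel iGPU available to OpenVINO; the detector key `ov` is arbitrary):

```yaml
detectors:
  ov:              # arbitrary name for this detector instance
    type: openvino # one of the builtin detector types
    device: GPU
```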
## Edge TPU Detector
@@ -157,13 +157,7 @@ A TensorFlow Lite model is provided in the container at `/edgetpu_model.tflite`
#### YOLOv9
YOLOv9 models that are compiled for TensorFlow Lite and properly quantized are supported, but not included by default. See the [instructions](#yolov9-for-google-coral-support) for downloading a model with support for the Google Coral.
:::tip
**Frigate+ Users:** Follow the [instructions](../integrations/plus#use-models) to set a model ID in your config file.
:::
YOLOv9 models that are compiled for TensorFlow Lite and properly quantized are supported, but not included by default. [Download the model](https://github.com/dbro/frigate-detector-edgetpu-yolo9/releases/download/v1.0/yolov9-s-relu6-best_320_int8_edgetpu.tflite), bind mount the file into the container, and provide the path with `model.path`. Note that the linked model requires a 17-label [labelmap file](https://raw.githubusercontent.com/dbro/frigate-detector-edgetpu-yolo9/refs/heads/main/labels-coco17.txt) that includes only 17 COCO classes.
<details>
<summary>YOLOv9 Setup & Config</summary>
@@ -660,9 +654,11 @@ ONNX is an open format for building machine learning models, Frigate supports ru
If the correct build is used for your GPU then the GPU will be detected and used automatically.
- **AMD**
- ROCm will automatically be detected and used with the ONNX detector in the `-rocm` Frigate image.
- **Intel**
- OpenVINO will automatically be detected and used with the ONNX detector in the default Frigate image.
- **Nvidia**
@@ -1560,11 +1556,7 @@ cd tensorrt_demos/yolo
python3 yolo_to_onnx.py -m yolov7-320
```
#### YOLOv9 for Google Coral Support
[Download the model](https://github.com/dbro/frigate-detector-edgetpu-yolo9/releases/download/v1.0/yolov9-s-relu6-best_320_int8_edgetpu.tflite), bind mount the file into the container, and provide the path with `model.path`. Note that the linked model requires a 17-label [labelmap file](https://raw.githubusercontent.com/dbro/frigate-detector-edgetpu-yolo9/refs/heads/main/labels-coco17.txt) that includes only 17 COCO classes.
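A minimal sketch of the resulting model config, assuming the downloaded files were placed in the mounted `/config` directory:

```yaml
model:
  # Paths assume the files were bind mounted into /config
  path: /config/yolov9-s-relu6-best_320_int8_edgetpu.tflite
  labelmap_path: /config/labels-coco17.txt
```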
#### YOLOv9 for other detectors
#### YOLOv9
The YOLOv9 model can be exported as ONNX using the command below. You can copy and paste the whole thing into your terminal and execute it, altering `MODEL_SIZE=t` and `IMG_SIZE=320` in the first line to the [model size](https://github.com/WongKinYiu/yolov9#performance) you would like to convert (available model sizes are `t`, `s`, `m`, `c`, and `e`; common image sizes are `320` and `640`).


@@ -345,15 +345,7 @@ objects:
# Optional: mask to prevent all object types from being detected in certain areas (default: no mask)
# Checks based on the bottom center of the bounding box of the object.
# NOTE: This mask is COMBINED with the object type specific mask below
mask:
# Object filter mask name (required)
mask1:
# Optional: A friendly name for the mask
friendly_name: "Object filter mask area"
# Optional: Whether this mask is active (default: true)
enabled: true
# Required: Coordinates polygon for the mask
coordinates: "0.000,0.000,0.781,0.000,0.781,0.278,0.000,0.278"
mask: 0.000,0.000,0.781,0.000,0.781,0.278,0.000,0.278
# Optional: filters to reduce false positives for specific object types
filters:
person:
@@ -373,15 +365,7 @@ objects:
threshold: 0.7
# Optional: mask to prevent this object type from being detected in certain areas (default: no mask)
# Checks based on the bottom center of the bounding box of the object
mask:
# Object filter mask name (required)
mask1:
# Optional: A friendly name for the mask
friendly_name: "Object filter mask area"
# Optional: Whether this mask is active (default: true)
enabled: true
# Required: Coordinates polygon for the mask
coordinates: "0.000,0.000,0.781,0.000,0.781,0.278,0.000,0.278"
mask: 0.000,0.000,0.781,0.000,0.781,0.278,0.000,0.278
# Optional: Configuration for AI generated tracked object descriptions
genai:
# Optional: Enable AI object description generation (default: shown below)
@@ -505,15 +489,7 @@ motion:
frame_height: 100
# Optional: motion mask
# NOTE: see docs for more detailed info on creating masks
mask:
# Motion mask name (required)
mask1:
# Optional: A friendly name for the mask
friendly_name: "Motion mask area"
# Optional: Whether this mask is active (default: true)
enabled: true
# Required: Coordinates polygon for the mask
coordinates: "0.000,0.469,1.000,0.469,1.000,1.000,0.000,1.000"
mask: 0.000,0.469,1.000,0.469,1.000,1.000,0.000,1.000
# Optional: improve contrast (default: shown below)
# Enables dynamic contrast improvement. This should help improve night detections at the cost of making motion detection more sensitive
# for daytime.
@@ -890,9 +866,6 @@ cameras:
front_steps:
# Optional: A friendly name or descriptive text for the zones
friendly_name: ""
# Optional: Whether this zone is active (default: shown below)
# Disabled zones are completely ignored at runtime - no object tracking or debug drawing
enabled: True
# Required: List of x,y coordinates to define the polygon of the zone.
# NOTE: Presence in a zone is evaluated only based on the bottom center of the objects bounding box.
coordinates: 0.033,0.306,0.324,0.138,0.439,0.185,0.042,0.428


@@ -76,6 +76,40 @@ Switching between V1 and V2 requires reindexing your embeddings. The embeddings
:::
### GenAI Provider (llama.cpp)
Frigate can use a GenAI provider for semantic search embeddings when that provider has the `embeddings` role. Currently, only **llama.cpp** supports multimodal embeddings (both text and images).
To use llama.cpp for semantic search:
1. Configure a GenAI provider in your config with `embeddings` in its `roles`.
2. Set `semantic_search.model` to the GenAI config key (e.g. `default`).
3. Start the llama.cpp server with `--embeddings` and `--mmproj` for image support:
```yaml
genai:
default:
provider: llamacpp
base_url: http://localhost:8080
model: your-model-name
roles:
- embeddings
- vision
- tools
semantic_search:
enabled: True
model: default
```
The llama.cpp server must be started with `--embeddings` for the embeddings API, and `--mmproj <mmproj.gguf>` when using image embeddings. See the [llama.cpp server documentation](https://github.com/ggml-org/llama.cpp/blob/master/tools/server/README.md) for details.
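A minimal launch sketch, assuming hypothetical `your-model.gguf` and `your-mmproj.gguf` files:

```bash
# Serve a multimodal GGUF model with the embeddings API enabled
llama-server \
  --model your-model.gguf \
  --mmproj your-mmproj.gguf \
  --embeddings \
  --host 0.0.0.0 \
  --port 8080
```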
:::note
Switching between Jina models and a GenAI provider requires reindexing. Embeddings from different backends are incompatible.
:::
### GPU Acceleration
The CLIP models are downloaded in ONNX format, and the `large` model can be accelerated using GPU hardware, when available. This depends on the Docker build that is used. You can also target a specific device in a multi-GPU installation.


@@ -10,10 +10,6 @@ For example, the cat in this image is currently in Zone 1, but **not** Zone 2.
Zones cannot have the same name as a camera. If desired, a single zone can span multiple cameras: if you have multiple cameras covering the same area, configure a zone with the same name on each camera.
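For example, a sketch with two hypothetical cameras sharing a `driveway` zone (coordinates are illustrative):

```yaml
cameras:
  front_door:
    zones:
      driveway: # same zone name on each camera
        coordinates: 0.033,0.306,0.324,0.138,0.439,0.185,0.042,0.428
  garage:
    zones:
      driveway:
        coordinates: 0.1,0.9,0.9,0.9,0.9,0.6,0.1,0.6
```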
## Enabling/Disabling Zones
Zones can be toggled on or off without removing them from the configuration. Disabled zones are completely ignored at runtime - objects will not be tracked for zone presence, and zones will not appear in the debug view. This is useful for temporarily disabling a zone during certain seasons or times of day without modifying the configuration.
During testing, enable the Zones option for the Debug view of your camera (Settings --> Debug) so you can adjust as needed. The zone line will increase in thickness when any object enters the zone.
To create a zone, follow [the steps for a "Motion mask"](masks.md), but use the section of the web UI for creating a zone instead.
@@ -90,6 +86,7 @@ cameras:
Only car objects can trigger the `front_yard_street` zone and only person objects can trigger the `entire_yard`. Objects will be tracked for any `person` that enters anywhere in the yard, and for cars only if they enter the street.
### Zone Loitering
Sometimes objects are expected to be passing through a zone, but an object loitering in an area is unexpected. Zones can be configured to have a minimum loitering time after which the object will be considered in the zone.
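A sketch using the `loitering_time` zone option (camera name and coordinates are illustrative):

```yaml
cameras:
  front_door:
    zones:
      porch:
        coordinates: 0.033,0.306,0.324,0.138,0.439,0.185,0.042,0.428
        # Seconds an object must remain in the zone before it is considered present
        loitering_time: 10
```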
@@ -97,7 +94,6 @@ Sometimes objects are expected to be passing through a zone, but an object loite
:::note
When using loitering zones, a review item will behave in the following way:
- When a person is in a loitering zone, the review item will remain active until the person leaves the loitering zone, regardless of whether they are stationary.
- When any other object is in a loitering zone, the review item will remain active until the loitering time is met. Then if the object is stationary the review item will end.


@@ -41,8 +41,8 @@ If the EQ13 is out of stock, the link below may take you to a suggested alternat
| Name | Capabilities | Notes |
| ------------------------------------------------------------------------------------------------------------- | -------------------------------------------------------------------------- | --------------------------------------------------- |
| Beelink EQ13 (<a href="https://amzn.to/4jn2qVr" target="_blank" rel="nofollow noopener sponsored">Amazon</a>) | Can run object detection on several 1080p cameras with low-medium activity | Dual gigabit NICs for easy isolated camera network. |
| Intel 1120p ([Amazon](https://www.amazon.com/Beelink-i3-1220P-Computer-Display-Gigabit/dp/B0DDCKT9YP)) | Can handle a large number of 1080p cameras with high activity | |
| Intel 125H ([Amazon](https://www.amazon.com/MINISFORUM-Pro-125H-Barebone-Computer-HDMI2-1/dp/B0FH21FSZM)) | Can handle a significant number of 1080p cameras with high activity | Includes NPU for more efficient detection in 0.17+ |
| Intel 1120p ([Amazon](https://www.amazon.com/Beelink-i3-1220P-Computer-Display-Gigabit/dp/B0DDCKT9YP)) | Can handle a large number of 1080p cameras with high activity | |
| Intel 125H ([Amazon](https://www.amazon.com/MINISFORUM-Pro-125H-Barebone-Computer-HDMI2-1/dp/B0FH21FSZM)) | Can handle a significant number of 1080p cameras with high activity | Includes NPU for more efficient detection in 0.17+ |
## Detectors
@@ -86,7 +86,7 @@ Frigate supports multiple different detectors that work on different types of ha
**Nvidia**
- [Nvidia GPU](#nvidia-gpus): Nvidia GPUs can provide efficient object detection.
- [TensorRT](#tensorrt---nvidia-gpu): TensorRT can run on Nvidia GPUs to provide efficient object detection.
- [Supports majority of model architectures via ONNX](../../configuration/object_detectors#onnx-supported-models)
- Runs well with any size models including large
@@ -172,7 +172,7 @@ Inference speeds vary greatly depending on the CPU or GPU used, some known examp
| Intel Arc A380 | ~ 6 ms | | 320: ~ 10 ms 640: ~ 22 ms | 336: 20 ms 448: 27 ms | |
| Intel Arc A750 | ~ 4 ms | | 320: ~ 8 ms | | |
### Nvidia GPUs
### TensorRT - Nvidia GPU
Frigate is able to utilize an Nvidia GPU which supports the 12.x series of CUDA libraries.
@@ -182,6 +182,8 @@ Frigate is able to utilize an Nvidia GPU which supports the 12.x series of CUDA
Make sure your host system has the [nvidia-container-runtime](https://docs.docker.com/config/containers/resource_constraints/#access-an-nvidia-gpu) installed to pass through the GPU to the container and the host system has a compatible driver installed for your GPU.
There are improved capabilities in newer GPU architectures that TensorRT can benefit from, such as INT8 operations and Tensor cores. The features compatible with your hardware will be optimized when the model is converted to a trt file. Currently, the script provided for generating the model includes a switch to enable/disable FP16 operations. If you wish to use newer features such as INT8 optimization, more work is required.
#### Compatibility References:
[NVIDIA TensorRT Support Matrix](https://docs.nvidia.com/deeplearning/tensorrt-rtx/latest/getting-started/support-matrix.html)
@@ -190,7 +192,7 @@ Make sure your host system has the [nvidia-container-runtime](https://docs.docke
[NVIDIA GPU Compute Capability](https://developer.nvidia.com/cuda-gpus)
Inference is done with the `onnx` detector type. Speeds will vary greatly depending on the GPU and the model used.
Inference speeds will vary greatly depending on the GPU and the model used.
`tiny (t)` variants are faster than the equivalent non-tiny model, some known examples are below:
✅ - Accelerated with CUDA Graphs


@@ -56,7 +56,7 @@ services:
volumes:
- /path/to/your/config:/config
- /path/to/your/storage:/media/frigate
- type: tmpfs # 1GB In-memory filesystem for recording segment storage
- type: tmpfs # Recommended: 1GB of memory
target: /tmp/cache
tmpfs:
size: 1000000000
@@ -123,7 +123,7 @@ On Raspberry Pi OS **Trixie**, the Hailo driver is no longer shipped with the ke
:::note
If you are **not** using a Raspberry Pi with **Bookworm OS**, skip this step and proceed directly to step 2.
If you are using Raspberry Pi with **Trixie OS**, also skip this step and proceed directly to step 2.
:::
@@ -133,13 +133,13 @@ On Raspberry Pi OS **Trixie**, the Hailo driver is no longer shipped with the ke
```bash
lsmod | grep hailo
```
If it shows `hailo_pci`, unload it:
```bash
sudo modprobe -r hailo_pci
```
Then locate the built-in kernel driver and rename it so it cannot be loaded.
Renaming allows the original driver to be restored later if needed.
First, locate the currently installed kernel module:
@@ -149,29 +149,28 @@ On Raspberry Pi OS **Trixie**, the Hailo driver is no longer shipped with the ke
```
Example output:
```
/lib/modules/6.6.31+rpt-rpi-2712/kernel/drivers/media/pci/hailo/hailo_pci.ko.xz
```
Save the module path to a variable:
```bash
BUILTIN=$(modinfo -n hailo_pci)
```
And rename the module by appending .bak:
```bash
sudo mv "$BUILTIN" "${BUILTIN}.bak"
```
Now refresh the kernel module map so the system recognizes the change:
```bash
sudo depmod -a
```
Reboot your Raspberry Pi:
```bash
@@ -207,6 +206,7 @@ On Raspberry Pi OS **Trixie**, the Hailo driver is no longer shipped with the ke
```
The script will:
- Install necessary build dependencies
- Clone and build the Hailo driver from the official repository
- Install the driver
@@ -236,18 +236,18 @@ On Raspberry Pi OS **Trixie**, the Hailo driver is no longer shipped with the ke
```
Verify the driver version:
```bash
cat /sys/module/hailo_pci/version
```
Verify that the firmware was installed correctly:
```bash
ls -l /lib/firmware/hailo/hailo8_fw.bin
```
**Optional: Fix PCIe descriptor page size error**
If you encounter the following error:
@@ -462,7 +462,7 @@ services:
- /etc/localtime:/etc/localtime:ro
- /path/to/your/config:/config
- /path/to/your/storage:/media/frigate
- type: tmpfs # 1GB In-memory filesystem for recording segment storage
- type: tmpfs # Recommended: 1GB of memory
target: /tmp/cache
tmpfs:
size: 1000000000
@@ -502,12 +502,12 @@ The official docker image tags for the current stable version are:
- `stable` - Standard Frigate build for amd64 & RPi Optimized Frigate build for arm64. This build includes support for Hailo devices as well.
- `stable-standard-arm64` - Standard Frigate build for arm64
- `stable-tensorrt` - Frigate build specific for amd64 devices running an Nvidia GPU
- `stable-tensorrt` - Frigate build specific for amd64 devices running an nvidia GPU
- `stable-rocm` - Frigate build for [AMD GPUs](../configuration/object_detectors.md#amdrocm-gpu-detector)
The community supported docker image tags for the current stable version are:
- `stable-tensorrt-jp6` - Frigate build optimized for Nvidia Jetson devices running Jetpack 6
- `stable-tensorrt-jp6` - Frigate build optimized for nvidia Jetson devices running Jetpack 6
- `stable-rk` - Frigate build for SBCs with Rockchip SoC
## Home Assistant Add-on
@@ -521,7 +521,7 @@ There are important limitations in HA OS to be aware of:
- Separate local storage for media is not yet supported by Home Assistant
- AMD GPUs are not supported because HA OS does not include the mesa driver.
- Intel NPUs are not supported because HA OS does not include the NPU firmware.
- Nvidia GPUs are not supported because addons do not support the Nvidia runtime.
- Nvidia GPUs are not supported because addons do not support the nvidia runtime.
:::
@@ -694,18 +694,17 @@ Log into QNAP, open Container Station. Frigate docker container should be listed
:::warning
macOS uses port 5000 for its Airplay Receiver service. If you want to expose port 5000 in Frigate for local app and API access, the port will need to be mapped to another port on the host, e.g. 5001.
Failure to remap port 5000 on the host will result in the WebUI and all API endpoints on port 5000 being unreachable, even if port 5000 is exposed correctly in Docker.
:::
Docker containers on macOS can be orchestrated by either [Docker Desktop](https://docs.docker.com/desktop/setup/install/mac-install/) or [OrbStack](https://orbstack.dev) (native Swift app). The difference in inference speeds is negligible; however, CPU usage, power consumption, and container start times will be lower on OrbStack because it is a native Swift application.
To allow Frigate to use the Apple Silicon Neural Engine / Processing Unit (NPU), the [Apple Silicon Detector](../configuration/object_detectors.md#apple-silicon-detector) must be running on the host (outside Docker).
#### Docker Compose example
```yaml
services:
frigate:
@@ -720,7 +719,7 @@ services:
ports:
- "8971:8971"
# If exposing on macOS, map to a different host port like 5001 or any other port with no conflicts
# - "5001:5000" # Internal unauthenticated access. Expose carefully.
# - "5001:5000" # Internal unauthenticated access. Expose carefully.
- "8554:8554" # RTSP feeds
extra_hosts:
# This is very important


@@ -20,6 +20,7 @@ Keeping Frigate up to date ensures you benefit from the latest features, perform
If you're running Frigate via Docker (recommended method), follow these steps:
1. **Stop the Container**:
- If using Docker Compose:
```bash
docker compose down frigate
@@ -30,8 +31,9 @@ If youre running Frigate via Docker (recommended method), follow these steps:
```
2. **Update and Pull the Latest Image**:
- If using Docker Compose:
- Edit your `docker-compose.yml` file to specify the desired version tag (e.g., `0.17.0` instead of `0.16.4`). For example:
- Edit your `docker-compose.yml` file to specify the desired version tag (e.g., `0.17.0` instead of `0.16.3`). For example:
```yaml
services:
frigate:
@@ -49,6 +51,7 @@ If youre running Frigate via Docker (recommended method), follow these steps:
```
3. **Start the Container**:
- If using Docker Compose:
```bash
docker compose up -d
@@ -72,15 +75,18 @@ If youre running Frigate via Docker (recommended method), follow these steps:
For users running Frigate as a Home Assistant Addon:
1. **Check for Updates**:
- Navigate to **Settings > Add-ons** in Home Assistant.
- Find your installed Frigate addon (e.g., "Frigate NVR" or "Frigate NVR (Full Access)").
- If an update is available, you'll see an "Update" button.
2. **Update the Addon**:
- Click the "Update" button next to the Frigate addon.
- Wait for the process to complete. Home Assistant will handle downloading and installing the new version.
3. **Restart the Addon**:
- After updating, go to the addons page and click "Restart" to apply the changes.
4. **Verify the Update**:
@@ -99,8 +105,8 @@ If an update causes issues:
1. Stop Frigate.
2. Restore your backed-up config file and database.
3. Revert to the previous image version:
- For Docker: Specify an older tag (e.g., `ghcr.io/blakeblackshear/frigate:0.16.4`) in your `docker run` command.
- For Docker Compose: Edit your `docker-compose.yml`, specify the older version tag (e.g., `ghcr.io/blakeblackshear/frigate:0.16.4`), and re-run `docker compose up -d`.
- For Docker: Specify an older tag (e.g., `ghcr.io/blakeblackshear/frigate:0.16.3`) in your `docker run` command.
- For Docker Compose: Edit your `docker-compose.yml`, specify the older version tag (e.g., `ghcr.io/blakeblackshear/frigate:0.16.3`), and re-run `docker compose up -d`.
- For Home Assistant: Reinstall the previous addon version manually via the repository if needed and restart the addon.
4. Verify the old version is running again.


@@ -119,7 +119,7 @@ services:
volumes:
- ./config:/config
- ./storage:/media/frigate
- type: tmpfs # 1GB In-memory filesystem for recording segment storage
- type: tmpfs # Optional: 1GB of memory, reduces SSD/SD Card wear
target: /tmp/cache
tmpfs:
size: 1000000000
@@ -240,10 +240,7 @@ cameras:
- detect
motion:
mask:
motion_area:
friendly_name: "Motion mask"
enabled: true
coordinates: "0,461,3,0,1919,0,1919,843,1699,492,1344,458,1346,336,973,317,869,375,866,432"
- 0,461,3,0,1919,0,1919,843,1699,492,1344,458,1346,336,973,317,869,375,866,432
```
### Step 6: Enable recordings

View File

@@ -429,30 +429,6 @@ Topic to adjust motion contour area for a camera. Expected value is an integer.
Topic with current motion contour area for a camera. Published value is an integer.
### `frigate/<camera_name>/motion_mask/<mask_name>/set`
Topic to turn a specific motion mask for a camera on and off. Expected values are `ON` and `OFF`.
### `frigate/<camera_name>/motion_mask/<mask_name>/state`
Topic with current state of a specific motion mask for a camera. Published values are `ON` and `OFF`.
### `frigate/<camera_name>/object_mask/<mask_name>/set`
Topic to turn a specific object mask for a camera on and off. Expected values are `ON` and `OFF`.
### `frigate/<camera_name>/object_mask/<mask_name>/state`
Topic with current state of a specific object mask for a camera. Published values are `ON` and `OFF`.
### `frigate/<camera_name>/zone/<zone_name>/set`
Topic to turn a specific zone for a camera on and off. Expected values are `ON` and `OFF`.
### `frigate/<camera_name>/zone/<zone_name>/state`
Topic with current state of a specific zone for a camera. Published values are `ON` and `OFF`.
### `frigate/<camera_name>/review_status`
Topic with current activity status of the camera. Possible values are `NONE`, `DETECTION`, or `ALERT`.
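To make the topic semantics above concrete, here is a small sketch using paho-mqtt. The broker address and the `front_door` camera name are placeholder assumptions, and the default `frigate` topic prefix is assumed:
```python
# Sketch: set the motion contour area for a hypothetical "front_door" camera
# and read the confirmed value back. Assumes a broker on localhost and the
# default "frigate" topic prefix.
import paho.mqtt.publish as publish
import paho.mqtt.subscribe as subscribe

publish.single(
    "frigate/front_door/motion_contour_area/set",
    payload="30",  # expected value is an integer
    hostname="localhost",
)

# State topics are published with the retain flag, so even a fresh
# subscriber receives the current value.
msg = subscribe.simple(
    "frigate/front_door/motion_contour_area/state",
    hostname="localhost",
)
print(msg.topic, msg.payload.decode())
```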

View File

@@ -54,8 +54,6 @@ Once you have [requested your first model](../plus/first_model.md) and gotten yo
You can either choose the new model from the Frigate+ pane in the Settings page of the Frigate UI, or manually set the model at the root level in your config:
```yaml
detectors: ...
model:
path: plus://<your_model_id>
```

View File

@@ -24,8 +24,6 @@ You will receive an email notification when your Frigate+ model is ready.
Models available in Frigate+ can be used with a special model path. No other information needs to be configured because it fetches the remaining config from Frigate+ automatically.
```yaml
detectors: ...
model:
path: plus://<your_model_id>
```

View File

@@ -15,15 +15,15 @@ There are three model types offered in Frigate+, `mobiledet`, `yolonas`, and `yo
Not all model types are supported by all detectors, so it's important to choose a model type to match your detector as shown in the table under [supported detector types](#supported-detector-types). You can test model types for compatibility and speed on your hardware by using the base models.
| Model Type | Description |
| ----------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `mobiledet` | Based on the same architecture as the default model included with Frigate. Runs on Google Coral devices and CPUs. |
| `yolonas` | A newer architecture that offers slightly higher accuracy and improved detection of small objects. Runs on Intel, NVidia GPUs, and AMD GPUs. |
| `yolov9` | A leading SOTA (state of the art) object detection model with similar performance to yolonas, but on a wider range of hardware options. Runs on most hardware. |
| Model Type | Description |
| ----------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
| `mobiledet` | Based on the same architecture as the default model included with Frigate. Runs on Google Coral devices and CPUs. |
| `yolonas` | A newer architecture that offers slightly higher accuracy and improved detection of small objects. Runs on Intel, NVidia GPUs, and AMD GPUs. |
| `yolov9` | A leading SOTA (state of the art) object detection model with similar performance to yolonas, but on a wider range of hardware options. Runs on Intel, NVidia GPUs, AMD GPUs, Hailo, MemryX, Apple Silicon, and Rockchip NPUs. |
### YOLOv9 Details
YOLOv9 models are available in `s`, `t`, and `edgetpu` variants. When requesting a `yolov9` model, you will be prompted to choose a variant. If you want the model to be compatible with a Google Coral, you will need to choose the `edgetpu` variant. If you are unsure which variant to choose, you should perform some tests with the base models to find the performance level that suits you. The `s` size is most similar to the current `yolonas` models in terms of inference times and accuracy, and a good place to start is the `320x320` resolution model for `yolov9s`.
YOLOv9 models are available in `s` and `t` sizes. When requesting a `yolov9` model, you will be prompted to choose a size. If you are unsure what size to choose, you should perform some tests with the base models to find the performance level that suits you. The `s` size is most similar to the current `yolonas` models in terms of inference times and accuracy, and a good place to start is the `320x320` resolution model for `yolov9s`.
:::info
@@ -37,21 +37,23 @@ If you have a Hailo device, you will need to specify the hardware you have when
#### Rockchip (RKNN) Support
Rockchip models are automatically converted as of 0.17. For 0.16, YOLOv9 onnx models will need to be manually converted. First, you will need to configure Frigate to use the model id for your YOLOv9 onnx model so it downloads the model to your `model_cache` directory. From there, you can follow the [documentation](/configuration/object_detectors.md#converting-your-own-onnx-model-to-rknn-format) to convert it.
For 0.16, YOLOv9 onnx models will need to be manually converted. First, you will need to configure Frigate to use the model id for your YOLOv9 onnx model so it downloads the model to your `model_cache` directory. From there, you can follow the [documentation](/configuration/object_detectors.md#converting-your-own-onnx-model-to-rknn-format) to convert it. Automatic conversion is available in 0.17 and later.
## Supported detector types
Currently, Frigate+ models support CPU (`cpu`), Google Coral (`edgetpu`), OpenVino (`openvino`), ONNX (`onnx`), Hailo (`hailo8l`), and Rockchip (`rknn`) detectors.
Currently, Frigate+ models support CPU (`cpu`), Google Coral (`edgetpu`), OpenVino (`openvino`), ONNX (`onnx`), Hailo (`hailo8l`), and Rockchip\* (`rknn`) detectors.
| Hardware | Recommended Detector Type | Recommended Model Type |
| -------------------------------------------------------------------------------- | ------------------------- | ---------------------- |
| [CPU](/configuration/object_detectors.md#cpu-detector-not-recommended) | `cpu` | `mobiledet` |
| [Coral (all form factors)](/configuration/object_detectors.md#edge-tpu-detector) | `edgetpu` | `yolov9` |
| [Coral (all form factors)](/configuration/object_detectors.md#edge-tpu-detector) | `edgetpu` | `mobiledet` |
| [Intel](/configuration/object_detectors.md#openvino-detector) | `openvino` | `yolov9` |
| [NVidia GPU](/configuration/object_detectors#onnx) | `onnx` | `yolov9` |
| [AMD ROCm GPU](/configuration/object_detectors#amdrocm-gpu-detector) | `onnx` | `yolov9` |
| [Hailo8/Hailo8L/Hailo8R](/configuration/object_detectors#hailo-8) | `hailo8l` | `yolov9` |
| [Rockchip NPU](/configuration/object_detectors#rockchip-platform) | `rknn` | `yolov9` |
| [Rockchip NPU](/configuration/object_detectors#rockchip-platform)\* | `rknn` | `yolov9` |
_\* Requires manual conversion in 0.16. Automatic conversion available in 0.17 and later._
## Improving your model
@@ -79,7 +81,7 @@ Candidate labels are also available for annotation. These labels don't have enou
Where possible, these labels are mapped to existing labels during training. For example, any `baby` labels are mapped to `person` until support for new labels is added.
The candidate labels are: `baby`, `bpost`, `badger`, `possum`, `rodent`, `chicken`, `groundhog`, `boar`, `hedgehog`, `tractor`, `golf cart`, `garbage truck`, `bus`, `sports ball`, `la_poste`, `lawnmower`, `heron`, `rickshaw`, `wombat`, `auspost`, `aramex`, `bobcat`, `mustelid`, `transoflex`, `airplane`, `drone`, `mountain_lion`, `crocodile`, `turkey`, `baby_stroller`, `monkey`, `coyote`, `porcupine`, `parcelforce`, `sheep`, `snake`, `helicopter`, `lizard`, `duck`, `hermes`, `cargus`, `fan_courier`, `sameday`
The candidate labels are: `baby`, `bpost`, `badger`, `possum`, `rodent`, `chicken`, `groundhog`, `boar`, `hedgehog`, `tractor`, `golf cart`, `garbage truck`, `bus`, `sports ball`
Candidate labels are not available for automatic suggestions.

View File

@@ -19,7 +19,6 @@ from fastapi import APIRouter, Body, Path, Request, Response
from fastapi.encoders import jsonable_encoder
from fastapi.params import Depends
from fastapi.responses import JSONResponse, PlainTextResponse, StreamingResponse
from filelock import FileLock, Timeout
from markupsafe import escape
from peewee import SQL, fn, operator
from pydantic import ValidationError
@@ -39,6 +38,7 @@ from frigate.config.camera.updater import (
CameraConfigUpdateTopic,
)
from frigate.ffmpeg_presets import FFMPEG_HWACCEL_VAAPI, _gpu_selector
from frigate.genai import GenAIClientManager
from frigate.jobs.media_sync import (
get_current_media_sync_job,
get_media_sync_job_by_id,
@@ -50,12 +50,10 @@ from frigate.types import JobStatusTypesEnum
from frigate.util.builtin import (
clean_camera_user_pass,
flatten_config_data,
load_labels,
process_config_query_string,
update_yaml_file_bulk,
)
from frigate.util.config import find_config_file
from frigate.util.schema import get_config_schema
from frigate.util.services import (
get_nvidia_driver_info,
process_logs,
@@ -80,7 +78,9 @@ def is_healthy():
@router.get("/config/schema.json", dependencies=[Depends(allow_public())])
def config_schema(request: Request):
return JSONResponse(content=get_config_schema(FrigateConfig))
return Response(
content=request.app.frigate_config.schema_json(), media_type="application/json"
)
@router.get(
@@ -126,10 +126,6 @@ def config(request: Request):
config: dict[str, dict[str, Any]] = config_obj.model_dump(
mode="json", warnings="none", exclude_none=True
)
config["detectors"] = {
name: detector.model_dump(mode="json", warnings="none", exclude_none=True)
for name, detector in config_obj.detectors.items()
}
# remove the mqtt password
config["mqtt"].pop("password", None)
@@ -200,54 +196,6 @@ def config(request: Request):
return JSONResponse(content=config)
@router.get("/ffmpeg/presets", dependencies=[Depends(allow_any_authenticated())])
def ffmpeg_presets():
"""Return available ffmpeg preset keys for config UI usage."""
# Whitelist based on documented presets in ffmpeg_presets.md
hwaccel_presets = [
"preset-rpi-64-h264",
"preset-rpi-64-h265",
"preset-vaapi",
"preset-intel-qsv-h264",
"preset-intel-qsv-h265",
"preset-nvidia",
"preset-jetson-h264",
"preset-jetson-h265",
"preset-rkmpp",
]
input_presets = [
"preset-http-jpeg-generic",
"preset-http-mjpeg-generic",
"preset-http-reolink",
"preset-rtmp-generic",
"preset-rtsp-generic",
"preset-rtsp-restream",
"preset-rtsp-restream-low-latency",
"preset-rtsp-udp",
"preset-rtsp-blue-iris",
]
record_output_presets = [
"preset-record-generic",
"preset-record-generic-audio-copy",
"preset-record-generic-audio-aac",
"preset-record-mjpeg",
"preset-record-jpeg",
"preset-record-ubiquiti",
]
return JSONResponse(
content={
"hwaccel_args": hwaccel_presets,
"input_args": input_presets,
"output_args": {
"record": record_output_presets,
"detect": [],
},
}
)
@router.get("/config/raw_paths", dependencies=[Depends(require_role(["admin"]))])
def config_raw_paths(request: Request):
"""Admin-only endpoint that returns camera paths and go2rtc streams without credential masking."""
@@ -425,124 +373,102 @@ def config_save(save_option: str, body: Any = Body(media_type="text/plain")):
@router.put("/config/set", dependencies=[Depends(require_role(["admin"]))])
def config_set(request: Request, body: AppConfigSetBody):
config_file = find_config_file()
lock = FileLock(f"{config_file}.lock", timeout=5)
with open(config_file, "r") as f:
old_raw_config = f.read()
try:
with lock:
with open(config_file, "r") as f:
old_raw_config = f.read()
updates = {}
try:
updates = {}
# process query string parameters (takes precedence over body.config_data)
parsed_url = urllib.parse.urlparse(str(request.url))
query_string = urllib.parse.parse_qs(parsed_url.query, keep_blank_values=True)
# process query string parameters (takes precedence over body.config_data)
parsed_url = urllib.parse.urlparse(str(request.url))
query_string = urllib.parse.parse_qs(
parsed_url.query, keep_blank_values=True
)
# Filter out empty keys but keep blank values for non-empty keys
query_string = {k: v for k, v in query_string.items() if k}
# Filter out empty keys but keep blank values for non-empty keys
query_string = {k: v for k, v in query_string.items() if k}
if query_string:
updates = process_config_query_string(query_string)
elif body.config_data:
updates = flatten_config_data(body.config_data)
if query_string:
updates = process_config_query_string(query_string)
elif body.config_data:
updates = flatten_config_data(body.config_data)
# Convert None values to empty strings for deletion (e.g., when deleting masks)
updates = {k: ("" if v is None else v) for k, v in updates.items()}
if not updates:
return JSONResponse(
content=(
{"success": False, "message": "No configuration data provided"}
),
status_code=400,
)
if not updates:
return JSONResponse(
content=(
{
"success": False,
"message": "No configuration data provided",
}
),
status_code=400,
)
# apply all updates in a single operation
update_yaml_file_bulk(config_file, updates)
# apply all updates in a single operation
update_yaml_file_bulk(config_file, updates)
# validate the updated config
with open(config_file, "r") as f:
new_raw_config = f.read()
try:
config = FrigateConfig.parse(new_raw_config)
except Exception:
with open(config_file, "w") as f:
f.write(old_raw_config)
f.close()
logger.error(f"\nConfig Error:\n\n{str(traceback.format_exc())}")
return JSONResponse(
content=(
{
"success": False,
"message": "Error parsing config. Check logs for error message.",
}
),
status_code=400,
)
except Exception as e:
logging.error(f"Error updating config: {e}")
return JSONResponse(
content=({"success": False, "message": "Error updating config"}),
status_code=500,
)
if body.requires_restart == 0 or body.update_topic:
old_config: FrigateConfig = request.app.frigate_config
request.app.frigate_config = config
request.app.genai_manager.update_config(config)
if body.update_topic:
if body.update_topic.startswith("config/cameras/"):
_, _, camera, field = body.update_topic.split("/")
if field == "add":
settings = config.cameras[camera]
elif field == "remove":
settings = old_config.cameras[camera]
else:
settings = config.get_nested_object(body.update_topic)
request.app.config_publisher.publish_update(
CameraConfigUpdateTopic(
CameraConfigUpdateEnum[field], camera
),
settings,
)
else:
# Generic handling for global config updates
settings = config.get_nested_object(body.update_topic)
# Publish None for removal, actual config for add/update
request.app.config_publisher.publisher.publish(
body.update_topic, settings
)
# validate the updated config
with open(config_file, "r") as f:
new_raw_config = f.read()
try:
config = FrigateConfig.parse(new_raw_config)
except Exception:
with open(config_file, "w") as f:
f.write(old_raw_config)
f.close()
logger.error(f"\nConfig Error:\n\n{str(traceback.format_exc())}")
return JSONResponse(
content=(
{
"success": True,
"message": "Config successfully updated, restart to apply",
"success": False,
"message": "Error parsing config. Check logs for error message.",
}
),
status_code=200,
status_code=400,
)
except Timeout:
except Exception as e:
logging.error(f"Error updating config: {e}")
return JSONResponse(
content=(
{
"success": False,
"message": "Another process is currently updating the config. Please try again in a few seconds.",
}
),
status_code=503,
content=({"success": False, "message": "Error updating config"}),
status_code=500,
)
if body.requires_restart == 0 or body.update_topic:
old_config: FrigateConfig = request.app.frigate_config
request.app.frigate_config = config
request.app.genai_manager = GenAIClientManager(config)
if body.update_topic:
if body.update_topic.startswith("config/cameras/"):
_, _, camera, field = body.update_topic.split("/")
if field == "add":
settings = config.cameras[camera]
elif field == "remove":
settings = old_config.cameras[camera]
else:
settings = config.get_nested_object(body.update_topic)
request.app.config_publisher.publish_update(
CameraConfigUpdateTopic(CameraConfigUpdateEnum[field], camera),
settings,
)
else:
# Generic handling for global config updates
settings = config.get_nested_object(body.update_topic)
# Publish None for removal, actual config for add/update
request.app.config_publisher.publisher.publish(
body.update_topic, settings
)
return JSONResponse(
content=(
{
"success": True,
"message": "Config successfully updated, restart to apply",
}
),
status_code=200,
)
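For orientation, the `config/set` endpoint above accepts dot-notation updates via either the query string or the request body. A minimal client-side sketch, assuming Frigate's API is reachable at `http://localhost:5000/api` and that authentication is handled out of band (the camera and field names are placeholders):
```python
# Sketch: drive the config/set endpoint shown above. The base URL, camera
# name, and field are illustrative assumptions, not documented values.
import requests

resp = requests.put(
    "http://localhost:5000/api/config/set",
    json={"config_data": {"cameras": {"front_door": {"motion": {"enabled": True}}}}},
)
print(resp.status_code, resp.json())  # expect {"success": true, ...} on success
```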
@router.get("/vainfo", dependencies=[Depends(allow_any_authenticated())])
def vainfo():
@@ -830,12 +756,6 @@ def get_sub_labels(split_joined: Optional[int] = None):
return JSONResponse(content=sub_labels)
@router.get("/audio_labels", dependencies=[Depends(allow_any_authenticated())])
def get_audio_labels():
labels = load_labels("/audio-labelmap.txt", prefill=521)
return JSONResponse(content=labels)
@router.get("/plus/models", dependencies=[Depends(allow_any_authenticated())])
def plusModels(request: Request, filterByCurrentModelDetector: bool = False):
if not request.app.frigate_config.plus_api.is_active():

View File

@@ -242,7 +242,7 @@ async def _execute_search_objects(
return JSONResponse(
content={
"success": False,
"message": "Error searching objects",
"message": f"Error searching objects: {str(e)}",
},
status_code=500,
)
@@ -330,7 +330,7 @@ async def _execute_get_live_context(
except Exception as e:
logger.error(f"Error executing get_live_context: {e}", exc_info=True)
return {
"error": "Error getting live context",
"error": f"Error getting live context: {str(e)}",
}

View File

@@ -65,7 +65,7 @@ class CameraState:
frame_copy = cv2.cvtColor(frame_copy, cv2.COLOR_YUV2BGR_I420)
# draw on the frame
if draw_options.get("mask"):
mask_overlay = np.where(self.camera_config.motion.rasterized_mask == [0])
mask_overlay = np.where(self.camera_config.motion.mask == [0])
frame_copy[mask_overlay] = [0, 0, 0]
if draw_options.get("bounding_boxes"):
@@ -197,10 +197,6 @@ class CameraState:
if draw_options.get("zones"):
for name, zone in self.camera_config.zones.items():
# skip disabled zones
if not zone.enabled:
continue
thickness = (
8
if any(

View File

@@ -15,7 +15,6 @@ from frigate.config.camera.updater import (
CameraConfigUpdatePublisher,
CameraConfigUpdateTopic,
)
from frigate.config.config import RuntimeFilterConfig, RuntimeMotionConfig
from frigate.const import (
CLEAR_ONGOING_REVIEW_SEGMENTS,
EXPIRE_AUDIO_ACTIVITY,
@@ -85,9 +84,6 @@ class Dispatcher:
"review_detections": self._on_detections_command,
"object_descriptions": self._on_object_description_command,
"review_descriptions": self._on_review_description_command,
"motion_mask": self._on_motion_mask_command,
"object_mask": self._on_object_mask_command,
"zone": self._on_zone_command,
}
self._global_settings_handlers: dict[str, Callable] = {
"notifications": self._on_global_notification_command,
@@ -104,20 +100,11 @@ class Dispatcher:
"""Handle receiving of payload from communicators."""
def handle_camera_command(
command_type: str,
camera_name: str,
command: str,
payload: str,
sub_command: str | None = None,
command_type: str, camera_name: str, command: str, payload: str
) -> None:
try:
if command_type == "set":
if sub_command:
self._camera_settings_handlers[command](
camera_name, sub_command, payload
)
else:
self._camera_settings_handlers[command](camera_name, payload)
self._camera_settings_handlers[command](camera_name, payload)
elif command_type == "ptz":
self._on_ptz_command(camera_name, payload)
except KeyError:
@@ -327,14 +314,6 @@ class Dispatcher:
camera_name = parts[-3]
command = parts[-2]
handle_camera_command("set", camera_name, command, payload)
elif len(parts) == 4 and topic.endswith("set"):
# example /cam_name/motion_mask/mask_name/set payload=ON|OFF
camera_name = parts[-4]
command = parts[-3]
sub_command = parts[-2]
handle_camera_command(
"set", camera_name, command, payload, sub_command
)
elif len(parts) == 2 and topic.endswith("set"):
command = parts[-2]
self._global_settings_handlers[command](payload)
@@ -879,149 +858,3 @@ class Dispatcher:
genai_settings,
)
self.publish(f"{camera_name}/review_descriptions/state", payload, retain=True)
def _on_motion_mask_command(
self, camera_name: str, mask_name: str, payload: str
) -> None:
"""Callback for motion mask topic."""
if payload not in ["ON", "OFF"]:
logger.error(f"Invalid payload for motion mask {mask_name}: {payload}")
return
motion_settings = self.config.cameras[camera_name].motion
if mask_name not in motion_settings.mask:
logger.error(f"Unknown motion mask: {mask_name}")
return
mask = motion_settings.mask[mask_name]
if not mask:
logger.error(f"Motion mask {mask_name} is None")
return
if payload == "ON":
if not mask.enabled_in_config:
logger.error(
f"Motion mask {mask_name} must be enabled in the config to be turned on via MQTT."
)
return
mask.enabled = payload == "ON"
# Recreate RuntimeMotionConfig to update rasterized_mask
motion_settings = RuntimeMotionConfig(
frame_shape=self.config.cameras[camera_name].frame_shape,
**motion_settings.model_dump(exclude_unset=True),
)
# Update the dispatcher's own config
self.config.cameras[camera_name].motion = motion_settings
self.config_updater.publish_update(
CameraConfigUpdateTopic(CameraConfigUpdateEnum.motion, camera_name),
motion_settings,
)
self.publish(
f"{camera_name}/motion_mask/{mask_name}/state", payload, retain=True
)
def _on_object_mask_command(
self, camera_name: str, mask_name: str, payload: str
) -> None:
"""Callback for object mask topic."""
if payload not in ["ON", "OFF"]:
logger.error(f"Invalid payload for object mask {mask_name}: {payload}")
return
object_settings = self.config.cameras[camera_name].objects
# Check if this is a global mask
mask_found = False
if mask_name in object_settings.mask:
mask = object_settings.mask[mask_name]
if mask:
if payload == "ON":
if not mask.enabled_in_config:
logger.error(
f"Object mask {mask_name} must be enabled in the config to be turned on via MQTT."
)
return
mask.enabled = payload == "ON"
mask_found = True
# Check if this is a per-object filter mask
for object_name, filter_config in object_settings.filters.items():
if mask_name in filter_config.mask:
mask = filter_config.mask[mask_name]
if mask:
if payload == "ON":
if not mask.enabled_in_config:
logger.error(
f"Object mask {mask_name} must be enabled in the config to be turned on via MQTT."
)
return
mask.enabled = payload == "ON"
mask_found = True
if not mask_found:
logger.error(f"Unknown object mask: {mask_name}")
return
# Recreate RuntimeFilterConfig for each object filter to update rasterized_mask
for object_name, filter_config in object_settings.filters.items():
# Merge global object masks with per-object filter masks
merged_mask = dict(filter_config.mask) # Copy filter-specific masks
# Add global object masks if they exist
if object_settings.mask:
for global_mask_id, global_mask_config in object_settings.mask.items():
# Use a global prefix to avoid key collisions
global_mask_id_prefixed = f"global_{global_mask_id}"
merged_mask[global_mask_id_prefixed] = global_mask_config
object_settings.filters[object_name] = RuntimeFilterConfig(
frame_shape=self.config.cameras[camera_name].frame_shape,
mask=merged_mask,
**filter_config.model_dump(
exclude_unset=True, exclude={"mask", "raw_mask"}
),
)
# Update the dispatcher's own config
self.config.cameras[camera_name].objects = object_settings
self.config_updater.publish_update(
CameraConfigUpdateTopic(CameraConfigUpdateEnum.objects, camera_name),
object_settings,
)
self.publish(
f"{camera_name}/object_mask/{mask_name}/state", payload, retain=True
)
def _on_zone_command(self, camera_name: str, zone_name: str, payload: str) -> None:
"""Callback for zone topic."""
if payload not in ["ON", "OFF"]:
logger.error(f"Invalid payload for zone {zone_name}: {payload}")
return
camera_config = self.config.cameras[camera_name]
if zone_name not in camera_config.zones:
logger.error(f"Unknown zone: {zone_name}")
return
if payload == "ON":
if not camera_config.zones[zone_name].enabled_in_config:
logger.error(
f"Zone {zone_name} must be enabled in the config to be turned on via MQTT."
)
return
camera_config.zones[zone_name].enabled = payload == "ON"
self.config_updater.publish_update(
CameraConfigUpdateTopic(CameraConfigUpdateEnum.zones, camera_name),
camera_config.zones,
)
self.publish(f"{camera_name}/zone/{zone_name}/state", payload, retain=True)

View File

@@ -133,29 +133,6 @@ class MqttClient(Communicator):
retain=True,
)
for mask_name, motion_mask in camera.motion.mask.items():
if motion_mask:
self.publish(
f"{camera_name}/motion_mask/{mask_name}/state",
"ON" if motion_mask.enabled else "OFF",
retain=True,
)
for mask_name, object_mask in camera.objects.mask.items():
if object_mask:
self.publish(
f"{camera_name}/object_mask/{mask_name}/state",
"ON" if object_mask.enabled else "OFF",
retain=True,
)
for zone_name, zone in camera.zones.items():
self.publish(
f"{camera_name}/zone/{zone_name}/state",
"ON" if zone.enabled else "OFF",
retain=True,
)
if self.config.notifications.enabled_in_config:
self.publish(
"notifications/state",
@@ -265,24 +242,6 @@ class MqttClient(Communicator):
self.on_mqtt_command,
)
for mask_name in self.config.cameras[name].motion.mask.keys():
self.client.message_callback_add(
f"{self.mqtt_config.topic_prefix}/{name}/motion_mask/{mask_name}/set",
self.on_mqtt_command,
)
for mask_name in self.config.cameras[name].objects.mask.keys():
self.client.message_callback_add(
f"{self.mqtt_config.topic_prefix}/{name}/object_mask/{mask_name}/set",
self.on_mqtt_command,
)
for zone_name in self.config.cameras[name].zones.keys():
self.client.message_callback_add(
f"{self.mqtt_config.topic_prefix}/{name}/zone/{zone_name}/set",
self.on_mqtt_command,
)
if self.config.notifications.enabled_in_config:
self.client.message_callback_add(
f"{self.mqtt_config.topic_prefix}/notifications/set",

View File

@@ -8,63 +8,39 @@ __all__ = ["AuthConfig"]
class AuthConfig(FrigateBaseModel):
enabled: bool = Field(
default=True,
title="Enable authentication",
description="Enable native authentication for the Frigate UI.",
)
enabled: bool = Field(default=True, title="Enable authentication")
reset_admin_password: bool = Field(
default=False,
title="Reset admin password",
description="If true, reset the admin user's password on startup and print the new password in logs.",
default=False, title="Reset the admin password on startup"
)
cookie_name: str = Field(
default="frigate_token",
title="JWT cookie name",
description="Name of the cookie used to store the JWT token for native authentication.",
pattern=r"^[a-z_]+$",
)
cookie_secure: bool = Field(
default=False,
title="Secure cookie flag",
description="Set the secure flag on the auth cookie; should be true when using TLS.",
default="frigate_token", title="Name for jwt token cookie", pattern=r"^[a-z_]+$"
)
cookie_secure: bool = Field(default=False, title="Set secure flag on cookie")
session_length: int = Field(
default=86400,
title="Session length",
description="Session duration in seconds for JWT-based sessions.",
ge=60,
default=86400, title="Session length for jwt session tokens", ge=60
)
refresh_time: int = Field(
default=1800,
title="Session refresh window",
description="When a session is within this many seconds of expiring, refresh it back to full length.",
title="Refresh the session if it is going to expire in this many seconds",
ge=30,
)
failed_login_rate_limit: Optional[str] = Field(
default=None,
title="Failed login limits",
description="Rate limiting rules for failed login attempts to reduce brute-force attacks.",
title="Rate limits for failed login attempts.",
)
trusted_proxies: list[str] = Field(
default=[],
title="Trusted proxies",
description="List of trusted proxy IPs used when determining client IP for rate limiting.",
title="Trusted proxies for determining IP address to rate limit",
)
# As of Feb 2023, OWASP recommends 600000 iterations for PBKDF2-SHA256
hash_iterations: int = Field(
default=600000,
title="Hash iterations",
description="Number of PBKDF2-SHA256 iterations to use when hashing user passwords.",
)
hash_iterations: int = Field(default=600000, title="Password hash iterations")
roles: Dict[str, List[str]] = Field(
default_factory=dict,
title="Role mappings",
description="Map roles to camera lists. An empty list grants access to all cameras for the role.",
title="Role to camera mappings. Empty list grants access to all cameras.",
)
admin_first_time_login: Optional[bool] = Field(
default=False,
title="First-time admin flag",
title="Internal field to expose first-time admin login flag to the UI",
description=(
"When true the UI may show a help link on the login page informing users how to sign in after an admin password reset. "
),
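Wherever these strings end up, pydantic surfaces both `title` and `description` in the generated JSON schema, which is what the `/config/schema.json` endpoint serves to the UI. A minimal sketch with a stand-in model (not Frigate's actual class):
```python
# Sketch: pydantic v2 copies Field title/description into the JSON schema.
# DemoAuthConfig is a stand-in model for illustration only.
from pydantic import BaseModel, Field

class DemoAuthConfig(BaseModel):
    enabled: bool = Field(
        default=True,
        title="Enable authentication",
        description="Enable native authentication for the Frigate UI.",
    )

schema = DemoAuthConfig.model_json_schema()
print(schema["properties"]["enabled"]["title"])
print(schema["properties"]["enabled"]["description"])
```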

View File

@@ -17,45 +17,25 @@ class AudioFilterConfig(FrigateBaseModel):
default=0.8,
ge=AUDIO_MIN_CONFIDENCE,
lt=1.0,
title="Minimum audio confidence",
description="Minimum confidence threshold for the audio event to be counted.",
title="Minimum detection confidence threshold for audio to be counted.",
)
class AudioConfig(FrigateBaseModel):
enabled: bool = Field(
default=False,
title="Enable audio detection",
description="Enable or disable audio event detection for all cameras; can be overridden per-camera.",
)
enabled: bool = Field(default=False, title="Enable audio events.")
max_not_heard: int = Field(
default=30,
title="End timeout",
description="Amount of seconds without the configured audio type before the audio event is ended.",
default=30, title="Seconds of not hearing the type of audio to end the event."
)
min_volume: int = Field(
default=500,
title="Minimum volume",
description="Minimum RMS volume threshold required to run audio detection; lower values increase sensitivity (e.g., 200 high, 500 medium, 1000 low).",
default=500, title="Min volume required to run audio detection."
)
listen: list[str] = Field(
default=DEFAULT_LISTEN_AUDIO,
title="Listen types",
description="List of audio event types to detect (for example: bark, fire_alarm, scream, speech, yell).",
default=DEFAULT_LISTEN_AUDIO, title="Audio to listen for."
)
filters: Optional[dict[str, AudioFilterConfig]] = Field(
None,
title="Audio filters",
description="Per-audio-type filter settings such as confidence thresholds used to reduce false positives.",
None, title="Audio filters."
)
enabled_in_config: Optional[bool] = Field(
None,
title="Original audio state",
description="Indicates whether audio detection was originally enabled in the static config file.",
)
num_threads: int = Field(
default=2,
title="Detection threads",
description="Number of threads to use for audio detection processing.",
ge=1,
None, title="Keep track of original state of audio detection."
)
num_threads: int = Field(default=2, title="Number of detection threads", ge=1)

View File

@@ -29,88 +29,45 @@ class BirdseyeModeEnum(str, Enum):
class BirdseyeLayoutConfig(FrigateBaseModel):
scaling_factor: float = Field(
default=2.0,
title="Scaling factor",
description="Scaling factor used by the layout calculator (range 1.0 to 5.0).",
ge=1.0,
le=5.0,
)
max_cameras: Optional[int] = Field(
default=None,
title="Max cameras",
description="Maximum number of cameras to display at once in Birdseye; shows the most recent cameras.",
default=2.0, title="Birdseye Scaling Factor", ge=1.0, le=5.0
)
max_cameras: Optional[int] = Field(default=None, title="Max cameras")
class BirdseyeConfig(FrigateBaseModel):
enabled: bool = Field(
default=True,
title="Enable Birdseye",
description="Enable or disable the Birdseye view feature.",
)
enabled: bool = Field(default=True, title="Enable birdseye view.")
mode: BirdseyeModeEnum = Field(
default=BirdseyeModeEnum.objects,
title="Tracking mode",
description="Mode for including cameras in Birdseye: 'objects', 'motion', or 'continuous'.",
default=BirdseyeModeEnum.objects, title="Tracking mode."
)
restream: bool = Field(
default=False,
title="Restream RTSP",
description="Re-stream the Birdseye output as an RTSP feed; enabling this will keep Birdseye running continuously.",
)
width: int = Field(
default=1280,
title="Width",
description="Output width (pixels) of the composed Birdseye frame.",
)
height: int = Field(
default=720,
title="Height",
description="Output height (pixels) of the composed Birdseye frame.",
)
restream: bool = Field(default=False, title="Restream birdseye via RTSP.")
width: int = Field(default=1280, title="Birdseye width.")
height: int = Field(default=720, title="Birdseye height.")
quality: int = Field(
default=8,
title="Encoding quality",
description="Encoding quality for the Birdseye mpeg1 feed (1 highest quality, 31 lowest).",
title="Encoding quality.",
ge=1,
le=31,
)
inactivity_threshold: int = Field(
default=30,
title="Inactivity threshold",
description="Seconds of inactivity after which a camera will stop being shown in Birdseye.",
gt=0,
default=30, title="Birdseye Inactivity Threshold", gt=0
)
layout: BirdseyeLayoutConfig = Field(
default_factory=BirdseyeLayoutConfig,
title="Layout",
description="Layout options for the Birdseye composition.",
default_factory=BirdseyeLayoutConfig, title="Birdseye Layout Config"
)
idle_heartbeat_fps: float = Field(
default=0.0,
ge=0.0,
le=10.0,
title="Idle heartbeat FPS",
description="Frames-per-second to resend the last composed Birdseye frame when idle; set to 0 to disable.",
title="Idle heartbeat FPS (0 disables, max 10)",
)
# uses BaseModel because some global attributes are not available at the camera level
class BirdseyeCameraConfig(BaseModel):
enabled: bool = Field(
default=True,
title="Enable Birdseye",
description="Enable or disable the Birdseye view feature.",
)
enabled: bool = Field(default=True, title="Enable birdseye view for camera.")
mode: BirdseyeModeEnum = Field(
default=BirdseyeModeEnum.objects,
title="Tracking mode",
description="Mode for including cameras in Birdseye: 'objects', 'motion', or 'continuous'.",
default=BirdseyeModeEnum.objects, title="Tracking mode for camera."
)
order: int = Field(
default=0,
title="Position",
description="Numeric position controlling the camera's ordering in the Birdseye layout.",
)
order: int = Field(default=0, title="Position of the camera in the birdseye view.")

View File

@@ -50,17 +50,10 @@ class CameraTypeEnum(str, Enum):
class CameraConfig(FrigateBaseModel):
name: Optional[str] = Field(
None,
title="Camera name",
description="Camera name is required",
pattern=REGEX_CAMERA_NAME,
)
name: Optional[str] = Field(None, title="Camera name.", pattern=REGEX_CAMERA_NAME)
friendly_name: Optional[str] = Field(
None,
title="Friendly name",
description="Camera friendly name used in the Frigate UI",
None, title="Camera friendly name used in the Frigate UI."
)
@model_validator(mode="before")
@@ -70,129 +63,80 @@ class CameraConfig(FrigateBaseModel):
pass
return values
enabled: bool = Field(default=True, title="Enabled", description="Enabled")
enabled: bool = Field(default=True, title="Enable camera.")
# Options with global fallback
audio: AudioConfig = Field(
default_factory=AudioConfig,
title="Audio events",
description="Settings for audio-based event detection for this camera.",
default_factory=AudioConfig, title="Audio events configuration."
)
audio_transcription: CameraAudioTranscriptionConfig = Field(
default_factory=CameraAudioTranscriptionConfig,
title="Audio transcription",
description="Settings for live and speech audio transcription used for events and live captions.",
title="Audio transcription config.",
)
birdseye: BirdseyeCameraConfig = Field(
default_factory=BirdseyeCameraConfig,
title="Birdseye",
description="Settings for the Birdseye composite view that composes multiple camera feeds into a single layout.",
default_factory=BirdseyeCameraConfig, title="Birdseye camera configuration."
)
detect: DetectConfig = Field(
default_factory=DetectConfig,
title="Object Detection",
description="Settings for the detection/detect role used to run object detection and initialize trackers.",
default_factory=DetectConfig, title="Object detection configuration."
)
face_recognition: CameraFaceRecognitionConfig = Field(
default_factory=CameraFaceRecognitionConfig,
title="Face recognition",
description="Settings for face detection and recognition for this camera.",
)
ffmpeg: CameraFfmpegConfig = Field(
title="FFmpeg",
description="FFmpeg settings including binary path, args, hwaccel options, and per-role output args.",
default_factory=CameraFaceRecognitionConfig, title="Face recognition config."
)
ffmpeg: CameraFfmpegConfig = Field(title="FFmpeg configuration for the camera.")
live: CameraLiveConfig = Field(
default_factory=CameraLiveConfig,
title="Live playback",
description="Settings used by the Web UI to control live stream selection, resolution and quality.",
default_factory=CameraLiveConfig, title="Live playback settings."
)
lpr: CameraLicensePlateRecognitionConfig = Field(
default_factory=CameraLicensePlateRecognitionConfig,
title="License Plate Recognition",
description="License plate recognition settings including detection thresholds, formatting, and known plates.",
)
motion: MotionConfig = Field(
None,
title="Motion detection",
description="Default motion detection settings for this camera.",
default_factory=CameraLicensePlateRecognitionConfig, title="LPR config."
)
motion: MotionConfig = Field(None, title="Motion detection configuration.")
objects: ObjectConfig = Field(
default_factory=ObjectConfig,
title="Objects",
description="Object tracking defaults including which labels to track and per-object filters.",
default_factory=ObjectConfig, title="Object configuration."
)
record: RecordConfig = Field(
default_factory=RecordConfig,
title="Recording",
description="Recording and retention settings for this camera.",
default_factory=RecordConfig, title="Record configuration."
)
review: ReviewConfig = Field(
default_factory=ReviewConfig,
title="Review",
description="Settings that control alerts, detections, and GenAI review summaries used by the UI and storage for this camera.",
default_factory=ReviewConfig, title="Review configuration."
)
semantic_search: CameraSemanticSearchConfig = Field(
default_factory=CameraSemanticSearchConfig,
title="Semantic Search",
description="Settings for semantic search which builds and queries object embeddings to find similar items.",
title="Semantic search configuration.",
)
snapshots: SnapshotsConfig = Field(
default_factory=SnapshotsConfig,
title="Snapshots",
description="Settings for saved JPEG snapshots of tracked objects for this camera.",
default_factory=SnapshotsConfig, title="Snapshot configuration."
)
timestamp_style: TimestampStyleConfig = Field(
default_factory=TimestampStyleConfig,
title="Timestamp style",
description="Styling options for in-feed timestamps applied to recordings and snapshots.",
default_factory=TimestampStyleConfig, title="Timestamp style configuration."
)
# Options without global fallback
best_image_timeout: int = Field(
default=60,
title="Best image timeout",
description="How long to wait for the image with the highest confidence score.",
title="How long to wait for the image with the highest confidence score.",
)
mqtt: CameraMqttConfig = Field(
default_factory=CameraMqttConfig,
title="MQTT",
description="MQTT image publishing settings.",
default_factory=CameraMqttConfig, title="MQTT configuration."
)
notifications: NotificationConfig = Field(
default_factory=NotificationConfig,
title="Notifications",
description="Settings to enable and control notifications for this camera.",
default_factory=NotificationConfig, title="Notifications configuration."
)
onvif: OnvifConfig = Field(
default_factory=OnvifConfig,
title="ONVIF",
description="ONVIF connection and PTZ autotracking settings for this camera.",
)
type: CameraTypeEnum = Field(
default=CameraTypeEnum.generic,
title="Camera type",
description="Camera Type",
default_factory=OnvifConfig, title="Camera Onvif Configuration."
)
type: CameraTypeEnum = Field(default=CameraTypeEnum.generic, title="Camera Type")
ui: CameraUiConfig = Field(
default_factory=CameraUiConfig,
title="Camera UI",
description="Display ordering and visibility for this camera in the UI. Ordering affects the default dashboard. For more granular control, use camera groups.",
default_factory=CameraUiConfig, title="Camera UI Modifications."
)
webui_url: Optional[str] = Field(
None,
title="Camera URL",
description="URL to visit the camera directly from system page",
title="URL to visit the camera directly from system page",
)
zones: dict[str, ZoneConfig] = Field(
default_factory=dict,
title="Zones",
description="Zones allow you to define a specific area of the frame so you can determine whether or not an object is within a particular area.",
default_factory=dict, title="Zone configuration."
)
enabled_in_config: Optional[bool] = Field(
default=None,
title="Original camera state",
description="Keep track of original state of camera.",
default=None, title="Keep track of original state of camera."
)
_ffmpeg_cmds: list[dict[str, list[str]]] = PrivateAttr()

View File

@@ -8,82 +8,56 @@ __all__ = ["DetectConfig", "StationaryConfig", "StationaryMaxFramesConfig"]
class StationaryMaxFramesConfig(FrigateBaseModel):
default: Optional[int] = Field(
default=None,
title="Default max frames",
description="Default maximum frames to track a stationary object before stopping.",
ge=1,
)
default: Optional[int] = Field(default=None, title="Default max frames.", ge=1)
objects: dict[str, int] = Field(
default_factory=dict,
title="Object max frames",
description="Per-object overrides for maximum frames to track stationary objects.",
default_factory=dict, title="Object specific max frames."
)
class StationaryConfig(FrigateBaseModel):
interval: Optional[int] = Field(
default=None,
title="Stationary interval",
description="How often (in frames) to run a detection check to confirm a stationary object.",
title="Frame interval for checking stationary objects.",
gt=0,
)
threshold: Optional[int] = Field(
default=None,
title="Stationary threshold",
description="Number of frames with no position change required to mark an object as stationary.",
title="Number of frames without a position change for an object to be considered stationary",
ge=1,
)
max_frames: StationaryMaxFramesConfig = Field(
default_factory=StationaryMaxFramesConfig,
title="Max frames",
description="Limits how long stationary objects are tracked before being discarded.",
title="Max frames for stationary objects.",
)
classifier: bool = Field(
default=True,
title="Enable visual classifier",
description="Use a visual classifier to detect truly stationary objects even when bounding boxes jitter.",
title="Enable visual classifier for determing if objects with jittery bounding boxes are stationary.",
)
class DetectConfig(FrigateBaseModel):
enabled: bool = Field(
default=False,
title="Detection enabled",
description="Enable or disable object detection for all cameras; can be overridden per-camera. Detection must be enabled for object tracking to run.",
)
enabled: bool = Field(default=False, title="Detection Enabled.")
height: Optional[int] = Field(
default=None,
title="Detect height",
description="Height (pixels) of frames used for the detect stream; leave empty to use the native stream resolution.",
default=None, title="Height of the stream for the detect role."
)
width: Optional[int] = Field(
default=None,
title="Detect width",
description="Width (pixels) of frames used for the detect stream; leave empty to use the native stream resolution.",
default=None, title="Width of the stream for the detect role."
)
fps: int = Field(
default=5,
title="Detect FPS",
description="Desired frames per second to run detection on; lower values reduce CPU usage (recommended value is 5, only set higher - at most 10 - if tracking extremely fast moving objects).",
default=5, title="Number of frames per second to process through detection."
)
min_initialized: Optional[int] = Field(
default=None,
title="Minimum initialization frames",
description="Number of consecutive detection hits required before creating a tracked object. Increase to reduce false initializations. Default value is fps divided by 2.",
title="Minimum number of consecutive hits for an object to be initialized by the tracker.",
)
max_disappeared: Optional[int] = Field(
default=None,
title="Maximum disappeared frames",
description="Number of frames without a detection before a tracked object is considered gone.",
title="Maximum number of frames the object can disappear before detection ends.",
)
stationary: StationaryConfig = Field(
default_factory=StationaryConfig,
title="Stationary objects config",
description="Settings to detect and manage objects that remain stationary for a period of time.",
title="Stationary objects config.",
)
annotation_offset: int = Field(
default=0,
title="Annotation offset",
description="Milliseconds to shift detect annotations to better align timeline bounding boxes with recordings; can be positive or negative.",
default=0, title="Milliseconds to offset detect annotations by."
)

View File

@@ -35,58 +35,39 @@ DETECT_FFMPEG_OUTPUT_ARGS_DEFAULT = [
class FfmpegOutputArgsConfig(FrigateBaseModel):
detect: Union[str, list[str]] = Field(
default=DETECT_FFMPEG_OUTPUT_ARGS_DEFAULT,
title="Detect output arguments",
description="Default output arguments for detect role streams.",
title="Detect role FFmpeg output arguments.",
)
record: Union[str, list[str]] = Field(
default=RECORD_FFMPEG_OUTPUT_ARGS_DEFAULT,
title="Record output arguments",
description="Default output arguments for record role streams.",
title="Record role FFmpeg output arguments.",
)
class FfmpegConfig(FrigateBaseModel):
path: str = Field(
default="default",
title="FFmpeg path",
description='Path to the FFmpeg binary to use or a version alias ("5.0" or "7.0").',
)
path: str = Field(default="default", title="FFmpeg path")
global_args: Union[str, list[str]] = Field(
default=FFMPEG_GLOBAL_ARGS_DEFAULT,
title="FFmpeg global arguments",
description="Global arguments passed to FFmpeg processes.",
default=FFMPEG_GLOBAL_ARGS_DEFAULT, title="Global FFmpeg arguments."
)
hwaccel_args: Union[str, list[str]] = Field(
default="auto",
title="Hardware acceleration arguments",
description="Hardware acceleration arguments for FFmpeg. Provider-specific presets are recommended.",
default="auto", title="FFmpeg hardware acceleration arguments."
)
input_args: Union[str, list[str]] = Field(
default=FFMPEG_INPUT_ARGS_DEFAULT,
title="Input arguments",
description="Input arguments applied to FFmpeg input streams.",
default=FFMPEG_INPUT_ARGS_DEFAULT, title="FFmpeg input arguments."
)
output_args: FfmpegOutputArgsConfig = Field(
default_factory=FfmpegOutputArgsConfig,
title="Output arguments",
description="Default output arguments used for different FFmpeg roles such as detect and record.",
title="FFmpeg output arguments per role.",
)
retry_interval: float = Field(
default=10.0,
title="FFmpeg retry time",
description="Seconds to wait before attempting to reconnect a camera stream after failure. Default is 10.",
title="Time in seconds to wait before FFmpeg retries connecting to the camera.",
gt=0.0,
)
apple_compatibility: bool = Field(
default=False,
title="Apple compatibility",
description="Enable HEVC tagging for better Apple player compatibility when recording H.265.",
)
gpu: int = Field(
default=0,
title="GPU index",
description="Default GPU index used for hardware acceleration if available.",
title="Set tag on HEVC (H.265) recording stream to improve compatibility with Apple players.",
)
gpu: int = Field(default=0, title="GPU index to use for hardware acceleration.")
@property
def ffmpeg_path(self) -> str:
@@ -114,36 +95,21 @@ class CameraRoleEnum(str, Enum):
class CameraInput(FrigateBaseModel):
path: EnvString = Field(
title="Input path",
description="Camera input stream URL or path.",
)
roles: list[CameraRoleEnum] = Field(
title="Input roles",
description="Roles for this input stream.",
)
path: EnvString = Field(title="Camera input path.")
roles: list[CameraRoleEnum] = Field(title="Roles assigned to this input.")
global_args: Union[str, list[str]] = Field(
default_factory=list,
title="FFmpeg global arguments",
description="FFmpeg global arguments for this input stream.",
default_factory=list, title="FFmpeg global arguments."
)
hwaccel_args: Union[str, list[str]] = Field(
default_factory=list,
title="Hardware acceleration arguments",
description="Hardware acceleration arguments for this input stream.",
default_factory=list, title="FFmpeg hardware acceleration arguments."
)
input_args: Union[str, list[str]] = Field(
default_factory=list,
title="Input arguments",
description="Input arguments specific to this stream.",
default_factory=list, title="FFmpeg input arguments."
)
class CameraFfmpegConfig(FfmpegConfig):
inputs: list[CameraInput] = Field(
title="Camera inputs",
description="List of input stream definitions (paths and roles) for this camera.",
)
inputs: list[CameraInput] = Field(title="Camera inputs.")
@field_validator("inputs")
@classmethod

View File

@@ -26,44 +26,21 @@ class GenAIRoleEnum(str, Enum):
class GenAIConfig(FrigateBaseModel):
"""Primary GenAI Config to define GenAI Provider."""
api_key: Optional[EnvString] = Field(
default=None,
title="API key",
description="API key required by some providers (can also be set via environment variables).",
)
base_url: Optional[str] = Field(
default=None,
title="Base URL",
description="Base URL for self-hosted or compatible providers (for example an Ollama instance).",
)
model: str = Field(
default="gpt-4o",
title="Model",
description="The model to use from the provider for generating descriptions or summaries.",
)
provider: GenAIProviderEnum | None = Field(
default=None,
title="Provider",
description="The GenAI provider to use (for example: ollama, gemini, openai).",
)
api_key: Optional[EnvString] = Field(default=None, title="Provider API key.")
base_url: Optional[str] = Field(default=None, title="Provider base url.")
model: str = Field(default="gpt-4o", title="GenAI model.")
provider: GenAIProviderEnum | None = Field(default=None, title="GenAI provider.")
roles: list[GenAIRoleEnum] = Field(
default_factory=lambda: [
GenAIRoleEnum.embeddings,
GenAIRoleEnum.vision,
GenAIRoleEnum.tools,
],
title="Roles",
description="GenAI roles (tools, vision, embeddings); one provider per role.",
title="GenAI roles (tools, vision, embeddings); one provider per role.",
)
provider_options: dict[str, Any] = Field(
default={},
title="Provider options",
description="Additional provider-specific options to pass to the GenAI client.",
json_schema_extra={"additionalProperties": {"type": "string"}},
default={}, title="GenAI Provider extra options."
)
runtime_options: dict[str, Any] = Field(
default={},
title="Runtime options",
description="Runtime options passed to the provider for each inference call.",
json_schema_extra={"additionalProperties": {"type": "string"}},
default={}, title="Options to pass during inference calls."
)

View File

@@ -10,18 +10,7 @@ __all__ = ["CameraLiveConfig"]
class CameraLiveConfig(FrigateBaseModel):
streams: Dict[str, str] = Field(
default_factory=list,
title="Live stream names",
description="Mapping of configured stream names to restream/go2rtc names used for live playback.",
)
height: int = Field(
default=720,
title="Live height",
description="Height (pixels) to render the jsmpeg live stream in the Web UI; must be <= detect stream height.",
)
quality: int = Field(
default=8,
ge=1,
le=31,
title="Live quality",
description="Encoding quality for the jsmpeg stream (1 highest, 31 lowest).",
title="Friendly names and restream names to use for live view.",
)
height: int = Field(default=720, title="Live camera view height")
quality: int = Field(default=8, ge=1, le=31, title="Live camera view quality")

View File

@@ -1,85 +0,0 @@
"""Mask configuration for motion and object masks."""
from typing import Any, Optional, Union
from pydantic import Field, field_serializer
from ..base import FrigateBaseModel
__all__ = ["MotionMaskConfig", "ObjectMaskConfig"]
class MotionMaskConfig(FrigateBaseModel):
"""Configuration for a single motion mask."""
friendly_name: Optional[str] = Field(
default=None,
title="Friendly name",
description="A friendly name for this motion mask used in the Frigate UI",
)
enabled: bool = Field(
default=True,
title="Enabled",
description="Enable or disable this motion mask",
)
coordinates: Union[str, list[str]] = Field(
default="",
title="Coordinates",
description="Ordered x,y coordinates defining the motion mask polygon used to include/exclude areas.",
)
raw_coordinates: Union[str, list[str]] = ""
enabled_in_config: Optional[bool] = Field(
default=None, title="Keep track of original state of motion mask."
)
def get_formatted_name(self, mask_id: str) -> str:
"""Return the friendly name if set, otherwise return a formatted version of the mask ID."""
if self.friendly_name:
return self.friendly_name
return mask_id.replace("_", " ").title()
@field_serializer("coordinates", when_used="json")
def serialize_coordinates(self, value: Any, info):
return self.raw_coordinates if self.raw_coordinates else value
@field_serializer("raw_coordinates", when_used="json")
def serialize_raw_coordinates(self, value: Any, info):
return None
class ObjectMaskConfig(FrigateBaseModel):
"""Configuration for a single object mask."""
friendly_name: Optional[str] = Field(
default=None,
title="Friendly name",
description="A friendly name for this object mask used in the Frigate UI",
)
enabled: bool = Field(
default=True,
title="Enabled",
description="Enable or disable this object mask",
)
coordinates: Union[str, list[str]] = Field(
default="",
title="Coordinates",
description="Ordered x,y coordinates defining the object mask polygon used to include/exclude areas.",
)
raw_coordinates: Union[str, list[str]] = ""
enabled_in_config: Optional[bool] = Field(
default=None, title="Keep track of original state of object mask."
)
@field_serializer("coordinates", when_used="json")
def serialize_coordinates(self, value: Any, info):
return self.raw_coordinates if self.raw_coordinates else value
@field_serializer("raw_coordinates", when_used="json")
def serialize_raw_coordinates(self, value: Any, info):
return None
def get_formatted_name(self, mask_id: str) -> str:
"""Return the friendly name if set, otherwise return a formatted version of the mask ID."""
if self.friendly_name:
return self.friendly_name
return mask_id.replace("_", " ").title()
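For reference, the naming fallback above behaves like this (the mask IDs are hypothetical):
```python
# Sketch: friendly_name takes precedence; otherwise the mask ID is prettified.
named = MotionMaskConfig(friendly_name="Driveway hedge")
print(named.get_formatted_name("mask_1"))        # -> "Driveway hedge"

unnamed = MotionMaskConfig()
print(unnamed.get_formatted_name("front_yard"))  # -> "Front Yard"
```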

View File

@@ -1,82 +1,43 @@
from typing import Any, Optional
from typing import Any, Optional, Union
from pydantic import Field, field_serializer
from ..base import FrigateBaseModel
from .mask import MotionMaskConfig
__all__ = ["MotionConfig"]
class MotionConfig(FrigateBaseModel):
enabled: bool = Field(
default=True,
title="Enable motion detection",
description="Enable or disable motion detection for all cameras; can be overridden per-camera.",
)
enabled: bool = Field(default=True, title="Enable motion on all cameras.")
threshold: int = Field(
default=30,
title="Motion threshold",
description="Pixel difference threshold used by the motion detector; higher values reduce sensitivity (range 1-255).",
title="Motion detection threshold (1-255).",
ge=1,
le=255,
)
lightning_threshold: float = Field(
default=0.8,
title="Lightning threshold",
description="Threshold to detect and ignore brief lighting spikes (lower is more sensitive, values between 0.3 and 1.0).",
ge=0.3,
le=1.0,
default=0.8, title="Lightning detection threshold (0.3-1.0).", ge=0.3, le=1.0
)
improve_contrast: bool = Field(
default=True,
title="Improve contrast",
description="Apply contrast improvement to frames before motion analysis to help detection.",
)
contour_area: Optional[int] = Field(
default=10,
title="Contour area",
description="Minimum contour area in pixels required for a motion contour to be counted.",
)
delta_alpha: float = Field(
default=0.2,
title="Delta alpha",
description="Alpha blending factor used in frame differencing for motion calculation.",
)
frame_alpha: float = Field(
default=0.01,
title="Frame alpha",
description="Alpha value used when blending frames for motion preprocessing.",
)
frame_height: Optional[int] = Field(
default=100,
title="Frame height",
description="Height in pixels to scale frames to when computing motion.",
)
mask: dict[str, Optional[MotionMaskConfig]] = Field(
default_factory=dict,
title="Mask coordinates",
description="Ordered x,y coordinates defining the motion mask polygon used to include/exclude areas.",
improve_contrast: bool = Field(default=True, title="Improve Contrast")
contour_area: Optional[int] = Field(default=10, title="Contour Area")
delta_alpha: float = Field(default=0.2, title="Delta Alpha")
frame_alpha: float = Field(default=0.01, title="Frame Alpha")
frame_height: Optional[int] = Field(default=100, title="Frame Height")
mask: Union[str, list[str]] = Field(
default="", title="Coordinates polygon for the motion mask."
)
mqtt_off_delay: int = Field(
default=30,
title="MQTT off delay",
description="Seconds to wait after last motion before publishing an MQTT 'off' state.",
title="Delay for updating MQTT with no motion detected.",
)
enabled_in_config: Optional[bool] = Field(
default=None,
title="Original motion state",
description="Indicates whether motion detection was enabled in the original static configuration.",
)
raw_mask: dict[str, Optional[MotionMaskConfig]] = Field(
default_factory=dict, exclude=True
default=None, title="Keep track of original state of motion detection."
)
raw_mask: Union[str, list[str]] = ""
@field_serializer("mask", when_used="json")
def serialize_mask(self, value: Any, info):
if self.raw_mask:
return self.raw_mask
return value
return self.raw_mask
@field_serializer("raw_mask", when_used="json")
def serialize_raw_mask(self, value: Any, info):

View File

@@ -6,40 +6,18 @@ __all__ = ["CameraMqttConfig"]
class CameraMqttConfig(FrigateBaseModel):
enabled: bool = Field(
default=True,
title="Send image",
description="Enable publishing image snapshots for objects to MQTT topics for this camera.",
)
timestamp: bool = Field(
default=True,
title="Add timestamp",
description="Overlay a timestamp on images published to MQTT.",
)
bounding_box: bool = Field(
default=True,
title="Add bounding box",
description="Draw bounding boxes on images published over MQTT.",
)
crop: bool = Field(
default=True,
title="Crop image",
description="Crop images published to MQTT to the detected object's bounding box.",
)
height: int = Field(
default=270,
title="Image height",
description="Height (pixels) to resize images published over MQTT.",
)
enabled: bool = Field(default=True, title="Send image over MQTT.")
timestamp: bool = Field(default=True, title="Add timestamp to MQTT image.")
bounding_box: bool = Field(default=True, title="Add bounding box to MQTT image.")
crop: bool = Field(default=True, title="Crop MQTT image to detected object.")
height: int = Field(default=270, title="MQTT image height.")
required_zones: list[str] = Field(
default_factory=list,
title="Required zones",
description="Zones that an object must enter for an MQTT image to be published.",
title="List of required zones to be entered in order to send the image.",
)
quality: int = Field(
default=70,
title="JPEG quality",
description="JPEG quality for images published to MQTT (0-100).",
title="Quality of the encoded jpeg (0-100).",
ge=0,
le=100,
)
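
For illustration, a hedged sketch of how the ge/le bounds declared on quality behave under pydantic v2; MqttSketch is a made-up model, not CameraMqttConfig itself.

from pydantic import BaseModel, Field, ValidationError


class MqttSketch(BaseModel):
    # Mirrors the 0-100 bounds declared on `quality` above.
    quality: int = Field(default=70, ge=0, le=100)


try:
    MqttSketch(quality=150)
except ValidationError as exc:
    print(exc.errors()[0]["type"])  # less_than_equal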

View File

@@ -8,24 +8,11 @@ __all__ = ["NotificationConfig"]
class NotificationConfig(FrigateBaseModel):
enabled: bool = Field(
default=False,
title="Enable notifications",
description="Enable or disable notifications for all cameras; can be overridden per-camera.",
)
email: Optional[str] = Field(
default=None,
title="Notification email",
description="Email address used for push notifications or required by certain notification providers.",
)
enabled: bool = Field(default=False, title="Enable notifications")
email: Optional[str] = Field(default=None, title="Email required for push.")
cooldown: int = Field(
default=0,
ge=0,
title="Cooldown period",
description="Cooldown (seconds) between notifications to avoid spamming recipients.",
default=0, ge=0, title="Cooldown period for notifications (time in seconds)."
)
enabled_in_config: Optional[bool] = Field(
default=None,
title="Original notifications state",
description="Indicates whether notifications were enabled in the original static configuration.",
default=None, title="Keep track of original state of notifications."
)
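
The cooldown field above is described as seconds between notifications. A rough, assumption-laden sketch of that gating logic; NotifyGate and should_notify are hypothetical, not Frigate internals.

import time


class NotifyGate:
    def __init__(self, cooldown: int = 0):
        self.cooldown = cooldown
        self.last_sent = float("-inf")

    def should_notify(self) -> bool:
        now = time.monotonic()
        if now - self.last_sent < self.cooldown:
            return False  # still inside the cooldown window
        self.last_sent = now
        return True


gate = NotifyGate(cooldown=60)
print(gate.should_notify())  # True: first notification always passes
print(gate.should_notify())  # False: suppressed for the next 60 seconds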

View File

@@ -3,7 +3,6 @@ from typing import Any, Optional, Union
from pydantic import Field, PrivateAttr, field_serializer, field_validator
from ..base import FrigateBaseModel
from .mask import ObjectMaskConfig
__all__ = ["ObjectConfig", "GenAIObjectConfig", "FilterConfig"]
@@ -14,48 +13,36 @@ DEFAULT_TRACKED_OBJECTS = ["person"]
class FilterConfig(FrigateBaseModel):
min_area: Union[int, float] = Field(
default=0,
title="Minimum object area",
description="Minimum bounding box area (pixels or percentage) required for this object type. Can be pixels (int) or percentage (float between 0.000001 and 0.99).",
title="Minimum area of bounding box for object to be counted. Can be pixels (int) or percentage (float between 0.000001 and 0.99).",
)
max_area: Union[int, float] = Field(
default=24000000,
title="Maximum object area",
description="Maximum bounding box area (pixels or percentage) allowed for this object type. Can be pixels (int) or percentage (float between 0.000001 and 0.99).",
title="Maximum area of bounding box for object to be counted. Can be pixels (int) or percentage (float between 0.000001 and 0.99).",
)
min_ratio: float = Field(
default=0,
title="Minimum aspect ratio",
description="Minimum width/height ratio required for the bounding box to qualify.",
title="Minimum ratio of bounding box's width/height for object to be counted.",
)
max_ratio: float = Field(
default=24000000,
title="Maximum aspect ratio",
description="Maximum width/height ratio allowed for the bounding box to qualify.",
title="Maximum ratio of bounding box's width/height for object to be counted.",
)
threshold: float = Field(
default=0.7,
title="Confidence threshold",
description="Average detection confidence threshold required for the object to be considered a true positive.",
title="Average detection confidence threshold for object to be counted.",
)
min_score: float = Field(
default=0.5,
title="Minimum confidence",
description="Minimum single-frame detection confidence required for the object to be counted.",
default=0.5, title="Minimum detection confidence for object to be counted."
)
mask: dict[str, Optional[ObjectMaskConfig]] = Field(
default_factory=dict,
title="Filter mask",
description="Polygon coordinates defining where this filter applies within the frame.",
)
raw_mask: dict[str, Optional[ObjectMaskConfig]] = Field(
default_factory=dict, exclude=True
mask: Optional[Union[str, list[str]]] = Field(
default=None,
title="Detection area polygon mask for this filter configuration.",
)
raw_mask: Union[str, list[str]] = ""
@field_serializer("mask", when_used="json")
def serialize_mask(self, value: Any, info):
if self.raw_mask:
return self.raw_mask
return value
return self.raw_mask
@field_serializer("raw_mask", when_used="json")
def serialize_raw_mask(self, value: Any, info):
@@ -64,64 +51,46 @@ class FilterConfig(FrigateBaseModel):
class GenAIObjectTriggerConfig(FrigateBaseModel):
tracked_object_end: bool = Field(
default=True,
title="Send on end",
description="Send a request to GenAI when the tracked object ends.",
default=True, title="Send once the object is no longer tracked."
)
after_significant_updates: Optional[int] = Field(
default=None,
title="Early GenAI trigger",
description="Send a request to GenAI after a specified number of significant updates for the tracked object.",
title="Send an early request to generative AI when X frames accumulated.",
ge=1,
)
class GenAIObjectConfig(FrigateBaseModel):
enabled: bool = Field(
default=False,
title="Enable GenAI",
description="Enable GenAI generation of descriptions for tracked objects by default.",
)
enabled: bool = Field(default=False, title="Enable GenAI for camera.")
use_snapshot: bool = Field(
default=False,
title="Use snapshots",
description="Use object snapshots instead of thumbnails for GenAI description generation.",
default=False, title="Use snapshots for generating descriptions."
)
prompt: str = Field(
default="Analyze the sequence of images containing the {label}. Focus on the likely intent or behavior of the {label} based on its actions and movement, rather than describing its appearance or the surroundings. Consider what the {label} is doing, why, and what it might do next.",
title="Caption prompt",
description="Default prompt template used when generating descriptions with GenAI.",
title="Default caption prompt.",
)
object_prompts: dict[str, str] = Field(
default_factory=dict,
title="Object prompts",
description="Per-object prompts to customize GenAI outputs for specific labels.",
default_factory=dict, title="Object specific prompts."
)
objects: Union[str, list[str]] = Field(
default_factory=list,
title="GenAI objects",
description="List of object labels to send to GenAI by default.",
title="List of objects to run generative AI for.",
)
required_zones: Union[str, list[str]] = Field(
default_factory=list,
title="Required zones",
description="Zones that must be entered for objects to qualify for GenAI description generation.",
title="List of required zones to be entered in order to run generative AI.",
)
debug_save_thumbnails: bool = Field(
default=False,
title="Save thumbnails",
description="Save thumbnails sent to GenAI for debugging and review.",
title="Save thumbnails sent to generative AI for debugging purposes.",
)
send_triggers: GenAIObjectTriggerConfig = Field(
default_factory=GenAIObjectTriggerConfig,
title="GenAI triggers",
description="Defines when frames should be sent to GenAI (on end, after updates, etc.).",
title="What triggers to use to send frames to generative AI for a tracked object.",
)
enabled_in_config: Optional[bool] = Field(
default=None,
title="Original GenAI state",
description="Indicates whether GenAI was enabled in the original static config.",
default=None, title="Keep track of original state of generative AI."
)
@field_validator("required_zones", mode="before")
@@ -134,28 +103,14 @@ class GenAIObjectConfig(FrigateBaseModel):
class ObjectConfig(FrigateBaseModel):
track: list[str] = Field(
default=DEFAULT_TRACKED_OBJECTS,
title="Objects to track",
description="List of object labels to track for all cameras; can be overridden per-camera.",
)
track: list[str] = Field(default=DEFAULT_TRACKED_OBJECTS, title="Objects to track.")
filters: dict[str, FilterConfig] = Field(
default_factory=dict,
title="Object filters",
description="Filters applied to detected objects to reduce false positives (area, ratio, confidence).",
)
mask: dict[str, Optional[ObjectMaskConfig]] = Field(
default_factory=dict,
title="Object mask",
description="Mask polygon used to prevent object detection in specified areas.",
)
raw_mask: dict[str, Optional[ObjectMaskConfig]] = Field(
default_factory=dict, exclude=True
default_factory=dict, title="Object filters."
)
mask: Union[str, list[str]] = Field(default="", title="Object mask.")
genai: GenAIObjectConfig = Field(
default_factory=GenAIObjectConfig,
title="GenAI object config",
description="GenAI options for describing tracked objects and sending frames for generation.",
title="Config for using genai to analyze objects.",
)
_all_objects: list[str] = PrivateAttr()
@@ -174,13 +129,3 @@ class ObjectConfig(FrigateBaseModel):
enabled_labels.update(camera.objects.track)
self._all_objects = list(enabled_labels)
@field_serializer("mask", when_used="json")
def serialize_mask(self, value: Any, info):
if self.raw_mask:
return self.raw_mask
return value
@field_serializer("raw_mask", when_used="json")
def serialize_raw_mask(self, value: Any, info):
return None
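
ObjectConfig keeps _all_objects as a PrivateAttr, so it never appears in dumps or schemas. A minimal sketch of that pattern under pydantic v2; ObjectsSketch is a stand-in, and the real code aggregates labels across all cameras rather than deduplicating one list.

from pydantic import BaseModel, PrivateAttr


class ObjectsSketch(BaseModel):
    track: list[str] = ["person"]
    _all_objects: list[str] = PrivateAttr(default_factory=list)

    def model_post_init(self, __context) -> None:
        # Computed state, derived after validation.
        self._all_objects = list(dict.fromkeys(self.track))


s = ObjectsSketch(track=["person", "car", "person"])
print(s._all_objects)  # ['person', 'car']
print(s.model_dump())  # {'track': ['person', 'car', 'person']}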

View File

@@ -17,57 +17,37 @@ class ZoomingModeEnum(str, Enum):
class PtzAutotrackConfig(FrigateBaseModel):
enabled: bool = Field(
default=False,
title="Enable Autotracking",
description="Enable or disable automatic PTZ camera tracking of detected objects.",
)
enabled: bool = Field(default=False, title="Enable PTZ object autotracking.")
calibrate_on_startup: bool = Field(
default=False,
title="Calibrate on start",
description="Measure PTZ motor speeds on startup to improve tracking accuracy. Frigate will update config with movement_weights after calibration.",
default=False, title="Perform a camera calibration when Frigate starts."
)
zooming: ZoomingModeEnum = Field(
default=ZoomingModeEnum.disabled,
title="Zoom mode",
description="Control zoom behavior: disabled (pan/tilt only), absolute (most compatible), or relative (concurrent pan/tilt/zoom).",
default=ZoomingModeEnum.disabled, title="Autotracker zooming mode."
)
zoom_factor: float = Field(
default=0.3,
title="Zoom factor",
description="Control zoom level on tracked objects. Lower values keep more scene in view; higher values zoom in closer but may lose tracking. Values between 0.1 and 0.75.",
title="Zooming factor (0.1-0.75).",
ge=0.1,
le=0.75,
)
track: list[str] = Field(
default=DEFAULT_TRACKED_OBJECTS,
title="Tracked objects",
description="List of object types that should trigger autotracking.",
)
track: list[str] = Field(default=DEFAULT_TRACKED_OBJECTS, title="Objects to track.")
required_zones: list[str] = Field(
default_factory=list,
title="Required zones",
description="Objects must enter one of these zones before autotracking begins.",
title="List of required zones to be entered in order to begin autotracking.",
)
return_preset: str = Field(
default="home",
title="Return preset",
description="ONVIF preset name configured in camera firmware to return to after tracking ends.",
title="Name of camera preset to return to when object tracking is over.",
)
timeout: int = Field(
default=10,
title="Return timeout",
description="Wait this many seconds after losing tracking before returning camera to preset position.",
default=10, title="Seconds to delay before returning to preset."
)
movement_weights: Optional[Union[str, list[str]]] = Field(
default_factory=list,
title="Movement weights",
description="Calibration values automatically generated by camera calibration. Do not modify manually.",
title="Internal value used for PTZ movements based on the speed of your camera's motor.",
)
enabled_in_config: Optional[bool] = Field(
default=None,
title="Original autotrack state",
description="Internal field to track whether autotracking was enabled in configuration.",
default=None, title="Keep track of original state of autotracking."
)
@field_validator("movement_weights", mode="before")
@@ -92,38 +72,16 @@ class PtzAutotrackConfig(FrigateBaseModel):
class OnvifConfig(FrigateBaseModel):
host: str = Field(
default="",
title="ONVIF host",
description="Host (and optional scheme) for the ONVIF service for this camera.",
)
port: int = Field(
default=8000,
title="ONVIF port",
description="Port number for the ONVIF service.",
)
user: Optional[EnvString] = Field(
default=None,
title="ONVIF username",
description="Username for ONVIF authentication; some devices require admin user for ONVIF.",
)
password: Optional[EnvString] = Field(
default=None,
title="ONVIF password",
description="Password for ONVIF authentication.",
)
tls_insecure: bool = Field(
default=False,
title="Disable TLS verify",
description="Skip TLS verification and disable digest auth for ONVIF (unsafe; use in safe networks only).",
)
host: str = Field(default="", title="Onvif Host")
port: int = Field(default=8000, title="Onvif Port")
user: Optional[EnvString] = Field(default=None, title="Onvif Username")
password: Optional[EnvString] = Field(default=None, title="Onvif Password")
tls_insecure: bool = Field(default=False, title="Onvif Disable TLS verification")
autotracking: PtzAutotrackConfig = Field(
default_factory=PtzAutotrackConfig,
title="Autotracking",
description="Automatically track moving objects and keep them centered in the frame using PTZ camera movements.",
title="PTZ auto tracking config.",
)
ignore_time_mismatch: bool = Field(
default=False,
title="Ignore time mismatch",
description="Ignore time synchronization differences between camera and Frigate server for ONVIF communication.",
title="Onvif Ignore Time Synchronization Mismatch Between Camera and Server",
)

View File

@@ -21,12 +21,7 @@ __all__ = [
class RecordRetainConfig(FrigateBaseModel):
days: float = Field(
default=0,
ge=0,
title="Retention days",
description="Days to retain recordings.",
)
days: float = Field(default=0, ge=0, title="Default retention period.")
class RetainModeEnum(str, Enum):
@@ -36,37 +31,22 @@ class RetainModeEnum(str, Enum):
class ReviewRetainConfig(FrigateBaseModel):
days: float = Field(
default=10,
ge=0,
title="Retention days",
description="Number of days to retain recordings of detection events.",
)
mode: RetainModeEnum = Field(
default=RetainModeEnum.motion,
title="Retention mode",
description="Mode for retention: all (save all segments), motion (save segments with motion), or active_objects (save segments with active objects).",
)
days: float = Field(default=10, ge=0, title="Default retention period.")
mode: RetainModeEnum = Field(default=RetainModeEnum.motion, title="Retain mode.")
class EventsConfig(FrigateBaseModel):
pre_capture: int = Field(
default=5,
title="Pre-capture seconds",
description="Number of seconds before the detection event to include in the recording.",
title="Seconds to retain before event starts.",
le=MAX_PRE_CAPTURE,
ge=0,
)
post_capture: int = Field(
default=5,
ge=0,
title="Post-capture seconds",
description="Number of seconds after the detection event to include in the recording.",
default=5, ge=0, title="Seconds to retain after event ends."
)
retain: ReviewRetainConfig = Field(
default_factory=ReviewRetainConfig,
title="Event retention",
description="Retention settings for recordings of detection events.",
default_factory=ReviewRetainConfig, title="Event retention settings."
)
@@ -80,65 +60,43 @@ class RecordQualityEnum(str, Enum):
class RecordPreviewConfig(FrigateBaseModel):
quality: RecordQualityEnum = Field(
default=RecordQualityEnum.medium,
title="Preview quality",
description="Preview quality level (very_low, low, medium, high, very_high).",
default=RecordQualityEnum.medium, title="Quality of recording preview."
)
class RecordExportConfig(FrigateBaseModel):
hwaccel_args: Union[str, list[str]] = Field(
default="auto",
title="Export hwaccel args",
description="Hardware acceleration args to use for export/transcode operations.",
default="auto", title="Export-specific FFmpeg hardware acceleration arguments."
)
class RecordConfig(FrigateBaseModel):
enabled: bool = Field(
default=False,
title="Enable recording",
description="Enable or disable recording for all cameras; can be overridden per-camera.",
)
enabled: bool = Field(default=False, title="Enable record on all cameras.")
expire_interval: int = Field(
default=60,
title="Record cleanup interval",
description="Minutes between cleanup passes that remove expired recording segments.",
title="Number of minutes to wait between cleanup runs.",
)
continuous: RecordRetainConfig = Field(
default_factory=RecordRetainConfig,
title="Continuous retention",
description="Number of days to retain recordings regardless of tracked objects or motion. Set to 0 if you only want to retain recordings of alerts and detections.",
title="Continuous recording retention settings.",
)
motion: RecordRetainConfig = Field(
default_factory=RecordRetainConfig,
title="Motion retention",
description="Number of days to retain recordings triggered by motion regardless of tracked objects. Set to 0 if you only want to retain recordings of alerts and detections.",
default_factory=RecordRetainConfig, title="Motion recording retention settings."
)
detections: EventsConfig = Field(
default_factory=EventsConfig,
title="Detection retention",
description="Recording retention settings for detection events including pre/post capture durations.",
default_factory=EventsConfig, title="Detection specific retention settings."
)
alerts: EventsConfig = Field(
default_factory=EventsConfig,
title="Alert retention",
description="Recording retention settings for alert events including pre/post capture durations.",
default_factory=EventsConfig, title="Alert specific retention settings."
)
export: RecordExportConfig = Field(
default_factory=RecordExportConfig,
title="Export config",
description="Settings used when exporting recordings such as timelapse and hardware acceleration.",
default_factory=RecordExportConfig, title="Recording Export Config"
)
preview: RecordPreviewConfig = Field(
default_factory=RecordPreviewConfig,
title="Preview config",
description="Settings controlling the quality of recording previews shown in the UI.",
default_factory=RecordPreviewConfig, title="Recording Preview Config"
)
enabled_in_config: Optional[bool] = Field(
default=None,
title="Original recording state",
description="Indicates whether recording was enabled in the original static configuration.",
default=None, title="Keep track of original state of recording."
)
@property

View File

@@ -21,32 +21,22 @@ DEFAULT_ALERT_OBJECTS = ["person", "car"]
class AlertsConfig(FrigateBaseModel):
"""Configure alerts"""
enabled: bool = Field(
default=True,
title="Enable alerts",
description="Enable or disable alert generation for all cameras; can be overridden per-camera.",
)
enabled: bool = Field(default=True, title="Enable alerts.")
labels: list[str] = Field(
default=DEFAULT_ALERT_OBJECTS,
title="Alert labels",
description="List of object labels that qualify as alerts (for example: car, person).",
default=DEFAULT_ALERT_OBJECTS, title="Labels to create alerts for."
)
required_zones: Union[str, list[str]] = Field(
default_factory=list,
title="Required zones",
description="Zones that an object must enter to be considered an alert; leave empty to allow any zone.",
title="List of required zones to be entered in order to save the event as an alert.",
)
enabled_in_config: Optional[bool] = Field(
default=None,
title="Original alerts state",
description="Tracks whether alerts were originally enabled in the static configuration.",
default=None, title="Keep track of original state of alerts."
)
cutoff_time: int = Field(
default=40,
title="Alerts cutoff time",
description="Seconds to wait after no alert-causing activity before cutting off an alert.",
title="Time to cutoff alerts after no alert-causing activity has occurred.",
)
@field_validator("required_zones", mode="before")
@@ -61,32 +51,22 @@ class AlertsConfig(FrigateBaseModel):
class DetectionsConfig(FrigateBaseModel):
"""Configure detections"""
enabled: bool = Field(
default=True,
title="Enable detections",
description="Enable or disable detection events for all cameras; can be overridden per-camera.",
)
enabled: bool = Field(default=True, title="Enable detections.")
labels: Optional[list[str]] = Field(
default=None,
title="Detection labels",
description="List of object labels that qualify as detection events.",
default=None, title="Labels to create detections for."
)
required_zones: Union[str, list[str]] = Field(
default_factory=list,
title="Required zones",
description="Zones that an object must enter to be considered a detection; leave empty to allow any zone.",
title="List of required zones to be entered in order to save the event as a detection.",
)
cutoff_time: int = Field(
default=30,
title="Detections cutoff time",
description="Seconds to wait after no detection-causing activity before cutting off a detection.",
title="Time to cutoff detection after no detection-causing activity has occurred.",
)
enabled_in_config: Optional[bool] = Field(
default=None,
title="Original detections state",
description="Tracks whether detections were originally enabled in the static configuration.",
default=None, title="Keep track of original state of detections."
)
@field_validator("required_zones", mode="before")
@@ -101,42 +81,27 @@ class DetectionsConfig(FrigateBaseModel):
class GenAIReviewConfig(FrigateBaseModel):
enabled: bool = Field(
default=False,
title="Enable GenAI descriptions",
description="Enable or disable GenAI-generated descriptions and summaries for review items.",
)
alerts: bool = Field(
default=True,
title="Enable GenAI for alerts",
description="Use GenAI to generate descriptions for alert items.",
)
detections: bool = Field(
default=False,
title="Enable GenAI for detections",
description="Use GenAI to generate descriptions for detection items.",
title="Enable GenAI descriptions for review items.",
)
alerts: bool = Field(default=True, title="Enable GenAI for alerts.")
detections: bool = Field(default=False, title="Enable GenAI for detections.")
image_source: ImageSourceEnum = Field(
default=ImageSourceEnum.preview,
title="Review image source",
description="Source of images sent to GenAI ('preview' or 'recordings'); 'recordings' uses higher quality frames but more tokens.",
title="Image source for review descriptions.",
)
additional_concerns: list[str] = Field(
default=[],
title="Additional concerns",
description="A list of additional concerns or notes the GenAI should consider when evaluating activity on this camera.",
title="Additional concerns that GenAI should make note of on this camera.",
)
debug_save_thumbnails: bool = Field(
default=False,
title="Save thumbnails",
description="Save thumbnails that are sent to the GenAI provider for debugging and review.",
title="Save thumbnails sent to generative AI for debugging purposes.",
)
enabled_in_config: Optional[bool] = Field(
default=None,
title="Original GenAI state",
description="Tracks whether GenAI review was originally enabled in the static configuration.",
default=None, title="Keep track of original state of generative AI."
)
preferred_language: str | None = Field(
title="Preferred language",
description="Preferred language to request from the GenAI provider for generated responses.",
title="Preferred language for GenAI Response",
default=None,
)
activity_context_prompt: str = Field(
@@ -174,24 +139,19 @@ Evaluate in this order:
3. **Escalate to Level 2 if:** Weapons, break-in tools, forced entry in progress, violence, or active property damage visible (escalates from Level 0 or 1)
The mere presence of an unidentified person in private areas during late night hours is inherently suspicious and warrants human review, regardless of what activity they appear to be doing or how brief the sequence is.""",
title="Activity context prompt",
description="Custom prompt describing what is and is not suspicious activity to provide context for GenAI summaries.",
title="Custom activity context prompt defining normal and suspicious activity patterns for this property.",
)
class ReviewConfig(FrigateBaseModel):
"""Configure reviews"""
alerts: AlertsConfig = Field(
default_factory=AlertsConfig,
title="Alerts config",
description="Settings for which tracked objects generate alerts and how alerts are retained.",
default_factory=AlertsConfig, title="Review alerts config."
)
detections: DetectionsConfig = Field(
default_factory=DetectionsConfig,
title="Detections config",
description="Settings for creating detection events (non-alert) and how long to keep them.",
default_factory=DetectionsConfig, title="Review detections config."
)
genai: GenAIReviewConfig = Field(
default_factory=GenAIReviewConfig,
title="GenAI config",
description="Controls use of generative AI for producing descriptions and summaries of review items.",
default_factory=GenAIReviewConfig, title="Review description genai config."
)
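
Several fields in this file (required_zones on AlertsConfig and DetectionsConfig) are typed Union[str, list[str]] and carry a mode="before" validator. A plausible sketch of what such a validator does, assuming comma-separated strings are normalized to lists; the actual Frigate validator may differ.

from typing import Union

from pydantic import BaseModel, field_validator


class ZonesSketch(BaseModel):
    required_zones: Union[str, list[str]] = []

    @field_validator("required_zones", mode="before")
    @classmethod
    def split_zones(cls, value):
        # Runs before type validation, so the string form can be coerced.
        if isinstance(value, str) and "," in value:
            return [z.strip() for z in value.split(",")]
        return value


print(ZonesSketch(required_zones="porch, driveway").required_zones)
# ['porch', 'driveway']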

View File

@@ -9,68 +9,36 @@ __all__ = ["SnapshotsConfig", "RetainConfig"]
class RetainConfig(FrigateBaseModel):
default: float = Field(
default=10,
title="Default retention",
description="Default number of days to retain snapshots.",
)
mode: RetainModeEnum = Field(
default=RetainModeEnum.motion,
title="Retention mode",
description="Mode for retention: all (save all segments), motion (save segments with motion), or active_objects (save segments with active objects).",
)
default: float = Field(default=10, title="Default retention period.")
mode: RetainModeEnum = Field(default=RetainModeEnum.motion, title="Retain mode.")
objects: dict[str, float] = Field(
default_factory=dict,
title="Object retention",
description="Per-object overrides for snapshot retention days.",
default_factory=dict, title="Object retention period."
)
class SnapshotsConfig(FrigateBaseModel):
enabled: bool = Field(
default=False,
title="Snapshots enabled",
description="Enable or disable saving snapshots for all cameras; can be overridden per-camera.",
)
enabled: bool = Field(default=False, title="Snapshots enabled.")
clean_copy: bool = Field(
default=True,
title="Save clean copy",
description="Save an unannotated clean copy of snapshots in addition to annotated ones.",
default=True, title="Create a clean copy of the snapshot image."
)
timestamp: bool = Field(
default=False,
title="Timestamp overlay",
description="Overlay a timestamp on saved snapshots.",
default=False, title="Add a timestamp overlay on the snapshot."
)
bounding_box: bool = Field(
default=True,
title="Bounding box overlay",
description="Draw bounding boxes for tracked objects on saved snapshots.",
)
crop: bool = Field(
default=False,
title="Crop snapshot",
description="Crop saved snapshots to the detected object's bounding box.",
default=True, title="Add a bounding box overlay on the snapshot."
)
crop: bool = Field(default=False, title="Crop the snapshot to the detected object.")
required_zones: list[str] = Field(
default_factory=list,
title="Required zones",
description="Zones an object must enter for a snapshot to be saved.",
)
height: Optional[int] = Field(
default=None,
title="Snapshot height",
description="Height (pixels) to resize saved snapshots to; leave empty to preserve original size.",
title="List of required zones to be entered in order to save a snapshot.",
)
height: Optional[int] = Field(default=None, title="Snapshot image height.")
retain: RetainConfig = Field(
default_factory=RetainConfig,
title="Snapshot retention",
description="Retention settings for saved snapshots including default days and per-object overrides.",
default_factory=RetainConfig, title="Snapshot retention."
)
quality: int = Field(
default=70,
title="JPEG quality",
description="JPEG encode quality for saved snapshots (0-100).",
title="Quality of the encoded jpeg (0-100).",
ge=0,
le=100,
)
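
RetainConfig pairs a default retention period with per-object overrides. A hedged sketch of how that lookup plausibly resolves; days_for is a hypothetical helper, not part of RetainConfig.

from pydantic import BaseModel, Field


class RetainSketch(BaseModel):
    default: float = 10
    objects: dict[str, float] = Field(default_factory=dict)

    def days_for(self, label: str) -> float:
        # Per-object override wins; otherwise fall back to the default.
        return self.objects.get(label, self.default)


r = RetainSketch(objects={"person": 30})
print(r.days_for("person"), r.days_for("car"))  # 30.0 10.0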

View File

@@ -27,27 +27,9 @@ class TimestampPositionEnum(str, Enum):
class ColorConfig(FrigateBaseModel):
red: int = Field(
default=255,
ge=0,
le=255,
title="Red",
description="Red component (0-255) for timestamp color.",
)
green: int = Field(
default=255,
ge=0,
le=255,
title="Green",
description="Green component (0-255) for timestamp color.",
)
blue: int = Field(
default=255,
ge=0,
le=255,
title="Blue",
description="Blue component (0-255) for timestamp color.",
)
red: int = Field(default=255, ge=0, le=255, title="Red")
green: int = Field(default=255, ge=0, le=255, title="Green")
blue: int = Field(default=255, ge=0, le=255, title="Blue")
class TimestampEffectEnum(str, Enum):
@@ -57,27 +39,11 @@ class TimestampEffectEnum(str, Enum):
class TimestampStyleConfig(FrigateBaseModel):
position: TimestampPositionEnum = Field(
default=TimestampPositionEnum.tl,
title="Timestamp position",
description="Position of the timestamp on the image (tl/tr/bl/br).",
)
format: str = Field(
default=DEFAULT_TIME_FORMAT,
title="Timestamp format",
description="Datetime format string used for timestamps (Python datetime format codes).",
)
color: ColorConfig = Field(
default_factory=ColorConfig,
title="Timestamp color",
description="RGB color values for the timestamp text (all values 0-255).",
)
thickness: int = Field(
default=2,
title="Timestamp thickness",
description="Line thickness of the timestamp text.",
default=TimestampPositionEnum.tl, title="Timestamp position."
)
format: str = Field(default=DEFAULT_TIME_FORMAT, title="Timestamp format.")
color: ColorConfig = Field(default_factory=ColorConfig, title="Timestamp color.")
thickness: int = Field(default=2, title="Timestamp thickness.")
effect: Optional[TimestampEffectEnum] = Field(
default=None,
title="Timestamp effect",
description="Visual effect for the timestamp text (none, solid, shadow).",
default=None, title="Timestamp effect."
)

View File

@@ -6,13 +6,7 @@ __all__ = ["CameraUiConfig"]
class CameraUiConfig(FrigateBaseModel):
order: int = Field(
default=0,
title="UI order",
description="Numeric order used to sort the camera in the UI (default dashboard and lists); larger numbers appear later.",
)
order: int = Field(default=0, title="Order of camera in UI.")
dashboard: bool = Field(
default=True,
title="Show in UI",
description="Toggle whether this camera is visible everywhere in the Frigate UI. Disabling this will require manually editing the config to view this camera in the UI again.",
default=True, title="Show this camera in Frigate dashboard UI."
)

View File

@@ -14,54 +14,36 @@ logger = logging.getLogger(__name__)
class ZoneConfig(BaseModel):
friendly_name: Optional[str] = Field(
None,
title="Zone name",
description="A user-friendly name for the zone, displayed in the Frigate UI. If not set, a formatted version of the zone name will be used.",
)
enabled: bool = Field(
default=True,
title="Enabled",
description="Enable or disable this zone. Disabled zones are ignored at runtime.",
)
enabled_in_config: Optional[bool] = Field(
default=None, title="Keep track of original state of zone."
None, title="Zone friendly name used in the Frigate UI."
)
filters: dict[str, FilterConfig] = Field(
default_factory=dict,
title="Zone filters",
description="Filters to apply to objects within this zone. Used to reduce false positives or restrict which objects are considered present in the zone.",
default_factory=dict, title="Zone filters."
)
coordinates: Union[str, list[str]] = Field(
title="Coordinates",
description="Polygon coordinates that define the zone area. Can be a comma-separated string or a list of coordinate strings. Coordinates should be relative (0-1) or absolute (legacy).",
title="Coordinates polygon for the defined zone."
)
distances: Optional[Union[str, list[str]]] = Field(
default_factory=list,
title="Real-world distances",
description="Optional real-world distances for each side of the zone quadrilateral, used for speed or distance calculations. Must have exactly 4 values if set.",
title="Real-world distances for the sides of quadrilateral for the defined zone.",
)
inertia: int = Field(
default=3,
title="Inertia frames",
title="Number of consecutive frames required for object to be considered present in the zone.",
gt=0,
description="Number of consecutive frames an object must be detected in the zone before it is considered present. Helps filter out transient detections.",
)
loitering_time: int = Field(
default=0,
ge=0,
title="Loitering seconds",
description="Number of seconds an object must remain in the zone to be considered as loitering. Set to 0 to disable loitering detection.",
title="Number of seconds that an object must loiter to be considered in the zone.",
)
speed_threshold: Optional[float] = Field(
default=None,
ge=0.1,
title="Minimum speed",
description="Minimum speed (in real-world units if distances are set) required for an object to be considered present in the zone. Used for speed-based zone triggers.",
title="Minimum speed value for an object to be considered in the zone.",
)
objects: Union[str, list[str]] = Field(
default_factory=list,
title="Trigger objects",
description="List of object types (from labelmap) that can trigger this zone. Can be a string or a list of strings. If empty, all objects are considered.",
title="List of objects that can trigger the zone.",
)
_color: Optional[tuple[int, int, int]] = PrivateAttr()
_contour: np.ndarray = PrivateAttr()
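
The inertia field above requires an object to appear in the zone for a number of consecutive frames before it counts as present. A toy sketch of that semantics, independent of Frigate's tracker:

def in_zone_after_inertia(hits: list[bool], inertia: int = 3) -> list[bool]:
    # An object is "in the zone" only once its consecutive-hit streak
    # reaches `inertia`; any miss resets the streak.
    streak, out = 0, []
    for hit in hits:
        streak = streak + 1 if hit else 0
        out.append(streak >= inertia)
    return out


print(in_zone_after_inertia([True, True, True, False, True]))
# [False, False, True, False, False]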

View File

@@ -8,21 +8,13 @@ __all__ = ["CameraGroupConfig"]
class CameraGroupConfig(FrigateBaseModel):
"""Represents a group of cameras."""
cameras: Union[str, list[str]] = Field(
default_factory=list,
title="Camera list",
description="Array of camera names included in this group.",
)
icon: str = Field(
default="generic",
title="Group icon",
description="Icon used to represent the camera group in the UI.",
)
order: int = Field(
default=0,
title="Sort order",
description="Numeric order used to sort camera groups in the UI; larger numbers appear later.",
default_factory=list, title="List of cameras in this group."
)
icon: str = Field(default="generic", title="Icon that represents camera group.")
order: int = Field(default=0, title="Sort order for group.")
@field_validator("cameras", mode="before")
@classmethod

View File

@@ -1,5 +1,5 @@
from enum import Enum
from typing import Dict, List, Optional
from typing import Dict, List, Optional, Union
from pydantic import ConfigDict, Field
@@ -43,43 +43,28 @@ class ObjectClassificationType(str, Enum):
class AudioTranscriptionConfig(FrigateBaseModel):
enabled: bool = Field(
default=False,
title="Enable audio transcription",
description="Enable or disable automatic audio transcription for all cameras; can be overridden per-camera.",
)
enabled: bool = Field(default=False, title="Enable audio transcription.")
language: str = Field(
default="en",
title="Transcription language",
description="Language code used for transcription/translation (for example 'en' for English). See https://whisper-api.com/docs/languages/ for supported language codes.",
title="Language abbreviation to use for audio event transcription/translation.",
)
device: Optional[EnrichmentsDeviceEnum] = Field(
default=EnrichmentsDeviceEnum.CPU,
title="Transcription device",
description="Device key (CPU/GPU) to run the transcription model on. Only NVIDIA CUDA GPUs are currently supported for transcription.",
title="The device used for audio transcription.",
)
model_size: str = Field(
default="small",
title="Model size",
description="Model size to use for offline audio event transcription.",
default="small", title="The size of the embeddings model used."
)
live_enabled: Optional[bool] = Field(
default=False,
title="Live transcription",
description="Enable streaming live transcription for audio as it is received.",
default=False, title="Enable live transcriptions."
)
class BirdClassificationConfig(FrigateBaseModel):
enabled: bool = Field(
default=False,
title="Bird classification",
description="Enable or disable bird classification.",
)
enabled: bool = Field(default=False, title="Enable bird classification.")
threshold: float = Field(
default=0.9,
title="Minimum score",
description="Minimum classification score required to accept a bird classification.",
title="Minimum classification score required to be considered a match.",
gt=0.0,
le=1.0,
)
@@ -87,62 +72,42 @@ class BirdClassificationConfig(FrigateBaseModel):
class CustomClassificationStateCameraConfig(FrigateBaseModel):
crop: list[float] = Field(
title="Classification crop",
description="Crop coordinates to use for running classification on this camera.",
title="Crop of image frame on this camera to run classification on."
)
class CustomClassificationStateConfig(FrigateBaseModel):
cameras: Dict[str, CustomClassificationStateCameraConfig] = Field(
title="Classification cameras",
description="Per-camera crop and settings for running state classification.",
title="Cameras to run classification on."
)
motion: bool = Field(
default=False,
title="Run on motion",
description="If true, run classification when motion is detected within the specified crop.",
title="If classification should be run when motion is detected in the crop.",
)
interval: int | None = Field(
default=None,
title="Classification interval",
description="Interval (seconds) between periodic classification runs for state classification.",
title="Interval to run classification on in seconds.",
gt=0,
)
class CustomClassificationObjectConfig(FrigateBaseModel):
objects: list[str] = Field(
default_factory=list,
title="Classify objects",
description="List of object types to run object classification on.",
)
objects: list[str] = Field(title="Object types to classify.")
classification_type: ObjectClassificationType = Field(
default=ObjectClassificationType.sub_label,
title="Classification type",
description="Classification type applied: 'sub_label' (adds sub_label) or other supported types.",
title="Type of classification that is applied.",
)
class CustomClassificationConfig(FrigateBaseModel):
enabled: bool = Field(
default=True,
title="Enable model",
description="Enable or disable the custom classification model.",
)
name: str | None = Field(
default=None,
title="Model name",
description="Identifier for the custom classification model to use.",
)
enabled: bool = Field(default=True, title="Enable running the model.")
name: str | None = Field(default=None, title="Name of classification model.")
threshold: float = Field(
default=0.8,
title="Score threshold",
description="Score threshold used to change the classification state.",
default=0.8, title="Classification score threshold to change the state."
)
save_attempts: int | None = Field(
default=None,
title="Save attempts",
description="How many classification attempts to save for recent classifications UI.",
title="Number of classification attempts to save in the recent classifications tab. If not specified, defaults to 200 for object classification and 100 for state classification.",
ge=0,
)
object_config: CustomClassificationObjectConfig | None = Field(default=None)
@@ -151,76 +116,48 @@ class CustomClassificationConfig(FrigateBaseModel):
class ClassificationConfig(FrigateBaseModel):
bird: BirdClassificationConfig = Field(
default_factory=BirdClassificationConfig,
title="Bird classification config",
description="Settings specific to bird classification models.",
default_factory=BirdClassificationConfig, title="Bird classification config."
)
custom: Dict[str, CustomClassificationConfig] = Field(
default={},
title="Custom Classification Models",
description="Configuration for custom classification models used for objects or state detection.",
default={}, title="Custom Classification Model Configs."
)
class SemanticSearchConfig(FrigateBaseModel):
enabled: bool = Field(
default=False,
title="Enable semantic search",
description="Enable or disable the semantic search feature.",
)
enabled: bool = Field(default=False, title="Enable semantic search.")
reindex: Optional[bool] = Field(
default=False,
title="Reindex on startup",
description="Trigger a full reindex of historical tracked objects into the embeddings database.",
default=False, title="Reindex all tracked objects on startup."
)
model: Optional[SemanticSearchModelEnum] = Field(
model: Optional[Union[SemanticSearchModelEnum, str]] = Field(
default=SemanticSearchModelEnum.jinav1,
title="Semantic search model",
description="The embeddings model to use for semantic search (for example 'jinav1').",
title="The CLIP model or GenAI provider name for semantic search.",
description="Use 'jinav1', 'jinav2' for ONNX models, or a GenAI config key (e.g. 'default') when that provider has the embeddings role.",
)
model_size: str = Field(
default="small",
title="Model size",
description="Select model size; 'small' runs on CPU and 'large' typically requires GPU.",
default="small", title="The size of the embeddings model used."
)
device: Optional[str] = Field(
default=None,
title="Device",
title="The device key to use for semantic search.",
description="This is an override, to target a specific device. See https://onnxruntime.ai/docs/execution-providers/ for more information",
)
class TriggerConfig(FrigateBaseModel):
friendly_name: Optional[str] = Field(
None,
title="Friendly name",
description="Optional friendly name displayed in the UI for this trigger.",
)
enabled: bool = Field(
default=True,
title="Enable this trigger",
description="Enable or disable this semantic search trigger.",
)
type: TriggerType = Field(
default=TriggerType.DESCRIPTION,
title="Trigger type",
description="Type of trigger: 'thumbnail' (match against image) or 'description' (match against text).",
)
data: str = Field(
title="Trigger content",
description="Text phrase or thumbnail ID to match against tracked objects.",
None, title="Trigger friendly name used in the Frigate UI."
)
enabled: bool = Field(default=True, title="Enable this trigger")
type: TriggerType = Field(default=TriggerType.DESCRIPTION, title="Type of trigger")
data: str = Field(title="Trigger content (text phrase or image ID)")
threshold: float = Field(
title="Trigger threshold",
description="Minimum similarity score (0-1) required to activate this trigger.",
title="Confidence score required to run the trigger",
default=0.8,
gt=0.0,
le=1.0,
)
actions: List[TriggerAction] = Field(
default=[],
title="Trigger actions",
description="List of actions to execute when trigger matches (notification, sub_label, attribute).",
default=[], title="Actions to perform when trigger is matched"
)
model_config = ConfigDict(extra="forbid", protected_namespaces=())
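
A trigger's threshold is a similarity cutoff in (0, 1]. A toy sketch of the matching decision; TriggerSketch and maybe_fire are hypothetical, and real matching happens against embedding similarity scores inside Frigate.

from typing import List

from pydantic import BaseModel, Field


class TriggerSketch(BaseModel):
    data: str
    threshold: float = Field(default=0.8, gt=0.0, le=1.0)
    actions: List[str] = Field(default_factory=list)


def maybe_fire(trigger: TriggerSketch, similarity: float) -> List[str]:
    # Return the actions to run when the similarity clears the threshold.
    return trigger.actions if similarity >= trigger.threshold else []


t = TriggerSketch(data="person at the door", actions=["notification"])
print(maybe_fire(t, 0.83))  # ['notification']
print(maybe_fire(t, 0.79))  # []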
@@ -229,191 +166,147 @@ class TriggerConfig(FrigateBaseModel):
class CameraSemanticSearchConfig(FrigateBaseModel):
triggers: Dict[str, TriggerConfig] = Field(
default={},
title="Triggers",
description="Actions and matching criteria for camera-specific semantic search triggers.",
title="Trigger actions on tracked objects that match existing thumbnails or descriptions",
)
model_config = ConfigDict(extra="forbid", protected_namespaces=())
class FaceRecognitionConfig(FrigateBaseModel):
enabled: bool = Field(
default=False,
title="Enable face recognition",
description="Enable or disable face recognition for all cameras; can be overridden per-camera.",
)
enabled: bool = Field(default=False, title="Enable face recognition.")
model_size: str = Field(
default="small",
title="Model size",
description="Model size to use for face embeddings (small/large); larger may require GPU.",
default="small", title="The size of the embeddings model used."
)
unknown_score: float = Field(
title="Unknown score threshold",
description="Distance threshold below which a face is considered a potential match (higher = stricter).",
title="Minimum face distance score required to be marked as a potential match.",
default=0.8,
gt=0.0,
le=1.0,
)
detection_threshold: float = Field(
default=0.7,
title="Detection threshold",
description="Minimum detection confidence required to consider a face detection valid.",
title="Minimum face detection score required to be considered a face.",
gt=0.0,
le=1.0,
)
recognition_threshold: float = Field(
default=0.9,
title="Recognition threshold",
description="Face embedding distance threshold to consider two faces a match.",
title="Minimum face distance score required to be considered a match.",
gt=0.0,
le=1.0,
)
min_area: int = Field(
default=750,
title="Minimum face area",
description="Minimum area (pixels) of a detected face box required to attempt recognition.",
default=750, title="Min area of face box to consider running face recognition."
)
min_faces: int = Field(
default=1,
gt=0,
le=6,
title="Minimum faces",
description="Minimum number of face recognitions required before applying a recognized sub-label to a person.",
title="Min face recognitions for the sub label to be applied to the person object.",
)
save_attempts: int = Field(
default=200,
ge=0,
title="Save attempts",
description="Number of face recognition attempts to retain for recent recognition UI.",
title="Number of face attempts to save in the recent recognitions tab.",
)
blur_confidence_filter: bool = Field(
default=True,
title="Blur confidence filter",
description="Adjust confidence scores based on image blur to reduce false positives for poor quality faces.",
default=True, title="Apply blur quality filter to face confidence."
)
device: Optional[str] = Field(
default=None,
title="Device",
title="The device key to use for face recognition.",
description="This is an override, to target a specific device. See https://onnxruntime.ai/docs/execution-providers/ for more information",
)
class CameraFaceRecognitionConfig(FrigateBaseModel):
enabled: bool = Field(
default=False,
title="Enable face recognition",
description="Enable or disable face recognition.",
)
enabled: bool = Field(default=False, title="Enable face recognition.")
min_area: int = Field(
default=750,
title="Minimum face area",
description="Minimum area (pixels) of a detected face box required to attempt recognition.",
default=750, title="Min area of face box to consider running face recognition."
)
model_config = ConfigDict(extra="forbid", protected_namespaces=())
class ReplaceRule(FrigateBaseModel):
pattern: str = Field(..., title="Regex pattern")
replacement: str = Field(..., title="Replacement string")
pattern: str = Field(..., title="Regex pattern to match.")
replacement: str = Field(
..., title="Replacement string (supports backrefs like '\\1')."
)
class LicensePlateRecognitionConfig(FrigateBaseModel):
enabled: bool = Field(
default=False,
title="Enable LPR",
description="Enable or disable license plate recognition for all cameras; can be overridden per-camera.",
)
enabled: bool = Field(default=False, title="Enable license plate recognition.")
model_size: str = Field(
default="small",
title="Model size",
description="Model size used for text detection/recognition. Most users should use 'small'.",
default="small", title="The size of the embeddings model used."
)
detection_threshold: float = Field(
default=0.7,
title="Detection threshold",
description="Detection confidence threshold to begin running OCR on a suspected plate.",
title="License plate object confidence score required to begin running recognition.",
gt=0.0,
le=1.0,
)
min_area: int = Field(
default=1000,
title="Minimum plate area",
description="Minimum plate area (pixels) required to attempt recognition.",
title="Minimum area of license plate to begin running recognition.",
)
recognition_threshold: float = Field(
default=0.9,
title="Recognition threshold",
description="Confidence threshold required for recognized plate text to be attached as a sub-label.",
title="Recognition confidence score required to add the plate to the object as a sub label.",
gt=0.0,
le=1.0,
)
min_plate_length: int = Field(
default=4,
title="Min plate length",
description="Minimum number of characters a recognized plate must contain to be considered valid.",
title="Minimum number of characters a license plate must have to be added to the object as a sub label.",
)
format: Optional[str] = Field(
default=None,
title="Plate format regex",
description="Optional regex to validate recognized plate strings against an expected format.",
title="Regular expression for the expected format of license plate.",
)
match_distance: int = Field(
default=1,
title="Match distance",
description="Number of character mismatches allowed when comparing detected plates to known plates.",
title="Allow this number of missing/incorrect characters to still cause a detected plate to match a known plate.",
ge=0,
)
known_plates: Optional[Dict[str, List[str]]] = Field(
default={},
title="Known plates",
description="List of plates or regexes to specially track or alert on.",
default={}, title="Known plates to track (strings or regular expressions)."
)
enhancement: int = Field(
default=0,
title="Enhancement level",
description="Enhancement level (0-10) to apply to plate crops prior to OCR; higher values may not always improve results, levels above 5 may only work with night time plates and should be used with caution.",
title="Amount of contrast adjustment and denoising to apply to license plate images before recognition.",
ge=0,
le=10,
)
debug_save_plates: bool = Field(
default=False,
title="Save debug plates",
description="Save plate crop images for debugging LPR performance.",
title="Save plates captured for LPR for debugging purposes.",
)
device: Optional[str] = Field(
default=None,
title="Device",
title="The device key to use for LPR.",
description="This is an override, to target a specific device. See https://onnxruntime.ai/docs/execution-providers/ for more information",
)
replace_rules: List[ReplaceRule] = Field(
default_factory=list,
title="Replacement rules",
description="Regex replacement rules used to normalize detected plate strings before matching.",
title="List of regex replacement rules for normalizing detected plates. Each rule has 'pattern' and 'replacement'.",
)
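
The replace_rules field holds ordered regex rewrites applied to detected plates before matching. A small sketch of applying such rules with re.sub; the rules shown are invented examples, but backrefs like "\\1" work as the ReplaceRule title describes.

import re

# Each rule mirrors ReplaceRule: a regex `pattern` and a `replacement`.
rules = [
    {"pattern": r"[-\s]", "replacement": ""},           # drop separators
    {"pattern": r"^0+([A-Z])", "replacement": r"\1"},   # strip leading zeros
]

plate = "0AB-12 34"
for rule in rules:
    plate = re.sub(rule["pattern"], rule["replacement"], plate)
print(plate)  # AB1234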
class CameraLicensePlateRecognitionConfig(FrigateBaseModel):
enabled: bool = Field(
default=False,
title="Enable LPR",
description="Enable or disable LPR on this camera.",
)
enabled: bool = Field(default=False, title="Enable license plate recognition.")
expire_time: int = Field(
default=3,
title="Expire seconds",
description="Time in seconds after which an unseen plate is expired from the tracker (for dedicated LPR cameras only).",
title="Expire plates not seen after number of seconds (for dedicated LPR cameras only).",
gt=0,
)
min_area: int = Field(
default=1000,
title="Minimum plate area",
description="Minimum plate area (pixels) required to attempt recognition.",
title="Minimum area of license plate to begin running recognition.",
)
enhancement: int = Field(
default=0,
title="Enhancement level",
description="Enhancement level (0-10) to apply to plate crops prior to OCR; higher values may not always improve results, levels above 5 may only work with night time plates and should be used with caution.",
title="Amount of contrast adjustment and denoising to apply to license plate images before recognition.",
ge=0,
le=10,
)
@@ -422,18 +315,12 @@ class CameraLicensePlateRecognitionConfig(FrigateBaseModel):
class CameraAudioTranscriptionConfig(FrigateBaseModel):
enabled: bool = Field(
default=False,
title="Enable transcription",
description="Enable or disable manually triggered audio event transcription.",
)
enabled: bool = Field(default=False, title="Enable audio transcription.")
enabled_in_config: Optional[bool] = Field(
default=None, title="Original transcription state"
default=None, title="Keep track of original state of audio transcription."
)
live_enabled: Optional[bool] = Field(
default=False,
title="Live transcription",
description="Enable streaming live transcription for audio as it is received.",
default=False, title="Enable live transcriptions."
)
model_config = ConfigDict(extra="forbid", protected_namespaces=())

View File

@@ -3,7 +3,7 @@ from __future__ import annotations
import json
import logging
import os
from typing import Any, Dict, Optional
from typing import Any, Dict, List, Optional, Union
import numpy as np
from pydantic import (
@@ -46,7 +46,6 @@ from .camera.birdseye import BirdseyeConfig
from .camera.detect import DetectConfig
from .camera.ffmpeg import FfmpegConfig
from .camera.genai import GenAIConfig, GenAIRoleEnum
from .camera.mask import ObjectMaskConfig
from .camera.motion import MotionConfig
from .camera.notification import NotificationConfig
from .camera.objects import FilterConfig, ObjectConfig
@@ -94,111 +93,54 @@ stream_info_retriever = StreamInfoRetriever()
class RuntimeMotionConfig(MotionConfig):
"""Runtime version of MotionConfig with rasterized masks."""
# The rasterized numpy mask (combination of all enabled masks)
rasterized_mask: Optional[np.ndarray] = None
raw_mask: Union[str, List[str]] = ""
mask: np.ndarray = None
def __init__(self, **config):
frame_shape = config.get("frame_shape", (1, 1))
# Store original mask dict for serialization
original_mask = config.get("mask", {})
if isinstance(original_mask, dict):
# Process the new dict format - update raw_coordinates for each mask
processed_mask = {}
for mask_id, mask_config in original_mask.items():
if isinstance(mask_config, dict):
coords = mask_config.get("coordinates", "")
relative_coords = get_relative_coordinates(coords, frame_shape)
mask_config_copy = mask_config.copy()
mask_config_copy["raw_coordinates"] = (
relative_coords if relative_coords else coords
)
mask_config_copy["coordinates"] = (
relative_coords if relative_coords else coords
)
processed_mask[mask_id] = mask_config_copy
else:
processed_mask[mask_id] = mask_config
config["mask"] = processed_mask
config["raw_mask"] = processed_mask
mask = get_relative_coordinates(config.get("mask", ""), frame_shape)
config["raw_mask"] = mask
super().__init__(**config)
# Rasterize only enabled masks
enabled_coords = []
for mask_config in self.mask.values():
if mask_config.enabled and mask_config.coordinates:
coords = mask_config.coordinates
if isinstance(coords, list):
enabled_coords.extend(coords)
else:
enabled_coords.append(coords)
if enabled_coords:
self.rasterized_mask = create_mask(frame_shape, enabled_coords)
if mask:
config["mask"] = create_mask(frame_shape, mask)
else:
empty_mask = np.zeros(frame_shape, np.uint8)
empty_mask[:] = 255
self.rasterized_mask = empty_mask
config["mask"] = empty_mask
super().__init__(**config)
def dict(self, **kwargs):
ret = super().model_dump(**kwargs)
if "rasterized_mask" in ret:
ret.pop("rasterized_mask")
if "mask" in ret:
ret["mask"] = ret["raw_mask"]
ret.pop("raw_mask")
return ret
@field_serializer("rasterized_mask", when_used="json")
def serialize_rasterized_mask(self, value: Any, info):
@field_serializer("mask", when_used="json")
def serialize_mask(self, value: Any, info):
return self.raw_mask
@field_serializer("raw_mask", when_used="json")
def serialize_raw_mask(self, value: Any, info):
return None
model_config = ConfigDict(arbitrary_types_allowed=True, extra="ignore")
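
create_mask is internal to Frigate and not shown in this diff. A rough OpenCV equivalent of the rasterization step, under the assumption (consistent with the all-255 empty-mask default visible above) that masked polygons are zeroed on a 255-filled frame:

import cv2
import numpy as np

frame_shape = (120, 160)  # rows (y), cols (x)
mask = np.full(frame_shape, 255, dtype=np.uint8)  # 255 = motion allowed

# Polygon points are (x, y); zero out the masked region.
polygon = np.array([[10, 10], [80, 10], [80, 60], [10, 60]], dtype=np.int32)
cv2.fillPoly(mask, [polygon], 0)

print(mask[30, 40], mask[100, 150])  # 0 255 (inside vs. outside the polygon)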
class RuntimeFilterConfig(FilterConfig):
"""Runtime version of FilterConfig with rasterized masks."""
# The rasterized numpy mask (combination of all enabled masks)
rasterized_mask: Optional[np.ndarray] = None
mask: Optional[np.ndarray] = None
raw_mask: Optional[Union[str, List[str]]] = None
def __init__(self, **config):
frame_shape = config.get("frame_shape", (1, 1))
mask = get_relative_coordinates(config.get("mask"), frame_shape)
# Store original mask dict for serialization
original_mask = config.get("mask", {})
if isinstance(original_mask, dict):
# Process the new dict format - update raw_coordinates for each mask
processed_mask = {}
for mask_id, mask_config in original_mask.items():
# Handle both dict and ObjectMaskConfig formats
if hasattr(mask_config, "model_dump"):
# It's an ObjectMaskConfig object
mask_dict = mask_config.model_dump()
coords = mask_dict.get("coordinates", "")
relative_coords = get_relative_coordinates(coords, frame_shape)
mask_dict["raw_coordinates"] = (
relative_coords if relative_coords else coords
)
mask_dict["coordinates"] = (
relative_coords if relative_coords else coords
)
processed_mask[mask_id] = mask_dict
elif isinstance(mask_config, dict):
coords = mask_config.get("coordinates", "")
relative_coords = get_relative_coordinates(coords, frame_shape)
mask_config_copy = mask_config.copy()
mask_config_copy["raw_coordinates"] = (
relative_coords if relative_coords else coords
)
mask_config_copy["coordinates"] = (
relative_coords if relative_coords else coords
)
processed_mask[mask_id] = mask_config_copy
else:
processed_mask[mask_id] = mask_config
config["mask"] = processed_mask
config["raw_mask"] = processed_mask
config["raw_mask"] = mask
if mask is not None:
config["mask"] = create_mask(frame_shape, mask)
# Convert min_area and max_area to pixels if they're percentages
if "min_area" in config:
@@ -209,31 +151,13 @@ class RuntimeFilterConfig(FilterConfig):
super().__init__(**config)
# Rasterize only enabled masks
enabled_coords = []
for mask_config in self.mask.values():
if mask_config.enabled and mask_config.coordinates:
coords = mask_config.coordinates
if isinstance(coords, list):
enabled_coords.extend(coords)
else:
enabled_coords.append(coords)
if enabled_coords:
self.rasterized_mask = create_mask(frame_shape, enabled_coords)
else:
self.rasterized_mask = None
def dict(self, **kwargs):
ret = super().model_dump(**kwargs)
if "rasterized_mask" in ret:
ret.pop("rasterized_mask")
if "mask" in ret:
ret["mask"] = ret["raw_mask"]
ret.pop("raw_mask")
return ret
@field_serializer("rasterized_mask", when_used="json")
def serialize_rasterized_mask(self, value: Any, info):
return None
model_config = ConfigDict(arbitrary_types_allowed=True, extra="ignore")
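The elided lines above convert min_area/max_area into pixels when given as percentages. A hedged sketch of that conversion, assuming floats below 1.0 are treated as a fraction of the frame area (the exact cutoff is an assumption here):

def area_to_pixels(value: float, frame_shape: tuple[int, int]) -> int:
    # assumed rule: 0 < value < 1 means "fraction of total frame area"
    total = frame_shape[0] * frame_shape[1]
    return int(value * total) if 0 < value < 1 else int(value)

print(area_to_pixels(0.01, (480, 640)))  # 3072 pixels on a 480x640 frame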
@@ -375,189 +299,116 @@ def verify_lpr_and_face(
class FrigateConfig(FrigateBaseModel):
version: Optional[str] = Field(
default=None,
title="Current config version",
description="Numeric or string version of the active configuration to help detect migrations or format changes.",
)
version: Optional[str] = Field(default=None, title="Current config version.")
safe_mode: bool = Field(
default=False,
title="Safe mode",
description="When enabled, start Frigate in safe mode with reduced features for troubleshooting.",
default=False, title="If Frigate should be started in safe mode."
)
# Fields that install global state should be defined first, so that their validators run first.
environment_vars: EnvVars = Field(
default_factory=dict,
title="Environment variables",
description="Key/value pairs of environment variables to set for the Frigate process in Home Assistant OS. Non-HAOS users must use Docker environment variable configuration instead.",
default_factory=dict, title="Frigate environment variables."
)
logger: LoggerConfig = Field(
default_factory=LoggerConfig,
title="Logging",
description="Controls default log verbosity and per-component log level overrides.",
title="Logging configuration.",
validate_default=True,
)
# Global config
auth: AuthConfig = Field(
default_factory=AuthConfig,
title="Authentication",
description="Authentication and session-related settings including cookie and rate limit options.",
)
auth: AuthConfig = Field(default_factory=AuthConfig, title="Auth configuration.")
database: DatabaseConfig = Field(
default_factory=DatabaseConfig,
title="Database",
description="Settings for the SQLite database used by Frigate to store tracked object and recording metadata.",
default_factory=DatabaseConfig, title="Database configuration."
)
go2rtc: RestreamConfig = Field(
default_factory=RestreamConfig,
title="go2rtc",
description="Settings for the integrated go2rtc restreaming service used for live stream relaying and translation.",
)
mqtt: MqttConfig = Field(
title="MQTT",
description="Settings for connecting and publishing telemetry, snapshots, and event details to an MQTT broker.",
default_factory=RestreamConfig, title="Global restream configuration."
)
mqtt: MqttConfig = Field(title="MQTT configuration.")
notifications: NotificationConfig = Field(
default_factory=NotificationConfig,
title="Notifications",
description="Settings to enable and control notifications for all cameras; can be overridden per-camera.",
default_factory=NotificationConfig, title="Global notification configuration."
)
networking: NetworkingConfig = Field(
default_factory=NetworkingConfig,
title="Networking",
description="Network-related settings such as IPv6 enablement for Frigate endpoints.",
default_factory=NetworkingConfig, title="Networking configuration"
)
proxy: ProxyConfig = Field(
default_factory=ProxyConfig,
title="Proxy",
description="Settings for integrating Frigate behind a reverse proxy that passes authenticated user headers.",
default_factory=ProxyConfig, title="Proxy configuration."
)
telemetry: TelemetryConfig = Field(
default_factory=TelemetryConfig,
title="Telemetry",
description="System telemetry and stats options including GPU and network bandwidth monitoring.",
)
tls: TlsConfig = Field(
default_factory=TlsConfig,
title="TLS",
description="TLS settings for Frigate's web endpoints (port 8971).",
)
ui: UIConfig = Field(
default_factory=UIConfig,
title="UI",
description="User interface preferences such as timezone, time/date formatting, and units.",
default_factory=TelemetryConfig, title="Telemetry configuration."
)
tls: TlsConfig = Field(default_factory=TlsConfig, title="TLS configuration.")
ui: UIConfig = Field(default_factory=UIConfig, title="UI configuration.")
# Detector config
detectors: Dict[str, BaseDetectorConfig] = Field(
default=DEFAULT_DETECTORS,
title="Detector hardware",
description="Configuration for object detectors (CPU, GPU, ONNX backends) and any detector-specific model settings.",
title="Detector hardware configuration.",
)
model: ModelConfig = Field(
default_factory=ModelConfig,
title="Detection model",
description="Settings to configure a custom object detection model and its input shape.",
default_factory=ModelConfig, title="Detection model configuration."
)
# GenAI config (named provider configs: name -> GenAIConfig)
genai: Dict[str, GenAIConfig] = Field(
default_factory=dict,
title="Generative AI configuration (named providers).",
description="Settings for integrated generative AI providers used to generate object descriptions and review summaries.",
default_factory=dict, title="Generative AI configuration (named providers)."
)
# Camera config
cameras: Dict[str, CameraConfig] = Field(title="Cameras", description="Per-camera configuration.")
cameras: Dict[str, CameraConfig] = Field(title="Camera configuration.")
audio: AudioConfig = Field(
default_factory=AudioConfig,
title="Audio events",
description="Settings for audio-based event detection for all cameras; can be overridden per-camera.",
default_factory=AudioConfig, title="Global Audio events configuration."
)
birdseye: BirdseyeConfig = Field(
default_factory=BirdseyeConfig,
title="Birdseye",
description="Settings for the Birdseye composite view that composes multiple camera feeds into a single layout.",
default_factory=BirdseyeConfig, title="Birdseye configuration."
)
detect: DetectConfig = Field(
default_factory=DetectConfig,
title="Object Detection",
description="Settings for the detection/detect role used to run object detection and initialize trackers.",
default_factory=DetectConfig, title="Global object tracking configuration."
)
ffmpeg: FfmpegConfig = Field(
default_factory=FfmpegConfig,
title="FFmpeg",
description="FFmpeg settings including binary path, args, hwaccel options, and per-role output args.",
default_factory=FfmpegConfig, title="Global FFmpeg configuration."
)
live: CameraLiveConfig = Field(
default_factory=CameraLiveConfig,
title="Live playback",
description="Settings used by the Web UI to control live stream resolution and quality.",
default_factory=CameraLiveConfig, title="Live playback settings."
)
motion: Optional[MotionConfig] = Field(
default=None,
title="Motion detection",
description="Default motion detection settings applied to cameras unless overridden per-camera.",
default=None, title="Global motion detection configuration."
)
objects: ObjectConfig = Field(
default_factory=ObjectConfig,
title="Objects",
description="Object tracking defaults including which labels to track and per-object filters.",
default_factory=ObjectConfig, title="Global object configuration."
)
record: RecordConfig = Field(
default_factory=RecordConfig,
title="Recording",
description="Recording and retention settings applied to cameras unless overridden per-camera.",
default_factory=RecordConfig, title="Global record configuration."
)
review: ReviewConfig = Field(
default_factory=ReviewConfig,
title="Review",
description="Settings that control alerts, detections, and GenAI review summaries used by the UI and storage.",
default_factory=ReviewConfig, title="Review configuration."
)
snapshots: SnapshotsConfig = Field(
default_factory=SnapshotsConfig,
title="Snapshots",
description="Settings for saved JPEG snapshots of tracked objects for all cameras; can be overridden per-camera.",
default_factory=SnapshotsConfig, title="Global snapshots configuration."
)
timestamp_style: TimestampStyleConfig = Field(
default_factory=TimestampStyleConfig,
title="Timestamp style",
description="Styling options for in-feed timestamps applied to debug view and snapshots.",
title="Global timestamp style configuration.",
)
# Classification Config
audio_transcription: AudioTranscriptionConfig = Field(
default_factory=AudioTranscriptionConfig,
title="Audio transcription",
description="Settings for live and speech audio transcription used for events and live captions.",
default_factory=AudioTranscriptionConfig, title="Audio transcription config."
)
classification: ClassificationConfig = Field(
default_factory=ClassificationConfig,
title="Object classification",
description="Settings for classification models used to refine object labels or state classification.",
default_factory=ClassificationConfig, title="Object classification config."
)
semantic_search: SemanticSearchConfig = Field(
default_factory=SemanticSearchConfig,
title="Semantic Search",
description="Settings for Semantic Search which builds and queries object embeddings to find similar items.",
default_factory=SemanticSearchConfig, title="Semantic search configuration."
)
face_recognition: FaceRecognitionConfig = Field(
default_factory=FaceRecognitionConfig,
title="Face recognition",
description="Settings for face detection and recognition for all cameras; can be overridden per-camera.",
default_factory=FaceRecognitionConfig, title="Face recognition config."
)
lpr: LicensePlateRecognitionConfig = Field(
default_factory=LicensePlateRecognitionConfig,
title="License Plate Recognition",
description="License plate recognition settings including detection thresholds, formatting, and known plates.",
title="License Plate recognition config.",
)
camera_groups: Dict[str, CameraGroupConfig] = Field(
default_factory=dict,
title="Camera groups",
description="Configuration for named camera groups used to organize cameras in the UI.",
default_factory=dict, title="Camera group configuration"
)
_plus_api: PlusApi
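The title/description churn above is not just cosmetic: pydantic copies both fields into the generated JSON schema, which config tooling can render as help text. A quick illustration:

from pydantic import BaseModel, Field

class Demo(BaseModel):
    enabled: bool = Field(
        default=True,
        title="Enable TLS",
        description="Enable TLS for Frigate's web endpoints.",
    )

print(Demo.model_json_schema()["properties"]["enabled"])
# {'default': True, 'description': "Enable TLS for Frigate's web endpoints.",
#  'title': 'Enable TLS', 'type': 'boolean'}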
@@ -592,6 +443,22 @@ class FrigateConfig(FrigateBaseModel):
)
role_to_name[role] = name
# validate semantic_search.model when it is a GenAI provider name
if self.semantic_search.enabled and isinstance(
self.semantic_search.model, str
):
if self.semantic_search.model not in self.genai:
raise ValueError(
f"semantic_search.model '{self.semantic_search.model}' is not a "
"valid GenAI config key. Must match a key in genai config."
)
genai_cfg = self.genai[self.semantic_search.model]
if GenAIRoleEnum.embeddings not in genai_cfg.roles:
raise ValueError(
f"GenAI provider '{self.semantic_search.model}' must have "
"'embeddings' in its roles for semantic search."
)
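Distilled, the new check requires a string-valued semantic_search.model to name a configured GenAI provider that carries the embeddings role. The same logic as a standalone sketch (plain dicts, illustrative only):

def check_semantic_search_model(model, genai: dict[str, dict]) -> None:
    if not isinstance(model, str):
        return  # built-in enum models need no provider lookup
    if model not in genai:
        raise ValueError(f"'{model}' is not a valid GenAI config key")
    if "embeddings" not in genai[model].get("roles", []):
        raise ValueError(f"GenAI provider '{model}' must have the 'embeddings' role")

check_semantic_search_model("local", {"local": {"roles": ["embeddings"]}})  # passes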
# set default min_score for object attributes
for attribute in self.model.all_attributes:
if not self.objects.filters.get(attribute):
@@ -636,9 +503,6 @@ class FrigateConfig(FrigateBaseModel):
# users should not set model themselves
if detector_config.model:
logger.warning(
"The model key should be specified at the root level of the config, not under detectors. The nested model key will be ignored."
)
detector_config.model = None
model_config = self.model.model_dump(exclude_unset=True, warnings="none")
@@ -789,63 +653,35 @@ class FrigateConfig(FrigateBaseModel):
for key in object_keys:
camera_config.objects.filters[key] = FilterConfig()
# Process global object masks to set raw_coordinates
if camera_config.objects.mask:
processed_global_masks = {}
for mask_id, mask_config in camera_config.objects.mask.items():
if mask_config:
coords = mask_config.coordinates
relative_coords = get_relative_coordinates(
coords, camera_config.frame_shape
)
# Create a new ObjectMaskConfig with raw_coordinates set
processed_global_masks[mask_id] = ObjectMaskConfig(
friendly_name=mask_config.friendly_name,
enabled=mask_config.enabled,
coordinates=relative_coords if relative_coords else coords,
raw_coordinates=relative_coords
if relative_coords
else coords,
enabled_in_config=mask_config.enabled,
)
else:
processed_global_masks[mask_id] = mask_config
camera_config.objects.mask = processed_global_masks
camera_config.objects.raw_mask = processed_global_masks
# Apply global object masks and convert masks to numpy array
for object, filter in camera_config.objects.filters.items():
# Set enabled_in_config for per-object masks before processing
for mask_config in filter.mask.values():
if mask_config:
mask_config.enabled_in_config = mask_config.enabled
# Merge global object masks with per-object filter masks
merged_mask = dict(filter.mask) # Copy filter-specific masks
# Add global object masks if they exist
if camera_config.objects.mask:
for mask_id, mask_config in camera_config.objects.mask.items():
# Use a global prefix to avoid key collisions
global_mask_id = f"global_{mask_id}"
merged_mask[global_mask_id] = mask_config
filter_mask = []
if filter.mask is not None:
filter_mask = (
filter.mask
if isinstance(filter.mask, list)
else [filter.mask]
)
object_mask = (
get_relative_coordinates(
(
camera_config.objects.mask
if isinstance(camera_config.objects.mask, list)
else [camera_config.objects.mask]
),
camera_config.frame_shape,
)
or []
)
filter.mask = filter_mask + object_mask
# Set runtime filter to create masks
camera_config.objects.filters[object] = RuntimeFilterConfig(
frame_shape=camera_config.frame_shape,
mask=merged_mask,
**filter.model_dump(
exclude_unset=True, exclude={"mask", "raw_mask"}
),
**filter.model_dump(exclude_unset=True),
)
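The merge above is a prefixed dict union: per-filter masks keep their own keys, and globals are folded in under a global_ prefix so same-named masks cannot collide. In isolation:

filter_masks = {"porch": {"coordinates": "0.1,0.1,0.2,0.1,0.2,0.2"}}
global_masks = {"porch": {"coordinates": "0.5,0.5,0.6,0.5,0.6,0.6"}}

merged = dict(filter_masks)
for mask_id, cfg in global_masks.items():
    merged[f"global_{mask_id}"] = cfg  # prefix avoids key collisions

print(sorted(merged))  # ['global_porch', 'porch']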
# Set enabled_in_config for motion masks to match config file state BEFORE creating RuntimeMotionConfig
if camera_config.motion:
camera_config.motion.enabled_in_config = camera_config.motion.enabled
for mask_config in camera_config.motion.mask.values():
if mask_config:
mask_config.enabled_in_config = mask_config.enabled
# Convert motion configuration
if camera_config.motion is None:
camera_config.motion = RuntimeMotionConfig(
@@ -854,8 +690,10 @@ class FrigateConfig(FrigateBaseModel):
else:
camera_config.motion = RuntimeMotionConfig(
frame_shape=camera_config.frame_shape,
raw_mask=camera_config.motion.mask,
**camera_config.motion.model_dump(exclude_unset=True),
)
camera_config.motion.enabled_in_config = camera_config.motion.enabled
# generate zone contours
if len(camera_config.zones) > 0:
@@ -869,10 +707,6 @@ class FrigateConfig(FrigateBaseModel):
zone.generate_contour(camera_config.frame_shape)
# Set enabled_in_config for zones to match config file state
for zone in camera_config.zones.values():
zone.enabled_in_config = zone.enabled
# Set live view stream if none is set
if not camera_config.live.streams:
camera_config.live.streams = {name: name}

View File

@@ -8,8 +8,4 @@ __all__ = ["DatabaseConfig"]
class DatabaseConfig(FrigateBaseModel):
path: str = Field(
default=DEFAULT_DB_PATH,
title="Database path",
description="Filesystem path where the Frigate SQLite database file will be stored.",
) # noqa: F821
path: str = Field(default=DEFAULT_DB_PATH, title="Database path.") # noqa: F821

View File

@@ -9,15 +9,9 @@ __all__ = ["LoggerConfig"]
class LoggerConfig(FrigateBaseModel):
default: LogLevel = Field(
default=LogLevel.info,
title="Logging level",
description="Default global log verbosity (debug, info, warning, error).",
)
default: LogLevel = Field(default=LogLevel.info, title="Default logging level.")
logs: dict[str, LogLevel] = Field(
default_factory=dict,
title="Per-process log level",
description="Per-component log level overrides to increase or decrease verbosity for specific modules.",
default_factory=dict, title="Log level for specified processes."
)
@model_validator(mode="after")

View File

@@ -12,73 +12,25 @@ __all__ = ["MqttConfig"]
class MqttConfig(FrigateBaseModel):
enabled: bool = Field(
default=True,
title="Enable MQTT",
description="Enable or disable MQTT integration for state, events, and snapshots.",
)
host: str = Field(
default="",
title="MQTT host",
description="Hostname or IP address of the MQTT broker.",
)
port: int = Field(
default=1883,
title="MQTT port",
description="Port of the MQTT broker (usually 1883 for plain MQTT).",
)
topic_prefix: str = Field(
default="frigate",
title="Topic prefix",
description="MQTT topic prefix for all Frigate topics; must be unique if running multiple instances.",
)
client_id: str = Field(
default="frigate",
title="Client ID",
description="Client identifier used when connecting to the MQTT broker; should be unique per instance.",
)
enabled: bool = Field(default=True, title="Enable MQTT Communication.")
host: str = Field(default="", title="MQTT Host")
port: int = Field(default=1883, title="MQTT Port")
topic_prefix: str = Field(default="frigate", title="MQTT Topic Prefix")
client_id: str = Field(default="frigate", title="MQTT Client ID")
stats_interval: int = Field(
default=60,
ge=FREQUENCY_STATS_POINTS,
title="Stats interval",
description="Interval in seconds for publishing system and camera stats to MQTT.",
)
user: Optional[EnvString] = Field(
default=None,
title="MQTT username",
description="Optional MQTT username; can be provided via environment variables or secrets.",
default=60, ge=FREQUENCY_STATS_POINTS, title="MQTT Camera Stats Interval"
)
user: Optional[EnvString] = Field(default=None, title="MQTT Username")
password: Optional[EnvString] = Field(
default=None,
title="MQTT password",
description="Optional MQTT password; can be provided via environment variables or secrets.",
validate_default=True,
)
tls_ca_certs: Optional[str] = Field(
default=None,
title="TLS CA certs",
description="Path to CA certificate for TLS connections to the broker (for self-signed certs).",
default=None, title="MQTT Password", validate_default=True
)
tls_ca_certs: Optional[str] = Field(default=None, title="MQTT TLS CA Certificates")
tls_client_cert: Optional[str] = Field(
default=None,
title="Client cert",
description="Client certificate path for TLS mutual authentication; do not set user/password when using client certs.",
)
tls_client_key: Optional[str] = Field(
default=None,
title="Client key",
description="Private key path for the client certificate.",
)
tls_insecure: Optional[bool] = Field(
default=None,
title="TLS insecure",
description="Allow insecure TLS connections by skipping hostname verification (not recommended).",
)
qos: int = Field(
default=0,
title="MQTT QoS",
description="Quality of Service level for MQTT publishes/subscriptions (0, 1, or 2).",
default=None, title="MQTT TLS Client Certificate"
)
tls_client_key: Optional[str] = Field(default=None, title="MQTT TLS Client Key")
tls_insecure: Optional[bool] = Field(default=None, title="MQTT TLS Insecure")
qos: int = Field(default=0, title="MQTT QoS")
@model_validator(mode="after")
def user_requires_pass(self, info: ValidationInfo) -> Self:

View File

@@ -8,34 +8,20 @@ __all__ = ["IPv6Config", "ListenConfig", "NetworkingConfig"]
class IPv6Config(FrigateBaseModel):
enabled: bool = Field(
default=False,
title="Enable IPv6",
description="Enable IPv6 support for Frigate services (API and UI) where applicable.",
)
enabled: bool = Field(default=False, title="Enable IPv6 for port 5000 and/or 8971")
class ListenConfig(FrigateBaseModel):
internal: Union[int, str] = Field(
default=5000,
title="Internal port",
description="Internal listening port for Frigate (default 5000).",
default=5000, title="Internal listening port for Frigate"
)
external: Union[int, str] = Field(
default=8971,
title="External port",
description="External listening port for Frigate (default 8971).",
default=8971, title="External listening port for Frigate"
)
class NetworkingConfig(FrigateBaseModel):
ipv6: IPv6Config = Field(
default_factory=IPv6Config,
title="IPv6 configuration",
description="IPv6-specific settings for Frigate network services.",
)
ipv6: IPv6Config = Field(default_factory=IPv6Config, title="IPv6 configuration")
listen: ListenConfig = Field(
default_factory=ListenConfig,
title="Listening ports configuration",
description="Configuration for internal and external listening ports. This is for advanced users. For the majority of use cases it's recommended to change the ports section of your Docker compose file.",
default_factory=ListenConfig, title="Listening ports configuration"
)

View File

@@ -10,47 +10,36 @@ __all__ = ["ProxyConfig", "HeaderMappingConfig"]
class HeaderMappingConfig(FrigateBaseModel):
user: str = Field(
default=None,
title="User header",
description="Header containing the authenticated username provided by the upstream proxy.",
default=None, title="Header name from upstream proxy to identify user."
)
role: str = Field(
default=None,
title="Role header",
description="Header containing the authenticated user's role or groups from the upstream proxy.",
title="Header name from upstream proxy to identify user role.",
)
role_map: Optional[dict[str, list[str]]] = Field(
default_factory=dict,
title=("Role mapping"),
description="Map upstream group values to Frigate roles (for example map admin groups to the admin role).",
title=("Mapping of Frigate roles to upstream group values. "),
)
class ProxyConfig(FrigateBaseModel):
header_map: HeaderMappingConfig = Field(
default_factory=HeaderMappingConfig,
title="Header mapping",
description="Map incoming proxy headers to Frigate user and role fields for proxy-based auth.",
title="Header mapping definitions for proxy user passing.",
)
logout_url: Optional[str] = Field(
default=None,
title="Logout URL",
description="URL to redirect users to when logging out via the proxy.",
default=None, title="Redirect url for logging out with proxy."
)
auth_secret: Optional[EnvString] = Field(
default=None,
title="Proxy secret",
description="Optional secret checked against the X-Proxy-Secret header to verify trusted proxies.",
title="Secret value for proxy authentication.",
)
default_role: Optional[str] = Field(
default="viewer",
title="Default role",
description="Default role assigned to proxy-authenticated users when no role mapping applies (admin or viewer).",
default="viewer", title="Default role for proxy users."
)
separator: Optional[str] = Field(
default=",",
title="Separator character",
description="Character used to split multiple values provided in proxy headers.",
title="The character used to separate values in a mapped header.",
)
@field_validator("separator", mode="before")

View File

@@ -8,41 +8,22 @@ __all__ = ["TelemetryConfig", "StatsConfig"]
class StatsConfig(FrigateBaseModel):
amd_gpu_stats: bool = Field(
default=True,
title="AMD GPU stats",
description="Enable collection of AMD GPU statistics if an AMD GPU is present.",
)
intel_gpu_stats: bool = Field(
default=True,
title="Intel GPU stats",
description="Enable collection of Intel GPU statistics if an Intel GPU is present.",
)
amd_gpu_stats: bool = Field(default=True, title="Enable AMD GPU stats.")
intel_gpu_stats: bool = Field(default=True, title="Enable Intel GPU stats.")
network_bandwidth: bool = Field(
default=False,
title="Network bandwidth",
description="Enable per-process network bandwidth monitoring for camera ffmpeg processes and detectors (requires capabilities).",
default=False, title="Enable network bandwidth for ffmpeg processes."
)
intel_gpu_device: Optional[str] = Field(
default=None,
title="SR-IOV device",
description="Device identifier used when treating Intel GPUs as SR-IOV to fix GPU stats.",
default=None, title="Define the device to use when gathering SR-IOV stats."
)
class TelemetryConfig(FrigateBaseModel):
network_interfaces: list[str] = Field(
default=[],
title="Network interfaces",
description="List of network interface name prefixes to monitor for bandwidth statistics.",
title="Enabled network interfaces for bandwidth calculation.",
)
stats: StatsConfig = Field(
default_factory=StatsConfig,
title="System stats",
description="Options to enable/disable collection of various system and GPU statistics.",
)
version_check: bool = Field(
default=True,
title="Version check",
description="Enable an outbound check to detect if a newer Frigate version is available.",
default_factory=StatsConfig, title="System Stats Configuration"
)
version_check: bool = Field(default=True, title="Enable latest version check.")

View File

@@ -6,8 +6,4 @@ __all__ = ["TlsConfig"]
class TlsConfig(FrigateBaseModel):
enabled: bool = Field(
default=True,
title="Enable TLS",
description="Enable TLS for Frigate's web UI and API on the configured TLS port.",
)
enabled: bool = Field(default=True, title="Enable TLS for port 8971")

View File

@@ -27,28 +27,16 @@ class UnitSystemEnum(str, Enum):
class UIConfig(FrigateBaseModel):
timezone: Optional[str] = Field(
default=None,
title="Timezone",
description="Optional timezone to display across the UI (defaults to browser local time if unset).",
)
timezone: Optional[str] = Field(default=None, title="Override UI timezone.")
time_format: TimeFormatEnum = Field(
default=TimeFormatEnum.browser,
title="Time format",
description="Time format to use in the UI (browser, 12hour, or 24hour).",
default=TimeFormatEnum.browser, title="Override UI time format."
)
date_style: DateTimeStyleEnum = Field(
default=DateTimeStyleEnum.short,
title="Date style",
description="Date style to use in the UI (full, long, medium, short).",
default=DateTimeStyleEnum.short, title="Override UI dateStyle."
)
time_style: DateTimeStyleEnum = Field(
default=DateTimeStyleEnum.medium,
title="Time style",
description="Time style to use in the UI (full, long, medium, short).",
default=DateTimeStyleEnum.medium, title="Override UI timeStyle."
)
unit_system: UnitSystemEnum = Field(
default=UnitSystemEnum.metric,
title="Unit system",
description="Unit system for display (metric or imperial) used in the UI and MQTT.",
default=UnitSystemEnum.metric, title="The unit system to use for measurements."
)

View File

@@ -1220,7 +1220,7 @@ class LicensePlateProcessingMixin:
rgb = cv2.cvtColor(frame, cv2.COLOR_YUV2BGR_I420)
# apply motion mask
rgb[self.config.cameras[obj_data].motion.rasterized_mask == 0] = [0, 0, 0]
rgb[self.config.cameras[obj_data].motion.mask == 0] = [0, 0, 0]
if WRITE_DEBUG_IMAGES:
cv2.imwrite(
@@ -1324,7 +1324,7 @@ class LicensePlateProcessingMixin:
rgb = cv2.cvtColor(frame, cv2.COLOR_YUV2BGR_I420)
# apply motion mask
rgb[self.config.cameras[camera].motion.rasterized_mask == 0] = [0, 0, 0]
rgb[self.config.cameras[camera].motion.mask == 0] = [0, 0, 0]
left, top, right, bottom = car_box
car = rgb[top:bottom, left:right]
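The rename from rasterized_mask to mask leaves the masking mechanics unchanged: numpy boolean indexing zeroes every pixel where the mask is 0. Self-contained example:

import numpy as np

rgb = np.full((4, 4, 3), 200, dtype=np.uint8)  # dummy frame
mask = np.zeros((4, 4), dtype=np.uint8)
mask[1:3, 1:3] = 255                           # keep only the center

rgb[mask == 0] = [0, 0, 0]                     # black out everything else
print(rgb[:, :, 0])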

View File

@@ -22,7 +22,7 @@ from .api import RealTimeProcessorApi
try:
from tflite_runtime.interpreter import Interpreter
except ModuleNotFoundError:
from ai_edge_litert.interpreter import Interpreter
from tensorflow.lite.python.interpreter import Interpreter
logger = logging.getLogger(__name__)

View File

@@ -32,7 +32,7 @@ from .api import RealTimeProcessorApi
try:
from tflite_runtime.interpreter import Interpreter
except ModuleNotFoundError:
from ai_edge_litert.interpreter import Interpreter
from tensorflow.lite.python.interpreter import Interpreter
logger = logging.getLogger(__name__)
@@ -73,6 +73,11 @@ class CustomStateClassificationProcessor(RealTimeProcessorApi):
self.__build_detector()
def __build_detector(self) -> None:
try:
from tflite_runtime.interpreter import Interpreter
except ModuleNotFoundError:
from tensorflow.lite.python.interpreter import Interpreter
model_path = os.path.join(self.model_dir, "model.tflite")
labelmap_path = os.path.join(self.model_dir, "labelmap.txt")
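The same fallback-import pattern (prefer the slim tflite_runtime wheel, fall back to the interpreter bundled with full TensorFlow) now repeats across several modules; as a reusable helper it would look like:

def get_tflite_interpreter_cls():
    # prefer the lightweight tflite_runtime wheel; fall back to the
    # interpreter shipped with full TensorFlow when it is not installed
    try:
        from tflite_runtime.interpreter import Interpreter
    except ModuleNotFoundError:
        from tensorflow.lite.python.interpreter import Interpreter
    return Interpreter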

View File

@@ -45,55 +45,30 @@ class ModelTypeEnum(str, Enum):
class ModelConfig(BaseModel):
path: Optional[str] = Field(
None,
title="Custom Object detection model path",
description="Path to a custom detection model file (or plus://<model_id> for Frigate+ models).",
)
path: Optional[str] = Field(None, title="Custom Object detection model path.")
labelmap_path: Optional[str] = Field(
None,
title="Label map for custom object detector",
description="Path to a labelmap file that maps numeric classes to string labels for the detector.",
)
width: int = Field(
default=320,
title="Object detection model input width",
description="Width of the model input tensor in pixels.",
)
height: int = Field(
default=320,
title="Object detection model input height",
description="Height of the model input tensor in pixels.",
None, title="Label map for custom object detector."
)
width: int = Field(default=320, title="Object detection model input width.")
height: int = Field(default=320, title="Object detection model input height.")
labelmap: Dict[int, str] = Field(
default_factory=dict,
title="Labelmap customization",
description="Overrides or remapping entries to merge into the standard labelmap.",
default_factory=dict, title="Labelmap customization."
)
attributes_map: Dict[str, list[str]] = Field(
default=DEFAULT_ATTRIBUTE_LABEL_MAP,
title="Map of object labels to their attribute labels",
description="Mapping from object labels to attribute labels used to attach metadata (for example 'car' -> ['license_plate']).",
title="Map of object labels to their attribute labels.",
)
input_tensor: InputTensorEnum = Field(
default=InputTensorEnum.nhwc,
title="Model Input Tensor Shape",
description="Tensor format expected by the model: 'nhwc' or 'nchw'.",
default=InputTensorEnum.nhwc, title="Model Input Tensor Shape"
)
input_pixel_format: PixelFormatEnum = Field(
default=PixelFormatEnum.rgb,
title="Model Input Pixel Color Format",
description="Pixel colorspace expected by the model: 'rgb', 'bgr', or 'yuv'.",
default=PixelFormatEnum.rgb, title="Model Input Pixel Color Format"
)
input_dtype: InputDTypeEnum = Field(
default=InputDTypeEnum.int,
title="Model Input D Type",
description="Data type of the model input tensor (for example 'float32').",
default=InputDTypeEnum.int, title="Model Input D Type"
)
model_type: ModelTypeEnum = Field(
default=ModelTypeEnum.ssd,
title="Object Detection Model Type",
description="Detector model architecture type (ssd, yolox, yolonas) used by some detectors for optimization.",
default=ModelTypeEnum.ssd, title="Object Detection Model Type"
)
_merged_labelmap: Optional[Dict[int, str]] = PrivateAttr()
_colormap: Dict[int, Tuple[int, int, int]] = PrivateAttr()
@@ -235,20 +210,12 @@ class ModelConfig(BaseModel):
class BaseDetectorConfig(BaseModel):
# the type field must be defined in all subclasses
type: str = Field(
default="cpu",
title="Detector Type",
description="Type of detector to use for object detection (for example 'cpu', 'edgetpu', 'openvino').",
)
type: str = Field(default="cpu", title="Detector Type")
model: Optional[ModelConfig] = Field(
default=None,
title="Detector specific model configuration",
description="Detector-specific model configuration options (path, input size, etc.).",
default=None, title="Detector specific model configuration."
)
model_path: Optional[str] = Field(
default=None,
title="Detector specific model path",
description="File path to the detector model binary if required by the chosen detector.",
default=None, title="Detector specific model path."
)
model_config = ConfigDict(
extra="allow", arbitrary_types_allowed=True, protected_namespaces=()

View File

@@ -6,7 +6,7 @@ import numpy as np
try:
from tflite_runtime.interpreter import Interpreter, load_delegate
except ModuleNotFoundError:
from ai_edge_litert.interpreter import Interpreter, load_delegate
from tensorflow.lite.python.interpreter import Interpreter, load_delegate
logger = logging.getLogger(__name__)

View File

@@ -1,6 +1,6 @@
import logging
from pydantic import ConfigDict, Field
from pydantic import Field
from typing_extensions import Literal
from frigate.detectors.detection_api import DetectionApi
@@ -12,7 +12,7 @@ from ..detector_utils import tflite_detect_raw, tflite_init
try:
from tflite_runtime.interpreter import Interpreter
except ModuleNotFoundError:
from ai_edge_litert.interpreter import Interpreter
from tensorflow.lite.python.interpreter import Interpreter
logger = logging.getLogger(__name__)
@@ -21,18 +21,8 @@ DETECTOR_KEY = "cpu"
class CpuDetectorConfig(BaseDetectorConfig):
"""CPU TFLite detector that runs TensorFlow Lite models on the host CPU without hardware acceleration. Not recommended."""
model_config = ConfigDict(
title="CPU",
)
type: Literal[DETECTOR_KEY]
num_threads: int = Field(
default=3,
title="Number of detection threads",
description="The number of threads used for CPU-based inference.",
)
num_threads: int = Field(default=3, title="Number of detection threads")
class CpuTfl(DetectionApi):

View File

@@ -4,7 +4,7 @@ import logging
import numpy as np
import requests
from PIL import Image
from pydantic import ConfigDict, Field
from pydantic import Field
from typing_extensions import Literal
from frigate.detectors.detection_api import DetectionApi
@@ -16,28 +16,12 @@ DETECTOR_KEY = "deepstack"
class DeepstackDetectorConfig(BaseDetectorConfig):
"""DeepStack/CodeProject.AI detector that sends images to a remote DeepStack HTTP API for inference. Not recommended."""
model_config = ConfigDict(
title="DeepStack",
)
type: Literal[DETECTOR_KEY]
api_url: str = Field(
default="http://localhost:80/v1/vision/detection",
title="DeepStack API URL",
description="The URL of the DeepStack API.",
)
api_timeout: float = Field(
default=0.1,
title="DeepStack API timeout (in seconds)",
description="Maximum time allowed for a DeepStack API request.",
)
api_key: str = Field(
default="",
title="DeepStack API key (if required)",
description="Optional API key for authenticated DeepStack services.",
default="http://localhost:80/v1/vision/detection", title="DeepStack API URL"
)
api_timeout: float = Field(default=0.1, title="DeepStack API timeout (in seconds)")
api_key: str = Field(default="", title="DeepStack API key (if required)")
class DeepStack(DetectionApi):

View File

@@ -2,7 +2,7 @@ import logging
import queue
import numpy as np
from pydantic import ConfigDict, Field
from pydantic import Field
from typing_extensions import Literal
from frigate.detectors.detection_api import DetectionApi
@@ -14,28 +14,10 @@ DETECTOR_KEY = "degirum"
### DETECTOR CONFIG ###
class DGDetectorConfig(BaseDetectorConfig):
"""DeGirum detector for running models via DeGirum cloud or local inference services."""
model_config = ConfigDict(
title="DeGirum",
)
type: Literal[DETECTOR_KEY]
location: str = Field(
default=None,
title="Inference Location",
description="Location of the DeGirim inference engine (e.g. '@cloud', '127.0.0.1').",
)
zoo: str = Field(
default=None,
title="Model Zoo",
description="Path or URL to the DeGirum model zoo.",
)
token: str = Field(
default=None,
title="DeGirum Cloud Token",
description="Token for DeGirum Cloud access.",
)
location: str = Field(default=None, title="Inference Location")
zoo: str = Field(default=None, title="Model Zoo")
token: str = Field(default=None, title="DeGirum Cloud Token")
### ACTUAL DETECTOR ###

View File

@@ -4,7 +4,7 @@ import os
import cv2
import numpy as np
from pydantic import ConfigDict, Field
from pydantic import Field
from typing_extensions import Literal
from frigate.detectors.detection_api import DetectionApi
@@ -13,7 +13,7 @@ from frigate.detectors.detector_config import BaseDetectorConfig, ModelTypeEnum
try:
from tflite_runtime.interpreter import Interpreter, load_delegate
except ModuleNotFoundError:
from ai_edge_litert.interpreter import Interpreter, load_delegate
from tensorflow.lite.python.interpreter import Interpreter, load_delegate
logger = logging.getLogger(__name__)
@@ -21,18 +21,8 @@ DETECTOR_KEY = "edgetpu"
class EdgeTpuDetectorConfig(BaseDetectorConfig):
"""EdgeTPU detector that runs TensorFlow Lite models compiled for Coral EdgeTPU using the EdgeTPU delegate."""
model_config = ConfigDict(
title="EdgeTPU",
)
type: Literal[DETECTOR_KEY]
device: str = Field(
default=None,
title="Device Type",
description="The device to use for EdgeTPU inference (e.g. 'usb', 'pci').",
)
device: str = Field(default=None, title="Device Type")
class EdgeTpuTfl(DetectionApi):

View File

@@ -8,7 +8,7 @@ from typing import Dict, List, Optional, Tuple
import cv2
import numpy as np
from pydantic import ConfigDict, Field
from pydantic import Field
from typing_extensions import Literal
from frigate.const import MODEL_CACHE_DIR
@@ -410,15 +410,5 @@ class HailoDetector(DetectionApi):
# ----------------- HailoDetectorConfig Class ----------------- #
class HailoDetectorConfig(BaseDetectorConfig):
"""Hailo-8/Hailo-8L detector using HEF models and the HailoRT SDK for inference on Hailo hardware."""
model_config = ConfigDict(
title="Hailo-8/Hailo-8L",
)
type: Literal[DETECTOR_KEY]
device: str = Field(
default="PCIe",
title="Device Type",
description="The device to use for Hailo inference (e.g. 'PCIe', 'M.2').",
)
device: str = Field(default="PCIe", title="Device Type")

View File

@@ -8,7 +8,7 @@ from queue import Queue
import cv2
import numpy as np
from pydantic import BaseModel, ConfigDict, Field
from pydantic import BaseModel, Field
from typing_extensions import Literal
from frigate.detectors.detection_api import DetectionApi
@@ -30,18 +30,8 @@ class ModelConfig(BaseModel):
class MemryXDetectorConfig(BaseDetectorConfig):
"""MemryX MX3 detector that runs compiled DFP models on MemryX accelerators."""
model_config = ConfigDict(
title="MemryX",
)
type: Literal[DETECTOR_KEY]
device: str = Field(
default="PCIe",
title="Device Path",
description="The device to use for MemryX inference (e.g. 'PCIe').",
)
device: str = Field(default="PCIe", title="Device Path")
class MemryXDetector(DetectionApi):

View File

@@ -1,7 +1,7 @@
import logging
import numpy as np
from pydantic import ConfigDict, Field
from pydantic import Field
from typing_extensions import Literal
from frigate.detectors.detection_api import DetectionApi
@@ -23,18 +23,8 @@ DETECTOR_KEY = "onnx"
class ONNXDetectorConfig(BaseDetectorConfig):
"""ONNX detector for running ONNX models; will use available acceleration backends (CUDA/ROCm/OpenVINO) when available."""
model_config = ConfigDict(
title="ONNX",
)
type: Literal[DETECTOR_KEY]
device: str = Field(
default="AUTO",
title="Device Type",
description="The device to use for ONNX inference (e.g. 'AUTO', 'CPU', 'GPU').",
)
device: str = Field(default="AUTO", title="Device Type")
class ONNXDetector(DetectionApi):

View File

@@ -2,7 +2,7 @@ import logging
import numpy as np
import openvino as ov
from pydantic import ConfigDict, Field
from pydantic import Field
from typing_extensions import Literal
from frigate.detectors.detection_api import DetectionApi
@@ -20,18 +20,8 @@ DETECTOR_KEY = "openvino"
class OvDetectorConfig(BaseDetectorConfig):
"""OpenVINO detector for AMD and Intel CPUs, Intel GPUs and Intel VPU hardware."""
model_config = ConfigDict(
title="OpenVINO",
)
type: Literal[DETECTOR_KEY]
device: str = Field(
default=None,
title="Device Type",
description="The device to use for OpenVINO inference (e.g. 'CPU', 'GPU', 'NPU').",
)
device: str = Field(default=None, title="Device Type")
class OvDetector(DetectionApi):

View File

@@ -6,7 +6,7 @@ from typing import Literal
import cv2
import numpy as np
from pydantic import ConfigDict, Field
from pydantic import Field
from frigate.const import MODEL_CACHE_DIR, SUPPORTED_RK_SOCS
from frigate.detectors.detection_api import DetectionApi
@@ -29,20 +29,8 @@ model_cache_dir = os.path.join(MODEL_CACHE_DIR, "rknn_cache/")
class RknnDetectorConfig(BaseDetectorConfig):
"""RKNN detector for Rockchip NPUs; runs compiled RKNN models on Rockchip hardware."""
model_config = ConfigDict(
title="RKNN",
)
type: Literal[DETECTOR_KEY]
num_cores: int = Field(
default=0,
ge=0,
le=3,
title="Number of NPU cores to use.",
description="The number of NPU cores to use (0 for auto).",
)
num_cores: int = Field(default=0, ge=0, le=3, title="Number of NPU cores to use.")
class Rknn(DetectionApi):

View File

@@ -2,7 +2,6 @@ import logging
import os
import numpy as np
from pydantic import ConfigDict
from typing_extensions import Literal
from frigate.detectors.detection_api import DetectionApi
@@ -28,12 +27,6 @@ DETECTOR_KEY = "synaptics"
class SynapDetectorConfig(BaseDetectorConfig):
"""Synaptics NPU detector for models in .synap format using the Synap SDK on Synaptics hardware."""
model_config = ConfigDict(
title="Synaptics",
)
type: Literal[DETECTOR_KEY]

View File

@@ -1,6 +1,5 @@
import logging
from pydantic import ConfigDict
from typing_extensions import Literal
from frigate.detectors.detection_api import DetectionApi
@@ -19,12 +18,6 @@ DETECTOR_KEY = "teflon_tfl"
class TeflonDetectorConfig(BaseDetectorConfig):
"""Teflon delegate detector for TFLite using Mesa Teflon delegate library to accelerate inference on supported GPUs."""
model_config = ConfigDict(
title="Teflon",
)
type: Literal[DETECTOR_KEY]

View File

@@ -14,7 +14,7 @@ try:
except ModuleNotFoundError:
TRT_SUPPORT = False
from pydantic import ConfigDict, Field
from pydantic import Field
from typing_extensions import Literal
from frigate.detectors.detection_api import DetectionApi
@@ -46,16 +46,8 @@ if TRT_SUPPORT:
class TensorRTDetectorConfig(BaseDetectorConfig):
"""TensorRT detector for Nvidia Jetson devices using serialized TensorRT engines for accelerated inference."""
model_config = ConfigDict(
title="TensorRT",
)
type: Literal[DETECTOR_KEY]
device: int = Field(
default=0, title="GPU Device Index", description="The GPU device index to use."
)
device: int = Field(default=0, title="GPU Device Index")
class HostDeviceMem(object):

View File

@@ -5,7 +5,7 @@ from typing import Any, List
import numpy as np
import zmq
from pydantic import ConfigDict, Field
from pydantic import Field
from typing_extensions import Literal
from frigate.detectors.detection_api import DetectionApi
@@ -17,28 +17,14 @@ DETECTOR_KEY = "zmq"
class ZmqDetectorConfig(BaseDetectorConfig):
"""ZMQ IPC detector that offloads inference to an external process via a ZeroMQ IPC endpoint."""
model_config = ConfigDict(
title="ZMQ IPC",
)
type: Literal[DETECTOR_KEY]
endpoint: str = Field(
default="ipc:///tmp/cache/zmq_detector",
title="ZMQ IPC endpoint",
description="The ZMQ endpoint to connect to.",
default="ipc:///tmp/cache/zmq_detector", title="ZMQ IPC endpoint"
)
request_timeout_ms: int = Field(
default=200,
title="ZMQ request timeout in milliseconds",
description="Timeout for ZMQ requests in milliseconds.",
)
linger_ms: int = Field(
default=0,
title="ZMQ socket linger in milliseconds",
description="Socket linger period in milliseconds.",
default=200, title="ZMQ request timeout in milliseconds"
)
linger_ms: int = Field(default=0, title="ZMQ socket linger in milliseconds")
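These three fields map directly onto ZeroMQ socket options on the requesting side. A hedged sketch of a matching REQ client (the detector's wire format is not shown in this hunk, so only the socket setup is illustrated):

import zmq

ctx = zmq.Context.instance()
sock = ctx.socket(zmq.REQ)
sock.setsockopt(zmq.RCVTIMEO, 200)  # request_timeout_ms: fail fast on no reply
sock.setsockopt(zmq.LINGER, 0)      # linger_ms: drop queued messages on close
sock.connect("ipc:///tmp/cache/zmq_detector")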
class ZmqIpcDetector(DetectionApi):

View File

@@ -28,6 +28,7 @@ from frigate.types import ModelStatusTypesEnum
from frigate.util.builtin import EventsPerSecond, InferenceSpeed, serialize
from frigate.util.file import get_event_thumbnail_bytes
from .genai_embedding import GenAIEmbedding
from .onnx.jina_v1_embedding import JinaV1ImageEmbedding, JinaV1TextEmbedding
from .onnx.jina_v2_embedding import JinaV2Embedding
@@ -73,11 +74,13 @@ class Embeddings:
config: FrigateConfig,
db: SqliteVecQueueDatabase,
metrics: DataProcessorMetrics,
genai_manager=None,
) -> None:
self.config = config
self.db = db
self.metrics = metrics
self.requestor = InterProcessRequestor()
self.genai_manager = genai_manager
self.image_inference_speed = InferenceSpeed(self.metrics.image_embeddings_speed)
self.image_eps = EventsPerSecond()
@@ -104,7 +107,27 @@ class Embeddings:
},
)
if self.config.semantic_search.model == SemanticSearchModelEnum.jinav2:
model_cfg = self.config.semantic_search.model
is_genai_model = isinstance(model_cfg, str)
if is_genai_model:
embeddings_client = (
genai_manager.embeddings_client if genai_manager else None
)
if not embeddings_client:
raise ValueError(
f"semantic_search.model is '{model_cfg}' (GenAI provider) but "
"no embeddings client is configured. Ensure the GenAI provider "
"has 'embeddings' in its roles."
)
self.embedding = GenAIEmbedding(embeddings_client)
self.text_embedding = lambda input_data: self.embedding(
input_data, embedding_type="text"
)
self.vision_embedding = lambda input_data: self.embedding(
input_data, embedding_type="vision"
)
elif model_cfg == SemanticSearchModelEnum.jinav2:
# Single JinaV2Embedding instance for both text and vision
self.embedding = JinaV2Embedding(
model_size=self.config.semantic_search.model_size,
@@ -118,7 +141,8 @@ class Embeddings:
self.vision_embedding = lambda input_data: self.embedding(
input_data, embedding_type="vision"
)
else: # Default to jinav1
else:
# Default to jinav1
self.text_embedding = JinaV1TextEmbedding(
model_size=config.semantic_search.model_size,
requestor=self.requestor,
@@ -136,8 +160,11 @@ class Embeddings:
self.metrics.text_embeddings_eps.value = self.text_eps.eps()
def get_model_definitions(self):
# Version-specific models
if self.config.semantic_search.model == SemanticSearchModelEnum.jinav2:
model_cfg = self.config.semantic_search.model
if isinstance(model_cfg, str):
# GenAI provider: no ONNX models to download
models = []
elif model_cfg == SemanticSearchModelEnum.jinav2:
models = [
"jinaai/jina-clip-v2-tokenizer",
"jinaai/jina-clip-v2-model_fp16.onnx"
@@ -224,6 +251,14 @@ class Embeddings:
embeddings = self.vision_embedding(valid_thumbs)
if len(embeddings) != len(valid_ids):
logger.warning(
"Batch embed returned %d embeddings for %d thumbnails; skipping batch",
len(embeddings),
len(valid_ids),
)
return []
if upsert:
items = []
for i in range(len(valid_ids)):
@@ -246,9 +281,15 @@ class Embeddings:
def embed_description(
self, event_id: str, description: str, upsert: bool = True
) -> np.ndarray:
) -> np.ndarray | None:
start = datetime.datetime.now().timestamp()
embedding = self.text_embedding([description])[0]
embeddings = self.text_embedding([description])
if not embeddings:
logger.warning(
"Failed to generate description embedding for event %s", event_id
)
return None
embedding = embeddings[0]
if upsert:
self.db.execute_sql(
@@ -271,8 +312,32 @@ class Embeddings:
# upsert embeddings one by one to avoid token limit
embeddings = []
for desc in event_descriptions.values():
embeddings.append(self.text_embedding([desc])[0])
ids = []
for eid, desc in event_descriptions.items():
    result = self.text_embedding([desc])
    if not result:
        logger.warning(
            "Failed to generate description embedding for event %s", eid
        )
        continue
    # track ids alongside embeddings so partial failures stay aligned
    ids.append(eid)
    embeddings.append(result[0])
if not embeddings:
    logger.warning("No description embeddings generated in batch")
    return np.array([])
if upsert:
ids = list(event_descriptions.keys())
@@ -314,7 +379,10 @@ class Embeddings:
batch_size = (
4
if self.config.semantic_search.model == SemanticSearchModelEnum.jinav2
if (
isinstance(self.config.semantic_search.model, str)
or self.config.semantic_search.model == SemanticSearchModelEnum.jinav2
)
else 32
)
current_page = 1
@@ -601,6 +669,8 @@ class Embeddings:
if trigger.type == "description":
logger.debug(f"Generating embedding for trigger description {trigger_name}")
embedding = self.embed_description(None, trigger.data, upsert=False)
if embedding is None:
return b""
return embedding.astype(np.float32).tobytes()
elif trigger.type == "thumbnail":
@@ -636,6 +706,8 @@ class Embeddings:
embedding = self.embed_thumbnail(
str(trigger.data), thumbnail, upsert=False
)
if embedding is None:
return b""
return embedding.astype(np.float32).tobytes()
else:

View File

@@ -0,0 +1,85 @@
"""GenAI-backed embeddings for semantic search."""
import io
import logging
from typing import TYPE_CHECKING
import numpy as np
from PIL import Image
if TYPE_CHECKING:
from frigate.genai import GenAIClient
logger = logging.getLogger(__name__)
EMBEDDING_DIM = 768
class GenAIEmbedding:
"""Embedding adapter that delegates to a GenAI provider's embed API.
Provides the same interface as JinaV2Embedding for semantic search:
__call__(inputs, embedding_type) -> list[np.ndarray]. Output embeddings are
normalized to 768 dimensions for Frigate's sqlite-vec schema.
"""
def __init__(self, client: "GenAIClient") -> None:
self.client = client
def __call__(
self,
inputs: list[str] | list[bytes] | list[Image.Image],
embedding_type: str = "text",
) -> list[np.ndarray]:
"""Generate embeddings for text or images.
Args:
inputs: List of strings (text) or bytes/PIL images (vision).
embedding_type: "text" or "vision".
Returns:
List of 768-dim numpy float32 arrays.
"""
if not inputs:
return []
if embedding_type == "text":
texts = [str(x) for x in inputs]
embeddings = self.client.embed(texts=texts)
elif embedding_type == "vision":
images: list[bytes] = []
for inp in inputs:
if isinstance(inp, bytes):
images.append(inp)
elif isinstance(inp, Image.Image):
buf = io.BytesIO()
inp.convert("RGB").save(buf, format="JPEG")
images.append(buf.getvalue())
else:
logger.warning(
"GenAIEmbedding: skipping unsupported vision input type %s",
type(inp).__name__,
)
if not images:
return []
embeddings = self.client.embed(images=images)
else:
raise ValueError(
f"Invalid embedding_type '{embedding_type}'. Must be 'text' or 'vision'."
)
result = []
for emb in embeddings:
arr = np.asarray(emb, dtype=np.float32).flatten()
if arr.size != EMBEDDING_DIM:
if arr.size > EMBEDDING_DIM:
arr = arr[:EMBEDDING_DIM]
else:
arr = np.pad(
arr,
(0, EMBEDDING_DIM - arr.size),
mode="constant",
constant_values=0,
)
result.append(arr)
return result
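One design note: providers whose native embedding width differs from 768 are silently truncated or zero-padded, which keeps Frigate's sqlite-vec schema fixed but is lossy for wider models. Hypothetical usage (client here stands in for any configured provider with the embeddings role):

embedder = GenAIEmbedding(client)  # client: a configured GenAI provider client
vecs = embedder(["a person walking a dog"], embedding_type="text")
assert vecs[0].shape == (768,) and vecs[0].dtype == np.float32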

View File

@@ -116,8 +116,10 @@ class EmbeddingMaintainer(threading.Thread):
models = [Event, Recordings, ReviewSegment, Trigger]
db.bind(models)
self.genai_manager = GenAIClientManager(config)
if config.semantic_search.enabled:
self.embeddings = Embeddings(config, db, metrics)
self.embeddings = Embeddings(config, db, metrics, self.genai_manager)
# Check if we need to re-index events
if config.semantic_search.reindex:
@@ -144,7 +146,6 @@ class EmbeddingMaintainer(threading.Thread):
self.frame_manager = SharedMemoryFrameManager()
self.detected_license_plates: dict[str, dict[str, Any]] = {}
self.genai_manager = GenAIClientManager(config)
# model runners to share between realtime and post processors
if self.config.lpr.enabled:

View File

@@ -17,7 +17,7 @@ from .base_embedding import BaseEmbedding
try:
from tflite_runtime.interpreter import Interpreter
except ModuleNotFoundError:
from ai_edge_litert.interpreter import Interpreter
from tensorflow.lite.python.interpreter import Interpreter
logger = logging.getLogger(__name__)

View File

@@ -43,7 +43,7 @@ from frigate.video import start_or_restart_ffmpeg, stop_ffmpeg
try:
from tflite_runtime.interpreter import Interpreter
except ModuleNotFoundError:
from ai_edge_litert.interpreter import Interpreter
from tensorflow.lite.python.interpreter import Interpreter
logger = logging.getLogger(__name__)

View File

@@ -7,9 +7,10 @@ import os
import re
from typing import Any, Optional
import numpy as np
from playhouse.shortcuts import model_to_dict
from frigate.config import CameraConfig, GenAIConfig, GenAIProviderEnum
from frigate.config import CameraConfig, FrigateConfig, GenAIConfig, GenAIProviderEnum
from frigate.const import CLIPS_DIR
from frigate.data_processing.post.types import ReviewMetadata
from frigate.genai.manager import GenAIClientManager
@@ -304,6 +305,25 @@ Guidelines:
"""Get the context window size for this provider in tokens."""
return 4096
def embed(
self,
texts: list[str] | None = None,
images: list[bytes] | None = None,
) -> list[np.ndarray]:
"""Generate embeddings for text and/or images.
Returns list of numpy arrays (one per input). Expected dimension is 768
for Frigate semantic search compatibility.
Providers that support embeddings should override this method.
"""
logger.warning(
"%s does not support embeddings. "
"This method should be overridden by the provider implementation.",
self.__class__.__name__,
)
return []
def chat_with_tools(
self,
messages: list[dict[str, Any]],

View File

@@ -167,123 +167,3 @@ class OpenAIClient(GenAIClient):
"tool_calls": None,
"finish_reason": "error",
}
async def chat_with_tools_stream(
self,
messages: list[dict[str, Any]],
tools: Optional[list[dict[str, Any]]] = None,
tool_choice: Optional[str] = "auto",
):
"""
Stream chat with tools; yields content deltas then final message.
Implements streaming function calling/tool usage for Azure OpenAI models.
"""
try:
openai_tool_choice = None
if tool_choice:
if tool_choice == "none":
openai_tool_choice = "none"
elif tool_choice == "auto":
openai_tool_choice = "auto"
elif tool_choice == "required":
openai_tool_choice = "required"
request_params = {
"model": self.genai_config.model,
"messages": messages,
"timeout": self.timeout,
"stream": True,
}
if tools:
request_params["tools"] = tools
if openai_tool_choice is not None:
request_params["tool_choice"] = openai_tool_choice
# Use streaming API
content_parts: list[str] = []
tool_calls_by_index: dict[int, dict[str, Any]] = {}
finish_reason = "stop"
stream = self.provider.chat.completions.create(**request_params)
for chunk in stream:
if not chunk or not chunk.choices:
continue
choice = chunk.choices[0]
delta = choice.delta
# Check for finish reason
if choice.finish_reason:
finish_reason = choice.finish_reason
# Extract content deltas
if delta.content:
content_parts.append(delta.content)
yield ("content_delta", delta.content)
# Extract tool calls
if delta.tool_calls:
for tc in delta.tool_calls:
idx = tc.index
fn = tc.function
if idx not in tool_calls_by_index:
tool_calls_by_index[idx] = {
"id": tc.id or "",
"name": fn.name if fn and fn.name else "",
"arguments": "",
}
t = tool_calls_by_index[idx]
if tc.id:
t["id"] = tc.id
if fn and fn.name:
t["name"] = fn.name
if fn and fn.arguments:
t["arguments"] += fn.arguments
# Build final message
full_content = "".join(content_parts).strip() or None
# Convert tool calls to list format
tool_calls_list = None
if tool_calls_by_index:
tool_calls_list = []
for tc in tool_calls_by_index.values():
try:
# Parse accumulated arguments as JSON
parsed_args = json.loads(tc["arguments"])
except Exception:  # fall back to the raw string if parsing fails
parsed_args = tc["arguments"]
tool_calls_list.append(
{
"id": tc["id"],
"name": tc["name"],
"arguments": parsed_args,
}
)
finish_reason = "tool_calls"
yield (
"message",
{
"content": full_content,
"tool_calls": tool_calls_list,
"finish_reason": finish_reason,
},
)
except Exception as e:
logger.warning("Azure OpenAI streaming returned an error: %s", str(e))
yield (
"message",
{
"content": None,
"tool_calls": None,
"finish_reason": "error",
},
)
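The streaming implementations removed here (and the Gemini one below) share a consumer contract: yield ("content_delta", text) chunks as tokens arrive, then a single ("message", dict) carrying the full content and accumulated tool calls. A sketch of a consumer written against that contract:

async def consume(client, messages):
    final = None
    async for kind, payload in client.chat_with_tools_stream(messages):
        if kind == "content_delta":
            print(payload, end="", flush=True)  # incremental tokens
        elif kind == "message":
            final = payload  # full content + tool_calls + finish_reason
    return final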

View File

@@ -1,6 +1,5 @@
"""Gemini Provider for Frigate AI."""
import json
import logging
from typing import Any, Optional
@@ -274,239 +273,3 @@ class GeminiClient(GenAIClient):
"tool_calls": None,
"finish_reason": "error",
}
async def chat_with_tools_stream(
self,
messages: list[dict[str, Any]],
tools: Optional[list[dict[str, Any]]] = None,
tool_choice: Optional[str] = "auto",
):
"""
Stream chat with tools; yields content deltas then final message.
Implements streaming function calling/tool usage for Gemini models.
"""
try:
# Convert messages to Gemini format
gemini_messages = []
for msg in messages:
role = msg.get("role", "user")
content = msg.get("content", "")
# Map roles to Gemini format
if role == "system":
# Gemini doesn't have system role, prepend to first user message
if gemini_messages and gemini_messages[0].role == "user":
gemini_messages[0].parts[
0
].text = f"{content}\n\n{gemini_messages[0].parts[0].text}"
else:
gemini_messages.append(
types.Content(
role="user", parts=[types.Part.from_text(text=content)]
)
)
elif role == "assistant":
gemini_messages.append(
types.Content(
role="model", parts=[types.Part.from_text(text=content)]
)
)
elif role == "tool":
# Handle tool response
function_response = {
"name": msg.get("name", ""),
"response": content,
}
gemini_messages.append(
types.Content(
role="function",
parts=[
types.Part.from_function_response(function_response)
],
)
)
else: # user
gemini_messages.append(
types.Content(
role="user", parts=[types.Part.from_text(text=content)]
)
)
# Convert tools to Gemini format
gemini_tools = None
if tools:
gemini_tools = []
for tool in tools:
if tool.get("type") == "function":
func = tool.get("function", {})
gemini_tools.append(
types.Tool(
function_declarations=[
types.FunctionDeclaration(
name=func.get("name", ""),
description=func.get("description", ""),
parameters=func.get("parameters", {}),
)
]
)
)
# Configure tool choice
tool_config = None
if tool_choice:
if tool_choice == "none":
tool_config = types.ToolConfig(
function_calling_config=types.FunctionCallingConfig(mode="NONE")
)
elif tool_choice == "auto":
tool_config = types.ToolConfig(
function_calling_config=types.FunctionCallingConfig(mode="AUTO")
)
elif tool_choice == "required":
tool_config = types.ToolConfig(
function_calling_config=types.FunctionCallingConfig(mode="ANY")
)
# Build request config
config_params = {"candidate_count": 1}
if gemini_tools:
config_params["tools"] = gemini_tools
if tool_config:
config_params["tool_config"] = tool_config
# Merge runtime_options
if isinstance(self.genai_config.runtime_options, dict):
config_params.update(self.genai_config.runtime_options)
# Use streaming API
content_parts: list[str] = []
tool_calls_by_index: dict[int, dict[str, Any]] = {}
finish_reason = "stop"
response = self.provider.models.generate_content_stream(
model=self.genai_config.model,
contents=gemini_messages,
config=types.GenerateContentConfig(**config_params),
)
async for chunk in response:
if not chunk or not chunk.candidates:
continue
candidate = chunk.candidates[0]
# Check for finish reason
if hasattr(candidate, "finish_reason") and candidate.finish_reason:
from google.genai.types import FinishReason
if candidate.finish_reason == FinishReason.STOP:
finish_reason = "stop"
elif candidate.finish_reason == FinishReason.MAX_TOKENS:
finish_reason = "length"
elif candidate.finish_reason in [
FinishReason.SAFETY,
FinishReason.RECITATION,
]:
finish_reason = "error"
# Extract content and tool calls from chunk
if candidate.content and candidate.content.parts:
for part in candidate.content.parts:
if part.text:
content_parts.append(part.text)
yield ("content_delta", part.text)
elif part.function_call:
# Handle function call
try:
arguments = (
dict(part.function_call.args)
if part.function_call.args
else {}
)
except Exception:
arguments = {}
# Store tool call
tool_call_id = part.function_call.name or ""
tool_call_name = part.function_call.name or ""
# Check if we already have this tool call
found_index = None
for idx, tc in tool_calls_by_index.items():
if tc["name"] == tool_call_name:
found_index = idx
break
if found_index is None:
found_index = len(tool_calls_by_index)
tool_calls_by_index[found_index] = {
"id": tool_call_id,
"name": tool_call_name,
"arguments": "",
}
# Accumulate arguments
if arguments:
tool_calls_by_index[found_index]["arguments"] += (
json.dumps(arguments)
if isinstance(arguments, dict)
else str(arguments)
)
# Build final message
full_content = "".join(content_parts).strip() or None
# Convert tool calls to list format
tool_calls_list = None
if tool_calls_by_index:
tool_calls_list = []
for tc in tool_calls_by_index.values():
try:
# Try to parse accumulated arguments as JSON
parsed_args = json.loads(tc["arguments"])
except (json.JSONDecodeError, Exception):
parsed_args = tc["arguments"]
tool_calls_list.append(
{
"id": tc["id"],
"name": tc["name"],
"arguments": parsed_args,
}
)
finish_reason = "tool_calls"
yield (
"message",
{
"content": full_content,
"tool_calls": tool_calls_list,
"finish_reason": finish_reason,
},
)
except errors.APIError as e:
logger.warning("Gemini API error during streaming: %s", str(e))
yield (
"message",
{
"content": None,
"tool_calls": None,
"finish_reason": "error",
},
)
except Exception as e:
logger.warning(
"Gemini returned an error during chat_with_tools_stream: %s", str(e)
)
yield (
"message",
{
"content": None,
"tool_calls": None,
"finish_reason": "error",
},
)
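To make the role conversion above concrete, a short worked example with an assumed OpenAI-style history (inputs invented for illustration):

messages = [
    {"role": "system", "content": "You are a security assistant."},
    {"role": "user", "content": "What happened at the front door?"},
    {"role": "assistant", "content": "Checking recent events."},
    {"role": "tool", "name": "get_events", "content": '{"events": []}'},
]
# After conversion:
#   system    -> prepended to the first user Content (Gemini has no system role)
#   user      -> types.Content(role="user", parts=[Part.from_text(...)])
#   assistant -> types.Content(role="model", parts=[Part.from_text(...)])
#   tool      -> types.Content(role="function",
#                              parts=[Part.from_function_response(...)])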


@@ -1,12 +1,15 @@
"""llama.cpp Provider for Frigate AI."""
import base64
import io
import json
import logging
from typing import Any, Optional
import httpx
import numpy as np
import requests
from PIL import Image
from frigate.config import GenAIProviderEnum
from frigate.genai import GenAIClient, register_genai_provider
@@ -15,6 +18,20 @@ from frigate.genai.utils import parse_tool_calls_from_message
logger = logging.getLogger(__name__)
def _to_jpeg(img_bytes: bytes) -> bytes | None:
"""Convert image bytes to JPEG. llama.cpp/STB does not support WebP."""
try:
img = Image.open(io.BytesIO(img_bytes))
if img.mode != "RGB":
img = img.convert("RGB")
buf = io.BytesIO()
img.save(buf, format="JPEG", quality=85)
return buf.getvalue()
except Exception as e:
logger.warning("Failed to convert image to JPEG: %s", e)
return None
@register_genai_provider(GenAIProviderEnum.llamacpp)
class LlamaCppClient(GenAIClient):
"""Generative AI client for Frigate using llama.cpp server."""
@@ -102,7 +119,7 @@ class LlamaCppClient(GenAIClient):
def get_context_size(self) -> int:
"""Get the context window size for llama.cpp."""
return int(self.provider_options.get("context_size", 4096))
return self.provider_options.get("context_size", 4096)
def _build_payload(
self,
@@ -176,6 +193,106 @@ class LlamaCppClient(GenAIClient):
)
return result if result else None
def embed(
self,
texts: list[str] | None = None,
images: list[bytes] | None = None,
) -> list[np.ndarray]:
"""Generate embeddings via llama.cpp /embeddings endpoint.
Supports batch requests. Uses content format with prompt_string and
multimodal_data for images (PR #15108). Server must be started with
--embeddings and --mmproj for multimodal support.
"""
if self.provider is None:
logger.warning(
"llama.cpp provider has not been initialized. Check your llama.cpp configuration."
)
return []
texts = texts or []
images = images or []
if not texts and not images:
return []
EMBEDDING_DIM = 768
content = []
for text in texts:
content.append({"prompt_string": text})
for img in images:
# llama.cpp uses STB which does not support WebP; convert to JPEG
jpeg_bytes = _to_jpeg(img)
to_encode = jpeg_bytes if jpeg_bytes is not None else img
encoded = base64.b64encode(to_encode).decode("utf-8")
# prompt_string must contain <__media__> placeholder for image tokenization
content.append(
{
"prompt_string": "<__media__>\n",
"multimodal_data": [encoded],
}
)
try:
response = requests.post(
f"{self.provider}/embeddings",
json={"model": self.genai_config.model, "content": content},
timeout=self.timeout,
)
response.raise_for_status()
result = response.json()
items = result.get("data", result) if isinstance(result, dict) else result
if not isinstance(items, list):
logger.warning("llama.cpp embeddings returned unexpected format")
return []
embeddings = []
for item in items:
emb = item.get("embedding") if isinstance(item, dict) else None
if emb is None:
logger.warning("llama.cpp embeddings item missing embedding field")
continue
arr = np.array(emb, dtype=np.float32)
orig_dim = arr.size
if orig_dim != EMBEDDING_DIM:
if orig_dim > EMBEDDING_DIM:
arr = arr[:EMBEDDING_DIM]
logger.debug(
"Truncated llama.cpp embedding from %d to %d dimensions",
orig_dim,
EMBEDDING_DIM,
)
else:
arr = np.pad(
arr,
(0, EMBEDDING_DIM - orig_dim),
mode="constant",
constant_values=0,
)
logger.debug(
"Padded llama.cpp embedding from %d to %d dimensions",
orig_dim,
EMBEDDING_DIM,
)
embeddings.append(arr)
return embeddings
except requests.exceptions.Timeout:
logger.warning("llama.cpp embeddings request timed out")
return []
except requests.exceptions.RequestException as e:
error_detail = str(e)
if hasattr(e, "response") and e.response is not None:
try:
error_detail = f"{str(e)} - Response: {e.response.text[:500]}"
except Exception:
pass
logger.warning("llama.cpp embeddings error: %s", error_detail)
return []
except Exception as e:
logger.warning("Unexpected error in llama.cpp embeddings: %s", str(e))
return []
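For reference, the request body this method posts to /embeddings batches text and image entries in a single "content" list. A sketch of the payload shape (model name and base64 string are placeholders):

payload = {
    "model": "example-embedding-model",  # assumed model name
    "content": [
        {"prompt_string": "person walking a dog"},
        {
            # <__media__> marks where the image tokens are inserted
            "prompt_string": "<__media__>\n",
            "multimodal_data": ["<base64-encoded JPEG>"],
        },
    ],
}
# POST {base_url}/embeddings
# The response is either a bare list or wrapped in "data", hence the
# result.get("data", result) handling above.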
def chat_with_tools(
self,
messages: list[dict[str, Any]],


@@ -21,12 +21,13 @@ class GenAIClientManager:
"""Manages GenAI provider clients from Frigate config."""
def __init__(self, config: FrigateConfig) -> None:
self._config = config
self._tool_client: Optional[GenAIClient] = None
self._vision_client: Optional[GenAIClient] = None
self._embeddings_client: Optional[GenAIClient] = None
self.update_config(config)
self._update_config()
def update_config(self, config: FrigateConfig) -> None:
def _update_config(self) -> None:
"""Build role clients from current Frigate config.genai.
Called from __init__ and can be called again when config is reloaded.
@@ -39,12 +40,12 @@ class GenAIClientManager:
self._vision_client = None
self._embeddings_client = None
if not config.genai:
if not self._config.genai:
return
load_providers()
for _name, genai_cfg in config.genai.items():
for _name, genai_cfg in self._config.genai.items():
if not genai_cfg.provider:
continue
provider_cls = PROVIDERS.get(genai_cfg.provider)


@@ -85,8 +85,8 @@ class OllamaClient(GenAIClient):
def get_context_size(self) -> int:
"""Get the context window size for Ollama."""
return int(
self.genai_config.provider_options.get("options", {}).get("num_ctx", 4096)
return self.genai_config.provider_options.get("options", {}).get(
"num_ctx", 4096
)
def _build_request_params(


@@ -30,10 +30,6 @@ class OpenAIClient(GenAIClient):
for k, v in self.genai_config.provider_options.items()
if k != "context_size"
}
if self.genai_config.base_url:
provider_opts["base_url"] = self.genai_config.base_url
return OpenAI(api_key=self.genai_config.api_key, **provider_opts)
def _send(self, prompt: str, images: list[bytes]) -> Optional[str]:
@@ -231,142 +227,3 @@ class OpenAIClient(GenAIClient):
"tool_calls": None,
"finish_reason": "error",
}
async def chat_with_tools_stream(
self,
messages: list[dict[str, Any]],
tools: Optional[list[dict[str, Any]]] = None,
tool_choice: Optional[str] = "auto",
):
"""
Stream chat with tools; yields content deltas then final message.
Implements streaming function calling/tool usage for OpenAI models.
"""
try:
openai_tool_choice = None
if tool_choice:
if tool_choice == "none":
openai_tool_choice = "none"
elif tool_choice == "auto":
openai_tool_choice = "auto"
elif tool_choice == "required":
openai_tool_choice = "required"
request_params = {
"model": self.genai_config.model,
"messages": messages,
"timeout": self.timeout,
"stream": True,
}
if tools:
request_params["tools"] = tools
if openai_tool_choice is not None:
request_params["tool_choice"] = openai_tool_choice
if isinstance(self.genai_config.provider_options, dict):
excluded_options = {"context_size"}
provider_opts = {
k: v
for k, v in self.genai_config.provider_options.items()
if k not in excluded_options
}
request_params.update(provider_opts)
# Use streaming API
content_parts: list[str] = []
tool_calls_by_index: dict[int, dict[str, Any]] = {}
finish_reason = "stop"
stream = self.provider.chat.completions.create(**request_params)
for chunk in stream:
if not chunk or not chunk.choices:
continue
choice = chunk.choices[0]
delta = choice.delta
# Check for finish reason
if choice.finish_reason:
finish_reason = choice.finish_reason
# Extract content deltas
if delta.content:
content_parts.append(delta.content)
yield ("content_delta", delta.content)
# Extract tool calls
if delta.tool_calls:
for tc in delta.tool_calls:
idx = tc.index
fn = tc.function
if idx not in tool_calls_by_index:
tool_calls_by_index[idx] = {
"id": tc.id or "",
"name": fn.name if fn and fn.name else "",
"arguments": "",
}
t = tool_calls_by_index[idx]
if tc.id:
t["id"] = tc.id
if fn and fn.name:
t["name"] = fn.name
if fn and fn.arguments:
t["arguments"] += fn.arguments
# Build final message
full_content = "".join(content_parts).strip() or None
# Convert tool calls to list format
tool_calls_list = None
if tool_calls_by_index:
tool_calls_list = []
for tc in tool_calls_by_index.values():
try:
# Parse accumulated arguments as JSON
parsed_args = json.loads(tc["arguments"])
except (json.JSONDecodeError, Exception):
parsed_args = tc["arguments"]
tool_calls_list.append(
{
"id": tc["id"],
"name": tc["name"],
"arguments": parsed_args,
}
)
finish_reason = "tool_calls"
yield (
"message",
{
"content": full_content,
"tool_calls": tool_calls_list,
"finish_reason": finish_reason,
},
)
except TimeoutException as e:
logger.warning("OpenAI streaming request timed out: %s", str(e))
yield (
"message",
{
"content": None,
"tool_calls": None,
"finish_reason": "error",
},
)
except Exception as e:
logger.warning("OpenAI streaming returned an error: %s", str(e))
yield (
"message",
{
"content": None,
"tool_calls": None,
"finish_reason": "error",
},
)
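A note on the provider_options pass-through above: everything except context_size is forwarded into the chat completion call. A small sketch of the effect (option names assumed for illustration):

provider_options = {"context_size": 8192, "temperature": 0.2, "max_tokens": 512}
excluded_options = {"context_size"}

request_extras = {k: v for k, v in provider_options.items() if k not in excluded_options}
# request_extras == {"temperature": 0.2, "max_tokens": 512}
# context_size stays local to the client and is never sent to the API.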


@@ -28,7 +28,7 @@ class FrigateMotionDetector(MotionDetector):
self.motion_frame_count = 0
self.frame_counter = 0
resized_mask = cv2.resize(
config.rasterized_mask,
config.mask,
dsize=(self.motion_frame_size[1], self.motion_frame_size[0]),
interpolation=cv2.INTER_LINEAR,
)


@@ -233,7 +233,7 @@ class ImprovedMotionDetector(MotionDetector):
def update_mask(self) -> None:
resized_mask = cv2.resize(
self.config.rasterized_mask,
self.config.mask,
dsize=(self.motion_frame_size[1], self.motion_frame_size[0]),
interpolation=cv2.INTER_AREA,
)


@@ -116,9 +116,7 @@ class PtzMotionEstimator:
mask[y1:y2, x1:x2] = 0
# merge camera config motion mask with detections. Norfair function needs 0,1 mask
mask = np.bitwise_and(mask, self.camera_config.motion.rasterized_mask).clip(
max=1
)
mask = np.bitwise_and(mask, self.camera_config.motion.mask).clip(max=1)
# Norfair estimator function needs color so it can convert it right back to gray
frame = cv2.cvtColor(frame, cv2.COLOR_GRAY2BGRA)


@@ -343,24 +343,8 @@ class TestConfig(unittest.TestCase):
"fps": 5,
},
"objects": {
"mask": {
"global_mask_1": {
"friendly_name": "Global Mask 1",
"enabled": True,
"coordinates": "0,0,1,1,0,1",
}
},
"filters": {
"dog": {
"mask": {
"dog_mask_1": {
"friendly_name": "Dog Mask 1",
"enabled": True,
"coordinates": "1,1,1,1,1,1",
}
}
}
},
"mask": "0,0,1,1,0,1",
"filters": {"dog": {"mask": "1,1,1,1,1,1"}},
},
}
},
@@ -369,10 +353,8 @@ class TestConfig(unittest.TestCase):
frigate_config = FrigateConfig(**config)
back_camera = frigate_config.cameras["back"]
assert "dog" in back_camera.objects.filters
# dog filter has its own mask + global mask merged
assert len(back_camera.objects.filters["dog"].mask) == 2
# person filter only has the global mask
assert len(back_camera.objects.filters["person"].mask) == 1
assert len(back_camera.objects.filters["dog"].raw_mask) == 2
assert len(back_camera.objects.filters["person"].raw_mask) == 1
def test_motion_mask_relative_matches_explicit(self):
config = {
@@ -391,13 +373,9 @@ class TestConfig(unittest.TestCase):
"fps": 5,
},
"motion": {
"mask": {
"explicit_mask": {
"friendly_name": "Explicit Mask",
"enabled": True,
"coordinates": "0,0,200,100,600,300,800,400",
}
}
"mask": [
"0,0,200,100,600,300,800,400",
]
},
},
"relative": {
@@ -412,13 +390,9 @@ class TestConfig(unittest.TestCase):
"fps": 5,
},
"motion": {
"mask": {
"relative_mask": {
"friendly_name": "Relative Mask",
"enabled": True,
"coordinates": "0.0,0.0,0.25,0.25,0.75,0.75,1.0,1.0",
}
}
"mask": [
"0.0,0.0,0.25,0.25,0.75,0.75,1.0,1.0",
]
},
},
},
@@ -426,8 +400,8 @@ class TestConfig(unittest.TestCase):
frigate_config = FrigateConfig(**config)
assert np.array_equal(
frigate_config.cameras["explicit"].motion.rasterized_mask,
frigate_config.cameras["relative"].motion.rasterized_mask,
frigate_config.cameras["explicit"].motion.mask,
frigate_config.cameras["relative"].motion.mask,
)
def test_default_input_args(self):


@@ -188,10 +188,6 @@ class TrackedObject:
# check each zone
for name, zone in self.camera_config.zones.items():
# skip disabled zones
if not zone.enabled:
continue
# if the zone is not for this object type, skip
if len(zone.objects) > 0 and obj_data["label"] not in zone.objects:
continue


@@ -195,8 +195,7 @@ def flatten_config_data(
) -> Dict[str, Any]:
items = []
for key, value in config_data.items():
escaped_key = escape_config_key_segment(str(key))
new_key = f"{parent_key}.{escaped_key}" if parent_key else escaped_key
new_key = f"{parent_key}.{key}" if parent_key else key
if isinstance(value, dict):
items.extend(flatten_config_data(value, new_key).items())
else:
@@ -204,41 +203,6 @@ def flatten_config_data(
return dict(items)
def escape_config_key_segment(segment: str) -> str:
"""Escape dots and backslashes so they can be treated as literal key chars."""
return segment.replace("\\", "\\\\").replace(".", "\\.")
def split_config_key_path(key_path_str: str) -> list[str]:
"""Split a dotted config path, honoring \\. as a literal dot in a key."""
parts: list[str] = []
current: list[str] = []
escaped = False
for char in key_path_str:
if escaped:
current.append(char)
escaped = False
continue
if char == "\\":
escaped = True
continue
if char == ".":
parts.append("".join(current))
current = []
continue
current.append(char)
if escaped:
current.append("\\")
parts.append("".join(current))
return parts
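A worked example of the escaping behavior these removed helpers implemented (inputs assumed for illustration):

escape_config_key_segment("front.door")
# -> 'front\\.door'

split_config_key_path("cameras.front\\.door.motion.mask")
# -> ['cameras', 'front.door', 'motion', 'mask']

# With the plain key_path_str.split(".") this diff reverts to, the same
# input would instead yield ['cameras', 'front\\', 'door', 'motion', 'mask'],
# so dots inside key names can no longer be escaped.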
def update_yaml_file_bulk(file_path: str, updates: Dict[str, Any]):
yaml = YAML()
yaml.indent(mapping=2, sequence=4, offset=2)
@@ -254,7 +218,7 @@ def update_yaml_file_bulk(file_path: str, updates: Dict[str, Any]):
# Apply all updates
for key_path_str, new_value in updates.items():
key_path = split_config_key_path(key_path_str)
key_path = key_path_str.split(".")
for i in range(len(key_path)):
try:
index = int(key_path[i])


@@ -434,55 +434,6 @@ def migrate_017_0(config: dict[str, dict[str, Any]]) -> dict[str, dict[str, Any]
return new_config
def _convert_legacy_mask_to_dict(
mask: Optional[Union[str, list]], mask_type: str = "motion_mask", label: str = ""
) -> dict[str, dict[str, Any]]:
"""Convert legacy mask format (str or list[str]) to new dict format.
Args:
mask: Legacy mask format (string or list of strings)
mask_type: Type of mask for naming ("motion_mask" or "object_mask")
label: Optional label for object masks (e.g., "person")
Returns:
Dictionary with mask_id as key and mask config as value
"""
if not mask:
return {}
result = {}
if isinstance(mask, str):
if mask:
mask_id = f"{mask_type}_1"
friendly_name = (
f"Object Mask 1 ({label})"
if label
else f"{mask_type.replace('_', ' ').title()} 1"
)
result[mask_id] = {
"friendly_name": friendly_name,
"enabled": True,
"coordinates": mask,
}
elif isinstance(mask, list):
for i, coords in enumerate(mask):
if coords:
mask_id = f"{mask_type}_{i + 1}"
friendly_name = (
f"Object Mask {i + 1} ({label})"
if label
else f"{mask_type.replace('_', ' ').title()} {i + 1}"
)
result[mask_id] = {
"friendly_name": friendly_name,
"enabled": True,
"coordinates": coords,
}
return result
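For illustration, the legacy-to-dict conversion this helper performed (example coordinates taken from the tests elsewhere in this diff):

_convert_legacy_mask_to_dict("0,0,1,1,0,1", "motion_mask")
# -> {
#      "motion_mask_1": {
#          "friendly_name": "Motion Mask 1",
#          "enabled": True,
#          "coordinates": "0,0,1,1,0,1",
#      }
#    }

_convert_legacy_mask_to_dict(["1,1,1,1,1,1"], "object_mask", label="dog")
# -> {"object_mask_1": {"friendly_name": "Object Mask 1 (dog)",
#                       "enabled": True, "coordinates": "1,1,1,1,1,1"}}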
def migrate_018_0(config: dict[str, dict[str, Any]]) -> dict[str, dict[str, Any]]:
"""Handle migrating frigate config to 0.18-0"""
new_config = config.copy()
@@ -508,35 +459,7 @@ def migrate_018_0(config: dict[str, dict[str, Any]]) -> dict[str, dict[str, Any]
if not new_config.get("record"):
del new_config["record"]
# Migrate global motion masks
global_motion = new_config.get("motion", {})
if global_motion and "mask" in global_motion:
mask = global_motion.get("mask")
if mask is not None and not isinstance(mask, dict):
new_config["motion"]["mask"] = _convert_legacy_mask_to_dict(
mask, "motion_mask"
)
# Migrate global object masks
global_objects = new_config.get("objects", {})
if global_objects and "mask" in global_objects:
mask = global_objects.get("mask")
if mask is not None and not isinstance(mask, dict):
new_config["objects"]["mask"] = _convert_legacy_mask_to_dict(
mask, "object_mask"
)
# Migrate global object filters masks
if global_objects and "filters" in global_objects:
for obj_name, filter_config in global_objects.get("filters", {}).items():
if isinstance(filter_config, dict) and "mask" in filter_config:
mask = filter_config.get("mask")
if mask is not None and not isinstance(mask, dict):
new_config["objects"]["filters"][obj_name]["mask"] = (
_convert_legacy_mask_to_dict(mask, "object_mask", obj_name)
)
# Remove deprecated sync_recordings and migrate masks for camera-specific configs
# Remove deprecated sync_recordings and timelapse_args from camera-specific record configs
for name, camera in config.get("cameras", {}).items():
camera_config: dict[str, dict[str, Any]] = camera.copy()
@@ -555,34 +478,6 @@ def migrate_018_0(config: dict[str, dict[str, Any]]) -> dict[str, dict[str, Any]
if not camera_config.get("record"):
del camera_config["record"]
# Migrate camera motion masks
camera_motion = camera_config.get("motion", {})
if camera_motion and "mask" in camera_motion:
mask = camera_motion.get("mask")
if mask is not None and not isinstance(mask, dict):
camera_config["motion"]["mask"] = _convert_legacy_mask_to_dict(
mask, "motion_mask"
)
# Migrate camera global object masks
camera_objects = camera_config.get("objects", {})
if camera_objects and "mask" in camera_objects:
mask = camera_objects.get("mask")
if mask is not None and not isinstance(mask, dict):
camera_config["objects"]["mask"] = _convert_legacy_mask_to_dict(
mask, "object_mask"
)
# Migrate camera object filter masks
if camera_objects and "filters" in camera_objects:
for obj_name, filter_config in camera_objects.get("filters", {}).items():
if isinstance(filter_config, dict) and "mask" in filter_config:
mask = filter_config.get("mask")
if mask is not None and not isinstance(mask, dict):
camera_config["objects"]["filters"][obj_name]["mask"] = (
_convert_legacy_mask_to_dict(mask, "object_mask", obj_name)
)
new_config["cameras"][name] = camera_config
new_config["version"] = "0.18-0"


@@ -248,20 +248,20 @@ def is_object_filtered(obj, objects_to_track, object_filters):
if obj_settings.max_ratio < object_ratio:
return True
if obj_settings.rasterized_mask is not None:
if obj_settings.mask is not None:
# compute the coordinates of the object and make sure
# the location isn't outside the bounds of the image (can happen from rounding)
object_xmin = object_box[0]
object_xmax = object_box[2]
object_ymax = object_box[3]
y_location = min(int(object_ymax), len(obj_settings.rasterized_mask) - 1)
y_location = min(int(object_ymax), len(obj_settings.mask) - 1)
x_location = min(
int((object_xmax + object_xmin) / 2.0),
len(obj_settings.rasterized_mask[0]) - 1,
len(obj_settings.mask[0]) - 1,
)
# if the object is in a masked location, don't add it to detected objects
if obj_settings.rasterized_mask[y_location][x_location] == 0:
if obj_settings.mask[y_location][x_location] == 0:
return True
return False
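The mask test above samples the bottom-center point of the bounding box. A small numeric sketch (values assumed):

import numpy as np

mask = np.ones((480, 640), dtype=np.uint8)  # 1 = keep, 0 = filtered
object_box = (100, 50, 200, 300)            # xmin, ymin, xmax, ymax

y_location = min(int(object_box[3]), mask.shape[0] - 1)  # bottom edge, clamped
x_location = min(
    int((object_box[2] + object_box[0]) / 2.0), mask.shape[1] - 1
)  # horizontal center, clamped
filtered = mask[y_location][x_location] == 0  # masked location -> drop object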


@@ -1,46 +0,0 @@
"""JSON schema utilities for Frigate."""
from typing import Any, Dict, Type
from pydantic import BaseModel, TypeAdapter
def get_config_schema(config_class: Type[BaseModel]) -> Dict[str, Any]:
"""
Returns the JSON schema for FrigateConfig with polymorphic detectors.
This utility patches the FrigateConfig schema to include the full polymorphic
definitions for detectors. By default, Pydantic's schema for Dict[str, BaseDetectorConfig]
only includes the base class fields. This function replaces it with a reference
to the DetectorConfig union, which includes all available detector subclasses.
"""
# Import here to ensure all detector plugins are loaded through the detectors module
from frigate.detectors import DetectorConfig
# Get the base schema for FrigateConfig
schema = config_class.model_json_schema()
# Get the schema for the polymorphic DetectorConfig union
detector_adapter: TypeAdapter = TypeAdapter(DetectorConfig)
detector_schema = detector_adapter.json_schema()
# Ensure $defs exists in FrigateConfig schema
if "$defs" not in schema:
schema["$defs"] = {}
# Merge $defs from DetectorConfig into FrigateConfig schema
# This includes the specific schemas for each detector plugin (OvDetectorConfig, etc.)
if "$defs" in detector_schema:
schema["$defs"].update(detector_schema["$defs"])
# Extract the union schema (oneOf/discriminator) and add it as a definition
detector_union_schema = {k: v for k, v in detector_schema.items() if k != "$defs"}
schema["$defs"]["DetectorConfig"] = detector_union_schema
# Update the 'detectors' property to use the polymorphic DetectorConfig definition
if "detectors" in schema.get("properties", {}):
schema["properties"]["detectors"]["additionalProperties"] = {
"$ref": "#/$defs/DetectorConfig"
}
return schema
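A usage sketch of the removed utility (assumed call site, not part of this diff):

from frigate.config.config import FrigateConfig
from frigate.util.schema import get_config_schema  # module deleted in this diff

schema = get_config_schema(FrigateConfig)
# "detectors" now references the polymorphic union rather than the base class:
assert schema["properties"]["detectors"]["additionalProperties"] == {
    "$ref": "#/$defs/DetectorConfig"
}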


@@ -121,7 +121,7 @@ def get_cpu_stats() -> dict[str, dict]:
pid = str(process.info["pid"])
try:
cpu_percent = process.info["cpu_percent"]
cmdline = " ".join(process.info["cmdline"]).rstrip()
cmdline = process.info["cmdline"]
with open(f"/proc/{pid}/stat", "r") as f:
stats = f.readline().split()
@@ -155,7 +155,7 @@ def get_cpu_stats() -> dict[str, dict]:
"cpu": str(cpu_percent),
"cpu_average": str(round(cpu_average_usage, 2)),
"mem": f"{mem_pct}",
"cmdline": clean_camera_user_pass(cmdline),
"cmdline": clean_camera_user_pass(" ".join(cmdline)),
}
except Exception:
continue


@@ -8,18 +8,20 @@ and generates JSON translation files with titles and descriptions for the web UI
import json
import logging
import sys
import shutil
from pathlib import Path
from typing import Any, Dict, get_args, get_origin
from typing import Any, Dict, Optional, get_args, get_origin
from pydantic import BaseModel
from pydantic.fields import FieldInfo
from frigate.config.config import FrigateConfig
from frigate.util.schema import get_config_schema
logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)
def get_field_translations(field_info) -> Dict[str, str]:
def get_field_translations(field_info: FieldInfo) -> Dict[str, str]:
"""Extract title and description from a Pydantic field."""
translations = {}
@@ -32,147 +34,50 @@ def get_field_translations(field_info) -> Dict[str, str]:
return translations
def extract_translations_from_schema(
schema: Dict[str, Any], defs: Dict[str, Any] = None
) -> Dict[str, Any]:
def process_model_fields(model: type[BaseModel]) -> Dict[str, Any]:
"""
Recursively extract translations (titles and descriptions) from a JSON schema.
Recursively process a Pydantic model to extract translations.
Returns a dictionary structure with label and description for each field,
and nested fields directly under their parent keys.
Returns a nested dictionary structure matching the config schema,
with title and description for each field.
"""
if defs is None:
defs = schema.get("$defs", {})
translations = {}
# Add top-level title and description if present
if "title" in schema:
translations["label"] = schema["title"]
if "description" in schema:
translations["description"] = schema["description"]
model_fields = model.model_fields
# Process nested properties
properties = schema.get("properties", {})
for field_name, field_schema in properties.items():
field_translations = {}
for field_name, field_info in model_fields.items():
field_translations = get_field_translations(field_info)
# Handle $ref references
if "$ref" in field_schema:
ref_path = field_schema["$ref"]
if ref_path.startswith("#/$defs/"):
ref_name = ref_path.split("/")[-1]
if ref_name in defs:
ref_schema = defs[ref_name]
# Extract from the referenced schema
ref_translations = extract_translations_from_schema(
ref_schema, defs=defs
)
# Use the $ref field's own title/description if present
if "title" in field_schema:
field_translations["label"] = field_schema["title"]
elif "label" in ref_translations:
field_translations["label"] = ref_translations["label"]
if "description" in field_schema:
field_translations["description"] = field_schema["description"]
elif "description" in ref_translations:
field_translations["description"] = ref_translations[
"description"
]
# Add nested properties from referenced schema
nested_without_root = {
k: v
for k, v in ref_translations.items()
if k not in ("label", "description")
}
field_translations.update(nested_without_root)
# Handle additionalProperties with $ref (for dict types)
elif "additionalProperties" in field_schema:
additional_props = field_schema["additionalProperties"]
# Extract title and description from the field itself
if "title" in field_schema:
field_translations["label"] = field_schema["title"]
if "description" in field_schema:
field_translations["description"] = field_schema["description"]
# Get the field's type annotation
field_type = field_info.annotation
# If additionalProperties contains a $ref, extract nested translations
if "$ref" in additional_props:
ref_path = additional_props["$ref"]
if ref_path.startswith("#/$defs/"):
ref_name = ref_path.split("/")[-1]
if ref_name in defs:
ref_schema = defs[ref_name]
nested = extract_translations_from_schema(ref_schema, defs=defs)
nested_without_root = {
k: v
for k, v in nested.items()
if k not in ("label", "description")
}
field_translations.update(nested_without_root)
# Handle items with $ref (for array types)
elif "items" in field_schema:
items = field_schema["items"]
# Extract title and description from the field itself
if "title" in field_schema:
field_translations["label"] = field_schema["title"]
if "description" in field_schema:
field_translations["description"] = field_schema["description"]
# Handle Optional types
origin = get_origin(field_type)
# If items contains a $ref, extract nested translations
if "$ref" in items:
ref_path = items["$ref"]
if ref_path.startswith("#/$defs/"):
ref_name = ref_path.split("/")[-1]
if ref_name in defs:
ref_schema = defs[ref_name]
nested = extract_translations_from_schema(ref_schema, defs=defs)
nested_without_root = {
k: v
for k, v in nested.items()
if k not in ("label", "description")
}
field_translations.update(nested_without_root)
else:
# Extract title and description
if "title" in field_schema:
field_translations["label"] = field_schema["title"]
if "description" in field_schema:
field_translations["description"] = field_schema["description"]
if origin is Optional or (
hasattr(origin, "__name__") and origin.__name__ == "UnionType"
):
args = get_args(field_type)
field_type = next(
(arg for arg in args if arg is not type(None)), field_type
)
# Recursively process nested properties
if "properties" in field_schema:
nested = extract_translations_from_schema(field_schema, defs=defs)
# Merge nested translations
nested_without_root = {
k: v for k, v in nested.items() if k not in ("label", "description")
}
field_translations.update(nested_without_root)
# Handle anyOf cases
elif "anyOf" in field_schema:
for item in field_schema["anyOf"]:
if "properties" in item:
nested = extract_translations_from_schema(item, defs=defs)
nested_without_root = {
k: v
for k, v in nested.items()
if k not in ("label", "description")
}
field_translations.update(nested_without_root)
elif "$ref" in item:
ref_path = item["$ref"]
if ref_path.startswith("#/$defs/"):
ref_name = ref_path.split("/")[-1]
if ref_name in defs:
ref_schema = defs[ref_name]
nested = extract_translations_from_schema(
ref_schema, defs=defs
)
nested_without_root = {
k: v
for k, v in nested.items()
if k not in ("label", "description")
}
field_translations.update(nested_without_root)
# Handle Dict types (like Dict[str, CameraConfig])
if get_origin(field_type) is dict:
dict_args = get_args(field_type)
if len(dict_args) >= 2:
value_type = dict_args[1]
if isinstance(value_type, type) and issubclass(value_type, BaseModel):
nested_translations = process_model_fields(value_type)
if nested_translations:
field_translations["properties"] = nested_translations
elif isinstance(field_type, type) and issubclass(field_type, BaseModel):
nested_translations = process_model_fields(field_type)
if nested_translations:
field_translations["properties"] = nested_translations
if field_translations:
translations[field_name] = field_translations
@@ -180,350 +85,76 @@ def extract_translations_from_schema(
return translations
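To make the model-driven extraction concrete, a tiny example model and the shape of output process_model_fields would produce (demo model invented; this assumes get_field_translations maps a Field's title to "label"):

from pydantic import BaseModel, Field

class DemoAudioConfig(BaseModel):
    enabled: bool = Field(default=False, title="Enable audio events.")
    min_volume: int = Field(
        default=500, title="Min volume required to run audio detection."
    )

process_model_fields(DemoAudioConfig)
# -> {
#      "enabled": {"label": "Enable audio events."},
#      "min_volume": {"label": "Min volume required to run audio detection."},
#    }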
def generate_section_translation(config_class: type) -> Dict[str, Any]:
def generate_section_translation(
section_name: str, field_info: FieldInfo
) -> Dict[str, Any]:
"""
Generate translation structure for a config section using its JSON schema.
Generate translation structure for a top-level config section.
"""
schema = config_class.model_json_schema()
return extract_translations_from_schema(schema)
section_translations = get_field_translations(field_info)
field_type = field_info.annotation
origin = get_origin(field_type)
if origin is Optional or (
hasattr(origin, "__name__") and origin.__name__ == "UnionType"
):
args = get_args(field_type)
field_type = next((arg for arg in args if arg is not type(None)), field_type)
def get_detector_translations(
config_schema: Dict[str, Any],
) -> tuple[Dict[str, Any], set[str]]:
"""Build detector type translations with nested fields based on schema definitions."""
defs = config_schema.get("$defs", {})
detector_schema = defs.get("DetectorConfig", {})
discriminator = detector_schema.get("discriminator", {})
mapping = discriminator.get("mapping", {})
# Handle Dict types (like detectors, cameras, camera_groups)
if get_origin(field_type) is dict:
dict_args = get_args(field_type)
if len(dict_args) >= 2:
value_type = dict_args[1]
if isinstance(value_type, type) and issubclass(value_type, BaseModel):
nested = process_model_fields(value_type)
if nested:
section_translations["properties"] = nested
type_translations: Dict[str, Any] = {}
nested_field_keys: set[str] = set()
for detector_type, ref in mapping.items():
if not isinstance(ref, str):
continue
# If the field itself is a BaseModel, process it
elif isinstance(field_type, type) and issubclass(field_type, BaseModel):
nested = process_model_fields(field_type)
if nested:
section_translations["properties"] = nested
if not ref.startswith("#/$defs/"):
continue
ref_name = ref.split("/")[-1]
ref_schema = defs.get(ref_name, {})
if not ref_schema:
continue
type_entry: Dict[str, str] = {}
title = ref_schema.get("title")
description = ref_schema.get("description")
if title:
type_entry["label"] = title
if description:
type_entry["description"] = description
nested = extract_translations_from_schema(ref_schema, defs=defs)
nested_without_root = {
k: v for k, v in nested.items() if k not in ("label", "description")
}
if nested_without_root:
type_entry.update(nested_without_root)
nested_field_keys.update(nested_without_root.keys())
if type_entry:
type_translations[detector_type] = type_entry
return type_translations, nested_field_keys
return section_translations
def main():
"""Main function to generate config translations."""
# Define output directory
if len(sys.argv) > 1:
output_dir = Path(sys.argv[1])
else:
output_dir = (
Path(__file__).parent / "web" / "public" / "locales" / "en" / "config"
)
output_dir = Path(__file__).parent / "web" / "public" / "locales" / "en" / "config"
logger.info(f"Output directory: {output_dir}")
# Ensure the output directory exists; do not delete existing files.
# Clean and recreate the output directory
if output_dir.exists():
logger.info(f"Removing existing directory: {output_dir}")
shutil.rmtree(output_dir)
logger.info(f"Creating directory: {output_dir}")
output_dir.mkdir(parents=True, exist_ok=True)
logger.info(
f"Using output directory (existing files will be overwritten): {output_dir}"
)
config_fields = FrigateConfig.model_fields
config_schema = get_config_schema(FrigateConfig)
logger.info(f"Found {len(config_fields)} top-level config sections")
global_translations = {}
for field_name, field_info in config_fields.items():
if field_name.startswith("_"):
continue
logger.info(f"Processing section: {field_name}")
# Get the field's type
field_type = field_info.annotation
from typing import Optional, Union
origin = get_origin(field_type)
if (
origin is Optional
or origin is Union
or (
hasattr(origin, "__name__")
and origin.__name__ in ("UnionType", "Union")
)
):
args = get_args(field_type)
field_type = next(
(arg for arg in args if arg is not type(None)), field_type
)
# Handle Dict[str, SomeModel] - extract the value type
if origin is dict:
args = get_args(field_type)
if args and len(args) > 1:
field_type = args[1] # Get value type from Dict[key, value]
# Start with field's top-level metadata (label, description)
section_data = get_field_translations(field_info)
# Generate nested translations from the field type's schema
if hasattr(field_type, "model_json_schema"):
schema = field_type.model_json_schema()
# Extract nested properties from schema
nested = extract_translations_from_schema(schema)
# Remove top-level label/description from nested since we got those from field_info
nested_without_root = {
k: v for k, v in nested.items() if k not in ("label", "description")
}
section_data.update(nested_without_root)
if field_name == "detectors":
detector_types, detector_field_keys = get_detector_translations(
config_schema
)
section_data.update(detector_types)
for key in detector_field_keys:
if key == "type":
continue
section_data.pop(key, None)
section_data = generate_section_translation(field_name, field_info)
if not section_data:
logger.warning(f"No translations found for section: {field_name}")
continue
# Add camera-level fields to global config documentation if applicable
CAMERA_LEVEL_FIELDS = {
"birdseye": (
"frigate.config.camera.birdseye",
"BirdseyeCameraConfig",
["order"],
),
"ffmpeg": (
"frigate.config.camera.ffmpeg",
"CameraFfmpegConfig",
["inputs"],
),
"lpr": (
"frigate.config.classification",
"CameraLicensePlateRecognitionConfig",
["expire_time"],
),
"semantic_search": (
"frigate.config.classification",
"CameraSemanticSearchConfig",
["triggers"],
),
}
output_file = output_dir / f"{field_name}.json"
with open(output_file, "w", encoding="utf-8") as f:
json.dump(section_data, f, indent=2, ensure_ascii=False)
if field_name in CAMERA_LEVEL_FIELDS:
module_path, class_name, field_names = CAMERA_LEVEL_FIELDS[field_name]
try:
import importlib
module = importlib.import_module(module_path)
camera_class = getattr(module, class_name)
schema = camera_class.model_json_schema()
camera_fields = schema.get("properties", {})
defs = schema.get("$defs", {})
for fname in field_names:
if fname in camera_fields:
field_schema = camera_fields[fname]
field_trans = {}
if "title" in field_schema:
field_trans["label"] = field_schema["title"]
if "description" in field_schema:
field_trans["description"] = field_schema["description"]
# Extract nested properties based on schema type
nested_to_extract = None
# Handle direct $ref
if "$ref" in field_schema:
ref_path = field_schema["$ref"]
if ref_path.startswith("#/$defs/"):
ref_name = ref_path.split("/")[-1]
if ref_name in defs:
nested_to_extract = defs[ref_name]
# Handle additionalProperties with $ref (for dict types)
elif "additionalProperties" in field_schema:
additional_props = field_schema["additionalProperties"]
if "$ref" in additional_props:
ref_path = additional_props["$ref"]
if ref_path.startswith("#/$defs/"):
ref_name = ref_path.split("/")[-1]
if ref_name in defs:
nested_to_extract = defs[ref_name]
# Handle items with $ref (for array types)
elif "items" in field_schema:
items = field_schema["items"]
if "$ref" in items:
ref_path = items["$ref"]
if ref_path.startswith("#/$defs/"):
ref_name = ref_path.split("/")[-1]
if ref_name in defs:
nested_to_extract = defs[ref_name]
# Extract nested properties if we found a schema to use
if nested_to_extract:
nested = extract_translations_from_schema(
nested_to_extract, defs=defs
)
nested_without_root = {
k: v
for k, v in nested.items()
if k not in ("label", "description")
}
field_trans.update(nested_without_root)
if field_trans:
section_data[fname] = field_trans
except Exception as e:
logger.warning(
f"Could not add camera-level fields for {field_name}: {e}"
)
# Add to global translations instead of writing separate files
global_translations[field_name] = section_data
logger.info(f"Added section to global translations: {field_name}")
# Handle camera-level configs that aren't top-level FrigateConfig fields
# These are defined as fields in CameraConfig, so we extract title/description from there
camera_level_configs = {
"camera_mqtt": ("frigate.config.camera.mqtt", "CameraMqttConfig", "mqtt"),
"camera_ui": ("frigate.config.camera.ui", "CameraUiConfig", "ui"),
"onvif": ("frigate.config.camera.onvif", "OnvifConfig", "onvif"),
}
# Import CameraConfig to extract field metadata
from frigate.config.camera.camera import CameraConfig
camera_config_schema = CameraConfig.model_json_schema()
camera_properties = camera_config_schema.get("properties", {})
for config_name, (
module_path,
class_name,
camera_field_name,
) in camera_level_configs.items():
try:
logger.info(f"Processing camera-level section: {config_name}")
import importlib
module = importlib.import_module(module_path)
config_class = getattr(module, class_name)
section_data = {}
# Extract top-level label and description from CameraConfig field definition
if camera_field_name in camera_properties:
field_schema = camera_properties[camera_field_name]
if "title" in field_schema:
section_data["label"] = field_schema["title"]
if "description" in field_schema:
section_data["description"] = field_schema["description"]
# Process model fields from schema
schema = config_class.model_json_schema()
nested = extract_translations_from_schema(schema)
# Remove top-level label/description since we got those from CameraConfig
nested_without_root = {
k: v for k, v in nested.items() if k not in ("label", "description")
}
section_data.update(nested_without_root)
# Add camera-level section into global translations (do not write separate file)
global_translations[config_name] = section_data
logger.info(
f"Added camera-level section to global translations: {config_name}"
)
except Exception as e:
logger.error(f"Failed to generate {config_name}: {e}")
# Remove top-level 'cameras' field if present so it remains a separate file
if "cameras" in global_translations:
logger.info(
"Removing top-level 'cameras' from global translations to keep it as a separate cameras.json"
)
del global_translations["cameras"]
# Write consolidated global.json with per-section keys
global_file = output_dir / "global.json"
with open(global_file, "w", encoding="utf-8") as f:
json.dump(global_translations, f, indent=2, ensure_ascii=False)
f.write("\n")
logger.info(f"Generated consolidated translations: {global_file}")
if not global_translations:
logger.warning("No global translations were generated!")
else:
logger.info(f"Global contains {len(global_translations)} sections")
# Generate cameras.json from CameraConfig schema
cameras_file = output_dir / "cameras.json"
logger.info(f"Generating cameras.json: {cameras_file}")
try:
if "camera_config_schema" in locals():
camera_schema = camera_config_schema
else:
from frigate.config.camera.camera import CameraConfig
camera_schema = CameraConfig.model_json_schema()
camera_translations = extract_translations_from_schema(camera_schema)
# Change descriptions to use 'for this camera' for fields that are global
def sanitize_camera_descriptions(obj):
if isinstance(obj, dict):
for k, v in list(obj.items()):
if k == "description" and isinstance(v, str):
obj[k] = v.replace(
"for all cameras; can be overridden per-camera",
"for this camera",
)
else:
sanitize_camera_descriptions(v)
elif isinstance(obj, list):
for item in obj:
sanitize_camera_descriptions(item)
sanitize_camera_descriptions(camera_translations)
with open(cameras_file, "w", encoding="utf-8") as f:
json.dump(camera_translations, f, indent=2, ensure_ascii=False)
f.write("\n")
logger.info(f"Generated cameras.json: {cameras_file}")
except Exception as e:
logger.error(f"Failed to generate cameras.json: {e}")
logger.info(f"Generated: {output_file}")
logger.info("Translation generation complete!")

web/package-lock.json (generated, 1415 changed lines): file diff suppressed because it is too large.


@@ -38,10 +38,6 @@
"@radix-ui/react-toggle": "^1.1.2",
"@radix-ui/react-toggle-group": "^1.1.2",
"@radix-ui/react-tooltip": "^1.2.8",
"@rjsf/core": "^6.3.1",
"@rjsf/shadcn": "^6.3.1",
"@rjsf/utils": "^6.3.1",
"@rjsf/validator-ajv8": "^6.3.1",
"apexcharts": "^3.52.0",
"axios": "^1.7.7",
"class-variance-authority": "^0.7.1",


@@ -115,10 +115,8 @@
"internalID": "The Internal ID Frigate uses in the configuration and database"
},
"button": {
"add": "Add",
"apply": "Apply",
"reset": "Reset",
"undo": "Undo",
"done": "Done",
"enabled": "Enabled",
"enable": "Enable",
@@ -153,14 +151,7 @@
"export": "Export",
"deleteNow": "Delete Now",
"next": "Next",
"continue": "Continue",
"modified": "Modified",
"overridden": "Overridden",
"resetToGlobal": "Reset to Global",
"resetToDefault": "Reset to Default",
"saveAll": "Save All",
"savingAll": "Saving All…",
"undoAll": "Undo All"
"continue": "Continue"
},
"menu": {
"system": "System",


@@ -0,0 +1,26 @@
{
"label": "Global Audio events configuration.",
"properties": {
"enabled": {
"label": "Enable audio events."
},
"max_not_heard": {
"label": "Seconds of not hearing the type of audio to end the event."
},
"min_volume": {
"label": "Min volume required to run audio detection."
},
"listen": {
"label": "Audio to listen for."
},
"filters": {
"label": "Audio filters."
},
"enabled_in_config": {
"label": "Keep track of original state of audio detection."
},
"num_threads": {
"label": "Number of detection threads"
}
}
}


@@ -0,0 +1,23 @@
{
"label": "Audio transcription config.",
"properties": {
"enabled": {
"label": "Enable audio transcription."
},
"language": {
"label": "Language abbreviation to use for audio event transcription/translation."
},
"device": {
"label": "The device used for license plate recognition."
},
"model_size": {
"label": "The size of the embeddings model used."
},
"enabled_in_config": {
"label": "Keep track of original state of camera."
},
"live_enabled": {
"label": "Enable live transcriptions."
}
}
}


@@ -0,0 +1,35 @@
{
"label": "Auth configuration.",
"properties": {
"enabled": {
"label": "Enable authentication"
},
"reset_admin_password": {
"label": "Reset the admin password on startup"
},
"cookie_name": {
"label": "Name for jwt token cookie"
},
"cookie_secure": {
"label": "Set secure flag on cookie"
},
"session_length": {
"label": "Session length for jwt session tokens"
},
"refresh_time": {
"label": "Refresh the session if it is going to expire in this many seconds"
},
"failed_login_rate_limit": {
"label": "Rate limits for failed login attempts."
},
"trusted_proxies": {
"label": "Trusted proxies for determining IP address to rate limit"
},
"hash_iterations": {
"label": "Password hash iterations"
},
"roles": {
"label": "Role to camera mappings. Empty list grants access to all cameras."
}
}
}

Some files were not shown because too many files have changed in this diff.