mirror of https://github.com/blakeblackshear/frigate.git
synced 2026-02-27 11:48:40 -05:00

Compare commits: dev ... genai-docs (34 commits)
| SHA1 |
|---|
| a505775242 |
| b42b011a56 |
| 8793650c2f |
| 9c8dd9a6ba |
| 507b495b90 |
| 3525f32bc2 |
| ac142449f1 |
| 47b89a1d60 |
| cdcf56092c |
| 08ee2e21de |
| 9ab4dd4538 |
| fe5441349b |
| a4b1cc3a54 |
| 99e25661b2 |
| 20360db2c9 |
| 3826d72c2a |
| 3d5757c640 |
| 86100fde6f |
| 28b1195a79 |
| b6db38bd4e |
| 92c6b8e484 |
| 9381f26352 |
| e0180005be |
| 2041798702 |
| 3d23b5de30 |
| 209bb44518 |
| 88462cd6c3 |
| c2cc23861a |
| 2b46084260 |
| 67466f215c |
| e011424947 |
| a1a0051dd7 |
| ff331060c3 |
| 7aab1f02ec |
@@ -229,7 +229,6 @@ Reolink
 restream
 restreamed
 restreaming
-RJSF
 rkmpp
 rknn
 rkrga
@@ -5,39 +5,31 @@ title: Configuring Generative AI
 
 ## Configuration
 
-A Generative AI provider can be configured in the global config, which will make the Generative AI features available for use. There are currently 4 native providers available to integrate with Frigate. Other providers that support the OpenAI standard API can also be used. See the OpenAI section below.
+A Generative AI provider can be configured in the global config, which will make the Generative AI features available for use. There are currently 4 native providers available to integrate with Frigate. Other providers that support the OpenAI standard API can also be used. See the OpenAI-Compatible section below.
 
 To use Generative AI, you must define a single provider at the global level of your Frigate configuration. If the provider you choose requires an API key, you may either directly paste it in your configuration, or store it in an environment variable prefixed with `FRIGATE_`.
 
-## Ollama
+## Local Providers
 
+Local providers run on your own hardware and keep all data processing private. These require a GPU or dedicated hardware for best performance.
+
 :::warning
 
-Using Ollama on CPU is not recommended, high inference times make using Generative AI impractical.
+Running Generative AI models on CPU is not recommended, as high inference times make using Generative AI impractical.
 
 :::
 
-[Ollama](https://ollama.com/) allows you to self-host large language models and keep everything running locally. It is highly recommended to host this server on a machine with an Nvidia graphics card, or on a Apple silicon Mac for best performance.
-
-Most of the 7b parameter 4-bit vision models will fit inside 8GB of VRAM. There is also a [Docker container](https://hub.docker.com/r/ollama/ollama) available.
-
-Parallel requests also come with some caveats. You will need to set `OLLAMA_NUM_PARALLEL=1` and choose a `OLLAMA_MAX_QUEUE` and `OLLAMA_MAX_LOADED_MODELS` values that are appropriate for your hardware and preferences. See the [Ollama documentation](https://docs.ollama.com/faq#how-does-ollama-handle-concurrent-requests).
-
-### Model Types: Instruct vs Thinking
-
-Most vision-language models are available as **instruct** models, which are fine-tuned to follow instructions and respond concisely to prompts. However, some models (such as certain Qwen-VL or minigpt variants) offer both **instruct** and **thinking** versions.
-
-- **Instruct models** are always recommended for use with Frigate. These models generate direct, relevant, actionable descriptions that best fit Frigate's object and event summary use case.
-- **Thinking models** are fine-tuned for more free-form, open-ended, and speculative outputs, which are typically not concise and may not provide the practical summaries Frigate expects. For this reason, Frigate does **not** recommend or support using thinking models.
-
-Some models are labeled as **hybrid** (capable of both thinking and instruct tasks). In these cases, Frigate will always use instruct-style prompts and specifically disables thinking-mode behaviors to ensure concise, useful responses.
-
-**Recommendation:**
-Always select the `-instruct` or documented instruct/tagged variant of any model you use in your Frigate configuration. If in doubt, refer to your model provider's documentation or model library for guidance on the correct model variant to use.
-
-### Supported Models
-
-You must use a vision capable model with Frigate. Current model variants can be found [in their model library](https://ollama.com/library). Note that Frigate will not automatically download the model you specify in your config, Ollama will try to download the model but it may take longer than the timeout, it is recommended to pull the model beforehand by running `ollama pull your_model` on your Ollama server/Docker container. Note that the model specified in Frigate's config must match the downloaded model tag.
+### Recommended Local Models
+
+You must use a vision-capable model with Frigate. The following models are recommended for local deployment:
+
+| Model | Notes |
+| ------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
+| `qwen3-vl` | Strong visual and situational understanding, strong ability to identify smaller objects and interactions with object. |
+| `qwen3.5` | Strong situational understanding, but missing DeepStack from qwen3-vl leading to worse performance for identifying objects in people's hand and other small details. |
+| `Intern3.5VL` | Relatively fast with good vision comprehension |
+| `gemma3` | Slower model with good vision and temporal understanding |
+| `qwen2.5-vl` | Fast but capable model with good vision comprehension |
 
 :::info
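The hunk above keeps the advice to store provider API keys in environment variables prefixed with `FRIGATE_` rather than pasting them into the config. A minimal sketch of that pattern (the variable name and the `{FRIGATE_...}` config reference below are illustrative assumptions, not names mandated by the docs):

```shell
# Hypothetical variable name; any FRIGATE_-prefixed variable can be exported for Frigate.
export FRIGATE_GENAI_API_KEY="your-api-key"

# In config.yml you would then reference the variable instead of the raw key, e.g.:
#   genai:
#     api_key: "{FRIGATE_GENAI_API_KEY}"

echo "FRIGATE_GENAI_API_KEY set: ${FRIGATE_GENAI_API_KEY:+yes}"
```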
@@ -45,32 +37,64 @@ Each model is available in multiple parameter sizes (3b, 4b, 8b, etc.). Larger s
 
 :::
 
+:::note
+
+You should have at least 8 GB of RAM available (or VRAM if running on GPU) to run the 7B models, 16 GB to run the 13B models, and 24 GB to run the 33B models.
+
+:::
+
+### Model Types: Instruct vs Thinking
+
+Most vision-language models are available as **instruct** models, which are fine-tuned to follow instructions and respond concisely to prompts. However, some models (such as certain Qwen-VL or minigpt variants) offer both **instruct** and **thinking** versions.
+
+- **Instruct models** are always recommended for use with Frigate. These models generate direct, relevant, actionable descriptions that best fit Frigate's object and event summary use case.
+- **Reasoning / Thinking models** are fine-tuned for more free-form, open-ended, and speculative outputs, which are typically not concise and may not provide the practical summaries Frigate expects. For this reason, Frigate does **not** recommend or support using thinking models.
+
+Some models are labeled as **hybrid** (capable of both thinking and instruct tasks). In these cases, it is recommended to disable reasoning / thinking, which is generally model specific (see your model's documentation).
+
+**Recommendation:**
+Always select the `-instruct` or documented instruct/tagged variant of any model you use in your Frigate configuration. If in doubt, refer to your model provider's documentation or model library for guidance on the correct model variant to use.
+
+### llama.cpp
+
+[llama.cpp](https://github.com/ggml-org/llama.cpp) is a C++ implementation of LLaMA that provides a high-performance inference server.
+
+It is highly recommended to host the llama.cpp server on a machine with a discrete graphics card, or on an Apple silicon Mac for best performance.
+
+#### Supported Models
+
+You must use a vision capable model with Frigate. The llama.cpp server supports various vision models in GGUF format.
+
+#### Configuration
+
+All llama.cpp native options can be passed through `provider_options`, including `temperature`, `top_k`, `top_p`, `min_p`, `repeat_penalty`, `repeat_last_n`, `seed`, `grammar`, and more. See the [llama.cpp server documentation](https://github.com/ggml-org/llama.cpp/blob/master/tools/server/README.md) for a complete list of available parameters.
+
+```yaml
+genai:
+  provider: llamacpp
+  base_url: http://localhost:8080
+  model: your-model-name
+  provider_options:
+    context_size: 16000 # Tell Frigate your context size so it can send the appropriate amount of information.
+```
+
+### Ollama
+
+[Ollama](https://ollama.com/) allows you to self-host large language models and keep everything running locally. It is highly recommended to host this server on a machine with an Nvidia graphics card, or on an Apple silicon Mac for best performance.
+
+Most of the 7b parameter 4-bit vision models will fit inside 8GB of VRAM. There is also a [Docker container](https://hub.docker.com/r/ollama/ollama) available.
+
+Parallel requests also come with some caveats. You will need to set `OLLAMA_NUM_PARALLEL=1` and choose `OLLAMA_MAX_QUEUE` and `OLLAMA_MAX_LOADED_MODELS` values that are appropriate for your hardware and preferences. See the [Ollama documentation](https://docs.ollama.com/faq#how-does-ollama-handle-concurrent-requests).
+
 :::tip
 
 If you are trying to use a single model for Frigate and HomeAssistant, it will need to support vision and tools calling. qwen3-VL supports vision and tools simultaneously in Ollama.
 
 :::
 
-The following models are recommended:
+Note that Frigate will not automatically download the model you specify in your config. Ollama will try to download the model but it may take longer than the timeout, so it is recommended to pull the model beforehand by running `ollama pull your_model` on your Ollama server/Docker container. The model specified in Frigate's config must match the downloaded model tag.
 
-| Model | Notes |
-| ------------- | -------------------------------------------------------------------- |
-| `qwen3-vl` | Strong visual and situational understanding, higher vram requirement |
-| `Intern3.5VL` | Relatively fast with good vision comprehension |
-| `gemma3` | Strong frame-to-frame understanding, slower inference times |
-| `qwen2.5-vl` | Fast but capable model with good vision comprehension |
-
-:::note
-
-You should have at least 8 GB of RAM available (or VRAM if running on GPU) to run the 7B models, 16 GB to run the 13B models, and 32 GB to run the 33B models.
-
-:::
-
-#### Ollama Cloud models
-
-Ollama also supports [cloud models](https://ollama.com/cloud), where your local Ollama instance handles requests from Frigate, but model inference is performed in the cloud. Set up Ollama locally, sign in with your Ollama account, and specify the cloud model name in your Frigate config. For more details, see the Ollama cloud model [docs](https://docs.ollama.com/cloud).
-
-### Configuration
+#### Configuration
 
 ```yaml
 genai:
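The Ollama guidance above (serialize parallel requests, pre-pull the exact model tag) can be sketched as server-side setup. The queue and loaded-model counts are assumptions to tune for your hardware, and the model tag is a placeholder that must match your Frigate config:

```shell
# Serialize requests as recommended above; queue/loaded values are assumptions to tune.
export OLLAMA_NUM_PARALLEL=1
export OLLAMA_MAX_QUEUE=64
export OLLAMA_MAX_LOADED_MODELS=1

# On the Ollama host, pre-pull the tag referenced in Frigate's config so the first
# request doesn't hit the download timeout:
#   ollama pull qwen2.5-vl:7b   # hypothetical tag; must match your config's `model`

echo "parallel=$OLLAMA_NUM_PARALLEL queue=$OLLAMA_MAX_QUEUE loaded=$OLLAMA_MAX_LOADED_MODELS"
```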
@@ -83,49 +107,65 @@ genai:
   num_ctx: 8192 # make sure the context matches other services that are using ollama
 ```
 
-## llama.cpp
+### OpenAI-Compatible
 
-[llama.cpp](https://github.com/ggml-org/llama.cpp) is a C++ implementation of LLaMA that provides a high-performance inference server. Using llama.cpp directly gives you access to all native llama.cpp options and parameters.
+Frigate supports any provider that implements the OpenAI API standard. This includes self-hosted solutions like [vLLM](https://docs.vllm.ai/), [LocalAI](https://localai.io/), and other OpenAI-compatible servers.
 
-:::warning
+:::tip
 
-Using llama.cpp on CPU is not recommended, high inference times make using Generative AI impractical.
+For OpenAI-compatible servers (such as llama.cpp) that don't expose the configured context size in the API response, you can manually specify the context size in `provider_options`:
 
-:::
-
-It is highly recommended to host the llama.cpp server on a machine with a discrete graphics card, or on an Apple silicon Mac for best performance.
-
-### Supported Models
-
-You must use a vision capable model with Frigate. The llama.cpp server supports various vision models in GGUF format.
-
-### Configuration
-
 ```yaml
 genai:
-  provider: llamacpp
+  provider: openai
-  base_url: http://localhost:8080
+  base_url: http://your-llama-server
   model: your-model-name
   provider_options:
-    temperature: 0.7
-    repeat_penalty: 1.05
-    top_p: 0.8
-    top_k: 40
-    min_p: 0.05
-    seed: -1
+    context_size: 8192 # Specify the configured context size
 ```
 
-All llama.cpp native options can be passed through `provider_options`, including `temperature`, `top_k`, `top_p`, `min_p`, `repeat_penalty`, `repeat_last_n`, `seed`, `grammar`, and more. See the [llama.cpp server documentation](https://github.com/ggml-org/llama.cpp/blob/master/tools/server/README.md) for a complete list of available parameters.
+This ensures Frigate uses the correct context window size when generating prompts.
 
-## Google Gemini
+:::
 
+#### Configuration
+
+```yaml
+genai:
+  provider: openai
+  base_url: http://your-server:port
+  api_key: your-api-key # May not be required for local servers
+  model: your-model-name
+```
+
+To use a different OpenAI-compatible API endpoint, set the `OPENAI_BASE_URL` environment variable to your provider's API URL.
+
+## Cloud Providers
+
+Cloud providers run on remote infrastructure and require an API key for authentication. These services handle all model inference on their servers.
+
+### Ollama Cloud
+
+Ollama also supports [cloud models](https://ollama.com/cloud), where your local Ollama instance handles requests from Frigate, but model inference is performed in the cloud. Set up Ollama locally, sign in with your Ollama account, and specify the cloud model name in your Frigate config. For more details, see the Ollama cloud model [docs](https://docs.ollama.com/cloud).
+
+#### Configuration
+
+```yaml
+genai:
+  provider: ollama
+  base_url: http://localhost:11434
+  model: cloud-model-name
+```
+
+### Google Gemini
+
 Google Gemini has a [free tier](https://ai.google.dev/pricing) for the API, however the limits may not be sufficient for standard Frigate usage. Choose a plan appropriate for your installation.
 
-### Supported Models
+#### Supported Models
 
 You must use a vision capable model with Frigate. Current model variants can be found [in their documentation](https://ai.google.dev/gemini-api/docs/models/gemini).
 
-### Get API Key
+#### Get API Key
 
 To start using Gemini, you must first get an API key from [Google AI Studio](https://aistudio.google.com).
 
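Before pointing Frigate's `base_url` at an OpenAI-compatible server such as vLLM, LocalAI, or llama.cpp, it helps to confirm the endpoint answers. A sketch of that check; the port is an assumption (llama.cpp's common default), and the `/v1/models` route is the standard OpenAI listing endpoint:

```shell
BASE_URL="http://localhost:8080"   # assumption: llama.cpp server's common default port

# An OpenAI-compatible server should answer the standard models listing route:
#   curl -s "$BASE_URL/v1/models"

# Frigate's `base_url` is the server root; the /v1/... paths are appended by the client.
echo "$BASE_URL/v1/models"
```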
@@ -134,7 +174,7 @@ To start using Gemini, you must first get an API key from [Google AI Studio](htt
 3. Click "Create API key in new project"
 4. Copy the API key for use in your config
 
-### Configuration
+#### Configuration
 
 ```yaml
 genai:
@@ -159,19 +199,19 @@ Other HTTP options are available, see the [python-genai documentation](https://g
 
 :::
 
-## OpenAI
+### OpenAI
 
 OpenAI does not have a free tier for their API. With the release of gpt-4o, pricing has been reduced and each generation should cost fractions of a cent if you choose to go this route.
 
-### Supported Models
+#### Supported Models
 
 You must use a vision capable model with Frigate. Current model variants can be found [in their documentation](https://platform.openai.com/docs/models).
 
-### Get API Key
+#### Get API Key
 
 To start using OpenAI, you must first [create an API key](https://platform.openai.com/api-keys) and [configure billing](https://platform.openai.com/settings/organization/billing/overview).
 
-### Configuration
+#### Configuration
 
 ```yaml
 genai:
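The surrounding text notes that `OPENAI_BASE_URL` redirects the OpenAI provider to a different compatible endpoint. A minimal sketch (the URL is a placeholder, not a real provider):

```shell
# Point the OpenAI client at an alternative OpenAI-compatible endpoint (placeholder URL).
export OPENAI_BASE_URL="http://localhost:8080/v1"

echo "OPENAI_BASE_URL=$OPENAI_BASE_URL"
```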
@@ -180,42 +220,19 @@ genai:
   model: gpt-4o
 ```
 
-:::note
+### Azure OpenAI
 
-To use a different OpenAI-compatible API endpoint, set the `OPENAI_BASE_URL` environment variable to your provider's API URL.
-
-:::
-
-:::tip
-
-For OpenAI-compatible servers (such as llama.cpp) that don't expose the configured context size in the API response, you can manually specify the context size in `provider_options`:
-
-```yaml
-genai:
-  provider: openai
-  base_url: http://your-llama-server
-  model: your-model-name
-  provider_options:
-    context_size: 8192 # Specify the configured context size
-```
-
-This ensures Frigate uses the correct context window size when generating prompts.
-
-:::
-
-## Azure OpenAI
-
 Microsoft offers several vision models through Azure OpenAI. A subscription is required.
 
-### Supported Models
+#### Supported Models
 
 You must use a vision capable model with Frigate. Current model variants can be found [in their documentation](https://learn.microsoft.com/en-us/azure/ai-services/openai/concepts/models).
 
-### Create Resource and Get API Key
+#### Create Resource and Get API Key
 
 To start using Azure OpenAI, you must first [create a resource](https://learn.microsoft.com/azure/cognitive-services/openai/how-to/create-resource?pivots=web-portal#create-a-resource). You'll need your API key, model name, and resource URL, which must include the `api-version` parameter (see the example below).
 
-### Configuration
+#### Configuration
 
 ```yaml
 genai:
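The Azure OpenAI section above requires the resource URL to carry the `api-version` query parameter. A sketch of the typical URL shape; the resource name, deployment name, and version below are placeholders to take from your Azure portal, and the exact path may vary by service version:

```shell
RESOURCE="my-resource"       # placeholder Azure resource name
DEPLOYMENT="my-deployment"   # placeholder deployment name
API_VERSION="2024-02-01"     # placeholder; use the version your portal shows

# Assumed shape of an Azure OpenAI chat completions URL, including api-version:
AZURE_URL="https://${RESOURCE}.openai.azure.com/openai/deployments/${DEPLOYMENT}/chat/completions?api-version=${API_VERSION}"

echo "$AZURE_URL"
```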
@@ -12,20 +12,23 @@ Some of Frigate's enrichments can use a discrete GPU or integrated GPU for accel
 
 Object detection and enrichments (like Semantic Search, Face Recognition, and License Plate Recognition) are independent features. To use a GPU / NPU for object detection, see the [Object Detectors](/configuration/object_detectors.md) documentation. If you want to use your GPU for any supported enrichments, you must choose the appropriate Frigate Docker image for your GPU / NPU and configure the enrichment according to its specific documentation.
 
 - **AMD**
 
   - ROCm support in the `-rocm` Frigate image is automatically detected for enrichments, but only some enrichment models are available due to ROCm's focus on LLMs and limited stability with certain neural network models. Frigate disables models that perform poorly or are unstable to ensure reliable operation, so only compatible enrichments may be active.
 
 - **Intel**
 
   - OpenVINO will automatically be detected and used for enrichments in the default Frigate image.
   - **Note:** Intel NPUs have limited model support for enrichments. GPU is recommended for enrichments when available.
 
 - **Nvidia**
 
   - Nvidia GPUs will automatically be detected and used for enrichments in the `-tensorrt` Frigate image.
   - Jetson devices will automatically be detected and used for enrichments in the `-tensorrt-jp6` Frigate image.
 
 - **RockChip**
   - RockChip NPU will automatically be detected and used for semantic search v1 and face recognition in the `-rk` Frigate image.
 
-Utilizing a GPU for enrichments does not require you to use the same GPU for object detection. For example, you can run the `tensorrt` Docker image to run enrichments on an Nvidia GPU and still use other dedicated hardware like a Coral or Hailo for object detection. However, one combination that is not supported is the `tensorrt` image for object detection on an Nvidia GPU and Intel iGPU for enrichments.
+Utilizing a GPU for enrichments does not require you to use the same GPU for object detection. For example, you can run the `tensorrt` Docker image for enrichments and still use other dedicated hardware like a Coral or Hailo for object detection. However, one combination that is not supported is TensorRT for object detection and OpenVINO for enrichments.
 
 :::note
 
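The vendor bullets above map each GPU / NPU to a Frigate image variant; that choice can be sketched as a small selection script. The registry path and tag names are assumptions based on the variant suffixes mentioned in the text (`-tensorrt`, `-rocm`, `-rk`); check the installation docs for the tags that actually exist for your release:

```shell
GPU_VENDOR="nvidia"   # one of: nvidia, amd, intel, rockchip

case "$GPU_VENDOR" in
  nvidia)   TAG="stable-tensorrt" ;;  # assumed tag for the -tensorrt image
  amd)      TAG="stable-rocm" ;;      # assumed tag for the -rocm image
  intel)    TAG="stable" ;;           # OpenVINO ships in the default image
  rockchip) TAG="stable-rk" ;;        # assumed tag for the -rk image
esac

echo "ghcr.io/blakeblackshear/frigate:${TAG}"
```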
@@ -29,12 +29,12 @@ cameras:
 
 When running Frigate through the HA Add-on, the Frigate `/config` directory is mapped to `/addon_configs/<addon_directory>` in the host, where `<addon_directory>` is specific to the variant of the Frigate Add-on you are running.
 
 | Add-on Variant | Configuration directory |
-| -------------------------- | ----------------------------------------- |
+| -------------------------- | -------------------------------------------- |
 | Frigate | `/addon_configs/ccab4aaf_frigate` |
 | Frigate (Full Access) | `/addon_configs/ccab4aaf_frigate-fa` |
 | Frigate Beta | `/addon_configs/ccab4aaf_frigate-beta` |
 | Frigate Beta (Full Access) | `/addon_configs/ccab4aaf_frigate-fa-beta` |
 
 **Whenever you see `/config` in the documentation, it refers to this directory.**
 
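The add-on table above can be turned into a quick check on the Home Assistant host. The directory below is the plain Frigate add-on's path from that table:

```shell
VARIANT="ccab4aaf_frigate"   # plain Frigate add-on (see the table above)
CONFIG_DIR="/addon_configs/${VARIANT}"

# On the Home Assistant host, this directory holds Frigate's config.yml:
#   ls "$CONFIG_DIR"

echo "$CONFIG_DIR"
```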
@@ -109,16 +109,15 @@ detectors:
 
 record:
   enabled: True
-  motion:
+  retain:
     days: 7
+    mode: motion
   alerts:
     retain:
       days: 30
-      mode: motion
   detections:
     retain:
       days: 30
-      mode: motion
 
 snapshots:
   enabled: True
@@ -166,16 +165,15 @@ detectors:
 
 record:
   enabled: True
-  motion:
+  retain:
     days: 7
+    mode: motion
   alerts:
     retain:
       days: 30
-      mode: motion
   detections:
     retain:
       days: 30
-      mode: motion
 
 snapshots:
   enabled: True
@@ -233,16 +231,15 @@ model:
 
 record:
   enabled: True
-  motion:
+  retain:
     days: 7
+    mode: motion
   alerts:
     retain:
       days: 30
-      mode: motion
   detections:
     retain:
       days: 30
-      mode: motion
 
 snapshots:
   enabled: True
@@ -34,7 +34,7 @@ Frigate supports multiple different detectors that work on different types of ha
 
 **Nvidia GPU**
 
-- [ONNX](#onnx): Nvidia GPUs will automatically be detected and used as a detector in the `-tensorrt` Frigate image when a supported ONNX model is configured.
+- [ONNX](#onnx): TensorRT will automatically be detected and used as a detector in the `-tensorrt` Frigate image when a supported ONNX model is configured.
 
 **Nvidia Jetson** <CommunityBadge />
 
@@ -65,7 +65,7 @@ This does not affect using hardware for accelerating other tasks such as [semant
 
 # Officially Supported Detectors
 
-Frigate provides a number of builtin detector types. By default, Frigate will use a single CPU detector. Other detectors may require additional configuration as described below. When using multiple detectors they will run in dedicated processes, but pull from a common queue of detection requests from across all cameras.
+Frigate provides the following builtin detector types: `cpu`, `edgetpu`, `hailo8l`, `memryx`, `onnx`, `openvino`, `rknn`, and `tensorrt`. By default, Frigate will use a single CPU detector. Other detectors may require additional configuration as described below. When using multiple detectors they will run in dedicated processes, but pull from a common queue of detection requests from across all cameras.
 
 ## Edge TPU Detector
 
@@ -157,13 +157,7 @@ A TensorFlow Lite model is provided in the container at `/edgetpu_model.tflite`
 
 #### YOLOv9
 
-YOLOv9 models that are compiled for TensorFlow Lite and properly quantized are supported, but not included by default. [Instructions](#yolov9-for-google-coral-support) for downloading a model with support for the Google Coral.
-
-:::tip
-
-**Frigate+ Users:** Follow the [instructions](../integrations/plus#use-models) to set a model ID in your config file.
-
-:::
+YOLOv9 models that are compiled for TensorFlow Lite and properly quantized are supported, but not included by default. [Download the model](https://github.com/dbro/frigate-detector-edgetpu-yolo9/releases/download/v1.0/yolov9-s-relu6-best_320_int8_edgetpu.tflite), bind mount the file into the container, and provide the path with `model.path`. Note that the linked model requires a 17-label [labelmap file](https://raw.githubusercontent.com/dbro/frigate-detector-edgetpu-yolo9/refs/heads/main/labels-coco17.txt) that includes only 17 COCO classes.
 
 <details>
 <summary>YOLOv9 Setup & Config</summary>
@@ -660,9 +654,11 @@ ONNX is an open format for building machine learning models, Frigate supports ru
 If the correct build is used for your GPU then the GPU will be detected and used automatically.

 - **AMD**

   - ROCm will automatically be detected and used with the ONNX detector in the `-rocm` Frigate image.

 - **Intel**

   - OpenVINO will automatically be detected and used with the ONNX detector in the default Frigate image.

 - **Nvidia**
@@ -1560,11 +1556,7 @@ cd tensorrt_demos/yolo
 python3 yolo_to_onnx.py -m yolov7-320
 ```

-#### YOLOv9 for Google Coral Support
-
-[Download the model](https://github.com/dbro/frigate-detector-edgetpu-yolo9/releases/download/v1.0/yolov9-s-relu6-best_320_int8_edgetpu.tflite), bind mount the file into the container, and provide the path with `model.path`. Note that the linked model requires a 17-label [labelmap file](https://raw.githubusercontent.com/dbro/frigate-detector-edgetpu-yolo9/refs/heads/main/labels-coco17.txt) that includes only 17 COCO classes.
-
-#### YOLOv9 for other detectors
+#### YOLOv9

 YOLOv9 model can be exported as ONNX using the command below. You can copy and paste the whole thing to your terminal and execute, altering `MODEL_SIZE=t` and `IMG_SIZE=320` in the first line to the [model size](https://github.com/WongKinYiu/yolov9#performance) you would like to convert (available model sizes are `t`, `s`, `m`, `c`, and `e`, common image sizes are `320` and `640`).
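The export step hinges on the two environment variables named in the context line above; a minimal hedged sketch of how they parameterize the conversion (the repository clone and `export.py` invocation in the comments are assumptions based on the WongKinYiu/yolov9 repository layout, not commands taken from this diff):

```shell
# The two knobs the docs call out; documented values are t/s/m/c/e and 320/640.
MODEL_SIZE=t
IMG_SIZE=320

# The full conversion would clone WongKinYiu/yolov9 and run its exporter with
# these values (command shape is an assumption, not taken from this diff):
#   git clone https://github.com/WongKinYiu/yolov9 && cd yolov9
#   python3 export.py --weights "yolov9-${MODEL_SIZE}.pt" --imgsz "${IMG_SIZE}" --include onnx

# An artifact name derived from both knobs:
echo "yolov9-${MODEL_SIZE}-${IMG_SIZE}.onnx"
```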
@@ -41,8 +41,8 @@ If the EQ13 is out of stock, the link below may take you to a suggested alternat
 | Name | Capabilities | Notes |
 | ------------------------------------------------------------------------------------------------------------- | -------------------------------------------------------------------------- | --------------------------------------------------- |
 | Beelink EQ13 (<a href="https://amzn.to/4jn2qVr" target="_blank" rel="nofollow noopener sponsored">Amazon</a>) | Can run object detection on several 1080p cameras with low-medium activity | Dual gigabit NICs for easy isolated camera network. |
-| Intel 1120p ([Amazon](https://www.amazon.com/Beelink-i3-1220P-Computer-Display-Gigabit/dp/B0DDCKT9YP)) | Can handle a large number of 1080p cameras with high activity | |
-| Intel 125H ([Amazon](https://www.amazon.com/MINISFORUM-Pro-125H-Barebone-Computer-HDMI2-1/dp/B0FH21FSZM)) | Can handle a significant number of 1080p cameras with high activity | Includes NPU for more efficient detection in 0.17+ |
+| Intel 1120p ([Amazon](https://www.amazon.com/Beelink-i3-1220P-Computer-Display-Gigabit/dp/B0DDCKT9YP) | Can handle a large number of 1080p cameras with high activity | |
+| Intel 125H ([Amazon](https://www.amazon.com/MINISFORUM-Pro-125H-Barebone-Computer-HDMI2-1/dp/B0FH21FSZM) | Can handle a significant number of 1080p cameras with high activity | Includes NPU for more efficient detection in 0.17+ |

 ## Detectors

@@ -86,7 +86,7 @@ Frigate supports multiple different detectors that work on different types of ha

 **Nvidia**

-- [Nvidia GPU](#nvidia-gpus): Nvidia GPUs can provide efficient object detection.
+- [TensortRT](#tensorrt---nvidia-gpu): TensorRT can run on Nvidia GPUs to provide efficient object detection.
   - [Supports majority of model architectures via ONNX](../../configuration/object_detectors#onnx-supported-models)
   - Runs well with any size models including large

@@ -172,7 +172,7 @@ Inference speeds vary greatly depending on the CPU or GPU used, some known examp
 | Intel Arc A380 | ~ 6 ms | | 320: ~ 10 ms 640: ~ 22 ms | 336: 20 ms 448: 27 ms | |
 | Intel Arc A750 | ~ 4 ms | | 320: ~ 8 ms | | |

-### Nvidia GPUs
+### TensorRT - Nvidia GPU

 Frigate is able to utilize an Nvidia GPU which supports the 12.x series of CUDA libraries.

@@ -182,6 +182,8 @@ Frigate is able to utilize an Nvidia GPU which supports the 12.x series of CUDA

 Make sure your host system has the [nvidia-container-runtime](https://docs.docker.com/config/containers/resource_constraints/#access-an-nvidia-gpu) installed to pass through the GPU to the container and the host system has a compatible driver installed for your GPU.

+There are improved capabilities in newer GPU architectures that TensorRT can benefit from, such as INT8 operations and Tensor cores. The features compatible with your hardware will be optimized when the model is converted to a trt file. Currently the script provided for generating the model provides a switch to enable/disable FP16 operations. If you wish to use newer features such as INT8 optimization, more work is required.
+
 #### Compatibility References:

 [NVIDIA TensorRT Support Matrix](https://docs.nvidia.com/deeplearning/tensorrt-rtx/latest/getting-started/support-matrix.html)
@@ -190,7 +192,7 @@ Make sure your host system has the [nvidia-container-runtime](https://docs.docke

 [NVIDIA GPU Compute Capability](https://developer.nvidia.com/cuda-gpus)

-Inference is done with the `onnx` detector type. Speeds will vary greatly depending on the GPU and the model used.
+Inference speeds will vary greatly depending on the GPU and the model used.
 `tiny (t)` variants are faster than the equivalent non-tiny model, some known examples are below:

 ✅ - Accelerated with CUDA Graphs
@@ -56,7 +56,7 @@ services:
     volumes:
       - /path/to/your/config:/config
       - /path/to/your/storage:/media/frigate
-      - type: tmpfs # 1GB In-memory filesystem for recording segment storage
+      - type: tmpfs # Recommended: 1GB of memory
        target: /tmp/cache
        tmpfs:
          size: 1000000000
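Both sides of the hunk above describe the same tmpfs cache mount; assembled as a standalone compose fragment (paths illustrative), the nesting looks like:

```yaml
services:
  frigate:
    volumes:
      - /path/to/your/config:/config
      - /path/to/your/storage:/media/frigate
      # In-memory filesystem for recording segments; ~1GB is the documented guidance.
      - type: tmpfs
        target: /tmp/cache
        tmpfs:
          size: 1000000000 # bytes, i.e. ~1GB
```

Keeping pre-record segments in memory avoids constant small writes, which is why the Raspberry Pi variant of this comment later in the diff notes it also reduces SSD/SD card wear.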
@@ -153,7 +153,6 @@ On Raspberry Pi OS **Trixie**, the Hailo driver is no longer shipped with the ke
 ```
 /lib/modules/6.6.31+rpt-rpi-2712/kernel/drivers/media/pci/hailo/hailo_pci.ko.xz
 ```

 Save the module path to a variable:

 ```bash
@@ -207,6 +206,7 @@ On Raspberry Pi OS **Trixie**, the Hailo driver is no longer shipped with the ke
 ```

 The script will:

 - Install necessary build dependencies
 - Clone and build the Hailo driver from the official repository
 - Install the driver
@@ -247,7 +247,7 @@ On Raspberry Pi OS **Trixie**, the Hailo driver is no longer shipped with the ke
 ls -l /lib/firmware/hailo/hailo8_fw.bin
 ```

 **Optional: Fix PCIe descriptor page size error**

 If you encounter the following error:

@@ -462,7 +462,7 @@ services:
       - /etc/localtime:/etc/localtime:ro
       - /path/to/your/config:/config
       - /path/to/your/storage:/media/frigate
-      - type: tmpfs # 1GB In-memory filesystem for recording segment storage
+      - type: tmpfs # Recommended: 1GB of memory
        target: /tmp/cache
        tmpfs:
          size: 1000000000
@@ -502,12 +502,12 @@ The official docker image tags for the current stable version are:

 - `stable` - Standard Frigate build for amd64 & RPi Optimized Frigate build for arm64. This build includes support for Hailo devices as well.
 - `stable-standard-arm64` - Standard Frigate build for arm64
-- `stable-tensorrt` - Frigate build specific for amd64 devices running an Nvidia GPU
+- `stable-tensorrt` - Frigate build specific for amd64 devices running an nvidia GPU
 - `stable-rocm` - Frigate build for [AMD GPUs](../configuration/object_detectors.md#amdrocm-gpu-detector)

 The community supported docker image tags for the current stable version are:

-- `stable-tensorrt-jp6` - Frigate build optimized for Nvidia Jetson devices running Jetpack 6
+- `stable-tensorrt-jp6` - Frigate build optimized for nvidia Jetson devices running Jetpack 6
 - `stable-rk` - Frigate build for SBCs with Rockchip SoC

 ## Home Assistant Add-on
@@ -521,7 +521,7 @@ There are important limitations in HA OS to be aware of:
 - Separate local storage for media is not yet supported by Home Assistant
 - AMD GPUs are not supported because HA OS does not include the mesa driver.
 - Intel NPUs are not supported because HA OS does not include the NPU firmware.
-- Nvidia GPUs are not supported because addons do not support the Nvidia runtime.
+- Nvidia GPUs are not supported because addons do not support the nvidia runtime.

 :::

@@ -694,7 +694,7 @@ Log into QNAP, open Container Station. Frigate docker container should be listed

 :::warning

 macOS uses port 5000 for its Airplay Receiver service. If you want to expose port 5000 in Frigate for local app and API access the port will need to be mapped to another port on the host e.g. 5001

 Failure to remap port 5000 on the host will result in the WebUI and all API endpoints on port 5000 being unreachable, even if port 5000 is exposed correctly in Docker.

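The port-conflict note above implies a remapping like the following compose fragment (a sketch; host port 5001 is just the example the text gives):

```yaml
services:
  frigate:
    ports:
      # Host port 5001 avoids macOS's AirPlay Receiver, which holds port 5000.
      - "5001:5000"
```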
@@ -705,7 +705,6 @@ Docker containers on macOS can be orchestrated by either [Docker Desktop](https:
 To allow Frigate to use the Apple Silicon Neural Engine / Processing Unit (NPU) the host must be running [Apple Silicon Detector](../configuration/object_detectors.md#apple-silicon-detector) on the host (outside Docker)

 #### Docker Compose example

 ```yaml
 services:
   frigate:
@@ -20,6 +20,7 @@ Keeping Frigate up to date ensures you benefit from the latest features, perform
 If you’re running Frigate via Docker (recommended method), follow these steps:

 1. **Stop the Container**:

    - If using Docker Compose:
      ```bash
      docker compose down frigate
@@ -30,8 +31,9 @@ If you’re running Frigate via Docker (recommended method), follow these steps:
      ```

 2. **Update and Pull the Latest Image**:

    - If using Docker Compose:
-      - Edit your `docker-compose.yml` file to specify the desired version tag (e.g., `0.17.0` instead of `0.16.4`). For example:
+      - Edit your `docker-compose.yml` file to specify the desired version tag (e.g., `0.17.0` instead of `0.16.3`). For example:
        ```yaml
        services:
          frigate:
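Step 2's tag edit amounts to pinning the image line in `docker-compose.yml`; a minimal sketch (tag value is the example from the text):

```yaml
services:
  frigate:
    # Pin an explicit version tag rather than a floating one so upgrades are deliberate.
    image: ghcr.io/blakeblackshear/frigate:0.17.0
```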
@@ -49,6 +51,7 @@ If you’re running Frigate via Docker (recommended method), follow these steps:
      ```

 3. **Start the Container**:

    - If using Docker Compose:
      ```bash
      docker compose up -d
@@ -72,15 +75,18 @@ If you’re running Frigate via Docker (recommended method), follow these steps:
 For users running Frigate as a Home Assistant Addon:

 1. **Check for Updates**:

    - Navigate to **Settings > Add-ons** in Home Assistant.
    - Find your installed Frigate addon (e.g., "Frigate NVR" or "Frigate NVR (Full Access)").
    - If an update is available, you’ll see an "Update" button.

 2. **Update the Addon**:

    - Click the "Update" button next to the Frigate addon.
    - Wait for the process to complete. Home Assistant will handle downloading and installing the new version.

 3. **Restart the Addon**:

    - After updating, go to the addon’s page and click "Restart" to apply the changes.

 4. **Verify the Update**:
@@ -99,8 +105,8 @@ If an update causes issues:
 1. Stop Frigate.
 2. Restore your backed-up config file and database.
 3. Revert to the previous image version:
-   - For Docker: Specify an older tag (e.g., `ghcr.io/blakeblackshear/frigate:0.16.4`) in your `docker run` command.
-   - For Docker Compose: Edit your `docker-compose.yml`, specify the older version tag (e.g., `ghcr.io/blakeblackshear/frigate:0.16.4`), and re-run `docker compose up -d`.
+   - For Docker: Specify an older tag (e.g., `ghcr.io/blakeblackshear/frigate:0.16.3`) in your `docker run` command.
+   - For Docker Compose: Edit your `docker-compose.yml`, specify the older version tag (e.g., `ghcr.io/blakeblackshear/frigate:0.16.3`), and re-run `docker compose up -d`.
    - For Home Assistant: Reinstall the previous addon version manually via the repository if needed and restart the addon.
 4. Verify the old version is running again.

@@ -119,7 +119,7 @@ services:
     volumes:
       - ./config:/config
       - ./storage:/media/frigate
-      - type: tmpfs # 1GB In-memory filesystem for recording segment storage
+      - type: tmpfs # Optional: 1GB of memory, reduces SSD/SD Card wear
        target: /tmp/cache
        tmpfs:
          size: 1000000000
@@ -54,8 +54,6 @@ Once you have [requested your first model](../plus/first_model.md) and gotten yo
 You can either choose the new model from the Frigate+ pane in the Settings page of the Frigate UI, or manually set the model at the root level in your config:

 ```yaml
-detectors: ...
-
 model:
   path: plus://<your_model_id>
 ```
@@ -24,8 +24,6 @@ You will receive an email notification when your Frigate+ model is ready.
 Models available in Frigate+ can be used with a special model path. No other information needs to be configured because it fetches the remaining config from Frigate+ automatically.

 ```yaml
-detectors: ...
-
 model:
   path: plus://<your_model_id>
 ```
@@ -15,15 +15,15 @@ There are three model types offered in Frigate+, `mobiledet`, `yolonas`, and `yo

 Not all model types are supported by all detectors, so it's important to choose a model type to match your detector as shown in the table under [supported detector types](#supported-detector-types). You can test model types for compatibility and speed on your hardware by using the base models.

 | Model Type | Description |
-| ----------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------- |
+| ----------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
 | `mobiledet` | Based on the same architecture as the default model included with Frigate. Runs on Google Coral devices and CPUs. |
 | `yolonas` | A newer architecture that offers slightly higher accuracy and improved detection of small objects. Runs on Intel, NVidia GPUs, and AMD GPUs. |
-| `yolov9` | A leading SOTA (state of the art) object detection model with similar performance to yolonas, but on a wider range of hardware options. Runs on most hardware. |
+| `yolov9` | A leading SOTA (state of the art) object detection model with similar performance to yolonas, but on a wider range of hardware options. Runs on Intel, NVidia GPUs, AMD GPUs, Hailo, MemryX, Apple Silicon, and Rockchip NPUs. |

 ### YOLOv9 Details

-YOLOv9 models are available in `s`, `t`, `edgetpu` variants. When requesting a `yolov9` model, you will be prompted to choose a variant. If you want the model to be compatible with a Google Coral, you will need to choose the `edgetpu` variant. If you are unsure what variant to choose, you should perform some tests with the base models to find the performance level that suits you. The `s` size is most similar to the current `yolonas` models in terms of inference times and accuracy, and a good place to start is the `320x320` resolution model for `yolov9s`.
+YOLOv9 models are available in `s` and `t` sizes. When requesting a `yolov9` model, you will be prompted to choose a size. If you are unsure what size to choose, you should perform some tests with the base models to find the performance level that suits you. The `s` size is most similar to the current `yolonas` models in terms of inference times and accuracy, and a good place to start is the `320x320` resolution model for `yolov9s`.

 :::info

@@ -37,21 +37,23 @@ If you have a Hailo device, you will need to specify the hardware you have when

 #### Rockchip (RKNN) Support

-Rockchip models are automatically converted as of 0.17. For 0.16, YOLOv9 onnx models will need to be manually converted. First, you will need to configure Frigate to use the model id for your YOLOv9 onnx model so it downloads the model to your `model_cache` directory. From there, you can follow the [documentation](/configuration/object_detectors.md#converting-your-own-onnx-model-to-rknn-format) to convert it.
+For 0.16, YOLOv9 onnx models will need to be manually converted. First, you will need to configure Frigate to use the model id for your YOLOv9 onnx model so it downloads the model to your `model_cache` directory. From there, you can follow the [documentation](/configuration/object_detectors.md#converting-your-own-onnx-model-to-rknn-format) to convert it. Automatic conversion is available in 0.17 and later.

 ## Supported detector types

-Currently, Frigate+ models support CPU (`cpu`), Google Coral (`edgetpu`), OpenVino (`openvino`), ONNX (`onnx`), Hailo (`hailo8l`), and Rockchip (`rknn`) detectors.
+Currently, Frigate+ models support CPU (`cpu`), Google Coral (`edgetpu`), OpenVino (`openvino`), ONNX (`onnx`), Hailo (`hailo8l`), and Rockchip\* (`rknn`) detectors.

 | Hardware | Recommended Detector Type | Recommended Model Type |
 | -------------------------------------------------------------------------------- | ------------------------- | ---------------------- |
 | [CPU](/configuration/object_detectors.md#cpu-detector-not-recommended) | `cpu` | `mobiledet` |
-| [Coral (all form factors)](/configuration/object_detectors.md#edge-tpu-detector) | `edgetpu` | `yolov9` |
+| [Coral (all form factors)](/configuration/object_detectors.md#edge-tpu-detector) | `edgetpu` | `mobiledet` |
 | [Intel](/configuration/object_detectors.md#openvino-detector) | `openvino` | `yolov9` |
 | [NVidia GPU](/configuration/object_detectors#onnx) | `onnx` | `yolov9` |
 | [AMD ROCm GPU](/configuration/object_detectors#amdrocm-gpu-detector) | `onnx` | `yolov9` |
 | [Hailo8/Hailo8L/Hailo8R](/configuration/object_detectors#hailo-8) | `hailo8l` | `yolov9` |
-| [Rockchip NPU](/configuration/object_detectors#rockchip-platform) | `rknn` | `yolov9` |
+| [Rockchip NPU](/configuration/object_detectors#rockchip-platform)\* | `rknn` | `yolov9` |

+_\* Requires manual conversion in 0.16. Automatic conversion available in 0.17 and later._
+
 ## Improving your model

@@ -79,7 +81,7 @@ Candidate labels are also available for annotation. These labels don't have enou

 Where possible, these labels are mapped to existing labels during training. For example, any `baby` labels are mapped to `person` until support for new labels is added.

-The candidate labels are: `baby`, `bpost`, `badger`, `possum`, `rodent`, `chicken`, `groundhog`, `boar`, `hedgehog`, `tractor`, `golf cart`, `garbage truck`, `bus`, `sports ball`, `la_poste`, `lawnmower`, `heron`, `rickshaw`, `wombat`, `auspost`, `aramex`, `bobcat`, `mustelid`, `transoflex`, `airplane`, `drone`, `mountain_lion`, `crocodile`, `turkey`, `baby_stroller`, `monkey`, `coyote`, `porcupine`, `parcelforce`, `sheep`, `snake`, `helicopter`, `lizard`, `duck`, `hermes`, `cargus`, `fan_courier`, `sameday`
+The candidate labels are: `baby`, `bpost`, `badger`, `possum`, `rodent`, `chicken`, `groundhog`, `boar`, `hedgehog`, `tractor`, `golf cart`, `garbage truck`, `bus`, `sports ball`

 Candidate labels are not available for automatic suggestions.

@@ -49,12 +49,10 @@ from frigate.types import JobStatusTypesEnum
 from frigate.util.builtin import (
     clean_camera_user_pass,
     flatten_config_data,
-    load_labels,
     process_config_query_string,
     update_yaml_file_bulk,
 )
 from frigate.util.config import find_config_file
-from frigate.util.schema import get_config_schema
 from frigate.util.services import (
     get_nvidia_driver_info,
     process_logs,
@@ -79,7 +77,9 @@ def is_healthy():

 @router.get("/config/schema.json", dependencies=[Depends(allow_public())])
 def config_schema(request: Request):
-    return JSONResponse(content=get_config_schema(FrigateConfig))
+    return Response(
+        content=request.app.frigate_config.schema_json(), media_type="application/json"
+    )


 @router.get(
@@ -125,10 +125,6 @@ def config(request: Request):
     config: dict[str, dict[str, Any]] = config_obj.model_dump(
         mode="json", warnings="none", exclude_none=True
     )
-    config["detectors"] = {
-        name: detector.model_dump(mode="json", warnings="none", exclude_none=True)
-        for name, detector in config_obj.detectors.items()
-    }

     # remove the mqtt password
     config["mqtt"].pop("password", None)
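The removed block above re-serializes each configured detector into the response dict; a self-contained sketch of the same dict-comprehension pattern, with a plain class standing in for the Pydantic model (`FakeDetector` and its fields are hypothetical):

```python
class FakeDetector:
    """Stand-in for a Pydantic detector config (hypothetical)."""

    def __init__(self, detector_type: str, device: str) -> None:
        self.detector_type = detector_type
        self.device = device

    def model_dump(self) -> dict:
        # Pydantic's model_dump returns a plain dict of the model's fields.
        return {"type": self.detector_type, "device": self.device}


detectors = {
    "coral": FakeDetector("edgetpu", "usb"),
    "ov": FakeDetector("openvino", "GPU"),
}

# Same shape as the removed code: one serialized dict per configured detector.
config = {"detectors": {name: det.model_dump() for name, det in detectors.items()}}
print(config["detectors"]["coral"]["type"])  # → edgetpu
```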
@@ -199,54 +195,6 @@ def config(request: Request):
     return JSONResponse(content=config)


-@router.get("/ffmpeg/presets", dependencies=[Depends(allow_any_authenticated())])
-def ffmpeg_presets():
-    """Return available ffmpeg preset keys for config UI usage."""
-
-    # Whitelist based on documented presets in ffmpeg_presets.md
-    hwaccel_presets = [
-        "preset-rpi-64-h264",
-        "preset-rpi-64-h265",
-        "preset-vaapi",
-        "preset-intel-qsv-h264",
-        "preset-intel-qsv-h265",
-        "preset-nvidia",
-        "preset-jetson-h264",
-        "preset-jetson-h265",
-        "preset-rkmpp",
-    ]
-    input_presets = [
-        "preset-http-jpeg-generic",
-        "preset-http-mjpeg-generic",
-        "preset-http-reolink",
-        "preset-rtmp-generic",
-        "preset-rtsp-generic",
-        "preset-rtsp-restream",
-        "preset-rtsp-restream-low-latency",
-        "preset-rtsp-udp",
-        "preset-rtsp-blue-iris",
-    ]
-    record_output_presets = [
-        "preset-record-generic",
-        "preset-record-generic-audio-copy",
-        "preset-record-generic-audio-aac",
-        "preset-record-mjpeg",
-        "preset-record-jpeg",
-        "preset-record-ubiquiti",
-    ]
-
-    return JSONResponse(
-        content={
-            "hwaccel_args": hwaccel_presets,
-            "input_args": input_presets,
-            "output_args": {
-                "record": record_output_presets,
-                "detect": [],
-            },
-        }
-    )
-
-
 @router.get("/config/raw_paths", dependencies=[Depends(require_role(["admin"]))])
 def config_raw_paths(request: Request):
     """Admin-only endpoint that returns camera paths and go2rtc streams without credential masking."""
@@ -484,7 +432,6 @@ def config_set(request: Request, body: AppConfigSetBody):
     if body.requires_restart == 0 or body.update_topic:
         old_config: FrigateConfig = request.app.frigate_config
         request.app.frigate_config = config
-        request.app.genai_manager.update_config(config)
 
         if body.update_topic:
             if body.update_topic.startswith("config/cameras/"):
@@ -807,12 +754,6 @@ def get_sub_labels(split_joined: Optional[int] = None):
     return JSONResponse(content=sub_labels)
 
 
-@router.get("/audio_labels", dependencies=[Depends(allow_any_authenticated())])
-def get_audio_labels():
-    labels = load_labels("/audio-labelmap.txt", prefill=521)
-    return JSONResponse(content=labels)
-
-
 @router.get("/plus/models", dependencies=[Depends(allow_any_authenticated())])
 def plusModels(request: Request, filterByCurrentModelDetector: bool = False):
     if not request.app.frigate_config.plus_api.is_active():
@@ -3,13 +3,12 @@
 import base64
 import json
 import logging
-import time
-from datetime import datetime
-from typing import Any, Dict, Generator, List, Optional
+from datetime import datetime, timezone
+from typing import Any, Dict, List, Optional
 
 import cv2
 from fastapi import APIRouter, Body, Depends, Request
-from fastapi.responses import JSONResponse, StreamingResponse
+from fastapi.responses import JSONResponse
 from pydantic import BaseModel
 
 from frigate.api.auth import (
@@ -21,60 +20,16 @@ from frigate.api.defs.request.chat_body import ChatCompletionRequest
 from frigate.api.defs.response.chat_response import (
     ChatCompletionResponse,
     ChatMessageResponse,
-    ToolCall,
 )
 from frigate.api.defs.tags import Tags
 from frigate.api.event import events
-from frigate.genai.utils import build_assistant_message_for_conversation
+from frigate.genai import get_genai_client
 
 logger = logging.getLogger(__name__)
 
 router = APIRouter(tags=[Tags.chat])
 
 
-def _chunk_content(content: str, chunk_size: int = 80) -> Generator[str, None, None]:
-    """Yield content in word-aware chunks for streaming."""
-    if not content:
-        return
-    words = content.split(" ")
-    current: List[str] = []
-    current_len = 0
-    for w in words:
-        current.append(w)
-        current_len += len(w) + 1
-        if current_len >= chunk_size:
-            yield " ".join(current) + " "
-            current = []
-            current_len = 0
-    if current:
-        yield " ".join(current)
-
-
-def _format_events_with_local_time(
-    events_list: List[Dict[str, Any]],
-) -> List[Dict[str, Any]]:
-    """Add human-readable local start/end times to each event for the LLM."""
-    result = []
-    for evt in events_list:
-        if not isinstance(evt, dict):
-            result.append(evt)
-            continue
-        copy_evt = dict(evt)
-        try:
-            start_ts = evt.get("start_time")
-            end_ts = evt.get("end_time")
-            if start_ts is not None:
-                dt_start = datetime.fromtimestamp(start_ts)
-                copy_evt["start_time_local"] = dt_start.strftime("%Y-%m-%d %I:%M:%S %p")
-            if end_ts is not None:
-                dt_end = datetime.fromtimestamp(end_ts)
-                copy_evt["end_time_local"] = dt_end.strftime("%Y-%m-%d %I:%M:%S %p")
-        except (TypeError, ValueError, OSError):
-            pass
-        result.append(copy_evt)
-    return result
-
-
 class ToolExecuteRequest(BaseModel):
     """Request model for tool execution."""
 
@@ -98,25 +53,19 @@ def get_tool_definitions() -> List[Dict[str, Any]]:
                 "Search for detected objects in Frigate by camera, object label, time range, "
                 "zones, and other filters. Use this to answer questions about when "
                 "objects were detected, what objects appeared, or to find specific object detections. "
-                "An 'object' in Frigate represents a tracked detection (e.g., a person, package, car). "
-                "When the user asks about a specific name (person, delivery company, animal, etc.), "
-                "filter by sub_label only and do not set label."
+                "An 'object' in Frigate represents a tracked detection (e.g., a person, package, car)."
             ),
             "parameters": {
                 "type": "object",
                 "properties": {
                     "camera": {
                         "type": "string",
-                        "description": "Camera name to filter by (optional).",
+                        "description": "Camera name to filter by (optional). Use 'all' for all cameras.",
                     },
                     "label": {
                         "type": "string",
                         "description": "Object label to filter by (e.g., 'person', 'package', 'car').",
                     },
-                    "sub_label": {
-                        "type": "string",
-                        "description": "Name of a person, delivery company, animal, etc. When filtering by a specific name, use only sub_label; do not set label.",
-                    },
                     "after": {
                         "type": "string",
                         "description": "Start time in ISO 8601 format (e.g., '2024-01-01T00:00:00Z').",
@@ -132,8 +81,8 @@ def get_tool_definitions() -> List[Dict[str, Any]]:
                     },
                     "limit": {
                         "type": "integer",
-                        "description": "Maximum number of objects to return (default: 25).",
-                        "default": 25,
+                        "description": "Maximum number of objects to return (default: 10).",
+                        "default": 10,
                     },
                 },
             },
@@ -171,13 +120,14 @@ def get_tool_definitions() -> List[Dict[str, Any]]:
     summary="Get available tools",
     description="Returns OpenAI-compatible tool definitions for function calling.",
 )
-def get_tools() -> JSONResponse:
+def get_tools(request: Request) -> JSONResponse:
     """Get list of available tools for LLM function calling."""
     tools = get_tool_definitions()
     return JSONResponse(content={"tools": tools})
 
 
 async def _execute_search_objects(
+    request: Request,
     arguments: Dict[str, Any],
     allowed_cameras: List[str],
 ) -> JSONResponse:
@@ -187,26 +137,23 @@ async def _execute_search_objects(
     This searches for detected objects (events) in Frigate using the same
     logic as the events API endpoint.
     """
-    # Parse after/before as server local time; convert to Unix timestamp
+    # Parse ISO 8601 timestamps to Unix timestamps if provided
     after = arguments.get("after")
     before = arguments.get("before")
 
-    def _parse_as_local_timestamp(s: str):
-        s = s.replace("Z", "").strip()[:19]
-        dt = datetime.strptime(s, "%Y-%m-%dT%H:%M:%S")
-        return time.mktime(dt.timetuple())
-
     if after:
         try:
-            after = _parse_as_local_timestamp(after)
-        except (ValueError, AttributeError, TypeError):
+            after_dt = datetime.fromisoformat(after.replace("Z", "+00:00"))
+            after = after_dt.timestamp()
+        except (ValueError, AttributeError):
             logger.warning(f"Invalid 'after' timestamp format: {after}")
             after = None
 
     if before:
         try:
-            before = _parse_as_local_timestamp(before)
-        except (ValueError, AttributeError, TypeError):
+            before_dt = datetime.fromisoformat(before.replace("Z", "+00:00"))
+            before = before_dt.timestamp()
+        except (ValueError, AttributeError):
             logger.warning(f"Invalid 'before' timestamp format: {before}")
             before = None
 
@@ -219,14 +166,15 @@ async def _execute_search_objects(
 
     # Build query parameters compatible with EventsQueryParams
     query_params = EventsQueryParams(
+        camera=arguments.get("camera", "all"),
         cameras=arguments.get("camera", "all"),
+        label=arguments.get("label", "all"),
         labels=arguments.get("label", "all"),
-        sub_labels=arguments.get("sub_label", "all").lower(),
         zones=zones,
         zone=zones,
         after=after,
         before=before,
-        limit=arguments.get("limit", 25),
+        limit=arguments.get("limit", 10),
     )
 
     try:
@@ -242,7 +190,7 @@ async def _execute_search_objects(
         return JSONResponse(
             content={
                 "success": False,
-                "message": "Error searching objects",
+                "message": f"Error searching objects: {str(e)}",
             },
             status_code=500,
         )
@@ -255,6 +203,7 @@ async def _execute_search_objects(
     description="Execute a tool function call from an LLM.",
 )
 async def execute_tool(
+    request: Request,
     body: ToolExecuteRequest = Body(...),
     allowed_cameras: List[str] = Depends(get_allowed_cameras_for_filter),
 ) -> JSONResponse:
@@ -270,7 +219,7 @@ async def execute_tool(
     logger.debug(f"Executing tool: {tool_name} with arguments: {arguments}")
 
     if tool_name == "search_objects":
-        return await _execute_search_objects(arguments, allowed_cameras)
+        return await _execute_search_objects(request, arguments, allowed_cameras)
 
     return JSONResponse(
         content={
@@ -330,7 +279,7 @@ async def _execute_get_live_context(
     except Exception as e:
         logger.error(f"Error executing get_live_context: {e}", exc_info=True)
         return {
-            "error": "Error getting live context",
+            "error": f"Error getting live context: {str(e)}",
         }
 
 
@@ -386,7 +335,7 @@ async def _execute_tool_internal(
     This is used by the chat completion endpoint to execute tools.
     """
    if tool_name == "search_objects":
-        response = await _execute_search_objects(arguments, allowed_cameras)
+        response = await _execute_search_objects(request, arguments, allowed_cameras)
         try:
             if hasattr(response, "body"):
                 body_str = response.body.decode("utf-8")
@@ -401,109 +350,15 @@ async def _execute_tool_internal(
     elif tool_name == "get_live_context":
         camera = arguments.get("camera")
         if not camera:
-            logger.error(
-                "Tool get_live_context failed: camera parameter is required. "
-                "Arguments: %s",
-                json.dumps(arguments),
-            )
             return {"error": "Camera parameter is required"}
         return await _execute_get_live_context(request, camera, allowed_cameras)
     else:
-        logger.error(
-            "Tool call failed: unknown tool %r. Expected one of: search_objects, get_live_context. "
-            "Arguments received: %s",
-            tool_name,
-            json.dumps(arguments),
-        )
         return {"error": f"Unknown tool: {tool_name}"}
 
 
-async def _execute_pending_tools(
-    pending_tool_calls: List[Dict[str, Any]],
-    request: Request,
-    allowed_cameras: List[str],
-) -> tuple[List[ToolCall], List[Dict[str, Any]]]:
-    """
-    Execute a list of tool calls; return (ToolCall list for API response, tool result dicts for conversation).
-    """
-    tool_calls_out: List[ToolCall] = []
-    tool_results: List[Dict[str, Any]] = []
-    for tool_call in pending_tool_calls:
-        tool_name = tool_call["name"]
-        tool_args = tool_call.get("arguments") or {}
-        tool_call_id = tool_call["id"]
-        logger.debug(
-            f"Executing tool: {tool_name} (id: {tool_call_id}) with arguments: {json.dumps(tool_args, indent=2)}"
-        )
-        try:
-            tool_result = await _execute_tool_internal(
-                tool_name, tool_args, request, allowed_cameras
-            )
-            if isinstance(tool_result, dict) and tool_result.get("error"):
-                logger.error(
-                    "Tool call %s (id: %s) returned error: %s. Arguments: %s",
-                    tool_name,
-                    tool_call_id,
-                    tool_result.get("error"),
-                    json.dumps(tool_args),
-                )
-            if tool_name == "search_objects" and isinstance(tool_result, list):
-                tool_result = _format_events_with_local_time(tool_result)
-                _keys = {
-                    "id",
-                    "camera",
-                    "label",
-                    "zones",
-                    "start_time_local",
-                    "end_time_local",
-                    "sub_label",
-                    "event_count",
-                }
-                tool_result = [
-                    {k: evt[k] for k in _keys if k in evt}
-                    for evt in tool_result
-                    if isinstance(evt, dict)
-                ]
-            result_content = (
-                json.dumps(tool_result)
-                if isinstance(tool_result, (dict, list))
-                else (tool_result if isinstance(tool_result, str) else str(tool_result))
-            )
-            tool_calls_out.append(
-                ToolCall(name=tool_name, arguments=tool_args, response=result_content)
-            )
-            tool_results.append(
-                {
-                    "role": "tool",
-                    "tool_call_id": tool_call_id,
-                    "content": result_content,
-                }
-            )
-        except Exception as e:
-            logger.error(
-                "Error executing tool %s (id: %s): %s. Arguments: %s",
-                tool_name,
-                tool_call_id,
-                e,
-                json.dumps(tool_args),
-                exc_info=True,
-            )
-            error_content = json.dumps({"error": f"Tool execution failed: {str(e)}"})
-            tool_calls_out.append(
-                ToolCall(name=tool_name, arguments=tool_args, response=error_content)
-            )
-            tool_results.append(
-                {
-                    "role": "tool",
-                    "tool_call_id": tool_call_id,
-                    "content": error_content,
-                }
-            )
-    return (tool_calls_out, tool_results)
-
-
 @router.post(
     "/chat/completion",
+    response_model=ChatCompletionResponse,
     dependencies=[Depends(allow_any_authenticated())],
     summary="Chat completion with tool calling",
     description=(
@@ -515,7 +370,7 @@ async def chat_completion(
     request: Request,
     body: ChatCompletionRequest = Body(...),
     allowed_cameras: List[str] = Depends(get_allowed_cameras_for_filter),
-):
+) -> JSONResponse:
     """
     Chat completion endpoint with tool calling support.
 
@@ -528,7 +383,7 @@ async def chat_completion(
     6. Repeats until final answer
     7. Returns response to user
     """
-    genai_client = request.app.genai_manager.tool_client
+    genai_client = get_genai_client(request.app.frigate_config)
     if not genai_client:
         return JSONResponse(
             content={
@@ -540,9 +395,9 @@ async def chat_completion(
     tools = get_tool_definitions()
     conversation = []
 
-    current_datetime = datetime.now()
+    current_datetime = datetime.now(timezone.utc)
     current_date_str = current_datetime.strftime("%Y-%m-%d")
-    current_time_str = current_datetime.strftime("%I:%M:%S %p")
+    current_time_str = current_datetime.strftime("%H:%M:%S %Z")
 
     cameras_info = []
     config = request.app.frigate_config
@@ -575,12 +430,9 @@ async def chat_completion(
 
     system_prompt = f"""You are a helpful assistant for Frigate, a security camera NVR system. You help users answer questions about their cameras, detected objects, and events.
 
-Current server local date and time: {current_date_str} at {current_time_str}
+Current date and time: {current_date_str} at {current_time_str} (UTC)
 
-Do not start your response with phrases like "I will check...", "Let me see...", or "Let me look...". Answer directly.
-
-Always present times to the user in the server's local timezone. When tool results include start_time_local and end_time_local, use those exact strings when listing or describing detection times—do not convert or invent timestamps. Do not use UTC or ISO format with Z for the user-facing answer unless the tool result only provides Unix timestamps without local time fields.
-When users ask about "today", "yesterday", "this week", etc., use the current date above as reference.
+When users ask questions about "today", "yesterday", "this week", etc., use the current date above as reference.
 When searching for objects or events, use ISO 8601 format for dates (e.g., {current_date_str}T00:00:00Z for the start of today).
 Always be accurate with time calculations based on the current date provided.{cameras_section}{live_image_note}"""
@@ -620,7 +472,6 @@ Always be accurate with time calculations based on the current date provided.{ca
         conversation.append(msg_dict)
 
     tool_iterations = 0
-    tool_calls: List[ToolCall] = []
     max_iterations = body.max_tool_iterations
 
     logger.debug(
@@ -628,81 +479,6 @@ Always be accurate with time calculations based on the current date provided.{ca
         f"{len(tools)} tool(s) available, max_iterations={max_iterations}"
     )
 
-    # True LLM streaming when client supports it and stream requested
-    if body.stream and hasattr(genai_client, "chat_with_tools_stream"):
-        stream_tool_calls: List[ToolCall] = []
-        stream_iterations = 0
-
-        async def stream_body_llm():
-            nonlocal conversation, stream_tool_calls, stream_iterations
-            while stream_iterations < max_iterations:
-                logger.debug(
-                    f"Streaming LLM (iteration {stream_iterations + 1}/{max_iterations}) "
-                    f"with {len(conversation)} message(s)"
-                )
-                async for event in genai_client.chat_with_tools_stream(
-                    messages=conversation,
-                    tools=tools if tools else None,
-                    tool_choice="auto",
-                ):
-                    kind, value = event
-                    if kind == "content_delta":
-                        yield (
-                            json.dumps({"type": "content", "delta": value}).encode(
-                                "utf-8"
-                            )
-                            + b"\n"
-                        )
-                    elif kind == "message":
-                        msg = value
-                        if msg.get("finish_reason") == "error":
-                            yield (
-                                json.dumps(
-                                    {
-                                        "type": "error",
-                                        "error": "An error occurred while processing your request.",
-                                    }
-                                ).encode("utf-8")
-                                + b"\n"
-                            )
-                            return
-                        pending = msg.get("tool_calls")
-                        if pending:
-                            stream_iterations += 1
-                            conversation.append(
-                                build_assistant_message_for_conversation(
-                                    msg.get("content"), pending
-                                )
-                            )
-                            executed_calls, tool_results = await _execute_pending_tools(
-                                pending, request, allowed_cameras
-                            )
-                            stream_tool_calls.extend(executed_calls)
-                            conversation.extend(tool_results)
-                            yield (
-                                json.dumps(
-                                    {
-                                        "type": "tool_calls",
-                                        "tool_calls": [
-                                            tc.model_dump() for tc in stream_tool_calls
-                                        ],
-                                    }
-                                ).encode("utf-8")
-                                + b"\n"
-                            )
-                            break
-                        else:
-                            yield (json.dumps({"type": "done"}).encode("utf-8") + b"\n")
-                            return
-            else:
-                yield json.dumps({"type": "done"}).encode("utf-8") + b"\n"
-
-        return StreamingResponse(
-            stream_body_llm(),
-            media_type="application/x-ndjson",
-            headers={"X-Accel-Buffering": "no"},
-        )
-
     try:
         while tool_iterations < max_iterations:
             logger.debug(
@@ -724,71 +500,119 @@ Always be accurate with time calculations based on the current date provided.{ca
                     status_code=500,
                 )
 
-            conversation.append(
-                build_assistant_message_for_conversation(
-                    response.get("content"), response.get("tool_calls")
-                )
-            )
+            assistant_message = {
+                "role": "assistant",
+                "content": response.get("content"),
+            }
+            if response.get("tool_calls"):
+                assistant_message["tool_calls"] = [
+                    {
+                        "id": tc["id"],
+                        "type": "function",
+                        "function": {
+                            "name": tc["name"],
+                            "arguments": json.dumps(tc["arguments"]),
+                        },
+                    }
+                    for tc in response["tool_calls"]
+                ]
+            conversation.append(assistant_message)
 
-            pending_tool_calls = response.get("tool_calls")
-            if not pending_tool_calls:
+            tool_calls = response.get("tool_calls")
+            if not tool_calls:
                 logger.debug(
                     f"Chat completion finished with final answer (iterations: {tool_iterations})"
                 )
-                final_content = response.get("content") or ""
-
-                if body.stream:
-
-                    async def stream_body() -> Any:
-                        if tool_calls:
-                            yield (
-                                json.dumps(
-                                    {
-                                        "type": "tool_calls",
-                                        "tool_calls": [
-                                            tc.model_dump() for tc in tool_calls
-                                        ],
-                                    }
-                                ).encode("utf-8")
-                                + b"\n"
-                            )
-                        # Stream content in word-sized chunks for smooth UX
-                        for part in _chunk_content(final_content):
-                            yield (
-                                json.dumps({"type": "content", "delta": part}).encode(
-                                    "utf-8"
-                                )
-                                + b"\n"
-                            )
-                        yield json.dumps({"type": "done"}).encode("utf-8") + b"\n"
-
-                    return StreamingResponse(
-                        stream_body(),
-                        media_type="application/x-ndjson",
-                    )
-
                 return JSONResponse(
                     content=ChatCompletionResponse(
                         message=ChatMessageResponse(
                             role="assistant",
-                            content=final_content,
+                            content=response.get("content"),
                             tool_calls=None,
                         ),
                         finish_reason=response.get("finish_reason", "stop"),
                         tool_iterations=tool_iterations,
-                        tool_calls=tool_calls,
                     ).model_dump(),
                 )
 
+            # Execute tools
             tool_iterations += 1
             logger.debug(
                 f"Tool calls detected (iteration {tool_iterations}/{max_iterations}): "
-                f"{len(pending_tool_calls)} tool(s) to execute"
+                f"{len(tool_calls)} tool(s) to execute"
             )
-            executed_calls, tool_results = await _execute_pending_tools(
-                pending_tool_calls, request, allowed_cameras
-            )
-            tool_calls.extend(executed_calls)
+            tool_results = []
+
+            for tool_call in tool_calls:
+                tool_name = tool_call["name"]
+                tool_args = tool_call["arguments"]
+                tool_call_id = tool_call["id"]
+
+                logger.debug(
+                    f"Executing tool: {tool_name} (id: {tool_call_id}) with arguments: {json.dumps(tool_args, indent=2)}"
+                )
+
+                try:
+                    tool_result = await _execute_tool_internal(
+                        tool_name, tool_args, request, allowed_cameras
+                    )
+
+                    if isinstance(tool_result, dict):
+                        result_content = json.dumps(tool_result)
+                        result_summary = tool_result
+                        if isinstance(tool_result, dict) and isinstance(
+                            tool_result.get("content"), list
+                        ):
+                            result_count = len(tool_result.get("content", []))
+                            result_summary = {
+                                "count": result_count,
+                                "sample": tool_result.get("content", [])[:2]
+                                if result_count > 0
+                                else [],
+                            }
+                        logger.debug(
+                            f"Tool {tool_name} (id: {tool_call_id}) completed successfully. "
+                            f"Result: {json.dumps(result_summary, indent=2)}"
+                        )
+                    elif isinstance(tool_result, str):
+                        result_content = tool_result
+                        logger.debug(
+                            f"Tool {tool_name} (id: {tool_call_id}) completed successfully. "
+                            f"Result length: {len(result_content)} characters"
+                        )
+                    else:
+                        result_content = str(tool_result)
+                        logger.debug(
+                            f"Tool {tool_name} (id: {tool_call_id}) completed successfully. "
+                            f"Result type: {type(tool_result).__name__}"
+                        )
+
+                    tool_results.append(
+                        {
+                            "role": "tool",
+                            "tool_call_id": tool_call_id,
+                            "content": result_content,
+                        }
+                    )
+                except Exception as e:
+                    logger.error(
+                        f"Error executing tool {tool_name} (id: {tool_call_id}): {e}",
+                        exc_info=True,
+                    )
+                    error_content = json.dumps(
+                        {"error": f"Tool execution failed: {str(e)}"}
+                    )
+                    tool_results.append(
+                        {
+                            "role": "tool",
+                            "tool_call_id": tool_call_id,
+                            "content": error_content,
+                        }
+                    )
+                    logger.debug(
+                        f"Tool {tool_name} (id: {tool_call_id}) failed. Error result added to conversation."
+                    )
 
             conversation.extend(tool_results)
             logger.debug(
                 f"Added {len(tool_results)} tool result(s) to conversation. "
@@ -807,7 +631,6 @@ Always be accurate with time calculations based on the current date provided.{ca
                 ),
                 finish_reason="length",
                 tool_iterations=tool_iterations,
-                tool_calls=tool_calls,
             ).model_dump(),
         )
 
@@ -39,7 +39,3 @@ class ChatCompletionRequest(BaseModel):
             "user message as multimodal content. Use with get_live_context for detection info."
         ),
     )
-    stream: bool = Field(
-        default=False,
-        description="If true, stream the final assistant response in the body as newline-delimited JSON.",
-    )
@@ -5,8 +5,8 @@ from typing import Any, Optional
 from pydantic import BaseModel, Field
 
 
-class ToolCallInvocation(BaseModel):
-    """A tool call requested by the LLM (before execution)."""
+class ToolCall(BaseModel):
+    """A tool call from the LLM."""
 
     id: str = Field(description="Unique identifier for this tool call")
     name: str = Field(description="Tool name to call")
@@ -20,24 +20,11 @@ class ChatMessageResponse(BaseModel):
     content: Optional[str] = Field(
         default=None, description="Message content (None if tool calls present)"
     )
-    tool_calls: Optional[list[ToolCallInvocation]] = Field(
+    tool_calls: Optional[list[ToolCall]] = Field(
         default=None, description="Tool calls if LLM wants to call tools"
     )
 
 
-class ToolCall(BaseModel):
-    """A tool that was executed during the completion, with its response."""
-
-    name: str = Field(description="Tool name that was called")
-    arguments: dict[str, Any] = Field(
-        default_factory=dict, description="Arguments passed to the tool"
-    )
-    response: str = Field(
-        default="",
-        description="The response or result returned from the tool execution",
-    )
-
-
 class ChatCompletionResponse(BaseModel):
     """Response from chat completion."""
 
@@ -48,7 +35,3 @@ class ChatCompletionResponse(BaseModel):
     tool_iterations: int = Field(
         default=0, description="Number of tool call iterations performed"
     )
-    tool_calls: list[ToolCall] = Field(
-        default_factory=list,
-        description="List of tool calls that were executed during this completion",
-    )
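The model hunks above rename `ToolCallInvocation` to `ToolCall` and drop the executed-tool variant from the response schema. A minimal stand-alone sketch of the surviving shape (assumes pydantic v2; the field set is trimmed to exactly what the hunks show):

```python
from typing import Optional

from pydantic import BaseModel, Field


class ToolCall(BaseModel):
    """A tool call from the LLM."""

    id: str = Field(description="Unique identifier for this tool call")
    name: str = Field(description="Tool name to call")


class ChatMessageResponse(BaseModel):
    content: Optional[str] = Field(
        default=None, description="Message content (None if tool calls present)"
    )
    tool_calls: Optional[list[ToolCall]] = Field(
        default=None, description="Tool calls if LLM wants to call tools"
    )


# model_dump() yields the plain dict the endpoint serializes,
# mirroring the `).model_dump(),` call in the earlier hunk.
msg = ChatMessageResponse(tool_calls=[ToolCall(id="call_1", name="get_live_context")])
dumped = msg.model_dump()
```

Nested models are dumped recursively, so `dumped["tool_calls"]` is a list of plain dicts ready for JSON encoding.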
@@ -33,7 +33,6 @@ from frigate.comms.event_metadata_updater import (
 from frigate.config import FrigateConfig
 from frigate.config.camera.updater import CameraConfigUpdatePublisher
 from frigate.embeddings import EmbeddingsContext
-from frigate.genai import GenAIClientManager
 from frigate.ptz.onvif import OnvifController
 from frigate.stats.emitter import StatsEmitter
 from frigate.storage import StorageMaintainer
@@ -135,7 +134,6 @@
     app.include_router(record.router)
     # App Properties
     app.frigate_config = frigate_config
-    app.genai_manager = GenAIClientManager(frigate_config)
     app.embeddings = embeddings
     app.detected_frames_processor = detected_frames_processor
     app.storage_maintainer = storage_maintainer
@@ -33,6 +33,7 @@ from frigate.api.defs.response.review_response import (
     ReviewSummaryResponse,
 )
 from frigate.api.defs.tags import Tags
+from frigate.config import FrigateConfig
 from frigate.embeddings import EmbeddingsContext
 from frigate.models import Recordings, ReviewSegment, UserReviewStatus
 from frigate.review.types import SeverityEnum
@@ -746,7 +747,9 @@ async def set_not_reviewed(
     description="Use GenAI to summarize review items over a period of time.",
 )
 def generate_review_summary(request: Request, start_ts: float, end_ts: float):
-    if not request.app.genai_manager.vision_client:
+    config: FrigateConfig = request.app.frigate_config
+
+    if not config.genai.provider:
         return JSONResponse(
             content=(
                 {
@@ -8,63 +8,39 @@ __all__ = ["AuthConfig"]
 
 
 class AuthConfig(FrigateBaseModel):
-    enabled: bool = Field(
-        default=True,
-        title="Enable authentication",
-        description="Enable native authentication for the Frigate UI.",
-    )
+    enabled: bool = Field(default=True, title="Enable authentication")
     reset_admin_password: bool = Field(
-        default=False,
-        title="Reset admin password",
-        description="If true, reset the admin user's password on startup and print the new password in logs.",
+        default=False, title="Reset the admin password on startup"
     )
     cookie_name: str = Field(
-        default="frigate_token",
-        title="JWT cookie name",
-        description="Name of the cookie used to store the JWT token for native authentication.",
-        pattern=r"^[a-z_]+$",
-    )
-    cookie_secure: bool = Field(
-        default=False,
-        title="Secure cookie flag",
-        description="Set the secure flag on the auth cookie; should be true when using TLS.",
+        default="frigate_token", title="Name for jwt token cookie", pattern=r"^[a-z_]+$"
     )
+    cookie_secure: bool = Field(default=False, title="Set secure flag on cookie")
     session_length: int = Field(
-        default=86400,
-        title="Session length",
-        description="Session duration in seconds for JWT-based sessions.",
-        ge=60,
+        default=86400, title="Session length for jwt session tokens", ge=60
     )
     refresh_time: int = Field(
         default=1800,
-        title="Session refresh window",
-        description="When a session is within this many seconds of expiring, refresh it back to full length.",
+        title="Refresh the session if it is going to expire in this many seconds",
         ge=30,
     )
     failed_login_rate_limit: Optional[str] = Field(
         default=None,
-        title="Failed login limits",
-        description="Rate limiting rules for failed login attempts to reduce brute-force attacks.",
+        title="Rate limits for failed login attempts.",
     )
     trusted_proxies: list[str] = Field(
         default=[],
-        title="Trusted proxies",
-        description="List of trusted proxy IPs used when determining client IP for rate limiting.",
+        title="Trusted proxies for determining IP address to rate limit",
     )
     # As of Feb 2023, OWASP recommends 600000 iterations for PBKDF2-SHA256
-    hash_iterations: int = Field(
-        default=600000,
-        title="Hash iterations",
-        description="Number of PBKDF2-SHA256 iterations to use when hashing user passwords.",
-    )
+    hash_iterations: int = Field(default=600000, title="Password hash iterations")
     roles: Dict[str, List[str]] = Field(
         default_factory=dict,
-        title="Role mappings",
-        description="Map roles to camera lists. An empty list grants access to all cameras for the role.",
+        title="Role to camera mappings. Empty list grants access to all cameras.",
     )
     admin_first_time_login: Optional[bool] = Field(
         default=False,
-        title="First-time admin flag",
+        title="Internal field to expose first-time admin login flag to the UI",
         description=(
             "When true the UI may show a help link on the login page informing users how to sign in after an admin password reset. "
         ),
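The config hunks in this compare mostly move documentation between `title` and `description` on pydantic `Field` declarations. Both kinds of metadata surface in the JSON schema pydantic generates, which is what schema-driven config editors read; a minimal sketch assuming pydantic v2, with a hypothetical `AuthExample` model mirroring one field from the diff:

```python
from pydantic import BaseModel, Field


class AuthExample(BaseModel):
    # Hypothetical stand-in for a single field from the AuthConfig hunk.
    session_length: int = Field(
        default=86400,
        title="Session length",
        description="Session duration in seconds for JWT-based sessions.",
        ge=60,
    )


# title, description, and the ge bound all appear in the generated schema.
prop = AuthExample.model_json_schema()["properties"]["session_length"]
```

Dropping the `description` argument, as the hunk does, removes only the long-form help text from the schema; titles and validation constraints are unaffected.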
@@ -17,45 +17,25 @@ class AudioFilterConfig(FrigateBaseModel):
         default=0.8,
         ge=AUDIO_MIN_CONFIDENCE,
         lt=1.0,
-        title="Minimum audio confidence",
-        description="Minimum confidence threshold for the audio event to be counted.",
+        title="Minimum detection confidence threshold for audio to be counted.",
     )
 
 
 class AudioConfig(FrigateBaseModel):
-    enabled: bool = Field(
-        default=False,
-        title="Enable audio detection",
-        description="Enable or disable audio event detection for all cameras; can be overridden per-camera.",
-    )
+    enabled: bool = Field(default=False, title="Enable audio events.")
     max_not_heard: int = Field(
-        default=30,
-        title="End timeout",
-        description="Amount of seconds without the configured audio type before the audio event is ended.",
+        default=30, title="Seconds of not hearing the type of audio to end the event."
     )
     min_volume: int = Field(
-        default=500,
-        title="Minimum volume",
-        description="Minimum RMS volume threshold required to run audio detection; lower values increase sensitivity (e.g., 200 high, 500 medium, 1000 low).",
+        default=500, title="Min volume required to run audio detection."
     )
     listen: list[str] = Field(
-        default=DEFAULT_LISTEN_AUDIO,
-        title="Listen types",
-        description="List of audio event types to detect (for example: bark, fire_alarm, scream, speech, yell).",
+        default=DEFAULT_LISTEN_AUDIO, title="Audio to listen for."
     )
     filters: Optional[dict[str, AudioFilterConfig]] = Field(
-        None,
-        title="Audio filters",
-        description="Per-audio-type filter settings such as confidence thresholds used to reduce false positives.",
+        None, title="Audio filters."
     )
     enabled_in_config: Optional[bool] = Field(
-        None,
-        title="Original audio state",
-        description="Indicates whether audio detection was originally enabled in the static config file.",
-    )
-    num_threads: int = Field(
-        default=2,
-        title="Detection threads",
-        description="Number of threads to use for audio detection processing.",
-        ge=1,
+        None, title="Keep track of original state of audio detection."
     )
+    num_threads: int = Field(default=2, title="Number of detection threads", ge=1)
@@ -29,88 +29,45 @@ class BirdseyeModeEnum(str, Enum):
 
 class BirdseyeLayoutConfig(FrigateBaseModel):
     scaling_factor: float = Field(
-        default=2.0,
-        title="Scaling factor",
-        description="Scaling factor used by the layout calculator (range 1.0 to 5.0).",
-        ge=1.0,
-        le=5.0,
-    )
-    max_cameras: Optional[int] = Field(
-        default=None,
-        title="Max cameras",
-        description="Maximum number of cameras to display at once in Birdseye; shows the most recent cameras.",
+        default=2.0, title="Birdseye Scaling Factor", ge=1.0, le=5.0
     )
+    max_cameras: Optional[int] = Field(default=None, title="Max cameras")
 
 
 class BirdseyeConfig(FrigateBaseModel):
-    enabled: bool = Field(
-        default=True,
-        title="Enable Birdseye",
-        description="Enable or disable the Birdseye view feature.",
-    )
+    enabled: bool = Field(default=True, title="Enable birdseye view.")
     mode: BirdseyeModeEnum = Field(
-        default=BirdseyeModeEnum.objects,
-        title="Tracking mode",
-        description="Mode for including cameras in Birdseye: 'objects', 'motion', or 'continuous'.",
+        default=BirdseyeModeEnum.objects, title="Tracking mode."
     )
 
-    restream: bool = Field(
-        default=False,
-        title="Restream RTSP",
-        description="Re-stream the Birdseye output as an RTSP feed; enabling this will keep Birdseye running continuously.",
-    )
-    width: int = Field(
-        default=1280,
-        title="Width",
-        description="Output width (pixels) of the composed Birdseye frame.",
-    )
-    height: int = Field(
-        default=720,
-        title="Height",
-        description="Output height (pixels) of the composed Birdseye frame.",
-    )
+    restream: bool = Field(default=False, title="Restream birdseye via RTSP.")
+    width: int = Field(default=1280, title="Birdseye width.")
+    height: int = Field(default=720, title="Birdseye height.")
     quality: int = Field(
         default=8,
-        title="Encoding quality",
-        description="Encoding quality for the Birdseye mpeg1 feed (1 highest quality, 31 lowest).",
+        title="Encoding quality.",
         ge=1,
         le=31,
     )
     inactivity_threshold: int = Field(
-        default=30,
-        title="Inactivity threshold",
-        description="Seconds of inactivity after which a camera will stop being shown in Birdseye.",
-        gt=0,
+        default=30, title="Birdseye Inactivity Threshold", gt=0
     )
     layout: BirdseyeLayoutConfig = Field(
-        default_factory=BirdseyeLayoutConfig,
-        title="Layout",
-        description="Layout options for the Birdseye composition.",
+        default_factory=BirdseyeLayoutConfig, title="Birdseye Layout Config"
     )
     idle_heartbeat_fps: float = Field(
         default=0.0,
         ge=0.0,
         le=10.0,
-        title="Idle heartbeat FPS",
-        description="Frames-per-second to resend the last composed Birdseye frame when idle; set to 0 to disable.",
+        title="Idle heartbeat FPS (0 disables, max 10)",
     )
 
 
 # uses BaseModel because some global attributes are not available at the camera level
 class BirdseyeCameraConfig(BaseModel):
-    enabled: bool = Field(
-        default=True,
-        title="Enable Birdseye",
-        description="Enable or disable the Birdseye view feature.",
-    )
+    enabled: bool = Field(default=True, title="Enable birdseye view for camera.")
     mode: BirdseyeModeEnum = Field(
-        default=BirdseyeModeEnum.objects,
-        title="Tracking mode",
-        description="Mode for including cameras in Birdseye: 'objects', 'motion', or 'continuous'.",
+        default=BirdseyeModeEnum.objects, title="Tracking mode for camera."
     )
 
-    order: int = Field(
-        default=0,
-        title="Position",
-        description="Numeric position controlling the camera's ordering in the Birdseye layout.",
-    )
+    order: int = Field(default=0, title="Position of the camera in the birdseye view.")
@@ -50,17 +50,10 @@ class CameraTypeEnum(str, Enum):
 
 
 class CameraConfig(FrigateBaseModel):
-    name: Optional[str] = Field(
-        None,
-        title="Camera name",
-        description="Camera name is required",
-        pattern=REGEX_CAMERA_NAME,
-    )
+    name: Optional[str] = Field(None, title="Camera name.", pattern=REGEX_CAMERA_NAME)
 
     friendly_name: Optional[str] = Field(
-        None,
-        title="Friendly name",
-        description="Camera friendly name used in the Frigate UI",
+        None, title="Camera friendly name used in the Frigate UI."
     )
 
     @model_validator(mode="before")
@@ -70,129 +63,80 @@ class CameraConfig(FrigateBaseModel):
             pass
         return values
 
-    enabled: bool = Field(default=True, title="Enabled", description="Enabled")
+    enabled: bool = Field(default=True, title="Enable camera.")
 
     # Options with global fallback
     audio: AudioConfig = Field(
-        default_factory=AudioConfig,
-        title="Audio events",
-        description="Settings for audio-based event detection for this camera.",
+        default_factory=AudioConfig, title="Audio events configuration."
     )
     audio_transcription: CameraAudioTranscriptionConfig = Field(
         default_factory=CameraAudioTranscriptionConfig,
-        title="Audio transcription",
-        description="Settings for live and speech audio transcription used for events and live captions.",
+        title="Audio transcription config.",
     )
     birdseye: BirdseyeCameraConfig = Field(
-        default_factory=BirdseyeCameraConfig,
-        title="Birdseye",
-        description="Settings for the Birdseye composite view that composes multiple camera feeds into a single layout.",
+        default_factory=BirdseyeCameraConfig, title="Birdseye camera configuration."
    )
     detect: DetectConfig = Field(
-        default_factory=DetectConfig,
-        title="Object Detection",
-        description="Settings for the detection/detect role used to run object detection and initialize trackers.",
+        default_factory=DetectConfig, title="Object detection configuration."
     )
     face_recognition: CameraFaceRecognitionConfig = Field(
-        default_factory=CameraFaceRecognitionConfig,
-        title="Face recognition",
-        description="Settings for face detection and recognition for this camera.",
-    )
-    ffmpeg: CameraFfmpegConfig = Field(
-        title="FFmpeg",
-        description="FFmpeg settings including binary path, args, hwaccel options, and per-role output args.",
+        default_factory=CameraFaceRecognitionConfig, title="Face recognition config."
     )
+    ffmpeg: CameraFfmpegConfig = Field(title="FFmpeg configuration for the camera.")
     live: CameraLiveConfig = Field(
-        default_factory=CameraLiveConfig,
-        title="Live playback",
-        description="Settings used by the Web UI to control live stream selection, resolution and quality.",
+        default_factory=CameraLiveConfig, title="Live playback settings."
     )
     lpr: CameraLicensePlateRecognitionConfig = Field(
-        default_factory=CameraLicensePlateRecognitionConfig,
-        title="License Plate Recognition",
-        description="License plate recognition settings including detection thresholds, formatting, and known plates.",
-    )
-    motion: MotionConfig = Field(
-        None,
-        title="Motion detection",
-        description="Default motion detection settings for this camera.",
+        default_factory=CameraLicensePlateRecognitionConfig, title="LPR config."
     )
+    motion: MotionConfig = Field(None, title="Motion detection configuration.")
     objects: ObjectConfig = Field(
-        default_factory=ObjectConfig,
-        title="Objects",
-        description="Object tracking defaults including which labels to track and per-object filters.",
+        default_factory=ObjectConfig, title="Object configuration."
     )
     record: RecordConfig = Field(
-        default_factory=RecordConfig,
-        title="Recording",
-        description="Recording and retention settings for this camera.",
+        default_factory=RecordConfig, title="Record configuration."
     )
     review: ReviewConfig = Field(
-        default_factory=ReviewConfig,
-        title="Review",
-        description="Settings that control alerts, detections, and GenAI review summaries used by the UI and storage for this camera.",
+        default_factory=ReviewConfig, title="Review configuration."
     )
     semantic_search: CameraSemanticSearchConfig = Field(
         default_factory=CameraSemanticSearchConfig,
-        title="Semantic Search",
-        description="Settings for semantic search which builds and queries object embeddings to find similar items.",
+        title="Semantic search configuration.",
     )
     snapshots: SnapshotsConfig = Field(
-        default_factory=SnapshotsConfig,
-        title="Snapshots",
-        description="Settings for saved JPEG snapshots of tracked objects for this camera.",
+        default_factory=SnapshotsConfig, title="Snapshot configuration."
     )
     timestamp_style: TimestampStyleConfig = Field(
-        default_factory=TimestampStyleConfig,
-        title="Timestamp style",
-        description="Styling options for in-feed timestamps applied to recordings and snapshots.",
+        default_factory=TimestampStyleConfig, title="Timestamp style configuration."
     )
 
     # Options without global fallback
     best_image_timeout: int = Field(
         default=60,
-        title="Best image timeout",
-        description="How long to wait for the image with the highest confidence score.",
+        title="How long to wait for the image with the highest confidence score.",
     )
     mqtt: CameraMqttConfig = Field(
-        default_factory=CameraMqttConfig,
-        title="MQTT",
-        description="MQTT image publishing settings.",
+        default_factory=CameraMqttConfig, title="MQTT configuration."
     )
     notifications: NotificationConfig = Field(
-        default_factory=NotificationConfig,
-        title="Notifications",
-        description="Settings to enable and control notifications for this camera.",
+        default_factory=NotificationConfig, title="Notifications configuration."
     )
     onvif: OnvifConfig = Field(
-        default_factory=OnvifConfig,
-        title="ONVIF",
-        description="ONVIF connection and PTZ autotracking settings for this camera.",
-    )
-    type: CameraTypeEnum = Field(
-        default=CameraTypeEnum.generic,
-        title="Camera type",
-        description="Camera Type",
+        default_factory=OnvifConfig, title="Camera Onvif Configuration."
     )
+    type: CameraTypeEnum = Field(default=CameraTypeEnum.generic, title="Camera Type")
     ui: CameraUiConfig = Field(
-        default_factory=CameraUiConfig,
-        title="Camera UI",
-        description="Display ordering and visibility for this camera in the UI. Ordering affects the default dashboard. For more granular control, use camera groups.",
+        default_factory=CameraUiConfig, title="Camera UI Modifications."
     )
     webui_url: Optional[str] = Field(
         None,
-        title="Camera URL",
-        description="URL to visit the camera directly from system page",
+        title="URL to visit the camera directly from system page",
     )
     zones: dict[str, ZoneConfig] = Field(
-        default_factory=dict,
-        title="Zones",
-        description="Zones allow you to define a specific area of the frame so you can determine whether or not an object is within a particular area.",
+        default_factory=dict, title="Zone configuration."
     )
     enabled_in_config: Optional[bool] = Field(
-        default=None,
-        title="Original camera state",
-        description="Keep track of original state of camera.",
+        default=None, title="Keep track of original state of camera."
     )
 
     _ffmpeg_cmds: list[dict[str, list[str]]] = PrivateAttr()
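Most sub-config fields in the `CameraConfig` hunk above are declared with `default_factory` rather than a literal default. With pydantic, the factory runs once per model instance, so two cameras never share a mutable sub-config object. A small sketch with hypothetical stand-in models (assumes pydantic v2):

```python
from pydantic import BaseModel, Field


class UiConfig(BaseModel):
    # Hypothetical stand-in for a nested camera sub-config.
    order: int = 0


class Camera(BaseModel):
    # default_factory builds a fresh UiConfig for every Camera instance.
    ui: UiConfig = Field(default_factory=UiConfig)


cam_a = Camera()
cam_b = Camera()
cam_a.ui.order = 5  # mutating cam_a's sub-config leaves cam_b untouched
```

A plain `default=UiConfig()` would hand the same object to every instance in ordinary Python classes; `default_factory` sidesteps that shared-mutable-default pitfall.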
@@ -8,82 +8,56 @@ __all__ = ["DetectConfig", "StationaryConfig", "StationaryMaxFramesConfig"]
 
 
 class StationaryMaxFramesConfig(FrigateBaseModel):
-    default: Optional[int] = Field(
-        default=None,
-        title="Default max frames",
-        description="Default maximum frames to track a stationary object before stopping.",
-        ge=1,
-    )
+    default: Optional[int] = Field(default=None, title="Default max frames.", ge=1)
     objects: dict[str, int] = Field(
-        default_factory=dict,
-        title="Object max frames",
-        description="Per-object overrides for maximum frames to track stationary objects.",
+        default_factory=dict, title="Object specific max frames."
     )
 
 
 class StationaryConfig(FrigateBaseModel):
     interval: Optional[int] = Field(
         default=None,
-        title="Stationary interval",
-        description="How often (in frames) to run a detection check to confirm a stationary object.",
+        title="Frame interval for checking stationary objects.",
         gt=0,
     )
     threshold: Optional[int] = Field(
         default=None,
-        title="Stationary threshold",
-        description="Number of frames with no position change required to mark an object as stationary.",
+        title="Number of frames without a position change for an object to be considered stationary",
         ge=1,
     )
     max_frames: StationaryMaxFramesConfig = Field(
         default_factory=StationaryMaxFramesConfig,
-        title="Max frames",
-        description="Limits how long stationary objects are tracked before being discarded.",
+        title="Max frames for stationary objects.",
     )
     classifier: bool = Field(
         default=True,
-        title="Enable visual classifier",
-        description="Use a visual classifier to detect truly stationary objects even when bounding boxes jitter.",
+        title="Enable visual classifier for determing if objects with jittery bounding boxes are stationary.",
     )
 
 
 class DetectConfig(FrigateBaseModel):
-    enabled: bool = Field(
-        default=False,
-        title="Detection enabled",
-        description="Enable or disable object detection for all cameras; can be overridden per-camera. Detection must be enabled for object tracking to run.",
-    )
+    enabled: bool = Field(default=False, title="Detection Enabled.")
     height: Optional[int] = Field(
-        default=None,
-        title="Detect height",
-        description="Height (pixels) of frames used for the detect stream; leave empty to use the native stream resolution.",
+        default=None, title="Height of the stream for the detect role."
     )
     width: Optional[int] = Field(
-        default=None,
-        title="Detect width",
-        description="Width (pixels) of frames used for the detect stream; leave empty to use the native stream resolution.",
+        default=None, title="Width of the stream for the detect role."
     )
     fps: int = Field(
-        default=5,
-        title="Detect FPS",
-        description="Desired frames per second to run detection on; lower values reduce CPU usage (recommended value is 5, only set higher - at most 10 - if tracking extremely fast moving objects).",
+        default=5, title="Number of frames per second to process through detection."
     )
     min_initialized: Optional[int] = Field(
         default=None,
-        title="Minimum initialization frames",
-        description="Number of consecutive detection hits required before creating a tracked object. Increase to reduce false initializations. Default value is fps divided by 2.",
+        title="Minimum number of consecutive hits for an object to be initialized by the tracker.",
     )
     max_disappeared: Optional[int] = Field(
         default=None,
-        title="Maximum disappeared frames",
-        description="Number of frames without a detection before a tracked object is considered gone.",
+        title="Maximum number of frames the object can disappear before detection ends.",
     )
     stationary: StationaryConfig = Field(
         default_factory=StationaryConfig,
-        title="Stationary objects config",
-        description="Settings to detect and manage objects that remain stationary for a period of time.",
+        title="Stationary objects config.",
     )
     annotation_offset: int = Field(
-        default=0,
-        title="Annotation offset",
-        description="Milliseconds to shift detect annotations to better align timeline bounding boxes with recordings; can be positive or negative.",
+        default=0, title="Milliseconds to offset detect annotations by."
     )
|||||||
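The hunks in this diff replace multi-line `Field(title=..., description=...)` declarations with compact one-liners. In either style, pydantic carries the `title` and `description` keywords into the generated JSON schema, which is what documentation and config-editor tooling reads. A minimal sketch of that behavior (the class and field names here are illustrative only, not Frigate's actual models):

```python
from pydantic import BaseModel, Field


class DetectDemo(BaseModel):
    # Compact one-liner style: title is still emitted into the JSON schema.
    enabled: bool = Field(default=False, title="Detection Enabled.")
    # Multi-line style: description is emitted alongside the title.
    fps: int = Field(
        default=5,
        title="Detect FPS",
        description="Desired frames per second to run detection on.",
    )


# Both metadata styles surface identically in the schema output.
schema = DetectDemo.model_json_schema()
print(schema["properties"]["enabled"]["title"])
print(schema["properties"]["fps"]["description"])
```

This is why the two styles are interchangeable for schema consumers; the diff only changes how the source reads.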
@@ -35,58 +35,39 @@ DETECT_FFMPEG_OUTPUT_ARGS_DEFAULT = [
 class FfmpegOutputArgsConfig(FrigateBaseModel):
     detect: Union[str, list[str]] = Field(
         default=DETECT_FFMPEG_OUTPUT_ARGS_DEFAULT,
-        title="Detect output arguments",
-        description="Default output arguments for detect role streams.",
+        title="Detect role FFmpeg output arguments.",
     )
     record: Union[str, list[str]] = Field(
         default=RECORD_FFMPEG_OUTPUT_ARGS_DEFAULT,
-        title="Record output arguments",
-        description="Default output arguments for record role streams.",
+        title="Record role FFmpeg output arguments.",
     )


 class FfmpegConfig(FrigateBaseModel):
-    path: str = Field(
-        default="default",
-        title="FFmpeg path",
-        description='Path to the FFmpeg binary to use or a version alias ("5.0" or "7.0").',
-    )
+    path: str = Field(default="default", title="FFmpeg path")
     global_args: Union[str, list[str]] = Field(
-        default=FFMPEG_GLOBAL_ARGS_DEFAULT,
-        title="FFmpeg global arguments",
-        description="Global arguments passed to FFmpeg processes.",
+        default=FFMPEG_GLOBAL_ARGS_DEFAULT, title="Global FFmpeg arguments."
     )
     hwaccel_args: Union[str, list[str]] = Field(
-        default="auto",
-        title="Hardware acceleration arguments",
-        description="Hardware acceleration arguments for FFmpeg. Provider-specific presets are recommended.",
+        default="auto", title="FFmpeg hardware acceleration arguments."
     )
     input_args: Union[str, list[str]] = Field(
-        default=FFMPEG_INPUT_ARGS_DEFAULT,
-        title="Input arguments",
-        description="Input arguments applied to FFmpeg input streams.",
+        default=FFMPEG_INPUT_ARGS_DEFAULT, title="FFmpeg input arguments."
     )
     output_args: FfmpegOutputArgsConfig = Field(
         default_factory=FfmpegOutputArgsConfig,
-        title="Output arguments",
-        description="Default output arguments used for different FFmpeg roles such as detect and record.",
+        title="FFmpeg output arguments per role.",
     )
     retry_interval: float = Field(
         default=10.0,
-        title="FFmpeg retry time",
-        description="Seconds to wait before attempting to reconnect a camera stream after failure. Default is 10.",
+        title="Time in seconds to wait before FFmpeg retries connecting to the camera.",
         gt=0.0,
     )
     apple_compatibility: bool = Field(
         default=False,
-        title="Apple compatibility",
-        description="Enable HEVC tagging for better Apple player compatibility when recording H.265.",
-    )
-    gpu: int = Field(
-        default=0,
-        title="GPU index",
-        description="Default GPU index used for hardware acceleration if available.",
+        title="Set tag on HEVC (H.265) recording stream to improve compatibility with Apple players.",
     )
+    gpu: int = Field(default=0, title="GPU index to use for hardware acceleration.")

     @property
     def ffmpeg_path(self) -> str:
@@ -114,36 +95,21 @@ class CameraRoleEnum(str, Enum):


 class CameraInput(FrigateBaseModel):
-    path: EnvString = Field(
-        title="Input path",
-        description="Camera input stream URL or path.",
-    )
-    roles: list[CameraRoleEnum] = Field(
-        title="Input roles",
-        description="Roles for this input stream.",
-    )
+    path: EnvString = Field(title="Camera input path.")
+    roles: list[CameraRoleEnum] = Field(title="Roles assigned to this input.")
     global_args: Union[str, list[str]] = Field(
-        default_factory=list,
-        title="FFmpeg global arguments",
-        description="FFmpeg global arguments for this input stream.",
+        default_factory=list, title="FFmpeg global arguments."
     )
     hwaccel_args: Union[str, list[str]] = Field(
-        default_factory=list,
-        title="Hardware acceleration arguments",
-        description="Hardware acceleration arguments for this input stream.",
+        default_factory=list, title="FFmpeg hardware acceleration arguments."
     )
     input_args: Union[str, list[str]] = Field(
-        default_factory=list,
-        title="Input arguments",
-        description="Input arguments specific to this stream.",
+        default_factory=list, title="FFmpeg input arguments."
     )


 class CameraFfmpegConfig(FfmpegConfig):
-    inputs: list[CameraInput] = Field(
-        title="Camera inputs",
-        description="List of input stream definitions (paths and roles) for this camera.",
-    )
+    inputs: list[CameraInput] = Field(title="Camera inputs.")

     @field_validator("inputs")
     @classmethod
@@ -6,7 +6,7 @@ from pydantic import Field
 from ..base import FrigateBaseModel
 from ..env import EnvString

-__all__ = ["GenAIConfig", "GenAIProviderEnum", "GenAIRoleEnum"]
+__all__ = ["GenAIConfig", "GenAIProviderEnum"]


 class GenAIProviderEnum(str, Enum):
@@ -17,53 +17,16 @@ class GenAIProviderEnum(str, Enum):
     llamacpp = "llamacpp"


-class GenAIRoleEnum(str, Enum):
-    tools = "tools"
-    vision = "vision"
-    embeddings = "embeddings"
-
-
 class GenAIConfig(FrigateBaseModel):
     """Primary GenAI Config to define GenAI Provider."""

-    api_key: Optional[EnvString] = Field(
-        default=None,
-        title="API key",
-        description="API key required by some providers (can also be set via environment variables).",
-    )
-    base_url: Optional[str] = Field(
-        default=None,
-        title="Base URL",
-        description="Base URL for self-hosted or compatible providers (for example an Ollama instance).",
-    )
-    model: str = Field(
-        default="gpt-4o",
-        title="Model",
-        description="The model to use from the provider for generating descriptions or summaries.",
-    )
-    provider: GenAIProviderEnum | None = Field(
-        default=None,
-        title="Provider",
-        description="The GenAI provider to use (for example: ollama, gemini, openai).",
-    )
-    roles: list[GenAIRoleEnum] = Field(
-        default_factory=lambda: [
-            GenAIRoleEnum.embeddings,
-            GenAIRoleEnum.vision,
-            GenAIRoleEnum.tools,
-        ],
-        title="Roles",
-        description="GenAI roles (tools, vision, embeddings); one provider per role.",
-    )
+    api_key: Optional[EnvString] = Field(default=None, title="Provider API key.")
+    base_url: Optional[str] = Field(default=None, title="Provider base url.")
+    model: str = Field(default="gpt-4o", title="GenAI model.")
+    provider: GenAIProviderEnum | None = Field(default=None, title="GenAI provider.")
     provider_options: dict[str, Any] = Field(
-        default={},
-        title="Provider options",
-        description="Additional provider-specific options to pass to the GenAI client.",
-        json_schema_extra={"additionalProperties": {"type": "string"}},
+        default={}, title="GenAI Provider extra options."
     )
     runtime_options: dict[str, Any] = Field(
-        default={},
-        title="Runtime options",
-        description="Runtime options passed to the provider for each inference call.",
-        json_schema_extra={"additionalProperties": {"type": "string"}},
+        default={}, title="Options to pass during inference calls."
     )
@@ -10,18 +10,7 @@ __all__ = ["CameraLiveConfig"]
 class CameraLiveConfig(FrigateBaseModel):
     streams: Dict[str, str] = Field(
         default_factory=list,
-        title="Live stream names",
-        description="Mapping of configured stream names to restream/go2rtc names used for live playback.",
-    )
-    height: int = Field(
-        default=720,
-        title="Live height",
-        description="Height (pixels) to render the jsmpeg live stream in the Web UI; must be <= detect stream height.",
-    )
-    quality: int = Field(
-        default=8,
-        ge=1,
-        le=31,
-        title="Live quality",
-        description="Encoding quality for the jsmpeg stream (1 highest, 31 lowest).",
+        title="Friendly names and restream names to use for live view.",
     )
+    height: int = Field(default=720, title="Live camera view height")
+    quality: int = Field(default=8, ge=1, le=31, title="Live camera view quality")
@@ -8,64 +8,30 @@ __all__ = ["MotionConfig"]


 class MotionConfig(FrigateBaseModel):
-    enabled: bool = Field(
-        default=True,
-        title="Enable motion detection",
-        description="Enable or disable motion detection for all cameras; can be overridden per-camera.",
-    )
+    enabled: bool = Field(default=True, title="Enable motion on all cameras.")
     threshold: int = Field(
         default=30,
-        title="Motion threshold",
-        description="Pixel difference threshold used by the motion detector; higher values reduce sensitivity (range 1-255).",
+        title="Motion detection threshold (1-255).",
         ge=1,
         le=255,
     )
     lightning_threshold: float = Field(
-        default=0.8,
-        title="Lightning threshold",
-        description="Threshold to detect and ignore brief lighting spikes (lower is more sensitive, values between 0.3 and 1.0).",
-        ge=0.3,
-        le=1.0,
-    )
-    improve_contrast: bool = Field(
-        default=True,
-        title="Improve contrast",
-        description="Apply contrast improvement to frames before motion analysis to help detection.",
-    )
-    contour_area: Optional[int] = Field(
-        default=10,
-        title="Contour area",
-        description="Minimum contour area in pixels required for a motion contour to be counted.",
-    )
-    delta_alpha: float = Field(
-        default=0.2,
-        title="Delta alpha",
-        description="Alpha blending factor used in frame differencing for motion calculation.",
-    )
-    frame_alpha: float = Field(
-        default=0.01,
-        title="Frame alpha",
-        description="Alpha value used when blending frames for motion preprocessing.",
-    )
-    frame_height: Optional[int] = Field(
-        default=100,
-        title="Frame height",
-        description="Height in pixels to scale frames to when computing motion.",
+        default=0.8, title="Lightning detection threshold (0.3-1.0).", ge=0.3, le=1.0
     )
+    improve_contrast: bool = Field(default=True, title="Improve Contrast")
+    contour_area: Optional[int] = Field(default=10, title="Contour Area")
+    delta_alpha: float = Field(default=0.2, title="Delta Alpha")
+    frame_alpha: float = Field(default=0.01, title="Frame Alpha")
+    frame_height: Optional[int] = Field(default=100, title="Frame Height")
     mask: Union[str, list[str]] = Field(
-        default="",
-        title="Mask coordinates",
-        description="Ordered x,y coordinates defining the motion mask polygon used to include/exclude areas.",
+        default="", title="Coordinates polygon for the motion mask."
     )
     mqtt_off_delay: int = Field(
         default=30,
-        title="MQTT off delay",
-        description="Seconds to wait after last motion before publishing an MQTT 'off' state.",
+        title="Delay for updating MQTT with no motion detected.",
     )
     enabled_in_config: Optional[bool] = Field(
-        default=None,
-        title="Original motion state",
-        description="Indicates whether motion detection was enabled in the original static configuration.",
+        default=None, title="Keep track of original state of motion detection."
     )
     raw_mask: Union[str, list[str]] = ""

@@ -6,40 +6,18 @@ __all__ = ["CameraMqttConfig"]


 class CameraMqttConfig(FrigateBaseModel):
-    enabled: bool = Field(
-        default=True,
-        title="Send image",
-        description="Enable publishing image snapshots for objects to MQTT topics for this camera.",
-    )
-    timestamp: bool = Field(
-        default=True,
-        title="Add timestamp",
-        description="Overlay a timestamp on images published to MQTT.",
-    )
-    bounding_box: bool = Field(
-        default=True,
-        title="Add bounding box",
-        description="Draw bounding boxes on images published over MQTT.",
-    )
-    crop: bool = Field(
-        default=True,
-        title="Crop image",
-        description="Crop images published to MQTT to the detected object's bounding box.",
-    )
-    height: int = Field(
-        default=270,
-        title="Image height",
-        description="Height (pixels) to resize images published over MQTT.",
-    )
+    enabled: bool = Field(default=True, title="Send image over MQTT.")
+    timestamp: bool = Field(default=True, title="Add timestamp to MQTT image.")
+    bounding_box: bool = Field(default=True, title="Add bounding box to MQTT image.")
+    crop: bool = Field(default=True, title="Crop MQTT image to detected object.")
+    height: int = Field(default=270, title="MQTT image height.")
     required_zones: list[str] = Field(
         default_factory=list,
-        title="Required zones",
-        description="Zones that an object must enter for an MQTT image to be published.",
+        title="List of required zones to be entered in order to send the image.",
     )
     quality: int = Field(
         default=70,
-        title="JPEG quality",
-        description="JPEG quality for images published to MQTT (0-100).",
+        title="Quality of the encoded jpeg (0-100).",
         ge=0,
         le=100,
     )
@@ -8,24 +8,11 @@ __all__ = ["NotificationConfig"]


 class NotificationConfig(FrigateBaseModel):
-    enabled: bool = Field(
-        default=False,
-        title="Enable notifications",
-        description="Enable or disable notifications for all cameras; can be overridden per-camera.",
-    )
-    email: Optional[str] = Field(
-        default=None,
-        title="Notification email",
-        description="Email address used for push notifications or required by certain notification providers.",
-    )
+    enabled: bool = Field(default=False, title="Enable notifications")
+    email: Optional[str] = Field(default=None, title="Email required for push.")
     cooldown: int = Field(
-        default=0,
-        ge=0,
-        title="Cooldown period",
-        description="Cooldown (seconds) between notifications to avoid spamming recipients.",
+        default=0, ge=0, title="Cooldown period for notifications (time in seconds)."
     )
     enabled_in_config: Optional[bool] = Field(
-        default=None,
-        title="Original notifications state",
-        description="Indicates whether notifications were enabled in the original static configuration.",
+        default=None, title="Keep track of original state of notifications."
     )
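Several fields in these hunks keep their `ge`/`le` bounds (for example `cooldown` above, or motion's `threshold`) while only the human-readable metadata changes. Those bounds are enforced at validation time regardless of the title style. A minimal sketch of that enforcement, using a hypothetical demo class rather than Frigate's actual config:

```python
from pydantic import BaseModel, Field, ValidationError


class NotificationDemo(BaseModel):
    # ge=0 rejects negative cooldowns no matter how the title is phrased.
    cooldown: int = Field(
        default=0, ge=0, title="Cooldown period for notifications (time in seconds)."
    )


print(NotificationDemo().cooldown)

try:
    NotificationDemo(cooldown=-5)
except ValidationError as exc:
    # Out-of-range input is rejected with a greater_than_equal error.
    print("rejected:", exc.error_count(), "error")
```

So the diff's metadata churn is cosmetic for validation purposes; only `title`/`description` text moves, while constraints keep their behavior.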
@@ -13,38 +13,30 @@ DEFAULT_TRACKED_OBJECTS = ["person"]
 class FilterConfig(FrigateBaseModel):
     min_area: Union[int, float] = Field(
         default=0,
-        title="Minimum object area",
-        description="Minimum bounding box area (pixels or percentage) required for this object type. Can be pixels (int) or percentage (float between 0.000001 and 0.99).",
+        title="Minimum area of bounding box for object to be counted. Can be pixels (int) or percentage (float between 0.000001 and 0.99).",
     )
     max_area: Union[int, float] = Field(
         default=24000000,
-        title="Maximum object area",
-        description="Maximum bounding box area (pixels or percentage) allowed for this object type. Can be pixels (int) or percentage (float between 0.000001 and 0.99).",
+        title="Maximum area of bounding box for object to be counted. Can be pixels (int) or percentage (float between 0.000001 and 0.99).",
     )
     min_ratio: float = Field(
         default=0,
-        title="Minimum aspect ratio",
-        description="Minimum width/height ratio required for the bounding box to qualify.",
+        title="Minimum ratio of bounding box's width/height for object to be counted.",
     )
     max_ratio: float = Field(
         default=24000000,
-        title="Maximum aspect ratio",
-        description="Maximum width/height ratio allowed for the bounding box to qualify.",
+        title="Maximum ratio of bounding box's width/height for object to be counted.",
     )
     threshold: float = Field(
         default=0.7,
-        title="Confidence threshold",
-        description="Average detection confidence threshold required for the object to be considered a true positive.",
+        title="Average detection confidence threshold for object to be counted.",
     )
     min_score: float = Field(
-        default=0.5,
-        title="Minimum confidence",
-        description="Minimum single-frame detection confidence required for the object to be counted.",
+        default=0.5, title="Minimum detection confidence for object to be counted."
     )
     mask: Optional[Union[str, list[str]]] = Field(
         default=None,
-        title="Filter mask",
-        description="Polygon coordinates defining where this filter applies within the frame.",
+        title="Detection area polygon mask for this filter configuration.",
     )
     raw_mask: Union[str, list[str]] = ""

@@ -59,64 +51,46 @@ class FilterConfig(FrigateBaseModel):

 class GenAIObjectTriggerConfig(FrigateBaseModel):
     tracked_object_end: bool = Field(
-        default=True,
-        title="Send on end",
-        description="Send a request to GenAI when the tracked object ends.",
+        default=True, title="Send once the object is no longer tracked."
     )
     after_significant_updates: Optional[int] = Field(
         default=None,
-        title="Early GenAI trigger",
-        description="Send a request to GenAI after a specified number of significant updates for the tracked object.",
+        title="Send an early request to generative AI when X frames accumulated.",
         ge=1,
     )


 class GenAIObjectConfig(FrigateBaseModel):
-    enabled: bool = Field(
-        default=False,
-        title="Enable GenAI",
-        description="Enable GenAI generation of descriptions for tracked objects by default.",
-    )
+    enabled: bool = Field(default=False, title="Enable GenAI for camera.")
     use_snapshot: bool = Field(
-        default=False,
-        title="Use snapshots",
-        description="Use object snapshots instead of thumbnails for GenAI description generation.",
+        default=False, title="Use snapshots for generating descriptions."
     )
     prompt: str = Field(
         default="Analyze the sequence of images containing the {label}. Focus on the likely intent or behavior of the {label} based on its actions and movement, rather than describing its appearance or the surroundings. Consider what the {label} is doing, why, and what it might do next.",
-        title="Caption prompt",
-        description="Default prompt template used when generating descriptions with GenAI.",
+        title="Default caption prompt.",
     )
     object_prompts: dict[str, str] = Field(
-        default_factory=dict,
-        title="Object prompts",
-        description="Per-object prompts to customize GenAI outputs for specific labels.",
+        default_factory=dict, title="Object specific prompts."
     )

     objects: Union[str, list[str]] = Field(
         default_factory=list,
-        title="GenAI objects",
-        description="List of object labels to send to GenAI by default.",
+        title="List of objects to run generative AI for.",
     )
     required_zones: Union[str, list[str]] = Field(
         default_factory=list,
-        title="Required zones",
-        description="Zones that must be entered for objects to qualify for GenAI description generation.",
+        title="List of required zones to be entered in order to run generative AI.",
     )
     debug_save_thumbnails: bool = Field(
         default=False,
-        title="Save thumbnails",
-        description="Save thumbnails sent to GenAI for debugging and review.",
+        title="Save thumbnails sent to generative AI for debugging purposes.",
     )
     send_triggers: GenAIObjectTriggerConfig = Field(
         default_factory=GenAIObjectTriggerConfig,
-        title="GenAI triggers",
-        description="Defines when frames should be sent to GenAI (on end, after updates, etc.).",
+        title="What triggers to use to send frames to generative AI for a tracked object.",
     )
     enabled_in_config: Optional[bool] = Field(
-        default=None,
-        title="Original GenAI state",
-        description="Indicates whether GenAI was enabled in the original static config.",
+        default=None, title="Keep track of original state of generative AI."
     )

     @field_validator("required_zones", mode="before")
@@ -129,25 +103,14 @@ class GenAIObjectConfig(FrigateBaseModel):


 class ObjectConfig(FrigateBaseModel):
-    track: list[str] = Field(
-        default=DEFAULT_TRACKED_OBJECTS,
-        title="Objects to track",
-        description="List of object labels to track for all cameras; can be overridden per-camera.",
-    )
+    track: list[str] = Field(default=DEFAULT_TRACKED_OBJECTS, title="Objects to track.")
     filters: dict[str, FilterConfig] = Field(
-        default_factory=dict,
-        title="Object filters",
-        description="Filters applied to detected objects to reduce false positives (area, ratio, confidence).",
-    )
-    mask: Union[str, list[str]] = Field(
-        default="",
-        title="Object mask",
-        description="Mask polygon used to prevent object detection in specified areas.",
+        default_factory=dict, title="Object filters."
     )
+    mask: Union[str, list[str]] = Field(default="", title="Object mask.")
     genai: GenAIObjectConfig = Field(
         default_factory=GenAIObjectConfig,
-        title="GenAI object config",
-        description="GenAI options for describing tracked objects and sending frames for generation.",
+        title="Config for using genai to analyze objects.",
     )
     _all_objects: list[str] = PrivateAttr()

@@ -17,57 +17,37 @@ class ZoomingModeEnum(str, Enum):
|
|||||||
|
|
||||||
|
|
||||||
class PtzAutotrackConfig(FrigateBaseModel):
|
class PtzAutotrackConfig(FrigateBaseModel):
|
||||||
enabled: bool = Field(
|
enabled: bool = Field(default=False, title="Enable PTZ object autotracking.")
|
||||||
default=False,
|
|
||||||
title="Enable Autotracking",
|
|
||||||
description="Enable or disable automatic PTZ camera tracking of detected objects.",
|
|
||||||
)
|
|
||||||
calibrate_on_startup: bool = Field(
|
calibrate_on_startup: bool = Field(
|
||||||
default=False,
|
default=False, title="Perform a camera calibration when Frigate starts."
|
||||||
title="Calibrate on start",
|
|
||||||
description="Measure PTZ motor speeds on startup to improve tracking accuracy. Frigate will update config with movement_weights after calibration.",
|
|
||||||
)
|
)
|
||||||
zooming: ZoomingModeEnum = Field(
|
zooming: ZoomingModeEnum = Field(
|
||||||
default=ZoomingModeEnum.disabled,
|
default=ZoomingModeEnum.disabled, title="Autotracker zooming mode."
|
||||||
title="Zoom mode",
|
|
||||||
description="Control zoom behavior: disabled (pan/tilt only), absolute (most compatible), or relative (concurrent pan/tilt/zoom).",
|
|
||||||
)
|
)
|
||||||
zoom_factor: float = Field(
|
zoom_factor: float = Field(
|
||||||
default=0.3,
|
default=0.3,
|
||||||
title="Zoom factor",
|
-        description="Control zoom level on tracked objects. Lower values keep more scene in view; higher values zoom in closer but may lose tracking. Values between 0.1 and 0.75.",
+        title="Zooming factor (0.1-0.75).",
         ge=0.1,
         le=0.75,
     )
-    track: list[str] = Field(
-        default=DEFAULT_TRACKED_OBJECTS,
-        title="Tracked objects",
-        description="List of object types that should trigger autotracking.",
-    )
+    track: list[str] = Field(default=DEFAULT_TRACKED_OBJECTS, title="Objects to track.")
     required_zones: list[str] = Field(
         default_factory=list,
-        title="Required zones",
-        description="Objects must enter one of these zones before autotracking begins.",
+        title="List of required zones to be entered in order to begin autotracking.",
     )
     return_preset: str = Field(
         default="home",
-        title="Return preset",
-        description="ONVIF preset name configured in camera firmware to return to after tracking ends.",
+        title="Name of camera preset to return to when object tracking is over.",
     )
     timeout: int = Field(
-        default=10,
-        title="Return timeout",
-        description="Wait this many seconds after losing tracking before returning camera to preset position.",
+        default=10, title="Seconds to delay before returning to preset."
     )
     movement_weights: Optional[Union[str, list[str]]] = Field(
         default_factory=list,
-        title="Movement weights",
-        description="Calibration values automatically generated by camera calibration. Do not modify manually.",
+        title="Internal value used for PTZ movements based on the speed of your camera's motor.",
     )
     enabled_in_config: Optional[bool] = Field(
-        default=None,
-        title="Original autotrack state",
-        description="Internal field to track whether autotracking was enabled in configuration.",
+        default=None, title="Keep track of original state of autotracking."
     )

     @field_validator("movement_weights", mode="before")
@@ -92,38 +72,16 @@ class PtzAutotrackConfig(FrigateBaseModel):


 class OnvifConfig(FrigateBaseModel):
-    host: str = Field(
-        default="",
-        title="ONVIF host",
-        description="Host (and optional scheme) for the ONVIF service for this camera.",
-    )
-    port: int = Field(
-        default=8000,
-        title="ONVIF port",
-        description="Port number for the ONVIF service.",
-    )
-    user: Optional[EnvString] = Field(
-        default=None,
-        title="ONVIF username",
-        description="Username for ONVIF authentication; some devices require admin user for ONVIF.",
-    )
-    password: Optional[EnvString] = Field(
-        default=None,
-        title="ONVIF password",
-        description="Password for ONVIF authentication.",
-    )
-    tls_insecure: bool = Field(
-        default=False,
-        title="Disable TLS verify",
-        description="Skip TLS verification and disable digest auth for ONVIF (unsafe; use in safe networks only).",
-    )
+    host: str = Field(default="", title="Onvif Host")
+    port: int = Field(default=8000, title="Onvif Port")
+    user: Optional[EnvString] = Field(default=None, title="Onvif Username")
+    password: Optional[EnvString] = Field(default=None, title="Onvif Password")
+    tls_insecure: bool = Field(default=False, title="Onvif Disable TLS verification")
     autotracking: PtzAutotrackConfig = Field(
         default_factory=PtzAutotrackConfig,
-        title="Autotracking",
-        description="Automatically track moving objects and keep them centered in the frame using PTZ camera movements.",
+        title="PTZ auto tracking config.",
     )
     ignore_time_mismatch: bool = Field(
         default=False,
-        title="Ignore time mismatch",
-        description="Ignore time synchronization differences between camera and Frigate server for ONVIF communication.",
+        title="Onvif Ignore Time Synchronization Mismatch Between Camera and Server",
     )
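Both styles in the hunk above attach UI metadata to Pydantic fields; the difference is only whether a `description` accompanies the `title`. As a side note on why this metadata matters (not part of the diff; a minimal sketch assuming pydantic v2, with a made-up model name), `Field(title=..., description=...)` is carried through into the generated JSON schema, which is what config documentation and schema-driven forms consume:

```python
from pydantic import BaseModel, Field


class RetainConfig(BaseModel):
    # expanded style from the minus side of the diff: short title plus description
    days: float = Field(
        default=0,
        ge=0,
        title="Retention days",
        description="Days to retain recordings.",
    )


# Field metadata surfaces in the model's JSON schema properties.
schema = RetainConfig.model_json_schema()
print(schema["properties"]["days"]["title"])        # Retention days
print(schema["properties"]["days"]["description"])  # Days to retain recordings.
```

Dropping the `description` kwarg, as the plus side does, simply removes that key from the schema; the `title` remains.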
@@ -21,12 +21,7 @@ __all__ = [


 class RecordRetainConfig(FrigateBaseModel):
-    days: float = Field(
-        default=0,
-        ge=0,
-        title="Retention days",
-        description="Days to retain recordings.",
-    )
+    days: float = Field(default=0, ge=0, title="Default retention period.")


 class RetainModeEnum(str, Enum):
@@ -36,37 +31,22 @@ class RetainModeEnum(str, Enum):


 class ReviewRetainConfig(FrigateBaseModel):
-    days: float = Field(
-        default=10,
-        ge=0,
-        title="Retention days",
-        description="Number of days to retain recordings of detection events.",
-    )
-    mode: RetainModeEnum = Field(
-        default=RetainModeEnum.motion,
-        title="Retention mode",
-        description="Mode for retention: all (save all segments), motion (save segments with motion), or active_objects (save segments with active objects).",
-    )
+    days: float = Field(default=10, ge=0, title="Default retention period.")
+    mode: RetainModeEnum = Field(default=RetainModeEnum.motion, title="Retain mode.")


 class EventsConfig(FrigateBaseModel):
     pre_capture: int = Field(
         default=5,
-        title="Pre-capture seconds",
-        description="Number of seconds before the detection event to include in the recording.",
+        title="Seconds to retain before event starts.",
         le=MAX_PRE_CAPTURE,
         ge=0,
     )
     post_capture: int = Field(
-        default=5,
-        ge=0,
-        title="Post-capture seconds",
-        description="Number of seconds after the detection event to include in the recording.",
+        default=5, ge=0, title="Seconds to retain after event ends."
     )
     retain: ReviewRetainConfig = Field(
-        default_factory=ReviewRetainConfig,
-        title="Event retention",
-        description="Retention settings for recordings of detection events.",
+        default_factory=ReviewRetainConfig, title="Event retention settings."
     )
@@ -80,65 +60,43 @@ class RecordQualityEnum(str, Enum):

 class RecordPreviewConfig(FrigateBaseModel):
     quality: RecordQualityEnum = Field(
-        default=RecordQualityEnum.medium,
-        title="Preview quality",
-        description="Preview quality level (very_low, low, medium, high, very_high).",
+        default=RecordQualityEnum.medium, title="Quality of recording preview."
     )


 class RecordExportConfig(FrigateBaseModel):
     hwaccel_args: Union[str, list[str]] = Field(
-        default="auto",
-        title="Export hwaccel args",
-        description="Hardware acceleration args to use for export/transcode operations.",
+        default="auto", title="Export-specific FFmpeg hardware acceleration arguments."
     )


 class RecordConfig(FrigateBaseModel):
-    enabled: bool = Field(
-        default=False,
-        title="Enable recording",
-        description="Enable or disable recording for all cameras; can be overridden per-camera.",
-    )
+    enabled: bool = Field(default=False, title="Enable record on all cameras.")
     expire_interval: int = Field(
         default=60,
-        title="Record cleanup interval",
-        description="Minutes between cleanup passes that remove expired recording segments.",
+        title="Number of minutes to wait between cleanup runs.",
     )
     continuous: RecordRetainConfig = Field(
         default_factory=RecordRetainConfig,
-        title="Continuous retention",
-        description="Number of days to retain recordings regardless of tracked objects or motion. Set to 0 if you only want to retain recordings of alerts and detections.",
+        title="Continuous recording retention settings.",
     )
     motion: RecordRetainConfig = Field(
-        default_factory=RecordRetainConfig,
-        title="Motion retention",
-        description="Number of days to retain recordings triggered by motion regardless of tracked objects. Set to 0 if you only want to retain recordings of alerts and detections.",
+        default_factory=RecordRetainConfig, title="Motion recording retention settings."
     )
     detections: EventsConfig = Field(
-        default_factory=EventsConfig,
-        title="Detection retention",
-        description="Recording retention settings for detection events including pre/post capture durations.",
+        default_factory=EventsConfig, title="Detection specific retention settings."
     )
     alerts: EventsConfig = Field(
-        default_factory=EventsConfig,
-        title="Alert retention",
-        description="Recording retention settings for alert events including pre/post capture durations.",
+        default_factory=EventsConfig, title="Alert specific retention settings."
     )
     export: RecordExportConfig = Field(
-        default_factory=RecordExportConfig,
-        title="Export config",
-        description="Settings used when exporting recordings such as timelapse and hardware acceleration.",
+        default_factory=RecordExportConfig, title="Recording Export Config"
     )
     preview: RecordPreviewConfig = Field(
-        default_factory=RecordPreviewConfig,
-        title="Preview config",
-        description="Settings controlling the quality of recording previews shown in the UI.",
+        default_factory=RecordPreviewConfig, title="Recording Preview Config"
     )
     enabled_in_config: Optional[bool] = Field(
-        default=None,
-        title="Original recording state",
-        description="Indicates whether recording was enabled in the original static configuration.",
+        default=None, title="Keep track of original state of recording."
     )

     @property
@@ -21,32 +21,22 @@ DEFAULT_ALERT_OBJECTS = ["person", "car"]
 class AlertsConfig(FrigateBaseModel):
     """Configure alerts"""

-    enabled: bool = Field(
-        default=True,
-        title="Enable alerts",
-        description="Enable or disable alert generation for all cameras; can be overridden per-camera.",
-    )
+    enabled: bool = Field(default=True, title="Enable alerts.")

     labels: list[str] = Field(
-        default=DEFAULT_ALERT_OBJECTS,
-        title="Alert labels",
-        description="List of object labels that qualify as alerts (for example: car, person).",
+        default=DEFAULT_ALERT_OBJECTS, title="Labels to create alerts for."
     )
     required_zones: Union[str, list[str]] = Field(
         default_factory=list,
-        title="Required zones",
-        description="Zones that an object must enter to be considered an alert; leave empty to allow any zone.",
+        title="List of required zones to be entered in order to save the event as an alert.",
     )

     enabled_in_config: Optional[bool] = Field(
-        default=None,
-        title="Original alerts state",
-        description="Tracks whether alerts were originally enabled in the static configuration.",
+        default=None, title="Keep track of original state of alerts."
     )
     cutoff_time: int = Field(
         default=40,
-        title="Alerts cutoff time",
-        description="Seconds to wait after no alert-causing activity before cutting off an alert.",
+        title="Time to cutoff alerts after no alert-causing activity has occurred.",
     )

     @field_validator("required_zones", mode="before")
@@ -61,32 +51,22 @@ class AlertsConfig(FrigateBaseModel):
 class DetectionsConfig(FrigateBaseModel):
     """Configure detections"""

-    enabled: bool = Field(
-        default=True,
-        title="Enable detections",
-        description="Enable or disable detection events for all cameras; can be overridden per-camera.",
-    )
+    enabled: bool = Field(default=True, title="Enable detections.")

     labels: Optional[list[str]] = Field(
-        default=None,
-        title="Detection labels",
-        description="List of object labels that qualify as detection events.",
+        default=None, title="Labels to create detections for."
     )
     required_zones: Union[str, list[str]] = Field(
         default_factory=list,
-        title="Required zones",
-        description="Zones that an object must enter to be considered a detection; leave empty to allow any zone.",
+        title="List of required zones to be entered in order to save the event as a detection.",
     )
     cutoff_time: int = Field(
         default=30,
-        title="Detections cutoff time",
-        description="Seconds to wait after no detection-causing activity before cutting off a detection.",
+        title="Time to cutoff detection after no detection-causing activity has occurred.",
     )

     enabled_in_config: Optional[bool] = Field(
-        default=None,
-        title="Original detections state",
-        description="Tracks whether detections were originally enabled in the static configuration.",
+        default=None, title="Keep track of original state of detections."
     )

     @field_validator("required_zones", mode="before")
@@ -101,42 +81,27 @@ class DetectionsConfig(FrigateBaseModel):
 class GenAIReviewConfig(FrigateBaseModel):
     enabled: bool = Field(
         default=False,
-        title="Enable GenAI descriptions",
-        description="Enable or disable GenAI-generated descriptions and summaries for review items.",
-    )
-    alerts: bool = Field(
-        default=True,
-        title="Enable GenAI for alerts",
-        description="Use GenAI to generate descriptions for alert items.",
-    )
-    detections: bool = Field(
-        default=False,
-        title="Enable GenAI for detections",
-        description="Use GenAI to generate descriptions for detection items.",
+        title="Enable GenAI descriptions for review items.",
     )
+    alerts: bool = Field(default=True, title="Enable GenAI for alerts.")
+    detections: bool = Field(default=False, title="Enable GenAI for detections.")
     image_source: ImageSourceEnum = Field(
         default=ImageSourceEnum.preview,
-        title="Review image source",
-        description="Source of images sent to GenAI ('preview' or 'recordings'); 'recordings' uses higher quality frames but more tokens.",
+        title="Image source for review descriptions.",
     )
     additional_concerns: list[str] = Field(
         default=[],
-        title="Additional concerns",
-        description="A list of additional concerns or notes the GenAI should consider when evaluating activity on this camera.",
+        title="Additional concerns that GenAI should make note of on this camera.",
     )
     debug_save_thumbnails: bool = Field(
         default=False,
-        title="Save thumbnails",
-        description="Save thumbnails that are sent to the GenAI provider for debugging and review.",
+        title="Save thumbnails sent to generative AI for debugging purposes.",
     )
     enabled_in_config: Optional[bool] = Field(
-        default=None,
-        title="Original GenAI state",
-        description="Tracks whether GenAI review was originally enabled in the static configuration.",
+        default=None, title="Keep track of original state of generative AI."
     )
     preferred_language: str | None = Field(
-        title="Preferred language",
-        description="Preferred language to request from the GenAI provider for generated responses.",
+        title="Preferred language for GenAI Response",
         default=None,
     )
     activity_context_prompt: str = Field(
@@ -174,24 +139,19 @@ Evaluate in this order:
 3. **Escalate to Level 2 if:** Weapons, break-in tools, forced entry in progress, violence, or active property damage visible (escalates from Level 0 or 1)

 The mere presence of an unidentified person in private areas during late night hours is inherently suspicious and warrants human review, regardless of what activity they appear to be doing or how brief the sequence is.""",
-        title="Activity context prompt",
-        description="Custom prompt describing what is and is not suspicious activity to provide context for GenAI summaries.",
+        title="Custom activity context prompt defining normal and suspicious activity patterns for this property.",
     )


 class ReviewConfig(FrigateBaseModel):
+    """Configure reviews"""

     alerts: AlertsConfig = Field(
-        default_factory=AlertsConfig,
-        title="Alerts config",
-        description="Settings for which tracked objects generate alerts and how alerts are retained.",
+        default_factory=AlertsConfig, title="Review alerts config."
     )
     detections: DetectionsConfig = Field(
-        default_factory=DetectionsConfig,
-        title="Detections config",
-        description="Settings for creating detection events (non-alert) and how long to keep them.",
+        default_factory=DetectionsConfig, title="Review detections config."
     )
     genai: GenAIReviewConfig = Field(
-        default_factory=GenAIReviewConfig,
-        title="GenAI config",
-        description="Controls use of generative AI for producing descriptions and summaries of review items.",
+        default_factory=GenAIReviewConfig, title="Review description genai config."
     )
@@ -9,68 +9,36 @@ __all__ = ["SnapshotsConfig", "RetainConfig"]


 class RetainConfig(FrigateBaseModel):
-    default: float = Field(
-        default=10,
-        title="Default retention",
-        description="Default number of days to retain snapshots.",
-    )
-    mode: RetainModeEnum = Field(
-        default=RetainModeEnum.motion,
-        title="Retention mode",
-        description="Mode for retention: all (save all segments), motion (save segments with motion), or active_objects (save segments with active objects).",
-    )
+    default: float = Field(default=10, title="Default retention period.")
+    mode: RetainModeEnum = Field(default=RetainModeEnum.motion, title="Retain mode.")
     objects: dict[str, float] = Field(
-        default_factory=dict,
-        title="Object retention",
-        description="Per-object overrides for snapshot retention days.",
+        default_factory=dict, title="Object retention period."
     )


 class SnapshotsConfig(FrigateBaseModel):
-    enabled: bool = Field(
-        default=False,
-        title="Snapshots enabled",
-        description="Enable or disable saving snapshots for all cameras; can be overridden per-camera.",
-    )
+    enabled: bool = Field(default=False, title="Snapshots enabled.")
     clean_copy: bool = Field(
-        default=True,
-        title="Save clean copy",
-        description="Save an unannotated clean copy of snapshots in addition to annotated ones.",
+        default=True, title="Create a clean copy of the snapshot image."
     )
     timestamp: bool = Field(
-        default=False,
-        title="Timestamp overlay",
-        description="Overlay a timestamp on saved snapshots.",
+        default=False, title="Add a timestamp overlay on the snapshot."
     )
     bounding_box: bool = Field(
-        default=True,
-        title="Bounding box overlay",
-        description="Draw bounding boxes for tracked objects on saved snapshots.",
-    )
-    crop: bool = Field(
-        default=False,
-        title="Crop snapshot",
-        description="Crop saved snapshots to the detected object's bounding box.",
+        default=True, title="Add a bounding box overlay on the snapshot."
     )
+    crop: bool = Field(default=False, title="Crop the snapshot to the detected object.")
     required_zones: list[str] = Field(
         default_factory=list,
-        title="Required zones",
-        description="Zones an object must enter for a snapshot to be saved.",
-    )
-    height: Optional[int] = Field(
-        default=None,
-        title="Snapshot height",
-        description="Height (pixels) to resize saved snapshots to; leave empty to preserve original size.",
+        title="List of required zones to be entered in order to save a snapshot.",
     )
+    height: Optional[int] = Field(default=None, title="Snapshot image height.")
     retain: RetainConfig = Field(
-        default_factory=RetainConfig,
-        title="Snapshot retention",
-        description="Retention settings for saved snapshots including default days and per-object overrides.",
+        default_factory=RetainConfig, title="Snapshot retention."
     )
     quality: int = Field(
         default=70,
-        title="JPEG quality",
-        description="JPEG encode quality for saved snapshots (0-100).",
+        title="Quality of the encoded jpeg (0-100).",
         ge=0,
         le=100,
     )
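The `ge=0, le=100` bounds on `quality` above are enforced at validation time and are untouched by the title/description changes. A small illustration of that behavior (not from the diff; assuming pydantic v2, with an invented model name that mirrors the field's constraint shape):

```python
from pydantic import BaseModel, Field, ValidationError


class SnapshotQuality(BaseModel):
    # same bounds as the snapshots `quality` field: 0 <= quality <= 100
    quality: int = Field(default=70, ge=0, le=100)


print(SnapshotQuality().quality)  # 70
try:
    SnapshotQuality(quality=150)
except ValidationError as err:
    # out-of-range values are rejected before the model is constructed
    print("rejected:", err.errors()[0]["loc"])
```

Only the `title`/`description` metadata feeds the schema docs; `ge`/`le` additionally produce `minimum`/`maximum` constraints that validate input.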
@@ -27,27 +27,9 @@ class TimestampPositionEnum(str, Enum):


 class ColorConfig(FrigateBaseModel):
-    red: int = Field(
-        default=255,
-        ge=0,
-        le=255,
-        title="Red",
-        description="Red component (0-255) for timestamp color.",
-    )
-    green: int = Field(
-        default=255,
-        ge=0,
-        le=255,
-        title="Green",
-        description="Green component (0-255) for timestamp color.",
-    )
-    blue: int = Field(
-        default=255,
-        ge=0,
-        le=255,
-        title="Blue",
-        description="Blue component (0-255) for timestamp color.",
-    )
+    red: int = Field(default=255, ge=0, le=255, title="Red")
+    green: int = Field(default=255, ge=0, le=255, title="Green")
+    blue: int = Field(default=255, ge=0, le=255, title="Blue")


 class TimestampEffectEnum(str, Enum):
@@ -57,27 +39,11 @@ class TimestampEffectEnum(str, Enum):

 class TimestampStyleConfig(FrigateBaseModel):
     position: TimestampPositionEnum = Field(
-        default=TimestampPositionEnum.tl,
-        title="Timestamp position",
-        description="Position of the timestamp on the image (tl/tr/bl/br).",
-    )
-    format: str = Field(
-        default=DEFAULT_TIME_FORMAT,
-        title="Timestamp format",
-        description="Datetime format string used for timestamps (Python datetime format codes).",
-    )
-    color: ColorConfig = Field(
-        default_factory=ColorConfig,
-        title="Timestamp color",
-        description="RGB color values for the timestamp text (all values 0-255).",
-    )
-    thickness: int = Field(
-        default=2,
-        title="Timestamp thickness",
-        description="Line thickness of the timestamp text.",
+        default=TimestampPositionEnum.tl, title="Timestamp position."
     )
+    format: str = Field(default=DEFAULT_TIME_FORMAT, title="Timestamp format.")
+    color: ColorConfig = Field(default_factory=ColorConfig, title="Timestamp color.")
+    thickness: int = Field(default=2, title="Timestamp thickness.")
     effect: Optional[TimestampEffectEnum] = Field(
-        default=None,
-        title="Timestamp effect",
-        description="Visual effect for the timestamp text (none, solid, shadow).",
+        default=None, title="Timestamp effect."
     )
@@ -6,13 +6,7 @@ __all__ = ["CameraUiConfig"]


 class CameraUiConfig(FrigateBaseModel):
-    order: int = Field(
-        default=0,
-        title="UI order",
-        description="Numeric order used to sort the camera in the UI (default dashboard and lists); larger numbers appear later.",
-    )
+    order: int = Field(default=0, title="Order of camera in UI.")
     dashboard: bool = Field(
-        default=True,
-        title="Show in UI",
-        description="Toggle whether this camera is visible everywhere in the Frigate UI. Disabling this will require manually editing the config to view this camera in the UI again.",
+        default=True, title="Show this camera in Frigate dashboard UI."
     )
@@ -14,46 +14,36 @@ logger = logging.getLogger(__name__)

 class ZoneConfig(BaseModel):
     friendly_name: Optional[str] = Field(
-        None,
-        title="Zone name",
-        description="A user-friendly name for the zone, displayed in the Frigate UI. If not set, a formatted version of the zone name will be used.",
+        None, title="Zone friendly name used in the Frigate UI."
     )
     filters: dict[str, FilterConfig] = Field(
-        default_factory=dict,
-        title="Zone filters",
-        description="Filters to apply to objects within this zone. Used to reduce false positives or restrict which objects are considered present in the zone.",
+        default_factory=dict, title="Zone filters."
     )
     coordinates: Union[str, list[str]] = Field(
-        title="Coordinates",
-        description="Polygon coordinates that define the zone area. Can be a comma-separated string or a list of coordinate strings. Coordinates should be relative (0-1) or absolute (legacy).",
+        title="Coordinates polygon for the defined zone."
     )
     distances: Optional[Union[str, list[str]]] = Field(
         default_factory=list,
-        title="Real-world distances",
-        description="Optional real-world distances for each side of the zone quadrilateral, used for speed or distance calculations. Must have exactly 4 values if set.",
+        title="Real-world distances for the sides of quadrilateral for the defined zone.",
     )
     inertia: int = Field(
         default=3,
-        title="Inertia frames",
+        title="Number of consecutive frames required for object to be considered present in the zone.",
         gt=0,
-        description="Number of consecutive frames an object must be detected in the zone before it is considered present. Helps filter out transient detections.",
     )
     loitering_time: int = Field(
         default=0,
         ge=0,
-        title="Loitering seconds",
-        description="Number of seconds an object must remain in the zone to be considered as loitering. Set to 0 to disable loitering detection.",
+        title="Number of seconds that an object must loiter to be considered in the zone.",
     )
     speed_threshold: Optional[float] = Field(
         default=None,
         ge=0.1,
-        title="Minimum speed",
-        description="Minimum speed (in real-world units if distances are set) required for an object to be considered present in the zone. Used for speed-based zone triggers.",
+        title="Minimum speed value for an object to be considered in the zone.",
     )
     objects: Union[str, list[str]] = Field(
         default_factory=list,
-        title="Trigger objects",
-        description="List of object types (from labelmap) that can trigger this zone. Can be a string or a list of strings. If empty, all objects are considered.",
+        title="List of objects that can trigger the zone.",
     )
     _color: Optional[tuple[int, int, int]] = PrivateAttr()
     _contour: np.ndarray = PrivateAttr()
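Several fields in the hunks above (`required_zones`, `objects`, `cameras`) are typed `Union[str, list[str]]` and paired with a `mode="before"` validator, visible in the context lines, so a single string can stand in for a list. A hedged sketch of what such a before-validator typically does (the model name, helper name, and comma-splitting here are illustrative assumptions, not taken from the diff):

```python
from typing import Union

from pydantic import BaseModel, Field, field_validator


class ZoneLike(BaseModel):
    objects: Union[str, list[str]] = Field(default_factory=list)

    @field_validator("objects", mode="before")
    @classmethod
    def _coerce_to_list(cls, value):
        # runs before standard validation: turn "person,car" into ["person", "car"]
        if isinstance(value, str):
            return [item.strip() for item in value.split(",") if item.strip()]
        return value


print(ZoneLike(objects="person, car").objects)  # ['person', 'car']
```

`mode="before"` is what lets the normalization happen prior to type checking, so the declared union stays permissive in YAML while the model always holds a list internally.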
@@ -8,21 +8,13 @@ __all__ = ["CameraGroupConfig"]


 class CameraGroupConfig(FrigateBaseModel):
+    """Represents a group of cameras."""

     cameras: Union[str, list[str]] = Field(
-        default_factory=list,
-        title="Camera list",
-        description="Array of camera names included in this group.",
-    )
-    icon: str = Field(
-        default="generic",
-        title="Group icon",
-        description="Icon used to represent the camera group in the UI.",
-    )
-    order: int = Field(
-        default=0,
-        title="Sort order",
-        description="Numeric order used to sort camera groups in the UI; larger numbers appear later.",
+        default_factory=list, title="List of cameras in this group."
     )
+    icon: str = Field(default="generic", title="Icon that represents camera group.")
+    order: int = Field(default=0, title="Sort order for group.")

     @field_validator("cameras", mode="before")
     @classmethod
@@ -43,43 +43,28 @@ class ObjectClassificationType(str, Enum):
 
 
 class AudioTranscriptionConfig(FrigateBaseModel):
-    enabled: bool = Field(
-        default=False,
-        title="Enable audio transcription",
-        description="Enable or disable automatic audio transcription for all cameras; can be overridden per-camera.",
-    )
+    enabled: bool = Field(default=False, title="Enable audio transcription.")
     language: str = Field(
         default="en",
-        title="Transcription language",
-        description="Language code used for transcription/translation (for example 'en' for English). See https://whisper-api.com/docs/languages/ for supported language codes.",
+        title="Language abbreviation to use for audio event transcription/translation.",
     )
     device: Optional[EnrichmentsDeviceEnum] = Field(
         default=EnrichmentsDeviceEnum.CPU,
-        title="Transcription device",
-        description="Device key (CPU/GPU) to run the transcription model on. Only NVIDIA CUDA GPUs are currently supported for transcription.",
+        title="The device used for audio transcription.",
     )
     model_size: str = Field(
-        default="small",
-        title="Model size",
-        description="Model size to use for offline audio event transcription.",
+        default="small", title="The size of the embeddings model used."
     )
     live_enabled: Optional[bool] = Field(
-        default=False,
-        title="Live transcription",
-        description="Enable streaming live transcription for audio as it is received.",
+        default=False, title="Enable live transcriptions."
     )
 
 
 class BirdClassificationConfig(FrigateBaseModel):
-    enabled: bool = Field(
-        default=False,
-        title="Bird classification",
-        description="Enable or disable bird classification.",
-    )
+    enabled: bool = Field(default=False, title="Enable bird classification.")
     threshold: float = Field(
         default=0.9,
-        title="Minimum score",
-        description="Minimum classification score required to accept a bird classification.",
+        title="Minimum classification score required to be considered a match.",
        gt=0.0,
        le=1.0,
    )
@@ -87,62 +72,42 @@ class BirdClassificationConfig(FrigateBaseModel):
 
 class CustomClassificationStateCameraConfig(FrigateBaseModel):
     crop: list[float, float, float, float] = Field(
-        title="Classification crop",
-        description="Crop coordinates to use for running classification on this camera.",
+        title="Crop of image frame on this camera to run classification on."
     )
 
 
 class CustomClassificationStateConfig(FrigateBaseModel):
     cameras: Dict[str, CustomClassificationStateCameraConfig] = Field(
-        title="Classification cameras",
-        description="Per-camera crop and settings for running state classification.",
+        title="Cameras to run classification on."
     )
     motion: bool = Field(
         default=False,
-        title="Run on motion",
-        description="If true, run classification when motion is detected within the specified crop.",
+        title="If classification should be run when motion is detected in the crop.",
     )
     interval: int | None = Field(
         default=None,
-        title="Classification interval",
-        description="Interval (seconds) between periodic classification runs for state classification.",
+        title="Interval to run classification on in seconds.",
         gt=0,
     )
 
 
 class CustomClassificationObjectConfig(FrigateBaseModel):
-    objects: list[str] = Field(
-        default_factory=list,
-        title="Classify objects",
-        description="List of object types to run object classification on.",
-    )
+    objects: list[str] = Field(title="Object types to classify.")
     classification_type: ObjectClassificationType = Field(
         default=ObjectClassificationType.sub_label,
-        title="Classification type",
-        description="Classification type applied: 'sub_label' (adds sub_label) or other supported types.",
+        title="Type of classification that is applied.",
     )
 
 
 class CustomClassificationConfig(FrigateBaseModel):
-    enabled: bool = Field(
-        default=True,
-        title="Enable model",
-        description="Enable or disable the custom classification model.",
-    )
-    name: str | None = Field(
-        default=None,
-        title="Model name",
-        description="Identifier for the custom classification model to use.",
-    )
+    enabled: bool = Field(default=True, title="Enable running the model.")
+    name: str | None = Field(default=None, title="Name of classification model.")
     threshold: float = Field(
-        default=0.8,
-        title="Score threshold",
-        description="Score threshold used to change the classification state.",
+        default=0.8, title="Classification score threshold to change the state."
     )
     save_attempts: int | None = Field(
         default=None,
-        title="Save attempts",
-        description="How many classification attempts to save for recent classifications UI.",
+        title="Number of classification attempts to save in the recent classifications tab. If not specified, defaults to 200 for object classification and 100 for state classification.",
         ge=0,
     )
     object_config: CustomClassificationObjectConfig | None = Field(default=None)
@@ -151,76 +116,47 @@ class CustomClassificationConfig(FrigateBaseModel):
 
 class ClassificationConfig(FrigateBaseModel):
     bird: BirdClassificationConfig = Field(
-        default_factory=BirdClassificationConfig,
-        title="Bird classification config",
-        description="Settings specific to bird classification models.",
+        default_factory=BirdClassificationConfig, title="Bird classification config."
     )
     custom: Dict[str, CustomClassificationConfig] = Field(
-        default={},
-        title="Custom Classification Models",
-        description="Configuration for custom classification models used for objects or state detection.",
+        default={}, title="Custom Classification Model Configs."
     )
 
 
 class SemanticSearchConfig(FrigateBaseModel):
-    enabled: bool = Field(
-        default=False,
-        title="Enable semantic search",
-        description="Enable or disable the semantic search feature.",
-    )
+    enabled: bool = Field(default=False, title="Enable semantic search.")
     reindex: Optional[bool] = Field(
-        default=False,
-        title="Reindex on startup",
-        description="Trigger a full reindex of historical tracked objects into the embeddings database.",
+        default=False, title="Reindex all tracked objects on startup."
     )
     model: Optional[SemanticSearchModelEnum] = Field(
         default=SemanticSearchModelEnum.jinav1,
-        title="Semantic search model",
-        description="The embeddings model to use for semantic search (for example 'jinav1').",
+        title="The CLIP model to use for semantic search.",
     )
     model_size: str = Field(
-        default="small",
-        title="Model size",
-        description="Select model size; 'small' runs on CPU and 'large' typically requires GPU.",
+        default="small", title="The size of the embeddings model used."
     )
     device: Optional[str] = Field(
         default=None,
-        title="Device",
+        title="The device key to use for semantic search.",
         description="This is an override, to target a specific device. See https://onnxruntime.ai/docs/execution-providers/ for more information",
     )
 
 
 class TriggerConfig(FrigateBaseModel):
     friendly_name: Optional[str] = Field(
-        None,
-        title="Friendly name",
-        description="Optional friendly name displayed in the UI for this trigger.",
-    )
-    enabled: bool = Field(
-        default=True,
-        title="Enable this trigger",
-        description="Enable or disable this semantic search trigger.",
-    )
-    type: TriggerType = Field(
-        default=TriggerType.DESCRIPTION,
-        title="Trigger type",
-        description="Type of trigger: 'thumbnail' (match against image) or 'description' (match against text).",
-    )
-    data: str = Field(
-        title="Trigger content",
-        description="Text phrase or thumbnail ID to match against tracked objects.",
+        None, title="Trigger friendly name used in the Frigate UI."
     )
+    enabled: bool = Field(default=True, title="Enable this trigger")
+    type: TriggerType = Field(default=TriggerType.DESCRIPTION, title="Type of trigger")
+    data: str = Field(title="Trigger content (text phrase or image ID)")
     threshold: float = Field(
-        title="Trigger threshold",
-        description="Minimum similarity score (0-1) required to activate this trigger.",
+        title="Confidence score required to run the trigger",
         default=0.8,
         gt=0.0,
         le=1.0,
     )
     actions: List[TriggerAction] = Field(
-        default=[],
-        title="Trigger actions",
-        description="List of actions to execute when trigger matches (notification, sub_label, attribute).",
+        default=[], title="Actions to perform when trigger is matched"
     )
 
     model_config = ConfigDict(extra="forbid", protected_namespaces=())
@@ -229,191 +165,147 @@ class TriggerConfig(FrigateBaseModel):
 class CameraSemanticSearchConfig(FrigateBaseModel):
     triggers: Dict[str, TriggerConfig] = Field(
         default={},
-        title="Triggers",
-        description="Actions and matching criteria for camera-specific semantic search triggers.",
+        title="Trigger actions on tracked objects that match existing thumbnails or descriptions",
     )
 
     model_config = ConfigDict(extra="forbid", protected_namespaces=())
 
 
 class FaceRecognitionConfig(FrigateBaseModel):
-    enabled: bool = Field(
-        default=False,
-        title="Enable face recognition",
-        description="Enable or disable face recognition for all cameras; can be overridden per-camera.",
-    )
+    enabled: bool = Field(default=False, title="Enable face recognition.")
     model_size: str = Field(
-        default="small",
-        title="Model size",
-        description="Model size to use for face embeddings (small/large); larger may require GPU.",
+        default="small", title="The size of the embeddings model used."
     )
     unknown_score: float = Field(
-        title="Unknown score threshold",
-        description="Distance threshold below which a face is considered a potential match (higher = stricter).",
+        title="Minimum face distance score required to be marked as a potential match.",
         default=0.8,
         gt=0.0,
         le=1.0,
     )
     detection_threshold: float = Field(
         default=0.7,
-        title="Detection threshold",
-        description="Minimum detection confidence required to consider a face detection valid.",
+        title="Minimum face detection score required to be considered a face.",
         gt=0.0,
         le=1.0,
     )
     recognition_threshold: float = Field(
         default=0.9,
-        title="Recognition threshold",
-        description="Face embedding distance threshold to consider two faces a match.",
+        title="Minimum face distance score required to be considered a match.",
         gt=0.0,
         le=1.0,
     )
     min_area: int = Field(
-        default=750,
-        title="Minimum face area",
-        description="Minimum area (pixels) of a detected face box required to attempt recognition.",
+        default=750, title="Min area of face box to consider running face recognition."
     )
     min_faces: int = Field(
         default=1,
         gt=0,
         le=6,
-        title="Minimum faces",
-        description="Minimum number of face recognitions required before applying a recognized sub-label to a person.",
+        title="Min face recognitions for the sub label to be applied to the person object.",
     )
     save_attempts: int = Field(
         default=200,
         ge=0,
-        title="Save attempts",
-        description="Number of face recognition attempts to retain for recent recognition UI.",
+        title="Number of face attempts to save in the recent recognitions tab.",
     )
     blur_confidence_filter: bool = Field(
-        default=True,
-        title="Blur confidence filter",
-        description="Adjust confidence scores based on image blur to reduce false positives for poor quality faces.",
+        default=True, title="Apply blur quality filter to face confidence."
     )
     device: Optional[str] = Field(
         default=None,
-        title="Device",
+        title="The device key to use for face recognition.",
         description="This is an override, to target a specific device. See https://onnxruntime.ai/docs/execution-providers/ for more information",
     )
 
 
 class CameraFaceRecognitionConfig(FrigateBaseModel):
-    enabled: bool = Field(
-        default=False,
-        title="Enable face recognition",
-        description="Enable or disable face recognition.",
-    )
+    enabled: bool = Field(default=False, title="Enable face recognition.")
     min_area: int = Field(
-        default=750,
-        title="Minimum face area",
-        description="Minimum area (pixels) of a detected face box required to attempt recognition.",
+        default=750, title="Min area of face box to consider running face recognition."
     )
 
     model_config = ConfigDict(extra="forbid", protected_namespaces=())
 
 
 class ReplaceRule(FrigateBaseModel):
-    pattern: str = Field(..., title="Regex pattern")
-    replacement: str = Field(..., title="Replacement string")
+    pattern: str = Field(..., title="Regex pattern to match.")
+    replacement: str = Field(
+        ..., title="Replacement string (supports backrefs like '\\1')."
+    )
 
 
 class LicensePlateRecognitionConfig(FrigateBaseModel):
-    enabled: bool = Field(
-        default=False,
-        title="Enable LPR",
-        description="Enable or disable license plate recognition for all cameras; can be overridden per-camera.",
-    )
+    enabled: bool = Field(default=False, title="Enable license plate recognition.")
     model_size: str = Field(
-        default="small",
-        title="Model size",
-        description="Model size used for text detection/recognition. Most users should use 'small'.",
+        default="small", title="The size of the embeddings model used."
     )
     detection_threshold: float = Field(
         default=0.7,
-        title="Detection threshold",
-        description="Detection confidence threshold to begin running OCR on a suspected plate.",
+        title="License plate object confidence score required to begin running recognition.",
         gt=0.0,
         le=1.0,
     )
     min_area: int = Field(
         default=1000,
-        title="Minimum plate area",
-        description="Minimum plate area (pixels) required to attempt recognition.",
+        title="Minimum area of license plate to begin running recognition.",
     )
     recognition_threshold: float = Field(
         default=0.9,
-        title="Recognition threshold",
-        description="Confidence threshold required for recognized plate text to be attached as a sub-label.",
+        title="Recognition confidence score required to add the plate to the object as a sub label.",
        gt=0.0,
        le=1.0,
    )
    min_plate_length: int = Field(
        default=4,
-        title="Min plate length",
-        description="Minimum number of characters a recognized plate must contain to be considered valid.",
+        title="Minimum number of characters a license plate must have to be added to the object as a sub label.",
    )
    format: Optional[str] = Field(
        default=None,
-        title="Plate format regex",
-        description="Optional regex to validate recognized plate strings against an expected format.",
+        title="Regular expression for the expected format of license plate.",
    )
    match_distance: int = Field(
        default=1,
-        title="Match distance",
-        description="Number of character mismatches allowed when comparing detected plates to known plates.",
+        title="Allow this number of missing/incorrect characters to still cause a detected plate to match a known plate.",
        ge=0,
    )
    known_plates: Optional[Dict[str, List[str]]] = Field(
-        default={},
-        title="Known plates",
-        description="List of plates or regexes to specially track or alert on.",
+        default={}, title="Known plates to track (strings or regular expressions)."
    )
    enhancement: int = Field(
        default=0,
-        title="Enhancement level",
-        description="Enhancement level (0-10) to apply to plate crops prior to OCR; higher values may not always improve results, levels above 5 may only work with night time plates and should be used with caution.",
+        title="Amount of contrast adjustment and denoising to apply to license plate images before recognition.",
        ge=0,
        le=10,
    )
    debug_save_plates: bool = Field(
        default=False,
-        title="Save debug plates",
-        description="Save plate crop images for debugging LPR performance.",
+        title="Save plates captured for LPR for debugging purposes.",
    )
    device: Optional[str] = Field(
        default=None,
-        title="Device",
+        title="The device key to use for LPR.",
        description="This is an override, to target a specific device. See https://onnxruntime.ai/docs/execution-providers/ for more information",
    )
    replace_rules: List[ReplaceRule] = Field(
        default_factory=list,
-        title="Replacement rules",
-        description="Regex replacement rules used to normalize detected plate strings before matching.",
+        title="List of regex replacement rules for normalizing detected plates. Each rule has 'pattern' and 'replacement'.",
    )


class CameraLicensePlateRecognitionConfig(FrigateBaseModel):
-    enabled: bool = Field(
-        default=False,
-        title="Enable LPR",
-        description="Enable or disable LPR on this camera.",
-    )
+    enabled: bool = Field(default=False, title="Enable license plate recognition.")
    expire_time: int = Field(
        default=3,
-        title="Expire seconds",
-        description="Time in seconds after which an unseen plate is expired from the tracker (for dedicated LPR cameras only).",
+        title="Expire plates not seen after number of seconds (for dedicated LPR cameras only).",
        gt=0,
    )
    min_area: int = Field(
        default=1000,
-        title="Minimum plate area",
-        description="Minimum plate area (pixels) required to attempt recognition.",
+        title="Minimum area of license plate to begin running recognition.",
    )
    enhancement: int = Field(
        default=0,
-        title="Enhancement level",
-        description="Enhancement level (0-10) to apply to plate crops prior to OCR; higher values may not always improve results, levels above 5 may only work with night time plates and should be used with caution.",
+        title="Amount of contrast adjustment and denoising to apply to license plate images before recognition.",
        ge=0,
        le=10,
    )
@@ -422,18 +314,12 @@ class CameraLicensePlateRecognitionConfig(FrigateBaseModel):
 
 
 class CameraAudioTranscriptionConfig(FrigateBaseModel):
-    enabled: bool = Field(
-        default=False,
-        title="Enable transcription",
-        description="Enable or disable manually triggered audio event transcription.",
-    )
+    enabled: bool = Field(default=False, title="Enable audio transcription.")
     enabled_in_config: Optional[bool] = Field(
-        default=None, title="Original transcription state"
+        default=None, title="Keep track of original state of audio transcription."
     )
     live_enabled: Optional[bool] = Field(
-        default=False,
-        title="Live transcription",
-        description="Enable streaming live transcription for audio as it is received.",
+        default=False, title="Enable live transcriptions."
     )
 
     model_config = ConfigDict(extra="forbid", protected_namespaces=())
@@ -45,7 +45,7 @@ from .camera.audio import AudioConfig
 from .camera.birdseye import BirdseyeConfig
 from .camera.detect import DetectConfig
 from .camera.ffmpeg import FfmpegConfig
-from .camera.genai import GenAIConfig, GenAIRoleEnum
+from .camera.genai import GenAIConfig
 from .camera.motion import MotionConfig
 from .camera.notification import NotificationConfig
 from .camera.objects import FilterConfig, ObjectConfig
@@ -299,189 +299,116 @@ def verify_lpr_and_face(
 
 
 class FrigateConfig(FrigateBaseModel):
-    version: Optional[str] = Field(
-        default=None,
-        title="Current config version",
-        description="Numeric or string version of the active configuration to help detect migrations or format changes.",
-    )
+    version: Optional[str] = Field(default=None, title="Current config version.")
     safe_mode: bool = Field(
-        default=False,
-        title="Safe mode",
-        description="When enabled, start Frigate in safe mode with reduced features for troubleshooting.",
+        default=False, title="If Frigate should be started in safe mode."
     )
 
     # Fields that install global state should be defined first, so that their validators run first.
     environment_vars: EnvVars = Field(
-        default_factory=dict,
-        title="Environment variables",
-        description="Key/value pairs of environment variables to set for the Frigate process in Home Assistant OS. Non-HAOS users must use Docker environment variable configuration instead.",
+        default_factory=dict, title="Frigate environment variables."
     )
     logger: LoggerConfig = Field(
         default_factory=LoggerConfig,
-        title="Logging",
-        description="Controls default log verbosity and per-component log level overrides.",
+        title="Logging configuration.",
         validate_default=True,
     )
 
     # Global config
-    auth: AuthConfig = Field(
-        default_factory=AuthConfig,
-        title="Authentication",
-        description="Authentication and session-related settings including cookie and rate limit options.",
-    )
+    auth: AuthConfig = Field(default_factory=AuthConfig, title="Auth configuration.")
     database: DatabaseConfig = Field(
-        default_factory=DatabaseConfig,
-        title="Database",
-        description="Settings for the SQLite database used by Frigate to store tracked object and recording metadata.",
+        default_factory=DatabaseConfig, title="Database configuration."
     )
     go2rtc: RestreamConfig = Field(
-        default_factory=RestreamConfig,
-        title="go2rtc",
-        description="Settings for the integrated go2rtc restreaming service used for live stream relaying and translation.",
-    )
-    mqtt: MqttConfig = Field(
-        title="MQTT",
-        description="Settings for connecting and publishing telemetry, snapshots, and event details to an MQTT broker.",
+        default_factory=RestreamConfig, title="Global restream configuration."
     )
+    mqtt: MqttConfig = Field(title="MQTT configuration.")
     notifications: NotificationConfig = Field(
-        default_factory=NotificationConfig,
-        title="Notifications",
-        description="Settings to enable and control notifications for all cameras; can be overridden per-camera.",
+        default_factory=NotificationConfig, title="Global notification configuration."
    )
    networking: NetworkingConfig = Field(
-        default_factory=NetworkingConfig,
-        title="Networking",
-        description="Network-related settings such as IPv6 enablement for Frigate endpoints.",
+        default_factory=NetworkingConfig, title="Networking configuration"
    )
    proxy: ProxyConfig = Field(
-        default_factory=ProxyConfig,
-        title="Proxy",
-        description="Settings for integrating Frigate behind a reverse proxy that passes authenticated user headers.",
+        default_factory=ProxyConfig, title="Proxy configuration."
    )
    telemetry: TelemetryConfig = Field(
-        default_factory=TelemetryConfig,
-        title="Telemetry",
-        description="System telemetry and stats options including GPU and network bandwidth monitoring.",
-    )
-    tls: TlsConfig = Field(
-        default_factory=TlsConfig,
-        title="TLS",
-        description="TLS settings for Frigate's web endpoints (port 8971).",
-    )
-    ui: UIConfig = Field(
-        default_factory=UIConfig,
-        title="UI",
-        description="User interface preferences such as timezone, time/date formatting, and units.",
+        default_factory=TelemetryConfig, title="Telemetry configuration."
    )
+    tls: TlsConfig = Field(default_factory=TlsConfig, title="TLS configuration.")
+    ui: UIConfig = Field(default_factory=UIConfig, title="UI configuration.")

    # Detector config
    detectors: Dict[str, BaseDetectorConfig] = Field(
        default=DEFAULT_DETECTORS,
-        title="Detector hardware",
-        description="Configuration for object detectors (CPU, GPU, ONNX backends) and any detector-specific model settings.",
+        title="Detector hardware configuration.",
    )
    model: ModelConfig = Field(
-        default_factory=ModelConfig,
-        title="Detection model",
-        description="Settings to configure a custom object detection model and its input shape.",
+        default_factory=ModelConfig, title="Detection model configuration."
    )

-    # GenAI config (named provider configs: name -> GenAIConfig)
-    genai: Dict[str, GenAIConfig] = Field(
-        default_factory=dict,
-        title="Generative AI configuration (named providers).",
-        description="Settings for integrated generative AI providers used to generate object descriptions and review summaries.",
+    # GenAI config
+    genai: GenAIConfig = Field(
+        default_factory=GenAIConfig, title="Generative AI configuration."
    )

    # Camera config
-    cameras: Dict[str, CameraConfig] = Field(title="Cameras", description="Cameras")
+    cameras: Dict[str, CameraConfig] = Field(title="Camera configuration.")
    audio: AudioConfig = Field(
-        default_factory=AudioConfig,
-        title="Audio events",
-        description="Settings for audio-based event detection for all cameras; can be overridden per-camera.",
+        default_factory=AudioConfig, title="Global Audio events configuration."
    )
    birdseye: BirdseyeConfig = Field(
-        default_factory=BirdseyeConfig,
-        title="Birdseye",
-        description="Settings for the Birdseye composite view that composes multiple camera feeds into a single layout.",
+        default_factory=BirdseyeConfig, title="Birdseye configuration."
    )
    detect: DetectConfig = Field(
-        default_factory=DetectConfig,
-        title="Object Detection",
-        description="Settings for the detection/detect role used to run object detection and initialize trackers.",
+        default_factory=DetectConfig, title="Global object tracking configuration."
    )
    ffmpeg: FfmpegConfig = Field(
-        default_factory=FfmpegConfig,
-        title="FFmpeg",
-        description="FFmpeg settings including binary path, args, hwaccel options, and per-role output args.
+        default_factory=FfmpegConfig, title="Global FFmpeg configuration."
|
|
||||||
)
|
)
|
||||||
live: CameraLiveConfig = Field(
|
live: CameraLiveConfig = Field(
|
||||||
default_factory=CameraLiveConfig,
|
default_factory=CameraLiveConfig, title="Live playback settings."
|
||||||
title="Live playback",
|
|
||||||
description="Settings used by the Web UI to control live stream resolution and quality.",
|
|
||||||
)
|
)
|
||||||
motion: Optional[MotionConfig] = Field(
|
motion: Optional[MotionConfig] = Field(
|
||||||
default=None,
|
default=None, title="Global motion detection configuration."
|
||||||
title="Motion detection",
|
|
||||||
description="Default motion detection settings applied to cameras unless overridden per-camera.",
|
|
||||||
)
|
)
|
||||||
objects: ObjectConfig = Field(
|
objects: ObjectConfig = Field(
|
||||||
default_factory=ObjectConfig,
|
default_factory=ObjectConfig, title="Global object configuration."
|
||||||
title="Objects",
|
|
||||||
description="Object tracking defaults including which labels to track and per-object filters.",
|
|
||||||
)
|
)
|
||||||
record: RecordConfig = Field(
|
record: RecordConfig = Field(
|
||||||
default_factory=RecordConfig,
|
default_factory=RecordConfig, title="Global record configuration."
|
||||||
title="Recording",
|
|
||||||
description="Recording and retention settings applied to cameras unless overridden per-camera.",
|
|
||||||
)
|
)
|
||||||
review: ReviewConfig = Field(
|
review: ReviewConfig = Field(
|
||||||
default_factory=ReviewConfig,
|
default_factory=ReviewConfig, title="Review configuration."
|
||||||
title="Review",
|
|
||||||
description="Settings that control alerts, detections, and GenAI review summaries used by the UI and storage.",
|
|
||||||
)
|
)
|
||||||
snapshots: SnapshotsConfig = Field(
|
snapshots: SnapshotsConfig = Field(
|
||||||
default_factory=SnapshotsConfig,
|
default_factory=SnapshotsConfig, title="Global snapshots configuration."
|
||||||
title="Snapshots",
|
|
||||||
description="Settings for saved JPEG snapshots of tracked objects for all cameras; can be overridden per-camera.",
|
|
||||||
)
|
)
|
||||||
timestamp_style: TimestampStyleConfig = Field(
|
timestamp_style: TimestampStyleConfig = Field(
|
||||||
default_factory=TimestampStyleConfig,
|
default_factory=TimestampStyleConfig,
|
||||||
title="Timestamp style",
|
title="Global timestamp style configuration.",
|
||||||
description="Styling options for in-feed timestamps applied to debug view and snapshots.",
|
|
||||||
)
|
)
|
||||||
|
|
||||||
# Classification Config
|
# Classification Config
|
||||||
audio_transcription: AudioTranscriptionConfig = Field(
|
audio_transcription: AudioTranscriptionConfig = Field(
|
||||||
default_factory=AudioTranscriptionConfig,
|
default_factory=AudioTranscriptionConfig, title="Audio transcription config."
|
||||||
title="Audio transcription",
|
|
||||||
description="Settings for live and speech audio transcription used for events and live captions.",
|
|
||||||
)
|
)
|
||||||
classification: ClassificationConfig = Field(
|
classification: ClassificationConfig = Field(
|
||||||
default_factory=ClassificationConfig,
|
default_factory=ClassificationConfig, title="Object classification config."
|
||||||
title="Object classification",
|
|
||||||
description="Settings for classification models used to refine object labels or state classification.",
|
|
||||||
)
|
)
|
||||||
semantic_search: SemanticSearchConfig = Field(
|
semantic_search: SemanticSearchConfig = Field(
|
||||||
default_factory=SemanticSearchConfig,
|
default_factory=SemanticSearchConfig, title="Semantic search configuration."
|
||||||
title="Semantic Search",
|
|
||||||
description="Settings for Semantic Search which builds and queries object embeddings to find similar items.",
|
|
||||||
)
|
)
|
||||||
face_recognition: FaceRecognitionConfig = Field(
|
face_recognition: FaceRecognitionConfig = Field(
|
||||||
default_factory=FaceRecognitionConfig,
|
default_factory=FaceRecognitionConfig, title="Face recognition config."
|
||||||
title="Face recognition",
|
|
||||||
description="Settings for face detection and recognition for all cameras; can be overridden per-camera.",
|
|
||||||
)
|
)
|
||||||
lpr: LicensePlateRecognitionConfig = Field(
|
lpr: LicensePlateRecognitionConfig = Field(
|
||||||
default_factory=LicensePlateRecognitionConfig,
|
default_factory=LicensePlateRecognitionConfig,
|
||||||
title="License Plate Recognition",
|
title="License Plate recognition config.",
|
||||||
description="License plate recognition settings including detection thresholds, formatting, and known plates.",
|
|
||||||
)
|
)
|
||||||
|
|
||||||
camera_groups: Dict[str, CameraGroupConfig] = Field(
|
camera_groups: Dict[str, CameraGroupConfig] = Field(
|
||||||
default_factory=dict,
|
default_factory=dict, title="Camera group configuration"
|
||||||
title="Camera groups",
|
|
||||||
description="Configuration for named camera groups used to organize cameras in the UI.",
|
|
||||||
)
|
)
|
||||||
|
|
||||||
_plus_api: PlusApi
|
_plus_api: PlusApi
|
||||||
@@ -504,18 +431,6 @@ class FrigateConfig(FrigateBaseModel):
         # set notifications state
         self.notifications.enabled_in_config = self.notifications.enabled
-
-        # validate genai: each role (tools, vision, embeddings) at most once
-        role_to_name: dict[GenAIRoleEnum, str] = {}
-        for name, genai_cfg in self.genai.items():
-            for role in genai_cfg.roles:
-                if role in role_to_name:
-                    raise ValueError(
-                        f"GenAI role '{role.value}' is assigned to both "
-                        f"'{role_to_name[role]}' and '{name}'; each role must have "
-                        "exactly one provider."
-                    )
-                role_to_name[role] = name
 
         # set default min_score for object attributes
         for attribute in self.model.all_attributes:
             if not self.objects.filters.get(attribute):
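The removed validation above is self-contained Python. Restated as a standalone sketch for readability (the role names and error message are taken verbatim from the hunk; expressing it as a free function over a plain dict, rather than a model validator on the config class, is an assumption for illustration):

```python
from enum import Enum


class GenAIRoleEnum(str, Enum):
    # role values as listed in the removed comment (assumed member names)
    tools = "tools"
    vision = "vision"
    embeddings = "embeddings"


def validate_unique_roles(genai: dict[str, list[GenAIRoleEnum]]) -> None:
    """Raise ValueError if any GenAI role is assigned to more than one provider."""
    role_to_name: dict[GenAIRoleEnum, str] = {}
    for name, roles in genai.items():
        for role in roles:
            if role in role_to_name:
                raise ValueError(
                    f"GenAI role '{role.value}' is assigned to both "
                    f"'{role_to_name[role]}' and '{name}'; each role must have "
                    "exactly one provider."
                )
            role_to_name[role] = name
```

Because the check walks every provider's role list once and records the first owner of each role, a second owner fails fast with both provider names in the message.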
@@ -560,9 +475,6 @@ class FrigateConfig(FrigateBaseModel):
 
             # users should not set model themselves
             if detector_config.model:
-                logger.warning(
-                    "The model key should be specified at the root level of the config, not under detectors. The nested model key will be ignored."
-                )
                 detector_config.model = None
 
             model_config = self.model.model_dump(exclude_unset=True, warnings="none")
@@ -8,8 +8,4 @@ __all__ = ["DatabaseConfig"]
 
 
 class DatabaseConfig(FrigateBaseModel):
-    path: str = Field(
-        default=DEFAULT_DB_PATH,
-        title="Database path",
-        description="Filesystem path where the Frigate SQLite database file will be stored.",
-    ) # noqa: F821
+    path: str = Field(default=DEFAULT_DB_PATH, title="Database path.") # noqa: F821
@@ -9,15 +9,9 @@ __all__ = ["LoggerConfig"]
 
 
 class LoggerConfig(FrigateBaseModel):
-    default: LogLevel = Field(
-        default=LogLevel.info,
-        title="Logging level",
-        description="Default global log verbosity (debug, info, warning, error).",
-    )
+    default: LogLevel = Field(default=LogLevel.info, title="Default logging level.")
    logs: dict[str, LogLevel] = Field(
-        default_factory=dict,
-        title="Per-process log level",
-        description="Per-component log level overrides to increase or decrease verbosity for specific modules.",
+        default_factory=dict, title="Log level for specified processes."
     )
 
     @model_validator(mode="after")
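Both sides of this hunk give `logs` a `default_factory=dict` rather than `default={}`. The reason is general Python behavior, not pydantic-specific: a shared mutable default would leak state between config instances. A stdlib `dataclasses` sketch of the same idea (class and field names borrowed from `LoggerConfig`; the dataclass form is illustrative, not Frigate's implementation):

```python
from dataclasses import dataclass, field


@dataclass
class LoggerConfig:
    # a fresh dict is built per instance; default={} would be rejected by
    # dataclasses precisely because it would be shared across instances
    logs: dict[str, str] = field(default_factory=dict)


a = LoggerConfig()
b = LoggerConfig()
a.logs["frigate.mqtt"] = "debug"  # must not appear in b.logs
```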
@@ -12,73 +12,25 @@ __all__ = ["MqttConfig"]
 
 
 class MqttConfig(FrigateBaseModel):
-    enabled: bool = Field(
-        default=True,
-        title="Enable MQTT",
-        description="Enable or disable MQTT integration for state, events, and snapshots.",
-    )
-    host: str = Field(
-        default="",
-        title="MQTT host",
-        description="Hostname or IP address of the MQTT broker.",
-    )
-    port: int = Field(
-        default=1883,
-        title="MQTT port",
-        description="Port of the MQTT broker (usually 1883 for plain MQTT).",
-    )
-    topic_prefix: str = Field(
-        default="frigate",
-        title="Topic prefix",
-        description="MQTT topic prefix for all Frigate topics; must be unique if running multiple instances.",
-    )
-    client_id: str = Field(
-        default="frigate",
-        title="Client ID",
-        description="Client identifier used when connecting to the MQTT broker; should be unique per instance.",
-    )
+    enabled: bool = Field(default=True, title="Enable MQTT Communication.")
+    host: str = Field(default="", title="MQTT Host")
+    port: int = Field(default=1883, title="MQTT Port")
+    topic_prefix: str = Field(default="frigate", title="MQTT Topic Prefix")
+    client_id: str = Field(default="frigate", title="MQTT Client ID")
     stats_interval: int = Field(
-        default=60,
-        ge=FREQUENCY_STATS_POINTS,
-        title="Stats interval",
-        description="Interval in seconds for publishing system and camera stats to MQTT.",
-    )
-    user: Optional[EnvString] = Field(
-        default=None,
-        title="MQTT username",
-        description="Optional MQTT username; can be provided via environment variables or secrets.",
+        default=60, ge=FREQUENCY_STATS_POINTS, title="MQTT Camera Stats Interval"
     )
+    user: Optional[EnvString] = Field(default=None, title="MQTT Username")
     password: Optional[EnvString] = Field(
-        default=None,
-        title="MQTT password",
-        description="Optional MQTT password; can be provided via environment variables or secrets.",
-        validate_default=True,
-    )
-    tls_ca_certs: Optional[str] = Field(
-        default=None,
-        title="TLS CA certs",
-        description="Path to CA certificate for TLS connections to the broker (for self-signed certs).",
+        default=None, title="MQTT Password", validate_default=True
     )
+    tls_ca_certs: Optional[str] = Field(default=None, title="MQTT TLS CA Certificates")
     tls_client_cert: Optional[str] = Field(
-        default=None,
-        title="Client cert",
-        description="Client certificate path for TLS mutual authentication; do not set user/password when using client certs.",
-    )
-    tls_client_key: Optional[str] = Field(
-        default=None,
-        title="Client key",
-        description="Private key path for the client certificate.",
-    )
-    tls_insecure: Optional[bool] = Field(
-        default=None,
-        title="TLS insecure",
-        description="Allow insecure TLS connections by skipping hostname verification (not recommended).",
-    )
-    qos: int = Field(
-        default=0,
-        title="MQTT QoS",
-        description="Quality of Service level for MQTT publishes/subscriptions (0, 1, or 2).",
+        default=None, title="MQTT TLS Client Certificate"
     )
+    tls_client_key: Optional[str] = Field(default=None, title="MQTT TLS Client Key")
+    tls_insecure: Optional[bool] = Field(default=None, title="MQTT TLS Insecure")
+    qos: int = Field(default=0, title="MQTT QoS")
 
     @model_validator(mode="after")
     def user_requires_pass(self, info: ValidationInfo) -> Self:
@@ -8,34 +8,20 @@ __all__ = ["IPv6Config", "ListenConfig", "NetworkingConfig"]
 
 
 class IPv6Config(FrigateBaseModel):
-    enabled: bool = Field(
-        default=False,
-        title="Enable IPv6",
-        description="Enable IPv6 support for Frigate services (API and UI) where applicable.",
-    )
+    enabled: bool = Field(default=False, title="Enable IPv6 for port 5000 and/or 8971")
 
 
 class ListenConfig(FrigateBaseModel):
     internal: Union[int, str] = Field(
-        default=5000,
-        title="Internal port",
-        description="Internal listening port for Frigate (default 5000).",
+        default=5000, title="Internal listening port for Frigate"
     )
     external: Union[int, str] = Field(
-        default=8971,
-        title="External port",
-        description="External listening port for Frigate (default 8971).",
+        default=8971, title="External listening port for Frigate"
     )
 
 
 class NetworkingConfig(FrigateBaseModel):
-    ipv6: IPv6Config = Field(
-        default_factory=IPv6Config,
-        title="IPv6 configuration",
-        description="IPv6-specific settings for Frigate network services.",
-    )
+    ipv6: IPv6Config = Field(default_factory=IPv6Config, title="IPv6 configuration")
     listen: ListenConfig = Field(
-        default_factory=ListenConfig,
-        title="Listening ports configuration",
-        description="Configuration for internal and external listening ports. This is for advanced users. For the majority of use cases it's recommended to change the ports section of your Docker compose file.",
+        default_factory=ListenConfig, title="Listening ports configuration"
     )
@@ -10,47 +10,36 @@ __all__ = ["ProxyConfig", "HeaderMappingConfig"]
 
 class HeaderMappingConfig(FrigateBaseModel):
     user: str = Field(
-        default=None,
-        title="User header",
-        description="Header containing the authenticated username provided by the upstream proxy.",
+        default=None, title="Header name from upstream proxy to identify user."
     )
     role: str = Field(
         default=None,
-        title="Role header",
-        description="Header containing the authenticated user's role or groups from the upstream proxy.",
+        title="Header name from upstream proxy to identify user role.",
     )
     role_map: Optional[dict[str, list[str]]] = Field(
         default_factory=dict,
-        title=("Role mapping"),
-        description="Map upstream group values to Frigate roles (for example map admin groups to the admin role).",
+        title=("Mapping of Frigate roles to upstream group values. "),
     )
 
 
 class ProxyConfig(FrigateBaseModel):
     header_map: HeaderMappingConfig = Field(
         default_factory=HeaderMappingConfig,
-        title="Header mapping",
-        description="Map incoming proxy headers to Frigate user and role fields for proxy-based auth.",
+        title="Header mapping definitions for proxy user passing.",
     )
     logout_url: Optional[str] = Field(
-        default=None,
-        title="Logout URL",
-        description="URL to redirect users to when logging out via the proxy.",
+        default=None, title="Redirect url for logging out with proxy."
     )
     auth_secret: Optional[EnvString] = Field(
         default=None,
-        title="Proxy secret",
-        description="Optional secret checked against the X-Proxy-Secret header to verify trusted proxies.",
+        title="Secret value for proxy authentication.",
     )
     default_role: Optional[str] = Field(
-        default="viewer",
-        title="Default role",
-        description="Default role assigned to proxy-authenticated users when no role mapping applies (admin or viewer).",
+        default="viewer", title="Default role for proxy users."
     )
     separator: Optional[str] = Field(
         default=",",
-        title="Separator character",
-        description="Character used to split multiple values provided in proxy headers.",
+        title="The character used to separate values in a mapped header.",
     )
 
     @field_validator("separator", mode="before")
@@ -8,41 +8,22 @@ __all__ = ["TelemetryConfig", "StatsConfig"]
 
 
 class StatsConfig(FrigateBaseModel):
-    amd_gpu_stats: bool = Field(
-        default=True,
-        title="AMD GPU stats",
-        description="Enable collection of AMD GPU statistics if an AMD GPU is present.",
-    )
-    intel_gpu_stats: bool = Field(
-        default=True,
-        title="Intel GPU stats",
-        description="Enable collection of Intel GPU statistics if an Intel GPU is present.",
-    )
+    amd_gpu_stats: bool = Field(default=True, title="Enable AMD GPU stats.")
+    intel_gpu_stats: bool = Field(default=True, title="Enable Intel GPU stats.")
     network_bandwidth: bool = Field(
-        default=False,
-        title="Network bandwidth",
-        description="Enable per-process network bandwidth monitoring for camera ffmpeg processes and detectors (requires capabilities).",
+        default=False, title="Enable network bandwidth for ffmpeg processes."
     )
     intel_gpu_device: Optional[str] = Field(
-        default=None,
-        title="SR-IOV device",
-        description="Device identifier used when treating Intel GPUs as SR-IOV to fix GPU stats.",
+        default=None, title="Define the device to use when gathering SR-IOV stats."
     )
 
 
 class TelemetryConfig(FrigateBaseModel):
     network_interfaces: list[str] = Field(
         default=[],
-        title="Network interfaces",
-        description="List of network interface name prefixes to monitor for bandwidth statistics.",
+        title="Enabled network interfaces for bandwidth calculation.",
     )
     stats: StatsConfig = Field(
-        default_factory=StatsConfig,
-        title="System stats",
-        description="Options to enable/disable collection of various system and GPU statistics.",
-    )
-    version_check: bool = Field(
-        default=True,
-        title="Version check",
-        description="Enable an outbound check to detect if a newer Frigate version is available.",
+        default_factory=StatsConfig, title="System Stats Configuration"
     )
+    version_check: bool = Field(default=True, title="Enable latest version check.")
@@ -6,8 +6,4 @@ __all__ = ["TlsConfig"]
 
 
 class TlsConfig(FrigateBaseModel):
-    enabled: bool = Field(
-        default=True,
-        title="Enable TLS",
-        description="Enable TLS for Frigate's web UI and API on the configured TLS port.",
-    )
+    enabled: bool = Field(default=True, title="Enable TLS for port 8971")
@@ -27,28 +27,16 @@ class UnitSystemEnum(str, Enum):
 
 
 class UIConfig(FrigateBaseModel):
-    timezone: Optional[str] = Field(
-        default=None,
-        title="Timezone",
-        description="Optional timezone to display across the UI (defaults to browser local time if unset).",
-    )
+    timezone: Optional[str] = Field(default=None, title="Override UI timezone.")
     time_format: TimeFormatEnum = Field(
-        default=TimeFormatEnum.browser,
-        title="Time format",
-        description="Time format to use in the UI (browser, 12hour, or 24hour).",
+        default=TimeFormatEnum.browser, title="Override UI time format."
     )
     date_style: DateTimeStyleEnum = Field(
-        default=DateTimeStyleEnum.short,
-        title="Date style",
-        description="Date style to use in the UI (full, long, medium, short).",
+        default=DateTimeStyleEnum.short, title="Override UI dateStyle."
     )
     time_style: DateTimeStyleEnum = Field(
-        default=DateTimeStyleEnum.medium,
-        title="Time style",
-        description="Time style to use in the UI (full, long, medium, short).",
+        default=DateTimeStyleEnum.medium, title="Override UI timeStyle."
     )
     unit_system: UnitSystemEnum = Field(
-        default=UnitSystemEnum.metric,
-        title="Unit system",
-        description="Unit system for display (metric or imperial) used in the UI and MQTT.",
+        default=UnitSystemEnum.metric, title="The unit system to use for measurements."
     )
@@ -22,7 +22,7 @@ from .api import RealTimeProcessorApi
 try:
     from tflite_runtime.interpreter import Interpreter
 except ModuleNotFoundError:
-    from ai_edge_litert.interpreter import Interpreter
+    from tensorflow.lite.python.interpreter import Interpreter
 
 logger = logging.getLogger(__name__)
 
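This hunk (and the matching ones below) only swaps the fallback branch of a try/except import chain: prefer the lightweight `tflite_runtime` wheel, otherwise fall back to the interpreter bundled with full TensorFlow. The pattern itself is plain Python and can be expressed generically; the `first_available` helper here is illustrative, not part of Frigate:

```python
import importlib
from types import ModuleType


def first_available(*names: str) -> ModuleType:
    """Return the first importable module from names, mirroring the
    try/except ModuleNotFoundError fallback chain used in the diff."""
    for name in names:
        try:
            return importlib.import_module(name)
        except ModuleNotFoundError:
            continue  # try the next candidate
    raise ModuleNotFoundError(f"none of {names} is installed")
```

With this helper, the hunk's intent reads as `first_available("tflite_runtime.interpreter", "tensorflow.lite.python.interpreter")`; catching only `ModuleNotFoundError` (not the broader `ImportError`) means a candidate that exists but fails to initialize still surfaces its real error.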
@@ -32,7 +32,7 @@ from .api import RealTimeProcessorApi
 try:
     from tflite_runtime.interpreter import Interpreter
 except ModuleNotFoundError:
-    from ai_edge_litert.interpreter import Interpreter
+    from tensorflow.lite.python.interpreter import Interpreter
 
 logger = logging.getLogger(__name__)
 
@@ -73,6 +73,11 @@ class CustomStateClassificationProcessor(RealTimeProcessorApi):
         self.__build_detector()
 
     def __build_detector(self) -> None:
+        try:
+            from tflite_runtime.interpreter import Interpreter
+        except ModuleNotFoundError:
+            from tensorflow.lite.python.interpreter import Interpreter
+
         model_path = os.path.join(self.model_dir, "model.tflite")
         labelmap_path = os.path.join(self.model_dir, "labelmap.txt")
 
@@ -45,55 +45,30 @@ class ModelTypeEnum(str, Enum):
 
 
 class ModelConfig(BaseModel):
-    path: Optional[str] = Field(
-        None,
-        title="Custom Object detection model path",
-        description="Path to a custom detection model file (or plus://<model_id> for Frigate+ models).",
-    )
+    path: Optional[str] = Field(None, title="Custom Object detection model path.")
     labelmap_path: Optional[str] = Field(
-        None,
-        title="Label map for custom object detector",
-        description="Path to a labelmap file that maps numeric classes to string labels for the detector.",
-    )
-    width: int = Field(
-        default=320,
-        title="Object detection model input width",
-        description="Width of the model input tensor in pixels.",
-    )
-    height: int = Field(
-        default=320,
-        title="Object detection model input height",
-        description="Height of the model input tensor in pixels.",
+        None, title="Label map for custom object detector."
     )
+    width: int = Field(default=320, title="Object detection model input width.")
+    height: int = Field(default=320, title="Object detection model input height.")
     labelmap: Dict[int, str] = Field(
-        default_factory=dict,
-        title="Labelmap customization",
-        description="Overrides or remapping entries to merge into the standard labelmap.",
+        default_factory=dict, title="Labelmap customization."
     )
     attributes_map: Dict[str, list[str]] = Field(
         default=DEFAULT_ATTRIBUTE_LABEL_MAP,
-        title="Map of object labels to their attribute labels",
-        description="Mapping from object labels to attribute labels used to attach metadata (for example 'car' -> ['license_plate']).",
+        title="Map of object labels to their attribute labels.",
     )
     input_tensor: InputTensorEnum = Field(
-        default=InputTensorEnum.nhwc,
-        title="Model Input Tensor Shape",
-        description="Tensor format expected by the model: 'nhwc' or 'nchw'.",
+        default=InputTensorEnum.nhwc, title="Model Input Tensor Shape"
     )
     input_pixel_format: PixelFormatEnum = Field(
-        default=PixelFormatEnum.rgb,
-        title="Model Input Pixel Color Format",
-        description="Pixel colorspace expected by the model: 'rgb', 'bgr', or 'yuv'.",
+        default=PixelFormatEnum.rgb, title="Model Input Pixel Color Format"
     )
     input_dtype: InputDTypeEnum = Field(
-        default=InputDTypeEnum.int,
-        title="Model Input D Type",
-        description="Data type of the model input tensor (for example 'float32').",
+        default=InputDTypeEnum.int, title="Model Input D Type"
     )
     model_type: ModelTypeEnum = Field(
-        default=ModelTypeEnum.ssd,
-        title="Object Detection Model Type",
-        description="Detector model architecture type (ssd, yolox, yolonas) used by some detectors for optimization.",
+        default=ModelTypeEnum.ssd, title="Object Detection Model Type"
     )
     _merged_labelmap: Optional[Dict[int, str]] = PrivateAttr()
     _colormap: Dict[int, Tuple[int, int, int]] = PrivateAttr()
@@ -235,20 +210,12 @@ class ModelConfig(BaseModel):
 
 class BaseDetectorConfig(BaseModel):
     # the type field must be defined in all subclasses
-    type: str = Field(
-        default="cpu",
-        title="Detector Type",
-        description="Type of detector to use for object detection (for example 'cpu', 'edgetpu', 'openvino').",
-    )
+    type: str = Field(default="cpu", title="Detector Type")
     model: Optional[ModelConfig] = Field(
-        default=None,
-        title="Detector specific model configuration",
-        description="Detector-specific model configuration options (path, input size, etc.).",
+        default=None, title="Detector specific model configuration."
     )
     model_path: Optional[str] = Field(
-        default=None,
-        title="Detector specific model path",
-        description="File path to the detector model binary if required by the chosen detector.",
+        default=None, title="Detector specific model path."
     )
     model_config = ConfigDict(
         extra="allow", arbitrary_types_allowed=True, protected_namespaces=()
@@ -6,7 +6,7 @@ import numpy as np
 try:
     from tflite_runtime.interpreter import Interpreter, load_delegate
 except ModuleNotFoundError:
-    from ai_edge_litert.interpreter import Interpreter, load_delegate
+    from tensorflow.lite.python.interpreter import Interpreter, load_delegate
 
 
 logger = logging.getLogger(__name__)
@@ -1,6 +1,6 @@
 import logging
 
-from pydantic import ConfigDict, Field
+from pydantic import Field
 from typing_extensions import Literal
 
 from frigate.detectors.detection_api import DetectionApi
@@ -12,7 +12,7 @@ from ..detector_utils import tflite_detect_raw, tflite_init
 try:
     from tflite_runtime.interpreter import Interpreter
 except ModuleNotFoundError:
-    from ai_edge_litert.interpreter import Interpreter
+    from tensorflow.lite.python.interpreter import Interpreter


 logger = logging.getLogger(__name__)
@@ -21,18 +21,8 @@ DETECTOR_KEY = "cpu"


 class CpuDetectorConfig(BaseDetectorConfig):
-    """CPU TFLite detector that runs TensorFlow Lite models on the host CPU without hardware acceleration. Not recommended."""
-
-    model_config = ConfigDict(
-        title="CPU",
-    )
-
     type: Literal[DETECTOR_KEY]
-    num_threads: int = Field(
-        default=3,
-        title="Number of detection threads",
-        description="The number of threads used for CPU-based inference.",
-    )
+    num_threads: int = Field(default=3, title="Number of detection threads")


 class CpuTfl(DetectionApi):
@@ -4,7 +4,7 @@ import logging
 import numpy as np
 import requests
 from PIL import Image
-from pydantic import ConfigDict, Field
+from pydantic import Field
 from typing_extensions import Literal

 from frigate.detectors.detection_api import DetectionApi
@@ -16,28 +16,12 @@ DETECTOR_KEY = "deepstack"


 class DeepstackDetectorConfig(BaseDetectorConfig):
-    """DeepStack/CodeProject.AI detector that sends images to a remote DeepStack HTTP API for inference. Not recommended."""
-
-    model_config = ConfigDict(
-        title="DeepStack",
-    )
-
     type: Literal[DETECTOR_KEY]
     api_url: str = Field(
-        default="http://localhost:80/v1/vision/detection",
-        title="DeepStack API URL",
-        description="The URL of the DeepStack API.",
-    )
-    api_timeout: float = Field(
-        default=0.1,
-        title="DeepStack API timeout (in seconds)",
-        description="Maximum time allowed for a DeepStack API request.",
-    )
-    api_key: str = Field(
-        default="",
-        title="DeepStack API key (if required)",
-        description="Optional API key for authenticated DeepStack services.",
+        default="http://localhost:80/v1/vision/detection", title="DeepStack API URL"
     )
+    api_timeout: float = Field(default=0.1, title="DeepStack API timeout (in seconds)")
+    api_key: str = Field(default="", title="DeepStack API key (if required)")


 class DeepStack(DetectionApi):
@@ -2,7 +2,7 @@ import logging
 import queue

 import numpy as np
-from pydantic import ConfigDict, Field
+from pydantic import Field
 from typing_extensions import Literal

 from frigate.detectors.detection_api import DetectionApi
@@ -14,28 +14,10 @@ DETECTOR_KEY = "degirum"

 ### DETECTOR CONFIG ###
 class DGDetectorConfig(BaseDetectorConfig):
-    """DeGirum detector for running models via DeGirum cloud or local inference services."""
-
-    model_config = ConfigDict(
-        title="DeGirum",
-    )
-
     type: Literal[DETECTOR_KEY]
-    location: str = Field(
-        default=None,
-        title="Inference Location",
-        description="Location of the DeGirim inference engine (e.g. '@cloud', '127.0.0.1').",
-    )
-    zoo: str = Field(
-        default=None,
-        title="Model Zoo",
-        description="Path or URL to the DeGirum model zoo.",
-    )
-    token: str = Field(
-        default=None,
-        title="DeGirum Cloud Token",
-        description="Token for DeGirum Cloud access.",
-    )
+    location: str = Field(default=None, title="Inference Location")
+    zoo: str = Field(default=None, title="Model Zoo")
+    token: str = Field(default=None, title="DeGirum Cloud Token")


 ### ACTUAL DETECTOR ###
@@ -4,7 +4,7 @@ import os

 import cv2
 import numpy as np
-from pydantic import ConfigDict, Field
+from pydantic import Field
 from typing_extensions import Literal

 from frigate.detectors.detection_api import DetectionApi
@@ -13,7 +13,7 @@ from frigate.detectors.detector_config import BaseDetectorConfig, ModelTypeEnum
 try:
     from tflite_runtime.interpreter import Interpreter, load_delegate
 except ModuleNotFoundError:
-    from ai_edge_litert.interpreter import Interpreter, load_delegate
+    from tensorflow.lite.python.interpreter import Interpreter, load_delegate

 logger = logging.getLogger(__name__)

@@ -21,18 +21,8 @@ DETECTOR_KEY = "edgetpu"


 class EdgeTpuDetectorConfig(BaseDetectorConfig):
-    """EdgeTPU detector that runs TensorFlow Lite models compiled for Coral EdgeTPU using the EdgeTPU delegate."""
-
-    model_config = ConfigDict(
-        title="EdgeTPU",
-    )
-
     type: Literal[DETECTOR_KEY]
-    device: str = Field(
-        default=None,
-        title="Device Type",
-        description="The device to use for EdgeTPU inference (e.g. 'usb', 'pci').",
-    )
+    device: str = Field(default=None, title="Device Type")


 class EdgeTpuTfl(DetectionApi):
@@ -8,7 +8,7 @@ from typing import Dict, List, Optional, Tuple

 import cv2
 import numpy as np
-from pydantic import ConfigDict, Field
+from pydantic import Field
 from typing_extensions import Literal

 from frigate.const import MODEL_CACHE_DIR
@@ -410,15 +410,5 @@ class HailoDetector(DetectionApi):

 # ----------------- HailoDetectorConfig Class ----------------- #
 class HailoDetectorConfig(BaseDetectorConfig):
-    """Hailo-8/Hailo-8L detector using HEF models and the HailoRT SDK for inference on Hailo hardware."""
-
-    model_config = ConfigDict(
-        title="Hailo-8/Hailo-8L",
-    )
-
     type: Literal[DETECTOR_KEY]
-    device: str = Field(
-        default="PCIe",
-        title="Device Type",
-        description="The device to use for Hailo inference (e.g. 'PCIe', 'M.2').",
-    )
+    device: str = Field(default="PCIe", title="Device Type")
@@ -8,7 +8,7 @@ from queue import Queue

 import cv2
 import numpy as np
-from pydantic import BaseModel, ConfigDict, Field
+from pydantic import BaseModel, Field
 from typing_extensions import Literal

 from frigate.detectors.detection_api import DetectionApi
@@ -30,18 +30,8 @@ class ModelConfig(BaseModel):


 class MemryXDetectorConfig(BaseDetectorConfig):
-    """MemryX MX3 detector that runs compiled DFP models on MemryX accelerators."""
-
-    model_config = ConfigDict(
-        title="MemryX",
-    )
-
     type: Literal[DETECTOR_KEY]
-    device: str = Field(
-        default="PCIe",
-        title="Device Path",
-        description="The device to use for MemryX inference (e.g. 'PCIe').",
-    )
+    device: str = Field(default="PCIe", title="Device Path")


 class MemryXDetector(DetectionApi):
@@ -1,7 +1,7 @@
 import logging

 import numpy as np
-from pydantic import ConfigDict, Field
+from pydantic import Field
 from typing_extensions import Literal

 from frigate.detectors.detection_api import DetectionApi
@@ -23,18 +23,8 @@ DETECTOR_KEY = "onnx"


 class ONNXDetectorConfig(BaseDetectorConfig):
-    """ONNX detector for running ONNX models; will use available acceleration backends (CUDA/ROCm/OpenVINO) when available."""
-
-    model_config = ConfigDict(
-        title="ONNX",
-    )
-
     type: Literal[DETECTOR_KEY]
-    device: str = Field(
-        default="AUTO",
-        title="Device Type",
-        description="The device to use for ONNX inference (e.g. 'AUTO', 'CPU', 'GPU').",
-    )
+    device: str = Field(default="AUTO", title="Device Type")


 class ONNXDetector(DetectionApi):
@@ -2,7 +2,7 @@ import logging

 import numpy as np
 import openvino as ov
-from pydantic import ConfigDict, Field
+from pydantic import Field
 from typing_extensions import Literal

 from frigate.detectors.detection_api import DetectionApi
@@ -20,18 +20,8 @@ DETECTOR_KEY = "openvino"


 class OvDetectorConfig(BaseDetectorConfig):
-    """OpenVINO detector for AMD and Intel CPUs, Intel GPUs and Intel VPU hardware."""
-
-    model_config = ConfigDict(
-        title="OpenVINO",
-    )
-
     type: Literal[DETECTOR_KEY]
-    device: str = Field(
-        default=None,
-        title="Device Type",
-        description="The device to use for OpenVINO inference (e.g. 'CPU', 'GPU', 'NPU').",
-    )
+    device: str = Field(default=None, title="Device Type")


 class OvDetector(DetectionApi):
@@ -6,7 +6,7 @@ from typing import Literal

 import cv2
 import numpy as np
-from pydantic import ConfigDict, Field
+from pydantic import Field

 from frigate.const import MODEL_CACHE_DIR, SUPPORTED_RK_SOCS
 from frigate.detectors.detection_api import DetectionApi
@@ -29,20 +29,8 @@ model_cache_dir = os.path.join(MODEL_CACHE_DIR, "rknn_cache/")


 class RknnDetectorConfig(BaseDetectorConfig):
-    """RKNN detector for Rockchip NPUs; runs compiled RKNN models on Rockchip hardware."""
-
-    model_config = ConfigDict(
-        title="RKNN",
-    )
-
     type: Literal[DETECTOR_KEY]
-    num_cores: int = Field(
-        default=0,
-        ge=0,
-        le=3,
-        title="Number of NPU cores to use.",
-        description="The number of NPU cores to use (0 for auto).",
-    )
+    num_cores: int = Field(default=0, ge=0, le=3, title="Number of NPU cores to use.")


 class Rknn(DetectionApi):
@@ -2,7 +2,6 @@ import logging
 import os

 import numpy as np
-from pydantic import ConfigDict
 from typing_extensions import Literal

 from frigate.detectors.detection_api import DetectionApi
@@ -28,12 +27,6 @@ DETECTOR_KEY = "synaptics"


 class SynapDetectorConfig(BaseDetectorConfig):
-    """Synaptics NPU detector for models in .synap format using the Synap SDK on Synaptics hardware."""
-
-    model_config = ConfigDict(
-        title="Synaptics",
-    )
-
     type: Literal[DETECTOR_KEY]

@@ -1,6 +1,5 @@
 import logging

-from pydantic import ConfigDict
 from typing_extensions import Literal

 from frigate.detectors.detection_api import DetectionApi
@@ -19,12 +18,6 @@ DETECTOR_KEY = "teflon_tfl"


 class TeflonDetectorConfig(BaseDetectorConfig):
-    """Teflon delegate detector for TFLite using Mesa Teflon delegate library to accelerate inference on supported GPUs."""
-
-    model_config = ConfigDict(
-        title="Teflon",
-    )
-
     type: Literal[DETECTOR_KEY]

@@ -14,7 +14,7 @@ try:
 except ModuleNotFoundError:
     TRT_SUPPORT = False

-from pydantic import ConfigDict, Field
+from pydantic import Field
 from typing_extensions import Literal

 from frigate.detectors.detection_api import DetectionApi
@@ -46,16 +46,8 @@ if TRT_SUPPORT:


 class TensorRTDetectorConfig(BaseDetectorConfig):
-    """TensorRT detector for Nvidia Jetson devices using serialized TensorRT engines for accelerated inference."""
-
-    model_config = ConfigDict(
-        title="TensorRT",
-    )
-
     type: Literal[DETECTOR_KEY]
-    device: int = Field(
-        default=0, title="GPU Device Index", description="The GPU device index to use."
-    )
+    device: int = Field(default=0, title="GPU Device Index")


 class HostDeviceMem(object):
@@ -5,7 +5,7 @@ from typing import Any, List

 import numpy as np
 import zmq
-from pydantic import ConfigDict, Field
+from pydantic import Field
 from typing_extensions import Literal

 from frigate.detectors.detection_api import DetectionApi
@@ -17,28 +17,14 @@ DETECTOR_KEY = "zmq"


 class ZmqDetectorConfig(BaseDetectorConfig):
-    """ZMQ IPC detector that offloads inference to an external process via a ZeroMQ IPC endpoint."""
-
-    model_config = ConfigDict(
-        title="ZMQ IPC",
-    )
-
     type: Literal[DETECTOR_KEY]
     endpoint: str = Field(
-        default="ipc:///tmp/cache/zmq_detector",
-        title="ZMQ IPC endpoint",
-        description="The ZMQ endpoint to connect to.",
+        default="ipc:///tmp/cache/zmq_detector", title="ZMQ IPC endpoint"
     )
     request_timeout_ms: int = Field(
-        default=200,
-        title="ZMQ request timeout in milliseconds",
-        description="Timeout for ZMQ requests in milliseconds.",
-    )
-    linger_ms: int = Field(
-        default=0,
-        title="ZMQ socket linger in milliseconds",
-        description="Socket linger period in milliseconds.",
+        default=200, title="ZMQ request timeout in milliseconds"
     )
+    linger_ms: int = Field(default=0, title="ZMQ socket linger in milliseconds")


 class ZmqIpcDetector(DetectionApi):
@@ -59,7 +59,7 @@ from frigate.data_processing.real_time.license_plate import (
 from frigate.data_processing.types import DataProcessorMetrics, PostProcessDataEnum
 from frigate.db.sqlitevecq import SqliteVecQueueDatabase
 from frigate.events.types import EventTypeEnum, RegenerateDescriptionEnum
-from frigate.genai import GenAIClientManager
+from frigate.genai import get_genai_client
 from frigate.models import Event, Recordings, ReviewSegment, Trigger
 from frigate.util.builtin import serialize
 from frigate.util.file import get_event_thumbnail_bytes
@@ -144,7 +144,7 @@ class EmbeddingMaintainer(threading.Thread):
         self.frame_manager = SharedMemoryFrameManager()

         self.detected_license_plates: dict[str, dict[str, Any]] = {}
-        self.genai_manager = GenAIClientManager(config)
+        self.genai_client = get_genai_client(config)

         # model runners to share between realtime and post processors
         if self.config.lpr.enabled:
@@ -203,15 +203,12 @@ class EmbeddingMaintainer(threading.Thread):
         # post processors
         self.post_processors: list[PostProcessorApi] = []

-        if self.genai_manager.vision_client is not None and any(
+        if self.genai_client is not None and any(
             c.review.genai.enabled_in_config for c in self.config.cameras.values()
         ):
             self.post_processors.append(
                 ReviewDescriptionProcessor(
-                    self.config,
-                    self.requestor,
-                    self.metrics,
-                    self.genai_manager.vision_client,
+                    self.config, self.requestor, self.metrics, self.genai_client
                 )
             )

@@ -249,7 +246,7 @@ class EmbeddingMaintainer(threading.Thread):
             )
             self.post_processors.append(semantic_trigger_processor)

-        if self.genai_manager.vision_client is not None and any(
+        if self.genai_client is not None and any(
             c.objects.genai.enabled_in_config for c in self.config.cameras.values()
         ):
             self.post_processors.append(
@@ -258,7 +255,7 @@ class EmbeddingMaintainer(threading.Thread):
                     self.embeddings,
                     self.requestor,
                     self.metrics,
-                    self.genai_manager.vision_client,
+                    self.genai_client,
                     semantic_trigger_processor,
                 )
             )
@@ -17,7 +17,7 @@ from .base_embedding import BaseEmbedding
 try:
     from tflite_runtime.interpreter import Interpreter
 except ModuleNotFoundError:
-    from ai_edge_litert.interpreter import Interpreter
+    from tensorflow.lite.python.interpreter import Interpreter

 logger = logging.getLogger(__name__)

@@ -43,7 +43,7 @@ from frigate.video import start_or_restart_ffmpeg, stop_ffmpeg
 try:
     from tflite_runtime.interpreter import Interpreter
 except ModuleNotFoundError:
-    from ai_edge_litert.interpreter import Interpreter
+    from tensorflow.lite.python.interpreter import Interpreter


 logger = logging.getLogger(__name__)
@@ -9,24 +9,13 @@ from typing import Any, Optional

 from playhouse.shortcuts import model_to_dict

-from frigate.config import CameraConfig, GenAIConfig, GenAIProviderEnum
+from frigate.config import CameraConfig, FrigateConfig, GenAIConfig, GenAIProviderEnum
 from frigate.const import CLIPS_DIR
 from frigate.data_processing.post.types import ReviewMetadata
-from frigate.genai.manager import GenAIClientManager
 from frigate.models import Event

 logger = logging.getLogger(__name__)

-__all__ = [
-    "GenAIClient",
-    "GenAIClientManager",
-    "GenAIConfig",
-    "GenAIProviderEnum",
-    "PROVIDERS",
-    "load_providers",
-    "register_genai_provider",
-]
-
 PROVIDERS = {}

@@ -363,6 +352,19 @@ Guidelines:
 }


+def get_genai_client(config: FrigateConfig) -> Optional[GenAIClient]:
+    """Get the GenAI client."""
+    if not config.genai.provider:
+        return None
+
+    load_providers()
+    provider = PROVIDERS.get(config.genai.provider)
+    if provider:
+        return provider(config.genai)
+
+    return None
+
+
 def load_providers():
     package_dir = os.path.dirname(__file__)
     for filename in os.listdir(package_dir):
@@ -5,12 +5,10 @@ import json
 import logging
 from typing import Any, Optional

-import httpx
 import requests

 from frigate.config import GenAIProviderEnum
 from frigate.genai import GenAIClient, register_genai_provider
-from frigate.genai.utils import parse_tool_calls_from_message

 logger = logging.getLogger(__name__)

@@ -69,7 +67,6 @@ class LlamaCppClient(GenAIClient):

         # Build request payload with llama.cpp native options
         payload = {
-            "model": self.genai_config.model,
             "messages": [
                 {
                     "role": "user",
@@ -102,79 +99,7 @@ class LlamaCppClient(GenAIClient):

     def get_context_size(self) -> int:
         """Get the context window size for llama.cpp."""
-        return self.provider_options.get("context_size", 4096)
-
-    def _build_payload(
-        self,
-        messages: list[dict[str, Any]],
-        tools: Optional[list[dict[str, Any]]],
-        tool_choice: Optional[str],
-        stream: bool = False,
-    ) -> dict[str, Any]:
-        """Build request payload for chat completions (sync or stream)."""
-        openai_tool_choice = None
-        if tool_choice:
-            if tool_choice == "none":
-                openai_tool_choice = "none"
-            elif tool_choice == "auto":
-                openai_tool_choice = "auto"
-            elif tool_choice == "required":
-                openai_tool_choice = "required"
-
-        payload: dict[str, Any] = {
-            "messages": messages,
-            "model": self.genai_config.model,
-        }
-        if stream:
-            payload["stream"] = True
-        if tools:
-            payload["tools"] = tools
-        if openai_tool_choice is not None:
-            payload["tool_choice"] = openai_tool_choice
-        provider_opts = {
-            k: v for k, v in self.provider_options.items() if k != "context_size"
-        }
-        payload.update(provider_opts)
-        return payload
-
-    def _message_from_choice(self, choice: dict[str, Any]) -> dict[str, Any]:
-        """Parse OpenAI-style choice into {content, tool_calls, finish_reason}."""
-        message = choice.get("message", {})
-        content = message.get("content")
-        content = content.strip() if content else None
-        tool_calls = parse_tool_calls_from_message(message)
-        finish_reason = choice.get("finish_reason") or (
-            "tool_calls" if tool_calls else "stop" if content else "error"
-        )
-        return {
-            "content": content,
-            "tool_calls": tool_calls,
-            "finish_reason": finish_reason,
-        }
-
-    @staticmethod
-    def _streamed_tool_calls_to_list(
-        tool_calls_by_index: dict[int, dict[str, Any]],
-    ) -> Optional[list[dict[str, Any]]]:
-        """Convert streamed tool_calls index map to list of {id, name, arguments}."""
-        if not tool_calls_by_index:
-            return None
-        result = []
-        for idx in sorted(tool_calls_by_index.keys()):
-            t = tool_calls_by_index[idx]
-            args_str = t.get("arguments") or "{}"
-            try:
-                arguments = json.loads(args_str)
-            except json.JSONDecodeError:
-                arguments = {}
-            result.append(
-                {
-                    "id": t.get("id", ""),
-                    "name": t.get("name", ""),
-                    "arguments": arguments,
-                }
-            )
-        return result if result else None
+        return self.genai_config.provider_options.get("context_size", 4096)

     def chat_with_tools(
         self,
@@ -197,8 +122,31 @@ class LlamaCppClient(GenAIClient):
                 "tool_calls": None,
                 "finish_reason": "error",
             }

         try:
-            payload = self._build_payload(messages, tools, tool_choice, stream=False)
+            openai_tool_choice = None
+            if tool_choice:
+                if tool_choice == "none":
+                    openai_tool_choice = "none"
+                elif tool_choice == "auto":
+                    openai_tool_choice = "auto"
+                elif tool_choice == "required":
+                    openai_tool_choice = "required"
+
+            payload = {
+                "messages": messages,
+            }
+
+            if tools:
+                payload["tools"] = tools
+            if openai_tool_choice is not None:
+                payload["tool_choice"] = openai_tool_choice
+
+            provider_opts = {
+                k: v for k, v in self.provider_options.items() if k != "context_size"
+            }
+            payload.update(provider_opts)
+
             response = requests.post(
                 f"{self.provider}/v1/chat/completions",
                 json=payload,
@@ -206,13 +154,60 @@ class LlamaCppClient(GenAIClient):
             )
             response.raise_for_status()
             result = response.json()

             if result is None or "choices" not in result or len(result["choices"]) == 0:
                 return {
                     "content": None,
                     "tool_calls": None,
                     "finish_reason": "error",
                 }
-            return self._message_from_choice(result["choices"][0])
+
+            choice = result["choices"][0]
+            message = choice.get("message", {})
+
+            content = message.get("content")
+            if content:
+                content = content.strip()
+            else:
+                content = None
+
+            tool_calls = None
+            if "tool_calls" in message and message["tool_calls"]:
+                tool_calls = []
+                for tool_call in message["tool_calls"]:
+                    try:
+                        function_data = tool_call.get("function", {})
+                        arguments_str = function_data.get("arguments", "{}")
+                        arguments = json.loads(arguments_str)
+                    except (json.JSONDecodeError, KeyError, TypeError) as e:
+                        logger.warning(
+                            f"Failed to parse tool call arguments: {e}, "
+                            f"tool: {function_data.get('name', 'unknown')}"
+                        )
+                        arguments = {}
+
+                    tool_calls.append(
+                        {
+                            "id": tool_call.get("id", ""),
+                            "name": function_data.get("name", ""),
+                            "arguments": arguments,
+                        }
+                    )
+
+            finish_reason = "error"
+            if "finish_reason" in choice and choice["finish_reason"]:
+                finish_reason = choice["finish_reason"]
+            elif tool_calls:
+                finish_reason = "tool_calls"
+            elif content:
+                finish_reason = "stop"
+
+            return {
+                "content": content,
+                "tool_calls": tool_calls,
+                "finish_reason": finish_reason,
+            }
+
         except requests.exceptions.Timeout as e:
             logger.warning("llama.cpp request timed out: %s", str(e))
             return {
@@ -224,7 +219,8 @@ class LlamaCppClient(GenAIClient):
            error_detail = str(e)
            if hasattr(e, "response") and e.response is not None:
                try:
-                    error_detail = f"{str(e)} - Response: {e.response.text[:500]}"
+                    error_body = e.response.text
+                    error_detail = f"{str(e)} - Response: {error_body[:500]}"
                except Exception:
                    pass
            logger.warning("llama.cpp returned an error: %s", error_detail)
@@ -240,111 +236,3 @@ class LlamaCppClient(GenAIClient):
            "tool_calls": None,
            "finish_reason": "error",
        }
-
-    async def chat_with_tools_stream(
-        self,
-        messages: list[dict[str, Any]],
-        tools: Optional[list[dict[str, Any]]] = None,
-        tool_choice: Optional[str] = "auto",
-    ):
-        """Stream chat with tools via OpenAI-compatible streaming API."""
-        if self.provider is None:
-            logger.warning(
-                "llama.cpp provider has not been initialized. Check your llama.cpp configuration."
-            )
-            yield (
-                "message",
-                {
-                    "content": None,
-                    "tool_calls": None,
-                    "finish_reason": "error",
-                },
-            )
-            return
-        try:
-            payload = self._build_payload(messages, tools, tool_choice, stream=True)
-            content_parts: list[str] = []
-            tool_calls_by_index: dict[int, dict[str, Any]] = {}
-            finish_reason = "stop"
-
-            async with httpx.AsyncClient(timeout=float(self.timeout)) as client:
-                async with client.stream(
-                    "POST",
-                    f"{self.provider}/v1/chat/completions",
-                    json=payload,
-                ) as response:
-                    response.raise_for_status()
-                    async for line in response.aiter_lines():
-                        if not line.startswith("data: "):
-                            continue
-                        data_str = line[6:].strip()
-                        if data_str == "[DONE]":
-                            break
-                        try:
-                            data = json.loads(data_str)
-                        except json.JSONDecodeError:
-                            continue
-                        choices = data.get("choices") or []
-                        if not choices:
-                            continue
-                        delta = choices[0].get("delta", {})
-                        if choices[0].get("finish_reason"):
-                            finish_reason = choices[0]["finish_reason"]
-                        if delta.get("content"):
-                            content_parts.append(delta["content"])
-                            yield ("content_delta", delta["content"])
-                        for tc in delta.get("tool_calls") or []:
-                            idx = tc.get("index", 0)
-                            fn = tc.get("function") or {}
-                            if idx not in tool_calls_by_index:
-                                tool_calls_by_index[idx] = {
-                                    "id": tc.get("id", ""),
-                                    "name": tc.get("name") or fn.get("name", ""),
-                                    "arguments": "",
-                                }
-                            t = tool_calls_by_index[idx]
-                            if tc.get("id"):
-                                t["id"] = tc["id"]
-                            name = tc.get("name") or fn.get("name")
-                            if name:
-                                t["name"] = name
-                            arg = tc.get("arguments") or fn.get("arguments")
-                            if arg is not None:
-                                t["arguments"] += (
-                                    arg if isinstance(arg, str) else json.dumps(arg)
-                                )
-
-            full_content = "".join(content_parts).strip() or None
-            tool_calls_list = self._streamed_tool_calls_to_list(tool_calls_by_index)
-            if tool_calls_list:
-                finish_reason = "tool_calls"
-            yield (
-                "message",
-                {
-                    "content": full_content,
-                    "tool_calls": tool_calls_list,
-                    "finish_reason": finish_reason,
-                },
-            )
-        except httpx.HTTPStatusError as e:
-            logger.warning("llama.cpp streaming HTTP error: %s", e)
-            yield (
-                "message",
-                {
-                    "content": None,
-                    "tool_calls": None,
-                    "finish_reason": "error",
-                },
-            )
-        except Exception as e:
-            logger.warning(
-                "Unexpected error in llama.cpp chat_with_tools_stream: %s", str(e)
-            )
-            yield (
-                "message",
-                {
-                    "content": None,
-                    "tool_calls": None,
-                    "finish_reason": "error",
-                },
-            )
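The removed `chat_with_tools_stream` above concatenates streamed tool-call argument fragments keyed by their `index` before JSON-decoding them once the stream ends. A standalone sketch of that accumulation step (function name and delta shapes are simplified from the diff, not part of Frigate's API):

```python
import json


def accumulate_tool_call_deltas(deltas):
    """Merge OpenAI-style streamed tool_call deltas, keyed by index.

    Each delta looks like {"index": 0, "id": ..., "function": {"name": ...,
    "arguments": "<fragment>"}}; argument fragments are concatenated and
    JSON-decoded only after the stream completes.
    """
    calls = {}
    for tc in deltas:
        idx = tc.get("index", 0)
        fn = tc.get("function") or {}
        entry = calls.setdefault(idx, {"id": "", "name": "", "arguments": ""})
        if tc.get("id"):
            entry["id"] = tc["id"]
        if fn.get("name"):
            entry["name"] = fn["name"]
        if fn.get("arguments"):
            entry["arguments"] += fn["arguments"]

    result = []
    for idx in sorted(calls):
        entry = calls[idx]
        try:
            args = json.loads(entry["arguments"] or "{}")
        except json.JSONDecodeError:
            args = {}
        result.append({"id": entry["id"], "name": entry["name"], "arguments": args})
    return result
```

Decoding once at the end matters because individual fragments such as `{"qu` are not valid JSON on their own.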
@@ -1,89 +0,0 @@
-"""GenAI client manager for Frigate.
-
-Manages GenAI provider clients from Frigate config. Configuration is read only
-in _update_config(); no other code should read config.genai. Exposes clients
-by role: tool_client, vision_client, embeddings_client.
-"""
-
-import logging
-from typing import TYPE_CHECKING, Optional
-
-from frigate.config import FrigateConfig
-from frigate.config.camera.genai import GenAIRoleEnum
-
-if TYPE_CHECKING:
-    from frigate.genai import GenAIClient
-
-logger = logging.getLogger(__name__)
-
-
-class GenAIClientManager:
-    """Manages GenAI provider clients from Frigate config."""
-
-    def __init__(self, config: FrigateConfig) -> None:
-        self._config = config
-        self._tool_client: Optional[GenAIClient] = None
-        self._vision_client: Optional[GenAIClient] = None
-        self._embeddings_client: Optional[GenAIClient] = None
-        self._update_config()
-
-    def _update_config(self) -> None:
-        """Build role clients from current Frigate config.genai.
-
-        Called from __init__ and can be called again when config is reloaded.
-        Each role (tools, vision, embeddings) gets the client for the provider
-        that has that role in its roles list.
-        """
-        from frigate.genai import PROVIDERS, load_providers
-
-        self._tool_client = None
-        self._vision_client = None
-        self._embeddings_client = None
-
-        if not self._config.genai:
-            return
-
-        load_providers()
-
-        for _name, genai_cfg in self._config.genai.items():
-            if not genai_cfg.provider:
-                continue
-            provider_cls = PROVIDERS.get(genai_cfg.provider)
-            if not provider_cls:
-                logger.warning(
-                    "Unknown GenAI provider %s in config, skipping.",
-                    genai_cfg.provider,
-                )
-                continue
-            try:
-                client = provider_cls(genai_cfg)
-            except Exception as e:
-                logger.exception(
-                    "Failed to create GenAI client for provider %s: %s",
-                    genai_cfg.provider,
-                    e,
-                )
-                continue
-
-            for role in genai_cfg.roles:
-                if role == GenAIRoleEnum.tools:
-                    self._tool_client = client
-                elif role == GenAIRoleEnum.vision:
-                    self._vision_client = client
-                elif role == GenAIRoleEnum.embeddings:
-                    self._embeddings_client = client
-
-    @property
-    def tool_client(self) -> "Optional[GenAIClient]":
-        """Client configured for the tools role (e.g. chat with function calling)."""
-        return self._tool_client
-
-    @property
-    def vision_client(self) -> "Optional[GenAIClient]":
-        """Client configured for the vision role (e.g. review descriptions, object descriptions)."""
-        return self._vision_client
-
-    @property
-    def embeddings_client(self) -> "Optional[GenAIClient]":
-        """Client configured for the embeddings role."""
-        return self._embeddings_client
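The `_update_config` loop in the deleted manager above assigns each role the client of whichever configured provider claims it, with later entries winning. A minimal stand-in for that assignment (plain strings used in place of `GenAIClient` instances; helper name is illustrative only):

```python
def assign_role_clients(providers):
    """Map each GenAI role to the client of the last configured provider
    that lists the role, mirroring the manager's _update_config() loop.

    `providers` is a list of (client, roles) pairs standing in for the
    configured provider entries.
    """
    clients = {"tools": None, "vision": None, "embeddings": None}
    for client, roles in providers:
        for role in roles:
            if role in clients:
                clients[role] = client
    return clients
```

Because the loop never resets earlier assignments, two providers sharing a role means the one listed last in the config wins for that role.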
@@ -1,16 +1,15 @@
 """Ollama Provider for Frigate AI."""

+import json
 import logging
 from typing import Any, Optional

 from httpx import RemoteProtocolError, TimeoutException
-from ollama import AsyncClient as OllamaAsyncClient
 from ollama import Client as ApiClient
 from ollama import ResponseError

 from frigate.config import GenAIProviderEnum
 from frigate.genai import GenAIClient, register_genai_provider
-from frigate.genai.utils import parse_tool_calls_from_message

 logger = logging.getLogger(__name__)
@@ -89,73 +88,6 @@ class OllamaClient(GenAIClient):
            "num_ctx", 4096
        )

-    def _build_request_params(
-        self,
-        messages: list[dict[str, Any]],
-        tools: Optional[list[dict[str, Any]]],
-        tool_choice: Optional[str],
-        stream: bool = False,
-    ) -> dict[str, Any]:
-        """Build request_messages and params for chat (sync or stream)."""
-        request_messages = []
-        for msg in messages:
-            msg_dict = {
-                "role": msg.get("role"),
-                "content": msg.get("content", ""),
-            }
-            if msg.get("tool_call_id"):
-                msg_dict["tool_call_id"] = msg["tool_call_id"]
-            if msg.get("name"):
-                msg_dict["name"] = msg["name"]
-            if msg.get("tool_calls"):
-                msg_dict["tool_calls"] = msg["tool_calls"]
-            request_messages.append(msg_dict)
-
-        request_params: dict[str, Any] = {
-            "model": self.genai_config.model,
-            "messages": request_messages,
-            **self.provider_options,
-        }
-        if stream:
-            request_params["stream"] = True
-        if tools:
-            request_params["tools"] = tools
-        if tool_choice:
-            request_params["tool_choice"] = (
-                "none"
-                if tool_choice == "none"
-                else "required"
-                if tool_choice == "required"
-                else "auto"
-            )
-        return request_params
-
-    def _message_from_response(self, response: dict[str, Any]) -> dict[str, Any]:
-        """Parse Ollama chat response into {content, tool_calls, finish_reason}."""
-        if not response or "message" not in response:
-            return {
-                "content": None,
-                "tool_calls": None,
-                "finish_reason": "error",
-            }
-        message = response["message"]
-        content = message.get("content", "").strip() if message.get("content") else None
-        tool_calls = parse_tool_calls_from_message(message)
-        finish_reason = "error"
-        if response.get("done"):
-            finish_reason = (
-                "tool_calls" if tool_calls else "stop" if content else "error"
-            )
-        elif tool_calls:
-            finish_reason = "tool_calls"
-        elif content:
-            finish_reason = "stop"
-        return {
-            "content": content,
-            "tool_calls": tool_calls,
-            "finish_reason": finish_reason,
-        }
-
    def chat_with_tools(
        self,
        messages: list[dict[str, Any]],
@@ -171,12 +103,93 @@ class OllamaClient(GenAIClient):
                "tool_calls": None,
                "finish_reason": "error",
            }

        try:
-            request_params = self._build_request_params(
-                messages, tools, tool_choice, stream=False
-            )
+            request_messages = []
+            for msg in messages:
+                msg_dict = {
+                    "role": msg.get("role"),
+                    "content": msg.get("content", ""),
+                }
+                if msg.get("tool_call_id"):
+                    msg_dict["tool_call_id"] = msg["tool_call_id"]
+                if msg.get("name"):
+                    msg_dict["name"] = msg["name"]
+                if msg.get("tool_calls"):
+                    msg_dict["tool_calls"] = msg["tool_calls"]
+                request_messages.append(msg_dict)
+
+            request_params = {
+                "model": self.genai_config.model,
+                "messages": request_messages,
+            }
+
+            if tools:
+                request_params["tools"] = tools
+            if tool_choice:
+                if tool_choice == "none":
+                    request_params["tool_choice"] = "none"
+                elif tool_choice == "required":
+                    request_params["tool_choice"] = "required"
+                elif tool_choice == "auto":
+                    request_params["tool_choice"] = "auto"
+
+            request_params.update(self.provider_options)

            response = self.provider.chat(**request_params)
-            return self._message_from_response(response)
+
+            if not response or "message" not in response:
+                return {
+                    "content": None,
+                    "tool_calls": None,
+                    "finish_reason": "error",
+                }
+
+            message = response["message"]
+            content = (
+                message.get("content", "").strip() if message.get("content") else None
+            )
+
+            tool_calls = None
+            if "tool_calls" in message and message["tool_calls"]:
+                tool_calls = []
+                for tool_call in message["tool_calls"]:
+                    try:
+                        function_data = tool_call.get("function", {})
+                        arguments_str = function_data.get("arguments", "{}")
+                        arguments = json.loads(arguments_str)
+                    except (json.JSONDecodeError, KeyError, TypeError) as e:
+                        logger.warning(
+                            f"Failed to parse tool call arguments: {e}, "
+                            f"tool: {function_data.get('name', 'unknown')}"
+                        )
+                        arguments = {}
+
+                    tool_calls.append(
+                        {
+                            "id": tool_call.get("id", ""),
+                            "name": function_data.get("name", ""),
+                            "arguments": arguments,
+                        }
+                    )
+
+            finish_reason = "error"
+            if "done" in response and response["done"]:
+                if tool_calls:
+                    finish_reason = "tool_calls"
+                elif content:
+                    finish_reason = "stop"
+            elif tool_calls:
+                finish_reason = "tool_calls"
+            elif content:
+                finish_reason = "stop"
+
+            return {
+                "content": content,
+                "tool_calls": tool_calls,
+                "finish_reason": finish_reason,
+            }
+
        except (TimeoutException, ResponseError, ConnectionError) as e:
            logger.warning("Ollama returned an error: %s", str(e))
            return {
@@ -191,89 +204,3 @@ class OllamaClient(GenAIClient):
            "tool_calls": None,
            "finish_reason": "error",
        }
-
-    async def chat_with_tools_stream(
-        self,
-        messages: list[dict[str, Any]],
-        tools: Optional[list[dict[str, Any]]] = None,
-        tool_choice: Optional[str] = "auto",
-    ):
-        """Stream chat with tools; yields content deltas then final message."""
-        if self.provider is None:
-            logger.warning(
-                "Ollama provider has not been initialized. Check your Ollama configuration."
-            )
-            yield (
-                "message",
-                {
-                    "content": None,
-                    "tool_calls": None,
-                    "finish_reason": "error",
-                },
-            )
-            return
-        try:
-            request_params = self._build_request_params(
-                messages, tools, tool_choice, stream=True
-            )
-            async_client = OllamaAsyncClient(
-                host=self.genai_config.base_url,
-                timeout=self.timeout,
-            )
-            content_parts: list[str] = []
-            final_message: dict[str, Any] | None = None
-            try:
-                stream = await async_client.chat(**request_params)
-                async for chunk in stream:
-                    if not chunk or "message" not in chunk:
-                        continue
-                    msg = chunk.get("message", {})
-                    delta = msg.get("content") or ""
-                    if delta:
-                        content_parts.append(delta)
-                        yield ("content_delta", delta)
-                    if chunk.get("done"):
-                        full_content = "".join(content_parts).strip() or None
-                        tool_calls = parse_tool_calls_from_message(msg)
-                        final_message = {
-                            "content": full_content,
-                            "tool_calls": tool_calls,
-                            "finish_reason": "tool_calls" if tool_calls else "stop",
-                        }
-                        break
-            finally:
-                await async_client.close()
-
-            if final_message is not None:
-                yield ("message", final_message)
-            else:
-                yield (
-                    "message",
-                    {
-                        "content": "".join(content_parts).strip() or None,
-                        "tool_calls": None,
-                        "finish_reason": "stop",
-                    },
-                )
-        except (TimeoutException, ResponseError, ConnectionError) as e:
-            logger.warning("Ollama streaming error: %s", str(e))
-            yield (
-                "message",
-                {
-                    "content": None,
-                    "tool_calls": None,
-                    "finish_reason": "error",
-                },
-            )
-        except Exception as e:
-            logger.warning(
-                "Unexpected error in Ollama chat_with_tools_stream: %s", str(e)
-            )
-            yield (
-                "message",
-                {
-                    "content": None,
-                    "tool_calls": None,
-                    "finish_reason": "error",
-                },
-            )
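Both the llama.cpp and Ollama response handlers in the hunks above resolve `finish_reason` with the same precedence: tool calls win over plain content, and anything else is an error. That precedence can be isolated into one function (a sketch; the function name is not from the diff):

```python
def resolve_finish_reason(content, tool_calls):
    """Resolve finish_reason with the precedence used by the clients above:
    tool_calls > stop (non-empty content) > error."""
    if tool_calls:
        return "tool_calls"
    if content:
        return "stop"
    return "error"
```

Note that in the diff the `done`/not-`done` branches of the Ollama handler collapse to this same three-way precedence, which is why the removed `_message_from_response` helper expressed it as a single chained conditional.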
@@ -1,70 +0,0 @@
-"""Shared helpers for GenAI providers and chat (OpenAI-style messages, tool call parsing)."""
-
-import json
-import logging
-from typing import Any, List, Optional
-
-logger = logging.getLogger(__name__)
-
-
-def parse_tool_calls_from_message(
-    message: dict[str, Any],
-) -> Optional[list[dict[str, Any]]]:
-    """
-    Parse tool_calls from an OpenAI-style message dict.
-
-    Message may have "tool_calls" as a list of:
-        {"id": str, "function": {"name": str, "arguments": str}, ...}
-
-    Returns a list of {"id", "name", "arguments"} with arguments parsed as dict,
-    or None if no tool_calls. Used by Ollama and LlamaCpp (non-stream) responses.
-    """
-    raw = message.get("tool_calls")
-    if not raw or not isinstance(raw, list):
-        return None
-    result = []
-    for tool_call in raw:
-        function_data = tool_call.get("function") or {}
-        try:
-            arguments_str = function_data.get("arguments") or "{}"
-            arguments = json.loads(arguments_str)
-        except (json.JSONDecodeError, KeyError, TypeError) as e:
-            logger.warning(
-                "Failed to parse tool call arguments: %s, tool: %s",
-                e,
-                function_data.get("name", "unknown"),
-            )
-            arguments = {}
-        result.append(
-            {
-                "id": tool_call.get("id", ""),
-                "name": function_data.get("name", ""),
-                "arguments": arguments,
-            }
-        )
-    return result if result else None
-
-
-def build_assistant_message_for_conversation(
-    content: Any,
-    tool_calls_raw: Optional[List[dict[str, Any]]],
-) -> dict[str, Any]:
-    """
-    Build the assistant message dict in OpenAI format for appending to a conversation.
-
-    tool_calls_raw: list of {"id", "name", "arguments"} (arguments as dict), or None.
-    """
-    msg: dict[str, Any] = {"role": "assistant", "content": content}
-    if tool_calls_raw:
-        msg["tool_calls"] = [
-            {
-                "id": tc["id"],
-                "type": "function",
-                "function": {
-                    "name": tc["name"],
-                    "arguments": json.dumps(tc.get("arguments") or {}),
-                },
-            }
-            for tc in tool_calls_raw
-        ]
-    return msg
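The deleted helper module above turns OpenAI-style `tool_calls` (arguments as a JSON string) into plain dicts. A condensed standalone version of that parsing, close to `parse_tool_calls_from_message` in the diff but without the logging dependency:

```python
import json


def parse_tool_calls(message):
    """Condensed sketch of parse_tool_calls_from_message from the deleted helper.

    Returns a list of {"id", "name", "arguments"} dicts with the JSON-string
    arguments decoded, or None when the message carries no tool calls.
    """
    raw = message.get("tool_calls")
    if not raw or not isinstance(raw, list):
        return None
    result = []
    for tool_call in raw:
        function_data = tool_call.get("function") or {}
        try:
            arguments = json.loads(function_data.get("arguments") or "{}")
        except (json.JSONDecodeError, TypeError):
            # Malformed arguments degrade to an empty dict rather than failing.
            arguments = {}
        result.append(
            {
                "id": tool_call.get("id", ""),
                "name": function_data.get("name", ""),
                "arguments": arguments,
            }
        )
    return result or None
```

The companion `build_assistant_message_for_conversation` in the diff is the inverse direction: it re-serializes `arguments` with `json.dumps` so the message can be appended back into an OpenAI-format conversation.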
@@ -195,8 +195,7 @@ def flatten_config_data(
 ) -> Dict[str, Any]:
     items = []
     for key, value in config_data.items():
-        escaped_key = escape_config_key_segment(str(key))
-        new_key = f"{parent_key}.{escaped_key}" if parent_key else escaped_key
+        new_key = f"{parent_key}.{key}" if parent_key else key
         if isinstance(value, dict):
             items.extend(flatten_config_data(value, new_key).items())
         else:
@@ -204,41 +203,6 @@
     return dict(items)


-def escape_config_key_segment(segment: str) -> str:
-    """Escape dots and backslashes so they can be treated as literal key chars."""
-    return segment.replace("\\", "\\\\").replace(".", "\\.")
-
-
-def split_config_key_path(key_path_str: str) -> list[str]:
-    """Split a dotted config path, honoring \\. as a literal dot in a key."""
-    parts: list[str] = []
-    current: list[str] = []
-    escaped = False
-
-    for char in key_path_str:
-        if escaped:
-            current.append(char)
-            escaped = False
-            continue
-
-        if char == "\\":
-            escaped = True
-            continue
-
-        if char == ".":
-            parts.append("".join(current))
-            current = []
-            continue
-
-        current.append(char)
-
-    if escaped:
-        current.append("\\")
-
-    parts.append("".join(current))
-    return parts
-
-
 def update_yaml_file_bulk(file_path: str, updates: Dict[str, Any]):
     yaml = YAML()
     yaml.indent(mapping=2, sequence=4, offset=2)
@@ -254,7 +218,7 @@ def update_yaml_file_bulk(file_path: str, updates: Dict[str, Any]):

     # Apply all updates
     for key_path_str, new_value in updates.items():
-        key_path = split_config_key_path(key_path_str)
+        key_path = key_path_str.split(".")
         for i in range(len(key_path)):
             try:
                 index = int(key_path[i])
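The escaping helpers removed in the hunk above exist so that a config key containing a literal dot survives flattening into a dotted path and splits back into the same segments. Their round-trip behavior, with the two functions reproduced from the diff:

```python
def escape_config_key_segment(segment: str) -> str:
    """Escape dots and backslashes so they can be treated as literal key chars."""
    return segment.replace("\\", "\\\\").replace(".", "\\.")


def split_config_key_path(key_path_str: str) -> list[str]:
    """Split a dotted config path, honoring \\. as a literal dot in a key."""
    parts, current, escaped = [], [], False
    for char in key_path_str:
        if escaped:
            current.append(char)  # escaped char is taken literally
            escaped = False
        elif char == "\\":
            escaped = True
        elif char == ".":
            parts.append("".join(current))
            current = []
        else:
            current.append(char)
    if escaped:
        current.append("\\")  # trailing lone backslash is kept
    parts.append("".join(current))
    return parts
```

A plain `key_path_str.split(".")`, as used on the other side of the diff, would instead break a camera named `front.door` into two bogus path segments.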
@@ -438,13 +438,6 @@ def migrate_018_0(config: dict[str, dict[str, Any]]) -> dict[str, dict[str, Any]
    """Handle migrating frigate config to 0.18-0"""
    new_config = config.copy()

-    # Migrate GenAI to new format
-    genai = new_config.get("genai")
-
-    if genai and genai.get("provider"):
-        genai["roles"] = ["embeddings", "vision", "tools"]
-        new_config["genai"] = {"default": genai}
-
    # Remove deprecated sync_recordings from global record config
    if new_config.get("record", {}).get("sync_recordings") is not None:
        del new_config["record"]["sync_recordings"]
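The migration step removed above wraps a flat `genai` section into the 0.18 named-provider shape with all roles assigned. A self-contained sketch of that transform (non-mutating, unlike the in-place version in the diff):

```python
def migrate_genai_section(config):
    """Sketch of the GenAI migration removed above: wrap a flat `genai`
    section into the named-provider format with explicit roles."""
    new_config = dict(config)
    genai = new_config.get("genai")
    if genai and genai.get("provider"):
        genai = dict(genai)
        genai["roles"] = ["embeddings", "vision", "tools"]
        new_config["genai"] = {"default": genai}
    return new_config
```

Configs without a `genai.provider` key pass through unchanged, so the migration is safe to run on installs that never enabled Generative AI.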
@@ -1,46 +0,0 @@
-"""JSON schema utilities for Frigate."""
-
-from typing import Any, Dict, Type
-
-from pydantic import BaseModel, TypeAdapter
-
-
-def get_config_schema(config_class: Type[BaseModel]) -> Dict[str, Any]:
-    """
-    Returns the JSON schema for FrigateConfig with polymorphic detectors.
-
-    This utility patches the FrigateConfig schema to include the full polymorphic
-    definitions for detectors. By default, Pydantic's schema for Dict[str, BaseDetectorConfig]
-    only includes the base class fields. This function replaces it with a reference
-    to the DetectorConfig union, which includes all available detector subclasses.
-    """
-    # Import here to ensure all detector plugins are loaded through the detectors module
-    from frigate.detectors import DetectorConfig
-
-    # Get the base schema for FrigateConfig
-    schema = config_class.model_json_schema()
-
-    # Get the schema for the polymorphic DetectorConfig union
-    detector_adapter: TypeAdapter = TypeAdapter(DetectorConfig)
-    detector_schema = detector_adapter.json_schema()
-
-    # Ensure $defs exists in FrigateConfig schema
-    if "$defs" not in schema:
-        schema["$defs"] = {}
-
-    # Merge $defs from DetectorConfig into FrigateConfig schema
-    # This includes the specific schemas for each detector plugin (OvDetectorConfig, etc.)
-    if "$defs" in detector_schema:
-        schema["$defs"].update(detector_schema["$defs"])
-
-    # Extract the union schema (oneOf/discriminator) and add it as a definition
-    detector_union_schema = {k: v for k, v in detector_schema.items() if k != "$defs"}
-    schema["$defs"]["DetectorConfig"] = detector_union_schema
-
-    # Update the 'detectors' property to use the polymorphic DetectorConfig definition
-    if "detectors" in schema.get("properties", {}):
-        schema["properties"]["detectors"]["additionalProperties"] = {
-            "$ref": "#/$defs/DetectorConfig"
-        }
-
-    return schema
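The schema patching in the deleted module above is pure dict surgery once Pydantic has produced the two schemas: merge the union's `$defs` into the base schema, register the union itself as a named definition, and repoint the `detectors` mapping at it. A dict-level sketch of that patching with no Pydantic dependency (function name illustrative only):

```python
def patch_detectors_schema(schema, detector_schema):
    """Dict-level sketch of get_config_schema's patching from the deleted
    module: merge the union's $defs and point `detectors` values at it."""
    schema = dict(schema)
    defs = dict(schema.get("$defs", {}))
    # Pull in per-plugin definitions (e.g. OvDetectorConfig) from the union schema.
    defs.update(detector_schema.get("$defs", {}))
    # Register the union body itself (oneOf/discriminator) as a named definition.
    defs["DetectorConfig"] = {k: v for k, v in detector_schema.items() if k != "$defs"}
    schema["$defs"] = defs
    # Repoint dict values of the `detectors` property at the polymorphic union.
    detectors = schema.get("properties", {}).get("detectors")
    if detectors is not None:
        detectors["additionalProperties"] = {"$ref": "#/$defs/DetectorConfig"}
    return schema
```

This is the same shape a consumer of `model_json_schema()` output would see; the real function additionally defers the `DetectorConfig` import so all detector plugins are registered first.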
@@ -121,7 +121,7 @@ def get_cpu_stats() -> dict[str, dict]:
        pid = str(process.info["pid"])
        try:
            cpu_percent = process.info["cpu_percent"]
-            cmdline = " ".join(process.info["cmdline"]).rstrip()
+            cmdline = process.info["cmdline"]

            with open(f"/proc/{pid}/stat", "r") as f:
                stats = f.readline().split()
@@ -155,7 +155,7 @@ def get_cpu_stats() -> dict[str, dict]:
                "cpu": str(cpu_percent),
                "cpu_average": str(round(cpu_average_usage, 2)),
                "mem": f"{mem_pct}",
-                "cmdline": clean_camera_user_pass(cmdline),
+                "cmdline": clean_camera_user_pass(" ".join(cmdline)),
            }
        except Exception:
            continue
@@ -8,18 +8,20 @@ and generates JSON translation files with titles and descriptions for the web UI

 import json
 import logging
-import sys
+import shutil
 from pathlib import Path
-from typing import Any, Dict, get_args, get_origin
+from typing import Any, Dict, Optional, get_args, get_origin

+from pydantic import BaseModel
+from pydantic.fields import FieldInfo

 from frigate.config.config import FrigateConfig
-from frigate.util.schema import get_config_schema

 logging.basicConfig(level=logging.INFO)
 logger = logging.getLogger(__name__)


-def get_field_translations(field_info) -> Dict[str, str]:
+def get_field_translations(field_info: FieldInfo) -> Dict[str, str]:
     """Extract title and description from a Pydantic field."""
     translations = {}

@@ -32,147 +34,50 @@ def get_field_translations(field_info) -> Dict[str, str]:
     return translations


-def extract_translations_from_schema(
-    schema: Dict[str, Any], defs: Dict[str, Any] = None
-) -> Dict[str, Any]:
+def process_model_fields(model: type[BaseModel]) -> Dict[str, Any]:
     """
-    Recursively extract translations (titles and descriptions) from a JSON schema.
+    Recursively process a Pydantic model to extract translations.

-    Returns a dictionary structure with label and description for each field,
-    and nested fields directly under their parent keys.
+    Returns a nested dictionary structure matching the config schema,
+    with title and description for each field.
     """
-    if defs is None:
-        defs = schema.get("$defs", {})
-
     translations = {}

-    # Add top-level title and description if present
-    if "title" in schema:
-        translations["label"] = schema["title"]
-    if "description" in schema:
-        translations["description"] = schema["description"]
-
-    # Process nested properties
-    properties = schema.get("properties", {})
-    for field_name, field_schema in properties.items():
-        field_translations = {}
-
-        # Handle $ref references
-        if "$ref" in field_schema:
-            ref_path = field_schema["$ref"]
-            if ref_path.startswith("#/$defs/"):
-                ref_name = ref_path.split("/")[-1]
-                if ref_name in defs:
-                    ref_schema = defs[ref_name]
-                    # Extract from the referenced schema
-                    ref_translations = extract_translations_from_schema(
-                        ref_schema, defs=defs
-                    )
-                    # Use the $ref field's own title/description if present
-                    if "title" in field_schema:
-                        field_translations["label"] = field_schema["title"]
-                    elif "label" in ref_translations:
-                        field_translations["label"] = ref_translations["label"]
-                    if "description" in field_schema:
-                        field_translations["description"] = field_schema["description"]
-                    elif "description" in ref_translations:
-                        field_translations["description"] = ref_translations[
-                            "description"
-                        ]
-                    # Add nested properties from referenced schema
-                    nested_without_root = {
-                        k: v
-                        for k, v in ref_translations.items()
-                        if k not in ("label", "description")
-                    }
-                    field_translations.update(nested_without_root)
-        # Handle additionalProperties with $ref (for dict types)
-        elif "additionalProperties" in field_schema:
-            additional_props = field_schema["additionalProperties"]
-            # Extract title and description from the field itself
-            if "title" in field_schema:
-                field_translations["label"] = field_schema["title"]
-            if "description" in field_schema:
-                field_translations["description"] = field_schema["description"]
-
-            # If additionalProperties contains a $ref, extract nested translations
-            if "$ref" in additional_props:
-                ref_path = additional_props["$ref"]
-                if ref_path.startswith("#/$defs/"):
-                    ref_name = ref_path.split("/")[-1]
-                    if ref_name in defs:
-                        ref_schema = defs[ref_name]
-                        nested = extract_translations_from_schema(ref_schema, defs=defs)
-                        nested_without_root = {
-                            k: v
-                            for k, v in nested.items()
-                            if k not in ("label", "description")
-                        }
-                        field_translations.update(nested_without_root)
+    model_fields = model.model_fields
+
+    for field_name, field_info in model_fields.items():
+        field_translations = get_field_translations(field_info)
+
+        # Get the field's type annotation
+        field_type = field_info.annotation
+
+        # Handle Optional types
+        origin = get_origin(field_type)
# Handle items with $ref (for array types)
|
|
||||||
elif "items" in field_schema:
|
|
||||||
items = field_schema["items"]
|
|
||||||
# Extract title and description from the field itself
|
|
||||||
if "title" in field_schema:
|
|
||||||
field_translations["label"] = field_schema["title"]
|
|
||||||
if "description" in field_schema:
|
|
||||||
field_translations["description"] = field_schema["description"]
|
|
||||||
|
|
||||||
# If items contains a $ref, extract nested translations
|
if origin is Optional or (
|
||||||
if "$ref" in items:
|
hasattr(origin, "__name__") and origin.__name__ == "UnionType"
|
||||||
ref_path = items["$ref"]
|
):
|
||||||
if ref_path.startswith("#/$defs/"):
|
args = get_args(field_type)
|
||||||
ref_name = ref_path.split("/")[-1]
|
field_type = next(
|
||||||
if ref_name in defs:
|
(arg for arg in args if arg is not type(None)), field_type
|
||||||
ref_schema = defs[ref_name]
|
)
|
||||||
nested = extract_translations_from_schema(ref_schema, defs=defs)
|
|
||||||
nested_without_root = {
|
|
||||||
k: v
|
|
||||||
for k, v in nested.items()
|
|
||||||
if k not in ("label", "description")
|
|
||||||
}
|
|
||||||
field_translations.update(nested_without_root)
|
|
||||||
else:
|
|
||||||
# Extract title and description
|
|
||||||
if "title" in field_schema:
|
|
||||||
field_translations["label"] = field_schema["title"]
|
|
||||||
if "description" in field_schema:
|
|
||||||
field_translations["description"] = field_schema["description"]
|
|
||||||
|
|
||||||
# Recursively process nested properties
|
# Handle Dict types (like Dict[str, CameraConfig])
|
||||||
if "properties" in field_schema:
|
if get_origin(field_type) is dict:
|
||||||
nested = extract_translations_from_schema(field_schema, defs=defs)
|
dict_args = get_args(field_type)
|
||||||
# Merge nested translations
|
|
||||||
nested_without_root = {
|
if len(dict_args) >= 2:
|
||||||
k: v for k, v in nested.items() if k not in ("label", "description")
|
value_type = dict_args[1]
|
||||||
}
|
|
||||||
field_translations.update(nested_without_root)
|
if isinstance(value_type, type) and issubclass(value_type, BaseModel):
|
||||||
# Handle anyOf cases
|
nested_translations = process_model_fields(value_type)
|
||||||
elif "anyOf" in field_schema:
|
|
||||||
for item in field_schema["anyOf"]:
|
if nested_translations:
|
||||||
if "properties" in item:
|
field_translations["properties"] = nested_translations
|
||||||
nested = extract_translations_from_schema(item, defs=defs)
|
elif isinstance(field_type, type) and issubclass(field_type, BaseModel):
|
||||||
nested_without_root = {
|
nested_translations = process_model_fields(field_type)
|
||||||
k: v
|
if nested_translations:
|
||||||
for k, v in nested.items()
|
field_translations["properties"] = nested_translations
|
||||||
if k not in ("label", "description")
|
|
||||||
}
|
|
||||||
field_translations.update(nested_without_root)
|
|
||||||
elif "$ref" in item:
|
|
||||||
ref_path = item["$ref"]
|
|
||||||
if ref_path.startswith("#/$defs/"):
|
|
||||||
ref_name = ref_path.split("/")[-1]
|
|
||||||
if ref_name in defs:
|
|
||||||
ref_schema = defs[ref_name]
|
|
||||||
nested = extract_translations_from_schema(
|
|
||||||
ref_schema, defs=defs
|
|
||||||
)
|
|
||||||
nested_without_root = {
|
|
||||||
k: v
|
|
||||||
for k, v in nested.items()
|
|
||||||
if k not in ("label", "description")
|
|
||||||
}
|
|
||||||
field_translations.update(nested_without_root)
|
|
||||||
|
|
||||||
if field_translations:
|
if field_translations:
|
||||||
translations[field_name] = field_translations
|
translations[field_name] = field_translations
|
||||||
@@ -180,350 +85,76 @@ def extract_translations_from_schema(
     return translations


-def generate_section_translation(config_class: type) -> Dict[str, Any]:
+def generate_section_translation(
+    section_name: str, field_info: FieldInfo
+) -> Dict[str, Any]:
     """
-    Generate translation structure for a config section using its JSON schema.
+    Generate translation structure for a top-level config section.
     """
-    schema = config_class.model_json_schema()
-    return extract_translations_from_schema(schema)
-
-
-def get_detector_translations(
-    config_schema: Dict[str, Any],
-) -> tuple[Dict[str, Any], set[str]]:
-    """Build detector type translations with nested fields based on schema definitions."""
-    defs = config_schema.get("$defs", {})
-    detector_schema = defs.get("DetectorConfig", {})
-    discriminator = detector_schema.get("discriminator", {})
-    mapping = discriminator.get("mapping", {})
-
-    type_translations: Dict[str, Any] = {}
-    nested_field_keys: set[str] = set()
-    for detector_type, ref in mapping.items():
-        if not isinstance(ref, str):
-            continue
-
-        if not ref.startswith("#/$defs/"):
-            continue
-
-        ref_name = ref.split("/")[-1]
-        ref_schema = defs.get(ref_name, {})
-        if not ref_schema:
-            continue
-
-        type_entry: Dict[str, str] = {}
-        title = ref_schema.get("title")
-        description = ref_schema.get("description")
-        if title:
-            type_entry["label"] = title
-        if description:
-            type_entry["description"] = description
-
-        nested = extract_translations_from_schema(ref_schema, defs=defs)
-        nested_without_root = {
-            k: v for k, v in nested.items() if k not in ("label", "description")
-        }
-        if nested_without_root:
-            type_entry.update(nested_without_root)
-            nested_field_keys.update(nested_without_root.keys())
-
-        if type_entry:
-            type_translations[detector_type] = type_entry
-
-    return type_translations, nested_field_keys
+    section_translations = get_field_translations(field_info)
+    field_type = field_info.annotation
+    origin = get_origin(field_type)
+
+    if origin is Optional or (
+        hasattr(origin, "__name__") and origin.__name__ == "UnionType"
+    ):
+        args = get_args(field_type)
+        field_type = next((arg for arg in args if arg is not type(None)), field_type)
+
+    # Handle Dict types (like detectors, cameras, camera_groups)
+    if get_origin(field_type) is dict:
+        dict_args = get_args(field_type)
+        if len(dict_args) >= 2:
+            value_type = dict_args[1]
+            if isinstance(value_type, type) and issubclass(value_type, BaseModel):
+                nested = process_model_fields(value_type)
+                if nested:
+                    section_translations["properties"] = nested
+    # If the field itself is a BaseModel, process it
+    elif isinstance(field_type, type) and issubclass(field_type, BaseModel):
+        nested = process_model_fields(field_type)
+        if nested:
+            section_translations["properties"] = nested
+
+    return section_translations


 def main():
     """Main function to generate config translations."""

     # Define output directory
-    if len(sys.argv) > 1:
-        output_dir = Path(sys.argv[1])
-    else:
-        output_dir = (
-            Path(__file__).parent / "web" / "public" / "locales" / "en" / "config"
-        )
+    output_dir = Path(__file__).parent / "web" / "public" / "locales" / "en" / "config"

     logger.info(f"Output directory: {output_dir}")

-    # Ensure the output directory exists; do not delete existing files.
+    # Clean and recreate the output directory
+    if output_dir.exists():
+        logger.info(f"Removing existing directory: {output_dir}")
+        shutil.rmtree(output_dir)
+
+    logger.info(f"Creating directory: {output_dir}")
     output_dir.mkdir(parents=True, exist_ok=True)
-    logger.info(
-        f"Using output directory (existing files will be overwritten): {output_dir}"
-    )

     config_fields = FrigateConfig.model_fields
-    config_schema = get_config_schema(FrigateConfig)
     logger.info(f"Found {len(config_fields)} top-level config sections")

-    global_translations = {}
-
     for field_name, field_info in config_fields.items():
         if field_name.startswith("_"):
             continue

         logger.info(f"Processing section: {field_name}")
+        section_data = generate_section_translation(field_name, field_info)

-        # Get the field's type
-        field_type = field_info.annotation
-        from typing import Optional, Union
-
-        origin = get_origin(field_type)
-        if (
-            origin is Optional
-            or origin is Union
-            or (
-                hasattr(origin, "__name__")
-                and origin.__name__ in ("UnionType", "Union")
-            )
-        ):
-            args = get_args(field_type)
-            field_type = next(
-                (arg for arg in args if arg is not type(None)), field_type
-            )
-
-        # Handle Dict[str, SomeModel] - extract the value type
-        if origin is dict:
-            args = get_args(field_type)
-            if args and len(args) > 1:
-                field_type = args[1]  # Get value type from Dict[key, value]
-
-        # Start with field's top-level metadata (label, description)
-        section_data = get_field_translations(field_info)
-
-        # Generate nested translations from the field type's schema
-        if hasattr(field_type, "model_json_schema"):
-            schema = field_type.model_json_schema()
-            # Extract nested properties from schema
-            nested = extract_translations_from_schema(schema)
-            # Remove top-level label/description from nested since we got those from field_info
-            nested_without_root = {
-                k: v for k, v in nested.items() if k not in ("label", "description")
-            }
-            section_data.update(nested_without_root)
-
-        if field_name == "detectors":
-            detector_types, detector_field_keys = get_detector_translations(
-                config_schema
-            )
-            section_data.update(detector_types)
-            for key in detector_field_keys:
-                if key == "type":
-                    continue
-                section_data.pop(key, None)
-
         if not section_data:
             logger.warning(f"No translations found for section: {field_name}")
             continue

-        # Add camera-level fields to global config documentation if applicable
-        CAMERA_LEVEL_FIELDS = {
-            "birdseye": (
-                "frigate.config.camera.birdseye",
-                "BirdseyeCameraConfig",
-                ["order"],
-            ),
-            "ffmpeg": (
-                "frigate.config.camera.ffmpeg",
-                "CameraFfmpegConfig",
-                ["inputs"],
-            ),
-            "lpr": (
-                "frigate.config.classification",
-                "CameraLicensePlateRecognitionConfig",
-                ["expire_time"],
-            ),
-            "semantic_search": (
-                "frigate.config.classification",
-                "CameraSemanticSearchConfig",
-                ["triggers"],
-            ),
-        }
-
-        if field_name in CAMERA_LEVEL_FIELDS:
-            module_path, class_name, field_names = CAMERA_LEVEL_FIELDS[field_name]
-            try:
-                import importlib
-
-                module = importlib.import_module(module_path)
-                camera_class = getattr(module, class_name)
-                schema = camera_class.model_json_schema()
-                camera_fields = schema.get("properties", {})
-                defs = schema.get("$defs", {})
-
-                for fname in field_names:
-                    if fname in camera_fields:
-                        field_schema = camera_fields[fname]
-                        field_trans = {}
-                        if "title" in field_schema:
-                            field_trans["label"] = field_schema["title"]
-                        if "description" in field_schema:
-                            field_trans["description"] = field_schema["description"]
-
-                        # Extract nested properties based on schema type
-                        nested_to_extract = None
-
-                        # Handle direct $ref
-                        if "$ref" in field_schema:
-                            ref_path = field_schema["$ref"]
-                            if ref_path.startswith("#/$defs/"):
-                                ref_name = ref_path.split("/")[-1]
-                                if ref_name in defs:
-                                    nested_to_extract = defs[ref_name]
-
-                        # Handle additionalProperties with $ref (for dict types)
-                        elif "additionalProperties" in field_schema:
-                            additional_props = field_schema["additionalProperties"]
-                            if "$ref" in additional_props:
-                                ref_path = additional_props["$ref"]
-                                if ref_path.startswith("#/$defs/"):
-                                    ref_name = ref_path.split("/")[-1]
-                                    if ref_name in defs:
-                                        nested_to_extract = defs[ref_name]
-
-                        # Handle items with $ref (for array types)
-                        elif "items" in field_schema:
-                            items = field_schema["items"]
-                            if "$ref" in items:
-                                ref_path = items["$ref"]
-                                if ref_path.startswith("#/$defs/"):
-                                    ref_name = ref_path.split("/")[-1]
-                                    if ref_name in defs:
-                                        nested_to_extract = defs[ref_name]
-
-                        # Extract nested properties if we found a schema to use
-                        if nested_to_extract:
-                            nested = extract_translations_from_schema(
-                                nested_to_extract, defs=defs
-                            )
-                            nested_without_root = {
-                                k: v
-                                for k, v in nested.items()
-                                if k not in ("label", "description")
-                            }
-                            field_trans.update(nested_without_root)
-
-                        if field_trans:
-                            section_data[fname] = field_trans
-            except Exception as e:
-                logger.warning(
-                    f"Could not add camera-level fields for {field_name}: {e}"
-                )
-
-        # Add to global translations instead of writing separate files
-        global_translations[field_name] = section_data
-
-        logger.info(f"Added section to global translations: {field_name}")
-
-    # Handle camera-level configs that aren't top-level FrigateConfig fields
-    # These are defined as fields in CameraConfig, so we extract title/description from there
-    camera_level_configs = {
-        "camera_mqtt": ("frigate.config.camera.mqtt", "CameraMqttConfig", "mqtt"),
-        "camera_ui": ("frigate.config.camera.ui", "CameraUiConfig", "ui"),
-        "onvif": ("frigate.config.camera.onvif", "OnvifConfig", "onvif"),
-    }
-
-    # Import CameraConfig to extract field metadata
-    from frigate.config.camera.camera import CameraConfig
-
-    camera_config_schema = CameraConfig.model_json_schema()
-    camera_properties = camera_config_schema.get("properties", {})
-
-    for config_name, (
-        module_path,
-        class_name,
-        camera_field_name,
-    ) in camera_level_configs.items():
-        try:
-            logger.info(f"Processing camera-level section: {config_name}")
-            import importlib
-
-            module = importlib.import_module(module_path)
-            config_class = getattr(module, class_name)
-
-            section_data = {}
-
-            # Extract top-level label and description from CameraConfig field definition
-            if camera_field_name in camera_properties:
-                field_schema = camera_properties[camera_field_name]
-                if "title" in field_schema:
-                    section_data["label"] = field_schema["title"]
-                if "description" in field_schema:
-                    section_data["description"] = field_schema["description"]
-
-            # Process model fields from schema
-            schema = config_class.model_json_schema()
-            nested = extract_translations_from_schema(schema)
-            # Remove top-level label/description since we got those from CameraConfig
-            nested_without_root = {
-                k: v for k, v in nested.items() if k not in ("label", "description")
-            }
-            section_data.update(nested_without_root)
-
-            # Add camera-level section into global translations (do not write separate file)
-            global_translations[config_name] = section_data
-            logger.info(
-                f"Added camera-level section to global translations: {config_name}"
-            )
-        except Exception as e:
-            logger.error(f"Failed to generate {config_name}: {e}")
-
-    # Remove top-level 'cameras' field if present so it remains a separate file
-    if "cameras" in global_translations:
-        logger.info(
-            "Removing top-level 'cameras' from global translations to keep it as a separate cameras.json"
-        )
-        del global_translations["cameras"]
-
-    # Write consolidated global.json with per-section keys
-    global_file = output_dir / "global.json"
-    with open(global_file, "w", encoding="utf-8") as f:
-        json.dump(global_translations, f, indent=2, ensure_ascii=False)
-        f.write("\n")
-
-    logger.info(f"Generated consolidated translations: {global_file}")
-
-    if not global_translations:
-        logger.warning("No global translations were generated!")
-    else:
-        logger.info(f"Global contains {len(global_translations)} sections")
-
-    # Generate cameras.json from CameraConfig schema
-    cameras_file = output_dir / "cameras.json"
-    logger.info(f"Generating cameras.json: {cameras_file}")
-    try:
-        if "camera_config_schema" in locals():
-            camera_schema = camera_config_schema
-        else:
-            from frigate.config.camera.camera import CameraConfig
-
-            camera_schema = CameraConfig.model_json_schema()
-
-        camera_translations = extract_translations_from_schema(camera_schema)
-
-        # Change descriptions to use 'for this camera' for fields that are global
-        def sanitize_camera_descriptions(obj):
-            if isinstance(obj, dict):
-                for k, v in list(obj.items()):
-                    if k == "description" and isinstance(v, str):
-                        obj[k] = v.replace(
-                            "for all cameras; can be overridden per-camera",
-                            "for this camera",
-                        )
-                    else:
-                        sanitize_camera_descriptions(v)
-            elif isinstance(obj, list):
-                for item in obj:
-                    sanitize_camera_descriptions(item)
-
-        sanitize_camera_descriptions(camera_translations)
-
-        with open(cameras_file, "w", encoding="utf-8") as f:
-            json.dump(camera_translations, f, indent=2, ensure_ascii=False)
-            f.write("\n")
-        logger.info(f"Generated cameras.json: {cameras_file}")
-    except Exception as e:
-        logger.error(f"Failed to generate cameras.json: {e}")
+        output_file = output_dir / f"{field_name}.json"
+        with open(output_file, "w", encoding="utf-8") as f:
+            json.dump(section_data, f, indent=2, ensure_ascii=False)
+
+        logger.info(f"Generated: {output_file}")

     logger.info("Translation generation complete!")
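Stripped of the diff noise, the heart of the new model-driven path above is unwrapping `Optional[...]` / `X | None` and `Dict[str, ...]` annotations before recursing into nested `BaseModel`s. A stdlib-only sketch of that unwrapping step (the `unwrap_annotation` name is illustrative and not part of this change):

```python
from typing import Dict, Optional, Union, get_args, get_origin


def unwrap_annotation(tp):
    """Reduce Optional[X] / X | None to X, then Dict[str, X] to X."""
    origin = get_origin(tp)
    # Optional[X] is typing.Union[X, None]; `X | None` reports a UnionType origin.
    if origin is Union or getattr(origin, "__name__", "") == "UnionType":
        args = get_args(tp)
        tp = next((arg for arg in args if arg is not type(None)), tp)
    # For Dict[str, X] the value type is what may hold a nested model.
    if get_origin(tp) is dict:
        args = get_args(tp)
        if len(args) >= 2:
            tp = args[1]
    return tp


print(unwrap_annotation(Optional[int]))              # <class 'int'>
print(unwrap_annotation(Dict[str, float]))           # <class 'float'>
print(unwrap_annotation(Optional[Dict[str, bool]]))  # <class 'bool'>
```

Once the annotation is reduced to a concrete class, an `issubclass(..., BaseModel)` check decides whether to recurse, which is what replaces the old JSON-schema `$ref` chasing.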
web/package-lock.json (generated, 2873 changed lines)
File diff suppressed because it is too large
@@ -38,10 +38,6 @@
     "@radix-ui/react-toggle": "^1.1.2",
     "@radix-ui/react-toggle-group": "^1.1.2",
     "@radix-ui/react-tooltip": "^1.2.8",
-    "@rjsf/core": "^6.3.1",
-    "@rjsf/shadcn": "^6.3.1",
-    "@rjsf/utils": "^6.3.1",
-    "@rjsf/validator-ajv8": "^6.3.1",
     "apexcharts": "^3.52.0",
     "axios": "^1.7.7",
     "class-variance-authority": "^0.7.1",
@@ -75,8 +71,6 @@
     "react-icons": "^5.5.0",
     "react-konva": "^18.2.10",
     "react-router-dom": "^6.30.3",
-    "react-markdown": "^9.0.1",
-    "remark-gfm": "^4.0.0",
     "react-swipeable": "^7.0.2",
     "react-tracked": "^2.0.1",
     "react-transition-group": "^4.4.5",
@@ -115,10 +115,8 @@
     "internalID": "The Internal ID Frigate uses in the configuration and database"
   },
   "button": {
-    "add": "Add",
     "apply": "Apply",
     "reset": "Reset",
-    "undo": "Undo",
     "done": "Done",
     "enabled": "Enabled",
     "enable": "Enable",
@@ -129,7 +127,6 @@
     "cancel": "Cancel",
     "close": "Close",
     "copy": "Copy",
-    "copiedToClipboard": "Copied to clipboard",
     "back": "Back",
     "history": "History",
     "fullscreen": "Fullscreen",
@@ -153,14 +150,7 @@
     "export": "Export",
     "deleteNow": "Delete Now",
     "next": "Next",
-    "continue": "Continue",
-    "modified": "Modified",
-    "overridden": "Overridden",
-    "resetToGlobal": "Reset to Global",
-    "resetToDefault": "Reset to Default",
-    "saveAll": "Save All",
-    "savingAll": "Saving All…",
-    "undoAll": "Undo All"
+    "continue": "Continue"
   },
   "menu": {
     "system": "System",
@@ -255,7 +245,6 @@
     "uiPlayground": "UI Playground",
     "faceLibrary": "Face Library",
     "classification": "Classification",
-    "chat": "Chat",
     "user": {
       "title": "User",
       "account": "Account",
26  web/public/locales/en/config/audio.json  (new file)
@@ -0,0 +1,26 @@
+{
+  "label": "Global Audio events configuration.",
+  "properties": {
+    "enabled": {
+      "label": "Enable audio events."
+    },
+    "max_not_heard": {
+      "label": "Seconds of not hearing the type of audio to end the event."
+    },
+    "min_volume": {
+      "label": "Min volume required to run audio detection."
+    },
+    "listen": {
+      "label": "Audio to listen for."
+    },
+    "filters": {
+      "label": "Audio filters."
+    },
+    "enabled_in_config": {
+      "label": "Keep track of original state of audio detection."
+    },
+    "num_threads": {
+      "label": "Number of detection threads"
+    }
+  }
+}
23  web/public/locales/en/config/audio_transcription.json  (new file)
@@ -0,0 +1,23 @@
+{
+  "label": "Audio transcription config.",
+  "properties": {
+    "enabled": {
+      "label": "Enable audio transcription."
+    },
+    "language": {
+      "label": "Language abbreviation to use for audio event transcription/translation."
+    },
+    "device": {
+      "label": "The device used for license plate recognition."
+    },
+    "model_size": {
+      "label": "The size of the embeddings model used."
+    },
+    "enabled_in_config": {
+      "label": "Keep track of original state of camera."
+    },
+    "live_enabled": {
+      "label": "Enable live transcriptions."
+    }
+  }
+}
35  web/public/locales/en/config/auth.json  (new file)
@@ -0,0 +1,35 @@
+{
+  "label": "Auth configuration.",
+  "properties": {
+    "enabled": {
+      "label": "Enable authentication"
+    },
+    "reset_admin_password": {
+      "label": "Reset the admin password on startup"
+    },
+    "cookie_name": {
+      "label": "Name for jwt token cookie"
+    },
+    "cookie_secure": {
+      "label": "Set secure flag on cookie"
+    },
+    "session_length": {
+      "label": "Session length for jwt session tokens"
+    },
+    "refresh_time": {
+      "label": "Refresh the session if it is going to expire in this many seconds"
+    },
+    "failed_login_rate_limit": {
+      "label": "Rate limits for failed login attempts."
+    },
+    "trusted_proxies": {
+      "label": "Trusted proxies for determining IP address to rate limit"
+    },
+    "hash_iterations": {
+      "label": "Password hash iterations"
+    },
+    "roles": {
+      "label": "Role to camera mappings. Empty list grants access to all cameras."
+    }
+  }
+}
37  web/public/locales/en/config/birdseye.json  (new file)
@@ -0,0 +1,37 @@
+{
+  "label": "Birdseye configuration.",
+  "properties": {
+    "enabled": {
+      "label": "Enable birdseye view."
+    },
+    "mode": {
+      "label": "Tracking mode."
+    },
+    "restream": {
+      "label": "Restream birdseye via RTSP."
+    },
+    "width": {
+      "label": "Birdseye width."
+    },
+    "height": {
+      "label": "Birdseye height."
+    },
+    "quality": {
+      "label": "Encoding quality."
+    },
+    "inactivity_threshold": {
+      "label": "Birdseye Inactivity Threshold"
+    },
+    "layout": {
+      "label": "Birdseye Layout Config",
+      "properties": {
+        "scaling_factor": {
+          "label": "Birdseye Scaling Factor"
+        },
+        "max_cameras": {
+          "label": "Max cameras"
+        }
+      }
+    }
+  }
+}
14  web/public/locales/en/config/camera_groups.json  (new file)
@@ -0,0 +1,14 @@
+{
+  "label": "Camera group configuration",
+  "properties": {
+    "cameras": {
+      "label": "List of cameras in this group."
+    },
+    "icon": {
+      "label": "Icon that represents camera group."
+    },
+    "order": {
+      "label": "Sort order for group."
+    }
+  }
+}
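Every per-section locale file added here, like `camera_groups.json` above, shares one shape: a `label` (and optionally a `description`) plus nested fields under `properties`. As a hypothetical illustration (the `flatten_labels` helper is not part of this change), such a tree can be flattened into dotted i18n lookup keys:

```python
def flatten_labels(tree, prefix=""):
    """Flatten a {label/description/properties} tree into dotted lookup keys."""
    flat = {}
    for key, value in tree.items():
        if key in ("label", "description"):
            flat[f"{prefix}.{key}" if prefix else key] = value
        elif key == "properties":
            # "properties" is a structural level; it adds no key segment here.
            flat.update(flatten_labels(value, prefix))
        else:
            flat.update(flatten_labels(value, f"{prefix}.{key}" if prefix else key))
    return flat


camera_groups = {
    "label": "Camera group configuration",
    "properties": {
        "cameras": {"label": "List of cameras in this group."},
        "icon": {"label": "Icon that represents camera group."},
        "order": {"label": "Sort order for group."},
    },
}

print(flatten_labels(camera_groups))
# {'label': 'Camera group configuration', 'cameras.label': ..., 'icon.label': ..., 'order.label': ...}
```

Whether the frontend consumes the tree directly or flattens it this way is a design choice; the nested form shown in the new files mirrors the Pydantic model hierarchy one-to-one.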
File diff suppressed because it is too large
58  web/public/locales/en/config/classification.json  (new file)
@@ -0,0 +1,58 @@
+{
+  "label": "Object classification config.",
+  "properties": {
+    "bird": {
+      "label": "Bird classification config.",
+      "properties": {
+        "enabled": {
+          "label": "Enable bird classification."
+        },
+        "threshold": {
+          "label": "Minimum classification score required to be considered a match."
+        }
+      }
+    },
+    "custom": {
+      "label": "Custom Classification Model Configs.",
+      "properties": {
+        "enabled": {
+          "label": "Enable running the model."
+        },
+        "name": {
+          "label": "Name of classification model."
+        },
+        "threshold": {
+          "label": "Classification score threshold to change the state."
+        },
+        "object_config": {
+          "properties": {
+            "objects": {
+              "label": "Object types to classify."
+            },
+            "classification_type": {
+              "label": "Type of classification that is applied."
+            }
+          }
+        },
+        "state_config": {
+          "properties": {
+            "cameras": {
+              "label": "Cameras to run classification on.",
+              "properties": {
+                "crop": {
+                  "label": "Crop of image frame on this camera to run classification on."
+                }
+              }
+            },
+            "motion": {
+              "label": "If classification should be run when motion is detected in the crop."
+            },
+            "interval": {
+              "label": "Interval to run classification on in seconds."
+            }
+          }
+        }
+      }
+    }
+  }
+}
8 web/public/locales/en/config/database.json Normal file
@@ -0,0 +1,8 @@
{
  "label": "Database configuration.",
  "properties": {
    "path": {
      "label": "Database path."
    }
  }
}
51 web/public/locales/en/config/detect.json Normal file
@@ -0,0 +1,51 @@
{
  "label": "Global object tracking configuration.",
  "properties": {
    "enabled": {
      "label": "Detection Enabled."
    },
    "height": {
      "label": "Height of the stream for the detect role."
    },
    "width": {
      "label": "Width of the stream for the detect role."
    },
    "fps": {
      "label": "Number of frames per second to process through detection."
    },
    "min_initialized": {
      "label": "Minimum number of consecutive hits for an object to be initialized by the tracker."
    },
    "max_disappeared": {
      "label": "Maximum number of frames the object can disappear before detection ends."
    },
    "stationary": {
      "label": "Stationary objects config.",
      "properties": {
        "interval": {
          "label": "Frame interval for checking stationary objects."
        },
        "threshold": {
          "label": "Number of frames without a position change for an object to be considered stationary."
        },
        "max_frames": {
          "label": "Max frames for stationary objects.",
          "properties": {
            "default": {
              "label": "Default max frames."
            },
            "objects": {
              "label": "Object specific max frames."
            }
          }
        },
        "classifier": {
          "label": "Enable visual classifier for determining if objects with jittery bounding boxes are stationary."
        }
      }
    },
    "annotation_offset": {
      "label": "Milliseconds to offset detect annotations by."
    }
  }
}
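For context, these labels document the options under Frigate's `detect` config. A minimal sketch of the corresponding YAML (camera name and values are placeholders, not recommendations):

```yaml
# Hypothetical camera-level detect settings; "front" is an example camera name.
cameras:
  front:
    detect:
      enabled: true
      width: 1280   # width of the stream used for the detect role
      height: 720   # height of the stream used for the detect role
      fps: 5        # frames per second processed through detection
      stationary:
        interval: 50   # frame interval for checking stationary objects
        threshold: 50  # frames without a position change before considered stationary
```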
14 web/public/locales/en/config/detectors.json Normal file
@@ -0,0 +1,14 @@
{
  "label": "Detector hardware configuration.",
  "properties": {
    "type": {
      "label": "Detector Type"
    },
    "model": {
      "label": "Detector specific model configuration."
    },
    "model_path": {
      "label": "Detector specific model path."
    }
  }
}
3 web/public/locales/en/config/environment_vars.json Normal file
@@ -0,0 +1,3 @@
{
  "label": "Frigate environment variables."
}
36 web/public/locales/en/config/face_recognition.json Normal file
@@ -0,0 +1,36 @@
{
  "label": "Face recognition config.",
  "properties": {
    "enabled": {
      "label": "Enable face recognition."
    },
    "model_size": {
      "label": "The size of the embeddings model used."
    },
    "unknown_score": {
      "label": "Minimum face distance score required to be marked as a potential match."
    },
    "detection_threshold": {
      "label": "Minimum face detection score required to be considered a face."
    },
    "recognition_threshold": {
      "label": "Minimum face distance score required to be considered a match."
    },
    "min_area": {
      "label": "Min area of face box to consider running face recognition."
    },
    "min_faces": {
      "label": "Min face recognitions for the sub label to be applied to the person object."
    },
    "save_attempts": {
      "label": "Number of face attempts to save in the recent recognitions tab."
    },
    "blur_confidence_filter": {
      "label": "Apply blur quality filter to face confidence."
    },
    "device": {
      "label": "The device key to use for face recognition.",
      "description": "This is an override, to target a specific device. See https://onnxruntime.ai/docs/execution-providers/ for more information"
    }
  }
}
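These labels map to Frigate's global `face_recognition` options. A hedged sketch of the corresponding YAML (the threshold and area values are illustrative only):

```yaml
# Sketch of a face recognition config; values are examples, not tuned defaults.
face_recognition:
  enabled: true
  model_size: small           # size of the embeddings model used
  recognition_threshold: 0.9  # min distance score to be considered a match
  min_area: 500               # min face box area before recognition runs
```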
34 web/public/locales/en/config/ffmpeg.json Normal file
@@ -0,0 +1,34 @@
{
  "label": "Global FFmpeg configuration.",
  "properties": {
    "path": {
      "label": "FFmpeg path"
    },
    "global_args": {
      "label": "Global FFmpeg arguments."
    },
    "hwaccel_args": {
      "label": "FFmpeg hardware acceleration arguments."
    },
    "input_args": {
      "label": "FFmpeg input arguments."
    },
    "output_args": {
      "label": "FFmpeg output arguments per role.",
      "properties": {
        "detect": {
          "label": "Detect role FFmpeg output arguments."
        },
        "record": {
          "label": "Record role FFmpeg output arguments."
        }
      }
    },
    "retry_interval": {
      "label": "Time in seconds to wait before FFmpeg retries connecting to the camera."
    },
    "apple_compatibility": {
      "label": "Set tag on HEVC (H.265) recording stream to improve compatibility with Apple players."
    }
  }
}
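As a hedged illustration of the global `ffmpeg` options these labels describe (the preset assumes a VAAPI-capable host; the interval is an example):

```yaml
# Sketch of global FFmpeg settings; preset and interval are placeholders.
ffmpeg:
  hwaccel_args: preset-vaapi  # hardware acceleration arguments (preset form)
  retry_interval: 10          # seconds to wait before retrying the camera connection
```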
23 web/public/locales/en/config/genai.json Normal file
@@ -0,0 +1,23 @@
{
  "label": "Generative AI configuration.",
  "properties": {
    "api_key": {
      "label": "Provider API key."
    },
    "base_url": {
      "label": "Provider base url."
    },
    "model": {
      "label": "GenAI model."
    },
    "provider": {
      "label": "GenAI provider."
    },
    "provider_options": {
      "label": "GenAI Provider extra options."
    },
    "runtime_options": {
      "label": "Options to pass during inference calls."
    }
  }
}
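These labels correspond to the global Generative AI provider settings described in the docs above. A minimal sketch of the YAML (provider, URL, and model are examples; an API key can be supplied via a `FRIGATE_`-prefixed environment variable per the configuration docs):

```yaml
# Sketch of a global GenAI provider config; values are examples only.
genai:
  provider: ollama
  base_url: http://localhost:11434  # example local endpoint
  model: llava                      # example vision-capable model
  # api_key: not needed for a local provider; cloud providers take one here
```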
File diff suppressed because it is too large
3 web/public/locales/en/config/go2rtc.json Normal file
@@ -0,0 +1,3 @@
{
  "label": "Global restream configuration."
}
@@ -1,73 +0,0 @@
{
  "audio": {
    "global": {
      "detection": "Global Detection",
      "sensitivity": "Global Sensitivity"
    },
    "cameras": {
      "detection": "Detection",
      "sensitivity": "Sensitivity"
    }
  },
  "timestamp_style": {
    "global": {
      "appearance": "Global Appearance"
    },
    "cameras": {
      "appearance": "Appearance"
    }
  },
  "motion": {
    "global": {
      "sensitivity": "Global Sensitivity",
      "algorithm": "Global Algorithm"
    },
    "cameras": {
      "sensitivity": "Sensitivity",
      "algorithm": "Algorithm"
    }
  },
  "snapshots": {
    "global": {
      "display": "Global Display"
    },
    "cameras": {
      "display": "Display"
    }
  },
  "detect": {
    "global": {
      "resolution": "Global Resolution",
      "tracking": "Global Tracking"
    },
    "cameras": {
      "resolution": "Resolution",
      "tracking": "Tracking"
    }
  },
  "objects": {
    "global": {
      "tracking": "Global Tracking",
      "filtering": "Global Filtering"
    },
    "cameras": {
      "tracking": "Tracking",
      "filtering": "Filtering"
    }
  },
  "record": {
    "global": {
      "retention": "Global Retention",
      "events": "Global Events"
    },
    "cameras": {
      "retention": "Retention",
      "events": "Events"
    }
  },
  "ffmpeg": {
    "cameras": {
      "cameraFfmpeg": "Camera-specific FFmpeg arguments"
    }
  }
}
14 web/public/locales/en/config/live.json Normal file
@@ -0,0 +1,14 @@
{
  "label": "Live playback settings.",
  "properties": {
    "streams": {
      "label": "Friendly names and restream names to use for live view."
    },
    "height": {
      "label": "Live camera view height"
    },
    "quality": {
      "label": "Live camera view quality"
    }
  }
}
11 web/public/locales/en/config/logger.json Normal file
@@ -0,0 +1,11 @@
{
  "label": "Logging configuration.",
  "properties": {
    "default": {
      "label": "Default logging level."
    },
    "logs": {
      "label": "Log level for specified processes."
    }
  }
}
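The logger labels above map to a small global config section. A hedged sketch (the process name under `logs` is an example, not an exhaustive list):

```yaml
# Sketch of logging config; "frigate.record" is an example process name.
logger:
  default: info        # default logging level
  logs:
    frigate.record: debug  # per-process log level override
```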
Some files were not shown because too many files have changed in this diff.