Mirror of https://github.com/blakeblackshear/frigate.git, synced 2026-01-20 19:28:53 -05:00

Compare commits: dependabot ... misc-fixes (3 commits)

| Author | SHA1 | Date |
| --- | --- | --- |
|  | 722497ba0b |  |
|  | 0a466929eb |  |
|  | c8e2fdb190 |  |
@@ -79,12 +79,6 @@ cameras:

If the ONVIF connection is successful, PTZ controls will be available in the camera's WebUI.

:::note

Some cameras use a separate ONVIF/service account that is distinct from the device administrator credentials. If ONVIF authentication fails with the admin account, try creating or using an ONVIF/service user in the camera's firmware. Refer to your camera manufacturer's documentation for more details.

:::

:::tip

If your ONVIF camera does not require authentication credentials, you may still need to specify an empty string for `user` and `password`, e.g. `user: ""` and `password: ""`.
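If you want to sanity-check ONVIF credentials outside of Frigate, a quick test can be done from Python. This is a minimal sketch using the third-party `onvif-zeep` package; the host, port, and credentials are placeholders for your own camera.

```python
# pip install onvif-zeep
from onvif import ONVIFCamera

# Placeholders: replace with your camera's address, ONVIF port, and credentials.
# Use empty strings for user/password if your camera does not require them.
cam = ONVIFCamera("192.168.1.10", 80, "onvif_user", "onvif_password")

# If authentication works, this returns the device manufacturer/model info.
info = cam.create_devicemgmt_service().GetDeviceInformation()
print(info.Manufacturer, info.Model, info.FirmwareVersion)
```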
@@ -101,7 +95,7 @@ The FeatureList on the [ONVIF Conformant Products Database](https://www.onvif.or

| Brand or specific camera | PTZ Controls | Autotracking | Notes |
| ------------------------ | :----------: | :----------: | ----- |
| Amcrest | ✅ | ✅ | ⛔️ Generally, Amcrest should work, but some older models (like the common IP2M-841) don't support autotracking |
| Amcrest ASH21 | ✅ | ❌ | ONVIF service port: 80 |
| Amcrest IP4M-S2112EW-AI | ✅ | ❌ | FOV relative movement not supported. |
| Amcrest IP5M-1190EW | ✅ | ❌ | ONVIF Port: 80. FOV relative movement not supported. |
@@ -1,247 +0,0 @@

---
id: genai
title: Generative AI
---

Generative AI can be used to automatically generate descriptive text based on the thumbnails of your tracked objects. This helps with [Semantic Search](/configuration/semantic_search) in Frigate to provide more context about your tracked objects. Descriptions are accessed via the _Explore_ view in the Frigate UI by clicking on a tracked object's thumbnail.

Requests for a description are sent to your AI provider automatically at the end of the tracked object's lifecycle, or can optionally be sent earlier, after a number of significantly changed frames, for use in more real-time notifications. Descriptions can also be regenerated manually via the Frigate UI. Note that if you manually enter a description for a tracked object before its lifecycle ends, it will be overwritten by the generated response.

## Configuration

Generative AI can be enabled for all cameras or only for specific cameras. If GenAI is disabled for a camera, you can still manually generate descriptions for events using the HTTP API. There are currently three native providers available to integrate with Frigate. Other providers that support the OpenAI standard API can also be used; see the OpenAI section below.

To use Generative AI, you must define a single provider at the global level of your Frigate configuration. If the provider you choose requires an API key, you may either paste it directly into your configuration or store it in an environment variable prefixed with `FRIGATE_`.

```yaml
genai:
  provider: gemini
  api_key: "{FRIGATE_GEMINI_API_KEY}"
  model: gemini-2.0-flash

cameras:
  front_camera:
    genai:
      enabled: True # <- enable GenAI for your front camera
      use_snapshot: True
      objects:
        - person
      required_zones:
        - steps
  indoor_camera:
    objects:
      genai:
        enabled: False # <- disable GenAI for your indoor camera
```

By default, descriptions will be generated for all tracked objects and all zones. But you can also optionally specify `objects` and `required_zones` to only generate descriptions for certain tracked objects or zones.

Optionally, you can generate the description using a snapshot (if enabled) by setting `use_snapshot` to `True`. By default, this is set to `False`, which sends the uncompressed images from the `detect` stream collected over the object's lifetime to the model. Once the object lifecycle ends, only a single compressed and cropped thumbnail is saved with the tracked object. Using a snapshot might be useful when you want to _regenerate_ a tracked object's description, as it will provide the AI with a higher-quality image (typically downscaled by the AI itself) than the cropped/compressed thumbnail. The trade-off is that only a single image is sent to your provider, which will limit the model's ability to determine object movement or direction.

Generative AI can also be toggled dynamically for a camera via MQTT with the topic `frigate/<camera_name>/object_descriptions/set`. See the [MQTT documentation](/integrations/mqtt/#frigatecamera_nameobjectdescriptionsset).
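For example, to turn descriptions off for a single camera from a script or automation, you can publish to that topic with any MQTT client. This is a minimal sketch using the `paho-mqtt` package; the broker address and the `ON`/`OFF` payload convention used by Frigate's other set topics are assumptions to adapt to your setup.

```python
# pip install paho-mqtt
import paho.mqtt.publish as publish

# Hypothetical broker address; Frigate set topics generally accept "ON"/"OFF" payloads.
publish.single(
    "frigate/front_camera/object_descriptions/set",
    payload="OFF",
    hostname="192.168.1.5",  # your MQTT broker
    port=1883,
)
```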
## Ollama

:::warning

Using Ollama on CPU is not recommended; high inference times make using Generative AI impractical.

:::

[Ollama](https://ollama.com/) allows you to self-host large language models and keep everything running locally. It is highly recommended to host this server on a machine with an Nvidia graphics card or an Apple silicon Mac for best performance.

Most of the 7b parameter 4-bit vision models will fit inside 8GB of VRAM. There is also a [Docker container](https://hub.docker.com/r/ollama/ollama) available.

Parallel requests also come with some caveats. You will need to set `OLLAMA_NUM_PARALLEL=1` and choose `OLLAMA_MAX_QUEUE` and `OLLAMA_MAX_LOADED_MODELS` values that are appropriate for your hardware and preferences. See the [Ollama documentation](https://docs.ollama.com/faq#how-does-ollama-handle-concurrent-requests).
### Model Types: Instruct vs Thinking

Most vision-language models are available as **instruct** models, which are fine-tuned to follow instructions and respond concisely to prompts. However, some models (such as certain Qwen-VL or MiniGPT variants) offer both **instruct** and **thinking** versions.

- **Instruct models** are always recommended for use with Frigate. These models generate direct, relevant, actionable descriptions that best fit Frigate's object and event summary use case.
- **Thinking models** are fine-tuned for more free-form, open-ended, and speculative outputs, which are typically not concise and may not provide the practical summaries Frigate expects. For this reason, Frigate does **not** recommend or support using thinking models.

Some models are labeled as **hybrid** (capable of both thinking and instruct tasks). In these cases, Frigate will always use instruct-style prompts and specifically disables thinking-mode behaviors to ensure concise, useful responses.

**Recommendation:**
Always select the `-instruct` or documented instruct-tagged variant of any model you use in your Frigate configuration. If in doubt, refer to your model provider's documentation or model library for guidance on the correct model variant to use.
### Supported Models

You must use a vision capable model with Frigate. Current model variants can be found [in their model library](https://ollama.com/search?c=vision). Note that Frigate will not automatically download the model you specify in your config; you must download the model to your local instance of Ollama first, e.g. by running `ollama pull qwen3-vl:2b-instruct` on your Ollama server/Docker container. The model specified in Frigate's config must match the downloaded model tag.

:::note

You should have at least 8 GB of RAM available (or VRAM if running on GPU) to run the 7B models, 16 GB to run the 13B models, and 32 GB to run the 33B models.

:::

#### Ollama Cloud models

Ollama also supports [cloud models](https://ollama.com/cloud), where your local Ollama instance handles requests from Frigate, but model inference is performed in the cloud. Set up Ollama locally, sign in with your Ollama account, and specify the cloud model name in your Frigate config. For more details, see the Ollama cloud model [docs](https://docs.ollama.com/cloud).

### Configuration

```yaml
genai:
  provider: ollama
  base_url: http://localhost:11434
  model: qwen3-vl:4b
```
## Google Gemini

Google Gemini has a [free tier](https://ai.google.dev/pricing) for the API; however, the limits may not be sufficient for standard Frigate usage. Choose a plan appropriate for your installation.

### Supported Models

You must use a vision capable model with Frigate. Current model variants can be found [in their documentation](https://ai.google.dev/gemini-api/docs/models/gemini).

### Get API Key

To start using Gemini, you must first get an API key from [Google AI Studio](https://aistudio.google.com).

1. Accept the Terms of Service
2. Click "Get API Key" from the right hand navigation
3. Click "Create API key in new project"
4. Copy the API key for use in your config

### Configuration

```yaml
genai:
  provider: gemini
  api_key: "{FRIGATE_GEMINI_API_KEY}"
  model: gemini-2.5-flash
```

:::note

To use a different Gemini-compatible API endpoint, set the `GEMINI_BASE_URL` environment variable to your provider's API URL.

:::
## OpenAI

OpenAI does not have a free tier for their API. With the release of gpt-4o, pricing has been reduced and each generation should cost fractions of a cent if you choose to go this route.

### Supported Models

You must use a vision capable model with Frigate. Current model variants can be found [in their documentation](https://platform.openai.com/docs/models).

### Get API Key

To start using OpenAI, you must first [create an API key](https://platform.openai.com/api-keys) and [configure billing](https://platform.openai.com/settings/organization/billing/overview).

### Configuration

```yaml
genai:
  provider: openai
  api_key: "{FRIGATE_OPENAI_API_KEY}"
  model: gpt-4o
```

:::note

To use a different OpenAI-compatible API endpoint, set the `OPENAI_BASE_URL` environment variable to your provider's API URL.

:::
## Azure OpenAI

Microsoft offers several vision models through Azure OpenAI. A subscription is required.

### Supported Models

You must use a vision capable model with Frigate. Current model variants can be found [in their documentation](https://learn.microsoft.com/en-us/azure/ai-services/openai/concepts/models).

### Create Resource and Get API Key

To start using Azure OpenAI, you must first [create a resource](https://learn.microsoft.com/azure/cognitive-services/openai/how-to/create-resource?pivots=web-portal#create-a-resource). You'll need your API key, model name, and resource URL, which must include the `api-version` parameter (see the example below).

### Configuration

```yaml
genai:
  provider: azure_openai
  base_url: https://instance.cognitiveservices.azure.com/openai/responses?api-version=2025-04-01-preview
  model: gpt-5-mini
  api_key: "{FRIGATE_OPENAI_API_KEY}"
```
## Usage and Best Practices

Frigate's thumbnail search excels at identifying specific details about tracked objects – for example, using an "image caption" approach to find a "person wearing a yellow vest," "a white dog running across the lawn," or "a red car on a residential street." To enhance this further, Frigate's default prompts are designed to ask your AI provider about the intent behind the object's actions, rather than just describing its appearance.

While generating simple descriptions of detected objects is useful, understanding intent provides a deeper layer of insight. Instead of just recognizing "what" is in a scene, Frigate's default prompts aim to infer "why" it might be there or "what" it could do next. Descriptions tell you what's happening, but intent gives context. For instance, a person walking toward a door might seem like a visitor, but if they're moving quickly after hours, you can infer a potential break-in attempt. Detecting a person loitering near a door at night can trigger an alert sooner than simply noting "a person standing by the door," helping you respond based on the situation's context.

### Using GenAI for notifications

Frigate provides an [MQTT topic](/integrations/mqtt), `frigate/tracked_object_update`, that is updated with a JSON payload containing `event_id` and `description` when your AI provider returns a description for a tracked object. This description could be used directly in notifications, such as sending alerts to your phone or making audio announcements. If additional details from the tracked object are needed, you can query the [HTTP API](/integrations/api/event-events-event-id-get) using the `event_id`, e.g. `http://frigate_ip:5000/api/events/<event_id>`. A sketch of this flow is shown below.
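This is a minimal sketch of a notification consumer, assuming the `paho-mqtt` and `requests` packages and placeholder broker/Frigate addresses; it simply prints the description, but you could forward it to any notification service instead.

```python
# pip install paho-mqtt requests
import json

import paho.mqtt.subscribe as subscribe
import requests

FRIGATE_URL = "http://frigate_ip:5000"  # placeholder, adjust to your install


def on_message(client, userdata, message):
    payload = json.loads(message.payload)
    event_id = payload["event_id"]
    description = payload["description"]

    # Optionally pull the full tracked object for extra details (camera, label, zones, ...)
    event = requests.get(f"{FRIGATE_URL}/api/events/{event_id}", timeout=10).json()
    print(f"[{event.get('camera')}] {event.get('label')}: {description}")


# Blocks and invokes on_message for every description Frigate publishes.
subscribe.callback(on_message, "frigate/tracked_object_update", hostname="192.168.1.5")
```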
If you want notifications earlier than when an object ceases to be tracked, an additional send trigger, `after_significant_updates`, can be configured.

```yaml
genai:
  send_triggers:
    tracked_object_end: true # default
    after_significant_updates: 3 # how many updates to a tracked object before we should send an image
```
## Custom Prompts

Frigate sends multiple frames from the tracked object along with a prompt to your Generative AI provider asking it to generate a description. The default prompt is as follows:

```
Analyze the sequence of images containing the {label}. Focus on the likely intent or behavior of the {label} based on its actions and movement, rather than describing its appearance or the surroundings. Consider what the {label} is doing, why, and what it might do next.
```

:::tip

Prompts can use variable replacements `{label}`, `{sub_label}`, and `{camera}` to substitute information from the tracked object as part of the prompt.

:::
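Conceptually, the substitution works like standard placeholder formatting. This is only an illustration (Frigate performs the substitution internally), and the values shown are hypothetical.

```python
# Illustration of how the placeholders in a custom prompt get filled in.
prompt = "Analyze the {label} in these images from the {camera} security camera."
print(prompt.format(label="person", camera="front_door"))
# -> Analyze the person in these images from the front_door security camera.
```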
You are also able to define custom prompts in your configuration.

```yaml
genai:
  provider: ollama
  base_url: http://localhost:11434
  model: qwen3-vl:8b-instruct

objects:
  prompt: "Analyze the {label} in these images from the {camera} security camera. Focus on the actions, behavior, and potential intent of the {label}, rather than just describing its appearance."
  object_prompts:
    person: "Examine the main person in these images. What are they doing and what might their actions suggest about their intent (e.g., approaching a door, leaving an area, standing still)? Do not describe the surroundings or static details."
    car: "Observe the primary vehicle in these images. Focus on its movement, direction, or purpose (e.g., parking, approaching, circling). If it's a delivery vehicle, mention the company."
```

Prompts can also be overridden at the camera level to provide a more detailed prompt to the model about your specific camera, if you desire.

```yaml
cameras:
  front_door:
    objects:
      genai:
        enabled: True
        use_snapshot: True
        prompt: "Analyze the {label} in these images from the {camera} security camera at the front door. Focus on the actions and potential intent of the {label}."
        object_prompts:
          person: "Examine the person in these images. What are they doing, and how might their actions suggest their purpose (e.g., delivering something, approaching, leaving)? If they are carrying or interacting with a package, include details about its source or destination."
          cat: "Observe the cat in these images. Focus on its movement and intent (e.g., wandering, hunting, interacting with objects). If the cat is near the flower pots or engaging in any specific actions, mention it."
        objects:
          - person
          - cat
        required_zones:
          - steps
```

### Experiment with prompts

Many providers also have a public-facing chat interface for their models. Download a couple of different thumbnails or snapshots from Frigate and experiment in the playground to get descriptions to your liking before updating the prompt in Frigate.

- OpenAI - [ChatGPT](https://chatgpt.com)
- Gemini - [Google AI Studio](https://aistudio.google.com)
- Ollama - [Open WebUI](https://docs.openwebui.com/)
@@ -17,11 +17,23 @@ Using Ollama on CPU is not recommended, high inference times make using Generati

:::

[Ollama](https://ollama.com/) allows you to self-host large language models and keep everything running locally. It provides a nice API over [llama.cpp](https://github.com/ggerganov/llama.cpp). It is highly recommended to host this server on a machine with an Nvidia graphics card or an Apple silicon Mac for best performance.
[Ollama](https://ollama.com/) allows you to self-host large language models and keep everything running locally. It is highly recommended to host this server on a machine with an Nvidia graphics card or an Apple silicon Mac for best performance.

Most of the 7b parameter 4-bit vision models will fit inside 8GB of VRAM. There is also a [Docker container](https://hub.docker.com/r/ollama/ollama) available.

Parallel requests also come with some caveats. You will need to set `OLLAMA_NUM_PARALLEL=1` and choose `OLLAMA_MAX_QUEUE` and `OLLAMA_MAX_LOADED_MODELS` values that are appropriate for your hardware and preferences. See the [Ollama documentation](https://github.com/ollama/ollama/blob/main/docs/faq.md#how-does-ollama-handle-concurrent-requests).
Parallel requests also come with some caveats. You will need to set `OLLAMA_NUM_PARALLEL=1` and choose `OLLAMA_MAX_QUEUE` and `OLLAMA_MAX_LOADED_MODELS` values that are appropriate for your hardware and preferences. See the [Ollama documentation](https://docs.ollama.com/faq#how-does-ollama-handle-concurrent-requests).

### Model Types: Instruct vs Thinking

Most vision-language models are available as **instruct** models, which are fine-tuned to follow instructions and respond concisely to prompts. However, some models (such as certain Qwen-VL or MiniGPT variants) offer both **instruct** and **thinking** versions.

- **Instruct models** are always recommended for use with Frigate. These models generate direct, relevant, actionable descriptions that best fit Frigate's object and event summary use case.
- **Thinking models** are fine-tuned for more free-form, open-ended, and speculative outputs, which are typically not concise and may not provide the practical summaries Frigate expects. For this reason, Frigate does **not** recommend or support using thinking models.

Some models are labeled as **hybrid** (capable of both thinking and instruct tasks). In these cases, Frigate will always use instruct-style prompts and specifically disables thinking-mode behaviors to ensure concise, useful responses.

**Recommendation:**
Always select the `-instruct` or documented instruct-tagged variant of any model you use in your Frigate configuration. If in doubt, refer to your model provider's documentation or model library for guidance on the correct model variant to use.

### Supported Models

@@ -54,26 +66,26 @@ You should have at least 8 GB of RAM available (or VRAM if running on GPU) to ru
:::

#### Ollama Cloud models

Ollama also supports [cloud models](https://ollama.com/cloud), where your local Ollama instance handles requests from Frigate, but model inference is performed in the cloud. Set up Ollama locally, sign in with your Ollama account, and specify the cloud model name in your Frigate config. For more details, see the Ollama cloud model [docs](https://docs.ollama.com/cloud).

### Configuration

```yaml
genai:
  provider: ollama
  base_url: http://localhost:11434
  model: minicpm-v:8b
  provider_options: # other Ollama client options can be defined
    keep_alive: -1
    options:
      num_ctx: 8192 # make sure the context matches other services that are using ollama
  model: qwen3-vl:4b
```
## Google Gemini

Google Gemini has a free tier allowing [15 queries per minute](https://ai.google.dev/pricing) to the API, which is more than sufficient for standard Frigate usage.
Google Gemini has a [free tier](https://ai.google.dev/pricing) for the API; however, the limits may not be sufficient for standard Frigate usage. Choose a plan appropriate for your installation.

### Supported Models

You must use a vision capable model with Frigate. Current model variants can be found [in their documentation](https://ai.google.dev/gemini-api/docs/models/gemini). At the time of writing, this includes `gemini-1.5-pro` and `gemini-1.5-flash`.
You must use a vision capable model with Frigate. Current model variants can be found [in their documentation](https://ai.google.dev/gemini-api/docs/models/gemini).

### Get API Key

@@ -90,16 +102,32 @@ To start using Gemini, you must first get an API key from [Google AI Studio](htt

genai:
  provider: gemini
  api_key: "{FRIGATE_GEMINI_API_KEY}"
  model: gemini-1.5-flash
  model: gemini-2.5-flash
```

:::note

To use a different Gemini-compatible API endpoint, set the `base_url` key under `provider_options` to your provider's API URL. For example:

```
genai:
  provider: gemini
  ...
  provider_options:
    base_url: https://...
```

Other HTTP options are available; see the [python-genai documentation](https://github.com/googleapis/python-genai).

:::
## OpenAI

OpenAI does not have a free tier for their API. With the release of gpt-4o, pricing has been reduced and each generation should cost fractions of a cent if you choose to go this route.

### Supported Models

You must use a vision capable model with Frigate. Current model variants can be found [in their documentation](https://platform.openai.com/docs/models). At the time of writing, this includes `gpt-4o` and `gpt-4-turbo`.
You must use a vision capable model with Frigate. Current model variants can be found [in their documentation](https://platform.openai.com/docs/models).

### Get API Key

@@ -143,17 +171,18 @@ Microsoft offers several vision models through Azure OpenAI. A subscription is r

### Supported Models

You must use a vision capable model with Frigate. Current model variants can be found [in their documentation](https://learn.microsoft.com/en-us/azure/ai-services/openai/concepts/models). At the time of writing, this includes `gpt-4o` and `gpt-4-turbo`.
You must use a vision capable model with Frigate. Current model variants can be found [in their documentation](https://learn.microsoft.com/en-us/azure/ai-services/openai/concepts/models).

### Create Resource and Get API Key

To start using Azure OpenAI, you must first [create a resource](https://learn.microsoft.com/azure/cognitive-services/openai/how-to/create-resource?pivots=web-portal#create-a-resource). You'll need your API key and resource URL, which must include the `api-version` parameter (see the example below). The model field is not required in your configuration as the model is part of the deployment name you chose when deploying the resource.
To start using Azure OpenAI, you must first [create a resource](https://learn.microsoft.com/azure/cognitive-services/openai/how-to/create-resource?pivots=web-portal#create-a-resource). You'll need your API key, model name, and resource URL, which must include the `api-version` parameter (see the example below).

### Configuration

```yaml
genai:
  provider: azure_openai
  base_url: https://example-endpoint.openai.azure.com/openai/deployments/gpt-4o/chat/completions?api-version=2023-03-15-preview
  base_url: https://instance.cognitiveservices.azure.com/openai/responses?api-version=2025-04-01-preview
  model: gpt-5-mini
  api_key: "{FRIGATE_OPENAI_API_KEY}"
```
@@ -125,10 +125,10 @@ review:

## Review Reports

Along with individual review item summaries, Generative AI can also produce a single report of review items from all cameras marked "suspicious" over a specified time period (for example, a daily summary of suspicious activity while you're on vacation).
Along with individual review item summaries, Generative AI provides the ability to request a report of a given time period. For example, you can get a daily report while on vacation of any suspicious activity or other concerns that may require review.

### Requesting Reports Programmatically

Review reports can be requested via the [API](/integrations/api/generate-review-summary-review-summarize-start-start-ts-end-end-ts-post) by sending a POST request to `/api/review/summarize/start/{start_ts}/end/{end_ts}` with Unix timestamps.
Review reports can be requested via the [API](/integrations/api#review-summarization) by sending a POST request to `/api/review/summarize/start/{start_ts}/end/{end_ts}` with Unix timestamps.
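For example, a daily report could be requested with a small script. This is a minimal sketch assuming the `requests` package and a placeholder Frigate address; the shape of the response body is not guaranteed here, so handle it according to your Frigate version.

```python
# pip install requests
import time

import requests

FRIGATE_URL = "http://frigate_ip:5000"  # placeholder, adjust to your install

# Summarize the last 24 hours of review items.
end_ts = int(time.time())
start_ts = end_ts - 24 * 60 * 60

resp = requests.post(
    f"{FRIGATE_URL}/api/review/summarize/start/{start_ts}/end/{end_ts}",
    timeout=60,
)
resp.raise_for_status()
print(resp.text)
```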
For Home Assistant users, there is a built-in service (`frigate.review_summarize`) that makes it easy to request review reports as part of automations or scripts. This allows you to automatically generate daily summaries, vacation reports, or custom time period reports based on your specific needs.
@@ -11,12 +11,6 @@ Cameras configured to output H.264 video and AAC audio will offer the most compa

- **Stream Viewing**: This stream will be rebroadcast as is to Home Assistant for viewing with the stream component. Setting this resolution too high will use significant bandwidth when viewing streams in Home Assistant, and they may not load reliably over slower connections.

:::tip

For the best experience in Frigate's UI, configure your camera so that the detection and recording streams use the same aspect ratio. For example, if your main stream is 3840x2160 (16:9), set your substream to 640x360 (also 16:9) instead of 640x480 (4:3). While not strictly required, matching aspect ratios helps ensure seamless live stream display and preview/recordings playback.

:::

### Choosing a detect resolution

The ideal resolution for detection is one where the objects you want to detect fit inside the dimensions of the model used by Frigate (320x320). Frigate does not pass the entire camera frame to object detection. It will crop an area of motion from the full frame and look in that portion of the frame. If the area being inspected is larger than 320x320, Frigate must resize it before running object detection. Higher resolutions do not improve the detection accuracy because the additional detail is lost in the resize. Below you can see a reference for how large a 320x320 area is against common resolutions.
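As a rough illustration of why extra detect resolution is discarded, the sketch below computes the scale factor applied to motion regions of various sizes before they reach the 320x320 model input; the region sizes are arbitrary examples, not values taken from Frigate.

```python
# Illustrative only: motion regions larger than the model input are downscaled,
# so detail beyond 320x320 does not reach the detector.
MODEL_SIZE = 320

for width, height in [(320, 320), (640, 640), (1280, 1280)]:
    scale = MODEL_SIZE / max(width, height)
    print(f"{width}x{height} region is scaled by {scale:.2f} before detection")
```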
@@ -42,7 +42,7 @@ If the EQ13 is out of stock, the link below may take you to a suggested alternat

| ------------------------------------------------------------------------------------------------------------- | -------------------------------------------------------------------------- | --------------------------------------------------- |
| Beelink EQ13 (<a href="https://amzn.to/4jn2qVr" target="_blank" rel="nofollow noopener sponsored">Amazon</a>)  | Can run object detection on several 1080p cameras with low-medium activity  | Dual gigabit NICs for easy isolated camera network.  |
| Intel i3-1220P ([Amazon](https://www.amazon.com/Beelink-i3-1220P-Computer-Display-Gigabit/dp/B0DDCKT9YP))      | Can handle a large number of 1080p cameras with high activity               |                                                      |
| Intel 125H ([Amazon](https://www.amazon.com/MINISFORUM-Pro-125H-Barebone-Computer-HDMI2-1/dp/B0FH21FSZM))      | Can handle a significant number of 1080p cameras with high activity         | Includes NPU for more efficient detection in 0.17+   |
## Detectors

@@ -55,10 +55,12 @@ Frigate supports multiple different detectors that work on different types of ha

**Most Hardware**

- [Hailo](#hailo-8): The Hailo8 and Hailo8L AI Acceleration module is available in m.2 format with a HAT for RPi devices, offering a wide range of compatibility with devices.

  - [Supports many model architectures](../../configuration/object_detectors#configuration)
  - Runs best with tiny or small size models

- [Google Coral EdgeTPU](#google-coral-tpu): The Google Coral EdgeTPU is available in USB and m.2 format allowing for a wide range of compatibility with devices.

  - [Supports primarily ssdlite and mobilenet model architectures](../../configuration/object_detectors#edge-tpu-detector)

- <CommunityBadge /> [MemryX](#memryx-mx3): The MX3 M.2 accelerator module is available in m.2 format allowing for a wide range of compatibility with devices.

@@ -87,6 +89,7 @@ Frigate supports multiple different detectors that work on different types of ha

**Nvidia**

- [TensorRT](#tensorrt---nvidia-gpu): TensorRT can run on Nvidia GPUs to provide efficient object detection.

  - [Supports majority of model architectures via ONNX](../../configuration/object_detectors#onnx-supported-models)
  - Runs well with any size models including large

@@ -149,7 +152,9 @@ The OpenVINO detector type is able to run on:

:::note

Intel B-series (Battlemage) GPUs are not officially supported with Frigate 0.17, though a user has [provided steps to rebuild the Frigate container](https://github.com/blakeblackshear/frigate/discussions/21257) with support for them.
Intel NPUs have seen [limited success in community deployments](https://github.com/blakeblackshear/frigate/discussions/13248#discussioncomment-12347357), although they remain officially unsupported.

In testing, the NPU delivered performance that was only comparable to — or in some cases worse than — the integrated GPU.

:::
@@ -37,7 +37,7 @@ cameras:

## Steps

1. Export or copy the clip you want to replay to the Frigate host (e.g., `/media/frigate/` or `debug/clips/`). Depending on what you are looking to debug, it is often helpful to add some "pre-capture" time (where the tracked object is not yet visible) to the clip when exporting.
1. Export or copy the clip you want to replay to the Frigate host (e.g., `/media/frigate/` or `debug/clips/`).
2. Add the temporary camera to `config/config.yml` (example above). Use a unique name such as `test` or `replay_camera` so it's easy to remove later.
   - If you're debugging a specific camera, copy the settings from that camera (frame rate, model/enrichment settings, zones, etc.) into the temporary camera so the replay closely matches the original environment. Leave `record` and `snapshots` disabled unless you are specifically debugging recording or snapshot behavior.
3. Restart Frigate.
docs/package-lock.json (generated, 6 changes)
@@ -9064,9 +9064,9 @@
      }
    },
    "node_modules/diff": {
      "version": "5.2.2",
      "resolved": "https://registry.npmjs.org/diff/-/diff-5.2.2.tgz",
      "integrity": "sha512-vtcDfH3TOjP8UekytvnHH1o1P4FcUdt4eQ1Y+Abap1tk/OB2MWQvcwS2ClCd1zuIhc3JKOx6p3kod8Vfys3E+A==",
      "version": "5.2.0",
      "resolved": "https://registry.npmjs.org/diff/-/diff-5.2.0.tgz",
      "integrity": "sha512-uIFDxqpRZGZ6ThOk84hEfqWoHx2devRFvpTZcTHur85vImfaxUbTW9Ryh4CpCuDnToOP1CEtXKIgytHBPVff5A==",
      "license": "BSD-3-Clause",
      "engines": {
        "node": ">=0.3.1"
@@ -22,7 +22,6 @@ class GeminiClient(GenAIClient):
        """Initialize the client."""
        # Merge provider_options into HttpOptions
        http_options_dict = {
            "api_version": "v1",
            "timeout": int(self.timeout * 1000), # requires milliseconds
            "retry_options": types.HttpRetryOptions(
                attempts=3,
@@ -64,12 +64,10 @@ def stop_ffmpeg(ffmpeg_process: sp.Popen[Any], logger: logging.Logger):
    try:
        logger.info("Waiting for ffmpeg to exit gracefully...")
        ffmpeg_process.communicate(timeout=30)
        logger.info("FFmpeg has exited")
    except sp.TimeoutExpired:
        logger.info("FFmpeg didn't exit. Force killing...")
        ffmpeg_process.kill()
        ffmpeg_process.communicate()
        logger.info("FFmpeg has been killed")
    ffmpeg_process = None
@@ -181,16 +181,6 @@
    "restricted": {
      "title": "No Cameras Available",
      "description": "You don't have permission to view any cameras in this group."
    },
    "default": {
      "title": "No Cameras Configured",
      "description": "Get started by connecting a camera to Frigate.",
      "buttonText": "Add Camera"
    },
    "group": {
      "title": "No Cameras in Group",
      "description": "This camera group has no assigned or enabled cameras.",
      "buttonText": "Manage Groups"
    }
  }
}
@@ -386,11 +386,11 @@
    "title": "Camera Review Settings",
    "object_descriptions": {
      "title": "Generative AI Object Descriptions",
      "desc": "Temporarily enable/disable Generative AI object descriptions for this camera until Frigate restarts. When disabled, AI generated descriptions will not be requested for tracked objects on this camera."
      "desc": "Temporarily enable/disable Generative AI object descriptions for this camera. When disabled, AI generated descriptions will not be requested for tracked objects on this camera."
    },
    "review_descriptions": {
      "title": "Generative AI Review Descriptions",
      "desc": "Temporarily enable/disable Generative AI review descriptions for this camera until Frigate restarts. When disabled, AI generated descriptions will not be requested for review items on this camera."
      "desc": "Temporarily enable/disable Generative AI review descriptions for this camera. When disabled, AI generated descriptions will not be requested for review items on this camera."
    },
    "review": {
      "title": "Review",
@@ -35,9 +35,7 @@ export function EmptyCard({
      {icon}
      {TitleComponent}
      {description && (
        <div className="mb-3 text-center text-secondary-foreground">
          {description}
        </div>
        <div className="mb-3 text-secondary-foreground">{description}</div>
      )}
      {buttonText?.length && (
        <Button size="sm" variant="select">
@@ -13,7 +13,7 @@ import HlsVideoPlayer from "@/components/player/HlsVideoPlayer";
import { baseUrl } from "@/api/baseUrl";
import { REVIEW_PADDING } from "@/types/review";
import {
  ASPECT_PORTRAIT_LAYOUT,
  ASPECT_VERTICAL_LAYOUT,
  ASPECT_WIDE_LAYOUT,
  Recording,
} from "@/types/record";
@@ -39,7 +39,6 @@ import { useApiHost } from "@/api";
import ImageLoadingIndicator from "@/components/indicators/ImageLoadingIndicator";
import ObjectTrackOverlay from "../ObjectTrackOverlay";
import { useIsAdmin } from "@/hooks/use-is-admin";
import { VideoResolutionType } from "@/types/live";

type TrackingDetailsProps = {
  className?: string;
@@ -254,25 +253,16 @@ export function TrackingDetails({

  const [timelineSize] = useResizeObserver(timelineContainerRef);

  const [fullResolution, setFullResolution] = useState<VideoResolutionType>({
    width: 0,
    height: 0,
  });

  const aspectRatio = useMemo(() => {
    if (!config) {
      return 16 / 9;
    }

    if (fullResolution.width && fullResolution.height) {
      return fullResolution.width / fullResolution.height;
    }

    return (
      config.cameras[event.camera].detect.width /
      config.cameras[event.camera].detect.height
    );
  }, [config, event, fullResolution]);
  }, [config, event]);

  const label = event.sub_label
    ? event.sub_label
@@ -470,7 +460,7 @@ export function TrackingDetails({
      return "normal";
    } else if (aspectRatio > ASPECT_WIDE_LAYOUT) {
      return "wide";
    } else if (aspectRatio < ASPECT_PORTRAIT_LAYOUT) {
    } else if (aspectRatio < ASPECT_VERTICAL_LAYOUT) {
      return "tall";
    } else {
      return "normal";
@@ -566,7 +556,6 @@ export function TrackingDetails({
      onSeekToTime={handleSeekToTime}
      onUploadFrame={onUploadFrameToPlus}
      onPlaying={() => setIsVideoLoading(false)}
      setFullResolution={setFullResolution}
      isDetailMode={true}
      camera={event.camera}
      currentTimeOverride={currentTime}
@@ -634,7 +623,7 @@ export function TrackingDetails({
    <div
      className={cn(
        isDesktop && "justify-start overflow-hidden",
        aspectRatio > 1 && aspectRatio < ASPECT_PORTRAIT_LAYOUT
        aspectRatio > 1 && aspectRatio < 1.5
          ? "lg:basis-3/5"
          : "lg:basis-2/5",
      )}
|
||||
@@ -16,7 +16,7 @@ import { zodResolver } from "@hookform/resolvers/zod";
import { useForm, useFieldArray } from "react-hook-form";
import { z } from "zod";
import axios from "axios";
import { toast } from "sonner";
import { toast, Toaster } from "sonner";
import { useTranslation } from "react-i18next";
import { useState, useMemo, useEffect } from "react";
import { LuTrash2, LuPlus } from "react-icons/lu";
@@ -26,7 +26,6 @@ import useSWR from "swr";
import { processCameraName } from "@/utils/cameraUtil";
import { Label } from "@/components/ui/label";
import { ConfigSetBody } from "@/types/cameraWizard";
import { Toaster } from "../ui/sonner";

const RoleEnum = z.enum(["audio", "detect", "record"]);
type Role = z.infer<typeof RoleEnum>;
|
||||
@@ -44,5 +44,4 @@ export type RecordingStartingPoint = {
export type RecordingPlayerError = "stalled" | "startup";

export const ASPECT_VERTICAL_LAYOUT = 1.5;
export const ASPECT_PORTRAIT_LAYOUT = 1.333;
export const ASPECT_WIDE_LAYOUT = 2;
|
||||
@@ -447,7 +447,7 @@ export default function LiveDashboardView({
      )}

      {cameras.length == 0 && !includeBirdseye ? (
        <NoCameraView cameraGroup={cameraGroup} />
        <NoCameraView />
      ) : (
        <>
          {!fullscreen && events && events.length > 0 && (
@@ -666,39 +666,28 @@
  );
}

function NoCameraView({ cameraGroup }: { cameraGroup?: string }) {
function NoCameraView() {
  const { t } = useTranslation(["views/live"]);
  const { auth } = useContext(AuthContext);
  const isAdmin = useIsAdmin();

  const isDefault = cameraGroup === "default";
  // Check if this is a restricted user with no cameras in this group
  const isRestricted = !isAdmin && auth.isAuthenticated;

  let type: "default" | "group" | "restricted";
  if (isRestricted) {
    type = "restricted";
  } else if (isDefault) {
    type = "default";
  } else {
    type = "group";
  }

  return (
    <div className="flex size-full items-center justify-center">
      <EmptyCard
        icon={<BsFillCameraVideoOffFill className="size-8" />}
        title={t(`noCameras.${type}.title`)}
        description={t(`noCameras.${type}.description`)}
        buttonText={
          type !== "restricted" && isDefault
            ? t(`noCameras.${type}.buttonText`)
            : undefined
        title={
          isRestricted ? t("noCameras.restricted.title") : t("noCameras.title")
        }
        link={
          type !== "restricted" && isDefault
            ? "/settings?page=cameraManagement"
            : undefined
        description={
          isRestricted
            ? t("noCameras.restricted.description")
            : t("noCameras.description")
        }
        buttonText={!isRestricted ? t("noCameras.buttonText") : undefined}
        link={!isRestricted ? "/settings?page=cameraManagement" : undefined}
      />
    </div>
  );
|
||||
@@ -1,6 +1,6 @@
import Heading from "@/components/ui/heading";
import { useCallback, useEffect, useMemo, useState } from "react";
import { Toaster } from "@/components/ui/sonner";
import { Toaster } from "sonner";
import { Button } from "@/components/ui/button";
import useSWR from "swr";
import { FrigateConfig } from "@/types/frigateConfig";
|
||||
@@ -1,7 +1,6 @@
import Heading from "@/components/ui/heading";
import { useCallback, useContext, useEffect, useMemo, useState } from "react";
import { toast } from "sonner";
import { Toaster } from "@/components/ui/sonner";
import { Toaster, toast } from "sonner";
import {
  Form,
  FormControl,
@@ -159,12 +158,11 @@ export default function CameraReviewSettingsView({
        });
      }
      setChangedValue(true);
      setUnsavedChanges(true);
      setSelectDetections(isChecked as boolean);
    },
    // we know that these deps are correct
    // eslint-disable-next-line react-hooks/exhaustive-deps
    [watchedAlertsZones, setUnsavedChanges],
    [watchedAlertsZones],
  );

  const saveToConfig = useCallback(
@@ -199,8 +197,6 @@ export default function CameraReviewSettingsView({
            position: "top-center",
          },
        );
        setChangedValue(false);
        setUnsavedChanges(false);
        updateConfig();
      } else {
        toast.error(
@@ -233,14 +229,7 @@ export default function CameraReviewSettingsView({
          setIsLoading(false);
        });
    },
    [
      updateConfig,
      setIsLoading,
      selectedCamera,
      cameraConfig,
      t,
      setUnsavedChanges,
    ],
    [updateConfig, setIsLoading, selectedCamera, cameraConfig, t],
  );

  const onCancel = useCallback(() => {
@@ -506,7 +495,6 @@ export default function CameraReviewSettingsView({
                )}
                onCheckedChange={(checked) => {
                  setChangedValue(true);
                  setUnsavedChanges(true);
                  return checked
                    ? field.onChange([
                        ...field.value,
@@ -612,8 +600,6 @@ export default function CameraReviewSettingsView({
                      zone.name,
                    )}
                    onCheckedChange={(checked) => {
                      setChangedValue(true);
                      setUnsavedChanges(true);
                      return checked
                        ? field.onChange([
                            ...field.value,
@@ -713,6 +699,7 @@ export default function CameraReviewSettingsView({
              )}
            />
          </div>
          <Separator className="my-2 flex bg-secondary" />

          <div className="flex w-full flex-row items-center gap-2 pt-2 md:w-[25%]">
            <Button
@@ -725,7 +712,7 @@ export default function CameraReviewSettingsView({
            </Button>
            <Button
              variant="select"
              disabled={!changedValue || isLoading}
              disabled={isLoading}
              className="flex flex-1"
              aria-label={t("button.save", { ns: "common" })}
              type="submit"
|
||||
@@ -1,7 +1,7 @@
import Heading from "@/components/ui/heading";
import { Label } from "@/components/ui/label";
import { useCallback, useContext, useEffect, useState } from "react";
import { Toaster } from "@/components/ui/sonner";
import { Toaster } from "sonner";
import { Separator } from "../../components/ui/separator";
import ActivityIndicator from "@/components/indicators/activity-indicator";
import { toast } from "sonner";

@@ -664,7 +664,9 @@ export default function TriggerView({
            <TableHeader className="sticky top-0 bg-muted/50">
              <TableRow>
                <TableHead className="w-4"></TableHead>
                <TableHead>{t("triggers.table.name")}</TableHead>
                <TableHead>
                  {t("name", { ns: "triggers.table.name" })}
                </TableHead>
                <TableHead>{t("triggers.table.type")}</TableHead>
                <TableHead>
                  {t("triggers.table.lastTriggered")}

@@ -2,7 +2,7 @@ import Heading from "@/components/ui/heading";
import { Label } from "@/components/ui/label";
import { Switch } from "@/components/ui/switch";
import { useCallback, useContext, useEffect } from "react";
import { Toaster } from "@/components/ui/sonner";
import { Toaster } from "sonner";
import { toast } from "sonner";
import { Separator } from "../../components/ui/separator";
import { Button } from "../../components/ui/button";