Compare commits

...

27 Commits

Author SHA1 Message Date
Josh Hawkins
f589a60cde add description to api endpoint 2026-01-22 11:11:08 -06:00
Josh Hawkins
b2ceb15db4 frontend 2026-01-22 11:07:26 -06:00
Josh Hawkins
b569f30820 tests 2026-01-22 10:48:20 -06:00
Josh Hawkins
db485ddafa remove frame extraction logic 2026-01-22 10:48:08 -06:00
Josh Hawkins
a6c5f4a82b use latest preview frame for latest image when camera is offline 2026-01-22 10:42:23 -06:00
Nicolas Mowen
31ee62b760 Implement LLM Chat API with tool calling support (#21731)
* Implement initial tools definition APIs

* Add initial chat completion API with tool support

* Implement other providers

* Cleanup
2026-01-20 09:13:12 -06:00
John Shaw
16d94c3cfa Remove parents in remove_empty_directories (#21726)
The original implementation did a full directory tree walk to find and remove
empty directories, so this implementation should remove the parents as well,
like the original did.
2026-01-19 21:24:27 -07:00
Nicolas Mowen
bcccae7f9c Implement llama.cpp GenAI Provider (#21690)
* Implement llama.cpp GenAI Provider

* Add docs

* Update links

* Fix broken mqtt links

* Fix more broken anchors
2026-01-18 06:34:30 -07:00
John Shaw
1cc50f68a0 Optimize empty directory cleanup for recordings (#21695)
The previous empty directory cleanup did a full recursive directory
walk, which can be extremely slow. This new implementation only removes
directories which have a chance of being empty due to a recent file
deletion.
2026-01-17 15:47:21 -07:00
Nicolas Mowen
38a630af57 Refactor Time-Lapse Export (#21668)
* refactor time lapse creation to be a separate API call with ability to pass arbitrary ffmpeg args

* Add CPU fallback
2026-01-15 10:30:55 -07:00
Eugeny Tulupov
d9f8e603c9 Update go2rtc to v1.9.13 (#21648)
Co-authored-by: Eugeny Tulupov <eugeny.tulupov@spirent.com>
2026-01-14 08:15:45 -07:00
Josh Hawkins
594a706347 Fix incorrect counting in sync_recordings (#21626) 2026-01-12 18:25:07 -07:00
Josh Hawkins
e5fec56893 use same logging pattern in sync_recordings as the other sync functions (#21625) 2026-01-12 17:20:27 -07:00
Josh Hawkins
f1a19128ed Media sync API refactor and UI (#21542)
* generic job infrastructure

* types and dispatcher changes for jobs

* save data in memory only for completed jobs

* implement media sync job and endpoints

* change logs to debug

* websocket hook and types

* frontend

* i18n

* docs tweaks

* endpoint descriptions

* tweak docs
2026-01-06 08:20:19 -07:00
Josh Hawkins
a77b0a7c4b Add media sync API endpoint (#21526)
* add media cleanup functions

* add endpoint

* remove scheduled sync recordings from cleanup

* move to utils dir

* tweak import

* remove sync_recordings and add config migrator

* remove sync_recordings

* docs

* remove key

* clean up docs

* docs fix

* docs tweak
2026-01-04 11:21:55 -07:00
Nicolas Mowen
1c95eb2c39 Add API to handle deleting recordings (#21520)
* Add recording delete API

* Re-organize recordings apis

* Fix import

* Consolidate query types
2026-01-03 08:19:41 -07:00
Nicolas Mowen
26744efb1e Exports Improvements (#21521)
* Add images to case folder view

* Add ability to select case in export dialog

* Add to mobile review too
2026-01-03 08:03:33 -07:00
Nicolas Mowen
aa0b082184 Add support for GPU and NPU temperatures (#21495)
* Add rockchip temps

* Add support for GPU and NPU temperatures in the frontend

* Add support for Nvidia temperature

* Improve separation

* Adjust graph scaling
2025-12-31 13:32:07 -07:00
Andrew Roberts
7fb8d9b050 Camera-specific hwaccel settings for timelapse exports (correct base) (#21386)
* added hwaccel_args to camera.record.export config struct

* populate camera.record.export.hwaccel_args with a cascade up to camera then global if 'auto'

* use new hwaccel args in export

* added documentation for camera-specific hwaccel export

* fix c/p error

* missed an import

* fleshed out the docs and comments a bit

* ruff lint

* separated out the tips in the doc

* fix documentation

* fix and simplify reference config doc
2025-12-22 09:10:40 -07:00
Nicolas Mowen
b8bc98a423 Refactor temperature reporting for detectors and implement Hailo temp reading (#21395)
* Add Hailo temperature retrieval

* Refactor `get_hailo_temps()` to use ctxmanager

* Show Hailo temps in system UI

* Move hailo_platform import to get_hailo_temps

* Refactor temperatures calculations to use within detector block

* Adjust webUI to handle new location

---------

Co-authored-by: tigattack <10629864+tigattack@users.noreply.github.com>
2025-12-22 08:25:38 -07:00
Nicolas Mowen
f9e06bb7b7 Export filter UI (#21322)
* Get started on export filters

* implement basic filter

* Implement filtering and adjust api

* Improve filter handling

* Improve navigation

* Cleanup

* handle scrolling
2025-12-16 16:10:48 -06:00
Josh Hawkins
7cc16161b3 Camera connection quality indicator (#21297)
* add camera connection quality metrics and indicator

* formatting

* move stall calcs to watchdog

* clean up

* change watchdog to 1s and separately track time for ffmpeg retry_interval

* implement status caching to reduce message volume
2025-12-15 14:02:03 -07:00
Nicolas Mowen
08311a6ee2 Case management UI (#21299)
* Refactor export cards to match existing cards in other UI pages

* Show cases separately from exports

* Add proper filtering and display of cases

* Add ability to edit and select cases for exports

* Cleanup typing

* Hide if no unassigned

* Cleanup hiding logic

* fix scrolling

* Improve layout
2025-12-15 13:10:50 -07:00
Josh Hawkins
a08c044144 refactor vainfo to search for first GPU (#21296)
use existing LibvaGpuSelector to pick appropriate libva device
2025-12-15 08:58:50 -07:00
Nicolas Mowen
5cced22f65 implement case management for export apis (#21295) 2025-12-15 08:54:13 -07:00
Nicolas Mowen
b962c95725 Create scaffolding for case management (#21293) 2025-12-15 08:28:52 -07:00
Nicolas Mowen
0cbec25494 Update version 2025-12-15 07:46:31 -07:00
85 changed files with 6257 additions and 1122 deletions

View File

@@ -1,7 +1,7 @@
default_target: local
COMMIT_HASH := $(shell git log -1 --pretty=format:"%h"|tail -1)
VERSION = 0.17.0
VERSION = 0.18.0
IMAGE_REPO ?= ghcr.io/blakeblackshear/frigate
GITHUB_REF_NAME ?= $(shell git rev-parse --abbrev-ref HEAD)
BOARDS= #Initialized empty

View File

@@ -55,7 +55,7 @@ RUN --mount=type=tmpfs,target=/tmp --mount=type=tmpfs,target=/var/cache/apt \
FROM scratch AS go2rtc
ARG TARGETARCH
WORKDIR /rootfs/usr/local/go2rtc/bin
ADD --link --chmod=755 "https://github.com/AlexxIT/go2rtc/releases/download/v1.9.10/go2rtc_linux_${TARGETARCH}" go2rtc
ADD --link --chmod=755 "https://github.com/AlexxIT/go2rtc/releases/download/v1.9.13/go2rtc_linux_${TARGETARCH}" go2rtc
FROM wget AS tempio
ARG TARGETARCH

View File

@@ -234,7 +234,7 @@ To do this:
### Custom go2rtc version
Frigate currently includes go2rtc v1.9.10, there may be certain cases where you want to run a different version of go2rtc.
Frigate currently includes go2rtc v1.9.13, there may be certain cases where you want to run a different version of go2rtc.
To do this:

View File

@@ -238,7 +238,7 @@ go2rtc:
- rtspx://192.168.1.1:7441/abcdefghijk
```
[See the go2rtc docs for more information](https://github.com/AlexxIT/go2rtc/tree/v1.9.10#source-rtsp)
[See the go2rtc docs for more information](https://github.com/AlexxIT/go2rtc/tree/v1.9.13#source-rtsp)
In the Unifi 2.0 update Unifi Protect Cameras had a change in audio sample rate which causes issues for ffmpeg. The input rate needs to be set for record if used directly with unifi protect.

View File

@@ -1,231 +0,0 @@
---
id: genai
title: Generative AI
---
Generative AI can be used to automatically generate descriptive text based on the thumbnails of your tracked objects. This helps with [Semantic Search](/configuration/semantic_search) in Frigate to provide more context about your tracked objects. Descriptions are accessed via the _Explore_ view in the Frigate UI by clicking on a tracked object's thumbnail.
Requests for a description are sent automatically to your AI provider at the end of the tracked object's lifecycle, or can optionally be sent earlier after a number of significantly changed frames, for example for use in more real-time notifications. Descriptions can also be regenerated manually via the Frigate UI. Note that if you manually enter a description for a tracked object before the end of its lifecycle, it will be overwritten by the generated response.
## Configuration
Generative AI can be enabled for all cameras or only for specific cameras. If GenAI is disabled for a camera, you can still manually generate descriptions for events using the HTTP API. There are currently 3 native providers available to integrate with Frigate. Other providers that support the OpenAI standard API can also be used. See the OpenAI section below.
To use Generative AI, you must define a single provider at the global level of your Frigate configuration. If the provider you choose requires an API key, you may either directly paste it in your configuration, or store it in an environment variable prefixed with `FRIGATE_`.
```yaml
genai:
provider: gemini
api_key: "{FRIGATE_GEMINI_API_KEY}"
model: gemini-2.0-flash
cameras:
front_camera:
genai:
enabled: True # <- enable GenAI for your front camera
use_snapshot: True
objects:
- person
required_zones:
- steps
indoor_camera:
objects:
genai:
enabled: False # <- disable GenAI for your indoor camera
```
By default, descriptions will be generated for all tracked objects and all zones. But you can also optionally specify `objects` and `required_zones` to only generate descriptions for certain tracked objects or zones.
Optionally, you can generate the description using a snapshot (if enabled) by setting `use_snapshot` to `True`. By default, this is set to `False`, which sends the uncompressed images from the `detect` stream collected over the object's lifetime to the model. Once the object lifecycle ends, only a single compressed and cropped thumbnail is saved with the tracked object. Using a snapshot might be useful when you want to _regenerate_ a tracked object's description as it will provide the AI with a higher-quality image (typically downscaled by the AI itself) than the cropped/compressed thumbnail. Using a snapshot otherwise has a trade-off in that only a single image is sent to your provider, which will limit the model's ability to determine object movement or direction.
Generative AI can also be toggled dynamically for a camera via MQTT with the topic `frigate/<camera_name>/object_descriptions/set`. See the [MQTT documentation](/integrations/mqtt/#frigatecamera_nameobjectdescriptionsset).
## Ollama
:::warning
Using Ollama on CPU is not recommended; high inference times make using Generative AI impractical.
:::
[Ollama](https://ollama.com/) allows you to self-host large language models and keep everything running locally. It provides a nice API over [llama.cpp](https://github.com/ggerganov/llama.cpp). It is highly recommended to host this server on a machine with an Nvidia graphics card, or on an Apple silicon Mac, for best performance.
Most of the 7b parameter 4-bit vision models will fit inside 8GB of VRAM. There is also a [Docker container](https://hub.docker.com/r/ollama/ollama) available.
Parallel requests also come with some caveats. You will need to set `OLLAMA_NUM_PARALLEL=1` and choose `OLLAMA_MAX_QUEUE` and `OLLAMA_MAX_LOADED_MODELS` values that are appropriate for your hardware and preferences. See the [Ollama documentation](https://github.com/ollama/ollama/blob/main/docs/faq.md#how-does-ollama-handle-concurrent-requests).
### Supported Models
You must use a vision capable model with Frigate. Current model variants can be found [in their model library](https://ollama.com/library). At the time of writing, this includes `llava`, `llava-llama3`, `llava-phi3`, and `moondream`. Note that Frigate will not automatically download the model you specify in your config; you must download the model to your local instance of Ollama first, e.g. by running `ollama pull llava:7b` on your Ollama server/Docker container. The model specified in Frigate's config must match the downloaded model tag.
:::note
You should have at least 8 GB of RAM available (or VRAM if running on GPU) to run the 7B models, 16 GB to run the 13B models, and 32 GB to run the 33B models.
:::
### Configuration
```yaml
genai:
  provider: ollama
  base_url: http://localhost:11434
  model: qwen3-vl:4b
```
## Google Gemini
Google Gemini has a free tier allowing [15 queries per minute](https://ai.google.dev/pricing) to the API, which is more than sufficient for standard Frigate usage.
### Supported Models
You must use a vision capable model with Frigate. Current model variants can be found [in their documentation](https://ai.google.dev/gemini-api/docs/models/gemini).
### Get API Key
To start using Gemini, you must first get an API key from [Google AI Studio](https://aistudio.google.com).
1. Accept the Terms of Service
2. Click "Get API Key" from the right hand navigation
3. Click "Create API key in new project"
4. Copy the API key for use in your config
### Configuration
```yaml
genai:
  provider: gemini
  api_key: "{FRIGATE_GEMINI_API_KEY}"
  model: gemini-2.0-flash
```
:::note
To use a different Gemini-compatible API endpoint, set the `GEMINI_BASE_URL` environment variable to your provider's API URL.
:::
## OpenAI
OpenAI does not have a free tier for their API. With the release of gpt-4o, pricing has been reduced and each generation should cost fractions of a cent if you choose to go this route.
### Supported Models
You must use a vision capable model with Frigate. Current model variants can be found [in their documentation](https://platform.openai.com/docs/models).
### Get API Key
To start using OpenAI, you must first [create an API key](https://platform.openai.com/api-keys) and [configure billing](https://platform.openai.com/settings/organization/billing/overview).
### Configuration
```yaml
genai:
  provider: openai
  api_key: "{FRIGATE_OPENAI_API_KEY}"
  model: gpt-4o
```
:::note
To use a different OpenAI-compatible API endpoint, set the `OPENAI_BASE_URL` environment variable to your provider's API URL.
:::
## Azure OpenAI
Microsoft offers several vision models through Azure OpenAI. A subscription is required.
### Supported Models
You must use a vision capable model with Frigate. Current model variants can be found [in their documentation](https://learn.microsoft.com/en-us/azure/ai-services/openai/concepts/models).
### Create Resource and Get API Key
To start using Azure OpenAI, you must first [create a resource](https://learn.microsoft.com/azure/cognitive-services/openai/how-to/create-resource?pivots=web-portal#create-a-resource). You'll need your API key, model name, and resource URL, which must include the `api-version` parameter (see the example below).
### Configuration
```yaml
genai:
  provider: azure_openai
  base_url: https://instance.cognitiveservices.azure.com/openai/responses?api-version=2025-04-01-preview
  model: gpt-5-mini
  api_key: "{FRIGATE_OPENAI_API_KEY}"
```
## Usage and Best Practices
Frigate's thumbnail search excels at identifying specific details about tracked objects, for example using an "image caption" approach to find a "person wearing a yellow vest," "a white dog running across the lawn," or "a red car on a residential street." To enhance this further, Frigate's default prompts are designed to ask your AI provider about the intent behind the object's actions, rather than just describing its appearance.
While generating simple descriptions of detected objects is useful, understanding intent provides a deeper layer of insight. Instead of just recognizing "what" is in a scene, Frigate's default prompts aim to infer "why" it might be there or "what" it could do next. Descriptions tell you what's happening, but intent gives context. For instance, a person walking toward a door might seem like a visitor, but if they're moving quickly after hours, you can infer a potential break-in attempt. Detecting a person loitering near a door at night can trigger an alert sooner than simply noting "a person standing by the door," helping you respond based on the situation's context.
### Using GenAI for notifications
Frigate provides an [MQTT topic](/integrations/mqtt), `frigate/tracked_object_update`, that is updated with a JSON payload containing `event_id` and `description` when your AI provider returns a description for a tracked object. This description could be used directly in notifications, such as sending alerts to your phone or making audio announcements. If additional details from the tracked object are needed, you can query the [HTTP API](/integrations/api/event-events-event-id-get) using the `event_id`, e.g. `http://frigate_ip:5000/api/events/<event_id>`.
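As a rough sketch (not part of Frigate itself), the following Python example listens for these updates and pulls the full event details; the broker address, Frigate URL, and authentication handling are assumptions you would adapt:

```python
import json

import paho.mqtt.client as mqtt
import requests

FRIGATE_URL = "http://frigate_ip:5000"  # placeholder host from the example above


def on_message(client, userdata, msg):
    # Payload includes the tracked object's id and the generated description.
    payload = json.loads(msg.payload)
    event_id = payload.get("event_id") or payload.get("id")
    print(f"New description for {event_id}: {payload.get('description')}")

    # Optionally fetch the full tracked object from the HTTP API.
    event = requests.get(f"{FRIGATE_URL}/api/events/{event_id}", timeout=10).json()
    print("camera:", event.get("camera"), "label:", event.get("label"))


client = mqtt.Client()  # paho-mqtt 1.x style constructor
client.on_message = on_message
client.connect("localhost", 1883)  # assumed MQTT broker address
client.subscribe("frigate/tracked_object_update")
client.loop_forever()
```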
If you want notifications earlier than when an object ceases to be tracked, an additional send trigger, `after_significant_updates`, can be configured.
```yaml
genai:
  send_triggers:
    tracked_object_end: true # default
    after_significant_updates: 3 # how many updates to a tracked object before we should send an image
```
## Custom Prompts
Frigate sends multiple frames from the tracked object along with a prompt to your Generative AI provider asking it to generate a description. The default prompt is as follows:
```
Analyze the sequence of images containing the {label}. Focus on the likely intent or behavior of the {label} based on its actions and movement, rather than describing its appearance or the surroundings. Consider what the {label} is doing, why, and what it might do next.
```
:::tip
Prompts can use variable replacements `{label}`, `{sub_label}`, and `{camera}` to substitute information from the tracked object as part of the prompt.
:::
You are also able to define custom prompts in your configuration.
```yaml
genai:
provider: ollama
base_url: http://localhost:11434
model: llava
objects:
prompt: "Analyze the {label} in these images from the {camera} security camera. Focus on the actions, behavior, and potential intent of the {label}, rather than just describing its appearance."
object_prompts:
person: "Examine the main person in these images. What are they doing and what might their actions suggest about their intent (e.g., approaching a door, leaving an area, standing still)? Do not describe the surroundings or static details."
car: "Observe the primary vehicle in these images. Focus on its movement, direction, or purpose (e.g., parking, approaching, circling). If it's a delivery vehicle, mention the company."
```
Prompts can also be overridden at the camera level to provide a more detailed prompt to the model about your specific camera, if you desire.
```yaml
cameras:
front_door:
objects:
genai:
enabled: True
use_snapshot: True
prompt: "Analyze the {label} in these images from the {camera} security camera at the front door. Focus on the actions and potential intent of the {label}."
object_prompts:
person: "Examine the person in these images. What are they doing, and how might their actions suggest their purpose (e.g., delivering something, approaching, leaving)? If they are carrying or interacting with a package, include details about its source or destination."
cat: "Observe the cat in these images. Focus on its movement and intent (e.g., wandering, hunting, interacting with objects). If the cat is near the flower pots or engaging in any specific actions, mention it."
objects:
- person
- cat
required_zones:
- steps
```
### Experiment with prompts
Many providers also have a public-facing chat interface for their models. Download a couple of different thumbnails or snapshots from Frigate and try new things in the playground to get descriptions to your liking before updating the prompt in Frigate.
- OpenAI - [ChatGPT](https://chatgpt.com)
- Gemini - [Google AI Studio](https://aistudio.google.com)
- Ollama - [Open WebUI](https://docs.openwebui.com/)

View File

@@ -5,7 +5,7 @@ title: Configuring Generative AI
## Configuration
A Generative AI provider can be configured in the global config, which will make the Generative AI features available for use. There are currently 3 native providers available to integrate with Frigate. Other providers that support the OpenAI standard API can also be used. See the OpenAI section below.
A Generative AI provider can be configured in the global config, which will make the Generative AI features available for use. There are currently 4 native providers available to integrate with Frigate. Other providers that support the OpenAI standard API can also be used. See the OpenAI section below.
To use Generative AI, you must define a single provider at the global level of your Frigate configuration. If the provider you choose requires an API key, you may either directly paste it in your configuration, or store it in an environment variable prefixed with `FRIGATE_`.
@@ -41,12 +41,12 @@ If you are trying to use a single model for Frigate and HomeAssistant, it will n
The following models are recommended:
| Model | Notes |
| ----------------- | -------------------------------------------------------------------- |
| `qwen3-vl` | Strong visual and situational understanding, higher vram requirement |
| `Intern3.5VL` | Relatively fast with good vision comprehension |
| `gemma3` | Strong frame-to-frame understanding, slower inference times |
| `qwen2.5-vl` | Fast but capable model with good vision comprehension |
| Model | Notes |
| ------------- | -------------------------------------------------------------------- |
| `qwen3-vl` | Strong visual and situational understanding, higher vram requirement |
| `Intern3.5VL` | Relatively fast with good vision comprehension |
| `gemma3` | Strong frame-to-frame understanding, slower inference times |
| `qwen2.5-vl` | Fast but capable model with good vision comprehension |
:::note
@@ -61,12 +61,46 @@ genai:
provider: ollama
base_url: http://localhost:11434
model: minicpm-v:8b
provider_options: # other Ollama client options can be defined
provider_options: # other Ollama client options can be defined
keep_alive: -1
options:
num_ctx: 8192 # make sure the context matches other services that are using ollama
num_ctx: 8192 # make sure the context matches other services that are using ollama
```
## llama.cpp
[llama.cpp](https://github.com/ggml-org/llama.cpp) is a C++ implementation of LLaMA that provides a high-performance inference server. Using llama.cpp directly gives you access to all native llama.cpp options and parameters.
:::warning
Using llama.cpp on CPU is not recommended; high inference times make using Generative AI impractical.
:::
It is highly recommended to host the llama.cpp server on a machine with a discrete graphics card, or on an Apple silicon Mac for best performance.
### Supported Models
You must use a vision capable model with Frigate. The llama.cpp server supports various vision models in GGUF format.
### Configuration
```yaml
genai:
  provider: llamacpp
  base_url: http://localhost:8080
  model: your-model-name
  provider_options:
    temperature: 0.7
    repeat_penalty: 1.05
    top_p: 0.8
    top_k: 40
    min_p: 0.05
    seed: -1
```
All llama.cpp native options can be passed through `provider_options`, including `temperature`, `top_k`, `top_p`, `min_p`, `repeat_penalty`, `repeat_last_n`, `seed`, `grammar`, and more. See the [llama.cpp server documentation](https://github.com/ggml-org/llama.cpp/blob/master/tools/server/README.md) for a complete list of available parameters.
## Google Gemini
Google Gemini has a free tier allowing [15 queries per minute](https://ai.google.dev/pricing) to the API, which is more than sufficient for standard Frigate usage.

View File

@@ -11,7 +11,7 @@ By default, descriptions will be generated for all tracked objects and all zones
Optionally, you can generate the description using a snapshot (if enabled) by setting `use_snapshot` to `True`. By default, this is set to `False`, which sends the uncompressed images from the `detect` stream collected over the object's lifetime to the model. Once the object lifecycle ends, only a single compressed and cropped thumbnail is saved with the tracked object. Using a snapshot might be useful when you want to _regenerate_ a tracked object's description as it will provide the AI with a higher-quality image (typically downscaled by the AI itself) than the cropped/compressed thumbnail. Using a snapshot otherwise has a trade-off in that only a single image is sent to your provider, which will limit the model's ability to determine object movement or direction.
Generative AI object descriptions can also be toggled dynamically for a camera via MQTT with the topic `frigate/<camera_name>/object_descriptions/set`. See the [MQTT documentation](/integrations/mqtt/#frigatecamera_nameobjectdescriptionsset).
Generative AI object descriptions can also be toggled dynamically for a camera via MQTT with the topic `frigate/<camera_name>/object_descriptions/set`. See the [MQTT documentation](/integrations/mqtt#frigatecamera_nameobject_descriptionsset).
## Usage and Best Practices
@@ -42,10 +42,10 @@ genai:
model: llava
objects:
prompt: "Analyze the {label} in these images from the {camera} security camera. Focus on the actions, behavior, and potential intent of the {label}, rather than just describing its appearance."
object_prompts:
person: "Examine the main person in these images. What are they doing and what might their actions suggest about their intent (e.g., approaching a door, leaving an area, standing still)? Do not describe the surroundings or static details."
car: "Observe the primary vehicle in these images. Focus on its movement, direction, or purpose (e.g., parking, approaching, circling). If it's a delivery vehicle, mention the company."
prompt: "Analyze the {label} in these images from the {camera} security camera. Focus on the actions, behavior, and potential intent of the {label}, rather than just describing its appearance."
object_prompts:
person: "Examine the main person in these images. What are they doing and what might their actions suggest about their intent (e.g., approaching a door, leaving an area, standing still)? Do not describe the surroundings or static details."
car: "Observe the primary vehicle in these images. Focus on its movement, direction, or purpose (e.g., parking, approaching, circling). If it's a delivery vehicle, mention the company."
```
Prompts can also be overridden at the camera level to provide a more detailed prompt to the model about your specific camera, if you desire.

View File

@@ -7,7 +7,7 @@ Generative AI can be used to automatically generate structured summaries of revi
Requests for a summary are sent automatically to your AI provider for alert review items when the activity has ended; they can also optionally be enabled for detections.
Generative AI review summaries can also be toggled dynamically for a [camera via MQTT](/integrations/mqtt/#frigatecamera_namereviewdescriptionsset).
Generative AI review summaries can also be toggled dynamically for a [camera via MQTT](/integrations/mqtt#frigatecamera_namereview_descriptionsset).
## Review Summary Usage and Best Practices

View File

@@ -139,7 +139,13 @@ record:
:::tip
When using `hwaccel_args` globally hardware encoding is used for time lapse generation. The encoder determines its own behavior so the resulting file size may be undesirably large.
When using `hwaccel_args`, hardware encoding is used for timelapse generation. This setting can be overridden for a specific camera (e.g., when camera resolution exceeds hardware encoder limits); set `cameras.<camera>.record.export.hwaccel_args` with the appropriate settings. Using an unrecognized value or empty string will fall back to software encoding (libx264).
:::
:::tip
The encoder determines its own behavior so the resulting file size may be undesirably large.
To reduce the output file size, the ffmpeg parameter `-qp n` can be utilized (where `n` stands for the value of the quantisation parameter). The value can be adjusted to get an acceptable tradeoff between quality and file size for the given scenario.
:::
@@ -148,19 +154,16 @@ To reduce the output file size the ffmpeg parameter `-qp n` can be utilized (whe
Apple devices running the Safari browser may fail to play back h.265 recordings. The [apple compatibility option](../configuration/camera_specific.md#h265-cameras-via-safari) should be used to ensure seamless playback on Apple devices.
## Syncing Recordings With Disk
## Syncing Media Files With Disk
In some cases the recordings files may be deleted but Frigate will not know this has happened. Recordings sync can be enabled which will tell Frigate to check the file system and delete any db entries for files which don't exist.
Media files (event snapshots, event thumbnails, review thumbnails, previews, exports, and recordings) can become orphaned when database entries are deleted but the corresponding files remain on disk.
```yaml
record:
  sync_recordings: True
```
Normal operation may leave small numbers of orphaned files until Frigate's scheduled cleanup, but crashes, configuration changes, or upgrades may cause more orphaned files that Frigate does not clean up. This feature checks the file system for media files and removes any that are not referenced in the database.
This feature is meant to fix variations in files, not completely delete entries in the database. If you delete all of your media, don't use `sync_recordings`; just stop Frigate, delete the `frigate.db` database, and restart.
The Maintenance pane in the Frigate UI or the `POST /api/media/sync` API endpoint can be used to trigger a media sync. When using the API, a job ID is returned and the operation continues on the server. Status can be checked with the `/api/media/sync/status/{job_id}` endpoint.
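As a minimal illustration (the host, credentials, and polling interval are assumptions; the request body fields mirror the `MediaSyncBody` model added in this change), the API flow looks roughly like:

```python
import time

import requests

FRIGATE_URL = "http://frigate_ip:5000"  # placeholder; admin authentication is assumed to be handled separately

# Queue a dry-run sync; nothing is deleted while dry_run is true.
resp = requests.post(
    f"{FRIGATE_URL}/api/media/sync",
    json={"dry_run": True, "media_types": ["all"], "force": False},
    timeout=10,
)
if resp.status_code == 409:
    raise SystemExit("A media sync job is already running")
job_id = resp.json()["job"]["id"]

# Poll the status endpoint until the job is no longer queued or running.
while True:
    # The job dict is assumed to expose a "status" field with terminal values other than queued/running.
    job = requests.get(f"{FRIGATE_URL}/api/media/sync/status/{job_id}", timeout=10).json()["job"]
    if job["status"] not in ("queued", "running"):
        break
    time.sleep(5)

print(job)
```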
:::warning
The sync operation uses considerable CPU resources and in most cases is not needed, only enable when necessary.
This operation uses considerable CPU resources and includes a safety threshold that aborts if more than 50% of files would be deleted. Only run when necessary. If you set `force: true` the safety threshold will be bypassed; do not use `force` unless you are certain the deletions are intended.
:::

View File

@@ -510,8 +510,6 @@ record:
# Optional: Number of minutes to wait between cleanup runs (default: shown below)
# This can be used to reduce the frequency of deleting recording segments from disk if you want to minimize i/o
expire_interval: 60
# Optional: Two-way sync recordings database with disk on startup and once a day (default: shown below).
sync_recordings: False
# Optional: Continuous retention settings
continuous:
# Optional: Number of days to retain recordings regardless of tracked objects or motion (default: shown below)
@@ -534,6 +532,8 @@ record:
# The -r (framerate) dictates how smooth the output video is.
# So the args would be -vf setpts=0.02*PTS -r 30 in that case.
timelapse_args: "-vf setpts=0.04*PTS -r 30"
# Optional: Global hardware acceleration settings for timelapse exports. (default: inherit)
hwaccel_args: auto
# Optional: Recording Preview Settings
preview:
# Optional: Quality of recording preview (default: shown below).
@@ -749,7 +749,7 @@ classification:
interval: None
# Optional: Restream configuration
# Uses https://github.com/AlexxIT/go2rtc (v1.9.10)
# Uses https://github.com/AlexxIT/go2rtc (v1.9.13)
# NOTE: The default go2rtc API port (1984) must be used,
# changing this port for the integrated go2rtc instance is not supported.
go2rtc:
@@ -835,6 +835,11 @@ cameras:
# Optional: camera specific output args (default: inherit)
# output_args:
# Optional: camera specific hwaccel args for timelapse export (default: inherit)
# record:
# export:
# hwaccel_args:
# Optional: timeout for highest scoring image before allowing it
# to be replaced by a newer image. (default: shown below)
best_image_timeout: 60

View File

@@ -7,7 +7,7 @@ title: Restream
Frigate can restream your video feed as an RTSP feed for other applications such as Home Assistant to utilize it at `rtsp://<frigate_host>:8554/<camera_name>`. Port 8554 must be open. [This allows you to use a video feed for detection in Frigate and Home Assistant live view at the same time without having to make two separate connections to the camera](#reduce-connections-to-camera). The video feed is copied from the original video feed directly to avoid re-encoding. This feed does not include any annotation by Frigate.
Frigate uses [go2rtc](https://github.com/AlexxIT/go2rtc/tree/v1.9.10) to provide its restream and MSE/WebRTC capabilities. The go2rtc config is hosted at the `go2rtc` in the config, see [go2rtc docs](https://github.com/AlexxIT/go2rtc/tree/v1.9.10#configuration) for more advanced configurations and features.
Frigate uses [go2rtc](https://github.com/AlexxIT/go2rtc/tree/v1.9.13) to provide its restream and MSE/WebRTC capabilities. The go2rtc config is hosted under the `go2rtc` key in the config; see the [go2rtc docs](https://github.com/AlexxIT/go2rtc/tree/v1.9.13#configuration) for more advanced configurations and features.
:::note
@@ -187,7 +187,7 @@ In this configuration:
## Advanced Restream Configurations
The [exec](https://github.com/AlexxIT/go2rtc/tree/v1.9.10#source-exec) source in go2rtc can be used for custom ffmpeg commands. An example is below:
The [exec](https://github.com/AlexxIT/go2rtc/tree/v1.9.13#source-exec) source in go2rtc can be used for custom ffmpeg commands. An example is below:
NOTE: The output will need to be passed with two curly braces `{{output}}`

View File

@@ -11,7 +11,7 @@ Use of the bundled go2rtc is optional. You can still configure FFmpeg to connect
## Setup a go2rtc stream
First, you will want to configure go2rtc to connect to your camera stream by adding the stream you want to use for live view in your Frigate config file. Avoid changing any other parts of your config at this step. Note that go2rtc supports [many different stream types](https://github.com/AlexxIT/go2rtc/tree/v1.9.10#module-streams), not just rtsp.
First, you will want to configure go2rtc to connect to your camera stream by adding the stream you want to use for live view in your Frigate config file. Avoid changing any other parts of your config at this step. Note that go2rtc supports [many different stream types](https://github.com/AlexxIT/go2rtc/tree/v1.9.13#module-streams), not just rtsp.
:::tip
@@ -47,8 +47,8 @@ After adding this to the config, restart Frigate and try to watch the live strea
- Check Video Codec:
- If the camera stream works in go2rtc but not in your browser, the video codec might be unsupported.
- If using H265, switch to H264. Refer to [video codec compatibility](https://github.com/AlexxIT/go2rtc/tree/v1.9.10#codecs-madness) in go2rtc documentation.
- If unable to switch from H265 to H264, or if the stream format is different (e.g., MJPEG), re-encode the video using [FFmpeg parameters](https://github.com/AlexxIT/go2rtc/tree/v1.9.10#source-ffmpeg). It supports rotating and resizing video feeds and hardware acceleration. Keep in mind that transcoding video from one format to another is a resource intensive task and you may be better off using the built-in jsmpeg view.
- If using H265, switch to H264. Refer to [video codec compatibility](https://github.com/AlexxIT/go2rtc/tree/v1.9.13#codecs-madness) in go2rtc documentation.
- If unable to switch from H265 to H264, or if the stream format is different (e.g., MJPEG), re-encode the video using [FFmpeg parameters](https://github.com/AlexxIT/go2rtc/tree/v1.9.13#source-ffmpeg). It supports rotating and resizing video feeds and hardware acceleration. Keep in mind that transcoding video from one format to another is a resource intensive task and you may be better off using the built-in jsmpeg view.
```yaml
go2rtc:
streams:

View File

@@ -28,7 +28,7 @@ const sidebars: SidebarsConfig = {
{
type: "link",
label: "Go2RTC Configuration Reference",
href: "https://github.com/AlexxIT/go2rtc/tree/v1.9.10#configuration",
href: "https://github.com/AlexxIT/go2rtc/tree/v1.9.13#configuration",
} as PropSidebarItemLink,
],
Detectors: [

View File

@@ -326,6 +326,59 @@ paths:
application/json:
schema:
$ref: "#/components/schemas/HTTPValidationError"
/media/sync:
post:
tags:
- App
summary: Start media sync job
description: |-
Start an asynchronous media sync job to find and (optionally) remove orphaned media files.
Returns 202 with job details when queued, or 409 if a job is already running.
operationId: sync_media_media_sync_post
requestBody:
required: true
content:
application/json:
responses:
"202":
description: Accepted - Job queued
"409":
description: Conflict - Job already running
"422":
description: Validation Error
/media/sync/current:
get:
tags:
- App
summary: Get current media sync job
description: |-
Retrieve the current running media sync job, if any. Returns the job details or null when no job is active.
operationId: get_media_sync_current_media_sync_current_get
responses:
"200":
description: Successful Response
"422":
description: Validation Error
/media/sync/status/{job_id}:
get:
tags:
- App
summary: Get media sync job status
description: |-
Get status and results for the specified media sync job id. Returns 200 with job details including results, or 404 if the job is not found.
operationId: get_media_sync_status_media_sync_status__job_id__get
parameters:
- name: job_id
in: path
responses:
"200":
description: Successful Response
"404":
description: Not Found - Job not found
"422":
description: Validation Error
/faces/train/{name}/classify:
post:
tags:

View File

@@ -25,15 +25,22 @@ from pydantic import ValidationError
from frigate.api.auth import allow_any_authenticated, allow_public, require_role
from frigate.api.defs.query.app_query_parameters import AppTimelineHourlyQueryParameters
from frigate.api.defs.request.app_body import AppConfigSetBody
from frigate.api.defs.request.app_body import AppConfigSetBody, MediaSyncBody
from frigate.api.defs.tags import Tags
from frigate.config import FrigateConfig
from frigate.config.camera.updater import (
CameraConfigUpdateEnum,
CameraConfigUpdateTopic,
)
from frigate.ffmpeg_presets import FFMPEG_HWACCEL_VAAPI, _gpu_selector
from frigate.jobs.media_sync import (
get_current_media_sync_job,
get_media_sync_job_by_id,
start_media_sync_job,
)
from frigate.models import Event, Timeline
from frigate.stats.prometheus import get_metrics, update_metrics
from frigate.types import JobStatusTypesEnum
from frigate.util.builtin import (
clean_camera_user_pass,
flatten_config_data,
@@ -458,7 +465,15 @@ def config_set(request: Request, body: AppConfigSetBody):
@router.get("/vainfo", dependencies=[Depends(allow_any_authenticated())])
def vainfo():
vainfo = vainfo_hwaccel()
# Use LibvaGpuSelector to pick an appropriate libva device (if available)
selected_gpu = ""
try:
selected_gpu = _gpu_selector.get_gpu_arg(FFMPEG_HWACCEL_VAAPI, 0) or ""
except Exception:
selected_gpu = ""
# If selected_gpu is empty, pass None to vainfo_hwaccel to run plain `vainfo`.
vainfo = vainfo_hwaccel(device_name=selected_gpu or None)
return JSONResponse(
content={
"return_code": vainfo.returncode,
@@ -593,6 +608,98 @@ def restart():
)
@router.post(
"/media/sync",
dependencies=[Depends(require_role(["admin"]))],
summary="Start media sync job",
description="""Start an asynchronous media sync job to find and (optionally) remove orphaned media files.
Returns 202 with job details when queued, or 409 if a job is already running.""",
)
def sync_media(body: MediaSyncBody = Body(...)):
"""Start async media sync job - remove orphaned files.
Syncs specified media types: event snapshots, event thumbnails, review thumbnails,
previews, exports, and/or recordings. Job runs in background; use /media/sync/current
or /media/sync/status/{job_id} to check status.
Args:
body: MediaSyncBody with dry_run flag and media_types list.
media_types can include: 'all', 'event_snapshots', 'event_thumbnails',
'review_thumbnails', 'previews', 'exports', 'recordings'
Returns:
202 Accepted with job_id, or 409 Conflict if job already running.
"""
job_id = start_media_sync_job(
dry_run=body.dry_run, media_types=body.media_types, force=body.force
)
if job_id is None:
# A job is already running
current = get_current_media_sync_job()
return JSONResponse(
content={
"error": "A media sync job is already running",
"current_job_id": current.id if current else None,
},
status_code=409,
)
return JSONResponse(
content={
"job": {
"job_type": "media_sync",
"status": JobStatusTypesEnum.queued,
"id": job_id,
}
},
status_code=202,
)
@router.get(
"/media/sync/current",
dependencies=[Depends(require_role(["admin"]))],
summary="Get current media sync job",
description="""Retrieve the current running media sync job, if any. Returns the job details
or null when no job is active.""",
)
def get_media_sync_current():
"""Get the current running media sync job, if any."""
job = get_current_media_sync_job()
if job is None:
return JSONResponse(content={"job": None}, status_code=200)
return JSONResponse(
content={"job": job.to_dict()},
status_code=200,
)
@router.get(
"/media/sync/status/{job_id}",
dependencies=[Depends(require_role(["admin"]))],
summary="Get media sync job status",
description="""Get status and results for the specified media sync job id. Returns 200 with
job details including results, or 404 if the job is not found.""",
)
def get_media_sync_status(job_id: str):
"""Get the status of a specific media sync job."""
job = get_media_sync_job_by_id(job_id)
if job is None:
return JSONResponse(
content={"error": "Job not found"},
status_code=404,
)
return JSONResponse(
content={"job": job.to_dict()},
status_code=200,
)
@router.get("/labels", dependencies=[Depends(allow_any_authenticated())])
def get_labels(camera: str = ""):
try:

476
frigate/api/chat.py Normal file
View File

@@ -0,0 +1,476 @@
"""Chat and LLM tool calling APIs."""
import json
import logging
from datetime import datetime, timezone
from typing import Any, Dict, List
from fastapi import APIRouter, Body, Depends, Request
from fastapi.responses import JSONResponse
from pydantic import BaseModel
from frigate.api.auth import (
allow_any_authenticated,
get_allowed_cameras_for_filter,
)
from frigate.api.defs.query.events_query_parameters import EventsQueryParams
from frigate.api.defs.request.chat_body import ChatCompletionRequest
from frigate.api.defs.response.chat_response import (
ChatCompletionResponse,
ChatMessageResponse,
)
from frigate.api.defs.tags import Tags
from frigate.api.event import events
from frigate.genai import get_genai_client
logger = logging.getLogger(__name__)
router = APIRouter(tags=[Tags.chat])
class ToolExecuteRequest(BaseModel):
"""Request model for tool execution."""
tool_name: str
arguments: Dict[str, Any]
def get_tool_definitions() -> List[Dict[str, Any]]:
"""
Get OpenAI-compatible tool definitions for Frigate.
Returns a list of tool definitions that can be used with OpenAI-compatible
function calling APIs.
"""
return [
{
"type": "function",
"function": {
"name": "search_objects",
"description": (
"Search for detected objects in Frigate by camera, object label, time range, "
"zones, and other filters. Use this to answer questions about when "
"objects were detected, what objects appeared, or to find specific object detections. "
"An 'object' in Frigate represents a tracked detection (e.g., a person, package, car)."
),
"parameters": {
"type": "object",
"properties": {
"camera": {
"type": "string",
"description": "Camera name to filter by (optional). Use 'all' for all cameras.",
},
"label": {
"type": "string",
"description": "Object label to filter by (e.g., 'person', 'package', 'car').",
},
"after": {
"type": "string",
"description": "Start time in ISO 8601 format (e.g., '2024-01-01T00:00:00Z').",
},
"before": {
"type": "string",
"description": "End time in ISO 8601 format (e.g., '2024-01-01T23:59:59Z').",
},
"zones": {
"type": "array",
"items": {"type": "string"},
"description": "List of zone names to filter by.",
},
"limit": {
"type": "integer",
"description": "Maximum number of objects to return (default: 10).",
"default": 10,
},
},
},
"required": [],
},
},
]
@router.get(
"/chat/tools",
dependencies=[Depends(allow_any_authenticated())],
summary="Get available tools",
description="Returns OpenAI-compatible tool definitions for function calling.",
)
def get_tools(request: Request) -> JSONResponse:
"""Get list of available tools for LLM function calling."""
tools = get_tool_definitions()
return JSONResponse(content={"tools": tools})
async def _execute_search_objects(
request: Request,
arguments: Dict[str, Any],
allowed_cameras: List[str],
) -> JSONResponse:
"""
Execute the search_objects tool.
This searches for detected objects (events) in Frigate using the same
logic as the events API endpoint.
"""
# Parse ISO 8601 timestamps to Unix timestamps if provided
after = arguments.get("after")
before = arguments.get("before")
if after:
try:
after_dt = datetime.fromisoformat(after.replace("Z", "+00:00"))
after = after_dt.timestamp()
except (ValueError, AttributeError):
logger.warning(f"Invalid 'after' timestamp format: {after}")
after = None
if before:
try:
before_dt = datetime.fromisoformat(before.replace("Z", "+00:00"))
before = before_dt.timestamp()
except (ValueError, AttributeError):
logger.warning(f"Invalid 'before' timestamp format: {before}")
before = None
# Convert zones array to comma-separated string if provided
zones = arguments.get("zones")
if isinstance(zones, list):
zones = ",".join(zones)
elif zones is None:
zones = "all"
# Build query parameters compatible with EventsQueryParams
query_params = EventsQueryParams(
camera=arguments.get("camera", "all"),
cameras=arguments.get("camera", "all"),
label=arguments.get("label", "all"),
labels=arguments.get("label", "all"),
zones=zones,
zone=zones,
after=after,
before=before,
limit=arguments.get("limit", 10),
)
try:
# Call the events endpoint function directly
# The events function is synchronous and takes params and allowed_cameras
response = events(query_params, allowed_cameras)
# The response is already a JSONResponse with event data
# Return it as-is for the LLM
return response
except Exception as e:
logger.error(f"Error executing search_objects: {e}", exc_info=True)
return JSONResponse(
content={
"success": False,
"message": f"Error searching objects: {str(e)}",
},
status_code=500,
)
@router.post(
"/chat/execute",
dependencies=[Depends(allow_any_authenticated())],
summary="Execute a tool",
description="Execute a tool function call from an LLM.",
)
async def execute_tool(
request: Request,
body: ToolExecuteRequest = Body(...),
allowed_cameras: List[str] = Depends(get_allowed_cameras_for_filter),
) -> JSONResponse:
"""
Execute a tool function call.
This endpoint receives tool calls from LLMs and executes the corresponding
Frigate operations, returning results in a format the LLM can understand.
"""
tool_name = body.tool_name
arguments = body.arguments
logger.debug(f"Executing tool: {tool_name} with arguments: {arguments}")
if tool_name == "search_objects":
return await _execute_search_objects(request, arguments, allowed_cameras)
return JSONResponse(
content={
"success": False,
"message": f"Unknown tool: {tool_name}",
"tool": tool_name,
},
status_code=400,
)
async def _execute_tool_internal(
tool_name: str,
arguments: Dict[str, Any],
request: Request,
allowed_cameras: List[str],
) -> Dict[str, Any]:
"""
Internal helper to execute a tool and return the result as a dict.
This is used by the chat completion endpoint to execute tools.
"""
if tool_name == "search_objects":
response = await _execute_search_objects(request, arguments, allowed_cameras)
try:
if hasattr(response, "body"):
body_str = response.body.decode("utf-8")
return json.loads(body_str)
elif hasattr(response, "content"):
return response.content
else:
return {}
except (json.JSONDecodeError, AttributeError) as e:
logger.warning(f"Failed to extract tool result: {e}")
return {"error": "Failed to parse tool result"}
else:
return {"error": f"Unknown tool: {tool_name}"}
@router.post(
"/chat/completion",
response_model=ChatCompletionResponse,
dependencies=[Depends(allow_any_authenticated())],
summary="Chat completion with tool calling",
description=(
"Send a chat message to the configured GenAI provider with tool calling support. "
"The LLM can call Frigate tools to answer questions about your cameras and events."
),
)
async def chat_completion(
request: Request,
body: ChatCompletionRequest = Body(...),
allowed_cameras: List[str] = Depends(get_allowed_cameras_for_filter),
) -> JSONResponse:
"""
Chat completion endpoint with tool calling support.
This endpoint:
1. Gets the configured GenAI client
2. Gets tool definitions
3. Sends messages + tools to LLM
4. Handles tool_calls if present
5. Executes tools and sends results back to LLM
6. Repeats until final answer
7. Returns response to user
"""
genai_client = get_genai_client(request.app.frigate_config)
if not genai_client:
return JSONResponse(
content={
"error": "GenAI is not configured. Please configure a GenAI provider in your Frigate config.",
},
status_code=400,
)
tools = get_tool_definitions()
conversation = []
current_datetime = datetime.now(timezone.utc)
current_date_str = current_datetime.strftime("%Y-%m-%d")
current_time_str = current_datetime.strftime("%H:%M:%S %Z")
system_prompt = f"""You are a helpful assistant for Frigate, a security camera NVR system. You help users answer questions about their cameras, detected objects, and events.
Current date and time: {current_date_str} at {current_time_str} (UTC)
When users ask questions about "today", "yesterday", "this week", etc., use the current date above as reference.
When searching for objects or events, use ISO 8601 format for dates (e.g., {current_date_str}T00:00:00Z for the start of today).
Always be accurate with time calculations based on the current date provided."""
conversation.append(
{
"role": "system",
"content": system_prompt,
}
)
for msg in body.messages:
msg_dict = {
"role": msg.role,
"content": msg.content,
}
if msg.tool_call_id:
msg_dict["tool_call_id"] = msg.tool_call_id
if msg.name:
msg_dict["name"] = msg.name
conversation.append(msg_dict)
tool_iterations = 0
max_iterations = body.max_tool_iterations
logger.debug(
f"Starting chat completion with {len(conversation)} message(s), "
f"{len(tools)} tool(s) available, max_iterations={max_iterations}"
)
try:
while tool_iterations < max_iterations:
logger.debug(
f"Calling LLM (iteration {tool_iterations + 1}/{max_iterations}) "
f"with {len(conversation)} message(s) in conversation"
)
response = genai_client.chat_with_tools(
messages=conversation,
tools=tools if tools else None,
tool_choice="auto",
)
if response.get("finish_reason") == "error":
logger.error("GenAI client returned an error")
return JSONResponse(
content={
"error": "An error occurred while processing your request.",
},
status_code=500,
)
assistant_message = {
"role": "assistant",
"content": response.get("content"),
}
if response.get("tool_calls"):
assistant_message["tool_calls"] = [
{
"id": tc["id"],
"type": "function",
"function": {
"name": tc["name"],
"arguments": json.dumps(tc["arguments"]),
},
}
for tc in response["tool_calls"]
]
conversation.append(assistant_message)
tool_calls = response.get("tool_calls")
if not tool_calls:
logger.debug(
f"Chat completion finished with final answer (iterations: {tool_iterations})"
)
return JSONResponse(
content=ChatCompletionResponse(
message=ChatMessageResponse(
role="assistant",
content=response.get("content"),
tool_calls=None,
),
finish_reason=response.get("finish_reason", "stop"),
tool_iterations=tool_iterations,
).model_dump(),
)
# Execute tools
tool_iterations += 1
logger.debug(
f"Tool calls detected (iteration {tool_iterations}/{max_iterations}): "
f"{len(tool_calls)} tool(s) to execute"
)
tool_results = []
for tool_call in tool_calls:
tool_name = tool_call["name"]
tool_args = tool_call["arguments"]
tool_call_id = tool_call["id"]
logger.debug(
f"Executing tool: {tool_name} (id: {tool_call_id}) with arguments: {json.dumps(tool_args, indent=2)}"
)
try:
tool_result = await _execute_tool_internal(
tool_name, tool_args, request, allowed_cameras
)
if isinstance(tool_result, dict):
result_content = json.dumps(tool_result)
result_summary = tool_result
if isinstance(tool_result, dict) and isinstance(
tool_result.get("content"), list
):
result_count = len(tool_result.get("content", []))
result_summary = {
"count": result_count,
"sample": tool_result.get("content", [])[:2]
if result_count > 0
else [],
}
logger.debug(
f"Tool {tool_name} (id: {tool_call_id}) completed successfully. "
f"Result: {json.dumps(result_summary, indent=2)}"
)
elif isinstance(tool_result, str):
result_content = tool_result
logger.debug(
f"Tool {tool_name} (id: {tool_call_id}) completed successfully. "
f"Result length: {len(result_content)} characters"
)
else:
result_content = str(tool_result)
logger.debug(
f"Tool {tool_name} (id: {tool_call_id}) completed successfully. "
f"Result type: {type(tool_result).__name__}"
)
tool_results.append(
{
"role": "tool",
"tool_call_id": tool_call_id,
"content": result_content,
}
)
except Exception as e:
logger.error(
f"Error executing tool {tool_name} (id: {tool_call_id}): {e}",
exc_info=True,
)
error_content = json.dumps(
{"error": f"Tool execution failed: {str(e)}"}
)
tool_results.append(
{
"role": "tool",
"tool_call_id": tool_call_id,
"content": error_content,
}
)
logger.debug(
f"Tool {tool_name} (id: {tool_call_id}) failed. Error result added to conversation."
)
conversation.extend(tool_results)
logger.debug(
f"Added {len(tool_results)} tool result(s) to conversation. "
f"Continuing with next LLM call..."
)
logger.warning(
f"Max tool iterations ({max_iterations}) reached. Returning partial response."
)
return JSONResponse(
content=ChatCompletionResponse(
message=ChatMessageResponse(
role="assistant",
content="I reached the maximum number of tool call iterations. Please try rephrasing your question.",
tool_calls=None,
),
finish_reason="length",
tool_iterations=tool_iterations,
).model_dump(),
)
except Exception as e:
logger.error(f"Error in chat completion: {e}", exc_info=True)
return JSONResponse(
content={
"error": "An error occurred while processing your request.",
},
status_code=500,
)
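For orientation, a minimal client sketch exercising the new completion endpoint (the base URL assumes the router is mounted under `/api` like Frigate's other APIs, and authentication is assumed to be handled separately):

```python
import requests

FRIGATE_URL = "http://frigate_ip:5000"  # placeholder

resp = requests.post(
    f"{FRIGATE_URL}/api/chat/completion",
    json={
        "messages": [
            {
                "role": "user",
                "content": "Were any packages detected at the front door today?",
            }
        ],
        "max_tool_iterations": 5,
    },
    timeout=120,
)
resp.raise_for_status()
data = resp.json()

# Final assistant answer after any tool-calling rounds, plus bookkeeping fields.
print(data["message"]["content"])
print("finish_reason:", data["finish_reason"], "tool_iterations:", data["tool_iterations"])
```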

View File

@@ -1,8 +1,7 @@
from enum import Enum
from typing import Optional, Union
from typing import Optional
from pydantic import BaseModel
from pydantic.json_schema import SkipJsonSchema
class Extension(str, Enum):
@@ -48,15 +47,3 @@ class MediaMjpegFeedQueryParams(BaseModel):
mask: Optional[int] = None
motion: Optional[int] = None
regions: Optional[int] = None
class MediaRecordingsSummaryQueryParams(BaseModel):
timezone: str = "utc"
cameras: Optional[str] = "all"
class MediaRecordingsAvailabilityQueryParams(BaseModel):
cameras: str = "all"
before: Union[float, SkipJsonSchema[None]] = None
after: Union[float, SkipJsonSchema[None]] = None
scale: int = 30

View File

@@ -0,0 +1,21 @@
from typing import Optional, Union
from pydantic import BaseModel
from pydantic.json_schema import SkipJsonSchema
class MediaRecordingsSummaryQueryParams(BaseModel):
timezone: str = "utc"
cameras: Optional[str] = "all"
class MediaRecordingsAvailabilityQueryParams(BaseModel):
cameras: str = "all"
before: Union[float, SkipJsonSchema[None]] = None
after: Union[float, SkipJsonSchema[None]] = None
scale: int = 30
class RecordingsDeleteQueryParams(BaseModel):
keep: Optional[str] = None
cameras: Optional[str] = "all"

View File

@@ -1,6 +1,6 @@
from typing import Any, Dict, Optional
from typing import Any, Dict, List, Optional
from pydantic import BaseModel
from pydantic import BaseModel, Field
class AppConfigSetBody(BaseModel):
@@ -27,3 +27,16 @@ class AppPostLoginBody(BaseModel):
class AppPutRoleBody(BaseModel):
role: str
class MediaSyncBody(BaseModel):
dry_run: bool = Field(
default=True, description="If True, only report orphans without deleting them"
)
media_types: List[str] = Field(
default=["all"],
description="Types of media to sync: 'all', 'event_snapshots', 'event_thumbnails', 'review_thumbnails', 'previews', 'exports', 'recordings'",
)
force: bool = Field(
default=False, description="If True, bypass safety threshold checks"
)

View File

@@ -0,0 +1,34 @@
"""Chat API request models."""
from typing import Optional
from pydantic import BaseModel, Field
class ChatMessage(BaseModel):
"""A single message in a chat conversation."""
role: str = Field(
description="Message role: 'user', 'assistant', 'system', or 'tool'"
)
content: str = Field(description="Message content")
tool_call_id: Optional[str] = Field(
default=None, description="For tool messages, the ID of the tool call"
)
name: Optional[str] = Field(
default=None, description="For tool messages, the tool name"
)
class ChatCompletionRequest(BaseModel):
"""Request for chat completion with tool calling."""
messages: list[ChatMessage] = Field(
description="List of messages in the conversation"
)
max_tool_iterations: int = Field(
default=5,
ge=1,
le=10,
description="Maximum number of tool call iterations (default: 5)",
)
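As a rough sketch of the request shape these models accept (illustrative only, not taken from this diff), a minimal conversation payload could look like this:

# Sketch of a ChatCompletionRequest body as plain JSON-compatible data.
# Field names follow the pydantic models above; nothing else is implied.
chat_request = {
    "messages": [
        {"role": "system", "content": "You are a helpful assistant for a Frigate NVR."},
        {"role": "user", "content": "Were there any people in the backyard today?"},
    ],
    "max_tool_iterations": 5,  # bounded between 1 and 10 by the model
}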

View File

@@ -0,0 +1,35 @@
from typing import Optional
from pydantic import BaseModel, Field
class ExportCaseCreateBody(BaseModel):
"""Request body for creating a new export case."""
name: str = Field(max_length=100, description="Friendly name of the export case")
description: Optional[str] = Field(
default=None, description="Optional description of the export case"
)
class ExportCaseUpdateBody(BaseModel):
"""Request body for updating an existing export case."""
name: Optional[str] = Field(
default=None,
max_length=100,
description="Updated friendly name of the export case",
)
description: Optional[str] = Field(
default=None, description="Updated description of the export case"
)
class ExportCaseAssignBody(BaseModel):
"""Request body for assigning or unassigning an export to a case."""
export_case_id: Optional[str] = Field(
default=None,
max_length=30,
description="Case ID to assign to the export, or null to unassign",
)

View File

@@ -1,20 +1,49 @@
from typing import Union
from typing import Optional, Union
from pydantic import BaseModel, Field
from pydantic.json_schema import SkipJsonSchema
from frigate.record.export import (
PlaybackFactorEnum,
PlaybackSourceEnum,
)
from frigate.record.export import PlaybackSourceEnum
class ExportRecordingsBody(BaseModel):
playback: PlaybackFactorEnum = Field(
default=PlaybackFactorEnum.realtime, title="Playback factor"
)
source: PlaybackSourceEnum = Field(
default=PlaybackSourceEnum.recordings, title="Playback source"
)
name: str = Field(title="Friendly name", default=None, max_length=256)
image_path: Union[str, SkipJsonSchema[None]] = None
export_case_id: Optional[str] = Field(
default=None,
title="Export case ID",
max_length=30,
description="ID of the export case to assign this export to",
)
class ExportRecordingsCustomBody(BaseModel):
source: PlaybackSourceEnum = Field(
default=PlaybackSourceEnum.recordings, title="Playback source"
)
name: str = Field(title="Friendly name", default=None, max_length=256)
image_path: Union[str, SkipJsonSchema[None]] = None
export_case_id: Optional[str] = Field(
default=None,
title="Export case ID",
max_length=30,
description="ID of the export case to assign this export to",
)
ffmpeg_input_args: Optional[str] = Field(
default=None,
title="FFmpeg input arguments",
description="Custom FFmpeg input arguments. If not provided, defaults to timelapse input args.",
)
ffmpeg_output_args: Optional[str] = Field(
default=None,
title="FFmpeg output arguments",
description="Custom FFmpeg output arguments. If not provided, defaults to timelapse output args.",
)
cpu_fallback: bool = Field(
default=False,
title="CPU Fallback",
description="If true, retry export without hardware acceleration if the initial export fails.",
)
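A hedged sketch of how this body might be used with the custom export route added later in this diff ("/export/custom/{camera_name}/start/{start_time}/end/{end_time}"); the host, "/api" prefix, camera name, and timestamps are placeholders.

# Sketch of a custom export request using the fields defined above.
import requests

body = {
    "source": "recordings",
    "name": "Driveway timelapse",
    "export_case_id": None,            # or an existing case ID
    "ffmpeg_input_args": None,         # None falls back to the timelapse input args
    "ffmpeg_output_args": "-vf setpts=0.04*PTS -r 30",
    "cpu_fallback": True,              # retry without hwaccel if the first attempt fails
}

resp = requests.post(
    "http://frigate:5000/api/export/custom/driveway/start/1737500000/end/1737503600",
    json=body,
    timeout=30,
)
print(resp.json())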

View File

@@ -0,0 +1,37 @@
"""Chat API response models."""
from typing import Any, Optional
from pydantic import BaseModel, Field
class ToolCall(BaseModel):
"""A tool call from the LLM."""
id: str = Field(description="Unique identifier for this tool call")
name: str = Field(description="Tool name to call")
arguments: dict[str, Any] = Field(description="Arguments for the tool call")
class ChatMessageResponse(BaseModel):
"""A message in the chat response."""
role: str = Field(description="Message role")
content: Optional[str] = Field(
default=None, description="Message content (None if tool calls present)"
)
tool_calls: Optional[list[ToolCall]] = Field(
default=None, description="Tool calls if LLM wants to call tools"
)
class ChatCompletionResponse(BaseModel):
"""Response from chat completion."""
message: ChatMessageResponse = Field(description="The assistant's message")
finish_reason: str = Field(
description="Reason generation stopped: 'stop', 'tool_calls', 'length', 'error'"
)
tool_iterations: int = Field(
default=0, description="Number of tool call iterations performed"
)
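To make the response shape concrete, here is a hedged example of how a ChatCompletionResponse with a pending tool call might serialize; the tool name and arguments are illustrative placeholders, not tools defined in this diff.

example_response = {
    "message": {
        "role": "assistant",
        "content": None,                    # None because tool calls are present
        "tool_calls": [
            {
                "id": "call_abc123",
                "name": "search_events",    # hypothetical tool name
                "arguments": {"camera": "backyard", "label": "person"},
            }
        ],
    },
    "finish_reason": "tool_calls",
    "tool_iterations": 1,
}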

View File

@@ -0,0 +1,22 @@
from typing import List, Optional
from pydantic import BaseModel, Field
class ExportCaseModel(BaseModel):
"""Model representing a single export case."""
id: str = Field(description="Unique identifier for the export case")
name: str = Field(description="Friendly name of the export case")
description: Optional[str] = Field(
default=None, description="Optional description of the export case"
)
created_at: float = Field(
description="Unix timestamp when the export case was created"
)
updated_at: float = Field(
description="Unix timestamp when the export case was last updated"
)
ExportCasesResponse = List[ExportCaseModel]

View File

@@ -15,6 +15,9 @@ class ExportModel(BaseModel):
in_progress: bool = Field(
description="Whether the export is currently being processed"
)
export_case_id: Optional[str] = Field(
default=None, description="ID of the export case this export belongs to"
)
class StartExportResponse(BaseModel):

View File

@@ -3,13 +3,15 @@ from enum import Enum
class Tags(Enum):
app = "App"
auth = "Auth"
camera = "Camera"
preview = "Preview"
chat = "Chat"
events = "Events"
export = "Export"
classification = "Classification"
logs = "Logs"
media = "Media"
notifications = "Notifications"
preview = "Preview"
recordings = "Recordings"
review = "Review"
export = "Export"
events = "Events"
classification = "Classification"
auth = "Auth"

View File

@@ -4,10 +4,10 @@ import logging
import random
import string
from pathlib import Path
from typing import List
from typing import List, Optional
import psutil
from fastapi import APIRouter, Depends, Request
from fastapi import APIRouter, Depends, Query, Request
from fastapi.responses import JSONResponse
from pathvalidate import sanitize_filepath
from peewee import DoesNotExist
@@ -19,8 +19,20 @@ from frigate.api.auth import (
require_camera_access,
require_role,
)
from frigate.api.defs.request.export_recordings_body import ExportRecordingsBody
from frigate.api.defs.request.export_case_body import (
ExportCaseAssignBody,
ExportCaseCreateBody,
ExportCaseUpdateBody,
)
from frigate.api.defs.request.export_recordings_body import (
ExportRecordingsBody,
ExportRecordingsCustomBody,
)
from frigate.api.defs.request.export_rename_body import ExportRenameBody
from frigate.api.defs.response.export_case_response import (
ExportCaseModel,
ExportCasesResponse,
)
from frigate.api.defs.response.export_response import (
ExportModel,
ExportsResponse,
@@ -29,9 +41,9 @@ from frigate.api.defs.response.export_response import (
from frigate.api.defs.response.generic_response import GenericResponse
from frigate.api.defs.tags import Tags
from frigate.const import CLIPS_DIR, EXPORT_DIR
from frigate.models import Export, Previews, Recordings
from frigate.models import Export, ExportCase, Previews, Recordings
from frigate.record.export import (
PlaybackFactorEnum,
DEFAULT_TIME_LAPSE_FFMPEG_ARGS,
PlaybackSourceEnum,
RecordingExporter,
)
@@ -52,17 +64,182 @@ router = APIRouter(tags=[Tags.export])
)
def get_exports(
allowed_cameras: List[str] = Depends(get_allowed_cameras_for_filter),
export_case_id: Optional[str] = None,
cameras: Optional[str] = Query(default="all"),
start_date: Optional[float] = None,
end_date: Optional[float] = None,
):
exports = (
Export.select()
.where(Export.camera << allowed_cameras)
.order_by(Export.date.desc())
.dicts()
.iterator()
)
query = Export.select().where(Export.camera << allowed_cameras)
if export_case_id is not None:
if export_case_id == "unassigned":
query = query.where(Export.export_case.is_null(True))
else:
query = query.where(Export.export_case == export_case_id)
if cameras and cameras != "all":
requested = set(cameras.split(","))
filtered_cameras = list(requested.intersection(allowed_cameras))
if not filtered_cameras:
return JSONResponse(content=[])
query = query.where(Export.camera << filtered_cameras)
if start_date is not None:
query = query.where(Export.date >= start_date)
if end_date is not None:
query = query.where(Export.date <= end_date)
exports = query.order_by(Export.date.desc()).dicts().iterator()
return JSONResponse(content=[e for e in exports])
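A sketch of how the new filters might be used from a client. The exports list path itself is cut off in this hunk, so "/api/exports" is an assumption based on the per-export routes visible below; the base URL and camera names are placeholders.

import requests

params = {
    "export_case_id": "unassigned",   # or a case ID; "unassigned" returns exports with no case
    "cameras": "driveway,backyard",   # comma-separated, intersected with allowed cameras
    "start_date": 1737000000,
    "end_date": 1737999999,
}

resp = requests.get("http://frigate:5000/api/exports", params=params, timeout=30)
print(resp.json())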
@router.get(
"/cases",
response_model=ExportCasesResponse,
dependencies=[Depends(allow_any_authenticated())],
summary="Get export cases",
description="Gets all export cases from the database.",
)
def get_export_cases():
cases = (
ExportCase.select().order_by(ExportCase.created_at.desc()).dicts().iterator()
)
return JSONResponse(content=[c for c in cases])
@router.post(
"/cases",
response_model=ExportCaseModel,
dependencies=[Depends(require_role(["admin"]))],
summary="Create export case",
description="Creates a new export case.",
)
def create_export_case(body: ExportCaseCreateBody):
case = ExportCase.create(
id="".join(random.choices(string.ascii_lowercase + string.digits, k=12)),
name=body.name,
description=body.description,
created_at=Path().stat().st_mtime,
updated_at=Path().stat().st_mtime,
)
return JSONResponse(content=model_to_dict(case))
@router.get(
"/cases/{case_id}",
response_model=ExportCaseModel,
dependencies=[Depends(allow_any_authenticated())],
summary="Get a single export case",
description="Gets a specific export case by ID.",
)
def get_export_case(case_id: str):
try:
case = ExportCase.get(ExportCase.id == case_id)
return JSONResponse(content=model_to_dict(case))
except DoesNotExist:
return JSONResponse(
content={"success": False, "message": "Export case not found"},
status_code=404,
)
@router.patch(
"/cases/{case_id}",
response_model=GenericResponse,
dependencies=[Depends(require_role(["admin"]))],
summary="Update export case",
description="Updates an existing export case.",
)
def update_export_case(case_id: str, body: ExportCaseUpdateBody):
try:
case = ExportCase.get(ExportCase.id == case_id)
except DoesNotExist:
return JSONResponse(
content={"success": False, "message": "Export case not found"},
status_code=404,
)
if body.name is not None:
case.name = body.name
if body.description is not None:
case.description = body.description
case.save()
return JSONResponse(
content={"success": True, "message": "Successfully updated export case."}
)
@router.delete(
"/cases/{case_id}",
response_model=GenericResponse,
dependencies=[Depends(require_role(["admin"]))],
summary="Delete export case",
description="""Deletes an export case.\n Exports that reference this case will have their export_case set to null.\n """,
)
def delete_export_case(case_id: str):
try:
case = ExportCase.get(ExportCase.id == case_id)
except DoesNotExist:
return JSONResponse(
content={"success": False, "message": "Export case not found"},
status_code=404,
)
# Unassign exports from this case but keep the exports themselves
Export.update(export_case=None).where(Export.export_case == case).execute()
case.delete_instance()
return JSONResponse(
content={"success": True, "message": "Successfully deleted export case."}
)
@router.patch(
"/export/{export_id}/case",
response_model=GenericResponse,
dependencies=[Depends(require_role(["admin"]))],
summary="Assign export to case",
description=(
"Assigns an export to a case, or unassigns it if export_case_id is null."
),
)
async def assign_export_case(
export_id: str,
body: ExportCaseAssignBody,
request: Request,
):
try:
export: Export = Export.get(Export.id == export_id)
await require_camera_access(export.camera, request=request)
except DoesNotExist:
return JSONResponse(
content={"success": False, "message": "Export not found."},
status_code=404,
)
if body.export_case_id is not None:
try:
ExportCase.get(ExportCase.id == body.export_case_id)
except DoesNotExist:
return JSONResponse(
content={"success": False, "message": "Export case not found."},
status_code=404,
)
export.export_case = body.export_case_id
else:
export.export_case = None
export.save()
return JSONResponse(
content={"success": True, "message": "Successfully updated export case."}
)
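Putting the case routes together, a hedged sketch of creating a case and assigning an existing export to it. It uses the "/cases" and "/export/{export_id}/case" routes defined above; the base URL, export ID, and credentials are placeholders, and both calls require an admin role per the route dependencies.

import requests

base = "http://frigate:5000/api"

# Create a case; the response includes the generated "id".
case = requests.post(
    f"{base}/cases",
    json={"name": "Break-in investigation", "description": "Clips for the insurance claim"},
    timeout=30,
).json()

# Assign an existing export to the new case.
requests.patch(
    f"{base}/export/driveway_ab12cd/case",
    json={"export_case_id": case["id"]},
    timeout=30,
)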
@router.post(
"/export/{camera_name}/start/{start_time}/end/{end_time}",
response_model=StartExportResponse,
@@ -88,11 +265,20 @@ def export_recording(
status_code=404,
)
playback_factor = body.playback
playback_source = body.source
friendly_name = body.name
existing_image = sanitize_filepath(body.image_path) if body.image_path else None
export_case_id = body.export_case_id
if export_case_id is not None:
try:
ExportCase.get(ExportCase.id == export_case_id)
except DoesNotExist:
return JSONResponse(
content={"success": False, "message": "Export case not found"},
status_code=404,
)
# Ensure that existing_image is a valid path
if existing_image and not existing_image.startswith(CLIPS_DIR):
return JSONResponse(
@@ -151,16 +337,12 @@ def export_recording(
existing_image,
int(start_time),
int(end_time),
(
PlaybackFactorEnum[playback_factor]
if playback_factor in PlaybackFactorEnum.__members__.values()
else PlaybackFactorEnum.realtime
),
(
PlaybackSourceEnum[playback_source]
if playback_source in PlaybackSourceEnum.__members__.values()
else PlaybackSourceEnum.recordings
),
export_case_id,
)
exporter.start()
return JSONResponse(
@@ -271,6 +453,138 @@ async def export_delete(event_id: str, request: Request):
)
@router.post(
"/export/custom/{camera_name}/start/{start_time}/end/{end_time}",
response_model=StartExportResponse,
dependencies=[Depends(require_camera_access)],
summary="Start custom recording export",
description="""Starts an export of a recording for the specified time range using custom FFmpeg arguments.
The export can be from recordings or preview footage. Returns the export ID if
successful, or an error message if the camera is invalid or no recordings/previews
are found for the time range. If ffmpeg_input_args and ffmpeg_output_args are not provided,
defaults to timelapse export settings.""",
)
def export_recording_custom(
request: Request,
camera_name: str,
start_time: float,
end_time: float,
body: ExportRecordingsCustomBody,
):
if not camera_name or not request.app.frigate_config.cameras.get(camera_name):
return JSONResponse(
content=(
{"success": False, "message": f"{camera_name} is not a valid camera."}
),
status_code=404,
)
playback_source = body.source
friendly_name = body.name
existing_image = sanitize_filepath(body.image_path) if body.image_path else None
ffmpeg_input_args = body.ffmpeg_input_args
ffmpeg_output_args = body.ffmpeg_output_args
cpu_fallback = body.cpu_fallback
export_case_id = body.export_case_id
if export_case_id is not None:
try:
ExportCase.get(ExportCase.id == export_case_id)
except DoesNotExist:
return JSONResponse(
content={"success": False, "message": "Export case not found"},
status_code=404,
)
# Ensure that existing_image is a valid path
if existing_image and not existing_image.startswith(CLIPS_DIR):
return JSONResponse(
content=({"success": False, "message": "Invalid image path"}),
status_code=400,
)
if playback_source == "recordings":
recordings_count = (
Recordings.select()
.where(
Recordings.start_time.between(start_time, end_time)
| Recordings.end_time.between(start_time, end_time)
| (
(start_time > Recordings.start_time)
& (end_time < Recordings.end_time)
)
)
.where(Recordings.camera == camera_name)
.count()
)
if recordings_count <= 0:
return JSONResponse(
content=(
{"success": False, "message": "No recordings found for time range"}
),
status_code=400,
)
else:
previews_count = (
Previews.select()
.where(
Previews.start_time.between(start_time, end_time)
| Previews.end_time.between(start_time, end_time)
| ((start_time > Previews.start_time) & (end_time < Previews.end_time))
)
.where(Previews.camera == camera_name)
.count()
)
if not is_current_hour(start_time) and previews_count <= 0:
return JSONResponse(
content=(
{"success": False, "message": "No previews found for time range"}
),
status_code=400,
)
export_id = f"{camera_name}_{''.join(random.choices(string.ascii_lowercase + string.digits, k=6))}"
# Set default values if not provided (timelapse defaults)
if ffmpeg_input_args is None:
ffmpeg_input_args = ""
if ffmpeg_output_args is None:
ffmpeg_output_args = DEFAULT_TIME_LAPSE_FFMPEG_ARGS
exporter = RecordingExporter(
request.app.frigate_config,
export_id,
camera_name,
friendly_name,
existing_image,
int(start_time),
int(end_time),
(
PlaybackSourceEnum[playback_source]
if playback_source in PlaybackSourceEnum.__members__.values()
else PlaybackSourceEnum.recordings
),
export_case_id,
ffmpeg_input_args,
ffmpeg_output_args,
cpu_fallback,
)
exporter.start()
return JSONResponse(
content=(
{
"success": True,
"message": "Starting export of recording.",
"export_id": export_id,
}
),
status_code=200,
)
@router.get(
"/exports/{export_id}",
response_model=ExportModel,

View File

@@ -16,12 +16,14 @@ from frigate.api import app as main_app
from frigate.api import (
auth,
camera,
chat,
classification,
event,
export,
media,
notification,
preview,
record,
review,
)
from frigate.api.auth import get_jwt_secret, limiter, require_admin_by_default
@@ -120,6 +122,7 @@ def create_fastapi_app(
# Order of include_router matters: https://fastapi.tiangolo.com/tutorial/path-params/#order-matters
app.include_router(auth.router)
app.include_router(camera.router)
app.include_router(chat.router)
app.include_router(classification.router)
app.include_router(review.router)
app.include_router(main_app.router)
@@ -128,6 +131,7 @@ def create_fastapi_app(
app.include_router(export.router)
app.include_router(event.router)
app.include_router(media.router)
app.include_router(record.router)
# App Properties
app.frigate_config = frigate_config
app.embeddings = embeddings

View File

@@ -8,9 +8,8 @@ import os
import subprocess as sp
import time
from datetime import datetime, timedelta, timezone
from functools import reduce
from pathlib import Path as FilePath
from typing import Any, List
from typing import Any
from urllib.parse import unquote
import cv2
@@ -19,12 +18,11 @@ import pytz
from fastapi import APIRouter, Depends, Path, Query, Request, Response
from fastapi.responses import FileResponse, JSONResponse, StreamingResponse
from pathvalidate import sanitize_filename
from peewee import DoesNotExist, fn, operator
from peewee import DoesNotExist, fn
from tzlocal import get_localzone_name
from frigate.api.auth import (
allow_any_authenticated,
get_allowed_cameras_for_filter,
require_camera_access,
)
from frigate.api.defs.query.media_query_parameters import (
@@ -32,8 +30,6 @@ from frigate.api.defs.query.media_query_parameters import (
MediaEventsSnapshotQueryParams,
MediaLatestFrameQueryParams,
MediaMjpegFeedQueryParams,
MediaRecordingsAvailabilityQueryParams,
MediaRecordingsSummaryQueryParams,
)
from frigate.api.defs.tags import Tags
from frigate.camera.state import CameraState
@@ -44,13 +40,12 @@ from frigate.const import (
INSTALL_DIR,
MAX_SEGMENT_DURATION,
PREVIEW_FRAME_TYPE,
RECORD_DIR,
)
from frigate.models import Event, Previews, Recordings, Regions, ReviewSegment
from frigate.output.preview import get_most_recent_preview_frame
from frigate.track.object_processing import TrackedObjectProcessor
from frigate.util.file import get_event_thumbnail_bytes
from frigate.util.image import get_image_from_recording
from frigate.util.time import get_dst_transitions
logger = logging.getLogger(__name__)
@@ -131,7 +126,9 @@ async def camera_ptz_info(request: Request, camera_name: str):
@router.get(
"/{camera_name}/latest.{extension}", dependencies=[Depends(require_camera_access)]
"/{camera_name}/latest.{extension}",
dependencies=[Depends(require_camera_access)],
description="Returns the latest frame from the specified camera in the requested format (jpg, png, webp). Falls back to preview frames if the camera is offline.",
)
async def latest_frame(
request: Request,
@@ -165,20 +162,37 @@ async def latest_frame(
or 10
)
is_offline = False
if frame is None or datetime.now().timestamp() > (
frame_processor.get_current_frame_time(camera_name) + retry_interval
):
if request.app.camera_error_image is None:
error_image = glob.glob(
os.path.join(INSTALL_DIR, "frigate/images/camera-error.jpg")
)
last_frame_time = frame_processor.get_current_frame_time(camera_name)
preview_path = get_most_recent_preview_frame(
camera_name, before=last_frame_time
)
if len(error_image) > 0:
request.app.camera_error_image = cv2.imread(
error_image[0], cv2.IMREAD_UNCHANGED
if preview_path:
logger.debug(f"Using most recent preview frame for {camera_name}")
frame = cv2.imread(preview_path, cv2.IMREAD_UNCHANGED)
if frame is not None:
is_offline = True
if frame is None or not is_offline:
logger.debug(
f"No live or preview frame available for {camera_name}. Using error image."
)
if request.app.camera_error_image is None:
error_image = glob.glob(
os.path.join(INSTALL_DIR, "frigate/images/camera-error.jpg")
)
frame = request.app.camera_error_image
if len(error_image) > 0:
request.app.camera_error_image = cv2.imread(
error_image[0], cv2.IMREAD_UNCHANGED
)
frame = request.app.camera_error_image
height = int(params.height or str(frame.shape[0]))
width = int(height * frame.shape[1] / frame.shape[0])
@@ -200,14 +214,18 @@ async def latest_frame(
frame = cv2.resize(frame, dsize=(width, height), interpolation=cv2.INTER_AREA)
_, img = cv2.imencode(f".{extension.value}", frame, quality_params)
headers = {
"Cache-Control": "no-store" if not params.store else "private, max-age=60",
}
if is_offline:
headers["X-Frigate-Offline"] = "true"
return Response(
content=img.tobytes(),
media_type=extension.get_mime_type(),
headers={
"Cache-Control": "no-store"
if not params.store
else "private, max-age=60",
},
headers=headers,
)
elif (
camera_name == "birdseye"
@@ -397,333 +415,6 @@ async def submit_recording_snapshot_to_plus(
)
@router.get("/recordings/storage", dependencies=[Depends(allow_any_authenticated())])
def get_recordings_storage_usage(request: Request):
recording_stats = request.app.stats_emitter.get_latest_stats()["service"][
"storage"
][RECORD_DIR]
if not recording_stats:
return JSONResponse({})
total_mb = recording_stats["total"]
camera_usages: dict[str, dict] = (
request.app.storage_maintainer.calculate_camera_usages()
)
for camera_name in camera_usages.keys():
if camera_usages.get(camera_name, {}).get("usage"):
camera_usages[camera_name]["usage_percent"] = (
camera_usages.get(camera_name, {}).get("usage", 0) / total_mb
) * 100
return JSONResponse(content=camera_usages)
@router.get("/recordings/summary", dependencies=[Depends(allow_any_authenticated())])
def all_recordings_summary(
request: Request,
params: MediaRecordingsSummaryQueryParams = Depends(),
allowed_cameras: List[str] = Depends(get_allowed_cameras_for_filter),
):
"""Returns true/false by day indicating if recordings exist"""
cameras = params.cameras
if cameras != "all":
requested = set(unquote(cameras).split(","))
filtered = requested.intersection(allowed_cameras)
if not filtered:
return JSONResponse(content={})
camera_list = list(filtered)
else:
camera_list = allowed_cameras
time_range_query = (
Recordings.select(
fn.MIN(Recordings.start_time).alias("min_time"),
fn.MAX(Recordings.start_time).alias("max_time"),
)
.where(Recordings.camera << camera_list)
.dicts()
.get()
)
min_time = time_range_query.get("min_time")
max_time = time_range_query.get("max_time")
if min_time is None or max_time is None:
return JSONResponse(content={})
dst_periods = get_dst_transitions(params.timezone, min_time, max_time)
days: dict[str, bool] = {}
for period_start, period_end, period_offset in dst_periods:
hours_offset = int(period_offset / 60 / 60)
minutes_offset = int(period_offset / 60 - hours_offset * 60)
period_hour_modifier = f"{hours_offset} hour"
period_minute_modifier = f"{minutes_offset} minute"
period_query = (
Recordings.select(
fn.strftime(
"%Y-%m-%d",
fn.datetime(
Recordings.start_time,
"unixepoch",
period_hour_modifier,
period_minute_modifier,
),
).alias("day")
)
.where(
(Recordings.camera << camera_list)
& (Recordings.end_time >= period_start)
& (Recordings.start_time <= period_end)
)
.group_by(
fn.strftime(
"%Y-%m-%d",
fn.datetime(
Recordings.start_time,
"unixepoch",
period_hour_modifier,
period_minute_modifier,
),
)
)
.order_by(Recordings.start_time.desc())
.namedtuples()
)
for g in period_query:
days[g.day] = True
return JSONResponse(content=dict(sorted(days.items())))
@router.get(
"/{camera_name}/recordings/summary", dependencies=[Depends(require_camera_access)]
)
async def recordings_summary(camera_name: str, timezone: str = "utc"):
"""Returns hourly summary for recordings of given camera"""
time_range_query = (
Recordings.select(
fn.MIN(Recordings.start_time).alias("min_time"),
fn.MAX(Recordings.start_time).alias("max_time"),
)
.where(Recordings.camera == camera_name)
.dicts()
.get()
)
min_time = time_range_query.get("min_time")
max_time = time_range_query.get("max_time")
days: dict[str, dict] = {}
if min_time is None or max_time is None:
return JSONResponse(content=list(days.values()))
dst_periods = get_dst_transitions(timezone, min_time, max_time)
for period_start, period_end, period_offset in dst_periods:
hours_offset = int(period_offset / 60 / 60)
minutes_offset = int(period_offset / 60 - hours_offset * 60)
period_hour_modifier = f"{hours_offset} hour"
period_minute_modifier = f"{minutes_offset} minute"
recording_groups = (
Recordings.select(
fn.strftime(
"%Y-%m-%d %H",
fn.datetime(
Recordings.start_time,
"unixepoch",
period_hour_modifier,
period_minute_modifier,
),
).alias("hour"),
fn.SUM(Recordings.duration).alias("duration"),
fn.SUM(Recordings.motion).alias("motion"),
fn.SUM(Recordings.objects).alias("objects"),
)
.where(
(Recordings.camera == camera_name)
& (Recordings.end_time >= period_start)
& (Recordings.start_time <= period_end)
)
.group_by((Recordings.start_time + period_offset).cast("int") / 3600)
.order_by(Recordings.start_time.desc())
.namedtuples()
)
event_groups = (
Event.select(
fn.strftime(
"%Y-%m-%d %H",
fn.datetime(
Event.start_time,
"unixepoch",
period_hour_modifier,
period_minute_modifier,
),
).alias("hour"),
fn.COUNT(Event.id).alias("count"),
)
.where(Event.camera == camera_name, Event.has_clip)
.where(
(Event.start_time >= period_start) & (Event.start_time <= period_end)
)
.group_by((Event.start_time + period_offset).cast("int") / 3600)
.namedtuples()
)
event_map = {g.hour: g.count for g in event_groups}
for recording_group in recording_groups:
parts = recording_group.hour.split()
hour = parts[1]
day = parts[0]
events_count = event_map.get(recording_group.hour, 0)
hour_data = {
"hour": hour,
"events": events_count,
"motion": recording_group.motion,
"objects": recording_group.objects,
"duration": round(recording_group.duration),
}
if day in days:
# merge counts if already present (edge-case at DST boundary)
days[day]["events"] += events_count or 0
days[day]["hours"].append(hour_data)
else:
days[day] = {
"events": events_count or 0,
"hours": [hour_data],
"day": day,
}
return JSONResponse(content=list(days.values()))
@router.get("/{camera_name}/recordings", dependencies=[Depends(require_camera_access)])
async def recordings(
camera_name: str,
after: float = (datetime.now() - timedelta(hours=1)).timestamp(),
before: float = datetime.now().timestamp(),
):
"""Return specific camera recordings between the given 'after'/'end' times. If not provided the last hour will be used"""
recordings = (
Recordings.select(
Recordings.id,
Recordings.start_time,
Recordings.end_time,
Recordings.segment_size,
Recordings.motion,
Recordings.objects,
Recordings.duration,
)
.where(
Recordings.camera == camera_name,
Recordings.end_time >= after,
Recordings.start_time <= before,
)
.order_by(Recordings.start_time)
.dicts()
.iterator()
)
return JSONResponse(content=list(recordings))
@router.get(
"/recordings/unavailable",
response_model=list[dict],
dependencies=[Depends(allow_any_authenticated())],
)
async def no_recordings(
request: Request,
params: MediaRecordingsAvailabilityQueryParams = Depends(),
allowed_cameras: List[str] = Depends(get_allowed_cameras_for_filter),
):
"""Get time ranges with no recordings."""
cameras = params.cameras
if cameras != "all":
requested = set(unquote(cameras).split(","))
filtered = requested.intersection(allowed_cameras)
if not filtered:
return JSONResponse(content=[])
cameras = ",".join(filtered)
else:
cameras = allowed_cameras
before = params.before or datetime.datetime.now().timestamp()
after = (
params.after
or (datetime.datetime.now() - datetime.timedelta(hours=1)).timestamp()
)
scale = params.scale
clauses = [(Recordings.end_time >= after) & (Recordings.start_time <= before)]
if cameras != "all":
camera_list = cameras.split(",")
clauses.append((Recordings.camera << camera_list))
else:
camera_list = allowed_cameras
# Get recording start times
data: list[Recordings] = (
Recordings.select(Recordings.start_time, Recordings.end_time)
.where(reduce(operator.and_, clauses))
.order_by(Recordings.start_time.asc())
.dicts()
.iterator()
)
# Convert recordings to list of (start, end) tuples
recordings = [(r["start_time"], r["end_time"]) for r in data]
# Iterate through time segments and check if each has any recording
no_recording_segments = []
current = after
current_gap_start = None
while current < before:
segment_end = min(current + scale, before)
# Check if this segment overlaps with any recording
has_recording = any(
rec_start < segment_end and rec_end > current
for rec_start, rec_end in recordings
)
if not has_recording:
# This segment has no recordings
if current_gap_start is None:
current_gap_start = current # Start a new gap
else:
# This segment has recordings
if current_gap_start is not None:
# End the current gap and append it
no_recording_segments.append(
{"start_time": int(current_gap_start), "end_time": int(current)}
)
current_gap_start = None
current = segment_end
# Append the last gap if it exists
if current_gap_start is not None:
no_recording_segments.append(
{"start_time": int(current_gap_start), "end_time": int(before)}
)
return JSONResponse(content=no_recording_segments)
@router.get(
"/{camera_name}/start/{start_ts}/end/{end_ts}/clip.mp4",
dependencies=[Depends(require_camera_access)],

frigate/api/record.py (new file, 479 lines)
View File

@@ -0,0 +1,479 @@
"""Recording APIs."""
import logging
from datetime import datetime, timedelta
from functools import reduce
from pathlib import Path
from typing import List
from urllib.parse import unquote
from fastapi import APIRouter, Depends, Request
from fastapi import Path as PathParam
from fastapi.responses import JSONResponse
from peewee import fn, operator
from frigate.api.auth import (
allow_any_authenticated,
get_allowed_cameras_for_filter,
require_camera_access,
require_role,
)
from frigate.api.defs.query.recordings_query_parameters import (
MediaRecordingsAvailabilityQueryParams,
MediaRecordingsSummaryQueryParams,
RecordingsDeleteQueryParams,
)
from frigate.api.defs.response.generic_response import GenericResponse
from frigate.api.defs.tags import Tags
from frigate.const import RECORD_DIR
from frigate.models import Event, Recordings
from frigate.util.time import get_dst_transitions
logger = logging.getLogger(__name__)
router = APIRouter(tags=[Tags.recordings])
@router.get("/recordings/storage", dependencies=[Depends(allow_any_authenticated())])
def get_recordings_storage_usage(request: Request):
recording_stats = request.app.stats_emitter.get_latest_stats()["service"][
"storage"
][RECORD_DIR]
if not recording_stats:
return JSONResponse({})
total_mb = recording_stats["total"]
camera_usages: dict[str, dict] = (
request.app.storage_maintainer.calculate_camera_usages()
)
for camera_name in camera_usages.keys():
if camera_usages.get(camera_name, {}).get("usage"):
camera_usages[camera_name]["usage_percent"] = (
camera_usages.get(camera_name, {}).get("usage", 0) / total_mb
) * 100
return JSONResponse(content=camera_usages)
@router.get("/recordings/summary", dependencies=[Depends(allow_any_authenticated())])
def all_recordings_summary(
request: Request,
params: MediaRecordingsSummaryQueryParams = Depends(),
allowed_cameras: List[str] = Depends(get_allowed_cameras_for_filter),
):
"""Returns true/false by day indicating if recordings exist"""
cameras = params.cameras
if cameras != "all":
requested = set(unquote(cameras).split(","))
filtered = requested.intersection(allowed_cameras)
if not filtered:
return JSONResponse(content={})
camera_list = list(filtered)
else:
camera_list = allowed_cameras
time_range_query = (
Recordings.select(
fn.MIN(Recordings.start_time).alias("min_time"),
fn.MAX(Recordings.start_time).alias("max_time"),
)
.where(Recordings.camera << camera_list)
.dicts()
.get()
)
min_time = time_range_query.get("min_time")
max_time = time_range_query.get("max_time")
if min_time is None or max_time is None:
return JSONResponse(content={})
dst_periods = get_dst_transitions(params.timezone, min_time, max_time)
days: dict[str, bool] = {}
for period_start, period_end, period_offset in dst_periods:
hours_offset = int(period_offset / 60 / 60)
minutes_offset = int(period_offset / 60 - hours_offset * 60)
period_hour_modifier = f"{hours_offset} hour"
period_minute_modifier = f"{minutes_offset} minute"
period_query = (
Recordings.select(
fn.strftime(
"%Y-%m-%d",
fn.datetime(
Recordings.start_time,
"unixepoch",
period_hour_modifier,
period_minute_modifier,
),
).alias("day")
)
.where(
(Recordings.camera << camera_list)
& (Recordings.end_time >= period_start)
& (Recordings.start_time <= period_end)
)
.group_by(
fn.strftime(
"%Y-%m-%d",
fn.datetime(
Recordings.start_time,
"unixepoch",
period_hour_modifier,
period_minute_modifier,
),
)
)
.order_by(Recordings.start_time.desc())
.namedtuples()
)
for g in period_query:
days[g.day] = True
return JSONResponse(content=dict(sorted(days.items())))
@router.get(
"/{camera_name}/recordings/summary", dependencies=[Depends(require_camera_access)]
)
async def recordings_summary(camera_name: str, timezone: str = "utc"):
"""Returns hourly summary for recordings of given camera"""
time_range_query = (
Recordings.select(
fn.MIN(Recordings.start_time).alias("min_time"),
fn.MAX(Recordings.start_time).alias("max_time"),
)
.where(Recordings.camera == camera_name)
.dicts()
.get()
)
min_time = time_range_query.get("min_time")
max_time = time_range_query.get("max_time")
days: dict[str, dict] = {}
if min_time is None or max_time is None:
return JSONResponse(content=list(days.values()))
dst_periods = get_dst_transitions(timezone, min_time, max_time)
for period_start, period_end, period_offset in dst_periods:
hours_offset = int(period_offset / 60 / 60)
minutes_offset = int(period_offset / 60 - hours_offset * 60)
period_hour_modifier = f"{hours_offset} hour"
period_minute_modifier = f"{minutes_offset} minute"
recording_groups = (
Recordings.select(
fn.strftime(
"%Y-%m-%d %H",
fn.datetime(
Recordings.start_time,
"unixepoch",
period_hour_modifier,
period_minute_modifier,
),
).alias("hour"),
fn.SUM(Recordings.duration).alias("duration"),
fn.SUM(Recordings.motion).alias("motion"),
fn.SUM(Recordings.objects).alias("objects"),
)
.where(
(Recordings.camera == camera_name)
& (Recordings.end_time >= period_start)
& (Recordings.start_time <= period_end)
)
.group_by((Recordings.start_time + period_offset).cast("int") / 3600)
.order_by(Recordings.start_time.desc())
.namedtuples()
)
event_groups = (
Event.select(
fn.strftime(
"%Y-%m-%d %H",
fn.datetime(
Event.start_time,
"unixepoch",
period_hour_modifier,
period_minute_modifier,
),
).alias("hour"),
fn.COUNT(Event.id).alias("count"),
)
.where(Event.camera == camera_name, Event.has_clip)
.where(
(Event.start_time >= period_start) & (Event.start_time <= period_end)
)
.group_by((Event.start_time + period_offset).cast("int") / 3600)
.namedtuples()
)
event_map = {g.hour: g.count for g in event_groups}
for recording_group in recording_groups:
parts = recording_group.hour.split()
hour = parts[1]
day = parts[0]
events_count = event_map.get(recording_group.hour, 0)
hour_data = {
"hour": hour,
"events": events_count,
"motion": recording_group.motion,
"objects": recording_group.objects,
"duration": round(recording_group.duration),
}
if day in days:
# merge counts if already present (edge-case at DST boundary)
days[day]["events"] += events_count or 0
days[day]["hours"].append(hour_data)
else:
days[day] = {
"events": events_count or 0,
"hours": [hour_data],
"day": day,
}
return JSONResponse(content=list(days.values()))
@router.get("/{camera_name}/recordings", dependencies=[Depends(require_camera_access)])
async def recordings(
camera_name: str,
after: float = (datetime.now() - timedelta(hours=1)).timestamp(),
before: float = datetime.now().timestamp(),
):
"""Return specific camera recordings between the given 'after'/'end' times. If not provided the last hour will be used"""
recordings = (
Recordings.select(
Recordings.id,
Recordings.start_time,
Recordings.end_time,
Recordings.segment_size,
Recordings.motion,
Recordings.objects,
Recordings.duration,
)
.where(
Recordings.camera == camera_name,
Recordings.end_time >= after,
Recordings.start_time <= before,
)
.order_by(Recordings.start_time)
.dicts()
.iterator()
)
return JSONResponse(content=list(recordings))
@router.get(
"/recordings/unavailable",
response_model=list[dict],
dependencies=[Depends(allow_any_authenticated())],
)
async def no_recordings(
request: Request,
params: MediaRecordingsAvailabilityQueryParams = Depends(),
allowed_cameras: List[str] = Depends(get_allowed_cameras_for_filter),
):
"""Get time ranges with no recordings."""
cameras = params.cameras
if cameras != "all":
requested = set(unquote(cameras).split(","))
filtered = requested.intersection(allowed_cameras)
if not filtered:
return JSONResponse(content=[])
cameras = ",".join(filtered)
else:
cameras = allowed_cameras
before = params.before or datetime.datetime.now().timestamp()
after = (
params.after
or (datetime.datetime.now() - datetime.timedelta(hours=1)).timestamp()
)
scale = params.scale
clauses = [(Recordings.end_time >= after) & (Recordings.start_time <= before)]
if cameras != "all":
camera_list = cameras.split(",")
clauses.append((Recordings.camera << camera_list))
else:
camera_list = allowed_cameras
# Get recording start times
data: list[Recordings] = (
Recordings.select(Recordings.start_time, Recordings.end_time)
.where(reduce(operator.and_, clauses))
.order_by(Recordings.start_time.asc())
.dicts()
.iterator()
)
# Convert recordings to list of (start, end) tuples
recordings = [(r["start_time"], r["end_time"]) for r in data]
# Iterate through time segments and check if each has any recording
no_recording_segments = []
current = after
current_gap_start = None
while current < before:
segment_end = min(current + scale, before)
# Check if this segment overlaps with any recording
has_recording = any(
rec_start < segment_end and rec_end > current
for rec_start, rec_end in recordings
)
if not has_recording:
# This segment has no recordings
if current_gap_start is None:
current_gap_start = current # Start a new gap
else:
# This segment has recordings
if current_gap_start is not None:
# End the current gap and append it
no_recording_segments.append(
{"start_time": int(current_gap_start), "end_time": int(current)}
)
current_gap_start = None
current = segment_end
# Append the last gap if it exists
if current_gap_start is not None:
no_recording_segments.append(
{"start_time": int(current_gap_start), "end_time": int(before)}
)
return JSONResponse(content=no_recording_segments)
@router.delete(
"/recordings/start/{start}/end/{end}",
response_model=GenericResponse,
dependencies=[Depends(require_role(["admin"]))],
summary="Delete recordings",
description="""Deletes recordings within the specified time range.
Recordings can be filtered by cameras and kept based on motion, objects, or audio attributes.
""",
)
async def delete_recordings(
start: float = PathParam(..., description="Start timestamp (unix)"),
end: float = PathParam(..., description="End timestamp (unix)"),
params: RecordingsDeleteQueryParams = Depends(),
allowed_cameras: List[str] = Depends(get_allowed_cameras_for_filter),
):
"""Delete recordings in the specified time range."""
if start >= end:
return JSONResponse(
content={
"success": False,
"message": "Start time must be less than end time.",
},
status_code=400,
)
cameras = params.cameras
if cameras != "all":
requested = set(cameras.split(","))
filtered = requested.intersection(allowed_cameras)
if not filtered:
return JSONResponse(
content={
"success": False,
"message": "No valid cameras found in the request.",
},
status_code=400,
)
camera_list = list(filtered)
else:
camera_list = allowed_cameras
# Parse keep parameter
keep_set = set()
if params.keep:
keep_set = set(params.keep.split(","))
# Build query to find overlapping recordings
clauses = [
(
Recordings.start_time.between(start, end)
| Recordings.end_time.between(start, end)
| ((start > Recordings.start_time) & (end < Recordings.end_time))
),
(Recordings.camera << camera_list),
]
keep_clauses = []
if "motion" in keep_set:
keep_clauses.append(Recordings.motion.is_null(False) & (Recordings.motion > 0))
if "object" in keep_set:
keep_clauses.append(
Recordings.objects.is_null(False) & (Recordings.objects > 0)
)
if "audio" in keep_set:
keep_clauses.append(Recordings.dBFS.is_null(False))
if keep_clauses:
keep_condition = reduce(operator.or_, keep_clauses)
clauses.append(~keep_condition)
recordings_to_delete = (
Recordings.select(Recordings.id, Recordings.path)
.where(reduce(operator.and_, clauses))
.dicts()
.iterator()
)
recording_ids = []
deleted_count = 0
error_count = 0
for recording in recordings_to_delete:
recording_ids.append(recording["id"])
try:
Path(recording["path"]).unlink(missing_ok=True)
deleted_count += 1
except Exception as e:
logger.error(f"Failed to delete recording file {recording['path']}: {e}")
error_count += 1
if recording_ids:
max_deletes = 100000
recording_ids_list = list(recording_ids)
for i in range(0, len(recording_ids_list), max_deletes):
Recordings.delete().where(
Recordings.id << recording_ids_list[i : i + max_deletes]
).execute()
message = f"Successfully deleted {deleted_count} recording(s)."
if error_count > 0:
message += f" {error_count} file deletion error(s) occurred."
return JSONResponse(
content={"success": True, "message": message},
status_code=200,
)
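A hedged usage sketch for the delete route defined above ("/recordings/start/{start}/end/{end}"): delete recordings for two cameras in a time window while keeping any segments that contain motion or detected objects. The base URL is a placeholder and the request must be made as an admin.

import requests

resp = requests.delete(
    "http://frigate:5000/api/recordings/start/1737500000/end/1737586400",
    params={"cameras": "driveway,backyard", "keep": "motion,object"},
    timeout=60,
)
print(resp.json())  # e.g. {"success": True, "message": "Successfully deleted N recording(s)."}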

View File

@@ -19,6 +19,8 @@ class CameraMetrics:
process_pid: Synchronized
capture_process_pid: Synchronized
ffmpeg_pid: Synchronized
reconnects_last_hour: Synchronized
stalls_last_hour: Synchronized
def __init__(self, manager: SyncManager):
self.camera_fps = manager.Value("d", 0)
@@ -35,6 +37,8 @@ class CameraMetrics:
self.process_pid = manager.Value("i", 0)
self.capture_process_pid = manager.Value("i", 0)
self.ffmpeg_pid = manager.Value("i", 0)
self.reconnects_last_hour = manager.Value("i", 0)
self.stalls_last_hour = manager.Value("i", 0)
class PTZMetrics:

View File

@@ -28,6 +28,7 @@ from frigate.const import (
UPDATE_CAMERA_ACTIVITY,
UPDATE_EMBEDDINGS_REINDEX_PROGRESS,
UPDATE_EVENT_DESCRIPTION,
UPDATE_JOB_STATE,
UPDATE_MODEL_STATE,
UPDATE_REVIEW_DESCRIPTION,
UPSERT_REVIEW_SEGMENT,
@@ -60,6 +61,7 @@ class Dispatcher:
self.camera_activity = CameraActivityManager(config, self.publish)
self.audio_activity = AudioActivityManager(config, self.publish)
self.model_state: dict[str, ModelStatusTypesEnum] = {}
self.job_state: dict[str, dict[str, Any]] = {} # {job_type: job_data}
self.embeddings_reindex: dict[str, Any] = {}
self.birdseye_layout: dict[str, Any] = {}
self.audio_transcription_state: str = "idle"
@@ -180,6 +182,19 @@ class Dispatcher:
def handle_model_state() -> None:
self.publish("model_state", json.dumps(self.model_state.copy()))
def handle_update_job_state() -> None:
if payload and isinstance(payload, dict):
job_type = payload.get("job_type")
if job_type:
self.job_state[job_type] = payload
self.publish(
"job_state",
json.dumps(self.job_state),
)
def handle_job_state() -> None:
self.publish("job_state", json.dumps(self.job_state.copy()))
def handle_update_audio_transcription_state() -> None:
if payload:
self.audio_transcription_state = payload
@@ -277,6 +292,7 @@ class Dispatcher:
UPDATE_EVENT_DESCRIPTION: handle_update_event_description,
UPDATE_REVIEW_DESCRIPTION: handle_update_review_description,
UPDATE_MODEL_STATE: handle_update_model_state,
UPDATE_JOB_STATE: handle_update_job_state,
UPDATE_EMBEDDINGS_REINDEX_PROGRESS: handle_update_embeddings_reindex_progress,
UPDATE_BIRDSEYE_LAYOUT: handle_update_birdseye_layout,
UPDATE_AUDIO_TRANSCRIPTION_STATE: handle_update_audio_transcription_state,
@@ -284,6 +300,7 @@ class Dispatcher:
"restart": handle_restart,
"embeddingsReindexProgress": handle_embeddings_reindex_progress,
"modelState": handle_model_state,
"jobState": handle_job_state,
"audioTranscriptionState": handle_audio_transcription_state,
"birdseyeLayout": handle_birdseye_layout,
"onConnect": handle_on_connect,

View File

@@ -14,6 +14,7 @@ class GenAIProviderEnum(str, Enum):
azure_openai = "azure_openai"
gemini = "gemini"
ollama = "ollama"
llamacpp = "llamacpp"
class GenAIConfig(FrigateBaseModel):

View File

@@ -1,5 +1,5 @@
from enum import Enum
from typing import Optional
from typing import Optional, Union
from pydantic import Field
@@ -19,8 +19,6 @@ __all__ = [
"RetainModeEnum",
]
DEFAULT_TIME_LAPSE_FFMPEG_ARGS = "-vf setpts=0.04*PTS -r 30"
class RecordRetainConfig(FrigateBaseModel):
days: float = Field(default=0, ge=0, title="Default retention period.")
@@ -67,16 +65,13 @@ class RecordPreviewConfig(FrigateBaseModel):
class RecordExportConfig(FrigateBaseModel):
timelapse_args: str = Field(
default=DEFAULT_TIME_LAPSE_FFMPEG_ARGS, title="Timelapse Args"
hwaccel_args: Union[str, list[str]] = Field(
default="auto", title="Export-specific FFmpeg hardware acceleration arguments."
)
class RecordConfig(FrigateBaseModel):
enabled: bool = Field(default=False, title="Enable record on all cameras.")
sync_recordings: bool = Field(
default=False, title="Sync recordings with disk on startup and once a day."
)
expire_interval: int = Field(
default=60,
title="Number of minutes to wait between cleanup runs.",

View File

@@ -523,6 +523,14 @@ class FrigateConfig(FrigateBaseModel):
if camera_config.ffmpeg.hwaccel_args == "auto":
camera_config.ffmpeg.hwaccel_args = self.ffmpeg.hwaccel_args
# Resolve export hwaccel_args: camera export -> camera ffmpeg -> global ffmpeg
# This allows per-camera override for exports (e.g., when camera resolution
# exceeds hardware encoder limits)
if camera_config.record.export.hwaccel_args == "auto":
camera_config.record.export.hwaccel_args = (
camera_config.ffmpeg.hwaccel_args
)
for input in camera_config.ffmpeg.inputs:
need_detect_dimensions = "detect" in input.roles and (
camera_config.detect.height is None
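The resolution order described in the comment above can be restated as a small standalone sketch (not the actual config classes): per-camera export args win, then the camera's ffmpeg args, then the global ffmpeg args, with "auto" meaning "fall through".

def resolve_export_hwaccel(export_args, camera_ffmpeg_args, global_ffmpeg_args):
    # Illustrative restatement of the fallback chain applied above.
    if export_args != "auto":
        return export_args                 # explicit per-camera export override
    if camera_ffmpeg_args != "auto":
        return camera_ffmpeg_args          # camera-level ffmpeg setting
    return global_ffmpeg_args              # global default


assert resolve_export_hwaccel("auto", "auto", "-hwaccel vaapi") == "-hwaccel vaapi"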

View File

@@ -119,6 +119,7 @@ UPDATE_REVIEW_DESCRIPTION = "update_review_description"
UPDATE_MODEL_STATE = "update_model_state"
UPDATE_EMBEDDINGS_REINDEX_PROGRESS = "handle_embeddings_reindex_progress"
UPDATE_BIRDSEYE_LAYOUT = "update_birdseye_layout"
UPDATE_JOB_STATE = "update_job_state"
NOTIFICATION_TEST = "notification_test"
# IO Nice Values

View File

@@ -285,6 +285,64 @@ Guidelines:
"""Get the context window size for this provider in tokens."""
return 4096
def chat_with_tools(
self,
messages: list[dict[str, Any]],
tools: Optional[list[dict[str, Any]]] = None,
tool_choice: Optional[str] = "auto",
) -> dict[str, Any]:
"""
Send chat messages to LLM with optional tool definitions.
This method handles conversation-style interactions with the LLM,
including function calling/tool usage capabilities.
Args:
messages: List of message dictionaries. Each message should have:
- 'role': str - One of 'user', 'assistant', 'system', or 'tool'
- 'content': str - The message content
- 'tool_call_id': Optional[str] - For tool responses, the ID of the tool call
- 'name': Optional[str] - For tool messages, the tool name
tools: Optional list of tool definitions in OpenAI-compatible format.
Each tool should have 'type': 'function' and 'function' with:
- 'name': str - Tool name
- 'description': str - Tool description
- 'parameters': dict - JSON schema for parameters
tool_choice: How the model should handle tools:
- 'auto': Model decides whether to call tools
- 'none': Model must not call tools
- 'required': Model must call at least one tool
- Or a dict specifying a specific tool to call
**kwargs: Additional provider-specific parameters.
Returns:
Dictionary with:
- 'content': Optional[str] - The text response from the LLM, None if tool calls
- 'tool_calls': Optional[List[Dict]] - List of tool calls if LLM wants to call tools.
Each tool call dict has:
- 'id': str - Unique identifier for this tool call
- 'name': str - Tool name to call
- 'arguments': dict - Arguments for the tool call (parsed JSON)
- 'finish_reason': str - Reason generation stopped:
- 'stop': Normal completion
- 'tool_calls': LLM wants to call tools
- 'length': Hit token limit
- 'error': An error occurred
Raises:
NotImplementedError: If the provider doesn't implement this method.
"""
# Base implementation - each provider should override this
logger.warning(
f"{self.__class__.__name__} does not support chat_with_tools. "
"This method should be overridden by the provider implementation."
)
return {
"content": None,
"tool_calls": None,
"finish_reason": "error",
}
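To make the docstring above concrete, here is a hedged sketch of the OpenAI-compatible message and tool shapes it describes; "get_camera_status" is a hypothetical tool used only for illustration.

messages = [
    {"role": "system", "content": "You answer questions about a Frigate installation."},
    {"role": "user", "content": "Is the driveway camera online?"},
]

tools = [
    {
        "type": "function",
        "function": {
            "name": "get_camera_status",
            "description": "Return the online/offline status of a camera.",
            "parameters": {
                "type": "object",
                "properties": {
                    "camera": {"type": "string", "description": "Camera name"},
                },
                "required": ["camera"],
            },
        },
    }
]

# A provider implementation would be invoked roughly as:
# result = client.chat_with_tools(messages, tools=tools, tool_choice="auto")
# and would return {"content": ..., "tool_calls": ..., "finish_reason": ...}.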
def get_genai_client(config: FrigateConfig) -> Optional[GenAIClient]:
"""Get the GenAI client."""

View File

@@ -1,8 +1,9 @@
"""Azure OpenAI Provider for Frigate AI."""
import base64
import json
import logging
from typing import Optional
from typing import Any, Optional
from urllib.parse import parse_qs, urlparse
from openai import AzureOpenAI
@@ -75,3 +76,93 @@ class OpenAIClient(GenAIClient):
def get_context_size(self) -> int:
"""Get the context window size for Azure OpenAI."""
return 128000
def chat_with_tools(
self,
messages: list[dict[str, Any]],
tools: Optional[list[dict[str, Any]]] = None,
tool_choice: Optional[str] = "auto",
) -> dict[str, Any]:
try:
openai_tool_choice = None
if tool_choice:
if tool_choice == "none":
openai_tool_choice = "none"
elif tool_choice == "auto":
openai_tool_choice = "auto"
elif tool_choice == "required":
openai_tool_choice = "required"
request_params = {
"model": self.genai_config.model,
"messages": messages,
"timeout": self.timeout,
}
if tools:
request_params["tools"] = tools
if openai_tool_choice is not None:
request_params["tool_choice"] = openai_tool_choice
result = self.provider.chat.completions.create(**request_params)
if (
result is None
or not hasattr(result, "choices")
or len(result.choices) == 0
):
return {
"content": None,
"tool_calls": None,
"finish_reason": "error",
}
choice = result.choices[0]
message = choice.message
content = message.content.strip() if message.content else None
tool_calls = None
if message.tool_calls:
tool_calls = []
for tool_call in message.tool_calls:
try:
arguments = json.loads(tool_call.function.arguments)
except (json.JSONDecodeError, AttributeError) as e:
logger.warning(
f"Failed to parse tool call arguments: {e}, "
f"tool: {tool_call.function.name if hasattr(tool_call.function, 'name') else 'unknown'}"
)
arguments = {}
tool_calls.append(
{
"id": tool_call.id if hasattr(tool_call, "id") else "",
"name": tool_call.function.name
if hasattr(tool_call.function, "name")
else "",
"arguments": arguments,
}
)
finish_reason = "error"
if hasattr(choice, "finish_reason") and choice.finish_reason:
finish_reason = choice.finish_reason
elif tool_calls:
finish_reason = "tool_calls"
elif content:
finish_reason = "stop"
return {
"content": content,
"tool_calls": tool_calls,
"finish_reason": finish_reason,
}
except Exception as e:
logger.warning("Azure OpenAI returned an error: %s", str(e))
return {
"content": None,
"tool_calls": None,
"finish_reason": "error",
}

View File

@@ -1,7 +1,8 @@
"""Gemini Provider for Frigate AI."""
import json
import logging
from typing import Optional
from typing import Any, Optional
import google.generativeai as genai
from google.api_core.exceptions import GoogleAPICallError
@@ -58,3 +59,188 @@ class GeminiClient(GenAIClient):
"""Get the context window size for Gemini."""
# Gemini Pro Vision has a 1M token context window
return 1000000
def chat_with_tools(
self,
messages: list[dict[str, Any]],
tools: Optional[list[dict[str, Any]]] = None,
tool_choice: Optional[str] = "auto",
) -> dict[str, Any]:
try:
if tools:
function_declarations = []
for tool in tools:
if tool.get("type") == "function":
func_def = tool.get("function", {})
function_declarations.append(
genai.protos.FunctionDeclaration(
name=func_def.get("name"),
description=func_def.get("description"),
parameters=genai.protos.Schema(
type=genai.protos.Type.OBJECT,
properties={
prop_name: genai.protos.Schema(
type=_convert_json_type_to_gemini(
prop.get("type")
),
description=prop.get("description"),
)
for prop_name, prop in func_def.get(
"parameters", {}
)
.get("properties", {})
.items()
},
required=func_def.get("parameters", {}).get(
"required", []
),
),
)
)
tool_config = genai.protos.Tool(
function_declarations=function_declarations
)
if tool_choice == "none":
function_calling_config = genai.protos.FunctionCallingConfig(
mode=genai.protos.FunctionCallingConfig.Mode.NONE
)
elif tool_choice == "required":
function_calling_config = genai.protos.FunctionCallingConfig(
mode=genai.protos.FunctionCallingConfig.Mode.ANY
)
else:
function_calling_config = genai.protos.FunctionCallingConfig(
mode=genai.protos.FunctionCallingConfig.Mode.AUTO
)
else:
tool_config = None
function_calling_config = None
contents = []
for msg in messages:
role = msg.get("role")
content = msg.get("content", "")
if role == "system":
continue
elif role == "user":
contents.append({"role": "user", "parts": [content]})
elif role == "assistant":
parts = [content] if content else []
if "tool_calls" in msg:
for tc in msg["tool_calls"]:
parts.append(
genai.protos.FunctionCall(
name=tc["function"]["name"],
args=json.loads(tc["function"]["arguments"]),
)
)
contents.append({"role": "model", "parts": parts})
elif role == "tool":
tool_name = msg.get("name", "")
tool_result = (
json.loads(content) if isinstance(content, str) else content
)
contents.append(
{
"role": "function",
"parts": [
genai.protos.FunctionResponse(
name=tool_name,
response=tool_result,
)
],
}
)
generation_config = genai.types.GenerationConfig(
candidate_count=1,
)
if function_calling_config:
generation_config.function_calling_config = function_calling_config
response = self.provider.generate_content(
contents,
tools=[tool_config] if tool_config else None,
generation_config=generation_config,
request_options=genai.types.RequestOptions(timeout=self.timeout),
)
content = None
tool_calls = None
if response.candidates and response.candidates[0].content:
parts = response.candidates[0].content.parts
text_parts = [p.text for p in parts if hasattr(p, "text") and p.text]
if text_parts:
content = " ".join(text_parts).strip()
function_calls = [
p.function_call
for p in parts
if hasattr(p, "function_call") and p.function_call
]
if function_calls:
tool_calls = []
for fc in function_calls:
tool_calls.append(
{
"id": f"call_{hash(fc.name)}",
"name": fc.name,
"arguments": dict(fc.args)
if hasattr(fc, "args")
else {},
}
)
finish_reason = "error"
if response.candidates:
finish_reason_map = {
genai.types.FinishReason.STOP: "stop",
genai.types.FinishReason.MAX_TOKENS: "length",
genai.types.FinishReason.SAFETY: "stop",
genai.types.FinishReason.RECITATION: "stop",
genai.types.FinishReason.OTHER: "error",
}
finish_reason = finish_reason_map.get(
response.candidates[0].finish_reason, "error"
)
elif tool_calls:
finish_reason = "tool_calls"
elif content:
finish_reason = "stop"
return {
"content": content,
"tool_calls": tool_calls,
"finish_reason": finish_reason,
}
except GoogleAPICallError as e:
logger.warning("Gemini returned an error: %s", str(e))
return {
"content": None,
"tool_calls": None,
"finish_reason": "error",
}
except Exception as e:
logger.warning("Unexpected error in Gemini chat_with_tools: %s", str(e))
return {
"content": None,
"tool_calls": None,
"finish_reason": "error",
}
def _convert_json_type_to_gemini(json_type: str) -> genai.protos.Type:
type_map = {
"string": genai.protos.Type.STRING,
"integer": genai.protos.Type.INTEGER,
"number": genai.protos.Type.NUMBER,
"boolean": genai.protos.Type.BOOLEAN,
"array": genai.protos.Type.ARRAY,
"object": genai.protos.Type.OBJECT,
}
return type_map.get(json_type, genai.protos.Type.STRING)

frigate/genai/llama_cpp.py (new file, 231 lines)
View File

@@ -0,0 +1,231 @@
"""llama.cpp Provider for Frigate AI."""
import base64
import json
import logging
from typing import Any, Optional
import requests
from frigate.config import GenAIProviderEnum
from frigate.genai import GenAIClient, register_genai_provider
logger = logging.getLogger(__name__)
@register_genai_provider(GenAIProviderEnum.llamacpp)
class LlamaCppClient(GenAIClient):
"""Generative AI client for Frigate using llama.cpp server."""
LOCAL_OPTIMIZED_OPTIONS = {
"temperature": 0.7,
"repeat_penalty": 1.05,
"top_p": 0.8,
}
provider: str # base_url
provider_options: dict[str, Any]
def _init_provider(self):
"""Initialize the client."""
self.provider_options = {
**self.LOCAL_OPTIMIZED_OPTIONS,
**self.genai_config.provider_options,
}
return (
self.genai_config.base_url.rstrip("/")
if self.genai_config.base_url
else None
)
def _send(self, prompt: str, images: list[bytes]) -> Optional[str]:
"""Submit a request to llama.cpp server."""
if self.provider is None:
logger.warning(
"llama.cpp provider has not been initialized, a description will not be generated. Check your llama.cpp configuration."
)
return None
try:
content = []
for image in images:
encoded_image = base64.b64encode(image).decode("utf-8")
content.append(
{
"type": "image_url",
"image_url": {
"url": f"data:image/jpeg;base64,{encoded_image}",
},
}
)
content.append(
{
"type": "text",
"text": prompt,
}
)
# Build request payload with llama.cpp native options
payload = {
"messages": [
{
"role": "user",
"content": content,
},
],
**self.provider_options,
}
response = requests.post(
f"{self.provider}/v1/chat/completions",
json=payload,
timeout=self.timeout,
)
response.raise_for_status()
result = response.json()
if (
result is not None
and "choices" in result
and len(result["choices"]) > 0
):
choice = result["choices"][0]
if "message" in choice and "content" in choice["message"]:
return choice["message"]["content"].strip()
return None
except Exception as e:
logger.warning("llama.cpp returned an error: %s", str(e))
return None
def get_context_size(self) -> int:
"""Get the context window size for llama.cpp."""
return self.genai_config.provider_options.get("context_size", 4096)
def chat_with_tools(
self,
messages: list[dict[str, Any]],
tools: Optional[list[dict[str, Any]]] = None,
tool_choice: Optional[str] = "auto",
) -> dict[str, Any]:
"""
Send chat messages to llama.cpp server with optional tool definitions.
Uses the OpenAI-compatible endpoint but passes through all native llama.cpp
parameters (like slot_id, temperature, etc.) via provider_options.
"""
if self.provider is None:
logger.warning(
"llama.cpp provider has not been initialized. Check your llama.cpp configuration."
)
return {
"content": None,
"tool_calls": None,
"finish_reason": "error",
}
try:
openai_tool_choice = None
if tool_choice:
if tool_choice == "none":
openai_tool_choice = "none"
elif tool_choice == "auto":
openai_tool_choice = "auto"
elif tool_choice == "required":
openai_tool_choice = "required"
payload = {
"messages": messages,
}
if tools:
payload["tools"] = tools
if openai_tool_choice is not None:
payload["tool_choice"] = openai_tool_choice
provider_opts = {
k: v for k, v in self.provider_options.items() if k != "context_size"
}
payload.update(provider_opts)
response = requests.post(
f"{self.provider}/v1/chat/completions",
json=payload,
timeout=self.timeout,
)
response.raise_for_status()
result = response.json()
if result is None or "choices" not in result or len(result["choices"]) == 0:
return {
"content": None,
"tool_calls": None,
"finish_reason": "error",
}
choice = result["choices"][0]
message = choice.get("message", {})
content = message.get("content")
if content:
content = content.strip()
else:
content = None
tool_calls = None
if "tool_calls" in message and message["tool_calls"]:
tool_calls = []
for tool_call in message["tool_calls"]:
try:
function_data = tool_call.get("function", {})
arguments_str = function_data.get("arguments", "{}")
arguments = json.loads(arguments_str)
except (json.JSONDecodeError, KeyError, TypeError) as e:
logger.warning(
f"Failed to parse tool call arguments: {e}, "
f"tool: {function_data.get('name', 'unknown')}"
)
arguments = {}
tool_calls.append(
{
"id": tool_call.get("id", ""),
"name": function_data.get("name", ""),
"arguments": arguments,
}
)
finish_reason = "error"
if "finish_reason" in choice and choice["finish_reason"]:
finish_reason = choice["finish_reason"]
elif tool_calls:
finish_reason = "tool_calls"
elif content:
finish_reason = "stop"
return {
"content": content,
"tool_calls": tool_calls,
"finish_reason": finish_reason,
}
except requests.exceptions.Timeout as e:
logger.warning("llama.cpp request timed out: %s", str(e))
return {
"content": None,
"tool_calls": None,
"finish_reason": "error",
}
except requests.exceptions.RequestException as e:
logger.warning("llama.cpp returned an error: %s", str(e))
return {
"content": None,
"tool_calls": None,
"finish_reason": "error",
}
except Exception as e:
logger.warning("Unexpected error in llama.cpp chat_with_tools: %s", str(e))
return {
"content": None,
"tool_calls": None,
"finish_reason": "error",
}
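For reference, the request this client builds can be reproduced directly against a llama.cpp server's OpenAI-compatible endpoint. A minimal sketch, assuming a server at http://localhost:8080 and an illustrative tool definition (neither is part of Frigate's code):
import requests

payload = {
    "messages": [{"role": "user", "content": "Is anyone at the front door?"}],
    "tools": [
        {
            "type": "function",
            "function": {
                "name": "get_camera_status",  # hypothetical tool
                "description": "Return the current status of a camera.",
                "parameters": {
                    "type": "object",
                    "properties": {"camera": {"type": "string"}},
                    "required": ["camera"],
                },
            },
        }
    ],
    "tool_choice": "auto",
    "temperature": 0.7,  # native sampling options ride along, as provider_options do above
}

response = requests.post("http://localhost:8080/v1/chat/completions", json=payload, timeout=60)
print(response.json()["choices"][0]["message"])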

View File

@@ -1,5 +1,6 @@
"""Ollama Provider for Frigate AI."""
import json
import logging
from typing import Any, Optional
@@ -77,3 +78,120 @@ class OllamaClient(GenAIClient):
return self.genai_config.provider_options.get("options", {}).get(
"num_ctx", 4096
)
def chat_with_tools(
self,
messages: list[dict[str, Any]],
tools: Optional[list[dict[str, Any]]] = None,
tool_choice: Optional[str] = "auto",
) -> dict[str, Any]:
if self.provider is None:
logger.warning(
"Ollama provider has not been initialized. Check your Ollama configuration."
)
return {
"content": None,
"tool_calls": None,
"finish_reason": "error",
}
try:
request_messages = []
for msg in messages:
msg_dict = {
"role": msg.get("role"),
"content": msg.get("content", ""),
}
if msg.get("tool_call_id"):
msg_dict["tool_call_id"] = msg["tool_call_id"]
if msg.get("name"):
msg_dict["name"] = msg["name"]
if msg.get("tool_calls"):
msg_dict["tool_calls"] = msg["tool_calls"]
request_messages.append(msg_dict)
request_params = {
"model": self.genai_config.model,
"messages": request_messages,
}
if tools:
request_params["tools"] = tools
if tool_choice:
if tool_choice == "none":
request_params["tool_choice"] = "none"
elif tool_choice == "required":
request_params["tool_choice"] = "required"
elif tool_choice == "auto":
request_params["tool_choice"] = "auto"
request_params.update(self.provider_options)
response = self.provider.chat(**request_params)
if not response or "message" not in response:
return {
"content": None,
"tool_calls": None,
"finish_reason": "error",
}
message = response["message"]
content = (
message.get("content", "").strip() if message.get("content") else None
)
tool_calls = None
if "tool_calls" in message and message["tool_calls"]:
tool_calls = []
for tool_call in message["tool_calls"]:
try:
function_data = tool_call.get("function", {})
arguments_str = function_data.get("arguments", "{}")
arguments = json.loads(arguments_str)
except (json.JSONDecodeError, KeyError, TypeError) as e:
logger.warning(
f"Failed to parse tool call arguments: {e}, "
f"tool: {function_data.get('name', 'unknown')}"
)
arguments = {}
tool_calls.append(
{
"id": tool_call.get("id", ""),
"name": function_data.get("name", ""),
"arguments": arguments,
}
)
finish_reason = "error"
if "done" in response and response["done"]:
if tool_calls:
finish_reason = "tool_calls"
elif content:
finish_reason = "stop"
elif tool_calls:
finish_reason = "tool_calls"
elif content:
finish_reason = "stop"
return {
"content": content,
"tool_calls": tool_calls,
"finish_reason": finish_reason,
}
except (TimeoutException, ResponseError, ConnectionError) as e:
logger.warning("Ollama returned an error: %s", str(e))
return {
"content": None,
"tool_calls": None,
"finish_reason": "error",
}
except Exception as e:
logger.warning("Unexpected error in Ollama chat_with_tools: %s", str(e))
return {
"content": None,
"tool_calls": None,
"finish_reason": "error",
}
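The pass-through of tool_call_id, name, and tool_calls above is what allows a second round after a tool has been executed. A sketch of the message history such a follow-up request might carry (all values are illustrative):
# Illustrative history for the round that follows a tool execution.
messages = [
    {"role": "user", "content": "Is anyone at the front door?"},
    {
        "role": "assistant",
        "content": "",
        "tool_calls": [
            {
                "id": "call_abc123",
                "function": {
                    "name": "get_camera_status",
                    "arguments": '{"camera": "front_door"}',
                },
            }
        ],
    },
    {
        "role": "tool",
        "tool_call_id": "call_abc123",
        "name": "get_camera_status",
        "content": '{"online": true, "person_detected": false}',
    },
]
# Passing this list back through chat_with_tools lets the model compose a final answer.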

View File

@@ -1,8 +1,9 @@
"""OpenAI Provider for Frigate AI."""
import base64
import json
import logging
from typing import Optional
from typing import Any, Optional
from httpx import TimeoutException
from openai import OpenAI
@@ -100,3 +101,113 @@ class OpenAIClient(GenAIClient):
f"Using default context size {self.context_size} for model {self.genai_config.model}"
)
return self.context_size
def chat_with_tools(
self,
messages: list[dict[str, Any]],
tools: Optional[list[dict[str, Any]]] = None,
tool_choice: Optional[str] = "auto",
) -> dict[str, Any]:
"""
Send chat messages to OpenAI with optional tool definitions.
Implements function calling/tool usage for OpenAI models.
"""
try:
openai_tool_choice = None
if tool_choice:
if tool_choice == "none":
openai_tool_choice = "none"
elif tool_choice == "auto":
openai_tool_choice = "auto"
elif tool_choice == "required":
openai_tool_choice = "required"
request_params = {
"model": self.genai_config.model,
"messages": messages,
"timeout": self.timeout,
}
if tools:
request_params["tools"] = tools
if openai_tool_choice is not None:
request_params["tool_choice"] = openai_tool_choice
if isinstance(self.genai_config.provider_options, dict):
excluded_options = {"context_size"}
provider_opts = {
k: v
for k, v in self.genai_config.provider_options.items()
if k not in excluded_options
}
request_params.update(provider_opts)
result = self.provider.chat.completions.create(**request_params)
if (
result is None
or not hasattr(result, "choices")
or len(result.choices) == 0
):
return {
"content": None,
"tool_calls": None,
"finish_reason": "error",
}
choice = result.choices[0]
message = choice.message
content = message.content.strip() if message.content else None
tool_calls = None
if message.tool_calls:
tool_calls = []
for tool_call in message.tool_calls:
try:
arguments = json.loads(tool_call.function.arguments)
except (json.JSONDecodeError, AttributeError) as e:
logger.warning(
f"Failed to parse tool call arguments: {e}, "
f"tool: {tool_call.function.name if hasattr(tool_call.function, 'name') else 'unknown'}"
)
arguments = {}
tool_calls.append(
{
"id": tool_call.id if hasattr(tool_call, "id") else "",
"name": tool_call.function.name
if hasattr(tool_call.function, "name")
else "",
"arguments": arguments,
}
)
finish_reason = "error"
if hasattr(choice, "finish_reason") and choice.finish_reason:
finish_reason = choice.finish_reason
elif tool_calls:
finish_reason = "tool_calls"
elif content:
finish_reason = "stop"
return {
"content": content,
"tool_calls": tool_calls,
"finish_reason": finish_reason,
}
except TimeoutException as e:
logger.warning("OpenAI request timed out: %s", str(e))
return {
"content": None,
"tool_calls": None,
"finish_reason": "error",
}
except Exception as e:
logger.warning("OpenAI returned an error: %s", str(e))
return {
"content": None,
"tool_calls": None,
"finish_reason": "error",
}
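Callers of any of these providers are expected to drive a short loop: send the conversation, execute whatever tools were requested, append the results, and ask again until finish_reason is no longer "tool_calls". A minimal sketch of that loop, with a hypothetical run_tool helper standing in for the real dispatcher:
import json


def run_tool(name: str, arguments: dict) -> str:
    # Hypothetical stand-in for the real tool dispatcher.
    return json.dumps({"ok": True})


def chat_until_done(client, messages: list, tools: list, max_rounds: int = 5):
    # client is any GenAIClient implementing chat_with_tools as shown above.
    for _ in range(max_rounds):
        result = client.chat_with_tools(messages, tools=tools)
        if result["finish_reason"] != "tool_calls" or not result["tool_calls"]:
            return result["content"]
        for call in result["tool_calls"]:
            # A real caller would also record the assistant's tool-call turn here.
            messages.append(
                {
                    "role": "tool",
                    "tool_call_id": call["id"],
                    "name": call["name"],
                    "content": run_tool(call["name"], call["arguments"]),
                }
            )
    return None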

0
frigate/jobs/__init__.py Normal file

21
frigate/jobs/job.py Normal file

@@ -0,0 +1,21 @@
"""Generic base class for long-running background jobs."""
import uuid
from dataclasses import asdict, dataclass, field
from typing import Any, Optional
@dataclass
class Job:
"""Base class for long-running background jobs."""
id: str = field(default_factory=lambda: str(uuid.uuid4())[:12])
job_type: str = "" # Must be set by subclasses
status: str = "queued" # queued, running, success, failed, cancelled
results: Optional[dict[str, Any]] = None
start_time: Optional[float] = None
end_time: Optional[float] = None
error_message: Optional[str] = None
def to_dict(self) -> dict[str, Any]:
"""Convert to dictionary for WebSocket transmission."""
return asdict(self)
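Subclasses only need to set job_type and declare their own parameters; id generation, status, timestamps, and serialization come from the base class. A minimal sketch using a hypothetical job type:
from dataclasses import dataclass

from frigate.jobs.job import Job


@dataclass
class ExampleExportJob(Job):
    job_type: str = "example_export"  # hypothetical job type for illustration
    camera: str = "front_door"


job = ExampleExportJob()
print(job.id, job.status)  # auto-generated 12-character id, "queued"
print(job.to_dict())       # dict suitable for WebSocket transmission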

70
frigate/jobs/manager.py Normal file

@@ -0,0 +1,70 @@
"""Generic job management for long-running background tasks."""
import threading
from typing import Optional
from frigate.jobs.job import Job
from frigate.types import JobStatusTypesEnum
# Global state and locks for enforcing single concurrent job per job type
_job_locks: dict[str, threading.Lock] = {}
_current_jobs: dict[str, Optional[Job]] = {}
# Keep completed jobs for retrieval, keyed by (job_type, job_id)
_completed_jobs: dict[tuple[str, str], Job] = {}
def _get_lock(job_type: str) -> threading.Lock:
"""Get or create a lock for the specified job type."""
if job_type not in _job_locks:
_job_locks[job_type] = threading.Lock()
return _job_locks[job_type]
def set_current_job(job: Job) -> None:
"""Set the current job for a given job type."""
lock = _get_lock(job.job_type)
with lock:
# Store the previous job if it was completed
old_job = _current_jobs.get(job.job_type)
if old_job and old_job.status in (
JobStatusTypesEnum.success,
JobStatusTypesEnum.failed,
JobStatusTypesEnum.cancelled,
):
_completed_jobs[(job.job_type, old_job.id)] = old_job
_current_jobs[job.job_type] = job
def clear_current_job(job_type: str, job_id: Optional[str] = None) -> None:
"""Clear the current job for a given job type, optionally checking the ID."""
lock = _get_lock(job_type)
with lock:
if job_type in _current_jobs:
current = _current_jobs[job_type]
if current is None or (job_id is None or current.id == job_id):
_current_jobs[job_type] = None
def get_current_job(job_type: str) -> Optional[Job]:
"""Get the current running/queued job for a given job type, if any."""
lock = _get_lock(job_type)
with lock:
return _current_jobs.get(job_type)
def get_job_by_id(job_type: str, job_id: str) -> Optional[Job]:
"""Get job by ID. Checks current job first, then completed jobs."""
lock = _get_lock(job_type)
with lock:
# Check if it's the current job
current = _current_jobs.get(job_type)
if current and current.id == job_id:
return current
# Check if it's a completed job
return _completed_jobs.get((job_type, job_id))
def job_is_running(job_type: str) -> bool:
"""Check if a job of the given type is currently running or queued."""
job = get_current_job(job_type)
return job is not None and job.status in ("queued", "running")
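The module enforces a single active job per job_type. A sketch of the typical sequence a runner or API handler would follow (the job type here is illustrative):
from frigate.jobs.job import Job
from frigate.jobs.manager import (
    clear_current_job,
    get_current_job,
    job_is_running,
    set_current_job,
)

job = Job(job_type="example_sync")  # hypothetical job type

if not job_is_running("example_sync"):
    set_current_job(job)  # becomes the tracked job for this type

assert get_current_job("example_sync") is job

job.status = "success"
clear_current_job("example_sync", job.id)  # frees the slot for the next job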

135
frigate/jobs/media_sync.py Normal file

@@ -0,0 +1,135 @@
"""Media sync job management with background execution."""
import logging
import threading
from dataclasses import dataclass, field
from datetime import datetime
from typing import Optional
from frigate.comms.inter_process import InterProcessRequestor
from frigate.const import UPDATE_JOB_STATE
from frigate.jobs.job import Job
from frigate.jobs.manager import (
get_current_job,
get_job_by_id,
job_is_running,
set_current_job,
)
from frigate.types import JobStatusTypesEnum
from frigate.util.media import sync_all_media
logger = logging.getLogger(__name__)
@dataclass
class MediaSyncJob(Job):
"""In-memory job state for media sync operations."""
job_type: str = "media_sync"
dry_run: bool = False
media_types: list[str] = field(default_factory=lambda: ["all"])
force: bool = False
class MediaSyncRunner(threading.Thread):
"""Thread-based runner for media sync jobs."""
def __init__(self, job: MediaSyncJob) -> None:
super().__init__(daemon=True, name="media_sync")
self.job = job
self.requestor = InterProcessRequestor()
def run(self) -> None:
"""Execute the media sync job and broadcast status updates."""
try:
# Update job status to running
self.job.status = JobStatusTypesEnum.running
self.job.start_time = datetime.now().timestamp()
self._broadcast_status()
# Execute sync with provided parameters
logger.debug(
f"Starting media sync job {self.job.id}: "
f"media_types={self.job.media_types}, "
f"dry_run={self.job.dry_run}, "
f"force={self.job.force}"
)
results = sync_all_media(
dry_run=self.job.dry_run,
media_types=self.job.media_types,
force=self.job.force,
)
# Store results and mark as complete
self.job.results = results.to_dict()
self.job.status = JobStatusTypesEnum.success
self.job.end_time = datetime.now().timestamp()
logger.debug(f"Media sync job {self.job.id} completed successfully")
self._broadcast_status()
except Exception as e:
logger.error(f"Media sync job {self.job.id} failed: {e}", exc_info=True)
self.job.status = JobStatusTypesEnum.failed
self.job.error_message = str(e)
self.job.end_time = datetime.now().timestamp()
self._broadcast_status()
finally:
if self.requestor:
self.requestor.stop()
def _broadcast_status(self) -> None:
"""Broadcast job status update via IPC to all WebSocket subscribers."""
try:
self.requestor.send_data(
UPDATE_JOB_STATE,
self.job.to_dict(),
)
except Exception as e:
logger.warning(f"Failed to broadcast media sync status: {e}")
def start_media_sync_job(
dry_run: bool = False,
media_types: Optional[list[str]] = None,
force: bool = False,
) -> Optional[str]:
"""Start a new media sync job if none is currently running.
Returns job ID on success, None if job already running.
"""
# Check if a job is already running
if job_is_running("media_sync"):
current = get_current_job("media_sync")
logger.warning(
f"Media sync job {current.id} is already running. Rejecting new request."
)
return None
# Create and start new job
job = MediaSyncJob(
dry_run=dry_run,
media_types=media_types or ["all"],
force=force,
)
logger.debug(f"Creating new media sync job: {job.id}")
set_current_job(job)
# Start the background runner
runner = MediaSyncRunner(job)
runner.start()
return job.id
def get_current_media_sync_job() -> Optional[MediaSyncJob]:
"""Get the current running/queued media sync job, if any."""
return get_current_job("media_sync")
def get_media_sync_job_by_id(job_id: str) -> Optional[MediaSyncJob]:
"""Get media sync job by ID. Currently only tracks the current job."""
return get_job_by_id("media_sync", job_id)
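API handlers are expected to go through these helpers rather than calling sync_all_media directly, so that progress is broadcast over the websocket and only one sync runs at a time. A sketch of kicking off a dry run; this assumes a running Frigate instance (the runner opens an InterProcessRequestor) and that "recordings" is an accepted media type:
from frigate.jobs.media_sync import (
    get_current_media_sync_job,
    start_media_sync_job,
)

# Dry run: report orphaned recordings without deleting anything.
job_id = start_media_sync_job(dry_run=True, media_types=["recordings"])

if job_id is None:
    print("a media sync job is already running")
else:
    job = get_current_media_sync_job()
    print(job.id, job.status)  # further updates arrive via UPDATE_JOB_STATE broadcasts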

View File

@@ -80,6 +80,14 @@ class Recordings(Model):
regions = IntegerField(null=True)
class ExportCase(Model):
id = CharField(null=False, primary_key=True, max_length=30)
name = CharField(index=True, max_length=100)
description = TextField(null=True)
created_at = DateTimeField()
updated_at = DateTimeField()
class Export(Model):
id = CharField(null=False, primary_key=True, max_length=30)
camera = CharField(index=True, max_length=20)
@@ -88,6 +96,12 @@ class Export(Model):
video_path = CharField(unique=True)
thumb_path = CharField(unique=True)
in_progress = BooleanField()
export_case = ForeignKeyField(
ExportCase,
null=True,
backref="exports",
column_name="export_case_id",
)
class ReviewSegment(Model):

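With the new foreign key, exports can be grouped under a case and queried through the exports backref. A sketch of the peewee queries this schema enables (assumes the database has been initialized by Frigate; the case id is illustrative):
from frigate.models import Export, ExportCase

case = ExportCase.get_or_none(ExportCase.id == "example_case_id")  # hypothetical id
if case is not None:
    for export in case.exports:  # backref defined on the foreign key above
        print(export.id, export.camera, export.video_path)

# Exports not attached to any case.
unassigned = Export.select().where(Export.export_case.is_null(True))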
View File

@@ -57,6 +57,51 @@ def get_cache_image_name(camera: str, frame_time: float) -> str:
)
def get_most_recent_preview_frame(camera: str, before: float | None = None) -> str | None:
"""Get the most recent preview frame for a camera."""
if not os.path.exists(PREVIEW_CACHE_DIR):
return None
try:
# files are named preview_{camera}-{timestamp}.webp
# we want the largest timestamp that is less than or equal to before
preview_files = [
f
for f in os.listdir(PREVIEW_CACHE_DIR)
if f.startswith(f"preview_{camera}-")
and f.endswith(f".{PREVIEW_FRAME_TYPE}")
]
if not preview_files:
return None
# sort by timestamp in descending order
# filenames are like preview_front-1712345678.901234.webp
preview_files.sort(reverse=True)
if before is None:
return os.path.join(PREVIEW_CACHE_DIR, preview_files[0])
for file_name in preview_files:
try:
# Extract timestamp: preview_front-1712345678.901234.webp
# Split by dash and extension
timestamp_part = file_name.split("-")[-1].split(
f".{PREVIEW_FRAME_TYPE}"
)[0]
timestamp = float(timestamp_part)
if timestamp <= before:
return os.path.join(PREVIEW_CACHE_DIR, file_name)
except (ValueError, IndexError):
continue
return None
except Exception as e:
logger.error(f"Error searching for most recent preview frame: {e}")
return None
class FFMpegConverter(threading.Thread):
"""Convert a list of still frames into a vfr mp4."""

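This helper is what the latest-image endpoint uses to fall back to a cached preview frame when the camera is offline or its frame is stale. A minimal sketch of calling it directly (camera name and cutoff are illustrative):
import time

from frigate.output.preview import get_most_recent_preview_frame

# Most recent cached preview frame for the camera, regardless of age.
latest = get_most_recent_preview_frame("front_door")

# Most recent frame captured at or before a specific point in time.
recent = get_most_recent_preview_frame("front_door", before=time.time() - 60)

print(latest, recent)  # paths under PREVIEW_CACHE_DIR, or None when nothing matches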
View File

@@ -13,9 +13,8 @@ from playhouse.sqlite_ext import SqliteExtDatabase
from frigate.config import CameraConfig, FrigateConfig, RetainModeEnum
from frigate.const import CACHE_DIR, CLIPS_DIR, MAX_WAL_SIZE, RECORD_DIR
from frigate.models import Previews, Recordings, ReviewSegment, UserReviewStatus
from frigate.record.util import remove_empty_directories, sync_recordings
from frigate.util.builtin import clear_and_unlink
from frigate.util.time import get_tomorrow_at_time
from frigate.util.media import remove_empty_directories
logger = logging.getLogger(__name__)
@@ -61,7 +60,7 @@ class RecordingCleanup(threading.Thread):
db.execute_sql("PRAGMA wal_checkpoint(TRUNCATE);")
db.close()
def expire_review_segments(self, config: CameraConfig, now: datetime) -> None:
def expire_review_segments(self, config: CameraConfig, now: datetime) -> set[Path]:
"""Delete review segments that are expired"""
alert_expire_date = (
now - datetime.timedelta(days=config.record.alerts.retain.days)
@@ -85,9 +84,12 @@ class RecordingCleanup(threading.Thread):
.namedtuples()
)
maybe_empty_dirs = set()
thumbs_to_delete = list(map(lambda x: x[1], expired_reviews))
for thumb_path in thumbs_to_delete:
Path(thumb_path).unlink(missing_ok=True)
thumb_path = Path(thumb_path)
thumb_path.unlink(missing_ok=True)
maybe_empty_dirs.add(thumb_path.parent)
max_deletes = 100000
deleted_reviews_list = list(map(lambda x: x[0], expired_reviews))
@@ -100,13 +102,15 @@ class RecordingCleanup(threading.Thread):
<< deleted_reviews_list[i : i + max_deletes]
).execute()
return maybe_empty_dirs
def expire_existing_camera_recordings(
self,
continuous_expire_date: float,
motion_expire_date: float,
config: CameraConfig,
reviews: ReviewSegment,
) -> None:
) -> set[Path]:
"""Delete recordings for existing camera based on retention config."""
# Get the timestamp for cutoff of retained days
@@ -135,6 +139,8 @@ class RecordingCleanup(threading.Thread):
.iterator()
)
maybe_empty_dirs = set()
# loop over recordings and see if they overlap with any non-expired reviews
# TODO: expire segments based on segment stats according to config
review_start = 0
@@ -188,8 +194,10 @@ class RecordingCleanup(threading.Thread):
)
or (mode == RetainModeEnum.active_objects and recording.objects == 0)
):
Path(recording.path).unlink(missing_ok=True)
recording_path = Path(recording.path)
recording_path.unlink(missing_ok=True)
deleted_recordings.add(recording.id)
maybe_empty_dirs.add(recording_path.parent)
else:
kept_recordings.append((recording.start_time, recording.end_time))
@@ -250,8 +258,10 @@ class RecordingCleanup(threading.Thread):
# Delete previews without any relevant recordings
if not keep:
Path(preview.path).unlink(missing_ok=True)
preview_path = Path(preview.path)
preview_path.unlink(missing_ok=True)
deleted_previews.add(preview.id)
maybe_empty_dirs.add(preview_path.parent)
# expire previews
logger.debug(f"Expiring {len(deleted_previews)} previews")
@@ -263,7 +273,9 @@ class RecordingCleanup(threading.Thread):
Previews.id << deleted_previews_list[i : i + max_deletes]
).execute()
def expire_recordings(self) -> None:
return maybe_empty_dirs
def expire_recordings(self) -> set[Path]:
"""Delete recordings based on retention config."""
logger.debug("Start expire recordings.")
logger.debug("Start deleted cameras.")
@@ -288,10 +300,14 @@ class RecordingCleanup(threading.Thread):
.iterator()
)
maybe_empty_dirs = set()
deleted_recordings = set()
for recording in no_camera_recordings:
Path(recording.path).unlink(missing_ok=True)
recording_path = Path(recording.path)
recording_path.unlink(missing_ok=True)
deleted_recordings.add(recording.id)
maybe_empty_dirs.add(recording_path.parent)
logger.debug(f"Expiring {len(deleted_recordings)} recordings")
# delete up to 100,000 at a time
@@ -308,7 +324,7 @@ class RecordingCleanup(threading.Thread):
logger.debug(f"Start camera: {camera}.")
now = datetime.datetime.now()
self.expire_review_segments(config, now)
maybe_empty_dirs |= self.expire_review_segments(config, now)
continuous_expire_date = (
now - datetime.timedelta(days=config.record.continuous.days)
).timestamp()
@@ -338,7 +354,7 @@ class RecordingCleanup(threading.Thread):
.namedtuples()
)
self.expire_existing_camera_recordings(
maybe_empty_dirs |= self.expire_existing_camera_recordings(
continuous_expire_date, motion_expire_date, config, reviews
)
logger.debug(f"End camera: {camera}.")
@@ -346,12 +362,9 @@ class RecordingCleanup(threading.Thread):
logger.debug("End all cameras.")
logger.debug("End expire recordings.")
def run(self) -> None:
# on startup sync recordings with disk if enabled
if self.config.record.sync_recordings:
sync_recordings(limited=False)
next_sync = get_tomorrow_at_time(3)
return maybe_empty_dirs
def run(self) -> None:
# Expire tmp clips every minute, recordings and clean directories every hour.
for counter in itertools.cycle(range(self.config.record.expire_interval)):
if self.stop_event.wait(60):
@@ -360,16 +373,8 @@ class RecordingCleanup(threading.Thread):
self.clean_tmp_previews()
if (
self.config.record.sync_recordings
and datetime.datetime.now().astimezone(datetime.timezone.utc)
> next_sync
):
sync_recordings(limited=True)
next_sync = get_tomorrow_at_time(3)
if counter == 0:
self.clean_tmp_clips()
self.expire_recordings()
remove_empty_directories(RECORD_DIR)
maybe_empty_dirs = self.expire_recordings()
remove_empty_directories(Path(RECORD_DIR), maybe_empty_dirs)
self.truncate_wal()
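Instead of walking the whole recordings tree, cleanup now collects the parent directory of each deleted file and hands only those candidates to the helper, which also climbs toward the root. A sketch of that call pattern in isolation (directory names are illustrative):
from pathlib import Path

from frigate.const import RECORD_DIR
from frigate.util.media import remove_empty_directories

# Parents of files deleted during this cleanup pass (illustrative values).
maybe_empty_dirs = {
    Path(RECORD_DIR) / "2026-01-20" / "10" / "front_door",
    Path(RECORD_DIR) / "2026-01-20" / "11" / "front_door",
}

# Removes each candidate if empty, then its parents, stopping at RECORD_DIR itself.
remove_empty_directories(Path(RECORD_DIR), maybe_empty_dirs)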

View File

@@ -33,6 +33,7 @@ from frigate.util.time import is_current_hour
logger = logging.getLogger(__name__)
DEFAULT_TIME_LAPSE_FFMPEG_ARGS = "-vf setpts=0.04*PTS -r 30"
TIMELAPSE_DATA_INPUT_ARGS = "-an -skip_frame nokey"
@@ -40,11 +41,6 @@ def lower_priority():
os.nice(PROCESS_PRIORITY_LOW)
class PlaybackFactorEnum(str, Enum):
realtime = "realtime"
timelapse_25x = "timelapse_25x"
class PlaybackSourceEnum(str, Enum):
recordings = "recordings"
preview = "preview"
@@ -62,8 +58,11 @@ class RecordingExporter(threading.Thread):
image: Optional[str],
start_time: int,
end_time: int,
playback_factor: PlaybackFactorEnum,
playback_source: PlaybackSourceEnum,
export_case_id: Optional[str] = None,
ffmpeg_input_args: Optional[str] = None,
ffmpeg_output_args: Optional[str] = None,
cpu_fallback: bool = False,
) -> None:
super().__init__()
self.config = config
@@ -73,8 +72,11 @@ class RecordingExporter(threading.Thread):
self.user_provided_image = image
self.start_time = start_time
self.end_time = end_time
self.playback_factor = playback_factor
self.playback_source = playback_source
self.export_case_id = export_case_id
self.ffmpeg_input_args = ffmpeg_input_args
self.ffmpeg_output_args = ffmpeg_output_args
self.cpu_fallback = cpu_fallback
# ensure export thumb dir
Path(os.path.join(CLIPS_DIR, "export")).mkdir(exist_ok=True)
@@ -179,7 +181,9 @@ class RecordingExporter(threading.Thread):
return thumb_path
def get_record_export_command(self, video_path: str) -> list[str]:
def get_record_export_command(
self, video_path: str, use_hwaccel: bool = True
) -> list[str]:
if (self.end_time - self.start_time) <= MAX_PLAYLIST_SECONDS:
playlist_lines = f"http://127.0.0.1:5000/vod/{self.camera}/start/{self.start_time}/end/{self.end_time}/index.m3u8"
ffmpeg_input = (
@@ -218,20 +222,25 @@ class RecordingExporter(threading.Thread):
ffmpeg_input = "-y -protocol_whitelist pipe,file,http,tcp -f concat -safe 0 -i /dev/stdin"
if self.playback_factor == PlaybackFactorEnum.realtime:
ffmpeg_cmd = (
f"{self.config.ffmpeg.ffmpeg_path} -hide_banner {ffmpeg_input} -c copy -movflags +faststart"
).split(" ")
elif self.playback_factor == PlaybackFactorEnum.timelapse_25x:
if self.ffmpeg_input_args is not None and self.ffmpeg_output_args is not None:
hwaccel_args = (
self.config.cameras[self.camera].record.export.hwaccel_args
if use_hwaccel
else None
)
ffmpeg_cmd = (
parse_preset_hardware_acceleration_encode(
self.config.ffmpeg.ffmpeg_path,
self.config.ffmpeg.hwaccel_args,
f"-an {ffmpeg_input}",
f"{self.config.cameras[self.camera].record.export.timelapse_args} -movflags +faststart",
hwaccel_args,
f"{self.ffmpeg_input_args} -an {ffmpeg_input}".strip(),
f"{self.ffmpeg_output_args} -movflags +faststart".strip(),
EncodeTypeEnum.timelapse,
)
).split(" ")
else:
ffmpeg_cmd = (
f"{self.config.ffmpeg.ffmpeg_path} -hide_banner {ffmpeg_input} -c copy -movflags +faststart"
).split(" ")
# add metadata
title = f"Frigate Recording for {self.camera}, {self.get_datetime_from_timestamp(self.start_time)} - {self.get_datetime_from_timestamp(self.end_time)}"
@@ -241,7 +250,9 @@ class RecordingExporter(threading.Thread):
return ffmpeg_cmd, playlist_lines
def get_preview_export_command(self, video_path: str) -> list[str]:
def get_preview_export_command(
self, video_path: str, use_hwaccel: bool = True
) -> list[str]:
playlist_lines = []
codec = "-c copy"
@@ -309,20 +320,25 @@ class RecordingExporter(threading.Thread):
"-y -protocol_whitelist pipe,file,tcp -f concat -safe 0 -i /dev/stdin"
)
if self.playback_factor == PlaybackFactorEnum.realtime:
ffmpeg_cmd = (
f"{self.config.ffmpeg.ffmpeg_path} -hide_banner {ffmpeg_input} {codec} -movflags +faststart {video_path}"
).split(" ")
elif self.playback_factor == PlaybackFactorEnum.timelapse_25x:
if self.ffmpeg_input_args is not None and self.ffmpeg_output_args is not None:
hwaccel_args = (
self.config.cameras[self.camera].record.export.hwaccel_args
if use_hwaccel
else None
)
ffmpeg_cmd = (
parse_preset_hardware_acceleration_encode(
self.config.ffmpeg.ffmpeg_path,
self.config.ffmpeg.hwaccel_args,
f"{TIMELAPSE_DATA_INPUT_ARGS} {ffmpeg_input}",
f"{self.config.cameras[self.camera].record.export.timelapse_args} -movflags +faststart {video_path}",
hwaccel_args,
f"{self.ffmpeg_input_args} {TIMELAPSE_DATA_INPUT_ARGS} {ffmpeg_input}".strip(),
f"{self.ffmpeg_output_args} -movflags +faststart {video_path}".strip(),
EncodeTypeEnum.timelapse,
)
).split(" ")
else:
ffmpeg_cmd = (
f"{self.config.ffmpeg.ffmpeg_path} -hide_banner {ffmpeg_input} {codec} -movflags +faststart {video_path}"
).split(" ")
# add metadata
title = f"Frigate Preview for {self.camera}, {self.get_datetime_from_timestamp(self.start_time)} - {self.get_datetime_from_timestamp(self.end_time)}"
@@ -348,17 +364,20 @@ class RecordingExporter(threading.Thread):
video_path = f"{EXPORT_DIR}/{self.camera}_{filename_start_datetime}-{filename_end_datetime}_{cleaned_export_id}.mp4"
thumb_path = self.save_thumbnail(self.export_id)
Export.insert(
{
Export.id: self.export_id,
Export.camera: self.camera,
Export.name: export_name,
Export.date: self.start_time,
Export.video_path: video_path,
Export.thumb_path: thumb_path,
Export.in_progress: True,
}
).execute()
export_values = {
Export.id: self.export_id,
Export.camera: self.camera,
Export.name: export_name,
Export.date: self.start_time,
Export.video_path: video_path,
Export.thumb_path: thumb_path,
Export.in_progress: True,
}
if self.export_case_id is not None:
export_values[Export.export_case] = self.export_case_id
Export.insert(export_values).execute()
try:
if self.playback_source == PlaybackSourceEnum.recordings:
@@ -376,6 +395,34 @@ class RecordingExporter(threading.Thread):
capture_output=True,
)
# If export failed and cpu_fallback is enabled, retry without hwaccel
if (
p.returncode != 0
and self.cpu_fallback
and self.ffmpeg_input_args is not None
and self.ffmpeg_output_args is not None
):
logger.warning(
f"Export with hardware acceleration failed, retrying without hwaccel for {self.export_id}"
)
if self.playback_source == PlaybackSourceEnum.recordings:
ffmpeg_cmd, playlist_lines = self.get_record_export_command(
video_path, use_hwaccel=False
)
else:
ffmpeg_cmd, playlist_lines = self.get_preview_export_command(
video_path, use_hwaccel=False
)
p = sp.run(
ffmpeg_cmd,
input="\n".join(playlist_lines),
encoding="ascii",
preexec_fn=lower_priority,
capture_output=True,
)
if p.returncode != 0:
logger.error(
f"Failed to export {self.playback_source.value} for command {' '.join(ffmpeg_cmd)}"

View File

@@ -1,147 +0,0 @@
"""Recordings Utilities."""
import datetime
import logging
import os
from peewee import DatabaseError, chunked
from frigate.const import RECORD_DIR
from frigate.models import Recordings, RecordingsToDelete
logger = logging.getLogger(__name__)
def remove_empty_directories(directory: str) -> None:
# list all directories recursively and sort them by path,
# longest first
paths = sorted(
[x[0] for x in os.walk(directory)],
key=lambda p: len(str(p)),
reverse=True,
)
for path in paths:
# don't delete the parent
if path == directory:
continue
if len(os.listdir(path)) == 0:
os.rmdir(path)
def sync_recordings(limited: bool) -> None:
"""Check the db for stale recordings entries that don't exist in the filesystem."""
def delete_db_entries_without_file(check_timestamp: float) -> bool:
"""Delete db entries where file was deleted outside of frigate."""
if limited:
recordings = Recordings.select(Recordings.id, Recordings.path).where(
Recordings.start_time >= check_timestamp
)
else:
# get all recordings in the db
recordings = Recordings.select(Recordings.id, Recordings.path)
# Use pagination to process records in chunks
page_size = 1000
num_pages = (recordings.count() + page_size - 1) // page_size
recordings_to_delete = set()
for page in range(num_pages):
for recording in recordings.paginate(page, page_size):
if not os.path.exists(recording.path):
recordings_to_delete.add(recording.id)
if len(recordings_to_delete) == 0:
return True
logger.info(
f"Deleting {len(recordings_to_delete)} recording DB entries with missing files"
)
# convert back to list of dictionaries for insertion
recordings_to_delete = [
{"id": recording_id} for recording_id in recordings_to_delete
]
if float(len(recordings_to_delete)) / max(1, recordings.count()) > 0.5:
logger.warning(
f"Deleting {(len(recordings_to_delete) / max(1, recordings.count()) * 100):.2f}% of recordings DB entries, could be due to configuration error. Aborting..."
)
return False
# create a temporary table for deletion
RecordingsToDelete.create_table(temporary=True)
# insert ids to the temporary table
max_inserts = 1000
for batch in chunked(recordings_to_delete, max_inserts):
RecordingsToDelete.insert_many(batch).execute()
try:
# delete records in the main table that exist in the temporary table
query = Recordings.delete().where(
Recordings.id.in_(RecordingsToDelete.select(RecordingsToDelete.id))
)
query.execute()
except DatabaseError as e:
logger.error(f"Database error during recordings db cleanup: {e}")
return True
def delete_files_without_db_entry(files_on_disk: list[str]):
"""Delete files where file is not inside frigate db."""
files_to_delete = []
for file in files_on_disk:
if not Recordings.select().where(Recordings.path == file).exists():
files_to_delete.append(file)
if len(files_to_delete) == 0:
return True
logger.info(
f"Deleting {len(files_to_delete)} recordings files with missing DB entries"
)
if float(len(files_to_delete)) / max(1, len(files_on_disk)) > 0.5:
logger.debug(
f"Deleting {(len(files_to_delete) / max(1, len(files_on_disk)) * 100):.2f}% of recordings DB entries, could be due to configuration error. Aborting..."
)
return False
for file in files_to_delete:
os.unlink(file)
return True
logger.debug("Start sync recordings.")
# start checking on the hour 36 hours ago
check_point = datetime.datetime.now().replace(
minute=0, second=0, microsecond=0
).astimezone(datetime.timezone.utc) - datetime.timedelta(hours=36)
db_success = delete_db_entries_without_file(check_point.timestamp())
# only try to cleanup files if db cleanup was successful
if db_success:
if limited:
# get recording files from last 36 hours
hour_check = f"{RECORD_DIR}/{check_point.strftime('%Y-%m-%d/%H')}"
files_on_disk = {
os.path.join(root, file)
for root, _, files in os.walk(RECORD_DIR)
for file in files
if root > hour_check
}
else:
# get all recordings files on disk and put them in a set
files_on_disk = {
os.path.join(root, file)
for root, _, files in os.walk(RECORD_DIR)
for file in files
}
delete_files_without_db_entry(files_on_disk)
logger.debug("End sync recordings.")

View File

@@ -22,6 +22,7 @@ from frigate.util.services import (
get_bandwidth_stats,
get_cpu_stats,
get_fs_type,
get_hailo_temps,
get_intel_gpu_stats,
get_jetson_stats,
get_nvidia_gpu_stats,
@@ -91,9 +92,80 @@ def get_temperatures() -> dict[str, float]:
if temp is not None:
temps[apex] = temp
# Get temperatures for Hailo devices
temps.update(get_hailo_temps())
return temps
def get_detector_temperature(
detector_type: str,
detector_index_by_type: dict[str, int],
) -> Optional[float]:
"""Get temperature for a specific detector based on its type."""
if detector_type == "edgetpu":
# Get temperatures for all attached Corals
base = "/sys/class/apex/"
if os.path.isdir(base):
apex_devices = sorted(os.listdir(base))
index = detector_index_by_type.get("edgetpu", 0)
if index < len(apex_devices):
apex_name = apex_devices[index]
temp = read_temperature(os.path.join(base, apex_name, "temp"))
if temp is not None:
return temp
elif detector_type == "hailo8l":
# Get temperatures for Hailo devices
hailo_temps = get_hailo_temps()
if hailo_temps:
hailo_device_names = sorted(hailo_temps.keys())
index = detector_index_by_type.get("hailo8l", 0)
if index < len(hailo_device_names):
device_name = hailo_device_names[index]
return hailo_temps[device_name]
elif detector_type == "rknn":
# Rockchip temperatures are handled by the GPU / NPU stats
# as there are not detector specific temperatures
pass
return None
def get_detector_stats(
stats_tracking: StatsTrackingTypes,
) -> dict[str, dict[str, Any]]:
"""Get stats for all detectors, including temperatures based on detector type."""
detector_stats: dict[str, dict[str, Any]] = {}
detector_type_indices: dict[str, int] = {}
for name, detector in stats_tracking["detectors"].items():
pid = detector.detect_process.pid if detector.detect_process else None
detector_type = detector.detector_config.type
# Keep track of the index for each detector type to match temperatures correctly
current_index = detector_type_indices.get(detector_type, 0)
detector_type_indices[detector_type] = current_index + 1
detector_stat = {
"inference_speed": round(detector.avg_inference_speed.value * 1000, 2), # type: ignore[attr-defined]
# issue https://github.com/python/typeshed/issues/8799
# from mypy 0.981 onwards
"detection_start": detector.detection_start.value, # type: ignore[attr-defined]
# issue https://github.com/python/typeshed/issues/8799
# from mypy 0.981 onwards
"pid": pid,
}
temp = get_detector_temperature(detector_type, {detector_type: current_index})
if temp is not None:
detector_stat["temperature"] = round(temp, 1)
detector_stats[name] = detector_stat
return detector_stats
def get_processing_stats(
config: FrigateConfig, stats: dict[str, str], hwaccel_errors: list[str]
) -> None:
@@ -174,6 +246,7 @@ async def set_gpu_stats(
"mem": str(round(float(nvidia_usage[i]["mem"]), 2)) + "%",
"enc": str(round(float(nvidia_usage[i]["enc"]), 2)) + "%",
"dec": str(round(float(nvidia_usage[i]["dec"]), 2)) + "%",
"temp": str(nvidia_usage[i]["temp"]),
}
else:
@@ -279,6 +352,32 @@ def stats_snapshot(
if camera_stats.capture_process_pid.value
else None
)
# Calculate connection quality based on current state
# This is computed at stats-collection time so offline cameras
# correctly show as unusable rather than excellent
expected_fps = config.cameras[name].detect.fps
current_fps = camera_stats.camera_fps.value
reconnects = camera_stats.reconnects_last_hour.value
stalls = camera_stats.stalls_last_hour.value
if current_fps < 0.1:
quality_str = "unusable"
elif reconnects == 0 and current_fps >= 0.9 * expected_fps and stalls < 5:
quality_str = "excellent"
elif reconnects <= 2 and current_fps >= 0.6 * expected_fps:
quality_str = "fair"
elif reconnects > 10 or current_fps < 1.0 or stalls > 100:
quality_str = "unusable"
else:
quality_str = "poor"
connection_quality = {
"connection_quality": quality_str,
"expected_fps": expected_fps,
"reconnects_last_hour": reconnects,
"stalls_last_hour": stalls,
}
stats["cameras"][name] = {
"camera_fps": round(camera_stats.camera_fps.value, 2),
"process_fps": round(camera_stats.process_fps.value, 2),
@@ -290,20 +389,10 @@ def stats_snapshot(
"ffmpeg_pid": ffmpeg_pid,
"audio_rms": round(camera_stats.audio_rms.value, 4),
"audio_dBFS": round(camera_stats.audio_dBFS.value, 4),
**connection_quality,
}
stats["detectors"] = {}
for name, detector in stats_tracking["detectors"].items():
pid = detector.detect_process.pid if detector.detect_process else None
stats["detectors"][name] = {
"inference_speed": round(detector.avg_inference_speed.value * 1000, 2), # type: ignore[attr-defined]
# issue https://github.com/python/typeshed/issues/8799
# from mypy 0.981 onwards
"detection_start": detector.detection_start.value, # type: ignore[attr-defined]
# issue https://github.com/python/typeshed/issues/8799
# from mypy 0.981 onwards
"pid": pid,
}
stats["detectors"] = get_detector_stats(stats_tracking)
stats["camera_fps"] = round(total_camera_fps, 2)
stats["process_fps"] = round(total_process_fps, 2)
stats["skipped_fps"] = round(total_skipped_fps, 2)
@@ -389,7 +478,6 @@ def stats_snapshot(
"version": VERSION,
"latest_version": stats_tracking["latest_frigate_version"],
"storage": {},
"temperatures": get_temperatures(),
"last_updated": int(time.time()),
}
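Each per-camera entry in the snapshot therefore gains four connection-quality keys alongside the existing counters. An illustrative entry for one camera (values are made up; unrelated keys omitted):
# Illustrative per-camera stats entry (other existing keys omitted).
example_camera_entry = {
    "camera_fps": 5.0,
    "process_fps": 5.0,
    "ffmpeg_pid": 1234,
    "audio_rms": 0.0,
    "audio_dBFS": 0.0,
    # new fields merged in from connection_quality
    "connection_quality": "excellent",
    "expected_fps": 5,
    "reconnects_last_hour": 0,
    "stalls_last_hour": 1,
}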

View File

@@ -0,0 +1,107 @@
import os
import shutil
from unittest.mock import MagicMock
import cv2
import numpy as np
from frigate.output.preview import PREVIEW_CACHE_DIR, PREVIEW_FRAME_TYPE
from frigate.test.http_api.base_http_test import AuthTestClient, BaseTestHttp
class TestHttpLatestFrame(BaseTestHttp):
def setUp(self):
super().setUp([])
self.app = super().create_app()
self.app.detected_frames_processor = MagicMock()
if os.path.exists(PREVIEW_CACHE_DIR):
shutil.rmtree(PREVIEW_CACHE_DIR)
os.makedirs(PREVIEW_CACHE_DIR)
def tearDown(self):
if os.path.exists(PREVIEW_CACHE_DIR):
shutil.rmtree(PREVIEW_CACHE_DIR)
super().tearDown()
def test_latest_frame_fallback_to_preview(self):
camera = "front_door"
# 1. Mock frame processor to return None (simulating offline/missing frame)
self.app.detected_frames_processor.get_current_frame.return_value = None
# Return a timestamp that is after our dummy preview frame
self.app.detected_frames_processor.get_current_frame_time.return_value = (
1234567891.0
)
# 2. Create a dummy preview file
dummy_frame = np.zeros((180, 320, 3), np.uint8)
cv2.putText(
dummy_frame,
"PREVIEW",
(50, 50),
cv2.FONT_HERSHEY_SIMPLEX,
1,
(255, 255, 255),
2,
)
preview_path = os.path.join(
PREVIEW_CACHE_DIR, f"preview_{camera}-1234567890.0.{PREVIEW_FRAME_TYPE}"
)
cv2.imwrite(preview_path, dummy_frame)
with AuthTestClient(self.app) as client:
response = client.get(f"/{camera}/latest.webp")
assert response.status_code == 200
assert response.headers.get("X-Frigate-Offline") == "true"
# Verify we got an image (webp)
assert response.headers.get("content-type") == "image/webp"
def test_latest_frame_no_fallback_when_live(self):
camera = "front_door"
# 1. Mock frame processor to return a live frame
dummy_frame = np.zeros((180, 320, 3), np.uint8)
self.app.detected_frames_processor.get_current_frame.return_value = dummy_frame
self.app.detected_frames_processor.get_current_frame_time.return_value = (
2000000000.0 # Way in the future
)
with AuthTestClient(self.app) as client:
response = client.get(f"/{camera}/latest.webp")
assert response.status_code == 200
assert "X-Frigate-Offline" not in response.headers
def test_latest_frame_stale_falls_back_to_preview(self):
camera = "front_door"
# 1. Mock frame processor to return a stale frame
dummy_frame = np.zeros((180, 320, 3), np.uint8)
self.app.detected_frames_processor.get_current_frame.return_value = dummy_frame
# Return a timestamp that is after our dummy preview frame, but way in the past
self.app.detected_frames_processor.get_current_frame_time.return_value = 1000.0
# 2. Create a dummy preview file
preview_path = os.path.join(
PREVIEW_CACHE_DIR, f"preview_{camera}-999.0.{PREVIEW_FRAME_TYPE}"
)
cv2.imwrite(preview_path, dummy_frame)
with AuthTestClient(self.app) as client:
response = client.get(f"/{camera}/latest.webp")
assert response.status_code == 200
assert response.headers.get("X-Frigate-Offline") == "true"
def test_latest_frame_no_preview_found(self):
camera = "front_door"
# 1. Mock frame processor to return None
self.app.detected_frames_processor.get_current_frame.return_value = None
# 2. No preview file created
with AuthTestClient(self.app) as client:
response = client.get(f"/{camera}/latest.webp")
# With no live frame and no preview, the endpoint falls back to camera-error.jpg.
# That image may not exist in the test environment, in which case latest_frame
# returns 500 ("Unable to get valid frame"), so accept either outcome.
assert response.status_code in [200, 500]
assert "X-Frigate-Offline" not in response.headers

View File

@@ -0,0 +1,80 @@
import os
import shutil
import unittest
from frigate.output.preview import (
PREVIEW_CACHE_DIR,
PREVIEW_FRAME_TYPE,
get_most_recent_preview_frame,
)
class TestPreviewLoader(unittest.TestCase):
def setUp(self):
if os.path.exists(PREVIEW_CACHE_DIR):
shutil.rmtree(PREVIEW_CACHE_DIR)
os.makedirs(PREVIEW_CACHE_DIR)
def tearDown(self):
if os.path.exists(PREVIEW_CACHE_DIR):
shutil.rmtree(PREVIEW_CACHE_DIR)
def test_get_most_recent_preview_frame_missing(self):
self.assertIsNone(get_most_recent_preview_frame("test_camera"))
def test_get_most_recent_preview_frame_exists(self):
camera = "test_camera"
# create dummy preview files
for ts in ["1000.0", "2000.0", "1500.0"]:
with open(
os.path.join(
PREVIEW_CACHE_DIR, f"preview_{camera}-{ts}.{PREVIEW_FRAME_TYPE}"
),
"w",
) as f:
f.write(f"test_{ts}")
expected_path = os.path.join(
PREVIEW_CACHE_DIR, f"preview_{camera}-2000.0.{PREVIEW_FRAME_TYPE}"
)
self.assertEqual(get_most_recent_preview_frame(camera), expected_path)
def test_get_most_recent_preview_frame_before(self):
camera = "test_camera"
# create dummy preview files
for ts in ["1000.0", "2000.0"]:
with open(
os.path.join(
PREVIEW_CACHE_DIR, f"preview_{camera}-{ts}.{PREVIEW_FRAME_TYPE}"
),
"w",
) as f:
f.write(f"test_{ts}")
# Test finding frame before or at 1500
expected_path = os.path.join(
PREVIEW_CACHE_DIR, f"preview_{camera}-1000.0.{PREVIEW_FRAME_TYPE}"
)
self.assertEqual(
get_most_recent_preview_frame(camera, before=1500.0), expected_path
)
# Test finding frame before or at 999
self.assertIsNone(get_most_recent_preview_frame(camera, before=999.0))
def test_get_most_recent_preview_frame_other_camera(self):
camera = "test_camera"
other_camera = "other_camera"
with open(
os.path.join(
PREVIEW_CACHE_DIR, f"preview_{other_camera}-3000.0.{PREVIEW_FRAME_TYPE}"
),
"w",
) as f:
f.write("test")
self.assertIsNone(get_most_recent_preview_frame(camera))
def test_get_most_recent_preview_frame_no_directory(self):
shutil.rmtree(PREVIEW_CACHE_DIR)
self.assertIsNone(get_most_recent_preview_frame("test_camera"))

View File

@@ -26,6 +26,15 @@ class ModelStatusTypesEnum(str, Enum):
failed = "failed"
class JobStatusTypesEnum(str, Enum):
pending = "pending"
queued = "queued"
running = "running"
success = "success"
failed = "failed"
cancelled = "cancelled"
class TrackedObjectUpdateTypesEnum(str, Enum):
description = "description"
face = "face"

View File

@@ -13,7 +13,7 @@ from frigate.util.services import get_video_properties
logger = logging.getLogger(__name__)
CURRENT_CONFIG_VERSION = "0.17-0"
CURRENT_CONFIG_VERSION = "0.18-0"
DEFAULT_CONFIG_FILE = os.path.join(CONFIG_DIR, "config.yml")
@@ -98,6 +98,13 @@ def migrate_frigate_config(config_file: str):
yaml.dump(new_config, f)
previous_version = "0.17-0"
if previous_version < "0.18-0":
logger.info(f"Migrating frigate config from {previous_version} to 0.18-0...")
new_config = migrate_018_0(config)
with open(config_file, "w") as f:
yaml.dump(new_config, f)
previous_version = "0.18-0"
logger.info("Finished frigate config migration...")
@@ -427,6 +434,49 @@ def migrate_017_0(config: dict[str, dict[str, Any]]) -> dict[str, dict[str, Any]
return new_config
def migrate_018_0(config: dict[str, dict[str, Any]]) -> dict[str, dict[str, Any]]:
"""Handle migrating frigate config to 0.18-0"""
new_config = config.copy()
# Remove deprecated sync_recordings from global record config
if new_config.get("record", {}).get("sync_recordings") is not None:
del new_config["record"]["sync_recordings"]
# Remove deprecated timelapse_args from global record export config
if new_config.get("record", {}).get("export", {}).get("timelapse_args") is not None:
del new_config["record"]["export"]["timelapse_args"]
# Remove export section if empty
if "export" in new_config.get("record", {}) and not new_config["record"]["export"]:
del new_config["record"]["export"]
# Remove record section if empty
if "record" in new_config and not new_config["record"]:
del new_config["record"]
# Remove deprecated sync_recordings and timelapse_args from camera-specific record configs
for name, camera in config.get("cameras", {}).items():
camera_config: dict[str, dict[str, Any]] = camera.copy()
if camera_config.get("record", {}).get("sync_recordings") is not None:
del camera_config["record"]["sync_recordings"]
if (
camera_config.get("record", {}).get("export", {}).get("timelapse_args")
is not None
):
del camera_config["record"]["export"]["timelapse_args"]
# Remove export section if empty
if "export" in camera_config.get("record", {}) and not camera_config["record"]["export"]:
del camera_config["record"]["export"]
# Remove record section if empty
if "record" in camera_config and not camera_config["record"]:
del camera_config["record"]
new_config["cameras"][name] = camera_config
new_config["version"] = "0.18-0"
return new_config
def get_relative_coordinates(
mask: Optional[Union[str, list]], frame_shape: tuple[int, int]
) -> Union[str, list]:

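The effect of migrate_018_0 on a config that still carries the removed options can be sketched with plain dictionaries (keys taken from the code above; the rest of the config is omitted):
before = {
    "version": "0.17-0",
    "record": {
        "sync_recordings": True,
        "export": {"timelapse_args": "-vf setpts=0.04*PTS -r 30"},
    },
    "cameras": {
        "front_door": {
            "record": {
                "sync_recordings": False,
                "export": {"timelapse_args": "-vf setpts=0.04*PTS -r 30"},
            }
        }
    },
}

# After migration the deprecated keys are dropped, record/export sections left
# empty are removed, and the version is bumped.
after = {
    "version": "0.18-0",
    "cameras": {"front_door": {}},
}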
808
frigate/util/media.py Normal file

@@ -0,0 +1,808 @@
"""Recordings Utilities."""
import datetime
import errno
import logging
import os
from dataclasses import dataclass, field
from pathlib import Path
from typing import Iterable
from peewee import DatabaseError, chunked
from frigate.const import CLIPS_DIR, EXPORT_DIR, RECORD_DIR, THUMB_DIR
from frigate.models import (
Event,
Export,
Previews,
Recordings,
RecordingsToDelete,
ReviewSegment,
)
logger = logging.getLogger(__name__)
# Safety threshold - abort if more than 50% of files would be deleted
SAFETY_THRESHOLD = 0.5
@dataclass
class SyncResult:
"""Result of a sync operation."""
media_type: str
files_checked: int = 0
orphans_found: int = 0
orphans_deleted: int = 0
orphan_paths: list[str] = field(default_factory=list)
aborted: bool = False
error: str | None = None
def to_dict(self) -> dict:
return {
"media_type": self.media_type,
"files_checked": self.files_checked,
"orphans_found": self.orphans_found,
"orphans_deleted": self.orphans_deleted,
"aborted": self.aborted,
"error": self.error,
}
def remove_empty_directories(root: Path, paths: Iterable[Path]) -> None:
"""
Remove directories if they exist and are empty.
Silently ignores non-existent and non-empty directories.
Attempts to remove parent directories as well, stopping at the given root.
"""
count = 0
while True:
parents = set()
for path in paths:
if path == root:
continue
try:
path.rmdir()
count += 1
except FileNotFoundError:
pass
except OSError as e:
if e.errno == errno.ENOTEMPTY:
continue
raise
parents.add(path.parent)
if not parents:
break
paths = parents
logger.debug("Removed {count} empty directories")
def sync_recordings(
limited: bool = False, dry_run: bool = False, force: bool = False
) -> SyncResult:
"""Sync recordings between the database and disk using the SyncResult format."""
result = SyncResult(media_type="recordings")
try:
logger.debug("Start sync recordings.")
# start checking on the hour 36 hours ago
check_point = datetime.datetime.now().replace(
minute=0, second=0, microsecond=0
).astimezone(datetime.timezone.utc) - datetime.timedelta(hours=36)
# Gather DB recordings to inspect
if limited:
recordings_query = Recordings.select(Recordings.id, Recordings.path).where(
Recordings.start_time >= check_point.timestamp()
)
else:
recordings_query = Recordings.select(Recordings.id, Recordings.path)
recordings_count = recordings_query.count()
page_size = 1000
num_pages = (recordings_count + page_size - 1) // page_size
recordings_to_delete: list[dict] = []
for page in range(num_pages):
for recording in recordings_query.paginate(page, page_size):
if not os.path.exists(recording.path):
recordings_to_delete.append(
{"id": recording.id, "path": recording.path}
)
result.orphans_found += len(recordings_to_delete)
result.orphan_paths.extend(
[
recording["path"]
for recording in recordings_to_delete
if recording.get("path")
]
)
if (
recordings_count
and len(recordings_to_delete) / recordings_count > SAFETY_THRESHOLD
):
if force:
logger.warning(
f"Deleting {(len(recordings_to_delete) / max(1, recordings_count) * 100):.2f}% of recordings DB entries (force=True, bypassing safety threshold)"
)
else:
logger.warning(
f"Deleting {(len(recordings_to_delete) / max(1, recordings_count) * 100):.2f}% of recordings DB entries, could be due to configuration error. Aborting..."
)
result.aborted = True
return result
if recordings_to_delete and not dry_run:
logger.info(
f"Deleting {len(recordings_to_delete)} recording DB entries with missing files"
)
RecordingsToDelete.create_table(temporary=True)
max_inserts = 1000
for batch in chunked(recordings_to_delete, max_inserts):
RecordingsToDelete.insert_many(batch).execute()
try:
deleted = (
Recordings.delete()
.where(
Recordings.id.in_(
RecordingsToDelete.select(RecordingsToDelete.id)
)
)
.execute()
)
result.orphans_deleted += int(deleted)
except DatabaseError as e:
logger.error(f"Database error during recordings db cleanup: {e}")
result.error = str(e)
result.aborted = True
return result
if result.aborted:
logger.warning("Recording DB sync aborted; skipping file cleanup.")
return result
# Only try to cleanup files if db cleanup was successful or dry_run
if limited:
# get recording files from last 36 hours
hour_check = f"{RECORD_DIR}/{check_point.strftime('%Y-%m-%d/%H')}"
files_on_disk = {
os.path.join(root, file)
for root, _, files in os.walk(RECORD_DIR)
for file in files
if root > hour_check
}
else:
# get all recordings files on disk and put them in a set
files_on_disk = {
os.path.join(root, file)
for root, _, files in os.walk(RECORD_DIR)
for file in files
}
result.files_checked = len(files_on_disk)
files_to_delete: list[str] = []
for file in files_on_disk:
if not Recordings.select().where(Recordings.path == file).exists():
files_to_delete.append(file)
result.orphans_found += len(files_to_delete)
result.orphan_paths.extend(files_to_delete)
if (
files_on_disk
and len(files_to_delete) / len(files_on_disk) > SAFETY_THRESHOLD
):
if force:
logger.warning(
f"Deleting {(len(files_to_delete) / max(1, len(files_on_disk)) * 100):.2f}% of recordings files (force=True, bypassing safety threshold)"
)
else:
logger.warning(
f"Deleting {(len(files_to_delete) / max(1, len(files_on_disk)) * 100):.2f}% of recordings files, could be due to configuration error. Aborting..."
)
result.aborted = True
return result
if dry_run:
logger.info(
f"Recordings sync (dry run): Found {len(files_to_delete)} orphaned files"
)
return result
# Delete orphans
logger.info(f"Deleting {len(files_to_delete)} orphaned recordings files")
for file in files_to_delete:
try:
os.unlink(file)
result.orphans_deleted += 1
except OSError as e:
logger.error(f"Failed to delete {file}: {e}")
logger.debug("End sync recordings.")
except Exception as e:
logger.error(f"Error syncing recordings: {e}")
result.error = str(e)
return result
def sync_event_snapshots(dry_run: bool = False, force: bool = False) -> SyncResult:
"""Sync event snapshots - delete files not referenced by any event.
Event snapshots are stored at: CLIPS_DIR/{camera}-{event_id}.jpg
Also checks for clean variants: {camera}-{event_id}-clean.webp and -clean.png
"""
result = SyncResult(media_type="event_snapshots")
try:
# Get all event IDs with snapshots from DB
events_with_snapshots = set(
f"{e.camera}-{e.id}"
for e in Event.select(Event.id, Event.camera).where(
Event.has_snapshot == True
)
)
# Find snapshot files on disk (directly in CLIPS_DIR, not subdirectories)
snapshot_files: list[tuple[str, str]] = [] # (full_path, base_name)
if os.path.isdir(CLIPS_DIR):
for file in os.listdir(CLIPS_DIR):
file_path = os.path.join(CLIPS_DIR, file)
if os.path.isfile(file_path) and file.endswith(
(".jpg", "-clean.webp", "-clean.png")
):
# Extract base name (camera-event_id) from filename
base_name = file
for suffix in ["-clean.webp", "-clean.png", ".jpg"]:
if file.endswith(suffix):
base_name = file[: -len(suffix)]
break
snapshot_files.append((file_path, base_name))
result.files_checked = len(snapshot_files)
# Find orphans
orphans: list[str] = []
for file_path, base_name in snapshot_files:
if base_name not in events_with_snapshots:
orphans.append(file_path)
result.orphans_found = len(orphans)
result.orphan_paths = orphans
if len(orphans) == 0:
return result
# Safety check
if (
result.files_checked > 0
and len(orphans) / result.files_checked > SAFETY_THRESHOLD
):
if force:
logger.warning(
f"Event snapshots sync: Would delete {len(orphans)}/{result.files_checked} "
f"({len(orphans) / result.files_checked * 100:.2f}%) files (force=True, bypassing safety threshold)."
)
else:
logger.warning(
f"Event snapshots sync: Would delete {len(orphans)}/{result.files_checked} "
f"({len(orphans) / result.files_checked * 100:.2f}%) files. "
"Aborting due to safety threshold."
)
result.aborted = True
return result
if dry_run:
logger.info(
f"Event snapshots sync (dry run): Found {len(orphans)} orphaned files"
)
return result
# Delete orphans
logger.info(f"Deleting {len(orphans)} orphaned event snapshot files")
for file_path in orphans:
try:
os.unlink(file_path)
result.orphans_deleted += 1
except OSError as e:
logger.error(f"Failed to delete {file_path}: {e}")
except Exception as e:
logger.error(f"Error syncing event snapshots: {e}")
result.error = str(e)
return result
def sync_event_thumbnails(dry_run: bool = False, force: bool = False) -> SyncResult:
"""Sync event thumbnails - delete files not referenced by any event.
Event thumbnails are stored at: THUMB_DIR/{camera}/{event_id}.webp
Only events without inline thumbnail (thumbnail field is None/empty) use files.
"""
result = SyncResult(media_type="event_thumbnails")
try:
# Get all events that use file-based thumbnails
# Events with thumbnail field populated don't need files
events_with_file_thumbs = set(
(e.camera, e.id)
for e in Event.select(Event.id, Event.camera, Event.thumbnail).where(
(Event.thumbnail.is_null(True)) | (Event.thumbnail == "")
)
)
# Find thumbnail files on disk
thumbnail_files: list[
tuple[str, str, str]
] = [] # (full_path, camera, event_id)
if os.path.isdir(THUMB_DIR):
for camera_dir in os.listdir(THUMB_DIR):
camera_path = os.path.join(THUMB_DIR, camera_dir)
if not os.path.isdir(camera_path):
continue
for file in os.listdir(camera_path):
if file.endswith(".webp"):
event_id = file[:-5] # Remove .webp
file_path = os.path.join(camera_path, file)
thumbnail_files.append((file_path, camera_dir, event_id))
result.files_checked = len(thumbnail_files)
# Find orphans - files where event doesn't exist or event has inline thumbnail
orphans: list[str] = []
for file_path, camera, event_id in thumbnail_files:
if (camera, event_id) not in events_with_file_thumbs:
# A missing event, or an event that stores an inline thumbnail,
# means the file on disk is orphaned
event = Event.get_or_none(Event.id == event_id)
if event is None or event.thumbnail:
orphans.append(file_path)
result.orphans_found = len(orphans)
result.orphan_paths = orphans
if len(orphans) == 0:
return result
# Safety check
if (
result.files_checked > 0
and len(orphans) / result.files_checked > SAFETY_THRESHOLD
):
if force:
logger.warning(
f"Event thumbnails sync: Would delete {len(orphans)}/{result.files_checked} "
f"({len(orphans) / result.files_checked * 100:.2f}%) files (force=True, bypassing safety threshold)."
)
else:
logger.warning(
f"Event thumbnails sync: Would delete {len(orphans)}/{result.files_checked} "
f"({len(orphans) / result.files_checked * 100:.2f}%) files. "
"Aborting due to safety threshold."
)
result.aborted = True
return result
if dry_run:
logger.info(
f"Event thumbnails sync (dry run): Found {len(orphans)} orphaned files"
)
return result
# Delete orphans
logger.info(f"Deleting {len(orphans)} orphaned event thumbnail files")
for file_path in orphans:
try:
os.unlink(file_path)
result.orphans_deleted += 1
except OSError as e:
logger.error(f"Failed to delete {file_path}: {e}")
except Exception as e:
logger.error(f"Error syncing event thumbnails: {e}")
result.error = str(e)
return result
def sync_review_thumbnails(dry_run: bool = False, force: bool = False) -> SyncResult:
"""Sync review segment thumbnails - delete files not referenced by any review segment.
Review thumbnails are stored at: CLIPS_DIR/review/thumb-{camera}-{review_id}.webp
The full path is stored in ReviewSegment.thumb_path
"""
result = SyncResult(media_type="review_thumbnails")
try:
# Get all thumb paths from DB
review_thumb_paths = set(
r.thumb_path
for r in ReviewSegment.select(ReviewSegment.thumb_path)
if r.thumb_path
)
# Find review thumbnail files on disk
review_dir = os.path.join(CLIPS_DIR, "review")
thumbnail_files: list[str] = []
if os.path.isdir(review_dir):
for file in os.listdir(review_dir):
if file.startswith("thumb-") and file.endswith(".webp"):
file_path = os.path.join(review_dir, file)
thumbnail_files.append(file_path)
result.files_checked = len(thumbnail_files)
# Find orphans
orphans: list[str] = []
for file_path in thumbnail_files:
if file_path not in review_thumb_paths:
orphans.append(file_path)
result.orphans_found = len(orphans)
result.orphan_paths = orphans
if len(orphans) == 0:
return result
# Safety check
if (
result.files_checked > 0
and len(orphans) / result.files_checked > SAFETY_THRESHOLD
):
if force:
logger.warning(
f"Review thumbnails sync: Would delete {len(orphans)}/{result.files_checked} "
f"({len(orphans) / result.files_checked * 100:.2f}%) files (force=True, bypassing safety threshold)."
)
else:
logger.warning(
f"Review thumbnails sync: Would delete {len(orphans)}/{result.files_checked} "
f"({len(orphans) / result.files_checked * 100:.2f}%) files. "
"Aborting due to safety threshold."
)
result.aborted = True
return result
if dry_run:
logger.info(
f"Review thumbnails sync (dry run): Found {len(orphans)} orphaned files"
)
return result
# Delete orphans
logger.info(f"Deleting {len(orphans)} orphaned review thumbnail files")
for file_path in orphans:
try:
os.unlink(file_path)
result.orphans_deleted += 1
except OSError as e:
logger.error(f"Failed to delete {file_path}: {e}")
except Exception as e:
logger.error(f"Error syncing review thumbnails: {e}")
result.error = str(e)
return result
def sync_previews(dry_run: bool = False, force: bool = False) -> SyncResult:
"""Sync preview files - delete files not referenced by any preview record.
Previews are stored at: CLIPS_DIR/previews/{camera}/*.mp4
The full path is stored in Previews.path
"""
result = SyncResult(media_type="previews")
try:
# Get all preview paths from DB
preview_paths = set(p.path for p in Previews.select(Previews.path) if p.path)
# Find preview files on disk
previews_dir = os.path.join(CLIPS_DIR, "previews")
preview_files: list[str] = []
if os.path.isdir(previews_dir):
for camera_dir in os.listdir(previews_dir):
camera_path = os.path.join(previews_dir, camera_dir)
if not os.path.isdir(camera_path):
continue
for file in os.listdir(camera_path):
if file.endswith(".mp4"):
file_path = os.path.join(camera_path, file)
preview_files.append(file_path)
result.files_checked = len(preview_files)
# Find orphans
orphans: list[str] = []
for file_path in preview_files:
if file_path not in preview_paths:
orphans.append(file_path)
result.orphans_found = len(orphans)
result.orphan_paths = orphans
if len(orphans) == 0:
return result
# Safety check
if (
result.files_checked > 0
and len(orphans) / result.files_checked > SAFETY_THRESHOLD
):
if force:
logger.warning(
f"Previews sync: Would delete {len(orphans)}/{result.files_checked} "
f"({len(orphans) / result.files_checked * 100:.2f}%) files (force=True, bypassing safety threshold)."
)
else:
logger.warning(
f"Previews sync: Would delete {len(orphans)}/{result.files_checked} "
f"({len(orphans) / result.files_checked * 100:.2f}%) files. "
"Aborting due to safety threshold."
)
result.aborted = True
return result
if dry_run:
logger.info(f"Previews sync (dry run): Found {len(orphans)} orphaned files")
return result
# Delete orphans
logger.info(f"Deleting {len(orphans)} orphaned preview files")
for file_path in orphans:
try:
os.unlink(file_path)
result.orphans_deleted += 1
except OSError as e:
logger.error(f"Failed to delete {file_path}: {e}")
except Exception as e:
logger.error(f"Error syncing previews: {e}")
result.error = str(e)
return result
def sync_exports(dry_run: bool = False, force: bool = False) -> SyncResult:
"""Sync export files - delete files not referenced by any export record.
Export videos are stored at: EXPORT_DIR/*.mp4
Export thumbnails are stored at: CLIPS_DIR/export/*.jpg
The paths are stored in Export.video_path and Export.thumb_path
"""
result = SyncResult(media_type="exports")
try:
# Get all export paths from DB
export_video_paths = set()
export_thumb_paths = set()
for e in Export.select(Export.video_path, Export.thumb_path):
if e.video_path:
export_video_paths.add(e.video_path)
if e.thumb_path:
export_thumb_paths.add(e.thumb_path)
# Find export video files on disk
export_files: list[str] = []
if os.path.isdir(EXPORT_DIR):
for file in os.listdir(EXPORT_DIR):
if file.endswith(".mp4"):
file_path = os.path.join(EXPORT_DIR, file)
export_files.append(file_path)
# Find export thumbnail files on disk
export_thumb_dir = os.path.join(CLIPS_DIR, "export")
thumb_files: list[str] = []
if os.path.isdir(export_thumb_dir):
for file in os.listdir(export_thumb_dir):
if file.endswith(".jpg"):
file_path = os.path.join(export_thumb_dir, file)
thumb_files.append(file_path)
result.files_checked = len(export_files) + len(thumb_files)
# Find orphans
orphans: list[str] = []
for file_path in export_files:
if file_path not in export_video_paths:
orphans.append(file_path)
for file_path in thumb_files:
if file_path not in export_thumb_paths:
orphans.append(file_path)
result.orphans_found = len(orphans)
result.orphan_paths = orphans
if len(orphans) == 0:
return result
# Safety check
if (
result.files_checked > 0
and len(orphans) / result.files_checked > SAFETY_THRESHOLD
):
if force:
logger.warning(
f"Exports sync: Would delete {len(orphans)}/{result.files_checked} "
f"({len(orphans) / result.files_checked * 100:.2f}%) files (force=True, bypassing safety threshold)."
)
else:
logger.warning(
f"Exports sync: Would delete {len(orphans)}/{result.files_checked} "
f"({len(orphans) / result.files_checked * 100:.2f}%) files. "
"Aborting due to safety threshold."
)
result.aborted = True
return result
if dry_run:
logger.info(f"Exports sync (dry run): Found {len(orphans)} orphaned files")
return result
# Delete orphans
logger.info(f"Deleting {len(orphans)} orphaned export files")
for file_path in orphans:
try:
os.unlink(file_path)
result.orphans_deleted += 1
except OSError as e:
logger.error(f"Failed to delete {file_path}: {e}")
except Exception as e:
logger.error(f"Error syncing exports: {e}")
result.error = str(e)
return result
@dataclass
class MediaSyncResults:
"""Combined results from all media sync operations."""
event_snapshots: SyncResult | None = None
event_thumbnails: SyncResult | None = None
review_thumbnails: SyncResult | None = None
previews: SyncResult | None = None
exports: SyncResult | None = None
recordings: SyncResult | None = None
@property
def total_files_checked(self) -> int:
total = 0
for result in [
self.event_snapshots,
self.event_thumbnails,
self.review_thumbnails,
self.previews,
self.exports,
self.recordings,
]:
if result:
total += result.files_checked
return total
@property
def total_orphans_found(self) -> int:
total = 0
for result in [
self.event_snapshots,
self.event_thumbnails,
self.review_thumbnails,
self.previews,
self.exports,
self.recordings,
]:
if result:
total += result.orphans_found
return total
@property
def total_orphans_deleted(self) -> int:
total = 0
for result in [
self.event_snapshots,
self.event_thumbnails,
self.review_thumbnails,
self.previews,
self.exports,
self.recordings,
]:
if result:
total += result.orphans_deleted
return total
def to_dict(self) -> dict:
"""Convert results to dictionary for API response."""
results = {}
for name, result in [
("event_snapshots", self.event_snapshots),
("event_thumbnails", self.event_thumbnails),
("review_thumbnails", self.review_thumbnails),
("previews", self.previews),
("exports", self.exports),
("recordings", self.recordings),
]:
if result:
results[name] = {
"files_checked": result.files_checked,
"orphans_found": result.orphans_found,
"orphans_deleted": result.orphans_deleted,
"aborted": result.aborted,
"error": result.error,
}
results["totals"] = {
"files_checked": self.total_files_checked,
"orphans_found": self.total_orphans_found,
"orphans_deleted": self.total_orphans_deleted,
}
return results
def sync_all_media(
dry_run: bool = False, media_types: list[str] = ["all"], force: bool = False
) -> MediaSyncResults:
"""Sync specified media types with the database.
Args:
dry_run: If True, only report orphans without deleting them.
media_types: List of media types to sync. Can include: 'all', 'event_snapshots',
'event_thumbnails', 'review_thumbnails', 'previews', 'exports', 'recordings'
force: If True, bypass safety threshold checks.
Returns:
MediaSyncResults with details of each sync operation.
"""
logger.debug(
f"Starting media sync (dry_run={dry_run}, media_types={media_types}, force={force})"
)
results = MediaSyncResults()
# Determine which media types to sync
sync_all = "all" in media_types
if sync_all or "event_snapshots" in media_types:
results.event_snapshots = sync_event_snapshots(dry_run=dry_run, force=force)
if sync_all or "event_thumbnails" in media_types:
results.event_thumbnails = sync_event_thumbnails(dry_run=dry_run, force=force)
if sync_all or "review_thumbnails" in media_types:
results.review_thumbnails = sync_review_thumbnails(dry_run=dry_run, force=force)
if sync_all or "previews" in media_types:
results.previews = sync_previews(dry_run=dry_run, force=force)
if sync_all or "exports" in media_types:
results.exports = sync_exports(dry_run=dry_run, force=force)
if sync_all or "recordings" in media_types:
results.recordings = sync_recordings(dry_run=dry_run, force=force)
logger.info(
f"Media sync complete: checked {results.total_files_checked} files, "
f"found {results.total_orphans_found} orphans, "
f"deleted {results.total_orphans_deleted}"
)
return results
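
For orientation, a minimal usage sketch of the helpers defined above, assuming the module is importable (the `frigate.util.media_sync` path below is an assumption, not something shown in this diff). A dry run only reports orphans; force=True bypasses the safety threshold that otherwise aborts large deletions.

# Hypothetical usage sketch; the import path is an assumption.
from frigate.util.media_sync import sync_all_media

# Dry run: report orphans for a subset of media types without deleting anything.
results = sync_all_media(dry_run=True, media_types=["recordings", "exports"])
print(results.to_dict()["totals"])

# After reviewing the report, a destructive pass could follow; force=True would
# bypass the safety threshold that otherwise aborts large deletions.
# sync_all_media(dry_run=False, media_types=["recordings", "exports"])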

View File

@@ -417,12 +417,12 @@ def get_openvino_npu_stats() -> Optional[dict[str, str]]:
else:
usage = 0.0
return {"npu": f"{round(usage, 2)}", "mem": "-"}
return {"npu": f"{round(usage, 2)}", "mem": "-%"}
except (FileNotFoundError, PermissionError, ValueError):
return None
def get_rockchip_gpu_stats() -> Optional[dict[str, str]]:
def get_rockchip_gpu_stats() -> Optional[dict[str, str | float]]:
"""Get GPU stats using rk."""
try:
with open("/sys/kernel/debug/rkrga/load", "r") as f:
@@ -440,7 +440,16 @@ def get_rockchip_gpu_stats() -> Optional[dict[str, str]]:
return None
average_load = f"{round(sum(load_values) / len(load_values), 2)}%"
return {"gpu": average_load, "mem": "-"}
stats: dict[str, str | float] = {"gpu": average_load, "mem": "-%"}
try:
with open("/sys/class/thermal/thermal_zone5/temp", "r") as f:
line = f.readline().strip()
stats["temp"] = round(int(line) / 1000, 1)
except (FileNotFoundError, OSError, ValueError):
pass
return stats
def get_rockchip_npu_stats() -> Optional[dict[str, float | str]]:
@@ -463,13 +472,25 @@ def get_rockchip_npu_stats() -> Optional[dict[str, float | str]]:
percentages = [int(load) for load in core_loads]
mean = round(sum(percentages) / len(percentages), 2)
return {"npu": mean, "mem": "-"}
stats: dict[str, float | str] = {"npu": mean, "mem": "-%"}
try:
with open("/sys/class/thermal/thermal_zone6/temp", "r") as f:
line = f.readline().strip()
stats["temp"] = round(int(line) / 1000, 1)
except (FileNotFoundError, OSError, ValueError):
pass
return stats
def try_get_info(f, h, default="N/A"):
def try_get_info(f, h, default="N/A", sensor=None):
try:
if h:
v = f(h)
if sensor is not None:
v = f(h, sensor)
else:
v = f(h)
else:
v = f()
except nvml.NVMLError_NotSupported:
@@ -498,6 +519,9 @@ def get_nvidia_gpu_stats() -> dict[int, dict]:
util = try_get_info(nvml.nvmlDeviceGetUtilizationRates, handle)
enc = try_get_info(nvml.nvmlDeviceGetEncoderUtilization, handle)
dec = try_get_info(nvml.nvmlDeviceGetDecoderUtilization, handle)
temp = try_get_info(
nvml.nvmlDeviceGetTemperature, handle, default=None, sensor=0
)
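# sensor index 0 corresponds to NVML's GPU die temperature sensor (NVML_TEMPERATURE_GPU)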
pstate = try_get_info(nvml.nvmlDeviceGetPowerState, handle, default=None)
if util != "N/A":
@@ -510,6 +534,11 @@ def get_nvidia_gpu_stats() -> dict[int, dict]:
else:
gpu_mem_util = -1
if temp != "N/A" and temp is not None:
temp = float(temp)
else:
temp = None
if enc != "N/A":
enc_util = enc[0]
else:
@@ -527,6 +556,7 @@ def get_nvidia_gpu_stats() -> dict[int, dict]:
"enc": enc_util,
"dec": dec_util,
"pstate": pstate or "unknown",
"temp": temp,
}
except Exception:
pass
@@ -549,6 +579,53 @@ def get_jetson_stats() -> Optional[dict[int, dict]]:
return results
def get_hailo_temps() -> dict[str, float]:
"""Get temperatures for Hailo devices."""
try:
from hailo_platform import Device
except ModuleNotFoundError:
return {}
temps = {}
try:
device_ids = Device.scan()
for i, device_id in enumerate(device_ids):
try:
with Device(device_id) as device:
temp_info = device.control.get_chip_temperature()
# Get board name and normalise it
identity = device.control.identify()
board_name = None
for line in str(identity).split("\n"):
if line.startswith("Board Name:"):
board_name = (
line.split(":", 1)[1].strip().lower().replace("-", "")
)
break
if not board_name:
board_name = f"hailo{i}"
# Use indexed name if multiple devices, otherwise just the board name
device_name = (
f"{board_name}-{i}" if len(device_ids) > 1 else board_name
)
# ts1_temperature is also available, but appeared to be the same as ts0 in testing.
temps[device_name] = round(temp_info.ts0_temperature, 1)
except Exception as e:
logger.debug(
f"Failed to get temperature for Hailo device {device_id}: {e}"
)
continue
except Exception as e:
logger.debug(f"Failed to scan for Hailo devices: {e}")
return temps
def ffprobe_stream(ffmpeg, path: str, detailed: bool = False) -> sp.CompletedProcess:
"""Run ffprobe on stream."""
clean_path = escape_special_characters(path)
@@ -584,12 +661,17 @@ def ffprobe_stream(ffmpeg, path: str, detailed: bool = False) -> sp.CompletedPro
def vainfo_hwaccel(device_name: Optional[str] = None) -> sp.CompletedProcess:
"""Run vainfo."""
ffprobe_cmd = (
["vainfo"]
if not device_name
else ["vainfo", "--display", "drm", "--device", f"/dev/dri/{device_name}"]
)
return sp.run(ffprobe_cmd, capture_output=True)
if not device_name:
cmd = ["vainfo"]
else:
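# device_name may be either a bare DRM node name (e.g. "renderD128") or an
# absolute path already under /dev/dri/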
if os.path.isabs(device_name) and device_name.startswith("/dev/dri/"):
device_path = device_name
else:
device_path = f"/dev/dri/{device_name}"
cmd = ["vainfo", "--display", "drm", "--device", device_path]
return sp.run(cmd, capture_output=True)
def get_nvidia_driver_info() -> dict[str, Any]:

View File

@@ -3,6 +3,7 @@ import queue
import subprocess as sp
import threading
import time
from collections import deque
from datetime import datetime, timedelta, timezone
from multiprocessing import Queue, Value
from multiprocessing.synchronize import Event as MpEvent
@@ -115,6 +116,7 @@ def capture_frames(
frame_rate.start()
skipped_eps = EventsPerSecond()
skipped_eps.start()
config_subscriber = CameraConfigUpdateSubscriber(
None, {config.name: config}, [CameraConfigUpdateEnum.enabled]
)
@@ -179,6 +181,9 @@ class CameraWatchdog(threading.Thread):
camera_fps,
skipped_fps,
ffmpeg_pid,
stalls,
reconnects,
detection_frame,
stop_event,
):
threading.Thread.__init__(self)
@@ -199,6 +204,10 @@ class CameraWatchdog(threading.Thread):
self.frame_index = 0
self.stop_event = stop_event
self.sleeptime = self.config.ffmpeg.retry_interval
self.reconnect_timestamps = deque()
self.stalls = stalls
self.reconnects = reconnects
self.detection_frame = detection_frame
self.config_subscriber = CameraConfigUpdateSubscriber(
None,
@@ -213,6 +222,35 @@ class CameraWatchdog(threading.Thread):
self.latest_invalid_segment_time: float = 0
self.latest_cache_segment_time: float = 0
# Stall tracking (based on last processed frame)
self._stall_timestamps: deque[float] = deque()
self._stall_active: bool = False
# Status caching to reduce message volume
self._last_detect_status: str | None = None
self._last_record_status: str | None = None
self._last_status_update_time: float = 0.0
def _send_detect_status(self, status: str, now: float) -> None:
"""Send detect status only if changed or retry_interval has elapsed."""
if (
status != self._last_detect_status
or (now - self._last_status_update_time) >= self.sleeptime
):
self.requestor.send_data(f"{self.config.name}/status/detect", status)
self._last_detect_status = status
self._last_status_update_time = now
def _send_record_status(self, status: str, now: float) -> None:
"""Send record status only if changed or retry_interval has elapsed."""
if (
status != self._last_record_status
or (now - self._last_status_update_time) >= self.sleeptime
):
self.requestor.send_data(f"{self.config.name}/status/record", status)
self._last_record_status = status
self._last_status_update_time = now
def _update_enabled_state(self) -> bool:
"""Fetch the latest config and update enabled state."""
self.config_subscriber.check_for_updates()
@@ -239,6 +277,14 @@ class CameraWatchdog(threading.Thread):
else:
self.ffmpeg_detect_process.wait()
# Update reconnects
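# keep a rolling one-hour window so reconnects_last_hour only reflects recent restarts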
now = datetime.now().timestamp()
self.reconnect_timestamps.append(now)
while self.reconnect_timestamps and self.reconnect_timestamps[0] < now - 3600:
self.reconnect_timestamps.popleft()
if self.reconnects:
self.reconnects.value = len(self.reconnect_timestamps)
# Wait for old capture thread to fully exit before starting a new one
if self.capture_thread is not None and self.capture_thread.is_alive():
self.logger.info("Waiting for capture thread to exit...")
@@ -261,7 +307,10 @@ class CameraWatchdog(threading.Thread):
self.start_all_ffmpeg()
time.sleep(self.sleeptime)
while not self.stop_event.wait(self.sleeptime):
last_restart_time = datetime.now().timestamp()
# 1 second watchdog loop
while not self.stop_event.wait(1):
enabled = self._update_enabled_state()
if enabled != self.was_enabled:
if enabled:
@@ -277,12 +326,9 @@ class CameraWatchdog(threading.Thread):
self.stop_all_ffmpeg()
# update camera status
self.requestor.send_data(
f"{self.config.name}/status/detect", "disabled"
)
self.requestor.send_data(
f"{self.config.name}/status/record", "disabled"
)
now = datetime.now().timestamp()
self._send_detect_status("disabled", now)
self._send_record_status("disabled", now)
self.was_enabled = enabled
continue
@@ -321,36 +367,44 @@ class CameraWatchdog(threading.Thread):
now = datetime.now().timestamp()
# Check if enough time has passed to allow ffmpeg restart (backoff pacing)
time_since_last_restart = now - last_restart_time
can_restart = time_since_last_restart >= self.sleeptime
if not self.capture_thread.is_alive():
self.requestor.send_data(f"{self.config.name}/status/detect", "offline")
self._send_detect_status("offline", now)
self.camera_fps.value = 0
self.logger.error(
f"Ffmpeg process crashed unexpectedly for {self.config.name}."
)
self.reset_capture_thread(terminate=False)
if can_restart:
self.reset_capture_thread(terminate=False)
last_restart_time = now
elif self.camera_fps.value >= (self.config.detect.fps + 10):
self.fps_overflow_count += 1
if self.fps_overflow_count == 3:
self.requestor.send_data(
f"{self.config.name}/status/detect", "offline"
)
self._send_detect_status("offline", now)
self.fps_overflow_count = 0
self.camera_fps.value = 0
self.logger.info(
f"{self.config.name} exceeded fps limit. Exiting ffmpeg..."
)
self.reset_capture_thread(drain_output=False)
if can_restart:
self.reset_capture_thread(drain_output=False)
last_restart_time = now
elif now - self.capture_thread.current_frame.value > 20:
self.requestor.send_data(f"{self.config.name}/status/detect", "offline")
self._send_detect_status("offline", now)
self.camera_fps.value = 0
self.logger.info(
f"No frames received from {self.config.name} in 20 seconds. Exiting ffmpeg..."
)
self.reset_capture_thread()
if can_restart:
self.reset_capture_thread()
last_restart_time = now
else:
# process is running normally
self.requestor.send_data(f"{self.config.name}/status/detect", "online")
self._send_detect_status("online", now)
self.fps_overflow_count = 0
for p in self.ffmpeg_other_processes:
@@ -421,9 +475,7 @@ class CameraWatchdog(threading.Thread):
continue
else:
self.requestor.send_data(
f"{self.config.name}/status/record", "online"
)
self._send_record_status("online", now)
p["latest_segment_time"] = self.latest_cache_segment_time
if poll is None:
@@ -439,6 +491,34 @@ class CameraWatchdog(threading.Thread):
p["cmd"], self.logger, p["logpipe"], ffmpeg_process=p["process"]
)
# Update stall metrics based on last processed frame timestamp
now = datetime.now().timestamp()
processed_ts = (
float(self.detection_frame.value) if self.detection_frame else 0.0
)
if processed_ts > 0:
delta = now - processed_ts
observed_fps = (
self.camera_fps.value
if self.camera_fps.value > 0
else self.config.detect.fps
)
interval = 1.0 / max(observed_fps, 0.1)
stall_threshold = max(2.0 * interval, 2.0)
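# e.g. at 5 fps the frame interval is 0.2 s, so the 2.0 s floor applies; at 0.5 fps the threshold grows to 4.0 s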
if delta > stall_threshold:
if not self._stall_active:
self._stall_timestamps.append(now)
self._stall_active = True
else:
self._stall_active = False
while self._stall_timestamps and self._stall_timestamps[0] < now - 3600:
self._stall_timestamps.popleft()
if self.stalls:
self.stalls.value = len(self._stall_timestamps)
self.stop_all_ffmpeg()
self.logpipe.close()
self.config_subscriber.stop()
@@ -576,6 +656,9 @@ class CameraCapture(FrigateProcess):
self.camera_metrics.camera_fps,
self.camera_metrics.skipped_fps,
self.camera_metrics.ffmpeg_pid,
self.camera_metrics.stalls_last_hour,
self.camera_metrics.reconnects_last_hour,
self.camera_metrics.detection_frame,
self.stop_event,
)
camera_watchdog.start()

View File

@@ -0,0 +1,50 @@
"""Peewee migrations -- 033_create_export_case_table.py.
Some examples (model - class or model name)::
> Model = migrator.orm['model_name'] # Return model in current state by name
> migrator.sql(sql) # Run custom SQL
> migrator.python(func, *args, **kwargs) # Run python code
> migrator.create_model(Model) # Create a model (could be used as decorator)
> migrator.remove_model(model, cascade=True) # Remove a model
> migrator.add_fields(model, **fields) # Add fields to a model
> migrator.change_fields(model, **fields) # Change fields
> migrator.remove_fields(model, *field_names, cascade=True)
> migrator.rename_field(model, old_field_name, new_field_name)
> migrator.rename_table(model, new_table_name)
> migrator.add_index(model, *col_names, unique=False)
> migrator.drop_index(model, *col_names)
> migrator.add_not_null(model, *field_names)
> migrator.drop_not_null(model, *field_names)
> migrator.add_default(model, field_name, default)
"""
import peewee as pw
SQL = pw.SQL
def migrate(migrator, database, fake=False, **kwargs):
migrator.sql(
"""
CREATE TABLE IF NOT EXISTS "exportcase" (
"id" VARCHAR(30) NOT NULL PRIMARY KEY,
"name" VARCHAR(100) NOT NULL,
"description" TEXT NULL,
"created_at" DATETIME NOT NULL,
"updated_at" DATETIME NOT NULL
)
"""
)
migrator.sql(
'CREATE INDEX IF NOT EXISTS "exportcase_name" ON "exportcase" ("name")'
)
migrator.sql(
'CREATE INDEX IF NOT EXISTS "exportcase_created_at" ON "exportcase" ("created_at")'
)
def rollback(migrator, database, fake=False, **kwargs):
pass

View File

@@ -0,0 +1,40 @@
"""Peewee migrations -- 034_add_export_case_to_exports.py.
Some examples (model - class or model name)::
> Model = migrator.orm['model_name'] # Return model in current state by name
> migrator.sql(sql) # Run custom SQL
> migrator.python(func, *args, **kwargs) # Run python code
> migrator.create_model(Model) # Create a model (could be used as decorator)
> migrator.remove_model(model, cascade=True) # Remove a model
> migrator.add_fields(model, **fields) # Add fields to a model
> migrator.change_fields(model, **fields) # Change fields
> migrator.remove_fields(model, *field_names, cascade=True)
> migrator.rename_field(model, old_field_name, new_field_name)
> migrator.rename_table(model, new_table_name)
> migrator.add_index(model, *col_names, unique=False)
> migrator.drop_index(model, *col_names)
> migrator.add_not_null(model, *field_names)
> migrator.drop_not_null(model, *field_names)
> migrator.add_default(model, field_name, default)
"""
import peewee as pw
SQL = pw.SQL
def migrate(migrator, database, fake=False, **kwargs):
# Add nullable export_case_id column to export table
migrator.sql('ALTER TABLE "export" ADD COLUMN "export_case_id" VARCHAR(30) NULL')
# Index for faster case-based queries
migrator.sql(
'CREATE INDEX IF NOT EXISTS "export_export_case_id" ON "export" ("export_case_id")'
)
def rollback(migrator, database, fake=False, **kwargs):
pass

View File

@@ -48,6 +48,10 @@
"name": {
"placeholder": "Name the Export"
},
"case": {
"label": "Case",
"placeholder": "Select a case"
},
"select": "Select",
"export": "Export",
"selectOrExport": "Select or Export",

View File

@@ -324,9 +324,6 @@
"enabled": {
"label": "Enable record on all cameras."
},
"sync_recordings": {
"label": "Sync recordings with disk on startup and once a day."
},
"expire_interval": {
"label": "Number of minutes to wait between cleanup runs."
},
@@ -758,4 +755,4 @@
"label": "Keep track of original state of camera."
}
}
}
}

View File

@@ -4,9 +4,6 @@
"enabled": {
"label": "Enable record on all cameras."
},
"sync_recordings": {
"label": "Sync recordings with disk on startup and once a day."
},
"expire_interval": {
"label": "Number of minutes to wait between cleanup runs."
},
@@ -90,4 +87,4 @@
"label": "Keep track of original state of recording."
}
}
}
}

View File

@@ -2,6 +2,10 @@
"documentTitle": "Export - Frigate",
"search": "Search",
"noExports": "No exports found",
"headings": {
"cases": "Cases",
"uncategorizedExports": "Uncategorized Exports"
},
"deleteExport": "Delete Export",
"deleteExport.desc": "Are you sure you want to delete {{exportName}}?",
"editExport": {
@@ -13,11 +17,21 @@
"shareExport": "Share export",
"downloadVideo": "Download video",
"editName": "Edit name",
"deleteExport": "Delete export"
"deleteExport": "Delete export",
"assignToCase": "Add to case"
},
"toast": {
"error": {
"renameExportFailed": "Failed to rename export: {{errorMessage}}"
"renameExportFailed": "Failed to rename export: {{errorMessage}}",
"assignCaseFailed": "Failed to update case assignment: {{errorMessage}}"
}
},
"caseDialog": {
"title": "Add to case",
"description": "Choose an existing case or create a new one.",
"selectLabel": "Case",
"newCaseOption": "Create new case",
"nameLabel": "Case name",
"descriptionLabel": "Description"
}
}

View File

@@ -1065,5 +1065,53 @@
"deleteTriggerFailed": "Failed to delete trigger: {{errorMessage}}"
}
}
},
"maintenance": {
"title": "Maintenance",
"sync": {
"title": "Media Sync",
"desc": "Frigate will periodically clean up media on a regular schedule according to your retention configuration. It is normal to see a few orphaned files as Frigate runs. Use this feature to remove orphaned media files from disk that are no longer referenced in the database.",
"started": "Media sync started.",
"alreadyRunning": "A sync job is already running",
"error": "Failed to start sync",
"currentStatus": "Status",
"jobId": "Job ID",
"startTime": "Start Time",
"endTime": "End Time",
"statusLabel": "Status",
"results": "Results",
"errorLabel": "Error",
"mediaTypes": "Media Types",
"allMedia": "All Media",
"dryRun": "Dry Run",
"dryRunEnabled": "No files will be deleted",
"dryRunDisabled": "Files will be deleted",
"force": "Force",
"forceDesc": "Bypass safety threshold and complete sync even if more than 50% of the files would be deleted.",
"running": "Sync Running...",
"start": "Start Sync",
"inProgress": "Sync is in progress. This page is disabled.",
"status": {
"queued": "Queued",
"running": "Running",
"completed": "Completed",
"failed": "Failed",
"notRunning": "Not Running"
},
"resultsFields": {
"filesChecked": "Files Checked",
"orphansFound": "Orphans Found",
"orphansDeleted": "Orphans Deleted",
"aborted": "Aborted. Deletion would exceed safety threshold.",
"error": "Error",
"totals": "Totals"
},
"event_snapshots": "Tracked Object Snapshots",
"event_thumbnails": "Tracked Object Thumbnails",
"review_thumbnails": "Review Thumbnails",
"previews": "Previews",
"exports": "Exports",
"recordings": "Recordings"
}
}
}

View File

@@ -51,6 +51,7 @@
"gpuMemory": "GPU Memory",
"gpuEncoder": "GPU Encoder",
"gpuDecoder": "GPU Decoder",
"gpuTemperature": "GPU Temperature",
"gpuInfo": {
"vainfoOutput": {
"title": "Vainfo Output",
@@ -77,6 +78,7 @@
},
"npuUsage": "NPU Usage",
"npuMemory": "NPU Memory",
"npuTemperature": "NPU Temperature",
"intelGpuWarning": {
"title": "Intel GPU Stats Warning",
"message": "GPU stats unavailable",
@@ -151,6 +153,17 @@
"cameraDetectionsPerSecond": "{{camName}} detections per second",
"cameraSkippedDetectionsPerSecond": "{{camName}} skipped detections per second"
},
"connectionQuality": {
"title": "Connection Quality",
"excellent": "Excellent",
"fair": "Fair",
"poor": "Poor",
"unusable": "Unusable",
"fps": "FPS",
"expectedFps": "Expected FPS",
"reconnectsLastHour": "Reconnects (last hour)",
"stallsLastHour": "Stalls (last hour)"
},
"toast": {
"success": {
"copyToClipboard": "Copied probe data to clipboard."

View File

@@ -11,6 +11,7 @@ import {
TrackedObjectUpdateReturnType,
TriggerStatus,
FrigateAudioDetections,
Job,
} from "@/types/ws";
import { FrigateStats } from "@/types/stats";
import { createContainer } from "react-tracked";
@@ -651,3 +652,40 @@ export function useTriggers(): { payload: TriggerStatus } {
: { name: "", camera: "", event_id: "", type: "", score: 0 };
return { payload: useDeepMemo(parsed) };
}
export function useJobStatus(
jobType: string,
revalidateOnFocus: boolean = true,
): { payload: Job | null } {
const {
value: { payload },
send: sendCommand,
} = useWs("job_state", "jobState");
const jobData = useDeepMemo(
payload && typeof payload === "string" ? JSON.parse(payload) : {},
);
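// job_state payloads arrive as JSON strings keyed by job type, so parse once and pick out the requested job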
const currentJob = jobData[jobType] || null;
useEffect(() => {
let listener: (() => void) | undefined;
if (revalidateOnFocus) {
sendCommand("jobState");
listener = () => {
if (document.visibilityState === "visible") {
sendCommand("jobState");
}
};
addEventListener("visibilitychange", listener);
}
return () => {
if (listener) {
removeEventListener("visibilitychange", listener);
}
};
// eslint-disable-next-line react-hooks/exhaustive-deps
}, [revalidateOnFocus]);
return { payload: currentJob as Job | null };
}

View File

@@ -0,0 +1,76 @@
import { useTranslation } from "react-i18next";
import {
Tooltip,
TooltipContent,
TooltipTrigger,
} from "@/components/ui/tooltip";
import { cn } from "@/lib/utils";
type ConnectionQualityIndicatorProps = {
quality: "excellent" | "fair" | "poor" | "unusable";
expectedFps: number;
reconnects: number;
stalls: number;
};
export function ConnectionQualityIndicator({
quality,
expectedFps,
reconnects,
stalls,
}: ConnectionQualityIndicatorProps) {
const { t } = useTranslation(["views/system"]);
const getColorClass = (quality: string): string => {
switch (quality) {
case "excellent":
return "bg-success";
case "fair":
return "bg-yellow-500";
case "poor":
return "bg-orange-500";
case "unusable":
return "bg-destructive";
default:
return "bg-gray-500";
}
};
const qualityLabel = t(`cameras.connectionQuality.${quality}`);
return (
<Tooltip>
<TooltipTrigger asChild>
<div
className={cn(
"inline-block size-3 cursor-pointer rounded-full",
getColorClass(quality),
)}
/>
</TooltipTrigger>
<TooltipContent className="max-w-xs">
<div className="space-y-2">
<div className="font-semibold">
{t("cameras.connectionQuality.title")}
</div>
<div className="text-sm">
<div className="capitalize">{qualityLabel}</div>
<div className="mt-2 space-y-1 text-xs">
<div>
{t("cameras.connectionQuality.expectedFps")}:{" "}
{expectedFps.toFixed(1)} {t("cameras.connectionQuality.fps")}
</div>
<div>
{t("cameras.connectionQuality.reconnectsLastHour")}:{" "}
{reconnects}
</div>
<div>
{t("cameras.connectionQuality.stallsLastHour")}: {stalls}
</div>
</div>
</div>
</div>
</TooltipContent>
</Tooltip>
);
}

View File

@@ -1,9 +1,8 @@
import ActivityIndicator from "../indicators/activity-indicator";
import { LuTrash } from "react-icons/lu";
import { Button } from "../ui/button";
import { useCallback, useState } from "react";
import { isDesktop, isMobile } from "react-device-detect";
import { FaDownload, FaPlay, FaShareAlt } from "react-icons/fa";
import { useCallback, useMemo, useState } from "react";
import { isMobile } from "react-device-detect";
import { FiMoreVertical } from "react-icons/fi";
import { Skeleton } from "../ui/skeleton";
import {
Dialog,
@@ -14,35 +13,81 @@ import {
} from "../ui/dialog";
import { Input } from "../ui/input";
import useKeyboardListener from "@/hooks/use-keyboard-listener";
import { DeleteClipType, Export } from "@/types/export";
import { MdEditSquare } from "react-icons/md";
import { DeleteClipType, Export, ExportCase } from "@/types/export";
import { baseUrl } from "@/api/baseUrl";
import { cn } from "@/lib/utils";
import { shareOrCopy } from "@/utils/browserUtil";
import { useTranslation } from "react-i18next";
import { ImageShadowOverlay } from "../overlay/ImageShadowOverlay";
import BlurredIconButton from "../button/BlurredIconButton";
import { Tooltip, TooltipContent, TooltipTrigger } from "../ui/tooltip";
import { useIsAdmin } from "@/hooks/use-is-admin";
import {
DropdownMenu,
DropdownMenuContent,
DropdownMenuItem,
DropdownMenuTrigger,
} from "../ui/dropdown-menu";
import { FaFolder } from "react-icons/fa";
type ExportProps = {
type CaseCardProps = {
className: string;
exportCase: ExportCase;
exports: Export[];
onSelect: () => void;
};
export function CaseCard({
className,
exportCase,
exports,
onSelect,
}: CaseCardProps) {
const firstExport = useMemo(
() => exports.find((exp) => exp.thumb_path && exp.thumb_path.length > 0),
[exports],
);
return (
<div
className={cn(
"relative flex aspect-video size-full cursor-pointer items-center justify-center overflow-hidden rounded-lg bg-secondary md:rounded-2xl",
className,
)}
onClick={() => onSelect()}
>
{firstExport && (
<img
className="absolute inset-0 size-full object-cover"
src={`${baseUrl}${firstExport.thumb_path.replace("/media/frigate/", "")}`}
alt=""
/>
)}
<div className="pointer-events-none absolute inset-x-0 bottom-0 z-10 h-16 bg-gradient-to-t from-black/60 to-transparent" />
<div className="absolute bottom-2 left-2 z-20 flex items-center justify-start gap-2 text-white">
<FaFolder />
<div className="capitalize">{exportCase.name}</div>
</div>
</div>
);
}
type ExportCardProps = {
className: string;
exportedRecording: Export;
onSelect: (selected: Export) => void;
onRename: (original: string, update: string) => void;
onDelete: ({ file, exportName }: DeleteClipType) => void;
onAssignToCase?: (selected: Export) => void;
};
export default function ExportCard({
export function ExportCard({
className,
exportedRecording,
onSelect,
onRename,
onDelete,
}: ExportProps) {
onAssignToCase,
}: ExportCardProps) {
const { t } = useTranslation(["views/exports"]);
const isAdmin = useIsAdmin();
const [hovered, setHovered] = useState(false);
const [loading, setLoading] = useState(
exportedRecording.thumb_path.length > 0,
);
@@ -136,12 +181,14 @@ export default function ExportCard({
<div
className={cn(
"relative flex aspect-video items-center justify-center rounded-lg bg-black md:rounded-2xl",
"relative flex aspect-video cursor-pointer items-center justify-center rounded-lg bg-black md:rounded-2xl",
className,
)}
onMouseEnter={isDesktop ? () => setHovered(true) : undefined}
onMouseLeave={isDesktop ? () => setHovered(false) : undefined}
onClick={isDesktop ? undefined : () => setHovered(!hovered)}
onClick={() => {
if (!exportedRecording.in_progress) {
onSelect(exportedRecording);
}
}}
>
{exportedRecording.in_progress ? (
<ActivityIndicator />
@@ -158,95 +205,88 @@ export default function ExportCard({
)}
</>
)}
{hovered && (
<>
<div className="absolute inset-0 rounded-lg bg-black bg-opacity-60 md:rounded-2xl" />
<div className="absolute right-3 top-2">
<div className="flex items-center justify-center gap-4">
{!exportedRecording.in_progress && (
<Tooltip>
<TooltipTrigger asChild>
<BlurredIconButton
onClick={() =>
shareOrCopy(
`${baseUrl}export?id=${exportedRecording.id}`,
exportedRecording.name.replaceAll("_", " "),
)
}
>
<FaShareAlt className="size-4" />
</BlurredIconButton>
</TooltipTrigger>
<TooltipContent>{t("tooltip.shareExport")}</TooltipContent>
</Tooltip>
)}
{!exportedRecording.in_progress && (
{!exportedRecording.in_progress && (
<div className="absolute bottom-2 right-3 z-40">
<DropdownMenu modal={false}>
<DropdownMenuTrigger>
<BlurredIconButton
aria-label={t("tooltip.editName")}
onClick={(e) => e.stopPropagation()}
>
<FiMoreVertical className="size-5" />
</BlurredIconButton>
</DropdownMenuTrigger>
<DropdownMenuContent align="end">
<DropdownMenuItem
className="cursor-pointer"
aria-label={t("tooltip.shareExport")}
onClick={(e) => {
e.stopPropagation();
shareOrCopy(
`${baseUrl}export?id=${exportedRecording.id}`,
exportedRecording.name.replaceAll("_", " "),
);
}}
>
{t("tooltip.shareExport")}
</DropdownMenuItem>
<DropdownMenuItem
className="cursor-pointer"
aria-label={t("tooltip.downloadVideo")}
>
<a
download
href={`${baseUrl}${exportedRecording.video_path.replace("/media/frigate/", "")}`}
onClick={(e) => e.stopPropagation()}
>
<Tooltip>
<TooltipTrigger asChild>
<BlurredIconButton>
<FaDownload className="size-4" />
</BlurredIconButton>
</TooltipTrigger>
<TooltipContent>
{t("tooltip.downloadVideo")}
</TooltipContent>
</Tooltip>
{t("tooltip.downloadVideo")}
</a>
)}
{isAdmin && !exportedRecording.in_progress && (
<Tooltip>
<TooltipTrigger asChild>
<BlurredIconButton
onClick={() =>
setEditName({
original: exportedRecording.name,
update: undefined,
})
}
>
<MdEditSquare className="size-4" />
</BlurredIconButton>
</TooltipTrigger>
<TooltipContent>{t("tooltip.editName")}</TooltipContent>
</Tooltip>
</DropdownMenuItem>
{isAdmin && onAssignToCase && (
<DropdownMenuItem
className="cursor-pointer"
aria-label={t("tooltip.assignToCase")}
onClick={(e) => {
e.stopPropagation();
onAssignToCase(exportedRecording);
}}
>
{t("tooltip.assignToCase")}
</DropdownMenuItem>
)}
{isAdmin && (
<Tooltip>
<TooltipTrigger asChild>
<BlurredIconButton
onClick={() =>
onDelete({
file: exportedRecording.id,
exportName: exportedRecording.name,
})
}
>
<LuTrash className="size-4 fill-destructive text-destructive hover:text-white" />
</BlurredIconButton>
</TooltipTrigger>
<TooltipContent>{t("tooltip.deleteExport")}</TooltipContent>
</Tooltip>
<DropdownMenuItem
className="cursor-pointer"
aria-label={t("tooltip.editName")}
onClick={(e) => {
e.stopPropagation();
setEditName({
original: exportedRecording.name,
update: undefined,
});
}}
>
{t("tooltip.editName")}
</DropdownMenuItem>
)}
</div>
</div>
{!exportedRecording.in_progress && (
<Button
className="absolute left-1/2 top-1/2 h-20 w-20 -translate-x-1/2 -translate-y-1/2 cursor-pointer text-white hover:bg-transparent hover:text-white"
aria-label={t("button.play", { ns: "common" })}
variant="ghost"
onClick={() => {
onSelect(exportedRecording);
}}
>
<FaPlay />
</Button>
)}
</>
{isAdmin && (
<DropdownMenuItem
className="cursor-pointer"
aria-label={t("tooltip.deleteExport")}
onClick={(e) => {
e.stopPropagation();
onDelete({
file: exportedRecording.id,
exportName: exportedRecording.name,
});
}}
>
{t("tooltip.deleteExport")}
</DropdownMenuItem>
)}
</DropdownMenuContent>
</DropdownMenu>
</div>
)}
{loading && (
<Skeleton className="absolute inset-0 aspect-video rounded-lg md:rounded-2xl" />

View File

@@ -0,0 +1,67 @@
import { cn } from "@/lib/utils";
import {
DEFAULT_EXPORT_FILTERS,
ExportFilter,
ExportFilters,
} from "@/types/export";
import { CamerasFilterButton } from "./CamerasFilterButton";
import { useAllowedCameras } from "@/hooks/use-allowed-cameras";
import { useMemo } from "react";
import { FrigateConfig } from "@/types/frigateConfig";
import useSWR from "swr";
type ExportFilterGroupProps = {
className: string;
filters?: ExportFilters[];
filter?: ExportFilter;
onUpdateFilter: (filter: ExportFilter) => void;
};
export default function ExportFilterGroup({
className,
filter,
filters = DEFAULT_EXPORT_FILTERS,
onUpdateFilter,
}: ExportFilterGroupProps) {
const { data: config } = useSWR<FrigateConfig>("config", {
revalidateOnFocus: false,
});
const allowedCameras = useAllowedCameras();
const filterValues = useMemo(
() => ({
cameras: allowedCameras,
}),
[allowedCameras],
);
const groups = useMemo(() => {
if (!config) {
return [];
}
return Object.entries(config.camera_groups).sort(
(a, b) => a[1].order - b[1].order,
);
}, [config]);
return (
<div
className={cn(
"scrollbar-container flex justify-center gap-2 overflow-x-auto",
className,
)}
>
{filters.includes("cameras") && (
<CamerasFilterButton
allCameras={filterValues.cameras}
groups={groups}
selectedCameras={filter?.cameras}
hideText={false}
updateCameraFilter={(newCameras) => {
onUpdateFilter({ ...filter, cameras: newCameras });
}}
/>
)}
</div>
);
}

View File

@@ -22,7 +22,14 @@ import useSWR from "swr";
import { FrigateConfig } from "@/types/frigateConfig";
import { Popover, PopoverContent, PopoverTrigger } from "../ui/popover";
import { TimezoneAwareCalendar } from "./ReviewActivityCalendar";
import { SelectSeparator } from "../ui/select";
import {
Select,
SelectContent,
SelectItem,
SelectSeparator,
SelectTrigger,
SelectValue,
} from "../ui/select";
import { isDesktop, isIOS, isMobile } from "react-device-detect";
import { Drawer, DrawerContent, DrawerTrigger } from "../ui/drawer";
import SaveExportOverlay from "./SaveExportOverlay";
@@ -31,6 +38,7 @@ import { baseUrl } from "@/api/baseUrl";
import { cn } from "@/lib/utils";
import { GenericVideoPlayer } from "../player/GenericVideoPlayer";
import { useTranslation } from "react-i18next";
import { ExportCase } from "@/types/export";
const EXPORT_OPTIONS = [
"1",
@@ -67,6 +75,9 @@ export default function ExportDialog({
}: ExportDialogProps) {
const { t } = useTranslation(["components/dialog"]);
const [name, setName] = useState("");
const [selectedCaseId, setSelectedCaseId] = useState<string | undefined>(
undefined,
);
const onStartExport = useCallback(() => {
if (!range) {
@@ -89,6 +100,7 @@ export default function ExportDialog({
{
playback: "realtime",
name,
export_case_id: selectedCaseId || undefined,
},
)
.then((response) => {
@@ -102,6 +114,7 @@ export default function ExportDialog({
),
});
setName("");
setSelectedCaseId(undefined);
setRange(undefined);
setMode("none");
}
@@ -118,10 +131,11 @@ export default function ExportDialog({
{ position: "top-center" },
);
});
}, [camera, name, range, setRange, setName, setMode, t]);
}, [camera, name, range, selectedCaseId, setRange, setName, setMode, t]);
const handleCancel = useCallback(() => {
setName("");
setSelectedCaseId(undefined);
setMode("none");
setRange(undefined);
}, [setMode, setRange]);
@@ -190,8 +204,10 @@ export default function ExportDialog({
currentTime={currentTime}
range={range}
name={name}
selectedCaseId={selectedCaseId}
onStartExport={onStartExport}
setName={setName}
setSelectedCaseId={setSelectedCaseId}
setRange={setRange}
setMode={setMode}
onCancel={handleCancel}
@@ -207,8 +223,10 @@ type ExportContentProps = {
currentTime: number;
range?: TimeRange;
name: string;
selectedCaseId?: string;
onStartExport: () => void;
setName: (name: string) => void;
setSelectedCaseId: (caseId: string | undefined) => void;
setRange: (range: TimeRange | undefined) => void;
setMode: (mode: ExportMode) => void;
onCancel: () => void;
@@ -218,14 +236,17 @@ export function ExportContent({
currentTime,
range,
name,
selectedCaseId,
onStartExport,
setName,
setSelectedCaseId,
setRange,
setMode,
onCancel,
}: ExportContentProps) {
const { t } = useTranslation(["components/dialog"]);
const [selectedOption, setSelectedOption] = useState<ExportOption>("1");
const { data: cases } = useSWR<ExportCase[]>("cases");
const onSelectTime = useCallback(
(option: ExportOption) => {
@@ -320,6 +341,44 @@ export function ExportContent({
value={name}
onChange={(e) => setName(e.target.value)}
/>
<div className="my-4">
<Label className="text-sm text-secondary-foreground">
{t("export.case.label", { defaultValue: "Case (optional)" })}
</Label>
<Select
value={selectedCaseId || "none"}
onValueChange={(value) =>
setSelectedCaseId(value === "none" ? undefined : value)
}
>
<SelectTrigger className="mt-2">
<SelectValue
placeholder={t("export.case.placeholder", {
defaultValue: "Select a case (optional)",
})}
/>
</SelectTrigger>
<SelectContent>
<SelectItem
value="none"
className="cursor-pointer hover:bg-accent hover:text-accent-foreground"
>
{t("label.none", { ns: "common" })}
</SelectItem>
{cases
?.sort((a, b) => a.name.localeCompare(b.name))
.map((caseItem) => (
<SelectItem
key={caseItem.id}
value={caseItem.id}
className="cursor-pointer hover:bg-accent hover:text-accent-foreground"
>
{caseItem.name}
</SelectItem>
))}
</SelectContent>
</Select>
</div>
{isDesktop && <SelectSeparator className="my-4 bg-secondary" />}
<DialogFooter
className={isDesktop ? "" : "mt-3 flex flex-col-reverse gap-4"}

View File

@@ -75,6 +75,9 @@ export default function MobileReviewSettingsDrawer({
// exports
const [name, setName] = useState("");
const [selectedCaseId, setSelectedCaseId] = useState<string | undefined>(
undefined,
);
const onStartExport = useCallback(() => {
if (!range) {
toast.error(t("toast.error.noValidTimeSelected"), {
@@ -96,6 +99,7 @@ export default function MobileReviewSettingsDrawer({
{
playback: "realtime",
name,
export_case_id: selectedCaseId || undefined,
},
)
.then((response) => {
@@ -114,6 +118,7 @@ export default function MobileReviewSettingsDrawer({
},
);
setName("");
setSelectedCaseId(undefined);
setRange(undefined);
setMode("none");
}
@@ -133,7 +138,7 @@ export default function MobileReviewSettingsDrawer({
},
);
});
}, [camera, name, range, setRange, setName, setMode, t]);
}, [camera, name, range, selectedCaseId, setRange, setName, setMode, t]);
// filters
@@ -200,8 +205,10 @@ export default function MobileReviewSettingsDrawer({
currentTime={currentTime}
range={range}
name={name}
selectedCaseId={selectedCaseId}
onStartExport={onStartExport}
setName={setName}
setSelectedCaseId={setSelectedCaseId}
setRange={setRange}
setMode={(mode) => {
setMode(mode);
@@ -213,6 +220,7 @@ export default function MobileReviewSettingsDrawer({
onCancel={() => {
setMode("none");
setRange(undefined);
setSelectedCaseId(undefined);
setDrawerMode("select");
}}
/>

View File

@@ -0,0 +1,166 @@
import { Button } from "@/components/ui/button";
import {
Dialog,
DialogContent,
DialogDescription,
DialogFooter,
DialogHeader,
DialogTitle,
} from "@/components/ui/dialog";
import { Input } from "@/components/ui/input";
import {
Select,
SelectContent,
SelectItem,
SelectTrigger,
SelectValue,
} from "@/components/ui/select";
import { cn } from "@/lib/utils";
import { isMobile } from "react-device-detect";
import { useEffect, useMemo, useState } from "react";
import { useTranslation } from "react-i18next";
type Option = {
value: string;
label: string;
};
type OptionAndInputDialogProps = {
open: boolean;
title: string;
description?: string;
options: Option[];
newValueKey: string;
initialValue?: string;
nameLabel: string;
descriptionLabel: string;
setOpen: (open: boolean) => void;
onSave: (value: string) => void;
onCreateNew: (name: string, description: string) => void;
};
export default function OptionAndInputDialog({
open,
title,
description,
options,
newValueKey,
initialValue,
nameLabel,
descriptionLabel,
setOpen,
onSave,
onCreateNew,
}: OptionAndInputDialogProps) {
const { t } = useTranslation("common");
const firstOption = useMemo(() => options[0]?.value, [options]);
const [selectedValue, setSelectedValue] = useState<string | undefined>(
initialValue ?? firstOption,
);
const [name, setName] = useState("");
const [descriptionValue, setDescriptionValue] = useState("");
useEffect(() => {
if (open) {
setSelectedValue(initialValue ?? firstOption);
setName("");
setDescriptionValue("");
}
}, [open, initialValue, firstOption]);
const isNew = selectedValue === newValueKey;
const disableSave = !selectedValue || (isNew && name.trim().length === 0);
const handleSave = () => {
if (!selectedValue) {
return;
}
const trimmedName = name.trim();
const trimmedDescription = descriptionValue.trim();
if (isNew) {
onCreateNew(trimmedName, trimmedDescription);
} else {
onSave(selectedValue);
}
setOpen(false);
};
return (
<Dialog open={open} defaultOpen={false} onOpenChange={setOpen}>
<DialogContent
className={cn("space-y-4", isMobile && "px-4")}
onOpenAutoFocus={(e) => {
if (isMobile) {
e.preventDefault();
}
}}
>
<DialogHeader>
<DialogTitle>{title}</DialogTitle>
{description && <DialogDescription>{description}</DialogDescription>}
</DialogHeader>
<div className="space-y-2">
<Select
value={selectedValue}
onValueChange={(val) => setSelectedValue(val)}
>
<SelectTrigger>
<SelectValue />
</SelectTrigger>
<SelectContent>
{options.map((option) => (
<SelectItem key={option.value} value={option.value}>
{option.label}
</SelectItem>
))}
</SelectContent>
</Select>
</div>
{isNew && (
<div className="space-y-4">
<div className="space-y-1">
<label className="text-sm font-medium text-secondary-foreground">
{nameLabel}
</label>
<Input value={name} onChange={(e) => setName(e.target.value)} />
</div>
<div className="space-y-1">
<label className="text-sm font-medium text-secondary-foreground">
{descriptionLabel}
</label>
<Input
value={descriptionValue}
onChange={(e) => setDescriptionValue(e.target.value)}
/>
</div>
</div>
)}
<DialogFooter className={cn("pt-2", isMobile && "gap-2")}>
<Button
type="button"
variant="outline"
onClick={() => {
setOpen(false);
}}
>
{t("button.cancel")}
</Button>
<Button
type="button"
variant="select"
disabled={disableSave}
onClick={handleSave}
>
{t("button.save")}
</Button>
</DialogFooter>
</DialogContent>
</Dialog>
);
}

View File

@@ -81,6 +81,11 @@ export default function LivePlayer({
const internalContainerRef = useRef<HTMLDivElement | null>(null);
const cameraName = useCameraFriendlyName(cameraConfig);
// player is showing on a dashboard if containerRef is not provided
const inDashboard = containerRef?.current == null;
// stats
const [stats, setStats] = useState<PlayerStatsType>({
@@ -408,6 +413,28 @@ export default function LivePlayer({
/>
</div>
{offline && inDashboard && (
<>
<div className="absolute inset-0 rounded-lg bg-black/50 md:rounded-2xl" />
<div className="absolute inset-0 left-1/2 top-1/2 flex -translate-x-1/2 -translate-y-1/2 items-center justify-center">
<div className="flex flex-col items-center justify-center gap-2 rounded-lg bg-background/50 p-3 text-center">
<div className="text-md">{t("streamOffline.title")}</div>
<TbExclamationCircle className="size-6" />
<p className="text-center text-sm">
<Trans
ns="components/player"
values={{
cameraName: cameraName,
}}
>
streamOffline.desc
</Trans>
</p>
</div>
</div>
</>
)}
{offline && !showStillWithoutActivity && cameraEnabled && (
<div className="absolute inset-0 left-1/2 top-1/2 flex h-96 w-96 -translate-x-1/2 -translate-y-1/2">
<div className="flex flex-col items-center justify-center rounded-lg bg-background/50 p-5">

View File

@@ -1,5 +1,5 @@
import { baseUrl } from "@/api/baseUrl";
import ExportCard from "@/components/card/ExportCard";
import { CaseCard, ExportCard } from "@/components/card/ExportCard";
import {
AlertDialog,
AlertDialogCancel,
@@ -11,64 +11,144 @@ import {
} from "@/components/ui/alert-dialog";
import { Button } from "@/components/ui/button";
import { Dialog, DialogContent, DialogTitle } from "@/components/ui/dialog";
import Heading from "@/components/ui/heading";
import { Input } from "@/components/ui/input";
import { Toaster } from "@/components/ui/sonner";
import useKeyboardListener from "@/hooks/use-keyboard-listener";
import { useSearchEffect } from "@/hooks/use-overlay-state";
import { useHistoryBack } from "@/hooks/use-history-back";
import { useApiFilterArgs } from "@/hooks/use-api-filter";
import { cn } from "@/lib/utils";
import { DeleteClipType, Export } from "@/types/export";
import {
DeleteClipType,
Export,
ExportCase,
ExportFilter,
} from "@/types/export";
import OptionAndInputDialog from "@/components/overlay/dialog/OptionAndInputDialog";
import axios from "axios";
import { useCallback, useEffect, useMemo, useRef, useState } from "react";
import { isMobile } from "react-device-detect";
import {
MutableRefObject,
useCallback,
useEffect,
useMemo,
useRef,
useState,
} from "react";
import { isMobile, isMobileOnly } from "react-device-detect";
import { useTranslation } from "react-i18next";
import { LuFolderX } from "react-icons/lu";
import { toast } from "sonner";
import useSWR from "swr";
import ExportFilterGroup from "@/components/filter/ExportFilterGroup";
// always parse these as string arrays
const EXPORT_FILTER_ARRAY_KEYS = ["cameras"];
function Exports() {
const { t } = useTranslation(["views/exports"]);
const { data: exports, mutate } = useSWR<Export[]>("exports");
useEffect(() => {
document.title = t("documentTitle");
}, [t]);
// Filters
const [exportFilter, setExportFilter, exportSearchParams] =
useApiFilterArgs<ExportFilter>(EXPORT_FILTER_ARRAY_KEYS);
// Data
const { data: cases, mutate: updateCases } = useSWR<ExportCase[]>("cases");
const { data: rawExports, mutate: updateExports } = useSWR<Export[]>(
exportSearchParams && Object.keys(exportSearchParams).length > 0
? ["exports", exportSearchParams]
: "exports",
);
const exportsByCase = useMemo<{ [caseId: string]: Export[] }>(() => {
const grouped: { [caseId: string]: Export[] } = {};
(rawExports ?? []).forEach((exp) => {
const caseId = exp.export_case || "none";
if (!grouped[caseId]) {
grouped[caseId] = [];
}
grouped[caseId].push(exp);
});
return grouped;
}, [rawExports]);
const filteredCases = useMemo<ExportCase[]>(() => {
if (!cases) {
return [];
}
return cases.filter((caseItem) => {
const caseExports = exportsByCase[caseItem.id];
return caseExports?.length;
});
}, [cases, exportsByCase]);
const exports = useMemo<Export[]>(
() => exportsByCase["none"] || [],
[exportsByCase],
);
const mutate = useCallback(() => {
updateExports();
updateCases();
}, [updateExports, updateCases]);
// Search
const [search, setSearch] = useState("");
const filteredExports = useMemo(() => {
if (!search || !exports) {
return exports;
}
return exports.filter((exp) =>
exp.name
.toLowerCase()
.replaceAll("_", " ")
.includes(search.toLowerCase()),
);
}, [exports, search]);
// Viewing
const [selected, setSelected] = useState<Export>();
const [selectedCaseId, setSelectedCaseId] = useState<string | undefined>(
undefined,
);
const [selectedAspect, setSelectedAspect] = useState(0.0);
// Handle browser back button to deselect case before navigating away
useHistoryBack({
enabled: true,
open: selectedCaseId !== undefined,
onClose: () => setSelectedCaseId(undefined),
});
useSearchEffect("id", (id) => {
if (!exports) {
if (!rawExports) {
return false;
}
setSelected(exports.find((exp) => exp.id == id));
setSelected(rawExports.find((exp) => exp.id == id));
return true;
});
// Deleting
useSearchEffect("caseId", (caseId: string) => {
if (!filteredCases) {
return false;
}
const exists = filteredCases.some((c) => c.id === caseId);
if (!exists) {
return false;
}
setSelectedCaseId(caseId);
return true;
});
// Modifying
const [deleteClip, setDeleteClip] = useState<DeleteClipType | undefined>();
const [exportToAssign, setExportToAssign] = useState<Export | undefined>();
const onHandleDelete = useCallback(() => {
if (!deleteClip) {
@@ -83,8 +163,6 @@ function Exports() {
});
}, [deleteClip, mutate]);
// Renaming
const onHandleRename = useCallback(
(id: string, update: string) => {
axios
@@ -107,7 +185,7 @@ function Exports() {
});
});
},
[mutate, t],
[mutate, setDeleteClip, t],
);
// Keyboard Listener
@@ -115,10 +193,27 @@ function Exports() {
const contentRef = useRef<HTMLDivElement | null>(null);
useKeyboardListener([], undefined, contentRef);
const selectedCase = useMemo(
() => filteredCases?.find((c) => c.id === selectedCaseId),
[filteredCases, selectedCaseId],
);
const resetCaseDialog = useCallback(() => {
setExportToAssign(undefined);
}, []);
return (
<div className="flex size-full flex-col gap-2 overflow-hidden px-1 pt-2 md:p-2">
<Toaster closeButton={true} />
<CaseAssignmentDialog
exportToAssign={exportToAssign}
cases={cases}
selectedCaseId={selectedCaseId}
onClose={resetCaseDialog}
mutate={mutate}
/>
<AlertDialog
open={deleteClip != undefined}
onOpenChange={() => setDeleteClip(undefined)}
@@ -187,47 +282,364 @@ function Exports() {
</DialogContent>
</Dialog>
{exports && (
<div className="flex w-full items-center justify-center p-2">
<div
className={cn(
"flex w-full flex-col items-start space-y-2 pr-2 md:mb-2 lg:relative lg:h-10 lg:flex-row lg:items-center lg:space-y-0",
isMobileOnly && "mb-2 h-auto flex-wrap gap-2 space-y-0",
)}
>
<div className="w-full">
<Input
className="text-md w-full bg-muted md:w-1/3"
className="text-md w-full bg-muted md:w-1/2"
placeholder={t("search")}
value={search}
onChange={(e) => setSearch(e.target.value)}
/>
</div>
)}
<ExportFilterGroup
className="w-full justify-between md:justify-start lg:justify-end"
filter={exportFilter}
filters={["cameras"]}
onUpdateFilter={setExportFilter}
/>
</div>
<div className="w-full overflow-hidden">
{exports && filteredExports && filteredExports.length > 0 ? (
<div
ref={contentRef}
className="scrollbar-container grid size-full gap-2 overflow-y-auto sm:grid-cols-2 lg:grid-cols-3 xl:grid-cols-4"
>
{Object.values(exports).map((item) => (
<ExportCard
key={item.name}
className={
search == "" || filteredExports.includes(item) ? "" : "hidden"
}
exportedRecording={item}
onSelect={setSelected}
onRename={onHandleRename}
onDelete={({ file, exportName }) =>
setDeleteClip({ file, exportName })
}
/>
))}
</div>
) : (
<div className="absolute left-1/2 top-1/2 flex -translate-x-1/2 -translate-y-1/2 flex-col items-center justify-center text-center">
<LuFolderX className="size-16" />
{t("noExports")}
</div>
)}
{selectedCase ? (
<CaseView
contentRef={contentRef}
selectedCase={selectedCase}
exports={exportsByCase[selectedCase.id] || []}
search={search}
setSelected={setSelected}
renameClip={onHandleRename}
setDeleteClip={setDeleteClip}
onAssignToCase={setExportToAssign}
/>
) : (
<AllExportsView
contentRef={contentRef}
search={search}
cases={filteredCases}
exports={exports}
exportsByCase={exportsByCase}
setSelectedCaseId={setSelectedCaseId}
setSelected={setSelected}
renameClip={onHandleRename}
setDeleteClip={setDeleteClip}
onAssignToCase={setExportToAssign}
/>
)}
</div>
);
}
type AllExportsViewProps = {
contentRef: MutableRefObject<HTMLDivElement | null>;
search: string;
cases?: ExportCase[];
exports: Export[];
exportsByCase: { [caseId: string]: Export[] };
setSelectedCaseId: (id: string) => void;
setSelected: (e: Export) => void;
renameClip: (id: string, update: string) => void;
setDeleteClip: (d: DeleteClipType | undefined) => void;
onAssignToCase: (e: Export) => void;
};
function AllExportsView({
contentRef,
search,
cases,
exports,
exportsByCase,
setSelectedCaseId,
setSelected,
renameClip,
setDeleteClip,
onAssignToCase,
}: AllExportsViewProps) {
const { t } = useTranslation(["views/exports"]);
// Filter
const filteredCases = useMemo(() => {
if (!search || !cases) {
return cases || [];
}
return cases.filter(
(caseItem) =>
caseItem.name.toLowerCase().includes(search.toLowerCase()) ||
(caseItem.description &&
caseItem.description.toLowerCase().includes(search.toLowerCase())),
);
}, [search, cases]);
const filteredExports = useMemo<Export[]>(() => {
if (!search) {
return exports;
}
return exports.filter((exp) =>
exp.name
.toLowerCase()
.replaceAll("_", " ")
.includes(search.toLowerCase()),
);
}, [exports, search]);
return (
<div className="w-full overflow-hidden">
{filteredCases?.length || filteredExports.length ? (
<div
ref={contentRef}
className="scrollbar-container flex size-full flex-col gap-4 overflow-y-auto"
>
{filteredCases.length > 0 && (
<div className="space-y-2">
<Heading as="h4">{t("headings.cases")}</Heading>
<div className="grid gap-2 sm:grid-cols-2 lg:grid-cols-3 xl:grid-cols-4">
{cases?.map((item) => (
<CaseCard
key={item.name}
className={
search == "" || filteredCases?.includes(item)
? ""
: "hidden"
}
exportCase={item}
exports={exportsByCase[item.id] || []}
onSelect={() => {
setSelectedCaseId(item.id);
}}
/>
))}
</div>
</div>
)}
{filteredExports.length > 0 && (
<div className="space-y-4">
<Heading as="h4">{t("headings.uncategorizedExports")}</Heading>
<div
ref={contentRef}
className="scrollbar-container grid gap-2 overflow-y-auto sm:grid-cols-2 lg:grid-cols-3 xl:grid-cols-4"
>
{exports.map((item) => (
<ExportCard
key={item.name}
className={
search == "" || filteredExports.includes(item)
? ""
: "hidden"
}
exportedRecording={item}
onSelect={setSelected}
onRename={renameClip}
onDelete={({ file, exportName }) =>
setDeleteClip({ file, exportName })
}
onAssignToCase={onAssignToCase}
/>
))}
</div>
</div>
)}
</div>
) : (
<div className="absolute left-1/2 top-1/2 flex -translate-x-1/2 -translate-y-1/2 flex-col items-center justify-center text-center">
<LuFolderX className="size-16" />
{t("noExports")}
</div>
)}
</div>
);
}
type CaseViewProps = {
contentRef: MutableRefObject<HTMLDivElement | null>;
selectedCase: ExportCase;
exports?: Export[];
search: string;
setSelected: (e: Export) => void;
renameClip: (id: string, update: string) => void;
setDeleteClip: (d: DeleteClipType | undefined) => void;
onAssignToCase: (e: Export) => void;
};
function CaseView({
contentRef,
selectedCase,
exports,
search,
setSelected,
renameClip,
setDeleteClip,
onAssignToCase,
}: CaseViewProps) {
const filteredExports = useMemo<Export[]>(() => {
const caseExports = (exports || []).filter(
(e) => e.export_case == selectedCase.id,
);
if (!search) {
return caseExports;
}
return caseExports.filter((exp) =>
exp.name
.toLowerCase()
.replaceAll("_", " ")
.includes(search.toLowerCase()),
);
}, [selectedCase, exports, search]);
return (
<div className="flex size-full flex-col gap-8 overflow-hidden">
<div className="flex shrink-0 flex-col gap-1">
<Heading className="capitalize" as="h2">
{selectedCase.name}
</Heading>
<div className="text-secondary-foreground">
{selectedCase.description}
</div>
</div>
<div
ref={contentRef}
className="scrollbar-container grid min-h-0 flex-1 content-start gap-2 overflow-y-auto sm:grid-cols-2 lg:grid-cols-3 xl:grid-cols-4"
>
{exports?.map((item) => (
<ExportCard
key={item.name}
className={filteredExports.includes(item) ? "" : "hidden"}
exportedRecording={item}
onSelect={setSelected}
onRename={renameClip}
onDelete={({ file, exportName }) =>
setDeleteClip({ file, exportName })
}
onAssignToCase={onAssignToCase}
/>
))}
</div>
</div>
);
}
type CaseAssignmentDialogProps = {
exportToAssign?: Export;
cases?: ExportCase[];
selectedCaseId?: string;
onClose: () => void;
mutate: () => void;
};
function CaseAssignmentDialog({
exportToAssign,
cases,
selectedCaseId,
onClose,
mutate,
}: CaseAssignmentDialogProps) {
const { t } = useTranslation(["views/exports"]);
const caseOptions = useMemo(
() => [
...(cases ?? [])
.map((c) => ({
value: c.id,
label: c.name,
}))
.sort((cA, cB) => cA.label.localeCompare(cB.label)),
{
value: "new",
label: t("caseDialog.newCaseOption"),
},
],
[cases, t],
);
const handleSave = useCallback(
async (caseId: string) => {
if (!exportToAssign) return;
try {
await axios.patch(`export/${exportToAssign.id}/case`, {
export_case_id: caseId,
});
mutate();
onClose();
} catch (error: unknown) {
const apiError = error as {
response?: { data?: { message?: string; detail?: string } };
};
const errorMessage =
apiError.response?.data?.message ||
apiError.response?.data?.detail ||
"Unknown error";
toast.error(t("toast.error.assignCaseFailed", { errorMessage }), {
position: "top-center",
});
}
},
[exportToAssign, mutate, onClose, t],
);
const handleCreateNew = useCallback(
async (name: string, description: string) => {
if (!exportToAssign) return;
try {
const createResp = await axios.post("cases", {
name,
description,
});
const newCaseId: string | undefined = createResp.data?.id;
if (newCaseId) {
await axios.patch(`export/${exportToAssign.id}/case`, {
export_case_id: newCaseId,
});
}
mutate();
onClose();
} catch (error: unknown) {
const apiError = error as {
response?: { data?: { message?: string; detail?: string } };
};
const errorMessage =
apiError.response?.data?.message ||
apiError.response?.data?.detail ||
"Unknown error";
toast.error(t("toast.error.assignCaseFailed", { errorMessage }), {
position: "top-center",
});
}
},
[exportToAssign, mutate, onClose, t],
);
if (!exportToAssign) {
return null;
}
return (
<OptionAndInputDialog
open={!!exportToAssign}
title={t("caseDialog.title")}
description={t("caseDialog.description")}
setOpen={(open) => {
if (!open) {
onClose();
}
}}
options={caseOptions}
nameLabel={t("caseDialog.nameLabel")}
descriptionLabel={t("caseDialog.descriptionLabel")}
initialValue={selectedCaseId}
newValueKey="new"
onSave={handleSave}
onCreateNew={handleCreateNew}
/>
);
}
export default Exports;
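As a reading aid for the diff above: the grouping performed by the exportsByCase memo reduces to a small pure helper. The sketch below is illustrative only, reusing the Export type from "@/types/export" shown later in this compare; it is not part of the changes.

import { Export } from "@/types/export";

// Group exports by case id, collecting uncategorized exports under "none",
// the same convention the exportsByCase memo relies on. Illustrative sketch.
function groupExportsByCase(exports: Export[]): Record<string, Export[]> {
  const grouped: Record<string, Export[]> = {};
  for (const exp of exports) {
    const caseId = exp.export_case || "none";
    (grouped[caseId] ??= []).push(exp);
  }
  return grouped;
}

With such a helper, the component's uncategorized list is simply groupExportsByCase(rawExports ?? [])["none"] ?? [].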

View File

@@ -36,6 +36,7 @@ import NotificationView from "@/views/settings/NotificationsSettingsView";
import EnrichmentsSettingsView from "@/views/settings/EnrichmentsSettingsView";
import UiSettingsView from "@/views/settings/UiSettingsView";
import FrigatePlusSettingsView from "@/views/settings/FrigatePlusSettingsView";
import MaintenanceSettingsView from "@/views/settings/MaintenanceSettingsView";
import { useSearchEffect } from "@/hooks/use-overlay-state";
import { useNavigate, useSearchParams } from "react-router-dom";
import { useInitialCameraState } from "@/api/ws";
@@ -81,6 +82,7 @@ const allSettingsViews = [
"roles",
"notifications",
"frigateplus",
"maintenance",
] as const;
type SettingsType = (typeof allSettingsViews)[number];
@@ -120,6 +122,10 @@ const settingsGroups = [
label: "frigateplus",
items: [{ key: "frigateplus", component: FrigatePlusSettingsView }],
},
{
label: "maintenance",
items: [{ key: "maintenance", component: MaintenanceSettingsView }],
},
];
const CAMERA_SELECT_BUTTON_PAGES = [

View File

@@ -6,9 +6,28 @@ export type Export = {
video_path: string;
thumb_path: string;
in_progress: boolean;
export_case?: string;
};
export type ExportCase = {
id: string;
name: string;
description: string;
created_at: number;
updated_at: number;
};
export type DeleteClipType = {
file: string;
exportName: string;
};
// filtering
const EXPORT_FILTERS = ["cameras"] as const;
export type ExportFilters = (typeof EXPORT_FILTERS)[number];
export const DEFAULT_EXPORT_FILTERS: ExportFilters[] = ["cameras"];
export type ExportFilter = {
cameras?: string[];
};

View File

@@ -197,7 +197,6 @@ export interface CameraConfig {
days: number;
mode: string;
};
sync_recordings: boolean;
};
review: {
alerts: {
@@ -542,7 +541,6 @@ export interface FrigateConfig {
days: number;
mode: string;
};
sync_recordings: boolean;
};
rtmp: {

View File

@@ -24,6 +24,10 @@ export type CameraStats = {
pid: number;
process_fps: number;
skipped_fps: number;
connection_quality: "excellent" | "fair" | "poor" | "unusable";
expected_fps: number;
reconnects_last_hour: number;
stalls_last_hour: number;
};
export type CpuStats = {
@@ -37,6 +41,7 @@ export type DetectorStats = {
detection_start: number;
inference_speed: number;
pid: number;
temperature?: number;
};
export type EmbeddingsStats = {
@@ -56,11 +61,13 @@ export type GpuStats = {
enc?: string;
dec?: string;
pstate?: string;
temp?: number;
};
export type NpuStats = {
npu: number;
mem: string;
temp?: number;
};
export type GpuInfo = "vainfo" | "nvinfo";
@@ -68,7 +75,6 @@ export type GpuInfo = "vainfo" | "nvinfo";
export type ServiceStats = {
last_updated: number;
storage: { [path: string]: StorageStats };
temperatures: { [apex: string]: number };
uptime: number;
latest_version: string;
version: string;

View File

@@ -126,3 +126,32 @@ export type TriggerStatus = {
type: string;
score: number;
};
export type MediaSyncStats = {
files_checked: number;
orphans_found: number;
orphans_deleted: number;
aborted: boolean;
error: string | null;
};
export type MediaSyncTotals = {
files_checked: number;
orphans_found: number;
orphans_deleted: number;
};
export type MediaSyncResults = {
[mediaType: string]: MediaSyncStats | MediaSyncTotals;
totals: MediaSyncTotals;
};
export type Job = {
id: string;
job_type: string;
status: string;
results?: MediaSyncResults;
start_time?: number;
end_time?: number;
error_message?: string;
};
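Because MediaSyncResults mixes per-media-type MediaSyncStats entries with a single totals entry, consumers currently filter out the "totals" key and cast (as MaintenanceSettingsView does below). A type guard keyed on the aborted field, which only the per-type shape carries, would avoid the cast; isMediaSyncStats is a hypothetical helper, not part of this compare.

import { MediaSyncStats, MediaSyncTotals } from "@/types/ws";

// Only per-media-type results carry `aborted`/`error`, so the presence of
// `aborted` is enough to tell the two shapes apart at runtime.
export function isMediaSyncStats(
  entry: MediaSyncStats | MediaSyncTotals,
): entry is MediaSyncStats {
  return "aborted" in entry;
}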

View File

@@ -0,0 +1,442 @@
import Heading from "@/components/ui/heading";
import { Button } from "@/components/ui/button";
import { Label } from "@/components/ui/label";
import { Separator } from "@/components/ui/separator";
import { Toaster } from "@/components/ui/sonner";
import ActivityIndicator from "@/components/indicators/activity-indicator";
import { useCallback, useState } from "react";
import { useTranslation } from "react-i18next";
import axios from "axios";
import { toast } from "sonner";
import { useJobStatus } from "@/api/ws";
import { Switch } from "@/components/ui/switch";
import { LuCheck, LuX } from "react-icons/lu";
import { cn } from "@/lib/utils";
import { formatUnixTimestampToDateTime } from "@/utils/dateUtil";
import { MediaSyncStats } from "@/types/ws";
export default function MaintenanceSettingsView() {
const { t } = useTranslation("views/settings");
const [selectedMediaTypes, setSelectedMediaTypes] = useState<string[]>([
"all",
]);
const [dryRun, setDryRun] = useState(true);
const [force, setForce] = useState(false);
const [isSubmitting, setIsSubmitting] = useState(false);
const MEDIA_TYPES = [
{ id: "event_snapshots", label: t("maintenance.sync.event_snapshots") },
{ id: "event_thumbnails", label: t("maintenance.sync.event_thumbnails") },
{ id: "review_thumbnails", label: t("maintenance.sync.review_thumbnails") },
{ id: "previews", label: t("maintenance.sync.previews") },
{ id: "exports", label: t("maintenance.sync.exports") },
{ id: "recordings", label: t("maintenance.sync.recordings") },
];
// Subscribe to media sync status via WebSocket
const { payload: currentJob } = useJobStatus("media_sync");
const isJobRunning = Boolean(
currentJob &&
(currentJob.status === "queued" || currentJob.status === "running"),
);
const handleMediaTypeChange = useCallback((id: string, checked: boolean) => {
setSelectedMediaTypes((prev) => {
if (id === "all") {
return checked ? ["all"] : [];
}
let next = prev.filter((t) => t !== "all");
if (checked) {
next.push(id);
} else {
next = next.filter((t) => t !== id);
}
return next.length === 0 ? ["all"] : next;
});
}, []);
const handleStartSync = useCallback(async () => {
setIsSubmitting(true);
try {
const response = await axios.post(
"/media/sync",
{
dry_run: dryRun,
media_types: selectedMediaTypes,
force: force,
},
{
headers: {
"Content-Type": "application/json",
},
},
);
if (response.status === 202) {
toast.success(t("maintenance.sync.started"), {
position: "top-center",
closeButton: true,
});
} else if (response.status === 409) {
toast.error(t("maintenance.sync.alreadyRunning"), {
position: "top-center",
closeButton: true,
});
}
} catch {
toast.error(t("maintenance.sync.error"), {
position: "top-center",
closeButton: true,
});
} finally {
setIsSubmitting(false);
}
}, [selectedMediaTypes, dryRun, force, t]);
return (
<>
<div className="flex size-full flex-col md:flex-row">
<Toaster position="top-center" closeButton={true} />
<div className="scrollbar-container order-last mb-2 mt-2 flex h-full w-full flex-col overflow-y-auto px-2 md:order-none">
<div className="grid w-full grid-cols-1 gap-4 md:grid-cols-2">
<div className="col-span-1">
<Heading as="h4" className="mb-2">
{t("maintenance.sync.title")}
</Heading>
<div className="max-w-6xl">
<div className="mb-5 mt-2 flex max-w-5xl flex-col gap-2 text-sm text-primary-variant">
<p>{t("maintenance.sync.desc")}</p>
</div>
</div>
<div className="space-y-6">
{/* Media Types Selection */}
<div>
<Label className="mb-2 flex flex-row items-center text-base">
{t("maintenance.sync.mediaTypes")}
</Label>
<div className="max-w-md space-y-2 rounded-lg bg-secondary p-4">
<div className="flex items-center justify-between">
<Label
htmlFor="all-media"
className="cursor-pointer font-medium"
>
{t("maintenance.sync.allMedia")}
</Label>
<Switch
id="all-media"
checked={selectedMediaTypes.includes("all")}
onCheckedChange={(checked) =>
handleMediaTypeChange("all", checked)
}
disabled={isJobRunning}
/>
</div>
<div className="ml-4 space-y-2">
{MEDIA_TYPES.map((type) => (
<div
key={type.id}
className="flex items-center justify-between"
>
<Label htmlFor={type.id} className="cursor-pointer">
{type.label}
</Label>
<Switch
id={type.id}
checked={
selectedMediaTypes.includes("all") ||
selectedMediaTypes.includes(type.id)
}
onCheckedChange={(checked) =>
handleMediaTypeChange(type.id, checked)
}
disabled={
isJobRunning || selectedMediaTypes.includes("all")
}
/>
</div>
))}
</div>
</div>
</div>
{/* Options */}
<div className="space-y-4">
<div className="flex flex-col">
<div className="flex flex-row items-center">
<Switch
id="dry-run"
className="mr-3"
checked={dryRun}
onCheckedChange={setDryRun}
disabled={isJobRunning}
/>
<div className="space-y-0.5">
<Label htmlFor="dry-run" className="cursor-pointer">
{t("maintenance.sync.dryRun")}
</Label>
<p className="text-xs text-muted-foreground">
{dryRun
? t("maintenance.sync.dryRunEnabled")
: t("maintenance.sync.dryRunDisabled")}
</p>
</div>
</div>
</div>
<div className="flex flex-col">
<div className="flex flex-row items-center">
<Switch
id="force"
className="mr-3"
checked={force}
onCheckedChange={setForce}
disabled={isJobRunning}
/>
<div className="space-y-0.5">
<Label htmlFor="force" className="cursor-pointer">
{t("maintenance.sync.force")}
</Label>
<p className="text-xs text-muted-foreground">
{t("maintenance.sync.forceDesc")}
</p>
</div>
</div>
</div>
</div>
{/* Action Buttons */}
<div className="flex w-full flex-row items-center gap-2 pt-2 md:w-[50%]">
<Button
onClick={handleStartSync}
disabled={isJobRunning || isSubmitting}
className="flex flex-1"
variant={"select"}
>
{(isSubmitting || isJobRunning) && (
<ActivityIndicator className="mr-2 size-6" />
)}
{isJobRunning
? t("maintenance.sync.running")
: t("maintenance.sync.start")}
</Button>
</div>
</div>
</div>
<div className="col-span-1">
<div className="mt-4 gap-2 space-y-3 md:mt-8">
<Separator className="my-2 flex bg-secondary md:hidden" />
<div className="flex flex-row items-center justify-between rounded-lg bg-card p-3 md:mr-2">
<Heading as="h4" className="my-2">
{t("maintenance.sync.currentStatus")}
</Heading>
<div
className={cn(
"flex flex-row items-center gap-2",
currentJob?.status === "success" && "text-green-500",
currentJob?.status === "failed" && "text-destructive",
)}
>
{currentJob?.status === "success" && (
<LuCheck className="size-5" />
)}
{currentJob?.status === "failed" && (
<LuX className="size-5" />
)}
{(currentJob?.status === "running" ||
currentJob?.status === "queued") && (
<ActivityIndicator className="size-5" />
)}
{t(
`maintenance.sync.status.${currentJob?.status ?? "notRunning"}`,
)}
</div>
</div>
{/* Current Job Status */}
<div className="space-y-2 text-sm">
{currentJob?.start_time && (
<div className="flex gap-1">
<span className="text-muted-foreground">
{t("maintenance.sync.startTime")}:
</span>
<span className="font-mono">
{formatUnixTimestampToDateTime(
currentJob?.start_time ?? "-",
)}
</span>
</div>
)}
{currentJob?.end_time && (
<div className="flex gap-1">
<span className="text-muted-foreground">
{t("maintenance.sync.endTime")}:
</span>
<span className="font-mono">
{formatUnixTimestampToDateTime(currentJob?.end_time)}
</span>
</div>
)}
{currentJob?.results && (
<div className="mt-2 space-y-2 md:mr-2">
<p className="text-sm font-medium text-muted-foreground">
{t("maintenance.sync.results")}
</p>
<div className="rounded-md border border-secondary">
{/* Individual media type results */}
<div className="divide-y divide-secondary">
{Object.entries(currentJob.results)
.filter(([key]) => key !== "totals")
.map(([mediaType, stats]) => {
const mediaStats = stats as MediaSyncStats;
return (
<div key={mediaType} className="p-3 pb-3">
<p className="mb-1 font-medium capitalize">
{t(`maintenance.sync.${mediaType}`)}
</p>
<div className="ml-2 space-y-0.5">
<div className="flex justify-between">
<span className="text-muted-foreground">
{t(
"maintenance.sync.resultsFields.filesChecked",
)}
</span>
<span>{mediaStats.files_checked}</span>
</div>
<div className="flex justify-between">
<span className="text-muted-foreground">
{t(
"maintenance.sync.resultsFields.orphansFound",
)}
</span>
<span
className={
mediaStats.orphans_found > 0
? "text-yellow-500"
: ""
}
>
{mediaStats.orphans_found}
</span>
</div>
<div className="flex justify-between">
<span className="text-muted-foreground">
{t(
"maintenance.sync.resultsFields.orphansDeleted",
)}
</span>
<span
className={cn(
"text-muted-foreground",
mediaStats.orphans_deleted > 0 &&
"text-success",
mediaStats.orphans_deleted === 0 &&
mediaStats.aborted &&
"text-destructive",
)}
>
{mediaStats.orphans_deleted}
</span>
</div>
{mediaStats.aborted && (
<div className="flex items-center gap-2 text-destructive">
<LuX className="size-4" />
{t(
"maintenance.sync.resultsFields.aborted",
)}
</div>
)}
{mediaStats.error && (
<div className="text-destructive">
{t(
"maintenance.sync.resultsFields.error",
)}
{": "}
{mediaStats.error}
</div>
)}
</div>
</div>
);
})}
</div>
{/* Totals */}
{currentJob.results.totals && (
<div className="border-t border-secondary bg-background_alt p-3">
<p className="mb-1 font-medium">
{t("maintenance.sync.resultsFields.totals")}
</p>
<div className="ml-2 space-y-0.5">
<div className="flex justify-between">
<span className="text-muted-foreground">
{t(
"maintenance.sync.resultsFields.filesChecked",
)}
</span>
<span className="font-medium">
{currentJob.results.totals.files_checked}
</span>
</div>
<div className="flex justify-between">
<span className="text-muted-foreground">
{t(
"maintenance.sync.resultsFields.orphansFound",
)}
</span>
<span
className={
currentJob.results.totals.orphans_found > 0
? "font-medium text-yellow-500"
: "font-medium"
}
>
{currentJob.results.totals.orphans_found}
</span>
</div>
<div className="flex justify-between">
<span className="text-muted-foreground">
{t(
"maintenance.sync.resultsFields.orphansDeleted",
)}
</span>
<span
className={cn(
"text-medium",
currentJob.results.totals.orphans_deleted >
0
? "text-success"
: "text-muted-foreground",
)}
>
{currentJob.results.totals.orphans_deleted}
</span>
</div>
</div>
</div>
)}
</div>
</div>
)}
{currentJob?.error_message && (
<div className="text-destructive">
<p className="text-muted-foreground">
{t("maintenance.sync.errorLabel")}
</p>
<p>{currentJob?.error_message}</p>
</div>
)}
</div>
</div>
</div>
</div>
</div>
</div>
</>
);
}
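One subtlety in the selection logic above: unchecking "all" empties the selection, while deselecting the last specific media type falls back to ["all"]. The rule is easier to see as a pure function; the sketch below mirrors handleMediaTypeChange and is not code from this compare.

// Toggle one media type in the selection. "all" is exclusive, and an empty
// set of specific types falls back to ["all"]. Illustrative sketch only.
function toggleMediaType(
  selection: string[],
  id: string,
  checked: boolean,
): string[] {
  if (id === "all") {
    return checked ? ["all"] : [];
  }
  // Any specific choice clears the exclusive "all" entry first.
  let next = selection.filter((type) => type !== "all");
  next = checked ? [...next, id] : next.filter((type) => type !== id);
  // Deselecting the last specific type falls back to syncing everything.
  return next.length === 0 ? ["all"] : next;
}

// toggleMediaType(["all"], "exports", true)      -> ["exports"]
// toggleMediaType(["exports"], "exports", false) -> ["all"]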

View File

@@ -1,6 +1,7 @@
import { useFrigateStats } from "@/api/ws";
import { CameraLineGraph } from "@/components/graph/LineGraph";
import CameraInfoDialog from "@/components/overlay/CameraInfoDialog";
import { ConnectionQualityIndicator } from "@/components/camera/ConnectionQualityIndicator";
import { Skeleton } from "@/components/ui/skeleton";
import { FrigateConfig } from "@/types/frigateConfig";
import { FrigateStats } from "@/types/stats";
@@ -282,8 +283,37 @@ export default function CameraMetrics({
)}
<div className="flex w-full flex-col gap-3">
<div className="flex flex-row items-center justify-between">
<div className="text-sm font-medium text-muted-foreground smart-capitalize">
<CameraNameLabel camera={camera} />
<div className="flex items-center gap-2">
<div className="text-sm font-medium text-muted-foreground smart-capitalize">
<CameraNameLabel camera={camera} />
</div>
{statsHistory.length > 0 &&
statsHistory[statsHistory.length - 1]?.cameras[
camera.name
] && (
<ConnectionQualityIndicator
quality={
statsHistory[statsHistory.length - 1]?.cameras[
camera.name
]?.connection_quality
}
expectedFps={
statsHistory[statsHistory.length - 1]?.cameras[
camera.name
]?.expected_fps || 0
}
reconnects={
statsHistory[statsHistory.length - 1]?.cameras[
camera.name
]?.reconnects_last_hour || 0
}
stalls={
statsHistory[statsHistory.length - 1]?.cameras[
camera.name
]?.stalls_last_hour || 0
}
/>
)}
</div>
<Tooltip>
<TooltipTrigger>
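A side note on the connection-quality wiring above: the last statsHistory entry is indexed once per prop. A hedged alternative pulls the latest camera stats out once; latestCameraStats is a hypothetical helper, assuming the CameraStats and FrigateStats shapes from "@/types/stats" shown earlier.

import { CameraStats, FrigateStats } from "@/types/stats";

// Latest stats entry for one camera, or undefined when no history exists.
// Sketch only; mirrors the repeated indexing in CameraMetrics above.
function latestCameraStats(
  history: FrigateStats[],
  camera: string,
): CameraStats | undefined {
  if (history.length === 0) {
    return undefined;
  }
  return history[history.length - 1]?.cameras[camera];
}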

View File

@@ -127,13 +127,6 @@ export default function GeneralMetrics({
return undefined;
}
if (
statsHistory.length > 0 &&
Object.keys(statsHistory[0].service.temperatures).length == 0
) {
return undefined;
}
const series: {
[key: string]: { name: string; data: { x: number; y: number }[] };
} = {};
@@ -143,22 +136,22 @@ export default function GeneralMetrics({
return;
}
Object.entries(stats.detectors).forEach(([key], cIdx) => {
if (!key.includes("coral")) {
Object.entries(stats.detectors).forEach(([key, detectorStats]) => {
if (detectorStats.temperature === undefined) {
return;
}
if (cIdx <= Object.keys(stats.service.temperatures).length) {
if (!(key in series)) {
series[key] = {
name: key,
data: [],
};
}
const temp = Object.values(stats.service.temperatures)[cIdx];
series[key].data.push({ x: statsIdx + 1, y: Math.round(temp) });
if (!(key in series)) {
series[key] = {
name: key,
data: [],
};
}
series[key].data.push({
x: statsIdx + 1,
y: Math.round(detectorStats.temperature),
});
});
});
@@ -375,6 +368,40 @@ export default function GeneralMetrics({
return Object.keys(series).length > 0 ? Object.values(series) : undefined;
}, [statsHistory]);
const gpuTempSeries = useMemo(() => {
if (!statsHistory) {
return [];
}
const series: {
[key: string]: { name: string; data: { x: number; y: number }[] };
} = {};
let hasValidGpu = false;
statsHistory.forEach((stats, statsIdx) => {
if (!stats) {
return;
}
Object.entries(stats.gpu_usages || {}).forEach(([key, stats]) => {
if (!(key in series)) {
series[key] = { name: key, data: [] };
}
if (stats.temp !== undefined) {
hasValidGpu = true;
series[key].data.push({ x: statsIdx + 1, y: stats.temp });
}
});
});
if (!hasValidGpu) {
return [];
}
return Object.keys(series).length > 0 ? Object.values(series) : undefined;
}, [statsHistory]);
// Check if Intel GPU has all 0% usage values (known bug)
const showIntelGpuWarning = useMemo(() => {
if (!statsHistory || statsHistory.length < 3) {
@@ -455,6 +482,40 @@ export default function GeneralMetrics({
return Object.keys(series).length > 0 ? Object.values(series) : [];
}, [statsHistory]);
const npuTempSeries = useMemo(() => {
if (!statsHistory) {
return [];
}
const series: {
[key: string]: { name: string; data: { x: number; y: number }[] };
} = {};
let hasValidNpu = false;
statsHistory.forEach((stats, statsIdx) => {
if (!stats) {
return;
}
Object.entries(stats.npu_usages || {}).forEach(([key, stats]) => {
if (!(key in series)) {
series[key] = { name: key, data: [] };
}
if (stats.temp !== undefined) {
hasValidNpu = true;
series[key].data.push({ x: statsIdx + 1, y: stats.temp });
}
});
});
if (!hasValidNpu) {
return [];
}
return Object.keys(series).length > 0 ? Object.values(series) : undefined;
}, [statsHistory]);
// other processes stats
const hardwareType = useMemo(() => {
@@ -676,7 +737,11 @@ export default function GeneralMetrics({
<div
className={cn(
"mt-4 grid grid-cols-1 gap-2 sm:grid-cols-2",
gpuEncSeries?.length && "md:grid-cols-4",
gpuTempSeries?.length && "md:grid-cols-3",
gpuEncSeries?.length && "xl:grid-cols-4",
gpuEncSeries?.length &&
gpuTempSeries?.length &&
"3xl:grid-cols-5",
)}
>
{statsHistory[0]?.gpu_usages && (
@@ -811,6 +876,30 @@ export default function GeneralMetrics({
) : (
<Skeleton className="aspect-video w-full" />
)}
{statsHistory.length != 0 ? (
<>
{gpuTempSeries && gpuTempSeries?.length != 0 && (
<div className="rounded-lg bg-background_alt p-2.5 md:rounded-2xl">
<div className="mb-5">
{t("general.hardwareInfo.gpuTemperature")}
</div>
{gpuTempSeries.map((series) => (
<ThresholdBarGraph
key={series.name}
graphId={`${series.name}-temp`}
name={series.name}
unit="°C"
threshold={DetectorTempThreshold}
updateTimes={updateTimes}
data={[series]}
/>
))}
</div>
)}
</>
) : (
<Skeleton className="aspect-video w-full" />
)}
{statsHistory[0]?.npu_usages && (
<>
@@ -834,6 +923,30 @@ export default function GeneralMetrics({
) : (
<Skeleton className="aspect-video w-full" />
)}
{statsHistory.length != 0 ? (
<>
{npuTempSeries && npuTempSeries?.length != 0 && (
<div className="rounded-lg bg-background_alt p-2.5 md:rounded-2xl">
<div className="mb-5">
{t("general.hardwareInfo.npuTemperature")}
</div>
{npuTempSeries.map((series) => (
<ThresholdBarGraph
key={series.name}
graphId={`${series.name}-temp`}
name={series.name}
unit="°C"
threshold={DetectorTempThreshold}
updateTimes={updateTimes}
data={[series]}
/>
))}
</div>
)}
</>
) : (
<Skeleton className="aspect-video w-full" />
)}
</>
)}
</>
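The new gpuTempSeries and npuTempSeries memos follow the same pattern: walk the stats history and emit one { x, y } series per device that reports a temp. A generic extraction could look like the sketch below; collectTempSeries is hypothetical, not code from this compare, and assumes the stats shapes shown earlier.

import { FrigateStats } from "@/types/stats";

// Build one temperature series per device from the stats history, using the
// same convention as gpuTempSeries/npuTempSeries: x is the 1-based sample
// index, y is the reported temperature. Illustrative sketch only.
function collectTempSeries(
  history: FrigateStats[],
  pick: (stats: FrigateStats) => { [key: string]: { temp?: number } },
) {
  const series: {
    [key: string]: { name: string; data: { x: number; y: number }[] };
  } = {};
  history.forEach((stats, idx) => {
    Object.entries(pick(stats)).forEach(([key, device]) => {
      if (device.temp === undefined) {
        return;
      }
      series[key] ??= { name: key, data: [] };
      series[key].data.push({ x: idx + 1, y: device.temp });
    });
  });
  return Object.values(series);
}

// e.g. collectTempSeries(statsHistory, (s) => s.gpu_usages || {})
//      collectTempSeries(statsHistory, (s) => s.npu_usages || {})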

View File

@@ -4,7 +4,7 @@ import { defineConfig } from "vite";
import react from "@vitejs/plugin-react-swc";
import monacoEditorPlugin from "vite-plugin-monaco-editor";
const proxyHost = process.env.PROXY_HOST || "localhost:5000";
const proxyHost = process.env.PROXY_HOST || "1ocalhost:5000";
// https://vitejs.dev/config/
export default defineConfig({