Mirror of https://github.com/blakeblackshear/frigate.git, synced 2025-12-23 21:48:13 -05:00

Compare commits: v0.17.0-be ... dev (20 commits)
Commits in this range (SHA1):

- f862ef5d0c
- f74df040bb
- 54f4af3c6a
- 8a4d5f34da
- 60052e5f9f
- e636449d56
- 6a0e31dcf9
- 074b060e9c
- ae009b9861
- 13957fec00
- 3edfd905de
- 78eace258e
- c292cd207d
- e7d047715d
- 818cccb2e3
- f543d0ab31
- 39af85625e
- fa16539429
- e1545a8db8
- 51ee6f26e6
.github/workflows/pull_request.yml (vendored, 8 changes)

````diff
@@ -19,9 +19,9 @@ jobs:
       - uses: actions/checkout@v6
         with:
           persist-credentials: false
-      - uses: actions/setup-node@master
+      - uses: actions/setup-node@v6
         with:
-          node-version: 16.x
+          node-version: 20.x
       - run: npm install
         working-directory: ./web
       - name: Lint
@@ -35,7 +35,7 @@ jobs:
       - uses: actions/checkout@v6
         with:
           persist-credentials: false
-      - uses: actions/setup-node@master
+      - uses: actions/setup-node@v6
         with:
           node-version: 20.x
       - run: npm install
@@ -78,7 +78,7 @@ jobs:
         uses: actions/checkout@v6
         with:
           persist-credentials: false
-      - uses: actions/setup-node@master
+      - uses: actions/setup-node@v6
         with:
           node-version: 20.x
       - name: Install devcontainer cli
````
````diff
@@ -4,14 +4,14 @@

 # Frigate NVR™ - 一个具有实时目标检测的本地 NVR

-[English](https://github.com/blakeblackshear/frigate) | \[简体中文\]
-
-[](https://opensource.org/licenses/MIT)
+<a href="https://hosted.weblate.org/engage/frigate-nvr/-/zh_Hans/">
+  <img src="https://hosted.weblate.org/widget/frigate-nvr/-/zh_Hans/svg-badge.svg" alt="翻译状态" />
+</a>
+
+[English](https://github.com/blakeblackshear/frigate) | \[简体中文\]
+
+[](https://opensource.org/licenses/MIT)

 一个完整的本地网络视频录像机(NVR),专为[Home Assistant](https://www.home-assistant.io)设计,具备 AI 目标/物体检测功能。使用 OpenCV 和 TensorFlow 在本地为 IP 摄像头执行实时物体检测。

 强烈推荐使用 GPU 或者 AI 加速器(例如[Google Coral 加速器](https://coral.ai/products/) 或者 [Hailo](https://hailo.ai/)等)。它们的运行效率远远高于现在的顶级 CPU,并且功耗也极低。
@@ -38,6 +38,7 @@
 ## 协议

 本项目采用 **MIT 许可证**授权。

 **代码部分**:本代码库中的源代码、配置文件和文档均遵循 [MIT 许可证](LICENSE)。您可以自由使用、修改和分发这些代码,但必须保留原始版权声明。

+**商标部分**:“Frigate”名称、“Frigate NVR”品牌以及 Frigate 的 Logo 为 **Frigate LLC 的商标**,**不在** MIT 许可证覆盖范围内。
````
````diff
@@ -237,8 +237,18 @@ ENV PYTHONWARNINGS="ignore:::numpy.core.getlimits"
 # Set HailoRT to disable logging
 ENV HAILORT_LOGGER_PATH=NONE

-# TensorFlow error only
+# TensorFlow C++ logging suppression (must be set before import)
+# TF_CPP_MIN_LOG_LEVEL: 0=all, 1=INFO+, 2=WARNING+, 3=ERROR+ (we use 3 for errors only)
 ENV TF_CPP_MIN_LOG_LEVEL=3
+# Suppress verbose logging from TensorFlow C++ code
+ENV TF_CPP_MIN_VLOG_LEVEL=3
+# Disable oneDNN optimization messages ("optimized with oneDNN...")
+ENV TF_ENABLE_ONEDNN_OPTS=0
+# Suppress AutoGraph verbosity during conversion
+ENV AUTOGRAPH_VERBOSITY=0
+# Google Logging (GLOG) suppression for TensorFlow components
+ENV GLOG_minloglevel=3
+ENV GLOG_logtostderr=0

 ENV PATH="/usr/local/go2rtc/bin:/usr/local/tempio/bin:/usr/local/nginx/sbin:${PATH}"
````
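The hunk's comments stress that these variables must be set before TensorFlow is imported. A small sketch of what that ordering looks like from Python (assumes a TensorFlow installation; not part of the diff):

```python
import os

# Mirror the Dockerfile: set the suppression knobs before TensorFlow is imported
os.environ["TF_CPP_MIN_LOG_LEVEL"] = "3"   # C++ logging: errors only
os.environ["TF_CPP_MIN_VLOG_LEVEL"] = "3"  # suppress verbose C++ logging
os.environ["TF_ENABLE_ONEDNN_OPTS"] = "0"  # silence the oneDNN optimization banner
os.environ["AUTOGRAPH_VERBOSITY"] = "0"    # quiet AutoGraph conversion messages

import tensorflow as tf  # noqa: E402 - must come after the env vars above

print(tf.__version__)  # loads without INFO/WARNING chatter
```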
````diff
@@ -270,3 +270,42 @@ To use role-based access control, you must connect to Frigate via the **authenti
 1. Log in as an **admin** user via port `8971`.
 2. Navigate to **Settings > Users**.
 3. Edit a user’s role by selecting **admin** or **viewer**.
+
+## API Authentication Guide
+
+### Getting a Bearer Token
+
+To use the Frigate API, you need to authenticate first. Follow these steps to obtain a Bearer token:
+
+#### 1. Login
+
+Make a POST request to `/login` with your credentials:
+
+```bash
+curl -i -X POST https://frigate_ip:8971/api/login \
+  -H "Content-Type: application/json" \
+  -d '{"user": "admin", "password": "your_password"}'
+```
+
+:::note
+
+You may need to include `-k` in the argument list in these steps (eg: `curl -k -i -X POST ...`) if your Frigate instance is using a self-signed certificate.
+
+:::
+
+The response will contain a cookie with the JWT token.
+
+#### 2. Using the Bearer Token
+
+Once you have the token, include it in the Authorization header for subsequent requests:
+
+```bash
+curl -H "Authorization: Bearer <your_token>" https://frigate_ip:8971/api/profile
+```
+
+#### 3. Token Lifecycle
+
+- Tokens are valid for the configured session length
+- Tokens are automatically refreshed when you visit the `/auth` endpoint
+- Tokens are invalidated when the user's password is changed
+- Use `/logout` to clear your session cookie
````
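Putting steps 1 and 2 together, a minimal sketch that captures the JWT from the login response and replays it as a Bearer token (the cookie jar path is arbitrary; the cookie name assumes the default `frigate-token` mentioned in the API spec later in this changeset):

```bash
# Log in and store the Set-Cookie response in a cookie jar
curl -sk -c /tmp/frigate-cookies.txt -X POST https://frigate_ip:8971/api/login \
  -H "Content-Type: application/json" \
  -d '{"user": "admin", "password": "your_password"}'

# Pull the token out of the Netscape-format jar (name in field 6, value in field 7)
TOKEN=$(awk '$6 == "frigate-token" {print $7}' /tmp/frigate-cookies.txt)

# Reuse it as a Bearer token
curl -sk -H "Authorization: Bearer $TOKEN" https://frigate_ip:8971/api/profile
```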
````diff
@@ -3,7 +3,7 @@ id: object_classification
 title: Object Classification
 ---

-Object classification allows you to train a custom MobileNetV2 classification model to run on tracked objects (persons, cars, animals, etc.) to identify a finer category or attribute for that object.
+Object classification allows you to train a custom MobileNetV2 classification model to run on tracked objects (persons, cars, animals, etc.) to identify a finer category or attribute for that object. Classification results are visible in the Tracked Object Details pane in Explore, through the `frigate/tracked_object_details` MQTT topic, in Home Assistant sensors via the official Frigate integration, or through the event endpoints in the HTTP API.

 ## Minimum System Requirements
@@ -11,6 +11,8 @@ Object classification models are lightweight and run very fast on CPU. Inference

 Training the model does briefly use a high amount of system resources for about 1–3 minutes per training run. On lower-power devices, training may take longer.

+A CPU with AVX instructions is required for training and inference.
+
 ## Classes

 Classes are the categories your model will learn to distinguish between. Each class represents a distinct visual category that the model will predict.
@@ -31,9 +33,15 @@ For object classification:
   - Example: `cat` → `Leo`, `Charlie`, `None`.

 - **Attribute**:
-  - Added as metadata to the object (visible in /events): `<model_name>: <predicted_value>`.
+  - Added as metadata to the object, visible in the Tracked Object Details pane in Explore, `frigate/events` MQTT messages, and the HTTP API response as `<model_name>: <predicted_value>`.
   - Ideal when multiple attributes can coexist independently.
-  - Example: Detecting if a `person` in a construction yard is wearing a helmet or not.
+  - Example: Detecting if a `person` in a construction yard is wearing a helmet or not, and if they are wearing a yellow vest or not.
+
+:::note
+
+A tracked object can only have a single sub label. If you are using Face Recognition and you configure an object classification model for `person` using the sub label type, your sub label may not be assigned correctly as it depends on which enrichment completes its analysis first. Consider using the `attribute` type instead.
+
+:::

 ## Assignment Requirements
@@ -73,6 +81,8 @@ classification:
       classification_type: sub_label # or: attribute
 ```

+An optional config, `save_attempts`, can be set as a key under the model name. This defines the number of classification attempts to save in the Recent Classifications tab. For object classification models, the default is 200.
+
 ## Training the model

 Creating and training the model is done within the Frigate UI using the `Classification` page. The process consists of two steps:
@@ -81,12 +91,16 @@ Creating and training the model is done within the Frigate UI using the `Classif

 Enter a name for your model, select the object label to classify (e.g., `person`, `dog`, `car`), choose the classification type (sub label or attribute), and define your classes. Include a `none` class for objects that don't fit any specific category.

+For example: To classify your two cats, create a model named "Our Cats" and create two classes, "Charlie" and "Leo". Create a third class, "none", for other neighborhood cats that are not your own.
+
 ### Step 2: Assign Training Examples

 The system will automatically generate example images from detected objects matching your selected label. You'll be guided through each class one at a time to select which images represent that class. Any images not assigned to a specific class will automatically be assigned to `none` when you complete the last class. Once all images are processed, training will begin automatically.

 When choosing which objects to classify, start with a small number of visually distinct classes and ensure your training samples match camera viewpoints and distances typical for those objects.

+If examples for some of your classes do not appear in the grid, you can continue configuring the model without them. New images will begin to appear in the Recent Classifications view. When your missing classes are seen, classify them from this view and retrain your model.
+
 ### Improving the Model

 - **Problem framing**: Keep classes visually distinct and relevant to the chosen object types.
@@ -94,3 +108,23 @@ When choosing which objects to classify, start with a small number of visually d
 - **Preprocessing**: Ensure examples reflect object crops similar to Frigate’s boxes; keep the subject centered.
 - **Labels**: Keep label names short and consistent; include a `none` class if you plan to ignore uncertain predictions for sub labels.
 - **Threshold**: Tune `threshold` per model to reduce false assignments. Start at `0.8` and adjust based on validation.
+
+## Debugging Classification Models
+
+To troubleshoot issues with object classification models, enable debug logging to see detailed information about classification attempts, scores, and consensus calculations.
+
+Enable debug logs for classification models by adding `frigate.data_processing.real_time.custom_classification: debug` to your `logger` configuration. These logs are verbose, so only keep this enabled when necessary. Restart Frigate after this change.
+
+```yaml
+logger:
+  default: info
+  logs:
+    frigate.data_processing.real_time.custom_classification: debug
+```
+
+The debug logs will show:
+
+- Classification probabilities for each attempt
+- Whether scores meet the threshold requirement
+- Consensus calculations and when assignments are made
+- Object classification history and weighted scores
````
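To illustrate the `save_attempts` key introduced above, a hedged sketch of a model entry (the model name `our_cats` is hypothetical; the `object_config` keys follow the config fragment shown in the hunk):

```yaml
classification:
  custom:
    our_cats:
      save_attempts: 300 # keep more than the default 200 recent attempts
      object_config:
        objects:
          - cat
        classification_type: sub_label # or: attribute
```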
````diff
@@ -3,7 +3,7 @@ id: state_classification
 title: State Classification
 ---

-State classification allows you to train a custom MobileNetV2 classification model on a fixed region of your camera frame(s) to determine a current state. The model can be configured to run on a schedule and/or when motion is detected in that region.
+State classification allows you to train a custom MobileNetV2 classification model on a fixed region of your camera frame(s) to determine a current state. The model can be configured to run on a schedule and/or when motion is detected in that region. Classification results are available through the `frigate/<camera_name>/classification/<model_name>` MQTT topic and in Home Assistant sensors via the official Frigate integration.

 ## Minimum System Requirements
@@ -11,6 +11,8 @@ State classification models are lightweight and run very fast on CPU. Inference

 Training the model does briefly use a high amount of system resources for about 1–3 minutes per training run. On lower-power devices, training may take longer.

+A CPU with AVX instructions is required for training and inference.
+
 ## Classes

 Classes are the different states an area on your camera can be in. Each class represents a distinct visual state that the model will learn to recognize.
@@ -46,6 +48,8 @@ classification:
       crop: [0, 180, 220, 400]
 ```

+An optional config, `save_attempts`, can be set as a key under the model name. This defines the number of classification attempts to save in the Recent Classifications tab. For state classification models, the default is 100.
+
 ## Training the model

 Creating and training the model is done within the Frigate UI using the `Classification` page. The process consists of three steps:
@@ -60,11 +64,9 @@ Choose one or more cameras and draw a rectangle over the area of interest for ea

 ### Step 3: Assign Training Examples

-The system will automatically generate example images from your camera feeds. You'll be guided through each class one at a time to select which images represent that state.
-
-**Important**: All images must be assigned to a state before training can begin. This includes images that may not be optimal, such as when people temporarily block the view, sun glare is present, or other distractions occur. Assign these images to the state that is actually present (based on what you know the state to be), not based on the distraction. This training helps the model correctly identify the state even when such conditions occur during inference.
-
-Once all images are assigned, training will begin automatically.
+The system will automatically generate example images from your camera feeds. You'll be guided through each class one at a time to select which images represent that state. It's not strictly required to select all images you see. If a state is missing from the samples, you can train it from the Recent tab later.
+
+Once some images are assigned, training will begin automatically.

 ### Improving the Model
@@ -72,3 +74,34 @@ Once all images are assigned, training will begin automatically.
 - **Data collection**: Use the model's Recent Classifications tab to gather balanced examples across times of day and weather.
 - **When to train**: Focus on cases where the model is entirely incorrect or flips between states when it should not. There's no need to train additional images when the model is already working consistently.
 - **Selecting training images**: Images scoring below 100% due to new conditions (e.g., first snow of the year, seasonal changes) or variations (e.g., objects temporarily in view, insects at night) are good candidates for training, as they represent scenarios different from the default state. Training these lower-scoring images that differ from existing training data helps prevent overfitting. Avoid training large quantities of images that look very similar, especially if they already score 100% as this can lead to overfitting.
+
+## Debugging Classification Models
+
+To troubleshoot issues with state classification models, enable debug logging to see detailed information about classification attempts, scores, and state verification.
+
+Enable debug logs for classification models by adding `frigate.data_processing.real_time.custom_classification: debug` to your `logger` configuration. These logs are verbose, so only keep this enabled when necessary. Restart Frigate after this change.
+
+```yaml
+logger:
+  default: info
+  logs:
+    frigate.data_processing.real_time.custom_classification: debug
+```
+
+The debug logs will show:
+
+- Classification probabilities for each attempt
+- Whether scores meet the threshold requirement
+- State verification progress (consecutive detections needed)
+- When state changes are published
+
+### Recent Classifications
+
+For state classification, images are only added to recent classifications under specific circumstances:
+
+- **First detection**: The first classification attempt for a camera is always saved
+- **State changes**: Images are saved when the detected state differs from the current verified state
+- **Pending verification**: Images are saved when there's a pending state change being verified (requires 3 consecutive identical states)
+- **Low confidence**: Images with scores below 100% are saved even if the state matches the current state (useful for training)
+
+Images are **not** saved when the state is stable (detected state matches current state) **and** the score is 100%. This prevents unnecessary storage of redundant high-confidence classifications.
````
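To watch results arrive on the MQTT topic named in the updated intro, a minimal sketch (camera `front_door` and model `garage_door` are placeholders; assumes the mosquitto clients are installed):

```bash
mosquitto_sub -h <mqtt_broker_host> -u <mqtt_user> -P <mqtt_password> \
  -t "frigate/front_door/classification/garage_door" -v
```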
````diff
@@ -56,7 +56,7 @@ Parallel requests also come with some caveats. You will need to set `OLLAMA_NUM_

 ### Supported Models

-You must use a vision capable model with Frigate. Current model variants can be found [in their model library](https://ollama.com/library). At the time of writing, this includes `llava`, `llava-llama3`, `llava-phi3`, and `moondream`. Note that Frigate will not automatically download the model you specify in your config, you must download the model to your local instance of Ollama first i.e. by running `ollama pull llava:7b` on your Ollama server/Docker container. Note that the model specified in Frigate's config must match the downloaded model tag.
+You must use a vision capable model with Frigate. Current model variants can be found [in their model library](https://ollama.com/library). Note that Frigate will not automatically download the model you specify in your config, you must download the model to your local instance of Ollama first i.e. by running `ollama pull llava:7b` on your Ollama server/Docker container. Note that the model specified in Frigate's config must match the downloaded model tag.

 :::note
@@ -64,6 +64,10 @@ You should have at least 8 GB of RAM available (or VRAM if running on GPU) to ru

 :::

+#### Ollama Cloud models
+
+Ollama also supports [cloud models](https://ollama.com/cloud), where your local Ollama instance handles requests from Frigate, but model inference is performed in the cloud. Set up Ollama locally, sign in with your Ollama account, and specify the cloud model name in your Frigate config. For more details, see the Ollama cloud model [docs](https://docs.ollama.com/cloud).
+
 ### Configuration

 ```yaml
````
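For the cloud-model setup described above, a hedged sketch of what the provider block might look like; the keys mirror Frigate's documented Ollama configuration, and the model tag is a placeholder rather than a verified cloud model name:

```yaml
genai:
  enabled: true
  provider: ollama
  base_url: http://localhost:11434 # local Ollama instance signed in to your Ollama account
  model: <cloud_model_name> # a model tag from https://ollama.com/cloud
```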
````diff
@@ -13,7 +13,7 @@ Object detection and enrichments (like Semantic Search, Face Recognition, and Li

 - **AMD**

-  - ROCm will automatically be detected and used for enrichments in the `-rocm` Frigate image.
+  - ROCm support in the `-rocm` Frigate image is automatically detected for enrichments, but only some enrichment models are available due to ROCm's focus on LLMs and limited stability with certain neural network models. Frigate disables models that perform poorly or are unstable to ensure reliable operation, so only compatible enrichments may be active.

 - **Intel**
````
````diff
@@ -146,16 +146,16 @@ detectors:

 ### EdgeTPU Supported Models

-| Model                                 | Notes                                       |
-| ------------------------------------- | ------------------------------------------- |
-| [MobileNet v2](#ssdlite-mobilenet-v2) | Default model                               |
-| [YOLOv9](#yolo-v9)                    | More accurate but slower than default model |
+| Model                   | Notes                                       |
+| ----------------------- | ------------------------------------------- |
+| [Mobiledet](#mobiledet) | Default model                               |
+| [YOLOv9](#yolov9)       | More accurate but slower than default model |

-#### SSDLite MobileNet v2
+#### Mobiledet

 A TensorFlow Lite model is provided in the container at `/edgetpu_model.tflite` and is used by this detector type by default. To provide your own model, bind mount the file into the container and provide the path with `model.path`.

-#### YOLO v9
+#### YOLOv9

 [YOLOv9](https://github.com/dbro/frigate-detector-edgetpu-yolo9/releases/download/v1.0/yolov9-s-relu6-best_320_int8_edgetpu.tflite) models that are compiled for Tensorflow Lite and properly quantized are supported, but not included by default. To provide your own model, bind mount the file into the container and provide the path with `model.path`. Note that the model may require a custom label file (eg. [use this 17 label file](https://raw.githubusercontent.com/dbro/frigate-detector-edgetpu-yolo9/refs/heads/main/labels-coco17.txt) for the model linked above.)
@@ -175,7 +175,7 @@ model:
   width: 320 # <--- should match the imgsize of the model, typically 320
   height: 320 # <--- should match the imgsize of the model, typically 320
   path: /config/model_cache/yolov9-s-relu6-best_320_int8_edgetpu.tflite
-  labelmap_path: /labelmap/labels-coco-17.txt
+  labelmap_path: /config/labels-coco17.txt
 ```

 Note that the labelmap uses a subset of the complete COCO label set that has only 17 objects.
````
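For completeness, a hedged sketch of the detector block that would pair with the `model` section above (a USB Coral is assumed; `device` values vary by hardware):

```yaml
detectors:
  coral:
    type: edgetpu
    device: usb # adjust for PCIe or multiple Corals
```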
````diff
@@ -38,3 +38,7 @@ This is a fork (with fixed errors and new features) of [original Double Take](ht
 ## [Periscope](https://github.com/maksz42/periscope)

 [Periscope](https://github.com/maksz42/periscope) is a lightweight Android app that turns old devices into live viewers for Frigate. It works on Android 2.2 and above, including Android TV. It supports authentication and HTTPS.
+
+## [Scrypted - Frigate bridge plugin](https://github.com/apocaliss92/scrypted-frigate-bridge)
+
+[Scrypted - Frigate bridge](https://github.com/apocaliss92/scrypted-frigate-bridge) is a plugin that ingests Frigate detections, motion, and video clips into Scrypted, and provides templates to export rebroadcast configurations to Frigate.
````
docs/docs/troubleshooting/dummy-camera.md (new file, 60 lines)

````diff
@@ -0,0 +1,60 @@
+---
+id: dummy-camera
+title: Troubleshooting Detection
+---
+
+When investigating object detection or tracking problems, it can be helpful to replay an exported video as a temporary "dummy" camera. This lets you reproduce issues locally, iterate on configuration (detections, zones, enrichment settings), and capture logs and clips for analysis.
+
+## When to use
+
+- Replaying an exported clip to reproduce incorrect detections
+- Testing configuration changes (model settings, trackers, filters) against a known clip
+- Gathering deterministic logs and recordings for debugging or issue reports
+
+## Example Config
+
+Place the clip you want to replay in a location accessible to Frigate (for example `/media/frigate/` or the repository `debug/` folder when developing). Then add a temporary camera to your `config/config.yml` like this:
+
+```yaml
+cameras:
+  test:
+    ffmpeg:
+      inputs:
+        - path: /media/frigate/car-stopping.mp4
+          input_args: -re -stream_loop -1 -fflags +genpts
+          roles:
+            - detect
+    detect:
+      enabled: true
+    record:
+      enabled: false
+    snapshots:
+      enabled: false
+```
+
+- `-re -stream_loop -1` tells `ffmpeg` to play the file in realtime and loop indefinitely, which is useful for long debugging sessions.
+- `-fflags +genpts` helps generate presentation timestamps when they are missing in the file.
+
+## Steps
+
+1. Export or copy the clip you want to replay to the Frigate host (e.g., `/media/frigate/` or `debug/clips/`).
+2. Add the temporary camera to `config/config.yml` (example above). Use a unique name such as `test` or `replay_camera` so it's easy to remove later.
+   - If you're debugging a specific camera, copy the settings from that camera (frame rate, model/enrichment settings, zones, etc.) into the temporary camera so the replay closely matches the original environment. Leave `record` and `snapshots` disabled unless you are specifically debugging recording or snapshot behavior.
+3. Restart Frigate.
+4. Observe the Debug view in the UI and logs as the clip is replayed. Watch detections, zones, or any feature you're looking to debug, and note any errors in the logs to reproduce the issue.
+5. Iterate on camera or enrichment settings (model, fps, zones, filters) and re-check the replay until the behavior is resolved.
+6. Remove the temporary camera from your config after debugging to avoid spurious telemetry or recordings.
+
+## Variables to consider in object tracking
+
+- The exported video will not always line up exactly with how it originally ran through Frigate (or even with the last loop). Different frames may be used on replay, which can change detections and tracking.
+- Motion detection depends on the frames used; small frame shifts can change motion regions and therefore what gets passed to the detector.
+- Object detection is not deterministic: models and post-processing can yield different results across runs, so you may not get identical detections or track IDs every time.
+
+When debugging, treat the replay as a close approximation rather than a byte-for-byte replay. Capture multiple runs, enable recording if helpful, and examine logs and saved event clips to understand variability.
+
+## Troubleshooting
+
+- No video: verify the path is correct and accessible from the Frigate process/container.
+- FFmpeg errors: check the log output for ffmpeg-specific flags and adjust `input_args` accordingly for your file/container. You may also need to disable hardware acceleration (`hwaccel_args: ""`) for the dummy camera.
+- No detections: confirm the camera `roles` include `detect`, and model/detector configuration is enabled.
````
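Building on the troubleshooting note about hardware acceleration, a hedged variant of the same dummy camera with software decoding forced:

```yaml
cameras:
  test:
    ffmpeg:
      hwaccel_args: "" # force software decode for the replayed file
      inputs:
        - path: /media/frigate/car-stopping.mp4
          input_args: -re -stream_loop -1 -fflags +genpts
          roles:
            - detect
```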
````diff
@@ -9,8 +9,20 @@ Frigate includes built-in memory profiling using [memray](https://bloomberg.gith

 Memory profiling is controlled via the `FRIGATE_MEMRAY_MODULES` environment variable. Set it to a comma-separated list of module names you want to profile:

+```yaml
+# docker-compose example
+services:
+  frigate:
+    ...
+    environment:
+      - FRIGATE_MEMRAY_MODULES=frigate.embeddings,frigate.capture
+```
+
 ```bash
-export FRIGATE_MEMRAY_MODULES="frigate.review_segment_manager,frigate.capture"
+# docker run example
+docker run -e FRIGATE_MEMRAY_MODULES="frigate.embeddings" \
+  ...
+  --name frigate <frigate_image>
 ```

 ### Module Names
@@ -24,11 +36,12 @@ Frigate processes are named using a module-based naming scheme. Common module na
 - `frigate.output` - Output processing
 - `frigate.audio_manager` - Audio processing
 - `frigate.embeddings` - Embeddings processing
+- `frigate.embeddings_manager` - Embeddings manager

 You can also specify the full process name (including camera-specific identifiers) if you want to profile a specific camera:

 ```bash
-export FRIGATE_MEMRAY_MODULES="frigate.capture:front_door"
+FRIGATE_MEMRAY_MODULES=frigate.capture:front_door
 ```

 When you specify a module name (e.g., `frigate.capture`), all processes with that module prefix will be profiled. For example, `frigate.capture` will profile all camera capture processes.
@@ -55,11 +68,20 @@ After a process exits normally, you'll find HTML reports in `/config/memray_repo

 If a process crashes or you want to generate a report from an existing binary file, you can manually create the HTML report:

+- Run `memray` inside the Frigate container:
+
 ```bash
-memray flamegraph /config/memray_reports/<module_name>.bin
+docker-compose exec frigate memray flamegraph /config/memray_reports/<module_name>.bin
+# or
+docker exec -it <container_name_or_id> memray flamegraph /config/memray_reports/<module_name>.bin
 ```

 This will generate an HTML file that you can open in your browser.
+- You can also copy the `.bin` file to the host and run `memray` locally if you have it installed:
+
+  ```bash
+  docker cp <container_name_or_id>:/config/memray_reports/<module_name>.bin /tmp/
+  memray flamegraph /tmp/<module_name>.bin
+  ```

 ## Understanding the Reports
@@ -110,20 +132,4 @@ The interactive HTML reports allow you to:
 - Check that memray is properly installed (included by default in Frigate)
 - Verify the process actually started and ran (check process logs)

-## Example Usage
-
-```bash
-# Enable profiling for review and capture modules
-export FRIGATE_MEMRAY_MODULES="frigate.review_segment_manager,frigate.capture"
-
-# Start Frigate
-# ... let it run for a while ...
-
-# Check for reports
-ls -lh /config/memray_reports/
-
-# If a process crashed, manually generate report
-memray flamegraph /config/memray_reports/frigate_capture_front_door.bin
-```
-
 For more information about memray and interpreting reports, see the [official memray documentation](https://bloomberg.github.io/memray/).
````
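Combining the naming rules above, a sketch of a compose `environment` entry that profiles a single camera's capture process alongside a whole module (camera name `front_door` is an example; mixing the two forms in one list is an assumption based on the comma-separated syntax):

```yaml
environment:
  - FRIGATE_MEMRAY_MODULES=frigate.capture:front_door,frigate.embeddings
```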
````diff
@@ -132,6 +132,7 @@ const sidebars: SidebarsConfig = {
       "troubleshooting/gpu",
       "troubleshooting/edgetpu",
       "troubleshooting/memory",
+      "troubleshooting/dummy-camera",
     ],
     Development: [
       "development/contributing",
````
docs/static/frigate-api.yaml (vendored, 194 changes)

````diff
@@ -14,19 +14,38 @@ paths:
     get:
       tags:
         - Auth
-      summary: Auth
+      summary: Authenticate request
+      description: |-
+        Authenticates the current request based on proxy headers or JWT token.
+        This endpoint verifies authentication credentials and manages JWT token refresh.
+        On success, no JSON body is returned; authentication state is communicated via response headers and cookies.
       operationId: auth_auth_get
       responses:
         "200":
           description: Successful Response
           content:
             application/json:
               schema: {}
+        "202":
+          description: Authentication Accepted (no response body, different headers depending on auth method)
+          headers:
+            remote-user:
+              description: Authenticated username or "viewer" in proxy-only mode
+              schema:
+                type: string
+            remote-role:
+              description: Resolved role (e.g., admin, viewer, or custom)
+              schema:
+                type: string
+            Set-Cookie:
+              description: May include refreshed JWT cookie ("frigate-token") when applicable
+              schema:
+                type: string
+        "401":
+          description: Authentication Failed
   /profile:
     get:
       tags:
         - Auth
-      summary: Profile
+      summary: Get user profile
+      description: |-
+        Returns the current authenticated user's profile including username, role, and allowed cameras.
+        This endpoint requires authentication and returns information about the user's permissions.
       operationId: profile_profile_get
       responses:
         "200":
@@ -34,11 +53,16 @@ paths:
           content:
             application/json:
               schema: {}
+        "401":
+          description: Unauthorized
   /logout:
     get:
       tags:
         - Auth
-      summary: Logout
+      summary: Logout user
+      description: |-
+        Logs out the current user by clearing the session cookie.
+        After logout, subsequent requests will require re-authentication.
       operationId: logout_logout_get
       responses:
         "200":
@@ -46,11 +70,22 @@ paths:
           content:
             application/json:
               schema: {}
+        "303":
+          description: See Other (redirects to login page)
   /login:
     post:
       tags:
         - Auth
-      summary: Login
+      summary: Login with credentials
+      description: |-
+        Authenticates a user with username and password.
+        Returns a JWT token as a secure HTTP-only cookie that can be used for subsequent API requests.
+        The JWT token can also be retrieved from the response and used as a Bearer token in the Authorization header.
+
+        Example using Bearer token:
+        ```
+        curl -H "Authorization: Bearer <token_value>" https://frigate_ip:8971/api/profile
+        ```
       operationId: login_login_post
       requestBody:
         required: true
@@ -64,6 +99,11 @@ paths:
           content:
             application/json:
               schema: {}
+        "401":
+          description: Login Failed - Invalid credentials
+          content:
+            application/json:
+              schema: {}
         "422":
           description: Validation Error
           content:
@@ -74,7 +114,10 @@
     get:
       tags:
         - Auth
-      summary: Get Users
+      summary: Get all users
+      description: |-
+        Returns a list of all users with their usernames and roles.
+        Requires admin role. Each user object contains the username and assigned role.
       operationId: get_users_users_get
       responses:
         "200":
@@ -82,10 +125,19 @@ paths:
           content:
             application/json:
               schema: {}
+        "403":
+          description: Forbidden - Admin role required
     post:
       tags:
         - Auth
-      summary: Create User
+      summary: Create new user
+      description: |-
+        Creates a new user with the specified username, password, and role.
+        Requires admin role. Password must meet strength requirements:
+        - Minimum 8 characters
+        - At least one uppercase letter
+        - At least one digit
+        - At least one special character (!@#$%^&*(),.?":{}\|<>)
       operationId: create_user_users_post
       requestBody:
         required: true
@@ -99,6 +151,13 @@ paths:
           content:
             application/json:
               schema: {}
+        "400":
+          description: Bad Request - Invalid username or role
+          content:
+            application/json:
+              schema: {}
+        "403":
+          description: Forbidden - Admin role required
         "422":
           description: Validation Error
           content:
@@ -109,7 +168,10 @@ paths:
     delete:
       tags:
         - Auth
-      summary: Delete User
+      summary: Delete user
+      description: |-
+        Deletes a user by username. The built-in admin user cannot be deleted.
+        Requires admin role. Returns success message or error if user not found.
       operationId: delete_user_users__username__delete
       parameters:
         - name: username
@@ -118,12 +180,15 @@ paths:
           schema:
             type: string
           title: Username
+          description: The username of the user to delete
       responses:
         "200":
           description: Successful Response
           content:
             application/json:
               schema: {}
+        "403":
+          description: Forbidden - Cannot delete admin user or admin role required
         "422":
           description: Validation Error
           content:
@@ -134,7 +199,17 @@ paths:
     put:
       tags:
         - Auth
-      summary: Update Password
+      summary: Update user password
+      description: |-
+        Updates a user's password. Users can only change their own password unless they have admin role.
+        Requires the current password to verify identity for non-admin users.
+        Password must meet strength requirements:
+        - Minimum 8 characters
+        - At least one uppercase letter
+        - At least one digit
+        - At least one special character (!@#$%^&*(),.?":{}\|<>)
+
+        If user changes their own password, a new JWT cookie is automatically issued.
       operationId: update_password_users__username__password_put
       parameters:
         - name: username
@@ -143,6 +218,7 @@ paths:
           schema:
             type: string
           title: Username
+          description: The username of the user whose password to update
       requestBody:
         required: true
         content:
@@ -155,6 +231,14 @@ paths:
           content:
             application/json:
               schema: {}
+        "400":
+          description: Bad Request - Current password required or password doesn't meet requirements
+        "401":
+          description: Unauthorized - Current password is incorrect
+        "403":
+          description: Forbidden - Viewers can only update their own password
+        "404":
+          description: Not Found - User not found
         "422":
           description: Validation Error
           content:
@@ -165,7 +249,10 @@ paths:
     put:
       tags:
         - Auth
-      summary: Update Role
+      summary: Update user role
+      description: |-
+        Updates a user's role. The built-in admin user's role cannot be modified.
+        Requires admin role. Valid roles are defined in the configuration.
       operationId: update_role_users__username__role_put
       parameters:
         - name: username
@@ -174,6 +261,7 @@ paths:
          schema:
            type: string
          title: Username
+          description: The username of the user whose role to update
       requestBody:
         required: true
         content:
@@ -186,6 +274,10 @@ paths:
           content:
             application/json:
               schema: {}
+        "400":
+          description: Bad Request - Invalid role
+        "403":
+          description: Forbidden - Cannot modify admin user's role or admin role required
         "422":
           description: Validation Error
           content:
@@ -524,6 +616,32 @@ paths:
           application/json:
             schema:
               $ref: "#/components/schemas/HTTPValidationError"
+  /classification/attributes:
+    get:
+      tags:
+        - Classification
+      summary: Get custom classification attributes
+      description: |-
+        Returns custom classification attributes for a given object type.
+        Only includes models with classification_type set to 'attribute'.
+        By default returns a flat sorted list of all attribute labels.
+        If group_by_model is true, returns attributes grouped by model name.
+      operationId: get_custom_attributes_classification_attributes_get
+      parameters:
+        - name: object_type
+          in: query
+          schema:
+            type: string
+        - name: group_by_model
+          in: query
+          schema:
+            type: boolean
+            default: false
+      responses:
+        "200":
+          description: Successful Response
+        "422":
+          description: Validation Error
   /classification/{name}/dataset:
     get:
       tags:
````
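A quick way to exercise the endpoint defined above (object type `person` is an example; authentication as in the Bearer token guide earlier):

```bash
# Flat, de-duplicated list of attribute labels across attribute-type models
curl -k -H "Authorization: Bearer $TOKEN" \
  "https://frigate_ip:8971/api/classification/attributes?object_type=person"

# The same data grouped by model name
curl -k -H "Authorization: Bearer $TOKEN" \
  "https://frigate_ip:8971/api/classification/attributes?object_type=person&group_by_model=true"
```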
````diff
@@ -2820,6 +2938,42 @@
           application/json:
             schema:
               $ref: "#/components/schemas/HTTPValidationError"
+  /events/{event_id}/attributes:
+    post:
+      tags:
+        - Events
+      summary: Set custom classification attributes
+      description: |-
+        Sets an event's custom classification attributes for all attribute-type
+        models that apply to the event's object type.
+        Returns a success message or an error if the event is not found.
+      operationId: set_attributes_events__event_id__attributes_post
+      parameters:
+        - name: event_id
+          in: path
+          required: true
+          schema:
+            type: string
+            title: Event Id
+      requestBody:
+        required: true
+        content:
+          application/json:
+            schema:
+              $ref: "#/components/schemas/EventsAttributesBody"
+      responses:
+        "200":
+          description: Successful Response
+          content:
+            application/json:
+              schema:
+                $ref: "#/components/schemas/GenericResponse"
+        "422":
+          description: Validation Error
+          content:
+            application/json:
+              schema:
+                $ref: "#/components/schemas/HTTPValidationError"
   /events/{event_id}/description:
     post:
       tags:
@@ -4867,6 +5021,18 @@ components:
       required:
         - subLabel
       title: EventsSubLabelBody
+    EventsAttributesBody:
+      properties:
+        attributes:
+          type: object
+          title: Attributes
+          description: Object with model names as keys and attribute values
+          additionalProperties:
+            type: string
+      type: object
+      required:
+        - attributes
+      title: EventsAttributesBody
     ExportModel:
       properties:
         id:
````
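A hedged sketch of calling the new endpoint; note the spec above describes the body as an object keyed by model name, while the `EventsAttributesBody` Pydantic model later in this changeset accepts a list of strings, so the list form is shown here. The event id and attribute value are placeholders:

```bash
curl -k -X POST "https://frigate_ip:8971/api/events/<event_id>/attributes" \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"attributes": ["helmet"]}'
```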
````diff
@@ -143,17 +143,6 @@ def require_admin_by_default():
     return admin_checker


-def _is_authenticated(request: Request) -> bool:
-    """
-    Helper to determine if a request is from an authenticated user.
-
-    Returns True if the request has a valid authenticated user (not anonymous).
-    Port 5000 internal requests are considered anonymous despite having admin role.
-    """
-    username = request.headers.get("remote-user")
-    return username is not None and username != "anonymous"
-
-
 def allow_public():
     """
     Override dependency to allow unauthenticated access to an endpoint.
@@ -173,27 +162,24 @@

 def allow_any_authenticated():
     """
-    Override dependency to allow any authenticated user (bypass admin requirement).
+    Override dependency to allow any request that passed through the /auth endpoint.

     Allows:
-    - Port 5000 internal requests (have admin role despite anonymous user)
-    - Any authenticated user with a real username (not "anonymous")
+    - Port 5000 internal requests (remote-user: "anonymous", remote-role: "admin")
+    - Authenticated users with JWT tokens (remote-user: username)
+    - Unauthenticated requests when auth is disabled (remote-user: "viewer")

     Rejects:
-    - Port 8971 requests with anonymous user (auth disabled, no proxy auth)
+    - Requests with no remote-user header (did not pass through /auth endpoint)

     Example:
         @router.get("/authenticated-endpoint", dependencies=[Depends(allow_any_authenticated())])
     """

     async def auth_checker(request: Request):
-        # Port 5000 requests have admin role and should be allowed
-        role = request.headers.get("remote-role")
-        if role == "admin":
-            return
-
-        # Otherwise require a real authenticated user (not anonymous)
-        if not _is_authenticated(request):
+        # Ensure a remote-user has been set by the /auth endpoint
+        username = request.headers.get("remote-user")
+        if username is None:
             raise HTTPException(status_code=401, detail="Authentication required")
         return
@@ -549,7 +535,37 @@ def resolve_role(


 # Endpoints
-@router.get("/auth", dependencies=[Depends(allow_public())])
+@router.get(
+    "/auth",
+    dependencies=[Depends(allow_public())],
+    summary="Authenticate request",
+    description=(
+        "Authenticates the current request based on proxy headers or JWT token. "
+        "This endpoint verifies authentication credentials and manages JWT token refresh. "
+        "On success, no JSON body is returned; authentication state is communicated via response headers and cookies."
+    ),
+    status_code=202,
+    responses={
+        202: {
+            "description": "Authentication Accepted (no response body)",
+            "headers": {
+                "remote-user": {
+                    "description": 'Authenticated username or "viewer" in proxy-only mode',
+                    "schema": {"type": "string"},
+                },
+                "remote-role": {
+                    "description": "Resolved role (e.g., admin, viewer, or custom)",
+                    "schema": {"type": "string"},
+                },
+                "Set-Cookie": {
+                    "description": "May include refreshed JWT cookie when applicable",
+                    "schema": {"type": "string"},
+                },
+            },
+        },
+        401: {"description": "Authentication Failed"},
+    },
+)
 def auth(request: Request):
     auth_config: AuthConfig = request.app.frigate_config.auth
     proxy_config: ProxyConfig = request.app.frigate_config.proxy
@@ -576,12 +592,12 @@
     # if auth is disabled, just apply the proxy header map and return success
     if not auth_config.enabled:
         # pass the user header value from the upstream proxy if a mapping is specified
-        # or use anonymous if none are specified
+        # or use viewer if none are specified
         user_header = proxy_config.header_map.user
         success_response.headers["remote-user"] = (
-            request.headers.get(user_header, default="anonymous")
+            request.headers.get(user_header, default="viewer")
             if user_header
-            else "anonymous"
+            else "viewer"
         )

         # parse header and resolve a valid role
@@ -689,9 +705,14 @@
     return fail_response


-@router.get("/profile", dependencies=[Depends(allow_any_authenticated())])
+@router.get(
+    "/profile",
+    dependencies=[Depends(allow_any_authenticated())],
+    summary="Get user profile",
+    description="Returns the current authenticated user's profile including username, role, and allowed cameras. This endpoint requires authentication and returns information about the user's permissions.",
+)
 def profile(request: Request):
-    username = request.headers.get("remote-user", "anonymous")
+    username = request.headers.get("remote-user", "viewer")
     role = request.headers.get("remote-role", "viewer")

     all_camera_names = set(request.app.frigate_config.cameras.keys())
@@ -703,7 +724,12 @@
     )


-@router.get("/logout", dependencies=[Depends(allow_public())])
+@router.get(
+    "/logout",
+    dependencies=[Depends(allow_public())],
+    summary="Logout user",
+    description="Logs out the current user by clearing the session cookie. After logout, subsequent requests will require re-authentication.",
+)
 def logout(request: Request):
     auth_config: AuthConfig = request.app.frigate_config.auth
     response = RedirectResponse("/login", status_code=303)
@@ -714,7 +740,12 @@
 limiter = Limiter(key_func=get_remote_addr)


-@router.post("/login", dependencies=[Depends(allow_public())])
+@router.post(
+    "/login",
+    dependencies=[Depends(allow_public())],
+    summary="Login with credentials",
+    description='Authenticates a user with username and password. Returns a JWT token as a secure HTTP-only cookie that can be used for subsequent API requests. The JWT token can also be retrieved from the response and used as a Bearer token in the Authorization header.\n\nExample using Bearer token:\n```\ncurl -H "Authorization: Bearer <token_value>" https://frigate_ip:8971/api/profile\n```',
+)
 @limiter.limit(limit_value=rateLimiter.get_limit)
 def login(request: Request, body: AppPostLoginBody):
     JWT_COOKIE_NAME = request.app.frigate_config.auth.cookie_name
@@ -752,7 +783,12 @@
     return JSONResponse(content={"message": "Login failed"}, status_code=401)


-@router.get("/users", dependencies=[Depends(require_role(["admin"]))])
+@router.get(
+    "/users",
+    dependencies=[Depends(require_role(["admin"]))],
+    summary="Get all users",
+    description="Returns a list of all users with their usernames and roles. Requires admin role. Each user object contains the username and assigned role.",
+)
 def get_users():
     exports = (
         User.select(User.username, User.role).order_by(User.username).dicts().iterator()
@@ -760,7 +796,12 @@
     return JSONResponse([e for e in exports])


-@router.post("/users", dependencies=[Depends(require_role(["admin"]))])
+@router.post(
+    "/users",
+    dependencies=[Depends(require_role(["admin"]))],
+    summary="Create new user",
+    description='Creates a new user with the specified username, password, and role. Requires admin role. Password must meet strength requirements: minimum 8 characters, at least one uppercase letter, at least one digit, and at least one special character (!@#$%^&*(),.?":{} |<>).',
+)
 def create_user(
     request: Request,
     body: AppPostUsersBody,
@@ -789,7 +830,12 @@
     return JSONResponse(content={"username": body.username})


-@router.delete("/users/{username}", dependencies=[Depends(require_role(["admin"]))])
+@router.delete(
+    "/users/{username}",
+    dependencies=[Depends(require_role(["admin"]))],
+    summary="Delete user",
+    description="Deletes a user by username. The built-in admin user cannot be deleted. Requires admin role. Returns success message or error if user not found.",
+)
 def delete_user(request: Request, username: str):
     # Prevent deletion of the built-in admin user
     if username == "admin":
@@ -802,7 +848,10 @@


 @router.put(
-    "/users/{username}/password", dependencies=[Depends(allow_any_authenticated())]
+    "/users/{username}/password",
+    dependencies=[Depends(allow_any_authenticated())],
+    summary="Update user password",
+    description="Updates a user's password. Users can only change their own password unless they have admin role. Requires the current password to verify identity for non-admin users. Password must meet strength requirements: minimum 8 characters, at least one uppercase letter, at least one digit, and at least one special character (!@#$%^&*(),.?\":{} |<>). If user changes their own password, a new JWT cookie is automatically issued.",
 )
 async def update_password(
     request: Request,
@@ -830,13 +879,9 @@ async def update_password(
     except DoesNotExist:
         return JSONResponse(content={"message": "User not found"}, status_code=404)

-    # Require old_password when:
-    # 1. Non-admin user is changing another user's password (admin only action)
-    # 2. Any user is changing their own password
-    is_changing_own_password = current_username == username
-    is_non_admin = current_role != "admin"
-
-    if is_changing_own_password or is_non_admin:
+    # Require old_password when non-admin user is changing any password
+    # Admin users changing passwords do NOT need to provide the current password
+    if current_role != "admin":
         if not body.old_password:
             return JSONResponse(
                 content={"message": "Current password is required"},
@@ -887,6 +932,8 @@ async def update_password(
 @router.put(
     "/users/{username}/role",
     dependencies=[Depends(require_role(["admin"]))],
+    summary="Update user role",
+    description="Updates a user's role. The built-in admin user's role cannot be modified. Requires admin role. Valid roles are defined in the configuration.",
 )
 async def update_role(
     request: Request,
````
````diff
@@ -31,6 +31,7 @@ from frigate.api.defs.response.generic_response import GenericResponse
 from frigate.api.defs.tags import Tags
 from frigate.config import FrigateConfig
 from frigate.config.camera import DetectConfig
+from frigate.config.classification import ObjectClassificationType
 from frigate.const import CLIPS_DIR, FACE_DIR, MODEL_CACHE_DIR
 from frigate.embeddings import EmbeddingsContext
 from frigate.models import Event
@@ -39,6 +40,7 @@ from frigate.util.classification import (
     collect_state_classification_examples,
     get_dataset_image_count,
     read_training_metadata,
+    write_training_metadata,
 )
 from frigate.util.file import get_event_snapshot
@@ -622,6 +624,59 @@ def get_classification_dataset(name: str):
     )


+@router.get(
+    "/classification/attributes",
+    summary="Get custom classification attributes",
+    description="""Returns custom classification attributes for a given object type.
+    Only includes models with classification_type set to 'attribute'.
+    By default returns a flat sorted list of all attribute labels.
+    If group_by_model is true, returns attributes grouped by model name.""",
+)
+def get_custom_attributes(
+    request: Request, object_type: str = None, group_by_model: bool = False
+):
+    models_with_attributes = {}
+
+    for (
+        model_key,
+        model_config,
+    ) in request.app.frigate_config.classification.custom.items():
+        if (
+            not model_config.enabled
+            or not model_config.object_config
+            or model_config.object_config.classification_type
+            != ObjectClassificationType.attribute
+        ):
+            continue
+
+        model_objects = getattr(model_config.object_config, "objects", []) or []
+        if object_type is not None and object_type not in model_objects:
+            continue
+
+        dataset_dir = os.path.join(CLIPS_DIR, sanitize_filename(model_key), "dataset")
+        if not os.path.exists(dataset_dir):
+            continue
+
+        attributes = []
+        for category_name in os.listdir(dataset_dir):
+            category_dir = os.path.join(dataset_dir, category_name)
+            if os.path.isdir(category_dir) and category_name != "none":
+                attributes.append(category_name)
+
+        if attributes:
+            model_name = model_config.name or model_key
+            models_with_attributes[model_name] = sorted(attributes)
+
+    if group_by_model:
+        return JSONResponse(content=models_with_attributes)
+    else:
+        # Flatten to a unique sorted list
+        all_attributes = set()
+        for attributes in models_with_attributes.values():
+            all_attributes.update(attributes)
+        return JSONResponse(content=sorted(list(all_attributes)))
+
+
 @router.get(
     "/classification/{name}/train",
     summary="Get classification train images",
@@ -788,6 +843,12 @@ def rename_classification_category(

     try:
         os.rename(old_folder, new_folder)
+
+        # Mark dataset as ready to train by resetting training metadata
+        # This ensures the dataset is marked as changed after renaming
+        sanitized_name = sanitize_filename(name)
+        write_training_metadata(sanitized_name, 0)
+
         return JSONResponse(
             content=(
                 {
````
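Reading the handler above, the two response shapes would look roughly like this (model and attribute names are hypothetical):

```
# group_by_model=true: attributes grouped by model name
{"helmet_classifier": ["helmet", "no_helmet"], "vest_classifier": ["yellow_vest"]}

# default (flat, de-duplicated, sorted)
["helmet", "no_helmet", "yellow_vest"]
```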
````diff
@@ -12,6 +12,7 @@ class EventsQueryParams(BaseModel):
     labels: Optional[str] = "all"
     sub_label: Optional[str] = "all"
     sub_labels: Optional[str] = "all"
+    attributes: Optional[str] = "all"
     zone: Optional[str] = "all"
     zones: Optional[str] = "all"
     limit: Optional[int] = 100
@@ -58,6 +59,8 @@ class EventsSearchQueryParams(BaseModel):
     limit: Optional[int] = 50
     cameras: Optional[str] = "all"
     labels: Optional[str] = "all"
+    sub_labels: Optional[str] = "all"
+    attributes: Optional[str] = "all"
     zones: Optional[str] = "all"
     after: Optional[float] = None
     before: Optional[float] = None
````
````diff
@@ -24,6 +24,13 @@ class EventsLPRBody(BaseModel):
     )


+class EventsAttributesBody(BaseModel):
+    attributes: List[str] = Field(
+        title="Selected classification attributes for the event",
+        default_factory=list,
+    )
+
+
 class EventsDescriptionBody(BaseModel):
     description: Union[str, None] = Field(title="The description of the event")
````
@@ -37,6 +37,7 @@ from frigate.api.defs.query.regenerate_query_parameters import (
|
||||
RegenerateQueryParameters,
|
||||
)
|
||||
from frigate.api.defs.request.events_body import (
|
||||
EventsAttributesBody,
|
||||
EventsCreateBody,
|
||||
EventsDeleteBody,
|
||||
EventsDescriptionBody,
|
||||
@@ -55,6 +56,7 @@ from frigate.api.defs.response.event_response import (
|
||||
from frigate.api.defs.response.generic_response import GenericResponse
|
||||
from frigate.api.defs.tags import Tags
|
||||
from frigate.comms.event_metadata_updater import EventMetadataTypeEnum
|
||||
from frigate.config.classification import ObjectClassificationType
|
||||
from frigate.const import CLIPS_DIR, TRIGGER_DIR
|
||||
from frigate.embeddings import EmbeddingsContext
|
||||
from frigate.models import Event, ReviewSegment, Timeline, Trigger
|
||||
@@ -99,6 +101,8 @@ def events(
|
||||
if sub_labels == "all" and sub_label != "all":
|
||||
sub_labels = sub_label
|
||||
|
||||
attributes = unquote(params.attributes)
|
||||
|
||||
zone = params.zone
|
||||
zones = params.zones
|
||||
|
||||
@@ -187,6 +191,17 @@ def events(
|
||||
sub_label_clause = reduce(operator.or_, sub_label_clauses)
|
||||
clauses.append((sub_label_clause))
|
||||
|
||||
if attributes != "all":
|
||||
# Custom classification results are stored as data[model_name] = result_value
|
||||
filtered_attributes = attributes.split(",")
|
||||
attribute_clauses = []
|
||||
|
||||
for attr in filtered_attributes:
|
||||
attribute_clauses.append(Event.data.cast("text") % f'*:"{attr}"*')
|
||||
|
||||
attribute_clause = reduce(operator.or_, attribute_clauses)
|
||||
clauses.append(attribute_clause)
|
||||
|
||||
if recognized_license_plate != "all":
|
||||
filtered_recognized_license_plates = recognized_license_plate.split(",")
|
||||
|
||||
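
The % operator on a peewee expression is a GLOB-style match, so each clause checks whether the JSON-serialized data column contains a value equal to the requested attribute under any key. A rough illustration of the pattern semantics, assuming the column serializes without spaces (fnmatch stands in for the database GLOB here):

import fnmatch

data_text = '{"car_color":"red","sub_label_score":0.92}'  # hypothetical Event.data
print(fnmatch.fnmatchcase(data_text, '*:"red"*'))   # True: some key maps to "red"
print(fnmatch.fnmatchcase(data_text, '*:"blue"*'))  # False: no key maps to "blue"
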
@@ -492,6 +507,8 @@ def events_search(
# Filters
cameras = params.cameras
labels = params.labels
sub_labels = params.sub_labels
attributes = params.attributes
zones = params.zones
after = params.after
before = params.before
@@ -566,6 +583,38 @@ def events_search(
if labels != "all":
event_filters.append((Event.label << labels.split(",")))

if sub_labels != "all":
# use matching so joined sub labels are included
# for example a sub label 'bob' would get events
# with sub labels 'bob' and 'bob, john'
sub_label_clauses = []
filtered_sub_labels = sub_labels.split(",")

if "None" in filtered_sub_labels:
filtered_sub_labels.remove("None")
sub_label_clauses.append((Event.sub_label.is_null()))

for label in filtered_sub_labels:
sub_label_clauses.append(
(Event.sub_label.cast("text") == label)
) # include exact matches

# include this label when part of a list
sub_label_clauses.append((Event.sub_label.cast("text") % f"*{label},*"))
sub_label_clauses.append((Event.sub_label.cast("text") % f"*, {label}*"))

event_filters.append((reduce(operator.or_, sub_label_clauses)))

if attributes != "all":
# Custom classification results are stored as data[model_name] = result_value
filtered_attributes = attributes.split(",")
attribute_clauses = []

for attr in filtered_attributes:
attribute_clauses.append(Event.data.cast("text") % f'*:"{attr}"*')

event_filters.append(reduce(operator.or_, attribute_clauses))

if zones != "all":
zone_clauses = []
filtered_zones = zones.split(",")

@@ -1351,6 +1400,107 @@ async def set_plate(
)


@router.post(
"/events/{event_id}/attributes",
response_model=GenericResponse,
dependencies=[Depends(require_role(["admin"]))],
summary="Set custom classification attributes",
description=(
"Sets an event's custom classification attributes for all attribute-type "
"models that apply to the event's object type."
),
)
async def set_attributes(
request: Request,
event_id: str,
body: EventsAttributesBody,
):
try:
event: Event = Event.get(Event.id == event_id)
await require_camera_access(event.camera, request=request)
except DoesNotExist:
return JSONResponse(
content=({"success": False, "message": f"Event {event_id} not found."}),
status_code=404,
)

object_type = event.label
selected_attributes = set(body.attributes or [])
applied_updates: list[dict[str, str | float | None]] = []

for (
model_key,
model_config,
) in request.app.frigate_config.classification.custom.items():
# Only apply to enabled attribute classifiers that target this object type
if (
not model_config.enabled
or not model_config.object_config
or model_config.object_config.classification_type
!= ObjectClassificationType.attribute
or object_type not in (model_config.object_config.objects or [])
):
continue

# Get available labels from dataset directory
dataset_dir = os.path.join(CLIPS_DIR, sanitize_filename(model_key), "dataset")
available_labels = set()

if os.path.exists(dataset_dir):
for category_name in os.listdir(dataset_dir):
category_dir = os.path.join(dataset_dir, category_name)
if os.path.isdir(category_dir):
available_labels.add(category_name)

if not available_labels:
logger.warning(
"No dataset found for custom attribute model %s at %s",
model_key,
dataset_dir,
)
continue

# Find all selected attributes that apply to this model
model_name = model_config.name or model_key
matching_attrs = selected_attributes & available_labels

if matching_attrs:
# Publish updates for each selected attribute
for attr in matching_attrs:
request.app.event_metadata_updater.publish(
(event_id, model_name, attr, 1.0),
EventMetadataTypeEnum.attribute.value,
)
applied_updates.append(
{"model": model_name, "label": attr, "score": 1.0}
)
else:
# Clear this model's attribute
request.app.event_metadata_updater.publish(
(event_id, model_name, None, None),
EventMetadataTypeEnum.attribute.value,
)
applied_updates.append({"model": model_name, "label": None, "score": None})

if len(applied_updates) == 0:
return JSONResponse(
content={
"success": False,
"message": "No matching attributes found for this object type.",
},
status_code=400,
)

return JSONResponse(
content={
"success": True,
"message": f"Updated {len(applied_updates)} attribute(s)",
"applied": applied_updates,
},
status_code=200,
)
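
A hedged sketch of exercising the new endpoint (the host, port, event id, and auth handling are illustrative assumptions; the route and body schema are defined above):

import requests

resp = requests.post(
    "http://frigate.local:5000/api/events/1718200000.123456-abcd12/attributes",
    json={"attributes": ["red"]},  # an empty list clears the attribute per model
)
print(resp.status_code, resp.json())
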
@router.post(
"/events/{event_id}/description",
response_model=GenericResponse,

@@ -100,6 +100,10 @@ class FrigateApp:
)
if (
config.semantic_search.enabled
or any(
c.objects.genai.enabled or c.review.genai.enabled
for c in config.cameras.values()
)
or config.lpr.enabled
or config.face_recognition.enabled
or len(config.classification.custom) > 0

@@ -225,7 +225,8 @@ class MqttClient(Communicator):
"birdseye_mode",
"review_alerts",
"review_detections",
"genai",
"object_descriptions",
"review_descriptions",
]

for name in self.config.cameras.keys():

@@ -77,6 +77,9 @@ FFMPEG_HWACCEL_RKMPP = "preset-rkmpp"
FFMPEG_HWACCEL_AMF = "preset-amd-amf"
FFMPEG_HVC1_ARGS = ["-tag:v", "hvc1"]

# RKNN constants
SUPPORTED_RK_SOCS = ["rk3562", "rk3566", "rk3568", "rk3576", "rk3588"]

# Regex constants

REGEX_CAMERA_NAME = r"^[a-zA-Z0-9_-]+$"

@@ -374,6 +374,9 @@ class LicensePlateProcessingMixin:
combined_plate = re.sub(
pattern, replacement, combined_plate
)
logger.debug(
f"{camera}: Processing replace rule: '{pattern}' -> '{replacement}', result: '{combined_plate}'"
)
except re.error as e:
logger.warning(
f"{camera}: Invalid regex in replace_rules '{pattern}': {e}"
@@ -381,7 +384,7 @@ class LicensePlateProcessingMixin:

if combined_plate != original_combined:
logger.debug(
f"{camera}: Rules applied: '{original_combined}' -> '{combined_plate}'"
f"{camera}: All rules applied: '{original_combined}' -> '{combined_plate}'"
)

# Compute the combined area for qualifying boxes

@@ -131,8 +131,9 @@ class AudioTranscriptionPostProcessor(PostProcessorApi):
},
)

# Embed the description
self.embeddings.embed_description(event_id, transcription)
# Embed the description if semantic search is enabled
if self.config.semantic_search.enabled:
self.embeddings.embed_description(event_id, transcription)

except DoesNotExist:
logger.debug("No recording found for audio transcription post-processing")

@@ -131,6 +131,8 @@ class ObjectDescriptionProcessor(PostProcessorApi):
)
):
self._process_genai_description(event, camera_config, thumbnail)
else:
self.cleanup_event(event.id)

def __regenerate_description(self, event_id: str, source: str, force: bool) -> None:
"""Regenerate the description for an event."""
@@ -204,6 +206,17 @@ class ObjectDescriptionProcessor(PostProcessorApi):
)
return None

def cleanup_event(self, event_id: str) -> None:
"""Clean up tracked event data to prevent memory leaks.

This should be called when an event ends, regardless of whether
genai processing is triggered.
"""
if event_id in self.tracked_events:
del self.tracked_events[event_id]
if event_id in self.early_request_sent:
del self.early_request_sent[event_id]

def _read_and_crop_snapshot(self, event: Event) -> bytes | None:
"""Read, decode, and crop the snapshot image."""

@@ -299,9 +312,8 @@ class ObjectDescriptionProcessor(PostProcessorApi):
),
).start()

# Delete tracked events based on the event_id
if event.id in self.tracked_events:
del self.tracked_events[event.id]
# Clean up tracked events and early request state
self.cleanup_event(event.id)

def _genai_embed_description(self, event: Event, thumbnails: list[bytes]) -> None:
"""Embed the description for an event."""

@@ -311,6 +311,7 @@ class ReviewDescriptionProcessor(PostProcessorApi):
start_ts,
end_ts,
events_with_context,
self.config.review.genai.preferred_language,
self.config.review.genai.debug_save_thumbnails,
)
else:

@@ -13,7 +13,7 @@ from frigate.comms.event_metadata_updater import (
)
from frigate.config import FrigateConfig
from frigate.const import MODEL_CACHE_DIR
from frigate.log import redirect_output_to_logger
from frigate.log import suppress_stderr_during
from frigate.util.object import calculate_region

from ..types import DataProcessorMetrics
@@ -80,13 +80,14 @@ class BirdRealTimeProcessor(RealTimeProcessorApi):
except Exception as e:
logger.error(f"Failed to download {path}: {e}")

@redirect_output_to_logger(logger, logging.DEBUG)
def __build_detector(self) -> None:
self.interpreter = Interpreter(
model_path=os.path.join(MODEL_CACHE_DIR, "bird/bird.tflite"),
num_threads=2,
)
self.interpreter.allocate_tensors()
# Suppress TFLite delegate creation messages that bypass Python logging
with suppress_stderr_during("tflite_interpreter_init"):
self.interpreter = Interpreter(
model_path=os.path.join(MODEL_CACHE_DIR, "bird/bird.tflite"),
num_threads=2,
)
self.interpreter.allocate_tensors()
self.tensor_input_details = self.interpreter.get_input_details()
self.tensor_output_details = self.interpreter.get_output_details()

@@ -21,7 +21,7 @@ from frigate.config.classification import (
ObjectClassificationType,
)
from frigate.const import CLIPS_DIR, MODEL_CACHE_DIR
from frigate.log import redirect_output_to_logger
from frigate.log import suppress_stderr_during
from frigate.types import TrackedObjectUpdateTypesEnum
from frigate.util.builtin import EventsPerSecond, InferenceSpeed, load_labels
from frigate.util.object import box_overlaps, calculate_region
@@ -52,7 +52,7 @@ class CustomStateClassificationProcessor(RealTimeProcessorApi):
self.requestor = requestor
self.model_dir = os.path.join(MODEL_CACHE_DIR, self.model_config.name)
self.train_dir = os.path.join(CLIPS_DIR, self.model_config.name, "train")
self.interpreter: Interpreter | None = None
self.interpreter: Interpreter = None
self.tensor_input_details: dict[str, Any] | None = None
self.tensor_output_details: dict[str, Any] | None = None
self.labelmap: dict[int, str] = {}
@@ -72,8 +72,12 @@ class CustomStateClassificationProcessor(RealTimeProcessorApi):
self.last_run = datetime.datetime.now().timestamp()
self.__build_detector()

@redirect_output_to_logger(logger, logging.DEBUG)
def __build_detector(self) -> None:
try:
from tflite_runtime.interpreter import Interpreter
except ModuleNotFoundError:
from tensorflow.lite.python.interpreter import Interpreter

model_path = os.path.join(self.model_dir, "model.tflite")
labelmap_path = os.path.join(self.model_dir, "labelmap.txt")

@@ -84,11 +88,13 @@ class CustomStateClassificationProcessor(RealTimeProcessorApi):
self.labelmap = {}
return

self.interpreter = Interpreter(
model_path=model_path,
num_threads=2,
)
self.interpreter.allocate_tensors()
# Suppress TFLite delegate creation messages that bypass Python logging
with suppress_stderr_during("tflite_interpreter_init"):
self.interpreter = Interpreter(
model_path=model_path,
num_threads=2,
)
self.interpreter.allocate_tensors()
self.tensor_input_details = self.interpreter.get_input_details()
self.tensor_output_details = self.interpreter.get_output_details()
self.labelmap = load_labels(labelmap_path, prefill=0)
@@ -224,28 +230,34 @@ class CustomStateClassificationProcessor(RealTimeProcessorApi):
if not should_run:
return

x, y, x2, y2 = calculate_region(
frame.shape,
crop[0],
crop[1],
crop[2],
crop[3],
224,
1.0,
)

rgb = cv2.cvtColor(frame, cv2.COLOR_YUV2RGB_I420)
frame = rgb[
y:y2,
x:x2,
]
height, width = rgb.shape[:2]

if frame.shape != (224, 224):
try:
resized_frame = cv2.resize(frame, (224, 224))
except Exception:
logger.warning("Failed to resize image for state classification")
return
# Convert normalized crop coordinates to pixel values
x1 = int(camera_config.crop[0] * width)
y1 = int(camera_config.crop[1] * height)
x2 = int(camera_config.crop[2] * width)
y2 = int(camera_config.crop[3] * height)

# Clip coordinates to frame boundaries
x1 = max(0, min(x1, width))
y1 = max(0, min(y1, height))
x2 = max(0, min(x2, width))
y2 = max(0, min(y2, height))

if x2 <= x1 or y2 <= y1:
logger.warning(
f"Invalid crop coordinates for {camera}: [{x1}, {y1}, {x2}, {y2}]"
)
return

frame = rgb[y1:y2, x1:x2]

try:
resized_frame = cv2.resize(frame, (224, 224))
except Exception:
logger.warning("Failed to resize image for state classification")
return

if self.interpreter is None:
# When interpreter is None, always save (score is 0.0, which is < 1.0)
@@ -345,7 +357,7 @@ class CustomObjectClassificationProcessor(RealTimeProcessorApi):
self.model_config = model_config
self.model_dir = os.path.join(MODEL_CACHE_DIR, self.model_config.name)
self.train_dir = os.path.join(CLIPS_DIR, self.model_config.name, "train")
self.interpreter: Interpreter | None = None
self.interpreter: Interpreter = None
self.sub_label_publisher = sub_label_publisher
self.requestor = requestor
self.tensor_input_details: dict[str, Any] | None = None
@@ -366,7 +378,6 @@ class CustomObjectClassificationProcessor(RealTimeProcessorApi):

self.__build_detector()

@redirect_output_to_logger(logger, logging.DEBUG)
def __build_detector(self) -> None:
model_path = os.path.join(self.model_dir, "model.tflite")
labelmap_path = os.path.join(self.model_dir, "labelmap.txt")
@@ -378,11 +389,13 @@ class CustomObjectClassificationProcessor(RealTimeProcessorApi):
self.labelmap = {}
return

self.interpreter = Interpreter(
model_path=model_path,
num_threads=2,
)
self.interpreter.allocate_tensors()
# Suppress TFLite delegate creation messages that bypass Python logging
with suppress_stderr_during("tflite_interpreter_init"):
self.interpreter = Interpreter(
model_path=model_path,
num_threads=2,
)
self.interpreter.allocate_tensors()
self.tensor_input_details = self.interpreter.get_input_details()
self.tensor_output_details = self.interpreter.get_output_details()
self.labelmap = load_labels(labelmap_path, prefill=0)
@@ -508,6 +521,13 @@ class CustomObjectClassificationProcessor(RealTimeProcessorApi):
0.0,
max_files=save_attempts,
)

# Still track history even when model doesn't exist to respect MAX_OBJECT_CLASSIFICATIONS
# Add an entry with "unknown" label so the history limit is enforced
if object_id not in self.classification_history:
self.classification_history[object_id] = []

self.classification_history[object_id].append(("unknown", 0.0, now))
return

input = np.expand_dims(resized_crop, axis=0)
@@ -649,5 +669,5 @@ def write_classification_attempt(

if len(files) > max_files:
os.unlink(os.path.join(folder, files[-1]))
except FileNotFoundError:
except (FileNotFoundError, OSError):
pass

@@ -131,6 +131,7 @@ class ONNXModelRunner(BaseModelRunner):

return model_type in [
EnrichmentModelTypeEnum.paddleocr.value,
EnrichmentModelTypeEnum.yolov9_license_plate.value,
EnrichmentModelTypeEnum.jina_v1.value,
EnrichmentModelTypeEnum.jina_v2.value,
EnrichmentModelTypeEnum.facenet.value,
@@ -169,6 +170,7 @@ class CudaGraphRunner(BaseModelRunner):

return model_type not in [
ModelTypeEnum.yolonas.value,
ModelTypeEnum.dfine.value,
EnrichmentModelTypeEnum.paddleocr.value,
EnrichmentModelTypeEnum.jina_v1.value,
EnrichmentModelTypeEnum.jina_v2.value,

@@ -5,7 +5,7 @@ from typing_extensions import Literal

from frigate.detectors.detection_api import DetectionApi
from frigate.detectors.detector_config import BaseDetectorConfig
from frigate.log import redirect_output_to_logger
from frigate.log import suppress_stderr_during

from ..detector_utils import tflite_detect_raw, tflite_init

@@ -28,12 +28,13 @@ class CpuDetectorConfig(BaseDetectorConfig):
class CpuTfl(DetectionApi):
type_key = DETECTOR_KEY

@redirect_output_to_logger(logger, logging.DEBUG)
def __init__(self, detector_config: CpuDetectorConfig):
interpreter = Interpreter(
model_path=detector_config.model.path,
num_threads=detector_config.num_threads or 3,
)
# Suppress TFLite delegate creation messages that bypass Python logging
with suppress_stderr_during("tflite_interpreter_init"):
interpreter = Interpreter(
model_path=detector_config.model.path,
num_threads=detector_config.num_threads or 3,
)

tflite_init(self, interpreter)

@@ -8,7 +8,7 @@ import cv2
import numpy as np
from pydantic import Field

from frigate.const import MODEL_CACHE_DIR
from frigate.const import MODEL_CACHE_DIR, SUPPORTED_RK_SOCS
from frigate.detectors.detection_api import DetectionApi
from frigate.detectors.detection_runners import RKNNModelRunner
from frigate.detectors.detector_config import BaseDetectorConfig, ModelTypeEnum
@@ -19,8 +19,6 @@ logger = logging.getLogger(__name__)

DETECTOR_KEY = "rknn"

supported_socs = ["rk3562", "rk3566", "rk3568", "rk3576", "rk3588"]

supported_models = {
ModelTypeEnum.yologeneric: "^frigate-fp16-yolov9-[cemst]$",
ModelTypeEnum.yolonas: "^deci-fp16-yolonas_[sml]$",
@@ -82,9 +80,9 @@ class Rknn(DetectionApi):
except FileNotFoundError:
raise Exception("Make sure to run docker in privileged mode.")

if soc not in supported_socs:
if soc not in SUPPORTED_RK_SOCS:
raise Exception(
f"Your SoC is not supported. Your SoC is: {soc}. Currently these SoCs are supported: {supported_socs}."
f"Your SoC is not supported. Your SoC is: {soc}. Currently these SoCs are supported: {SUPPORTED_RK_SOCS}."
)

return soc
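
get_soc_type() itself is not shown in this diff; based on the privileged-mode error above it evidently reads the device tree. A rough sketch of that approach, with the path and parsing as assumptions:

def get_soc_type_sketch() -> str | None:
    # /proc/device-tree/compatible holds NUL-separated strings such as
    # "rockchip,rk3588" on Rockchip boards (requires a privileged container).
    try:
        with open("/proc/device-tree/compatible", "rb") as f:
            entries = f.read().split(b"\x00")
    except FileNotFoundError:
        return None
    for entry in entries:
        if entry.startswith(b"rockchip,"):
            return entry.decode().split(",", 1)[1]
    return None
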
@@ -522,6 +522,8 @@ class EmbeddingMaintainer(threading.Thread):
)
elif isinstance(processor, ObjectDescriptionProcessor):
if not updated_db:
# Still need to cleanup tracked events even if not processing
processor.cleanup_event(event_id)
continue

processor.process_data(

@@ -8,7 +8,7 @@ import numpy as np
from frigate.const import MODEL_CACHE_DIR
from frigate.detectors.detection_runners import get_optimized_runner
from frigate.embeddings.types import EnrichmentModelTypeEnum
from frigate.log import redirect_output_to_logger
from frigate.log import suppress_stderr_during
from frigate.util.downloader import ModelDownloader

from ...config import FaceRecognitionConfig
@@ -57,17 +57,18 @@ class FaceNetEmbedding(BaseEmbedding):
self._load_model_and_utils()
logger.debug(f"models are already downloaded for {self.model_name}")

@redirect_output_to_logger(logger, logging.DEBUG)
def _load_model_and_utils(self):
if self.runner is None:
if self.downloader:
self.downloader.wait_for_download()

self.runner = Interpreter(
model_path=os.path.join(MODEL_CACHE_DIR, "facedet/facenet.tflite"),
num_threads=2,
)
self.runner.allocate_tensors()
# Suppress TFLite delegate creation messages that bypass Python logging
with suppress_stderr_during("tflite_interpreter_init"):
self.runner = Interpreter(
model_path=os.path.join(MODEL_CACHE_DIR, "facedet/facenet.tflite"),
num_threads=2,
)
self.runner.allocate_tensors()
self.tensor_input_details = self.runner.get_input_details()
self.tensor_output_details = self.runner.get_output_details()

@@ -186,6 +186,9 @@ class JinaV1ImageEmbedding(BaseEmbedding):
download_func=self._download_model,
)
self.downloader.ensure_model_files()
# Avoid lazy loading in worker threads: block until downloads complete
# and load the model on the main thread during initialization.
self._load_model_and_utils()
else:
self.downloader = None
ModelDownloader.mark_files_state(

@@ -65,6 +65,9 @@ class JinaV2Embedding(BaseEmbedding):
download_func=self._download_model,
)
self.downloader.ensure_model_files()
# Avoid lazy loading in worker threads: block until downloads complete
# and load the model on the main thread during initialization.
self._load_model_and_utils()
else:
self.downloader = None
ModelDownloader.mark_files_state(

@@ -34,7 +34,7 @@ from frigate.data_processing.real_time.audio_transcription import (
AudioTranscriptionRealTimeProcessor,
)
from frigate.ffmpeg_presets import parse_preset_input
from frigate.log import LogPipe, redirect_output_to_logger
from frigate.log import LogPipe, suppress_stderr_during
from frigate.object_detection.base import load_labels
from frigate.util.builtin import get_ffmpeg_arg_list
from frigate.util.process import FrigateProcess
@@ -367,17 +367,17 @@ class AudioEventMaintainer(threading.Thread):


class AudioTfl:
@redirect_output_to_logger(logger, logging.DEBUG)
def __init__(self, stop_event: threading.Event, num_threads=2):
self.stop_event = stop_event
self.num_threads = num_threads
self.labels = load_labels("/audio-labelmap.txt", prefill=521)
self.interpreter = Interpreter(
model_path="/cpu_audio_model.tflite",
num_threads=self.num_threads,
)

self.interpreter.allocate_tensors()
# Suppress TFLite delegate creation messages that bypass Python logging
with suppress_stderr_during("tflite_interpreter_init"):
self.interpreter = Interpreter(
model_path="/cpu_audio_model.tflite",
num_threads=self.num_threads,
)
self.interpreter.allocate_tensors()

self.tensor_input_details = self.interpreter.get_input_details()
self.tensor_output_details = self.interpreter.get_output_details()

@@ -46,7 +46,7 @@ def should_update_state(prev_event: Event, current_event: Event) -> bool:
if prev_event["sub_label"] != current_event["sub_label"]:
return True

if len(prev_event["current_zones"]) < len(current_event["current_zones"]):
if set(prev_event["current_zones"]) != set(current_event["current_zones"]):
return True

return False
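
The set comparison is the substantive fix here: the old length check missed zone exits and same-size zone swaps. A small illustration:

prev_zones = ["driveway", "porch"]
curr_zones = ["driveway", "yard"]

# Old check: no update, since the zone count did not grow.
print(len(prev_zones) < len(curr_zones))   # False

# New check: update, since membership changed.
print(set(prev_zones) != set(curr_zones))  # True
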
@@ -153,7 +153,7 @@ PRESETS_HW_ACCEL_ENCODE_BIRDSEYE = {
FFMPEG_HWACCEL_VAAPI: "{0} -hide_banner -hwaccel vaapi -hwaccel_output_format vaapi -hwaccel_device {3} {1} -c:v h264_vaapi -g 50 -bf 0 -profile:v high -level:v 4.1 -sei:v 0 -an -vf format=vaapi|nv12,hwupload {2}",
"preset-intel-qsv-h264": "{0} -hide_banner {1} -c:v h264_qsv -g 50 -bf 0 -profile:v high -level:v 4.1 -async_depth:v 1 {2}",
"preset-intel-qsv-h265": "{0} -hide_banner {1} -c:v h264_qsv -g 50 -bf 0 -profile:v main -level:v 4.1 -async_depth:v 1 {2}",
FFMPEG_HWACCEL_NVIDIA: "{0} -hide_banner {1} -hwaccel device {3} -c:v h264_nvenc -g 50 -profile:v high -level:v auto -preset:v p2 -tune:v ll {2}",
FFMPEG_HWACCEL_NVIDIA: "{0} -hide_banner {1} -c:v h264_nvenc -g 50 -profile:v high -level:v auto -preset:v p2 -tune:v ll {2}",
"preset-jetson-h264": "{0} -hide_banner {1} -c:v h264_nvmpi -profile high {2}",
"preset-jetson-h265": "{0} -hide_banner {1} -c:v h264_nvmpi -profile main {2}",
FFMPEG_HWACCEL_RKMPP: "{0} -hide_banner {1} -c:v h264_rkmpp -profile:v high {2}",

@@ -178,6 +178,7 @@ Each line represents a detection state, not necessarily unique individuals. Pare
start_ts: float,
end_ts: float,
events: list[dict[str, Any]],
preferred_language: str | None,
debug_save: bool,
) -> str | None:
"""Generate a summary of review item descriptions over a period of time."""
@@ -232,6 +233,9 @@ Guidelines:
for event in events:
timeline_summary_prompt += f"\n{event}\n"

if preferred_language:
timeline_summary_prompt += f"\nProvide your answer in {preferred_language}"

if debug_save:
with open(
os.path.join(

@@ -80,10 +80,15 @@ def apply_log_levels(default: str, log_levels: dict[str, LogLevel]) -> None:
log_levels = {
"absl": LogLevel.error,
"httpx": LogLevel.error,
"h5py": LogLevel.error,
"keras": LogLevel.error,
"matplotlib": LogLevel.error,
"tensorflow": LogLevel.error,
"tensorflow.python": LogLevel.error,
"werkzeug": LogLevel.error,
"ws4py": LogLevel.error,
"PIL": LogLevel.warning,
"numba": LogLevel.warning,
**log_levels,
}
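
Presumably each entry ends up as a standard setLevel call on the named logger; the exact mechanism inside apply_log_levels is not shown in this diff, but a minimal sketch of the equivalent effect:

import logging

for name, level in {"tensorflow": logging.ERROR, "PIL": logging.WARNING}.items():
    logging.getLogger(name).setLevel(level)
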
@@ -318,3 +323,31 @@ def suppress_os_output(func: Callable) -> Callable:
return result

return wrapper


@contextmanager
def suppress_stderr_during(operation_name: str) -> Generator[None, None, None]:
"""
Context manager to suppress stderr output during a specific operation.

Useful for silencing LLVM debug output, CUDA messages, and other native
library logging that cannot be controlled via Python logging or environment
variables. Completely redirects file descriptor 2 (stderr) to /dev/null.

Usage:
with suppress_stderr_during("model_conversion"):
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()

Args:
operation_name: Name of the operation for debugging purposes
"""
original_stderr_fd = os.dup(2)
devnull = os.open(os.devnull, os.O_WRONLY)
try:
os.dup2(devnull, 2)
yield
finally:
os.dup2(original_stderr_fd, 2)
os.close(devnull)
os.close(original_stderr_fd)
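
Note that sys.stderr-level tricks such as contextlib.redirect_stderr only affect Python-side writes; native libraries write straight to file descriptor 2, which is why this helper uses os.dup2. A small illustration:

import contextlib, io, os, sys

buf = io.StringIO()
with contextlib.redirect_stderr(buf):
    print("python-level write", file=sys.stderr)  # captured in buf
    os.write(2, b"fd-level write\n")              # bypasses sys.stderr entirely

print(repr(buf.getvalue()))  # only the python-level line was captured
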
@@ -119,6 +119,7 @@ class RecordingCleanup(threading.Thread):
Recordings.path,
Recordings.objects,
Recordings.motion,
Recordings.dBFS,
)
.where(
(Recordings.camera == config.name)
@@ -126,6 +127,7 @@ class RecordingCleanup(threading.Thread):
(
(Recordings.end_time < continuous_expire_date)
& (Recordings.motion == 0)
& (Recordings.dBFS == 0)
)
| (Recordings.end_time < motion_expire_date)
)
@@ -185,6 +187,7 @@ class RecordingCleanup(threading.Thread):
mode == RetainModeEnum.motion
and recording.motion == 0
and recording.objects == 0
and recording.dBFS == 0
)
or (mode == RetainModeEnum.active_objects and recording.objects == 0)
):

@@ -67,7 +67,7 @@ class SegmentInfo:
if (
not keep
and retain_mode == RetainModeEnum.motion
and (self.motion_count > 0 or self.average_dBFS > 0)
and (self.motion_count > 0 or self.average_dBFS != 0)
):
keep = True
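
The != 0 comparison matters because dBFS is a negative scale (0 dBFS is digital full scale, quieter audio is below it), so averages above zero do not occur in practice and the old check effectively never kept segments for audio activity. A quick illustration:

average_dBFS = -37.5   # a plausible value for quiet but audible audio

print(average_dBFS > 0)    # False: the old check never kept audio segments
print(average_dBFS != 0)   # True: any non-silent segment now counts
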
@@ -86,11 +86,11 @@ class TimelineProcessor(threading.Thread):
event_data: dict[Any, Any],
) -> bool:
"""Handle object detection."""
save = False
camera_config = self.config.cameras[camera]
event_id = event_data["id"]

timeline_entry = {
# Base timeline entry data that all entries will share
base_entry = {
Timeline.timestamp: event_data["frame_time"],
Timeline.camera: camera,
Timeline.source: "tracked_object",
@@ -123,40 +123,64 @@ class TimelineProcessor(threading.Thread):
e[Timeline.data]["sub_label"] = event_data["sub_label"]

if event_type == EventStateEnum.start:
timeline_entry = base_entry.copy()
timeline_entry[Timeline.class_type] = "visible"
save = True
self.insert_or_save(timeline_entry, prev_event_data, event_data)
elif event_type == EventStateEnum.update:
# Check all conditions and create timeline entries for each change
entries_to_save = []

# Check for zone changes
prev_zones = set(prev_event_data["current_zones"])
current_zones = set(event_data["current_zones"])
zones_changed = prev_zones != current_zones

# Only save "entered_zone" events when the object is actually IN zones
if (
len(prev_event_data["current_zones"]) < len(event_data["current_zones"])
zones_changed
and not event_data["stationary"]
and len(current_zones) > 0
):
timeline_entry[Timeline.class_type] = "entered_zone"
timeline_entry[Timeline.data]["zones"] = event_data["current_zones"]
save = True
elif prev_event_data["stationary"] != event_data["stationary"]:
timeline_entry[Timeline.class_type] = (
zone_entry = base_entry.copy()
zone_entry[Timeline.class_type] = "entered_zone"
zone_entry[Timeline.data] = base_entry[Timeline.data].copy()
zone_entry[Timeline.data]["zones"] = event_data["current_zones"]
entries_to_save.append(zone_entry)

# Check for stationary status change
if prev_event_data["stationary"] != event_data["stationary"]:
stationary_entry = base_entry.copy()
stationary_entry[Timeline.class_type] = (
"stationary" if event_data["stationary"] else "active"
)
save = True
elif prev_event_data["attributes"] == {} and event_data["attributes"] != {}:
timeline_entry[Timeline.class_type] = "attribute"
timeline_entry[Timeline.data]["attribute"] = list(
stationary_entry[Timeline.data] = base_entry[Timeline.data].copy()
entries_to_save.append(stationary_entry)

# Check for new attributes
if prev_event_data["attributes"] == {} and event_data["attributes"] != {}:
attribute_entry = base_entry.copy()
attribute_entry[Timeline.class_type] = "attribute"
attribute_entry[Timeline.data] = base_entry[Timeline.data].copy()
attribute_entry[Timeline.data]["attribute"] = list(
event_data["attributes"].keys()
)[0]

if len(event_data["current_attributes"]) > 0:
timeline_entry[Timeline.data]["attribute_box"] = to_relative_box(
attribute_entry[Timeline.data]["attribute_box"] = to_relative_box(
camera_config.detect.width,
camera_config.detect.height,
event_data["current_attributes"][0]["box"],
)

save = True
elif event_type == EventStateEnum.end:
timeline_entry[Timeline.class_type] = "gone"
save = True
entries_to_save.append(attribute_entry)

if save:
# Save all entries
for entry in entries_to_save:
self.insert_or_save(entry, prev_event_data, event_data)

elif event_type == EventStateEnum.end:
timeline_entry = base_entry.copy()
timeline_entry[Timeline.class_type] = "gone"
self.insert_or_save(timeline_entry, prev_event_data, event_data)
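
The per-entry base_entry[Timeline.data].copy() calls are what keep the new entries independent: dict.copy() is shallow, so copying only base_entry would leave every entry sharing one nested data dict. A minimal illustration:

base = {"data": {"zones": []}}

a = base.copy()                  # shallow: a["data"] is base["data"]
a["data"]["zones"] = ["porch"]
print(base["data"]["zones"])     # ["porch"], mutated through the alias

b = base.copy()
b["data"] = base["data"].copy()  # per-entry copy, as done above
b["data"]["zones"] = ["yard"]
print(base["data"]["zones"])     # still ["porch"], unaffected
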
def handle_api_entry(

@@ -19,9 +19,10 @@ from frigate.const import (
PROCESS_PRIORITY_LOW,
UPDATE_MODEL_STATE,
)
from frigate.log import redirect_output_to_logger
from frigate.log import redirect_output_to_logger, suppress_stderr_during
from frigate.models import Event, Recordings, ReviewSegment
from frigate.types import ModelStatusTypesEnum
from frigate.util.downloader import ModelDownloader
from frigate.util.file import get_event_thumbnail_bytes
from frigate.util.image import get_image_from_recording
from frigate.util.process import FrigateProcess
@@ -121,6 +122,10 @@ def get_dataset_image_count(model_name: str) -> int:

class ClassificationTrainingProcess(FrigateProcess):
def __init__(self, model_name: str) -> None:
self.BASE_WEIGHT_URL = os.environ.get(
"TF_KERAS_MOBILENET_V2_WEIGHTS_URL",
"",
)
super().__init__(
stop_event=None,
priority=PROCESS_PRIORITY_LOW,
@@ -179,11 +184,23 @@ class ClassificationTrainingProcess(FrigateProcess):
)
return False

weights_path = "imagenet"
# Download MobileNetV2 weights if not present
if self.BASE_WEIGHT_URL:
weights_path = os.path.join(
MODEL_CACHE_DIR, "MobileNet", "mobilenet_v2_weights.h5"
)
if not os.path.exists(weights_path):
logger.info("Downloading MobileNet V2 weights file")
ModelDownloader.download_from_url(
self.BASE_WEIGHT_URL, weights_path
)

# Start with imagenet base model with 35% of channels in each layer
base_model = MobileNetV2(
input_shape=(224, 224, 3),
include_top=False,
weights="imagenet",
weights=weights_path,
alpha=0.35,
)
base_model.trainable = False # Freeze pre-trained layers
@@ -233,15 +250,20 @@ class ClassificationTrainingProcess(FrigateProcess):
logger.debug(f"Converting {self.model_name} to TFLite...")

# convert model to tflite
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = (
self.__generate_representative_dataset_factory(dataset_dir)
)
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.uint8
converter.inference_output_type = tf.uint8
tflite_model = converter.convert()
# Suppress stderr during conversion to avoid LLVM debug output
# (fully_quantize, inference_type, MLIR optimization messages, etc)
with suppress_stderr_during("tflite_conversion"):
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = (
self.__generate_representative_dataset_factory(dataset_dir)
)
converter.target_spec.supported_ops = [
tf.lite.OpsSet.TFLITE_BUILTINS_INT8
]
converter.inference_input_type = tf.uint8
converter.inference_output_type = tf.uint8
tflite_model = converter.convert()

# write model
model_path = os.path.join(model_dir, "model.tflite")
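
For context, a hedged sketch of what a representative-dataset factory for this full-integer quantization typically looks like; the actual __generate_representative_dataset_factory is not shown in this diff, so names and preprocessing here are assumptions:

import os
import numpy as np
import tensorflow as tf

def representative_dataset_factory(dataset_dir: str):
    # Yield a few hundred preprocessed samples so the converter can calibrate
    # int8 ranges for activations; preprocessing must match training exactly.
    def generator():
        count = 0
        for root, _, files in os.walk(dataset_dir):
            for name in sorted(files):
                if count >= 300:
                    return
                img = tf.keras.utils.load_img(
                    os.path.join(root, name), target_size=(224, 224)
                )
                arr = tf.keras.utils.img_to_array(img)[np.newaxis, ...] / 255.0
                yield [arr.astype(np.float32)]
                count += 1
    return generator
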
@@ -338,8 +360,6 @@ def collect_state_classification_examples(
cameras: Dict mapping camera names to normalized crop coordinates [x1, y1, x2, y2] (0-1)
"""
dataset_dir = os.path.join(CLIPS_DIR, model_name, "dataset")
temp_dir = os.path.join(dataset_dir, "temp")
os.makedirs(temp_dir, exist_ok=True)

# Step 1: Get review items for the cameras
camera_names = list(cameras.keys())
@@ -354,6 +374,10 @@ def collect_state_classification_examples(
logger.warning(f"No review items found for cameras: {camera_names}")
return

# The temp directory is only created when there are review_items.
temp_dir = os.path.join(dataset_dir, "temp")
os.makedirs(temp_dir, exist_ok=True)

# Step 2: Create balanced timestamp selection (100 samples)
timestamps = _select_balanced_timestamps(review_items, target_count=100)

@@ -482,6 +506,10 @@ def _extract_keyframes(
"""
Extract keyframes from recordings at specified timestamps and crop to specified regions.

This implementation batches work by running multiple ffmpeg snapshot commands
concurrently, which significantly reduces total runtime compared to
processing each timestamp serially.

Args:
ffmpeg_path: Path to ffmpeg binary
timestamps: List of timestamp dicts from _select_balanced_timestamps
@@ -491,15 +519,21 @@ def _extract_keyframes(
Returns:
List of paths to successfully extracted and cropped keyframe images
"""
keyframe_paths = []
from concurrent.futures import ThreadPoolExecutor, as_completed

for idx, ts_info in enumerate(timestamps):
if not timestamps:
return []

# Limit the number of concurrent ffmpeg processes so we don't overload the host.
max_workers = min(5, len(timestamps))

def _process_timestamp(idx: int, ts_info: dict) -> tuple[int, str | None]:
camera = ts_info["camera"]
timestamp = ts_info["timestamp"]

if camera not in camera_crops:
logger.warning(f"No crop coordinates for camera {camera}")
continue
return idx, None

norm_x1, norm_y1, norm_x2, norm_y2 = camera_crops[camera]

@@ -516,7 +550,7 @@ def _extract_keyframes(
.get()
)
except Exception:
continue
return idx, None

relative_time = timestamp - recording.start_time

@@ -530,38 +564,57 @@ def _extract_keyframes(
height=None,
)

if image_data:
nparr = np.frombuffer(image_data, np.uint8)
img = cv2.imdecode(nparr, cv2.IMREAD_COLOR)
if not image_data:
return idx, None

if img is not None:
height, width = img.shape[:2]
nparr = np.frombuffer(image_data, np.uint8)
img = cv2.imdecode(nparr, cv2.IMREAD_COLOR)

x1 = int(norm_x1 * width)
y1 = int(norm_y1 * height)
x2 = int(norm_x2 * width)
y2 = int(norm_y2 * height)
if img is None:
return idx, None

x1_clipped = max(0, min(x1, width))
y1_clipped = max(0, min(y1, height))
x2_clipped = max(0, min(x2, width))
y2_clipped = max(0, min(y2, height))
height, width = img.shape[:2]

if x2_clipped > x1_clipped and y2_clipped > y1_clipped:
cropped = img[y1_clipped:y2_clipped, x1_clipped:x2_clipped]
resized = cv2.resize(cropped, (224, 224))
x1 = int(norm_x1 * width)
y1 = int(norm_y1 * height)
x2 = int(norm_x2 * width)
y2 = int(norm_y2 * height)

output_path = os.path.join(output_dir, f"frame_{idx:04d}.jpg")
cv2.imwrite(output_path, resized)
keyframe_paths.append(output_path)
x1_clipped = max(0, min(x1, width))
y1_clipped = max(0, min(y1, height))
x2_clipped = max(0, min(x2, width))
y2_clipped = max(0, min(y2, height))

if x2_clipped <= x1_clipped or y2_clipped <= y1_clipped:
return idx, None

cropped = img[y1_clipped:y2_clipped, x1_clipped:x2_clipped]
resized = cv2.resize(cropped, (224, 224))

output_path = os.path.join(output_dir, f"frame_{idx:04d}.jpg")
cv2.imwrite(output_path, resized)
return idx, output_path
except Exception as e:
logger.debug(
f"Failed to extract frame from {recording.path} at {relative_time}s: {e}"
)
continue
return idx, None

return keyframe_paths
keyframes_with_index: list[tuple[int, str]] = []

with ThreadPoolExecutor(max_workers=max_workers) as executor:
future_to_idx = {
executor.submit(_process_timestamp, idx, ts_info): idx
for idx, ts_info in enumerate(timestamps)
}

for future in as_completed(future_to_idx):
_, path = future.result()
if path:
keyframes_with_index.append((future_to_idx[future], path))

keyframes_with_index.sort(key=lambda item: item[0])
return [path for _, path in keyframes_with_index]

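Because as_completed yields futures in completion order, each result is tagged with its original index and re-sorted before returning, so the output order is stable no matter which ffmpeg call finishes first. The same pattern in isolation:

from concurrent.futures import ThreadPoolExecutor, as_completed

def work(i: int) -> tuple[int, str]:
    return i, f"frame_{i:04d}.jpg"

with ThreadPoolExecutor(max_workers=5) as pool:
    futures = {pool.submit(work, i): i for i in range(10)}
    results = [f.result() for f in as_completed(futures)]

results.sort(key=lambda item: item[0])  # restore submission order
print([path for _, path in results])
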
def _select_distinct_images(
|
||||
|
||||
@@ -65,10 +65,15 @@ class FrigateProcess(BaseProcess):
|
||||
logging.basicConfig(handlers=[], force=True)
|
||||
logging.getLogger().addHandler(QueueHandler(self.__log_queue))
|
||||
|
||||
# Always apply base log level suppressions for noisy third-party libraries
|
||||
# even if no specific logConfig is provided
|
||||
if logConfig:
|
||||
frigate.log.apply_log_levels(
|
||||
logConfig.default.value.upper(), logConfig.logs
|
||||
)
|
||||
else:
|
||||
# Apply default INFO level with standard library suppressions
|
||||
frigate.log.apply_log_levels("INFO", {})
|
||||
|
||||
self._setup_memray()
|
||||
|
||||
|
||||
@@ -8,6 +8,7 @@ import time
|
||||
from pathlib import Path
|
||||
from typing import Optional
|
||||
|
||||
from frigate.const import SUPPORTED_RK_SOCS
|
||||
from frigate.util.file import FileLock
|
||||
|
||||
logger = logging.getLogger(__name__)
|
||||
@@ -68,9 +69,20 @@ def is_rknn_compatible(model_path: str, model_type: str | None = None) -> bool:
|
||||
True if the model is RKNN-compatible, False otherwise
|
||||
"""
|
||||
soc = get_soc_type()
|
||||
|
||||
if soc is None:
|
||||
return False
|
||||
|
||||
# Check if the SoC is actually a supported RK device
|
||||
# This prevents false positives on non-RK devices (e.g., macOS Docker)
|
||||
# where /proc/device-tree/compatible might exist but contain non-RK content
|
||||
if soc not in SUPPORTED_RK_SOCS:
|
||||
logger.debug(
|
||||
f"SoC '{soc}' is not a supported RK device for RKNN conversion. "
|
||||
f"Supported SoCs: {SUPPORTED_RK_SOCS}"
|
||||
)
|
||||
return False
|
||||
|
||||
if not model_type:
|
||||
model_type = get_rknn_model_type(model_path)
|
||||
|
||||
|
||||
@@ -38,6 +38,10 @@
|
||||
"label": "Sub Labels",
|
||||
"all": "All Sub Labels"
|
||||
},
|
||||
"attributes": {
|
||||
"label": "Classification Attributes",
|
||||
"all": "All Attributes"
|
||||
},
|
||||
"score": "Score",
|
||||
"estimatedSpeed": "Estimated Speed ({{unit}})",
|
||||
"features": {
|
||||
|
||||
@@ -1,7 +1,9 @@
|
||||
{
|
||||
"documentTitle": "Classification Models - Frigate",
|
||||
"details": {
|
||||
"scoreInfo": "Score represents the average classification confidence across all detections of this object."
|
||||
"scoreInfo": "Score represents the average classification confidence across all detections of this object.",
|
||||
"none": "None",
|
||||
"unknown": "Unknown"
|
||||
},
|
||||
"button": {
|
||||
"deleteClassificationAttempts": "Delete Classification Images",
|
||||
@@ -72,7 +74,7 @@
|
||||
},
|
||||
"renameCategory": {
|
||||
"title": "Rename Class",
|
||||
"desc": "Enter a new name for {{name}}. You will be required to retrain the model for the name change to take affect."
|
||||
"desc": "Enter a new name for {{name}}. You will be required to retrain the model for the name change to take effect."
|
||||
},
|
||||
"description": {
|
||||
"invalidName": "Invalid name. Names can only include letters, numbers, spaces, apostrophes, underscores, and hyphens."
|
||||
@@ -83,7 +85,6 @@
|
||||
"aria": "Select Recent Classifications"
|
||||
},
|
||||
"categories": "Classes",
|
||||
"none": "None",
|
||||
"createCategory": {
|
||||
"new": "Create New Class"
|
||||
},
|
||||
@@ -138,6 +139,7 @@
|
||||
"nameOnlyNumbers": "Model name cannot contain only numbers",
|
||||
"classRequired": "At least 1 class is required",
|
||||
"classesUnique": "Class names must be unique",
|
||||
"noneNotAllowed": "The class 'none' is not allowed",
|
||||
"stateRequiresTwoClasses": "State models require at least 2 classes",
|
||||
"objectLabelRequired": "Please select an object label",
|
||||
"objectTypeRequired": "Please select a classification type"
|
||||
|
||||
@@ -104,12 +104,14 @@
|
||||
"regenerate": "A new description has been requested from {{provider}}. Depending on the speed of your provider, the new description may take some time to regenerate.",
|
||||
"updatedSublabel": "Successfully updated sub label.",
|
||||
"updatedLPR": "Successfully updated license plate.",
|
||||
"updatedAttributes": "Successfully updated attributes.",
|
||||
"audioTranscription": "Successfully requested audio transcription. Depending on the speed of your Frigate server, the transcription may take some time to complete."
|
||||
},
|
||||
"error": {
|
||||
"regenerate": "Failed to call {{provider}} for a new description: {{errorMessage}}",
|
||||
"updatedSublabelFailed": "Failed to update sub label: {{errorMessage}}",
|
||||
"updatedLPRFailed": "Failed to update license plate: {{errorMessage}}",
|
||||
"updatedAttributesFailed": "Failed to update attributes: {{errorMessage}}",
|
||||
"audioTranscription": "Failed to request audio transcription: {{errorMessage}}"
|
||||
}
|
||||
}
|
||||
@@ -125,6 +127,10 @@
|
||||
"desc": "Enter a new license plate value for this {{label}}",
|
||||
"descNoLabel": "Enter a new license plate value for this tracked object"
|
||||
},
|
||||
"editAttributes": {
|
||||
"title": "Edit attributes",
|
||||
"desc": "Select classification attributes for this {{label}}"
|
||||
},
|
||||
"snapshotScore": {
|
||||
"label": "Snapshot Score"
|
||||
},
|
||||
@@ -136,6 +142,7 @@
|
||||
"label": "Score"
|
||||
},
|
||||
"recognizedLicensePlate": "Recognized License Plate",
|
||||
"attributes": "Classification Attributes",
|
||||
"estimatedSpeed": "Estimated Speed",
|
||||
"objects": "Objects",
|
||||
"camera": "Camera",
|
||||
|
||||
@@ -16,6 +16,7 @@
|
||||
"labels": "Labels",
|
||||
"zones": "Zones",
|
||||
"sub_labels": "Sub Labels",
|
||||
"attributes": "Attributes",
|
||||
"search_type": "Search Type",
|
||||
"time_range": "Time Range",
|
||||
"before": "Before",
|
||||
|
||||
@@ -679,7 +679,7 @@
|
||||
"desc": "Manage this Frigate instance's user accounts."
|
||||
},
|
||||
"addUser": "Add User",
|
||||
"updatePassword": "Update Password",
|
||||
"updatePassword": "Reset Password",
|
||||
"toast": {
|
||||
"success": {
|
||||
"createUser": "User {{user}} created successfully",
|
||||
@@ -700,7 +700,7 @@
|
||||
"role": "Role",
|
||||
"noUsers": "No users found.",
|
||||
"changeRole": "Change user role",
|
||||
"password": "Password",
|
||||
"password": "Reset Password",
|
||||
"deleteUser": "Delete user"
|
||||
},
|
||||
"dialog": {
|
||||
|
||||
@@ -192,7 +192,10 @@
|
||||
"review_description_events_per_second": "Review Description",
|
||||
"object_description": "Object Description",
|
||||
"object_description_speed": "Object Description Speed",
|
||||
"object_description_events_per_second": "Object Description"
|
||||
"object_description_events_per_second": "Object Description",
|
||||
"classification": "{{name}} Classification",
|
||||
"classification_speed": "{{name}} Classification Speed",
|
||||
"classification_events_per_second": "{{name}} Classification Events Per Second"
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
@@ -14,6 +14,7 @@ import ProtectedRoute from "@/components/auth/ProtectedRoute";
|
||||
import { AuthProvider } from "@/context/auth-context";
|
||||
import useSWR from "swr";
|
||||
import { FrigateConfig } from "./types/frigateConfig";
|
||||
import ActivityIndicator from "@/components/indicators/activity-indicator";
|
||||
|
||||
const Live = lazy(() => import("@/pages/Live"));
|
||||
const Events = lazy(() => import("@/pages/Events"));
|
||||
@@ -50,6 +51,13 @@ function DefaultAppView() {
|
||||
const { data: config } = useSWR<FrigateConfig>("config", {
|
||||
revalidateOnFocus: false,
|
||||
});
|
||||
|
||||
// Compute required roles for main routes, ensuring we have config first
|
||||
// to prevent race condition where custom roles are temporarily unavailable
|
||||
const mainRouteRoles = config?.auth?.roles
|
||||
? Object.keys(config.auth.roles)
|
||||
: undefined;
|
||||
|
||||
return (
|
||||
<div className="size-full overflow-hidden">
|
||||
{isDesktop && <Sidebar />}
|
||||
@@ -68,13 +76,11 @@ function DefaultAppView() {
|
||||
<Routes>
|
||||
<Route
|
||||
element={
|
||||
<ProtectedRoute
|
||||
requiredRoles={
|
||||
config?.auth.roles
|
||||
? Object.keys(config.auth.roles)
|
||||
: ["admin", "viewer"]
|
||||
}
|
||||
/>
|
||||
mainRouteRoles ? (
|
||||
<ProtectedRoute requiredRoles={mainRouteRoles} />
|
||||
) : (
|
||||
<ActivityIndicator className="absolute left-1/2 top-1/2 -translate-x-1/2 -translate-y-1/2" />
|
||||
)
|
||||
}
|
||||
>
|
||||
<Route index element={<Live />} />
|
||||
|
||||
@@ -116,10 +116,10 @@ export default function Statusbar() {
|
||||
}
|
||||
|
||||
return (
|
||||
<Link key={gpuTitle} to="/system#general">
|
||||
<Link key={name} to="/system#general">
|
||||
{" "}
|
||||
<div
|
||||
key={gpuTitle}
|
||||
key={name}
|
||||
className="flex cursor-pointer items-center gap-2 text-sm hover:underline"
|
||||
>
|
||||
<MdCircle
|
||||
|
||||
@@ -4,8 +4,8 @@ import { cn } from "@/lib/utils";
|
||||
import {
|
||||
ClassificationItemData,
|
||||
ClassificationThreshold,
|
||||
ClassifiedEvent,
|
||||
} from "@/types/classification";
|
||||
import { Event } from "@/types/event";
|
||||
import { forwardRef, useMemo, useRef, useState } from "react";
|
||||
import { isDesktop, isIOS, isMobile, isMobileOnly } from "react-device-detect";
|
||||
import { useTranslation } from "react-i18next";
|
||||
@@ -40,6 +40,7 @@ type ClassificationCardProps = {
|
||||
data: ClassificationItemData;
|
||||
threshold?: ClassificationThreshold;
|
||||
selected: boolean;
|
||||
clickable: boolean;
|
||||
i18nLibrary: string;
|
||||
showArea?: boolean;
|
||||
count?: number;
|
||||
@@ -56,6 +57,7 @@ export const ClassificationCard = forwardRef<
data,
threshold,
selected,
clickable,
i18nLibrary,
showArea = true,
count,
@@ -101,11 +103,12 @@ export const ClassificationCard = forwardRef<
<div
ref={ref}
className={cn(
"relative flex size-full cursor-pointer flex-col overflow-hidden rounded-lg outline outline-[3px]",
"relative flex size-full flex-col overflow-hidden rounded-lg outline outline-[3px]",
className,
selected
? "shadow-selected outline-selected"
: "outline-transparent duration-500",
clickable && "cursor-pointer",
)}
onClick={(e) => {
const isMeta = e.metaKey || e.ctrlKey;
@@ -160,8 +163,12 @@ export const ClassificationCard = forwardRef<
data.score != undefined ? "text-xs" : "text-sm",
)}
>
<div className="smart-capitalize">
{data.name == "unknown" ? t("details.unknown") : data.name}
<div className="break-all smart-capitalize">
{data.name == "unknown"
? t("details.unknown")
: data.name == "none"
? t("details.none")
: data.name}
</div>
{data.score != undefined && (
<div
@@ -186,7 +193,7 @@ export const ClassificationCard = forwardRef<

type GroupedClassificationCardProps = {
group: ClassificationItemData[];
event?: Event;
classifiedEvent?: ClassifiedEvent;
threshold?: ClassificationThreshold;
selectedItems: string[];
i18nLibrary: string;
@@ -197,7 +204,7 @@ type GroupedClassificationCardProps = {
};
export function GroupedClassificationCard({
group,
event,
classifiedEvent,
threshold,
selectedItems,
i18nLibrary,
@@ -226,20 +233,21 @@ export function GroupedClassificationCard({
});

if (!best) {
return group.at(-1);
best = group.at(-1)!;
}

const bestTyped: ClassificationItemData = best;
return {
...bestTyped,
name: event
? event.sub_label && event.sub_label !== "none"
? event.sub_label
: t(noClassificationLabel)
: bestTyped.name,
score: event?.data?.sub_label_score,
name:
classifiedEvent?.label && classifiedEvent.label !== "none"
? classifiedEvent.label
: classifiedEvent
? t(noClassificationLabel)
: bestTyped.name,
score: classifiedEvent?.score,
};
}, [group, event, noClassificationLabel, t]);
}, [group, classifiedEvent, noClassificationLabel, t]);

const bestScoreStatus = useMemo(() => {
if (!bestItem?.score || !threshold) {
@@ -284,6 +292,7 @@ export function GroupedClassificationCard({
data={bestItem}
threshold={threshold}
selected={selectedItems.includes(bestItem.filename)}
clickable={true}
i18nLibrary={i18nLibrary}
count={group.length}
onClick={(_, meta) => {
@@ -325,36 +334,38 @@ export function GroupedClassificationCard({
)}
>
<ContentTitle className="flex items-center gap-2 font-normal capitalize">
{event?.sub_label && event.sub_label !== "none"
? event.sub_label
{classifiedEvent?.label && classifiedEvent.label !== "none"
? classifiedEvent.label
: t(noClassificationLabel)}
{event?.sub_label && event.sub_label !== "none" && (
<div className="flex items-center gap-1">
<div
className={cn(
"",
bestScoreStatus == "match" && "text-success",
bestScoreStatus == "potential" && "text-orange-400",
bestScoreStatus == "unknown" && "text-danger",
)}
>{`${Math.round((event.data.sub_label_score || 0) * 100)}%`}</div>
<Popover>
<PopoverTrigger asChild>
<button
className="focus:outline-none"
aria-label={t("details.scoreInfo", {
ns: i18nLibrary,
})}
>
<LuInfo className="size-3" />
</button>
</PopoverTrigger>
<PopoverContent className="w-80 text-sm">
{t("details.scoreInfo", { ns: i18nLibrary })}
</PopoverContent>
</Popover>
</div>
)}
{classifiedEvent?.label &&
classifiedEvent.label !== "none" &&
classifiedEvent.score !== undefined && (
<div className="flex items-center gap-1">
<div
className={cn(
"",
bestScoreStatus == "match" && "text-success",
bestScoreStatus == "potential" && "text-orange-400",
bestScoreStatus == "unknown" && "text-danger",
)}
>{`${Math.round((classifiedEvent.score || 0) * 100)}%`}</div>
<Popover>
<PopoverTrigger asChild>
<button
className="focus:outline-none"
aria-label={t("details.scoreInfo", {
ns: i18nLibrary,
})}
>
<LuInfo className="size-3" />
</button>
</PopoverTrigger>
<PopoverContent className="w-80 text-sm">
{t("details.scoreInfo", { ns: i18nLibrary })}
</PopoverContent>
</Popover>
</div>
)}
</ContentTitle>
<ContentDescription className={cn("", isMobile && "px-2")}>
{time && (
@@ -366,30 +377,34 @@ export function GroupedClassificationCard({
)}
</ContentDescription>
</div>
{isDesktop && (
<div className="flex flex-row justify-between">
{event && (
<Tooltip>
<TooltipTrigger asChild>
<div
className="cursor-pointer"
tabIndex={-1}
onClick={() => {
navigate(`/explore?event_id=${event.id}`);
}}
>
<LuSearch className="size-4 text-secondary-foreground" />
</div>
</TooltipTrigger>
<TooltipPortal>
<TooltipContent>
{t("details.item.button.viewInExplore", {
ns: "views/explore",
})}
</TooltipContent>
</TooltipPortal>
</Tooltip>
{classifiedEvent && (
<div
className={cn(
"flex",
isDesktop && "flex-row justify-between",
isMobile && "absolute right-4 top-8",
)}
>
<Tooltip>
<TooltipTrigger asChild>
<div
className="cursor-pointer"
tabIndex={-1}
onClick={() => {
navigate(`/explore?event_id=${classifiedEvent.id}`);
}}
>
<LuSearch className="size-4 text-secondary-foreground" />
</div>
</TooltipTrigger>
<TooltipPortal>
<TooltipContent>
{t("details.item.button.viewInExplore", {
ns: "views/explore",
})}
</TooltipContent>
</TooltipPortal>
</Tooltip>
</div>
)}
</Header>
@@ -406,6 +421,7 @@ export function GroupedClassificationCard({
data={data}
threshold={threshold}
selected={false}
clickable={false}
i18nLibrary={i18nLibrary}
onClick={() => {}}
>
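
For context: the hunks above swap the card's `event?: Event` prop for the narrower `classifiedEvent?: ClassifiedEvent`, so the card no longer reads `event.sub_label` directly. A minimal sketch of the resulting name resolution, using the `ClassifiedEvent` shape added in `web/src/types/classification.ts` later in this diff (`displayName` is an illustrative helper, not part of the change):

// ClassifiedEvent as introduced in web/src/types/classification.ts in this diff
type ClassifiedEvent = {
  id: string;
  label?: string;
  score?: number;
};

// Callers map their own source (face sub_label, custom-model attribute, ...)
// into this shape before handing it to the card.
function displayName(
  classified: ClassifiedEvent | undefined,
  fallbackName: string,
  noClassificationLabel: string,
): string {
  if (classified?.label && classified.label !== "none") {
    return classified.label;
  }
  // a classified event whose label is "none" renders the translated placeholder
  return classified ? noClassificationLabel : fallbackName;
}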
@@ -94,7 +94,14 @@ export default function Step1NameAndDefine({
objectLabel: z.string().optional(),
objectType: z.enum(["sub_label", "attribute"]).optional(),
classes: z
.array(z.string())
.array(
z
.string()
.refine(
(val) => val.trim().toLowerCase() !== "none",
t("wizard.step1.errors.noneNotAllowed"),
),
)
.min(1, t("wizard.step1.errors.classRequired"))
.refine(
(classes) => {
@@ -315,7 +322,7 @@ export default function Step1NameAndDefine({
<FormLabel className="text-primary-variant">
{t("wizard.step1.classificationType")}
</FormLabel>
<Popover>
<Popover modal={true}>
<PopoverTrigger asChild>
<Button
variant="ghost"
@@ -398,7 +405,7 @@ export default function Step1NameAndDefine({
? t("wizard.step1.states")
: t("wizard.step1.classes")}
</FormLabel>
<Popover>
<Popover modal={true}>
<PopoverTrigger asChild>
<Button variant="ghost" size="sm" className="h-4 w-4 p-0">
<LuInfo className="size-3" />
@@ -467,6 +474,7 @@ export default function Step1NameAndDefine({
)}
</div>
</FormControl>
<FormMessage />
</FormItem>
)}
/>
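
The schema change above is easiest to see in isolation: each class name is now validated individually, and "none" is rejected because it is reserved for the implicit negative class. A hedged sketch with the i18n messages inlined as plain strings (the real code passes `t(...)` results):

import { z } from "zod";

// Each class name is checked on its own; "none" is rejected in any casing or padding.
const classesSchema = z
  .array(
    z
      .string()
      .refine((val) => val.trim().toLowerCase() !== "none", "noneNotAllowed"),
  )
  .min(1, "classRequired");

classesSchema.safeParse(["cat", "dog"]).success; // true
classesSchema.safeParse([" None "]).success; // false: reserved keyword
classesSchema.safeParse([]).success; // false: at least one class required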
@@ -45,6 +45,12 @@ export default function Step3ChooseExamples({
const [isProcessing, setIsProcessing] = useState(false);
const [currentClassIndex, setCurrentClassIndex] = useState(0);
const [selectedImages, setSelectedImages] = useState<Set<string>>(new Set());
const [cacheKey, setCacheKey] = useState<number>(Date.now());
const [loadedImages, setLoadedImages] = useState<Set<string>>(new Set());

const handleImageLoad = useCallback((imageName: string) => {
setLoadedImages((prev) => new Set(prev).add(imageName));
}, []);

const { data: trainImages, mutate: refreshTrainImages } = useSWR<string[]>(
hasGenerated ? `classification/${step1Data.modelName}/train` : null,
@@ -141,7 +147,37 @@ export default function Step3ChooseExamples({
);
await Promise.all(categorizePromises);

// Step 2.5: Create empty folders for classes that don't have any images
// Step 2.5: Delete any unselected images from train folder
// For state models, all images must be classified, so unselected images should be removed
// For object models, unselected images are assigned to "none" so they're already categorized
if (step1Data.modelType === "state") {
try {
// Fetch current train images to see what's left after categorization
const trainImagesResponse = await axios.get<string[]>(
`/classification/${step1Data.modelName}/train`,
);
const remainingTrainImages = trainImagesResponse.data || [];

const categorizedImageNames = new Set(Object.keys(classifications));
const unselectedImages = remainingTrainImages.filter(
(imageName) => !categorizedImageNames.has(imageName),
);

if (unselectedImages.length > 0) {
await axios.post(
`/classification/${step1Data.modelName}/train/delete`,
{
ids: unselectedImages,
},
);
}
} catch (error) {
// Silently fail - unselected images will remain but won't cause issues
// since the frontend filters out images that don't match expected format
}
}

// Step 2.6: Create empty folders for classes that don't have any images
// This ensures all classes are available in the dataset view later
const classesWithImages = new Set(
Object.values(classifications).filter((c) => c && c !== "none"),
@@ -156,15 +192,17 @@ export default function Step3ChooseExamples({
await Promise.all(emptyFolderPromises);

// Step 3: Determine if we should train
// For state models, we need ALL states to have examples
// For object models, we need at least 2 classes with images
// For state models, we need ALL states to have examples (at least 2 states)
// For object models, we need at least 1 class with images (the rest go to "none")
const allStatesHaveExamplesForTraining =
step1Data.modelType !== "state" ||
step1Data.classes.every((className) =>
classesWithImages.has(className),
);
const shouldTrain =
allStatesHaveExamplesForTraining && classesWithImages.size >= 2;
step1Data.modelType === "object"
? classesWithImages.size >= 1
: allStatesHaveExamplesForTraining && classesWithImages.size >= 2;

// Step 4: Kick off training only if we have enough classes with images
if (shouldTrain) {
@@ -300,6 +338,8 @@ export default function Step3ChooseExamples({
setHasGenerated(true);
toast.success(t("wizard.step3.generateSuccess"));

// Update cache key to force image reload
setCacheKey(Date.now());
await refreshTrainImages();
} catch (error) {
const axiosError = error as {
@@ -533,10 +573,16 @@ export default function Step3ChooseExamples({
)}
onClick={() => toggleImageSelection(imageName)}
>
{!loadedImages.has(imageName) && (
<div className="flex h-full items-center justify-center">
<ActivityIndicator className="size-6" />
</div>
)}
<img
src={`${baseUrl}clips/${step1Data.modelName}/train/${imageName}`}
src={`${baseUrl}clips/${step1Data.modelName}/train/${imageName}?t=${cacheKey}`}
alt={`Example ${index + 1}`}
className="h-full w-full object-cover"
onLoad={() => handleImageLoad(imageName)}
/>
</div>
);
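
The `?t=${cacheKey}` suffix added above is a standard cache-busting trick: regenerated train images keep the same filenames, so only a changing query parameter makes the browser refetch them instead of serving a stale cached copy. A small standalone sketch (all names here are placeholders, not the component's real values):

const baseUrl = "/"; // placeholder for the app's base URL
const modelName = "my_model"; // placeholder
const imageName = "example.webp"; // placeholder
let cacheKey = Date.now();

const imageSrc = () =>
  `${baseUrl}clips/${modelName}/train/${imageName}?t=${cacheKey}`;

// after regenerating examples, bumping the key changes every computed src:
cacheKey = Date.now();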
@@ -399,7 +399,7 @@ export default function InputWithTags({
newFilters.sort = value as SearchSortType;
break;
default:
// Handle array types (cameras, labels, subLabels, zones)
// Handle array types (cameras, labels, sub_labels, attributes, zones)
if (!newFilters[type]) newFilters[type] = [];
if (Array.isArray(newFilters[type])) {
if (!(newFilters[type] as string[]).includes(value)) {
@@ -132,7 +132,7 @@ export default function ClassificationSelectionDialog({
onClick={() => onCategorizeImage(category)}
>
{category === "none"
? t("none")
? t("details.none")
: category.replaceAll("_", " ")}
</SelectorItem>
))}
@@ -440,6 +440,7 @@ function CustomTimeSelector({
<FaCalendarAlt />
<div className="flex flex-wrap items-center">
<Popover
modal={false}
open={startOpen}
onOpenChange={(open) => {
if (!open) {
@@ -461,7 +462,10 @@ function CustomTimeSelector({
{formattedStart}
</Button>
</PopoverTrigger>
<PopoverContent className="flex flex-col items-center">
<PopoverContent
disablePortal={isDesktop}
className="flex flex-col items-center"
>
<TimezoneAwareCalendar
timezone={config?.ui.timezone}
selectedDay={new Date(startTime * 1000)}
@@ -506,6 +510,7 @@ function CustomTimeSelector({
</Popover>
<FaArrowRight className="size-4 text-primary" />
<Popover
modal={false}
open={endOpen}
onOpenChange={(open) => {
if (!open) {
@@ -527,7 +532,10 @@ function CustomTimeSelector({
{formattedEnd}
</Button>
</PopoverTrigger>
<PopoverContent className="flex flex-col items-center">
<PopoverContent
disablePortal={isDesktop}
className="flex flex-col items-center"
>
<TimezoneAwareCalendar
timezone={config?.ui.timezone}
selectedDay={new Date(endTime * 1000)}
@@ -545,7 +553,7 @@ function CustomTimeSelector({
<SelectSeparator className="bg-secondary" />
<input
className="text-md mx-4 w-full border border-input bg-background p-1 text-secondary-foreground hover:bg-accent hover:text-accent-foreground dark:[color-scheme:dark]"
id="startTime"
id="endTime"
type="time"
value={endClock}
step={isIOS ? "60" : "1"}
@@ -54,7 +54,7 @@ export default function SetPasswordDialog({
config?.auth?.refresh_time ?? undefined;
const refreshTimeLabel = refreshSeconds
? formatSecondsToDuration(refreshSeconds)
: "30 minutes";
: t("time.30minutes", { ns: "common" });

// visibility toggles for password fields
const [showOldPassword, setShowOldPassword] = useState<boolean>(false);
@@ -49,6 +49,29 @@ export default function DetailActionsMenu({
search.data?.type === "audio" ? null : [`review/event/${search.id}`],
);

// don't render menu at all if no options are available
const hasSemanticSearchOption =
config?.semantic_search.enabled &&
setSimilarity !== undefined &&
search.data?.type === "object";

const hasReviewItem = !!(reviewItem && reviewItem.id);

const hasAdminTriggerOption =
isAdmin &&
config?.semantic_search.enabled &&
search.data?.type === "object";

if (
!search.has_snapshot &&
!search.has_clip &&
!hasSemanticSearchOption &&
!hasReviewItem &&
!hasAdminTriggerOption
) {
return null;
}

return (
<DropdownMenu open={isOpen} onOpenChange={setIsOpen}>
<DropdownMenuTrigger>
@@ -84,6 +84,7 @@ import { LuInfo } from "react-icons/lu";
import { TooltipPortal } from "@radix-ui/react-tooltip";
import { FaPencilAlt } from "react-icons/fa";
import TextEntryDialog from "@/components/overlay/dialog/TextEntryDialog";
import AttributeSelectDialog from "@/components/overlay/dialog/AttributeSelectDialog";
import { Trans, useTranslation } from "react-i18next";
import { useIsAdmin } from "@/hooks/use-is-admin";
import { getTranslatedLabel } from "@/utils/i18n";
@@ -297,6 +298,7 @@ type DialogContentComponentProps = {
isPopoverOpen: boolean;
setIsPopoverOpen: (open: boolean) => void;
dialogContainer: HTMLDivElement | null;
setShowNavigationButtons: React.Dispatch<React.SetStateAction<boolean>>;
};

function DialogContentComponent({
@@ -314,6 +316,7 @@ function DialogContentComponent({
isPopoverOpen,
setIsPopoverOpen,
dialogContainer,
setShowNavigationButtons,
}: DialogContentComponentProps) {
if (page === "tracking_details") {
return (
@@ -399,6 +402,7 @@ function DialogContentComponent({
config={config}
setSearch={setSearch}
setInputFocused={setInputFocused}
setShowNavigationButtons={setShowNavigationButtons}
/>
</div>
</div>
@@ -415,6 +419,7 @@ function DialogContentComponent({
config={config}
setSearch={setSearch}
setInputFocused={setInputFocused}
setShowNavigationButtons={setShowNavigationButtons}
/>
</>
);
@@ -459,6 +464,7 @@ export default function SearchDetailDialog({

const [isOpen, setIsOpen] = useState(search != undefined);
const [isPopoverOpen, setIsPopoverOpen] = useState(false);
const [showNavigationButtons, setShowNavigationButtons] = useState(false);
const dialogContentRef = useRef<HTMLDivElement | null>(null);
const [dialogContainer, setDialogContainer] = useState<HTMLDivElement | null>(
null,
@@ -540,9 +546,9 @@ export default function SearchDetailDialog({
onOpenChange={handleOpenChange}
enableHistoryBack={true}
>
{isDesktop && onPrevious && onNext && (
{isDesktop && onPrevious && onNext && showNavigationButtons && (
<DialogPortal>
<div className="pointer-events-none fixed inset-0 z-[200] flex items-center justify-center">
<div className="pointer-events-none fixed inset-0 z-[51] flex items-center justify-center">
<div
className={cn(
"relative flex items-center justify-between",
@@ -593,9 +599,14 @@ export default function SearchDetailDialog({
<Content
ref={isDesktop ? dialogContentRef : undefined}
className={cn(
"scrollbar-container overflow-y-auto",
isDesktop && "max-h-[95dvh] max-w-[85%] xl:max-w-[70%]",
isMobile && "flex h-full flex-col px-4",
isDesktop && [
"max-h-[95dvh] max-w-[85%] xl:max-w-[70%]",
pageToggle === "tracking_details"
? "flex flex-col overflow-hidden"
: "scrollbar-container overflow-y-auto",
],
isMobile &&
"scrollbar-container flex h-full flex-col overflow-y-auto px-4",
)}
onEscapeKeyDown={(event) => {
if (isPopoverOpen) {
@@ -652,6 +663,7 @@ export default function SearchDetailDialog({
isPopoverOpen={isPopoverOpen}
setIsPopoverOpen={setIsPopoverOpen}
dialogContainer={dialogContainer}
setShowNavigationButtons={setShowNavigationButtons}
/>
</Content>
</Overlay>
@@ -664,12 +676,14 @@ type ObjectDetailsTabProps = {
config?: FrigateConfig;
setSearch: (search: SearchResult | undefined) => void;
setInputFocused: React.Dispatch<React.SetStateAction<boolean>>;
setShowNavigationButtons?: React.Dispatch<React.SetStateAction<boolean>>;
};
function ObjectDetailsTab({
search,
config,
setSearch,
setInputFocused,
setShowNavigationButtons,
}: ObjectDetailsTabProps) {
const { t, i18n } = useTranslation([
"views/explore",
@@ -678,6 +692,15 @@ function ObjectDetailsTab({
]);

const apiHost = useApiHost();
const hasCustomClassificationModels = useMemo(
() => Object.keys(config?.classification?.custom ?? {}).length > 0,
[config],
);
const { data: modelAttributes } = useSWR<Record<string, string[]>>(
hasCustomClassificationModels && search
? `classification/attributes?object_type=${encodeURIComponent(search.label)}&group_by_model=true`
: null,
);

// mutation / revalidation

@@ -708,6 +731,7 @@ function ObjectDetailsTab({
const [desc, setDesc] = useState(search?.data.description);
const [isSubLabelDialogOpen, setIsSubLabelDialogOpen] = useState(false);
const [isLPRDialogOpen, setIsLPRDialogOpen] = useState(false);
const [isAttributesDialogOpen, setIsAttributesDialogOpen] = useState(false);
const [isEditingDesc, setIsEditingDesc] = useState(false);
const originalDescRef = useRef<string | null>(null);

@@ -722,6 +746,19 @@ function ObjectDetailsTab({
// we have to make sure the current selected search item stays in sync
useEffect(() => setDesc(search?.data.description ?? ""), [search]);

useEffect(() => setIsAttributesDialogOpen(false), [search?.id]);

useEffect(() => {
const anyDialogOpen =
isSubLabelDialogOpen || isLPRDialogOpen || isAttributesDialogOpen;
setShowNavigationButtons?.(!anyDialogOpen);
}, [
isSubLabelDialogOpen,
isLPRDialogOpen,
isAttributesDialogOpen,
setShowNavigationButtons,
]);

const formattedDate = useFormattedTimestamp(
search?.start_time ?? 0,
config?.ui.time_format == "24hour"
@@ -807,6 +844,41 @@ function ObjectDetailsTab({
}
}, [search]);

// Extract current attribute selections grouped by model
const selectedAttributesByModel = useMemo(() => {
if (!search || !modelAttributes) {
return {};
}

const dataAny = search.data as Record<string, unknown>;
const selections: Record<string, string | null> = {};

// Initialize all models with null
Object.keys(modelAttributes).forEach((modelName) => {
selections[modelName] = null;
});

// Find which attribute is selected for each model
Object.keys(modelAttributes).forEach((modelName) => {
const value = dataAny[modelName];
if (
typeof value === "string" &&
modelAttributes[modelName].includes(value)
) {
selections[modelName] = value;
}
});

return selections;
}, [search, modelAttributes]);

// Get flat list of selected attributes for display
const eventAttributes = useMemo(() => {
return Object.values(selectedAttributesByModel)
.filter((attr): attr is string => attr !== null)
.sort((a, b) => a.localeCompare(b));
}, [selectedAttributesByModel]);

const isEventsKey = useCallback((key: unknown): boolean => {
const candidate = Array.isArray(key) ? key[0] : key;
const EVENTS_KEY_PATTERNS = ["events", "events/search", "events/explore"];
@@ -1048,6 +1120,74 @@ function ObjectDetailsTab({
[search, apiHost, mutate, setSearch, t, mapSearchResults, isEventsKey],
);

const handleAttributesSave = useCallback(
(selectedAttributes: string[]) => {
if (!search) return;

axios
.post(`${apiHost}api/events/${search.id}/attributes`, {
attributes: selectedAttributes,
})
.then((response) => {
const applied = Array.isArray(response.data?.applied)
? (response.data.applied as {
model?: string;
label?: string | null;
score?: number | null;
}[])
: [];

toast.success(t("details.item.toast.success.updatedAttributes"), {
position: "top-center",
});

const applyUpdatedAttributes = (event: SearchResult) => {
if (event.id !== search.id) return event;

const updatedData: Record<string, unknown> = { ...event.data };

applied.forEach(({ model, label, score }) => {
if (!model) return;
updatedData[model] = label ?? null;
updatedData[`${model}_score`] = score ?? null;
});

return { ...event, data: updatedData } as SearchResult;
};

mutate(
(key) => isEventsKey(key),
(currentData: SearchResult[][] | SearchResult[] | undefined) =>
mapSearchResults(currentData, applyUpdatedAttributes),
{
optimisticData: true,
rollbackOnError: true,
revalidate: false,
},
);

setSearch(applyUpdatedAttributes(search));
setIsAttributesDialogOpen(false);
})
.catch((error) => {
const errorMessage =
error.response?.data?.message ||
error.response?.data?.detail ||
"Unknown error";

toast.error(
t("details.item.toast.error.updatedAttributesFailed", {
errorMessage,
}),
{
position: "top-center",
},
);
});
},
[search, apiHost, mutate, t, mapSearchResults, isEventsKey, setSearch],
);

// speech transcription

const onTranscribe = useCallback(() => {
@@ -1295,6 +1435,38 @@ function ObjectDetailsTab({
</div>
</div>
)}

{hasCustomClassificationModels &&
modelAttributes &&
Object.keys(modelAttributes).length > 0 && (
<div className="flex flex-col gap-1.5">
<div className="flex items-center gap-2 text-sm text-primary/40">
{t("details.attributes")}
{isAdmin && (
<Tooltip>
<TooltipTrigger asChild>
<span>
<FaPencilAlt
className="size-4 cursor-pointer text-primary/40 hover:text-primary/80"
onClick={() => setIsAttributesDialogOpen(true)}
/>
</span>
</TooltipTrigger>
<TooltipPortal>
<TooltipContent>
{t("button.edit", { ns: "common" })}
</TooltipContent>
</TooltipPortal>
</Tooltip>
)}
</div>
<div className="text-sm">
{eventAttributes.length > 0
? eventAttributes.join(", ")
: t("label.none", { ns: "common" })}
</div>
</div>
)}
</div>
</div>

@@ -1595,6 +1767,17 @@ function ObjectDetailsTab({
defaultValue={search?.data.recognized_license_plate || ""}
allowEmpty={true}
/>
<AttributeSelectDialog
open={isAttributesDialogOpen}
setOpen={setIsAttributesDialogOpen}
title={t("details.editAttributes.title")}
description={t("details.editAttributes.desc", {
label: search.label,
})}
onSave={handleAttributesSave}
selectedAttributes={selectedAttributesByModel}
modelAttributes={modelAttributes ?? {}}
/>
</div>
</div>
);
@@ -526,7 +526,7 @@ export function TrackingDetails({

<div
className={cn(
"flex items-center justify-center",
"flex items-start justify-center",
isDesktop && "overflow-hidden",
cameraAspect === "tall" ? "max-h-[50dvh] lg:max-h-[70dvh]" : "w-full",
cameraAspect === "tall" && isMobileOnly && "w-full",
@@ -622,7 +625,10 @@ export function TrackingDetails({

<div
className={cn(
isDesktop && "justify-between overflow-hidden lg:basis-2/5",
isDesktop && "justify-start overflow-hidden",
aspectRatio > 1 && aspectRatio < 1.5
? "lg:basis-3/5"
: "lg:basis-2/5",
)}
>
{isDesktop && tabs && (
@@ -632,121 +635,114 @@ export function TrackingDetails({
)}
<div
className={cn(
isDesktop && "scrollbar-container h-full overflow-y-auto",
isDesktop && "scrollbar-container max-h-[70vh] overflow-y-auto",
)}
>
{config?.cameras[event.camera]?.onvif.autotracking
.enabled_in_config && (
<div className="mb-2 ml-3 text-sm text-danger">
<div className="mb-4 ml-3 text-sm text-danger">
{t("trackingDetails.autoTrackingTips")}
</div>
)}

<div className="mt-4">
<div
className={cn("rounded-md bg-background_alt px-0 py-3 md:px-2")}
>
<div className="flex w-full items-center justify-between">
<div className={cn("rounded-md bg-background_alt px-0 py-3 md:px-2")}>
<div className="flex w-full items-center justify-between">
<div
className="flex items-center gap-2 font-medium"
onClick={(e) => {
e.stopPropagation();
// event.start_time is detect time, convert to record
handleSeekToTime(
(event.start_time ?? 0) + annotationOffset / 1000,
);
}}
role="button"
>
<div
className="flex items-center gap-2 font-medium"
onClick={(e) => {
e.stopPropagation();
// event.start_time is detect time, convert to record
handleSeekToTime(
(event.start_time ?? 0) + annotationOffset / 1000,
);
}}
role="button"
className={cn(
"relative ml-2 rounded-full bg-muted-foreground p-2",
)}
>
<div
className={cn(
"relative ml-2 rounded-full bg-muted-foreground p-2",
)}
>
{getIconForLabel(
event.sub_label ? event.label + "-verified" : event.label,
"size-4 text-white",
)}
</div>
<div className="flex items-center gap-2">
<span className="capitalize">{label}</span>
<div className="md:text-md flex items-center text-xs text-secondary-foreground">
{formattedStart ?? ""}
{event.end_time != null ? (
<> - {formattedEnd}</>
) : (
<div className="inline-block">
<ActivityIndicator className="ml-3 size-4" />
</div>
)}
</div>
{event.data?.recognized_license_plate && (
<>
<span className="text-secondary-foreground">·</span>
<div className="text-sm text-secondary-foreground">
<Link
to={`/explore?recognized_license_plate=${event.data.recognized_license_plate}`}
className="text-sm"
>
{event.data.recognized_license_plate}
</Link>
</div>
</>
{getIconForLabel(
event.sub_label ? event.label + "-verified" : event.label,
"size-4 text-white",
)}
</div>
<div className="flex items-center gap-2">
<span className="capitalize">{label}</span>
<div className="md:text-md flex items-center text-xs text-secondary-foreground">
{formattedStart ?? ""}
{event.end_time != null ? (
<> - {formattedEnd}</>
) : (
<div className="inline-block">
<ActivityIndicator className="ml-3 size-4" />
</div>
)}
</div>
{event.data?.recognized_license_plate && (
<>
<span className="text-secondary-foreground">·</span>
<div className="text-sm text-secondary-foreground">
<Link
to={`/explore?recognized_license_plate=${event.data.recognized_license_plate}`}
className="text-sm"
>
{event.data.recognized_license_plate}
</Link>
</div>
</>
)}
</div>
</div>
</div>

<div className="mt-2">
{!eventSequence ? (
<ActivityIndicator className="size-2" size={2} />
) : eventSequence.length === 0 ? (
<div className="py-2 text-muted-foreground">
{t("detail.noObjectDetailData", { ns: "views/events" })}
</div>
) : (
<div className="mt-2">
{!eventSequence ? (
<ActivityIndicator className="size-2" size={2} />
) : eventSequence.length === 0 ? (
<div className="py-2 text-muted-foreground">
{t("detail.noObjectDetailData", { ns: "views/events" })}
</div>
) : (
<div className="-pb-2 relative mx-0" ref={timelineContainerRef}>
<div
className="-pb-2 relative mx-0"
ref={timelineContainerRef}
>
className="absolute -top-2 left-6 z-0 w-0.5 -translate-x-1/2 bg-secondary-foreground"
style={{ bottom: lineBottomOffsetPx }}
/>
{isWithinEventRange && (
<div
className="absolute -top-2 left-6 z-0 w-0.5 -translate-x-1/2 bg-secondary-foreground"
style={{ bottom: lineBottomOffsetPx }}
className="absolute left-6 z-[5] w-0.5 -translate-x-1/2 bg-selected transition-all duration-300"
style={{
top: `${lineTopOffsetPx}px`,
height: `${blueLineHeightPx}px`,
}}
/>
{isWithinEventRange && (
<div
className="absolute left-6 z-[5] w-0.5 -translate-x-1/2 bg-selected transition-all duration-300"
style={{
top: `${lineTopOffsetPx}px`,
height: `${blueLineHeightPx}px`,
}}
/>
)}
<div className="space-y-2">
{eventSequence.map((item, idx) => {
return (
<div
key={`${item.timestamp}-${item.source_id ?? ""}-${idx}`}
ref={(el) => {
rowRefs.current[idx] = el;
}}
>
<LifecycleIconRow
item={item}
event={event}
onClick={() => handleLifecycleClick(item)}
setSelectedZone={setSelectedZone}
getZoneColor={getZoneColor}
effectiveTime={effectiveTime}
isTimelineActive={isWithinEventRange}
/>
</div>
);
})}
</div>
)}
<div className="space-y-2">
{eventSequence.map((item, idx) => {
return (
<div
key={`${item.timestamp}-${item.source_id ?? ""}-${idx}`}
ref={(el) => {
rowRefs.current[idx] = el;
}}
>
<LifecycleIconRow
item={item}
event={event}
onClick={() => handleLifecycleClick(item)}
setSelectedZone={setSelectedZone}
getZoneColor={getZoneColor}
effectiveTime={effectiveTime}
isTimelineActive={isWithinEventRange}
/>
</div>
);
})}
</div>
)}
</div>
</div>
)}
</div>
</div>
</div>
123
web/src/components/overlay/dialog/AttributeSelectDialog.tsx
Normal file
@@ -0,0 +1,123 @@
import { Button } from "@/components/ui/button";
import {
Dialog,
DialogContent,
DialogDescription,
DialogFooter,
DialogHeader,
DialogTitle,
} from "@/components/ui/dialog";
import { Label } from "@/components/ui/label";
import { Switch } from "@/components/ui/switch";
import { cn } from "@/lib/utils";
import { useCallback, useEffect, useState } from "react";
import { isDesktop } from "react-device-detect";
import { useTranslation } from "react-i18next";

type AttributeSelectDialogProps = {
open: boolean;
setOpen: (open: boolean) => void;
title: string;
description: string;
onSave: (selectedAttributes: string[]) => void;
selectedAttributes: Record<string, string | null>; // model -> selected attribute
modelAttributes: Record<string, string[]>; // model -> available attributes
className?: string;
};

export default function AttributeSelectDialog({
open,
setOpen,
title,
description,
onSave,
selectedAttributes,
modelAttributes,
className,
}: AttributeSelectDialogProps) {
const { t } = useTranslation();
const [internalSelection, setInternalSelection] = useState<
Record<string, string | null>
>({});

useEffect(() => {
if (open) {
setInternalSelection({ ...selectedAttributes });
}
}, [open, selectedAttributes]);

const handleSave = useCallback(() => {
// Convert from model->attribute map to flat list of attributes
const attributes = Object.values(internalSelection).filter(
(attr): attr is string => attr !== null,
);
onSave(attributes);
}, [internalSelection, onSave]);

const handleToggle = useCallback((modelName: string, attribute: string) => {
setInternalSelection((prev) => {
const currentSelection = prev[modelName];
// If clicking the currently selected attribute, deselect it
if (currentSelection === attribute) {
return { ...prev, [modelName]: null };
}
// Otherwise, select this attribute for this model
return { ...prev, [modelName]: attribute };
});
}, []);

return (
<Dialog open={open} onOpenChange={setOpen}>
<DialogContent
className={cn(className, isDesktop ? "max-w-md" : "max-w-[90%]")}
onOpenAutoFocus={(e) => e.preventDefault()}
>
<DialogHeader>
<DialogTitle>{title}</DialogTitle>
<DialogDescription>{description}</DialogDescription>
</DialogHeader>
<div className="scrollbar-container overflow-y-auto">
<div className="max-h-[80dvh] space-y-6 py-2">
{Object.entries(modelAttributes).map(([modelName, attributes]) => (
<div key={modelName} className="space-y-3">
<div className="text-sm font-semibold text-primary-variant">
{modelName}
</div>
<div className="space-y-2 pl-2">
{attributes.map((attribute) => (
<div
key={attribute}
className="flex items-center justify-between gap-2"
>
<Label
htmlFor={`${modelName}-${attribute}`}
className="cursor-pointer text-sm text-primary"
>
{attribute}
</Label>
<Switch
id={`${modelName}-${attribute}`}
checked={internalSelection[modelName] === attribute}
onCheckedChange={() =>
handleToggle(modelName, attribute)
}
/>
</div>
))}
</div>
</div>
))}
</div>
</div>
<DialogFooter>
<Button type="button" onClick={() => setOpen(false)}>
{t("button.cancel")}
</Button>
<Button variant="select" onClick={handleSave}>
{t("button.save", { ns: "common" })}
</Button>
</DialogFooter>
</DialogContent>
</Dialog>
);
}
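
A hypothetical usage of the new dialog; the prop shapes come straight from AttributeSelectDialogProps above, but the model and attribute names are invented for illustration:

import { useState } from "react";
import AttributeSelectDialog from "@/components/overlay/dialog/AttributeSelectDialog";

function ExampleEditor() {
  const [open, setOpen] = useState(false);

  return (
    <AttributeSelectDialog
      open={open}
      setOpen={setOpen}
      title="Edit Attributes"
      description="Select at most one attribute per model"
      // one current selection (or null) per classification model
      selectedAttributes={{ vehicle_type: "truck", color: null }}
      // the attributes each model can assign
      modelAttributes={{
        vehicle_type: ["car", "truck", "van"],
        color: ["red", "blue"],
      }}
      onSave={(attrs) => {
        // attrs is the flat list of non-null selections, e.g. ["truck"]
        setOpen(false);
      }}
    />
  );
}

Note the single-select-per-model behavior: toggling a checked attribute clears it, and checking another attribute replaces that model's previous selection.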
96
web/src/components/overlay/dialog/MultiSelectDialog.tsx
Normal file
@@ -0,0 +1,96 @@
import { Button } from "@/components/ui/button";
import {
Dialog,
DialogContent,
DialogDescription,
DialogFooter,
DialogHeader,
DialogTitle,
} from "@/components/ui/dialog";
import { cn } from "@/lib/utils";
import { useState } from "react";
import { isMobile } from "react-device-detect";
import { useTranslation } from "react-i18next";
import FilterSwitch from "@/components/filter/FilterSwitch";

type MultiSelectDialogProps = {
open: boolean;
title: string;
description?: string;
setOpen: (open: boolean) => void;
onSave: (selectedItems: string[]) => void;
selectedItems: string[];
availableItems: string[];
allowEmpty?: boolean;
};

export default function MultiSelectDialog({
open,
title,
description,
setOpen,
onSave,
selectedItems = [],
availableItems = [],
allowEmpty = false,
}: MultiSelectDialogProps) {
const { t } = useTranslation("common");
const [internalSelection, setInternalSelection] =
useState<string[]>(selectedItems);

// Reset internal selection when dialog opens
const handleOpenChange = (isOpen: boolean) => {
if (isOpen) {
setInternalSelection(selectedItems);
}
setOpen(isOpen);
};

const toggleItem = (item: string) => {
setInternalSelection((prev) =>
prev.includes(item) ? prev.filter((i) => i !== item) : [...prev, item],
);
};

const handleSave = () => {
if (!allowEmpty && internalSelection.length === 0) {
return;
}
onSave(internalSelection);
setOpen(false);
};

return (
<Dialog open={open} defaultOpen={false} onOpenChange={handleOpenChange}>
<DialogContent>
<DialogHeader>
<DialogTitle>{title}</DialogTitle>
{description && <DialogDescription>{description}</DialogDescription>}
</DialogHeader>
<div className="max-h-[80dvh] space-y-3 overflow-y-auto py-4">
{availableItems.map((item) => (
<FilterSwitch
key={item}
label={item}
isChecked={internalSelection.includes(item)}
onCheckedChange={() => toggleItem(item)}
/>
))}
</div>
<DialogFooter className={cn("pt-4", isMobile && "gap-2")}>
<Button type="button" onClick={() => setOpen(false)}>
{t("button.cancel")}
</Button>
<Button
variant="select"
type="button"
onClick={handleSave}
disabled={!allowEmpty && internalSelection.length === 0}
>
{t("button.save")}
</Button>
</DialogFooter>
</DialogContent>
</Dialog>
);
}
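
And a hypothetical usage of MultiSelectDialog, again with invented items. Unlike the attribute dialog this is a flat multi-select, and with allowEmpty={false} the save button stays disabled until at least one item is checked:

import { useState } from "react";
import MultiSelectDialog from "@/components/overlay/dialog/MultiSelectDialog";

function ZonePicker() {
  const [open, setOpen] = useState(false);
  const [zones, setZones] = useState<string[]>(["front_yard"]);

  return (
    <MultiSelectDialog
      open={open}
      setOpen={setOpen}
      title="Select Zones"
      description="Choose the zones to include"
      selectedItems={zones}
      availableItems={["front_yard", "driveway", "back_yard"]}
      allowEmpty={false}
      onSave={(items) => setZones(items)}
    />
  );
}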
@@ -65,6 +65,13 @@ export default function SearchFilterDialog({
const { t } = useTranslation(["components/filter"]);
const [currentFilter, setCurrentFilter] = useState(filter ?? {});
const { data: allSubLabels } = useSWR(["sub_labels", { split_joined: 1 }]);
const hasCustomClassificationModels = useMemo(
() => Object.keys(config?.classification?.custom ?? {}).length > 0,
[config],
);
const { data: allAttributes } = useSWR(
hasCustomClassificationModels ? "classification/attributes" : null,
);
const { data: allRecognizedLicensePlates } = useSWR<string[]>(
"recognized_license_plates",
);
@@ -91,8 +98,10 @@ export default function SearchFilterDialog({
(currentFilter.max_speed ?? 150) < 150 ||
(currentFilter.zones?.length ?? 0) > 0 ||
(currentFilter.sub_labels?.length ?? 0) > 0 ||
(hasCustomClassificationModels &&
(currentFilter.attributes?.length ?? 0) > 0) ||
(currentFilter.recognized_license_plate?.length ?? 0) > 0),
[currentFilter],
[currentFilter, hasCustomClassificationModels],
);

const trigger = (
@@ -133,6 +142,15 @@ export default function SearchFilterDialog({
setCurrentFilter({ ...currentFilter, sub_labels: newSubLabels })
}
/>
{hasCustomClassificationModels && (
<AttributeFilterContent
allAttributes={allAttributes}
attributes={currentFilter.attributes}
setAttributes={(newAttributes) =>
setCurrentFilter({ ...currentFilter, attributes: newAttributes })
}
/>
)}
<RecognizedLicensePlatesFilterContent
allRecognizedLicensePlates={allRecognizedLicensePlates}
recognizedLicensePlates={currentFilter.recognized_license_plate}
@@ -216,6 +234,7 @@ export default function SearchFilterDialog({
max_speed: undefined,
has_snapshot: undefined,
has_clip: undefined,
...(hasCustomClassificationModels && { attributes: undefined }),
recognized_license_plate: undefined,
}));
}}
@@ -1087,3 +1106,72 @@ export function RecognizedLicensePlatesFilterContent({
</div>
);
}

type AttributeFilterContentProps = {
allAttributes?: string[];
attributes: string[] | undefined;
setAttributes: (labels: string[] | undefined) => void;
};
export function AttributeFilterContent({
allAttributes,
attributes,
setAttributes,
}: AttributeFilterContentProps) {
const { t } = useTranslation(["components/filter"]);
const sortedAttributes = useMemo(
() =>
[...(allAttributes || [])].sort((a, b) =>
a.toLowerCase().localeCompare(b.toLowerCase()),
),
[allAttributes],
);
return (
<div className="overflow-x-hidden">
<DropdownMenuSeparator className="mb-3" />
<div className="text-lg">{t("attributes.label")}</div>
<div className="mb-5 mt-2.5 flex items-center justify-between">
<Label
className="mx-2 cursor-pointer text-primary"
htmlFor="allAttributes"
>
{t("attributes.all")}
</Label>
<Switch
className="ml-1"
id="allAttributes"
checked={attributes == undefined}
onCheckedChange={(isChecked) => {
if (isChecked) {
setAttributes(undefined);
}
}}
/>
</div>
<div className="mt-2.5 flex flex-col gap-2.5">
{sortedAttributes.map((item) => (
<FilterSwitch
key={item}
label={item.replaceAll("_", " ")}
isChecked={attributes?.includes(item) ?? false}
onCheckedChange={(isChecked) => {
if (isChecked) {
const updatedAttributes = attributes ? [...attributes] : [];

updatedAttributes.push(item);
setAttributes(updatedAttributes);
} else {
const updatedAttributes = attributes ? [...attributes] : [];

// can not deselect the last item
if (updatedAttributes.length > 1) {
updatedAttributes.splice(updatedAttributes.indexOf(item), 1);
setAttributes(updatedAttributes);
}
}
}}
/>
))}
</div>
</div>
);
}
@@ -170,7 +170,9 @@ export function ClassFilterContent({
<FilterSwitch
key={item}
label={
item === "none" ? t("none") : item.replaceAll("_", " ")
item === "none"
? t("details.none", { ns: "views/classificationModel" })
: item.replaceAll("_", " ")
}
isChecked={classes?.includes(item) ?? false}
onCheckedChange={(isChecked) => {
@@ -178,6 +178,19 @@ export default function ObjectMaskEditPane({
filteredMask.splice(index, 0, coordinates);
}

// prevent duplicating global masks under specific object filters
if (!globalMask) {
const globalObjectMasksArray = Array.isArray(cameraConfig.objects.mask)
? cameraConfig.objects.mask
: cameraConfig.objects.mask
? [cameraConfig.objects.mask]
: [];

filteredMask = filteredMask.filter(
(mask) => !globalObjectMasksArray.includes(mask),
);
}

queryString = filteredMask
.map((pointsArray) => {
const coordinates = flattenPoints(parseCoordinates(pointsArray)).join(
@@ -345,9 +345,9 @@ function ReviewGroup({
}

const reviewInfo = useMemo(() => {
const objectCount = fetchedEvents
? fetchedEvents.length
: (review.data.objects ?? []).length;
const detectionsCount =
review.data?.detections?.length ?? (review.data?.objects ?? []).length;
const objectCount = fetchedEvents ? fetchedEvents.length : detectionsCount;

return `${t("detail.trackedObject", { count: objectCount })}`;
}, [review, t, fetchedEvents]);
@@ -54,7 +54,7 @@ export default function useCameraLiveMode(
}>({});

useEffect(() => {
if (!cameras) return;
if (!cameras || cameras.length === 0) return;

const mseSupported =
"MediaSource" in window || "ManagedMediaSource" in window;
@@ -31,6 +31,7 @@ const SEARCH_FILTER_ARRAY_KEYS = [
"cameras",
"labels",
"sub_labels",
"attributes",
"recognized_license_plate",
"zones",
];
@@ -122,6 +123,7 @@ export default function Explore() {
cameras: searchSearchParams["cameras"],
labels: searchSearchParams["labels"],
sub_labels: searchSearchParams["sub_labels"],
attributes: searchSearchParams["attributes"],
recognized_license_plate:
searchSearchParams["recognized_license_plate"],
zones: searchSearchParams["zones"],
@@ -158,6 +160,7 @@ export default function Explore() {
cameras: searchSearchParams["cameras"],
labels: searchSearchParams["labels"],
sub_labels: searchSearchParams["sub_labels"],
attributes: searchSearchParams["attributes"],
recognized_license_plate:
searchSearchParams["recognized_license_plate"],
zones: searchSearchParams["zones"],
@@ -68,7 +68,10 @@ import {
ClassificationCard,
GroupedClassificationCard,
} from "@/components/card/ClassificationCard";
import { ClassificationItemData } from "@/types/classification";
import {
ClassificationItemData,
ClassifiedEvent,
} from "@/types/classification";

export default function FaceLibrary() {
const { t } = useTranslation(["views/faceLibrary"]);
@@ -922,10 +925,22 @@ function FaceAttemptGroup({
[onRefresh, t],
);

// Create ClassifiedEvent from Event (face recognition uses sub_label)
const classifiedEvent: ClassifiedEvent | undefined = useMemo(() => {
if (!event || !event.sub_label || event.sub_label === "none") {
return undefined;
}
return {
id: event.id,
label: event.sub_label,
score: event.data?.sub_label_score,
};
}, [event]);

return (
<GroupedClassificationCard
group={group}
event={event}
classifiedEvent={classifiedEvent}
threshold={threshold}
selectedItems={selectedFaces}
i18nLibrary="views/faceLibrary"
@@ -1011,6 +1026,7 @@ function FaceGrid({
filepath: `clips/faces/${pageToggle}/${image}`,
}}
selected={selectedFaces.includes(image)}
clickable={selectedFaces.length > 0}
i18nLibrary="views/faceLibrary"
onClick={(data, meta) => onClickFaces([data.filename], meta)}
>
@@ -437,7 +437,7 @@ export default function Settings() {

return (
<div className="flex h-full flex-col">
<div className="flex items-center justify-between border-b border-secondary p-3">
<div className="flex min-h-16 items-center justify-between border-b border-secondary p-3">
<Heading as="h3" className="mb-0">
{t("menu.settings", { ns: "common" })}
</Heading>
@@ -21,6 +21,12 @@ export type ClassificationThreshold = {
unknown: number;
};

export type ClassifiedEvent = {
id: string;
label?: string;
score?: number;
};

export type ClassificationDatasetResponse = {
categories: {
[id: string]: string[];
@@ -24,5 +24,12 @@ export interface Event {
type: "object" | "audio" | "manual";
recognized_license_plate?: string;
path_data: [number[], number][];
// Allow arbitrary keys for attributes (e.g., model_name, model_name_score)
[key: string]:
| number
| number[]
| string
| [number[], number][]
| undefined;
};
}
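
The widened index signature on the event's data object is what lets custom classification results live under dynamic keys. A sketch of the read side, mirroring the createClassifiedEvent hunk later in this diff (the "vehicle_type" model name is invented):

// results are stored under "<model_name>" and "<model_name>_score"
const data: Record<string, unknown> = {
  vehicle_type: "truck",
  vehicle_type_score: 0.91,
};

const modelName = "vehicle_type";
const label =
  typeof data[modelName] === "string" ? (data[modelName] as string) : undefined;
const score =
  typeof data[`${modelName}_score`] === "number"
    ? (data[`${modelName}_score`] as number)
    : undefined;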
@@ -5,6 +5,7 @@ const SEARCH_FILTERS = [
"general",
"zone",
"sub",
"attribute",
"source",
"sort",
] as const;
@@ -16,6 +17,7 @@ export const DEFAULT_SEARCH_FILTERS: SearchFilters[] = [
"general",
"zone",
"sub",
"attribute",
"source",
"sort",
];
@@ -71,6 +73,7 @@ export type SearchFilter = {
cameras?: string[];
labels?: string[];
sub_labels?: string[];
attributes?: string[];
recognized_license_plate?: string[];
zones?: string[];
before?: number;
@@ -95,6 +98,7 @@ export type SearchQueryParams = {
cameras?: string[];
labels?: string[];
sub_labels?: string[];
attributes?: string[];
recognized_license_plate?: string[];
zones?: string[];
before?: string;
@@ -62,6 +62,7 @@ import useApiFilter from "@/hooks/use-api-filter";
import {
ClassificationDatasetResponse,
ClassificationItemData,
ClassifiedEvent,
TrainFilter,
} from "@/types/classification";
import {
@@ -707,7 +708,7 @@ function LibrarySelector({
className="flex-grow cursor-pointer capitalize"
onClick={() => setPageToggle(id)}
>
{id === "none" ? t("none") : id.replaceAll("_", " ")}
{id === "none" ? t("details.none") : id.replaceAll("_", " ")}
<span className="ml-2 text-muted-foreground">
({dataset?.[id].length})
</span>
@@ -803,6 +804,7 @@ function DatasetGrid({
name: "",
}}
showArea={false}
clickable={selectedImages.length > 0}
selected={selectedImages.includes(image)}
i18nLibrary="views/classificationModel"
onClick={(data, _) => onClickImages([data.filename], true)}
@@ -866,6 +868,12 @@ function TrainGrid({
};
})
.filter((data) => {
// Ignore images that don't match the expected format (event-camera-timestamp-state-score.webp)
// Expected format has 5 parts when split by "-", and score should be a valid number
if (data.score === undefined || isNaN(data.score) || !data.name) {
return false;
}

if (!trainFilter) {
return true;
}
@@ -955,6 +963,7 @@ function StateTrainGrid({
data={data}
threshold={threshold}
selected={selectedImages.includes(data.filename)}
clickable={selectedImages.length > 0}
i18nLibrary="views/classificationModel"
showArea={false}
onClick={(data, meta) => onClickImages([data.filename], meta)}
@@ -1027,6 +1036,45 @@ function ObjectTrainGrid({
};
}, [model]);

// Helper function to create ClassifiedEvent from Event
const createClassifiedEvent = useCallback(
(event: Event | undefined): ClassifiedEvent | undefined => {
if (!event || !model.object_config) {
return undefined;
}

const classificationType = model.object_config.classification_type;

if (classificationType === "attribute") {
// For attribute type, look at event.data[model.name]
const attributeValue = event.data[model.name] as string | undefined;
const attributeScore = event.data[`${model.name}_score`] as
| number
| undefined;

if (attributeValue && attributeValue !== "none") {
return {
id: event.id,
label: attributeValue,
score: attributeScore,
};
}
} else {
// For sub_label type, use event.sub_label
if (event.sub_label && event.sub_label !== "none") {
return {
id: event.id,
label: event.sub_label,
score: event.data?.sub_label_score,
};
}
}

return undefined;
},
[model],
);

// selection

const [selectedEvent, setSelectedEvent] = useState<Event>();
@@ -1089,11 +1137,13 @@ function ObjectTrainGrid({
>
{Object.entries(groups).map(([key, group]) => {
const event = events?.find((ev) => ev.id == key);
const classifiedEvent = createClassifiedEvent(event);

return (
<div key={key} className="aspect-square w-full">
<GroupedClassificationCard
group={group}
event={event}
classifiedEvent={classifiedEvent}
threshold={threshold}
selectedItems={selectedImages}
i18nLibrary="views/classificationModel"
@@ -147,10 +147,11 @@ export default function LiveCameraView({

// supported features

const [streamName, setStreamName] = useUserPersistence<string>(
`${camera.name}-stream`,
Object.values(camera.live.streams)[0],
);
const [streamName, setStreamName, streamNameLoaded] =
useUserPersistence<string>(
`${camera.name}-stream`,
Object.values(camera.live.streams)[0],
);

const isRestreamed = useMemo(
() =>
@@ -159,6 +160,19 @@ export default function LiveCameraView({
[config, streamName],
);

// validate stored stream name and reset if now invalid

useEffect(() => {
if (!streamNameLoaded) return;

const available = Object.values(camera.live.streams || {});
if (available.length === 0) return;

if (streamName != null && !available.includes(streamName)) {
setStreamName(available[0]);
}
}, [streamNameLoaded, camera.live.streams, streamName, setStreamName]);

const { data: cameraMetadata } = useSWR<LiveStreamMetadata>(
isRestreamed ? `go2rtc/streams/${streamName}` : null,
{
@@ -1430,7 +1444,7 @@ function FrigateCameraFeatures({
ns: "components/dialog",
})}
</div>
<Popover>
<Popover modal={true}>
<PopoverTrigger asChild>
<div className="cursor-pointer p-0">
<LuInfo className="size-4" />
@@ -1517,7 +1531,7 @@ function FrigateCameraFeatures({
<>
<LuX className="size-4 text-danger" />
<div>{t("stream.audio.unavailable")}</div>
<Popover>
<Popover modal={true}>
<PopoverTrigger asChild>
<div className="cursor-pointer p-0">
<LuInfo className="size-4" />
@@ -1561,7 +1575,7 @@ function FrigateCameraFeatures({
<>
<LuX className="size-4 text-danger" />
<div>{t("stream.twoWayTalk.unavailable")}</div>
<Popover>
<Popover modal={true}>
<PopoverTrigger asChild>
<div className="cursor-pointer p-0">
<LuInfo className="size-4" />
@@ -309,7 +309,10 @@ export function RecordingView({
currentTimeRange.after <= currentTime &&
currentTimeRange.before >= currentTime
) {
mainControllerRef.current?.seekToTimestamp(currentTime, true);
mainControllerRef.current?.seekToTimestamp(
currentTime,
mainControllerRef.current.isPlaying(),
);
} else {
updateSelectedSegment(currentTime, true);
}
@@ -143,6 +143,13 @@ export default function SearchView({
|
||||
}, [config, searchFilter, allowedCameras]);
|
||||
|
||||
const { data: allSubLabels } = useSWR("sub_labels");
|
||||
const hasCustomClassificationModels = useMemo(
|
||||
() => Object.keys(config?.classification?.custom ?? {}).length > 0,
|
||||
[config],
|
||||
);
|
||||
const { data: allAttributes } = useSWR(
|
||||
hasCustomClassificationModels ? "classification/attributes" : null,
|
||||
);
|
||||
const { data: allRecognizedLicensePlates } = useSWR(
|
||||
"recognized_license_plates",
|
||||
);
|
||||
@@ -182,6 +189,7 @@
      labels: Object.values(allLabels || {}),
      zones: Object.values(allZones || {}),
      sub_labels: allSubLabels,
      ...(hasCustomClassificationModels && { attributes: allAttributes }),
      search_type: ["thumbnail", "description"] as SearchSource[],
      time_range:
        config?.ui.time_format == "24hour"
@@ -204,9 +212,11 @@
      allLabels,
      allZones,
      allSubLabels,
      allAttributes,
      allRecognizedLicensePlates,
      searchFilter,
      allowedCameras,
      hasCustomClassificationModels,
    ],
  );
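Note: these three hunks only fetch classification attributes when at least one custom classification model is configured. Two standard patterns carry the change: SWR skips the request entirely when its key is null, and a conditional object spread adds the attributes filter field only when it is meaningful. A minimal sketch of both:

    // SWR conditional fetching: a null key disables the request.
    const enabled =
      Object.keys(config?.classification?.custom ?? {}).length > 0;
    const { data: attributes } = useSWR(
      enabled ? "classification/attributes" : null,
    );

    // Conditional spread: the attributes key only exists when enabled
    // is true (spreading false is a no-op).
    const filterValues = {
      sub_labels: allSubLabels,
      ...(enabled && { attributes }),
    };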
@@ -4,7 +4,7 @@ import useSWR from "swr";
import axios from "axios";
import ActivityIndicator from "@/components/indicators/activity-indicator";
import AutoUpdatingCameraImage from "@/components/camera/AutoUpdatingCameraImage";
import { useCallback, useContext, useEffect, useMemo, useState } from "react";
import { useCallback, useEffect, useMemo, useState } from "react";
import { Slider } from "@/components/ui/slider";
import { Label } from "@/components/ui/label";
import {
@@ -20,7 +20,6 @@ import { toast } from "sonner";
import { Separator } from "@/components/ui/separator";
import { Link } from "react-router-dom";
import { LuExternalLink } from "react-icons/lu";
import { StatusBarMessagesContext } from "@/context/statusbar-provider";
import { Trans, useTranslation } from "react-i18next";
import { useDocDomain } from "@/hooks/use-doc-domain";
import { cn } from "@/lib/utils";
@@ -48,8 +47,6 @@ export default function MotionTunerView({
  const [changedValue, setChangedValue] = useState(false);
  const [isLoading, setIsLoading] = useState(false);

  const { addMessage, removeMessage } = useContext(StatusBarMessagesContext)!;

  const { send: sendMotionThreshold } = useMotionThreshold(selectedCamera);
  const { send: sendMotionContourArea } = useMotionContourArea(selectedCamera);
  const { send: sendImproveContrast } = useImproveContrast(selectedCamera);
@@ -119,7 +116,10 @@
    axios
      .put(
        `config/set?cameras.${selectedCamera}.motion.threshold=${motionSettings.threshold}&cameras.${selectedCamera}.motion.contour_area=${motionSettings.contour_area}&cameras.${selectedCamera}.motion.improve_contrast=${motionSettings.improve_contrast ? "True" : "False"}`,
        { requires_restart: 0 },
        {
          requires_restart: 0,
          update_topic: `config/cameras/${selectedCamera}/motion`,
        },
      )
      .then((res) => {
        if (res.status === 200) {
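Note: the PUT body gains an update_topic field alongside requires_restart: 0, presumably so the backend can broadcast the changed motion settings over its realtime channel rather than waiting for a restart; the server-side handling of the topic is not shown in this diff. The resulting request shape, restated from the hunk above:

    // Request body after this change (server semantics not shown here;
    // query string abbreviated - the real call also sets contour_area
    // and improve_contrast).
    await axios.put(
      `config/set?cameras.${camera}.motion.threshold=${threshold}`,
      {
        requires_restart: 0,
        update_topic: `config/cameras/${camera}/motion`,
      },
    );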
@@ -164,23 +164,7 @@
  const onCancel = useCallback(() => {
    setMotionSettings(origMotionSettings);
    setChangedValue(false);
    removeMessage("motion_tuner", `motion_tuner_${selectedCamera}`);
  }, [origMotionSettings, removeMessage, selectedCamera]);

  useEffect(() => {
    if (changedValue) {
      addMessage(
        "motion_tuner",
        t("motionDetectionTuner.unsavedChanges", { camera: selectedCamera }),
        undefined,
        `motion_tuner_${selectedCamera}`,
      );
    } else {
      removeMessage("motion_tuner", `motion_tuner_${selectedCamera}`);
    }
    // we know that these deps are correct
    // eslint-disable-next-line react-hooks/exhaustive-deps
  }, [changedValue, selectedCamera]);
  }, [origMotionSettings]);

  useEffect(() => {
    document.title = t("documentTitle.motionTuner");
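Note: together with the import hunks above, this removes the status-bar "unsaved changes" message from the motion tuner entirely, including the effect whose dependency list needed an eslint suppression. Reconstructed from the hunk, the cancel handler now reduces to:

    const onCancel = useCallback(() => {
      setMotionSettings(origMotionSettings);
      setChangedValue(false);
    }, [origMotionSettings]);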
@@ -88,11 +88,20 @@ export default function EnrichmentMetrics({

    Object.entries(stats.embeddings).forEach(([rawKey, stat]) => {
      const key = rawKey.replaceAll("_", " ");

      if (!(key in series)) {
        const classificationIndex = rawKey.indexOf("_classification_");
        const seriesName =
          classificationIndex === -1
            ? t("enrichments.embeddings." + rawKey)
            : t(
                `enrichments.embeddings.${rawKey.substring(classificationIndex + 1)}`,
                {
                  name: rawKey.substring(0, classificationIndex),
                },
              );
        series[key] = {
          rawKey,
          name: t("enrichments.embeddings." + rawKey),
          name: seriesName,
          metrics: getThreshold(rawKey),
          data: [],
        };
@@ -133,8 +142,14 @@
        isSpeed = false;
      }

      let categoryName = "";
      // Get translated category name
      const categoryName = t("enrichments.embeddings." + categoryKey);
      if (categoryKey.endsWith("_classification")) {
        const name = categoryKey.replace("_classification", "");
        categoryName = t("enrichments.embeddings.classification", { name });
      } else {
        categoryName = t("enrichments.embeddings." + categoryKey);
      }

      if (!(categoryKey in grouped)) {
        grouped[categoryKey] = {
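Note: both hunks address metrics from custom classification models, whose raw keys embed a user-chosen model name (for example mymodel_classification_speed) and therefore cannot map to a static translation entry. The model name is split off and injected via i18next interpolation instead. The essential pattern:

    // i18next interpolation: the dynamic model name becomes a template
    // value in a static translation key rather than part of the key.
    // Assumed entry shape: "classification": "{{name}} classification"
    const name = categoryKey.replace("_classification", "");
    const label = t("enrichments.embeddings.classification", { name });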