mirror of
https://github.com/blakeblackshear/frigate.git
synced 2026-01-20 03:08:48 -05:00
Compare commits
26 Commits

| SHA1 |
| ---- |
| 4d582062fb |
| e0a8445bac |
| 2a271c0f5e |
| 925bf78811 |
| 59102794e8 |
| 20e5e3bdc0 |
| b94ebda9e5 |
| 8cdaef307a |
| 4914029a50 |
| bafdab9d67 |
| b08db4913f |
| 7c7ff49b90 |
| 037c4d1cc0 |
| 1613499218 |
| 205fdf3ae3 |
| f46f8a2160 |
| 880902cdd7 |
| c5ed95ec52 |
| 751de141d5 |
| 0eb441fe50 |
| 7566aecb0b |
| 60714a733e |
| d7f7cd7be1 |
| 6591210050 |
| 7e7b3288a8 |
| fe3eb24dfe |
Makefile (2 changed lines)

@@ -1,7 +1,7 @@
default_target: local

COMMIT_HASH := $(shell git log -1 --pretty=format:"%h"|tail -1)
-VERSION = 0.16.1
+VERSION = 0.16.2
IMAGE_REPO ?= ghcr.io/blakeblackshear/frigate
GITHUB_REF_NAME ?= $(shell git rev-parse --abbrev-ref HEAD)
BOARDS= #Initialized empty
@@ -144,7 +144,14 @@ WEB Digest Algorithm - MD5
### Reolink Cameras

-Reolink has older cameras (ex: 410 & 520) as well as newer camera (ex: 520a & 511wa) which support different subsets of options. In both cases using the http stream is recommended.
+Reolink has many different camera models with inconsistently supported features and behavior. The below table shows a summary of various features and recommendations.
+
+| Camera Resolution | Camera Generation         | Recommended Stream Type           | Additional Notes                                                        |
+| ----------------- | ------------------------- | --------------------------------- | ----------------------------------------------------------------------- |
+| 5MP or lower      | All                       | http-flv                          | Stream is h264                                                          |
+| 6MP or higher     | Latest (ex: Duo3, CX-8##) | http-flv with ffmpeg 8.0, or rtsp | This uses the new http-flv-enhanced over H265 which requires ffmpeg 8.0 |
+| 6MP or higher     | Older (ex: RLC-8##)       | rtsp                              |                                                                         |

Frigate works much better with newer reolink cameras that are setup with the below options:

If available, recommended settings are:
@@ -157,12 +164,6 @@ According to [this discussion](https://github.com/blakeblackshear/frigate/issues
Cameras connected via a Reolink NVR can be connected with the http stream, use `channel[0..15]` in the stream url for the additional channels.
The setup of main stream can be also done via RTSP, but isn't always reliable on all hardware versions. The example configuration is working with the oldest HW version RLN16-410 device with multiple types of cameras.

-:::warning
-
-The below configuration only works for reolink cameras with stream resolution of 5MP or lower, 8MP+ cameras need to use RTSP as http-flv is not supported in this case.
-
-:::
-
```yaml
go2rtc:
  streams:
@@ -212,7 +213,7 @@ go2rtc:
  streams:
    your_reolink_doorbell:
      - "ffmpeg:http://reolink_ip/flv?port=1935&app=bcs&stream=channel0_main.bcs&user=username&password=password#video=copy#audio=copy#audio=opus"
-      - rtsp://reolink_ip/Preview_01_sub
+      - rtsp://username:password@reolink_ip/Preview_01_sub
    your_reolink_doorbell_sub:
      - "ffmpeg:http://reolink_ip/flv?port=1935&app=bcs&stream=channel0_ext.bcs&user=username&password=password"
```
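The NVR hunk above is truncated before its example stream URLs. As a minimal sketch of the `channel[0..15]` naming it describes, reusing the http-flv URL shape from the doorbell example (the NVR address, stream name, and credentials below are placeholders, not from the source), a second NVR channel would typically look like:

```yaml
go2rtc:
  streams:
    nvr_channel_1:
      # channel1_main.bcs selects the second NVR channel; channel0 is the first
      - "ffmpeg:http://reolink_nvr_ip/flv?port=1935&app=bcs&stream=channel1_main.bcs&user=username&password=password#video=copy#audio=copy#audio=opus"
```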
@@ -259,7 +260,7 @@ To use a USB camera (webcam) with Frigate, the recommendation is to use go2rtc's
go2rtc:
  streams:
    usb_camera:
      - "ffmpeg:device?video=0&video_size=1024x576#video=h264"

cameras:
  usb_camera:
@@ -107,10 +107,7 @@ This list of working and non-working PTZ cameras is based on user feedback.
| Hanwha XNP-6550RH | ✅ | ❌ | |
| Hikvision | ✅ | ❌ | Incomplete ONVIF support (MoveStatus won't update even on latest firmware) - reported with HWP-N4215IH-DE and DS-2DE3304W-DE, but likely others |
| Hikvision DS-2DE3A404IWG-E/W | ✅ | ✅ | |
-| Reolink 511WA | ✅ | ❌ | Zoom only |
-| Reolink E1 Pro | ✅ | ❌ | |
-| Reolink E1 Zoom | ✅ | ❌ | |
-| Reolink RLC-823A 16x | ✅ | ❌ | |
+| Reolink | ✅ | ❌ | |
| Speco O8P32X | ✅ | ❌ | |
| Sunba 405-D20X | ✅ | ❌ | Incomplete ONVIF support reported on original and 4k models. All models are suspected incompatible. |
| Tapo | ✅ | ❌ | Many models supported, ONVIF Service Port: 2020 |
@@ -158,6 +158,8 @@ Start with the [Usage](#usage) section and re-read the [Model Requirements](#mod

Accuracy is definitely going to be improved with higher quality cameras / streams. It is important to look at the DORI (Detection Observation Recognition Identification) range of your camera, if that specification is posted. This specification explains the distance from the camera that a person can be detected, observed, recognized, and identified. The identification range is the most relevant here, and the distance listed by the camera is the furthest that face recognition will realistically work.

+Some users have also noted that setting the stream in camera firmware to a constant bit rate (CBR) leads to better image clarity than with a variable bit rate (VBR).
+
### Why can't I bulk upload photos?

It is important to methodically add photos to the library; bulk importing photos (especially from a general photo library) will lead to over-fitting in that particular scenario and hurt recognition performance.
@@ -18,10 +18,10 @@ genai:
  enabled: True
  provider: gemini
  api_key: "{FRIGATE_GEMINI_API_KEY}"
-  model: gemini-1.5-flash
+  model: gemini-2.0-flash

cameras:
  front_camera:
    genai:
      enabled: True # <- enable GenAI for your front camera
      use_snapshot: True
@@ -30,7 +30,7 @@ cameras:
      required_zones:
        - steps
  indoor_camera:
    genai:
      enabled: False # <- disable GenAI for your indoor camera
```
@@ -78,7 +78,7 @@ Google Gemini has a free tier allowing [15 queries per minute](https://ai.google

### Supported Models

-You must use a vision capable model with Frigate. Current model variants can be found [in their documentation](https://ai.google.dev/gemini-api/docs/models/gemini). At the time of writing, this includes `gemini-1.5-pro` and `gemini-1.5-flash`.
+You must use a vision capable model with Frigate. Current model variants can be found [in their documentation](https://ai.google.dev/gemini-api/docs/models/gemini).

### Get API Key
@@ -96,7 +96,7 @@ genai:
  enabled: True
  provider: gemini
  api_key: "{FRIGATE_GEMINI_API_KEY}"
-  model: gemini-1.5-flash
+  model: gemini-2.0-flash
```

:::note
@@ -202,7 +202,7 @@ genai:
    car: "Observe the primary vehicle in these images. Focus on its movement, direction, or purpose (e.g., parking, approaching, circling). If it's a delivery vehicle, mention the company."
```

Prompts can also be overridden at the camera level to provide a more detailed prompt to the model about your specific camera, if you desire.

```yaml
cameras:
@@ -30,8 +30,7 @@ In the default mode, Frigate's LPR needs to first detect a `car` or `motorcycle`

## Minimum System Requirements

-License plate recognition works by running AI models locally on your system. The models are relatively lightweight and can run on your CPU or GPU, depending on your configuration. At least 4GB of RAM is required.
+License plate recognition works by running AI models locally on your system. The YOLOv9 plate detector model and the OCR models ([PaddleOCR](https://github.com/PaddlePaddle/PaddleOCR)) are relatively lightweight and can run on your CPU or GPU, depending on your configuration. At least 4GB of RAM is required.

## Configuration

License plate recognition is disabled by default. Enable it in your config file:
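The hunk ends before the configuration example it refers to. As a minimal sketch of enabling the feature, assuming the top-level `lpr` key used elsewhere in the Frigate docs:

```yaml
lpr:
  enabled: True
```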
@@ -251,3 +251,7 @@ Note that disabling a camera through the config file (`enabled: False`) removes
6. **I have unmuted some cameras on my dashboard, but I do not hear sound. Why?**

   If your camera is streaming (as indicated by a red dot in the upper right, or if it has been set to continuous streaming mode), your browser may be blocking audio until you interact with the page. This is an intentional browser limitation. See [this article](https://developer.mozilla.org/en-US/docs/Web/Media/Autoplay_guide#autoplay_availability). Many browsers have a whitelist feature to change this behavior.

+7. **My camera streams have lots of visual artifacts / distortion.**
+
+   Some cameras don't include the hardware to support multiple connections to the high resolution stream, and this can cause unexpected behavior. In this case it is recommended to [restream](./restream.md) the high resolution stream so that it can be used for live view and recordings.
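The restream recommendation in item 7 above points at the restream docs without an example. A minimal sketch of that pattern, with placeholder camera names and URLs that are not from the source: go2rtc holds the single connection to the camera, and Frigate consumes the local restream.

```yaml
go2rtc:
  streams:
    back_yard_hd:
      # single connection to the camera's high resolution stream
      - rtsp://username:password@camera_ip/stream1

cameras:
  back_yard:
    ffmpeg:
      inputs:
        # live view and recordings read from the local restream, not the camera;
        # a lower-resolution sub stream would normally carry the detect role
        - path: rtsp://127.0.0.1:8554/back_yard_hd
          roles:
            - record
```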
@@ -29,6 +29,7 @@ Frigate supports multiple different detectors that work on different types of ha
- [ONNX](#onnx): TensorRT will automatically be detected and used as a detector in the `-tensorrt` Frigate image when a supported ONNX model is configured.

**Nvidia Jetson**

- [TensorRT](#nvidia-tensorrt-detector): TensorRT can run on Jetson devices, using one of many default models.
- [ONNX](#onnx): TensorRT will automatically be detected and used as a detector in the `-tensorrt-jp6` Frigate image when a supported ONNX model is configured.
@@ -325,6 +326,12 @@ The YOLO detector has been designed to support YOLOv3, YOLOv4, YOLOv7, and YOLOv

:::

+:::warning
+
+If you are using a Frigate+ YOLOv9 model, you should not define any of the below `model` parameters in your config except for `path`. See [the Frigate+ model docs](/plus/first_model#step-3-set-your-model-id-in-the-config) for more information on setting up your model.
+
+:::
+
After placing the downloaded onnx model in your config folder, you can use the following configuration:

```yaml
@@ -533,6 +540,12 @@ There is no default model provided, the following formats are supported:

[YOLO-NAS](https://github.com/Deci-AI/super-gradients/blob/master/YOLONAS.md) models are supported, but not included by default. See [the models section](#downloading-yolo-nas-model) for more information on downloading the YOLO-NAS model for use in Frigate.

+:::warning
+
+If you are using a Frigate+ YOLO-NAS model, you should not define any of the below `model` parameters in your config except for `path`. See [the Frigate+ model docs](/plus/first_model#step-3-set-your-model-id-in-the-config) for more information on setting up your model.
+
+:::
+
After placing the downloaded onnx model in your config folder, you can use the following configuration:

```yaml
@@ -560,6 +573,12 @@ The YOLO detector has been designed to support YOLOv3, YOLOv4, YOLOv7, and YOLOv

:::

+:::warning
+
+If you are using a Frigate+ YOLOv9 model, you should not define any of the below `model` parameters in your config except for `path`. See [the Frigate+ model docs](/plus/first_model#step-3-set-your-model-id-in-the-config) for more information on setting up your model.
+
+:::
+
After placing the downloaded onnx model in your config folder, you can use the following configuration:

```yaml
@@ -959,26 +978,29 @@ Here are some tips for getting different model types

### Downloading D-FINE Model

-To export as ONNX:
-
-1. Clone: https://github.com/Peterande/D-FINE and install all dependencies.
-2. Select and download a checkpoint from the [readme](https://github.com/Peterande/D-FINE).
-3. Modify line 58 of `tools/deployment/export_onnx.py` and change batch size to 1: `data = torch.rand(1, 3, 640, 640)`
-4. Run the export, making sure you select the right config, for your checkpoint.
-
-Example:
+D-FINE can be exported as ONNX by running the command below. You can copy and paste the whole thing to your terminal and execute, altering `MODEL_SIZE=s` in the first line to `s`, `m`, or `l` size.

```sh
-python3 tools/deployment/export_onnx.py -c configs/dfine/objects365/dfine_hgnetv2_m_obj2coco.yml -r output/dfine_m_obj2coco.pth
+docker build . --build-arg MODEL_SIZE=s --output . -f- <<'EOF'
+FROM python:3.11 AS build
+RUN apt-get update && apt-get install --no-install-recommends -y libgl1 && rm -rf /var/lib/apt/lists/*
+COPY --from=ghcr.io/astral-sh/uv:0.8.0 /uv /bin/
+WORKDIR /dfine
+RUN git clone https://github.com/Peterande/D-FINE.git .
+RUN uv pip install --system -r requirements.txt
+RUN uv pip install --system onnx onnxruntime onnxsim
+# Create output directory and download checkpoint
+RUN mkdir -p output
+ARG MODEL_SIZE
+RUN wget https://github.com/Peterande/storage/releases/download/dfinev1.0/dfine_${MODEL_SIZE}_obj2coco.pth -O output/dfine_${MODEL_SIZE}_obj2coco.pth
+# Modify line 58 of export_onnx.py to change batch size to 1
+RUN sed -i '58s/data = torch.rand(.*)/data = torch.rand(1, 3, 640, 640)/' tools/deployment/export_onnx.py
+RUN python3 tools/deployment/export_onnx.py -c configs/dfine/objects365/dfine_hgnetv2_${MODEL_SIZE}_obj2coco.yml -r output/dfine_${MODEL_SIZE}_obj2coco.pth
+FROM scratch
+ARG MODEL_SIZE
+COPY --from=build /dfine/output/dfine_${MODEL_SIZE}_obj2coco.onnx /dfine-${MODEL_SIZE}.onnx
+EOF
```

-:::tip
-
-Model export has only been tested on Linux (or WSL2). Not all dependencies are in `requirements.txt`. Some live in the deployment folder, and some are still missing entirely and must be installed manually.
-
-Make sure you change the batch size to 1 before exporting.
-
-:::
-
### Download RF-DETR Model
@@ -990,9 +1012,9 @@ FROM python:3.11 AS build
RUN apt-get update && apt-get install --no-install-recommends -y libgl1 && rm -rf /var/lib/apt/lists/*
COPY --from=ghcr.io/astral-sh/uv:0.8.0 /uv /bin/
WORKDIR /rfdetr
-RUN uv pip install --system rfdetr onnx onnxruntime onnxsim onnx-graphsurgeon
+RUN uv pip install --system rfdetr[onnxexport]
ARG MODEL_SIZE
-RUN python3 -c "from rfdetr import RFDETR${MODEL_SIZE}; x = RFDETR${MODEL_SIZE}(resolution=320); x.export()"
+RUN python3 -c "from rfdetr import RFDETR${MODEL_SIZE}; x = RFDETR${MODEL_SIZE}(resolution=320); x.export(simplify=True)"
FROM scratch
ARG MODEL_SIZE
COPY --from=build /rfdetr/output/inference_model.onnx /rfdetr-${MODEL_SIZE}.onnx
@@ -1030,23 +1052,25 @@ python3 yolo_to_onnx.py -m yolov7-320

#### YOLOv9

-YOLOv9 model can be exported as ONNX using the command below. You can copy and paste the whole thing to your terminal and execute, altering `MODEL_SIZE=t` in the first line to the [model size](https://github.com/WongKinYiu/yolov9#performance) you would like to convert (available sizes are `t`, `s`, `m`, `c`, and `e`).
+YOLOv9 model can be exported as ONNX using the command below. You can copy and paste the whole thing to your terminal and execute, altering `MODEL_SIZE=t` and `IMG_SIZE=320` in the first line to the [model size](https://github.com/WongKinYiu/yolov9#performance) you would like to convert (available model sizes are `t`, `s`, `m`, `c`, and `e`, common image sizes are `320` and `640`).

```sh
-docker build . --build-arg MODEL_SIZE=t --output . -f- <<'EOF'
+docker build . --build-arg MODEL_SIZE=t --build-arg IMG_SIZE=320 --output . -f- <<'EOF'
FROM python:3.11 AS build
RUN apt-get update && apt-get install --no-install-recommends -y libgl1 && rm -rf /var/lib/apt/lists/*
COPY --from=ghcr.io/astral-sh/uv:0.8.0 /uv /bin/
WORKDIR /yolov9
ADD https://github.com/WongKinYiu/yolov9.git .
RUN uv pip install --system -r requirements.txt
-RUN uv pip install --system onnx onnxruntime onnx-simplifier>=0.4.1
+RUN uv pip install --system onnx==1.18.0 onnxruntime onnx-simplifier>=0.4.1
ARG MODEL_SIZE
+ARG IMG_SIZE
ADD https://github.com/WongKinYiu/yolov9/releases/download/v0.1/yolov9-${MODEL_SIZE}-converted.pt yolov9-${MODEL_SIZE}.pt
RUN sed -i "s/ckpt = torch.load(attempt_download(w), map_location='cpu')/ckpt = torch.load(attempt_download(w), map_location='cpu', weights_only=False)/g" models/experimental.py
-RUN python3 export.py --weights ./yolov9-${MODEL_SIZE}.pt --imgsz 320 --simplify --include onnx
+RUN python3 export.py --weights ./yolov9-${MODEL_SIZE}.pt --imgsz ${IMG_SIZE} --simplify --include onnx
FROM scratch
ARG MODEL_SIZE
-COPY --from=build /yolov9/yolov9-${MODEL_SIZE}.onnx /
+ARG IMG_SIZE
+COPY --from=build /yolov9/yolov9-${MODEL_SIZE}.onnx /yolov9-${MODEL_SIZE}-${IMG_SIZE}.onnx
EOF
```
|
||||
@@ -99,6 +99,7 @@ In real-world deployments, even with multiple cameras running concurrently, Frig
|
||||
| Name | Hailo‑8 Inference Time | Hailo‑8L Inference Time |
|
||||
| ---------------- | ---------------------- | ----------------------- |
|
||||
| ssd mobilenet v1 | ~ 6 ms | ~ 10 ms |
|
||||
| yolov9-tiny | | 320: 18ms |
|
||||
| yolov6n | ~ 7 ms | ~ 11 ms |
|
||||
|
||||
### Google Coral TPU
|
||||
@@ -131,17 +132,19 @@ More information is available [in the detector docs](/configuration/object_detec

Inference speeds vary greatly depending on the CPU or GPU used, some known examples of GPU inference times are below:

-| Name | MobileNetV2 Inference Time | YOLO-NAS Inference Time | RF-DETR Inference Time | Notes |
-| -------------- | -------------------------- | ------------------------- | ---------------------- | ---------------------------------- |
-| Intel HD 530 | 15 - 35 ms | | | Can only run one detector instance |
-| Intel HD 620 | 15 - 25 ms | 320: ~ 35 ms | | |
-| Intel HD 630 | ~ 15 ms | 320: ~ 30 ms | | |
-| Intel UHD 730 | ~ 10 ms | 320: ~ 19 ms 640: ~ 54 ms | | |
-| Intel UHD 770 | ~ 15 ms | 320: ~ 20 ms 640: ~ 46 ms | | |
-| Intel N100 | ~ 15 ms | 320: ~ 25 ms | | Can only run one detector instance |
-| Intel Iris XE | ~ 10 ms | 320: ~ 18 ms 640: ~ 50 ms | | |
-| Intel Arc A380 | ~ 6 ms | 320: ~ 10 ms 640: ~ 22 ms | 336: 20 ms 448: 27 ms | |
-| Intel Arc A750 | ~ 4 ms | 320: ~ 8 ms | | |
+| Name | MobileNetV2 Inference Time | YOLOv9 | YOLO-NAS Inference Time | RF-DETR Inference Time | Notes |
+| -------------- | -------------------------- | ------------------------------------------------- | ------------------------- | ---------------------- | ---------------------------------- |
+| Intel HD 530 | 15 - 35 ms | | | | Can only run one detector instance |
+| Intel HD 620 | 15 - 25 ms | | 320: ~ 35 ms | | |
+| Intel HD 630 | ~ 15 ms | | 320: ~ 30 ms | | |
+| Intel UHD 730 | ~ 10 ms | | 320: ~ 19 ms 640: ~ 54 ms | | |
+| Intel UHD 770 | ~ 15 ms | t-320: ~ 16 ms s-320: ~ 20 ms s-640: ~ 40 ms | 320: ~ 20 ms 640: ~ 46 ms | | |
+| Intel N100 | ~ 15 ms | s-320: 30 ms | 320: ~ 25 ms | | Can only run one detector instance |
+| Intel N150 | ~ 15 ms | t-320: 16 ms s-320: 24 ms | | | |
+| Intel Iris XE | ~ 10 ms | s-320: 12 ms s-640: 30 ms | 320: ~ 18 ms 640: ~ 50 ms | | |
+| Intel Arc A310 | ~ 5 ms | t-320: 7 ms t-640: 11 ms s-320: 8 ms s-640: 15 ms | 320: ~ 8 ms 640: ~ 14 ms | | |
+| Intel Arc A380 | ~ 6 ms | | 320: ~ 10 ms 640: ~ 22 ms | 336: 20 ms 448: 27 ms | |
+| Intel Arc A750 | ~ 4 ms | | 320: ~ 8 ms | | |

### TensorRT - Nvidia GPU
@@ -166,12 +169,13 @@ There are improved capabilities in newer GPU architectures that TensorRT can ben
Inference speeds will vary greatly depending on the GPU and the model used.
`tiny` variants are faster than the equivalent non-tiny model, some known examples are below:

-| Name | YOLOv9 Inference Time | YOLO-NAS Inference Time | RF-DETR Inference Time |
-| --------------- | --------------------- | ------------------------- | ---------------------- |
-| RTX 3050 | t-320: 15 ms | 320: ~ 10 ms 640: ~ 16 ms | Nano-320: ~ 12 ms |
-| RTX 3070 | t-320: 11 ms | 320: ~ 8 ms 640: ~ 14 ms | Nano-320: ~ 9 ms |
-| RTX A4000 | | 320: ~ 15 ms | |
-| Tesla P40 | | 320: ~ 105 ms | |
+| Name | YOLOv9 Inference Time | YOLO-NAS Inference Time | RF-DETR Inference Time |
+| --------------- | ------------------------- | ------------------------- | ---------------------- |
+| GTX 1070 | s-320: 16 ms | 320: 14 ms | |
+| RTX 3050 | t-320: 15 ms s-320: 17 ms | 320: ~ 10 ms 640: ~ 16 ms | Nano-320: ~ 12 ms |
+| RTX 3070 | t-320: 11 ms s-320: 13 ms | 320: ~ 8 ms 640: ~ 14 ms | Nano-320: ~ 9 ms |
+| RTX A4000 | | 320: ~ 15 ms | |
+| Tesla P40 | | 320: ~ 105 ms | |

### ROCm - AMD GPU
@@ -179,7 +183,7 @@ With the [rocm](../configuration/object_detectors.md#amdrocm-gpu-detector) detec

| Name      | YOLOv9 Inference Time | YOLO-NAS Inference Time   |
| --------- | --------------------- | ------------------------- |
-| AMD 780M  | ~ 14 ms               | 320: ~ 25 ms 640: ~ 50 ms |
+| AMD 780M  | 320: ~ 14 ms          | 320: ~ 25 ms 640: ~ 50 ms |
| AMD 8700G |                       | 320: ~ 20 ms 640: ~ 40 ms |

## Community Supported Detectors
@@ -5,7 +5,7 @@ title: Updating

# Updating Frigate

-The current stable version of Frigate is **0.16.0**. The release notes and any breaking changes for this version can be found on the [Frigate GitHub releases page](https://github.com/blakeblackshear/frigate/releases/tag/v0.16.0).
+The current stable version of Frigate is **0.16.1**. The release notes and any breaking changes for this version can be found on the [Frigate GitHub releases page](https://github.com/blakeblackshear/frigate/releases/tag/v0.16.1).

Keeping Frigate up to date ensures you benefit from the latest features, performance improvements, and bug fixes. The update process varies slightly depending on your installation method (Docker, Home Assistant Addon, etc.). Below are instructions for the most common setups.
@@ -33,21 +33,21 @@ If you’re running Frigate via Docker (recommended method), follow these steps:
2. **Update and Pull the Latest Image**:

   - If using Docker Compose:
-     - Edit your `docker-compose.yml` file to specify the desired version tag (e.g., `0.16.0` instead of `0.15.2`). For example:
+     - Edit your `docker-compose.yml` file to specify the desired version tag (e.g., `0.16.1` instead of `0.15.2`). For example:
       ```yaml
       services:
         frigate:
-          image: ghcr.io/blakeblackshear/frigate:0.16.0
+          image: ghcr.io/blakeblackshear/frigate:0.16.1
       ```
     - Then pull the image:
       ```bash
-      docker pull ghcr.io/blakeblackshear/frigate:0.16.0
+      docker pull ghcr.io/blakeblackshear/frigate:0.16.1
       ```
     - **Note for `stable` Tag Users**: If your `docker-compose.yml` uses the `stable` tag (e.g., `ghcr.io/blakeblackshear/frigate:stable`), you don’t need to update the tag manually. The `stable` tag always points to the latest stable release after pulling.
   - If using `docker run`:
-     - Pull the image with the appropriate tag (e.g., `0.16.0`, `0.16.0-tensorrt`, or `stable`):
+     - Pull the image with the appropriate tag (e.g., `0.16.1`, `0.16.1-tensorrt`, or `stable`):
       ```bash
-      docker pull ghcr.io/blakeblackshear/frigate:0.16.0
+      docker pull ghcr.io/blakeblackshear/frigate:0.16.1
       ```

3. **Start the Container**:
@@ -185,6 +185,26 @@ For clips to be castable to media devices, audio is required and may need to be

<a name="api"></a>

+## Camera API
+
+To disable a camera dynamically
+
+```
+action: camera.turn_off
+data: {}
+target:
+  entity_id: camera.back_deck_cam # your Frigate camera entity ID
+```
+
+To enable a camera that has been disabled dynamically
+
+```
+action: camera.turn_on
+data: {}
+target:
+  entity_id: camera.back_deck_cam # your Frigate camera entity ID
+```
+
## Notification API

Many people do not want to expose Frigate to the web, so the integration creates some public API endpoints that can be used for notifications.
@@ -29,12 +29,12 @@ Message published for each changed tracked object. The first message is publishe
  "camera": "front_door",
  "frame_time": 1607123961.837752,
  "snapshot": {
    "frame_time": 1607123965.975463,
    "box": [415, 489, 528, 700],
    "area": 12728,
    "region": [260, 446, 660, 846],
    "score": 0.77546,
-    "attributes": [],
+    "attributes": []
  },
  "label": "person",
  "sub_label": null,
@@ -61,6 +61,7 @@ Message published for each changed tracked object. The first message is publishe
  }, // attributes with top score that have been identified on the object at any point
  "current_attributes": [], // detailed data about the current attributes in this frame
  "current_estimated_speed": 0.71, // current estimated speed (mph or kph) for objects moving through zones with speed estimation enabled
  "average_estimated_speed": 14.3, // average estimated speed (mph or kph) for objects moving through zones with speed estimation enabled
  "velocity_angle": 180, // direction of travel relative to the frame for objects moving through zones with speed estimation enabled
  "recognized_license_plate": "ABC12345", // a recognized license plate for car objects
  "recognized_license_plate_score": 0.933451
@@ -70,12 +71,12 @@ Message published for each changed tracked object. The first message is publishe
  "camera": "front_door",
  "frame_time": 1607123962.082975,
  "snapshot": {
    "frame_time": 1607123965.975463,
    "box": [415, 489, 528, 700],
    "area": 12728,
    "region": [260, 446, 660, 846],
    "score": 0.77546,
-    "attributes": [],
+    "attributes": []
  },
  "label": "person",
  "sub_label": ["John Smith", 0.79],
@@ -109,6 +110,7 @@ Message published for each changed tracked object. The first message is publishe
    }
  ],
  "current_estimated_speed": 0.77, // current estimated speed (mph or kph) for objects moving through zones with speed estimation enabled
  "average_estimated_speed": 14.31, // average estimated speed (mph or kph) for objects moving through zones with speed estimation enabled
  "velocity_angle": 180, // direction of travel relative to the frame for objects moving through zones with speed estimation enabled
  "recognized_license_plate": "ABC12345", // a recognized license plate for car objects
  "recognized_license_plate_score": 0.933451
@@ -139,7 +141,7 @@ Message published for updates to tracked object metadata, for example:
  "name": "John",
  "score": 0.95,
  "camera": "front_door_cam",
-  "timestamp": 1607123958.748393,
+  "timestamp": 1607123958.748393
}
```
@@ -153,13 +155,20 @@ Message published for updates to tracked object metadata, for example:
  "plate": "123ABC",
  "score": 0.95,
  "camera": "driveway_cam",
-  "timestamp": 1607123958.748393,
+  "timestamp": 1607123958.748393
}
```

### `frigate/reviews`

-Message published for each changed review item. The first message is published when the `detection` or `alert` is initiated. When additional objects are detected or when a zone change occurs, it will publish a, `update` message with the same id. When the review activity has ended a final `end` message is published.
+Message published for each changed review item. The first message is published when the `detection` or `alert` is initiated.
+
+An `update` with the same ID will be published when:
+
+- The severity changes from `detection` to `alert`
+- Additional objects are detected
+- An object is recognized via face, lpr, etc.
+
+When the review activity has ended a final `end` message is published.

```json
{
@@ -42,6 +42,7 @@ Misidentified objects should have a correct label added. For example, if a perso
| `w`         | Add box                      |
| `d`         | Toggle difficult             |
| `s`         | Switch to the next label     |
+| `Shift + s` | Switch to the previous label |
| `tab`       | Select next largest box      |
| `del`       | Delete current box           |
| `esc`       | Deselect/Cancel              |
@@ -34,6 +34,12 @@ Model IDs are not secret values and can be shared freely. Access to your model i

:::

+:::tip
+
+When setting the plus model id, all other fields should be removed as these are configured automatically with the Frigate+ model config
+
+:::
+
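The tip above says only the model path should remain once a Frigate+ model id is set. A minimal sketch of what that looks like, with a placeholder id rather than a real model id:

```yaml
model:
  # width, height, input tensor, labelmap, etc. come from the Frigate+ model metadata
  path: plus://e63b7345cc83a84ed79dedfc99c16616
```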
## Step 4: Adjust your object filters for higher scores

Frigate+ models generally have much higher scores than the default model provided in Frigate. You will likely need to increase your `threshold` and `min_score` values. Here is an example of how these values can be refined, but you should expect these to evolve as your model improves. For more information about how `threshold` and `min_score` are related, see the docs on [object filters](../configuration/object_filters.md#object-scores).
@@ -11,34 +11,51 @@ Information on how to integrate Frigate+ with Frigate can be found in the [integ

## Available model types

-There are two model types offered in Frigate+, `mobiledet` and `yolonas`. Both of these models are object detection models and are trained to detect the same set of labels [listed below](#available-label-types).
+There are three model types offered in Frigate+, `mobiledet`, `yolonas`, and `yolov9`. All of these models are object detection models and are trained to detect the same set of labels [listed below](#available-label-types).

Not all model types are supported by all detectors, so it's important to choose a model type to match your detector as shown in the table under [supported detector types](#supported-detector-types). You can test model types for compatibility and speed on your hardware by using the base models.

-| Model Type  | Description |
-| ----------- | ----------- |
-| `mobiledet` | Based on the same architecture as the default model included with Frigate. Runs on Google Coral devices and CPUs. |
-| `yolonas`   | A newer architecture that offers slightly higher accuracy and improved detection of small objects. Runs on Intel, NVidia GPUs, and AMD GPUs. |
+| Model Type  | Description |
+| ----------- | ----------- |
+| `mobiledet` | Based on the same architecture as the default model included with Frigate. Runs on Google Coral devices and CPUs. |
+| `yolonas`   | A newer architecture that offers slightly higher accuracy and improved detection of small objects. Runs on Intel, NVidia GPUs, and AMD GPUs. |
+| `yolov9`    | A leading SOTA (state of the art) object detection model with similar performance to yolonas, but on a wider range of hardware options. Runs on Intel, NVidia GPUs, AMD GPUs, Hailo, MemryX\*, Apple Silicon\*, and Rockchip NPUs. |
+
+_\* Support coming in 0.17_
+
+### YOLOv9 Details
+
+YOLOv9 models are available in `s` and `t` sizes. When requesting a `yolov9` model, you will be prompted to choose a size. If you are unsure what size to choose, you should perform some tests with the base models to find the performance level that suits you. The `s` size is most similar to the current `yolonas` models in terms of inference times and accuracy, and a good place to start is the `320x320` resolution model for `yolov9s`.
+
+:::info
+
+When switching to YOLOv9, you may need to adjust your thresholds for some objects.
+
+:::
+
+#### Hailo Support
+
+If you have a Hailo device, you will need to specify the hardware you have when submitting a model request because they are not cross compatible. Please test using the available base models before submitting your model request.
+
+#### Rockchip (RKNN) Support
+
+For 0.16, YOLOv9 onnx models will need to be manually converted. First, you will need to configure Frigate to use the model id for your YOLOv9 onnx model so it downloads the model to your `model_cache` directory. From there, you can follow the [documentation](/configuration/object_detectors.md#converting-your-own-onnx-model-to-rknn-format) to convert it. Automatic conversion is coming in 0.17.

## Supported detector types

-Currently, Frigate+ models support CPU (`cpu`), Google Coral (`edgetpu`), OpenVino (`openvino`), and ONNX (`onnx`) detectors.
-
-:::warning
-
-Using Frigate+ models with `onnx` is only available with Frigate 0.15 and later.
-
-:::
+Currently, Frigate+ models support CPU (`cpu`), Google Coral (`edgetpu`), OpenVino (`openvino`), ONNX (`onnx`), Hailo (`hailo8l`), and Rockchip\* (`rknn`) detectors.

| Hardware | Recommended Detector Type | Recommended Model Type |
| -------- | ------------------------- | ---------------------- |
| [CPU](/configuration/object_detectors.md#cpu-detector-not-recommended) | `cpu` | `mobiledet` |
| [Coral (all form factors)](/configuration/object_detectors.md#edge-tpu-detector) | `edgetpu` | `mobiledet` |
-| [Intel](/configuration/object_detectors.md#openvino-detector) | `openvino` | `yolonas` |
-| [NVidia GPU](/configuration/object_detectors#onnx)\* | `onnx` | `yolonas` |
-| [AMD ROCm GPU](/configuration/object_detectors#amdrocm-gpu-detector)\* | `rocm` | `yolonas` |
+| [Intel](/configuration/object_detectors.md#openvino-detector) | `openvino` | `yolov9` |
+| [NVidia GPU](/configuration/object_detectors#onnx) | `onnx` | `yolov9` |
+| [AMD ROCm GPU](/configuration/object_detectors#amdrocm-gpu-detector) | `onnx` | `yolov9` |
+| [Hailo8/Hailo8L/Hailo8R](/configuration/object_detectors#hailo-8) | `hailo8l` | `yolov9` |
+| [Rockchip NPU](/configuration/object_detectors#rockchip-platform)\* | `rknn` | `yolov9` |

-_\* Requires Frigate 0.15_
+_\* Requires manual conversion in 0.16. Automatic conversion coming in 0.17._

## Improving your model
@@ -8,6 +8,7 @@ from pathlib import Path
import psutil
from fastapi import APIRouter, Depends, Request
from fastapi.responses import JSONResponse
+from pathvalidate import sanitize_filepath
from peewee import DoesNotExist
from playhouse.shortcuts import model_to_dict
@@ -15,7 +16,7 @@ from frigate.api.auth import require_role
from frigate.api.defs.request.export_recordings_body import ExportRecordingsBody
from frigate.api.defs.request.export_rename_body import ExportRenameBody
from frigate.api.defs.tags import Tags
-from frigate.const import EXPORT_DIR
+from frigate.const import CLIPS_DIR, EXPORT_DIR
from frigate.models import Export, Previews, Recordings
from frigate.record.export import (
    PlaybackFactorEnum,
@@ -54,7 +55,14 @@ def export_recording(
    playback_factor = body.playback
    playback_source = body.source
    friendly_name = body.name
-    existing_image = body.image_path
+    existing_image = sanitize_filepath(body.image_path) if body.image_path else None
+
+    # Ensure that existing_image is a valid path
+    if existing_image and not existing_image.startswith(CLIPS_DIR):
+        return JSONResponse(
+            content=({"success": False, "message": "Invalid image path"}),
+            status_code=400,
+        )

    if playback_source == "recordings":
        recordings_count = (
@@ -1598,7 +1598,7 @@ def label_thumbnail(request: Request, camera_name: str, label: str):
    try:
        event_id = event_query.scalar()

-        return event_thumbnail(request, event_id, 60)
+        return event_thumbnail(request, event_id, Extension.jpg, 60)
    except DoesNotExist:
        frame = np.zeros((175, 175, 3), np.uint8)
        ret, jpg = cv2.imencode(".jpg", frame, [int(cv2.IMWRITE_JPEG_QUALITY), 70])
@@ -41,10 +41,13 @@ class BirdRealTimeProcessor(RealTimeProcessorApi):
        self.detected_birds: dict[str, float] = {}
        self.labelmap: dict[int, str] = {}

+        GITHUB_RAW_ENDPOINT = os.environ.get(
+            "GITHUB_RAW_ENDPOINT", "https://raw.githubusercontent.com"
+        )
        download_path = os.path.join(MODEL_CACHE_DIR, "bird")
        self.model_files = {
-            "bird.tflite": "https://raw.githubusercontent.com/google-coral/test_data/master/mobilenet_v2_1.0_224_inat_bird_quant.tflite",
-            "birdmap.txt": "https://raw.githubusercontent.com/google-coral/test_data/master/inat_bird_labels.txt",
+            "bird.tflite": f"{GITHUB_RAW_ENDPOINT}/google-coral/test_data/master/mobilenet_v2_1.0_224_inat_bird_quant.tflite",
+            "birdmap.txt": f"{GITHUB_RAW_ENDPOINT}/google-coral/test_data/master/inat_bird_labels.txt",
        }

        if not all(
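This hunk, and the similar ones below, make the GitHub download hosts overridable through the `GITHUB_RAW_ENDPOINT` and `GITHUB_ENDPOINT` environment variables. A hedged sketch of how a deployment behind a mirror might set them in Docker Compose; the mirror URLs are placeholders, not from the source:

```yaml
services:
  frigate:
    image: ghcr.io/blakeblackshear/frigate:0.16.2
    environment:
      # hypothetical internal mirrors of github.com and raw.githubusercontent.com
      GITHUB_ENDPOINT: "https://github-mirror.internal.example.com"
      GITHUB_RAW_ENDPOINT: "https://raw-mirror.internal.example.com"
```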
@@ -60,10 +60,12 @@ class FaceRealTimeProcessor(RealTimeProcessorApi):
        self.faces_per_second = EventsPerSecond()
        self.inference_speed = InferenceSpeed(self.metrics.face_rec_speed)

+        GITHUB_ENDPOINT = os.environ.get("GITHUB_ENDPOINT", "https://github.com")
+
        download_path = os.path.join(MODEL_CACHE_DIR, "facedet")
        self.model_files = {
-            "facedet.onnx": "https://github.com/NickM-27/facenet-onnx/releases/download/v1.0/facedet.onnx",
-            "landmarkdet.yaml": "https://github.com/NickM-27/facenet-onnx/releases/download/v1.0/landmarkdet.yaml",
+            "facedet.onnx": f"{GITHUB_ENDPOINT}/NickM-27/facenet-onnx/releases/download/v1.0/facedet.onnx",
+            "landmarkdet.yaml": f"{GITHUB_ENDPOINT}/NickM-27/facenet-onnx/releases/download/v1.0/landmarkdet.yaml",
        }

        if not all(
@@ -161,6 +161,10 @@ class ModelConfig(BaseModel):
        if model_info.get("inputDataType"):
            self.input_dtype = model_info["inputDataType"]

+        # RKNN always uses NHWC
+        if detector == "rknn":
+            self.input_tensor = InputTensorEnum.nhwc
+
        # generate list of attribute labels
        self.attributes_map = {
            **model_info.get("attributes", DEFAULT_ATTRIBUTE_LABEL_MAP),
@@ -139,8 +139,9 @@ class Rknn(DetectionApi):
        if not os.path.isdir(model_cache_dir):
            os.mkdir(model_cache_dir)

+        GITHUB_ENDPOINT = os.environ.get("GITHUB_ENDPOINT", "https://github.com")
        urllib.request.urlretrieve(
-            f"https://github.com/MarcA711/rknn-models/releases/download/v2.3.2-2/{filename}",
+            f"{GITHUB_ENDPOINT}/MarcA711/rknn-models/releases/download/v2.3.2-2/{filename}",
            model_cache_dir + filename,
        )
@@ -24,11 +24,12 @@ FACENET_INPUT_SIZE = 160

class FaceNetEmbedding(BaseEmbedding):
    def __init__(self):
+        GITHUB_ENDPOINT = os.environ.get("GITHUB_ENDPOINT", "https://github.com")
        super().__init__(
            model_name="facedet",
            model_file="facenet.tflite",
            download_urls={
-                "facenet.tflite": "https://github.com/NickM-27/facenet-onnx/releases/download/v1.0/facenet.tflite",
+                "facenet.tflite": f"{GITHUB_ENDPOINT}/NickM-27/facenet-onnx/releases/download/v1.0/facenet.tflite",
            },
        )
        self.download_path = os.path.join(MODEL_CACHE_DIR, self.model_name)
@@ -110,11 +111,12 @@ class FaceNetEmbedding(BaseEmbedding):

class ArcfaceEmbedding(BaseEmbedding):
    def __init__(self):
+        GITHUB_ENDPOINT = os.environ.get("GITHUB_ENDPOINT", "https://github.com")
        super().__init__(
            model_name="facedet",
            model_file="arcface.onnx",
            download_urls={
-                "arcface.onnx": "https://github.com/NickM-27/facenet-onnx/releases/download/v1.0/arcface.onnx",
+                "arcface.onnx": f"{GITHUB_ENDPOINT}/NickM-27/facenet-onnx/releases/download/v1.0/arcface.onnx",
            },
        )
        self.download_path = os.path.join(MODEL_CACHE_DIR, self.model_name)
@@ -34,11 +34,12 @@ class PaddleOCRDetection(BaseEmbedding):
        model_file = (
            "detection-large.onnx" if model_size == "large" else "detection-small.onnx"
        )
+        GITHUB_ENDPOINT = os.environ.get("GITHUB_ENDPOINT", "https://github.com")
        super().__init__(
            model_name="paddleocr-onnx",
            model_file=model_file,
            download_urls={
-                model_file: f"https://github.com/hawkeye217/paddleocr-onnx/raw/refs/heads/master/models/{model_file}"
+                model_file: f"{GITHUB_ENDPOINT}/hawkeye217/paddleocr-onnx/raw/refs/heads/master/models/{model_file}"
            },
        )
        self.requestor = requestor
@@ -94,11 +95,12 @@ class PaddleOCRClassification(BaseEmbedding):
        requestor: InterProcessRequestor,
        device: str = "AUTO",
    ):
+        GITHUB_ENDPOINT = os.environ.get("GITHUB_ENDPOINT", "https://github.com")
        super().__init__(
            model_name="paddleocr-onnx",
            model_file="classification.onnx",
            download_urls={
-                "classification.onnx": "https://github.com/hawkeye217/paddleocr-onnx/raw/refs/heads/master/models/classification.onnx"
+                "classification.onnx": f"{GITHUB_ENDPOINT}/hawkeye217/paddleocr-onnx/raw/refs/heads/master/models/classification.onnx"
            },
        )
        self.requestor = requestor
@@ -154,11 +156,12 @@ class PaddleOCRRecognition(BaseEmbedding):
        requestor: InterProcessRequestor,
        device: str = "AUTO",
    ):
+        GITHUB_ENDPOINT = os.environ.get("GITHUB_ENDPOINT", "https://github.com")
        super().__init__(
            model_name="paddleocr-onnx",
            model_file="recognition.onnx",
            download_urls={
-                "recognition.onnx": "https://github.com/hawkeye217/paddleocr-onnx/raw/refs/heads/master/models/recognition.onnx"
+                "recognition.onnx": f"{GITHUB_ENDPOINT}/hawkeye217/paddleocr-onnx/raw/refs/heads/master/models/recognition.onnx"
            },
        )
        self.requestor = requestor
@@ -214,11 +217,12 @@ class LicensePlateDetector(BaseEmbedding):
        requestor: InterProcessRequestor,
        device: str = "AUTO",
    ):
+        GITHUB_ENDPOINT = os.environ.get("GITHUB_ENDPOINT", "https://github.com")
        super().__init__(
            model_name="yolov9_license_plate",
            model_file="yolov9-256-license-plates.onnx",
            download_urls={
-                "yolov9-256-license-plates.onnx": "https://github.com/hawkeye217/yolov9-license-plates/raw/refs/heads/master/models/yolov9-256-license-plates.onnx"
+                "yolov9-256-license-plates.onnx": f"{GITHUB_ENDPOINT}/hawkeye217/yolov9-license-plates/raw/refs/heads/master/models/yolov9-256-license-plates.onnx"
            },
        )
@@ -301,7 +301,7 @@ def get_intel_gpu_stats(intel_gpu_device: Optional[str]) -> Optional[dict[str, s
        "-o",
        "-",
        "-s",
-        "1",
+        "1000",  # Intel changed this from seconds to milliseconds in 2024+ versions
    ]

    if intel_gpu_device:
web/public/robots.txt (new file, 2 lines)

@@ -0,0 +1,2 @@
+User-agent: *
+Disallow: /
@@ -139,7 +139,7 @@ export default function HlsVideoPlayer({
      if (hlsRef.current) {
        hlsRef.current.destroy();
      }
    }
  };
}, [videoRef, hlsRef, useHlsCompat, currentSource]);

// state handling
@@ -33,29 +33,43 @@ export default function useCameraLiveMode(

  const streamsFetcher = useCallback(async (key: string) => {
    const streamNames = key.split(",");
-    const metadata: { [key: string]: LiveStreamMetadata } = {};
-
-    await Promise.all(
-      streamNames.map(async (streamName) => {
-        try {
-          const response = await fetch(`/api/go2rtc/streams/${streamName}`);
-          if (response.ok) {
-            const data = await response.json();
-            metadata[streamName] = data;
-          }
-        } catch (error) {
-          // eslint-disable-next-line no-console
-          console.error(`Failed to fetch metadata for ${streamName}:`, error);
-        }
-      }),
-    );
+    const metadataPromises = streamNames.map(async (streamName) => {
+      try {
+        const response = await fetch(`/api/go2rtc/streams/${streamName}`, {
+          priority: "low",
+        });
+
+        if (response.ok) {
+          const data = await response.json();
+          return { streamName, data };
+        }
+        return { streamName, data: null };
+      } catch (error) {
+        // eslint-disable-next-line no-console
+        console.error(`Failed to fetch metadata for ${streamName}:`, error);
+        return { streamName, data: null };
+      }
+    });
+
+    const results = await Promise.allSettled(metadataPromises);
+
+    const metadata: { [key: string]: LiveStreamMetadata } = {};
+    results.forEach((result) => {
+      if (result.status === "fulfilled" && result.value.data) {
+        metadata[result.value.streamName] = result.value.data;
+      }
+    });

    return metadata;
  }, []);

  const { data: allStreamMetadata = {} } = useSWR<{
    [key: string]: LiveStreamMetadata;
-  }>(restreamedStreamsKey, streamsFetcher, { revalidateOnFocus: false });
+  }>(restreamedStreamsKey, streamsFetcher, {
+    revalidateOnFocus: false,
+    dedupingInterval: 10000,
+  });

  const [preferredLiveModes, setPreferredLiveModes] = useState<{
    [key: string]: LivePlayerMode;
@@ -390,7 +390,6 @@ export default function FrigatePlusSettingsView({
              className="cursor-pointer"
              value={id}
              disabled={
                model.type != config.model.model_type ||
                !model.supportedDetectors.includes(
                  Object.values(config.detectors)[0]
                    .type,