Compare commits

...

7 Commits

Author SHA1 Message Date
Josh Hawkins
c687aa5119 Birdseye fixes (#22166)
* permit birdseye access if user has viewer role or a custom viewer role that has access to all cameras

* bump version
2026-02-27 20:02:46 -07:00
Josh Hawkins
e064024a31 Fix go2rtc stream alias auth (#22097)
* Fix go2rtc stream alias authorization and live audio gating for main/sub stream names

* revert

* add tests
2026-02-27 20:02:19 -07:00
Josh Hawkins
96c70eee4c fix link to coral yolov9 plus models (#22164) 2026-02-27 16:07:07 -07:00
Blake Blackshear
0310a9654d Merge pull request #19787 from blakeblackshear/dev
0.17 Release
2026-02-26 21:03:59 -06:00
Blake Blackshear
7df3622243 updates for yolov9 coral support (#22136) 2026-02-26 20:36:26 -06:00
Bart Nagel
dd8282ff3c Docs: fix YOLOv9 onnx export (#22107)
* Docs: fix missing dependency in YOLOv9 build script

I had this command fail because it didn't have cmake available.

This change fixes that problem.

* Docs: avoid failure in YOLOv9 build script

Pinning to 0.4.36 avoids this error:

```
10.58  Downloading onnx
12.87    Building onnxsim==0.5.0
1029.4   × Failed to download and build `onnxsim==0.5.0`
1029.4   ╰─▶ Package metadata version `0.4.36` does not match given version `0.5.0`
1029.4   help: `onnxsim` (v0.5.0) was included because `onnx-simplifier` (v0.5.0)
1029.4         depends on `onnxsim`
```

* Update Dockerfile instructions for object detectors

---------

Co-authored-by: Nicolas Mowen <nickmowen213@gmail.com>
2026-02-24 07:38:04 -07:00
Meow
984d654c40 Update line breaks in video_pipeline.md diagram (#21919)
Mermaid compatible newlines (<br>)
2026-02-23 06:45:49 -07:00
13 changed files with 393 additions and 58 deletions

View File

@@ -1,7 +1,7 @@
 default_target: local
 COMMIT_HASH := $(shell git log -1 --pretty=format:"%h"|tail -1)
-VERSION = 0.17.0
+VERSION = 0.17.1
 IMAGE_REPO ?= ghcr.io/blakeblackshear/frigate
 GITHUB_REF_NAME ?= $(shell git rev-parse --abbrev-ref HEAD)
 BOARDS= #Initialized empty

View File

@@ -157,7 +157,13 @@ A TensorFlow Lite model is provided in the container at `/edgetpu_model.tflite`
 #### YOLOv9

-YOLOv9 models that are compiled for TensorFlow Lite and properly quantized are supported, but not included by default. [Download the model](https://github.com/dbro/frigate-detector-edgetpu-yolo9/releases/download/v1.0/yolov9-s-relu6-best_320_int8_edgetpu.tflite), bind mount the file into the container, and provide the path with `model.path`. Note that the linked model requires a 17-label [labelmap file](https://raw.githubusercontent.com/dbro/frigate-detector-edgetpu-yolo9/refs/heads/main/labels-coco17.txt) that includes only 17 COCO classes.
+YOLOv9 models that are compiled for TensorFlow Lite and properly quantized are supported, but not included by default. [Instructions](#yolov9-for-google-coral-support) for downloading a model with support for the Google Coral.
+
+:::tip
+**Frigate+ Users:** Follow the [instructions](/integrations/plus#use-models) to set a model ID in your config file.
+:::

 <details>
 <summary>YOLOv9 Setup & Config</summary>

@@ -1554,19 +1560,23 @@ cd tensorrt_demos/yolo
 python3 yolo_to_onnx.py -m yolov7-320
 ```

-#### YOLOv9
+#### YOLOv9 for Google Coral Support
+
+[Download the model](https://github.com/dbro/frigate-detector-edgetpu-yolo9/releases/download/v1.0/yolov9-s-relu6-best_320_int8_edgetpu.tflite), bind mount the file into the container, and provide the path with `model.path`. Note that the linked model requires a 17-label [labelmap file](https://raw.githubusercontent.com/dbro/frigate-detector-edgetpu-yolo9/refs/heads/main/labels-coco17.txt) that includes only 17 COCO classes.
+
+#### YOLOv9 for other detectors

 YOLOv9 model can be exported as ONNX using the command below. You can copy and paste the whole thing to your terminal and execute, altering `MODEL_SIZE=t` and `IMG_SIZE=320` in the first line to the [model size](https://github.com/WongKinYiu/yolov9#performance) you would like to convert (available model sizes are `t`, `s`, `m`, `c`, and `e`, common image sizes are `320` and `640`).

 ```sh
 docker build . --build-arg MODEL_SIZE=t --build-arg IMG_SIZE=320 --output . -f- <<'EOF'
 FROM python:3.11 AS build
-RUN apt-get update && apt-get install --no-install-recommends -y libgl1 && rm -rf /var/lib/apt/lists/*
+RUN apt-get update && apt-get install --no-install-recommends -y cmake libgl1 && rm -rf /var/lib/apt/lists/*
-COPY --from=ghcr.io/astral-sh/uv:0.8.0 /uv /bin/
+COPY --from=ghcr.io/astral-sh/uv:0.10.4 /uv /bin/
 WORKDIR /yolov9
 ADD https://github.com/WongKinYiu/yolov9.git .
 RUN uv pip install --system -r requirements.txt
-RUN uv pip install --system onnx==1.18.0 onnxruntime onnx-simplifier>=0.4.1 onnxscript
+RUN uv pip install --system onnx==1.18.0 onnxruntime onnx-simplifier==0.4.* onnxscript
 ARG MODEL_SIZE
 ARG IMG_SIZE
 ADD https://github.com/WongKinYiu/yolov9/releases/download/v0.1/yolov9-${MODEL_SIZE}-converted.pt yolov9-${MODEL_SIZE}.pt

View File

@@ -37,18 +37,18 @@ The following diagram adds a lot more detail than the simple view explained befo
 %%{init: {"themeVariables": {"edgeLabelBackground": "transparent"}}}%%
 flowchart TD
-    RecStore[(Recording\nstore)]
+    RecStore[(Recording<br>store)]
-    SnapStore[(Snapshot\nstore)]
+    SnapStore[(Snapshot<br>store)]
     subgraph Acquisition
         Cam["Camera"] -->|FFmpeg supported| Stream
-        Cam -->|"Other streaming\nprotocols"| go2rtc
+        Cam -->|"Other streaming<br>protocols"| go2rtc
         go2rtc("go2rtc") --> Stream
-        Stream[Capture main and\nsub streams] --> |detect stream|Decode(Decode and\ndownscale)
+        Stream[Capture main and<br>sub streams] --> |detect stream|Decode(Decode and<br>downscale)
     end
     subgraph Motion
-        Decode --> MotionM(Apply\nmotion masks)
+        Decode --> MotionM(Apply<br>motion masks)
-        MotionM --> MotionD(Motion\ndetection)
+        MotionM --> MotionD(Motion<br>detection)
     end
     subgraph Detection
         MotionD --> |motion regions| ObjectD(Object detection)
@@ -60,8 +60,8 @@ flowchart TD
     MotionD --> |motion event|Birdseye
     ObjectZ --> |object event|Birdseye
-    MotionD --> |"video segments\n(retain motion)"|RecStore
+    MotionD --> |"video segments<br>(retain motion)"|RecStore
     ObjectZ --> |detection clip|RecStore
-    Stream -->|"video segments\n(retain all)"| RecStore
+    Stream -->|"video segments<br>(retain all)"| RecStore
     ObjectZ --> |detection snapshot|SnapStore
 ```

View File

@@ -54,6 +54,8 @@ Once you have [requested your first model](../plus/first_model.md) and gotten yo
 You can either choose the new model from the Frigate+ pane in the Settings page of the Frigate UI, or manually set the model at the root level in your config:

 ```yaml
+detectors: ...
+
 model:
   path: plus://<your_model_id>
 ```

View File

@@ -24,6 +24,8 @@ You will receive an email notification when your Frigate+ model is ready.
 Models available in Frigate+ can be used with a special model path. No other information needs to be configured because it fetches the remaining config from Frigate+ automatically.

 ```yaml
+detectors: ...
+
 model:
   path: plus://<your_model_id>
 ```

View File

@@ -15,15 +15,15 @@ There are three model types offered in Frigate+, `mobiledet`, `yolonas`, and `yo
 Not all model types are supported by all detectors, so it's important to choose a model type to match your detector as shown in the table under [supported detector types](#supported-detector-types). You can test model types for compatibility and speed on your hardware by using the base models.

 | Model Type  | Description |
 | ----------- | ----------- |
 | `mobiledet` | Based on the same architecture as the default model included with Frigate. Runs on Google Coral devices and CPUs. |
 | `yolonas`   | A newer architecture that offers slightly higher accuracy and improved detection of small objects. Runs on Intel, NVidia GPUs, and AMD GPUs. |
-| `yolov9`    | A leading SOTA (state of the art) object detection model with similar performance to yolonas, but on a wider range of hardware options. Runs on Intel, NVidia GPUs, AMD GPUs, Hailo, MemryX, Apple Silicon, and Rockchip NPUs. |
+| `yolov9`    | A leading SOTA (state of the art) object detection model with similar performance to yolonas, but on a wider range of hardware options. Runs on most hardware. |

 ### YOLOv9 Details

-YOLOv9 models are available in `s` and `t` sizes. When requesting a `yolov9` model, you will be prompted to choose a size. If you are unsure what size to choose, you should perform some tests with the base models to find the performance level that suits you. The `s` size is most similar to the current `yolonas` models in terms of inference times and accuracy, and a good place to start is the `320x320` resolution model for `yolov9s`.
+YOLOv9 models are available in `s`, `t`, and `edgetpu` variants. When requesting a `yolov9` model, you will be prompted to choose a variant. If you want the model to be compatible with a Google Coral, you will need to choose the `edgetpu` variant. If you are unsure what variant to choose, you should perform some tests with the base models to find the performance level that suits you. The `s` size is most similar to the current `yolonas` models in terms of inference times and accuracy, and a good place to start is the `320x320` resolution model for `yolov9s`.

 :::info

@@ -37,23 +37,21 @@ If you have a Hailo device, you will need to specify the hardware you have when

 #### Rockchip (RKNN) Support

-For 0.16, YOLOv9 onnx models will need to be manually converted. First, you will need to configure Frigate to use the model id for your YOLOv9 onnx model so it downloads the model to your `model_cache` directory. From there, you can follow the [documentation](/configuration/object_detectors.md#converting-your-own-onnx-model-to-rknn-format) to convert it. Automatic conversion is available in 0.17 and later.
+Rockchip models are automatically converted as of 0.17. For 0.16, YOLOv9 onnx models will need to be manually converted. First, you will need to configure Frigate to use the model id for your YOLOv9 onnx model so it downloads the model to your `model_cache` directory. From there, you can follow the [documentation](/configuration/object_detectors.md#converting-your-own-onnx-model-to-rknn-format) to convert it.

 ## Supported detector types

-Currently, Frigate+ models support CPU (`cpu`), Google Coral (`edgetpu`), OpenVino (`openvino`), ONNX (`onnx`), Hailo (`hailo8l`), and Rockchip\* (`rknn`) detectors.
+Currently, Frigate+ models support CPU (`cpu`), Google Coral (`edgetpu`), OpenVino (`openvino`), ONNX (`onnx`), Hailo (`hailo8l`), and Rockchip (`rknn`) detectors.

 | Hardware | Recommended Detector Type | Recommended Model Type |
 | -------- | ------------------------- | ---------------------- |
 | [CPU](/configuration/object_detectors.md#cpu-detector-not-recommended) | `cpu` | `mobiledet` |
-| [Coral (all form factors)](/configuration/object_detectors.md#edge-tpu-detector) | `edgetpu` | `mobiledet` |
+| [Coral (all form factors)](/configuration/object_detectors.md#edge-tpu-detector) | `edgetpu` | `yolov9` |
 | [Intel](/configuration/object_detectors.md#openvino-detector) | `openvino` | `yolov9` |
 | [NVidia GPU](/configuration/object_detectors#onnx) | `onnx` | `yolov9` |
 | [AMD ROCm GPU](/configuration/object_detectors#amdrocm-gpu-detector) | `onnx` | `yolov9` |
 | [Hailo8/Hailo8L/Hailo8R](/configuration/object_detectors#hailo-8) | `hailo8l` | `yolov9` |
-| [Rockchip NPU](/configuration/object_detectors#rockchip-platform)\* | `rknn` | `yolov9` |
+| [Rockchip NPU](/configuration/object_detectors#rockchip-platform) | `rknn` | `yolov9` |

-_\* Requires manual conversion in 0.16. Automatic conversion available in 0.17 and later._

 ## Improving your model

@@ -81,7 +79,7 @@ Candidate labels are also available for annotation. These labels don't have enou

 Where possible, these labels are mapped to existing labels during training. For example, any `baby` labels are mapped to `person` until support for new labels is added.

-The candidate labels are: `baby`, `bpost`, `badger`, `possum`, `rodent`, `chicken`, `groundhog`, `boar`, `hedgehog`, `tractor`, `golf cart`, `garbage truck`, `bus`, `sports ball`
+The candidate labels are: `baby`, `bpost`, `badger`, `possum`, `rodent`, `chicken`, `groundhog`, `boar`, `hedgehog`, `tractor`, `golf cart`, `garbage truck`, `bus`, `sports ball`, `la_poste`, `lawnmower`, `heron`, `rickshaw`, `wombat`, `auspost`, `aramex`, `bobcat`, `mustelid`, `transoflex`, `airplane`, `drone`, `mountain_lion`, `crocodile`, `turkey`, `baby_stroller`, `monkey`, `coyote`, `porcupine`, `parcelforce`, `sheep`, `snake`, `helicopter`, `lizard`, `duck`, `hermes`, `cargus`, `fan_courier`, `sameday`

 Candidate labels are not available for automatic suggestions.

View File

@@ -986,7 +986,16 @@ async def require_camera_access(
     current_user = await get_current_user(request)
     if isinstance(current_user, JSONResponse):
-        return current_user
+        detail = "Authentication required"
+        try:
+            error_payload = json.loads(current_user.body)
+            detail = (
+                error_payload.get("message") or error_payload.get("detail") or detail
+            )
+        except Exception:
+            pass
+        raise HTTPException(status_code=current_user.status_code, detail=detail)

     role = current_user["role"]
     all_camera_names = set(request.app.frigate_config.cameras.keys())

@@ -1004,6 +1013,61 @@
     )


+def _get_stream_owner_cameras(request: Request, stream_name: str) -> set[str]:
+    owner_cameras: set[str] = set()
+    for camera_name, camera in request.app.frigate_config.cameras.items():
+        if stream_name == camera_name:
+            owner_cameras.add(camera_name)
+            continue
+        if stream_name in camera.live.streams.values():
+            owner_cameras.add(camera_name)
+    return owner_cameras
+
+
+async def require_go2rtc_stream_access(
+    stream_name: Optional[str] = None,
+    request: Request = None,
+):
+    """Dependency to enforce go2rtc stream access based on owning camera access."""
+    if stream_name is None:
+        return
+
+    current_user = await get_current_user(request)
+    if isinstance(current_user, JSONResponse):
+        detail = "Authentication required"
+        try:
+            error_payload = json.loads(current_user.body)
+            detail = (
+                error_payload.get("message") or error_payload.get("detail") or detail
+            )
+        except Exception:
+            pass
+        raise HTTPException(status_code=current_user.status_code, detail=detail)
+
+    role = current_user["role"]
+    all_camera_names = set(request.app.frigate_config.cameras.keys())
+    roles_dict = request.app.frigate_config.auth.roles
+    allowed_cameras = User.get_allowed_cameras(role, roles_dict, all_camera_names)
+
+    # Admin or full access bypasses
+    if role == "admin" or not roles_dict.get(role):
+        return
+
+    owner_cameras = _get_stream_owner_cameras(request, stream_name)
+    if owner_cameras & set(allowed_cameras):
+        return
+
+    raise HTTPException(
+        status_code=403,
+        detail=f"Access denied to camera '{stream_name}'. Allowed: {allowed_cameras}",
+    )
+
+
 async def get_allowed_cameras_for_filter(request: Request):
     """Dependency to get allowed_cameras for filtering lists."""
     current_user = await get_current_user(request)
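The ownership lookup added in this hunk can be sketched standalone. This is a hedged illustration, not Frigate's actual code: the `cameras` dict shape below (`live_streams` mapping labels to go2rtc stream names) is a hypothetical stand-in for the real config objects.

```python
def stream_owner_cameras(cameras: dict, stream_name: str) -> set:
    """Return the cameras that 'own' a go2rtc stream name.

    A camera owns a stream if the stream name is the camera's own name,
    or one of the aliases under the camera's live-streams map.
    """
    owners = set()
    for camera_name, camera in cameras.items():
        if stream_name == camera_name:
            owners.add(camera_name)
            continue
        if stream_name in camera["live_streams"].values():
            owners.add(camera_name)
    return owners


cameras = {
    "front_door": {"live_streams": {"default": "front_door_main"}},
    "back_door": {"live_streams": {}},
}
print(stream_owner_cameras(cameras, "front_door_main"))  # {'front_door'}
print(stream_owner_cameras(cameras, "back_door"))        # {'back_door'}
```

Access is then granted when this owner set intersects the user's allowed cameras, which is why an alias inherits the restrictions of the camera that defines it.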

View File

@@ -17,7 +17,7 @@ from zeep.transports import AsyncTransport
 from frigate.api.auth import (
     allow_any_authenticated,
-    require_camera_access,
+    require_go2rtc_stream_access,
     require_role,
 )
 from frigate.api.defs.tags import Tags

@@ -71,14 +71,27 @@ def go2rtc_streams():
 @router.get(
-    "/go2rtc/streams/{camera_name}", dependencies=[Depends(require_camera_access)]
+    "/go2rtc/streams/{stream_name}",
+    dependencies=[Depends(require_go2rtc_stream_access)],
 )
-def go2rtc_camera_stream(request: Request, camera_name: str):
+def go2rtc_camera_stream(request: Request, stream_name: str):
     r = requests.get(
-        f"http://127.0.0.1:1984/api/streams?src={camera_name}&video=all&audio=all&microphone"
+        "http://127.0.0.1:1984/api/streams",
+        params={
+            "src": stream_name,
+            "video": "all",
+            "audio": "all",
+            "microphone": "",
+        },
     )

     if not r.ok:
-        camera_config = request.app.frigate_config.cameras.get(camera_name)
+        camera_config = request.app.frigate_config.cameras.get(stream_name)
+
+        if camera_config is None:
+            for camera_name, camera in request.app.frigate_config.cameras.items():
+                if stream_name in camera.live.streams.values():
+                    camera_config = request.app.frigate_config.cameras.get(camera_name)
+                    break

         if camera_config and camera_config.enabled:
             logger.error("Failed to fetch streams from go2rtc")
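One side effect of the switch above from an f-string URL to `params=` is that the value is URL-encoded instead of being pasted raw into the query string. A small stdlib sketch of the equivalent encoding (`front door` is a made-up stream name for illustration):

```python
from urllib.parse import urlencode

# requests builds the query string this same way; a space in the stream
# name is encoded rather than producing a malformed URL
params = {"src": "front door", "video": "all", "audio": "all", "microphone": ""}
query = urlencode(params)
print(query)  # src=front+door&video=all&audio=all&microphone=
```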

View File

@@ -1,6 +1,7 @@
 from unittest.mock import patch

 from fastapi import HTTPException, Request
+from fastapi.testclient import TestClient

 from frigate.api.auth import (
     get_allowed_cameras_for_filter,
@@ -9,6 +10,33 @@ from frigate.api.auth import (
 from frigate.models import Event, Recordings, ReviewSegment
 from frigate.test.http_api.base_http_test import AuthTestClient, BaseTestHttp

+# Minimal multi-camera config used by go2rtc stream access tests.
+# front_door has a stream alias "front_door_main"; back_door uses its own name.
+# The "limited_user" role is restricted to front_door only.
+_MULTI_CAMERA_CONFIG = {
+    "mqtt": {"host": "mqtt"},
+    "auth": {
+        "roles": {
+            "limited_user": ["front_door"],
+        }
+    },
+    "cameras": {
+        "front_door": {
+            "ffmpeg": {
+                "inputs": [{"path": "rtsp://10.0.0.1:554/video", "roles": ["detect"]}]
+            },
+            "detect": {"height": 1080, "width": 1920, "fps": 5},
+            "live": {"streams": {"default": "front_door_main"}},
+        },
+        "back_door": {
+            "ffmpeg": {
+                "inputs": [{"path": "rtsp://10.0.0.2:554/video", "roles": ["detect"]}]
+            },
+            "detect": {"height": 1080, "width": 1920, "fps": 5},
+        },
+    },
+}
+

 class TestCameraAccessEventReview(BaseTestHttp):
     def setUp(self):
+class TestGo2rtcStreamAccess(BaseTestHttp):
+    """Tests for require_go2rtc_stream_access — the auth dependency on
+    GET /go2rtc/streams/{stream_name}.
+
+    go2rtc is not running in unit tests, so an authorized request returns
+    500 (the proxy call fails), while an unauthorized request returns 401/403
+    before the proxy is ever reached.
+    """
+
+    def _make_app(self, config_override: dict | None = None):
+        """Build a test app, optionally replacing self.minimal_config."""
+        if config_override is not None:
+            self.minimal_config = config_override
+        app = super().create_app()
+
+        # Allow tests to control the current user via request headers.
+        async def mock_get_current_user(request: Request):
+            username = request.headers.get("remote-user")
+            role = request.headers.get("remote-role")
+            if not username or not role:
+                from fastapi.responses import JSONResponse
+
+                return JSONResponse(
+                    content={"message": "No authorization headers."},
+                    status_code=401,
+                )
+            return {"username": username, "role": role}
+
+        app.dependency_overrides[get_current_user] = mock_get_current_user
+        return app
+
+    def setUp(self):
+        super().setUp([Event, ReviewSegment, Recordings])
+
+    def tearDown(self):
+        super().tearDown()
+
+    # ------------------------------------------------------------------
+    # Helpers
+    # ------------------------------------------------------------------
+
+    def _get_stream(
+        self, app, stream_name: str, role: str = "admin", user: str = "test"
+    ):
+        """Issue GET /go2rtc/streams/{stream_name} with the given role."""
+        with AuthTestClient(app) as client:
+            return client.get(
+                f"/go2rtc/streams/{stream_name}",
+                headers={"remote-user": user, "remote-role": role},
+            )
+
+    # ------------------------------------------------------------------
+    # Tests
+    # ------------------------------------------------------------------
+
+    def test_admin_can_access_any_stream(self):
+        """Admin role bypasses camera restrictions."""
+        app = self._make_app(_MULTI_CAMERA_CONFIG)
+
+        # front_door stream — go2rtc is not running so expect 500, not 401/403
+        resp = self._get_stream(app, "front_door", role="admin")
+        assert resp.status_code not in (401, 403), (
+            f"Admin should not be blocked; got {resp.status_code}"
+        )
+
+        # back_door stream
+        resp = self._get_stream(app, "back_door", role="admin")
+        assert resp.status_code not in (401, 403)
+
+    def test_missing_auth_headers_returns_401(self):
+        """Requests without auth headers must be rejected with 401."""
+        app = self._make_app(_MULTI_CAMERA_CONFIG)
+
+        # Use plain TestClient (not AuthTestClient) so no headers are injected.
+        with TestClient(app, raise_server_exceptions=False) as client:
+            resp = client.get("/go2rtc/streams/front_door")
+        assert resp.status_code == 401, f"Expected 401, got {resp.status_code}"
+
+    def test_unconfigured_role_can_access_any_stream(self):
+        """When no camera restrictions are configured for a role the user
+        should have access to all streams (no roles_dict entry ⇒ no restriction)."""
+        no_roles_config = {
+            "mqtt": {"host": "mqtt"},
+            "cameras": {
+                "front_door": {
+                    "ffmpeg": {
+                        "inputs": [
+                            {"path": "rtsp://10.0.0.1:554/video", "roles": ["detect"]}
+                        ]
+                    },
+                    "detect": {"height": 1080, "width": 1920, "fps": 5},
+                },
+                "back_door": {
+                    "ffmpeg": {
+                        "inputs": [
+                            {"path": "rtsp://10.0.0.2:554/video", "roles": ["detect"]}
+                        ]
+                    },
+                    "detect": {"height": 1080, "width": 1920, "fps": 5},
+                },
+            },
+        }
+        app = self._make_app(no_roles_config)
+
+        # "myuser" role is not listed in roles_dict — should be allowed everywhere
+        for stream in ("front_door", "back_door"):
+            resp = self._get_stream(app, stream, role="myuser")
+            assert resp.status_code not in (401, 403), (
+                f"Unconfigured role should not be blocked on '{stream}'; "
+                f"got {resp.status_code}"
+            )
+
+    def test_restricted_role_can_access_allowed_camera(self):
+        """limited_user role (restricted to front_door) can access front_door stream."""
+        app = self._make_app(_MULTI_CAMERA_CONFIG)
+        resp = self._get_stream(app, "front_door", role="limited_user")
+        assert resp.status_code not in (401, 403), (
+            f"limited_user should be allowed on front_door; got {resp.status_code}"
+        )
+
+    def test_restricted_role_blocked_from_disallowed_camera(self):
+        """limited_user role (restricted to front_door) cannot access back_door stream."""
+        app = self._make_app(_MULTI_CAMERA_CONFIG)
+        resp = self._get_stream(app, "back_door", role="limited_user")
+        assert resp.status_code == 403, (
+            f"limited_user should be denied on back_door; got {resp.status_code}"
+        )
+
+    def test_stream_alias_allowed_for_owning_camera(self):
+        """Stream alias 'front_door_main' is owned by front_door; limited_user (who
+        is allowed front_door) should be permitted."""
+        app = self._make_app(_MULTI_CAMERA_CONFIG)
+        # front_door_main is the alias defined in live.streams for front_door
+        resp = self._get_stream(app, "front_door_main", role="limited_user")
+        assert resp.status_code not in (401, 403), (
+            f"limited_user should be allowed on alias front_door_main; "
+            f"got {resp.status_code}"
+        )
+
+    def test_stream_alias_blocked_when_owning_camera_disallowed(self):
+        """limited_user cannot access a stream alias that belongs to a camera they
+        are not allowed to see."""
+        # Give back_door a stream alias and restrict limited_user to front_door only
+        config = {
+            "mqtt": {"host": "mqtt"},
+            "auth": {
+                "roles": {
+                    "limited_user": ["front_door"],
+                }
+            },
+            "cameras": {
+                "front_door": {
+                    "ffmpeg": {
+                        "inputs": [
+                            {"path": "rtsp://10.0.0.1:554/video", "roles": ["detect"]}
+                        ]
+                    },
+                    "detect": {"height": 1080, "width": 1920, "fps": 5},
+                },
+                "back_door": {
+                    "ffmpeg": {
+                        "inputs": [
+                            {"path": "rtsp://10.0.0.2:554/video", "roles": ["detect"]}
+                        ]
+                    },
+                    "detect": {"height": 1080, "width": 1920, "fps": 5},
+                    "live": {"streams": {"default": "back_door_main"}},
+                },
+            },
+        }
+        app = self._make_app(config)
+        resp = self._get_stream(app, "back_door_main", role="limited_user")
+        assert resp.status_code == 403, (
+            f"limited_user should be denied on alias back_door_main; "
+            f"got {resp.status_code}"
+        )
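The access rule these tests exercise can be summarized in a few lines. This is a hedged sketch of the semantics the tests assume, not Frigate's actual `User.get_allowed_cameras` implementation:

```python
def allowed_cameras(role: str, roles_dict: dict, all_cameras: set) -> set:
    # admin, or a role with no entry in the roles config, is unrestricted;
    # otherwise the user only sees the cameras listed for their role
    if role == "admin" or not roles_dict.get(role):
        return set(all_cameras)
    return set(roles_dict[role]) & set(all_cameras)


all_cams = {"front_door", "back_door"}
roles = {"limited_user": ["front_door"]}
print(allowed_cameras("limited_user", roles, all_cams))        # {'front_door'}
print(allowed_cameras("viewer", roles, all_cams) == all_cams)  # True
```

This is why `test_unconfigured_role_can_access_any_stream` passes: a role absent from `roles_dict` falls into the unrestricted branch.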

View File

@@ -77,6 +77,7 @@ import { useStreamingSettings } from "@/context/streaming-settings-provider";
 import { Trans, useTranslation } from "react-i18next";
 import { CameraNameLabel } from "../camera/FriendlyNameLabel";
 import { useAllowedCameras } from "@/hooks/use-allowed-cameras";
+import { useHasFullCameraAccess } from "@/hooks/use-has-full-camera-access";
 import { useIsAdmin } from "@/hooks/use-is-admin";
 import { useUserPersistedOverlayState } from "@/hooks/use-overlay-state";

@@ -677,7 +678,7 @@ export function CameraGroupEdit({
   );

   const allowedCameras = useAllowedCameras();
-  const isAdmin = useIsAdmin();
+  const hasFullCameraAccess = useHasFullCameraAccess();

   const [openCamera, setOpenCamera] = useState<string | null>();

@@ -866,8 +867,7 @@ export function CameraGroupEdit({
                 <FormDescription>{t("group.cameras.desc")}</FormDescription>
                 <FormMessage />
                 {[
-                  ...(birdseyeConfig?.enabled &&
-                  (isAdmin || "birdseye" in allowedCameras)
+                  ...(birdseyeConfig?.enabled && hasFullCameraAccess
                     ? ["birdseye"]
                     : []),
                   ...Object.keys(config?.cameras ?? {})

View File

@@ -18,18 +18,25 @@ export default function useCameraLiveMode(
     const streamNames = new Set<string>();

     cameras.forEach((camera) => {
-      const isRestreamed = Object.keys(config.go2rtc.streams || {}).includes(
-        Object.values(camera.live.streams)[0],
-      );
-      if (isRestreamed) {
-        if (activeStreams && activeStreams[camera.name]) {
-          streamNames.add(activeStreams[camera.name]);
-        } else {
-          Object.values(camera.live.streams).forEach((streamName) => {
-            streamNames.add(streamName);
-          });
-        }
+      if (activeStreams && activeStreams[camera.name]) {
+        const selectedStreamName = activeStreams[camera.name];
+        const isRestreamed = Object.keys(config.go2rtc.streams || {}).includes(
+          selectedStreamName,
+        );
+        if (isRestreamed) {
+          streamNames.add(selectedStreamName);
+        }
+      } else {
+        Object.values(camera.live.streams).forEach((streamName) => {
+          const isRestreamed = Object.keys(
+            config.go2rtc.streams || {},
+          ).includes(streamName);
+          if (isRestreamed) {
+            streamNames.add(streamName);
+          }
+        });
       }
     });
@@ -66,11 +73,11 @@ export default function useCameraLiveMode(
     } = {};

     cameras.forEach((camera) => {
+      const selectedStreamName =
+        activeStreams?.[camera.name] ?? Object.values(camera.live.streams)[0];
       const isRestreamed =
         config &&
-        Object.keys(config.go2rtc.streams || {}).includes(
-          Object.values(camera.live.streams)[0],
-        );
+        Object.keys(config.go2rtc.streams || {}).includes(selectedStreamName);

       newIsRestreamedStates[camera.name] = isRestreamed ?? false;

@@ -101,14 +108,21 @@ export default function useCameraLiveMode(
     setPreferredLiveModes(newPreferredLiveModes);
     setIsRestreamedStates(newIsRestreamedStates);
     setSupportsAudioOutputStates(newSupportsAudioOutputStates);
-  }, [cameras, config, windowVisible, streamMetadata]);
+  }, [activeStreams, cameras, config, windowVisible, streamMetadata]);

   const resetPreferredLiveMode = useCallback(
     (cameraName: string) => {
       const mseSupported =
         "MediaSource" in window || "ManagedMediaSource" in window;
+      const cameraConfig = cameras.find((camera) => camera.name === cameraName);
+      const selectedStreamName =
+        activeStreams?.[cameraName] ??
+        (cameraConfig
+          ? Object.values(cameraConfig.live.streams)[0]
+          : cameraName);
       const isRestreamed =
-        config && Object.keys(config.go2rtc.streams || {}).includes(cameraName);
+        config &&
+        Object.keys(config.go2rtc.streams || {}).includes(selectedStreamName);

       setPreferredLiveModes((prevModes) => {
         const newModes = { ...prevModes };

@@ -122,7 +136,7 @@ export default function useCameraLiveMode(
         return newModes;
       });
     },
-    [config],
+    [activeStreams, cameras, config],
   );

   return {
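The stream-selection fallback this hook now applies in several places can be stated compactly. A hedged sketch of the precedence order (active stream, then first configured live stream, then the camera name itself), with hypothetical stream names for illustration:

```python
def selected_stream(active_streams, camera_name, live_streams):
    # prefer the stream the user is actively viewing; otherwise fall back
    # to the camera's first configured live stream, then to the camera name
    if active_streams and active_streams.get(camera_name):
        return active_streams[camera_name]
    if live_streams:
        return next(iter(live_streams.values()))
    return camera_name


live = {"default": "front_door_main"}
print(selected_stream({"front_door": "front_door_sub"}, "front_door", live))  # front_door_sub
print(selected_stream(None, "front_door", live))  # front_door_main
print(selected_stream(None, "front_door", {}))    # front_door
```

Checking the go2rtc restream list against this selected name, rather than always against the first configured stream, is what makes the MSE/WebRTC preference follow the stream the user actually switched to.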

View File

@@ -0,0 +1,26 @@
+import { useAllowedCameras } from "@/hooks/use-allowed-cameras";
+import useSWR from "swr";
+import { FrigateConfig } from "@/types/frigateConfig";
+
+/**
+ * Returns true if the current user has access to all cameras.
+ * This is used to determine birdseye access — users who can see
+ * all cameras should also be able to see the birdseye view.
+ */
+export function useHasFullCameraAccess() {
+  const allowedCameras = useAllowedCameras();
+  const { data: config } = useSWR<FrigateConfig>("config", {
+    revalidateOnFocus: false,
+  });
+
+  if (!config?.cameras) return false;
+
+  const enabledCameraNames = Object.entries(config.cameras)
+    .filter(([, cam]) => cam.enabled_in_config)
+    .map(([name]) => name);
+
+  return (
+    enabledCameraNames.length > 0 &&
+    enabledCameraNames.every((name) => allowedCameras.includes(name))
+  );
+}
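The hook's predicate is a plain subset check, which a few lines make explicit. A sketch of the same logic, not the hook itself:

```python
def has_full_camera_access(enabled_cameras, allowed_cameras) -> bool:
    # birdseye is shown only when the user can see every enabled camera;
    # an empty camera list counts as "no access" rather than full access
    return bool(enabled_cameras) and set(enabled_cameras) <= set(allowed_cameras)


print(has_full_camera_access(["front_door", "back_door"], ["front_door", "back_door"]))  # True
print(has_full_camera_access(["front_door", "back_door"], ["front_door"]))               # False
print(has_full_camera_access([], ["front_door"]))                                        # False
```

Note the `length > 0` guard in the hook mirrors the `bool(enabled_cameras)` term here: with no enabled cameras the vacuous "all allowed" would otherwise grant birdseye.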

View File

@@ -11,12 +11,12 @@ import { useTranslation } from "react-i18next";
 import { useEffect, useMemo, useRef } from "react";
 import useSWR from "swr";
 import { useAllowedCameras } from "@/hooks/use-allowed-cameras";
-import { useIsAdmin } from "@/hooks/use-is-admin";
+import { useHasFullCameraAccess } from "@/hooks/use-has-full-camera-access";

 function Live() {
   const { t } = useTranslation(["views/live"]);
   const { data: config } = useSWR<FrigateConfig>("config");
-  const isAdmin = useIsAdmin();
+  const hasFullCameraAccess = useHasFullCameraAccess();

   // selection

@@ -90,8 +90,8 @@
   const allowedCameras = useAllowedCameras();

   const includesBirdseye = useMemo(() => {
-    // Restricted users should never have access to birdseye
-    if (!isAdmin) {
+    // Users without access to all cameras should not have access to birdseye
+    if (!hasFullCameraAccess) {
       return false;
     }

@@ -106,7 +106,7 @@
     } else {
       return false;
     }
-  }, [config, cameraGroup, isAdmin]);
+  }, [config, cameraGroup, hasFullCameraAccess]);

   const cameras = useMemo(() => {
     if (!config) {

@@ -151,7 +151,9 @@
   return (
     <div className="size-full" ref={mainRef}>
-      {selectedCameraName === "birdseye" ? (
+      {selectedCameraName === "birdseye" &&
+      hasFullCameraAccess &&
+      config?.birdseye?.enabled ? (
         <LiveBirdseyeView
           supportsFullscreen={supportsFullScreen}
           fullscreen={fullscreen}