Mirror of https://github.com/blakeblackshear/frigate.git (synced 2026-01-11 14:58:29 -05:00)

Compare commits: dev...media-sync (15 commits)
| Author | SHA1 | Date |
|---|---|---|
|  | a8b90834b0 |  |
|  | f1a19128ed |  |
|  | a77b0a7c4b |  |
|  | 1c95eb2c39 |  |
|  | 26744efb1e |  |
|  | aa0b082184 |  |
|  | 7fb8d9b050 |  |
|  | b8bc98a423 |  |
|  | f9e06bb7b7 |  |
|  | 7cc16161b3 |  |
|  | 08311a6ee2 |  |
|  | a08c044144 |  |
|  | 5cced22f65 |  |
|  | b962c95725 |  |
|  | 0cbec25494 |  |
Makefile (2 changes)
@@ -1,7 +1,7 @@
|
||||
default_target: local
|
||||
|
||||
COMMIT_HASH := $(shell git log -1 --pretty=format:"%h"|tail -1)
|
||||
VERSION = 0.17.0
|
||||
VERSION = 0.18.0
|
||||
IMAGE_REPO ?= ghcr.io/blakeblackshear/frigate
|
||||
GITHUB_REF_NAME ?= $(shell git rev-parse --abbrev-ref HEAD)
|
||||
BOARDS= #Initialized empty
|
||||
|
||||
```diff
@@ -139,7 +139,13 @@ record:
 :::tip
 
-When using `hwaccel_args` globally hardware encoding is used for time lapse generation. The encoder determines its own behavior so the resulting file size may be undesirably large.
+When using `hwaccel_args`, hardware encoding is used for timelapse generation. This setting can be overridden for a specific camera (e.g., when camera resolution exceeds hardware encoder limits); set `cameras.<camera>.record.export.hwaccel_args` with the appropriate settings. Using an unrecognized value or empty string will fall back to software encoding (libx264).
 
 :::
 
+:::tip
+
+The encoder determines its own behavior so the resulting file size may be undesirably large.
 To reduce the output file size the ffmpeg parameter `-qp n` can be utilized (where `n` stands for the value of the quantisation parameter). The value can be adjusted to get an acceptable tradeoff between quality and file size for the given scenario.
+
+:::
+
```
```diff
@@ -148,19 +154,16 @@ To reduce the output file size the ffmpeg parameter `-qp n` can be utilized (whe
 Apple devices running the Safari browser may fail to playback h.265 recordings. The [apple compatibility option](../configuration/camera_specific.md#h265-cameras-via-safari) should be used to ensure seamless playback on Apple devices.
 
-## Syncing Recordings With Disk
+## Syncing Media Files With Disk
 
-In some cases the recordings files may be deleted but Frigate will not know this has happened. Recordings sync can be enabled which will tell Frigate to check the file system and delete any db entries for files which don't exist.
+Media files (event snapshots, event thumbnails, review thumbnails, previews, exports, and recordings) can become orphaned when database entries are deleted but the corresponding files remain on disk.
 
-```yaml
-record:
-  sync_recordings: True
-```
+Normal operation may leave small numbers of orphaned files until Frigate's scheduled cleanup, but crashes, configuration changes, or upgrades may cause more orphaned files that Frigate does not clean up. This feature checks the file system for media files and removes any that are not referenced in the database.
 
-This feature is meant to fix variations in files, not completely delete entries in the database. If you delete all of your media, don't use `sync_recordings`, just stop Frigate, delete the `frigate.db` database, and restart.
+The Maintenance pane in the Frigate UI or the API endpoint `POST /api/media/sync` can be used to trigger a media sync. When using the API, a job ID is returned and the operation continues on the server. Status can be checked with the `/api/media/sync/status/{job_id}` endpoint.
 
 :::warning
 
-The sync operation uses considerable CPU resources and in most cases is not needed, only enable when necessary.
+This operation uses considerable CPU resources and includes a safety threshold that aborts if more than 50% of files would be deleted. Only run when necessary. If you set `force: true` the safety threshold will be bypassed; do not use `force` unless you are certain the deletions are intended.
 
 :::
```
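As a concrete illustration of the API flow these docs describe, the sketch below starts a dry-run sync and polls the job status. It is a minimal sketch, assuming a Frigate instance at `http://localhost:5000` with admin authentication already handled; the endpoint paths, body fields, and response shapes come from the code in this changeset, while the host, polling interval, and the "running" status string are assumptions.

```python
# Hypothetical media sync trigger + poll, based on the endpoints added in this
# changeset. Host, auth handling, and polling interval are illustrative.
import time

import requests

BASE = "http://localhost:5000/api"  # assumed mount point

# Start a dry run that only reports orphans for recordings and previews.
resp = requests.post(
    f"{BASE}/media/sync",
    json={"dry_run": True, "media_types": ["recordings", "previews"]},
)
if resp.status_code == 409:
    # A sync is already running; reuse its id (shape per the 409 response body).
    job_id = resp.json()["current_job_id"]
else:
    # 202 Accepted returns {"job": {"job_type": ..., "status": ..., "id": ...}}.
    job_id = resp.json()["job"]["id"]

# Poll until the job leaves the queued/running states ("running" is assumed).
while True:
    job = requests.get(f"{BASE}/media/sync/status/{job_id}").json()["job"]
    if job["status"] not in ("queued", "running"):
        break
    time.sleep(5)
print(job)
```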
```diff
@@ -510,8 +510,6 @@ record:
   # Optional: Number of minutes to wait between cleanup runs (default: shown below)
   # This can be used to reduce the frequency of deleting recording segments from disk if you want to minimize i/o
   expire_interval: 60
-  # Optional: Two-way sync recordings database with disk on startup and once a day (default: shown below).
-  sync_recordings: False
   # Optional: Continuous retention settings
   continuous:
     # Optional: Number of days to retain recordings regardless of tracked objects or motion (default: shown below)
```

```diff
@@ -534,6 +532,8 @@ record:
     # The -r (framerate) dictates how smooth the output video is.
     # So the args would be -vf setpts=0.02*PTS -r 30 in that case.
     timelapse_args: "-vf setpts=0.04*PTS -r 30"
+    # Optional: Global hardware acceleration settings for timelapse exports. (default: inherit)
+    hwaccel_args: auto
   # Optional: Recording Preview Settings
   preview:
     # Optional: Quality of recording preview (default: shown below).
```

```diff
@@ -835,6 +835,11 @@ cameras:
     # Optional: camera specific output args (default: inherit)
     # output_args:
 
+    # Optional: camera specific hwaccel args for timelapse export (default: inherit)
+    # record:
+    #   export:
+    #     hwaccel_args:
+
     # Optional: timeout for highest scoring image before allowing it
     # to be replaced by a newer image. (default: shown below)
     best_image_timeout: 60
```
docs/static/frigate-api.yaml (vendored; 53 changes)
```diff
@@ -326,6 +326,59 @@ paths:
             application/json:
               schema:
                 $ref: "#/components/schemas/HTTPValidationError"
+  /media/sync:
+    post:
+      tags:
+        - App
+      summary: Start media sync job
+      description: |-
+        Start an asynchronous media sync job to find and (optionally) remove orphaned media files.
+        Returns 202 with job details when queued, or 409 if a job is already running.
+      operationId: sync_media_media_sync_post
+      requestBody:
+        required: true
+        content:
+          application/json:
+      responses:
+        "202":
+          description: Accepted - Job queued
+        "409":
+          description: Conflict - Job already running
+        "422":
+          description: Validation Error
+
+  /media/sync/current:
+    get:
+      tags:
+        - App
+      summary: Get current media sync job
+      description: |-
+        Retrieve the current running media sync job, if any. Returns the job details or null when no job is active.
+      operationId: get_media_sync_current_media_sync_current_get
+      responses:
+        "200":
+          description: Successful Response
+        "422":
+          description: Validation Error
+
+  /media/sync/status/{job_id}:
+    get:
+      tags:
+        - App
+      summary: Get media sync job status
+      description: |-
+        Get status and results for the specified media sync job id. Returns 200 with job details including results, or 404 if the job is not found.
+      operationId: get_media_sync_status_media_sync_status__job_id__get
+      parameters:
+        - name: job_id
+          in: path
+      responses:
+        "200":
+          description: Successful Response
+        "404":
+          description: Not Found - Job not found
+        "422":
+          description: Validation Error
   /faces/train/{name}/classify:
     post:
       tags:
```
```diff
@@ -25,15 +25,22 @@ from pydantic import ValidationError
 
 from frigate.api.auth import allow_any_authenticated, allow_public, require_role
 from frigate.api.defs.query.app_query_parameters import AppTimelineHourlyQueryParameters
-from frigate.api.defs.request.app_body import AppConfigSetBody
+from frigate.api.defs.request.app_body import AppConfigSetBody, MediaSyncBody
 from frigate.api.defs.tags import Tags
 from frigate.config import FrigateConfig
 from frigate.config.camera.updater import (
     CameraConfigUpdateEnum,
     CameraConfigUpdateTopic,
 )
+from frigate.ffmpeg_presets import FFMPEG_HWACCEL_VAAPI, _gpu_selector
+from frigate.jobs.media_sync import (
+    get_current_media_sync_job,
+    get_media_sync_job_by_id,
+    start_media_sync_job,
+)
 from frigate.models import Event, Timeline
 from frigate.stats.prometheus import get_metrics, update_metrics
+from frigate.types import JobStatusTypesEnum
 from frigate.util.builtin import (
     clean_camera_user_pass,
     flatten_config_data,
```

```diff
@@ -458,7 +465,15 @@ def config_set(request: Request, body: AppConfigSetBody):
 
 @router.get("/vainfo", dependencies=[Depends(allow_any_authenticated())])
 def vainfo():
-    vainfo = vainfo_hwaccel()
+    # Use LibvaGpuSelector to pick an appropriate libva device (if available)
+    selected_gpu = ""
+    try:
+        selected_gpu = _gpu_selector.get_gpu_arg(FFMPEG_HWACCEL_VAAPI, 0) or ""
+    except Exception:
+        selected_gpu = ""
+
+    # If selected_gpu is empty, pass None to vainfo_hwaccel to run plain `vainfo`.
+    vainfo = vainfo_hwaccel(device_name=selected_gpu or None)
     return JSONResponse(
         content={
             "return_code": vainfo.returncode,
```

```diff
@@ -593,6 +608,98 @@ def restart():
     )
 
 
+@router.post(
+    "/media/sync",
+    dependencies=[Depends(require_role(["admin"]))],
+    summary="Start media sync job",
+    description="""Start an asynchronous media sync job to find and (optionally) remove orphaned media files.
+    Returns 202 with job details when queued, or 409 if a job is already running.""",
+)
+def sync_media(body: MediaSyncBody = Body(...)):
+    """Start async media sync job - remove orphaned files.
+
+    Syncs specified media types: event snapshots, event thumbnails, review thumbnails,
+    previews, exports, and/or recordings. Job runs in background; use /media/sync/current
+    or /media/sync/status/{job_id} to check status.
+
+    Args:
+        body: MediaSyncBody with dry_run flag and media_types list.
+            media_types can include: 'all', 'event_snapshots', 'event_thumbnails',
+            'review_thumbnails', 'previews', 'exports', 'recordings'
+
+    Returns:
+        202 Accepted with job_id, or 409 Conflict if job already running.
+    """
+    job_id = start_media_sync_job(
+        dry_run=body.dry_run, media_types=body.media_types, force=body.force
+    )
+
+    if job_id is None:
+        # A job is already running
+        current = get_current_media_sync_job()
+        return JSONResponse(
+            content={
+                "error": "A media sync job is already running",
+                "current_job_id": current.id if current else None,
+            },
+            status_code=409,
+        )
+
+    return JSONResponse(
+        content={
+            "job": {
+                "job_type": "media_sync",
+                "status": JobStatusTypesEnum.queued,
+                "id": job_id,
+            }
+        },
+        status_code=202,
+    )
+
+
+@router.get(
+    "/media/sync/current",
+    dependencies=[Depends(require_role(["admin"]))],
+    summary="Get current media sync job",
+    description="""Retrieve the current running media sync job, if any. Returns the job details
+    or null when no job is active.""",
+)
+def get_media_sync_current():
+    """Get the current running media sync job, if any."""
+    job = get_current_media_sync_job()
+
+    if job is None:
+        return JSONResponse(content={"job": None}, status_code=200)
+
+    return JSONResponse(
+        content={"job": job.to_dict()},
+        status_code=200,
+    )
+
+
+@router.get(
+    "/media/sync/status/{job_id}",
+    dependencies=[Depends(require_role(["admin"]))],
+    summary="Get media sync job status",
+    description="""Get status and results for the specified media sync job id. Returns 200 with
+    job details including results, or 404 if the job is not found.""",
+)
+def get_media_sync_status(job_id: str):
+    """Get the status of a specific media sync job."""
+    job = get_media_sync_job_by_id(job_id)
+
+    if job is None:
+        return JSONResponse(
+            content={"error": "Job not found"},
+            status_code=404,
+        )
+
+    return JSONResponse(
+        content={"job": job.to_dict()},
+        status_code=200,
+    )
+
+
 @router.get("/labels", dependencies=[Depends(allow_any_authenticated())])
 def get_labels(camera: str = ""):
     try:
```
```diff
@@ -1,8 +1,7 @@
 from enum import Enum
-from typing import Optional, Union
+from typing import Optional
 
 from pydantic import BaseModel
-from pydantic.json_schema import SkipJsonSchema
 
 
 class Extension(str, Enum):
```

```diff
@@ -48,15 +47,3 @@ class MediaMjpegFeedQueryParams(BaseModel):
     mask: Optional[int] = None
     motion: Optional[int] = None
     regions: Optional[int] = None
-
-
-class MediaRecordingsSummaryQueryParams(BaseModel):
-    timezone: str = "utc"
-    cameras: Optional[str] = "all"
-
-
-class MediaRecordingsAvailabilityQueryParams(BaseModel):
-    cameras: str = "all"
-    before: Union[float, SkipJsonSchema[None]] = None
-    after: Union[float, SkipJsonSchema[None]] = None
-    scale: int = 30
```
frigate/api/defs/query/recordings_query_parameters.py (new file; 21 lines)
```python
from typing import Optional, Union

from pydantic import BaseModel
from pydantic.json_schema import SkipJsonSchema


class MediaRecordingsSummaryQueryParams(BaseModel):
    timezone: str = "utc"
    cameras: Optional[str] = "all"


class MediaRecordingsAvailabilityQueryParams(BaseModel):
    cameras: str = "all"
    before: Union[float, SkipJsonSchema[None]] = None
    after: Union[float, SkipJsonSchema[None]] = None
    scale: int = 30


class RecordingsDeleteQueryParams(BaseModel):
    keep: Optional[str] = None
    cameras: Optional[str] = "all"
```
```diff
@@ -1,6 +1,6 @@
-from typing import Any, Dict, Optional
+from typing import Any, Dict, List, Optional
 
-from pydantic import BaseModel
+from pydantic import BaseModel, Field
 
 
 class AppConfigSetBody(BaseModel):
```

```diff
@@ -27,3 +27,16 @@ class AppPostLoginBody(BaseModel):
 
 class AppPutRoleBody(BaseModel):
     role: str
+
+
+class MediaSyncBody(BaseModel):
+    dry_run: bool = Field(
+        default=True, description="If True, only report orphans without deleting them"
+    )
+    media_types: List[str] = Field(
+        default=["all"],
+        description="Types of media to sync: 'all', 'event_snapshots', 'event_thumbnails', 'review_thumbnails', 'previews', 'exports', 'recordings'",
+    )
+    force: bool = Field(
+        default=False, description="If True, bypass safety threshold checks"
+    )
```
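For a quick sense of the request shape this model defines, here is a minimal sketch that builds a dry-run body the way a client of `POST /media/sync` would. It assumes a pydantic v2 environment (as Frigate uses) so `model_dump_json()` is available; the field names come straight from `MediaSyncBody` above.

```python
# Minimal sketch of the request body defined above; assumes pydantic v2.
from frigate.api.defs.request.app_body import MediaSyncBody

body = MediaSyncBody(media_types=["recordings", "previews"])  # dry_run defaults to True
print(body.model_dump_json())
# -> {"dry_run":true,"media_types":["recordings","previews"],"force":false}
```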
frigate/api/defs/request/export_case_body.py (new file; 35 lines)
```python
from typing import Optional

from pydantic import BaseModel, Field


class ExportCaseCreateBody(BaseModel):
    """Request body for creating a new export case."""

    name: str = Field(max_length=100, description="Friendly name of the export case")
    description: Optional[str] = Field(
        default=None, description="Optional description of the export case"
    )


class ExportCaseUpdateBody(BaseModel):
    """Request body for updating an existing export case."""

    name: Optional[str] = Field(
        default=None,
        max_length=100,
        description="Updated friendly name of the export case",
    )
    description: Optional[str] = Field(
        default=None, description="Updated description of the export case"
    )


class ExportCaseAssignBody(BaseModel):
    """Request body for assigning or unassigning an export to a case."""

    export_case_id: Optional[str] = Field(
        default=None,
        max_length=30,
        description="Case ID to assign to the export, or null to unassign",
    )
```
```diff
@@ -1,4 +1,4 @@
-from typing import Union
+from typing import Optional, Union
 
 from pydantic import BaseModel, Field
 from pydantic.json_schema import SkipJsonSchema
```

```diff
@@ -18,3 +18,9 @@ class ExportRecordingsBody(BaseModel):
     )
     name: str = Field(title="Friendly name", default=None, max_length=256)
     image_path: Union[str, SkipJsonSchema[None]] = None
+    export_case_id: Optional[str] = Field(
+        default=None,
+        title="Export case ID",
+        max_length=30,
+        description="ID of the export case to assign this export to",
+    )
```
frigate/api/defs/response/export_case_response.py (new file; 22 lines)
```python
from typing import List, Optional

from pydantic import BaseModel, Field


class ExportCaseModel(BaseModel):
    """Model representing a single export case."""

    id: str = Field(description="Unique identifier for the export case")
    name: str = Field(description="Friendly name of the export case")
    description: Optional[str] = Field(
        default=None, description="Optional description of the export case"
    )
    created_at: float = Field(
        description="Unix timestamp when the export case was created"
    )
    updated_at: float = Field(
        description="Unix timestamp when the export case was last updated"
    )


ExportCasesResponse = List[ExportCaseModel]
```
```diff
@@ -15,6 +15,9 @@ class ExportModel(BaseModel):
     in_progress: bool = Field(
         description="Whether the export is currently being processed"
     )
+    export_case_id: Optional[str] = Field(
+        default=None, description="ID of the export case this export belongs to"
+    )
 
 
 class StartExportResponse(BaseModel):
```
```diff
@@ -3,13 +3,14 @@ from enum import Enum
 
 class Tags(Enum):
     app = "App"
+    auth = "Auth"
     camera = "Camera"
-    preview = "Preview"
+    events = "Events"
+    export = "Export"
+    classification = "Classification"
     logs = "Logs"
     media = "Media"
     notifications = "Notifications"
+    preview = "Preview"
+    recordings = "Recordings"
     review = "Review"
-    export = "Export"
-    events = "Events"
-    classification = "Classification"
-    auth = "Auth"
```
```diff
@@ -4,10 +4,10 @@ import logging
 import random
 import string
 from pathlib import Path
-from typing import List
+from typing import List, Optional
 
 import psutil
-from fastapi import APIRouter, Depends, Request
+from fastapi import APIRouter, Depends, Query, Request
 from fastapi.responses import JSONResponse
 from pathvalidate import sanitize_filepath
 from peewee import DoesNotExist
```

```diff
@@ -19,8 +19,17 @@ from frigate.api.auth import (
     require_camera_access,
     require_role,
 )
+from frigate.api.defs.request.export_case_body import (
+    ExportCaseAssignBody,
+    ExportCaseCreateBody,
+    ExportCaseUpdateBody,
+)
 from frigate.api.defs.request.export_recordings_body import ExportRecordingsBody
 from frigate.api.defs.request.export_rename_body import ExportRenameBody
+from frigate.api.defs.response.export_case_response import (
+    ExportCaseModel,
+    ExportCasesResponse,
+)
 from frigate.api.defs.response.export_response import (
     ExportModel,
     ExportsResponse,
```

```diff
@@ -29,7 +38,7 @@ from frigate.api.defs.response.export_response import (
 from frigate.api.defs.response.generic_response import GenericResponse
 from frigate.api.defs.tags import Tags
 from frigate.const import CLIPS_DIR, EXPORT_DIR
-from frigate.models import Export, Previews, Recordings
+from frigate.models import Export, ExportCase, Previews, Recordings
 from frigate.record.export import (
     PlaybackFactorEnum,
     PlaybackSourceEnum,
```

```diff
@@ -52,17 +61,182 @@ router = APIRouter(tags=[Tags.export])
 )
 def get_exports(
     allowed_cameras: List[str] = Depends(get_allowed_cameras_for_filter),
+    export_case_id: Optional[str] = None,
+    cameras: Optional[str] = Query(default="all"),
+    start_date: Optional[float] = None,
+    end_date: Optional[float] = None,
 ):
-    exports = (
-        Export.select()
-        .where(Export.camera << allowed_cameras)
-        .order_by(Export.date.desc())
-        .dicts()
-        .iterator()
-    )
+    query = Export.select().where(Export.camera << allowed_cameras)
+
+    if export_case_id is not None:
+        if export_case_id == "unassigned":
+            query = query.where(Export.export_case.is_null(True))
+        else:
+            query = query.where(Export.export_case == export_case_id)
+
+    if cameras and cameras != "all":
+        requested = set(cameras.split(","))
+        filtered_cameras = list(requested.intersection(allowed_cameras))
+        if not filtered_cameras:
+            return JSONResponse(content=[])
+        query = query.where(Export.camera << filtered_cameras)
+
+    if start_date is not None:
+        query = query.where(Export.date >= start_date)
+
+    if end_date is not None:
+        query = query.where(Export.date <= end_date)
+
+    exports = query.order_by(Export.date.desc()).dicts().iterator()
     return JSONResponse(content=[e for e in exports])
 
 
+@router.get(
+    "/cases",
+    response_model=ExportCasesResponse,
+    dependencies=[Depends(allow_any_authenticated())],
+    summary="Get export cases",
+    description="Gets all export cases from the database.",
+)
+def get_export_cases():
+    cases = (
+        ExportCase.select().order_by(ExportCase.created_at.desc()).dicts().iterator()
+    )
+    return JSONResponse(content=[c for c in cases])
+
+
+@router.post(
+    "/cases",
+    response_model=ExportCaseModel,
+    dependencies=[Depends(require_role(["admin"]))],
+    summary="Create export case",
+    description="Creates a new export case.",
+)
+def create_export_case(body: ExportCaseCreateBody):
+    case = ExportCase.create(
+        id="".join(random.choices(string.ascii_lowercase + string.digits, k=12)),
+        name=body.name,
+        description=body.description,
+        created_at=Path().stat().st_mtime,
+        updated_at=Path().stat().st_mtime,
+    )
+    return JSONResponse(content=model_to_dict(case))
+
+
+@router.get(
+    "/cases/{case_id}",
+    response_model=ExportCaseModel,
+    dependencies=[Depends(allow_any_authenticated())],
+    summary="Get a single export case",
+    description="Gets a specific export case by ID.",
+)
+def get_export_case(case_id: str):
+    try:
+        case = ExportCase.get(ExportCase.id == case_id)
+        return JSONResponse(content=model_to_dict(case))
+    except DoesNotExist:
+        return JSONResponse(
+            content={"success": False, "message": "Export case not found"},
+            status_code=404,
+        )
+
+
+@router.patch(
+    "/cases/{case_id}",
+    response_model=GenericResponse,
+    dependencies=[Depends(require_role(["admin"]))],
+    summary="Update export case",
+    description="Updates an existing export case.",
+)
+def update_export_case(case_id: str, body: ExportCaseUpdateBody):
+    try:
+        case = ExportCase.get(ExportCase.id == case_id)
+    except DoesNotExist:
+        return JSONResponse(
+            content={"success": False, "message": "Export case not found"},
+            status_code=404,
+        )
+
+    if body.name is not None:
+        case.name = body.name
+    if body.description is not None:
+        case.description = body.description
+
+    case.save()
+
+    return JSONResponse(
+        content={"success": True, "message": "Successfully updated export case."}
+    )
+
+
+@router.delete(
+    "/cases/{case_id}",
+    response_model=GenericResponse,
+    dependencies=[Depends(require_role(["admin"]))],
+    summary="Delete export case",
+    description="""Deletes an export case.
+    Exports that reference this case will have their export_case set to null.
+    """,
+)
+def delete_export_case(case_id: str):
+    try:
+        case = ExportCase.get(ExportCase.id == case_id)
+    except DoesNotExist:
+        return JSONResponse(
+            content={"success": False, "message": "Export case not found"},
+            status_code=404,
+        )
+
+    # Unassign exports from this case but keep the exports themselves
+    Export.update(export_case=None).where(Export.export_case == case).execute()
+
+    case.delete_instance()
+
+    return JSONResponse(
+        content={"success": True, "message": "Successfully deleted export case."}
+    )
+
+
+@router.patch(
+    "/export/{export_id}/case",
+    response_model=GenericResponse,
+    dependencies=[Depends(require_role(["admin"]))],
+    summary="Assign export to case",
+    description=(
+        "Assigns an export to a case, or unassigns it if export_case_id is null."
+    ),
+)
+async def assign_export_case(
+    export_id: str,
+    body: ExportCaseAssignBody,
+    request: Request,
+):
+    try:
+        export: Export = Export.get(Export.id == export_id)
+        await require_camera_access(export.camera, request=request)
+    except DoesNotExist:
+        return JSONResponse(
+            content={"success": False, "message": "Export not found."},
+            status_code=404,
+        )
+
+    if body.export_case_id is not None:
+        try:
+            ExportCase.get(ExportCase.id == body.export_case_id)
+        except DoesNotExist:
+            return JSONResponse(
+                content={"success": False, "message": "Export case not found."},
+                status_code=404,
+            )
+        export.export_case = body.export_case_id
+    else:
+        export.export_case = None
+
+    export.save()
+
+    return JSONResponse(
+        content={"success": True, "message": "Successfully updated export case."}
+    )
+
+
 @router.post(
     "/export/{camera_name}/start/{start_time}/end/{end_time}",
     response_model=StartExportResponse,
```

```diff
@@ -93,6 +267,16 @@ def export_recording(
     friendly_name = body.name
     existing_image = sanitize_filepath(body.image_path) if body.image_path else None
 
+    export_case_id = body.export_case_id
+    if export_case_id is not None:
+        try:
+            ExportCase.get(ExportCase.id == export_case_id)
+        except DoesNotExist:
+            return JSONResponse(
+                content={"success": False, "message": "Export case not found"},
+                status_code=404,
+            )
+
     # Ensure that existing_image is a valid path
     if existing_image and not existing_image.startswith(CLIPS_DIR):
         return JSONResponse(
```

```diff
@@ -161,6 +345,7 @@ def export_recording(
             if playback_source in PlaybackSourceEnum.__members__.values()
             else PlaybackSourceEnum.recordings
         ),
+        export_case_id,
     )
     exporter.start()
     return JSONResponse(
```
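To show how the new case endpoints compose, here is a small sketch of a client-side flow. It assumes a Frigate instance at `http://localhost:5000` with the export router mounted under `/api`, an exports listing route at `/api/exports`, and authentication already handled; the paths and body fields come from the code above, while the host and the example export id are illustrative.

```python
# Hypothetical client flow for export cases; host, auth, and the export id
# are illustrative assumptions, the endpoints mirror the router code above.
import requests

BASE = "http://localhost:5000/api"  # assumed mount point

# Create a case to group related exports.
case = requests.post(
    f"{BASE}/cases",
    json={"name": "Front door incident", "description": "Clips from 2026-01-10"},
).json()

# Assign an existing export to it (export id is a placeholder).
requests.patch(
    f"{BASE}/export/<export-id>/case", json={"export_case_id": case["id"]}
)

# List only the exports in that case (or pass "unassigned" for uncased exports).
exports = requests.get(
    f"{BASE}/exports", params={"export_case_id": case["id"]}
).json()
```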
```diff
@@ -22,6 +22,7 @@ from frigate.api import (
     media,
     notification,
     preview,
+    record,
     review,
 )
 from frigate.api.auth import get_jwt_secret, limiter, require_admin_by_default
```

```diff
@@ -128,6 +129,7 @@ def create_fastapi_app(
     app.include_router(export.router)
     app.include_router(event.router)
     app.include_router(media.router)
+    app.include_router(record.router)
     # App Properties
     app.frigate_config = frigate_config
     app.embeddings = embeddings
```
```diff
@@ -8,9 +8,8 @@ import os
 import subprocess as sp
 import time
 from datetime import datetime, timedelta, timezone
-from functools import reduce
 from pathlib import Path as FilePath
-from typing import Any, List
+from typing import Any
 from urllib.parse import unquote
 
 import cv2
```

```diff
@@ -19,12 +18,11 @@ import pytz
 from fastapi import APIRouter, Depends, Path, Query, Request, Response
 from fastapi.responses import FileResponse, JSONResponse, StreamingResponse
 from pathvalidate import sanitize_filename
-from peewee import DoesNotExist, fn, operator
+from peewee import DoesNotExist, fn
 from tzlocal import get_localzone_name
 
 from frigate.api.auth import (
     allow_any_authenticated,
     get_allowed_cameras_for_filter,
     require_camera_access,
 )
 from frigate.api.defs.query.media_query_parameters import (
```

```diff
@@ -32,8 +30,6 @@ from frigate.api.defs.query.media_query_parameters import (
     MediaEventsSnapshotQueryParams,
     MediaLatestFrameQueryParams,
     MediaMjpegFeedQueryParams,
-    MediaRecordingsAvailabilityQueryParams,
-    MediaRecordingsSummaryQueryParams,
 )
 from frigate.api.defs.tags import Tags
 from frigate.camera.state import CameraState
```

```diff
@@ -44,13 +40,11 @@ from frigate.const import (
     INSTALL_DIR,
     MAX_SEGMENT_DURATION,
     PREVIEW_FRAME_TYPE,
-    RECORD_DIR,
 )
 from frigate.models import Event, Previews, Recordings, Regions, ReviewSegment
 from frigate.track.object_processing import TrackedObjectProcessor
 from frigate.util.file import get_event_thumbnail_bytes
 from frigate.util.image import get_image_from_recording
-from frigate.util.time import get_dst_transitions
 
 logger = logging.getLogger(__name__)
```

```diff
@@ -397,333 +391,6 @@ async def submit_recording_snapshot_to_plus(
     )
 
 
-@router.get("/recordings/storage", dependencies=[Depends(allow_any_authenticated())])
-def get_recordings_storage_usage(request: Request):
-    recording_stats = request.app.stats_emitter.get_latest_stats()["service"][
-        "storage"
-    ][RECORD_DIR]
-
-    if not recording_stats:
-        return JSONResponse({})
-
-    total_mb = recording_stats["total"]
-
-    camera_usages: dict[str, dict] = (
-        request.app.storage_maintainer.calculate_camera_usages()
-    )
-
-    for camera_name in camera_usages.keys():
-        if camera_usages.get(camera_name, {}).get("usage"):
-            camera_usages[camera_name]["usage_percent"] = (
-                camera_usages.get(camera_name, {}).get("usage", 0) / total_mb
-            ) * 100
-
-    return JSONResponse(content=camera_usages)
-
-
-@router.get("/recordings/summary", dependencies=[Depends(allow_any_authenticated())])
-def all_recordings_summary(
-    request: Request,
-    params: MediaRecordingsSummaryQueryParams = Depends(),
-    allowed_cameras: List[str] = Depends(get_allowed_cameras_for_filter),
-):
-    """Returns true/false by day indicating if recordings exist"""
-
-    cameras = params.cameras
-    if cameras != "all":
-        requested = set(unquote(cameras).split(","))
-        filtered = requested.intersection(allowed_cameras)
-        if not filtered:
-            return JSONResponse(content={})
-        camera_list = list(filtered)
-    else:
-        camera_list = allowed_cameras
-
-    time_range_query = (
-        Recordings.select(
-            fn.MIN(Recordings.start_time).alias("min_time"),
-            fn.MAX(Recordings.start_time).alias("max_time"),
-        )
-        .where(Recordings.camera << camera_list)
-        .dicts()
-        .get()
-    )
-
-    min_time = time_range_query.get("min_time")
-    max_time = time_range_query.get("max_time")
-
-    if min_time is None or max_time is None:
-        return JSONResponse(content={})
-
-    dst_periods = get_dst_transitions(params.timezone, min_time, max_time)
-
-    days: dict[str, bool] = {}
-
-    for period_start, period_end, period_offset in dst_periods:
-        hours_offset = int(period_offset / 60 / 60)
-        minutes_offset = int(period_offset / 60 - hours_offset * 60)
-        period_hour_modifier = f"{hours_offset} hour"
-        period_minute_modifier = f"{minutes_offset} minute"
-
-        period_query = (
-            Recordings.select(
-                fn.strftime(
-                    "%Y-%m-%d",
-                    fn.datetime(
-                        Recordings.start_time,
-                        "unixepoch",
-                        period_hour_modifier,
-                        period_minute_modifier,
-                    ),
-                ).alias("day")
-            )
-            .where(
-                (Recordings.camera << camera_list)
-                & (Recordings.end_time >= period_start)
-                & (Recordings.start_time <= period_end)
-            )
-            .group_by(
-                fn.strftime(
-                    "%Y-%m-%d",
-                    fn.datetime(
-                        Recordings.start_time,
-                        "unixepoch",
-                        period_hour_modifier,
-                        period_minute_modifier,
-                    ),
-                )
-            )
-            .order_by(Recordings.start_time.desc())
-            .namedtuples()
-        )
-
-        for g in period_query:
-            days[g.day] = True
-
-    return JSONResponse(content=dict(sorted(days.items())))
-
-
-@router.get(
-    "/{camera_name}/recordings/summary", dependencies=[Depends(require_camera_access)]
-)
-async def recordings_summary(camera_name: str, timezone: str = "utc"):
-    """Returns hourly summary for recordings of given camera"""
-
-    time_range_query = (
-        Recordings.select(
-            fn.MIN(Recordings.start_time).alias("min_time"),
-            fn.MAX(Recordings.start_time).alias("max_time"),
-        )
-        .where(Recordings.camera == camera_name)
-        .dicts()
-        .get()
-    )
-
-    min_time = time_range_query.get("min_time")
-    max_time = time_range_query.get("max_time")
-
-    days: dict[str, dict] = {}
-
-    if min_time is None or max_time is None:
-        return JSONResponse(content=list(days.values()))
-
-    dst_periods = get_dst_transitions(timezone, min_time, max_time)
-
-    for period_start, period_end, period_offset in dst_periods:
-        hours_offset = int(period_offset / 60 / 60)
-        minutes_offset = int(period_offset / 60 - hours_offset * 60)
-        period_hour_modifier = f"{hours_offset} hour"
-        period_minute_modifier = f"{minutes_offset} minute"
-
-        recording_groups = (
-            Recordings.select(
-                fn.strftime(
-                    "%Y-%m-%d %H",
-                    fn.datetime(
-                        Recordings.start_time,
-                        "unixepoch",
-                        period_hour_modifier,
-                        period_minute_modifier,
-                    ),
-                ).alias("hour"),
-                fn.SUM(Recordings.duration).alias("duration"),
-                fn.SUM(Recordings.motion).alias("motion"),
-                fn.SUM(Recordings.objects).alias("objects"),
-            )
-            .where(
-                (Recordings.camera == camera_name)
-                & (Recordings.end_time >= period_start)
-                & (Recordings.start_time <= period_end)
-            )
-            .group_by((Recordings.start_time + period_offset).cast("int") / 3600)
-            .order_by(Recordings.start_time.desc())
-            .namedtuples()
-        )
-
-        event_groups = (
-            Event.select(
-                fn.strftime(
-                    "%Y-%m-%d %H",
-                    fn.datetime(
-                        Event.start_time,
-                        "unixepoch",
-                        period_hour_modifier,
-                        period_minute_modifier,
-                    ),
-                ).alias("hour"),
-                fn.COUNT(Event.id).alias("count"),
-            )
-            .where(Event.camera == camera_name, Event.has_clip)
-            .where(
-                (Event.start_time >= period_start) & (Event.start_time <= period_end)
-            )
-            .group_by((Event.start_time + period_offset).cast("int") / 3600)
-            .namedtuples()
-        )
-
-        event_map = {g.hour: g.count for g in event_groups}
-
-        for recording_group in recording_groups:
-            parts = recording_group.hour.split()
-            hour = parts[1]
-            day = parts[0]
-            events_count = event_map.get(recording_group.hour, 0)
-            hour_data = {
-                "hour": hour,
-                "events": events_count,
-                "motion": recording_group.motion,
-                "objects": recording_group.objects,
-                "duration": round(recording_group.duration),
-            }
-            if day in days:
-                # merge counts if already present (edge-case at DST boundary)
-                days[day]["events"] += events_count or 0
-                days[day]["hours"].append(hour_data)
-            else:
-                days[day] = {
-                    "events": events_count or 0,
-                    "hours": [hour_data],
-                    "day": day,
-                }
-
-    return JSONResponse(content=list(days.values()))
-
-
-@router.get("/{camera_name}/recordings", dependencies=[Depends(require_camera_access)])
-async def recordings(
-    camera_name: str,
-    after: float = (datetime.now() - timedelta(hours=1)).timestamp(),
-    before: float = datetime.now().timestamp(),
-):
-    """Return specific camera recordings between the given 'after'/'end' times. If not provided the last hour will be used"""
-    recordings = (
-        Recordings.select(
-            Recordings.id,
-            Recordings.start_time,
-            Recordings.end_time,
-            Recordings.segment_size,
-            Recordings.motion,
-            Recordings.objects,
-            Recordings.duration,
-        )
-        .where(
-            Recordings.camera == camera_name,
-            Recordings.end_time >= after,
-            Recordings.start_time <= before,
-        )
-        .order_by(Recordings.start_time)
-        .dicts()
-        .iterator()
-    )
-
-    return JSONResponse(content=list(recordings))
-
-
-@router.get(
-    "/recordings/unavailable",
-    response_model=list[dict],
-    dependencies=[Depends(allow_any_authenticated())],
-)
-async def no_recordings(
-    request: Request,
-    params: MediaRecordingsAvailabilityQueryParams = Depends(),
-    allowed_cameras: List[str] = Depends(get_allowed_cameras_for_filter),
-):
-    """Get time ranges with no recordings."""
-    cameras = params.cameras
-    if cameras != "all":
-        requested = set(unquote(cameras).split(","))
-        filtered = requested.intersection(allowed_cameras)
-        if not filtered:
-            return JSONResponse(content=[])
-        cameras = ",".join(filtered)
-    else:
-        cameras = allowed_cameras
-
-    before = params.before or datetime.datetime.now().timestamp()
-    after = (
-        params.after
-        or (datetime.datetime.now() - datetime.timedelta(hours=1)).timestamp()
-    )
-    scale = params.scale
-
-    clauses = [(Recordings.end_time >= after) & (Recordings.start_time <= before)]
-    if cameras != "all":
-        camera_list = cameras.split(",")
-        clauses.append((Recordings.camera << camera_list))
-    else:
-        camera_list = allowed_cameras
-
-    # Get recording start times
-    data: list[Recordings] = (
-        Recordings.select(Recordings.start_time, Recordings.end_time)
-        .where(reduce(operator.and_, clauses))
-        .order_by(Recordings.start_time.asc())
-        .dicts()
-        .iterator()
-    )
-
-    # Convert recordings to list of (start, end) tuples
-    recordings = [(r["start_time"], r["end_time"]) for r in data]
-
-    # Iterate through time segments and check if each has any recording
-    no_recording_segments = []
-    current = after
-    current_gap_start = None
-
-    while current < before:
-        segment_end = min(current + scale, before)
-
-        # Check if this segment overlaps with any recording
-        has_recording = any(
-            rec_start < segment_end and rec_end > current
-            for rec_start, rec_end in recordings
-        )
-
-        if not has_recording:
-            # This segment has no recordings
-            if current_gap_start is None:
-                current_gap_start = current  # Start a new gap
-        else:
-            # This segment has recordings
-            if current_gap_start is not None:
-                # End the current gap and append it
-                no_recording_segments.append(
-                    {"start_time": int(current_gap_start), "end_time": int(current)}
-                )
-                current_gap_start = None
-
-        current = segment_end
-
-    # Append the last gap if it exists
-    if current_gap_start is not None:
-        no_recording_segments.append(
-            {"start_time": int(current_gap_start), "end_time": int(before)}
-        )
-
-    return JSONResponse(content=no_recording_segments)
-
-
 @router.get(
     "/{camera_name}/start/{start_ts}/end/{end_ts}/clip.mp4",
     dependencies=[Depends(require_camera_access)],
```
frigate/api/record.py (new file; 479 lines)
```python
"""Recording APIs."""

import logging
from datetime import datetime, timedelta
from functools import reduce
from pathlib import Path
from typing import List
from urllib.parse import unquote

from fastapi import APIRouter, Depends, Request
from fastapi import Path as PathParam
from fastapi.responses import JSONResponse
from peewee import fn, operator

from frigate.api.auth import (
    allow_any_authenticated,
    get_allowed_cameras_for_filter,
    require_camera_access,
    require_role,
)
from frigate.api.defs.query.recordings_query_parameters import (
    MediaRecordingsAvailabilityQueryParams,
    MediaRecordingsSummaryQueryParams,
    RecordingsDeleteQueryParams,
)
from frigate.api.defs.response.generic_response import GenericResponse
from frigate.api.defs.tags import Tags
from frigate.const import RECORD_DIR
from frigate.models import Event, Recordings
from frigate.util.time import get_dst_transitions

logger = logging.getLogger(__name__)

router = APIRouter(tags=[Tags.recordings])


@router.get("/recordings/storage", dependencies=[Depends(allow_any_authenticated())])
def get_recordings_storage_usage(request: Request):
    recording_stats = request.app.stats_emitter.get_latest_stats()["service"][
        "storage"
    ][RECORD_DIR]

    if not recording_stats:
        return JSONResponse({})

    total_mb = recording_stats["total"]

    camera_usages: dict[str, dict] = (
        request.app.storage_maintainer.calculate_camera_usages()
    )

    for camera_name in camera_usages.keys():
        if camera_usages.get(camera_name, {}).get("usage"):
            camera_usages[camera_name]["usage_percent"] = (
                camera_usages.get(camera_name, {}).get("usage", 0) / total_mb
            ) * 100

    return JSONResponse(content=camera_usages)


@router.get("/recordings/summary", dependencies=[Depends(allow_any_authenticated())])
def all_recordings_summary(
    request: Request,
    params: MediaRecordingsSummaryQueryParams = Depends(),
    allowed_cameras: List[str] = Depends(get_allowed_cameras_for_filter),
):
    """Returns true/false by day indicating if recordings exist"""

    cameras = params.cameras
    if cameras != "all":
        requested = set(unquote(cameras).split(","))
        filtered = requested.intersection(allowed_cameras)
        if not filtered:
            return JSONResponse(content={})
        camera_list = list(filtered)
    else:
        camera_list = allowed_cameras

    time_range_query = (
        Recordings.select(
            fn.MIN(Recordings.start_time).alias("min_time"),
            fn.MAX(Recordings.start_time).alias("max_time"),
        )
        .where(Recordings.camera << camera_list)
        .dicts()
        .get()
    )

    min_time = time_range_query.get("min_time")
    max_time = time_range_query.get("max_time")

    if min_time is None or max_time is None:
        return JSONResponse(content={})

    dst_periods = get_dst_transitions(params.timezone, min_time, max_time)

    days: dict[str, bool] = {}

    for period_start, period_end, period_offset in dst_periods:
        hours_offset = int(period_offset / 60 / 60)
        minutes_offset = int(period_offset / 60 - hours_offset * 60)
        period_hour_modifier = f"{hours_offset} hour"
        period_minute_modifier = f"{minutes_offset} minute"

        period_query = (
            Recordings.select(
                fn.strftime(
                    "%Y-%m-%d",
                    fn.datetime(
                        Recordings.start_time,
                        "unixepoch",
                        period_hour_modifier,
                        period_minute_modifier,
                    ),
                ).alias("day")
            )
            .where(
                (Recordings.camera << camera_list)
                & (Recordings.end_time >= period_start)
                & (Recordings.start_time <= period_end)
            )
            .group_by(
                fn.strftime(
                    "%Y-%m-%d",
                    fn.datetime(
                        Recordings.start_time,
                        "unixepoch",
                        period_hour_modifier,
                        period_minute_modifier,
                    ),
                )
            )
            .order_by(Recordings.start_time.desc())
            .namedtuples()
        )

        for g in period_query:
            days[g.day] = True

    return JSONResponse(content=dict(sorted(days.items())))


@router.get(
    "/{camera_name}/recordings/summary", dependencies=[Depends(require_camera_access)]
)
async def recordings_summary(camera_name: str, timezone: str = "utc"):
    """Returns hourly summary for recordings of given camera"""

    time_range_query = (
        Recordings.select(
            fn.MIN(Recordings.start_time).alias("min_time"),
            fn.MAX(Recordings.start_time).alias("max_time"),
        )
        .where(Recordings.camera == camera_name)
        .dicts()
        .get()
    )

    min_time = time_range_query.get("min_time")
    max_time = time_range_query.get("max_time")

    days: dict[str, dict] = {}

    if min_time is None or max_time is None:
        return JSONResponse(content=list(days.values()))

    dst_periods = get_dst_transitions(timezone, min_time, max_time)

    for period_start, period_end, period_offset in dst_periods:
        hours_offset = int(period_offset / 60 / 60)
        minutes_offset = int(period_offset / 60 - hours_offset * 60)
        period_hour_modifier = f"{hours_offset} hour"
        period_minute_modifier = f"{minutes_offset} minute"

        recording_groups = (
            Recordings.select(
                fn.strftime(
                    "%Y-%m-%d %H",
                    fn.datetime(
                        Recordings.start_time,
                        "unixepoch",
                        period_hour_modifier,
                        period_minute_modifier,
                    ),
                ).alias("hour"),
                fn.SUM(Recordings.duration).alias("duration"),
                fn.SUM(Recordings.motion).alias("motion"),
                fn.SUM(Recordings.objects).alias("objects"),
            )
            .where(
                (Recordings.camera == camera_name)
                & (Recordings.end_time >= period_start)
                & (Recordings.start_time <= period_end)
            )
            .group_by((Recordings.start_time + period_offset).cast("int") / 3600)
            .order_by(Recordings.start_time.desc())
            .namedtuples()
        )

        event_groups = (
            Event.select(
                fn.strftime(
                    "%Y-%m-%d %H",
                    fn.datetime(
                        Event.start_time,
                        "unixepoch",
                        period_hour_modifier,
                        period_minute_modifier,
                    ),
                ).alias("hour"),
                fn.COUNT(Event.id).alias("count"),
            )
            .where(Event.camera == camera_name, Event.has_clip)
            .where(
                (Event.start_time >= period_start) & (Event.start_time <= period_end)
            )
            .group_by((Event.start_time + period_offset).cast("int") / 3600)
            .namedtuples()
        )

        event_map = {g.hour: g.count for g in event_groups}

        for recording_group in recording_groups:
            parts = recording_group.hour.split()
            hour = parts[1]
            day = parts[0]
            events_count = event_map.get(recording_group.hour, 0)
            hour_data = {
                "hour": hour,
                "events": events_count,
                "motion": recording_group.motion,
                "objects": recording_group.objects,
                "duration": round(recording_group.duration),
            }
            if day in days:
                # merge counts if already present (edge-case at DST boundary)
                days[day]["events"] += events_count or 0
                days[day]["hours"].append(hour_data)
            else:
                days[day] = {
                    "events": events_count or 0,
                    "hours": [hour_data],
                    "day": day,
                }

    return JSONResponse(content=list(days.values()))


@router.get("/{camera_name}/recordings", dependencies=[Depends(require_camera_access)])
async def recordings(
    camera_name: str,
    after: float = (datetime.now() - timedelta(hours=1)).timestamp(),
    before: float = datetime.now().timestamp(),
):
    """Return specific camera recordings between the given 'after'/'end' times. If not provided the last hour will be used"""
    recordings = (
        Recordings.select(
            Recordings.id,
            Recordings.start_time,
            Recordings.end_time,
            Recordings.segment_size,
            Recordings.motion,
            Recordings.objects,
            Recordings.duration,
        )
        .where(
            Recordings.camera == camera_name,
            Recordings.end_time >= after,
            Recordings.start_time <= before,
        )
        .order_by(Recordings.start_time)
        .dicts()
        .iterator()
    )

    return JSONResponse(content=list(recordings))


@router.get(
    "/recordings/unavailable",
    response_model=list[dict],
    dependencies=[Depends(allow_any_authenticated())],
)
async def no_recordings(
    request: Request,
    params: MediaRecordingsAvailabilityQueryParams = Depends(),
    allowed_cameras: List[str] = Depends(get_allowed_cameras_for_filter),
):
    """Get time ranges with no recordings."""
    cameras = params.cameras
    if cameras != "all":
        requested = set(unquote(cameras).split(","))
        filtered = requested.intersection(allowed_cameras)
        if not filtered:
            return JSONResponse(content=[])
        cameras = ",".join(filtered)
    else:
        cameras = allowed_cameras

    before = params.before or datetime.datetime.now().timestamp()
    after = (
        params.after
        or (datetime.datetime.now() - datetime.timedelta(hours=1)).timestamp()
    )
    scale = params.scale

    clauses = [(Recordings.end_time >= after) & (Recordings.start_time <= before)]
    if cameras != "all":
        camera_list = cameras.split(",")
        clauses.append((Recordings.camera << camera_list))
    else:
        camera_list = allowed_cameras

    # Get recording start times
    data: list[Recordings] = (
        Recordings.select(Recordings.start_time, Recordings.end_time)
        .where(reduce(operator.and_, clauses))
        .order_by(Recordings.start_time.asc())
        .dicts()
        .iterator()
    )

    # Convert recordings to list of (start, end) tuples
    recordings = [(r["start_time"], r["end_time"]) for r in data]

    # Iterate through time segments and check if each has any recording
    no_recording_segments = []
    current = after
    current_gap_start = None

    while current < before:
        segment_end = min(current + scale, before)

        # Check if this segment overlaps with any recording
        has_recording = any(
            rec_start < segment_end and rec_end > current
            for rec_start, rec_end in recordings
        )

        if not has_recording:
            # This segment has no recordings
            if current_gap_start is None:
                current_gap_start = current  # Start a new gap
        else:
            # This segment has recordings
            if current_gap_start is not None:
                # End the current gap and append it
                no_recording_segments.append(
                    {"start_time": int(current_gap_start), "end_time": int(current)}
                )
                current_gap_start = None

        current = segment_end

    # Append the last gap if it exists
    if current_gap_start is not None:
        no_recording_segments.append(
            {"start_time": int(current_gap_start), "end_time": int(before)}
        )

    return JSONResponse(content=no_recording_segments)


@router.delete(
    "/recordings/start/{start}/end/{end}",
    response_model=GenericResponse,
    dependencies=[Depends(require_role(["admin"]))],
    summary="Delete recordings",
    description="""Deletes recordings within the specified time range.
    Recordings can be filtered by cameras and kept based on motion, objects, or audio attributes.
    """,
)
async def delete_recordings(
    start: float = PathParam(..., description="Start timestamp (unix)"),
    end: float = PathParam(..., description="End timestamp (unix)"),
    params: RecordingsDeleteQueryParams = Depends(),
    allowed_cameras: List[str] = Depends(get_allowed_cameras_for_filter),
):
    """Delete recordings in the specified time range."""
    if start >= end:
        return JSONResponse(
            content={
                "success": False,
                "message": "Start time must be less than end time.",
            },
            status_code=400,
        )

    cameras = params.cameras

    if cameras != "all":
        requested = set(cameras.split(","))
        filtered = requested.intersection(allowed_cameras)

        if not filtered:
            return JSONResponse(
                content={
                    "success": False,
                    "message": "No valid cameras found in the request.",
                },
                status_code=400,
            )

        camera_list = list(filtered)
    else:
        camera_list = allowed_cameras

    # Parse keep parameter
    keep_set = set()

    if params.keep:
        keep_set = set(params.keep.split(","))

    # Build query to find overlapping recordings
    clauses = [
        (
            Recordings.start_time.between(start, end)
            | Recordings.end_time.between(start, end)
            | ((start > Recordings.start_time) & (end < Recordings.end_time))
        ),
        (Recordings.camera << camera_list),
    ]

    keep_clauses = []

    if "motion" in keep_set:
        keep_clauses.append(Recordings.motion.is_null(False) & (Recordings.motion > 0))

    if "object" in keep_set:
        keep_clauses.append(
            Recordings.objects.is_null(False) & (Recordings.objects > 0)
        )

    if "audio" in keep_set:
        keep_clauses.append(Recordings.dBFS.is_null(False))

    if keep_clauses:
        keep_condition = reduce(operator.or_, keep_clauses)
        clauses.append(~keep_condition)

    recordings_to_delete = (
        Recordings.select(Recordings.id, Recordings.path)
        .where(reduce(operator.and_, clauses))
        .dicts()
        .iterator()
    )

    recording_ids = []
    deleted_count = 0
    error_count = 0

    for recording in recordings_to_delete:
        recording_ids.append(recording["id"])

        try:
            Path(recording["path"]).unlink(missing_ok=True)
            deleted_count += 1
        except Exception as e:
            logger.error(f"Failed to delete recording file {recording['path']}: {e}")
            error_count += 1

    if recording_ids:
        max_deletes = 100000
        recording_ids_list = list(recording_ids)

        for i in range(0, len(recording_ids_list), max_deletes):
            Recordings.delete().where(
                Recordings.id << recording_ids_list[i : i + max_deletes]
            ).execute()

    message = f"Successfully deleted {deleted_count} recording(s)."

    if error_count > 0:
        message += f" {error_count} file deletion error(s) occurred."

    return JSONResponse(
        content={"success": True, "message": message},
        status_code=200,
    )
```
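The new delete endpoint above takes the time range from the path and the filters from the query string (`keep` accepts comma-separated `motion`, `object`, `audio`; `cameras` a comma-separated camera list). A sketch of calling it, assuming a Frigate instance at `http://localhost:5000` with admin authentication already handled; the timestamps and camera name are illustrative:

```python
# Hypothetical call to the recordings delete endpoint defined above; host,
# timestamps, and camera name are illustrative assumptions.
import requests

start, end = 1767070800, 1767074400  # unix timestamps bounding the purge window
resp = requests.delete(
    f"http://localhost:5000/api/recordings/start/{start}/end/{end}",
    params={"keep": "motion,object", "cameras": "front_door"},
)
# Expected shape per the handler: {"success": true, "message": "Successfully deleted N recording(s)."}
print(resp.json())
```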
```diff
@@ -19,6 +19,8 @@ class CameraMetrics:
     process_pid: Synchronized
     capture_process_pid: Synchronized
     ffmpeg_pid: Synchronized
+    reconnects_last_hour: Synchronized
+    stalls_last_hour: Synchronized
 
     def __init__(self, manager: SyncManager):
         self.camera_fps = manager.Value("d", 0)
```

```diff
@@ -35,6 +37,8 @@ class CameraMetrics:
         self.process_pid = manager.Value("i", 0)
         self.capture_process_pid = manager.Value("i", 0)
         self.ffmpeg_pid = manager.Value("i", 0)
+        self.reconnects_last_hour = manager.Value("i", 0)
+        self.stalls_last_hour = manager.Value("i", 0)
 
 
 class PTZMetrics:
```
@@ -28,6 +28,7 @@ from frigate.const import (
    UPDATE_CAMERA_ACTIVITY,
    UPDATE_EMBEDDINGS_REINDEX_PROGRESS,
    UPDATE_EVENT_DESCRIPTION,
    UPDATE_JOB_STATE,
    UPDATE_MODEL_STATE,
    UPDATE_REVIEW_DESCRIPTION,
    UPSERT_REVIEW_SEGMENT,
@@ -60,6 +61,7 @@ class Dispatcher:
        self.camera_activity = CameraActivityManager(config, self.publish)
        self.audio_activity = AudioActivityManager(config, self.publish)
        self.model_state: dict[str, ModelStatusTypesEnum] = {}
        self.job_state: dict[str, dict[str, Any]] = {}  # {job_type: job_data}
        self.embeddings_reindex: dict[str, Any] = {}
        self.birdseye_layout: dict[str, Any] = {}
        self.audio_transcription_state: str = "idle"
@@ -180,6 +182,19 @@ class Dispatcher:
        def handle_model_state() -> None:
            self.publish("model_state", json.dumps(self.model_state.copy()))

        def handle_update_job_state() -> None:
            if payload and isinstance(payload, dict):
                job_type = payload.get("job_type")
                if job_type:
                    self.job_state[job_type] = payload
                    self.publish(
                        "job_state",
                        json.dumps(self.job_state),
                    )

        def handle_job_state() -> None:
            self.publish("job_state", json.dumps(self.job_state.copy()))

        def handle_update_audio_transcription_state() -> None:
            if payload:
                self.audio_transcription_state = payload
@@ -277,6 +292,7 @@ class Dispatcher:
            UPDATE_EVENT_DESCRIPTION: handle_update_event_description,
            UPDATE_REVIEW_DESCRIPTION: handle_update_review_description,
            UPDATE_MODEL_STATE: handle_update_model_state,
            UPDATE_JOB_STATE: handle_update_job_state,
            UPDATE_EMBEDDINGS_REINDEX_PROGRESS: handle_update_embeddings_reindex_progress,
            UPDATE_BIRDSEYE_LAYOUT: handle_update_birdseye_layout,
            UPDATE_AUDIO_TRANSCRIPTION_STATE: handle_update_audio_transcription_state,
@@ -284,6 +300,7 @@ class Dispatcher:
            "restart": handle_restart,
            "embeddingsReindexProgress": handle_embeddings_reindex_progress,
            "modelState": handle_model_state,
            "jobState": handle_job_state,
            "audioTranscriptionState": handle_audio_transcription_state,
            "birdseyeLayout": handle_birdseye_layout,
            "onConnect": handle_on_connect,
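The `job_state` topic carries a JSON object keyed by job type. A plausible payload for a running media sync job looks like the following; the values are illustrative, and the field set follows `Job.to_dict()` and the `MediaSyncJob` dataclass introduced later in this diff:

```python
# Illustrative "job_state" publish from handle_update_job_state
{
    "media_sync": {
        "id": "1a2b3c4d5e6f",  # 12-char job id
        "job_type": "media_sync",
        "status": "running",
        "results": None,
        "start_time": 1700000000.0,
        "end_time": None,
        "error_message": None,
        "dry_run": False,
        "media_types": ["all"],
        "force": False,
    }
}
```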
@@ -1,5 +1,5 @@
from enum import Enum
from typing import Optional
from typing import Optional, Union

from pydantic import Field

@@ -70,13 +70,13 @@ class RecordExportConfig(FrigateBaseModel):
    timelapse_args: str = Field(
        default=DEFAULT_TIME_LAPSE_FFMPEG_ARGS, title="Timelapse Args"
    )
    hwaccel_args: Union[str, list[str]] = Field(
        default="auto", title="Export-specific FFmpeg hardware acceleration arguments."
    )


class RecordConfig(FrigateBaseModel):
    enabled: bool = Field(default=False, title="Enable record on all cameras.")
    sync_recordings: bool = Field(
        default=False, title="Sync recordings with disk on startup and once a day."
    )
    expire_interval: int = Field(
        default=60,
        title="Number of minutes to wait between cleanup runs.",
@@ -523,6 +523,14 @@ class FrigateConfig(FrigateBaseModel):
            if camera_config.ffmpeg.hwaccel_args == "auto":
                camera_config.ffmpeg.hwaccel_args = self.ffmpeg.hwaccel_args

            # Resolve export hwaccel_args: camera export -> camera ffmpeg -> global ffmpeg
            # This allows per-camera override for exports (e.g., when camera resolution
            # exceeds hardware encoder limits)
            if camera_config.record.export.hwaccel_args == "auto":
                camera_config.record.export.hwaccel_args = (
                    camera_config.ffmpeg.hwaccel_args
                )

            for input in camera_config.ffmpeg.inputs:
                need_detect_dimensions = "detect" in input.roles and (
                    camera_config.detect.height is None
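The precedence implemented above, restated as a standalone sketch (hypothetical helper, not part of the diff): the export value wins unless it is left at `"auto"`, in which case it inherits the camera ffmpeg value, which itself may already have inherited the global value.

```python
def resolve_export_hwaccel_args(export_args: str, camera_ffmpeg_args: str) -> str:
    # camera_ffmpeg_args has already been resolved against the global
    # ffmpeg.hwaccel_args when it was set to "auto"
    return camera_ffmpeg_args if export_args == "auto" else export_args
```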
@@ -119,6 +119,7 @@ UPDATE_REVIEW_DESCRIPTION = "update_review_description"
UPDATE_MODEL_STATE = "update_model_state"
UPDATE_EMBEDDINGS_REINDEX_PROGRESS = "handle_embeddings_reindex_progress"
UPDATE_BIRDSEYE_LAYOUT = "update_birdseye_layout"
UPDATE_JOB_STATE = "update_job_state"
NOTIFICATION_TEST = "notification_test"

# IO Nice Values
0
frigate/jobs/__init__.py
Normal file
21
frigate/jobs/job.py
Normal file
@@ -0,0 +1,21 @@
"""Generic base class for long-running background jobs."""

import uuid
from dataclasses import asdict, dataclass, field
from typing import Any, Optional


@dataclass
class Job:
    """Base class for long-running background jobs."""

    # short random identifier: the first 12 characters of a UUID4
    id: str = field(default_factory=lambda: str(uuid.uuid4())[:12])
    job_type: str = ""  # Must be set by subclasses
    status: str = "queued"  # queued, running, success, failed, cancelled
    results: Optional[dict[str, Any]] = None
    start_time: Optional[float] = None
    end_time: Optional[float] = None
    error_message: Optional[str] = None

    def to_dict(self) -> dict[str, Any]:
        """Convert to dictionary for WebSocket transmission."""
        return asdict(self)
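A sketch of how a concrete job is expected to subclass this base, pinning `job_type` and adding its own parameters (the job type below is invented for illustration):

```python
from dataclasses import dataclass, field

@dataclass
class ReindexJob(Job):  # hypothetical job type
    job_type: str = "reindex"
    batch_size: int = 100
    cameras: list[str] = field(default_factory=list)
```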
70
frigate/jobs/manager.py
Normal file
@@ -0,0 +1,70 @@
"""Generic job management for long-running background tasks."""

import threading
from typing import Optional

from frigate.jobs.job import Job
from frigate.types import JobStatusTypesEnum

# Global state and locks for enforcing single concurrent job per job type
_job_locks: dict[str, threading.Lock] = {}
_current_jobs: dict[str, Optional[Job]] = {}
# Keep completed jobs for retrieval, keyed by (job_type, job_id)
_completed_jobs: dict[tuple[str, str], Job] = {}


def _get_lock(job_type: str) -> threading.Lock:
    """Get or create a lock for the specified job type."""
    if job_type not in _job_locks:
        _job_locks[job_type] = threading.Lock()
    return _job_locks[job_type]


def set_current_job(job: Job) -> None:
    """Set the current job for a given job type."""
    lock = _get_lock(job.job_type)
    with lock:
        # Store the previous job if it was completed
        old_job = _current_jobs.get(job.job_type)
        if old_job and old_job.status in (
            JobStatusTypesEnum.success,
            JobStatusTypesEnum.failed,
            JobStatusTypesEnum.cancelled,
        ):
            _completed_jobs[(job.job_type, old_job.id)] = old_job
        _current_jobs[job.job_type] = job


def clear_current_job(job_type: str, job_id: Optional[str] = None) -> None:
    """Clear the current job for a given job type, optionally checking the ID."""
    lock = _get_lock(job_type)
    with lock:
        if job_type in _current_jobs:
            current = _current_jobs[job_type]
            if current is None or (job_id is None or current.id == job_id):
                _current_jobs[job_type] = None


def get_current_job(job_type: str) -> Optional[Job]:
    """Get the current running/queued job for a given job type, if any."""
    lock = _get_lock(job_type)
    with lock:
        return _current_jobs.get(job_type)


def get_job_by_id(job_type: str, job_id: str) -> Optional[Job]:
    """Get job by ID. Checks current job first, then completed jobs."""
    lock = _get_lock(job_type)
    with lock:
        # Check if it's the current job
        current = _current_jobs.get(job_type)
        if current and current.id == job_id:
            return current
        # Check if it's a completed job
        return _completed_jobs.get((job_type, job_id))


def job_is_running(job_type: str) -> bool:
    """Check if a job of the given type is currently running or queued."""
    job = get_current_job(job_type)
    return job is not None and job.status in ("queued", "running")
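A minimal sketch of the intended call pattern for these helpers, assuming a `Job` subclass like the hypothetical `ReindexJob` above:

```python
job = ReindexJob()
if not job_is_running(job.job_type):
    set_current_job(job)
    job.status = JobStatusTypesEnum.running
    # ... perform the work, typically in a background thread ...
    job.status = JobStatusTypesEnum.success
    # the finished job stays retrievable by id; it is moved to
    # _completed_jobs when the next job of this type replaces it
    assert get_job_by_id(job.job_type, job.id) is job
```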
135
frigate/jobs/media_sync.py
Normal file
@@ -0,0 +1,135 @@
"""Media sync job management with background execution."""

import logging
import threading
from dataclasses import dataclass, field
from datetime import datetime
from typing import Optional

from frigate.comms.inter_process import InterProcessRequestor
from frigate.const import UPDATE_JOB_STATE
from frigate.jobs.job import Job
from frigate.jobs.manager import (
    get_current_job,
    get_job_by_id,
    job_is_running,
    set_current_job,
)
from frigate.types import JobStatusTypesEnum
from frigate.util.media import sync_all_media

logger = logging.getLogger(__name__)


@dataclass
class MediaSyncJob(Job):
    """In-memory job state for media sync operations."""

    job_type: str = "media_sync"
    dry_run: bool = False
    media_types: list[str] = field(default_factory=lambda: ["all"])
    force: bool = False


class MediaSyncRunner(threading.Thread):
    """Thread-based runner for media sync jobs."""

    def __init__(self, job: MediaSyncJob) -> None:
        super().__init__(daemon=True, name="media_sync")
        self.job = job
        self.requestor = InterProcessRequestor()

    def run(self) -> None:
        """Execute the media sync job and broadcast status updates."""
        try:
            # Update job status to running
            self.job.status = JobStatusTypesEnum.running
            self.job.start_time = datetime.now().timestamp()
            self._broadcast_status()

            # Execute sync with provided parameters
            logger.debug(
                f"Starting media sync job {self.job.id}: "
                f"media_types={self.job.media_types}, "
                f"dry_run={self.job.dry_run}, "
                f"force={self.job.force}"
            )

            results = sync_all_media(
                dry_run=self.job.dry_run,
                media_types=self.job.media_types,
                force=self.job.force,
            )

            # Store results and mark as complete
            self.job.results = results.to_dict()
            self.job.status = JobStatusTypesEnum.success
            self.job.end_time = datetime.now().timestamp()

            logger.debug(f"Media sync job {self.job.id} completed successfully")
            self._broadcast_status()

        except Exception as e:
            logger.error(f"Media sync job {self.job.id} failed: {e}", exc_info=True)
            self.job.status = JobStatusTypesEnum.failed
            self.job.error_message = str(e)
            self.job.end_time = datetime.now().timestamp()
            self._broadcast_status()

        finally:
            if self.requestor:
                self.requestor.stop()

    def _broadcast_status(self) -> None:
        """Broadcast job status update via IPC to all WebSocket subscribers."""
        try:
            self.requestor.send_data(
                UPDATE_JOB_STATE,
                self.job.to_dict(),
            )
        except Exception as e:
            logger.warning(f"Failed to broadcast media sync status: {e}")


def start_media_sync_job(
    dry_run: bool = False,
    media_types: Optional[list[str]] = None,
    force: bool = False,
) -> Optional[str]:
    """Start a new media sync job if none is currently running.

    Returns job ID on success, None if a job is already running.
    """
    # Check if a job is already running
    if job_is_running("media_sync"):
        current = get_current_job("media_sync")
        logger.warning(
            f"Media sync job {current.id} is already running. Rejecting new request."
        )
        return None

    # Create and start new job
    job = MediaSyncJob(
        dry_run=dry_run,
        media_types=media_types or ["all"],
        force=force,
    )

    logger.debug(f"Creating new media sync job: {job.id}")
    set_current_job(job)

    # Start the background runner
    runner = MediaSyncRunner(job)
    runner.start()

    return job.id


def get_current_media_sync_job() -> Optional[MediaSyncJob]:
    """Get the current running/queued media sync job, if any."""
    return get_current_job("media_sync")


def get_media_sync_job_by_id(job_id: str) -> Optional[MediaSyncJob]:
    """Get media sync job by ID (the current job first, then completed jobs)."""
    return get_job_by_id("media_sync", job_id)
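Putting the module together, a sketch of starting a dry-run sync and polling it (names are the module's own):

```python
job_id = start_media_sync_job(dry_run=True, media_types=["recordings"])
if job_id is None:
    print("a media sync job is already running")
else:
    job = get_media_sync_job_by_id(job_id)
    print(job.status, job.results)  # e.g. "running", None until completion
```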
@@ -80,6 +80,14 @@ class Recordings(Model):
    regions = IntegerField(null=True)


class ExportCase(Model):
    id = CharField(null=False, primary_key=True, max_length=30)
    name = CharField(index=True, max_length=100)
    description = TextField(null=True)
    created_at = DateTimeField()
    updated_at = DateTimeField()


class Export(Model):
    id = CharField(null=False, primary_key=True, max_length=30)
    camera = CharField(index=True, max_length=20)
@@ -88,6 +96,12 @@ class Export(Model):
    video_path = CharField(unique=True)
    thumb_path = CharField(unique=True)
    in_progress = BooleanField()
    export_case = ForeignKeyField(
        ExportCase,
        null=True,
        backref="exports",
        column_name="export_case_id",
    )


class ReviewSegment(Model):
@@ -13,9 +13,8 @@ from playhouse.sqlite_ext import SqliteExtDatabase
from frigate.config import CameraConfig, FrigateConfig, RetainModeEnum
from frigate.const import CACHE_DIR, CLIPS_DIR, MAX_WAL_SIZE, RECORD_DIR
from frigate.models import Previews, Recordings, ReviewSegment, UserReviewStatus
from frigate.record.util import remove_empty_directories, sync_recordings
from frigate.util.builtin import clear_and_unlink
from frigate.util.time import get_tomorrow_at_time
from frigate.util.media import remove_empty_directories

logger = logging.getLogger(__name__)

@@ -347,11 +346,6 @@ class RecordingCleanup(threading.Thread):
        logger.debug("End expire recordings.")

    def run(self) -> None:
        # on startup sync recordings with disk if enabled
        if self.config.record.sync_recordings:
            sync_recordings(limited=False)
        next_sync = get_tomorrow_at_time(3)

        # Expire tmp clips every minute, recordings and clean directories every hour.
        for counter in itertools.cycle(range(self.config.record.expire_interval)):
            if self.stop_event.wait(60):
@@ -360,14 +354,6 @@ class RecordingCleanup(threading.Thread):

            self.clean_tmp_previews()

            if (
                self.config.record.sync_recordings
                and datetime.datetime.now().astimezone(datetime.timezone.utc)
                > next_sync
            ):
                sync_recordings(limited=True)
                next_sync = get_tomorrow_at_time(3)

            if counter == 0:
                self.clean_tmp_clips()
                self.expire_recordings()
@@ -64,6 +64,7 @@ class RecordingExporter(threading.Thread):
        end_time: int,
        playback_factor: PlaybackFactorEnum,
        playback_source: PlaybackSourceEnum,
        export_case_id: Optional[str] = None,
    ) -> None:
        super().__init__()
        self.config = config
@@ -75,6 +76,7 @@ class RecordingExporter(threading.Thread):
        self.end_time = end_time
        self.playback_factor = playback_factor
        self.playback_source = playback_source
        self.export_case_id = export_case_id

        # ensure export thumb dir
        Path(os.path.join(CLIPS_DIR, "export")).mkdir(exist_ok=True)
@@ -226,7 +228,7 @@ class RecordingExporter(threading.Thread):
        ffmpeg_cmd = (
            parse_preset_hardware_acceleration_encode(
                self.config.ffmpeg.ffmpeg_path,
                self.config.ffmpeg.hwaccel_args,
                self.config.cameras[self.camera].record.export.hwaccel_args,
                f"-an {ffmpeg_input}",
                f"{self.config.cameras[self.camera].record.export.timelapse_args} -movflags +faststart",
                EncodeTypeEnum.timelapse,
@@ -317,7 +319,7 @@ class RecordingExporter(threading.Thread):
        ffmpeg_cmd = (
            parse_preset_hardware_acceleration_encode(
                self.config.ffmpeg.ffmpeg_path,
                self.config.ffmpeg.hwaccel_args,
                self.config.cameras[self.camera].record.export.hwaccel_args,
                f"{TIMELAPSE_DATA_INPUT_ARGS} {ffmpeg_input}",
                f"{self.config.cameras[self.camera].record.export.timelapse_args} -movflags +faststart {video_path}",
                EncodeTypeEnum.timelapse,
@@ -348,17 +350,20 @@ class RecordingExporter(threading.Thread):
        video_path = f"{EXPORT_DIR}/{self.camera}_{filename_start_datetime}-{filename_end_datetime}_{cleaned_export_id}.mp4"
        thumb_path = self.save_thumbnail(self.export_id)

        Export.insert(
            {
                Export.id: self.export_id,
                Export.camera: self.camera,
                Export.name: export_name,
                Export.date: self.start_time,
                Export.video_path: video_path,
                Export.thumb_path: thumb_path,
                Export.in_progress: True,
            }
        ).execute()
        export_values = {
            Export.id: self.export_id,
            Export.camera: self.camera,
            Export.name: export_name,
            Export.date: self.start_time,
            Export.video_path: video_path,
            Export.thumb_path: thumb_path,
            Export.in_progress: True,
        }

        if self.export_case_id is not None:
            export_values[Export.export_case] = self.export_case_id

        Export.insert(export_values).execute()

        try:
            if self.playback_source == PlaybackSourceEnum.recordings:
@@ -1,147 +0,0 @@
"""Recordings Utilities."""

import datetime
import logging
import os

from peewee import DatabaseError, chunked

from frigate.const import RECORD_DIR
from frigate.models import Recordings, RecordingsToDelete

logger = logging.getLogger(__name__)


def remove_empty_directories(directory: str) -> None:
    # list all directories recursively and sort them by path,
    # longest first
    paths = sorted(
        [x[0] for x in os.walk(directory)],
        key=lambda p: len(str(p)),
        reverse=True,
    )
    for path in paths:
        # don't delete the parent
        if path == directory:
            continue
        if len(os.listdir(path)) == 0:
            os.rmdir(path)


def sync_recordings(limited: bool) -> None:
    """Check the db for stale recordings entries that don't exist in the filesystem."""

    def delete_db_entries_without_file(check_timestamp: float) -> bool:
        """Delete db entries where file was deleted outside of frigate."""
        if limited:
            recordings = Recordings.select(Recordings.id, Recordings.path).where(
                Recordings.start_time >= check_timestamp
            )
        else:
            # get all recordings in the db
            recordings = Recordings.select(Recordings.id, Recordings.path)

        # Use pagination to process records in chunks
        page_size = 1000
        num_pages = (recordings.count() + page_size - 1) // page_size
        recordings_to_delete = set()

        for page in range(num_pages):
            for recording in recordings.paginate(page, page_size):
                if not os.path.exists(recording.path):
                    recordings_to_delete.add(recording.id)

        if len(recordings_to_delete) == 0:
            return True

        logger.info(
            f"Deleting {len(recordings_to_delete)} recording DB entries with missing files"
        )

        # convert back to list of dictionaries for insertion
        recordings_to_delete = [
            {"id": recording_id} for recording_id in recordings_to_delete
        ]

        if float(len(recordings_to_delete)) / max(1, recordings.count()) > 0.5:
            logger.warning(
                f"Deleting {(len(recordings_to_delete) / max(1, recordings.count()) * 100):.2f}% of recordings DB entries, could be due to configuration error. Aborting..."
            )
            return False

        # create a temporary table for deletion
        RecordingsToDelete.create_table(temporary=True)

        # insert ids to the temporary table
        max_inserts = 1000
        for batch in chunked(recordings_to_delete, max_inserts):
            RecordingsToDelete.insert_many(batch).execute()

        try:
            # delete records in the main table that exist in the temporary table
            query = Recordings.delete().where(
                Recordings.id.in_(RecordingsToDelete.select(RecordingsToDelete.id))
            )
            query.execute()
        except DatabaseError as e:
            logger.error(f"Database error during recordings db cleanup: {e}")

        return True

    def delete_files_without_db_entry(files_on_disk: list[str]):
        """Delete files where file is not inside frigate db."""
        files_to_delete = []

        for file in files_on_disk:
            if not Recordings.select().where(Recordings.path == file).exists():
                files_to_delete.append(file)

        if len(files_to_delete) == 0:
            return True

        logger.info(
            f"Deleting {len(files_to_delete)} recordings files with missing DB entries"
        )

        if float(len(files_to_delete)) / max(1, len(files_on_disk)) > 0.5:
            logger.debug(
                f"Deleting {(len(files_to_delete) / max(1, len(files_on_disk)) * 100):.2f}% of recordings DB entries, could be due to configuration error. Aborting..."
            )
            return False

        for file in files_to_delete:
            os.unlink(file)

        return True

    logger.debug("Start sync recordings.")

    # start checking on the hour 36 hours ago
    check_point = datetime.datetime.now().replace(
        minute=0, second=0, microsecond=0
    ).astimezone(datetime.timezone.utc) - datetime.timedelta(hours=36)
    db_success = delete_db_entries_without_file(check_point.timestamp())

    # only try to cleanup files if db cleanup was successful
    if db_success:
        if limited:
            # get recording files from last 36 hours
            hour_check = f"{RECORD_DIR}/{check_point.strftime('%Y-%m-%d/%H')}"
            files_on_disk = {
                os.path.join(root, file)
                for root, _, files in os.walk(RECORD_DIR)
                for file in files
                if root > hour_check
            }
        else:
            # get all recordings files on disk and put them in a set
            files_on_disk = {
                os.path.join(root, file)
                for root, _, files in os.walk(RECORD_DIR)
                for file in files
            }

        delete_files_without_db_entry(files_on_disk)

    logger.debug("End sync recordings.")
@@ -22,6 +22,7 @@ from frigate.util.services import (
    get_bandwidth_stats,
    get_cpu_stats,
    get_fs_type,
    get_hailo_temps,
    get_intel_gpu_stats,
    get_jetson_stats,
    get_nvidia_gpu_stats,
@@ -91,9 +92,80 @@ def get_temperatures() -> dict[str, float]:
        if temp is not None:
            temps[apex] = temp

    # Get temperatures for Hailo devices
    temps.update(get_hailo_temps())

    return temps


def get_detector_temperature(
    detector_type: str,
    detector_index_by_type: dict[str, int],
) -> Optional[float]:
    """Get temperature for a specific detector based on its type."""
    if detector_type == "edgetpu":
        # Get temperatures for all attached Corals
        base = "/sys/class/apex/"
        if os.path.isdir(base):
            apex_devices = sorted(os.listdir(base))
            index = detector_index_by_type.get("edgetpu", 0)
            if index < len(apex_devices):
                apex_name = apex_devices[index]
                temp = read_temperature(os.path.join(base, apex_name, "temp"))
                if temp is not None:
                    return temp
    elif detector_type == "hailo8l":
        # Get temperatures for Hailo devices
        hailo_temps = get_hailo_temps()
        if hailo_temps:
            hailo_device_names = sorted(hailo_temps.keys())
            index = detector_index_by_type.get("hailo8l", 0)
            if index < len(hailo_device_names):
                device_name = hailo_device_names[index]
                return hailo_temps[device_name]
    elif detector_type == "rknn":
        # Rockchip temperatures are handled by the GPU / NPU stats
        # as there are no detector-specific temperatures
        pass

    return None


def get_detector_stats(
    stats_tracking: StatsTrackingTypes,
) -> dict[str, dict[str, Any]]:
    """Get stats for all detectors, including temperatures based on detector type."""
    detector_stats: dict[str, dict[str, Any]] = {}
    detector_type_indices: dict[str, int] = {}

    for name, detector in stats_tracking["detectors"].items():
        pid = detector.detect_process.pid if detector.detect_process else None
        detector_type = detector.detector_config.type

        # Keep track of the index for each detector type to match temperatures correctly
        current_index = detector_type_indices.get(detector_type, 0)
        detector_type_indices[detector_type] = current_index + 1

        detector_stat = {
            "inference_speed": round(detector.avg_inference_speed.value * 1000, 2),  # type: ignore[attr-defined]
            # issue https://github.com/python/typeshed/issues/8799
            # from mypy 0.981 onwards
            "detection_start": detector.detection_start.value,  # type: ignore[attr-defined]
            # issue https://github.com/python/typeshed/issues/8799
            # from mypy 0.981 onwards
            "pid": pid,
        }

        temp = get_detector_temperature(detector_type, {detector_type: current_index})

        if temp is not None:
            detector_stat["temperature"] = round(temp, 1)

        detector_stats[name] = detector_stat

    return detector_stats


def get_processing_stats(
    config: FrigateConfig, stats: dict[str, str], hwaccel_errors: list[str]
) -> None:
@@ -174,6 +246,7 @@ async def set_gpu_stats(
                "mem": str(round(float(nvidia_usage[i]["mem"]), 2)) + "%",
                "enc": str(round(float(nvidia_usage[i]["enc"]), 2)) + "%",
                "dec": str(round(float(nvidia_usage[i]["dec"]), 2)) + "%",
                "temp": str(nvidia_usage[i]["temp"]),
            }

        else:
@@ -279,6 +352,32 @@ def stats_snapshot(
            if camera_stats.capture_process_pid.value
            else None
        )
        # Calculate connection quality based on current state
        # This is computed at stats-collection time so offline cameras
        # correctly show as unusable rather than excellent
        expected_fps = config.cameras[name].detect.fps
        current_fps = camera_stats.camera_fps.value
        reconnects = camera_stats.reconnects_last_hour.value
        stalls = camera_stats.stalls_last_hour.value

        if current_fps < 0.1:
            quality_str = "unusable"
        elif reconnects == 0 and current_fps >= 0.9 * expected_fps and stalls < 5:
            quality_str = "excellent"
        elif reconnects <= 2 and current_fps >= 0.6 * expected_fps:
            quality_str = "fair"
        elif reconnects > 10 or current_fps < 1.0 or stalls > 100:
            quality_str = "unusable"
        else:
            quality_str = "poor"

        connection_quality = {
            "connection_quality": quality_str,
            "expected_fps": expected_fps,
            "reconnects_last_hour": reconnects,
            "stalls_last_hour": stalls,
        }

        stats["cameras"][name] = {
            "camera_fps": round(camera_stats.camera_fps.value, 2),
            "process_fps": round(camera_stats.process_fps.value, 2),
@@ -290,20 +389,10 @@ def stats_snapshot(
            "ffmpeg_pid": ffmpeg_pid,
            "audio_rms": round(camera_stats.audio_rms.value, 4),
            "audio_dBFS": round(camera_stats.audio_dBFS.value, 4),
            **connection_quality,
        }

    stats["detectors"] = {}
    for name, detector in stats_tracking["detectors"].items():
        pid = detector.detect_process.pid if detector.detect_process else None
        stats["detectors"][name] = {
            "inference_speed": round(detector.avg_inference_speed.value * 1000, 2),  # type: ignore[attr-defined]
            # issue https://github.com/python/typeshed/issues/8799
            # from mypy 0.981 onwards
            "detection_start": detector.detection_start.value,  # type: ignore[attr-defined]
            # issue https://github.com/python/typeshed/issues/8799
            # from mypy 0.981 onwards
            "pid": pid,
        }
    stats["detectors"] = get_detector_stats(stats_tracking)
    stats["camera_fps"] = round(total_camera_fps, 2)
    stats["process_fps"] = round(total_process_fps, 2)
    stats["skipped_fps"] = round(total_skipped_fps, 2)
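To make the thresholds above concrete, here is a standalone restatement of the branch logic with a few worked cases (a sketch for unit testing, not part of the diff):

```python
def classify_connection_quality(
    current_fps: float, expected_fps: float, reconnects: int, stalls: int
) -> str:
    if current_fps < 0.1:
        return "unusable"
    if reconnects == 0 and current_fps >= 0.9 * expected_fps and stalls < 5:
        return "excellent"
    if reconnects <= 2 and current_fps >= 0.6 * expected_fps:
        return "fair"
    if reconnects > 10 or current_fps < 1.0 or stalls > 100:
        return "unusable"
    return "poor"

# 4.8/5 fps, no reconnects, 2 stalls -> excellent (4.8 >= 0.9 * 5 and 2 < 5)
assert classify_connection_quality(4.8, 5, 0, 2) == "excellent"
# 3.2/5 fps with 2 reconnects -> fair (2 <= 2 and 3.2 >= 0.6 * 5)
assert classify_connection_quality(3.2, 5, 2, 20) == "fair"
# near-zero fps is always unusable
assert classify_connection_quality(0.05, 5, 0, 0) == "unusable"
```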
@@ -389,7 +478,6 @@ def stats_snapshot(
        "version": VERSION,
        "latest_version": stats_tracking["latest_frigate_version"],
        "storage": {},
        "temperatures": get_temperatures(),
        "last_updated": int(time.time()),
    }
@@ -26,6 +26,15 @@ class ModelStatusTypesEnum(str, Enum):
    failed = "failed"


class JobStatusTypesEnum(str, Enum):
    pending = "pending"
    queued = "queued"
    running = "running"
    success = "success"
    failed = "failed"
    cancelled = "cancelled"


class TrackedObjectUpdateTypesEnum(str, Enum):
    description = "description"
    face = "face"
@@ -13,7 +13,7 @@ from frigate.util.services import get_video_properties

logger = logging.getLogger(__name__)

CURRENT_CONFIG_VERSION = "0.17-0"
CURRENT_CONFIG_VERSION = "0.18-0"
DEFAULT_CONFIG_FILE = os.path.join(CONFIG_DIR, "config.yml")


@@ -98,6 +98,13 @@ def migrate_frigate_config(config_file: str):
            yaml.dump(new_config, f)
        previous_version = "0.17-0"

    if previous_version < "0.18-0":
        logger.info(f"Migrating frigate config from {previous_version} to 0.18-0...")
        new_config = migrate_018_0(config)
        with open(config_file, "w") as f:
            yaml.dump(new_config, f)
        previous_version = "0.18-0"

    logger.info("Finished frigate config migration...")


@@ -427,6 +434,27 @@ def migrate_017_0(config: dict[str, dict[str, Any]]) -> dict[str, dict[str, Any]
    return new_config


def migrate_018_0(config: dict[str, dict[str, Any]]) -> dict[str, dict[str, Any]]:
    """Handle migrating frigate config to 0.18-0"""
    new_config = config.copy()

    # Remove deprecated sync_recordings from global record config
    if new_config.get("record", {}).get("sync_recordings") is not None:
        del new_config["record"]["sync_recordings"]

    # Remove deprecated sync_recordings from camera-specific record configs
    for name, camera in config.get("cameras", {}).items():
        camera_config: dict[str, dict[str, Any]] = camera.copy()

        if camera_config.get("record", {}).get("sync_recordings") is not None:
            del camera_config["record"]["sync_recordings"]

        new_config["cameras"][name] = camera_config

    new_config["version"] = "0.18-0"
    return new_config


def get_relative_coordinates(
    mask: Optional[Union[str, list]], frame_shape: tuple[int, int]
) -> Union[str, list]:
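A quick sanity check of the migration's behavior (a standalone sketch; assumes `migrate_018_0` is importable from the module above):

```python
before = {
    "record": {"enabled": True, "sync_recordings": True},
    "cameras": {"front": {"record": {"sync_recordings": True}}},
}
after = migrate_018_0(before)
assert "sync_recordings" not in after["record"]
assert "sync_recordings" not in after["cameras"]["front"]["record"]
assert after["version"] == "0.18-0"
```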
789
frigate/util/media.py
Normal file
@@ -0,0 +1,789 @@
"""Media file sync utilities."""

import datetime
import logging
import os
from dataclasses import dataclass, field

from peewee import DatabaseError, chunked

from frigate.const import CLIPS_DIR, EXPORT_DIR, RECORD_DIR, THUMB_DIR
from frigate.models import (
    Event,
    Export,
    Previews,
    Recordings,
    RecordingsToDelete,
    ReviewSegment,
)

logger = logging.getLogger(__name__)


# Safety threshold - abort if more than 50% of files would be deleted
SAFETY_THRESHOLD = 0.5


@dataclass
class SyncResult:
    """Result of a sync operation."""

    media_type: str
    files_checked: int = 0
    orphans_found: int = 0
    orphans_deleted: int = 0
    orphan_paths: list[str] = field(default_factory=list)
    aborted: bool = False
    error: str | None = None

    def to_dict(self) -> dict:
        return {
            "media_type": self.media_type,
            "files_checked": self.files_checked,
            "orphans_found": self.orphans_found,
            "orphans_deleted": self.orphans_deleted,
            "aborted": self.aborted,
            "error": self.error,
        }


def remove_empty_directories(directory: str) -> None:
    # list all directories recursively and sort them by path,
    # longest first
    paths = sorted(
        [x[0] for x in os.walk(directory)],
        key=lambda p: len(str(p)),
        reverse=True,
    )
    for path in paths:
        # don't delete the parent
        if path == directory:
            continue
        if len(os.listdir(path)) == 0:
            os.rmdir(path)
def sync_recordings(
    limited: bool = False, dry_run: bool = False, force: bool = False
) -> SyncResult:
    """Sync recordings between the database and disk using the SyncResult format."""
    result = SyncResult(media_type="recordings")

    try:
        logger.debug("Start sync recordings.")

        # start checking on the hour 36 hours ago
        check_point = datetime.datetime.now().replace(
            minute=0, second=0, microsecond=0
        ).astimezone(datetime.timezone.utc) - datetime.timedelta(hours=36)

        # Gather DB recordings to inspect
        if limited:
            recordings_query = Recordings.select(Recordings.id, Recordings.path).where(
                Recordings.start_time >= check_point.timestamp()
            )
        else:
            recordings_query = Recordings.select(Recordings.id, Recordings.path)

        recordings_count = recordings_query.count()
        page_size = 1000
        num_pages = (recordings_count + page_size - 1) // page_size
        recordings_to_delete: list[dict] = []

        for page in range(num_pages):
            for recording in recordings_query.paginate(page, page_size):
                if not os.path.exists(recording.path):
                    recordings_to_delete.append(
                        {"id": recording.id, "path": recording.path}
                    )

        result.files_checked += recordings_count
        result.orphans_found += len(recordings_to_delete)
        result.orphan_paths.extend(
            [
                recording["path"]
                for recording in recordings_to_delete
                if recording.get("path")
            ]
        )

        if (
            recordings_count
            and len(recordings_to_delete) / recordings_count > SAFETY_THRESHOLD
        ):
            if force:
                logger.warning(
                    f"Deleting {(len(recordings_to_delete) / max(1, recordings_count) * 100):.2f}% of recordings DB entries (force=True, bypassing safety threshold)"
                )
            else:
                logger.warning(
                    f"Deleting {(len(recordings_to_delete) / max(1, recordings_count) * 100):.2f}% of recordings DB entries, could be due to configuration error. Aborting..."
                )
                result.aborted = True
                return result

        if recordings_to_delete and not dry_run:
            logger.info(
                f"Deleting {len(recordings_to_delete)} recording DB entries with missing files"
            )

            RecordingsToDelete.create_table(temporary=True)

            max_inserts = 1000
            for batch in chunked(recordings_to_delete, max_inserts):
                RecordingsToDelete.insert_many(batch).execute()

            try:
                deleted = (
                    Recordings.delete()
                    .where(
                        Recordings.id.in_(
                            RecordingsToDelete.select(RecordingsToDelete.id)
                        )
                    )
                    .execute()
                )
                result.orphans_deleted += int(deleted)
            except DatabaseError as e:
                logger.error(f"Database error during recordings db cleanup: {e}")
                result.error = str(e)
                result.aborted = True
                return result

        if result.aborted:
            logger.warning("Recording DB sync aborted; skipping file cleanup.")
            return result

        # Only try to cleanup files if db cleanup was successful or dry_run
        if limited:
            # get recording files from last 36 hours
            hour_check = f"{RECORD_DIR}/{check_point.strftime('%Y-%m-%d/%H')}"
            files_on_disk = {
                os.path.join(root, file)
                for root, _, files in os.walk(RECORD_DIR)
                for file in files
                if root > hour_check
            }
        else:
            # get all recordings files on disk and put them in a set
            files_on_disk = {
                os.path.join(root, file)
                for root, _, files in os.walk(RECORD_DIR)
                for file in files
            }

        result.files_checked += len(files_on_disk)

        files_to_delete: list[str] = []
        for file in files_on_disk:
            if not Recordings.select().where(Recordings.path == file).exists():
                files_to_delete.append(file)

        result.orphans_found += len(files_to_delete)
        result.orphan_paths.extend(files_to_delete)

        if (
            files_on_disk
            and len(files_to_delete) / len(files_on_disk) > SAFETY_THRESHOLD
        ):
            if force:
                logger.warning(
                    f"Deleting {(len(files_to_delete) / max(1, len(files_on_disk)) * 100):.2f}% of recordings files (force=True, bypassing safety threshold)"
                )
            else:
                logger.warning(
                    f"Deleting {(len(files_to_delete) / max(1, len(files_on_disk)) * 100):.2f}% of recordings files, could be due to configuration error. Aborting..."
                )
                result.aborted = True
                return result

        if dry_run:
            logger.info(
                f"Recordings sync (dry run): Found {len(files_to_delete)} orphaned files"
            )
            return result

        # Delete orphans
        logger.info(f"Deleting {len(files_to_delete)} orphaned recordings files")
        for file in files_to_delete:
            try:
                os.unlink(file)
                result.orphans_deleted += 1
            except OSError as e:
                logger.error(f"Failed to delete {file}: {e}")

        logger.debug("End sync recordings.")

    except Exception as e:
        logger.error(f"Error syncing recordings: {e}")
        result.error = str(e)

    return result
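A sketch of previewing what a full recordings sync would remove, using the dry-run path above:

```python
report = sync_recordings(limited=False, dry_run=True)
print(report.orphans_found, report.orphan_paths[:5])  # nothing is deleted
```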
def sync_event_snapshots(dry_run: bool = False, force: bool = False) -> SyncResult:
    """Sync event snapshots - delete files not referenced by any event.

    Event snapshots are stored at: CLIPS_DIR/{camera}-{event_id}.jpg
    Also checks for clean variants: {camera}-{event_id}-clean.webp and -clean.png
    """
    result = SyncResult(media_type="event_snapshots")

    try:
        # Get all event IDs with snapshots from DB
        events_with_snapshots = set(
            f"{e.camera}-{e.id}"
            for e in Event.select(Event.id, Event.camera).where(
                Event.has_snapshot == True
            )
        )

        # Find snapshot files on disk (directly in CLIPS_DIR, not subdirectories)
        snapshot_files: list[tuple[str, str]] = []  # (full_path, base_name)
        if os.path.isdir(CLIPS_DIR):
            for file in os.listdir(CLIPS_DIR):
                file_path = os.path.join(CLIPS_DIR, file)
                if os.path.isfile(file_path) and file.endswith(
                    (".jpg", "-clean.webp", "-clean.png")
                ):
                    # Extract base name (camera-event_id) from filename
                    base_name = file
                    for suffix in ["-clean.webp", "-clean.png", ".jpg"]:
                        if file.endswith(suffix):
                            base_name = file[: -len(suffix)]
                            break
                    snapshot_files.append((file_path, base_name))

        result.files_checked = len(snapshot_files)

        # Find orphans
        orphans: list[str] = []
        for file_path, base_name in snapshot_files:
            if base_name not in events_with_snapshots:
                orphans.append(file_path)

        result.orphans_found = len(orphans)
        result.orphan_paths = orphans

        if len(orphans) == 0:
            return result

        # Safety check
        if (
            result.files_checked > 0
            and len(orphans) / result.files_checked > SAFETY_THRESHOLD
        ):
            if force:
                logger.warning(
                    f"Event snapshots sync: Would delete {len(orphans)}/{result.files_checked} "
                    f"({len(orphans) / result.files_checked * 100:.2f}%) files (force=True, bypassing safety threshold)."
                )
            else:
                logger.warning(
                    f"Event snapshots sync: Would delete {len(orphans)}/{result.files_checked} "
                    f"({len(orphans) / result.files_checked * 100:.2f}%) files. "
                    "Aborting due to safety threshold."
                )
                result.aborted = True
                return result

        if dry_run:
            logger.info(
                f"Event snapshots sync (dry run): Found {len(orphans)} orphaned files"
            )
            return result

        # Delete orphans
        logger.info(f"Deleting {len(orphans)} orphaned event snapshot files")
        for file_path in orphans:
            try:
                os.unlink(file_path)
                result.orphans_deleted += 1
            except OSError as e:
                logger.error(f"Failed to delete {file_path}: {e}")

    except Exception as e:
        logger.error(f"Error syncing event snapshots: {e}")
        result.error = str(e)

    return result
def sync_event_thumbnails(dry_run: bool = False, force: bool = False) -> SyncResult:
    """Sync event thumbnails - delete files not referenced by any event.

    Event thumbnails are stored at: THUMB_DIR/{camera}/{event_id}.webp
    Only events without an inline thumbnail (thumbnail field is None/empty) use files.
    """
    result = SyncResult(media_type="event_thumbnails")

    try:
        # Get all events that use file-based thumbnails
        # Events with the thumbnail field populated don't need files
        events_with_file_thumbs = set(
            (e.camera, e.id)
            for e in Event.select(Event.id, Event.camera, Event.thumbnail).where(
                (Event.thumbnail.is_null(True)) | (Event.thumbnail == "")
            )
        )

        # Find thumbnail files on disk
        thumbnail_files: list[
            tuple[str, str, str]
        ] = []  # (full_path, camera, event_id)
        if os.path.isdir(THUMB_DIR):
            for camera_dir in os.listdir(THUMB_DIR):
                camera_path = os.path.join(THUMB_DIR, camera_dir)
                if not os.path.isdir(camera_path):
                    continue
                for file in os.listdir(camera_path):
                    if file.endswith(".webp"):
                        event_id = file[:-5]  # Remove .webp
                        file_path = os.path.join(camera_path, file)
                        thumbnail_files.append((file_path, camera_dir, event_id))

        result.files_checked = len(thumbnail_files)

        # Find orphans - files where the event doesn't exist or has an inline thumbnail
        orphans: list[str] = []
        for file_path, camera, event_id in thumbnail_files:
            if (camera, event_id) not in events_with_file_thumbs:
                event_exists = Event.select().where(Event.id == event_id).exists()
                if not event_exists:
                    orphans.append(file_path)
                else:
                    # If the event exists with an inline thumbnail, the file is also orphaned
                    event = Event.get_or_none(Event.id == event_id)
                    if event and event.thumbnail:
                        orphans.append(file_path)

        result.orphans_found = len(orphans)
        result.orphan_paths = orphans

        if len(orphans) == 0:
            return result

        # Safety check
        if (
            result.files_checked > 0
            and len(orphans) / result.files_checked > SAFETY_THRESHOLD
        ):
            if force:
                logger.warning(
                    f"Event thumbnails sync: Would delete {len(orphans)}/{result.files_checked} "
                    f"({len(orphans) / result.files_checked * 100:.2f}%) files (force=True, bypassing safety threshold)."
                )
            else:
                logger.warning(
                    f"Event thumbnails sync: Would delete {len(orphans)}/{result.files_checked} "
                    f"({len(orphans) / result.files_checked * 100:.2f}%) files. "
                    "Aborting due to safety threshold."
                )
                result.aborted = True
                return result

        if dry_run:
            logger.info(
                f"Event thumbnails sync (dry run): Found {len(orphans)} orphaned files"
            )
            return result

        # Delete orphans
        logger.info(f"Deleting {len(orphans)} orphaned event thumbnail files")
        for file_path in orphans:
            try:
                os.unlink(file_path)
                result.orphans_deleted += 1
            except OSError as e:
                logger.error(f"Failed to delete {file_path}: {e}")

    except Exception as e:
        logger.error(f"Error syncing event thumbnails: {e}")
        result.error = str(e)

    return result
def sync_review_thumbnails(dry_run: bool = False, force: bool = False) -> SyncResult:
    """Sync review segment thumbnails - delete files not referenced by any review segment.

    Review thumbnails are stored at: CLIPS_DIR/review/thumb-{camera}-{review_id}.webp
    The full path is stored in ReviewSegment.thumb_path
    """
    result = SyncResult(media_type="review_thumbnails")

    try:
        # Get all thumb paths from DB
        review_thumb_paths = set(
            r.thumb_path
            for r in ReviewSegment.select(ReviewSegment.thumb_path)
            if r.thumb_path
        )

        # Find review thumbnail files on disk
        review_dir = os.path.join(CLIPS_DIR, "review")
        thumbnail_files: list[str] = []
        if os.path.isdir(review_dir):
            for file in os.listdir(review_dir):
                if file.startswith("thumb-") and file.endswith(".webp"):
                    file_path = os.path.join(review_dir, file)
                    thumbnail_files.append(file_path)

        result.files_checked = len(thumbnail_files)

        # Find orphans
        orphans: list[str] = []
        for file_path in thumbnail_files:
            if file_path not in review_thumb_paths:
                orphans.append(file_path)

        result.orphans_found = len(orphans)
        result.orphan_paths = orphans

        if len(orphans) == 0:
            return result

        # Safety check
        if (
            result.files_checked > 0
            and len(orphans) / result.files_checked > SAFETY_THRESHOLD
        ):
            if force:
                logger.warning(
                    f"Review thumbnails sync: Would delete {len(orphans)}/{result.files_checked} "
                    f"({len(orphans) / result.files_checked * 100:.2f}%) files (force=True, bypassing safety threshold)."
                )
            else:
                logger.warning(
                    f"Review thumbnails sync: Would delete {len(orphans)}/{result.files_checked} "
                    f"({len(orphans) / result.files_checked * 100:.2f}%) files. "
                    "Aborting due to safety threshold."
                )
                result.aborted = True
                return result

        if dry_run:
            logger.info(
                f"Review thumbnails sync (dry run): Found {len(orphans)} orphaned files"
            )
            return result

        # Delete orphans
        logger.info(f"Deleting {len(orphans)} orphaned review thumbnail files")
        for file_path in orphans:
            try:
                os.unlink(file_path)
                result.orphans_deleted += 1
            except OSError as e:
                logger.error(f"Failed to delete {file_path}: {e}")

    except Exception as e:
        logger.error(f"Error syncing review thumbnails: {e}")
        result.error = str(e)

    return result
def sync_previews(dry_run: bool = False, force: bool = False) -> SyncResult:
    """Sync preview files - delete files not referenced by any preview record.

    Previews are stored at: CLIPS_DIR/previews/{camera}/*.mp4
    The full path is stored in Previews.path
    """
    result = SyncResult(media_type="previews")

    try:
        # Get all preview paths from DB
        preview_paths = set(p.path for p in Previews.select(Previews.path) if p.path)

        # Find preview files on disk
        previews_dir = os.path.join(CLIPS_DIR, "previews")
        preview_files: list[str] = []
        if os.path.isdir(previews_dir):
            for camera_dir in os.listdir(previews_dir):
                camera_path = os.path.join(previews_dir, camera_dir)
                if not os.path.isdir(camera_path):
                    continue
                for file in os.listdir(camera_path):
                    if file.endswith(".mp4"):
                        file_path = os.path.join(camera_path, file)
                        preview_files.append(file_path)

        result.files_checked = len(preview_files)

        # Find orphans
        orphans: list[str] = []
        for file_path in preview_files:
            if file_path not in preview_paths:
                orphans.append(file_path)

        result.orphans_found = len(orphans)
        result.orphan_paths = orphans

        if len(orphans) == 0:
            return result

        # Safety check
        if (
            result.files_checked > 0
            and len(orphans) / result.files_checked > SAFETY_THRESHOLD
        ):
            if force:
                logger.warning(
                    f"Previews sync: Would delete {len(orphans)}/{result.files_checked} "
                    f"({len(orphans) / result.files_checked * 100:.2f}%) files (force=True, bypassing safety threshold)."
                )
            else:
                logger.warning(
                    f"Previews sync: Would delete {len(orphans)}/{result.files_checked} "
                    f"({len(orphans) / result.files_checked * 100:.2f}%) files. "
                    "Aborting due to safety threshold."
                )
                result.aborted = True
                return result

        if dry_run:
            logger.info(f"Previews sync (dry run): Found {len(orphans)} orphaned files")
            return result

        # Delete orphans
        logger.info(f"Deleting {len(orphans)} orphaned preview files")
        for file_path in orphans:
            try:
                os.unlink(file_path)
                result.orphans_deleted += 1
            except OSError as e:
                logger.error(f"Failed to delete {file_path}: {e}")

    except Exception as e:
        logger.error(f"Error syncing previews: {e}")
        result.error = str(e)

    return result
def sync_exports(dry_run: bool = False, force: bool = False) -> SyncResult:
    """Sync export files - delete files not referenced by any export record.

    Export videos are stored at: EXPORT_DIR/*.mp4
    Export thumbnails are stored at: CLIPS_DIR/export/*.jpg
    The paths are stored in Export.video_path and Export.thumb_path
    """
    result = SyncResult(media_type="exports")

    try:
        # Get all export paths from DB
        export_video_paths = set()
        export_thumb_paths = set()
        for e in Export.select(Export.video_path, Export.thumb_path):
            if e.video_path:
                export_video_paths.add(e.video_path)
            if e.thumb_path:
                export_thumb_paths.add(e.thumb_path)

        # Find export video files on disk
        export_files: list[str] = []
        if os.path.isdir(EXPORT_DIR):
            for file in os.listdir(EXPORT_DIR):
                if file.endswith(".mp4"):
                    file_path = os.path.join(EXPORT_DIR, file)
                    export_files.append(file_path)

        # Find export thumbnail files on disk
        export_thumb_dir = os.path.join(CLIPS_DIR, "export")
        thumb_files: list[str] = []
        if os.path.isdir(export_thumb_dir):
            for file in os.listdir(export_thumb_dir):
                if file.endswith(".jpg"):
                    file_path = os.path.join(export_thumb_dir, file)
                    thumb_files.append(file_path)

        result.files_checked = len(export_files) + len(thumb_files)

        # Find orphans
        orphans: list[str] = []
        for file_path in export_files:
            if file_path not in export_video_paths:
                orphans.append(file_path)
        for file_path in thumb_files:
            if file_path not in export_thumb_paths:
                orphans.append(file_path)

        result.orphans_found = len(orphans)
        result.orphan_paths = orphans

        if len(orphans) == 0:
            return result

        # Safety check
        if (
            result.files_checked > 0
            and len(orphans) / result.files_checked > SAFETY_THRESHOLD
        ):
            if force:
                logger.warning(
                    f"Exports sync: Would delete {len(orphans)}/{result.files_checked} "
                    f"({len(orphans) / result.files_checked * 100:.2f}%) files (force=True, bypassing safety threshold)."
                )
            else:
                logger.warning(
                    f"Exports sync: Would delete {len(orphans)}/{result.files_checked} "
                    f"({len(orphans) / result.files_checked * 100:.2f}%) files. "
                    "Aborting due to safety threshold."
                )
                result.aborted = True
                return result

        if dry_run:
            logger.info(f"Exports sync (dry run): Found {len(orphans)} orphaned files")
            return result

        # Delete orphans
        logger.info(f"Deleting {len(orphans)} orphaned export files")
        for file_path in orphans:
            try:
                os.unlink(file_path)
                result.orphans_deleted += 1
            except OSError as e:
                logger.error(f"Failed to delete {file_path}: {e}")

    except Exception as e:
        logger.error(f"Error syncing exports: {e}")
        result.error = str(e)

    return result
@dataclass
|
||||
class MediaSyncResults:
|
||||
"""Combined results from all media sync operations."""
|
||||
|
||||
event_snapshots: SyncResult | None = None
|
||||
event_thumbnails: SyncResult | None = None
|
||||
review_thumbnails: SyncResult | None = None
|
||||
previews: SyncResult | None = None
|
||||
exports: SyncResult | None = None
|
||||
recordings: SyncResult | None = None
|
||||
|
||||
@property
|
||||
def total_files_checked(self) -> int:
|
||||
total = 0
|
||||
for result in [
|
||||
self.event_snapshots,
|
||||
self.event_thumbnails,
|
||||
self.review_thumbnails,
|
||||
self.previews,
|
||||
self.exports,
|
||||
self.recordings,
|
||||
]:
|
||||
if result:
|
||||
total += result.files_checked
|
||||
return total
|
||||
|
||||
@property
|
||||
def total_orphans_found(self) -> int:
|
||||
total = 0
|
||||
for result in [
|
||||
self.event_snapshots,
|
||||
self.event_thumbnails,
|
||||
self.review_thumbnails,
|
||||
self.previews,
|
||||
self.exports,
|
||||
self.recordings,
|
||||
]:
|
||||
if result:
|
||||
total += result.orphans_found
|
||||
return total
|
||||
|
||||
@property
|
||||
def total_orphans_deleted(self) -> int:
|
||||
total = 0
|
||||
for result in [
|
||||
self.event_snapshots,
|
||||
self.event_thumbnails,
|
||||
self.review_thumbnails,
|
||||
self.previews,
|
||||
self.exports,
|
||||
self.recordings,
|
||||
]:
|
||||
if result:
|
||||
total += result.orphans_deleted
|
||||
return total
|
||||
|
||||
def to_dict(self) -> dict:
|
||||
"""Convert results to dictionary for API response."""
|
||||
results = {}
|
||||
for name, result in [
|
||||
("event_snapshots", self.event_snapshots),
|
||||
("event_thumbnails", self.event_thumbnails),
|
||||
("review_thumbnails", self.review_thumbnails),
|
||||
("previews", self.previews),
|
||||
("exports", self.exports),
|
||||
("recordings", self.recordings),
|
||||
]:
|
||||
if result:
|
||||
results[name] = {
|
||||
"files_checked": result.files_checked,
|
||||
"orphans_found": result.orphans_found,
|
||||
"orphans_deleted": result.orphans_deleted,
|
||||
"aborted": result.aborted,
|
||||
"error": result.error,
|
||||
}
|
||||
results["totals"] = {
|
||||
"files_checked": self.total_files_checked,
|
||||
"orphans_found": self.total_orphans_found,
|
||||
"orphans_deleted": self.total_orphans_deleted,
|
||||
}
|
||||
return results
|
||||
|
||||
|
||||
def sync_all_media(
|
||||
dry_run: bool = False, media_types: list[str] = ["all"], force: bool = False
|
||||
) -> MediaSyncResults:
|
||||
"""Sync specified media types with the database.
|
||||
|
||||
Args:
|
||||
dry_run: If True, only report orphans without deleting them.
|
||||
media_types: List of media types to sync. Can include: 'all', 'event_snapshots',
|
||||
'event_thumbnails', 'review_thumbnails', 'previews', 'exports', 'recordings'
|
||||
force: If True, bypass safety threshold checks.
|
||||
|
||||
Returns:
|
||||
MediaSyncResults with details of each sync operation.
|
||||
"""
|
||||
logger.debug(
|
||||
f"Starting media sync (dry_run={dry_run}, media_types={media_types}, force={force})"
|
||||
)
|
||||
|
||||
results = MediaSyncResults()
|
||||
|
||||
# Determine which media types to sync
|
||||
sync_all = "all" in media_types
|
||||
|
||||
if sync_all or "event_snapshots" in media_types:
|
||||
results.event_snapshots = sync_event_snapshots(dry_run=dry_run, force=force)
|
||||
|
||||
if sync_all or "event_thumbnails" in media_types:
|
||||
results.event_thumbnails = sync_event_thumbnails(dry_run=dry_run, force=force)
|
||||
|
||||
if sync_all or "review_thumbnails" in media_types:
|
||||
results.review_thumbnails = sync_review_thumbnails(dry_run=dry_run, force=force)
|
||||
|
||||
if sync_all or "previews" in media_types:
|
||||
results.previews = sync_previews(dry_run=dry_run, force=force)
|
||||
|
||||
if sync_all or "exports" in media_types:
|
||||
results.exports = sync_exports(dry_run=dry_run, force=force)
|
||||
|
||||
if sync_all or "recordings" in media_types:
|
||||
results.recordings = sync_recordings(dry_run=dry_run, force=force)
|
||||
|
||||
logger.info(
|
||||
f"Media sync complete: checked {results.total_files_checked} files, "
|
||||
f"found {results.total_orphans_found} orphans, "
|
||||
f"deleted {results.total_orphans_deleted}"
|
||||
)
|
||||
|
||||
return results
|
||||
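A hedged usage sketch of the entry point (all names as defined above; the category list mirrors the media_types documented in the docstring):

    # Dry run first: report orphans without touching the disk.
    results = sync_all_media(dry_run=True, media_types=["exports", "recordings"])
    print(results.to_dict()["totals"])

    # Real run across everything, still protected by the 50% safety threshold.
    sync_all_media(media_types=["all"])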
@@ -417,12 +417,12 @@ def get_openvino_npu_stats() -> Optional[dict[str, str]]:
        else:
            usage = 0.0

        return {"npu": f"{round(usage, 2)}", "mem": "-"}
        return {"npu": f"{round(usage, 2)}", "mem": "-%"}
    except (FileNotFoundError, PermissionError, ValueError):
        return None


def get_rockchip_gpu_stats() -> Optional[dict[str, str]]:
def get_rockchip_gpu_stats() -> Optional[dict[str, str | float]]:
    """Get GPU stats using rk."""
    try:
        with open("/sys/kernel/debug/rkrga/load", "r") as f:
@@ -440,7 +440,16 @@ def get_rockchip_gpu_stats() -> Optional[dict[str, str]]:
            return None

    average_load = f"{round(sum(load_values) / len(load_values), 2)}%"
    return {"gpu": average_load, "mem": "-"}
    stats: dict[str, str | float] = {"gpu": average_load, "mem": "-%"}

    try:
        with open("/sys/class/thermal/thermal_zone5/temp", "r") as f:
            line = f.readline().strip()
            stats["temp"] = round(int(line) / 1000, 1)
    except (FileNotFoundError, OSError, ValueError):
        pass

    return stats


def get_rockchip_npu_stats() -> Optional[dict[str, float | str]]:
@@ -463,13 +472,25 @@ def get_rockchip_npu_stats() -> Optional[dict[str, float | str]]:

    percentages = [int(load) for load in core_loads]
    mean = round(sum(percentages) / len(percentages), 2)
    return {"npu": mean, "mem": "-"}
    stats: dict[str, float | str] = {"npu": mean, "mem": "-%"}

    try:
        with open("/sys/class/thermal/thermal_zone6/temp", "r") as f:
            line = f.readline().strip()
            stats["temp"] = round(int(line) / 1000, 1)
    except (FileNotFoundError, OSError, ValueError):
        pass

    return stats
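Both temperature reads convert sysfs millidegree values to degrees Celsius; a quick worked check of that arithmetic:

    line = "45500"  # example raw thermal_zone reading
    assert round(int(line) / 1000, 1) == 45.5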


def try_get_info(f, h, default="N/A"):
def try_get_info(f, h, default="N/A", sensor=None):
    try:
        if h:
            v = f(h)
            if sensor is not None:
                v = f(h, sensor)
            else:
                v = f(h)
        else:
            v = f()
    except nvml.NVMLError_NotSupported:
@@ -498,6 +519,9 @@ def get_nvidia_gpu_stats() -> dict[int, dict]:
        util = try_get_info(nvml.nvmlDeviceGetUtilizationRates, handle)
        enc = try_get_info(nvml.nvmlDeviceGetEncoderUtilization, handle)
        dec = try_get_info(nvml.nvmlDeviceGetDecoderUtilization, handle)
        temp = try_get_info(
            nvml.nvmlDeviceGetTemperature, handle, default=None, sensor=0
        )
        pstate = try_get_info(nvml.nvmlDeviceGetPowerState, handle, default=None)

        if util != "N/A":
@@ -510,6 +534,11 @@ def get_nvidia_gpu_stats() -> dict[int, dict]:
        else:
            gpu_mem_util = -1

        if temp != "N/A" and temp is not None:
            temp = float(temp)
        else:
            temp = None

        if enc != "N/A":
            enc_util = enc[0]
        else:
@@ -527,6 +556,7 @@ def get_nvidia_gpu_stats() -> dict[int, dict]:
            "enc": enc_util,
            "dec": dec_util,
            "pstate": pstate or "unknown",
            "temp": temp,
        }
    except Exception:
        pass
@@ -549,6 +579,53 @@ def get_jetson_stats() -> Optional[dict[int, dict]]:
    return results


def get_hailo_temps() -> dict[str, float]:
    """Get temperatures for Hailo devices."""
    try:
        from hailo_platform import Device
    except ModuleNotFoundError:
        return {}

    temps = {}

    try:
        device_ids = Device.scan()
        for i, device_id in enumerate(device_ids):
            try:
                with Device(device_id) as device:
                    temp_info = device.control.get_chip_temperature()

                    # Get board name and normalise it
                    identity = device.control.identify()
                    board_name = None
                    for line in str(identity).split("\n"):
                        if line.startswith("Board Name:"):
                            board_name = (
                                line.split(":", 1)[1].strip().lower().replace("-", "")
                            )
                            break

                    if not board_name:
                        board_name = f"hailo{i}"

                    # Use indexed name if multiple devices, otherwise just the board name
                    device_name = (
                        f"{board_name}-{i}" if len(device_ids) > 1 else board_name
                    )

                    # ts1_temperature is also available, but appeared to be the same as ts0 in testing.
                    temps[device_name] = round(temp_info.ts0_temperature, 1)
            except Exception as e:
                logger.debug(
                    f"Failed to get temperature for Hailo device {device_id}: {e}"
                )
                continue
    except Exception as e:
        logger.debug(f"Failed to scan for Hailo devices: {e}")

    return temps


def ffprobe_stream(ffmpeg, path: str, detailed: bool = False) -> sp.CompletedProcess:
    """Run ffprobe on stream."""
    clean_path = escape_special_characters(path)
@@ -584,12 +661,17 @@ def ffprobe_stream(ffmpeg, path: str, detailed: bool = False) -> sp.CompletedPro

def vainfo_hwaccel(device_name: Optional[str] = None) -> sp.CompletedProcess:
    """Run vainfo."""
    ffprobe_cmd = (
        ["vainfo"]
        if not device_name
        else ["vainfo", "--display", "drm", "--device", f"/dev/dri/{device_name}"]
    )
    return sp.run(ffprobe_cmd, capture_output=True)
    if not device_name:
        cmd = ["vainfo"]
    else:
        if os.path.isabs(device_name) and device_name.startswith("/dev/dri/"):
            device_path = device_name
        else:
            device_path = f"/dev/dri/{device_name}"

        cmd = ["vainfo", "--display", "drm", "--device", device_path]

    return sp.run(cmd, capture_output=True)
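With this change the helper accepts either a bare DRM node name or a full path under /dev/dri; a small usage sketch (the device name is an example):

    vainfo_hwaccel("renderD128")           # expanded to /dev/dri/renderD128
    vainfo_hwaccel("/dev/dri/renderD128")  # absolute path used as-is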


def get_nvidia_driver_info() -> dict[str, Any]:
121 frigate/video.py
@@ -3,6 +3,7 @@ import queue
import subprocess as sp
import threading
import time
from collections import deque
from datetime import datetime, timedelta, timezone
from multiprocessing import Queue, Value
from multiprocessing.synchronize import Event as MpEvent
@@ -115,6 +116,7 @@ def capture_frames(
    frame_rate.start()
    skipped_eps = EventsPerSecond()
    skipped_eps.start()

    config_subscriber = CameraConfigUpdateSubscriber(
        None, {config.name: config}, [CameraConfigUpdateEnum.enabled]
    )
@@ -179,6 +181,9 @@ class CameraWatchdog(threading.Thread):
        camera_fps,
        skipped_fps,
        ffmpeg_pid,
        stalls,
        reconnects,
        detection_frame,
        stop_event,
    ):
        threading.Thread.__init__(self)
@@ -199,6 +204,10 @@ class CameraWatchdog(threading.Thread):
        self.frame_index = 0
        self.stop_event = stop_event
        self.sleeptime = self.config.ffmpeg.retry_interval
        self.reconnect_timestamps = deque()
        self.stalls = stalls
        self.reconnects = reconnects
        self.detection_frame = detection_frame

        self.config_subscriber = CameraConfigUpdateSubscriber(
            None,
@@ -213,6 +222,35 @@ class CameraWatchdog(threading.Thread):
        self.latest_invalid_segment_time: float = 0
        self.latest_cache_segment_time: float = 0

        # Stall tracking (based on last processed frame)
        self._stall_timestamps: deque[float] = deque()
        self._stall_active: bool = False

        # Status caching to reduce message volume
        self._last_detect_status: str | None = None
        self._last_record_status: str | None = None
        self._last_status_update_time: float = 0.0

    def _send_detect_status(self, status: str, now: float) -> None:
        """Send detect status only if changed or retry_interval has elapsed."""
        if (
            status != self._last_detect_status
            or (now - self._last_status_update_time) >= self.sleeptime
        ):
            self.requestor.send_data(f"{self.config.name}/status/detect", status)
            self._last_detect_status = status
            self._last_status_update_time = now

    def _send_record_status(self, status: str, now: float) -> None:
        """Send record status only if changed or retry_interval has elapsed."""
        if (
            status != self._last_record_status
            or (now - self._last_status_update_time) >= self.sleeptime
        ):
            self.requestor.send_data(f"{self.config.name}/status/record", status)
            self._last_record_status = status
            self._last_status_update_time = now

    def _update_enabled_state(self) -> bool:
        """Fetch the latest config and update enabled state."""
        self.config_subscriber.check_for_updates()
@@ -239,6 +277,14 @@ class CameraWatchdog(threading.Thread):
        else:
            self.ffmpeg_detect_process.wait()

        # Update reconnects
        now = datetime.now().timestamp()
        self.reconnect_timestamps.append(now)
        while self.reconnect_timestamps and self.reconnect_timestamps[0] < now - 3600:
            self.reconnect_timestamps.popleft()
        if self.reconnects:
            self.reconnects.value = len(self.reconnect_timestamps)

        # Wait for old capture thread to fully exit before starting a new one
        if self.capture_thread is not None and self.capture_thread.is_alive():
            self.logger.info("Waiting for capture thread to exit...")
@@ -261,7 +307,10 @@ class CameraWatchdog(threading.Thread):
        self.start_all_ffmpeg()

        time.sleep(self.sleeptime)
        while not self.stop_event.wait(self.sleeptime):
        last_restart_time = datetime.now().timestamp()

        # 1 second watchdog loop
        while not self.stop_event.wait(1):
            enabled = self._update_enabled_state()
            if enabled != self.was_enabled:
                if enabled:
@@ -277,12 +326,9 @@ class CameraWatchdog(threading.Thread):
                    self.stop_all_ffmpeg()

                    # update camera status
                    self.requestor.send_data(
                        f"{self.config.name}/status/detect", "disabled"
                    )
                    self.requestor.send_data(
                        f"{self.config.name}/status/record", "disabled"
                    )
                    now = datetime.now().timestamp()
                    self._send_detect_status("disabled", now)
                    self._send_record_status("disabled", now)
                self.was_enabled = enabled
                continue

@@ -321,36 +367,44 @@ class CameraWatchdog(threading.Thread):

            now = datetime.now().timestamp()

            # Check if enough time has passed to allow ffmpeg restart (backoff pacing)
            time_since_last_restart = now - last_restart_time
            can_restart = time_since_last_restart >= self.sleeptime

            if not self.capture_thread.is_alive():
                self.requestor.send_data(f"{self.config.name}/status/detect", "offline")
                self._send_detect_status("offline", now)
                self.camera_fps.value = 0
                self.logger.error(
                    f"Ffmpeg process crashed unexpectedly for {self.config.name}."
                )
                self.reset_capture_thread(terminate=False)
                if can_restart:
                    self.reset_capture_thread(terminate=False)
                    last_restart_time = now
            elif self.camera_fps.value >= (self.config.detect.fps + 10):
                self.fps_overflow_count += 1

                if self.fps_overflow_count == 3:
                    self.requestor.send_data(
                        f"{self.config.name}/status/detect", "offline"
                    )
                    self._send_detect_status("offline", now)
                    self.fps_overflow_count = 0
                    self.camera_fps.value = 0
                    self.logger.info(
                        f"{self.config.name} exceeded fps limit. Exiting ffmpeg..."
                    )
                    self.reset_capture_thread(drain_output=False)
                    if can_restart:
                        self.reset_capture_thread(drain_output=False)
                        last_restart_time = now
            elif now - self.capture_thread.current_frame.value > 20:
                self.requestor.send_data(f"{self.config.name}/status/detect", "offline")
                self._send_detect_status("offline", now)
                self.camera_fps.value = 0
                self.logger.info(
                    f"No frames received from {self.config.name} in 20 seconds. Exiting ffmpeg..."
                )
                self.reset_capture_thread()
                if can_restart:
                    self.reset_capture_thread()
                    last_restart_time = now
            else:
                # process is running normally
                self.requestor.send_data(f"{self.config.name}/status/detect", "online")
                self._send_detect_status("online", now)
                self.fps_overflow_count = 0

            for p in self.ffmpeg_other_processes:
@@ -421,9 +475,7 @@ class CameraWatchdog(threading.Thread):

                    continue
                else:
                    self.requestor.send_data(
                        f"{self.config.name}/status/record", "online"
                    )
                    self._send_record_status("online", now)
                    p["latest_segment_time"] = self.latest_cache_segment_time

                if poll is None:
@@ -439,6 +491,34 @@ class CameraWatchdog(threading.Thread):
                    p["cmd"], self.logger, p["logpipe"], ffmpeg_process=p["process"]
                )

            # Update stall metrics based on last processed frame timestamp
            now = datetime.now().timestamp()
            processed_ts = (
                float(self.detection_frame.value) if self.detection_frame else 0.0
            )
            if processed_ts > 0:
                delta = now - processed_ts
                observed_fps = (
                    self.camera_fps.value
                    if self.camera_fps.value > 0
                    else self.config.detect.fps
                )
                interval = 1.0 / max(observed_fps, 0.1)
                stall_threshold = max(2.0 * interval, 2.0)

                if delta > stall_threshold:
                    if not self._stall_active:
                        self._stall_timestamps.append(now)
                        self._stall_active = True
                else:
                    self._stall_active = False

                while self._stall_timestamps and self._stall_timestamps[0] < now - 3600:
                    self._stall_timestamps.popleft()

                if self.stalls:
                    self.stalls.value = len(self._stall_timestamps)

        self.stop_all_ffmpeg()
        self.logpipe.close()
        self.config_subscriber.stop()
@@ -576,6 +656,9 @@ class CameraCapture(FrigateProcess):
            self.camera_metrics.camera_fps,
            self.camera_metrics.skipped_fps,
            self.camera_metrics.ffmpeg_pid,
            self.camera_metrics.stalls_last_hour,
            self.camera_metrics.reconnects_last_hour,
            self.camera_metrics.detection_frame,
            self.stop_event,
        )
        camera_watchdog.start()
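The stall detector above scales its timeout with the observed frame rate but never lets it drop below two seconds; a minimal worked example of that calculation:

    def stall_threshold(observed_fps: float) -> float:
        interval = 1.0 / max(observed_fps, 0.1)
        return max(2.0 * interval, 2.0)

    assert stall_threshold(5.0) == 2.0  # 2 * 0.2s is floored to the 2s minimum
    assert stall_threshold(0.5) == 4.0  # low-fps cameras get a wider window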
50 migrations/033_create_export_case_table.py (new file)
@@ -0,0 +1,50 @@
"""Peewee migrations -- 033_create_export_case_table.py.

Some examples (model - class or model name)::

    > Model = migrator.orm['model_name']            # Return model in current state by name

    > migrator.sql(sql)                             # Run custom SQL
    > migrator.python(func, *args, **kwargs)        # Run python code
    > migrator.create_model(Model)                  # Create a model (could be used as decorator)
    > migrator.remove_model(model, cascade=True)    # Remove a model
    > migrator.add_fields(model, **fields)          # Add fields to a model
    > migrator.change_fields(model, **fields)       # Change fields
    > migrator.remove_fields(model, *field_names, cascade=True)
    > migrator.rename_field(model, old_field_name, new_field_name)
    > migrator.rename_table(model, new_table_name)
    > migrator.add_index(model, *col_names, unique=False)
    > migrator.drop_index(model, *col_names)
    > migrator.add_not_null(model, *field_names)
    > migrator.drop_not_null(model, *field_names)
    > migrator.add_default(model, field_name, default)

"""

import peewee as pw

SQL = pw.SQL


def migrate(migrator, database, fake=False, **kwargs):
    migrator.sql(
        """
        CREATE TABLE IF NOT EXISTS "exportcase" (
            "id" VARCHAR(30) NOT NULL PRIMARY KEY,
            "name" VARCHAR(100) NOT NULL,
            "description" TEXT NULL,
            "created_at" DATETIME NOT NULL,
            "updated_at" DATETIME NOT NULL
        )
        """
    )
    migrator.sql(
        'CREATE INDEX IF NOT EXISTS "exportcase_name" ON "exportcase" ("name")'
    )
    migrator.sql(
        'CREATE INDEX IF NOT EXISTS "exportcase_created_at" ON "exportcase" ("created_at")'
    )


def rollback(migrator, database, fake=False, **kwargs):
    pass
40 migrations/034_add_export_case_to_exports.py (new file)
@@ -0,0 +1,40 @@
"""Peewee migrations -- 034_add_export_case_to_exports.py.

Some examples (model - class or model name)::

    > Model = migrator.orm['model_name']            # Return model in current state by name

    > migrator.sql(sql)                             # Run custom SQL
    > migrator.python(func, *args, **kwargs)        # Run python code
    > migrator.create_model(Model)                  # Create a model (could be used as decorator)
    > migrator.remove_model(model, cascade=True)    # Remove a model
    > migrator.add_fields(model, **fields)          # Add fields to a model
    > migrator.change_fields(model, **fields)       # Change fields
    > migrator.remove_fields(model, *field_names, cascade=True)
    > migrator.rename_field(model, old_field_name, new_field_name)
    > migrator.rename_table(model, new_table_name)
    > migrator.add_index(model, *col_names, unique=False)
    > migrator.drop_index(model, *col_names)
    > migrator.add_not_null(model, *field_names)
    > migrator.drop_not_null(model, *field_names)
    > migrator.add_default(model, field_name, default)

"""

import peewee as pw

SQL = pw.SQL


def migrate(migrator, database, fake=False, **kwargs):
    # Add nullable export_case_id column to export table
    migrator.sql('ALTER TABLE "export" ADD COLUMN "export_case_id" VARCHAR(30) NULL')

    # Index for faster case-based queries
    migrator.sql(
        'CREATE INDEX IF NOT EXISTS "export_export_case_id" ON "export" ("export_case_id")'
    )


def rollback(migrator, database, fake=False, **kwargs):
    pass
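After both migrations, exports can be grouped by case with an ordinary join on the new column; a hedged inspection sketch (the database path is an assumption, and the column names follow the DDL above):

    import sqlite3

    con = sqlite3.connect("/config/frigate.db")  # assumed path
    rows = con.execute(
        """
        SELECT ec.name, COUNT(e.id)
        FROM exportcase ec
        LEFT JOIN export e ON e.export_case_id = ec.id
        GROUP BY ec.id
        """
    ).fetchall()
    for name, count in rows:
        print(f"{name}: {count} export(s)")
    con.close()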
@@ -48,6 +48,10 @@
    "name": {
      "placeholder": "Name the Export"
    },
    "case": {
      "label": "Case",
      "placeholder": "Select a case"
    },
    "select": "Select",
    "export": "Export",
    "selectOrExport": "Select or Export",

@@ -324,9 +324,6 @@
    "enabled": {
      "label": "Enable record on all cameras."
    },
    "sync_recordings": {
      "label": "Sync recordings with disk on startup and once a day."
    },
    "expire_interval": {
      "label": "Number of minutes to wait between cleanup runs."
    },
@@ -758,4 +755,4 @@
      "label": "Keep track of original state of camera."
    }
  }
}
}

@@ -4,9 +4,6 @@
    "enabled": {
      "label": "Enable record on all cameras."
    },
    "sync_recordings": {
      "label": "Sync recordings with disk on startup and once a day."
    },
    "expire_interval": {
      "label": "Number of minutes to wait between cleanup runs."
    },
@@ -90,4 +87,4 @@
      "label": "Keep track of original state of recording."
    }
  }
}
}

@@ -2,6 +2,10 @@
  "documentTitle": "Export - Frigate",
  "search": "Search",
  "noExports": "No exports found",
  "headings": {
    "cases": "Cases",
    "uncategorizedExports": "Uncategorized Exports"
  },
  "deleteExport": "Delete Export",
  "deleteExport.desc": "Are you sure you want to delete {{exportName}}?",
  "editExport": {
@@ -13,11 +17,21 @@
    "shareExport": "Share export",
    "downloadVideo": "Download video",
    "editName": "Edit name",
    "deleteExport": "Delete export"
    "deleteExport": "Delete export",
    "assignToCase": "Add to case"
  },
  "toast": {
    "error": {
      "renameExportFailed": "Failed to rename export: {{errorMessage}}"
      "renameExportFailed": "Failed to rename export: {{errorMessage}}",
      "assignCaseFailed": "Failed to update case assignment: {{errorMessage}}"
    }
  },
  "caseDialog": {
    "title": "Add to case",
    "description": "Choose an existing case or create a new one.",
    "selectLabel": "Case",
    "newCaseOption": "Create new case",
    "nameLabel": "Case name",
    "descriptionLabel": "Description"
  }
}

@@ -1065,5 +1065,53 @@
        "deleteTriggerFailed": "Failed to delete trigger: {{errorMessage}}"
      }
    }
  },
  "maintenance": {
    "title": "Maintenance",
    "sync": {
      "title": "Media Sync",
      "desc": "Frigate cleans up media on a regular schedule according to your retention configuration, and it is normal to see a few orphaned files as it runs. Use this feature to remove orphaned media files from disk that are no longer referenced in the database.",
      "started": "Media sync started.",
      "alreadyRunning": "A sync job is already running.",
      "error": "Failed to start sync.",
      "currentStatus": "Status",
      "jobId": "Job ID",
      "startTime": "Start Time",
      "endTime": "End Time",
      "statusLabel": "Status",
      "results": "Results",
      "errorLabel": "Error",
      "mediaTypes": "Media Types",
      "allMedia": "All Media",
      "dryRun": "Dry Run",
      "dryRunEnabled": "No files will be deleted",
      "dryRunDisabled": "Files will be deleted",
      "force": "Force",
      "forceDesc": "Bypass the safety threshold and complete the sync even if more than 50% of the files would be deleted.",
      "running": "Sync Running...",
      "start": "Start Sync",
      "inProgress": "Sync is in progress. This page is disabled.",
      "status": {
        "queued": "Queued",
        "running": "Running",
        "completed": "Completed",
        "failed": "Failed",
        "notRunning": "Not Running"
      },
      "resultsFields": {
        "filesChecked": "Files Checked",
        "orphansFound": "Orphans Found",
        "orphansDeleted": "Orphans Deleted",
        "aborted": "Aborted. Deletion would exceed the safety threshold.",
        "error": "Error",
        "totals": "Totals"
      },
      "event_snapshots": "Tracked Object Snapshots",
      "event_thumbnails": "Tracked Object Thumbnails",
      "review_thumbnails": "Review Thumbnails",
      "previews": "Previews",
      "exports": "Exports",
      "recordings": "Recordings"
    }
  }
}

@@ -51,6 +51,7 @@
    "gpuMemory": "GPU Memory",
    "gpuEncoder": "GPU Encoder",
    "gpuDecoder": "GPU Decoder",
    "gpuTemperature": "GPU Temperature",
    "gpuInfo": {
      "vainfoOutput": {
        "title": "Vainfo Output",
@@ -77,6 +78,7 @@
    },
    "npuUsage": "NPU Usage",
    "npuMemory": "NPU Memory",
    "npuTemperature": "NPU Temperature",
    "intelGpuWarning": {
      "title": "Intel GPU Stats Warning",
      "message": "GPU stats unavailable",
@@ -151,6 +153,17 @@
      "cameraDetectionsPerSecond": "{{camName}} detections per second",
      "cameraSkippedDetectionsPerSecond": "{{camName}} skipped detections per second"
    },
    "connectionQuality": {
      "title": "Connection Quality",
      "excellent": "Excellent",
      "fair": "Fair",
      "poor": "Poor",
      "unusable": "Unusable",
      "fps": "FPS",
      "expectedFps": "Expected FPS",
      "reconnectsLastHour": "Reconnects (last hour)",
      "stallsLastHour": "Stalls (last hour)"
    },
    "toast": {
      "success": {
        "copyToClipboard": "Copied probe data to clipboard."

@@ -11,6 +11,7 @@ import {
  TrackedObjectUpdateReturnType,
  TriggerStatus,
  FrigateAudioDetections,
  Job,
} from "@/types/ws";
import { FrigateStats } from "@/types/stats";
import { createContainer } from "react-tracked";
@@ -651,3 +652,40 @@ export function useTriggers(): { payload: TriggerStatus } {
    : { name: "", camera: "", event_id: "", type: "", score: 0 };
  return { payload: useDeepMemo(parsed) };
}

export function useJobStatus(
  jobType: string,
  revalidateOnFocus: boolean = true,
): { payload: Job | null } {
  const {
    value: { payload },
    send: sendCommand,
  } = useWs("job_state", "jobState");

  const jobData = useDeepMemo(
    payload && typeof payload === "string" ? JSON.parse(payload) : {},
  );
  const currentJob = jobData[jobType] || null;

  useEffect(() => {
    let listener: (() => void) | undefined;
    if (revalidateOnFocus) {
      sendCommand("jobState");
      listener = () => {
        if (document.visibilityState === "visible") {
          sendCommand("jobState");
        }
      };
      addEventListener("visibilitychange", listener);
    }

    return () => {
      if (listener) {
        removeEventListener("visibilitychange", listener);
      }
    };
    // eslint-disable-next-line react-hooks/exhaustive-deps
  }, [revalidateOnFocus]);

  return { payload: currentJob as Job | null };
}

76 web/src/components/camera/ConnectionQualityIndicator.tsx (new file)
@@ -0,0 +1,76 @@
import { useTranslation } from "react-i18next";
import {
  Tooltip,
  TooltipContent,
  TooltipTrigger,
} from "@/components/ui/tooltip";
import { cn } from "@/lib/utils";

type ConnectionQualityIndicatorProps = {
  quality: "excellent" | "fair" | "poor" | "unusable";
  expectedFps: number;
  reconnects: number;
  stalls: number;
};

export function ConnectionQualityIndicator({
  quality,
  expectedFps,
  reconnects,
  stalls,
}: ConnectionQualityIndicatorProps) {
  const { t } = useTranslation(["views/system"]);

  const getColorClass = (quality: string): string => {
    switch (quality) {
      case "excellent":
        return "bg-success";
      case "fair":
        return "bg-yellow-500";
      case "poor":
        return "bg-orange-500";
      case "unusable":
        return "bg-destructive";
      default:
        return "bg-gray-500";
    }
  };

  const qualityLabel = t(`cameras.connectionQuality.${quality}`);

  return (
    <Tooltip>
      <TooltipTrigger asChild>
        <div
          className={cn(
            "inline-block size-3 cursor-pointer rounded-full",
            getColorClass(quality),
          )}
        />
      </TooltipTrigger>
      <TooltipContent className="max-w-xs">
        <div className="space-y-2">
          <div className="font-semibold">
            {t("cameras.connectionQuality.title")}
          </div>
          <div className="text-sm">
            <div className="capitalize">{qualityLabel}</div>
            <div className="mt-2 space-y-1 text-xs">
              <div>
                {t("cameras.connectionQuality.expectedFps")}:{" "}
                {expectedFps.toFixed(1)} {t("cameras.connectionQuality.fps")}
              </div>
              <div>
                {t("cameras.connectionQuality.reconnectsLastHour")}:{" "}
                {reconnects}
              </div>
              <div>
                {t("cameras.connectionQuality.stallsLastHour")}: {stalls}
              </div>
            </div>
          </div>
        </div>
      </TooltipContent>
    </Tooltip>
  );
}
@@ -1,9 +1,8 @@
import ActivityIndicator from "../indicators/activity-indicator";
import { LuTrash } from "react-icons/lu";
import { Button } from "../ui/button";
import { useCallback, useState } from "react";
import { isDesktop, isMobile } from "react-device-detect";
import { FaDownload, FaPlay, FaShareAlt } from "react-icons/fa";
import { useCallback, useMemo, useState } from "react";
import { isMobile } from "react-device-detect";
import { FiMoreVertical } from "react-icons/fi";
import { Skeleton } from "../ui/skeleton";
import {
  Dialog,
@@ -14,35 +13,81 @@ import {
} from "../ui/dialog";
import { Input } from "../ui/input";
import useKeyboardListener from "@/hooks/use-keyboard-listener";
import { DeleteClipType, Export } from "@/types/export";
import { MdEditSquare } from "react-icons/md";
import { DeleteClipType, Export, ExportCase } from "@/types/export";
import { baseUrl } from "@/api/baseUrl";
import { cn } from "@/lib/utils";
import { shareOrCopy } from "@/utils/browserUtil";
import { useTranslation } from "react-i18next";
import { ImageShadowOverlay } from "../overlay/ImageShadowOverlay";
import BlurredIconButton from "../button/BlurredIconButton";
import { Tooltip, TooltipContent, TooltipTrigger } from "../ui/tooltip";
import { useIsAdmin } from "@/hooks/use-is-admin";
import {
  DropdownMenu,
  DropdownMenuContent,
  DropdownMenuItem,
  DropdownMenuTrigger,
} from "../ui/dropdown-menu";
import { FaFolder } from "react-icons/fa";

type ExportProps = {
type CaseCardProps = {
  className: string;
  exportCase: ExportCase;
  exports: Export[];
  onSelect: () => void;
};
export function CaseCard({
  className,
  exportCase,
  exports,
  onSelect,
}: CaseCardProps) {
  const firstExport = useMemo(
    () => exports.find((exp) => exp.thumb_path && exp.thumb_path.length > 0),
    [exports],
  );

  return (
    <div
      className={cn(
        "relative flex aspect-video size-full cursor-pointer items-center justify-center overflow-hidden rounded-lg bg-secondary md:rounded-2xl",
        className,
      )}
      onClick={() => onSelect()}
    >
      {firstExport && (
        <img
          className="absolute inset-0 size-full object-cover"
          src={`${baseUrl}${firstExport.thumb_path.replace("/media/frigate/", "")}`}
          alt=""
        />
      )}
      <div className="pointer-events-none absolute inset-x-0 bottom-0 z-10 h-16 bg-gradient-to-t from-black/60 to-transparent" />
      <div className="absolute bottom-2 left-2 z-20 flex items-center justify-start gap-2 text-white">
        <FaFolder />
        <div className="capitalize">{exportCase.name}</div>
      </div>
    </div>
  );
}

type ExportCardProps = {
  className: string;
  exportedRecording: Export;
  onSelect: (selected: Export) => void;
  onRename: (original: string, update: string) => void;
  onDelete: ({ file, exportName }: DeleteClipType) => void;
  onAssignToCase?: (selected: Export) => void;
};

export default function ExportCard({
export function ExportCard({
  className,
  exportedRecording,
  onSelect,
  onRename,
  onDelete,
}: ExportProps) {
  onAssignToCase,
}: ExportCardProps) {
  const { t } = useTranslation(["views/exports"]);
  const isAdmin = useIsAdmin();
  const [hovered, setHovered] = useState(false);
  const [loading, setLoading] = useState(
    exportedRecording.thumb_path.length > 0,
  );
@@ -136,12 +181,14 @@ export default function ExportCard({

      <div
        className={cn(
          "relative flex aspect-video items-center justify-center rounded-lg bg-black md:rounded-2xl",
          "relative flex aspect-video cursor-pointer items-center justify-center rounded-lg bg-black md:rounded-2xl",
          className,
        )}
        onMouseEnter={isDesktop ? () => setHovered(true) : undefined}
        onMouseLeave={isDesktop ? () => setHovered(false) : undefined}
        onClick={isDesktop ? undefined : () => setHovered(!hovered)}
        onClick={() => {
          if (!exportedRecording.in_progress) {
            onSelect(exportedRecording);
          }
        }}
      >
        {exportedRecording.in_progress ? (
          <ActivityIndicator />
@@ -158,95 +205,88 @@ export default function ExportCard({
          )}
        </>
      )}
      {hovered && (
        <>
          <div className="absolute inset-0 rounded-lg bg-black bg-opacity-60 md:rounded-2xl" />
          <div className="absolute right-3 top-2">
            <div className="flex items-center justify-center gap-4">
              {!exportedRecording.in_progress && (
                <Tooltip>
                  <TooltipTrigger asChild>
                    <BlurredIconButton
                      onClick={() =>
                        shareOrCopy(
                          `${baseUrl}export?id=${exportedRecording.id}`,
                          exportedRecording.name.replaceAll("_", " "),
                        )
                      }
                    >
                      <FaShareAlt className="size-4" />
                    </BlurredIconButton>
                  </TooltipTrigger>
                  <TooltipContent>{t("tooltip.shareExport")}</TooltipContent>
                </Tooltip>
              )}
              {!exportedRecording.in_progress && (
      {!exportedRecording.in_progress && (
        <div className="absolute bottom-2 right-3 z-40">
          <DropdownMenu modal={false}>
            <DropdownMenuTrigger>
              <BlurredIconButton
                aria-label={t("tooltip.editName")}
                onClick={(e) => e.stopPropagation()}
              >
                <FiMoreVertical className="size-5" />
              </BlurredIconButton>
            </DropdownMenuTrigger>
            <DropdownMenuContent align="end">
              <DropdownMenuItem
                className="cursor-pointer"
                aria-label={t("tooltip.shareExport")}
                onClick={(e) => {
                  e.stopPropagation();
                  shareOrCopy(
                    `${baseUrl}export?id=${exportedRecording.id}`,
                    exportedRecording.name.replaceAll("_", " "),
                  );
                }}
              >
                {t("tooltip.shareExport")}
              </DropdownMenuItem>
              <DropdownMenuItem
                className="cursor-pointer"
                aria-label={t("tooltip.downloadVideo")}
              >
                <a
                  download
                  href={`${baseUrl}${exportedRecording.video_path.replace("/media/frigate/", "")}`}
                  onClick={(e) => e.stopPropagation()}
                >
                  <Tooltip>
                    <TooltipTrigger asChild>
                      <BlurredIconButton>
                        <FaDownload className="size-4" />
                      </BlurredIconButton>
                    </TooltipTrigger>
                    <TooltipContent>
                      {t("tooltip.downloadVideo")}
                    </TooltipContent>
                  </Tooltip>
                  {t("tooltip.downloadVideo")}
                </a>
              )}
              {isAdmin && !exportedRecording.in_progress && (
                <Tooltip>
                  <TooltipTrigger asChild>
                    <BlurredIconButton
                      onClick={() =>
                        setEditName({
                          original: exportedRecording.name,
                          update: undefined,
                        })
                      }
                    >
                      <MdEditSquare className="size-4" />
                    </BlurredIconButton>
                  </TooltipTrigger>
                  <TooltipContent>{t("tooltip.editName")}</TooltipContent>
                </Tooltip>
              </DropdownMenuItem>
              {isAdmin && onAssignToCase && (
                <DropdownMenuItem
                  className="cursor-pointer"
                  aria-label={t("tooltip.assignToCase")}
                  onClick={(e) => {
                    e.stopPropagation();
                    onAssignToCase(exportedRecording);
                  }}
                >
                  {t("tooltip.assignToCase")}
                </DropdownMenuItem>
              )}
              {isAdmin && (
                <Tooltip>
                  <TooltipTrigger asChild>
                    <BlurredIconButton
                      onClick={() =>
                        onDelete({
                          file: exportedRecording.id,
                          exportName: exportedRecording.name,
                        })
                      }
                    >
                      <LuTrash className="size-4 fill-destructive text-destructive hover:text-white" />
                    </BlurredIconButton>
                  </TooltipTrigger>
                  <TooltipContent>{t("tooltip.deleteExport")}</TooltipContent>
                </Tooltip>
                <DropdownMenuItem
                  className="cursor-pointer"
                  aria-label={t("tooltip.editName")}
                  onClick={(e) => {
                    e.stopPropagation();
                    setEditName({
                      original: exportedRecording.name,
                      update: undefined,
                    });
                  }}
                >
                  {t("tooltip.editName")}
                </DropdownMenuItem>
              )}
            </div>
          </div>

          {!exportedRecording.in_progress && (
            <Button
              className="absolute left-1/2 top-1/2 h-20 w-20 -translate-x-1/2 -translate-y-1/2 cursor-pointer text-white hover:bg-transparent hover:text-white"
              aria-label={t("button.play", { ns: "common" })}
              variant="ghost"
              onClick={() => {
                onSelect(exportedRecording);
              }}
            >
              <FaPlay />
            </Button>
          )}
        </>
              {isAdmin && (
                <DropdownMenuItem
                  className="cursor-pointer"
                  aria-label={t("tooltip.deleteExport")}
                  onClick={(e) => {
                    e.stopPropagation();
                    onDelete({
                      file: exportedRecording.id,
                      exportName: exportedRecording.name,
                    });
                  }}
                >
                  {t("tooltip.deleteExport")}
                </DropdownMenuItem>
              )}
            </DropdownMenuContent>
          </DropdownMenu>
        </div>
      )}
      {loading && (
        <Skeleton className="absolute inset-0 aspect-video rounded-lg md:rounded-2xl" />

67 web/src/components/filter/ExportFilterGroup.tsx (new file)
@@ -0,0 +1,67 @@
import { cn } from "@/lib/utils";
import {
  DEFAULT_EXPORT_FILTERS,
  ExportFilter,
  ExportFilters,
} from "@/types/export";
import { CamerasFilterButton } from "./CamerasFilterButton";
import { useAllowedCameras } from "@/hooks/use-allowed-cameras";
import { useMemo } from "react";
import { FrigateConfig } from "@/types/frigateConfig";
import useSWR from "swr";

type ExportFilterGroupProps = {
  className: string;
  filters?: ExportFilters[];
  filter?: ExportFilter;
  onUpdateFilter: (filter: ExportFilter) => void;
};
export default function ExportFilterGroup({
  className,
  filter,
  filters = DEFAULT_EXPORT_FILTERS,
  onUpdateFilter,
}: ExportFilterGroupProps) {
  const { data: config } = useSWR<FrigateConfig>("config", {
    revalidateOnFocus: false,
  });
  const allowedCameras = useAllowedCameras();

  const filterValues = useMemo(
    () => ({
      cameras: allowedCameras,
    }),
    [allowedCameras],
  );

  const groups = useMemo(() => {
    if (!config) {
      return [];
    }

    return Object.entries(config.camera_groups).sort(
      (a, b) => a[1].order - b[1].order,
    );
  }, [config]);

  return (
    <div
      className={cn(
        "scrollbar-container flex justify-center gap-2 overflow-x-auto",
        className,
      )}
    >
      {filters.includes("cameras") && (
        <CamerasFilterButton
          allCameras={filterValues.cameras}
          groups={groups}
          selectedCameras={filter?.cameras}
          hideText={false}
          updateCameraFilter={(newCameras) => {
            onUpdateFilter({ ...filter, cameras: newCameras });
          }}
        />
      )}
    </div>
  );
}
@@ -22,7 +22,14 @@ import useSWR from "swr";
import { FrigateConfig } from "@/types/frigateConfig";
import { Popover, PopoverContent, PopoverTrigger } from "../ui/popover";
import { TimezoneAwareCalendar } from "./ReviewActivityCalendar";
import { SelectSeparator } from "../ui/select";
import {
  Select,
  SelectContent,
  SelectItem,
  SelectSeparator,
  SelectTrigger,
  SelectValue,
} from "../ui/select";
import { isDesktop, isIOS, isMobile } from "react-device-detect";
import { Drawer, DrawerContent, DrawerTrigger } from "../ui/drawer";
import SaveExportOverlay from "./SaveExportOverlay";
@@ -31,6 +38,7 @@ import { baseUrl } from "@/api/baseUrl";
import { cn } from "@/lib/utils";
import { GenericVideoPlayer } from "../player/GenericVideoPlayer";
import { useTranslation } from "react-i18next";
import { ExportCase } from "@/types/export";

const EXPORT_OPTIONS = [
  "1",
@@ -67,6 +75,9 @@ export default function ExportDialog({
}: ExportDialogProps) {
  const { t } = useTranslation(["components/dialog"]);
  const [name, setName] = useState("");
  const [selectedCaseId, setSelectedCaseId] = useState<string | undefined>(
    undefined,
  );

  const onStartExport = useCallback(() => {
    if (!range) {
@@ -89,6 +100,7 @@ export default function ExportDialog({
        {
          playback: "realtime",
          name,
          export_case_id: selectedCaseId || undefined,
        },
      )
      .then((response) => {
@@ -102,6 +114,7 @@ export default function ExportDialog({
          ),
        });
        setName("");
        setSelectedCaseId(undefined);
        setRange(undefined);
        setMode("none");
      }
@@ -118,10 +131,11 @@ export default function ExportDialog({
          { position: "top-center" },
        );
      });
  }, [camera, name, range, setRange, setName, setMode, t]);
  }, [camera, name, range, selectedCaseId, setRange, setName, setMode, t]);

  const handleCancel = useCallback(() => {
    setName("");
    setSelectedCaseId(undefined);
    setMode("none");
    setRange(undefined);
  }, [setMode, setRange]);
@@ -190,8 +204,10 @@ export default function ExportDialog({
          currentTime={currentTime}
          range={range}
          name={name}
          selectedCaseId={selectedCaseId}
          onStartExport={onStartExport}
          setName={setName}
          setSelectedCaseId={setSelectedCaseId}
          setRange={setRange}
          setMode={setMode}
          onCancel={handleCancel}
@@ -207,8 +223,10 @@ type ExportContentProps = {
  currentTime: number;
  range?: TimeRange;
  name: string;
  selectedCaseId?: string;
  onStartExport: () => void;
  setName: (name: string) => void;
  setSelectedCaseId: (caseId: string | undefined) => void;
  setRange: (range: TimeRange | undefined) => void;
  setMode: (mode: ExportMode) => void;
  onCancel: () => void;
@@ -218,14 +236,17 @@ export function ExportContent({
  currentTime,
  range,
  name,
  selectedCaseId,
  onStartExport,
  setName,
  setSelectedCaseId,
  setRange,
  setMode,
  onCancel,
}: ExportContentProps) {
  const { t } = useTranslation(["components/dialog"]);
  const [selectedOption, setSelectedOption] = useState<ExportOption>("1");
  const { data: cases } = useSWR<ExportCase[]>("cases");

  const onSelectTime = useCallback(
    (option: ExportOption) => {
@@ -320,6 +341,44 @@ export function ExportContent({
            value={name}
            onChange={(e) => setName(e.target.value)}
          />
          <div className="my-4">
            <Label className="text-sm text-secondary-foreground">
              {t("export.case.label", { defaultValue: "Case (optional)" })}
            </Label>
            <Select
              value={selectedCaseId || "none"}
              onValueChange={(value) =>
                setSelectedCaseId(value === "none" ? undefined : value)
              }
            >
              <SelectTrigger className="mt-2">
                <SelectValue
                  placeholder={t("export.case.placeholder", {
                    defaultValue: "Select a case (optional)",
                  })}
                />
              </SelectTrigger>
              <SelectContent>
                <SelectItem
                  value="none"
                  className="cursor-pointer hover:bg-accent hover:text-accent-foreground"
                >
                  {t("label.none", { ns: "common" })}
                </SelectItem>
                {cases
                  ?.sort((a, b) => a.name.localeCompare(b.name))
                  .map((caseItem) => (
                    <SelectItem
                      key={caseItem.id}
                      value={caseItem.id}
                      className="cursor-pointer hover:bg-accent hover:text-accent-foreground"
                    >
                      {caseItem.name}
                    </SelectItem>
                  ))}
              </SelectContent>
            </Select>
          </div>
          {isDesktop && <SelectSeparator className="my-4 bg-secondary" />}
          <DialogFooter
            className={isDesktop ? "" : "mt-3 flex flex-col-reverse gap-4"}

@@ -75,6 +75,9 @@ export default function MobileReviewSettingsDrawer({
  // exports

  const [name, setName] = useState("");
  const [selectedCaseId, setSelectedCaseId] = useState<string | undefined>(
    undefined,
  );
  const onStartExport = useCallback(() => {
    if (!range) {
      toast.error(t("toast.error.noValidTimeSelected"), {
@@ -96,6 +99,7 @@ export default function MobileReviewSettingsDrawer({
        {
          playback: "realtime",
          name,
          export_case_id: selectedCaseId || undefined,
        },
      )
      .then((response) => {
@@ -114,6 +118,7 @@ export default function MobileReviewSettingsDrawer({
          },
        );
        setName("");
        setSelectedCaseId(undefined);
        setRange(undefined);
        setMode("none");
      }
@@ -133,7 +138,7 @@ export default function MobileReviewSettingsDrawer({
        },
      );
    });
  }, [camera, name, range, setRange, setName, setMode, t]);
  }, [camera, name, range, selectedCaseId, setRange, setName, setMode, t]);

  // filters

@@ -200,8 +205,10 @@ export default function MobileReviewSettingsDrawer({
          currentTime={currentTime}
          range={range}
          name={name}
          selectedCaseId={selectedCaseId}
          onStartExport={onStartExport}
          setName={setName}
          setSelectedCaseId={setSelectedCaseId}
          setRange={setRange}
          setMode={(mode) => {
            setMode(mode);
@@ -213,6 +220,7 @@ export default function MobileReviewSettingsDrawer({
          onCancel={() => {
            setMode("none");
            setRange(undefined);
            setSelectedCaseId(undefined);
            setDrawerMode("select");
          }}
        />

166 web/src/components/overlay/dialog/OptionAndInputDialog.tsx (new file)
@@ -0,0 +1,166 @@
import { Button } from "@/components/ui/button";
import {
  Dialog,
  DialogContent,
  DialogDescription,
  DialogFooter,
  DialogHeader,
  DialogTitle,
} from "@/components/ui/dialog";
import { Input } from "@/components/ui/input";
import {
  Select,
  SelectContent,
  SelectItem,
  SelectTrigger,
  SelectValue,
} from "@/components/ui/select";
import { cn } from "@/lib/utils";
import { isMobile } from "react-device-detect";
import { useEffect, useMemo, useState } from "react";
import { useTranslation } from "react-i18next";

type Option = {
  value: string;
  label: string;
};

type OptionAndInputDialogProps = {
  open: boolean;
  title: string;
  description?: string;
  options: Option[];
  newValueKey: string;
  initialValue?: string;
  nameLabel: string;
  descriptionLabel: string;
  setOpen: (open: boolean) => void;
  onSave: (value: string) => void;
  onCreateNew: (name: string, description: string) => void;
};

export default function OptionAndInputDialog({
  open,
  title,
  description,
  options,
  newValueKey,
  initialValue,
  nameLabel,
  descriptionLabel,
  setOpen,
  onSave,
  onCreateNew,
}: OptionAndInputDialogProps) {
  const { t } = useTranslation("common");
  const firstOption = useMemo(() => options[0]?.value, [options]);

  const [selectedValue, setSelectedValue] = useState<string | undefined>(
    initialValue ?? firstOption,
  );
  const [name, setName] = useState("");
  const [descriptionValue, setDescriptionValue] = useState("");

  useEffect(() => {
    if (open) {
      setSelectedValue(initialValue ?? firstOption);
      setName("");
      setDescriptionValue("");
    }
  }, [open, initialValue, firstOption]);

  const isNew = selectedValue === newValueKey;
  const disableSave = !selectedValue || (isNew && name.trim().length === 0);

  const handleSave = () => {
    if (!selectedValue) {
      return;
    }

    const trimmedName = name.trim();
    const trimmedDescription = descriptionValue.trim();

    if (isNew) {
      onCreateNew(trimmedName, trimmedDescription);
    } else {
      onSave(selectedValue);
    }
    setOpen(false);
  };

  return (
    <Dialog open={open} defaultOpen={false} onOpenChange={setOpen}>
      <DialogContent
        className={cn("space-y-4", isMobile && "px-4")}
        onOpenAutoFocus={(e) => {
          if (isMobile) {
            e.preventDefault();
          }
        }}
      >
        <DialogHeader>
          <DialogTitle>{title}</DialogTitle>
          {description && <DialogDescription>{description}</DialogDescription>}
        </DialogHeader>

        <div className="space-y-2">
          <Select
            value={selectedValue}
            onValueChange={(val) => setSelectedValue(val)}
          >
            <SelectTrigger>
              <SelectValue />
            </SelectTrigger>
            <SelectContent>
              {options.map((option) => (
                <SelectItem key={option.value} value={option.value}>
                  {option.label}
                </SelectItem>
              ))}
            </SelectContent>
          </Select>
        </div>

        {isNew && (
          <div className="space-y-4">
            <div className="space-y-1">
              <label className="text-sm font-medium text-secondary-foreground">
                {nameLabel}
              </label>
              <Input value={name} onChange={(e) => setName(e.target.value)} />
            </div>
            <div className="space-y-1">
              <label className="text-sm font-medium text-secondary-foreground">
                {descriptionLabel}
              </label>
              <Input
                value={descriptionValue}
                onChange={(e) => setDescriptionValue(e.target.value)}
              />
            </div>
          </div>
        )}

        <DialogFooter className={cn("pt-2", isMobile && "gap-2")}>
          <Button
            type="button"
            variant="outline"
            onClick={() => {
              setOpen(false);
            }}
          >
            {t("button.cancel")}
          </Button>
          <Button
            type="button"
            variant="select"
            disabled={disableSave}
            onClick={handleSave}
          >
            {t("button.save")}
          </Button>
        </DialogFooter>
      </DialogContent>
    </Dialog>
  );
}
@@ -1,5 +1,5 @@
|
||||
import { baseUrl } from "@/api/baseUrl";
|
||||
import ExportCard from "@/components/card/ExportCard";
|
||||
import { CaseCard, ExportCard } from "@/components/card/ExportCard";
|
||||
import {
|
||||
AlertDialog,
|
||||
AlertDialogCancel,
|
||||
@@ -11,64 +11,144 @@ import {
|
||||
} from "@/components/ui/alert-dialog";
|
||||
import { Button } from "@/components/ui/button";
|
||||
import { Dialog, DialogContent, DialogTitle } from "@/components/ui/dialog";
|
||||
import Heading from "@/components/ui/heading";
|
||||
import { Input } from "@/components/ui/input";
|
||||
import { Toaster } from "@/components/ui/sonner";
|
||||
import useKeyboardListener from "@/hooks/use-keyboard-listener";
|
||||
import { useSearchEffect } from "@/hooks/use-overlay-state";
|
||||
import { useHistoryBack } from "@/hooks/use-history-back";
|
||||
import { useApiFilterArgs } from "@/hooks/use-api-filter";
|
||||
import { cn } from "@/lib/utils";
|
||||
import { DeleteClipType, Export } from "@/types/export";
|
||||
import {
|
||||
DeleteClipType,
|
||||
Export,
|
||||
ExportCase,
|
||||
ExportFilter,
|
||||
} from "@/types/export";
|
||||
import OptionAndInputDialog from "@/components/overlay/dialog/OptionAndInputDialog";
|
||||
import axios from "axios";
|
||||
|
||||
import { useCallback, useEffect, useMemo, useRef, useState } from "react";
|
||||
import { isMobile } from "react-device-detect";
|
||||
import {
|
||||
MutableRefObject,
|
||||
useCallback,
|
||||
useEffect,
|
||||
useMemo,
|
||||
useRef,
|
||||
useState,
|
||||
} from "react";
|
||||
import { isMobile, isMobileOnly } from "react-device-detect";
|
||||
import { useTranslation } from "react-i18next";
|
||||
|
||||
import { LuFolderX } from "react-icons/lu";
|
||||
import { toast } from "sonner";
|
||||
import useSWR from "swr";
|
||||
import ExportFilterGroup from "@/components/filter/ExportFilterGroup";
|
||||
|
||||
// always parse these as string arrays
|
||||
const EXPORT_FILTER_ARRAY_KEYS = ["cameras"];
|
||||
|
||||
function Exports() {
|
||||
const { t } = useTranslation(["views/exports"]);
|
||||
const { data: exports, mutate } = useSWR<Export[]>("exports");
|
||||
|
||||
useEffect(() => {
|
||||
document.title = t("documentTitle");
|
||||
}, [t]);
|
||||
|
||||
// Filters
|
||||
|
||||
const [exportFilter, setExportFilter, exportSearchParams] =
|
||||
useApiFilterArgs<ExportFilter>(EXPORT_FILTER_ARRAY_KEYS);
|
||||
|
||||
// Data
|
||||
|
||||
const { data: cases, mutate: updateCases } = useSWR<ExportCase[]>("cases");
|
||||
const { data: rawExports, mutate: updateExports } = useSWR<Export[]>(
|
||||
exportSearchParams && Object.keys(exportSearchParams).length > 0
|
||||
? ["exports", exportSearchParams]
|
||||
: "exports",
|
||||
);
|
||||
|
||||
const exportsByCase = useMemo<{ [caseId: string]: Export[] }>(() => {
|
||||
const grouped: { [caseId: string]: Export[] } = {};
|
||||
(rawExports ?? []).forEach((exp) => {
|
||||
const caseId = exp.export_case || "none";
|
||||
if (!grouped[caseId]) {
|
||||
grouped[caseId] = [];
|
||||
}
|
||||
|
||||
grouped[caseId].push(exp);
|
||||
});
|
||||
return grouped;
|
||||
}, [rawExports]);
|
||||
|
||||
const filteredCases = useMemo<ExportCase[]>(() => {
|
||||
if (!cases) {
|
||||
return [];
|
||||
}
|
||||
|
||||
return cases.filter((caseItem) => {
|
||||
const caseExports = exportsByCase[caseItem.id];
|
||||
return caseExports?.length;
|
||||
});
|
||||
}, [cases, exportsByCase]);
|
||||
|
||||
const exports = useMemo<Export[]>(
|
||||
() => exportsByCase["none"] || [],
|
||||
[exportsByCase],
|
||||
);
  const mutate = useCallback(() => {
    updateExports();
    updateCases();
  }, [updateExports, updateCases]);

  // Search

  const [search, setSearch] = useState("");

  const filteredExports = useMemo(() => {
    if (!search || !exports) {
      return exports;
    }

    return exports.filter((exp) =>
      exp.name
        .toLowerCase()
        .replaceAll("_", " ")
        .includes(search.toLowerCase()),
    );
  }, [exports, search]);
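
  // Underscores in export names are treated as spaces, so an export named
  // "front_door_clip" matches the search "front door".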

  // Viewing

  const [selected, setSelected] = useState<Export>();
  const [selectedCaseId, setSelectedCaseId] = useState<string | undefined>(
    undefined,
  );
  const [selectedAspect, setSelectedAspect] = useState(0.0);

  // Handle browser back button to deselect case before navigating away
  useHistoryBack({
    enabled: true,
    open: selectedCaseId !== undefined,
    onClose: () => setSelectedCaseId(undefined),
  });

  useSearchEffect("id", (id) => {
    if (!exports) {
    if (!rawExports) {
      return false;
    }

    setSelected(exports.find((exp) => exp.id == id));
    setSelected(rawExports.find((exp) => exp.id == id));
    return true;
  });
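
  // Deep-linking: the "id" effect above and the "caseId" effect below read URL
  // search params to preselect an export or case; returning false defers until
  // the backing data has loaded.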

  // Deleting
  useSearchEffect("caseId", (caseId: string) => {
    if (!filteredCases) {
      return false;
    }

    const exists = filteredCases.some((c) => c.id === caseId);

    if (!exists) {
      return false;
    }

    setSelectedCaseId(caseId);
    return true;
  });

  // Modifying

  const [deleteClip, setDeleteClip] = useState<DeleteClipType | undefined>();
  const [exportToAssign, setExportToAssign] = useState<Export | undefined>();

  const onHandleDelete = useCallback(() => {
    if (!deleteClip) {
@@ -83,8 +163,6 @@ function Exports() {
    });
  }, [deleteClip, mutate]);

  // Renaming

  const onHandleRename = useCallback(
    (id: string, update: string) => {
      axios
@@ -107,7 +185,7 @@ function Exports() {
        });
      });
    },
    [mutate, t],
    [mutate, setDeleteClip, t],
  );

  // Keyboard Listener
@@ -115,10 +193,27 @@ function Exports() {
  const contentRef = useRef<HTMLDivElement | null>(null);
  useKeyboardListener([], undefined, contentRef);

  const selectedCase = useMemo(
    () => filteredCases?.find((c) => c.id === selectedCaseId),
    [filteredCases, selectedCaseId],
  );

  const resetCaseDialog = useCallback(() => {
    setExportToAssign(undefined);
  }, []);

  return (
    <div className="flex size-full flex-col gap-2 overflow-hidden px-1 pt-2 md:p-2">
      <Toaster closeButton={true} />

      <CaseAssignmentDialog
        exportToAssign={exportToAssign}
        cases={cases}
        selectedCaseId={selectedCaseId}
        onClose={resetCaseDialog}
        mutate={mutate}
      />

      <AlertDialog
        open={deleteClip != undefined}
        onOpenChange={() => setDeleteClip(undefined)}
@@ -187,47 +282,364 @@ function Exports() {
        </DialogContent>
      </Dialog>

      {exports && (
        <div className="flex w-full items-center justify-center p-2">
      <div
        className={cn(
          "flex w-full flex-col items-start space-y-2 pr-2 md:mb-2 lg:relative lg:h-10 lg:flex-row lg:items-center lg:space-y-0",
          isMobileOnly && "mb-2 h-auto flex-wrap gap-2 space-y-0",
        )}
      >
        <div className="w-full">
          <Input
            className="text-md w-full bg-muted md:w-1/3"
            className="text-md w-full bg-muted md:w-1/2"
            placeholder={t("search")}
            value={search}
            onChange={(e) => setSearch(e.target.value)}
          />
        </div>
      )}
        <ExportFilterGroup
          className="w-full justify-between md:justify-start lg:justify-end"
          filter={exportFilter}
          filters={["cameras"]}
          onUpdateFilter={setExportFilter}
        />
      </div>

      <div className="w-full overflow-hidden">
        {exports && filteredExports && filteredExports.length > 0 ? (
          <div
            ref={contentRef}
            className="scrollbar-container grid size-full gap-2 overflow-y-auto sm:grid-cols-2 lg:grid-cols-3 xl:grid-cols-4"
          >
            {Object.values(exports).map((item) => (
              <ExportCard
                key={item.name}
                className={
                  search == "" || filteredExports.includes(item) ? "" : "hidden"
                }
                exportedRecording={item}
                onSelect={setSelected}
                onRename={onHandleRename}
                onDelete={({ file, exportName }) =>
                  setDeleteClip({ file, exportName })
                }
              />
            ))}
          </div>
        ) : (
          <div className="absolute left-1/2 top-1/2 flex -translate-x-1/2 -translate-y-1/2 flex-col items-center justify-center text-center">
            <LuFolderX className="size-16" />
            {t("noExports")}
          </div>
        )}
        {selectedCase ? (
          <CaseView
            contentRef={contentRef}
            selectedCase={selectedCase}
            exports={exportsByCase[selectedCase.id] || []}
            search={search}
            setSelected={setSelected}
            renameClip={onHandleRename}
            setDeleteClip={setDeleteClip}
            onAssignToCase={setExportToAssign}
          />
        ) : (
          <AllExportsView
            contentRef={contentRef}
            search={search}
            cases={filteredCases}
            exports={exports}
            exportsByCase={exportsByCase}
            setSelectedCaseId={setSelectedCaseId}
            setSelected={setSelected}
            renameClip={onHandleRename}
            setDeleteClip={setDeleteClip}
            onAssignToCase={setExportToAssign}
          />
        )}
      </div>
  );
}

type AllExportsViewProps = {
  contentRef: MutableRefObject<HTMLDivElement | null>;
  search: string;
  cases?: ExportCase[];
  exports: Export[];
  exportsByCase: { [caseId: string]: Export[] };
  setSelectedCaseId: (id: string) => void;
  setSelected: (e: Export) => void;
  renameClip: (id: string, update: string) => void;
  setDeleteClip: (d: DeleteClipType | undefined) => void;
  onAssignToCase: (e: Export) => void;
};
function AllExportsView({
  contentRef,
  search,
  cases,
  exports,
  exportsByCase,
  setSelectedCaseId,
  setSelected,
  renameClip,
  setDeleteClip,
  onAssignToCase,
}: AllExportsViewProps) {
  const { t } = useTranslation(["views/exports"]);

  // Filter

  const filteredCases = useMemo(() => {
    if (!search || !cases) {
      return cases || [];
    }

    return cases.filter(
      (caseItem) =>
        caseItem.name.toLowerCase().includes(search.toLowerCase()) ||
        (caseItem.description &&
          caseItem.description.toLowerCase().includes(search.toLowerCase())),
    );
  }, [search, cases]);

  const filteredExports = useMemo<Export[]>(() => {
    if (!search) {
      return exports;
    }

    return exports.filter((exp) =>
      exp.name
        .toLowerCase()
        .replaceAll("_", " ")
        .includes(search.toLowerCase()),
    );
  }, [exports, search]);
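
  // Case search matches name or description; export search (as above) matches
  // only the underscore-normalized name.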

  return (
    <div className="w-full overflow-hidden">
      {filteredCases?.length || filteredExports.length ? (
        <div
          ref={contentRef}
          className="scrollbar-container flex size-full flex-col gap-4 overflow-y-auto"
        >
          {filteredCases.length > 0 && (
            <div className="space-y-2">
              <Heading as="h4">{t("headings.cases")}</Heading>
              <div className="grid gap-2 sm:grid-cols-2 lg:grid-cols-3 xl:grid-cols-4">
                {cases?.map((item) => (
                  <CaseCard
                    key={item.name}
                    className={
                      search == "" || filteredCases?.includes(item)
                        ? ""
                        : "hidden"
                    }
                    exportCase={item}
                    exports={exportsByCase[item.id] || []}
                    onSelect={() => {
                      setSelectedCaseId(item.id);
                    }}
                  />
                ))}
              </div>
            </div>
          )}

          {filteredExports.length > 0 && (
            <div className="space-y-4">
              <Heading as="h4">{t("headings.uncategorizedExports")}</Heading>
              <div
                ref={contentRef}
                className="scrollbar-container grid gap-2 overflow-y-auto sm:grid-cols-2 lg:grid-cols-3 xl:grid-cols-4"
              >
                {exports.map((item) => (
                  <ExportCard
                    key={item.name}
                    className={
                      search == "" || filteredExports.includes(item)
                        ? ""
                        : "hidden"
                    }
                    exportedRecording={item}
                    onSelect={setSelected}
                    onRename={renameClip}
                    onDelete={({ file, exportName }) =>
                      setDeleteClip({ file, exportName })
                    }
                    onAssignToCase={onAssignToCase}
                  />
                ))}
              </div>
            </div>
          )}
        </div>
      ) : (
        <div className="absolute left-1/2 top-1/2 flex -translate-x-1/2 -translate-y-1/2 flex-col items-center justify-center text-center">
          <LuFolderX className="size-16" />
          {t("noExports")}
        </div>
      )}
    </div>
  );
}

type CaseViewProps = {
  contentRef: MutableRefObject<HTMLDivElement | null>;
  selectedCase: ExportCase;
  exports?: Export[];
  search: string;
  setSelected: (e: Export) => void;
  renameClip: (id: string, update: string) => void;
  setDeleteClip: (d: DeleteClipType | undefined) => void;
  onAssignToCase: (e: Export) => void;
};
function CaseView({
  contentRef,
  selectedCase,
  exports,
  search,
  setSelected,
  renameClip,
  setDeleteClip,
  onAssignToCase,
}: CaseViewProps) {
  const filteredExports = useMemo<Export[]>(() => {
    const caseExports = (exports || []).filter(
      (e) => e.export_case == selectedCase.id,
    );

    if (!search) {
      return caseExports;
    }

    return caseExports.filter((exp) =>
      exp.name
        .toLowerCase()
        .replaceAll("_", " ")
        .includes(search.toLowerCase()),
    );
  }, [selectedCase, exports, search]);

  return (
    <div className="flex size-full flex-col gap-8 overflow-hidden">
      <div className="flex shrink-0 flex-col gap-1">
        <Heading className="capitalize" as="h2">
          {selectedCase.name}
        </Heading>
        <div className="text-secondary-foreground">
          {selectedCase.description}
        </div>
      </div>
      <div
        ref={contentRef}
        className="scrollbar-container grid min-h-0 flex-1 content-start gap-2 overflow-y-auto sm:grid-cols-2 lg:grid-cols-3 xl:grid-cols-4"
      >
        {exports?.map((item) => (
          <ExportCard
            key={item.name}
            className={filteredExports.includes(item) ? "" : "hidden"}
            exportedRecording={item}
            onSelect={setSelected}
            onRename={renameClip}
            onDelete={({ file, exportName }) =>
              setDeleteClip({ file, exportName })
            }
            onAssignToCase={onAssignToCase}
          />
        ))}
      </div>
    </div>
  );
}

type CaseAssignmentDialogProps = {
  exportToAssign?: Export;
  cases?: ExportCase[];
  selectedCaseId?: string;
  onClose: () => void;
  mutate: () => void;
};
function CaseAssignmentDialog({
  exportToAssign,
  cases,
  selectedCaseId,
  onClose,
  mutate,
}: CaseAssignmentDialogProps) {
  const { t } = useTranslation(["views/exports"]);
  const caseOptions = useMemo(
    () => [
      ...(cases ?? [])
        .map((c) => ({
          value: c.id,
          label: c.name,
        }))
        .sort((cA, cB) => cA.label.localeCompare(cB.label)),
      {
        value: "new",
        label: t("caseDialog.newCaseOption"),
      },
    ],
    [cases, t],
  );
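
  // Options are alphabetized by label, with a trailing "new case" sentinel
  // whose value "new" matches newValueKey on the dialog below.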

  const handleSave = useCallback(
    async (caseId: string) => {
      if (!exportToAssign) return;

      try {
        await axios.patch(`export/${exportToAssign.id}/case`, {
          export_case_id: caseId,
        });
        mutate();
        onClose();
      } catch (error: unknown) {
        const apiError = error as {
          response?: { data?: { message?: string; detail?: string } };
        };
        const errorMessage =
          apiError.response?.data?.message ||
          apiError.response?.data?.detail ||
          "Unknown error";
        toast.error(t("toast.error.assignCaseFailed", { errorMessage }), {
          position: "top-center",
        });
      }
    },
    [exportToAssign, mutate, onClose, t],
  );

  const handleCreateNew = useCallback(
    async (name: string, description: string) => {
      if (!exportToAssign) return;

      try {
        const createResp = await axios.post("cases", {
          name,
          description,
        });

        const newCaseId: string | undefined = createResp.data?.id;

        if (newCaseId) {
          await axios.patch(`export/${exportToAssign.id}/case`, {
            export_case_id: newCaseId,
          });
        }

        mutate();
        onClose();
      } catch (error: unknown) {
        const apiError = error as {
          response?: { data?: { message?: string; detail?: string } };
        };
        const errorMessage =
          apiError.response?.data?.message ||
          apiError.response?.data?.detail ||
          "Unknown error";
        toast.error(t("toast.error.assignCaseFailed", { errorMessage }), {
          position: "top-center",
        });
      }
    },
    [exportToAssign, mutate, onClose, t],
  );
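
  // Create-then-assign is two requests (POST cases, then PATCH the export), so
  // a failure on the second call can leave a newly created case with no
  // exports assigned to it.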

  if (!exportToAssign) {
    return null;
  }

  return (
    <OptionAndInputDialog
      open={!!exportToAssign}
      title={t("caseDialog.title")}
      description={t("caseDialog.description")}
      setOpen={(open) => {
        if (!open) {
          onClose();
        }
      }}
      options={caseOptions}
      nameLabel={t("caseDialog.nameLabel")}
      descriptionLabel={t("caseDialog.descriptionLabel")}
      initialValue={selectedCaseId}
      newValueKey="new"
      onSave={handleSave}
      onCreateNew={handleCreateNew}
    />
  );
}

export default Exports;

@@ -36,6 +36,7 @@ import NotificationView from "@/views/settings/NotificationsSettingsView";
import EnrichmentsSettingsView from "@/views/settings/EnrichmentsSettingsView";
import UiSettingsView from "@/views/settings/UiSettingsView";
import FrigatePlusSettingsView from "@/views/settings/FrigatePlusSettingsView";
import MaintenanceSettingsView from "@/views/settings/MaintenanceSettingsView";
import { useSearchEffect } from "@/hooks/use-overlay-state";
import { useNavigate, useSearchParams } from "react-router-dom";
import { useInitialCameraState } from "@/api/ws";
@@ -81,6 +82,7 @@ const allSettingsViews = [
  "roles",
  "notifications",
  "frigateplus",
  "maintenance",
] as const;
type SettingsType = (typeof allSettingsViews)[number];

@@ -120,6 +122,10 @@ const settingsGroups = [
    label: "frigateplus",
    items: [{ key: "frigateplus", component: FrigatePlusSettingsView }],
  },
  {
    label: "maintenance",
    items: [{ key: "maintenance", component: MaintenanceSettingsView }],
  },
];

const CAMERA_SELECT_BUTTON_PAGES = [

@@ -6,9 +6,28 @@ export type Export = {
  video_path: string;
  thumb_path: string;
  in_progress: boolean;
  export_case?: string;
};

export type ExportCase = {
  id: string;
  name: string;
  description: string;
  created_at: number;
  updated_at: number;
};

export type DeleteClipType = {
  file: string;
  exportName: string;
};

// filtering

const EXPORT_FILTERS = ["cameras"] as const;
export type ExportFilters = (typeof EXPORT_FILTERS)[number];
export const DEFAULT_EXPORT_FILTERS: ExportFilters[] = ["cameras"];

export type ExportFilter = {
  cameras?: string[];
};

@@ -197,7 +197,6 @@ export interface CameraConfig {
      days: number;
      mode: string;
    };
    sync_recordings: boolean;
  };
  review: {
    alerts: {
@@ -542,7 +541,6 @@ export interface FrigateConfig {
      days: number;
      mode: string;
    };
    sync_recordings: boolean;
  };

  rtmp: {

@@ -24,6 +24,10 @@ export type CameraStats = {
  pid: number;
  process_fps: number;
  skipped_fps: number;
  connection_quality: "excellent" | "fair" | "poor" | "unusable";
  expected_fps: number;
  reconnects_last_hour: number;
  stalls_last_hour: number;
};

export type CpuStats = {
@@ -37,6 +41,7 @@ export type DetectorStats = {
  detection_start: number;
  inference_speed: number;
  pid: number;
  temperature?: number;
};

export type EmbeddingsStats = {
@@ -56,11 +61,13 @@ export type GpuStats = {
  enc?: string;
  dec?: string;
  pstate?: string;
  temp?: number;
};

export type NpuStats = {
  npu: number;
  mem: string;
  temp?: number;
};

export type GpuInfo = "vainfo" | "nvinfo";
@@ -68,7 +75,6 @@ export type GpuInfo = "vainfo" | "nvinfo";
export type ServiceStats = {
  last_updated: number;
  storage: { [path: string]: StorageStats };
  temperatures: { [apex: string]: number };
  uptime: number;
  latest_version: string;
  version: string;

@@ -126,3 +126,32 @@ export type TriggerStatus = {
  type: string;
  score: number;
};

export type MediaSyncStats = {
  files_checked: number;
  orphans_found: number;
  orphans_deleted: number;
  aborted: boolean;
  error: string | null;
};

export type MediaSyncTotals = {
  files_checked: number;
  orphans_found: number;
  orphans_deleted: number;
};

export type MediaSyncResults = {
  [mediaType: string]: MediaSyncStats | MediaSyncTotals;
  totals: MediaSyncTotals;
};

export type Job = {
  id: string;
  job_type: string;
  status: string;
  results?: MediaSyncResults;
  start_time?: number;
  end_time?: number;
  error_message?: string;
};
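
// Illustrative only: a sketch of how a completed dry-run media sync job might
// look on the wire, assuming the server reports per-media-type stats plus
// totals. All field values here are hypothetical.
const exampleMediaSyncJob: Job = {
  id: "job-123",
  job_type: "media_sync",
  status: "success",
  start_time: 1700000000,
  end_time: 1700000042,
  results: {
    recordings: {
      files_checked: 1200,
      orphans_found: 3,
      orphans_deleted: 0,
      aborted: false,
      error: null,
    },
    totals: { files_checked: 1200, orphans_found: 3, orphans_deleted: 0 },
  },
};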

442 web/src/views/settings/MaintenanceSettingsView.tsx (Normal file)
@@ -0,0 +1,442 @@
import Heading from "@/components/ui/heading";
import { Button } from "@/components/ui/button";
import { Label } from "@/components/ui/label";
import { Separator } from "@/components/ui/separator";
import { Toaster } from "@/components/ui/sonner";
import ActivityIndicator from "@/components/indicators/activity-indicator";
import { useCallback, useState } from "react";
import { useTranslation } from "react-i18next";
import axios from "axios";
import { toast } from "sonner";
import { useJobStatus } from "@/api/ws";
import { Switch } from "@/components/ui/switch";
import { LuCheck, LuX } from "react-icons/lu";
import { cn } from "@/lib/utils";
import { formatUnixTimestampToDateTime } from "@/utils/dateUtil";
import { MediaSyncStats } from "@/types/ws";

export default function MaintenanceSettingsView() {
  const { t } = useTranslation("views/settings");
  const [selectedMediaTypes, setSelectedMediaTypes] = useState<string[]>([
    "all",
  ]);
  const [dryRun, setDryRun] = useState(true);
  const [force, setForce] = useState(false);
  const [isSubmitting, setIsSubmitting] = useState(false);

  const MEDIA_TYPES = [
    { id: "event_snapshots", label: t("maintenance.sync.event_snapshots") },
    { id: "event_thumbnails", label: t("maintenance.sync.event_thumbnails") },
    { id: "review_thumbnails", label: t("maintenance.sync.review_thumbnails") },
    { id: "previews", label: t("maintenance.sync.previews") },
    { id: "exports", label: t("maintenance.sync.exports") },
    { id: "recordings", label: t("maintenance.sync.recordings") },
  ];

  // Subscribe to media sync status via WebSocket
  const { payload: currentJob } = useJobStatus("media_sync");

  const isJobRunning = Boolean(
    currentJob &&
      (currentJob.status === "queued" || currentJob.status === "running"),
  );
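
  // Selection semantics: "all" is a single sentinel entry that stands in for
  // every media type; toggling an individual type drops the sentinel, and
  // clearing the last individual type falls back to "all".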
  const handleMediaTypeChange = useCallback((id: string, checked: boolean) => {
    setSelectedMediaTypes((prev) => {
      if (id === "all") {
        return checked ? ["all"] : [];
      }

      let next = prev.filter((t) => t !== "all");
      if (checked) {
        next.push(id);
      } else {
        next = next.filter((t) => t !== id);
      }
      return next.length === 0 ? ["all"] : next;
    });
  }, []);
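
  // Note: axios rejects non-2xx responses by default, so unless this instance
  // is configured with a custom validateStatus, a 409 would surface in the
  // catch block below rather than in the `response.status === 409` branch.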
  const handleStartSync = useCallback(async () => {
    setIsSubmitting(true);

    try {
      const response = await axios.post(
        "/media/sync",
        {
          dry_run: dryRun,
          media_types: selectedMediaTypes,
          force: force,
        },
        {
          headers: {
            "Content-Type": "application/json",
          },
        },
      );

      if (response.status === 202) {
        toast.success(t("maintenance.sync.started"), {
          position: "top-center",
          closeButton: true,
        });
      } else if (response.status === 409) {
        toast.error(t("maintenance.sync.alreadyRunning"), {
          position: "top-center",
          closeButton: true,
        });
      }
    } catch {
      toast.error(t("maintenance.sync.error"), {
        position: "top-center",
        closeButton: true,
      });
    } finally {
      setIsSubmitting(false);
    }
  }, [selectedMediaTypes, dryRun, force, t]);

  return (
    <>
      <div className="flex size-full flex-col md:flex-row">
        <Toaster position="top-center" closeButton={true} />
        <div className="scrollbar-container order-last mb-2 mt-2 flex h-full w-full flex-col overflow-y-auto px-2 md:order-none">
          <div className="grid w-full grid-cols-1 gap-4 md:grid-cols-2">
            <div className="col-span-1">
              <Heading as="h4" className="mb-2">
                {t("maintenance.sync.title")}
              </Heading>

              <div className="max-w-6xl">
                <div className="mb-5 mt-2 flex max-w-5xl flex-col gap-2 text-sm text-primary-variant">
                  <p>{t("maintenance.sync.desc")}</p>
                </div>
              </div>

              <div className="space-y-6">
                {/* Media Types Selection */}
                <div>
                  <Label className="mb-2 flex flex-row items-center text-base">
                    {t("maintenance.sync.mediaTypes")}
                  </Label>
                  <div className="max-w-md space-y-2 rounded-lg bg-secondary p-4">
                    <div className="flex items-center justify-between">
                      <Label
                        htmlFor="all-media"
                        className="cursor-pointer font-medium"
                      >
                        {t("maintenance.sync.allMedia")}
                      </Label>
                      <Switch
                        id="all-media"
                        checked={selectedMediaTypes.includes("all")}
                        onCheckedChange={(checked) =>
                          handleMediaTypeChange("all", checked)
                        }
                        disabled={isJobRunning}
                      />
                    </div>
                    <div className="ml-4 space-y-2">
                      {MEDIA_TYPES.map((type) => (
                        <div
                          key={type.id}
                          className="flex items-center justify-between"
                        >
                          <Label htmlFor={type.id} className="cursor-pointer">
                            {type.label}
                          </Label>
                          <Switch
                            id={type.id}
                            checked={
                              selectedMediaTypes.includes("all") ||
                              selectedMediaTypes.includes(type.id)
                            }
                            onCheckedChange={(checked) =>
                              handleMediaTypeChange(type.id, checked)
                            }
                            disabled={
                              isJobRunning || selectedMediaTypes.includes("all")
                            }
                          />
                        </div>
                      ))}
                    </div>
                  </div>
                </div>

                {/* Options */}
                <div className="space-y-4">
                  <div className="flex flex-col">
                    <div className="flex flex-row items-center">
                      <Switch
                        id="dry-run"
                        className="mr-3"
                        checked={dryRun}
                        onCheckedChange={setDryRun}
                        disabled={isJobRunning}
                      />
                      <div className="space-y-0.5">
                        <Label htmlFor="dry-run" className="cursor-pointer">
                          {t("maintenance.sync.dryRun")}
                        </Label>
                        <p className="text-xs text-muted-foreground">
                          {dryRun
                            ? t("maintenance.sync.dryRunEnabled")
                            : t("maintenance.sync.dryRunDisabled")}
                        </p>
                      </div>
                    </div>
                  </div>

                  <div className="flex flex-col">
                    <div className="flex flex-row items-center">
                      <Switch
                        id="force"
                        className="mr-3"
                        checked={force}
                        onCheckedChange={setForce}
                        disabled={isJobRunning}
                      />
                      <div className="space-y-0.5">
                        <Label htmlFor="force" className="cursor-pointer">
                          {t("maintenance.sync.force")}
                        </Label>
                        <p className="text-xs text-muted-foreground">
                          {t("maintenance.sync.forceDesc")}
                        </p>
                      </div>
                    </div>
                  </div>
                </div>

                {/* Action Buttons */}
                <div className="flex w-full flex-row items-center gap-2 pt-2 md:w-[50%]">
                  <Button
                    onClick={handleStartSync}
                    disabled={isJobRunning || isSubmitting}
                    className="flex flex-1"
                    variant={"select"}
                  >
                    {(isSubmitting || isJobRunning) && (
                      <ActivityIndicator className="mr-2 size-6" />
                    )}
                    {isJobRunning
                      ? t("maintenance.sync.running")
                      : t("maintenance.sync.start")}
                  </Button>
                </div>
              </div>
            </div>

            <div className="col-span-1">
              <div className="mt-4 gap-2 space-y-3 md:mt-8">
                <Separator className="my-2 flex bg-secondary md:hidden" />
                <div className="flex flex-row items-center justify-between rounded-lg bg-card p-3 md:mr-2">
                  <Heading as="h4" className="my-2">
                    {t("maintenance.sync.currentStatus")}
                  </Heading>
                  <div
                    className={cn(
                      "flex flex-row items-center gap-2",
                      currentJob?.status === "success" && "text-green-500",
                      currentJob?.status === "failed" && "text-destructive",
                    )}
                  >
                    {currentJob?.status === "success" && (
                      <LuCheck className="size-5" />
                    )}
                    {currentJob?.status === "failed" && (
                      <LuX className="size-5" />
                    )}
                    {(currentJob?.status === "running" ||
                      currentJob?.status === "queued") && (
                      <ActivityIndicator className="size-5" />
                    )}
                    {t(
                      `maintenance.sync.status.${currentJob?.status ?? "notRunning"}`,
                    )}
                  </div>
                </div>

                {/* Current Job Status */}
                <div className="space-y-2 text-sm">
                  {currentJob?.start_time && (
                    <div className="flex gap-1">
                      <span className="text-muted-foreground">
                        {t("maintenance.sync.startTime")}:
                      </span>
                      <span className="font-mono">
                        {formatUnixTimestampToDateTime(
                          currentJob?.start_time ?? "-",
                        )}
                      </span>
                    </div>
                  )}
                  {currentJob?.end_time && (
                    <div className="flex gap-1">
                      <span className="text-muted-foreground">
                        {t("maintenance.sync.endTime")}:
                      </span>
                      <span className="font-mono">
                        {formatUnixTimestampToDateTime(currentJob?.end_time)}
                      </span>
                    </div>
                  )}
                  {currentJob?.results && (
                    <div className="mt-2 space-y-2 md:mr-2">
                      <p className="text-sm font-medium text-muted-foreground">
                        {t("maintenance.sync.results")}
                      </p>
                      <div className="rounded-md border border-secondary">
                        {/* Individual media type results */}
                        <div className="divide-y divide-secondary">
                          {Object.entries(currentJob.results)
                            .filter(([key]) => key !== "totals")
                            .map(([mediaType, stats]) => {
                              const mediaStats = stats as MediaSyncStats;
                              return (
                                <div key={mediaType} className="p-3 pb-3">
                                  <p className="mb-1 font-medium capitalize">
                                    {t(`maintenance.sync.${mediaType}`)}
                                  </p>
                                  <div className="ml-2 space-y-0.5">
                                    <div className="flex justify-between">
                                      <span className="text-muted-foreground">
                                        {t(
                                          "maintenance.sync.resultsFields.filesChecked",
                                        )}
                                      </span>
                                      <span>{mediaStats.files_checked}</span>
                                    </div>
                                    <div className="flex justify-between">
                                      <span className="text-muted-foreground">
                                        {t(
                                          "maintenance.sync.resultsFields.orphansFound",
                                        )}
                                      </span>
                                      <span
                                        className={
                                          mediaStats.orphans_found > 0
                                            ? "text-yellow-500"
                                            : ""
                                        }
                                      >
                                        {mediaStats.orphans_found}
                                      </span>
                                    </div>
                                    <div className="flex justify-between">
                                      <span className="text-muted-foreground">
                                        {t(
                                          "maintenance.sync.resultsFields.orphansDeleted",
                                        )}
                                      </span>
                                      <span
                                        className={cn(
                                          "text-muted-foreground",
                                          mediaStats.orphans_deleted > 0 &&
                                            "text-success",
                                          mediaStats.orphans_deleted === 0 &&
                                            mediaStats.aborted &&
                                            "text-destructive",
                                        )}
                                      >
                                        {mediaStats.orphans_deleted}
                                      </span>
                                    </div>
                                    {mediaStats.aborted && (
                                      <div className="flex items-center gap-2 text-destructive">
                                        <LuX className="size-4" />
                                        {t(
                                          "maintenance.sync.resultsFields.aborted",
                                        )}
                                      </div>
                                    )}
                                    {mediaStats.error && (
                                      <div className="text-destructive">
                                        {t(
                                          "maintenance.sync.resultsFields.error",
                                        )}
                                        {": "}
                                        {mediaStats.error}
                                      </div>
                                    )}
                                  </div>
                                </div>
                              );
                            })}
                        </div>
                        {/* Totals */}
                        {currentJob.results.totals && (
                          <div className="border-t border-secondary bg-background_alt p-3">
                            <p className="mb-1 font-medium">
                              {t("maintenance.sync.resultsFields.totals")}
                            </p>
                            <div className="ml-2 space-y-0.5">
                              <div className="flex justify-between">
                                <span className="text-muted-foreground">
                                  {t(
                                    "maintenance.sync.resultsFields.filesChecked",
                                  )}
                                </span>
                                <span className="font-medium">
                                  {currentJob.results.totals.files_checked}
                                </span>
                              </div>
                              <div className="flex justify-between">
                                <span className="text-muted-foreground">
                                  {t(
                                    "maintenance.sync.resultsFields.orphansFound",
                                  )}
                                </span>
                                <span
                                  className={
                                    currentJob.results.totals.orphans_found > 0
                                      ? "font-medium text-yellow-500"
                                      : "font-medium"
                                  }
                                >
                                  {currentJob.results.totals.orphans_found}
                                </span>
                              </div>
                              <div className="flex justify-between">
                                <span className="text-muted-foreground">
                                  {t(
                                    "maintenance.sync.resultsFields.orphansDeleted",
                                  )}
                                </span>
                                <span
                                  className={cn(
                                    "text-medium",
                                    currentJob.results.totals.orphans_deleted >
                                      0
                                      ? "text-success"
                                      : "text-muted-foreground",
                                  )}
                                >
                                  {currentJob.results.totals.orphans_deleted}
                                </span>
                              </div>
                            </div>
                          </div>
                        )}
                      </div>
                    </div>
                  )}
                  {currentJob?.error_message && (
                    <div className="text-destructive">
                      <p className="text-muted-foreground">
                        {t("maintenance.sync.errorLabel")}
                      </p>
                      <p>{currentJob?.error_message}</p>
                    </div>
                  )}
                </div>
              </div>
            </div>
          </div>
        </div>
      </div>
    </>
  );
}

@@ -1,6 +1,7 @@
import { useFrigateStats } from "@/api/ws";
import { CameraLineGraph } from "@/components/graph/LineGraph";
import CameraInfoDialog from "@/components/overlay/CameraInfoDialog";
import { ConnectionQualityIndicator } from "@/components/camera/ConnectionQualityIndicator";
import { Skeleton } from "@/components/ui/skeleton";
import { FrigateConfig } from "@/types/frigateConfig";
import { FrigateStats } from "@/types/stats";
@@ -282,8 +283,37 @@ export default function CameraMetrics({
        )}
        <div className="flex w-full flex-col gap-3">
          <div className="flex flex-row items-center justify-between">
            <div className="text-sm font-medium text-muted-foreground smart-capitalize">
              <CameraNameLabel camera={camera} />
            <div className="flex items-center gap-2">
              <div className="text-sm font-medium text-muted-foreground smart-capitalize">
                <CameraNameLabel camera={camera} />
              </div>
              {statsHistory.length > 0 &&
                statsHistory[statsHistory.length - 1]?.cameras[
                  camera.name
                ] && (
                  <ConnectionQualityIndicator
                    quality={
                      statsHistory[statsHistory.length - 1]?.cameras[
                        camera.name
                      ]?.connection_quality
                    }
                    expectedFps={
                      statsHistory[statsHistory.length - 1]?.cameras[
                        camera.name
                      ]?.expected_fps || 0
                    }
                    reconnects={
                      statsHistory[statsHistory.length - 1]?.cameras[
                        camera.name
                      ]?.reconnects_last_hour || 0
                    }
                    stalls={
                      statsHistory[statsHistory.length - 1]?.cameras[
                        camera.name
                      ]?.stalls_last_hour || 0
                    }
                  />
                )}
            </div>
            <Tooltip>
              <TooltipTrigger>

@@ -127,13 +127,6 @@ export default function GeneralMetrics({
      return undefined;
    }

    if (
      statsHistory.length > 0 &&
      Object.keys(statsHistory[0].service.temperatures).length == 0
    ) {
      return undefined;
    }

    const series: {
      [key: string]: { name: string; data: { x: number; y: number }[] };
    } = {};
@@ -143,22 +136,22 @@ export default function GeneralMetrics({
        return;
      }

      Object.entries(stats.detectors).forEach(([key], cIdx) => {
        if (!key.includes("coral")) {
      Object.entries(stats.detectors).forEach(([key, detectorStats]) => {
        if (detectorStats.temperature === undefined) {
          return;
        }

        if (cIdx <= Object.keys(stats.service.temperatures).length) {
          if (!(key in series)) {
            series[key] = {
              name: key,
              data: [],
            };
          }

          const temp = Object.values(stats.service.temperatures)[cIdx];
          series[key].data.push({ x: statsIdx + 1, y: Math.round(temp) });
        if (!(key in series)) {
          series[key] = {
            name: key,
            data: [],
          };
        }

        series[key].data.push({
          x: statsIdx + 1,
          y: Math.round(detectorStats.temperature),
        });
      });
    });
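
  // The refactor above reads each detector's own `temperature` field from
  // DetectorStats instead of pairing detectors positionally with
  // service.temperatures, so non-Coral detectors that report a temperature
  // are charted as well.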

@@ -375,6 +368,40 @@ export default function GeneralMetrics({
    return Object.keys(series).length > 0 ? Object.values(series) : undefined;
  }, [statsHistory]);

  const gpuTempSeries = useMemo(() => {
    if (!statsHistory) {
      return [];
    }

    const series: {
      [key: string]: { name: string; data: { x: number; y: number }[] };
    } = {};
    let hasValidGpu = false;

    statsHistory.forEach((stats, statsIdx) => {
      if (!stats) {
        return;
      }

      Object.entries(stats.gpu_usages || {}).forEach(([key, stats]) => {
        if (!(key in series)) {
          series[key] = { name: key, data: [] };
        }

        if (stats.temp !== undefined) {
          hasValidGpu = true;
          series[key].data.push({ x: statsIdx + 1, y: stats.temp });
        }
      });
    });

    if (!hasValidGpu) {
      return [];
    }

    return Object.keys(series).length > 0 ? Object.values(series) : undefined;
  }, [statsHistory]);
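
  // Return-shape note: this memo yields [] when stats are missing or no GPU
  // reports a temperature, but `undefined` when the series map ends up empty;
  // consumers guard with `gpuTempSeries?.length`, so both falsy shapes render
  // nothing.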

  // Check if Intel GPU has all 0% usage values (known bug)
  const showIntelGpuWarning = useMemo(() => {
    if (!statsHistory || statsHistory.length < 3) {
@@ -455,6 +482,40 @@ export default function GeneralMetrics({
    return Object.keys(series).length > 0 ? Object.values(series) : [];
  }, [statsHistory]);

  const npuTempSeries = useMemo(() => {
    if (!statsHistory) {
      return [];
    }

    const series: {
      [key: string]: { name: string; data: { x: number; y: number }[] };
    } = {};
    let hasValidNpu = false;

    statsHistory.forEach((stats, statsIdx) => {
      if (!stats) {
        return;
      }

      Object.entries(stats.npu_usages || {}).forEach(([key, stats]) => {
        if (!(key in series)) {
          series[key] = { name: key, data: [] };
        }

        if (stats.temp !== undefined) {
          hasValidNpu = true;
          series[key].data.push({ x: statsIdx + 1, y: stats.temp });
        }
      });
    });

    if (!hasValidNpu) {
      return [];
    }

    return Object.keys(series).length > 0 ? Object.values(series) : undefined;
  }, [statsHistory]);

  // other processes stats

  const hardwareType = useMemo(() => {
@@ -676,7 +737,11 @@ export default function GeneralMetrics({
          <div
            className={cn(
              "mt-4 grid grid-cols-1 gap-2 sm:grid-cols-2",
              gpuEncSeries?.length && "md:grid-cols-4",
              gpuTempSeries?.length && "md:grid-cols-3",
              gpuEncSeries?.length && "xl:grid-cols-4",
              gpuEncSeries?.length &&
                gpuTempSeries?.length &&
                "3xl:grid-cols-5",
            )}
          >
            {statsHistory[0]?.gpu_usages && (
@@ -811,6 +876,30 @@ export default function GeneralMetrics({
            ) : (
              <Skeleton className="aspect-video w-full" />
            )}
            {statsHistory.length != 0 ? (
              <>
                {gpuTempSeries && gpuTempSeries?.length != 0 && (
                  <div className="rounded-lg bg-background_alt p-2.5 md:rounded-2xl">
                    <div className="mb-5">
                      {t("general.hardwareInfo.gpuTemperature")}
                    </div>
                    {gpuTempSeries.map((series) => (
                      <ThresholdBarGraph
                        key={series.name}
                        graphId={`${series.name}-temp`}
                        name={series.name}
                        unit="°C"
                        threshold={DetectorTempThreshold}
                        updateTimes={updateTimes}
                        data={[series]}
                      />
                    ))}
                  </div>
                )}
              </>
            ) : (
              <Skeleton className="aspect-video w-full" />
            )}

            {statsHistory[0]?.npu_usages && (
              <>
@@ -834,6 +923,30 @@ export default function GeneralMetrics({
            ) : (
              <Skeleton className="aspect-video w-full" />
            )}
            {statsHistory.length != 0 ? (
              <>
                {npuTempSeries && npuTempSeries?.length != 0 && (
                  <div className="rounded-lg bg-background_alt p-2.5 md:rounded-2xl">
                    <div className="mb-5">
                      {t("general.hardwareInfo.npuTemperature")}
                    </div>
                    {npuTempSeries.map((series) => (
                      <ThresholdBarGraph
                        key={series.name}
                        graphId={`${series.name}-temp`}
                        name={series.name}
                        unit="°C"
                        threshold={DetectorTempThreshold}
                        updateTimes={updateTimes}
                        data={[series]}
                      />
                    ))}
                  </div>
                )}
              </>
            ) : (
              <Skeleton className="aspect-video w-full" />
            )}
          </>
        )}
      </>

@@ -4,7 +4,7 @@ import { defineConfig } from "vite";
import react from "@vitejs/plugin-react-swc";
import monacoEditorPlugin from "vite-plugin-monaco-editor";

const proxyHost = process.env.PROXY_HOST || "localhost:5000";

// https://vitejs.dev/config/
export default defineConfig({