Compare commits


1 Commit

Author SHA1 Message Date
dependabot[bot]
3003a8d44a Bump onnx from 1.14.0 to 1.20.1 in /docker/tensorrt
Bumps [onnx](https://github.com/onnx/onnx) from 1.14.0 to 1.20.1.
- [Release notes](https://github.com/onnx/onnx/releases)
- [Changelog](https://github.com/onnx/onnx/blob/main/docs/Changelog-ml.md)
- [Commits](https://github.com/onnx/onnx/compare/v1.14.0...v1.20.1)

---
updated-dependencies:
- dependency-name: onnx
  dependency-version: 1.20.1
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2026-01-12 14:08:13 +00:00
544 changed files with 1928 additions and 16150 deletions

View File

@@ -1,385 +1,2 @@
# GitHub Copilot Instructions for Frigate NVR
This document provides coding guidelines and best practices for contributing to Frigate NVR, a complete and local NVR designed for Home Assistant with AI object detection.
## Project Overview
Frigate NVR is a realtime object detection system for IP cameras that uses:
- **Backend**: Python 3.13+ with FastAPI, OpenCV, TensorFlow/ONNX
- **Frontend**: React with TypeScript, Vite, TailwindCSS
- **Architecture**: Multiprocessing design with ZMQ and MQTT communication
- **Focus**: Minimal resource usage with maximum performance
## Code Review Guidelines
When reviewing code, do NOT comment on:
- Missing imports - Static analysis tooling catches these
- Code formatting - Ruff (Python) and Prettier (TypeScript/React) handle formatting
- Minor style inconsistencies already enforced by linters
## Python Backend Standards
### Python Requirements
- **Compatibility**: Python 3.13+
- **Language Features**: Use modern Python features:
- Pattern matching
- Type hints (comprehensive typing preferred)
- f-strings (preferred over `%` or `.format()`)
- Dataclasses
- Async/await patterns
### Code Quality Standards
- **Formatting**: Ruff (configured in `pyproject.toml`)
- **Linting**: Ruff with rules defined in project config
- **Type Checking**: Use type hints consistently
- **Testing**: unittest framework - use `python3 -u -m unittest` to run tests
- **Language**: American English for all code, comments, and documentation
### Logging Standards
- **Logger Pattern**: Use module-level logger
```python
import logging
logger = logging.getLogger(__name__)
```
- **Format Guidelines**:
- No periods at end of log messages
- No sensitive data (keys, tokens, passwords)
- Use lazy logging: `logger.debug("Message with %s", variable)`
- **Log Levels**:
- `debug`: Development and troubleshooting information
- `info`: Important runtime events (startup, shutdown, state changes)
- `warning`: Recoverable issues that should be addressed
- `error`: Errors that affect functionality but don't crash the app
- `exception`: Use in except blocks to include traceback
### Error Handling
- **Exception Types**: Choose most specific exception available
- **Try/Catch Best Practices**:
- Only wrap code that can throw exceptions
- Keep try blocks minimal - process data after the try/except
- Avoid bare exceptions except in background tasks
Bad pattern:
```python
try:
    data = await device.get_data()  # Can throw
    # ❌ Don't process data inside try block
    processed = data.get("value", 0) * 100
    result = processed
except DeviceError:
    logger.error("Failed to get data")
```
Good pattern:
```python
try:
    data = await device.get_data()  # Can throw
except DeviceError:
    logger.error("Failed to get data")
    return
# ✅ Process data outside try block
processed = data.get("value", 0) * 100
result = processed
```
### Async Programming
- **External I/O**: All external I/O operations must be async
- **Best Practices**:
- Avoid sleeping in loops - use `asyncio.sleep()` not `time.sleep()`
- Avoid awaiting in loops - use `asyncio.gather()` instead
- No blocking calls in async functions
- Use `asyncio.create_task()` for background operations
- **Thread Safety**: Use proper synchronization for shared state
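A sketch of the gather pattern described above (camera names and the `fetch` helper are illustrative):

```python
import asyncio


async def fetch(camera: str) -> str:
    # Stand-in for real async I/O, e.g. requesting a camera snapshot
    await asyncio.sleep(0.01)
    return f"frame from {camera}"


async def main() -> list[str]:
    cameras = ["front_door", "back_yard", "garage"]
    # ❌ awaiting fetch() inside a for loop would run sequentially
    # ✅ gather runs all coroutines concurrently, preserving order
    return await asyncio.gather(*(fetch(c) for c in cameras))


results = asyncio.run(main())
```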
### Documentation Standards
- **Module Docstrings**: Concise descriptions at top of files
```python
"""Utilities for motion detection and analysis."""
```
- **Function Docstrings**: Required for public functions and methods
```python
async def process_frame(frame: ndarray, config: Config) -> Detection:
    """Process a video frame for object detection.

    Args:
        frame: The video frame as numpy array
        config: Detection configuration

    Returns:
        Detection results with bounding boxes
    """
```
- **Comment Style**:
- Explain the "why" not just the "what"
- Keep lines under 88 characters when possible
- Use clear, descriptive comments
### File Organization
- **API Endpoints**: `frigate/api/` - FastAPI route handlers
- **Configuration**: `frigate/config/` - Configuration parsing and validation
- **Detectors**: `frigate/detectors/` - Object detection backends
- **Events**: `frigate/events/` - Event management and storage
- **Utilities**: `frigate/util/` - Shared utility functions
## Frontend (React/TypeScript) Standards
### Internationalization (i18n)
- **CRITICAL**: Never write user-facing strings directly in components
- **Always use react-i18next**: Import and use the `t()` function
```tsx
import { useTranslation } from "react-i18next";
function MyComponent() {
  const { t } = useTranslation(["views/live"]);
  return <div>{t("camera_not_found")}</div>;
}
```
- **Translation Files**: Add English strings to the appropriate json files in `web/public/locales/en`
- **Namespaces**: Organize translations by feature/view (e.g., `views/live`, `common`, `views/system`)
### Code Quality
- **Linting**: ESLint (see `web/.eslintrc.cjs`)
- **Formatting**: Prettier with Tailwind CSS plugin
- **Type Safety**: TypeScript strict mode enabled
- **Testing**: Vitest for unit tests
### Component Patterns
- **UI Components**: Use Radix UI primitives (in `web/src/components/ui/`)
- **Styling**: TailwindCSS with `cn()` utility for class merging
- **State Management**: React hooks (useState, useEffect, useCallback, useMemo)
- **Data Fetching**: Custom hooks with proper loading and error states
### ESLint Rules
Key rules enforced:
- `react-hooks/rules-of-hooks`: error
- `react-hooks/exhaustive-deps`: error
- `no-console`: error (use proper logging or remove)
- `@typescript-eslint/no-explicit-any`: warn (always use proper types instead of `any`)
- Unused variables must be prefixed with `_`
- Comma dangles required for multiline objects/arrays
### File Organization
- **Pages**: `web/src/pages/` - Route components
- **Views**: `web/src/views/` - Complex view components
- **Components**: `web/src/components/` - Reusable components
- **Hooks**: `web/src/hooks/` - Custom React hooks
- **API**: `web/src/api/` - API client functions
- **Types**: `web/src/types/` - TypeScript type definitions
## Testing Requirements
### Backend Testing
- **Framework**: Python unittest
- **Run Command**: `python3 -u -m unittest`
- **Location**: `frigate/test/`
- **Coverage**: Aim for comprehensive test coverage of core functionality
- **Pattern**: Use `TestCase` classes with descriptive test method names
```python
class TestMotionDetection(unittest.TestCase):
    def test_detects_motion_above_threshold(self):
        # Test implementation
        ...
```
### Test Best Practices
- Always have a way to test your work and confirm your changes
- Write tests for bug fixes to prevent regressions
- Test edge cases and error conditions
- Mock external dependencies (cameras, APIs, hardware)
- Use fixtures for test data
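As an example of mocking hardware dependencies, the pattern below tests an async function without any real camera (the `get_snapshot` function and its client are hypothetical, not Frigate code):

```python
import unittest
from unittest.mock import AsyncMock


async def get_snapshot(camera_client, name: str) -> bytes:
    # Hypothetical function under test
    return await camera_client.snapshot(name)


class TestGetSnapshot(unittest.IsolatedAsyncioTestCase):
    async def test_returns_snapshot_bytes(self):
        # Mock the camera client so no real hardware is needed
        client = AsyncMock()
        client.snapshot.return_value = b"jpeg-bytes"
        result = await get_snapshot(client, "front_door")
        self.assertEqual(result, b"jpeg-bytes")
        client.snapshot.assert_awaited_once_with("front_door")
```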
## Development Commands
### Python Backend
```bash
# Run all tests
python3 -u -m unittest
# Run specific test file
python3 -u -m unittest frigate.test.test_ffmpeg_presets
# Check formatting (Ruff)
ruff format --check frigate/
# Apply formatting
ruff format frigate/
# Run linter
ruff check frigate/
```
### Frontend (from web/ directory)
```bash
# Start dev server (AI agents should never run this directly unless asked)
npm run dev
# Build for production
npm run build
# Run linter
npm run lint
# Fix linting issues
npm run lint:fix
# Format code
npm run prettier:write
```
### Docker Development
AI agents should never run these commands directly unless instructed.
```bash
# Build local image
make local
# Build debug image
make debug
```
## Common Patterns
### API Endpoint Pattern
```python
from fastapi import APIRouter, Request
from frigate.api.defs.tags import Tags
router = APIRouter(tags=[Tags.Events])
@router.get("/events")
async def get_events(request: Request, limit: int = 100):
    """Retrieve events from the database."""
    # Implementation
```
### Configuration Access
```python
# Access Frigate configuration
config: FrigateConfig = request.app.frigate_config
camera_config = config.cameras["front_door"]
```
### Database Queries
```python
from frigate.models import Event
# Use Peewee ORM for database access
events = (
    Event.select()
    .where(Event.camera == camera_name)
    .order_by(Event.start_time.desc())
    .limit(limit)
)
```
## Common Anti-Patterns to Avoid
### ❌ Avoid These
```python
# Blocking operations in async functions
data = requests.get(url) # ❌ Use async HTTP client
time.sleep(5) # ❌ Use asyncio.sleep()
# Hardcoded strings in React components
<div>Camera not found</div> # ❌ Use t("camera_not_found")
# Missing error handling
data = await api.get_data() # ❌ No exception handling
# Bare exceptions in regular code
try:
    value = await sensor.read()
except Exception:  # ❌ Too broad
    logger.error("Failed")
```
### ✅ Use These Instead
```python
# Async operations
import aiohttp
async with aiohttp.ClientSession() as session:
    async with session.get(url) as response:
        data = await response.json()
await asyncio.sleep(5) # ✅ Non-blocking
# Translatable strings in React
const { t } = useTranslation();
<div>{t("camera_not_found")}</div> # ✅ Translatable
# Proper error handling
try:
    data = await api.get_data()
except ApiException as err:
    logger.error("API error: %s", err)
    raise

# Specific exceptions
try:
    value = await sensor.read()
except SensorException as err:  # ✅ Specific
    logger.exception("Failed to read sensor")
```
## Project-Specific Conventions
### Configuration Files
- Main config: `config/config.yml`
### Directory Structure
- Backend code: `frigate/`
- Frontend code: `web/`
- Docker files: `docker/`
- Documentation: `docs/`
- Database migrations: `migrations/`
### Code Style Conformance
Always conform new and refactored code to the existing coding style in the project:
- Follow established patterns in similar files
- Match indentation and formatting of surrounding code
- Use consistent naming conventions (snake_case for Python, camelCase for TypeScript)
- Maintain the same level of verbosity in comments and docstrings
## Additional Resources
- Documentation: https://docs.frigate.video
- Main Repository: https://github.com/blakeblackshear/frigate
- Home Assistant Integration: https://github.com/blakeblackshear/frigate-hass-integration
Never write strings in the frontend directly, always write to and reference the relevant translations file.
Always conform new and refactored code to the existing coding style in the project.

View File

@@ -2,19 +2,15 @@
# Update package list and install dependencies
sudo apt-get update
sudo apt-get install -y build-essential cmake git wget linux-headers-$(uname -r)
sudo apt-get install -y build-essential cmake git wget
hailo_version="4.21.0"
arch=$(uname -m)
if [[ $arch == "aarch64" ]]; then
    source /etc/os-release
    os_codename=$VERSION_CODENAME
    echo "Detected OS codename: $os_codename"
fi
if [ "$os_codename" = "trixie" ]; then
    sudo apt install -y dkms
if [[ $arch == "x86_64" ]]; then
    sudo apt install -y linux-headers-$(uname -r);
else
    sudo apt install -y linux-modules-extra-$(uname -r);
fi
# Clone the HailoRT driver repository
@@ -51,4 +47,3 @@ sudo udevadm control --reload-rules && sudo udevadm trigger
echo "HailoRT driver installation complete."
echo "reboot your system to load the firmware!"
echo "Driver version: $(modinfo -F version hailo_pci)"

View File

@@ -47,7 +47,7 @@ onnxruntime == 1.22.*
# Embeddings
transformers == 4.45.*
# Generative AI
google-genai == 1.58.*
google-generativeai == 0.8.*
ollama == 0.6.*
openai == 1.65.*
# push notifications

View File

@@ -54,8 +54,8 @@ function setup_homekit_config() {
    local config_path="$1"
    if [[ ! -f "${config_path}" ]]; then
        echo "[INFO] Creating empty config file for HomeKit..."
        echo '{}' > "${config_path}"
        echo "[INFO] Creating empty HomeKit config file..."
        echo 'homekit: {}' > "${config_path}"
    fi
# Convert YAML to JSON for jq processing
@@ -69,15 +69,15 @@ function setup_homekit_config() {
    local cleaned_json="/tmp/cache/homekit_cleaned.json"
    jq '
        # Keep only the homekit section if it exists, otherwise empty object
        if has("homekit") then {homekit: .homekit} else {} end
        if has("homekit") then {homekit: .homekit} else {homekit: {}} end
    ' "${temp_json}" > "${cleaned_json}" 2>/dev/null || {
        echo '{}' > "${cleaned_json}"
        echo '{"homekit": {}}' > "${cleaned_json}"
    }
    # Convert back to YAML and write to the config file
    yq eval -P "${cleaned_json}" > "${config_path}" 2>/dev/null || {
        echo "[WARNING] Failed to convert cleaned config to YAML, creating minimal config"
        echo '{}' > "${config_path}"
        echo 'homekit: {}' > "${config_path}"
    }
    # Clean up temp files

View File

@@ -23,28 +23,8 @@ sys.path.remove("/opt/frigate")
yaml = YAML()
# Check if arbitrary exec sources are allowed (defaults to False for security)
allow_arbitrary_exec = None
if "GO2RTC_ALLOW_ARBITRARY_EXEC" in os.environ:
    allow_arbitrary_exec = os.environ.get("GO2RTC_ALLOW_ARBITRARY_EXEC")
elif (
    os.path.isdir("/run/secrets")
    and os.access("/run/secrets", os.R_OK)
    and "GO2RTC_ALLOW_ARBITRARY_EXEC" in os.listdir("/run/secrets")
):
    allow_arbitrary_exec = (
        Path(os.path.join("/run/secrets", "GO2RTC_ALLOW_ARBITRARY_EXEC"))
        .read_text()
        .strip()
    )
# check for the add-on options file
elif os.path.isfile("/data/options.json"):
    with open("/data/options.json") as f:
        raw_options = f.read()
    options = json.loads(raw_options)
    allow_arbitrary_exec = options.get("go2rtc_allow_arbitrary_exec")
ALLOW_ARBITRARY_EXEC = allow_arbitrary_exec is not None and str(
    allow_arbitrary_exec
ALLOW_ARBITRARY_EXEC = os.environ.get(
    "GO2RTC_ALLOW_ARBITRARY_EXEC", "false"
).lower() in ("true", "1", "yes")
FRIGATE_ENV_VARS = {k: v for k, v in os.environ.items() if k.startswith("FRIGATE_")}

View File

@@ -13,6 +13,6 @@ nvidia_cusolver_cu12==11.6.3.*; platform_machine == 'x86_64'
nvidia_cusparse_cu12==12.5.1.*; platform_machine == 'x86_64'
nvidia_nccl_cu12==2.23.4; platform_machine == 'x86_64'
nvidia_nvjitlink_cu12==12.5.82; platform_machine == 'x86_64'
onnx==1.16.*; platform_machine == 'x86_64'
onnx==1.20.*; platform_machine == 'x86_64'
onnxruntime-gpu==1.22.*; platform_machine == 'x86_64'
protobuf==3.20.3; platform_machine == 'x86_64'

View File

@@ -1,2 +1,2 @@
onnx == 1.14.0; platform_machine == 'aarch64'
onnx == 1.20.1; platform_machine == 'aarch64'
protobuf == 3.20.3; platform_machine == 'aarch64'

View File

@@ -29,10 +29,6 @@ auth:
reset_admin_password: true
```
## Password guidance
Constructing secure passwords and managing them properly is important. Frigate requires a minimum length of 12 characters. For guidance on password standards see [NIST SP 800-63B](https://pages.nist.gov/800-63-3/sp800-63b.html). To learn what makes a password truly secure, read this [article](https://medium.com/peerio/how-to-build-a-billion-dollar-password-3d92568d9277).
## Login failure rate limiting
In order to limit the risk of brute force attacks, rate limiting is available for login failures. This is implemented with SlowApi, and the string notation for valid values is available in [the documentation](https://limits.readthedocs.io/en/stable/quickstart.html#examples).
@@ -166,10 +162,6 @@ In this example:
- If no mapping matches, Frigate falls back to `default_role` if configured.
- If `role_map` is not defined, Frigate assumes the role header directly contains `admin`, `viewer`, or a custom role name.
**Note on matching semantics:**
- Admin precedence: if the `admin` mapping matches, Frigate resolves the session to `admin` to avoid accidental downgrade when a user belongs to multiple groups (for example both `admin` and `viewer` groups).
#### Port Considerations
**Authenticated Port (8971)**

View File

@@ -79,12 +79,6 @@ cameras:
If the ONVIF connection is successful, PTZ controls will be available in the camera's WebUI.
:::note
Some cameras use a separate ONVIF/service account that is distinct from the device administrator credentials. If ONVIF authentication fails with the admin account, try creating or using an ONVIF/service user in the camera's firmware. Refer to your camera manufacturer's documentation for more.
:::
:::tip
If your ONVIF camera does not require authentication credentials, you may still need to specify an empty string for `user` and `password`, e.g. `user: ""` and `password: ""`.
@@ -101,7 +95,7 @@ The FeatureList on the [ONVIF Conformant Products Database](https://www.onvif.or
| Brand or specific camera | PTZ Controls | Autotracking | Notes |
| ---------------------------- | :----------: | :----------: | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| Amcrest | ✅ | ✅ | ⛔️ Generally, Amcrest should work, but some older models (like the common IP2M-841) don't support autotracking |
| Amcrest ASH21 | ✅ | ❌ | ONVIF service port: 80 |
| Amcrest IP4M-S2112EW-AI | ✅ | ❌ | FOV relative movement not supported. |
| Amcrest IP5M-1190EW | ✅ | ❌ | ONVIF Port: 80. FOV relative movement not supported. |

View File

@@ -0,0 +1,249 @@
---
id: genai
title: Generative AI
---
Generative AI can be used to automatically generate descriptive text based on the thumbnails of your tracked objects. This helps with [Semantic Search](/configuration/semantic_search) in Frigate to provide more context about your tracked objects. Descriptions are accessed via the _Explore_ view in the Frigate UI by clicking on a tracked object's thumbnail.
Requests for a description are sent to your AI provider automatically at the end of the tracked object's lifecycle, or can optionally be sent earlier after a number of significantly changed frames, for example for use in more real-time notifications. Descriptions can also be regenerated manually via the Frigate UI. Note that if you manually enter a description for a tracked object before its lifecycle ends, it will be overwritten by the generated response.
## Configuration
Generative AI can be enabled for all cameras or only for specific cameras. If GenAI is disabled for a camera, you can still manually generate descriptions for events using the HTTP API. There are currently 3 native providers available to integrate with Frigate. Other providers that support the OpenAI standard API can also be used. See the OpenAI section below.
To use Generative AI, you must define a single provider at the global level of your Frigate configuration. If the provider you choose requires an API key, you may either directly paste it in your configuration, or store it in an environment variable prefixed with `FRIGATE_`.
```yaml
genai:
  provider: gemini
  api_key: "{FRIGATE_GEMINI_API_KEY}"
  model: gemini-2.0-flash

cameras:
  front_camera:
    genai:
      enabled: True # <- enable GenAI for your front camera
      use_snapshot: True
      objects:
        - person
      required_zones:
        - steps
  indoor_camera:
    objects:
      genai:
        enabled: False # <- disable GenAI for your indoor camera
```
By default, descriptions will be generated for all tracked objects and all zones. But you can also optionally specify `objects` and `required_zones` to only generate descriptions for certain tracked objects or zones.
Optionally, you can generate the description using a snapshot (if enabled) by setting `use_snapshot` to `True`. By default, this is set to `False`, which sends the uncompressed images from the `detect` stream collected over the object's lifetime to the model. Once the object lifecycle ends, only a single compressed and cropped thumbnail is saved with the tracked object. Using a snapshot might be useful when you want to _regenerate_ a tracked object's description as it will provide the AI with a higher-quality image (typically downscaled by the AI itself) than the cropped/compressed thumbnail. Using a snapshot otherwise has a trade-off in that only a single image is sent to your provider, which will limit the model's ability to determine object movement or direction.
Generative AI can also be toggled dynamically for a camera via MQTT with the topic `frigate/<camera_name>/object_descriptions/set`. See the [MQTT documentation](/integrations/mqtt/#frigatecamera_nameobjectdescriptionsset).
## Ollama
:::warning
Using Ollama on CPU is not recommended; high inference times make using Generative AI impractical.
:::
[Ollama](https://ollama.com/) allows you to self-host large language models and keep everything running locally. It is highly recommended to host this server on a machine with an Nvidia graphics card, or on an Apple silicon Mac, for best performance.
Most of the 7b parameter 4-bit vision models will fit inside 8GB of VRAM. There is also a [Docker container](https://hub.docker.com/r/ollama/ollama) available.
Parallel requests also come with some caveats. You will need to set `OLLAMA_NUM_PARALLEL=1` and choose `OLLAMA_MAX_QUEUE` and `OLLAMA_MAX_LOADED_MODELS` values that are appropriate for your hardware and preferences. See the [Ollama documentation](https://docs.ollama.com/faq#how-does-ollama-handle-concurrent-requests).
### Model Types: Instruct vs Thinking
Most vision-language models are available as **instruct** models, which are fine-tuned to follow instructions and respond concisely to prompts. However, some models (such as certain Qwen-VL or minigpt variants) offer both **instruct** and **thinking** versions.
- **Instruct models** are always recommended for use with Frigate. These models generate direct, relevant, actionable descriptions that best fit Frigate's object and event summary use case.
- **Thinking models** are fine-tuned for more free-form, open-ended, and speculative outputs, which are typically not concise and may not provide the practical summaries Frigate expects. For this reason, Frigate does **not** recommend or support using thinking models.
Some models are labeled as **hybrid** (capable of both thinking and instruct tasks). In these cases, Frigate will always use instruct-style prompts and specifically disables thinking-mode behaviors to ensure concise, useful responses.
**Recommendation:**
Always select the `-instruct` or documented instruct-tagged variant of any model you use in your Frigate configuration. If in doubt, refer to your model provider's documentation or model library for guidance on the correct model variant to use.
### Supported Models
You must use a vision capable model with Frigate. Current model variants can be found [in their model library](https://ollama.com/search?c=vision). Note that Frigate will not automatically download the model you specify in your config; you must download the model to your local instance of Ollama first, e.g. by running `ollama pull qwen3-vl:2b-instruct` on your Ollama server/Docker container. Note that the model specified in Frigate's config must match the downloaded model tag.
:::note
You should have at least 8 GB of RAM available (or VRAM if running on GPU) to run the 7B models, 16 GB to run the 13B models, and 32 GB to run the 33B models.
:::
#### Ollama Cloud models
Ollama also supports [cloud models](https://ollama.com/cloud), where your local Ollama instance handles requests from Frigate, but model inference is performed in the cloud. Set up Ollama locally, sign in with your Ollama account, and specify the cloud model name in your Frigate config. For more details, see the Ollama cloud model [docs](https://docs.ollama.com/cloud).
### Configuration
```yaml
genai:
  provider: ollama
  base_url: http://localhost:11434
  model: qwen3-vl:4b
```
## Google Gemini
Google Gemini has a free tier allowing [15 queries per minute](https://ai.google.dev/pricing) to the API, which is more than sufficient for standard Frigate usage.
### Supported Models
You must use a vision capable model with Frigate. Current model variants can be found [in their documentation](https://ai.google.dev/gemini-api/docs/models/gemini).
### Get API Key
To start using Gemini, you must first get an API key from [Google AI Studio](https://aistudio.google.com).
1. Accept the Terms of Service
2. Click "Get API Key" from the right hand navigation
3. Click "Create API key in new project"
4. Copy the API key for use in your config
### Configuration
```yaml
genai:
  provider: gemini
  api_key: "{FRIGATE_GEMINI_API_KEY}"
  model: gemini-2.0-flash
```
:::note
To use a different Gemini-compatible API endpoint, set the `GEMINI_BASE_URL` environment variable to your provider's API URL.
:::
## OpenAI
OpenAI does not have a free tier for their API. With the release of gpt-4o, pricing has been reduced and each generation should cost fractions of a cent if you choose to go this route.
### Supported Models
You must use a vision capable model with Frigate. Current model variants can be found [in their documentation](https://platform.openai.com/docs/models).
### Get API Key
To start using OpenAI, you must first [create an API key](https://platform.openai.com/api-keys) and [configure billing](https://platform.openai.com/settings/organization/billing/overview).
### Configuration
```yaml
genai:
  provider: openai
  api_key: "{FRIGATE_OPENAI_API_KEY}"
  model: gpt-4o
```
:::note
To use a different OpenAI-compatible API endpoint, set the `OPENAI_BASE_URL` environment variable to your provider's API URL.
:::
## Azure OpenAI
Microsoft offers several vision models through Azure OpenAI. A subscription is required.
### Supported Models
You must use a vision capable model with Frigate. Current model variants can be found [in their documentation](https://learn.microsoft.com/en-us/azure/ai-services/openai/concepts/models).
### Create Resource and Get API Key
To start using Azure OpenAI, you must first [create a resource](https://learn.microsoft.com/azure/cognitive-services/openai/how-to/create-resource?pivots=web-portal#create-a-resource). You'll need your API key, model name, and resource URL, which must include the `api-version` parameter (see the example below).
### Configuration
```yaml
genai:
  provider: azure_openai
  base_url: https://instance.cognitiveservices.azure.com/openai/responses?api-version=2025-04-01-preview
  model: gpt-5-mini
  api_key: "{FRIGATE_OPENAI_API_KEY}"
```
## Usage and Best Practices
Frigate's thumbnail search excels at identifying specific details about tracked objects, for example using an "image caption" approach to find a "person wearing a yellow vest," "a white dog running across the lawn," or "a red car on a residential street." To enhance this further, Frigate's default prompts are designed to ask your AI provider about the intent behind the object's actions, rather than just describing its appearance.
While generating simple descriptions of detected objects is useful, understanding intent provides a deeper layer of insight. Instead of just recognizing "what" is in a scene, Frigate's default prompts aim to infer "why" it might be there or "what" it could do next. Descriptions tell you what's happening, but intent gives context. For instance, a person walking toward a door might seem like a visitor, but if they're moving quickly after hours, you can infer a potential break-in attempt. Detecting a person loitering near a door at night can trigger an alert sooner than simply noting "a person standing by the door," helping you respond based on the situation's context.
### Using GenAI for notifications
Frigate provides an [MQTT topic](/integrations/mqtt), `frigate/tracked_object_update`, that is updated with a JSON payload containing `event_id` and `description` when your AI provider returns a description for a tracked object. This description could be used directly in notifications, such as sending alerts to your phone or making audio announcements. If additional details from the tracked object are needed, you can query the [HTTP API](/integrations/api/event-events-event-id-get) using the `event_id`, e.g. `http://frigate_ip:5000/api/events/<event_id>`.
To get notifications earlier than when an object ceases to be tracked, an additional send trigger, `after_significant_updates`, can be configured.
```yaml
genai:
  send_triggers:
    tracked_object_end: true # default
    after_significant_updates: 3 # how many updates to a tracked object before we should send an image
```
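As a sketch, a consumer of that MQTT payload might extract the two documented fields like this (an illustrative helper, not part of Frigate; the real payload may contain additional keys):

```python
import json


def parse_tracked_object_update(payload: bytes) -> tuple[str, str]:
    # Pull out the two fields documented above; other keys are ignored
    data = json.loads(payload)
    return data["event_id"], data["description"]


# Hypothetical example payload shaped like the one described above
example = b'{"event_id": "1700000000.123456-abc123", "description": "A person walks toward the door"}'
event_id, description = parse_tracked_object_update(example)
```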
## Custom Prompts
Frigate sends multiple frames from the tracked object along with a prompt to your Generative AI provider asking it to generate a description. The default prompt is as follows:
```
Analyze the sequence of images containing the {label}. Focus on the likely intent or behavior of the {label} based on its actions and movement, rather than describing its appearance or the surroundings. Consider what the {label} is doing, why, and what it might do next.
```
:::tip
Prompts can use variable replacements `{label}`, `{sub_label}`, and `{camera}` to substitute information from the tracked object as part of the prompt.
:::
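The substitution itself can be pictured as simple template filling (a sketch only; Frigate's internal mechanism may differ):

```python
prompt = "Analyze the sequence of images containing the {label} from the {camera} camera."
# Fill in details from a hypothetical tracked object
filled = prompt.format(label="person", camera="front_door")
```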
You are also able to define custom prompts in your configuration.
```yaml
genai:
  provider: ollama
  base_url: http://localhost:11434
  model: qwen3-vl:8b-instruct

objects:
  prompt: "Analyze the {label} in these images from the {camera} security camera. Focus on the actions, behavior, and potential intent of the {label}, rather than just describing its appearance."
  object_prompts:
    person: "Examine the main person in these images. What are they doing and what might their actions suggest about their intent (e.g., approaching a door, leaving an area, standing still)? Do not describe the surroundings or static details."
    car: "Observe the primary vehicle in these images. Focus on its movement, direction, or purpose (e.g., parking, approaching, circling). If it's a delivery vehicle, mention the company."
```
Prompts can also be overridden at the camera level to provide a more detailed prompt to the model about your specific camera, if you desire.
```yaml
cameras:
  front_door:
    objects:
      genai:
        enabled: True
        use_snapshot: True
        prompt: "Analyze the {label} in these images from the {camera} security camera at the front door. Focus on the actions and potential intent of the {label}."
        object_prompts:
          person: "Examine the person in these images. What are they doing, and how might their actions suggest their purpose (e.g., delivering something, approaching, leaving)? If they are carrying or interacting with a package, include details about its source or destination."
          cat: "Observe the cat in these images. Focus on its movement and intent (e.g., wandering, hunting, interacting with objects). If the cat is near the flower pots or engaging in any specific actions, mention it."
        objects:
          - person
          - cat
        required_zones:
          - steps
```
### Experiment with prompts
Many providers also have a public-facing chat interface for their models. Download a couple of different thumbnails or snapshots from Frigate and experiment in the playground to get descriptions to your liking before updating the prompt in Frigate.
- OpenAI - [ChatGPT](https://chatgpt.com)
- Gemini - [Google AI Studio](https://aistudio.google.com)
- Ollama - [Open WebUI](https://docs.openwebui.com/)

View File

@@ -17,23 +17,11 @@ Using Ollama on CPU is not recommended, high inference times make using Generati
:::
[Ollama](https://ollama.com/) allows you to self-host large language models and keep everything running locally. It is highly recommended to host this server on a machine with an Nvidia graphics card, or on an Apple silicon Mac, for best performance.
[Ollama](https://ollama.com/) allows you to self-host large language models and keep everything running locally. It provides a nice API over [llama.cpp](https://github.com/ggerganov/llama.cpp). It is highly recommended to host this server on a machine with an Nvidia graphics card, or on an Apple silicon Mac, for best performance.
Most 7B-parameter 4-bit vision models will fit inside 8GB of VRAM. There is also a [Docker container](https://hub.docker.com/r/ollama/ollama) available.
Parallel requests also come with some caveats. You will need to set `OLLAMA_NUM_PARALLEL=1` and choose a `OLLAMA_MAX_QUEUE` and `OLLAMA_MAX_LOADED_MODELS` values that are appropriate for your hardware and preferences. See the [Ollama documentation](https://docs.ollama.com/faq#how-does-ollama-handle-concurrent-requests).
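If you run Ollama in Docker, these environment variables can be set on the container. A minimal sketch, where the service layout and the queue/model limits are illustrative assumptions to tune for your hardware:

```yaml
services:
  ollama:
    image: ollama/ollama
    restart: unless-stopped
    environment:
      - OLLAMA_NUM_PARALLEL=1       # serialize requests as recommended above
      - OLLAMA_MAX_QUEUE=64         # pending-request queue depth (tune to taste)
      - OLLAMA_MAX_LOADED_MODELS=1  # keep a single model resident in memory
    ports:
      - "11434:11434"
```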
### Model Types: Instruct vs Thinking
Most vision-language models are available as **instruct** models, which are fine-tuned to follow instructions and respond concisely to prompts. However, some models (such as certain Qwen-VL or minigpt variants) offer both **instruct** and **thinking** versions.
- **Instruct models** are always recommended for use with Frigate. These models generate direct, relevant, actionable descriptions that best fit Frigate's object and event summary use case.
- **Thinking models** are fine-tuned for more free-form, open-ended, and speculative outputs, which are typically not concise and may not provide the practical summaries Frigate expects. For this reason, Frigate does **not** recommend or support using thinking models.
Some models are labeled as **hybrid** (capable of both thinking and instruct tasks). In these cases, Frigate will always use instruct-style prompts and specifically disable thinking-mode behaviors to ensure concise, useful responses.
**Recommendation:**
Always select the `-instruct` or documented instruct/tagged variant of any model you use in your Frigate configuration. If in doubt, refer to your model provider's documentation or model library for guidance on the correct model variant to use.
Parallel requests also come with some caveats. You will need to set `OLLAMA_NUM_PARALLEL=1` and choose a `OLLAMA_MAX_QUEUE` and `OLLAMA_MAX_LOADED_MODELS` values that are appropriate for your hardware and preferences. See the [Ollama documentation](https://github.com/ollama/ollama/blob/main/docs/faq.md#how-does-ollama-handle-concurrent-requests).
### Supported Models
@@ -53,12 +41,12 @@ If you are trying to use a single model for Frigate and HomeAssistant, it will n
The following models are recommended:
| Model | Notes |
| ------------- | -------------------------------------------------------------------- |
| `qwen3-vl` | Strong visual and situational understanding, higher VRAM requirement |
| `Intern3.5VL` | Relatively fast with good vision comprehension |
| `gemma3` | Strong frame-to-frame understanding, slower inference times |
| `qwen2.5-vl` | Fast but capable model with good vision comprehension |
| Model | Notes |
| ----------------- | -------------------------------------------------------------------- |
| `qwen3-vl`        | Strong visual and situational understanding, higher VRAM requirement |
| `Intern3.5VL` | Relatively fast with good vision comprehension |
| `gemma3` | Strong frame-to-frame understanding, slower inference times |
| `qwen2.5-vl` | Fast but capable model with good vision comprehension |
:::note
@@ -66,26 +54,26 @@ You should have at least 8 GB of RAM available (or VRAM if running on GPU) to ru
:::
#### Ollama Cloud models
Ollama also supports [cloud models](https://ollama.com/cloud), where your local Ollama instance handles requests from Frigate, but model inference is performed in the cloud. Set up Ollama locally, sign in with your Ollama account, and specify the cloud model name in your Frigate config. For more details, see the Ollama cloud model [docs](https://docs.ollama.com/cloud).
### Configuration
```yaml
genai:
provider: ollama
base_url: http://localhost:11434
model: qwen3-vl:4b
model: minicpm-v:8b
provider_options: # other Ollama client options can be defined
keep_alive: -1
options:
num_ctx: 8192 # make sure the context matches other services that are using ollama
```
## Google Gemini
Google Gemini has a [free tier](https://ai.google.dev/pricing) for the API, however the limits may not be sufficient for standard Frigate usage. Choose a plan appropriate for your installation.
Google Gemini has a free tier allowing [15 queries per minute](https://ai.google.dev/pricing) to the API, which is more than sufficient for standard Frigate usage.
### Supported Models
You must use a vision capable model with Frigate. Current model variants can be found [in their documentation](https://ai.google.dev/gemini-api/docs/models/gemini).
You must use a vision capable model with Frigate. Current model variants can be found [in their documentation](https://ai.google.dev/gemini-api/docs/models/gemini). At the time of writing, this includes `gemini-1.5-pro` and `gemini-1.5-flash`.
### Get API Key
@@ -102,32 +90,16 @@ To start using Gemini, you must first get an API key from [Google AI Studio](htt
genai:
provider: gemini
api_key: "{FRIGATE_GEMINI_API_KEY}"
model: gemini-2.5-flash
model: gemini-1.5-flash
```
:::note
To use a different Gemini-compatible API endpoint, set the `provider_options` with the `base_url` key to your provider's API URL. For example:
```
genai:
provider: gemini
...
provider_options:
base_url: https://...
```
Other HTTP options are available, see the [python-genai documentation](https://github.com/googleapis/python-genai).
:::
## OpenAI
OpenAI does not have a free tier for their API. With the release of gpt-4o, pricing has been reduced and each generation should cost fractions of a cent if you choose to go this route.
### Supported Models
You must use a vision capable model with Frigate. Current model variants can be found [in their documentation](https://platform.openai.com/docs/models).
You must use a vision capable model with Frigate. Current model variants can be found [in their documentation](https://platform.openai.com/docs/models). At the time of writing, this includes `gpt-4o` and `gpt-4-turbo`.
### Get API Key
@@ -148,41 +120,23 @@ To use a different OpenAI-compatible API endpoint, set the `OPENAI_BASE_URL` env
:::
:::tip
For OpenAI-compatible servers (such as llama.cpp) that don't expose the configured context size in the API response, you can manually specify the context size in `provider_options`:
```yaml
genai:
provider: openai
base_url: http://your-llama-server
model: your-model-name
provider_options:
context_size: 8192 # Specify the configured context size
```
This ensures Frigate uses the correct context window size when generating prompts.
:::
## Azure OpenAI
Microsoft offers several vision models through Azure OpenAI. A subscription is required.
### Supported Models
You must use a vision capable model with Frigate. Current model variants can be found [in their documentation](https://learn.microsoft.com/en-us/azure/ai-services/openai/concepts/models).
You must use a vision capable model with Frigate. Current model variants can be found [in their documentation](https://learn.microsoft.com/en-us/azure/ai-services/openai/concepts/models). At the time of writing, this includes `gpt-4o` and `gpt-4-turbo`.
### Create Resource and Get API Key
To start using Azure OpenAI, you must first [create a resource](https://learn.microsoft.com/azure/cognitive-services/openai/how-to/create-resource?pivots=web-portal#create-a-resource). You'll need your API key, model name, and resource URL, which must include the `api-version` parameter (see the example below).
To start using Azure OpenAI, you must first [create a resource](https://learn.microsoft.com/azure/cognitive-services/openai/how-to/create-resource?pivots=web-portal#create-a-resource). You'll need your API key and resource URL, which must include the `api-version` parameter (see the example below). The model field is not required in your configuration as the model is part of the deployment name you chose when deploying the resource.
### Configuration
```yaml
genai:
provider: azure_openai
base_url: https://instance.cognitiveservices.azure.com/openai/responses?api-version=2025-04-01-preview
model: gpt-5-mini
base_url: https://example-endpoint.openai.azure.com/openai/deployments/gpt-4o/chat/completions?api-version=2023-03-15-preview
api_key: "{FRIGATE_OPENAI_API_KEY}"
```


@@ -125,10 +125,10 @@ review:
## Review Reports
Along with individual review item summaries, Generative AI can also produce a single report of review items from all cameras marked "suspicious" over a specified time period (for example, a daily summary of suspicious activity while you're on vacation).
Along with individual review item summaries, Generative AI provides the ability to request a report of a given time period. For example, you can get a daily report while on a vacation of any suspicious activity or other concerns that may require review.
### Requesting Reports Programmatically
Review reports can be requested via the [API](/integrations/api/generate-review-summary-review-summarize-start-start-ts-end-end-ts-post) by sending a POST request to `/api/review/summarize/start/{start_ts}/end/{end_ts}` with Unix timestamps.
Review reports can be requested via the [API](/integrations/api#review-summarization) by sending a POST request to `/api/review/summarize/start/{start_ts}/end/{end_ts}` with Unix timestamps.
For Home Assistant users, there is a built-in service (`frigate.review_summarize`) that makes it easy to request review reports as part of automations or scripts. This allows you to automatically generate daily summaries, vacation reports, or custom time period reports based on your specific needs.
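As a sketch, the endpoint URL for a given period can be assembled from Unix timestamps like this (the host and port are illustrative placeholders, not a documented default):

```python
from datetime import datetime, timezone

def review_summary_url(base: str, start: datetime, end: datetime) -> str:
    """Build the POST URL for /api/review/summarize/start/{start_ts}/end/{end_ts}."""
    start_ts = int(start.timestamp())
    end_ts = int(end.timestamp())
    return f"{base}/api/review/summarize/start/{start_ts}/end/{end_ts}"

# Example: request a report covering one full UTC day
url = review_summary_url(
    "http://frigate.local:5000",  # hypothetical host; use your Frigate address
    datetime(2024, 1, 1, tzinfo=timezone.utc),
    datetime(2024, 1, 2, tzinfo=timezone.utc),
)
print(url)
# → http://frigate.local:5000/api/review/summarize/start/1704067200/end/1704153600
```

Send the resulting URL as a POST request with any HTTP client.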


@@ -12,20 +12,23 @@ Some of Frigate's enrichments can use a discrete GPU or integrated GPU for accel
Object detection and enrichments (like Semantic Search, Face Recognition, and License Plate Recognition) are independent features. To use a GPU / NPU for object detection, see the [Object Detectors](/configuration/object_detectors.md) documentation. If you want to use your GPU for any supported enrichments, you must choose the appropriate Frigate Docker image for your GPU / NPU and configure the enrichment according to its specific documentation.
- **AMD**
- ROCm support in the `-rocm` Frigate image is automatically detected for enrichments, but only some enrichment models are available due to ROCm's focus on LLMs and limited stability with certain neural network models. Frigate disables models that perform poorly or are unstable to ensure reliable operation, so only compatible enrichments may be active.
- **Intel**
- OpenVINO will automatically be detected and used for enrichments in the default Frigate image.
- **Note:** Intel NPUs have limited model support for enrichments. GPU is recommended for enrichments when available.
- **Nvidia**
- Nvidia GPUs will automatically be detected and used for enrichments in the `-tensorrt` Frigate image.
- Jetson devices will automatically be detected and used for enrichments in the `-tensorrt-jp6` Frigate image.
- **RockChip**
- RockChip NPU will automatically be detected and used for semantic search v1 and face recognition in the `-rk` Frigate image.
Utilizing a GPU for enrichments does not require you to use the same GPU for object detection. For example, you can run the `tensorrt` Docker image to run enrichments on an Nvidia GPU and still use other dedicated hardware like a Coral or Hailo for object detection. However, one combination that is not supported is the `tensorrt` image for object detection on an Nvidia GPU and Intel iGPU for enrichments.
Utilizing a GPU for enrichments does not require you to use the same GPU for object detection. For example, you can run the `tensorrt` Docker image for enrichments and still use other dedicated hardware like a Coral or Hailo for object detection. However, one combination that is not supported is TensorRT for object detection and OpenVINO for enrichments.
:::note


@@ -29,12 +29,12 @@ cameras:
When running Frigate through the HA Add-on, the Frigate `/config` directory is mapped to `/addon_configs/<addon_directory>` in the host, where `<addon_directory>` is specific to the variant of the Frigate Add-on you are running.
| Add-on Variant | Configuration directory |
| -------------------------- | ----------------------------------------- |
| Frigate | `/addon_configs/ccab4aaf_frigate` |
| Frigate (Full Access) | `/addon_configs/ccab4aaf_frigate-fa` |
| Frigate Beta | `/addon_configs/ccab4aaf_frigate-beta` |
| Frigate Beta (Full Access) | `/addon_configs/ccab4aaf_frigate-fa-beta` |
| Add-on Variant | Configuration directory |
| -------------------------- | -------------------------------------------- |
| Frigate | `/addon_configs/ccab4aaf_frigate` |
| Frigate (Full Access) | `/addon_configs/ccab4aaf_frigate-fa` |
| Frigate Beta | `/addon_configs/ccab4aaf_frigate-beta` |
| Frigate Beta (Full Access) | `/addon_configs/ccab4aaf_frigate-fa-beta` |
**Whenever you see `/config` in the documentation, it refers to this directory.**
@@ -109,16 +109,15 @@ detectors:
record:
enabled: True
motion:
retain:
days: 7
mode: motion
alerts:
retain:
days: 30
mode: motion
detections:
retain:
days: 30
mode: motion
snapshots:
enabled: True
@@ -166,16 +165,15 @@ detectors:
record:
enabled: True
motion:
retain:
days: 7
mode: motion
alerts:
retain:
days: 30
mode: motion
detections:
retain:
days: 30
mode: motion
snapshots:
enabled: True
@@ -233,16 +231,15 @@ model:
record:
enabled: True
motion:
retain:
days: 7
mode: motion
alerts:
retain:
days: 30
mode: motion
detections:
retain:
days: 30
mode: motion
snapshots:
enabled: True


@@ -68,8 +68,8 @@ Fine-tune the LPR feature using these optional parameters at the global level of
- Default: `1000` pixels. Note: this is intentionally set very low as it is an _area_ measurement (length x width). For reference, 1000 pixels represents a ~32x32 pixel square in your camera image.
- Depending on the resolution of your camera's `detect` stream, you can increase this value to ignore small or distant plates.
- **`device`**: Device to use to run license plate detection _and_ recognition models.
- Default: `None`
- This is auto-selected by Frigate and can be `CPU`, `GPU`, or the GPU's device number. For users without a model that detects license plates natively, using a GPU may increase performance of the YOLOv9 license plate detector model. See the [Hardware Accelerated Enrichments](/configuration/hardware_acceleration_enrichments.md) documentation. However, for users who run a model that detects `license_plate` natively, there is little to no performance gain reported with running LPR on GPU compared to the CPU.
- Default: `CPU`
- This can be `CPU`, `GPU`, or the GPU's device number. For users without a model that detects license plates natively, using a GPU may increase performance of the YOLOv9 license plate detector model. See the [Hardware Accelerated Enrichments](/configuration/hardware_acceleration_enrichments.md) documentation. However, for users who run a model that detects `license_plate` natively, there is little to no performance gain reported with running LPR on GPU compared to the CPU.
- **`model_size`**: The size of the model used to identify regions of text on plates.
- Default: `small`
- This can be `small` or `large`.
@@ -381,7 +381,6 @@ Start with ["Why isn't my license plate being detected and recognized?"](#why-is
```yaml
lpr:
enabled: true
device: CPU
debug_save_plates: true
```
@@ -433,6 +432,6 @@ If you are using a model that natively detects `license_plate`, add an _object m
If you are not using a model that natively detects `license_plate` or you are using dedicated LPR camera mode, only a _motion mask_ over your text is required.
### I see "Error running ... model" in my logs, or my inference time is very high. How can I fix this?
### I see "Error running ... model" in my logs. How can I fix this?
This usually happens when your GPU is unable to compile or use one of the LPR models. Set your `device` to `CPU` and try again. GPU acceleration only provides a slight performance increase, and the models are lightweight enough to run without issue on most CPUs.


@@ -34,7 +34,7 @@ Frigate supports multiple different detectors that work on different types of ha
**Nvidia GPU**
- [ONNX](#onnx): Nvidia GPUs will automatically be detected and used as a detector in the `-tensorrt` Frigate image when a supported ONNX model is configured.
- [ONNX](#onnx): TensorRT will automatically be detected and used as a detector in the `-tensorrt` Frigate image when a supported ONNX model is configured.
**Nvidia Jetson** <CommunityBadge />
@@ -65,7 +65,7 @@ This does not affect using hardware for accelerating other tasks such as [semant
# Officially Supported Detectors
Frigate provides a number of builtin detector types. By default, Frigate will use a single CPU detector. Other detectors may require additional configuration as described below. When using multiple detectors they will run in dedicated processes, but pull from a common queue of detection requests from across all cameras.
Frigate provides the following builtin detector types: `cpu`, `edgetpu`, `hailo8l`, `memryx`, `onnx`, `openvino`, `rknn`, and `tensorrt`. By default, Frigate will use a single CPU detector. Other detectors may require additional configuration as described below. When using multiple detectors they will run in dedicated processes, but pull from a common queue of detection requests from across all cameras.
## Edge TPU Detector
@@ -157,13 +157,7 @@ A TensorFlow Lite model is provided in the container at `/edgetpu_model.tflite`
#### YOLOv9
YOLOv9 models that are compiled for TensorFlow Lite and properly quantized are supported, but not included by default. See the [instructions](#yolov9-for-google-coral-support) for downloading a model with support for the Google Coral.
:::tip
**Frigate+ Users:** Follow the [instructions](../integrations/plus#use-models) to set a model ID in your config file.
:::
YOLOv9 models that are compiled for TensorFlow Lite and properly quantized are supported, but not included by default. [Download the model](https://github.com/dbro/frigate-detector-edgetpu-yolo9/releases/download/v1.0/yolov9-s-relu6-best_320_int8_edgetpu.tflite), bind mount the file into the container, and provide the path with `model.path`. Note that the linked model requires a 17-label [labelmap file](https://raw.githubusercontent.com/dbro/frigate-detector-edgetpu-yolo9/refs/heads/main/labels-coco17.txt) that includes only 17 COCO classes.
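A minimal config sketch for this setup; the host-side mount point (`/config`) and file names are assumptions, so adjust the paths to wherever you bind mounted the downloaded files:

```yaml
model:
  path: /config/yolov9-s-relu6-best_320_int8_edgetpu.tflite
  labelmap_path: /config/labels-coco17.txt
```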
<details>
<summary>YOLOv9 Setup & Config</summary>
@@ -660,9 +654,11 @@ ONNX is an open format for building machine learning models, Frigate supports ru
If the correct build is used for your GPU then the GPU will be detected and used automatically.
- **AMD**
- ROCm will automatically be detected and used with the ONNX detector in the `-rocm` Frigate image.
- **Intel**
- OpenVINO will automatically be detected and used with the ONNX detector in the default Frigate image.
- **Nvidia**
@@ -1518,11 +1514,11 @@ RF-DETR can be exported as ONNX by running the command below. You can copy and p
```sh
docker build . --build-arg MODEL_SIZE=Nano --rm --output . -f- <<'EOF'
FROM python:3.12 AS build
FROM python:3.11 AS build
RUN apt-get update && apt-get install --no-install-recommends -y libgl1 && rm -rf /var/lib/apt/lists/*
COPY --from=ghcr.io/astral-sh/uv:0.10.4 /uv /bin/
COPY --from=ghcr.io/astral-sh/uv:0.8.0 /uv /bin/
WORKDIR /rfdetr
RUN uv pip install --system rfdetr[onnxexport] torch==2.8.0 onnx==1.19.1 transformers==4.57.6 onnxscript
RUN uv pip install --system rfdetr[onnxexport] torch==2.8.0 onnx==1.19.1 onnxscript
ARG MODEL_SIZE
RUN python3 -c "from rfdetr import RFDETR${MODEL_SIZE}; x = RFDETR${MODEL_SIZE}(resolution=320); x.export(simplify=True)"
FROM scratch
@@ -1560,11 +1556,7 @@ cd tensorrt_demos/yolo
python3 yolo_to_onnx.py -m yolov7-320
```
#### YOLOv9 for Google Coral Support
[Download the model](https://github.com/dbro/frigate-detector-edgetpu-yolo9/releases/download/v1.0/yolov9-s-relu6-best_320_int8_edgetpu.tflite), bind mount the file into the container, and provide the path with `model.path`. Note that the linked model requires a 17-label [labelmap file](https://raw.githubusercontent.com/dbro/frigate-detector-edgetpu-yolo9/refs/heads/main/labels-coco17.txt) that includes only 17 COCO classes.
#### YOLOv9 for other detectors
#### YOLOv9
YOLOv9 model can be exported as ONNX using the command below. You can copy and paste the whole thing to your terminal and execute, altering `MODEL_SIZE=t` and `IMG_SIZE=320` in the first line to the [model size](https://github.com/WongKinYiu/yolov9#performance) you would like to convert (available model sizes are `t`, `s`, `m`, `c`, and `e`, common image sizes are `320` and `640`).


@@ -696,9 +696,6 @@ genai:
# Optional additional args to pass to the GenAI Provider (default: None)
provider_options:
keep_alive: -1
# Optional: Options to pass during inference calls (default: {})
runtime_options:
temperature: 0.7
# Optional: Configuration for audio transcription
# NOTE: only the enabled option can be overridden at the camera level


@@ -214,12 +214,6 @@ The `exec:`, `echo:`, and `expr:` sources are disabled by default for security.
:::
:::warning
The `exec:`, `echo:`, and `expr:` sources are disabled by default for security. You must set `GO2RTC_ALLOW_ARBITRARY_EXEC=true` to use them. See [Security: Restricted Stream Sources](#security-restricted-stream-sources) for more information.
:::
NOTE: The output will need to be passed with two curly braces `{{output}}`
```yaml


@@ -9,25 +9,4 @@ Snapshots are accessible in the UI in the Explore pane. This allows for quick su
To only save snapshots for objects that enter a specific zone, [see the zone docs](./zones.md#restricting-snapshots-to-specific-zones)
Snapshots sent via MQTT are configured in the [config file](/configuration) under `cameras -> your_camera -> mqtt`
## Frame Selection
Frigate does not save every frame — it picks a single "best" frame for each tracked object and uses it for both the snapshot and clean copy. As the object is tracked across frames, Frigate continuously evaluates whether the current frame is better than the previous best based on detection confidence, object size, and the presence of key attributes like faces or license plates. Frames where the object touches the edge of the frame are deprioritized. The snapshot is written to disk once tracking ends using whichever frame was determined to be the best.
MQTT snapshots are published more frequently — each time a better thumbnail frame is found during tracking, or when the current best image is older than `best_image_timeout` (default: 60s). These use their own annotation settings configured under `cameras -> your_camera -> mqtt`.
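A sketch of the per-camera settings involved (the camera name and values are illustrative, not defaults to copy verbatim):

```yaml
cameras:
  your_camera:
    best_image_timeout: 60 # republish if the current best image is older than this (seconds)
    mqtt:
      enabled: True
      bounding_box: True
      crop: True
      height: 270
      quality: 70
```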
## Clean Copy
Frigate can produce up to two snapshot files per event, each used in different places:
| Version | File | Annotations | Used by |
| --- | --- | --- | --- |
| **Regular snapshot** | `<camera>-<id>.jpg` | Respects your `timestamp`, `bounding_box`, `crop`, and `height` settings | API (`/api/events/<id>/snapshot.jpg`), MQTT (`<camera>/<label>/snapshot`), Explore pane in the UI |
| **Clean copy** | `<camera>-<id>-clean.webp` | Always unannotated — no bounding box, no timestamp, no crop, full resolution | API (`/api/events/<id>/snapshot-clean.webp`), [Frigate+](/plus/first_model) submissions, "Download Clean Snapshot" in the UI |
MQTT snapshots are configured separately under `cameras -> your_camera -> mqtt` and are unrelated to the clean copy.
The clean copy is required for submitting events to [Frigate+](/plus/first_model) — if you plan to use Frigate+, keep `clean_copy` enabled regardless of your other snapshot settings.
If you are not using Frigate+ and `timestamp`, `bounding_box`, and `crop` are all disabled, the regular snapshot is already effectively clean, so `clean_copy` provides no benefit and only uses additional disk space. You can safely set `clean_copy: False` in this case.
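For example, a snapshot config for that case (no Frigate+, all annotations disabled) might look like this sketch:

```yaml
snapshots:
  enabled: True
  timestamp: False
  bounding_box: False
  crop: False
  clean_copy: False # the regular snapshot is already unannotated in this case
```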
Snapshots sent via MQTT are configured in the [config file](https://docs.frigate.video/configuration/) under `cameras -> your_camera -> mqtt`


@@ -11,12 +11,6 @@ Cameras configured to output H.264 video and AAC audio will offer the most compa
- **Stream Viewing**: This stream will be rebroadcast as is to Home Assistant for viewing with the stream component. Setting this resolution too high will use significant bandwidth when viewing streams in Home Assistant, and they may not load reliably over slower connections.
:::tip
For the best experience in Frigate's UI, configure your camera so that the detection and recording streams use the same aspect ratio. For example, if your main stream is 3840x2160 (16:9), set your substream to 640x360 (also 16:9) instead of 640x480 (4:3). While not strictly required, matching aspect ratios helps ensure seamless live stream display and preview/recordings playback.
:::
### Choosing a detect resolution
The ideal resolution for detection is one where the objects you want to detect fit inside the dimensions of the model used by Frigate (320x320). Frigate does not pass the entire camera frame to object detection. It will crop an area of motion from the full frame and look in that portion of the frame. If the area being inspected is larger than 320x320, Frigate must resize it before running object detection. Higher resolutions do not improve the detection accuracy because the additional detail is lost in the resize. Below you can see a reference for how large a 320x320 area is against common resolutions.
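The resize behavior described above can be illustrated with simple arithmetic; this is illustrative only, not Frigate's actual cropping logic:

```python
MODEL_SIZE = 320  # the detection model input is 320x320

def detail_is_lost(region_w: int, region_h: int, model_size: int = MODEL_SIZE) -> bool:
    # A motion region larger than the model input must be scaled down
    # before detection runs, discarding the extra pixel detail.
    return region_w > model_size or region_h > model_size

print(detail_is_lost(400, 400))  # → True: a large crop is resized, detail lost
print(detail_is_lost(300, 250))  # → False: a small crop fits the model input as-is
```

This is why a higher detect resolution only helps if it makes the objects themselves fit better inside a 320x320 region, not simply because it contains more pixels.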


@@ -41,8 +41,8 @@ If the EQ13 is out of stock, the link below may take you to a suggested alternat
| Name | Capabilities | Notes |
| ------------------------------------------------------------------------------------------------------------- | -------------------------------------------------------------------------- | --------------------------------------------------- |
| Beelink EQ13 (<a href="https://amzn.to/4jn2qVr" target="_blank" rel="nofollow noopener sponsored">Amazon</a>) | Can run object detection on several 1080p cameras with low-medium activity | Dual gigabit NICs for easy isolated camera network. |
| Intel 1120p ([Amazon](https://www.amazon.com/Beelink-i3-1220P-Computer-Display-Gigabit/dp/B0DDCKT9YP)) | Can handle a large number of 1080p cameras with high activity | |
| Intel 125H ([Amazon](https://www.amazon.com/MINISFORUM-Pro-125H-Barebone-Computer-HDMI2-1/dp/B0FH21FSZM)) | Can handle a significant number of 1080p cameras with high activity | Includes NPU for more efficient detection in 0.17+ |
| Intel 1120p ([Amazon](https://www.amazon.com/Beelink-i3-1220P-Computer-Display-Gigabit/dp/B0DDCKT9YP)) | Can handle a large number of 1080p cameras with high activity | |
| Intel 125H ([Amazon](https://www.amazon.com/MINISFORUM-Pro-125H-Barebone-Computer-HDMI2-1/dp/B0FH21FSZM)) | Can handle a significant number of 1080p cameras with high activity | Includes NPU for more efficient detection in 0.17+ |
## Detectors
@@ -55,10 +55,12 @@ Frigate supports multiple different detectors that work on different types of ha
**Most Hardware**
- [Hailo](#hailo-8): The Hailo8 and Hailo8L AI Acceleration module is available in m.2 format with a HAT for RPi devices offering a wide range of compatibility with devices.
- [Supports many model architectures](../../configuration/object_detectors#configuration)
- Runs best with tiny or small size models
- [Google Coral EdgeTPU](#google-coral-tpu): The Google Coral EdgeTPU is available in USB and m.2 format allowing for a wide range of compatibility with devices.
- [Supports primarily ssdlite and mobilenet model architectures](../../configuration/object_detectors#edge-tpu-detector)
- <CommunityBadge /> [MemryX](#memryx-mx3): The MX3 M.2 accelerator module is available in m.2 format allowing for a wide range of compatibility with devices.
@@ -86,7 +88,8 @@ Frigate supports multiple different detectors that work on different types of ha
**Nvidia**
- [Nvidia GPU](#nvidia-gpus): Nvidia GPUs can provide efficient object detection.
- [TensortRT](#tensorrt---nvidia-gpu): TensorRT can run on Nvidia GPUs to provide efficient object detection.
- [Supports majority of model architectures via ONNX](../../configuration/object_detectors#onnx-supported-models)
- Runs well with any size models including large
@@ -149,7 +152,9 @@ The OpenVINO detector type is able to run on:
:::note
Intel B-series (Battlemage) GPUs are not officially supported with Frigate 0.17, though a user has [provided steps to rebuild the Frigate container](https://github.com/blakeblackshear/frigate/discussions/21257) with support for them.
Intel NPUs have seen [limited success in community deployments](https://github.com/blakeblackshear/frigate/discussions/13248#discussioncomment-12347357), although they remain officially unsupported.
In testing, the NPU delivered performance that was only comparable to — or in some cases worse than — the integrated GPU.
:::
@@ -167,12 +172,12 @@ Inference speeds vary greatly depending on the CPU or GPU used, some known examp
| Intel N100 | ~ 15 ms | s-320: 30 ms | 320: ~ 25 ms | | Can only run one detector instance |
| Intel N150 | ~ 15 ms | t-320: 16 ms s-320: 24 ms | | | |
| Intel Iris XE | ~ 10 ms | t-320: 6 ms t-640: 14 ms s-320: 8 ms s-640: 16 ms | 320: ~ 10 ms 640: ~ 20 ms | 320-n: 33 ms | |
| Intel NPU | ~ 6 ms | s-320: 11 ms s-640: 30 ms | 320: ~ 14 ms 640: ~ 34 ms | 320-n: 40 ms | |
| Intel NPU | ~ 6 ms | s-320: 11 ms | 320: ~ 14 ms 640: ~ 34 ms | 320-n: 40 ms | |
| Intel Arc A310 | ~ 5 ms | t-320: 7 ms t-640: 11 ms s-320: 8 ms s-640: 15 ms | 320: ~ 8 ms 640: ~ 14 ms | | |
| Intel Arc A380 | ~ 6 ms | | 320: ~ 10 ms 640: ~ 22 ms | 336: 20 ms 448: 27 ms | |
| Intel Arc A750 | ~ 4 ms | | 320: ~ 8 ms | | |
### Nvidia GPUs
### TensorRT - Nvidia GPU
Frigate is able to utilize an Nvidia GPU which supports the 12.x series of CUDA libraries.
@@ -182,15 +187,17 @@ Frigate is able to utilize an Nvidia GPU which supports the 12.x series of CUDA
Make sure your host system has the [nvidia-container-runtime](https://docs.docker.com/config/containers/resource_constraints/#access-an-nvidia-gpu) installed to pass through the GPU to the container and the host system has a compatible driver installed for your GPU.
There are improved capabilities in newer GPU architectures that TensorRT can benefit from, such as INT8 operations and Tensor cores. The features compatible with your hardware will be optimized when the model is converted to a trt file. Currently the script for generating the model provides a switch to enable/disable FP16 operations. If you wish to use newer features such as INT8 optimization, more work is required.
#### Compatibility References:
[NVIDIA TensorRT Support Matrix](https://docs.nvidia.com/deeplearning/tensorrt-rtx/latest/getting-started/support-matrix.html)
[NVIDIA TensorRT Support Matrix](https://docs.nvidia.com/deeplearning/tensorrt/archives/tensorrt-841/support-matrix/index.html)
[NVIDIA CUDA Compatibility](https://docs.nvidia.com/deploy/cuda-compatibility/index.html)
[NVIDIA GPU Compute Capability](https://developer.nvidia.com/cuda-gpus)
Inference is done with the `onnx` detector type. Speeds will vary greatly depending on the GPU and the model used.
Inference speeds will vary greatly depending on the GPU and the model used.
`tiny (t)` variants are faster than the equivalent non-tiny model, some known examples are below:
✅ - Accelerated with CUDA Graphs


@@ -56,7 +56,7 @@ services:
volumes:
- /path/to/your/config:/config
- /path/to/your/storage:/media/frigate
- type: tmpfs # 1GB In-memory filesystem for recording segment storage
- type: tmpfs # Recommended: 1GB of memory
target: /tmp/cache
tmpfs:
size: 1000000000
@@ -112,23 +112,19 @@ The Hailo-8 and Hailo-8L AI accelerators are available in both M.2 and HAT form
:::warning
On Raspberry Pi OS **Bookworm**, the kernel includes an older version of the Hailo driver that is incompatible with Frigate. You **must** follow the installation steps below to install the correct driver version, and you **must** disable the built-in kernel driver as described in step 1.
On Raspberry Pi OS **Trixie**, the Hailo driver is no longer shipped with the kernel. It is installed via DKMS, and the conflict described below does not apply. You can simply run the installation script.
The Raspberry Pi kernel includes an older version of the Hailo driver that is incompatible with Frigate. You **must** follow the installation steps below to install the correct driver version, and you **must** disable the built-in kernel driver as described in step 1.
:::
1. **Disable the built-in Hailo driver (Raspberry Pi Bookworm OS only)**:
1. **Disable the built-in Hailo driver (Raspberry Pi only)**:
:::note
If you are **not** using a Raspberry Pi with **Bookworm OS**, skip this step and proceed directly to step 2.
If you are using Raspberry Pi with **Trixie OS**, also skip this step and proceed directly to step 2.
If you are **not** using a Raspberry Pi, skip this step and proceed directly to step 2.
:::
First, check if the driver is currently loaded:
If you are using a Raspberry Pi, you need to blacklist the built-in kernel Hailo driver to prevent conflicts. First, check if the driver is currently loaded:
```bash
lsmod | grep hailo
@@ -137,39 +133,19 @@ On Raspberry Pi OS **Trixie**, the Hailo driver is no longer shipped with the ke
If it shows `hailo_pci`, unload it:
```bash
sudo modprobe -r hailo_pci
sudo rmmod hailo_pci
```
Then locate the built-in kernel driver and rename it so it cannot be loaded.
Renaming allows the original driver to be restored later if needed.
First, locate the currently installed kernel module:
Now blacklist the driver to prevent it from loading on boot:
```bash
modinfo -n hailo_pci
echo "blacklist hailo_pci" | sudo tee /etc/modprobe.d/blacklist-hailo_pci.conf
```
Example output:
```
/lib/modules/6.6.31+rpt-rpi-2712/kernel/drivers/media/pci/hailo/hailo_pci.ko.xz
```
Save the module path to a variable:
Update initramfs to ensure the blacklist takes effect:
```bash
BUILTIN=$(modinfo -n hailo_pci)
```
And rename the module by appending .bak:
```bash
sudo mv "$BUILTIN" "${BUILTIN}.bak"
```
Now refresh the kernel module map so the system recognizes the change:
```bash
sudo depmod -a
sudo update-initramfs -u
```
Reboot your Raspberry Pi:
@@ -184,7 +160,7 @@ On Raspberry Pi OS **Trixie**, the Hailo driver is no longer shipped with the ke
lsmod | grep hailo
```
This command should return no results.
This command should return no results. If it still shows `hailo_pci`, the blacklist did not take effect properly and you may need to check for other Hailo packages installed via apt that are loading the driver.
2. **Run the installation script**:
@@ -207,6 +183,7 @@ On Raspberry Pi OS **Trixie**, the Hailo driver is no longer shipped with the ke
```
The script will:
- Install necessary build dependencies
- Clone and build the Hailo driver from the official repository
- Install the driver
@@ -235,38 +212,6 @@ On Raspberry Pi OS **Trixie**, the Hailo driver is no longer shipped with the ke
lsmod | grep hailo_pci
```
Verify the driver version:
```bash
cat /sys/module/hailo_pci/version
```
Verify that the firmware was installed correctly:
```bash
ls -l /lib/firmware/hailo/hailo8_fw.bin
```
**Optional: Fix PCIe descriptor page size error**
If you encounter the following error:
```
[HailoRT] [error] CHECK failed - max_desc_page_size given 16384 is bigger than hw max desc page size 4096
```
Create a configuration file to force the correct descriptor page size:
```bash
echo 'options hailo_pci force_desc_page_size=4096' | sudo tee /etc/modprobe.d/hailo_pci.conf
```
and reboot:
```bash
sudo reboot
```
#### Setup
To set up Frigate, follow the default installation instructions, for example: `ghcr.io/blakeblackshear/frigate:stable`
@@ -462,7 +407,7 @@ services:
- /etc/localtime:/etc/localtime:ro
- /path/to/your/config:/config
- /path/to/your/storage:/media/frigate
- type: tmpfs # 1GB In-memory filesystem for recording segment storage
- type: tmpfs # Recommended: 1GB of memory
target: /tmp/cache
tmpfs:
size: 1000000000
@@ -502,12 +447,12 @@ The official docker image tags for the current stable version are:
- `stable` - Standard Frigate build for amd64 & RPi Optimized Frigate build for arm64. This build includes support for Hailo devices as well.
- `stable-standard-arm64` - Standard Frigate build for arm64
- `stable-tensorrt` - Frigate build specific for amd64 devices running an Nvidia GPU
- `stable-tensorrt` - Frigate build specific for amd64 devices running an nvidia GPU
- `stable-rocm` - Frigate build for [AMD GPUs](../configuration/object_detectors.md#amdrocm-gpu-detector)
The community supported docker image tags for the current stable version are:
- `stable-tensorrt-jp6` - Frigate build optimized for Nvidia Jetson devices running Jetpack 6
- `stable-tensorrt-jp6` - Frigate build optimized for nvidia Jetson devices running Jetpack 6
- `stable-rk` - Frigate build for SBCs with Rockchip SoC
## Home Assistant Add-on
@@ -521,7 +466,7 @@ There are important limitations in HA OS to be aware of:
- Separate local storage for media is not yet supported by Home Assistant
- AMD GPUs are not supported because HA OS does not include the mesa driver.
- Intel NPUs are not supported because HA OS does not include the NPU firmware.
- Nvidia GPUs are not supported because addons do not support the Nvidia runtime.
- Nvidia GPUs are not supported because addons do not support the nvidia runtime.
:::
@@ -689,43 +634,3 @@ docker run \
```
Log into QNAP and open Container Station. The Frigate docker container should be listed under 'Overview' and running. Visit the Frigate Web UI by clicking the Frigate docker container, then clicking the URL shown at the top of the detail page.
## macOS - Apple Silicon
:::warning
macOS uses port 5000 for its AirPlay Receiver service. If you want to expose port 5000 in Frigate for local app and API access, the port will need to be mapped to another port on the host, e.g. 5001.
Failure to remap port 5000 on the host will result in the WebUI and all API endpoints on port 5000 being unreachable, even if port 5000 is exposed correctly in Docker.
:::
Docker containers on macOS can be orchestrated by either [Docker Desktop](https://docs.docker.com/desktop/setup/install/mac-install/) or [OrbStack](https://orbstack.dev). The difference in inference speeds is negligible; however, CPU usage, power consumption, and container start times will be lower on OrbStack because it is a native Swift application.
To allow Frigate to use the Apple Silicon Neural Engine / Processing Unit (NPU), the [Apple Silicon Detector](../configuration/object_detectors.md#apple-silicon-detector) must be running on the host (outside Docker).
#### Docker Compose example
```yaml
services:
  frigate:
    container_name: frigate
    image: ghcr.io/blakeblackshear/frigate:stable-standard-arm64
    restart: unless-stopped
    shm_size: "512mb" # update for your cameras based on calculation above
    volumes:
      - /etc/localtime:/etc/localtime:ro
      - /path/to/your/config:/config
      - /path/to/your/recordings:/recordings
    ports:
      - "8971:8971"
      # If exposing on macOS, map to a different host port like 5001 or any other port with no conflicts
      # - "5001:5000" # Internal unauthenticated access. Expose carefully.
      - "8554:8554" # RTSP feeds
    extra_hosts:
      # This is very important
      # It allows Frigate access to the NPU on Apple Silicon via the Apple Silicon Detector
      - "host.docker.internal:host-gateway" # Required to talk to the NPU detector
    environment:
      FRIGATE_RTSP_PASSWORD: "password"
```

View File

@@ -20,6 +20,7 @@ Keeping Frigate up to date ensures you benefit from the latest features, perform
If you're running Frigate via Docker (recommended method), follow these steps:
1. **Stop the Container**:
- If using Docker Compose:
```bash
docker compose down frigate
@@ -30,8 +31,9 @@ If youre running Frigate via Docker (recommended method), follow these steps:
```
2. **Update and Pull the Latest Image**:
- If using Docker Compose:
- Edit your `docker-compose.yml` file to specify the desired version tag (e.g., `0.17.0` instead of `0.16.4`). For example:
- Edit your `docker-compose.yml` file to specify the desired version tag (e.g., `0.17.0` instead of `0.16.3`). For example:
```yaml
services:
frigate:
@@ -49,6 +51,7 @@ If youre running Frigate via Docker (recommended method), follow these steps:
```
3. **Start the Container**:
- If using Docker Compose:
```bash
docker compose up -d
@@ -72,15 +75,18 @@ If youre running Frigate via Docker (recommended method), follow these steps:
For users running Frigate as a Home Assistant Addon:
1. **Check for Updates**:
- Navigate to **Settings > Add-ons** in Home Assistant.
- Find your installed Frigate addon (e.g., "Frigate NVR" or "Frigate NVR (Full Access)").
- If an update is available, you'll see an "Update" button.
2. **Update the Addon**:
- Click the "Update" button next to the Frigate addon.
- Wait for the process to complete. Home Assistant will handle downloading and installing the new version.
3. **Restart the Addon**:
- After updating, go to the addon's page and click "Restart" to apply the changes.
4. **Verify the Update**:
@@ -99,8 +105,8 @@ If an update causes issues:
1. Stop Frigate.
2. Restore your backed-up config file and database.
3. Revert to the previous image version:
- For Docker: Specify an older tag (e.g., `ghcr.io/blakeblackshear/frigate:0.16.4`) in your `docker run` command.
- For Docker Compose: Edit your `docker-compose.yml`, specify the older version tag (e.g., `ghcr.io/blakeblackshear/frigate:0.16.4`), and re-run `docker compose up -d`.
- For Docker: Specify an older tag (e.g., `ghcr.io/blakeblackshear/frigate:0.16.3`) in your `docker run` command.
- For Docker Compose: Edit your `docker-compose.yml`, specify the older version tag (e.g., `ghcr.io/blakeblackshear/frigate:0.16.3`), and re-run `docker compose up -d`.
- For Home Assistant: Reinstall the previous addon version manually via the repository if needed and restart the addon.
4. Verify the old version is running again.

View File

@@ -119,7 +119,7 @@ services:
volumes:
- ./config:/config
- ./storage:/media/frigate
- type: tmpfs # 1GB In-memory filesystem for recording segment storage
- type: tmpfs # Optional: 1GB of memory, reduces SSD/SD Card wear
target: /tmp/cache
tmpfs:
size: 1000000000

View File

@@ -16,15 +16,7 @@ See the [MQTT integration
documentation](https://www.home-assistant.io/integrations/mqtt/) for more
details.
In addition, MQTT must be enabled in your Frigate configuration file and Frigate must be connected to the same MQTT server as Home Assistant for many of the entities created by the integration to function, e.g.:
```yaml
mqtt:
  enabled: True
  host: mqtt.server.com # the address of your HA server that's running the MQTT integration
  user: your_mqtt_broker_username
  password: your_mqtt_broker_password
```
In addition, MQTT must be enabled in your Frigate configuration file and Frigate must be connected to the same MQTT server as Home Assistant for many of the entities created by the integration to function.
### Integration installation
@@ -103,12 +95,12 @@ services:
If you are using Home Assistant Add-on, the URL should be one of the following depending on which Add-on variant you are using. Note that if you are using the Proxy Add-on, you should NOT point the integration at the proxy URL. Just enter the same URL used to access Frigate directly from your network.
| Add-on Variant | URL |
| -------------------------- | -------------------------------------- |
| Frigate | `http://ccab4aaf-frigate:5000` |
| Frigate (Full Access) | `http://ccab4aaf-frigate-fa:5000` |
| Frigate Beta | `http://ccab4aaf-frigate-beta:5000` |
| Frigate Beta (Full Access) | `http://ccab4aaf-frigate-fa-beta:5000` |
| Add-on Variant | URL |
| -------------------------- | ----------------------------------------- |
| Frigate | `http://ccab4aaf-frigate:5000` |
| Frigate (Full Access) | `http://ccab4aaf-frigate-fa:5000` |
| Frigate Beta | `http://ccab4aaf-frigate-beta:5000` |
| Frigate Beta (Full Access) | `http://ccab4aaf-frigate-fa-beta:5000` |
### Frigate running on a separate machine

View File

@@ -120,7 +120,7 @@ Message published for each changed tracked object. The first message is publishe
### `frigate/tracked_object_update`
Message published for updates to tracked object metadata. All messages include an `id` field which is the tracked object's event ID, and can be used to look up the event via the API or match it to items in the UI.
Message published for updates to tracked object metadata, for example:
#### Generative AI Description Update
@@ -134,14 +134,12 @@ Message published for updates to tracked object metadata. All messages include a
#### Face Recognition Update
Published after each recognition attempt, regardless of whether the score meets `recognition_threshold`. See the [Face Recognition](/configuration/face_recognition) documentation for details on how scoring works.
```json
{
"type": "face",
"id": "1607123955.475377-mxklsc",
"name": "John", // best matching person, or null if no match
"score": 0.95, // running weighted average across all recognition attempts
"name": "John",
"score": 0.95,
"camera": "front_door_cam",
"timestamp": 1607123958.748393
}
@@ -149,13 +147,11 @@ Published after each recognition attempt, regardless of whether the score meets
#### License Plate Recognition Update
Published when a license plate is recognized on a car object. See the [License Plate Recognition](/configuration/license_plate_recognition) documentation for details.
```json
{
"type": "lpr",
"id": "1607123955.475377-mxklsc",
"name": "John's Car", // known name for the plate, or null
"name": "John's Car",
"plate": "123ABC",
"score": 0.95,
"camera": "driveway_cam",

View File

@@ -54,8 +54,6 @@ Once you have [requested your first model](../plus/first_model.md) and gotten yo
You can either choose the new model from the Frigate+ pane in the Settings page of the Frigate UI, or manually set the model at the root level in your config:
```yaml
detectors: ...
model:
  path: plus://<your_model_id>
```

View File

@@ -24,8 +24,6 @@ You will receive an email notification when your Frigate+ model is ready.
Models available in Frigate+ can be used with a special model path. No other information needs to be configured because it fetches the remaining config from Frigate+ automatically.
```yaml
detectors: ...
model:
  path: plus://<your_model_id>
```

View File

@@ -15,15 +15,15 @@ There are three model types offered in Frigate+, `mobiledet`, `yolonas`, and `yo
Not all model types are supported by all detectors, so it's important to choose a model type to match your detector as shown in the table under [supported detector types](#supported-detector-types). You can test model types for compatibility and speed on your hardware by using the base models.
| Model Type | Description |
| ----------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `mobiledet` | Based on the same architecture as the default model included with Frigate. Runs on Google Coral devices and CPUs. |
| `yolonas` | A newer architecture that offers slightly higher accuracy and improved detection of small objects. Runs on Intel, NVidia GPUs, and AMD GPUs. |
| `yolov9` | A leading SOTA (state of the art) object detection model with similar performance to yolonas, but on a wider range of hardware options. Runs on most hardware. |
| Model Type | Description |
| ----------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
| `mobiledet` | Based on the same architecture as the default model included with Frigate. Runs on Google Coral devices and CPUs. |
| `yolonas` | A newer architecture that offers slightly higher accuracy and improved detection of small objects. Runs on Intel, NVidia GPUs, and AMD GPUs. |
| `yolov9` | A leading SOTA (state of the art) object detection model with similar performance to yolonas, but on a wider range of hardware options. Runs on Intel, NVidia GPUs, AMD GPUs, Hailo, MemryX, Apple Silicon, and Rockchip NPUs. |
### YOLOv9 Details
YOLOv9 models are available in `s`, `t`, `edgetpu` variants. When requesting a `yolov9` model, you will be prompted to choose a variant. If you want the model to be compatible with a Google Coral, you will need to choose the `edgetpu` variant. If you are unsure what variant to choose, you should perform some tests with the base models to find the performance level that suits you. The `s` size is most similar to the current `yolonas` models in terms of inference times and accuracy, and a good place to start is the `320x320` resolution model for `yolov9s`.
YOLOv9 models are available in `s` and `t` sizes. When requesting a `yolov9` model, you will be prompted to choose a size. If you are unsure what size to choose, you should perform some tests with the base models to find the performance level that suits you. The `s` size is most similar to the current `yolonas` models in terms of inference times and accuracy, and a good place to start is the `320x320` resolution model for `yolov9s`.
:::info
@@ -37,21 +37,23 @@ If you have a Hailo device, you will need to specify the hardware you have when
#### Rockchip (RKNN) Support
Rockchip models are automatically converted as of 0.17. For 0.16, YOLOv9 onnx models will need to be manually converted. First, you will need to configure Frigate to use the model id for your YOLOv9 onnx model so it downloads the model to your `model_cache` directory. From there, you can follow the [documentation](/configuration/object_detectors.md#converting-your-own-onnx-model-to-rknn-format) to convert it.
For 0.16, YOLOv9 onnx models will need to be manually converted. First, you will need to configure Frigate to use the model id for your YOLOv9 onnx model so it downloads the model to your `model_cache` directory. From there, you can follow the [documentation](/configuration/object_detectors.md#converting-your-own-onnx-model-to-rknn-format) to convert it. Automatic conversion is available in 0.17 and later.
## Supported detector types
Currently, Frigate+ models support CPU (`cpu`), Google Coral (`edgetpu`), OpenVino (`openvino`), ONNX (`onnx`), Hailo (`hailo8l`), and Rockchip (`rknn`) detectors.
Currently, Frigate+ models support CPU (`cpu`), Google Coral (`edgetpu`), OpenVino (`openvino`), ONNX (`onnx`), Hailo (`hailo8l`), and Rockchip\* (`rknn`) detectors.
| Hardware | Recommended Detector Type | Recommended Model Type |
| -------------------------------------------------------------------------------- | ------------------------- | ---------------------- |
| [CPU](/configuration/object_detectors.md#cpu-detector-not-recommended) | `cpu` | `mobiledet` |
| [Coral (all form factors)](/configuration/object_detectors.md#edge-tpu-detector) | `edgetpu` | `yolov9` |
| [Coral (all form factors)](/configuration/object_detectors.md#edge-tpu-detector) | `edgetpu` | `mobiledet` |
| [Intel](/configuration/object_detectors.md#openvino-detector) | `openvino` | `yolov9` |
| [NVidia GPU](/configuration/object_detectors#onnx) | `onnx` | `yolov9` |
| [AMD ROCm GPU](/configuration/object_detectors#amdrocm-gpu-detector) | `onnx` | `yolov9` |
| [Hailo8/Hailo8L/Hailo8R](/configuration/object_detectors#hailo-8) | `hailo8l` | `yolov9` |
| [Rockchip NPU](/configuration/object_detectors#rockchip-platform) | `rknn` | `yolov9` |
| [Rockchip NPU](/configuration/object_detectors#rockchip-platform)\* | `rknn` | `yolov9` |
_\* Requires manual conversion in 0.16. Automatic conversion available in 0.17 and later._
## Improving your model
@@ -79,7 +81,7 @@ Candidate labels are also available for annotation. These labels don't have enou
Where possible, these labels are mapped to existing labels during training. For example, any `baby` labels are mapped to `person` until support for new labels is added.
The candidate labels are: `baby`, `bpost`, `badger`, `possum`, `rodent`, `chicken`, `groundhog`, `boar`, `hedgehog`, `tractor`, `golf cart`, `garbage truck`, `bus`, `sports ball`, `la_poste`, `lawnmower`, `heron`, `rickshaw`, `wombat`, `auspost`, `aramex`, `bobcat`, `mustelid`, `transoflex`, `airplane`, `drone`, `mountain_lion`, `crocodile`, `turkey`, `baby_stroller`, `monkey`, `coyote`, `porcupine`, `parcelforce`, `sheep`, `snake`, `helicopter`, `lizard`, `duck`, `hermes`, `cargus`, `fan_courier`, `sameday`
The candidate labels are: `baby`, `bpost`, `badger`, `possum`, `rodent`, `chicken`, `groundhog`, `boar`, `hedgehog`, `tractor`, `golf cart`, `garbage truck`, `bus`, `sports ball`
Candidate labels are not available for automatic suggestions.

View File

@@ -37,7 +37,7 @@ cameras:
## Steps
1. Export or copy the clip you want to replay to the Frigate host (e.g., `/media/frigate/` or `debug/clips/`). Depending on what you are looking to debug, it is often helpful to add some "pre-capture" time (where the tracked object is not yet visible) to the clip when exporting.
1. Export or copy the clip you want to replay to the Frigate host (e.g., `/media/frigate/` or `debug/clips/`).
2. Add the temporary camera to `config/config.yml` (example above). Use a unique name such as `test` or `replay_camera` so it's easy to remove later.
- If you're debugging a specific camera, copy the settings from that camera (frame rate, model/enrichment settings, zones, etc.) into the temporary camera so the replay closely matches the original environment. Leave `record` and `snapshots` disabled unless you are specifically debugging recording or snapshot behavior.
3. Restart Frigate.

View File

@@ -18490,9 +18490,9 @@
}
},
"node_modules/qs": {
"version": "6.14.1",
"resolved": "https://registry.npmjs.org/qs/-/qs-6.14.1.tgz",
"integrity": "sha512-4EK3+xJl8Ts67nLYNwqw/dsFVnCf+qR7RgXSK9jEEm9unao3njwMDdmsdvoKBKHzxd7tCYz5e5M+SnMjdtXGQQ==",
"version": "6.14.0",
"resolved": "https://registry.npmjs.org/qs/-/qs-6.14.0.tgz",
"integrity": "sha512-YWWTjgABSKcvs/nWBi9PycY/JiPJqOD4JA6o9Sej2AtvSGarXxKC3OQSk4pAarbdQlKAh5D4FCQkJNkW+GAn3w==",
"license": "BSD-3-Clause",
"dependencies": {
"side-channel": "^1.1.0"

View File

@@ -23,12 +23,7 @@ from markupsafe import escape
from peewee import SQL, fn, operator
from pydantic import ValidationError
from frigate.api.auth import (
allow_any_authenticated,
allow_public,
get_allowed_cameras_for_filter,
require_role,
)
from frigate.api.auth import allow_any_authenticated, allow_public, require_role
from frigate.api.defs.query.app_query_parameters import AppTimelineHourlyQueryParameters
from frigate.api.defs.request.app_body import AppConfigSetBody
from frigate.api.defs.tags import Tags
@@ -692,19 +687,13 @@ def plusModels(request: Request, filterByCurrentModelDetector: bool = False):
@router.get(
"/recognized_license_plates", dependencies=[Depends(allow_any_authenticated())]
)
def get_recognized_license_plates(
split_joined: Optional[int] = None,
allowed_cameras: List[str] = Depends(get_allowed_cameras_for_filter),
):
def get_recognized_license_plates(split_joined: Optional[int] = None):
try:
query = (
Event.select(
SQL("json_extract(data, '$.recognized_license_plate') AS plate")
)
.where(
(SQL("json_extract(data, '$.recognized_license_plate') IS NOT NULL"))
& (Event.camera << allowed_cameras)
)
.where(SQL("json_extract(data, '$.recognized_license_plate') IS NOT NULL"))
.distinct()
)
recognized_license_plates = [row[0] for row in query.tuples()]

View File

@@ -350,15 +350,21 @@ def validate_password_strength(password: str) -> tuple[bool, Optional[str]]:
Validate password strength.
Returns a tuple of (is_valid, error_message).
Longer passwords are harder to crack than shorter complex ones.
https://pages.nist.gov/800-63-3/sp800-63b.html
"""
if not password:
return False, "Password cannot be empty"
if len(password) < 12:
return False, "Password must be at least 12 characters long"
if len(password) < 8:
return False, "Password must be at least 8 characters long"
if not any(c.isupper() for c in password):
return False, "Password must contain at least one uppercase letter"
if not any(c.isdigit() for c in password):
return False, "Password must contain at least one digit"
if not any(c in '!@#$%^&*(),.?":{}|<>' for c in password):
return False, "Password must contain at least one special character"
return True, None
@@ -439,11 +445,10 @@ def resolve_role(
Determine the effective role for a request based on proxy headers and configuration.
Order of resolution:
1. If a role header is defined in proxy_config.header_map.role:
- If a role_map is configured, treat the header as group claims
(split by proxy_config.separator) and map to roles.
Admin matches short-circuit to admin.
- If no role_map is configured, treat the header as role names directly.
1. If a role header is defined in proxy_config.header_map.role:
- If a role_map is configured, treat the header as group claims
(split by proxy_config.separator) and map to roles.
- If no role_map is configured, treat the header as role names directly.
2. If no valid role is found, return proxy_config.default_role if it's valid in config_roles, else 'viewer'.
Args:
@@ -493,12 +498,6 @@ def resolve_role(
}
logger.debug("Matched roles from role_map: %s", matched_roles)
# If admin matches, prioritize it to avoid accidental downgrade when
# users belong to both admin and lower-privilege groups.
if "admin" in matched_roles and "admin" in config_roles:
logger.debug("Resolved role (with role_map) to 'admin'.")
return "admin"
if matched_roles:
resolved = next(
(r for r in config_roles if r in matched_roles), validated_default
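The resolution order described above, including the admin short-circuit that one side of this diff removes, can be sketched roughly as follows; the function and parameter names are illustrative, not Frigate's actual API:

```python
def resolve_role_from_groups(
    header_value: str,
    role_map: dict[str, list[str]],
    config_roles: list[str],
    default_role: str = "viewer",
    separator: str = "|",
) -> str:
    """Map proxy group claims to a configured role, preferring admin."""
    claims = {c.strip() for c in header_value.split(separator) if c.strip()}
    matched = {
        role for role, groups in role_map.items()
        if claims.intersection(groups)
    }
    # Prioritize admin to avoid downgrading users who belong to both
    # admin and lower-privilege groups.
    if "admin" in matched and "admin" in config_roles:
        return "admin"
    for role in config_roles:
        if role in matched:
            return role
    return default_role if default_role in config_roles else "viewer"
```

A user whose header carries both an admin group and a viewer group resolves to admin rather than whichever role happens to come first.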
@@ -801,7 +800,7 @@ def get_users():
"/users",
dependencies=[Depends(require_role(["admin"]))],
summary="Create new user",
description="Creates a new user with the specified username, password, and role. Requires admin role. Password must be at least 12 characters long.",
description='Creates a new user with the specified username, password, and role. Requires admin role. Password must meet strength requirements: minimum 8 characters, at least one uppercase letter, at least one digit, and at least one special character (!@#$%^&*(),.?":{} |<>).',
)
def create_user(
request: Request,
@@ -818,15 +817,6 @@ def create_user(
content={"message": f"Role must be one of: {', '.join(config_roles)}"},
status_code=400,
)
# Validate password strength
is_valid, error_message = validate_password_strength(body.password)
if not is_valid:
return JSONResponse(
content={"message": error_message},
status_code=400,
)
role = body.role or "viewer"
password_hash = hash_password(body.password, iterations=HASH_ITERATIONS)
User.insert(
@@ -861,7 +851,7 @@ def delete_user(request: Request, username: str):
"/users/{username}/password",
dependencies=[Depends(allow_any_authenticated())],
summary="Update user password",
description="Updates a user's password. Users can only change their own password unless they have admin role. Requires the current password to verify identity for non-admin users. Password must be at least 12 characters long. If user changes their own password, a new JWT cookie is automatically issued.",
description="Updates a user's password. Users can only change their own password unless they have admin role. Requires the current password to verify identity for non-admin users. Password must meet strength requirements: minimum 8 characters, at least one uppercase letter, at least one digit, and at least one special character (!@#$%^&*(),.?\":{} |<>). If user changes their own password, a new JWT cookie is automatically issued.",
)
async def update_password(
request: Request,

View File

@@ -848,10 +848,9 @@ async def onvif_probe(
try:
if isinstance(uri, str) and uri.startswith("rtsp://"):
if username and password and "@" not in uri:
# Inject raw credentials and add only the
# authenticated version. The credentials will be encoded
# later by ffprobe_stream or the config system.
cred = f"{username}:{password}@"
# Inject URL-encoded credentials and add only the
# authenticated version.
cred = f"{quote_plus(username)}:{quote_plus(password)}@"
injected = uri.replace(
"rtsp://", f"rtsp://{cred}", 1
)
@@ -904,8 +903,12 @@ async def onvif_probe(
"/cam/realmonitor?channel=1&subtype=0",
"/11",
]
# Use raw credentials for pattern fallback URIs when provided
auth_str = f"{username}:{password}@" if username and password else ""
# Use URL-encoded credentials for pattern fallback URIs when provided
auth_str = (
f"{quote_plus(username)}:{quote_plus(password)}@"
if username and password
else ""
)
rtsp_port = 554
for path in common_paths:
uri = f"rtsp://{auth_str}{host}:{rtsp_port}{path}"
@@ -927,7 +930,7 @@ async def onvif_probe(
and uri.startswith("rtsp://")
and "@" not in uri
):
cred = f"{username}:{password}@"
cred = f"{quote_plus(username)}:{quote_plus(password)}@"
cred_uri = uri.replace("rtsp://", f"rtsp://{cred}", 1)
if cred_uri not in to_test:
to_test.append(cred_uri)
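The effect of URL-encoding the injected credentials, as the replacement lines do with `quote_plus`, can be seen in a small standalone sketch; the sample URI and credentials are invented:

```python
from urllib.parse import quote_plus


def inject_rtsp_credentials(uri: str, username: str, password: str) -> str:
    """Insert URL-encoded credentials into an RTSP URI that lacks them."""
    if not uri.startswith("rtsp://") or "@" in uri:
        # Leave non-RTSP URIs and already-authenticated URIs untouched.
        return uri
    cred = f"{quote_plus(username)}:{quote_plus(password)}@"
    # Replace only the first occurrence of the scheme prefix.
    return uri.replace("rtsp://", f"rtsp://{cred}", 1)
```

Special characters such as `@` and `:` in the password become `%40` and `%3A`, so the resulting URI stays parseable instead of gaining a spurious extra `@` separator.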

View File

@@ -73,7 +73,7 @@ def get_faces():
face_dict[name] = []
for file in filter(
lambda f: f.lower().endswith((".webp", ".png", ".jpg", ".jpeg")),
lambda f: (f.lower().endswith((".webp", ".png", ".jpg", ".jpeg"))),
os.listdir(face_dir),
):
face_dict[name].append(file)
@@ -582,7 +582,7 @@ def get_classification_dataset(name: str):
dataset_dict[category_name] = []
for file in filter(
lambda f: f.lower().endswith((".webp", ".png", ".jpg", ".jpeg")),
lambda f: (f.lower().endswith((".webp", ".png", ".jpg", ".jpeg"))),
os.listdir(category_dir),
):
dataset_dict[category_name].append(file)
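The filename filter repeated in these hunks can be exercised on its own; the sample directory listing below is made up:

```python
# Case-insensitive image extensions accepted by the filter.
IMAGE_EXTS = (".webp", ".png", ".jpg", ".jpeg")


def image_files(filenames: list[str]) -> list[str]:
    """Keep only files whose (case-insensitive) extension is an image type."""
    return [f for f in filenames if f.lower().endswith(IMAGE_EXTS)]
```

`str.endswith` accepts a tuple of suffixes, which is why no extra parentheses around the lambda body are needed in either side of the diff.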
@@ -693,7 +693,7 @@ def get_classification_images(name: str):
status_code=200,
content=list(
filter(
lambda f: f.lower().endswith((".webp", ".png", ".jpg", ".jpeg")),
lambda f: (f.lower().endswith((".webp", ".png", ".jpg", ".jpeg"))),
os.listdir(train_dir),
)
),
@@ -759,28 +759,15 @@ def delete_classification_dataset_images(
CLIPS_DIR, sanitize_filename(name), "dataset", sanitize_filename(category)
)
deleted_count = 0
for id in list_of_ids:
file_path = os.path.join(folder, sanitize_filename(id))
if os.path.isfile(file_path):
os.unlink(file_path)
deleted_count += 1
if os.path.exists(folder) and not os.listdir(folder) and category.lower() != "none":
os.rmdir(folder)
# Update training metadata to reflect deleted images
# This ensures the dataset is marked as changed after deletion
# (even if the total count happens to be the same after adding and deleting)
if deleted_count > 0:
sanitized_name = sanitize_filename(name)
metadata = read_training_metadata(sanitized_name)
if metadata:
last_count = metadata.get("last_training_image_count", 0)
updated_count = max(0, last_count - deleted_count)
write_training_metadata(sanitized_name, updated_count)
return JSONResponse(
content=({"success": True, "message": "Successfully deleted images."}),
status_code=200,

View File

@@ -10,7 +10,7 @@ class ReviewQueryParams(BaseModel):
cameras: str = "all"
labels: str = "all"
zones: str = "all"
reviewed: Union[int, SkipJsonSchema[None]] = None
reviewed: int = 0
limit: Union[int, SkipJsonSchema[None]] = None
severity: Union[SeverityEnum, SkipJsonSchema[None]] = None
before: Union[float, SkipJsonSchema[None]] = None

View File

@@ -69,25 +69,6 @@ logger = logging.getLogger(__name__)
router = APIRouter(tags=[Tags.events])
def _build_attribute_filter_clause(attributes: str):
filtered_attributes = [
attr.strip() for attr in attributes.split(",") if attr.strip()
]
attribute_clauses = []
for attr in filtered_attributes:
attribute_clauses.append(Event.data.cast("text") % f'*:"{attr}"*')
escaped_attr = json.dumps(attr, ensure_ascii=True)[1:-1]
if escaped_attr != attr:
attribute_clauses.append(Event.data.cast("text") % f'*:"{escaped_attr}"*')
if not attribute_clauses:
return None
return reduce(operator.or_, attribute_clauses)
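The `json.dumps(...)[1:-1]` trick above produces the JSON-escaped spelling of an attribute so non-ASCII values can also match against the JSON-serialized `data` column; a standalone sketch of the pattern-building step, without the peewee clause construction:

```python
import json


def attribute_match_patterns(attributes: str) -> list[str]:
    """Build LIKE-style patterns for each attribute, adding the
    JSON-escaped spelling when it differs from the raw one."""
    patterns = []
    for attr in (a.strip() for a in attributes.split(",")):
        if not attr:
            continue
        patterns.append(f'*:"{attr}"*')
        # json.dumps with ensure_ascii=True escapes non-ASCII characters;
        # strip the surrounding quotes to get just the escaped value.
        escaped = json.dumps(attr, ensure_ascii=True)[1:-1]
        if escaped != attr:
            patterns.append(f'*:"{escaped}"*')
    return patterns
```

Plain-ASCII attributes produce a single pattern, while an attribute like `café` also yields the `caf\u00e9` spelling that appears in the stored JSON text.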
@router.get(
"/events",
response_model=list[EventResponse],
@@ -212,9 +193,14 @@ def events(
if attributes != "all":
# Custom classification results are stored as data[model_name] = result_value
attribute_clause = _build_attribute_filter_clause(attributes)
if attribute_clause is not None:
clauses.append(attribute_clause)
filtered_attributes = attributes.split(",")
attribute_clauses = []
for attr in filtered_attributes:
attribute_clauses.append(Event.data.cast("text") % f'*:"{attr}"*')
attribute_clause = reduce(operator.or_, attribute_clauses)
clauses.append(attribute_clause)
if recognized_license_plate != "all":
filtered_recognized_license_plates = recognized_license_plate.split(",")
@@ -522,7 +508,7 @@ def events_search(
cameras = params.cameras
labels = params.labels
sub_labels = params.sub_labels
attributes = unquote(params.attributes)
attributes = params.attributes
zones = params.zones
after = params.after
before = params.before
@@ -621,9 +607,13 @@ def events_search(
if attributes != "all":
# Custom classification results are stored as data[model_name] = result_value
attribute_clause = _build_attribute_filter_clause(attributes)
if attribute_clause is not None:
event_filters.append(attribute_clause)
filtered_attributes = attributes.split(",")
attribute_clauses = []
for attr in filtered_attributes:
attribute_clauses.append(Event.data.cast("text") % f'*:"{attr}"*')
event_filters.append(reduce(operator.or_, attribute_clauses))
if zones != "all":
zone_clauses = []

View File

@@ -26,6 +26,3 @@ class GenAIConfig(FrigateBaseModel):
provider_options: dict[str, Any] = Field(
default={}, title="GenAI Provider extra options."
)
runtime_options: dict[str, Any] = Field(
default={}, title="Options to pass during inference calls."
)

View File

@@ -108,13 +108,12 @@ class GenAIReviewConfig(FrigateBaseModel):
default="""### Normal Activity Indicators (Level 0)
- Known/verified people in any zone at any time
- People with pets in residential areas
- Routine residential vehicle access during daytime/evening (6 AM - 10 PM): entering, exiting, loading/unloading items — normal commute and travel patterns
- Deliveries or services during daytime/evening (6 AM - 10 PM): carrying packages to doors/porches, placing items, leaving
- Services/maintenance workers with visible tools, uniforms, or service vehicles during daytime
- Activity confined to public areas only (sidewalks, streets) without entering property at any time
### Suspicious Activity Indicators (Level 1)
- **Checking or probing vehicle/building access**: trying handles without entering, peering through windows, examining multiple vehicles, or possessing break-in tools — Level 1
- **Testing or attempting to open doors/windows/handles on vehicles or buildings** — ALWAYS Level 1 regardless of time or duration
- **Unidentified person in private areas (driveways, near vehicles/buildings) during late night/early morning (11 PM - 5 AM)** — ALWAYS Level 1 regardless of activity or duration
- Taking items that don't belong to them (packages, objects from porches/driveways)
- Climbing or jumping fences/barriers to access property
@@ -134,8 +133,8 @@ Evaluate in this order:
1. **If person is verified/known** → Level 0 regardless of time or activity
2. **If person is unidentified:**
- Check time: If late night/early morning (11 PM - 5 AM) AND in private areas (driveways, near vehicles/buildings) → Level 1
- Check actions: If probing access (trying handles without entering, checking multiple vehicles), taking items, climbing → Level 1
- Otherwise, if daytime/evening (6 AM - 10 PM) with clear legitimate purpose (delivery, service, routine vehicle access) → Level 0
- Check actions: If testing doors/handles, taking items, climbing → Level 1
- Otherwise, if daytime/evening (6 AM - 10 PM) with clear legitimate purpose (delivery, service worker) → Level 0
3. **Escalate to Level 2 if:** Weapons, break-in tools, forced entry in progress, violence, or active property damage visible (escalates from Level 0 or 1)
The mere presence of an unidentified person in private areas during late night hours is inherently suspicious and warrants human review, regardless of what activity they appear to be doing or how brief the sequence is.""",

View File

@@ -662,13 +662,6 @@ class FrigateConfig(FrigateBaseModel):
# generate zone contours
if len(camera_config.zones) > 0:
for zone in camera_config.zones.values():
if zone.filters:
for object_name, filter_config in zone.filters.items():
zone.filters[object_name] = RuntimeFilterConfig(
frame_shape=camera_config.frame_shape,
**filter_config.model_dump(exclude_unset=True),
)
zone.generate_contour(camera_config.frame_shape)
# Set live view stream if none is set

View File

@@ -97,7 +97,7 @@ class CustomStateClassificationProcessor(RealTimeProcessorApi):
self.interpreter.allocate_tensors()
self.tensor_input_details = self.interpreter.get_input_details()
self.tensor_output_details = self.interpreter.get_output_details()
self.labelmap = load_labels(labelmap_path, prefill=0, indexed=False)
self.labelmap = load_labels(labelmap_path, prefill=0)
self.classifications_per_second.start()
def __update_metrics(self, duration: float) -> None:
@@ -398,7 +398,7 @@ class CustomObjectClassificationProcessor(RealTimeProcessorApi):
self.interpreter.allocate_tensors()
self.tensor_input_details = self.interpreter.get_input_details()
self.tensor_output_details = self.interpreter.get_output_details()
self.labelmap = load_labels(labelmap_path, prefill=0, indexed=False)
self.labelmap = load_labels(labelmap_path, prefill=0)
def __update_metrics(self, duration: float) -> None:
self.classifications_per_second.update()
@@ -419,21 +419,14 @@ class CustomObjectClassificationProcessor(RealTimeProcessorApi):
"""
if object_id not in self.classification_history:
self.classification_history[object_id] = []
logger.debug(f"Created new classification history for {object_id}")
self.classification_history[object_id].append(
(current_label, current_score, current_time)
)
history = self.classification_history[object_id]
logger.debug(
f"History for {object_id}: {len(history)} entries, latest=({current_label}, {current_score})"
)
if len(history) < 3:
logger.debug(
f"History for {object_id} has {len(history)} entries, need at least 3"
)
return None, 0.0
label_counts = {}
@@ -452,27 +445,14 @@ class CustomObjectClassificationProcessor(RealTimeProcessorApi):
best_count = label_counts[best_label]
consensus_threshold = total_attempts * 0.6
logger.debug(
f"Consensus calc for {object_id}: label_counts={label_counts}, "
f"best_label={best_label}, best_count={best_count}, "
f"total={total_attempts}, threshold={consensus_threshold}"
)
if best_count < consensus_threshold:
logger.debug(
f"No consensus for {object_id}: {best_count} < {consensus_threshold}"
)
return None, 0.0
avg_score = sum(label_scores[best_label]) / len(label_scores[best_label])
if best_label == "none":
logger.debug(f"Filtering 'none' label for {object_id}")
return None, 0.0
logger.debug(
f"Consensus reached for {object_id}: {best_label} with avg_score={avg_score}"
)
return best_label, avg_score
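The consensus rule used by `get_weighted_score` (at least 3 attempts, a 60% majority, and "none" never wins) can be exercised in isolation; this is a simplified reimplementation for illustration, not the class method itself:

```python
def consensus(history):
    """history: list of (label, score) classification attempts.
    Returns (label, avg_score) once one label holds a 60% majority over
    at least 3 attempts; otherwise (None, 0.0). 'none' is filtered out."""
    if len(history) < 3:
        return None, 0.0
    counts, scores = {}, {}
    for label, score in history:
        counts[label] = counts.get(label, 0) + 1
        scores.setdefault(label, []).append(score)
    best = max(counts, key=counts.get)
    if counts[best] < len(history) * 0.6 or best == "none":
        return None, 0.0
    return best, sum(scores[best]) / len(scores[best])

print(consensus([("cat", 0.9), ("cat", 0.8), ("dog", 0.7)]))  # majority reached
print(consensus([("cat", 0.9), ("dog", 0.8)]))                # too few attempts
```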
def process_frame(self, obj_data, frame):
@@ -580,30 +560,17 @@ class CustomObjectClassificationProcessor(RealTimeProcessorApi):
)
if score < self.model_config.threshold:
logger.debug(
f"{self.model_config.name}: Score {score} < threshold {self.model_config.threshold} for {object_id}, skipping"
)
logger.debug(f"Score {score} is less than threshold.")
return
sub_label = self.labelmap[best_id]
logger.debug(
f"{self.model_config.name}: Object {object_id} (label={obj_data['label']}) passed threshold with sub_label={sub_label}, score={score}"
)
consensus_label, consensus_score = self.get_weighted_score(
object_id, sub_label, score, now
)
logger.debug(
f"{self.model_config.name}: get_weighted_score returned consensus_label={consensus_label}, consensus_score={consensus_score} for {object_id}"
)
if consensus_label is not None:
camera = obj_data["camera"]
logger.debug(
f"{self.model_config.name}: Publishing sub_label={consensus_label} for {obj_data['label']} object {object_id} on {camera}"
)
if (
self.model_config.object_config.classification_type
@@ -658,7 +625,6 @@ class CustomObjectClassificationProcessor(RealTimeProcessorApi):
def handle_request(self, topic, request_data):
if topic == EmbeddingsRequestEnum.reload_classification_model.value:
if request_data.get("model_name") == self.model_config.name:
self.__build_detector()
logger.info(
f"Successfully loaded updated model for {self.model_config.name}"
)
@@ -696,7 +662,7 @@ def write_classification_attempt(
# delete oldest face image if maximum is reached
try:
files = sorted(
filter(lambda f: f.endswith(".webp"), os.listdir(folder)),
filter(lambda f: (f.endswith(".webp")), os.listdir(folder)),
key=lambda f: os.path.getctime(os.path.join(folder, f)),
reverse=True,
)

View File

@@ -539,7 +539,7 @@ class FaceRealTimeProcessor(RealTimeProcessorApi):
cv2.imwrite(file, frame)
files = sorted(
filter(lambda f: f.endswith(".webp"), os.listdir(folder)),
filter(lambda f: (f.endswith(".webp")), os.listdir(folder)),
key=lambda f: os.path.getctime(os.path.join(folder, f)),
reverse=True,
)

View File

@@ -633,7 +633,7 @@ class EmbeddingMaintainer(threading.Thread):
camera, frame_name, _, _, motion_boxes, _ = data
if not camera or camera not in self.config.cameras:
if not camera or len(motion_boxes) == 0 or camera not in self.config.cameras:
return
camera_config = self.config.cameras[camera]
@@ -660,10 +660,8 @@ class EmbeddingMaintainer(threading.Thread):
return
for processor in self.realtime_processors:
if (
dedicated_lpr_enabled
and len(motion_boxes) > 0
and isinstance(processor, LicensePlateRealTimeProcessor)
if dedicated_lpr_enabled and isinstance(
processor, LicensePlateRealTimeProcessor
):
processor.process_frame(camera, yuv_frame, True)

View File

@@ -2,7 +2,6 @@
import logging
import os
import threading
import warnings
from transformers import AutoFeatureExtractor, AutoTokenizer
@@ -55,7 +54,6 @@ class JinaV1TextEmbedding(BaseEmbedding):
self.tokenizer = None
self.feature_extractor = None
self.runner = None
self._lock = threading.Lock()
files_names = list(self.download_urls.keys()) + [self.tokenizer_file]
if not all(
@@ -136,18 +134,17 @@ class JinaV1TextEmbedding(BaseEmbedding):
)
def _preprocess_inputs(self, raw_inputs):
with self._lock:
max_length = max(len(self.tokenizer.encode(text)) for text in raw_inputs)
return [
self.tokenizer(
text,
padding="max_length",
truncation=True,
max_length=max_length,
return_tensors="np",
)
for text in raw_inputs
]
max_length = max(len(self.tokenizer.encode(text)) for text in raw_inputs)
return [
self.tokenizer(
text,
padding="max_length",
truncation=True,
max_length=max_length,
return_tensors="np",
)
for text in raw_inputs
]
class JinaV1ImageEmbedding(BaseEmbedding):
@@ -177,7 +174,6 @@ class JinaV1ImageEmbedding(BaseEmbedding):
self.download_path = os.path.join(MODEL_CACHE_DIR, self.model_name)
self.feature_extractor = None
self.runner: BaseModelRunner | None = None
self._lock = threading.Lock()
files_names = list(self.download_urls.keys())
if not all(
os.path.exists(os.path.join(self.download_path, n)) for n in files_names
@@ -220,9 +216,8 @@ class JinaV1ImageEmbedding(BaseEmbedding):
)
def _preprocess_inputs(self, raw_inputs):
with self._lock:
processed_images = [self._process_image(img) for img in raw_inputs]
return [
self.feature_extractor(images=image, return_tensors="np")
for image in processed_images
]
processed_images = [self._process_image(img) for img in raw_inputs]
return [
self.feature_extractor(images=image, return_tensors="np")
for image in processed_images
]

View File

@@ -6,7 +6,6 @@ from typing import Dict
from frigate.comms.events_updater import EventEndPublisher, EventUpdateSubscriber
from frigate.config import FrigateConfig
from frigate.config.classification import ObjectClassificationType
from frigate.events.types import EventStateEnum, EventTypeEnum
from frigate.models import Event
from frigate.util.builtin import to_relative_box
@@ -16,16 +15,6 @@ logger = logging.getLogger(__name__)
def should_update_db(prev_event: Event, current_event: Event) -> bool:
"""If current_event has updated fields and (clip or snapshot)."""
# If event is ending and was previously saved, always update to set end_time
# This ensures events are properly ended even when alerts/detections are disabled
# mid-event (which can cause has_clip/has_snapshot to become False)
if (
prev_event["end_time"] is None
and current_event["end_time"] is not None
and (prev_event["has_clip"] or prev_event["has_snapshot"])
):
return True
if current_event["has_clip"] or current_event["has_snapshot"]:
# if this is the first time has_clip or has_snapshot turned true
if not prev_event["has_clip"] and not prev_event["has_snapshot"]:
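A minimal sketch of the end-time guard added to `should_update_db`, using plain dicts in place of `Event` rows; it forces a final write for events that were saved before alerts/detections were disabled mid-event:

```python
def should_update(prev: dict, cur: dict) -> bool:
    # An event that is ending and was previously saved must always be
    # updated so end_time is persisted, even if has_clip/has_snapshot
    # flipped off mid-event.
    if (
        prev["end_time"] is None
        and cur["end_time"] is not None
        and (prev["has_clip"] or prev["has_snapshot"])
    ):
        return True
    return cur["has_clip"] or cur["has_snapshot"]

print(should_update(
    {"end_time": None, "has_clip": True, "has_snapshot": False},
    {"end_time": 100.0, "has_clip": False, "has_snapshot": False},
))
```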
@@ -248,18 +237,6 @@ class EventProcessor(threading.Thread):
"recognized_license_plate"
][1]
# only overwrite attribute-type custom model fields in the database if they're set
for name, model_config in self.config.classification.custom.items():
if (
model_config.object_config
and model_config.object_config.classification_type
== ObjectClassificationType.attribute
):
value = event_data.get(name)
if value is not None:
event[Event.data][name] = value[0]
event[Event.data][f"{name}_score"] = value[1]
(
Event.insert(event)
.on_conflict(

View File

@@ -99,8 +99,8 @@ When forming your description:
## Response Format
Your response MUST be a flat JSON object with:
- `title` (string): A concise, direct title that describes the primary action or event in the sequence, not just what you literally see. Use spatial context when available to make titles more meaningful. When multiple objects/actions are present, prioritize whichever is most prominent or occurs first. Use names from "Objects in Scene" based on what you visually observe. If you see both a name and an unidentified object of the same type but visually observe only one person/object, use ONLY the name. Examples: "Joe walking dog", "Person taking out trash", "Vehicle arriving in driveway", "Joe accessing vehicle", "Person leaving porch for driveway".
- `scene` (string): A narrative description of what happens across the sequence from start to finish, in chronological order. Start by describing how the sequence begins, then describe the progression of events. **Describe all significant movements and actions in the order they occur.** For example, if a vehicle arrives and then a person exits, describe both actions sequentially. **Only describe actions you can actually observe happening in the frames provided.** Do not infer or assume actions that aren't visible (e.g., if you see someone walking but never see them sit, don't say they sat down). Include setting, detected objects, and their observable actions. Avoid speculation or filling in assumed behaviors. Your description should align with and support the threat level you assign.
- `title` (string): A concise, grammatically complete title in the format "[Subject] [action verb] [context]" that matches your scene description. Use names from "Objects in Scene" when you visually observe them.
- `shortSummary` (string): A brief 2-sentence summary of the scene, suitable for notifications. Should capture the key activity and context without full detail. This should be a condensed version of the scene description above.
- `confidence` (float): 0-1 confidence in your analysis. Higher confidence when objects/actions are clearly visible and context is unambiguous. Lower confidence when the sequence is unclear, objects are partially obscured, or context is ambiguous.
- `potential_threat_level` (integer): 0, 1, or 2 as defined in "Normal Activity Patterns for This Property" above. Your threat level must be consistent with your scene description and the guidance above.
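A response conforming to this schema can be validated in a few lines; the field values below are invented examples, only the keys and their constraints come from the prompt above:

```python
import json

raw = json.dumps({
    "title": "Person leaving porch for driveway",
    "scene": "The sequence begins with a person stepping off the porch "
             "and walking toward the driveway, where it ends.",
    "shortSummary": "A person walks from the porch to the driveway. "
                    "No unusual activity is visible.",
    "confidence": 0.82,
    "potential_threat_level": 0,
})

parsed = json.loads(raw)
expected_keys = {"title", "scene", "shortSummary", "confidence", "potential_threat_level"}
assert set(parsed) == expected_keys          # flat object, no nesting
assert 0.0 <= parsed["confidence"] <= 1.0    # confidence is a 0-1 float
assert parsed["potential_threat_level"] in (0, 1, 2)
```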

View File

@@ -64,7 +64,6 @@ class OpenAIClient(GenAIClient):
},
],
timeout=self.timeout,
**self.genai_config.runtime_options,
)
except Exception as e:
logger.warning("Azure OpenAI returned an error: %s", str(e))

View File

@@ -3,8 +3,8 @@
import logging
from typing import Optional
from google import genai
from google.genai import errors, types
import google.generativeai as genai
from google.api_core.exceptions import GoogleAPICallError
from frigate.config import GenAIProviderEnum
from frigate.genai import GenAIClient, register_genai_provider
@@ -16,58 +16,40 @@ logger = logging.getLogger(__name__)
class GeminiClient(GenAIClient):
"""Generative AI client for Frigate using Gemini."""
provider: genai.Client
provider: genai.GenerativeModel
def _init_provider(self):
"""Initialize the client."""
# Merge provider_options into HttpOptions
http_options_dict = {
"timeout": int(self.timeout * 1000), # requires milliseconds
"retry_options": types.HttpRetryOptions(
attempts=3,
initial_delay=1.0,
max_delay=60.0,
exp_base=2.0,
jitter=1.0,
http_status_codes=[429, 500, 502, 503, 504],
),
}
if isinstance(self.genai_config.provider_options, dict):
http_options_dict.update(self.genai_config.provider_options)
return genai.Client(
api_key=self.genai_config.api_key,
http_options=types.HttpOptions(**http_options_dict),
genai.configure(api_key=self.genai_config.api_key)
return genai.GenerativeModel(
self.genai_config.model, **self.genai_config.provider_options
)
def _send(self, prompt: str, images: list[bytes]) -> Optional[str]:
"""Submit a request to Gemini."""
contents = [
types.Part.from_bytes(data=img, mime_type="image/jpeg") for img in images
data = [
{
"mime_type": "image/jpeg",
"data": img,
}
for img in images
] + [prompt]
try:
# Merge runtime_options into generation_config if provided
generation_config_dict = {"candidate_count": 1}
generation_config_dict.update(self.genai_config.runtime_options)
response = self.provider.models.generate_content(
model=self.genai_config.model,
contents=contents,
config=types.GenerateContentConfig(
**generation_config_dict,
response = self.provider.generate_content(
data,
generation_config=genai.types.GenerationConfig(
candidate_count=1,
),
request_options=genai.types.RequestOptions(
timeout=self.timeout,
),
)
except errors.APIError as e:
except GoogleAPICallError as e:
logger.warning("Gemini returned an error: %s", str(e))
return None
except Exception as e:
logger.warning("An unexpected error occurred with Gemini: %s", str(e))
return None
try:
description = response.text.strip()
except (ValueError, AttributeError):
except ValueError:
# No description was generated
return None
return description
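The retry schedule configured in `HttpRetryOptions` above (3 attempts, exponential backoff with base 2 from a 1 s initial delay, capped at 60 s) works out to delays of roughly 1 s, 2 s, and 4 s before jitter; a sketch of the deterministic part:

```python
def backoff_delays(attempts: int = 3, initial: float = 1.0,
                   max_delay: float = 60.0, base: float = 2.0) -> list[float]:
    # delay before retry n is initial * base**n, capped at max_delay;
    # the real client also applies jitter, omitted here
    return [min(initial * base**n, max_delay) for n in range(attempts)]

print(backoff_delays())
```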

View File

@@ -58,15 +58,11 @@ class OllamaClient(GenAIClient):
)
return None
try:
ollama_options = {
**self.provider_options,
**self.genai_config.runtime_options,
}
result = self.provider.generate(
self.genai_config.model,
prompt,
images=images if images else None,
**ollama_options,
**self.provider_options,
)
logger.debug(
f"Ollama tokens used: eval_count={result.get('eval_count')}, prompt_eval_count={result.get('prompt_eval_count')}"

View File

@@ -22,14 +22,9 @@ class OpenAIClient(GenAIClient):
def _init_provider(self):
"""Initialize the client."""
# Extract context_size from provider_options as it's not a valid OpenAI client parameter
# It will be used in get_context_size() instead
provider_opts = {
k: v
for k, v in self.genai_config.provider_options.items()
if k != "context_size"
}
return OpenAI(api_key=self.genai_config.api_key, **provider_opts)
return OpenAI(
api_key=self.genai_config.api_key, **self.genai_config.provider_options
)
def _send(self, prompt: str, images: list[bytes]) -> Optional[str]:
"""Submit a request to OpenAI."""
@@ -61,7 +56,6 @@ class OpenAIClient(GenAIClient):
},
],
timeout=self.timeout,
**self.genai_config.runtime_options,
)
if (
result is not None
@@ -79,16 +73,6 @@ class OpenAIClient(GenAIClient):
if self.context_size is not None:
return self.context_size
# First check provider_options for manually specified context size
# This is necessary for llama.cpp and other OpenAI-compatible servers
# that don't expose the configured runtime context size in the API response
if "context_size" in self.genai_config.provider_options:
self.context_size = self.genai_config.provider_options["context_size"]
logger.debug(
f"Using context size {self.context_size} from provider_options for model {self.genai_config.model}"
)
return self.context_size
try:
models = self.provider.models.list()
for model in models.data:

View File

@@ -26,16 +26,15 @@ LOG_HANDLER.setFormatter(
# filter out norfair warning
LOG_HANDLER.addFilter(
lambda record: (
not record.getMessage().startswith("You are using a scalar distance function")
lambda record: not record.getMessage().startswith(
"You are using a scalar distance function"
)
)
# filter out tflite logging
LOG_HANDLER.addFilter(
lambda record: (
"Created TensorFlow Lite XNNPACK delegate for CPU." not in record.getMessage()
)
lambda record: "Created TensorFlow Lite XNNPACK delegate for CPU."
not in record.getMessage()
)
@@ -90,7 +89,6 @@ def apply_log_levels(default: str, log_levels: dict[str, LogLevel]) -> None:
"ws4py": LogLevel.error,
"PIL": LogLevel.warning,
"numba": LogLevel.warning,
"google_genai.models": LogLevel.warning,
**log_levels,
}

View File

@@ -97,7 +97,6 @@ class RecordingMaintainer(threading.Thread):
self.object_recordings_info: dict[str, list] = defaultdict(list)
self.audio_recordings_info: dict[str, list] = defaultdict(list)
self.end_time_cache: dict[str, Tuple[datetime.datetime, float]] = {}
self.unexpected_cache_files_logged: bool = False
async def move_files(self) -> None:
cache_files = [
@@ -113,14 +112,7 @@ class RecordingMaintainer(threading.Thread):
for cache in cache_files:
cache_path = os.path.join(CACHE_DIR, cache)
basename = os.path.splitext(cache)[0]
try:
camera, date = basename.rsplit("@", maxsplit=1)
except ValueError:
if not self.unexpected_cache_files_logged:
logger.warning("Skipping unexpected files in cache")
self.unexpected_cache_files_logged = True
continue
camera, date = basename.rsplit("@", maxsplit=1)
start_time = datetime.datetime.strptime(
date, CACHE_SEGMENT_FORMAT
).astimezone(datetime.timezone.utc)
@@ -172,13 +164,7 @@ class RecordingMaintainer(threading.Thread):
cache_path = os.path.join(CACHE_DIR, cache)
basename = os.path.splitext(cache)[0]
try:
camera, date = basename.rsplit("@", maxsplit=1)
except ValueError:
if not self.unexpected_cache_files_logged:
logger.warning("Skipping unexpected files in cache")
self.unexpected_cache_files_logged = True
continue
camera, date = basename.rsplit("@", maxsplit=1)
# important that start_time is utc because recordings are stored and compared in utc
start_time = datetime.datetime.strptime(
@@ -208,10 +194,8 @@ class RecordingMaintainer(threading.Thread):
processed_segment_count = len(
list(
filter(
lambda r: (
r["start_time"].timestamp()
< most_recently_processed_frame_time
),
lambda r: r["start_time"].timestamp()
< most_recently_processed_frame_time,
grouped_recordings[camera],
)
)
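The cache-segment naming convention parsed here (`<camera>@<timestamp>.<ext>`) can be exercised in isolation; `CACHE_SEGMENT_FORMAT` below is assumed from the `camera@20210101000000+0000.mp4` fixture used in the tests, and the helper itself is ours:

```python
import datetime

CACHE_SEGMENT_FORMAT = "%Y%m%d%H%M%S%z"  # assumed from the test fixture name

def parse_cache_name(cache: str):
    """Split '<camera>@<timestamp>.<ext>' into (camera, UTC start time);
    returns None for filenames that don't follow the convention."""
    basename = cache.rsplit(".", 1)[0]
    try:
        camera, date = basename.rsplit("@", maxsplit=1)
        start = datetime.datetime.strptime(date, CACHE_SEGMENT_FORMAT).astimezone(
            datetime.timezone.utc
        )
    except ValueError:
        return None
    return camera, start

print(parse_cache_name("front_door@20210101000000+0000.mp4"))
print(parse_cache_name("bad_filename.mp4"))  # unexpected file, skipped
```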

View File

@@ -168,57 +168,6 @@ class TestHttpApp(BaseTestHttp):
assert events[0]["id"] == id
assert events[1]["id"] == id2
def test_get_event_list_match_multilingual_attribute(self):
event_id = "123456.zh"
attribute = "中文标签"
with AuthTestClient(self.app) as client:
super().insert_mock_event(event_id, data={"custom_attr": attribute})
events = client.get("/events", params={"attributes": attribute}).json()
assert len(events) == 1
assert events[0]["id"] == event_id
events = client.get(
"/events", params={"attributes": "%E4%B8%AD%E6%96%87%E6%A0%87%E7%AD%BE"}
).json()
assert len(events) == 1
assert events[0]["id"] == event_id
def test_events_search_match_multilingual_attribute(self):
event_id = "123456.zh.search"
attribute = "中文标签"
mock_embeddings = Mock()
mock_embeddings.search_thumbnail.return_value = [(event_id, 0.05)]
self.app.frigate_config.semantic_search.enabled = True
self.app.embeddings = mock_embeddings
with AuthTestClient(self.app) as client:
super().insert_mock_event(event_id, data={"custom_attr": attribute})
events = client.get(
"/events/search",
params={
"search_type": "similarity",
"event_id": event_id,
"attributes": attribute,
},
).json()
assert len(events) == 1
assert events[0]["id"] == event_id
events = client.get(
"/events/search",
params={
"search_type": "similarity",
"event_id": event_id,
"attributes": "%E4%B8%AD%E6%96%87%E6%A0%87%E7%AD%BE",
},
).json()
assert len(events) == 1
assert events[0]["id"] == event_id
def test_get_good_event(self):
id = "123456.random"

View File

@@ -632,49 +632,6 @@ class TestConfig(unittest.TestCase):
)
assert frigate_config.cameras["back"].zones["test"].color != (0, 0, 0)
def test_zone_filter_area_percent_converts_to_pixels(self):
config = {
"mqtt": {"host": "mqtt"},
"record": {
"alerts": {
"retain": {
"days": 20,
}
}
},
"cameras": {
"back": {
"ffmpeg": {
"inputs": [
{"path": "rtsp://10.0.0.1:554/video", "roles": ["detect"]}
]
},
"detect": {
"height": 1080,
"width": 1920,
"fps": 5,
},
"zones": {
"notification": {
"coordinates": "0.03,1,0.025,0,0.626,0,0.643,1",
"objects": ["person"],
"filters": {"person": {"min_area": 0.1}},
}
},
}
},
}
frigate_config = FrigateConfig(**config)
expected_min_area = int(1080 * 1920 * 0.1)
assert (
frigate_config.cameras["back"]
.zones["notification"]
.filters["person"]
.min_area
== expected_min_area
)
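The conversion this removed test exercised (a relative `min_area` of 0.1 on a 1920x1080 frame becoming 207,360 px) can be sketched as follows, assuming values at or below 1.0 are treated as fractions of the frame area:

```python
def min_area_pixels(width: int, height: int, min_area: float) -> int:
    # values <= 1.0 are read as a fraction of the frame area (assumed
    # semantics); anything larger is already an absolute pixel count
    if min_area <= 1.0:
        return int(width * height * min_area)
    return int(min_area)

print(min_area_pixels(1920, 1080, 0.1))
```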
def test_zone_relative_matches_explicit(self):
config = {
"mqtt": {"host": "mqtt"},

View File

@@ -1,66 +0,0 @@
import sys
import unittest
from unittest.mock import MagicMock, patch
# Mock complex imports before importing maintainer
sys.modules["frigate.comms.inter_process"] = MagicMock()
sys.modules["frigate.comms.detections_updater"] = MagicMock()
sys.modules["frigate.comms.recordings_updater"] = MagicMock()
sys.modules["frigate.config.camera.updater"] = MagicMock()
# Now import the class under test
from frigate.config import FrigateConfig # noqa: E402
from frigate.record.maintainer import RecordingMaintainer # noqa: E402
class TestMaintainer(unittest.IsolatedAsyncioTestCase):
async def test_move_files_survives_bad_filename(self):
config = MagicMock(spec=FrigateConfig)
config.cameras = {}
stop_event = MagicMock()
maintainer = RecordingMaintainer(config, stop_event)
# We need to mock end_time_cache to avoid key errors if logic proceeds
maintainer.end_time_cache = {}
# Mock filesystem
# One bad file, one good file
files = ["bad_filename.mp4", "camera@20210101000000+0000.mp4"]
with patch("os.listdir", return_value=files):
with patch("os.path.isfile", return_value=True):
with patch(
"frigate.record.maintainer.psutil.process_iter", return_value=[]
):
with patch("frigate.record.maintainer.logger.warning") as warn:
# Mock validate_and_move_segment to avoid further logic
maintainer.validate_and_move_segment = MagicMock()
try:
await maintainer.move_files()
except ValueError as e:
if "not enough values to unpack" in str(e):
self.fail("move_files() crashed on bad filename!")
raise e
except Exception:
# Ignore other errors (like DB connection) as we only care about the unpack crash
pass
# The bad filename is encountered in multiple loops, but should only warn once.
matching = [
c
for c in warn.call_args_list
if c.args
and isinstance(c.args[0], str)
and "Skipping unexpected files in cache" in c.args[0]
]
self.assertEqual(
1,
len(matching),
f"Expected a single warning for unexpected files, got {len(matching)}",
)
if __name__ == "__main__":
unittest.main()

View File

@@ -31,21 +31,6 @@ class TestProxyRoleResolution(unittest.TestCase):
role = resolve_role(headers, self.proxy_config, self.config_roles)
self.assertEqual(role, "admin")
def test_role_map_or_matching(self):
config = self.proxy_config
config.header_map.role_map = {
"admin": ["group_admin", "group_privileged"],
}
# OR semantics: a single matching group should map to the role
headers = {"x-remote-role": "group_admin"}
role = resolve_role(headers, config, self.config_roles)
self.assertEqual(role, "admin")
headers = {"x-remote-role": "group_admin|group_privileged"}
role = resolve_role(headers, config, self.config_roles)
self.assertEqual(role, "admin")
def test_direct_role_header_with_separator(self):
config = self.proxy_config
config.header_map.role_map = None # disable role_map

View File

@@ -377,14 +377,7 @@ class TrackedObject:
return (thumb_update, significant_change, path_update, autotracker_update)
def to_dict(self) -> dict[str, Any]:
# Tracking internals excluded from output (centroid, estimate, estimate_velocity)
_EXCLUDED_OBJ_DATA_KEYS = {
"centroid",
"estimate",
"estimate_velocity",
}
event: dict[str, Any] = {
event = {
"id": self.obj_data["id"],
"camera": self.camera_config.name,
"frame_time": self.obj_data["frame_time"],
@@ -419,11 +412,6 @@ class TrackedObject:
"recognized_license_plate": self.obj_data.get("recognized_license_plate"),
}
# Add any other obj_data keys (e.g. custom attribute fields) not yet included
for key, value in self.obj_data.items():
if key not in _EXCLUDED_OBJ_DATA_KEYS and key not in event:
event[key] = value
return event
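The pass-through of extra `obj_data` keys in `to_dict` can be sketched standalone; `merge_extra` is an illustrative helper, not a Frigate API:

```python
_EXCLUDED_OBJ_DATA_KEYS = {"centroid", "estimate", "estimate_velocity"}

def merge_extra(event: dict, obj_data: dict) -> dict:
    # copy custom attribute fields without overwriting keys already set
    # and without leaking tracker internals into the event payload
    for key, value in obj_data.items():
        if key not in _EXCLUDED_OBJ_DATA_KEYS and key not in event:
            event[key] = value
    return event

print(merge_extra(
    {"id": "1", "label": "person"},
    {"id": "x", "centroid": (0, 0), "shirt_color": "red"},
))
```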
def is_active(self) -> bool:

View File

@@ -129,9 +129,7 @@ def get_ffmpeg_arg_list(arg: Any) -> list:
return arg if isinstance(arg, list) else shlex.split(arg)
def load_labels(
path: Optional[str], encoding="utf-8", prefill=91, indexed: bool | None = None
):
def load_labels(path: Optional[str], encoding="utf-8", prefill=91):
"""Loads labels from file (with or without index numbers).
Args:
path: path to label file.
@@ -148,12 +146,11 @@ def load_labels(
if not lines:
return {}
if indexed != False and lines[0].split(" ", maxsplit=1)[0].isdigit():
if lines[0].split(" ", maxsplit=1)[0].isdigit():
pairs = [line.split(" ", maxsplit=1) for line in lines]
labels.update({int(index): label.strip() for index, label in pairs})
else:
labels.update({index: line.strip() for index, line in enumerate(lines)})
return labels
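The branch on the first token decides whether the label file is indexed; a standalone sketch of that parsing (our helper, mirroring the logic above):

```python
def parse_labels(lines: list[str]) -> dict[int, str]:
    """If the first token of the first line is a number, treat lines as
    'index label' pairs; otherwise enumerate the lines in order."""
    if lines and lines[0].split(" ", maxsplit=1)[0].isdigit():
        pairs = [line.split(" ", maxsplit=1) for line in lines]
        return {int(index): label.strip() for index, label in pairs}
    return {index: line.strip() for index, line in enumerate(lines)}

print(parse_labels(["0 person", "1 car"]))  # indexed form
print(parse_labels(["person", "car"]))      # plain form
```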

View File

@@ -43,7 +43,6 @@ def write_training_metadata(model_name: str, image_count: int) -> None:
model_name: Name of the classification model
image_count: Number of images used in training
"""
model_name = model_name.strip()
clips_model_dir = os.path.join(CLIPS_DIR, model_name)
os.makedirs(clips_model_dir, exist_ok=True)
@@ -71,7 +70,6 @@ def read_training_metadata(model_name: str) -> dict[str, any] | None:
Returns:
Dictionary with last_training_date and last_training_image_count, or None if not found
"""
model_name = model_name.strip()
clips_model_dir = os.path.join(CLIPS_DIR, model_name)
metadata_path = os.path.join(clips_model_dir, TRAINING_METADATA_FILE)
@@ -97,7 +95,6 @@ def get_dataset_image_count(model_name: str) -> int:
Returns:
Total count of images across all categories
"""
model_name = model_name.strip()
dataset_dir = os.path.join(CLIPS_DIR, model_name, "dataset")
if not os.path.exists(dataset_dir):
@@ -129,7 +126,6 @@ class ClassificationTrainingProcess(FrigateProcess):
"TF_KERAS_MOBILENET_V2_WEIGHTS_URL",
"",
)
model_name = model_name.strip()
super().__init__(
stop_event=None,
priority=PROCESS_PRIORITY_LOW,
@@ -296,7 +292,6 @@ class ClassificationTrainingProcess(FrigateProcess):
def kickoff_model_training(
embeddingRequestor: EmbeddingsRequestor, model_name: str
) -> None:
model_name = model_name.strip()
requestor = InterProcessRequestor()
requestor.send_data(
UPDATE_MODEL_STATE,
@@ -364,7 +359,6 @@ def collect_state_classification_examples(
model_name: Name of the classification model
cameras: Dict mapping camera names to normalized crop coordinates [x1, y1, x2, y2] (0-1)
"""
model_name = model_name.strip()
dataset_dir = os.path.join(CLIPS_DIR, model_name, "dataset")
# Step 1: Get review items for the cameras
@@ -720,7 +714,6 @@ def collect_object_classification_examples(
model_name: Name of the classification model
label: Object label to collect (e.g., "person", "car")
"""
model_name = model_name.strip()
dataset_dir = os.path.join(CLIPS_DIR, model_name, "dataset")
temp_dir = os.path.join(dataset_dir, "temp")
os.makedirs(temp_dir, exist_ok=True)

View File

@@ -540,16 +540,9 @@ def get_jetson_stats() -> Optional[dict[int, dict]]:
try:
results["mem"] = "-" # no discrete gpu memory
if os.path.exists("/sys/devices/gpu.0/load"):
with open("/sys/devices/gpu.0/load", "r") as f:
gpuload = float(f.readline()) / 10
results["gpu"] = f"{gpuload}%"
elif os.path.exists("/sys/devices/platform/gpu.0/load"):
with open("/sys/devices/platform/gpu.0/load", "r") as f:
gpuload = float(f.readline()) / 10
results["gpu"] = f"{gpuload}%"
else:
results["gpu"] = "-"
with open("/sys/devices/gpu.0/load", "r") as f:
gpuload = float(f.readline()) / 10
results["gpu"] = f"{gpuload}%"
except Exception:
return None
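The hunk above probes the Jetson GPU load sysfs node, which reports utilization in tenths of a percent (hence the division by 10). A self-contained sketch of that probe, with the path parameterized so the fallback behavior is visible:

```python
from typing import Optional


def read_jetson_gpu_load(path: str = "/sys/devices/gpu.0/load") -> Optional[str]:
    # The sysfs node holds an integer in tenths of a percent,
    # e.g. "500" means 50.0% GPU load.
    try:
        with open(path, "r") as f:
            return f"{float(f.readline()) / 10}%"
    except OSError:
        return None
```

On non-Jetson hardware the path does not exist and the function returns `None`, matching the diff's "-" / `None` handling.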

View File

@@ -64,12 +64,10 @@ def stop_ffmpeg(ffmpeg_process: sp.Popen[Any], logger: logging.Logger):
try:
logger.info("Waiting for ffmpeg to exit gracefully...")
ffmpeg_process.communicate(timeout=30)
logger.info("FFmpeg has exited")
except sp.TimeoutExpired:
logger.info("FFmpeg didn't exit. Force killing...")
ffmpeg_process.kill()
ffmpeg_process.communicate()
logger.info("FFmpeg has been killed")
ffmpeg_process = None
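The `stop_ffmpeg` hunk above follows the standard graceful-shutdown pattern for a child process: wait for a clean exit with a timeout, then force-kill. A generic sketch of that pattern (function name here is illustrative, not Frigate's API):

```python
import logging
import subprocess as sp


def stop_process_gracefully(
    proc: sp.Popen, logger: logging.Logger, timeout: int = 30
) -> None:
    # Wait for the process to exit on its own; if the timeout
    # expires, kill it and reap the zombie with a second communicate().
    try:
        proc.communicate(timeout=timeout)
        logger.info("process exited gracefully")
    except sp.TimeoutExpired:
        logger.info("process did not exit in time; force killing")
        proc.kill()
        proc.communicate()
```

The second `communicate()` after `kill()` matters: it drains pipes and collects the exit status so the child is not left as a zombie.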
@@ -214,7 +212,6 @@ class CameraWatchdog(threading.Thread):
self.latest_valid_segment_time: float = 0
self.latest_invalid_segment_time: float = 0
self.latest_cache_segment_time: float = 0
self.record_enable_time: datetime | None = None
def _update_enabled_state(self) -> bool:
"""Fetch the latest config and update enabled state."""
@@ -262,9 +259,6 @@ class CameraWatchdog(threading.Thread):
def run(self) -> None:
if self._update_enabled_state():
self.start_all_ffmpeg()
# If recording is enabled at startup, set the grace period timer
if self.config.record.enabled:
self.record_enable_time = datetime.now().astimezone(timezone.utc)
time.sleep(self.sleeptime)
while not self.stop_event.wait(self.sleeptime):
@@ -274,15 +268,13 @@ class CameraWatchdog(threading.Thread):
self.logger.debug(f"Enabling camera {self.config.name}")
self.start_all_ffmpeg()
# reset all timestamps and record the enable time for grace period
# reset all timestamps
self.latest_valid_segment_time = 0
self.latest_invalid_segment_time = 0
self.latest_cache_segment_time = 0
self.record_enable_time = datetime.now().astimezone(timezone.utc)
else:
self.logger.debug(f"Disabling camera {self.config.name}")
self.stop_all_ffmpeg()
self.record_enable_time = None
# update camera status
self.requestor.send_data(
@@ -367,12 +359,6 @@ class CameraWatchdog(threading.Thread):
if self.config.record.enabled and "record" in p["roles"]:
now_utc = datetime.now().astimezone(timezone.utc)
# Check if we're within the grace period after enabling recording
# Grace period: 90 seconds allows time for ffmpeg to start and create first segment
in_grace_period = self.record_enable_time is not None and (
now_utc - self.record_enable_time
) < timedelta(seconds=90)
latest_cache_dt = (
datetime.fromtimestamp(
self.latest_cache_segment_time, tz=timezone.utc
@@ -398,16 +384,10 @@ class CameraWatchdog(threading.Thread):
)
# ensure segments are still being created and that they have valid video data
# Skip checks during grace period to allow segments to start being created
cache_stale = not in_grace_period and now_utc > (
latest_cache_dt + timedelta(seconds=120)
)
valid_stale = not in_grace_period and now_utc > (
latest_valid_dt + timedelta(seconds=120)
)
cache_stale = now_utc > (latest_cache_dt + timedelta(seconds=120))
valid_stale = now_utc > (latest_valid_dt + timedelta(seconds=120))
invalid_stale_condition = (
self.latest_invalid_segment_time > 0
and not in_grace_period
and now_utc > (latest_invalid_dt + timedelta(seconds=120))
and self.latest_valid_segment_time
<= self.latest_invalid_segment_time
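The watchdog hunks above remove a 90-second grace period from the 120-second segment staleness checks. A condensed sketch of the removed logic, assuming only the thresholds stated in the diff (the function is a simplification for illustration, not the watchdog's actual method):

```python
from datetime import datetime, timedelta, timezone
from typing import Optional

GRACE = timedelta(seconds=90)   # time allowed for ffmpeg to write its first segment
STALE = timedelta(seconds=120)  # no new segment for this long means stale


def segment_is_stale(
    latest_segment: datetime,
    record_enable_time: Optional[datetime],
    now: Optional[datetime] = None,
) -> bool:
    # Stale when nothing new has appeared for 120s, unless recording
    # was (re)enabled less than 90s ago, in which case checks are skipped.
    now = now or datetime.now(timezone.utc)
    in_grace = record_enable_time is not None and (now - record_enable_time) < GRACE
    return not in_grace and now > latest_segment + STALE
```

Dropping the grace period, as this diff does, means a camera whose recording was just enabled is subject to the 120-second check immediately.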

web/package-lock.json (generated, 300 lines)
View File

@@ -48,7 +48,7 @@
"idb-keyval": "^6.2.1",
"immer": "^10.1.1",
"konva": "^9.3.18",
"lodash": "^4.17.23",
"lodash": "^4.17.21",
"lucide-react": "^0.477.0",
"monaco-yaml": "^5.3.1",
"next-themes": "^0.3.0",
@@ -64,7 +64,7 @@
"react-i18next": "^15.2.0",
"react-icons": "^5.5.0",
"react-konva": "^18.2.10",
"react-router-dom": "^6.30.3",
"react-router-dom": "^6.26.0",
"react-swipeable": "^7.0.2",
"react-tracked": "^2.0.1",
"react-transition-group": "^4.4.5",
@@ -116,7 +116,7 @@
"prettier-plugin-tailwindcss": "^0.6.5",
"tailwindcss": "^3.4.9",
"typescript": "^5.8.2",
"vite": "^6.4.1",
"vite": "^6.2.0",
"vitest": "^3.0.7"
}
},
@@ -3293,9 +3293,9 @@
"license": "MIT"
},
"node_modules/@remix-run/router": {
"version": "1.23.2",
"resolved": "https://registry.npmjs.org/@remix-run/router/-/router-1.23.2.tgz",
"integrity": "sha512-Ic6m2U/rMjTkhERIa/0ZtXJP17QUi2CbWE7cqx4J58M8aA3QTfW+2UlQ4psvTX9IO1RfNVhK3pcpdjej7L+t2w==",
"version": "1.19.0",
"resolved": "https://registry.npmjs.org/@remix-run/router/-/router-1.19.0.tgz",
"integrity": "sha512-zDICCLKEwbVYTS6TjYaWtHXxkdoUvD/QXvyVZjGCsWz5vyH7aFeONlPffPdW+Y/t6KT0MgXb2Mfjun9YpWN1dA==",
"license": "MIT",
"engines": {
"node": ">=14.0.0"
@@ -4683,19 +4683,6 @@
"node": ">=8"
}
},
"node_modules/call-bind-apply-helpers": {
"version": "1.0.2",
"resolved": "https://registry.npmjs.org/call-bind-apply-helpers/-/call-bind-apply-helpers-1.0.2.tgz",
"integrity": "sha512-Sp1ablJ0ivDkSzjcaJdxEunN5/XvksFJ2sMBFfq6x0ryhQV/2b/KwFe21cMpmHtPOSij8K99/wSfoEuTObmuMQ==",
"license": "MIT",
"dependencies": {
"es-errors": "^1.3.0",
"function-bind": "^1.1.2"
},
"engines": {
"node": ">= 0.4"
}
},
"node_modules/callsites": {
"version": "3.1.0",
"resolved": "https://registry.npmjs.org/callsites/-/callsites-3.1.0.tgz",
@@ -5632,20 +5619,6 @@
"csstype": "^3.0.2"
}
},
"node_modules/dunder-proto": {
"version": "1.0.1",
"resolved": "https://registry.npmjs.org/dunder-proto/-/dunder-proto-1.0.1.tgz",
"integrity": "sha512-KIN/nDJBQRcXw0MLVhZE9iQHmG68qAVIBg9CqmUYjmQIhgij9U5MFvrqkUL5FbtyyzZuOeOt0zdeRe4UY7ct+A==",
"license": "MIT",
"dependencies": {
"call-bind-apply-helpers": "^1.0.1",
"es-errors": "^1.3.0",
"gopd": "^1.2.0"
},
"engines": {
"node": ">= 0.4"
}
},
"node_modules/eastasianwidth": {
"version": "0.2.0",
"resolved": "https://registry.npmjs.org/eastasianwidth/-/eastasianwidth-0.2.0.tgz",
@@ -5706,24 +5679,6 @@
"url": "https://github.com/fb55/entities?sponsor=1"
}
},
"node_modules/es-define-property": {
"version": "1.0.1",
"resolved": "https://registry.npmjs.org/es-define-property/-/es-define-property-1.0.1.tgz",
"integrity": "sha512-e3nRfgfUZ4rNGL232gUgX06QNyyez04KdjFrF+LTRoOXmrOgFKDg4BCdsjW8EnT69eqdYGmRpJwiPVYNrCaW3g==",
"license": "MIT",
"engines": {
"node": ">= 0.4"
}
},
"node_modules/es-errors": {
"version": "1.3.0",
"resolved": "https://registry.npmjs.org/es-errors/-/es-errors-1.3.0.tgz",
"integrity": "sha512-Zf5H2Kxt2xjTvbJvP2ZWLEICxA6j+hAmMzIlypy4xcBg1vKVnx89Wy0GbS+kf5cwCVFFzdCFh2XSCFNULS6csw==",
"license": "MIT",
"engines": {
"node": ">= 0.4"
}
},
"node_modules/es-module-lexer": {
"version": "1.6.0",
"resolved": "https://registry.npmjs.org/es-module-lexer/-/es-module-lexer-1.6.0.tgz",
@@ -5731,33 +5686,6 @@
"dev": true,
"license": "MIT"
},
"node_modules/es-object-atoms": {
"version": "1.1.1",
"resolved": "https://registry.npmjs.org/es-object-atoms/-/es-object-atoms-1.1.1.tgz",
"integrity": "sha512-FGgH2h8zKNim9ljj7dankFPcICIK9Cp5bm+c2gQSYePhpaG5+esrLODihIorn+Pe6FGJzWhXQotPv73jTaldXA==",
"license": "MIT",
"dependencies": {
"es-errors": "^1.3.0"
},
"engines": {
"node": ">= 0.4"
}
},
"node_modules/es-set-tostringtag": {
"version": "2.1.0",
"resolved": "https://registry.npmjs.org/es-set-tostringtag/-/es-set-tostringtag-2.1.0.tgz",
"integrity": "sha512-j6vWzfrGVfyXxge+O0x5sh6cvxAog0a/4Rdd2K36zCMV5eJ+/+tOAngRO8cODMNWbVRdVlmGZQL2YS3yR8bIUA==",
"license": "MIT",
"dependencies": {
"es-errors": "^1.3.0",
"get-intrinsic": "^1.2.6",
"has-tostringtag": "^1.0.2",
"hasown": "^2.0.2"
},
"engines": {
"node": ">= 0.4"
}
},
"node_modules/esbuild": {
"version": "0.25.0",
"resolved": "https://registry.npmjs.org/esbuild/-/esbuild-0.25.0.tgz",
@@ -6294,15 +6222,12 @@
}
},
"node_modules/form-data": {
"version": "4.0.4",
"resolved": "https://registry.npmjs.org/form-data/-/form-data-4.0.4.tgz",
"integrity": "sha512-KrGhL9Q4zjj0kiUt5OO4Mr/A/jlI2jDYs5eHBpYHPcBEVSiipAvn2Ko2HnPe20rmcuuvMHNdZFp+4IlGTMF0Ow==",
"license": "MIT",
"version": "4.0.0",
"resolved": "https://registry.npmjs.org/form-data/-/form-data-4.0.0.tgz",
"integrity": "sha512-ETEklSGi5t0QMZuiXoA/Q6vcnxcLQP5vdugSpuAyi6SVGi2clPPp+xgEhuMaHC+zGgn31Kd235W35f7Hykkaww==",
"dependencies": {
"asynckit": "^0.4.0",
"combined-stream": "^1.0.8",
"es-set-tostringtag": "^2.1.0",
"hasown": "^2.0.2",
"mime-types": "^2.1.12"
},
"engines": {
@@ -6382,30 +6307,6 @@
"node": "6.* || 8.* || >= 10.*"
}
},
"node_modules/get-intrinsic": {
"version": "1.3.0",
"resolved": "https://registry.npmjs.org/get-intrinsic/-/get-intrinsic-1.3.0.tgz",
"integrity": "sha512-9fSjSaos/fRIVIp+xSJlE6lfwhES7LNtKaCBIamHsjr2na1BiABJPo0mOjjz8GJDURarmCPGqaiVg5mfjb98CQ==",
"license": "MIT",
"dependencies": {
"call-bind-apply-helpers": "^1.0.2",
"es-define-property": "^1.0.1",
"es-errors": "^1.3.0",
"es-object-atoms": "^1.1.1",
"function-bind": "^1.1.2",
"get-proto": "^1.0.1",
"gopd": "^1.2.0",
"has-symbols": "^1.1.0",
"hasown": "^2.0.2",
"math-intrinsics": "^1.1.0"
},
"engines": {
"node": ">= 0.4"
},
"funding": {
"url": "https://github.com/sponsors/ljharb"
}
},
"node_modules/get-nonce": {
"version": "1.0.1",
"resolved": "https://registry.npmjs.org/get-nonce/-/get-nonce-1.0.1.tgz",
@@ -6415,19 +6316,6 @@
"node": ">=6"
}
},
"node_modules/get-proto": {
"version": "1.0.1",
"resolved": "https://registry.npmjs.org/get-proto/-/get-proto-1.0.1.tgz",
"integrity": "sha512-sTSfBjoXBp89JvIKIefqw7U2CCebsc74kiY6awiGogKtoSGbgjYE/G/+l9sF3MWFPNc9IcoOC4ODfKHfxFmp0g==",
"license": "MIT",
"dependencies": {
"dunder-proto": "^1.0.1",
"es-object-atoms": "^1.0.0"
},
"engines": {
"node": ">= 0.4"
}
},
"node_modules/glob": {
"version": "7.2.3",
"resolved": "https://registry.npmjs.org/glob/-/glob-7.2.3.tgz",
@@ -6496,18 +6384,6 @@
"url": "https://github.com/sponsors/sindresorhus"
}
},
"node_modules/gopd": {
"version": "1.2.0",
"resolved": "https://registry.npmjs.org/gopd/-/gopd-1.2.0.tgz",
"integrity": "sha512-ZUKRh6/kUFoAiTAtTYPZJ3hw9wNxx+BIBOijnlG9PnrJsCcSjs1wyyD6vJpaYtgnzDrKYRSqf3OO6Rfa93xsRg==",
"license": "MIT",
"engines": {
"node": ">= 0.4"
},
"funding": {
"url": "https://github.com/sponsors/ljharb"
}
},
"node_modules/graphemer": {
"version": "1.4.0",
"resolved": "https://registry.npmjs.org/graphemer/-/graphemer-1.4.0.tgz",
@@ -6537,38 +6413,10 @@
"node": ">=8"
}
},
"node_modules/has-symbols": {
"version": "1.1.0",
"resolved": "https://registry.npmjs.org/has-symbols/-/has-symbols-1.1.0.tgz",
"integrity": "sha512-1cDNdwJ2Jaohmb3sg4OmKaMBwuC48sYni5HUw2DvsC8LjGTLK9h+eb1X6RyuOHe4hT0ULCW68iomhjUoKUqlPQ==",
"license": "MIT",
"engines": {
"node": ">= 0.4"
},
"funding": {
"url": "https://github.com/sponsors/ljharb"
}
},
"node_modules/has-tostringtag": {
"version": "1.0.2",
"resolved": "https://registry.npmjs.org/has-tostringtag/-/has-tostringtag-1.0.2.tgz",
"integrity": "sha512-NqADB8VjPFLM2V0VvHUewwwsw0ZWBaIdgo+ieHtK3hasLz4qeCRjYcqfB6AQrBggRKppKF8L52/VqdVsO47Dlw==",
"license": "MIT",
"dependencies": {
"has-symbols": "^1.0.3"
},
"engines": {
"node": ">= 0.4"
},
"funding": {
"url": "https://github.com/sponsors/ljharb"
}
},
"node_modules/hasown": {
"version": "2.0.2",
"resolved": "https://registry.npmjs.org/hasown/-/hasown-2.0.2.tgz",
"integrity": "sha512-0hJU9SCPvmMzIBdZFqNPXWa6dqh7WdH0cII9y+CyS8rG3nL48Bclra9HmKhVVUHyPWNH5Y7xDwAB7bfgSjkUMQ==",
"license": "MIT",
"version": "2.0.0",
"resolved": "https://registry.npmjs.org/hasown/-/hasown-2.0.0.tgz",
"integrity": "sha512-vUptKVTpIJhcczKBbgnS+RtcuYMB8+oNzPK2/Hp3hanz8JmpATdmmgLgSaadVREkDm+e2giHwY3ZRkyjSIDDFA==",
"dependencies": {
"function-bind": "^1.1.2"
},
@@ -7210,10 +7058,9 @@
}
},
"node_modules/lodash": {
"version": "4.17.23",
"resolved": "https://registry.npmjs.org/lodash/-/lodash-4.17.23.tgz",
"integrity": "sha512-LgVTMpQtIopCi79SJeDiP0TfWi5CNEc/L/aRdTh3yIvmZXTnheWpKjSZhnvMl8iXbC1tFg9gdHHDMLoV7CnG+w==",
"license": "MIT"
"version": "4.17.21",
"resolved": "https://registry.npmjs.org/lodash/-/lodash-4.17.21.tgz",
"integrity": "sha512-v2kDEe57lecTulaDIuNTPy3Ry4gLGJ6Z1O3vE1krgXZNrsQ+LFTGHVxVjcXPs17LhbZVGedAJv8XZ1tvj5FvSg=="
},
"node_modules/lodash.merge": {
"version": "4.6.2",
@@ -7293,15 +7140,6 @@
"url": "https://github.com/sponsors/sindresorhus"
}
},
"node_modules/math-intrinsics": {
"version": "1.1.0",
"resolved": "https://registry.npmjs.org/math-intrinsics/-/math-intrinsics-1.1.0.tgz",
"integrity": "sha512-/IXtbwEk5HTPyEwyKX6hGkYXxM9nbj64B+ilVJnC/R6B0pH5G4V3b0pVbL7DBj4tkhBAppbQUlf6F6Xl9LHu1g==",
"license": "MIT",
"engines": {
"node": ">= 0.4"
}
},
"node_modules/merge-stream": {
"version": "2.0.0",
"resolved": "https://registry.npmjs.org/merge-stream/-/merge-stream-2.0.0.tgz",
@@ -8618,12 +8456,12 @@
}
},
"node_modules/react-router": {
"version": "6.30.3",
"resolved": "https://registry.npmjs.org/react-router/-/react-router-6.30.3.tgz",
"integrity": "sha512-XRnlbKMTmktBkjCLE8/XcZFlnHvr2Ltdr1eJX4idL55/9BbORzyZEaIkBFDhFGCEWBBItsVrDxwx3gnisMitdw==",
"version": "6.26.0",
"resolved": "https://registry.npmjs.org/react-router/-/react-router-6.26.0.tgz",
"integrity": "sha512-wVQq0/iFYd3iZ9H2l3N3k4PL8EEHcb0XlU2Na8nEwmiXgIUElEH6gaJDtUQxJ+JFzmIXaQjfdpcGWaM6IoQGxg==",
"license": "MIT",
"dependencies": {
"@remix-run/router": "1.23.2"
"@remix-run/router": "1.19.0"
},
"engines": {
"node": ">=14.0.0"
@@ -8633,13 +8471,13 @@
}
},
"node_modules/react-router-dom": {
"version": "6.30.3",
"resolved": "https://registry.npmjs.org/react-router-dom/-/react-router-dom-6.30.3.tgz",
"integrity": "sha512-pxPcv1AczD4vso7G4Z3TKcvlxK7g7TNt3/FNGMhfqyntocvYKj+GCatfigGDjbLozC4baguJ0ReCigoDJXb0ag==",
"version": "6.26.0",
"resolved": "https://registry.npmjs.org/react-router-dom/-/react-router-dom-6.26.0.tgz",
"integrity": "sha512-RRGUIiDtLrkX3uYcFiCIxKFWMcWQGMojpYZfcstc63A1+sSnVgILGIm9gNUA6na3Fm1QuPGSBQH2EMbAZOnMsQ==",
"license": "MIT",
"dependencies": {
"@remix-run/router": "1.23.2",
"react-router": "6.30.3"
"@remix-run/router": "1.19.0",
"react-router": "6.26.0"
},
"engines": {
"node": ">=14.0.0"
@@ -9664,54 +9502,6 @@
"dev": true,
"license": "MIT"
},
"node_modules/tinyglobby": {
"version": "0.2.15",
"resolved": "https://registry.npmjs.org/tinyglobby/-/tinyglobby-0.2.15.tgz",
"integrity": "sha512-j2Zq4NyQYG5XMST4cbs02Ak8iJUdxRM0XI5QyxXuZOzKOINmWurp3smXu3y5wDcJrptwpSjgXHzIQxR0omXljQ==",
"dev": true,
"license": "MIT",
"dependencies": {
"fdir": "^6.5.0",
"picomatch": "^4.0.3"
},
"engines": {
"node": ">=12.0.0"
},
"funding": {
"url": "https://github.com/sponsors/SuperchupuDev"
}
},
"node_modules/tinyglobby/node_modules/fdir": {
"version": "6.5.0",
"resolved": "https://registry.npmjs.org/fdir/-/fdir-6.5.0.tgz",
"integrity": "sha512-tIbYtZbucOs0BRGqPJkshJUYdL+SDH7dVM8gjy+ERp3WAUjLEFJE+02kanyHtwjWOnwrKYBiwAmM0p4kLJAnXg==",
"dev": true,
"license": "MIT",
"engines": {
"node": ">=12.0.0"
},
"peerDependencies": {
"picomatch": "^3 || ^4"
},
"peerDependenciesMeta": {
"picomatch": {
"optional": true
}
}
},
"node_modules/tinyglobby/node_modules/picomatch": {
"version": "4.0.3",
"resolved": "https://registry.npmjs.org/picomatch/-/picomatch-4.0.3.tgz",
"integrity": "sha512-5gTmgEY/sqK6gFXLIsQNH19lWb4ebPDLA4SdLP7dsWkIXHWlG66oPuVvXSGFPppYZz8ZDZq0dYYrbHfBCVUb1Q==",
"dev": true,
"license": "MIT",
"engines": {
"node": ">=12"
},
"funding": {
"url": "https://github.com/sponsors/jonschlinkert"
}
},
"node_modules/tinypool": {
"version": "1.0.2",
"resolved": "https://registry.npmjs.org/tinypool/-/tinypool-1.0.2.tgz",
@@ -10078,18 +9868,15 @@
}
},
"node_modules/vite": {
"version": "6.4.1",
"resolved": "https://registry.npmjs.org/vite/-/vite-6.4.1.tgz",
"integrity": "sha512-+Oxm7q9hDoLMyJOYfUYBuHQo+dkAloi33apOPP56pzj+vsdJDzr+j1NISE5pyaAuKL4A3UD34qd0lx5+kfKp2g==",
"version": "6.2.0",
"resolved": "https://registry.npmjs.org/vite/-/vite-6.2.0.tgz",
"integrity": "sha512-7dPxoo+WsT/64rDcwoOjk76XHj+TqNTIvHKcuMQ1k4/SeHDaQt5GFAeLYzrimZrMpn/O6DtdI03WUjdxuPM0oQ==",
"dev": true,
"license": "MIT",
"dependencies": {
"esbuild": "^0.25.0",
"fdir": "^6.4.4",
"picomatch": "^4.0.2",
"postcss": "^8.5.3",
"rollup": "^4.34.9",
"tinyglobby": "^0.2.13"
"rollup": "^4.30.1"
},
"bin": {
"vite": "bin/vite.js"
@@ -10183,37 +9970,6 @@
"monaco-editor": ">=0.33.0"
}
},
"node_modules/vite/node_modules/fdir": {
"version": "6.5.0",
"resolved": "https://registry.npmjs.org/fdir/-/fdir-6.5.0.tgz",
"integrity": "sha512-tIbYtZbucOs0BRGqPJkshJUYdL+SDH7dVM8gjy+ERp3WAUjLEFJE+02kanyHtwjWOnwrKYBiwAmM0p4kLJAnXg==",
"dev": true,
"license": "MIT",
"engines": {
"node": ">=12.0.0"
},
"peerDependencies": {
"picomatch": "^3 || ^4"
},
"peerDependenciesMeta": {
"picomatch": {
"optional": true
}
}
},
"node_modules/vite/node_modules/picomatch": {
"version": "4.0.3",
"resolved": "https://registry.npmjs.org/picomatch/-/picomatch-4.0.3.tgz",
"integrity": "sha512-5gTmgEY/sqK6gFXLIsQNH19lWb4ebPDLA4SdLP7dsWkIXHWlG66oPuVvXSGFPppYZz8ZDZq0dYYrbHfBCVUb1Q==",
"dev": true,
"license": "MIT",
"engines": {
"node": ">=12"
},
"funding": {
"url": "https://github.com/sponsors/jonschlinkert"
}
},
"node_modules/vitest": {
"version": "3.0.7",
"resolved": "https://registry.npmjs.org/vitest/-/vitest-3.0.7.tgz",

View File

@@ -54,7 +54,7 @@
"idb-keyval": "^6.2.1",
"immer": "^10.1.1",
"konva": "^9.3.18",
"lodash": "^4.17.23",
"lodash": "^4.17.21",
"lucide-react": "^0.477.0",
"monaco-yaml": "^5.3.1",
"next-themes": "^0.3.0",
@@ -70,7 +70,7 @@
"react-i18next": "^15.2.0",
"react-icons": "^5.5.0",
"react-konva": "^18.2.10",
"react-router-dom": "^6.30.3",
"react-router-dom": "^6.26.0",
"react-swipeable": "^7.0.2",
"react-tracked": "^2.0.1",
"react-transition-group": "^4.4.5",
@@ -122,7 +122,7 @@
"prettier-plugin-tailwindcss": "^0.6.5",
"tailwindcss": "^3.4.9",
"typescript": "^5.8.2",
"vite": "^6.4.1",
"vite": "^6.2.0",
"vitest": "^3.0.7"
}
}

View File

@@ -1,6 +1 @@
{
"train": {
"titleShort": "الأخيرة"
},
"documentTitle": "تصنيف النماذج - Frigate"
}
{}

View File

@@ -1,6 +1,6 @@
{
"description": {
"addFace": "أضف مجموعة جديدة إلى مكتبة الوجوه عن طريق رفع صورتك الأولى.",
"addFace": "قم بإضافة مجموعة جديدة لمكتبة الأوجه.",
"invalidName": "أسم غير صالح. يجب أن يشمل الأسم فقط على الحروف، الأرقام، المسافات، الفاصلة العليا، الشرطة التحتية، والشرطة الواصلة.",
"placeholder": "أدخل أسم لهذه المجموعة"
},
@@ -21,88 +21,6 @@
"collections": "المجموعات",
"createFaceLibrary": {
"title": "إنشاء المجاميع",
"desc": "إنشاء مجموعة جديدة",
"new": "إضافة وجه جديد",
"nextSteps": "لبناء أساس قوي:<li>استخدم علامة التبويب \"التعرّفات الأخيرة\" لاختيار الصور والتدريب عليها لكل شخص تم اكتشافه.</li> <li>ركّز على الصور الأمامية المباشرة للحصول على أفضل النتائج؛ وتجنّب صور التدريب التي تُظهر الوجوه بزاوية.</li>"
},
"steps": {
"faceName": "ادخل اسم للوجه",
"uploadFace": "ارفع صورة للوجه",
"nextSteps": "الخطوة التالية",
"description": {
"uploadFace": "قم برفع صورة لـ {{name}} تُظهر وجهه من زاوية أمامية مباشرة. لا يلزم أن تكون الصورة مقتصرة على الوجه فقط."
}
},
"train": {
"title": "التعرّفات الأخيرة",
"titleShort": "الأخيرة",
"aria": "اختر التعرّفات الأخيرة",
"empty": "لا توجد أي محاولات حديثة للتعرّف على الوجوه"
},
"deleteFaceLibrary": {
"title": "احذف الاسم",
"desc": "هل أنت متأكد أنك تريد حذف المجموعة {{name}}؟ سيؤدي هذا إلى حذف جميع الوجوه المرتبطة بها نهائيًا."
},
"deleteFaceAttempts": {
"title": "احذف الوجوه",
"desc_zero": "وجه",
"desc_one": "وجه",
"desc_two": "وجهان",
"desc_few": "وجوه",
"desc_many": "وجهًا",
"desc_other": "وجه"
},
"renameFace": {
"title": "اعادة تسمية الوجه",
"desc": "ادخل اسم جديد لـ{{name}}"
},
"button": {
"deleteFaceAttempts": "احذف الوجوه",
"addFace": "اظف وجهًا",
"renameFace": "اعد تسمية وجه",
"deleteFace": "احذف وجهًا",
"uploadImage": "ارفع صورة",
"reprocessFace": "إعادة معالجة الوجه"
},
"imageEntry": {
"validation": {
"selectImage": "يرجى اختيار ملف صورة."
},
"dropActive": "اسحب الصورة إلى هنا…",
"dropInstructions": "اسحب وأفلت أو الصق صورة هنا، أو انقر للاختيار",
"maxSize": "الحجم الأقصى: {{size}} ميغابايت"
},
"nofaces": "لا توجد وجوه متاحة",
"trainFaceAs": "درّب الوجه كـ:",
"trainFace": "درّب الوجه",
"toast": {
"success": {
"uploadedImage": "تم رفع الصورة بنجاح.",
"addFaceLibrary": "تمت إضافة {{name}} بنجاح إلى مكتبة الوجوه!",
"deletedFace_zero": "وجه",
"deletedFace_one": "وجه",
"deletedFace_two": "وجهين",
"deletedFace_few": "وجوه",
"deletedFace_many": "وجهًا",
"deletedFace_other": "وجه",
"deletedName_zero": "وجه",
"deletedName_one": "وجه",
"deletedName_two": "وجهين",
"deletedName_few": "وجوه",
"deletedName_many": "وجهًا",
"deletedName_other": "وجه",
"renamedFace": "تمت إعادة تسمية الوجه بنجاح إلى {{name}}",
"trainedFace": "تم تدريب الوجه بنجاح.",
"updatedFaceScore": "تم تحديث درجة الوجه بنجاح إلى {{name}} ({{score}})."
},
"error": {
"uploadingImageFailed": "فشل في رفع الصورة: {{errorMessage}}",
"addFaceLibraryFailed": "فشل في تعيين اسم الوجه: {{errorMessage}}",
"deleteFaceFailed": "فشل الحذف: {{errorMessage}}",
"deleteNameFailed": "فشل في حذف الاسم: {{errorMessage}}",
"renameFaceFailed": "فشل في إعادة تسمية الوجه: {{errorMessage}}",
"trainFailed": "فشل التدريب: {{errorMessage}}",
"updateFaceScoreFailed": "فشل في تحديث درجة الوجه: {{errorMessage}}"
}
"desc": "إنشاء مجموعة جديدة"
}
}

View File

@@ -2,9 +2,9 @@
"babbling": "Бърборене",
"whispering": "Шепнене",
"laughter": "Смях",
"crying": "Плач",
"crying": "Плача",
"sigh": "Въздишка",
"singing": еене",
"singing": одписвам",
"choir": "Хор",
"yodeling": "Йоделинг",
"mantra": "Мантра",
@@ -264,6 +264,5 @@
"pant": "Здъхване",
"stomach_rumble": "Къркорене на стомах",
"heartbeat": "Сърцебиене",
"scream": "Вик",
"snicker": "Хихикане"
"scream": "Вик"
}

View File

@@ -1,16 +1,6 @@
{
"form": {
"user": "Потребителско име",
"password": "Парола",
"login": "Вход",
"firstTimeLogin": "Опитвате да влезете за първи път? Данните за вход са разпечатани в логовете на Frigate.",
"errors": {
"usernameRequired": "Потребителското име е задължително",
"passwordRequired": "Паролата е задължителна",
"rateLimit": "Надхвърлен брой опити. Моля Опитайте по-късно.",
"loginFailed": "Неуспешен вход",
"unknownError": "Неизвестна грешка. Поля проверете логовете.",
"webUnknownError": "Неизвестна грешка. Поля проверете изхода в конзолата."
}
"password": "Парола"
}
}

View File

@@ -7,7 +7,7 @@
"label": "Изтрий група за камери",
"confirm": {
"title": "Потвърди изтриването",
"desc": "Сигурни ли сте, че искате да изтриете група <em>{{name}}</em>?"
"desc": "Сигурни ли сте, че искате да изтриете група </em>{{name}}</em>?"
}
},
"name": {

View File

@@ -11,9 +11,6 @@
},
"restart": {
"title": "Сигурен ли сте, че искате да рестартирате Frigate?",
"button": "Рестартирай",
"restarting": {
"title": "Frigare се рестартира"
}
"button": "Рестартирай"
}
}

View File

@@ -1,6 +1,3 @@
{
"documentTitle": "Модели за класификация - Frigate",
"description": {
"invalidName": "Невалидно име. Имената могат да съдържат единствено: букви, числа, празни места, долни черти и тирета."
}
"documentTitle": "Модели за класификация"
}

View File

@@ -1,18 +1,4 @@
{
"documentTitle": "Настройки на конфигурацията - Frigate",
"configEditor": "Конфигуратор",
"safeConfigEditor": "Конфигуратор (Safe Mode)",
"safeModeDescription": "Frigate е в режим \"Safe Mode\" тъй като конфигурацията не минава проверките за валидност.",
"copyConfig": "Копирай Конфигурацията",
"saveAndRestart": "Запази и Рестартирай",
"saveOnly": "Запази",
"confirm": "Изход без запис?",
"toast": {
"success": {
"copyToClipboard": "Конфигурацията е копирана."
},
"error": {
"savingError": "Грешка при запис на конфигурацията"
}
}
"documentTitle": "Настройки на конфигурацията - Фригейт",
"configEditor": "Настройки на конфигурацията"
}

View File

@@ -11,8 +11,5 @@
},
"allCameras": "Всички камери",
"alerts": "Известия",
"detections": "Засичания",
"motion": {
"label": "Движение"
}
"detections": "Засичания"
}

View File

@@ -10,5 +10,5 @@
"trackedObjectsCount_one": "{{count}} проследен обект ",
"trackedObjectsCount_other": "{{count}} проследени обекта ",
"documentTitle": "Разгледай - Фригейт",
"generativeAI": "Генеративен Изкъствен Интелект"
"generativeAI": "Генериращ Изкъствен Интелект"
}

View File

@@ -1,23 +1,4 @@
{
"documentTitle": "Експорт - Frigate",
"search": "Търси",
"noExports": "Няма намерени експорти",
"deleteExport": "Изтрий експорт",
"deleteExport.desc": "Сигурни ли сте, че искате да изтриете {{exportName}}?",
"editExport": {
"title": "Преименувай експорт",
"desc": "Въведете ново име за този експорт.",
"saveExport": "Запази експорт"
},
"tooltip": {
"shareExport": "Сподели експорт",
"downloadVideo": "Свали видео",
"editName": "Редактирай име",
"deleteExport": "Изтрий експорт"
},
"toast": {
"error": {
"renameExportFailed": "Неуспешно преименуване на експорт: {{errorMessage}}"
}
}
"search": "Търси"
}

View File

@@ -13,7 +13,6 @@
},
"description": {
"addFace": "Добавете нова колекция във библиотеката за лица при качването на първата ви снимка.",
"placeholder": "Напишете име за тази колекция",
"invalidName": "Невалидно име. Имената могат да съдържат единствено: букви, числа, празни места, долни черти и тирета."
"placeholder": "Напишете име за тази колекция"
}
}

View File

@@ -3,6 +3,5 @@
"save": "Запазване на търсенето"
},
"search": "Търси",
"savedSearches": "Запазени търсения",
"searchFor": "Търсене за {{inputValue}}"
"savedSearches": "Запазени търсения"
}

View File

@@ -4,7 +4,6 @@
},
"documentTitle": {
"cameras": "Статистики за Камери - Фригейт",
"storage": "Статистика за паметта - Фригейт",
"general": "Обща Статистика - Frigate"
"storage": "Статистика за паметта - Фригейт"
}
}

View File

@@ -60,7 +60,7 @@
"cough": "Tos",
"throat_clearing": "Carraspeig",
"sneeze": "Esternut",
"sniff": "olorar",
"sniff": "Fregit nasal",
"run": "Córrer",
"shuffle": "Passos arrossegats",
"footsteps": "Passos",
@@ -97,7 +97,7 @@
"moo": "Mugir",
"cowbell": "Esquellot",
"pig": "Porc",
"oink": "Oinc",
"oink": "Oink",
"bleat": "Brama",
"fowl": "Au de corral",
"chicken": "Pollastre",
@@ -439,37 +439,37 @@
"inside": "Interior",
"pulse": "Pols",
"outside": "Fora",
"chirp_tone": "Gisclada",
"chirp_tone": "To de grinyol",
"harmonic": "Harmònic",
"sine_wave": "Ona sinus",
"crunch": "Cruixit",
"hum": "Zunzum",
"plop": "Xip-xap",
"hum": "Taral·lejar",
"plop": "Chof",
"clickety_clack": "Clic-Clac",
"clicking": "Clicant",
"clatter": "Rebombori",
"clatter": "Soroll",
"chird": "Piular",
"liquid": "Líquid",
"splash": "Esquitx",
"slosh": "Xipolleig",
"boing": "Rebot",
"zing": "Zunzum agut",
"rumble": "Retombori",
"sizzle": "Crepitació",
"splash": "Xof",
"slosh": "Xip-xap",
"boing": "Boing",
"zing": "Fiu",
"rumble": "Bum-bum",
"sizzle": "Xiu-xiu",
"whir": "Brrrm",
"rustle": "Frec",
"creak": "Rascada",
"clang": "Soroll metàl·lic",
"rustle": "Fru-Fru",
"creak": "Clic-clac",
"clang": "Clang",
"squish": "Xaf",
"drip": "Goteig",
"pour": "Abocament",
"trickle": "Raig fi",
"gush": "Raig fort",
"fill": "Ompliment",
"ding": "Ting",
"ping": "Ressò",
"beep": "Pitit",
"squeal": "Chirrit",
"drip": "Plic-plic",
"pour": "Glug-glug",
"trickle": "Xiulet",
"gush": "Xuuuix",
"fill": "Glug-glug",
"ding": "Ding",
"ping": "Ping",
"beep": "Bip",
"squeal": "Xiscle",
"crumpling": "Arrugant-se",
"rub": "Fregar",
"scrape": "Raspar",
@@ -480,13 +480,13 @@
"smash": "Aixafar",
"whack": "Cop",
"slap": "Bufetada",
"bang": "Cop fort",
"bang": "Bang",
"basketball_bounce": "Rebot de bàsquet",
"chorus_effect": "Efecte de cor",
"effects_unit": "Unitat d'Efectes",
"electronic_tuner": "Afinador electrònic",
"thunk": "Bruix",
"thump": "Soroll sord",
"thump": "Cop fort",
"whoosh": "Xiuxiueig",
"arrow": "Fletxa",
"sonar": "Sonar",

View File

@@ -48,8 +48,7 @@
"bg": "Български (Búlgar)",
"gl": "Galego (Gallec)",
"id": "Bahasa Indonesia (Indonesi)",
"ur": "اردو (Urdú)",
"hr": "Hrvatski (croat)"
"ur": "اردو (Urdú)"
},
"system": "Sistema",
"systemMetrics": "Mètriques del sistema",
@@ -202,8 +201,7 @@
},
"inProgress": "En curs",
"invalidStartTime": "Hora d'inici no vàlida",
"invalidEndTime": "Hora de finalització no vàlida",
"never": "Mai"
"invalidEndTime": "Hora de finalització no vàlida"
},
"unit": {
"speed": {
@@ -229,8 +227,7 @@
"show": "Mostra {{item}}",
"ID": "ID",
"none": "Cap",
"all": "Tots",
"other": "Altres"
"all": "Tots"
},
"button": {
"apply": "Aplicar",

View File

@@ -6,8 +6,7 @@
"title": "Frigate s'està reiniciant",
"content": "Aquesta pàgina es tornarà a carregar d'aquí a {{countdown}} segons.",
"button": "Forçar la recàrrega ara"
},
"description": "Això aturarà breument Frigate mentre es reinicia."
}
},
"explore": {
"plus": {

View File

@@ -10,11 +10,7 @@
"empty": {
"alert": "Hi ha cap alerta per revisar",
"detection": "Hi ha cap detecció per revisar",
"motion": "No s'haan trobat dades de moviment",
"recordingsDisabled": {
"title": "S'han d'activar les gravacions",
"description": "Només es poden revisar temes quan s'han activat les gravacions de la càmera."
}
"motion": "No s'haan trobat dades de moviment"
},
"timeline": "Línia de temps",
"timeline.aria": "Seleccionar línia de temps",

View File

@@ -169,10 +169,7 @@
"title": "Edita els atributs",
"desc": "Seleccioneu els atributs de classificació per a aquesta {{label}}"
},
"attributes": "Atributs de classificació",
"title": {
"label": "Títol"
}
"attributes": "Atributs de classificació"
},
"searchResult": {
"tooltip": "S'ha identificat {{type}} amb una confiança del {{confidence}}%",

View File

@@ -19,8 +19,7 @@
"description": {
"addFace": "Afegiu una col·lecció nova a la biblioteca de cares pujant la vostra primera imatge.",
"placeholder": "Introduïu un nom per a aquesta col·lecció",
"invalidName": "Nom no vàlid. Els noms només poden incloure lletres, números, espais, apòstrofs, guions baixos i guions.",
"nameCannotContainHash": "El nom no pot contenir #."
"invalidName": "Nom no vàlid. Els noms només poden incloure lletres, números, espais, apòstrofs, guions baixos i guions."
},
"documentTitle": "Biblioteca de rostres - Frigate",
"uploadFaceImage": {

View File

@@ -184,16 +184,6 @@
"restricted": {
"title": "No hi ha càmeres disponibles",
"description": "No teniu permís per veure cap càmera en aquest grup."
},
"default": {
"title": "No s'ha configurat cap càmera",
"description": "Comenceu connectant una càmera a Frigate.",
"buttonText": "Afegeix una càmera"
},
"group": {
"title": "No hi ha càmeres al grup",
"description": "Aquest grup de càmeres no té càmeres assignades o habilitades.",
"buttonText": "Gestiona els grups"
}
}
}

View File

@@ -114,11 +114,6 @@
},
"error": {
"mustBeFinished": "El dibuix del polígon s'ha d'acabar abans de desar."
},
"type": {
"zone": "zona",
"motion_mask": "màscara de moviment",
"object_mask": "màscara d'objecte"
}
},
"zoneName": {
@@ -537,7 +532,7 @@
"hide": "Amaga contrasenya",
"requirements": {
"title": "Requisits contrasenya:",
"length": "Com a mínim 12 carácters",
"length": "Com a mínim 8 carácters",
"uppercase": "Com a mínim una majúscula",
"digit": "Com a mínim un digit",
"special": "Com a mínim un carácter especial (!@#$%^&*(),.?\":{}|<>)"
@@ -959,7 +954,7 @@
"useDigestAuthDescription": "Usa l'autenticació de resum HTTP per a ONVIF. Algunes càmeres poden requerir un nom d'usuari/contrasenya ONVIF dedicat en lloc de l'usuari administrador estàndard."
},
"save": {
"failure": "S'ha produït un error en desar {{cameraName}}.",
"failure": "SS'ha produït un error en desar {{cameraName}}.",
"success": "S'ha desat correctament la càmera nova {{cameraName}}."
},
"testResultLabels": {
@@ -1216,11 +1211,11 @@
"cameraReview": {
"object_descriptions": {
"title": "Descripcions d'objectes generadors d'IA",
"desc": "Activa/desactiva temporalment les descripcions d'objectes generatius d'IA per a aquesta càmera fins que es reiniciï Frigate. Quan està desactivat, les descripcions generades per IA no se sol·licitaran per als objectes rastrejats en aquesta càmera."
"desc": "Activa/desactiva temporalment les descripcions d'objectes generatius d'IA per a aquesta càmera. Quan està desactivat, les descripcions generades per IA no se sol·licitaran per als objectes rastrejats en aquesta càmera."
},
"review_descriptions": {
"title": "Descripcions de la IA generativa",
"desc": "Activa/desactiva temporalment les descripcions de la IA Generativa per a aquesta càmera fins que es reiniciï Frigate. Quan està desactivat, les descripcions generades per IA no se sol·licitaran per als elements de revisió d'aquesta càmera."
"desc": "Activa/desactiva temporalment les descripcions de revisió de la IA generativa per a aquesta càmera. Quan està desactivat, les descripcions generades per IA no se sol·licitaran per als elements de revisió d'aquesta càmera."
},
"review": {
"title": "Revisió",

View File

@@ -86,14 +86,7 @@
"otherProcesses": {
"title": "Altres processos",
"processMemoryUsage": "Ús de memòria de procés",
"processCpuUsage": "Ús de la CPU del procés",
"series": {
"recording": "gravant",
"review_segment": "segment de revisió",
"embeddings": "incrustacions",
"audio_detector": "detector d'àudio",
"go2rtc": "go2rtc"
}
"processCpuUsage": "Ús de la CPU del procés"
}
},
"storage": {

View File

@@ -78,11 +78,7 @@
"formattedTimestampFilename": {
"24hour": "dd-MM-yy-HH-mm-ss",
"12hour": "dd-MM.yy-h-mm-ss-a"
},
"never": "Nikdy",
"inProgress": "Zpracovává se",
"invalidStartTime": "Neplatný čas začátku",
"invalidEndTime": "Neplatný čas konce"
}
},
"button": {
"twoWayTalk": "Obousměrná komunikace",
@@ -119,17 +115,10 @@
"unselect": "Zrušit výběr",
"deleteNow": "Smazat hned",
"next": "Další",
"export": "Exportovat",
"continue": "Pokračovat"
"export": "Exportovat"
},
"label": {
"back": "Jdi zpět",
"hide": "Skrýt {{item}}",
"show": "Zobrazit {{item}}",
"ID": "ID",
"none": "Nic",
"all": "Vše",
"other": "Ostatní"
"back": "Jdi zpět"
},
"unit": {
"speed": {
@@ -139,14 +128,6 @@
"length": {
"feet": "stopa",
"meters": "metry"
},
"data": {
"kbps": "kB/s",
"mbps": "MB/s",
"gbps": "GB/s",
"kbph": "kB/hodinu",
"mbph": "MB/hodinu",
"gbph": "GB/hodinu"
}
},
"selectItem": "Vybrat {{item}}",
@@ -249,8 +230,7 @@
"uiPlayground": "UI hřiště",
"faceLibrary": "Knihovna Obličejů",
"configurationEditor": "Editor Konfigurace",
"withSystem": "Systém",
"classification": "Klasifikace"
"withSystem": "Systém"
},
"pagination": {
"previous": {
@@ -290,17 +270,5 @@
"viewer": "Divák",
"desc": "Správci mají plný přístup ke všem funkcím v uživatelském rozhraní Frigate. Diváci jsou omezeni na sledování kamer, položek přehledu a historických záznamů v UI."
},
"readTheDocumentation": "Přečtěte si dokumentaci",
"list": {
"two": "{{0}} a {{1}}",
"many": "{{items}}, a {{last}}",
"separatorWithSpace": ", "
},
"field": {
"optional": "Volitelné",
"internalID": "Interní ID Frigate používá v konfiguraci a databázi"
},
"information": {
"pixels": "{{area}}px"
}
"readTheDocumentation": "Přečtěte si dokumentaci"
}

View File

@@ -44,8 +44,7 @@
"button": {
"markAsReviewed": "Označit jako zkontrolované",
"deleteNow": "Smazat hned",
"export": "Exportovat",
"markAsUnreviewed": "Označit jako nezkontrolované"
"export": "Exportovat"
}
},
"export": {
@@ -68,13 +67,12 @@
"export": "Exportovat",
"selectOrExport": "Vybrat pro Export",
"toast": {
"success": "Export úspěšně spuštěn. Soubor najdete na stránce exportů.",
"success": "Export úspěšně spuštěn. Soubor najdete v adresáři /exports.",
"error": {
"failed": "Chyba spuštění exportu: {{error}}",
"endTimeMustAfterStartTime": "Čas konce musí být po čase začátku",
"noVaildTimeSelected": "Není vybráno žádné platné časové období"
},
"view": "Zobrazení"
}
},
"fromTimeline": {
"saveExport": "Uložit export",
@@ -118,7 +116,6 @@
"search": {
"placeholder": "Hledej pomocí štítku nebo podštítku..."
},
"noImages": "Nebyly nalezeny žádné náhledy pro tuto kameru",
"unknownLabel": "Uložený obrázek Spouštěče"
"noImages": "Nebyly nalezeny žádné náhledy pro tuto kameru"
}
}

View File

@@ -132,9 +132,5 @@
},
"count_one": "Třída {{count}}",
"count_other": "Třídy {{count}}"
},
"attributes": {
"label": "Atributy Klasifikace",
"all": "Všechny Atributy"
}
}

View File

@@ -38,65 +38,10 @@
"deleteImageFailed": "Chyba při mazání: {{errorMessage}}",
"deleteCategoryFailed": "Chyba při mazání třídy: {{errorMessage}}",
"deleteModelFailed": "Chyba při mazání modelu: {{errorMessage}}",
"categorizeFailed": "Chyba při mazání obrázku: {{errorMessage}}",
"trainingFailed": "Trénování modelu selhalo. Zkontrolujte logy Frigate pro zjištění detailů.",
"trainingFailedToStart": "Chyba spuštění trénování modelu: {{errorMessage}}",
"updateModelFailed": "Chyba aktualizace modelu: {{errorMessage}}",
"renameCategoryFailed": "Chyba přejmenování třídy: {{errorMessage}}"
"categorizeFailed": "Chyba při mazání obrázku: {{errorMessage}}"
}
},
"train": {
"titleShort": "Nedávný",
"title": "Předchozí klasifikace",
"aria": "Vybrat předchozí Klasifikace"
},
"deleteModel": {
"desc_one": "Jste si jistí, že chcete odstranit {{count}} model? Tím trvale odstraníte všechny související data včetně obrázků a tréninkových dat. Tato akce je nevratná.",
"desc_few": "Jste si jistí, že chcete odstranit {{count}} modely? Tím trvale odstraníte všechny související data včetně obrázků a tréninkových dat. Tato akce je nevratná.",
"desc_other": "Jste si jistí, že chcete odstranit {{count}} modelů? Tím trvale odstraníte všechny související data včetně obrázků a tréninkových dat. Tato akce je nevratná."
},
"deleteDatasetImages": {
"desc_one": "Opravdu chcete odstranit {{count}} obrázek z {{dataset}}? Tato akce je nevratná a vyžaduje přetrénování modelu.",
"desc_few": "Opravdu chcete odstranit {{count}} obrázky z {{dataset}}? Tato akce je nevratná a vyžaduje přetrénování modelu.",
"desc_other": "Opravdu chcete odstranit {{count}} obrázků z {{dataset}}? Tato akce je nevratná a vyžaduje přetrénování modelu.",
"title": "Smazat obrázky datové sady"
},
"deleteTrainImages": {
"desc_one": "Opravdu chcete odstranit {{count}} obrázek? Tato akce je nevratná.",
"desc_few": "Opravdu chcete odstranit {{count}} obrázky? Tato akce je nevratná.",
"desc_other": "Opravdu chcete odstranit {{count}} obrázků? Tato akce je nevratná.",
"title": "Odstranit tréninkové obrázky"
},
"wizard": {
"step3": {
"allImagesRequired_one": "Prosím, zařaďte všechny obrázky. Zbývá {{count}} obrázek.",
"allImagesRequired_few": "Prosím, zařaďte všechny obrázky. Zbývají {{count}} obrázky.",
"allImagesRequired_other": "Prosím, zařaďte všechny obrázky. Zbývá {{count}} obrázků.",
"trainingStarted": "Trénování úspěšně spuštěno",
"generateSuccess": "Vzorové obrázky byly úspěšně vytvořeny"
}
},
"deleteCategory": {
"title": "Smazat Třídu",
"desc": "Opravdu chcete odstranit třídu {{name}}? Tím se na trvalo odstraní všechny související obrázky a bude potřeba přetrénovat model.",
"minClassesTitle": "Nemůžete smazat třídu",
"minClassesDesc": "Klasifikační model musí mít alespoň 2 třídy. Než tuto třídu odstraníte přidejte další třídu."
},
"edit": {
"descriptionObject": "Upravte typ objektu a typ klasifikace pro tento model klasifikace.",
"stateClassesInfo": "Poznámka: Změna tříd stavů vyžaduje přetrénování modelu s aktualizovanými třídami."
},
"renameCategory": {
"title": "Přejmenovat třídu",
"desc": "Vložte nové jméno pro {{name}}. Aby se změna názvu projevila, bude nutné model znovu natrénovat."
},
"description": {
"invalidName": "Neplatné jméno. Jméno můžou obsahovat pouze písmena, čísla, mezery, apostrofy, podtržítka a spojovníky."
},
"categories": "Třídy",
"createCategory": {
"new": "Vytvořit novou Třídu"
},
"categorizeImageAs": "Klasifikovat obrázek jako:",
"categorizeImage": "Klasifikovat obrázek"
"titleShort": "Nedávný"
}
}
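The `_one` / `_few` / `_other` suffixes on the Czech keys above follow CLDR plural categories, which i18next-style frameworks select at runtime. A sketch of that selection via the standard `Intl.PluralRules` API — `pluralKey` is a hypothetical helper for illustration, not the frontend's actual resolver:

```typescript
// Czech plural categories per CLDR: 1 -> "one", 2-4 -> "few",
// 0 and 5+ -> "other" (fractions map to "many", not shown here).
const czech = new Intl.PluralRules("cs");

function pluralKey(base: string, count: number): string {
  return `${base}_${czech.select(count)}`;
}

// pluralKey("desc", 1) -> "desc_one"
// pluralKey("desc", 3) -> "desc_few"
// pluralKey("desc", 7) -> "desc_other"
```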

View File

@@ -9,18 +9,14 @@
"empty": {
"alert": "Nejsou žádné výstrahy na kontrolu",
"detection": "Nejsou žádné detekce na kontrolu",
"motion": "Nenalezena žádná data o pohybu",
"recordingsDisabled": {
"title": "Nahrávání musí být povoleno",
"description": "Položky revize lze pro kameru vytvořit pouze tehdy, je-li pro ni povoleno nahrávání."
}
"motion": "Nenalezena žádná data o pohybu"
},
"timeline": "Časová osa",
"timeline.aria": "Zvolit časovou osu",
"events": {
"label": "Události",
"aria": "Zvolit události",
"noFoundForTimePeriod": "Pro toto časové období nebyly nalezeny žádné události."
"noFoundForTimePeriod": "Pro toto období nebyly nalezeny žádné události."
},
"documentTitle": "Revize - Frigate",
"camera": "Kamera",
@@ -30,8 +26,8 @@
"markAsReviewed": "Označit jako zkontrolované",
"markTheseItemsAsReviewed": "Označit tyto položky jako zkontrolované",
"newReviewItems": {
"label": "Zobrazit nové položky revize",
"button": "Nové položky revize"
"label": "Zobrazit nové položky na kontrolu",
"button": "Nové položky na kontrolu"
},
"recordings": {
"documentTitle": "Záznamy - Frigate"
@@ -46,22 +42,8 @@
"detail": {
"label": "Detail",
"noDataFound": "Žádná detailní data k prohlédnutí",
"aria": "Přepnout zobrazení detailů",
"aria": "Přepnout detailní zobrazení",
"trackedObject_other": "{{count}} objektů",
"trackedObject_one": "{{count}} objekt",
"noObjectDetailData": "Nejsou k dispozici žádné podrobné údaje o objektu.",
"settings": "Nastavení Detailního Zobrazení",
"alwaysExpandActive": {
"title": "Vždy rozbalit aktivní",
"desc": "Vždy zobrazit podrobnosti objektu aktivní položky revize, pokud jsou k dispozici."
}
},
"objectTrack": {
"trackedPoint": "Sledovaný bod",
"clickToSeek": "Kliknutím přeskočte na tento čas"
},
"select_all": "Vše",
"normalActivity": "Normální",
"needsReview": "Potřebuje revizi",
"securityConcern": "Obava o bezpečnost"
"trackedObject_one": "{{count}} objektů"
}
}

View File

@@ -24,8 +24,7 @@
"regenerate": "Od {{provider}} byl vyžádán nový popis. V závislosti na rychlosti vašeho poskytovatele může obnovení nového popisu nějakou dobu trvat.",
"updatedSublabel": "Úspěšně aktualizovaný podružný štítek.",
"updatedLPR": "Úspěšně aktualizovaná SPZ.",
"audioTranscription": "Požádání o přepis zvuku bylo úspěšné. V závislosti na rychlosti Vašeho Frigate serveru může přepis trvat nějaký čas než bude dokončen.",
"updatedAttributes": "Atributy byly úspěšně aktualizovány."
"audioTranscription": "Požádání o přepis zvuku bylo úspěšné."
},
"error": {
"regenerate": "Chyba volání {{provider}} pro nový popis: {{errorMessage}}",
@@ -207,7 +206,7 @@
"dialog": {
"confirmDelete": {
"title": "Potvrdit smazání",
"desc": "Odstraněním tohoto sledovaného objektu se odstraní snímek, všechna uložená vložení a všechny související položky s podrobnostmi o sledování. Zaznamenaný záznam tohoto sledovaného objektu v zobrazení Historie <em>NEBUDE</em> smazán.<br /><br />Opravdu chcete pokračovat?"
"desc": "Odstraněním tohoto sledovaného objektu se odstraní snímek, všechna uložená vložení a všechny související položky životního cyklu objektu. Zaznamenaný záznam tohoto sledovaného objektu v zobrazení Historie <em>NEBUDE</em> smazán.<br /><br />Opravdu chcete pokračovat?"
}
},
"trackedObjectDetails": "Detaily sledovaných objektů",
@@ -215,9 +214,7 @@
"details": "detaily",
"snapshot": "snímek",
"video": "video",
"object_lifecycle": "životní cyklus objektu",
"thumbnail": "Náhled",
"tracking_details": "detaily sledování"
"object_lifecycle": "životní cyklus objektu"
},
"noTrackedObjects": "Žádné sledované objekty nebyly nalezeny",
"fetchingTrackedObjectsFailed": "Chyba při načítání sledovaných objektů: {{errorMessage}}",
@@ -227,49 +224,5 @@
},
"concerns": {
"label": "Obavy"
},
"trackingDetails": {
"title": "Detaily Sledování",
"noImageFound": "Nebyl nalezen obrázek pro tuto časovou značku.",
"createObjectMask": "Vytvořit Masku Objektu",
"adjustAnnotationSettings": "Upravte nastavení poznámek",
"scrollViewTips": "Klikněte pro zobrazení významných okamžiků z životního cyklu tohoto objektu.",
"autoTrackingTips": "Pozice ohraničujících rámečků budou nepřesné pro kamery s automatickým sledováním.",
"count": "{{first}} z {{second}}",
"trackedPoint": "Sledovaný Bod",
"lifecycleItemDesc": {
"visible": "Detekován {{label}}",
"entered_zone": "{{label}} vstoupil do {{zones}}",
"active": "{{label}} se stal aktivním",
"stationary": "{{label}} se zastavil",
"attribute": {
"faceOrLicense_plate": "Pro {{label}} zjištěn {{attribute}}"
},
"header": {
"ratio": "Poměr",
"area": "Oblast",
"score": "Skóre"
}
},
"annotationSettings": {
"title": "Nastavení anotací",
"showAllZones": {
"title": "Zobrazit všechny zóny",
"desc": "Vždy zobrazovat zóny na snímcích, na kterých objekty vstoupili do zóny."
},
"offset": {
"label": "Odsazení anotace",
"desc": "Tato data pocházejí z detekčního kanálu vaší kamery, ale překrývají se s obrázky ze záznamového kanálu. Je nepravděpodobné, že by oba streamy byly dokonale synchronizované. V důsledku toho se ohraničovací rámeček a záznam nebudou dokonale srovnávat. Toto nastavení můžete použít k časovému posunutí anotací dopředu nebo dozadu, abyste je lépe zarovnali se zaznamenaným záznamem.",
"millisecondsToOffset": "Milisekundy na posunutí detekce anotací. <em>Výchozí: 0</em>",
"tips": "Snižte hodnotu, pokud je přehrávané video před ohraničením a body cesty, nebo zvyšte hodnotu, pokud je přehrávané video za nimi. Hodnota může být i záporná.",
"toast": {
"success": "Odsazení anotací pro {{camera}} bylo uloženo do konfiguračního souboru."
}
}
},
"carousel": {
"previous": "Předcházející snímek",
"next": "Další snímek"
}
}
}

View File

@@ -1,6 +1,6 @@
{
"imageEntry": {
"dropInstructions": "Přetáhněte obrázek sem, nebo klikněte na výběr",
"dropInstructions": "Přetáhněte obrázek zde, nebo klikněte na výběr",
"maxSize": "Maximální velikost: {{size}}MB",
"dropActive": "Přetáhněte obrázek zde…",
"validation": {
@@ -10,7 +10,7 @@
"createFaceLibrary": {
"new": "Vytvořit nový obličej",
"desc": "Vytvořit novou kolekci",
"nextSteps": "Chcete-li vybudovat pevný základ:<li>Použijte kartu Nedávná Rozpoznání k výběru a trénování na snímcích pro každou detekovanou osobu.</li><li>Pro nejlepší výsledky se zaměřte na přímé snímky; vyhněte se trénování snímků, které zachycují obličeje pod úhlem.</li></ul>",
"nextSteps": "Chcete-li vybudovat pevný základ:<li>Použijte kartu Trénování k výběru a trénování na snímcích pro každou detekovanou osobu.</li><li>Pro nejlepší výsledky se zaměřte na přímé snímky; vyhněte se trénování snímků, které zachycují obličeje pod úhlem.</li></ul>",
"title": "Vytvořit kolekci"
},
"details": {
@@ -44,7 +44,7 @@
"description": {
"addFace": "Přidejte novou kolekci do Knihovny obličejů nahráním prvního obrázku.",
"placeholder": "Zadejte název pro tuto kolekci",
"invalidName": "Neplatné jméno. Jméno můžou obsahovat pouze písmena, čísla, mezery, apostrofy, podtržítka a spojovníky."
"invalidName": "Neplatný název. Názvy mohou obsahovat pouze písmena, čísla, mezery, apostrofy, podtržítka a pomlčky."
},
"documentTitle": "Knihovna obličejů - Frigate",
"uploadFaceImage": {

View File

@@ -86,7 +86,7 @@
"enable": "Ukázat statistiky streamu"
},
"manualRecording": {
"title": "Na požádání",
"title": "Nahrávání na vyžádání",
"playInBackground": {
"label": "Přehrát na pozadí",
"desc": "Povolte tuto volbu pro pokračování streamování i když je přehrávač skrytý."
@@ -103,7 +103,7 @@
"started": "Manuálně spuštěno nahrávání na požádání.",
"ended": "Ukončeno manuální nahrávání na vyžádání.",
"recordDisabledTips": "Protože je v konfiguraci této kamery nahrávání zakázáno nebo omezeno, bude uložen pouze snímek.",
"tips": "Stáhněte si aktuální snímek nebo spusťte ruční událost na základě nastavení uchování záznamu této kamery."
"tips": "Spustit ruční událost na základě nastavení uchování záznamů této kamery."
},
"streamingSettings": "Nastavení Streamování",
"audio": "Zvuk",
@@ -167,11 +167,5 @@
"transcription": {
"enable": "Povolit živý přepis zvuku",
"disable": "Zakázat živý přepis zvuku"
},
"snapshot": {
"takeSnapshot": "Stáhnout aktuální snímek",
"noVideoSource": "Pro snímek není k dispozici žádné video.",
"captureFailed": "Zachycení snímku selhalo.",
"downloadStarted": "Stažení snímku spuštěno."
}
}

View File

@@ -134,7 +134,7 @@
"name": {
"inputPlaceHolder": "Zadejte jméno…",
"title": "Jméno",
"tips": "Název musí mít alespoň 2 znaky, musí obsahovat alespoň jedno písmeno a nesmí být shodný s názvem kamery nebo jiné zóny této kamery."
"tips": "Název musí mít alespoň 2 znaky a nesmí být shodný s názvem kamery nebo jiné zóny."
},
"inertia": {
"title": "Setrvačnost",
@@ -160,7 +160,7 @@
}
},
"toast": {
"success": "Zóna {{zoneName}} byla uložena."
"success": "Zóna {{zoneName}} byla uložena. Restartujte Frigate pro aplikování změn."
},
"label": "Zóny",
"desc": {
@@ -199,8 +199,8 @@
"clickDrawPolygon": "Kliknutím nakreslíte polygon do obrázku.",
"toast": {
"success": {
"title": "{{polygonName}} byl uložen.",
"noName": "Maska Detekce pohybu byla uložena."
"title": "{{polygonName}} byl uložen. Restartujte Frigate pro aplikování změn.",
"noName": "Maska Detekce pohybu byla uložena. Restartujte Frigate pro aplikování změn."
}
}
},
@@ -284,8 +284,8 @@
"clickDrawPolygon": "Kliknutím nakreslete polygon do obrázku.",
"toast": {
"success": {
"title": "{{polygonName}} byl uložen.",
"noName": "Maska Objektu byla uložena."
"title": "{{polygonName}} byl uložen. Restartujte Frigate pro aplikování změn.",
"noName": "Maska Objektu byla uložena. Restartujte Frigate pro aplikování změn."
}
},
"point_one": "{{count}} bod",
@@ -322,7 +322,7 @@
"noCamera": "Žádná Kamera"
},
"general": {
"title": "Nastaverozhraní",
"title": "Hlavní nastavení",
"liveDashboard": {
"title": "Živý dashboard",
"automaticLiveView": {
@@ -332,13 +332,6 @@
"playAlertVideos": {
"label": "Přehrát videa s výstrahou",
"desc": "Ve výchozím nastavení se nedávná upozornění na ovládacím panelu Živě přehrávají jako malá opakující se videa. Vypněte tuto možnost, chcete-li na tomto zařízení/prohlížeči zobrazovat pouze statický obrázek nedávných výstrah."
},
"displayCameraNames": {
"label": "Vždy zobrazit názvy kamer",
"desc": "Vždy zobrazit názvy kamer v čipu na ovládacím panelu živého náhledu s více kamerami."
},
"liveFallbackTimeout": {
"label": "Časový limit pádu živého přehrávání"
}
},
"storedLayouts": {
@@ -636,11 +629,11 @@
"actions": "Akce",
"noUsers": "Žádní uživatelé nebyli nalezeni.",
"changeRole": "Změnit roli uživatele",
"password": "Resetovat Heslo",
"password": "Heslo",
"deleteUser": "Smazat uživatele",
"role": "Role"
},
"updatePassword": "Resetovat heslo",
"updatePassword": "Aktualizovat heslo",
"toast": {
"success": {
"createUser": "Uživatel {{user}} úspěšně vytvořen",
@@ -750,7 +743,7 @@
"triggers": {
"documentTitle": "Spouštěče",
"management": {
"title": "Spouštěče",
"title": "Správa spouštěčů",
"desc": "Spravovat spouštěče pro {{camera}}. Použít typ miniatury ke spuštění u miniatur podobných vybranému sledovanému objektu a typ popisu ke spuštění u popisů podobných zadanému textu."
},
"addTrigger": "Přidat spouštěč",
@@ -789,10 +782,10 @@
"form": {
"name": {
"title": "Název",
"placeholder": "Pojmenujte tento spouštěč",
"placeholder": "Zadejte název spouštěče",
"error": {
"minLength": "Pole musí mít alespoň 2 znaky.",
"invalidCharacters": "Pole může obsahovat pouze písmena, číslice, podtržítka a pomlčky.",
"minLength": "Název musí mít alespoň 2 znaky.",
"invalidCharacters": "Jméno může obsahovat pouze písmena, číslice, podtržítka a pomlčky.",
"alreadyExists": "Spouštěč s tímto názvem již pro tuto kameru existuje."
}
},
@@ -805,9 +798,9 @@
},
"content": {
"title": "Obsah",
"imagePlaceholder": "Vyberte miniaturu",
"imagePlaceholder": "Vybrat obrázek",
"textPlaceholder": "Zadat textový obsah",
"imageDesc": "Je zobrazeno pouze posledních 100 miniatur. Pokud nemůžete najít požadovanou miniaturu, prosím zkontrolujte dřívější objekty v Prozkoumat a nastavte spouštěč ze tamějšího menu.",
"imageDesc": "Vybrat obrázek, který spustí tuto akci, když bude detekován podobný obrázek.",
"textDesc": "Zadejte text, který spustí tuto akci, když bude zjištěn podobný popis sledovaného objektu.",
"error": {
"required": "Obsah je povinný."
@@ -815,7 +808,7 @@
},
"actions": {
"title": "Akce",
"desc": "Ve výchozím nastavení Frigate odesílá MQTT zprávu pro všechny spouštěče. Podřazené popisky přidávají název spouštěče k popisku objektu. Atributy jsou prohledávatelná metadata uložená samostatně v metadatech sledovaného objektu.",
"desc": "Ve výchozím nastavení Frigate odesílá MQTT zprávu pro všechny spouštěče. Zvolte dodatečnou akci, která se má provést, když se tento spouštěč aktivuje.",
"error": {
"min": "Musí být vybrána alespoň jedna akce."
}
@@ -857,9 +850,9 @@
"createRole": "Role {{role}} byla úspěšně vytvořena",
"updateCameras": "Kamery byly aktualizovány pro roli {{role}}",
"deleteRole": "Role {{role}} byla úspěšně smazána",
"userRolesUpdated_one": "{{count}} uživatel přiřazený k této roli byl aktualizován na „diváka“, který má přístup ke všem kamerám.",
"userRolesUpdated_few": "{{count}} uživatelé přiřazení k této roli bylo aktualizováno na „diváky“, kteří mají přístup ke všem kamerám.",
"userRolesUpdated_other": "{{count}} uživatelů přiřazených k této roli bylo aktualizováno na „diváky“, kteří mají přístup ke všem kamerám."
"userRolesUpdated_one": "{{count}} uživatel(ů) přiřazených k této roli bylo aktualizováno na „Divák“, který má přístup ke všem kamerám.",
"userRolesUpdated_few": "",
"userRolesUpdated_other": ""
},
"error": {
"createRoleFailed": "Nepodařilo se vytvořit roli: {{errorMessage}}",
@@ -903,36 +896,5 @@
"title": "Správa role diváka",
"desc": "Spravujte vlastní role diváků a jejich oprávnění k přístupu ke kamerám pro tuto instanci Frigate."
}
},
"cameraWizard": {
"save": {
"success": "Nová kamera {{cameraName}} úspěšně uložena."
},
"step2": {
"testSuccess": "Test připojení v pořádku!",
"probeSuccessful": "Sonda úspěšná",
"probeNoSuccess": "Sonda neúspěšná"
},
"step3": {
"testSuccess": "Test streamu v pořádku!"
},
"step4": {
"reconnectionSuccess": "Opakované připojení úspěšné.",
"streamValidated": "Stream {{number}} úspěšně ověřený"
}
},
"cameraManagement": {
"cameraConfig": {
"toast": {
"success": "Kamera {{cameraName}} úspěšně uložena"
}
}
},
"cameraReview": {
"reviewClassification": {
"toast": {
"success": "Konfigurace Klasifikací Revizí byla uložena. Restartujte Frigate pro aplikování změn."
}
}
}
}

View File

@@ -112,23 +112,12 @@
"gpuUsage": "Využití CPU",
"gpuMemory": "Paměť GPU",
"gpuEncoder": "GPU kodér",
"gpuDecoder": "GPU Dekodér",
"intelGpuWarning": {
"title": "Upozornění Intel GPU Stats",
"message": "Statistiky GPU nedostupné",
"description": "Toto je známá chyba v nástrojích Intel pro hlášení statistik GPU (intel_gpu_top), která selhává a opakovaně vrací využití GPU 0 %, a to i v případech, kdy na (i)GPU správně běží hardwarová akcelerace a detekce objektů. Nejedná se o chybu Frigate. Můžete restartovat hostitele, abyste problém dočasně vyřešili a potvrdili, že GPU funguje správně. Toto neovlivňuje výkon."
}
"gpuDecoder": "GPU Dekodér"
},
"otherProcesses": {
"title": "Ostatní procesy",
"processCpuUsage": "Využití CPU procesy",
"processMemoryUsage": "Využití paměti procesy",
"series": {
"go2rtc": "go2rtc",
"recording": "nahrávání",
"review_segment": "revidovat segment",
"embeddings": "vložení"
}
"processMemoryUsage": "Využití paměti procesy"
},
"title": "Hlavní"
},

View File

@@ -27,7 +27,7 @@
"harp": "Harpe",
"bell": "Klokke",
"harmonica": "Harmonika",
"bagpipes": "Sækkepiber",
"bagpipes": "Sækkepibe",
"didgeridoo": "Didgeridoo",
"jazz": "Jazz",
"opera": "Opera",
@@ -78,122 +78,11 @@
"camera": "Kamera",
"tools": "Værktøj",
"hammer": "Hammer",
"drill": "Boremaskine",
"drill": "Bore",
"explosion": "Eksplosion",
"fireworks": "Nytårskrudt",
"babbling": "Pludren",
"yell": "Råb",
"whoop": "Jubel",
"snicker": "Smålatter",
"bird": "Fugl",
"cat": "Kat",
"dog": "Hund",
"horse": "Hest",
"sheep": "Får",
"mouse": "Mus",
"keyboard": "Tastatur",
"blender": "Mixer",
"hair_dryer": "Føntørrer",
"animal": "Dyr",
"bark": "Gø",
"goat": "Gæd",
"sigh": "Suk",
"singing": "Synger",
"choir": "Kor",
"yodeling": "Jodlen",
"chant": "Messe",
"mantra": "Meditationsmantra",
"child_singing": "Barn Synger",
"synthetic_singing": "Syntetisk Sang",
"rapping": "Rapper",
"humming": "Nynner",
"groan": "Støn",
"grunt": "Grynt",
"whistling": "Fløjter",
"breathing": "Vejrtrækning",
"wheeze": "Hæsende vejrtrækning",
"snoring": "Snorker",
"gasp": "Gisp",
"pant": "Anstrengende vejrtrækning",
"snort": "Fnyse",
"cough": "Hoster",
"throat_clearing": "Rømmer sig",
"sneeze": "Nyser",
"sniff": "Snøfter",
"run": "Løb",
"shuffle": "Trække fødderne",
"footsteps": "Fodtrin",
"chewing": "Tygger",
"biting": "Bider",
"gargling": "Gurgler",
"stomach_rumble": "Maverumlen",
"burping": "Bøvser",
"hiccup": "Hikke",
"fart": "Prut",
"hands": "Hænder",
"finger_snapping": "Knipse fingere",
"clapping": "Klapper",
"heartbeat": "Hjertebanken",
"heart_murmur": "Hjertemislyd",
"cheering": "Hujen",
"applause": "Bifald",
"chatter": "Snak",
"crowd": "Forsamling",
"children_playing": "Børn leger",
"pets": "Kæledyr",
"yip": "Jubel",
"howl": "Hyl",
"bow_wow": "Vov vov",
"growling": "Knurren",
"whimper_dog": "Hunde­klynk",
"purr": "Spinde",
"meow": "Meaw",
"hiss": "Hvæser",
"caterwaul": "Kattejammer",
"livestock": "Husdyr",
"oink": "Nøf",
"bleat": "Brægen",
"vibration": "Vibration",
"fowl": "Fjerkræ",
"chicken": "Kylling",
"cluck": "Kagle",
"cock_a_doodle_doo": "Kykeliky",
"turkey": "Kalkun",
"gobble": "Kalkunlyd",
"duck": "And",
"quack": "Rap",
"goose": "Gås",
"honk": "Dyt",
"wild_animals": "Vilde dyr",
"roaring_cats": "Brølende katte",
"roar": "Brøl",
"chirp": "Pip",
"squawk": "Skræppen",
"pigeon": "Due",
"coo": "Kurre",
"crow": "Krage",
"caw": "Kragelyd",
"owl": "Ugle",
"hoot": "Uglehyl",
"flapping_wings": "Vingeslag",
"dogs": "Hunde",
"rats": "Rotter",
"patter": "Dråbelyd",
"insect": "Insekt",
"cricket": "Cricket",
"guitar": "Guitar",
"electric_guitar": "Elektrisk Guitar",
"bass_guitar": "Basguitar",
"acoustic_guitar": "Akustisk Guitar",
"steel_guitar": "Stål Guitar",
"tapping": "Tapping på guitar",
"strum": "Slå an",
"banjo": "Banjo",
"sitar": "Sitar",
"mandolin": "Mandolin",
"snare_drum": "Lilletromme",
"rimshot": "Kantslag",
"drum_roll": "Trommehvirvel",
"bass_drum": "Stortromme",
"techno": "Techno"
"snicker": "Smålatter"
}

View File

@@ -24,13 +24,13 @@
"am": "am",
"year_one": "{{time}} år",
"year_other": "{{time}} år",
"mo": "{{time}}må",
"mo": "{{time}}mo",
"month_one": "{{time}} måned",
"month_other": "{{time}} måneder",
"d": "{{time}}d",
"day_one": "{{time}} dag",
"day_other": "{{time}} dage",
"h": "{{time}}t",
"h": "{{time}}h",
"yr": "{{time}}yr",
"hour_one": "{{time}} time",
"hour_other": "{{time}} timer",
@@ -41,11 +41,11 @@
"second_one": "{{time}} sekund",
"second_other": "{{time}} sekunder",
"formattedTimestamp": {
"12hour": "d MMM, h:mm:ss aaa",
"24hour": "d. MMM, HH:mm:ss"
"12hour": "MMM d, h:mm:ss aaa",
"24hour": "MMM d, HH:mm:ss"
},
"formattedTimestamp2": {
"12hour": "dd/MM h:mm:ss",
"12hour": "MM/dd h:mm:ssa",
"24hour": "d MMM HH:mm:ss"
},
"formattedTimestampHourMinute": {
@@ -57,26 +57,22 @@
"24hour": "HH:mm:ss"
},
"formattedTimestampMonthDayHourMinute": {
"12hour": "d MMM, h:mm aaa",
"24hour": "d MMM, HH:mm"
"12hour": "MMM d, h:mm aaa",
"24hour": "MMM d, HH:mm"
},
"formattedTimestampMonthDayYear": {
"12hour": "d MMM, yyyy",
"24hour": "d MMM, yyyy"
"12hour": "MMM d, yyyy",
"24hour": "MMM d, yyyy"
},
"formattedTimestampMonthDayYearHourMinute": {
"12hour": "d MMM yyyy, h:mm aaa",
"24hour": "d MMM yyyy, HH:mm"
"12hour": "MMM d yyyy, h:mm aaa",
"24hour": "MMM d yyyy, HH:mm"
},
"formattedTimestampMonthDay": "d MMM",
"formattedTimestampMonthDay": "MMM d",
"formattedTimestampFilename": {
"12hour": "dd-MM-yy-h-mm-ss-a",
"24hour": "dd-MM-yy-HH-mm-ss"
},
"never": "Aldrig",
"inProgress": "Under behandling",
"invalidStartTime": "Ugyldig starttid",
"invalidEndTime": "Ugyldig sluttid"
"12hour": "MM-dd-yy-h-mm-ss-a",
"24hour": "MM-dd-yy-HH-mm-ss"
}
},
"unit": {
"speed": {
@@ -86,28 +82,14 @@
"length": {
"feet": "fod",
"meters": "meter"
},
"data": {
"kbps": "kB/s",
"mbps": "MB/s",
"gbps": "GB/s",
"kbph": "kB/time",
"mbph": "MB/time",
"gbph": "GB/time"
}
},
"label": {
"back": "Gå tilbage",
"hide": "Skjul {{item}}",
"show": "Vis {{item}}",
"ID": "ID",
"none": "Ingen",
"all": "Alle",
"other": "Andet"
"back": "Gå tilbage"
},
"button": {
"apply": "Anvend",
"reset": "Nulstil",
"reset": "Reset",
"done": "Udført",
"enabled": "Aktiveret",
"enable": "Aktiver",
@@ -134,22 +116,21 @@
"no": "Nej",
"download": "Download",
"info": "Info",
"suspended": "Sat på pause",
"unsuspended": "Genoptag",
"suspended": "Suspenderet",
"unsuspended": "Ophæv suspendering",
"play": "Afspil",
"unselect": "Fravælg",
"export": "Eksporter",
"deleteNow": "Slet nu",
"next": "Næste",
"continue": "Fortsæt"
"next": "Næste"
},
"menu": {
"system": "System",
"systemMetrics": "Systemstatistik",
"systemMetrics": "System metrics",
"configuration": "Konfiguration",
"systemLogs": "Systemlogfiler",
"systemLogs": "System logs",
"settings": "Indstillinger",
"configurationEditor": "Konfigurationsværktøj",
"configurationEditor": "Konfiguratons Editor",
"languages": "Sprog",
"language": {
"en": "English (Engelsk)",
@@ -184,17 +165,8 @@
"th": "ไทย (Thai)",
"ca": "Català (Katalansk)",
"withSystem": {
"label": "Brug systemindstillinger for sprog"
},
"ptBR": "Português brasileiro (Brasiliansk Portugisisk)",
"sr": "Српски (Serbisk)",
"sl": "Slovenščina (Slovensk)",
"lt": "Lietuvių (Litauisk)",
"bg": "Български (Bulgarsk)",
"gl": "Galego (Galisisk)",
"id": "Bahasa Indonesia (Indonesisk)",
"ur": "اردو (Urdu)",
"hr": "Hrvatski (Kroatisk)"
"label": "Brug system indstillinger for sprog"
}
},
"appearance": "Udseende",
"darkMode": {
@@ -213,7 +185,7 @@
"nord": "Nord",
"red": "Rød",
"highcontrast": "Høj Kontrast",
"default": "Standard"
"default": "Default"
},
"help": "Hjælp",
"documentation": {
@@ -222,7 +194,7 @@
},
"restart": "Genstart Frigate",
"live": {
"title": "Direkte",
"title": "Live",
"allCameras": "Alle kameraer",
"cameras": {
"title": "Kameraer",
@@ -230,28 +202,27 @@
"count_other": "{{count}} Kameraer"
}
},
"review": "Gennemse",
"review": "Review",
"explore": "Udforsk",
"export": "Eksporter",
"uiPlayground": "UI sandkasse",
"faceLibrary": "Ansigtsarkiv",
"faceLibrary": "Face Library",
"user": {
"title": "Bruger",
"account": "Konto",
"current": "Aktiv bruger: {{user}}",
"anonymous": "anonym",
"logout": "Log ud",
"setPassword": "Vælg kodeord"
},
"classification": "Kategorisering"
"logout": "Logout",
"setPassword": "Set Password"
}
},
"toast": {
"copyUrlToClipboard": "Kopieret URL til udklipsholder.",
"copyUrlToClipboard": "Kopieret URL til klippebord.",
"save": {
"title": "Gem",
"error": {
"title": "Ændringer kunne ikke gemmes: {{errorMessage}}",
"noMessage": "Kunne ikke gemme konfigurationsændringer"
"title": "Ændringer kan ikke gemmes: {{errorMessage}}",
"noMessage": "Kan ikke gemme konfigurationsændringer"
}
}
},
@@ -262,7 +233,7 @@
"desc": "Admins har fuld adgang til Frigate UI. Viewers er begrænset til at se kameraer, gennemse items, og historik i UI."
},
"pagination": {
"label": "sideinddeling",
"label": "paginering",
"previous": {
"title": "Forrige",
"label": "Gå til forrige side"
@@ -274,27 +245,15 @@
"more": "Flere sider"
},
"accessDenied": {
"documentTitle": "Adgang nægtet - Frigate",
"title": "Adgang nægtet",
"desc": "Du har ikke rettigheder til at se denne side."
"documentTitle": "Adgang forbudt - Frigate",
"title": "Adgang forbudt",
"desc": "Du har ikke tiiladelse til at se denne side."
},
"notFound": {
"documentTitle": "Ikke fundet - Frigate",
"title": "404",
"desc": "Siden blev ikke fundet"
"desc": "Side ikke fundet"
},
"selectItem": "Vælg {{item}}",
"readTheDocumentation": "Læs dokumentationen",
"list": {
"two": "{{0}} og {{1}}",
"many": "{{items}}, og {{last}}",
"separatorWithSpace": ", "
},
"field": {
"optional": "Valgfrit",
"internalID": "Det interne ID som Frigate bruger i konfigurationen og databasen"
},
"information": {
"pixels": "{{area}}px"
}
"readTheDocumentation": "Læs dokumentationen"
}
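The `formattedTimestamp*` values in this file are date-fns-style format patterns (`dd` = day, `MM` = month, `HH` = 24-hour hours, `mm` = minutes, `ss` = seconds). A minimal sketch of how a few of those tokens expand — a hypothetical stand-in for date-fns `format`, for illustration only:

```typescript
// Expands a handful of date-fns-style tokens; the real library handles
// many more (MMM, yy, h, aaa, locale-aware month names, etc.).
function formatTokens(d: Date, pattern: string): string {
  const pad = (n: number) => String(n).padStart(2, "0");
  return pattern
    .replace("HH", pad(d.getHours()))
    .replace("mm", pad(d.getMinutes()))
    .replace("ss", pad(d.getSeconds()))
    .replace("dd", pad(d.getDate()))
    .replace("MM", pad(d.getMonth() + 1)); // getMonth() is 0-based
}

const t = new Date(2026, 0, 12, 14, 8, 13);
// formatTokens(t, "dd-MM HH:mm:ss") -> "12-01 14:08:13"
```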

View File

@@ -8,8 +8,7 @@
"passwordRequired": "Kodeord kræves",
"loginFailed": "Login fejlede",
"unknownError": "Ukendt fejl. Tjek logs.",
"rateLimit": "Grænsen for forespørgsler er overskredet. Prøv igen senere.",
"webUnknownError": "Ukendt fejl. Tjek konsollogs."
"rateLimit": "Grænsen for forespørgsler er overskredet. Prøv igen senere."
},
"firstTimeLogin": "Forsøger du at logge ind for første gang? Loginoplysningerne står i Frigate-loggene."
}

View File

@@ -14,73 +14,8 @@
"label": "Navn",
"placeholder": "Indtast et navn…",
"errorMessage": {
"mustLeastCharacters": "Kameragruppens navn skal være mindst 2 tegn.",
"exists": "Kameragruppenavn findes allerede.",
"nameMustNotPeriod": "Kameragruppenavn må ikke indeholde en periode.",
"invalid": "Ugyldigt kamera gruppenavn."
}
},
"cameras": {
"label": "Kameraer",
"desc": "Vælg kameraer til denne gruppe."
},
"icon": "Ikon",
"success": "Kameragruppe ({{name}}) er blevet gemt.",
"camera": {
"birdseye": "Fugleøje",
"setting": {
"label": "Kamera Streaming Indstillinger",
"title": "{{cameraName}} Streaming Indstillinger",
"desc": "Skift de live streaming muligheder for denne kameragruppes dashboard. <em> Disse indstillinger er enheds- og browserspecifikke.</em>",
"audioIsAvailable": "Lyd er tilgængelig for denne stream",
"audioIsUnavailable": "Lyd er ikke tilgængelig for denne strøm",
"audio": {
"tips": {
"title": "Lyd skal komme fra dit kamera og konfigureret i go2rtc til denne stream."
}
},
"stream": "Stream",
"placeholder": "Vælg en stream",
"streamMethod": {
"label": "Streaming Metode",
"placeholder": "Vælg en streaming metode",
"method": {
"noStreaming": {
"label": "Ingen Streaming",
"desc": "Kamerabilleder vil kun opdatere én gang i minuttet og ingen live streaming vil forekomme."
},
"smartStreaming": {
"label": "Smart Streaming (anbefalet)",
"desc": "Smart streaming vil opdatere dit kamerabillede én gang i minuttet, når der ikke sker noget, for at spare båndbredde og ressourcer. Når der registreres aktivitet, skifter billedet problemfrit til en live stream."
},
"continuousStreaming": {
"label": "Kontinuerlig Streaming",
"desc": {
"title": "Kamerabillede vil altid være en live stream, når det er synligt på instrumentbrættet, selv om der ikke registreres nogen aktivitet.",
"warning": "Kontinuerlig streaming kan forårsage højt båndbreddeforbrug og ydelsesproblemer. Brug med omtanke."
}
}
}
},
"compatibilityMode": {
"label": "Kompatibilitetstilstand",
"desc": "Aktivér kun denne mulighed, hvis kameraets live stream viser farve artefakter og har en diagonal linje på højre side af billedet."
}
"mustLeastCharacters": "Kameragruppens navn skal være mindst 2 tegn."
}
}
},
"debug": {
"options": {
"label": "Indstillinger",
"title": "Valgmuligheder",
"showOptions": "Vis muligheder",
"hideOptions": "Skjul muligheder"
},
"boundingBox": "Afgrænsningsfelt",
"timestamp": "Tidsstempel",
"zones": "Zoner",
"mask": "Maske",
"motion": "Bevægelse",
"regions": "Regioner"
}
}

Some files were not shown because too many files have changed in this diff Show More