mirror of https://github.com/blakeblackshear/frigate.git
synced 2026-01-31 00:21:44 -05:00

Compare commits: dev...dependabot (1 commit)

| Author | SHA1 | Date |
| --- | --- | --- |
|  | a2397b5308 |  |

.github/copilot-instructions.md (vendored, 387 lines changed)

@@ -1,385 +1,2 @@
# GitHub Copilot Instructions for Frigate NVR

This document provides coding guidelines and best practices for contributing to Frigate NVR, a complete and local NVR designed for Home Assistant with AI object detection.

## Project Overview

Frigate NVR is a realtime object detection system for IP cameras that uses:

- **Backend**: Python 3.13+ with FastAPI, OpenCV, TensorFlow/ONNX
- **Frontend**: React with TypeScript, Vite, TailwindCSS
- **Architecture**: Multiprocessing design with ZMQ and MQTT communication
- **Focus**: Minimal resource usage with maximum performance

## Code Review Guidelines

When reviewing code, do NOT comment on:

- Missing imports - Static analysis tooling catches these
- Code formatting - Ruff (Python) and Prettier (TypeScript/React) handle formatting
- Minor style inconsistencies already enforced by linters

## Python Backend Standards

### Python Requirements

- **Compatibility**: Python 3.13+
- **Language Features**: Use modern Python features:
  - Pattern matching
  - Type hints (comprehensive typing preferred)
  - f-strings (preferred over `%` or `.format()`)
  - Dataclasses
  - Async/await patterns
### Code Quality Standards

- **Formatting**: Ruff (configured in `pyproject.toml`)
- **Linting**: Ruff with rules defined in project config
- **Type Checking**: Use type hints consistently
- **Testing**: unittest framework - use `python3 -u -m unittest` to run tests
- **Language**: American English for all code, comments, and documentation

### Logging Standards

- **Logger Pattern**: Use module-level logger

```python
import logging

logger = logging.getLogger(__name__)
```

- **Format Guidelines**:
  - No periods at end of log messages
  - No sensitive data (keys, tokens, passwords)
  - Use lazy logging: `logger.debug("Message with %s", variable)`
- **Log Levels**:
  - `debug`: Development and troubleshooting information
  - `info`: Important runtime events (startup, shutdown, state changes)
  - `warning`: Recoverable issues that should be addressed
  - `error`: Errors that affect functionality but don't crash the app
  - `exception`: Use in except blocks to include traceback
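Taken together, a minimal sketch of these logging rules (the helper name and path are illustrative, not Frigate APIs) might look like:

```python
import logging

logger = logging.getLogger(__name__)


def read_config(path: str) -> str | None:
    # Lazy logging: the message is only formatted when DEBUG is enabled,
    # has no trailing period, and contains no sensitive data
    logger.debug("Reading config from %s", path)
    try:
        with open(path) as f:
            return f.read()
    except OSError:
        # logger.exception logs at ERROR level and appends the traceback
        logger.exception("Failed to read config from %s", path)
        return None
```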
### Error Handling

- **Exception Types**: Choose the most specific exception available
- **Try/Catch Best Practices**:
  - Only wrap code that can throw exceptions
  - Keep try blocks minimal - process data after the try/except
  - Avoid bare exceptions except in background tasks

Bad pattern:

```python
try:
    data = await device.get_data()  # Can throw
    # ❌ Don't process data inside try block
    processed = data.get("value", 0) * 100
    result = processed
except DeviceError:
    logger.error("Failed to get data")
```

Good pattern:

```python
try:
    data = await device.get_data()  # Can throw
except DeviceError:
    logger.error("Failed to get data")
    return

# ✅ Process data outside try block
processed = data.get("value", 0) * 100
result = processed
```

### Async Programming

- **External I/O**: All external I/O operations must be async
- **Best Practices** (see the sketch after this list):
  - Avoid sleeping in loops - use `asyncio.sleep()` not `time.sleep()`
  - Avoid awaiting in loops - use `asyncio.gather()` instead
  - No blocking calls in async functions
  - Use `asyncio.create_task()` for background operations
- **Thread Safety**: Use proper synchronization for shared state
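As a rough sketch of the gather-over-loop guideline (the camera names and `fetch_status` helper are hypothetical, not Frigate code):

```python
import asyncio


async def fetch_status(camera: str) -> str:
    # Stand-in for a real async I/O call; asyncio.sleep() never blocks the loop
    await asyncio.sleep(0.1)
    return f"{camera}: ok"


async def main() -> None:
    cameras = ["front_door", "back_yard", "garage"]
    # Start all requests concurrently instead of awaiting one per iteration
    results = await asyncio.gather(*(fetch_status(c) for c in cameras))
    print(results)


asyncio.run(main())
```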
### Documentation Standards

- **Module Docstrings**: Concise descriptions at top of files

```python
"""Utilities for motion detection and analysis."""
```

- **Function Docstrings**: Required for public functions and methods

```python
async def process_frame(frame: ndarray, config: Config) -> Detection:
    """Process a video frame for object detection.

    Args:
        frame: The video frame as numpy array
        config: Detection configuration

    Returns:
        Detection results with bounding boxes
    """
```

- **Comment Style**:
  - Explain the "why" not just the "what"
  - Keep lines under 88 characters when possible
  - Use clear, descriptive comments

### File Organization

- **API Endpoints**: `frigate/api/` - FastAPI route handlers
- **Configuration**: `frigate/config/` - Configuration parsing and validation
- **Detectors**: `frigate/detectors/` - Object detection backends
- **Events**: `frigate/events/` - Event management and storage
- **Utilities**: `frigate/util/` - Shared utility functions

## Frontend (React/TypeScript) Standards

### Internationalization (i18n)

- **CRITICAL**: Never write user-facing strings directly in components
- **Always use react-i18next**: Import and use the `t()` function

```tsx
import { useTranslation } from "react-i18next";

function MyComponent() {
  const { t } = useTranslation(["views/live"]);
  return <div>{t("camera_not_found")}</div>;
}
```

- **Translation Files**: Add English strings to the appropriate json files in `web/public/locales/en`
- **Namespaces**: Organize translations by feature/view (e.g., `views/live`, `common`, `views/system`)
### Code Quality

- **Linting**: ESLint (see `web/.eslintrc.cjs`)
- **Formatting**: Prettier with Tailwind CSS plugin
- **Type Safety**: TypeScript strict mode enabled
- **Testing**: Vitest for unit tests

### Component Patterns

- **UI Components**: Use Radix UI primitives (in `web/src/components/ui/`)
- **Styling**: TailwindCSS with `cn()` utility for class merging
- **State Management**: React hooks (useState, useEffect, useCallback, useMemo)
- **Data Fetching**: Custom hooks with proper loading and error states

### ESLint Rules

Key rules enforced:

- `react-hooks/rules-of-hooks`: error
- `react-hooks/exhaustive-deps`: error
- `no-console`: error (use proper logging or remove)
- `@typescript-eslint/no-explicit-any`: warn (always use proper types instead of `any`)
- Unused variables must be prefixed with `_`
- Comma dangles required for multiline objects/arrays
### File Organization

- **Pages**: `web/src/pages/` - Route components
- **Views**: `web/src/views/` - Complex view components
- **Components**: `web/src/components/` - Reusable components
- **Hooks**: `web/src/hooks/` - Custom React hooks
- **API**: `web/src/api/` - API client functions
- **Types**: `web/src/types/` - TypeScript type definitions

## Testing Requirements

### Backend Testing

- **Framework**: Python unittest
- **Run Command**: `python3 -u -m unittest`
- **Location**: `frigate/test/`
- **Coverage**: Aim for comprehensive test coverage of core functionality
- **Pattern**: Use `TestCase` classes with descriptive test method names

```python
class TestMotionDetection(unittest.TestCase):
    def test_detects_motion_above_threshold(self):
        # Test implementation
        ...
```

### Test Best Practices

- Always have a way to test your work and confirm your changes
- Write tests for bug fixes to prevent regressions
- Test edge cases and error conditions
- Mock external dependencies (cameras, APIs, hardware); see the sketch after this list
- Use fixtures for test data
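A minimal sketch of the mocking guideline, assuming a hypothetical `poll_frames` helper rather than real Frigate code:

```python
import unittest
from unittest.mock import MagicMock


def poll_frames(camera) -> list:
    """Return frames from a camera, or an empty list if it is unreachable."""
    try:
        return camera.get_frames()
    except ConnectionError:
        return []


class TestCameraPolling(unittest.TestCase):
    def test_returns_empty_list_when_camera_offline(self):
        # Mock the camera dependency so the test never touches real hardware
        camera = MagicMock()
        camera.get_frames.side_effect = ConnectionError

        self.assertEqual(poll_frames(camera), [])
        camera.get_frames.assert_called_once()


if __name__ == "__main__":
    unittest.main()
```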
## Development Commands

### Python Backend

```bash
# Run all tests
python3 -u -m unittest

# Run specific test file
python3 -u -m unittest frigate.test.test_ffmpeg_presets

# Check formatting (Ruff)
ruff format --check frigate/

# Apply formatting
ruff format frigate/

# Run linter
ruff check frigate/
```

### Frontend (from web/ directory)

```bash
# Start dev server (AI agents should never run this directly unless asked)
npm run dev

# Build for production
npm run build

# Run linter
npm run lint

# Fix linting issues
npm run lint:fix

# Format code
npm run prettier:write
```

### Docker Development

AI agents should never run these commands directly unless instructed.

```bash
# Build local image
make local

# Build debug image
make debug
```
## Common Patterns

### API Endpoint Pattern

```python
from fastapi import APIRouter, Request
from frigate.api.defs.tags import Tags

router = APIRouter(tags=[Tags.Events])

@router.get("/events")
async def get_events(request: Request, limit: int = 100):
    """Retrieve events from the database."""
    # Implementation
```

### Configuration Access

```python
# Access Frigate configuration
config: FrigateConfig = request.app.frigate_config
camera_config = config.cameras["front_door"]
```

### Database Queries

```python
from frigate.models import Event

# Use Peewee ORM for database access
events = (
    Event.select()
    .where(Event.camera == camera_name)
    .order_by(Event.start_time.desc())
    .limit(limit)
)
```
## Common Anti-Patterns to Avoid

### ❌ Avoid These

```python
# Blocking operations in async functions
data = requests.get(url)  # ❌ Use async HTTP client
time.sleep(5)  # ❌ Use asyncio.sleep()

# Hardcoded strings in React components
<div>Camera not found</div>  # ❌ Use t("camera_not_found")

# Missing error handling
data = await api.get_data()  # ❌ No exception handling

# Bare exceptions in regular code
try:
    value = await sensor.read()
except Exception:  # ❌ Too broad
    logger.error("Failed")
```

### ✅ Use These Instead

```python
# Async operations
import aiohttp

async with aiohttp.ClientSession() as session:
    async with session.get(url) as response:
        data = await response.json()

await asyncio.sleep(5)  # ✅ Non-blocking

# Translatable strings in React
const { t } = useTranslation();
<div>{t("camera_not_found")}</div>  # ✅ Translatable

# Proper error handling
try:
    data = await api.get_data()
except ApiException as err:
    logger.error("API error: %s", err)
    raise

# Specific exceptions
try:
    value = await sensor.read()
except SensorException as err:  # ✅ Specific
    logger.exception("Failed to read sensor")
```
## Project-Specific Conventions

### Configuration Files

- Main config: `config/config.yml`

### Directory Structure

- Backend code: `frigate/`
- Frontend code: `web/`
- Docker files: `docker/`
- Documentation: `docs/`
- Database migrations: `migrations/`

### Code Style Conformance

Always conform new and refactored code to the existing coding style in the project:

- Follow established patterns in similar files
- Match indentation and formatting of surrounding code
- Use consistent naming conventions (snake_case for Python, camelCase for TypeScript)
- Maintain the same level of verbosity in comments and docstrings

## Additional Resources

- Documentation: https://docs.frigate.video
- Main Repository: https://github.com/blakeblackshear/frigate
- Home Assistant Integration: https://github.com/blakeblackshear/frigate-hass-integration

Never write strings in the frontend directly, always write to and reference the relevant translations file.
Always conform new and refactored code to the existing coding style in the project.
.github/workflows/release.yml (vendored, 4 lines changed)

@@ -39,14 +39,14 @@ jobs:

          STABLE_TAG=${BASE}:stable
          PULL_TAG=${BASE}:${BUILD_TAG}
          docker run --rm -v $HOME/.docker/config.json:/config.json quay.io/skopeo/stable:latest copy --authfile /config.json --multi-arch all docker://${PULL_TAG} docker://${VERSION_TAG}
          for variant in standard-arm64 tensorrt tensorrt-jp6 rk rocm synaptics; do
          for variant in standard-arm64 tensorrt tensorrt-jp6 rk rocm; do
            docker run --rm -v $HOME/.docker/config.json:/config.json quay.io/skopeo/stable:latest copy --authfile /config.json --multi-arch all docker://${PULL_TAG}-${variant} docker://${VERSION_TAG}-${variant}
          done

          # stable tag
          if [[ "${BUILD_TYPE}" == "stable" ]]; then
            docker run --rm -v $HOME/.docker/config.json:/config.json quay.io/skopeo/stable:latest copy --authfile /config.json --multi-arch all docker://${PULL_TAG} docker://${STABLE_TAG}
            for variant in standard-arm64 tensorrt tensorrt-jp6 rk rocm synaptics; do
            for variant in standard-arm64 tensorrt tensorrt-jp6 rk rocm; do
              docker run --rm -v $HOME/.docker/config.json:/config.json quay.io/skopeo/stable:latest copy --authfile /config.json --multi-arch all docker://${PULL_TAG}-${variant} docker://${STABLE_TAG}-${variant}
            done
          fi
LICENSE (2 lines changed)

@@ -1,6 +1,6 @@

The MIT License

Copyright (c) 2026 Frigate, Inc. (Frigate™)
Copyright (c) 2025 Frigate LLC (Frigate™)

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal

@@ -40,7 +40,7 @@ If you would like to make a donation to support development, please use [Github

This project is licensed under the **MIT License**.

- **Code:** The source code, configuration files, and documentation in this repository are available under the [MIT License](LICENSE). You are free to use, modify, and distribute the code as long as you include the original copyright notice.
- **Trademarks:** The "Frigate" name, the "Frigate NVR" brand, and the Frigate logo are **trademarks of Frigate, Inc.** and are **not** covered by the MIT License.
- **Trademarks:** The "Frigate" name, the "Frigate NVR" brand, and the Frigate logo are **trademarks of Frigate LLC** and are **not** covered by the MIT License.

Please see our [Trademark Policy](TRADEMARK.md) for details on acceptable use of our brand assets.

@@ -67,7 +67,7 @@ Please see our [Trademark Policy](TRADEMARK.md) for details on acceptable use of

### Built-in mask and zone editor

<div>
<img width="800" alt="Built-in mask and zone editor" src="https://github.com/blakeblackshear/frigate/assets/569905/d7885fc3-bfe6-452f-b7d0-d957cb3e31f5">
<img width="800" alt="Multi-camera scrubbing" src="https://github.com/blakeblackshear/frigate/assets/569905/d7885fc3-bfe6-452f-b7d0-d957cb3e31f5">
</div>

## Translations

@@ -80,4 +80,4 @@ We use [Weblate](https://hosted.weblate.org/projects/frigate-nvr/) to support la

---

**Copyright © 2026 Frigate, Inc.**
**Copyright © 2025 Frigate LLC.**
@@ -41,7 +41,7 @@

**Code:** The source code, configuration files, and documentation in this repository follow the [MIT License](LICENSE). You are free to use, modify, and distribute the code, but you must retain the original copyright notice.

**Trademarks:** The "Frigate" name, the "Frigate NVR" brand, and the Frigate logo are **trademarks of Frigate, Inc.** and are **not** covered by the MIT License.
**Trademarks:** The "Frigate" name, the "Frigate NVR" brand, and the Frigate logo are **trademarks of Frigate LLC** and are **not** covered by the MIT License.

For details on the proper use of our brand assets, please see our [Trademark Policy](TRADEMARK.md).

## Screenshots

@@ -87,4 +87,4 @@ Bilibili: https://space.bilibili.com/3546894915602564

---

**Copyright © 2026 Frigate, Inc.**
**Copyright © 2025 Frigate LLC.**
@@ -6,7 +6,7 @@ This document outlines the policy regarding the use of the trademarks associated

## 1. Our Trademarks

The following terms and visual assets are trademarks (the "Marks") of **Frigate, Inc.**:
The following terms and visual assets are trademarks (the "Marks") of **Frigate LLC**:

- **Frigate™**
- **Frigate NVR™**

@@ -14,7 +14,7 @@ The following terms and visual assets are trademarks (the "Marks") of **Frigate,

- **The Frigate Logo**

**Note on Common Law Rights:**
Frigate, Inc. asserts all common law rights in these Marks. The absence of a federal registration symbol (®) does not constitute a waiver of our intellectual property rights.
Frigate LLC asserts all common law rights in these Marks. The absence of a federal registration symbol (®) does not constitute a waiver of our intellectual property rights.

## 2. Interaction with the MIT License

@@ -25,7 +25,7 @@ The software in this repository is licensed under the [MIT License](LICENSE).

- The **Code** is free to use, modify, and distribute under the MIT terms.
- The **Brand (Trademarks)** is **NOT** licensed under MIT.

You may not use the Marks in any way that is not explicitly permitted by this policy or by written agreement with Frigate, Inc.
You may not use the Marks in any way that is not explicitly permitted by this policy or by written agreement with Frigate LLC.

## 3. Acceptable Use

@@ -40,7 +40,7 @@ You may use the Marks without prior written permission in the following specific

You may **NOT** use the Marks in the following ways:

- **Commercial Products:** You may not use "Frigate" in the name of a commercial product, service, or app (e.g., selling an app named _"Frigate Viewer"_ is prohibited).
- **Implying Affiliation:** You may not use the Marks in a way that suggests your project is official, sponsored by, or endorsed by Frigate, Inc.
- **Implying Affiliation:** You may not use the Marks in a way that suggests your project is official, sponsored by, or endorsed by Frigate LLC.
- **Confusing Forks:** If you fork this repository to create a derivative work, you **must** remove the Frigate logo and rename your project to avoid user confusion. You cannot distribute a modified version of the software under the name "Frigate".
- **Domain Names:** You may not register domain names containing "Frigate" that are likely to confuse users (e.g., `frigate-official-support.com`).

@@ -47,8 +47,8 @@ onnxruntime == 1.22.*

# Embeddings
transformers == 4.45.*
# Generative AI
google-genai == 1.58.*
ollama == 0.6.*
google-generativeai == 0.8.*
ollama == 0.5.*
openai == 1.65.*
# push notifications
py-vapid == 1.9.*
@@ -54,7 +54,7 @@ function setup_homekit_config() {
  local config_path="$1"

  if [[ ! -f "${config_path}" ]]; then
    echo "[INFO] Creating empty config file for HomeKit..."
    echo "[INFO] Creating empty HomeKit config file..."
    echo '{}' > "${config_path}"
  fi

@@ -69,15 +69,13 @@ function setup_homekit_config() {
  local cleaned_json="/tmp/cache/homekit_cleaned.json"
  jq '
    # Keep only the homekit section if it exists, otherwise empty object
    if has("homekit") then {homekit: .homekit} else {} end
  ' "${temp_json}" > "${cleaned_json}" 2>/dev/null || {
    echo '{}' > "${cleaned_json}"
  }
    if has("homekit") then {homekit: .homekit} else {homekit: {}} end
  ' "${temp_json}" > "${cleaned_json}" 2>/dev/null || echo '{"homekit": {}}' > "${cleaned_json}"

  # Convert back to YAML and write to the config file
  yq eval -P "${cleaned_json}" > "${config_path}" 2>/dev/null || {
    echo "[WARNING] Failed to convert cleaned config to YAML, creating minimal config"
    echo '{}' > "${config_path}"
    echo '{"homekit": {}}' > "${config_path}"
  }

  # Clean up temp files

@@ -22,31 +22,6 @@ sys.path.remove("/opt/frigate")

yaml = YAML()

# Check if arbitrary exec sources are allowed (defaults to False for security)
allow_arbitrary_exec = None
if "GO2RTC_ALLOW_ARBITRARY_EXEC" in os.environ:
    allow_arbitrary_exec = os.environ.get("GO2RTC_ALLOW_ARBITRARY_EXEC")
elif (
    os.path.isdir("/run/secrets")
    and os.access("/run/secrets", os.R_OK)
    and "GO2RTC_ALLOW_ARBITRARY_EXEC" in os.listdir("/run/secrets")
):
    allow_arbitrary_exec = (
        Path(os.path.join("/run/secrets", "GO2RTC_ALLOW_ARBITRARY_EXEC"))
        .read_text()
        .strip()
    )
# check for the add-on options file
elif os.path.isfile("/data/options.json"):
    with open("/data/options.json") as f:
        raw_options = f.read()
    options = json.loads(raw_options)
    allow_arbitrary_exec = options.get("go2rtc_allow_arbitrary_exec")

ALLOW_ARBITRARY_EXEC = allow_arbitrary_exec is not None and str(
    allow_arbitrary_exec
).lower() in ("true", "1", "yes")

FRIGATE_ENV_VARS = {k: v for k, v in os.environ.items() if k.startswith("FRIGATE_")}
# read docker secret files as env vars too
if os.path.isdir("/run/secrets"):

@@ -134,26 +109,14 @@ if LIBAVFORMAT_VERSION_MAJOR < 59:
elif go2rtc_config["ffmpeg"].get("rtsp") is None:
    go2rtc_config["ffmpeg"]["rtsp"] = rtsp_args


def is_restricted_source(stream_source: str) -> bool:
    """Check if a stream source is restricted (echo, expr, or exec)."""
    return stream_source.strip().startswith(("echo:", "expr:", "exec:"))


for name in list(go2rtc_config.get("streams", {})):
for name in go2rtc_config.get("streams", {}):
    stream = go2rtc_config["streams"][name]

    if isinstance(stream, str):
        try:
            formatted_stream = stream.format(**FRIGATE_ENV_VARS)
            if not ALLOW_ARBITRARY_EXEC and is_restricted_source(formatted_stream):
                print(
                    f"[ERROR] Stream '{name}' uses a restricted source (echo/expr/exec) which is disabled by default for security. "
                    f"Set GO2RTC_ALLOW_ARBITRARY_EXEC=true to enable arbitrary exec sources."
                )
                del go2rtc_config["streams"][name]
                continue
            go2rtc_config["streams"][name] = formatted_stream
            go2rtc_config["streams"][name] = go2rtc_config["streams"][name].format(
                **FRIGATE_ENV_VARS
            )
        except KeyError as e:
            print(
                "[ERROR] Invalid substitution found, see https://docs.frigate.video/configuration/restream#advanced-restream-configurations for more info."

@@ -161,33 +124,15 @@ for name in list(go2rtc_config.get("streams", {})):
            sys.exit(e)

    elif isinstance(stream, list):
        filtered_streams = []
        for i, stream_item in enumerate(stream):
        for i, stream in enumerate(stream):
            try:
                formatted_stream = stream_item.format(**FRIGATE_ENV_VARS)
                if not ALLOW_ARBITRARY_EXEC and is_restricted_source(formatted_stream):
                    print(
                        f"[ERROR] Stream '{name}' item {i + 1} uses a restricted source (echo/expr/exec) which is disabled by default for security. "
                        f"Set GO2RTC_ALLOW_ARBITRARY_EXEC=true to enable arbitrary exec sources."
                    )
                    continue

                filtered_streams.append(formatted_stream)
                go2rtc_config["streams"][name][i] = stream.format(**FRIGATE_ENV_VARS)
            except KeyError as e:
                print(
                    "[ERROR] Invalid substitution found, see https://docs.frigate.video/configuration/restream#advanced-restream-configurations for more info."
                )
                sys.exit(e)

        if filtered_streams:
            go2rtc_config["streams"][name] = filtered_streams
        else:
            print(
                f"[ERROR] Stream '{name}' was removed because all sources were restricted (echo/expr/exec). "
                f"Set GO2RTC_ALLOW_ARBITRARY_EXEC=true to enable arbitrary exec sources."
            )
            del go2rtc_config["streams"][name]

# add birdseye restream stream if enabled
if config.get("birdseye", {}).get("restream", False):
    birdseye: dict[str, Any] = config.get("birdseye")
@@ -18,10 +18,6 @@ proxy_set_header X-Forwarded-User $http_x_forwarded_user;
proxy_set_header X-Forwarded-Groups $http_x_forwarded_groups;
proxy_set_header X-Forwarded-Email $http_x_forwarded_email;
proxy_set_header X-Forwarded-Preferred-Username $http_x_forwarded_preferred_username;
proxy_set_header X-Auth-Request-User $http_x_auth_request_user;
proxy_set_header X-Auth-Request-Groups $http_x_auth_request_groups;
proxy_set_header X-Auth-Request-Email $http_x_auth_request_email;
proxy_set_header X-Auth-Request-Preferred-Username $http_x_auth_request_preferred_username;
proxy_set_header X-authentik-username $http_x_authentik_username;
proxy_set_header X-authentik-groups $http_x_authentik_groups;
proxy_set_header X-authentik-email $http_x_authentik_email;

@@ -14,5 +14,5 @@ nvidia_cusparse_cu12==12.5.1.*; platform_machine == 'x86_64'
nvidia_nccl_cu12==2.23.4; platform_machine == 'x86_64'
nvidia_nvjitlink_cu12==12.5.82; platform_machine == 'x86_64'
onnx==1.16.*; platform_machine == 'x86_64'
onnxruntime-gpu==1.22.*; platform_machine == 'x86_64'
onnxruntime-gpu==1.23.*; platform_machine == 'x86_64'
protobuf==3.20.3; platform_machine == 'x86_64'
@@ -50,7 +50,7 @@ cameras:

### Configuring Minimum Volume

The audio detector uses volume levels in the same way that motion in a camera feed is used for object detection. This means that Frigate will not run audio detection unless the audio volume is above the configured level, in order to reduce resource usage. Audio levels can vary widely between camera models, so it is important to run tests to see what the volume levels are. The Debug view in the Frigate UI has an Audio tab for cameras that have the `audio` role assigned, where a graph and the current levels are displayed. The `min_volume` parameter should be set to the minimum `RMS` level required to run audio detection.
The audio detector uses volume levels in the same way that motion in a camera feed is used for object detection. This means that frigate will not run audio detection unless the audio volume is above the configured level, in order to reduce resource usage. Audio levels can vary widely between camera models, so it is important to run tests to see what the volume levels are. The Debug view in the Frigate UI has an Audio tab for cameras that have the `audio` role assigned, where a graph and the current levels are displayed. The `min_volume` parameter should be set to the minimum `RMS` level required to run audio detection.

:::tip

@@ -188,10 +188,10 @@ go2rtc:
  # example for connecting to a Reolink camera that supports two way talk
  your_reolink_camera_twt:
    - "ffmpeg:http://reolink_ip/flv?port=1935&app=bcs&stream=channel0_main.bcs&user=username&password=password#video=copy#audio=copy#audio=opus"
    - "rtsp://username:password@reolink_ip/Preview_01_sub"
    - "rtsp://username:password@reolink_ip/Preview_01_sub
  your_reolink_camera_twt_sub:
    - "ffmpeg:http://reolink_ip/flv?port=1935&app=bcs&stream=channel0_ext.bcs&user=username&password=password"
    - "rtsp://username:password@reolink_ip/Preview_01_sub"
    - "rtsp://username:password@reolink_ip/Preview_01_sub
  # example for connecting to a Reolink NVR
  your_reolink_camera_via_nvr:
    - "ffmpeg:http://reolink_nvr_ip/flv?port=1935&app=bcs&stream=channel3_main.bcs&user=username&password=password" # channel numbers are 0-15

@@ -227,12 +227,6 @@ cameras:

### Unifi Protect Cameras

:::note

Unifi G5 cameras and newer need a Unifi Protect server to enable the rtsps stream; it's not possible to enable it in standalone mode.

:::

Unifi Protect cameras require the rtspx stream to be used with go2rtc.
To utilize a Unifi Protect camera, modify the rtsps link to begin with rtspx.
Additionally, remove the "?enableSrtp" from the end of the Unifi link.

@@ -258,10 +252,6 @@ ffmpeg:

TP-Link VIGI cameras need some adjustments to the main stream settings on the camera itself to avoid issues. The stream needs to be configured as `H264` with `Smart Coding` set to `off`. Without these settings you may have problems when trying to watch recorded footage. For example, Firefox will stop playback after a few seconds and show the following error message: `The media playback was aborted due to a corruption problem or because the media used features your browser did not support.`

### Wyze Wireless Cameras

Some community members have found better performance on Wyze cameras by using an alternative firmware known as [Thingino](https://thingino.com/).

## USB Cameras (aka Webcams)

To use a USB camera (webcam) with Frigate, the recommendation is to use go2rtc's [FFmpeg Device](https://github.com/AlexxIT/go2rtc?tab=readme-ov-file#source-ffmpeg-device) support:
@@ -79,12 +79,6 @@ cameras:

If the ONVIF connection is successful, PTZ controls will be available in the camera's WebUI.

:::note

Some cameras use a separate ONVIF/service account that is distinct from the device administrator credentials. If ONVIF authentication fails with the admin account, try creating or using an ONVIF/service user in the camera's firmware. Refer to your camera manufacturer's documentation for more.

:::

:::tip

If your ONVIF camera does not require authentication credentials, you may still need to specify an empty string for `user` and `password`, e.g.: `user: ""` and `password: ""`.

@@ -100,19 +94,18 @@ This list of working and non-working PTZ cameras is based on user feedback. If y

The FeatureList on the [ONVIF Conformant Products Database](https://www.onvif.org/conformant-products/) can provide a starting point to determine a camera's compatibility with Frigate's autotracking. Look to see if a camera lists `PTZRelative`, `PTZRelativePanTilt` and/or `PTZRelativeZoom`. These features are required for autotracking, but some cameras still fail to respond even if they claim support. If they are missing, autotracking will not work (though basic PTZ in the WebUI might). Avoid cameras with no database entry unless they are confirmed as working below.

| Brand or specific camera | PTZ Controls | Autotracking | Notes |
| --- | :---: | :---: | --- |
| Amcrest | ✅ | ✅ | ⛔️ Generally, Amcrest should work, but some older models (like the common IP2M-841) don't support autotracking |
| --- | :---: | :---: | --- | --- |
| Amcrest | ✅ | ✅ | ⛔️ Generally, Amcrest should work, but some older models (like the common IP2M-841) don't support autotracking |
| Amcrest ASH21 | ✅ | ❌ | ONVIF service port: 80 |
| Amcrest IP4M-S2112EW-AI | ✅ | ❌ | FOV relative movement not supported. |
| Amcrest IP5M-1190EW | ✅ | ❌ | ONVIF Port: 80. FOV relative movement not supported. |
| Annke CZ504 | ✅ | ✅ | Annke support provide specific firmware ([V5.7.1 build 250227](https://github.com/pierrepinon/annke_cz504/raw/refs/heads/main/digicap_V5-7-1_build_250227.dav)) to fix issue with ONVIF "TranslationSpaceFov" |
| Axis Q-6155E | ✅ | ❌ | ONVIF service port: 80; Camera does not support MoveStatus. |
| Ctronics PTZ | ✅ | ❌ | |
| Dahua | ✅ | ✅ | Some low-end Dahuas (lite series, picoo series (commonly), among others) have been reported to not support autotracking. These models usually don't have a four digit model number with chassis prefix and options postfix (e.g. DH-P5AE-PV vs DH-SD49825GB-HNR). |
| Dahua DH-SD2A500HB | ✅ | ❌ | |
| Dahua DH-SD49825GB-HNR | ✅ | ✅ | |
| Dahua DH-P5AE-PV | ❌ | ❌ | |
| Foscam | ✅ | ❌ | In general support PTZ, but not relative move. There are no official ONVIF certifications and tests available on the ONVIF Conformant Products Database |
| Foscam | ✅ | ❌ | In general support PTZ, but not relative move. There are no official ONVIF certifications and tests available on the ONVIF Conformant Products Database | |
| Foscam R5 | ✅ | ❌ | |
| Foscam SD4 | ✅ | ❌ | |
| Hanwha XNP-6550RH | ✅ | ❌ | |
@@ -39,7 +39,7 @@ For object classification:

:::note

A tracked object can only have a single sub label. If you are using Triggers or Face Recognition and you configure an object classification model for `person` using the sub label type, your sub label may not be assigned correctly as it depends on which enrichment completes its analysis first. This could also occur with `car` objects that are assigned a sub label for a delivery carrier. Consider using the `attribute` type instead.
A tracked object can only have a single sub label. If you are using Face Recognition and you configure an object classification model for `person` using the sub label type, your sub label may not be assigned correctly as it depends on which enrichment completes its analysis first. Consider using the `attribute` type instead.

:::

@@ -89,9 +89,9 @@ Creating and training the model is done within the Frigate UI using the `Classif

### Step 1: Name and Define

Enter a name for your model, select the object label to classify (e.g., `person`, `dog`, `car`), choose the classification type (sub label or attribute), and define your classes. Frigate will automatically include a `none` class for objects that don't fit any specific category.
Enter a name for your model, select the object label to classify (e.g., `person`, `dog`, `car`), choose the classification type (sub label or attribute), and define your classes. Include a `none` class for objects that don't fit any specific category.

For example: To classify your two cats, create a model named "Our Cats" and create two classes, "Charlie" and "Leo". A third class, "none", will be created automatically for other neighborhood cats that are not your own.
For example: To classify your two cats, create a model named "Our Cats" and create two classes, "Charlie" and "Leo". Create a third class, "none", for other neighborhood cats that are not your own.

### Step 2: Assign Training Examples
docs/docs/configuration/genai.md (new file, 235 lines)

@@ -0,0 +1,235 @@

---
id: genai
title: Generative AI
---

Generative AI can be used to automatically generate descriptive text based on the thumbnails of your tracked objects. This helps with [Semantic Search](/configuration/semantic_search) in Frigate to provide more context about your tracked objects. Descriptions are accessed via the _Explore_ view in the Frigate UI by clicking on a tracked object's thumbnail.

Requests for a description are sent to your AI provider automatically at the end of the tracked object's lifecycle, or can optionally be sent earlier, after a number of significantly changed frames, for example for use in more real-time notifications. Descriptions can also be regenerated manually via the Frigate UI. Note that if you manually enter a description for a tracked object before it ends, it will be overwritten by the generated response.

## Configuration

Generative AI can be enabled for all cameras or only for specific cameras. If GenAI is disabled for a camera, you can still manually generate descriptions for events using the HTTP API. There are currently 3 native providers available to integrate with Frigate. Other providers that support the OpenAI standard API can also be used; see the OpenAI section below.

To use Generative AI, you must define a single provider at the global level of your Frigate configuration. If the provider you choose requires an API key, you may either paste it directly into your configuration or store it in an environment variable prefixed with `FRIGATE_`.

```yaml
genai:
  provider: gemini
  api_key: "{FRIGATE_GEMINI_API_KEY}"
  model: gemini-2.0-flash

cameras:
  front_camera:
    genai:
      enabled: True # <- enable GenAI for your front camera
      use_snapshot: True
      objects:
        - person
      required_zones:
        - steps
  indoor_camera:
    objects:
      genai:
        enabled: False # <- disable GenAI for your indoor camera
```

By default, descriptions will be generated for all tracked objects and all zones. But you can also optionally specify `objects` and `required_zones` to only generate descriptions for certain tracked objects or zones.

Optionally, you can generate the description using a snapshot (if enabled) by setting `use_snapshot` to `True`. By default, this is set to `False`, which sends the uncompressed images from the `detect` stream collected over the object's lifetime to the model. Once the object lifecycle ends, only a single compressed and cropped thumbnail is saved with the tracked object. Using a snapshot might be useful when you want to _regenerate_ a tracked object's description, as it will provide the AI with a higher-quality image (typically downscaled by the AI itself) than the cropped/compressed thumbnail. Using a snapshot otherwise has a trade-off in that only a single image is sent to your provider, which will limit the model's ability to determine object movement or direction.

Generative AI can also be toggled dynamically for a camera via MQTT with the topic `frigate/<camera_name>/object_descriptions/set`, as sketched below. See the [MQTT documentation](/integrations/mqtt/#frigatecamera_nameobjectdescriptionsset).
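For example, a one-off toggle could be published with paho-mqtt. This is a sketch only: the broker hostname is a placeholder, and the `ON`/`OFF` payload convention is an assumption borrowed from Frigate's other toggle topics, so verify it against the MQTT docs for your version:

```python
import paho.mqtt.publish as publish

# Disable GenAI descriptions for front_camera; "ON" re-enables them
publish.single(
    "frigate/front_camera/object_descriptions/set",
    payload="OFF",
    hostname="mqtt.local",  # hypothetical broker address
)
```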
## Ollama

:::warning

Using Ollama on CPU is not recommended; high inference times make using Generative AI impractical.

:::

[Ollama](https://ollama.com/) allows you to self-host large language models and keep everything running locally. It provides a nice API over [llama.cpp](https://github.com/ggerganov/llama.cpp). It is highly recommended to host this server on a machine with an Nvidia graphics card or on an Apple silicon Mac for best performance.

Most of the 7b parameter 4-bit vision models will fit inside 8GB of VRAM. There is also a [Docker container](https://hub.docker.com/r/ollama/ollama) available.

Parallel requests also come with some caveats. You will need to set `OLLAMA_NUM_PARALLEL=1` and choose `OLLAMA_MAX_QUEUE` and `OLLAMA_MAX_LOADED_MODELS` values that are appropriate for your hardware and preferences. See the [Ollama documentation](https://github.com/ollama/ollama/blob/main/docs/faq.md#how-does-ollama-handle-concurrent-requests).

### Supported Models

You must use a vision capable model with Frigate. Current model variants can be found [in their model library](https://ollama.com/library). Note that Frigate will not automatically download the model you specify in your config; you must download the model to your local instance of Ollama first, e.g. by running `ollama pull llava:7b` on your Ollama server/Docker container. Note that the model specified in Frigate's config must match the downloaded model tag.

:::note

You should have at least 8 GB of RAM available (or VRAM if running on GPU) to run the 7B models, 16 GB to run the 13B models, and 32 GB to run the 33B models.

:::

#### Ollama Cloud models

Ollama also supports [cloud models](https://ollama.com/cloud), where your local Ollama instance handles requests from Frigate, but model inference is performed in the cloud. Set up Ollama locally, sign in with your Ollama account, and specify the cloud model name in your Frigate config. For more details, see the Ollama cloud model [docs](https://docs.ollama.com/cloud).

### Configuration

```yaml
genai:
  provider: ollama
  base_url: http://localhost:11434
  model: qwen3-vl:4b
```

## Google Gemini

Google Gemini has a free tier allowing [15 queries per minute](https://ai.google.dev/pricing) to the API, which is more than sufficient for standard Frigate usage.

### Supported Models

You must use a vision capable model with Frigate. Current model variants can be found [in their documentation](https://ai.google.dev/gemini-api/docs/models/gemini).

### Get API Key

To start using Gemini, you must first get an API key from [Google AI Studio](https://aistudio.google.com).

1. Accept the Terms of Service
2. Click "Get API Key" from the right hand navigation
3. Click "Create API key in new project"
4. Copy the API key for use in your config

### Configuration

```yaml
genai:
  provider: gemini
  api_key: "{FRIGATE_GEMINI_API_KEY}"
  model: gemini-2.0-flash
```

:::note

To use a different Gemini-compatible API endpoint, set the `GEMINI_BASE_URL` environment variable to your provider's API URL.

:::
## OpenAI

OpenAI does not have a free tier for their API. With the release of gpt-4o, pricing has been reduced and each generation should cost fractions of a cent if you choose to go this route.

### Supported Models

You must use a vision capable model with Frigate. Current model variants can be found [in their documentation](https://platform.openai.com/docs/models).

### Get API Key

To start using OpenAI, you must first [create an API key](https://platform.openai.com/api-keys) and [configure billing](https://platform.openai.com/settings/organization/billing/overview).

### Configuration

```yaml
genai:
  provider: openai
  api_key: "{FRIGATE_OPENAI_API_KEY}"
  model: gpt-4o
```

:::note

To use a different OpenAI-compatible API endpoint, set the `OPENAI_BASE_URL` environment variable to your provider's API URL.

:::

## Azure OpenAI

Microsoft offers several vision models through Azure OpenAI. A subscription is required.

### Supported Models

You must use a vision capable model with Frigate. Current model variants can be found [in their documentation](https://learn.microsoft.com/en-us/azure/ai-services/openai/concepts/models).

### Create Resource and Get API Key

To start using Azure OpenAI, you must first [create a resource](https://learn.microsoft.com/azure/cognitive-services/openai/how-to/create-resource?pivots=web-portal#create-a-resource). You'll need your API key, model name, and resource URL, which must include the `api-version` parameter (see the example below).

### Configuration

```yaml
genai:
  provider: azure_openai
  base_url: https://instance.cognitiveservices.azure.com/openai/responses?api-version=2025-04-01-preview
  model: gpt-5-mini
  api_key: "{FRIGATE_OPENAI_API_KEY}"
```

## Usage and Best Practices

Frigate's thumbnail search excels at identifying specific details about tracked objects – for example, using an "image caption" approach to find a "person wearing a yellow vest," "a white dog running across the lawn," or "a red car on a residential street." To enhance this further, Frigate's default prompts are designed to ask your AI provider about the intent behind the object's actions, rather than just describing its appearance.

While generating simple descriptions of detected objects is useful, understanding intent provides a deeper layer of insight. Instead of just recognizing "what" is in a scene, Frigate's default prompts aim to infer "why" it might be there or "what" it could do next. Descriptions tell you what's happening, but intent gives context. For instance, a person walking toward a door might seem like a visitor, but if they're moving quickly after hours, you can infer a potential break-in attempt. Detecting a person loitering near a door at night can trigger an alert sooner than simply noting "a person standing by the door," helping you respond based on the situation's context.

### Using GenAI for notifications

Frigate provides an [MQTT topic](/integrations/mqtt), `frigate/tracked_object_update`, that is updated with a JSON payload containing `event_id` and `description` when your AI provider returns a description for a tracked object. This description could be used directly in notifications, such as sending alerts to your phone or making audio announcements. If additional details from the tracked object are needed, you can query the [HTTP API](/integrations/api/event-events-event-id-get) using the `event_id`, e.g.: `http://frigate_ip:5000/api/events/<event_id>`.

To get notifications earlier than when an object ceases to be tracked, an additional send trigger, `after_significant_updates`, can be configured:

```yaml
genai:
  send_triggers:
    tracked_object_end: true # default
    after_significant_updates: 3 # how many updates to a tracked object before we should send an image
```
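As a hedged sketch of consuming that topic (written against the paho-mqtt 1.x callback API; the broker hostname is a placeholder):

```python
import json

import paho.mqtt.client as mqtt


def on_message(client, userdata, message):
    payload = json.loads(message.payload)
    # Per the topic documentation above, the payload carries both fields
    print(f"{payload['event_id']}: {payload['description']}")


client = mqtt.Client()  # paho-mqtt 2.x also requires a mqtt.CallbackAPIVersion argument
client.on_message = on_message
client.connect("mqtt.local")  # hypothetical broker address
client.subscribe("frigate/tracked_object_update")
client.loop_forever()
```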
## Custom Prompts

Frigate sends multiple frames from the tracked object along with a prompt to your Generative AI provider asking it to generate a description. The default prompt is as follows:

```
Analyze the sequence of images containing the {label}. Focus on the likely intent or behavior of the {label} based on its actions and movement, rather than describing its appearance or the surroundings. Consider what the {label} is doing, why, and what it might do next.
```

:::tip

Prompts can use the variable replacements `{label}`, `{sub_label}`, and `{camera}` to substitute information from the tracked object as part of the prompt.

:::

You are also able to define custom prompts in your configuration.

```yaml
genai:
  provider: ollama
  base_url: http://localhost:11434
  model: llava

objects:
  prompt: "Analyze the {label} in these images from the {camera} security camera. Focus on the actions, behavior, and potential intent of the {label}, rather than just describing its appearance."
  object_prompts:
    person: "Examine the main person in these images. What are they doing and what might their actions suggest about their intent (e.g., approaching a door, leaving an area, standing still)? Do not describe the surroundings or static details."
    car: "Observe the primary vehicle in these images. Focus on its movement, direction, or purpose (e.g., parking, approaching, circling). If it's a delivery vehicle, mention the company."
```

Prompts can also be overridden at the camera level to provide a more detailed prompt to the model about your specific camera, if you desire.

```yaml
cameras:
  front_door:
    objects:
      genai:
        enabled: True
        use_snapshot: True
        prompt: "Analyze the {label} in these images from the {camera} security camera at the front door. Focus on the actions and potential intent of the {label}."
        object_prompts:
          person: "Examine the person in these images. What are they doing, and how might their actions suggest their purpose (e.g., delivering something, approaching, leaving)? If they are carrying or interacting with a package, include details about its source or destination."
          cat: "Observe the cat in these images. Focus on its movement and intent (e.g., wandering, hunting, interacting with objects). If the cat is near the flower pots or engaging in any specific actions, mention it."
        objects:
          - person
          - cat
        required_zones:
          - steps
```

### Experiment with prompts

Many providers also have a public facing chat interface for their models. Download a couple of different thumbnails or snapshots from Frigate and try new things in the playground to get descriptions to your liking before updating the prompt in Frigate.

- OpenAI - [ChatGPT](https://chatgpt.com)
- Gemini - [Google AI Studio](https://aistudio.google.com)
- Ollama - [Open WebUI](https://docs.openwebui.com/)
@@ -17,23 +17,11 @@ Using Ollama on CPU is not recommended, high inference times make using Generati

:::

[Ollama](https://ollama.com/) allows you to self-host large language models and keep everything running locally. It is highly recommended to host this server on a machine with an Nvidia graphics card or on an Apple silicon Mac for best performance.
[Ollama](https://ollama.com/) allows you to self-host large language models and keep everything running locally. It provides a nice API over [llama.cpp](https://github.com/ggerganov/llama.cpp). It is highly recommended to host this server on a machine with an Nvidia graphics card or on an Apple silicon Mac for best performance.

Most of the 7b parameter 4-bit vision models will fit inside 8GB of VRAM. There is also a [Docker container](https://hub.docker.com/r/ollama/ollama) available.

Parallel requests also come with some caveats. You will need to set `OLLAMA_NUM_PARALLEL=1` and choose `OLLAMA_MAX_QUEUE` and `OLLAMA_MAX_LOADED_MODELS` values that are appropriate for your hardware and preferences. See the [Ollama documentation](https://docs.ollama.com/faq#how-does-ollama-handle-concurrent-requests).

### Model Types: Instruct vs Thinking

Most vision-language models are available as **instruct** models, which are fine-tuned to follow instructions and respond concisely to prompts. However, some models (such as certain Qwen-VL or minigpt variants) offer both **instruct** and **thinking** versions.

- **Instruct models** are always recommended for use with Frigate. These models generate direct, relevant, actionable descriptions that best fit Frigate's object and event summary use case.
- **Thinking models** are fine-tuned for more free-form, open-ended, and speculative outputs, which are typically not concise and may not provide the practical summaries Frigate expects. For this reason, Frigate does **not** recommend or support using thinking models.

Some models are labeled as **hybrid** (capable of both thinking and instruct tasks). In these cases, Frigate will always use instruct-style prompts and specifically disables thinking-mode behaviors to ensure concise, useful responses.

**Recommendation:**
Always select the `-instruct` or documented instruct/tagged variant of any model you use in your Frigate configuration. If in doubt, refer to your model provider's documentation or model library for guidance on the correct model variant to use.
Parallel requests also come with some caveats. You will need to set `OLLAMA_NUM_PARALLEL=1` and choose `OLLAMA_MAX_QUEUE` and `OLLAMA_MAX_LOADED_MODELS` values that are appropriate for your hardware and preferences. See the [Ollama documentation](https://github.com/ollama/ollama/blob/main/docs/faq.md#how-does-ollama-handle-concurrent-requests).

### Supported Models
@@ -53,12 +41,12 @@ If you are trying to use a single model for Frigate and HomeAssistant, it will n

The following models are recommended:

| Model         | Notes                                                                 |
| ------------- | --------------------------------------------------------------------- |
| `qwen3-vl`    | Strong visual and situational understanding, higher vram requirement  |
| `Intern3.5VL` | Relatively fast with good vision comprehension                         |
| `gemma3`      | Strong frame-to-frame understanding, slower inference times            |
| `qwen2.5-vl`  | Fast but capable model with good vision comprehension                  |

| Model             | Notes                                                                 |
| ----------------- | --------------------------------------------------------------------- |
| `qwen3-vl`        | Strong visual and situational understanding, higher vram requirement  |
| `Intern3.5VL`     | Relatively fast with good vision comprehension                         |
| `gemma3`          | Strong frame-to-frame understanding, slower inference times            |
| `qwen2.5-vl`      | Fast but capable model with good vision comprehension                  |

:::note
@@ -66,26 +54,26 @@ You should have at least 8 GB of RAM available (or VRAM if running on GPU) to ru

:::

#### Ollama Cloud models

Ollama also supports [cloud models](https://ollama.com/cloud), where your local Ollama instance handles requests from Frigate, but model inference is performed in the cloud. Set up Ollama locally, sign in with your Ollama account, and specify the cloud model name in your Frigate config. For more details, see the Ollama cloud model [docs](https://docs.ollama.com/cloud).

### Configuration

```yaml
genai:
  provider: ollama
  base_url: http://localhost:11434
  model: qwen3-vl:4b
  model: minicpm-v:8b
  provider_options: # other Ollama client options can be defined
    keep_alive: -1
    options:
      num_ctx: 8192 # make sure the context matches other services that are using ollama
```
## Google Gemini
|
||||
|
||||
Google Gemini has a [free tier](https://ai.google.dev/pricing) for the API, however the limits may not be sufficient for standard Frigate usage. Choose a plan appropriate for your installation.
|
||||
Google Gemini has a free tier allowing [15 queries per minute](https://ai.google.dev/pricing) to the API, which is more than sufficient for standard Frigate usage.
|
||||
|
||||
### Supported Models

You must use a vision capable model with Frigate. Current model variants can be found [in their documentation](https://ai.google.dev/gemini-api/docs/models/gemini).
### Get API Key

To start using Gemini, you must first get an API key from [Google AI Studio](https://aistudio.google.com).

### Configuration

```yaml
genai:
  provider: gemini
  api_key: "{FRIGATE_GEMINI_API_KEY}"
  model: gemini-2.5-flash
```
:::note

To use a different Gemini-compatible API endpoint, set the `provider_options` with the `base_url` key to your provider's API URL. For example:

```yaml
genai:
  provider: gemini
  ...
  provider_options:
    base_url: https://...
```

Other HTTP options are available; see the [python-genai documentation](https://github.com/googleapis/python-genai).

:::
## OpenAI

OpenAI does not have a free tier for their API. With the release of gpt-4o, pricing has been reduced, and each generation should cost fractions of a cent if you choose to go this route.
### Supported Models

You must use a vision capable model with Frigate. Current model variants can be found [in their documentation](https://platform.openai.com/docs/models).
### Get API Key

To start using OpenAI, you must first [create an API key](https://platform.openai.com/api-keys).

:::note

To use a different OpenAI-compatible API endpoint, set the `OPENAI_BASE_URL` environment variable to your provider's API URL.

:::
:::tip

For OpenAI-compatible servers (such as llama.cpp) that don't expose the configured context size in the API response, you can manually specify the context size in `provider_options`:

```yaml
genai:
  provider: openai
  base_url: http://your-llama-server
  model: your-model-name
  provider_options:
    context_size: 8192 # Specify the configured context size
```

This ensures Frigate uses the correct context window size when generating prompts.

:::
## Azure OpenAI

Microsoft offers several vision models through Azure OpenAI. A subscription is required.

### Supported Models

You must use a vision capable model with Frigate. Current model variants can be found [in their documentation](https://learn.microsoft.com/en-us/azure/ai-services/openai/concepts/models).

### Create Resource and Get API Key

To start using Azure OpenAI, you must first [create a resource](https://learn.microsoft.com/azure/cognitive-services/openai/how-to/create-resource?pivots=web-portal#create-a-resource). You'll need your API key, model name, and resource URL, which must include the `api-version` parameter (see the example below).
### Configuration

```yaml
genai:
  provider: azure_openai
  base_url: https://instance.cognitiveservices.azure.com/openai/responses?api-version=2025-04-01-preview
  model: gpt-5-mini
  api_key: "{FRIGATE_OPENAI_API_KEY}"
```
You are also able to define custom prompts in your configuration.

```yaml
genai:
  provider: ollama
  base_url: http://localhost:11434
  model: qwen3-vl:8b-instruct

objects:
  genai:
    prompt: "Analyze the {label} in these images from the {camera} security camera. Focus on the actions, behavior, and potential intent of the {label}, rather than just describing its appearance."
    object_prompts:
      person: "Examine the main person in these images. What are they doing and what might their actions suggest about their intent (e.g., approaching a door, leaving an area, standing still)? Do not describe the surroundings or static details."
```
Review summaries provide structured JSON responses that are saved for each review item:

```
- `title` (string): A concise, direct title that describes the purpose or overall action (e.g., "Person taking out trash", "Joe walking dog").
- `scene` (string): A narrative description of what happens across the sequence from start to finish, including setting, detected objects, and their observable actions.
- `shortSummary` (string): A brief 2-sentence summary of the scene, suitable for notifications. This is a condensed version of the scene description.
- `confidence` (float): 0-1 confidence in the analysis. Higher confidence when objects/actions are clearly visible and context is unambiguous.
- `other_concerns` (list): List of user-defined concerns that may need additional investigation.
- `potential_threat_level` (integer): 0, 1, or 2 as defined below.
```

This will show in multiple places in the UI to give additional context about each activity, and allow viewing more details when extra attention is required. Frigate's built-in notifications will automatically show the title and `shortSummary` when the data is available, while the full `scene` description is available in the UI for detailed review.
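For illustration only, a response shaped by these fields might look like the following (all values are invented):

```json
{
  "title": "Person taking out trash",
  "scene": "A person exits the front door carrying a trash bag, walks down the driveway to the curbside bin, deposits the bag, and returns inside.",
  "shortSummary": "A resident carries a trash bag to the curbside bin. They return inside immediately afterward.",
  "confidence": 0.9,
  "other_concerns": [],
  "potential_threat_level": 0
}
```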
### Defining Typical Activity

Each installation and even camera can have different parameters for what is considered typical activity:
<details>
<summary>Default Activity Context Prompt</summary>

```yaml
review:
  genai:
    activity_context_prompt: |
      ### Normal Activity Indicators (Level 0)
      - Known/verified people in any zone at any time
      - People with pets in residential areas
      - Deliveries or services during daytime/evening (6 AM - 10 PM): carrying packages to doors/porches, placing items, leaving
      - Services/maintenance workers with visible tools, uniforms, or service vehicles during daytime
      - Activity confined to public areas only (sidewalks, streets) without entering property at any time

      ### Suspicious Activity Indicators (Level 1)
      - **Testing or attempting to open doors/windows/handles on vehicles or buildings** — ALWAYS Level 1 regardless of time or duration
      - **Unidentified person in private areas (driveways, near vehicles/buildings) during late night/early morning (11 PM - 5 AM)** — ALWAYS Level 1 regardless of activity or duration
      - Taking items that don't belong to them (packages, objects from porches/driveways)
      - Climbing or jumping fences/barriers to access property
      - Attempting to conceal actions or items from view
      - Prolonged loitering: remaining in same area without visible purpose throughout most of the sequence

      ### Critical Threat Indicators (Level 2)
      - Holding break-in tools (crowbars, pry bars, bolt cutters)
      - Weapons visible (guns, knives, bats used aggressively)
      - Forced entry in progress
      - Physical aggression or violence
      - Active property damage or theft in progress

      ### Assessment Guidance
      Evaluate in this order:

      1. **If person is verified/known** → Level 0 regardless of time or activity
      2. **If person is unidentified:**
         - Check time: If late night/early morning (11 PM - 5 AM) AND in private areas (driveways, near vehicles/buildings) → Level 1
         - Check actions: If testing doors/handles, taking items, climbing → Level 1
         - Otherwise, if daytime/evening (6 AM - 10 PM) with clear legitimate purpose (delivery, service worker) → Level 0
      3. **Escalate to Level 2 if:** Weapons, break-in tools, forced entry in progress, violence, or active property damage visible (escalates from Level 0 or 1)

      The mere presence of an unidentified person in private areas during late night hours is inherently suspicious and warrants human review, regardless of what activity they appear to be doing or how brief the sequence is.
```

</details>
### Preferred Language

By default, review summaries are generated in English. You can configure Frigate to generate summaries in your preferred language by setting the `preferred_language` option:

```yaml
review:
  genai:
    enabled: true
    preferred_language: Spanish
```
## Review Reports

Along with individual review item summaries, Generative AI can also produce a single report of review items from all cameras marked "suspicious" over a specified time period (for example, a daily summary of suspicious activity while you're on vacation).

### Requesting Reports Programmatically

Review reports can be requested via the [API](/integrations/api/generate-review-summary-review-summarize-start-start-ts-end-end-ts-post) by sending a POST request to `/api/review/summarize/start/{start_ts}/end/{end_ts}` with Unix timestamps.

For Home Assistant users, there is a built-in service (`frigate.review_summarize`) that makes it easy to request review reports as part of automations or scripts. This allows you to automatically generate daily summaries, vacation reports, or custom time period reports based on your specific needs.
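As a rough sketch, a Home Assistant automation using this service could look like the following (the trigger time, data field names, and timestamp templates are assumptions for illustration, not taken from these docs):

```yaml
# Hypothetical automation: request a review report covering the last 24 hours
automation:
  - alias: "Daily Frigate review report"
    trigger:
      - platform: time
        at: "08:00:00"
    action:
      - service: frigate.review_summarize
        data:
          start_ts: "{{ (now() - timedelta(hours=24)).timestamp() | int }}" # assumed field
          end_ts: "{{ now().timestamp() | int }}" # assumed field
```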
---
id: hardware_acceleration_video
title: Video Decoding
---

import CommunityBadge from '@site/src/components/CommunityBadge';

# Video Decoding
It is highly recommended to use an integrated or discrete GPU for hardware accelerated video decoding in Frigate.

Some types of hardware acceleration are detected and used automatically, but you may need to update your configuration to enable hardware accelerated decoding in ffmpeg. To verify that hardware acceleration is working:

- Check the logs: A message will either say that hardware acceleration was automatically detected, or there will be a warning that no hardware acceleration was automatically detected.
- If hardware acceleration is specified in the config, verification can be done by ensuring the logs are free from errors. There is no CPU fallback for hardware acceleration.

:::info

Frigate supports presets for optimal hardware accelerated video decoding:

**AMD**

- [AMD](#amd-based-cpus): Frigate can utilize modern AMD integrated GPUs and AMD discrete GPUs to accelerate video decoding.

**Intel**

- [Intel](#intel-based-cpus): Frigate can utilize most Intel integrated GPUs and Arc GPUs to accelerate video decoding.

**Nvidia GPU**

- [Nvidia GPU](#nvidia-gpus): Frigate can utilize most modern Nvidia GPUs to accelerate video decoding.

**Raspberry Pi 3/4**

- [Raspberry Pi](#raspberry-pi-34): Frigate can utilize the media engine in the Raspberry Pi 3 and 4 to slightly accelerate video decoding.

**Nvidia Jetson** <CommunityBadge />

- [Jetson](#nvidia-jetson): Frigate can utilize the media engine in Jetson hardware to accelerate video decoding.

**Rockchip** <CommunityBadge />

- [RKNN](#rockchip-platform): Frigate can utilize the media engine in RockChip SOCs to accelerate video decoding.

**Other Hardware**

Depending on your system, these presets may not be compatible, and you may need to use manual hwaccel args to take advantage of your hardware. More information on hardware accelerated decoding for ffmpeg can be found here: https://trac.ffmpeg.org/wiki/HWAccelIntro

:::
## Intel-based CPUs

Frigate can utilize most Intel integrated GPUs and Arc GPUs to accelerate video decoding.

:::info

**Recommended hwaccel Preset**

| CPU Generation | Intel Driver | Recommended Preset  | Notes                                       |
| -------------- | ------------ | ------------------- | ------------------------------------------- |
| gen1 - gen5    | i965         | preset-vaapi        | qsv is not supported, may not support H.265 |
| gen6 - gen7    | iHD          | preset-vaapi        | qsv is not supported                        |
| gen8 - gen12   | iHD          | preset-vaapi        | preset-intel-qsv-\* can also be used        |
| gen13+         | iHD / Xe     | preset-intel-qsv-\* |                                             |
| Intel Arc GPU  | iHD / Xe     | preset-intel-qsv-\* |                                             |

:::
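Applying a preset from this table is a one-line config change under `ffmpeg`; a minimal sketch for a QSV-capable system decoding H.264 streams:

```yaml
ffmpeg:
  hwaccel_args: preset-intel-qsv-h264
```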
If you are passing in a device path, make sure you've passed the device through to the container.
## AMD-based CPUs

Frigate can utilize modern AMD integrated GPUs and discrete AMD GPUs to accelerate video decoding using VAAPI.

### Configuring Radeon Driver

You need to change the driver to `radeonsi` by adding the following environment variable `LIBVA_DRIVER_NAME=radeonsi` to your docker-compose file or [in the `config.yml` for HA Add-on users](advanced.md#environment_vars).

### Via VAAPI

VAAPI supports automatic profile selection so it will work automatically with both H.264 and H.265 streams.

```yaml
ffmpeg:
  hwaccel_args: preset-vaapi
```
:::note

`nvidia-smi` will not show `ffmpeg` processes when run inside the container [due to docker limitations](https://github.com/NVIDIA/nvidia-docker/issues/179#issuecomment-645579458).

:::
If you do not see these processes, check the `docker logs` for the container.

These instructions were originally based on the [Jellyfin documentation](https://jellyfin.org/docs/general/administration/hardware-acceleration.html#nvidia-hardware-acceleration-on-docker-linux).
## Raspberry Pi 3/4

Ensure you increase the allocated RAM for your GPU to at least 128 (`raspi-config` > Performance Options > GPU Memory).
If you are using the HA Add-on, you may need to use the full access variant and turn off _Protection mode_ for hardware acceleration.

```yaml
# if you want to decode a h264 stream
ffmpeg:
  hwaccel_args: preset-rpi-64-h264

# if you want to decode a h265 (hevc) stream
ffmpeg:
  hwaccel_args: preset-rpi-64-h265
```

:::note

If running Frigate through Docker, you either need to run in privileged mode or map the `/dev/video*` devices to Frigate. With Docker Compose add:

```yaml
services:
  frigate:
    ...
    devices:
      - /dev/video11:/dev/video11
```

Or with `docker run`:

```bash
docker run -d \
  --name frigate \
  ...
  --device /dev/video11 \
  ghcr.io/blakeblackshear/frigate:stable
```

`/dev/video11` is the correct device (on Raspberry Pi 4B). You can check by running the following and looking for `H264`:

```bash
for d in /dev/video*; do
  echo -e "---\n$d"
  v4l2-ctl --list-formats-ext -d $d
done
```

Or map in all the `/dev/video*` devices.

:::
# Community Supported

## NVIDIA Jetson

A separate set of docker images is available for Jetson devices. They come with an `ffmpeg` build with codecs that use the Jetson's dedicated media engine. If your Jetson host is running Jetpack 6.0+ use the `stable-tensorrt-jp6` tagged image. Note that the Orin Nano has no video encoder, so Frigate will use software encoding on this platform, but the image will still allow hardware decoding and tensorrt object detection.

You will need to use the image with the nvidia container runtime:
Fine-tune the LPR feature using these optional parameters at the global level of your config:

- **`min_area`**: The minimum area, in pixels, a license plate must be before recognition runs.
  - Default: `1000` pixels. Note: this is intentionally set very low as it is an _area_ measurement (length x width). For reference, 1000 pixels represents a ~32x32 pixel square in your camera image.
  - Depending on the resolution of your camera's `detect` stream, you can increase this value to ignore small or distant plates.
- **`device`**: Device to use to run license plate detection _and_ recognition models.
  - Default: `None`
  - This is auto-selected by Frigate and can be `CPU`, `GPU`, or the GPU's device number. For users without a model that detects license plates natively, using a GPU may increase performance of the YOLOv9 license plate detector model. See the [Hardware Accelerated Enrichments](/configuration/hardware_acceleration_enrichments.md) documentation. However, for users who run a model that detects `license_plate` natively, there is little to no performance gain reported with running LPR on GPU compared to the CPU.
- **`model_size`**: The size of the model used to identify regions of text on plates.
  - Default: `small`
  - This can be `small` or `large`.
If you are using a model that natively detects `license_plate`, add an _object mask_ over your text.

If you are not using a model that natively detects `license_plate` or you are using dedicated LPR camera mode, only a _motion mask_ over your text is required.

### I see "Error running ... model" in my logs, or my inference time is very high. How can I fix this?

This usually happens when your GPU is unable to compile or use one of the LPR models. Set your `device` to `CPU` and try again. GPU acceleration only provides a slight performance increase, and the models are lightweight enough to run without issue on most CPUs.
The jsmpeg live view will use more browser and client GPU resources. Using go2rtc is recommended when possible.

| Source | Frame Rate                            | Resolution | Audio                        | Requires go2rtc | Notes                                                                                                                                                               |
| ------ | ------------------------------------- | ---------- | ---------------------------- | --------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| jsmpeg | same as `detect -> fps`, capped at 10 | 720p       | no                           | no              | Resolution is configurable, but go2rtc is recommended if you want higher resolutions and better frame rates. jsmpeg is Frigate's default without go2rtc configured. |
| mse    | native                                | native     | yes (depends on audio codec) | yes             | iPhone requires iOS 17.1+, Firefox is h.264 only. This is Frigate's default when go2rtc is configured.                                                              |
| webrtc | native                                | native     | yes (depends on audio codec) | yes             | Requires extra configuration. Frigate attempts to use WebRTC when MSE fails or when using a camera's two-way talk feature.                                          |

### Camera Settings Recommendations
WebRTC works by creating a TCP or UDP connection on port `8555`. However, it requires additional configuration:

- For access through Tailscale, the Frigate system's Tailscale IP must be added as a WebRTC candidate. Tailscale IPs all start with `100.`, and are reserved within the `100.64.0.0/10` CIDR block.

- Note that some browsers may not support H.265 (HEVC). You can check your browser's current version for H.265 compatibility [here](https://github.com/AlexxIT/go2rtc?tab=readme-ov-file#codecs-madness).
A TensorFlow Lite model is provided in the container at `/edgetpu_model.tflite` and is used by this detector type by default.

#### YOLOv9

YOLOv9 models that are compiled for TensorFlow Lite and properly quantized are supported, but not included by default. [Download the model](https://github.com/dbro/frigate-detector-edgetpu-yolo9/releases/download/v1.0/yolov9-s-relu6-best_320_int8_edgetpu.tflite), bind mount the file into the container, and provide the path with `model.path`. Note that the linked model requires a 17-label [labelmap file](https://raw.githubusercontent.com/dbro/frigate-detector-edgetpu-yolo9/refs/heads/main/labels-coco17.txt) that includes only 17 COCO classes.
<details>
<summary>YOLOv9 Setup & Config</summary>

```yaml
model:
  labelmap_path: /config/labels-coco17.txt
```

Note that due to hardware limitations of the Coral, the labelmap is a subset of the COCO labels and includes only 17 object classes.

</details>
After placing the downloaded onnx model in your config/model_cache folder, you can use the following configuration:

```yaml
detectors:
  ov:
    type: openvino
    device: CPU

model:
  model_type: dfine
```
When using Docker Compose:

```yaml
services:
  frigate:
    ...
    devices:
      - /dev/dri
      - /dev/kfd
```

For reference on recommended settings see [running ROCm/pytorch in Docker](https://rocm.docs.amd.com/projects/install-on-linux/en/develop/how-to/3rd-party/pytorch-install.html#using-docker-with-pytorch-pre-installed).
When using Docker Compose:

```yaml
services:
  frigate:
    ...
    environment:
      HSA_OVERRIDE_GFX_VERSION: "10.0.0"
```

Figuring out what version you need can be complicated as you can't tell the chipset name and driver from the AMD brand name.
```sh
...
COPY --from=build /dfine/output/dfine_${MODEL_SIZE}_obj2coco.onnx /dfine-${MODEL_SIZE}.onnx
EOF
```
### Downloading RF-DETR Model

RF-DETR can be exported as ONNX by running the command below. You can copy and paste the whole thing to your terminal and execute it, changing `MODEL_SIZE=Nano` in the first line to `Nano`, `Small`, or `Medium` as desired.

```sh
docker build . --build-arg MODEL_SIZE=Nano --rm --output . -f- <<'EOF'
FROM python:3.11 AS build
RUN apt-get update && apt-get install --no-install-recommends -y libgl1 && rm -rf /var/lib/apt/lists/*
COPY --from=ghcr.io/astral-sh/uv:0.8.0 /uv /bin/
WORKDIR /rfdetr
RUN uv pip install --system rfdetr[onnxexport] torch==2.8.0 onnx==1.19.1 onnxscript
ARG MODEL_SIZE
RUN python3 -c "from rfdetr import RFDETR${MODEL_SIZE}; x = RFDETR${MODEL_SIZE}(resolution=320); x.export(simplify=True)"
FROM scratch
...
EOF
```
This adds features including the ability to deep link directly into the app.

In order to install Frigate as a PWA, the following requirements must be met:

- Frigate must be accessed via a secure context (localhost, secure https, VPN, etc.)
- On Android, Firefox, Chrome, Edge, Opera, and Samsung Internet Browser all support installing PWAs.
- On iOS 16.4 and later, PWAs can be installed from the Share menu in Safari, Chrome, Edge, Firefox, and Orion.

Installation varies slightly based on the device that is being used:

- Desktop: Use the install button typically found in the right edge of the address bar
- Android: Use the `Install as App` button in the more options menu for Chrome, and the `Add app to Home screen` button for Firefox
- iOS: Use the `Add to Homescreen` button in the share menu

## Usage

Once set up, the Frigate app can be used wherever it has access to Frigate. This means it can be set up as local-only, VPN-only, or fully accessible depending on your needs.
```yaml
genai:
  # Optional additional args to pass to the GenAI Provider (default: None)
  provider_options:
    keep_alive: -1
  # Optional: Options to pass during inference calls (default: {})
  runtime_options:
    temperature: 0.7

# Optional: Configuration for audio transcription
# NOTE: only the enabled option can be overridden at the camera level
```
In this configuration (a sketch of the corresponding stream definitions follows this list):

- `front_door` stream is used by Frigate for viewing, recording, and detection. The `#backchannel=0` parameter prevents go2rtc from establishing the audio output backchannel, so it won't block two-way talk access.
- `front_door_twoway` stream is used for two-way talk functionality. This stream can be used by Frigate's WebRTC viewer when two-way talk is enabled, or by other applications (like Home Assistant Advanced Camera Card) that need access to the camera's audio output channel.
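A minimal sketch of such go2rtc stream definitions (the RTSP URL and credentials are placeholders, not from the original example):

```yaml
go2rtc:
  streams:
    front_door: # used by Frigate for viewing, recording, and detection
      - "rtsp://user:pass@192.168.1.10:554/stream#backchannel=0"
    front_door_twoway: # leaves the camera's audio backchannel available for two-way talk
      - "rtsp://user:pass@192.168.1.10:554/stream"
```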
## Security: Restricted Stream Sources

For security reasons, the `echo:`, `expr:`, and `exec:` stream sources are disabled by default in go2rtc. These sources allow arbitrary command execution and can pose security risks if misconfigured.

If you attempt to use these sources in your configuration, the streams will be removed and an error message will be printed in the logs.

To enable these sources, you must set the environment variable `GO2RTC_ALLOW_ARBITRARY_EXEC=true`. This can be done in your Docker Compose file or container environment:

```yaml
environment:
  - GO2RTC_ALLOW_ARBITRARY_EXEC=true
```

:::warning

Enabling arbitrary exec sources allows execution of arbitrary commands through go2rtc stream configurations. Only enable this if you understand the security implications and trust all sources of your configuration.

:::

## Advanced Restream Configurations

The [exec](https://github.com/AlexxIT/go2rtc/tree/v1.9.10#source-exec) source in go2rtc can be used for custom ffmpeg commands. An example is below:

:::warning

The `exec:`, `echo:`, and `expr:` sources are disabled by default for security. You must set `GO2RTC_ALLOW_ARBITRARY_EXEC=true` to use them. See [Security: Restricted Stream Sources](#security-restricted-stream-sources) for more information.

:::

NOTE: The output will need to be passed with two curly braces `{{output}}`
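As a sketch based on the go2rtc exec documentation (the stream name and media file path are placeholders):

```yaml
go2rtc:
  streams:
    test_stream: exec:ffmpeg -hide_banner -re -stream_loop -1 -i /media/sample.mp4 -c copy -rtsp_transport tcp -f rtsp {{output}}
```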
Cameras configured to output H.264 video and AAC audio will offer the most compatibility.

- **Stream Viewing**: This stream will be rebroadcast as is to Home Assistant for viewing with the stream component. Setting this resolution too high will use significant bandwidth when viewing streams in Home Assistant, and they may not load reliably over slower connections.

:::tip

For the best experience in Frigate's UI, configure your camera so that the detection and recording streams use the same aspect ratio. For example, if your main stream is 3840x2160 (16:9), set your substream to 640x360 (also 16:9) instead of 640x480 (4:3). While not strictly required, matching aspect ratios helps ensure seamless live stream display and preview/recordings playback.

:::

### Choosing a detect resolution

The ideal resolution for detection is one where the objects you want to detect fit inside the dimensions of the model used by Frigate (320x320). Frigate does not pass the entire camera frame to object detection. It will crop an area of motion from the full frame and look in that portion of the frame. If the area being inspected is larger than 320x320, Frigate must resize it before running object detection. Higher resolutions do not improve the detection accuracy because the additional detail is lost in the resize. Below you can see a reference for how large a 320x320 area is against common resolutions.
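In practice this means a modest substream is usually plenty for detection; a minimal sketch of a per-camera detect configuration (the camera name and values are illustrative):

```yaml
cameras:
  front_yard:
    detect:
      width: 1280 # match the resolution of the stream assigned the detect role
      height: 720
      fps: 5
```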
Here are some of the cameras I recommend:

- <a href="https://amzn.to/4fwoNWA" target="_blank" rel="nofollow noopener sponsored">Loryta(Dahua) IPC-T549M-ALED-S3</a> (affiliate link)
- <a href="https://amzn.to/3YXpcMw" target="_blank" rel="nofollow noopener sponsored">Loryta(Dahua) IPC-T54IR-AS</a> (affiliate link)
- <a href="https://amzn.to/3AvBHoY" target="_blank" rel="nofollow noopener sponsored">Amcrest IP5M-T1179EW-AI-V3</a> (affiliate link)
- <a href="https://www.bhphotovideo.com/c/product/1705511-REG/hikvision_colorvu_ds_2cd2387g2p_lsu_sl_8mp_network.html" target="_blank" rel="nofollow noopener">HIKVISION DS-2CD2387G2P-LSU/SL ColorVu 8MP Panoramic Turret IP Camera</a> (affiliate link)

I may earn a small commission for my endorsement, recommendation, testimonial, or link to any products or services from this website.
If the EQ13 is out of stock, the link below may take you to a suggested alternative.

:::

| Name                                                                                                          | Capabilities                                                               | Notes                                               |
| ------------------------------------------------------------------------------------------------------------- | --------------------------------------------------------------------------- | --------------------------------------------------- |
| Beelink EQ13 (<a href="https://amzn.to/4jn2qVr" target="_blank" rel="nofollow noopener sponsored">Amazon</a>) | Can run object detection on several 1080p cameras with low-medium activity | Dual gigabit NICs for easy isolated camera network. |
| Intel i3-1220P ([Amazon](https://www.amazon.com/Beelink-i3-1220P-Computer-Display-Gigabit/dp/B0DDCKT9YP))     | Can handle a large number of 1080p cameras with high activity              |                                                     |
| Intel 125H ([Amazon](https://www.amazon.com/MINISFORUM-Pro-125H-Barebone-Computer-HDMI2-1/dp/B0FH21FSZM))     | Can handle a significant number of 1080p cameras with high activity        | Includes NPU for more efficient detection in 0.17+  |
## Detectors

Frigate supports multiple different detectors that work on different types of hardware:

**Most Hardware**

- [Hailo](#hailo-8): The Hailo8 and Hailo8L AI Acceleration module is available in m.2 format with a HAT for RPi devices, offering a wide range of compatibility with devices.

  - [Supports many model architectures](../../configuration/object_detectors#configuration)
  - Runs best with tiny or small size models

- [Google Coral EdgeTPU](#google-coral-tpu): The Google Coral EdgeTPU is available in USB and m.2 format allowing for a wide range of compatibility with devices.

  - [Supports primarily ssdlite and mobilenet model architectures](../../configuration/object_detectors#edge-tpu-detector)

- <CommunityBadge /> [MemryX](#memryx-mx3): The MX3 M.2 accelerator module is available in m.2 format allowing for a wide range of compatibility with devices.

**Nvidia**

- [TensorRT](#tensorrt---nvidia-gpu): TensorRT can run on Nvidia GPUs to provide efficient object detection.

  - [Supports majority of model architectures via ONNX](../../configuration/object_detectors#onnx-supported-models)
  - Runs well with any size model, including large ones
### Google Coral TPU

:::warning

The Coral is no longer recommended for new Frigate installations, except in deployments with particularly low power requirements or hardware incapable of utilizing alternative AI accelerators for object detection. Instead, we suggest using one of the numerous other supported object detectors. Frigate will continue to provide support for the Coral TPU for as long as practicably possible, given it's still one of the most power-efficient devices for executing object detection models.

:::

Frigate supports both the USB and M.2 versions of the Google Coral.

- The USB version is compatible with the widest variety of hardware and does not require a driver on the host machine. However, it does lack the automatic throttling features of the other versions.
- The PCIe and M.2 versions require installation of a driver on the host. https://github.com/jnicolson/gasket-builder should be used.

A single Coral can handle many cameras using the default model and will be sufficient for the majority of users. You can calculate the maximum performance of your Coral based on the inference speed reported by Frigate. With an inference speed of 10, your Coral will top out at `1000/10=100`, or 100 frames per second. If your detection fps is regularly getting close to that, you should first consider tuning motion masks. If those are already properly configured, a second Coral may be needed.
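If a second Coral is added, each one is defined as its own detector; a minimal sketch for two USB Corals (the detector names are arbitrary):

```yaml
detectors:
  coral1:
    type: edgetpu
    device: usb:0
  coral2:
    type: edgetpu
    device: usb:1
```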
The OpenVINO detector type is able to run on a range of Intel CPUs, GPUs, and NPUs.

:::note

Intel B-series (Battlemage) GPUs are not officially supported with Frigate 0.17, though a user has [provided steps to rebuild the Frigate container](https://github.com/blakeblackshear/frigate/discussions/21257) with support for them.

:::
Inference speeds vary greatly depending on the CPU or GPU used, some known examples are below:

| Name           | MobileNetV2 Inference Time | YOLOv9 Inference Time                             | YOLO-NAS Inference Time   | RF-DETR Inference Time | Notes                              |
| -------------- | -------------------------- | ------------------------------------------------- | ------------------------- | ---------------------- | ---------------------------------- |
| Intel N100     | ~ 15 ms                    | s-320: 30 ms                                      | 320: ~ 25 ms              |                        | Can only run one detector instance |
| Intel N150     | ~ 15 ms                    | t-320: 16 ms s-320: 24 ms                         |                           |                        |                                    |
| Intel Iris XE  | ~ 10 ms                    | t-320: 6 ms t-640: 14 ms s-320: 8 ms s-640: 16 ms | 320: ~ 10 ms 640: ~ 20 ms | 320-n: 33 ms           |                                    |
| Intel NPU      | ~ 6 ms                     | s-320: 11 ms s-640: 30 ms                         | 320: ~ 14 ms 640: ~ 34 ms | 320-n: 40 ms           |                                    |
| Intel Arc A310 | ~ 5 ms                     | t-320: 7 ms t-640: 11 ms s-320: 8 ms s-640: 15 ms | 320: ~ 8 ms 640: ~ 14 ms  |                        |                                    |
| Intel Arc A380 | ~ 6 ms                     |                                                   | 320: ~ 10 ms 640: ~ 22 ms | 336: 20 ms 448: 27 ms  |                                    |
| Intel Arc A750 | ~ 4 ms                     |                                                   | 320: ~ 8 ms               |                        |                                    |
The shm size cannot be set per container for Home Assistant add-ons. However, this is probably not required since by default Home Assistant Supervisor allocates `/dev/shm` with half the size of your total memory. If your machine has 8GB of memory, chances are that Frigate will have access to up to 4GB without any additional configuration.

## Extra Steps for Specific Hardware

The following sections contain additional setup steps that are only required if you are using specific hardware. If you are not using any of these hardware types, you can skip to the [Docker](#docker) installation section.

### Raspberry Pi 3/4

By default, the Raspberry Pi limits the amount of memory available to the GPU. In order to use ffmpeg hardware acceleration, you must increase the available memory by setting `gpu_mem` to the maximum recommended value in `config.txt` as described in the [official docs](https://www.raspberrypi.org/documentation/computers/config_txt.html#memory-options).
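As a concrete sketch, this is a single line added to `/boot/config.txt` (the value shown follows the "at least 128" guidance used elsewhere in these docs; check the official documentation for the maximum recommended value for your board):

```
gpu_mem=128
```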
The Hailo-8 and Hailo-8L AI accelerators are available in both M.2 and HAT form factors.

#### Installation
:::warning

The Raspberry Pi kernel includes an older version of the Hailo driver that is incompatible with Frigate. You **must** follow the installation steps below to install the correct driver version, and you **must** disable the built-in kernel driver as described in step 1.

:::

1. **Disable the built-in Hailo driver (Raspberry Pi only)**:

   :::note

   If you are **not** using a Raspberry Pi, skip this step and proceed directly to step 2.

   :::

   If you are using a Raspberry Pi, you need to blacklist the built-in kernel Hailo driver to prevent conflicts. First, check if the driver is currently loaded:

   ```bash
   lsmod | grep hailo
   ```

   If it shows `hailo_pci`, unload it:

   ```bash
   sudo rmmod hailo_pci
   ```

   Now blacklist the driver to prevent it from loading on boot:

   ```bash
   echo "blacklist hailo_pci" | sudo tee /etc/modprobe.d/blacklist-hailo_pci.conf
   ```

   Update initramfs to ensure the blacklist takes effect:

   ```bash
   sudo update-initramfs -u
   ```

   Reboot your Raspberry Pi:

   ```bash
   sudo reboot
   ```

   After rebooting, verify the built-in driver is not loaded:

   ```bash
   lsmod | grep hailo
   ```

   This command should return no results. If it still shows `hailo_pci`, the blacklist did not take effect properly and you may need to check for other Hailo packages installed via apt that are loading the driver.

2. **Run the installation script**:

   Download the installation script:

   ```bash
   wget https://raw.githubusercontent.com/blakeblackshear/frigate/dev/docker/hailo8l/user_installation.sh
   ```

   Make it executable:

   ```bash
   sudo chmod +x user_installation.sh
   ```

   Run the script:

   ```bash
   ./user_installation.sh
   ```

   The script will:

   - Install necessary build dependencies
   - Clone and build the Hailo driver from the official repository
   - Install the driver
   - Download and install the required firmware
   - Set up udev rules

3. **Reboot your system**:

   After the script completes successfully, reboot to load the firmware:

   ```bash
   sudo reboot
   ```

4. **Verify the installation**:

   After rebooting, verify that the Hailo device is available:

   ```bash
   ls -l /dev/hailo0
   ```

   You should see the device listed. You can also verify the driver is loaded:

   ```bash
   lsmod | grep hailo_pci
   ```
#### Setup

```yaml
services:
  frigate:
    ...
    shm_size: "512mb" # update for your cameras based on calculation above
    devices:
      - /dev/bus/usb:/dev/bus/usb # Passes the USB Coral, needs to be modified for other versions
      - /dev/apex_0:/dev/apex_0 # Passes a PCIe Coral, follow driver instructions here https://github.com/jnicolson/gasket-builder
      - /dev/video11:/dev/video11 # For Raspberry Pi 4B
      - /dev/dri/renderD128:/dev/dri/renderD128 # AMD / Intel GPU, needs to be updated for your hardware
      - /dev/accel:/dev/accel # Intel NPU
```
There are important limitations in HA OS to be aware of:

- Separate local storage for media is not yet supported by Home Assistant
- AMD GPUs are not supported because HA OS does not include the mesa driver.
- Intel NPUs are not supported because HA OS does not include the NPU firmware.
- Nvidia GPUs are not supported because addons do not support the nvidia runtime.

:::
---
title: Updating
---

# Updating Frigate

The current stable version of Frigate is **0.17.0**. The release notes and any breaking changes for this version can be found on the [Frigate GitHub releases page](https://github.com/blakeblackshear/frigate/releases/tag/v0.17.0).

Keeping Frigate up to date ensures you benefit from the latest features, performance improvements, and bug fixes. The update process varies slightly depending on your installation method (Docker, Home Assistant Addon, etc.). Below are instructions for the most common setups.
If you're running Frigate via Docker (recommended method), follow these steps:

2. **Update and Pull the Latest Image**:

   - If using Docker Compose:

     - Edit your `docker-compose.yml` file to specify the desired version tag (e.g., `0.17.0` instead of `0.16.3`). For example:

       ```yaml
       services:
         frigate:
           image: ghcr.io/blakeblackshear/frigate:0.17.0
       ```

     - Then pull the image:

       ```bash
       docker pull ghcr.io/blakeblackshear/frigate:0.17.0
       ```

     - **Note for `stable` Tag Users**: If your `docker-compose.yml` uses the `stable` tag (e.g., `ghcr.io/blakeblackshear/frigate:stable`), you don't need to update the tag manually. The `stable` tag always points to the latest stable release after pulling.

   - If using `docker run`:

     - Pull the image with the appropriate tag (e.g., `0.17.0`, `0.17.0-tensorrt`, or `stable`):

       ```bash
       docker pull ghcr.io/blakeblackshear/frigate:0.17.0
       ```

3. **Start the Container**:
If an update causes issues:

1. Stop Frigate.
2. Restore your backed-up config file and database.
3. Revert to the previous image version:
   - For Docker: Specify an older tag (e.g., `ghcr.io/blakeblackshear/frigate:0.16.3`) in your `docker run` command.
   - For Docker Compose: Edit your `docker-compose.yml`, specify the older version tag (e.g., `ghcr.io/blakeblackshear/frigate:0.16.3`), and re-run `docker compose up -d`.
   - For Home Assistant: Reinstall the previous addon version manually via the repository if needed and restart the addon.
4. Verify the old version is running again.
Now you should be able to start Frigate by running `docker compose up -d` from within the directory containing your `docker-compose.yml`.

This section assumes that you already have an environment set up as described in [Installation](../frigate/installation.md). You should also configure your cameras according to the [camera setup guide](/frigate/camera_setup). Pay particular attention to the section on choosing a detect resolution.

### Step 1: Start Frigate

At this point you should be able to start Frigate and a basic config will be created automatically.

### Step 2: Add a camera

You can click the `Add Camera` button to use the camera setup wizard to get your first camera added into Frigate.

If you get an error image from the camera, this means ffmpeg was not able to get the video feed from your camera. Check the logs for error messages from ffmpeg. The default ffmpeg arguments are designed to work with H264 RTSP cameras that support TCP connections.

FFmpeg arguments for other types of cameras can be found [here](../configuration/camera_specific.md).

### Step 3: Configure hardware acceleration (recommended)
```yaml
services:
  frigate:
    ...
    devices:
      - /dev/dri/renderD128:/dev/dri/renderD128 # for intel & amd hwaccel, needs to be updated for your hardware
    ...
```
```yaml
services:
  frigate:
    ...
    devices:
      - /dev/bus/usb:/dev/bus/usb # passes the USB Coral, needs to be modified for other versions
      - /dev/apex_0:/dev/apex_0 # passes a PCIe Coral, follow driver instructions here https://github.com/jnicolson/gasket-builder
    ...
```
To load a preview gif of a review item:

```
https://HA_URL/api/frigate/notifications/<review-id>/review_preview.gif
```

To load the thumbnail of a review item:

```
https://HA_URL/api/frigate/notifications/<review-id>/<camera>/review_thumbnail.webp
```

<a name="streams"></a>

## RTSP stream

## Frigate Camera Topics

### `frigate/<camera_name>/status/<role>`

Publishes the current health status of each role that is enabled (`audio`, `detect`, `record`). Possible values are:
@@ -15,11 +15,13 @@ There are three model types offered in Frigate+, `mobiledet`, `yolonas`, and `yo
 
 Not all model types are supported by all detectors, so it's important to choose a model type to match your detector as shown in the table under [supported detector types](#supported-detector-types). You can test model types for compatibility and speed on your hardware by using the base models.
 
-| Model Type  | Description |
-| ----------- | ----------- |
-| `mobiledet` | Based on the same architecture as the default model included with Frigate. Runs on Google Coral devices and CPUs. |
-| `yolonas`   | A newer architecture that offers slightly higher accuracy and improved detection of small objects. Runs on Intel, NVidia GPUs, and AMD GPUs. |
-| `yolov9`    | A leading SOTA (state of the art) object detection model with similar performance to yolonas, but on a wider range of hardware options. Runs on Intel, NVidia GPUs, AMD GPUs, Hailo, MemryX, Apple Silicon, and Rockchip NPUs. |
+| Model Type  | Description |
+| ----------- | ----------- |
+| `mobiledet` | Based on the same architecture as the default model included with Frigate. Runs on Google Coral devices and CPUs. |
+| `yolonas`   | A newer architecture that offers slightly higher accuracy and improved detection of small objects. Runs on Intel, NVidia GPUs, and AMD GPUs. |
+| `yolov9`    | A leading SOTA (state of the art) object detection model with similar performance to yolonas, but on a wider range of hardware options. Runs on Intel, NVidia GPUs, AMD GPUs, Hailo, MemryX\*, Apple Silicon\*, and Rockchip NPUs. |
+
+_\* Support coming in 0.17_
 
 ### YOLOv9 Details
 
@@ -37,7 +39,7 @@ If you have a Hailo device, you will need to specify the hardware you have when
 
 #### Rockchip (RKNN) Support
 
-For 0.16, YOLOv9 onnx models will need to be manually converted. First, you will need to configure Frigate to use the model id for your YOLOv9 onnx model so it downloads the model to your `model_cache` directory. From there, you can follow the [documentation](/configuration/object_detectors.md#converting-your-own-onnx-model-to-rknn-format) to convert it. Automatic conversion is available in 0.17 and later.
+For 0.16, YOLOv9 onnx models will need to be manually converted. First, you will need to configure Frigate to use the model id for your YOLOv9 onnx model so it downloads the model to your `model_cache` directory. From there, you can follow the [documentation](/configuration/object_detectors.md#converting-your-own-onnx-model-to-rknn-format) to convert it. Automatic conversion is coming in 0.17.
 
 ## Supported detector types
 
@@ -53,7 +55,7 @@ Currently, Frigate+ models support CPU (`cpu`), Google Coral (`edgetpu`), OpenVi
 | [Hailo8/Hailo8L/Hailo8R](/configuration/object_detectors#hailo-8)   | `hailo8l` | `yolov9` |
 | [Rockchip NPU](/configuration/object_detectors#rockchip-platform)\* | `rknn`    | `yolov9` |
 
-_\* Requires manual conversion in 0.16. Automatic conversion available in 0.17 and later._
+_\* Requires manual conversion in 0.16. Automatic conversion coming in 0.17._
 
 ## Improving your model
@@ -1,73 +0,0 @@
----
-id: cpu
-title: High CPU Usage
----
-
-High CPU usage can impact Frigate's performance and responsiveness. This guide outlines the most effective configuration changes to help reduce CPU consumption and optimize resource usage.
-
-## 1. Hardware Acceleration for Video Decoding
-
-**Priority: Critical**
-
-Video decoding is one of the most CPU-intensive tasks in Frigate. While an AI accelerator handles object detection, it does not assist with decoding video streams. Hardware acceleration (hwaccel) offloads this work to your GPU or specialized video decode hardware, significantly reducing CPU usage and enabling you to support more cameras on the same hardware.
-
-### Key Concepts
-
-**Resolution & FPS Impact:** The decoding burden grows rapidly with resolution and frame rate. A 4K stream at 30 FPS requires roughly 4 times the processing power of a 1080p stream at the same frame rate, and doubling the frame rate doubles the decode workload. This is why hardware acceleration becomes critical when working with multiple high-resolution cameras.
-
-**Hardware Acceleration Benefits:** By using dedicated video decode hardware, you can:
-
-- Significantly reduce CPU usage per camera stream
-- Support 2-3x more cameras on the same hardware
-- Free up CPU resources for motion detection and other Frigate processes
-- Reduce system heat and power consumption
-
-### Configuration
-
-Frigate provides preset configurations for common hardware acceleration scenarios. Set up `hwaccel_args` based on your hardware in your [configuration](../configuration/reference) as described in the [getting started guide](../guides/getting_started).
-
-### Troubleshooting Hardware Acceleration
-
-If hardware acceleration isn't working:
-
-1. Check Frigate logs for FFmpeg errors related to hwaccel
-2. Verify the hardware device is accessible inside the container
-3. Ensure your camera streams use H.264 or H.265 codecs (most common)
-4. Try different presets if the automatic detection fails
-5. Check that your GPU drivers are properly installed on the host system
-
-## 2. Detector Selection and Configuration
-
-**Priority: Critical**
-
-Choosing the right detector for your hardware is the single most important factor for detection performance. The detector is responsible for running the AI model that identifies objects in video frames. Different detector types have vastly different performance characteristics and hardware requirements, as detailed in the [hardware documentation](../frigate/hardware).
-
-### Understanding Detector Performance
-
-Frigate uses motion detection as a first-line check before running expensive object detection, as explained in the [motion detection documentation](../configuration/motion_detection). When motion is detected, Frigate creates a "region" (the green boxes in the debug viewer) and sends it to the detector. The detector's inference speed determines how many detections per second your system can handle.
-
-**Calculating Detector Capacity:** Your detector has a finite capacity measured in detections per second. With an inference speed of 10ms, your detector can handle approximately 100 detections per second (1000ms / 10ms = 100). If your cameras collectively require more than this capacity, you'll experience delays, missed detections, or the system will fall behind.
-
-### Choosing the Right Detector
-
-Different detectors have vastly different performance characteristics; see the expected performance for object detectors in [the hardware docs](../frigate/hardware).
-
-### Multiple Detector Instances
-
-When a single detector cannot keep up with your camera count, some detector types (`openvino`, `onnx`) allow you to define multiple detector instances to share the workload. This is particularly useful with GPU-based detectors that have sufficient VRAM to run multiple inference processes.
-
-For detailed instructions on configuring multiple detectors, see the [Object Detectors documentation](../configuration/object_detectors).
-
-**When to add a second detector:**
-
-- Skipped FPS is consistently > 0 even during normal activity
-
-### Model Selection and Optimization
-
-The model you use significantly impacts detector performance. Frigate provides default models optimized for each detector type, but you can customize them as described in the [detector documentation](../configuration/object_detectors).
-
-**Model Size Trade-offs:**
-
-- Smaller models (320x320): Faster inference; Frigate is specifically optimized for a 320x320 size model.
-- Larger models (640x640): Slower inference; can sometimes have higher accuracy on very large objects that take up a majority of the frame.
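As a rough sketch of the capacity arithmetic described in the removed guide above (the function name and per-camera numbers here are illustrative, not from Frigate's codebase):

```python
def detector_capacity(inference_speed_ms: float) -> float:
    """Approximate detections per second a single detector can sustain."""
    return 1000.0 / inference_speed_ms

# A detector with 10 ms inference speed handles ~100 detections/sec.
capacity = detector_capacity(10.0)  # 100.0

# Hypothetical per-camera demand during activity (regions sent per second).
camera_demand = {"front_door": 30, "driveway": 45, "backyard": 40}
total_demand = sum(camera_demand.values())  # 115

# If demand exceeds capacity, expect skipped detections or lag.
overloaded = total_demand > capacity  # True -> consider a second detector
```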
@@ -1,6 +1,6 @@
 ---
 id: dummy-camera
-title: Analyzing Object Detection
+title: Troubleshooting Detection
 ---
 
 When investigating object detection or tracking problems, it can be helpful to replay an exported video as a temporary "dummy" camera. This lets you reproduce issues locally, iterate on configuration (detections, zones, enrichment settings), and capture logs and clips for analysis.
@@ -37,7 +37,7 @@ cameras:
 
 ## Steps
 
-1. Export or copy the clip you want to replay to the Frigate host (e.g., `/media/frigate/` or `debug/clips/`). Depending on what you are looking to debug, it is often helpful to add some "pre-capture" time (where the tracked object is not yet visible) to the clip when exporting.
+1. Export or copy the clip you want to replay to the Frigate host (e.g., `/media/frigate/` or `debug/clips/`).
 2. Add the temporary camera to `config/config.yml` (example above). Use a unique name such as `test` or `replay_camera` so it's easy to remove later.
    - If you're debugging a specific camera, copy the settings from that camera (frame rate, model/enrichment settings, zones, etc.) into the temporary camera so the replay closely matches the original environment. Leave `record` and `snapshots` disabled unless you are specifically debugging recording or snapshot behavior.
 3. Restart Frigate.
@@ -1,6 +1,6 @@
 ---
 id: edgetpu
-title: EdgeTPU Errors
+title: Troubleshooting EdgeTPU
 ---
 
 ## USB Coral Not Detected
@@ -68,7 +68,8 @@ The USB Coral can become stuck and need to be restarted, this can happen for a n
 
 The most common reason for the PCIe Coral not being detected is that the driver has not been installed. This process varies based on what OS and kernel that is being run.
 
-- In most cases https://github.com/jnicolson/gasket-builder can be used to build and install the latest version of the driver.
+- In most cases [the Coral docs](https://coral.ai/docs/m2/get-started/#2-install-the-pcie-driver-and-edge-tpu-runtime) show how to install the driver for the PCIe based Coral.
+- For some newer Linux distros (for example, Ubuntu 22.04+), https://github.com/jnicolson/gasket-builder can be used to build and install the latest version of the driver.
 
 ## Attempting to load TPU as pci & Fatal Python error: Illegal instruction
@@ -1,6 +1,6 @@
 ---
 id: gpu
-title: GPU Errors
+title: Troubleshooting GPU
 ---
 
 ## OpenVINO
 
@@ -1,6 +1,6 @@
 ---
 id: memory
-title: Memory Usage
+title: Memory Troubleshooting
 ---
 
 Frigate includes built-in memory profiling using [memray](https://bloomberg.github.io/memray/) to help diagnose memory issues. This feature allows you to profile specific Frigate modules to identify memory leaks, excessive allocations, or other memory-related problems.
@@ -36,6 +36,7 @@ Frigate processes are named using a module-based naming scheme. Common module na
 - `frigate.output` - Output processing
 - `frigate.audio_manager` - Audio processing
 - `frigate.embeddings` - Embeddings processing
+- `frigate.embeddings_manager` - Embeddings manager
 
 You can also specify the full process name (including camera-specific identifiers) if you want to profile a specific camera:
@@ -1,6 +1,6 @@
 ---
 id: recordings
-title: Recordings Errors
+title: Troubleshooting Recordings
 ---
 
 ## I have Frigate configured for motion recording only, but it still seems to be recording even with no motion. Why?
@@ -170,7 +170,7 @@ const config: Config = {
         ],
       },
     ],
-      copyright: `Copyright © ${new Date().getFullYear()} Frigate, Inc.`,
+      copyright: `Copyright © ${new Date().getFullYear()} Frigate LLC`,
     },
   },
   plugins: [
docs/package-lock.json (generated)
@@ -18490,9 +18490,9 @@
       }
     },
     "node_modules/qs": {
-      "version": "6.14.1",
-      "resolved": "https://registry.npmjs.org/qs/-/qs-6.14.1.tgz",
-      "integrity": "sha512-4EK3+xJl8Ts67nLYNwqw/dsFVnCf+qR7RgXSK9jEEm9unao3njwMDdmsdvoKBKHzxd7tCYz5e5M+SnMjdtXGQQ==",
+      "version": "6.14.0",
+      "resolved": "https://registry.npmjs.org/qs/-/qs-6.14.0.tgz",
+      "integrity": "sha512-YWWTjgABSKcvs/nWBi9PycY/JiPJqOD4JA6o9Sej2AtvSGarXxKC3OQSk4pAarbdQlKAh5D4FCQkJNkW+GAn3w==",
       "license": "BSD-3-Clause",
       "dependencies": {
         "side-channel": "^1.1.0"
@@ -129,27 +129,10 @@ const sidebars: SidebarsConfig = {
   Troubleshooting: [
     "troubleshooting/faqs",
     "troubleshooting/recordings",
+    "troubleshooting/gpu",
+    "troubleshooting/edgetpu",
+    "troubleshooting/memory",
     "troubleshooting/dummy-camera",
-    {
-      type: "category",
-      label: "Troubleshooting Hardware",
-      link: {
-        type: "generated-index",
-        title: "Troubleshooting Hardware",
-        description: "Troubleshooting Problems with Hardware",
-      },
-      items: ["troubleshooting/gpu", "troubleshooting/edgetpu"],
-    },
-    {
-      type: "category",
-      label: "Troubleshooting Resource Usage",
-      link: {
-        type: "generated-index",
-        title: "Troubleshooting Resource Usage",
-        description: "Troubleshooting issues with resource usage",
-      },
-      items: ["troubleshooting/cpu", "troubleshooting/memory"],
-    },
   ],
   Development: [
     "development/contributing",
docs/static/_headers (vendored)
@@ -1,8 +0,0 @@
-https://:project.pages.dev/*
-  X-Robots-Tag: noindex
-
-https://:version.:project.pages.dev/*
-  X-Robots-Tag: noindex
-
-https://docs-dev.frigate.video/*
-  X-Robots-Tag: noindex
docs/static/img/branding/LICENSE.md (vendored)
@@ -1,12 +1,12 @@
 # COPYRIGHT AND TRADEMARK NOTICE
 
 The images, logos, and icons contained in this directory (the "Brand Assets") are
-proprietary to Frigate, Inc. and are NOT covered by the MIT License governing the
+proprietary to Frigate LLC and are NOT covered by the MIT License governing the
 rest of this repository.
 
 1. TRADEMARK STATUS
    The "Frigate" name and the accompanying logo are common law trademarks™ of
-   Frigate, Inc. Frigate, Inc. reserves all rights to these marks.
+   Frigate LLC. Frigate LLC reserves all rights to these marks.
 
 2. LIMITED PERMISSION FOR USE
    Permission is hereby granted to display these Brand Assets strictly for the
@@ -17,9 +17,9 @@ rest of this repository.
 3. RESTRICTIONS
    You may NOT:
    a. Use these Brand Assets to represent a derivative work (fork) as an official
-      product of Frigate, Inc.
+      product of Frigate LLC.
    b. Use these Brand Assets in a way that implies endorsement, sponsorship, or
-      commercial affiliation with Frigate, Inc.
+      commercial affiliation with Frigate LLC.
   c. Modify or alter the Brand Assets.
 
 If you fork this repository with the intent to distribute a modified or competing
@@ -27,4 +27,4 @@ version of the software, you must replace these Brand Assets with your own
 original content.
 
 ALL RIGHTS RESERVED.
-Copyright (c) 2026 Frigate, Inc.
+Copyright (c) 2025 Frigate LLC.
docs/static/img/frigate-autotracking-example.gif (vendored)
Binary file not shown. Before: 12 MiB, After: 28 MiB.
@@ -23,12 +23,7 @@ from markupsafe import escape
 from peewee import SQL, fn, operator
 from pydantic import ValidationError
 
-from frigate.api.auth import (
-    allow_any_authenticated,
-    allow_public,
-    get_allowed_cameras_for_filter,
-    require_role,
-)
+from frigate.api.auth import allow_any_authenticated, allow_public, require_role
 from frigate.api.defs.query.app_query_parameters import AppTimelineHourlyQueryParameters
 from frigate.api.defs.request.app_body import AppConfigSetBody
 from frigate.api.defs.tags import Tags
@@ -692,19 +687,13 @@ def plusModels(request: Request, filterByCurrentModelDetector: bool = False):
 @router.get(
     "/recognized_license_plates", dependencies=[Depends(allow_any_authenticated())]
 )
-def get_recognized_license_plates(
-    split_joined: Optional[int] = None,
-    allowed_cameras: List[str] = Depends(get_allowed_cameras_for_filter),
-):
+def get_recognized_license_plates(split_joined: Optional[int] = None):
     try:
         query = (
             Event.select(
                 SQL("json_extract(data, '$.recognized_license_plate') AS plate")
             )
-            .where(
-                (SQL("json_extract(data, '$.recognized_license_plate') IS NOT NULL"))
-                & (Event.camera << allowed_cameras)
-            )
+            .where(SQL("json_extract(data, '$.recognized_license_plate') IS NOT NULL"))
             .distinct()
         )
         recognized_license_plates = [row[0] for row in query.tuples()]
@@ -848,10 +848,9 @@ async def onvif_probe(
         try:
             if isinstance(uri, str) and uri.startswith("rtsp://"):
                 if username and password and "@" not in uri:
-                    # Inject raw credentials and add only the
-                    # authenticated version. The credentials will be encoded
-                    # later by ffprobe_stream or the config system.
-                    cred = f"{username}:{password}@"
+                    # Inject URL-encoded credentials and add only the
+                    # authenticated version.
+                    cred = f"{quote_plus(username)}:{quote_plus(password)}@"
                     injected = uri.replace(
                         "rtsp://", f"rtsp://{cred}", 1
                     )
@@ -904,8 +903,12 @@ async def onvif_probe(
             "/cam/realmonitor?channel=1&subtype=0",
             "/11",
         ]
-        # Use raw credentials for pattern fallback URIs when provided
-        auth_str = f"{username}:{password}@" if username and password else ""
+        # Use URL-encoded credentials for pattern fallback URIs when provided
+        auth_str = (
+            f"{quote_plus(username)}:{quote_plus(password)}@"
+            if username and password
+            else ""
+        )
         rtsp_port = 554
         for path in common_paths:
             uri = f"rtsp://{auth_str}{host}:{rtsp_port}{path}"
@@ -927,7 +930,7 @@ async def onvif_probe(
                 and uri.startswith("rtsp://")
                 and "@" not in uri
             ):
-                cred = f"{username}:{password}@"
+                cred = f"{quote_plus(username)}:{quote_plus(password)}@"
                 cred_uri = uri.replace("rtsp://", f"rtsp://{cred}", 1)
                 if cred_uri not in to_test:
                     to_test.append(cred_uri)
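For reference, `quote_plus` percent-encodes characters such as `@` and `:` that would otherwise be mistaken for URI delimiters; a minimal sketch of the injection pattern used above (the camera address and credentials are made up):

```python
from urllib.parse import quote_plus

username, password = "admin", "p@ss:word"
uri = "rtsp://10.0.10.10:554/stream"

# "p@ss:word" becomes "p%40ss%3Aword", so the extra "@" and ":" cannot be
# confused with the userinfo/host separators in the URI.
cred = f"{quote_plus(username)}:{quote_plus(password)}@"
authenticated = uri.replace("rtsp://", f"rtsp://{cred}", 1)
# rtsp://admin:p%40ss%3Aword@10.0.10.10:554/stream
```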
@@ -759,28 +759,15 @@ def delete_classification_dataset_images(
         CLIPS_DIR, sanitize_filename(name), "dataset", sanitize_filename(category)
     )
 
-    deleted_count = 0
     for id in list_of_ids:
         file_path = os.path.join(folder, sanitize_filename(id))
 
         if os.path.isfile(file_path):
             os.unlink(file_path)
-            deleted_count += 1
 
     if os.path.exists(folder) and not os.listdir(folder) and category.lower() != "none":
         os.rmdir(folder)
 
-    # Update training metadata to reflect deleted images
-    # This ensures the dataset is marked as changed after deletion
-    # (even if the total count happens to be the same after adding and deleting)
-    if deleted_count > 0:
-        sanitized_name = sanitize_filename(name)
-        metadata = read_training_metadata(sanitized_name)
-        if metadata:
-            last_count = metadata.get("last_training_image_count", 0)
-            updated_count = max(0, last_count - deleted_count)
-            write_training_metadata(sanitized_name, updated_count)
-
     return JSONResponse(
         content=({"success": True, "message": "Successfully deleted images."}),
         status_code=200,
@@ -10,7 +10,7 @@ class ReviewQueryParams(BaseModel):
     cameras: str = "all"
     labels: str = "all"
     zones: str = "all"
-    reviewed: Union[int, SkipJsonSchema[None]] = None
+    reviewed: int = 0
     limit: Union[int, SkipJsonSchema[None]] = None
     severity: Union[SeverityEnum, SkipJsonSchema[None]] = None
     before: Union[float, SkipJsonSchema[None]] = None
@@ -1,4 +1,4 @@
-from typing import Optional, Union
+from typing import Union
 
 from pydantic import BaseModel, Field
 from pydantic.json_schema import SkipJsonSchema
@@ -16,5 +16,5 @@ class ExportRecordingsBody(BaseModel):
     source: PlaybackSourceEnum = Field(
         default=PlaybackSourceEnum.recordings, title="Playback source"
     )
-    name: Optional[str] = Field(title="Friendly name", default=None, max_length=256)
+    name: str = Field(title="Friendly name", default=None, max_length=256)
     image_path: Union[str, SkipJsonSchema[None]] = None
@@ -1935,7 +1935,7 @@ async def label_clip(request: Request, camera_name: str, label: str):
     try:
         event = event_query.get()
 
-        return await event_clip(request, event.id, 0)
+        return await event_clip(request, event.id)
     except DoesNotExist:
         return JSONResponse(
             content={"success": False, "message": "Event not found"}, status_code=404
@@ -144,8 +144,6 @@ async def review(
             (UserReviewStatus.has_been_reviewed == False)
             | (UserReviewStatus.has_been_reviewed.is_null())
         )
-    elif reviewed == 1:
-        review_query = review_query.where(UserReviewStatus.has_been_reviewed == True)
 
     # Apply ordering and limit
     review_query = (
@@ -388,7 +388,7 @@ class WebPushClient(Communicator):
             else:
                 title = base_title
 
-            message = payload["after"]["data"]["metadata"]["shortSummary"]
+            message = payload["after"]["data"]["metadata"]["scene"]
         else:
             zone_names = payload["after"]["data"]["zones"]
             formatted_zone_names = []
@@ -26,6 +26,3 @@ class GenAIConfig(FrigateBaseModel):
     provider_options: dict[str, Any] = Field(
         default={}, title="GenAI Provider extra options."
     )
-    runtime_options: dict[str, Any] = Field(
-        default={}, title="Options to pass during inference calls."
-    )
@@ -28,7 +28,6 @@ from frigate.util.builtin import (
     get_ffmpeg_arg_list,
 )
 from frigate.util.config import (
-    CURRENT_CONFIG_VERSION,
     StreamInfoRetriever,
     convert_area_to_pixels,
     find_config_file,
@@ -77,12 +76,11 @@ logger = logging.getLogger(__name__)
 
 yaml = YAML()
 
-DEFAULT_CONFIG = f"""
+DEFAULT_CONFIG = """
 mqtt:
   enabled: False
 
-cameras: {{}} # No cameras defined, UI wizard should be used
-version: {CURRENT_CONFIG_VERSION}
+cameras: {} # No cameras defined, UI wizard should be used
 """
 
 DEFAULT_DETECTORS = {"cpu": {"type": "cpu"}}
@@ -662,13 +660,6 @@ class FrigateConfig(FrigateBaseModel):
         # generate zone contours
         if len(camera_config.zones) > 0:
             for zone in camera_config.zones.values():
-                if zone.filters:
-                    for object_name, filter_config in zone.filters.items():
-                        zone.filters[object_name] = RuntimeFilterConfig(
-                            frame_shape=camera_config.frame_shape,
-                            **filter_config.model_dump(exclude_unset=True),
-                        )
-
                 zone.generate_contour(camera_config.frame_shape)
 
         # Set live view stream if none is set
@@ -762,7 +753,8 @@ class FrigateConfig(FrigateBaseModel):
             if new_config and f.tell() == 0:
                 f.write(DEFAULT_CONFIG)
                 logger.info(
-                    "Created default config file, see the getting started docs for configuration: https://docs.frigate.video/guides/getting_started"
+                    "Created default config file, see the getting started docs \
+                    for configuration https://docs.frigate.video/guides/getting_started"
                 )
 
             f.seek(0)
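Worth noting: a backslash continuation inside a string literal, as on the added side above, embeds the next line's leading spaces in the message itself; a small standalone illustration:

```python
msg = "first line \
    continued"

# One space before the backslash plus four spaces of indent survive
# inside the string, so the message contains five spaces in a row.
assert msg == "first line     continued"
```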
@@ -86,11 +86,7 @@ class ObjectDescriptionProcessor(PostProcessorApi):
             and data["id"] not in self.early_request_sent
         ):
             if data["has_clip"] and data["has_snapshot"]:
-                try:
-                    event: Event = Event.get(Event.id == data["id"])
-                except DoesNotExist:
-                    logger.error(f"Event {data['id']} not found")
-                    return
+                event: Event = Event.get(Event.id == data["id"])
 
                 if (
                     not camera_config.objects.genai.objects
@@ -92,7 +92,7 @@ class ReviewDescriptionProcessor(PostProcessorApi):
 
         pixels_per_image = width * height
         tokens_per_image = pixels_per_image / 1250
-        prompt_tokens = 3800
+        prompt_tokens = 3500
        response_tokens = 300
         available_tokens = context_size - prompt_tokens - response_tokens
         max_frames = int(available_tokens / tokens_per_image)
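As a rough illustration of the frame-budget arithmetic in this function (the 640x480 image size and 8192-token context window are hypothetical values, not from the codebase):

```python
width, height = 640, 480
context_size = 8192  # hypothetical model context window

pixels_per_image = width * height            # 307200
tokens_per_image = pixels_per_image / 1250   # ~245.8 tokens per image
prompt_tokens = 3500                         # reserved for the text prompt
response_tokens = 300                        # reserved for the reply
available_tokens = context_size - prompt_tokens - response_tokens  # 4392
max_frames = int(available_tokens / tokens_per_image)              # 17
```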
@@ -8,9 +8,6 @@ class ReviewMetadata(BaseModel):
     scene: str = Field(
         description="A comprehensive description of the setting and entities, including relevant context and plausible inferences if supported by visual evidence."
     )
-    shortSummary: str = Field(
-        description="A brief 2-sentence summary of the scene, suitable for notifications. Should capture the key activity and context without full detail."
-    )
     confidence: float = Field(
         description="A float between 0 and 1 representing your overall confidence in this analysis."
     )
@@ -419,21 +419,14 @@ class CustomObjectClassificationProcessor(RealTimeProcessorApi):
         """
         if object_id not in self.classification_history:
             self.classification_history[object_id] = []
-            logger.debug(f"Created new classification history for {object_id}")
 
         self.classification_history[object_id].append(
             (current_label, current_score, current_time)
         )
 
         history = self.classification_history[object_id]
-        logger.debug(
-            f"History for {object_id}: {len(history)} entries, latest=({current_label}, {current_score})"
-        )
 
         if len(history) < 3:
-            logger.debug(
-                f"History for {object_id} has {len(history)} entries, need at least 3"
-            )
             return None, 0.0
 
         label_counts = {}
@@ -452,27 +445,14 @@ class CustomObjectClassificationProcessor(RealTimeProcessorApi):
         best_count = label_counts[best_label]
 
         consensus_threshold = total_attempts * 0.6
-        logger.debug(
-            f"Consensus calc for {object_id}: label_counts={label_counts}, "
-            f"best_label={best_label}, best_count={best_count}, "
-            f"total={total_attempts}, threshold={consensus_threshold}"
-        )
 
         if best_count < consensus_threshold:
-            logger.debug(
-                f"No consensus for {object_id}: {best_count} < {consensus_threshold}"
-            )
             return None, 0.0
 
         avg_score = sum(label_scores[best_label]) / len(label_scores[best_label])
 
         if best_label == "none":
-            logger.debug(f"Filtering 'none' label for {object_id}")
             return None, 0.0
 
-        logger.debug(
-            f"Consensus reached for {object_id}: {best_label} with avg_score={avg_score}"
-        )
         return best_label, avg_score
 
     def process_frame(self, obj_data, frame):
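A minimal standalone sketch of the consensus rule implemented above (the history entries are made up; the 60% threshold and three-attempt minimum mirror the code):

```python
history = [("cat", 0.91, 1.0), ("cat", 0.88, 2.0), ("dog", 0.55, 3.0)]

label_counts: dict[str, int] = {}
label_scores: dict[str, list[float]] = {}
for label, score, _time in history:
    label_counts[label] = label_counts.get(label, 0) + 1
    label_scores.setdefault(label, []).append(score)

best_label = max(label_counts, key=label_counts.get)
total_attempts = len(history)

# A label wins only if it accounts for at least 60% of attempts.
if label_counts[best_label] >= total_attempts * 0.6:
    avg = sum(label_scores[best_label]) / len(label_scores[best_label])
    print(best_label, avg)  # cat 0.895 (2 of 3 attempts agree)
else:
    print("no consensus yet")
```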
@@ -580,30 +560,17 @@ class CustomObjectClassificationProcessor(RealTimeProcessorApi):
         )
 
         if score < self.model_config.threshold:
-            logger.debug(
-                f"{self.model_config.name}: Score {score} < threshold {self.model_config.threshold} for {object_id}, skipping"
-            )
+            logger.debug(f"Score {score} is less than threshold.")
             return
 
         sub_label = self.labelmap[best_id]
 
-        logger.debug(
-            f"{self.model_config.name}: Object {object_id} (label={obj_data['label']}) passed threshold with sub_label={sub_label}, score={score}"
-        )
-
         consensus_label, consensus_score = self.get_weighted_score(
             object_id, sub_label, score, now
         )
 
-        logger.debug(
-            f"{self.model_config.name}: get_weighted_score returned consensus_label={consensus_label}, consensus_score={consensus_score} for {object_id}"
-        )
-
         if consensus_label is not None:
             camera = obj_data["camera"]
-            logger.info(
-                f"{self.model_config.name}: Publishing sub_label={consensus_label} for {obj_data['label']} object {object_id} on {camera}"
-            )
 
             if (
                 self.model_config.object_config.classification_type
@@ -139,31 +139,8 @@ class ONNXModelRunner(BaseModelRunner):
         ModelTypeEnum.dfine.value,
     ]
 
-    @staticmethod
-    def is_concurrent_model(model_type: str | None) -> bool:
-        """Check if model requires thread locking for concurrent inference.
-
-        Some models (like JinaV2) share one runner between text and vision embeddings
-        called from different threads, requiring thread synchronization.
-        """
-        if not model_type:
-            return False
-
-        # Import here to avoid circular imports
-        from frigate.embeddings.types import EnrichmentModelTypeEnum
-
-        return model_type == EnrichmentModelTypeEnum.jina_v2.value
-
-    def __init__(self, ort: ort.InferenceSession, model_type: str | None = None):
+    def __init__(self, ort: ort.InferenceSession):
         self.ort = ort
-        self.model_type = model_type
-
-        # Thread lock to prevent concurrent inference (needed for JinaV2 which shares
-        # one runner between text and vision embeddings called from different threads)
-        if self.is_concurrent_model(model_type):
-            self._inference_lock = threading.Lock()
-        else:
-            self._inference_lock = None
 
     def get_input_names(self) -> list[str]:
         return [input.name for input in self.ort.get_inputs()]
@@ -173,10 +150,6 @@ class ONNXModelRunner(BaseModelRunner):
         return self.ort.get_inputs()[0].shape[3]
 
     def run(self, input: dict[str, Any]) -> Any | None:
-        if self._inference_lock:
-            with self._inference_lock:
-                return self.ort.run(None, input)
-
         return self.ort.run(None, input)
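For background, the pattern removed here serializes calls into one session shared across threads; a generic sketch of the idea under made-up names (not Frigate's actual classes):

```python
import threading

class LockedRunner:
    """Serialize inference on a session shared between threads."""

    def __init__(self, session, needs_lock: bool):
        self.session = session
        # Only pay the locking cost when the session is actually shared.
        self._lock = threading.Lock() if needs_lock else None

    def run(self, inputs):
        if self._lock:
            with self._lock:
                return self.session.run(None, inputs)
        return self.session.run(None, inputs)
```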
@@ -603,6 +576,5 @@ def get_optimized_runner(
             ),
             providers=providers,
             provider_options=options,
         ),
-        model_type=model_type,
     )
@@ -203,9 +203,7 @@ class EmbeddingMaintainer(threading.Thread):
         # post processors
         self.post_processors: list[PostProcessorApi] = []
 
-        if self.genai_client is not None and any(
-            c.review.genai.enabled_in_config for c in self.config.cameras.values()
-        ):
+        if any(c.review.genai.enabled_in_config for c in self.config.cameras.values()):
             self.post_processors.append(
                 ReviewDescriptionProcessor(
                     self.config, self.requestor, self.metrics, self.genai_client
@@ -246,9 +244,7 @@ class EmbeddingMaintainer(threading.Thread):
             )
             self.post_processors.append(semantic_trigger_processor)
 
-        if self.genai_client is not None and any(
-            c.objects.genai.enabled_in_config for c in self.config.cameras.values()
-        ):
+        if any(c.objects.genai.enabled_in_config for c in self.config.cameras.values()):
             self.post_processors.append(
                 ObjectDescriptionProcessor(
                     self.config,
@@ -633,7 +629,7 @@ class EmbeddingMaintainer(threading.Thread):
 
         camera, frame_name, _, _, motion_boxes, _ = data
 
-        if not camera or len(motion_boxes) == 0 or camera not in self.config.cameras:
+        if not camera or len(motion_boxes) == 0:
             return
 
         camera_config = self.config.cameras[camera]
@@ -2,7 +2,6 @@
 
 import logging
 import os
-import threading
 import warnings
 
 from transformers import AutoFeatureExtractor, AutoTokenizer
@@ -55,7 +54,6 @@ class JinaV1TextEmbedding(BaseEmbedding):
         self.tokenizer = None
         self.feature_extractor = None
         self.runner = None
-        self._lock = threading.Lock()
         files_names = list(self.download_urls.keys()) + [self.tokenizer_file]
 
         if not all(
@@ -136,18 +134,17 @@ class JinaV1TextEmbedding(BaseEmbedding):
         )
 
     def _preprocess_inputs(self, raw_inputs):
-        with self._lock:
-            max_length = max(len(self.tokenizer.encode(text)) for text in raw_inputs)
-            return [
-                self.tokenizer(
-                    text,
-                    padding="max_length",
-                    truncation=True,
-                    max_length=max_length,
-                    return_tensors="np",
-                )
-                for text in raw_inputs
-            ]
+        max_length = max(len(self.tokenizer.encode(text)) for text in raw_inputs)
+        return [
+            self.tokenizer(
+                text,
+                padding="max_length",
+                truncation=True,
+                max_length=max_length,
+                return_tensors="np",
+            )
+            for text in raw_inputs
+        ]
 
 
 class JinaV1ImageEmbedding(BaseEmbedding):
@@ -177,7 +174,6 @@ class JinaV1ImageEmbedding(BaseEmbedding):
         self.download_path = os.path.join(MODEL_CACHE_DIR, self.model_name)
         self.feature_extractor = None
         self.runner: BaseModelRunner | None = None
-        self._lock = threading.Lock()
         files_names = list(self.download_urls.keys())
         if not all(
             os.path.exists(os.path.join(self.download_path, n)) for n in files_names
@@ -220,9 +216,8 @@ class JinaV1ImageEmbedding(BaseEmbedding):
         )
 
     def _preprocess_inputs(self, raw_inputs):
-        with self._lock:
-            processed_images = [self._process_image(img) for img in raw_inputs]
-            return [
-                self.feature_extractor(images=image, return_tensors="np")
-                for image in processed_images
-            ]
+        processed_images = [self._process_image(img) for img in raw_inputs]
+        return [
+            self.feature_extractor(images=image, return_tensors="np")
+            for image in processed_images
+        ]
@@ -3,7 +3,6 @@
 import io
 import logging
 import os
-import threading
 
 import numpy as np
 from PIL import Image
@@ -54,11 +53,6 @@ class JinaV2Embedding(BaseEmbedding):
         self.tokenizer = None
         self.image_processor = None
         self.runner = None
-
-        # Lock to prevent concurrent calls (text and vision share this instance)
-        self._call_lock = threading.Lock()
-
         # download the model and tokenizer
         files_names = list(self.download_urls.keys()) + [self.tokenizer_file]
         if not all(
             os.path.exists(os.path.join(self.download_path, n)) for n in files_names
@@ -206,40 +200,37 @@ class JinaV2Embedding(BaseEmbedding):
     def __call__(
         self, inputs: list[str] | list[Image.Image] | list[str], embedding_type=None
     ) -> list[np.ndarray]:
-        # Lock the entire call to prevent race conditions when text and vision
-        # embeddings are called concurrently from different threads
-        with self._call_lock:
-            self.embedding_type = embedding_type
-            if not self.embedding_type:
-                raise ValueError(
-                    "embedding_type must be specified either in __init__ or __call__"
-                )
+        self.embedding_type = embedding_type
+        if not self.embedding_type:
+            raise ValueError(
+                "embedding_type must be specified either in __init__ or __call__"
+            )
 
-            self._load_model_and_utils()
-            processed = self._preprocess_inputs(inputs)
-            batch_size = len(processed)
+        self._load_model_and_utils()
+        processed = self._preprocess_inputs(inputs)
+        batch_size = len(processed)
 
-            # Prepare ONNX inputs with matching batch sizes
-            onnx_inputs = {}
-            if self.embedding_type == "text":
-                onnx_inputs["input_ids"] = np.stack([x[0] for x in processed])
-                onnx_inputs["pixel_values"] = np.zeros(
-                    (batch_size, 3, 512, 512), dtype=np.float32
-                )
-            elif self.embedding_type == "vision":
-                onnx_inputs["input_ids"] = np.zeros((batch_size, 16), dtype=np.int64)
-                onnx_inputs["pixel_values"] = np.stack([x[0] for x in processed])
-            else:
-                raise ValueError("Invalid embedding type")
+        # Prepare ONNX inputs with matching batch sizes
+        onnx_inputs = {}
+        if self.embedding_type == "text":
+            onnx_inputs["input_ids"] = np.stack([x[0] for x in processed])
+            onnx_inputs["pixel_values"] = np.zeros(
+                (batch_size, 3, 512, 512), dtype=np.float32
+            )
+        elif self.embedding_type == "vision":
+            onnx_inputs["input_ids"] = np.zeros((batch_size, 16), dtype=np.int64)
+            onnx_inputs["pixel_values"] = np.stack([x[0] for x in processed])
+        else:
+            raise ValueError("Invalid embedding type")
 
-            # Run inference
-            outputs = self.runner.run(onnx_inputs)
-            if self.embedding_type == "text":
-                embeddings = outputs[2]  # text embeddings
-            elif self.embedding_type == "vision":
-                embeddings = outputs[3]  # image embeddings
-            else:
-                raise ValueError("Invalid embedding type")
+        # Run inference
+        outputs = self.runner.run(onnx_inputs)
+        if self.embedding_type == "text":
+            embeddings = outputs[2]  # text embeddings
+        elif self.embedding_type == "vision":
+            embeddings = outputs[3]  # image embeddings
+        else:
+            raise ValueError("Invalid embedding type")
 
-            embeddings = self._postprocess_outputs(embeddings)
-            return [embedding for embedding in embeddings]
+        embeddings = self._postprocess_outputs(embeddings)
+        return [embedding for embedding in embeddings]
@@ -101,7 +101,6 @@ When forming your description:
 Your response MUST be a flat JSON object with:
 - `title` (string): A concise, direct title that describes the primary action or event in the sequence, not just what you literally see. Use spatial context when available to make titles more meaningful. When multiple objects/actions are present, prioritize whichever is most prominent or occurs first. Use names from "Objects in Scene" based on what you visually observe. If you see both a name and an unidentified object of the same type but visually observe only one person/object, use ONLY the name. Examples: "Joe walking dog", "Person taking out trash", "Vehicle arriving in driveway", "Joe accessing vehicle", "Person leaving porch for driveway".
 - `scene` (string): A narrative description of what happens across the sequence from start to finish, in chronological order. Start by describing how the sequence begins, then describe the progression of events. **Describe all significant movements and actions in the order they occur.** For example, if a vehicle arrives and then a person exits, describe both actions sequentially. **Only describe actions you can actually observe happening in the frames provided.** Do not infer or assume actions that aren't visible (e.g., if you see someone walking but never see them sit, don't say they sat down). Include setting, detected objects, and their observable actions. Avoid speculation or filling in assumed behaviors. Your description should align with and support the threat level you assign.
-- `shortSummary` (string): A brief 2-sentence summary of the scene, suitable for notifications. Should capture the key activity and context without full detail. This should be a condensed version of the scene description above.
 - `confidence` (float): 0-1 confidence in your analysis. Higher confidence when objects/actions are clearly visible and context is unambiguous. Lower confidence when the sequence is unclear, objects are partially obscured, or context is ambiguous.
 - `potential_threat_level` (integer): 0, 1, or 2 as defined in "Normal Activity Patterns for This Property" above. Your threat level must be consistent with your scene description and the guidance above.
 {get_concern_prompt()}
@@ -193,8 +192,6 @@ Input format: Each event is a JSON object with:
 - "title", "scene", "confidence", "potential_threat_level" (0-2), "other_concerns", "camera", "time", "start_time", "end_time"
 - "context": array of related events from other cameras that occurred during overlapping time periods
 
-**Note: Use the "scene" field for event descriptions in the report. Ignore any "shortSummary" field if present.**
-
 Report Structure - Use this EXACT format:
 
 # Security Summary - {time_range}
@@ -64,7 +64,6 @@ class OpenAIClient(GenAIClient):
                     },
                 ],
                 timeout=self.timeout,
-                **self.genai_config.runtime_options,
             )
         except Exception as e:
             logger.warning("Azure OpenAI returned an error: %s", str(e))
@@ -3,8 +3,8 @@
 import logging
 from typing import Optional
 
-from google import genai
-from google.genai import errors, types
+import google.generativeai as genai
+from google.api_core.exceptions import GoogleAPICallError
 
 from frigate.config import GenAIProviderEnum
 from frigate.genai import GenAIClient, register_genai_provider
@@ -16,58 +16,40 @@ logger = logging.getLogger(__name__)
 class GeminiClient(GenAIClient):
     """Generative AI client for Frigate using Gemini."""
 
-    provider: genai.Client
+    provider: genai.GenerativeModel
 
     def _init_provider(self):
         """Initialize the client."""
-        # Merge provider_options into HttpOptions
-        http_options_dict = {
-            "timeout": int(self.timeout * 1000),  # requires milliseconds
-            "retry_options": types.HttpRetryOptions(
-                attempts=3,
-                initial_delay=1.0,
-                max_delay=60.0,
-                exp_base=2.0,
-                jitter=1.0,
-                http_status_codes=[429, 500, 502, 503, 504],
-            ),
-        }
-
-        if isinstance(self.genai_config.provider_options, dict):
-            http_options_dict.update(self.genai_config.provider_options)
-
-        return genai.Client(
-            api_key=self.genai_config.api_key,
-            http_options=types.HttpOptions(**http_options_dict),
-        )
+        genai.configure(api_key=self.genai_config.api_key)
+        return genai.GenerativeModel(
+            self.genai_config.model, **self.genai_config.provider_options
+        )
 
     def _send(self, prompt: str, images: list[bytes]) -> Optional[str]:
         """Submit a request to Gemini."""
-        contents = [
-            types.Part.from_bytes(data=img, mime_type="image/jpeg") for img in images
+        data = [
+            {
+                "mime_type": "image/jpeg",
+                "data": img,
+            }
+            for img in images
         ] + [prompt]
         try:
-            # Merge runtime_options into generation_config if provided
-            generation_config_dict = {"candidate_count": 1}
-            generation_config_dict.update(self.genai_config.runtime_options)
-
-            response = self.provider.models.generate_content(
-                model=self.genai_config.model,
-                contents=contents,
-                config=types.GenerateContentConfig(
-                    **generation_config_dict,
-                ),
-            )
-        except errors.APIError as e:
+            response = self.provider.generate_content(
+                data,
+                generation_config=genai.types.GenerationConfig(
+                    candidate_count=1,
+                ),
+                request_options=genai.types.RequestOptions(
+                    timeout=self.timeout,
+                ),
+            )
+        except GoogleAPICallError as e:
             logger.warning("Gemini returned an error: %s", str(e))
             return None
         except Exception as e:
             logger.warning("An unexpected error occurred with Gemini: %s", str(e))
             return None
 
         try:
             description = response.text.strip()
-        except (ValueError, AttributeError):
+        except ValueError:
             # No description was generated
             return None
         return description
@@ -3,7 +3,7 @@
 import logging
 from typing import Any, Optional
 
-from httpx import RemoteProtocolError, TimeoutException
+from httpx import TimeoutException
 from ollama import Client as ApiClient
 from ollama import ResponseError
 
@@ -58,26 +58,17 @@ class OllamaClient(GenAIClient):
             )
             return None
         try:
-            ollama_options = {
-                **self.provider_options,
-                **self.genai_config.runtime_options,
-            }
             result = self.provider.generate(
                 self.genai_config.model,
                 prompt,
                 images=images if images else None,
-                **ollama_options,
+                **self.provider_options,
             )
             logger.debug(
                 f"Ollama tokens used: eval_count={result.get('eval_count')}, prompt_eval_count={result.get('prompt_eval_count')}"
            )
             return result["response"].strip()
-        except (
-            TimeoutException,
-            ResponseError,
-            RemoteProtocolError,
-            ConnectionError,
-        ) as e:
+        except (TimeoutException, ResponseError, ConnectionError) as e:
             logger.warning("Ollama returned an error: %s", str(e))
             return None
@@ -22,14 +22,9 @@ class OpenAIClient(GenAIClient):
 
     def _init_provider(self):
         """Initialize the client."""
-        # Extract context_size from provider_options as it's not a valid OpenAI client parameter
-        # It will be used in get_context_size() instead
-        provider_opts = {
-            k: v
-            for k, v in self.genai_config.provider_options.items()
-            if k != "context_size"
-        }
-        return OpenAI(api_key=self.genai_config.api_key, **provider_opts)
+        return OpenAI(
+            api_key=self.genai_config.api_key, **self.genai_config.provider_options
+        )
 
     def _send(self, prompt: str, images: list[bytes]) -> Optional[str]:
         """Submit a request to OpenAI."""
@@ -61,7 +56,6 @@ class OpenAIClient(GenAIClient):
                 },
             ],
             timeout=self.timeout,
-            **self.genai_config.runtime_options,
         )
         if (
             result is not None
@@ -79,16 +73,6 @@ class OpenAIClient(GenAIClient):
         if self.context_size is not None:
             return self.context_size
 
-        # First check provider_options for manually specified context size
-        # This is necessary for llama.cpp and other OpenAI-compatible servers
-        # that don't expose the configured runtime context size in the API response
-        if "context_size" in self.genai_config.provider_options:
-            self.context_size = self.genai_config.provider_options["context_size"]
-            logger.debug(
-                f"Using context size {self.context_size} from provider_options for model {self.genai_config.model}"
-            )
-            return self.context_size
-
         try:
             models = self.provider.models.list()
             for model in models.data:
@@ -89,7 +89,6 @@ def apply_log_levels(default: str, log_levels: dict[str, LogLevel]) -> None:
         "ws4py": LogLevel.error,
         "PIL": LogLevel.warning,
         "numba": LogLevel.warning,
-        "google_genai.models": LogLevel.warning,
         **log_levels,
     }
@@ -139,11 +139,9 @@ class OutputProcess(FrigateProcess):
             if CameraConfigUpdateEnum.add in updates:
                 for camera in updates["add"]:
                     jsmpeg_cameras[camera] = JsmpegCamera(
-                        self.config.cameras[camera], self.stop_event, websocket_server
-                    )
-                    preview_recorders[camera] = PreviewRecorder(
-                        self.config.cameras[camera]
+                        cam_config, self.stop_event, websocket_server
                     )
+                    preview_recorders[camera] = PreviewRecorder(cam_config)
                     preview_write_times[camera] = 0
 
             if (
@@ -97,7 +97,6 @@ class RecordingMaintainer(threading.Thread):
         self.object_recordings_info: dict[str, list] = defaultdict(list)
         self.audio_recordings_info: dict[str, list] = defaultdict(list)
         self.end_time_cache: dict[str, Tuple[datetime.datetime, float]] = {}
-        self.unexpected_cache_files_logged: bool = False
 
     async def move_files(self) -> None:
         cache_files = [
@@ -113,14 +112,7 @@ class RecordingMaintainer(threading.Thread):
         for cache in cache_files:
             cache_path = os.path.join(CACHE_DIR, cache)
             basename = os.path.splitext(cache)[0]
-            try:
-                camera, date = basename.rsplit("@", maxsplit=1)
-            except ValueError:
-                if not self.unexpected_cache_files_logged:
-                    logger.warning("Skipping unexpected files in cache")
-                    self.unexpected_cache_files_logged = True
-                continue
-
+            camera, date = basename.rsplit("@", maxsplit=1)
             start_time = datetime.datetime.strptime(
                 date, CACHE_SEGMENT_FORMAT
             ).astimezone(datetime.timezone.utc)
@@ -172,13 +164,7 @@ class RecordingMaintainer(threading.Thread):
 
             cache_path = os.path.join(CACHE_DIR, cache)
             basename = os.path.splitext(cache)[0]
-            try:
-                camera, date = basename.rsplit("@", maxsplit=1)
-            except ValueError:
-                if not self.unexpected_cache_files_logged:
-                    logger.warning("Skipping unexpected files in cache")
-                    self.unexpected_cache_files_logged = True
-                continue
+            camera, date = basename.rsplit("@", maxsplit=1)
 
             # important that start_time is utc because recordings are stored and compared in utc
             start_time = datetime.datetime.strptime(
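For context, cache segment names follow a `<camera>@<timestamp>` pattern, and the bare unpacking raises `ValueError` on anything else; a quick sketch of the failure mode the guarded version handles (the filename is invented):

```python
basename = "bad_filename"  # no "@" separator

try:
    camera, date = basename.rsplit("@", maxsplit=1)
except ValueError:
    # rsplit returned a single element, so unpacking fails; unexpected
    # cache files are skipped rather than crashing the whole loop.
    print("skipping unexpected cache file")
```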
@@ -42,10 +42,11 @@ def get_latest_version(config: FrigateConfig) -> str:
             "https://api.github.com/repos/blakeblackshear/frigate/releases/latest",
             timeout=10,
         )
-        response = request.json()
     except (RequestException, JSONDecodeError):
         return "unknown"
 
+    response = request.json()
+
     if request.ok and response and "tag_name" in response:
         return str(response.get("tag_name").replace("v", ""))
     else:
@@ -171,8 +171,8 @@ class BaseTestHttp(unittest.TestCase):
     def insert_mock_event(
         self,
         id: str,
-        start_time: float | None = None,
-        end_time: float | None = None,
+        start_time: float = datetime.datetime.now().timestamp(),
+        end_time: float = datetime.datetime.now().timestamp() + 20,
         has_clip: bool = True,
         top_score: int = 100,
         score: int = 0,
@@ -180,11 +180,6 @@ class BaseTestHttp(unittest.TestCase):
         camera: str = "front_door",
     ) -> Event:
         """Inserts a basic event model with a given id."""
-        if start_time is None:
-            start_time = datetime.datetime.now().timestamp()
-        if end_time is None:
-            end_time = start_time + 20
-
         return Event.insert(
             id=id,
             label="Mock",
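Worth noting: a default like `datetime.datetime.now().timestamp()` is evaluated once at function definition time, not per call, which is what the `None`-sentinel pattern on the removed side avoids; a minimal illustration:

```python
import datetime
import time

def stale(ts: float = datetime.datetime.now().timestamp()) -> float:
    return ts  # the default was computed once, at definition time

def fresh(ts: float | None = None) -> float:
    if ts is None:
        ts = datetime.datetime.now().timestamp()  # computed per call
    return ts

a = stale()
time.sleep(0.01)
b = stale()
assert a == b  # frozen default

c = fresh()
time.sleep(0.01)
d = fresh()
assert c != d  # re-evaluated each call
```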
@@ -234,16 +229,11 @@ class BaseTestHttp(unittest.TestCase):
     def insert_mock_recording(
         self,
         id: str,
-        start_time: float | None = None,
-        end_time: float | None = None,
+        start_time: float = datetime.datetime.now().timestamp(),
+        end_time: float = datetime.datetime.now().timestamp() + 20,
         motion: int = 0,
     ) -> Event:
         """Inserts a recording model with a given id."""
-        if start_time is None:
-            start_time = datetime.datetime.now().timestamp()
-        if end_time is None:
-            end_time = start_time + 20
-
         return Recordings.insert(
             id=id,
             path=id,
@@ -96,17 +96,16 @@ class TestHttpApp(BaseTestHttp):
         assert len(events) == 0
 
     def test_get_event_list_limit(self):
-        now = datetime.now().timestamp()
         id = "123456.random"
         id2 = "54321.random"
 
         with AuthTestClient(self.app) as client:
-            super().insert_mock_event(id, start_time=now + 1)
+            super().insert_mock_event(id)
             events = client.get("/events").json()
             assert len(events) == 1
             assert events[0]["id"] == id
 
-            super().insert_mock_event(id2, start_time=now)
+            super().insert_mock_event(id2)
             events = client.get("/events").json()
             assert len(events) == 2
 
@@ -145,7 +144,7 @@ class TestHttpApp(BaseTestHttp):
         assert events[0]["id"] == id2
         assert events[1]["id"] == id
 
-        events = client.get("/events", params={"sort": "score_desc"}).json()
+        events = client.get("/events", params={"sort": "score_des"}).json()
         assert len(events) == 2
         assert events[0]["id"] == id
         assert events[1]["id"] == id2
@@ -196,50 +196,6 @@ class TestHttpReview(BaseTestHttp):
             assert len(response_json) == 1
             assert response_json[0]["id"] == id
 
-    def test_get_review_with_reviewed_filter_unreviewed(self):
-        """Test that reviewed=0 returns only unreviewed items."""
-        now = datetime.now().timestamp()
-
-        with AuthTestClient(self.app) as client:
-            id_unreviewed = "123456.unreviewed"
-            id_reviewed = "123456.reviewed"
-            super().insert_mock_review_segment(id_unreviewed, now, now + 2)
-            super().insert_mock_review_segment(id_reviewed, now, now + 2)
-            self._insert_user_review_status(id_reviewed, reviewed=True)
-
-            params = {
-                "reviewed": 0,
-                "after": now - 1,
-                "before": now + 3,
-            }
-            response = client.get("/review", params=params)
-            assert response.status_code == 200
-            response_json = response.json()
-            assert len(response_json) == 1
-            assert response_json[0]["id"] == id_unreviewed
-
-    def test_get_review_with_reviewed_filter_reviewed(self):
-        """Test that reviewed=1 returns only reviewed items."""
-        now = datetime.now().timestamp()
-
-        with AuthTestClient(self.app) as client:
-            id_unreviewed = "123456.unreviewed"
-            id_reviewed = "123456.reviewed"
-            super().insert_mock_review_segment(id_unreviewed, now, now + 2)
-            super().insert_mock_review_segment(id_reviewed, now, now + 2)
-            self._insert_user_review_status(id_reviewed, reviewed=True)
-
-            params = {
-                "reviewed": 1,
-                "after": now - 1,
-                "before": now + 3,
-            }
-            response = client.get("/review", params=params)
-            assert response.status_code == 200
-            response_json = response.json()
-            assert len(response_json) == 1
-            assert response_json[0]["id"] == id_reviewed
-
     ####################################################################################################################
     ################################### GET /review/summary Endpoint #################################################
     ####################################################################################################################
@@ -632,49 +632,6 @@ class TestConfig(unittest.TestCase):
         )
         assert frigate_config.cameras["back"].zones["test"].color != (0, 0, 0)
 
-    def test_zone_filter_area_percent_converts_to_pixels(self):
-        config = {
-            "mqtt": {"host": "mqtt"},
-            "record": {
-                "alerts": {
-                    "retain": {
-                        "days": 20,
-                    }
-                }
-            },
-            "cameras": {
-                "back": {
-                    "ffmpeg": {
-                        "inputs": [
-                            {"path": "rtsp://10.0.0.1:554/video", "roles": ["detect"]}
-                        ]
-                    },
-                    "detect": {
-                        "height": 1080,
-                        "width": 1920,
-                        "fps": 5,
-                    },
-                    "zones": {
-                        "notification": {
-                            "coordinates": "0.03,1,0.025,0,0.626,0,0.643,1",
-                            "objects": ["person"],
-                            "filters": {"person": {"min_area": 0.1}},
-                        }
-                    },
-                }
-            },
-        }
-
-        frigate_config = FrigateConfig(**config)
-        expected_min_area = int(1080 * 1920 * 0.1)
-        assert (
-            frigate_config.cameras["back"]
-            .zones["notification"]
-            .filters["person"]
-            .min_area
-            == expected_min_area
-        )
-
     def test_zone_relative_matches_explicit(self):
         config = {
             "mqtt": {"host": "mqtt"},
@@ -1,66 +0,0 @@
import sys
import unittest
from unittest.mock import MagicMock, patch

# Mock complex imports before importing maintainer
sys.modules["frigate.comms.inter_process"] = MagicMock()
sys.modules["frigate.comms.detections_updater"] = MagicMock()
sys.modules["frigate.comms.recordings_updater"] = MagicMock()
sys.modules["frigate.config.camera.updater"] = MagicMock()

# Now import the class under test
from frigate.config import FrigateConfig  # noqa: E402
from frigate.record.maintainer import RecordingMaintainer  # noqa: E402


class TestMaintainer(unittest.IsolatedAsyncioTestCase):
    async def test_move_files_survives_bad_filename(self):
        config = MagicMock(spec=FrigateConfig)
        config.cameras = {}
        stop_event = MagicMock()

        maintainer = RecordingMaintainer(config, stop_event)

        # We need to mock end_time_cache to avoid key errors if logic proceeds
        maintainer.end_time_cache = {}

        # Mock filesystem
        # One bad file, one good file
        files = ["bad_filename.mp4", "camera@20210101000000+0000.mp4"]

        with patch("os.listdir", return_value=files):
            with patch("os.path.isfile", return_value=True):
                with patch(
                    "frigate.record.maintainer.psutil.process_iter", return_value=[]
                ):
                    with patch("frigate.record.maintainer.logger.warning") as warn:
                        # Mock validate_and_move_segment to avoid further logic
                        maintainer.validate_and_move_segment = MagicMock()

                        try:
                            await maintainer.move_files()
                        except ValueError as e:
                            if "not enough values to unpack" in str(e):
                                self.fail("move_files() crashed on bad filename!")
                            raise e
                        except Exception:
                            # Ignore other errors (like DB connection) as we only care about the unpack crash
                            pass

                        # The bad filename is encountered in multiple loops, but should only warn once.
                        matching = [
                            c
                            for c in warn.call_args_list
                            if c.args
                            and isinstance(c.args[0], str)
                            and "Skipping unexpected files in cache" in c.args[0]
                        ]
                        self.assertEqual(
                            1,
                            len(matching),
                            f"Expected a single warning for unexpected files, got {len(matching)}",
                        )


if __name__ == "__main__":
    unittest.main()
@@ -43,7 +43,6 @@ def write_training_metadata(model_name: str, image_count: int) -> None:
        model_name: Name of the classification model
        image_count: Number of images used in training
    """
-   model_name = model_name.strip()
    clips_model_dir = os.path.join(CLIPS_DIR, model_name)
    os.makedirs(clips_model_dir, exist_ok=True)

@@ -71,7 +70,6 @@ def read_training_metadata(model_name: str) -> dict[str, any] | None:
    Returns:
        Dictionary with last_training_date and last_training_image_count, or None if not found
    """
-   model_name = model_name.strip()
    clips_model_dir = os.path.join(CLIPS_DIR, model_name)
    metadata_path = os.path.join(clips_model_dir, TRAINING_METADATA_FILE)

@@ -97,7 +95,6 @@ def get_dataset_image_count(model_name: str) -> int:
    Returns:
        Total count of images across all categories
    """
-   model_name = model_name.strip()
    dataset_dir = os.path.join(CLIPS_DIR, model_name, "dataset")

    if not os.path.exists(dataset_dir):
@@ -129,7 +126,6 @@ class ClassificationTrainingProcess(FrigateProcess):
            "TF_KERAS_MOBILENET_V2_WEIGHTS_URL",
            "",
        )
-       model_name = model_name.strip()
        super().__init__(
            stop_event=None,
            priority=PROCESS_PRIORITY_LOW,
@@ -296,7 +292,6 @@ class ClassificationTrainingProcess(FrigateProcess):
def kickoff_model_training(
    embeddingRequestor: EmbeddingsRequestor, model_name: str
) -> None:
-   model_name = model_name.strip()
    requestor = InterProcessRequestor()
    requestor.send_data(
        UPDATE_MODEL_STATE,
@@ -364,7 +359,6 @@ def collect_state_classification_examples(
        model_name: Name of the classification model
        cameras: Dict mapping camera names to normalized crop coordinates [x1, y1, x2, y2] (0-1)
    """
-   model_name = model_name.strip()
    dataset_dir = os.path.join(CLIPS_DIR, model_name, "dataset")

    # Step 1: Get review items for the cameras
@@ -720,7 +714,6 @@ def collect_object_classification_examples(
        model_name: Name of the classification model
        label: Object label to collect (e.g., "person", "car")
    """
-   model_name = model_name.strip()
    dataset_dir = os.path.join(CLIPS_DIR, model_name, "dataset")
    temp_dir = os.path.join(dataset_dir, "temp")
    os.makedirs(temp_dir, exist_ok=True)

@@ -540,16 +540,9 @@ def get_jetson_stats() -> Optional[dict[int, dict]]:
    try:
        results["mem"] = "-"  # no discrete gpu memory

-       if os.path.exists("/sys/devices/gpu.0/load"):
-           with open("/sys/devices/gpu.0/load", "r") as f:
-               gpuload = float(f.readline()) / 10
-               results["gpu"] = f"{gpuload}%"
-       elif os.path.exists("/sys/devices/platform/gpu.0/load"):
-           with open("/sys/devices/platform/gpu.0/load", "r") as f:
-               gpuload = float(f.readline()) / 10
-               results["gpu"] = f"{gpuload}%"
-       else:
-           results["gpu"] = "-"
+       with open("/sys/devices/gpu.0/load", "r") as f:
+           gpuload = float(f.readline()) / 10
+           results["gpu"] = f"{gpuload}%"
    except Exception:
        return None

@@ -64,12 +64,10 @@ def stop_ffmpeg(ffmpeg_process: sp.Popen[Any], logger: logging.Logger):
    try:
        logger.info("Waiting for ffmpeg to exit gracefully...")
        ffmpeg_process.communicate(timeout=30)
        logger.info("FFmpeg has exited")
    except sp.TimeoutExpired:
        logger.info("FFmpeg didn't exit. Force killing...")
        ffmpeg_process.kill()
        ffmpeg_process.communicate()
        logger.info("FFmpeg has been killed")
    ffmpeg_process = None

@@ -1,12 +1,12 @@
# COPYRIGHT AND TRADEMARK NOTICE

The images, logos, and icons contained in this directory (the "Brand Assets") are
-proprietary to Frigate, Inc. and are NOT covered by the MIT License governing the
+proprietary to Frigate LLC and are NOT covered by the MIT License governing the
rest of this repository.

1. TRADEMARK STATUS
The "Frigate" name and the accompanying logo are common law trademarks™ of
-Frigate, Inc. Frigate, Inc. reserves all rights to these marks.
+Frigate LLC. Frigate LLC reserves all rights to these marks.

2. LIMITED PERMISSION FOR USE
Permission is hereby granted to display these Brand Assets strictly for the
@@ -17,9 +17,9 @@ rest of this repository.
3. RESTRICTIONS
You may NOT:
a. Use these Brand Assets to represent a derivative work (fork) as an official
-product of Frigate, Inc.
+product of Frigate LLC.
b. Use these Brand Assets in a way that implies endorsement, sponsorship, or
-commercial affiliation with Frigate, Inc.
+commercial affiliation with Frigate LLC.
c. Modify or alter the Brand Assets.

If you fork this repository with the intent to distribute a modified or competing
@@ -30,4 +30,4 @@ For full usage guidelines, strictly see the TRADEMARK.md file in the
repository root.

ALL RIGHTS RESERVED.
-Copyright (c) 2026 Frigate, Inc.
+Copyright (c) 2025 Frigate LLC.

web/package-lock.json (generated)
@@ -48,7 +48,7 @@
        "idb-keyval": "^6.2.1",
        "immer": "^10.1.1",
        "konva": "^9.3.18",
-       "lodash": "^4.17.23",
+       "lodash": "^4.17.21",
        "lucide-react": "^0.477.0",
        "monaco-yaml": "^5.3.1",
        "next-themes": "^0.3.0",
@@ -64,7 +64,7 @@
        "react-i18next": "^15.2.0",
        "react-icons": "^5.5.0",
        "react-konva": "^18.2.10",
-       "react-router-dom": "^6.30.3",
+       "react-router-dom": "^6.26.0",
        "react-swipeable": "^7.0.2",
        "react-tracked": "^2.0.1",
        "react-transition-group": "^4.4.5",
@@ -116,7 +116,7 @@
        "prettier-plugin-tailwindcss": "^0.6.5",
        "tailwindcss": "^3.4.9",
        "typescript": "^5.8.2",
-       "vite": "^6.4.1",
+       "vite": "^6.2.0",
        "vitest": "^3.0.7"
      }
    },
@@ -3293,9 +3293,9 @@
      "license": "MIT"
    },
    "node_modules/@remix-run/router": {
-     "version": "1.23.2",
-     "resolved": "https://registry.npmjs.org/@remix-run/router/-/router-1.23.2.tgz",
-     "integrity": "sha512-Ic6m2U/rMjTkhERIa/0ZtXJP17QUi2CbWE7cqx4J58M8aA3QTfW+2UlQ4psvTX9IO1RfNVhK3pcpdjej7L+t2w==",
+     "version": "1.19.0",
+     "resolved": "https://registry.npmjs.org/@remix-run/router/-/router-1.19.0.tgz",
+     "integrity": "sha512-zDICCLKEwbVYTS6TjYaWtHXxkdoUvD/QXvyVZjGCsWz5vyH7aFeONlPffPdW+Y/t6KT0MgXb2Mfjun9YpWN1dA==",
      "license": "MIT",
      "engines": {
        "node": ">=14.0.0"
@@ -4683,19 +4683,6 @@
        "node": ">=8"
      }
    },
-   "node_modules/call-bind-apply-helpers": {
-     "version": "1.0.2",
-     "resolved": "https://registry.npmjs.org/call-bind-apply-helpers/-/call-bind-apply-helpers-1.0.2.tgz",
-     "integrity": "sha512-Sp1ablJ0ivDkSzjcaJdxEunN5/XvksFJ2sMBFfq6x0ryhQV/2b/KwFe21cMpmHtPOSij8K99/wSfoEuTObmuMQ==",
-     "license": "MIT",
-     "dependencies": {
-       "es-errors": "^1.3.0",
-       "function-bind": "^1.1.2"
-     },
-     "engines": {
-       "node": ">= 0.4"
-     }
-   },
    "node_modules/callsites": {
      "version": "3.1.0",
      "resolved": "https://registry.npmjs.org/callsites/-/callsites-3.1.0.tgz",
@@ -5632,20 +5619,6 @@
        "csstype": "^3.0.2"
      }
    },
-   "node_modules/dunder-proto": {
-     "version": "1.0.1",
-     "resolved": "https://registry.npmjs.org/dunder-proto/-/dunder-proto-1.0.1.tgz",
-     "integrity": "sha512-KIN/nDJBQRcXw0MLVhZE9iQHmG68qAVIBg9CqmUYjmQIhgij9U5MFvrqkUL5FbtyyzZuOeOt0zdeRe4UY7ct+A==",
-     "license": "MIT",
-     "dependencies": {
-       "call-bind-apply-helpers": "^1.0.1",
-       "es-errors": "^1.3.0",
-       "gopd": "^1.2.0"
-     },
-     "engines": {
-       "node": ">= 0.4"
-     }
-   },
    "node_modules/eastasianwidth": {
      "version": "0.2.0",
      "resolved": "https://registry.npmjs.org/eastasianwidth/-/eastasianwidth-0.2.0.tgz",
@@ -5706,24 +5679,6 @@
        "url": "https://github.com/fb55/entities?sponsor=1"
      }
    },
-   "node_modules/es-define-property": {
-     "version": "1.0.1",
-     "resolved": "https://registry.npmjs.org/es-define-property/-/es-define-property-1.0.1.tgz",
-     "integrity": "sha512-e3nRfgfUZ4rNGL232gUgX06QNyyez04KdjFrF+LTRoOXmrOgFKDg4BCdsjW8EnT69eqdYGmRpJwiPVYNrCaW3g==",
-     "license": "MIT",
-     "engines": {
-       "node": ">= 0.4"
-     }
-   },
-   "node_modules/es-errors": {
-     "version": "1.3.0",
-     "resolved": "https://registry.npmjs.org/es-errors/-/es-errors-1.3.0.tgz",
-     "integrity": "sha512-Zf5H2Kxt2xjTvbJvP2ZWLEICxA6j+hAmMzIlypy4xcBg1vKVnx89Wy0GbS+kf5cwCVFFzdCFh2XSCFNULS6csw==",
-     "license": "MIT",
-     "engines": {
-       "node": ">= 0.4"
-     }
-   },
    "node_modules/es-module-lexer": {
      "version": "1.6.0",
      "resolved": "https://registry.npmjs.org/es-module-lexer/-/es-module-lexer-1.6.0.tgz",
@@ -5731,33 +5686,6 @@
      "dev": true,
      "license": "MIT"
    },
-   "node_modules/es-object-atoms": {
-     "version": "1.1.1",
-     "resolved": "https://registry.npmjs.org/es-object-atoms/-/es-object-atoms-1.1.1.tgz",
-     "integrity": "sha512-FGgH2h8zKNim9ljj7dankFPcICIK9Cp5bm+c2gQSYePhpaG5+esrLODihIorn+Pe6FGJzWhXQotPv73jTaldXA==",
-     "license": "MIT",
-     "dependencies": {
-       "es-errors": "^1.3.0"
-     },
-     "engines": {
-       "node": ">= 0.4"
-     }
-   },
-   "node_modules/es-set-tostringtag": {
-     "version": "2.1.0",
-     "resolved": "https://registry.npmjs.org/es-set-tostringtag/-/es-set-tostringtag-2.1.0.tgz",
-     "integrity": "sha512-j6vWzfrGVfyXxge+O0x5sh6cvxAog0a/4Rdd2K36zCMV5eJ+/+tOAngRO8cODMNWbVRdVlmGZQL2YS3yR8bIUA==",
-     "license": "MIT",
-     "dependencies": {
-       "es-errors": "^1.3.0",
-       "get-intrinsic": "^1.2.6",
-       "has-tostringtag": "^1.0.2",
-       "hasown": "^2.0.2"
-     },
-     "engines": {
-       "node": ">= 0.4"
-     }
-   },
    "node_modules/esbuild": {
      "version": "0.25.0",
      "resolved": "https://registry.npmjs.org/esbuild/-/esbuild-0.25.0.tgz",
@@ -6294,15 +6222,12 @@
      }
    },
    "node_modules/form-data": {
-     "version": "4.0.4",
-     "resolved": "https://registry.npmjs.org/form-data/-/form-data-4.0.4.tgz",
-     "integrity": "sha512-KrGhL9Q4zjj0kiUt5OO4Mr/A/jlI2jDYs5eHBpYHPcBEVSiipAvn2Ko2HnPe20rmcuuvMHNdZFp+4IlGTMF0Ow==",
-     "license": "MIT",
+     "version": "4.0.0",
+     "resolved": "https://registry.npmjs.org/form-data/-/form-data-4.0.0.tgz",
+     "integrity": "sha512-ETEklSGi5t0QMZuiXoA/Q6vcnxcLQP5vdugSpuAyi6SVGi2clPPp+xgEhuMaHC+zGgn31Kd235W35f7Hykkaww==",
      "dependencies": {
        "asynckit": "^0.4.0",
        "combined-stream": "^1.0.8",
-       "es-set-tostringtag": "^2.1.0",
-       "hasown": "^2.0.2",
        "mime-types": "^2.1.12"
      },
      "engines": {
@@ -6382,30 +6307,6 @@
        "node": "6.* || 8.* || >= 10.*"
      }
    },
-   "node_modules/get-intrinsic": {
-     "version": "1.3.0",
-     "resolved": "https://registry.npmjs.org/get-intrinsic/-/get-intrinsic-1.3.0.tgz",
-     "integrity": "sha512-9fSjSaos/fRIVIp+xSJlE6lfwhES7LNtKaCBIamHsjr2na1BiABJPo0mOjjz8GJDURarmCPGqaiVg5mfjb98CQ==",
-     "license": "MIT",
-     "dependencies": {
-       "call-bind-apply-helpers": "^1.0.2",
-       "es-define-property": "^1.0.1",
-       "es-errors": "^1.3.0",
-       "es-object-atoms": "^1.1.1",
-       "function-bind": "^1.1.2",
-       "get-proto": "^1.0.1",
-       "gopd": "^1.2.0",
-       "has-symbols": "^1.1.0",
-       "hasown": "^2.0.2",
-       "math-intrinsics": "^1.1.0"
-     },
-     "engines": {
-       "node": ">= 0.4"
-     },
-     "funding": {
-       "url": "https://github.com/sponsors/ljharb"
-     }
-   },
    "node_modules/get-nonce": {
      "version": "1.0.1",
      "resolved": "https://registry.npmjs.org/get-nonce/-/get-nonce-1.0.1.tgz",
@@ -6415,19 +6316,6 @@
        "node": ">=6"
      }
    },
-   "node_modules/get-proto": {
-     "version": "1.0.1",
-     "resolved": "https://registry.npmjs.org/get-proto/-/get-proto-1.0.1.tgz",
-     "integrity": "sha512-sTSfBjoXBp89JvIKIefqw7U2CCebsc74kiY6awiGogKtoSGbgjYE/G/+l9sF3MWFPNc9IcoOC4ODfKHfxFmp0g==",
-     "license": "MIT",
-     "dependencies": {
-       "dunder-proto": "^1.0.1",
-       "es-object-atoms": "^1.0.0"
-     },
-     "engines": {
-       "node": ">= 0.4"
-     }
-   },
    "node_modules/glob": {
      "version": "7.2.3",
      "resolved": "https://registry.npmjs.org/glob/-/glob-7.2.3.tgz",
@@ -6496,18 +6384,6 @@
        "url": "https://github.com/sponsors/sindresorhus"
      }
    },
-   "node_modules/gopd": {
-     "version": "1.2.0",
-     "resolved": "https://registry.npmjs.org/gopd/-/gopd-1.2.0.tgz",
-     "integrity": "sha512-ZUKRh6/kUFoAiTAtTYPZJ3hw9wNxx+BIBOijnlG9PnrJsCcSjs1wyyD6vJpaYtgnzDrKYRSqf3OO6Rfa93xsRg==",
-     "license": "MIT",
-     "engines": {
-       "node": ">= 0.4"
-     },
-     "funding": {
-       "url": "https://github.com/sponsors/ljharb"
-     }
-   },
    "node_modules/graphemer": {
      "version": "1.4.0",
      "resolved": "https://registry.npmjs.org/graphemer/-/graphemer-1.4.0.tgz",
@@ -6537,38 +6413,10 @@
        "node": ">=8"
      }
    },
-   "node_modules/has-symbols": {
-     "version": "1.1.0",
-     "resolved": "https://registry.npmjs.org/has-symbols/-/has-symbols-1.1.0.tgz",
-     "integrity": "sha512-1cDNdwJ2Jaohmb3sg4OmKaMBwuC48sYni5HUw2DvsC8LjGTLK9h+eb1X6RyuOHe4hT0ULCW68iomhjUoKUqlPQ==",
-     "license": "MIT",
-     "engines": {
-       "node": ">= 0.4"
-     },
-     "funding": {
-       "url": "https://github.com/sponsors/ljharb"
-     }
-   },
-   "node_modules/has-tostringtag": {
-     "version": "1.0.2",
-     "resolved": "https://registry.npmjs.org/has-tostringtag/-/has-tostringtag-1.0.2.tgz",
-     "integrity": "sha512-NqADB8VjPFLM2V0VvHUewwwsw0ZWBaIdgo+ieHtK3hasLz4qeCRjYcqfB6AQrBggRKppKF8L52/VqdVsO47Dlw==",
-     "license": "MIT",
-     "dependencies": {
-       "has-symbols": "^1.0.3"
-     },
-     "engines": {
-       "node": ">= 0.4"
-     },
-     "funding": {
-       "url": "https://github.com/sponsors/ljharb"
-     }
-   },
    "node_modules/hasown": {
-     "version": "2.0.2",
-     "resolved": "https://registry.npmjs.org/hasown/-/hasown-2.0.2.tgz",
-     "integrity": "sha512-0hJU9SCPvmMzIBdZFqNPXWa6dqh7WdH0cII9y+CyS8rG3nL48Bclra9HmKhVVUHyPWNH5Y7xDwAB7bfgSjkUMQ==",
-     "license": "MIT",
+     "version": "2.0.0",
+     "resolved": "https://registry.npmjs.org/hasown/-/hasown-2.0.0.tgz",
+     "integrity": "sha512-vUptKVTpIJhcczKBbgnS+RtcuYMB8+oNzPK2/Hp3hanz8JmpATdmmgLgSaadVREkDm+e2giHwY3ZRkyjSIDDFA==",
      "dependencies": {
        "function-bind": "^1.1.2"
      },
@@ -7210,10 +7058,9 @@
      }
    },
    "node_modules/lodash": {
-     "version": "4.17.23",
-     "resolved": "https://registry.npmjs.org/lodash/-/lodash-4.17.23.tgz",
-     "integrity": "sha512-LgVTMpQtIopCi79SJeDiP0TfWi5CNEc/L/aRdTh3yIvmZXTnheWpKjSZhnvMl8iXbC1tFg9gdHHDMLoV7CnG+w==",
-     "license": "MIT"
+     "version": "4.17.21",
+     "resolved": "https://registry.npmjs.org/lodash/-/lodash-4.17.21.tgz",
+     "integrity": "sha512-v2kDEe57lecTulaDIuNTPy3Ry4gLGJ6Z1O3vE1krgXZNrsQ+LFTGHVxVjcXPs17LhbZVGedAJv8XZ1tvj5FvSg=="
    },
    "node_modules/lodash.merge": {
      "version": "4.6.2",
@@ -7293,15 +7140,6 @@
        "url": "https://github.com/sponsors/sindresorhus"
      }
    },
-   "node_modules/math-intrinsics": {
-     "version": "1.1.0",
-     "resolved": "https://registry.npmjs.org/math-intrinsics/-/math-intrinsics-1.1.0.tgz",
-     "integrity": "sha512-/IXtbwEk5HTPyEwyKX6hGkYXxM9nbj64B+ilVJnC/R6B0pH5G4V3b0pVbL7DBj4tkhBAppbQUlf6F6Xl9LHu1g==",
-     "license": "MIT",
-     "engines": {
-       "node": ">= 0.4"
-     }
-   },
    "node_modules/merge-stream": {
      "version": "2.0.0",
      "resolved": "https://registry.npmjs.org/merge-stream/-/merge-stream-2.0.0.tgz",
@@ -8618,12 +8456,12 @@
      }
    },
    "node_modules/react-router": {
-     "version": "6.30.3",
-     "resolved": "https://registry.npmjs.org/react-router/-/react-router-6.30.3.tgz",
-     "integrity": "sha512-XRnlbKMTmktBkjCLE8/XcZFlnHvr2Ltdr1eJX4idL55/9BbORzyZEaIkBFDhFGCEWBBItsVrDxwx3gnisMitdw==",
+     "version": "6.26.0",
+     "resolved": "https://registry.npmjs.org/react-router/-/react-router-6.26.0.tgz",
+     "integrity": "sha512-wVQq0/iFYd3iZ9H2l3N3k4PL8EEHcb0XlU2Na8nEwmiXgIUElEH6gaJDtUQxJ+JFzmIXaQjfdpcGWaM6IoQGxg==",
      "license": "MIT",
      "dependencies": {
-       "@remix-run/router": "1.23.2"
+       "@remix-run/router": "1.19.0"
      },
      "engines": {
        "node": ">=14.0.0"
@@ -8633,13 +8471,13 @@
      }
    },
    "node_modules/react-router-dom": {
-     "version": "6.30.3",
-     "resolved": "https://registry.npmjs.org/react-router-dom/-/react-router-dom-6.30.3.tgz",
-     "integrity": "sha512-pxPcv1AczD4vso7G4Z3TKcvlxK7g7TNt3/FNGMhfqyntocvYKj+GCatfigGDjbLozC4baguJ0ReCigoDJXb0ag==",
+     "version": "6.26.0",
+     "resolved": "https://registry.npmjs.org/react-router-dom/-/react-router-dom-6.26.0.tgz",
+     "integrity": "sha512-RRGUIiDtLrkX3uYcFiCIxKFWMcWQGMojpYZfcstc63A1+sSnVgILGIm9gNUA6na3Fm1QuPGSBQH2EMbAZOnMsQ==",
      "license": "MIT",
      "dependencies": {
-       "@remix-run/router": "1.23.2",
-       "react-router": "6.30.3"
+       "@remix-run/router": "1.19.0",
+       "react-router": "6.26.0"
      },
      "engines": {
        "node": ">=14.0.0"
@@ -9664,54 +9502,6 @@
      "dev": true,
      "license": "MIT"
    },
-   "node_modules/tinyglobby": {
-     "version": "0.2.15",
-     "resolved": "https://registry.npmjs.org/tinyglobby/-/tinyglobby-0.2.15.tgz",
-     "integrity": "sha512-j2Zq4NyQYG5XMST4cbs02Ak8iJUdxRM0XI5QyxXuZOzKOINmWurp3smXu3y5wDcJrptwpSjgXHzIQxR0omXljQ==",
-     "dev": true,
-     "license": "MIT",
-     "dependencies": {
-       "fdir": "^6.5.0",
-       "picomatch": "^4.0.3"
-     },
-     "engines": {
-       "node": ">=12.0.0"
-     },
-     "funding": {
-       "url": "https://github.com/sponsors/SuperchupuDev"
-     }
-   },
-   "node_modules/tinyglobby/node_modules/fdir": {
-     "version": "6.5.0",
-     "resolved": "https://registry.npmjs.org/fdir/-/fdir-6.5.0.tgz",
-     "integrity": "sha512-tIbYtZbucOs0BRGqPJkshJUYdL+SDH7dVM8gjy+ERp3WAUjLEFJE+02kanyHtwjWOnwrKYBiwAmM0p4kLJAnXg==",
-     "dev": true,
-     "license": "MIT",
-     "engines": {
-       "node": ">=12.0.0"
-     },
-     "peerDependencies": {
-       "picomatch": "^3 || ^4"
-     },
-     "peerDependenciesMeta": {
-       "picomatch": {
-         "optional": true
-       }
-     }
-   },
-   "node_modules/tinyglobby/node_modules/picomatch": {
-     "version": "4.0.3",
-     "resolved": "https://registry.npmjs.org/picomatch/-/picomatch-4.0.3.tgz",
-     "integrity": "sha512-5gTmgEY/sqK6gFXLIsQNH19lWb4ebPDLA4SdLP7dsWkIXHWlG66oPuVvXSGFPppYZz8ZDZq0dYYrbHfBCVUb1Q==",
-     "dev": true,
-     "license": "MIT",
-     "engines": {
-       "node": ">=12"
-     },
-     "funding": {
-       "url": "https://github.com/sponsors/jonschlinkert"
-     }
-   },
    "node_modules/tinypool": {
      "version": "1.0.2",
      "resolved": "https://registry.npmjs.org/tinypool/-/tinypool-1.0.2.tgz",
@@ -10078,18 +9868,15 @@
      }
    },
    "node_modules/vite": {
-     "version": "6.4.1",
-     "resolved": "https://registry.npmjs.org/vite/-/vite-6.4.1.tgz",
-     "integrity": "sha512-+Oxm7q9hDoLMyJOYfUYBuHQo+dkAloi33apOPP56pzj+vsdJDzr+j1NISE5pyaAuKL4A3UD34qd0lx5+kfKp2g==",
+     "version": "6.2.0",
+     "resolved": "https://registry.npmjs.org/vite/-/vite-6.2.0.tgz",
+     "integrity": "sha512-7dPxoo+WsT/64rDcwoOjk76XHj+TqNTIvHKcuMQ1k4/SeHDaQt5GFAeLYzrimZrMpn/O6DtdI03WUjdxuPM0oQ==",
      "dev": true,
      "license": "MIT",
      "dependencies": {
        "esbuild": "^0.25.0",
-       "fdir": "^6.4.4",
-       "picomatch": "^4.0.2",
        "postcss": "^8.5.3",
-       "rollup": "^4.34.9",
-       "tinyglobby": "^0.2.13"
+       "rollup": "^4.30.1"
      },
      "bin": {
        "vite": "bin/vite.js"
@@ -10183,37 +9970,6 @@
        "monaco-editor": ">=0.33.0"
      }
    },
-   "node_modules/vite/node_modules/fdir": {
-     "version": "6.5.0",
-     "resolved": "https://registry.npmjs.org/fdir/-/fdir-6.5.0.tgz",
-     "integrity": "sha512-tIbYtZbucOs0BRGqPJkshJUYdL+SDH7dVM8gjy+ERp3WAUjLEFJE+02kanyHtwjWOnwrKYBiwAmM0p4kLJAnXg==",
-     "dev": true,
-     "license": "MIT",
-     "engines": {
-       "node": ">=12.0.0"
-     },
-     "peerDependencies": {
-       "picomatch": "^3 || ^4"
-     },
-     "peerDependenciesMeta": {
-       "picomatch": {
-         "optional": true
-       }
-     }
-   },
-   "node_modules/vite/node_modules/picomatch": {
-     "version": "4.0.3",
-     "resolved": "https://registry.npmjs.org/picomatch/-/picomatch-4.0.3.tgz",
-     "integrity": "sha512-5gTmgEY/sqK6gFXLIsQNH19lWb4ebPDLA4SdLP7dsWkIXHWlG66oPuVvXSGFPppYZz8ZDZq0dYYrbHfBCVUb1Q==",
-     "dev": true,
-     "license": "MIT",
-     "engines": {
-       "node": ">=12"
-     },
-     "funding": {
-       "url": "https://github.com/sponsors/jonschlinkert"
-     }
-   },
    "node_modules/vitest": {
      "version": "3.0.7",
      "resolved": "https://registry.npmjs.org/vitest/-/vitest-3.0.7.tgz",

@@ -54,7 +54,7 @@
    "idb-keyval": "^6.2.1",
    "immer": "^10.1.1",
    "konva": "^9.3.18",
-   "lodash": "^4.17.23",
+   "lodash": "^4.17.21",
    "lucide-react": "^0.477.0",
    "monaco-yaml": "^5.3.1",
    "next-themes": "^0.3.0",
@@ -70,7 +70,7 @@
    "react-i18next": "^15.2.0",
    "react-icons": "^5.5.0",
    "react-konva": "^18.2.10",
-   "react-router-dom": "^6.30.3",
+   "react-router-dom": "^6.26.0",
    "react-swipeable": "^7.0.2",
    "react-tracked": "^2.0.1",
    "react-transition-group": "^4.4.5",
@@ -122,7 +122,7 @@
    "prettier-plugin-tailwindcss": "^0.6.5",
    "tailwindcss": "^3.4.9",
    "typescript": "^5.8.2",
-   "vite": "^6.4.1",
+   "vite": "^6.2.0",
    "vitest": "^3.0.7"
  }
}

@@ -1,5 +1 @@
-{
-  "train": {
-    "titleShort": "الأخيرة"
-  }
-}
+{}

@@ -1,6 +1,6 @@
{
  "description": {
-   "addFace": "أضف مجموعة جديدة إلى مكتبة الوجوه عن طريق رفع صورتك الأولى.",
+   "addFace": "قم بإضافة مجموعة جديدة لمكتبة الأوجه.",
    "invalidName": "أسم غير صالح. يجب أن يشمل الأسم فقط على الحروف، الأرقام، المسافات، الفاصلة العليا، الشرطة التحتية، والشرطة الواصلة.",
    "placeholder": "أدخل أسم لهذه المجموعة"
  },
@@ -21,88 +21,6 @@
  "collections": "المجموعات",
  "createFaceLibrary": {
    "title": "إنشاء المجاميع",
-   "desc": "إنشاء مجموعة جديدة",
-   "new": "إضافة وجه جديد",
-   "nextSteps": "لبناء أساس قوي:<li>استخدم علامة التبويب \"التعرّفات الأخيرة\" لاختيار الصور والتدريب عليها لكل شخص تم اكتشافه.</li> <li>ركّز على الصور الأمامية المباشرة للحصول على أفضل النتائج؛ وتجنّب صور التدريب التي تُظهر الوجوه بزاوية.</li>"
- },
- "steps": {
-   "faceName": "ادخل اسم للوجه",
-   "uploadFace": "ارفع صورة للوجه",
-   "nextSteps": "الخطوة التالية",
-   "description": {
-     "uploadFace": "قم برفع صورة لـ {{name}} تُظهر وجهه من زاوية أمامية مباشرة. لا يلزم أن تكون الصورة مقتصرة على الوجه فقط."
-   }
- },
- "train": {
-   "title": "التعرّفات الأخيرة",
-   "titleShort": "الأخيرة",
-   "aria": "اختر التعرّفات الأخيرة",
-   "empty": "لا توجد أي محاولات حديثة للتعرّف على الوجوه"
- },
- "deleteFaceLibrary": {
-   "title": "احذف الاسم",
-   "desc": "هل أنت متأكد أنك تريد حذف المجموعة {{name}}؟ سيؤدي هذا إلى حذف جميع الوجوه المرتبطة بها نهائيًا."
- },
- "deleteFaceAttempts": {
-   "title": "احذف الوجوه",
-   "desc_zero": "وجه",
-   "desc_one": "وجه",
-   "desc_two": "وجهان",
-   "desc_few": "وجوه",
-   "desc_many": "وجهًا",
-   "desc_other": "وجه"
- },
- "renameFace": {
-   "title": "اعادة تسمية الوجه",
-   "desc": "ادخل اسم جديد لـ{{name}}"
- },
- "button": {
-   "deleteFaceAttempts": "احذف الوجوه",
-   "addFace": "اظف وجهًا",
-   "renameFace": "اعد تسمية وجه",
-   "deleteFace": "احذف وجهًا",
-   "uploadImage": "ارفع صورة",
-   "reprocessFace": "إعادة معالجة الوجه"
- },
- "imageEntry": {
-   "validation": {
-     "selectImage": "يرجى اختيار ملف صورة."
-   },
-   "dropActive": "اسحب الصورة إلى هنا…",
-   "dropInstructions": "اسحب وأفلت أو الصق صورة هنا، أو انقر للاختيار",
-   "maxSize": "الحجم الأقصى: {{size}} ميغابايت"
- },
- "nofaces": "لا توجد وجوه متاحة",
- "trainFaceAs": "درّب الوجه كـ:",
- "trainFace": "درّب الوجه",
- "toast": {
-   "success": {
-     "uploadedImage": "تم رفع الصورة بنجاح.",
-     "addFaceLibrary": "تمت إضافة {{name}} بنجاح إلى مكتبة الوجوه!",
-     "deletedFace_zero": "وجه",
-     "deletedFace_one": "وجه",
-     "deletedFace_two": "وجهين",
-     "deletedFace_few": "وجوه",
-     "deletedFace_many": "وجهًا",
-     "deletedFace_other": "وجه",
-     "deletedName_zero": "وجه",
-     "deletedName_one": "وجه",
-     "deletedName_two": "وجهين",
-     "deletedName_few": "وجوه",
-     "deletedName_many": "وجهًا",
-     "deletedName_other": "وجه",
-     "renamedFace": "تمت إعادة تسمية الوجه بنجاح إلى {{name}}",
-     "trainedFace": "تم تدريب الوجه بنجاح.",
-     "updatedFaceScore": "تم تحديث درجة الوجه بنجاح إلى {{name}} ({{score}})."
-   },
-   "error": {
-     "uploadingImageFailed": "فشل في رفع الصورة: {{errorMessage}}",
-     "addFaceLibraryFailed": "فشل في تعيين اسم الوجه: {{errorMessage}}",
-     "deleteFaceFailed": "فشل الحذف: {{errorMessage}}",
-     "deleteNameFailed": "فشل في حذف الاسم: {{errorMessage}}",
-     "renameFaceFailed": "فشل في إعادة تسمية الوجه: {{errorMessage}}",
-     "trainFailed": "فشل التدريب: {{errorMessage}}",
-     "updateFaceScoreFailed": "فشل في تحديث درجة الوجه: {{errorMessage}}"
-   }
- }
+   "desc": "إنشاء مجموعة جديدة"
  }
}

@@ -2,9 +2,9 @@
  "babbling": "Бърборене",
  "whispering": "Шепнене",
  "laughter": "Смях",
- "crying": "Плач",
+ "crying": "Плача",
  "sigh": "Въздишка",
- "singing": "Пеене",
+ "singing": "Подписвам",
  "choir": "Хор",
  "yodeling": "Йоделинг",
  "mantra": "Мантра",
@@ -264,6 +264,5 @@
  "pant": "Здъхване",
  "stomach_rumble": "Къркорене на стомах",
  "heartbeat": "Сърцебиене",
- "scream": "Вик",
- "snicker": "Хихикане"
+ "scream": "Вик"
}

@@ -1,16 +1,6 @@
{
  "form": {
    "user": "Потребителско име",
-   "password": "Парола",
-   "login": "Вход",
-   "firstTimeLogin": "Опитвате да влезете за първи път? Данните за вход са разпечатани в логовете на Frigate.",
-   "errors": {
-     "usernameRequired": "Потребителското име е задължително",
-     "passwordRequired": "Паролата е задължителна",
-     "rateLimit": "Надхвърлен брой опити. Моля Опитайте по-късно.",
-     "loginFailed": "Неуспешен вход",
-     "unknownError": "Неизвестна грешка. Поля проверете логовете.",
-     "webUnknownError": "Неизвестна грешка. Поля проверете изхода в конзолата."
-   }
+   "password": "Парола"
  }
}

@@ -7,7 +7,7 @@
    "label": "Изтрий група за камери",
    "confirm": {
      "title": "Потвърди изтриването",
-     "desc": "Сигурни ли сте, че искате да изтриете група <em>{{name}}</em>?"
+     "desc": "Сигурни ли сте, че искате да изтриете група </em>{{name}}</em>?"
    }
  },
  "name": {

@@ -11,9 +11,6 @@
  },
  "restart": {
    "title": "Сигурен ли сте, че искате да рестартирате Frigate?",
-   "button": "Рестартирай",
-   "restarting": {
-     "title": "Frigare се рестартира"
-   }
+   "button": "Рестартирай"
  }
}

@@ -1,6 +1,3 @@
{
- "documentTitle": "Модели за класификация - Frigate",
- "description": {
-   "invalidName": "Невалидно име. Имената могат да съдържат единствено: букви, числа, празни места, долни черти и тирета."
- }
+ "documentTitle": "Модели за класификация"
}

@@ -1,18 +1,4 @@
{
- "documentTitle": "Настройки на конфигурацията - Frigate",
- "configEditor": "Конфигуратор",
- "safeConfigEditor": "Конфигуратор (Safe Mode)",
- "safeModeDescription": "Frigate е в режим \"Safe Mode\" тъй като конфигурацията не минава проверките за валидност.",
- "copyConfig": "Копирай Конфигурацията",
- "saveAndRestart": "Запази и Рестартирай",
- "saveOnly": "Запази",
- "confirm": "Изход без запис?",
- "toast": {
-   "success": {
-     "copyToClipboard": "Конфигурацията е копирана."
-   },
-   "error": {
-     "savingError": "Грешка при запис на конфигурацията"
-   }
- }
+ "documentTitle": "Настройки на конфигурацията - Фригейт",
+ "configEditor": "Настройки на конфигурацията"
}

@@ -11,8 +11,5 @@
  },
  "allCameras": "Всички камери",
  "alerts": "Известия",
- "detections": "Засичания",
- "motion": {
-   "label": "Движение"
- }
+ "detections": "Засичания"
}

@@ -10,5 +10,5 @@
  "trackedObjectsCount_one": "{{count}} проследен обект ",
  "trackedObjectsCount_other": "{{count}} проследени обекта ",
  "documentTitle": "Разгледай - Фригейт",
- "generativeAI": "Генеративен Изкъствен Интелект"
+ "generativeAI": "Генериращ Изкъствен Интелект"
}

@@ -1,23 +1,4 @@
{
  "documentTitle": "Експорт - Frigate",
- "search": "Търси",
- "noExports": "Няма намерени експорти",
- "deleteExport": "Изтрий експорт",
- "deleteExport.desc": "Сигурни ли сте, че искате да изтриете {{exportName}}?",
- "editExport": {
-   "title": "Преименувай експорт",
-   "desc": "Въведете ново име за този експорт.",
-   "saveExport": "Запази експорт"
- },
- "tooltip": {
-   "shareExport": "Сподели експорт",
-   "downloadVideo": "Свали видео",
-   "editName": "Редактирай име",
-   "deleteExport": "Изтрий експорт"
- },
- "toast": {
-   "error": {
-     "renameExportFailed": "Неуспешно преименуване на експорт: {{errorMessage}}"
-   }
- }
+ "search": "Търси"
}

@@ -13,7 +13,6 @@
  },
  "description": {
    "addFace": "Добавете нова колекция във библиотеката за лица при качването на първата ви снимка.",
-   "placeholder": "Напишете име за тази колекция",
-   "invalidName": "Невалидно име. Имената могат да съдържат единствено: букви, числа, празни места, долни черти и тирета."
+   "placeholder": "Напишете име за тази колекция"
  }
}

@@ -3,6 +3,5 @@
    "save": "Запазване на търсенето"
  },
  "search": "Търси",
- "savedSearches": "Запазени търсения",
- "searchFor": "Търсене за {{inputValue}}"
+ "savedSearches": "Запазени търсения"
}

@@ -4,7 +4,6 @@
  },
  "documentTitle": {
    "cameras": "Статистики за Камери - Фригейт",
-   "storage": "Статистика за паметта - Фригейт",
-   "general": "Обща Статистика - Frigate"
+   "storage": "Статистика за паметта - Фригейт"
  }
}

@@ -227,8 +227,7 @@
    "show": "Mostra {{item}}",
    "ID": "ID",
    "none": "Cap",
-   "all": "Tots",
-   "other": "Altres"
+   "all": "Tots"
  },
  "button": {
    "apply": "Aplicar",

@@ -132,9 +132,5 @@
  },
  "count_one": "{{count}} Classe",
  "count_other": "{{count}} Classes"
- },
- "attributes": {
-   "label": "Atributs de classificació",
-   "all": "Tots els atributs"
- }
+ }
}

Some files were not shown because too many files have changed in this diff