Miscellaneous Fixes (0.17 beta) (#21336)
* fix coral docs
* add note about sub label object classification with person
* Catch OSError for deleting classification image
* add docs for dummy camera debugging
* add to sidebar
* fix formatting
* fix
* avx instructions are required for classification
* break text on classification card to prevent button overflow
* Ensure there is no NameError when processing
* Don't use region for state classification models
* fix spelling
* Handle attribute based models
* Catch case of non-trained model that doesn't add infinite number of classification images
* Actually train object classification models automatically

---------

Co-authored-by: Nicolas Mowen <nickmowen213@gmail.com>
@@ -11,6 +11,8 @@ Object classification models are lightweight and run very fast on CPU. Inference

Training the model briefly uses a high amount of system resources for about 1–3 minutes per training run. On lower-power devices, training may take longer.

A CPU with AVX instructions is required for training and inference.
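If you are unsure whether your CPU exposes AVX, one quick check on a Linux host is to look for the `avx` flag in `/proc/cpuinfo`. The snippet below is an illustrative check only; it is not part of the Frigate docs or codebase.

```python
# Illustrative only: check whether the host CPU advertises AVX support (Linux).
def has_avx() -> bool:
    try:
        with open("/proc/cpuinfo") as f:
            # The "flags" lines list one token per CPU feature, e.g. "... sse4_2 avx avx2 ..."
            return any("avx" in line.split() for line in f if line.startswith("flags"))
    except OSError:
        return False


if __name__ == "__main__":
    print("AVX supported" if has_avx() else "AVX not supported")
```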
## Classes

Classes are the categories your model will learn to distinguish between. Each class represents a distinct visual category that the model will predict.
@@ -35,6 +37,12 @@ For object classification:

- Ideal when multiple attributes can coexist independently.
- Example: Detecting if a `person` in a construction yard is wearing a helmet or not.

:::note

A tracked object can only have a single sub label. If you are using Face Recognition and also configure an object classification model for `person` with the sub label type, the sub label may not be assigned as expected, because it depends on which enrichment completes its analysis first. Consider using the `attribute` type instead.

:::

## Assignment Requirements

Sub labels and attributes are only assigned when both conditions are met:
@@ -11,6 +11,8 @@ State classification models are lightweight and run very fast on CPU. Inference

Training the model briefly uses a high amount of system resources for about 1–3 minutes per training run. On lower-power devices, training may take longer.

A CPU with AVX instructions is required for training and inference.

## Classes

Classes are the different states an area on your camera can be in. Each class represents a distinct visual state that the model will learn to recognize.
@@ -146,16 +146,16 @@ detectors:

### EdgeTPU Supported Models

| Model                                 | Notes                                        |
| ------------------------------------- | -------------------------------------------- |
| [MobileNet v2](#ssdlite-mobilenet-v2) | Default model                                |
| [YOLOv9](#yolo-v9)                    | More accurate but slower than default model  |

| Model                   | Notes                                        |
| ----------------------- | -------------------------------------------- |
| [Mobiledet](#mobiledet) | Default model                                |
| [YOLOv9](#yolov9)       | More accurate but slower than default model  |

#### SSDLite MobileNet v2
#### Mobiledet

A TensorFlow Lite model is provided in the container at `/edgetpu_model.tflite` and is used by this detector type by default. To provide your own model, bind mount the file into the container and provide the path with `model.path`.

#### YOLO v9
#### YOLOv9

[YOLOv9](https://github.com/dbro/frigate-detector-edgetpu-yolo9/releases/download/v1.0/yolov9-s-relu6-best_320_int8_edgetpu.tflite) models that are compiled for TensorFlow Lite and properly quantized are supported, but not included by default. To provide your own model, bind mount the file into the container and provide the path with `model.path`. Note that the model may require a custom label file (e.g. [use this 17-label file](https://raw.githubusercontent.com/dbro/frigate-detector-edgetpu-yolo9/refs/heads/main/labels-coco17.txt) for the model linked above).
docs/docs/troubleshooting/dummy-camera.md (new file, 60 lines)
@@ -0,0 +1,60 @@
---
id: dummy-camera
title: Troubleshooting Detection
---

When investigating object detection or tracking problems, it can be helpful to replay an exported video as a temporary "dummy" camera. This lets you reproduce issues locally, iterate on configuration (detections, zones, enrichment settings), and capture logs and clips for analysis.

## When to use

- Replaying an exported clip to reproduce incorrect detections
- Testing configuration changes (model settings, trackers, filters) against a known clip
- Gathering deterministic logs and recordings for debugging or issue reports

## Example Config

Place the clip you want to replay in a location accessible to Frigate (for example `/media/frigate/` or the repository `debug/` folder when developing). Then add a temporary camera to your `config/config.yml` like this:

```yaml
cameras:
  test:
    ffmpeg:
      inputs:
        - path: /media/frigate/car-stopping.mp4
          input_args: -re -stream_loop -1 -fflags +genpts
          roles:
            - detect
    detect:
      enabled: true
    record:
      enabled: false
    snapshots:
      enabled: false
```

- `-re -stream_loop -1` tells `ffmpeg` to play the file in realtime and loop indefinitely, which is useful for long debugging sessions.
- `-fflags +genpts` helps generate presentation timestamps when they are missing from the file.

## Steps

1. Export or copy the clip you want to replay to the Frigate host (e.g., `/media/frigate/` or `debug/clips/`).
2. Add the temporary camera to `config/config.yml` (example above). Use a unique name such as `test` or `replay_camera` so it's easy to remove later.
   - If you're debugging a specific camera, copy the settings from that camera (frame rate, model/enrichment settings, zones, etc.) into the temporary camera so the replay closely matches the original environment. Leave `record` and `snapshots` disabled unless you are specifically debugging recording or snapshot behavior.
3. Restart Frigate.
4. Observe the Debug view in the UI and the logs as the clip is replayed. Watch detections, zones, or whatever feature you're debugging, and note any errors in the logs that reproduce the issue.
5. Iterate on camera or enrichment settings (model, fps, zones, filters) and re-check the replay until the behavior is resolved.
6. Remove the temporary camera from your config after debugging to avoid spurious telemetry or recordings.

## Variables to consider in object tracking

- The exported video will not always line up exactly with how it originally ran through Frigate (or even with the previous loop). Different frames may be used on replay, which can change detections and tracking.
- Motion detection depends on the frames used; small frame shifts can change motion regions and therefore what gets passed to the detector.
- Object detection is not deterministic: models and post-processing can yield different results across runs, so you may not get identical detections or track IDs every time.

When debugging, treat the replay as a close approximation rather than a byte-for-byte reproduction. Capture multiple runs, enable recording if helpful, and examine logs and saved event clips to understand variability.

## Troubleshooting

- No video: verify the path is correct and accessible from the Frigate process/container (see the sanity check below).
- FFmpeg errors: check the log output for ffmpeg-specific flags and adjust `input_args` accordingly for your file/container. You may also need to disable hardware acceleration (`hwaccel_args: ""`) for the dummy camera.
- No detections: confirm the camera `roles` include `detect`, and that the model/detector configuration is enabled.
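If the replay camera shows no video, it can help to confirm the clip itself is readable before digging into Frigate. The snippet below is an illustrative sanity check, not part of Frigate; it assumes `opencv-python` is installed and uses a hypothetical clip path.

```python
# Illustrative only: confirm a clip opens and report its basic properties.
import cv2

clip_path = "/media/frigate/car-stopping.mp4"  # adjust to your clip

cap = cv2.VideoCapture(clip_path)
if not cap.isOpened():
    print(f"Could not open {clip_path}; check the path and container mounts")
else:
    fps = cap.get(cv2.CAP_PROP_FPS)
    frames = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
    width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
    height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
    print(f"{width}x{height} @ {fps:.1f} fps, ~{frames} frames")
    cap.release()
```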
@@ -132,6 +132,7 @@ const sidebars: SidebarsConfig = {
      "troubleshooting/gpu",
      "troubleshooting/edgetpu",
      "troubleshooting/memory",
      "troubleshooting/dummy-camera",
    ],
    Development: [
      "development/contributing",
@@ -229,28 +229,34 @@ class CustomStateClassificationProcessor(RealTimeProcessorApi):
        if not should_run:
            return

        x, y, x2, y2 = calculate_region(
            frame.shape,
            crop[0],
            crop[1],
            crop[2],
            crop[3],
            224,
            1.0,
        )

        rgb = cv2.cvtColor(frame, cv2.COLOR_YUV2RGB_I420)
        frame = rgb[
            y:y2,
            x:x2,
        ]
        height, width = rgb.shape[:2]

        if frame.shape != (224, 224):
            try:
                resized_frame = cv2.resize(frame, (224, 224))
            except Exception:
                logger.warning("Failed to resize image for state classification")
                return
        # Convert normalized crop coordinates to pixel values
        x1 = int(camera_config.crop[0] * width)
        y1 = int(camera_config.crop[1] * height)
        x2 = int(camera_config.crop[2] * width)
        y2 = int(camera_config.crop[3] * height)

        # Clip coordinates to frame boundaries
        x1 = max(0, min(x1, width))
        y1 = max(0, min(y1, height))
        x2 = max(0, min(x2, width))
        y2 = max(0, min(y2, height))

        if x2 <= x1 or y2 <= y1:
            logger.warning(
                f"Invalid crop coordinates for {camera}: [{x1}, {y1}, {x2}, {y2}]"
            )
            return

        frame = rgb[y1:y2, x1:x2]

        try:
            resized_frame = cv2.resize(frame, (224, 224))
        except Exception:
            logger.warning("Failed to resize image for state classification")
            return

        if self.interpreter is None:
            # When interpreter is None, always save (score is 0.0, which is < 1.0)
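For reference, the new crop handling in this hunk can be read as a small standalone routine: convert the camera's normalized `[x1, y1, x2, y2]` crop into pixel coordinates, clip to the frame, and resize to the model's 224×224 input. The sketch below uses a hypothetical helper name and is not the actual Frigate function.

```python
# Minimal sketch of the normalized-crop handling shown above (hypothetical helper).
import cv2
import numpy as np


def extract_state_crop(
    rgb: np.ndarray, crop: tuple[float, float, float, float]
) -> np.ndarray | None:
    """Crop a normalized [x1, y1, x2, y2] region from an RGB frame, resized to 224x224."""
    height, width = rgb.shape[:2]

    # Convert normalized coordinates to pixels and clip to the frame boundaries.
    x1 = max(0, min(int(crop[0] * width), width))
    y1 = max(0, min(int(crop[1] * height), height))
    x2 = max(0, min(int(crop[2] * width), width))
    y2 = max(0, min(int(crop[3] * height), height))

    if x2 <= x1 or y2 <= y1:
        return None  # degenerate crop, nothing to classify

    try:
        return cv2.resize(rgb[y1:y2, x1:x2], (224, 224))
    except cv2.error:
        return None
```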
@@ -513,6 +519,13 @@ class CustomObjectClassificationProcessor(RealTimeProcessorApi):
                0.0,
                max_files=save_attempts,
            )

            # Still track history even when model doesn't exist to respect MAX_OBJECT_CLASSIFICATIONS
            # Add an entry with "unknown" label so the history limit is enforced
            if object_id not in self.classification_history:
                self.classification_history[object_id] = []

            self.classification_history[object_id].append(("unknown", 0.0, now))
            return

        input = np.expand_dims(resized_crop, axis=0)
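The history bookkeeping above caps how many classification attempts are kept per tracked object, even before a model has been trained. A `collections.deque` with `maxlen` is one compact way to express the same idea; the cap value and names below are illustrative, not Frigate's actual constants.

```python
# Illustrative sketch of a bounded per-object classification history.
from collections import deque

MAX_OBJECT_CLASSIFICATIONS = 10  # hypothetical cap; Frigate defines its own limit

classification_history: dict[str, deque] = {}


def record_attempt(object_id: str, label: str, score: float, timestamp: float) -> None:
    history = classification_history.setdefault(
        object_id, deque(maxlen=MAX_OBJECT_CLASSIFICATIONS)
    )
    # Oldest entries are discarded automatically once maxlen is reached.
    history.append((label, score, timestamp))
```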
@@ -654,5 +667,5 @@ def write_classification_attempt(

        if len(files) > max_files:
            os.unlink(os.path.join(folder, files[-1]))
    except FileNotFoundError:
    except (FileNotFoundError, OSError):
        pass
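The function above prunes the oldest saved attempt image once the folder exceeds `max_files`, and now tolerates any `OSError` (of which `FileNotFoundError` is a subclass) if a file disappears mid-delete. A self-contained sketch of that pruning pattern, with hypothetical names:

```python
# Illustrative sketch of pruning a folder of attempt images down to max_files.
import os


def prune_attempts(folder: str, max_files: int) -> None:
    try:
        files = sorted(
            os.listdir(folder),
            key=lambda name: os.path.getmtime(os.path.join(folder, name)),
            reverse=True,  # newest first
        )
        for name in files[max_files:]:
            os.unlink(os.path.join(folder, name))
    except OSError:
        # The folder or a file may vanish between listing and deletion; ignore.
        pass
```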
@@ -74,7 +74,7 @@
  },
  "renameCategory": {
    "title": "Rename Class",
    "desc": "Enter a new name for {{name}}. You will be required to retrain the model for the name change to take affect."
    "desc": "Enter a new name for {{name}}. You will be required to retrain the model for the name change to take effect."
  },
  "description": {
    "invalidName": "Invalid name. Names can only include letters, numbers, spaces, apostrophes, underscores, and hyphens."
@@ -4,8 +4,8 @@ import { cn } from "@/lib/utils";
import {
  ClassificationItemData,
  ClassificationThreshold,
  ClassifiedEvent,
} from "@/types/classification";
import { Event } from "@/types/event";
import { forwardRef, useMemo, useRef, useState } from "react";
import { isDesktop, isIOS, isMobile, isMobileOnly } from "react-device-detect";
import { useTranslation } from "react-i18next";
@@ -160,7 +160,7 @@ export const ClassificationCard = forwardRef<
          data.score != undefined ? "text-xs" : "text-sm",
        )}
      >
        <div className="smart-capitalize">
        <div className="break-all smart-capitalize">
          {data.name == "unknown"
            ? t("details.unknown")
            : data.name == "none"
@@ -190,7 +190,7 @@ export const ClassificationCard = forwardRef<

type GroupedClassificationCardProps = {
  group: ClassificationItemData[];
  event?: Event;
  classifiedEvent?: ClassifiedEvent;
  threshold?: ClassificationThreshold;
  selectedItems: string[];
  i18nLibrary: string;

@@ -201,7 +201,7 @@ type GroupedClassificationCardProps = {
};
export function GroupedClassificationCard({
  group,
  event,
  classifiedEvent,
  threshold,
  selectedItems,
  i18nLibrary,
@@ -236,14 +236,15 @@ export function GroupedClassificationCard({
    const bestTyped: ClassificationItemData = best;
    return {
      ...bestTyped,
      name: event
        ? event.sub_label && event.sub_label !== "none"
          ? event.sub_label
          : t(noClassificationLabel)
        : bestTyped.name,
      score: event?.data?.sub_label_score,
      name:
        classifiedEvent?.label && classifiedEvent.label !== "none"
          ? classifiedEvent.label
          : classifiedEvent
            ? t(noClassificationLabel)
            : bestTyped.name,
      score: classifiedEvent?.score,
    };
  }, [group, event, noClassificationLabel, t]);
  }, [group, classifiedEvent, noClassificationLabel, t]);

  const bestScoreStatus = useMemo(() => {
    if (!bestItem?.score || !threshold) {
@@ -329,36 +330,38 @@ export function GroupedClassificationCard({
        )}
      >
        <ContentTitle className="flex items-center gap-2 font-normal capitalize">
          {event?.sub_label && event.sub_label !== "none"
            ? event.sub_label
          {classifiedEvent?.label && classifiedEvent.label !== "none"
            ? classifiedEvent.label
            : t(noClassificationLabel)}
          {event?.sub_label && event.sub_label !== "none" && (
            <div className="flex items-center gap-1">
              <div
                className={cn(
                  "",
                  bestScoreStatus == "match" && "text-success",
                  bestScoreStatus == "potential" && "text-orange-400",
                  bestScoreStatus == "unknown" && "text-danger",
                )}
              >{`${Math.round((event.data.sub_label_score || 0) * 100)}%`}</div>
              <Popover>
                <PopoverTrigger asChild>
                  <button
                    className="focus:outline-none"
                    aria-label={t("details.scoreInfo", {
                      ns: i18nLibrary,
                    })}
                  >
                    <LuInfo className="size-3" />
                  </button>
                </PopoverTrigger>
                <PopoverContent className="w-80 text-sm">
                  {t("details.scoreInfo", { ns: i18nLibrary })}
                </PopoverContent>
              </Popover>
            </div>
          )}
          {classifiedEvent?.label &&
            classifiedEvent.label !== "none" &&
            classifiedEvent.score !== undefined && (
              <div className="flex items-center gap-1">
                <div
                  className={cn(
                    "",
                    bestScoreStatus == "match" && "text-success",
                    bestScoreStatus == "potential" && "text-orange-400",
                    bestScoreStatus == "unknown" && "text-danger",
                  )}
                >{`${Math.round((classifiedEvent.score || 0) * 100)}%`}</div>
                <Popover>
                  <PopoverTrigger asChild>
                    <button
                      className="focus:outline-none"
                      aria-label={t("details.scoreInfo", {
                        ns: i18nLibrary,
                      })}
                    >
                      <LuInfo className="size-3" />
                    </button>
                  </PopoverTrigger>
                  <PopoverContent className="w-80 text-sm">
                    {t("details.scoreInfo", { ns: i18nLibrary })}
                  </PopoverContent>
                </Popover>
              </div>
            )}
        </ContentTitle>
        <ContentDescription className={cn("", isMobile && "px-2")}>
          {time && (
@@ -372,14 +375,14 @@ export function GroupedClassificationCard({
        </div>
        {isDesktop && (
          <div className="flex flex-row justify-between">
            {event && (
            {classifiedEvent && (
              <Tooltip>
                <TooltipTrigger asChild>
                  <div
                    className="cursor-pointer"
                    tabIndex={-1}
                    onClick={() => {
                      navigate(`/explore?event_id=${event.id}`);
                      navigate(`/explore?event_id=${classifiedEvent.id}`);
                    }}
                  >
                    <LuSearch className="size-4 text-secondary-foreground" />
@@ -186,15 +186,17 @@ export default function Step3ChooseExamples({
      await Promise.all(emptyFolderPromises);

      // Step 3: Determine if we should train
      // For state models, we need ALL states to have examples
      // For object models, we need at least 2 classes with images
      // For state models, we need ALL states to have examples (at least 2 states)
      // For object models, we need at least 1 class with images (the rest go to "none")
      const allStatesHaveExamplesForTraining =
        step1Data.modelType !== "state" ||
        step1Data.classes.every((className) =>
          classesWithImages.has(className),
        );
      const shouldTrain =
        allStatesHaveExamplesForTraining && classesWithImages.size >= 2;
        step1Data.modelType === "object"
          ? classesWithImages.size >= 1
          : allStatesHaveExamplesForTraining && classesWithImages.size >= 2;

      // Step 4: Kick off training only if we have enough classes with images
      if (shouldTrain) {
@@ -68,7 +68,10 @@ import {
  ClassificationCard,
  GroupedClassificationCard,
} from "@/components/card/ClassificationCard";
import { ClassificationItemData } from "@/types/classification";
import {
  ClassificationItemData,
  ClassifiedEvent,
} from "@/types/classification";

export default function FaceLibrary() {
  const { t } = useTranslation(["views/faceLibrary"]);
@@ -922,10 +925,22 @@ function FaceAttemptGroup({
    [onRefresh, t],
  );

  // Create ClassifiedEvent from Event (face recognition uses sub_label)
  const classifiedEvent: ClassifiedEvent | undefined = useMemo(() => {
    if (!event || !event.sub_label || event.sub_label === "none") {
      return undefined;
    }
    return {
      id: event.id,
      label: event.sub_label,
      score: event.data?.sub_label_score,
    };
  }, [event]);

  return (
    <GroupedClassificationCard
      group={group}
      event={event}
      classifiedEvent={classifiedEvent}
      threshold={threshold}
      selectedItems={selectedFaces}
      i18nLibrary="views/faceLibrary"
@@ -21,6 +21,12 @@ export type ClassificationThreshold = {
  unknown: number;
};

export type ClassifiedEvent = {
  id: string;
  label?: string;
  score?: number;
};

export type ClassificationDatasetResponse = {
  categories: {
    [id: string]: string[];
@@ -24,5 +24,12 @@ export interface Event {
    type: "object" | "audio" | "manual";
    recognized_license_plate?: string;
    path_data: [number[], number][];
    // Allow arbitrary keys for attributes (e.g., model_name, model_name_score)
    [key: string]:
      | number
      | number[]
      | string
      | [number[], number][]
      | undefined;
  };
}
@@ -62,6 +62,7 @@ import useApiFilter from "@/hooks/use-api-filter";
import {
  ClassificationDatasetResponse,
  ClassificationItemData,
  ClassifiedEvent,
  TrainFilter,
} from "@/types/classification";
import {
@@ -1033,6 +1034,45 @@ function ObjectTrainGrid({
    };
  }, [model]);

  // Helper function to create ClassifiedEvent from Event
  const createClassifiedEvent = useCallback(
    (event: Event | undefined): ClassifiedEvent | undefined => {
      if (!event || !model.object_config) {
        return undefined;
      }

      const classificationType = model.object_config.classification_type;

      if (classificationType === "attribute") {
        // For attribute type, look at event.data[model.name]
        const attributeValue = event.data[model.name] as string | undefined;
        const attributeScore = event.data[`${model.name}_score`] as
          | number
          | undefined;

        if (attributeValue && attributeValue !== "none") {
          return {
            id: event.id,
            label: attributeValue,
            score: attributeScore,
          };
        }
      } else {
        // For sub_label type, use event.sub_label
        if (event.sub_label && event.sub_label !== "none") {
          return {
            id: event.id,
            label: event.sub_label,
            score: event.data?.sub_label_score,
          };
        }
      }

      return undefined;
    },
    [model],
  );

  // selection

  const [selectedEvent, setSelectedEvent] = useState<Event>();
@@ -1095,11 +1135,13 @@ function ObjectTrainGrid({
      >
        {Object.entries(groups).map(([key, group]) => {
          const event = events?.find((ev) => ev.id == key);
          const classifiedEvent = createClassifiedEvent(event);

          return (
            <div key={key} className="aspect-square w-full">
              <GroupedClassificationCard
                group={group}
                event={event}
                classifiedEvent={classifiedEvent}
                threshold={threshold}
                selectedItems={selectedImages}
                i18nLibrary="views/classificationModel"