Miscellaneous fixes (#21373)

* Send preferred language for report service

* make object lifecycle scrollable in tracking details

* fix info popovers in live camera drawer

* ensure metrics are initialized if genai is enabled

* docs

* ollama cloud model docs

* Ensure object descriptions get cleaned up

---------

Co-authored-by: Josh Hawkins <32435876+hawkeye217@users.noreply.github.com>
Author: Nicolas Mowen
Date: 2025-12-20 17:30:34 -07:00
Committed by: GitHub
Parent: 8a4d5f34da
Commit: 54f4af3c6a

11 changed files with 144 additions and 110 deletions

View File

@@ -33,9 +33,9 @@ For object classification:
   - Example: `cat` → `Leo`, `Charlie`, `None`.
 - **Attribute**:
-  - Added as metadata to the object (visible in /events): `<model_name>: <predicted_value>`.
+  - Added as metadata to the object, visible in the Tracked Object Details pane in Explore, `frigate/events` MQTT messages, and the HTTP API response as `<model_name>: <predicted_value>`.
   - Ideal when multiple attributes can coexist independently.
-  - Example: Detecting if a `person` in a construction yard is wearing a helmet or not.
+  - Example: Detecting if a `person` in a construction yard is wearing a helmet or not, and if they are wearing a yellow vest or not.

 :::note
@@ -81,6 +81,8 @@ classification:
       classification_type: sub_label # or: attribute
 ```
+
+An optional config, `save_attempts`, can be set as a key under the model name. This defines the number of classification attempts to save in the Recent Classifications tab. For object classification models, the default is 200.

 ## Training the model

 Creating and training the model is done within the Frigate UI using the `Classification` page. The process consists of two steps:
@@ -89,6 +91,8 @@ Creating and training the model is done within the Frigate UI using the `Classif
 Enter a name for your model, select the object label to classify (e.g., `person`, `dog`, `car`), choose the classification type (sub label or attribute), and define your classes. Include a `none` class for objects that don't fit any specific category.
+
+For example: To classify your two cats, create a model named "Our Cats" and create two classes, "Charlie" and "Leo". Create a third class, "none", for other neighborhood cats that are not your own.

 ### Step 2: Assign Training Examples

 The system will automatically generate example images from detected objects matching your selected label. You'll be guided through each class one at a time to select which images represent that class. Any images not assigned to a specific class will automatically be assigned to `none` when you complete the last class. Once all images are processed, training will begin automatically.
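The `save_attempts` option described above can also be sketched in config form. A minimal illustration, assuming a custom object classification model named `our_cats`; the model name and the value 300 are illustrative, and the exact nesting of keys should be checked against the full reference config:

```yaml
classification:
  custom:
    our_cats:
      # Keep the last 300 classification attempts instead of the default 200
      save_attempts: 300
      classification_type: sub_label # or: attribute
```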

View File

@@ -48,6 +48,8 @@ classification:
       crop: [0, 180, 220, 400]
 ```
+
+An optional config, `save_attempts`, can be set as a key under the model name. This defines the number of classification attempts to save in the Recent Classifications tab. For state classification models, the default is 100.

 ## Training the model

 Creating and training the model is done within the Frigate UI using the `Classification` page. The process consists of three steps:
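As a hedged sketch only, a state classification model raising `save_attempts` above the default of 100 might look like the following; the model name `garage_door` and the value 150 are illustrative, and only `save_attempts` and `crop` are taken from this page:

```yaml
classification:
  custom:
    garage_door:
      # Keep the last 150 attempts instead of the default 100 for state models
      save_attempts: 150
      crop: [0, 180, 220, 400]
```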

View File

@@ -56,7 +56,7 @@ Parallel requests also come with some caveats. You will need to set `OLLAMA_NUM_
 ### Supported Models

-You must use a vision capable model with Frigate. Current model variants can be found [in their model library](https://ollama.com/library). At the time of writing, this includes `llava`, `llava-llama3`, `llava-phi3`, and `moondream`. Note that Frigate will not automatically download the model you specify in your config, you must download the model to your local instance of Ollama first i.e. by running `ollama pull llava:7b` on your Ollama server/Docker container. Note that the model specified in Frigate's config must match the downloaded model tag.
+You must use a vision capable model with Frigate. Current model variants can be found [in their model library](https://ollama.com/library). Note that Frigate will not automatically download the model you specify in your config; you must download the model to your local instance of Ollama first, e.g. by running `ollama pull llava:7b` on your Ollama server/Docker container. Note that the model specified in Frigate's config must match the downloaded model tag.

 :::note
@@ -64,6 +64,10 @@ You should have at least 8 GB of RAM available (or VRAM if running on GPU) to ru
 :::

+#### Ollama Cloud models
+
+Ollama also supports [cloud models](https://ollama.com/cloud), where your local Ollama instance handles requests from Frigate, but model inference is performed in the cloud. Set up Ollama locally, sign in with your Ollama account, and specify the cloud model name in your Frigate config. For more details, see the Ollama cloud model [docs](https://docs.ollama.com/cloud).
+
 ### Configuration

 ```yaml
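A hedged sketch of what the provider section might look like when pointing Frigate at a local Ollama instance serving a cloud model; the model tag `qwen3-vl:235b-cloud` is illustrative (use a vision-capable cloud model you have pulled and signed in for), and enabling GenAI for objects or review items is configured separately:

```yaml
genai:
  provider: ollama
  base_url: http://localhost:11434
  model: qwen3-vl:235b-cloud
```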

View File

@@ -100,6 +100,10 @@ class FrigateApp:
         )
         if (
             config.semantic_search.enabled
+            or any(
+                c.objects.genai.enabled or c.review.genai.enabled
+                for c in config.cameras.values()
+            )
             or config.lpr.enabled
             or config.face_recognition.enabled
             or len(config.classification.custom) > 0
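The gating condition above — start the embeddings/metrics machinery if any consumer needs it — can be exercised with a standalone sketch. The stub classes and the `needs_embeddings` helper below are hypothetical names for illustration, not Frigate's actual classes:

```python
from dataclasses import dataclass, field


@dataclass
class GenAIConfig:
    enabled: bool = False


@dataclass
class CameraConfig:
    # Stand-ins for camera.objects.genai and camera.review.genai
    objects_genai: GenAIConfig = field(default_factory=GenAIConfig)
    review_genai: GenAIConfig = field(default_factory=GenAIConfig)


def needs_embeddings(
    semantic_search: bool,
    cameras: dict[str, CameraConfig],
    lpr: bool = False,
    face_recognition: bool = False,
    custom_models: int = 0,
) -> bool:
    # Mirrors the condition in the diff: any single consumer being
    # enabled (including per-camera genai) is enough to initialize.
    return (
        semantic_search
        or any(
            c.objects_genai.enabled or c.review_genai.enabled
            for c in cameras.values()
        )
        or lpr
        or face_recognition
        or custom_models > 0
    )
```

The per-camera `any(...)` clause is the fix this commit adds: previously a setup with only GenAI enabled would skip initialization.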

View File

@@ -131,6 +131,8 @@ class ObjectDescriptionProcessor(PostProcessorApi):
             )
         ):
             self._process_genai_description(event, camera_config, thumbnail)
+        else:
+            self.cleanup_event(event.id)

     def __regenerate_description(self, event_id: str, source: str, force: bool) -> None:
         """Regenerate the description for an event."""
@@ -204,6 +206,17 @@ class ObjectDescriptionProcessor(PostProcessorApi):
             )
             return None

+    def cleanup_event(self, event_id: str) -> None:
+        """Clean up tracked event data to prevent memory leaks.
+
+        This should be called when an event ends, regardless of whether
+        genai processing is triggered.
+        """
+        if event_id in self.tracked_events:
+            del self.tracked_events[event_id]
+        if event_id in self.early_request_sent:
+            del self.early_request_sent[event_id]
+
     def _read_and_crop_snapshot(self, event: Event) -> bytes | None:
         """Read, decode, and crop the snapshot image."""
"""Read, decode, and crop the snapshot image.""" """Read, decode, and crop the snapshot image."""
@@ -299,9 +312,8 @@ class ObjectDescriptionProcessor(PostProcessorApi):
             ),
         ).start()

-        # Delete tracked events based on the event_id
-        if event.id in self.tracked_events:
-            del self.tracked_events[event.id]
+        # Clean up tracked events and early request state
+        self.cleanup_event(event.id)

     def _genai_embed_description(self, event: Event, thumbnails: list[bytes]) -> None:
         """Embed the description for an event."""

View File

@@ -311,6 +311,7 @@ class ReviewDescriptionProcessor(PostProcessorApi):
                     start_ts,
                     end_ts,
                     events_with_context,
+                    self.config.review.genai.preferred_language,
                     self.config.review.genai.debug_save_thumbnails,
                 )
             else:

View File

@@ -522,6 +522,8 @@ class EmbeddingMaintainer(threading.Thread):
                     )
                 elif isinstance(processor, ObjectDescriptionProcessor):
                     if not updated_db:
+                        # Still need to cleanup tracked events even if not processing
+                        processor.cleanup_event(event_id)
                         continue

                     processor.process_data(

View File

@@ -178,6 +178,7 @@ Each line represents a detection state, not necessarily unique individuals. Pare
         start_ts: float,
         end_ts: float,
         events: list[dict[str, Any]],
+        preferred_language: str | None,
         debug_save: bool,
     ) -> str | None:
         """Generate a summary of review item descriptions over a period of time."""
@@ -232,6 +233,9 @@ Guidelines:
         for event in events:
             timeline_summary_prompt += f"\n{event}\n"

+        if preferred_language:
+            timeline_summary_prompt += f"\nProvide your answer in {preferred_language}"
+
         if debug_save:
             with open(
                 os.path.join(

View File

@@ -599,9 +599,14 @@ export default function SearchDetailDialog({
         <Content
           ref={isDesktop ? dialogContentRef : undefined}
           className={cn(
-            "scrollbar-container overflow-y-auto",
-            isDesktop && "max-h-[95dvh] max-w-[85%] xl:max-w-[70%]",
-            isMobile && "flex h-full flex-col px-4",
+            isDesktop && [
+              "max-h-[95dvh] max-w-[85%] xl:max-w-[70%]",
+              pageToggle === "tracking_details"
+                ? "flex flex-col overflow-hidden"
+                : "scrollbar-container overflow-y-auto",
+            ],
+            isMobile &&
+              "scrollbar-container flex h-full flex-col overflow-y-auto px-4",
           )}
           onEscapeKeyDown={(event) => {
             if (isPopoverOpen) {

View File

@@ -526,7 +526,7 @@ export function TrackingDetails({
             <div
               className={cn(
-                "flex items-center justify-center",
+                "flex items-start justify-center",
                 isDesktop && "overflow-hidden",
                 cameraAspect === "tall" ? "max-h-[50dvh] lg:max-h-[70dvh]" : "w-full",
                 cameraAspect === "tall" && isMobileOnly && "w-full",
@@ -622,7 +622,10 @@ export function TrackingDetails({
         <div
           className={cn(
-            isDesktop && "justify-between overflow-hidden lg:basis-2/5",
+            isDesktop && "justify-start overflow-hidden",
+            aspectRatio > 1 && aspectRatio < 1.5
+              ? "lg:basis-3/5"
+              : "lg:basis-2/5",
           )}
         >
           {isDesktop && tabs && (
@@ -632,121 +635,114 @@ export function TrackingDetails({
           )}
           <div
             className={cn(
-              isDesktop && "scrollbar-container h-full overflow-y-auto",
+              isDesktop && "scrollbar-container max-h-[70vh] overflow-y-auto",
             )}
           >
             {config?.cameras[event.camera]?.onvif.autotracking
               .enabled_in_config && (
-              <div className="mb-2 ml-3 text-sm text-danger">
+              <div className="mb-4 ml-3 text-sm text-danger">
                 {t("trackingDetails.autoTrackingTips")}
               </div>
             )}
-            <div className="mt-4">
-              <div
-                className={cn("rounded-md bg-background_alt px-0 py-3 md:px-2")}
-              >
+            <div className={cn("rounded-md bg-background_alt px-0 py-3 md:px-2")}>
               <div className="flex w-full items-center justify-between">
                 <div
                   className="flex items-center gap-2 font-medium"
                   onClick={(e) => {
                     e.stopPropagation();
                     // event.start_time is detect time, convert to record
                     handleSeekToTime(
                       (event.start_time ?? 0) + annotationOffset / 1000,
                     );
                   }}
                   role="button"
                 >
                   <div
                     className={cn(
                       "relative ml-2 rounded-full bg-muted-foreground p-2",
                     )}
                   >
                     {getIconForLabel(
                       event.sub_label ? event.label + "-verified" : event.label,
                       "size-4 text-white",
                     )}
                   </div>
                   <div className="flex items-center gap-2">
                     <span className="capitalize">{label}</span>
                     <div className="md:text-md flex items-center text-xs text-secondary-foreground">
                       {formattedStart ?? ""}
                       {event.end_time != null ? (
                         <> - {formattedEnd}</>
                       ) : (
                         <div className="inline-block">
                           <ActivityIndicator className="ml-3 size-4" />
                         </div>
                       )}
                     </div>
                     {event.data?.recognized_license_plate && (
                       <>
                         <span className="text-secondary-foreground">·</span>
                         <div className="text-sm text-secondary-foreground">
                           <Link
                             to={`/explore?recognized_license_plate=${event.data.recognized_license_plate}`}
                             className="text-sm"
                           >
                             {event.data.recognized_license_plate}
                           </Link>
                         </div>
                       </>
                     )}
                   </div>
                 </div>
               </div>
               <div className="mt-2">
                 {!eventSequence ? (
                   <ActivityIndicator className="size-2" size={2} />
                 ) : eventSequence.length === 0 ? (
                   <div className="py-2 text-muted-foreground">
                     {t("detail.noObjectDetailData", { ns: "views/events" })}
                   </div>
                 ) : (
-                  <div
-                    className="-pb-2 relative mx-0"
-                    ref={timelineContainerRef}
-                  >
+                  <div className="-pb-2 relative mx-0" ref={timelineContainerRef}>
                     <div
                       className="absolute -top-2 left-6 z-0 w-0.5 -translate-x-1/2 bg-secondary-foreground"
                       style={{ bottom: lineBottomOffsetPx }}
                     />
                     {isWithinEventRange && (
                       <div
                         className="absolute left-6 z-[5] w-0.5 -translate-x-1/2 bg-selected transition-all duration-300"
                         style={{
                           top: `${lineTopOffsetPx}px`,
                           height: `${blueLineHeightPx}px`,
                         }}
                       />
                     )}
                     <div className="space-y-2">
                       {eventSequence.map((item, idx) => {
                         return (
                           <div
                             key={`${item.timestamp}-${item.source_id ?? ""}-${idx}`}
                             ref={(el) => {
                               rowRefs.current[idx] = el;
                             }}
                           >
                             <LifecycleIconRow
                               item={item}
                               event={event}
                               onClick={() => handleLifecycleClick(item)}
                               setSelectedZone={setSelectedZone}
                               getZoneColor={getZoneColor}
                               effectiveTime={effectiveTime}
                               isTimelineActive={isWithinEventRange}
                             />
                           </div>
                         );
                       })}
                     </div>
                   </div>
                 )}
               </div>
             </div>
-            </div>
           </div>
         </div>

View File

@@ -1444,7 +1444,7 @@ function FrigateCameraFeatures({
                     ns: "components/dialog",
                   })}
                 </div>
-                <Popover>
+                <Popover modal={true}>
                   <PopoverTrigger asChild>
                     <div className="cursor-pointer p-0">
                       <LuInfo className="size-4" />
@@ -1531,7 +1531,7 @@ function FrigateCameraFeatures({
                 <>
                   <LuX className="size-4 text-danger" />
                   <div>{t("stream.audio.unavailable")}</div>
-                  <Popover>
+                  <Popover modal={true}>
                     <PopoverTrigger asChild>
                       <div className="cursor-pointer p-0">
                         <LuInfo className="size-4" />
@@ -1575,7 +1575,7 @@ function FrigateCameraFeatures({
                 <>
                   <LuX className="size-4 text-danger" />
                   <div>{t("stream.twoWayTalk.unavailable")}</div>
-                  <Popover>
+                  <Popover modal={true}>
                     <PopoverTrigger asChild>
                       <div className="cursor-pointer p-0">
                         <LuInfo className="size-4" />