Various Tweaks (#20742)
* Pull context size from OpenAI models
* Adjust wording based on type of model
* Instruct to not use parentheses
* Simplify genai config
* Don't use GPU for training
@@ -10,7 +10,6 @@ Object classification allows you to train a custom MobileNetV2 classification mo
Object classification models are lightweight and run very fast on CPU. Inference should be usable on virtually any machine that can run Frigate.
Training the model does briefly use a high amount of system resources for about 1–3 minutes per training run. On lower-power devices, training may take longer.
When running the `-tensorrt` image, Nvidia GPUs will automatically be used to accelerate training.
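For a sense of what training such a model involves, here is a minimal, hypothetical sketch of fine-tuning a MobileNetV2 classifier with Keras. This is not Frigate's actual training pipeline; the `train_data` directory layout, image size, and epoch count are illustrative assumptions.

```python
# Hypothetical sketch: fine-tuning a small MobileNetV2 classifier with Keras.
# NOT Frigate's training code; it only illustrates why such a model is
# lightweight and quick to train.
import tensorflow as tf

IMG_SIZE = (224, 224)  # assumed input size for illustration

# Assumed layout: train_data/<class_name>/*.jpg
train_ds = tf.keras.utils.image_dataset_from_directory(
    "train_data", image_size=IMG_SIZE, batch_size=32
)
num_classes = len(train_ds.class_names)

# Frozen MobileNetV2 backbone plus a small trainable classification head.
base = tf.keras.applications.MobileNetV2(
    input_shape=IMG_SIZE + (3,), include_top=False, weights="imagenet"
)
base.trainable = False

model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 127.5, offset=-1),  # MobileNetV2 expects [-1, 1]
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(num_classes, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# A few epochs are typically enough for a small dataset, which is why a
# training run only takes a few minutes.
model.fit(train_ds, epochs=5)
```

Because only the small classification head is trained while the backbone stays frozen, a run over a modest dataset finishes in minutes, which lines up with the resource note above.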
## Classes
@@ -10,7 +10,6 @@ State classification allows you to train a custom MobileNetV2 classification mod
State classification models are lightweight and run very fast on CPU. Inference should be usable on virtually any machine that can run Frigate.
Training the model does briefly use a high amount of system resources for about 1–3 minutes per training run. On lower-power devices, training may take longer.
When running the `-tensorrt` image, Nvidia GPUs will automatically be used to accelerate training.
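To make the GPU note concrete, the snippet below shows the generic TensorFlow mechanism for detecting an Nvidia GPU and, if desired, forcing training back onto the CPU. This illustrates standard TensorFlow behavior only and is not a description of Frigate's internals.

```python
# Hypothetical sketch: checking for and optionally disabling GPU training in
# TensorFlow. Generic TensorFlow behavior, not Frigate's code.
import tensorflow as tf

gpus = tf.config.list_physical_devices("GPU")
if gpus:
    print(f"Training would be accelerated on: {[gpu.name for gpu in gpus]}")
else:
    print("No GPU visible; training falls back to CPU.")

# Force CPU-only training even when a GPU is present
# (must be called before any tensors are placed on the GPU):
tf.config.set_visible_devices([], "GPU")
```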
## Classes