Various Tweaks (#20742)

* Pull context size from OpenAI models

* Adjust wording based on type of model

* Instruct to not use parentheses

* Simplify genai config

* Don't use GPU for training
Commit 338b681ed0 by Nicolas Mowen, 2025-10-31 12:40:31 -06:00, committed by GitHub. Parent: 685f2c5030.
8 changed files with 39 additions and 10 deletions


@@ -10,7 +10,6 @@ Object classification allows you to train a custom MobileNetV2 classification mo
Object classification models are lightweight and run very fast on CPU. Inference should be usable on virtually any machine that can run Frigate.
Training the model temporarily uses a high amount of system resources, typically for about 13 minutes per training run. On lower-power devices, training may take longer.
When running the `-tensorrt` image, Nvidia GPUs will automatically be used to accelerate training.
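As the doc text notes, GPU acceleration for training is only attempted when running the `-tensorrt` image variant. A minimal compose-file sketch of selecting that image with an Nvidia GPU exposed to the container (the image tag and device reservation follow the usual Frigate setup and are assumptions here, not part of this commit):

```yaml
services:
  frigate:
    # Assumed tag: the TensorRT variant of the stable Frigate image
    image: ghcr.io/blakeblackshear/frigate:stable-tensorrt
    deploy:
      resources:
        reservations:
          devices:
            # Reserve one Nvidia GPU for the container
            - driver: nvidia
              count: 1
              capabilities: [gpu]
```

With a non-`-tensorrt` image, training falls back to CPU regardless of available hardware.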
## Classes


@@ -10,7 +10,6 @@ State classification allows you to train a custom MobileNetV2 classification mod
State classification models are lightweight and run very fast on CPU. Inference should be usable on virtually any machine that can run Frigate.
Training the model temporarily uses a high amount of system resources, typically for about 13 minutes per training run. On lower-power devices, training may take longer.
When running the `-tensorrt` image, Nvidia GPUs will automatically be used to accelerate training.
## Classes