Compare commits
1 commit
dependabot ... update-rec

| Author | SHA1 | Date |
|---|---|---|
|  | 4977ed49a5 |  |
@@ -1,317 +1,168 @@
aarch, absdiff, airockchip, Alloc, alpr, Amcrest, amdgpu, analyzeduration, Annke, apexcharts,
arange, argmax, argmin, argpartition, ascontiguousarray, astype, authelia, authentik, autodetected, automations,
autotrack, autotracked, autotracker, autotracking, balena, Beelink, BGRA, BHWC, blackshear, blakeblackshear,
bottombar, buildx, castable, cdist, Celeron, cgroups, chipset, chromadb, Chromecast, cmdline,
codeowner, CODEOWNERS, codeproject, colormap, colorspace, comms, cooldown, coro, ctypeslib, CUDA,
Cuvid, Dahua, datasheet, debconf, deci, deepstack, defragment, devcontainer, DEVICEMAP, discardcorrupt,
dpkg, dsize, dtype, ECONNRESET, edgetpu, facenet, fastapi, faststart, fflags, ffprobe,
fillna, flac, foscam, fourcc, framebuffer, fregate, frégate, fromarray, frombuffer, frontdoor,
fstype, fullchain, fullscreen, genai, generativeai, genpts, getpid, gpuload, HACS, Hailo,
hass, hconcat, healthcheck, hideable, Hikvision, homeassistant, homekit, homography, hsize, hstack,
httpx, hwaccel, hwdownload, hwmap, hwupload, iloc, imagestream, imdecode, imencode, imread,
imutils, imwrite, interp, iostat, iotop, itemsize, Jellyfin, jetson, jetsons, jina,
jinaai, joserfc, jsmpeg, jsonify, Kalman, keepalive, keepdims, labelmap, letsencrypt, levelname,
LIBAVFORMAT, libedgetpu, libnvinfer, libva, libwebp, libx, libyolo, linalg, localzone, logpipe,
Loryta, lstsq, lsusb, markupsafe, maxsplit, MEMHOSTALLOC, memlimit, meshgrid, metadatas, migraphx,
minilm, mjpeg, mkfifo, mobiledet, mobilenet, modelpath, mosquitto, mountpoint, movflags, mpegts,
mqtt, mse, msenc, namedtuples, nbytes, nchw, ndarray, ndimage, nethogs, newaxis,
nhwc, NOBLOCK, nobuffer, nokey, NONBLOCK, noninteractive, noprint, Norfair, nptype, NTSC,
numpy, nvenc, nvhost, nvml, nvmpi, ollama, onnx, onnxruntime, onvif, ONVIF,
openai, opencv, openvino, OWASP, paddleocr, paho, passwordless, popleft, posthog, postprocess,
poweroff, preexec, probesize, protobuf, pstate, psutil, pubkey, putenv, pycache, pydantic,
pyobj, pysqlite, pytz, pywebpush, qnap, quantisation, Radeon, radeonsi, radeontop, rawvideo,
rcond, RDONLY, rebranded, referer, reindex, Reolink, restream, restreamed, restreaming, rkmpp,
rknn, rkrga, rockchip, rocm, rocminfo, rootfs, rtmp,
edgetpu, labelmap, rockchip, jetson, rocm, vaapi, CUDA, hwaccel, RTSP, ruamel,
scroller, setproctitle, setpts, shms, SIGUSR, skylake, sleeptime, SNDMORE, socs, sqliteq,
sqlitevecq, ssdlite, statm, stimeout, stylelint, subclassing, substream, superfast, surveillance, svscan,
Swipeable, sysconf, tailscale, Tapo, tensorrt, Hikvision, Dahua, Amcrest, Reolink, Loryta,
Beelink, Celeron, vaapi, blakeblackshear, workdir, onvif, autotracking, openvino, tflite, deepstack,
codeproject, udev, tailscale, restream, restreaming, webrtc, ssdlite, mobilenet, mosquitto, datasheet,
Jellyfin, Radeon, libva, Ubiquiti, Unifi, Tapo, Annke, autotracker, autotracked, variations,
ONVIF, traefik, devcontainer, rootfs, ffprobe, autotrack, logpipe, imread, imwrite, imencode,
imutils, thresholded, timelapse, tmpfs, tobytes, toggleable, traefik, tzlocal, Ubiquiti, udev,
udevadm, ultrafast, unichip, unidecode, Unifi, unixepoch, unraid, unreviewed, userdata, usermod,
uvicorn, vaapi, sleeptime, radeontop, vainfo, variations, vbios, vconcat, vitb, vstream,
vsync, wallclock, webp, webpush, webrtc, tmpfs, homography, websockets, webui, werkzeug,
workdir, WRONLY, wsgirefserver, wsgiutils, wsize, xaddr, xmaxs, xmins, XPUB, XSUB,
ymaxs, ymins, yolo, yolonas, yolox, LIBAVFORMAT, NTSC, onnxruntime, fourcc, radeonsi,
paho, imagestream, jsonify, cgroups, sysconf, memlimit, gpuload, nvml, setproctitle, psutil,
Kalman, frontdoor, namedtuples, zeep, zerolatency, fflags, probesize, wallclock, rknn, socs,
pydantic, shms, imdecode, colormap, webui, mse, jsmpeg, unreviewed, Chromecast, Swipeable,
flac, scroller, cmdline, toggleable, bottombar, opencv, apexcharts, buildx, mqtt, rawvideo,
defragment, Norfair, subclassing, yolo, tensorrt, blackshear, stylelint, HACS, homeassistant, hass,
castable, mobiledet, framebuffer, mjpeg, substream, codeowner, noninteractive, restreamed, mountpoint, fstype,
OWASP, iotop, letsencrypt, fullchain, lsusb, iostat, usermod, balena, passwordless, debconf,
dpkg, poweroff, surveillance, qnap, homekit, colorspace, quantisation, skylake, Cuvid, foscam,
onnx, numpy, protobuf, aarch, amdgpu, chipset, referer, mpegts, webp, authelia,
authentik, unichip, rebranded, udevadm, automations, unraid, hideable, healthcheck, keepalive
@@ -8,25 +8,9 @@
  "overrideCommand": false,
  "remoteUser": "vscode",
  "features": {
    "ghcr.io/devcontainers/features/common-utils:2": {}
    // Uncomment the following lines to use ONNX Runtime with CUDA support
    // "ghcr.io/devcontainers/features/nvidia-cuda:1": {
    //   "installCudnn": true,
    //   "installNvtx": true,
    //   "installToolkit": true,
    //   "cudaVersion": "12.5",
    //   "cudnnVersion": "9.4.0.58"
    // },
    // "./features/onnxruntime-gpu": {}
    "ghcr.io/devcontainers/features/common-utils:1": {}
  },
  "forwardPorts": [
    8971,
    5000,
    5001,
    5173,
    8554,
    8555
  ],
  "forwardPorts": [8971, 5000, 5001, 5173, 8554, 8555],
  "portsAttributes": {
    "8971": {
      "label": "External NGINX",
@@ -68,8 +52,7 @@
      "csstools.postcss",
      "blanu.vscode-styled-jsx",
      "bradlc.vscode-tailwindcss",
      "charliermarsh.ruff",
      "eamodio.gitlens"
      "charliermarsh.ruff"
    ],
    "settings": {
      "remote.autoForwardPorts": false,
@@ -80,18 +63,10 @@
      "editor.formatOnType": true,
      "python.testing.pytestEnabled": false,
      "python.testing.unittestEnabled": true,
      "python.testing.unittestArgs": [
        "-v",
        "-s",
        "./frigate/test"
      ],
      "python.testing.unittestArgs": ["-v", "-s", "./frigate/test"],
      "files.trimTrailingWhitespace": true,
      "eslint.workingDirectories": [
        "./web"
      ],
      "isort.args": [
        "--settings-path=./pyproject.toml"
      ],
      "eslint.workingDirectories": ["./web"],
      "isort.args": ["--settings-path=./pyproject.toml"],
      "[python]": {
        "editor.defaultFormatter": "charliermarsh.ruff",
        "editor.formatOnSave": true,
@@ -110,16 +85,9 @@
      ],
      "editor.tabSize": 2
    },
    "cSpell.ignoreWords": [
      "rtmp"
    ],
    "cSpell.words": [
      "preact",
      "astype",
      "hwaccel",
      "mqtt"
    ]
    "cSpell.ignoreWords": ["rtmp"],
    "cSpell.words": ["preact", "astype", "hwaccel", "mqtt"]
        }
      }
    }
  }
}
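The devcontainer configuration above can be exercised locally with the same @devcontainers/cli that the pull-request workflow installs; a minimal sketch, run from the repository root (the workspace path is an assumption):

    # Install the CLI (the same command the CI workflow uses), then build
    # the devcontainer image from the checked-out repository.
    npm install --global @devcontainers/cli
    devcontainer build --workspace-folder .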
@@ -1,22 +0,0 @@
{
  "id": "onnxruntime-gpu",
  "version": "0.0.1",
  "name": "ONNX Runtime GPU (Nvidia)",
  "description": "Installs ONNX Runtime for Nvidia GPUs.",
  "documentationURL": "",
  "options": {
    "version": {
      "type": "string",
      "proposals": [
        "latest",
        "1.20.1",
        "1.20.0"
      ],
      "default": "latest",
      "description": "Version of ONNX Runtime to install"
    }
  },
  "installsAfter": [
    "ghcr.io/devcontainers/features/nvidia-cuda"
  ]
}
@@ -1,15 +0,0 @@
#!/usr/bin/env bash

set -e

VERSION=${VERSION}

python3 -m pip config set global.break-system-packages true
# if VERSION == "latest" or VERSION is empty, install the latest version
if [ "$VERSION" == "latest" ] || [ -z "$VERSION" ]; then
    python3 -m pip install onnxruntime-gpu
else
    python3 -m pip install onnxruntime-gpu==$VERSION
fi

echo "Done!"
@@ -3,12 +3,10 @@
set -euxo pipefail

# Cleanup the old github host key
if [[ -f ~/.ssh/known_hosts ]]; then
    # Add new github host key
    sed -i -e '/AAAAB3NzaC1yc2EAAAABIwAAAQEAq2A7hRGmdnm9tUDbO9IDSwBK6TbQa+PXYPCPy6rbTrTtw7PHkccKrpp0yVhp5HdEIcKr6pLlVDBfOLX9QUsyCOV0wzfjIJNlGEYsdlLJizHhbn2mUjvSAHQqZETYP81eFzLQNnPHt4EVVUh7VfDESU84KezmD5QlWpXLmvU31\/yMf+Se8xhHTvKSCZIFImWwoG6mbUoWf9nzpIoaSjB+weqqUUmpaaasXVal72J+UX2B+2RPW3RcT0eOzQgqlJL3RKrTJvdsjE3JEAvGq3lGHSZXy28G3skua2SmVi\/w4yCE6gbODqnTWlg7+wC604ydGXA8VJiS5ap43JXiUFFAaQ==/d' ~/.ssh/known_hosts
    curl -L https://api.github.com/meta | jq -r '.ssh_keys | .[]' | \
        sed -e 's/^/github.com /' >> ~/.ssh/known_hosts
fi
sed -i -e '/AAAAB3NzaC1yc2EAAAABIwAAAQEAq2A7hRGmdnm9tUDbO9IDSwBK6TbQa+PXYPCPy6rbTrTtw7PHkccKrpp0yVhp5HdEIcKr6pLlVDBfOLX9QUsyCOV0wzfjIJNlGEYsdlLJizHhbn2mUjvSAHQqZETYP81eFzLQNnPHt4EVVUh7VfDESU84KezmD5QlWpXLmvU31\/yMf+Se8xhHTvKSCZIFImWwoG6mbUoWf9nzpIoaSjB+weqqUUmpaaasXVal72J+UX2B+2RPW3RcT0eOzQgqlJL3RKrTJvdsjE3JEAvGq3lGHSZXy28G3skua2SmVi\/w4yCE6gbODqnTWlg7+wC604ydGXA8VJiS5ap43JXiUFFAaQ==/d' ~/.ssh/known_hosts
# Add new github host key
curl -L https://api.github.com/meta | jq -r '.ssh_keys | .[]' | \
    sed -e 's/^/github.com /' >> ~/.ssh/known_hosts

# Frigate's normal container runs as root, so it has permission to create
# the folders. But the devcontainer runs as the host user, so we need to
@@ -19,7 +17,7 @@ sudo chown -R "$(id -u):$(id -g)" /media/frigate
# When started as a service, LIBAVFORMAT_VERSION_MAJOR is defined in the
# s6 service file. For dev, where frigate is started from an interactive
# shell, we define it in .bashrc instead.
echo 'export LIBAVFORMAT_VERSION_MAJOR=$("$(python3 /usr/local/ffmpeg/get_ffmpeg_path.py)" -version | grep -Po "libavformat\W+\K\d+")' >> "$HOME/.bashrc"
echo 'export LIBAVFORMAT_VERSION_MAJOR=$(ffmpeg -version | grep -Po "libavformat\W+\K\d+")' >> $HOME/.bashrc

make version
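The grep pattern in the export above relies on \K, which discards everything matched before it, so only the leading major-version digits survive. A standalone sketch of the same extraction (the sample output value is illustrative):

    # `ffmpeg -version` prints a line such as "libavformat  59. 27.100 / 59. 27.100";
    # the Perl-compatible pattern below keeps only the leading "59".
    ffmpeg -version | grep -Po "libavformat\W+\K\d+"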
@@ -90,9 +90,6 @@ body:
        - HassOS Addon
        - Docker Compose
        - Docker CLI
        - Proxmox via Docker
        - Proxmox via TTeck Script
        - Windows WSL2
    validations:
      required: true
  - type: dropdown
@@ -105,7 +102,7 @@ body:
        - TensorRT
        - RKNN
        - Other
        - CPU (no coral)
        - CPU (no Coral)
    validations:
      required: true
  - type: dropdown
.github/DISCUSSION_TEMPLATE/config-support.yml (11 changes, vendored)
@@ -76,17 +76,6 @@ body:
        - HassOS Addon
        - Docker Compose
        - Docker CLI
        - Proxmox via Docker
        - Proxmox via TTeck Script
        - Windows WSL2
    validations:
      required: true
  - type: textarea
    id: docker
    attributes:
      label: docker-compose file or Docker CLI command
      description: This will be automatically formatted into code, so no need for backticks.
      render: yaml
    validations:
      required: true
  - type: dropdown
.github/DISCUSSION_TEMPLATE/detector-support.yml (25 changes, vendored)
@@ -48,6 +48,28 @@ body:
      render: shell
    validations:
      required: true
  - type: textarea
    id: go2rtclogs
    attributes:
      label: Relevant go2rtc log output
      description: Please copy and paste any relevant go2rtc log output. Include logs before and after your exact error when possible. Logs can be viewed via the Frigate UI, Docker, or the go2rtc dashboard. This will be automatically formatted into code, so no need for backticks.
      render: shell
    validations:
      required: true
  - type: dropdown
    id: os
    attributes:
      label: Operating system
      options:
        - HassOS
        - Debian
        - Other Linux
        - Proxmox
        - UNRAID
        - Windows
        - Other
    validations:
      required: true
  - type: dropdown
    id: install-method
    attributes:
@@ -56,9 +78,6 @@ body:
        - HassOS Addon
        - Docker Compose
        - Docker CLI
        - Proxmox via Docker
        - Proxmox via TTeck Script
        - Windows WSL2
    validations:
      required: true
  - type: dropdown
.github/DISCUSSION_TEMPLATE/general-support.yml (25 changes, vendored)
@@ -68,6 +68,20 @@ body:
      label: Frigate stats
      description: Output from frigate's /api/stats endpoint
      render: json
  - type: dropdown
    id: os
    attributes:
      label: Operating system
      options:
        - HassOS
        - Debian
        - Other Linux
        - Proxmox
        - UNRAID
        - Windows
        - Other
    validations:
      required: true
  - type: dropdown
    id: install-method
    attributes:
@@ -76,17 +90,6 @@ body:
        - HassOS Addon
        - Docker Compose
        - Docker CLI
        - Proxmox via Docker
        - Proxmox via TTeck Script
        - Windows WSL2
    validations:
      required: true
  - type: textarea
    id: docker
    attributes:
      label: docker-compose file or Docker CLI command
      description: This will be automatically formatted into code, so no need for backticks.
      render: yaml
    validations:
      required: true
  - type: dropdown
@@ -24,6 +24,12 @@ body:
      description: Visible on the System page in the Web UI. Please include the full version including the build identifier (eg. 0.14.0-ea36ds1)
    validations:
      required: true
  - type: input
    attributes:
      label: In which browser(s) are you experiencing the issue with?
      placeholder: Google Chrome 88.0.4324.150
      description: >
        Provide the full name and don't forget to add the version!
  - type: textarea
    id: config
    attributes:
@@ -64,6 +70,20 @@ body:
      render: shell
    validations:
      required: true
  - type: dropdown
    id: os
    attributes:
      label: Operating system
      options:
        - HassOS
        - Debian
        - Other Linux
        - Proxmox
        - UNRAID
        - Windows
        - Other
    validations:
      required: true
  - type: dropdown
    id: install-method
    attributes:
@@ -72,22 +92,6 @@ body:
        - HassOS Addon
        - Docker Compose
        - Docker CLI
        - Proxmox via Docker
        - Proxmox via TTeck Script
        - Windows WSL2
    validations:
      required: true
  - type: dropdown
    id: object-detector
    attributes:
      label: Object Detector
      options:
        - Coral
        - OpenVino
        - TensorRT
        - RKNN
        - Other
        - CPU (no coral)
    validations:
      required: true
  - type: dropdown
.github/actions/setup/action.yml (4 changes, vendored)
@@ -33,9 +33,9 @@ runs:
      with:
        string: ${{ github.repository }}
    - name: Set up QEMU
      uses: docker/setup-qemu-action@v3
      uses: docker/setup-qemu-action@v2
    - name: Set up Docker Buildx
      uses: docker/setup-buildx-action@v3
      uses: docker/setup-buildx-action@v2
    - name: Log in to the Container registry
      uses: docker/login-action@465a07811f14bebb1938fbed4728c6a1ff8901fc
      with:
.github/pull_request_template.md (38 changes, vendored)
@@ -1,38 +0,0 @@
## Proposed change

<!--
Thank you!

If you're introducing a new feature or significantly refactoring existing functionality,
we encourage you to start a discussion first. This helps ensure your idea aligns with
Frigate's development goals.

Describe what this pull request does and how it will benefit users of Frigate.
Please describe in detail any considerations, breaking changes, etc. that are
made in this pull request.
-->

## Type of change

- [ ] Dependency upgrade
- [ ] Bugfix (non-breaking change which fixes an issue)
- [ ] New feature
- [ ] Breaking change (fix/feature causing existing functionality to break)
- [ ] Code quality improvements to existing code
- [ ] Documentation Update

## Additional information

- This PR fixes or closes issue: fixes #
- This PR is related to issue:

## Checklist

<!--
Put an `x` in the boxes that apply.
-->

- [ ] The code change is tested and works locally.
- [ ] Local tests pass. **Your PR cannot be merged unless tests pass**
- [ ] There is no commented out code in this PR.
- [ ] The code has been formatted using Ruff (`ruff format frigate`)
.github/workflows/ci.yml (186 changes, vendored)
@@ -6,8 +6,6 @@ on:
    branches:
      - dev
      - master
    paths-ignore:
      - "docs/**"

# only run the latest commit to avoid cache overwrites
concurrency:
@@ -19,13 +17,11 @@ env:

jobs:
  amd64_build:
    runs-on: ubuntu-22.04
    runs-on: ubuntu-latest
    name: AMD64 Build
    steps:
      - name: Check out code
        uses: actions/checkout@v4
        with:
          persist-credentials: false
      - name: Set up QEMU and Buildx
        id: setup
        uses: ./.github/actions/setup
@@ -42,13 +38,11 @@ jobs:
          tags: ${{ steps.setup.outputs.image-name }}-amd64
          cache-from: type=registry,ref=${{ steps.setup.outputs.cache-name }}-amd64
  arm64_build:
    runs-on: ubuntu-22.04-arm
    runs-on: ubuntu-latest
    name: ARM Build
    steps:
      - name: Check out code
        uses: actions/checkout@v4
        with:
          persist-credentials: false
      - name: Set up QEMU and Buildx
        id: setup
        uses: ./.github/actions/setup
@@ -66,9 +60,8 @@ jobs:
            ${{ steps.setup.outputs.image-name }}-standard-arm64
          cache-from: type=registry,ref=${{ steps.setup.outputs.cache-name }}-arm64
      - name: Build and push RPi build
        uses: docker/bake-action@v6
        uses: docker/bake-action@v4
        with:
          source: .
          push: true
          targets: rpi
          files: docker/rpi/rpi.hcl
@@ -76,15 +69,47 @@ jobs:
            rpi.tags=${{ steps.setup.outputs.image-name }}-rpi
            *.cache-from=type=registry,ref=${{ steps.setup.outputs.cache-name }}-arm64
            *.cache-to=type=registry,ref=${{ steps.setup.outputs.cache-name }}-arm64,mode=max
      - name: Build and push Rockchip build
        uses: docker/bake-action@v3
        with:
          push: true
          targets: rk
          files: docker/rockchip/rk.hcl
          set: |
            rk.tags=${{ steps.setup.outputs.image-name }}-rk
            *.cache-from=type=gha
  jetson_jp4_build:
    runs-on: ubuntu-latest
    name: Jetson Jetpack 4
    steps:
      - name: Check out code
        uses: actions/checkout@v4
      - name: Set up QEMU and Buildx
        id: setup
        uses: ./.github/actions/setup
        with:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
      - name: Build and push TensorRT (Jetson, Jetpack 4)
        env:
          ARCH: arm64
          BASE_IMAGE: timongentzsch/l4t-ubuntu20-opencv:latest
          SLIM_BASE: timongentzsch/l4t-ubuntu20-opencv:latest
          TRT_BASE: timongentzsch/l4t-ubuntu20-opencv:latest
        uses: docker/bake-action@v4
        with:
          push: true
          targets: tensorrt
          files: docker/tensorrt/trt.hcl
          set: |
            tensorrt.tags=${{ steps.setup.outputs.image-name }}-tensorrt-jp4
            *.cache-from=type=registry,ref=${{ steps.setup.outputs.cache-name }}-jp4
            *.cache-to=type=registry,ref=${{ steps.setup.outputs.cache-name }}-jp4,mode=max
  jetson_jp5_build:
    if: false
    runs-on: ubuntu-22.04
    runs-on: ubuntu-latest
    name: Jetson Jetpack 5
    steps:
      - name: Check out code
        uses: actions/checkout@v4
        with:
          persist-credentials: false
      - name: Set up QEMU and Buildx
        id: setup
        uses: ./.github/actions/setup
@@ -96,9 +121,8 @@ jobs:
          BASE_IMAGE: nvcr.io/nvidia/l4t-tensorrt:r8.5.2-runtime
          SLIM_BASE: nvcr.io/nvidia/l4t-tensorrt:r8.5.2-runtime
          TRT_BASE: nvcr.io/nvidia/l4t-tensorrt:r8.5.2-runtime
        uses: docker/bake-action@v6
        uses: docker/bake-action@v4
        with:
          source: .
          push: true
          targets: tensorrt
          files: docker/tensorrt/trt.hcl
@@ -106,45 +130,14 @@ jobs:
            tensorrt.tags=${{ steps.setup.outputs.image-name }}-tensorrt-jp5
            *.cache-from=type=registry,ref=${{ steps.setup.outputs.cache-name }}-jp5
            *.cache-to=type=registry,ref=${{ steps.setup.outputs.cache-name }}-jp5,mode=max
  jetson_jp6_build:
    runs-on: ubuntu-22.04-arm
    name: Jetson Jetpack 6
    steps:
      - name: Check out code
        uses: actions/checkout@v4
        with:
          persist-credentials: false
      - name: Set up QEMU and Buildx
        id: setup
        uses: ./.github/actions/setup
        with:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
      - name: Build and push TensorRT (Jetson, Jetpack 6)
        env:
          ARCH: arm64
          BASE_IMAGE: nvcr.io/nvidia/tensorrt:23.12-py3-igpu
          SLIM_BASE: nvcr.io/nvidia/tensorrt:23.12-py3-igpu
          TRT_BASE: nvcr.io/nvidia/tensorrt:23.12-py3-igpu
        uses: docker/bake-action@v6
        with:
          source: .
          push: true
          targets: tensorrt
          files: docker/tensorrt/trt.hcl
          set: |
            tensorrt.tags=${{ steps.setup.outputs.image-name }}-tensorrt-jp6
            *.cache-from=type=registry,ref=${{ steps.setup.outputs.cache-name }}-jp6
            *.cache-to=type=registry,ref=${{ steps.setup.outputs.cache-name }}-jp6,mode=max
  amd64_extra_builds:
    runs-on: ubuntu-22.04
    runs-on: ubuntu-latest
    name: AMD64 Extra Build
    needs:
      - amd64_build
    steps:
      - name: Check out code
        uses: actions/checkout@v4
        with:
          persist-credentials: false
      - name: Set up QEMU and Buildx
        id: setup
        uses: ./.github/actions/setup
@@ -153,9 +146,8 @@ jobs:
      - name: Build and push TensorRT (x86 GPU)
        env:
          COMPUTE_LEVEL: "50 60 70 80 90"
        uses: docker/bake-action@v6
        uses: docker/bake-action@v4
        with:
          source: .
          push: true
          targets: tensorrt
          files: docker/tensorrt/trt.hcl
@@ -163,49 +155,61 @@ jobs:
            tensorrt.tags=${{ steps.setup.outputs.image-name }}-tensorrt
            *.cache-from=type=registry,ref=${{ steps.setup.outputs.cache-name }}-amd64
            *.cache-to=type=registry,ref=${{ steps.setup.outputs.cache-name }}-amd64,mode=max
      - name: AMD/ROCm general build
        env:
          AMDGPU: gfx
          HSA_OVERRIDE: 0
        uses: docker/bake-action@v6
        with:
          source: .
          push: true
          targets: rocm
          files: docker/rocm/rocm.hcl
          set: |
            rocm.tags=${{ steps.setup.outputs.image-name }}-rocm
            *.cache-to=type=registry,ref=${{ steps.setup.outputs.cache-name }}-rocm,mode=max
            *.cache-from=type=gha
  arm64_extra_builds:
    runs-on: ubuntu-22.04-arm
    name: ARM Extra Build
    needs:
      - arm64_build
    steps:
      - name: Check out code
        uses: actions/checkout@v4
        with:
          persist-credentials: false
      - name: Set up QEMU and Buildx
        id: setup
        uses: ./.github/actions/setup
        with:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
      - name: Build and push Rockchip build
        uses: docker/bake-action@v6
        with:
          source: .
          push: true
          targets: rk
          files: docker/rockchip/rk.hcl
          set: |
            rk.tags=${{ steps.setup.outputs.image-name }}-rk
            *.cache-from=type=gha
      #- name: AMD/ROCm general build
      #  env:
      #    AMDGPU: gfx
      #    HSA_OVERRIDE: 0
      #  uses: docker/bake-action@v3
      #  with:
      #    push: true
      #    targets: rocm
      #    files: docker/rocm/rocm.hcl
      #    set: |
      #      rocm.tags=${{ steps.setup.outputs.image-name }}-rocm
      #      *.cache-from=type=gha
      #- name: AMD/ROCm gfx900
      #  env:
      #    AMDGPU: gfx900
      #    HSA_OVERRIDE: 1
      #    HSA_OVERRIDE_GFX_VERSION: 9.0.0
      #  uses: docker/bake-action@v3
      #  with:
      #    push: true
      #    targets: rocm
      #    files: docker/rocm/rocm.hcl
      #    set: |
      #      rocm.tags=${{ steps.setup.outputs.image-name }}-rocm-gfx900
      #      *.cache-from=type=gha
      #- name: AMD/ROCm gfx1030
      #  env:
      #    AMDGPU: gfx1030
      #    HSA_OVERRIDE: 1
      #    HSA_OVERRIDE_GFX_VERSION: 10.3.0
      #  uses: docker/bake-action@v3
      #  with:
      #    push: true
      #    targets: rocm
      #    files: docker/rocm/rocm.hcl
      #    set: |
      #      rocm.tags=${{ steps.setup.outputs.image-name }}-rocm-gfx1030
      #      *.cache-from=type=gha
      #- name: AMD/ROCm gfx1100
      #  env:
      #    AMDGPU: gfx1100
      #    HSA_OVERRIDE: 1
      #    HSA_OVERRIDE_GFX_VERSION: 11.0.0
      #  uses: docker/bake-action@v3
      #  with:
      #    push: true
      #    targets: rocm
      #    files: docker/rocm/rocm.hcl
      #    set: |
      #      rocm.tags=${{ steps.setup.outputs.image-name }}-rocm-gfx1100
      #      *.cache-from=type=gha
  # The majority of users running arm64 are rpi users, so the rpi
  # build should be the primary arm64 image
  assemble_default_build:
    runs-on: ubuntu-22.04
    runs-on: ubuntu-latest
    name: Assemble and push default build
    needs:
      - amd64_build
@@ -216,7 +220,7 @@ jobs:
        with:
          string: ${{ github.repository }}
      - name: Log in to the Container registry
        uses: docker/login-action@9780b0c442fbb1117ed29e0efdff1e18412f7567
        uses: docker/login-action@0d4c9c5ea7693da7b068278f7b52bda2a190a446
        with:
          registry: ghcr.io
          username: ${{ github.actor }}
.github/workflows/dependabot-auto-merge.yaml (24 changes, vendored, new file)
@@ -0,0 +1,24 @@
name: dependabot-auto-merge
on: pull_request

permissions:
  contents: write

jobs:
  dependabot-auto-merge:
    runs-on: ubuntu-latest
    if: github.actor == 'dependabot[bot]'
    steps:
      - name: Get Dependabot metadata
        id: metadata
        uses: dependabot/fetch-metadata@v2
        with:
          github-token: ${{ secrets.GITHUB_TOKEN }}
      - name: Enable auto-merge for Dependabot PRs
        if: steps.metadata.outputs.dependency-type == 'direct:development' && (steps.metadata.outputs.update-type == 'version-update:semver-minor' || steps.metadata.outputs.update-type == 'version-update:semver-patch')
        run: |
          gh pr review --approve "$PR_URL"
          gh pr merge --auto --squash "$PR_URL"
        env:
          PR_URL: ${{ github.event.pull_request.html_url }}
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
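The two gh invocations in the new workflow approve the pull request and queue a squash merge that completes once required checks pass; run outside CI they behave the same way, as in this sketch (the token and PR URL are placeholders):

    export GITHUB_TOKEN="<token>"                       # placeholder credential
    PR_URL="https://github.com/OWNER/REPO/pull/123"     # placeholder URL
    gh pr review --approve "$PR_URL"
    gh pr merge --auto --squash "$PR_URL"               # merges when checks pass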
.github/workflows/pull_request.yml (31 changes, vendored)
@@ -1,13 +1,9 @@
name: On pull request

on:
  pull_request:
    paths-ignore:
      - "docs/**"
      - ".github/**"
on: pull_request

env:
  DEFAULT_PYTHON: 3.11
  DEFAULT_PYTHON: 3.9

jobs:
  build_devcontainer:
@@ -20,11 +16,9 @@ jobs:
      DOCKER_BUILDKIT: "1"
    steps:
      - uses: actions/checkout@v4
        with:
          persist-credentials: false
      - uses: actions/setup-node@master
        with:
          node-version: 20.x
          node-version: 16.x
      - name: Install devcontainer cli
        run: npm install --global @devcontainers/cli
      - name: Build devcontainer
@@ -41,8 +35,6 @@ jobs:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          persist-credentials: false
      - uses: actions/setup-node@master
        with:
          node-version: 16.x
@@ -57,16 +49,11 @@ jobs:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          persist-credentials: false
      - uses: actions/setup-node@master
        with:
          node-version: 20.x
      - run: npm install
        working-directory: ./web
      - name: Build web
        run: npm run build
        working-directory: ./web
      # - name: Test
      #   run: npm run test
      #   working-directory: ./web
@@ -77,10 +64,8 @@ jobs:
    steps:
      - name: Check out the repository
        uses: actions/checkout@v4
        with:
          persist-credentials: false
      - name: Set up Python ${{ env.DEFAULT_PYTHON }}
        uses: actions/setup-python@v5.4.0
        uses: actions/setup-python@v5.1.0
        with:
          python-version: ${{ env.DEFAULT_PYTHON }}
      - name: Install requirements
@@ -100,8 +85,14 @@ jobs:
    steps:
      - name: Check out code
        uses: actions/checkout@v4
      - uses: actions/setup-node@master
        with:
          persist-credentials: false
          node-version: 16.x
      - run: npm install
        working-directory: ./web
      - name: Build web
        run: npm run build
        working-directory: ./web
      - name: Set up QEMU
        uses: docker/setup-qemu-action@v3
      - name: Set up Docker Buildx
.github/workflows/release.yml (15 changes, vendored)
@@ -11,26 +11,21 @@ jobs:

    steps:
      - uses: actions/checkout@v4
        with:
          persist-credentials: false
      - id: lowercaseRepo
        uses: ASzc/change-string-case-action@v6
        with:
          string: ${{ github.repository }}
      - name: Log in to the Container registry
        uses: docker/login-action@9780b0c442fbb1117ed29e0efdff1e18412f7567
        uses: docker/login-action@0d4c9c5ea7693da7b068278f7b52bda2a190a446
        with:
          registry: ghcr.io
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}
      - name: Create tag variables
        env:
          TAG: ${{ github.ref_name }}
          LOWERCASE_REPO: ${{ steps.lowercaseRepo.outputs.lowercase }}
        run: |
          BUILD_TYPE=$([[ "${TAG}" =~ ^v[0-9]+\.[0-9]+\.[0-9]+$ ]] && echo "stable" || echo "beta")
          BUILD_TYPE=$([[ "${{ github.ref_name }}" =~ ^v[0-9]+\.[0-9]+\.[0-9]+$ ]] && echo "stable" || echo "beta")
          echo "BUILD_TYPE=${BUILD_TYPE}" >> $GITHUB_ENV
          echo "BASE=ghcr.io/${LOWERCASE_REPO}" >> $GITHUB_ENV
          echo "BASE=ghcr.io/${{ steps.lowercaseRepo.outputs.lowercase }}" >> $GITHUB_ENV
          echo "BUILD_TAG=${GITHUB_SHA::7}" >> $GITHUB_ENV
          echo "CLEAN_VERSION=$(echo ${GITHUB_REF##*/} | tr '[:upper:]' '[:lower:]' | sed 's/^[v]//')" >> $GITHUB_ENV
      - name: Tag and push the main image
@@ -39,14 +34,14 @@ jobs:
          STABLE_TAG=${BASE}:stable
          PULL_TAG=${BASE}:${BUILD_TAG}
          docker run --rm -v $HOME/.docker/config.json:/config.json quay.io/skopeo/stable:latest copy --authfile /config.json --multi-arch all docker://${PULL_TAG} docker://${VERSION_TAG}
          for variant in standard-arm64 tensorrt tensorrt-jp5 tensorrt-jp6 rk h8l rocm; do
          for variant in standard-arm64 tensorrt tensorrt-jp4 tensorrt-jp5 rk; do
            docker run --rm -v $HOME/.docker/config.json:/config.json quay.io/skopeo/stable:latest copy --authfile /config.json --multi-arch all docker://${PULL_TAG}-${variant} docker://${VERSION_TAG}-${variant}
          done

          # stable tag
          if [[ "${BUILD_TYPE}" == "stable" ]]; then
            docker run --rm -v $HOME/.docker/config.json:/config.json quay.io/skopeo/stable:latest copy --authfile /config.json --multi-arch all docker://${PULL_TAG} docker://${STABLE_TAG}
            for variant in standard-arm64 tensorrt tensorrt-jp5 tensorrt-jp6 rk h8l rocm; do
            for variant in standard-arm64 tensorrt tensorrt-jp4 tensorrt-jp5 rk; do
              docker run --rm -v $HOME/.docker/config.json:/config.json quay.io/skopeo/stable:latest copy --authfile /config.json --multi-arch all docker://${PULL_TAG}-${variant} docker://${STABLE_TAG}-${variant}
            done
          fi
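The BUILD_TYPE test in the tag-variables step classifies a tag as stable only when it is exactly vMAJOR.MINOR.PATCH; a standalone sketch of the same test with assumed example tags:

    # v0.14.1 matches the anchored pattern -> "stable";
    # a pre-release suffix breaks the match -> "beta".
    for TAG in v0.14.1 v0.15.0-beta2; do
      [[ "${TAG}" =~ ^v[0-9]+\.[0-9]+\.[0-9]+$ ]] && echo "${TAG} -> stable" || echo "${TAG} -> beta"
    done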
.github/workflows/stale.yml (5 changes, vendored)
@@ -23,9 +23,7 @@ jobs:
          exempt-pr-labels: "pinned,security,dependencies"
          operations-per-run: 120
      - name: Print outputs
        env:
          STALE_OUTPUT: ${{ join(steps.stale.outputs.*, ',') }}
        run: echo "$STALE_OUTPUT"
        run: echo ${{ join(steps.stale.outputs.*, ',') }}

  # clean_ghcr:
  #   name: Delete outdated dev container images
@@ -40,3 +38,4 @@ jobs:
  #   account-type: personal
  #   token: ${{ secrets.GITHUB_TOKEN }}
  #   token-type: github-token
.gitignore (3 changes, vendored)
@@ -1,6 +1,5 @@
.DS_Store
__pycache__
.mypy_cache
*.pyc
*.swp
debug
.vscode/*
.vscode/launch.json (5 changes, vendored)
@@ -3,9 +3,10 @@
  "configurations": [
    {
      "name": "Python: Launch Frigate",
      "type": "debugpy",
      "type": "python",
      "request": "launch",
      "module": "frigate"
      "module": "frigate",
      "justMyCode": true
    }
  ]
}
@@ -4,4 +4,3 @@
/docker/tensorrt/*jetson* @madsciencetist
/docker/rockchip/ @MarcA711
/docker/rocm/ @harakas
/docker/hailo8l/ @spanner3003
Makefile (31 changes)
@@ -1,9 +1,11 @@
default_target: local

COMMIT_HASH := $(shell git log -1 --pretty=format:"%h"|tail -1)
VERSION = 0.16.0
VERSION = 0.14.1
IMAGE_REPO ?= ghcr.io/blakeblackshear/frigate
GITHUB_REF_NAME ?= $(shell git rev-parse --abbrev-ref HEAD)
CURRENT_UID := $(shell id -u)
CURRENT_GID := $(shell id -g)
BOARDS= #Initialized empty

include docker/*/*.mk
@@ -16,38 +18,25 @@ version:
	echo 'VERSION = "$(VERSION)-$(COMMIT_HASH)"' > frigate/version.py

local: version
	docker buildx build --target=frigate --file docker/main/Dockerfile . \
		--tag frigate:latest \
		--load
	docker buildx build --target=frigate --tag frigate:latest --load --file docker/main/Dockerfile .

amd64:
	docker buildx build --target=frigate --file docker/main/Dockerfile . \
		--tag $(IMAGE_REPO):$(VERSION)-$(COMMIT_HASH) \
		--platform linux/amd64
	docker buildx build --platform linux/amd64 --target=frigate --tag $(IMAGE_REPO):$(VERSION)-$(COMMIT_HASH) --file docker/main/Dockerfile .

arm64:
	docker buildx build --target=frigate --file docker/main/Dockerfile . \
		--tag $(IMAGE_REPO):$(VERSION)-$(COMMIT_HASH) \
		--platform linux/arm64
	docker buildx build --platform linux/arm64 --target=frigate --tag $(IMAGE_REPO):$(VERSION)-$(COMMIT_HASH) --file docker/main/Dockerfile .

build: version amd64 arm64
	docker buildx build --target=frigate --file docker/main/Dockerfile . \
		--tag $(IMAGE_REPO):$(VERSION)-$(COMMIT_HASH) \
		--platform linux/arm64/v8,linux/amd64
	docker buildx build --platform linux/arm64/v8,linux/amd64 --target=frigate --tag $(IMAGE_REPO):$(VERSION)-$(COMMIT_HASH) --file docker/main/Dockerfile .

push: push-boards
	docker buildx build --target=frigate --file docker/main/Dockerfile . \
		--tag $(IMAGE_REPO):${GITHUB_REF_NAME}-$(COMMIT_HASH) \
		--platform linux/arm64/v8,linux/amd64 \
		--push
	docker buildx build --push --platform linux/arm64/v8,linux/amd64 --target=frigate --tag $(IMAGE_REPO):${GITHUB_REF_NAME}-$(COMMIT_HASH) --file docker/main/Dockerfile .

run: local
	docker run --rm --publish=5000:5000 --volume=${PWD}/config:/config frigate:latest

run_tests: local
	docker run --rm --workdir=/opt/frigate --entrypoint= frigate:latest \
		python3 -u -m unittest
	docker run --rm --workdir=/opt/frigate --entrypoint= frigate:latest \
		python3 -u -m mypy --config-file frigate/mypy.ini frigate
	docker run --rm --workdir=/opt/frigate --entrypoint= frigate:latest python3 -u -m unittest
	docker run --rm --workdir=/opt/frigate --entrypoint= frigate:latest python3 -u -m mypy --config-file frigate/mypy.ini frigate

.PHONY: run_tests
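Read together, the targets above imply a local workflow along these lines (a sketch; the flags come from the recipes themselves):

    make local       # buildx build of frigate:latest for the host platform
    make run         # run it with ./config mounted at /config and port 5000 published
    make run_tests   # unittest and mypy inside the freshly built image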
@@ -4,7 +4,6 @@ from statistics import mean

import numpy as np

import frigate.util as util
from frigate.config import DetectorTypeEnum
from frigate.object_detection import (
    ObjectDetectProcess,
@@ -61,7 +60,7 @@ def start(id, num_detections, detection_queue, event):
    object_detector.cleanup()
    print(f"{id} - Processed for {duration:.2f} seconds.")
    print(f"{id} - FPS: {object_detector.fps.eps():.2f}")
    print(f"{id} - Average frame processing time: {mean(frame_times) * 1000:.2f}ms")
    print(f"{id} - Average frame processing time: {mean(frame_times)*1000:.2f}ms")

######
@@ -91,7 +90,7 @@ edgetpu_process_2 = ObjectDetectProcess(
)

for x in range(0, 10):
    camera_process = util.Process(
    camera_process = mp.Process(
        target=start, args=(x, 300, detection_queue, events[str(x)])
    )
    camera_process.daemon = True
@@ -7,8 +7,7 @@
    "*.db",
    "node_modules",
    "__pycache__",
    "dist",
    "/audio-labelmap.txt"
    "dist"
  ],
  "language": "en",
  "dictionaryDefinitions": [
@@ -23,7 +23,7 @@ services:
    #           count: 1
    #           capabilities: [gpu]
    environment:
      YOLO_MODELS: ""
      YOLO_MODELS: yolov7-320
    devices:
      - /dev/bus/usb:/dev/bus/usb
      # - /dev/dri:/dev/dri # for intel hwaccel, needs to be updated for your hardware
@@ -38,4 +38,4 @@ services:
    container_name: mqtt
    image: eclipse-mosquitto:1.6
    ports:
      - "1883:1883"
      - "1883:1883"
@@ -1,49 +0,0 @@
#!/bin/bash

# Update package list and install dependencies
sudo apt-get update
sudo apt-get install -y build-essential cmake git wget

hailo_version="4.20.0"
arch=$(uname -m)

if [[ $arch == "x86_64" ]]; then
    sudo apt install -y linux-headers-$(uname -r);
else
    sudo apt install -y linux-modules-extra-$(uname -r);
fi

# Clone the HailoRT driver repository
git clone --depth 1 --branch v${hailo_version} https://github.com/hailo-ai/hailort-drivers.git

# Build and install the HailoRT driver
cd hailort-drivers/linux/pcie
sudo make all
sudo make install

# Load the Hailo PCI driver
sudo modprobe hailo_pci

if [ $? -ne 0 ]; then
    echo "Unable to load hailo_pci module, common reasons for this are:"
    echo "- Key was rejected by service: Secure Boot is enabled, disallowing install."
    echo "- Permissions are not set up correctly."
    exit 1
fi

# Download and install the firmware
cd ../../
./download_firmware.sh

# verify the firmware folder is present
if [ ! -d /lib/firmware/hailo ]; then
    sudo mkdir /lib/firmware/hailo
fi
sudo mv hailo8_fw.*.bin /lib/firmware/hailo/hailo8_fw.bin

# Install udev rules
sudo cp ./linux/pcie/51-hailo-udev.rules /etc/udev/rules.d/
sudo udevadm control --reload-rules && sudo udevadm trigger

echo "HailoRT driver installation complete."
echo "Reboot your system to load the firmware!"
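After the reboot the script asks for, one common way to confirm that the module and firmware were picked up is a sketch like the following (device paths can vary by system):

    lsmod | grep hailo_pci        # is the kernel module loaded?
    ls /lib/firmware/hailo/       # is the firmware file in place?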
@@ -3,29 +3,14 @@
# https://askubuntu.com/questions/972516/debian-frontend-environment-variable
ARG DEBIAN_FRONTEND=noninteractive

# Globally set pip break-system-packages option to avoid having to specify it every time
ARG PIP_BREAK_SYSTEM_PACKAGES=1

ARG BASE_IMAGE=debian:12
ARG SLIM_BASE=debian:12-slim

# A hook that allows us to inject commands right after the base images
ARG BASE_HOOK=
ARG BASE_IMAGE=debian:11
ARG SLIM_BASE=debian:11-slim

FROM ${BASE_IMAGE} AS base
ARG PIP_BREAK_SYSTEM_PACKAGES
ARG BASE_HOOK

RUN sh -c "$BASE_HOOK"

FROM --platform=${BUILDPLATFORM} debian:12 AS base_host
ARG PIP_BREAK_SYSTEM_PACKAGES
FROM --platform=${BUILDPLATFORM} debian:11 AS base_host

FROM ${SLIM_BASE} AS slim-base
ARG PIP_BREAK_SYSTEM_PACKAGES
ARG BASE_HOOK

RUN sh -c "$BASE_HOOK"

FROM slim-base AS wget
ARG DEBIAN_FRONTEND
@@ -39,18 +24,11 @@ ARG DEBIAN_FRONTEND
ENV CCACHE_DIR /root/.ccache
ENV CCACHE_MAXSIZE 2G

RUN --mount=type=bind,source=docker/main/build_nginx.sh,target=/deps/build_nginx.sh \
    /deps/build_nginx.sh

FROM wget AS sqlite-vec
ARG DEBIAN_FRONTEND

# Build sqlite_vec from source
COPY docker/main/build_sqlite_vec.sh /deps/build_sqlite_vec.sh
# bind /var/cache/apt to tmpfs to speed up nginx build
RUN --mount=type=tmpfs,target=/tmp --mount=type=tmpfs,target=/var/cache/apt \
    --mount=type=bind,source=docker/main/build_sqlite_vec.sh,target=/deps/build_sqlite_vec.sh \
    --mount=type=bind,source=docker/main/build_nginx.sh,target=/deps/build_nginx.sh \
    --mount=type=cache,target=/root/.ccache \
    /deps/build_sqlite_vec.sh
    /deps/build_nginx.sh

FROM scratch AS go2rtc
ARG TARGETARCH
@@ -151,25 +129,31 @@ ARG TARGETARCH
# Use a separate container to build wheels to prevent build dependencies in final image
RUN apt-get -qq update \
    && apt-get -qq install -y \
    apt-transport-https wget \
    apt-transport-https \
    gnupg \
    wget \
    # the key fingerprint can be obtained from https://ftp-master.debian.org/keys.html
    && wget -qO- "https://keyserver.ubuntu.com/pks/lookup?op=get&search=0xA4285295FC7B1A81600062A9605C66F00D6C9793" | \
    gpg --dearmor > /usr/share/keyrings/debian-archive-bullseye-stable.gpg \
    && echo "deb [signed-by=/usr/share/keyrings/debian-archive-bullseye-stable.gpg] http://deb.debian.org/debian bullseye main contrib non-free" | \
    tee /etc/apt/sources.list.d/debian-bullseye-nonfree.list \
    && apt-get -qq update \
    && apt-get -qq install -y \
    python3.11 \
    python3.11-dev \
    python3.9 \
    python3.9-dev \
    # opencv dependencies
    build-essential cmake git pkg-config libgtk-3-dev \
    libavcodec-dev libavformat-dev libswscale-dev libv4l-dev \
    libxvidcore-dev libx264-dev libjpeg-dev libpng-dev libtiff-dev \
    gfortran openexr libatlas-base-dev libssl-dev \
    libtbbmalloc2 libtbb-dev libdc1394-dev libopenexr-dev \
    libtbb2 libtbb-dev libdc1394-22-dev libopenexr-dev \
    libgstreamer-plugins-base1.0-dev libgstreamer1.0-dev \
    # sqlite3 dependencies
    tclsh \
    # scipy dependencies
    gcc gfortran libopenblas-dev liblapack-dev && \
    rm -rf /var/lib/apt/lists/*

RUN update-alternatives --install /usr/bin/python3 python3 /usr/bin/python3.11 1
# Ensure python3 defaults to python3.9
RUN update-alternatives --install /usr/bin/python3 python3 /usr/bin/python3.9 1

RUN wget -q https://bootstrap.pypa.io/get-pip.py -O get-pip.py \
    && python3 get-pip.py "pip"
@@ -177,27 +161,18 @@
COPY docker/main/requirements.txt /requirements.txt
RUN pip3 install -r /requirements.txt

# Build pysqlite3 from source
COPY docker/main/build_pysqlite3.sh /build_pysqlite3.sh
RUN /build_pysqlite3.sh

COPY docker/main/requirements-wheels.txt /requirements-wheels.txt
RUN pip3 wheel --wheel-dir=/wheels -r /requirements-wheels.txt

# Install HailoRT & Wheels
RUN --mount=type=bind,source=docker/main/install_hailort.sh,target=/deps/install_hailort.sh \
    /deps/install_hailort.sh

# Collect deps in a single layer
FROM scratch AS deps-rootfs
COPY --from=nginx /usr/local/nginx/ /usr/local/nginx/
COPY --from=sqlite-vec /usr/local/lib/ /usr/local/lib/
COPY --from=go2rtc /rootfs/ /
COPY --from=libusb-build /usr/local/lib /usr/local/lib
COPY --from=tempio /rootfs/ /
COPY --from=s6-overlay /rootfs/ /
COPY --from=models /rootfs/ /
COPY --from=wheels /rootfs/ /
COPY docker/main/rootfs/ /

@@ -213,31 +188,14 @@ ARG APT_KEY_DONT_WARN_ON_DANGEROUS_USAGE=DontWarn
ENV NVIDIA_VISIBLE_DEVICES=all
ENV NVIDIA_DRIVER_CAPABILITIES="compute,video,utility"

# Disable tokenizer parallelism warning
# https://stackoverflow.com/questions/62691279/how-to-disable-tokenizers-parallelism-true-false-warning/72926996#72926996
ENV TOKENIZERS_PARALLELISM=true
# https://github.com/huggingface/transformers/issues/27214
ENV TRANSFORMERS_NO_ADVISORY_WARNINGS=1

# Set OpenCV ffmpeg loglevel to fatal: https://ffmpeg.org/doxygen/trunk/log_8h.html
ENV OPENCV_FFMPEG_LOGLEVEL=8

# Set HailoRT to disable logging
ENV HAILORT_LOGGER_PATH=NONE

ENV PATH="/usr/local/go2rtc/bin:/usr/local/tempio/bin:/usr/local/nginx/sbin:${PATH}"
ENV PATH="/usr/lib/btbn-ffmpeg/bin:/usr/local/go2rtc/bin:/usr/local/tempio/bin:/usr/local/nginx/sbin:${PATH}"

# Install dependencies
RUN --mount=type=bind,source=docker/main/install_deps.sh,target=/deps/install_deps.sh \
    /deps/install_deps.sh

ENV DEFAULT_FFMPEG_VERSION="7.0"
ENV INCLUDED_FFMPEG_VERSIONS="${DEFAULT_FFMPEG_VERSION}:5.0"

RUN wget -q https://bootstrap.pypa.io/get-pip.py -O get-pip.py \
    && python3 get-pip.py "pip"

RUN --mount=type=bind,from=wheels,source=/wheels,target=/deps/wheels \
    python3 -m pip install --upgrade pip && \
    pip3 install -U /deps/wheels/*.whl

COPY --from=deps-rootfs / /
@@ -256,7 +214,7 @@ ENV S6_CMD_WAIT_FOR_SERVICES_MAXTIME=0
ENTRYPOINT ["/init"]
CMD []

HEALTHCHECK --start-period=300s --start-interval=5s --interval=15s --timeout=5s --retries=3 \
HEALTHCHECK --start-period=120s --start-interval=5s --interval=15s --timeout=5s --retries=3 \
    CMD curl --fail --silent --show-error http://127.0.0.1:5000/api/version || exit 1

# Frigate deps with Node.js and NPM for devcontainer
@@ -8,16 +8,10 @@ SECURE_TOKEN_MODULE_VERSION="1.5"
SET_MISC_MODULE_VERSION="v0.33"
NGX_DEVEL_KIT_VERSION="v0.3.3"

source /etc/os-release

if [[ "$VERSION_ID" == "12" ]]; then
    sed -i '/^Types:/s/deb/& deb-src/' /etc/apt/sources.list.d/debian.sources
else
    cp /etc/apt/sources.list /etc/apt/sources.list.d/sources-src.list
    sed -i 's|deb http|deb-src http|g' /etc/apt/sources.list.d/sources-src.list
fi

cp /etc/apt/sources.list /etc/apt/sources.list.d/sources-src.list
sed -i 's|deb http|deb-src http|g' /etc/apt/sources.list.d/sources-src.list
apt-get update

apt-get -yqq build-dep nginx

apt-get -yqq install --no-install-recommends ca-certificates wget
@@ -4,7 +4,7 @@ from openvino.tools import mo
ov_model = mo.convert_model(
    "/models/ssdlite_mobilenet_v2_coco_2018_05_09/frozen_inference_graph.pb",
    compress_to_fp16=True,
    transformations_config="/usr/local/lib/python3.11/dist-packages/openvino/tools/mo/front/tf/ssd_v2_support.json",
    transformations_config="/usr/local/lib/python3.9/dist-packages/openvino/tools/mo/front/tf/ssd_v2_support.json",
    tensorflow_object_detection_api_pipeline_config="/models/ssdlite_mobilenet_v2_coco_2018_05_09/pipeline.config",
    reverse_input_channels=True,
)
@@ -1,35 +0,0 @@
#!/bin/bash

set -euxo pipefail

SQLITE3_VERSION="96c92aba00c8375bc32fafcdf12429c58bd8aabfcadab6683e35bbb9cdebf19e" # 3.46.0
PYSQLITE3_VERSION="0.5.3"

# Fetch the source code for the latest release of Sqlite.
if [[ ! -d "sqlite" ]]; then
    wget https://www.sqlite.org/src/tarball/sqlite.tar.gz?r=${SQLITE3_VERSION} -O sqlite.tar.gz
    tar xzf sqlite.tar.gz
    cd sqlite/
    LIBS="-lm" ./configure --disable-tcl --enable-tempstore=always
    make sqlite3.c
    cd ../
    rm sqlite.tar.gz
fi

# Grab the pysqlite3 source code.
if [[ ! -d "./pysqlite3" ]]; then
    git clone https://github.com/coleifer/pysqlite3.git
fi

cd pysqlite3/
git checkout ${PYSQLITE3_VERSION}

# Copy the sqlite3 source amalgamation into the pysqlite3 directory so we can
# create a self-contained extension module.
cp "../sqlite/sqlite3.c" ./
cp "../sqlite/sqlite3.h" ./

# Create the wheel and put it in the /wheels dir.
sed -i "s|name='pysqlite3-binary'|name=PACKAGE_NAME|g" setup.py
python3 setup.py build_static
pip3 wheel . -w /wheels
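Since the deleted script built pysqlite3 statically against the bundled amalgamation, a quick way to confirm which SQLite a resulting wheel links is a sketch like this (assuming the wheel is installed):

    # pysqlite3 mirrors the stdlib sqlite3 API, including sqlite_version.
    python3 -c "import pysqlite3; print(pysqlite3.sqlite_version)"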
@@ -1,38 +0,0 @@
#!/bin/bash

set -euxo pipefail

SQLITE_VEC_VERSION="0.1.3"

source /etc/os-release

if [[ "$VERSION_ID" == "12" ]]; then
    sed -i '/^Types:/s/deb/& deb-src/' /etc/apt/sources.list.d/debian.sources
else
    cp /etc/apt/sources.list /etc/apt/sources.list.d/sources-src.list
    sed -i 's|deb http|deb-src http|g' /etc/apt/sources.list.d/sources-src.list
fi

apt-get update
apt-get -yqq build-dep sqlite3 gettext git

mkdir /tmp/sqlite_vec
# Grab the sqlite_vec source code.
wget -nv https://github.com/asg017/sqlite-vec/archive/refs/tags/v${SQLITE_VEC_VERSION}.tar.gz
tar -zxf v${SQLITE_VEC_VERSION}.tar.gz -C /tmp/sqlite_vec

cd /tmp/sqlite_vec/sqlite-vec-${SQLITE_VEC_VERSION}

mkdir -p vendor
wget -O sqlite-amalgamation.zip https://www.sqlite.org/2024/sqlite-amalgamation-3450300.zip
unzip sqlite-amalgamation.zip
mv sqlite-amalgamation-3450300/* vendor/
rmdir sqlite-amalgamation-3450300
rm sqlite-amalgamation.zip

# build loadable module
make loadable

# install it
cp dist/vec0.* /usr/local/lib
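The vec0 module the deleted script copied into /usr/local/lib is a standard SQLite loadable extension; a minimal smoke test with the sqlite3 CLI (the library path comes from the script, and vec_version() is sqlite-vec's own version function):

    sqlite3 :memory: '.load /usr/local/lib/vec0' 'SELECT vec_version();'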
@@ -6,87 +6,72 @@ apt-get -qq update

apt-get -qq install --no-install-recommends -y \
    apt-transport-https \
    ca-certificates \
    gnupg \
    wget \
    lbzip2 \
    procps vainfo \
    unzip locales tzdata libxml2 xz-utils \
    python3.11 \
    python3.9 \
    python3-pip \
    curl \
    lsof \
    jq \
    nethogs \
    libgl1 \
    libglib2.0-0 \
    libusb-1.0.0
    nethogs

update-alternatives --install /usr/bin/python3 python3 /usr/bin/python3.11 1
# ensure python3 defaults to python3.9
update-alternatives --install /usr/bin/python3 python3 /usr/bin/python3.9 1

mkdir -p -m 600 /root/.gnupg

# install coral runtime
wget -q -O /tmp/libedgetpu1-max.deb "https://github.com/feranick/libedgetpu/releases/download/16.0TF2.17.1-1/libedgetpu1-max_16.0tf2.17.1-1.bookworm_${TARGETARCH}.deb"
unset DEBIAN_FRONTEND
yes | dpkg -i /tmp/libedgetpu1-max.deb && export DEBIAN_FRONTEND=noninteractive
rm /tmp/libedgetpu1-max.deb
# add coral repo
curl -fsSLo - https://packages.cloud.google.com/apt/doc/apt-key.gpg | \
    gpg --dearmor -o /etc/apt/trusted.gpg.d/google-cloud-packages-archive-keyring.gpg
echo "deb https://packages.cloud.google.com/apt coral-edgetpu-stable main" | tee /etc/apt/sources.list.d/coral-edgetpu.list
echo "libedgetpu1-max libedgetpu/accepted-eula select true" | debconf-set-selections

# ffmpeg -> amd64
# enable non-free repo in Debian
if grep -q "Debian" /etc/issue; then
    sed -i -e's/ main/ main contrib non-free/g' /etc/apt/sources.list
fi

# coral drivers
apt-get -qq update
apt-get -qq install --no-install-recommends --no-install-suggests -y \
    libedgetpu1-max python3-tflite-runtime python3-pycoral

# btbn-ffmpeg -> amd64
if [[ "${TARGETARCH}" == "amd64" ]]; then
    mkdir -p /usr/lib/ffmpeg/5.0
    wget -qO ffmpeg.tar.xz "https://github.com/NickM-27/FFmpeg-Builds/releases/download/autobuild-2022-07-31-12-37/ffmpeg-n5.1-2-g915ef932a3-linux64-gpl-5.1.tar.xz"
    tar -xf ffmpeg.tar.xz -C /usr/lib/ffmpeg/5.0 --strip-components 1 amd64/bin/ffmpeg amd64/bin/ffprobe
    rm -rf ffmpeg.tar.xz
    mkdir -p /usr/lib/ffmpeg/7.0
    wget -qO ffmpeg.tar.xz "https://github.com/NickM-27/FFmpeg-Builds/releases/download/autobuild-2024-09-19-12-51/ffmpeg-n7.0.2-18-g3e6cec1286-linux64-gpl-7.0.tar.xz"
    tar -xf ffmpeg.tar.xz -C /usr/lib/ffmpeg/7.0 --strip-components 1 amd64/bin/ffmpeg amd64/bin/ffprobe
    rm -rf ffmpeg.tar.xz
    mkdir -p /usr/lib/btbn-ffmpeg
    wget -qO btbn-ffmpeg.tar.xz "https://github.com/NickM-27/FFmpeg-Builds/releases/download/autobuild-2022-07-31-12-37/ffmpeg-n5.1-2-g915ef932a3-linux64-gpl-5.1.tar.xz"
    tar -xf btbn-ffmpeg.tar.xz -C /usr/lib/btbn-ffmpeg --strip-components 1
    rm -rf btbn-ffmpeg.tar.xz /usr/lib/btbn-ffmpeg/doc /usr/lib/btbn-ffmpeg/bin/ffplay
fi

# ffmpeg -> arm64
if [[ "${TARGETARCH}" == "arm64" ]]; then
    mkdir -p /usr/lib/ffmpeg/5.0
    wget -qO ffmpeg.tar.xz "https://github.com/NickM-27/FFmpeg-Builds/releases/download/autobuild-2022-07-31-12-37/ffmpeg-n5.1-2-g915ef932a3-linuxarm64-gpl-5.1.tar.xz"
    tar -xf ffmpeg.tar.xz -C /usr/lib/ffmpeg/5.0 --strip-components 1 arm64/bin/ffmpeg arm64/bin/ffprobe
    rm -f ffmpeg.tar.xz
    mkdir -p /usr/lib/ffmpeg/7.0
    wget -qO ffmpeg.tar.xz "https://github.com/NickM-27/FFmpeg-Builds/releases/download/autobuild-2024-09-19-12-51/ffmpeg-n7.0.2-18-g3e6cec1286-linuxarm64-gpl-7.0.tar.xz"
    tar -xf ffmpeg.tar.xz -C /usr/lib/ffmpeg/7.0 --strip-components 1 arm64/bin/ffmpeg arm64/bin/ffprobe
    rm -f ffmpeg.tar.xz
    mkdir -p /usr/lib/btbn-ffmpeg
    wget -qO btbn-ffmpeg.tar.xz "https://github.com/NickM-27/FFmpeg-Builds/releases/download/autobuild-2022-07-31-12-37/ffmpeg-n5.1-2-g915ef932a3-linuxarm64-gpl-5.1.tar.xz"
    tar -xf btbn-ffmpeg.tar.xz -C /usr/lib/btbn-ffmpeg --strip-components 1
    rm -rf btbn-ffmpeg.tar.xz /usr/lib/btbn-ffmpeg/doc /usr/lib/btbn-ffmpeg/bin/ffplay
fi

# arch specific packages
if [[ "${TARGETARCH}" == "amd64" ]]; then
    # install amd / intel-i965 driver packages
    apt-get -qq install --no-install-recommends --no-install-suggests -y \
        i965-va-driver intel-gpu-tools onevpl-tools \
        libva-drm2 \
        mesa-va-drivers radeontop

    # intel packages use zst compression so we need to update dpkg
    apt-get install -y dpkg

    # use intel apt intel packages
    wget -qO - https://repositories.intel.com/gpu/intel-graphics.key | gpg --yes --dearmor --output /usr/share/keyrings/intel-graphics.gpg
    echo "deb [arch=amd64 signed-by=/usr/share/keyrings/intel-graphics.gpg] https://repositories.intel.com/gpu/ubuntu jammy client" | tee /etc/apt/sources.list.d/intel-gpu-jammy.list
    # use debian bookworm for hwaccel packages
    echo 'deb https://deb.debian.org/debian bookworm main contrib non-free' >/etc/apt/sources.list.d/debian-bookworm.list
    apt-get -qq update
    apt-get -qq install --no-install-recommends --no-install-suggests -y \
        intel-opencl-icd=24.35.30872.31-996~22.04 intel-level-zero-gpu=1.3.29735.27-914~22.04 intel-media-va-driver-non-free=24.3.3-996~22.04 \
        libmfx1=23.2.2-880~22.04 libmfxgen1=24.2.4-914~22.04 libvpl2=1:2.13.0.0-996~22.04

    rm -f /usr/share/keyrings/intel-graphics.gpg
    rm -f /etc/apt/sources.list.d/intel-gpu-jammy.list
        intel-opencl-icd \
        mesa-va-drivers radeontop libva-drm2 intel-media-va-driver-non-free i965-va-driver libmfx1 intel-gpu-tools
    # something about this dependency requires it to be installed in a separate call rather than in the line above
    apt-get -qq install --no-install-recommends --no-install-suggests -y \
        i965-va-driver-shaders
    rm -f /etc/apt/sources.list.d/debian-bookworm.list
fi

if [[ "${TARGETARCH}" == "arm64" ]]; then
    apt-get -qq install --no-install-recommends --no-install-suggests -y \
        libva-drm2 mesa-va-drivers radeontop
        libva-drm2 mesa-va-drivers
fi

# install vulkan
apt-get -qq install --no-install-recommends --no-install-suggests -y \
    libvulkan1 mesa-vulkan-drivers

apt-get purge gnupg apt-transport-https xz-utils -y
apt-get clean autoclean -y
apt-get autoremove --purge -y
@@ -1,14 +0,0 @@
#!/bin/bash

set -euxo pipefail

hailo_version="4.20.0"

if [[ "${TARGETARCH}" == "amd64" ]]; then
    arch="x86_64"
elif [[ "${TARGETARCH}" == "arm64" ]]; then
    arch="aarch64"
fi

wget -qO- "https://github.com/frigate-nvr/hailort/releases/download/v${hailo_version}/hailort-${TARGETARCH}.tar.gz" | tar -C / -xzf -
wget -P /wheels/ "https://github.com/frigate-nvr/hailort/releases/download/v${hailo_version}/hailort-${hailo_version}-cp311-cp311-linux_${arch}.whl"
@@ -1,73 +1,32 @@
aiofiles == 24.1.*
click == 8.1.*
# FastAPI
aiohttp == 3.11.3
starlette == 0.41.2
starlette-context == 0.3.6
fastapi == 0.115.*
uvicorn == 0.30.*
slowapi == 0.1.*
Flask == 3.0.*
Flask_Limiter == 3.7.*
imutils == 0.5.*
joserfc == 1.0.*
pathvalidate == 3.2.*
markupsafe == 3.0.*
python-multipart == 0.0.20
# General
joserfc == 0.11.*
markupsafe == 2.1.*
mypy == 1.6.1
onvif-zeep-async == 3.1.*
numpy == 1.26.*
onvif_zeep == 0.2.12
opencv-python-headless == 4.9.0.*
paho-mqtt == 2.1.*
pandas == 2.2.*
peewee == 3.17.*
peewee_migrate == 1.13.*
psutil == 6.1.*
pydantic == 2.10.*
peewee_migrate == 1.12.*
psutil == 5.9.*
pydantic == 2.7.*
git+https://github.com/fbcotter/py3nvml#egg=py3nvml
pytz == 2025.*
pyzmq == 26.2.*
PyYAML == 6.0.*
pytz == 2024.1
pyzmq == 26.0.*
ruamel.yaml == 0.18.*
tzlocal == 5.2
types-PyYAML == 6.0.*
requests == 2.32.*
types-requests == 2.32.*
scipy == 1.13.*
norfair == 2.2.*
setproctitle == 1.3.*
ws4py == 0.5.*
unidecode == 1.3.*
# Image Manipulation
numpy == 1.26.*
opencv-python-headless == 4.11.0.*
opencv-contrib-python == 4.11.0.*
scipy == 1.14.*
# OpenVino & ONNX
openvino == 2024.4.*
onnxruntime-openvino == 1.20.* ; platform_machine == 'x86_64'
onnxruntime == 1.20.* ; platform_machine == 'aarch64'
# Embeddings
transformers == 4.45.*
# Generative AI
google-generativeai == 0.8.*
ollama == 0.3.*
openai == 1.65.*
# push notifications
py-vapid == 1.9.*
pywebpush == 2.0.*
# alpr
pyclipper == 1.3.*
shapely == 2.0.*
Levenshtein == 0.26.*
# HailoRT Wheels
appdirs == 1.4.*
argcomplete == 2.0.*
contextlib2 == 0.6.*
distlib == 0.3.*
filelock == 3.8.*
future == 0.18.*
importlib-metadata == 5.1.*
importlib-resources == 5.1.*
netaddr == 0.8.*
netifaces == 0.10.*
verboselogs == 1.7.*
virtualenv == 20.17.*
prometheus-client == 0.21.*
# TFLite
tflite_runtime @ https://github.com/frigate-nvr/TFlite-builds/releases/download/v2.17.1/tflite_runtime-2.17.1-cp311-cp311-linux_x86_64.whl; platform_machine == 'x86_64'
tflite_runtime @ https://github.com/feranick/TFlite-builds/releases/download/v2.17.1/tflite_runtime-2.17.1-cp311-cp311-linux_aarch64.whl; platform_machine == 'aarch64'
onnxruntime == 1.18.*
openvino == 2024.1.*
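The `; platform_machine == 'x86_64'` suffixes in the requirements above are PEP 508 environment markers: pip evaluates each marker against the installing machine and skips lines whose marker is false, which is how one platform gets the OpenVINO build of onnxruntime while the other gets the plain one. A minimal sketch of how such a marker evaluates (not part of the diff; the `packaging` library used here is the same one pip vendors for this purpose):

# Sketch: evaluating a PEP 508 environment marker the way pip does for
# the requirement lines above.
from packaging.markers import Marker

marker = Marker("platform_machine == 'x86_64'")
# True on an amd64 host, False on aarch64, so exactly one of the paired
# onnxruntime / tflite_runtime lines is installed per platform.
print(marker.evaluate())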
@@ -1,2 +1,2 @@
scikit-build == 0.18.*
scikit-build == 0.17.*
nvidia-pyindex
@@ -42,16 +42,10 @@ function migrate_db_path() {
    fi
}

function set_libva_version() {
    local ffmpeg_path
    ffmpeg_path=$(python3 /usr/local/ffmpeg/get_ffmpeg_path.py)
    LIBAVFORMAT_VERSION_MAJOR=$("$ffmpeg_path" -version | grep -Po "libavformat\W+\K\d+")
    export LIBAVFORMAT_VERSION_MAJOR
}

echo "[INFO] Preparing Frigate..."
migrate_db_path
set_libva_version
export LIBAVFORMAT_VERSION_MAJOR=$(ffmpeg -version | grep -Po 'libavformat\W+\K\d+')

echo "[INFO] Starting Frigate..."

cd /opt/frigate || echo "[ERROR] Failed to change working directory to /opt/frigate"
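For reference, `grep -Po "libavformat\W+\K\d+"` in set_libva_version extracts the libavformat major version from `ffmpeg -version` output, which prints a line like `libavformat    59. 27.100 / 59. 27.100`. A rough Python equivalent, shown only as an illustration (Python's re has no \K, so a capture group stands in for it):

# Sketch: extract the libavformat major version the way set_libva_version does.
import re
import subprocess

out = subprocess.run(["ffmpeg", "-version"], capture_output=True, text=True).stdout
match = re.search(r"libavformat\W+(\d+)", out)
if match:
    print(match.group(1))  # e.g. "59"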
@@ -43,14 +43,7 @@ function get_ip_and_port_from_supervisor() {
    export FRIGATE_GO2RTC_WEBRTC_CANDIDATE_INTERNAL="${ip_address}:${webrtc_port}"
}

function set_libva_version() {
    local ffmpeg_path
    ffmpeg_path=$(python3 /usr/local/ffmpeg/get_ffmpeg_path.py)
    LIBAVFORMAT_VERSION_MAJOR=$("$ffmpeg_path" -version | grep -Po "libavformat\W+\K\d+")
    export LIBAVFORMAT_VERSION_MAJOR
}

set_libva_version
export LIBAVFORMAT_VERSION_MAJOR=$(ffmpeg -version | grep -Po 'libavformat\W+\K\d+')

if [[ -f "/dev/shm/go2rtc.yaml" ]]; then
    echo "[INFO] Removing stale config from last run..."
@@ -1,41 +0,0 @@
import json
import os
import sys

from ruamel.yaml import YAML

sys.path.insert(0, "/opt/frigate")
from frigate.const import (
    DEFAULT_FFMPEG_VERSION,
    INCLUDED_FFMPEG_VERSIONS,
)

sys.path.remove("/opt/frigate")

yaml = YAML()

config_file = os.environ.get("CONFIG_FILE", "/config/config.yml")

# Check if we can use .yaml instead of .yml
config_file_yaml = config_file.replace(".yml", ".yaml")
if os.path.isfile(config_file_yaml):
    config_file = config_file_yaml

try:
    with open(config_file) as f:
        raw_config = f.read()

    if config_file.endswith((".yaml", ".yml")):
        config: dict[str, any] = yaml.load(raw_config)
    elif config_file.endswith(".json"):
        config: dict[str, any] = json.loads(raw_config)
except FileNotFoundError:
    config: dict[str, any] = {}

path = config.get("ffmpeg", {}).get("path", "default")
if path == "default":
    print(f"/usr/lib/ffmpeg/{DEFAULT_FFMPEG_VERSION}/bin/ffmpeg")
elif path in INCLUDED_FFMPEG_VERSIONS:
    print(f"/usr/lib/ffmpeg/{path}/bin/ffmpeg")
else:
    print(f"{path}/bin/ffmpeg")
@@ -5,20 +5,16 @@ import os
import sys
from pathlib import Path

from ruamel.yaml import YAML
import yaml

sys.path.insert(0, "/opt/frigate")
from frigate.const import (
    BIRDSEYE_PIPE,
    DEFAULT_FFMPEG_VERSION,
    INCLUDED_FFMPEG_VERSIONS,
    LIBAVFORMAT_VERSION_MAJOR,
from frigate.const import BIRDSEYE_PIPE  # noqa: E402
from frigate.ffmpeg_presets import (  # noqa: E402
    parse_preset_hardware_acceleration_encode,
)
from frigate.ffmpeg_presets import parse_preset_hardware_acceleration_encode

sys.path.remove("/opt/frigate")

yaml = YAML()

FRIGATE_ENV_VARS = {k: v for k, v in os.environ.items() if k.startswith("FRIGATE_")}
# read docker secret files as env vars too
@@ -41,7 +37,7 @@ try:
        raw_config = f.read()

    if config_file.endswith((".yaml", ".yml")):
        config: dict[str, any] = yaml.load(raw_config)
        config: dict[str, any] = yaml.safe_load(raw_config)
    elif config_file.endswith(".json"):
        config: dict[str, any] = json.loads(raw_config)
except FileNotFoundError:
@@ -66,32 +62,29 @@ elif go2rtc_config["log"].get("format") is None:
    go2rtc_config["log"]["format"] = "text"

# ensure there is a default webrtc config
if go2rtc_config.get("webrtc") is None:
if not go2rtc_config.get("webrtc"):
    go2rtc_config["webrtc"] = {}

# go2rtc should listen on 8555 tcp & udp by default
if go2rtc_config["webrtc"].get("listen") is None:
if not go2rtc_config["webrtc"].get("listen"):
    go2rtc_config["webrtc"]["listen"] = ":8555"

if go2rtc_config["webrtc"].get("candidates") is None:
if not go2rtc_config["webrtc"].get("candidates", []):
    default_candidates = []
    # use internal candidate if it was discovered when running through the add-on
    internal_candidate = os.environ.get("FRIGATE_GO2RTC_WEBRTC_CANDIDATE_INTERNAL")
    internal_candidate = os.environ.get(
        "FRIGATE_GO2RTC_WEBRTC_CANDIDATE_INTERNAL", None
    )
    if internal_candidate is not None:
        default_candidates.append(internal_candidate)
    # should set default stun server so webrtc can work
    default_candidates.append("stun:8555")

    go2rtc_config["webrtc"]["candidates"] = default_candidates

    # This prevents WebRTC from attempting to establish a connection to the internal
    # docker IPs which are not accessible from outside the container itself and just
    # wastes time during negotiation. Note that this is only necessary because
    # Frigate container doesn't run in host network mode.
    if go2rtc_config["webrtc"].get("filter") is None:
        go2rtc_config["webrtc"]["filter"] = {"candidates": []}
    elif go2rtc_config["webrtc"]["filter"].get("candidates") is None:
        go2rtc_config["webrtc"]["filter"]["candidates"] = []
    go2rtc_config["webrtc"] = {"candidates": default_candidates}
else:
    print(
        "[INFO] Not injecting WebRTC candidates into go2rtc config as it has been set manually",
    )

# sets default RTSP response to be equivalent to ?video=h264,h265&audio=aac
# this means user does not need to specify audio codec when using restream
@@ -112,27 +105,16 @@ else:
        **FRIGATE_ENV_VARS
    )

# ensure ffmpeg path is set correctly
path = config.get("ffmpeg", {}).get("path", "default")
if path == "default":
    ffmpeg_path = f"/usr/lib/ffmpeg/{DEFAULT_FFMPEG_VERSION}/bin/ffmpeg"
elif path in INCLUDED_FFMPEG_VERSIONS:
    ffmpeg_path = f"/usr/lib/ffmpeg/{path}/bin/ffmpeg"
else:
    ffmpeg_path = f"{path}/bin/ffmpeg"

if go2rtc_config.get("ffmpeg") is None:
    go2rtc_config["ffmpeg"] = {"bin": ffmpeg_path}
elif go2rtc_config["ffmpeg"].get("bin") is None:
    go2rtc_config["ffmpeg"]["bin"] = ffmpeg_path

# need to replace ffmpeg command when using ffmpeg4
if LIBAVFORMAT_VERSION_MAJOR < 59:
    rtsp_args = "-fflags nobuffer -flags low_delay -stimeout 5000000 -user_agent go2rtc/ffmpeg -rtsp_transport tcp -i {input}"
if int(os.environ["LIBAVFORMAT_VERSION_MAJOR"]) < 59:
    if go2rtc_config.get("ffmpeg") is None:
        go2rtc_config["ffmpeg"] = {"rtsp": rtsp_args}
        go2rtc_config["ffmpeg"] = {
            "rtsp": "-fflags nobuffer -flags low_delay -stimeout 5000000 -user_agent go2rtc/ffmpeg -rtsp_transport tcp -i {input}"
        }
    elif go2rtc_config["ffmpeg"].get("rtsp") is None:
        go2rtc_config["ffmpeg"]["rtsp"] = rtsp_args
        go2rtc_config["ffmpeg"]["rtsp"] = (
            "-fflags nobuffer -flags low_delay -stimeout 5000000 -user_agent go2rtc/ffmpeg -rtsp_transport tcp -i {input}"
        )

for name in go2rtc_config.get("streams", {}):
    stream = go2rtc_config["streams"][name]
@@ -163,7 +145,7 @@ if config.get("birdseye", {}).get("restream", False):
    birdseye: dict[str, any] = config.get("birdseye")

    input = f"-f rawvideo -pix_fmt yuv420p -video_size {birdseye.get('width', 1280)}x{birdseye.get('height', 720)} -r 10 -i {BIRDSEYE_PIPE}"
    ffmpeg_cmd = f"exec:{parse_preset_hardware_acceleration_encode(ffmpeg_path, config.get('ffmpeg', {}).get('hwaccel_args', ''), input, '-rtsp_transport tcp -f rtsp {output}')}"
    ffmpeg_cmd = f"exec:{parse_preset_hardware_acceleration_encode(config.get('ffmpeg', {}).get('hwaccel_args'), input, '-rtsp_transport tcp -f rtsp {output}')}"

    if go2rtc_config.get("streams"):
        go2rtc_config["streams"]["birdseye"] = ffmpeg_cmd
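Taken together, the defaults injected above mean that a user config with no `webrtc` section ends up with roughly the following structure. This is a sketch of the script's output, not a file from the diff, and the internal candidate only appears when the add-on supervisor discovered one:

# Sketch: the go2rtc_config dict produced from an empty user config,
# assuming FRIGATE_GO2RTC_WEBRTC_CANDIDATE_INTERNAL="192.168.1.10:8555"
# was exported by the add-on startup script.
go2rtc_config = {
    "log": {"format": "text"},
    "webrtc": {
        "listen": ":8555",
        "candidates": ["192.168.1.10:8555", "stun:8555"],
        # an empty filter list keeps go2rtc from offering internal docker IPs
        "filter": {"candidates": []},
    },
}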
@@ -1,16 +1,14 @@
## Send a subrequest to verify if the user is authenticated and has permission to access the resource.
auth_request /auth;

## Save the upstream metadata response headers from the auth request to variables
## Save the upstream metadata response headers from Authelia to variables.
auth_request_set $user $upstream_http_remote_user;
auth_request_set $role $upstream_http_remote_role;
auth_request_set $groups $upstream_http_remote_groups;
auth_request_set $name $upstream_http_remote_name;
auth_request_set $email $upstream_http_remote_email;

## Inject the metadata response headers from the variables into the request made to the backend.
proxy_set_header Remote-User $user;
proxy_set_header Remote-Role $role;
proxy_set_header Remote-Groups $groups;
proxy_set_header Remote-Email $email;
proxy_set_header Remote-Name $name;
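Downstream of this config, the proxied backend sees the injected Remote-* headers on every request. A minimal sketch of the receiving side, purely illustrative and not Frigate's actual handler (FastAPI appears in the requirements earlier in this diff):

# Sketch: reading the identity headers injected by auth_request.conf.
from typing import Optional

from fastapi import FastAPI, Header

app = FastAPI()

@app.get("/whoami")
def whoami(
    remote_user: Optional[str] = Header(default=None),
    remote_role: Optional[str] = Header(default=None),
):
    # FastAPI maps the Remote-User / Remote-Role request headers onto
    # these underscore-named parameters automatically.
    return {"user": remote_user, "role": remote_role}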
@@ -81,9 +81,6 @@ http {
    open_file_cache_errors on;
    aio on;

    # file upload size
    client_max_body_size 10M;

    # https://github.com/kaltura/nginx-vod-module#vod_open_file_thread_pool
    vod_open_file_thread_pool default;

@@ -107,16 +104,6 @@ http {

        add_header Cache-Control "no-store";
        expires off;

        keepalive_disable safari;

        # vod module returns 502 for non-existent media
        # https://github.com/kaltura/nginx-vod-module/issues/468
        error_page 502 =404 /vod-not-found;
    }

    location = /vod-not-found {
        return 404;
    }

    location /stream/ {
@@ -237,7 +224,7 @@ http {

    location ~* /api/.*\.(jpg|jpeg|png|webp|gif)$ {
        include auth_request.conf;
        rewrite ^/api/(.*)$ /$1 break;
        rewrite ^/api/(.*)$ $1 break;
        proxy_pass http://frigate_api;
        include proxy.conf;
    }
@@ -3,9 +3,7 @@
import json
import os

from ruamel.yaml import YAML

yaml = YAML()
import yaml

config_file = os.environ.get("CONFIG_FILE", "/config/config.yml")

@@ -19,7 +17,7 @@ try:
        raw_config = f.read()

    if config_file.endswith((".yaml", ".yml")):
        config: dict[str, any] = yaml.load(raw_config)
        config: dict[str, any] = yaml.safe_load(raw_config)
    elif config_file.endswith(".json"):
        config: dict[str, any] = json.loads(raw_config)
except FileNotFoundError:
@@ -1,20 +0,0 @@
./subset/000000005001.jpg
./subset/000000038829.jpg
./subset/000000052891.jpg
./subset/000000075612.jpg
./subset/000000098261.jpg
./subset/000000181542.jpg
./subset/000000215245.jpg
./subset/000000277005.jpg
./subset/000000288685.jpg
./subset/000000301421.jpg
./subset/000000334371.jpg
./subset/000000348481.jpg
./subset/000000373353.jpg
./subset/000000397681.jpg
./subset/000000414673.jpg
./subset/000000419312.jpg
./subset/000000465822.jpg
./subset/000000475732.jpg
./subset/000000559707.jpg
./subset/000000574315.jpg
[20 binary image files (14-281 KiB each) removed]
@@ -3,32 +3,24 @@
# https://askubuntu.com/questions/972516/debian-frontend-environment-variable
ARG DEBIAN_FRONTEND=noninteractive

# Globally set pip break-system-packages option to avoid having to specify it every time
ARG PIP_BREAK_SYSTEM_PACKAGES=1

FROM wheels as rk-wheels
COPY docker/main/requirements-wheels.txt /requirements-wheels.txt
COPY docker/rockchip/requirements-wheels-rk.txt /requirements-wheels-rk.txt
RUN sed -i "/https:\/\//d" /requirements-wheels.txt
RUN sed -i "/onnxruntime/d" /requirements-wheels.txt
RUN pip3 wheel --wheel-dir=/rk-wheels -c /requirements-wheels.txt -r /requirements-wheels-rk.txt
RUN rm -rf /rk-wheels/opencv_python-*

FROM deps AS rk-frigate
ARG TARGETARCH
ARG PIP_BREAK_SYSTEM_PACKAGES

RUN --mount=type=bind,from=rk-wheels,source=/rk-wheels,target=/deps/rk-wheels \
    pip3 install --no-deps -U /deps/rk-wheels/*.whl
    pip3 install -U /deps/rk-wheels/*.whl

WORKDIR /opt/frigate/
COPY --from=rootfs / /
COPY docker/rockchip/COCO /COCO
COPY docker/rockchip/conv2rknn.py /opt/conv2rknn.py

ADD https://github.com/MarcA711/rknn-toolkit2/releases/download/v2.3.0/librknnrt.so /usr/lib/
ADD https://github.com/MarcA711/rknn-toolkit2/releases/download/v2.0.0/librknnrt.so /usr/lib/

ADD --chmod=111 https://github.com/MarcA711/Rockchip-FFmpeg-Builds/releases/download/6.1-7/ffmpeg /usr/lib/ffmpeg/6.0/bin/
ADD --chmod=111 https://github.com/MarcA711/Rockchip-FFmpeg-Builds/releases/download/6.1-7/ffprobe /usr/lib/ffmpeg/6.0/bin/
ENV DEFAULT_FFMPEG_VERSION="6.0"
ENV INCLUDED_FFMPEG_VERSIONS="${DEFAULT_FFMPEG_VERSION}:${INCLUDED_FFMPEG_VERSIONS}"
RUN rm -rf /usr/lib/btbn-ffmpeg/bin/ffmpeg
RUN rm -rf /usr/lib/btbn-ffmpeg/bin/ffprobe
ADD --chmod=111 https://github.com/MarcA711/Rockchip-FFmpeg-Builds/releases/download/6.1-5/ffmpeg /usr/lib/btbn-ffmpeg/bin/
ADD --chmod=111 https://github.com/MarcA711/Rockchip-FFmpeg-Builds/releases/download/6.1-5/ffprobe /usr/lib/btbn-ffmpeg/bin/
@@ -1,82 +0,0 @@
import os

import rknn
import yaml
from rknn.api import RKNN

try:
    with open(rknn.__path__[0] + "/VERSION") as file:
        tk_version = file.read().strip()
except FileNotFoundError:
    pass

try:
    with open("/config/conv2rknn.yaml", "r") as config_file:
        configuration = yaml.safe_load(config_file)
except FileNotFoundError:
    raise Exception("Please place a config.yaml file in /config/conv2rknn.yaml")

if configuration["config"] != None:
    rknn_config = configuration["config"]
else:
    rknn_config = {}

if not os.path.isdir("/config/model_cache/rknn_cache/onnx"):
    raise Exception(
        "Place the onnx models you want to convert to rknn format in /config/model_cache/rknn_cache/onnx"
    )

if "soc" not in configuration:
    try:
        with open("/proc/device-tree/compatible") as file:
            soc = file.read().split(",")[-1].strip("\x00")
    except FileNotFoundError:
        raise Exception("Make sure to run docker in privileged mode.")

    configuration["soc"] = [
        soc,
    ]

if "quantization" not in configuration:
    configuration["quantization"] = False

if "output_name" not in configuration:
    configuration["output_name"] = "{{input_basename}}"

for input_filename in os.listdir("/config/model_cache/rknn_cache/onnx"):
    for soc in configuration["soc"]:
        quant = "i8" if configuration["quantization"] else "fp16"

        input_path = "/config/model_cache/rknn_cache/onnx/" + input_filename
        input_basename = input_filename[: input_filename.rfind(".")]

        output_filename = (
            configuration["output_name"].format(
                quant=quant,
                input_basename=input_basename,
                soc=soc,
                tk_version=tk_version,
            )
            + ".rknn"
        )
        output_path = "/config/model_cache/rknn_cache/" + output_filename

        rknn_config["target_platform"] = soc

        rknn = RKNN(verbose=True)
        rknn.config(**rknn_config)

        if rknn.load_onnx(model=input_path) != 0:
            raise Exception("Error loading model.")

        if (
            rknn.build(
                do_quantization=configuration["quantization"],
                dataset="/COCO/coco_subset_20.txt",
            )
            != 0
        ):
            raise Exception("Error building model.")

        if rknn.export_rknn(output_path) != 0:
            raise Exception("Error exporting rknn model.")
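conv2rknn.py reads its settings from /config/conv2rknn.yaml. Based on the keys the script consumes, a parsed config would look roughly like the dict below; the concrete values are illustrative assumptions, not a file from the diff:

# Sketch: what yaml.safe_load() of /config/conv2rknn.yaml might return for
# the script above. Each key mirrors one the script actually reads.
configuration = {
    "soc": ["rk3588"],  # omit to auto-detect from /proc/device-tree/compatible
    "quantization": True,  # i8 output; False (the default) produces fp16
    "output_name": "{input_basename}-{soc}-{quant}",  # .format()-ed per model
    "config": {  # passed straight through to RKNN.config(**...)
        "mean_values": [[0, 0, 0]],
        "std_values": [[255, 255, 255]],
    },
}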
@@ -1,2 +1 @@
rknn-toolkit2 == 2.3.0
rknn-toolkit-lite2 == 2.3.0
rknn-toolkit-lite2 @ https://github.com/MarcA711/rknn-toolkit2/releases/download/v2.0.0/rknn_toolkit_lite2-2.0.0b0-cp39-cp39-linux_aarch64.whl
@@ -1,15 +1,10 @@
BOARDS += rk

local-rk: version
	docker buildx bake --file=docker/rockchip/rk.hcl rk \
		--set rk.tags=frigate:latest-rk \
		--load
	docker buildx bake --load --file=docker/rockchip/rk.hcl --set rk.tags=frigate:latest-rk rk

build-rk: version
	docker buildx bake --file=docker/rockchip/rk.hcl rk \
		--set rk.tags=$(IMAGE_REPO):${GITHUB_REF_NAME}-$(COMMIT_HASH)-rk
	docker buildx bake --file=docker/rockchip/rk.hcl --set rk.tags=$(IMAGE_REPO):${GITHUB_REF_NAME}-$(COMMIT_HASH)-rk rk

push-rk: build-rk
	docker buildx bake --file=docker/rockchip/rk.hcl rk \
		--set rk.tags=$(IMAGE_REPO):${GITHUB_REF_NAME}-$(COMMIT_HASH)-rk \
		--push
	docker buildx bake --push --file=docker/rockchip/rk.hcl --set rk.tags=$(IMAGE_REPO):${GITHUB_REF_NAME}-$(COMMIT_HASH)-rk rk
@@ -2,49 +2,74 @@

# https://askubuntu.com/questions/972516/debian-frontend-environment-variable
ARG DEBIAN_FRONTEND=noninteractive
ARG ROCM=6.3.3
ARG ROCM=5.7.3
ARG AMDGPU=gfx900
ARG HSA_OVERRIDE_GFX_VERSION
ARG HSA_OVERRIDE

#######################################################################
FROM wget AS rocm
FROM ubuntu:focal as rocm

ARG ROCM
ARG AMDGPU

RUN apt update && \
    apt install -y wget gpg && \
    wget -O rocm.deb https://repo.radeon.com/amdgpu-install/$ROCM/ubuntu/jammy/amdgpu-install_6.3.60303-1_all.deb && \
    apt install -y ./rocm.deb && \
    apt update && \
    apt install -y rocm
RUN apt-get update && apt-get -y upgrade
RUN apt-get -y install gnupg wget

RUN mkdir --parents --mode=0755 /etc/apt/keyrings

RUN wget https://repo.radeon.com/rocm/rocm.gpg.key -O - | gpg --dearmor | tee /etc/apt/keyrings/rocm.gpg > /dev/null
COPY docker/rocm/rocm.list /etc/apt/sources.list.d/
COPY docker/rocm/rocm-pin-600 /etc/apt/preferences.d/

RUN apt-get update

RUN apt-get -y install --no-install-recommends migraphx
RUN apt-get -y install --no-install-recommends migraphx-dev

RUN mkdir -p /opt/rocm-dist/opt/rocm-$ROCM/lib
RUN cd /opt/rocm-$ROCM/lib && \
    cp -dpr libMIOpen*.so* libamd*.so* libhip*.so* libhsa*.so* libmigraphx*.so* librocm*.so* librocblas*.so* libroctracer*.so* librocfft*.so* librocprofiler*.so* libroctx*.so* /opt/rocm-dist/opt/rocm-$ROCM/lib/ && \
    mkdir -p /opt/rocm-dist/opt/rocm-$ROCM/lib/migraphx/lib && \
    cp -dpr migraphx/lib/* /opt/rocm-dist/opt/rocm-$ROCM/lib/migraphx/lib
RUN cd /opt/rocm-$ROCM/lib && cp -dpr libMIOpen*.so* libamd*.so* libhip*.so* libhsa*.so* libmigraphx*.so* librocm*.so* librocblas*.so* /opt/rocm-dist/opt/rocm-$ROCM/lib/
RUN cd /opt/rocm-dist/opt/ && ln -s rocm-$ROCM rocm

RUN mkdir -p /opt/rocm-dist/etc/ld.so.conf.d/
RUN echo /opt/rocm/lib|tee /opt/rocm-dist/etc/ld.so.conf.d/rocm.conf

#######################################################################
FROM --platform=linux/amd64 debian:11 as debian-base

RUN apt-get update && apt-get -y upgrade
RUN apt-get -y install --no-install-recommends libelf1 libdrm2 libdrm-amdgpu1 libnuma1 kmod

RUN apt-get -y install python3

#######################################################################
# ROCm does not come with migraphx wrappers for python 3.9, so we build it here
FROM debian-base as debian-build

ARG ROCM

COPY --from=rocm /opt/rocm-$ROCM /opt/rocm-$ROCM
RUN ln -s /opt/rocm-$ROCM /opt/rocm

RUN apt-get -y install g++ cmake
RUN apt-get -y install python3-pybind11 python3.9-distutils python3-dev

WORKDIR /opt/build

COPY docker/rocm/migraphx .

RUN mkdir build && cd build && cmake .. && make install

#######################################################################
FROM deps AS deps-prelim

RUN apt-get update && apt-get install -y libnuma1
# need this to install libnuma1
RUN apt-get update
# no upgrade?!
RUN apt-get -y install libnuma1

WORKDIR /opt/frigate
WORKDIR /opt/frigate/
COPY --from=rootfs / /

RUN wget -q https://bootstrap.pypa.io/get-pip.py -O get-pip.py \
    && python3 get-pip.py "pip" --break-system-packages
RUN python3 -m pip config set global.break-system-packages true

COPY docker/rocm/requirements-wheels-rocm.txt /requirements.txt
RUN pip3 uninstall -y onnxruntime-openvino \
    && pip3 install -r /requirements.txt
COPY docker/rocm/rootfs/ /

#######################################################################
FROM scratch AS rocm-dist
@@ -54,14 +79,14 @@ ARG AMDGPU

COPY --from=rocm /opt/rocm-$ROCM/bin/rocminfo /opt/rocm-$ROCM/bin/migraphx-driver /opt/rocm-$ROCM/bin/
COPY --from=rocm /opt/rocm-$ROCM/share/miopen/db/*$AMDGPU* /opt/rocm-$ROCM/share/miopen/db/
COPY --from=rocm /opt/rocm-$ROCM/share/miopen/db/*gfx908* /opt/rocm-$ROCM/share/miopen/db/
COPY --from=rocm /opt/rocm-$ROCM/lib/rocblas/library/*$AMDGPU* /opt/rocm-$ROCM/lib/rocblas/library/
COPY --from=rocm /opt/rocm-dist/ /
COPY --from=debian-build /opt/rocm/lib/migraphx.cpython-39-x86_64-linux-gnu.so /opt/rocm-$ROCM/lib/

#######################################################################
FROM deps-prelim AS rocm-prelim-hsa-override0

ENV HSA_ENABLE_SDMA=0
ENV MIGRAPHX_ENABLE_NHWC=1

COPY --from=rocm-dist / /

@@ -76,3 +101,6 @@ ENV HSA_OVERRIDE_GFX_VERSION=$HSA_OVERRIDE_GFX_VERSION

#######################################################################
FROM rocm-prelim-hsa-override$HSA_OVERRIDE as rocm-deps

# Request yolov8 download at startup
ENV DOWNLOAD_YOLOV8=1
docker/rocm/migraphx/CMakeLists.txt (new file, +26 lines)
@@ -0,0 +1,26 @@

cmake_minimum_required(VERSION 3.1)

set(CMAKE_CXX_STANDARD 17)
set(CMAKE_CXX_STANDARD_REQUIRED ON)
set(CMAKE_CXX_EXTENSIONS OFF)

if(NOT CMAKE_BUILD_TYPE)
    set(CMAKE_BUILD_TYPE Release)
endif()

SET(CMAKE_INSTALL_RPATH_USE_LINK_PATH TRUE)

project(migraphx_py)

include_directories(/opt/rocm/include)

find_package(pybind11 REQUIRED)
pybind11_add_module(migraphx migraphx_py.cpp)

target_link_libraries(migraphx PRIVATE /opt/rocm/lib/libmigraphx.so /opt/rocm/lib/libmigraphx_tf.so /opt/rocm/lib/libmigraphx_onnx.so)

install(TARGETS migraphx
    COMPONENT python
    LIBRARY DESTINATION /opt/rocm/lib
)
docker/rocm/migraphx/migraphx_py.cpp (new file, +582 lines)
@@ -0,0 +1,582 @@
/*
 * The MIT License (MIT)
 *
 * Copyright (c) 2015-2022 Advanced Micro Devices, Inc. All rights reserved.
 *
 * Permission is hereby granted, free of charge, to any person obtaining a copy
 * of this software and associated documentation files (the "Software"), to deal
 * in the Software without restriction, including without limitation the rights
 * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
 * copies of the Software, and to permit persons to whom the Software is
 * furnished to do so, subject to the following conditions:
 *
 * The above copyright notice and this permission notice shall be included in
 * all copies or substantial portions of the Software.
 *
 * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
 * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
 * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
 * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
 * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
 * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
 * THE SOFTWARE.
 */

#include <pybind11/pybind11.h>
#include <pybind11/stl.h>
#include <pybind11/numpy.h>
#include <migraphx/program.hpp>
#include <migraphx/instruction_ref.hpp>
#include <migraphx/operation.hpp>
#include <migraphx/quantization.hpp>
#include <migraphx/generate.hpp>
#include <migraphx/instruction.hpp>
#include <migraphx/ref/target.hpp>
#include <migraphx/stringutils.hpp>
#include <migraphx/tf.hpp>
#include <migraphx/onnx.hpp>
#include <migraphx/load_save.hpp>
#include <migraphx/register_target.hpp>
#include <migraphx/json.hpp>
#include <migraphx/make_op.hpp>
#include <migraphx/op/common.hpp>

#ifdef HAVE_GPU
#include <migraphx/gpu/hip.hpp>
#endif

using half = half_float::half;
namespace py = pybind11;

#ifdef __clang__
#define MIGRAPHX_PUSH_UNUSED_WARNING \
    _Pragma("clang diagnostic push") \
        _Pragma("clang diagnostic ignored \"-Wused-but-marked-unused\"")
#define MIGRAPHX_POP_WARNING _Pragma("clang diagnostic pop")
#else
#define MIGRAPHX_PUSH_UNUSED_WARNING
#define MIGRAPHX_POP_WARNING
#endif
#define MIGRAPHX_PYBIND11_MODULE(...) \
    MIGRAPHX_PUSH_UNUSED_WARNING \
    PYBIND11_MODULE(__VA_ARGS__) \
    MIGRAPHX_POP_WARNING

#define MIGRAPHX_PYTHON_GENERATE_SHAPE_ENUM(x, t) .value(#x, migraphx::shape::type_t::x)
namespace migraphx {

migraphx::value to_value(py::kwargs kwargs);
migraphx::value to_value(py::list lst);

template <class T, class F>
void visit_py(T x, F f)
{
    if(py::isinstance<py::kwargs>(x))
    {
        f(to_value(x.template cast<py::kwargs>()));
    }
    else if(py::isinstance<py::list>(x))
    {
        f(to_value(x.template cast<py::list>()));
    }
    else if(py::isinstance<py::bool_>(x))
    {
        f(x.template cast<bool>());
    }
    else if(py::isinstance<py::int_>(x) or py::hasattr(x, "__index__"))
    {
        f(x.template cast<int>());
    }
    else if(py::isinstance<py::float_>(x))
    {
        f(x.template cast<float>());
    }
    else if(py::isinstance<py::str>(x))
    {
        f(x.template cast<std::string>());
    }
    else if(py::isinstance<migraphx::shape::dynamic_dimension>(x))
    {
        f(migraphx::to_value(x.template cast<migraphx::shape::dynamic_dimension>()));
    }
    else
    {
        MIGRAPHX_THROW("VISIT_PY: Unsupported data type!");
    }
}

migraphx::value to_value(py::list lst)
{
    migraphx::value v = migraphx::value::array{};
    for(auto val : lst)
    {
        visit_py(val, [&](auto py_val) { v.push_back(py_val); });
    }

    return v;
}

migraphx::value to_value(py::kwargs kwargs)
{
    migraphx::value v = migraphx::value::object{};

    for(auto arg : kwargs)
    {
        auto&& key = py::str(arg.first);
        auto&& val = arg.second;
        visit_py(val, [&](auto py_val) { v[key] = py_val; });
    }
    return v;
}
} // namespace migraphx

namespace pybind11 {
namespace detail {

template <>
struct npy_format_descriptor<half>
{
    static std::string format()
    {
        // following: https://docs.python.org/3/library/struct.html#format-characters
        return "e";
    }
    static constexpr auto name() { return _("half"); }
};

} // namespace detail
} // namespace pybind11

template <class F>
void visit_type(const migraphx::shape& s, F f)
{
    s.visit_type(f);
}

template <class T, class F>
void visit(const migraphx::raw_data<T>& x, F f)
{
    x.visit(f);
}

template <class F>
void visit_types(F f)
{
    migraphx::shape::visit_types(f);
}

template <class T>
py::buffer_info to_buffer_info(T& x)
{
    migraphx::shape s = x.get_shape();
    assert(s.type() != migraphx::shape::tuple_type);
    if(s.dynamic())
        MIGRAPHX_THROW("MIGRAPHX PYTHON: dynamic shape argument passed to to_buffer_info");
    auto strides = s.strides();
    std::transform(
        strides.begin(), strides.end(), strides.begin(), [&](auto i) { return i * s.type_size(); });
    py::buffer_info b;
    visit_type(s, [&](auto as) {
        // migraphx use int8_t data to store bool type, we need to
        // explicitly specify the data type as bool for python
        if(s.type() == migraphx::shape::bool_type)
        {
            b = py::buffer_info(x.data(),
                                as.size(),
                                py::format_descriptor<bool>::format(),
                                s.ndim(),
                                s.lens(),
                                strides);
        }
        else
        {
            b = py::buffer_info(x.data(),
                                as.size(),
                                py::format_descriptor<decltype(as())>::format(),
                                s.ndim(),
                                s.lens(),
                                strides);
        }
    });
    return b;
}

migraphx::shape to_shape(const py::buffer_info& info)
{
    migraphx::shape::type_t t;
    std::size_t n = 0;
    visit_types([&](auto as) {
        if(info.format == py::format_descriptor<decltype(as())>::format() or
           (info.format == "l" and py::format_descriptor<decltype(as())>::format() == "q") or
           (info.format == "L" and py::format_descriptor<decltype(as())>::format() == "Q"))
        {
            t = as.type_enum();
            n = sizeof(as());
        }
        else if(info.format == "?" and py::format_descriptor<decltype(as())>::format() == "b")
        {
            t = migraphx::shape::bool_type;
            n = sizeof(bool);
        }
    });

    if(n == 0)
    {
        MIGRAPHX_THROW("MIGRAPHX PYTHON: Unsupported data type " + info.format);
    }

    auto strides = info.strides;
    std::transform(strides.begin(), strides.end(), strides.begin(), [&](auto i) -> std::size_t {
        return n > 0 ? i / n : 0;
    });

    // scalar support
    if(info.shape.empty())
    {
        return migraphx::shape{t};
    }
    else
    {
        return migraphx::shape{t, info.shape, strides};
    }
}

MIGRAPHX_PYBIND11_MODULE(migraphx, m)
{
    py::class_<migraphx::shape> shape_cls(m, "shape");
    shape_cls
        .def(py::init([](py::kwargs kwargs) {
            auto v = migraphx::to_value(kwargs);
            auto t = migraphx::shape::parse_type(v.get("type", "float"));
            if(v.contains("dyn_dims"))
            {
                auto dyn_dims =
                    migraphx::from_value<std::vector<migraphx::shape::dynamic_dimension>>(
                        v.at("dyn_dims"));
                return migraphx::shape(t, dyn_dims);
            }
            auto lens = v.get<std::size_t>("lens", {1});
            if(v.contains("strides"))
                return migraphx::shape(t, lens, v.at("strides").to_vector<std::size_t>());
            else
                return migraphx::shape(t, lens);
        }))
        .def("type", &migraphx::shape::type)
        .def("lens", &migraphx::shape::lens)
        .def("strides", &migraphx::shape::strides)
        .def("ndim", &migraphx::shape::ndim)
        .def("elements", &migraphx::shape::elements)
        .def("bytes", &migraphx::shape::bytes)
        .def("type_string", &migraphx::shape::type_string)
        .def("type_size", &migraphx::shape::type_size)
        .def("dyn_dims", &migraphx::shape::dyn_dims)
        .def("packed", &migraphx::shape::packed)
        .def("transposed", &migraphx::shape::transposed)
        .def("broadcasted", &migraphx::shape::broadcasted)
        .def("standard", &migraphx::shape::standard)
        .def("scalar", &migraphx::shape::scalar)
        .def("dynamic", &migraphx::shape::dynamic)
        .def("__eq__", std::equal_to<migraphx::shape>{})
        .def("__ne__", std::not_equal_to<migraphx::shape>{})
        .def("__repr__", [](const migraphx::shape& s) { return migraphx::to_string(s); });

    py::enum_<migraphx::shape::type_t>(shape_cls, "type_t")
        MIGRAPHX_SHAPE_VISIT_TYPES(MIGRAPHX_PYTHON_GENERATE_SHAPE_ENUM);

    py::class_<migraphx::shape::dynamic_dimension>(shape_cls, "dynamic_dimension")
        .def(py::init<>())
        .def(py::init<std::size_t, std::size_t>())
        .def(py::init<std::size_t, std::size_t, std::set<std::size_t>>())
        .def_readwrite("min", &migraphx::shape::dynamic_dimension::min)
        .def_readwrite("max", &migraphx::shape::dynamic_dimension::max)
        .def_readwrite("optimals", &migraphx::shape::dynamic_dimension::optimals)
        .def("is_fixed", &migraphx::shape::dynamic_dimension::is_fixed);

    py::class_<migraphx::argument>(m, "argument", py::buffer_protocol())
        .def_buffer([](migraphx::argument& x) -> py::buffer_info { return to_buffer_info(x); })
        .def(py::init([](py::buffer b) {
            py::buffer_info info = b.request();
            return migraphx::argument(to_shape(info), info.ptr);
        }))
        .def("get_shape", &migraphx::argument::get_shape)
        .def("data_ptr",
             [](migraphx::argument& x) { return reinterpret_cast<std::uintptr_t>(x.data()); })
        .def("tolist",
             [](migraphx::argument& x) {
                 py::list l{x.get_shape().elements()};
                 visit(x, [&](auto data) { l = py::cast(data.to_vector()); });
                 return l;
             })
        .def("__eq__", std::equal_to<migraphx::argument>{})
        .def("__ne__", std::not_equal_to<migraphx::argument>{})
        .def("__repr__", [](const migraphx::argument& x) { return migraphx::to_string(x); });

    py::class_<migraphx::target>(m, "target");

    py::class_<migraphx::instruction_ref>(m, "instruction_ref")
        .def("shape", [](migraphx::instruction_ref i) { return i->get_shape(); })
        .def("op", [](migraphx::instruction_ref i) { return i->get_operator(); });

    py::class_<migraphx::module, std::unique_ptr<migraphx::module, py::nodelete>>(m, "module")
        .def("print", [](const migraphx::module& mm) { std::cout << mm << std::endl; })
        .def(
            "add_instruction",
            [](migraphx::module& mm,
               const migraphx::operation& op,
               std::vector<migraphx::instruction_ref>& args,
               std::vector<migraphx::module*>& mod_args) {
                return mm.add_instruction(op, args, mod_args);
            },
            py::arg("op"),
            py::arg("args"),
            py::arg("mod_args") = std::vector<migraphx::module*>{})
        .def(
            "add_literal",
            [](migraphx::module& mm, py::buffer data) {
                py::buffer_info info = data.request();
                auto literal_shape = to_shape(info);
                return mm.add_literal(literal_shape, reinterpret_cast<char*>(info.ptr));
            },
            py::arg("data"))
        .def(
            "add_parameter",
            [](migraphx::module& mm, const std::string& name, const migraphx::shape shape) {
                return mm.add_parameter(name, shape);
            },
            py::arg("name"),
            py::arg("shape"))
        .def(
            "add_return",
            [](migraphx::module& mm, std::vector<migraphx::instruction_ref>& args) {
                return mm.add_return(args);
            },
            py::arg("args"))
        .def("__repr__", [](const migraphx::module& mm) { return migraphx::to_string(mm); });

    py::class_<migraphx::program>(m, "program")
        .def(py::init([]() { return migraphx::program(); }))
        .def("get_parameter_names", &migraphx::program::get_parameter_names)
        .def("get_parameter_shapes", &migraphx::program::get_parameter_shapes)
        .def("get_output_shapes", &migraphx::program::get_output_shapes)
        .def("is_compiled", &migraphx::program::is_compiled)
        .def(
            "compile",
            [](migraphx::program& p,
               const migraphx::target& t,
               bool offload_copy,
               bool fast_math,
               bool exhaustive_tune) {
                migraphx::compile_options options;
                options.offload_copy = offload_copy;
                options.fast_math = fast_math;
                options.exhaustive_tune = exhaustive_tune;
                p.compile(t, options);
            },
            py::arg("t"),
            py::arg("offload_copy") = true,
            py::arg("fast_math") = true,
            py::arg("exhaustive_tune") = false)
        .def("get_main_module", [](const migraphx::program& p) { return p.get_main_module(); })
        .def(
            "create_module",
            [](migraphx::program& p, const std::string& name) { return p.create_module(name); },
            py::arg("name"))
        .def("run",
             [](migraphx::program& p, py::dict params) {
                 migraphx::parameter_map pm;
                 for(auto x : params)
                 {
                     std::string key = x.first.cast<std::string>();
                     py::buffer b = x.second.cast<py::buffer>();
                     py::buffer_info info = b.request();
                     pm[key] = migraphx::argument(to_shape(info), info.ptr);
                 }
                 return p.eval(pm);
             })
        .def("run_async",
             [](migraphx::program& p,
                py::dict params,
                std::uintptr_t stream,
                std::string stream_name) {
                 migraphx::parameter_map pm;
                 for(auto x : params)
                 {
                     std::string key = x.first.cast<std::string>();
                     py::buffer b = x.second.cast<py::buffer>();
                     py::buffer_info info = b.request();
                     pm[key] = migraphx::argument(to_shape(info), info.ptr);
                 }
                 migraphx::execution_environment exec_env{
                     migraphx::any_ptr(reinterpret_cast<void*>(stream), stream_name), true};
                 return p.eval(pm, exec_env);
             })
        .def("sort", &migraphx::program::sort)
        .def("print", [](const migraphx::program& p) { std::cout << p << std::endl; })
        .def("__eq__", std::equal_to<migraphx::program>{})
        .def("__ne__", std::not_equal_to<migraphx::program>{})
        .def("__repr__", [](const migraphx::program& p) { return migraphx::to_string(p); });

    py::class_<migraphx::operation> op(m, "op");
    op.def(py::init([](const std::string& name, py::kwargs kwargs) {
          migraphx::value v = migraphx::value::object{};
          if(kwargs)
          {
              v = migraphx::to_value(kwargs);
          }
          return migraphx::make_op(name, v);
      }))
        .def("name", &migraphx::operation::name);

    py::enum_<migraphx::op::pooling_mode>(op, "pooling_mode")
        .value("average", migraphx::op::pooling_mode::average)
        .value("max", migraphx::op::pooling_mode::max)
        .value("lpnorm", migraphx::op::pooling_mode::lpnorm);

    py::enum_<migraphx::op::rnn_direction>(op, "rnn_direction")
        .value("forward", migraphx::op::rnn_direction::forward)
        .value("reverse", migraphx::op::rnn_direction::reverse)
        .value("bidirectional", migraphx::op::rnn_direction::bidirectional);

    m.def(
        "argument_from_pointer",
        [](const migraphx::shape shape, const int64_t address) {
            return migraphx::argument(shape, reinterpret_cast<void*>(address));
        },
        py::arg("shape"),
        py::arg("address"));

    m.def(
        "parse_tf",
        [](const std::string& filename,
           bool is_nhwc,
           unsigned int batch_size,
           std::unordered_map<std::string, std::vector<std::size_t>> map_input_dims,
           std::vector<std::string> output_names) {
            return migraphx::parse_tf(
                filename, migraphx::tf_options{is_nhwc, batch_size, map_input_dims, output_names});
        },
        "Parse tf protobuf (default format is nhwc)",
        py::arg("filename"),
        py::arg("is_nhwc") = true,
        py::arg("batch_size") = 1,
        py::arg("map_input_dims") = std::unordered_map<std::string, std::vector<std::size_t>>(),
        py::arg("output_names") = std::vector<std::string>());

    m.def(
        "parse_onnx",
        [](const std::string& filename,
           unsigned int default_dim_value,
           migraphx::shape::dynamic_dimension default_dyn_dim_value,
           std::unordered_map<std::string, std::vector<std::size_t>> map_input_dims,
           std::unordered_map<std::string, std::vector<migraphx::shape::dynamic_dimension>>
               map_dyn_input_dims,
           bool skip_unknown_operators,
           bool print_program_on_error,
           int64_t max_loop_iterations) {
            migraphx::onnx_options options;
            options.default_dim_value = default_dim_value;
            options.default_dyn_dim_value = default_dyn_dim_value;
            options.map_input_dims = map_input_dims;
            options.map_dyn_input_dims = map_dyn_input_dims;
            options.skip_unknown_operators = skip_unknown_operators;
            options.print_program_on_error = print_program_on_error;
            options.max_loop_iterations = max_loop_iterations;
            return migraphx::parse_onnx(filename, options);
        },
        "Parse onnx file",
        py::arg("filename"),
        py::arg("default_dim_value") = 0,
        py::arg("default_dyn_dim_value") = migraphx::shape::dynamic_dimension{1, 1},
        py::arg("map_input_dims") = std::unordered_map<std::string, std::vector<std::size_t>>(),
        py::arg("map_dyn_input_dims") =
            std::unordered_map<std::string, std::vector<migraphx::shape::dynamic_dimension>>(),
        py::arg("skip_unknown_operators") = false,
        py::arg("print_program_on_error") = false,
        py::arg("max_loop_iterations") = 10);

    m.def(
        "parse_onnx_buffer",
        [](const std::string& onnx_buffer,
           unsigned int default_dim_value,
           migraphx::shape::dynamic_dimension default_dyn_dim_value,
           std::unordered_map<std::string, std::vector<std::size_t>> map_input_dims,
           std::unordered_map<std::string, std::vector<migraphx::shape::dynamic_dimension>>
               map_dyn_input_dims,
           bool skip_unknown_operators,
           bool print_program_on_error) {
            migraphx::onnx_options options;
            options.default_dim_value = default_dim_value;
            options.default_dyn_dim_value = default_dyn_dim_value;
            options.map_input_dims = map_input_dims;
            options.map_dyn_input_dims = map_dyn_input_dims;
            options.skip_unknown_operators = skip_unknown_operators;
            options.print_program_on_error = print_program_on_error;
            return migraphx::parse_onnx_buffer(onnx_buffer, options);
        },
        "Parse onnx file",
        py::arg("filename"),
        py::arg("default_dim_value") = 0,
        py::arg("default_dyn_dim_value") = migraphx::shape::dynamic_dimension{1, 1},
        py::arg("map_input_dims") = std::unordered_map<std::string, std::vector<std::size_t>>(),
        py::arg("map_dyn_input_dims") =
            std::unordered_map<std::string, std::vector<migraphx::shape::dynamic_dimension>>(),
        py::arg("skip_unknown_operators") = false,
        py::arg("print_program_on_error") = false);

    m.def(
        "load",
        [](const std::string& name, const std::string& format) {
            migraphx::file_options options;
            options.format = format;
            return migraphx::load(name, options);
        },
        "Load MIGraphX program",
        py::arg("filename"),
        py::arg("format") = "msgpack");

    m.def(
        "save",
        [](const migraphx::program& p, const std::string& name, const std::string& format) {
            migraphx::file_options options;
            options.format = format;
            return migraphx::save(p, name, options);
        },
        "Save MIGraphX program",
        py::arg("p"),
        py::arg("filename"),
        py::arg("format") = "msgpack");

    m.def("get_target", &migraphx::make_target);
    m.def("create_argument", [](const migraphx::shape& s, const std::vector<double>& values) {
        if(values.size() != s.elements())
            MIGRAPHX_THROW("Values and shape elements do not match");
        migraphx::argument a{s};
        a.fill(values.begin(), values.end());
        return a;
    });
    m.def("generate_argument", &migraphx::generate_argument, py::arg("s"), py::arg("seed") = 0);
    m.def("fill_argument", &migraphx::fill_argument, py::arg("s"), py::arg("value"));
    m.def("quantize_fp16",
          &migraphx::quantize_fp16,
          py::arg("prog"),
          py::arg("ins_names") = std::vector<std::string>{"all"});
    m.def("quantize_int8",
          &migraphx::quantize_int8,
          py::arg("prog"),
          py::arg("t"),
          py::arg("calibration") = std::vector<migraphx::parameter_map>{},
          py::arg("ins_names") = std::vector<std::string>{"dot", "convolution"});

#ifdef HAVE_GPU
    m.def("allocate_gpu", &migraphx::gpu::allocate_gpu, py::arg("s"), py::arg("host") = false);
    m.def("to_gpu", &migraphx::gpu::to_gpu, py::arg("arg"), py::arg("host") = false);
    m.def("from_gpu", &migraphx::gpu::from_gpu);
    m.def("gpu_sync", [] { migraphx::gpu::gpu_sync(); });
#endif

#ifdef VERSION_INFO
    m.attr("__version__") = VERSION_INFO;
#else
    m.attr("__version__") = "dev";
#endif
}
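The bindings above give the Python side everything the ROCm detector path needs: parse an ONNX model, compile it for a GPU target, and evaluate it with numpy buffers. A minimal usage sketch against this module; the model path and input parameter name are assumptions for illustration:

# Sketch: driving the migraphx module defined by the pybind11 bindings above.
import numpy as np

import migraphx

prog = migraphx.parse_onnx("/config/model_cache/yolov8n.onnx")
prog.compile(migraphx.get_target("gpu"), offload_copy=True)

print(prog.get_parameter_shapes())  # map of input name -> shape

# run() accepts any buffer-protocol object per parameter and returns the
# list of output arguments from program::eval.
frame = np.zeros((1, 3, 320, 320), dtype=np.float32)
results = prog.run({"images": frame})
print(results[0].get_shape())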
@@ -1 +0,0 @@
onnxruntime-rocm @ https://github.com/NickM-27/frigate-onnxruntime-rocm/releases/download/v6.3.3/onnxruntime_rocm-1.20.1-cp311-cp311-linux_x86_64.whl
docker/rocm/rocm-pin-600 (new file, +3 lines)
@@ -0,0 +1,3 @@
Package: *
Pin: release o=repo.radeon.com
Pin-Priority: 600
@@ -2,7 +2,7 @@ variable "AMDGPU" {
    default = "gfx900"
}
variable "ROCM" {
    default = "6.3.3"
    default = "5.7.3"
}
variable "HSA_OVERRIDE_GFX_VERSION" {
    default = ""
@@ -10,13 +10,6 @@ variable "HSA_OVERRIDE_GFX_VERSION" {
variable "HSA_OVERRIDE" {
    default = "1"
}

target wget {
    dockerfile = "docker/main/Dockerfile"
    platforms = ["linux/amd64"]
    target = "wget"
}

target deps {
    dockerfile = "docker/main/Dockerfile"
    platforms = ["linux/amd64"]
@@ -33,7 +26,6 @@ target rocm {
    dockerfile = "docker/rocm/Dockerfile"
    contexts = {
        deps = "target:deps",
        wget = "target:wget",
        rootfs = "target:rootfs"
    }
    platforms = ["linux/amd64"]
docker/rocm/rocm.list (new file, +1 line)
@@ -0,0 +1 @@
deb [arch=amd64 signed-by=/etc/apt/keyrings/rocm.gpg] https://repo.radeon.com/rocm/apt/5.7.3 focal main
@@ -4,50 +4,14 @@ BOARDS += rocm
|
||||
ROCM_CHIPSETS:=gfx900:9.0.0 gfx1030:10.3.0 gfx1100:11.0.0
|
||||
|
||||
local-rocm: version
|
||||
$(foreach chipset,$(ROCM_CHIPSETS), \
|
||||
AMDGPU=$(word 1,$(subst :, ,$(chipset))) \
|
||||
HSA_OVERRIDE_GFX_VERSION=$(word 2,$(subst :, ,$(chipset))) \
|
||||
HSA_OVERRIDE=1 \
|
||||
docker buildx bake --file=docker/rocm/rocm.hcl rocm \
|
||||
--set rocm.tags=frigate:latest-rocm-$(word 1,$(subst :, ,$(chipset))) \
|
||||
--load \
|
||||
&&) true
|
||||
|
||||
unset HSA_OVERRIDE_GFX_VERSION && \
|
||||
HSA_OVERRIDE=0 \
|
||||
AMDGPU=gfx \
|
||||
docker buildx bake --file=docker/rocm/rocm.hcl rocm \
|
||||
--set rocm.tags=frigate:latest-rocm \
|
||||
--load
|
||||
$(foreach chipset,$(ROCM_CHIPSETS),AMDGPU=$(word 1,$(subst :, ,$(chipset))) HSA_OVERRIDE_GFX_VERSION=$(word 2,$(subst :, ,$(chipset))) HSA_OVERRIDE=1 docker buildx bake --load --file=docker/rocm/rocm.hcl --set rocm.tags=frigate:latest-rocm-$(word 1,$(subst :, ,$(chipset))) rocm;)
|
||||
unset HSA_OVERRIDE_GFX_VERSION && HSA_OVERRIDE=0 AMDGPU=gfx docker buildx bake --load --file=docker/rocm/rocm.hcl --set rocm.tags=frigate:latest-rocm rocm
|
||||
|
||||
build-rocm: version
|
||||
$(foreach chipset,$(ROCM_CHIPSETS), \
|
||||
AMDGPU=$(word 1,$(subst :, ,$(chipset))) \
|
||||
HSA_OVERRIDE_GFX_VERSION=$(word 2,$(subst :, ,$(chipset))) \
|
||||
HSA_OVERRIDE=1 \
|
||||
docker buildx bake --file=docker/rocm/rocm.hcl rocm \
|
||||
--set rocm.tags=$(IMAGE_REPO):${GITHUB_REF_NAME}-$(COMMIT_HASH)-rocm-$(chipset) \
|
||||
&&) true
|
||||
|
||||
unset HSA_OVERRIDE_GFX_VERSION && \
|
||||
HSA_OVERRIDE=0 \
|
||||
AMDGPU=gfx \
|
||||
docker buildx bake --file=docker/rocm/rocm.hcl rocm \
|
||||
--set rocm.tags=$(IMAGE_REPO):${GITHUB_REF_NAME}-$(COMMIT_HASH)-rocm
|
||||
$(foreach chipset,$(ROCM_CHIPSETS),AMDGPU=$(word 1,$(subst :, ,$(chipset))) HSA_OVERRIDE_GFX_VERSION=$(word 2,$(subst :, ,$(chipset))) HSA_OVERRIDE=1 docker buildx bake --file=docker/rocm/rocm.hcl --set rocm.tags=$(IMAGE_REPO):${GITHUB_REF_NAME}-$(COMMIT_HASH)-rocm-$(chipset) rocm;)
|
||||
unset HSA_OVERRIDE_GFX_VERSION && HSA_OVERRIDE=0 AMDGPU=gfx docker buildx bake --file=docker/rocm/rocm.hcl --set rocm.tags=$(IMAGE_REPO):${GITHUB_REF_NAME}-$(COMMIT_HASH)-rocm rocm
|
||||
|
||||
push-rocm: build-rocm
|
||||
$(foreach chipset,$(ROCM_CHIPSETS), \
|
||||
AMDGPU=$(word 1,$(subst :, ,$(chipset))) \
|
||||
HSA_OVERRIDE_GFX_VERSION=$(word 2,$(subst :, ,$(chipset))) \
|
||||
HSA_OVERRIDE=1 \
|
||||
docker buildx bake --file=docker/rocm/rocm.hcl rocm \
|
||||
--set rocm.tags=$(IMAGE_REPO):${GITHUB_REF_NAME}-$(COMMIT_HASH)-rocm-$(chipset) \
|
||||
--push \
|
||||
&&) true
|
||||
$(foreach chipset,$(ROCM_CHIPSETS),AMDGPU=$(word 1,$(subst :, ,$(chipset))) HSA_OVERRIDE_GFX_VERSION=$(word 2,$(subst :, ,$(chipset))) HSA_OVERRIDE=1 docker buildx bake --push --file=docker/rocm/rocm.hcl --set rocm.tags=$(IMAGE_REPO):${GITHUB_REF_NAME}-$(COMMIT_HASH)-rocm-$(chipset) rocm;)
|
||||
unset HSA_OVERRIDE_GFX_VERSION && HSA_OVERRIDE=0 AMDGPU=gfx docker buildx bake --push --file=docker/rocm/rocm.hcl --set rocm.tags=$(IMAGE_REPO):${GITHUB_REF_NAME}-$(COMMIT_HASH)-rocm rocm
|
||||
|
||||
unset HSA_OVERRIDE_GFX_VERSION && \
|
||||
HSA_OVERRIDE=0 \
|
||||
AMDGPU=gfx \
|
||||
docker buildx bake --file=docker/rocm/rocm.hcl rocm \
|
||||
--set rocm.tags=$(IMAGE_REPO):${GITHUB_REF_NAME}-$(COMMIT_HASH)-rocm \
|
||||
--push
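Both recipe styles above do the same work: `$(foreach ...)` iterates over `ROCM_CHIPSETS`, and `$(word 1,$(subst :, ,$(chipset)))` / `$(word 2,...)` split each `gfx:version` pair into `AMDGPU` and `HSA_OVERRIDE_GFX_VERSION`. For the first entry, `gfx900:9.0.0`, one `local-rocm` iteration expands to roughly the following (an illustrative expansion, not part of the diff):

```bash
# What a single foreach iteration runs for the gfx900:9.0.0 entry (illustrative).
AMDGPU=gfx900 \
HSA_OVERRIDE_GFX_VERSION=9.0.0 \
HSA_OVERRIDE=1 \
docker buildx bake --load --file=docker/rocm/rocm.hcl \
  --set rocm.tags=frigate:latest-rocm-gfx900 rocm
```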

docker/rocm/rootfs/etc/s6-overlay/s6-rc.d/compile-rocm-models/run (new executable file, 20 lines)
@@ -0,0 +1,20 @@
#!/command/with-contenv bash
# shellcheck shell=bash
# Compile YoloV8 ONNX files into ROCm MIGraphX files

OVERRIDE=$(cd /opt/frigate && python3 -c 'import frigate.detectors.plugins.rocm as rocm; print(rocm.auto_override_gfx_version())')

if ! test -z "$OVERRIDE"; then
  echo "Using HSA_OVERRIDE_GFX_VERSION=${OVERRIDE}"
  export HSA_OVERRIDE_GFX_VERSION=$OVERRIDE
fi

for onnx in /config/model_cache/yolov8/*.onnx
do
  mxr="${onnx%.onnx}.mxr"
  if ! test -f $mxr; then
    echo "processing $onnx into $mxr"
    /opt/rocm/bin/migraphx-driver compile $onnx --optimize --gpu --enable-offload-copy --binary -o $mxr
  fi
done
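The conversion the loop performs can also be run by hand inside the container when debugging a single model, along these lines (a sketch; the model filename is hypothetical):

```bash
# One-off conversion of a single ONNX model to a MIGraphX .mxr file (example filename).
/opt/rocm/bin/migraphx-driver compile /config/model_cache/yolov8/yolov8n.onnx \
  --optimize --gpu --enable-offload-copy --binary \
  -o /config/model_cache/yolov8/yolov8n.mxr
```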

@@ -0,0 +1 @@
oneshot
@@ -0,0 +1 @@
/etc/s6-overlay/s6-rc.d/compile-rocm-models/run
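The two one-line files added above are, by s6-rc convention, most likely the service's `type` file (declaring it a `oneshot`) and its `up` file pointing at the run script; the compare view omits the file names, so this is inferred. To inspect the whole service definition inside a running container:

```bash
# Print every file of the oneshot service definition (file names inferred, not shown in the diff).
for f in /etc/s6-overlay/s6-rc.d/compile-rocm-models/*; do
  echo "== $f"; cat "$f"
done
```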

@@ -6,12 +6,11 @@ ARG DEBIAN_FRONTEND=noninteractive
FROM deps AS rpi-deps
ARG TARGETARCH

RUN rm -rf /usr/lib/btbn-ffmpeg/

# Install dependencies
RUN --mount=type=bind,source=docker/rpi/install_deps.sh,target=/deps/install_deps.sh \
    /deps/install_deps.sh

ENV DEFAULT_FFMPEG_VERSION="rpi"
ENV INCLUDED_FFMPEG_VERSIONS="${DEFAULT_FFMPEG_VERSION}:${INCLUDED_FFMPEG_VERSIONS}"

WORKDIR /opt/frigate/
COPY --from=rootfs / /
@@ -18,17 +18,13 @@ apt-get -qq install --no-install-recommends -y \
mkdir -p -m 600 /root/.gnupg

# enable non-free repo
echo "deb http://deb.debian.org/debian bookworm main contrib non-free non-free-firmware" | tee -a /etc/apt/sources.list
apt update
sed -i -e's/ main/ main contrib non-free/g' /etc/apt/sources.list

# ffmpeg -> arm64
if [[ "${TARGETARCH}" == "arm64" ]]; then
  # add raspberry pi repo
  gpg --no-default-keyring --keyring /usr/share/keyrings/raspbian.gpg --keyserver keyserver.ubuntu.com --recv-keys 82B129927FA3303E
  echo "deb [signed-by=/usr/share/keyrings/raspbian.gpg] https://archive.raspberrypi.org/debian/ bookworm main" | tee /etc/apt/sources.list.d/raspi.list
  echo "deb [signed-by=/usr/share/keyrings/raspbian.gpg] https://archive.raspberrypi.org/debian/ bullseye main" | tee /etc/apt/sources.list.d/raspi.list
  apt-get -qq update
  apt-get -qq install --no-install-recommends --no-install-suggests -y ffmpeg
  mkdir -p /usr/lib/ffmpeg/rpi/bin
  ln -svf /usr/bin/ffmpeg /usr/lib/ffmpeg/rpi/bin/ffmpeg
  ln -svf /usr/bin/ffprobe /usr/lib/ffmpeg/rpi/bin/ffprobe
fi
@@ -1,15 +1,10 @@
BOARDS += rpi

local-rpi: version
docker buildx bake --file=docker/rpi/rpi.hcl rpi \
--set rpi.tags=frigate:latest-rpi \
--load
docker buildx bake --load --file=docker/rpi/rpi.hcl --set rpi.tags=frigate:latest-rpi rpi

build-rpi: version
docker buildx bake --file=docker/rpi/rpi.hcl rpi \
--set rpi.tags=$(IMAGE_REPO):${GITHUB_REF_NAME}-$(COMMIT_HASH)-rpi
docker buildx bake --file=docker/rpi/rpi.hcl --set rpi.tags=$(IMAGE_REPO):${GITHUB_REF_NAME}-$(COMMIT_HASH)-rpi rpi

push-rpi: build-rpi
docker buildx bake --file=docker/rpi/rpi.hcl rpi \
--set rpi.tags=$(IMAGE_REPO):${GITHUB_REF_NAME}-$(COMMIT_HASH)-rpi \
--push
docker buildx bake --push --file=docker/rpi/rpi.hcl --set rpi.tags=$(IMAGE_REPO):${GITHUB_REF_NAME}-$(COMMIT_HASH)-rpi rpi

@@ -3,16 +3,20 @@
# https://askubuntu.com/questions/972516/debian-frontend-environment-variable
ARG DEBIAN_FRONTEND=noninteractive

# Globally set pip break-system-packages option to avoid having to specify it every time
ARG PIP_BREAK_SYSTEM_PACKAGES=1
# Make this a separate target so it can be built/cached optionally
FROM wheels as trt-wheels
ARG DEBIAN_FRONTEND
ARG TARGETARCH

# Add TensorRT wheels to another folder
COPY docker/tensorrt/requirements-amd64.txt /requirements-tensorrt.txt
RUN mkdir -p /trt-wheels && pip3 wheel --wheel-dir=/trt-wheels -r /requirements-tensorrt.txt

FROM tensorrt-base AS frigate-tensorrt
ARG PIP_BREAK_SYSTEM_PACKAGES
ENV TRT_VER=8.6.1

# Install TensorRT wheels
COPY docker/tensorrt/requirements-amd64.txt /requirements-tensorrt.txt
RUN pip3 install -U -r /requirements-tensorrt.txt && ldconfig
ENV TRT_VER=8.5.3
RUN --mount=type=bind,from=trt-wheels,source=/trt-wheels,target=/deps/trt-wheels \
    pip3 install -U /deps/trt-wheels/*.whl && \
    ldconfig

WORKDIR /opt/frigate/
COPY --from=rootfs / /
@@ -22,7 +26,6 @@ FROM devcontainer AS devcontainer-trt

COPY --from=trt-deps /usr/local/lib/libyolo_layer.so /usr/local/lib/libyolo_layer.so
COPY --from=trt-deps /usr/local/src/tensorrt_demos /usr/local/src/tensorrt_demos
COPY --from=trt-deps /usr/local/cuda-12.1 /usr/local/cuda
COPY docker/tensorrt/detector/rootfs/ /
COPY --from=trt-deps /usr/local/lib/libyolo_layer.so /usr/local/lib/libyolo_layer.so
RUN --mount=type=bind,from=trt-wheels,source=/trt-wheels,target=/deps/trt-wheels \

@@ -7,25 +7,20 @@ ARG BASE_IMAGE
FROM ${BASE_IMAGE} AS build-wheels
ARG DEBIAN_FRONTEND

# Add deadsnakes PPA for python3.11
RUN apt-get -qq update && \
    apt-get -qq install -y --no-install-recommends \
    software-properties-common \
    && add-apt-repository ppa:deadsnakes/ppa

# Use a separate container to build wheels to prevent build dependencies in final image
RUN apt-get -qq update \
    && apt-get -qq install -y --no-install-recommends \
    python3.11 python3.11-dev \
    wget build-essential cmake git \
    python3.9 python3.9-dev \
    wget build-essential cmake git \
    && rm -rf /var/lib/apt/lists/*

# Ensure python3 defaults to python3.11
RUN update-alternatives --install /usr/bin/python3 python3 /usr/bin/python3.11 1
# Ensure python3 defaults to python3.9
RUN update-alternatives --install /usr/bin/python3 python3 /usr/bin/python3.9 1

RUN wget -q https://bootstrap.pypa.io/get-pip.py -O get-pip.py \
    && python3 get-pip.py "pip"


FROM build-wheels AS trt-wheels
ARG DEBIAN_FRONTEND
ARG TARGETARCH
@@ -46,12 +41,7 @@ RUN --mount=type=bind,source=docker/tensorrt/detector/build_python_tensorrt.sh,t
    && TENSORRT_VER=$(cat /etc/TENSORRT_VER) /deps/build_python_tensorrt.sh

COPY docker/tensorrt/requirements-arm64.txt /requirements-tensorrt.txt
# See https://elinux.org/Jetson_Zoo#ONNX_Runtime
ADD https://nvidia.box.com/shared/static/9yvw05k6u343qfnkhdv2x6xhygze0aq1.whl /tmp/onnxruntime_gpu-1.19.0-cp311-cp311-linux_aarch64.whl

RUN pip3 uninstall -y onnxruntime-openvino \
    && pip3 wheel --wheel-dir=/trt-wheels -r /requirements-tensorrt.txt \
    && pip3 install --no-deps /tmp/onnxruntime_gpu-1.19.0-cp311-cp311-linux_aarch64.whl
RUN pip3 wheel --wheel-dir=/trt-wheels -r /requirements-tensorrt.txt

FROM build-wheels AS trt-model-wheels
ARG DEBIAN_FRONTEND
@@ -73,21 +63,11 @@ RUN --mount=type=bind,source=docker/tensorrt/build_jetson_ffmpeg.sh,target=/deps
# Frigate w/ TensorRT for NVIDIA Jetson platforms
FROM tensorrt-base AS frigate-tensorrt
RUN apt-get update \
    && apt-get install -y python-is-python3 libprotobuf23 \
    && apt-get install -y python-is-python3 libprotobuf17 \
    && rm -rf /var/lib/apt/lists/*

RUN rm -rf /usr/lib/btbn-ffmpeg/
COPY --from=jetson-ffmpeg /rootfs /
ENV DEFAULT_FFMPEG_VERSION="jetson"
ENV INCLUDED_FFMPEG_VERSIONS="${DEFAULT_FFMPEG_VERSION}:${INCLUDED_FFMPEG_VERSIONS}"

# ffmpeg runtime dependencies
RUN apt-get -qq update \
    && apt-get -qq install -y --no-install-recommends \
    libx264-163 libx265-199 libegl1 \
    && rm -rf /var/lib/apt/lists/*

# Fixes "Error loading shared libs"
RUN mkdir -p /etc/ld.so.conf.d && echo /usr/lib/ffmpeg/jetson/lib/ > /etc/ld.so.conf.d/ffmpeg.conf

COPY --from=trt-wheels /etc/TENSORRT_VER /etc/TENSORRT_VER
RUN --mount=type=bind,from=trt-wheels,source=/trt-wheels,target=/deps/trt-wheels \
@@ -97,6 +77,3 @@ RUN --mount=type=bind,from=trt-wheels,source=/trt-wheels,target=/deps/trt-wheels

WORKDIR /opt/frigate/
COPY --from=rootfs / /

# Fixes "Error importing detector runtime: /usr/lib/aarch64-linux-gnu/libstdc++.so.6: cannot allocate memory in static TLS block"
ENV LD_PRELOAD /usr/lib/aarch64-linux-gnu/libstdc++.so.6
@@ -3,12 +3,11 @@
# https://askubuntu.com/questions/972516/debian-frontend-environment-variable
ARG DEBIAN_FRONTEND=noninteractive

ARG TRT_BASE=nvcr.io/nvidia/tensorrt:23.12-py3
ARG TRT_BASE=nvcr.io/nvidia/tensorrt:23.03-py3

# Build TensorRT-specific library
FROM ${TRT_BASE} AS trt-deps

ARG TARGETARCH
ARG COMPUTE_LEVEL

RUN apt-get update \
@@ -17,28 +16,16 @@ RUN apt-get update \
RUN --mount=type=bind,source=docker/tensorrt/detector/tensorrt_libyolo.sh,target=/tensorrt_libyolo.sh \
    /tensorrt_libyolo.sh

# COPY required individual CUDA deps
RUN mkdir -p /usr/local/cuda-deps
RUN if [ "$TARGETARCH" = "amd64" ]; then \
    cp /usr/local/cuda-12.3/targets/x86_64-linux/lib/libcurand.so.* /usr/local/cuda-deps/ && \
    cp /usr/local/cuda-12.3/targets/x86_64-linux/lib/libnvrtc.so.* /usr/local/cuda-deps/ ; \
    fi

# Frigate w/ TensorRT Support as separate image
FROM deps AS tensorrt-base

# Disable S6 Global timeout
ENV S6_CMD_WAIT_FOR_SERVICES_MAXTIME=0

# COPY TensorRT Model Generation Deps
COPY --from=trt-deps /usr/local/lib/libyolo_layer.so /usr/local/lib/libyolo_layer.so
COPY --from=trt-deps /usr/local/src/tensorrt_demos /usr/local/src/tensorrt_demos

# COPY Individual CUDA deps folder
COPY --from=trt-deps /usr/local/cuda-deps /usr/local/cuda

COPY docker/tensorrt/detector/rootfs/ /
ENV YOLO_MODELS=""
ENV YOLO_MODELS="yolov7-320"

HEALTHCHECK --start-period=600s --start-interval=5s --interval=15s --timeout=5s --retries=3 \
    CMD curl --fail --silent --show-error http://127.0.0.1:5000/api/version || exit 1
@@ -5,7 +5,7 @@

set -euxo pipefail

INSTALL_PREFIX=/rootfs/usr/lib/ffmpeg/jetson
INSTALL_PREFIX=/rootfs/usr/local

apt-get -qq update
apt-get -qq install -y --no-install-recommends build-essential ccache clang cmake pkg-config
@@ -14,27 +14,14 @@ apt-get -qq install -y --no-install-recommends libx264-dev libx265-dev
pushd /tmp

# Install libnvmpi to enable nvmpi decoders (h264_nvmpi, hevc_nvmpi)
if [ -e /usr/local/cuda-12 ]; then
  # assume Jetpack 6.2
  apt-key adv --fetch-key https://repo.download.nvidia.com/jetson/jetson-ota-public.asc
  echo "deb https://repo.download.nvidia.com/jetson/common r36.4 main" >> /etc/apt/sources.list.d/nvidia-l4t-apt-source.list
  echo "deb https://repo.download.nvidia.com/jetson/t234 r36.4 main" >> /etc/apt/sources.list.d/nvidia-l4t-apt-source.list
  echo "deb https://repo.download.nvidia.com/jetson/ffmpeg r36.4 main" >> /etc/apt/sources.list.d/nvidia-l4t-apt-source.list

  mkdir -p /opt/nvidia/l4t-packages/
  touch /opt/nvidia/l4t-packages/.nv-l4t-disable-boot-fw-update-in-preinstall

  apt-get update
  apt-get -qq install -y --no-install-recommends -o Dpkg::Options::="--force-confold" nvidia-l4t-jetson-multimedia-api
elif [ -e /usr/local/cuda-10.2 ]; then
if [ -e /usr/local/cuda-10.2 ]; then
  # assume Jetpack 4.X
  wget -q https://developer.nvidia.com/embedded/L4T/r32_Release_v5.0/T186/Jetson_Multimedia_API_R32.5.0_aarch64.tbz2 -O jetson_multimedia_api.tbz2
  tar xaf jetson_multimedia_api.tbz2 -C / && rm jetson_multimedia_api.tbz2
else
  # assume Jetpack 5.X
  wget -q https://developer.nvidia.com/downloads/embedded/l4t/r35_release_v3.1/release/jetson_multimedia_api_r35.3.1_aarch64.tbz2 -O jetson_multimedia_api.tbz2
  tar xaf jetson_multimedia_api.tbz2 -C / && rm jetson_multimedia_api.tbz2
fi
tar xaf jetson_multimedia_api.tbz2 -C / && rm jetson_multimedia_api.tbz2

wget -q https://github.com/AndBobsYourUncle/jetson-ffmpeg/archive/9c17b09.zip -O jetson-ffmpeg.zip
unzip jetson-ffmpeg.zip && rm jetson-ffmpeg.zip && mv jetson-ffmpeg-* jetson-ffmpeg && cd jetson-ffmpeg
@@ -6,23 +6,23 @@ mkdir -p /trt-wheels

if [[ "${TARGETARCH}" == "arm64" ]]; then

  # NVIDIA supplies python-tensorrt for python3.10, but frigate uses python3.11,
  # NVIDIA supplies python-tensorrt for python3.8, but frigate uses python3.9,
  # so we must build python-tensorrt ourselves.

  # Get python-tensorrt source
  mkdir -p /workspace
  mkdir /workspace
  cd /workspace
  git clone -b release/8.6 https://github.com/NVIDIA/TensorRT.git --depth=1
  git clone -b ${TENSORRT_VER} https://github.com/NVIDIA/TensorRT.git --depth=1

  # Collect dependencies
  EXT_PATH=/workspace/external && mkdir -p $EXT_PATH
  pip3 install pybind11 && ln -s /usr/local/lib/python3.11/dist-packages/pybind11 $EXT_PATH/pybind11
  ln -s /usr/include/python3.11 $EXT_PATH/python3.11
  pip3 install pybind11 && ln -s /usr/local/lib/python3.9/dist-packages/pybind11 $EXT_PATH/pybind11
  ln -s /usr/include/python3.9 $EXT_PATH/python3.9
  ln -s /usr/include/aarch64-linux-gnu/NvOnnxParser.h /workspace/TensorRT/parsers/onnx/

  # Build wheel
  cd /workspace/TensorRT/python
  EXT_PATH=$EXT_PATH PYTHON_MAJOR_VERSION=3 PYTHON_MINOR_VERSION=11 TARGET_ARCHITECTURE=aarch64 TENSORRT_MODULE=tensorrt /bin/bash ./build.sh
  mv build/bindings_wheel/dist/*.whl /trt-wheels/
  EXT_PATH=$EXT_PATH PYTHON_MAJOR_VERSION=3 PYTHON_MINOR_VERSION=9 TARGET_ARCHITECTURE=aarch64 /bin/bash ./build.sh
  mv build/dist/*.whl /trt-wheels/

fi
@@ -1,8 +1,6 @@
/usr/local/lib
/usr/local/cuda
/usr/local/lib/python3.11/dist-packages/nvidia/cudnn/lib
/usr/local/lib/python3.11/dist-packages/nvidia/cuda_runtime/lib
/usr/local/lib/python3.11/dist-packages/nvidia/cublas/lib
/usr/local/lib/python3.11/dist-packages/nvidia/cuda_nvrtc/lib
/usr/local/lib/python3.11/dist-packages/tensorrt
/usr/local/lib/python3.11/dist-packages/nvidia/cufft/lib
/usr/local/lib/python3.9/dist-packages/nvidia/cudnn/lib
/usr/local/lib/python3.9/dist-packages/nvidia/cuda_runtime/lib
/usr/local/lib/python3.9/dist-packages/nvidia/cublas/lib
/usr/local/lib/python3.9/dist-packages/nvidia/cuda_nvrtc/lib
/usr/local/lib/python3.9/dist-packages/tensorrt
@@ -11,7 +11,6 @@ set -o errexit -o nounset -o pipefail
MODEL_CACHE_DIR=${MODEL_CACHE_DIR:-"/config/model_cache/tensorrt"}
TRT_VER=${TRT_VER:-$(cat /etc/TENSORRT_VER)}
OUTPUT_FOLDER="${MODEL_CACHE_DIR}/${TRT_VER}"
YOLO_MODELS=${YOLO_MODELS:-""}

# Create output folder
mkdir -p ${OUTPUT_FOLDER}
@@ -20,11 +19,6 @@ FIRST_MODEL=true
MODEL_DOWNLOAD=""
MODEL_CONVERT=""

if [ -z "$YOLO_MODELS" ]; then
  echo "tensorrt model preparation disabled"
  exit 0
fi

for model in ${YOLO_MODELS//,/ }
do
  # Remove old link in case path/version changed
@@ -64,7 +58,7 @@ fi
# order to run libyolo here.
# On Jetpack 5.0, these libraries are not mounted by the runtime and are supplied by the image.
if [[ "$(arch)" == "aarch64" ]]; then
  if [[ ! -e /usr/lib/aarch64-linux-gnu/tegra && ! -e /usr/lib/aarch64-linux-gnu/tegra-egl ]]; then
  if [[ ! -e /usr/lib/aarch64-linux-gnu/tegra ]]; then
    echo "ERROR: Container must be launched with nvidia runtime"
    exit 1
  elif [[ ! -e /usr/lib/aarch64-linux-gnu/libnvinfer.so.8 ||
@@ -1,17 +1,12 @@
# NVidia TensorRT Support (amd64 only)
--extra-index-url 'https://pypi.nvidia.com'
numpy < 1.24; platform_machine == 'x86_64'
tensorrt == 8.6.1; platform_machine == 'x86_64'
tensorrt_bindings == 8.6.1; platform_machine == 'x86_64'
cuda-python == 11.8.*; platform_machine == 'x86_64'
cython == 3.0.*; platform_machine == 'x86_64'
tensorrt == 8.5.3.*; platform_machine == 'x86_64'
cuda-python == 11.8; platform_machine == 'x86_64'
cython == 0.29.*; platform_machine == 'x86_64'
nvidia-cuda-runtime-cu12 == 12.1.*; platform_machine == 'x86_64'
nvidia-cuda-runtime-cu11 == 11.8.*; platform_machine == 'x86_64'
nvidia-cublas-cu11 == 11.11.3.6; platform_machine == 'x86_64'
nvidia-cudnn-cu11 == 8.6.0.*; platform_machine == 'x86_64'
nvidia-cudnn-cu12 == 9.5.0.*; platform_machine == 'x86_64'
nvidia-cufft-cu11==10.*; platform_machine == 'x86_64'
nvidia-cufft-cu12==11.*; platform_machine == 'x86_64'
onnx==1.16.*; platform_machine == 'x86_64'
onnxruntime-gpu==1.20.*; platform_machine == 'x86_64'
protobuf==3.20.3; platform_machine == 'x86_64'
onnx==1.14.0; platform_machine == 'x86_64'
protobuf==3.20.3; platform_machine == 'x86_64'
@@ -1 +1 @@
cuda-python == 12.6.*; platform_machine == 'aarch64'
cuda-python == 11.7; platform_machine == 'aarch64'
@@ -13,29 +13,13 @@ variable "TRT_BASE" {
variable "COMPUTE_LEVEL" {
  default = ""
}
variable "BASE_HOOK" {
  # Ensure an up-to-date python 3.11 is available in jetson images
  default = <<EOT
if grep -iq \"ubuntu\" /etc/os-release; then
  . /etc/os-release

  # Add the deadsnakes PPA repository
  echo "deb https://ppa.launchpadcontent.net/deadsnakes/ppa/ubuntu $VERSION_CODENAME main" >> /etc/apt/sources.list.d/deadsnakes.list
  echo "deb-src https://ppa.launchpadcontent.net/deadsnakes/ppa/ubuntu $VERSION_CODENAME main" >> /etc/apt/sources.list.d/deadsnakes.list

  # Add deadsnakes signing key
  apt-key adv --keyserver keyserver.ubuntu.com --recv-keys F23C5A6CF475977595C89F51BA6932366A755776
fi
EOT
}

target "_build_args" {
  args = {
    BASE_IMAGE = BASE_IMAGE,
    SLIM_BASE = SLIM_BASE,
    TRT_BASE = TRT_BASE,
    COMPUTE_LEVEL = COMPUTE_LEVEL,
    BASE_HOOK = BASE_HOOK
    COMPUTE_LEVEL = COMPUTE_LEVEL
  }
  platforms = ["linux/${ARCH}"]
}
@@ -95,6 +79,7 @@ target "tensorrt" {
    wget = "target:wget",
    tensorrt-base = "target:tensorrt-base",
    rootfs = "target:rootfs"
    wheels = "target:wheels"
  }
  target = "frigate-tensorrt"
  inherits = ["_build_args"]
@@ -1,41 +1,26 @@
BOARDS += trt

JETPACK4_BASE ?= timongentzsch/l4t-ubuntu20-opencv:latest # L4T 32.7.1 JetPack 4.6.1
JETPACK5_BASE ?= nvcr.io/nvidia/l4t-tensorrt:r8.5.2-runtime # L4T 35.3.1 JetPack 5.1.1
JETPACK6_BASE ?= nvcr.io/nvidia/tensorrt:23.12-py3-igpu
X86_DGPU_ARGS := ARCH=amd64 COMPUTE_LEVEL="50 60 70 80 90"
JETPACK4_ARGS := ARCH=arm64 BASE_IMAGE=$(JETPACK4_BASE) SLIM_BASE=$(JETPACK4_BASE) TRT_BASE=$(JETPACK4_BASE)
JETPACK5_ARGS := ARCH=arm64 BASE_IMAGE=$(JETPACK5_BASE) SLIM_BASE=$(JETPACK5_BASE) TRT_BASE=$(JETPACK5_BASE)
JETPACK6_ARGS := ARCH=arm64 BASE_IMAGE=$(JETPACK6_BASE) SLIM_BASE=$(JETPACK6_BASE) TRT_BASE=$(JETPACK6_BASE)

local-trt: version
$(X86_DGPU_ARGS) docker buildx bake --file=docker/tensorrt/trt.hcl tensorrt \
--set tensorrt.tags=frigate:latest-tensorrt \
--load
$(X86_DGPU_ARGS) docker buildx bake --load --file=docker/tensorrt/trt.hcl --set tensorrt.tags=frigate:latest-tensorrt tensorrt

local-trt-jp4: version
$(JETPACK4_ARGS) docker buildx bake --load --file=docker/tensorrt/trt.hcl --set tensorrt.tags=frigate:latest-tensorrt-jp4 tensorrt

local-trt-jp5: version
$(JETPACK5_ARGS) docker buildx bake --file=docker/tensorrt/trt.hcl tensorrt \
--set tensorrt.tags=frigate:latest-tensorrt-jp5 \
--load

local-trt-jp6: version
$(JETPACK6_ARGS) docker buildx bake --file=docker/tensorrt/trt.hcl tensorrt \
--set tensorrt.tags=frigate:latest-tensorrt-jp6 \
--load
$(JETPACK5_ARGS) docker buildx bake --load --file=docker/tensorrt/trt.hcl --set tensorrt.tags=frigate:latest-tensorrt-jp5 tensorrt

build-trt:
$(X86_DGPU_ARGS) docker buildx bake --file=docker/tensorrt/trt.hcl tensorrt \
--set tensorrt.tags=$(IMAGE_REPO):${GITHUB_REF_NAME}-$(COMMIT_HASH)-tensorrt
$(JETPACK5_ARGS) docker buildx bake --file=docker/tensorrt/trt.hcl tensorrt \
--set tensorrt.tags=$(IMAGE_REPO):${GITHUB_REF_NAME}-$(COMMIT_HASH)-tensorrt-jp5
$(JETPACK6_ARGS) docker buildx bake --file=docker/tensorrt/trt.hcl tensorrt \
--set tensorrt.tags=$(IMAGE_REPO):${GITHUB_REF_NAME}-$(COMMIT_HASH)-tensorrt-jp6
$(X86_DGPU_ARGS) docker buildx bake --file=docker/tensorrt/trt.hcl --set tensorrt.tags=$(IMAGE_REPO):${GITHUB_REF_NAME}-$(COMMIT_HASH)-tensorrt tensorrt
$(JETPACK4_ARGS) docker buildx bake --file=docker/tensorrt/trt.hcl --set tensorrt.tags=$(IMAGE_REPO):${GITHUB_REF_NAME}-$(COMMIT_HASH)-tensorrt-jp4 tensorrt
$(JETPACK5_ARGS) docker buildx bake --file=docker/tensorrt/trt.hcl --set tensorrt.tags=$(IMAGE_REPO):${GITHUB_REF_NAME}-$(COMMIT_HASH)-tensorrt-jp5 tensorrt

push-trt: build-trt
$(X86_DGPU_ARGS) docker buildx bake --file=docker/tensorrt/trt.hcl tensorrt \
--set tensorrt.tags=$(IMAGE_REPO):${GITHUB_REF_NAME}-$(COMMIT_HASH)-tensorrt \
--push
$(JETPACK5_ARGS) docker buildx bake --file=docker/tensorrt/trt.hcl tensorrt \
--set tensorrt.tags=$(IMAGE_REPO):${GITHUB_REF_NAME}-$(COMMIT_HASH)-tensorrt-jp5 \
--push
$(JETPACK6_ARGS) docker buildx bake --file=docker/tensorrt/trt.hcl tensorrt \
--set tensorrt.tags=$(IMAGE_REPO):${GITHUB_REF_NAME}-$(COMMIT_HASH)-tensorrt-jp6 \
--push
$(X86_DGPU_ARGS) docker buildx bake --push --file=docker/tensorrt/trt.hcl --set tensorrt.tags=$(IMAGE_REPO):${GITHUB_REF_NAME}-$(COMMIT_HASH)-tensorrt tensorrt
$(JETPACK4_ARGS) docker buildx bake --push --file=docker/tensorrt/trt.hcl --set tensorrt.tags=$(IMAGE_REPO):${GITHUB_REF_NAME}-$(COMMIT_HASH)-tensorrt-jp4 tensorrt
$(JETPACK5_ARGS) docker buildx bake --push --file=docker/tensorrt/trt.hcl --set tensorrt.tags=$(IMAGE_REPO):${GITHUB_REF_NAME}-$(COMMIT_HASH)-tensorrt-jp5 tensorrt
@@ -1,10 +1,5 @@
# Website

This website is built using [Docusaurus 3.5](https://docusaurus.io/docs), a modern static website generator.
This website is built using [Docusaurus 2](https://v2.docusaurus.io/), a modern static website generator.

For installation and contributing instructions, please follow the [Contributing Docs](https://docs.frigate.video/development/contributing).

# Development

1. Run `npm i` to install dependencies
2. Run `npm run start` to start the website
@@ -32,12 +32,12 @@ Examples of available modules are:

#### Go2RTC Logging

See [the go2rtc docs](https://github.com/AlexxIT/go2rtc?tab=readme-ov-file#module-log) for logging configuration
See [the go2rtc docs](for logging configuration)

```yaml
go2rtc:
  streams:
    # ...
    ...
  log:
    exec: trace
```
@@ -55,7 +55,7 @@ environment_vars:

### `database`

Tracked object and recording information is managed in a sqlite database at `/config/frigate.db`. If that database is deleted, recordings will be orphaned and will need to be cleaned up manually. They also won't show up in the Media Browser within Home Assistant.
Event and recording information is managed in a sqlite database at `/config/frigate.db`. If that database is deleted, recordings will be orphaned and will need to be cleaned up manually. They also won't show up in the Media Browser within Home Assistant.

If you are storing your database on a network share (SMB, NFS, etc), you may get a `database is locked` error message on startup. You can customize the location of the database in the config if necessary.
@@ -176,21 +176,23 @@ listen [::]:5000 ipv6only=off;

### Custom ffmpeg build

Included with Frigate is a build of ffmpeg that works for the vast majority of users. However, some hardware setups have incompatibilities with the included build. In this case, statically built `ffmpeg` and `ffprobe` binaries can be placed in `/config/custom-ffmpeg/bin` for Frigate to use.
Included with Frigate is a build of ffmpeg that works for the vast majority of users. However, some hardware setups have incompatibilities with the included build. In this case, a docker volume mapping can be used to overwrite the included ffmpeg build with an ffmpeg build that works for your specific hardware setup.

To do this:

1. Download your ffmpeg build and uncompress it to the `/config/custom-ffmpeg` folder. Verify that both the `ffmpeg` and `ffprobe` binaries are located in `/config/custom-ffmpeg/bin`.
2. Update the `ffmpeg.path` in your Frigate config to `/config/custom-ffmpeg`.
3. Restart Frigate and the custom version will be used if the steps above were done correctly.
1. Download your ffmpeg build and uncompress to a folder on the host (let's use `/home/appdata/frigate/custom-ffmpeg` for this example).
2. Update your docker-compose or docker CLI to include `'/home/appdata/frigate/custom-ffmpeg':'/usr/lib/btbn-ffmpeg':'ro'` in the volume mappings (see the sketch below).
3. Restart Frigate and the custom version will be used if the mapping was done correctly.

NOTE: The folder that is mapped from the host needs to be the folder that contains `/bin`. So if the full structure is `/home/appdata/frigate/custom-ffmpeg/bin/ffmpeg` then `/home/appdata/frigate/custom-ffmpeg` needs to be mapped to `/usr/lib/btbn-ffmpeg`.
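A docker CLI sketch of the volume-mapping variant (the `/path/to/config` mount is a placeholder, and all unrelated flags are omitted):

```bash
# Overwrite the bundled ffmpeg with a custom build via a read-only bind mount (sketch).
docker run -d --name frigate \
  -v /home/appdata/frigate/custom-ffmpeg:/usr/lib/btbn-ffmpeg:ro \
  -v /path/to/config:/config \
  ghcr.io/blakeblackshear/frigate:stable
```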

### Custom go2rtc version

Frigate currently includes go2rtc v1.9.2; there may be certain cases where you want to run a different version of go2rtc.
Frigate currently includes go2rtc v1.9.4; there may be certain cases where you want to run a different version of go2rtc.

To do this (a command-line sketch follows the list):

1. Download the go2rtc build to the `/config` folder.
1. Download the go2rtc build to the /config folder.
2. Rename the build to `go2rtc`.
3. Give `go2rtc` execute permission.
4. Restart Frigate and the custom version will be used; you can verify by checking go2rtc logs.
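A minimal sketch of steps 1-3, assuming an amd64 host and go2rtc v1.9.2 (the release URL and asset name are assumptions; pick the binary matching your platform from the go2rtc releases page):

```bash
# Download a go2rtc binary into /config, name it "go2rtc", and make it executable (sketch).
wget -O /config/go2rtc \
  https://github.com/AlexxIT/go2rtc/releases/download/v1.9.2/go2rtc_linux_amd64
chmod +x /config/go2rtc
# Restart Frigate afterwards and verify the version in the go2rtc logs.
```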

@@ -201,16 +203,16 @@ When frigate starts up, it checks whether your config file is valid, and if it i

### Via API

Frigate can accept a new configuration file as JSON at the `/api/config/save` endpoint. When updating the config this way, Frigate will validate the config before saving it, and return a `400` if the config is not valid.
Frigate can accept a new configuration file as JSON at the `/config/save` endpoint. When updating the config this way, Frigate will validate the config before saving it, and return a `400` if the config is not valid.

```bash
curl -X POST http://frigate_host:5000/api/config/save -d @config.json
curl -X POST http://frigate_host:5000/config/save -d @config.json
```

If you'd like, you can use your yaml config directly by using [`yq`](https://github.com/mikefarah/yq) to convert it to json:

```bash
yq r -j config.yml | curl -X POST http://frigate_host:5000/api/config/save -d @-
yq r -j config.yml | curl -X POST http://frigate_host:5000/config/save -d @-
```

### Via Command Line
@@ -223,5 +225,5 @@ docker run \
--entrypoint python3 \
ghcr.io/blakeblackshear/frigate:stable \
-u -m frigate \
--validate-config
--validate_config
```

@@ -31,7 +31,7 @@ auth:

## Login failure rate limiting

In order to limit the risk of brute force attacks, rate limiting is available for login failures. This is implemented with SlowApi, and the string notation for valid values is available in [the documentation](https://limits.readthedocs.io/en/stable/quickstart.html#examples).
In order to limit the risk of brute force attacks, rate limiting is available for login failures. This is implemented with Flask-Limiter, and the string notation for valid values is available in [the documentation](https://flask-limiter.readthedocs.io/en/stable/configuration.html#rate-limit-string-notation).

For example, `1/second;5/minute;20/hour` will rate limit the login endpoint when failures occur more than:
@@ -97,35 +97,15 @@ python3 -c 'import secrets; print(secrets.token_hex(64))'

### Header mapping

If you have disabled Frigate's authentication and your proxy supports passing a header with authenticated usernames and/or roles, you can use the `header_map` config to specify the header name so it is passed to Frigate. For example, the following will map the `X-Forwarded-User` and `X-Forwarded-Role` values. Header names are not case sensitive.
If you have disabled Frigate's authentication and your proxy supports passing a header with the authenticated username, you can use the `header_map` config to specify the header name so it is passed to Frigate. For example, the following will map the `X-Forwarded-User` value. Header names are not case sensitive.

```yaml
proxy:
  ...
  header_map:
    user: x-forwarded-user
    role: x-forwarded-role
```

Frigate supports both `admin` and `viewer` roles (see below). When using port `8971`, Frigate validates these headers and subsequent requests use the headers `remote-user` and `remote-role` for authorization.

#### Port Considerations

**Authenticated Port (8971)**

- Header mapping is **fully supported**.
- The `remote-role` header determines the user’s privileges:
  - **admin** → Full access (user management, configuration changes).
  - **viewer** → Read-only access.
- Ensure your **proxy sends both user and role headers** for proper role enforcement.

**Unauthenticated Port (5000)**

- Headers are **ignored** for role enforcement.
- All requests are treated as **anonymous**.
- The `remote-role` value is **overridden** to **admin-level access**.
- This design ensures **unauthenticated internal use** within a trusted network.

Note that only the following headers are permitted by default:

```
@@ -146,6 +126,8 @@ X-authentik-uid

If you would like to add more options, you can overwrite the default file with a docker bind mount at `/usr/local/nginx/conf/proxy_trusted_headers.conf`. Reference the source code for the default file formatting.
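A sketch of that bind mount on the docker CLI (the host path is a placeholder):

```bash
# Replace the default trusted-headers list with your own file (sketch).
docker run -d --name frigate \
  -v /path/to/proxy_trusted_headers.conf:/usr/local/nginx/conf/proxy_trusted_headers.conf:ro \
  ghcr.io/blakeblackshear/frigate:stable
```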

Future versions of Frigate may leverage group and role headers for authorization in Frigate as well.

### Login page redirection

Frigate gracefully performs login page redirection that should work with most authentication proxies. If your reverse proxy returns a `Location` header on `401`, `302`, or `307` unauthorized responses, Frigate's frontend will automatically detect it and redirect to that URL.

@@ -153,31 +135,3 @@ Frigate gracefully performs login page redirection that should work with most au

### Custom logout url

If your reverse proxy has a dedicated logout url, you can specify it using the `logout_url` config option. This will update the `Logout` link in the UI.

## User Roles

Frigate supports user roles to control access to certain features in the UI and API, such as managing users or modifying configuration settings. Roles are assigned to users in the database or through proxy headers and are enforced when accessing the UI or API through the authenticated port (`8971`).

### Supported Roles

- **admin**: Full access to all features, including user management and configuration.
- **viewer**: Read-only access to the UI and API, including viewing cameras, review items, and historical footage. Configuration editor and settings in the UI are inaccessible.

### Role Enforcement

When using the authenticated port (`8971`), roles are validated via the JWT token or proxy headers (e.g., `remote-role`).

On the internal **unauthenticated** port (`5000`), roles are **not enforced**. All requests are treated as **anonymous**, granting access equivalent to the **admin** role without restrictions.

To use role-based access control, you must connect to Frigate via the **authenticated port (`8971`)** directly or through a reverse proxy.

### Role Visibility in the UI

- When logged in via port `8971`, your **username and role** are displayed in the **account menu** (bottom corner).
- When using port `5000`, the UI will always display "anonymous" for the username and "admin" for the role.

### Managing User Roles

1. Log in as an **admin** user via port `8971`.
2. Navigate to **Settings > Users**.
3. Edit a user’s role by selecting **admin** or **viewer**.
@@ -41,7 +41,6 @@ cameras:
    ...
    onvif:
      # Required: host of the camera being connected to.
      # NOTE: HTTP is assumed by default; HTTPS is supported if you specify the scheme, ex: "https://0.0.0.0".
      host: 0.0.0.0
      # Optional: ONVIF port for device (default: shown below).
      port: 8000
@@ -50,8 +49,6 @@ cameras:
      user: admin
      # Optional: password for login.
      password: admin
      # Optional: Skip TLS verification from the ONVIF server (default: shown below)
      tls_insecure: False
      # Optional: PTZ camera object autotracking. Keeps a moving object in
      # the center of the frame by automatically moving the PTZ camera.
      autotracking:
@@ -167,7 +164,3 @@ To maintain object tracking during PTZ moves, Frigate tracks the motion of your
### Calibration seems to have completed, but the camera is not actually moving to track my object. Why?

Some cameras have firmware that reports that FOV RelativeMove, the ONVIF command that Frigate uses for autotracking, is supported. However, if the camera does not pan or tilt when an object comes into the required zone, your camera's firmware does not actually support FOV RelativeMove. One such camera is the Uniview IPC672LR-AX4DUPK. It actually moves its zoom motor instead of panning and tilting and does not follow the ONVIF standard whatsoever.

### Frigate reports an error saying that calibration has failed. Why?

Calibration measures the amount of time it takes for Frigate to make a series of movements with your PTZ. This error message is recorded in the log if these values are too high for Frigate to support calibrated autotracking. This is often the case when your camera's motor or network connection is too slow or your camera's firmware doesn't report the motor status in a timely manner. You can try running without calibration (just remove the `movement_weights` line from your config and restart), but if calibration fails, this often means that autotracking will behave unpredictably.
@@ -22,7 +22,7 @@ Note that mjpeg cameras require encoding the video into h264 for recording, and
```yaml
go2rtc:
  streams:
    mjpeg_cam: "ffmpeg:http://your_mjpeg_stream_url#video=h264#hardware" # <- use hardware acceleration to create an h264 stream usable for other components.
    mjpeg_cam: "ffmpeg:{your_mjpeg_stream_url}#video=h264#hardware" # <- use hardware acceleration to create an h264 stream usable for other components.

cameras:
  ...
@@ -65,32 +65,19 @@ ffmpeg:

## Model/vendor specific setup

### Amcrest & Dahua

Amcrest & Dahua cameras should be connected to via RTSP using the following format:

```
rtsp://USERNAME:PASSWORD@CAMERA-IP/cam/realmonitor?channel=1&subtype=0 # this is the main stream
rtsp://USERNAME:PASSWORD@CAMERA-IP/cam/realmonitor?channel=1&subtype=1 # this is the sub stream, typically supporting low resolutions only
rtsp://USERNAME:PASSWORD@CAMERA-IP/cam/realmonitor?channel=1&subtype=2 # higher end cameras support a third stream with a mid resolution (1280x720, 1920x1080)
rtsp://USERNAME:PASSWORD@CAMERA-IP/cam/realmonitor?channel=1&subtype=3 # new higher end cameras support a fourth stream with another mid resolution (1280x720, 1920x1080)

```

### Annke C800

This camera is H.265 only. To be able to play clips on some devices (like macOS or iPhone) the H.265 stream has to be adjusted using the `apple_compatibility` config.
This camera is H.265 only. To be able to play clips on some devices (like macOS or iPhone) the H.265 stream has to be repackaged and the audio stream has to be converted to aac. Unfortunately direct playback in the browser is not working (yet), but the downloaded clip can be played locally.

```yaml
cameras:
  annkec800: # <------ Name the camera
    ffmpeg:
      apple_compatibility: true # <- Adds compatibility with macOS and iPhone
      output_args:
        record: preset-record-generic-audio-aac
        record: -f segment -segment_time 10 -segment_format mp4 -reset_timestamps 1 -strftime 1 -c:v copy -tag:v hvc1 -bsf:v hevc_mp4toannexb -c:a aac

      inputs:
        - path: rtsp://USERNAME:PASSWORD@CAMERA-IP/H264/ch1/main/av_stream # <----- Update for your camera
        - path: rtsp://user:password@camera-ip:554/H264/ch1/main/av_stream # <----- Update for your camera
          roles:
            - detect
            - record
@@ -108,29 +95,6 @@ ffmpeg:
  input_args: preset-rtsp-blue-iris
```

### Hikvision Cameras

Hikvision cameras should be connected to via RTSP using the following format:

```
rtsp://USERNAME:PASSWORD@CAMERA-IP/streaming/channels/101 # this is the main stream
rtsp://USERNAME:PASSWORD@CAMERA-IP/streaming/channels/102 # this is the sub stream, typically supporting low resolutions only
rtsp://USERNAME:PASSWORD@CAMERA-IP/streaming/channels/103 # higher end cameras support a third stream with a mid resolution (1280x720, 1920x1080)
```

:::note

[Some users have reported](https://www.reddit.com/r/frigate_nvr/comments/1hg4ze7/hikvision_security_settings) that newer Hikvision cameras require adjustments to the security settings:

```
RTSP Authentication - digest/basic
RTSP Digest Algorithm - MD5
WEB Authentication - digest/basic
WEB Digest Algorithm - MD5
```

:::

### Reolink Cameras

Reolink has older cameras (ex: 410 & 520) as well as newer cameras (ex: 520a & 511wa) which support different subsets of options. In both cases using the http stream is recommended.
@@ -192,9 +156,7 @@ cameras:

#### Reolink Doorbell

The reolink doorbell supports two way audio via go2rtc and other applications. It is important that the http-flv stream is still used for stability; a secondary rtsp stream can be added that will be used for the two way audio only.

Ensure HTTP is enabled in the camera's advanced network settings. To use two way talk with Frigate, see the [Live view documentation](/configuration/live#two-way-talk).
The reolink doorbell supports 2-way audio via go2rtc and other applications. It is important that the http-flv stream is still used for stability; a secondary rtsp stream can be added that will be used for the two way audio only.

```yaml
go2rtc:
@@ -219,7 +181,7 @@ go2rtc:
      - rtspx://192.168.1.1:7441/abcdefghijk
```

[See the go2rtc docs for more information](https://github.com/AlexxIT/go2rtc/tree/v1.9.2#source-rtsp)
[See the go2rtc docs for more information](https://github.com/AlexxIT/go2rtc/tree/v1.9.4#source-rtsp)

In the Unifi 2.0 update Unifi Protect Cameras had a change in audio sample rate which causes issues for ffmpeg. The input rate needs to be set for record if used directly with unifi protect.
@@ -231,4 +193,4 @@ ffmpeg:

### TP-Link VIGI Cameras

TP-Link VIGI cameras need some adjustments to the main stream settings on the camera itself to avoid issues. The stream needs to be configured as `H264` with `Smart Coding` set to `off`. Without these settings you may have problems when trying to watch recorded footage. For example Firefox will stop playback after a few seconds and show the following error message: `The media playback was aborted due to a corruption problem or because the media used features your browser did not support.`.
TP-Link VIGI cameras need some adjustments to the main stream settings on the camera itself to avoid issues. The stream needs to be configured as `H264` with `Smart Coding` set to `off`. Without these settings you may have problems when trying to watch recorded events. For example Firefox will stop playback after a few seconds and show the following error message: `The media playback was aborted due to a corruption problem or because the media used features your browser did not support.`.
@@ -7,7 +7,7 @@ title: Camera Configuration

Several inputs can be configured for each camera and the role of each input can be mixed and matched based on your needs. This allows you to use a lower resolution stream for object detection, but create recordings from a higher resolution stream, or vice versa.

A camera is enabled by default but can be disabled by using `enabled: False`. Cameras that are disabled through the configuration file will not appear in the Frigate UI and will not consume system resources.
A camera is enabled by default but can be temporarily disabled by using `enabled: False`. Existing events and recordings can still be accessed, but live streams, recording, and detection will not work while the camera is disabled. Camera-specific configurations will still be used.

Each role can only be assigned to one input per camera. The options for roles are as follows:
@@ -109,7 +109,7 @@ This list of working and non-working PTZ cameras is based on user feedback.
| Reolink E1 Zoom | ✅ | ❌ | |
| Reolink RLC-823A 16x | ✅ | ❌ | |
| Speco O8P32X | ✅ | ❌ | |
| Sunba 405-D20X | ✅ | ❌ | Incomplete ONVIF support reported on original and 4K models. All models are suspected incompatible. |
| Sunba 405-D20X | ✅ | ❌ | |
| Tapo | ✅ | ❌ | Many models supported, ONVIF Service Port: 2020 |
| Uniview IPC672LR-AX4DUPK | ✅ | ❌ | Firmware says FOV relative movement is supported, but camera doesn't actually move when sending ONVIF commands |
| Uniview IPC6612SR-X33-VG | ✅ | ✅ | Leave `calibrate_on_startup` as `False`. A user has reported that zooming with `absolute` is working. |
@@ -1,56 +0,0 @@
---
id: face_recognition
title: Face Recognition
---

Face recognition allows people to be assigned names, and when their face is recognized Frigate will assign the person's name as a sub label. This information is included in the UI, filters, as well as in notifications.

Frigate has support for CV2 Local Binary Pattern Face Recognizer to recognize faces, which runs locally. A lightweight face landmark detection model is also used to align faces before running them through the face recognizer.

## Configuration

Face recognition is disabled by default; it must be enabled in your config file before it can be used. Face recognition is a global configuration setting.

```yaml
face_recognition:
  enabled: true
```

## Dataset

The number of images needed for a sufficient training set for face recognition varies depending on several factors:

- Diversity of the dataset: A dataset with diverse images, including variations in lighting, pose, and facial expressions, will require fewer images per person than a less diverse dataset.
- Desired accuracy: The higher the desired accuracy, the more images are typically needed.

However, here are some general guidelines:

- Minimum: For basic face recognition tasks, a minimum of 10-20 images per person is often recommended.
- Recommended: For more robust and accurate systems, 30-50 images per person is a good starting point.
- Ideal: For optimal performance, especially in challenging conditions, 100 or more images per person can be beneficial.

## Creating a Robust Training Set

The accuracy of face recognition is heavily dependent on the quality of data given to it for training. It is recommended to build the face training library in phases.

:::tip

When choosing images to include in the face training set it is recommended to always follow these recommendations:

- If it is difficult to make out details in a person's face, it will not be helpful in training.
- Avoid images with under/over-exposure.
- Avoid blurry / pixelated images.
- Be careful when uploading images of people when they are wearing clothing that covers a lot of their face, as this may confuse the training.
- Do not upload too many images at the same time; it is recommended to train 4-6 images for each person each day so it is easier to know if the previously added images helped or hurt performance.

:::

### Step 1 - Building a Strong Foundation

When first enabling face recognition it is important to build a foundation of strong images. It is recommended to start by uploading 1-2 photos taken by a smartphone for each person. It is important that the person's face in the photo is straight-on and not turned, which will ensure a good starting point.

Then it is recommended to use the `Face Library` tab in Frigate to select and train images for each person as they are detected. When building a strong foundation it is strongly recommended to only train on images that are straight-on. Ignore images from cameras that recognize faces from an angle. Once a person starts to be consistently recognized correctly on images that are straight-on, it is time to move on to the next step.

### Step 2 - Expanding The Dataset

Once straight-on images are performing well, start choosing slightly off-angle images to include for training. It is important to still choose images where enough face detail is visible to recognize someone.