shad0wca7 closed this issue 8 months ago
model:
  path: /trt-models/yolov7-tiny-416.trt
  input_tensor: nchw
  input_pixel_format: rgb
  width: 416
  height: 416
That is the wrong path, please check the docs.
Can you also provide the Unraid docker CLI command for Frigate, please?
I've updated the path so it's:
path: /config/model_cache/tensorrt/yolov7-tiny-416.trt
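For clarity, the full model block now looks like this (same values as before, only the path changed to the 0.13 model_cache location mentioned in the docs):

model:
  path: /config/model_cache/tensorrt/yolov7-tiny-416.trt
  input_tensor: nchw
  input_pixel_format: rgb
  width: 416
  height: 416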
I'm using the Unraid GUI (it wasn't an option when I created this ticket), which stores the Docker settings in XML format, here:
<?xml version="1.0"?>
<Container version="2">
<Name>frigate</Name>
<Repository>ghcr.io/blakeblackshear/frigate:stable-tensorrt</Repository>
<Registry/>
<Network>br0.10</Network>
<MyIP>192.168.111.59</MyIP>
<Shell>sh</Shell>
<Privileged>true</Privileged>
<Support/>
<Project/>
<Overview>A complete and local NVR designed for Home Assistant with AI object detection. Uses OpenCV and Tensorflow to perform realtime object detection locally for IP cameras.

Use of a Google Coral Accelerator is optional, but highly recommended. The Coral will outperform even the best CPUs and can process 100+ FPS with very little overhead.

- Tight integration with Home Assistant via a custom component
- Designed to minimize resource use and maximize performance by only looking for objects when and where it is necessary
- Leverages multiprocessing heavily with an emphasis on realtime over processing every frame
- Uses a very low overhead motion detection to determine where to run object detection
- Object detection with TensorFlow runs in separate processes for maximum FPS
- Communicates over MQTT for easy integration into other systems
- Records video with retention settings based on detected objects
- 24/7 recording
- Re-streaming via RTMP to reduce the number of connections to your camera

A config.yml file must exist in the config directory.
See the documentation for more details.</Overview>
<Category>HomeAutomation: Security:</Category>
<WebUI>http://[IP]:[PORT:5000]</WebUI>
<TemplateURL/>
<Icon>https://raw.githubusercontent.com/yayitazale/unraid-templates/main/frigate.png</Icon>
<ExtraParams>--gpus=all --shm-size=256mb --mount type=tmpfs,target=/tmp/cache,tmpfs-size=1000000000</ExtraParams>
<PostArgs/>
<CPUset/>
<DateInstalled>1706898601</DateInstalled>
<DonateText/>
<DonateLink/>
<Requires>Note: If you are using a PCI Coral instead of a USB one, you must install first the needed drivers going to the CA APP and searching for Coral-Driver (thanks to @ich777)
&lt;br&gt;
&lt;br&gt;If you want to use a nvidia card to image decoding, you must add the &amp;quot;--gpus all&amp;quot; extra parameter. If you have multiple GPUs in your system with some allocated to VMs, you instead must add &amp;quot;--runtime=nvidia&amp;quot; as extra parameter and set the NVIDIA_DRIVER_CAPABILITIES and NVIDIA_VISIBLE_DEVICES variables to only give the container access to selected GPUs.</Requires>
<Config Name="Config Path" Target="/config" Default="/mnt/user/appdata/frigate" Mode="rw" Description="" Type="Path" Display="always" Required="true" Mask="false">/mnt/user/appdata/frigate/config/</Config>
<Config Name="Media path" Target="/media/frigate" Default="" Mode="rw" Description="" Type="Path" Display="always" Required="true" Mask="false">/mnt/user/NVR/frigate/</Config>
<Config Name="HTTP port" Target="5000" Default="" Mode="tcp" Description="" Type="Port" Display="always" Required="true" Mask="false">5000</Config>
<Config Name="RTMP port" Target="1935" Default="" Mode="tcp" Description="" Type="Port" Display="always" Required="true" Mask="false">1935</Config>
<Config Name="Frigate RTSP Password" Target="FRIGATE_RTSP_PASSWORD" Default="" Mode="" Description="" Type="Variable" Display="always" Required="true" Mask="false">XXXXX</Config>
<Config Name="NVIDIA_VISIBLE_DEVICES" Target="NVIDIA_VISIBLE_DEVICES" Default="" Mode="" Description="" Type="Variable" Display="always" Required="false" Mask="false">all</Config>
<Config Name="NVIDIA_DRIVER_CAPABILITIES" Target="NVIDIA_DRIVER_CAPABILITIES" Default="" Mode="" Description="" Type="Variable" Display="always" Required="false" Mask="false">compute,utility,video</Config>
<Config Name="trt-models" Target="/trt-models" Default="" Mode="rw" Description="" Type="Path" Display="always" Required="false" Mask="false">/mnt/user/appdata/frigate/trt-models/</Config>
<Config Name="Models" Target="YOLO_MODELS" Default="" Mode="" Description="" Type="Variable" Display="always" Required="false" Mask="false">yolov4-tiny-416 </Config>
<Config Name="Localtime" Target="/etc/localtime" Default="" Mode="rw" Description="" Type="Path" Display="advanced-hide" Required="true" Mask="false">/etc/localtime</Config>
</Container>
Log output is basically the same:
s6-rc: info: service s6rc-fdholder: starting
s6-rc: info: service s6rc-oneshot-runner: starting
s6-rc: info: service s6rc-oneshot-runner successfully started
s6-rc: info: service fix-attrs: starting
s6-rc: info: service s6rc-fdholder successfully started
s6-rc: info: service fix-attrs successfully started
s6-rc: info: service legacy-cont-init: starting
s6-rc: info: service legacy-cont-init successfully started
s6-rc: info: service trt-model-prepare: starting
s6-rc: info: service log-prepare: starting
s6-rc: info: service log-prepare successfully started
s6-rc: info: service nginx-log: starting
s6-rc: info: service go2rtc-log: starting
s6-rc: info: service frigate-log: starting
s6-rc: info: service nginx-log successfully started
s6-rc: info: service go2rtc-log successfully started
s6-rc: info: service go2rtc: starting
s6-rc: info: service frigate-log successfully started
s6-rc: info: service go2rtc successfully started
s6-rc: info: service go2rtc-healthcheck: starting
s6-rc: info: service go2rtc-healthcheck successfully started
s6-rc: warning: unable to start service trt-model-prepare: command exited 4
Generating the following TRT Models: yolov4-tiny-416
Downloading yolo weights
2024-02-02 12:46:32.983247526 [INFO] Preparing new go2rtc config...
2024-02-02 12:46:34.071112904 [INFO] Starting go2rtc...
2024-02-02 12:46:34.270627033 12:46:34.270 INF go2rtc version 1.8.4 linux/amd64
2024-02-02 12:46:34.271495664 12:46:34.271 INF [rtsp] listen addr=:8554
2024-02-02 12:46:34.271518142 12:46:34.271 INF [api] listen addr=:1984
2024-02-02 12:46:34.271886251 12:46:34.271 INF [webrtc] listen addr=:8555
2024-02-02 12:46:42.884791466 [INFO] Starting go2rtc healthcheck service...
When you hit the apply button in the Unraid GUI, it shows you the docker CLI command.
In any case, I think it will just take some time to start; it looks like it is downloading the weights, which can take a while.
Generating the following TRT Models: yolov4-tiny-416
Downloading yolo weights
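If you want to keep an eye on it, you can follow the container log while the weights download and check whether the generated model shows up in the model cache. The host path here is an assumption based on the /config mapping in the template above, so adjust it to your own mount:

docker logs -f frigate
ls -lh /mnt/user/appdata/frigate/config/model_cache/tensorrt/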
Ah I'm being stupid about that of course, here:
docker run
-d
--name='frigate'
--net='br0.10'
--ip='192.168.111.59'
--privileged=true
-e TZ="America/Chicago"
-e HOST_OS="Unraid"
-e HOST_HOSTNAME="PowerPig"
-e HOST_CONTAINERNAME="frigate"
-e 'TCP_PORT_5000'='5000'
-e 'TCP_PORT_1935'='1935'
-e 'FRIGATE_RTSP_PASSWORD'='XXXX'
-e 'NVIDIA_VISIBLE_DEVICES'='all'
-e 'NVIDIA_DRIVER_CAPABILITIES'='compute,utility,video'
-e 'YOLO_MODELS'='yolov4-tiny-416'
-l net.unraid.docker.managed=dockerman
-l net.unraid.docker.webui='http://[IP]:[PORT:5000]'
-l net.unraid.docker.icon='https://raw.githubusercontent.com/yayitazale/unraid-templates/main/frigate.png'
-v '/mnt/user/appdata/frigate/config/':'/config':'rw'
-v '/mnt/user/NVR/frigate/':'/media/frigate':'rw'
-v '/mnt/user/appdata/frigate/trt-models/':'/trt-models':'rw'
-v '/etc/localtime':'/etc/localtime':'rw'
--gpus=all
--shm-size=256mb
--mount type=tmpfs,target=/tmp/cache,tmpfs-size=1000000000 'ghcr.io/blakeblackshear/frigate:stable-tensorrt'
b6c218a0c0c7a24d3f8b8399802b47cec0315c4cafe4195367c39798dad1acaf
The command finished successfully!
You may want to regenerate your Frigate+ API key since it was included in that dump. I would let it run for 30 minutes and see if it finishes downloading the weights.
Oh man, I'm on a roll with messing things up 🤣 I'll follow your advice, thanks; hopefully it's a nothingburger.
Unfortunately it just seems to have stalled here completely; I've been letting it run until now and there's no change.
Maybe try running a different model
I've got exactly the same issue. It was working perfectly until I updated.
Hey, so for me it was the changes to the way TensorRT is handled in v13. I've got mine working now.
One thing I've noticed in your config: in your docker compose, under 'YOLO_MODELS' you've got v4, but in your config.yml you've got v7. Not sure if that might be the cause?
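For what it's worth, a minimal sketch of keeping the two in sync, using values that appear elsewhere in this thread (swap in whichever model you actually want):

# docker template / compose environment variable
YOLO_MODELS=yolov7-tiny-416

# config.yml
model:
  path: /config/model_cache/tensorrt/yolov7-tiny-416.trt
  width: 416
  height: 416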
What did you do to get it working?
I've changed mine to yolov7-640 (in both places) but I'm still having the same issue, stalling here:
s6-rc: warning: unable to start service trt-model-prepare: command exited 4
Generating the following TRT Models: yolov7-640
Downloading yolo weights
2024-02-03 10:43:02.995710209 [INFO] Preparing new go2rtc config...
2024-02-03 10:43:04.174293841 [INFO] Starting go2rtc...
2024-02-03 10:43:04.396931353 10:43:04.396 INF go2rtc version 1.8.4 linux/amd64
2024-02-03 10:43:04.397993336 10:43:04.397 INF [rtsp] listen addr=:8554
2024-02-03 10:43:04.398066038 10:43:04.397 INF [api] listen addr=:1984
2024-02-03 10:43:04.398575008 10:43:04.398 INF [webrtc] listen addr=:8555
2024-02-03 10:43:12.867245748 [INFO] Starting go2rtc healthcheck service...
I'm using Unraid, so what I ended up doing was deleting my existing image and deleting the model cache folder in the Frigate config folder, so all that was left was the config.yml file. I then re-added Frigate from the app store, as there are a couple of changes in the docker configuration, added yolov7 to the YOLO models, added --runtime=nvidia to the extra parameters, and changed the model path in config.yml.
I think that was everything?
Oh, I also updated the NVIDIA drivers.
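For reference, a rough sketch of that clean-up on the Unraid host. The model_cache path is an assumption based on the /mnt/user/appdata/frigate mapping used earlier in this thread, so adjust it to your own config mount:

# stop and remove the old container and image
docker stop frigate
docker rm frigate
docker rmi ghcr.io/blakeblackshear/frigate:stable-tensorrt

# clear the cached models so they get regenerated on the next start
# (assumes /config is mapped to /mnt/user/appdata/frigate/config)
rm -rf /mnt/user/appdata/frigate/config/model_cache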
I've just done all of that - also deleted the recordings and database - so the only thing left was the config.yml
Same behavior for me unfortunately..
Hmm. In your docker config, the variable -e 'NVIDIA_VISIBLE_DEVICES'='all': I believe this needs to be your card's UUID?
Hello, I have the same problem after updating Frigate to 13: the plugin does not start. I get the following in the log. Some help please.
They say that going back to version 12 of the plugin works, but I don't know how to go back since it only gives me the option of the latest version in HAOS. Can anyone tell me how to go back to version 12? I'm running HAOS on a mini PC and I don't have Docker. Thank you.
If you are running HA OS then you are not using TensorRT, which means you do not have the same problem as is being discussed here.
"all" works within Unraid (and worked on the 0.12.x versions of Frigate). Nonetheless I changed it to the specific UUID - no change in behavior.
Hmm, your docker compose looks different to mine, plus I can't see the --runtime=nvidia flag anywhere? Otherwise it might be best to delete your existing container and re-add it from the app store?
here is mine for reference: docker run -d --name='frigate' --net='bridge' -e TZ="Australia/Sydney" -e HOST_OS="Unraid" -e HOST_HOSTNAME="Tower" -e HOST_CONTAINERNAME="frigate" -e 'FRIGATE_RTSP_PASSWORD'='xxxxxxxx' -e 'LIBVA_DRIVER_NAME'='radeonsi' -e 'NVIDIA_VISIBLE_DEVICES'='GPU-6d96034c-cc43-7045-4031-xxxxxxxxxx' -e 'NVIDIA_DRIVER_CAPABILITIES'='compute,utility,video' -e 'YOLO_MODELS'='yolov4-416,yolov4-tiny-416,yolov7-tiny-416' -e 'USE_FP16'='false' -e 'TRT_MODEL_PREP_DEVICE'='0' -l net.unraid.docker.managed=dockerman -l net.unraid.docker.webui='http://[IP]:[PORT:5000]' -l net.unraid.docker.icon='https://raw.githubusercontent.com/yayitazale/unraid-templates/main/frigate.png' -p '5000:5000/tcp' -p '8554:8554/tcp' -p '8555:8555/tcp' -p '8555:8555/udp' -p '1984:1984/tcp' -v '/mnt/user/Config/Frigate':'/config':'rw' -v '/mnt/disks/WD-WXL2A209K7UU/':'/media/frigate':'rw' -v '/etc/localtime':'/etc/localtime':'rw' --device='/dev/dri/renderD128' --shm-size=256mb --mount type=tmpfs,target=/tmp/cache,tmpfs-size=1000000000 --restart unless-stopped --runtime=nvidia 'ghcr.io/blakeblackshear/frigate:stable-tensorrt'
740250c2212d60160f4e032ffeb989fd290ce97fe8733b690cf63cc309e71776
You don't need --runtime=nvidia; you need --gpus=all.
Hey Nick, are you able to explain that a bit more? Sorry, it's probably the blind leading the blind here with myself and shadow. I found some stuff about NVIDIA deprecating the runtime=nvidia option, and also found this: "Take note of your Docker version with docker -v. Versions earlier than 19.03 require nvidia-docker2 and the --runtime=nvidia flag. On versions including and after 19.03, you will use the nvidia-container-toolkit package and the --gpus all flag. Both options are documented on the page linked above."
I'm on version 20.10.24 of Docker. I've only got the --runtime=nvidia flag but everything seems to be running fine. Will that break at some point?
Right, --runtime=nvidia works fine for now, but at some point it will not work, which is why the recommendation is to use --gpus=all.
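Roughly, the only difference in the docker run invocation is which flag grants GPU access. A sketch using the image from this thread (the "..." stands for the rest of your flags):

# older form, requires the nvidia runtime to be registered with Docker
docker run -d --name frigate --runtime=nvidia ... ghcr.io/blakeblackshear/frigate:stable-tensorrt

# newer form via nvidia-container-toolkit (Docker 19.03 and later)
docker run -d --name frigate --gpus=all ... ghcr.io/blakeblackshear/frigate:stable-tensorrt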
Thanks. Just changed mine and can confirm it works as well.
Here are my latest updated files; still not working (same issue):
docker run
-d
--name='frigate'
--net='br0.10'
--ip='192.168.111.59'
-e TZ="America/Chicago"
-e HOST_OS="Unraid"
-e HOST_HOSTNAME="PowerPig"
-e HOST_CONTAINERNAME="frigate"
-e 'TCP_PORT_5000'='5000'
-e 'TCP_PORT_8554'='8554'
-e 'FRIGATE_RTSP_PASSWORD'='XXXX'
-e 'PLUS_API_KEY'='XXXX'
-e 'NVIDIA_VISIBLE_DEVICES'='GPU-2b50f6aa-0d80-2412-eea0-1e8f5bfeeba3'
-e 'NVIDIA_DRIVER_CAPABILITIES'='compute,utility,video'
-e 'YOLO_MODELS'='yolov7-tiny-416'
-e 'USE_FP16'='false'
-e 'TRT_MODEL_PREP_DEVICE'='0'
-e 'TCP_PORT_8555'='8555'
-e 'UDP_PORT_8555'='8555'
-e 'TCP_PORT_1984'='1984'
-l net.unraid.docker.managed=dockerman
-l net.unraid.docker.webui='http://[IP]:[PORT:5000]'
-l net.unraid.docker.icon='https://raw.githubusercontent.com/yayitazale/unraid-templates/main/frigate.png'
-v '/mnt/user/appdata/frigate':'/config':'rw'
-v '/mnt/user/Media/frigate':'/media/frigate':'rw'
-v '/etc/localtime':'/etc/localtime':'rw'
--shm-size=256mb
--mount type=tmpfs,target=/tmp/cache,tmpfs-size=1000000000
--restart unless-stopped
--runtime=nvidia
--gpus=all 'ghcr.io/blakeblackshear/frigate:stable-tensorrt'
5ebd8b92d0d2a8295a52fec3ed7ac31bf1e883dec9285e49d736cab7862c603a
The log:
s6-rc: info: service s6rc-fdholder: starting
s6-rc: info: service s6rc-oneshot-runner: starting
s6-rc: info: service s6rc-oneshot-runner successfully started
s6-rc: info: service fix-attrs: starting
s6-rc: info: service s6rc-fdholder successfully started
s6-rc: info: service fix-attrs successfully started
s6-rc: info: service legacy-cont-init: starting
s6-rc: info: service legacy-cont-init successfully started
s6-rc: info: service trt-model-prepare: starting
s6-rc: info: service log-prepare: starting
s6-rc: info: service log-prepare successfully started
s6-rc: info: service nginx-log: starting
s6-rc: info: service go2rtc-log: starting
s6-rc: info: service frigate-log: starting
s6-rc: info: service nginx-log successfully started
s6-rc: info: service go2rtc-log successfully started
s6-rc: info: service go2rtc: starting
s6-rc: info: service frigate-log successfully started
s6-rc: info: service go2rtc successfully started
s6-rc: info: service go2rtc-healthcheck: starting
s6-rc: info: service go2rtc-healthcheck successfully started
s6-rc: warning: unable to start service trt-model-prepare: command exited 4
Generating the following TRT Models: yolov7-tiny-416
Downloading yolo weights
2024-02-08 08:38:10.087433407 [INFO] Preparing new go2rtc config...
2024-02-08 08:38:11.442893547 [INFO] Starting go2rtc...
2024-02-08 08:38:11.693590983 08:38:11.693 INF go2rtc version 1.8.4 linux/amd64
2024-02-08 08:38:11.695202229 08:38:11.695 INF [rtsp] listen addr=:8554
2024-02-08 08:38:11.695301128 08:38:11.695 INF [api] listen addr=:1984
2024-02-08 08:38:11.695674161 08:38:11.695 INF [webrtc] listen addr=:8555
2024-02-08 08:38:19.987165395 [INFO] Starting go2rtc healthcheck service...
Config yaml:
mqtt:
  host: 192.168.111.2
  user: mqttuser
  password: XXXX
birdseye:
  enabled: True
  mode: continuous
# webrtc:
#   network_mode: host
detect:
  enabled: True
objects:
  track:
    - person
    - dog
    - cat
    - car
record:
  enabled: True
  events:
    pre_capture: 10
    post_capture: 10
    retain:
      default: 30
cameras:
  entrance:
    enabled: True
    record:
      events:
        required_zones:
          - front_porch
    snapshots:
      enabled: True
      required_zones:
        - front_porch
    ffmpeg:
      hwaccel_args: preset-nvidia-h264
      inputs:
        - path: rtsp://192.168.111.2:32869/98a2372c50e1606b
          input_args: preset-rtsp-restream
          roles:
            - record
        - path: rtsp://192.168.111.2:32869/0f477492eb6d033c
          input_args: preset-rtsp-restream
          roles:
            - detect
    zones:
      front_porch:
        coordinates: 0,720,1280,720,1280,0,770,0,402,236,359,235,271,266,151,427,0,594
  front_doorbell:
    record:
      events:
        required_zones:
          - front_porch
    snapshots:
      enabled: True
      required_zones:
        - front_porch
    ffmpeg:
      hwaccel_args: preset-nvidia-h264
      inputs:
        - path: rtsp://192.168.111.2:45161/eeb9918ab44c8350
          input_args: preset-rtsp-restream
          roles:
            - detect
        - path: rtsp://192.168.111.2:45161/ebd99ea4be2c98dc
          input_args: preset-rtsp-restream
          roles:
            - record
    motion:
      mask:
        - 1920,0,1920,551,0,554,0,0
    zones:
      front_porch:
        coordinates: 0,493,91,493,208,509,296,544,404,544,493,633,577,648,726,660,753,635,888,569,1028,643,1236,614,1280,720,909,720,0,720
  garage:
    ffmpeg:
      hwaccel_args: preset-nvidia-h264
      inputs:
        - path: rtsp://192.168.111.2:35705/fd1734abb5ba49ae
          input_args: preset-rtsp-restream
          roles:
            - detect
        - path: rtsp://192.168.111.2:35705/4540a170ea6fe583
          input_args: preset-rtsp-restream
          roles:
            - record
    snapshots:
      enabled: True
  rear:
    record:
      events:
        required_zones:
          - rear_entrance
    snapshots:
      enabled: True
      required_zones:
        - rear_entrance
    ffmpeg:
      hwaccel_args: preset-nvidia-h264
      inputs:
        - path: rtsp://192.168.111.2:35863/c59dd8784e1fcd63
          input_args: preset-rtsp-restream
          roles:
            - record
        - path: rtsp://192.168.111.2:35863/c6c7806ac11fde3a
          input_args: preset-rtsp-restream
          roles:
            - detect
    motion:
      mask:
        - 1280,0,1280,160,616,151,0,214,0,0
    zones:
      rear_entrance:
        coordinates: 0,720,1280,720,1280,265,975,244,669,344,532,332,0,460
detectors:
  tensorrt:
    type: tensorrt
    device: 0 # This is the default, select the first GPU
model:
  path: /config/model_cache/tensorrt/yolov7-tiny-416.trt
  input_tensor: nchw
  input_pixel_format: rgb
  width: 416
  height: 416
I got this solved eventually: for some reason, once upgraded to 0.13, the container wasn't getting internet access, so downloading the weights failed. I changed the container to bridge (from macvlan) and it's working now.
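If anyone else wants to check for the same thing, a quick sketch of a test from inside the container; the weights are pulled from GitHub over HTTPS, so that is the host worth probing (container name assumed to be frigate):

docker exec -it frigate curl -sI https://github.com
# if this hangs or errors while the host itself has internet access,
# the container's networking (macvlan/VLAN, DNS, VPN) is the likely culprit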
My container does have access to the internet and I am facing this issue:
$ docker exec -it frigate bash
root@88f34c766277:/opt/frigate# curl google.com
<HTML><HEAD><meta http-equiv="content-type" content="text/html;charset=utf-8">
<TITLE>301 Moved</TITLE></HEAD><BODY>
<H1>301 Moved</H1>
The document has moved
<A HREF="http://www.google.com/">here</A>.
</BODY></HTML>
root@88f34c766277:/opt/frigate#
Adding the bridge configuration below did not help.
version: "3.9"
services:
frigate:
container_name: frigate
restart: unless-stopped
image: ghcr.io/blakeblackshear/frigate:dev-bfbacee-tensorrt # the issue is also present with the production image
shm_size: 96MB
privileged: true
environment:
- LIBVA_DRIVER_NAME=i965
- USE_FP16=False
- NVIDIA_VISIBLE_DEVICES=all
- NVIDIA_DRIVER_CAPABILITIES=compute,utility,video
devices:
- /dev/dri/renderD128
deploy:
resources:
reservations:
devices:
- driver: nvidia
count: 1 # number of GPUs
capabilities: [gpu]
volumes:
- ./config:/config
- ./storage:/media/frigate
- type: tmpfs # Optional: 1GB of memory, reduces SSD/SD Card wear
target: /tmp/cache
tmpfs:
size: 1000000000
ports:
- "5000:5000"
- "8554:8554" # RTSP feeds
networks:
- bridge_network
networks:
bridge_network:
driver: bridge
Figured I'd try here before opening a new issue since they look similar.
Here is the config file for good measure:
mqtt:
  enabled: False
cameras:
  heitor:
    enabled: True
    ffmpeg:
      hwaccel_args: preset-vaapi
      inputs:
        - path: rtsp://<user>:<pwd>@192.168.66.254:554/Streaming/Channels/101/
          roles:
            - detect
    detect:
      enabled: True
detectors:
  tensorrt:
    type: tensorrt
    device: 0
model:
  path: /config/model_cache/tensorrt/yolov7-320.trt
  input_tensor: nchw
  input_pixel_format: rgb
  width: 320
  height: 320
It turns out it was because the machine had a WireGuard client running. I guess Docker did not like the DNS configuration.
I shut off WireGuard and it is now happily decoding with the integrated Intel GPU and detecting with my old GeForce 840M 😁
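(For reference, if the tunnel was brought up with wg-quick, shutting it off is just the following; the interface name wg0 is an assumption, check yours with wg show:)

wg-quick down wg0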
I have also encountered the same problem. Does anyone have a good solution? Thank you for your reply.
s6-rc: info: service s6rc-fdholder: starting
s6-rc: info: service s6rc-oneshot-runner: starting
s6-rc: info: service s6rc-oneshot-runner successfully started
s6-rc: info: service fix-attrs: starting
s6-rc: info: service s6rc-fdholder successfully started
s6-rc: info: service fix-attrs successfully started
s6-rc: info: service legacy-cont-init: starting
s6-rc: info: service legacy-cont-init successfully started
s6-rc: info: service trt-model-prepare: starting
s6-rc: info: service log-prepare: starting
s6-rc: info: service log-prepare successfully started
Generating the following TRT Models: yolov7-320
s6-rc: info: service nginx-log: starting
s6-rc: info: service go2rtc-log: starting
Downloading yolo weights
s6-rc: info: service frigate-log: starting
s6-rc: info: service nginx-log successfully started
s6-rc: info: service frigate-log successfully started
s6-rc: info: service go2rtc-log successfully started
s6-rc: info: service go2rtc: starting
s6-rc: info: service go2rtc successfully started
s6-rc: info: service go2rtc-healthcheck: starting
s6-rc: info: service go2rtc-healthcheck successfully started
2024-07-19 01:43:29.038272789 [INFO] Preparing new go2rtc config...
2024-07-19 01:43:29.446236412 [INFO] Starting go2rtc...
2024-07-19 01:43:29.550289841 01:43:29.550 INF go2rtc version 1.8.4 linux/arm64
2024-07-19 01:43:29.551101345 01:43:29.551 INF [rtsp] listen addr=:8554
2024-07-19 01:43:29.551208803 01:43:29.551 INF [api] listen addr=:1984
2024-07-19 01:43:29.551377991 01:43:29.551 INF [webrtc] listen addr=:8555
2024-07-19 01:43:39.000225040 [INFO] Starting go2rtc healthcheck service...
s6-rc: warning: unable to start service trt-model-prepare: command exited 4
I solved this after disabling system proxy tools
My Frigate container was also routing its traffic through WireGuard and hitting the same error, failing to pull down the model weights from GitHub. I think it was related to MTU size and packet fragmentation during the TLS handshake because of the WireGuard overhead. I noticed that curl -v https://github.com didn't work in any of my containers, but nearly all other TLS requests worked fine. It had nothing to do with Frigate; this was just the first time there was a noticeable issue.
What fixed it for me was adding the following to the Interface section of the WireGuard .conf file on both ends of the tunnel:
PostUp = iptables -t mangle -A FORWARD -p tcp --tcp-flags SYN,RST SYN -j TCPMSS --clamp-mss-to-pmtu
PostDown = iptables -t mangle -D FORWARD -p tcp --tcp-flags SYN,RST SYN -j TCPMSS --clamp-mss-to-pmtu
This clamps the Maximum Segment Size (MSS) advertised in TCP SYN packets so that segments fit within the tunnel's Maximum Transmission Unit (MTU), which keeps the large TLS handshake packets from being dropped.
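If you want to confirm an MTU problem like this before touching iptables, one rough check (my own suggestion, not something from the Frigate docs) is to probe the largest packet that survives the tunnel using ping's don't-fragment option:

# 1472 = 1500 minus 28 bytes of IP/ICMP headers; lower -s until the ping succeeds
ping -M do -s 1472 github.com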
Describe the problem you are having
Frigate fails to start and hangs at the go2rtc health check, but earlier in the logs this can be seen:
unable to start service trt-model-prepare: command exited 4
Version
0.13.1
Frigate config file
Relevant log output
FFprobe output from your camera
Frigate stats
No response
Operating system
UNRAID
Install method
Docker Compose
Coral version
Other
Network connection
Wired
Camera make and model
N/A
Any other information that may be helpful
No response