Open madasus opened 12 months ago
Same here. Using a Raspberry Pi 4 8 GB model with a USB Coral TPU, Frigate 0.13.2 Docker container on Raspberry Pi OS. The whole machine crashes: no access from any source (web, SSH), but strangely ping works (with lots of lag). The only way to restore everything is by restarting the Pi 4. Cameras: 3 (detecting off office hours). Google Coral TPU over USB.
UPDATE: Made some changes to the config files, but no luck so far.
Container:
frigate:
  container_name: Frigate
  privileged: true
  restart: unless-stopped
  image: ghcr.io/blakeblackshear/frigate:stable
  shm_size: 128mb # <---------------- increased to 1024mb
  devices:
    - /dev/bus/usb/002/004:/dev/bus/usb/002/004 # passes the USB Coral, needs to be modified for other versions
    - /dev/video11:/dev/video11 # extracted from the Frigate documentation for the RPi
  volumes:
    - /etc/localtime:/etc/localtime:ro
    - /home/djcrawleravp/docker/frigate:/config
    - /home/djcrawleravp/docker/frigate/media:/media/frigate
  network_mode: host
  environment:
    FRIGATE_RTSP_PASSWORD: "Password123"
Config file (recently disabled Birdseye, not much improvement):
mqtt:
  host: 127.0.0.1
  user: mqtt
  password: Password123
  topic_prefix: frigate

#ffmpeg:
#  hwaccel_args: preset-rpi-64-h264 # <---------------- hwaccel disabled

detectors:
  coral:
    type: edgetpu
    device: usb

birdseye:
  enabled: false
  mode: continuous

cameras:
  oficina:
    birdseye:
      order: 2
    ffmpeg:
      inputs:
        - path: rtsp://arquitec1:pass**@192.168.1.193:554/cam/realmonitor?channel=1&subtype=0
          roles:
            - record
        - path: rtsp://arquitec1:pass**@192.168.1.193:554/cam/realmonitor?channel=1&subtype=1
          roles:
            - detect
    objects:
      track:
        - person
        - mouse
    snapshots:
      enabled: true
      retain:
        default: 30
        objects:
          person: 30
      quality: 100
    record:
      enabled: true
      retain:
        days: 1
        mode: active_objects
      events:
        pre_capture: 2
        post_capture: 2
        retain:
          default: 15
          mode: active_objects
          objects:
            person: 15
            mouse: 15
    detect:
      enabled: true
      fps: 5
      width: 1280
      height: 720
    motion:
      mask:
        - 0,296,440,172,864,113,918,164,989,187,988,253,1079,275,1102,84,1280,76,1280,0,0,0
  ingreso:
    birdseye:
      order: 1
    ffmpeg:
      inputs:
        - path: rtsp://arquitec1:pass**@192.168.1.193:554/cam/realmonitor?channel=2&subtype=0
          roles:
            - record
        - path: rtsp://arquitec1:pass**@192.168.1.193:554/cam/realmonitor?channel=2&subtype=1
          roles:
            - detect
    objects:
      track:
        - person
        - cat
        - mouse
    snapshots:
      enabled: true
      retain:
        default: 30
        objects:
          person: 30
      quality: 100
    record:
      enabled: true
      retain:
        days: 1
        mode: active_objects
      events:
        pre_capture: 2
        post_capture: 2
        retain:
          default: 15
          mode: active_objects
          objects:
            person: 15
            cat: 15
            mouse: 15
    detect:
      enabled: true
      fps: 5
      width: 1280
      height: 720
    motion:
      mask:
        - 158,406,0,720,0,0,1280,0,1280,39,1280,720,1013,720,1067,364,657,283,398,234
  garaje:
    birdseye:
      order: 3
    ffmpeg:
      inputs:
        - path: rtsp://arquitec1:pass**@192.168.1.193:554/cam/realmonitor?channel=3&subtype=0
          roles:
            - record
        - path: rtsp://arquitec1:pass**@192.168.1.193:554/cam/realmonitor?channel=3&subtype=1
          roles:
            - detect
    objects:
      track:
        - person
        - cat
        - mouse
        - car
    snapshots:
      enabled: true
      retain:
        default: 30
        objects:
          person: 30
      quality: 100
    record:
      enabled: true
      retain:
        days: 1
        mode: active_objects
      events:
        pre_capture: 2
        post_capture: 2
        retain:
          default: 15
          mode: active_objects
          objects:
            person: 15
            cat: 15
            mouse: 15
    detect:
      enabled: true
      fps: 5
      width: 1280
      height: 720
    motion:
      mask:
        - 0,185,0,0,33,0,1280,0,1280,75,771,83,134,158
  patio:
    birdseye:
      order: 4
    ffmpeg:
      inputs:
        - path: rtsp://arquitec1:pass**@192.168.1.193:554/cam/realmonitor?channel=4&subtype=0
          roles:
            - record
        - path: rtsp://arquitec1:pass**@192.168.1.193:554/cam/realmonitor?channel=4&subtype=1
          roles:
            - detect
    objects:
      track:
        - person
        - cat
        - mouse
    snapshots:
      enabled: true
      retain:
        default: 30
        objects:
          person: 30
      quality: 100
    record:
      enabled: true
      retain:
        days: 1
        mode: active_objects
      events:
        pre_capture: 2
        post_capture: 2
        retain:
          default: 15
          mode: active_objects
          objects:
            person: 15
            cat: 15
            mouse: 15
    detect:
      enabled: true
      fps: 5
      width: 1280
      height: 720
    motion:
      mask:
        - 912,217,478,336,0,337,0,0,1280,0,1280,194
htop while detecting (usually below 20% when not detecting):
Frigate Side:
Disabling hardware accel and increasing shm_size to 1024 did the trick for me… running perfectly for 3 days now
So yeah... after a while my server is randomly crashing again :/
On Frigate 0.13 I had the same crashes, but solved it by using the yolov8n model. That resulted in no more crashes at all. Yesterday I migrated to 0.14 and had to move to the YOLO-NAS model to get it working. After about 12 hours the whole computer crashed again :( Any ideas how to solve this on 0.14? I'm considering buying a Google Coral USB Accelerator now...
Working with a NUC 6i3SYH, Debian with Frigate on Docker.
Same here, 2 crashes in 2 days. I didn't have crashes in the beta versions...
What worked for me was turning off hardware acceleration in ffmpeg. I use a Coral, so I don't need the CPU for that. I haven't tried turning it back on in 0.14, but an earlier beta did not work either.
ffmpeg:
  hwaccel_args: ' '
Note: commenting out these lines does not turn off acceleration; you must pass in a string with a blank in it.
This never happened to me before, but it started happening with 0.14. Waiting to see if I can save some diagnostics this time.
For me, the memory use would spike a bunch, then it would go to 100% and crash the system. I turned hwaccel off and no more spikes.
That seems to work around it, though it's far less than ideal. Lots of GPU errors and OOMs without it:
Aug 11 16:21:08 crow kernel: i915 0000:00:02.0: [drm] Resetting vcs0 for CS error
Aug 11 16:21:08 crow kernel: i915 0000:00:02.0: [drm] ffmpeg[25543] context reset due to GPU hang
Aug 11 16:21:08 crow kernel: i915 0000:00:02.0: [drm] GPU HANG: ecode 11:4:f45e8fef, in ffmpeg [25543]
Aug 11 16:21:08 crow kernel: i915 0000:00:02.0: [drm] Resetting vcs0 for CS error
Aug 11 16:21:08 crow kernel: i915 0000:00:02.0: [drm] ffmpeg[25543] context reset due to GPU hang
Aug 11 16:21:08 crow kernel: i915 0000:00:02.0: [drm] GPU HANG: ecode 11:4:eaaeaeae, in ffmpeg [25543]
Aug 11 16:25:33 crow kernel: i915 0000:00:02.0: [drm] Resetting vcs0 for CS error
Aug 11 16:25:33 crow kernel: i915 0000:00:02.0: [drm] ffmpeg[1216] context reset due to GPU hang
Aug 11 16:25:33 crow kernel: i915 0000:00:02.0: [drm] GPU HANG: ecode 11:4:f8d3bfef, in ffmpeg [1216]
Aug 11 16:25:33 crow kernel: i915 0000:00:02.0: [drm] Resetting vcs0 for CS error
Aug 11 16:25:33 crow kernel: i915 0000:00:02.0: [drm] ffmpeg[1216] context reset due to GPU hang
Aug 11 16:25:33 crow kernel: i915 0000:00:02.0: [drm] GPU HANG: ecode 11:4:eaaeaeae, in ffmpeg [1216]
Aug 11 16:29:07 crow kernel: i915 0000:00:02.0: [drm] Resetting vcs0 for CS error
Aug 11 16:29:07 crow kernel: i915 0000:00:02.0: [drm] ffmpeg[7602] context reset due to GPU hang
Aug 11 16:29:07 crow kernel: i915 0000:00:02.0: [drm] GPU HANG: ecode 11:4:f1b5b5b5, in ffmpeg [7602]
Aug 11 16:29:07 crow kernel: i915 0000:00:02.0: [drm] Resetting vcs0 for CS error
Aug 11 16:29:07 crow kernel: i915 0000:00:02.0: [drm] ffmpeg[7602] context reset due to GPU hang
Aug 11 16:29:07 crow kernel: i915 0000:00:02.0: [drm] GPU HANG: ecode 11:4:eaaeaeae, in ffmpeg [7602]
Aug 11 16:31:41 crow kernel: i915 0000:00:02.0: [drm] Resetting vcs0 for CS error
Aug 11 16:31:41 crow kernel: i915 0000:00:02.0: [drm] ffmpeg[12716] context reset due to GPU hang
Aug 11 16:31:41 crow kernel: i915 0000:00:02.0: [drm] GPU HANG: ecode 11:4:8313bfef, in ffmpeg [12716]
Aug 11 16:31:41 crow kernel: i915 0000:00:02.0: [drm] Resetting vcs0 for CS error
Aug 11 16:31:41 crow kernel: i915 0000:00:02.0: [drm] ffmpeg[12716] context reset due to GPU hang
Aug 11 16:31:41 crow kernel: i915 0000:00:02.0: [drm] GPU HANG: ecode 11:4:eaaeaeae, in ffmpeg [12716]
Aug 11 16:35:07 crow kernel: mdcmd (39): nocheck pause
Aug 11 16:35:07 crow kernel: md: recovery thread: exit status: -4
Aug 11 16:55:54 crow kernel: ffmpeg invoked oom-killer: gfp_mask=0xcc0(GFP_KERNEL), order=0, oom_score_adj=0
Aug 11 16:55:54 crow kernel: CPU: 3 PID: 13423 Comm: ffmpeg Tainted: P O 6.1.79-Unraid #1
Aug 11 16:55:54 crow kernel: Hardware name: retsamarret 000-F4423-FBA004-2000-N/Default string, BIOS 5.19 06/24/2022
Aug 11 16:55:54 crow kernel: Call Trace:
Aug 11 16:55:54 crow kernel: <TASK>
Aug 11 16:55:54 crow kernel: dump_stack_lvl+0x44/0x5c
Aug 11 16:55:54 crow kernel: dump_header+0x4a/0x211
Aug 11 16:55:54 crow kernel: oom_kill_process+0x80/0x111
Aug 11 16:55:54 crow kernel: out_of_memory+0x3b3/0x3e5
Aug 11 16:55:54 crow kernel: mem_cgroup_out_of_memory+0x7c/0xb2
Aug 11 16:55:54 crow kernel: try_charge_memcg+0x44a/0x5ad
Aug 11 16:55:54 crow kernel: charge_memcg+0x31/0x79
Aug 11 16:55:54 crow kernel: __mem_cgroup_charge+0x29/0x41
Aug 11 16:55:54 crow kernel: __handle_mm_fault+0x881/0xcf9
Aug 11 16:55:54 crow kernel: ? mas_destroy+0xa8/0xbb
Aug 11 16:55:54 crow kernel: handle_mm_fault+0x13d/0x20f
Aug 11 16:55:54 crow kernel: do_user_addr_fault+0x2c3/0x48d
Aug 11 16:55:54 crow kernel: exc_page_fault+0xfb/0x11d
Aug 11 16:55:54 crow kernel: asm_exc_page_fault+0x22/0x30
Aug 11 16:55:54 crow kernel: RIP: 0033:0x149aedf03d81
Aug 11 16:55:54 crow kernel: Code: 10 0f 29 95 50 ff ff ff 66 0f 38 2a 49 20 0f 29 8d 60 ff ff ff 66 0f 38 2a 41 30 0f 11 5a f0 49 63 4d 08 0f 29 85 70 ff ff ff <0f> 11 54 0a f0 41 8b 4d 08 01 c9 48 63 c9 0f 11 4c 0a f0 41 8b 4d
Aug 11 16:55:54 crow kernel: RSP: 002b:00007fffce5671a0 EFLAGS: 00010206
Aug 11 16:55:54 crow kernel: RAX: 000000000008c200 RBX: 0000149ae39e7080 RCX: 0000000000000500
Aug 11 16:55:54 crow kernel: RDX: 000055d083ea0e40 RSI: 00000000fffffe00 RDI: 000055d083ea1330
Aug 11 16:55:54 crow kernel: RBP: 00007fffce567310 R08: 0000000000000500 R09: 00000000fffffe08
Aug 11 16:55:54 crow kernel: R10: 000055d083ea0e30 R11: 0000000000000000 R12: 00000000fffffe0c
Aug 11 16:55:54 crow kernel: R13: 00007fffce567370 R14: 000055d083ea0e30 R15: 000055d083ea0e30
Aug 11 16:55:54 crow kernel: </TASK>
Aug 11 16:55:54 crow kernel: memory: usage 2097152kB, limit 2097152kB, failcnt 7991
Aug 11 16:55:54 crow kernel: swap: usage 0kB, limit 9007199254740988kB, failcnt 0
Aug 11 16:55:54 crow kernel: Memory cgroup stats for /docker/0c0c9c3d5a07b7396a7e4ae3b3c3c4486aae2cee31bac0415a835f086a625b73:
Aug 11 16:55:54 crow kernel: anon 1891573760
Aug 11 16:55:54 crow kernel: file 217554944
Aug 11 16:55:54 crow kernel: kernel 37122048
Aug 11 16:55:54 crow kernel: kernel_stack 4259840
Aug 11 16:55:54 crow kernel: pagetables 14495744
Aug 11 16:55:54 crow kernel: sec_pagetables 0
Aug 11 16:55:54 crow kernel: percpu 216
Aug 11 16:55:54 crow kernel: sock 1232896
Aug 11 16:55:54 crow kernel: vmalloc 12288
Aug 11 16:55:54 crow kernel: shmem 217522176
Aug 11 16:55:54 crow kernel: file_mapped 3784704
Aug 11 16:55:54 crow kernel: file_dirty 20480
Aug 11 16:55:54 crow kernel: file_writeback 0
Aug 11 16:55:54 crow kernel: swapcached 0
Aug 11 16:55:54 crow kernel: anon_thp 27262976
Aug 11 16:55:54 crow kernel: file_thp 0
Aug 11 16:55:54 crow kernel: shmem_thp 6291456
Aug 11 16:55:54 crow kernel: inactive_anon 1972129792
Aug 11 16:55:54 crow kernel: active_anon 16654336
Aug 11 16:55:54 crow kernel: inactive_file 12288
Aug 11 16:55:54 crow kernel: active_file 20480
Aug 11 16:55:54 crow kernel: unevictable 120274944
Aug 11 16:55:54 crow kernel: slab_reclaimable 5335168
Aug 11 16:55:54 crow kernel: slab_unreclaimable 11924672
Aug 11 16:55:54 crow kernel: slab 17259840
Aug 11 16:55:54 crow kernel: workingset_refault_anon 0
Aug 11 16:55:54 crow kernel: workingset_refault_file 10114
Aug 11 16:55:54 crow kernel: workingset_activate_anon 0
Aug 11 16:55:54 crow kernel: workingset_activate_file 7271
Aug 11 16:55:54 crow kernel: workingset_restore_anon 0
Aug 11 16:55:54 crow kernel: workingset_restore_file 118
Aug 11 16:55:54 crow kernel: workingset_nodereclaim 0
Aug 11 16:55:54 crow kernel: pgscan 15992
Aug 11 16:55:54 crow kernel: pgsteal 15577
Aug 11 16:55:54 crow kernel: pgscan_kswapd 5406
Aug 11 16:55:54 crow kernel: pgscan_direct 10586
Aug 11 16:55:54 crow kernel: pgsteal_kswapd 5402
Aug 11 16:55:54 crow kernel: pgsteal_direct 10175
Aug 11 16:55:54 crow kernel: pgfault 8179859
Aug 11 16:55:54 crow kernel: pgmajfault 180
Aug 11 16:55:54 crow kernel: pgrefill 8265
Aug 11 16:55:54 crow kernel: pgactivate 4396828
Aug 11 16:55:54 crow kernel: pgdeactivate 8211
Aug 11 16:55:54 crow kernel: pglazyfree 0
Aug 11 16:55:54 crow kernel: pglazyfreed 0
Aug 11 16:55:54 crow kernel: thp_fault_alloc 44
Aug 11 16:55:54 crow kernel: Tasks state (memory values in pages):
Aug 11 16:55:54 crow kernel: [ pid ] uid tgid total_vm rss pgtables_bytes swapents oom_score_adj name
Aug 11 16:55:54 crow kernel: [ 12298] 0 12298 52 17 24576 0 0 s6-svscan
Aug 11 16:55:54 crow kernel: [ 12321] 0 12321 53 16 24576 0 0 s6-supervise
Aug 11 16:55:54 crow kernel: [ 12322] 0 12322 50 1 24576 0 0 s6-linux-init-s
Aug 11 16:55:54 crow kernel: [ 12330] 0 12330 53 16 24576 0 0 s6-supervise
Aug 11 16:55:54 crow kernel: [ 12331] 0 12331 53 16 24576 0 0 s6-supervise
Aug 11 16:55:54 crow kernel: [ 12332] 0 12332 53 17 24576 0 0 s6-supervise
Aug 11 16:55:54 crow kernel: [ 12333] 0 12333 53 15 24576 0 0 s6-supervise
Aug 11 16:55:54 crow kernel: [ 12334] 0 12334 53 17 24576 0 0 s6-supervise
Aug 11 16:55:54 crow kernel: [ 12335] 0 12335 53 16 24576 0 0 s6-supervise
Aug 11 16:55:54 crow kernel: [ 12336] 0 12336 53 16 24576 0 0 s6-supervise
Aug 11 16:55:54 crow kernel: [ 12337] 0 12337 53 15 24576 0 0 s6-supervise
Aug 11 16:55:54 crow kernel: [ 12338] 0 12338 53 16 24576 0 0 s6-supervise
Aug 11 16:55:54 crow kernel: [ 12339] 0 12339 53 16 28672 0 0 s6-supervise
Aug 11 16:55:54 crow kernel: [ 12340] 0 12340 53 17 24576 0 0 s6-supervise
Aug 11 16:55:54 crow kernel: [ 12351] 0 12351 131 42 28672 0 0 s6-fdholderd
Aug 11 16:55:54 crow kernel: [ 12352] 0 12352 47 1 24576 0 0 s6-ipcserverd
Aug 11 16:55:54 crow kernel: [ 12414] 65534 12414 73 34 36864 0 0 s6-log
Aug 11 16:55:54 crow kernel: [ 12416] 65534 12416 69 27 36864 0 0 s6-log
Aug 11 16:55:54 crow kernel: [ 12417] 65534 12417 70 28 36864 0 0 s6-log
Aug 11 16:55:54 crow kernel: [ 12418] 65534 12418 69 29 36864 0 0 s6-log
Aug 11 16:55:54 crow kernel: [ 12426] 0 12426 310220 8776 180224 0 0 go2rtc
Aug 11 16:55:54 crow kernel: [ 12445] 0 12445 973 770 45056 0 0 bash
Aug 11 16:55:54 crow kernel: [ 12449] 0 12449 710319 88468 1564672 0 0 python3
Aug 11 16:55:54 crow kernel: [ 12457] 0 12457 148781 3600 106496 0 0 nginx
Aug 11 16:55:54 crow kernel: [ 12489] 0 12489 165396 2219 217088 0 0 nginx
Aug 11 16:55:54 crow kernel: [ 12490] 0 12490 165579 2327 221184 0 0 nginx
Aug 11 16:55:54 crow kernel: [ 12491] 0 12491 148828 1153 77824 0 0 nginx
Aug 11 16:55:54 crow kernel: [ 12627] 0 12627 973 773 40960 0 0 bash
Aug 11 16:55:54 crow kernel: [ 12700] 0 12700 171537 30949 487424 0 0 frigate.logger
Aug 11 16:55:54 crow kernel: [ 13279] 0 13279 275533 38184 696320 0 0 frigate.recordi
Aug 11 16:55:54 crow kernel: [ 13280] 0 13280 240128 33899 544768 0 0 frigate.review_
Aug 11 16:55:54 crow kernel: [ 13327] 0 13327 3625 2680 61440 0 0 python3
Aug 11 16:55:54 crow kernel: [ 13348] 0 13348 343612 35272 618496 0 0 frigate.detecto
Aug 11 16:55:54 crow kernel: [ 13350] 0 13350 428639 36380 757760 0 0 frigate.output
Aug 11 16:55:54 crow kernel: [ 13370] 0 13370 396533 36277 733184 0 0 frigate.process
Aug 11 16:55:54 crow kernel: [ 13374] 0 13374 445685 36269 745472 0 0 frigate.process
Aug 11 16:55:54 crow kernel: [ 13385] 0 13385 445685 36280 745472 0 0 frigate.process
Aug 11 16:55:54 crow kernel: [ 13393] 0 13393 446540 34187 675840 0 0 frigate.capture
Aug 11 16:55:54 crow kernel: [ 13408] 0 13408 449325 34918 679936 0 0 frigate.capture
Aug 11 16:55:54 crow kernel: [ 13411] 0 13411 449325 34846 679936 0 0 frigate.capture
Aug 11 16:55:54 crow kernel: [ 13415] 0 13415 122320 14108 372736 0 0 ffmpeg
Aug 11 16:55:54 crow kernel: [ 13416] 0 13416 284071 170586 1667072 0 0 ffmpeg
Aug 11 16:55:54 crow kernel: [ 13423] 0 13423 282285 170591 1671168 0 0 ffmpeg
Aug 11 16:55:54 crow kernel: [ 13427] 0 13427 32228 5476 176128 0 0 ffmpeg
Aug 11 16:55:54 crow kernel: [ 13442] 0 13442 31705 4232 163840 0 0 ffmpeg
Aug 11 16:55:54 crow kernel: [ 13447] 0 13447 31705 4221 163840 0 0 ffmpeg
Aug 11 16:55:54 crow kernel: [ 13452] 0 13452 31705 4215 159744 0 0 ffmpeg
Aug 11 16:55:54 crow kernel: [ 13457] 0 13457 31705 4235 163840 0 0 ffmpeg
Aug 11 16:55:54 crow kernel: [ 30953] 0 30953 622 230 40960 0 0 sleep
Aug 11 16:55:54 crow kernel: [ 31207] 0 31207 622 220 40960 0 0 sleep
Aug 11 16:55:54 crow kernel: oom-kill:constraint=CONSTRAINT_MEMCG,nodemask=(null),cpuset=0c0c9c3d5a07b7396a7e4ae3b3c3c4486aae2cee31bac0415a835f086a625b73,mems_allowed=0,oom_memcg=/docker/0c0c9c3d5a07b7396a7e4ae3b3c3c4486aae2cee31bac0415a835f086a625b73,task_memcg=/docker/0c0c9c3d5a07b7396a7e4ae3b3c3c4486aae2cee31bac0415a835f086a625b73,task=ffmpeg,pid=13423,uid=0
Aug 11 16:55:54 crow kernel: Memory cgroup out of memory: Killed process 13423 (ffmpeg) total-vm:1129140kB, anon-rss:654648kB, file-rss:27716kB, shmem-rss:0kB, UID:0 pgtables:1632kB oom_score_adj:0
Aug 11 16:55:56 crow kernel: ffmpeg invoked oom-killer: gfp_mask=0xcc0(GFP_KERNEL), order=0, oom_score_adj=0
Aug 11 16:55:56 crow kernel: CPU: 3 PID: 13416 Comm: ffmpeg Tainted: P O 6.1.79-Unraid #1
Aug 11 16:55:56 crow kernel: Hardware name: retsamarret 000-F4423-FBA004-2000-N/Default string, BIOS 5.19 06/24/2022
> For me, the memory use would spike a bunch, then it would go to 100% and crash the system. I turned hwaccel off and no more spikes.
In my case, disabling HW accel makes the host use 20% extra CPU
I upgraded to v0.14.0 yesterday and, sure enough, about 24 hours later I got this crash again. I had not been getting this in v0.13.0 since I had disabled hwaccel.
The change here appears to be that in v0.14.0 hwaccel_args has changed so that if it is not present in the configuration file, it defaults to auto. This is different from v0.13.0, where if it was not present (or was commented out) it defaulted to off.
This change to auto was introduced in commit 0ee81c7526c9db6e312704b36c1c113895c5ee41
As has been called out in other comments, you now need to explicitly disable hwaccel with:
ffmpeg:
  hwaccel_args: ' '
IMHO this is a breaking change for a lot of people and needs to be called out as such in the upgrade documentation. It's mentioned in the 'Fixes and Changes' section of the upgrade notes, but given that it causes entire-system crashes in some configurations that were not present in v0.13.0, I think it needs to be better highlighted as a potential problem.
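For anyone landing here, a minimal sketch of the explicit opt-out under the v0.14 semantics described above (the camera name is hypothetical; a camera-level hwaccel_args overrides the global one, as a later comment in this thread also does per camera):

ffmpeg:
  hwaccel_args: ' '   # blank string = explicitly off; omitting the key now means auto

cameras:
  example_cam:        # hypothetical camera name
    ffmpeg:
      hwaccel_args: preset-vaapi  # per-camera value overrides the global setting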
I have increased the SHM to double the calculated value and it looks promising.
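For reference, a rough sketch of that calculation in Python, assuming the per-camera frame-buffer formula from the Frigate docs of this era (the exact frame count and overhead constants vary by version, so treat the numbers as ballpark):

# Ballpark shm_size estimate; the constants (about 20 buffered YUV420
# frames per camera plus ~264 KiB of overhead) are assumptions from the
# docs, not authoritative for every Frigate version.
def shm_mb(width, height, frames=20, overhead=270480):
    return (width * height * 1.5 * frames + overhead) / 1048576  # YUV420 = 1.5 B/px

cameras = [(1280, 720)] * 4  # four cameras at 720p detect resolution, as in the OP
total = sum(shm_mb(w, h) for w, h in cameras)
print(f"minimum shm_size ~= {total:.0f} MB; doubled ~= {2 * total:.0f} MB")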
> What worked for me was turning off hardware acceleration in ffmpeg: hwaccel_args: ' '
I'm trying this; are you using a Raspberry Pi 4 too?
> I'm trying this; are you using a Raspberry Pi 4 too?

No, I am on x86, I should have said that. I think my issue is x86-focused. Maybe it will help on the Pi, though the Pi does not have much CPU to spare.
I was using "preset-rpi-64-h264" for the RPi, and then tried commenting it out, but same results :/
I just upgraded to 0.14.0 after many months using 0.13.2 on Docker without problems. Less than 1 hour after the upgrade, the full host crashed. Running on a 12th Gen Intel(R) Core(TM) i5-1235U. Will now try hwacess_args: ' ' to check if the problem persists.
> I just upgraded to 0.14.0 [...] Will now try hwacess_args: ' ' to check if the problem persists.
Any update? That method didn't work for me on a RPi 4 8 GB... Maybe we should all roll back to 0.13.2?
What is the latest stable version? I'm done, I just want this to work.
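For anyone wanting to try that, a sketch of pinning the image tag instead of tracking :stable (whether a 0.14 config runs unchanged on 0.13.2 is not something this thread settles, so back up your config first):

services:
  frigate:
    image: ghcr.io/blakeblackshear/frigate:0.13.2  # pinned version instead of :stable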
> Any update? That method didn't work for me on a RPi 4 8 GB... Maybe we should all roll back to 0.13.2?
In my case 4 days with no crashes after using the hwacess_args: ' ' option
> In my case 4 days with no crashes after using the hwacess_args: ' ' option
No change in my case :/
Having the same problem with an Intel NUC i3 7th generation. The issues started with Frigate 0.13 but got much worse with 0.14.
Tried going with hwaccel_args: '' which made the problem disappear for a few good days, but had another crash today. I read somewhere there was a driver change in one of the latest Frigate majors; I suspect it might be related..
> Tried going with hwacess_args: ' ' which made the problem disappear for a few good days, but had another crash today.
Exactly that, but on a RPi 4.
Question: is it hwacess_args or hwaccel_args?
hwaccel_args: ' '
worked for a bit for me, but ffmpeg is being OOM-killed again and my system is crashing.
2024-08-19T07:57:25-07:00 crow kernel: ffmpeg invoked oom-killer: gfp_mask=0x8c40(GFP_NOFS|__GFP_NOFAIL), order=0, oom_score_adj=0
2024-08-19T07:57:25-07:00 crow kernel: CPU: 2 PID: 20717 Comm: ffmpeg Tainted: P O 6.1.79-Unraid #1
2024-08-19T07:57:25-07:00 crow kernel: Hardware name: retsamarret 000-F4423-FBA004-2000-N/Default string, BIOS 5.19 06/24/2022
2024-08-19T07:57:25-07:00 crow kernel: Call Trace:
2024-08-19T07:57:25-07:00 crow kernel: <TASK>
2024-08-19T07:57:25-07:00 crow kernel: dump_stack_lvl+0x44/0x5c
2024-08-19T07:57:25-07:00 crow kernel: dump_header+0x4a/0x211
2024-08-19T07:57:25-07:00 crow kernel: oom_kill_process+0x80/0x111
2024-08-19T07:57:25-07:00 crow kernel: out_of_memory+0x3b3/0x3e5
2024-08-19T07:57:25-07:00 crow kernel: mem_cgroup_out_of_memory+0x7c/0xb2
2024-08-19T07:57:25-07:00 crow kernel: try_charge_memcg+0x44a/0x5ad
2024-08-19T07:57:25-07:00 crow kernel: charge_memcg+0x31/0x79
2024-08-19T07:57:25-07:00 crow kernel: __mem_cgroup_charge+0x29/0x41
2024-08-19T07:57:25-07:00 crow kernel: __filemap_add_folio+0xc6/0x358
2024-08-19T07:57:25-07:00 crow kernel: ? lruvec_page_state+0x46/0x46
2024-08-19T07:57:25-07:00 crow kernel: filemap_add_folio+0x37/0x91
2024-08-19T07:57:25-07:00 crow kernel: __filemap_get_folio+0x1b8/0x213
2024-08-19T07:57:25-07:00 crow kernel: pagecache_get_page+0x13/0x63
2024-08-19T07:57:25-07:00 crow kernel: alloc_extent_buffer+0x12d/0x38b
2024-08-19T07:57:25-07:00 crow kernel: ? slab_post_alloc_hook+0x4d/0x15e
2024-08-19T07:57:25-07:00 crow kernel: read_tree_block+0x21/0x7f
2024-08-19T07:57:25-07:00 crow kernel: read_block_for_search+0x220/0x2a1
2024-08-19T07:57:25-07:00 crow kernel: btrfs_search_slot+0x737/0x829
2024-08-19T07:57:25-07:00 crow kernel: ? slab_post_alloc_hook+0x4d/0x15e
2024-08-19T07:57:25-07:00 crow kernel: btrfs_lookup_csum+0x5b/0xfd
2024-08-19T07:57:25-07:00 crow kernel: btrfs_lookup_bio_sums+0x1bf/0x463
2024-08-19T07:57:25-07:00 crow kernel: btrfs_submit_data_read_bio+0x4a/0x76
2024-08-19T07:57:25-07:00 crow kernel: submit_one_bio+0x8a/0x9f
2024-08-19T07:57:25-07:00 crow kernel: extent_readahead+0x22b/0x255
2024-08-19T07:57:25-07:00 crow kernel: ? btrfs_repair_one_sector+0x28d/0x28d
2024-08-19T07:57:25-07:00 crow kernel: read_pages+0x47/0xf7
2024-08-19T07:57:25-07:00 crow kernel: page_cache_ra_unbounded+0x10e/0x151
2024-08-19T07:57:25-07:00 crow kernel: filemap_fault+0x2ea/0x52f
2024-08-19T07:57:25-07:00 crow kernel: __do_fault+0x2a/0x6b
2024-08-19T07:57:25-07:00 crow kernel: __handle_mm_fault+0xa22/0xcf9
2024-08-19T07:57:25-07:00 crow kernel: handle_mm_fault+0x13d/0x20f
2024-08-19T07:57:25-07:00 crow kernel: do_user_addr_fault+0x2c3/0x48d
2024-08-19T07:57:25-07:00 crow kernel: exc_page_fault+0xfb/0x11d
2024-08-19T07:57:25-07:00 crow kernel: asm_exc_page_fault+0x22/0x30
2024-08-19T07:57:25-07:00 crow kernel: RIP: 0033:0x55a605d88be8
2024-08-19T07:57:25-07:00 crow kernel: Code: Unable to access opcode bytes at 0x55a605d88bbe.
2024-08-19T07:57:25-07:00 crow kernel: RSP: 002b:00007ffd62166408 EFLAGS: 00010246
2024-08-19T07:57:25-07:00 crow kernel: RAX: 0000000000000003 RBX: 0000000000000001 RCX: 0000000000000000
2024-08-19T07:57:25-07:00 crow kernel: RDX: 0000000000000001 RSI: 00007ffd621664f7 RDI: 0000000000000003
2024-08-19T07:57:25-07:00 crow kernel: RBP: 000055a60d1f3280 R08: 0000000000000001 R09: 0000000000000001
2024-08-19T07:57:25-07:00 crow kernel: R10: 000055a60d1efd80 R11: 0000000000000202 R12: 00007ffd621664f7
2024-08-19T07:57:25-07:00 crow kernel: R13: 0000000000000001 R14: 0000000000000001 R15: 0000000000000005
2024-08-19T07:57:25-07:00 crow kernel: </TASK>
2024-08-19T07:57:25-07:00 crow kernel: memory: usage 3145740kB, limit 3145728kB, failcnt 64546
2024-08-19T07:57:25-07:00 crow kernel: swap: usage 0kB, limit 9007199254740988kB, failcnt 0
2024-08-19T07:57:25-07:00 crow kernel: Memory cgroup stats for /docker/591e44d1ea60bd26d23774afdf43e64e859e4b016e40778ce34674f055a9ec33:
2024-08-19T07:57:25-07:00 crow kernel: anon 3065667584
2024-08-19T07:57:25-07:00 crow kernel: file 112300032
2024-08-19T07:57:25-07:00 crow kernel: kernel 42692608
2024-08-19T07:57:25-07:00 crow kernel: kernel_stack 5505024
2024-08-19T07:57:25-07:00 crow kernel: pagetables 16973824
2024-08-19T07:57:25-07:00 crow kernel: sec_pagetables 0
2024-08-19T07:57:25-07:00 crow kernel: percpu 216
2024-08-19T07:57:25-07:00 crow kernel: sock 569344
2024-08-19T07:57:25-07:00 crow kernel: vmalloc 12288
2024-08-19T07:57:25-07:00 crow kernel: shmem 93908992
2024-08-19T07:57:25-07:00 crow kernel: file_mapped 5455872
2024-08-19T07:57:25-07:00 crow kernel: file_dirty 20480
2024-08-19T07:57:25-07:00 crow kernel: file_writeback 0
2024-08-19T07:57:25-07:00 crow kernel: swapcached 0
2024-08-19T07:57:25-07:00 crow kernel: anon_thp 44040192
2024-08-19T07:57:25-07:00 crow kernel: file_thp 0
2024-08-19T07:57:25-07:00 crow kernel: shmem_thp 0
2024-08-19T07:57:25-07:00 crow kernel: inactive_anon 3152068608
2024-08-19T07:57:25-07:00 crow kernel: active_anon 7507968
2024-08-19T07:57:25-07:00 crow kernel: inactive_file 18370560
2024-08-19T07:57:25-07:00 crow kernel: active_file 20480
2024-08-19T07:57:25-07:00 crow kernel: unevictable 0
2024-08-19T07:57:25-07:00 crow kernel: slab_reclaimable 8292064
2024-08-19T07:57:25-07:00 crow kernel: slab_unreclaimable 10761760
2024-08-19T07:57:25-07:00 crow kernel: slab 19053824
2024-08-19T07:57:25-07:00 crow kernel: workingset_refault_anon 0
2024-08-19T07:57:25-07:00 crow kernel: workingset_refault_file 153055
2024-08-19T07:57:25-07:00 crow kernel: kworker/u8:3 invoked oom-killer: gfp_mask=0x8c40(GFP_NOFS|__GFP_NOFAIL), order=0, oom_score_adj=0
2024-08-19T07:57:25-07:00 crow kernel: CPU: 3 PID: 11060 Comm: kworker/u8:3 Tainted: P O 6.1.79-Unraid #1
2024-08-19T07:57:25-07:00 crow kernel: Hardware name: retsamarret 000-F4423-FBA004-2000-N/Default string, BIOS 5.19 06/24/2022
2024-08-19T07:57:25-07:00 crow kernel: Workqueue: loop2 loop_workfn
2024-08-19T07:57:25-07:00 crow kernel: Call Trace:
2024-08-19T07:57:25-07:00 crow kernel: <TASK>
2024-08-19T07:57:25-07:00 crow kernel: dump_stack_lvl+0x44/0x5c
2024-08-19T07:57:25-07:00 crow kernel: dump_header+0x4a/0x211
2024-08-19T07:57:25-07:00 crow kernel: oom_kill_process+0x80/0x111
2024-08-19T07:57:25-07:00 crow kernel: out_of_memory+0x3b3/0x3e5
2024-08-19T07:57:25-07:00 crow kernel: mem_cgroup_out_of_memory+0x7c/0xb2
2024-08-19T07:57:25-07:00 crow kernel: try_charge_memcg+0x44a/0x5ad
2024-08-19T07:57:25-07:00 crow kernel: charge_memcg+0x31/0x79
2024-08-19T07:57:25-07:00 crow kernel: __mem_cgroup_charge+0x29/0x41
2024-08-19T07:57:25-07:00 crow kernel: __filemap_add_folio+0xc6/0x358
2024-08-19T07:57:25-07:00 crow kernel: ? lruvec_page_state+0x46/0x46
2024-08-19T07:57:25-07:00 crow kernel: filemap_add_folio+0x37/0x91
2024-08-19T07:57:25-07:00 crow kernel: __filemap_get_folio+0x1b8/0x213
2024-08-19T07:57:25-07:00 crow kernel: pagecache_get_page+0x13/0x63
2024-08-19T07:57:25-07:00 crow kernel: alloc_extent_buffer+0x12d/0x38b
2024-08-19T07:57:25-07:00 crow kernel: ? slab_post_alloc_hook+0x4d/0x15e
2024-08-19T07:57:25-07:00 crow kernel: read_tree_block+0x21/0x7f
2024-08-19T07:57:25-07:00 crow kernel: read_block_for_search+0x220/0x2a1
2024-08-19T07:57:25-07:00 crow kernel: btrfs_search_slot+0x737/0x829
2024-08-19T07:57:25-07:00 crow kernel: ? slab_post_alloc_hook+0x4d/0x15e
2024-08-19T07:57:25-07:00 crow kernel: btrfs_lookup_csum+0x5b/0xfd
2024-08-19T07:57:25-07:00 crow kernel: btrfs_lookup_bio_sums+0x1bf/0x463
2024-08-19T07:57:25-07:00 crow kernel: btrfs_submit_data_read_bio+0x4a/0x76
2024-08-19T07:57:25-07:00 crow kernel: submit_one_bio+0x8a/0x9f
2024-08-19T07:57:25-07:00 crow kernel: extent_readahead+0x22b/0x255
2024-08-19T07:57:25-07:00 crow kernel: ? btrfs_repair_one_sector+0x28d/0x28d
2024-08-19T07:57:25-07:00 crow kernel: read_pages+0x47/0xf7
2024-08-19T07:57:25-07:00 crow kernel: page_cache_ra_unbounded+0x10e/0x151
2024-08-19T07:57:25-07:00 crow kernel: filemap_get_pages+0x248/0x50c
2024-08-19T07:57:25-07:00 crow kernel: ? __accumulate_pelt_segments+0x29/0x3f
2024-08-19T07:57:25-07:00 crow kernel: filemap_read+0xb8/0x26f
2024-08-19T07:57:25-07:00 crow kernel: ? update_load_avg+0x46/0x398
2024-08-19T07:57:25-07:00 crow kernel: do_iter_readv_writev+0x93/0xdd
2024-08-19T07:57:25-07:00 crow kernel: do_iter_read+0x74/0xb9
2024-08-19T07:57:25-07:00 crow kernel: loop_process_work+0x4b7/0x607
2024-08-19T07:57:25-07:00 crow kernel: process_one_work+0x1a8/0x295
2024-08-19T07:57:25-07:00 crow kernel: worker_thread+0x18b/0x244
2024-08-19T07:57:25-07:00 crow kernel: ? rescuer_thread+0x281/0x281
2024-08-19T07:57:25-07:00 crow kernel: kthread+0xe4/0xef
2024-08-19T07:57:25-07:00 crow kernel: ? kthread_complete_and_exit+0x1b/0x1b
2024-08-19T07:57:25-07:00 crow kernel: ret_from_fork+0x1f/0x30
2024-08-19T07:57:25-07:00 crow kernel: </TASK>
2024-08-19T07:57:25-07:00 crow kernel: memory: usage 3145740kB, limit 3145728kB, failcnt 64548
2024-08-19T07:57:25-07:00 crow kernel: swap: usage 0kB, limit 9007199254740988kB, failcnt 0
2024-08-19T07:57:25-07:00 crow kernel: Memory cgroup stats for /docker/591e44d1ea60bd26d23774afdf43e64e859e4b016e40778ce34674f055a9ec33:
2024-08-19T07:57:25-07:00 crow kernel: anon 3024125952
2024-08-19T07:57:25-07:00 crow kernel: file 112316416
2024-08-19T07:57:25-07:00 crow kernel: kernel 42590208
2024-08-19T07:57:25-07:00 crow kernel: kernel_stack 5406720
2024-08-19T07:57:25-07:00 crow kernel: pagetables 16969728
2024-08-19T07:57:25-07:00 crow kernel: sec_pagetables 0
2024-08-19T07:57:25-07:00 crow kernel: percpu 216
2024-08-19T07:57:25-07:00 crow kernel: sock 569344
2024-08-19T07:57:25-07:00 crow kernel: vmalloc 12288
2024-08-19T07:57:25-07:00 crow kernel: shmem 93908992
2024-08-19T07:57:25-07:00 crow kernel: file_mapped 5455872
2024-08-19T07:57:25-07:00 crow kernel: file_dirty 20480
2024-08-19T07:57:25-07:00 crow kernel: file_writeback 0
2024-08-19T07:57:25-07:00 crow kernel: swapcached 0
2024-08-19T07:57:25-07:00 crow kernel: anon_thp 41943040
2024-08-19T07:57:25-07:00 crow kernel: file_thp 0
2024-08-19T07:57:25-07:00 crow kernel: shmem_thp 0
2024-08-19T07:57:25-07:00 crow kernel: inactive_anon 3152068608
2024-08-19T07:57:25-07:00 crow kernel: active_anon 7507968
2024-08-19T07:57:25-07:00 crow kernel: inactive_file 18378752
2024-08-19T07:57:25-07:00 crow kernel: active_file 12288
2024-08-19T07:57:25-07:00 crow kernel: unevictable 0
2024-08-19T07:57:25-07:00 crow kernel: slab_reclaimable 8292064
2024-08-19T07:57:25-07:00 crow kernel: slab_unreclaimable 10761760
2024-08-19T07:57:25-07:00 crow kernel: slab 19053824
2024-08-19T07:57:25-07:00 crow kernel: workingset_refault_anon 0
2024-08-19T07:57:25-07:00 crow kernel: workingset_refault_file 153059
2024-08-19T07:57:25-07:00 crow kernel: workingset_activate_anon 0
2024-08-19T07:57:25-07:00 crow kernel: workingset_activate_file 29236
2024-08-19T07:57:25-07:00 crow kernel: workingset_restore_anon 0
2024-08-19T07:57:25-07:00 crow kernel: workingset_restore_file 5014
2024-08-19T07:57:25-07:00 crow kernel: workingset_nodereclaim 0
2024-08-19T07:57:25-07:00 crow kernel: pgscan 1670174
2024-08-19T07:57:25-07:00 crow kernel: pgsteal 353145
2024-08-19T07:57:25-07:00 crow kernel: pgscan_kswapd 273845
2024-08-19T07:57:25-07:00 crow kernel: pgscan_direct 1396329
2024-08-19T07:57:25-07:00 crow kernel: pgsteal_kswapd 250912
2024-08-19T07:57:25-07:00 crow kernel: pgsteal_direct 102233
2024-08-19T07:57:25-07:00 crow kernel: pgfault 8761181
2024-08-19T07:57:25-07:00 crow kernel: pgmajfault 512
2024-08-19T07:57:25-07:00 crow kernel: pgrefill 79526
2024-08-19T07:57:25-07:00 crow kernel: pgactivate 4352164
2024-08-19T07:57:25-07:00 crow kernel: pgdeactivate 67897
2024-08-19T07:57:25-07:00 crow kernel: pglazyfree 0
2024-08-19T07:57:25-07:00 crow kernel: pglazyfreed 0
2024-08-19T07:57:25-07:00 crow kernel: thp_fault_alloc 125
2024-08-19T07:57:25-07:00 crow kernel: Tasks state (memory values in pages):
2024-08-19T07:57:25-07:00 crow kernel: [ pid ] uid tgid total_vm rss pgtables_bytes swapents oom_score_adj name
2024-08-19T07:57:25-07:00 crow kernel: [ 19719] 0 19719 52 5 24576 0 0 s6-svscan
2024-08-19T07:57:25-07:00 crow kernel: [ 19773] 0 19773 53 4 24576 0 0 s6-supervise
2024-08-19T07:57:25-07:00 crow kernel: [ 19774] 0 19774 50 1 24576 0 0 s6-linux-init-s
2024-08-19T07:57:25-07:00 crow kernel: [ 19782] 0 19782 53 5 24576 0 0 s6-supervise
2024-08-19T07:57:25-07:00 crow kernel: [ 19783] 0 19783 53 5 24576 0 0 s6-supervise
2024-08-19T07:57:25-07:00 crow kernel: [ 19784] 0 19784 53 6 24576 0 0 s6-supervise
2024-08-19T07:57:25-07:00 crow kernel: [ 19785] 0 19785 53 5 24576 0 0 s6-supervise
2024-08-19T07:57:25-07:00 crow kernel: [ 19786] 0 19786 53 6 24576 0 0 s6-supervise
2024-08-19T07:57:25-07:00 crow kernel: [ 19787] 0 19787 53 4 24576 0 0 s6-supervise
2024-08-19T07:57:25-07:00 crow kernel: [ 19788] 0 19788 53 5 24576 0 0 s6-supervise
2024-08-19T07:57:25-07:00 crow kernel: [ 19789] 0 19789 53 5 24576 0 0 s6-supervise
2024-08-19T07:57:25-07:00 crow kernel: [ 19790] 0 19790 53 5 24576 0 0 s6-supervise
2024-08-19T07:57:25-07:00 crow kernel: [ 19791] 0 19791 53 5 24576 0 0 s6-supervise
2024-08-19T07:57:25-07:00 crow kernel: [ 19792] 0 19792 53 4 24576 0 0 s6-supervise
2024-08-19T07:57:25-07:00 crow kernel: [ 19802] 0 19802 131 16 24576 0 0 s6-fdholderd
2024-08-19T07:57:25-07:00 crow kernel: [ 19805] 0 19805 47 1 24576 0 0 s6-ipcserverd
2024-08-19T07:57:25-07:00 crow kernel: [ 19844] 65534 19844 70 6 36864 0 0 s6-log
2024-08-19T07:57:25-07:00 crow kernel: [ 19845] 65534 19845 69 5 36864 0 0 s6-log
2024-08-19T07:57:25-07:00 crow kernel: [ 19846] 65534 19846 70 5 36864 0 0 s6-log
2024-08-19T07:57:25-07:00 crow kernel: [ 19848] 65534 19848 69 5 36864 0 0 s6-log
2024-08-19T07:57:25-07:00 crow kernel: [ 19858] 0 19858 310156 5819 143360 0 0 go2rtc
2024-08-19T07:57:25-07:00 crow kernel: [ 19879] 0 19879 710697 54647 1499136 0 0 python3
2024-08-19T07:57:25-07:00 crow kernel: [ 19881] 0 19881 973 71 45056 0 0 bash
2024-08-19T07:57:25-07:00 crow kernel: [ 19890] 0 19890 148781 1423 106496 0 0 nginx
2024-08-19T07:57:25-07:00 crow kernel: [ 19938] 0 19938 165386 841 221184 0 0 nginx
2024-08-19T07:57:25-07:00 crow kernel: [ 19939] 0 19939 165386 786 221184 0 0 nginx
2024-08-19T07:57:25-07:00 crow kernel: [ 19940] 0 19940 165386 786 221184 0 0 nginx
2024-08-19T07:57:25-07:00 crow kernel: [ 19942] 0 19942 165450 918 221184 0 0 nginx
2024-08-19T07:57:25-07:00 crow kernel: [ 19950] 0 19950 148828 651 77824 0 0 nginx
2024-08-19T07:57:25-07:00 crow kernel: [ 20097] 0 20097 973 74 45056 0 0 bash
2024-08-19T07:57:25-07:00 crow kernel: [ 20190] 0 20190 171890 29415 495616 0 0 frigate.logger
2024-08-19T07:57:25-07:00 crow kernel: [ 20623] 0 20623 293457 34750 724992 0 0 frigate.recordi
2024-08-19T07:57:25-07:00 crow kernel: [ 20632] 0 20632 240835 32231 557056 0 0 frigate.review_
2024-08-19T07:57:25-07:00 crow kernel: [ 20652] 0 20652 3625 1320 65536 0 0 python3
2024-08-19T07:57:25-07:00 crow kernel: [ 20653] 0 20653 346080 34316 651264 0 0 frigate.detecto
2024-08-19T07:57:25-07:00 crow kernel: [ 20655] 0 20655 429748 33377 774144 0 0 frigate.output
2024-08-19T07:57:25-07:00 crow kernel: [ 20672] 0 20672 446961 33274 757760 0 0 frigate.process
2024-08-19T07:57:25-07:00 crow kernel: [ 20673] 0 20673 454104 40373 831488 0 0 frigate.process
2024-08-19T07:57:25-07:00 crow kernel: [ 20675] 0 20675 454374 40317 831488 0 0 frigate.process
2024-08-19T07:57:25-07:00 crow kernel: [ 20686] 0 20686 447627 33245 688128 0 0 frigate.capture
2024-08-19T07:57:25-07:00 crow kernel: [ 20694] 0 20694 450412 33964 692224 0 0 frigate.capture
2024-08-19T07:57:25-07:00 crow kernel: [ 20701] 0 20701 100950 3579 225280 0 0 ffmpeg
2024-08-19T07:57:25-07:00 crow kernel: [ 20704] 0 20704 450412 33812 692224 0 0 frigate.capture
2024-08-19T07:57:25-07:00 crow kernel: [ 20714] 0 20714 102566 7892 266240 0 0 ffmpeg
2024-08-19T07:57:25-07:00 crow kernel: [ 20717] 0 20717 32225 1759 184320 0 0 ffmpeg
2024-08-19T07:57:25-07:00 crow kernel: [ 20746] 0 20746 31705 1249 163840 0 0 ffmpeg
2024-08-19T07:57:25-07:00 crow kernel: [ 20751] 0 20751 31705 1249 163840 0 0 ffmpeg
2024-08-19T07:57:25-07:00 crow kernel: [ 20756] 0 20756 31705 742 163840 0 0 ffmpeg
2024-08-19T07:57:25-07:00 crow kernel: [ 20761] 0 20761 31705 741 163840 0 0 ffmpeg
2024-08-19T07:57:25-07:00 crow kernel: [ 31038] 0 31038 622 23 40960 0 0 sleep
2024-08-19T07:57:25-07:00 crow kernel: [ 31204] 0 31204 622 22 40960 0 0 sleep
2024-08-19T07:57:25-07:00 crow kernel: oom-kill:constraint=CONSTRAINT_MEMCG,nodemask=(null),cpuset=/,mems_allowed=0,oom_memcg=/docker/591e44d1ea60bd26d23774afdf43e64e859e4b016e40778ce34674f055a9ec33,task_memcg=/docker/591e44d1ea60bd26d23774afdf43e64e859e4b016e40778ce34674f055a9ec33,task=python3,pid=19879,uid=0
2024-08-19T07:57:25-07:00 crow kernel: Memory cgroup out of memory: Killed process 19879 (python3) total-vm:2842788kB, anon-rss:218408kB, file-rss:0kB, shmem-rss:180kB, UID:0 pgtables:1464kB oom_score_adj:0
2024-08-19T07:57:56-07:00 crow kernel: usb 2-1.3: reset SuperSpeed USB device number 5 using xhci_hcd
2024-08-19T07:57:56-07:00 crow kernel: usb 2-1.3: LPM exit latency is zeroed, disabling LPM.
Sorry, I don't know why I wrote it wrong here, or if it was some spell correction, but I'm using hwaccel_args.
Hi, same here; hwaccel_args: ' ' doesn't work, I just had a system hard lock again.
Does anyone know which version worked fine, and what is needed to make the config file compatible?
I tried applying some patches to the v0.14.0 branch to re-enable YOLOv8 support, as YOLOX and YOLO-NAS result in a whole-machine/kernel crash 24-48 h after starting Frigate, but I couldn't get past this issue, even after converting the models with OpenVINO 2024.1.0 or using the ultralytics YOLO model export for OpenVINO.
2024-08-23 12:39:33.352648883 [2024-08-23 12:39:33] detector.ov INFO : Starting detection process: 605
2024-08-23 12:39:38.543562018 Process detector:ov:
2024-08-23 12:39:38.544323258 Traceback (most recent call last):
2024-08-23 12:39:38.544436611 File "/usr/lib/python3.9/multiprocessing/process.py", line 315, in _bootstrap
2024-08-23 12:39:38.544438896 self.run()
2024-08-23 12:39:38.544440514 File "/usr/lib/python3.9/multiprocessing/process.py", line 108, in run
2024-08-23 12:39:38.544441967 self._target(*self._args, **self._kwargs)
2024-08-23 12:39:38.544443461 File "/opt/frigate/frigate/object_detection.py", line 125, in run_detector
2024-08-23 12:39:38.544444867 detections = object_detector.detect_raw(input_frame)
2024-08-23 12:39:38.544446309 File "/opt/frigate/frigate/object_detection.py", line 75, in detect_raw
2024-08-23 12:39:38.544447678 return self.detect_api.detect_raw(tensor_input=tensor_input)
2024-08-23 12:39:38.544449164 File "/opt/frigate/frigate/detectors/plugins/openvino.py", line 160, in detect_raw
2024-08-23 12:39:38.544450388 infer_request.infer(input_tensor)
2024-08-23 12:39:38.544451948 File "/usr/local/lib/python3.9/dist-packages/openvino/runtime/ie_api.py", line 132, in infer
2024-08-23 12:39:38.544453246 return OVDict(super().infer(_data_dispatch(
2024-08-23 12:39:38.544454685 RuntimeError: Exception from src/inference/src/cpp/infer_request.cpp:116:
2024-08-23 12:39:38.544456060 Exception from src/inference/src/cpp/infer_request.cpp:66:
2024-08-23 12:39:38.544500730 Check 'port.get_element_type() == tensor->get_element_type()' failed at src/plugins/intel_gpu/src/plugin/sync_infer_request.cpp:144:
2024-08-23 12:39:38.544502213 [GPU] Mismatch tensor and port type: f32 vs u8
Has anyone managed to get a working YOLOv8 export for OpenVINO 2024.1?
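One untested direction, sketched under the assumption that the f32-vs-u8 input mismatch above is the actual blocker: re-type the exported IR's input to u8 with OpenVINO's pre/post-processing API so it accepts the tensors Frigate feeds it (file names here are hypothetical):

# Hedged sketch: make a YOLOv8 OpenVINO IR accept u8 input by converting
# to f32 inside the graph. Paths are illustrative assumptions.
import openvino as ov
from openvino.preprocess import PrePostProcessor

core = ov.Core()
model = core.read_model("yolov8n.xml")  # hypothetical export path

ppp = PrePostProcessor(model)
ppp.input().tensor().set_element_type(ov.Type.u8)           # accept u8 from Frigate
ppp.input().preprocess().convert_element_type(ov.Type.f32)  # cast to f32 in-graph
model = ppp.build()

ov.save_model(model, "yolov8n_u8.xml")  # point Frigate's model path at this IR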
Just to provide another data point for what hardware experiences crashes: I'm running inference via OpenVINO on an i5-6500T with 3 cameras, but the crash reliably occurs even with only 1 camera connected.
Can I just share my models? I uploaded them here https://github.com/bean72/frigate-yolov8s-models They are Yolov8s though, but these have caused no crashes for me since I have switched to them. Here's my config:
detectors:
  ov:
    type: openvino
    device: AUTO
    model:
      path: /yolov8s/yolov8s.xml
model:
  width: 416 # 300
  height: 416 # 300
  input_tensor: nchw # nhwc
  input_pixel_format: bgr
  model_type: yolov8
  labelmap_path: /yolov8s/coco_80cl.txt
Edit: Sorry, I just noticed that you requested OpenVINO 2024.1. I don't know if this applies to you specifically, but I'm leaving it up for anyone else who may be having the same issues. This is for 0.13, not 0.14.
> Can I just share my models? I uploaded them here https://github.com/bean72/frigate-yolov8s-models ...
Are these models working for you in 0.14, the latest stable release?
If so, are you using a specific fork, or did you manually patch Frigate to enable YOLOv8 to work again?
No, not working in 0.14. Don't mind me, apparently I checked out early on a Friday 🤣 I've had my Frigate server down for maintenance while I sort out storage issues, and didn't realize there was a new release.
Just got the same issue. At the time, CPU usage was at 100% because I had the debug view open and there was a lot of motion. I'm trying to limit the CPU usage with Docker Compose and will see what happens later.
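A minimal sketch of such limits in Compose (the values are illustrative assumptions, not a recommendation; note that, as the OOM logs earlier in this thread show, a container memory limit makes the kernel kill processes inside the container, which may or may not keep the host itself up):

services:
  frigate:
    # ...existing config...
    cpus: "3.0"    # hard cap across all cores
    mem_limit: 2g  # memcg limit; OOM kills land inside the container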
Hello, I am having the same issue. Running an RPi 5 8 GB, Pi OS, hat with Coral TPU on PCIe, with the RPi 5 power supply. Also using Docker Compose; pretty much the same setup as the original post. There are no logs when the freeze happens. The CPU is running a bit high, which is strange, as Frigate does pick up the TPU, so it should offload the work. I have tried this, and I think it actually makes it worse: hwaccel_args: ' '. Only running 3 cameras; I can't add more, the CPU freaks out.
Not sure what to do next. I don't want to go the YOLO route, as it seems a bit hacky (illegal because of licensing) patching my own branch, and I would like to have a system I can upgrade.
mqtt:
  host: *
  user: *
  password: *

record:
  enabled: true
  retain:
    days: 1
    mode: motion
  events:
    retain:
      default: 5
      mode: active_objects

detectors:
  coral_pci:
    type: edgetpu
    device: pci

cameras:
  main_gate_camera:
    snapshots:
      enabled: true
    ffmpeg:
      inputs:
        - path: *
          roles:
            - detect
            - record
    detect:
      width: 1280
      height: 720

birdseye:
  enabled: True
  width: 1280
  height: 720

objects:
  filters:
    person:
      mask: 0.998,0.154,0.653,0.151,0.642,0.919,1,0.916

motion:
  mask: 0.986,0.082,0.984,0.029,0.858,0.029,0.858,0.084
Docker Compose
version: '3.9'
volumes:
  ssd:
    driver: local
    driver_opts:
      type: none
      o: bind
      device: /media/SSD
services:
  frigate:
    container_name: frigate
    privileged: true
    restart: unless-stopped
    image: ghcr.io/blakeblackshear/frigate:stable
    shm_size: "128mb" # Trying all sorts of config here.
    devices:
      - /dev/apex_0:/dev/apex_0
      - /dev/video*:/dev/video* # Trying all sorts of config here.
    volumes:
      - ./:/config:rw
      - /etc/localtime:/etc/localtime:ro
      - ssd:/media/frigate:rw
      - type: tmpfs
        target: /tmp/cache
        tmpfs:
          size: 1000000000
    ports:
      - "5000:5000"
      - "8554:8554"
      - "8555:8555/tcp"
      - "8555:8555/udp"
    environment:
      FRIGATE_RTSP_PASSWORD: *
I was having the same issue after upgrading to version 0.14. I am running on Unraid with Docker and a PCIe Coral. I was getting crashes about every 24 hours. I added the following to my config for each camera, and have had no issues for the past 5 days.
cameras:
  driveway:
    ffmpeg:
      hwaccel_args: ' '
I don't know if it matters, but watch the space in the config!
hwaccel_args: ' '
At least I have a stable configuration with the above line, whereas it will crash every other day if I change that line.
Using a NUC 8i6 with Docker Compose.
@Ceer123 did the crash happen around 2 am?
No, at all random times. I had it happen only 4 hours after restarting the container.
Are newer low-power CPUs like the N100 or N305 also affected?
It seems like although using hwaccel_args: ' ' reduces the number of crashes, it doesn't solve the problem (it now crashes every 1-2 days instead of every few hours).
I wonder how to draw the developers' attention to this issue. I saw that @blakeblackshear commented on this thread a long time ago.
We see the issues, but there's not much to be done / nothing that has a clear solution. The vast majority of these cases are related to Proxmox. Any number of things can cause an issue like this, many of which are host issues like the GPU driver, kernel, etc.
I manage multiple systems, including an N100 system that has been running 0.14 since before release with months of Frigate uptime.
An update from me: I run on amd64 under Home Assistant. For the past few weeks I've been running with hwaccel_args: preset-vaapi and it has been working fine. My guess is some kernel driver unrelated to this container was the cause.
> The vast majority of these cases are related to Proxmox.
In my case it's a Raspberry Pi 4 with Docker 🥲
> The vast majority of these cases are related to Proxmox.
In my case it's an Intel NUC i3 7th generation running HAOS. I don't think it's related to any specific deployment, as we've seen folks using NUCs, Pis, Docker, Proxmox, etc.
I'm 100% sure it's something that started around Frigate 0.13 and reproduces more with 0.14 - might it be related to increasing resource usage with each version?
EDIT: Is there anything I can do with HAOS to try and update the relevant drivers?
No you can't; HA OS controls that. And no, it's not related to increased resource usage; most releases have increased efficiency. Like I said, it's a majority of Proxmox users. In any case, it's always a good idea to create your own support discussion, as each case is individual.
@jdeath what OS and kernel version are you using currently? I am using an i5-6500T with Ubuntu 22.04 and kernel 5.15, facing almost weekly crashes with vaapi and qsv.
> hwaccel_args: ' ' worked for a bit for me, but ffmpeg is being OOM-killed again and my system is crashing. [full kernel OOM log quoted above]
24576 0 0 s6-linux-init-s 2024-08-19T07:57:25-07:00 crow kernel: [ 19782] 0 19782 53 5 24576 0 0 s6-supervise 2024-08-19T07:57:25-07:00 crow kernel: [ 19783] 0 19783 53 5 24576 0 0 s6-supervise 2024-08-19T07:57:25-07:00 crow kernel: [ 19784] 0 19784 53 6 24576 0 0 s6-supervise 2024-08-19T07:57:25-07:00 crow kernel: [ 19785] 0 19785 53 5 24576 0 0 s6-supervise 2024-08-19T07:57:25-07:00 crow kernel: [ 19786] 0 19786 53 6 24576 0 0 s6-supervise 2024-08-19T07:57:25-07:00 crow kernel: [ 19787] 0 19787 53 4 24576 0 0 s6-supervise 2024-08-19T07:57:25-07:00 crow kernel: [ 19788] 0 19788 53 5 24576 0 0 s6-supervise 2024-08-19T07:57:25-07:00 crow kernel: [ 19789] 0 19789 53 5 24576 0 0 s6-supervise 2024-08-19T07:57:25-07:00 crow kernel: [ 19790] 0 19790 53 5 24576 0 0 s6-supervise 2024-08-19T07:57:25-07:00 crow kernel: [ 19791] 0 19791 53 5 24576 0 0 s6-supervise 2024-08-19T07:57:25-07:00 crow kernel: [ 19792] 0 19792 53 4 24576 0 0 s6-supervise 2024-08-19T07:57:25-07:00 crow kernel: [ 19802] 0 19802 131 16 24576 0 0 s6-fdholderd 2024-08-19T07:57:25-07:00 crow kernel: [ 19805] 0 19805 47 1 24576 0 0 s6-ipcserverd 2024-08-19T07:57:25-07:00 crow kernel: [ 19844] 65534 19844 70 6 36864 0 0 s6-log 2024-08-19T07:57:25-07:00 crow kernel: [ 19845] 65534 19845 69 5 36864 0 0 s6-log 2024-08-19T07:57:25-07:00 crow kernel: [ 19846] 65534 19846 70 5 36864 0 0 s6-log 2024-08-19T07:57:25-07:00 crow kernel: [ 19848] 65534 19848 69 5 36864 0 0 s6-log 2024-08-19T07:57:25-07:00 crow kernel: [ 19858] 0 19858 310156 5819 143360 0 0 go2rtc 2024-08-19T07:57:25-07:00 crow kernel: [ 19879] 0 19879 710697 54647 1499136 0 0 python3 2024-08-19T07:57:25-07:00 crow kernel: [ 19881] 0 19881 973 71 45056 0 0 bash 2024-08-19T07:57:25-07:00 crow kernel: [ 19890] 0 19890 148781 1423 106496 0 0 nginx 2024-08-19T07:57:25-07:00 crow kernel: [ 19938] 0 19938 165386 841 221184 0 0 nginx 2024-08-19T07:57:25-07:00 crow kernel: [ 19939] 0 19939 165386 786 221184 0 0 nginx 2024-08-19T07:57:25-07:00 crow kernel: [ 19940] 0 19940 165386 786 221184 0 0 nginx 2024-08-19T07:57:25-07:00 crow kernel: [ 19942] 0 19942 165450 918 221184 0 0 nginx 2024-08-19T07:57:25-07:00 crow kernel: [ 19950] 0 19950 148828 651 77824 0 0 nginx 2024-08-19T07:57:25-07:00 crow kernel: [ 20097] 0 20097 973 74 45056 0 0 bash 2024-08-19T07:57:25-07:00 crow kernel: [ 20190] 0 20190 171890 29415 495616 0 0 frigate.logger 2024-08-19T07:57:25-07:00 crow kernel: [ 20623] 0 20623 293457 34750 724992 0 0 frigate.recordi 2024-08-19T07:57:25-07:00 crow kernel: [ 20632] 0 20632 240835 32231 557056 0 0 frigate.review_ 2024-08-19T07:57:25-07:00 crow kernel: [ 20652] 0 20652 3625 1320 65536 0 0 python3 2024-08-19T07:57:25-07:00 crow kernel: [ 20653] 0 20653 346080 34316 651264 0 0 frigate.detecto 2024-08-19T07:57:25-07:00 crow kernel: [ 20655] 0 20655 429748 33377 774144 0 0 frigate.output 2024-08-19T07:57:25-07:00 crow kernel: [ 20672] 0 20672 446961 33274 757760 0 0 frigate.process 2024-08-19T07:57:25-07:00 crow kernel: [ 20673] 0 20673 454104 40373 831488 0 0 frigate.process 2024-08-19T07:57:25-07:00 crow kernel: [ 20675] 0 20675 454374 40317 831488 0 0 frigate.process 2024-08-19T07:57:25-07:00 crow kernel: [ 20686] 0 20686 447627 33245 688128 0 0 frigate.capture 2024-08-19T07:57:25-07:00 crow kernel: [ 20694] 0 20694 450412 33964 692224 0 0 frigate.capture 2024-08-19T07:57:25-07:00 crow kernel: [ 20701] 0 20701 100950 3579 225280 0 0 ffmpeg 2024-08-19T07:57:25-07:00 crow kernel: [ 20704] 0 20704 450412 33812 692224 0 0 frigate.capture 2024-08-19T07:57:25-07:00 crow kernel: [ 
20714] 0 20714 102566 7892 266240 0 0 ffmpeg 2024-08-19T07:57:25-07:00 crow kernel: [ 20717] 0 20717 32225 1759 184320 0 0 ffmpeg 2024-08-19T07:57:25-07:00 crow kernel: [ 20746] 0 20746 31705 1249 163840 0 0 ffmpeg 2024-08-19T07:57:25-07:00 crow kernel: [ 20751] 0 20751 31705 1249 163840 0 0 ffmpeg 2024-08-19T07:57:25-07:00 crow kernel: [ 20756] 0 20756 31705 742 163840 0 0 ffmpeg 2024-08-19T07:57:25-07:00 crow kernel: [ 20761] 0 20761 31705 741 163840 0 0 ffmpeg 2024-08-19T07:57:25-07:00 crow kernel: [ 31038] 0 31038 622 23 40960 0 0 sleep 2024-08-19T07:57:25-07:00 crow kernel: [ 31204] 0 31204 622 22 40960 0 0 sleep 2024-08-19T07:57:25-07:00 crow kernel: oom-kill:constraint=CONSTRAINT_MEMCG,nodemask=(null),cpuset=/,mems_allowed=0,oom_memcg=/docker/591e44d1ea60bd26d23774afdf43e64e859e4b016e40778ce34674f055a9ec33,task_memcg=/docker/591e44d1ea60bd26d23774afdf43e64e859e4b016e40778ce34674f055a9ec33,task=python3,pid=19879,uid=0 2024-08-19T07:57:25-07:00 crow kernel: Memory cgroup out of memory: Killed process 19879 (python3) total-vm:2842788kB, anon-rss:218408kB, file-rss:0kB, shmem-rss:180kB, UID:0 pgtables:1464kB oom_score_adj:0 2024-08-19T07:57:56-07:00 crow kernel: usb 2-1.3: reset SuperSpeed USB device number 5 using xhci_hcd 2024-08-19T07:57:56-07:00 crow kernel: usb 2-1.3: LPM exit latency is zeroed, disabling LPM.
Same for me on Unraid.
@jdeath what OS and kernel version are you using currently? I'm on HAOS 13.1.
We see the issues, but there's not much to be done; nothing has a clear solution. The vast majority of these cases are related to Proxmox. Any number of things can cause an issue like this, many of which are host issues: GPU driver, kernel, etc.
I manage multiple systems, including an N100 system that was running 0.14 before release, with months of Frigate uptime.
@NickM-27, would you be able to provide some details on these systems so people here can compare and contrast the setups of working vs. non-working systems? It might help shed some light on things.
Here is my update (long story) :) I changed my config to include "hwaccel_args: ' '" on each camera; the space is very important. After making the change it has been running much better, about 5 days now. CPU usage was still very high, around 60%, which was strange because everything runs through the TPU at around 7.8 ms inference speed.

I didn't like the hwaccel_args: ' ' solution; it feels like a hack. So I checked out the code and started looking through it (very limited experience with Python). I found a few sections on hwaccel: it looks like if a space is assigned to hwaccel, it bypasses the "auto" assignment, which may cause issues with some setups. Auto is the default, and it also looks like auto only assigns presets for NVIDIA and VAAPI; my RPi has neither. I could not find what happens in that case, and the code base is big, so it's going to take me some time to track down. I decided to just try some of the other presets. I also noticed that ffmpeg: hwaccel_args: can be set both outside and inside the camera section; the auto function appears to use the outside one.
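For reference, a minimal sketch of the empty-string workaround described above (camera name and stream path are placeholders, not from my real setup):

ffmpeg:
  hwaccel_args: ' '  # single space: bypasses the "auto" hwaccel assignment
cameras:
  example_camera:
    ffmpeg:
      hwaccel_args: ' '  # per-camera override; the space is required
      inputs:
        - path: rtsp://user:password@192.168.1.100:554/stream
          roles:
            - detect
            - record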
I am now running "preset-rpi-64-h265". Not all my cameras support H.265, but it seems to be okay: CPU dropped from about 60% to around 45%, so it made a difference. Still not happy with the CPU usage, I tried a bunch of other configs.
In the end it came down to running the Frigate web UI in a browser on the RPi itself. I was working on the RPi in Firefox, with Portainer stats open on another PC; while jumping between Config and Preview on the RPi I noticed a significant jump in CPU usage. I closed the browser on the RPi and opened Frigate on another PC to look at the cameras: it streams perfectly, I would say even faster than on the RPi, and best of all CPU now runs at about 5-11% while streaming to another PC. I can live with this. :)

I think the browser was part of the problem (I always had it open): it uses so many resources on the RPi that when there is a lot of activity in Frigate, the system runs out of memory and crashes. I changed my recording settings as well. Here is my new config; hope this helps someone else.
mqtt:
host: *
user: *
password: *
detectors:
coral1:
type: edgetpu
device: pci:0
record:
enabled: true
expire_interval: 120
retain:
days: 0
events:
objects:
- person
retain:
mode: active_objects
ffmpeg:
hwaccel_args: "preset-rpi-64-h265"
cameras:
main_gate_camera:
ffmpeg:
hwaccel_args: "preset-rpi-64-h265"
inputs:
- path:
*
roles:
- detect
- record
motion:
mask: 0.858,0.024,0.981,0.026,0.983,0.091,0.859,0.09
carport_camera:
ffmpeg:
hwaccel_args: "preset-rpi-64-h265"
inputs:
- path:
*
roles:
- detect
- record
motion:
mask: 0.859,0.026,0.984,0.029,0.985,0.09,0.862,0.086
backyard_camera:
ffmpeg:
hwaccel_args: "preset-rpi-64-h265"
inputs:
- path:
*
roles:
- detect
- record
motion:
mask: 0.858,0.026,0.985,0.024,0.988,0.088,0.858,0.086
office_camera:
ffmpeg:
hwaccel_args: "preset-rpi-64-h265"
inputs:
- path:
*
roles:
- detect
- record
motion:
mask: 0.862,0.028,0.989,0.026,0.991,0.088,0.862,0.088
version: 0.14
Docker Compose:
volumes:
ssd:
driver: local
driver_opts:
type: none
o: bind
device: /media/SSD/Frigate
services:
frigate:
container_name: frigate
privileged: true
restart: unless-stopped
image: ghcr.io/blakeblackshear/frigate:stable
shm_size: "1024mb"
devices:
- /dev/apex_0:/dev/apex_0
volumes:
- ./:/config:rw
- /etc/localtime:/etc/localtime:ro
- ssd:/media/frigate:rw
- type: tmpfs
target: /tmp/cache
tmpfs:
size: 1000000000
ports:
- "5000:5000"
- "8554:8554"
- "8555:8555/tcp"
- "8555:8555/udp"
environment:
FRIGATE_RTSP_PASSWORD: *
Same for me on Unraid.
Another crash now, same ffmpeg process. I updated go2rtc, but nothing changed. This appeared with 0.14.
Sep 3 20:59:53 unRAID php-fpm[14472]: [WARNING] [pool www] child 723 exited on signal 9 (SIGKILL) after 247.518525 seconds from start
Sep 3 21:01:41 unRAID kernel: PMS LT ChangeSt invoked oom-killer: gfp_mask=0x140cca(GFP_HIGHUSER_MOVABLE|__GFP_COMP), order=0, oom_score_adj=0
Sep 3 21:01:41 unRAID kernel: CPU: 5 PID: 3637 Comm: PMS LT ChangeSt Tainted: P O 6.1.106-Unraid #1
Sep 3 21:01:41 unRAID kernel: Hardware name: Micro-Star International Co., Ltd. MS-7D07/MPG Z590 GAMING PLUS (MS-7D07), BIOS A.90 06/08/2023
Sep 3 21:01:41 unRAID kernel: Call Trace:
Sep 3 21:01:41 unRAID kernel: <TASK>
Sep 3 21:01:41 unRAID kernel: dump_stack_lvl+0x44/0x5c
Sep 3 21:01:41 unRAID kernel: dump_header+0x4a/0x211
Sep 3 21:01:41 unRAID kernel: oom_kill_process+0x80/0x111
Sep 3 21:01:41 unRAID kernel: out_of_memory+0x3b3/0x3e5
Sep 3 21:01:41 unRAID kernel: __alloc_pages_slowpath.constprop.0+0x780/0x97e
Sep 3 21:01:41 unRAID kernel: __alloc_pages+0x132/0x1e8
Sep 3 21:01:41 unRAID kernel: folio_alloc+0x14/0x35
Sep 3 21:01:41 unRAID kernel: __filemap_get_folio+0x185/0x213
Sep 3 21:01:41 unRAID kernel: ? preempt_latency_start+0x1e/0x46
Sep 3 21:01:41 unRAID kernel: filemap_fault+0x317/0x52f
Sep 3 21:01:41 unRAID kernel: __do_fault+0x2a/0x6b
Sep 3 21:01:41 unRAID kernel: __handle_mm_fault+0xa22/0xcf9
Sep 3 21:01:41 unRAID kernel: ? raw_spin_rq_unlock_irq+0x5/0x10
Sep 3 21:01:41 unRAID kernel: handle_mm_fault+0x13d/0x20f
Sep 3 21:01:41 unRAID kernel: do_user_addr_fault+0x2c3/0x465
Sep 3 21:01:41 unRAID kernel: exc_page_fault+0xfb/0x11d
Sep 3 21:01:41 unRAID kernel: asm_exc_page_fault+0x22/0x30
Sep 3 21:01:41 unRAID kernel: RIP: 0033:0x1499e968c291
Sep 3 21:01:41 unRAID kernel: Code: Unable to access opcode bytes at 0x1499e968c267.
Sep 3 21:01:41 unRAID kernel: RSP: 002b:00001499d969deb0 EFLAGS: 00010246
Sep 3 21:01:41 unRAID kernel: RAX: 00001499d969e118 RBX: 0000000000000000 RCX: 0000000000000010
Sep 3 21:01:41 unRAID kernel: RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
Sep 3 21:01:41 unRAID kernel: RBP: 00001499d969e050 R08: 0000000000000001 R09: 0000000000000000
Sep 3 21:01:41 unRAID kernel: R10: 00007ffcdd6ae080 R11: 000000000f08996e R12: 00001499e1f91220
Sep 3 21:01:41 unRAID kernel: R13: 00001499e99a50f8 R14: 0000000000000000 R15: 000000003b9aca00
Sep 3 21:01:41 unRAID kernel: </TASK>
Sep 3 21:01:41 unRAID kernel: Mem-Info:
Sep 3 21:01:41 unRAID kernel: active_anon:7951705 inactive_anon:7203745 isolated_anon:5
Sep 3 21:01:41 unRAID kernel: active_file:22993 inactive_file:15604 isolated_file:10
Sep 3 21:01:41 unRAID kernel: unevictable:12450 dirty:783 writeback:3
Sep 3 21:01:41 unRAID kernel: slab_reclaimable:170939 slab_unreclaimable:144864
Sep 3 21:01:41 unRAID kernel: mapped:77412 shmem:239502 pagetables:61189
Sep 3 21:01:41 unRAID kernel: sec_pagetables:0 bounce:0
Sep 3 21:01:41 unRAID kernel: kernel_misc_reclaimable:0
Sep 3 21:01:41 unRAID kernel: free:96024 free_pcp:0 free_cma:0
Sep 3 21:01:41 unRAID kernel: Node 0 active_anon:31806820kB inactive_anon:28814980kB active_file:93184kB inactive_file:61204kB unevictable:49800kB isolated(anon):20kB isolated(file):40kB mapped:309648kB dirty:3132kB writeback:12kB shmem:958008kB shmem_thp: 0kB shmem_pmdmapped: 0kB anon_thp: 13797376kB writeback_tmp:0kB kernel_stack:74784kB pagetables:244756kB sec_pagetables:0kB all_unreclaimable? no
Sep 3 21:01:41 unRAID kernel: Node 0 DMA free:15348kB boost:0kB min:12kB low:24kB high:36kB reserved_highatomic:0KB active_anon:0kB inactive_anon:0kB active_file:0kB inactive_file:0kB unevictable:0kB writepending:0kB present:15992kB managed:15364kB mlocked:0kB bounce:0kB free_pcp:0kB local_pcp:0kB free_cma:0kB
Sep 3 21:01:41 unRAID kernel: lowmem_reserve[]: 0 1727 64085 64085 64085
Sep 3 21:01:41 unRAID kernel: Node 0 DMA32 free:250332kB boost:0kB min:1820kB low:3588kB high:5356kB reserved_highatomic:0KB active_anon:847132kB inactive_anon:695756kB active_file:0kB inactive_file:712kB unevictable:0kB writepending:4kB present:1894400kB managed:1803192kB mlocked:0kB bounce:0kB free_pcp:0kB local_pcp:0kB free_cma:0kB
Sep 3 21:01:41 unRAID kernel: lowmem_reserve[]: 0 0 62357 62357 62357
Sep 3 21:01:41 unRAID kernel: Node 0 Normal free:118416kB boost:223232kB min:288976kB low:352828kB high:416680kB reserved_highatomic:0KB active_anon:30996836kB inactive_anon:28082076kB active_file:93216kB inactive_file:60664kB unevictable:49800kB writepending:3140kB present:65003520kB managed:63854140kB mlocked:176kB bounce:0kB free_pcp:0kB local_pcp:0kB free_cma:0kB
Sep 3 21:01:41 unRAID kernel: lowmem_reserve[]: 0 0 0 0 0
Sep 3 21:01:41 unRAID kernel: Node 0 DMA: 1*4kB (U) 0*8kB 1*16kB (U) 1*32kB (U) 1*64kB (U) 1*128kB (U) 1*256kB (U) 1*512kB (U) 0*1024kB 1*2048kB (M) 3*4096kB (M) = 15348kB
Sep 3 21:01:41 unRAID kernel: Node 0 DMA32: 168*4kB (UME) 138*8kB (UME) 224*16kB (ME) 223*32kB (ME) 194*64kB (UME) 127*128kB (ME) 86*256kB (UME) 56*512kB (UME) 43*1024kB (UME) 10*2048kB (UME) 23*4096kB (UME) = 250576kB
Sep 3 21:01:41 unRAID kernel: Node 0 Normal: 4369*4kB (UME) 5012*8kB (UME) 2415*16kB (UME) 657*32kB (UME) 9*64kB (UME) 0*128kB 0*256kB 0*512kB 0*1024kB 1*2048kB (U) 0*4096kB = 119860kB
Sep 3 21:01:41 unRAID kernel: Node 0 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=1048576kB
Sep 3 21:01:41 unRAID kernel: Node 0 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=2048kB
Sep 3 21:01:41 unRAID kernel: 279239 total pagecache pages
Sep 3 21:01:41 unRAID kernel: 0 pages in swap cache
Sep 3 21:01:41 unRAID kernel: Free swap = 0kB
Sep 3 21:01:41 unRAID kernel: Total swap = 0kB
Sep 3 21:01:41 unRAID kernel: 16728478 pages RAM
Sep 3 21:01:41 unRAID kernel: 0 pages HighMem/MovableOnly
Sep 3 21:01:41 unRAID kernel: 310304 pages reserved
Sep 3 21:01:41 unRAID kernel: 0 pages cma reserved
Sep 3 21:01:41 unRAID kernel: Tasks state (memory values in pages):
Sep 3 21:01:41 unRAID kernel: [ pid ] uid tgid total_vm rss pgtables_bytes swapents oom_score_adj name
Sep 3 21:01:41 unRAID kernel: [ 945] 0 945 4594 937 53248 0 -1000 udevd
Sep 3 21:01:41 unRAID kernel: [ 1402] 81 1402 1295 383 45056 0 0 dbus-daemon
Sep 3 21:01:41 unRAID kernel: [ 1413] 0 1413 1580 699 45056 0 0 elogind-daemon
Sep 3 21:01:41 unRAID kernel: [ 1554] 44 1554 18664 810 57344 0 0 ntpd
Sep 3 21:01:41 unRAID kernel: [ 1561] 0 1561 650 25 40960 0 0 acpid
Sep 3 21:01:41 unRAID kernel: [ 1579] 0 1579 643 418 45056 0 0 crond
Sep 3 21:01:41 unRAID kernel: [ 1583] 0 1583 664 350 40960 0 0 atd
Sep 3 21:01:41 unRAID kernel: [ 1769] 0 1769 800 37 40960 0 0 mcelog
Sep 3 21:01:41 unRAID kernel: [ 1848] 0 1848 1004 615 40960 0 0 monitor_nchan
Sep 3 21:01:41 unRAID kernel: [ 4060] 0 4060 1283 793 45056 0 0 atd
Sep 3 21:01:41 unRAID kernel: [ 4063] 0 4063 996 775 45056 0 0 sh
Sep 3 21:01:41 unRAID kernel: [ 4065] 0 4065 1115 761 45056 0 0 sh
Sep 3 21:01:41 unRAID kernel: [ 4066] 0 4066 693 233 40960 0 0 inotifywait
Sep 3 21:01:41 unRAID kernel: [ 10530] 0 10530 652 233 45056 0 0 agetty
Sep 3 21:01:41 unRAID kernel: [ 10531] 0 10531 652 211 40960 0 0 agetty
Sep 3 21:01:41 unRAID kernel: [ 10532] 0 10532 652 231 40960 0 0 agetty
Sep 3 21:01:41 unRAID kernel: [ 10533] 0 10533 652 219 40960 0 0 agetty
Sep 3 21:01:41 unRAID kernel: [ 10534] 0 10534 652 210 40960 0 0 agetty
Sep 3 21:01:41 unRAID kernel: [ 10535] 0 10535 652 218 40960 0 0 agetty
Sep 3 21:01:41 unRAID kernel: [ 10646] 0 10646 2096 162 57344 0 -1000 sshd
Sep 3 21:01:41 unRAID kernel: [ 10708] 0 10708 1081 59 45056 0 0 inetd
Sep 3 21:01:41 unRAID kernel: [ 10709] 0 10709 69919 571 77824 0 0 emhttpd
Sep 3 21:01:41 unRAID kernel: [ 14472] 0 14472 23209 2064 163840 0 0 php-fpm
Sep 3 21:01:41 unRAID kernel: [ 14928] 0 14928 37527 2817 86016 0 0 nginx
Sep 3 21:01:41 unRAID kernel: [ 15250] 0 15250 1283 793 45056 0 0 atd
Sep 3 21:01:41 unRAID kernel: [ 15253] 0 15253 997 741 40960 0 0 sh
Sep 3 21:01:41 unRAID kernel: [ 16588] 0 16588 35062 177 65536 0 0 shfs
Sep 3 21:01:41 unRAID kernel: [ 16598] 0 16598 174089 11170 217088 0 0 shfs
Sep 3 21:01:41 unRAID kernel: [ 16767] 0 16767 70376 680 77824 0 0 rsyslogd
Sep 3 21:01:41 unRAID kernel: [ 16938] 0 16938 1001 771 45056 0 0 update-settings
Sep 3 21:01:41 unRAID kernel: [ 16941] 0 16941 23944 3423 176128 0 0 pre-startup.php
Sep 3 21:01:41 unRAID kernel: [ 17140] 0 17140 37555 3025 86016 0 0 nginx
Sep 3 21:01:41 unRAID kernel: [ 19041] 0 19041 8110 1466 106496 0 0 virtlockd
Sep 3 21:01:41 unRAID kernel: [ 19049] 0 19049 8110 1477 98304 0 0 virtlogd
Sep 3 21:01:41 unRAID kernel: [ 19068] 0 19068 347410 1814 282624 0 0 libvirtd
Sep 3 21:01:41 unRAID kernel: [ 19285] 99 19285 1890 533 53248 0 0 dnsmasq
Sep 3 21:01:41 unRAID kernel: [ 19286] 0 19286 1857 83 53248 0 0 dnsmasq
Sep 3 21:01:41 unRAID kernel: [ 20507] 0 20507 2654 1155 57344 0 0 sudo
Sep 3 21:01:41 unRAID kernel: [ 20539] 0 20539 309507 5341 143360 0 0 unbalanced
Sep 3 21:01:41 unRAID kernel: [ 24402] 0 24402 330122 30756 368640 0 0 tailscaled
Sep 3 21:01:41 unRAID kernel: [ 24403] 0 24403 846 389 45056 0 0 grep
Sep 3 21:01:41 unRAID kernel: [ 26025] 0 26025 2382045 21550 1667072 0 0 dockerd
Sep 3 21:01:41 unRAID kernel: [ 26052] 0 26052 189235 7566 237568 0 0 containerd
Sep 3 21:01:41 unRAID kernel: [ 26306] 0 26306 177834 2606 86016 0 0 docker-proxy
Sep 3 21:01:41 unRAID kernel: [ 26313] 0 26313 177642 1587 77824 0 0 docker-proxy
Sep 3 21:01:41 unRAID kernel: [ 26327] 0 26327 177706 1586 81920 0 0 docker-proxy
Sep 3 21:01:41 unRAID kernel: [ 26333] 0 26333 177770 2095 81920 0 0 docker-proxy
Sep 3 21:01:41 unRAID kernel: [ 26346] 0 26346 177706 2094 81920 0 0 docker-proxy
Sep 3 21:01:41 unRAID kernel: [ 26353] 0 26353 177770 2089 81920 0 0 docker-proxy
Sep 3 21:01:41 unRAID kernel: [ 26370] 0 26370 180677 2649 135168 0 1 containerd-shim
Sep 3 21:01:41 unRAID kernel: [ 26390] 0 26390 444130 21105 913408 0 0 gluetun-entrypo
Sep 3 21:01:41 unRAID kernel: [ 27463] 0 27463 177706 2099 81920 0 0 docker-proxy
Sep 3 21:01:41 unRAID kernel: [ 27469] 0 27469 177706 1587 81920 0 0 docker-proxy
Sep 3 21:01:41 unRAID kernel: [ 27494] 0 27494 180677 2528 131072 0 1 containerd-shim
Sep 3 21:01:41 unRAID kernel: [ 27540] 999 27540 133145 73325 1056768 0 0 mongod
Sep 3 21:01:41 unRAID kernel: [ 27807] 0 27807 177770 2604 90112 0 0 docker-proxy
Sep 3 21:01:41 unRAID kernel: [ 27814] 0 27814 177706 2095 81920 0 0 docker-proxy
Sep 3 21:01:41 unRAID kernel: [ 27828] 0 27828 177706 1591 81920 0 0 docker-proxy
Sep 3 21:01:41 unRAID kernel: [ 27835] 0 27835 177770 1586 77824 0 0 docker-proxy
Sep 3 21:01:41 unRAID kernel: [ 27848] 0 27848 177706 2094 81920 0 0 docker-proxy
Sep 3 21:01:41 unRAID kernel: [ 27854] 0 27854 177706 1586 73728 0 0 docker-proxy
Sep 3 21:01:41 unRAID kernel: [ 27866] 0 27866 177834 2096 81920 0 0 docker-proxy
Sep 3 21:01:41 unRAID kernel: [ 27872] 0 27872 177770 2092 81920 0 0 docker-proxy
Sep 3 21:01:41 unRAID kernel: [ 27885] 0 27885 177706 1586 77824 0 0 docker-proxy
Sep 3 21:01:41 unRAID kernel: [ 27891] 0 27891 177706 1585 73728 0 0 docker-proxy
Sep 3 21:01:41 unRAID kernel: [ 27904] 0 27904 177706 1587 77824 0 0 docker-proxy
Sep 3 21:01:41 unRAID kernel: [ 27911] 0 27911 177770 2088 81920 0 0 docker-proxy
Sep 3 21:01:41 unRAID kernel: [ 27924] 0 27924 177706 2094 77824 0 0 docker-proxy
Sep 3 21:01:41 unRAID kernel: [ 27931] 0 27931 177706 1586 77824 0 0 docker-proxy
Sep 3 21:01:41 unRAID kernel: [ 27944] 0 27944 177706 2094 81920 0 0 docker-proxy
Sep 3 21:01:41 unRAID kernel: [ 27951] 0 27951 177706 1591 77824 0 0 docker-proxy
Sep 3 21:01:41 unRAID kernel: [ 27964] 0 27964 177706 2094 81920 0 0 docker-proxy
Sep 3 21:01:41 unRAID kernel: [ 27971] 0 27971 177706 2096 81920 0 0 docker-proxy
Sep 3 21:01:41 unRAID kernel: [ 28047] 0 28047 180677 2847 126976 0 1 containerd-shim
Sep 3 21:01:41 unRAID kernel: [ 28095] 0 28095 110 14 24576 0 0 s6-svscan
Sep 3 21:01:41 unRAID kernel: [ 28265] 0 28265 177770 2097 81920 0 0 docker-proxy
Sep 3 21:01:41 unRAID kernel: [ 28271] 0 28271 177706 1587 81920 0 0 docker-proxy
Sep 3 21:01:41 unRAID kernel: [ 28279] 0 28279 54 7 24576 0 0 s6-supervise
Sep 3 21:01:41 unRAID kernel: [ 28287] 0 28287 51 1 24576 0 0 s6-linux-init-s
Sep 3 21:01:41 unRAID kernel: [ 28299] 0 28299 180677 2647 131072 0 1 containerd-shim
Sep 3 21:01:41 unRAID kernel: [ 28332] 999 28332 51031 2747 155648 0 0 php
Sep 3 21:01:41 unRAID kernel: [ 28348] 0 28348 54 7 24576 0 0 s6-supervise
Sep 3 21:01:41 unRAID kernel: [ 28349] 0 28349 54 5 24576 0 0 s6-supervise
Sep 3 21:01:41 unRAID kernel: [ 28350] 0 28350 54 7 24576 0 0 s6-supervise
Sep 3 21:01:41 unRAID kernel: [ 28351] 0 28351 54 7 24576 0 0 s6-supervise
Sep 3 21:01:41 unRAID kernel: [ 28359] 0 28359 52 7 24576 0 0 s6-ipcserverd
Sep 3 21:01:41 unRAID kernel: [ 28505] 99 28505 2808496 359207 4034560 0 0 java
Sep 3 21:01:41 unRAID kernel: [ 28506] 0 28506 1835 70 53248 0 0 bash
Sep 3 21:01:41 unRAID kernel: [ 28532] 0 28532 1421 23 49152 0 0 sleep
Sep 3 21:01:41 unRAID kernel: [ 28650] 0 28650 177706 1587 81920 0 0 docker-proxy
Sep 3 21:01:41 unRAID kernel: [ 28656] 0 28656 177706 1587 77824 0 0 docker-proxy
Sep 3 21:01:41 unRAID kernel: [ 28713] 0 28713 180741 2577 126976 0 1 containerd-shim
Sep 3 21:01:41 unRAID kernel: [ 28745] 0 28745 109 11 24576 0 0 s6-svscan
Sep 3 21:01:41 unRAID kernel: [ 28889] 0 28889 54 6 24576 0 0 s6-supervise
Sep 3 21:01:41 unRAID kernel: [ 28891] 0 28891 51 1 24576 0 0 s6-linux-init-s
Sep 3 21:01:41 unRAID kernel: [ 28916] 0 28916 178122 2092 110592 0 0 docker-proxy
Sep 3 21:01:41 unRAID kernel: [ 28922] 0 28922 177706 1591 77824 0 0 docker-proxy
Sep 3 21:01:41 unRAID kernel: [ 28952] 0 28952 180677 2440 126976 0 1 containerd-shim
Sep 3 21:01:41 unRAID kernel: [ 28965] 0 28965 54 7 24576 0 0 s6-supervise
Sep 3 21:01:41 unRAID kernel: [ 28966] 0 28966 54 5 24576 0 0 s6-supervise
Sep 3 21:01:41 unRAID kernel: [ 28967] 0 28967 54 5 24576 0 0 s6-supervise
Sep 3 21:01:41 unRAID kernel: [ 28968] 0 28968 54 6 24576 0 0 s6-supervise
Sep 3 21:01:41 unRAID kernel: [ 28984] 0 28984 52 6 24576 0 0 s6-ipcserverd
Sep 3 21:01:41 unRAID kernel: [ 29022] 999 29022 56305 3335 180224 0 0 postgres
Sep 3 21:01:41 unRAID kernel: [ 29115] 0 29115 578 59 40960 0 0 bash
Sep 3 21:01:41 unRAID kernel: [ 29116] 0 29116 405 15 36864 0 0 busybox
Sep 3 21:01:41 unRAID kernel: [ 29119] 99 29119 434 38 36864 0 0 mariadbd-safe
Sep 3 21:01:41 unRAID kernel: [ 29362] 99 29362 168388 20883 532480 0 0 mariadbd
Sep 3 21:01:41 unRAID kernel: [ 29469] 0 29469 178474 4117 94208 0 0 docker-proxy
Sep 3 21:01:41 unRAID kernel: [ 29476] 0 29476 177706 2100 81920 0 0 docker-proxy
Sep 3 21:01:41 unRAID kernel: [ 29499] 0 29499 180741 2327 126976 0 1 containerd-shim
Sep 3 21:01:41 unRAID kernel: [ 29520] 999 29520 29180 2429 139264 0 0 redis-server
Sep 3 21:01:41 unRAID kernel: [ 29614] 999 29614 329061 951 290816 0 0 postgres
Sep 3 21:01:41 unRAID kernel: [ 29616] 999 29616 56341 14351 397312 0 0 postgres
Sep 3 21:01:41 unRAID kernel: [ 29622] 999 29622 56305 4121 282624 0 0 postgres
Sep 3 21:01:41 unRAID kernel: [ 29623] 999 29623 56305 1749 139264 0 0 postgres
Sep 3 21:01:41 unRAID kernel: [ 29624] 999 29624 56473 1011 159744 0 0 postgres
Sep 3 21:01:41 unRAID kernel: [ 29625] 999 29625 20026 714 114688 0 0 postgres
Sep 3 21:01:41 unRAID kernel: [ 29626] 999 29626 56413 878 147456 0 0 postgres
Sep 3 21:01:41 unRAID kernel: [ 29777] 0 29777 180741 2729 131072 0 1 containerd-shim
Sep 3 21:01:41 unRAID kernel: [ 29896] 65532 29896 316146 9381 245760 0 0 cloudflared
Sep 3 21:01:41 unRAID kernel: [ 30075] 0 30075 177770 1591 81920 0 0 docker-proxy
Sep 3 21:01:41 unRAID kernel: [ 30082] 0 30082 177706 1590 73728 0 0 docker-proxy
Sep 3 21:01:41 unRAID kernel: [ 30095] 0 30095 177706 2095 81920 0 0 docker-proxy
Sep 3 21:01:41 unRAID kernel: [ 30101] 0 30101 177706 1587 81920 0 0 docker-proxy
Sep 3 21:01:41 unRAID kernel: [ 30114] 0 30114 178506 2992 110592 0 0 docker-proxy
Sep 3 21:01:41 unRAID kernel: [ 30120] 0 30120 177706 2096 77824 0 0 docker-proxy
Sep 3 21:01:41 unRAID kernel: [ 30156] 0 30156 180741 2528 131072 0 1 containerd-shim
Sep 3 21:01:41 unRAID kernel: [ 30179] 0 30179 174 65 36864 0 0 cinit
Sep 3 21:01:41 unRAID kernel: [ 30567] 0 30567 178314 1780 106496 0 0 docker-proxy
Sep 3 21:01:41 unRAID kernel: [ 30576] 0 30576 177706 1588 77824 0 0 docker-proxy
Sep 3 21:01:41 unRAID kernel: [ 30652] 0 30652 180677 2487 131072 0 1 containerd-shim
Sep 3 21:01:41 unRAID kernel: [ 30688] 0 30688 110 15 28672 0 0 s6-svscan
Sep 3 21:01:41 unRAID kernel: [ 30895] 0 30895 54 7 24576 0 0 s6-supervise
Sep 3 21:01:41 unRAID kernel: [ 30911] 0 30911 51 1 24576 0 0 s6-linux-init-s
Sep 3 21:01:41 unRAID kernel: [ 30919] 99 30919 31334 7800 155648 0 0 nginx
Sep 3 21:01:41 unRAID kernel: [ 30960] 0 30960 177930 1979 98304 0 0 docker-proxy
Sep 3 21:01:41 unRAID kernel: [ 30961] 0 30961 54 6 24576 0 0 s6-supervise
Sep 3 21:01:41 unRAID kernel: [ 30962] 0 30962 54 5 24576 0 0 s6-supervise
Sep 3 21:01:41 unRAID kernel: [ 30964] 0 30964 54 6 24576 0 0 s6-supervise
Sep 3 21:01:41 unRAID kernel: [ 30968] 0 30968 54 7 24576 0 0 s6-supervise
Sep 3 21:01:41 unRAID kernel: [ 30969] 0 30969 54 7 24576 0 0 s6-supervise
Sep 3 21:01:41 unRAID kernel: [ 30974] 0 30974 177706 2096 81920 0 0 docker-proxy
Sep 3 21:01:41 unRAID kernel: [ 30987] 0 30987 52 7 24576 0 0 s6-ipcserverd
Sep 3 21:01:41 unRAID kernel: [ 31078] 0 31078 180741 2988 126976 0 1 containerd-shim
Sep 3 21:01:41 unRAID kernel: [ 31122] 0 31122 825307 18269 716800 0 0 vaultwarden
Sep 3 21:01:41 unRAID kernel: [ 31320] 0 31320 178058 1897 106496 0 0 docker-proxy
Sep 3 21:01:41 unRAID kernel: [ 31327] 0 31327 177706 2093 81920 0 0 docker-proxy
Sep 3 21:01:41 unRAID kernel: [ 31398] 0 31398 180677 2543 122880 0 1 containerd-shim
Sep 3 21:01:41 unRAID kernel: [ 31440] 0 31440 109 12 24576 0 0 s6-svscan
Sep 3 21:01:41 unRAID kernel: [ 31505] 99 31505 67263 10983 2584576 0 0 node
Sep 3 21:01:41 unRAID kernel: [ 31606] 0 31606 177898 2605 86016 0 0 docker-proxy
Sep 3 21:01:41 unRAID kernel: [ 31612] 0 31612 177706 1587 77824 0 0 docker-proxy
Sep 3 21:01:41 unRAID kernel: [ 31641] 0 31641 54 6 24576 0 0 s6-supervise
Sep 3 21:01:41 unRAID kernel: [ 31657] 0 31657 51 1 24576 0 0 s6-linux-init-s
Sep 3 21:01:41 unRAID kernel: [ 31676] 0 31676 180741 2670 135168 0 1 containerd-shim
Sep 3 21:01:41 unRAID kernel: [ 31730] 0 31730 620 16 40960 0 0 sh
Sep 3 21:01:41 unRAID kernel: [ 31758] 0 31758 54 6 24576 0 0 s6-supervise
Sep 3 21:01:41 unRAID kernel: [ 31759] 0 31759 54 5 24576 0 0 s6-supervise
Sep 3 21:01:41 unRAID kernel: [ 31760] 0 31760 54 6 24576 0 0 s6-supervise
Sep 3 21:01:41 unRAID kernel: [ 31761] 0 31761 54 6 24576 0 0 s6-supervise
Sep 3 21:01:41 unRAID kernel: [ 31762] 0 31762 54 6 24576 0 0 s6-supervise
Sep 3 21:01:41 unRAID kernel: [ 31763] 0 31763 54 7 24576 0 0 s6-supervise
Sep 3 21:01:41 unRAID kernel: [ 31774] 0 31774 52 6 24576 0 0 s6-ipcserverd
Sep 3 21:01:41 unRAID kernel: [ 32062] 99 32062 465254 32968 3338240 0 0 immich
Sep 3 21:01:41 unRAID kernel: [ 32063] 0 32063 1835 71 53248 0 0 bash
Sep 3 21:01:41 unRAID kernel: [ 32092] 0 32092 1421 21 49152 0 0 sleep
Sep 3 21:01:41 unRAID kernel: [ 32152] 0 32152 180693 2569 126976 0 1 containerd-shim
Sep 3 21:01:41 unRAID kernel: [ 32201] 0 32201 110 13 28672 0 0 s6-svscan
Sep 3 21:01:41 unRAID kernel: [ 32432] 0 32432 54 6 24576 0 0 s6-supervise
Sep 3 21:01:41 unRAID kernel: [ 32435] 0 32435 51 1 24576 0 0 s6-linux-init-s
Sep 3 21:01:41 unRAID kernel: [ 32502] 0 32502 180741 2575 126976 0 1 containerd-shim
Sep 3 21:01:41 unRAID kernel: [ 32553] 0 32553 28665 6531 139264 0 0 esphome
Sep 3 21:01:41 unRAID kernel: [ 32565] 0 32565 54 7 24576 0 0 s6-supervise
Sep 3 21:01:41 unRAID kernel: [ 32566] 0 32566 54 5 24576 0 0 s6-supervise
Sep 3 21:01:41 unRAID kernel: [ 32567] 0 32567 54 5 24576 0 0 s6-supervise
Sep 3 21:01:41 unRAID kernel: [ 32568] 0 32568 54 7 24576 0 0 s6-supervise
Sep 3 21:01:41 unRAID kernel: [ 32579] 0 32579 52 7 24576 0 0 s6-ipcserverd
Sep 3 21:01:41 unRAID kernel: [ 512] 0 512 405 15 36864 0 0 busybox
Sep 3 21:01:41 unRAID kernel: [ 513] 99 513 683436 106844 1507328 0 0 python3
Sep 3 21:01:41 unRAID kernel: [ 568] 0 568 180677 2672 118784 0 1 containerd-shim
Sep 3 21:01:41 unRAID kernel: [ 632] 0 632 793337 72709 1339392 0 0 mass
Sep 3 21:01:41 unRAID kernel: [ 1370] 0 1370 177770 2086 77824 0 0 docker-proxy
Sep 3 21:01:41 unRAID kernel: [ 1378] 0 1378 177706 1591 73728 0 0 docker-proxy
Sep 3 21:01:41 unRAID kernel: [ 1504] 0 1504 180677 2392 122880 0 1 containerd-shim
Sep 3 21:01:41 unRAID kernel: [ 1548] 0 1548 177748 3832 786432 0 0 node
Sep 3 21:01:41 unRAID kernel: [ 1979] 0 1979 178058 2277 106496 0 0 docker-proxy
Sep 3 21:01:41 unRAID kernel: [ 1986] 0 1986 177770 2091 77824 0 0 docker-proxy
Sep 3 21:01:41 unRAID kernel: [ 2068] 0 2068 180741 2569 131072 0 1 containerd-shim
Sep 3 21:01:41 unRAID kernel: [ 2115] 0 2115 7410 3966 94208 0 0 supervisord
Sep 3 21:01:41 unRAID kernel: [ 2603] 0 2603 177770 2604 86016 0 0 docker-proxy
Sep 3 21:01:41 unRAID kernel: [ 2613] 0 2613 177706 2095 81920 0 0 docker-proxy
Sep 3 21:01:41 unRAID kernel: [ 2635] 0 2635 177706 2094 81920 0 0 docker-proxy
Sep 3 21:01:41 unRAID kernel: [ 2641] 0 2641 177770 1586 81920 0 0 docker-proxy
Sep 3 21:01:41 unRAID kernel: [ 2661] 0 2661 177706 2107 81920 0 0 docker-proxy
Sep 3 21:01:41 unRAID kernel: [ 2668] 0 2668 177706 1586 73728 0 0 docker-proxy
Sep 3 21:01:41 unRAID kernel: [ 2681] 0 2681 2871212 32540 3665920 0 0 node
Sep 3 21:01:41 unRAID kernel: [ 2748] 0 2748 180741 3006 126976 0 1 containerd-shim
Sep 3 21:01:41 unRAID kernel: [ 2793] 1883 2793 1047 264 45056 0 0 mosquitto
Sep 3 21:01:41 unRAID kernel: [ 3112] 0 3112 177642 2096 81920 0 0 docker-proxy
Sep 3 21:01:41 unRAID kernel: [ 3119] 0 3119 177706 1586 77824 0 0 docker-proxy
Sep 3 21:01:41 unRAID kernel: [ 3162] 0 3162 180741 2643 126976 0 1 containerd-shim
Sep 3 21:01:41 unRAID kernel: [ 3190] 0 3190 162783 4614 1413120 0 0 npm start
Sep 3 21:01:41 unRAID kernel: [ 3298] 0 3298 308952 6030 761856 0 0 node
Sep 3 21:01:41 unRAID kernel: [ 3386] 0 3386 8191288 27888 2605056 0 0 next-router-wor
Sep 3 21:01:41 unRAID kernel: [ 4393] 0 4393 178186 2481 102400 0 0 docker-proxy
Sep 3 21:01:41 unRAID kernel: [ 4403] 0 4403 177770 2095 81920 0 0 docker-proxy
Sep 3 21:01:41 unRAID kernel: [ 4529] 0 4529 180741 2860 131072 0 1 containerd-shim
Sep 3 21:01:41 unRAID kernel: [ 4589] 0 4589 109 14 28672 0 0 s6-svscan
Sep 3 21:01:41 unRAID kernel: [ 4636] 33 4636 1078899 31763 1839104 0 0 uwsgi
Sep 3 21:01:42 unRAID php-fpm[14472]: [WARNING] [pool www] child 6557 exited on signal 9 (SIGKILL) after 216.936259 seconds from start
Same for a friend who uses Frigate 0.14.
Is there a way to limit the RAM for the container, so at least only it crashes and not the other 30 Docker containers? Otherwise I'll have to turn it off until it's fixed, or go back to 0.13.
Was 0.13 working fine for you? On my RPi 4 8GB it's the same thing.
Yes: https://docs.docker.com/engine/containers/resource_constraints/
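For example, with Docker Compose you can put a hard memory cap on just the Frigate service, so the kernel OOM-kills processes inside that container instead of taking down the whole host (the 4g value here is illustrative; size it for your system):

services:
  frigate:
    # ...existing frigate service definition...
    mem_limit: 4g       # hard RAM cap for this container
    memswap_limit: 4g   # cap RAM+swap to the same value (no extra swap)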
Other than this? It doesn't work. For now I stopped the container; already 2 crashes this evening. I have a Coral M.2, I use VAAPI, 64GB RAM, i7-11700K.
Describe the problem you are having
I have two Docker hosts and both have a Coral. I find that Frigate seems to cause the whole host to freeze completely (console is not responsive) at frequent intervals - right now I would say on average every 48 hours, but it's not consistent. I've moved the container to the other host and cleared out all the other containers, and the freeze follows Frigate.
It's likely Frigate is pushing the hosts much harder than any other container, and perhaps it's hitting a bug somewhere in the hardware or OS. The devices are BeeLink machines running the latest Ubuntu.
Looking for some advice - has anyone seen this sort of behavior and identified the cause?
This has been happening for many months, so it is not related to the Frigate beta or any particular Frigate version (and likely this is NOT a Frigate bug).
Version
0.13 Beta 3
Frigate config file
Relevant log output
Frigate stats
No response
Operating system
Other
Install method
Docker Compose
Coral version
USB
Any other information that may be helpful
No response