Closed: qubex22 closed this issue 11 months ago
CC @NateMeyer
The common theme here seems to be non-consumer GPUs, but I'm not sure why that would matter.
I'm not sure. According to Wikipedia, the K620 is actually a Maxwell-architecture GPU, so it shouldn't have any compatibility issues with CUDA.
Are you disabling FP16 during model generation?
Yes, as the beta documentation says, using the env variable USE_FP16=false.
Correct, the K620 is Maxwell, and it is deprecated in TensorRT 8.6 but should work well in 8.5.3...
Do you think it could be a CUDA version mismatch between the driver and the Frigate libraries? As I see in your PR https://github.com/blakeblackshear/frigate/pull/7006/commits/7761a1a56c23b7c35f162900b7cf14e378a1e3e6#diff-7350d4291cdfae3b8d3bb7e088128afe630b53724f32411460e25ccef30e1c5f, when you updated to 8.5.3 you used CUDA 11.8 libraries, while my driver includes CUDA 12.
Hello, I'm trying to solve an identical problem, and I also upgraded the NVIDIA driver and tried lots of combinations. Did you find a solution?
I'm trying to build the beta with TensorRT 8.4.1 instead of 8.5.3 to test. But apart from that, I have no clue.
I built the 0.13 beta 2 container with TensorRT 8.4.1, but the error persists, so now I'm thinking that maybe some Python NVIDIA library update is messing up the detection process on our GPUs.
I'm trying to understand the RemoteObjectDetector class and its detect method and debug what's happening. First of all, the problem we have is on the key int(d[0]).
d[0] should be a float from 0 to 90 (a labelmap index) taken from the out_np_shm array. With that number the detection is matched to a class in the labelmap and the detection is appended.
In this case, if I print d[0] inside the for loop of the detect method, I get a continuous stream of negative numbers, which could be a symptom of an invalid out_np_shm array.
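For reference, the loop I'm printing from looks roughly like the sketch below; it's paraphrased from the tuple built in object_detection.py's detect(), and the wrapper function and its arguments are mine, not Frigate's exact code. The log that follows shows what the print produces.

import numpy as np

def debug_detect(out_np_shm: np.ndarray, labels: dict) -> list:
    # Paraphrase of the loop in RemoteObjectDetector.detect(). Each row d of the
    # shared-memory output should carry (class_id, score, box...), where d[0] is
    # a labelmap index between 0 and 90.
    detections = []
    for d in out_np_shm:
        print(float(d[0]))  # this is the value filling the log below
        detections.append(
            (labels[int(d[0])], float(d[1]), (d[2], d[3], d[4], d[5])))
    return detections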
2023-11-04 10:29:47.569257788 [INFO] Starting Frigate...
2023-11-04 10:29:47.899519360 [INFO] Starting go2rtc...
2023-11-04 10:29:47.992036821 10:29:47.991 INF go2rtc version 1.8.1 linux/amd64
2023-11-04 10:29:47.992300069 10:29:47.992 INF [api] listen addr=0.0.0.0:1984
2023-11-04 10:29:47.992519667 10:29:47.992 INF [rtsp] listen addr=0.0.0.0:8554
2023-11-04 10:29:47.992704765 10:29:47.992 INF [webrtc] listen addr=0.0.0.0:8555/tcp
2023-11-04 10:29:49.054337592 [2023-11-04 10:29:49] frigate.app INFO : Starting Frigate (0.13.0-6ec2f97)
2023-11-04 10:29:49.880464476 [2023-11-04 10:29:49] peewee_migrate.logs INFO : Starting migrations
2023-11-04 10:29:49.891730597 [2023-11-04 10:29:49] peewee_migrate.logs INFO : There is nothing to migrate
2023-11-04 10:29:49.903863749 [2023-11-04 10:29:49] frigate.app INFO : Recording process started: 738
2023-11-04 10:29:49.907467294 [2023-11-04 10:29:49] frigate.app INFO : go2rtc process pid: 98
2023-11-04 10:29:49.942418905 [2023-11-04 10:29:49] detector.tensorrt INFO : Starting detection process: 747
2023-11-04 10:29:49.951706044 [2023-11-04 10:29:49] frigate.app INFO : Output process started: 751
2023-11-04 10:29:49.971897518 [2023-11-04 10:29:49] frigate.app INFO : Camera processor started for yi_pasillo: 756
2023-11-04 10:29:49.984106860 [2023-11-04 10:29:49] frigate.app INFO : Capture process started for yi_pasillo: 760
2023-11-04 10:29:50.371964702 [2023-11-04 10:29:50] frigate.detectors.plugins.tensorrt INFO : [MemUsageChange] Init CUDA: CPU +191, GPU +0, now: CPU 251, GPU 198 (MiB)
2023-11-04 10:29:50.517152771 [2023-11-04 10:29:50] frigate.detectors.plugins.tensorrt INFO : Loaded engine size: 149 MiB
2023-11-04 10:29:50.616096470 [2023-11-04 10:29:50] frigate.detectors.plugins.tensorrt INFO : [MemUsageChange] Init cuBLAS/cuBLASLt: CPU +6, GPU +8, now: CPU 440, GPU 357 (MiB)
2023-11-04 10:29:50.621584427 [2023-11-04 10:29:50] frigate.detectors.plugins.tensorrt INFO : [MemUsageChange] Init cuDNN: CPU +2, GPU +10, now: CPU 442, GPU 367 (MiB)
2023-11-04 10:29:50.624915314 [2023-11-04 10:29:50] frigate.detectors.plugins.tensorrt INFO : [MemUsageChange] TensorRT-managed allocation in engine deserialization: CPU +0, GPU +148, now: CPU 0, GPU 148 (MiB)
2023-11-04 10:29:50.645485455 [2023-11-04 10:29:50] frigate.detectors.plugins.tensorrt INFO : [MemUsageChange] Init cuBLAS/cuBLASLt: CPU +0, GPU +8, now: CPU 293, GPU 360 (MiB)
2023-11-04 10:29:50.645490734 [2023-11-04 10:29:50] frigate.detectors.plugins.tensorrt INFO : [MemUsageChange] Init cuDNN: CPU +0, GPU +8, now: CPU 293, GPU 368 (MiB)
2023-11-04 10:29:50.645496324 [2023-11-04 10:29:50] frigate.detectors.plugins.tensorrt INFO : [MemUsageChange] TensorRT-managed allocation in IExecutionContext creation: CPU +0, GPU +29, now: CPU 0, GPU 177 (MiB)
2023-11-04 10:29:54.338432999 -13.7649145
2023-11-04 10:29:54.399519625 -14.007249
2023-11-04 10:29:54.651253859 -8.721203
2023-11-04 10:29:54.715424506 -12.397478
2023-11-04 10:29:54.772029296 -13.554862
2023-11-04 10:29:54.828049652 -10.871115
2023-11-04 10:29:55.059848180 -8.740787
2023-11-04 10:29:55.122348603 -11.726419
2023-11-04 10:29:55.177900703 -11.8439045
2023-11-04 10:29:55.233560492 -8.754908
2023-11-04 10:29:55.389519897 -12.384247
2023-11-04 10:29:55.445262406 -12.03768
2023-11-04 10:29:55.504320982 -12.535482
2023-11-04 10:29:55.809429878 -11.885252
2023-11-04 10:29:55.871777292 -13.035503
2023-11-04 10:29:55.927222063 -12.732517
2023-11-04 10:29:56.097540979 -13.629853
2023-11-04 10:29:56.158139660 -12.018048
2023-11-04 10:29:56.473892863 -9.45426
2023-11-04 10:29:56.528033837 -13.685979
2023-11-04 10:29:56.588690317 -13.149712
2023-11-04 10:29:56.642297647 -10.743803
2023-11-04 10:29:56.701665070 -13.053298
2023-11-04 10:29:56.757037162 -12.490692
2023-11-04 10:29:56.814062508 -13.831002
2023-11-04 10:29:56.990026468 -9.515472
2023-11-04 10:29:57.047049784 -15.587455
2023-11-04 10:29:57.104743494 -13.040367
2023-11-04 10:29:57.161875489 -13.58364
2023-11-04 10:29:57.555330546 [INFO] Starting go2rtc healthcheck service...
2023-11-04 10:29:57.568858785 -11.676875
2023-11-04 10:29:57.625921290 -13.813794
2023-11-04 10:29:57.682725799 -15.368919
2023-11-04 10:29:57.740282849 -12.664353
2023-11-04 10:29:58.061882675 -10.132933
2023-11-04 10:29:58.119151259 -12.742411
2023-11-04 10:29:58.181738621 -11.811705
2023-11-04 10:29:58.238623818 -14.154352
2023-11-04 10:29:58.306912304 -8.92131
2023-11-04 10:29:58.365748683 -14.189512
2023-11-04 10:29:58.433584794 -9.286553
@NateMeyer curious what you think
I'm not certain where the negative numbers are coming from; there are a couple of layers to the process that spits out the detections.
The TensorRT detector uses YOLO models, which have a very different output from the SSD model Frigate was originally designed with. The project we get our YOLO models from includes a post-processing library (libyolo_layer.so) that does the math required to take the detections as YOLO spits them out, do the sigmoid calculations, and return a single detection per object, closer to what SSD would.
Once the Frigate TensorRT detector gets the results from the model, it further has to sort and reshape the results into the array format that the object detection code expects.
Somewhere between the model, the post-processing, and the parsing, the class is miscalculated. We can try adding more logging straight out of TensorRT, but I'm not an expert on how the class numbers are generated by the YOLO model. We could test whether the absolute value is sufficient, or try discarding negative values, but if these values are wrong, will we get any good detections out of your setup?
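As a crude experiment, a guard along the lines of the sketch below (an illustration, not the actual detector code) could drop the out-of-range class ids before the labelmap lookup; if nothing survives the filter, that would suggest the whole output array is bad rather than a few stray rows.

def filter_valid_classes(raw_detections, num_classes=91):
    # Debugging aid only: keep rows whose class index falls inside the labelmap
    # range and report how many were discarded. raw_detections is assumed to be
    # the (class, score, x1, y1, x2, y2) rows the detector hands back.
    kept, dropped = [], []
    for d in raw_detections:
        if 0 <= int(d[0]) < num_classes:
            kept.append(d)
        else:
            dropped.append(int(d[0]))
    if dropped:
        print(f"discarded {len(dropped)} rows with invalid class ids, e.g. {dropped[:5]}")
    return kept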
I don't think I saw: were you able to get TensorRT working with 0.12 on your machine?
Yes, with 0.12 it works flawlessly, so one thing I'm trying to do is check which changes were made in 0.13 and see where it breaks.
I tried to force a detection to see if there's any valid d[0] among all those, but the negative numbers keep coming even when there is nothing in the image to detect, so I think it has something to do with the detection process.
The confusing part to me is why it affects some setups while for others TRT works perfectly fine.
I spent many days trying to run TensorRT on an NVIDIA GT 750 Ti; the cause was the legacy NVIDIA card. I installed an NVIDIA GT 1030 with 2 GB of RAM and now it's working. I run Frigate with yolov7x-320.trt, and with 4 cameras I have GPU usage at 100% and about 50% of RAM, with an inference speed of 100 ms. When I tried yolov7x-640.trt, the inference speed went to 300 ms. The conclusion is simple: TensorRT needs good hardware. Soon I will try Frigate with CodeProject.AI to compare results.
Dependabot was fixed, and it looks like a couple of dependencies are quite a few versions behind; I wonder if updating these would fix the issue? https://github.com/blakeblackshear/frigate/pull/8479
To be honest, those inference times don't seem that far off for a card that old running yolov7x, which is the larger model, especially if it's not INT8. The last time I tested yolov7x on an RTX A4000 and an A30, inference times were 20 ms+ with FP32/FP16. I'd be curious if reverting that improved it at all.
To add, yolov7 relies on a lot of tricks to improve mAP over yolov4 and really needs a newer card that has FP16/INT8 tensor cores.
Just tried the new beta 5 with tiny-416 and there seems to be a new error when loading the detector; full log:
EDIT: My mistake, sorry, go to https://github.com/blakeblackshear/frigate/issues/8329#issuecomment-1807214635
I have TensorRT with yolov7-320 on an NVIDIA GeForce GT 1030 with 2 GB of RAM and get a 100 ms inference speed; there's absolutely no sense in trying to run TensorRT on a legacy card. Even if it worked, what inference speed could it reach? I'm running TensorRT with one PTZ camera, and sometimes when somebody is riding a bike the object is already gone and the camera is still moving to track an object that is no longer visible; I guess the inference speed is causing the problem. Just don't waste time like I did. A newer card costs about 100 euros; the hours of work are not worth it.
Just tried the new beta 5 with tiny-416 and there seems to be a new error when loading the detector; full log here:
2023-11-12 19:15:40.269362932 Fatal Python error: Segmentation fault
2023-11-12 19:15:40.269366202
2023-11-12 19:15:40.269367692 Thread 0x00001476f53f56c0 (most recent call first):
2023-11-12 19:15:40.269385411 File "/usr/lib/python3.9/threading.py", line 312 in wait
2023-11-12 19:15:40.269436811 File "/usr/lib/python3.9/multiprocessing/queues.py", line 233 in _feed
2023-11-12 19:15:40.269469521 File "/usr/lib/python3.9/threading.py", line 892 in run
2023-11-12 19:15:40.269513450 File "/usr/lib/python3.9/threading.py", line 954 in _bootstrap_inner
2023-11-12 19:15:40.269550810 File "/usr/lib/python3.9/threading.py", line 912 in _bootstrap
2023-11-12 19:15:40.269559640
2023-11-12 19:15:40.269560680 Current thread 0x000014770d0ae740 (most recent call first):
2023-11-12 19:15:40.269611179 File "/opt/frigate/frigate/detectors/plugins/tensorrt.py", line 168 in <listcomp>
2023-11-12 19:15:40.269675558 File "/opt/frigate/frigate/detectors/plugins/tensorrt.py", line 167 in _do_inference
2023-11-12 19:15:40.269736428 File "/opt/frigate/frigate/detectors/plugins/tensorrt.py", line 286 in detect_raw
2023-11-12 19:15:40.269786267 File "/opt/frigate/frigate/object_detection.py", line 75 in detect_raw
2023-11-12 19:15:40.269835627 File "/opt/frigate/frigate/object_detection.py", line 125 in run_detector
2023-11-12 19:15:40.269882096 File "/usr/lib/python3.9/multiprocessing/process.py", line 108 in run
2023-11-12 19:15:40.269932756 File "/usr/lib/python3.9/multiprocessing/process.py", line 315 in _bootstrap
2023-11-12 19:15:40.269982755 File "/usr/lib/python3.9/multiprocessing/popen_fork.py", line 71 in _launch
2023-11-12 19:15:40.270044615 File "/usr/lib/python3.9/multiprocessing/popen_fork.py", line 19 in __init__
2023-11-12 19:15:40.270095554 File "/usr/lib/python3.9/multiprocessing/context.py", line 277 in _Popen
2023-11-12 19:15:40.270145524 File "/usr/lib/python3.9/multiprocessing/context.py", line 224 in _Popen
2023-11-12 19:15:40.270192243 File "/usr/lib/python3.9/multiprocessing/process.py", line 121 in start
2023-11-12 19:15:40.270245123 File "/opt/frigate/frigate/object_detection.py", line 183 in start_or_restart
2023-11-12 19:15:40.270289242 File "/opt/frigate/frigate/object_detection.py", line 151 in __init__
2023-11-12 19:15:40.270336262 File "/opt/frigate/frigate/app.py", line 452 in start_detectors
2023-11-12 19:15:40.270375711 File "/opt/frigate/frigate/app.py", line 682 in start
2023-11-12 19:15:40.270425701 File "/opt/frigate/frigate/__main__.py", line 17 in <module>
2023-11-12 19:15:40.270473290 File "/usr/lib/python3.9/runpy.py", line 87 in _run_code
2023-11-12 19:15:40.270525920 File "/usr/lib/python3.9/runpy.py", line 197 in _run_module_as_main
Double check the model sizes and reset or recreate the build folder before building. I just deployed it to 3 different locations with TensorRT and it is working quite well at all of them so far.
Running 0.12 with yolov7-tiny-416 I'm getting approximately 17 ms inference speed with 2 cameras, so I think the card is worth it.
Tiny will work OK. But yolov7x is a roughly 20 times larger model than v7-tiny and uses different activation methods; it's much, much heavier. If you get 17 ms with tiny, 100 ms to 300 ms is probably expected with yolov7x. I get about 1.7 ms per detection with v7-tiny on newer cards; it just isn't accurate enough for my use case.
My bad, I didn't change the size in the config. However, I get the same KeyError as before:
2023-11-12 20:10:18.311864447 [2023-11-12 20:10:18] frigate.detectors.plugins.tensorrt DEBUG : Num Available Devices: 1
2023-11-12 20:10:18.321110215 [2023-11-12 20:10:18] frigate.detectors.plugins.tensorrt DEBUG : Registered plugin creator - ::BatchedNMSDynamic_TRT version 1
2023-11-12 20:10:18.321115575 [2023-11-12 20:10:18] frigate.detectors.plugins.tensorrt DEBUG : Registered plugin creator - ::BatchedNMS_TRT version 1
2023-11-12 20:10:18.321117525 [2023-11-12 20:10:18] frigate.detectors.plugins.tensorrt DEBUG : Registered plugin creator - ::BatchTilePlugin_TRT version 1
2023-11-12 20:10:18.321142584 [2023-11-12 20:10:18] frigate.detectors.plugins.tensorrt DEBUG : Registered plugin creator - ::Clip_TRT version 1
2023-11-12 20:10:18.321145904 [2023-11-12 20:10:18] frigate.detectors.plugins.tensorrt DEBUG : Registered plugin creator - ::CoordConvAC version 1
2023-11-12 20:10:18.321148004 [2023-11-12 20:10:18] frigate.detectors.plugins.tensorrt DEBUG : Registered plugin creator - ::CropAndResizeDynamic version 1
2023-11-12 20:10:18.321184904 [2023-11-12 20:10:18] frigate.detectors.plugins.tensorrt DEBUG : Registered plugin creator - ::CropAndResize version 1
2023-11-12 20:10:18.321187434 [2023-11-12 20:10:18] frigate.detectors.plugins.tensorrt DEBUG : Registered plugin creator - ::DecodeBbox3DPlugin version 1
2023-11-12 20:10:18.321213984 [2023-11-12 20:10:18] frigate.detectors.plugins.tensorrt DEBUG : Registered plugin creator - ::DetectionLayer_TRT version 1
2023-11-12 20:10:18.321216254 [2023-11-12 20:10:18] frigate.detectors.plugins.tensorrt DEBUG : Registered plugin creator - ::EfficientNMS_Explicit_TF_TRT version 1
2023-11-12 20:10:18.321218204 [2023-11-12 20:10:18] frigate.detectors.plugins.tensorrt DEBUG : Registered plugin creator - ::EfficientNMS_Implicit_TF_TRT version 1
2023-11-12 20:10:18.321219884 [2023-11-12 20:10:18] frigate.detectors.plugins.tensorrt DEBUG : Registered plugin creator - ::EfficientNMS_ONNX_TRT version 1
2023-11-12 20:10:18.321243933 [2023-11-12 20:10:18] frigate.detectors.plugins.tensorrt DEBUG : Registered plugin creator - ::EfficientNMS_TRT version 1
2023-11-12 20:10:18.321246343 [2023-11-12 20:10:18] frigate.detectors.plugins.tensorrt DEBUG : Registered plugin creator - ::FlattenConcat_TRT version 1
2023-11-12 20:10:18.321248163 [2023-11-12 20:10:18] frigate.detectors.plugins.tensorrt DEBUG : Registered plugin creator - ::fMHA_V2 version 1
2023-11-12 20:10:18.321249783 [2023-11-12 20:10:18] frigate.detectors.plugins.tensorrt DEBUG : Registered plugin creator - ::fMHCA version 1
2023-11-12 20:10:18.321281433 [2023-11-12 20:10:18] frigate.detectors.plugins.tensorrt DEBUG : Registered plugin creator - ::GenerateDetection_TRT version 1
2023-11-12 20:10:18.321283633 [2023-11-12 20:10:18] frigate.detectors.plugins.tensorrt DEBUG : Registered plugin creator - ::GridAnchor_TRT version 1
2023-11-12 20:10:18.321285283 [2023-11-12 20:10:18] frigate.detectors.plugins.tensorrt DEBUG : Registered plugin creator - ::GridAnchorRect_TRT version 1
2023-11-12 20:10:18.321287143 [2023-11-12 20:10:18] frigate.detectors.plugins.tensorrt DEBUG : Registered plugin creator - ::GroupNorm version 1
2023-11-12 20:10:18.321288763 [2023-11-12 20:10:18] frigate.detectors.plugins.tensorrt DEBUG : Registered plugin creator - ::InstanceNormalization_TRT version 1
2023-11-12 20:10:18.321290453 [2023-11-12 20:10:18] frigate.detectors.plugins.tensorrt DEBUG : Registered plugin creator - ::InstanceNormalization_TRT version 2
2023-11-12 20:10:18.321292323 [2023-11-12 20:10:18] frigate.detectors.plugins.tensorrt DEBUG : Registered plugin creator - ::LayerNorm version 1
2023-11-12 20:10:18.321293823 [2023-11-12 20:10:18] frigate.detectors.plugins.tensorrt DEBUG : Registered plugin creator - ::LReLU_TRT version 1
2023-11-12 20:10:18.321295463 [2023-11-12 20:10:18] frigate.detectors.plugins.tensorrt DEBUG : Registered plugin creator - ::MultilevelCropAndResize_TRT version 1
2023-11-12 20:10:18.321334752 [2023-11-12 20:10:18] frigate.detectors.plugins.tensorrt DEBUG : Registered plugin creator - ::MultilevelProposeROI_TRT version 1
2023-11-12 20:10:18.321337252 [2023-11-12 20:10:18] frigate.detectors.plugins.tensorrt DEBUG : Registered plugin creator - ::MultiscaleDeformableAttnPlugin_TRT version 1
2023-11-12 20:10:18.321338982 [2023-11-12 20:10:18] frigate.detectors.plugins.tensorrt DEBUG : Registered plugin creator - ::NMSDynamic_TRT version 1
2023-11-12 20:10:18.321340472 [2023-11-12 20:10:18] frigate.detectors.plugins.tensorrt DEBUG : Registered plugin creator - ::NMS_TRT version 1
2023-11-12 20:10:18.321342422 [2023-11-12 20:10:18] frigate.detectors.plugins.tensorrt DEBUG : Registered plugin creator - ::Normalize_TRT version 1
2023-11-12 20:10:18.321344092 [2023-11-12 20:10:18] frigate.detectors.plugins.tensorrt DEBUG : Registered plugin creator - ::PillarScatterPlugin version 1
2023-11-12 20:10:18.321345662 [2023-11-12 20:10:18] frigate.detectors.plugins.tensorrt DEBUG : Registered plugin creator - ::PriorBox_TRT version 1
2023-11-12 20:10:18.321347262 [2023-11-12 20:10:18] frigate.detectors.plugins.tensorrt DEBUG : Registered plugin creator - ::ProposalDynamic version 1
2023-11-12 20:10:18.321391612 [2023-11-12 20:10:18] frigate.detectors.plugins.tensorrt DEBUG : Registered plugin creator - ::ProposalLayer_TRT version 1
2023-11-12 20:10:18.321393682 [2023-11-12 20:10:18] frigate.detectors.plugins.tensorrt DEBUG : Registered plugin creator - ::Proposal version 1
2023-11-12 20:10:18.321395372 [2023-11-12 20:10:18] frigate.detectors.plugins.tensorrt DEBUG : Registered plugin creator - ::PyramidROIAlign_TRT version 1
2023-11-12 20:10:18.321397052 [2023-11-12 20:10:18] frigate.detectors.plugins.tensorrt DEBUG : Registered plugin creator - ::Region_TRT version 1
2023-11-12 20:10:18.321398532 [2023-11-12 20:10:18] frigate.detectors.plugins.tensorrt DEBUG : Registered plugin creator - ::Reorg_TRT version 1
2023-11-12 20:10:18.321400562 [2023-11-12 20:10:18] frigate.detectors.plugins.tensorrt DEBUG : Registered plugin creator - ::ResizeNearest_TRT version 1
2023-11-12 20:10:18.321402102 [2023-11-12 20:10:18] frigate.detectors.plugins.tensorrt DEBUG : Registered plugin creator - ::ROIAlign_TRT version 1
2023-11-12 20:10:18.321403622 [2023-11-12 20:10:18] frigate.detectors.plugins.tensorrt DEBUG : Registered plugin creator - ::RPROI_TRT version 1
2023-11-12 20:10:18.321405382 [2023-11-12 20:10:18] frigate.detectors.plugins.tensorrt DEBUG : Registered plugin creator - ::ScatterND version 1
2023-11-12 20:10:18.321416922 [2023-11-12 20:10:18] frigate.detectors.plugins.tensorrt DEBUG : Registered plugin creator - ::SeqLen2Spatial version 1
2023-11-12 20:10:18.321418842 [2023-11-12 20:10:18] frigate.detectors.plugins.tensorrt DEBUG : Registered plugin creator - ::SpecialSlice_TRT version 1
2023-11-12 20:10:18.321420762 [2023-11-12 20:10:18] frigate.detectors.plugins.tensorrt DEBUG : Registered plugin creator - ::SplitGeLU version 1
2023-11-12 20:10:18.321422272 [2023-11-12 20:10:18] frigate.detectors.plugins.tensorrt DEBUG : Registered plugin creator - ::Split version 1
2023-11-12 20:10:18.321423832 [2023-11-12 20:10:18] frigate.detectors.plugins.tensorrt DEBUG : Registered plugin creator - ::VoxelGeneratorPlugin version 1
2023-11-12 20:10:18.372021138 [2023-11-12 20:10:18] frigate.detectors.plugins.tensorrt INFO : Loaded engine size: 30 MiB
2023-11-12 20:10:18.442548787 [2023-11-12 20:10:18] urllib3.connectionpool DEBUG : https://api.github.com:443 "GET /repos/blakeblackshear/frigate/releases/latest HTTP/1.1" 200 1767
2023-11-12 20:10:18.476639858 [2023-11-12 20:10:18] peewee.sqliteq DEBUG : received query UPDATE "event" SET "end_time" = ("event"."start_time" + ?) WHERE ("event"."end_time" IS ?)
2023-11-12 20:10:18.476772446 [2023-11-12 20:10:18] peewee DEBUG : ('UPDATE "event" SET "end_time" = ("event"."start_time" + ?) WHERE ("event"."end_time" IS ?)', [30, None])
2023-11-12 20:10:18.538630071 [2023-11-12 20:10:18] frigate.detectors.plugins.tensorrt DEBUG : Trying to load shared library libcublas.so.11
2023-11-12 20:10:18.538747600 [2023-11-12 20:10:18] frigate.detectors.plugins.tensorrt DEBUG : Loaded shared library libcublas.so.11
2023-11-12 20:10:18.565552873 [2023-11-12 20:10:18] frigate.detectors.plugins.tensorrt DEBUG : Using cublas as plugin tactic source
2023-11-12 20:10:18.565633582 [2023-11-12 20:10:18] frigate.detectors.plugins.tensorrt DEBUG : Using cublas as core library tactic source
2023-11-12 20:10:18.565703841 [2023-11-12 20:10:18] frigate.detectors.plugins.tensorrt INFO : [MemUsageChange] Init cuBLAS/cuBLASLt: CPU +6, GPU +8, now: CPU 148, GPU 174 (MiB)
2023-11-12 20:10:18.565773431 [2023-11-12 20:10:18] frigate.detectors.plugins.tensorrt DEBUG : Trying to load shared library libcudnn.so.8
2023-11-12 20:10:18.565896250 [2023-11-12 20:10:18] frigate.detectors.plugins.tensorrt DEBUG : Loaded shared library libcudnn.so.8
2023-11-12 20:10:18.566259306 [2023-11-12 20:10:18] frigate.detectors.plugins.tensorrt DEBUG : Using cuDNN as plugin tactic source
2023-11-12 20:10:18.577729032 [2023-11-12 20:10:18] frigate.detectors.plugins.tensorrt DEBUG : Using cuDNN as core library tactic source
2023-11-12 20:10:18.577803541 [2023-11-12 20:10:18] frigate.detectors.plugins.tensorrt INFO : [MemUsageChange] Init cuDNN: CPU +1, GPU +10, now: CPU 149, GPU 184 (MiB)
2023-11-12 20:10:18.580630033 [2023-11-12 20:10:18] frigate.detectors.plugins.tensorrt DEBUG : Deserialization required 54866 microseconds.
2023-11-12 20:10:18.583986460 [2023-11-12 20:10:18] frigate.detectors.plugins.tensorrt INFO : [MemUsageChange] TensorRT-managed allocation in engine deserialization: CPU +0, GPU +30, now: CPU 0, GPU 30 (MiB)
2023-11-12 20:10:18.722630421 [2023-11-12 20:10:18] frigate.detectors.plugins.tensorrt DEBUG : Trying to load shared library libcublas.so.11
2023-11-12 20:10:18.728998827 [2023-11-12 20:10:18] frigate.detectors.plugins.tensorrt DEBUG : Loaded shared library libcublas.so.11
2023-11-12 20:10:18.730483732 [2023-11-12 20:10:18] frigate.detectors.plugins.tensorrt DEBUG : Using cublas as plugin tactic source
2023-11-12 20:10:18.730589011 [2023-11-12 20:10:18] frigate.detectors.plugins.tensorrt DEBUG : Using cublas as core library tactic source
2023-11-12 20:10:18.730733090 [2023-11-12 20:10:18] frigate.detectors.plugins.tensorrt INFO : [MemUsageChange] Init cuBLAS/cuBLASLt: CPU +0, GPU +8, now: CPU 119, GPU 176 (MiB)
2023-11-12 20:10:18.730845129 [2023-11-12 20:10:18] frigate.detectors.plugins.tensorrt DEBUG : Trying to load shared library libcudnn.so.8
2023-11-12 20:10:18.730914028 [2023-11-12 20:10:18] frigate.detectors.plugins.tensorrt DEBUG : Loaded shared library libcudnn.so.8
2023-11-12 20:10:18.731011447 [2023-11-12 20:10:18] frigate.detectors.plugins.tensorrt DEBUG : Using cuDNN as plugin tactic source
2023-11-12 20:10:18.731098596 [2023-11-12 20:10:18] frigate.detectors.plugins.tensorrt DEBUG : Using cuDNN as core library tactic source
2023-11-12 20:10:18.731173866 [2023-11-12 20:10:18] frigate.detectors.plugins.tensorrt INFO : [MemUsageChange] Init cuDNN: CPU +0, GPU +8, now: CPU 119, GPU 184 (MiB)
2023-11-12 20:10:18.731243825 [2023-11-12 20:10:18] frigate.detectors.plugins.tensorrt DEBUG : Total per-runner device persistent memory is 602624
2023-11-12 20:10:18.731326494 [2023-11-12 20:10:18] frigate.detectors.plugins.tensorrt DEBUG : Total per-runner host persistent memory is 109936
2023-11-12 20:10:18.731407113 [2023-11-12 20:10:18] frigate.detectors.plugins.tensorrt DEBUG : Allocated activation device memory of size 12806656
2023-11-12 20:10:18.731494872 [2023-11-12 20:10:18] frigate.detectors.plugins.tensorrt INFO : [MemUsageChange] TensorRT-managed allocation in IExecutionContext creation: CPU +0, GPU +13, now: CPU 0, GPU 43 (MiB)
2023-11-12 20:10:18.731558222 [2023-11-12 20:10:18] frigate.detectors.plugins.tensorrt DEBUG : CUDA lazy loading is enabled.
2023-11-12 20:10:18.731639571 [2023-11-12 20:10:18] frigate.detectors.plugins.tensorrt DEBUG : Allocated Tensor Binding input Memory 2076672 Bytes (519168 * DataType.FLOAT)
2023-11-12 20:10:18.731726000 [2023-11-12 20:10:18] frigate.detectors.plugins.tensorrt DEBUG : Input has Shape (1, 3, 416, 416)
2023-11-12 20:10:18.731805059 [2023-11-12 20:10:18] frigate.detectors.plugins.tensorrt DEBUG : Allocated Tensor Binding detections Memory 298116 Bytes (74529 * DataType.FLOAT)
2023-11-12 20:10:18.731882958 [2023-11-12 20:10:18] frigate.detectors.plugins.tensorrt DEBUG : Output has Shape (1, 74529, 1, 1)
2023-11-12 20:10:18.731962738 [2023-11-12 20:10:18] frigate.detectors.plugins.tensorrt DEBUG : TensorRT loaded. Input shape is ((416, 416), <class 'numpy.float32'>)
2023-11-12 20:10:18.732041767 [2023-11-12 20:10:18] frigate.detectors.plugins.tensorrt DEBUG : TensorRT version is 8
2023-11-12 20:10:23.200715004 [2023-11-12 20:10:23] asyncio DEBUG : Using selector: EpollSelector
2023-11-12 20:10:24.368587376 Process camera_processor:yi_pasillo:
2023-11-12 20:10:24.370903253 Traceback (most recent call last):
2023-11-12 20:10:24.370925223 File "/usr/lib/python3.9/multiprocessing/process.py", line 315, in _bootstrap
2023-11-12 20:10:24.370926833 self.run()
2023-11-12 20:10:24.370935243 File "/usr/lib/python3.9/multiprocessing/process.py", line 108, in run
2023-11-12 20:10:24.370936503 self._target(*self._args, **self._kwargs)
2023-11-12 20:10:24.370938233 File "/opt/frigate/frigate/video.py", line 436, in track_camera
2023-11-12 20:10:24.370939333 process_frames(
2023-11-12 20:10:24.370940563 File "/opt/frigate/frigate/video.py", line 689, in process_frames
2023-11-12 20:10:24.370941663 detect(
2023-11-12 20:10:24.370949583 File "/opt/frigate/frigate/video.py", line 474, in detect
2023-11-12 20:10:24.370950843 region_detections = object_detector.detect(tensor_input)
2023-11-12 20:10:24.370952163 File "/opt/frigate/frigate/object_detection.py", line 225, in detect
2023-11-12 20:10:24.370953493 (self.labels[int(d[0])], float(d[1]), (d[2], d[3], d[4], d[5]))
2023-11-12 20:10:24.370960602 KeyError: -16
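For what it's worth, the KeyError itself is just the labelmap lookup rejecting the bad class index: self.labels appears to be a dict keyed 0..N (hence a KeyError rather than an IndexError), so a negative id has no entry. A minimal illustration with example labels only:

# Minimal illustration of the failing lookup in object_detection.py's detect().
labels = {0: "person", 1: "bicycle", 2: "car"}  # truncated example labelmap
d = (-16.0, 0.9, 0.1, 0.2, 0.3, 0.4)            # class id, score, box, as seen above

try:
    detection = (labels[int(d[0])], float(d[1]), (d[2], d[3], d[4], d[5]))
except KeyError as err:
    print(f"KeyError: {err}")  # prints "KeyError: -16", matching the traceback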
Of course, I understand that yolov7x is much heavier, but for my use case tiny is enough, and the power consumption of a K620 is perfect for a small edge-AI deployment.
A segmentation fault is usually a model size mismatch or not enough memory on the card. Try recreating the container and regenerating the models. Make sure you have FP16 set to false.
Also, that card uses compute capability 5.0, which NVIDIA has completely removed support for in the next version of TensorRT. Just keep in mind that future updates to TensorRT will lead to this card not working. It should work in the current version, though, I believe, but I did not check CUDA driver versioning and all that. I would read up on it and see.
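If you want to double check what the card reports from inside the container, something like the snippet below works, assuming pycuda is available there (otherwise NVIDIA's CUDA GPU compute capability table lists it too):

# Quick compute-capability check; assumes pycuda is installed in the container.
import pycuda.driver as cuda

cuda.init()
for i in range(cuda.Device.count()):
    dev = cuda.Device(i)
    major, minor = dev.compute_capability()
    print(f"{dev.name()}: compute capability {major}.{minor}")  # a K620 reports 5.0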
Double check model sizes and reset or recreate the build folder before building. I just deployed it to 3 different locations with tensorrt and is working quite well at all so far.
My bad, I didn't change the size in the config. However same KeyError as before:
2023-11-12 20:10:18.311864447 [2023-11-12 20:10:18] frigate.detectors.plugins.tensorrt DEBUG : Num Available Devices: 1 2023-11-12 20:10:18.321110215 [2023-11-12 20:10:18] frigate.detectors.plugins.tensorrt DEBUG : Registered plugin creator - ::BatchedNMSDynamic_TRT version 1 2023-11-12 20:10:18.321115575 [2023-11-12 20:10:18] frigate.detectors.plugins.tensorrt DEBUG : Registered plugin creator - ::BatchedNMS_TRT version 1 2023-11-12 20:10:18.321117525 [2023-11-12 20:10:18] frigate.detectors.plugins.tensorrt DEBUG : Registered plugin creator - ::BatchTilePlugin_TRT version 1 2023-11-12 20:10:18.321142584 [2023-11-12 20:10:18] frigate.detectors.plugins.tensorrt DEBUG : Registered plugin creator - ::Clip_TRT version 1 2023-11-12 20:10:18.321145904 [2023-11-12 20:10:18] frigate.detectors.plugins.tensorrt DEBUG : Registered plugin creator - ::CoordConvAC version 1 2023-11-12 20:10:18.321148004 [2023-11-12 20:10:18] frigate.detectors.plugins.tensorrt DEBUG : Registered plugin creator - ::CropAndResizeDynamic version 1 2023-11-12 20:10:18.321184904 [2023-11-12 20:10:18] frigate.detectors.plugins.tensorrt DEBUG : Registered plugin creator - ::CropAndResize version 1 2023-11-12 20:10:18.321187434 [2023-11-12 20:10:18] frigate.detectors.plugins.tensorrt DEBUG : Registered plugin creator - ::DecodeBbox3DPlugin version 1 2023-11-12 20:10:18.321213984 [2023-11-12 20:10:18] frigate.detectors.plugins.tensorrt DEBUG : Registered plugin creator - ::DetectionLayer_TRT version 1 2023-11-12 20:10:18.321216254 [2023-11-12 20:10:18] frigate.detectors.plugins.tensorrt DEBUG : Registered plugin creator - ::EfficientNMS_Explicit_TF_TRT version 1 2023-11-12 20:10:18.321218204 [2023-11-12 20:10:18] frigate.detectors.plugins.tensorrt DEBUG : Registered plugin creator - ::EfficientNMS_Implicit_TF_TRT version 1 2023-11-12 20:10:18.321219884 [2023-11-12 20:10:18] frigate.detectors.plugins.tensorrt DEBUG : Registered plugin creator - ::EfficientNMS_ONNX_TRT version 1 2023-11-12 20:10:18.321243933 [2023-11-12 20:10:18] frigate.detectors.plugins.tensorrt DEBUG : Registered plugin creator - ::EfficientNMS_TRT version 1 2023-11-12 20:10:18.321246343 [2023-11-12 20:10:18] frigate.detectors.plugins.tensorrt DEBUG : Registered plugin creator - ::FlattenConcat_TRT version 1 2023-11-12 20:10:18.321248163 [2023-11-12 20:10:18] frigate.detectors.plugins.tensorrt DEBUG : Registered plugin creator - ::fMHA_V2 version 1 2023-11-12 20:10:18.321249783 [2023-11-12 20:10:18] frigate.detectors.plugins.tensorrt DEBUG : Registered plugin creator - ::fMHCA version 1 2023-11-12 20:10:18.321281433 [2023-11-12 20:10:18] frigate.detectors.plugins.tensorrt DEBUG : Registered plugin creator - ::GenerateDetection_TRT version 1 2023-11-12 20:10:18.321283633 [2023-11-12 20:10:18] frigate.detectors.plugins.tensorrt DEBUG : Registered plugin creator - ::GridAnchor_TRT version 1 2023-11-12 20:10:18.321285283 [2023-11-12 20:10:18] frigate.detectors.plugins.tensorrt DEBUG : Registered plugin creator - ::GridAnchorRect_TRT version 1 2023-11-12 20:10:18.321287143 [2023-11-12 20:10:18] frigate.detectors.plugins.tensorrt DEBUG : Registered plugin creator - ::GroupNorm version 1 2023-11-12 20:10:18.321288763 [2023-11-12 20:10:18] frigate.detectors.plugins.tensorrt DEBUG : Registered plugin creator - ::InstanceNormalization_TRT version 1 2023-11-12 20:10:18.321290453 [2023-11-12 20:10:18] frigate.detectors.plugins.tensorrt DEBUG : Registered plugin creator - ::InstanceNormalization_TRT version 2 2023-11-12 20:10:18.321292323 [2023-11-12 20:10:18] 
frigate.detectors.plugins.tensorrt DEBUG : Registered plugin creator - ::LayerNorm version 1 2023-11-12 20:10:18.321293823 [2023-11-12 20:10:18] frigate.detectors.plugins.tensorrt DEBUG : Registered plugin creator - ::LReLU_TRT version 1 2023-11-12 20:10:18.321295463 [2023-11-12 20:10:18] frigate.detectors.plugins.tensorrt DEBUG : Registered plugin creator - ::MultilevelCropAndResize_TRT version 1 2023-11-12 20:10:18.321334752 [2023-11-12 20:10:18] frigate.detectors.plugins.tensorrt DEBUG : Registered plugin creator - ::MultilevelProposeROI_TRT version 1 2023-11-12 20:10:18.321337252 [2023-11-12 20:10:18] frigate.detectors.plugins.tensorrt DEBUG : Registered plugin creator - ::MultiscaleDeformableAttnPlugin_TRT version 1 2023-11-12 20:10:18.321338982 [2023-11-12 20:10:18] frigate.detectors.plugins.tensorrt DEBUG : Registered plugin creator - ::NMSDynamic_TRT version 1 2023-11-12 20:10:18.321340472 [2023-11-12 20:10:18] frigate.detectors.plugins.tensorrt DEBUG : Registered plugin creator - ::NMS_TRT version 1 2023-11-12 20:10:18.321342422 [2023-11-12 20:10:18] frigate.detectors.plugins.tensorrt DEBUG : Registered plugin creator - ::Normalize_TRT version 1 2023-11-12 20:10:18.321344092 [2023-11-12 20:10:18] frigate.detectors.plugins.tensorrt DEBUG : Registered plugin creator - ::PillarScatterPlugin version 1 2023-11-12 20:10:18.321345662 [2023-11-12 20:10:18] frigate.detectors.plugins.tensorrt DEBUG : Registered plugin creator - ::PriorBox_TRT version 1 2023-11-12 20:10:18.321347262 [2023-11-12 20:10:18] frigate.detectors.plugins.tensorrt DEBUG : Registered plugin creator - ::ProposalDynamic version 1 2023-11-12 20:10:18.321391612 [2023-11-12 20:10:18] frigate.detectors.plugins.tensorrt DEBUG : Registered plugin creator - ::ProposalLayer_TRT version 1 2023-11-12 20:10:18.321393682 [2023-11-12 20:10:18] frigate.detectors.plugins.tensorrt DEBUG : Registered plugin creator - ::Proposal version 1 2023-11-12 20:10:18.321395372 [2023-11-12 20:10:18] frigate.detectors.plugins.tensorrt DEBUG : Registered plugin creator - ::PyramidROIAlign_TRT version 1 2023-11-12 20:10:18.321397052 [2023-11-12 20:10:18] frigate.detectors.plugins.tensorrt DEBUG : Registered plugin creator - ::Region_TRT version 1 2023-11-12 20:10:18.321398532 [2023-11-12 20:10:18] frigate.detectors.plugins.tensorrt DEBUG : Registered plugin creator - ::Reorg_TRT version 1 2023-11-12 20:10:18.321400562 [2023-11-12 20:10:18] frigate.detectors.plugins.tensorrt DEBUG : Registered plugin creator - ::ResizeNearest_TRT version 1 2023-11-12 20:10:18.321402102 [2023-11-12 20:10:18] frigate.detectors.plugins.tensorrt DEBUG : Registered plugin creator - ::ROIAlign_TRT version 1 2023-11-12 20:10:18.321403622 [2023-11-12 20:10:18] frigate.detectors.plugins.tensorrt DEBUG : Registered plugin creator - ::RPROI_TRT version 1 2023-11-12 20:10:18.321405382 [2023-11-12 20:10:18] frigate.detectors.plugins.tensorrt DEBUG : Registered plugin creator - ::ScatterND version 1 2023-11-12 20:10:18.321416922 [2023-11-12 20:10:18] frigate.detectors.plugins.tensorrt DEBUG : Registered plugin creator - ::SeqLen2Spatial version 1 2023-11-12 20:10:18.321418842 [2023-11-12 20:10:18] frigate.detectors.plugins.tensorrt DEBUG : Registered plugin creator - ::SpecialSlice_TRT version 1 2023-11-12 20:10:18.321420762 [2023-11-12 20:10:18] frigate.detectors.plugins.tensorrt DEBUG : Registered plugin creator - ::SplitGeLU version 1 2023-11-12 20:10:18.321422272 [2023-11-12 20:10:18] frigate.detectors.plugins.tensorrt DEBUG : Registered plugin creator - ::Split version 1 
2023-11-12 20:10:18.321423832 [2023-11-12 20:10:18] frigate.detectors.plugins.tensorrt DEBUG : Registered plugin creator - ::VoxelGeneratorPlugin version 1
2023-11-12 20:10:18.372021138 [2023-11-12 20:10:18] frigate.detectors.plugins.tensorrt INFO : Loaded engine size: 30 MiB
2023-11-12 20:10:18.442548787 [2023-11-12 20:10:18] urllib3.connectionpool DEBUG : https://api.github.com:443 "GET /repos/blakeblackshear/frigate/releases/latest HTTP/1.1" 200 1767
2023-11-12 20:10:18.476639858 [2023-11-12 20:10:18] peewee.sqliteq DEBUG : received query UPDATE "event" SET "end_time" = ("event"."start_time" + ?) WHERE ("event"."end_time" IS ?)
2023-11-12 20:10:18.476772446 [2023-11-12 20:10:18] peewee DEBUG : ('UPDATE "event" SET "end_time" = ("event"."start_time" + ?) WHERE ("event"."end_time" IS ?)', [30, None])
2023-11-12 20:10:18.538630071 [2023-11-12 20:10:18] frigate.detectors.plugins.tensorrt DEBUG : Trying to load shared library libcublas.so.11
2023-11-12 20:10:18.538747600 [2023-11-12 20:10:18] frigate.detectors.plugins.tensorrt DEBUG : Loaded shared library libcublas.so.11
2023-11-12 20:10:18.565552873 [2023-11-12 20:10:18] frigate.detectors.plugins.tensorrt DEBUG : Using cublas as plugin tactic source
2023-11-12 20:10:18.565633582 [2023-11-12 20:10:18] frigate.detectors.plugins.tensorrt DEBUG : Using cublas as core library tactic source
2023-11-12 20:10:18.565703841 [2023-11-12 20:10:18] frigate.detectors.plugins.tensorrt INFO : [MemUsageChange] Init cuBLAS/cuBLASLt: CPU +6, GPU +8, now: CPU 148, GPU 174 (MiB)
2023-11-12 20:10:18.565773431 [2023-11-12 20:10:18] frigate.detectors.plugins.tensorrt DEBUG : Trying to load shared library libcudnn.so.8
2023-11-12 20:10:18.565896250 [2023-11-12 20:10:18] frigate.detectors.plugins.tensorrt DEBUG : Loaded shared library libcudnn.so.8
2023-11-12 20:10:18.566259306 [2023-11-12 20:10:18] frigate.detectors.plugins.tensorrt DEBUG : Using cuDNN as plugin tactic source
2023-11-12 20:10:18.577729032 [2023-11-12 20:10:18] frigate.detectors.plugins.tensorrt DEBUG : Using cuDNN as core library tactic source
2023-11-12 20:10:18.577803541 [2023-11-12 20:10:18] frigate.detectors.plugins.tensorrt INFO : [MemUsageChange] Init cuDNN: CPU +1, GPU +10, now: CPU 149, GPU 184 (MiB)
2023-11-12 20:10:18.580630033 [2023-11-12 20:10:18] frigate.detectors.plugins.tensorrt DEBUG : Deserialization required 54866 microseconds.
2023-11-12 20:10:18.583986460 [2023-11-12 20:10:18] frigate.detectors.plugins.tensorrt INFO : [MemUsageChange] TensorRT-managed allocation in engine deserialization: CPU +0, GPU +30, now: CPU 0, GPU 30 (MiB)
2023-11-12 20:10:18.722630421 [2023-11-12 20:10:18] frigate.detectors.plugins.tensorrt DEBUG : Trying to load shared library libcublas.so.11
2023-11-12 20:10:18.728998827 [2023-11-12 20:10:18] frigate.detectors.plugins.tensorrt DEBUG : Loaded shared library libcublas.so.11
2023-11-12 20:10:18.730483732 [2023-11-12 20:10:18] frigate.detectors.plugins.tensorrt DEBUG : Using cublas as plugin tactic source
2023-11-12 20:10:18.730589011 [2023-11-12 20:10:18] frigate.detectors.plugins.tensorrt DEBUG : Using cublas as core library tactic source
2023-11-12 20:10:18.730733090 [2023-11-12 20:10:18] frigate.detectors.plugins.tensorrt INFO : [MemUsageChange] Init cuBLAS/cuBLASLt: CPU +0, GPU +8, now: CPU 119, GPU 176 (MiB)
2023-11-12 20:10:18.730845129 [2023-11-12 20:10:18] frigate.detectors.plugins.tensorrt DEBUG : Trying to load shared library libcudnn.so.8
2023-11-12 20:10:18.730914028 [2023-11-12 20:10:18] frigate.detectors.plugins.tensorrt DEBUG : Loaded shared library libcudnn.so.8
2023-11-12 20:10:18.731011447 [2023-11-12 20:10:18] frigate.detectors.plugins.tensorrt DEBUG : Using cuDNN as plugin tactic source
2023-11-12 20:10:18.731098596 [2023-11-12 20:10:18] frigate.detectors.plugins.tensorrt DEBUG : Using cuDNN as core library tactic source
2023-11-12 20:10:18.731173866 [2023-11-12 20:10:18] frigate.detectors.plugins.tensorrt INFO : [MemUsageChange] Init cuDNN: CPU +0, GPU +8, now: CPU 119, GPU 184 (MiB)
2023-11-12 20:10:18.731243825 [2023-11-12 20:10:18] frigate.detectors.plugins.tensorrt DEBUG : Total per-runner device persistent memory is 602624
2023-11-12 20:10:18.731326494 [2023-11-12 20:10:18] frigate.detectors.plugins.tensorrt DEBUG : Total per-runner host persistent memory is 109936
2023-11-12 20:10:18.731407113 [2023-11-12 20:10:18] frigate.detectors.plugins.tensorrt DEBUG : Allocated activation device memory of size 12806656
2023-11-12 20:10:18.731494872 [2023-11-12 20:10:18] frigate.detectors.plugins.tensorrt INFO : [MemUsageChange] TensorRT-managed allocation in IExecutionContext creation: CPU +0, GPU +13, now: CPU 0, GPU 43 (MiB)
2023-11-12 20:10:18.731558222 [2023-11-12 20:10:18] frigate.detectors.plugins.tensorrt DEBUG : CUDA lazy loading is enabled.
2023-11-12 20:10:18.731639571 [2023-11-12 20:10:18] frigate.detectors.plugins.tensorrt DEBUG : Allocated Tensor Binding input Memory 2076672 Bytes (519168 * DataType.FLOAT)
2023-11-12 20:10:18.731726000 [2023-11-12 20:10:18] frigate.detectors.plugins.tensorrt DEBUG : Input has Shape (1, 3, 416, 416)
2023-11-12 20:10:18.731805059 [2023-11-12 20:10:18] frigate.detectors.plugins.tensorrt DEBUG : Allocated Tensor Binding detections Memory 298116 Bytes (74529 * DataType.FLOAT)
2023-11-12 20:10:18.731882958 [2023-11-12 20:10:18] frigate.detectors.plugins.tensorrt DEBUG : Output has Shape (1, 74529, 1, 1)
2023-11-12 20:10:18.731962738 [2023-11-12 20:10:18] frigate.detectors.plugins.tensorrt DEBUG : TensorRT loaded. Input shape is ((416, 416), <class 'numpy.float32'>)
2023-11-12 20:10:18.732041767 [2023-11-12 20:10:18] frigate.detectors.plugins.tensorrt DEBUG : TensorRT version is 8
2023-11-12 20:10:23.200715004 [2023-11-12 20:10:23] asyncio DEBUG : Using selector: EpollSelector
2023-11-12 20:10:24.368587376 Process camera_processor:yi_pasillo:
2023-11-12 20:10:24.370903253 Traceback (most recent call last):
2023-11-12 20:10:24.370925223   File "/usr/lib/python3.9/multiprocessing/process.py", line 315, in _bootstrap
2023-11-12 20:10:24.370926833     self.run()
2023-11-12 20:10:24.370935243   File "/usr/lib/python3.9/multiprocessing/process.py", line 108, in run
2023-11-12 20:10:24.370936503     self._target(*self._args, **self._kwargs)
2023-11-12 20:10:24.370938233   File "/opt/frigate/frigate/video.py", line 436, in track_camera
2023-11-12 20:10:24.370939333     process_frames(
2023-11-12 20:10:24.370940563   File "/opt/frigate/frigate/video.py", line 689, in process_frames
2023-11-12 20:10:24.370941663     detect(
2023-11-12 20:10:24.370949583   File "/opt/frigate/frigate/video.py", line 474, in detect
2023-11-12 20:10:24.370950843     region_detections = object_detector.detect(tensor_input)
2023-11-12 20:10:24.370952163   File "/opt/frigate/frigate/object_detection.py", line 225, in detect
2023-11-12 20:10:24.370953493     (self.labels[int(d[0])], float(d[1]), (d[2], d[3], d[4], d[5]))
2023-11-12 20:10:24.370960602 KeyError: -16
Tiny will work OK, but yolov7x is roughly 20 times larger than yolov7-tiny and uses different activation functions, so it is much, much heavier. If you get 17 ms with tiny, 100-300 ms is probably expected with yolov7x. I get about 1.7 ms per detection with yolov7-tiny on newer cards; it just isn't accurate enough for my use case.
Of course, I understand, but for my use case tiny is enough, and the power consumption of a K620 is perfect for a small edge-AI deployment.
A segmentation fault usually means the model is too big or there is not enough memory on the card. Try recreating the container and regenerating the models, and make sure you have FP16 set to false.
Sorry, I'm not reading carefully. You have a different error.
How are you getting the build? Are you pulling the image or building it locally?
It was pulled from the official repo, with FP16 set to false and a clean model build.
Is your config still the same as 3 weeks ago? Are you binding the whole config directory? If you are, what is the directory tree it is creating there? Can you try removing “tensorrt” from the model path, or make sure that path and the model files are being created?
Yes, it's the same; the only difference is the model, but I tried both and got the same KeyError. The files are being created, and I delete the folder each time I try.
I also tried to create them the old 0.12 way, only changing the tensorrt container to 23.02, but got the same error.
I also tried reverting 0.13 beta 2 to tensorrt 8.4.1 but I kept getting the same error.
As I said, I suspect some Python dependency is breaking detection on legacy cards, i.e. Maxwell, but reviewing the code and the commits I can't reach a conclusion.
I would grab the nvidia tensorrt 8.5.3 container, clone the tensorrt_demos repo into it, manually generate a model, and see if it works with nvidia's trtexec test tool in that container. I can send you a bash script when I get home in a few hours if you like.
Already tried it, with this script https://github.com/blakeblackshear/frigate/blob/master/docker/tensorrt_models.sh and the tensorrt 8.5.3 container, which is 23.02.
It works?
It builds successfully, but when I pass it to frigate it gives the same KeyError when starting the detector.
If you open bash in the container, there is a tool called trtexec that you can use to run the model with the plugin and narrow down where the problem is. It's normally located at /workspace/tensorrt/bin/trtexec and is easy to use. A quick Google search will help you, or I can send you my example; a rough sketch of what such a run could look like is below.
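To be concrete, a minimal sketch of that kind of trtexec check, assuming the engine was saved as yolov7-tiny-416.trt and the yolo layer plugin from tensorrt_demos was built as libyolo_layer.so (both paths are placeholders, not the exact ones from this thread):
# Load an existing engine together with the yolo layer plugin and run a few timed inferences
/workspace/tensorrt/bin/trtexec \
  --loadEngine=/workspace/yolov7-tiny-416.trt \
  --plugins=/workspace/tensorrt_demos/plugins/libyolo_layer.so
If trtexec can load and run the engine here but Frigate still throws the KeyError, the problem is more likely in how the output tensor is being read than in the engine itself.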
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
I think I stumbled upon something that might work. I was seeing this key issue myself and have been banging my head against it all day. I finally decided I was going to generate a tflite and try it via CPU, but I got distracted and ended up auto-piloting another TRT generation. To my surprise, the error went away in Frigate.
So what did I magically do? Looking over my messy scripts and reviewing the logs, I came upon this tidbit:
ONNX: starting export with onnx 1.15.0 opset 17...
ONNX: simplifying with onnxsim 0.4.35...
ONNX: export success ✅ 1.5s, saved as 'yolov8n.onnx' (12.1 MB)
TensorFlow SavedModel: running 'onnx2tf -i "yolov8n.onnx" -o "yolov8n_saved_model" -nuo --non_verbose'
2024-01-03 06:35:22.084083: E tensorflow/compiler/xla/stream_executor/cuda/cuda_driver.cc:268] failed call to cuInit: CUDA_ERROR_NO_DEVICE: no CUDA-capable device is detected
TensorFlow SavedModel: export success ✅ 12.7s, saved as 'yolov8n_saved_model' (30.5 MB)
TensorFlow Lite: starting export with tensorflow 2.13.1...
TensorFlow Lite: export success ✅ 0.0s, saved as 'yolov8n_saved_model/yolov8n_float32.tflite' (12.1 MB)
Awesome, I created a TFLite, right? Well, something happened to the generated ONNX used for the TFLite conversion that wasn't happening before: ONNX: simplifying with onnxsim 0.4.35...
On auto-pilot I mistakenly used this simplified ONNX in my trtexec command:
docker run \
--runtime=nvidia \
--gpus all \
--rm -it \
-e USE_FP16=False \
-v $PWD:/workspace nvcr.io/nvidia/tensorrt:23.03-py3 /bin/bash \
-c "cd /workspace && trtexec --onnx=yolov8n.onnx --saveEngine=rex.trt"
This output file appears to be working for me as of right now.
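For anyone who wants to reproduce just that difference, the step that seems to matter is the onnxsim simplification pass. A minimal sketch of running it standalone on an unsimplified export (filenames are examples; onnxsim also has a Python API):
# Run the same graph-simplification pass the exporter logged above, on its own
pip install onnxsim
onnxsim yolov8n.onnx yolov8n-sim.onnx
# Then build the engine from the simplified graph, mirroring the trtexec call above:
# trtexec --onnx=yolov8n-sim.onnx --saveEngine=rex.trt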
@meriley can you give a crash course on how you generated the tflite model... I'd like to try the generated onnx as well.
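For reference, the export log above matches Ultralytics' YOLOv8 exporter, so a plausible but unconfirmed reconstruction of the command that produced both the tflite and the intermediate onnx would be something like:
pip install ultralytics
# Exporting to tflite goes through ONNX (simplified with onnxsim) and onnx2tf,
# leaving yolov8n.onnx and yolov8n_saved_model/ behind, as in the log above
yolo export model=yolov8n.pt format=tflite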
Describe the problem you are having
TensorRT detector fails to start, returning a KeyError.
This has appeared more than once in the issues page, as seen here https://github.com/blakeblackshear/frigate/issues/7149 or here https://github.com/blakeblackshear/frigate/issues/8240, where it turned out to be an outdated nvidia driver problem, the requirement being >=530.
I have tried fresh builds of the models (yolov7-320 and yolov7-tiny-416) with drivers 535.113.01 and 545.23.06 with no success, always hitting the same KeyError. I have tried multiple times to be sure it is not a problem with building the models.
I suspect there may be another nvidia or cuda library version requirement apart from the nvidia driver, but I can't find anything related. A quick host-versus-container comparison of what the driver reports is sketched below.
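As a rough check (the container name "frigate" is a placeholder, and this assumes the NVIDIA container runtime exposes nvidia-smi inside the container):
# Driver and CUDA version as seen from the host
nvidia-smi
# The same view from inside the Frigate container
docker exec frigate nvidia-smi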
Version
0.13.0-0858859
Frigate config file
Relevant log output
FFprobe output from your camera
Frigate stats
No response
Operating system
UNRAID
Install method
Docker CLI
Coral version
CPU (no coral)
Network connection
Wired
Camera make and model
yi dome
Any other information that may be helpful