Tianxiaomo / pytorch-YOLOv4

PyTorch, ONNX and TensorRT implementation of YOLOv4
Apache License 2.0

Memory leak running on Jetson Nano #451

Open seanreynoldscs opened 3 years ago

seanreynoldscs commented 3 years ago

I have a trained model that I'm interested in deploying to the edge. On the Jetson Nano I can load my model and an image, then process them with do_detect using the "cpu" device and use_cuda = 0. However, when I use the "cuda" device and use_cuda = 1, do_detect never returns, as if it were stuck in an infinite loop.
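For illustration, here is a minimal sketch of the two calls described above (this assumes the do_detect signature from tool/torch_utils.py and the Darknet loader from tool/darknet2pytorch; the cfg/weights/image paths are placeholders):

```python
import cv2
from tool.darknet2pytorch import Darknet
from tool.torch_utils import do_detect

# Placeholder paths; substitute your own cfg, weights and test image.
model = Darknet("cfg/yolov4.cfg")
model.load_weights("yolov4.weights")

img = cv2.imread("data/dog.jpg")
img = cv2.resize(img, (model.width, model.height))
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)

# CPU path: this returns fine on the Nano.
model.cpu()
boxes = do_detect(model, img, 0.4, 0.6, use_cuda=0)

# GPU path: this is the call that never returns on the Nano.
model.cuda()
boxes = do_detect(model, img, 0.4, 0.6, use_cuda=1)
```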

I don't believe this is a memory issue, because everything loads into memory fine before do_detect is called. I would expect do_detect to return in well under a second, since the Jetson Nano GPU should give a reasonably high FPS.

The project works on the NVIDIA GPU in my Windows conda environment, and it works in CPU mode everywhere; I only hit this issue on the Jetson Nano GPU.

Base image: sudo docker pull nvcr.io/nvidia/l4t-pytorch:r32.5.0-pth1.7-py3

Installed packages: appdirs (1.4.4), Cython (0.29.21), dataclasses (0.8), decorator (4.4.2), future (0.18.2), Mako (1.1.3), MarkupSafe (1.1.1), numpy (1.19.4), Pillow (8.0.1), pip (9.0.1), pycuda (2020.1), pytools (2020.4.4), setuptools (51.0.0), six (1.15.0), torch (1.7.0), torchaudio (0.7.0a0+ac17b64), torchvision (0.8.0a0+45f960c), typing-extensions (3.7.4.3), wheel (0.36.1)
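A minimal CUDA sanity check that could be run inside this container, to separate a general GPU/driver problem from a do_detect problem (sketch only, using only standard torch calls from the 1.7.0 build listed above):

```python
import torch

# Confirm the l4t-pytorch build actually sees the Nano's GPU.
print(torch.__version__, torch.cuda.is_available())
print(torch.cuda.get_device_name(0))

# A tiny matmul on the GPU; if this also hangs, the problem is
# below the YOLOv4 code (driver / JetPack / PyTorch build) rather
# than in do_detect itself.
a = torch.randn(256, 256, device="cuda")
b = torch.randn(256, 256, device="cuda")
torch.cuda.synchronize()
print((a @ b).sum().item())
```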

Any advice?