NVIDIA-AI-IOT / trt_pose

Real-time pose estimation accelerated with NVIDIA TensorRT
MIT License

Kernel dies when converting model to TRT when running live_demo.ipynb #130

Open TomasMendozaHN opened 3 years ago

TomasMendozaHN commented 3 years ago

I have done these steps on my Jetson Nano and everything worked flawlessly.

Now I have a Jetson Xavier NX. Unfortunately, despite following the same procedure to prepare everything on the NX, I get an error when I try to load the TRT model (converted on my Jetson Nano). So I decided to convert it again on the NX. However, as soon as I run the line model_trt = torch2trt.torch2trt(model, [data], fp16_mode=True, max_workspace_size=1<<25) from live_demo.ipynb, the whole kernel crashes.

Any suggestion as to what I should do?
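Editor's note: a silent kernel death during torch2trt conversion is typically the Linux out-of-memory killer terminating the Jupyter process, since building a TensorRT engine is memory-hungry. Before converting, it is worth checking how much RAM and swap headroom the board has; the commands below are standard Linux utilities, not specific to this repo:

```shell
# Show total/used/free RAM and swap in human-readable units.
# If the "Swap" row is all zeros, the conversion has no safety
# margin once physical RAM is exhausted.
free -h

# List any active swap devices/files (prints nothing if none).
swapon --show
```

If `free -h` reports no swap, creating a swap file (as described in the follow-up below) is the usual remedy on Jetson boards.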

TomasMendozaHN commented 3 years ago

Never mind. I forgot to create the swap memory; that's why my kernel kept dying.
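Editor's note: the commenter does not show the exact commands used. A minimal sketch of creating a swap file on a Jetson board follows; the 4 GB size and /var/swapfile path are assumptions, so adjust them to your storage. These commands modify system state and require root, so treat them as a configuration recipe rather than something to paste blindly:

```shell
# Allocate a 4 GB file to back the swap area (size is an assumption)
sudo fallocate -l 4G /var/swapfile
# Restrict permissions: swap files must not be world-readable
sudo chmod 600 /var/swapfile
# Write the swap signature onto the file
sudo mkswap /var/swapfile
# Enable it immediately
sudo swapon /var/swapfile
# Make it persistent across reboots via /etc/fstab
echo '/var/swapfile swap swap defaults 0 0' | sudo tee -a /etc/fstab
```

After this, `free -h` should show the new swap capacity, and the torch2trt conversion has room to spill when physical RAM runs out.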

sxh-lsc commented 3 years ago

My kernel also crashes when I run the line model_trt = torch2trt.torch2trt(model, [data], fp16_mode=True, max_workspace_size=1<<25) (Jetson Nano). Can you please tell me how you created the swap memory to fix it?