NVIDIA-AI-IOT / trt_pose

Real-time pose estimation accelerated with NVIDIA TensorRT

FPS on Jetson Xavier #44

IsmaelElHouas opened this issue 4 years ago

IsmaelElHouas commented 4 years ago

Hi! I managed to run Resnet18 on a Jetson Xavier and got around 30 FPS. How did you reach the reported 251 FPS?

Thanks

thancaocuong commented 4 years ago

Hi @IsmaelElHouas, the 251 FPS figure is inference time only (it excludes the post-processing step, i.e. the PAF parsing). For what it's worth, I get about 7 FPS overall running densenet121 at 320x320 on a Jetson TX2, so on a Xavier it should be better.
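
In case it helps, below is a rough timing sketch for the inference-only number (the part the 251 FPS refers to). It assumes an engine already exported with torch2trt and saved as `resnet18_trt.pth` (the filename is just an example) and a 224x224 input; the PAF parsing step would come on top of this and lower the end-to-end FPS.

```python
# Rough FPS check for the TensorRT engine alone (no PAF post-processing).
# Assumes an engine exported with torch2trt and saved as 'resnet18_trt.pth'
# (example filename) and a 1x3x224x224 input.
import time
import torch
from torch2trt import TRTModule

model_trt = TRTModule()
model_trt.load_state_dict(torch.load('resnet18_trt.pth'))

data = torch.zeros((1, 3, 224, 224)).cuda()

# Warm up so one-time startup cost does not skew the measurement.
for _ in range(20):
    model_trt(data)
torch.cuda.synchronize()

n = 200
t0 = time.time()
for _ in range(n):
    model_trt(data)
torch.cuda.synchronize()
print('inference-only FPS: %.1f' % (n / (time.time() - t0)))
```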

Tetsujinfr commented 4 years ago

FYI, I got 128 FPS on a Xavier NX, so 251 FPS on a Xavier AGX sounds plausible. I used the 2-core/15W power mode, i.e. giving as much of the power envelope as possible to the GPU. If I switch to the 4-core/15W mode, the FPS drops to 82. Just in case you have not looked at the power modes carefully.

jasonakon commented 4 years ago

@IsmaelElHouas, @thancaocuong, may I know which versions of PyTorch and torchvision you installed to run the inference? I keep running into this issue (screenshot attached).

thancaocuong commented 4 years ago

@jasonakon you need to remove trt_pose completely and reinstall it. On the Jetson NX and Jetson TX2 I use torch 1.5.0 with JetPack 4.4, so you can try the latest JetPack version; I use it without any problems. You can follow this guide to get a compatible PyTorch version: https://forums.developer.nvidia.com/t/pytorch-for-jetson-nano-version-1-5-0-now-available/72048
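
Before reinstalling, a quick environment check like the sketch below can confirm the versions match what JetPack expects (nothing trt_pose-specific here, just the standard imports; the TensorRT Python bindings ship with JetPack):

```python
# Quick sanity check of the PyTorch / torchvision / TensorRT install.
import torch
import torchvision

print('torch:', torch.__version__)
print('torchvision:', torchvision.__version__)
print('CUDA available:', torch.cuda.is_available())

try:
    import tensorrt
    print('TensorRT:', tensorrt.__version__)
except ImportError:
    print('TensorRT Python bindings not found')
```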