hova88 / Lidardet

A rewritten version of the LiDAR detection deep-learning framework PointPillars for fast multi-device applications (training on PC, inference on vehicle).

Can't run convert2trt.py #1

Closed indra4837 closed 3 years ago

indra4837 commented 3 years ago

```
$ python3 libs/tools/convert2trt.py convert --config_path=./params/configs/pointpillars_kitti_car_xy16.yaml --weights_file=/home/arc/catkin_ws/src/lidar_detection_ros/models/pointpillar_7728.pth --trt_path=/home/arc/Documents/
```

```
---------------------------------------------------------------------------
**** TensorRT: The PFN subnetwork is being transformed ****

Traceback (most recent call last):
  File "libs/tools/convert2trt.py", line 126, in <module>
    fire.Fire()
  File "/home/arc/.local/lib/python3.6/site-packages/fire/core.py", line 138, in Fire
    component_trace = _Fire(component, args, parsed_flag_args, context, name)
  File "/home/arc/.local/lib/python3.6/site-packages/fire/core.py", line 468, in _Fire
    target=component.__name__)
  File "/home/arc/.local/lib/python3.6/site-packages/fire/core.py", line 672, in _CallAndUpdateTrace
    component = fn(*varargs, **kwargs)
  File "libs/tools/convert2trt.py", line 105, in convert
    max_workspace_size=1 << 20)
TypeError: _torch_depoly() got an unexpected keyword argument 'fp16_mode'
```

I ran this on a Jetson Xavier and used checkpoints from the OpenPCDet repository. May I know what I should do?

indra4837 commented 3 years ago

Turns out I installed the wrong torch2trt module. Everything works well, great work!

May I ask if you will be adding support for other lidar detection models such as PV-RCNN, TANET, SECOND? Thank you!

hova88 commented 3 years ago

> Turns out I installed the wrong torch2trt module. Everything works well, great work!
>
> May I ask if you will be adding support for other lidar detection models such as PV-RCNN, TANET, SECOND? Thank you!

Actually, no... I don't think the approach in this repository is suitable for real deployment. If you're interested, take a look at Autoware.ai and Apollo.

hova88 commented 3 years ago

> Turns out I installed the wrong torch2trt module. Everything works well, great work!
>
> May I ask if you will be adding support for other lidar detection models such as PV-RCNN, TANET, SECOND? Thank you!

That said, I'm now thinking about converting some OpenPCDet models this way, but it's still difficult for me and will take time.

indra4837 commented 3 years ago

> Turns out I installed the wrong torch2trt module. Everything works well, great work! May I ask if you will be adding support for other lidar detection models such as PV-RCNN, TANET, SECOND? Thank you!
>
> Actually, no... I don't think the approach in this repository is suitable for real deployment. If you're interested, take a look at Autoware.ai and Apollo.

I did see that repository, but it only supports PointPillars and not other models.

Also, I was wondering why TensorRT inference only achieves around 3 FPS on the Xavier in ROS. That is about the same as a ROS node I wrote for the SECOND model, which also achieves 3 FPS in ROS on the Jetson Xavier.

Do you have any idea why this might be the case? Thank you.

hova88 commented 3 years ago

> Turns out I installed the wrong torch2trt module. Everything works well, great work! May I ask if you will be adding support for other lidar detection models such as PV-RCNN, TANET, SECOND? Thank you!
>
> Actually, no... I don't think the approach in this repository is suitable for real deployment. If you're interested, take a look at Autoware.ai and Apollo.
>
> I did see that repository, but it only supports PointPillars and not other models.
>
> Also, I was wondering why TensorRT inference only achieves around 3 FPS on the Xavier in ROS. That is about the same as a ROS node I wrote for the SECOND model, which also achieves 3 FPS in ROS on the Jetson Xavier.
>
> Do you have any idea why this might be the case? Thank you.

Sorry, I don't think I can make any valuable suggestions. I once extracted Apollo's PointPillars and wrote it as a single ROS node; its running time on the Xavier was about 170 ms/frame. However, after converting the two sub-models to FP16 and reducing the hyperparameters MAX_NUM_PILLARS and MAX_POINT_PER_PILLARS to 22000 and 20 respectively, inference improved to about 50 ms/frame.
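The speed-up from shrinking the pillar budget is roughly proportional to the PFN's input volume. A back-of-envelope check (the default caps of 30000 pillars and 60 points per pillar are assumed Apollo-style values, not numbers from this thread):

```python
# Rough arithmetic: PFN input volume scales with max_pillars * max_points_per_pillar.
default_volume = 30000 * 60   # assumed defaults (hypothetical)
reduced_volume = 22000 * 20   # the reduced caps quoted above
print(reduced_volume / default_volume)  # ~0.24, i.e. roughly a 4x smaller input
```

Combined with FP16 halving per-element compute, a drop from 170 ms to 50 ms per frame is in the ballpark this arithmetic suggests.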

indra4837 commented 3 years ago

Alright, thank you for the clarifications. I will close this issue now.