ntnu-arl / mbplanner_ros

Motion-primitives Based Planner for Fast & Agile Exploration
BSD 3-Clause "New" or "Revised" License

Using RGB-D camera instead of 3D Lidar #16

Open ta-jetson opened 7 months ago

ta-jetson commented 7 months ago

Hey @MihirDharmadhikari @engcang @ShreyanshDarshan, I want to use a front-facing depth camera instead of a 3D LiDAR, to evaluate how the algorithm behaves with limited FoV and range.

I tried to follow @engcang changes in https://github.com/ntnu-arl/mbplanner_ros/issues/4#issue-758721445

I changed the `SensorParams` in `global_planner_config.yaml` and `mbplanner_config.yaml` as follows:

```yaml
type: kCamera
max_range: 6.0
center_offset: [0.0, 0.0, 0.0]
rotations: [0.0, 0.0, 0.0]
fov: [rad(pi/2), rad(pi/3)]
resolution: [rad(5.0*pi/180), rad(3.0*pi/180)]
```

The simulation runs as usual, and there is no change in the FoV or range of the point cloud.

[Screenshot from 2024-04-17 18-39-52]

Please let me know if I missed something. I am actively working on this problem, so any suggestions would be helpful.

Thank you,

MihirDharmadhikari commented 7 months ago

Hi @ta-jetson ,

The parameter that you changed is used only for the volumetric gain calculation. You also need to provide the planner with a point cloud from a depth camera, which is set in this parameter. Currently, the M100 model does not have a depth camera integrated. However, the underlying RotorS simulator has a depth camera model which you can add to the M100, similar to this.
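For illustration, adding a depth camera to a RotorS/Gazebo vehicle usually means attaching a camera link to the vehicle's xacro and loading a depth-camera plugin that publishes a `PointCloud2`. The sketch below is a hypothetical minimal example using the standard `gazebo_ros` Kinect-style plugin; the link name, joint offset, topic names, and numeric values are placeholders, and the actual RotorS macro names may differ:

```xml
<!-- Hypothetical sketch: mount a front-facing depth camera on the M100. -->
<!-- Link and fixed joint; names and the mounting offset are placeholders. -->
<link name="depth_camera_link"/>
<joint name="depth_camera_joint" type="fixed">
  <parent link="base_link"/>
  <child link="depth_camera_link"/>
  <origin xyz="0.1 0 0" rpy="0 0 0"/>  <!-- front-facing mount -->
</joint>

<gazebo reference="depth_camera_link">
  <sensor type="depth" name="depth_camera">
    <update_rate>20.0</update_rate>
    <camera>
      <horizontal_fov>1.5708</horizontal_fov>  <!-- ~pi/2, matching the fov in the config -->
      <image>
        <width>320</width>
        <height>240</height>
        <format>R8G8B8</format>
      </image>
      <clip>
        <near>0.05</near>
        <far>6.0</far>  <!-- matches max_range in the config -->
      </clip>
    </camera>
    <!-- gazebo_ros depth-camera plugin that publishes depth images and a PointCloud2 -->
    <plugin name="depth_camera_controller" filename="libgazebo_ros_openni_kinect.so">
      <cameraName>depth_camera</cameraName>
      <pointCloudTopicName>depth/points</pointCloudTopicName>
      <frameName>depth_camera_link</frameName>
      <pointCloudCutoffMax>6.0</pointCloudCutoffMax>
    </plugin>
  </sensor>
</gazebo>
```

The `PointCloud2` topic published by such a plugin would then be remapped to the planner's point-cloud input in place of the LiDAR topic.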

Let me know if this helps.

Best regards,
Mihir