-
Here is the code. I would like to inspect the camera_bevfeat data via a CSV file; the output dimension appears to be {1x80x180x180}[float16].
const nvtype::half* forward_only(const void* camera_images,…
-
### Description of task
What is this?? A task in the sensor fusion repo that is actually about fusing sensors?? WOW
---
Associate a 3D point (cluster of points?) from the lidar/pcl-detector/ta…
-
Hello, could you please provide a CenterPoint-based camera, lidar, and fusion training model? When I train the image branch, mAP == 0. How should I train the image branch?
-
Hi,
thanks for the great project and easy calibration of lidar and camera.
As a nice addition to the project itself I would like to request a ROS 2 node for the implementation of fusion/overla…
-
The AWSIM quick-start demo only uses the camera for traffic light detection, and the perception mode was set to "lidar", so I changed the perception mode in the launch file and set the traffic light camera se…
-
Hello, Basic AI Team
I started annotating on the datasets of the LiDAR Fusion Trial. I used the Basic LiDAR Object Detection model, and the results are very good.
I'm wondering how it worked, so I read your do…
-
I used my own lidar-only and camera-only .pth checkpoints to train the fusion model and encountered this problem. How can I solve it?
![image](https://github.com/mit-han-lab/bevfusion/assets/134828384/d11cd1f5-295…
-
I have a gazebo-ROS2 simulation in place using lidar and a rgbd realsense d455 camera.
At the moment, when I run rtabmap and using icp_odometry, I get a point cloud corresponding to the lidar view.
…
-
Hello guys,
First, I would like to thank you for this awesome work; your package is really helpful and easy to use and debug.
I am trying to build a gridmap for an outdoor environment simulation on gaz…
-
Hi, you mentioned that you used lidar_camera_calibration https://github.com/ankitdhall/lidar_camera_calibration to generate the extrinsic parameters used in this package. I see that it "transform all the …