-
[Pseudo-LiDAR from Visual Depth Estimation: Bridging the Gap in 3D Object Detection for Autonomous Driving](https://arxiv.org/abs/1812.07179), CVPR 2019, 3D object detection
Submitted: 2019-03-18 (v1: 2018-12-18)
Team: Corne…
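The core of the paper is representing a predicted depth map as a pseudo-LiDAR point cloud by back-projecting every pixel through the pinhole camera model. A minimal sketch of that back-projection, assuming intrinsics `fx`, `fy`, `cx`, `cy` and a depth map in meters (illustrative code, not the authors' implementation):

```python
import numpy as np

def depth_to_pseudo_lidar(depth, fx, fy, cx, cy):
    """Back-project an HxW depth map (meters) into an Nx3 point cloud.

    Pinhole model: x = (u - cx) * z / fx, y = (v - cy) * z / fy, z = depth.
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]  # keep only pixels with valid depth
```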
-
Hi, shaoshuai,
I am wondering how to show the detection results with 3D bounding boxes and render each detected vehicle in a distinguishing color. Did you use any library? Thanks.
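One way to do this (a sketch, not necessarily what the author used) is Open3D, which can draw a point cloud together with colored oriented boxes; the `(center, size, yaw)` box parameterization below is an assumption:

```python
import numpy as np
import open3d as o3d

def yaw_rotation(yaw):
    """Rotation matrix for a heading angle about the z axis."""
    c, s = np.cos(yaw), np.sin(yaw)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def draw_detections(points, boxes):
    """points: Nx3 array; boxes: iterable of (center, size, yaw, rgb_color)."""
    pcd = o3d.geometry.PointCloud()
    pcd.points = o3d.utility.Vector3dVector(points)
    geoms = [pcd]
    for center, size, yaw, color in boxes:
        obb = o3d.geometry.OrientedBoundingBox(center, yaw_rotation(yaw), size)
        wireframe = o3d.geometry.LineSet.create_from_oriented_bounding_box(obb)
        wireframe.paint_uniform_color(color)  # e.g. [0, 1, 0] for vehicles
        geoms.append(wireframe)
    o3d.visualization.draw_geometries(geoms)
```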
-
**Before opening an issue**
If the issue is about build errors:
- [x] I have read [installation instructions](https://koide3.github.io/direct_visual_lidar_calibration/installation/)
- [x] I have co…
-
Hello, I have a small question about the underlying principle; I hope you can help me. If I use an RGB-D camera and a 2D lidar as sensors with rtab-map, how does rtab-map fuse the two types of data? Does it perfo…
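For reference, wiring both sensors in is a configuration step before any fusion happens; a minimal ROS 2 launch sketch is below. The `rtabmap_slam`/`rtabmap` package and executable names and the placeholder topics are assumptions for illustration; `subscribe_depth` and `subscribe_scan` are the wrapper parameters that enable each input:

```python
from launch import LaunchDescription
from launch_ros.actions import Node

def generate_launch_description():
    # Feed rtabmap both the RGB-D stream and the 2D scan; with
    # subscribe_scan enabled, the scan typically contributes to the local
    # occupancy grid and proximity detection alongside the visual data.
    return LaunchDescription([
        Node(
            package='rtabmap_slam', executable='rtabmap', output='screen',
            parameters=[{
                'subscribe_depth': True,   # RGB-D input
                'subscribe_scan': True,    # 2D lidar input
                'frame_id': 'base_link',
            }],
            remappings=[
                ('rgb/image', '/camera/color/image_raw'),  # placeholder topics
                ('depth/image', '/camera/depth/image_raw'),
                ('rgb/camera_info', '/camera/color/camera_info'),
                ('scan', '/scan'),
            ],
        ),
    ])
```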
-
I feel URDF should be the format that represents the capabilities and kinematics of a robot as built. To that end, it could also include the sensors that are on board. Similar to the `` tag, the chi…
-
Hello,
What odometry is this launch configuration trying to use? I know it's ICP, but is it from Lidar or depth scans?
```
roslaunch rtabmap_ros rtabmap.launch stereo:=false \
    rgb_topic:=/zed2/zed_…
```
-
In the ROS2 Docker image of your repository, could you please explain how "Initial guess Automatic" (initial_guess_auto) works? I'm encountering an error when testing it.
(Screenshot of the error attached.)
-
Hi,
I am trying to understand visual proximity link detection performance in more detail, to evaluate its use case for our project. Based on my experiments, my questions are:
1. In …
-
I am trying to solve a problem where I need odometry, and in the future also mapping, in areas where there can be a transition from a well-lit room to a dark one. I use the RealSense D435i camera fo…