Closed qpc001 closed 1 year ago
Hi, thanks for your interest in our paper! The most likely cause is incorrectly estimated normals, which originate from wrong sensor positions. Could you try visualizing your sensor positions?
Alternatively, you can download our official CARLA dataset here and check whether the problem persists.
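As a quick first check before plotting anything, you can verify that the per-point sensor positions actually vary along the trajectory. This is a minimal NumPy sketch; the function name is ours, not from the repo, and it only catches the degenerate case where every sensor position is (nearly) the same point, e.g. all `[0, 0, 0]`:

```python
import numpy as np

def sensor_positions_look_valid(sensor_positions, tol=1e-6):
    """Sanity check on per-point sensor positions.

    sensor_positions: (N, 3) array, one sensor position per point.
    Returns False when all positions are (near) identical -- e.g. all
    [0, 0, 0] -- which makes normal orientation unreliable.
    """
    sp = np.asarray(sensor_positions, dtype=float)
    # Peak-to-peak range per axis; a real trajectory should span some distance.
    return bool(np.ptp(sp, axis=0).max() > tol)
```

If this returns False, the positions are degenerate and should be replaced with (approximate) vehicle poses before re-running the reconstruction.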
I used the sensor position [0, 0, 0] for recons_waymo.py.
I also tried the script recons_simple.py, but got a similar result. (The normals were computed with CloudCompare.)
Ah, I see the reason :)
The sensor position refers to the position of the sensor that captured each point, so it can differ from point to point. You can usually approximate these positions with your vehicle's poses instead of using [0, 0, 0].
The normals computed with CloudCompare suffer from a similar problem: their orientations are not consistent, i.e., some normals on the road point up while others point down.
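The orientation fix described above can be sketched in a few lines of NumPy: flip every normal whose direction disagrees with the ray back to the sensor that captured the point. This is our illustrative helper (not code from the repo), assuming you already have per-point normals and per-point sensor (or approximate vehicle) positions:

```python
import numpy as np

def orient_normals_toward_sensor(points, normals, sensor_positions):
    """Flip each normal so it points toward the sensor that captured the point.

    points, normals, sensor_positions: (N, 3) arrays. sensor_positions holds
    one position per point (e.g. the vehicle pose at capture time), not a
    single fixed [0, 0, 0].
    """
    to_sensor = sensor_positions - points                 # ray back to the sensor
    dots = np.einsum("ij,ij->i", normals, to_sensor)      # per-point dot product
    oriented = normals.copy()
    oriented[dots < 0.0] *= -1.0                          # flip inward-facing normals
    return oriented
```

Since the LiDAR can only see surfaces facing it, every correct normal must have a positive dot product with the point-to-sensor direction, which is why this simple sign check makes road normals consistently point up.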
Hence, two solutions: approximate the per-point sensor positions with your vehicle poses, or re-orient the CloudCompare normals so they are consistent.
Best.
Thanks a lot.
How can I save the mesh as a color image like the one below? I saved the mesh and opened it in MeshLab; it looks like the image below.
I am trying to use this method on CARLA HD maps (Town01), but the result I got is shown below:
The resulting mesh contains lots of cracks.
I used the script recons_waymo.py to generate it.