Thank you for your excellent work.
I have some questions about the details of the synthetic dataset. For example, in "SEQ0" of "Town 11", how are the lidar depth images saved? Are they stored as ori_depth * 256 in uint16, as KITTI does, or with some other processing? Likewise, how are the boundary and normal images saved? Could you give us some details about them? What is the difference between the "lidar" and "lidar_m" depth images, and similarly between "normal" and "normal_m"? If possible, we would appreciate a README file briefly explaining these details. Thank you so much.
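For reference, this is the KITTI-style convention we are asking about: metric depth is multiplied by 256, rounded, and stored as a uint16 PNG, with 0 marking pixels without a lidar return. A minimal sketch of that encode/decode round trip (function names are our own, not from your dataset):

```python
import numpy as np

def encode_kitti_depth(depth_m: np.ndarray) -> np.ndarray:
    """Encode metric depth as KITTI-style uint16: raw = round(depth * 256).
    A value of 0 conventionally marks pixels with no lidar return."""
    return np.round(depth_m * 256.0).astype(np.uint16)

def decode_kitti_depth(raw: np.ndarray) -> np.ndarray:
    """Decode a KITTI-style uint16 depth map back to meters."""
    return raw.astype(np.float32) / 256.0
```

If your lidar depth images follow this scheme, decoding them is a single division; if not, we would like to know the exact scale factor and invalid-pixel convention.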