ai4ce / SSCBench

SSCBench: A Large-Scale 3D Semantic Scene Completion Benchmark for Autonomous Driving

Provide Corresponding LIDAR Scan for Each Frame #3

Open yininghase opened 1 year ago

yininghase commented 1 year ago

Thanks for your work on SSC labeling for each frame. Could you please also provide the LiDAR scan for each corresponding frame, so that users can voxelize the point cloud themselves with more features, instead of relying on the `$frame$.bin` file, which only shows voxel occupancy?

Gaaaavin commented 1 year ago

Hi, thank you for your interest in our dataset. If you are referring to the raw point cloud input, we recommend that you directly download from the raw dataset pages.

KITTI-360: https://www.cvlibs.net/datasets/kitti-360/download.php
nuScenes: https://www.nuscenes.org/nuscenes#download
Waymo: https://waymo.com/open/download/

Please let me know if this answers your question.

yininghase commented 1 year ago

Yes, I finished downloading the dataset from the original dataset website.

But for the nuScenes dataset, I do not know which scene in your provided trainval and test splits matches which scene in the original dataset. Could you please provide some information about this?

By the way, in the nuScenes data you provide on Google Drive, there seem to be only 699 scenes in the trainval split and 149 scenes in the test split, so some scenes appear to be missing from the .sqf files. Maybe you can check this.

Thanks for your help!

Gaaaavin commented 1 year ago

The trainval split corresponds to the 700 "train" scenes in the original nuScenes dataset. The test split corresponds to the 150 "val" scenes in the original dataset.

We will check on the missing files.

yininghase commented 1 year ago

Got it, thanks!

yininghase commented 1 year ago

Sorry, I have another question about voxelization of the point cloud. When I directly voxelize the raw KITTI-360 LiDAR scan, there seems to be a misalignment between the SSC label voxels and my voxelized raw LiDAR scan. So I'd like to ask: do you transform the raw LiDAR scan into another coordinate system before voxelizing it?

Gaaaavin commented 1 year ago

Sorry for the late reply. All voxels should be in the ego coordinate system. It would help if you could let us know what misalignment you are seeing.
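For reference, a minimal voxelization sketch for ego-frame points. The 0.2 m voxel size, the (0, -25.6, -2) volume origin, and the 256x256x32 grid shape follow the SemanticKITTI-style setup and are assumptions here; check them against the actual SSCBench configuration:

```python
import numpy as np

def voxelize(points, origin=(0.0, -25.6, -2.0), voxel_size=0.2,
             grid_shape=(256, 256, 32)):
    """Map ego-frame points (N, 3) to integer voxel indices,
    dropping points that fall outside the volume."""
    origin = np.asarray(origin)
    idx = np.floor((points - origin) / voxel_size).astype(np.int64)
    inside = np.all((idx >= 0) & (idx < np.asarray(grid_shape)), axis=1)
    return idx[inside]

# A point exactly at the volume origin lands in voxel (0, 0, 0).
print(voxelize(np.array([[0.0, -25.6, -2.0]])))  # [[0 0 0]]
```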

yininghase commented 1 year ago

[image: voxelized LiDAR points (blue) overlaid on SSC voxel labels (red)]

As the picture shows, the blue points indicate the positions of the voxelized points and the red points indicate the positions of the voxel labels; ideally the blue points should overlap with the red points.

Gaaaavin commented 1 year ago

We applied a lidar-to-camera transformation to better accommodate camera-based methods. This might cause the misalignment you show.
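To reproduce the labels' frame, the same lidar-to-camera transform would need to be applied to the raw scan before voxelizing. A minimal sketch, assuming a 4x4 homogeneous matrix `T_cam_lidar` loaded from the dataset's calibration files (the loading itself is dataset-specific and omitted here):

```python
import numpy as np

def lidar_to_camera(points, T_cam_lidar):
    """Transform (N, 3) lidar points into the camera frame using a
    4x4 homogeneous transform."""
    homo = np.hstack([points, np.ones((points.shape[0], 1))])
    return (T_cam_lidar @ homo.T).T[:, :3]

# The identity transform leaves the points unchanged.
pts = np.array([[1.0, 2.0, 3.0]])
print(lidar_to_camera(pts, np.eye(4)))  # [[1. 2. 3.]]
```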

yininghase commented 1 year ago

Thanks for the information. By the way, I would like to mention that there seems to be a one-frame drift in your provided nuScenes data: the last image of a sequence appears to be the first image of the next sequence. Please check this.

aycatakmaz commented 1 week ago

Hi, thank you for this exciting work! @Gaaaavin

I was curious about the exact transformation that was applied to obtain the final voxelized data. If one needs to convert the voxel coordinates to the lidar coordinate system, what exactly does one need to do?

So far, I apply (x, y, z) -> (x\*0.2, y\*0.2-25.6, z\*0.2-2): I first convert the coordinates to meters by multiplying by voxel_size=0.2, then offset by the camera location (0, -25.6, -2) following the SemanticKITTI setup. What I obtain is a scan that is very close to the lidar scan, but with the small mismatch described above. Is there an issue with the way I convert these coordinates? @yininghase were you able to solve this alignment issue? What additional transformation should I apply to align these voxel coordinates with the lidar coordinate system?

Thank you very much in advance!
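For what it's worth, the index-to-meters step described above can be sketched as follows. The 0.2 m voxel size and (0, -25.6, -2) origin are taken from the comment; placing points at voxel centres (the +0.5 below) rather than corners is a choice, not something confirmed by the authors, and any remaining mismatch would then come from the lidar-to-camera transform mentioned earlier in this thread:

```python
import numpy as np

def voxel_to_metric(idx, origin=(0.0, -25.6, -2.0), voxel_size=0.2):
    """Convert integer voxel indices (N, 3) to metric coordinates,
    placing each point at the centre of its voxel."""
    idx = np.asarray(idx, dtype=np.float64)
    return (idx + 0.5) * voxel_size + np.asarray(origin)

# Voxel (0, 0, 0) maps to its centre, roughly (0.1, -25.5, -1.9).
print(voxel_to_metric([[0, 0, 0]]))
```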