Closed shuchao97 closed 1 year ago
Hey @shuchao97, thanks for your interest in the SHIFT dataset!
The semantic masks are stored as PNG files, with each pixel taking a value in the range 0-22 to represent its class. This is the same format as Cityscapes uses (called 'trainId' there). For the class correspondence to Cityscapes, please refer to the 'Segmentation labels' section of our Get Started page. P.S.: The PNG files may look almost entirely black on screen because the 8-bit integer range is 0-255, but they do contain the trainIds.
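A quick way to confirm a mask really holds trainIds (rather than being a blank image) is to load it and stretch the values for viewing. A minimal sketch, assuming NumPy; the loader call and file path in the comment are hypothetical, not the official SHIFT tooling:

```python
import numpy as np

def visualize_trainids(mask, num_classes=23):
    """Stretch trainId values (0..num_classes-1) over the full 8-bit
    range so the nearly-black mask becomes visible on screen."""
    return (mask.astype(np.float32) / (num_classes - 1) * 255).astype(np.uint8)

# In practice, load a real mask with e.g. Pillow (path is hypothetical):
#   mask = np.array(Image.open("semseg/front/0001/00000000_semseg_front.png"))
mask = np.array([[0, 11], [22, 5]], dtype=np.uint8)  # tiny stand-in mask
print(visualize_trainids(mask))  # 0 maps to 0, 22 maps to 255
```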
Semantic LiDAR segmentation is not currently provided. However, you could project the 2D segmentation into 3D space, using the sensor poses and camera intrinsics, to obtain such labels.
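One way to realize this projection is to transform each LiDAR point into the camera frame, project it onto the image plane, and look up the trainId of the pixel it lands on. A sketch under stated assumptions: the function name, argument layout, and ignore value are all hypothetical, and it ignores occlusion and lens distortion:

```python
import numpy as np

def label_lidar_points(points, semseg, K, lidar_to_cam, ignore_label=255):
    """Assign a 2D semantic label to each LiDAR point by projecting it
    into the camera image.

    points:       (N, 3) LiDAR points in the sensor frame
    semseg:       (H, W) trainId mask of the matching camera frame
    K:            (3, 3) camera intrinsics
    lidar_to_cam: (4, 4) extrinsics from LiDAR to camera frame
    Points behind the camera or outside the image get ignore_label.
    """
    H, W = semseg.shape
    # Transform points into the camera frame (homogeneous coordinates).
    pts_h = np.concatenate([points, np.ones((len(points), 1))], axis=1)
    pts_cam = (pts_h @ lidar_to_cam.T)[:, :3]
    labels = np.full(len(points), ignore_label, dtype=semseg.dtype)
    in_front = pts_cam[:, 2] > 0
    # Perspective projection onto the image plane.
    proj = pts_cam[in_front] @ K.T
    uv = np.round(proj[:, :2] / proj[:, 2:3]).astype(int)
    valid = (uv[:, 0] >= 0) & (uv[:, 0] < W) & (uv[:, 1] >= 0) & (uv[:, 1] < H)
    idx = np.flatnonzero(in_front)[valid]
    labels[idx] = semseg[uv[valid, 1], uv[valid, 0]]
    return labels
```

Points seen by several cameras could be labeled once per view and merged, e.g. by keeping the view with the smallest viewing angle.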
Let me know if there is anything still unclear!
Thank you very much for the author's reply. With the author's help, I have solved the problem in my experiments on the SHIFT dataset.
Hello author, thank you for your outstanding contributions in the paper. Recently, when using the SHIFT dataset for domain-adaptive semantic segmentation, I could not find the corresponding segmentation label data. After decompressing the semantic segmentation .zip archive, I found that it contains only images in PNG format. Finally, are there any semantic segmentation labels corresponding to the 3D point clouds in the dataset? I hope to get your reply; thank you again. @suniique @fyu @mattiasegu