Hey,
I was wondering if there is a native way to project the panoramic rendered images back to a 3D point cloud, together with the per-point (per-pixel) semantic labels?
I wanted to use your dataset for pretraining a point cloud semantic segmentation model, but I need annotated point cloud data, not 2D images.
I've spent the last three hours trying to achieve this with the code in this repo, but I couldn't get it working, so I thought I'd just ask. Maybe you could help.
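For context, this is roughly the back-projection I was attempting — a minimal sketch that assumes a per-pixel depth map is available alongside the semantic panorama, and that the panorama uses an equirectangular projection. The axis convention (y-up, z-forward) is my own guess, not taken from this repo, so it may need adjusting to your camera model:

```python
import numpy as np

def panorama_to_point_cloud(depth, labels):
    """Back-project an equirectangular depth panorama to a labeled point cloud.

    depth:  (H, W) float array, metric distance along each viewing ray
    labels: (H, W) int array, per-pixel semantic labels
    Returns (N, 3) points and (N,) labels for pixels with valid depth.
    NOTE: the axis convention below is an assumption, not this repo's
    camera model -- adjust if the dataset defines it differently.
    """
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    lon = (u + 0.5) / W * 2 * np.pi - np.pi      # longitude in [-pi, pi)
    lat = np.pi / 2 - (v + 0.5) / H * np.pi      # latitude in (pi/2, -pi/2)
    # Unit viewing direction per pixel (assumed y-up, z-forward frame).
    dirs = np.stack([np.cos(lat) * np.sin(lon),   # x (right)
                     np.sin(lat),                 # y (up)
                     np.cos(lat) * np.cos(lon)],  # z (forward)
                    axis=-1)
    valid = np.isfinite(depth) & (depth > 0)
    points = dirs[valid] * depth[valid][..., None]
    return points, labels[valid]
```

If there is an official camera/projection model in the repo that I should be using instead of this guess, a pointer to it would already help a lot.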
Thank you in advance. Awesome work!
Cheers, Lukas