Closed ngoductuanlhp closed 8 months ago
Hello, I'm not part of the project but I have worked a bit on this repo. I can't answer the first part, but I suggest you either create your own datamodule that processes each room, or isolate each room separately after the prediction; it really depends on your use case, to be honest. As for the second part of your question, the points are fed into a voxel grid, so two points that are too close to one another are merged. In addition, if you are looking at your nag after transform_on_device, the nodes are subsampled a second time, which may reduce the number of points even further. That's why the number of points is heavily reduced.
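A minimal sketch of the second option (isolating rooms after the prediction), assuming you kept a per-point room ID around when concatenating the rooms. The helper name and inputs here are hypothetical, not part of the repo:

```python
import numpy as np

def split_by_room(points, preds, room_ids):
    """Group full-area predictions back into per-room clouds.

    room_ids is an integer room label per point, which you would track
    yourself when concatenating rooms (hypothetical, not in the repo).
    Returns {room_id: (room_points, room_preds)}.
    """
    rooms = {}
    for rid in np.unique(room_ids):
        mask = room_ids == rid
        rooms[int(rid)] = (points[mask], preds[mask])
    return rooms

# Toy example: 4 points spread over 2 rooms
pts = np.arange(12.0).reshape(4, 3)
preds = np.array([0, 1, 0, 2])
room_ids = np.array([0, 0, 1, 1])
rooms = split_by_room(pts, preds, room_ids)
```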
Hi @ngoductuanlhp, thanks for your interest in our repo. If you use this project, don't forget to give it a :star:, it means a lot to us!
Your Superpoint Partition algorithm holds great promise for various 3D point cloud tasks, such as instance segmentation.
Indeed, we did a follow-up work on large-scale panoptic segmentation which was accepted for an oral presentation at 3DV 2024. The code will be released soon to extend this repo to panoptic (and instance) segmentation on S3DIS, ScanNet, KITTI-360, and DALES :wink:
I attempted to use your notebook, but I noticed that it merges the entire area into a single pointcloud. Could you show me how to separate it into distinct rooms?
Yes, our general philosophy is that we prefer methods that do not assume a subdivision into rooms, which is not always available in real-world scenarios and may be architecturally ambiguous (e.g. open spaces). Yet, we have implemented and tested our method in a per-room setup on S3DIS. You can find the corresponding datamodule config in: s3dis_room config.
Furthermore, I examined the point cloud sizes, and it appears that the first level of the nag object for S3DIS area_5 contains just 9,308,118 points. However, when I load the point clouds for all the rooms in Area_5 and concatenate them, the total number of points amounts to 78,649,818 points. I'm curious about how I can recover the superpoints for each room's point cloud.
To mitigate uneven sampling densities and overly dense acquisitions (e.g. S3DIS), our first preprocessing step is always voxelization. This is standard practice, which comparable methods also follow. So the partition is computed on a voxelized point cloud.
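For illustration, here is a generic sketch of what grid voxelization does (points falling in the same voxel are merged into one centroid), which explains the drop from ~78M to ~9M points. This is a simplified stand-in, not the repo's actual preprocessing code:

```python
import numpy as np

def voxelize(points, voxel_size=0.03):
    """Merge points falling in the same voxel into one centroid.

    Generic sketch of grid voxelization (not the repo's implementation).
    Returns the centroids and, for each input point, the index of the
    voxel it was merged into.
    """
    # Assign each point to a voxel by integer-flooring its coordinates
    voxel_idx = np.floor(points / voxel_size).astype(np.int64)
    # Find unique voxels; `inverse` maps each point to its voxel
    _, inverse, counts = np.unique(
        voxel_idx, axis=0, return_inverse=True, return_counts=True)
    inverse = inverse.ravel()
    # Average the points inside each voxel (centroid)
    centroids = np.zeros((counts.shape[0], points.shape[1]))
    np.add.at(centroids, inverse, points)
    centroids /= counts[:, None]
    return centroids, inverse

# Two points closer than the voxel size collapse into one
pts = np.array([[0.0, 0.0, 0.0],
                [0.01, 0.0, 0.0],   # same voxel as the first point
                [1.0, 1.0, 1.0]])
vox, inv = voxelize(pts, voxel_size=0.03)
```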
We do not provide code (yet) for full-resolution predictions. It is not a complicated task but cleanly implementing this in the repo requires a little bit of time I do not have at the moment. See related issues #9 and #34.
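In the meantime, a common workaround (the approach confirmed later in this thread) is a nearest-neighbor lookup: give each full-resolution point the superpoint index of its closest voxelized point. A brute-force sketch with hypothetical names, not the repo's API; for large clouds a KD-tree (e.g. scipy.spatial.cKDTree) would be preferable:

```python
import numpy as np

def upsample_to_full_res(full_points, voxel_points, voxel_labels):
    """Transfer per-voxel superpoint indices back to the full-resolution
    cloud by nearest-neighbor lookup (brute-force, chunked for memory)."""
    labels = np.empty(full_points.shape[0], dtype=voxel_labels.dtype)
    chunk = 100_000
    for start in range(0, full_points.shape[0], chunk):
        block = full_points[start:start + chunk]
        # (B, V) pairwise squared distances between block and voxel points
        d2 = ((block[:, None, :] - voxel_points[None, :, :]) ** 2).sum(-1)
        labels[start:start + chunk] = voxel_labels[d2.argmin(axis=1)]
    return labels

# Toy example: 2 full-res points, 2 voxel centers with superpoint ids 5 and 7
full = np.array([[0.0, 0.0, 0.0], [0.9, 0.9, 0.9]])
vox = np.array([[0.0, 0.0, 0.0], [1.0, 1.0, 1.0]])
out = upsample_to_full_res(full, vox, np.array([5, 7]))  # [5 7]
```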
@gardiens
In addition, if you are looking at your nag after transform_on_device, the nodes are subsampled a second time, which may reduce the number of points even further. That's why the number of points is heavily reduced.
The point sampling in transform_on_device does not hold the same meaning as the voxelization preprocessing. It is a data augmentation used at training time to introduce diversity in the loaded points.
Well, why do you apply it on the test and validation transforms, then?
You are right, this leftover code was initially intended for test-time augmentation, but we do not use it in the end. You will notice here that the sampling parameters are such that we hardly subsample the points, with n_min: 128. I left this transform at inference for convenience, but I think it could be removed.
May I consider this issue solved, @ngoductuanlhp?
Yes, for sure @drprojects. I successfully used KNN to recover the superpoints at full resolution. Thank you for your support.
Hello @drprojects,
Thank you for your wonderful work, SPT! Your Superpoint Partition algorithm holds great promise for various 3D point cloud tasks, such as instance segmentation. Currently, I'm interested in obtaining the superpoint partitions (1st-level partitions) for each room within the S3DIS dataset.
I attempted to use your notebook, but I noticed that it merges the entire area into a single pointcloud. Could you show me how to separate it into distinct rooms? Furthermore, I examined the point cloud sizes, and it appears that the first level of the nag object for S3DIS area_5 contains just 9,308,118 points. However, when I load the point clouds for all the rooms in Area_5 and concatenate them, the total number of points amounts to 78,649,818 points. I'm curious about how I can recover the superpoints for each room's point cloud.
Thank you for your assistance, and I eagerly await your response.