SooLab / Part2Object

[ECCV 2024] The official PyTorch implementation of "Part2Object: Hierarchical Unsupervised 3D Instance Segmentation".

Some questions about the .pth file in project_feature.py #6

Open LydJason opened 1 month ago

LydJason commented 1 month ago

Thank you very much for your work! I encountered some issues while generating pseudo masks:

  1. Could you please let me know how to obtain the .pth file in project_feature.py (lines 157-160)? It seems the outputs of data_prepare_ScanNet.py are files ending with .ply. https://github.com/SooLab/Part2Object/blob/65d728edecdb461bf6fd88c173996874ae862c45/pseudo_mask_gen/project_feature.py#L156C1-L160C92
  2. Does this work use the complete ScanNet v2 dataset (~1.5 TB) or the scannet_frames_25k subset (≈6 GB)?

Thanks again for your hard work and support!

LydJason commented 1 month ago

Meanwhile, these .pth files also seem to be used in the downstream files get_dis_matrix.py, scene_frame_ncut.py, etc.

Also, get_dis_matrix.py requires init_segments as input; is that the set of initial superpoints produced by initialSP_prepare_ScanNet.py? However, those files seem to come with the suffix _superpoint.npy instead of .pth. https://github.com/SooLab/Part2Object/blob/65d728edecdb461bf6fd88c173996874ae862c45/pseudo_mask_gen/get_dis_matrix.py#L31-L39

Please correct me if I am wrong; thank you for your reply.

zhangyl4 commented 1 month ago

In the data preprocessing step, we follow SAM3D and Mask3D to convert the point cloud data (.ply) into tensor form (.pth/.npy) for ease of use. You can follow SAM3D to get the data in .pth form.
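
For illustration only (this is not the official preprocessing script), a conversion along the following lines should produce per-scene .pth files; the (coords, colors) field layout, color scaling, and file naming below are assumptions, so please check SAM3D's preprocessing code for the authoritative format.

```python
# Rough sketch: convert a ScanNet .ply scene into a .pth tensor file, loosely
# following the SAM3D/Mask3D convention of saving per-scene tensors with torch.save.
# Field layout, color scaling, and naming are assumptions, not the repo's exact spec.
import os
import numpy as np
import open3d as o3d
import torch

def ply_to_pth(ply_path: str, out_dir: str) -> str:
    pcd = o3d.io.read_point_cloud(ply_path)
    coords = np.asarray(pcd.points, dtype=np.float32)   # (N, 3) xyz coordinates
    colors = np.asarray(pcd.colors, dtype=np.float32)   # (N, 3) rgb in [0, 1]
    colors = colors * 2.0 - 1.0                          # assumed scaling to [-1, 1]
    scene_id = os.path.basename(ply_path).replace("_vh_clean_2.ply", "")
    out_path = os.path.join(out_dir, f"{scene_id}.pth")
    torch.save((coords, colors), out_path)               # assumed (coords, colors) tuple
    return out_path

# e.g. ply_to_pth("scans/scene0000_00/scene0000_00_vh_clean_2.ply", "train")
```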

LydJason commented 1 month ago

Thank you for your timely reply! It indeed helps to produce the .pth files under the train/val folders. However, I still have a problem getting the .pth files under the init_segments folder, as described previously:

> Also, get_dis_matrix.py requires init_segments as input; is that the set of initial superpoints produced by initialSP_prepare_ScanNet.py? However, those files seem to come with the suffix _superpoint.npy instead of .pth. https://github.com/SooLab/Part2Object/blob/65d728edecdb461bf6fd88c173996874ae862c45/pseudo_mask_gen/get_dis_matrix.py#L31-L39

These files are used in get_dis_matrix.py: https://github.com/SooLab/Part2Object/blob/65d728edecdb461bf6fd88c173996874ae862c45/pseudo_mask_gen/get_dis_matrix.py#L38-L39

Thanks again for your timely reply! Also, it would be of great help if you could take a look at #7; I still face a problem with the dimension of the DINO output tensor when running project_features.py.

LydJason commented 1 month ago

I also found that scene_frame_ncut.py and scene_frame_merge.py require the variables png and mask_path under the outputs folder; are these also generated by SAM3D? How can I get them? Thank you! https://github.com/SooLab/Part2Object/blob/65d728edecdb461bf6fd88c173996874ae862c45/pseudo_mask_gen/get%20bbox%20prior/scene_frame_ncut.py#L138 https://github.com/SooLab/Part2Object/blob/65d728edecdb461bf6fd88c173996874ae862c45/pseudo_mask_gen/get%20bbox%20prior/scene_frame_ncut.py#L145-L149

zhangyl4 commented 1 month ago

> Thank you for your timely reply! It indeed helps to produce the .pth files under the train/val folders. However, I still have a problem getting the .pth files under the init_segments folder, as described previously:
>
> > Also, get_dis_matrix.py requires init_segments as input; is that the set of initial superpoints produced by initialSP_prepare_ScanNet.py? However, those files seem to come with the suffix _superpoint.npy instead of .pth. https://github.com/SooLab/Part2Object/blob/65d728edecdb461bf6fd88c173996874ae862c45/pseudo_mask_gen/get_dis_matrix.py#L31-L39
>
> These files are used in get_dis_matrix.py:
>
> https://github.com/SooLab/Part2Object/blob/65d728edecdb461bf6fd88c173996874ae862c45/pseudo_mask_gen/get_dis_matrix.py#L38-L39
>
> Thanks again for your timely reply! Also, it would be of great help if you could take a look at #7; I still face a problem with the dimension of the DINO output tensor when running project_features.py.

For faster reading and less storage, we changed the .pth format to .npy. Sorry, we did not synchronize this change in the subsequent files; that was an oversight on our part. You only need to change the .pth suffix to the _superpoint.npy format.
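
In other words, only the loading suffix in get_dis_matrix.py needs to change. A minimal sketch (the path and variable names here are placeholders, not the exact ones in the repo):

```python
# Hedged sketch of the suffix fix: load the initial superpoints produced by
# initialSP_prepare_ScanNet.py from .npy instead of .pth.
import os
import numpy as np

init_segments_dir = "init_segments"   # assumed output folder of initialSP_prepare_ScanNet.py
scene_id = "scene0000_00"             # hypothetical scene id

# before (as described in this issue):
# segments = torch.load(os.path.join(init_segments_dir, scene_id + ".pth"))
# after the suffix change:
segments = np.load(os.path.join(init_segments_dir, scene_id + "_superpoint.npy"))
```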

zhangyl4 commented 1 month ago

> I also found that scene_frame_ncut.py and scene_frame_merge.py require the variables png and mask_path under the outputs folder; are these also generated by SAM3D? How can I get them? Thank you!
>
> https://github.com/SooLab/Part2Object/blob/65d728edecdb461bf6fd88c173996874ae862c45/pseudo_mask_gen/get%20bbox%20prior/scene_frame_ncut.py#L138
>
> https://github.com/SooLab/Part2Object/blob/65d728edecdb461bf6fd88c173996874ae862c45/pseudo_mask_gen/get%20bbox%20prior/scene_frame_ncut.py#L145-L149

For 2D mask acquisition, as described in our paper, the use of SAM is not allowed in the unsupervised setting. We follow CutLER, which is an unsupervised image segmentation algorithm.
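
For reference, here is a hedged sketch of one way per-frame 2D masks from an unsupervised segmenter such as CutLER could be written under an outputs folder; the directory layout and file naming that scene_frame_ncut.py actually expects via png and mask_path are assumptions and should be verified against the script.

```python
# Hypothetical layout: save each frame's binary masks as 8-bit PNGs under
# outputs/<scene_id>/<frame_id>/mask_XXX.png. The layout is an assumption,
# not the repo's documented convention.
import os
import numpy as np
from PIL import Image

def save_frame_masks(masks: np.ndarray, scene_id: str, frame_id: str, out_root: str = "outputs") -> None:
    """masks: (num_masks, H, W) boolean array for one RGB frame."""
    mask_dir = os.path.join(out_root, scene_id, frame_id)
    os.makedirs(mask_dir, exist_ok=True)
    for i, m in enumerate(masks):
        Image.fromarray(m.astype(np.uint8) * 255).save(
            os.path.join(mask_dir, f"mask_{i:03d}.png")
        )
```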

LydJason commented 1 month ago

> > I also found that scene_frame_ncut.py and scene_frame_merge.py require the variables png and mask_path under the outputs folder; are these also generated by SAM3D? How can I get them? Thank you! https://github.com/SooLab/Part2Object/blob/65d728edecdb461bf6fd88c173996874ae862c45/pseudo_mask_gen/get%20bbox%20prior/scene_frame_ncut.py#L138
> >
> > https://github.com/SooLab/Part2Object/blob/65d728edecdb461bf6fd88c173996874ae862c45/pseudo_mask_gen/get%20bbox%20prior/scene_frame_ncut.py#L145-L149
>
> For 2D mask acquisition, as described in our paper, the use of SAM is not allowed in the unsupervised setting. We follow CutLER, which is an unsupervised image segmentation algorithm.

Thank you for your reply! I will try this.