Closed: zqalex closed this issue 8 months ago
Please follow #14 and #9.
Thank you for your quick reply.
Since this is my first time working on a Transformer-based project like this, I still have a few questions about the two issues you mentioned:
After completing the test step in README.md, do I need to run inference on a specific point cloud (.bin) to generate per-point masks and then visualize them? If so, should I base my inference on pcd_seg_demo.py from mmdet3d and make some modifications to it?
Regarding visualization, you mentioned "inherit from Det3DLocalVisualizer and a little bit modify its code for instance segmentation task". Do you mean writing a new .py file that inherits from that class and adds code to visualize the predictions? For a ScanNet v2 point cloud, could you tell me how to modify it so that the results of each segmentation task can be visualized smoothly?
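To make my second question more concrete, this is the kind of per-point coloring logic I imagine the modified visualizer would need, written as a self-contained NumPy sketch (the function name and the boolean mask format are my own assumptions, not taken from your code):

```python
import numpy as np

def colorize_instance_masks(num_points, masks, seed=0):
    """Assign one random RGB color per predicted instance mask.

    masks: (num_instances, num_points) boolean array, where masks[i, j]
    is True if point j belongs to instance i. Points covered by no mask
    keep a neutral gray color so unassigned regions stay visible.
    """
    rng = np.random.default_rng(seed)
    colors = np.full((num_points, 3), 128, dtype=np.uint8)  # gray default
    for mask in masks:
        # every point of this instance gets the same random color
        colors[mask] = rng.integers(0, 256, size=3, dtype=np.uint8)
    return colors
```

Is something along these lines what you meant, with the resulting colors then passed to the point-cloud drawing call of the visualizer?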
Looking forward to your reply, thank you very much.
Thank you for your outstanding work.
Referring to your README.md, after running the following code:
test

```shell
python tools/fix_spconv_checkpoint.py \
    --in-path work_dirs/oneformer3d_1xb4_scannet/epoch_512.pth \
    --out-path work_dirs/oneformer3d_1xb4_scannet/epoch_512.pth
python tools/test.py configs/oneformer3d_1xb4_scannet.py \
    work_dirs/oneformer3d_1xb4_scannet/epoch_512.pth
```
How should I view and visualize the predicted point-cloud segmentation results? In addition, how can I use the trained model to run predictions on other point clouds? Looking forward to your reply, thank you.
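To check my understanding of the input side, this is how I currently load a .bin scene with plain NumPy before any model-specific code (I am assuming float32 values with 6 per point, xyz + rgb, based on the usual ScanNet preprocessing; please correct me if the layout differs):

```python
import numpy as np

def load_bin_point_cloud(path, num_features=6):
    """Load a flat float32 .bin file and reshape it to (N, num_features).

    Assumes the common ScanNet layout of x, y, z, r, g, b per point;
    adjust num_features if the preprocessing stores a different layout.
    """
    points = np.fromfile(path, dtype=np.float32)
    assert points.size % num_features == 0, "unexpected feature count"
    return points.reshape(-1, num_features)
```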