zyc00 / PartSLIP2


Can PartSLIP++ perform well on PartNet? #4

Open kasteric opened 3 months ago

kasteric commented 3 months ago

Hi, thanks for your inspiring work! I notice that the point clouds in both the training and test sets are collected from the PartNet-Mobility dataset, whose point clouds are dense (10,000 points). I am wondering whether the pretrained model can directly perform part segmentation on PartNet objects, which have only 2,048 points and no texture or color. In our experiments, GLIP fails to detect parts from renderings of 2,048 sparse points. Did you upsample the sparse PartNet point clouds to 10,000 points during training for the implementation of PartNext?

zyc00 commented 3 months ago

Thanks! Since 2,048 points is very sparse, the point cloud can't be rendered well. What's more, the raw PartNet meshes provided on the website have no material, so only gray images can be rendered, which hurts both GLIP and SAM in our pipeline. If you want to use PartSLIP2 on PartNet, it's a little hard but feasible: first find the corresponding model in ShapeNet (almost all PartNet meshes come from ShapeNet, except three categories: scissors, refrigerator, and door), then sample (or fuse from multi-view images) a dense point cloud from the ShapeNet mesh, which has the correct material, and use that point cloud in our pipeline.
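For the direct-sampling route, here is a minimal sketch using trimesh; it assumes a recent trimesh version where `sample_surface` supports `sample_color=True`, and the mesh path, point count, and output file names are placeholders, not anything from the repo:

```python
import numpy as np
import trimesh

# Load the textured ShapeNet OBJ (path is a placeholder); force="mesh"
# flattens multi-geometry scenes into a single mesh.
mesh = trimesh.load("shapenet/model_normalized.obj", force="mesh")

# Uniformly sample 10,000 surface points; with sample_color=True, trimesh
# also returns a color per sample, looked up from the mesh's texture or
# vertex colors.
points, face_idx, colors = trimesh.sample.sample_surface(
    mesh, count=10000, sample_color=True
)

# Colors come back as RGBA uint8; keep RGB in [0, 1] for downstream use.
xyz = np.asarray(points, dtype=np.float32)
rgb = np.asarray(colors)[:, :3].astype(np.float32) / 255.0
np.save("dense_pc_xyz.npy", xyz)
np.save("dense_pc_rgb.npy", rgb)
```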

kasteric commented 3 months ago

Thanks very much for your help! Do I need to use BlenderProc to render the given mesh into six RGB-D views and then fuse the RGB-D images into a point cloud? If so, how should the RGB-D views be fused? Is there a tool for this?

zyc00 commented 3 months ago

Yes, you need to render multi-view images. We didn't use BlenderProc for rendering, but I'm sure it would work. For rendering and point cloud fusion, I pushed the code to the 'fuse_pointcloud' folder in our repo; you can check it if you need. Thanks again for supporting our work!
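In case the repo code isn't a drop-in fit for another renderer, here is a minimal, independent sketch of multi-view RGB-D fusion with Open3D; the image size, intrinsics, depth scale, and per-view pose files are assumptions about the renderer's output, not the format used by `fuse_pointcloud`:

```python
import numpy as np
import open3d as o3d

# Pinhole intrinsics matching the renderer's camera
# (width, height, fx, fy, cx, cy; values are placeholders).
intrinsic = o3d.camera.PinholeCameraIntrinsic(512, 512, 500.0, 500.0, 256.0, 256.0)

fused = o3d.geometry.PointCloud()
for i in range(6):  # one RGB-D pair per rendered view
    color = o3d.io.read_image(f"view_{i}_rgb.png")
    depth = o3d.io.read_image(f"view_{i}_depth.png")
    rgbd = o3d.geometry.RGBDImage.create_from_color_and_depth(
        color, depth, depth_scale=1000.0, convert_rgb_to_intensity=False
    )
    # Hypothetical pose file: a 4x4 world-to-camera matrix saved per view.
    extrinsic = np.loadtxt(f"view_{i}_pose.txt").reshape(4, 4)
    # Back-project each view into world coordinates and accumulate.
    fused += o3d.geometry.PointCloud.create_from_rgbd_image(
        rgbd, intrinsic, extrinsic
    )

# Merge overlapping points from different views with a light voxel filter.
fused = fused.voxel_down_sample(voxel_size=0.005)
o3d.io.write_point_cloud("fused.ply", fused)
```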