yanx27 / 3DGNN_pytorch

3D Graph Neural Networks for RGBD Semantic Segmentation
MIT License

Do you get the 3D point from HHA data? #3

Open gaoxiaoninghit opened 5 years ago

gaoxiaoninghit commented 5 years ago

Why do you compute the 3D points this way instead of using the camera parameters and the depth data?

gaoxiaoninghit commented 5 years ago

@yanx27

yanx27 commented 5 years ago

Because this algorithm only takes RGBD as input.

gaoxiaoninghit commented 5 years ago

Yes, I know, but did you obtain the 3D point cloud at some point during processing? How did you get it? I don't see the camera parameters being used anywhere in the program.
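
For context, the usual way to get a point cloud from a depth map is pinhole back-projection with the camera intrinsics. A minimal sketch (the intrinsic values below are hypothetical placeholders, not values from this repository):

```python
import numpy as np

# Hypothetical pinhole intrinsics (fx, fy, cx, cy); the real values
# depend on the sensor and are NOT taken from this repo.
FX, FY, CX, CY = 525.0, 525.0, 319.5, 239.5

def depth_to_points(depth):
    """Back-project an (H, W) depth map (metres) to an (H, W, 3) point cloud."""
    h, w = depth.shape
    # u indexes columns, v indexes rows
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - CX) * depth / FX
    y = (v - CY) * depth / FY
    return np.stack([x, y, depth], axis=-1)
```

Without the intrinsics you can only get points up to an unknown per-axis scale, which is the crux of the question above.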

yanx27 commented 5 years ago

It's in the EnetGnn function of model.py; I didn't build the point cloud during preprocessing.
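
For readers following along: a 3DGNN-style model builds a k-nearest-neighbour graph over the 3D points inside the forward pass and then passes messages along its edges. A rough sketch of the graph-construction step (not the repository's actual code, just an illustration of the idea):

```python
import numpy as np

def knn_graph(points, k):
    """Return the (N, k) indices of each point's k nearest neighbours.

    points: (N, 3) array of 3D coordinates.
    Brute-force O(N^2) distances; real implementations would use a
    spatial index or GPU batching.
    """
    # pairwise squared Euclidean distances
    d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(axis=-1)
    np.fill_diagonal(d2, np.inf)  # a point is not its own neighbour
    return np.argsort(d2, axis=1)[:, :k]
```

The GNN then aggregates features from these neighbour indices at every propagation step.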

gaoxiaoninghit commented 5 years ago

But the original 3DGNN (https://github.com/xjqicuhk/3DGNN) uses the point cloud. Do you mean this is a different implementation from the paper "3D Graph Neural Networks for RGBD Semantic Segmentation"?

yanx27 commented 5 years ago

There is an algorithm for converting RGBD to a point cloud, so the model only needs RGBD as input. 3DGNN (https://github.com/xjqicuhk/3DGNN) also uses this method, right?

gaoxiaoninghit commented 5 years ago

Although I don't see the details of how to convert RGBD to a point cloud in that paper, I know you need the camera parameters to obtain the point cloud, and I don't see any camera parameters being used in your code. You only use the HHA encoding. The HHA encoding does incorporate the camera parameter information, but I don't know of a way to convert HHA back to a point cloud. Thank you very much!
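
One possible route, since HHA's first channel encodes horizontal disparity: invert that channel back to depth and then back-project. A sketch under loud assumptions: the scale factor and the exact disparity encoding used by the HHA preprocessing are hypothetical here, and the real HHA code (Gupta et al.) may normalize the channel differently.

```python
import numpy as np

def hha_disparity_to_depth(hha, scale=1000.0):
    """Recover depth from the HHA disparity channel.

    Assumes channel 0 stores disparity such that depth = scale / disparity;
    the actual normalisation used to build the HHA images may differ,
    so `scale` is a placeholder, not a known constant.
    """
    disparity = hha[..., 0].astype(np.float64)
    depth = np.zeros_like(disparity)
    valid = disparity > 0  # zero disparity means no measurement
    depth[valid] = scale / disparity[valid]
    return depth
```

After this, the depth map could be back-projected with the camera intrinsics as usual, which is why the intrinsics question in this thread still matters.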

gaoxiaoninghit commented 5 years ago

In addition, does it run successfully on your computer?

yanx27 commented 5 years ago

Yes, but I haven't finished the data-processing part that models the NYUDv2 dataset directly; instead, I used HHA images precomputed by others. I'll finish it when I have time. Sorry for the trouble :)

gaoxiaoninghit commented 5 years ago

No trouble at all, thank you for sharing your code!