gaoxiaoninghit opened this issue 5 years ago
@yanx27
Because this algorithm only needs RGBD as input.
Yes, I know, but did you get the 3D point cloud during processing? How did you get it? I don't see the camera parameters you use anywhere in the program.
It's in the EnetGnn function of model.py; I didn't get a point cloud in preprocessing ==
But the original 3DGNN (https://github.com/xjqicuhk/3DGNN) uses the point cloud. Do you mean this is a different implementation from the paper "3D Graph Neural Networks for RGBD Semantic Segmentation"?
There is an algorithm that converts RGBD to a point cloud, so it only needs RGBD as input. 3DGNN (https://github.com/xjqicuhk/3DGNN) also uses this method, right? ==
Although I don't see the details of how to convert RGBD to a point cloud in that paper, I know you need the camera parameters to get the point cloud, and I don't see the camera parameters you use anywhere. You only use the HHA encoding. The HHA encoding contains camera-parameter information, but I don't know of a way to convert HHA back into a point cloud. Thank you very much!
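For reference, the usual depth-to-point-cloud conversion is a pinhole back-projection with the camera intrinsics. A minimal sketch is below; fx, fy, cx, cy are placeholders for the real calibration (e.g. the Kinect depth intrinsics shipped with the official NYUDv2 toolbox), which this repo's HHA inputs don't expose directly:

```python
import numpy as np

def depth_to_pointcloud(depth, fx, fy, cx, cy):
    """Back-project a metric depth map (H, W) into an (H*W, 3) point cloud
    with the pinhole model: X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy, Z = depth."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # per-pixel column/row indices
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.stack([x, y, depth], axis=-1).reshape(-1, 3)

# Hypothetical usage -- fx/fy/cx/cy must come from the dataset's calibration:
# points = depth_to_pointcloud(depth_in_meters, fx=580.0, fy=580.0, cx=320.0, cy=240.0)
```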
In addition, did you manage to run it successfully on your computer?
Yes, but I haven't finished the data-processing part that builds the inputs from the NYUDv2 dataset directly. Instead, I used HHA images copied from others. If I have time, I will finish it. Sorry for the trouble :)
No no, thank you for sharing your code!
Why get the 3D points this way instead of using the camera parameters and the depth data?
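Whichever way the per-pixel 3D coordinates are obtained (back-projected from depth with the intrinsics as sketched above, or derived from the HHA channels inside EnetGnn as discussed here), the GNN part then builds a k-nearest-neighbour graph over them. A minimal sketch of that step, not the repo's exact code:

```python
import torch

def knn_graph(points, k):
    """Build a KNN graph over 3D points.
    points: (N, 3) tensor of per-pixel coordinates.
    Returns (N, k) indices of each point's k nearest neighbours."""
    dist = torch.cdist(points, points)           # pairwise Euclidean distances, (N, N)
    dist.fill_diagonal_(float('inf'))            # exclude each point from its own neighbours
    return dist.topk(k, largest=False).indices   # k smallest distances per row
```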