Hello @qianguih, thanks for sharing your code!
I have some questions about your work on the ScanNet dataset. Since your repo and paper don't include many details about this dataset, I wonder how you preprocessed it. Did you split each whole scene into several small blocks for training, as you did for the S3DIS dataset, or randomly drop points from the full scene as PointNet++ does (I know your work didn't use norm)?
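Just to make sure we are talking about the same thing, here is a rough sketch of the block-splitting strategy I mean (the function name and parameters like `block_size` and `num_points` are my own assumptions, not taken from your repo):

```python
import numpy as np

def split_scene_into_blocks(points, block_size=1.5, stride=1.5,
                            num_points=4096, min_points=100):
    """Tile the scene's XY footprint into fixed-size blocks and sample
    a fixed number of points from each block.
    points: (N, 3+) array with XYZ in the first three columns.
    Returns a list of (num_points, points.shape[1]) arrays."""
    xyz = points[:, :3]
    xy_min = xyz[:, :2].min(axis=0)
    xy_max = xyz[:, :2].max(axis=0)
    blocks = []
    for x0 in np.arange(xy_min[0], xy_max[0], stride):
        for y0 in np.arange(xy_min[1], xy_max[1], stride):
            mask = ((xyz[:, 0] >= x0) & (xyz[:, 0] < x0 + block_size) &
                    (xyz[:, 1] >= y0) & (xyz[:, 1] < y0 + block_size))
            idx = np.where(mask)[0]
            if idx.size < min_points:  # skip nearly empty blocks
                continue
            # sample with replacement if the block has fewer points than needed
            choice = np.random.choice(idx, num_points,
                                      replace=idx.size < num_points)
            blocks.append(points[choice])
    return blocks
```

Is this roughly what you did for ScanNet, or did you feed the whole scene with random point dropping instead?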
I am looking forward to your reply. Thank you!
Hi, I also want to reproduce the results on the ScanNet dataset, but I don't know how to process the data. Do you have any suggestions? Could you also share the preprocessing code? Thanks!