Hi,
This loader implements the superpoint graph augmentation scheme described in the appendix.
The first step shuffles the vertex indices.
The second step selects a subgraph by picking spg_augm_nneigh random superpoints and adding their neighborhoods up to spg_augm_order hops.
The third step selects a subgraph with at most spg_augm_hardcutoff superpoints of at least ptn_minpts points (i.e. those that are embedded by the PointNet units). This helps with the memory load.
By selecting small subgraphs, the training step sees a wide variety of graph configurations, making the network more resilient. If the subgraphs are too small, they are not meaningful; if they are too large, training always sees the exact same graphs, making the network less resilient. A sketch of these three steps is below.
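Here is a minimal sketch of the three steps, assuming a python-igraph graph whose vertices carry the point count of each superpoint in a hypothetical "n_pts" attribute. This is an illustration of the idea, not the exact code from spg.py:

```python
# Sketch of the loader's augmentation steps (illustrative, not spg.py's code).
# Assumes a python-igraph Graph whose vertices have a hypothetical "n_pts"
# attribute holding the number of points in each superpoint.
import random
import igraph

def shuffle_vertices(G):
    """Step 1: apply a random permutation to the vertex indices."""
    perm = list(range(G.vcount()))
    random.shuffle(perm)
    return G.permute_vertices(perm)

def random_neighborhoods(G, num, order):
    """Step 2: keep `num` random superpoints plus their `order`-hop neighborhoods."""
    centers = random.sample(range(G.vcount()), min(num, G.vcount()))
    neighborhoods = G.neighborhood(centers, order=order)
    subset = sorted({v for hood in neighborhoods for v in hood})
    return G.subgraph(subset)

def hard_cutoff(G, hardcutoff, minpts):
    """Step 3: keep at most `hardcutoff` superpoints that have at least
    `minpts` points (the ones embedded by the PointNet units)."""
    n_big, subset = 0, []
    for v in G.vs:  # the order is already random thanks to step 1
        if v["n_pts"] >= minpts:
            if n_big == hardcutoff:
                continue  # drop the surplus of big superpoints
            n_big += 1
        subset.append(v.index)
    return G.subgraph(subset)
```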
Hi, thank you so much for your help. I am a little confused about step 2, i.e. G = random_neighborhoods(G, spg_augm_nneigh, spg_augm_order). As far as I know, because one center has so many neighbors, the union of the sampled neighborhoods covers almost the whole graph. As a result, this step seems not to change the graph at all.
For a small SPG, yes, this step might not decrease the size dramatically, but the next one will.
What is important here is that during training the SPGs are different every time, so that the network learns to adapt to a variety of configurations. Depending on your data and the connectivity of its SPG, you may need to alter this step; however, it worked well for S3DIS and Semantic3D. The toy example below shows the effect on a small, well-connected graph.
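To make this concrete, here is a toy example (all numbers are made up for the illustration, not values from the repository) showing that on a small, well-connected graph, a few high-order neighborhoods already cover nearly every vertex, so the sampling step barely shrinks it:

```python
# Toy illustration: neighborhood sampling barely reduces a small, dense graph.
import random
import igraph

random.seed(0)
G = igraph.Graph.Erdos_Renyi(n=30, p=0.15)  # stand-in for a small SPG

centers = random.sample(range(G.vcount()), 3)
neighborhoods = G.neighborhood(centers, order=3)
subset = sorted({v for hood in neighborhoods for v in hood})
print(G.vcount(), "->", len(subset))  # typically close to 30: little reduction
```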
Hi, thank you so much for your work. Could you please help me explain these functions in the loader function in spg.py? Thank you so much for your help.