loicland / superpoint_graph

Large-scale Point Cloud Semantic Segmentation with Superpoint Graphs
MIT License

Cannot understand Loader function #118

Closed hagianga21 closed 5 years ago

hagianga21 commented 5 years ago

Hi, thank you so much for your work. Could you please explain these functions in the loader function in spg.py? Thank you so much for your help.

```python
# 1) subset (neighborhood) selection of (permuted) superpoint graph
if train:
    if 0 < args.spg_augm_hardcutoff < G.vcount():
        perm = list(range(G.vcount())); random.shuffle(perm)
        G = G.permute_vertices(perm)

    if 0 < args.spg_augm_nneigh < G.vcount():
        G = random_neighborhoods(G, args.spg_augm_nneigh, args.spg_augm_order)

    if 0 < args.spg_augm_hardcutoff < G.vcount():
        G = k_big_enough(G, args.ptn_minpts, args.spg_augm_hardcutoff)
```
loicland commented 5 years ago

Hi,

This loader implements the superpoint graph augmentation scheme described in the appendix.

The first step shuffles the vertex indices.
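
For illustration, here is a minimal sketch on a toy igraph graph (not the actual SPG data) showing that the shuffle only changes the vertex ordering, not the topology:

```python
# Toy example of step 1: shuffle the vertex ordering of an igraph graph.
import random
import igraph

G = igraph.Graph.Ring(5)           # 5 "superpoints" connected in a ring
G.vs["name"] = list("abcde")       # label vertices so the shuffle is visible

perm = list(range(G.vcount()))
random.shuffle(perm)
G = G.permute_vertices(perm)       # same edges, new vertex ordering

print(G.vs["name"])                # e.g. ['c', 'a', 'e', 'b', 'd']
```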

The second step selects a subgraph by picking spg_augm_nneigh random superpoints and adding their neighborhoods up to spg_augm_order hops.
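
Here is a hedged sketch of what such a neighborhood selection amounts to with python-igraph; the exact behaviour of random_neighborhoods in spg.py may differ:

```python
# Sketch only: keep `nneigh` random seed superpoints plus their `order`-hop neighborhoods.
import random
import igraph

def random_neighborhoods_sketch(G, nneigh, order):
    seeds = random.sample(range(G.vcount()), nneigh)
    # neighborhood() returns, for each seed, the vertices within `order` hops (seed included)
    keep = set()
    for nb in G.neighborhood(seeds, order=order):
        keep.update(nb)
    return G.induced_subgraph(sorted(keep))
```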

The third step selects a subgraph with at most spg_augm_hardcutoff superpoints containing at least ptn_minpts points (i.e. the ones embedded by PointNet units). This helps with the memory load.
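
A rough sketch of the cutoff idea, assuming a hypothetical point_count vertex attribute (the real k_big_enough may store the superpoint size under a different name and select vertices differently):

```python
# Sketch only: walk the (already shuffled) vertices and stop once more than
# `hardcutoff` superpoints with at least `minpts` points have been collected.
def k_big_enough_sketch(G, minpts, hardcutoff):
    n_big = 0
    keep = []
    for v in G.vs:
        if v["point_count"] >= minpts:   # assumed attribute holding the superpoint size
            n_big += 1
            if n_big > hardcutoff:
                break
        keep.append(v.index)
    return G.induced_subgraph(keep)
```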

By selecting small subgraphs, the training step sees a wide variety of graph configurations, which makes the network more resilient. If they are too small, the subgraphs are not meaningful; if they are too large, training always sees the exact same graphs, which makes it less resilient.

hagianga21 commented 5 years ago

Hi, thank you so much for your help. I am a little confused about step 2, i.e. G = random_neighborhoods(G, spg_augm_nneigh, spg_augm_order).

As far as I understand, since each center has many neighbors, the selected neighborhoods cover most of the graph. As a result, this whole step seems not to change anything.

loicland commented 5 years ago

For small SPGs, yes, this step might not decrease the size dramatically, but the next one will.

What is important here is that during training the SPGs are different every time, so that the network learns to adapt to a variety of configurations. Depending on your data and the connectivity of its SPG you may need to alter this step; however, it worked well for S3DIS and Semantic3D.
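
If you want to check this on your own data, one simple sketch is to log the superpoint count after each stage of the loader, reusing the calls shown above:

```python
# Print the number of superpoints after each augmentation stage.
print("full SPG:            ", G.vcount())
G = random_neighborhoods(G, args.spg_augm_nneigh, args.spg_augm_order)
print("after neighborhoods: ", G.vcount())
G = k_big_enough(G, args.ptn_minpts, args.spg_augm_hardcutoff)
print("after hard cutoff:   ", G.vcount())
```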