loicland / superpoint_graph

Large-scale Point Cloud Semantic Segmentation with Superpoint Graphs
MIT License
764 stars 214 forks

Embeddings from ptnCloudEmbedder and edge information #145

Open Yacovitch opened 5 years ago

Yacovitch commented 5 years ago

Hi Loic,

I have a couple of questions.

  1. Are the embeddings from ptnCloudEmbedder of shape [number of superpoints in each batch, dimension of the embedded vector]?
  2. Does the embedding contain the edges (connections between superpoints)?
  3. If not, where does model.ecc take the edge information from?
  4. It seems to iterate 7 times per epoch during training with batch size 2 on the Semantic3D trainval dataset. Shouldn't it be 8 times, since there are 15 scenes?
  5. Is the order of batches selected randomly?

Thank you in advance!

loicland commented 5 years ago
  1. Yes, nbatch×nsuperpoints by embedding size.
  2. No, there is no edge information in the embeddings.
  3. In the superpoint graph files you have superedge descriptors. They are rewritten in GIs. Access them with GIs[0].edge_feats.
  4. The loader has the drop_last option, which means it drops batches smaller than the batch size.
  5. Yes, and different at each iteration.
Yacovitch commented 5 years ago

Thank you

Yacovitch commented 5 years ago

Could you explain a bit more about the output of GIs[0].edge_feats? Is it correct that it is [number of edges (connections between nodes), number of edge features (13)]?

Also, how can I extract the labels and the adjacency matrix of the graph that are associated with the features (embeddings) that go into model.ecc?

loicland commented 5 years ago
  1. Correct.
  2. The graphs are stored in an igraph structure in GIs. You can access the graph structure with all of igraph's methods.

Labels are stored in label_mode and label_vec; they do not go into the ecc model, as they are the supervision.
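Since the superpoint graphs are igraph objects, an edge list can be pulled out of them (igraph exposes this via its get_edgelist() method) and turned into an adjacency matrix. The sketch below uses a made-up edge list standing in for that output; it is an illustration, not the repo's code:

```python
# Hypothetical edge list, standing in for what igraph's get_edgelist()
# would return on a superpoint graph: pairs of superpoint indices.
edges = [(0, 1), (1, 2), (0, 2), (2, 3)]
n_nodes = 4

# Build a dense adjacency matrix (list of lists) from the edge list.
adj = [[0] * n_nodes for _ in range(n_nodes)]
for u, v in edges:
    adj[u][v] = 1
    adj[v][u] = 1  # superedges treated as undirected here

print(adj)  # [[0, 1, 1, 0], [1, 0, 1, 0], [1, 1, 0, 1], [0, 0, 1, 0]]
```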

Yacovitch commented 5 years ago

Hello, I still have a couple of questions. Thank you in advance for your reply.

  1. Does GIs[0].edge_feats contain the features of filtered or unfiltered edges?
  2. Where does the edge filtering happen? Does it come from the partition?
  3. Is it possible to extract the edge lists associated with the embeddings?
  4. You answered that the dimension of GIs[0].edge_feats is nbatch×nsuperpoints by embedding size. Is nsuperpoints consistent across batches?
  5. In the function set_batch in GraphConvInfo.py, there is a loop for i,G in enumerate(graphs):. What is its purpose? I believe batches are already separated by torch.utils.data.DataLoader(train_dataset, batch_size=args.batch_size, collate_fn=spg.eccpc_collate, num_workers=args.nworkers, shuffle=True, drop_last=True).

Again, thank you very much

loicland commented 5 years ago

1, 2. Superedges are filtered in spg_reader in learning/spg.py, so filtered: it removes edges that are too long.

  3. Not sure that I understand. Superedges are not embedded. But if you want to extract the filter, it is computed at line 42 of learning/modules.py.
  4. It is, because superpoints are over- or sub-sampled to contain only --ptn_npts (default 128) points. The subsampling is random and different at each run.
  5. This loop collates the different superpoint graphs into an easily accessible structure.
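The random over- or sub-sampling described in point 4 can be sketched with the standard library. This is a minimal stand-in for the behaviour, not the repo's implementation; the function name resample_superpoint is hypothetical:

```python
import random

def resample_superpoint(points, npts=128):
    """Randomly sub- or over-sample a superpoint's points to exactly npts,
    mirroring the --ptn_npts behaviour described above (a sketch only)."""
    if len(points) >= npts:
        return random.sample(points, npts)               # subsample without replacement
    # oversample: pad with points drawn with replacement
    return points + random.choices(points, k=npts - len(points))

cloud = list(range(50))                                  # toy superpoint with 50 points
print(len(resample_superpoint(cloud, npts=128)))         # always exactly 128
```

Because the draws are random, two runs resample the same superpoint differently, which matches the "different at each run" remark above.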