ChandlerBang / GCond

[ICLR'22] [KDD'22] [IJCAI'24] Implementation of "Graph Condensation for Graph Neural Networks"
https://www.cs.emory.edu/~wjin30/files/GCond.pdf

Questions about the coarsening part #3

Open Amanda-Zheng opened 1 year ago

Amanda-Zheng commented 1 year ago

Could you provide the source code for the coarsening part? Coarsening operates on the whole graph, but your work seems to coarsen the inductive training graph for the labeled part. Could you please give more details on this?

ChandlerBang commented 1 year ago

Hey, we use the code provided by link. We coarsen the training graph while using the original test graph for inference.

Amanda-Zheng commented 1 year ago

Thanks for your reply. You mentioned that you coarsen the training graph, so here is my understanding, taking the Cora dataset as an example: you coarsen Cora's training graph with 140 nodes, and you operate on this training graph even under the transductive setting. Assuming the coarsening rate is 0.5, would you obtain a graph with 70 nodes? And how do you handle class imbalance, since the coarsening algorithm cannot guarantee that each class keeps the same number of nodes?

ChandlerBang commented 1 year ago

For the transductive setting, the training graph is the same as the full graph (2.7k nodes). Here the ratio r is the number of nodes in the condensed graph divided by the number of training nodes (140). So, when r=0.5, you will obtain a condensed graph with 70 nodes.
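To make the ratio definition concrete, here is a minimal sketch (variable names are illustrative, not from the repo) using the Cora numbers from this thread:

```python
# Sketch: the condensation ratio r is defined relative to the number of
# *training* nodes, not the size of the full graph.
num_full_graph_nodes = 2708   # Cora's full graph (transductive setting)
num_train_nodes = 140         # Cora's labeled training nodes
r = 0.5                       # condensation ratio

condensed_size = int(r * num_train_nodes)
print(condensed_size)  # 70, even though the training graph has 2708 nodes
```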

For GCond, we do not tackle class imbalance; we keep the original label distribution, as shown in https://github.com/ChandlerBang/GCond/blob/main/gcond_agent_transduct.py#L46-L68.
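A rough sketch of what "keeping the original label distribution" means: each class gets a share of the condensed nodes proportional to its share of the training labels, with rounding absorbed by the last class. Function and variable names here are illustrative; see the linked lines for the actual implementation.

```python
from collections import Counter

def num_per_class(train_labels, reduction_rate):
    """Allocate condensed nodes per class in proportion to the
    original training-label distribution (illustrative sketch)."""
    counts = Counter(train_labels)
    n_condensed = int(len(train_labels) * reduction_rate)
    classes = sorted(counts)
    alloc = {}
    # Proportional share for all but the last class, at least 1 node each.
    for c in classes[:-1]:
        alloc[c] = max(1, int(counts[c] * reduction_rate))
    # Last class absorbs the rounding remainder so totals match.
    alloc[classes[-1]] = max(1, n_condensed - sum(alloc.values()))
    return alloc

# Toy example: 140 training labels, imbalanced across 3 classes.
labels = [0] * 60 + [1] * 40 + [2] * 40
print(num_per_class(labels, 0.5))  # {0: 30, 1: 20, 2: 20}
```

The condensed graph thus mirrors the training set's class proportions rather than equalizing class sizes.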