Could you please explain why the model aggregates the node representations to obtain pred and then performs aggregation again? I don't quite understand the reason for this part.
We use subgraph similarity calculation as the task template, which requires us to aggregate node embeddings to obtain subgraph embeddings.
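To make the two aggregation steps concrete, here is a minimal sketch of the idea: node embeddings are first pooled into a subgraph embedding, and similarity is then computed between subgraph embeddings. Mean pooling and cosine similarity are assumptions for illustration; the repo's actual readout and similarity function may differ.

```python
import numpy as np

def subgraph_embedding(node_embs, node_idx):
    # First aggregation: mean-pool the embeddings of the nodes
    # that belong to the subgraph (readout choice is an assumption).
    return node_embs[node_idx].mean(axis=0)

def subgraph_similarity(emb_a, emb_b):
    # Cosine similarity between two subgraph embeddings.
    return float(emb_a @ emb_b /
                 (np.linalg.norm(emb_a) * np.linalg.norm(emb_b)))

# Toy example: 4 nodes with 3-dimensional embeddings.
node_embs = np.array([[1.0, 0.0, 0.0],
                      [0.0, 1.0, 0.0],
                      [1.0, 1.0, 0.0],
                      [0.0, 0.0, 1.0]])

g1 = subgraph_embedding(node_embs, [0, 1])  # subgraph {0, 1}
g2 = subgraph_embedding(node_embs, [2])     # subgraph {2}
print(subgraph_similarity(g1, g2))          # close to 1.0: parallel directions
```

The pooling step is what turns a set of per-node vectors into a single vector per subgraph, which is why aggregation appears before the similarity computation.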
We only pre-train the graph encoder once for both node and graph classification tasks. The pre-training code can be found in \nodedownstream\pre-train.py.
Prompt
Also, there are two loss functions, reg_loss and bp_loss, but I couldn't find the backward() call. Did I miss something? Thank you for your patience!
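For context on what such a setup usually looks like, here is a hypothetical training step following one common PyTorch convention: a single loss (here named bp_loss, matching the question) is passed to backward(), while the other (reg_loss) is only tracked for reporting. This is an assumption about the naming, not the repo's confirmed behavior.

```python
import torch

# Hypothetical model and optimizer for illustration only.
model = torch.nn.Linear(4, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

x = torch.randn(8, 4)
y = torch.randn(8, 1)

pred = model(x)
bp_loss = torch.nn.functional.mse_loss(pred, y)   # drives backpropagation
reg_loss = torch.nn.functional.l1_loss(pred, y)   # logged/reported only

optimizer.zero_grad()
bp_loss.backward()   # the single backward() call
optimizer.step()
print(f"bp_loss={bp_loss.item():.4f}, reg_loss={reg_loss.item():.4f}")
```

Under this convention, the absence of reg_loss.backward() is intentional: only bp_loss contributes gradients.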