ycjing / AmalgamateGNN.PyTorch

PyTorch implementation of AmalgamateGNN (CVPR'21)
MIT License

Regarding Topological Attribute Gradient Setting #1

Open alex200420 opened 2 years ago

alex200420 commented 2 years ago

Hi! It's a huge pleasure! We are currently working on replicating the logic behind this paper, and we were wondering about the gradient setting in the topological attribute module, specifically in the implemented get_attrb function. Before calling autograd.grad, grad_outputs is set to ones masked by the soft labels, and we couldn't quite grasp the reason behind this. Could you explain this part, or point us to another paper that would help us understand it better?

Thanks in advance!

https://github.com/ycjing/AmalgamateGNN.PyTorch/blob/f99a60b374d23002d53385f23da2d540d964c7c2/topological_attrib_s.py#L38-L40
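For reference, a rough sketch of the kind of call being discussed (this is not the repository's actual code; get_attrb_sketch, model, feats, soft_labels, and threshold are all placeholder names):

```python
import torch

def get_attrb_sketch(model, feats, soft_labels, threshold=0.5):
    # Hypothetical illustration of masking grad_outputs with soft labels
    # before calling autograd.grad; names and details are assumptions.
    feats = feats.clone().requires_grad_(True)
    logits = model(feats)

    # Keep only the components that correspond to the target labels.
    grad_outputs = torch.ones_like(logits) * (soft_labels > threshold).float()

    (attrib,) = torch.autograd.grad(
        outputs=logits,
        inputs=feats,
        grad_outputs=grad_outputs,
    )
    return attrib
```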

ycjing commented 2 years ago

Hi @alex200420

Thank you for your interest in our work! I truly appreciate it. Regarding the question: as mentioned in the official PyTorch documentation, grad_outputs is the vector in the vector-Jacobian product. In our case of multi-label classification on PPI, instead of computing the full Jacobian, we only want to match the most critical gradient components, namely those that correspond to the target labels.
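A small self-contained demo of this behavior (toy tensors, not the paper's model): grad_outputs acts as the vector v in v^T J, so a masked v picks out only the gradient rows for the target labels without ever materializing the full Jacobian.

```python
import torch

# Toy multi-label setup: 4-dim input, 3 output "labels".
x = torch.randn(4, requires_grad=True)
W = torch.randn(3, 4)
out = W @ x  # shape (3,): one score per label

# Hypothetical target-label mask (e.g. labels 0 and 2 are positive).
mask = torch.tensor([1.0, 0.0, 1.0])

# grad_outputs is the vector v in the vector-Jacobian product v^T J.
# With a masked v, only the gradient components belonging to the
# target labels contribute; the full 3x4 Jacobian is never built.
(g,) = torch.autograd.grad(out, x, grad_outputs=mask)

# Equivalent to summing the Jacobian rows selected by the mask.
assert torch.allclose(g, W[0] + W[2])
print(g)
```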

Thank you again for your interest!

Cheers, Yongcheng