Open huduo0812 opened 6 months ago
Hello!
Thank you for your kind words about our work!
I'm not quite sure what exactly you're referring to regarding the Comparative Learning module. In case you are referring to the contrastive representation learning of the explanation embeddings: this is implemented in the original repository of the MEGAN model, https://github.com/aimat-lab/graph_attention_student
Specifically, you can find the calculation of the contrastive learning loss in this method of the main model class: https://github.com/aimat-lab/graph_attention_student/blob/9b3519a964f016569d3a36399fe5a2686e41e274/graph_attention_student/torch/megan.py#L714
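If it helps to get a rough feeling for what a contrastive loss over explanation embeddings can look like, here is a minimal PyTorch sketch of an InfoNCE-style loss. It is purely illustrative: the function name, tensor shapes, and temperature value are assumptions, and it is not the actual code behind the megan.py link above.

```python
import torch
import torch.nn.functional as F


def contrastive_embedding_loss(emb_a: torch.Tensor,
                               emb_b: torch.Tensor,
                               temperature: float = 0.1) -> torch.Tensor:
    """InfoNCE-style contrastive loss between two batches of embeddings.

    emb_a, emb_b: (batch_size, dim) tensors where emb_a[i] and emb_b[i]
    are two views/embeddings of the same graph (positive pair); all other
    combinations in the batch are treated as negatives.
    """
    # Normalize so the dot product becomes cosine similarity
    emb_a = F.normalize(emb_a, dim=-1)
    emb_b = F.normalize(emb_b, dim=-1)

    # Pairwise similarity matrix scaled by temperature: (batch, batch)
    logits = emb_a @ emb_b.t() / temperature

    # The positive pair for row i sits at column i
    targets = torch.arange(emb_a.size(0), device=emb_a.device)
    return F.cross_entropy(logits, targets)


# Toy usage with random tensors standing in for explanation embeddings
if __name__ == "__main__":
    a = torch.randn(32, 64)             # e.g. embeddings from one explanation channel
    b = a + 0.05 * torch.randn(32, 64)  # slightly perturbed "positive" views
    print(contrastive_embedding_loss(a, b).item())
```

The details in the actual model (which embeddings form positive pairs, how negatives are drawn, any additional regularization) will differ, so the linked method in megan.py remains the authoritative reference.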
Best regards, Jonas
On Fri, 17 May 2024 at 16:47, 胡舵 wrote:
Hello, I read your paper and I think it is a very good GNN interpretability work. I think it might inspire me, so I would like to study the details of your code implementation. Unfortunately, I didn't find the code where the Comparative Learning module is implemented, so I wanted to ask you. Thanks!
Hello! Thank you for your reply. I think I may have expressed myself a bit unclearly, but the link you gave should be exactly the part I want to study. Thank you very much for your excellent work. I wish you all the best in your work and a great life!
Best wishes, Duo Hu