sigeisler / robustness_of_gnns_at_scale

This repository contains the official implementation of the paper "Robustness of Graph Neural Networks at Scale" (NeurIPS, 2021).

self.attacked_model.train() leads to autograd error #4

Closed · nixror closed this 1 year ago

nixror commented 2 years ago

Hi, thanks for your solid work! I want to change the loss function, which requires setting self.attacked_model.train() in prbcd.py (I haven't changed anything else). However, this causes the backward call in utils.grad_with_checkpoint to raise an error:

"RuntimeError: Trying to backward through the graph a second time..."

If I set retain_graph=True in backward(), the gradient of self.perturbed_edge_weight is None afterwards. I've been stuck on this bug for a long time; could you please help me?
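For anyone hitting the same traceback, here is a standalone sketch of the failure pattern (the names are illustrative, not from the repo): the error occurs whenever a tensor built in one iteration's graph is reused after backward() has already freed that graph.

```python
import torch

# Illustrative repro, not the repo's code: `cached` is built once,
# but its graph is freed by the first backward(), so reusing it in
# a second backward() raises the reported RuntimeError.
w = torch.zeros(4, requires_grad=True)
cached = (w * 2).sum()   # graph behind `cached` is built here

loss1 = cached + 1.0
loss1.backward()         # frees the graph behind `cached`

loss2 = cached + 2.0
loss2.backward()         # RuntimeError: Trying to backward through
                         # the graph a second time ...
```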

sigeisler commented 2 years ago

I haven't tried setting train. Off the top of my head, I would suggest switching off caching here.
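As a hedged illustration of why caching can trigger this (assuming the model's convolutions behave like PyTorch Geometric's GCNConv; the thread does not confirm which layer is used): with cached=True the layer stores the normalized adjacency from its first forward pass, so later passes reuse a tensor from an already-freed graph, and if the cached result no longer depends on the current perturbed_edge_weight, its gradient can come back as None.

```python
import torch
from torch_geometric.nn import GCNConv

# Hypothetical sketch (layer choice is an assumption): cached=False
# forces each forward pass to recompute the normalized adjacency from
# the current perturbed edge weights; cached=True would reuse a tensor
# from the first pass, whose graph backward() has already freed.
conv = GCNConv(in_channels=16, out_channels=7, cached=False)

x = torch.randn(5, 16)
edge_index = torch.tensor([[0, 1, 2, 3], [1, 2, 3, 4]])
edge_weight = torch.rand(4, requires_grad=True)  # stand-in for the perturbed weights
out = conv(x, edge_index, edge_weight)           # fresh graph on every call
```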

If this does not resolve your issue, I would appreciate it if you could put together a minimal example or explain how to reproduce the error (e.g., what code to insert where).

nixror commented 2 years ago

Thanks for your suggestion! You saved me!!!