Closed: nixror closed this issue 1 year ago
I haven't tried setting train() myself. Spontaneously, I would suggest switching off caching here.
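To illustrate why caching interacts badly with repeated backward passes, here is a minimal pure-PyTorch sketch (the cache dict and forward function are hypothetical stand-ins, not the actual prbcd.py code; in PyTorch Geometric the analogous switch is the cached argument of layers such as GCNConv):

```python
import torch

# Hypothetical cache, mimicking a layer that reuses an intermediate
# tensor from the first forward pass (as GCNConv does with cached=True).
cache = {}

def forward(w, x, use_cache):
    if use_cache:
        if "h" not in cache:
            cache["h"] = w * x  # part of the graph built on the FIRST call
        h = cache["h"]
    else:
        h = w * x  # fresh graph every call
    return (h ** 2).sum()

w = torch.tensor(2.0, requires_grad=True)
x = torch.tensor(3.0)

# With caching: the first backward() frees the graph behind the cached
# tensor, so the second backward() fails.
forward(w, x, True).backward()
try:
    forward(w, x, True).backward()
except RuntimeError as e:
    print("second time" in str(e))  # the error the issue reports

# Without caching: each call rebuilds the graph, so repeated
# backward() calls work and gradients accumulate normally.
w.grad = None
forward(w, x, False).backward()
forward(w, x, False).backward()
print(w.grad)
```

This is why disabling caching lets each attack iteration backpropagate through a freshly built graph instead of a freed one.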
If this does not resolve your issue, I would appreciate it if you could provide a minimal example or explain how to reproduce the error (e.g., what code to insert where).
Thanks for your suggestion! You saved me!!!
Hi, thanks for your solid work! I want to change the loss function, which requires setting self.attacked_model.train() in prbcd.py (I haven't changed anything else). However, this causes the backward call in utils.grad_with_checkpoint to raise:
"RuntimeError: Trying to backward through the graph a second time..."
If I instead set retain_graph=True in backward(), then the grad of self.perturbed_edge_weight is None after backward(). I've been confused by this bug for a long time; could you please help me?
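For context, one common reason a tensor's grad is None after backward() is that the tensor is not a leaf, e.g. because it is re-created by some op (clamping, a projection step) on each iteration. This is a hypothetical sketch of that behavior, not the actual prbcd.py code:

```python
import torch

# A leaf tensor receives .grad as usual, even with retain_graph=True.
p = torch.rand(5, requires_grad=True)
p.sum().backward(retain_graph=True)
assert p.grad is not None

# A non-leaf tensor (here produced by clamp) has grad None after
# backward(): gradients flow through it to the leaf p instead.
pw = p.clamp(0, 1)
pw.sum().backward()
print(pw.grad)  # None (PyTorch warns when reading .grad on a non-leaf)

# Calling retain_grad() before backward() tells autograd to keep
# the gradient on the non-leaf tensor as well.
pw2 = p.clamp(0, 1)
pw2.retain_grad()
pw2.sum().backward()
print(pw2.grad)
```

If perturbed_edge_weight is rebuilt like pw above on every step, reading .grad from the old reference would return None regardless of retain_graph.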