mit-han-lab / dlg

[NeurIPS 2019] Deep Leakage From Gradients
https://dlg.mit.edu/
MIT License

grad_diff.backward() #8

Closed shanefeng123 closed 1 year ago

shanefeng123 commented 1 year ago

On line 93 in main.py, when doing the backpropagation, did you freeze the model parameters so that it would only update the dummy inputs?

shanefeng123 commented 1 year ago

Never mind, I just saw that you only passed the dummy inputs and labels to the optimiser.
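For anyone else landing here: the model parameters still receive `.grad` values when `grad_diff.backward()` runs, but they never move, because only the dummy tensors are registered with the optimizer. Below is a minimal, self-contained sketch of that pattern (not the repo's exact code; the network, shapes, and tensor names here are illustrative):

```python
import torch

torch.manual_seed(0)
net = torch.nn.Linear(4, 2)  # stand-in for the repo's LeNet

# "Ground-truth" gradients that a victim would have shared.
x, y = torch.randn(1, 4), torch.tensor([1])
loss = torch.nn.functional.cross_entropy(net(x), y)
true_grads = [g.detach() for g in torch.autograd.grad(loss, net.parameters())]

# Dummy data/label are the ONLY leaves handed to the optimizer,
# so optimizer.step() can never touch the model weights.
dummy_x = torch.randn(1, 4, requires_grad=True)
dummy_y = torch.randn(1, 2, requires_grad=True)
optimizer = torch.optim.LBFGS([dummy_x, dummy_y])

w_before = net.weight.detach().clone()

def closure():
    optimizer.zero_grad()
    pred = net(dummy_x)
    # Soft-label cross entropy on the trainable dummy label.
    dummy_loss = torch.mean(torch.sum(
        -torch.softmax(dummy_y, dim=-1) * torch.log_softmax(pred, dim=-1),
        dim=-1))
    # create_graph=True keeps grad_diff differentiable w.r.t. dummy_x/dummy_y.
    dummy_grads = torch.autograd.grad(dummy_loss, net.parameters(),
                                      create_graph=True)
    grad_diff = sum(((dg - tg) ** 2).sum()
                    for dg, tg in zip(dummy_grads, true_grads))
    grad_diff.backward()  # fills .grad on dummy_x, dummy_y (and the weights)
    return grad_diff

optimizer.step(closure)

# Weights accumulated gradients but were never updated.
assert torch.equal(net.weight.detach(), w_before)
print("model weights unchanged; only dummy tensors were optimized")
```

So no explicit freezing (e.g. `requires_grad = False`) is needed; exclusion from the optimizer's parameter list is what keeps the model fixed.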