Thanks for flagging this! Replacing register_backward_hook with register_full_backward_hook is indeed one of the planned changes which will be pushed to Opacus soon!
🐛 Bug
I am training a ConvNet model on OCT data and analysing the privacy spent using Opacus, while implementing Random Sparsification, which uses BackPACK for calculating gradients.
Please reproduce using this repo: GitHub
To Reproduce
python code/train_with_rs_opacus.py
Using a non-full backward hook when the forward contains multiple autograd Nodes is deprecated and will be removed in future versions. This hook will be missing some grad_input. Please use register_full_backward_hook to get the documented behavior.
Expected behavior
The GradSampleModule should use register_full_backward_hook instead of the deprecated register_backward_hook.
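For context, here is a minimal standalone sketch of the API the warning recommends (this is not Opacus code; the layer and sizes are arbitrary, chosen only to show that the full hook fires and receives the gradients):

```python
# Minimal sketch of the recommended full-hook API (not Opacus internals).
# register_full_backward_hook delivers the documented grad_input/grad_output
# even when a module's forward creates multiple autograd nodes.
import torch
import torch.nn as nn

grads = []

def hook(module, grad_input, grad_output):
    # grad_output[0] is the gradient w.r.t. the module's output
    grads.append(grad_output[0].detach().clone())

layer = nn.Linear(4, 2)
# Old, deprecated style would be: layer.register_backward_hook(hook)
handle = layer.register_full_backward_hook(hook)

out = layer(torch.randn(3, 4))
out.sum().backward()
handle.remove()

print(len(grads), grads[0].shape)  # 1 torch.Size([3, 2])
```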
Environment
- How you installed Opacus (conda, pip, source): pip
- Python version: 3.11.8
- backpack-for-pytorch==1.6.0
- Opacus version: 1.4.1
Additional context
I would be happy to contribute and implement this; I just need a bit of direction on what should be changed. Thanks :)
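As a rough illustration of the kind of one-line change involved (the wrapper class and method names below are hypothetical placeholders, not Opacus's actual internals):

```python
# Hypothetical sketch: a gradient-capturing wrapper switching from the
# deprecated hook registration to the full-hook API. Names are placeholders,
# not Opacus code.
import torch
import torch.nn as nn

class GradCaptureWrapper(nn.Module):
    def __init__(self, module):
        super().__init__()
        self.module = module
        self.captured = []
        # Deprecated style would be:
        #   self.handle = module.register_backward_hook(self._hook)
        # Recommended full-hook style:
        self.handle = module.register_full_backward_hook(self._hook)

    def _hook(self, module, grad_input, grad_output):
        self.captured.append(grad_output[0].detach().clone())

    def forward(self, x):
        return self.module(x)

wrapped = GradCaptureWrapper(nn.Linear(4, 2))
wrapped(torch.randn(3, 4)).sum().backward()
print(len(wrapped.captured))  # 1
```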