pytorch / opacus

Training PyTorch models with differential privacy
https://opacus.ai
Apache License 2.0

Grad Sample Module: Use full backward hook to save activations and backprop values. #652

Closed. ParthS007 closed this issue 1 week ago.

ParthS007 commented 1 month ago

🐛 Bug

I am training my ConvNet model on OCT data and analysing the privacy spent using Opacus, while implementing Random Sparsification, which uses BackPACK for calculating gradients.
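For reference, the per-sample gradients BackPACK provides are obtained roughly as in the sketch below; the toy model, data shapes, and loss are illustrative stand-ins, not code from the linked repo.

```python
import torch
import torch.nn as nn
from backpack import backpack, extend
from backpack.extensions import BatchGrad

# Toy stand-ins for the ConvNet and the OCT data described above.
model = extend(nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10)))
loss_fn = extend(nn.CrossEntropyLoss())

x = torch.randn(8, 1, 28, 28)
y = torch.randint(0, 10, (8,))

loss = loss_fn(model(x), y)

# BatchGrad stores a per-sample gradient for every parameter.
with backpack(BatchGrad()):
    loss.backward()

for name, p in model.named_parameters():
    print(name, p.grad_batch.shape)  # (batch_size, *p.shape)
```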

Please reproduce using this repo: GitHub

To Reproduce

⚠️ We cannot help you without you sharing reproducible code. Do not ignore this part :)

Steps to reproduce the behavior:

  1. python code/train_with_rs_opacus.py

Running the script prints the following deprecation warning:

Using a non-full backward hook when the forward contains multiple autograd Nodes is deprecated and will be removed in future versions. This hook will be missing some grad_input. Please use register_full_backward_hook to get the documented behavior.
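For context, the same PyTorch warning can be reproduced outside Opacus with any module whose forward pass creates more than one autograd node; the tiny module and hook below are illustrative only.

```python
import torch
import torch.nn as nn

class TinyNet(nn.Module):
    """Forward builds several autograd nodes (conv, relu, add), which is what
    makes a non-full backward hook emit the deprecation warning."""

    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(1, 4, kernel_size=3)

    def forward(self, x):
        return torch.relu(self.conv(x)) + 1.0

def print_grads(module, grad_input, grad_output):
    print("grad_output[0] shape:", grad_output[0].shape)

x = torch.randn(2, 1, 8, 8)

# Deprecated: triggers the warning above and may pass an incomplete grad_input.
old = TinyNet()
old.register_backward_hook(print_grads)
old(x).sum().backward()

# Replacement with the documented behavior.
new = TinyNet()
new.register_full_backward_hook(print_grads)
new(x).sum().backward()
```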

Expected behavior

The GradSampleModule should register the full backward hook (register_full_backward_hook) instead of the deprecated register_backward_hook, for example:

self.autograd_grad_sample_hooks.append(
    module.register_full_backward_hook(
        partial(
            self.capture_backprops_hook,
            loss_reduction=loss_reduction,
            batch_first=batch_first,
        )
    )
)
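Since the issue title is about saving both activations and backprop values, here is a self-contained toy sketch of that pattern with a full backward hook. None of this is Opacus code; MiniGradSampleWrapper, _capture_activations, and _capture_backprops are made-up names for illustration, and batch_first is only threaded through to mirror the snippet above.

```python
from functools import partial

import torch
import torch.nn as nn


class MiniGradSampleWrapper(nn.Module):
    """Toy wrapper: a forward hook saves each layer's input activations and a
    full backward hook saves the corresponding backprops (grad_output)."""

    def __init__(self, module: nn.Module, batch_first: bool = True):
        super().__init__()
        self._module = module
        self.hooks = []
        for m in module.modules():
            if isinstance(m, (nn.Linear, nn.Conv2d)):  # illustrative subset of layers
                self.hooks.append(m.register_forward_hook(self._capture_activations))
                self.hooks.append(
                    m.register_full_backward_hook(
                        partial(self._capture_backprops, batch_first=batch_first)
                    )
                )

    @staticmethod
    def _capture_activations(module, inputs, output):
        # Stash the layer's input activations on the submodule.
        module._activations = inputs[0].detach()

    @staticmethod
    def _capture_backprops(module, grad_input, grad_output, batch_first=True):
        # Stash grad_output, delivered with the documented full-hook semantics.
        module._backprops = grad_output[0].detach()

    def forward(self, x):
        return self._module(x)


# After backward(), each hooked layer holds both tensors a per-sample
# gradient computation needs.
model = MiniGradSampleWrapper(nn.Sequential(nn.Linear(10, 5), nn.ReLU(), nn.Linear(5, 1)))
model(torch.randn(4, 10)).sum().backward()
print(model._module[0]._activations.shape, model._module[0]._backprops.shape)
```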

Environment

Additional context

I would be happy to contribute and implement this; I just need a bit of direction on what should be changed. Thanks :)

EnayatUllah commented 4 weeks ago

Thanks for flagging this! Replacing register_backward_hook with register_full_backward_hook is indeed one of the planned changes, and it will be pushed to Opacus soon!