JunyiZhu-AI / RandomSparsification

Improving Differentially Private SGD via Randomly Sparsified Gradients [Accepted by TMLR]

Request for Help: Implementation of Random Sparsification with ConvNet #1

Closed ParthS007 closed 1 month ago

ParthS007 commented 2 months ago

I am training a ConvNet model on OCT data and analysing the privacy spent using Opacus while implementing Random Sparsification. The implementation here uses BackPACK to compute per-example gradients, specifically via the BatchGrad extension (roughly as sketched below).
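
A minimal sketch of the pipeline as I understand it, for context only. The toy model, data shapes, and keep rate below are placeholders of my own, not the repo's actual configuration:

```python
import torch
from backpack import backpack, extend
from backpack.extensions import BatchGrad

# Toy model and data, purely illustrative (not the repo's ConvNet or OCT data).
model = extend(torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(28 * 28, 10)))
loss_fn = extend(torch.nn.CrossEntropyLoss())

x = torch.randn(32, 1, 28, 28)
y = torch.randint(0, 10, (32,))

loss = loss_fn(model(x), y)
with backpack(BatchGrad()):
    loss.backward()  # populates p.grad_batch with per-example gradients

keep_rate = 0.5  # assumed sparsification rate, chosen arbitrarily here
for p in model.parameters():
    per_example = p.grad_batch                        # shape: [batch_size, *p.shape]
    mask = (torch.rand_like(p) < keep_rate).float()   # one random mask shared across the batch (one possible choice)
    sparsified = per_example * mask                   # randomly sparsified per-example gradients
    # ... per-example clipping and noise addition (DP-SGD) would follow here
```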

I am blocked on testing the paper's approach with my model.

Please reproduce using this repo: Github

Steps to reproduce the behavior:

  1. python code/train_with_rs_opacus.py

Screenshot

[Screenshot of the error traceback, 2024-06-06 17:21:29]

Warning in the logs:

Using a non-full backward hook when the forward contains multiple autograd Nodes is deprecated and will be removed in future versions. This hook will be missing some grad_input. Please use register_full_backward_hook to get the documented behavior.
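
For what it's worth, this warning refers to PyTorch's deprecated module-level backward hooks. A minimal illustration of the two APIs is below; the hook function and toy module are my own, not taken from this repo or from Opacus/BackPACK internals. If the warning originates from hooks registered inside those libraries, it is likely informational for the training script rather than something to change in user code.

```python
import torch.nn as nn

conv = nn.Conv2d(3, 16, kernel_size=3)

def hook_fn(module, grad_input, grad_output):
    # Inspect gradients flowing through the module; return None to leave them unchanged.
    return None

# Deprecated style that triggers the warning (may report an incomplete grad_input):
# conv.register_backward_hook(hook_fn)

# Recommended replacement with the documented behaviour:
handle = conv.register_full_backward_hook(hook_fn)
```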

Environment

Any pointers toward resolving this error would be great! Thanks :)

JunyiZhu-AI commented 2 months ago

Thank you for reaching out!

I apologize for not being able to immediately identify the source of the error you're encountering. It appears to be related to the per-example gradient computation. Please note that the version of BackPACK I was using only supports specific architectures. Given the specifics of your issue, I recommend consulting the BackPACK or Opacus teams, as they will likely be able to provide more specialized assistance.
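
As a quick sanity check (a sketch under my own assumptions, not code from this repository): you could compare the per-example gradients BackPACK reports against gradients computed one sample at a time. If `grad_batch` is missing or the values disagree, the ConvNet likely contains a layer that the installed BackPACK version does not support.

```python
import torch
from backpack import backpack, extend
from backpack.extensions import BatchGrad

def check_batchgrad(model, x, y, atol=1e-5):
    """Compare BackPACK's per-example gradients with a one-sample-at-a-time loop.

    Uses sum reduction so both paths compute the same per-sample quantity.
    """
    model = extend(model)
    loss_fn = extend(torch.nn.CrossEntropyLoss(reduction="sum"))

    # Per-example gradients via BackPACK.
    model.zero_grad()
    with backpack(BatchGrad()):
        loss_fn(model(x), y).backward()
    grad_batch = [p.grad_batch.detach().clone() for p in model.parameters()]

    # Reference per-example gradients via individual backward passes.
    for i in range(x.shape[0]):
        model.zero_grad()
        loss_fn(model(x[i : i + 1]), y[i : i + 1]).backward()
        for gb, p in zip(grad_batch, model.parameters()):
            if not torch.allclose(gb[i], p.grad, atol=atol):
                return False  # mismatch: a layer is likely unsupported
    return True
```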