pytorch / opacus

Training PyTorch models with differential privacy
https://opacus.ai
Apache License 2.0

TypeError: capture_backprops_hook() #195

Closed · adityapribadi3 closed this 2 years ago

adityapribadi3 commented 3 years ago

I ran my Python notebook, which uses Opacus for machine learning, and received this error: `TypeError: capture_backprops_hook() got an unexpected keyword argument 'inputs'`

JohnlNguyen commented 3 years ago

Can you add more context? Please share your notebook, code, and dataset.

adityapribadi3 commented 3 years ago

Hi John, I tried downgrading Opacus to version 0.12.0, but I now get an error in all of my notebook projects. I don't know whether the problem is in how I attach Opacus or in the library itself. Here is one example notebook: https://colab.research.google.com/drive/15Y5jjYF9XdJO4ArAMuW9oa_uWQIPwxbH?usp=sharing and here is another notebook with the same error: https://colab.research.google.com/drive/1IX2d1-OFWAjT0NFzCc_UUP5BBnMJH28A?usp=sharing Thank you for your help.

gcormode commented 3 years ago

Hi Aditya,

I've worked through both of the notebooks you sent and get some errors, although not the TypeError one.

It seems like the main issue is different versions of the libraries interacting with each other. It might be a question of finding a combination that plays together nicely.

If I use torch 1.4.0 and syft 0.2.9 (`!pip install torch==1.4.0`, `!pip install syft==0.2.9`), I can train the models fine without privacy. When I uncomment the PrivacyEngine statements, I get an error in both notebooks:

> The following layers do not have gradients: ['weight', 'bias']. Are you sure they were included in the backward pass?

which seems like it could be fixable from here.
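(For context, the PrivacyEngine statements referred to here look roughly like the following. This is a minimal sketch using the legacy opacus 0.x API; the model, optimizer, and privacy hyperparameters are placeholders rather than the notebooks' actual values.)

```python
import torch
from opacus import PrivacyEngine

# Placeholder model and optimizer; the notebooks use their own federated models.
model = torch.nn.Linear(16, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

# Legacy opacus 0.x pattern: build the engine and attach it to the optimizer.
privacy_engine = PrivacyEngine(
    model,
    batch_size=64,
    sample_size=60_000,
    alphas=[1 + x / 10.0 for x in range(1, 100)],
    noise_multiplier=1.0,
    max_grad_norm=1.0,
)
privacy_engine.attach(optimizer)

# Commenting out the two PrivacyEngine statements above is what makes
# training run again without differential privacy.
```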

Is your code based on a tutorial or template from elsewhere, so we can see that context too?

Note also that there are some known issues with running Syft in Colab: see the thread https://github.com/OpenMined/PySyft/issues/3533. The work-around (restarting the runtime after the import) worked for me.

Cheers, Graham

adityapribadi3 commented 3 years ago

Hi Graham, thank you for your help. Basically, I took other people's federated learning code and tried to insert Opacus to add differential privacy. Yes, that error, "The following layers do not have gradients: ['weight', 'bias']. Are you sure they were included in the backward pass?", appears with just `!pip install opacus`; it comes from the Opacus library, and if you remove the PrivacyEngine the code works fine. The documentation says we only need to add the PrivacyEngine for it to work, but the problem is apparently not that simple. I have also tried restarting the runtime, but it still doesn't work. Have you tried changing anything to get the notebooks working?

gcormode commented 3 years ago

Thanks for this. We are looking into the issue. It may be due to an unanticipated interaction between Opacus and PySyft. We will update this thread if we find a resolution, but this may take some time.

ffuuugor commented 3 years ago

Hi @adityapribadi3, thanks for reporting the issue.

The issue seems to be related to how PySyft wraps tensors and how its wrapper object (PointerTensor) handles torch hooks. Our per-sample gradient computation is implemented through PyTorch's backward hook mechanism, but the hooks don't seem to be called on PointerTensors.
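To illustrate the mechanism, here is a minimal sketch (not opacus' actual implementation; `capture_hook` below is a stand-in for `capture_backprops_hook`):

```python
import torch
import torch.nn as nn

def capture_hook(module, grad_input, grad_output):
    # opacus uses hooks like this to record backprops and compute per-sample gradients
    print(module.__class__.__name__, "grad_output shape:", grad_output[0].shape)

layer = nn.Linear(4, 2)
layer.register_backward_hook(capture_hook)  # the backward-hook mechanism opacus 0.x relies on

x = torch.randn(8, 4)        # a batch of 8 samples
layer(x).sum().backward()    # the hook fires here: prints torch.Size([8, 2])
```

When the backward pass runs through a wrapper such as PySyft's PointerTensor (e.g. on a remote worker), this hook is never invoked, so opacus never sees the backprops and reports that the layers have no gradients.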

Unfortunately, pysyft==0.2.x is no longer actively maintained (FAQ 0.2.x ➡️ 0.3.x), so compatibility with this version is not a priority for the opacus team. We acknowledge this as a missing feature, but it's unlikely to be prioritised by us. That said, we'd appreciate an external contribution here.

For pysyft>=0.3.0, opacus seems to work fine (see, for example, this demo).

romovpa commented 2 years ago

To sum up: the problem is with an outdated version of the other library, and things work fine with the newer version. I'm closing this since there has been no further feedback. Please feel free to create a new issue or reopen this one with additional information.