pytorch / opacus

Training PyTorch models with differential privacy
https://opacus.ai
Apache License 2.0

How to fetch the original model again? #582

Closed: TsingZ0 closed this issue 1 year ago

TsingZ0 commented 1 year ago
privacy_engine = PrivacyEngine()
# make_private returns DP-enabled wrappers around the module, optimizer, and data loader
model_dp, optimizer_dp, data_loader_dp = privacy_engine.make_private(
    module=model,
    optimizer=optimizer,
    data_loader=data_loader,
    noise_multiplier=1.1,
    max_grad_norm=1.0,
)

🚀 Feature

Can we convert the trained model_dp back into the corresponding trained (unwrapped) model?

Motivation

When I integrate model_dp with downstream networks, unpredictable issues occur, for example (some details are redacted for privacy):

  File "/data/home/zjq/miniconda3/envs/fl_torch/lib/python3.10/site-packages/torch/autograd/__init__.py", line 276, in grad
    return Variable._execution_engine.run_backward(  # Calls into the C++ engine to run the backward pass
  File "/data/home/zjq/miniconda3/envs/fl_torch/lib/python3.10/site-packages/opacus/grad_sample/grad_sample_module.py", line 326, in capture_backprops_hook
    activations, backprops = self.rearrange_grad_samples(
  File "/data/home/zjq/miniconda3/envs/fl_torch/lib/python3.10/site-packages/opacus/grad_sample/grad_sample_module.py", line 388, in rearrange_grad_samples
    activations = module.activations.pop()
IndexError: pop from empty list

Converting model_dp back into the original model would provide a general solution.

alexandresablayrolles commented 1 year ago

model_dp._module contains the original model. Note that you can also keep a reference to model independently of model_dp, since the wrapper holds the same underlying module rather than a copy.
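
To illustrate why this works, here is a minimal, self-contained sketch of the wrapper pattern described above. This is not Opacus's actual implementation; GradSampleWrapper and ToyModule are hypothetical stand-ins showing that a wrapper like model_dp holds the original module object (not a copy), so training through the wrapper updates the same parameters the original handle sees:

```python
class GradSampleWrapper:
    """Toy stand-in for Opacus's wrapper: it stores the original module as _module."""
    def __init__(self, module):
        self._module = module  # same object, not a copy


class ToyModule:
    """Toy stand-in for an nn.Module with some trainable parameters."""
    def __init__(self):
        self.weight = [1.0, 2.0]


model = ToyModule()
model_dp = GradSampleWrapper(model)  # analogous to make_private(...)

# "Training" model_dp mutates the shared parameters...
model_dp._module.weight[0] = 3.0

# ...so the original handle already reflects the trained weights:
assert model.weight == [3.0, 2.0]

# Unwrapping returns the very same object, not a clone:
assert model_dp._module is model
```

In other words, after training you can either read model_dp._module or simply keep using the model variable you passed into make_private; both point at the same trained module.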