🚀 Feature: Can we turn a trained `model_dp` back into the corresponding trained `model`?
Motivation
When I integrate `model_dp` with downstream networks, unpredictable issues occur, such as the following (some frames are omitted for privacy):
```
File "/data/home/zjq/miniconda3/envs/fl_torch/lib/python3.10/site-packages/torch/autograd/__init__.py", line 276, in grad
    return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
File "/data/home/zjq/miniconda3/envs/fl_torch/lib/python3.10/site-packages/opacus/grad_sample/grad_sample_module.py", line 326, in capture_backprops_hook
    activations, backprops = self.rearrange_grad_samples(
File "/data/home/zjq/miniconda3/envs/fl_torch/lib/python3.10/site-packages/opacus/grad_sample/grad_sample_module.py", line 388, in rearrange_grad_samples
    activations = module.activations.pop()
IndexError: pop from empty list
```
Turning `model_dp` back into a plain `model` may provide a universal workaround.
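One possible direction (a sketch, not a confirmed Opacus API walkthrough): Opacus's `GradSampleModule` wraps the original module, so its `state_dict()` keys carry a `_module.` prefix. Stripping that prefix and loading the result into a fresh, unwrapped copy of the original model should recover a standalone trained model. The snippet below illustrates the prefix-stripping step with plain Python dicts so it runs without Opacus installed; with Opacus available, recent versions also expose a `to_standard_module()` method on the wrapper that may serve the same purpose.

```python
def strip_wrapper_prefix(state_dict, prefix="_module."):
    """Remove the key prefix added by a wrapping module (e.g. Opacus's
    GradSampleModule) so the state dict can be loaded into the
    original, unwrapped model."""
    return {
        (k[len(prefix):] if k.startswith(prefix) else k): v
        for k, v in state_dict.items()
    }

# Hypothetical state dict, shaped like one saved from a wrapped model_dp.
wrapped = {
    "_module.fc.weight": [[0.1, 0.2]],
    "_module.fc.bias": [0.0],
}
plain = strip_wrapper_prefix(wrapped)
# `plain` now has keys "fc.weight" and "fc.bias"; in a real workflow you
# would pass it to model.load_state_dict(plain) on an unwrapped model.
```

This keeps the learned weights while dropping the per-sample-gradient hooks that appear to trigger the `pop from empty list` error downstream.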