Closed: mcps5601 closed this issue 2 years ago
Thank you for reporting the issue. We are looking at it.
We are pushing out a fix for this. Thanks for flagging.
The example now works! We use `.zero_grad()` to reset the hooks. Take a look in the code!
Hi, I'm not sure this is really resolved. I'm still seeing an error where the hooks don't seem to be reset. My code looks something like this:
```python
g_opt = torch.optim.Adam()
d_opt = torch.optim.Adam()

privacy_engine = PrivacyEngine(
    hider.discriminator,
    batch_size * 2,  # 128 for each real and fake
    len(data),
    alphas=[10, 100],
    noise_multiplier=1.3,
    max_grad_norm=1.0
)
privacy_engine.attach(d_opt)

for iteration in trange(iterations + 1):
    # Training Generator
    for _ in range(2):
        Z = get_batch_noise(batch_size)
        X_hat = Generator(Z)
        loss = Discriminator(X_hat, Y_fake)
        loss.backward()
        g_opt.step()

    # Training Discriminator
    d_opt.zero_grad()
    X = get_batch_data(batch_size)
    Z = get_batch_noise(batch_size)
    X_hat = Generator(Z)
    true_loss = Discriminator(X, Y_true)
    fake_loss = Discriminator(X_hat, Y_fake)
    loss = true_loss + fake_loss
    loss.backward()
    d_opt.step()
```
And I received the error below even after `d_opt.zero_grad()`:
```
ValueError: PrivacyEngine expected a batch of size 256 but received a batch of size 512
```
Am I missing something here?
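For intuition, here is a pure-Python toy model of the failure mode (illustration only; `ToyClipper` and its methods are invented stand-ins, not Opacus classes). The two generator sub-steps each backpropagate 128 fake samples through the hooked discriminator, and if `zero_grad()` clears `.grad` but not the clipper's per-sample accumulator, the discriminator step then sees 2*128 + 256 = 512 samples instead of the expected 256:

```python
# Toy model of per-sample gradient accumulation (illustration only;
# ToyClipper is a hypothetical stand-in, not part of Opacus).
class ToyClipper:
    def __init__(self, expected_batch_size):
        self.expected_batch_size = expected_batch_size
        self.accumulated = 0  # number of per-sample grads seen so far

    def on_backward(self, batch_size):
        # Hooks fire on every backward pass through the module.
        self.accumulated += batch_size

    def zero_grad(self):
        # Clears .grad but (in this toy) NOT the per-sample accumulator.
        pass

    def reset(self):
        # What the virtual_step()/pre_step() workaround effectively does.
        self.accumulated = 0

    def step(self):
        if self.accumulated != self.expected_batch_size:
            raise ValueError(
                f"expected a batch of size {self.expected_batch_size} "
                f"but received a batch of size {self.accumulated}"
            )
        self.accumulated = 0


clipper = ToyClipper(expected_batch_size=256)

# Generator phase: two sub-steps, each backprops 128 fake samples
# through the (hooked) discriminator.
for _ in range(2):
    clipper.on_backward(128)

# Discriminator phase: zero_grad() does not clear the accumulator...
clipper.zero_grad()
clipper.on_backward(128)  # real batch
clipper.on_backward(128)  # fake batch
try:
    clipper.step()  # 2*128 + 256 = 512 != 256, so this raises
except ValueError as e:
    print(e)

# Explicitly clearing the accumulator between phases avoids the mismatch.
clipper.reset()
for _ in range(2):
    clipper.on_backward(128)
clipper.reset()
clipper.on_backward(128)
clipper.on_backward(128)
clipper.step()  # 256 == 256, OK
```

This is only a mental model of the accumulation, but it reproduces the reported 512-vs-256 count exactly.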
Hi,
I see that you are not passing the parameters to the optimizer; could that be it?

```python
g_opt = torch.optim.Adam()
d_opt = torch.optim.Adam()
```
Hi,
I did pass the parameters in my original code. I resolved it by adding the following code:

```python
# For cleaning up previous hooks
d_opt.virtual_step()
privacy_engine.clipper.pre_step()
d_opt.zero_grad()
```
Although I'm facing an issue with the `d_opt.virtual_step()`:
```
Traceback (most recent call last):
  File "/home/bird/Documents/Code/Implementations/TimeGAN_PyTorch/hider/timegan/utils.py", line 143, in joint_trainer
    d_opt.virtual_step()
  File "/home/bird/Documents/Code/Implementations/TimeGAN_PyTorch/venv/lib/python3.8/site-packages/opacus/privacy_engine.py", line 209, in virtual_step
    self.privacy_engine.virtual_step()
  File "/home/bird/Documents/Code/Implementations/TimeGAN_PyTorch/venv/lib/python3.8/site-packages/opacus/privacy_engine.py", line 371, in virtual_step
    self.clipper.clip_and_accumulate()
  File "/home/bird/Documents/Code/Implementations/TimeGAN_PyTorch/venv/lib/python3.8/site-packages/opacus/per_sample_gradient_clip.py", line 180, in clip_and_accumulate
    all_norms = calc_sample_norms(
  File "/home/bird/Documents/Code/Implementations/TimeGAN_PyTorch/venv/lib/python3.8/site-packages/opacus/utils/tensor_utils.py", line 44, in calc_sample_norms
    norms = [torch.stack(norms, dim=0).norm(2, dim=0)]
RuntimeError: stack expects each tensor to be equal size, but got [128] at entry 0 and [256] at entry 4
```
Which I believe is similar to the issues raised in several other reports and is currently being resolved in #31. Still searching for workarounds though (fingers crossed).
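The `RuntimeError` above has a related flavor: the clipper stacks one per-sample-norm vector per parameter, and if stale grad samples from an earlier backward pass are still attached to some parameters, the vectors have unequal leading dimensions ([128] vs [256]) and cannot be stacked. A pure-Python sketch of that invariant (`stack_sample_norms` is an invented helper, not Opacus code):

```python
def stack_sample_norms(norms_per_param):
    """Toy version of stacking per-parameter, per-sample norm vectors.

    Each entry holds one norm per sample; stacking only works if every
    parameter saw the same number of samples.
    """
    sizes = {len(n) for n in norms_per_param}
    if len(sizes) > 1:
        raise RuntimeError(
            f"stack expects each tensor to be equal size, got sizes {sorted(sizes)}"
        )
    # Transpose into one row per sample across parameters.
    return [list(column) for column in zip(*norms_per_param)]


# Parameters 0-3 accumulated 128 per-sample norms each, but parameter 4
# kept an extra stale 128 from a previous backward pass, giving 256.
norms = [[0.1] * 128, [0.1] * 128, [0.1] * 128, [0.1] * 128, [0.1] * 256]
try:
    stack_sample_norms(norms)
except RuntimeError as e:
    print(e)
```

The toy check mirrors why clearing the leftover per-sample state before the discriminator step makes the sizes line up again.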
I have the same issue when I use Opacus with this architecture. Do you have any solution for now?
I'm closing this because since the original issue was raised, the Opacus v1.0 API was introduced. @Iron-head If you still have this problem, please provide us with the code to reproduce it and reopen the issue.
https://github.com/pytorch/opacus/blob/01fe6146c3f705e2da630b9a7cf356c9a36d3177/examples/dcgan.py#L331
Dear Authors,
I am trying to use Opacus with a GAN, so I checked the DCGAN example in the Opacus repo. However, `clear_backprops` is no longer available in the current version; it appears to have been removed in https://github.com/pytorch/opacus/pull/16/files#.
How do I modify the code to run the GAN model?
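One workaround often suggested in Opacus issue threads (an assumption here, not an official API guarantee) is to manually delete the stale per-sample gradients that the hooks attach to each parameter as `grad_sample` after a backward pass you do not want the privacy engine to count, e.g. the generator's backward through the discriminator. The sketch below uses a minimal fake parameter class so it runs without torch; with a real model you would iterate `discriminator.parameters()` instead:

```python
class FakeParam:
    """Minimal stand-in for torch.nn.Parameter, for illustration only."""
    pass


def clear_grad_samples(params):
    # Drop per-sample gradients ("grad_sample") left over from a
    # backward pass the privacy engine should not count.
    for p in params:
        if hasattr(p, "grad_sample"):
            del p.grad_sample


params = [FakeParam() for _ in range(3)]
for p in params:
    p.grad_sample = [0.0] * 128  # pretend these are stale per-sample grads

clear_grad_samples(params)
print(all(not hasattr(p, "grad_sample") for p in params))  # True
```

Whether `grad_sample` is the attribute your installed Opacus version uses should be verified against its source; the loop itself is safe to call even when no parameter carries the attribute.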