Closed: mshubhankar closed this issue 4 months ago
This problem arises when using functorch on a model that already has Opacus hooks attached. Since both functorch and the hooks serve the same purpose of computing per-sample gradients, there is no need for them to coexist. The fix is to use the "no_op" grad sample mode, which skips adding hooks:
model, optimizer, train_loader = privacy_engine.make_private(
    module=model,
    optimizer=optimizer,
    data_loader=train_loader,
    noise_multiplier=args.sigma,
    max_grad_norm=max_grad_norm,
    clipping=clipping,
    grad_sample_mode="no_op",  # avoid adding hooks
)
Synced offline and proved the fix worked.
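For completeness, a rough sketch of what the training step can look like with grad_sample_mode="no_op": per-sample gradients are computed with functorch and written to p.grad_sample before the DP optimizer step. criterion, images, and labels are placeholder names, and the zip assumes make_functional returns parameters in the same order as model.parameters():

from functorch import make_functional, vmap, grad

# model here is the wrapper returned by make_private with grad_sample_mode="no_op"
fmodel, params = make_functional(model)

def compute_loss(params, sample, target):
    # add a leading batch dimension of 1 so the model sees a batched input
    prediction = fmodel(params, sample.unsqueeze(0))
    return criterion(prediction, target.unsqueeze(0))

# vmap over the batch dimension yields one gradient per sample
per_sample_grads = vmap(grad(compute_loss), in_dims=(None, 0, 0))(params, images, labels)

# hand the per-sample gradients to the DP optimizer, which clips and noises them
for p, g in zip(model.parameters(), per_sample_grads):
    p.grad_sample = g
optimizer.step()
optimizer.zero_grad()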
🐛 Bug
I am trying to implement augmentation multiplicity (as implemented in Paper 1 and Paper 2) using Opacus's new functorch functionality. I am following the exact steps pointed out by @alexandresablayrolles in #455 and #575. However, I am hitting a bug on the
predictions = fmodel(params, batch)
line, which raises AttributeError: 'Tensor' object has no attribute '_forward_counter'.
My intuition is that the _forward_counter attribute should be added to the model when calling make_functional(), but some code change may have broken this. Any help is appreciated. Thanks!
Colab reproducible link
To Reproduce
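A minimal sketch of the pattern described above, as I understand it: make_private with the default (hooks-based) grad sample mode, followed by make_functional on the returned model. criterion, images, and labels are illustrative placeholders; the full repro is in the Colab link.

from functorch import make_functional, vmap, grad

model, optimizer, train_loader = privacy_engine.make_private(
    module=model,
    optimizer=optimizer,
    data_loader=train_loader,
    noise_multiplier=args.sigma,
    max_grad_norm=max_grad_norm,
)  # default grad_sample_mode ("hooks") attaches Opacus hooks to the model

fmodel, params = make_functional(model)  # params are plain Tensors without Opacus attributes

def compute_loss(params, batch, targets):
    # fails here: AttributeError: 'Tensor' object has no attribute '_forward_counter'
    predictions = fmodel(params, batch)
    return criterion(predictions, targets)

per_sample_grads = vmap(grad(compute_loss), in_dims=(None, 0, 0))(params, images, labels)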
Expected behavior
I would expect the forward function to work with the new functorch functionality.