Closed: tklausen closed this issue 1 month ago
Thanks for contributing to Opacus! Great catch! Let me launch a fix. We need to guarantee that the input to all of the linear layers inside DPMultiheadAttention has batch_size as the first dimension when `batch_first=True`.
Closed the issue since we launched the fix in https://github.com/pytorch/opacus/pull/651
🐛 Bug
Context
Both PrivacyEngine and DPMultiheadAttention accept the bool argument `batch_first`, which indicates whether the batch dimension is the first or second dimension. In the case of the PrivacyEngine, this argument is passed down to the GradSampleModule, which ensures that the batch dimension is always the first dimension in `.grad_samples` (= per-sample gradients; see rearrange_grad_samples), so that the `grad_samples` can be used by the DPOptimizer.
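For reference, a minimal sketch of that contract (the wrapped layer and shapes here are illustrative, not taken from the issue):

```python
import torch
import torch.nn as nn
from opacus.grad_sample import GradSampleModule

# Wrap a plain linear layer; batch_first=True tells the hooks that dim 0 of
# each activation is the batch dimension.
layer = GradSampleModule(nn.Linear(3, 2), batch_first=True)
x = torch.randn(5, 3)  # (batch=5, in_features=3)
layer(x).sum().backward()

# Per-sample gradients come out with the batch dimension first:
for p in layer.parameters():
    print(p.grad_sample.shape)  # torch.Size([5, 2, 3]), then torch.Size([5, 2])
```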
Problem

Using PrivacyEngine and DPMultiheadAttention both with `batch_first=True` mixes up the batch dimension and can throw an error. If `batch_first=True`, DPMultiheadAttention reorders the inputs to its forward method (query, key, value) so that the batch dimension is the second dimension (and the sequence dimension is the first). Therefore, the internal linear layers of DPMultiheadAttention are called with an input whose second dimension is the batch dimension. However, the GradSampleModule expects the batch dimension to be the first dimension (because `batch_first` was set to `True` in the PrivacyEngine). Thus, the computed gradients are not per-sample gradients. If the model additionally uses a layer other than DPMultiheadAttention whose input is batch-dimension-first, this even throws an error, raised during a torch.stack operation in the DPOptimizer's clip_and_accumulate method.
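To make the dimension mixup concrete, a small illustration with assumed shapes:

```python
import torch

batch_size, seq_len, embed_dim = 4, 10, 16
x = torch.randn(batch_size, seq_len, embed_dim)  # what the user passes in
x_internal = x.transpose(0, 1)                   # what the inner linears see

# GradSampleModule's hooks take dim 0 of each activation as the per-sample dim:
assert x_internal.shape[0] == seq_len  # 10, not batch_size == 4

# So the inner linears' grad_sample tensors get a leading dim of seq_len (10),
# while a batch-first layer such as nn.Linear applied to x gets batch_size (4).
# DPOptimizer.clip_and_accumulate then fails when it stacks per-sample norms
# whose leading dimensions disagree.
```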
To Reproduce

See Colab.

1. Set up the PrivacyEngine with `batch_first=True`.
2. Build a model containing
   a. a DPMultiheadAttention layer with `batch_first=True`, and
   b. one other layer such as nn.Linear (see the sketch after this list).
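A minimal sketch of these steps (the model, shapes, and hyperparameters are assumptions, not the linked Colab's actual code):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.utils.data import DataLoader, TensorDataset
from opacus import PrivacyEngine
from opacus.layers import DPMultiheadAttention


class Net(nn.Module):
    def __init__(self, embed_dim=16, num_heads=2):
        super().__init__()
        # batch_first=True: forward expects (batch, seq, embed)
        self.attn = DPMultiheadAttention(embed_dim, num_heads, batch_first=True)
        self.linear = nn.Linear(embed_dim, 1)  # the "one other layer"

    def forward(self, x):
        out, _ = self.attn(x, x, x)  # self-attention
        return self.linear(out)


model = Net()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
dataset = TensorDataset(torch.randn(8, 4, 16), torch.randn(8, 4, 1))
loader = DataLoader(dataset, batch_size=4)

# batch_first=True here as well, so GradSampleModule assumes dim 0 is the batch
model, optimizer, loader = PrivacyEngine().make_private(
    module=model,
    optimizer=optimizer,
    data_loader=loader,
    noise_multiplier=1.0,
    max_grad_norm=1.0,
    batch_first=True,
)

for x, y in loader:
    optimizer.zero_grad()
    F.mse_loss(model(x), y).backward()
    optimizer.step()  # error from torch.stack in clip_and_accumulate
```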
Stack trace:
Expected Behavior
The per-sample gradients are computed correctly and no error is thrown if `batch_first` has the same value in both PrivacyEngine and DPMultiheadAttention.

For `batch_first=False`, no changes are required. For `batch_first=True`, the DPMultiheadAttention layer should call its internal linear layers with an input whose first dimension is the batch dimension.
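One possible shape of such a fix, as a hedged sketch only (the helper name and layout handling are assumptions; this is not the actual change from PR #651):

```python
import torch
import torch.nn as nn

def project_batch_first(x: torch.Tensor, proj: nn.Linear, batch_first: bool) -> torch.Tensor:
    # With batch_first=True, run the hooked projection while x is still
    # (batch, seq, embed) so GradSampleModule sees the batch in dim 0, and
    # only afterwards move to (seq, batch, embed) for the attention math.
    if batch_first:
        return proj(x).transpose(0, 1)
    return proj(x)  # input is already (seq, batch, embed)
```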
Environment

opacus: 1.4.1
pytorch: 2.2.1
Other packages should not be relevant as this is a pure Opacus bug.
Additional context
This issue may be related to #505, but I can't confirm this, as the source code for that issue seems to have been deleted.