sagerkudrick opened this issue 1 year ago
Same issue here. Did you address this error? :) @SagerKudrick
Hey @marlowe518, I did. The problem was with this:
```python
model, optimizer, data_loader = privacy_engine.make_private_with_epsilon(
    module=model,
    optimizer=optimizer,
    data_loader=data_loader,
    epochs=10,
    target_epsilon=5,
    target_delta=0.0001,
    max_grad_norm=255,
    batch_first=True,
)
```
The batch_first flag controls the expected input layout: with batch_first=True the input tensor is expected to be shaped [batch_size, ...], and with batch_first=False it is [K, batch_size, ...]. This changes how Opacus interprets the input tensor to the model, which was throwing off the positional argument. I was able to solve this by setting batch_first=False, as in the sketch below.
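For reference, here's the call that worked for me; a minimal sketch with the same hyperparameters, only batch_first changed:

```python
model, optimizer, data_loader = privacy_engine.make_private_with_epsilon(
    module=model,
    optimizer=optimizer,
    data_loader=data_loader,
    epochs=10,
    target_epsilon=5,
    target_delta=0.0001,
    max_grad_norm=255,
    batch_first=False,  # the batch dimension is no longer assumed to come first
)
```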
I'm not entirely sure whether Opacus supports graphs, though. Validating the model with PrivacyEngine says our GCN model is valid (a sketch of the check I mean is after the traceback), but we're running into a new error here:
File "c:\Users\me\Desktop\github\opacus_graph\tds.py", line 88, in <module>
loss.backward()
File "C:\Users\me\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\_tensor.py", line 487, in backward
torch.autograd.backward(
File "C:\Users\me\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\autograd\__init__.py", line 200, in backward
Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
File "C:\Users\me\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\module.py", line 69, in __call__
return self.hook(module, *args, **kwargs)
File "C:\Users\me\AppData\Local\Programs\Python\Python310\lib\site-packages\opacus\grad_sample\grad_sample_module.py", line 337, in capture_backprops_hook
grad_samples = grad_sampler_fn(module, activations, backprops)
File "C:\Users\me\AppData\Local\Programs\Python\Python310\lib\site-packages\opacus\grad_sample\functorch.py", line 58, in ft_compute_per_sample_gradient
per_sample_grads = layer.ft_compute_sample_grad(
File "C:\Users\me\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\_functorch\vmap.py", line 426, in wrapped
batch_size, flat_in_dims, flat_args, args_spec = _process_batched_inputs(in_dims, args, func)
File "C:\Users\me\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\_functorch\vmap.py", line 119, in _process_batched_inputs
return _validate_and_get_batch_size(flat_in_dims, flat_args), flat_in_dims, flat_args, args_spec
File "C:\Users\me\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\_functorch\vmap.py", line 52, in _validate_and_get_batch_size
raise ValueError(
ValueError: vmap: Expected all tensors to have the same size in the mapped dimension, got sizes [16, 7] for the mapped dimension
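For reference, this is the kind of validity check I mean; a minimal sketch using Opacus's ModuleValidator (which I believe is what PrivacyEngine relies on, though that's an assumption on my part):

```python
from opacus.validators import ModuleValidator

# Returns a list of incompatibility errors; an empty list means the model passes.
errors = ModuleValidator.validate(model, strict=False)
print(errors)  # [] for our GCN, i.e. Opacus considers the model valid
```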
We're using the default DataLoader (from torch_geometric.loader import DataLoader), and our loader looks like this:

```python
data_loader = DataLoader(dataset, batch_size=32, shuffle=False)
```

(Using the DataLoader from torch.utils.data and the one from torch_geometric.loader results in the same error.)
Our dataset is:

```python
from torch_geometric.datasets import Planetoid

dataset = Planetoid(root='/tmp/Cora', name='Cora')
data = dataset[0].to(device)
```
And our trainer:

```python
for epoch in range(10):
    for batch in data_loader:
        print("batch ", batch)
        optimizer.zero_grad()
        out = model(batch)
        out = out.to(device)  # Tensor.to() is not in-place, so assign the result
        loss = F.nll_loss(out, batch.y)
        loss.backward()
        optimizer.step()
```
I get similar behavior when I wrap one of my models with GradSampleModule(). Were you able to solve this issue? It doesn't work with batch_first=False either, when I use GradSampleModule(model, batch_first=False).
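For context, the wrapping pattern I mean is roughly this; a minimal sketch where model, batch, and the loss stand in for my own objects:

```python
from opacus.grad_sample import GradSampleModule

wrapped = GradSampleModule(model, batch_first=False)
out = wrapped(batch)              # forward pass as usual
loss = F.nll_loss(out, batch.y)
loss.backward()                   # Opacus hooks populate p.grad_sample here

for p in wrapped.parameters():
    print(p.grad_sample.shape)    # per-sample gradients, one slice per example
```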
@SagerKudrick I have the same problem. Have you managed to solve this error?
Does Opacus work with GCNConv?
I'm attempting to use Opacus with a GCN, with the model defined as such:
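A minimal sketch of this kind of model, assuming the standard two-layer GCNConv setup for Cora (the names and sizes here are illustrative, not the exact original code):

```python
import torch.nn.functional as F
from torch.nn import Module
from torch_geometric.nn import GCNConv

class GCN(Module):
    def __init__(self, num_features, num_classes, hidden=16):
        super().__init__()
        self.conv1 = GCNConv(num_features, hidden)
        self.conv2 = GCNConv(hidden, num_classes)

    def forward(self, data):
        x, edge_index = data.x, data.edge_index
        x = F.relu(self.conv1(x, edge_index))
        x = self.conv2(x, edge_index)
        # log-probabilities, to pair with F.nll_loss in the training loop
        return F.log_softmax(x, dim=1)
```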
When training, however, I'm running into an error occurring within the loss.backward() call. It's worth noting that training and evaluating work regularly, but after wrapping the model for Opacus and training, the new model begins to throw the error.
Thank you!