zademn / PU-GCN-pytorch

Rewriting the PU-GCN paper in pytorch and enhancing it
MIT License

Training on multiple GPUs #2

Open jukieCheung opened 4 months ago

jukieCheung commented 4 months ago

First of all, thank you for maintaining the code. I would like to train on my own dataset using multiple GPUs, but I run into the following error, which is very strange.
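For context, the multi-GPU part is just plain `torch.nn.DataParallel` around the repo's model. A minimal reconstruction of my notebook (the names `model`, `train`, `trainloader`, `loss_fn`, `optimizer`, and `train_config` are the same objects as in the cells quoted in the traceback below; only the wrapping is new):

```python
import torch
from torch import nn

# Rough reconstruction of the setup that produces the traceback below.
model = nn.DataParallel(model)   # replicate the PU-GCN model over cuda:0 and cuda:1
model = model.to("cuda:0")

for epoch in range(1, train_config.epochs + 1):
    train_loss = train(
        model, trainloader, loss_fn, optimizer,
        gamma=(1 - epoch / train_config.epochs),
        before_refiner_loss=False,
    )
```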


```
RuntimeError                              Traceback (most recent call last)
Cell In[30], line 6
      3 history.val_loss = []
      4 for epoch in tqdm(range(1, train_config.epochs + 1)):
----> 6     train_loss = train(
      7         model, trainloader, loss_fn, optimizer, gamma=(1 - epoch / train_config.epochs), before_refiner_loss=False
      8     )
      9     # train_loss = train_w_refiner(model, trainloader, loss_fn, optimizer, alpha=0.5)
     10     history.train_loss.append(train_loss)

Cell In[27], line 16, in train(model, trainloader, loss_fn, optimizer, gamma, before_refiner_loss)
     14 # Train step
     15 optimizer.zero_grad()
---> 16 pred = model(p, batch=pbatch)
     18 # Transform to dense batches
     19 pred, _ = to_dense_batch(pred, q_batch)  # [B, N * r, 3]

File ~/miniconda3/lib/python3.10/site-packages/torch/nn/modules/module.py:1518, in Module._wrapped_call_impl(self, *args, **kwargs)
   1516     return self._compiled_call_impl(*args, **kwargs)  # type: ignore[misc]
   1517 else:
-> 1518     return self._call_impl(*args, **kwargs)

File ~/miniconda3/lib/python3.10/site-packages/torch/nn/modules/module.py:1527, in Module._call_impl(self, *args, **kwargs)
   1522 # If we don't have any hooks, we want to skip the rest of the logic in
   1523 # this function, and just call forward.
   1524 if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks
   1525         or _global_backward_pre_hooks or _global_backward_hooks
   1526         or _global_forward_hooks or _global_forward_pre_hooks):
-> 1527     return forward_call(*args, **kwargs)
   1529 try:
   1530     result = None

File ~/miniconda3/lib/python3.10/site-packages/torch/nn/parallel/data_parallel.py:185, in DataParallel.forward(self, *inputs, **kwargs)
    183     return self.module(*inputs[0], **module_kwargs[0])
    184 replicas = self.replicate(self.module, self.device_ids[:len(inputs)])
--> 185 outputs = self.parallel_apply(replicas, inputs, module_kwargs)
    186 return self.gather(outputs, self.output_device)

File ~/miniconda3/lib/python3.10/site-packages/torch/nn/parallel/data_parallel.py:200, in DataParallel.parallel_apply(self, replicas, inputs, kwargs)
    199 def parallel_apply(self, replicas: Sequence[T], inputs: Sequence[Any], kwargs: Any) -> List[Any]:
--> 200     return parallel_apply(replicas, inputs, kwargs, self.device_ids[:len(replicas)])

File ~/miniconda3/lib/python3.10/site-packages/torch/nn/parallel/parallel_apply.py:110, in parallel_apply(modules, inputs, kwargs_tup, devices)
    108 output = results[i]
    109 if isinstance(output, ExceptionWrapper):
--> 110     output.reraise()
    111 outputs.append(output)
    112 return outputs

File ~/miniconda3/lib/python3.10/site-packages/torch/_utils.py:694, in ExceptionWrapper.reraise(self)
    690 except TypeError:
    691     # If the exception takes multiple arguments, don't try to
    692     # instantiate since we don't know how to
    693     raise RuntimeError(msg) from None
--> 694 raise exception

RuntimeError: Caught RuntimeError in replica 1 on device 1.
Original Traceback (most recent call last):
  File "/root/miniconda3/lib/python3.10/site-packages/torch/nn/parallel/parallel_apply.py", line 85, in _worker
    output = module(*input, **kwargs)
  File "/root/miniconda3/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/root/miniconda3/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "/root/PU-GCN-pytorch/pugcn_lib/models.py", line 141, in forward
    x, edge_index = self.feature_extractor(
  File "/root/miniconda3/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/root/miniconda3/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "/root/PU-GCN-pytorch/pugcn_lib/feature_extractor.py", line 313, in forward
    x = self.pre_gcn(x, edge_index=edge_index_max)  # [N, 3] -> [N, C]
  File "/root/miniconda3/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/root/miniconda3/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "/root/PU-GCN-pytorch/pugcn_lib/torch_geometric_nn.py", line 140, in forward
    return self.gconv(x, edge_index=edge_index)
  File "/root/miniconda3/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/root/miniconda3/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "/root/miniconda3/lib/python3.10/site-packages/torch_geometric/nn/conv/edge_conv.py", line 61, in forward
    return self.propagate(edge_index, x=x)
  File "/root/miniconda3/lib/python3.10/site-packages/torch_geometric/nn/conv/message_passing.py", line 547, in propagate
    out = self.message(**msg_kwargs)
  File "/root/miniconda3/lib/python3.10/site-packages/torch_geometric/nn/conv/edge_conv.py", line 64, in message
    return self.nn(torch.cat([x_i, x_j - x_i], dim=-1))
  File "/root/miniconda3/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/root/miniconda3/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "/root/miniconda3/lib/python3.10/site-packages/torch_geometric/nn/models/mlp.py", line 245, in forward
    x = self.lins[-1](x)
  File "/root/miniconda3/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/root/miniconda3/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "/root/miniconda3/lib/python3.10/site-packages/torch_geometric/nn/dense/linear.py", line 147, in forward
    return F.linear(x, self.weight, self.bias)
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cuda:1! (when checking argument for argument mat1 in method wrapper_CUDA_addmm)
```

zademn commented 4 months ago

Hello. I don't have any experience training on multiple GPUs; I built and trained this repo on a single GTX 970.

Maybe this will help? The team is also usually active on their Slack. Also check their issues; maybe they have something open related to multiple GPUs.
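In case it's useful, here is a rough, untested sketch of the multi-GPU pattern that PyTorch Geometric documents: `torch_geometric.nn.DataParallel` together with `DataListLoader`, which scatter whole point clouds to each replica instead of slicing flattened tensors (which is what breaks plain `torch.nn.DataParallel` here). The model in this repo takes `(pos, batch)` rather than a single `Data` object, so a small wrapper is needed; `base_model` and `point_clouds` below are placeholders for your own objects:

```python
import torch
from torch_geometric.data import Data
from torch_geometric.loader import DataListLoader
from torch_geometric.nn import DataParallel  # PyG's graph-aware DataParallel

class PosBatchWrapper(torch.nn.Module):
    """Adapt the repo's (pos, batch) call signature to the single
    Data/Batch argument that torch_geometric.nn.DataParallel passes."""

    def __init__(self, model):
        super().__init__()
        self.model = model

    def forward(self, data):
        return self.model(data.pos, batch=data.batch)

# point_clouds: your own list of [N, 3] float tensors (placeholder)
dataset = [Data(pos=pc) for pc in point_clouds]
loader = DataListLoader(dataset, batch_size=8, shuffle=True)

# base_model: the already-constructed single-GPU PU-GCN model (placeholder)
model = DataParallel(PosBatchWrapper(base_model), device_ids=[0, 1]).to("cuda:0")

for data_list in loader:      # DataListLoader yields a Python list of Data objects
    pred = model(data_list)   # whole point clouds are scattered per GPU
```

Alternatively, `DistributedDataParallel` with one process per GPU avoids the scatter/gather entirely, but it needs more setup.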

jukieCheung commented 4 months ago

> Hello. I don't have any experience training on multiple GPUs; I built and trained this repo on a single GTX 970.
>
> Maybe this will help? The team is also usually active on their Slack. Also check their issues; maybe they have something open related to multiple GPUs.

Thank you for your help. I will try it out.