holderhe opened 1 week ago
Hi. Could you please provide more detailed error information to help us fix the corresponding bug, such as your experimental parameters and the error location?
Sorry for the late reply. I have already moved on and the logs have been wiped, so I can't upload the detailed info, but I still remember the error pointed to the MIC module in MICN, possibly here:
```python
# isometric convolution
zeros = torch.zeros((x.shape[0], x.shape[1], x.shape[2] - 1), device=self.device)
x = torch.cat((zeros, x), dim=-1)
x = self.drop(self.act(isometric(x)))
x = self.norm((x + x1).permute(0, 2, 1)).permute(0, 2, 1)
```
which you can easily reproduce with multiple GPUs:
```
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cuda:1! (when checking argument for argument tensors in method wrapper_CUDA_cat)
```
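A likely cause (an assumption, since the full traceback is gone): `self.device` is fixed at module construction, so when the model is replicated across GPUs (e.g. under `nn.DataParallel`), the replica on `cuda:1` still allocates its padding zeros on `cuda:0`, and `torch.cat` then sees tensors on two devices. A minimal sketch of the device-safe pattern, using a hypothetical `pad_isometric` helper and allocating the zeros on `x.device` instead:

```python
import torch

def pad_isometric(x: torch.Tensor) -> torch.Tensor:
    """Left-pad x along the last dim before the isometric convolution.

    Allocating the zeros on x.device (rather than a device captured at
    __init__ time) keeps both operands of torch.cat on the same device,
    so each DataParallel replica pads on its own GPU.
    """
    zeros = torch.zeros(
        (x.shape[0], x.shape[1], x.shape[2] - 1), device=x.device
    )
    return torch.cat((zeros, x), dim=-1)

# CPU demo of the shape behavior: seq_len L becomes 2*L - 1 after padding.
x = torch.randn(2, 3, 5)  # (batch, channels, seq_len)
out = pad_isometric(x)
print(out.shape)  # torch.Size([2, 3, 9])
```

The same pattern (`device=x.device`, or `torch.zeros_like`/`x.new_zeros` where shapes allow) applies anywhere a buffer is created from a device stored at init time.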