Closed: LLLYLong closed this issue 1 month ago
An in-place operation error occurs at the ReLU call in the KPA class module, at `x = self.relu(x)`. Does the author have time to check whether there is a solution to this problem?
I have tried self.relu = nn.ReLU(inplace=False), but it doesn't work. Have you solved this problem? Thanks!
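For what it's worth, `inplace=False` only changes the ReLU itself; this error can still fire when a *later* in-place op (e.g. `+=`) modifies the tensor the ReLU produced, since autograd saves that output for `ReluBackward0`. Below is a minimal sketch of the failing pattern and an out-of-place fix; the module is a hypothetical stand-in, not the actual KPA code:

```python
import torch
import torch.nn as nn

class KPALike(nn.Module):
    """Hypothetical stand-in for the real KPA module."""
    def __init__(self, dim=512):
        super().__init__()
        self.fc = nn.Linear(dim, dim)
        self.relu = nn.ReLU(inplace=False)  # inplace=False alone is not enough

    def forward(self, x):
        residual = x
        x = self.relu(self.fc(x))
        # x += residual   # in-place add mutates the saved ReLU output -> RuntimeError in backward
        x = x + residual  # out-of-place add keeps the saved tensor at version 0
        return x

x = torch.randn(4, 512, requires_grad=True)
KPALike()(x).sum().backward()  # completes without the in-place RuntimeError
```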
I solved it by switching torch to 1.8.0, and it works fine!
So happy to see such excellent work from the author. Following the previous reply, I tested torch 1.8.1 and 1.9.1 without success; both reported the same error, and I am not sure what is wrong.
```
RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.cuda.FloatTensor [1944, 17, 512]], which is output 0 of ReluBackward0, is at version 1; expected version 0 instead. Hint: enable anomaly detection to find the operation that failed to compute its gradient, with torch.autograd.set_detect_anomaly(True).
```
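As the error message itself suggests, anomaly detection can localize the problem. A self-contained sketch (toy tensors, not the project's code) that reproduces this exact `ReluBackward0` version error and shows where to enable the flag:

```python
import torch

# Enable before the failing forward/backward pair; it slows execution,
# so remove it once the offending line has been found.
torch.autograd.set_detect_anomaly(True)

x = torch.randn(3, requires_grad=True)
y = torch.relu(x)   # autograd saves y for ReluBackward0
y += 1              # in-place write bumps y's version from 0 to 1
y.sum().backward()  # fails; anomaly mode adds the forward traceback of the
                    # torch.relu call, pointing at the tensor that was later
                    # modified in place
```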