I tried to use the ResampleLoss function in my code and found that it does not work properly.
My training code uses the standard training loop:
pred = model(img).sigmoid()
loss = config.loss_func(pred, label)
optimizer.zero_grad()
loss.backward()  # the error occurs here!
optimizer.step()
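To localize which forward operation performs the in-place write, I enabled PyTorch's autograd anomaly detection. Here is a runnable sketch of the same loop shape with a tiny stand-in model and a well-behaved loss (the model, shapes, and loss here are placeholders, not my real config):

```python
import torch
import torch.nn as nn

# Anomaly detection makes a backward error also print the forward op
# that performed the offending in-place write (standard PyTorch API).
torch.autograd.set_detect_anomaly(True)

# Tiny stand-ins for model/img/label, just to show the loop shape.
model = nn.Linear(8, 15)
img = torch.randn(32, 8)
label = torch.randint(0, 2, (32, 15)).float()

pred = model(img).sigmoid()
loss = nn.functional.binary_cross_entropy(pred, label)
loss.backward()  # with a well-behaved loss this succeeds
```

With ResampleLoss substituted in, the extended traceback from anomaly mode should point at the exact in-place operation.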
I have already changed the loss to reduction = 'mean'.
Now I get the following error, which I cannot resolve:
Traceback (most recent call last):
File "/home/xxx/run.py", line 5, in <module>
train()
File "/home/xxx/train.py", line 73, in train
loss.backward()
File "/home/xxx/miniconda3/envs/test/lib/python3.8/site-packages/torch/_tensor.py", line 307, in backward
torch.autograd.backward(self, gradient, retain_graph, create_graph, inputs=inputs)
File "/home/xxx/miniconda3/envs/test/lib/python3.8/site-packages/torch/autograd/__init__.py", line 154, in backward
Variable._execution_engine.run_backward(
RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.cuda.FloatTensor [32, 15]], which is output 0 of SigmoidBackward0, is at version 1; expected version 0 instead. Hint: the backtrace further above shows the operation that failed to compute its gradient. The variable in question was changed in there or anywhere later. Good luck!
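The error says the sigmoid output was modified in place before backward, which suggests the loss function writes into the tensor it receives. I can reproduce the same class of error with a hypothetical stand-in loss (not the real ResampleLoss internals), and passing a clone avoids it:

```python
import torch

# Hypothetical stand-in loss that, like the failing case, modifies its
# input in place after the sigmoid; this reproduces the same RuntimeError.
def inplace_loss(p, target):
    p *= 2.0  # in-place write bumps the tensor's version counter
    return (p - target).pow(2).mean()

label = torch.zeros(32, 15)

pred = torch.randn(32, 15, requires_grad=True).sigmoid()
got_inplace_error = False
try:
    inplace_loss(pred, label).backward()
except RuntimeError as e:
    # SigmoidBackward0 needs its output at version 0, so backward fails
    got_inplace_error = "inplace operation" in str(e)

# Passing a clone keeps the sigmoid output untouched, so backward succeeds.
pred2 = torch.randn(32, 15, requires_grad=True).sigmoid()
loss2 = inplace_loss(pred2.clone(), label)
loss2.backward()
```

So as a workaround I could call config.loss_func(pred.clone(), label), but I am not sure whether that is the intended usage, or whether ResampleLoss instead expects raw logits (i.e. no .sigmoid() before the call) and applies sigmoid internally.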
I hope you can help me understand how to use this loss function correctly.
Thanks!