Hi! Thanks for raising this issue. Indeed, the built-in PyTorch function used by this loss involved some in-place operations, which were causing this error. It should be fixed in master since https://github.com/BloodAxe/pytorch-toolbelt/commit/68945c9421be55de7211f59c53a0965c1fde0443
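To illustrate what an out-of-place version of such a loss can look like (assuming "soft" here refers to label smoothing, i.e. cross-entropy against smoothed targets), here is a hypothetical sketch that avoids in-place tensor mutation so autograd stays happy. This is not the library's actual implementation:

```python
import torch
import torch.nn.functional as F

def soft_cross_entropy(logits, target, smooth_factor=0.1):
    """Label-smoothed ("soft") cross-entropy built only from out-of-place ops."""
    num_classes = logits.size(-1)
    log_probs = F.log_softmax(logits, dim=-1)
    # Build the smoothed target distribution without mutating any tensor that
    # autograd tracks (no .fill_() / .scatter_() on tensors saved for backward).
    with torch.no_grad():
        off_value = smooth_factor / (num_classes - 1)
        soft_targets = torch.full_like(log_probs, off_value)
        soft_targets = soft_targets.scatter(1, target.unsqueeze(1), 1.0 - smooth_factor)
    return (-soft_targets * log_probs).sum(dim=-1).mean()

logits = torch.randn(4, 5, requires_grad=True)
target = torch.randint(0, 5, (4,))
loss = soft_cross_entropy(logits, target)
loss.backward()  # no "modified by an inplace operation" error
```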
Thank you for the update. What does "soft" mean here? Could you give a technical reference for this loss?
When I use SoftCrossEntropyLoss, I get the following error:
RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation
Could anyone help me? Also, what paper proposed SoftCrossEntropyLoss?
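For reference, a minimal call pattern that exercises the backward pass looks roughly like the sketch below; the smooth_factor keyword is an assumption, so check the constructor signature in your installed pytorch-toolbelt version:

```python
import torch
from pytorch_toolbelt.losses import SoftCrossEntropyLoss

# Hypothetical minimal example; the smooth_factor keyword is an assumption and
# may differ between pytorch-toolbelt versions.
criterion = SoftCrossEntropyLoss(smooth_factor=0.1)

logits = torch.randn(8, 10, requires_grad=True)  # (batch, num_classes)
target = torch.randint(0, 10, (8,))              # integer class labels

loss = criterion(logits, target)
loss.backward()  # versions before the linked fix reportedly failed here
```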