Closed · liuyanzhi1214 closed this issue 3 years ago
It does not work:

```python
roi_feats = torch.cat([roi_feats_0, roi_feats_1, roi_feats_2, roi_feats_3], dim=1).contiguous()
```
I encountered a similar problem before; you can register a backward hook on the suspicious tensors to make grad_out contiguous.
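For reference, a minimal sketch of that suggestion applied to the concatenation from the snippet above; the `roi_feats_*` names come from that snippet, the shapes are only illustrative, and in a real model the features would come from the RoI extractor instead of `torch.randn`:

```python
import torch

# Dummy stand-ins for the per-level RoI features from the snippet above.
roi_feats_0 = torch.randn(8, 256, 7, 7, requires_grad=True)
roi_feats_1 = torch.randn(8, 256, 7, 7, requires_grad=True)
roi_feats_2 = torch.randn(8, 256, 7, 7, requires_grad=True)
roi_feats_3 = torch.randn(8, 256, 7, 7, requires_grad=True)

# Concatenate along the channel dimension, as in the snippet above.
roi_feats = torch.cat([roi_feats_0, roi_feats_1, roi_feats_2, roi_feats_3], dim=1)

# Backward hook: the gradient flowing back into roi_feats is forced to be
# contiguous before it propagates further through the graph.
roi_feats.register_hook(lambda grad: grad.contiguous())
```

Note that `register_hook` only fires for tensors that carry gradients, so this needs to be done inside the model's forward pass.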
Thank you very much for your answer, but I don't know much about backward hooks; can you explain in detail how to solve this problem? What puzzles me is that I only increased the RoI feature channels through torch.cat(). Why does that cause a problem in the backward pass?
Not sure if it is caused by the cat operation; it may come from several other operations. One point needs to be clarified: a tensor that is contiguous in the forward pass does not necessarily produce a contiguous grad. I can give you an example:

```python
import torch

v = torch.tensor([0., 0., 0.], requires_grad=True)
h = v.register_hook(lambda grad: grad.contiguous())  # make the grad contiguous
```

You can perform this operation on all the suspicious tensors.
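To see why a contiguous forward tensor can still receive a non-contiguous gradient, here is a small self-contained example (mine, not from the thread): a transpose in the forward pass means the gradient that reaches `x` is a transposed view, and the hook returns a contiguous copy of it:

```python
import torch

def make_contiguous(grad):
    # The gradient arriving here can be non-contiguous even though x itself is contiguous.
    print("incoming grad contiguous:", grad.is_contiguous())
    return grad.contiguous()

x = torch.randn(2, 3, requires_grad=True)    # x is contiguous in the forward pass
x.register_hook(make_contiguous)

w = torch.randn(3, 2)
out = x.t() * w                              # transpose + elementwise multiply
out.sum().backward()                         # the grad reaching x is a transposed view
```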
It worked for me. Thanks!
I have the same problem as #4310, but the answer there was not useful. I just increase the output channels of the RoI features when I use torch.cat to splice tensors.
Error:
Config:
Code:
I look forward to a concrete and effective solution.