Closed — techkang closed this issue 3 years ago
Thanks for your report. Did you solve your problem?
I hit this error because I changed the backbone of CRNN and its output shape was B1637. After changing the parameters of my backbone the problem no longer appears, so I am closing this issue. However, the problem may come back if an instance is extremely long.
Thanks for your feedback. For CTC loss, you can set zero_infinity=True
in the config file to avoid this problem.
Please see the official PyTorch doc or https://github.com/open-mmlab/mmocr/blob/main/mmocr/models/textrecog/losses/ctc_loss.py
for the explanation.
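For reference, a minimal sketch of what that option might look like in a recognizer config; the surrounding keys are illustrative assumptions based on typical MMOCR configs, not copied from the repo:

```python
# Hypothetical snippet of a recognizer config (only the loss part is shown).
loss = dict(
    type='CTCLoss',      # resolved to mmocr/models/textrecog/losses/ctc_loss.py
    zero_infinity=True,  # zero out infinite losses (and their gradients) instead of propagating inf/NaN
)
```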
If the target length produced by the CTC converter is larger than the input sequence length and flatten=True is set in CTCLoss, an error occurs when computing the CTC loss. So is it a bug or a feature that CTCConverter does not truncate overly long targets?
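For what it's worth, here is a minimal PyTorch sketch (shapes and values are made up, not taken from this issue) of the underlying behaviour: when the target is longer than the input sequence, no valid CTC alignment exists, the loss becomes infinite, and zero_infinity=True suppresses it:

```python
import torch
import torch.nn as nn

T, N, C = 10, 1, 20  # input time steps, batch size, number of classes
log_probs = torch.randn(T, N, C).log_softmax(2)

# 15 target labels but only 10 input time steps: no valid CTC alignment exists.
targets = torch.randint(1, C, (N, 15), dtype=torch.long)
input_lengths = torch.full((N,), T, dtype=torch.long)
target_lengths = torch.full((N,), 15, dtype=torch.long)

print(nn.CTCLoss(zero_infinity=False)(log_probs, targets, input_lengths, target_lengths))  # inf
print(nn.CTCLoss(zero_infinity=True)(log_probs, targets, input_lengths, target_lengths))   # 0.
```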