Closed: psuu0001 closed this issue 11 months ago
Is this issue solved? I am also facing the same issue.
Error: `batch_size, l, d = x.size()` raises `ValueError: too many values to unpack (expected 3)`, where x is the input image. The error came up when I tried to train the model. It seems that the input image loaded from lmdb has one extra dimension after the encoder: its size after the encoder is [12, 3, 64, 256]. However, when I run demo.py, the size after the encoder is [1, 25, 512], which is acceptable. I do not know why there is a difference between training and prediction.
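For reference, a minimal sketch of the shape mismatch and of how a 4-D conv feature map is usually turned into the [N, L, D] sequence the decoder unpacks. The concrete shapes and the squeeze/permute step are assumptions for illustration, not the repo's exact code:

```python
import torch

# A 4-D image/feature tensor cannot be unpacked into three values,
# which reproduces the reported error.
feat = torch.randn(12, 3, 64, 256)            # [N, C, H, W] -- raw lmdb batch
try:
    batch_size, l, d = feat.size()            # decoder expects [N, L, D]
except ValueError as e:
    print(e)                                  # too many values to unpack (expected 3)

# Assumed encoder output: height pooled to 1, then flattened into a sequence.
conv_out = torch.randn(12, 512, 1, 25)        # [N, C, H=1, W]
seq = conv_out.squeeze(2).permute(0, 2, 1)    # -> [N, W, C] = [12, 25, 512]
batch_size, l, d = seq.size()                 # now unpacks cleanly
```

This matches the [1, 25, 512] shape seen in demo.py, which suggests the training path is feeding the raw image batch to the decoder instead of the encoder's sequence output.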
Have you solved this problem?
Originally posted by @jun214384468 in #66 (comment)
Do you know how to output the rectified image during testing?
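Not the repo's exact API, but a minimal sketch of how one could save the rectified image during testing, assuming the rectification (TPS/STN) output tensor is accessible and the input was normalized to [-1, 1]; the attribute names in the usage comment are hypothetical:

```python
from torchvision.utils import save_image

# Hypothetical helper: denormalize the rectified tensor [N, C, H, W]
# back to [0, 1] and write it to disk.
def dump_rectified(rectified, path="rectified.png"):
    img = (rectified.detach().cpu() + 1.0) / 2.0   # assumes [-1, 1] normalization
    save_image(img.clamp(0, 1), path)

# Usage sketch (names are assumptions, not the repo's exact interface):
# rectified = model.tps(input_images)   # if the rectification module is exposed
# dump_rectified(rectified, "rectified.png")
```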