ayumiymk / aster.pytorch

ASTER in Pytorch
MIT License

Changing the training input size to 32,100 made it work. After running once, I changed the parameters back to (65,256) and it still ran fine. Honestly, I don't even know how I fixed this bug #68

Closed · psuu0001 closed this 11 months ago

psuu0001 commented 3 years ago

Error: `batch_size, l, d = x.size()` raises `ValueError: too many values to unpack (expected 3)`

x is the input image.

The error came up when I tried to train the model. It seems the lmdb input image has one extra dimension after the encoder: its size is [12, 3, 64, 256]. However, when I run demo.py, the size after the encoder is [1, 25, 512], which is what the decoder accepts. I do not know why there is a difference between training and prediction.

Have you solved this problem?

Originally posted by @jun214384468 in https://github.com/ayumiymk/aster.pytorch/issues/66#issuecomment-787440632
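For context, here is a minimal sketch (with hypothetical shapes, not the repo's actual encoder code) of why the three-way unpack fails on a 4-D tensor, and how a `[B, C, H, W]` feature map is typically collapsed into the `[batch, length, dim]` layout an attention decoder expects:

```python
import torch

# A raw image-like batch is 4-D [batch, channels, height, width]; unpacking
# its size into three names is exactly the reported ValueError.
x = torch.zeros(12, 3, 64, 256)
try:
    batch_size, l, d = x.size()
except ValueError as e:
    print(e)  # too many values to unpack (expected 3)

# A typical CNN encoder pools the height axis down to 1, e.g. [12, 512, 1, 25].
# Squeezing that axis and swapping width/channels gives the decoder-friendly
# [batch, length, dim] sequence, matching the [1, 25, 512] seen in demo.py.
feat = torch.zeros(12, 512, 1, 25)
seq = feat.squeeze(2).permute(0, 2, 1)  # -> [12, 25, 512]
batch_size, l, d = seq.size()
print(batch_size, l, d)
```

If the training path hands the decoder the raw 4-D image instead of the encoder's reshaped feature sequence (for example because an unexpected input size skips a pooling step), this unpack is the first place it breaks, which would explain why changing the input size to 32,100 sidesteps the error.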

ASHU-web commented 3 years ago

Is this issue solved? I am also facing the same issue...

lrfighting commented 2 years ago

> Error: `batch_size, l, d = x.size()` raises `ValueError: too many values to unpack (expected 3)`. x is the input image. The error came up when I tried to train the model. It seems the lmdb input image has one extra dimension after the encoder: its size is [12, 3, 64, 256]. However, when I run demo.py, the size after the encoder is [1, 25, 512], which is what the decoder accepts. I do not know why there is a difference between training and prediction.
>
> Have you solved this problem?

Originally posted by @jun214384468 in #66 (comment)

Do you know how to output the rectified images during testing?
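One generic way to do this (a sketch, not the repo's own code: it assumes the rectification network returns a normalized CHW torch tensor, and that the pipeline normalized with mean 0.5, std 0.5 — adjust the de-normalization to your transforms) is to de-normalize the tensor and write it out with Pillow:

```python
import numpy as np
import torch
from PIL import Image

def save_rectified(tensor_chw, path):
    """Save a normalized CHW image tensor (values roughly in [-1, 1]) as a PNG.

    Hypothetical helper: call it on the rectifier's output inside the test loop,
    e.g. save_rectified(rectified[i], f"rectified_{i}.png").
    """
    arr = tensor_chw.detach().cpu().numpy()
    arr = (arr * 0.5 + 0.5) * 255.0        # undo mean=0.5, std=0.5 normalization
    arr = arr.clip(0, 255).astype(np.uint8)
    arr = arr.transpose(1, 2, 0)           # CHW -> HWC for PIL
    Image.fromarray(arr).save(path)
```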