Open psuu0001 opened 3 years ago
error: `batch_size, l, d = x.size()` ValueError: too many values to unpack (expected 3)
Here `x` is the input image.
The error comes up when I try to train the model. It seems the input loaded from the LMDB dataset still has one extra dimension after the encoder: its size after the encoder is [12, 3, 64, 256]. However, when I run demo.py, the size after the encoder is [1, 25, 512], which is what the unpack expects. I do not know why there is a difference between training and prediction.
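A minimal reproduction of the unpack failure, using plain tuples to stand in for tensor shapes (no torch required). Note that [12, 3, 64, 256] looks like a raw [batch, channels, height, width] image batch rather than encoder output, which suggests the training path may be skipping the encoder. The reshape at the end is only an assumption about a common fix (flattening spatial dims into a [batch, length, dim] sequence), not the repo's confirmed code:

```python
# Shapes observed in the issue; tuples stand in for tensor .size().
shape_train = (12, 3, 64, 256)  # from the LMDB training path: [B, C, H, W]
shape_demo = (1, 25, 512)       # from demo.py: [B, L, D], which unpacks fine

# Same three-way unpack as in the traceback -- fails on a 4-D shape.
try:
    batch_size, l, d = shape_train
except ValueError as e:
    print(e)  # too many values to unpack (expected 3)

# Hypothetical fix (an assumption, not the repo's code): if a [B, C, H, W]
# feature map reaches this point, flatten the spatial dims into a sequence
# of length H*W with feature size C before the unpack.
b, c, h, w = shape_train
seq_shape = (b, h * w, c)
print(seq_shape)  # (12, 16384, 3)
```

If the channel dimension really is 3, though, the tensor is almost certainly the raw image, and the real fix is to make sure the training loop actually runs the encoder before this unpack.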
Have you solved this problem?