Closed jbuehler1337 closed 4 years ago
Are you using the IAM images?
Yes, I am using the IAM data from the official page and your code exactly as described in the README. I did not change the dataset; I only lowered the number of workers and the batch size (32 to 16) to keep my GPU memory from being overloaded. How can I solve this? Can I see the dimensions somehow? Do you need any further information?
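For anyone hitting the same question about seeing the dimensions: printing the `.shape` attribute of each array right before the failing operation usually pinpoints the mismatch. A minimal sketch using NumPy (the notebooks use MXNet NDArrays, which expose the same `.shape` attribute; the shapes below are made up for illustration):

```python
import numpy as np

# Hypothetical batch after lowering batch size from 32 to 16:
# (batch, channels, height, width) -- values are illustrative only.
batch = np.zeros((16, 1, 60, 800))
weights = np.zeros((512, 256))  # a hypothetical layer weight matrix

# Print the shapes just before the line that raises the error.
print("batch:", batch.shape)
print("weights:", weights.shape)
```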
Were you able to run through 0_handwriting_ocr.ipynb without any issues?
Yes, I ran through the 0_handwriting_ocr.ipynb, 1_a_paragraph_segmentation_msers.ipynb, 1_b_paragraph_segmentation_dcnn.ipynb, 2_line_word_segmentation.ipynb, and 3_handwriting_recognition.ipynb notebooks without any errors.
Hey, is there any news? Can I support you with additional information?
There was a typo in the code.
Please change

```python
num_heads = 16
embed_size = 512
num_layers = 2
epochs = 5
key = 'language_denoising'
best_test_loss = 10e20
learning_rate = 0.00004
send_every_n = 50
```
to

```python
num_heads = 16
embed_size = 256
num_layers = 2
epochs = 5
key = 'language_denoising'
best_test_loss = 10e20
learning_rate = 0.00004
send_every_n = 50
```
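For context, my reading of why this fixes the dimension error (an assumption, not confirmed in the thread): the notebook loads pretrained weights that were saved with an embedding size of 256, so building the model with `embed_size = 512` creates parameters whose shapes no longer match the checkpoint. A minimal NumPy sketch of that kind of mismatch, with a hypothetical vocabulary size:

```python
import numpy as np

vocab_size = 100          # hypothetical vocabulary size
saved_embed_size = 256    # dimension the checkpoint was trained with
wrong_embed_size = 512    # dimension from the typo

# Embedding matrix stored in the checkpoint vs. the freshly built parameter.
saved_weights = np.zeros((vocab_size, saved_embed_size))
model_param = np.zeros((vocab_size, wrong_embed_size))

# Loading a checkpoint copies saved arrays into model parameters;
# a shape mismatch like this is what surfaces as the dimension error.
print(saved_weights.shape == model_param.shape)  # prints False
```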
Thank you so much @jonomon, it is working fine now.
Hi again,
I am a bit confused about this error in the 4_text_denoising notebook. I followed every step from before, but something does not match in the dimensions. Can you explain why this is happening?