simoncozens / atokern

Neural network based font kerning

Training makes arrays of the wrong shape? #2

Open · colinmford opened this issue 6 years ago

colinmford commented 6 years ago

I'm trying to run the training script on a group of fonts.

I've dumped the kerning, the script has pickled all the fonts, and so on. I'm running the script "out of the box", so to speak, with no adjustments to the settings.

When it gets to the "Training" portion, I get this error:

Traceback (most recent call last):
  File "atokern.keras.py", line 334, in <module>
    ],shuffle = True, validation_data=(val_tensors, val_kern))
  File "/usr/local/lib/python3.6/site-packages/keras/engine/training.py", line 1602, in fit
    batch_size=batch_size)
  File "/usr/local/lib/python3.6/site-packages/keras/engine/training.py", line 1414, in _standardize_user_data
    exception_prefix='input')
  File "/usr/local/lib/python3.6/site-packages/keras/engine/training.py", line 141, in _standardize_input_data
    str(array.shape))
ValueError: Error when checking input: expected rightofl to have 3 dimensions, but got array with shape (0, 1)

Printing val_tensors reveals that all of the arrays are empty (see below). Any idea what I'm doing wrong?

{'rightofl': array([], shape=(0, 1), dtype=float64),
'leftofr': array([], shape=(0, 1), dtype=float64), 
'rightofo': array([], shape=(0, 1), dtype=float64), 
'rightofH': array([], shape=(0, 1), dtype=float64), 
'mwidth': array([], dtype=float64)}
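(For anyone hitting the same thing: a quick check like the one below, run just before the fit() call with the script's val_tensors dict in scope, makes the problem obvious. The assert message is only illustrative.)

# val_tensors is the dict printed above; every array has zero rows
for name, arr in val_tensors.items():
    print(name, arr.shape)

# fit() will fail on shapes like (0, 1), so fail earlier with a clearer message
assert all(arr.shape[0] > 0 for arr in val_tensors.values()), \
    "no validation samples were built; check the input fonts"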
simoncozens commented 6 years ago

It hasn't processed any of your fonts. Have you made sure that the training_files and validation_files arrays in settings.py are set correctly?
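For reference, a minimal sketch of what those two arrays might look like, assuming they hold paths to the dumped font data under kern-dump (the file names here are placeholders, not from this repo):

# settings.py (placeholder paths; point these at your own fonts)
training_files   = ["kern-dump/SomeFont-Regular.otf", "kern-dump/SomeFont-Bold.otf"]
validation_files = ["kern-dump/SomeFont-Italic.otf"]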

colinmford commented 6 years ago

I see what happened: the README mentions putting the font files in kern-dump, but it doesn't explicitly mention splitting them into training and validation sets. Once I put files in the validation set, it runs.

The README should probably mention that. Otherwise, the code is working well!

Thanks!

simoncozens commented 6 years ago

Yeah, I only started doing that recently, when I realised that augmenting the data and then letting Keras split some of it off as validation samples was causing false positives (augmented copies of training pairs were leaking into the validation set). Will fix up the README.
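To make the distinction concrete, here is a rough sketch of the two approaches; the model and tensor names are illustrative, echoing the fit() call in the traceback above:

# Risky with augmented data: Keras slices the validation set out of the same
# (augmented) training arrays, so near-duplicate pairs leak across the split.
model.fit(train_tensors, train_kern, validation_split=0.1, shuffle=True)

# Safer, and what the script does now: validate on tensors built from fonts
# that were never used for training or augmentation.
model.fit(train_tensors, train_kern, shuffle=True,
          validation_data=(val_tensors, val_kern))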