Rudrabha / Lip2Wav

This is the repository containing the code for our CVPR 2020 paper, "Learning Individual Speaking Styles for Accurate Lip to Speech Synthesis"

Training error with "ValueError: all input arrays must have the same shape" #16

Closed. kavinvin closed this issue 4 years ago.

kavinvin commented 4 years ago

I downloaded the dataset and preprocessed it, then trained the model with the following command:

python train.py first_run --data_root Dataset/chem/ --preset synthesizer/presets/chem.json

At step 8, I got the following error:

ValueError: all input arrays must have the same shape

What did I do wrong here? Thanks!

Arguments:
    name:                   first_run
    data_root:              Dataset/chem/
    preset:                 synthesizer/presets/chem.json
    models_dir:             synthesizer/saved_models/
    mode:                   synthesis
    GTA:                    True
    restore:                True
    summary_interval:       2500
    embedding_interval:     1000000000
    checkpoint_interval:    1000
    eval_interval:          1000
    tacotron_train_steps:   2000000
    tf_log_level:           1

Training on 12.369824074074074 hours
Validating on 0.7361574074074074 hours
...
Instructions for updating:
Use tf.cast instead.
Loss is added.....
Optimizer is added....
Feeder is initialized....
Ready to train....
Step       1 [62.693 sec/step, loss=18.06177, avg_loss=18.06177]
Step       2 [32.221 sec/step, loss=11.06506, avg_loss=14.56342]
Step       3 [22.066 sec/step, loss=8.36187, avg_loss=12.49623]
Step       4 [16.988 sec/step, loss=9.19182, avg_loss=11.67013]
Step       5 [22.275 sec/step, loss=8.69534, avg_loss=11.07517]
Step       6 [18.854 sec/step, loss=10.56172, avg_loss=10.98960]
Step       7 [16.410 sec/step, loss=8.13013, avg_loss=10.58110]
Step       8 [14.579 sec/step, loss=7.24404, avg_loss=10.16397]
Exception in thread background:
Traceback (most recent call last):
  File "/home/kavinvin/miniconda3/lib/python3.7/threading.py", line 926, in _bootstrap_inner
    self.run()
  File "/home/kavinvin/miniconda3/lib/python3.7/threading.py", line 870, in run
    self._target(*self._args, **self._kwargs)
  File "/home/kavinvin/projects/sandbox/Lip2Wav/synthesizer/feeder.py", line 147, in _enqueue_next_train_group
    feed_dict = dict(zip(self._placeholders, self._prepare_batch(batch, r)))
  File "/home/kavinvin/projects/sandbox/Lip2Wav/synthesizer/feeder.py", line 212, in _prepare_batch
    input_cur_device, input_max_len = self._prepare_inputs([x[0] for x in batch])
  File "/home/kavinvin/projects/sandbox/Lip2Wav/synthesizer/feeder.py", line 238, in _prepare_inputs
    return np.stack([self._pad_input(x, max_len) for x in inputs]), max_len
  File "/home/kavinvin/miniconda3/lib/python3.7/site-packages/numpy/core/shape_base.py", line 416, in stack
    raise ValueError('all input arrays must have the same shape')
ValueError: all input arrays must have the same shape
Rudrabha commented 4 years ago

Can you check the input sizes that you are feeding? Please add print statements at appropriate places in this function and check what sort of shapes are being fed. The file containing this function holds the whole data loader, so you can print in multiple places and track the sizes of the inputs (and ground truth) that are being fed to the model.
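For example, one could temporarily print every window's shape inside _prepare_inputs in synthesizer/feeder.py, right before the np.stack call shown in the traceback. This is only a sketch: the return line is copied from the traceback, but how max_len is computed in the actual file is an assumption.

import numpy as np

def _prepare_inputs(self, inputs):
    # Assumption: max_len is the length of the longest window in the batch.
    max_len = max(len(x) for x in inputs)
    # Temporary debug print: log the shape of every window before stacking.
    for i, x in enumerate(inputs):
        print("input %d shape: %s" % (i, np.asarray(x).shape))
    return np.stack([self._pad_input(x, max_len) for x in inputs]), max_len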

kavinvin commented 4 years ago

It seems to happen randomly, once in a while, when one sample in a batch has shape (89, 96, 96, 3) instead of (90, 96, 96, 3), which causes the shape error. I'm not sure whether the problem comes from the dataset or from the get_window function.
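For reference, a minimal reproduction outside the repo that shows why a single 89-frame window breaks the batch (shapes taken from the observation above):

import numpy as np

full_window = np.zeros((90, 96, 96, 3))   # T = 90 frames of 96x96 RGB face crops
short_window = np.zeros((89, 96, 96, 3))  # one frame missing
np.stack([full_window, short_window])     # raises ValueError: all input arrays must have the same shape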

Rudrabha commented 4 years ago

Hi, I will upload a fix for this issue to the repo after digging a bit more into the conditions under which such an error can occur. I thought we had some sort of check for this, but apparently it is not enough. You can also try adding a check in the data loader to ensure that the number of frames in a window equals the "T" given in hparams.py (see the sketch below). I am leaving this issue open until we update the code.
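A minimal, standalone sketch of such a check (the function name filter_complete_windows and where it would be called from are assumptions for illustration, not the actual fix):

import numpy as np

def filter_complete_windows(windows, T):
    # Drop any window that does not contain exactly T frames, so that
    # np.stack in the feeder always sees identically shaped arrays.
    kept = [w for w in windows if np.asarray(w).shape[0] == T]
    dropped = len(windows) - len(kept)
    if dropped:
        print("Dropped %d window(s) with a frame count != T=%d" % (dropped, T))
    return kept

In the feeder this would be applied to each batch before _prepare_inputs, with T taken from hparams.T.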

Rudrabha commented 4 years ago

Added a check in feeder.py. The fix is in this line.