apatel726 / HurricaneDissertation


Training Data Shape Bug #39

Open · hammad93 opened this issue 3 years ago

hammad93 commented 3 years ago

The error:

Traceback (most recent call last):
  File "run.py", line 144, in <module>
    model = universal()
  File "run.py", line 137, in universal
    load_if_exists=args.load, epochs=args.epochs)
  File "/tf/HurricaneDissertation/hurricane_ai/ml/bd_lstm_td.py", line 142, in train
    validation_data=(X_val, y_val), verbose=verbose, callbacks=[logs])
  File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/keras/engine/training.py", line 819, in fit
    use_multiprocessing=use_multiprocessing)
  File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/keras/engine/training_v2.py", line 235, in fit
    use_multiprocessing=use_multiprocessing)
  File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/keras/engine/training_v2.py", line 593, in _process_training_inputs
    use_multiprocessing=use_multiprocessing)
  File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/keras/engine/training_v2.py", line 646, in _process_inputs
    x, y, sample_weight=sample_weights)
  File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/keras/engine/training.py", line 2383, in _standardize_user_data
    batch_size=batch_size)
  File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/keras/engine/training.py", line 2489, in _standardize_tensors
    y, self._feed_loss_fns, feed_output_shapes)
  File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/keras/engine/training_utils.py", line 810, in check_loss_and_target_compatibility
    ' while using as loss `' + loss_name + '`. '
ValueError: A target array with shape (7487, 6, 3) was passed for an output of shape (None, 5, 3) while using as loss `mean_squared_error`. This loss expects targets to have the same shape as the output.
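For context, the mismatch is between the number of timesteps in the target array (6) and in the model's output (5). The following is a minimal, self-contained sketch, not the project's code (the layer sizes and architecture here are illustrative only), that triggers the same class of error:

import numpy as np
from tensorflow.keras import layers, models

TIMESTEPS = 5   # mirrors the hard-coded value
FEATURES = 3

# Bidirectional LSTM with a TimeDistributed head, purely for illustration.
model = models.Sequential([
    layers.Bidirectional(layers.LSTM(16, return_sequences=True),
                         input_shape=(TIMESTEPS, FEATURES)),
    layers.TimeDistributed(layers.Dense(FEATURES)),
])
model.compile(loss="mean_squared_error", optimizer="adam")

# Targets built with 6 timesteps no longer match the (None, 5, 3) output,
# so fit() raises a ValueError like the one above.
X = np.zeros((8, TIMESTEPS, FEATURES))
y_bad = np.zeros((8, 6, FEATURES))
model.fit(X, y_bad, epochs=1)   # ValueError: target shape vs. output shape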

To reproduce this error, check out the all_timestamps branch (as of the creation of this issue) and run the following command:

python run.py --universal --epochs 500 --dropout 0.01 --loss mse --optimizer adam
hammad93 commented 3 years ago

https://github.com/apatel726/HurricaneDissertation/commit/ce75aad3b21514a3d7180be3ed519c15fa63f940

This commit fixes it for the time being (the commit message was supposed to reference #39 instead of #29). A deeper analysis of the longer timesteps that caused the issue is still needed. The remaining problem centers on the timesteps variable, which is hard-coded to 5 and passed around function calls and objects.
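One possible direction for the longer-term fix, sketched below with hypothetical names (build_model and train are illustrative and do not match the actual bd_lstm_td.py API), is to derive the timestep count from the data instead of hard-coding 5:

from tensorflow.keras import layers, models

def build_model(timesteps, features=3, units=16):
    # Output timesteps come from the argument, not a literal 5.
    model = models.Sequential([
        layers.Bidirectional(layers.LSTM(units, return_sequences=True),
                             input_shape=(timesteps, features)),
        layers.TimeDistributed(layers.Dense(features)),
    ])
    model.compile(loss="mean_squared_error", optimizer="adam")
    return model

def train(X_train, y_train, X_val, y_val, epochs=500):
    # Derive timesteps/features from the targets so they always agree
    # with the model's output shape.
    _, timesteps, features = y_train.shape
    assert X_train.shape[1] == timesteps, "input/target timestep mismatch"
    model = build_model(timesteps, features)
    return model.fit(X_train, y_train,
                     validation_data=(X_val, y_val), epochs=epochs)

With something along these lines, a 6-timestep dataset like the one on the all_timestamps branch would build a matching (None, 6, 3) output instead of failing against the hard-coded 5.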