Open intervolga-school opened 2 years ago
@sachinprasadhs, I was able to reproduce the issue on Colab using TF 2.7 and tf-nightly (2.8.0-dev20211201). Please find the gist here for reference. Thanks!
You can use the solution mentioned here to avoid the warning and continue training.
I know a workaround, but that is very ugly:
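For illustration, one ugly workaround of this kind is to drive the epochs manually, calling fit() once per epoch so that each epoch exhausts its own iterator (a sketch with placeholder names, not necessarily the commenter's snippet):

```python
# Ugly but workable sketch (placeholder names): run one epoch per fit()
# call so that each epoch ends on its own data exhaustion, no matter
# how many batches it happens to contain.
num_epochs = 10  # placeholder
for epoch in range(num_epochs):
    dataset = build_dataset()  # hypothetical factory; may yield a
                               # differently-sized dataset every epoch
    model.fit(dataset, epochs=1)  # placeholder model
```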
This is a bad solution. In my opinion the model should continue training until it reaches num_epochs, even if some epoch has fewer batches than the first one. Displaying the number of steps remaining within an epoch is not as important as the completion of all epochs.
Waiting for triage. Summary: when the dataset has a different number of samples from epoch to epoch (the batch size is the same, the number of steps is different), training stops at an epoch whose number of steps differs from that of the first epoch.
Thanks for reporting the issue - one solution is to use a steps_per_epoch that's large enough for the amount of data in all epochs, and have the termination of an epoch rely on exhaustion of data (OutOfRangeError). Can you check if this works?
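If it helps, a minimal sketch of that suggestion, assuming max_possible_steps is an upper bound you can estimate for any single epoch (all names are placeholders):

```python
# Sketch of the suggestion above: pass a steps_per_epoch at least as
# large as any epoch can be, so the step counter never ends an epoch
# early; each epoch then terminates when the dataset iterator is
# exhausted (OutOfRangeError) instead.
model.fit(
    dataset,                             # variable-length tf.data.Dataset
    epochs=num_epochs,                   # placeholder
    steps_per_epoch=max_possible_steps,  # assumed upper bound on steps
)
```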
Got the same issue when implementing a word2vec model: the dataset size changes from epoch to epoch. A single estimation of the number of batches takes around 4 hours (very large dataset), and the size can change by ±20% from epoch to epoch. So setting steps_per_epoch is not a good option. It would be great if keras.Model would always watch for OutOfRangeError itself.
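For reference, a sketch of the behavior being asked for: a plain training loop that runs each epoch until the iterator is exhausted (model and dataset here are placeholders):

```python
# Sketch: a plain Python loop over a tf.data.Dataset stops cleanly when
# the iterator is exhausted, so epochs of different lengths all complete.
num_epochs = 5  # placeholder
for epoch in range(num_epochs):
    steps = 0
    for x_batch, y_batch in dataset:  # placeholder (features, labels) dataset
        loss = model.train_on_batch(x_batch, y_batch)  # placeholder model
        steps += 1
    print(f"epoch {epoch}: {steps} steps, last loss {loss}")
```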
System information.
Describe the problem.
In my real case I use a tf.data.Dataset instance (based on tensorflow_datasets) to train the model. One big difference from the default keras.Model.fit + Dataset examples is the unknown (variable) dataset length. In my case the dataset length varies (±20%) because I apply random augmentations and filter some examples out; see the provided Colab link for what I mean.
As a result, when the first epoch finishes (the dataset reaches OutOfRangeError), Keras remembers the current step count, and if the same dataset has a smaller length on the next epoch, all model training is stopped.
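A minimal sketch of such a pipeline, assuming a random per-example filter (the numbers are illustrative), so that every pass over the data yields a different number of batches:

```python
import tensorflow as tf

# Sketch of a pipeline whose per-epoch length varies: the random filter
# drops a different ~20% of examples on every pass, so each epoch's
# iterator yields a different number of batches.
base = tf.data.Dataset.range(1000)
dataset = base.filter(lambda x: tf.random.uniform([]) > 0.2).batch(32)

for epoch in range(3):
    num_batches = sum(1 for _ in dataset)  # varies from pass to pass
    print(f"epoch {epoch}: {num_batches} batches")
```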
Describe the current behavior. The model stops training if the second/third/etc. dataset iterator has a smaller length than the first one.
Describe the expected behavior. The model should not stop training. It can print a warning, but should not stop.
Standalone code to reproduce the issue. https://colab.research.google.com/drive/1fY4v9WBRxfsywDyKKidu-lmFpaPdAn9D?usp=sharing
Source code / logs.