Closed Fratorhe closed 3 years ago
This is usually because trajectories get cut off by the batching, although in this case it could be something smaller. For the batching case we would need to add some additional methods that look for excess trajectories and put them back in. Perhaps @PythonFZ can discuss some TF stuff.
EDIT:
We should be able to use tf.data.Dataset with its batching method to generate batches of different sizes. Usually the last batch would be smaller than the batches before it.
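As a sketch of that behavior (pure Python standing in for tf.data.Dataset.batch; the function name and numbers here are illustrative, not from the codebase):

```python
def batch_indices(n_steps, batch_size, drop_remainder=False):
    """Split range(n_steps) into consecutive batches.

    Mimics tf.data.Dataset.batch(): with drop_remainder=False the last
    batch is simply smaller; with drop_remainder=True the leftover
    steps are silently discarded.
    """
    batches = []
    for start in range(0, n_steps, batch_size):
        batch = list(range(start, min(start + batch_size, n_steps)))
        if drop_remainder and len(batch) < batch_size:
            break  # leftover steps are dropped
        batches.append(batch)
    return batches

# With a batch size that does not divide 20,000 evenly:
kept = batch_indices(20_000, 128, drop_remainder=False)
dropped = batch_indices(20_000, 128, drop_remainder=True)
print(sum(len(b) for b in kept))     # 20000 -- last batch is just smaller
print(sum(len(b) for b in dropped))  # 19968 -- remainder is lost
```

If the loader drops the remainder, the database ends up slightly smaller than the input, which matches the symptom reported below.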
Yes, this is what I was thinking.
Sorry I did not mean to close.
@Fratorhe please check the latest push and see if this has been solved. I did not have any good data to check this quickly.
Can we close this issue? @Fratorhe
Sorry, I am finishing with the GK thermal and viscosity. I will test it in one hour or so. I will close it if it works.
Perfect
working!!
Describe the bug
When I generate the database in a per-atom configuration, the number of time steps read is slightly smaller than expected. In this case I ran 20,000 steps, but the database loaded only 19,987. I guess it may be related to the batching, because it happened only in that case and the batching took 23 steps (19.987+23=20.000)...
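A quick sanity check of the mismatch (a hypothetical helper, using the numbers from this report):

```python
def check_database_size(n_input_steps, n_loaded_steps):
    """Compare input time steps against steps loaded into the database."""
    missing = n_input_steps - n_loaded_steps
    if missing:
        return f"{missing} steps missing (possibly dropped by batching)"
    return "sizes consistent"

# Numbers from the report: 20,000 steps in, 19,987 loaded.
print(check_database_size(20_000, 19_987))
```

A check like this after loading would make the inconsistency visible immediately instead of surfacing later as a size mismatch.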
To Reproduce
Steps to reproduce the behavior:
tutorials\walkthrough_notebooks\Liquid_Argon
Expected behavior
The database size should be consistent with the input file (currently it is not).
Additional context
Python 3.7.4