Closed: HiJuly closed this issue 6 years ago.
Hi July!
Thanks a lot for your interest in our work!
The truth is, this was a quick-and-dirty hack to keep the same batch size for the last test batch in TF...
You could argue it is OK to drop some data when the training set is very large. But to enable a fair comparison with state-of-the-art algorithms and to reproduce the same experimental setup, we had to use all the data, especially for inference and metric evaluation.
So it is just a TF hack. If someone knows how to adapt the batch size for the final batch, I would be more than happy to update the code.
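One way to keep the fixed batch size while still scoring every real sample (a plain-NumPy sketch, not the repository's code; the helper name `batches_with_padding` and the toy score function are made up for illustration) is to pad the last batch as the current code does, but also return how many rows are real, so the scores of the padded rows can be discarded afterwards:

```python
import numpy as np

def batches_with_padding(x, batch_size, feature_dim):
    """Yield (batch, valid_count): pad the last batch with ones,
    and report how many rows in it are real samples."""
    n = x.shape[0]
    n_batches = int(np.ceil(n / batch_size))
    for i in range(n_batches):
        chunk = x[i * batch_size:(i + 1) * batch_size]
        size = chunk.shape[0]
        if size < batch_size:
            fill = np.ones([batch_size - size, feature_dim])
            chunk = np.concatenate([chunk, fill], axis=0)
        yield chunk, size

# Toy data: 10 samples, 3 features; batch size 4 leaves a partial last batch.
x = np.arange(10 * 3, dtype=np.float64).reshape(10, 3)
scores = []
for batch, valid in batches_with_padding(x, batch_size=4, feature_dim=3):
    batch_scores = batch.sum(axis=1)     # stand-in for the model's score op
    scores.extend(batch_scores[:valid])  # drop scores of the padded rows
print(len(scores))  # 10: one score per real sample, none for the padding
```

This way the graph still sees a constant batch size, but the padded rows never leak into the metrics.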
July, do let me know if it is still unclear,
Thanks a lot, Houssam
Hi Houssam!
Got it, thanks for your quick reply. I have also read your paper carefully; nice work. Lastly, have a nice day at work.
Thanks a lot, July
Hi, thanks for sharing this cool idea. I have a question about the code, shown below:
```python
ran_from = nr_batches_test * batch_size
ran_to = (nr_batches_test + 1) * batch_size
size = testx[ran_from:ran_to].shape[0]
fill = np.ones([batch_size - size, 121])  # why all ones?
batch = np.concatenate([testx[ran_from:ran_to], fill], axis=0)
```
I cannot understand the purpose of building the batch this way. Can you give me a detailed explanation?
Thank you!
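For readers with the same question: with hypothetical numbers filled in (10 test samples, batch size 4; 121 is the feature dimension from the snippet), the code above pads the final partial batch up to the fixed batch size the TF graph expects:

```python
import numpy as np

# Hypothetical sizes: 10 test samples, batch_size 4, 121 features.
testx = np.zeros([10, 121])
batch_size = 4
nr_batches_test = 10 // batch_size           # 2 full batches; the last one is partial

ran_from = nr_batches_test * batch_size      # 8
ran_to = (nr_batches_test + 1) * batch_size  # 12 (slicing past the end is safe in NumPy)
size = testx[ran_from:ran_to].shape[0]       # only 2 real samples remain
fill = np.ones([batch_size - size, 121])     # 2 dummy rows of ones
batch = np.concatenate([testx[ran_from:ran_to], fill], axis=0)
print(batch.shape)  # (4, 121): the graph's fixed batch size is satisfied
```

The dummy rows exist only so the batch has the shape the graph requires; their outputs are not meant to count toward the evaluation.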