Open 915288938lx opened 4 years ago
In the eval_dataset, the shuffle param is set to False, so every time we run(eval_init_op) the iterator restarts from the beginning and we just get the first batch of the eval_dataset again. As a result, get_hypotheses only draws a random sample from that first batch, not from the whole eval_dataset.
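The pitfall can be sketched in plain Python, without TensorFlow. Here `make_iterator` and the batch contents are hypothetical stand-ins for re-running eval_init_op and for the un-shuffled eval_dataset:

```python
import random

# Stand-in for the eval pipeline: batches come in a fixed order
# because shuffle=False.
eval_batches = [[f"sent_{b}_{i}" for i in range(4)] for b in range(5)]

def make_iterator(batches):
    # Mimics sess.run(eval_init_op): a fresh iterator that always
    # restarts from the first batch.
    return iter(batches)

# If the initializer is re-run before every evaluation, next() only
# ever sees batch 0 -- the rest of the dataset is never reached.
seen = set()
for _ in range(10):
    it = make_iterator(eval_batches)   # re-initialize each time
    batch = next(it)                   # always eval_batches[0]
    seen.add(random.choice(batch))     # where get_hypotheses samples

assert seen <= set(eval_batches[0])
```

Under this assumption, every evaluation pass samples from the same un-shuffled first batch, which matches the behavior described above.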
Also, isn't the reported training loss just the mean loss of the last batch of each epoch? Why is it treated as the epoch loss?