google-research / tapas

End-to-end neural table-text understanding models.
Apache License 2.0

Training loss #48

Closed gihanpanapitiya closed 4 years ago

gihanpanapitiya commented 4 years ago

Is there a way to get the training loss during a fine-tuning task?

ghost commented 4 years ago

Yes, if you run tapas_classifier_experiment.py with do_eval, it will compute the loss for the current checkpoint.

I think the batch-wise training loss is also reported during training (essentially the same signal, just noisier).

gihanpanapitiya commented 4 years ago

Thanks! Is the batch-wise training loss written to ${output_dir}? Can this loss be visualized using TensorBoard?

eisenjulian commented 4 years ago

The high-level Estimator API that we use will automatically log the loss after every batch to TensorBoard summaries, which can be smoothed in the TensorBoard UI for easier reading. All other metrics are computed by loading a saved checkpoint and doing a pass over the full dataset (either train or dev). So you will notice that there are two train losses (sometimes even more if you use multiple accelerators): one is computed per batch (more frequent and noisier), and the other is computed over the full training set for each saved checkpoint.
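
In case it helps: you can point TensorBoard at the output directory (`tensorboard --logdir "${output_dir}"`) to see the per-batch loss curve, or read the logged scalars from the event files yourself. Here is a minimal sketch, assuming TensorFlow is installed, that ${output_dir} contains the Estimator's `events.out.tfevents.*` files, and that the per-batch loss is logged under the tag `loss` (the path and the tag name are assumptions; print the available tags if yours differ):

```python
# Minimal sketch: read the per-batch "loss" scalar from the Estimator's
# TensorBoard event files. The output_dir path and the "loss" tag are
# assumptions; adjust them to your setup.
import glob
import os

import tensorflow.compat.v1 as tf

output_dir = "/path/to/output_dir"  # hypothetical; use your ${output_dir}

event_files = sorted(glob.glob(os.path.join(output_dir, "events.out.tfevents.*")))
for event_file in event_files:
    for event in tf.train.summary_iterator(event_file):
        for value in event.summary.value:
            if value.tag == "loss":
                print("step %d: loss = %f" % (event.step, value.simple_value))
```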