We recently had trouble keeping older checkpoints when training BERT.
It turns out that if you want to keep more checkpoints than the default, you can pass an extra parameter to tf.estimator.RunConfig; see this issue for reference: https://github.com/dbmdz/berts/issues/32
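A minimal sketch of what this looks like, assuming the relevant parameter is keep_checkpoint_max (the model_dir path and save interval below are placeholder values, not from the original setup):

```python
import tensorflow as tf

# keep_checkpoint_max controls how many recent checkpoints the Estimator
# retains; older ones are deleted as new ones are written (the default is 5).
# Setting it to 0 or None keeps all checkpoints.
run_config = tf.estimator.RunConfig(
    model_dir="output/",          # placeholder output directory
    save_checkpoints_steps=1000,  # placeholder save interval
    keep_checkpoint_max=20,       # keep the 20 most recent checkpoints
)
```

The resulting run_config is then passed to the Estimator via its config argument, as in tf.estimator.Estimator(model_fn=..., config=run_config).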