tomohideshibata opened this issue 4 years ago
Are you pre-training from scratch or initializing from the .h5 file?
I've been pre-training initialized from the .h5 file, and the loss appears to be unchanged between epochs:
Epoch 1:
Train Step: 32256/32256
loss = 6.186943054199219
masked_lm_accuracy = 0.117702
lm_example_loss = 5.309477
sentence_order_accuracy = 0.550855
sentence_order_mean_loss = 0.689294
Epoch 2:
Train Step: 64512/64512
loss = 6.207996845245361
masked_lm_accuracy = 0.114809
lm_example_loss = 5.329027
sentence_order_accuracy = 0.546185
sentence_order_mean_loss = 0.689931
Going to try from scratch to see if it makes a difference.
From scratch.
I trained from scratch and saw no difference. I reduced the dataset to only 10,000 sentences to make debugging easier and perhaps let the model overfit the data, but the loss doesn't change from epoch to epoch. So I'm still unable to pre-train from scratch, though it appears we aren't dealing with the same problem. It would be good to know if anyone has succeeded in pre-training from scratch.
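As a side note, the "shrink the dataset and try to overfit" idea can be checked in isolation with a toy model. This is a minimal NumPy sketch (a logistic-regression stand-in, not the BERT code from this repo): any model whose gradients actually reach its weights should drive the loss down on a tiny dataset, so a flat loss across epochs, as in the logs above, usually points to a wiring problem (frozen variables, optimizer not applying gradients) rather than to the data.

```python
import numpy as np

# Tiny, trivially learnable dataset: label depends only on the first feature.
rng = np.random.default_rng(0)
X = rng.normal(size=(32, 16))
y = (X[:, 0] > 0).astype(float)

w = np.zeros(16)
b = 0.0
lr = 0.5

def loss_fn(w, b):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    return -np.mean(y * np.log(p + 1e-9) + (1 - y) * np.log(1 - p + 1e-9))

losses = []
for epoch in range(50):
    # Full-batch gradient descent on the logistic loss.
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    grad_w = X.T @ (p - y) / len(y)
    grad_b = np.mean(p - y)
    w -= lr * grad_w
    b -= lr * grad_b
    losses.append(loss_fn(w, b))

print(losses[0], losses[-1])  # final loss should be well below the first
```

If even a sanity check like this fails for the real model on 10,000 sentences, the problem is almost certainly in the training loop rather than in the data or hyperparameters.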
I think most of the code came from the following official Google TF 2.0 implementation: https://github.com/tensorflow/models/tree/master/official/nlp/bert
I also tried pre-training using the above repository, but it failed.
I posted an issue in the official Google repository:
https://github.com/tensorflow/models/issues/7903#issuecomment-564394004
Parameters such as predictions/output_bias and seq_relationship/output_weights are not saved in a checkpoint. I am not sure whether the pre-training failure arises from this, but there may be a problem in the pre-training code.
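For anyone who wants to verify which parameters actually land in a checkpoint, `tf.train.list_variables` lists every saved variable. This is a small self-contained sketch (the `Head`/`predictions` names here are made up to mirror the missing parameter, not the repo's real classes): in a TF2 object-based checkpoint, variables are keyed by their attribute path, so a head whose bias is never tracked simply won't show up in this list.

```python
import os
import tempfile
import tensorflow as tf

# Hypothetical module with an attribute named after the missing parameter.
class Head(tf.Module):
    def __init__(self):
        super().__init__()
        self.output_bias = tf.Variable(tf.zeros([8]))

# Save it under the key "predictions", mimicking predictions/output_bias.
ckpt = tf.train.Checkpoint(predictions=Head())
prefix = ckpt.save(os.path.join(tempfile.mkdtemp(), "demo"))

# List everything the checkpoint actually contains.
names = [name for name, shape in tf.train.list_variables(prefix)]
print([n for n in names if "output_bias" in n])
```

Running the same listing against a real pre-training checkpoint and grepping for `output_bias` / `output_weights` would confirm whether those parameters were ever tracked by the `Checkpoint` object.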
I am trying to pre-train from scratch on Japanese text using GPUs, but the pre-training seems strange. In the following log, masked_lm_accuracy and sentence_order_accuracy suddenly dropped. Has anyone succeeded in pre-training from scratch?