kyzhouhzau / BERT-NER

Use Google's BERT for named entity recognition (CoNLL-2003 as the dataset).

Skipping training since max_steps has already saved. #50

Open himanshu16497 opened 5 years ago

himanshu16497 commented 5 years ago

I tried training the same model on a custom dataset, but I got the error above ("Skipping training since max_steps has already saved."). Do you know how I can get around it?

vincetang commented 5 years ago

When you trained the initial model, a checkpoint file was generated and saved in the output directory. You are seeing this message because the number of steps already completed in training (the checkpoint's `global_step`) is equal to or greater than the `max_steps` argument passed at BERT_NER.py#628:

```python
estimator.train(input_fn=train_input_fn, max_steps=num_train_steps)
```

The Estimator loads the checkpoint, sees that `max_steps` has already been reached, and skips training.
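If you want to confirm this is what's happening, a minimal sketch like the one below reads the saved `global_step` out of the latest checkpoint and compares it to your step budget. The `output_dir` path is a placeholder for whatever `--output_dir` you passed to BERT_NER.py, and the `"global_step"` tensor name assumes a standard Estimator-saved checkpoint.

```python
import tensorflow as tf

output_dir = "./output"  # placeholder: your --output_dir

# Find the most recent checkpoint written by the Estimator.
ckpt = tf.train.latest_checkpoint(output_dir)
if ckpt is None:
    print("no checkpoint found in", output_dir)
else:
    # Read the saved global_step; Estimator.train() skips training
    # when this value is already >= max_steps.
    reader = tf.train.load_checkpoint(ckpt)
    global_step = reader.get_tensor("global_step")
    print("checkpoint global_step:", global_step)
```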

If you want to keep training on your dataset, increase `num_train_steps`. Alternatively, re-train from scratch by deleting the checkpoint files in the output directory (see the sketch below). If you just want to evaluate the model on your custom data without training, set `FLAGS.do_eval` or `FLAGS.do_predict` instead of `FLAGS.do_train`.
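Here is a rough sketch of the two training-side fixes. The paths and example numbers are placeholders, not values from the repo; the step formula mirrors the usual BERT-style computation (examples / batch size * epochs), so raising `--num_train_epochs` is one way to push `num_train_steps` past the saved `global_step`.

```python
import os
import shutil

output_dir = "./output"  # placeholder: your --output_dir

# Option 1: start training from scratch by removing the old checkpoints.
if os.path.isdir(output_dir):
    shutil.rmtree(output_dir)

# Option 2: raise the step budget instead. num_train_steps is derived
# roughly like this, so more epochs (or a smaller batch size) means
# more steps than the checkpoint has already completed.
num_train_examples = 14000   # placeholder: size of your train set
train_batch_size = 32        # --train_batch_size
num_train_epochs = 10.0      # --num_train_epochs (increase this)
num_train_steps = int(num_train_examples / train_batch_size * num_train_epochs)
print("num_train_steps:", num_train_steps)
```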