Closed likerainsun closed 4 years ago
I downloaded the pretrained BERT model. Running the fine-tuning step raises an error, when loading the vocab file I assume.
Any idea how to fix it?
The tokenization.py file is from the official Google BERT repo. How about reporting this bug there? (You might be able to reproduce the same error in that code.)
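For reference, vocab loading in BERT-style tokenizers boils down to reading `vocab.txt` one token per line and mapping each token to its line index. The sketch below mimics that logic (it is not the exact `load_vocab()` from google-research/bert, which goes through `tf.gfile`); a common cause of errors at this step is the file being read with a non-UTF-8 locale encoding, so forcing `encoding="utf-8"` is worth checking:

```python
import collections
import os
import tempfile

def load_vocab(vocab_file):
    """Load a BERT-style vocab file: one token per line, index = line number.
    A simplified sketch of the load_vocab() logic in tokenization.py."""
    vocab = collections.OrderedDict()
    # Force UTF-8: reading with a locale default can raise UnicodeDecodeError
    # on multilingual vocab files.
    with open(vocab_file, "r", encoding="utf-8") as reader:
        for index, line in enumerate(reader):
            vocab[line.strip()] = index
    return vocab

# Hypothetical demo with a tiny vocab file.
with tempfile.TemporaryDirectory() as d:
    path = os.path.join(d, "vocab.txt")
    with open(path, "w", encoding="utf-8") as f:
        f.write("[PAD]\n[UNK]\n[CLS]\n[SEP]\nhello\nworld\n")
    vocab = load_vocab(path)
    print(len(vocab))      # 6
    print(vocab["[CLS]"])  # 2
```

If this sketch loads your downloaded `vocab.txt` without error, the problem is more likely in how the fine-tuning script points at the file (path or flag) than in the file itself.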