RishabhMaheshwary closed this issue 4 years ago
Hi, I think the main cause of this error is that the vocab.txt file is missing from the IMDB BERT model folder. I have just updated the BERT model files so that this file is present in every folder: https://drive.google.com/drive/folders/1xog7EYBk1esscLgHxk23f46f73-kixC7. Could you download the model parameter files again so that the tokenizer can be initialized correctly?
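A quick way to confirm this locally is to check for the required tokenizer files before loading the model. This is a minimal sketch (the `check_model_folder` helper and `required` list are illustrative, not part of the repo); BERT-style wordpiece tokenizers read their vocabulary from `vocab.txt` in the model directory:

```python
import os
import tempfile

def check_model_folder(model_dir):
    """Return the names of tokenizer files missing from model_dir."""
    # vocab.txt is the wordpiece vocabulary a BERT tokenizer loads from;
    # if it is absent, tokenizer initialization fails as reported above.
    required = ["vocab.txt"]
    return [f for f in required if not os.path.exists(os.path.join(model_dir, f))]

# Reproduce the failure mode with an empty folder, then fix it.
with tempfile.TemporaryDirectory() as model_dir:
    print(check_model_folder(model_dir))   # vocab.txt reported missing

    # After re-downloading the model files, the check comes back clean
    # and e.g. BertTokenizer.from_pretrained(model_dir) can succeed.
    with open(os.path.join(model_dir, "vocab.txt"), "w") as f:
        f.write("[PAD]\n[UNK]\n[CLS]\n[SEP]\n")
    print(check_model_folder(model_dir))   # no files missing
```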
It is running now, thanks!
Glad to be able to help you!
I am getting the following output:
Then the code runs and stops below:
Maybe it is related to the cache directory path.
Can you help me resolve this?