lxy444 / bertcner

Chinese clinical named entity recognition using pre-trained BERT model

Why does tokenization keep failing with 'NoneType' object has no attribute 'tokenize'? #8

Open Wulingyun0425 opened 3 years ago

Wulingyun0425 commented 3 years ago

Can someone please help a newbie? Why do I keep getting this error? T.T

`token = tokenizer.tokenize(word)` raises `AttributeError: 'NoneType' object has no attribute 'tokenize'`

fenglsh3 commented 3 years ago

> Can someone please help a newbie? Why do I keep getting this error? T.T `token = tokenizer.tokenize(word)` raises `AttributeError: 'NoneType' object has no attribute 'tokenize'`

Change the default value of `--bert_model` in main.py to `'model'`, like this:

```python
parser.add_argument("--bert_model", default='model', type=str,
                    help="Bert pre-trained model selected in the list: "
                         "bert-base-uncased, bert-large-uncased, bert-base-cased, "
                         "bert-large-cased, bert-base-multilingual-uncased, "
                         "bert-base-multilingual-cased, bert-base-chinese.")
```
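The underlying cause is likely that the tokenizer load silently fails when `--bert_model` points at a path with no vocabulary file (in the old `pytorch_pretrained_bert` package, `BertTokenizer.from_pretrained` returns `None` rather than raising), so the error only surfaces later at `tokenizer.tokenize(word)`. A minimal sketch of failing fast instead, using a hypothetical stand-in for the tokenizer load:

```python
import argparse

# Sketch: parse --bert_model with the fixed default, then check the
# tokenizer immediately after loading instead of crashing later.
parser = argparse.ArgumentParser()
parser.add_argument("--bert_model", default='model', type=str,
                    help="Directory containing the pre-trained model and "
                         "vocab.txt, or a model name such as bert-base-chinese.")
args = parser.parse_args([])  # [] -> use the defaults, for demonstration

# Stand-in for: tokenizer = BertTokenizer.from_pretrained(args.bert_model)
# which returns None if vocab.txt cannot be found at that path.
tokenizer = None

if tokenizer is None:
    # Fail with a clear message instead of a later AttributeError.
    msg = ("Could not load tokenizer from %r; make sure --bert_model "
           "points at the downloaded pre-trained model directory."
           % args.bert_model)
```

Checking for `None` right after `from_pretrained` turns the confusing `AttributeError` into a message that names the actual problem: a wrong `--bert_model` path.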