thunlp / OpenPrompt

An Open-Source Framework for Prompt-Learning.
https://thunlp.github.io/OpenPrompt/

Special considerations when the base model is BERT or RoBERTa? #208

Closed qdchenxiaoyan closed 2 years ago

qdchenxiaoyan commented 2 years ago

Hi authors, when I train with t5 or gpt2 as the base model, the validation metric rises steadily with no problems. But when I use bert, roberta, or electra, the validation metric stays stuck at 0.5203649397197784. Could you advise what causes this?

Example: input_example = InputExample(text_a=line['text_pair'], text_b=line['text'], label=int(line['label']), guid=i)

Model initialization: plm, tokenizer, model_config, WrapperClass = load_plm("bert", "bert-base-chinese")

Template: template_text = '{"placeholder":"text_a"}{"placeholder":"text_b"}的情感倾向是{"mask"}.' (the Chinese span reads "the sentiment of ... is")

Verbalizer: myverbalizer = ManualVerbalizer(tokenizer, num_classes=2, label_words=[["负"], ["正"]]) (label words: "负" = negative, "正" = positive)

Training details:

```
Epoch 1, average loss: 3.3326262831687927
Epoch 1, average loss: 0.7787383239444913
Epoch 1, average loss: 0.7572225447236699
Epoch 1, average loss: 0.738348940730161
Epoch 1, average loss: 0.7296206120358232
Epoch 1, average loss: 0.7233000741192647
Epoch 1, average loss: 0.7194478078047589
Epoch 1, average loss: 0.7165702087618587
Epoch 1, average loss: 0.7136984900552019
Epoch 1, average loss: 0.7121389577100447
Epoch 1, average loss: 0.7103113287874931
Epoch 1, average loss: 0.7093091916511776
Epoch 1, average loss: 0.7082642679232515
Epoch 1, average loss: 0.7077864898547248
Epoch 1, average loss: 0.7074250399318126
Epoch 1, average loss: 0.7070826163498072
Epoch 1, average loss: 0.7063648934145984
Epoch 1, average loss: 0.7059904860616641
Epoch 1, average loss: 0.70552960885168
Epoch 1, average loss: 0.7050825911213101
Epoch 1, average loss: 0.7048186851440073
0.5203649397197784
Epoch 2, average loss: 0.6653246581554413
Epoch 2, average loss: 0.7000961779363898
Epoch 2, average loss: 0.6992864966194495
Epoch 2, average loss: 0.697152165840576
Epoch 2, average loss: 0.6964660410873108
Epoch 2, average loss: 0.6976269556980793
Epoch 2, average loss: 0.6974568861339253
Epoch 2, average loss: 0.6972834053179063
Epoch 2, average loss: 0.6972271847809284
Epoch 2, average loss: 0.6969758515266203
Epoch 2, average loss: 0.6968832315801383
Epoch 2, average loss: 0.6966261330479784
Epoch 2, average loss: 0.6964328451033501
Epoch 2, average loss: 0.6963928808987537
Epoch 2, average loss: 0.6964452584858793
Epoch 2, average loss: 0.6963973140276998
Epoch 2, average loss: 0.696516385802325
Epoch 2, average loss: 0.6964337500765108
Epoch 2, average loss: 0.6963930293084604
Epoch 2, average loss: 0.6962399163065522
Epoch 2, average loss: 0.7043500146401878
0.5203649397197784
```
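For context, these pieces typically fit together as below (a minimal sketch; the dataset contents, optimizer, and hyperparameters are illustrative assumptions, not the reporter's exact training script):

```python
# Minimal OpenPrompt classification pipeline around the pieces above
# (sketch; dataset contents and hyperparameters are illustrative).
import torch
from openprompt import PromptDataLoader, PromptForClassification
from openprompt.data_utils import InputExample
from openprompt.plms import load_plm
from openprompt.prompts import ManualTemplate, ManualVerbalizer

plm, tokenizer, model_config, WrapperClass = load_plm("bert", "bert-base-chinese")

template = ManualTemplate(
    tokenizer=tokenizer,
    text='{"placeholder":"text_a"}{"placeholder":"text_b"}的情感倾向是{"mask"}.',
)
verbalizer = ManualVerbalizer(tokenizer, num_classes=2, label_words=[["负"], ["正"]])

# Stand-in dataset; real examples are built as in the report above.
dataset = [InputExample(text_a="服务很好", text_b="这家店不错", label=1, guid=0)]

loader = PromptDataLoader(
    dataset=dataset, template=template, tokenizer=tokenizer,
    tokenizer_wrapper_class=WrapperClass, max_seq_length=256, batch_size=4,
)

model = PromptForClassification(plm=plm, template=template, verbalizer=verbalizer)
loss_fn = torch.nn.CrossEntropyLoss()
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)

model.train()
for batch in loader:
    logits = model(batch)                  # [batch_size, num_classes]
    loss = loss_fn(logits, batch['label'])
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```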

Aurora-slz commented 2 years ago

Hi, which checkpoint did you use to load the Chinese RoBERTa? When I run plm, tokenizer, model_config, WrapperClass = load_plm("roberta", 'hfl/chinese-roberta-wwm-ext'), I get an error saying that a RoBERTa model was loaded but it uses BERT's tokenizer, so they don't match. Thanks for any pointers!

Achazwl commented 2 years ago

Run from openprompt import plms; then you can modify plms._MODEL_CLASSES. For example, plms._MODEL_CLASSES['roberta'] specifies which model class and which tokenizer class should be used. You can change these to match what the Hugging Face checkpoint needs, e.g. set plms._MODEL_CLASSES['roberta'].tokenizer = BertTokenizer in your experiment code, as in the sketch below.
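A minimal sketch of that override, assuming the checkpoint in question is hfl/chinese-roberta-wwm-ext (which ships a BERT-style vocabulary):

```python
# Sketch of the suggested override: point OpenPrompt's 'roberta' entry at
# BERT's tokenizer before calling load_plm.
from transformers import BertTokenizer
from openprompt import plms
from openprompt.plms import load_plm

# Note: if _MODEL_CLASSES stores namedtuples in your installed version,
# this attribute assignment raises AttributeError; see the _replace
# workaround at the end of the thread.
plms._MODEL_CLASSES['roberta'].tokenizer = BertTokenizer

plm, tokenizer, model_config, WrapperClass = load_plm(
    "roberta", "hfl/chinese-roberta-wwm-ext")
```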

Aurora-slz commented 2 years ago

Thanks for your advice, it helped me a lot!

Trevo1 commented 1 year ago

> Thanks for your advice, it helped me a lot!

Hi, how did you modify things to use hfl/chinese-roberta-wwm-ext? Running plm, tokenizer, model_config, WrapperClass = load_plm("bert", 'hfl/chinese-roberta-wwm-ext') works, but plm, tokenizer, model_config, WrapperClass = load_plm("roberta", 'hfl/chinese-roberta-wwm-ext') fails outright, and it still raises an error even after I added plms._MODEL_CLASSES['roberta'].tokenizer = BertTokenizer 😭
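A likely cause of the remaining error: in recent OpenPrompt versions the values in plms._MODEL_CLASSES are namedtuples, which do not allow attribute assignment. A sketch of a workaround that rebuilds the entry instead (assuming a namedtuple-based _MODEL_CLASSES):

```python
# Sketch, assuming plms._MODEL_CLASSES stores namedtuples: rebuild the
# 'roberta' entry via _replace rather than assigning to its attribute.
from transformers import BertTokenizer
from openprompt import plms
from openprompt.plms import load_plm

plms._MODEL_CLASSES['roberta'] = plms._MODEL_CLASSES['roberta']._replace(
    tokenizer=BertTokenizer)

plm, tokenizer, model_config, WrapperClass = load_plm(
    "roberta", "hfl/chinese-roberta-wwm-ext")
```

Alternatively, since hfl/chinese-roberta-wwm-ext is architecturally a BERT checkpoint, loading it with load_plm("bert", ...), which already works as noted above, avoids the override entirely.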