yangheng95 / LCF-ATEPC

Code for the paper "A Multi-task Learning Model for Chinese-oriented Aspect Polarity Classification and Aspect Term Extraction"
MIT License

How to load the model to predict a single sentence? #18

Closed ScottishFold007 closed 3 years ago

ScottishFold007 commented 3 years ago

Hello! When I load my trained model for prediction, I get the error below. Could you please give me some pointers?

from model.lcf_atepc import LCF_ATEPC
from pytorch_transformers.tokenization_bert import BertTokenizer
from pytorch_transformers.modeling_bert import BertModel,BertConfig,BertForTokenClassification

args = {'dropout': 0,
        'device': 'cuda',
        'use_unique_bert': True,
        'SRD': 5,
        'max_seq_length': 80,
        'use_bert_spc': False,
        'local_context_focus': "fusion"
        }

bert_model = r'output\laptop_fusion_apcacc_74.53_apcf1_67.79_atef1_79.97'

tokenizer = BertTokenizer.from_pretrained(bert_model, do_lower_case=True)

bert_base_model = BertForTokenClassification.from_pretrained(bert_model)

model = LCF_ATEPC.from_pretrained(bert_model, args=args)

RuntimeError                              Traceback (most recent call last)
<ipython-input> in <module>
      3 tokenizer = BertTokenizer.from_pretrained(bert_model, do_lower_case=True)
      4
----> 5 bert_base_model = BertForTokenClassification.from_pretrained(bert_model)

d:\anaconda20190415\lib\site-packages\pytorch_transformers\modeling_utils.py in from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs)
    534
    535         # Instantiate model.
--> 536         model = cls(config, *model_args, **model_kwargs)
    537
    538         if state_dict is None and not from_tf:

d:\anaconda20190415\lib\site-packages\pytorch_transformers\modeling_bert.py in __init__(self, config)
   1124         self.num_labels = config.num_labels
   1125
-> 1126         self.bert = BertModel(config)
   1127         self.dropout = nn.Dropout(config.hidden_dropout_prob)
   1128         self.classifier = nn.Linear(config.hidden_size, config.num_labels)

d:\anaconda20190415\lib\site-packages\pytorch_transformers\modeling_bert.py in __init__(self, config)
    649         super(BertModel, self).__init__(config)
    650
--> 651         self.embeddings = BertEmbeddings(config)
    652         self.encoder = BertEncoder(config)
    653         self.pooler = BertPooler(config)

d:\anaconda20190415\lib\site-packages\pytorch_transformers\modeling_bert.py in __init__(self, config)
    232     def __init__(self, config):
    233         super(BertEmbeddings, self).__init__()
--> 234         self.word_embeddings = nn.Embedding(config.vocab_size, config.hidden_size, padding_idx=0)
    235         self.position_embeddings = nn.Embedding(config.max_position_embeddings, config.hidden_size)
    236         self.token_type_embeddings = nn.Embedding(config.type_vocab_size, config.hidden_size)

~\AppData\Roaming\Python\Python36\site-packages\torch\nn\modules\sparse.py in __init__(self, num_embeddings, embedding_dim, padding_idx, max_norm, norm_type, scale_grad_by_freq, sparse, _weight)
    107         self.scale_grad_by_freq = scale_grad_by_freq
    108         if _weight is None:
--> 109             self.weight = Parameter(torch.Tensor(num_embeddings, embedding_dim))
    110             self.reset_parameters()
    111         else:

RuntimeError: Trying to create tensor with negative dimension -1: [-1, 768]

How can I load my trained model correctly? Thank you!
yangheng95 commented 3 years ago

I haven't had time to finish the inference script for this repository yet; I may get to it later. This error is probably because the saved model is a state_dict. If that is the case, you should initialize the model from the pretrained BERT model (bin format) and then load the state_dict with torch.load().
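For illustration, a minimal sketch of that suggestion is shown below. The base-model name and the state_dict file name are assumptions and must be replaced with whatever the checkpoint was actually trained and saved with; args is the dict from the snippet above.

import torch
from model.lcf_atepc import LCF_ATEPC
from pytorch_transformers.tokenization_bert import BertTokenizer

# Assumption: the checkpoint directory holds only a state_dict, and the model was
# fine-tuned from 'bert-base-uncased'; adjust both to match your training setup.
pretrained_bert = 'bert-base-uncased'
state_dict_path = r'output\laptop_fusion_apcacc_74.53_apcf1_67.79_atef1_79.97\model.state_dict'  # hypothetical file name

tokenizer = BertTokenizer.from_pretrained(pretrained_bert, do_lower_case=True)

# 1) initialize LCF_ATEPC from the pretrained BERT weights (bin format) ...
model = LCF_ATEPC.from_pretrained(pretrained_bert, args=args)  # args: the dict defined earlier

# 2) ... then overwrite them with the fine-tuned weights via torch.load()
model.load_state_dict(torch.load(state_dict_path, map_location='cpu'))
model.to(args['device'])
model.eval()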

ScottishFold007 commented 3 years ago

Got it, thanks for the reply!

EEElevation commented 3 years ago

Hello, I have run into the same problem. Have you managed to load the trained model correctly? How did you do it? @ScottishFold007

yangheng95 commented 3 years ago

Hello, I have run into the same problem. Have you managed to load the trained model correctly? How did you do it? @ScottishFold007

Please refer to this repository instead: https://github.com/yangheng95/pyabsa
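For anyone landing here later, single-sentence prediction in pyabsa looked roughly like the sketch below under the v1.x-style API; the checkpoint name, sample sentence, and function names are assumptions, so please check the pyabsa README for the current usage.

from pyabsa import ATEPCCheckpointManager

# Assumption: pyabsa v1.x-style API; 'chinese' is a placeholder checkpoint name.
aspect_extractor = ATEPCCheckpointManager.get_aspect_extractor(checkpoint='chinese')

results = aspect_extractor.extract_aspect(
    inference_source=['这家餐厅的菜很好吃,但是服务太慢了'],  # "The food here is great, but the service is too slow."
    pred_sentiment=True,  # also predict the polarity of each extracted aspect
)
print(results)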