Closed HojaMuerta closed 2 years ago
My torch version is 1.10.0 — could that be related?
Thanks for raising the issue. You also need to change `model` to `lm` in `config.yaml` under the `conf` folder; we will update the README later.
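For reference, that change would look roughly like this in `conf/config.yaml` (a sketch — only the `model: lm` line comes from this thread; the comment is an assumption about the default value):

```yaml
# conf/config.yaml (sketch)
model: lm   # switch from the default CNN-style encoder to the language-model encoder
```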
Thanks for the reply. A new problem has come up: the extracted relations are inaccurate, and there is also a new error. I'm not sure whether the error is what's affecting the results.
[2022-06-10 13:12:05,774][main][INFO] - Relation between "男人的爱" and "人生长路" in the sentence: "毕业院校" (alma mater), confidence 0.99.
Traceback (most recent call last):
  File "F:\Anaconda3\envs\DeepKE-main\lib\site-packages\hydra\_internal\utils.py", line 198, in run_and_report
    return func()
  File "F:\Anaconda3\envs\DeepKE-main\lib\site-packages\hydra\_internal\utils.py", line 347, in
The new error is related to the `predict_plot` parameter; setting it to `False` should fix it. As for the model doing well during training but poorly at prediction, we are looking into that.
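A minimal sketch of that setting (the exact file it lives in is an assumption based on the `conf` folder mentioned earlier; only the `predict_plot` key comes from this thread):

```yaml
# prediction config in the conf folder (location assumed)
predict_plot: False   # disable plotting during prediction to avoid the new error
```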
OK, thanks for the reply!
Could the incorrect results be a problem with the model? Is there an alternative model I can swap in?
Both of the provided models reach good results during training; we are still trying to pin down what exactly the problem is.
Our apologies: the relation extraction model has a bug. We will release an updated model next week.
Hi, the model has been updated. When predicting, you now need to supply the entity types from entity extraction as well; this makes the predictions more accurate.
I have re-downloaded the model, but the prediction is still "毕业院校".
Which example are you using?
The album example preset in the README; switching to other examples gives the same result.
That example gives the correct result here with bert-chinese-wwm; the entity types also need to be provided.
Traceback (most recent call last):
  File "D:/MyProject/python/DeepKE-main/example/re/standard/predict.py", line 120, in main
    model.load(cfg.fp, device=device)
  File "D:\MyProject\python\DeepKE-main\src\deepke\relation_extraction\standard\models\BasicModule.py", line 19, in load
    self.load_state_dict(torch.load(path, map_location=device))
  File "F:\Anaconda3\envs\DeepKE-main\lib\site-packages\torch\nn\modules\module.py", line 1482, in load_state_dict
    raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
RuntimeError: Error(s) in loading state_dict for PCNN:
	Missing key(s) in state_dict: "embedding.wordEmbed.weight", "embedding.entityPosEmbed.weight", "embedding.attribute_keyPosEmbed.weight", "embedding.layer_norm.weight", "embedding.layer_norm.bias", "cnn.convs.0.weight", "cnn.convs.1.weight", "cnn.convs.2.weight", "cnn.activations.prelu.weight", "fc1.weight", "fc1.bias", "fc2.weight", "fc2.bias".
	Unexpected key(s) in state_dict: "bert.embeddings.position_ids", "bert.embeddings.word_embeddings.weight", "bert.embeddings.position_embeddings.weight", "bert.embeddings.token_type_embeddings.weight", "bert.embeddings.LayerNorm.weight", "bert.embeddings.LayerNorm.bias", "bert.encoder.layer.0.attention.self.query.weight", "bert.encoder.layer.0.attention.self.query.bias", "bert.encoder.layer.0.attention.self.key.weight", "bert.encoder.layer.0.attention.self.key.bias", "bert.encoder.layer.0.attention.self.value.weight", "bert.encoder.layer.0.attention.self.value.bias", "bert.encoder.layer.0.attention.output.dense.weight", "bert.encoder.layer.0.attention.output.dense.bias", "bert.encoder.layer.0.attention.output.LayerNorm.weight", "bert.encoder.layer.0.attention.output.LayerNorm.bias", "bert.encoder.layer.0.intermediate.dense.weight", "bert.encoder.layer.0.intermediate.dense.bias", "bert.encoder.layer.0.output.dense.weight", "bert.encoder.layer.0.output.dense.bias", "bert.encoder.layer.0.output.LayerNorm.weight", "bert.encoder.layer.0.output.LayerNorm.bias", "bert.pooler.dense.weight", "bert.pooler.dense.bias", "bilstm.rnn.weight_ih_l0", "bilstm.rnn.weight_hh_l0", "bilstm.rnn.bias_ih_l0", "bilstm.rnn.bias_hh_l0", "bilstm.rnn.weight_ih_l0_reverse", "bilstm.rnn.weight_hh_l0_reverse", "bilstm.rnn.bias_ih_l0_reverse", "bilstm.rnn.bias_hh_l0_reverse", "fc.weight", "fc.bias".
Set the environment variable HYDRA_FULL_ERROR=1 for a complete stack trace.
Process finished with exit code 1
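The traceback above indicates that the downloaded checkpoint contains BERT/BiLSTM weights while the code constructed a PCNN, i.e. the saved model and the configured architecture don't match — consistent with the earlier advice to set `model` to `lm` in `config.yaml`. A self-contained sketch of the state_dict compatibility check that produces this kind of error (the helper function and shortened key lists are illustrative, not DeepKE code):

```python
# Sketch of the check behind PyTorch's "Missing key(s)" / "Unexpected key(s)"
# error. Key names are shortened samples taken from the traceback above;
# diff_state_dicts is a hypothetical helper, not part of torch or DeepKE.
def diff_state_dicts(model_keys, checkpoint_keys):
    """Return (missing, unexpected) keys, as load_state_dict reports them."""
    missing = sorted(set(model_keys) - set(checkpoint_keys))
    unexpected = sorted(set(checkpoint_keys) - set(model_keys))
    return missing, unexpected

# Parameters the PCNN built from the config expects:
pcnn_keys = ["embedding.wordEmbed.weight", "cnn.convs.0.weight", "fc1.weight"]
# Parameters actually stored in the downloaded (lm-based) checkpoint:
lm_checkpoint_keys = ["bert.embeddings.word_embeddings.weight",
                      "bilstm.rnn.weight_ih_l0", "fc.weight"]

missing, unexpected = diff_state_dicts(pcnn_keys, lm_checkpoint_keys)
print("Missing:", missing)        # PCNN weights absent from the checkpoint
print("Unexpected:", unexpected)  # BERT/BiLSTM weights the PCNN cannot use
```

Setting `model: lm` makes the code construct the BERT-based model whose parameter names match the checkpoint, so the two key sets coincide and loading succeeds.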