zjunlp / KnowPrompt

[WWW 2022] KnowPrompt: Knowledge-aware Prompt-tuning with Synergistic Optimization for Relation Extraction
MIT License

Runtime error #16

Closed rrxsir closed 1 year ago

rrxsir commented 1 year ago

When I run `bash scripts/retacred.sh`, I get the following error:

```
Traceback (most recent call last):
  File "main.py", line 244, in <module>
    main()
  File "main.py", line 128, in main
    parser = _setup_parser()
  File "main.py", line 55, in _setup_parser
    litmodel_class = _import_class(f"lit_models.{temp_args.litmodel_class}")
  File "main.py", line 27, in _import_class
    class_ = getattr(module, class_name)
AttributeError: module 'lit_models' has no attribute 'TransformerLitModel'
scripts/retacred.sh: line 2: --model_name_or_path: command not found
scripts/retacred.sh: line 3: --accumulate_grad_batches: command not found
scripts/retacred.sh: line 4: --batch_size: command not found
scripts/retacred.sh: line 5: --data_dir: command not found
scripts/retacred.sh: line 6: --check_val_every_n_epoch: command not found
scripts/retacred.sh: line 7: --data_class: command not found
scripts/retacred.sh: line 8: --max_seq_length: command not found
scripts/retacred.sh: line 9: --model_class: command not found
scripts/retacred.sh: line 10: --t_lambda: command not found
scripts/retacred.sh: line 11: --wandb: command not found
scripts/retacred.sh: line 12: --litmodel_class: command not found
scripts/retacred.sh: line 13: --task_name: command not found
scripts/retacred.sh: line 14: --lr: command not found
```

What is causing this?

zxlzr commented 1 year ago

Hi, please check that the correct version of pytorch_lightning==1.3.1 is installed. We recommend running `pip install -r requirements.txt` in a virtual environment.

rrxsir commented 1 year ago

Hi, which Python version should be used?

zxlzr commented 1 year ago

3.8

rrxsir commented 1 year ago

Hi, pytorch_lightning==1.3.1 is correctly installed in my environment, and I also reinstalled the environment with the command above, but the error stays the same.

flow3rdown commented 1 year ago

> When I run `bash scripts/retacred.sh`, I get `AttributeError: module 'lit_models' has no attribute 'TransformerLitModel'`, followed by `--model_name_or_path: command not found` (and the same for every other flag). What is causing this?

Hi, please check whether there is an error in the contents of the shell script retacred.sh. If the arguments in the script are split across multiple lines, each line must end with a `\` continuation character.
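As a minimal sketch of the continuation rule (using `echo` and two of the script's flags as stand-ins for the real `python main.py` invocation): the trailing backslash splices the next line onto the same logical command line.

```shell
# With a trailing backslash, the two lines form ONE command: echo
# receives both flags as plain arguments and prints them together.
echo --litmodel_class BertLitModel \
    --lr 3e-5
# prints: --litmodel_class BertLitModel --lr 3e-5

# Without the backslash, the second line would run as its own
# command, and the shell would report "--lr: command not found".
```

This is exactly why the error log shows one `command not found` per flag line: without the continuations, each `--flag value` line is executed as a separate command.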

rrxsir commented 1 year ago

Hi, this file has not been modified, and I checked that every line ends with a `\`.

flow3rdown commented 1 year ago

Could you paste your script? From the error it looks like none of the arguments from `--model_name_or_path` onward are being read, so their default values are used.

rrxsir commented 1 year ago

When installing the environment I changed the pytorch version to 1.8.2 and used torchmetrics 0.6.0 (because of other errors); I am not sure whether that matters.

rrxsir commented 1 year ago

The commands I entered earlier are exactly the ones from the README, unchanged.

rrxsir commented 1 year ago
```shell
CUDA_VISIBLE_DEVICES=1 python main.py --max_epochs=5  --num_workers=8 \
    --model_name_or_path roberta-large \
    --accumulate_grad_batches 4 \
    --batch_size 16 \
    --data_dir dataset/retacred \
    --check_val_every_n_epoch 1 \
    --data_class WIKI80 \
    --max_seq_length 256 \
    --model_class RobertaForPrompt \
    --t_lambda 0.001 \
    --wandb \
    --litmodel_class BertLitModel \
    --task_name wiki80 \
    --lr 3e-5
```
flow3rdown commented 1 year ago
```shell
CUDA_VISIBLE_DEVICES=1 python main.py --max_epochs=5  --num_workers=8 \
    --model_name_or_path roberta-large \
    --accumulate_grad_batches 4 \
    --batch_size 16 \
    --data_dir dataset/retacred \
    --check_val_every_n_epoch 1 \
    --data_class WIKI80 \
    --max_seq_length 256 \
    --model_class RobertaForPrompt \
    --t_lambda 0.001 \
    --wandb \
    --litmodel_class BertLitModel \
    --task_name wiki80 \
    --lr 3e-5
```

Did you run this script with `bash scripts/retacred.sh`? When the script runs correctly, the value of `--litmodel_class` is `BertLitModel` rather than the default `TransformerLitModel`. I can reproduce the problem you described by running the following script (note the missing `\` continuations):

```shell
CUDA_VISIBLE_DEVICES=1 python main.py --max_epochs=5  --num_workers=8
    --model_name_or_path roberta-large
    --accumulate_grad_batches 4
    --batch_size 16
    --data_dir dataset/retacred
    --check_val_every_n_epoch 1
    --data_class WIKI80
    --max_seq_length 256
    --model_class RobertaForPrompt
    --t_lambda 0.001
    --wandb
    --litmodel_class BertLitModel
    --task_name wiki80
    --lr 3e-5
```
rrxsir commented 1 year ago

Hi, the command I run is

```shell
bash scripts/semeval.sh
```

and the contents of semeval.sh are:

```shell
CUDA_VISIBLE_DEVICES=0 python main.py --max_epochs=10  --num_workers=8 \
    --model_name_or_path roberta-large_ \
    --accumulate_grad_batches 4 \
    --batch_size 16 \
    --data_dir dataset/semeval \
    --check_val_every_n_epoch 1 \
    --data_class WIKI80 \
    --max_seq_length 256 \
    --model_class RobertaForPrompt \
    --t_lambda 0.001 \
    --wandb \
    --litmodel_class BertLitModel \
    --task_name wiki80 \
    --lr 3e-5
```
rrxsir commented 1 year ago

Hi, I found a comment `# 90.2` at the very bottom of scripts/semeval.sh; after deleting it, the problem seems to be solved.
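For reference, here is one way a stray comment can break a backslash-continued command (a sketch of the general mechanism, not necessarily the exact failure mode in semeval.sh): if a comment line gets spliced into a continued command, the `#` terminates the logical line, and every flag line after it runs as a standalone command, producing exactly the `command not found` messages from the original report.

```shell
# 'echo ... \' continues onto the comment line; the '#' there ends
# the logical command, so '--lr 3e-5' is never part of the echo and
# instead runs as a command of its own, failing with "not found".
echo --litmodel_class BertLitModel \
    # this comment cuts the command short
    --lr 3e-5
```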