thunlp / PL-Marker

Source code for "Packed Levitated Marker for Entity and Relation Extraction"
MIT License

Question about usage #15

Closed lxBa0 closed 2 years ago

lxBa0 commented 2 years ago

Hello, I'd like to feed a piece of my own text into your model and extract the entities and relations from it. How should I go about this?

YeDeming commented 2 years ago

Hi, we don't provide such an interface at the moment. You would need to convert your data into the input format described in the readme and then run the command.
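(For readers with the same question: below is a minimal sketch of what one line of that input format looks like. The field names and index conventions are assumptions based on the SciERC-style processed data the readme refers to; please verify against the readme and the released sample files before relying on them.)

```
import json
import os

# One document per line (JSON Lines). "ner" spans are assumed to be
# [start, end, label] and "relations" [head_start, head_end, tail_start,
# tail_end, label], with token indices counted over the whole document and
# end indices inclusive -- check against the sample data.
doc = {
    "doc_key": "my_doc_0",
    # A list of sentences, each a list of tokens.
    "sentences": [["BERT", "is", "a", "pretrained", "language", "model", "."]],
    # Per-sentence entity spans.
    "ner": [[[0, 0, "Method"], [3, 5, "Method"]]],
    # Per-sentence relations.
    "relations": [[[0, 0, 3, 5, "HYPONYM-OF"]]],
}

os.makedirs("scierc", exist_ok=True)
with open("scierc/my_train.json", "w", encoding="utf-8") as f:
    f.write(json.dumps(doc) + "\n")
```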

lxBa0 commented 2 years ago

Thank you for your reply. If I want to get predictions, I first format my data as described in the readme but leave relations and ner empty, and then run the command below to get the prediction results, right?

```
CUDA_VISIBLE_DEVICES=0 python3 run_acener.py --model_type bertspanmarker \
    --model_name_or_path ../bert_models/scibert-uncased --do_lower_case \
    --data_dir scierc \
    --learning_rate 2e-5 --num_train_epochs 50 --per_gpu_train_batch_size 8 --per_gpu_eval_batch_size 16 --gradient_accumulation_steps 1 \
    --max_seq_length 512 --save_steps 2000 --max_pair_length 256 --max_mention_ori_length 8 \
    --do_eval --evaluate_during_training --eval_all_checkpoints \
    --fp16 --seed 42 --onedropout --lminit \
    --train_file train.json --dev_file dev.json --test_file test.json \
    --output_dir sciner_models/sciner-scibert --overwrite_output_dir --output_results
```

YeDeming commented 2 years ago

Change the --test_file path, set ner to [] and relations to [], and you should be able to get the NER results.
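(A minimal sketch of preparing such a prediction-only test file from raw text. The file name, doc_key, and whitespace tokenization are just examples; the point is the empty "ner"/"relations" placeholders.)

```
import json
import os

# Sentences to run prediction on, each already split into tokens
# (use a proper tokenizer for anything beyond a quick test).
sentences = [
    "We propose a neural model for relation extraction .".split(),
    "The model is evaluated on the SciERC dataset .".split(),
]

doc = {
    "doc_key": "my_input_0",
    "sentences": sentences,
    # No gold annotations: one empty list per sentence.
    "ner": [[] for _ in sentences],
    "relations": [[] for _ in sentences],
}

os.makedirs("scierc", exist_ok=True)
# Point --test_file at this file; the path appears to be joined with
# --data_dir (as in the original command), but verify against the script.
with open("scierc/my_test.json", "w", encoding="utf-8") as f:
    f.write(json.dumps(doc) + "\n")
```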

lxBa0 commented 2 years ago

So I just need to replace the --test_file path with the path of my processed file? Other arguments such as --train_file don't need to change?

YeDeming commented 2 years ago

Yes. When only --do_eval is used, only the test_file is read.
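(Putting the replies together, the prediction run would then look roughly like the earlier command with --test_file swapped for the prepared file and --model_name_or_path pointed at a trained NER checkpoint, here assumed to be the output directory of an earlier training run or a downloaded checkpoint; this is only a sketch of the adjustment, not a new recipe.)

```
CUDA_VISIBLE_DEVICES=0 python3 run_acener.py --model_type bertspanmarker \
    --model_name_or_path sciner_models/sciner-scibert --do_lower_case \
    --data_dir scierc --test_file my_test.json \
    --max_seq_length 512 --max_pair_length 256 --max_mention_ori_length 8 \
    --per_gpu_eval_batch_size 16 \
    --do_eval --fp16 --seed 42 --onedropout --lminit \
    --output_dir sciner_models/sciner-scibert --output_results
```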

lxBa0 commented 2 years ago

Thanks a lot! I'll give it a try.

lxBa0 commented 2 years ago

One more question, please: my run always gets stuck at this point:

```
02/23/2022 17:35:52 - INFO - transformers.tokenization_utils - Model name './sciner-scibert' not found in model shortcut name list (bert-base-uncased, bert-large-uncased, bert-base-cased, bert-large-cased, bert-base-multilingual-uncased, bert-base-multilingual-cased, bert-base-chinese, bert-base-german-cased, bert-large-uncased-whole-word-masking, bert-large-cased-whole-word-masking, bert-large-uncased-whole-word-masking-finetuned-squad, bert-large-cased-whole-word-masking-finetuned-squad, bert-base-cased-finetuned-mrpc, bert-base-german-dbmdz-cased, bert-base-german-dbmdz-uncased, bert-base-finnish-cased-v1, bert-base-finnish-uncased-v1, bert-base-dutch-cased). Assuming './sciner-scibert' is a path, a model identifier, or url to a directory containing tokenizer files.
02/23/2022 17:35:52 - INFO - transformers.tokenization_utils - Didn't find file ./sciner-scibert\added_tokens.json. We won't load it.
02/23/2022 17:35:52 - INFO - transformers.tokenization_utils - loading file ./sciner-scibert\vocab.txt
02/23/2022 17:35:52 - INFO - transformers.tokenization_utils - loading file None
02/23/2022 17:35:52 - INFO - transformers.tokenization_utils - loading file ./sciner-scibert\special_tokens_map.json
02/23/2022 17:35:52 - INFO - transformers.tokenization_utils - loading file ./sciner-scibert\tokenizer_config.json
02/23/2022 17:35:52 - INFO - transformers.modeling_utils - loading weights file ./sciner-scibert\pytorch_model.bin
02/23/2022 17:35:59 - INFO - __main__ - entity_id: 8494
02/23/2022 17:35:59 - INFO - __main__ - mask_id: 104
02/23/2022 17:36:00 - INFO - __main__ - Training/evaluation parameters Namespace(adam_epsilon=1e-08, alpha=1, cache_dir='', config_name='', data_dir='scierc', dev_file='dev.json', device=device(type='cpu'), do_eval=True, do_lower_case=True, do_test=False, do_train=True, eval_all_checkpoints=True, evaluate_during_training=True, fp16=True, fp16_opt_level='O1', gradient_accumulation_steps=1, group_axis=-1, group_edge=False, group_sort=False, learning_rate=2e-05, lminit=True, local_rank=-1, logging_steps=5, max_grad_norm=1.0, max_mention_ori_length=8, max_pair_length=256, max_seq_length=512, max_steps=-1, model_name_or_path='./sciner-scibert', model_type='bertspanmarker', n_gpu=0, no_cuda=False, no_test=False, norm_emb=False, num_train_epochs=50.0, onedropout=True, output_dir='sciner_models/PL-Marker-scierc-scibert-42', output_results=True, overwrite_cache=False, overwrite_output_dir=True, per_gpu_eval_batch_size=16, per_gpu_train_batch_size=8, save_steps=2000, save_total_limit=1, seed=42, server_ip='', server_port='', shuffle=False, test_file='test.json', tokenizer_name='', train_file='train.json', use_full_layer=-1, warmup_steps=-1, weight_decay=0.0)
02/23/2022 17:36:02 - INFO - __main__ - maxL: 334
02/23/2022 17:36:02 - INFO - __main__ - maxR: 780
02/23/2022 17:36:02 - INFO - __main__ - ***** Running training *****
02/23/2022 17:36:02 - INFO - __main__ - Num examples = 2103
02/23/2022 17:36:02 - INFO - __main__ - Num Epochs = 50
02/23/2022 17:36:02 - INFO - __main__ - Instantaneous batch size per GPU = 8
02/23/2022 17:36:02 - INFO - __main__ - Total train batch size (w. parallel, distributed & accumulation) = 8
02/23/2022 17:36:02 - INFO - __main__ - Gradient Accumulation steps = 1
02/23/2022 17:36:02 - INFO - __main__ - Total optimization steps = 13150
Experiment dir : sciner_models/PL-Marker-scierc-scibert-42
scierc\train.json
Epoch:      0%|          | 0/50 [00:00<?, ?it/s]
Iteration:  0%|          | 0/263 [00:00<?, ?it/s]
```

YeDeming commented 2 years ago

Sorry, I can't tell the cause from this either at the moment.