liuwei1206 / LEBERT

Code for the ACL2021 paper "Lexicon Enhanced Chinese Sequence Labelling Using BERT Adapter"

IndexError: list index out of range #19

Closed s1162276945 closed 3 years ago

s1162276945 commented 3 years ago

def evaluate(model, args, dataset, label_vocab, global_step, description="dev", write_file=False):
    """ evaluate the model's performance """
    dataloader = get_dataloader(dataset, args, mode='dev')
    if (not args.do_train) and (not args.no_cuda) and args.local_rank != -1:
        model = model.cuda()
        model = torch.nn.parallel.DistributedDataParallel(
            model, device_ids=[args.local_rank],
            output_device=args.local_rank,
            find_unused_parameters=True
        )

    batch_size = dataloader.batch_size
    if args.local_rank == 0 or args.local_rank == -1:
        logger.info("***** Running %s *****", description)
        logger.info("  Num examples = %d", len(dataloader.dataset))
        logger.info("  Batch size = %d", batch_size)
    eval_losses = []
    model.eval()

    all_input_ids = None
    all_label_ids = None
    all_predict_ids = None
    all_attention_mask = None

    for batch in tqdm(dataloader, desc=description):
        # new batch data: [input_ids, token_type_ids, attention_mask, matched_word_ids,
        #                  matched_word_mask, boundary_ids, labels]
        batch_data = (batch[0], batch[2], batch[1], batch[3], batch[4], batch[5], batch[6])
        new_batch = batch_data
        batch = tuple(t.to(args.device) for t in new_batch)
        inputs = {"input_ids": batch[0], "attention_mask": batch[1], "token_type_ids": batch[2],
                  "matched_word_ids": batch[3], "matched_word_mask": batch[4],
                  "boundary_ids": batch[5], "labels": batch[6], "flag": "Predict"}
        batch_data = None
        new_batch = None

        with torch.no_grad():
            outputs = model(**inputs)
            preds = outputs[0]
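For context, the IndexError itself fires a bit further down in evaluate, when the predicted ids are mapped back to label strings. A minimal sketch of that step, assuming the conversion indexes into label_vocab.idx2item as in the vocab class quoted below (the exact loop in Trainer.py may differ):

    # Hypothetical id-to-label conversion; an id of 30 or 31 crashes here,
    # because label_vocab.idx2item only has 30 entries (valid ids 0..29).
    for pred_row, mask_row in zip(preds.tolist(), batch[1].tolist()):
        row_labels = [label_vocab.idx2item[idx]  # IndexError: list index out of range
                      for idx, m in zip(pred_row, mask_row) if m == 1]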

=========================================================================
Training runs without problems, but weibo/labels.txt has 28 tags; the two default tags '<pad>' and '<unk>' bring the total to 30, so valid ids are 0..29. Yet 31 appears among the predicted values.

O B-PER.NOM E-PER.NOM B-LOC.NAM E-LOC.NAM B-PER.NAM I-PER.NAM E-PER.NAM S-PER.NOM B-GPE.NAM E-GPE.NAM B-ORG.NAM I-ORG.NAM E-ORG.NAM I-PER.NOM S-GPE.NAM B-ORG.NOM E-ORG.NOM I-LOC.NAM I-ORG.NOM B-LOC.NOM I-LOC.NOM E-LOC.NOM B-GPE.NOM E-GPE.NOM I-GPE.NAM S-PER.NAM S-LOC.NOM
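The tag arithmetic behind the out-of-range id, as a quick sketch (the <start>/<end> naming follows liuwei1206's reply below; a common CRF convention reserves the two ids right after the real tags, which is an assumption here, not a quote from the repo's code):

    num_dataset_tags = 28                        # lines in weibo/labels.txt above
    num_defaults = 2                             # '<pad>' and '<unk>' from ItemVocabFile
    num_tags = num_dataset_tags + num_defaults   # 30, so valid ids are 0..29

    start_id = num_tags                          # 30, the CRF's internal <start> state
    end_id = num_tags + 1                        # 31, the CRF's internal <end> state
    # A predicted id of 31 therefore means the Viterbi path surfaced the CRF's
    # internal <end> state, which has no entry in the label vocab.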

class ItemVocabFile():
    """
    Build vocab from file.
    Note, each line is an item in vocab, or each items[0] is in vocab
    """
    def __init__(self, files, is_word=False, has_default=False, unk_num=0):
        self.files = files
        self.item2idx = {}
        self.idx2item = []
        self.item_size = 0
        self.is_word = is_word
        if not has_default and not self.is_word:
            self.item2idx['<pad>'] = self.item_size
            self.idx2item.append('<pad>')
            self.item_size += 1
            self.item2idx['<unk>'] = self.item_size
            self.idx2item.append('<unk>')
            self.item_size += 1

        # for unk words
        for i in range(unk_num):
            self.item2idx['<unk>{}'.format(i+1)] = self.item_size
            self.idx2item.append('<unk>{}'.format(i+1))
            self.item_size += 1

        self.init_vocab()
        print('=======labels info========')
        print(self.item2idx)
        print(self.idx2item)

=======labels info========
{'<pad>': 0, '<unk>': 1, 'O': 2, 'B-PER.NOM': 3, 'E-PER.NOM': 4, 'B-LOC.NAM': 5, 'E-LOC.NAM': 6, 'B-PER.NAM': 7, 'I-PER.NAM': 8, 'E-PER.NAM': 9, 'S-PER.NOM': 10, 'B-GPE.NAM': 11, 'E-GPE.NAM': 12, 'B-ORG.NAM': 13, 'I-ORG.NAM': 14, 'E-ORG.NAM': 15, 'I-PER.NOM': 16, 'S-GPE.NAM': 17, 'B-ORG.NOM': 18, 'E-ORG.NOM': 19, 'I-LOC.NAM': 20, 'I-ORG.NOM': 21, 'B-LOC.NOM': 22, 'I-LOC.NOM': 23, 'E-LOC.NOM': 24, 'B-GPE.NOM': 25, 'E-GPE.NOM': 26, 'I-GPE.NAM': 27, 'S-PER.NAM': 28, 'S-LOC.NOM': 29}
['<pad>', '<unk>', 'O', 'B-PER.NOM', 'E-PER.NOM', 'B-LOC.NAM', 'E-LOC.NAM', 'B-PER.NAM', 'I-PER.NAM', 'E-PER.NAM', 'S-PER.NOM', 'B-GPE.NAM', 'E-GPE.NAM', 'B-ORG.NAM', 'I-ORG.NAM', 'E-ORG.NAM', 'I-PER.NOM', 'S-GPE.NAM', 'B-ORG.NOM', 'E-ORG.NOM', 'I-LOC.NAM', 'I-ORG.NOM', 'B-LOC.NOM', 'I-LOC.NOM', 'E-LOC.NOM', 'B-GPE.NOM', 'E-GPE.NOM', 'I-GPE.NAM', 'S-PER.NAM', 'S-LOC.NOM']
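While debugging, a hedged workaround is to skip ids that the label vocab does not cover when decoding predictions. This hides the symptom rather than fixing the training, but keeps evaluation running (idx2item is the list printed above; safe_ids_to_labels is a hypothetical helper, not part of the repo):

    def safe_ids_to_labels(pred_ids, idx2item, fallback='O'):
        """Map predicted ids to labels; ids outside the vocab (e.g. the CRF's
        internal 30/31) are replaced by a fallback instead of raising."""
        return [idx2item[i] if 0 <= i < len(idx2item) else fallback
                for i in pred_ids]

    # usage: labels = safe_ids_to_labels(preds[0].tolist(), label_vocab.idx2item)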

s1162276945 commented 3 years ago

Hi Prof. Liu, in principle the predicted values should fall in [0, 29], but pred contains 31, which makes the index go out of range. I re-downloaded the Weibo data and retrained from scratch, but evaluate still throws this exception. I really can't find where the problem is. Could it be that the gradients were not updated properly? The parameters are as follows:

CUDA_VISIBLE_DEVICES='0' python3 -m torch.distributed.launch --master_port 13117 --nproc_per_node=1 \
    Trainer.py --do_eval --do_predict --evaluate_during_training \
    --data_dir="data/dataset/NER/weibo" \
    --output_dir="data/result/NER/weibo/wcbertcrf" \
    --config_name="data/berts/bert/config.json" \
    --model_name_or_path="data/berts/bert/pytorch_model.bin" \
    --vocab_file="data/berts/bert/vocab.txt" \
    --word_vocab_file="data/vocab/tencent_vocab.txt" \
    --max_scan_num=1500000 \
    --max_word_num=5 \
    --label_file="data/dataset/NER/weibo/labels.txt" \
    --word_embedding="data/embedding/word_embedding.txt" \
    --saved_embedding_dir="data/dataset/NER/weibo" \
    --model_type="WCBertCRF_Token" \
    --seed=106524 \
    --per_gpu_train_batch_size=4 \
    --per_gpu_eval_batch_size=16 \
    --learning_rate=1e-5 \
    --max_steps=-1 \
    --max_seq_length=256 \
    --num_train_epochs=2 \
    --warmup_steps=190 \
    --save_steps=600 \
    --logging_steps=100

liuwei1206 commented 3 years ago

Hi,

Since we use a CRF as the inference layer, we add <start> and <end> tokens. You can read the CRF code in detail.

Usually, if the model is trained with enough data, it will not predict <start> or <end>. So your error is probably due to poor training of the model.
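For reference, a linear-chain CRF usually makes its internal states unreachable at decode time by pinning transitions into <start> and out of <end> to a large negative score. A minimal sketch of that convention (an illustration only, not LEBERT's actual CRF class):

    import torch
    import torch.nn as nn

    class TinyCRFTransitions(nn.Module):
        """Sketch of the usual <start>/<end> handling in a CRF layer."""
        def __init__(self, num_tags):
            super().__init__()
            self.start = num_tags      # internal <start>, id 30 for 30 real tags
            self.end = num_tags + 1    # internal <end>, id 31
            self.trans = nn.Parameter(torch.randn(num_tags + 2, num_tags + 2))
            # No transition may enter <start> or leave <end>; if the decoder
            # does not enforce this (or the learned scores are poor on an
            # undertrained model), Viterbi can still emit ids 30/31.
            self.trans.data[:, self.start] = -10000.0
            self.trans.data[self.end, :] = -10000.0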

bultiful commented 2 years ago

Has this problem been solved? If it has, could you share what the cause was? Thanks. @s1162276945 @liuwei1206

lvjiujin commented 2 years ago


I've also gone over this repeatedly; even with step-by-step debugging I can't find where the problem is. In my case the out-of-range index was 29. I compared the CRF code against the soft-lexicon and lattice-lstm implementations and it is almost identical, so the CRF should be fine; the problem is probably in the model part.

lvjiujin commented 2 years ago


Could you share your email? I think we could discuss this over email.

AnddyWang commented 2 years ago

@lvjiujin #48 Has anyone solved this problem?

bultiful commented 2 years ago

Your email has been received! Best Regards!