tonytan48 / KD-DocRE

Implementation of Document-level Relation Extraction with Knowledge Distillation and Adaptive Focal Loss

terminated #19

Open XingYu131 opened 1 year ago

XingYu131 commented 1 year ago

```
Loaded train features
Loaded dev features
Loaded test features
Some weights of the model checkpoint at bert-base-cased were not used when initializing BertModel: ['cls.predictions.decoder.weight', 'cls.predictions.transform.LayerNorm.weight', 'cls.predictions.transform.LayerNorm.bias', 'cls.seq_relationship.weight', 'cls.predictions.bias', 'cls.seq_relationship.bias', 'cls.predictions.transform.dense.bias', 'cls.predictions.transform.dense.weight']
```

Hello, I can't start training; the program terminates automatically. What could be the reason?

SnowWangyue commented 1 year ago

Have you solved this problem?

SnowWangyue commented 1 year ago

```
Loaded train features
Loaded dev features
Loaded test features
Some weights of the model checkpoint at bert-base-cased were not used when initializing BertModel: ['cls.predictions.decoder.weight', 'cls.predictions.transform.LayerNorm.weight', 'cls.predictions.transform.LayerNorm.bias', 'cls.seq_relationship.weight', 'cls.predictions.bias', 'cls.seq_relationship.bias', 'cls.predictions.transform.dense.bias', 'cls.predictions.transform.dense.weight']
- This IS expected if you are initializing BertModel from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing BertModel from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
/home/guoweihu/anaconda3/envs/torch1.5/lib/python3.7/site-packages/transformers/optimization.py:310: FutureWarning: This implementation of AdamW is deprecated and will be removed in a future version. Use the PyTorch implementation torch.optim.AdamW instead, or set `no_deprecation_warning=True` to disable this warning
  FutureWarning,
/home/guoweihu/anaconda3/envs/torch1.5/lib/python3.7/site-packages/apex/__init__.py:68: DeprecatedFeatureWarning: apex.amp is deprecated and will be removed by the end of February 2023. Use PyTorch AMP
  warnings.warn(msg, DeprecatedFeatureWarning)
scripts/batch_bert.sh: line 19:  1577 Terminated              python train.py --data_dir docred_data --transformer_type bert --model_name_or_path bert-base-cased --save_path checkpoints/bert-annotated-3.pt --train_file train_annotated.json --dev_file dev.json --test_file test.json --train_batch_size 2 --test_batch_size 2 --gradient_accumulation_steps 1 --evaluation_steps 5000 --num_labels 4 --classifier_lr 1e-4 --learning_rate 3e-5 --max_grad_norm 1.0 --warmup_ratio 0.06 --num_train_epochs 50.0 --seed 66 --num_class 97
```
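For context on the `Terminated` line: that message is printed by bash job control when a child process is killed by SIGTERM from outside the script — on Linux this is commonly the kernel OOM killer reclaiming memory, or a cluster scheduler enforcing a resource limit (an assumption here; the log alone does not say who sent the signal). A minimal generic sketch of the mechanics, not specific to this repo:

```shell
#!/bin/sh
# Start a long-running process in the background, then send it SIGTERM,
# mimicking what the OOM killer or a batch scheduler would do.
sleep 60 &
pid=$!
kill -TERM "$pid"
wait "$pid"
status=$?
# The shell reports 128 + signal number; SIGTERM is 15, so 143 —
# this is the same condition bash prints as "Terminated".
echo "$status"
```

If the OOM killer is suspected, the kernel log (e.g. `dmesg | grep -i 'killed process'`, assuming you have permission to read it) records which process was killed; reducing memory pressure (smaller batch size, fewer workers) is the usual remedy.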
