
Reader model training: use question & context (1 passage) -> question & context (5 passages) #31

Open raki-1203 opened 2 years ago

Augment the train data used for MRC training with similar passages retrieved via Elasticsearch, so that each training example looks like the k documents that get concatenated at inference time.

This is meant to reduce the mismatch between training and inference.

A concat_train.pkl file is built once and then reused. For retrieval, Elasticsearch uses preprocess_wiki.json (the preprocessed wiki contexts), stored under index_name=preprocess-wiki-index.
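A minimal sketch of how such a concatenated training set can be built (this is not the project's actual elastic_search module; the Elasticsearch host, the `text` field name, and the helper names are assumptions):

```python
import pickle
from elasticsearch import Elasticsearch

# Assumes a local Elasticsearch node that already holds the preprocessed wiki
# passages from preprocess_wiki.json under index "preprocess-wiki-index",
# with the passage body stored in a field named "text" (field name assumed).
es = Elasticsearch("http://localhost:9200")

def build_concat_context(question, gt_context, k=5, index_name="preprocess-wiki-index"):
    """Retrieve similar passages for the question and append k-1 of them after
    the ground-truth passage, mimicking the top-k concatenation the reader
    sees at inference time. (elasticsearch-py 7.x style call.)"""
    res = es.search(index=index_name, body={"query": {"match": {"text": question}}}, size=k)
    retrieved = [hit["_source"]["text"] for hit in res["hits"]["hits"]]
    extra = [c for c in retrieved if c != gt_context][: k - 1]
    # Ground truth first, so the original answer_start offset stays valid.
    return " ".join([gt_context] + extra)

# concat_contexts = [build_concat_context(q, c, k=5) for q, c in zip(questions, contexts)]
# with open("concat_train.pkl", "wb") as f:
#     pickle.dump(concat_contexts, f)
```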

I also tried attaching the retrieved contexts randomly before or after the original ground-truth passage, but the exact_match score never got above 1 and training effectively stalled; the task probably just becomes too hard that way. Suspecting the learning rate, I tried both larger and smaller values, with no effect. Since the model never seemed to learn the problem at all, I deleted every trace of that code. I probably should have at least kept the wandb runs, but I deleted those too, since the scores were stuck at rock bottom. :)
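For reference, a hypothetical sketch of what that random-placement step looks like (the actual code was deleted, so the names here are illustrative); the main bookkeeping is shifting answer_start by the length of whatever lands in front of the ground-truth passage:

```python
import random

def concat_with_random_position(gt_context, answer_start, extra_contexts, sep=" "):
    """Place the ground-truth passage at a random position among the retrieved
    passages and shift answer_start by the length of everything prepended.
    Hypothetical helper: the code for this experiment was removed."""
    pos = random.randint(0, len(extra_contexts))
    parts = extra_contexts[:pos] + [gt_context] + extra_contexts[pos:]
    prefix_len = sum(len(c) + len(sep) for c in extra_contexts[:pos])
    return sep.join(parts), answer_start + prefix_len

# new_context, new_start = concat_with_random_position(context, answer_start, extras)
```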

The tests were run with code that took a concat_k_num argument, but it became unnecessary and I deleted it. If you want to reproduce the results, please stick to the concat_k_num=5 settings, since the code for the other values no longer exists. :)

Results

Performance improved as the batch size grew. Going from 128 to 256 ran out of memory, so instead I raised the effective batch size by setting gradient_accumulation_steps to 2; eval/exact_match went up, but LB/exact_match dropped (see the sketch after the results table).

|model_name|concat_k|batch_size|eval/exact_match|LB/exact_match|
|:---:|:---:|:---:|:---:|:---:|
|klue/roberta-small|3|128|52.083|42.080|
|klue/roberta-small|5|16|42.917|32.080|
|klue/roberta-small|5|32|45|40.830|
|klue/roberta-small|5|64|51.25|42.080|
|klue/roberta-small|5|128|49.167|49.170|
|klue/roberta-small|5|256|53.75|46.250|
|klue/roberta-small|7|128|50.803|39.170|
|klue/roberta-small|10|128|49.583|46.670|
|klue/roberta-small|10|256|51.25|42.920|
|klue/roberta-large|5|16|57.083|55.830|
|klue/roberta-large|5|128|65.00|_**61.670**_|
|klue/roberta-large|5|256|**67.917**|58.330|
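The batch-256 rows were trained with gradient accumulation rather than a true per-device batch of 256. A minimal sketch with Hugging Face TrainingArguments (the output_dir is illustrative; the other values mirror the roberta-small batch-256 run):

```python
from transformers import TrainingArguments

# Effective batch = per_device_train_batch_size * gradient_accumulation_steps * n_gpus
# 128 * 2 * 1 = 256 on a single GPU, without the memory cost of a real 256-example batch.
args = TrainingArguments(
    output_dir="../output/mrc_concat_data_train/roberta-small_batch_256",  # illustrative path
    per_device_train_batch_size=128,
    gradient_accumulation_steps=2,
    num_train_epochs=20,
)
```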

> roberta-small batch 128 concat_k_num 3


python train.py --do_train --project_name mrc_concat_data_train --run_name roberta-small_batch_128_concat_3 --with_inference False --dataset_name concat --per_device_train_batch_size 128 --concat_k_num 3

python inference.py --do_predict --project_name mrc_concat_data_train --finetuned_mrc_model_path ../output/mrc_concat_data_train/roberta-small_batch_128_concat_3 --run_name roberta-small_batch_128_concat_3 --elastic_index_name preprocess-wiki-index


> roberta-small batch 16 concat_k_num 5

python train.py --do_train --project_name mrc_concat_data_train --run_name roberta-small --with_inference False --dataset_name concat --per_device_train_batch_size 16

python inference.py --do_predict --project_name mrc_concat_data_train --finetuned_mrc_model_path ../output/mrc_concat_data_train/roberta-small --elastic_index_name preprocess-wiki-index


> roberta-small batch 32 concat_k_num 5

python train.py --do_train --project_name mrc_concat_data_train --run_name roberta-small --with_inference False --dataset_name concat --per_device_train_batch_size 32

python inference.py --do_predict --project_name mrc_concat_data_train --finetuned_mrc_model_path ../output/mrc_concat_data_train/roberta-small2 --elastic_index_name preprocess-wiki-index


> roberta-small batch 64 concat_k_num 5

python train.py --do_train --project_name mrc_concat_data_train --run_name roberta-small --with_inference False --dataset_name concat --per_device_train_batch_size 64

python inference.py --do_predict --project_name mrc_concat_data_train --finetuned_mrc_model_path ../output/mrc_concat_data_train/roberta-small3 --elastic_index_name preprocess-wiki-index


> roberta-small batch 128 concat_k_num 5

python train.py --do_train --project_name mrc_concat_data_train --run_name roberta-small --with_inference False --dataset_name concat --per_device_train_batch_size 128

python inference.py --do_predict --project_name mrc_concat_data_train --finetuned_mrc_model_path ../output/mrc_concat_data_train/roberta-small4 --elastic_index_name preprocess-wiki-index


> roberta-small batch 256 concat_k_num 5

python train.py --do_train --project_name mrc_concat_data_train --run_name roberta-small --with_inference False --dataset_name concat --per_device_train_batch_size 128 --gradient_accumulation_steps 2 --num_train_epochs 20

python inference.py --do_predict --project_name mrc_concat_data_train --finetuned_mrc_model_path ../output/mrc_concat_data_train/roberta-small5 --elastic_index_name preprocess-wiki-index


> roberta-large batch 16 concat_k_num 5

python train.py --do_train --project_name mrc_concat_data_train --model_name_or_path klue/roberta-large --run_name roberta-large --with_inference False --dataset_name concat --per_device_train_batch_size 16

python inference.py --do_predict --project_name mrc_concat_data_train --finetuned_mrc_model_path ../output/mrc_concat_data_train/roberta-large --run_name roberta-large_batch_16 --elastic_index_name preprocess-wiki-index


> roberta-large batch 128 concat_k_num 5

python train.py --do_train --project_name mrc_concat_data_train --model_name_or_path klue/roberta-large --run_name roberta-large_batch_128_concat_5 --with_inference False --dataset_name concat --per_device_train_batch_size 16 --gradient_accumulation_steps 8 --num_train_epochs 20 --concat_k_num 5

python inference.py --do_predict --project_name mrc_concat_data_train --finetuned_mrc_model_path ../output/mrc_concat_data_train/roberta-large_batch_128_concat_5 --run_name roberta-large_batch_128_concat_5 --elastic_index_name preprocess-wiki-index


> roberta-large batch 256 concat_k_num 5

python train.py --do_train --project_name mrc_concat_data_train --model_name_or_path klue/roberta-large --run_name roberta-large_batch_256_concat_5 --with_inference False --dataset_name concat --per_device_train_batch_size 16 --gradient_accumulation_steps 16 --num_train_epochs 30 --concat_k_num 5

python inference.py --do_predict --project_name mrc_concat_data_train --finetuned_mrc_model_path ../output/mrc_concat_data_train/roberta-large_batch_256_concat_5 --run_name roberta-large_batch_256_concat_5 --elastic_index_name preprocess-wiki-index


> roberta-small batch 128 concat_k_num 7

python train.py --do_train --project_name mrc_concat_data_train --run_name roberta-small_batch_128_concat_7 --with_inference False --dataset_name concat --per_device_train_batch_size 128 --concat_k_num 7

python inference.py --do_predict --project_name mrc_concat_data_train --finetuned_mrc_model_path ../output/mrc_concat_data_train/roberta-small_batch_128_concat_7 --run_name roberta-small_batch_128_concat_7 --elastic_index_name preprocess-wiki-index


> roberta-small batch 128 concat_k_num 10

python train.py --do_train --project_name mrc_concat_data_train --run_name roberta-small_batch_128_concat_10 --with_inference False --dataset_name concat --per_device_train_batch_size 128 --concat_k_num 10

python inference.py --do_predict --project_name mrc_concat_data_train --finetuned_mrc_model_path ../output/mrc_concat_data_train/roberta-small_batch_128_concat_10 --run_name roberta-small_batch_128_concat_10 --elastic_index_name preprocess-wiki-index


> roberta-small batch 256 concat_k_num 10

python train.py --do_train --project_name mrc_concat_data_train --run_name roberta-small_batch_128_concat_10 --with_inference False --dataset_name concat --per_device_train_batch_size 128 --gradient_accumulation_steps 2 --num_train_epochs 20 --concat_k_num 10

python inference.py --do_predict --project_name mrc_concat_data_train --finetuned_mrc_model_path ../output/mrc_concat_data_train/roberta-small_batch_256_concat_10 --run_name roberta-small_batch_256_concat_10 --elastic_index_name preprocess-wiki-index


