FadedCosine / kNN-KD

Code for paper "Nearest Neighbor Knowledge Distillation for Neural Machine Translation" by Zhixian Yang, Renliang Sun, and Xiaojun Wan. This paper is accepted by NAACL 2022 Main Conference.
MIT License

Fail to Reproduce Results of Multi-Domain Dataset #3

Open OwenNJU opened 1 year ago

OwenNJU commented 1 year ago

Hi, authors

Thanks for your nice work!

I am trying to reproduce the results on the multi-domain datasets.

However, the results I get are quite different from those reported in the paper:

|                    | Medical | Law   | Koran |
|--------------------|---------|-------|-------|
| Reproduced results | 19.06   | 11.09 | 12.41 |
| Reported results   | 56.50   | 61.89 | 24.86 |

save_and_train_datastore.sh

```bash
DOMAIN=$1

declare -A DSTORE_SIZE
DSTORE_SIZE[medical]=6903141; DSTORE_SIZE[law]=19062738; DSTORE_SIZE[koran]=524374

MODEL_PATH=/path/to/wmt19/pretrain/model
DATA_PATH=/path/to/data
DSTORE_PATH=/path/to/datastore

mkdir -p $DSTORE_PATH

python ../save_datastore.py $DATA_PATH \
    --dataset-impl mmap \
    --task translation \
    --valid-subset train \
    --path $MODEL_PATH \
    --max-tokens 4096 \
    --skip-invalid-size-inputs-valid-test \
    --decoder-embed-dim 1024 --dstore-fp16 --dstore-size ${DSTORE_SIZE[$DOMAIN]} --dstore-mmap $DSTORE_PATH

python ../train_datastore_gpu.py \
  --dstore_mmap $DSTORE_PATH \
  --dstore_size ${DSTORE_SIZE[$DOMAIN]} \
  --dstore-fp16 \
  --faiss_index ${DSTORE_PATH}/knn_index \
  --ncentroids 4096 \
  --probe 32 \
  --dimension 1024
```
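As a quick sanity check that the datastore was written completely before training the FAISS index, the expected on-disk size of the key memmap can be computed from `--dstore-size` and the decoder dimension. This is a hedged helper, not part of the repo: it assumes float16 keys of dimension 1024, matching `--dstore-fp16` and `--decoder-embed-dim 1024` above (`expected_key_bytes` and the comparison against the file are illustrative; check `save_datastore.py` for the actual filenames).

```python
def expected_key_bytes(dstore_size: int, dim: int = 1024, fp16: bool = True) -> int:
    """Bytes the key memmap should occupy: one dim-wide vector per target token."""
    itemsize = 2 if fp16 else 4  # float16 vs float32
    return dstore_size * dim * itemsize

# Sizes from save_and_train_datastore.sh
DSTORE_SIZE = {"medical": 6903141, "law": 19062738, "koran": 524374}

for domain, size in DSTORE_SIZE.items():
    # Compare this number against os.path.getsize() of the key file on disk.
    print(f"{domain}: keys should be {expected_key_bytes(size)} bytes")
```

If the file on disk is smaller than this, the datastore pass was truncated and the FAISS index will be trained on garbage.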

save_knntargets.sh

```bash
DOMAIN=$1
declare -A DSTORE_SIZE TrainKNNTarget_SIZE ValidKNNTarget_SIZE
DSTORE_SIZE[medical]=6903141; DSTORE_SIZE[law]=19062738; DSTORE_SIZE[koran]=524374;
TrainKNNTarget_SIZE[medical]=6903141; TrainKNNTarget_SIZE[law]=19062738; TrainKNNTarget_SIZE[koran]=524374;
ValidKNNTarget_SIZE[medical]=56613; ValidKNNTarget_SIZE[law]=82351; ValidKNNTarget_SIZE[koran]=58318;

MODEL_PATH=/path/to/wmt19/pretrain/model
DATA_PATH=/path/to/data
DATASTORE_PATH=/path/to/datastore
KNNTarget_PATH=/path/to/datastore

K=64
TEM=10

mkdir -p $KNNTarget_PATH

python ../save_knntargets.py $DATA_PATH \
    --dataset-impl mmap \
    --task translation_build_knntargets \
    --valid-subset train \
    --save-k $K \
    --path $MODEL_PATH \
    --batch-size 1 \
    --skip-invalid-size-inputs-valid-test \
    --decoder-embed-dim 1024 --knndistance-fp16 --knntarget-size ${TrainKNNTarget_SIZE[$DOMAIN]} --knntarget-mmap $KNNTarget_PATH \
    --knn-temperature $TEM \
    --seed 910 \
    --model-overrides "{'load_knn_datastore': True, 'use_knn_datastore': False,
    'dstore_filename': '$DATASTORE_PATH', 'dstore_size': ${DSTORE_SIZE[$DOMAIN]}, 'dstore_fp16': True, 'k': $K, 'probe': 32,
    'knn_sim_func': 'do_not_recomp_l2', 'use_gpu_to_search': True, 'move_dstore_to_mem': True, 'no_load_keys': True,
    'knn_lambda_type': 'fix', 'knn_lambda_value': 0.7, 'knn_temperature_type': 'fix', 'knn_temperature_value': 10,
     }"

python ../save_knntargets.py $DATA_PATH \
    --dataset-impl mmap \
    --task translation_build_knntargets \
    --valid-subset valid \
    --save-k $K \
    --path $MODEL_PATH \
    --batch-size 1 \
    --skip-invalid-size-inputs-valid-test \
    --decoder-embed-dim 1024 --knndistance-fp16 --knntarget-size ${ValidKNNTarget_SIZE[$DOMAIN]} --knntarget-mmap $KNNTarget_PATH \
    --knn-temperature $TEM \
    --seed 910 \
    --model-overrides "{'load_knn_datastore': True, 'use_knn_datastore': False,
    'dstore_filename': '$DATASTORE_PATH', 'dstore_size': ${DSTORE_SIZE[$DOMAIN]}, 'dstore_fp16': True, 'k': $K, 'probe': 32,
    'knn_sim_func': 'do_not_recomp_l2', 'use_gpu_to_search': True, 'move_dstore_to_mem': True, 'no_load_keys': True,
    'knn_lambda_type': 'fix', 'knn_lambda_value': 0.7, 'knn_temperature_type': 'fix', 'knn_temperature_value': 10,
     }"
```
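The two passes above dump, for every target position in train/valid, the `--save-k 64` retrieved neighbors into memmaps under `$KNNTarget_PATH`. The sketch below shows one plausible way such a dump could be read back for inspection; it is a toy round-trip, not the repo's format (the filename `knn_targets.mmap`, the `int32` dtype, and the `(size, K)` row layout are all assumptions, so check `save_knntargets.py` for the real layout before using this on actual output).

```python
import os
import tempfile

import numpy as np

K = 64  # matches --save-k in save_knntargets.sh

# Hypothetical layout: one row of K neighbor target-token ids per target position.
def open_knn_targets(path: str, size: int, k: int = K) -> np.memmap:
    return np.memmap(path, dtype=np.int32, mode="r", shape=(size, k))

# Self-contained round-trip on a toy file with 3 positions:
with tempfile.TemporaryDirectory() as d:
    f = os.path.join(d, "knn_targets.mmap")
    toy = np.memmap(f, dtype=np.int32, mode="w+", shape=(3, K))
    toy[:] = np.arange(3 * K).reshape(3, K)
    toy.flush()
    first_row = open_knn_targets(f, 3)[0, :4].tolist()

print(first_row)  # first four neighbor ids of position 0
```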

train_knnmt.sh

```bash
DOMAIN=$1

declare -A TrainKNNTarget_SIZE ValidKNNTarget_SIZE K T EPOCH
TrainKNNTarget_SIZE[medical]=6903141; TrainKNNTarget_SIZE[law]=19062738; TrainKNNTarget_SIZE[koran]=524374;
ValidKNNTarget_SIZE[medical]=56613; ValidKNNTarget_SIZE[law]=82351; ValidKNNTarget_SIZE[koran]=58318;
K[medical]=4; K[law]=4; K[koran]=16;
T[medical]=10; T[law]=10; T[koran]=100;
EPOCH[medical]=100; EPOCH[law]=120; EPOCH[koran]=250;

DATA_PATH=/path/to/data
DSTORE_DIR=/path/to/datastore
SAVE_DIR=/path/to/save/model

python ../train.py \
    $DATA_PATH \
    --save-dir ${SAVE_DIR} --keep-last-epochs 3 \
    --task translation_with_stored_knnls \
    --knn-k ${K[$DOMAIN]} --save-k 64 --train-knntarget-size ${TrainKNNTarget_SIZE[$DOMAIN]} --valid-knntarget-size ${ValidKNNTarget_SIZE[$DOMAIN]} \
    --knntarget-filename $DSTORE_DIR --knndistance-fp16 \
    --arch transformer_wmt19_de_en_with_datastore --share-all-embeddings \
    --optimizer adam --adam-betas '(0.9, 0.98)' --clip-norm 0.0 \
    --lr 5e-4 --lr-scheduler inverse_sqrt --min-lr 1e-09 \
    --warmup-updates 4000 --warmup-init-lr 1e-07 \
    --dropout 0.2 --weight-decay 0.0001 \
    --attention-dropout 0.1 --activation-dropout 0.1 \
    --criterion knn_label_smoothed_cross_entropy --label-smoothing 0.1 --knn-temp ${T[$DOMAIN]} --distil-strategy 'knn_kd' \
    --max-tokens 1280 --update-freq 8 --max-epoch ${EPOCH[$DOMAIN]} --patience 20 \
    --eval-bleu \
    --eval-bleu-args '{"beam": 4, "lenpen": 0.6, "max_len_a": 1.2, "max_len_b": 10}' \
    --eval-bleu-detok moses \
    --eval-bleu-remove-bpe \
    --eval-bleu-print-samples \
    --best-checkpoint-metric bleu --maximize-best-checkpoint-metric \
    --fp16 --seed 910
```
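For orientation: the `--distil-strategy 'knn_kd'` criterion distills from a retrieval-based teacher whose distribution, as described in the paper, aggregates the retrieved neighbors by their target token, weighted by a softmax over negative distances scaled by the temperature `--knn-temp`. Below is a hedged numpy sketch of that teacher distribution only, not the repo's `knn_label_smoothed_cross_entropy` implementation; `knn_teacher_dist` and the toy numbers are illustrative.

```python
import numpy as np

def knn_teacher_dist(neighbor_tokens, distances, vocab_size, temp=10.0):
    """Teacher distribution over the vocab from k retrieved neighbors:
    p(y) proportional to the sum of exp(-d_i / T) over neighbors whose target is y."""
    w = np.exp(-np.asarray(distances, dtype=np.float64) / temp)
    p = np.zeros(vocab_size)
    np.add.at(p, neighbor_tokens, w)  # accumulate weight per target token
    return p / p.sum()

# Toy example: 4 neighbors over a 5-word vocab, T=10 as for medical/law.
p = knn_teacher_dist([2, 2, 4, 1], [1.0, 2.0, 5.0, 9.0], vocab_size=5, temp=10.0)
print(p.argmax())  # token 2 wins: it is backed by the two closest neighbors
```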
OwenNJU commented 1 year ago

(1/2) Here are my shell scripts (save_and_train_datastore.sh, save_knntargets.sh, train_knnmt.sh). Could the authors help me with this issue? Looking forward to your reply!

FadedCosine commented 1 year ago

> However, the results I get are quite different from those reported in the paper:
>
> |                    | Medical | Law   | Koran |
> |--------------------|---------|-------|-------|
> | Reproduced results | 19.06   | 11.09 | 12.41 |
> | Reported results   | 56.50   | 61.89 | 24.86 |

It seems that the model behind your reproduced results has not been trained properly. Can you double-check this? Also, we provide the processed data and the checkpoints at this link. I hope this solves your problem.

OwenNJU commented 1 year ago

Thanks for your reply!

After carefully checking the paper and the provided scripts, I found that the description of the batch size is inconsistent (and batch size has a huge impact on training):

When I use a larger effective batch size (from `--max-tokens 1280` with `--update-freq 8` to `--max-tokens 8192` with `--update-freq 4`), the performance becomes normal.
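To make the change concrete, the effective batch size per update is roughly `--max-tokens` times `--update-freq` (times the number of GPUs, which is not stated in this thread). A quick illustrative calculation:

```python
# (max_tokens, update_freq) for the two settings tried in this thread.
settings = {
    "original attempt": (1280, 8),   # --max-tokens 1280 --update-freq 8
    "working setting":  (8192, 4),   # --max-tokens 8192 --update-freq 4
}

# Effective target tokens per optimizer update, per GPU.
effective = {name: mt * uf for name, (mt, uf) in settings.items()}

for name, tokens in effective.items():
    print(f"{name}: ~{tokens} tokens per update")
```

So the working setting processes roughly 3x more tokens per update, which plausibly explains the large BLEU gap between the two runs.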

However, there is still a gap between my reproduced results and the reported results (especially on Law and Koran).

Could you check my attached scripts? Maybe something else differs from your original scripts?

FadedCosine commented 11 months ago

Sorry, as I have already graduated, I cannot access my experimental materials at this time. Could you use the checkpoints we provided to continue your experiments instead?