google-research / albert

ALBERT: A Lite BERT for Self-supervised Learning of Language Representations
Apache License 2.0

The results can't be reproduced #269

[Open] kavin525zhang opened this issue 2 years ago

kavin525zhang commented 2 years ago

Hi, I ran the code with the official command, but it does not reproduce the reported results; I only get an accuracy of ~71 on RTE. Can you tell me what is wrong?

kavin525zhang commented 2 years ago

The command is:

```bash
python -m run_classifier.py \
  --output_dir=./output/RTE \
  --data_dir=./glue \
  --vocab_file=albert_base_v2/vocab.txt \
  --spm_model_file=albert_base_v2/30k-clean.model \
  --albert_config_file=albert_base_v2/albert_config.json \
  --do_lower_case \
  --do_train \
  --do_eval \
  --use_tpu=False \
  --init_checkpoint=albert_base_v2/model.ckpt-best \
  --max_seq_length=512 \
  --optimizer=adamw \
  --task_name=RTE \
  --warmup_step=200 \
  --learning_rate=3e-5 \
  --train_step=800 \
  --save_checkpoints_steps=100 \
  --train_batch_size=16
```
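One side note, as an observation rather than anything confirmed in this thread: `python -m run_classifier.py` mixes Python's script form and module form, since `-m` expects a module name without the `.py` suffix. A minimal sketch of the two usual invocation styles is below; the `albert.run_classifier` module path is an assumption based on running from the directory that contains the `albert/` package, so adjust it to your checkout layout.

```bash
# Script form: execute the file directly
# (assumes the current directory is the albert source tree).
python run_classifier.py --task_name=RTE ...

# Module form: no ".py" suffix with -m
# (assumes the albert package is importable, e.g. on PYTHONPATH).
python -m albert.run_classifier --task_name=RTE ...
```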

vqiangv commented 2 years ago

Are your results higher?