huggingface / transformers

🤗 Transformers: State-of-the-art Machine Learning for Pytorch, TensorFlow, and JAX.
https://huggingface.co/transformers
Apache License 2.0

Model is saved every eval_steps steps if eval_steps < save_steps. Is this expected behavior? #12315

Closed · sam-writer closed this issue 3 years ago

sam-writer commented 3 years ago

Who can help

@sgugger

Information

Model I am using (Bert, XLNet ...): Bert, but I don't think that is relevant


To reproduce

Steps to reproduce the behavior:

  1. Create a TrainingArguments object with eval_steps < save_steps, and both evaluation_strategy and save_strategy set to "steps"
  2. Pass those to a Trainer
  3. Model checkpoints every eval_steps steps, not every save_steps steps

Here is my TrainingArguments code:

args = TrainingArguments(
    output_dir=outpath,
    save_total_limit=10,
    load_best_model_at_end=True,
    save_strategy="steps" if cli_args.save_steps is not None else "epoch",
    save_steps=cli_args.save_steps,
    evaluation_strategy="steps" if cli_args.eval_steps is not None else "epoch",
    eval_steps=cli_args.eval_steps,
    metric_for_best_model="loss",
    learning_rate=cli_args.learning_rate,
    per_device_train_batch_size=cli_args.batch_size,
    per_device_eval_batch_size=cli_args.batch_size,
    num_train_epochs=cli_args.num_train_epochs,
    weight_decay=cli_args.weight_decay,
    fp16=cli_args.fp16,
    deepspeed=deepspeed,
    local_rank=cli_args.local_rank,
)

with the values I am using filled in, this is:

args = TrainingArguments(
    output_dir="ten_m/model",
    save_total_limit=10,
    load_best_model_at_end=True,
    save_strategy="steps",
    save_steps=6,  # for testing
    evaluation_strategy="steps",
    eval_steps=2,  # for testing
    metric_for_best_model="loss",
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    num_train_epochs=3,
    weight_decay=0.01,
    fp16=False,
    deepspeed=None,
    local_rank=-1,
)
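For concreteness (an illustrative sketch added here, not code from the issue): with the test values above, the checkpoint steps one would expect from save_steps=6 versus what was actually observed (a save every eval_steps=2 steps) look like this:

```python
# Sketch of the reported mismatch: with eval_steps=2 and save_steps=6,
# checkpoints were written on the eval schedule, not the save schedule.
eval_steps, save_steps, total_steps = 2, 6, 12

expected_saves = [s for s in range(1, total_steps + 1) if s % save_steps == 0]
observed_saves = [s for s in range(1, total_steps + 1) if s % eval_steps == 0]

print("expected:", expected_saves)  # expected: [6, 12]
print("observed:", observed_saves)  # observed: [2, 4, 6, 8, 10, 12]
```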

Expected behavior

Well, maybe this is expected? But if so, I feel like it should be documented more obviously.

I wrote a callback that uploads each saved checkpoint to GCS. Evaluation is very quick, so I planned to evaluate much more often than I save; but if every evaluation also triggers an upload to GCS, I will have to evaluate less often. I also verified that even without the GCS callback, with the settings above a checkpoint is saved every 2 steps, not every 6.

If this is expected behavior, is the correct way to change it to write a Callback whose on_evaluate sets should_save to False on the transformers.TrainerControl argument it receives?
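The callback idea above can be sketched as follows. A minimal stand-in TrainerControl is defined here so the sketch runs without transformers installed; in a real Trainer, on_evaluate receives the actual transformers.TrainerControl, and the Trainer consults its should_save flag after each event:

```python
# Stand-in for transformers.TrainerControl (only the flag we care about).
class TrainerControl:
    def __init__(self):
        self.should_save = True

# Sketch of a TrainerCallback that cancels any save requested at eval time.
# With transformers installed this would subclass transformers.TrainerCallback.
class NoSaveOnEvalCallback:
    def on_evaluate(self, args, state, control, **kwargs):
        # Undo a save request that was triggered alongside evaluation
        control.should_save = False
        return control

control = TrainerControl()
NoSaveOnEvalCallback().on_evaluate(None, None, control)
print(control.should_save)  # False
```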

Thank you

sgugger commented 3 years ago

You can't have different evaluation and save intervals when using load_best_model_at_end=True (saves need to be synchronized with evaluations, otherwise we can't keep track of the best model). Remove that option and evaluation and saving will be disconnected as requested.
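Concretely, a sketch of the poster's filled-in arguments with that option dropped (values taken from the config quoted earlier in this issue; not a verified fix):

```python
args = TrainingArguments(
    output_dir="ten_m/model",
    save_total_limit=10,
    # load_best_model_at_end removed so save_steps and eval_steps can differ
    save_strategy="steps",
    save_steps=6,
    evaluation_strategy="steps",
    eval_steps=2,
    # remaining arguments unchanged from the original config
)
```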

sam-writer commented 3 years ago

Thank you, that makes sense.

Also, now that I know it's related, I immediately noticed this: [screenshot]

Might be worth mentioning under save_strategy as well? But maybe it was just me.

sgugger commented 3 years ago

Sure! Do you want to make a PR with that change?

sam-writer commented 3 years ago

sure!

sam-writer commented 3 years ago

haha it's been a while! [screenshot]

sgugger commented 3 years ago

Oh indeed! :sweat_smile: