Lightning-AI / pytorch-lightning

Pretrain, finetune and deploy AI models on multiple GPUs, TPUs with zero code changes.
https://lightning.ai

min_epochs and EarlyStopping in conflict #19966

timlod opened this issue 3 months ago (status: Open)

Bug description

I use min_epochs because it can take a while before training starts to converge. EarlyStopping gets triggered quite early, so I set min_epochs high enough to get over that initial period. However, even though training is converging by the time we reach min_epochs, training is stopped immediately once min_epochs is reached, just because EarlyStopping was triggered very early on in training.

I think EarlyStopping should pick itself back up if the monitored metric improves again before min_epochs is reached.

Example Trainer config:

import lightning as L
from lightning.pytorch.callbacks import EarlyStopping

trainer = L.Trainer(
    max_epochs=10000,
    callbacks=[
        EarlyStopping(monitor="val_loss", mode="min", patience=100),
    ],
    min_epochs=1000,
)

Now imagine EarlyStopping triggering at epoch 100, but val_loss improving from epoch 101 all the way to epoch 1000: right now, training will still stop at epoch 1000.
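
Something like the following sketch would give the behaviour I have in mind. It is not an existing Lightning feature: the ForgivingEarlyStopping name is made up, and it leans on the callback's internal wait_count attribute and on trainer.fit_loop.min_epochs, so treat it as an illustration rather than a tested workaround.

from lightning.pytorch.callbacks import EarlyStopping


class ForgivingEarlyStopping(EarlyStopping):
    """Hypothetical EarlyStopping that 'picks itself back up' before min_epochs."""

    def on_validation_end(self, trainer, pl_module):
        super().on_validation_end(trainer, pl_module)
        min_epochs = trainer.fit_loop.min_epochs or 0
        # The parent class resets wait_count to 0 whenever the monitored metric
        # improves. If that happens while we are still below min_epochs, withdraw
        # the pending stop request instead of letting it fire at min_epochs.
        if trainer.current_epoch < min_epochs and self.wait_count == 0:
            trainer.should_stop = False

In the scenario above, dropping this subclass into the config in place of EarlyStopping should keep training alive past epoch 1000 as long as val_loss keeps improving, while preserving the normal patience logic afterwards.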

What version are you seeing the problem on?

v2.2

How to reproduce the bug

No response

Error messages and logs

No response

Environment

No response

More info

No response

shirondru commented 3 months ago

I also see this, and I think the implementation would be better if EarlyStopping took precedence once min_epochs is reached. As it stands, it is as if EarlyStopping does not exist, because training exits the moment min_epochs is reached no matter what.
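
In the meantime, a workaround along these lines seems to match those semantics. This is only a sketch with a hypothetical DelayedEarlyStopping name and start_epoch parameter: it simply skips the early-stopping checks until a chosen epoch, so no stop request can be latched during the warm-up period and patience only starts counting afterwards.

from lightning.pytorch.callbacks import EarlyStopping


class DelayedEarlyStopping(EarlyStopping):
    """EarlyStopping that stays dormant until start_epoch is reached."""

    def __init__(self, start_epoch: int, **kwargs):
        super().__init__(**kwargs)
        self.start_epoch = start_epoch

    def on_validation_end(self, trainer, pl_module):
        # Skip all early-stopping bookkeeping during the warm-up period.
        if trainer.current_epoch < self.start_epoch:
            return
        super().on_validation_end(trainer, pl_module)

    def on_train_epoch_end(self, trainer, pl_module):
        if trainer.current_epoch < self.start_epoch:
            return
        super().on_train_epoch_end(trainer, pl_module)

Used as DelayedEarlyStopping(start_epoch=1000, monitor="val_loss", mode="min", patience=100), min_epochs could then be dropped from the Trainer or kept as an additional floor.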