Hello, can someone help me with how I can continue training DETR from the last epoch using a checkpoint?
This is the code for training:
from pytorch_lightning.callbacks import EarlyStopping
from pytorch_lightning import Trainer

MAX_EPOCHS = 200

early_stopping_callback = EarlyStopping(
    monitor='training_loss',  # metric name as logged by the model (here a loss, not AP)
    min_delta=0.00,           # minimum change to count as an improvement
    patience=3,               # number of epochs to wait for improvement before stopping
    mode='min'                # a loss is minimized, so 'min' (use 'max' only for metrics like AP)
)
trainer = Trainer(
    devices=1,
    accelerator="gpu",
    max_epochs=MAX_EPOCHS,
    gradient_clip_val=0.1,
    accumulate_grad_batches=8,
    log_every_n_steps=5,
    callbacks=[early_stopping_callback]
)
trainer.fit(model)

Should I add something, or what should I do next?
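One way to resume from the last epoch is to save checkpoints with a `ModelCheckpoint` callback and then pass the saved file to `Trainer.fit` via `ckpt_path`, which restores the model weights, optimizer/scheduler state, and epoch counter. A minimal sketch, assuming your model is defined as `model` above; the `checkpoints/` directory and the `detr-{epoch:02d}` filename pattern are example values, not anything from your setup:

```python
from pytorch_lightning.callbacks import ModelCheckpoint
from pytorch_lightning import Trainer

# Save a checkpoint every epoch so training can be resumed later.
# dirpath and filename here are illustrative choices.
checkpoint_callback = ModelCheckpoint(
    dirpath='checkpoints/',
    filename='detr-{epoch:02d}',
    save_last=True,  # also writes checkpoints/last.ckpt, updated each epoch
)

trainer = Trainer(
    devices=1,
    accelerator="gpu",
    max_epochs=MAX_EPOCHS,
    gradient_clip_val=0.1,
    accumulate_grad_batches=8,
    log_every_n_steps=5,
    callbacks=[early_stopping_callback, checkpoint_callback],
)

# ckpt_path restores weights, optimizer state, and the epoch counter,
# so training continues from where the checkpoint left off rather
# than starting over at epoch 0.
trainer.fit(model, ckpt_path='checkpoints/last.ckpt')
```

On the first run (before any checkpoint exists) call `trainer.fit(model)` without `ckpt_path`; on later runs point `ckpt_path` at `last.ckpt` or at a specific epoch file. Note that `max_epochs` is an absolute limit, so if the checkpoint is from epoch 200 and `MAX_EPOCHS` is still 200, nothing further will train until you raise it.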