Open DeanLa opened 1 month ago
May be similar to #19575
I managed a workaround by subclassing `LearningRateFinder` and doing:

```python
class PatchedLRFinder(LearningRateFinder):  # subclass name is illustrative
    def on_fit_start(self, trainer, pl_module):
        return  # skip the default run at fit start

    def on_train_epoch_start(self, trainer: "pl.Trainer", pl_module: "pl.LightningModule") -> None:
        # Only run the finder once, at the start of the first epoch
        if trainer.current_epoch == 0:
            self.lr_find(trainer, pl_module)
            self.log_chart()
```
I hope this does not affect other things I'm not aware of.
Bug description
I have a LightningModule that logs the metric `val_loss`, and a scheduler that monitors it. I also have a list of callbacks, one of which is `LearningRateFinder`.

I run a fit with:

```python
trainer = L.Trainer(logger=logger, callbacks=callbacks, **trainer_args)
```

When the LR finder is in the callbacks list, fit fails with an error. When I remove the LR finder, training seems to work well.
What version are you seeing the problem on?
v2.4
How to reproduce the bug
No response
Error messages and logs
Environment
Current environment
```
#- PyTorch Lightning Version (e.g., 2.4.0):
#- PyTorch Version (e.g., 2.4):
#- Python version (e.g., 3.12):
#- OS (e.g., Linux):
#- CUDA/cuDNN version:
#- GPU models and configuration:
#- How you installed Lightning (`conda`, `pip`, source):
```

More info
No response