Open RNA429 opened 11 months ago
Is the default `max_epochs` in PyTorch Lightning 1000?
You can set `max_epochs` manually with `python main.py --max_epochs 5000`.
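For reference, here is a minimal, hypothetical sketch of how such a flag behaves. The real `main.py` builds its parser from the Lightning Trainer's arguments; this stand-in approximates only the `--max_epochs` part with plain `argparse`:

```python
import argparse

# Hypothetical stand-in for main.py's CLI: the real script wires the
# Trainer's arguments into its parser, but the effect of passing
# --max_epochs is the same as in this plain-argparse sketch.
parser = argparse.ArgumentParser()
parser.add_argument("--max_epochs", type=int, default=1000,
                    help="stop training after this many epochs")

# Simulate: python main.py --max_epochs 5000
args = parser.parse_args(["--max_epochs", "5000"])
print(args.max_epochs)  # 5000

# Without the flag, the default applies.
print(parser.parse_args([]).max_epochs)  # 1000
```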
chenyichi wrote on Mon, May 6, 2024, 13:24:

> Is the default `max_epochs` in PyTorch Lightning 1000?
```
{'_default_root_dir': '/data/DJL/DiffAD-main',
 '_fit_loop': <pytorch_lightning.loops.fit_loop.FitLoop object at 0x7fa6575dd7c0>,
 '_is_data_prepared': False,
 '_lightning_optimizers': None,
 '_predict_loop': <pytorch_lightning.loops.dataloader.prediction_loop.PredictionLoop object at 0x7fa6575f1820>,
 '_progress_bar_callback': <pytorch_lightning.callbacks.progress.ProgressBar object at 0x7fa6575f1ac0>,
 '_stochastic_weight_avg': False,
 '_test_loop': <pytorch_lightning.loops.dataloader.evaluation_loop.EvaluationLoop object at 0x7fa6575f1520>,
 '_validate_loop': <pytorch_lightning.loops.dataloader.evaluation_loop.EvaluationLoop object at 0x7fa6575f1220>,
 '_weights_save_path': '/data/DJL/DiffAD-main',
 'accelerator_connector': <pytorch_lightning.trainer.connectors.accelerator_connector.AcceleratorConnector object at 0x7fa6575dd340>,
 'accumulate_grad_batches': 2,
 'accumulation_scheduler': <pytorch_lightning.callbacks.gradient_accumulation_scheduler.GradientAccumulationScheduler object at 0x7fa6575f1b50>,
 'auto_lr_find': False,
 'auto_scale_batch_size': False,
 'callback_connector': <pytorch_lightning.trainer.connectors.callback_connector.CallbackConnector object at 0x7fa6575dd550>,
 'callbacks': [<main.SetupCallback object at 0x7fa6575c5760>,
               <main.ImageLogger object at 0x7fa6575c5730>,
               <pytorch_lightning.callbacks.lr_monitor.LearningRateMonitor object at 0x7fa6575c5af0>,
               <main.CUDACallback object at 0x7fa6575c57f0>,
               <pytorch_lightning.callbacks.progress.ProgressBar object at 0x7fa6575f1ac0>,
               <pytorch_lightning.callbacks.model_checkpoint.ModelCheckpoint object at 0x7fa6575d6c70>],
 'check_val_every_n_epoch': 1,
 'checkpoint_connector': <pytorch_lightning.trainer.connectors.checkpoint_connector.CheckpointConnector object at 0x7fa6575dd6a0>,
 'config_validator': <pytorch_lightning.trainer.configuration_validator.ConfigValidator object at 0x7fa6575dd220>,
 'data_connector': <pytorch_lightning.trainer.connectors.data_connector.DataConnector object at 0x7fa6575dd2b0>,
 'datamodule': None,
 'debugging_connector': <pytorch_lightning.trainer.connectors.debugging_connector.DebuggingConnector object at 0x7fa6575dd5e0>,
 'dev_debugger': <pytorch_lightning.utilities.debugging.InternalDebugger object at 0x7fa6575dd100>,
 'fast_dev_run': False,
 'flush_logs_every_n_steps': 100,
 'gradient_clip_algorithm': <GradClipAlgorithmType.NORM: 'norm'>,
 'gradient_clip_val': 0.0,
 'limit_predict_batches': 1.0,
 'limit_test_batches': 1.0,
 'limit_train_batches': 1.0,
 'limit_val_batches': 1.0,
 'log_every_n_steps': 50,
 'logger': <pytorch_lightning.loggers.test_tube.TestTubeLogger object at 0x7fa6f0140fa0>,
 'logger_connector': <pytorch_lightning.trainer.connectors.logger_connector.logger_connector.LoggerConnector object at 0x7fa6575dd040>,
 'model_connector': <pytorch_lightning.trainer.connectors.model_connector.ModelConnector object at 0x7fa6575dd4c0>,
 'move_metrics_to_cpu': False,
 'num_predict_batches': [],
 'num_sanity_val_batches': [],
 'num_sanity_val_steps': 2,
 'num_test_batches': [],
 'num_training_batches': 0,
 'num_val_batches': [],
 'optimizer_connector': <pytorch_lightning.trainer.connectors.optimizer_connector.OptimizerConnector object at 0x7fa6575dd2e0>,
 'overfit_batches': 0.0,
 'predicted_ckpt_path': None,
 'prepare_data_per_node': True,
 'profiler': <pytorch_lightning.profiler.base.PassThroughProfiler object at 0x7fa6575dd190>,
 'reload_dataloaders_every_n_epochs': 0,
 'should_stop': False,
 'shown_warnings': set(),
 'slurm_connector': <pytorch_lightning.trainer.connectors.slurm_connector.SLURMConnector object at 0x7fa6575dd700>,
 'state': TrainerState(status=<TrainerStatus.INITIALIZING: 'initializing'>, fn=None, stage=None),
 'terminate_on_nan': False,
 'test_dataloaders': None,
 'tested_ckpt_path': None,
 'track_grad_norm': -1.0,
 'train_dataloader': None,
 'training_tricks_connector': <pytorch_lightning.trainer.connectors.training_trick_connector.TrainingTricksConnector object at 0x7fa6575dd640>,
 'truncated_bptt_steps': None,
 'tuner': <pytorch_lightning.tuner.tuning.Tuner object at 0x7fa6575dd760>,
 'val_check_interval': 1.0,
 'val_dataloaders': None,
 'validated_ckpt_path': None,
 'verbose_evaluate': True,
 'weights_summary': 'top'}
```
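A dump like the one above looks like the Trainer's instance dictionary, which can be produced with Python's built-in `vars()`. A minimal illustration using a hypothetical stand-in object (`FakeTrainer` is not part of DiffAD or Lightning; the attribute values are copied from the dump above):

```python
class FakeTrainer:
    """Stand-in for pytorch_lightning.Trainer, holding a few of the
    settings that appear in the dump above."""
    def __init__(self):
        self.accumulate_grad_batches = 2
        self.check_val_every_n_epoch = 1
        self.log_every_n_steps = 50

trainer = FakeTrainer()

# vars(obj) returns obj.__dict__, i.e. the instance attributes.
print(vars(trainer))
# {'accumulate_grad_batches': 2, 'check_val_every_n_epoch': 1, 'log_every_n_steps': 50}
```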