CAMMA-public / rendezvous

A transformer-inspired neural network for surgical action triplet recognition from laparoscopic videos.

pytorch lr_scheduler problem #2

Closed gshuangchun closed 2 years ago

gshuangchun commented 2 years ago

Hi, I have this error below. Could you please help me?

```
Traceback (most recent call last):
  File "/home/shuangchun/Code/Triplet 2022/rendezvous/pytorch/run.py", line 460, in <module>
    [float(f'{sch.get_last_lr()[0]:.6f}') for sch in lr_schedulers], [float(f'{v:.6f}') for v in wp_lr], warmups, power,
  File "/home/shuangchun/Code/Triplet 2022/rendezvous/pytorch/run.py", line 460, in <listcomp>
    [float(f'{sch.get_last_lr()[0]:.6f}') for sch in lr_schedulers], [float(f'{v:.6f}') for v in wp_lr], warmups, power,
  File "/home/shuangchun/anaconda3/envs/P7t10/lib/python3.7/site-packages/torch/optim/lr_scheduler.py", line 99, in get_last_lr
    return self._last_lr
AttributeError: 'SequentialLR' object has no attribute '_last_lr'
```

nwoyecid commented 2 years ago

Please provide your torch and torchvision versions.

gshuangchun commented 2 years ago

PyTorch = 1.10.1, TorchVision = 0.11.2

gshuangchun commented 2 years ago

Thank you for your attention. I have solved this problem by replacing:

```python
[float(f'{sch.get_last_lr()[0]:.6f}') for sch in lr_schedulers], [float(f'{v:.6f}') for v in wp_lr], warmups, power,
```

with:

```python
header3 = "** LR Config: Init: {} | Peak: {} | Warmup Epoch: {} | Rise: {} | Decay {} | train params {} | all params {} **".format(
    [float(f"{op.state_dict()['param_groups'][0]['lr']:.6f}") for op in optimizers], [float(f'{v:.6f}') for v in wp_lr],
    warmups, power,
    decay_rate, pytorch_train_params, pytorch_total_params)
```
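For reference, the error appears to stem from `SequentialLR` on torch 1.10.x not setting its `_last_lr` attribute until `step()` has been called, so calling `get_last_lr()` right after construction raises the `AttributeError`; this was fixed in later PyTorch releases. The workaround above sidesteps the scheduler entirely by reading the current learning rate from the optimizer's `param_groups`, which always reflects the live value. A minimal standalone sketch (the scheduler configuration here is illustrative, not the repo's actual setup):

```python
import torch

# A dummy parameter and optimizer, just to drive the schedulers.
params = [torch.nn.Parameter(torch.zeros(1))]
optimizer = torch.optim.SGD(params, lr=0.1)

# Illustrative warmup-then-decay composition via SequentialLR
# (SequentialLR and LinearLR were both introduced in torch 1.10).
warmup = torch.optim.lr_scheduler.LinearLR(optimizer, start_factor=0.1, total_iters=5)
decay = torch.optim.lr_scheduler.ExponentialLR(optimizer, gamma=0.9)
scheduler = torch.optim.lr_scheduler.SequentialLR(
    optimizer, schedulers=[warmup, decay], milestones=[5])

# On torch 1.10.x, scheduler.get_last_lr() can raise AttributeError here,
# before any scheduler.step() call. Reading from the optimizer is robust:
current_lr = optimizer.param_groups[0]['lr']
print(f"{current_lr:.6f}")
```

Reading `param_groups[0]['lr']` (equivalently `optimizer.state_dict()['param_groups'][0]['lr']`, as in the fix above) works on any torch version because the schedulers write the scheduled rate back into the optimizer on every step.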