FengheTan9 / Multi-Level-Global-Context-Cross-Consistency

Official Pytorch Code base for "Multi-Level Global Context Cross Consistency Model for Semi-Supervised Ultrasound Image Segmentation with Diffusion Model"
MIT License

Unable to Reproduce the Final Result #9

Open thesupermanreturns opened 1 year ago

thesupermanreturns commented 1 year ago

Hi, we ran the code for 295 epochs. Below is the log from the end of the run. Please help us if we are missing something.

epoch [294/295] train_loss 0.2000 supervised_loss 0.1954 consistency_loss 0.0012 train_iou: 0.9596 - val_loss 0.5416 - val_iou 0.6689 - val_SE 0.5690 - val_PC 0.6468 - val_F1 0.5644 - val_ACC 0.7565

We made the following modification to the learning-rate code, because we were encountering "RuntimeError: For non-complex input tensors, argument alpha must not be a complex number.", based on the link you provided in the other issue:

    def adjust_learning_rate(optimizer, i_iter, len_loader, max_epoch, power, args):
        lr = lr_poly(args.base_lr, i_iter, max_epoch * len_loader, power)
        optimizer.param_groups[0]['lr'] = lr
        if len(optimizer.param_groups) > 1:
            optimizer.param_groups[1]['lr'] = lr
        return lr

    lr_ = adjust_learning_rate(optimizer, iter_num, len(trainloader), max_epoch, 0.9, args)
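
lr_poly is not shown in the snippet above. A minimal sketch of a typical polynomial-decay helper consistent with that call signature (the body below is an assumption, not necessarily the repo's exact code):

    def lr_poly(base_lr, i_iter, max_iter, power):
        # Polynomial decay from base_lr down to 0 at max_iter.
        # Clamping the ratio keeps the base non-negative; in Python, a negative
        # float raised to a fractional power yields a complex number, which is
        # what produces "argument alpha must not be a complex number" in the
        # optimizer step.
        ratio = min(float(i_iter) / max_iter, 1.0)
        return base_lr * (1.0 - ratio) ** power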

FengheTan9 commented 1 year ago

This may be a problem caused by the learning rate dropping to 0. You can use CosineAnnealingLR.

thesupermanreturns commented 1 year ago

Could you please provide the code or refer us to a link? Thanks for replying.

FengheTan9 commented 1 year ago

    scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer=optimizer, T_max=max_epoch)

and call scheduler.step() at the end of each epoch.
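
For reference, a minimal sketch of how that scheduler is typically driven from a training loop (model, trainloader, and compute_loss are placeholders, not the repo's exact training script):

    import torch

    optimizer = torch.optim.SGD(model.parameters(), lr=args.base_lr, momentum=0.9)
    # Anneal the learning rate from base_lr towards 0 over max_epoch epochs.
    scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer=optimizer, T_max=max_epoch)

    for epoch in range(max_epoch):
        for batch in trainloader:
            loss = compute_loss(batch)  # placeholder for the supervised + consistency losses
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
        scheduler.step()  # step once at the end of each epoch, as suggested above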

or

adjust the learning-rate schedule to use max_iterations.
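
If that means driving the poly schedule by a global iteration count, a minimal sketch under that assumption (max_iterations here is taken as epochs times batches per epoch, matching the snippet earlier in this thread; it is not necessarily the repo's exact definition):

    # Total optimizer steps over the whole run: epochs * batches per epoch.
    max_iterations = max_epoch * len(trainloader)

    # Decay by the global iteration count so the learning rate only reaches 0
    # at the very end of training.
    lr_ = lr_poly(args.base_lr, iter_num, max_iterations, 0.9)
    for param_group in optimizer.param_groups:
        param_group['lr'] = lr_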

fengchuanpeng commented 1 year ago

Hello, is there a formula for calculating this max_iterations?