MASILab / 3DUX-Net


About Learning Rate Scheduler in flare2021 dataset #35

Closed vincentwan0930 closed 10 months ago

vincentwan0930 commented 1 year ago

Hi, thanks for your work. In your paper, the patience of the scheduler on the FLARE2021 dataset is 10. But in the code of this repo, the patience is 1000 and is based on dice_val (and that code is commented out, so training defaults to a constant learning rate).

I wonder whether the lr_scheduler's patience should be based on loss_tr or dice_val, and which patience value I should use.

Thank you very much.

leeh43 commented 1 year ago

Hi, thank you for your interest! The original code used a patience of 10 for training. However, I have since performed additional experiments and would recommend simply omitting the lr_scheduler during training, as the two results differ only subtly.
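For reference, the plateau behavior discussed above (reduce the learning rate when validation Dice stops improving, patience of 10) can be sketched in plain Python. This mirrors what `torch.optim.lr_scheduler.ReduceLROnPlateau(mode='max', patience=10)` does when fed `dice_val`; the class name `PlateauLR` and the factor value are illustrative, not from the repo.

```python
class PlateauLR:
    """Minimal sketch of reduce-on-plateau LR logic keyed to a metric
    where higher is better (e.g. validation Dice). Illustrative only."""

    def __init__(self, lr, patience=10, factor=0.1):
        self.lr = lr              # current learning rate
        self.patience = patience  # epochs to tolerate without improvement
        self.factor = factor      # multiplicative LR decay on plateau
        self.best = float('-inf') # best metric seen so far
        self.bad_epochs = 0       # consecutive epochs without improvement

    def step(self, dice_val):
        # Call once per validation round with the current Dice score.
        if dice_val > self.best:
            self.best = dice_val
            self.bad_epochs = 0
        else:
            self.bad_epochs += 1
            if self.bad_epochs > self.patience:
                self.lr *= self.factor
                self.bad_epochs = 0
        return self.lr
```

Monitoring `loss_tr` instead would flip the comparison (lower is better, i.e. `mode='min'` in PyTorch's scheduler); with patience set to 1000 and typical epoch counts, the decay effectively never fires, which matches the constant-LR behavior of the commented-out code.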

vincentwan0930 commented 1 year ago

Thank you very much for your response. I have another question. In your paper, you "substitute the plain 3D U-Net architecture with our proposed 3D UX-Net and adapt the self-configuring hyperparameters for training", but I have trouble reproducing that. Could you please share the modified code for the nnUNet pipeline?

leeh43 commented 1 year ago

The modified nnUNet code is not well organized yet, as it was written for the discussion stage, where we had to rush the experiments within two weeks. I will look into it further and set up a separate GitHub repo for the nnUNet version.