lucidrains / med-seg-diff-pytorch

Implementation of MedSegDiff in PyTorch - SOTA medical segmentation using DDPM and filtering of features in Fourier space
MIT License

Train loss: Very high volatility in loss #13

Open pnaclcu opened 1 year ago

pnaclcu commented 1 year ago

Hello, thanks for your code. It is elegant and clear and has helped me a lot. I ran into a problem: the training loss looks very good at the beginning of training, around 0.001.
The default end epoch is set to 10000, but after 2000+ epochs the training loss jumps to a surprisingly large value, around "Training Loss: 325440.0592". I am curious: have you encountered this issue before? The training batch size is 96 on 4 GPUs with PyTorch DDP. Since the full training set only contains about 4000 images, 4 GPUs need only about 10 iterations to finish an epoch (4000 / (96 × 4) ≈ 10). Do you think this is the reason? Thanks for your code.

yuan5828225 commented 1 year ago

I have the same problem. Have you solved the problem?

yibochen38 commented 1 year ago

> I have the same problem. Have you solved the problem?

Hi bro. Have you solved the problem?

yuan5828225 commented 1 year ago

> I have the same problem. Have you solved the problem?
>
> Hi bro. Have you solved the problem?

Not yet. Testing with the checkpoint from the 10,000th epoch gives very poor results. Using the checkpoint with the lowest loss before the fluctuation is OK, although the result is still not good, probably because my dataset is small; I am trying to tune the parameters.

Alan-Py commented 1 year ago

Hey, guys. Have you solved the problem?

pnaclcu commented 1 year ago

> I have the same problem. Have you solved the problem?
>
> Hi bro. Have you solved the problem?
>
> Not yet. Testing with the checkpoint from the 10,000th epoch gives very poor results. Using the checkpoint with the lowest loss before the fluctuation is OK, although the result is still not good, probably because my dataset is small; I am trying to tune the parameters.

Hey guys, I got the solution. Add a scheduler to control the learning rate, e.g.:

```python
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(
    optimizer, mode='min', factor=0.2, patience=50, verbose=True, min_lr=1e-6)
scheduler.step(THE_LOSS_YOU_DEFINED)
```

But it seems that epoch_loss in driver.py actually denotes the batch loss, so I rewrote the loss computation. gl ^^
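For reference, a minimal self-contained sketch of this fix; the model, data, and loss below are dummy stand-ins rather than code from driver.py, and the key point is that the value passed to the scheduler is averaged over the whole epoch instead of taken from the last batch:

```python
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

# Dummy stand-ins so the sketch runs on its own; in practice these would be
# the MedSegDiff model and the real segmentation dataset.
model = nn.Sequential(nn.Flatten(), nn.Linear(16 * 16, 1))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(
    optimizer, mode='min', factor=0.2, patience=50, min_lr=1e-6)

loader = DataLoader(
    TensorDataset(torch.randn(64, 1, 16, 16), torch.randn(64, 1)), batch_size=8)

for epoch in range(100):
    running_loss, num_batches = 0.0, 0
    for x, y in loader:
        optimizer.zero_grad()
        loss = nn.functional.mse_loss(model(x), y)  # stand-in for the diffusion loss
        loss.backward()
        optimizer.step()
        running_loss += loss.item()
        num_batches += 1
    epoch_loss = running_loss / num_batches  # true epoch average, not the last batch
    scheduler.step(epoch_loss)  # lowers the LR once epoch_loss stops improving
```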

Alan-Py commented 1 year ago

@pnaclcu Good job! Can you share your loss code?

nhthanh0809 commented 1 year ago

Hi bros, my loss after each epoch is NaN (the loss value becomes NaN after some batches). I checked the input data (images and masks) and found no problems with it. Does anyone have the same problem as me?

ChenqinWu commented 10 months ago

We can set the parameter args.scale_lr to False to solve this problem.
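For context, a common pattern in Accelerate-style training scripts (assumed here, not quoted from driver.py) is for scale_lr to multiply the base learning rate by the gradient-accumulation steps, the per-GPU batch size, and the number of processes. With the settings from this thread, that would inflate the learning rate by a factor of 384, which is large enough to destabilize training:

```python
# Illustration of the usual scale_lr arithmetic; the variable names are
# hypothetical, not taken from driver.py.
base_lr = 1e-4
gradient_accumulation_steps = 1
train_batch_size = 96   # per-GPU batch size reported in this thread
num_processes = 4       # number of GPUs

scaled_lr = base_lr * gradient_accumulation_steps * train_batch_size * num_processes
print(scaled_lr)  # 0.0384, i.e. 384x the base learning rate
```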