AImageLab-zip / alveolar_canal

This repository contains the material from the paper "Improving Segmentation of the Inferior Alveolar Nerve through Deep Label Propagation"

NaN errors during canal_pretrain process #10

Open puppy2000 opened 1 year ago

puppy2000 commented 1 year ago

Sorry for bothering you. I get NaN errors during the canal_pretrain process. From another issue, https://github.com/AImageLab-zip/alveolar_canal/issues/7, I found that my predictions become NaN during training. Could you help me find the problem?

LucaLumetti commented 1 year ago

Hi @puppy2000, I'm sorry to hear that you have run into trouble during network training.

The issue you linked may have a different cause, since there the DiceLoss is employed instead of the JaccardLoss.
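
For reference, a soft Jaccard loss usually looks roughly like the sketch below. This is the generic formulation, not the exact code in this repository; the `eps` term is what keeps the division finite when both prediction and ground truth are empty.

```python
import torch

def soft_jaccard_loss(preds: torch.Tensor, gt: torch.Tensor,
                      eps: float = 1e-6) -> torch.Tensor:
    """Generic soft Jaccard (IoU) loss sketch; eps avoids division by zero
    when both preds and gt are empty. Not the repository's exact code."""
    preds = preds.flatten(1)                      # (B, N) per-sample vectors
    gt = gt.flatten(1)
    intersection = (preds * gt).sum(dim=1)
    union = preds.sum(dim=1) + gt.sum(dim=1) - intersection
    jaccard = (intersection + eps) / (union + eps)
    return 1.0 - jaccard.mean()
```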

When NaNs appear, it can be challenging to identify the specific operation that produced them. One approach is to step through the operations performed just before the NaNs first occur.
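
One general PyTorch debugging aid (not something specific to this repository) is anomaly detection, which reports the forward operation whose backward pass produced NaNs. A minimal, self-contained sketch of what the error looks like:

```python
import torch

# Anomaly detection slows training considerably, so enable it only while
# hunting for the source of the NaNs.
torch.autograd.set_detect_anomaly(True)

x = torch.zeros(4, requires_grad=True)
loss = (x / x).sum()   # 0/0 produces NaN in both the forward and backward pass
loss.backward()        # RuntimeError: 'DivBackward0' returned nan values ...
```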

Even if an epoch completes successfully, we cannot rule out that the NaNs stem from the generated data, because random patches are extracted from the original volume. Please double-check that both preds and gt are free of NaNs right before the self.loss() call.
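
As a concrete illustration of that check (the placement inside the training loop is hypothetical and depends on where self.loss() is actually called):

```python
import torch

def assert_finite(name: str, t: torch.Tensor) -> None:
    """Fail fast if a tensor contains NaN or Inf, reporting which one it was."""
    if not torch.isfinite(t).all():
        raise RuntimeError(f"{name} contains NaN/Inf "
                           f"(nan={torch.isnan(t).any().item()}, "
                           f"inf={torch.isinf(t).any().item()})")

# Hypothetical placement right before the loss call:
# assert_finite("preds", preds)
# assert_finite("gt", gt)
# loss = self.loss(preds, gt)
```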

Upon examining the JaccardLoss code, I noticed that I use eps = 1e-6 to prevent NaNs in the division. While this works fine in float32, it may cause issues in float16, where 1 - 1e-6 rounds to exactly 1.
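
This is easy to verify directly, independently of the actual JaccardLoss implementation:

```python
import torch

one = torch.tensor(1.0, dtype=torch.float16)
eps = torch.tensor(1e-6, dtype=torch.float16)

print(one - eps)          # tensor(1., dtype=torch.float16): the epsilon is lost
print(one - eps == one)   # tensor(True)

# In float32 the epsilon survives the subtraction:
print(torch.tensor(1.0) - torch.tensor(1e-6) == torch.tensor(1.0))  # tensor(False)
```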

I will try to run the entire pipeline myself again as soon as possible. If you come across any new developments or findings, please let me know.

puppy2000 commented 1 year ago


Hi, I double-checked the code and set batch_size = 1 to debug. This time no NaN errors occurred, and the network seems to train correctly, as you can see in the attached picture. So I wonder whether the problem is that random patches are extracted from the original volume, and under DataParallel some bad examples trigger the NaN errors. It may also be caused by badly generated labels, because I noticed that some of the generated labels are not very good. I will check the code further.
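
If it helps, here is a rough sketch of the kind of sanity check I have in mind for the generated data; the loader name and the (image, label) batch structure are placeholders, not the actual classes in this repository:

```python
import torch

def scan_for_bad_samples(loader):
    """Iterate once over a DataLoader and report samples whose image or label
    contains NaN/Inf, or whose label is completely empty (no foreground)."""
    bad = []
    for idx, (image, label) in enumerate(loader):
        if not torch.isfinite(image).all() or not torch.isfinite(label).all():
            bad.append((idx, "NaN/Inf values"))
        elif label.sum() == 0:
            bad.append((idx, "empty label (no foreground voxels)"))
    return bad

# Hypothetical usage with the training loader used for canal_pretrain:
# for idx, reason in scan_for_bad_samples(train_loader):
#     print(f"sample {idx}: {reason}")
```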