jasw1001 opened this issue 7 years ago
Hi @jasw1001. Did you find any solution for that? It seems execution reaches the customized backward function, but I can't debug it. My main problem is that I can't use multiple GPUs with this backward function (it exits without any specific error). Do you have any idea how to use torch.autograd and skip this backward function?
It seems that torch.autograd cannot be used for the Dice loss, because some functions in the Dice loss are not supported by torch.autograd. As for the debugging problem, it's a bug in PyTorch, so you may use 'ipdb' or 'pdb' to debug it.
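For reference, here is a minimal sketch (not the code from this repo) of how a hand-written Dice backward is usually packaged as a `torch.autograd.Function` in current PyTorch. The flattened binary target and the `eps` smoothing term are my assumptions:

```python
import torch
from torch.autograd import Function

class DiceLoss(Function):
    """Soft Dice loss with a hand-written backward pass (binary case)."""
    eps = 1e-5

    @staticmethod
    def forward(ctx, pred, target):
        # pred: predicted probabilities, target: {0,1} labels, both flattened 1-D
        ctx.save_for_backward(pred, target)
        inter = (pred * target).sum()
        union = pred.sum() + target.sum()
        return 1 - (2 * inter + DiceLoss.eps) / (union + DiceLoss.eps)

    @staticmethod
    def backward(ctx, grad_output):
        pred, target = ctx.saved_tensors
        eps = DiceLoss.eps
        inter = (pred * target).sum()
        union = pred.sum() + target.sum()
        # d(loss)/d(pred_i) = ((2*inter + eps) - 2*t_i*(union + eps)) / (union + eps)^2
        grad_pred = grad_output * ((2 * inter + eps)
                                   - 2 * target * (union + eps)) / (union + eps) ** 2
        return grad_pred, None  # no gradient w.r.t. the target
```

Calling it as `loss = DiceLoss.apply(pred.view(-1), target.view(-1))` is what makes `loss.backward()` route into the hand-written `backward` above.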
Thanks @jasw1001. Have you tried to implement your own version of the loss function? I have used 'pdb', but it got stuck at one line and never went through the next lines. It is very strange to me.
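In case it helps with the debugging dead-end: `pdb` breakpoints often never trigger inside `backward` because autograd runs it on a worker thread. Two stock alternatives are anomaly detection and `gradcheck`, sketched here against the hypothetical `DiceLoss` Function from the earlier comment:

```python
import torch

# toy inputs; double precision is required for gradcheck
pred = torch.rand(64, dtype=torch.double, requires_grad=True)
target = (torch.rand(64) > 0.5).double()

# anomaly mode re-raises errors from backward with a traceback that
# points at the forward op responsible, which pdb often cannot reach
with torch.autograd.detect_anomaly():
    loss = DiceLoss.apply(pred, target)
    loss.backward()

# gradcheck compares the hand-written backward against numerical gradients
print(torch.autograd.gradcheck(DiceLoss.apply, (pred, target)))
```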
Hi @jasw1001 @CSMEDEEP, could you give the steps for how to start the training? The code uses some preprocessed files from the original LUNA16 dataset. How do I run the preprocessing?
@abhiML Hello~ I really want to know how to preprocess the LUNA16 dataset to get the files like "normalized_brightened_CT_2_5", etc. Have you figured it out? Thanks a lot~
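I have not found the repo's preprocessing script either, but the file names suggest standard LUNA16 CT normalization: clip the volume to a Hounsfield-unit window and rescale the intensities. A generic sketch (the helper name and the HU window are my assumptions, not the repo's actual pipeline):

```python
import numpy as np
import SimpleITK as sitk

def load_and_normalize(mhd_path, hu_min=-1000.0, hu_max=400.0):
    """Load a LUNA16 .mhd scan and map HU values to [0, 255]."""
    img = sitk.ReadImage(mhd_path)
    vol = sitk.GetArrayFromImage(img).astype(np.float32)  # (z, y, x) in HU
    vol = np.clip(vol, hu_min, hu_max)
    vol = (vol - hu_min) / (hu_max - hu_min) * 255.0
    return vol
```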
Hi there, we ran the program and successfully trained a model. But when we took a deeper look into the code, we found that it runs loss.backward() using PyTorch's autograd instead of the backward written in the dice_loss class. So we are wondering why a backward function was written at all, and how to call the manually written backward function?
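For anyone else hitting this: autograd only dispatches to a hand-written backward when the loss is computed through `Function.apply` (or, in the old pre-0.4 API, through an instance of the Function). If the training loop builds the Dice score out of ordinary tensor ops and then calls `loss.backward()`, autograd differentiates those ops itself and the custom backward in dice_loss is never invoked. A sketch of the two paths, reusing the hypothetical `DiceLoss` from the earlier comment:

```python
import torch

def soft_dice(pred, target, eps=1e-5):
    # ordinary tensor ops: autograd derives the backward automatically
    inter = (pred * target).sum()
    return 1 - (2 * inter + eps) / (pred.sum() + target.sum() + eps)

pred = torch.rand(64, requires_grad=True)
target = (torch.rand(64) > 0.5).float()

loss = soft_dice(pred, target)       # hand-written backward is NOT used here
loss.backward()

pred.grad = None                     # clear accumulated gradients
loss = DiceLoss.apply(pred, target)  # only .apply routes into DiceLoss.backward
loss.backward()
```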