FoilHao opened this issue 4 years ago
[2020-04-23 23:57:48,790 INFO train.py line 65 60930] train_dice_coef: 0.0, S Loss: 259086.375, C Loss: 1.6436882019042969
[2020-04-23 23:57:50,109 INFO train.py line 228 60930] Iter -24-976: Loss: 259088.015625 acc: 0.000000% dice: 0.000000%
[2020-04-23 23:57:51,204 INFO train.py line 106 60930] train_dice_coef: 0.0, S Loss: 149152.40625, C Loss: 0.7466665506362915
[2020-04-23 23:57:52,193 INFO train.py line 106 60930] train_dice_coef: 0.9743859599359734, S Loss: 56511.94921875, C Loss: 0.6808452010154724
[2020-04-23 23:57:53,400 INFO train.py line 106 60930] train_dice_coef: 0.0, S Loss: 191083.203125, C Loss: 0.769726574420929
[2020-04-23 23:57:54,521 INFO train.py line 106 60930] train_dice_coef: 0.9844597773285577, S Loss: 54847.6015625, C Loss: 0.6399809718132019
[2020-04-23 23:57:55,523 INFO train.py line 106 60930] train_dice_coef: 0.9808427387149392, S Loss: 36877.94921875, C Loss: 0.6339898109436035
[2020-04-23 23:57:56,506 INFO train.py line 106 60930] train_dice_coef: 0.9786488801139038, S Loss: 53681.30078125, C Loss: 0.658286452293396
[2020-04-23 23:57:57,473 INFO train.py line 106 60930] train_dice_coef: 0.0, S Loss: 176392.96875, C Loss: 0.6420418620109558
[2020-04-23 23:57:58,450 INFO train.py line 106 60930] train_dice_coef: 0.9743859599359734, S Loss: 56511.94921875, C Loss: 0.6808452010154724
[2020-04-23 23:57:59,437 INFO train.py line 106 60930] train_dice_coef: 0.9897325827611348, S Loss: 51950.390625, C Loss: 0.6827568411827087
[2020-04-23 23:58:00,420 INFO train.py line 106 60930] train_dice_coef: 0.9779893776104277, S Loss: 38814.6640625, C Loss: 0.6428762078285217
[2020-04-23 23:58:01,438 INFO train.py line 106 60930] train_dice_coef: 0.0, S Loss: 144288.125, C Loss: 0.7742880582809448
[2020-04-23 23:58:02,427 INFO train.py line 106 60930] train_dice_coef: 0.0, S Loss: 98744.40625, C Loss: 0.731644332408905
[2020-04-23 23:58:03,440 INFO train.py line 106 60930] train_dice_coef: 0.0, S Loss: 187155.484375, C Loss: 0.7246747612953186
[2020-04-23 23:58:04,448 INFO train.py line 106 60930] train_dice_coef: 0.0, S Loss: 219648.1875, C Loss: 0.7651872634887695
[2020-04-23 23:58:05,427 INFO train.py line 106 60930] train_dice_coef: 0.9820626420525217, S Loss: 41388.83203125, C Loss: 0.6593422293663025
[2020-04-23 23:58:06,501 INFO train.py line 106 60930] train_dice_coef: 0.9820942632767734, S Loss: 56902.10546875, C Loss: 0.659173309803009
[2020-04-23 23:58:07,552 INFO train.py line 106 60930] train_dice_coef: 0.0, S Loss: 230089.640625, C Loss: 0.7651663422584534
[2020-04-23 23:58:08,787 INFO train.py line 106 60930] train_dice_coef: 0.0, S Loss: 244381.09375, C Loss: 0.7267017960548401
[2020-04-23 23:58:09,771 INFO train.py line 106 60930] train_dice_coef: 0.0, S Loss: 149224.34375, C Loss: 0.7264615297317505
[2020-04-23 23:58:11,233 INFO train.py line 106 60930] train_dice_coef: 0.9929340016206969, S Loss: 34773.8515625, C Loss: 0.667628288269043
[2020-04-23 23:58:11,233 INFO train.py line 252 60930] Iter -24-976 eval: Loss: 113621.721875 acc: 55.000000% dice: 49.087681%
[2020-04-23 23:58:11,233 INFO train.py line 265 60930] --- Saving snapshot ---
[2020-04-23 23:58:11,253 INFO train.py line 279 60930] --- 23.968756437301636 seconds ---
The training log is shown above. In some iterations I get a very high train_dice_coef, while in others the dice coefficient stays near 0. I wonder whether this is normal during training and what could cause the problem.
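One pattern consistent with this log is that some randomly sampled patches contain no foreground voxels: with a standard soft-Dice formula, any false-positive prediction on an empty target drives the score to (near) zero, while patches that do contain the structure score high. A minimal sketch (this `dice_coef` is illustrative, not necessarily the repo's exact function):

```python
import numpy as np

def dice_coef(pred, target, eps=1e-6):
    # Soft Dice: 2*|P ∩ T| / (|P| + |T|); eps avoids division by zero
    inter = np.sum(pred * target)
    return (2.0 * inter + eps) / (np.sum(pred) + np.sum(target) + eps)

# Patch whose target mask is empty but where the model predicts one voxel
pred = np.zeros((4, 4)); pred[0, 0] = 1.0
target = np.zeros((4, 4))
print(dice_coef(pred, target))   # near 0: any false positive on an empty patch

# Same prediction scored against itself is a perfect match
print(dice_coef(pred, pred))     # 1.0
```

If that is the cause, the oscillation is expected behavior of the metric rather than a training failure; checking how often sampled patches are empty would confirm it.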
Hi, thanks for noticing my work! I didn't expect this implementation to get attention, haha. I implemented this paper casually in my free time without paying much attention to details, so I've forgotten about 70% of it by now. I'm really happy that people are starting to re-run the experiment and provide more robust results. Thanks!
Thank you! Your work did help me a lot, and I've found the cause of some of my problems.
@FoilHao Hi, I am trying to correct the mistakes in this code. Have you tried it out? Could you send me a corrected version? Thanks a lot.
Hello, thanks for your great work. The code is very neat and readable, and I learned a lot from it. But I have some questions from running this repo on the CSI2014 dataset.
- I set batch_size to 64 (the model's num_channels=64), but it consumes only about 4500 MiB on my GPU (GTX 1080). You said you set batch_size to 1 due to memory limitations. Do you think there is something wrong with my setting, given my low memory usage?
- In utils.py -> force_inside_img(), you clamp every coordinate to the image using img_shape[2]. I think it is more reasonable to limit z by img_shape[0], y by img_shape[1], and x by img_shape[2]. I'm not sure whether it should be like that; I'd be very grateful for any guidance on this question.
Hello, I applied to the official site for the data, but didn't get a reply.
Could you please provide us with the dataset (Dataset 2 from SpineWeb)?
I am a graduate student, and the data will only be used for scientific research, never for commercial use.
My email address is panyinggang@stu.xjtu.edu.cn.
Thanks very much!
x_low, x_up = force_inside_img(x, patch_size, img.shape[2])
y_low, y_up = force_inside_img(y, patch_size, img.shape[1])
z_low, z_up = force_inside_img(z, patch_size, img.shape[0])
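For reference, a `force_inside_img` consistent with those per-axis calls could look like the sketch below. This is an assumption about its behavior (clamping a patch window of `patch_size` voxels to one axis of the image), not the repo's actual implementation:

```python
def force_inside_img(c, patch_size, axis_len):
    # Clamp a patch window centered at coordinate c so that it lies
    # entirely inside [0, axis_len) along a single axis.
    half = patch_size // 2
    low = max(int(c) - half, 0)
    up = min(low + patch_size, axis_len)
    low = max(up - patch_size, 0)  # shift back if clipped at the upper edge
    return low, up

# Example: a 32-voxel window near the top of a 100-voxel axis
low, up = force_inside_img(90, 32, 100)  # -> (68, 100)
```

The key point of the fix above is simply that each axis passes its own extent (`img.shape[0]`, `img.shape[1]`, `img.shape[2]`) rather than reusing `img.shape[2]` for all three.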
just adjusting parameters