vinbigdata-medical / MIDL2021-VinDr-RibCXR

VinDr-RibCXR: A Benchmark Dataset for Automatic Segmentation and Labeling of Individual Ribs on Chest X-rays
MIT License

Results are not reproducible #4

Open MRJasonP opened 1 year ago

MRJasonP commented 1 year ago

Hi, thank you for sharing such an amazing dataset. I tried to reproduce the results using the exact code provided in this repo, but the results I reproduced have quite a big gap from the numbers reported in the MIDL paper. The configuration file I used is "cvcore/config/multi_unet_b0_diceloss.yaml", and the best Dice I got is 64.7%. I attached the log file below.

Any hint for solving this issue is appreciated.

Thank you. multi_unet_b0_DiceLoss.yaml.txt (https://github.com/vinbigdata-medical/MIDL2021-VinDr-RibCXR/files/10223169/multi_unet_b0_DiceLoss.yaml.txt)

levi3001 commented 1 year ago

Sorry for the late reply. Did you solve the problem? I am sorry that I cannot run the code now, since I left the company and I don't have enough resources to re-run the model to see what happened. One potential issue is that changes in the torch, monai, or other module versions make the code produce different results. It may also be due to instability in my code, but I cannot check that now. So if you have any updates, please let me know.
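Since version drift in torch/monai is the suspected cause, one practical step is to record the exact library versions alongside each training run so results from different environments can be compared. A minimal sketch using only the standard library (the package list here is just an example; the repo's real dependencies may differ):

```python
# Sketch: log the installed versions of key packages for a run,
# so reproducibility gaps can be traced to environment differences.
from importlib import metadata

def package_version(name: str) -> str:
    """Return the installed version of `name`, or 'not installed'."""
    try:
        return metadata.version(name)
    except metadata.PackageNotFoundError:
        return "not installed"

if __name__ == "__main__":
    # Example package names; adjust to the project's actual dependencies.
    for pkg in ("torch", "monai", "numpy"):
        print(f"{pkg}: {package_version(pkg)}")
```

Dumping this into the run's log file (next to the loss curves) makes it possible to answer "what torch version are you using?" months later.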


fstylianou commented 1 year ago

Hi, I was able to run the code on a Mac M1 after making some minor changes to the code, because the M1 system is not compatible with CUDA. I got the following results:

Epoch 200 Train loss: 0.08936, learning rate: 0.001000: 100%|██████████████████████████████████| 25/25 [22:13<00:00, 53.35s/it]
2023-03-22 14:48:39,117 train INFO: Train loss: 0.08936, learning rate: 0.001000
2023-03-22 14:48:39,117 - Train loss: 0.08936, learning rate: 0.001000
100%|████████████████████████████████| 49/49 [00:27<00:00, 1.76it/s]
0.7819569110870361
2023-03-22 14:49:09,054 train INFO: Validation dice: 0.781957, val loss:0.963734 best: 0.829610

2023-03-22 14:49:09,054 - Validation dice: 0.781957, val loss:0.963734 best: 0.829610
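The "minor changes" for the M1 presumably come down to picking a non-CUDA device. A hedged sketch of that fallback logic, written as a pure function so it runs without torch installed (with torch you would pass in the real availability checks; the repo's actual code may differ):

```python
def pick_device(cuda_available: bool, mps_available: bool) -> str:
    """Prefer CUDA, fall back to Apple's MPS backend, then CPU.

    Illustrative only: a sketch of the device selection an M1 port
    would need, not the repo's actual implementation.
    """
    if cuda_available:
        return "cuda"
    if mps_available:
        return "mps"
    return "cpu"
```

With torch this would typically be used as `device = torch.device(pick_device(torch.cuda.is_available(), torch.backends.mps.is_available()))`. Note that MPS and CUDA can produce slightly different numerics, which is another plausible source of small metric differences between machines.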

So based on the best Dice of 0.829610, does that mean I can reproduce the results of the paper? From the paper, for the model "U-Net w. EfficientNet-B0":

Dice: .829 (.808–.847)
95% HD: 16.807 (14.372–19.539)
Sensitivity: .844 (.818–.858)
Specificity: .998 (.997–.998)
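For reference when comparing a logged value like 0.829610 against the paper's .829: the Dice coefficient being reported is, in essence, the following (a plain-Python sketch over flattened binary masks; the repo presumably computes it on tensors via monai):

```python
def dice_score(pred, target, eps=1e-7):
    """Dice coefficient 2|A∩B| / (|A| + |B|) for two flat binary masks.

    `eps` avoids division by zero when both masks are empty (and makes
    the all-empty case score 1.0, a common convention).
    """
    inter = sum(p * t for p, t in zip(pred, target))
    total = sum(pred) + sum(target)
    return (2.0 * inter + eps) / (total + eps)

# Prediction marks 2 pixels, target marks 1; they overlap in 1 pixel.
print(dice_score([1, 1, 0, 0], [1, 0, 0, 0]))  # ≈ 0.667
```

Since 0.8296 falls inside the paper's reported 95% interval (.808–.847), the run above is consistent with the published result.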

levi3001 commented 1 year ago

What torch version are you using?

fstylianou commented 1 year ago

python -c "import torch; print(torch.__version__)"
2.1.0.dev20230317

fstylianou commented 1 year ago

I am new to the Python/torch ecosystem. Are my results OK?

levi3001 commented 1 year ago

I think it is OK.