Zyun-Y / DconnNet

Code for the CVPR 2023 paper "Directional Connectivity-based Segmentation of Medical Images"

My reproduction on the ChaseDB1 dataset failed to reach the reported results #3

Closed Fivethousand5k closed 1 year ago

Fivethousand5k commented 1 year ago

Hello, I tried to train DconnNet on the ChaseDB1 dataset with the code in this repo. However, I fell short of the results reported in the paper by a noticeable margin.

I referred to this script to run the code: https://github.com/Zyun-Y/DconnNet/blob/main/scripts/chasedb1_train.sh

Here are the evaluation results on 5 folds, together with the mean result:

   fold      dice    cldice        β0        β1
     1  0.825640  0.829551  0.355556  0.028148
     2  0.833390  0.846933  0.302963  0.122222
     3  0.805985  0.821772  0.369630  0.180741
     4  0.786474  0.828176  0.352593  0.136296
     5  0.784559  0.827419  0.276667  0.031111
  mean  0.807210  0.830770  0.331481  0.099704

Note that I used the same computation methods for Dice and clDice as provided in this repo. Since the complete code for Betti computation was not included (the directory 'Betti_Compute/' and the Gudhi package are missing), I used my own instead. As can be seen, only the clDice (0.831) and β0 (0.331) are comparable to the reported values (0.833 and 0.341, respectively).
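For reference, these are the Dice and clDice definitions I computed with (a minimal sketch of my understanding, using skeletonize from scikit-image; not necessarily identical line-for-line to the repo's implementation):

import numpy as np
from skimage.morphology import skeletonize

def dice(pred, gt, eps=1e-8):
    # Dice = 2 * |P ∩ G| / (|P| + |G|) on boolean masks
    inter = np.logical_and(pred, gt).sum()
    return 2.0 * inter / (pred.sum() + gt.sum() + eps)

def cl_dice(pred, gt, eps=1e-8):
    # clDice = harmonic mean of topology precision and sensitivity,
    # measured between each mask and the other's skeleton
    s_pred, s_gt = skeletonize(pred), skeletonize(gt)
    tprec = np.logical_and(s_pred, gt).sum() / (s_pred.sum() + eps)
    tsens = np.logical_and(s_gt, pred).sum() / (s_gt.sum() + eps)
    return 2.0 * tprec * tsens / (tprec + tsens + eps)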

For β1, I don't understand how you could reach such a large value (above 1), since you calculated the overall Betti error over a series of (65, 65) patches and reported the mean. Most of the patches iterated over the (960, 960) image cannot form any loops, so I believe the final β1 score should be relatively small. A sketch of the evaluation I have in mind follows below.
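To make my assumption explicit, this is the patch-wise evaluation I mean (a sketch only; betti_numbers is a hypothetical helper returning (β0, β1) of a binary patch):

import numpy as np

def patchwise_betti_error(pred, gt, betti_numbers, patch=65):
    # mean absolute Betti-number error over non-overlapping (patch, patch) tiles
    H, W = pred.shape
    errs = []
    for y in range(0, H - patch + 1, patch):
        for x in range(0, W - patch + 1, patch):
            p0, p1 = betti_numbers(pred[y:y + patch, x:x + patch])
            g0, g1 = betti_numbers(gt[y:y + patch, x:x + patch])
            errs.append((abs(p0 - g0), abs(p1 - g1)))
    return np.mean(errs, axis=0)  # (β0 error, β1 error)

Under this scheme, |p1 - g1| is 0 for most thin-vessel patches, which is why I expect the mean β1 error to stay well below 1.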

Could you please shed some light on why this might be the case? Thanks!

Zyun-Y commented 1 year ago

Hi,

Thanks for your interest in this work!

Based on the results you showed here, we checked the training script we updated earlier and realized that this training setting gives unstable results from time to time. We have updated the script accordingly (please download the latest version of the code). Specifically, here is what we did:

These steps will give the results shown in the paper. Since the training data in this dataset is limited, one might want to rerun the experiments if the network does not reach a good local optimum.

To test this setting, we retrained the network on the same 5 folds; here are the results:

  fold      dice    cldice        β0        β1
     1  0.832241  0.833772  0.384444  2.274074
     2  0.838394  0.854267  0.326667  1.625926
     3  0.814446  0.834307  0.329630  1.917037
     4  0.801684  0.821157  0.383704  1.320000
     5  0.798302  0.838010  0.276667  0.678889
  mean  0.818300  0.836200  0.343000  1.626000
Please note that the mean here is the image-wise mean, not the fold-wise mean.
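As a toy illustration of the difference (made-up scores; the two means diverge whenever folds contain different numbers of images):

import numpy as np

fold_scores = [np.array([0.83, 0.84, 0.82]),  # a fold with 3 images
               np.array([0.80, 0.78])]        # a fold with 2 images

fold_wise = np.mean([f.mean() for f in fold_scores])  # (0.83 + 0.79) / 2 = 0.810
image_wise = np.concatenate(fold_scores).mean()       # 4.07 / 5 = 0.814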

The pretrained models from the above runs have been uploaded to drive.

For further reproducibility, we attach the sample training logs of folds 4 and 5 here, since those show the larger gaps.

results_4.csv results_5.csv

Regarding the Betti number: thanks for pointing out the missing package. We have uploaded it to the repository. The Betti number evaluation code we used is a clone of the one from TopoLoss.

Thank you so much for helping us improve the code!

Fivethousand5k commented 1 year ago

Thanks for your reply! @Zyun-Y However, it seems that the complete code for Betti error computation has not been uploaded yet. The provided betti_compute.py returns only a single value, which I guess represents β0?

I ran a demo and its output was 2. (There are 2 connected components and 1 loop in the test array.)

import os
import sys
import torch

# make the Betti_Compute/ directory and the repo root importable
sys.path.append('Betti_Compute/')
sys.path.append(os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
import metrics.ext_libs.Gudhi as gdh

def betti_number(imagely):
    imagely = imagely.detach().cpu().clone().numpy()
    # zero out the border so no structure touches the image boundary
    width, height = imagely.shape
    imagely[width - 1, :] = 0
    imagely[:, height - 1] = 0
    imagely[0, :] = 0
    imagely[:, 0] = 0
    temp = gdh.compute_persistence_diagram(imagely, i=1)
    return len(temp)

if __name__ == '__main__':
    test_array = torch.tensor([
        [0, 0, 0, 0, 0, 0],
        [0, 1, 1, 1, 1, 0],
        [0, 1, 0, 1, 1, 0],
        [0, 1, 1, 1, 0, 0],
        [0, 1, 1, 0, 1, 0],
        [0, 0, 0, 0, 0, 0],
    ])
    num = betti_number(test_array)
    print(num)  # the output is 2

Thus, could you please upload the complete code for calculating β1?
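In the meantime, here is the stopgap I am using to get both numbers, counting holes as bounded background components with scipy.ndimage (my own workaround, not the persistence-based method from TopoLoss):

import numpy as np
from scipy import ndimage

def betti_0_1(mask):
    mask = np.asarray(mask, dtype=int)
    # β0: number of 4-connected foreground components
    _, b0 = ndimage.label(mask)
    # β1: 8-connected background components minus the single outer one;
    # pad with background so everything outside merges into one component
    bg = np.pad(1 - mask, 1, constant_values=1)
    _, n_bg = ndimage.label(bg, structure=np.ones((3, 3), dtype=int))
    return b0, n_bg - 1

On the 6x6 test array above this returns (2, 1): two components and one loop, matching the manual count.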

Fivethousand5k commented 1 year ago

I'm still confused about this issue; maybe the original way of computing β1 provided by the author of TopoLoss is itself problematic. Anyway, thanks for your replies.