assassint2017 / MICCAI-LITS2017

liver segmentation using deep learning
388 stars 91 forks

Strange dice coefficient on volume-54.nii #5

Open mitiandi opened 5 years ago

mitiandi commented 5 years ago

I randomly split the training dataset (131 cases) into two non-overlapping subsets: a training set (105 cases) and a test set (26 cases). After training the network and evaluating it on the test set, I obtained a Dice per case of 0.932, which is lower than your result (0.957). Most importantly, I found that the Dice coefficient on volume-54.nii is very poor (0.18). I then visualized the segmentation result of volume-54.nii, compared it to its ground truth, and found a dislocation of about 10 slices between them. For example, the segmentation result started at the 62nd slice, while the ground truth started at the 52nd slice.
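A shift like that is enough to explain the low score on its own. As a reference point, per-case Dice on binary volumes can be sketched as follows (the array shapes and synthetic masks here are illustrative, not from the repo):

```python
import numpy as np

def dice_per_case(pred, gt):
    """Dice coefficient between two binary volumes of the same shape."""
    pred = np.asarray(pred, dtype=bool)
    gt = np.asarray(gt, dtype=bool)
    intersection = np.logical_and(pred, gt).sum()
    denom = pred.sum() + gt.sum()
    return 2.0 * intersection / denom if denom > 0 else 1.0

# A 10-slice dislocation tanks the score even when the masks themselves agree:
gt = np.zeros((100, 8, 8), dtype=bool)
gt[52:80] = True    # ground-truth liver spans slices 52-79
pred = np.zeros((100, 8, 8), dtype=bool)
pred[62:90] = True  # prediction shifted 10 slices later
print(round(dice_per_case(pred, gt), 3))  # -> 0.643
```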

mitiandi commented 5 years ago

And there is another question that confuses me: 'epoch=3000' is used to train the network, but I found that the network tends to converge quite early (perhaps before epoch 1000).
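If the network really has converged long before epoch 3000, monitoring validation loss with early stopping avoids the wasted epochs; a minimal sketch (the patience and threshold values are illustrative, not from this repo):

```python
class EarlyStopper:
    """Stop training once validation loss stops improving for `patience` checks."""
    def __init__(self, patience=50, min_delta=1e-4):
        self.patience = patience
        self.min_delta = min_delta
        self.best = float("inf")
        self.bad_checks = 0

    def step(self, val_loss):
        """Record one validation result; return True when training should stop."""
        if val_loss < self.best - self.min_delta:
            self.best = val_loss
            self.bad_checks = 0
        else:
            self.bad_checks += 1
        return self.bad_checks >= self.patience
```

In the training loop this would be used as `if stopper.step(val_loss): break`.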

ahmadmubashir commented 5 years ago

Please tell me: are you giving the whole volume for training, or splitting it into 3D patches? Please confirm. Thanks.

ahmadmubashir commented 5 years ago

And from which file of the code do you run testing?

mitiandi commented 5 years ago

Please tell me: are you giving the whole volume for training, or splitting it into 3D patches? Please confirm. Thanks.

As the author did, I used 256×256×48 3D patches as the input of the network. The patches were obtained by 'data_prepare/get_random_data.py' and 'dataset/data_random.py'. The former pre-processes the training data, including downsampling in the xy plane and keeping the slices that contain liver (expanded by 20 slices in both directions along the z axis), while the latter randomly extracts 48 continuous slices from the former's output. The latter's results are used directly as the input of the network.
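Under that description, the pre-processing step can be sketched roughly as follows (plain 2× striding stands in for the repo's actual in-plane downsampling; `expand=20` matches the margin described above, everything else is illustrative):

```python
import numpy as np

def preprocess(ct, seg, expand=20):
    """Downsample both volumes in-plane and keep only the z-range that
    contains liver, padded by `expand` slices on each side.
    Assumes the volume contains at least one liver slice."""
    ct_small = ct[:, ::2, ::2]     # crude 2x in-plane downsampling
    seg_small = seg[:, ::2, ::2]   # labels get the same treatment
    liver = np.any(seg > 0, axis=(1, 2))   # which z-slices contain liver
    z = np.where(liver)[0]
    start = max(0, int(z[0]) - expand)
    end = min(ct.shape[0], int(z[-1]) + expand + 1)
    return ct_small[start:end], seg_small[start:end]
```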

mitiandi commented 5 years ago

And from which file of the code do you run testing?

val.py

ahmadmubashir commented 5 years ago

Please tell me: are you giving the whole volume for training, or splitting it into 3D patches? Please confirm. Thanks.

As the author did, I used 256×256×48 3D patches as the input of the network. The patches were obtained by 'data_prepare/get_random_data.py' and 'dataset/data_random.py'. The former pre-processes the training data, including downsampling in the xy plane and keeping the slices that contain liver (expanded by 20 slices in both directions along the z axis), while the latter randomly extracts 48 continuous slices from the former's output. The latter's results are used directly as the input of the network.

I did the above steps. The first script gave me a 256×256×n image, but it did not give me 256×256×48 3D patches. Should I create them manually? I understand the output of 'data_prepare/get_random_data.py', but the second script, 'dataset/data_random.py', did not give me exactly a 256×256×48 3D patch. It is used in train_ds.py as `from dataset.dataset_random import train_ds`. Does this make the 256×256×48 3D patches automatically, or do we make these samples manually? One more issue I found: the size of a volume after 'data_prepare/get_random_data.py' is 256×256×n, but the size of its ground truth is 512×512×n. Why? Please help me. Thanks.
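On the 256×256×n vs 512×512×n question: that mismatch usually means the ground truth is not being resampled together with the CT. Whatever resampling is applied to the volume should be applied to the labels too (with nearest-neighbor so class values stay intact); a hedged sketch:

```python
import numpy as np

def downsample_pair(ct, seg, factor=2):
    """Downsample CT and label volume in-plane by the same integer factor.
    Plain striding is a stand-in for real interpolation; for the label
    volume it behaves like nearest-neighbor, so class values survive."""
    assert ct.shape == seg.shape, "CT and ground truth must match first"
    return ct[:, ::factor, ::factor], seg[:, ::factor, ::factor]
```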

mitiandi commented 5 years ago

Please tell me: are you giving the whole volume for training, or splitting it into 3D patches? Please confirm. Thanks.

As the author did, I used 256×256×48 3D patches as the input of the network. The patches were obtained by 'data_prepare/get_random_data.py' and 'dataset/data_random.py'. The former pre-processes the training data, including downsampling in the xy plane and keeping the slices that contain liver (expanded by 20 slices in both directions along the z axis), while the latter randomly extracts 48 continuous slices from the former's output. The latter's results are used directly as the input of the network.

I did the above steps. The first script gave me a 256×256×n image, but it did not give me 256×256×48 3D patches. Should I create them manually? I understand the output of 'data_prepare/get_random_data.py', but the second script, 'dataset/data_random.py', did not give me exactly a 256×256×48 3D patch. It is used in train_ds.py as `from dataset.dataset_random import train_ds`. Does this make the 256×256×48 3D patches automatically, or do we make these samples manually? Thanks. Please help me.

The former is correct: the 3D patches are not saved. The data is automatically organized and loaded in the form of 256×256×48 patches, and the implementation, in 'dataset/data_random.py', is as follows.

```python
# Randomly select 48 consecutive slices along the z (slice) axis
start_slice = random.randint(0, ct_array.shape[0] - size)
end_slice = start_slice + size - 1

ct_array = ct_array[start_slice:end_slice + 1, :, :]
seg_array = seg_array[start_slice:end_slice + 1, :, :]
```

zz10001 commented 5 years ago

Hi @mitiandi, @ahmadmubashir, does this project start by using data_prepare/get_random_data.py to pre-process the training data and dataset/data_random.py to extract 48 continuous slices, after which I can run `python train_ds.py` directly to start training? Looking forward to your reply! Best, Ming

mitiandi commented 5 years ago

To be honest, I have forgotten the details, since a long time has passed, but that seems right. All you need to do is change the data path to yours. Good luck~


zz10001 commented 5 years ago

To be honest, I have forgotten the details, since a long time has passed, but that seems right. All you need to do is change the data path to yours. Good luck~

Thanks for your kind help; have a good day!

Oct6ber commented 4 years ago

I randomly split the training dataset (131 cases) into two non-overlapping subsets: a training set (105 cases) and a test set (26 cases). After training the network and evaluating it on the test set, I obtained a Dice per case of 0.932, which is lower than your result (0.957). Most importantly, I found that the Dice coefficient on volume-54.nii is very poor (0.18). I then visualized the segmentation result of volume-54.nii, compared it to its ground truth, and found a dislocation of about 10 slices between them. For example, the segmentation result started at the 62nd slice, while the ground truth started at the 52nd slice.

Hi, have you found the reason, or how did you solve it?

zz10001 commented 4 years ago

Hi, have you found the reason, or how did you solve it?

You can solve it by https://github.com/assassint2017/MICCAI-LITS2017/issues/6#issue-375503582.
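Independent of that fix, it can help to quantify the dislocation directly. Assuming both volumes have been loaded as binary NumPy arrays (names illustrative), a small sketch:

```python
import numpy as np

def z_offset(pred, gt):
    """Difference (in slices) between the first non-empty slice of the
    prediction and of the ground truth; non-zero means a dislocation
    like the one described in this issue."""
    first_nonempty = lambda vol: int(np.argmax(np.any(vol > 0, axis=(1, 2))))
    return first_nonempty(pred) - first_nonempty(gt)
```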

Oct6ber commented 4 years ago

Hi, have you found the reason, or how did you solve it?

You can solve it by #6 (comment).

Thank you very much

Oct6ber commented 4 years ago

Hi, have you found the reason, or how did you solve it?

You can solve it by #6 (comment).

Hi, I used this method, but the Dice coefficient on volume-43 is still 0.67. I want to know if this is a normal value.

zz10001 commented 4 years ago

Hi, I used this method, but the Dice coefficient on volume-43 is still 0.67. I want to know if this is a normal value.

Sorry, I haven't run into this problem. I just use volumes 101-130 for validation and 1-100 for training, as in the attached screenshot.
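That split amounts to a simple partition of the case list (the file-naming pattern here is illustrative):

```python
# Partition the LiTS training cases into fixed train/val lists.
train_files = [f"volume-{i}.nii" for i in range(1, 101)]   # cases 1-100
val_files = [f"volume-{i}.nii" for i in range(101, 131)]   # cases 101-130

assert not set(train_files) & set(val_files)  # the two sets never overlap
```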

Oct6ber commented 4 years ago

Hi, I used this method, but the Dice coefficient on volume-43 is still 0.67. I want to know if this is a normal value.

Sorry, I haven't run into this problem. I just use volumes 101-130 for validation and 1-100 for training, as in the attached screenshot.

Thank you very much

lcl180 commented 3 years ago

The input of DialResUNet is 512×512, but the output is 1024×1024. Shouldn't the input and output be the same size? Do you know why this is?
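One plausible cause, assuming the decoder ends in a transposed convolution: its output size per dimension is `(in - 1) * stride - 2 * padding + kernel + output_padding`, so a leftover stride-2 layer doubles 512 to 1024. A quick check (the layer parameters are hypothetical, not read from this repo):

```python
def conv_transpose_out(size, kernel, stride, padding=0, output_padding=0):
    """Spatial output size of a transposed convolution, per dimension."""
    return (size - 1) * stride - 2 * padding + kernel + output_padding

# Either of these common stride-2 decoder layers turns 512 into 1024:
print(conv_transpose_out(512, kernel=2, stride=2))             # -> 1024
print(conv_transpose_out(512, kernel=4, stride=2, padding=1))  # -> 1024
```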

lcl180 commented 3 years ago

What is the procedure for running this project's code? Can you share it? Thank you.

life-8079 commented 2 years ago

Hi, I used this method, but the Dice coefficient on volume-43 is still 0.67. I want to know if this is a normal value.

Sorry, I haven't run into this problem. I just use volumes 101-130 for validation and 1-100 for training, as in the attached screenshot.

[screenshot] Hi, why are my results like these? The Dice and Jaccard values seem wrong.

zz10001 commented 2 years ago

Hi, why are my results like these? The Dice and Jaccard values seem wrong.

Have you visualized the prediction in ITK-SNAP or another viewer? Maybe you should inspect the prediction first.
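Besides opening the file in ITK-SNAP, the label values can be sanity-checked directly. Assuming the prediction has been loaded into a NumPy array (e.g. via nibabel or SimpleITK), a sketch:

```python
import numpy as np

def label_summary(pred):
    """Map each label value to its voxel count; in a liver mask the
    background label (0) should dominate, not the foreground label."""
    values, counts = np.unique(pred, return_counts=True)
    return dict(zip(values.tolist(), counts.tolist()))

mask = np.zeros((4, 4, 4), dtype=np.uint8)
mask[1:3, 1:3, 1:3] = 1          # a small 2x2x2 "liver"
print(label_summary(mask))       # -> {0: 56, 1: 8}
```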

life-8079 commented 2 years ago

Hi, why are my results like these? The Dice and Jaccard values seem wrong.

Have you visualized the prediction in ITK-SNAP or another viewer? Maybe you should inspect the prediction first.

[screenshot] Hi, the result is like this: the image's background is 1 and the liver is 0. Could you help me?

zz10001 commented 2 years ago

Hi, the result is like this: the image's background is 1 and the liver is 0. Could you help me?

It seems the labels for liver and background got swapped. You just need to negate the .nii you predicted.
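For a binary mask, negating means swapping 0 and 1; once the predicted .nii is loaded into a NumPy array (e.g. with nibabel or SimpleITK), the flip is one line. A hedged sketch:

```python
import numpy as np

def invert_binary_mask(mask):
    """Swap 0 and 1 in a binary label volume (background <-> foreground)."""
    mask = np.asarray(mask)
    assert set(np.unique(mask).tolist()) <= {0, 1}, "expected a binary mask"
    return 1 - mask
```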

zhouyizhuo commented 9 months ago

Hi @zz10001, @life-8079, I'm wondering why para.size is 48 (see the attached image). After I changed para.size to 32 and para.slice_thickness from 1 to 4, I found that kiunet_org can't work. I'd appreciate it if you could give some help! (see the attached image)