ellisdg / 3DUnetCNN

Pytorch 3D U-Net Convolution Neural Network (CNN) designed for medical image segmentation
MIT License

Running testing.py #7

Closed build2create closed 7 years ago

build2create commented 7 years ago

Would this be the correct way to run testing.py?

if __name__ == '__main__':
    run_test_case(0,"./data_test")

where ./data_test is the out_dir where prediction.nii.gz should be written. Should I also call predict_from_data_file_and_write_image from run_test_case() to actually write the image?

ellisdg commented 7 years ago

@build2create sorry, I hadn't tested that code out yet. With the latest commit that I just committed, you should be able to run the code as you have it in your comment.

build2create commented 7 years ago

Alright!

build2create commented 7 years ago

With the pre-trained model given here I generated training_ids.pkl and testing_ids.pkl, basically just to get the testing ids required to test the model. I defined the main block as:

if __name__ == '__main__':
    run_test_case(1,"./data_test")

The problem is that it is not producing any data_dir, nor is it appending to one if I create it. And this:

if __name__ == '__main__':
    model = load_model(config["model_file"])
    predict_from_data_file_and_write_image(model, config["hdf5_file"], 0, "result.nii.gz")

And this

if __name__ == '__main__':
    model = load_model(config["model_file"])
    predict_from_data_file_and_write_image(model, config["hdf5_file"], 0, data_dir)  # data_dir is a directory

All of them fail to produce any segmented output. Also, where does get_prediction_labels get called? @ellisdg, any idea on this one? Apologies for the constant trouble.

ellisdg commented 7 years ago

Maybe try giving a full path name instead of "./data_test"? Does that work?
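One way to rule out relative-path issues is to expand the path yourself before passing it in; a small sketch of the idea (the directory name is just the one used earlier in this thread, not something the repo requires):

```python
import os

# Expand the relative output directory to an absolute path and create it
# up front, before handing it to run_test_case.
out_dir = os.path.abspath("./data_test")
os.makedirs(out_dir, exist_ok=True)  # no error if it already exists
print(out_dir)
```

`os.path.abspath` resolves against the current working directory, so this also makes it obvious when the script is being run from a different directory than expected.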

build2create commented 7 years ago

It worked, but gave the error ValueError: "concat" mode can only merge layers with matching output shapes except for the concat axis. Layer shapes: [(None, 1024, 18, 18, 9), (None, 256, 18, 18, 18)]. I then updated the configuration in the keras.json file as:

{
"image_dim_ordering": "th",
"epsilon": 1e-07,
"floatx": "float32",
"backend": "tensorflow"
}

Note: earlier the keras.json file had "backend": "theano", which is why it was giving the error (refer this). It finally ran, but gave the error: ValueError: Invalid objective: dice_coef_loss

then I added these:

    from model import dice_coef_loss
    from model import dice_coef

and:

    model = load_model(config["model_file"], custom_objects={'dice_coef_loss': dice_coef_loss, 'dice_coef': dice_coef})

Now I get this error:

ValueError: Error when checking : expected input_1 to have 5 dimensions, but got array with shape (3, 144, 144, 144). To fix this I added 'T2' to config["training_modalities"]. Now the call in main, run_test_case(1, '/home/adminsters/Desktop/3DUnetCNN-master/data_dir'), gives an index error:

  File "testing.py", line 67, in <module>
    run_test_case(1,'/home/adminsters/Desktop/3DUnetCNN-master/data_dir');
  File "testing.py", line 53, in run_test_case
    image = nib.Nifti1Image(test_data[i], affine)
IndexError: index 3 is out of bounds for axis 0 with size 3

My testing_ids.pkl is: [141, 20, 15, 57, 186, 167, 60, 80, 23, 54, 143, 158, 132, 83, 61, 81, 168, 85, 157, 90, 147, 86, 127, 199, 74, 136, 24, 182, 146, 152, 170, 44, 30, 156, 172, 154, 1, 25, 59, 45]

I guess it is because I changed config["training_modalities"]. @ellisdg, how do I fix this index error?
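For what it's worth, the IndexError is consistent with looping over more configured modality names than the data array has channels; a toy NumPy sketch of the mismatch (the shapes are shrunken stand-ins for the (3, 144, 144, 144) array in the error message):

```python
import numpy as np

# Tiny stand-in for the stored test data: 3 modality channels per case.
test_data = np.zeros((3, 2, 2, 2))

# Four configured modality names, but only three channels of data.
training_modalities = ["T1", "T1c", "FLAIR", "T2"]

for i, name in enumerate(training_modalities):
    try:
        channel = test_data[i]  # fails once i reaches 3
    except IndexError:
        print("IndexError at modality", name)  # triggered for 'T2'
```

If the HDF5 file was built with three modalities, adding 'T2' to the config without regenerating the data file would produce exactly this kind of out-of-bounds access.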

ellisdg commented 7 years ago

There are no error messages? Are you sure it is running? Have you tried a debugger?

On Apr 20, 2017 11:47 PM, "build2create" notifications@github.com wrote:

Nope, it did not work


build2create commented 7 years ago

I am getting an index error. Please see the updated comment.

build2create commented 7 years ago

In case you are not able to see it, I am reposting it here. Initially I was getting this: ValueError: "concat" mode can only merge layers with matching output shapes except for the concat axis. Layer shapes: [(None, 1024, 18, 18, 9), (None, 256, 18, 18, 18)]. I then updated the configuration in the keras.json file as:


{
"image_dim_ordering": "th",
"epsilon": 1e-07,
"floatx": "float32",
"backend": "tensorflow"
}

Note: earlier the keras.json file had "backend": "theano", which is why it was giving the error (refer this). It finally ran, but gave the error: ValueError: Invalid objective: dice_coef_loss. Then I added these:

from model import dice_coef_loss
from model import dice_coef

and:

    model = load_model(config["model_file"], custom_objects={'dice_coef_loss': dice_coef_loss, 'dice_coef': dice_coef})

Now I get this error:

ValueError: Error when checking : expected input_1 to have 5 dimensions, but got array with shape (3, 144, 144, 144). So in config.py I added 'T2' to config["training_modalities"]. Now the call in main, run_test_case(1, '/home/adminsters/Desktop/3DUnetCNN-master/data_dir'), gives an index error:

  File "testing.py", line 67, in <module>
    run_test_case(1,'/home/adminsters/Desktop/3DUnetCNN-master/data_dir');
  File "testing.py", line 53, in run_test_case
    image = nib.Nifti1Image(test_data[i], affine)
IndexError: index 3 is out of bounds for axis 0 with size 3

My testing_ids.pkl is: [141, 20, 15, 57, 186, 167, 60, 80, 23, 54, 143, 158, 132, 83, 61, 81, 168, 85, 157, 90, 147, 86, 127, 199, 74, 136, 24, 182, 146, 152, 170, 44, 30, 156, 172, 154, 1, 25, 59, 45]

I guess it is because I changed config["training_modalities"]. @ellisdg, how do I fix this index error?

ellisdg commented 7 years ago

It looks like you may have modified line 53, or you may not have the latest version of the file.

build2create commented 7 years ago

Ok, my bad; I ran the updated code this time. I got this output: [image] for test image id 20. How should I infer segmentation labels from this, i.e. which region is necrotic, which is edematous, and so on? Like the one in this. When I choose to open the segmented image I get this: [image] Which patch is tumorous, and how should I interpret it?

ellisdg commented 7 years ago

The image is a scalar image and not a label map. You need to view it in grayscale as a scalar image.

On Apr 22, 2017 12:24 AM, "build2create" notifications@github.com wrote:

Ok my bad, ran the updated code this time. I got his output : [image: image] https://cloud.githubusercontent.com/assets/25721143/25301595/219251ba-2749-11e7-87a8-0dd95880fda5.png for a test image id 20. How should I infer segmentation labels from this, which one is necrotic,edemic and so on? Like the one in this https://github.com/naldeborgh7575/brain_segmentation When I choose to open segmented image I get this: [image: image] https://cloud.githubusercontent.com/assets/25721143/25301629/d02a263a-2749-11e7-86a1-ddaf4b51fbb8.png Which patch is tumorous how should I interpret?

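If a discrete label map is wanted rather than the raw scalar output, one option is to threshold the probabilities yourself; a minimal NumPy sketch (the 0.5 cutoff is an arbitrary choice for illustration, not something the repo prescribes):

```python
import numpy as np

# Toy stand-in for a predicted probability volume with values in [0, 1].
prediction = np.array([[0.1, 0.7],
                       [0.9, 0.2]])

# Threshold into a binary mask: 1 = predicted tumor, 0 = background.
mask = (prediction > 0.5).astype(np.uint8)
print(mask)  # [[0 1]
             #  [1 0]]
```

The same thresholded array could then be wrapped in a nibabel Nifti1Image (with the original affine) to view it as a label map.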

build2create commented 7 years ago

Can you elaborate? The first screenshot shows a grayscale image. Is that correct? How do I read the first image? Is it that darker parts are tumorous? Can you cite or give an example?

build2create commented 7 years ago

If I have understood it right, this is not for tumor detection but just for volumetric segmentation, right?

build2create commented 7 years ago

For the FLAIR modality I am getting an evidently slightly more greyish patch. Is that the tumor?

ellisdg commented 7 years ago

T1: [image: t1]
T1c: [image: t1c]
FLAIR: [image: flair]
Tumor prediction (ranges from 0 to 1): [image: prediction]
Manual segmentation (the separate tumor regions are not used by the classifier): [image: truth]
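A common way to compare such a prediction against the manual segmentation is the Dice coefficient, the overlap measure that the repo's dice_coef loss is based on; a standalone sketch of the formula 2·|A∩B| / (|A| + |B|) on binary masks:

```python
import numpy as np

def dice(pred, truth):
    """Dice overlap between two binary masks: 2*|A & B| / (|A| + |B|)."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    return 2.0 * intersection / (pred.sum() + truth.sum())

pred = np.array([1, 1, 0, 0])
truth = np.array([1, 0, 0, 0])
print(dice(pred, truth))  # 2*1 / (2 + 1) ≈ 0.667
```

A score of 1 means perfect overlap, 0 means none; the Keras loss version adds a smoothing term and negates the score so it can be minimized.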

build2create commented 7 years ago

Thanks. By the way, which tool are you using for viewing? Is it ITK-SNAP?

ellisdg commented 7 years ago

3D Slicer. I know a lot of people use ITK-SNAP, but I've never taken the time to learn how to use it. 3D Slicer has a lot of features, so there might be a bit of a learning curve, but they also have plenty of helpful tutorials.

build2create commented 7 years ago

Thanks a lot!

build2create commented 7 years ago

By volumetric segmentation we are detecting the presence or absence of the tumor, but not the different segments within the tumor, like necrotic, core, or enhancing. Doing that would be a multi-class classification problem, right?

build2create commented 7 years ago

Also, why are we not able to segment the tumor into sub-parts? Is it because of the dataset? I viewed the ground truth of the BRATS dataset in mha format, and they do provide all the segmentation labels.

ellisdg commented 7 years ago

You should be able to train a classifier to segment different tumor regions with Keras. However, I currently have no need for multi-class labels, so I have not implemented it yet. I will probably add this eventually, but that could be months from now.
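For anyone attempting multi-class training themselves, a first step would be one-hot encoding the ground-truth label map into per-region channels; a minimal sketch (the label values follow the BRATS convention mentioned above, but check your own ground-truth files):

```python
import numpy as np

# Toy stand-in for a BRATS-style label volume
# (0 = background, 1 = necrosis, 2 = edema, 4 = enhancing tumor).
labels = np.array([0, 1, 2, 4, 1])

classes = [1, 2, 4]
# One binary channel per tumor class, usable as a multi-class target.
one_hot = np.stack([(labels == c).astype(np.uint8) for c in classes])
print(one_hot.shape)  # (3, 5)
```

Each output channel could then be paired with a per-class Dice loss, which is one common way such multi-class 3D segmentation networks are trained.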

mingrui commented 7 years ago

for anyone coming from the future, testing.py was renamed to predict.py:

https://github.com/ellisdg/3DUnetCNN/commit/b4012bb0b74dc7c48aa468fc0d277f5f0e374720#diff-22508acf6c6356eb94fbcda75eaf09f2