Closed · build2create closed this issue 7 years ago
@build2create sorry, I hadn't tested that code out yet. With the latest commit that I just committed, you should be able to run the code as you have it in your comment.
Alright!
With the pre-trained model given here I generated training_ids.pkl and testing_ids.pkl, basically just to get the testing ids required to test the model. I defined the main block as:
if __name__ == '__main__':
    run_test_case(1, "./data_test")
The problem is that it is not producing any data_dir, nor is it appending to one if I create it myself. And this:
if __name__ == '__main__':
    model = load_model(config["model_file"])
    predict_from_data_file_and_write_image(model, config["hdf5_file"], 0, "result.nii.gz")
And this:
if __name__ == '__main__':
    model = load_model(config["model_file"])
    predict_from_data_file_and_write_image(model, config["hdf5_file"], 0, data_dir)  # data_dir is a directory
All of them fail to produce any segmented output. Also, where is get_prediction_labels getting called?
@ellisdg any idea on this one? Apologies for the constant trouble.
Maybe try giving a full path name instead of "./data_test"? Does that work?
It worked but gave an error:
ValueError: "concat" mode can only merge layers with matching output shapes except for the concat axis. Layer shapes: [(None, 1024, 18, 18, 9), (None, 256, 18, 18, 18)]
Then I updated the configuration in the keras.json file to:
{
"image_dim_ordering": "th",
"epsilon": 1e-07,
"floatx": "float32",
"backend": "tensorflow"
}
Note: earlier the keras.json file had "backend": "theano", which is why it was giving the error (refer to this).
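A quick stdlib sanity check that the settings took effect. This parses an inline copy of the config above; in practice you would read the real file, which Keras typically looks for at ~/.keras/keras.json:

```python
import json

# Inline copy of the keras.json contents shown above.
cfg = json.loads("""
{
    "image_dim_ordering": "th",
    "epsilon": 1e-07,
    "floatx": "float32",
    "backend": "tensorflow"
}
""")

# The "concat" shape error goes away once the dim ordering matches the model.
print(cfg["image_dim_ordering"], cfg["backend"])  # th tensorflow
```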
It finally ran but gave this error:
ValueError: Invalid objective: dice_coef_loss
Then I added these imports:
from model import dice_coef_loss
from model import dice_coef
and
model = load_model(config["model_file"], custom_objects={'dice_coef_loss': dice_coef_loss, 'dice_coef': dice_coef})
Now I get this error:
ValueError: Error when checking : expected input_1 to have 5 dimensions, but got array with shape (3, 144, 144, 144)
In response, I added 'T2' to config["training_modalities"] in config.py.
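As a side note, the ValueError above complains about the number of axes, not the number of channels: the model's input expects a 5D array (batch, channels, x, y, z), so adding a modality changes the channel count but not the dimensionality. A NumPy sketch of the missing batch axis (the array here is an illustrative stand-in, not the repository's actual data):

```python
import numpy as np

# Illustrative stand-in for the loaded image data: (channels, x, y, z).
data = np.zeros((3, 144, 144, 144), dtype=np.float32)

# Keras expects a leading batch axis: (batch, channels, x, y, z).
batch = data[np.newaxis]
print(batch.shape)  # (1, 3, 144, 144, 144)
```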
Now, with the call in main:
run_test_case(1,'/home/adminsters/Desktop/3DUnetCNN-master/data_dir');
it gives an index error:
File "testing.py", line 67, in <module>
run_test_case(1,'/home/adminsters/Desktop/3DUnetCNN-master/data_dir');
File "testing.py", line 53, in run_test_case
image = nib.Nifti1Image(test_data[i], affine)
IndexError: index 3 is out of bounds for axis 0 with size 3
My testing_ids.pkl is:
[141, 20, 15, 57, 186, 167, 60, 80, 23, 54, 143, 158, 132, 83, 61, 81, 168, 85, 157, 90, 147, 86, 127, 199, 74, 136, 24, 182, 146, 152, 170, 44, 30, 156, 172, 154, 1, 25, 59, 45]
I guess it is because I changed the config["training_modalities"]
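That would be consistent with the traceback: the prediction array has three channels on axis 0, but run_test_case now iterates over four modality names. A small NumPy illustration (the names and shapes are stand-ins, not the repository's actual variables):

```python
import numpy as np

# Stand-ins: the real test_data comes from the model's prediction.
test_data = np.zeros((3, 144, 144, 144), dtype=np.float32)  # 3 channels on axis 0
modalities = ["T1", "T1c", "FLAIR", "T2"]                   # 4 names after adding 'T2'

for i, name in enumerate(modalities):
    try:
        channel = test_data[i]          # fails once i reaches 3
    except IndexError as err:
        print(name, "->", err)          # index 3 is out of bounds for axis 0 with size 3
```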
@ellisdg how to fix this Index error?
There are no error messages? Are you sure it is running? Have you tried a debugger?
I am getting an index error. Please see my updated comment above.
It looks like you may have modified line 53, or you may not have the latest version of the file.
Ok, my bad, I ran the updated code this time. I got this output for a test image id 20: https://cloud.githubusercontent.com/assets/25721143/25301595/219251ba-2749-11e7-87a8-0dd95880fda5.png How should I infer segmentation labels from this, i.e. which region is necrotic, edemic, and so on? Like the one in https://github.com/naldeborgh7575/brain_segmentation. When I choose to open the segmented image I get this: https://cloud.githubusercontent.com/assets/25721143/25301629/d02a263a-2749-11e7-86a1-ddaf4b51fbb8.png Which patch is tumorous, and how should I interpret it?
The image is a scalar image and not a label map. You need to view it in grayscale as a scalar image.
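One way to turn the scalar prediction into a binary label map is to threshold it; a minimal NumPy sketch, assuming a 0.5 cutoff (the threshold is a choice, not something fixed by the repository, and the random array stands in for the volume you would load from the prediction NIfTI, e.g. via nibabel):

```python
import numpy as np

# Stand-in for the array loaded from the prediction NIfTI file.
rng = np.random.RandomState(0)
prediction = rng.rand(4, 4, 4)                    # scalar values in [0, 1]

label_map = (prediction > 0.5).astype(np.uint8)   # 1 = predicted tumor, 0 = background
```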
Can you elaborate? The first screenshot shows a grayscale image. Is that correct? How do I read the first image? Is it that the darker parts are tumorous? Can you cite or give an example?
If I have understood it right, this is not for tumor detection but just for volumetric segmentation, right?
For the FLAIR modality I am getting a slightly but evidently more greyish patch. Is that the tumor?
(Image captions; images omitted.)
T1:
T1c:
FLAIR:
Tumor prediction, ranges from 0 to 1:
Manual segmentation. The separate tumor regions are not used by the classifier:
Thanks. By the way, which tool are you using for viewing? Is it ITK-SNAP?
3D Slicer. I know a lot of people use ITK-SNAP, but I've never taken the time to learn how to use it. 3D Slicer has a lot of features, so there might be a bit of a learning curve, but they also have plenty of helpful tutorials.
Thanks a lot!
By volumetric segmentation we are detecting the presence or absence of the tumor, but not the different segments within the tumor, like necrotic, core, or enhancing. Doing this would be a multi-class classification problem, right?
Also, why are we not able to segment the tumor into sub-parts? Is it because of the dataset? I viewed the ground truth of the BRATS dataset in mha format, and they do provide all the segmentation labels.
You should be able to train a classifier to segment different tumor regions with Keras. However, I currently have no need for multi-class labels, so I have not implemented it yet. I will probably add this eventually, but that could be months from now.
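If multi-class labels were implemented, one preparatory step would be one-hot encoding the integer label map into one binary channel per class, as a softmax output layer expects. A NumPy sketch with illustrative label values (the class numbering here is hypothetical, not the repository's):

```python
import numpy as np

# Illustrative 2x2 label slice: 0 = background, 1-3 = tumor sub-regions.
labels = np.array([[0, 1],
                   [2, 3]])

n_classes = 4
one_hot = np.eye(n_classes, dtype=np.uint8)[labels]   # (2, 2, 4): one channel per class
one_hot = np.moveaxis(one_hot, -1, 0)                 # channels-first: (4, 2, 2)
```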
For anyone coming from the future: testing.py was renamed to predict.py.
Will this be the correct way to call testing.py, where ./data_test is the name of the out_dir in which prediction.nii.gz should be written? Should I also call predict_from_data_file_and_write_image from run_test_case() to actually write the image?