naldeborgh7575 / brain_segmentation

MIT License
301 stars 186 forks

No detection or inverse results #2

Open JRevati opened 7 years ago

JRevati commented 7 years ago

Hi Nikki,

I am trying to replicate your model for brain tumour segmentation to explore image analysis tools and algorithms. After carefully implementing your model, I get either completely blank predictions or inverted results (the image has slices marked in areas where the tumour is absent). I am using your pre-trained model with the available weights downloaded from this repository. Could you please help me improve my results? The few changes I made are listed here:

1. The BasicModel() used in Segmentation_Models.py seems undefined; I replaced it with SegmentationModel().
2. I downloaded the BRATS2015 dataset as is. Are there any changes to be made to it before use?

Thanks in advance.

build2create commented 7 years ago

@JRevati Same problem. Once the model is saved in models/examples.json and we test, we get an IOError: cannot identify image file <open file '/home/adminsters/Documents/Training/HGG/brats_tcia_pat165_0001/VSD.Brain.XX.O.MR_T1c.40873/VSD.Brain.XX.O.MR_T1c.40873.mha', mode 'rb' at 0x7ff4cae90660>

But the file brats_tcia_pat165_0001/VSD.Brain.XX.O.MR_T1c.40873/VSD.Brain.XX.O.MR_T1c.40873.mha is at the correct location.
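That "cannot identify image file" error usually means an .mha path reached a reader that only understands ordinary image formats like PNG; MetaImage volumes need a medical-imaging library such as SimpleITK instead. A minimal stdlib sketch of telling the two formats apart by their leading bytes (the magic strings are the real format signatures; the function name is made up for illustration):

```python
def sniff_format(path):
    """Guess whether a file is a PNG slice or a MetaImage (.mha) volume.

    PNG files begin with a fixed 8-byte magic number, while MetaImage
    files begin with an ASCII header such as "ObjectType = Image".
    """
    with open(path, 'rb') as f:
        head = f.read(8)
    if head.startswith(b'\x89PNG\r\n\x1a\n'):
        return 'png'
    if head.startswith(b'ObjectTy'):
        return 'mha'
    return 'unknown'
```

If this reports 'mha' for the failing path, the file itself is fine and the problem is which reader the code hands it to.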

JRevati commented 7 years ago

@build2create Can you specify where exactly you get this error? Is it in brain_pipeline.py? In my case the T1c file name gets a suffix "_n" before the .mha extension (after running the script n4_bias_correction.py). So it basically searches for a VSD.Brain.XX.O.MR_T1c.36175_n.mha file instead, as per the code ( t1_n4 = glob(self.path + '/*T1*/*_n.mha') in brain_pipeline ). Did you run and check the output of that script first?
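To see what that glob actually matches, here is a small self-contained sketch; the directory and file names are made up to mirror the BRATS layout quoted in this thread:

```python
import os
import tempfile
from glob import glob

# Fake a patient folder containing one original T1c volume and one
# N4-corrected copy with the "_n" suffix written by n4_bias_correction.py.
patient = tempfile.mkdtemp()
t1c_dir = os.path.join(patient, 'VSD.Brain.XX.O.MR_T1c.40873')
os.makedirs(t1c_dir)
open(os.path.join(t1c_dir, 'VSD.Brain.XX.O.MR_T1c.40873.mha'), 'w').close()
open(os.path.join(t1c_dir, 'VSD.Brain.XX.O.MR_T1c.40873_n.mha'), 'w').close()

# The pattern from brain_pipeline only picks up the bias-corrected file:
t1_n4 = glob(patient + '/*T1*/*_n.mha')
```

So if the n4 step was never run (or wrote its output elsewhere), this glob returns an empty list and the downstream code fails on the original .mha paths.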

build2create commented 7 years ago

@JRevati @naldeborgh7575 Basically, this is what I did, in steps:

Step 1. Ran n4_bias_correction.py (modified to specify the arguments as in brain_pipeline, namely the path etc.) and saved the output to the respective folders with an _n.mha suffix for all T1 and T1c volumes.

Step 2. Ran brain_pipeline.py (uncommenting the code for slices) with norm=n4, which saved the slices to n4_PNG (see the comments in the code). I did not do the other 2 normalisations. Next, I commented that out and ran the code to save the labels for the ground truth. (Doubt 1: where is that Label/ folder used later on?)

Step 3. Ran Segmentation_Models.py. I replaced `train_data = glob('train_data/')` with `train_data = glob('n4_PNG/')`. (Doubt 2: is this OK? n4_PNG contains all the N4-normalised images.) The sequential model ran for 10 epochs, each taking approximately 1800 seconds.

Step 4. Swapped the commented and uncommented portions to run the testing phase. Here I first tried the entire folder (i.e. replaced `tests = glob('testdata/2')` with my path to the testing folder); unfortunately that didn't work out. So I tried a single image: now I get a ValueError on reshape for an .mha image of shape (5, 240, 240), though it runs for a PNG image. (Doubt 3: what must be the input to `tests = glob(?)`)

The biggest doubt after all this is the use of the Label/* ground-truth images (I believe the path Original_Data/Training/HGG/**/*more*/**.mha points to the ground truth). Another big doubt: if we go for loading the two-path CNN, Graph() is deprecated according to the latest Keras documentation. Simply helpless at this point. Please help.

JRevati commented 7 years ago

@build2create This is what I think. Last question first, plus Doubt 2: as you mentioned, Original_Data/Training/HGG/**/*more*/**.mha is the path to the ground truth; moreover, it is used to generate the ground truth that gets appended to the strip created from the other 4 scan modalities, e.g. scans = [flair[0], t1[0], t1[1], t2[0], gt[0]] in brain_pipeline. Now, if you haven't used any of the other normalised forms, I don't think that will cause any trouble. The labels are saved at the path you provide to the save_labels() method. Also, Graph is deprecated in later versions, so I used Keras 1.1.1 for that reason. From 1.2.1 onwards, Graph() will not work. If you are planning to train the model on your own, you need to either use the alternatives or downgrade the Keras version.
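A tiny guard like this can fail fast with a clear message before any Graph() code is touched; a sketch only, where the 1.2.1 cutoff comes from the comment above and the function name is made up:

```python
def require_old_keras(version):
    # Graph() models only work on Keras releases before 1.2.1 (per this
    # thread), so refuse to continue on anything newer.
    parts = tuple(int(p) for p in version.split('.')[:3])
    if parts >= (1, 2, 1):
        raise RuntimeError('Graph() needs keras < 1.2.1, found ' + version)
    return version
```

Calling it with keras.__version__ at startup turns a confusing AttributeError deep in model construction into an immediate, explicit version error.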

Doubt 3: the input to the test glob should be the folder path where you have saved the preprocessed test images (prepared like the training ones) from the downloaded BRATS testing folder. In my case this works. Note that you will have to run all the pipeline methods on the test images to get the same effect and dimensions. In patch_library.py you will find a comment where it is explicitly mentioned that training images should have shape (5*240, 240), which also applies to test images.
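The ValueError on a (5, 240, 240) .mha image fits this shape requirement: the code wants the five per-slice channels stacked vertically into one (5*240, 240) strip, which a plain C-order reshape produces. A numpy sketch (not code from the repo):

```python
import numpy as np

# A raw slice with 5 modality channels arrives as (5, 240, 240); the
# patch code expects them stacked vertically into one (1200, 240) strip.
slice_5ch = np.arange(5 * 240 * 240).reshape(5, 240, 240)
strip = slice_5ch.reshape(5 * 240, 240)
```

After the reshape, rows 0-239 of the strip are exactly the first channel, rows 240-479 the second, and so on.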

Doubt 1: the labels are used to feed y_train (see find_patches(), where you provide the labels folder path) and later in calculating the Dice coefficient, etc.

I haven't used the two-path model, so I am unable to comment on it, but I hope the rest helps a bit.

build2create commented 7 years ago

@JRevati Thanks for the reply. Just confirming: you said "Note that you will have to run all the pipeline methods on the test images to get the same effect and dimensions. In patch_library.py you will find a comment where it is explicitly mentioned that training images should have shape (5*240, 240), which also applies to test images." This means we have to convert the test images in the BRATS Training set to PNG and reshape them to the required dimensions, right?

Also, another point: the current version of brain_pipeline.py generates n4_PNG (a folder of N4ITK-normalised images; see the comments in the code and the line io.imsave('n4_PNG/{}_{}.png'.format(patient_num, slice_ix), strip) ). Here the dimension of each image is 1200x240; was that also the case for you? Did you use the modified code given by @umanghome in the pull request section?
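The 1200x240 size is consistent with the five-image strip described earlier in the thread: five (240, 240) images stacked vertically. A numpy sketch of that composition (an assumption based on the scans = [flair[0], t1[0], t1[1], t2[0], gt[0]] line quoted above, not the repo's exact code):

```python
import numpy as np

# Five per-slice images of one patient slice, stacked top to bottom into
# the strip that gets written to n4_PNG/.
flair, t1, t1c, t2, gt = (np.zeros((240, 240)) for _ in range(5))
strip = np.concatenate([flair, t1, t1c, t2, gt], axis=0)
```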

One last thing: you said you used the pre-trained model ("I am using your pre-trained model with available weights downloaded from this repository"). How did you do that, and what are the steps for running the testing directly? Or do we need to train every time before we test?
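In Keras the usual pattern for testing without retraining is to reload the saved architecture (JSON) plus the saved weights. A hedged sketch of that round trip; the tiny model and temp-file names below are placeholders, not this repo's actual models/examples.json:

```python
import os
import tempfile

from keras.layers import Dense, Input
from keras.models import Sequential, model_from_json

# Stand-in model; the real one would come from Segmentation_Models.py.
model = Sequential([Input(shape=(8,)), Dense(4, activation='relu')])

tmp = tempfile.mkdtemp()
arch_path = os.path.join(tmp, 'examples.json')
weights_path = os.path.join(tmp, 'examples.weights.h5')

with open(arch_path, 'w') as f:
    f.write(model.to_json())          # architecture only
model.save_weights(weights_path)      # learned parameters only

with open(arch_path) as f:
    restored = model_from_json(f.read())
restored.load_weights(weights_path)   # ready for restored.predict(...)
```

So as long as both the JSON file and a matching weights file exist, testing should not require retraining.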

build2create commented 7 years ago

@JRevati Please give confirmation for the above question. I am really stuck here.

lazypoet commented 7 years ago

@JRevati I got the same problem when I corrected and ran the code: black images. I think @naldeborgh7575 did not upload the correct model, because this is the same problem I faced.

lazypoet commented 7 years ago

Okay, it seems like there might be a problem with your normalisation if your network is not learning.
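For what it's worth, a common per-slice MR normalisation (clip intensity outliers, then standardise) looks like the sketch below. This is a generic recipe, not necessarily what brain_pipeline.py does, so compare it against the repo's own normalisation code:

```python
import numpy as np

def normalize_slice(img, low_pct=1, high_pct=99):
    # Clip the extreme intensity percentiles, then shift/scale to zero
    # mean and unit variance so the network sees comparable inputs
    # across patients and scanners.
    lo, hi = np.percentile(img, (low_pct, high_pct))
    clipped = np.clip(img, lo, hi)
    std = clipped.std()
    if std == 0:                      # blank slice: nothing to scale
        return clipped - clipped.mean()
    return (clipped - clipped.mean()) / std
```

A quick sanity check is to print the mean and std of a few normalised training slices; if they are far from 0 and 1 (or the slices are all zeros), the network will plausibly collapse to blank predictions.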

ujjwalbaid0408 commented 7 years ago

Can you post which .py file we need to run first?

Jiyya commented 7 years ago

[attached image: brain_seg_workflow]

tiantian-li commented 6 years ago

@JRevati I am running the code with BRATS2013, but some problems happen for me, so I want to try the BRATS2015 dataset. I can't download it successfully; I sincerely hope you can help me with the dataset.

Jiyya commented 6 years ago

Download it from here: www.smir.ch


tiantian-li commented 6 years ago

@Jiyya I have tried, but I can't download BRATS2015 from http://www.smir.ch. If you have downloaded it successfully, could you please share it with me? Thanks.