josedolz / LiviaNET

This repository contains the code of LiviaNET, a 3D fully convolutional neural network that was employed in our work: "3D fully convolutional networks for subcortical segmentation in MRI: A large-scale study"
MIT License

About the performance of results and sampling #10

Closed gangli95 closed 6 years ago

gangli95 commented 6 years ago

Hi, Jose. Thanks for your guide. I have run the example code successfully using the given data, but the result was not satisfactory. Here is the testing result I got:

[image: given_data]

And this is the result using my own data:

[image: own_data]

Should the results look like the ones above when using the example code, or is there something wrong with my training?

Another question is about sampling. In your paper, it says that the segment size is 27x27x27 and that, at each subepoch, a total of 500 samples were randomly selected from the training image segments. Is the segment size equal to the sample size? (I didn't find anything about segments in your code, just a sampling size of [25, 25, 25].) And the 500 samples should be selected from several MR/GT images, is that right?

gangli95 commented 6 years ago

BTW, is the architecture in the code different from that in the paper?

josedolz commented 6 years ago

Hi @gangli95

Thanks for your comments. A few things about this.

1 - For the sampling: "segment" is just another name for the sub-patches, so the segment size is the same as the sample size. It is actually 27x27x27 for training, and I think we used 35x35x35 for segmentation in that paper. And yes, the 500 samples must be selected from different (ideally all) subjects during training.
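
For illustration only (this is not the repository's actual sampling routine, and the function and variable names below are made up), drawing 500 random 27x27x27 segments spread across all training subjects could look roughly like this:

```python
import numpy as np

def sample_segments(images, labels, n_samples=500, segment_size=(27, 27, 27)):
    """Randomly draw sub-volumes (segments) spread across all subjects.

    `images` and `labels` are lists of 3D numpy arrays (one per subject).
    This is only a sketch of the idea described above, not LiviaNET's code.
    """
    segments, targets = [], []
    sz = np.array(segment_size)
    for i in range(n_samples):
        s = i % len(images)                      # cycle through subjects so all contribute
        img, lab = images[s], labels[s]
        max_corner = np.array(img.shape) - sz    # last valid top-left corner per axis
        corner = [np.random.randint(0, m + 1) for m in max_corner]
        sl = tuple(slice(c, c + k) for c, k in zip(corner, sz))
        segments.append(img[sl])
        targets.append(lab[sl])
    return np.stack(segments), np.stack(targets)
```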

2 - I realized last week that there is a small error in the code (see this). Basically, I hard-coded the dropout rates of the fully connected layers so that they are equal to 0.0. A workaround (which I just used for the following results) is to manually set these dropout rates to 0.25 in all the fully connected layers (keep 0.5 for the softmax).
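
Purely as an illustration of that workaround (the variable names here are made up, not the ones in the LiviaNET source):

```python
# Instead of the hard-coded 0.0 values, use 0.25 for the fully connected
# layers and keep 0.5 for the final softmax/classification layer.
dropout_rates_fc = [0.25, 0.25, 0.25]   # one value per fully connected layer
dropout_rate_softmax = 0.5
dropout_rates = dropout_rates_fc + [dropout_rate_softmax]
```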

3 - The architecture in the config file is a tiny network just to try out the code. It is not the one used in the paper. Just have a look at the paper and you will easily see what you need to change in the config file (number of layers, kernels per layer, interconnected layers, etc.). I just ran it with the current code and current data, only modifying the config file properly and applying the dropout trick, and after only 3 epochs I got this for the validation subject (training on 6 subjects and validating on only one):

[image: subcortical]
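
As a side note on how the segment sizes relate to network depth: each valid (unpadded) 3x3x3 convolution trims 2 voxels per axis, so a deeper network shrinks the output patch accordingly. A minimal sketch, assuming nine such convolutional layers purely as an example (check the paper and config file for the real numbers):

```python
def output_segment_size(input_size, n_conv_layers=9, kernel=3):
    """Output patch side length for a cubic input segment passed through
    `n_conv_layers` valid (unpadded) convolutions of size `kernel`^3.
    The default of 9 layers is only an assumption for illustration."""
    return input_size - n_conv_layers * (kernel - 1)

print(output_segment_size(27))  # -> 9 under the assumed 9-layer setting
print(output_segment_size(35))  # -> 17 for the larger segmentation-time segments
```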

josedolz commented 6 years ago

Just one question about your data: do you use it for training and testing, or only for testing? When I started working with this dataset, I used MATLAB to load the NIfTI files, which changed some of the axes (that is why the images look strange in your first row). So if you train on the given data as it is and test on your own data, there is a high chance it will not work, since the appearance is very different.
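
If the worry is an axis/orientation mismatch between the given data and your own volumes, a quick check with nibabel (a common NIfTI library, not something this repository requires; the file path below is just a placeholder) is to compare the axis codes and, if needed, reorient everything to the same canonical orientation before training:

```python
import nibabel as nib

img = nib.load('subject_T1.nii.gz')          # placeholder path
print(nib.aff2axcodes(img.affine))           # e.g. ('R', 'A', 'S')

# Reorient to the closest RAS-like canonical orientation so that all
# training and testing volumes share the same axis convention.
canonical = nib.as_closest_canonical(img)
nib.save(canonical, 'subject_T1_ras.nii.gz')
```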

gangli95 commented 6 years ago

Thanks for your reply! I used my data for both training and testing, and it now looks like something is wrong with the pre-processing, which I did with FreeSurfer. Could you please share the new config file that you used above? I am having difficulty changing those parameters. Thanks.

josedolz commented 6 years ago

Sure,

I just uploaded the config file I used to generate that image. Here is the link: https://github.com/josedolz/LiviaNET/blob/master/src/LiviaNET_Config_NeuroPaper.ini

gangli95 commented 6 years ago

Thanks, Jose. I have run the code using the new config file, and its performance is great!


josedolz commented 6 years ago

Hi @gangli95

I am glad you could reproduce the results. I hope you enjoy it and that it is useful for your research.

Jose

gangli95 commented 6 years ago

Hi @josedolz, I ran into problems again: the results contain a number of small regions when I use my own data (for both training and testing). After looking at https://github.com/josedolz/LiviaNET/issues/3, I did the normalization again, but the results were still similar. Do I need to do post-processing? I thought it was already included in your code. Here are the results (one subject):

[image: result1]

[image: result2]

josedolz commented 6 years ago

Hi @gangli95, I do not see any problem with your results; they look pretty OK. There are some isolated areas, which is normal given the reduced receptive field of the network. As I did in the paper, you need to apply some sort of connected-component analysis to remove those isolated blobs. It is a straightforward step, and it may vary depending on the application, so I did not include it in the code.
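
As a rough sketch of that kind of post-processing (not the exact step used for the paper, and the voxel threshold here is an arbitrary example), one could remove small isolated components per label with scipy.ndimage:

```python
import numpy as np
from scipy import ndimage

def remove_small_blobs(segmentation, min_voxels=100):
    """Keep, for each label, only connected components with at least
    `min_voxels` voxels. The threshold is an example value; tune it
    for your application."""
    cleaned = np.zeros_like(segmentation)
    for label in np.unique(segmentation):
        if label == 0:                       # skip background
            continue
        mask = segmentation == label
        components, n = ndimage.label(mask)  # 3D connected components
        for comp_id in range(1, n + 1):
            comp = components == comp_id
            if comp.sum() >= min_voxels:
                cleaned[comp] = label
    return cleaned
```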

gangli95 commented 6 years ago

Thanks for the reply! It's great to hear that the results are good. I'll try using regionprops in MATLAB to deal with it. BTW, I think regionprops may cause the loss of regions, both ones I expect and ones I don't, and maybe regionprops3 (introduced in MATLAB R2017b) can improve on this.
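
For reference, a Python analogue of that check: skimage.measure.regionprops already handles 3D label images, so component volumes can be inspected before choosing a removal threshold, which helps avoid discarding structures you actually want to keep (the function name below is just for illustration):

```python
from skimage import measure

def component_volumes(binary_mask):
    """Label a 3D binary mask and list the voxel count of each connected
    component, largest first."""
    labeled = measure.label(binary_mask, connectivity=1)
    props = measure.regionprops(labeled)
    return sorted((p.area for p in props), reverse=True)
```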