MrGiovanni / ModelsGenesis

[MICCAI 2019 Young Scientist Award] [MEDIA 2020 Best Paper Award] Models Genesis

How to determine the patch size please #19

Closed zlinzju closed 4 years ago

zlinzju commented 4 years ago

Hi Zongwei: Thank you for the excellent work.

I have some questions and any help would be appreciated.

How should I determine the input patch size? In other words, are there any considerations in choosing a large versus a small patch? If my target is a fine lung structure such as the pulmonary blood vessels, should I use a smaller patch size?

I am working on pulmonary vessel segmentation. Some previous work used a Gaussian pyramid for multi-scale processing and then fed 5x5x5 patches into a two-layer network. Since your model has multiple downsampling layers, does that mean my patches can be correspondingly larger than 5x5x5 and no longer require multi-scale preprocessing?

Sincerely looking forward to your reply.

Best Wishes! zlin

MrGiovanni commented 4 years ago

How should I determine the input patch size?

The input can be any size divisible by 16, for example 64x64x64, 128x128x128, 128x128x64, etc. I would recommend 64x64x32 because I pre-trained this model with inputs of that size. All of these sizes are in voxels; you can select any physical size in the original CT scans.
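For example, a minimal sketch (not code from this repo) of cropping a 64x64x32 sub-volume from a CT array; the array, function name, and axis order are only placeholders:

```python
# Illustrative only: crop a fixed-size patch around a chosen voxel center.
import numpy as np

def crop_patch(ct_volume, center, size=(64, 64, 32)):
    """Crop a `size`-voxel patch around `center`; axis order is assumed (x, y, z)."""
    for s in size:
        assert s % 16 == 0, "each patch dimension should be divisible by 16"
    # Clamp the start so the patch stays inside the volume.
    starts = [min(max(0, c - s // 2), dim - s)
              for c, s, dim in zip(center, size, ct_volume.shape)]
    ends = [st + s for st, s in zip(starts, size)]
    return ct_volume[starts[0]:ends[0], starts[1]:ends[1], starts[2]:ends[2]]

ct_volume = np.random.rand(256, 256, 128).astype(np.float32)  # stand-in for a CT scan
patch = crop_patch(ct_volume, center=(128, 128, 64))
print(patch.shape)  # (64, 64, 32)
```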

Some previous work tried the Gaussian pyramid for multi-scale processing, and then used a 5x5x5 patch size into a double-layer network

I am not sure which architecture you were referring to. In our paper, we adopt the most popular image segmentation architectures: V-Net (for 3D) and U-Net (for 2D). With these architectures, I do not think you need a Gaussian pyramid for multi-scale processing, because the repeated downsampling already captures multiple scales (see the small sketch after the references). You may refer to these two papers for more details about the architectures.

  1. https://arxiv.org/abs/1606.04797
  2. https://arxiv.org/abs/1505.04597
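Here is a minimal sketch, not the Models Genesis code, of a 3D U-Net-style encoder: four 2x2x2 poolings give the network features at several scales and also explain why each patch dimension should be divisible by 16 (2^4). The layer widths are placeholders, not the pre-trained model's.

```python
# Illustrative 3D encoder with four downsampling stages.
from tensorflow.keras import layers, Model, Input

def tiny_3d_encoder(input_shape=(64, 64, 32, 1)):
    x = inputs = Input(shape=input_shape)
    for filters in (16, 32, 64, 128):
        x = layers.Conv3D(filters, 3, padding="same", activation="relu")(x)
        x = layers.MaxPooling3D(pool_size=(2, 2, 2))(x)  # halves each spatial dim
    return Model(inputs, x)

model = tiny_3d_encoder()
print(model.output_shape)  # (None, 4, 4, 2, 128): 64/16, 64/16, 32/16
```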

Please let me know if you have further questions.

Thank you, Zongwei

zlinzju commented 4 years ago

Hello! My data set is small, so your pre-trained model is exactly what I need. I have tried using the depth_3, depth_5, and depth_7 output layers as feature extractors, followed by a classifier, roughly as in the sketch below.
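A rough, illustrative sketch of this setup; the checkpoint path, layer names, and the small classifier head are only placeholders and may not match the released model exactly:

```python
# Use the pre-trained encoder outputs as frozen features, then train a small classifier.
from tensorflow.keras import layers, models

base = models.load_model("pretrained_genesis.h5", compile=False)  # placeholder path
base.trainable = False  # keep the pre-trained weights fixed

feature_layers = ["depth_3", "depth_5", "depth_7"]  # names as I used them; may differ
features = [layers.GlobalAveragePooling3D()(base.get_layer(name).output)
            for name in feature_layers]

x = layers.Concatenate()(features)
x = layers.Dense(64, activation="relu")(x)
output = layers.Dense(1, activation="sigmoid")(x)  # illustrative binary head

clf = models.Model(base.input, output)
clf.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
```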

However, the result stays at about 0.7 no matter how I adjust the parameters or the classifier, which is not ideal. I suspect the 64x64x32 input patches are too large to extract good features for my target (pulmonary vessels). Do you have any experience or suggestions? Thanks a lot!