Tencent / MedicalNet

Many studies have shown that the performance of deep learning models is significantly affected by the volume of training data. The MedicalNet project provides a series of 3D-ResNet pre-trained models and related code.

Starting experiments with MedicalNet; question one: what are parameters --input_D, --input_H, --input_W? #67

Open giorgio-denunzio opened 2 years ago

giorgio-denunzio commented 2 years ago

Hi all, I have worked in the past with segmentation by machine learning, but this is the first time I'm approaching it with DL. I have a series of questions, and this is the first one. I am running the MedicalNet test, with the idea of later using it for the images I am interested in (MRI images of fetuses). I thought that the parameters listed in the subject (input_D etc., available in settings.py) were the z, x, y dimensions of the input images. They are 56, 448, 448 by default in settings.py. Checking the available dataset (directory data/MRBrainS18), I see that the image sizes vary: the first dimension is always 143, the second varies from 225 to 228, and the third varies from 191 to 200. Can you please explain this apparent inconsistency? What am I not understanding? Thanks, Giorgio

PS: I'd be pleased to hear from other people experimenting with this interesting software, so as to exchange information and experiences: giorgio.denunzio@unisalento.com.
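For reference, this is roughly how I listed the sizes, using nibabel; the directory pattern below is an assumption about how the sample data is laid out, so adjust it to your copy of the dataset:

```python
import glob
import nibabel as nib

# print array size and voxel spacing of each sample MRBrainS18 volume
# (the 'data/MRBrainS18/images/*.nii.gz' pattern is an assumption)
for path in sorted(glob.glob('data/MRBrainS18/images/*.nii.gz')):
    img = nib.load(path)
    print(path, img.shape, img.header.get_zooms())
```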

giorgio-denunzio commented 2 years ago

OK, I think I have found the answer myself: input_D etc. are the sizes that the input images are resized to; see def __resize_data__(self, data) in datasets/brains18.py. This suggests that the original input sizes should be (more or less) comparable across images, otherwise the resize will distort them differently. Possibly the voxel size should be equal for all the input images too, but this is another story; I do not think there is any reslicing to a common voxel size in the code. I'll keep studying.
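In other words, the resize is essentially an interpolation of each volume onto the fixed (input_D, input_H, input_W) grid. A minimal sketch of the idea (the interpolation order and exact call used in brains18.py may differ):

```python
import numpy as np
from scipy import ndimage

def resize_volume(data, target_d=56, target_h=448, target_w=448):
    """Resize a 3D volume to a fixed (input_D, input_H, input_W) shape,
    whatever its original array size is (sketch of the __resize_data__ idea)."""
    d, h, w = data.shape
    scale = (target_d / d, target_h / h, target_w / w)
    # interpolate the intensities onto the new grid; note this changes the
    # effective voxel spacing (no resampling to a common physical spacing)
    return ndimage.zoom(data, scale, order=1)

# e.g. a 143 x 225 x 191 MRBrainS18 volume becomes 56 x 448 x 448
vol = np.random.rand(143, 225, 191).astype(np.float32)
print(resize_volume(vol).shape)  # (56, 448, 448)
```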

Deevyankar-Agarwal commented 2 years ago

How can I use it for my own MRI classification data?
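One common approach is to keep the pretrained 3D-ResNet as a feature extractor and put a classification head on top of it. A minimal sketch, assuming a standard PyTorch workflow; the checkpoint path, the 'module.' key prefix, and feature_channels=2048 below are assumptions about the released weights, not confirmed by this repo's docs:

```python
import torch
import torch.nn as nn

class MedicalNetClassifier(nn.Module):
    """Pretrained 3D-ResNet backbone + global pooling + linear classifier.
    `backbone` is assumed to return a 3D feature map of shape (N, C, d, h, w)."""
    def __init__(self, backbone, feature_channels, num_classes):
        super().__init__()
        self.backbone = backbone
        self.pool = nn.AdaptiveAvgPool3d(1)   # collapse the spatial dimensions
        self.fc = nn.Linear(feature_channels, num_classes)

    def forward(self, x):                     # x: (N, 1, D, H, W) volumes
        feats = self.backbone(x)
        return self.fc(self.pool(feats).flatten(1))

# Hypothetical usage (paths, key prefix, and channel count are assumptions):
# backbone = build_resnet50_backbone()                      # your 3D ResNet trunk
# ckpt = torch.load('pretrain/resnet_50.pth', map_location='cpu')
# state = {k.replace('module.', ''): v for k, v in ckpt['state_dict'].items()}
# backbone.load_state_dict(state, strict=False)             # ignore head keys
# model = MedicalNetClassifier(backbone, feature_channels=2048, num_classes=2)
```

You would then train this model on your labeled MRI volumes as usual, optionally freezing the backbone at first and fine-tuning it later.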