ginobilinie / medSynthesisV1

This is a copy of the package for medical image synthesis with LRes-ResUnet and GAN (WGAN-GP) in the PyTorch framework.
MIT License

Shall I run “extract23Dpatches4SingleImg.py” for creating patches? #5

Closed zhao98 closed 4 years ago

zhao98 commented 6 years ago

Hi, I am trying to use your network for MR-to-CT synthesis. My original MR data are all in the same modality, so should I run “extract23Dpatches4SingleImg.py” to create patches? Or should I run "extract23Dpatches4MultiImg.py" and set matLHPET = np.zero()?

ginobilinie commented 6 years ago

what do you mean by "My original MR data are in the same modality"?

ct_fn is the name of a second input modality; lpet_fn is the output modality.

zhao98 commented 6 years ago

I'm sorry, I didn't make myself clear. What I meant is that my task is MR-to-CT synthesis, so it has nothing to do with low-dose or high-dose PET. Should I then use “extract23Dpatches4SingleImg.py”? In "RunCTRecon" you call the function 'Generator_2D_slicesV1()', which takes a parameter 'contourKey='dataContour''. Or maybe I should use “extract23Dpatches4MultiImg.py”?

ginobilinie commented 6 years ago

You should use "extract23Dpatches4SingleImg": MR is your source input and CT is the target. Since you use only one modality as input, you will not need 'contourKey='dataContour'' (that parameter only applies to multi-source input).
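As a minimal sketch of what a single-modality patch file might look like: the thread later passes the keys 'dataMR' and 'dataCT' to Generator_2D_slices, so the HDF5 patch files presumably hold one array per key. The file name and patch shape below are made up for illustration; only the key names come from this thread.

```python
import h5py
import numpy as np

# Write a toy patch file with the same key layout the generator expects.
# Shapes and file name are assumptions, not the repo's actual output.
with h5py.File("train_patch_000.h5", "w") as f:
    f.create_dataset("dataMR", data=np.zeros((10, 64, 64), dtype=np.float32))
    f.create_dataset("dataCT", data=np.zeros((10, 64, 64), dtype=np.float32))

# Read it back: each key maps to a stack of paired 2D patches.
with h5py.File("train_patch_000.h5", "r") as f:
    print(sorted(f.keys()))   # ['dataCT', 'dataMR']
    print(f["dataMR"].shape)  # (10, 64, 64)
```

With only one input modality there is no contour/mask dataset in the file, which is why 'contourKey' is unnecessary here.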

zhao98 commented 6 years ago

Thank you!

zhao98 commented 5 years ago

Hi, I ran into another problem. I modified the code in 'runCTRecon.py' as follows:

data_generator = Generator_2D_slices(path_patients_h5, opt.batchSize, inputKey='dataMR', outputKey='dataCT')
inputs, labels = data_generator.next()

But there's an error: the tensor 'labels' lost one dimension. Its size is now [32, 64, 64] (I set batchSize to 32), but it should be [32, 1, 64, 64]. Could you help me with this?

ginobilinie commented 5 years ago

You can add the missing dimension with torch.unsqueeze().
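A minimal sketch of that fix, assuming the batch comes out of the generator as [batchSize, H, W] and the network expects a singleton channel axis:

```python
import torch

# Simulated batch from the 2D generator: 32 patches of 64x64,
# missing the channel dimension the 2D conv layers expect.
labels = torch.zeros(32, 64, 64)

# Insert a channel axis at position 1: [32, 64, 64] -> [32, 1, 64, 64]
labels = labels.unsqueeze(1)
print(labels.shape)  # torch.Size([32, 1, 64, 64])
```

The same call works on the 'inputs' tensor if it is missing the channel axis too.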

zhao98 commented 5 years ago

Thank you!

hnbonsou commented 5 years ago

Hi,

I faced the same issue, in fact, the generated error is the following:

RuntimeError: Given input size: (128x1x16x16). Calculated output size: (128x0x8x8). Output size is too small at c:\anaconda2\conda-bld\pytorch_1519496000060\work\torch\lib\thcunn\generic/VolumetricDilatedMaxPooling.cu:104

When I use pool2.unsqueeze(2) as you suggested, right after the line pool2 = self.pool2(block2) in the forward function of Unet_LRes, I get the following error:

"ValueError: Expected 5D tensor as input, got 6D tensor instead.".

Please, any advice.

ginobilinie commented 5 years ago

@hnbonsou what's the code you're running?

hnbonsou commented 5 years ago

@ginobilinie Hi, I am running the runCTRecon3d.py code.

ginobilinie commented 5 years ago

@hnbonsou since your patches are in 2D/2.5D format, you should run runCTRecon.py
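The mismatch above can be caught early with a simple dimensionality check. This is a rough rule of thumb, not code from the repo: 4D batches [N, C, H, W] belong to the 2D/2.5D pipeline (runCTRecon.py), while 5D batches [N, C, D, H, W] belong to the 3D pipeline (runCTRecon3d.py), whose 3D pooling layers produce the "Output size is too small" error when fed patches that lack a depth axis.

```python
import torch

def pick_script(batch: torch.Tensor) -> str:
    """Suggest which training script matches the patch dimensionality."""
    if batch.dim() == 4:   # [N, C, H, W] -> 2D/2.5D patches
        return "runCTRecon.py"
    if batch.dim() == 5:   # [N, C, D, H, W] -> 3D patches
        return "runCTRecon3d.py"
    raise ValueError(f"unexpected batch shape {tuple(batch.shape)}")

print(pick_script(torch.zeros(32, 1, 64, 64)))      # runCTRecon.py
print(pick_script(torch.zeros(32, 1, 16, 64, 64)))  # runCTRecon3d.py
```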

hnbonsou commented 5 years ago

@ginobilinie, Thank you.