Hi, I am trying to use your network for MR to CT synthesis. My original MR data are in the same modality, so shall I run "extract23Dpatches4SingleImg.py" to create the patches? Or should I run "extract23Dpatches4MultiImg.py" and set matLHPET = np.zeros()?
What do you mean by "My original MR data are in the same modality"?
ct_fn is the name of the second input modality, and lpet_fn is the output modality.
I'm sorry I didn't make myself clear. What I'm saying is that my work is MR to CT synthesis, so it has nothing to do with lowpet or highpet. Then should I use "extract23Dpatches4SingleImg.py"? In "runCTRecon" you call a function Generator_2D_slicesV1() that has a parameter contourKey='dataContour'. Maybe I should use "extract23Dpatches4MultiImg.py"?
You should use "extract23Dpatches4SingleImg.py": MR is your source input and CT is the target. Since you use only one modality as input, you will not use contourKey='dataContour' (that option is only for the multi-source input case).
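For example, you can quickly check which datasets ended up in a generated patch file (a minimal sketch, assuming the patches are stored as HDF5 and use the key names that appear later in this thread; 'train_patches.h5' is just a placeholder for one of your files):

import h5py

with h5py.File('train_patches.h5', 'r') as f:
    print(list(f.keys()))  # expect something like ['dataCT', 'dataMR'];
                           # 'dataContour' only shows up in the multi-source case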
Thank you!
Hi, I got another problem. I modified the code in 'runCTRecon.py' as follows:
data_generator = Generator_2D_slices(path_patients_h5, opt.batchSize, inputKey='dataMR', outputKey='dataCT')
inputs, labels = data_generator.next()
But there's an error: the tensor 'labels' has lost one dimension. Its size is now [32, 64, 64] (I set batchSize to 32), but it should be [32, 1, 64, 64]. Could you help me with this?
You can expand the tensor using torch.unsqueeze()
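For example (a minimal sketch, assuming labels is already a PyTorch tensor with the shape you report; if it is still a NumPy array, convert it with torch.from_numpy() first or use np.expand_dims(labels, axis=1) instead):

import torch

labels = torch.zeros(32, 64, 64)   # stands in for the batch coming out of the generator
labels = labels.unsqueeze(1)       # insert a channel axis at dim 1
print(labels.shape)                # torch.Size([32, 1, 64, 64])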
Thank you!
Hi,
I faced the same issue; the error I get is the following:
RuntimeError: Given input size: (128x1x16x16). Calculated output size: (128x0x8x8). Output size is too small at c:\anaconda2\conda-bld\pytorch_1519496000060\work\torch\lib\thcunn\generic/VolumetricDilatedMaxPooling.cu:104
When I use pool2.unsqueeze(2), as you suggested, right after the line pool2 = self.pool2(block2) in the forward function of Unet_LRes, I get the following error:
"ValueError: Expected 5D tensor as input, got 6D tensor instead."
Please, any advice?
@hnbonsou what's the code you're running?
@ginobilinie Hi, I am running the runCTRecon3d.py code.
@hnbonsou since your patches are in 2D/2.5D format, you should run runCTRecon.py
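The shape mismatch above comes from feeding 2D/2.5D batches into the 3D network; here is a minimal sketch of the expected input shapes (the tensor names and layer sizes are only illustrative):

import torch
import torch.nn as nn

x2d = torch.zeros(32, 1, 64, 64)      # 2D/2.5D patches, [N, C, H, W] -> runCTRecon.py
x3d = torch.zeros(32, 1, 16, 64, 64)  # 3D patches, [N, C, D, H, W] -> runCTRecon3d.py

print(nn.MaxPool2d(2)(x2d).shape)     # torch.Size([32, 1, 32, 32])
print(nn.MaxPool3d(2)(x3d).shape)     # torch.Size([32, 1, 8, 32, 32])
# Pooling a volume whose depth is only 1 is what makes the 3D path report
# "Output size is too small" in VolumetricDilatedMaxPooling.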
@ginobilinie, Thank you.