shsargordi opened this issue 2 weeks ago
Hi,
Could you check whether your `predict_xstart=True`?
Also, what is your patch size? I believe 256x256x128 is too large for the GPU. Did you use 256x256x4 or something similar?
Thanks for your answer. `predict_xstart` is True. The only change I've made is in this part of the dataloader:
I used a patch size of 256x256x4 for the training images and a patch size of 256x256x128 for the test images.
Ah, I see the problem; it is actually my fault.
`ResizeWithPadOrCropd` crops the boundary of the images to reach the target matrix size, so all of your volumes became 256x256x4, keeping only the central part. `RandSpatialCropSamplesd` is the right function for randomly selecting patches for training. So in the training transforms the correct way is:
```python
ResizeWithPadOrCropd(
    keys=["image", "label"],
    spatial_size=(256, 256, 128),
    constant_values=-1,
),
RandSpatialCropSamplesd(
    keys=["image", "label"],
    roi_size=(256, 256, 4),
    num_samples=patch_num,
    random_size=False,
),
```
where the first transform just resizes all images to the same size, and the second extracts the patches for training. Your training transform directly cropped all images to 256x256x4, so the diffusion model performs badly when you input a non-central part.
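The effect of the two transforms can be sketched with plain NumPy (a minimal illustration of the semantics, not the MONAI implementation; the shapes and the -1 pad value follow the snippet above):

```python
import numpy as np

rng = np.random.default_rng(0)
volume = rng.standard_normal((256, 256, 100))  # a volume with fewer than 128 slices

# ResizeWithPadOrCropd-style: symmetrically pad (or center-crop) depth to 128
target_d = 128
pad = target_d - volume.shape[2]
padded = np.pad(volume, ((0, 0), (0, 0), (pad // 2, pad - pad // 2)),
                constant_values=-1)

# RandSpatialCropSamplesd-style: sample random 4-slice patches for training,
# so patches come from all depths, not just the center
def random_patches(vol, roi_d=4, num_samples=8):
    starts = rng.integers(0, vol.shape[2] - roi_d + 1, size=num_samples)
    return [vol[:, :, s:s + roi_d] for s in starts]

patches = random_patches(padded)
print(padded.shape)                    # (256, 256, 128)
print(len(patches), patches[0].shape)  # 8 (256, 256, 4)

# The buggy version center-crops directly to 4 slices, so training
# only ever sees the middle of each volume:
center = volume.shape[2] // 2
central_only = volume[:, :, center - 2:center + 2]
print(central_only.shape)              # (256, 256, 4)
```

This makes the failure mode concrete: with the buggy transform, every training sample is the same central 4-slice slab, so the model never learns to synthesize the boundary regions it sees at test time.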
Thank you for your explanations.
Hi Shaoyan, I fixed the code, but the results unexpectedly worsened compared to when I trained with only 4 slices per image. Could you please take a look at my new results and offer suggestions? It was trained for 550 epochs.
And the strange thing is that I got better metrics when the model was trained with only 4 slices of each image, using spatial_size=(256, 256, 4) in ResizeWithPadOrCropd.
Hi, just back from work.
Yes, it is weird. I think the loss (MAE) is actually too high; usually I can get about 0.03 for this MRI-to-CT task in only about 10 epochs. So the issue is not even in testing; you cannot even train the model well. Would you like to send me 1 or 2 examples so I can try?
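As a rough sanity check (an assumption on my part: the 0.03 MAE above is measured in [-1, 1]-normalized intensity units), the error can be converted back to Hounsfield units to judge whether a given loss value is reasonable:

```python
# Convert an MAE measured in [-1, 1] normalized units back to HU,
# assuming CT was linearly scaled from [hu_min, hu_max] to [-1, 1].
def mae_in_hu(mae_normalized, hu_min=-1000.0, hu_max=1000.0):
    return mae_normalized * (hu_max - hu_min) / 2.0

print(mae_in_hu(0.03))  # roughly 30 HU of mean absolute error
```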
That would be great, thank you. Could you please give me your email?
Or you can email me at sh.sargordi@gmail.com.
Hi Shaoyan, could you please take a look at the generated CT image below? I used my own data, but my outcomes differ significantly from those reported in the paper. Could you please offer suggestions for how I can improve it? All of my data have dimensions of 256x256x128, with CT images ranging from -1000 to 1000 HU and MRI images ranging from -10 to 10, both normalized to [-1, 1]. The generated CT images below are after 800 epochs of training, and the results didn't improve with further training.
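For reference, here is a minimal sketch of the normalization described above (an assumption: a simple linear clip-and-scale mapping; the poster's actual preprocessing code is not shown in this thread):

```python
import numpy as np

def scale_to_unit_range(img, a_min, a_max):
    """Linearly map intensities in [a_min, a_max] to [-1, 1], clipping outliers."""
    img = np.clip(img, a_min, a_max)
    return 2.0 * (img - a_min) / (a_max - a_min) - 1.0

ct = np.array([-1000.0, 0.0, 1000.0])   # HU range stated above
mri = np.array([-10.0, 0.0, 10.0])      # MRI range stated above
print(scale_to_unit_range(ct, -1000, 1000))  # maps to -1, 0, 1
print(scale_to_unit_range(mri, -10, 10))     # maps to -1, 0, 1
```

If the preprocessing differs from this (e.g. percentile-based MRI scaling), mismatched intensity ranges between training and test data would be another plausible cause of the poor generated CTs.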