XiongchaoChen / DuSFE_CrossRegistration

DuSFE: Dual-Branch Squeeze-Fusion-Excitation Module for Cross-Modality Registration of Cardiac SPECT and CT (MedIA 2023, MICCAI 2022)
MIT License

Hi, could you please tell me which model you used to transform the μ-maps? #4

Open Qqhjkl opened 1 year ago

Qqhjkl commented 1 year ago

[image] Based on the instructions for the dataset, I think I need to transform the μ-maps first. Could you please tell me which model you chose to do this? I'm also confused about how to convert .mat files into .h5 files. If you can answer these questions I will be deeply grateful. Thanks a lot.

XiongchaoChen commented 1 year ago

Hi, thanks for your question about the data preprocessing. (1) How to transform μ-maps. As presented in the MICCAI paper, we first generated the random registration indexes (Δtx, Δty, Δtz, Δαx, Δαy, Δαz) and then transformed the μ-map to simulate the SPECT-CT misregistration. For the transformation process, we first rotate the μ-map by (Δαx, Δαy, Δαz) degrees and then translate it by (Δtx, Δty, Δtz) voxels. You can implement this process using any tool you like. We used the MATLAB functions "imrotate3" and "imtranslate", for your reference.
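In case it helps, here is a rough Python/SciPy sketch of this rotate-then-translate simulation (`scipy.ndimage.rotate` and `scipy.ndimage.shift` play roles similar to MATLAB's `imrotate3` and `imtranslate`; the toy volume, parameter ranges, and the name `random_transform` are my own illustration, not the repository code):

```python
import numpy as np
from scipy.ndimage import rotate, shift

rng = np.random.default_rng(0)

def random_transform(mu_map, t_max=5.0, r_max=10.0):
    """Apply a random rigid misregistration to a 3-D mu-map.

    Returns the transformed volume and the index vector
    (dtx, dty, dtz, dax, day, daz).
    """
    dt = rng.uniform(-t_max, t_max, size=3)   # translations in voxels
    da = rng.uniform(-r_max, r_max, size=3)   # rotations in degrees about x, y, z
    out = mu_map
    # Rotate about each axis in turn; reshape=False keeps the array size,
    # which assumes the volume was zero-padded beforehand.
    for angle, axes in zip(da, [(1, 2), (0, 2), (0, 1)]):
        out = rotate(out, angle, axes=axes, reshape=False,
                     order=1, mode='constant')
    out = shift(out, dt, order=1, mode='constant')
    return out, np.concatenate([dt, da])

mu = np.zeros((80, 80, 40), dtype=np.float32)  # toy zero-padded mu-map
mu[30:50, 30:50, 15:25] = 1.0
mu_moved, idx = random_transform(mu)
```

The rotate-then-translate order matches the description above; the transformed volume keeps the padded size, and `idx` is the ground-truth label the network would regress.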

(2) How to convert a ".mat" file into an ".h5" file. The purpose of using ".h5" files is to accelerate data loading during training/testing. You can use the functions "h5create" and "h5write" in MATLAB to convert ".mat" to ".h5". For example, if we want to write "Amap_CT.mat" into an ".h5" file, we can do it like this, for your reference:

```matlab
clear; clc;
load("Amap_CT.mat");  % Here I suppose the variable name is "Amap_CT"

filename = 'demo.h5';
h5create(filename, '/Amap_CT', size(Amap_CT), 'DataType', 'single');
h5write(filename, '/Amap_CT', Amap_CT);
```
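If you prefer Python for this conversion, a minimal h5py sketch looks like the following (the random stand-in array replaces loading the actual "Amap_CT.mat"; with `scipy.io.loadmat` you could read the real file instead):

```python
import numpy as np
import h5py

# Stand-in for the real data; to load the actual file you could do:
#   from scipy.io import loadmat
#   Amap_CT = loadmat('Amap_CT.mat')['Amap_CT'].astype(np.float32)
Amap_CT = np.random.rand(72, 72, 32).astype(np.float32)

# Write the volume to an HDF5 dataset named "/Amap_CT"
with h5py.File('demo.h5', 'w') as f:
    f.create_dataset('Amap_CT', data=Amap_CT)

# Read it back to confirm the round trip
with h5py.File('demo.h5', 'r') as f:
    restored = f['Amap_CT'][:]
```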

A small tip for you is that you don't need to 100% follow my pipeline for data preprocessing. Please let me know if you have any additional questions :)

Qqhjkl commented 1 year ago

Thanks a lot for your answers, they are really helpful for me. And also thanks for your advice, I will try to make some changes as needed.

Jiase commented 1 year ago

Hi, I have some questions about data preprocessing. I used the imrotate3 and imtranslate functions to process the images as you suggested, but the image becomes larger after rotation and translation. Should I crop it back to the same size as the original image, or pad the original image up to the larger size? I'm not sure what to do, and I hope you can share the details of your data processing. Thank you very much.

XiongchaoChen commented 1 year ago

Hi, thank you for your question. I think both ways are OK. As for me, I first padded the original image to a larger size and then applied the rotation and translation, so that I could avoid cropping the image.
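A minimal NumPy sketch of this pad-first strategy (`pad_to` is a hypothetical helper, not part of the repository; the sizes are just for illustration):

```python
import numpy as np

def pad_to(vol, target):
    """Zero-pad a 3-D volume symmetrically up to the target shape."""
    pads = [((t - s) // 2, (t - s) - (t - s) // 2)
            for s, t in zip(vol.shape, target)]
    return np.pad(vol, pads, mode='constant')

vol = np.ones((72, 72, 32), dtype=np.float32)   # original image size
padded = pad_to(vol, (80, 80, 40))              # padded before transforming
```

Because the extra border is all zeros, a modest rotation or translation moves content into the border instead of out of the array, so no cropping is needed afterwards.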

Jiase commented 1 year ago

Thank you very much for your reply. I tried padding the original image to a sufficient size, but the rotation and translation were applied to the whole image matrix, instead of what I expected, which was only the patient rotating and translating, so I still need to crop it. I am very sorry to make such a request, but could you disclose the method used to build the dataset?

XiongchaoChen commented 1 year ago

Not sure whether I fully understand your question, but here are the dataset preprocessing steps I implemented in MATLAB:
(1) Zero-pad the μ-map and SPECT images to a larger size (from [72, 72, 32] to [80, 80, 40]);
(2) Generate the random translation and rotation indexes (tx, ty, tz, rx, ry, rz);
(3) Rotate and translate the μ-map using the function "func_affine.m" (I just uploaded it to "/utils/func_affine.m");
(4) Use the transformed μ-map and the untransformed SPECT as network inputs to predict the indexes (tx, ty, tz, rx, ry, rz).
Please let me know if you need more information.

Jiase commented 1 year ago

Thank you very much for generously sharing your method with me, it's very important to me. I salute your open-source spirit.

Jiase commented 1 year ago

Hi~, I see that the size of the SPECT image in the paper is 64×64×64 and the size of the CT image is 512×512, but the data is padded from 72×72×32 to 80×80×40. Was the data downsampled? In my case the CT size is 200×200, and patch-based processing makes the results worse, so I want to ask how you handled your data sizes. Thank you very much.

XiongchaoChen commented 1 year ago

(1) I just applied some zero-padding and cropping to edit the image size so that the images fit well into the networks. No downsampling was applied, since it might worsen the image quality of the SPECT or μ-map.
(2) The image size is not a big issue, and you can decide your size based on your data. You just need to make sure that 1) the two input images are of the same size, and 2) the image size fits the neural networks. For example, the length of each image side should be divisible by 2^3 = 8, because the registration framework has a UNet-like 3-layer downsampling structure.
(3) For your case, the CT size is way too large. You can try to downsample the image from 200×200 to 100×100 if GPU memory is limited.
(4) I personally don't suggest patch-based training, since the registration features are based on the spatial correspondence of the whole image volumes.
(5) I notice that your images might be 2D. In that case, you need to change all the 3D operators in the code (e.g., conv3d, batchnorm3d, avgpool3d) to the corresponding 2D operators (conv2d, batchnorm2d, avgpool2d).
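A quick sanity check of the size constraint in point (2): with a UNet-like 3-level downsampling path, each spatial side should be divisible by 2^3 = 8 (`round_up` is a hypothetical helper, not part of the repository):

```python
def round_up(n, multiple=8):
    """Smallest multiple of `multiple` that is >= n."""
    return ((n + multiple - 1) // multiple) * multiple

# e.g. a CT slice downsampled from 200x200 to 100x100 would still need
# padding up to 104x104 before entering the network, since 100 % 8 != 0.
target = tuple(round_up(s) for s in (100, 100))
```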

Jiase commented 1 year ago

Hi, I tried your suggestions these days and they were very useful, thank you very much. By the way, I couldn't express it clearly last time: my data is three-dimensional, with a size of about 200×200×200+. But now I have another problem: using the dataset function "func_affine.m", I cannot invert the rotated and translated image back to the original image. For example, I randomly generated the registration indices (2.83, 4.63, 3.87, 5.43, 7.19, 9.42) to rotate and translate the image, and then used (2.83, 4.63, 3.87, 5.43, 7.19, 9.42) in the inverse transform function. The number of layers of the resulting image does not correspond to that of the original image, although their dimensions are consistent. My inverse transform function is shown in the figure. I've been worried about this for a long time and hope you can help me. I appreciate it. [image]

XiongchaoChen commented 1 year ago

The linear interpolation in the transformation and inverse transformation processes will change the original voxel values. This is quite normal: you cannot recover the original values 100%, but the difference should be quite small. The inverse transformation function you showed looks OK. I also uploaded my function for your reference.

If you observe obvious differences between the original image and the image after inverse transformation, you can try to debug step by step. For example, try [1, 0, 0, 0, 0, 0] to check the translation process, and try [0, 0, 0, 1, 0, 0] to check the rotation process.
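A tiny NumPy/SciPy sketch of that debugging idea, checking the translation step in isolation with the index [1, 0, 0, 0, 0, 0] (the toy volume is illustrative, not the repository's "func_affine.m"):

```python
import numpy as np
from scipy.ndimage import shift

# Zero border around the object, so a one-voxel shift clips nothing.
vol = np.zeros((16, 16, 16), dtype=np.float32)
vol[4:12, 4:12, 4:12] = 1.0

# Forward: shift one voxel along x; inverse: shift back by -1 voxel.
moved = shift(vol, (1, 0, 0), order=1, mode='constant')
recovered = shift(moved, (-1, 0, 0), order=1, mode='constant')
err = float(np.abs(recovered - vol).max())
```

With an integer shift and a zero-padded border the round trip is essentially exact; if this step already fails, the bug is in the translation handling rather than the rotation.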

Jiase commented 1 year ago

Thank you very much for your generous help. I will try again. Thanks again.