XiongchaoChen / DuSFE_CrossRegistration

DuSFE: Dual-Branch Squeeze-Fusion-Excitation Module for Cross-Modality Registration of Cardiac SPECT and CT (MedIA 2023, MICCAI 2022)
MIT License

Can the method be applied to 2D datasets? #9

Open yaoliu0803 opened 8 months ago

yaoliu0803 commented 8 months ago

Hello, this is excellent work! I'm doing a study on cross-modal image registration. I would like to ask whether your method can be applied to 2D cross-modality image registration, such as infrared and visible-light images. If so, how should the network model be modified?

XiongchaoChen commented 8 months ago

Sure, you can easily apply this framework to 2D images. Just replace all the 3D operators (Conv3d, MaxPool3d, etc.) with the corresponding 2D ones (Conv2d, MaxPool2d, etc.). Also, remember to re-calculate the flattened vector length in the fully connected layers based on your image size.
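For example, a minimal sketch of the kind of change I mean (layer names, channel counts, and image sizes here are just illustrative, not the exact ones used in this repo):

```python
import torch
import torch.nn as nn

# Illustrative 2D version of a small encoder block; the 3D version would
# use nn.Conv3d / nn.MaxPool3d and a (B, C, D, H, W) input instead.
class Encoder2D(nn.Module):
    def __init__(self, in_ch=1, img_size=128):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_ch, 16, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(2),                      # 128 -> 64
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(2),                      # 64 -> 32
        )
        # Re-compute the flattened vector length for the FC layer:
        # channels * H * W after the poolings (32 * 32 * 32 here),
        # instead of channels * D * H * W in the 3D case.
        feat = img_size // 4
        self.fc = nn.Linear(32 * feat * feat, 6)  # e.g. a 2D affine has 6 params

    def forward(self, x):  # x: (B, C, H, W) instead of (B, C, D, H, W)
        x = self.features(x)
        return self.fc(torch.flatten(x, start_dim=1))
```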

yaoliu0803 commented 8 months ago

Thank you for your reply. Is this an unsupervised method?

XiongchaoChen commented 8 months ago

No problem. It is supervised. Unsupervised training is challenging for cross-modality registration, since it is hard to quantify registration quality between the registered and target images when they come from different modalities. Unsupervised learning is more common for mono-modality registration, where intensity-based similarity metrics can be computed directly. Hope this helps :)
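To make the distinction concrete, here is a rough sketch of the two loss styles (illustrative only, not the actual loss code in this repo):

```python
import torch.nn.functional as F

# Supervised: compare the predicted transform parameters against known
# ground-truth parameters (available when training pairs are generated
# with known misalignments).
def supervised_loss(pred_params, gt_params):
    return F.mse_loss(pred_params, gt_params)

# Unsupervised: compare the warped moving image against the fixed image
# directly. Intensity-based metrics like this only make sense when both
# images share the same modality, which is why unsupervised training is
# hard for cross-modality registration.
def unsupervised_loss(warped_moving, fixed):
    return F.mse_loss(warped_moving, fixed)
```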