Mohinta2892 opened 3 weeks ago
Hi @Mohinta2892 (Samia), thank you for your interest in our work and for asking about image translation across different modalities with our method. It is indeed possible to train our method for translation between modalities: for example, you can use a T1 image as the condition and have the model output T2, FLAIR, or any other target modality.

In that case you would need to adjust the model's input structure to match your design. This might mean removing the mask channels and using a single-channel image as the condition, or incorporating additional image context by using 2 or 3 channels. Since our method conditions directly on the input, it is flexible enough to handle a variety of image translation tasks.

If you have any further questions or need clarification, please feel free to reach out. And if you find our code and model weights helpful in your research, we would greatly appreciate a citation. Thank you.
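To make the channel adjustment concrete, here is a minimal NumPy sketch of how the conditioning input could be assembled once the mask channels are removed. Everything here (the function name `build_condition`, the neighbour-slice stacking scheme for the 3-channel case) is an illustrative assumption, not part of the released code:

```python
import numpy as np

def build_condition(t1_slices: np.ndarray, n_channels: int = 1) -> np.ndarray:
    """Assemble a conditioning input for a T1 -> T2 translation model.

    t1_slices: array of shape (D, H, W), a stack of T1 slices.
    n_channels: 1 feeds each slice alone (single-channel condition);
                3 stacks each slice with its two neighbours (edge slices
                repeated), i.e. replacing the mask channels with extra
                image context instead.

    Returns an array of shape (D, n_channels, H, W).
    """
    d, h, w = t1_slices.shape
    if n_channels == 1:
        # Insert a singleton channel axis: (D, H, W) -> (D, 1, H, W).
        return t1_slices[:, None, :, :]
    if n_channels == 3:
        # Indices of the previous/next slice, clamped at the volume edges.
        prev_idx = np.clip(np.arange(d) - 1, 0, d - 1)
        next_idx = np.clip(np.arange(d) + 1, 0, d - 1)
        return np.stack(
            [t1_slices[prev_idx], t1_slices, t1_slices[next_idx]], axis=1
        )
    raise ValueError("this sketch only covers n_channels in {1, 3}")

# Example: 10 slices of 64x64 T1 data.
vol = np.random.rand(10, 64, 64).astype(np.float32)
print(build_condition(vol, 1).shape)  # (10, 1, 64, 64)
print(build_condition(vol, 3).shape)  # (10, 3, 64, 64)
```

The resulting `(D, n_channels, H, W)` array matches the layout a 2D convolutional backbone would expect; the model's first convolution would then need its `in_channels` set to 1 or 3 accordingly.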
Hello authors!
Do we always have to supply pre-calculated masks from the available modalities during inference? Or is it possible to supply only one modality to generate the others? For example, after training, can I pass only T1 volumes to generate T2 images?
Best, Samia