taesungp / contrastive-unpaired-translation

Contrastive unpaired image-to-image translation, with faster and lighter training than CycleGAN (ECCV 2020, in PyTorch)
https://taesung.me/ContrastiveUnpairedTranslation/

What are your thoughts on CUT vs CycleGAN for medical images? #25

Closed ibro45 closed 3 years ago

ibro45 commented 3 years ago

I'm thinking of trying out CUT for domain adaptation of medical images (e.g. MR-to-CT translation). I'm interested in whether you have any thoughts on how CUT would compare against CycleGAN and, of course, whether you have any tips. Thanks for your work!

taesungp commented 3 years ago

In general, CUT would be more "flexible" than CycleGAN, in that it tries to match the target distribution more. While this property is desirable in many cases, it comes at the cost of vulnerability to dataset biases. For example, if the MR dataset contains 90% / 10% ratio of healthy and unhealthy samples, while the CT dataset contains 80% / 20%, CUT is more likely than CycleGAN to hallucinate the extra 10% of the unhealthy samples. One way to suppress this would be increasing the weight on the NCE contrastive loss (--lambda_NCE).
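As a concrete illustration of that suggestion, a training invocation along these lines would raise the NCE weight above its default (dataset path and experiment name here are hypothetical placeholders; check `python train.py --help` for the exact options and defaults in your version of the repo):

```shell
# Hypothetical MR-to-CT run; a higher --lambda_NCE strengthens the
# patch-wise contrastive (content-preservation) term relative to the GAN loss.
python train.py \
    --dataroot ./datasets/mr2ct \
    --name mr2ct_CUT \
    --CUT_mode CUT \
    --lambda_NCE 10.0
```

A larger `--lambda_NCE` ties the output more tightly to the structure of the input image, which trades off against how closely the output matches the target (CT) distribution.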

"How to interpret CycleGAN results" section of the CycleGAN webpage contains a bit more detailed information about this. The same argument applies to CUT as well.

ibro45 commented 3 years ago

Interesting, very much appreciated!

msseibel commented 11 months ago

For future readers: The CUT model was used by some of the top performing participants in the CrossMoDA 2021 challenge (domain adaptation between T1 and T2 MRI).

YiyouSun commented 10 months ago

> For future readers: The CUT model was used by some of the top performing participants in the CrossMoDA 2021 challenge (domain adaptation between T1 and T2 MRI).

Wow, informative comments! Very much appreciated!!!