icon-lab / SynDiff

Official PyTorch implementation of SynDiff described in the paper (https://arxiv.org/abs/2207.08208).

Question about model performance on other dataset #20

Closed chengyu89527 closed 1 year ago

chengyu89527 commented 1 year ago

Hi, when I used the IXI dataset your model worked well. However, when I applied your work to WMH (https://wmh.isi.uu.nl/), I ran into some issues: some lesion regions appeared and disappeared randomly. Please tell me whether this could be solved with some new technique in your cycle framework. Thanks a lot. (image attached)

onat-dalmaz commented 1 year ago

Hello, thank you for your interest in our work. We are glad to hear that our model works well on the IXI dataset. However, we have not tested our model on the WMH dataset, which is a different domain from the IXI dataset. The WMH dataset contains images of patients with white matter hyperintensities (WMH), which are areas of the brain that have been damaged by ischemic injury or small vessel disease. These lesions may affect the image quality and the translation performance of our model.

Our model is based on cycle-consistent adversarial networks, which is a framework for unpaired image-to-image translation. The idea is to learn two mappings between two image domains, such that the distribution of translated images is indistinguishable from the distribution of real images using an adversarial loss. Moreover, to ensure that the translated images preserve the content of the input images, a cycle-consistency loss is used to enforce that the original image can be reconstructed from the translated image and vice versa.
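To make the two losses described above concrete, here is a minimal PyTorch sketch, assuming two hypothetical generators `G_A2B` and `G_B2A` (stand-ins for SynDiff's actual diffusion-based translation modules, which are far more complex) and a discriminator that outputs logits:

```python
import torch
import torch.nn as nn

# Hypothetical stand-in generators; in SynDiff these are diffusion-based
# translation networks, not single conv layers.
G_A2B = nn.Conv2d(1, 1, 3, padding=1)  # maps domain A -> B
G_B2A = nn.Conv2d(1, 1, 3, padding=1)  # maps domain B -> A

l1 = nn.L1Loss()
bce = nn.BCEWithLogitsLoss()

def cycle_consistency_loss(real_a, real_b):
    # A -> B -> A and B -> A -> B reconstructions: the translated image
    # must map back to the original, which anchors image content.
    rec_a = G_B2A(G_A2B(real_a))
    rec_b = G_A2B(G_B2A(real_b))
    return l1(rec_a, real_a) + l1(rec_b, real_b)

def adversarial_loss(disc_logits_on_fake):
    # The generator is rewarded when the discriminator labels its
    # translated images as real (target = 1).
    return bce(disc_logits_on_fake, torch.ones_like(disc_logits_on_fake))

real_a = torch.randn(2, 1, 64, 64)  # batch of domain-A images
real_b = torch.randn(2, 1, 64, 64)  # batch of domain-B images
loss = cycle_consistency_loss(real_a, real_b)
```

Note that cycle consistency only constrains the round trip, not individual structures; a small lesion can be altered in the forward mapping and still be approximately recovered on the way back, which is one plausible reason lesions drift in unpaired translation.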

One possible suggestion is to use a paired version of SynDiff; i.e., supervised training would improve performance if you have access to a paired image database.
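The paired setting replaces the cycle-consistency surrogate with a direct pixel-wise loss against the registered target image. A minimal sketch, with a hypothetical `generator` standing in for the actual translation network:

```python
import torch
import torch.nn as nn

generator = nn.Conv2d(1, 1, 3, padding=1)  # hypothetical stand-in translator

paired_src = torch.randn(2, 1, 64, 64)  # e.g. source-contrast inputs
paired_tgt = torch.randn(2, 1, 64, 64)  # spatially registered target-contrast images

# With paired data the ground-truth target is known, so the output can be
# penalized pixel-by-pixel, directly constraining lesion shape.
supervised_loss = nn.L1Loss()(generator(paired_src), paired_tgt)
```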

Yet, I can see from the images you attached that the lesions did not disappear in the T1-weighted images, as they have relatively low contrast in that sequence.

chengyu89527 commented 1 year ago

Hello, really thanks for your reply! I tried the most trivial version of CycleGAN to translate the images. Even though it can't guarantee that the lesion shape stays 100% consistent, it is still much better than SynDiff. SynDiff seems to have a larger "imagination space" (it easily generates an altered shape); I can only guess the reason may be the larger model capacity or the diffusion procedure. I hope you can share your thoughts. By the way, we only have unpaired T2-FLAIR data from different centers in WMH (that version is T2-FLAIR, not T1-w), and the lesion segmentation changes even where no lesion appeared. I think shape-constrained unpaired translation models may still be an open problem that needs development. Thanks again, looking forward to your ideas! Have a good day.


william2ai commented 5 months ago

@chengyu89527 Hi, have you found a model that can handle image translation with disease shapes (like T1 to T2)? I'm new to this topic and really looking for help! My own dataset also contains disease (with no paired data), and I'm wondering whether to use a GAN or diffusion model as my baseline. However, based on your reply and experiments, it seems a traditional CycleGAN is actually better?

Thanks! Looking forward to your reply.