wdayang / CTformer

This repository includes the implementation of CTformer for low-dose CT denoising
MIT License

Strange results with artifacts #6

Closed Joker-ZXR closed 2 years ago

Joker-ZXR commented 2 years ago

Hi, thank you for your valuable work! I recently tried to use your network structure for a medical image generation task. I did not load pre-trained weights, and the model produces strange results with artifacts. Have you encountered this situation? I am looking forward to your reply!

wdayang commented 2 years ago

Hi, @Joker-ZXR Thanks for your interest in our work.

Did you mean that you loaded the weights from the pre-trained models? You need to train the model or load pre-trained weights to get reasonable results. If you have already loaded the pre-trained weights, make sure you are using our latest code; we made some minor modifications to the first-version code. Feel free to reach out for updates.
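[Editor's note] Loading pre-trained weights in PyTorch can be sketched as below. `TinyDenoiser` is a hypothetical stand-in for the actual CTformer class, and the in-memory buffer stands in for a checkpoint file on disk; neither name comes from the repository.

```python
import io

import torch
import torch.nn as nn

# Stand-in for the real CTformer model class (illustrative only).
class TinyDenoiser(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(1, 1, 3, padding=1)

    def forward(self, x):
        return self.conv(x)

model = TinyDenoiser()
buffer = io.BytesIO()
torch.save(model.state_dict(), buffer)  # stands in for a .pth file on disk
buffer.seek(0)

# Loading pre-trained weights: restore the state dict, then switch to eval
# mode before running inference.
restored = TinyDenoiser()
restored.load_state_dict(torch.load(buffer, map_location="cpu"))
restored.eval()
```

Without this step the model runs with randomly initialized weights, which would produce exactly the kind of artifact-laden output shown in the issue.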

Best, Dayang

Joker-ZXR commented 2 years ago

Thank you for your prompt reply. I did not load the pre-trained model because I wanted to try using this framework for an image generation task. I did a supervised generation task, but I found this artifact in the results. What do you think might account for it? Is it the model itself? I see that your model generates a noise image and then subtracts it from the original image. Have you tried directly generating the denoised image? Is the model only capable of reconstructing its input, rather than generating new images? Looking forward to your reply. Thank you very much!

wdayang commented 2 years ago

Hi @Joker-ZXR , I am sorry, I am not very familiar with generation tasks. Are you using our model as a generator in a GAN, or something similar? Would you like to share some of your model code? I can have a look.

Yes, our current model learns the noise patterns rather than the denoised image itself. But I have also tried removing the residual shortcut; the model should then be able to generate the denoised image directly.
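[Editor's note] The residual ("noise-learning") formulation described here can be sketched as follows. The tiny convolutional backbone is a hypothetical stand-in for CTformer; only the shortcut structure (predict noise, subtract it from the input) reflects the discussion.

```python
import torch
import torch.nn as nn

class ResidualDenoiser(nn.Module):
    """Predicts a noise map and subtracts it from the noisy input."""

    def __init__(self):
        super().__init__()
        # Stand-in backbone; the real model uses a T2T transformer.
        self.body = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1),
        )

    def forward(self, noisy):
        noise = self.body(noisy)  # network estimates the noise
        return noisy - noise      # residual shortcut: subtract it

x = torch.randn(1, 1, 64, 64)     # dummy low-dose CT slice
out = ResidualDenoiser()(x)       # same shape as the input
```

Removing the shortcut, i.e. returning `self.body(noisy)` directly, is the "generate the denoised image itself" variant mentioned above.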

I think the design of the model is indeed geared toward reconstructing the original image pixels. I am not sure whether it is able to generate unseen images from random noise.

Best, Dayang

wdayang commented 2 years ago

Hi @Joker-ZXR , sorry for the late reply. I think you can generate MRI images from MRI images by directly using our model, but if you want to generate both MRI and CT images from MRI images, I think it may not work. One possible reason is that the encoder-decoder architecture assumes that feature maps at the same level carry similar information. Maybe you need to add some extra modules for the transformation between MRI and CT images? I am not sure; it is just a possible solution.

Best, Dayang

Joker-ZXR commented 2 years ago

Thank you for your reply. I think so too; the T2T approach seems a little too difficult for generation tasks. My understanding is that its architecture expands the image along another dimension to combine the local information integration of convolution with the long-range information integration of self-attention, mostly to retain the original image information, both local and global. So I think T2T may be better suited for reconstruction tasks than for generation tasks. I also looked into it, and T2T was originally used for classification; your article is the first to use it for reconstruction, which I think is a very appropriate idea.
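[Editor's note] The "expand the image from another dimension" step of T2T, the soft split, can be sketched with `nn.Unfold`: overlapping patches are flattened into tokens so each token carries its local neighborhood. The kernel, stride, and input sizes here are illustrative, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

x = torch.randn(1, 1, 64, 64)  # (B, C, H, W) dummy image

# Soft split: overlapping 7x7 windows, stride 4, so neighboring tokens
# share pixels and local structure is preserved in every token.
soft_split = nn.Unfold(kernel_size=7, stride=4, padding=2)
tokens = soft_split(x)          # (B, C*7*7, L) = (1, 49, 256)
tokens = tokens.transpose(1, 2) # (B, L, C*7*7): a sequence of 256 tokens
```

Each 49-dimensional token mixes local pixels (convolution-like), while the subsequent self-attention over the 256-token sequence integrates long-range information.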

Thank you very much for your answer, which gives me a deeper understanding of T2T.

wdayang commented 2 years ago

Thanks for your interest and acknowledgment. I agree with your point: T2T mostly retains the original information from the input image. The first T2T paper targets the classification task, but after the T2T process it has plenty of transformer blocks for feature inference and for learning high-level information. For your task, I think you may need to add some prior knowledge about the transformation between MRI and CT images. Good luck, and thanks.

Best, Dayang