icon-lab / ResViT

Official Implementation of ResViT: Residual Vision Transformers for Multi-modal Medical Image Synthesis

Pixel-wise consistency loss between acquired and reconstructed source modalities based on an L1 distance #14

Closed · AgustinaLaGreca closed this issue 1 year ago

AgustinaLaGreca commented 1 year ago

Dear Mr. Dalmaz and the rest of the team,

First, I would like to thank you for your work and for making it available to the public.

I am trying to use it for the task of sCT generation with my dataset. Unfortunately, I cannot find where the second term of the loss (as explained in your paper) is implemented in the code: the pixel-wise consistency loss, or Lrec. Based on my understanding, this loss computes the L1 distance between the source image (MR in this case) and the MR image generated by the generator from the sCT. Is that correct?
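In symbols, my reading of this term is the following (the notation is mine, not the paper's; x_MR is the acquired MR image and \hat{x}_MR the MR image reconstructed by the generator):

\mathcal{L}_{rec} = \mathbb{E}\left[ \lVert x_{MR} - \hat{x}_{MR} \rVert_{1} \right]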

Would you mind pointing out where this happens in the code? I can only locate the pixel-wise L1 loss between the CT and the sCT, and the adversarial loss.

Thanks in advance!

Best regards

onat-dalmaz commented 1 year ago

Hello, thanks for your interest in our work. The pixel-wise consistency loss is defined for unified synthesis tasks; therefore, it is not included in the code for the many-to-one and one-to-one synthesis tasks. If you would like to implement a unified version, you can go ahead and include:

self.loss_G_Lrec = self.criterionL1(self.fake_A, self.real_A) * self.opt.lambda_A
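For context, a minimal sketch of where such a term could slot in is shown below, assuming a pix2pix/pGAN-style backward_G. The surrounding names (netD, criterionGAN, fake_B/real_B for the target pair, fake_A/real_A for the source pair) follow that convention and are assumptions, not a verbatim excerpt from this repository:

import torch

def backward_G(self):
    # Adversarial loss on the synthesized target (e.g., the sCT)
    fake_AB = torch.cat((self.real_A, self.fake_B), 1)
    pred_fake = self.netD(fake_AB)
    self.loss_G_GAN = self.criterionGAN(pred_fake, True)

    # Pixel-wise L1 loss between acquired and synthesized target (CT vs. sCT)
    self.loss_G_L1 = self.criterionL1(self.fake_B, self.real_B) * self.opt.lambda_A

    # Pixel-wise consistency loss (Lrec): L1 between the acquired source
    # (real_A) and the source reconstructed by a unified generator (fake_A)
    self.loss_G_Lrec = self.criterionL1(self.fake_A, self.real_A) * self.opt.lambda_A

    # Total generator objective
    self.loss_G = self.loss_G_GAN + self.loss_G_L1 + self.loss_G_Lrec
    self.loss_G.backward()

Note that fake_A only exists if the generator is configured to reconstruct the source modality in addition to synthesizing the target, which is what the unified setting requires.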

ReubenDo commented 1 year ago

Hi,

Following up on this, could you provide more details about how to train your approach for the unified synthesis task, please?

Thanks!

Reuben