LingxiaoYang2023 / DSG2024

Official PyTorch repository for "Guidance with Spherical Gaussian Constraint for Conditional Diffusion"

How DSG adapts to different tasks #5


shikailun-up commented 4 months ago

Thank you for your excellent work, but I'm running into some problems. I tried to apply the 'conditioning' method in lines 110 to 128 of /guided_diffusion/condition_methods.py to my task, but it didn't work as well as it should. My task is low-light image enhancement: the goal is to enhance low-light images to normal-light images. In my dataset, low-light and normal-light images exist in pairs, and in my diffusion model I use the low-light image as a condition. I noticed that self.operator.forward(x_0_hat, **kwargs) in line 113 performs a different operation on x_0_hat for each task, so I'd like to ask: what should I do with x_0_hat in my task?
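For context, the conditioning pattern the question refers to follows the usual DPS-style guidance loop: compare the measurement against the forward operator applied to the clean estimate, and differentiate that loss with respect to the noisy sample. The sketch below is illustrative only; the function name and details are not the repository's exact code, and it assumes x_0_hat was computed from x_t with the autograd graph retained.

```python
import torch

def guidance_gradient(operator, x_t, x_0_hat, measurement, **kwargs):
    # Data-fidelity loss between the measurement y and the forward model
    # applied to the current clean estimate x_0_hat.
    difference = measurement - operator.forward(x_0_hat, **kwargs)
    norm = torch.linalg.norm(difference)
    # Gradient of the loss w.r.t. the noisy sample x_t; DSG then uses this
    # direction under its spherical Gaussian constraint.
    grad = torch.autograd.grad(outputs=norm, inputs=x_t)[0]
    return grad, norm
```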

LingxiaoYang2023 commented 4 months ago

DSG deals with problems where the condition is unknown at the training stage: during sampling, we want to minimize an additional loss function L(x_0, y) using a pre-trained diffusion model. Hence, it can be applied to various tasks by choosing different loss functions. However, it seems that you have already incorporated the conditional pairs during training, and you also want to use DSG during sampling. What is your purpose? If you have another loss function to use during the sampling stage (e.g., a forward function from normal-light images to low-light images, if it exists), you can employ DSG to minimize it.
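If such a normal-light-to-low-light forward model did exist, a task-specific operator could be sketched along the lines below. This is purely illustrative and not the repository's code: the class name is hypothetical, the gamma/gain degradation is a stand-in for whatever real forward model is available, and only the forward(x_0_hat, **kwargs) interface mirrors the operator call discussed above.

```python
import torch
import torch.nn as nn

class LowLightDegradationOperator(nn.Module):
    """Hypothetical forward model: maps a normal-light estimate x_0_hat to a
    synthetic low-light image so it can be compared with the measurement y
    (the real low-light input)."""

    def __init__(self, gamma: float = 2.2, gain: float = 0.3):
        super().__init__()
        self.gamma = gamma  # illustrative gamma curve
        self.gain = gain    # illustrative global dimming factor

    def forward(self, x_0_hat: torch.Tensor, **kwargs) -> torch.Tensor:
        # Map from [-1, 1] (typical diffusion range) to [0, 1], darken, map back.
        x = (x_0_hat.clamp(-1, 1) + 1) / 2
        degraded = self.gain * x.pow(self.gamma)
        return degraded * 2 - 1

def guidance_loss(operator, x_0_hat, y):
    # The loss DSG would minimize during sampling: || y - A(x_0_hat) ||.
    return torch.linalg.norm(y - operator(x_0_hat))
```

The key point is that DSG only needs some differentiable loss between y and a function of x_0_hat at sampling time; without such a forward model, the conditioning already learned during training is doing the work instead.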