lianggaoquan opened this issue 2 years ago
Yes, just concatenate your image to the noised-image input and change the input-channel size.
@lianggaoquan yeah, what Robert said
I can add it later this week.
That depends. I would say for paired i2i you can do what @robert-graf mentioned. However, if one side of the pair is, for example, a segmentation map, you might be better off adding a SPADE normalization layer inside your UNet instead of attaching the segmentation map to the input (see the sketch below).
However, for unpaired i2i I think this current framework most likely will not work, as I can't see how the current training signal would be enough. But maybe I am wrong.
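For readers who haven't seen SPADE: below is a minimal sketch of such a block, loosely following Park et al.'s "Semantic Image Synthesis with Spatially-Adaptive Normalization". Nothing here is part of this repo; the class name, hidden width, and where you splice it into the UNet's ResNet blocks are all illustrative.

import torch.nn as nn
import torch.nn.functional as F

class SPADE(nn.Module):
    """Normalize features, then modulate them with a per-pixel scale and shift
    predicted from the segmentation map."""
    def __init__(self, feature_channels, seg_channels, hidden=128):
        super().__init__()
        # parameter-free normalization; gamma/beta come from the segmentation map
        self.norm = nn.BatchNorm2d(feature_channels, affine=False)
        self.shared = nn.Sequential(
            nn.Conv2d(seg_channels, hidden, kernel_size=3, padding=1),
            nn.ReLU(),
        )
        self.gamma = nn.Conv2d(hidden, feature_channels, kernel_size=3, padding=1)
        self.beta = nn.Conv2d(hidden, feature_channels, kernel_size=3, padding=1)

    def forward(self, x, segmap):
        # resize the segmentation map to the current feature resolution
        segmap = F.interpolate(segmap, size=x.shape[-2:], mode="nearest")
        h = self.shared(segmap)
        return self.norm(x) * (1 + self.gamma(h)) + self.beta(h)

The point is that the segmentation map then modulates the normalized features at every resolution of the UNet, rather than being mixed into the input once.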
Hi, any update on paired image translation in the repo? Or can anyone show at least a snippet of code for how to modify the repo to do it? Anyway, I really appreciate all the work, I've learned a lot!
@robert-graf Where exactly should I perform the concatenation operation? Could you please give more details? I tried to do it at the very beginning of the Unet forward, but it did not work.
Yes, just concatenate your image to the noised-image input and change the input-channel size.
@huseyin-karaca This Google paper introduced this approach: https://iterative-refinement.github.io/palette/.
I did it before the forward call of the U-Net and only updated the input channel size of the first conv block.
# Conditional p(x_0 | y) -> p(x_0) * p(y | x_0) --> just added it to the input
if x_conditional is not None and self.opt.conditional:
    x = torch.cat([x, x_conditional], dim=1)
# --------------
Here is the rest of my image2image code for context, under /img2img2D/diffusion.py. I hope lucidrains is fine with me linking my code here. If you are looking for the paper referenced, the preprint is coming out on Tuesday.
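To translate the idea above into this repo's terms, here is a minimal, self-contained training-step sketch of the same concatenation trick. This is not the repo's actual GaussianDiffusion.p_losses; the linear beta schedule and the helper names are illustrative, and I'm assuming the Unet constructor accepts channels and out_dim as shown.

import torch
import torch.nn.functional as F
from denoising_diffusion_pytorch import Unet

# 6 input channels (noisy target + condition), but the model still predicts 3-channel noise
model = Unet(dim=64, dim_mults=(1, 2, 4, 8), channels=6, out_dim=3)

T = 1000
betas = torch.linspace(1e-4, 0.02, T)               # simple linear schedule, for illustration only
alphas_cumprod = torch.cumprod(1.0 - betas, dim=0)

def p_losses(x_target, x_cond):
    """One conditional training step; x_target and x_cond are (B, 3, H, W) in [-1, 1]."""
    b = x_target.shape[0]
    t = torch.randint(0, T, (b,))
    noise = torch.randn_like(x_target)
    a_bar = alphas_cumprod[t].view(b, 1, 1, 1)
    x_noisy = a_bar.sqrt() * x_target + (1 - a_bar).sqrt() * noise  # forward process q(x_t | x_0)
    model_in = torch.cat([x_noisy, x_cond], dim=1)                  # condition by channel concatenation
    pred_noise = model(model_in, t)
    return F.mse_loss(pred_noise, noise)

Relative to unconditional training, the only two changes are the widened input channels and the torch.cat before every model call.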
@robert-graf Thank you for your kind reply!
Hi, so to do i2i using this repo, is it okay to use the Unet with self_condition=True, or do we have to do the cat manually and change the code somewhere else?
@heitorrapela You would have to manually change the code written in this repo to achieve i2i. The self_condition=True in the Unet from this repo is the implementation of this paper: https://arxiv.org/abs/2208.04202
By the way, diffusion models often achieve better i2i results when starting from a pre-trained model, so maybe you could take a look at HuggingFace's diffusers: https://github.com/huggingface/diffusers
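If you go the diffusers route but cannot afford a large pre-trained model, the same channel-concatenation trick also works with a small UNet2DModel and a DDPMScheduler. A rough sketch; the sizes and variable names are placeholders, not something prescribed by the diffusers docs.

import torch
import torch.nn.functional as F
from diffusers import UNet2DModel, DDPMScheduler

# small from-scratch UNet: 6 input channels (noisy target + condition), 3 output channels
model = UNet2DModel(
    sample_size=128,
    in_channels=6,
    out_channels=3,
    block_out_channels=(64, 128, 256, 256),
)
scheduler = DDPMScheduler(num_train_timesteps=1000)

x_target = torch.randn(4, 3, 128, 128)   # dummy paired batch in place of a real dataloader
x_cond = torch.randn(4, 3, 128, 128)
noise = torch.randn_like(x_target)
t = torch.randint(0, scheduler.config.num_train_timesteps, (4,))

x_noisy = scheduler.add_noise(x_target, noise, t)              # forward diffusion
pred = model(torch.cat([x_noisy, x_cond], dim=1), t).sample    # predict the noise
loss = F.mse_loss(pred, noise)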
@FireWallDragonDarkFluid, thanks for the response. I was trying with the self_condition, but yes, it was not what I wanted, and in the end, it was still adding artifacts to the translation process.
I will see if I can implement it myself with this library or with diffusers. Using diffusers, I have just tried simple things, but I still need to train, so I must investigate. Due to my task restrictions, I also cannot use a heavy model such as SD.
I did a quick implementation, but I am not 100% sure; I am training some models with it. Here are my modifications if anyone wants to try them too:
Unet(dim = 64, dim_mults = (1, 2, 4, 8), flash_attn = False, channels = 6)
Before model_out = self.model(x, t, x_self_cond), I added x = torch.cat([x, x_start], dim=1), and for the loss target I added target = torch.cat([target, x_start], dim=1).
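One gap in the snippets in this thread is sampling: the repo's p_sample_loop only ever feeds the noisy image to the model, so whichever concatenation variant you train, the condition also has to be concatenated to x_t at every reverse step. Below is a minimal ancestral-sampling sketch; it reuses the illustrative model, T, betas, and alphas_cumprod from the training sketch earlier in the thread and is not the repo's sampling code.

@torch.no_grad()
def sample(x_cond, image_size=128):
    """DDPM ancestral sampling, concatenating the condition at every step."""
    x = torch.randn(x_cond.shape[0], 3, image_size, image_size)
    for i in reversed(range(T)):
        t = torch.full((x.shape[0],), i, dtype=torch.long)
        pred_noise = model(torch.cat([x, x_cond], dim=1), t)
        alpha, a_bar = 1.0 - betas[i], alphas_cumprod[i]
        # posterior mean under the epsilon-prediction parameterization
        x = (x - (1.0 - alpha) / (1.0 - a_bar).sqrt() * pred_noise) / alpha.sqrt()
        if i > 0:
            x = x + betas[i].sqrt() * torch.randn_like(x)
    return x

A quick sanity check that the conditioning is wired correctly: this loop never touches the ground-truth target, only x_cond and noise.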
Can diffusion models be used for image-to-image translation?