entavelis / OpenSESAME

SESAME: Semantic Editing of Scenes by Adding, Manipulating or Erasing Objects

How to use the pre-trained ImageGeneration checkpoint or train for image-to-image translation? #8

Open SherryXTChen opened 2 years ago

SherryXTChen commented 2 years ago

Thanks for your work and code.

I read your paper, and it says that for the image-to-image translation task you use your discriminator together with the SPADE generator. Since I couldn't find the SPADE generator code in this repo, I copied the corresponding code from the SPADE repo (https://github.com/NVlabs/SPADE/blob/master/models/networks/generator.py), but there are some parameters that don't appear in your option code, and I am not sure how to change them so that I can load your pre-trained weights from the ImageGeneration folder.

Really appreciate your time and help!

entavelis commented 2 years ago

Hello SherryXTChen,

For layout-to-image translation training, we added our discriminator class to the SPADE code base, and I would suggest you do the same. It should be straightforward, since we used SPADE as our basis: just copy the two classes from here into SPADE's discriminator module and change the discriminator options when running the code.
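As a rough illustration of why copying the classes is enough: SPADE resolves its `--netD` option to a discriminator class by name, so a class dropped into its discriminator module becomes selectable from the command line. The sketch below mimics that lookup convention in plain Python; `SesameDiscriminator` and `find_discriminator` are hypothetical stand-ins, not the actual SPADE/SESAME code, which additionally subclasses SPADE's `BaseNetwork` and takes the options object in its constructor.

```python
# Minimal sketch (an assumption, not SPADE's exact code) of SPADE-style
# name-based network lookup: "--netD <name>" maps to a class named
# "<Name>Discriminator" found by case-insensitive matching.

class MultiscaleDiscriminator:   # placeholder for the class SPADE ships with
    pass

class SesameDiscriminator:       # hypothetical: the class copied from SESAME
    pass

def find_discriminator(netd_name, namespace):
    """Match '<netd_name>discriminator' against lowercased class names."""
    target = netd_name.replace('_', '') + 'discriminator'
    for name, obj in namespace.items():
        if isinstance(obj, type) and name.lower() == target:
            return obj
    raise ValueError(f"No discriminator class found for --netD {netd_name}")

print(find_discriminator('sesame', globals()).__name__)  # SesameDiscriminator
```

Under this convention, after pasting the SESAME classes into SPADE's `models/networks/discriminator.py`, passing something like `--netD sesame` (plus whatever extra discriminator options SESAME's training script defines) should make the training code pick them up.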

I hope this helps!

SherryXTChen commented 2 years ago


Thank you!