alelordelo closed this issue 1 year ago
Hi Alex, you can use the pretrained model directly to do style transfer. It takes a written image-editing instruction, such as "change the image from digital to analog," and performs the specified edit with a single model. Image captions are not required to edit images — just the written instruction. If you want to train your own model, I recommend reading the InstructPix2Pix paper, which explains a process for generating paired training data.
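For reference, here is a minimal sketch of what "instruction in, edited image out" looks like using the pretrained checkpoint through Hugging Face diffusers. This assumes the `timbrooks/instruct-pix2pix` model id and a CUDA device; adjust both to your setup, and treat the guidance-scale values as starting points rather than recommendations:

```python
# Sketch: editing an image with the pretrained InstructPix2Pix model via
# Hugging Face diffusers. Assumes the "timbrooks/instruct-pix2pix"
# checkpoint and a CUDA GPU -- adjust to your environment.

def edit_image(image, instruction, steps=20, image_guidance_scale=1.5):
    """Apply a written editing instruction to a PIL image and return the edit."""
    # torch and diffusers are heavyweight dependencies, so import them lazily.
    import torch
    from diffusers import StableDiffusionInstructPix2PixPipeline

    pipe = StableDiffusionInstructPix2PixPipeline.from_pretrained(
        "timbrooks/instruct-pix2pix", torch_dtype=torch.float16
    ).to("cuda")
    # A single call performs the edit -- no caption of the input image is
    # needed, only the instruction text.
    result = pipe(
        instruction,
        image=image,
        num_inference_steps=steps,
        image_guidance_scale=image_guidance_scale,
    )
    return result.images[0]

# The instruction is plain natural language, e.g.:
instruction = "change the image from digital to analog"
```

`image_guidance_scale` controls how closely the output sticks to the input image, which is what preserves the overall structure you asked about.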
Hi, thanks for sharing this amazing research!
If I want to train on paired images, e.g. Digital -> Analog:
Do I need to caption all the images, or can I just use "digital style" and "analog style" for all of them?
As I understand it, the advantage of this repo is that we can train on image pairs (as in the pix2pix GAN) and therefore preserve the overall structure of the input image, correct? So it seems like a good fit for style transfer tasks, given that you have image pairs?
Cheers from Stockholm! Alex