heurainbow opened this issue 1 year ago (status: Open)
You can get something similar to Dall-E 2's image variations if you play around with img2img.py. Feed in the original image that you want to get variations for, use the same text prompt that you used to create your original image, and then play around with the "strength" parameter. Higher strength means more of your original image is overwritten with noise, and so the results will be further from your original.
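To make the "strength" behavior concrete: in CompVis stable-diffusion's `scripts/img2img.py`, strength controls what fraction of the sampler's steps are used to noise the init image before denoising. A minimal sketch of that mapping (the function name `steps_from_strength` is mine; the actual script uses different variable names):

```python
# Sketch of how img2img's "strength" maps to the number of diffusion
# steps applied to the init image (based on scripts/img2img.py in the
# CompVis stable-diffusion repo; names here are illustrative).

def steps_from_strength(strength: float, ddim_steps: int = 50) -> int:
    """Return how many of the sampler's steps the init image is noised for.

    strength=0.0 keeps the init image untouched; strength=1.0 replaces it
    entirely with noise (equivalent to plain text-to-image generation).
    """
    if not 0.0 <= strength <= 1.0:
        raise ValueError("strength must be in [0, 1]")
    return int(strength * ddim_steps)

# Higher strength -> more noising steps -> results drift further from
# the original image.
print(steps_from_strength(0.3))   # 15 of 50 steps: stays close to the init image
print(steps_from_strength(0.75))  # 37 of 50 steps: much looser variations
```

So values around 0.3 give subtle variations, while values near 0.75 and above mostly keep only the coarse composition of the original.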
This works only when I generate the init image first. What if I use a real image instead of a generated one?
You don't need to generate the initial image. You can use a real image just fine. Just resize it to the same dimensions you wish to output.
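In practice you would resize with Pillow (`Image.open(path).resize((512, 512))`), but here is a stdlib-only nearest-neighbor sketch just to illustrate matching a real photo's dimensions to the output size the model expects (the helper name is mine; 512x512 is the usual Stable Diffusion default, not something this thread specifies):

```python
# Minimal nearest-neighbor resize (stdlib only), illustrating why a real
# init image must match the intended output dimensions before img2img.
# With Pillow installed you would simply do:
#   Image.open("photo.jpg").resize((512, 512))

def resize_nearest(pixels, new_w, new_h):
    """pixels: list of rows, each row a list of pixel values."""
    old_h, old_w = len(pixels), len(pixels[0])
    return [
        [pixels[y * old_h // new_h][x * old_w // new_w] for x in range(new_w)]
        for y in range(new_h)
    ]

img = [[1, 2], [3, 4]]           # a tiny 2x2 "image"
out = resize_nearest(img, 4, 4)  # upscale to 4x4 before feeding it in
print(len(out), len(out[0]))     # 4 4
```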
What I need is image translation without conditioning on a text prompt, where the generated image must maintain the general semantics of the original image.
As in , the top-right image changes into the bottom-right one without using a text prompt.
@cunicode same issue here! Have you found an appropriate way to do this?
Versatile Diffusion has an image variation task
Can this diffusion model support image variation without a text prompt, as DALL-E 2 does?