A simple command line tool for text to image generation, using OpenAI's CLIP and a BigGAN. Technique was originally created by https://twitter.com/advadnoun
MIT License
Using specific inputs and outputs with dream.train_step #120
I was wondering whether a specific image file can be fed into the model (via dream.train_step()) on each iteration, so that effects like cropping can be applied between iterations before the next generation step?
For example:
1. Use BigSleep+CLIP to do one iteration
2. Apply a crop to the generated image
3. Replace the generated image with the cropped one, or pass the path to the cropped image to dream.train_step(), so the input is slightly zoomed in
4. Loop that 400 times
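The crop-and-feed-back step described above could be sketched roughly as follows. This is only an illustration under assumptions: `dream` stands in for a big-sleep `Imagine` instance, the `train_step` call signature and the `output.png` path are hypothetical, and only the zoom-crop helper below is actually exercised here.

```python
from PIL import Image


def zoom_crop(img: Image.Image, zoom: float = 1.02) -> Image.Image:
    """Center-crop the image by a factor of 1/zoom, then resize it back
    to its original dimensions, producing a slight zoom-in effect."""
    w, h = img.size
    cw, ch = int(w / zoom), int(h / zoom)
    left = (w - cw) // 2
    top = (h - ch) // 2
    cropped = img.crop((left, top, left + cw, top + ch))
    return cropped.resize((w, h), Image.LANCZOS)


# Hypothetical outer loop (commented out because `dream` and the output
# path are assumptions, not part of this sketch):
# for i in range(400):
#     dream.train_step(0, i)             # one BigSleep+CLIP iteration (assumed signature)
#     img = Image.open("output.png")     # image written by the step (assumed path)
#     zoom_crop(img).save("output.png")  # feed the slightly zoomed frame back
```

Because the crop is resized back to the original resolution, the image handed back to the next iteration keeps the dimensions the model expects while drifting toward a zoomed-in composition over the 400 loops.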