First of all, thank you for your brilliant work!
I have a question: does your implementation support StyleGAN3 with a 256 output resolution?
If yes, what batch size per GPU can be applied?
Sure, you should be able to change the output size of the generator to 256. You'll probably only need to set --output_size 256
when training.
If you want to generate/edit images, check the arguments in each script. There is usually an "output size" option which is easily configurable.
The batch size per GPU depends on your GPU. I would start with a batch size of 2
and go from there.
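For reference, a full training command might look something like the one below. Note that the script name, dataset flag, and experiment directory are placeholders I made up for illustration; only --output_size and the batch size come from this thread, so check the training script for the exact argument names.

python scripts/train.py \
    --dataset_type=my_dataset_encode \
    --output_size=256 \
    --batch_size=2 \
    --exp_dir=experiments/sg3_256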
Thanks for your feedback!
May I ask this question in another way: what is the logic behind --output_size 256? Does it mean
The generator generates a 1024x1024 image, which is then resized to 256x256
or
The generator generates 256x256 images directly, therefore consuming less GPU memory compared to a 1024 output size?
Or in other words, can I apply your inversion pipeline to a StyleGAN3 model trained on a 256-resolution dataset?
I want to build some extensions on top of your inversion pipeline, and since I am using a 16GB V100, memory is a major concern.
Thank you so much for your help.
May I ask this question in another way: what is the logic behind --output_size 256? Does it mean
The second option is the correct one. We assume by default that you are using a generator that was trained to output an image of size 1024. If your generator outputs an image of size 256, then you'll want to set --output_size 256.
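To see this concretely, here is a minimal sketch following the loading convention from the official NVlabs stylegan3 README (the pickle filename is a placeholder, and it assumes the stylegan3 code is on your Python path). A 256 generator emits 256x256 tensors directly; there is no internal 1024 rendering followed by downsampling.

import pickle
import torch

# Load a generator trained at 256 resolution (placeholder filename).
with open('stylegan3-r-ffhq-256.pkl', 'rb') as f:
    G = pickle.load(f)['G_ema'].cuda()

z = torch.randn(1, G.z_dim, device='cuda')
img = G(z, None)   # None = no class label (unconditional model)
print(img.shape)   # torch.Size([1, 3, 256, 256])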
Or in other words, can I apply your inversion pipeline to a StyleGAN3 model trained on a 256-resolution dataset?
Yes. If you have a SG3 generator that was trained to output images of size 256, then you can train an encoder to perform an inversion using this generator.
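On the memory question: activation memory in the synthesis network scales roughly with pixel count, and 256^2 is 1/16 of 1024^2, so the 256 setting is dramatically lighter. If you want a rough sanity check on your 16GB V100 before launching a long run, a sketch like the following (same placeholder filename as above) measures the peak memory of a single forward pass. Keep in mind that actual training adds gradients, optimizer state, and the encoder itself on top of this.

import pickle
import torch

with open('stylegan3-r-ffhq-256.pkl', 'rb') as f:  # placeholder filename
    G = pickle.load(f)['G_ema'].cuda()

torch.cuda.reset_peak_memory_stats()
z = torch.randn(2, G.z_dim, device='cuda')  # batch size 2, as suggested above
img = G(z, None)
print(f'peak forward-pass memory: {torch.cuda.max_memory_allocated() / 1e9:.2f} GB')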
Let me know if you have further questions.
You are so helpful, Yuval! Many thanks to you.