Hi Mathew,
Thank you for your implementation and for sharing the code. I am trying to understand StyleGAN in general and your implementation in particular.
Starting from the NVIDIA introduction here: [https://www.youtube.com/watch?v=kSLJriaOumA], I saw a very cool animation in which changing some "inputs" (I assume), such as the coarse, middle, or fine styles (somehow represented by the face images), produces different image textures. I wonder where your code takes this into account. I saw that model.generateTruncated() takes n1 and n2 created from a random function. How do we drive the generation with coarse, middle, or fine styles taken from a set of images?
I hope my question makes sense. I am trying to see whether StyleGAN can be applied to my application.
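For concreteness, here is a small NumPy sketch of what I imagine the coarse/middle/fine mixing looks like: each synthesis layer receives its own copy of the intermediate latent w, and mixing means taking some layers' w from one source image and the rest from another. The layer split, array shapes, and function names below are my own assumptions for illustration, not your actual API:

```python
import numpy as np

# Assumed layer split for a 1024x1024 StyleGAN generator (18 style layers):
# coarse = layers 0-3 (4x4-8x8), middle = 4-7 (16x16-32x32), fine = 8-17 (64x64+).
COARSE, MIDDLE, FINE = range(0, 4), range(4, 8), range(8, 18)

def mix_styles(w_src, w_dst, layers):
    """Return per-layer styles: take `layers` from w_src, the rest from w_dst.

    w_src, w_dst: (num_layers, latent_dim) arrays of intermediate latents,
    i.e. mapping-network outputs broadcast to every synthesis layer.
    """
    w_mix = w_dst.copy()
    w_mix[list(layers)] = w_src[list(layers)]
    return w_mix

# Example: copy only the coarse styles (pose, overall face shape) from
# source A onto B, keeping B's middle and fine styles (features, colors).
rng = np.random.default_rng(0)
w_a = np.tile(rng.standard_normal((1, 512)), (18, 1))  # stand-in for mapping(z_a)
w_b = np.tile(rng.standard_normal((1, 512)), (18, 1))  # stand-in for mapping(z_b)
w_coarse_mix = mix_styles(w_a, w_b, COARSE)
```

So my question is really whether n1 and n2 in your code play the roles of w_a and w_b here, and where the split into coarse/middle/fine layers happens.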
Best,