bumsun closed this issue 4 years ago
200 epochs in total. We keep the same learning rate for the first 100 epochs and linearly decay the rate to zero over the next 100 epochs. See the appendix of our paper for more details.
Is there a way to achieve that with the options? I see the lr_policy flag is set to linear by default. But how do you keep it steady for the first 100 epochs?
Do you run 10,000 iterations per epoch?
We go through the entire dataset in each epoch: if there are 1,000 images, the model is trained for 1,000 iterations per epoch. Regarding the 'linear' flag (the name is a bit confusing): in our implementation, we keep the same learning rate for the first 100 epochs and linearly decay it to zero over the next 100 epochs.
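The schedule described above can be sketched as a multiplier applied to the initial learning rate. This is a minimal illustration, not the repository's exact code; the names `n_epochs` and `n_epochs_decay` are assumed here:

```python
def lr_multiplier(epoch, n_epochs=100, n_epochs_decay=100):
    """Factor applied to the initial learning rate.

    Constant (1.0) for the first `n_epochs`, then decays linearly
    to zero over the following `n_epochs_decay` epochs.
    """
    if epoch < n_epochs:
        return 1.0
    return max(0.0, 1.0 - (epoch - n_epochs) / n_epochs_decay)
```

In PyTorch, a factor like this would typically be passed to `torch.optim.lr_scheduler.LambdaLR` so it is applied automatically each epoch.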
After how many epochs will we start to get reasonable images? Will the images be just random noise for, say, the first 5 epochs?
Around 50 epochs (for horse2zebra); it depends on the dataset.
Thanks for replying! I am working on a project to convert a normal image of a person into an image in which the person is smiling. I have 1,200 images of people not smiling and another 1,500 images of people smiling.
Also, will I be able to get good results using the Google Colab version of your code by just changing the dataset folder?
Again thank you so much :)
Hello, @SurajSubramanian
Are the not-smiling and smiling images matched?
I mean, do I need to prepare a set of smiling and not smiling faces of the same person?
Or just random smiling and not smiling faces work?
@youjinChung You don't need paired/matching images; that's the point of CycleGAN. Random images from both sets should work.
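Concretely, "unpaired" means the two image sets are sampled independently during training. A minimal sketch of that idea (hypothetical file names, not the repository's dataset class):

```python
import random

def unaligned_pairs(domain_a, domain_b, n_samples, seed=0):
    """Yield (A, B) training pairs where the B image is drawn at random,
    so no correspondence between the two domains is required."""
    rng = random.Random(seed)
    for i in range(n_samples):
        a = domain_a[i % len(domain_a)]  # walk through domain A in order
        b = rng.choice(domain_b)         # random, unpaired image from domain B
        yield a, b

# Hypothetical file lists mirroring the dataset sizes mentioned above.
not_smiling = [f"not_smiling_{i}.jpg" for i in range(1200)]
smiling = [f"smiling_{i}.jpg" for i in range(1500)]
pairs = list(unaligned_pairs(not_smiling, smiling, 4))
```

The sizes of the two sets do not need to match, which is why 1,200 vs. 1,500 images is fine.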
@SurajSubramanian That is so cool. Thanks for the explanation, Suraj. Let me try it right away.
How many epochs were required to train the horse2zebra model?