Hi, may I know how you did it?
I put an image in ./input/content and another in ./input/style, using "vgg_normalized.pth".
CMD: python train.py --content_dir input/content --style_dir input/style --batch_size 4 --max_iter 160
Although I got an xx.pth.tar in ./experiments, when I run it through test.py I get a gray image as the result.
@ZeroAct I guess the VGG model provided is a bit modified. I am still not 100% sure, but it seems that Gatys et al. normalized the VGG network for style transfer, according to this.
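If it helps, here is a minimal sketch of that kind of activation-based normalization, assuming the procedure described by Gatys et al.: each conv filter is rescaled so that its mean activation over a batch of sample images is roughly 1, and the inverse scale is pushed into the next conv so the network's overall function is preserved. The function name and details below are my assumptions, not the actual procedure behind vgg_normalized.pth:

```python
import torch
import torch.nn as nn
from torchvision import models

@torch.no_grad()
def normalize_vgg19(sample_images: torch.Tensor) -> nn.Sequential:
    # Hypothetical re-implementation; the real vgg_normalized.pth may have
    # been produced differently (e.g. averaged over all of ImageNet).
    features = models.vgg19(weights=models.VGG19_Weights.IMAGENET1K_V1).features.eval()
    conv_idx = [i for i, m in enumerate(features) if isinstance(m, nn.Conv2d)]
    for k, i in enumerate(conv_idx):
        # Run the prefix up to and including the ReLU that follows conv i.
        x = sample_images
        for m in features[: i + 2]:
            x = m(x)
        mean_act = x.mean(dim=(0, 2, 3)).clamp_min(1e-8)  # per-filter mean activation
        # Rescale so this conv's mean post-ReLU activation becomes ~1.
        features[i].weight.div_(mean_act.view(-1, 1, 1, 1))
        features[i].bias.div_(mean_act)
        # Compensate at the next conv's input channels (ReLU and max-pooling
        # are positively homogeneous, so the overall function is unchanged).
        if k + 1 < len(conv_idx):
            features[conv_idx[k + 1]].weight.mul_(mean_act.view(1, -1, 1, 1))
    return features
```

A decoder trained against features normalized this way sees per-channel statistics on a comparable scale across layers, which may be why training converges with vgg_normalized.pth but struggles with the raw torchvision weights.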
@wjjjjyourFA I have since written my own new AdaIN code, so I can't answer your question. Sorry!
@naoto0804 Thank you for your answer. It was very helpful.
Hi.
I wonder what the difference is between the 'vgg_normalized.pth' you uploaded and the pretrained vgg19 from torchvision.
I have tried to train AdaIN with the pretrained vgg19 from torchvision many times, but it failed every time. (I succeeded with 'vgg_normalized.pth'.)
Samples of the inferences from my AdaIN trained with the torchvision vgg19 for many epochs look very bad...
Thank you : )
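For anyone comparing the two setups, here is a minimal sketch, assuming a torchvision-based encoder rather than this repo's code, of slicing vgg19 up to relu4_1 (the encoder depth AdaIN uses) and freezing it:

```python
import torch.nn as nn
from torchvision import models

# relu4_1 is features[20] in torchvision's vgg19, so slicing [:21] keeps
# conv1_1 ... relu4_1 as a fixed encoder for AdaIN-style training.
vgg19 = models.vgg19(weights=models.VGG19_Weights.IMAGENET1K_V1)
encoder = nn.Sequential(*list(vgg19.features.children())[:21]).eval()
for p in encoder.parameters():
    p.requires_grad_(False)  # the encoder stays frozen; only the decoder trains
```

One thing worth checking when swapping encoders: torchvision's pretrained VGG expects inputs normalized with the ImageNet mean and std, so if the training pipeline feeds it raw [0, 1] images, the feature statistics will be off, which could contribute to the degraded samples.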