Open dinusha94 opened 3 months ago
Hi, in NVIDIA's StyleGAN2-ADA they mention that they use VGG16 weights derived from the pre-trained LPIPS weights:
"vgg16_zhang_perceptual.pkl" is further derived from the pre-trained LPIPS weights by Richard Zhang, Phillip Isola, Alexei A. Efros, Eli Shechtman
link : https://nvlabs-fi-cdn.nvidia.com/stylegan2-ada/pretrained/metrics/NOTICE.txt
I would like to do the same for other networks as well. How can I do that?
Thanks in advance, Dinusha
A perceptual loss has a specific model structure and is trained on a specific dataset. If you want to switch to ResNet, you would have to repeat that training procedure for the new backbone.