Closed: Johnvarghese11 closed this issue 2 years ago
Hi John, sorry, our method cannot be applied directly to full images, for the following reasons:
1. As we mentioned in the readme, real images should be extracted and aligned using DLib and a function from the original FFHQ dataset preparation step. This is because we use the pre-trained FFHQ-1024 weights from Nvidia, which can only process images that are aligned and cropped.
2. Although we can reverse the "image align" step and paste the cropped image back onto the full-body image, the composed images sometimes still have some hair remaining (since the cropped image doesn't include all of the hair in the original image).
3. As we mentioned in our paper (Section 6), our method sometimes changes the overall color, since we use Poisson editing to blend images.
In sum, if you want to work on full-body images, you need to:
1. Paste the cropped result back onto the full-body image by reversing the image-align transform (a rough sketch of this step is given below). I can provide example code; please find it in the attached rar file. It is just a simple implementation, so you will probably need to modify it yourself.
2. Use an original image that doesn't have long hair, so that the cropped image includes as much of the hair as possible.
3. Address the color-change issue, for example by switching to an image-blending method that doesn't shift the overall color.
Good luck!
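For reference, here is a minimal paste-back sketch (this is not the attached code). It assumes you saved the four-point `quad`, in original-image coordinates, that the FFHQ-style align function used to produce the 1024×1024 crop, and it uses a feathered alpha blend instead of Poisson editing to avoid the overall color change mentioned in point 3. The function name `paste_back` and the corner ordering are illustrative and must be matched to your own align step.

```python
import cv2
import numpy as np

def paste_back(full_img, edited_crop, quad, feather=25):
    """Warp the edited aligned crop back into the full image and blend it in."""
    h, w = full_img.shape[:2]
    size = edited_crop.shape[0]  # e.g. 1024

    # Corners of the aligned crop, in the same order as the quad points
    # (this ordering must match the one used during alignment).
    src = np.float32([[0, 0], [0, size], [size, size], [size, 0]])
    dst = np.float32(quad)

    # Homography that maps the aligned crop back into the original image.
    M = cv2.getPerspectiveTransform(src, dst)
    warped = cv2.warpPerspective(edited_crop, M, (w, h))

    # Feathered mask so the seam is soft; a plain alpha blend avoids the
    # global color shift that Poisson blending can introduce.
    mask = cv2.warpPerspective(np.ones((size, size), np.float32), M, (w, h))
    mask = cv2.erode(mask, np.ones((feather, feather), np.uint8))
    mask = cv2.GaussianBlur(mask, (2 * feather + 1, 2 * feather + 1), 0)[..., None]

    composed = warped * mask + full_img * (1.0 - mask)
    return composed.astype(np.uint8)
```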
Can't find the attached rar file :)
Sorry for the missing file!
Please find it here: https://drive.google.com/file/d/1O618iXND-iQfRrgI_BkUnJqbvD6o-5V_/view?usp=sharing
Thanks @oneThousand1000
That was quick :)
Hi there! I have a question. I find that if the original image is a side-face picture, the crop window calculated by the align function is often too small to include the whole head. Is there any way to enlarge the window? Or what other approaches might help with this?
Hi, the valid face region of the pre-trained StyleGAN is fixed, so the window cannot be enlarged unless you retrain the StyleGAN model on a dataset with a larger crop window than FFHQ. Alternatively, you can simply paste the result back onto the full-head side-face picture using the code I uploaded: https://drive.google.com/file/d/1O618iXND-iQfRrgI_BkUnJqbvD6o-5V_/view?usp=sharing
Thanks very much for your quick reply! I have used your code and it works well. However, due to the size of the crop window, some hair outside the window cannot be removed after pasting.
So, if I want to deal with the part not included in the crop window, the only way is to retrain StyleGAN and HairMapper, right? (Oh, that seems hard.)
Yes, you're right! This is one of the limitations of our paper, and it is still a challenging problem for many StyleGAN-based methods.
Maybe you can try the [StyleGAN3 pre-trained models](https://ngc.nvidia.com/catalog/models/nvidia:research:stylegan3) for config T (translation equiv.) and config R (translation and rotation equiv.). If you can use StyleGAN3 and apply translation and rotation to get a full-head side-face image, you could hopefully retrain HairMapper on it and remove the hair from the side face completely. But I don't think it will give better results, because full-head side faces do not exist in FFHQ, and StyleGAN has no knowledge of how to generate one.
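If you go this way, here is a minimal sketch of loading a config-R checkpoint and applying translation/rotation, following the pattern of `gen_images.py` in the official NVlabs stylegan3 repo (it assumes that repo's `dnnlib` and `legacy` modules are importable and a GPU is available; the pickle filename and the transform values below are only examples):

```python
import numpy as np
import torch
import dnnlib
import legacy

network_pkl = 'stylegan3-r-ffhq-1024x1024.pkl'  # example checkpoint name
device = torch.device('cuda')

with dnnlib.util.open_url(network_pkl) as f:
    G = legacy.load_network_pkl(f)['G_ema'].to(device)

def make_transform(translate, angle_deg):
    # 3x3 matrix describing the desired translation/rotation of the output.
    s, c = np.sin(angle_deg / 360.0 * np.pi * 2), np.cos(angle_deg / 360.0 * np.pi * 2)
    m = np.eye(3)
    m[0, 0], m[0, 1], m[0, 2] = c, s, translate[0]
    m[1, 0], m[1, 1], m[1, 2] = -s, c, translate[1]
    return m

# Config-T/R models expose the synthesis input transform; shift the head down
# a little and rotate it slightly (the values here are arbitrary examples).
if hasattr(G.synthesis, 'input'):
    m = np.linalg.inv(make_transform(translate=(0.0, 0.2), angle_deg=10.0))
    G.synthesis.input.transform.copy_(torch.from_numpy(m))

z = torch.randn([1, G.z_dim], device=device)
label = torch.zeros([1, G.c_dim], device=device)  # FFHQ models are unconditional
img = G(z, label, truncation_psi=0.7, noise_mode='const')  # NCHW tensor in [-1, 1]
```

Whether such samples are good enough to retrain HairMapper on is, as noted above, doubtful, since FFHQ itself contains almost no full-head side faces.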
Okay, thank you very much for your detailed reply! Your work is excellent. Many thanks, senior!
Hi there!
Could you let me know how to get the bald result working on a full image?
Thank you.