Closed · yafeim closed this issue 1 year ago
Hi there,
Yes we do generate them directly from the pytorch implementation using the cropped images. We simply pass the cropped images to SfSNet.
I see, thanks for the quick reply. Following the same setup (including RGB2GRAY and resizing to 256x256), I get a different albedo from yours. The lighting direction is also different: you provided [-0.37719698 0.8147933 0.44026619] for the image "0.jpg", but I got [0.02822213 0.14053412 0.98967355]. I use elements [1:4] of the 9-dim array.
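The preprocessing described above (RGB2GRAY plus a resize to 256x256) could be sketched roughly as follows. This is a dependency-free illustration, not the repo's actual pipeline, which presumably uses `cv2.cvtColor`/`cv2.resize`; the function name and nearest-neighbour resize are assumptions.

```python
import numpy as np

def preprocess(img, size=256):
    """Convert an RGB image to grayscale and resize to size x size.

    Uses the ITU-R BT.601 luma weights (the same ones OpenCV's
    RGB2GRAY conversion uses) and a nearest-neighbour resize.
    `img` is an H x W x 3 float array; this sketch is only meant
    to illustrate the preprocessing discussed in the thread.
    """
    img = np.asarray(img, dtype=np.float32)
    gray = img @ np.array([0.299, 0.587, 0.114], dtype=np.float32)
    h, w = gray.shape
    # Nearest-neighbour index maps for the resize.
    rows = np.arange(size) * h // size
    cols = np.arange(size) * w // size
    return gray[rows][:, cols]
```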
For the lighting direction, you should take the 2nd, 3rd, and 4th values and then normalize them. I think SfSNet returns a 9x3 SH lighting (nine coefficients per channel), so you may also need to average the SH values across the channels.
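The recipe above (average the 9x3 SH coefficients across channels, take the 2nd-4th values, normalize) could look like this. The function name and input shape are assumptions based on the discussion, not SfSNet's exact API:

```python
import numpy as np

def lighting_direction(sh):
    """Extract a unit lighting direction from SfSNet-style SH output.

    `sh` is the lighting prediction reshaped to (9, 3): nine
    spherical-harmonic coefficients for each RGB channel.
    """
    sh = np.asarray(sh, dtype=np.float64).reshape(9, 3)
    sh_mono = sh.mean(axis=1)     # average across RGB -> (9,)
    direction = sh_mono[1:4]      # 2nd-4th coefficients (the l=1 band)
    return direction / np.linalg.norm(direction)
```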
Yes, I did that: I take the average to get a 9x1 vector and use the 2nd to 4th elements. I don't know why the albedo looks different too. Your albedo doesn't seem to have the artifact in the upper-left corner that the images in the SfSNet paper also have.
It could be a difference in CUDA or PyTorch versions. Please check that your environment matches the one in the repo. If it does, then I would simply check whether the albedo aligns with the image (e.g. by overlaying them). If it does, it's probably fine and you can train with it anyway.
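The overlay check suggested above could be done with a simple alpha blend; if the face features in the blend line up, the albedo is aligned. This is a generic sanity-check sketch, not code from the repo:

```python
import numpy as np

def overlay(image, albedo, alpha=0.5):
    """Blend the input image and the predicted albedo to eyeball alignment.

    Both inputs are H x W x 3 float arrays in [0, 1] with matching
    shapes; view the returned blend to see whether the two align.
    """
    image = np.asarray(image, dtype=np.float32)
    albedo = np.asarray(albedo, dtype=np.float32)
    assert image.shape == albedo.shape, "resize one to match the other first"
    return alpha * image + (1.0 - alpha) * albedo
```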
Hi,
Do you generate lighting and albedo directly from the pytorch implementation of SfSNet using the cropped CelebA images (cropped using your provided code)? Did you enable the cropping function with face detection in SfSNet?