jannerm / intrinsics-network

Code for the paper "Self-Supervised Intrinsic Image Decomposition"
http://rin.csail.mit.edu/

Expected output of shader.py #10

Open akshay-iyer opened 5 years ago

akshay-iyer commented 5 years ago

Hi, I was able to run shader.py for 500 epochs. Looking at visualization.py, I believe the output should have three columns: the first containing the shape images, the second the network's shading predictions, and the third the ground-truth shading images.

Is that understanding right? In the attached image you can see the output after 500 epochs, but the second column does not look like a reasonable shading output. It also does not resemble the lighting sphere, since the light appears to come from the same direction in every image.

Kindly help me interpret the image and tell me where my understanding goes wrong. My PyTorch version is 1.1.0.

[image: shader output at epoch 499]

jannerm commented 5 years ago

Your understanding is correct. These look like the predictions of the shader before any training has occurred, so something is going wrong. Can you try running with PyTorch 0.1.12.post2?

Look for cu75/torch-0.1.12.post2-cp27-none-linux_x86_64.whl on the previous releases page.
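As a quick sanity check before rerunning shader.py, you can confirm which PyTorch build is actually active (the version-prefix check below is just an illustration, not something from the repo):

```python
import torch

# Print the active PyTorch version; the suggestion above is
# 0.1.12.post2, so warn if the installed build differs.
print(torch.__version__)
if not torch.__version__.startswith("0.1.12"):
    print("warning: this is not the 0.1.12 build suggested above")
```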

akshay-iyer commented 5 years ago

Yes, that gave me the correct output. Could you tell me why the PyTorch version would have caused this?

Also, I'm trying to run decomposer.py with the same 2GB per class dataset you've provided and the default lights array shader.npy

Even though the losses are decreasing, the output does not seem quite right. Could it be that I should not use the provided shader.npy with this dataset?

Please find below the output of the decomposer after 490 epochs. Somehow the predicted light spheres (8th column) look more reasonable than the target spheres (9th column). What are your thoughts on this?

[image: decomposer output grid]

colllin commented 5 years ago

I have a similar question about shader.npy. Were each of the provided ShapeNet datasets generated with these values? And do these match up with the sample index, i.e. does airplane_train/108_lights.png match up with np.load('shader.npy')[108]? And same for airplane_val/*, etc?
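One way to sanity-check that correspondence would be something like the sketch below. The array shape and the per-row layout are assumptions based on the question, not confirmed by the repo; with the real file you would call `np.load('shader.npy')` instead of the stand-in array.

```python
import numpy as np

# Stand-in for np.load('shader.npy'): assume one row of lighting
# parameters per sample index (200 samples, 4 parameters each).
lights = np.random.rand(200, 4)

sample_idx = 108
light_params = lights[sample_idx]
# If the indexing matches, these parameters should describe the
# lighting rendered in airplane_train/108_lights.png.
print(light_params)
assert light_params.shape == (4,)
```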

akshay-iyer commented 5 years ago

@colllin I was thinking along similar lines, but if my shader gives a good output with the same lights array, wouldn't that imply the indices actually match?

akshay-iyer commented 5 years ago

Hi, I'm seeing progress in my decomposer output. The first time I trained, image saving was commented out by default, so I generated the output images separately from the state_dict, which was saved as a .t7 file after training. I suspect the bad output came from saving the state_dict to a .t7 file instead of a .pth file; kindly correct me if that guess is wrong. The output is improving now, and I'll post the final result after training completes.
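For what it's worth, `torch.save` does not treat the file extension specially, so `.t7` vs `.pth` should not by itself corrupt a checkpoint. A minimal round-trip sketch, using a hypothetical tiny model rather than the actual decomposer network:

```python
import torch
import torch.nn as nn

# Hypothetical stand-in model; the real decomposer lives in the repo.
model = nn.Linear(4, 2)

# The extension is only a filename; torch.save writes the same
# serialized format whether the file is named .t7 or .pth.
torch.save(model.state_dict(), "decomposer_state.pth")

restored = nn.Linear(4, 2)
restored.load_state_dict(torch.load("decomposer_state.pth"))
assert torch.equal(model.weight, restored.weight)
```

If the weights round-trip like this, the problem is more likely in how the images were regenerated from the checkpoint than in the file extension itself.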

Thanks.