Closed Calsia closed 2 years ago
The lighting estimation performance is evaluated by using the predicted illumination map as the environment light in Blender.
I see this in readme.md, but I can't understand it.
By the way, I tried to test it with my own JPG and something went wrong: self.fc = nn.Linear(8208, 1024) doesn't match. Is there any limit on the input shape?
Hi, you should replace the environment map in Blender with the predicted illumination map.
As for the input shape: yes, the default size should be 256 * 192.
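The mismatch at self.fc = nn.Linear(8208, 1024) comes from feeding an image of a different resolution than the network was built for. A minimal loading sketch is below; the function name, normalization, and the width/height ordering of the resize are my own assumptions, so check the dataloader in RegressionNetwork for the exact convention:

```python
from PIL import Image
import numpy as np

def load_input(path, size=(256, 192)):
    """Resize an arbitrary JPG to the network's expected resolution.

    `size` is (width, height) in PIL convention; per the author the
    default input size is 256 * 192.
    """
    img = Image.open(path).convert("RGB").resize(size, Image.BILINEAR)
    arr = np.asarray(img, dtype=np.float32) / 255.0   # HWC, values in [0, 1]
    return arr.transpose(2, 0, 1)[None]               # NCHW, ready for PyTorch
```

With this, the tensor reaching the fully connected layer has the size the checkpoint expects, and nn.Linear(8208, 1024) should no longer complain.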
Thanks a lot. One more question, if I may: in RegressionNetwork/test.py you set ln to 42, but the model outputs distribution_pred with size 96. How can I fix that?
You may just change ln to 96.
Of course, I have tried that, but then dirs will not match, which is used to generate a .exr in your code.
I have updated the code; you should replace util.polyhedron(1) with util.sphere_points(ln): https://github.com/fnzhan/Illumination-Estimation/blob/b0b7832a842c7450539c0b338a4f194e06a3a1dc/RegressionNetwork/test.py#L59
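For context, util.sphere_points(ln) presumably returns ln light directions on the unit sphere, one per predicted distribution entry, which is why ln must match the output size of 96. The repo's exact implementation may differ; a common way to get ln roughly uniform unit directions is a Fibonacci lattice, sketched here as a hypothetical stand-in:

```python
import numpy as np

def sphere_points(ln):
    """Return ln approximately uniform unit directions on the sphere
    (Fibonacci lattice; a guess at what util.sphere_points computes)."""
    i = np.arange(ln)
    phi = np.pi * (3.0 - np.sqrt(5.0)) * i   # golden-angle azimuth increment
    z = 1.0 - 2.0 * (i + 0.5) / ln           # evenly spaced heights in [-1, 1]
    r = np.sqrt(1.0 - z * z)                 # radius of each horizontal ring
    return np.stack([r * np.cos(phi), r * np.sin(phi), z], axis=1)
```

With ln = 96, each row pairs with one entry of distribution_pred, so the directions and the predicted values line up when writing out the .exr.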
That's exactly what I did.
Finally, I want to use the .exr (which I think is the illumination map mentioned above) as the light to render my own scene, but how can I align it with my scene model?
By the way, these are my input and result. It doesn't look very good; do you have any suggestions to improve it?
The model is trained on the Laval Indoor dataset, which only contains 2000 panoramas, so it is hard to generalize to other scenes.
Thanks a lot for your reply.
Hi, your work is beautiful, and thanks for your code! I want to know what I should do to use the result in Blender. Thanks for any reply.
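The workflow discussed in this thread (load the predicted .exr as the world environment light, then rotate it to align with the scene) can be sketched with Blender's Python API. This is a sketch, not the repo's own tooling: the file path is a placeholder, the rotation angle is scene-specific, and it must be run inside Blender (e.g. blender --background --python script.py):

```python
import bpy
import math

# Use the predicted illumination map as the world environment light.
world = bpy.context.scene.world
world.use_nodes = True
nodes = world.node_tree.nodes
links = world.node_tree.links

env = nodes.new("ShaderNodeTexEnvironment")
env.image = bpy.data.images.load("/path/to/predicted.exr")  # placeholder path

# A Mapping node rotates the environment around Z so its orientation
# matches your scene model; the angle here is an arbitrary example.
coord = nodes.new("ShaderNodeTexCoord")
mapping = nodes.new("ShaderNodeMapping")
mapping.inputs["Rotation"].default_value[2] = math.radians(90)

links.new(coord.outputs["Generated"], mapping.inputs["Vector"])
links.new(mapping.outputs["Vector"], env.inputs["Vector"])
links.new(env.outputs["Color"], nodes["Background"].inputs["Color"])
```

Adjust the Z rotation until the bright regions of the environment map line up with where the light should come from in your scene.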