half-potato / nmf

Our method takes as input a collection of images (100 in our experiments) with known cameras, and outputs the volumetric density and normals, materials (BRDFs), and far-field illumination (environment map) of the scene.
https://half-potato.gitlab.io/posts/nmf/
MIT License

question about Appendix B #22

Closed Lingyun0109 closed 3 weeks ago

Lingyun0109 commented 1 month ago

Could you please tell me which part of the main text and code corresponds to the "neural shading network h" in Appendix B of the paper? I didn't quite understand that section.

half-potato commented 1 month ago

I believe this is the corresponding section: https://github.com/half-potato/nmf/blob/7c79de00d111b5fdc0a6d0af2a9c00ee63bacb0f/models/microfacet.py#L596
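For intuition, here is a minimal, hypothetical sketch of what such a neural shading function can look like: a small MLP that maps per-point appearance features plus the view direction to an RGB color. This is an illustration of the general idea only, not the architecture in `models/microfacet.py`; the layer sizes, inputs, and function names below are all assumptions.

```python
import numpy as np

def init_mlp(rng, sizes):
    """Random He-initialized weights for a tiny MLP (illustrative only)."""
    return [(rng.standard_normal((m, n)) * np.sqrt(2.0 / m), np.zeros(n))
            for m, n in zip(sizes[:-1], sizes[1:])]

def shade(params, features, viewdirs):
    """Hypothetical h(features, viewdir) -> rgb via a ReLU MLP + sigmoid."""
    x = np.concatenate([features, viewdirs], axis=-1)
    for i, (w, b) in enumerate(params):
        x = x @ w + b
        if i < len(params) - 1:
            x = np.maximum(x, 0.0)       # ReLU on hidden layers
    return 1.0 / (1.0 + np.exp(-x))      # sigmoid keeps colors in (0, 1)

rng = np.random.default_rng(0)
# Assumed dimensions: 16-dim feature vector plus a 3-dim view direction.
params = init_mlp(rng, [16 + 3, 32, 32, 3])
rgb = shade(params, rng.standard_normal((8, 16)), rng.standard_normal((8, 3)))
print(rgb.shape)  # one RGB color per sampled point: (8, 3)
```

The actual model conditions the shader on more inputs (e.g. material and lighting terms), so treat this only as a reading aid for the linked code.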

Lingyun0109 commented 3 weeks ago

> I believe this is the corresponding section:
>
> https://github.com/half-potato/nmf/blob/7c79de00d111b5fdc0a6d0af2a9c00ee63bacb0f/models/microfacet.py#L596

Thank you! I'll go take a closer look.