Hi,
just a follow-up here so it is documented in case someone is interested.
The neural shader is stored in the output folder by the reconstruction process. It is essentially just a torch module that can be loaded (https://github.com/fraunhoferhhi/neural-deferred-shading/blob/main/view.py#L72) and used to process arbitrary sets of positions, normals, and view directions.
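For illustration, a minimal sketch of loading and evaluating it (the checkpoint path is hypothetical and the forward signature is an assumption based on view.py; check your own output folder for the actual file):

```python
import torch

# Load the trained neural shader from the reconstruction output folder.
# The path below is hypothetical; see view.py#L72 for how the viewer loads it.
shader = torch.load("out/experiment/shaders/neural_shader.pt").cuda().eval()

# Arbitrary per-pixel inputs, flattened to (N, 3) tensors. These must live in
# the same normalized space the shader was trained in (see the note below on
# mesh normalization).
N = 512 * 512
positions = torch.rand(N, 3, device="cuda") * 2.0 - 1.0          # dummy points
normals = torch.nn.functional.normalize(torch.randn(N, 3, device="cuda"), dim=-1)
cam_pos = torch.tensor([0.0, 0.0, 2.5], device="cuda")
view_dirs = torch.nn.functional.normalize(cam_pos - positions, dim=-1)

with torch.no_grad():
    colors = shader(positions, normals, view_dirs)  # assumed signature; (N, 3) RGB
```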
In other words, as long as you can generate these position, normal, and view direction "maps", you can push them through the neural shader and synthesize color images. For example, for a given camera, you could either use Renderer.render to obtain these maps, or even Blender if you have set up a proper material with the Geometry node (we actually did the latter as well).
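In case it helps, here is a hedged bpy sketch of the kind of material we mean: the Geometry node's Position output routed through an emission shader, so that rendering the object yields a position map. Our actual Blender setup may differ in the details.

```python
import bpy

# Build a material that "renders out" world-space positions via emission.
# Swap geom.outputs["Position"] for geom.outputs["Normal"] to get normal maps.
mat = bpy.data.materials.new("PositionMap")
mat.use_nodes = True
nodes, links = mat.node_tree.nodes, mat.node_tree.links
nodes.clear()

geom = nodes.new("ShaderNodeNewGeometry")      # the Geometry node
emit = nodes.new("ShaderNodeEmission")
out = nodes.new("ShaderNodeOutputMaterial")

links.new(geom.outputs["Position"], emit.inputs["Color"])
links.new(emit.outputs["Emission"], out.inputs["Surface"])
```

Assign this material to the mesh and render to a float format such as EXR (positions can be negative and must not be clamped or tone-mapped) to obtain maps you can feed to the shader.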
Extracting a "shaded" mesh, however, is difficult because the surface color is view-dependent.
Another important bit is to ensure the meshes from the output directory are normalized before passing them through the neural shader. See: https://github.com/fraunhoferhhi/neural-deferred-shading/blob/main/view.py#L68-L69
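As a rough sketch of what that normalization amounts to (the exact transform in view.py comes from the space normalization stored with the experiment; the AABB-based version here is an assumption):

```python
import numpy as np

def normalize_vertices(vertices: np.ndarray) -> np.ndarray:
    """Map mesh vertices into a unit cube centered at the origin."""
    vmin, vmax = vertices.min(axis=0), vertices.max(axis=0)
    center = (vmin + vmax) / 2.0
    scale = (vmax - vmin).max() / 2.0  # uniform scale preserves the aspect ratio
    return (vertices - center) / scale
```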
Hi,
> as long as you can generate these position, normal, and view direction "maps"
I guess positions and normals can be generated directly from the mesh, and view directions could be arbitrary, so we don't need to evaluate any neural network here, right?
> or even Blender if you have set up a proper material with the Geometry node (we actually did the latter as well).
This sounds very interesting. I guess I could use something similar for my purposes, at least for now.
Thanks a lot
> I guess positions and normals can be generated directly from the mesh, and view directions could be arbitrary, so we don't need to evaluate any neural network here, right?
Yes, this is correct. If you look into the code, you will see that positions and normals are generated by nvdiffrast and only later shaded with the NN.
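For reference, a hedged sketch of that pipeline with nvdiffrast (tensor shapes follow nvdiffrast's torch API; the surrounding variable names are made up for the example):

```python
import torch
import nvdiffrast.torch as dr

glctx = dr.RasterizeCudaContext()

def render_gbuffer(vertices, normals, faces, mvp, cam_pos, H=512, W=512):
    """Rasterize a mesh and return per-pixel position/normal/view-dir maps.

    vertices, normals: (V, 3) float32 CUDA tensors; faces: (F, 3) int32;
    mvp: 4x4 model-view-projection matrix; cam_pos: (3,) camera position.
    """
    # Project vertices to clip space; nvdiffrast expects a batch dimension.
    v_hom = torch.cat([vertices, torch.ones_like(vertices[:, :1])], dim=-1)
    v_clip = (v_hom @ mvp.T)[None].contiguous()          # (1, V, 4)
    rast, _ = dr.rasterize(glctx, v_clip, faces, resolution=[H, W])

    # Interpolate per-vertex attributes over the covered pixels.
    pos_map, _ = dr.interpolate(vertices[None].contiguous(), rast, faces)
    nrm_map, _ = dr.interpolate(normals[None].contiguous(), rast, faces)
    view_map = torch.nn.functional.normalize(cam_pos - pos_map, dim=-1)

    mask = rast[..., 3:] > 0  # pixels actually covered by a triangle
    return pos_map, nrm_map, view_map, mask
```

The masked pixels of these maps are exactly the inputs the neural shader consumes.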
Hi,

Is it possible to somehow "isolate" the neural shader from the viewer app? For example, I can use any .obj viewer app to view the output mesh. However, I couldn't find any way to apply the shader and view the result through a third-party app (Blender, for example). This is important for me because I am using a headless cloud machine for reconstruction, so I don't have a display to use the viewer app.

I realize that the shader is essentially a neural network that needs to be evaluated, and it may not be possible to translate this into some vertex or pixel shader code (I'm not very knowledgeable about how classic shaders work) that can be used without a powerful GPU. But still, I wanted to ask if you can give any insights, or point me in a direction to look into. Thanks