sxyu / svox2

Plenoxels: Radiance Fields without Neural Networks
BSD 2-Clause "Simplified" License

sh coefs to physically based rendering coefs like roughness and metalness per vertex #52

Closed neurall closed 2 years ago

neurall commented 2 years ago

[image]

I am just brainstorming here. Please forgive crazy ideas ;D

Is there some way to extract typical physically based rendering (PBR) shader coefficients, such as roughness and metalness, from the spherical harmonics (SH) coefficients? If so, could we reduce mesh size and avoid the need for special renderers? There are many PBR renderers (Sketchfab for the web, UE, Unity, and many others), all supporting VR ;)

Let's find a good synthetic dataset covering all known PBR coefficient combinations.

[image] And convert it to Plenoxels with SH.

Then perhaps train a neural network to predict which known PBR roughness and metalness coefficients produced a given set of SH coefficients or pixel luminances.
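As a toy illustration of that idea, one could fit a regression from SH coefficients to the two PBR parameters on synthetic data. The sketch below is purely illustrative: the "forward model" generating SH from roughness/metalness is invented, and a linear least-squares fit stands in for a real neural network:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "dataset": 500 voxels, each with 9 degree-2 SH coefficients
# per color channel (27 values), labelled with known roughness/metalness.
n = 500
pbr = rng.uniform(0.0, 1.0, size=(n, 2))      # columns: roughness, metalness

# Invented forward model for illustration only: SH coefficients depend
# linearly on the PBR parameters plus noise (real renders would replace this).
W_true = rng.normal(size=(2, 27))
sh = pbr @ W_true + 0.01 * rng.normal(size=(n, 27))

# Fit the inverse mapping SH -> (roughness, metalness) by least squares.
A = np.hstack([sh, np.ones((n, 1))])          # add a bias column
coef, *_ = np.linalg.lstsq(A, pbr, rcond=None)

pred = A @ coef
rmse = np.sqrt(np.mean((pred - pbr) ** 2))
print(f"RMSE on training data: {rmse:.4f}")
```

On real data the mapping would be nonlinear (and entangled with lighting), which is where an actual NN would earn its keep; the linear baseline just shows the training-pair setup.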

Or maybe a NN, or even SH, is not needed, if the observed angle-dependent luminance volume has obvious patterns. For example, if we observe the same voxel (SH or not), or even a patch of neighboring voxels, from all angles, we could check:

1) Is there a specular highlight area that follows the camera / points toward it?

2) If so, how fast or slow does the luminance drop from its maximum across all angles, for this voxel and/or its neighbors? Perhaps rough and porous surfaces will have a shallow, wide luminance lobe (including its maximum), while shiny ones will have a narrow, tall one. Are the shapes of these luminance peaks enough to roughly estimate roughness and metalness per voxel?

[image]

3) If the number of distinct colors seen per voxel across angles is low, can we assume it is not metal/mirror/glass? And if the color/luma changes violently with the slightest angle change but still keeps a strong color tint, is it a tinted shiny surface or glass?
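Point 2 can be sketched numerically. Assuming a Phong-style cos^n lobe as a stand-in for the per-voxel luminance observed across viewing angles, the half-max width of the lobe gives a crude shininess/roughness proxy (all numbers here are synthetic):

```python
import numpy as np

def lobe_width(lumas, angles):
    """Width (radians) of the angular region where luminance stays above
    half of its maximum: wide lobe -> rough surface, narrow -> shiny."""
    peak = lumas.max()
    above = angles[lumas >= 0.5 * peak]
    return above.max() - above.min()

# Simulated per-voxel observations: luminance vs. viewing-angle offset
# from the mirror direction, using a Phong-like cos^n lobe (a stand-in
# for luminance sampled from many training views of the same voxel).
angles = np.linspace(-np.pi / 2, np.pi / 2, 721)
shiny = np.cos(angles) ** 200     # high exponent: narrow, tall highlight
rough = np.cos(angles) ** 4       # low exponent: shallow, wide highlight

w_shiny = lobe_width(shiny, angles)
w_rough = lobe_width(rough, angles)
print(f"half-max width: shiny={w_shiny:.3f} rad, rough={w_rough:.3f} rad")
```

Inverting such a width back to a specific roughness value would need a calibrated BRDF model, but the ordering (narrow = shiny, wide = rough) holds.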

Perhaps we could then save just the outer surface point cloud (surface voxels only, i.e. voxels bordering the trimmed outer voxels) with base color, roughness, and metalness.
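Extracting just that outer shell from an occupancy grid is straightforward; here is a minimal NumPy sketch, where a voxel counts as "surface" when any of its six axis-aligned neighbors is empty:

```python
import numpy as np

def surface_voxels(occ):
    """Return indices of occupied voxels that border at least one
    empty voxel along the 6 axis directions (i.e. the outer shell)."""
    padded = np.pad(occ, 1, constant_values=False)
    has_empty_nbr = np.zeros_like(occ, dtype=bool)
    for axis in range(3):
        for shift in (-1, 1):
            # Shifted occupancy: each cell's neighbor along +/-axis.
            nbr = np.roll(padded, shift, axis=axis)[1:-1, 1:-1, 1:-1]
            has_empty_nbr |= ~nbr
    return np.argwhere(occ & has_empty_nbr)

# Toy example: a solid 8^3 cube inside a 12^3 grid.
occ = np.zeros((12, 12, 12), dtype=bool)
occ[2:10, 2:10, 2:10] = True
shell = surface_voxels(occ)
print(len(shell))   # shell of an 8^3 cube: 8^3 - 6^3 = 296 voxels
```

Each surviving index could then carry the per-voxel base color plus the estimated roughness/metalness from the steps above.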

Perhaps this could, in theory, allow the use of existing highly performant VR engines while still retaining the view-dependent, realistically captured physical material behavior?

On the other hand, if UE5 added SH support to Nanite, then streaming 16 half floats per vertex might not be that bad. But how much of the reflected environment gets baked into the SH for shiny areas, and do we want that? Still, Nanite allows rendering meshes with virtually infinite detail, and even integrated graphics can nowadays handle millions of vertices at 60 fps.
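For reference, a per-vertex SH color lookup is cheap to evaluate: dot the stored coefficients with the real SH basis for the view direction. Below is the standard degree-2 real basis (9 functions per color channel, the band count Plenoxels uses); the random coefficients are stand-ins for baked per-vertex data:

```python
import numpy as np

def sh_basis_deg2(d):
    """Real spherical-harmonic basis up to degree 2 (9 functions) for a
    direction d.  Constants are the standard real-SH normalizations."""
    x, y, z = d / np.linalg.norm(d)
    return np.array([
        0.28209479177387814,                        # l=0
        -0.4886025119029199 * y,                    # l=1
        0.4886025119029199 * z,
        -0.4886025119029199 * x,
        1.0925484305920792 * x * y,                 # l=2
        -1.0925484305920792 * y * z,
        0.31539156525252005 * (3.0 * z * z - 1.0),
        -1.0925484305920792 * x * z,
        0.5462742152960396 * (x * x - y * y),
    ])

# View-dependent color: dot the per-vertex SH coefficients (random
# stand-ins here for baked values) with the basis for the view direction.
rng = np.random.default_rng(1)
coeffs = rng.normal(size=(3, 9))                    # RGB x 9 coefficients
view_dir = np.array([0.0, 0.0, 1.0])
rgb = coeffs @ sh_basis_deg2(view_dir)
print(rgb.shape)    # (3,)
```

A degree-3 basis would be 16 functions, which matches the "16 half floats per vertex" figure if one channel (or a luminance-only encoding) is streamed.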

mzillag commented 1 year ago


Hi. Maybe you can help me. How can I convert the final results of this model (contained in an .npz file) to a point cloud, and is it possible to view this .npz file?