Open bhouston opened 1 year ago
I can script this
BTW there is some discussion here on how to integrate these non-web browser hosted renderers into the test suite: https://github.com/google/model-viewer/issues/4483#issuecomment-1740139331
Maybe it would be worth adding Eevee as well?
> Maybe it would be worth adding Eevee as well?
Sounds like a good idea, but let's do Cycles first. :) We can do Eevee as a separate GitHub issue afterwards, and it will probably be easy to do.
Yup, it'll be easy to switch.
One big difference: Cycles does not support "backface culling" directly (it could be emulated by manually adding some extra shader nodes); the rest of the properties should work in both.
https://github.com/google/model-viewer/assets/119810373/949903db-4e94-4d1e-8583-0c3310047139
Here are instructions for adding a command-line renderer to the fidelity test suite: https://github.com/google/model-viewer/issues/4483#issuecomment-1741193246
Will refer to https://github.com/google/model-viewer/pull/4487 for how to do the PR, for now.
I'm done with loading the GLB, camera/target coordinates, and HDRI lighting; will test more configs.
More tests (the resolution & background-transparency mismatches in some are due to missing values in config.json).
Hi @jasondavies, what config file did you use to generate the golden outputs?
Hey! So you'll need to follow the instructions in the render-fidelity-tools README. The key bit is that you can use `npm run update-screenshots` to update the "golden" screenshots. It reads the scenarios from test/config.json. To add a new renderer, you'll want to add it to config.json. For "offline" renderers, such as Blender, you can add a "command" property to call a command-line script (such as a Python script), which will be called with the full JSON (including defaults) once for each scenario.

Note that you can simply run `npm run update-screenshots [optional myRendererName] [optional scenarioName]`, where the additional arguments are interpreted as whitelisted renderers and/or scenarios, to avoid re-generating the "golden" screenshots for other renderers.
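For what it's worth, a rough sketch of what an offline-renderer entry in test/config.json might look like — the "blender-cycles" name and script path below are placeholders, and the exact schema should be checked against the render-fidelity-tools README rather than taken from this sketch:

```json
{
  "renderers": [
    {
      "name": "blender-cycles",
      "description": "Blender Cycles (hypothetical entry)",
      "command": "python3 scripts/blender_cycles_render.py"
    }
  ]
}
```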
BTW, the external renderer config is explained in more detail in #4483 (comment).
@vis-prime great work! BTW, as with the V-Ray test, we may want to render in linear space, save as EXR, and do the tone mapping afterwards in Python code. The reason is that the Three.js tone mapping is very specific, and any changes or improvements Blender makes may skew the comparisons significantly. Definitely do not use AgX because, while it is awesome, it will get in the way of matching the material correctly. Once we know the material is correct, we can do all the renders we want in any color space we want, but we need to ensure the materials match first.
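For the "tone mapping in Python" step, here's a minimal sketch intended to mirror three.js's ACESFilmicToneMapping (the Stephen Hill RRT+ODT fit), applied to linear RGB loaded from the EXR. The constants are transcribed from my reading of the three.js shader chunk, so verify them against the three.js source before trusting the comparison:

```python
import numpy as np

# Row-major transcriptions of the three.js ACESInputMat / ACESOutputMat
# (the GLSL source stores them column-major).
ACES_INPUT = np.array([
    [0.59719, 0.35458, 0.04823],
    [0.07600, 0.90834, 0.01566],
    [0.02840, 0.13383, 0.83777],
])

ACES_OUTPUT = np.array([
    [ 1.60475, -0.53108, -0.07367],
    [-0.10208,  1.10813, -0.00605],
    [-0.00327, -0.07276,  1.07602],
])

def rrt_and_odt_fit(v):
    # Stephen Hill's combined RRT + ODT curve fit.
    a = v * (v + 0.0245786) - 0.000090537
    b = v * (0.983729 * v + 0.4329510) + 0.238081
    return a / b

def aces_filmic(rgb, exposure=1.0):
    """Tone map a (..., 3) linear RGB array to [0, 1]."""
    color = rgb * exposure / 0.6       # three.js pre-scales by 1/0.6
    color = color @ ACES_INPUT.T       # to ACES "working" space
    color = rrt_and_odt_fit(color)
    color = color @ ACES_OUTPUT.T      # back to output space
    return np.clip(color, 0.0, 1.0)
```

Black should stay black, very bright values should saturate near 1.0, and mid-gray should land somewhere around 0.2 after the curve.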
Ahhh... got it! I will work on the PR itself, following the correct writing pattern, to test the `npm run update-screenshots` stuff, and will save a standard-colorspace 32-bit EXR (ZIP codec) and then do the ACES pixel math.
Tested all 87 configs to confirm the camera matches up correctly... looks good. (I missed an FoV-to-radians conversion earlier, which messed up a few configs; now all are correct.) (Cycles image | model-viewer image)
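For anyone hitting the same camera issues, a minimal sketch of the two conversions involved, assuming the scenario config supplies a vertical FoV in degrees and a three.js-style Y-up camera orbit (the function names are illustrative, and the orbit convention should be double-checked against the scenario configs):

```python
import math

def fov_deg_to_rad(fov_deg: float) -> float:
    """The conversion that was missed earlier: scenario FoV is given in
    degrees, while Blender camera angle properties expect radians."""
    return math.radians(fov_deg)

def orbit_to_position(theta, phi, radius, target=(0.0, 0.0, 0.0)):
    """Convert an orbit (azimuth theta, polar phi, both radians, plus
    radius) around a target into a Y-up world position, following the
    three.js spherical convention (an assumption to verify)."""
    x = target[0] + radius * math.sin(phi) * math.sin(theta)
    y = target[1] + radius * math.cos(phi)
    z = target[2] + radius * math.sin(phi) * math.cos(theta)
    return (x, y, z)
```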
Results from converting the temporary EXR file to LDR ACES. Using the three-gpu-pathtracer goldens as reference, the colors look correct.
https://github.com/google/model-viewer/assets/119810373/1084954f-5a29-46ab-83b2-ebc9125f8c23
https://github.com/google/model-viewer/assets/119810373/9b57a191-490f-4af3-ac8e-8e06e9988bc7
https://github.com/google/model-viewer/assets/119810373/6b4516d2-7d6d-4f3e-b7bb-5036895d88dc
Beautiful! When the render results do not agree under ACES, such as the attenuation dragon, the issue is the material properties being interpreted differently in Blender compared to three-gpu-pathtracer. This is exactly what we wanted to identify! It is amazing!
This is pushing the fidelity tests so far ahead! Thank you!
Excellent work! I look forward to seeing a PR. And @bhouston, I see why you're pushing for a new repo for this - if we're going to grow it in a serious way, that really does make sense.
Description
Add Blender Cycles, using its latest Principled BRDF 2.0, to the Render Fidelity Test page. This will allow us to compare glTF PBR rendering in Blender Cycles with all of the other renderers currently featured on the fidelity page.
My recommendation for implementing this is to use the Blender Python interface, which allows you to drive Blender from the command line via Python. You would set up the various backgrounds using the assets provided by model-viewer, use the Blender Khronos glTF importer to import the various glTF assets, and then render the images via Python. Thus the test suite would be driven by one or more Python scripts.
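This Python-driven approach could be wired up roughly as follows: a small wrapper builds the headless Blender command line for each scenario, and the fidelity harness invokes it once per scenario. `--background`, `--python`, and the `--` argument separator are standard Blender CLI flags; the render-script name is a placeholder, and exactly how the harness hands over the scenario JSON is an assumption to verify against the render-fidelity-tools README:

```python
import subprocess

def build_blender_argv(scenario_json: str,
                       blender: str = "blender",
                       script: str = "render_scenario.py"):
    """Build the argv for rendering one scenario with headless Blender.

    Everything after the '--' separator is forwarded to the script's
    sys.argv inside Blender. 'render_scenario.py' is a placeholder
    name for the bpy script that imports the glTF and renders it.
    """
    return [
        blender,
        "--background",        # run without a UI
        "--python", script,    # render script executed inside Blender
        "--", scenario_json,   # scenario JSON forwarded verbatim
    ]

def render_scenario(scenario_json: str) -> None:
    """Invoke Blender; raises CalledProcessError if the render fails."""
    subprocess.run(build_blender_argv(scenario_json), check=True)
```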
This can be used to guide the development of the Khronos glTF plugin, as well as Blender itself, if there are internal limitations in Blender's Principled 2.0 BSDF implementation.
I am willing to fund this on behalf of Threekit.
Live Demo
No live demo currently.
Version
No relevant version.
Browser Affected
Not relevant.
OS
Not relevant.
AR
Not relevant.