xelatihy / yocto-gl

Yocto/GL: Tiny C++ Libraries for Data-Driven Physically-based Graphics
https://xelatihy.github.io/yocto-gl

Version 4 - Design Documents #1151

Closed xelatihy closed 3 years ago

xelatihy commented 3 years ago

This issue tracks design considerations for the v4 simplification. Instead of keeping separate design documents for each part of the library, we keep everything in one place for simplicity.

xelatihy commented 3 years ago

Color Images

There have been two problems we tackled in v3 and before. The main problem is that different image formats make sense for different applications, but we do not want to support that, as discussed in other issues. Instead we want to cut down code size while still supporting everything we do now. Here are a few things that we care about:

  1. keep float and byte images
  2. perform all computation in float, except for image sampling, which can be done in bytes
  3. we seldom save data in bytes; when we do, it is for speed
  4. we compute in linear and non-linear spaces depending on the application
  5. it makes sense to keep color computation in vec3f, but always store images as vec4f or vec4b

So this means we have 4 variants

  1. float / linear (HDR)
  2. float / non-linear (computed values)
  3. byte / linear (normal maps)
  4. byte / nonlinear (albedo maps)

A major issue seems to be that we started the project thinking byte = non-linear and float = linear, and that was a big mistake. That led to weird things everywhere in the pipeline, trying to perform conversions in places where none was desired. We have to try to fix it now. A major issue that needs to be addressed is that loading should never change color space, and saving should always respect it.

Given all these constraints, and the discussion below, the proposed implementation is

A discussion of the design choices follows.

Here are some operations we perform and how they relate to image formats.

  1. tone mapping: HDR to LDR, but more importantly linear to non-linear clamped (a sketch follows this list)
  2. color grading: HDR to LDR or LDR to LDR, i.e. linear/non-linear to non-linear clamped
  3. loading: either float-only or float/byte based on filename
    • should loading ignore the color space or mark images with it?
  4. texturing: all the above, but generally all converted to linear depending on use
  5. resizing: same color space and float/byte type
  6. diffing: same color space and float/byte type
  7. viewing: all cases, although we could skip the float/byte conversion in the view API
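
As a sketch of item 1, this is the sRGB curve plus clamping that turns linear values into non-linear clamped ones; the function name is illustrative, not a final API.

```cpp
#include <cmath>

// convert a linear value to the non-linear sRGB curve and clamp to [0,1];
// this is the core of tone mapping once exposure has been applied
inline float linear_to_srgb(float linear) {
  auto clamped = linear < 0 ? 0.0f : (linear > 1 ? 1.0f : linear);
  return clamped <= 0.0031308f
             ? 12.92f * clamped
             : 1.055f * std::pow(clamped, 1 / 2.4f) - 0.055f;
}
```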

The question now is how to handle all these cases in code.

  1. One option is to have a templated type as we have now
    • the library is easy to write, but apps are painful to write
    • many duplications on types
  2. The other option is to have a single type with both byte/float storage and a linear flag (sketched after this list)
    • loading: always as is and keep color space; can force to float
    • saving: float converted appropriately based on color space and file formats, bytes as they are
    • resizing: as is
    • tone map: linear to non-linear; input only float; output byte or float, but default to float
    • make proc: can choose to output floats, but mark linear/non-linear
    • colorgrade: input as float, output as float; linear/non-linear to non-linear
    • eval: float/byte input, float output
    • byte images are used little, but may be helpful at times: textures (so eval) and output for tonemap/grade, only for speed
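
As a minimal sketch of option 2, assuming yocto-style vec4f/vec4b types; the field and helper names here are illustrative, not the final API.

```cpp
#include <vector>

// a single image type holding either float or byte pixels, plus a flag
// recording whether the data is in a linear color space
struct image_data {
  int                width   = 0;
  int                height  = 0;
  bool               linear  = false;  // color space marker, set at load time
  std::vector<vec4f> pixelsf = {};     // used when the image is float
  std::vector<vec4b> pixelsb = {};     // used when the image is byte
};

// query helpers so callers do not inspect the vectors directly
inline bool is_float(const image_data& image) { return !image.pixelsf.empty(); }
inline bool is_byte(const image_data& image) { return !image.pixelsb.empty(); }
```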
xelatihy commented 3 years ago

Generic Shapes

In v3, Yocto did not have an explicit concept of shape within the library. Only low-level functions were used throughout. That made it possible to build custom apps very easily, but at the price of usability and simplicity. In this version, we are planning to introduce two common data types for shapes that are either indexed or face-varying. The discussion here will mostly focus on the indexed shapes, since the face-varying ones are nearly identical.

The main features we want to support are:

  1. generic shape data that is, for now, not extensible to arbitrary data
  2. methods for building shapes with attributes
  3. query methods for shape types and properties (no more !triangles.empty() idiom)
  4. vertex data getter and setter, to be used optionally
  5. vertex data interpolation for the built-in properties (these were in the path tracer before)
  6. loading and saving shapes
  7. high level helpers: smoothing normals and targets, subdivision, etc

It seems that very little design still needs doing for these, besides naming conventions that are not yet properly settled. So we can just make a list of things to implement.

For subdivision, things are very similar, so we just add things here.
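
As a rough illustration of the indexed shape data and query methods listed above, assuming yocto-style vector types; the names are illustrative, not a final API.

```cpp
#include <vector>

// generic indexed shape: one element type is non-empty at a time,
// with per-vertex attributes stored in parallel arrays
struct shape_data {
  // element data
  std::vector<int>   points    = {};
  std::vector<vec2i> lines     = {};
  std::vector<vec3i> triangles = {};
  std::vector<vec4i> quads     = {};
  // vertex data
  std::vector<vec3f> positions = {};
  std::vector<vec3f> normals   = {};
  std::vector<vec2f> texcoords = {};
  std::vector<vec4f> colors    = {};
  std::vector<float> radius    = {};
};

// query methods that replace the !triangles.empty() idiom
inline bool is_triangles(const shape_data& shape) {
  return !shape.triangles.empty();
}
inline bool is_quads(const shape_data& shape) { return !shape.quads.empty(); }
```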

xelatihy commented 3 years ago

Scene creation

We have gone back and forth on scene creation, mostly to handle the complexity of too many parameters, pointer usage, and so on. Right now, we are settling on a simple design that is natural to a data-oriented C++ stack. This design pursues the following goals:

  1. scene creation should be natural to avoid having to write tons of getters and setters
  2. data is accessed directly, so we should do so when we can
  3. in one alternative, scene creation is just manipulation of the various arrays in the scene
  4. add_XXX functions are helpful to build a scene programmatically, since they are potentially shorter to use and allow for presets; they also hide the complexity of the materials, at least for now
  5. names are optional and do not have to be set; add methods will always take them for simplicity
  6. add methods should for now take only simple params, even if they eventually will take the full description
  7. add methods should be exactly the same in the glscene

Given this design, the translation to todos is easy; a rough sketch of the add_XXX style is below.
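
As a sketch of points 3 and 4 above; the scene layout and helper signatures are assumptions for illustration, not the final API.

```cpp
#include <string>
#include <vector>

// element types are sketched in the other comments; empty stubs here for brevity
struct camera_data {};
struct shape_data {};
struct material_data {};
struct instance_data {};

// the scene is just arrays of objects, manipulated directly or via add_XXX helpers
struct scene_data {
  std::vector<camera_data>   cameras   = {};
  std::vector<shape_data>    shapes    = {};
  std::vector<material_data> materials = {};
  std::vector<instance_data> instances = {};
};

// add helpers return the index of the new element; the name is optional and
// could be stored in a separate metadata structure (see "Names in IO" below)
inline int add_shape(
    scene_data& scene, const std::string& name, const shape_data& shape = {}) {
  scene.shapes.push_back(shape);
  return (int)scene.shapes.size() - 1;
}
```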

xelatihy commented 3 years ago

Names in IO

Names in scenes are quite helpful, but at the same time not strictly needed. It seems that the most helpful way of doing this would be to have a scene_metadata structure where we can keep all this data. We should investigate this further or quickly decide what to do.
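
A minimal sketch of what such a structure could look like; the field names are guesses, not a decision.

```cpp
#include <string>
#include <vector>

// optional per-scene metadata kept outside the scene arrays; each vector is
// parallel to the corresponding array in the scene
struct scene_metadata {
  std::vector<std::string> camera_names   = {};
  std::vector<std::string> shape_names    = {};
  std::vector<std::string> material_names = {};
  std::vector<std::string> instance_names = {};
};
```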

xelatihy commented 3 years ago

Library organization

After a few brainstorming discussions, it was decided to refactor the library "by type". We list here the main actions to be taken and also what remains to be decided.

The overall theme is to split the library by the type of object handled, to make browsing the code easier and establish a better foundation.

We initially tried to give each piece of functionality its own library, but it seems that would overcomplicate a few things. Instead, we could handle this initially by moving the main structures into the scene library. Later on we can split this, but we should remove the array-like functionality.

xelatihy commented 3 years ago

Simpler Material

The material model of v3 and before is derived from the Disney model. That has worked well so far, but has two main drawbacks: complexity in creation and complexity in rendering. Over the years this has been hard for students to use, which should not have happened. We are now proposing to follow pbrt and use a tagged material model. We want the following features:

  1. materials should have intuitive types
  2. use types instead of flags to change major behavior
  3. if possible, match pbrt materials in the definitions
  4. convert materials to/from other formats: maintain compatibility with old format and focus on glTF for forward compatibility
  5. reduce the use of textures significantly and if possible to two textures
  6. consider whether to set colors as srgb or linear --- it is possible that srgb is simpler to handle overall
  7. in the renderer, use switches to handle brdf parameters

Given these goals, the first issue to solve is to design the material types and parameters. We would base them on the current versions and then normalize them afterwards; a rough sketch of the tagged model is below.
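
A minimal sketch of what a tagged material model with switches could look like, assuming yocto-style vec3f; the type names are borrowed from the naming discussion later in this thread and all function names are illustrative, not final.

```cpp
// material behavior selected by a type tag instead of flags
enum struct material_type { matte, glossy, metal, transparent, translucent };

struct material_data {
  material_type type      = material_type::matte;
  vec3f         emission  = {0, 0, 0};
  vec3f         color     = {0, 0, 0};
  float         roughness = 0;
  float         ior       = 1.5f;
  // textures, scattering, etc. omitted in this sketch
};

// per-type evaluation helpers, assumed to be implemented elsewhere
vec3f eval_matte(const material_data&, const vec3f&, const vec3f&, const vec3f&);
vec3f eval_glossy(const material_data&, const vec3f&, const vec3f&, const vec3f&);
vec3f eval_metal(const material_data&, const vec3f&, const vec3f&, const vec3f&);

// in the renderer, a single switch dispatches on the material type
inline vec3f eval_brdfcos(const material_data& material, const vec3f& normal,
    const vec3f& outgoing, const vec3f& incoming) {
  switch (material.type) {
    case material_type::matte:
      return eval_matte(material, normal, outgoing, incoming);
    case material_type::glossy:
      return eval_glossy(material, normal, outgoing, incoming);
    case material_type::metal:
      return eval_metal(material, normal, outgoing, incoming);
    default: return {0, 0, 0};
  }
}
```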

From this discussion, we now need the following parameters:

Additional property might include:

Given these goals, the following changes are needed:

Material naming is another complex issue. All renderers have different kinds of nomenclature, so there is no common ground to use. There are three main variations for this.

Another issue is the parametrization of subsurface scattering and emission. What to do here is less clear than in all the other cases. I will open an issue for discussion with others elsewhere and then update this design document with the results.

xelatihy commented 3 years ago

Rationalization of IO

Right now Yocto is mostly independent, since many of its IO needs are served within the Yocto libraries. This has served us very well in the past, but it is beginning to take a toll on our ability to move forward. All these IO needs require significant effort and a lot of maintenance, and that is unlikely to scale well in the future.

In the end, our goals are

These goals are well served by this refactoring.

xelatihy commented 3 years ago

OpenGL Objects

This is a placeholder for discussions about OpenGL in the library.

xelatihy commented 3 years ago

Large tests

With all the changes to Yocto/GL so far, we should probably start to rerun all tests. This will require selecting tests, updating the test scripts, and providing rendering configurations for all of them.

To run the tests, we first need to update some of the infrastructure in the various scripts we are using.

xelatihy commented 3 years ago

Apps

The current apps structure works better than in the past. We should converge on making this better still. Some of this is recovering lost functionality or fixing bugs with respect to the old interfaces. The rest is strengthening the infrastructure to integrate more app examples.

Let us first list the known bug fixes.

xelatihy commented 3 years ago

Path tracing

This section lists improvements to the path tracing API. Overall, the current feature set works well for yocto. Improvements may be needed in the API, more uniform code, and small additions to support denoising.

Here is a list of changes to evaluate

xelatihy commented 3 years ago

Material Naming

This issue tracks a discussion of materials naming and parameters.

There are four main concerns:

Material Types

The current implementation supports the following materials:

Material naming

Material naming is another complex issue. All renderers have different kinds of nomenclature, so there is no common ground to use. There are three main variations for this.

I prefer the last naming convention.

Material emission

Right now emission is kept as a separate property, just like in all renderers. This may or may not be a good idea.

Here I am not sure what to do.

Material parameters

The following parameters are mostly reasonable, besides possibly coming up with better names.

The more controversial parameters are ior and all volumetric settings.

For metals, we can use either Schlick's or the conductors' Fresnel term. I prefer not to introduce all the complexity of complex Fresnel, frankly, since we have no data and since it cannot be textured. So a simple reflectivity setting, as a color, works well enough.

For dielectrics it is harder. Right now we use ior for two things: coating/plastic and transmission/refraction. For coating, a reflectivity setting works fine and in fact is required in glTF. Before, we used to have specular for that, but it was used in awkward ways, so it was changed to a single ior. For transmission/refraction, ior is used in the dielectric Fresnel and the refraction direction, but it has two issues: (1) we do not do spectral rendering, so we need a single-value ior, and (2) nobody changes it since who knows what it does.

We can leave this as a single ior, basically a read-only property. For now, I use the dielectric Fresnel, but it is impossible to teach and explain (in fact all renderers have the same cut&pasted code). I would not have a 3-channel ior since it is completely strange (see pbrt-v4 scenes). The other option is to use specular set to {0.04, 0.04, 0.04} and be done with it (gt for when needed). In the future, that can be used for other effects, like colored coatings.

Not sure what to do here.
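
For reference, the {0.04, 0.04, 0.04} value and a single ior are related by the normal-incidence Fresnel reflectivity; a minimal sketch of the conversion, with illustrative function names:

```cpp
#include <cmath>

// reflectivity at normal incidence for a dielectric ior (Schlick's F0);
// eta = 1.5 gives F0 = 0.04
inline float eta_to_reflectivity(float eta) {
  return ((eta - 1) * (eta - 1)) / ((eta + 1) * (eta + 1));
}

// inverse mapping: recover the ior from a reflectivity value
inline float reflectivity_to_eta(float reflectivity) {
  return (1 + std::sqrt(reflectivity)) / (1 - std::sqrt(reflectivity));
}
```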

Volumetric properties

The other hard issue is volumetric properties. These are nasty since they are not intuitive no matter what one does. For this we have three materials now:

In the end, we need to set absorption, albedo and anisotropy (phase g), but it is helpful to consider the three cases separately.

This already suggests different parametrizations for at least glass and subsurface. Or maybe it makes sense to keep the same one, which is what is done in production (glass: color -> density, subsurface: mfp -> density); a sketch of that mapping is below.
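
As an illustration of the color -> density mapping mentioned above (a common production convention, not a decision here); vec3f and the function name are assumptions.

```cpp
#include <cmath>

// convert a target transmission color at a given distance into a volume
// density (extinction coefficient): sigma_t = -ln(color) / distance
inline vec3f color_to_density(const vec3f& color, float distance) {
  // clamp away from zero so the logarithm stays finite
  auto clamped = vec3f{color.x < 0.0001f ? 0.0001f : color.x,
      color.y < 0.0001f ? 0.0001f : color.y,
      color.z < 0.0001f ? 0.0001f : color.z};
  return {-std::log(clamped.x) / distance, -std::log(clamped.y) / distance,
      -std::log(clamped.z) / distance};
}
```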

By the way, the renderer should also be changed a bit based on these materials. For example, if there is no scattering, a simpler early exit is useful, but that is for another day.

Here I am really not sure what to do. Nor do I know what to do with volumetric textures, since I have not tried any data yet.

The bottom line is that we could follow two ways:

I am not sure what to do.

Feedback on Material Naming

Naming materials based on their appearance is probably a better idea for our goals.

In the code there's also an unimplemented "leaf". I'm not sure about what it does, but a material with a reflective lobe and an in-place single-scattering lobe can be named translucent_thin.

To sum up: matte, glossy, metal, transparent, translucent, transparent_thin, translucent_thin(?), volume, gltf_metallic.

We can keep emission as it is now, but we probably need to add emissive to the list of material types.

Material parameters and their naming are ok.

xelatihy commented 3 years ago

Prepare for Release

This is a list of final things we need to do to get the release ready.

Here are the things that can be postponed.

Postponed

xelatihy commented 3 years ago

The release is ready.