There have been two problems we tackled in v3 and before. The main problem is that different image formats make sense for different applications, but we do not want to support all of them, as discussed in other issues. Instead we want to cut down code size while still supporting everything we do now. Here are a few things that we care about:
So this means we have 4 variants
A major issue seems to be that we started the project thinking byte=non-linear and float=linear, and that was a big mistake. That led to weird things everywhere in the pipeline, trying to perform conversions in places where none was desired. We have to try to fix it now. A major issue that needs to be addressed is that loading should never change color space, and saving should always respect it.
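One way to avoid tying color space to pixel type is to carry the color space as an explicit flag and only convert when the caller asks. This is a hypothetical sketch, not the actual v4 API; the names `image_data` and `srgb_to_linear` are illustrative.

```cpp
#include <cmath>
#include <vector>

// Hypothetical sketch: the color space is an explicit flag, not inferred from
// the pixel type, so loading never changes color space implicitly.
struct image_data {
  int                width  = 0;
  int                height = 0;
  bool               linear = false;  // explicit color space tag
  std::vector<float> pixels;          // rgba, 4 floats per pixel
};

// conversions happen only when explicitly requested by the caller
inline float srgb_to_linear(float srgb) {
  return (srgb <= 0.04045f) ? srgb / 12.92f
                            : std::pow((srgb + 0.055f) / 1.055f, 2.4f);
}
```

With this layout, a byte image loaded from a PNG and a float image loaded from an EXR can both be stored without any hidden conversion, and savers can respect the flag.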
Given all these constraints, and the discussion below, the proposed implementation is
A discussion of the design choices follows.
Here are some operations we perform and how that relates to image formats.
The question now is how to handle all these cases in code.
In v3, Yocto did not have an explicit concept of shape within the library. Only low-level functions were used throughout. That made it possible to build custom apps very easily, but at the price of usability and simplicity. In this version, we are planning to introduce two common data types for shapes that are either indexed or face-varying. The discussion here will mostly focus on indexed shapes, since the face-varying ones are nearly identical.
The main features we want to support are:
(e.g. testing for the element type with the `!triangles.empty()` idiom). It seems that very little design still needs doing, besides naming conventions that are not yet properly settled. So we can just make a list of things to implement.
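A minimal sketch of what such an indexed shape could look like, assuming the conventions discussed above (one element type per shape, tested with the `!triangles.empty()` idiom); the names here are placeholders, not the final design:

```cpp
#include <array>
#include <vector>

// Hypothetical indexed shape: all vertex properties share the same indices.
struct shape_data {
  // elements: only one of these is non-empty for a given shape
  std::vector<int>                points;
  std::vector<std::array<int, 2>> lines;
  std::vector<std::array<int, 3>> triangles;
  std::vector<std::array<int, 4>> quads;
  // vertex data, all addressed by the same element indices
  std::vector<std::array<float, 3>> positions;
  std::vector<std::array<float, 3>> normals;
  std::vector<std::array<float, 2>> texcoords;
};

// the element type is determined by which array is non-empty
inline bool is_triangle_mesh(const shape_data& shape) {
  return !shape.triangles.empty();
}
```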
For subdivision, things are very similar, so we just add things here.
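For the face-varying case, a sketch under the same assumptions could keep one index array per vertex property, plus the subdivision controls; again, all names here are illustrative only:

```cpp
#include <array>
#include <vector>

// Hypothetical face-varying subdivision surface: each vertex property has its
// own quad topology, unlike the shared indices of the indexed shape.
struct subdiv_data {
  std::vector<std::array<int, 4>> quadspos;
  std::vector<std::array<int, 4>> quadsnorm;
  std::vector<std::array<int, 4>> quadstexcoord;
  std::vector<std::array<float, 3>> positions;
  std::vector<std::array<float, 3>> normals;
  std::vector<std::array<float, 2>> texcoords;
  // subdivision controls
  int  subdivisions = 0;
  bool catmullclark = true;
};
```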
We have gone back and forth on scene creation, mostly to handle the complexity of too many parameters, pointer usage, and so on. Right now, we are settling on a simple design that is natural to a data-oriented C++ stack. This design preserves the following goals:
Given this design, the translation to todos is easy
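As an illustration of the data-oriented direction described above, objects could live in flat arrays inside the scene and be referenced by integer handles instead of pointers. This is a hypothetical sketch; the struct and function names are not the actual Yocto/GL API.

```cpp
#include <vector>

// placeholder object types for the sketch
struct shape_data { /* vertex and element arrays */ };
struct instance_data {
  int shape    = -1;  // handle into scene.shapes
  int material = -1;  // handle into a materials array
};

// the scene owns everything in flat arrays; no pointers, no ownership issues
struct scene_data {
  std::vector<shape_data>    shapes;
  std::vector<instance_data> instances;
};

// creation functions append to the arrays and return the new handle
inline int add_shape(scene_data& scene) {
  scene.shapes.push_back({});
  return (int)scene.shapes.size() - 1;
}
```

Handles make serialization and copying trivial, at the cost of an explicit lookup when traversing the scene.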
Names in scenes are quite helpful, but at the same time not strictly needed. It seems that the most helpful way of doing this would be to have a `scene_metadata` structure where we can keep all this data. We should investigate this further or decide firmly what to do.
`scene_metadata` could live either internal to the scene, derived from it, or packaged into it.

After a few brainstorming discussions, it was decided to refactor the library "by type". We list here the main actions to be taken and also what remains to be decided.
The overall theme is to split the library by the type of objects handled, to make browsing the code easier and establish a better foundation.
We tried initially to have all functionality in its own libraries. But it seems it would overcomplicate a few things. Instead we could handle this initially by moving the main structures into `scene`. Later on we can split this, but we should remove the array-like functionality.
The material model of v3 and before is derived from the Disney model. That has worked well so far, but has two main drawbacks: complexity in creation and complexity in rendering. Over the years this has been hard to use for students, which should not have happened. We are now proposing to follow pbrt and use a tagged material model. We want the following features:
Given these goals, the first issue to solve is to design the material types and parameters. We would base it on the current version and then normalize it afterwards.
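A tagged material in the spirit of pbrt could look like the following. This is a hypothetical sketch; the type names and parameters are placeholders, not the final design discussed later in this document.

```cpp
#include <array>

// the tag selects the lobe setup; evaluation dispatches on it
enum struct material_type { matte, glossy, metal, transparent, volume };

// one flat struct holds all parameters; unused ones are ignored per type
struct material_data {
  material_type        type      = material_type::matte;
  std::array<float, 3> color     = {0, 0, 0};
  float                roughness = 0;
  float                ior       = 1.5f;
};

// example of dispatching on the tag instead of using virtual classes
inline bool has_refraction(const material_data& material) {
  return material.type == material_type::transparent;
}
```

The appeal for teaching is that each tag maps to one well-understood BSDF, instead of one uber-material whose parameters interact in subtle ways.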
From this discussion, we now need the following parameters:
Additional properties might include:
Given these goals, the following changes are needed:
Material naming is another complex issue. All renderers have different kinds of nomenclature, so there is no common ground to use. There are three main variations for this.
Another issue is the parametrization of subsurface scattering and emission. What to do here is less clear than in all other cases. I will open an issue for discussion with others elsewhere and then update this design document with the results.
Right now Yocto is mostly self-contained, since many of its IO needs are served within Yocto libraries. This has served us very well in the past. But it is beginning to take a toll on our ability to move forward. All these IO needs require significant effort and a lot of maintenance, and this is unlikely to scale well in the future.
In the end, our goals are
These goals are well served with this refactoring
- `fast_obj` is a great alternative, but it only allows for reading files
- `fast_obj` writing
- `get_shape()` function

This is a placeholder for discussions about OpenGL in the library.
With all changes to Yocto/GL so far, we should probably start to rerun all tests. This will require selecting tests, updating the tests scripts and providing rendering configurations for all.
To run tests we first need to update some infrastructure of the various scripts we are using.
The current apps structure works better than in the past. We should converge on making this better still. Some of it is about lost functionality or bug fixes w.r.t. the old interfaces. Other parts strengthen the integration of more app examples.
Let us list first the known bug fixes.
This section lists improvements to the path tracing API. Overall, the current feature set works well for Yocto. Improvements may be needed in the API, more uniform code, and small additions to support denoising.
Here is a list of changes to evaluate
This issue tracks a discussion of materials naming and parameters.
There are four main concerns:
The current implementation supports the following materials:

- `color` as diffuse
- `color` as reflectivity at grazing angles (conductor fresnel for now) and `roughness`
- `color` as diffuse, `ior` and `roughness` for specular
- `color` for transmission color, `ior` and `roughness` for specular/fresnel
- `color` and `depth` for density, `scattering` and `anisotropy` for albedo
- `color`, `metallic` and `roughness`
- a subsurface variant with a `scattering` texture, allowing a different way of entering the surface; it is ignored for now, but I think it is better to keep a subsurface material for skin rendering

Material naming is another complex issue. All renderers have different kinds of nomenclature, so there is no common ground to use. There are three main variations for this.
I prefer the last naming convention.
Right now emission is kept as a separate property, just like in all renderers. This may or may not be a good idea.
`color` is a color in [0,1], while `emission` is in [0,inf]; we could fix this by adding an `exposure` param. Here I am not sure what to do.
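The exposure idea above could be sketched like this: store an emission color in [0,1] plus a separate exposure, and reconstruct the [0,inf] radiance from both. The function name is illustrative only.

```cpp
#include <cmath>

// hypothetical sketch: radiance = color * 2^exposure, so the color stays in
// [0,1] like all other color parameters while radiance can grow unbounded
inline float emission_radiance(float emission_color, float exposure) {
  return emission_color * std::exp2(exposure);
}
```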
The following parameters are mostly reasonable, besides possibly coming up with better names.
- `color`: this is the main object reflectance, but it really is different in all materials; generally this is the one that gets the texture
  - `reflectance` in pbrt-v4
  - `color_tex` for texture, but could be `color_map`
- `opacity`: clear what it is, I would leave it like this
  - `color_tex`
- `roughness`: clear what it is, I would leave it like this
  - `roughness_tex` for texture, but could be `roughness_map`
- `metallic`: used only for glTF, I would leave it as is since it matches those specs
  - `roughness_tex`
- `emission`: see above
  - `emission_tex` as texture
- `normal_tex` for normal maps; works like it is, but maybe call it `normal_map`
The more controversial parameters are `ior` and all volumetric settings.
For metals, we can use either Schlick's or conductors' Fresnel terms. Frankly, I prefer not to introduce all the complexity of complex iors, since we have no data and since it cannot be textured. So a simple `reflectivity` setting, as a color, works well enough.
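As a reminder of what the simple option buys us, Schlick's approximation needs only that reflectivity value; the sketch below is a scalar version for one channel, with illustrative naming.

```cpp
#include <cmath>

// Schlick's approximation: Fresnel reflectance from the reflectivity at
// normal incidence, rising to 1 at grazing angles
inline float fresnel_schlick(float reflectivity, float cos_theta) {
  return reflectivity +
         (1 - reflectivity) * std::pow(1 - std::fabs(cos_theta), 5.0f);
}
```

For colored metals the same formula is applied per channel, which is exactly why a color-valued `reflectivity` is enough.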
For dielectrics it is harder. Right now we use `ior` for two things: coating/plastic and transmission/refraction. For coating, a reflectivity setting works fine, and in fact is required in glTF. Before, we used to have `specular` for that, but it was used in awkward ways, so it was changed to a single `ior`. For transmission/refraction, `ior` is used in the dielectric fresnel and the refraction direction, but it has two issues: (1) we do not do spectral rendering, so we need a single-value `ior`, and (2) nobody changes it, since who knows what it does.
We can leave this as a single `ior`, basically a read-only property. For now, I use dielectric Fresnel, but it is impossible to teach and explain (in fact all renderers have the same cut&pasted code). I would not have a 3-channel `ior`, since it is completely strange (see pbrt-v4 scenes). The other option is to use `specular` set to {0.04, 0.04, 0.04} and be done with it. In the future, that can be used for other effects, like colored coatings.
Not sure what to do here.
The other hard issue is volumetric properties. These are nasty, since they are not intuitive no matter what one does. For this we have three materials now:
In the end, we need to set absorption, albedo and anisotropy (phase g), but it is helpful to consider the three cases separately.
This already suggests different parametrizations for at least glass and subsurface. Or maybe it makes sense to keep the same one, which is what is done in production (glass: color -> density, subsurface: mfp -> density).
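The production-style parametrization mentioned above can be sketched as follows, under the assumption that artists pick a target transmission color at a given depth and the extinction density is derived from it; names are illustrative.

```cpp
#include <cmath>

// Beer-Lambert inversion per channel: if light should reach `color` after
// traveling `depth`, the extinction is sigma_t = -ln(color) / depth
inline float density_from_color(float color, float depth) {
  auto clamped = std::fmax(color, 0.0001f);  // avoid log(0) for black
  return -std::log(clamped) / depth;
}
```

The appeal is that `color` and `depth` are things an artist can reason about, while sigma_t itself is not.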
BTW, the renderer should also be changed a bit based on these materials. For example, if no scattering is present, a simpler early exit is useful, but that is for another day.
Here I am really not sure what to do. Nor do I know what to do with volumetric textures, since I have not tried any data yet.
The bottom line is that we could follow two ways for `subsurface`. I am not sure what to do.
Naming materials based on appearance is probably a better idea for our goals.
- `matte` is ok. It communicates the final appearance quite well.
- `metal` is also ok, even though it does not describe the phenomenological appearance but rather the material. However, metals have such a unique look that it is fine for this case.
- `glossy`, which should communicate a colored surface with reflections. It is quite generic, but it must be so.
- `transparent`. I would exclude transmissive/refractive; they are still technical terms that describe the effect but not the appearance. For thin surfaces, `transparent_thin`.
- `subsurface` is ok, maybe `translucent` is better. My only worry is that someone can confuse it with the word `transparent`.
- `volume` is ok.
- `metallic` is too close to `metal`, having high potential to confuse people. Since we have it only to support glTF, I would go for `gltf_metallic`, which is intentionally ugly, to warn the user and be clear about its usage.
- In the code there's also an unimplemented "leaf". I'm not sure about what it does, but a material with a reflective lobe and an in-place single-scattering lobe can be named `translucent_thin`.
To sum up: `matte`, `glossy`, `metal`, `transparent`, `translucent`, `transparent_thin`, `translucent_thin` (?), `volume`, `gltf_metallic`.
We can keep emission as it is now, but we probably need to add `emissive` to the list of material types.
Material parameters and their naming are ok.
This is a list of final things we need to do to get the release ready.
Here are the things that can be postponed.
The release is ready.
This issue tracks design consideration for the v4 simplification. Instead of keeping separate design documents for each part of the library, we keep everything into one for simplicity.