mrdoob / three.js

JavaScript 3D Library.
https://threejs.org/
MIT License

FlatShading on MeshLambertMaterial #7130

Closed Astrak closed 3 years ago

Astrak commented 9 years ago

The shading property has been removed from MeshLambertMaterial; it now renders only the SmoothShading way.

Is this the desired effect? MeshPhongMaterial seems heavier as a way to get this result.

WestLangley commented 9 years ago

That is correct. Use MeshPhongMaterial if you want FlatShading.

mrdoob commented 9 years ago

Maybe we can do a new THREE.FlatShadedGeometry( geometry ), which is what WebGLRenderer used to do internally.

WestLangley commented 9 years ago

Or geometry.computeFlatVertexNormals().

But it would not work for indexed BufferGeometry...
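For illustration, a rough sketch of what such a helper might do for non-indexed geometry (this is not the actual three.js implementation, just the idea: compute each triangle's normal and write it to all three of its vertices):

// Sketch: per-face normals written to every vertex of each triangle.
// Assumes a non-indexed BufferGeometry with a 'position' attribute.
function computeFlatVertexNormals( geometry ) {

    var positions = geometry.attributes.position.array;
    var normals = new Float32Array( positions.length );

    var pA = new THREE.Vector3(), pB = new THREE.Vector3(), pC = new THREE.Vector3();
    var cb = new THREE.Vector3(), ab = new THREE.Vector3();

    for ( var i = 0; i < positions.length; i += 9 ) {

        pA.fromArray( positions, i );
        pB.fromArray( positions, i + 3 );
        pC.fromArray( positions, i + 6 );

        cb.subVectors( pC, pB );
        ab.subVectors( pA, pB );
        cb.cross( ab ).normalize(); // the face normal

        for ( var j = 0; j < 9; j += 3 ) {

            normals[ i + j ] = cb.x;
            normals[ i + j + 1 ] = cb.y;
            normals[ i + j + 2 ] = cb.z;

        }

    }

    geometry.addAttribute( 'normal', new THREE.BufferAttribute( normals, 3 ) );

}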

Astrak commented 9 years ago

After testing a few projects of mine on different devices with MeshPhongMaterial's result, I realized I shouldn't have closed it ^^

I don't know much about three.js, and reading ShaderChunk or the resulting 1000-line Phong shader scares me a little, but MeshPhongMaterial seems to need more resources than MeshLambertMaterial, even with shininess: 0 / specular: 0x000000. I also read https://github.com/mrdoob/three.js/issues/6586, where @mrdoob wrote about MeshLambertMaterial:

It definitely is better performant

I don't know the reason behind removing shading from MeshLambertMaterial, but I used to use it often, in basements for instance, where SmoothShading is not appropriate. Turning those materials into MeshPhongMaterial makes the framerate drop.

Is it possible to bring shading back to MeshLambertMaterial?

mrdoob commented 9 years ago

Would a class like new THREE.FlatShadedGeometry( geometry ) work for you?

Astrak commented 9 years ago

If you propose it I guess yes ^^ Depending on your discussion about this class, I'll let you close this issue.

erichlof commented 9 years ago

Hi @mrdoob, +1 for THREE.FlatShadedGeometry( geometry ). This would be useful for a low-poly or "retro 3D" look that is performant. It would be nice to avoid the per-fragment operations of MeshPhongMaterial if we can.

jee7 commented 9 years ago

I also just discovered that FlatShading has been removed. I read the MeshGouraudMaterial idea in #6586 as well. First, it seems kind of weird that in order to use FlatShading we would need to use MeshPhongMaterial. I mean, Phong's lighting model theoretically assumes per-fragment (Phong) shading. Of course it might be interesting to see how Phong's lighting looks with per-face and per-vertex shading as well, but that is not its main use case. As for MeshLambertMaterial, per-face and per-vertex shading are definitely the main use cases.

Here are the different lighting models (how the color is calculated):

  • Basic - no lighting, a constant color
  • Lambert - diffuse term only
  • Phong / Blinn-Phong - diffuse and specular terms

Here are the different shading models / schemes (where the color is calculated):

  • Flat - calculated per-face
  • Gouraud - calculated per-vertex
  • Phong - calculated per-fragment

It confuses me why you would want to mix up the how and the where and create something called MeshGouraudMaterial?! It seems more logical to keep those things separated and make all combinations of lighting models (how) and shading schemes (where) available and easily usable through the library.

Look at the first three.js example here: https://cglearn.codelight.eu/pub/shading-and-lighting. Now, if I'm implementing some algorithm where I have 100K cubes, then I probably want to flat shade them, and I will use only the Lambertian (diffuse) calculations. It seems weird to use Phong's model there and set the specular to 0 or something.

And this definitely seems to be a job for the Material, not the Geometry. A Geometry's responsibility shouldn't be to know about shading, lighting or such things. So FlatShadedGeometry seems misleading as well. I mean, would we have to construct a new geometry each time we want to change how the material's color is rendered?

I'm sorry if I sound kind of demanding, but it's just a shock to see FlatShading gone from MeshLambertMaterial.

mrdoob commented 9 years ago

we would have to construct a new geometry each time we want to change how the material's color is rendered

Well, if you don't do it, the renderer will have to do it internally for you...

Another option would be to always generate 2 attributes: vertexNormals and faceNormals.

jee7 commented 9 years ago

Would it not work if you had one Geometry that could potentially calculate both vertexNormals and faceNormals? When using FlatShading the shader would expect faceNormals and have the Geometry compute them (if not already computed). When using SmoothShading the shader would expect vertexNormals and have the Geometry compute them (if not already computed). It would do some work, but not as much as creating a new (FlatShaded)Geometry instance from an existing geometry would. The shaders have to be recompiled anyway when you change the material.

Also, in regard to FlatShadedGeometry: this would mean that you would also have GouraudShadedGeometry and PhongShadedGeometry, not to mention FlatShadedBufferGeometry, GouraudShadedBufferGeometry and PhongShadedBufferGeometry? I understand that an indexed BufferGeometry would not work with FlatShading unless you duplicate the vertices. I guess that would have to wait until glProvokingVertex() comes to WebGL 2.0 (or later), maybe.

Still, it seems reasonable that any Geometry could potentially calculate both vertexNormals and faceNormals. It could also look like geometry.computeVertexNormals( THREE.Geometry.FlatStrategy ), and the Material would ask for that if FlatShading is specified.
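A sketch of that lazy lookup, with hypothetical method names (getFaceNormals() does not exist in three.js; computeFaceNormals() does):

// Hypothetical: the geometry computes and caches face normals on first request.
THREE.Geometry.prototype.getFaceNormals = function () {

    if ( this._faceNormals === undefined ) {

        this.computeFaceNormals(); // real three.js method
        this._faceNormals = this.faces.map( function ( face ) {
            return face.normal;
        } );

    }

    return this._faceNormals;

};

// getVertexNormals() would mirror this, calling computeVertexNormals() instead.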

mrdoob commented 9 years ago

The shaders have to be recompiled anyway when you change the material.

Actually, the shaders don't need to be recompiled. It's just a different vertex attribute.

Also, in regard to FlatShadedGeometry. This would mean that you would also have GouraudShadedGeometry, PhongShadedGeometry and not to mention FlatShadedBufferGeometry, GouraudShadedBufferGeometry and PhongShadedBufferGeometry?

No...

I understand that indexed BufferGeometry would not work with FlatShading unless you duplicate the vertices. I guess that would have to wait until glProvokingVertex() comes to WebGL 2.0 (or later) maybe.

That is correct. Thanks for the glProvokingVertex() tip, I wasn't aware of that.
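Until then, duplicating vertices by hand is the workaround. A minimal sketch of de-indexing a BufferGeometry, positions only (uvs, colors, etc. would need the same expansion):

// Sketch: expand an indexed BufferGeometry so every triangle owns its vertices.
function toNonIndexed( geometry ) {

    var index = geometry.index.array;
    var positions = geometry.attributes.position.array;
    var expanded = new Float32Array( index.length * 3 );

    for ( var i = 0; i < index.length; i ++ ) {

        expanded[ i * 3 ] = positions[ index[ i ] * 3 ];
        expanded[ i * 3 + 1 ] = positions[ index[ i ] * 3 + 1 ];
        expanded[ i * 3 + 2 ] = positions[ index[ i ] * 3 + 2 ];

    }

    var result = new THREE.BufferGeometry();
    result.addAttribute( 'position', new THREE.BufferAttribute( expanded, 3 ) );
    return result;

}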

Still, it seems reasonable that any Geometry could potentially calculate vertexNormals and faceNormals.

Yes, I think you're right. I'll try to implement the vertexNormals/faceNormals attribute idea in this cycle. Sorry for the temporary breakage...

WestLangley commented 9 years ago

@mrdoob One solution is to modify MeshLambertMaterial to do per-pixel shading instead of per-vertex shading. Then we can implement FlatShading like we do with MeshPhongMaterial. Plus the material names will finally make sense. : - )

The only reason not to do this would be if speed is an issue, and if per-vertex shading really makes a difference in that regard.

mrdoob commented 9 years ago

The only reason not to do this would be if speed is an issue, and if per-vertex shading really makes a difference in that regard.

Yeah, speed is an issue... I was going to rename it to MeshGouraudMaterial in r72, but I thought we had too many changes already 😇

Hopefully this cycle!

jee7 commented 9 years ago

But that is still wrong. It will still apply the Lambertian (diffuse) lighting model, right? I mean, you could apply Lambertian or Phong (or Blinn-Phong, etc.) with Gouraud shading. Or is the approach to specify the shading (and thus the complexity) in the Material's name and configure the actual lighting model as a parameter? So it would be something like:

  • MeshFlatMaterial - face normals, color calculated per-face
  • MeshGouraudMaterial - vertex normals, color calculated per-vertex
  • MeshPhongMaterial - vertex normals, color calculated per-fragment

And then there would be a parameter to specify a lightingModel, which could be:

  • THREE.Material.LambertModel
  • THREE.Material.PhongModel

So for example one could call:

var material1 = new MeshGouraudMaterial({ lightingModel: THREE.Material.PhongModel });
var material2 = new MeshFlatMaterial({ lightingModel: THREE.Material.LambertModel });
var material3 = new MeshPhongMaterial({ lightingModel: THREE.Material.LambertModel });

Instead of the previous approach, where we called:

var material2 = new MeshLambertMaterial({ shading: THREE.FlatShading });
var material3 = new MeshLambertMaterial({ shading: THREE.SmoothShading });

which I think also made good sense...

Personally, because there are more lighting models than shading models, the previous approach seems more scalable. You could have just another class, MeshBeckmannMaterial or MeshCookTorranceMaterial (etc.), that specifies another lighting model and which shading parameters are supported for it, instead of creating another conditional branch for the new lighting model in the different shading-based classes.

WestLangley commented 9 years ago

@jee7 We are going to add more lighting models, but (IMO) we need to get the material-specific code out of the renderer first.

Until then, we have two main materials that support lights: MeshLambertMaterial and MeshPhongMaterial.

Specular highlights will not render correctly when using per-vertex shading, so we do not need to consider all your combinations.

mrdoob commented 9 years ago

But that is still wrong. It will still apply the Lambertian (diffuse) lighting model, right?

Yep!

  • MeshFlatMaterial - face normals, color calculated per-face

You know, I hadn't thought of that... But I think I vote for new THREE.MeshGouraudMaterial( { shading: THREE.FlatShading } ).

WestLangley commented 9 years ago

@mrdoob wrote in #6586

I think we should move the current MeshLambertMaterial out of core.

Do you still plan to do that?

There was a plan at one time to keep MeshLambertMaterial simple. The idea was that if users want a lightmap or aoMap, they can use MeshPhongMaterial.

But I will go along with whatever you want to do. Currently, MeshLambertMaterial does not support lightmaps or aoMaps. If you want to add them back in, it is doable, but significant changes are required to separate ambient (indirect) light from direct light in the vertex shader so the light sources can be attenuated separately. (Ambient occlusion maps only occlude ambient light.)
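A minimal GLSL sketch of that separation, written as JS strings (the uniform and varying names here are illustrative, and a single directional light stands in for the full light loop; the point is only that the AO sample scales the indirect term and not the direct one):

// Sketch of a per-vertex (Lambert-style) shader that keeps the two terms apart.
var vertexSketch = [
    "uniform vec3 ambientLightColor;",
    "uniform vec3 directionalLightColor;",
    "uniform vec3 directionalLightDirection;",
    "attribute vec2 uv2;",
    "varying vec3 vDirectLight;",
    "varying vec3 vIndirectLight;",
    "varying vec2 vUv2;",
    "void main() {",
    "    vec3 n = normalize( normalMatrix * normal );",
    "    vDirectLight = directionalLightColor * max( dot( n, directionalLightDirection ), 0.0 );",
    "    vIndirectLight = ambientLightColor; // kept separate so AO can attenuate it alone",
    "    vUv2 = uv2;",
    "    gl_Position = projectionMatrix * modelViewMatrix * vec4( position, 1.0 );",
    "}"
].join( "\n" );

var fragmentSketch = [
    "uniform vec3 diffuse;",
    "uniform sampler2D aoMap;",
    "varying vec3 vDirectLight;",
    "varying vec3 vIndirectLight;",
    "varying vec2 vUv2;",
    "void main() {",
    "    float ao = texture2D( aoMap, vUv2 ).r;",
    "    // ambient occlusion darkens only the indirect term",
    "    gl_FragColor = vec4( diffuse * ( vDirectLight + vIndirectLight * ao ), 1.0 );",
    "}"
].join( "\n" );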

mrdoob commented 9 years ago

I think we should move the current MeshLambertMaterial out of core.

Do you still plan to do that?

I'm reconsidering it. It may be a matter of refactoring.

jee7 commented 9 years ago

I don't understand the deal with MeshGouraudMaterial. Gouraud is a shading scheme, like FlatShading or SmoothShading. It does not make any sense to have new THREE.MeshGouraudMaterial( { shading: THREE.FlatShading } ). What lighting model is that going to use? Lambert? Basic? (Edit: @mrdoob, ok, you mentioned that it's Lambert. But do read on...)

If you look at the Wikipedia page for Gouraud shading, the examples there use Phong's lighting model. Sure, the specular is rendered incorrectly, and I understand that supporting that should not be a priority. Still, that does not seem reason enough to declare that Gouraud is now a material that uses only one specific lighting model X.

Here are the combinations that are probably the most used:

Lighting model | Shading scheme | Usage
---------------|----------------|------
Basic          | Flat           | Single-colored faces
Lambert        | Flat           | Shaded faces
Lambert        | Smooth (Phong) | Smooth polygon without specular
Phong          | Smooth (Phong) | Smooth polygon with specular

The Gouraud shading scheme probably is not that widely used. If you say that Lambert-Flat is actually Gouraud, because currently the only way to achieve it is to have duplicated vertices with all normals as face normals (and for optimization purposes it's then better to calculate the lighting per vertex and interpolate the same color), then that is just a special case of Gouraud. I don't think that is a good idea, because:

1) People can get confused, because if you read about Gouraud, it historically was just a fast way to create smooth-shaded objects. One would expect a result similar to the one mentioned in this post.
2) It is just a workaround for Lambert-Flat. If you later have an opportunity to implement it differently (for example with glProvokingVertex), then it is no longer Gouraud at all.
3) What is your naming convention? Is it Mesh(LightingModel)Material or Mesh(ShadingScheme)Material? It does not seem wise to mix them.

Now, what could perhaps be done is to have 3 distinct shading schemes (as inner classes, for example): flat, gouraud and smooth. When you call new THREE.MeshLambertMaterial( { shading: THREE.FlatShading } ), then internally Gouraud shading is used with face normals. This way current users can still use what was there in the previous revision, the result is what one would expect, and later on you could change it without changing the interface. If they happen to call new THREE.MeshLambertMaterial( { shading: THREE.GouraudShading } ), then internally Gouraud shading is used with vertex normals.

Pros for that seem to be:

mrdoob commented 9 years ago

What I was going for was this:

mrdoob commented 9 years ago

Bear in mind that the output of FlatShading with MeshPhongMaterial is different from MeshGouraudMaterial's. That's a look/feature I don't want to lose, and I think your proposals are ignoring it.

jee7 commented 9 years ago

Ok, so the new naming convention would be Mesh(ShadingScheme)Material? That seems like quite a big change...

And now one has to know that for MeshGouraudMaterial:

  • the lighting model is always Lambert (diffuse only)
  • shading can be per-face or per-vertex

Then for MeshPhongMaterial:

  • the lighting model is always (Blinn-)Phong
  • shading is per-fragment

This also means that creating per-vertex lighting with Phong's lighting model (the example that is widely used for Gouraud) is impossible. Which I guess does create some confusion, but is probably fine.

If you are planning to create more lighting models with that approach, then it seems really hard. You either create MeshNewLightingModelMaterial, which will break your convention, or have another parameter as a shading option, so it would become MeshPhongMaterial (with flat/smooth/Cook-Torrance shading). In that case: 1) you now have a parameter called shading that actually specifies the lighting model; 2) you always have to change MeshPhongMaterial in order to create a new lighting model.

What I was suggesting is to keep using your current convention of Mesh(LightingModel)Material. That way you can create a new class for a new lighting model without breaking the convention. Each such class can define which shading strategies (flat [per-face], Gouraud [per-vertex] or Phong [per-fragment]) are available. Each class also defines the model's formula for the shader (which gets sent to either the vertex or fragment shader with corresponding normals and other data, based on the shading strategy). So for MeshLambertMaterial you could do something like (very simplified):

lightingModelShaderCode = "max(0.0, dot(normal, light))";

if ( shading == THREE.Material.SmoothShading ) {
   attributes.normal = geometry.getVertexNormals(); // lit per-fragment
} else if ( shading == THREE.Material.GouraudShading ) {
   attributes.normal = geometry.getVertexNormals(); // lit per-vertex
} else if ( shading == THREE.Material.FlatShading ) {
   attributes.normal = geometry.getFaceNormals();   // face normals, still lit per-vertex
   shading = THREE.Material.GouraudShading;
}

So for MeshPhongMaterial you could do something like:

lightingModelShaderCode = "max(0.0, dot(normal, light)) + max(0.0, dot(reflection, viewer))^shininess";

if ( shading == THREE.Material.SmoothShading ) {
   attributes.normal = geometry.getVertexNormals();
} else if ( shading == THREE.Material.GouraudShading ) {
   attributes.normal = geometry.getVertexNormals();
} else if ( shading == THREE.Material.FlatShading ) {
   attributes.normal = geometry.getFaceNormals();
   shading = THREE.Material.GouraudShading;
}
uniforms.shininess = shininess;

For some new lighting model you could do:

lightingModelShaderCode = "complicated formula";

if ( shading == THREE.Material.SmoothShading ) {
   attributes.normal = geometry.getVertexNormals();
} else if ( shading == THREE.Material.GouraudShading ) {
   attributes.normal = geometry.getVertexNormals();
} else if ( shading == THREE.Material.FlatShading ) {
   console.error("Not supported");
}

This code would look a lot better with a switch - for example:
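The same dispatch as a switch, using the same hypothetical names as above:

switch ( shading ) {

    case THREE.Material.SmoothShading:
    case THREE.Material.GouraudShading:
        attributes.normal = geometry.getVertexNormals();
        break;

    case THREE.Material.FlatShading:
        attributes.normal = geometry.getFaceNormals();
        shading = THREE.Material.GouraudShading; // lit per-vertex with face normals
        break;

}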

You then need some ShaderBuilder that will take the material and:

if ( material.shading == THREE.Material.GouraudShading ) {
   // Send material.lightingModelShaderCode to the vertex shader.
   // Build the shader so that it outputs a color for the fragment shader.
} else if ( material.shading == THREE.Material.PhongShading ) {
   // Send material.lightingModelShaderCode to the fragment shader.
   // Build the shader so that it interpolates the normal.
   // You could even ask the material for some portion of the vertex shader code
   // (in case there are more complicated things to do).
} else if ( material.shading == THREE.Material.FlatShading ) {
   console.error("This is for the future... can not use yet. Do Gouraud shading instead");
}

As for the difference between FlatShading on MeshPhongMaterial and MeshGouraudMaterial: could you be a bit more specific? Assuming that you mean the specular highlight that is not present in MeshGouraudMaterial (which you said always uses the Lambert model)... Of course that is also considered. The lighting model formula for MeshPhongMaterial is the same for FlatShading, GouraudShading and SmoothShading; it is just that the formula is sent to a different shader. It will still highlight the face if the normals are right.

mrdoob commented 9 years ago

Ah, I understand what you mean now. It does make sense. It will require quite a bit of refactoring though...

It's annoying/confusing that there are MeshPhongMaterial and PhongShading at the same time.

WestLangley commented 9 years ago

MeshLambertMaterial uses a Lambert lighting model. It began as a per-vertex shader, and has remained so.

MeshPhongMaterial uses a Blinn-Phong lighting model. It also began as a per-vertex shader. Later, a perPixel option was added. Sometime afterward, the perPixel option was removed and made the default. So now MeshPhongMaterial is a per-pixel shader.

The MeshPhysicalMaterial shader that is coming will also be per-pixel. It has not been decided if it will use the GGX lighting model only, or if there will be options for other lighting models with this material.

In any event, I would keep the current material names the same. In the documentation we can explain that MeshLambertMaterial is per-vertex and may be better for slower devices. I would not confuse things by introducing MeshGouraudMaterial.

erichlof commented 9 years ago

Hi all. When I am choosing which THREE.Material to use in my app, it basically boils down to: do I want no lighting (for computationally-challenged devices), flat diffuse only for cheap lighting, specular/smooth for more realistic but still somewhat performant lighting, or, in the near future, a physically-based material and advanced lighting model for ultimate realism on a higher-end desktop that can handle it?

Here is a proposed list ranging from fastest-performing / least-realistic to slowest-performing but most-realistic:

  1. MeshBasicMaterial (no lighting model can be used, just a pure color per-face)
  2. MeshFlatShadedMaterial (Lambert per-face lighting, similar to THREE.FlatShading in MeshLambertMaterial)
  3. MeshGouraudMaterial (Lambert lighting model per-vertex only, or Phong lighting could be used)
  4. MeshPhongMaterial (Blinn-Phong lighting model, per-fragment only)
  5. MeshPhysicalMaterial (Cook-Torrance model, etc. - numerous BRDF options could be added here, as many as those implementing this material are willing to make for us)

There is a nice, intuitive progression in the above list from computationally cheapest to most expensive, and from least realistic to most realistic. Devs could tailor these material choices to suit the target device.

mrdoob commented 9 years ago

As for the difference between FlatShading on MeshPhongMaterial and MeshGouraudMaterial: could you be a bit more specific? Assuming that you mean the specular highlight that is not present in MeshGouraudMaterial (which you said always uses the Lambert model)... Of course that is also considered. The lighting model formula for MeshPhongMaterial is the same for FlatShading, GouraudShading and SmoothShading; it is just that the formula is sent to a different shader. It will still highlight the face if the normals are right.

I mean that, as far as I understand, with your proposal, when using FlatShading the lighting wouldn't be per-pixel anymore, so we wouldn't be able to do this:

[screenshot: flat-shaded faces with per-pixel lighting - see the jsfiddle below]

http://jsfiddle.net/9u1ux4Lg/

But maybe this is something we shouldn't have been able to do to start with? I guess a solution for this would be to, again, use new THREE.FlatShadedGeometry( geometry ) together with new THREE.MeshPhongMaterial( { shading: THREE.PhongShading } ).

By the way... maybe better names could be THREE.FaceShading, THREE.VertexShading and THREE.PixelShading (or THREE.FragmentShading).

jee7 commented 9 years ago

Here is a proposed list ranging from fastest-performing / least-realistic to slowest-performing but most-realistic:

  1. MeshBasicMaterial (no lighting model can be used, just a pure color per-face)
  2. MeshFlatShadedMaterial (Lambert per-face lighting, similar to THREE.FlatShading in MeshLambertMaterial)
  3. MeshGouraudMaterial (Lambert lighting model per-vertex only, or Phong lighting could be used)
  4. MeshPhongMaterial (Blinn-Phong lighting model, per-fragment only)
  5. MeshPhysicalMaterial (Cook-Torrance model, etc. - numerous BRDF options could be added here, as many as those implementing this material are willing to make for us) – erichlof

Good, now there are use cases. But the class names still do not actually say much about the lighting or shading model used. What you really want is:

Lighting model | Shading scheme | Proposed call
---------------|----------------|--------------
Basic          | PerFace        | new MeshBasicMaterial()
Lambert        | PerFace        | new MeshLambertMaterial({ shading: THREE.FaceShading })
Lambert        | PerVertex      | new MeshLambertMaterial({ shading: THREE.VertexShading })
Phong          | PerFragment    | new MeshPhongMaterial()
Cook-Torrance  | PerFragment    | new MeshCookTorranceMaterial()

You see, Phong (or Blinn-Phong, or even Lambert or a single color) is also a BRDF. There is a large number of BRDFs that each work differently and use different parameters. With that in mind, I'm not sure about one MeshPhysicalMaterial, because depending on the BRDF you have different parameters for it. Unless you want to call something like:

new MeshPhysicalMaterial({
    brdf: THREE.CookTorranceBRDF,
    brdfParameters: {
        // specific ones for Cook-Torrance
    }
});

Then MeshPhongMaterial could just call this underneath:

new MeshPhysicalMaterial({
    brdf: THREE.BlinnPhongBRDF,
    brdfParameters: {
        shininess: shininess
    }
});

Also, like I have mentioned several times:

MeshGouraudMaterial (Lambert lighting model per-vertex only, or Phong lighting could be used)

Which model is it then, Lambert or Phong? Now you have materials where you can configure the lighting model. On the other hand:

MeshPhongMaterial (Blinn-Phong lighting model, per-fragment only)

This one already fixes the lighting model for you. It does not seem to follow a logical pattern...

In any event, I would keep the current material names the same. In the documentation we can explain that MeshLambertMaterial is per-vertex and may be better for slower devices. I would not confuse things by introducing MeshGouraudMaterial. – WestLangley

I agree! Because Gouraud is not a lighting model and Lambert is. If you want Gouraud shading, then have it as a shading option.

By the way... maybe better names could be THREE.FaceShading, THREE.VertexShading and THREE.PixelShading (or THREE.FragmentShading). – mrdoob

Great! I agree that it is confusing that Phong is both a name for the lighting model and the per-fragment shading scheme. He was a busy guy. Naming shading schemes by what they do is much better!

http://jsfiddle.net/9u1ux4Lg/ – mrdoob

Ok. Now I see. You want the normals to be flat, but the lighting model applied per-fragment. I hadn't considered such a case... Yes, I agree that this is something a Geometry should do. Maybe something like this then:

new Geometry({ type: THREE.FlatGeometry })
new Geometry({ type: THREE.SmoothGeometry }) - default

Or / and perhaps:

geometry.makeFlat() - duplicates vertices, makes normals flat
geometry.makeSmooth() - removes duplicates, makes normals smooth

In that case the material would only ever call geometry.getVertexNormals() in my previous examples.

I'm still not sure that doing new THREE.FlatShadedGeometry( geometry ) is a good idea. It seems to create (or at least indicates that it creates) much overhead. On the other hand, there might be more work to do than I currently think. If making the geometry flat-to-smooth or smooth-to-flat changes, say, more than 50% of the attributes inside the Geometry, then I guess making a new class would be better. Actually it might be ok, because you would need to change the way computeVertexNormals() behaves. But when I think about the current BufferGeometry: what if I want a FlatBufferGeometry? For that I would need to do something like this?

var geometry = new THREE.IcosahedronGeometry(200, 0);
var flatGeometry = new THREE.FlatGeometry(geometry);
var bufferGeometry = new THREE.BufferGeometry().fromGeometry(flatGeometry); 

It kind of seems that there is something wrong with the naming of classes here as well. BufferGeometry and (Shape)Geometry seem kind of conflicting. And if you now add FlatGeometry, it is even more confusing.

So I guess my idea would be to still have .makeFlat() / flatten() in the Geometry. That function could also change which functionality computeVertexNormals() points to.

And if a material is specified to have THREE.FaceShading, then it can use geometry.makeFlat() if needed. Although the material probably shouldn't change the actual geometry. So perhaps something like geometry.getFlat(), with the result cached. Or throw an error if the geometry is not already flattened.

Although, now I see why specifying the shading in the material might not be that useful...

erichlof commented 9 years ago

By the way... maybe better names could be THREE.FaceShading, THREE.VertexShading and THREE.PixelShading (or THREE.FragmentShading).

@mrdoob I like this too. This naming scheme goes along with what is actually happening under the hood and leaves nothing unsaid. It also gives devs a performance tier from which they can select the appropriate shading scheme, to suit their app as a whole or on an individual per-model basis.

So here would be the new material list starting from cheapest to most computationally expensive:

  1. THREE.MeshBasicMaterial (shading: THREE.FaceShading set by default only.
     lightingModel: no lighting model available, or THREE.NoLighting set by default)

  2. THREE.MeshLambertMaterial (shading: THREE.FaceShading or THREE.VertexShading, the latter set by default.
     lightingModel: THREE.LambertLighting set by default, THREE.OrenNayarLighting in the future?)

  3. THREE.MeshPhongMaterial (shading: THREE.FaceShading (like mrdoob's jsfiddle above) or THREE.FragmentShading, the latter set by default.
     lightingModel: THREE.BlinnPhongLighting by default, THREE.PhongLighting as an option?)

  4. THREE.MeshPhysicalMaterial (shading: THREE.FragmentShading set by default only.
     lightingModel: THREE.CookTorranceLighting, THREE.WardLighting, THREE.GGXLighting, etc.)

@jee7 brings up good points about BufferGeometry; not sure what to do with the geometry side of things at the moment.

But at any rate, I feel the above naming scheme gives the most options to the user while being clear and descriptive about what is going on under the hood. Chances are that if they want an advanced physical per-fragment shading scheme, they'll know who Phong, Cook and Ward are for lighting models - it's just a wiki click away anyhow.

And a total beginner using MeshLambertMaterial will get the appropriate defaults set for him/her, without having to worry about shading schemes or lighting models. Also, the above naming can be made backwards-compatible (when I think of all the new THREE.MeshLambertMaterial() calls out there in people's code already! :)

jee7 commented 9 years ago

Interesting suggestions for the material list, although there are again both shading and lightingModel specified for each material. The problem I have with that is that different lighting models use different parameters. So you could call:

new THREE.MeshLambertMaterial({
    lightingModel: THREE.LambertLighting,
    color: 0xFFFFFF
});

And:

new THREE.MeshLambertMaterial({
    lightingModel: THREE.OrenNayarLighting,
    color: 0xFFFFFF,
    roughness: 0.5
});

How would you now write the doc for MeshLambertMaterial if roughness only makes sense when a specific lightingModel is specified? This is the problem I mentioned the MeshPhysicalMaterial would probably also have.

Secondly, why is it MeshLambertMaterial? What does the Oren-Nayar model have to do with Lambert?

If this were the approach, then it would probably make more sense to name:

  • MeshLambertMaterial as MeshDiffuseMaterial
  • MeshPhongMaterial as MeshMetallicMaterial

But that doesn't seem a good idea either. I also support keeping the current names and backwards compatibility at this point.

Maybe a better approach would be to have something like this:

var model = new THREE.OrenNayarLighting({
    roughness: 0.5
});
var material = new THREE.MeshMaterial({
    color: 0xFFFFFF,
    model: model
});

That way the current MeshLambertMaterial could just call that underneath with model: new THREE.LambertLighting(), and MeshPhongMaterial could call new THREE.PhongLighting({ specular: specular }).

The current classes would become kind of shortcuts for a more dynamic architecture that supports easy implementation of new lighting models.

Oh, and if you implement Oren-Nayar later on, then you can specify that THREE.LambertLighting() is a shortcut for THREE.OrenNayarLighting({ roughness: 0 }).

Or, you could even have MeshDiffuseMaterial, and the current MeshLambertMaterial would just be a shortcut for that with model: new THREE.LambertLighting().

As for the shading parameters: seems good, if there is a reasonable way to implement it (have the geometry flattened if FaceShading is specified). It might even be something where there are two geometries in a mesh: one that the user changes, and another that is either a flattened or smoothed version of the original. Although I wonder: if the user specifies some specific non-flattened geometry and then uses FaceShading, which creates a flattened copy, debugging the original geometry might be confusing.

erichlof commented 9 years ago

Secondly, why is it MeshLambertMaterial? What does the Oren-Nayar model have to do with Lambert?

If this were the approach, then it would probably make more sense to name:

MeshLambertMaterial as MeshDiffuseMaterial
MeshPhongMaterial as MeshMetallicMaterial

@jee7 Yes, Oren-Nayar doesn't really belong in the Lambert class, but when I think of both names, the term 'diffuse' comes to mind. I had made a list just like yours yesterday, but I deleted my comment. Here was my version (which closely matches yours):

MeshBasicMaterial (same as it is now)
MeshDiffuseMaterial (same as Lambert, not backwards compatible name-wise however)
MeshSpecularMaterial (same as Phong, not backwards compatible name-wise either)
MeshPhysicalMaterial (currently being developed/merged)

This gets rid of the namesakes (sorry Mr. Lambert and Mr. Phong) and moves their names into the appropriate lighting model in the options parameters. I still like mrdoob's naming suggestion for the shading scheme, however: THREE.FaceShading, THREE.VertexShading, and THREE.FragmentShading. There is nothing left unsaid, and no possible naming mixups like Lambert (Lambert lighting, Lambertian surface) and Phong (Phong shading and Phong lighting).

Maybe a better approach would be to have something like this:

var model = new THREE.OrenNayarLighting({ roughness: 0.5 });
var material = new THREE.MeshMaterial({ color: 0xFFFFFF, model: model });

I hadn't thought of it that way - that is very interesting! So you create a lightingModel object with its own set of supported parameters (which can definitely be documented), then pass it in to a more generic MeshMaterial object. Inside the MeshMaterial object we could also place shading: THREE.VertexShading, etc. as an additional parameter.

So the parameters of a generic THREE.MeshMaterial would be:

  1. color
  2. shading
  3. lightingModel
  4. map (?)

jee7 commented 9 years ago

So you create a lightingModel object with its own set of supported parameters (which can definitely be documented), then pass it in to a more generic MeshMaterial object. Inside of the MeshMaterial object we could also place: shading: THREE.VertexShading, etc. as an additional parameter.

Yes. :)

I can't give a full list of possible properties that class could have. Of those I know, map, envMap, fog and wireframe would probably be good. Although, as for the last one, there might be something like:

if ( this.wireframe == true ) {
    this.lightingModel = new WireframeModel();
}

Not sure if that's a good idea though... Perhaps not. Wireframe would probably only change the draw call from GL_TRIANGLES to GL_LINES, so another "lighting model" would be overkill.

As far as I can tell, probably all the lighting models specify the diffuse color. If that is read from a texture, then map should be there. If we want reflections, then envMap seems ok.

But then there are also things like MeshDepthMaterial and MeshNormalMaterial, which specify their own colors...

erichlof commented 9 years ago

Yeah, there are just so many options and combos that it's difficult to compartmentalize them. I'm not familiar enough with where WebGLRenderer's development is headed, but my one concern about the latest generic material naming scheme above is that the data is getting nested a little more deeply.

Every animation frame, will WebGLRenderer have to comb through THREE.Mesh, which has a THREE.MeshMaterial, which has a lightingModel parameter, which has a THREE.LightingModel object, which maybe has a parameter like shininess: 10? To get to the actual data every animation frame, there has to be a lot of searching. Or maybe that could be handled better internally, using a bunch of underscore flags (materialComboFragmentPhong or materialComboVertexLambert, etc.) and arguments that the WebGLRenderer can quickly branch on to use the correct algorithms when drawing?

jee7 commented 9 years ago

There could be an attribute cache map somewhere in between then. Like inside that THREE.MeshMaterial. Or even at some closer level (although if one is using THREE.FaceMaterial then a closer level might not be possible). It could just be a JSON map, and there could be a possibility to update it or turn it off altogether.

THREE.MeshMaterial = function ( parameters ) {
   this.parameters = parameters;
   this.uniformsCache = {};
   this.shouldUseCache = true;
   this.updateCache();
};

THREE.MeshMaterial.prototype = {

   getUniforms: function () {
      // when caching is off, recompute on every request
      if ( ! this.shouldUseCache ) {
          this.updateCache();
      }
      return this.uniformsCache;
   },

   updateCache: function () {
      this.uniformsCache = this.parameters.lightingModel.getUniforms().concat( {
         color: this.parameters.color,
         map: this.parameters.map
         //...
      } );
   },

   useCache: function ( shouldUseCache ) {
       if ( shouldUseCache === undefined ) {
           this.shouldUseCache = true;
       } else {
           this.shouldUseCache = shouldUseCache;
       }
   }
};

Or, I guess in JavaScript you could also do something like:

useCache: function ( shouldUseCache ) {
       if ( shouldUseCache === undefined ) {
           shouldUseCache = true;
       }

       if ( shouldUseCache ) {
           var material = this;
           this.getUniforms = function () {
                return material.uniformsCache;
           };
       } else {
           var material = this;
           this.getUniforms = function () {
                material.uniformsCache = material.parameters.lightingModel.getUniforms().concat( {
                     color: material.parameters.color,
                     map: material.parameters.map
                     //...
                } );
                // Don't know if you can call material.updateCache() here.
                // Probably would need a proxy call to it.
                // Alternatively you could just have __getCachedUniforms() and __getNonCachedUniforms()
                // and then just point to the corresponding one.

                return material.uniformsCache;
           };
       }
   }

That should probably be faster, because you don't need to check shouldUseCache each time you get the uniforms.

erichlof commented 9 years ago

Ah, thanks for the clarification, that does make sense. I'm kind of in the middle of learning some raw WebGL. A while ago I started using THREE blindly and was always afraid to view the WebGLRenderer.js source file :) . Now I am slowly going back and forth between THREE and raw WebGL online tutorials, learning how THREE handles all the rendering-specific stuff under the hood. So, like you said, if this could be handled with a cache system, that's great - I'll take your word for it.

I'm coming at all this from more of an end-user's perspective. I'm interested to see what mrdoob and WestLangley think about the above-mentioned naming schemes.

jee7 commented 9 years ago

Don't take my word for it. :D I'm an end-user as well; it was just an idea. I agree, let's wait for the actual devs to say something about this.

mrdoob commented 9 years ago

This is the kind of material system I would like to implement some day:

var material = new THREE.MeshLayerMaterial( [
    new THREE.LambertShading( 0xff0000 ),
    new THREE.UVMapping( texture, THREE.MultiplyBlending ),
    new THREE.UVMapping( aoTexture, THREE.MultiplyBlending, 1 ), // uv channel: 1
    new THREE.SphericalReflectionMapping( texture, THREE.AdditiveBlending )
] );

makc commented 9 years ago

what's with that "need to do special geometry for flat shading" again? I thought you agreed to use derivatives back in #6117? or was that removed shortly after?

makc commented 9 years ago

ah, that was limited to phong, ok. nvm then.
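For reference, the derivative trick from #6117 computes the face normal in the fragment shader, so it needs no special geometry. A sketch of the fragment-shader excerpt, written as a JS string in the ShaderChunk style (it assumes the three.js vViewPosition varying and, in WebGL 1, the OES_standard_derivatives extension):

var flatNormalChunk = [
    "#extension GL_OES_standard_derivatives : enable",
    "varying vec3 vViewPosition; // view-space position, interpolated per fragment",
    "// inside main(): reconstruct the face normal from screen-space derivatives",
    "vec3 fdx = dFdx( vViewPosition );",
    "vec3 fdy = dFdy( vViewPosition );",
    "vec3 normal = normalize( cross( fdx, fdy ) );"
].join( "\n" );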

whknight commented 8 years ago

disclaimer: I am very new to all this. When it comes to BufferGeometry and per-face shading, why can't you use an indexed normal array? I.e. an array of normals where the location in the normal array is tied to the index array. For example, an index array of 0,1,2,1,2,3 would specify two triangles sharing vertices 1 and 2. Then a normal array would have entries for each element in the index array, but with face normals at the entries, so normal[1] and normal[3] would both be for vertex 1 but with normal values respective to their faces. Does this make sense?

jee7 commented 8 years ago

@whknight But how do you know which normal to pick and output from a vertex then? The general idea is that we have vertices (a cube, for example, would have 8) and each of them gets some value. Of course we could send 3 values or an entire array to each of them. But then there is a problem with how we interpolate the values across the faces. After the vertex shader has run, the next step is rasterization of faces, and each raster (fragment) in a face has a fragment shader run on it. The fragment shader can get values interpolated from neighboring vertices. Alas, there is no way to send one value to a face located on one side of the vertex and another value to some other face. Unless you use the provoking vertex, which is not available in WebGL yet. That would allow you to specify that only the value from the first / last vertex is used for the entire face.

Maybe I didn't understand your idea fully, and it certainly seems worth thinking about. But imagine that vertex 1. The vertex shader needs to output a normal (which is interpolated across all faces that share this vertex 1). Which normal will it be, normal[1] or normal[3]? It may work out in some specific cases, where you can actually interpolate some numeric value across the faces and have it always round to a unique integer per face. Unfortunately, I doubt that this has an efficient general solution.

Edit: Oh, and if you are thinking about Gouraud shading, that wouldn't work either. There you would need to output several color values (one for each neighboring face) from the vertex shader.

whknight commented 8 years ago

Again, I apologize because I am not fully familiar with the pipeline used for WebGL. So if only one normal can be supplied from the vertex shader to the fragment shader (if I understand this correctly), then there would be no way to "switch" normals as you rasterize each face. But consider that a vertex can appear in the index buffer in multiple locations; then you could use the location in the index buffer to point into a normal array. For example:

Vertex[0] = 0;  Vertex[1] = 0;  Vertex[2] = 1;   // vertex0
Vertex[3] = 0;  Vertex[4] = 1;  Vertex[5] = 0;   // vertex1
Vertex[6] = 1;  Vertex[7] = 0;  Vertex[8] = 0;   // vertex2
Vertex[9] = -1; Vertex[10] = 0; Vertex[11] = 0;  // vertex3

Index[0] = 0;
Index[1] = 1;
Index[2] = 2;  // face0
Index[3] = 0;
Index[4] = 3;
Index[5] = 2;  // face1

// I didn't calculate actual normals here... they are just used for descriptive purposes

Normal[0] = 0.514233;
Normal[1] = 0.16234;
Normal[2] = 1.2342;      // normal0 goes with index0, which is then tied to vertex0
Normal[3] = 0.74873;
Normal[4] = -2.13734;
Normal[5] = -0.25452;    // normal1 goes with index1, which is then tied to vertex1
Normal[6] = 1.64324;
Normal[7] = 0.86353;
Normal[8] = -0.4762;     // normal2 goes with index2, which is then tied to vertex2; these are the normals for face0
Normal[9] = 0.87362;
Normal[10] = -1.5367;
Normal[11] = 0.3215;     // normal3 goes with index3, which is also tied to vertex0
Normal[12] = -0.6324;
Normal[13] = 0.8623;
Normal[14] = 0.9535;     // normal4 goes with index4, which is tied to vertex3
Normal[15] = -0.5264;
Normal[16] = 0.87362;
Normal[17] = 0.26613;    // normal5 goes with index5, which is also tied to vertex2; these are the normals for face1

This may not make sense, I guess, if the ordering of the operations in the pipeline precludes it. But it seems that if you use the index array to point into the normal array, you can tie it together with the vertices to get a normal for each face.

Thanks for listening to me rant on....


jee7 commented 8 years ago

@whknight Alright. Let's recap the standard graphics pipeline a bit. As I mentioned, your regular cube would have 8 vertices. The vertex shader is run for each of them once. You can pass any values from the vertex shader to the fragment shader, but they will be passed once.

In your case, Normal[0:2] and Normal[9:11] are both tied to vertex0. Now you can pass one or both of those on from the vertex shader. Whichever way you do it, all the neighboring faces get all the values.

You see, the Index array is only used to tell the GPU where the faces are (between which vertices), i.e. what vertex data we read in order to create a face. I found this illustration that might help. While it is not about WebGL (which is based roughly on OpenGL ES 2.0), and it is also somewhat outdated, the idea is the same. A vertex (element) consists of a position, color, normal, or any other data you wish to send with it. The indices specify the vertices between which we want to interpolate values / rasterize faces.

So unless there is something I'm not aware of, there is no way for two faces sharing a vertex to receive different data from it.

Oh, I did find something called gl_PrimitiveID, but that does not seem to be available in WebGL.

Regarding your idea again: it seems to assume that we can pass different outputs from the vertex shader depending on the face that is currently rendered. Because the vertex shader is run once, we can't. For most vertices in most cases you do not want to run the vertex shader multiple times, because the operations done and the data would be exactly the same. So currently in WebGL the only available option is still to duplicate the vertices.

makc commented 8 years ago

still, the fragment shader interpolates between 3 vertices, so there might be a way to figure out the face from an interpolated value and, based on that, use one of the normals stored in the vertex. if actually done, this would impose a limit on the number of faces a vertex can be in.

whknight commented 8 years ago

Thanks for the primer on the graphics pipeline. I see now that there is physically no way to specify more than one normal per vertex, given the mechanics of the pipeline. Thanks for taking the time to explain it to me. I was/am trying to use BufferGeometry to generate a deeply subdivided icosahedron and wanted to flat shade the faces. Because it's tessellated 8 times, there are a lot of vertices and faces, and I was trying to save memory and processing time by not duplicating vertices. I will try it out with duplicate vertices and see if that works okay. I may end up doing that anyhow, because it appears that to apply uv texture values I will need distinct vertices as well.


jee7 commented 8 years ago

@whknight You're welcome. "... it appears that to apply uv texture values I will need distinct vertices as well." - Yes, any value that you want interpolated differently on neighboring faces currently needs a duplicated vertex. That's why the vertices at the seams of a texture mapping are usually also duplicated.

@makc If you find a way, then let me know. I'm not sure if it's even a solvable problem though...

makc commented 8 years ago

@jee7 let me illustrate what I mean. start with indexed geometry / vertex normals only / some dumb shading here. now suppose we add face normals as vertex attributes to the vertices that form the corresponding faces, and use the interpolated vertex normal to pick the proper face normal - almost there, but the order is bad - not a major problem, since there must exist a proper 'stacking' algo to fix it - brute force fix here.
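A sketch of that pick-the-closest-normal idea as a fragment shader excerpt (the attribute layout is hypothetical: each vertex would carry the normals of the faces it belongs to, forwarded to the fragment shader as varyings):

var pickFaceNormalChunk = [
    "varying vec3 vNormal;      // interpolated vertex normal",
    "varying vec3 vFaceNormalA; // candidate face normals, carried per vertex",
    "varying vec3 vFaceNormalB;",
    "varying vec3 vFaceNormalC;",
    "void main() {",
    "    // pick the candidate most aligned with the interpolated normal",
    "    vec3 n = vFaceNormalA;",
    "    if ( dot( vNormal, vFaceNormalB ) > dot( vNormal, n ) ) n = vFaceNormalB;",
    "    if ( dot( vNormal, vFaceNormalC ) > dot( vNormal, n ) ) n = vFaceNormalC;",
    "    // ... light with n ...",
    "}"
].join( "\n" );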

makc commented 8 years ago

to be clear, I am not calling for implementing something like this in the lib (or anywhere at all) - most meshes will probably be ok, but some (like spheres - thanks to the poles) will need an insane number of attributes.

whknight commented 8 years ago

That's why I am tessellating an icosahedron. The triangles are uniform, so uv mapping doesn't get crazy at the poles, and when you get to about 8 levels deep you have a good approximation of a sphere. It ends up being every .01 of world space representing about 1 sq km (at the scale of the earth). Trying my hand at procedural generation.


jee7 commented 8 years ago

@makc Cool, so you compare each interpolated normal in the fragment shader with all the face normals, then pick the closest one (in the sense of direction). Here are my ideas about that:

@whknight Hmm, if you already have a uv mapping on the icosahedron, then why not also do normal mapping? Would that solve your problem?

makc commented 8 years ago

@jee7 as you can guess, I did not test the limits of this approach. you don't have to test 'all the faces', as it should be possible to use the same attribute for faces that do not have common vertices. it does not even have to be the normal that you use to decide the face; it could be some other quantity. but again, duplicating the vertices is way easier, and does not complicate the shader.