Closed wrr closed 10 years ago
Interesting! Could you try doing a test with a geometry of, say, 500,000 triangles and check what the performance difference is?
Do you remember a reasoning behind calculating the reflection vector in the vertex shader?
`Lambert` material was created first, I think, and it does calculations at the vertex level, interpolating the result across the face. `Phong` material inherited that approach, and used to implement `perPixel` calculations only as an option.

Making this change for `Phong` makes sense. `Lambert` still does lighting calculations at the vertex level, so I am not sure about the best approach for the `Lambert` or `Basic` material in this case.
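For context, vertex-level (Gouraud-style) lighting computes the color once per vertex and lets the GPU interpolate it across the face, roughly like this (a generic sketch with illustrative uniform names, not the actual three.js Lambert chunk):

```glsl
// Vertex shader (Gouraud-style sketch): diffuse lighting per vertex.
// lightDirection and diffuseColor are illustrative uniforms here,
// not three.js built-ins.
uniform vec3 lightDirection;
uniform vec3 diffuseColor;
varying vec3 vLightFront; // interpolated across the face by the GPU

void main() {
    // Note: mat3( modelMatrix ) ignores non-uniform scale; a sketch only.
    vec3 worldNormal = normalize( mat3( modelMatrix ) * normal );
    float dotNL = max( dot( worldNormal, normalize( lightDirection ) ), 0.0 );
    vLightFront = diffuseColor * dotNL;
    gl_Position = projectionMatrix * modelViewMatrix * vec4( position, 1.0 );
}
```

The fragment shader then just reads the interpolated `vLightFront`, which is cheap but smears per-vertex results across large triangles.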
A 500k-triangle geometry is here (the original approach): http://mixedbit.org/cubemap/orig_cylinder/webgl_materials_cubemap.html and here (per-fragment reflection): http://mixedbit.org/cubemap/modified_cylinder/webgl_materials_cubemap.html

It is hard to measure a difference with stats. Do you know of a more accurate tool?
The cylinder also illustrates the visual difference between the two approaches. The camera is configured to stay perpendicular to the cylinder as it moves up and down. In the first case, the reflection is stretched when the camera is close to the cylinder's edges. In the second case, the reflected image stays still, because the angle between the camera and the surface does not change.
Hmm... maybe 2 million triangles? :D
Still, maybe it's better to just go with perPixel envmap handling for simplification?
Probably. But `MeshLambertMaterial` would become a mix of methodologies that is difficult to justify, logically.
I initially tried to parametrize materials with a `perFragmentReflection` setting, so any material could turn the option on if needed (`Phong` could have the option set to `true` by default, `Basic` and `Lambert` to `false`). But with such an approach the shaders' dependencies and `ifdef` nesting became quite convoluted (`Phong` already defines `normal` and `vWorldPosition`; other materials do not). I gave up on this attempt and enabled per-fragment reflection calculation only for the `Phong` material, as @WestLangley suggested, which is very simple, because all the needed variables are already there.
The PR related to this issue is merged, so I think this can be closed.
At the moment, Three.js calculates the reflection vector for a material with an envmap (and without a bump or normal map) in the vertex shader: https://github.com/mrdoob/three.js/blob/dev/src/renderers/shaders/ShaderChunk/envmap_vertex.glsl#L14

The result is then interpolated and used by the fragment shader: https://github.com/mrdoob/three.js/blob/dev/src/renderers/shaders/ShaderChunk/envmap_fragment.glsl#L26

This approach causes distortion for larger triangles (I think this is because the reflection vector does not change linearly across a face, so the interpolation introduces errors). Please take a look here: http://mixedbit.org/cubemap/orig/webgl_materials_cubemap.html The reflection is distorted; when the camera moves, the boundary between two triangles becomes clearly visible.
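For reference, the vertex-shader path linked above amounts to something like the following (a simplified sketch in the spirit of `envmap_vertex.glsl`, not the exact chunk):

```glsl
// Vertex shader (sketch): the reflection vector is computed per vertex
// and passed on as a varying, so the GPU interpolates it linearly
// across each triangle. reflect() is not a linear function of the
// vertex position, which is where the distortion on large triangles
// comes from.
varying vec3 vReflect;

void main() {
    vec4 worldPosition = modelMatrix * vec4( position, 1.0 );
    // mat3( modelMatrix ) ignores non-uniform scale; a sketch only.
    vec3 worldNormal = normalize( mat3( modelMatrix ) * normal );
    vec3 cameraToVertex = normalize( worldPosition.xyz - cameraPosition );
    vReflect = reflect( cameraToVertex, worldNormal );
    gl_Position = projectionMatrix * modelViewMatrix * vec4( position, 1.0 );
}
```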
A better approach seems to be to calculate only `vNormal` in the envmap vertex shader, let the GPU interpolate `vNormal`, and calculate the reflection vector in the fragment shader (similarly to how this is done when a bumpmap or normalmap is used). Then the reflection becomes mirror-like, without any distortion: http://mixedbit.org/cubemap/modified/webgl_materials_cubemap.html

I can work on a pull request if such a change seems good. I'm not sure, though, whether the proposed approach has any drawbacks (besides the additional cost of calculating the reflection vector per fragment). Do you remember the reasoning behind calculating the reflection vector in the vertex shader?