Open holland01 opened 9 years ago
Take care of `rgbGen vertex`; remove the color attrib entirely if `identity` or `identityLighting` settings are used.
`TextureBuffer`: to save memory, it would also be worthwhile to allocate all of the texture structs in the shader stages on the heap, and then delete each one after their pixel data has been transferred to the `TextureBuffer`.
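A minimal sketch of that allocate-then-free pattern. All of the names below (`Texture`, `TransferToTextureBuffer`, `LoadStageTextures`) are hypothetical stand-ins, not the actual code:

```cpp
#include <cstdint>
#include <memory>
#include <vector>

// Hypothetical stand-in for the real texture struct.
struct Texture
{
    std::vector<uint8_t> pixels;
};

// Stand-in for the TextureBuffer's backing storage.
std::vector<uint8_t> gTextureBufferStorage;

void TransferToTextureBuffer( const Texture& t )
{
    gTextureBufferStorage.insert( gTextureBufferStorage.end(),
                                  t.pixels.begin(), t.pixels.end() );
}

// Allocate each texture on the heap, transfer its pixels, then let it be
// destroyed immediately so only one copy of the pixel data stays resident.
void LoadStageTextures( size_t count )
{
    for ( size_t i = 0; i < count; ++i )
    {
        auto tex = std::make_unique<Texture>();
        tex->pixels.assign( 16, uint8_t( i ) ); // placeholder pixel data
        TransferToTextureBuffer( *tex );
    } // tex goes out of scope here; its pixel memory is released
}
```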
Next up would be filling any 24-bit texture buffers with a 255 alpha channel for each pixel, since the texture array will have a `GL_SRGB8_ALPHA8` internal format for each member. You will also need to supply some kind of index attribute which can be looked up in the fragment shader to sample the appropriate texture. For simplicity's sake, you can add an integer for each vertex which takes care of this; if the integer in the shader is -1, there's no need to actually sample it. Be sure to create a separate struct which acts as a vertex for the GPU data; this way, the `bspVertex_t` struct won't need to be modified. You'll need to replace all client-side buffer stores with that type as well.
Each `st = 1.0` will map to the upper-right corner of a primitive, where each `st = 0.0` will map to the lower-left corner of the primitive. This means that there must be some offset `O` applied alongside a scalar `S` multiplying the `st`, so the equation would look something like `st_xy = O + st * S`. What might be useful is something like `O = ( x + y * width ) / ( width * height )` and `S = 1 / ( width * height )`. You could then clamp the result between 0 and 1 (obviously) to prevent an out-of-bounds coordinate access on the last primitive, whose offset will be something like `O = ( width - 1 + ( height - 1 ) * width ) / ( width * height )`.

(NOTE: `O` should actually be something more like `O = vec2( x / width, y / height )`, and then do:

```glsl
st_xy = vec2( x / width, y / height ) + st * vec2( 1 / width, 1 / height );
```
)
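The corrected mapping from the note above can be sketched as a small CPU-side function, assuming `(x, y)` is the cell index within a `width x height` atlas grid and `st` is the original coordinate in [0, 1] (all names here are illustrative):

```cpp
#include <algorithm>

struct Vec2 { float x, y; };

// st_xy = vec2( x / width, y / height ) + st * vec2( 1 / width, 1 / height ),
// clamped to [0, 1] to avoid out-of-bounds access on the last cell.
Vec2 MapToAtlas( Vec2 st, int x, int y, int width, int height )
{
    Vec2 out;
    out.x = ( float( x ) + st.x ) / float( width );
    out.y = ( float( y ) + st.y ) / float( height );

    out.x = std::min( 1.0f, std::max( 0.0f, out.x ) );
    out.y = std::min( 1.0f, std::max( 0.0f, out.y ) );
    return out;
}
```

Note that writing the offset and scale as `( x + st.x ) / width` is algebraically the same as `x / width + st.x * ( 1 / width )`, just with one division.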
(Side note on the mip mapping: the bias uniform being provided in the shader might be causing problems for some of the mip levels. If this is the case, see if there's anything which can be done in terms of properly fetching a particular mip level at the shader level, and using `mipWidth / megaDims.x`, `mipHeight / megaDims.y` in these instances instead. `texelFetch` in GLSL might be a good mechanism for this, or see if there is another way to select the mip level to sample from within a draw call.)
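As a rough sketch of the per-mip math referenced above, the dimensions of a given mip level follow the usual GL halving rule, which is what `mipWidth` / `mipHeight` would come from (the helper name is illustrative):

```cpp
#include <algorithm>

// Dimensions of mip level `level` for a base-level dimension `baseDim`:
// each level halves the previous one (integer floor), never below 1.
int MipDim( int baseDim, int level )
{
    return std::max( 1, baseDim >> level );
}
```

The scale factor for a lookup at a given level would then be something like `MipDim( texWidth, level ) / float( megaDims.x )`.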
`texture_t::minFilter`, `texture_t::magFilter`, `texture_t::pixels`, etc. It would likely be a good idea to have 3 different texture arrays:

Most of the WebGL support is there; optimizations are a major issue at the moment, so I'm focusing on that (in addition to making sure everything is being drawn as close as possible to the real deal).
Main TODO

- WebGL Support

Secondary
For vertex deforms, `glfwGetTime()` is multiplied directly with something like the deform spread; this may spread the animation movement out a bit...
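A minimal sketch of what that time/spread multiplication could look like, assuming a Q3-style sine deform wave; every name and the exact phase layout below are illustrative, not taken from the actual code:

```cpp
#include <cmath>

// Offset applied along a vertex normal: a sine wave whose phase advances
// with time * frequency and is shifted per-vertex by a spread term, so
// neighboring vertices animate slightly out of phase.
float DeformWaveOffset( float time, float frequency, float phase,
                        float base, float amplitude, float spreadTerm )
{
    const float TWO_PI = 6.28318530718f;
    return base + amplitude
        * std::sin( TWO_PI * ( phase + spreadTerm + time * frequency ) );
}
```

The result stays within `base ± amplitude`, so an overly large spread term does not change the wave's extent, only how staggered the motion looks across the surface.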