cginternals / globjects

C++ library strictly wrapping OpenGL objects.
https://globjects.org
MIT License

Difference between VertexArray::binding and VertexAttribBinding::setAttribute? #339

Closed · andrew-kennedy closed this issue 6 years ago

andrew-kennedy commented 7 years ago

The binding method takes a GLuint bindingIndex and setAttribute takes a GLint attributeIndex, which has me a bit confused. I've figured out that the setAttribute method corresponds to the numbered attributes in GLSL shaders specified using layout(location = 0) and such, but I am unsure about what the binding index corresponds to.

I have previously bound all attributes in a shader to one bindingIndex successfully, but I noticed that in the tessellation example the Icosahedron binds the position attribute to bindingIndex 0 and attributeIndex 0, and the normal attribute to bindingIndex 1 and attributeIndex 1.

At this point I'm pretty lost as to the nuance between these two in relation to traditional OpenGL calls.

The final point of confusion is that attributes must be enabled through VertexArray::enable calls, but they are set through the VertexAttribBinding class. Presuming that we can have multiple VertexAttribBindings in a VertexArray, accessed through calls to VertexArray::binding, how do we select which VertexAttribBinding's attributes are going to be enabled when we finally do call VertexArray::enable?

scheibel commented 7 years ago

What you're touching on is a concept that comes straight from OpenGL, not something globjects invents. A good starting point is to treat binding indices and attribute indices as synonyms, but with modern OpenGL (4.3, ARB_vertex_attrib_binding) they are distinct concepts.

A more detailed answer: from a shader's perspective, you want attribute inputs (e.g., positions, colors). From an application perspective, you have (separate) data buffers and views configured on them for attribute input. The former are addressed by attribute indices, the latter by binding indices. When developing graphics applications, application code and shader code may evolve independently, so a fixed connection between vertex data and the vertex attributes in the shader is undesirable. OpenGL's solution is a flexible mapping between those two index types, which has to be neither injective nor surjective (several attributes may fetch from one binding, and a binding may go unused).

The OpenGL wiki page on Vertex Specification gives the broader picture: https://www.khronos.org/opengl/wiki/Vertex_Specification
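For illustration, here is a minimal raw-OpenGL sketch of that separation. This is plain GL 4.3 / ARB_vertex_attrib_binding, not globjects API; the names vao and vbo, the glbinding include, and the index values are just assumptions of the example:

```cpp
#include <glbinding/gl/gl.h>

using namespace gl;

// Assumes a current 4.3+ context, a created VAO, and a buffer of
// tightly packed vec3 positions.
void configureAttribute(GLuint vao, GLuint vbo)
{
    glBindVertexArray(vao);

    // Buffer side: attach the data buffer to *binding index* 0.
    glBindVertexBuffer(0, vbo, 0, static_cast<GLsizei>(3 * sizeof(float)));

    // Shader side: describe *attribute index* 0, i.e. layout(location = 0).
    glVertexAttribFormat(0, 3, GL_FLOAT, GL_FALSE, 0);

    // The flexible mapping: attribute 0 fetches from whatever buffer is
    // attached to binding 0. Several attributes may share one binding.
    glVertexAttribBinding(0, 0);

    // Enabling is still expressed per attribute index, not per binding index.
    glEnableVertexAttribArray(0);
}
```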

scheibel commented 7 years ago

The last part of your comment probably points to a design issue in globjects. I recall some commits from the early days of this project when the active method was indeed available on the VertexAttributeBinding class. However, the VertexAttributeBinding class wraps the connection between an attribute index and a binding index, together with the other associations required. The parameter of glEnableVertexArrayAttrib, on the other hand, is a plain attribute index. I presume this mismatch is the reason for the current design, where enabling is done on the VertexArray.

andrew-kennedy commented 7 years ago

Thanks for the clarification. I think I understand the difference after reading those Khronos wiki pages. So what is the result of calling enable on an attribute index when that same index is configured in two VertexAttributeBinding instances? Undefined behavior? I apologize for my laziness, because the answer is obviously in the code somewhere, but digging through globjects to determine it feels like a lot of effort.

scheibel commented 7 years ago

Documentation of internals: in globjects we keep track of configured vertex attribute bindings. Internally, we use a map from binding index to binding object. The only interface to create such a vertex attribute binding is to request it through the VertexArray::binding member function. During creation of the VertexAttributeBinding instance, its binding index is passed and stored immutably (okay, maybe we should add a const to the declaration). What you can alter afterwards is the attribute index, which defaults to the binding index.
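To make that concrete, here is a sketch of the usual setup against this API, roughly following the pattern from the examples. The create()/unique_ptr style, the buffer name vertices, and the glm::vec3 layout are assumptions of the sketch; older globjects versions construct objects slightly differently:

```cpp
auto vao = globjects::VertexArray::create();

// Requesting binding index 0 lazily creates the VertexAttributeBinding
// and stores it in the internal map.
auto binding = vao->binding(0);

// The attribute index defaults to the binding index; setting it explicitly
// here only makes the mapping visible.
binding->setAttribute(0);

// Attach the data buffer and describe its format (tightly packed vec3).
binding->setBuffer(vertices.get(), 0, sizeof(glm::vec3));
binding->setFormat(3, gl::GL_FLOAT);

// Enabling is done per attribute index on the VertexArray.
vao->enable(0);
```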

Use case that provokes your scenario: if you configure a VAO with two bindings (with differing binding indices, as ensured by globjects) for the same attribute index, I think the behavior is defined on OpenGL's side (assuming non-concurrent use of OpenGL).

Explanation: you issue a series of OpenGL commands (either plain or through globjects, it doesn't matter). First you configure one vertex attribute binding; presumably it is a full configuration and probably even a valid one. Later you configure a second vertex attribute binding for the same attribute. Up to this point, OpenGL does not use either vertex attribute binding, because nothing is enabled yet. Then you trigger a call to VertexArray::enable for the first configuration; OpenGL would use that configuration if you started rendering right away. Then you call enable for the second configuration. Now, I think OpenGL keeps track of which vertex shader attribute is fetched using which configuration, recognizes that the former configuration targets the same attribute, and replaces it with the latter one. Thus, I think it depends on the order of the calls: the former configuration is ignored, the latter one is used.
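In globjects terms, the scenario would look roughly like the sketch below. This is illustrative only: bufferA and bufferB are placeholder buffers, and the comment reflects the "latter configuration wins" reasoning above, assuming a backend where setAttribute translates to glVertexAttribBinding:

```cpp
// Two bindings, both configured for attribute index 0.
vao->binding(0)->setAttribute(0);
vao->binding(0)->setBuffer(bufferA.get(), 0, sizeof(glm::vec3));
vao->binding(0)->setFormat(3, gl::GL_FLOAT);

vao->binding(1)->setAttribute(0);
vao->binding(1)->setBuffer(bufferB.get(), 0, sizeof(glm::vec3));
vao->binding(1)->setFormat(3, gl::GL_FLOAT);

// Attribute 0 has a single binding slot in OpenGL state, so the association
// issued last (here: attribute 0 -> binding 1) takes effect; the
// configuration of binding 0 is effectively ignored.
vao->enable(0);
```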