First, I would like to thank you for this amazing tutorial. While reading Appendix A. Basic Optimization, I stumbled upon what I believe is a small typo in the Vertex Format section.
When talking about packing normalized data as 32-bit integers, the API example provided uses the wrong data type:
Sometimes, color values need higher precision than 8 bits, but less than 16 bits. If a color is in the linear RGB colorspace, it is often desirable to give it greater than 8-bit precision. If the alpha of the color is negligible or non-existent, then a special type can be used. This type is GL_UNSIGNED_INT_2_10_10_10_REV. It takes 32-bit unsigned normalized integers and pulls the four components of the attributes out of each integer. This type can only be used with normalization:
I believe the line glVertexAttribPointer(#, 4, GL_UNSIGNED_BYTE, GL_TRUE, ...); should be:
glVertexAttribPointer(#, 4, GL_UNSIGNED_INT_2_10_10_10_REV, GL_TRUE, ...);
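For what it's worth, here is a minimal sketch of how one might pack a color into that layout on the CPU side before uploading it to the buffer object. The helper name `pack_unorm_2_10_10_10_rev` is my own invention, not from the tutorial; it assumes the standard bit layout for this type (x in bits 0-9, y in 10-19, z in 20-29, w in the top 2 bits):

```c
#include <stdint.h>

/* Hypothetical helper: packs four unsigned normalized floats in [0, 1]
 * into the GL_UNSIGNED_INT_2_10_10_10_REV bit layout:
 *   x -> bits 0-9, y -> bits 10-19, z -> bits 20-29, w -> bits 30-31. */
static uint32_t pack_unorm_2_10_10_10_rev(float x, float y, float z, float w)
{
    uint32_t xi = (uint32_t)(x * 1023.0f + 0.5f); /* 10-bit: 0..1023 */
    uint32_t yi = (uint32_t)(y * 1023.0f + 0.5f);
    uint32_t zi = (uint32_t)(z * 1023.0f + 0.5f);
    uint32_t wi = (uint32_t)(w * 3.0f + 0.5f);    /*  2-bit: 0..3    */
    return (wi << 30) | (zi << 20) | (yi << 10) | xi;
}

/* The packed values would then be fed to the attribute with
 * normalization enabled, e.g.:
 *   glVertexAttribPointer(attribIndex, 4, GL_UNSIGNED_INT_2_10_10_10_REV,
 *                         GL_TRUE, stride, offset);
 */
```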
Regards.