The OpenGL spec gives glVertexAttribPointer the following prototype:
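```c
/* as declared in the standard GL headers / reference pages */
void glVertexAttribPointer(GLuint index, GLint size, GLenum type,
                           GLboolean normalized, GLsizei stride,
                           const GLvoid* pointer);
```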
However, despite being declared as a pointer, the last parameter "is treated as a byte offset into the buffer object's data store." You can see the full description here.
What that means is that everyone has to cast the byte offset to a void pointer, like so:
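```c
/* illustrative values; the point is the (void*) cast on the final byte-offset argument */
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 6*sizeof(float), (void*)(3*sizeof(float)));
```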
I've always thought this was a stupid design decision that didn't make any sense, so when I implemented it for PortableGL I gave it the following prototype:
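```c
/* reconstructed from the description below: the last parameter is a plain
 * integer byte offset (currently a GLsizei), so no cast is needed */
void glVertexAttribPointer(GLuint index, GLint size, GLenum type,
                           GLboolean normalized, GLsizei stride, GLsizei offset);
```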
I've never really had cause to regret that, but in the process of porting the LearningOpenGL code, having to delete the cast on every call is a bit of an irritation. It occurred to me that maybe I should have matched the stupid spec and had a pgl extension version or macro wrapper that took a regular int. So if I went the macro route, it would look like this:
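```c
/* hypothetical sketch (macro name assumed): glVertexAttribPointer itself would match
 * the spec and take a void*, while the pgl wrapper takes a plain integer offset and
 * does the cast for you; going through intptr_t (from <stdint.h>) avoids the
 * int-to-pointer size warning */
#define pglVertexAttribPointer(index, size, type, normalized, stride, offset) \
	glVertexAttribPointer(index, size, type, normalized, stride, (const void*)(intptr_t)(offset))
```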
That way you could ignore it when porting if you wanted to, and still use the pgl version when writing code from scratch, for simplicity.
It seems like it'd be an easy change, and logically it is, but unfortunately the offset member is currently a GLsizei, which is an int32_t, so you get a warning or error casting from a 64-bit pointer. There are several solutions to that, but I think the one that obeys the spec (or at least the spirit of it) most closely would be to add the missing GL types GLintptr and GLsizeiptr, define them as intptr_t and uintptr_t, and use one of those for the offset (there's a rough sketch of that at the end of the post). This would work, but it just feels a little wrong and maybe not worth the effort. It also would break backward compatibility, but for a single header library designed to be statically compiled into projects, with a very small user base, that probably shouldn't be a huge factor.

I'm on the fence about it, and I'm in no hurry either way, so I figured I'd do an informal poll. I can't guarantee I'll end up deciding in favor of the majority, but I'd welcome any strong arguments for or against the change.
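For anyone curious, here's roughly what the type change mentioned above might look like (a sketch only, using the intptr_t/uintptr_t mapping described earlier and GLintptr for the offset, though either type would do):

```c
#include <stdint.h>

/* hypothetical sketch: add the missing pointer-sized GL types and
 * switch the offset parameter over to one of them */
typedef intptr_t  GLintptr;
typedef uintptr_t GLsizeiptr;

void glVertexAttribPointer(GLuint index, GLint size, GLenum type,
                           GLboolean normalized, GLsizei stride, GLintptr offset);
```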
Thanks!