Closed hugoam closed 2 years ago
This would ideally be turned into a Quest-specific driver workaround, i.e. something that's handled by Magnum itself and not an application-specific "patch". I had to do quite a few of these for Mac GL drivers, for Intel drivers on Windows, some for ARM Mali drivers, etc., so it would be nothing uncommon. Plus that way Magnum itself can benefit from that, instead of the bug having to be fixed again by every app using Magnum on the Quest.
Can you elaborate on what exactly is the bug and what needs to be done to circumvent it?
@mosra Unfortunately, this is all on WebGL, and while detecting the Quest might be possible somehow, it is super inconvenient. Magnum currently doesn't have any framework for detecting browser versions etc., and even then, with the upcoming Quest Pro, that would no longer be sufficient to detect the Quest 1/2 here 🤔
Yes, but still -- I have to deal with Quest as well, and any Quest bugs that aren't worked around by Magnum directly are viewed by certain end users as "Magnum is broken", which is not the case.
Thus I'd really like to have this handled directly here, even if it would ultimately mean one has to explicitly tell the library "this is a Quest, do your best". :D
Here is a link to the Oculus bug: https://forums.oculusvr.com/t5/Quest-Development/WebGL-glUniform1iv-driver-bug-when-setting-sampler-texture-units/m-p/985095#M4929
Circumventing the bug would basically involve using `glUniform1i` instead of `glUniform1iv` in the `AbstractShaderProgram::setUniform(Int, Int)` signature, the latter being broken. There's also an argument that going through the former would be more optimized, especially in the WASM <-> JS <-> driver round trip, so maybe it would even be simpler to switch to `glUniform1i` altogether instead of adding a driver-specific fix?
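To illustrate the workaround, the dispatch could look roughly like this. The GL entry points below are stubbed out so the selection logic can be shown (and exercised) without a live GL context, and the `setUniform` wrappers are a sketch of the idea, not the actual Magnum implementation:

```cpp
#include <vector>

// Stubs standing in for the real GL entry points, recording which path
// was taken. On the Quest browser the *v variant is the one misbehaving,
// per the linked Oculus report.
enum class Called { None, Scalar, Array };
inline Called lastCall = Called::None;

void glUniform1i(int /*location*/, int /*value*/) { lastCall = Called::Scalar; }
void glUniform1iv(int /*location*/, int /*count*/, const int* /*values*/) {
    lastCall = Called::Array;
}

// Sketch of the workaround: route single-value uniform uploads through the
// non-pointer glUniform1i(), keeping glUniform1iv() only for real arrays.
void setUniform(int location, int value) {
    glUniform1i(location, value);            // non-pointer variant
}
void setUniform(int location, const std::vector<int>& values) {
    if(values.size() == 1)
        glUniform1i(location, values[0]);    // avoid the broken *v path
    else
        glUniform1iv(location, int(values.size()), values.data());
}
```

Besides dodging the driver bug, the non-pointer variant also skips marshalling a typed-array view through the Emscripten glue, which is where the garbage-generation argument comes from.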
> using `glUniform1i` instead of `glUniform1iv`
:100: Yes! This is something that's been on my Emscripten TODO list for quite a while, together with all the other variants -- basically every time there's just one number / vector instead of an array, it should use the non-pointer version. It won't matter much natively, but in the case of WebGL it would avoid generating garbage on every call. If I count correctly, that's a set of new `uniform*ImplementationDefault` variants, which `setUniform()` would use and which would call into the non-`v` `glUniform*()` APIs instead of delegating into the array variant.

Actually, I think I count only 4 new useful variants:
- `glUniform1f`
- `glUniform1i`
- `glUniform1ui`
- `glUniform1d`
The rest (`2`, `3` and `4`) are probably not useful since there is no way to reach them from the current API, I believe (all the functions taking array views just map naturally to the `v` variants).

EDIT: Ok, I found one API function that could map to the `2`, `3` and `4` variants, but it would have to be detemplated:
```cpp
template<std::size_t size, class T> void setUniform(Int location, const Math::Vector<size, T>& value);
```
Could become:
```cpp
template<class T> void setUniform(Int location, const Math::Vector<2, T>& value); // maps to glUniform2x
template<class T> void setUniform(Int location, const Math::Vector<3, T>& value); // maps to glUniform3x
template<class T> void setUniform(Int location, const Math::Vector<4, T>& value); // maps to glUniform4x
```
And those would map to the 4*3 additional variants, e.g. `glUniform2f`.
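A rough sketch of how those detemplated overloads could dispatch; the `Vector` type and the GL entry points here are simplified stand-ins (not the real Magnum / GL declarations), and only the `float` case is shown:

```cpp
#include <array>
#include <cstddef>
#include <string>

// Minimal stand-in for Math::Vector, just enough to show the overload split;
// the real Magnum type is considerably richer.
template<std::size_t size, class T> struct Vector {
    std::array<T, size> data;
};

// Stub GL entry points recording which one would be hit.
inline std::string lastGlCall;
void glUniform2f(int, float, float)        { lastGlCall = "glUniform2f"; }
void glUniform3f(int, float, float, float) { lastGlCall = "glUniform3f"; }

// Detemplated overloads, as suggested above: one per vector size, each
// mapping to the matching non-pointer glUniform*() variant.
void setUniform(int location, const Vector<2, float>& v) {
    glUniform2f(location, v.data[0], v.data[1]);
}
void setUniform(int location, const Vector<3, float>& v) {
    glUniform3f(location, v.data[0], v.data[1], v.data[2]);
}
```

Repeating this for `Int`, `UnsignedInt` and `Double` is what gives the 4*3 additional variants mentioned above.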
Yep, we can close this now, thanks a lot :)
Hello,

We have to circumvent a bug on the Oculus Quest which requires us to call a GL function directly on the program of a `GL::AbstractShaderProgram`, which in turn requires the program to be bound. Unfortunately, `GL::AbstractShaderProgram::use()` is private, so we can't call it from the derived class, and we can't call `glUseProgram()` ourselves either without triggering an inconsistency with Magnum's internal state (which would not know we changed the currently bound program, risking more silent bugs down the line).

Would making `GL::AbstractShaderProgram::use()` protected make sense, to allow more fine-grained control from the derived class? Or do you have another suggestion to solve this use case?

Thanks :)
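For what it's worth, one way an application can make a raw `glUseProgram()` call without leaving stale driver state behind is to save and restore the previously bound program around it. A minimal sketch with stubbed GL calls (the `ScopedProgram` guard is a hypothetical helper, not a Magnum API, and this still leaves Magnum's own cache unaware of the temporary rebind during the scope):

```cpp
// Stub GL state and entry points, so the restore logic runs without a
// context. GL_CURRENT_PROGRAM is the real GL query name, stubbed here.
constexpr int GL_CURRENT_PROGRAM = 0x8B8D;

inline unsigned currentProgram = 0;   // driver-side "currently bound" state

void glUseProgram(unsigned id) { currentProgram = id; }
void glGetIntegerv(int pname, int* out) {
    if(pname == GL_CURRENT_PROGRAM) *out = int(currentProgram);
}

// RAII guard: binds a program for the duration of a raw GL call, then
// restores whatever was bound before, so cached state stays truthful
// after the scope ends.
struct ScopedProgram {
    int previous{};
    explicit ScopedProgram(unsigned id) {
        glGetIntegerv(GL_CURRENT_PROGRAM, &previous);
        glUseProgram(id);
    }
    ~ScopedProgram() { glUseProgram(unsigned(previous)); }
};
```

The extra `glGetIntegerv()` round trip is exactly the kind of cost Magnum's state tracking exists to avoid, which is why a workaround inside the library itself is the cleaner fix.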