levinli303 opened this issue 4 years ago
On what hardware are you testing?
Can you launch with LIBGL_LOGSHADERERROR=1 to see if there are some FPE shaders that don't compile?
Also, can you try to git bisect the issue?
I am running it on an iPhone X. The shaders all seem to compile fine, no error log. I'll try to bisect starting from the iOS support commit.
Oh, an iPhone X, I will have some difficulties testing myself.
I see 2 commits that may cause the issue: 62c9db85746760d16f17c8a31b0e95384bd15c3d (but 1a30df34f6447fdbfc4a3de00c4a6185c2e1b00c is supposed to have fixed all potential issues) and 3ebf35dc84842be171b246e894a51552e27322bc (but of course it can be something else completely).
Glad to see gl4es is (almost) working on iOS :)
Running git bisect shows that 761dfe66aff0a636a0e75b9a23b563d156c3aca6 seems to be the bad commit.
Ah. That's interesting. Can you tell me how many vertex attribs and how many texture units gl4es detects? Things like this:
LIBGL: Max vertex attrib: 16
LIBGL: Extension GL_OES_standard_derivatives detected and used
LIBGL: Max texture size: 2048
LIBGL: Max Varying Vector: 8
LIBGL: Texture Units: 8/8 (hardware: 8), Max lights: 8, Max planes: 6
LIBGL: Max Draw buffers: 1
at the start, if you can redirect the gl4es logs somewhere.
Oh, it seems the hardware test needs to be enabled on iOS. In my commit 4d8d9092aceebca47f23859d3dc2bc077d089657, I had disabled it with gl4es_notest = 1 (re-enabling it is sketched below, after the log).
LIBGL: Max vertex attrib: 16
LIBGL: Extension GL_OES_standard_derivatives detected and used
LIBGL: Max texture size: 4096
LIBGL: Max Varying Vector: 8
LIBGL: Texture Units: 8/8 (hardware: 8), Max lights: 8, Max planes: 6
LIBGL: Extension GL_EXT_texture_filter_anisotropic detected and used
LIBGL: Max Anisotropic filtering: 16
LIBGL: Hardware vendor is Apple Inc.
Now it seems to be fine.
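For anyone hitting the same thing, here is a minimal sketch of the change described above. It assumes gl4es_notest is the host-program-overridable global mentioned in commit 4d8d9092; the exact declaration site in gl4es may differ:

// Sketch: gl4es_notest controls whether gl4es probes the hardware at init.
// 1 skips the probe (what the iOS commit did), 0 runs it (the fix here).
extern int gl4es_notest;   // assumed to be picked up by gl4es's init code

void configure_gl4es_for_ios(void)
{
    gl4es_notest = 0;  // run the hardware tests so real limits are detected
}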
Ah ok, great. Still, it seems the "no hardware test" path is a bit broken. I'll try to fix this.
I'll re-enable the tests on the Apple platform (unless you want to do a PR).
@ptitSeb I'll make a PR. Good to see it finally running without a problem on iOS.
Yes, that's good news \o/
I have the same issue with NOEGL, but the hardware tests depend on GetProcAddress.
@mittorn is that still the case with the new default value I pushed a few hours ago?
No, that was before. Textures work now, but without NPOT support it looks ugly (maybe an engine issue; it has had bad non-NPOT hardware support since 2018).
Ah yes, I tried the latest XashXT on the Pandora, which has no NPOT support, and yeah, it's not ideal. To get correct behaviour I needed LIBGL_NPOT=2 LIBGL_DEFAULT_WRAP=0 LIBGL_MIPMAP=3.
The issue is, if NPOT is supported in hardware, only the last one, LIBGL_MIPMAP=3, has a negative effect (I mean, it disables mipmaps where they would work correctly). Also, LIBGL_NPOT=2 was supposed to force LIBGL_MIPMAP=3, so maybe I just need to fix that (sketched below)...
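A minimal sketch of that fix; the struct and field names (npot, automipmap) are assumptions standing in for gl4es's internal env-var state, not its actual API:

// Hypothetical settings struct standing in for gl4es's env-var state.
typedef struct {
    int npot;        // LIBGL_NPOT: 2 = full NPOT (emulated if hardware lacks it)
    int automipmap;  // LIBGL_MIPMAP: 3 = ignore mipmaps entirely
} settings_t;

// Forcing full NPOT only behaves correctly without mipmaps, so
// LIBGL_NPOT=2 should imply LIBGL_MIPMAP=3 unless the user set it.
static void fixup_npot_settings(settings_t *s)
{
    if (s->npot == 2 && s->automipmap == 0)
        s->automipmap = 3;
}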
Does the custom renderer work on the Pandora? It may need updated shaders in game/glsl; the original shaders use implicit int-to-float casts. Also, I fixed the shadow shader that used the arb_shadow or shader4 extension; now it has a fallback. The original game data has shaders in core.pak, which may be overridden by files in the game directory.
Disabling NPOT breaks the menu buttons; it seems to be an engine bug (same behaviour on an old SiS GPU without NPOT). The world renders correctly because the engine resizes world textures to the nearest POT (see the sketch below).
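For reference, a minimal sketch of that kind of resize decision; the rounding helper is illustrative, not the engine's actual code:

// Round an NPOT dimension up to the next power of two, so a
// width x height texture can be rescaled before upload when the
// hardware lacks NPOT support.
static int next_pot(int v)
{
    int p = 1;
    while (p < v)
        p <<= 1;
    return p;
}

// e.g. a 200x96 world texture would be rescaled to 256x128.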
I'm unsure if it's active. Does it need some extra data inside the valve folder (or somewhere else)?
Game data with test maps is here: https://github.com/a1batross/XashXT_Original (clone with --depth 1). Original Half-Life with XashXT and the shaders from the game data should work, but without any extra features except skinning models on the GPU.
XashXT uses GL_MAX_VERTEX_UNIFORM_COMPONENTS_ARB and GL_MAX_VARYING_FLOATS_ARB, which are available only in GLES3 contexts; I replaced them with a GLES2 alternative in the last commit. Maybe it is useful in gl4es:
// check for hardware skinning
pglGetIntegerv( GL_MAX_VERTEX_UNIFORM_COMPONENTS_ARB, &glConfig.max_vertex_uniforms );
pglGetIntegerv( GL_MAX_VERTEX_ATTRIBS_ARB, &glConfig.max_vertex_attribs );
pglGetIntegerv( GL_MAX_VARYING_FLOATS_ARB, &glConfig.max_varying_floats );

if( glConfig.hardware_type == GLHW_RADEON && glConfig.max_vertex_uniforms > 512 )
    glConfig.max_vertex_uniforms /= 4; // only Radeon returns the count of floats, others return the count of vec4s

if( !glConfig.max_vertex_uniforms )
{
    pglGetIntegerv( GL_MAX_VERTEX_UNIFORM_VECTORS, &glConfig.max_vertex_uniforms );
    glConfig.max_vertex_uniforms *= 4;
}

if( !glConfig.max_varying_floats )
{
    pglGetIntegerv( GL_MAX_VARYING_VECTORS, &glConfig.max_varying_floats );
    glConfig.max_varying_floats *= 4;
}
If the float limits are unavailable (usually in GLES2 contexts), use the vector limit * 4.
Correct GPU skinning requires at least 512 uniforms; with 256, player and weapon models will break. I'll try to fall back to legacy code for these models later.
Mmmm, for now it doesn't run on the Pandora. It looks like the GL context is not created properly. I'll look into it later.
The 2 _ARB queries I can probably handle inside gl4es (a sketch follows)...
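A minimal sketch of how that could look in a glGetIntegerv wrapper, using the same vec4-to-float conversion as the snippet above; the wrapper name is illustrative, not gl4es's actual internals:

#include <GLES2/gl2.h>

// Desktop-GL enums that GLES2 headers don't define.
#ifndef GL_MAX_VERTEX_UNIFORM_COMPONENTS_ARB
#define GL_MAX_VERTEX_UNIFORM_COMPONENTS_ARB 0x8B4A
#endif
#ifndef GL_MAX_VARYING_FLOATS_ARB
#define GL_MAX_VARYING_FLOATS_ARB 0x8B4B
#endif

// Illustrative wrapper: answer the desktop float-count queries from the
// GLES2 vector-count limits (1 vec4 = 4 floats).
void wrapped_glGetIntegerv(GLenum pname, GLint *params)
{
    switch (pname) {
    case GL_MAX_VERTEX_UNIFORM_COMPONENTS_ARB:
        glGetIntegerv(GL_MAX_VERTEX_UNIFORM_VECTORS, params);
        *params *= 4;
        break;
    case GL_MAX_VARYING_FLOATS_ARB:
        glGetIntegerv(GL_MAX_VARYING_VECTORS, params);
        *params *= 4;
        break;
    default:
        glGetIntegerv(pname, params);
        break;
    }
}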
With the current HEAD, a simple texture mapping does not render correctly with ES2, but renders correctly on ES1. I tried with commit e7e7068299c0b4eb0311b36ef432a3b579e1a6bc from September, and it works well on both ES1 and ES2.