Closed niner closed 9 years ago
The GL3 renderer wasn't written against the core context profile and will fail with Mesa drivers.
From a quick gDEBugger run, the following functions used by the GL3 renderer will have to be replaced by equivalent functionality:
| Function Name | Deprecation Reason | # of Calls | % of Calls | Deprecated at | Removed at |
|---|---|---|---|---|---|
| glBindTexture | Application Generated Object Names | 1108 | 1.07 | 3.0 | 3.1 |
| glBindBuffer | Application Generated Object Names | 42 | 0.04 | 3.0 | 3.1 |
| glTexParameteri | Texture Clamp Wrap Mode | 22 | 0.02 | 3.0 | 3.1 |
| glDisable | Fixed Function Fragment Processing | 1 | 0.00 | 3.0 | 3.1 |
| glEnable | Fixed Function Fragment Processing | 1 | 0.00 | 3.0 | 3.1 |
| glGetIntegerv | Fixed Function Fragment Processing | 1 | 0.00 | 3.0 | 3.1 |
| glGetIntegerv | Max Varying | 1 | 0.00 | 3.2 | None |
| glGetString | Unified Extension String | 1 | 0.00 | 3.0 | 3.1 |
glGetString is only used to log GPU vendor info; it can be removed or made optional (for non-core contexts).
With the latest commits only these remain:
- glBindTexture, Application Generated Object Names
- glBindBuffer, Application Generated Object Names
- glGetString, Unified Extension String
glBindTexture and glBindBuffer are false positives. glGetString(GL_EXTENSIONS) is not called by us but by SDL or GLEW.
glGetString(GL_EXTENSIONS) is called by GLEW, see http://sourceforge.net/p/glew/bugs/174/?page=1
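As background: in a core profile, glGetString(GL_EXTENSIONS) is gone and raises GL_INVALID_ENUM; extensions have to be enumerated one by one with glGetStringi(GL_EXTENSIONS, i). A minimal sketch of the lookup logic, with the driver calls stubbed out by a fake list (all names here are illustrative, not VDrift's code):

```cpp
#include <set>
#include <string>
#include <vector>

// Stand-in for the driver: in real code this data would come from
// glGetIntegerv(GL_NUM_EXTENSIONS, ...) and glGetStringi(GL_EXTENSIONS, i).
static const std::vector<std::string> fakeDriverExtensions = {
    "GL_ARB_framebuffer_object", "GL_ARB_texture_float"};

// Core-profile style: build a set by index instead of splitting one big string.
std::set<std::string> enumerateExtensions() {
    std::set<std::string> exts;
    for (std::size_t i = 0; i < fakeDriverExtensions.size(); ++i)
        exts.insert(fakeDriverExtensions[i]); // glGetStringi(GL_EXTENSIONS, i)
    return exts;
}

bool hasExtension(const std::set<std::string>& exts, const std::string& name) {
    return exts.count(name) != 0;
}
```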
There is another issue I overlooked: VDrift uses client-side vertex arrays for dynamic (or non-performance-critical) things like the GUI. As far as I understand, they are deprecated as well.
The https://github.com/VDrift/vdrift/tree/vertex branch replaces the current vertex code with a vertex buffer system.
Need to recheck what else remains to be fixed to support the core profile; we will have to replace GLEW eventually.
I've created a core branch which forces a 3.3 core profile. Seems to work on Windows. Linux and macOS testers are welcome.
The branch depends on the vertex data branch. The gl2 renderer cannot handle the core profile yet, so this is gl3-only for now.
Seems to select the 3.3 core profile correctly, but then stuff goes wrong:
INFO: Multi-processor system detected. Run with -multithreaded argument to enable multithreading (EXPERIMENTAL).
INFO: Starting VDrift: 2012-07-22a, Revision: gc29d978, O/S: OS X
INFO: Home directory: /Users/timothyfurlong
INFO: Settings file: /Users/timothyfurlong/Library/Application Support/VDrift/VDrift.config
INFO: Data directory: /Users/timothyfurlong/Library/Developer/Xcode/DerivedData/vdrift-boldpqznhlzvtjbfolhqhlyhwziv/Build/Products/Debug/VDrift.app/Contents/Resources/data
INFO: Temporary directory: /Users/timothyfurlong/Library/Application Support/VDrift/tmp
INFO: Log file: /Users/timothyfurlong/Library/Application Support/VDrift/log.txt
INFO: Disabling antialiasing
INFO: Using GLEW 1.10.0
INFO: Video card information:
GL Vendor: Intel Inc.
GL Renderer: Intel HD Graphics 3000 OpenGL Engine
GL Version: 3.3 INTEL-10.0.16
Texture units: 16
Maximum texture size: 8192
ERROR: OpenGL error "invalid enumerant" during: bool GLWrapper::initialize():/Users/timothyfurlong/Documents/Development/VDrift/VDrift/vdrift/src/graphics/gl3v/glwrapper.cpp:55
ERROR: OpenGL error "invalid operation" during: Cubemap creation
INFO: Loading /Users/timothyfurlong/Library/Developer/Xcode/DerivedData/vdrift-boldpqznhlzvtjbfolhqhlyhwziv/Build/Products/Debug/VDrift.app/Contents/Resources/data/shaders/gl3/deferred.conf...
INFO: Loaded /Users/timothyfurlong/Library/Developer/Xcode/DerivedData/vdrift-boldpqznhlzvtjbfolhqhlyhwziv/Build/Products/Debug/VDrift.app/Contents/Resources/data/shaders/gl3/deferred.conf
ERROR: Unable to compile shader depthcopy.frag from file /Users/timothyfurlong/Library/Developer/Xcode/DerivedData/vdrift-boldpqznhlzvtjbfolhqhlyhwziv/Build/Products/Debug/VDrift.app/Contents/Resources/data/shaders/gl3/depthcopy.frag:
ERROR: 0:13: Use of undeclared identifier 'gl_FragColor'
ERROR: Initialization of GL3 renderer failed; that's OK, falling back to GL 1 or 2
Thanks Timo, that's much better :P. Windows drivers don't seem to be too strict about the core profile. I'll be testing a bit later under Linux with the open source drivers. Hopefully I'll hit the same bugs.
Linux/Mesa only needed a tiny fix: remove the invariant qualifier from out vec4 outputColor; in lightcompositing.frag
INFO: Video card information:
GL Vendor: X.Org
GL Renderer: Gallium 0.4 on AMD RV770
GL Version: 3.3 (Core Profile) Mesa 10.1.5
Texture units: 16
Maximum texture size: 8192
ERROR: OpenGL error "invalid enumerant" during: bool GLWrapper::initialize():/home/nan/vdrift/src/graphics/gl3v/glwrapper.cpp:55
INFO: Loading /run/media/nan/dev/vdrift/vdrift/data//shaders/gl3/deferred.conf...
INFO: Loaded /run/media/nan/dev/vdrift/vdrift/data//shaders/gl3/deferred.conf
INFO: GL3 initialization successful
Even though the rendering is too dark, it works: http://i.imgur.com/mf2vkEA.jpg
I will try a more recent Mesa build.
With Mesa 10.3 I am getting the same output as on Windows: http://vdrift.net/Forum/showthread.php?tid=1825
@Timo6 I've been looking for something akin to apitrace (an OpenGL debugger) on Mac, and it seems there is an OpenGL Profiler: https://developer.apple.com/library/mac/technotes/tn2178/_index.html#//apple_ref/doc/uid/DTS40007990
The idea is to run VDrift with it and capture a call trace ("Launching an application within Profiler") to get a better idea of where the driver hiccups.
The longest trace I could get (trying to collect it caused VDrift to crash after 'falling back to GL 1 or 2' in log.txt, as opposed to exiting gracefully as it does when launching normally) is here: http://pastebin.com/2GwZXFCV, and with 'Include Backtraces' ticked: http://pastebin.com/kMAxgcEL
Thanks, it is looking quite good. The first two errors, INVALID_ENUM and INVALID_OPERATION, are non-critical and easy to fix. It seems to choke on the sixth shader, depthcopy.frag.
Looking it up, gl_FragColor is actually deprecated; I should have double-checked it the first time.
At first look the fix would be to comment out the gl_FragColor line, as the shader should not output color anyway.
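For reference, when a core-profile shader does need to write color, the general replacement is an explicitly declared output instead of the removed gl_FragColor builtin. A minimal sketch, with an illustrative output name:

```glsl
#version 330 core
out vec4 fragColor;   // explicit output replaces the removed gl_FragColor
void main() {
    // depthcopy only needs the depth write, so in that shader the color
    // write can simply be dropped; shown here only for completeness:
    fragColor = vec4(0.0);
}
```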
@Timo6 I've pushed some fixes, would be cool if you could do another trace to see how far we get.
Sorry for the delay, been very busy and had some computer problems (so my results may not be very reliable as I've downgraded my OS among other things). With 4b442f7 and r1292 VDrift launches without any errors in the log, but the menu is running at 4fps on lowest settings. Trace (of launching, starting a race then quitting as quickly as possible to keep the trace size down!): https://gist.github.com/Timo6/484260f5b544aafa9e52, with backtraces: https://gist.github.com/Timo6/ade7ae19946a2047ec03.
Thanks. It is a start :)
What is the performance of a release build without profiling?
Also I assume gl2 deferred ran on your machine before (not core profile)? What was the performance there?
Yep, gl2 deferred on latest master (a8bd84f) gives 60fps on menus, and 40-65 (usually 60) in game on the same (lowest settings).
Digging through the trace I see a single draw call eating most of the frame time: 97565.11 µs glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_INT, 0x00000078);
1/10 of a second is a lot of time to spend there. I assume it is lightaccumulate.frag (the heaviest shader), but I need to compare with my traces first to be sure.
The HD 3000 has 12 execution units, > 100 GFLOPS I think; need to check how much time my AMD card spends on this shader, assuming shader complexity is the problem here ;)
I did some testing with a GeForce 310M (~70 GFLOPS). I get 10 fps with shadows and reflections maxed out at 720p. I think you should get at least about the same numbers.
My current guess is that Apple's GLSL compiler might be having issues with the shader, maybe hitting some fallback. Need to think of some way to test it, maybe try some simplified variations of the shader in question.
PS: Another thing I've noticed is that the nVidia and my AMD card render asynchronously, with draw calls taking 40-80 µs max.
I've merged core profile patches into vertex branch. Define GL3CORE to enable it.
Been doing some more testing with 11b0990 and r1302. On lowest gl3 settings menus run at up to 7 fps. Will get some traces later. Max settings cause a fall back to gl2/basic.conf with errors in gl3/blur.frag and gl2/deferredshadows.frag:
ERROR: Unable to compile shader blur.frag BOX HORIZONTAL from file /Users/timothyfurlong/Documents/Development/Data/vdrift-data/shaders/gl3/blur.frag:
ERROR: 0:12: Attempt to redeclare 'textureSize' as a variable
ERROR: 0:40: Attempt to use 'textureSize' as a variable
ERROR: 0:57: Use of undeclared identifier 'invViewportSize'
ERROR: 0:67: Use of undeclared identifier 'invViewportSize'
ERROR: ----- Start Shader Compile Log for /Users/timothyfurlong/Documents/Development/Data/vdrift-data/shaders/gl2/deferredshadows.frag -----
ERROR: ERROR: 0:56: Call to undeclared function 'shadow2D'
ERROR: 0:58: Use of undeclared identifier 'notshadowfinal'
ERROR: ----- End Shader Compile Log -----
ERROR: ----- Start Shader Link Log for /Users/timothyfurlong/Documents/Development/Data/vdrift-data/shaders/gl2/deferredshadows.vert and /Users/timothyfurlong/Documents/Development/Data/vdrift-data/shaders/gl2/deferredshadows.frag -----
ERROR: ERROR: One or more attached shaders not successfully compiled
ERROR: ----- End Shader Link Log -----
ERROR: Shader compilation failure: /Users/timothyfurlong/Documents/Development/Data/vdrift-data/shaders/gl2/deferredshadows.vert and /Users/timothyfurlong/Documents/Development/Data/vdrift-data/shaders/gl2/deferredshadows.frag
Thanks Timo. I'll look into it.
shadow2D is deprecated after GLSL version 120.
Why the hell does it not fail on linux/windows :P
@Timo6 fixes for textureSize and shadow2D are in master
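On the textureSize error: textureSize() became a builtin function in GLSL 1.30, so a uniform with that name now collides with it. Assuming that's what blur.frag did, the obvious fix is to rename the uniform, or drop it and query the builtin; a hedged sketch with illustrative names:

```glsl
#version 330 core
uniform sampler2D source;
// old: uniform vec2 textureSize;  // now clashes with the textureSize() builtin
uniform vec2 sourceSize;           // renamed to avoid the clash

vec2 texelStep() {
    // alternatively, skip the uniform and ask the builtin directly:
    return 1.0 / vec2(textureSize(source, 0));
}
```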
With f480320 and r1306 gl3 shaders compile fine, gl2 gives:
ERROR: ----- Start Shader Compile Log for /Users/timothyfurlong/Documents/Development/Data/vdrift-data/shaders/gl2/deferredshadows.frag -----
ERROR: ERROR: 0:57: Swizzle of non-vector primitive float
ERROR: 0:59: Use of undeclared identifier 'notshadowfinal'
ERROR: ----- End Shader Compile Log -----
And the return value of a shadow sampler is a float. :)
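Both shadow-map errors trace back to the GLSL 1.30 changes: shadow2D() was folded into the overloaded texture() call, and with a sampler2DShadow it already returns a float, so the .x swizzle has to go too. A sketch:

```glsl
#version 330 core
uniform sampler2DShadow shadowMap;

float shadowSample(vec3 coord) {
    // GLSL 1.20: shadow2D(shadowMap, coord).x
    return texture(shadowMap, coord); // returns a float directly, no swizzle
}
```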
Fix is in master. Thanks again for your tireless testing, Timo.
Nice, gl2 working great now - I can turn the settings up much higher than I ever could before :smiley:
Hey @Timo6 I've pushed a noglew branch. It replaces glew/glu dependencies with a custom loader (graphics/glcore.h and graphics/glcore.cpp).
Would be awesome if you could give it a try on OSX. Seems to work quite well on Linux and Windows so far.
gl3 is fine, gl2 falls back to basic.conf:
INFO: Request OpenGL 3.3 Core Profile context.
INFO: Disabling antialiasing
INFO: Disabling vertical synchronization.
INFO: Video card information:
GL Vendor: Intel Inc.
GL Renderer: Intel HD Graphics 3000 OpenGL Engine
GL Version: 3.3 INTEL-8.24.15
Texture units: 16
Maximum texture size: 8192
INFO: Maximum anisotropy: 16
INFO: Maximum color attachments: 8
INFO: Maximum draw buffers (1 required): 8
INFO: Graphics card doesn't support framebuffer objects.
INFO: Fall back to: basic.conf
INFO: Renderer: /Users/timothyfurlong/Documents/Development/Data/vdrift-data/shaders/gl2/basic.conf
Interesting, ARB_framebuffer_object has been a core feature since 3.0. I assume the driver doesn't expose it as an ARB extension any more. Need to change the extension loading code to handle this: just enable extensions that are core.
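The handling described above could look roughly like this: treat a feature as supported when the context version already includes it in core, regardless of whether the driver still lists the ARB extension. The helper name and signature are hypothetical, not VDrift's actual loader code:

```cpp
// Returns true if a feature that entered core in GL (coreMajor, coreMinor)
// should be considered available: either the context is new enough, or the
// driver still advertises the corresponding extension.
bool featureSupported(int ctxMajor, int ctxMinor,
                      int coreMajor, int coreMinor,
                      bool listedAsExtension) {
    const bool inCore = ctxMajor > coreMajor ||
        (ctxMajor == coreMajor && ctxMinor >= coreMinor);
    return inCore || listedAsExtension;
}
```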
Now it doesn't like floating point textures:
INFO: Request OpenGL 3.3 Core Profile context.
INFO: Disabling antialiasing
INFO: Disabling vertical synchronization.
INFO: Video card information:
GL Vendor: Intel Inc.
GL Renderer: Intel HD Graphics 3000 OpenGL Engine
GL Version: 3.3 INTEL-10.0.18
Texture units: 16
Maximum texture size: 8192
INFO: Maximum anisotropy: 16
INFO: Maximum color attachments: 8
INFO: Maximum draw buffers (1 required): 8
INFO: Initialized render output: full_scene_depth (FBO)
ERROR: Your video card doesn't support floating point textures.
ERROR: Failed to load render output: full_scene_color 2D
INFO: Fall back to: basic.conf
INFO: Renderer: /Users/timothyfurlong/Library/Developer/Xcode/DerivedData/vdrift-boldpqznhlzvtjbfolhqhlyhwziv/Build/Products/Debug/VDrift.app/Contents/Resources/data/shaders/gl2/basic.conf
Yeah, sorry about that one, silly copy/paste induced bug.
Cool, that works :)
Closing as fixed. Please open a new issue if you still have any OpenGL-related problems.
Mesa's r600 driver now supports OpenGL 3.3 as indicated by glxinfo:
OpenGL vendor string: X.Org
OpenGL renderer string: Gallium 0.4 on AMD REDWOOD
OpenGL core profile version string: 3.3 (Core Profile) Mesa 10.2.0-devel
OpenGL core profile shading language version string: 3.30
OpenGL core profile context flags: (none)
OpenGL core profile profile mask: core profile
But VDrift still only detects GL version 3.0, since it does not set the GL context to core:
INFO: GL Renderer: Gallium 0.4 on AMD REDWOOD
INFO: GL Vendor: X.Org
INFO: GL Version: 3.0 Mesa 10.2.0-devel
INFO: Initialized GLEW 1.9.0
ERROR: Graphics card or driver does not support required GL_VERSION_3_3
ERROR: Initialization of GL3 failed; that's OK, falling back to GL 1 or 2
INFO: Video card information:
Vendor: X.Org
Renderer: Gallium 0.4 on AMD REDWOOD
Version: 3.0 Mesa 10.2.0-devel
Maximum texture size: 16384
Maximum varying floats: 128
Using GLEW 1.9.0
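This matches how context creation works: without explicitly requesting a core profile, Mesa hands back a 3.0 compatibility context, and the version check then parses exactly that out of the GL_VERSION string. A sketch of such a parse (the helper name is hypothetical):

```cpp
#include <cstdio>
#include <utility>

// Parse the leading "major.minor" out of a GL_VERSION string such as
// "3.0 Mesa 10.2.0-devel" or "3.3 (Core Profile) Mesa 10.1.5".
std::pair<int, int> parseGLVersion(const char* versionString) {
    int major = 0, minor = 0;
    if (std::sscanf(versionString, "%d.%d", &major, &minor) != 2)
        return {0, 0};
    return {major, minor};
}
```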
When forcing the GL version to 3.3 using MESA_GL_VERSION_OVERRIDE=3.3, VDrift fails to initialize OpenGL with the following error message:
ERROR: OpenGL error "invalid enumerant" during: bool GLWrapper::initialize():src/gl3v/glwrapper.cpp:67
The game hangs on startup with a black screen, spewing OpenGL error "invalid enumerant" and OpenGL error "invalid operation" messages on the console.