Closed: ronjouch closed this issue 9 years ago
Can you provide your distro and information on what graphics card(s) you're using?
> Can you provide your distro and information on what graphics card(s) you're using?
Sure! Arch Linux with an AMD chip using the free ati drivers. I can run WebGL in the browser fine, and warsow runs well.
~/Glitter/Build/Glitter ± master uname -a
Linux x 4.1.6-1-ARCH #1 SMP PREEMPT Mon Aug 17 08:52:28 CEST 2015 x86_64 GNU/Linux
~/Glitter/Build/Glitter ± master lspci | grep -i radeon
02:00.0 VGA compatible controller: Advanced Micro Devices, Inc. [AMD/ATI] Madison [Mobility Radeon HD 5730 / 6570M]
Any chance you can step through using the debugger and check where exactly it crashes? e.g. is mWindow a valid window pointer?
@Polytonic I'd like to, but the Glitter binary has no debug symbols; I can't set a breakpoint and I can't print the locals at the time of the segfault. Can you help me find my way around cmake to generate a debug build?
EDIT: trying with -DCMAKE_BUILD_TYPE=Debug, compiling...
For gcc, add -g to the compiler flags here. Alternatively, you should be able to pass a flag when you run cmake: cmake -DCMAKE_BUILD_TYPE=Debug .., or add the line set(CMAKE_BUILD_TYPE Debug) to CMakeLists.txt somewhere near the top.
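Putting those two suggestions together, the CMake side might look like this (a sketch; note that CMAKE_BUILD_TYPE=Debug already implies -g for gcc/clang, so the explicit flag line is optional):

```cmake
# Near the top of CMakeLists.txt: default to a debug build when the user
# didn't pass -DCMAKE_BUILD_TYPE=... on the command line.
if(NOT CMAKE_BUILD_TYPE)
    set(CMAKE_BUILD_TYPE Debug)
endif()

# Optional: make the debug symbols explicit for gcc/clang.
set(CMAKE_CXX_FLAGS_DEBUG "${CMAKE_CXX_FLAGS_DEBUG} -g")
```

After rebuilding, gdb should be able to set breakpoints and print locals.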
Unfortunately, this does not compile for me using gcc-5 (Homebrew gcc5 5.2.0) on OS X, so I can't reproduce your issue locally. I think @metiulekm was using MinGW under Windows though -- not sure if he's seen this before.
Without seeing more, my initial guess is that your computer is failing to create a valid OpenGL 4.1 context.
My debug build finished compiling, and you were right: at the start of the render loop, mWindow is a null pointer. Ideas?
~/Glitter/Build/Glitter ± master gdb ./Glitter
(gdb) b main.cpp:24
Breakpoint 1 at 0x44e03b: file /home/ronj/Glitter/Glitter/Sources/main.cpp, line 24.
(gdb) run
Starting program: /home/ronj/Glitter/Build/Glitter/Glitter
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/usr/lib/libthread_db.so.1".
[New Thread 0x7fffeeca7700 (LWP 30236)]
Breakpoint 1, main (argc=1, argv=0x7fffffffe458) at /home/ronj/Glitter/Glitter/Sources/main.cpp:24
24 glfwMakeContextCurrent(mWindow);
(gdb) p mWindow
$1 = (GLFWwindow *) 0x0
(gdb) n
25 gladLoadGL();
(gdb) n
26 fprintf(stderr, "OpenGL %s\n", glGetString(GL_VERSION));
(gdb) n
OpenGL (null)
29 while (glfwWindowShouldClose(mWindow) == false) {
(gdb) p mWindow
$2 = (GLFWwindow *) 0x0
(gdb) n
Program received signal SIGSEGV, Segmentation fault.
0x000000000046cbe2 in glfwWindowShouldClose (handle=0x0)
at /home/ronj/Glitter/Glitter/Vendor/glfw/src/window.c:410
410 return window->closed;
> Unfortunately, this does not compile for me using gcc-5 (Homebrew gcc5 5.2.0) on OS X, so I can't reproduce your issue locally.
I have clang installed too; might using it change anything? Is it easy to tell cmake to switch to it?
See this SO post, or if you're in bash, you can try:
export CC=/path/to/clang
export CXX=/path/to/clang++
This one is more of a moonshot, but another thing you can try is to use the slightly older OpenGL 3 instead.
glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3);
glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 3);
> This one is more of a moonshot, but another thing you can try is to use the slightly older OpenGL 3 instead.
> glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3); glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 3);
Hell of a moonshot :+1:, that worked! Thanks! No more segfault, stdout now says "OpenGL 3.3 (Core Profile) Mesa 10.6.5" and I have a nice gray screen, which I can set to blue by changing the glClearColor line \o/
If that's what's expected, what would be best to help people stuck in the same corner? Documentation? A little assert mWindow != nullptr and/or a cout << "couldn't initialize GL context, please try with older version like MAJOR=3 and MINOR=3"?
Also, btw, how do production apps/games handle this? Do they try the most recent context and successively fall back to older versions until they get a non-null mWindow?
You seem to be using the Mesa (software) OpenGL renderer, which will work for going through the tutorials, but is probably very slow. You have a discrete GPU in your system, so I'm guessing something is misconfigured with your AMD drivers. Glitter should be printing a string that says OpenGL #.# AMD {Driver Revision}. That, or I misunderstood how the AMD drivers work.
I could insert an assert(mWindow != nullptr), but most people should have graphics cards that can run OpenGL 4 just fine. Production games/apps typically standardize on the lowest common denominator OpenGL context that has the features they need, typically 4.1 or 3.3 depending on the age of the game. Depending on the engine, it'll do just as you described: keep trying until it finds a valid context.
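The fallback approach described above can be sketched generically. Everything here is hypothetical (the helper name and the callback are not from Glitter); in real code tryCreate(major, minor) would wrap glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR/MINOR) followed by glfwCreateWindow, returning the GLFWwindow* or nullptr:

```cpp
#include <cstdio>
#include <utility>
#include <vector>

// Hypothetical helper: try context versions from newest to oldest and
// return the first context that is successfully created, or nullptr if
// none of the requested versions is available.
template <typename TryCreate>
auto createBestContext(const std::vector<std::pair<int, int>>& versions,
                       TryCreate tryCreate) -> decltype(tryCreate(0, 0)) {
    for (const auto& v : versions) {
        if (auto ctx = tryCreate(v.first, v.second)) {
            std::fprintf(stderr, "Created OpenGL %d.%d context\n",
                         v.first, v.second);
            return ctx;
        }
    }
    return nullptr;  // no requested version was available
}
```

On the machine in this thread, createBestContext({{4, 1}, {3, 3}}, tryCreate) would have failed on 4.1 and fallen back to the 3.3 context that Mesa can provide.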
I'll be around for at least a few more hours if you have any questions or get stuck trying to switch to the AMD renderer. Having that info would probably be helpful for other people on Linux as well.
> You have a discrete GPU in your system, so I'm guessing something is misconfigured with your AMD drivers. Glitter should be printing a string that says OpenGL #.# AMD {Driver Revision}. That, or I misunderstood how the AMD drivers work.
Hmmm, strange: I have the packages described by archwiki/amd/installation and I think other games are correctly using the driver (going to check with something more intensive than Warsow). Checking that and trying to enable the AMD renderer.
I confirm Warsow uses my AMD hardware; there's no way my CPU would stay silent fullscreen with AA and all options on, and below is what the game console says, explicitly mentioning Gallium 0.4 on AMD REDWOOD. Any idea what could cause this game to use my hardware but prevent Glitter from doing the same? Config I could tweak? I'm waaay beyond my knowledge here.
----- R_Init -----
Using libGL.so.1 for OpenGL...Display initialization
..Xrandr Extension Version 1.4
..XFree86-Xinerama Extension Version 1.1
..Got colorbits 24, depthbits 24, stencilbits 8
...setting fullscreen mode 1600x900:
GL_VENDOR: X.Org
GL_RENDERER: Gallium 0.4 on AMD REDWOOD
GL_VERSION: 3.0 Mesa 10.6.5
GL_SHADING_LANGUAGE_VERSION: 1.30
GL_EXTENSIONS: GL_ARB_multisample GL_EXT_abgr GL_EXT_bgra GL_EXT_blend_color GL_...<long long list>
GL_MAX_TEXTURE_SIZE: 16384
GL_MAX_TEXTURE_UNITS: 8
GL_MAX_CUBE_MAP_TEXTURE_SIZE: 16384
GL_MAX_TEXTURE_MAX_ANISOTROPY: 16
GL_MAX_VARYING_FLOATS: 128
GL_MAX_VERTEX_UNIFORM_COMPONENTS: 16384
GL_MAX_VERTEX_ATTRIBS: 16
GL_MAX_FRAGMENT_UNIFORM_COMPONENTS: 16384
mode: 1600x900, fullscreen, widescreen
[... other non-video init...]
Looks like it actually is using Mesa: GL_VERSION: 3.0 Mesa 10.6.5
I wonder if that's just how they implemented the free AMD driver. I've never worked with the free AMD drivers before though, so I have no idea here. If you really do have a 6750M though, I know that card can do OpenGL 4; I had that card for several years with no problems. Something seems off ...
Yeah, my CPU usage while Warsow is running (on a 4-year-old Dell XPS 1645 laptop) is at 4%; it seems doubtful the CPU would handle the load so effortlessly, doesn't it? You must be right, it must be the way it's implemented.
Looks like the free AMD driver is stewarded by Mesa, hence the Mesa designation. I'm not sure why it isn't giving you an OpenGL 4 context; this sounds like it's beyond me though. Might be worth filing a ticket with Mesa, or whoever maintains the driver you're using.
@ronjouch I added a check for a valid GL context. It should warn you now. Let me know if this helps!
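The check presumably amounts to something like the following sketch (the function name is hypothetical, and the const void* stands in for the GLFWwindow* that glfwCreateWindow returns; this is not the actual commit):

```cpp
#include <cstdio>

// Hypothetical guard: warn instead of segfaulting later in the render loop.
// 'window' stands in for the GLFWwindow* returned by glfwCreateWindow,
// which is nullptr when context creation fails.
static bool contextIsValid(const void* window) {
    if (window == nullptr) {
        std::fprintf(stderr,
                     "Failed to create OpenGL context; your driver may not "
                     "support the requested version -- try 3.3 via "
                     "glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR/MINOR)\n");
        return false;
    }
    return true;
}
```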
> Might be worth filing a ticket with Mesa, or whoever maintains the driver you're using.
Done at freedesktop#91802. Hey, I have no idea what I'm doing; can you check what I said?
> @ronjouch I added a check for a valid GL context. It should warn you now. Let me know if this helps!
Great, thanks! Nit: did you keep the error message generic (no mention of the context version) because mentioning the version might mislead people who actually have a completely different problem?
Well, that was fast, here's what the Mesa maintainer says:
> Mesa 10.6.x provides up to OpenGL 3.3 for a lot of various GPUs:
> http://people.freedesktop.org/~imirkin/glxinfo/glxinfo.html
> Your GPU fits into the Evergreen/NI category. See http://xorg.freedesktop.org/wiki/RadeonFeature/#index5h2 .
> Starting with Mesa 11.0, OpenGL 4.1 will be provided for GPUs driven by the nvc0 and radeonsi drivers (that would be NVIDIA Fermi, Kepler and AMD Southern and Sea Islands GPUs... I guess Vulcanic Islands GPUs will get it too).
> Your GPU will not have GL 4.1 with Mesa 11.0, as it's still missing support for fp64 and tessellation. For future reference, this is the 'r600g' driver.
@Polytonic letting you consider whether that affects your choice of 4.1 because "most people should have graphics cards that can run OpenGL 4 just fine". Thanks again for the fast feedback and for the project!
> Great, thanks! Nit: did you keep the error message generic (no mention of the context version) because mentioning the version might mislead people who actually have a completely different problem?
Yeah, there are a number of reasons why context creation might fail, and not all of them are version related. That just happens to be a common one, but not the only one.
I'll think about rolling back to GL 3.3, but most cards that support modern GL should be 4+ compatible. In this case it seems like a shortcoming of the Mesa driver. I'll probably update this in the morning, when I've had some time to think this over.
Hopefully you should be unblocked with regards to using Glitter though. :smile:
Hi @Polytonic, here's a followup to this HN discussion. Below is what I am doing and the start of a debugging session.
Feel free to ask for more debug info and, again, I know nothing about OpenGL and might be doing something wrong; sorry if the problem is between the keyboard and the chair, and in that case I'll be glad to submit a PR improving the documentation.