From what I've read, there's not actually a meaningful way of telling OpenGL which GPU to use (unlike in more modern APIs, which let you enumerate over them and pick one out).
Apparently there are some symbols you can export to tell the NVIDIA/AMD drivers that you want the high-performance GPU, though; I'll try to figure out what the Rust equivalent of that code is.
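For reference, the usual trick on Windows is to export the NvOptimusEnablement and AmdPowerXpressRequestHighPerformance symbols from the executable. A rough, untested sketch of what that might look like in Rust (the statics may additionally need an /EXPORT linker argument on MSVC to actually end up in the export table):

```rust
use std::os::raw::{c_int, c_ulong};

// A non-zero exported NvOptimusEnablement asks the NVIDIA driver to
// prefer the discrete GPU for this executable.
#[no_mangle]
pub static NvOptimusEnablement: c_ulong = 1;

// AmdPowerXpressRequestHighPerformance is the AMD equivalent.
#[no_mangle]
pub static AmdPowerXpressRequestHighPerformance: c_int = 1;
```

Whether this works is entirely up to the drivers; it's a vendor-specific convention rather than anything OpenGL itself exposes.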
Summary
I have a laptop that dual-boots Windows and Ubuntu, and I noticed a performance difference between the two systems when running a tetra app. It turns out that the same app selects the integrated GPU on Windows, while on Ubuntu it selects the discrete one.
Steps to Reproduce
Take any of the examples, enable `ContextBuilder::debug_info`, and run it on Windows 10, as sketched below.
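A minimal reproduction might look something like this (a sketch assuming tetra's standard ContextBuilder API; the window title and size are arbitrary):

```rust
use tetra::{ContextBuilder, State};

struct GameState;

impl State for GameState {}

fn main() -> tetra::Result {
    // debug_info(true) makes tetra log the OpenGL vendor/renderer at startup,
    // which shows whether the integrated or the discrete GPU was selected.
    ContextBuilder::new("gpu-selection", 1280, 720)
        .debug_info(true)
        .build()?
        .run(|_| Ok(GameState))
}
```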
Additional Info
On Windows 10 I get:
On Ubuntu 21.10:
The same thing happens on 0.6.7 and the main branch.