17cupsofcoffee / tetra

🎮 A simple 2D game framework written in Rust
MIT License

tetra on Windows selects an integrated GPU when a discrete GPU is available #302

Open · ghost opened this issue 2 years ago

ghost commented 2 years ago

Summary

I have a laptop that dual-boots Windows and Ubuntu, and I noticed a performance difference between the two systems when running a tetra app. It turns out that the same app selects the integrated GPU when built and run on Windows, while on Ubuntu it selects the discrete one.

Steps to Reproduce

Take any of the examples, enable ContextBuilder::debug_info, and run it on Windows 10.
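A minimal sketch of what I'm running, based on the usual pattern from tetra's examples (the window title and size are arbitrary):

```rust
use tetra::{ContextBuilder, State};

struct GameState;

// The default State impls are enough, since we only care about startup output.
impl State for GameState {}

fn main() -> tetra::Result {
    ContextBuilder::new("gpu-info", 1280, 720)
        .debug_info(true) // prints the OpenGL vendor/renderer/version at startup
        .build()?
        .run(|_| Ok(GameState))
}
```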

Additional Info

On Windows 10 I get:

OpenGL Vendor: Intel
OpenGL Renderer: Intel(R) HD Graphics 630
OpenGL Version: 3.2.0 - Build 27.20.100.8681
GLSL Version: 1.50 - Build 27.20.100.8681

On Ubuntu 21.10:

OpenGL Vendor: NVIDIA Corporation
OpenGL Renderer: NVIDIA GeForce GTX 1050 Ti/PCIe/SSE2
OpenGL Version: 3.2.0 NVIDIA 470.82.00
GLSL Version: 1.50 NVIDIA via Cg compiler 

The same thing happens on 0.6.7 and the main branch.

17cupsofcoffee commented 2 years ago

From what I've read, there's no reliable way of telling OpenGL which GPU to use (unlike more modern APIs, which let you enumerate the available devices and pick one).

Apparently there are some symbols you can export to tell NVIDIA/AMD drivers that you want the high-performance GPU, though - I'll try to figure out what the Rust equivalent of that code is.
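For reference, the symbols in question are the ones documented by NVIDIA (Optimus) and AMD (PowerXpress): a non-zero exported integer in the executable asks the driver to prefer the discrete GPU. A rough sketch of what the Rust equivalent might look like in a game's main.rs, translated from the C snippets in the vendor docs (the exact types, and whether these statics actually get exported from an executable without extra linker flags, are assumptions I haven't verified):

```rust
use std::os::raw::{c_int, c_ulong};

// NVIDIA Optimus: an exported DWORD set to a non-zero value asks the driver
// to use the high-performance GPU for this executable.
#[allow(non_upper_case_globals)]
#[no_mangle]
pub static NvOptimusEnablement: c_ulong = 1;

// AMD PowerXpress: the equivalent hint for AMD switchable graphics.
#[allow(non_upper_case_globals)]
#[no_mangle]
pub static AmdPowerXpressRequestHighPerformance: c_int = 1;

fn main() {
    // Build the tetra Context and run the game as usual; the statics above
    // just need to end up exported from the final .exe.
}
```

With the MSVC toolchain it might also be necessary to force the export from a build script (e.g. a `cargo:rustc-link-arg=/EXPORT:NvOptimusEnablement` directive) - again, an assumption rather than something I've tested on a dual-GPU machine.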