aliclubb opened 1 year ago

Greetings! I'm curious: how easy would it be to implement internal resolution scaling for the software renderer? It's great that the OpenGL renderer supports it, but the software renderer is more accurate when it comes to certain things, such as edge marking.
There is an in-progress pull request for that: #1009. Keep in mind that it's relatively performance-heavy: my PC can handle 6x with the regular renderer without lag, but gets occasional lag at even just 2x with the software renderer. I'm on an older laptop, though, so your performance may well be better.
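As a rough illustration of why the cost climbs so fast (a back-of-the-envelope sketch of my own; the only real numbers are the DS's native 256x192 resolution):

```cpp
#include <cstdio>

// Software upscaling rasterizes every output pixel on the CPU, so the
// per-frame work grows with the square of the scale factor.
int main()
{
    const long nativeW = 256, nativeH = 192;  // DS screen size
    for (int scale : {1, 2, 6})
    {
        long pixels = nativeW * scale * nativeH * scale;
        std::printf("%dx: %ld pixels per frame (%ldx the native work)\n",
                    scale, pixels, pixels / (nativeW * nativeH));
    }
    return 0;
}
```

Even 2x already quadruples the fill cost, and 6x means 36x of it, which lines up with the lag described above.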
The proper solution is to rewrite the OpenGL renderer with compute shaders, so it avoids the accuracy issues of the current OpenGL renderer without the performance issues of trying to upscale in software.
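For the curious, here is a minimal sketch of what that approach looks like. This is not melonDS code: it assumes a GL 4.3 context and function loader (glad here) are already set up, error checking is omitted, and the "rasterization" is just a placeholder gradient. The dispatch shape is the point: one shader invocation per output pixel, with the GPU absorbing the quadratic per-pixel cost.

```cpp
#include <glad/glad.h>  // hypothetical setup: a GL 4.3 context is current

// Placeholder compute "rasterizer": a real renderer would walk polygon
// edges and apply the DS's per-pixel effects (edge marking, fog, ...) here.
static const char* kComputeSrc = R"(#version 430
layout(local_size_x = 256) in;
layout(binding = 0, rgba8) writeonly uniform image2D dst;
void main()
{
    ivec2 p = ivec2(gl_GlobalInvocationID.xy);
    imageStore(dst, p, vec4(vec2(p) / vec2(imageSize(dst)), 0.0, 1.0));
})";

// Renders one upscaled frame into a texture and returns it.
GLuint renderFrame(int scale)
{
    const int w = 256 * scale, h = 192 * scale;

    GLuint sh = glCreateShader(GL_COMPUTE_SHADER);
    glShaderSource(sh, 1, &kComputeSrc, nullptr);
    glCompileShader(sh);
    GLuint prog = glCreateProgram();
    glAttachShader(prog, sh);
    glLinkProgram(prog);

    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexStorage2D(GL_TEXTURE_2D, 1, GL_RGBA8, w, h);
    glBindImageTexture(0, tex, 0, GL_FALSE, 0, GL_WRITE_ONLY, GL_RGBA8);

    glUseProgram(prog);
    // 256-wide workgroups: 'scale' groups across each row, h rows down,
    // one invocation per output pixel.
    glDispatchCompute(scale, h, 1);
    glMemoryBarrier(GL_SHADER_IMAGE_ACCESS_BARRIER_BIT);
    return tex;
}
```

The appeal is that the same per-pixel logic the software renderer uses could run on the GPU, so accuracy and upscaling would no longer be mutually exclusive.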
Even then, adding internal upscaling to the software renderer would be a bad idea: it also serves as the accurate reference renderer, and internal upscaling is fundamentally not accurate.
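To make that concrete with a toy example (my own simplification, not how melonDS implements it): the DS marks a pixel's edge when a neighbour carries a different polygon ID, which yields a 1-pixel outline at 256x192. Run the same logic on an upscaled buffer and the outline is still only 1 pixel wide at the higher resolution, a fraction of its intended on-screen thickness, so the image no longer matches hardware.

```cpp
#include <cstdio>
#include <vector>

// Counts pixels that edge marking would outline: any pixel with a
// 4-neighbour belonging to a different polygon ID (simplified model).
static int countEdgePixels(const std::vector<int>& id, int w, int h)
{
    int marked = 0;
    for (int y = 0; y < h; y++)
        for (int x = 0; x < w; x++)
        {
            int self = id[y*w + x];
            if ((x > 0   && id[y*w + x-1]   != self) ||
                (x < w-1 && id[y*w + x+1]   != self) ||
                (y > 0   && id[(y-1)*w + x] != self) ||
                (y < h-1 && id[(y+1)*w + x] != self))
                marked++;
        }
    return marked;
}

int main()
{
    for (int scale : {1, 4})
    {
        int w = 256*scale, h = 192*scale;
        std::vector<int> id(w*h, 0);               // background: poly 0
        for (int y = 50*scale; y < 100*scale; y++) // one square: poly 1
            for (int x = 50*scale; x < 100*scale; x++)
                id[y*w + x] = 1;
        // Outline pixels grow roughly linearly with scale while the frame
        // grows quadratically, so the outline looks thinner and thinner.
        std::printf("%dx: %d outlined pixels in a %dx%d frame\n",
                    scale, countEdgePixels(id, w, h), w, h);
    }
    return 0;
}
```

So keeping 1x bit-exact while bolting upscaling onto the same code is harder than it sounds; the per-pixel effects are defined in terms of the native grid.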
I agree that the best way forward would be to improve the accuracy of the current OpenGL renderer; I just wish I had the coding skills to help with this. Currently I'm stuck using DeSmuME purely because it has the most accurate renderer that can upscale, but I'd much rather use melonDS for its local multiplayer support and lighter footprint.
I agree that the software renderer should be kept "pixel-perfect", but it should be possible to add upscaling features that, when used at 1x, still provide "pixel-perfect" rendering. A more accurate OpenGL renderer would definitely be king here, though.