Closed: icefoxen closed this issue 2 years ago.
GLES-3.0 should be fairly portable these days, as Apple devices are finally getting WebGL2 support (i.e. it's a software update, not hardware). So targeting GLES-2.0 is a bit extreme today.
For wgpu, it's fairly stable on the API side. There are still major internal changes coming, but I don't think we are going to see much change in the API. Especially now that Google is rushing WebGPU in Chrome out the door, the group is very hesitant to accept any breaking changes.
We recently got the GLES-3.0 target completely rewritten; it's part of the wgpu repository now. And we'll follow up with WebGL2 support. It has issues - https://github.com/gfx-rs/wgpu/issues/1574 - but overall we are in much better shape to support it than in the gfx-hal days, when GL was an afterthought.
There is also wgpu-hal now, effectively a lean-and-mean successor to gfx-hal. I talked about it last Saturday at the meetup - see https://github.com/gfx-rs/wgpu/issues/1574 . It's an option if you want to go lower level (and take care of safety on your end).
Finally, a small correction to your table: Vulkan on Windows is available far less widely than on Linux. Some machines plain don't support it on Windows specifically (like Intel Haswell/Broadwell iGPUs, and more), and some user machines have outdated drivers, or none at all.
If you want to target OpenGL ES of any version directly, you may consider running with Angle. It's a giant C-based dependency, but at least it's more polished than MoltenVK.
Is it even possible to map wgpu's API to ggez's API? I haven't been able to get anything to work except for storing all the calls in the context and drawing them after the draw() functions have been called, which just seems really annoying to deal with and makes it quite bad for exposing raw commands to the user.
Possible replacements
I prefer glow out of these, or wgpu if we can integrate it properly.
I don't see a reason why it wouldn't be possible. Are you referring to the fact that wgpu-rs requires resources to be alive for the duration of a pass recording? If that's where you see the problem, you can always use a typed_arena::Arena<Arc<Resource>>. Adding a resource to it gives you a &Resource that you can use in pass recording, since typed arena references are long-lived.
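To illustrate why the arena trick works: typed_arena hands out references with stable addresses even as the arena grows, so they can span the whole pass recording. Below is a minimal, hand-rolled sketch of that behavior (the real typed_arena::Arena does the same thing much more efficiently); `Resource` here is a hypothetical stand-in, not a wgpu type.

```rust
use std::cell::RefCell;

// Hypothetical stand-in for a GPU resource (e.g. an Arc<wgpu::Buffer>).
struct Resource {
    id: u32,
}

// Sketch of the typed_arena idea: each allocation is boxed, so its heap
// address stays stable even when the arena's backing Vec reallocates.
// That's what makes the returned &Resource valid for the whole frame.
struct FrameArena {
    items: RefCell<Vec<Box<Resource>>>,
}

impl FrameArena {
    fn new() -> Self {
        Self { items: RefCell::new(Vec::new()) }
    }

    // Takes &self (not &mut self), so many outstanding references coexist.
    fn alloc(&self, value: Resource) -> &Resource {
        let boxed = Box::new(value);
        let ptr: *const Resource = &*boxed;
        self.items.borrow_mut().push(boxed);
        // SAFETY: the Box's heap allocation is never moved or freed while
        // `self` is alive, so the reference lives as long as the arena.
        unsafe { &*ptr }
    }
}
```

With this shape, pass-recording code can borrow `&Resource` values allocated mid-frame, and everything is dropped together once the arena goes out of scope at the end of the frame.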
Also, wgpu and wgpu-hal are different targets, unless you have already determined that of these two you'd go with wgpu anyway. It's a good choice, I just want to make sure we are clear here.
It's not just that it has to be alive (that's solvable), but rather whether you can pass the RenderPass down to the drawables via the Context without adding a lifetime parameter (which RenderPass requires).
And yes, I am referring to wgpu, not wgpu-hal; I haven't checked out wgpu-hal yet.
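For reference, the deferred approach mentioned earlier (recording calls into the context and drawing them after draw() returns) can be sketched roughly as follows. `DrawCommand` and the string-based "pass" are hypothetical simplifications for illustration, not actual ggez or wgpu types:

```rust
// Hypothetical simplified draw command, standing in for real draw state
// (mesh handle, transform, color, ...).
#[derive(Debug, Clone, PartialEq)]
enum DrawCommand {
    Mesh { id: u32 },
    Text { s: String },
}

// Sketch of a context that records commands during the user's draw() calls
// and replays them later, once the (lifetime-bearing) render pass exists.
#[derive(Default)]
struct Context {
    queue: Vec<DrawCommand>,
}

impl Context {
    fn draw_mesh(&mut self, id: u32) {
        self.queue.push(DrawCommand::Mesh { id });
    }

    fn draw_text(&mut self, s: &str) {
        self.queue.push(DrawCommand::Text { s: s.to_string() });
    }

    // End of frame: open the real pass and replay everything. Here the
    // "pass" is just a Vec<String> of issued calls, for illustration.
    fn present(&mut self) -> Vec<String> {
        self.queue
            .drain(..)
            .map(|cmd| match cmd {
                DrawCommand::Mesh { id } => format!("draw mesh {}", id),
                DrawCommand::Text { s } => format!("draw text {:?}", s),
            })
            .collect()
    }
}
```

The upside is that no RenderPass<'a> lifetime ever appears in the user-facing Context; the downside is exactly what's described above: every drawable has to be representable as stored data, which makes exposing raw commands to the user awkward.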
I plan on forking good-web-game to see whether I can expand/update it to something more complete (i.e. more ggez-0.6.0).
But, to be honest, I won't start working on that for quite a while, since I'm going to be occupied with other things for a while.
In the best case it might turn out nicely and we might discuss whether it could become the new ggez. If not, then at least good-web-game gets an update, which is nice to have as well. Having an alternative (somewhat complete) implementation of ggez lying around could be useful for some people, for example when stumbling over a really annoying, but somewhat arcane bug in one of them that might (hopefully) not be present in the other one.
If done correctly I think I might even prefer wgpu over miniquad, but from a practical point of view it's just simpler / more realistic for me personally to tinker around with good-web-game, than it is to set up a whole new graphics stack based on wgpu.
That said, I'd of course offer my support for building the latter. I'm just probably not the one who should design it.
Keep me updated! Or just ping me (or #good-web-game channel) on discord https://discord.com/invite/WfEp6ut
Ok, so I forked and expanded good-web-game a bit.
Using miniquad as a backend sure simplifies many things, but there are some areas where I'm unsure whether it's really the right fit for us. The most important one is probably that miniquad takes care of the whole event loop itself, which might mean that we could no longer allow people to run their own event loops.
There are other things like shaders, blend modes, etc., that aren't working right now, but I haven't looked into miniquad enough yet to figure out whether it's able to support them the way ggez does.
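The event-loop concern can be made concrete with a toy illustration of the two inversions of control. Nothing below is real miniquad or ggez code; `Event`, `EventHandler` and `Game` are made-up names, and the "event source" is just a fixed script instead of a window:

```rust
// Toy event type; a real backend would produce window/input events.
enum Event {
    Tick,
    Quit,
}

trait EventHandler {
    fn update(&mut self);
}

struct Game {
    updates: u32,
}

impl EventHandler for Game {
    fn update(&mut self) {
        self.updates += 1;
    }
}

// miniquad-style inversion of control: the framework owns the loop and
// calls back into user code; the user never sees the loop itself.
fn run_framework_owned(handler: &mut impl EventHandler, script: &[Event]) {
    for ev in script {
        match ev {
            Event::Tick => handler.update(),
            Event::Quit => break,
        }
    }
}
```

A user-owned loop over the same pieces would iterate the events itself and interleave arbitrary custom logic between `update()` calls, which is exactly what a callback-only backend cannot offer.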
From 0.7 Reddit announcement:
For the future, I am working on something so that ggez can switch to wgpu, moving away from the old gfx pre-ll library. This should hopefully bring lots of fixes and make ggez easier to use.
Has wgpu been deemed a better fit for ggez than miniquad?
Yes, it has.
There are several things playing into this decision, but the main reasons are:
- miniquad only supports shaders up to GLSL 100
- multisampling on offscreen render targets isn't supported by miniquad (and it doesn't seem like this will change anytime soon)
- somewhat correctly deallocating vertex-/indexbuffers (for example for Meshes) on the fly, ggez-style, requires ugly work-arounds
- windowing is compact, but far less developed and robust than winit (which doesn't matter for mobile applications, so it's fine for good-web-game, but actually is relevant for games designed to be played on desktop)

miniquad will probably always be the chosen backend for good-web-game, as it does a wonderful job there delivering great portability with little effort, but ggez requires something slightly different.
It's not 100% clear whether wgpu will become the new backend though. @nobbele is currently working on a small renderer based on it. And if that turns out to work well as a replacement for gfx then we're surely going to go with it. If not, then the journey may continue. But I have a good feeling about this, so I don't think that'll be the case.
miniquad only supports shaders up to GLSL 100
This is not correct: miniquad does not really care about the version used. WebGL1 and GL2, on the other hand, support only GLSL 100.
multisampling on offscreen render targets isn't supported by miniquad (and it doesn't seem like this will change anytime soon)
renderbufferStorageMultisample is part of WebGL2. If you don't need WebGL1, it's a very easy fix to use renderbufferStorageMultisample instead of renderbufferStorage.
somewhat correctly deallocating vertex-/indexbuffers (for example for Meshes) on the fly, ggez-style, requires ugly work-arounds
It's more a ggez problem with the way it allocates and deletes buffers. I would fix it myself if I were able to reproduce it. You need someone with Windows/Nvidia and a bit of time to figure this out.
windowing is compact, but far less developed and robust than winit (which doesn't matter for mobile applications, so it's fine for good-web-game, but actually is relevant for games designed to be played on desktop)
This is correct, winit supports more features. But, funnily enough, ggez's macOS problems were easier to fix with miniquad than with winit :)
I would say that if you don't need WebGL1 (and soon GL2) and low-end device support in general, WebGPU might be the better option. If you need lower-end devices, tweaking miniquad might be easier than rolling a rendering backend from scratch.
miniquad only supports shaders up to GLSL 100
This is not correct: miniquad does not really care about the version used. WebGL1 and GL2, on the other hand, support only GLSL 100.
Oh, I didn't know that. Thanks for putting this right.
multisampling on offscreen render targets isn't supported by miniquad (and it doesn't seem like this will change anytime soon)
renderbufferStorageMultisample is part of WebGL2. If you don't need WebGL1, it's a very easy fix to use renderbufferStorageMultisample instead of renderbufferStorage.
Sadly, I have no idea what you're talking about. If what you're saying is that once miniquad supports WebGL2 we can have multisampled renderbuffers in it, then I'll be delighted to add them to good-web-game :)
somewhat correctly deallocating vertex-/indexbuffers (for example for Meshes) on the fly, ggez-style, requires ugly work-arounds
It's more a ggez problem with the way it allocates and deletes buffers. I would fix it myself if I were able to reproduce it. You need someone with Windows/Nvidia and a bit of time to figure this out.
I'm sorry, but I'd say it's more of a "not-macroquad" problem. Deleting buffers might happen close to never in macroquad, but it does happen in other use cases. It's a very environment-specific bug, though, so it's probably not all that bad.
I would say that if you don't need WebGL1 (and soon GL2) and low-end device support in general, WebGPU might be the better option. If you need lower-end devices, tweaking miniquad might be easier than rolling a rendering backend from scratch.
It would be great to profit from these strengths of miniquad, absolutely. But as it is right now, it's just not capable and robust enough yet to drive ggez on its own. To reap as many of the benefits as possible, there's good-web-game, which I do plan on expanding further as miniquad progresses 🙂
Who knows, maybe one day it will simply be objectively superior to ggez and we can just create a big fat PR making good-web-game the new ggez. But for now, I really want to give wgpu a chance.
Deleted a comment, I think I used the wrong tone :)
Anyway, good luck with whatever you decide to use; I've shared all the knowledge I have here. If you don't need WebGL1/GL2 and higher-end hardware is the priority, wgpu may be the better choice.
But if you don't need older hardware/browser support, just using certain WebGL2-only functions will fix all the problems you're talking about except the Nvidia bug.
Yes, the Nvidia bug, the only problem not related to WebGL1/GL2 limitations, will require some debugging. And no, it's not a "non-macroquad" bug: gwg is doing a very specific thing, creating, drawing and deleting meshes on each frame. I wrote this code, I know that it was an ugly hack ;)
No worries, I think the tone was totally fine. I wish you the best of luck for the -quad-verse as well : )
Sadly, the nvidia bug just isn't the only non-webgl related problem we're facing here. There is:
- a gap between the winit functionality ggez currently uses and what miniquad provides
- quit not quitting soon enough, and the missing Metal/Vulkan backends (which aren't as important as Wasm/WebGL, but which would still be nice to have)

Also, creating, using and then dropping drawable objects in the same frame is nothing specific to gwg. It's not a hack, it's just something that ggez allows, and it's heavily used in the examples due to its simplicity. ggez just has a different approach than, for example, macroquad. It's not very optimized, but in return it gives our users control over how and when resources are actually allocated and released, so that they can optimize their code themselves, while still providing an easy-to-use API that allows inefficient but simple code for when you're just starting out.
What's the progress? I've been looking on switching from sfml bindings to some other library and I think ggez would be a good fit if it switched to wgpu. Right now messing around with things like shaders and whatnot is an undocumented, messy nightmare.
I am working on it and LechintanTudor (#1012) says they're working on it too. It will most likely take quite a while for a stable release with wgpu but I would like to at least get an alpha done quite soon.
ggez has used gfx-rs since version 0.3.0, and there have not been major changes to the basic structure since 0.5.0. In that time gfx-rs has actually held up surprisingly decently, but currently has a number of downsides:

- It has been deprecated in favor of gfx-hal. Notably, it can't effectively target mobile or web platforms.
- SpriteBatch and MeshBatch are artifacts of this. It would be nice to have a lower level API and just do automatic batching of draw calls all the time, the way tetra does.

Below here is the tale of my own experiences. I fought with this quite a lot since ggez 0.5.0 but didn't quite get to a minimum viable product. Since we're handing off to new maintainers, here's the chronicle of my experiences and decisions. Hopefully it's useful.
So the first question for replacing gfx-rs is: what to replace it with? The options are:

- gfx-hal or rendy. I love these in theory, but I spent a long time playing with them in like 2019 and was not terribly impressed tbh. They are big, complicated, have a large dependency surface, and still have bugs. Maybe they've gotten better since then. (I kind of mentally put wgpu in this category; in my experience wgpu is still in heavy enough development that it is not ready for prime time.)

I eventually decided to use a low-level GPU API, so the next question was, which one? I didn't want to write my own portability layer with multiple backends, so which API to use was dictated by what platforms I wanted to support. As of mid 2021, the compatibility matrix looks something like this:
Web support info taken from https://caniuse.com/?search=webgl2 . OpenGL version equivalences figured out by digging through the specs. WebGL1 is kinda a subset of ES2 which is kinda a subset of OpenGL 2.1 with some extra things added. WebGL2 is a direct subset of ES3 which is a direct subset of OpenGL 4.3.
At the time I was considering this, my main goal was supporting the existing Windows+Linux OS's, plus web and old mobile devices. There's actually quite a lot of fun little Linux-based computers out there such as Raspberry Pi's < 4, the Pinebook Pro, quite high-quality retro handheld devices, and other stuff like that.
So I decided I would work with OpenGL ES2. ES2 is kinda ancient tech from the Bad Old Days of OpenGL, but that means there's tons of cheap as dirt chips out there that support it and it's often the first thing that new device drivers implement. For example the Pinebook Pro has a GPU that supports ES3, but the Mesa Panfrost driver for it still only reliably supports ES2. I would have much preferred Vulkan or ES3, but using ES2 meant that ggez would be able to run almost literally anywhere. And frankly once you get above a basic level of usability, it doesn't make a whole lot of difference one way or another.
My work on an OpenGL ES2 renderer for ggez lives in the ggraphics branch, here: https://github.com/ggez/ggez/tree/ggraphics/ggraphics. An older attempt using rendy lives here: https://github.com/ggez/ggraphics. Both use a much more "retained mode" style of drawing, where you fill a buffer with things to draw and then hand the GPU whole buffers at a time. Layered on top of that are further pieces of state: shader pipeline and render target. The ES2 ggraphics branch readme still probably describes it as well as possible. You should be able to make about 80% of ggez's immediate-mode graphics API work with only minimal changes, it will make automatic draw batching possible (so SpriteBatch and MeshBatch will become unnecessary), and it will forcefully clear up some of the places where ggez's drawing API conflates different stages of the process (such as blend mode, and some of the weirder things involving text drawing).
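To make "automatic batching of draw calls all the time" concrete, here is a rough sketch of the idea; none of this is real ggraphics or tetra code, and `BatchKey` is a made-up stand-in for whatever state (pipeline, texture, blend mode) actually distinguishes draw calls:

```rust
// Toy state key; in a real renderer this would identify the shader
// pipeline, bound texture, blend mode, and so on.
#[derive(Clone, Copy, PartialEq, Eq)]
struct BatchKey {
    pipeline: u32,
    texture: u32,
}

struct Batch {
    key: BatchKey,
    vertices: Vec<[f32; 2]>,
}

// Sketch of an automatic batcher: consecutive draws sharing the same key
// are merged into one buffer, so manual SpriteBatch-style batching by the
// user becomes unnecessary.
#[derive(Default)]
struct Batcher {
    batches: Vec<Batch>,
}

impl Batcher {
    fn draw(&mut self, key: BatchKey, verts: &[[f32; 2]]) {
        match self.batches.last_mut() {
            // Same state as the previous draw: extend the current batch.
            Some(last) if last.key == key => last.vertices.extend_from_slice(verts),
            // State changed (or first draw): start a new batch / draw call.
            _ => self.batches.push(Batch { key, vertices: verts.to_vec() }),
        }
    }

    // Number of actual GPU draw calls this frame would need.
    fn draw_call_count(&self) -> usize {
        self.batches.len()
    }
}
```

The design point is that the user keeps writing naive per-object draw calls, while the renderer decides at flush time how few GPU submissions they collapse into.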