toji / toji.dev

Personal "About Me" website.
https://toji.dev
MIT License

WebGPU <img>, <canvas>, and <video> Textures | Toji.dev #4

utterances-bot opened 8 months ago

utterances-bot commented 8 months ago

WebGPU `<img>`, `<canvas>`, and `<video>` Textures

Brandon Jones - Graphics and XR on the web

https://toji.dev/webgpu-best-practices/img-textures.html

fabmax commented 8 months ago

Thank you so much for writing these extensive guides!

I spent the last couple of weeks writing a WebGPU backend for my toy engine and had a lot of fun doing so (WebGPU really is a lot nicer to work with than WebGL!)

I would love to see the doc on compressed textures in the future, as this is a topic I've completely ignored so far.

Anyway thanks again!

yanglebupt commented 4 months ago

> There’s a myriad of ways that you can choose to generate mipmaps, with some of the fancier native libraries going so far as to do single-pass, compute shader-based, custom filtered downsampling

I found that the webgpu-spd package uses this compute shader approach to generate mipmaps, but it requires the texture format to support storage textures!

However, texture formats such as rgba8unorm-srgb don't support storage textures, so in that case we have to fall back to a render pass for each mip level, starting at the largest and rendering it into the next level down using a linear filter.

So, may I ask: is there any performance difference between these two ways of generating mipmaps?

toji commented 4 months ago

SPD stands for "single pass downsampling", and as the name suggests, it generally tries to generate the full mip chain in a single compute pass. Generating mips via render passes, on the other hand, effectively requires one render pass per mip level. I'm willing to bet that the performance of generating any individual level is probably going to be about the same with either of these techniques, but the overhead of using multiple passes will make the render technique slower in the end (barring anything being really broken in the compute version.)
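To make the pass-count difference concrete, here's a small sketch (plain JavaScript, with hypothetical helper names, not taken from webgpu-spd or the article):

```javascript
// Number of mip levels for a 2D texture, as defined by the WebGPU spec:
// floor(log2(max(width, height))) + 1
function mipLevelCount(width, height) {
  return Math.floor(Math.log2(Math.max(width, height))) + 1;
}

// A single-pass downsampler (like webgpu-spd) writes the whole chain in
// one compute pass; the render-based technique needs one render pass per
// generated level (every level except the base).
function passCounts(width, height) {
  const levels = mipLevelCount(width, height);
  return { compute: 1, render: levels - 1 };
}

console.log(mipLevelCount(1024, 512)); // 11 levels
console.log(passCounts(1024, 512));    // { compute: 1, render: 10 }
```

So for a 1024×512 texture the render-pass technique issues ten passes where SPD issues one, and it's that per-pass overhead, more than the downsampling work itself, that adds up.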

But like you said, some texture formats don't work with compute-based methods, in which case you'd fall back to the slower technique. If that happens a few times during your program's load you're probably fine, as the overhead won't be THAT big. If you're doing it multiple times per frame (for example: generating full mip chains for realtime-generated cube maps.) then it might become a concern.
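The fallback decision can be sketched like this. The storage-capable list below is partial and illustrative (check the format capability table in the WebGPU spec for the full set); the point is that rgba8unorm allows STORAGE_BINDING usage while its -srgb variant does not:

```javascript
// Partial list of WebGPU texture formats that allow STORAGE_BINDING
// usage (illustrative; see the spec's capability table for the full set).
const STORAGE_CAPABLE = new Set([
  'rgba8unorm', 'rgba8snorm', 'rgba8uint', 'rgba8sint',
  'rgba16float', 'rgba32float', 'r32float',
  'r32uint', 'r32sint',
]);

// Pick a mipmap-generation strategy: compute-based single-pass
// downsampling when the format can be bound as a storage texture,
// otherwise fall back to one render pass per mip level.
function mipmapStrategy(format) {
  return STORAGE_CAPABLE.has(format) ? 'compute' : 'render';
}

console.log(mipmapStrategy('rgba8unorm'));      // 'compute'
console.log(mipmapStrategy('rgba8unorm-srgb')); // 'render'
```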

And I should repeat the advice from the start of the article: Use compressed textures when possible! Most formats that store compressed textures will also store a pre-generated full mip chain along with them, which is always going to be the fastest option. :)