Open dnfield opened 1 year ago
I think more generally we should move to fully lazy mipmaps. One of the reasons we had to switch to the drip-feed of image uploads is that generating all of the mipmaps for large images can consume multiple ms of GPU time (see the flutter/gallery sample app).
Instead we should be able to compute which mip level would be selected and only generate that single mip level. I believe Metal has APIs for this, and Vulkan should too; GLES doesn't.
I think we also need the buffer upload blit in order for lossy compression to work, right?
> Instead we should be able to compute which mip level would be selected and only generate that single mip level.
We could write individual mip levels just before drawing, but there are some complications with this. For example, computing the range of mip levels that a draw call needs isn't trivial given our API surface. If the CTM isn't affine/has projective coordinates, then any or all of the mip levels may be used by one draw call, and all of our user-set samplers have mip filtering enabled.
Both lossy and lossless compression require device private textures.
Actually, you can still get lossless compression for host images via the optimize-for-GPU command on a blit pass.
> If the CTM isn't affine/has projective coordinates, then any or all of the mip levels may be used by one draw call. All of our user set samplers have mip filtering enabled.
The vast majority of the time it will be a simple scale/translation matrix, though, so that seems handleable by adding a fast path. I'll admit I don't know if the juice is worth the squeeze.
> The vast majority of the time it will be a simple scale/translation matrix though, so that seems handleable by adding a fast path.
I'm not sure this is something we can drop in as a "fast path", though. It'd be more like adding a new accounting system for textures in the Entities layer, where we dispatch another check per draw operation and inject new blit passes when needed.
Not saying we shouldn't consider it, but deferring generating mipmaps smells like a predictability tradeoff. Doing slight 3D rotations on stuff isn't terribly uncommon in modern apps. It'd be a shame if we defer jank to a moment that's harder for devs to anticipate/control for.
also potentially: https://developer.apple.com/documentation/metal/textures/predicting_which_mips_the_gpu_samples_with_level-of-detail_queries?language=objc
Hmm, I believe this is an MSL feature, and not something that can help us make decisions about which mip levels need to be generated in advance.
Doing more research into what Skia does. It looks like they try to compute which LOD will be chosen and may skip mipmap creation for certain images? Not sure.
Design doc for a retry mechanism for Image.toByteData. Seems like it can be used for this as well: https://docs.google.com/document/d/1Uuiw3pdQxNFTA8OQuZ-kuvYg1NB42XgccQCZeqr4oII/edit?usp=sharing
FYI @gaaclarke , since we talked about this in the weekly. We'd need to register the callback here: https://github.com/flutter/engine/blob/main/lib/ui/painting/image_decoder_impeller.cc#L338-L350 and remove the usage of upload to storage.
After https://github.com/flutter/engine/pull/42349, we'll skip generating mipmaps when backgrounded on iOS.
We should generate mipmaps once the GPU is available.