leo-barnes opened 2 years ago
As an example, quite a lot of web content consists of JPGs or PNGs that are really small (on the order of 32x32).
Maybe there are coding tools that only make sense for images this small? Or maybe some gains could be made by initializing adaptive entropy coding statistics differently?
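To illustrate the second idea: an adaptive entropy coder starts from some initial symbol distribution and converges toward the true one as symbols are coded. For a tiny image there are so few symbols that the cost of a generic starting point is never amortized. The sketch below is purely illustrative (a simple binary model with an exponential-moving-average update, not AV1's actual CDF machinery); `adaptive_cost`, the adaptation rate, and the symbol stream are all hypothetical.

```python
import math

def adaptive_cost(bits, p_one_init=0.5, rate=1/16):
    """Total cost in bits of coding `bits` with an adapting binary model.

    Hypothetical model: p is the estimated probability of a 1-symbol,
    nudged toward each observed symbol, loosely in the spirit of AV1's
    per-symbol CDF adaptation (not its actual update rule).
    """
    p = p_one_init
    total = 0.0
    for b in bits:
        # ideal arithmetic-coding cost of this symbol under the model
        total += -math.log2(p if b else 1.0 - p)
        # move the estimate toward the observed symbol
        p += rate * ((1.0 if b else 0.0) - p)
        p = min(max(p, 1e-4), 1.0 - 1e-4)
    return total

# A short, skewed symbol stream, like one a 32x32 image might produce:
stream = [1, 1, 1, 1, 1, 1, 1, 1, 1, 0] * 10

generic = adaptive_cost(stream, p_one_init=0.5)  # neutral 50/50 prior
tuned = adaptive_cost(stream, p_one_init=0.9)    # prior matching the data
print(f"generic init: {generic:.1f} bits, tuned init: {tuned:.1f} bits")
```

With a long stream the two initializations converge to nearly the same average cost; with only ~100 symbols the tuned prior wins outright, which is the intuition behind initializing the statistics differently for very small images.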
On Wed, Sep 7, 2022 at 7:17 AM leo-barnes @.***> wrote:

> As an example, quite a lot of web content consists of JPGs or PNGs that are really small (on the order of 32x32). Maybe there are coding tools that only make sense for images this small? Or maybe some gains could be made by initializing adaptive entropy coding statistics differently?

The reverse is true too, i.e. tools for larger blocks no longer make sense; the syntax for them, such as larger prediction block sizes and transform sizes, could probably be conditionally removed.
Maybe the max block or transform size should be a frame / sequence parameter?
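One way to see the potential saving from such a parameter: if the sequence header capped the maximum block size at the frame size, the partition tree would start lower and the never-used top-level split decisions would not need to be signaled at all. The sketch below is only a rough worst-case count using binary split flags over a quadtree; real AV1 partition syntax uses multi-way partition symbols, so `worst_case_split_flags` and its bit counts are illustrative, not spec-accurate.

```python
import math

def worst_case_split_flags(frame, max_blk, min_blk=8):
    """Count split flags when every block down to min_blk splits.

    Illustrative quadtree model for a frame x frame image: each block
    larger than min_blk signals one (hypothetical) binary split flag.
    """
    total = 0
    size = max_blk
    while size > min_blk:
        # blocks of this size needed to tile the frame
        n = math.ceil(frame / size) ** 2
        total += n
        size //= 2
    return total

# 32x32 frame, AV1-like 128x128 superblocks vs. a hypothetical
# sequence-level cap at 32x32:
default_flags = worst_case_split_flags(32, 128)
capped_flags = worst_case_split_flags(32, 32)
print(default_flags, capped_flags)
```

The capped tree skips the 128x128 and 64x64 split decisions entirely, and the same argument applies to transform-size signaling; for a payload of only a few hundred bytes, even a handful of saved symbols is measurable.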
There is a desire to use AVIF for small to very small images. One of the hurdles there is the overhead of the container itself, which is being looked into here: https://github.com/AOMediaCodec/av1-avif/issues/121 https://github.com/MPEGGroup/FileFormat/issues/59
But I'm assuming that AV1 has not really been optimized for small still images, and especially not for small lossless still images.
It would be great if some time were spent during AV2 research to investigate whether there are any extra tweaks that could compress this use case better.