Closed nahjaeho closed 1 year ago
Nice idea, based on a quick skim. I'll take a look.
Sorry for the slow response. Some initial bits of feedback:
I like the idea. My only concern with the approach is that it's a little based on luck and whether the 10% of blocks you choose for the initial sample aligns with the bad blocks in the image. It tends to be a small number of blocks that throw up the perceptual problems - e.g. blocks with sharp edges and high contrast and chroma rate of change, or blocks on alpha transitions. PSNR as a metric is also a bit crude and not always good at highlighting perceptual errors (although there is no really good alternative either).
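To make the concern concrete, a per-block PSNR breakdown shows how a global average can hide a handful of perceptually bad blocks. This is an illustrative sketch only (the function name, block size, and array shapes are assumptions, not astcenc internals):

```python
import numpy as np

def block_psnr(original, compressed, block=6):
    """Per-block PSNR for two 8-bit images as (H, W, C) numpy arrays.

    Illustrates why a global PSNR can mask a few very bad blocks:
    most blocks may score near-infinite PSNR while one block with a
    sharp edge or alpha transition scores far lower.
    """
    h, w = original.shape[:2]
    scores = []
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            a = original[y:y + block, x:x + block].astype(np.float64)
            b = compressed[y:y + block, x:x + block].astype(np.float64)
            mse = np.mean((a - b) ** 2)
            psnr = float('inf') if mse == 0 else 10 * np.log10(255.0 ** 2 / mse)
            scores.append(psnr)
    return scores
```

Inspecting `min(scores)` rather than the mean is one way to surface the small number of problem blocks described above.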
In terms of implementation:
I would like the implementation to be handled entirely in the command line front-end. I don't really want to add code to the core codec which is used at runtime on some platforms, where this type of technique is unlikely to be used. The same approach works fine - it just requires the front-end to build a "10% image" first.
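The "10% image" idea above can be sketched as follows: sample a fixed fraction of the blocks with a seeded generator and stack them into a small trial image the front-end can compress. The block size, fraction, and seed here are illustrative assumptions, not part of the actual patch:

```python
import numpy as np

def build_sample_image(image, block=6, fraction=0.1, seed=42):
    """Assemble a small trial image from ~`fraction` of the blocks.

    A sketch of the command-line front-end approach: compress this
    strip first to estimate quality, without touching the core codec.
    """
    h, w = image.shape[:2]
    coords = [(y, x)
              for y in range(0, h - block + 1, block)
              for x in range(0, w - block + 1, block)]
    rng = np.random.default_rng(seed)  # fixed seed -> deterministic choice
    n = max(1, int(len(coords) * fraction))
    picked = rng.choice(len(coords), size=n, replace=False)
    blocks = [image[coords[i][0]:coords[i][0] + block,
                    coords[i][1]:coords[i][1] + block] for i in picked]
    # Stack the sampled blocks into one tall strip for a trial compression.
    return np.concatenate(blocks, axis=0)
```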
Developers generally like deterministic results across platforms, so please avoid using the system rand() calls, as they are platform-specific. There is a pseudo-random number generator in the codec which can be used with a static seed to ensure determinism.
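The determinism point can be illustrated with a tiny seeded generator. This is not astcenc's actual generator, just a sketch of the idea: the same static seed yields the same sequence on every platform, unlike the system rand():

```python
class XorShift64:
    """Tiny deterministic PRNG (xorshift64-style) with a static seed.

    Illustrative only: same seed -> same sequence everywhere, which is
    what makes block sampling reproducible across platforms.
    """
    def __init__(self, seed=0x9E3779B97F4A7C15):
        self.state = seed & 0xFFFFFFFFFFFFFFFF  # seed must be nonzero

    def next(self):
        x = self.state
        x ^= (x << 13) & 0xFFFFFFFFFFFFFFFF
        x ^= x >> 7
        x ^= (x << 17) & 0xFFFFFFFFFFFFFFFF
        self.state = x
        return x
```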
Thanks for your review. I think FLIP is a better quality metric than PSNR, but unlike PSNR it is hard to intuitively set a 'proper' target error value for it. Also, my experiments with the 10% block sample on my QuickETC2 test set (comprising 64 textures) did not show any extreme cases; increasing the tested block rate just increased compression overhead. Because my suggestion is an optional feature, I think users can manually reset block sizes if there are quality issues with the block size chosen by the target PSNR value.
I think your concerns with the implementation are reasonable, so my previous paper (written in Korean) used a command-line approach as you mentioned. Avoiding the rand() function is a minor change, so it can be easily modified. https://doi.org/10.15701/kcgs.2022.28.2.21
Writing a new Python (or Java) script is likely possible, but it could take some time. You may keep this request open until then, or closing it now is also fine.
I think users can manually reset block sizes if there are some quality issues with the block size set by the target PSNR value.
They could, but in my experience few developers would. Reviewing and managing settings per-texture is time consuming, so 99% of developers will just use the same settings for all textures of a specific type (type = e.g. diffuse, normals, etc.).
Closing for now.
I am writing to request a review of my modifications based on the following article: Jae-Ho Nah, "Addition of an adaptive block-size determination feature to astcenc, the reference ASTC encoder," Software Impacts, 2023. https://doi.org/10.1016/j.simpa.2023.100569
By passing a target PSNR value as an argument to astcenc, ASTC compression is performed using a block size that yields a PSNR value similar to or higher than the specified target value. In the following example, the block size in the argument list is regarded as the initial block size for the block-size search, and the actual block size for compression is determined by the 'dbtarget' value:

astcenc -cl example.png example.astc 6x6 -medium -dbtarget 40
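The search driven by the 'dbtarget' value can be sketched as: measure the trial PSNR for each candidate footprint and return the most compact one that still meets the target. The footprint list, `psnr_for` callable, and selection order below are illustrative assumptions, not the patch itself:

```python
# A subset of legal 2D ASTC block footprints, smallest (highest quality)
# to largest (highest compression).
BLOCK_SIZES = ["4x4", "5x5", "6x6", "8x8", "10x10", "12x12"]

def pick_block_size(psnr_for, target_db):
    """Return the most compact footprint whose trial PSNR meets the target.

    psnr_for is a caller-supplied callable mapping a footprint string to
    a measured PSNR (e.g. from compressing the 10% sample image). Falls
    back to the highest-quality footprint if nothing meets the target.
    """
    for size in reversed(BLOCK_SIZES):  # try the most compact footprint first
        if psnr_for(size) >= target_db:
            return size
    return BLOCK_SIZES[0]  # nothing met the target: use maximum quality
```

With `-dbtarget 40`, a texture whose 6x6 trial scores 41 dB but whose 8x8 trial scores 38 dB would be compressed at 6x6.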
What are your thoughts on the feature mentioned above? I believe it can be useful in maintaining uniform quality across multiple textures, but the authors of astcenc may have a different perspective. Even if my request is rejected, I would still like to hear the opinions of the authors and maintainers of astcenc.