shssoichiro / oxipng

Multithreaded PNG optimizer written in Rust
MIT License

Better use of local hardware? #561

Open · TPS opened 1 year ago

TPS commented 1 year ago

I almost hate to bring this up (as it seems far-fetched and virtually fantasy at this point), but is there any way to use locally available hardware resources (if any) to improve oxipng's performance in any respect? After an hour of Googling, the best compilation of GPU techniques I found is the C repo at https://github.com/BuzzHari/hp-project. Is there anything useful there, or anywhere else, that could be applied?

AlexTMjugador commented 1 year ago

There are indeed close and interesting connections between models that run efficiently on GPUs and lossless data compression. But as far as I know, there are several major problems with "productionizing" these ideas in software like OxiPNG that have prevented workable solutions from emerging:

This is just my (hopefully somewhat informed?) take on why there are no good "plug this GPU thing to make OxiPNG 200% faster" things out there yet, and it's unlikely there will ever be. Of course, other people are welcome to chime in.

andrews05 commented 1 year ago

Just to add to the above points:

  • A smaller part of oxipng's time is spent on filtering and the heuristic algorithms that try to select the best filter. This may or may not be a GPU-friendly task (I'm not really one to know), but I think if we wanted to improve performance here we would likely look first to SIMD optimisations (see the sketch below).
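
For illustration only (this is not oxipng's actual code): PNG defines five per-scanline filters (None, Sub, Up, Average and Paeth) that transform the raw bytes before deflate sees them. The Up filter, for instance, is a purely elementwise operation, which is why this stage looks like natural territory for auto-vectorisation or explicit SIMD. A minimal Rust sketch:

```rust
/// Illustrative sketch of PNG's Up filter: each output byte is the difference
/// between a scanline byte and the byte directly above it. Every output byte
/// is independent of the others, so simple loops like this are good candidates
/// for compiler auto-vectorisation or hand-written SIMD. Not oxipng's
/// implementation.
fn filter_up(row: &[u8], prev_row: &[u8]) -> Vec<u8> {
    row.iter()
        .zip(prev_row.iter())
        .map(|(&cur, &above)| cur.wrapping_sub(above))
        .collect()
}

fn main() {
    // Toy data: one scanline of 4 RGB pixels and the row above it.
    let prev = [10u8, 20, 30, 12, 22, 32, 14, 24, 34, 16, 26, 36];
    let row = [11u8, 21, 31, 13, 23, 33, 15, 25, 35, 17, 27, 37];
    println!("up-filtered: {:?}", filter_up(&row, &prev));
}
```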

TPS commented 1 year ago

It's frustrating to have a PNG that's been "optimized" by every tool (or combination of tools) one can throw at it, and then find that basic file compression (even an OS's transparent filesystem compression) shrinks it further. It raises the question, "What more could be done?"

andrews05 commented 1 year ago

I would say the answer to your problem is simply "use a more modern image format", but there is one other potential area where advancements in compression and/or performance may be found: AI.

Here's a recent news article about a paper discussing the use of AI for lossless data compression, including comparisons with PNG (though the AI isn't actually producing PNGs itself). But this is way out of my league. And I imagine there's little interest from experts in applying AI to older formats, because it's much more exciting to explore what the AI can do without being constrained by ancient data specifications. Still, in theory, I'm sure AI could be applied to reorder colour palettes, select filters and construct deflate streams to produce optimised PNGs...
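
For context on what a learned model would be competing with: PNG encoders typically choose a filter per scanline using a cheap heuristic, the classic one being "minimum sum of absolute differences": apply each of the five filter types to the row and keep whichever yields the smallest sum of the filtered bytes interpreted as signed values. Below is a minimal, self-contained Rust sketch of that textbook heuristic (illustrative only, with made-up function names, not oxipng's implementation); an AI-based approach could essentially replace the scoring and selection step with a learned model.

```rust
/// Illustrative sketch of the textbook "minimum sum of absolute differences"
/// heuristic for choosing a PNG filter per scanline. Not oxipng's code.

#[derive(Debug, Clone, Copy)]
enum Filter {
    None,
    Sub,
    Up,
    Average,
    Paeth,
}

/// Paeth predictor as defined in the PNG specification.
fn paeth(a: u8, b: u8, c: u8) -> u8 {
    let (a, b, c) = (a as i16, b as i16, c as i16);
    let p = a + b - c;
    let (pa, pb, pc) = ((p - a).abs(), (p - b).abs(), (p - c).abs());
    if pa <= pb && pa <= pc {
        a as u8
    } else if pb <= pc {
        b as u8
    } else {
        c as u8
    }
}

/// Apply one filter type to a scanline, given the previous (unfiltered) row.
/// `bpp` is the number of bytes per pixel for the image's colour type/depth.
fn apply_filter(filter: Filter, row: &[u8], prev: &[u8], bpp: usize) -> Vec<u8> {
    (0..row.len())
        .map(|i| {
            let left = if i >= bpp { row[i - bpp] } else { 0 };
            let up = prev[i];
            let up_left = if i >= bpp { prev[i - bpp] } else { 0 };
            let predicted = match filter {
                Filter::None => 0,
                Filter::Sub => left,
                Filter::Up => up,
                Filter::Average => (((left as u16) + (up as u16)) / 2) as u8,
                Filter::Paeth => paeth(left, up, up_left),
            };
            row[i].wrapping_sub(predicted)
        })
        .collect()
}

/// Score = sum of absolute values of the filtered bytes, treated as signed.
/// Smaller sums tend to compress better under deflate.
fn msad_score(filtered: &[u8]) -> u64 {
    filtered.iter().map(|&b| (b as i8).unsigned_abs() as u64).sum()
}

/// Try all five filters on one row and keep the one with the lowest score.
fn choose_filter(row: &[u8], prev: &[u8], bpp: usize) -> (Filter, Vec<u8>) {
    [Filter::None, Filter::Sub, Filter::Up, Filter::Average, Filter::Paeth]
        .into_iter()
        .map(|f| (f, apply_filter(f, row, prev, bpp)))
        .min_by_key(|(_, data)| msad_score(data))
        .unwrap()
}

fn main() {
    // Toy data: one RGB scanline (bpp = 3) and the row above it.
    let prev = [10u8, 20, 30, 12, 22, 32, 14, 24, 34, 16, 26, 36];
    let row = [11u8, 21, 31, 13, 23, 33, 15, 25, 35, 17, 27, 37];
    let (best, _filtered) = choose_filter(&row, &prev, 3);
    println!("chosen filter: {:?}", best);
}
```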

TPS commented 1 year ago

I get the feeling the AI concept is much like "throw more hardware (GPU 😅, SIMD, threads, memory) at the problem": until someone actually does a proof of concept and a community then develops it further, it's just… vaporware.

TPS commented 1 year ago

> I would say the answer to your problem is simply "use a more modern image format", but there is one other potential area where advancements in compression and/or performance may be found: AI.

Also, my understanding of the article and chart is that the AI did develop some new "modern image format", but one purely custom to the datasets presented, not (necessarily) a general-purpose format like anything developed by humans.

But unless the AI is able to explain it to humans, the algorithm involved in constructing it may remain a "black box" (much like current AI models themselves), so we might never know.

andrews05 commented 1 year ago

Of interest regarding AI: https://cloudinary.com/blog/jpeg-xl-and-automatic-image-quality

TPS commented 1 year ago

Interesting, in that this use of AI seems to be aimed at making lossy compression more efficient.