Irrational-Encoding-Wizardry / guide.encode.moe

A guide for fansubbing
https://guide.encode.moe
Creative Commons Attribution Share Alike 4.0 International

Optimize PNGs and explain adding compressed images to guide #53

Closed OrangeChannel closed 4 years ago

OrangeChannel commented 4 years ago

Reduces repo size from 31 MiB to 20.5 MiB (a 34% decrease) and will probably speed up page loading by a decent amount on the descaling, masking, and video artifact pages. I'd assume it doesn't decrease the actual clone size by much (if it does at all), but I haven't tested.

FichteFoll commented 4 years ago

> Doesn't decrease actual clone size by that much I'd assume (if it does at all) but haven't tested.

I'd imagine it increases the clone size, rather, since git doesn't do binary diffs afaik. By about 20 MiB, that is. Unfortunately, the only way to get rid of the old PNGs is to rewrite history, and I'd be very wary of doing that. However, since this is primarily a website, we should value the user experience over the "dev" experience, and it seems like others agree, so I'm merging without further deliberation beyond this.
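One way to sanity-check that claim on a local clone (a sketch; `git gc` packs loose objects first so that `git count-objects` reports the real on-disk size):

```shell
# Compare packed repository size before and after replacing the PNGs.
# The old PNG blobs stay reachable through history, so "size-pack"
# should grow by roughly the size of the new images rather than shrink.
git gc --quiet                 # pack loose objects first
git count-objects -vH          # note the "size-pack" line

# ...commit the optimized PNGs, then repeat...
git gc --quiet
git count-objects -vH
```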

Starsam80 commented 4 years ago

Have you tried using pingo to recompress the images? On my computer it saved an additional ~4 MiB using its lossless web preset:

```
$ du -s --exclude .git .
31948   .
$ pingo -s9 **/*.png

  pingo:
  -------------------------------------------------------------------------
  46 files => 15063.63 KB - (50.70%) saved
  -------------------------------------------------------------------------
$ du -s --exclude .git .
16880   .
```

FichteFoll commented 4 years ago

Thanks for the comment, @Starsam80. I'm holding off on the merge until @OrangeChannel has looked into that, then, since I don't want to bloat the repo unnecessarily.

OrangeChannel commented 4 years ago

Huh, super interesting actually. I'm not sure how or what pingo is doing, but it's seemingly over 100x as fast as Zopfli was, and it reduced the repo from 31 MiB to 16.3 MiB (a 47.3% decrease). I also compressed the few JPEGs we have with pingo's lossless JPEG mode, but that only saved about 1 KiB total, probably just metadata stripping.

Edit: sadly, pingo isn't open source, and the AUR binary package seems to be broken. It does have a GUI for Windows, though. Also, for the future when WebP is better supported, pingo seems to have a decent WebP optimizer as well.

shssoichiro commented 4 years ago

My main concern with how pingo gets that much extra savings: is it truly lossless? Their website seems to indicate that it is, and we can certainly prove whether the images are identical... But as the maintainer of oxipng, I haven't heard of any lossless techniques that achieve gains at this level. I suppose if I had a tool that compressed 20% better than any other tool, and significantly faster, I wouldn't want to share that secret either.
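Proving the images identical means comparing decoded pixels, not file bytes, since recompression changes the bytes by design. A minimal sketch, assuming Pillow is available (the helper name is hypothetical):

```python
from PIL import Image

def pixels_identical(path_a: str, path_b: str) -> bool:
    """Return True if two images decode to the same pixels.

    Compares mode, size, and raw pixel data, so a 16-bit -> 8-bit
    conversion is caught as a mode mismatch ("I;16" vs "L", say)
    even when the pictures would look the same in a browser.
    """
    a = Image.open(path_a)
    b = Image.open(path_b)
    return a.mode == b.mode and a.size == b.size and a.tobytes() == b.tobytes()
```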

Starsam80 commented 4 years ago

The only thing I noticed after looking into it further was that it converted 16-bit PNG files into 8-bit ones. So yes, technically not truly lossless, but given that Chrome, Firefox, and Edge all fail to display 16-bit colour anyway (from my testing), it shouldn't be much of a deal breaker. For completeness, the affected files are:

encoding/images/dehalod0.png
encoding/images/dh0.png
encoding/images/halos.png
encoding/images/halos0.png
encoding/images/luma0.png
encoding/images/mask_inner0.png
encoding/images/mask_outer0.png
encoding/images/src0.png

OrangeChannel commented 4 years ago

Yeah, from my testing the RGB48 and GRAY16 images (the ones in the list @Starsam80 posted) have been changed to RGB24 / GRAY8. I've hashed a direct screenshot from VS of the non-affected files, and they're all identical; the affected ones, however, have different CRC32s even though the VS screenshot tool always saves as RGB24. I'm assuming this is because whatever dithering method pingo uses is not the same as VS's.

OrangeChannel commented 4 years ago

halos0 is in fact changed slightly in one specific area I've found, and it's a rounding issue from the 16-bit to 8-bit conversion.

G:56 on 8-bit ---> RGB24 (56,56,56) in VSEdit screenshot
G:14588 on 16-bit ---> RGB24 (57,57,57) in VSEdit screenshot

So this GRAY16 precision isn't going to be visible in any browser anyway; the rounding would happen one way or another for anyone viewing the website.
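The two results above are consistent with truncation versus rounding when scaling a 16-bit value down to 8 bits; a minimal sketch of the arithmetic:

```python
# 16-bit gray value observed in the source image
g16 = 14588

# Truncation (drop the low byte), consistent with pingo's output of 56:
trunc = g16 >> 8                      # 14588 // 256 = 56

# Rounding against the full 16-bit range, consistent with VSEdit's 57:
rounded = round(g16 * 255 / 65535)    # 14588 / 257 ≈ 56.76 -> 57

print(trunc, rounded)  # → 56 57
```

A one-level difference like this is exactly the kind of off-by-one that showed up as differing CRC32s above.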

OrangeChannel commented 4 years ago

@Lypheo, do you remember how you exported these images? Did you actually use core.imwri.Write instead of just saving the snapshot? Or did you not use VSEdit at all?

FichteFoll commented 4 years ago

For the purposes of this project, a small but consistent rounding error in files browsers wouldn't display correctly anyway seems like a fine trade-off, imo. Ideally, we should probably export at 8-bit depth in the first place and dither using the best method available, in case the data is at a higher precision in the VS pipeline at that point.

In fact, we might want to document how to properly create screenshots like this so they can be compared more easily on sites such as slowpics, which usually render in the browser.

OrangeChannel commented 4 years ago

I made a script for saving images from VS a while ago, https://github.com/OrangeChannel/my-python-scripts/blob/532a4330e6cad4a994d49d3e52a015e4662037fc/VapourSynth/vscompare.py#L102. Should I just link to that, or give a short explanation of why we should save/dither to RGB24 and upscale the chroma planes?
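For reference, the core of such an explanation might boil down to something like this sketch (assumes a VapourSynth environment; the `save_screenshot` helper, BT.709 matrix, and error-diffusion dither are illustrative choices, not taken from the thread):

```python
import vapoursynth as vs

core = vs.core

def save_screenshot(clip: vs.VideoNode, frame: int, path: str) -> None:
    """Write one frame as an RGB24 PNG (hypothetical helper).

    `path` should contain a number pattern for imwri, e.g. "shot%d.png".
    """
    # resize handles the chroma upsampling to 4:4:4 as part of the
    # YUV -> RGB conversion; dithering down to 8-bit here means the
    # saved PNG matches what a browser can actually display
    rgb = core.resize.Spline36(
        clip[frame],
        format=vs.RGB24,
        matrix_in_s="709",
        dither_type="error_diffusion",
    )
    # requesting the frame is what triggers the actual write
    core.imwri.Write(rgb, "PNG", path).get_frame(0)
```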

FichteFoll commented 4 years ago

Certainly wouldn't hurt to add a sentence or two for why this matters.

OrangeChannel commented 4 years ago

If my new suggestion on https://github.com/Irrational-Encoding-Wizardry/guide.encode.moe/pull/55#issuecomment-590022285 gets merged in, I'll re-base and change the paragraph above the TODO here into an important warning box.

FichteFoll commented 4 years ago

Now that #55 is merged, this one is also clear from my side after the adjustments.

OrangeChannel commented 4 years ago

Rebased and reformatted the previously mentioned paragraph into a "warning box". Ready for merging.