spillerrec / cgCompress

Efficiently store Visual Novel CGs by using a multi-image format that stores all variations of one image in one file. This way, only the differences need to be stored, which reduces the file size significantly.
GNU General Public License v3.0

Split images to reduce pixel count #6

Open spillerrec opened 7 years ago

spillerrec commented 7 years ago

Consider this image: mask 015-cropped While seemingly okay, this could be split into three parts: the left eye, the right eye, and the mouth. Doing so would reduce the number of pixels by 50%, while only increasing the file size by 2% (~200 bytes). Secondly, we would be able to decode the images in parallel, speeding up decoding.
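The region detection described above could be sketched as a flood fill over the alpha mask: find each connected opaque region, take its bounding box, and compare the summed box areas against the full image. This is a minimal illustration, not cgCompress code; the function names and the 0/1 mask representation are my own assumptions.

```python
from collections import deque

def split_regions(mask):
    """Find 4-connected opaque regions and their bounding boxes.
    `mask` is a list of rows of 0/1, where 1 marks a non-transparent pixel."""
    h, w = len(mask), len(mask[0])
    seen = [[False] * w for _ in range(h)]
    boxes = []
    for y in range(h):
        for x in range(w):
            if mask[y][x] and not seen[y][x]:
                # flood-fill one region, tracking its bounding box
                q = deque([(y, x)])
                seen[y][x] = True
                y0 = y1 = y
                x0 = x1 = x
                while q:
                    cy, cx = q.popleft()
                    y0, y1 = min(y0, cy), max(y1, cy)
                    x0, x1 = min(x0, cx), max(x1, cx)
                    for ny, nx in ((cy-1, cx), (cy+1, cx), (cy, cx-1), (cy, cx+1)):
                        if 0 <= ny < h and 0 <= nx < w and mask[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            q.append((ny, nx))
                boxes.append((x0, y0, x1 - x0 + 1, y1 - y0 + 1))  # (x, y, width, height)
    return boxes

def pixel_savings(mask):
    """Fraction of pixels saved by storing each region's bounding box
    instead of the whole canvas."""
    total = len(mask) * len(mask[0])
    kept = sum(bw * bh for _, _, bw, bh in split_regions(mask))
    return 1 - kept / total
```

For example, a mask with two small opaque patches in opposite corners of the canvas would yield two boxes and a large savings fraction.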

spillerrec commented 7 years ago

A more difficult image is this: 3 We can easily separate the mouth, as there is a free line from top to bottom. The two eyes, however, slightly intersect: splitting Again, this reduces the pixel count by 53.5%, with a 2% file size increase with WebP. FLIF, however, gets 30% smaller; investigate what's going wrong! Fewer pixels also means fewer pixels to render/overlay.
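The "free line from top to bottom" idea can be made concrete: a split column is any column that is fully transparent, and the image can then be cut into the column spans between such columns. A small sketch under the same 0/1-mask assumption as above (names are hypothetical):

```python
def free_columns(mask):
    """Columns that are fully transparent from top to bottom."""
    h, w = len(mask), len(mask[0])
    return [x for x in range(w) if all(mask[y][x] == 0 for y in range(h))]

def vertical_split(mask):
    """Split the mask into inclusive column spans (start, end),
    separated by fully transparent columns."""
    free = set(free_columns(mask))
    w = len(mask[0])
    spans, start = [], None
    for x in range(w):
        if x in free:
            if start is not None:
                spans.append((start, x - 1))
                start = None
        elif start is None:
            start = x
    if start is not None:
        spans.append((start, w - 1))
    return spans
```

The same scan transposed gives horizontal splits; intersecting regions like the two eyes need something stronger, since no single straight free line separates them.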

We might need a better separation algorithm. Some images might also overlap more severely, so we would have to make areas transparent. What we need to make sure of is that we gain something significant from splitting two regions. For example, the left eye could be split further into eyebrow and eyeball, but this only reduces the pixel count by 2%. How should we decide if the gain is big enough?

p1 p2 p3
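One way to answer "is the gain big enough" is a simple acceptance test: require a minimum pixel saving and cap the relative file-size growth. The thresholds below are illustrative guesses, not values from the project; the numbers in the thread (50% pixels for ~2% size, versus 2% pixels for the eyebrow/eyeball split) suggest where the line might sit.

```python
def worth_splitting(total_pixels, split_pixels, file_size, size_increase,
                    min_pixel_saving=0.10, max_size_growth=0.05):
    """Accept a split only if it saves a significant share of pixels
    and the file-size penalty stays small. Thresholds are guesses."""
    pixel_saving = 1 - split_pixels / total_pixels
    relative_growth = size_increase / file_size
    return pixel_saving >= min_pixel_saving and relative_growth <= max_size_growth
```

With these thresholds, the eye/mouth split (50% pixel saving, ~2% size growth) passes, while the eyebrow/eyeball split (2% pixel saving) is rejected.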

spillerrec commented 7 years ago

I guess there was never any need to actually require the areas to be disconnected from each other: splitting (33% pixel savings, WebP 1% bigger, FLIF 9% bigger.) I have absolutely no idea how to make an algorithm for that though.

To avoid too much overhead in the ZIP archive and the stack.xml file, we could create a very simple file format which contains the split images: just a very simple header, the positions of the splits, and the offsets to the data. The overhead should be small enough not to worry about (other than the overhead of the image file formats, of course).
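A container like that could be as little as a magic number, a part count, and a fixed-size table of (position, offset, length) entries followed by the image blobs. The layout below is purely hypothetical (the `CGSP` magic and field widths are made up), just to show how small the overhead is: 6 bytes of header plus 12 bytes per part.

```python
import struct

MAGIC = b"CGSP"  # hypothetical magic, not an actual cgCompress format

def pack_splits(parts):
    """parts: list of (x, y, data) tuples. Layout (little-endian):
    magic | u16 count | per part: u16 x, u16 y, u32 offset, u32 length | blobs"""
    header = MAGIC + struct.pack("<H", len(parts))
    entry_size = struct.calcsize("<HHII")
    offset = len(header) + entry_size * len(parts)
    entries, blobs = b"", b""
    for x, y, data in parts:
        entries += struct.pack("<HHII", x, y, offset, len(data))
        offset += len(data)
        blobs += data
    return header + entries + blobs

def unpack_splits(buf):
    """Inverse of pack_splits: recover the (x, y, data) tuples."""
    assert buf[:4] == MAGIC
    (count,) = struct.unpack_from("<H", buf, 4)
    entry_size = struct.calcsize("<HHII")
    parts, pos = [], 6
    for _ in range(count):
        x, y, off, length = struct.unpack_from("<HHII", buf, pos)
        parts.append((x, y, buf[off:off + length]))
        pos += entry_size
    return parts
```

Since the offsets are absolute, each part can be decoded independently, which is exactly what parallel decoding needs.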

Rendering in parallel might also have some considerations, like cache lines and SIMD. It might be worth taking alignment into account when splitting, if that can make those simpler to implement.
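One cheap way to do that: when a split span is chosen, grow it outward so both edges land on a fixed boundary, trading a few extra pixels for SIMD-friendly row starts. The 16-pixel alignment below is an illustrative choice, not a project decision.

```python
def aligned_span(x0, x1, alignment=16):
    """Grow a half-open [x0, x1) column span outward so both edges
    land on alignment boundaries; a few extra pixels can make
    vectorized per-row loops simpler to write."""
    start = (x0 // alignment) * alignment
    end = ((x1 + alignment - 1) // alignment) * alignment
    return start, end
```
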

spillerrec commented 7 years ago

We could also do the splitting after decoding the files, and just store the pre-computed areas. This keeps the runtime memory usage low without affecting compression. Of course this hinders parallel decoding, so perhaps a hybrid method is a good idea. It is also not completely free to do, but it could also just be used to reduce pixel access for the single-access case (decode -> render -> cleanup). Parameters..., parameters...

The decoder could potentially also pre-merge (remove transparency) those sub-images which are only used for specific images, so they are faster to render. The effect might be too small to be worth it though, as it also only makes sense for cases where you switch back and forth, like with character graphics.