Like most compression algorithms, FLIF has something inherently sequential about it: otherwise you wouldn't be able to take advantage of previously decoded pixels to improve the prediction and entropy coding of the current ones.
There might be ways to partially parallelize some parts of it, but it's probably not worth the effort.
However, if you really want to do things in parallel, you can always simply cut the original image into parts, e.g. one subimage per core. You can then encode the subimages independently as separate FLIF files (this might change the overall compression ratio a bit, but probably not by much for most kinds of images), and of course you can then decode each portion of the image in parallel.
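For example, something along these lines would do it — a rough sketch only, assuming Pillow for the splitting and a `flif` encoder binary on the PATH; the tile size and file names are arbitrary:

```python
# Rough sketch: cut an image into tiles and encode each tile as an
# independent FLIF file in parallel. Assumes Pillow and a `flif`
# encoder binary on the PATH; tile size and file names are arbitrary.
import subprocess
from concurrent.futures import ThreadPoolExecutor
from PIL import Image

TILE = 512  # tile edge in pixels; purely a tuning knob

def encode_tile(job):
    src, box, name = job
    Image.open(src).crop(box).save(name + ".png")
    # Threads suffice here: the actual encoding runs in a flif subprocess.
    subprocess.run(["flif", name + ".png", name + ".flif"], check=True)

def encode_tiled(src):
    w, h = Image.open(src).size
    jobs = [(src, (x, y, min(x + TILE, w), min(y + TILE, h)), f"tile_{x}_{y}")
            for y in range(0, h, TILE) for x in range(0, w, TILE)]
    with ThreadPoolExecutor() as pool:  # roughly one subimage per core
        list(pool.map(encode_tile, jobs))

if __name__ == "__main__":
    encode_tiled("input.png")
```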
What about progressive encoding/decoding? I know the point of it is to show parts of the image as the data streams in, but imagine you already have the whole file on disk, so you have access to all the data.
In that case, if you knew the offsets at which each interlacing step starts, wouldn't you in theory be able to pipeline it, so that decoding of the next step could begin while the previous one was still being decoded?
No, that wouldn't work. You need the pixel values of the previous interlacing steps to be able to decode the next one.
Yes, but not all pixels; for any given pixel you just need the adjacent ones, right? So as you decode enough adjacent pixels, could the next interlacing step already start working on the parts that have enough info decoded?
Mind you, I'm not familiar with the FLIF encoder; I was thinking of how PNGs are encoded by row, so I'm pretty sure this could work with that format. But if FLIF is smarter about using the whole image for its predictions, I can imagine it wouldn't work.
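To make concrete what I'm imagining, here is a toy sketch (surely nothing like FLIF's real predictor): a later pass could proceed on a region as soon as the neighbour pixels it predicts from exist there.

```python
# Toy illustration (not FLIF's actual predictor): each pass predicts
# a pixel from neighbours decoded in earlier passes, so a pass could
# start on a region once those neighbours exist there.
def predict(decoded, x, y):
    # Stand-in predictor: average of already-decoded neighbours.
    left = decoded.get((x - 1, y), 0)
    top = decoded.get((x, y - 1), 0)
    return (left + top) // 2

def decode_pass(decoded, positions, residuals):
    # A residual is only meaningful relative to the prediction,
    # which requires the earlier passes' pixel values.
    for (x, y), r in zip(positions, residuals):
        decoded[(x, y)] = predict(decoded, x, y) + r
    return decoded
```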
Why don't you split the image into channels (RGB or a hue-based colorspace) and encode each channel on a thread?
Note: FLIF is designed to exploit the similarity between pixels within a channel and across channels. Hence, any attempt at parallelism will very likely come with compression tradeoffs.
One approach that works without any modifications to FLIF is to tile the image, as has already been suggested. Breaking an image up into smaller tiles allows parallel encoding and decoding, but reduces compression efficiency a bit. It is suitable for very large images (say, when the FLIF compressed size exceeds a megabyte).
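The decode side of such a scheme would look roughly like this — same assumptions as the encoding sketch above (Pillow, a `flif` binary on the PATH, and the `tile_x_y` naming):

```python
# Rough sketch: decode independently-encoded FLIF tiles in parallel
# and stitch them back into one image. Assumes the tiles and names
# produced by the encoding sketch above.
import subprocess
from concurrent.futures import ThreadPoolExecutor
from PIL import Image

def decode_tile(name):
    # Threads suffice here: the decoding runs in a flif subprocess.
    subprocess.run(["flif", name + ".flif", name + ".png"], check=True)

def decode_tiled(tiles, size):
    # tiles: list of (basename, (x, y)) pairs; size: (width, height)
    with ThreadPoolExecutor() as pool:
        list(pool.map(decode_tile, [name for name, _ in tiles]))
    out = Image.new("RGB", size)
    for name, (x, y) in tiles:
        out.paste(Image.open(name + ".png"), (x, y))
    return out
```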
@JobLeonard:

> Yes, but not all pixels; for any given pixel you just need the adjacent ones, right? So as you decode enough adjacent pixels, could the next interlacing step already start working on the parts that have enough info decoded?
The problem is that the bit stream from FLIF's entropy decoder can't be fast-forwarded: it is intertwined with the high-level pixel data (the coder is context-adaptive). In PNG, by contrast, the entropy codec is not adaptive, so its stream can be decoded independently of the higher-level processing.
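A toy model of the difference — just an illustration of what "context adaptive" implies, not FLIF's actual coder, and `arith_decode_bit` is a hypothetical stand-in for a real arithmetic decoder:

```python
# Toy adaptive binary model: the probability used for symbol k
# depends on every symbol decoded before it, so you cannot jump to
# symbol k without decoding symbols 0..k-1 first. A non-adaptive
# codec (as in PNG) uses fixed tables, so its bit stream can be
# decoded ahead of the pixel reconstruction.
class AdaptiveModel:
    def __init__(self):
        self.ones, self.total = 1, 2  # smoothed counts

    def p_one(self):
        return self.ones / self.total

    def update(self, bit):
        self.ones += bit
        self.total += 1

def decode_bits(n, arith_decode_bit):
    # arith_decode_bit(p) is a hypothetical stand-in for the real
    # arithmetic decoder; the point is that each call needs the
    # model state produced by all previous calls.
    model, out = AdaptiveModel(), []
    for _ in range(n):
        bit = arith_decode_bit(model.p_one())
        model.update(bit)
        out.append(bit)
    return out
```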
@gaming-hacker:

> Why don't you split the image into channels (RGB or a hue-based colorspace) and encode each channel on a thread?
This can be done outside the FLIF format, by the user. It is similar to the tiled approach, but the split would be by channel. The two approaches could also be combined.
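In code, the per-channel variant could look roughly like this (same assumptions as the tiling sketch: Pillow plus a `flif` binary on the PATH; the channel file names are arbitrary):

```python
# Rough sketch: split an RGB image into channels and encode each as
# a separate grayscale FLIF file in parallel. Done entirely outside
# the FLIF format; assumes Pillow and a `flif` binary on the PATH.
import subprocess
from concurrent.futures import ThreadPoolExecutor
from PIL import Image

def encode_channel(job):
    channel, name = job
    channel.save(name + ".png")
    subprocess.run(["flif", name + ".png", name + ".flif"], check=True)

def encode_channels(src):
    r, g, b = Image.open(src).convert("RGB").split()
    with ThreadPoolExecutor() as pool:
        list(pool.map(encode_channel,
                      [(r, "chan_r"), (g, "chan_g"), (b, "chan_b")]))
```

Note that this directly forfeits the cross-channel similarity mentioned above, so the compression tradeoff is likely larger than with tiling.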
Currently the FLIF encoder appears to be single-threaded. Is this a natural limitation of the MANIAC algorithm, or is multithreading absent simply because of the general difficulty of writing multithreaded code?