These sound like nice changes, but they are a very low priority for me. If you find yourself with time to spare, any (non-breaking) improvements would be appreciated!
Yes, I totally agree this may go on the bottom of the "low priority" pile :). If I find some time I'll definitely look into it more, but that's a big if for myself too. I'd also have to learn Julia again after not messing with it for a few years ;).
It would be fun to try though, so I'll write down the ideas I have at the moment so I can pick them up later. I guess the best way to introduce a non-breaking change would be something like:
```julia
@enum GIFDITHER FloydSteinberg=1 Bayer=2

function gif(anim::Animation, fn = (isijulia() ? "tmp.gif" : tempname()*".gif");
             fps::Integer = 20, dither::GIFDITHER = FloydSteinberg)
    # ... existing implementation, choosing the encoder's dither mode from `dither` ...
end
```
(hope I got the syntax right, like I said: it's been a while)
The default argument would keep the current behaviour. As a start we could make the calls to IM (or FFMPEG) use Bayer dithering if that is passed as an argument.
Other than that it's a matter of playing with those settings.
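For concreteness, the mapping onto encoder options could look something like the sketch below. The helper name and the option values are my assumptions, not anything that exists in the code today:

```julia
# Hypothetical helper: translate the GIFDITHER enum into the dither names that
# ffmpeg's paletteuse filter accepts.
ffmpeg_dither_option(d::GIFDITHER) = d == Bayer ? "bayer" : "floyd_steinberg"
```

The `gif` function would then splice that value into whatever filter string it already builds for the encoder.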
@JobLeonard I think this would be cool. Note that we've changed the implementation (on master) to always use ffmpeg.
@mkborregaard: I agree this would be cool, but sadly this is on the giant "I wish I had time for this side-project" pile that every programmer has. You know how it is ;).
However, I just did some internet research and we should be able to get most of these things by changing only two flags for the FFMPEG conversion!
(for the record, I don't have Julia set up on my laptop right now so this would take a lot of time for me to test, but it looks like it should be relatively easy for active contributors to try out)
I found this blog entry from the person who improved GIF animation support in FFMPEG, with tips on how to improve the resulting quality.
As explained at the bottom of the aforementioned blog, one thing to do is to optimise for frame difference.
JuliaPlots currently doesn't optimise for frame difference, which needlessly bloats the images. For example, the Lorenz attractor gif from the documentation page is currently 4.2 MiB; running it through GIMP's "Optimize for GIF" filter brings that down to 2.5 MiB. The savings would have been even greater if this had been done before the quantisation noise was added, since the GIF conversion introduced noise that can't be compressed away:
In the source frames, the only thing changing from frame to frame is the tip of the plot, so that would make things a lot smaller!
According to the linked blog post, in many situations this should also remove most of our error-diffusion problems, since it confines the dithering to the parts of the image that actually change.
This is a simple matter of setting `diff_mode` to `rectangle` in the options of `paletteuse` (see the documentation of `paletteuse`), so a one-line change.
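Concretely, the usual two-pass ffmpeg recipe with that option added would look roughly like the sketch below. The frame filename pattern and the `fps`, `framedir` and `outfile` variables are placeholders, and I haven't actually run this:

```julia
# Pass 1: build a 256-colour palette from the rendered frames (assumed to be PNGs on disk).
palette = tempname() * ".png"
run(`ffmpeg -framerate $fps -i $framedir/%06d.png -vf palettegen -y $palette`)

# Pass 2: encode the gif; diff_mode=rectangle re-dithers only the bounding
# rectangle of what actually changed between consecutive frames.
run(`ffmpeg -framerate $fps -i $framedir/%06d.png -i $palette -lavfi paletteuse=diff_mode=rectangle -y $outfile`)
```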
Now, this wouldn't always be so effective; in the waves example the bounding rectangle is almost as big as the original image:
However, the next suggestion might help with that.
The same blog linked above mentions that we can change what is prioritised during palette generation:
> In the `palettegen` filter, the main and probably only tweaking you will want to play with is the `stats_mode` option. This option will basically allow you to specify if you are more interested in the whole/overall video, or only what's moving. If you use `stats_mode=full` (the default), all pixels will be part of the color statistics. If you use `stats_mode=diff`, only [the pixels that differ] from previous frame will be accounted.
This has the potential to add some artefacts (as shown in the blog), but I think the odds of that are very slim in this particular context: we're not converting regular videos with photographic backgrounds. Instead, the input is a plot with a static, mostly white background. In fact, the opposite is likely true: if we don't set this flag, those background colours would probably be over-represented in the histograms used during palette generation.
FFMPEG documentation on `palettegen`
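Put together with the previous flag, only the palette-generation pass changes (same placeholders and caveats as the sketch above):

```julia
# Gather palette statistics only from pixels that differ between frames, so the
# static background doesn't dominate the 256 available colours.
run(`ffmpeg -framerate $fps -i $framedir/%06d.png -vf palettegen=stats_mode=diff -y $palette`)
```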
So the previous two ideas I suggest implementing immediately (it's just two lines to change, after all), but this one is more of a "look at this and consider whether it's worth the change" thing.
I just came across gif.ski the other day, which is a new CLI gif encoder specifically created to maximise GIF image quality. It has Windows, Mac and Linux binaries, and was made by Kornel Lesiński, the guy behind pngquant
and ImageOptim - he probably knows what he's doing ;). The results speak for themselves:
> It uses per-frame palettes to the maximum. It looks at next and previous frame to select only the colors that are actually needed, and then uses dithering that's aware of surrounding frames and can seamlessly integrate dithering from previous frame to the next (sort of like 3D dithering rather than just 2D).
>
> And it uses pngquant to generate palettes, and pngquant is quite serious about maximizing quality.

— comment by Kornel on reddit
It also does the aforementioned frame diffing before applying quantisation, although this example gif changes so much you barely notice:
Of course, the question is whether these results would be better for this particular use-case - the tool seems aimed at optimising color fidelity for converting regular video to GIF. Close, but not quite the same use-case. It also generates a new palette per frame. So the resulting GIF would probably be a lot larger, for a visual improvement that is not quite worth it.
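For completeness, if someone does want to experiment with it: since the frames are already rendered to disk, the invocation would be roughly the sketch below (untested; double-check the flags against the gif.ski README, and the `framedir`/`outfile` variables are placeholders):

```julia
# gif.ski consumes the individual PNG frames directly; interpolating the vector
# into the command passes one path per argument.
frames = sort(filter(f -> endswith(f, ".png"), joinpath.(framedir, readdir(framedir))))
run(`gifski --fps $fps -o $outfile $frames`)
```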
Wow thanks for this amazing introduction to the wonders of ffmpeg :-) Would you mind putting your suggested changes in a PR? (the ones not involving gif.ski, though thanks for that ref too).
Just to be clear: I made these changes in the GitHub web editor; I have not tested them myself, nor even checked whether I set the flags up in a way that ffmpeg understands.
So I leave that part up to you ;)
Thanks a million for this!
I was just going to propose supporting a visually stable ditherer, but then I looked up some background info to get that started and realised that this might be worth a discussion of its own. Not exactly a high-priority issue, but still a fun enhancement to discuss I think.
The starting point was that I noticed that the (beautiful) example gifs have a lot of areas that should not change from frame to frame, yet still show a different noise pattern in every frame. For example, zoom in on the grey grid in the background of the Lorenz attractor gif:
This noise is due to the error diffusion methods used. However, there are stable GIF encoders that do not suffer from this. Since plots usually have few colours and change very little from frame to frame, I would expect this kind of optimisation to greatly reduce size and improve visual quality at the same time. Since you're using ImageMagick with FFMPEG as a fallback I've looked at what options they provide.
For IM there is this gist exploring ImageMagick's GIF encoding options that has some suggestions and also warns against the default settings:
Perhaps I missed it, but glancing at the code it appears that you use the default settings so the output GIF likely suffers from the same issues.
The author then suggests using a custom colour map based on the complete, uncompressed input frames, combining that with an ordered dithering pattern to stabilise the frames, and then adding transparency to further optimise the gif.
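My rough approximation of that recipe, expressed as Julia calls to ImageMagick, would be something like the sketch below. It is untested, the threshold map and levels are placeholders, and the gist's actual commands are more careful than this:

```julia
frames = sort(filter(f -> endswith(f, ".png"), joinpath.(framedir, readdir(framedir))))
# Ordered-dither every frame, share one colour table across all of them (+map),
# and let IM keep only the pixels that change (-layers OptimizeTransparency).
# -delay is in 1/100ths of a second, so 5 is roughly 20 fps.
run(`convert -delay 5 -loop 0 $frames -ordered-dither o8x8,8 +map -layers OptimizeTransparency $outfile`)
```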
For the FFMPEG fallback we could try the `a_dither` option - the examples also show the Bayer dithering pattern. It is fairly simple code, and it is also available in FFMPEG. ImageMagick does not seem to support it directly as far as I can tell, but it does let you customise the ordered dithering pattern, so something like it could be implemented.

Just some thoughts to get a discussion started. I'm not suggesting one solution over the other at this point, but I think it's a topic worth exploring.
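To make the Bayer part of the FFMPEG idea above concrete, the two-pass variant would look roughly like the sketch below. I'm showing `paletteuse`'s Bayer mode rather than `a_dither`, because I'm not sure how (or whether) `a_dither` is exposed on ffmpeg's command line; the frame pattern and variables are placeholders:

```julia
# Bayer is an ordered (position-based) dither, so static regions come out identical
# in every frame; bayer_scale (0-5) trades pattern visibility against banding.
palette = tempname() * ".png"
run(`ffmpeg -framerate $fps -i $framedir/%06d.png -vf palettegen -y $palette`)
run(`ffmpeg -framerate $fps -i $framedir/%06d.png -i $palette -lavfi paletteuse=dither=bayer:bayer_scale=3 -y $outfile`)
```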
EDIT: replaced dead link with custom zoomed in gif