image-rs / jpeg-decoder

JPEG decoder written in Rust
Apache License 2.0

jpeg-decoder is slower than libjpeg-turbo #155

Open Shnatsel opened 4 years ago

Shnatsel commented 4 years ago

jpeg_decoder::decoder::Decoder::decode_internal seems to take 50% of the decoding time, or over 75% when using Rayon, because this part is not parallelized. This part alone takes more time than libjpeg-turbo takes to decode the entire image.

It appears that jpeg-decoder reads one byte at a time from the input stream and executes some complex logic for every byte, e.g. in HuffmanDecoder::read_bits and a number of other functions called from decode_internal. I suspect that performing a single large read (a few KB in size), then using something that lowers to memchr calls to find marker boundaries, would be much faster.
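Roughly what I have in mind (a sketch only, assuming the memchr crate; real marker parsing must also handle the 0x00 stuffing byte that follows 0xFF in entropy-coded data):

use std::io::Read;

// Hypothetical, not jpeg-decoder's code: read the stream in large chunks
// and let memchr locate candidate 0xFF marker bytes, instead of pulling
// one byte at a time through the reader.
fn find_marker_candidates(mut input: impl Read) -> std::io::Result<Vec<usize>> {
    let mut buf = vec![0u8; 8 * 1024]; // one large read instead of many tiny ones
    let mut offsets = Vec::new();
    let mut base = 0usize;
    loop {
        let n = input.read(&mut buf)?;
        if n == 0 {
            break;
        }
        // memchr_iter lowers to a SIMD-accelerated byte search.
        offsets.extend(memchr::memchr_iter(0xFF, &buf[..n]).map(|pos| base + pos));
        base += n;
    }
    Ok(offsets)
}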

Profiled using this file: https://commons.wikimedia.org/wiki/File:Sun_over_Lake_Hawea,_New_Zealand.jpg via image crate, jpeg-decoder v0.1.19

Single-threaded profile: https://share.firefox.dev/30ZTmks Parallel profile: https://share.firefox.dev/3dqzE49

lovasoa commented 4 years ago

Did you use a BufReader for this test?

Shnatsel commented 4 years ago

Yes. Here's the code used for testing:

fn main() -> std::io::Result<()> {
    let path = std::env::args().nth(1).unwrap();
    let _ = image::io::Reader::open(path)?
        .with_guessed_format()
        .unwrap()
        .decode()
        .unwrap();
    Ok(())
}

image::io::Reader::open does require BufRead: https://github.com/image-rs/image/blob/0b21ce8bc8d0b697964820e649fd40127ef404fa/src/io/reader.rs#L124

Shnatsel commented 4 years ago

Initial experiments with buffering are available in the buffered-reads branch but do not demonstrate significantly better results so far.

Shnatsel commented 4 years ago

jpeg_decoder::huffman::HuffmanDecoder::read_bits accounts for 23% of all time spent, performs byte-by-byte reads, and spends most of its time calling std::io::Read::read_exact. It also carries extra complex logic because it cannot return a byte it has already consumed back to the reader. So that's probably where buffered reads would actually make a difference.
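To illustrate the alternative (a hedged sketch, not a patch for the crate): decode from an in-memory slice and keep a 64-bit bit buffer, so the hot path never calls into std::io::Read at all:

// Giesen-style bit reader over a slice. A real JPEG decoder must also
// skip the 0x00 stuffing byte after 0xFF in entropy-coded data, which
// is omitted here for brevity.
struct BitReader<'a> {
    data: &'a [u8],
    pos: usize, // next byte to load into the bit buffer
    bits: u64,  // pending bits, aligned to the most significant bit
    count: u32, // number of valid bits in `bits`
}

impl<'a> BitReader<'a> {
    fn new(data: &'a [u8]) -> Self {
        BitReader { data, pos: 0, bits: 0, count: 0 }
    }

    fn refill(&mut self) {
        // Top up the buffer a whole byte at a time.
        while self.count <= 56 && self.pos < self.data.len() {
            self.bits |= (self.data[self.pos] as u64) << (56 - self.count);
            self.pos += 1;
            self.count += 8;
        }
    }

    // Returns the next `n` bits (1..=57), MSB first; assumes enough input remains.
    fn read_bits(&mut self, n: u32) -> u64 {
        debug_assert!((1..=57).contains(&n));
        if self.count < n {
            self.refill();
        }
        let out = self.bits >> (64 - n);
        self.bits <<= n;
        self.count -= n;
        out
    }
}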

thomcc commented 4 years ago

Came across the link to this on Zulip. For what it's worth, there's a very good series on Fabien Giesen's blog about doing bitwise I/O performantly in compressors, if you haven't seen it before:

Sorry if this is old news.

Shnatsel commented 3 years ago

I've done some more profiling and tinkering, and I believe my earlier assumptions were incorrect. In parallel mode, most of the time is spent in jpeg_decoder::idct::dequantize_and_idct_block_8x8_inner. Here's a finer-grained profile to back that up.

I've also verified this experimentally by speeding up that function and seeing it reflected in end-to-end performance gain.

This is really good news: the function is self-contained and takes up 75% of the end-to-end execution time, so any optimization to it translates directly into a large end-to-end decoding speedup. The function can be found here.
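For reference, this is the transform the function computes, written as a naive textbook version (the crate's actual code uses a fast factored integer IDCT, not this quadruple loop):

use std::f32::consts::{FRAC_1_SQRT_2, PI};

// Naive dequantize + inverse DCT for one 8x8 block: multiply each
// coefficient by its quantization-table entry, apply the 2D IDCT,
// then level-shift and clamp the samples to the u8 range.
fn dequantize_and_idct_8x8(coeffs: &[i16; 64], qt: &[u16; 64], out: &mut [u8; 64]) {
    for y in 0..8 {
        for x in 0..8 {
            let mut sum = 0.0f32;
            for v in 0..8 {
                for u in 0..8 {
                    let cu = if u == 0 { FRAC_1_SQRT_2 } else { 1.0 };
                    let cv = if v == 0 { FRAC_1_SQRT_2 } else { 1.0 };
                    let f = coeffs[v * 8 + u] as f32 * qt[v * 8 + u] as f32;
                    sum += cu * cv * f
                        * ((2 * x + 1) as f32 * u as f32 * PI / 16.0).cos()
                        * ((2 * y + 1) as f32 * v as f32 * PI / 16.0).cos();
                }
            }
            out[y * 8 + x] = (sum / 4.0 + 128.0).round().clamp(0.0, 255.0) as u8;
        }
    }
}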

lovasoa commented 3 years ago

See my pull request that uses SIMD for this function: https://github.com/image-rs/jpeg-decoder/pull/146

Shnatsel commented 3 years ago

After looking at it some more I don't think we can do much here without parallelization and/or SIMD, since the IDCT algorithm appears to be identical to the fallback one in libjpeg-turbo (which normally uses hand-written assembly with SIMD instructions).

Shnatsel commented 3 years ago

After looking at IDCT some more, particularly the threaded worker, there's really no reason why it cannot be parallelized by component: components are already decoded independently, and 95% of the infrastructure is already there. https://github.com/image-rs/jpeg-decoder/blob/master/src/worker/threaded.rs already does most of the heavy lifting, but doesn't split the image by component. This should be a nearly flat 3x speedup for everything except grayscale images.
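Roughly the shape of it (hypothetical types, not the actual worker API; dequantize_and_idct_8x8 is the sketch from the earlier comment):

// Hypothetical per-component storage; the real threaded worker passes
// row data through channels instead.
struct Component {
    coeffs: Vec<[i16; 64]>, // one entry per 8x8 block
    qt: [u16; 64],
    pixels: Vec<[u8; 64]>,  // same length as `coeffs`
}

fn idct_components_in_parallel(components: &mut [Component]) {
    // Components share no data, so each one can run on its own thread
    // without locking; scoped threads let us borrow them directly.
    std::thread::scope(|s| {
        for c in components.iter_mut() {
            s.spawn(move || {
                for (block, out) in c.coeffs.iter().zip(c.pixels.iter_mut()) {
                    dequantize_and_idct_8x8(block, &c.qt, out);
                }
            });
        }
    });
}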

Shnatsel commented 3 years ago

I've opened https://github.com/image-rs/jpeg-decoder/pull/168 for parallelizing IDCT. We can combine it with SIMD later to hopefully outperform libjpeg-turbo in the future.

Sadly it doesn't do all that much for performance because we get bottlenecked by the reader thread instead, as described in the original post. Most of the time is now spent in jpeg_decoder::decoder::decode_block.

It's time to dust off those BufReader optimizations that didn't seem to do anything! Nope, the branch buffered-reads still makes no difference. It's slightly worse, if anything.

Profile after IDCT parallelization

lovasoa commented 3 years ago

Is that the profile for a release build? It contains calls to functions like core::num::wrapping::::sub that I would have expected to be inlined in a production build.

Shnatsel commented 3 years ago

They're inlined! perf is just that good. I'm using this in Cargo.toml:

[profile.release]
debug = true

and profiling with perf record --call-graph=dwarf so that it uses debug info to see into inlined functions.

willcrichton commented 3 years ago

Just another data point: I'm using jpeg-decoder via the image crate in a WASM project. I've noticed that loading JPEGs is very slow, roughly 200ms to decode a 2048x2048 image. Here's a screenshot of the Chrome profile of a single load, along with the most common function calls at the bottom.


It seems like most of the time is spent in color_convert_line_ycbcr. I don't see that mentioned in the thread, so perhaps a different kind of bottleneck for WASM?
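For context, the per-pixel math that function performs, sketched with the JFIF (BT.601 full-range) formulas; the crate's real implementation works on whole lines with fixed-point arithmetic:

// Convert one YCbCr pixel to RGB. Cb and Cr are centered on 128.
fn ycbcr_to_rgb(y: u8, cb: u8, cr: u8) -> [u8; 3] {
    let y = y as f32;
    let cb = cb as f32 - 128.0;
    let cr = cr as f32 - 128.0;
    let r = y + 1.402 * cr;
    let g = y - 0.344136 * cb - 0.714136 * cr;
    let b = y + 1.772 * cb;
    [
        r.clamp(0.0, 255.0) as u8,
        g.clamp(0.0, 255.0) as u8,
        b.clamp(0.0, 255.0) as u8,
    ]
}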

lovasoa commented 3 years ago

In what situation would you want to decode a JPEG in WASM? You would have to ship a large WASM JPEG decoder to your users, and it is always going to run slower than the native JPEG decoder in their browser. If you have a project that handles images in WASM, I would suggest handling the image loading and decoding with native browser APIs, and passing only a Uint8Array containing the pixels to your WASM.

willcrichton commented 3 years ago

@lovasoa yes I could implement all that. It's just significantly more convenient to use image, since it works cross-platform and my app also targets native. If the JPEG decoder were fast enough then I wouldn't bother with platform-specific code.

HeroicKatora commented 3 years ago

@willcrichton This would be a more useful data point if you submitted traces, not screenshots. Spending 30% of the time in memset and memcpy is surely not optimal either, and anyone debugging this will want to know where in the call graph they occur.

willcrichton commented 3 years ago

Sure thing, here's the trace. wasm-jpeg-decoder.json.zip

Shnatsel commented 3 years ago

I'm afraid that JPEG decoding will always be significantly slower in WASM than it is in native code. It's very computationally expensive and relies on SIMD and/or parallelization to perform well, and WASM allows neither.

willcrichton commented 3 years ago

For the record, I implemented a web image loader: https://github.com/willcrichton/learn-opengl-rust/blob/88c0282be6bc855dd52d61e5395c3fa1df2c3fc4/src/io.rs#L54-L107

I haven't done a rigorous benchmark, but based on my observations from the traces:

Traces for the interested. traces.zip

lovasoa commented 3 years ago

@willcrichton: 😎 cool, this looks very useful; you should publish it as a small crate on crates.io! One small remark: maybe I read too quickly, but it looks like you wait for the image to fully load before creating your canvas and context. So your CPU will idle while the image is being downloaded, then be busy exclusively decoding the image (probably on a single core), then creating the canvas.

Edit: Here is a small demo: http://jsbin.com/xunatebovu/edit

Shnatsel commented 2 years ago

As of version 0.2.6, on a 6200x8200 CMYK image, jpeg-decoder is actually faster than libjpeg-turbo on my 4-core machine!

Without the rayon feature it's 700ms for jpeg-decoder vs 800ms for libjpeg-turbo. And according to perf it's only utilizing 1.38 CPU cores, not all 4, so similar gains should be seen on dual-core machines as well.

The rayon feature is not currently usable due to #245, but once it is fixed I expect the decoding time to drop to 600ms.

Even without any parallelism, jpeg-decoder is within striking distance of libjpeg-turbo: 850ms as opposed to 800ms.

Shnatsel commented 2 years ago

Oops. I fear the celebration was premature.

Now that I've tested it on a selection of photos, it appears that jpeg-decoder is still considerably slower than libjpeg-turbo even with parallelism: decoding a corpus of photos takes 6 seconds with libjpeg-turbo and 10 seconds with jpeg-decoder (measured without rayon so far because of #245).

Huffman decoding continues to be the bottleneck. In fact, on 3000x4000 photos, Huffman decoding alone takes about as much time as libjpeg-turbo's entire decoding process.