John-Nagle opened 2 years ago
The Second Life protocol is my main use-case for jpeg2k. Right now I have only used jpeg2k with the Bevy engine. Its asset-decoding support doesn't provide a way to ask for low-res textures, so it wasn't a big priority to expose the full OpenJpeg interface.
There is some support right now for lower-res decoding (to get a smaller texture when the full resolution isn't needed). I haven't tried OpenJpeg with only part of the image file. When loading the image from a file, OpenJpeg can seek so that it only reads what it needs. So if a lower resolution or a small decode area is requested in the decode parameters, it will only read what it needs. I don't think the OpenJpeg streams were designed for doing partial reads over the network.
You can see the current decode parameters in the example: https://github.com/Neopallium/jpeg2k/blob/main/examples/convert_jp2.rs
I haven't looked into how the Viewer decides how many bytes to request (via HTTP byte range requests). The J2K header might provide some of this info, but the first HTTP request would most likely always ask for the same number of bytes. A "smart" asset server could possibly store an index/metadata extracted from the J2K image and return the J2K header plus the first resolution level back to the viewer. The full J2K spec also has JPIP (Part 9), which defines tools for supporting incremental and selective access to imagery and metadata in a networked environment. It was designed for this use-case, but I don't think the SL viewers use it.
I wasn't able to find many examples of how to use OpenJpeg when making this crate, so the first release was just about getting it to work. I am interested in feedback on what API to expose (wrapping all unsafe access to openjpeg-sys).
I can read, say, 2K bytes, and then ask for a decode. That's what Second Life viewers do. They ask the HTTP server for a part of the file. Can I get the info that tells me what resolutions are available and how much of the file they need? Or simply say "here's a vector of bytes, give me the best resolution in there."
It seems that doing progressive decoding is not as easy as I thought.
Progressive download/decoding should work like:
I have just done some testing with partially downloaded J2C (Jpeg 2000 codestream, which is the format SL uses). I can get OpenJpeg to decode the header/codestream info, but it is failing to decode a low-res image even when requesting only a single layer and the lowest resolution. I pushed some code changes to jpeg2k that gives access to the header/codestream info.
This happens even when using j2c data captured from traffic between the SL Viewer and asset server. From what I have seen, the SL viewer always requests the first 600 bytes of each texture before requesting more. One texture was downloaded in chunks: 600, 5,545, 18,433 bytes. I haven't confirmed whether the Viewer was progressively displaying that texture.
So far I haven't been able to find any details on how to do progressive decoding with OpenJpeg.
Right. I have all the priority queue stuff running in Rust. But I download the whole image, convert it to a PNG, and reduce it in size to simulate reading part of the JPEG 2000 file.
The SL viewers use the Kakadu library if built by Linden Lab or Firestorm. If you build it yourself, which I used to do, it uses OpenJPEG, unless you buy a Kakadu license. Here's a discussion of the current build procedure. So it does seem to work. I've built it myself in the past, but don't have the current build environment set up.
The SL viewer makes the same calls (`opj_decode`) when using OpenJpeg, but it uses an older version, 1.4 or 1.5 (except on Linux, where it is 2.0). I just did a quick test with those versions (using OpenJpeg's CLI tools): 1.4 and 1.5 decoded a partial j2c file (600 bytes) to a png file successfully, but versions >= 2.0 fail (tested 2.0 and 2.1). So this seems to be a regression in OpenJpeg.
For now you can use this crate to decode the textures directly to the resolution that you need (LOD style). For that you just need the `reduce` decode parameter, which is already supported (see `examples/convert_jp2.rs`). Until partial decoding is fixed in OpenJpeg, you will still need to download the full image.
One improvement to the API would be to allow getting the image size before doing the decode step, so that the `reduce` setting can be chosen based on the image size (small images don't really need to be reduced). Or maybe have a "requested maximum resolution" setting and have `jpeg2k` calculate the `reduce` value for you.
Also there is another open-source JPEG 2000 library, Grok, and the crate grokj2k-sys. Its performance is close to Kakadu's.
Grok v9.5.0 (CLI `grk_decompress`) is able to do a partial decode, but the results for two 600-byte tests are not as good (lots of transparent gaps) as OpenJpeg 1.5's.
Originally I was going to support both OpenJpeg and Grokj2k, but failed to get Grok to decode the image (only got the image header info, the component data was NULL).
Until partial decoding is fixed in OpenJpeg, you will still need to download the full image.
OK for now. Would you please file an issue with OpenJPEG to get them to fix that? Thanks.
Grok
I tried Grok. It won't cross-compile from Linux to Windows, or didn't in an earlier version; there was a dependency problem. I need to revisit that. From the issues list, there are a lot of problems with incorrect decoding, but they are getting fixed. I'd suggest revisiting it in a few months. I think that's a good long-term direction.
Grok support in your package would be useful. There is a Grok interface for Rust, "grokj2k-sys" but it's Affero GPL 3.0 licensed, which is very restrictive for a library shim, especially since Grok itself is only 2-clause BSD licensed. If you link grokj2k-sys, your whole program becomes Affero GPL 3.0.
Keep at it, please. Multiple people need a JPEG 2000 library for Rust that Just Works. Thanks.
When I add Grok support, it will be behind a feature flag. Too bad about the AGPL license. It wouldn't be too hard to make a new sys crate for the Grok library.
I'll be trying your package soon. Just got past a big problem in Rend3.
One short-term option would be to backport the openjpeg-sys crate to the 1.5 release. I'm not sure if there are any security issues with that older release. A feature flag could be used to select the older release.
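A rough sketch of how that selection could look in `Cargo.toml` (the `openjpeg-sys-1-5` package and the feature names here are hypothetical, not existing releases):

```toml
# Hypothetical sketch: select between the current sys crate and a backported
# 1.5-based one via feature flags. "openjpeg-sys-1-5" does not exist; it
# stands in for a backported 1.5 sys crate.
[dependencies]
openjpeg-sys = { version = "1.0", optional = true, default-features = false }
openjpeg-sys-legacy = { package = "openjpeg-sys-1-5", version = "0.1", optional = true }

[features]
default = ["openjpeg-2"]
openjpeg-2 = ["dep:openjpeg-sys"]
openjpeg-1-5 = ["dep:openjpeg-sys-legacy"]
```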
Luckily someone had already started to fix decoding of partial downloads in OpenJpeg. Their PR was out-of-date and had some outstanding cleanup. I updated/fixed that PR and submitted a new one: https://github.com/uclouvain/openjpeg/pull/1407
For the time being, I am going to fork openjpeg-sys to use my branch.
Grok itself is only 2-clause BSD licensed
Grok itself is AGPL: https://github.com/GrokImageCompression/grok/blob/master/LICENSE
Hmm. Its license is complex, since some of it is covered by ... the 2-clause BSD License... That makes it very difficult to use.
Its license is complex, since some of it is covered by ... the 2-clause BSD License...
It is originally a fork of OpenJPEG, which is 2-clause BSD licensed, but the Grok-specific changes (and there are tons; it is close to a rewrite) are AGPL only, so for all practical purposes its use is governed by the AGPL.
Based on the license, it seems that supporting Grok in this crate will not happen. It's safer to just fork this repo and create a different crate for Grok later (might not happen).
I should have a new release soon with support for decoding partial j2c streams.
FYI, I pushed the code for version 0.6.0, which has partial decode support. Right now I can't publish it to crates.io while it uses a git dependency.
For now you can use:
jpeg2k = { git = "https://github.com/Neopallium/jpeg2k" }
to get the new version.
This code might be useful to you for converting the j2k image components into a `rend3::types::Texture`: https://github.com/Neopallium/bevy_jpeg2k/blob/fe54b81579b0e4114832298c7b9b142917036927/src/lib.rs#L65
Grok itself is AGPL: https://github.com/GrokImageCompression/grok/blob/master/LICENSE
Oh, right. I saw "2 clause BSD" there, but that's from before their fork.
Grok is a commercial product; there's a pay version. The free version seems to be restricted to avoid it being used much. Oh well.
This code might be useful to you for converting the j2k image components into a `rend3::types::Texture`: https://github.com/Neopallium/bevy_jpeg2k/blob/fe54b81579b0e4114832298c7b9b142917036927/src/lib.rs#L65
That's useful. I wonder what code is generated for
for (r, (g, (b, a))) in r.data().iter().zip(g.data().iter().zip(b.data().iter().zip(a.data().iter()))) {
pixels.extend_from_slice(&[*r as u8, *g as u8, *b as u8, *a as u8]);
}
If the Rust compiler can figure out that this reduces to a memcpy, I'd be really impressed.
I have found that it is best to use iterators for code like this. The Rust compiler can reason better about the bounds and avoid generating bounds checks inside the loop.
I don't think there is any way to use a memcpy for that, since the r, g, b, a components need to be interleaved for the texture. One useful tool for seeing what the compiler produces is this: https://rust.godbolt.org/z/nE4dT3P3K
I might have found a faster way using `flat_map`: https://rust.godbolt.org/z/fEEWqjM9a

The `components_to_pixels_flat_map` version seems to be able to use SIMD instructions and doesn't need to make a function call inside the loop (`extend_from_slice` must make sure there is space available in `pixels`).

Since components -> pixels is going to be common code, I will add helper functions to `jpeg2k`.
https://godbolt.org/ supports many different languages too.
@John-Nagle You can see the new `get_pixels()` method usage here: https://github.com/Neopallium/bevy_jpeg2k/blob/59f18ef2aba1c9c6cdaaf13ee6a3ab39b5b6064b/src/lib.rs#L63
Using `flat_map` instead of a for loop improves performance by 40% for the components -> pixels conversion. The reason for the speed-up is that the Rust compiler can check the component lengths and reason about the final `Vec<u8>` length before looping over the data, and it can produce vectorized code so the loop can process more than one pixel at a time.
Finally got back to this.
I'm trying to get partial decoding to work. All the right stuff seems to be implemented at the jpeg2k, openjpeg-sys, and OpenJpeg levels. But they don't play well together.
The "strict-mode" feature has to be turned on; otherwise jpeg2k silently ignores turning off strict mode in parameters.
So I use, in Cargo.toml,
jpeg2k = {version = "0.6.2", features = ["image", "strict-mode"]}
and got the compile error:
john@Nagle-LTS:~/projects/jpeg2000-decoder/target/debug$ cargo build
Compiling jpeg2k v0.6.2
error[E0425]: cannot find function `opj_decoder_set_strict_mode` in crate `sys`
--> /home/john/.cargo/registry/src/github.com-1ecc6299db9ec823/jpeg2k-0.6.2/src/codec.rs:446:29
|
446 | let res = unsafe { sys::opj_decoder_set_strict_mode(self.as_ptr(), mode as i32) == 1 };
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^ not found in `sys`
So I tried just compiling jpeg2k standalone, getting the latest version with git, then:
cargo build --features strict-mode
This got the same compile error, just compiling jpeg2k by itself. This suggests some kind of versioning problem.
It looks like jpeg2k is at version 0.6.1 in the repository but at 0.6.2 in crates.io. Something is out of sync.
What's puzzling is that it should still work. jpeg2k pulls in
openjpeg-sys = { version = "1.0", default-features = false }
although it really needs 1.0.7 for strict mode to work. However, when I check Cargo.lock, I see
name = "openjpeg-sys"
version = "1.0.7"
so the latest version was used anyway.
Looking inside openjpeg-sys, it turns out that opj_decoder_set_strict_mode was added to openjpeg-sys in 2022, and it is present in version 1.0.7. See https://github.com/kornelski/openjpeg-sys/blob/main/src/ffi.rs#L1092
So this ought to compile. It doesn't.
Ah, I see what's wrong. Crates.io and GitHub are out of sync for openjpeg-sys. Filed an issue over at openjpeg-sys. Don't know if it will do any good.
Sorry I forgot to push 0.6.2 here. It was a small bug fix for 4 component images (RGBA).
`strict-mode` requires the current `main` branch of `openjpeg-sys`. So right now it requires using the git repo instead of crates.io:
openjpeg-sys = { git = "https://github.com/kornelski/openjpeg-sys.git", default-features = false, branch = "main" }
I have also been working on a c2rust port of openjpeg here: https://github.com/Neopallium/openjpeg
It compiles and is drop-in compatible with the C version, though with no threading support (disabled during the c2rust run) until most of the unsafe code has been rewritten.
openjpeg-sys was just updated on crates.io to 1.0.8, adding support for opj_decoder_set_strict_mode. "cargo update" fetched that, and now your package works with "strict mode" off.
Here's an example of a decoded picture:
And this is what happens when you truncate the data from 650678 bytes to 65736 bytes:
So, progressive mode works now!
If I truncate the data too much, I get "Error: Null pointer from openjpeg-sys".
How far can you push this? Down to about 4K bytes, it seems. At 2K bytes, the picture goes to greyscale. Here's the above at 4K bytes:
I published jpeg2k version 0.6.3 with the minimum version set to 1.0.8 for openjpeg-sys.
Maybe if the image is encoded with more resolution levels it will decode with fewer bytes.
I wonder what size the SecondLife client uses when requesting the first chunk of images.
I wonder what size the SecondLife client uses when requesting the first chunk of images.
I'm not sure. The viewer source code is on Github now, with better search tools than the old Bitbucket system. Here's my code for that, not yet in use. My current plan is to request 4K bytes the first time. That will get me an image at least 32 pixels on the longest side. If I need more, I'll make a second request of the asset servers. By that time, I'll know from the first request how big the image is.
In my own viewer, I want to have about one texel per screen pixel. So, no matter how big the image is, I will only request what I need. My current fetcher is running the OpenJPEG command line program in a subprocess, launching it once for each image, and decompressing the whole image, which I then reduce. This is painfully slow, but it got me going.
I like the Rust port idea. The main problem with the OpenJPEG C code is its long history of buffer overflows and CERT advisories. Rust will help, but only if it's safe Rust. What you get out of c2rust looks like C in Rust syntax, with explicit pointer manipulation. You have a big job ahead cleaning that up. I appreciate that you're tackling it.
I'm not sure. The viewer source code is on Github now, with better search tools than the old Bitbucket system. Here's my code for that, not yet in use. My current plan is to request 4K bytes the first time.
I had a packet dump of an SL client (Singularity 1.8.9.8338) talking to OpenSimulator. It looks like the client requests byte range 0-599 first. That might only be enough to get the metadata of the image (width/height). I will try to do some tests with those assets and see if OpenJpeg is able to decode anything useful from just the first 600 bytes. After that first request, the next request varies in size (byte ranges starting at 599 with ends of 8192/1023/1535/767/6143). Either it is using some priority logic to decide how much to request (it doesn't seem to match the full image size) or it uses some metadata from the j2k stream.
I think the official client uses a commercial J2K library, the open source builds seem to use OpenJpeg.
I like the Rust port idea. The main problem with the OpenJPEG C code is its long history of buffer overflows and CERT advisories. Rust will help, but only if it's safe Rust. What you get out of c2rust looks like C in Rust syntax, with explicit pointer manipulation. You have a big job ahead cleaning that up. I appreciate that you're tackling it.
The main reason I went the c2rust route is that OpenJpeg has a large number of test cases, and the generated Rust code compiled and worked. The biggest issue I had with the generated code is that c2rust expands C macros; I have replaced those with Rust macros and removed the duplicated code. It will be a long and slow process.
I do small refactors and rerun the tests; this helps ensure the refactors don't add bugs that would be hard to find later. Once the core code has been ported to safe Rust, I plan to split out the C OpenJpeg interface and make it a wrapper around the safe Rust core.
The main problem with the OpenJPEG C code is its long history of buffer overflows and CERT advisories.
Another short-term solution is to compile the OpenJpeg code to WASM with a simple API (pass raw j2k bytes in, get a simple Image object with header and image data out). I'm not sure if threads are supported when targeting WASM, but if processing many images (SL-style clients), then using a thread pool to process multiple images in parallel would work. WASM engines like `wasmtime` allow spawning multiple instances of the same WASM module (sharing the code, but not memory/stack), so a thread pool can use one instance per thread. If the OpenJpeg code hits a bug, that instance can be released and re-created (clearing the stack/memory).
Someone else did that to safely use Openjpeg in a service: https://github.com/Neopallium/jpeg2k/issues/2
If I truncate the data too much, I get "Error: Null pointer from openjpeg-sys".
Try with just the first 600 bytes. Also I recommend disabling threads in `openjpeg-sys`.
I did some testing using the partial images from the packet dump, and `jpeg2k` was able to decode them, even with just 600 bytes. I didn't try viewing the decoded images. Also, most of them were 128/256/512 square images, so not a great sampling.
I had to disable thread support in openjpeg, because I was getting random crashes. So it seems that progressive decoding doesn't work correctly with threads. Valgrind showed a lot of "Invalid read of size 1" errors in memory that had been freed, so most likely one thread hit a decode error or end-of-stream, which caused shared data to be freed before the other threads had a chance to finish running.
Maybe the error you got was caused by the threading issue.
I used this to build with threads disabled in openjpeg:
jpeg2k = { version = "0.6.3", default-features = false, features = ["image", "strict-mode"] }
I had to disable thread support in openjpeg, because I was getting random crashes
Can you reproduce that with the `opj_decompress` utility? If so, a report to the OpenJpeg issue tracker with a reproducer would be appropriate.
Can you reproduce that with the `opj_decompress` utility? If so, a report to the OpenJpeg issue tracker with a reproducer would be appropriate.
It should also happen with `opj_decompress`, but I haven't tried that. I was pulling the partial image data from a local `sqlite` db holding the HTTP traffic.
I will need to try recreating the crashes with different images, or track down the original image source to make sure that it can be published.
Try with just the first 600 bytes. Also I recommend disabling threads in openjpeg-sys.
First 600 bytes, dump_jp2, built with default options, same image as above truncated to 600 bytes:
> ~/projects/jpeg2000-decoder/samples$ ~/projects/jpeg2k/target/release/examples/dump_jp2 file1-600.jp2
[2023-03-01T19:36:20Z ERROR jpeg2k::codec] "Expected a SOC marker \n"
Error: Null pointer from openjpeg-sys
Rebuilt jpeg2k with
cargo build --examples --release --no-default-features --features image
which should turn threads off. Same result.
The 600-byte file:
@John-Nagle That file is using the JPEG 2000 "file format" (.jp2 extension). Those 600 bytes are about 90% XML metadata, no image data.
All of the images I tested were just the j2k "code-stream" (i.e. just the image data, no extra metadata). It looks like either the SL client or OpenSimulator server are converting uploaded images to the code-stream format.
I think the difference is that the jp2 format is more for editing software and cameras, while the "j2k codestream" format is mostly just for transfer.
I have updated the `examples/convert_jp2.rs` example to support saving as j2k/jp2 file formats, so it can be used to convert jp2 to j2k format.
You can try converting your jp2 image to j2k for testing smaller transfer sizes.
cargo run --release --example convert_jp2 -- file1.jp2 file1.j2k
Ah. That's good info.
If you take too little of the file, though, color info starts to disappear.
600 bytes
Full size.
At 1024 bytes, there's full color.
I'm going to use 1024 as a minimum, since that will fit in one 1500-byte MTU network packet along with the HTTP headers.
I've read a few thousand test textures from Second Life assets, and they all decompress OK with 1024 bytes of input.
Almost all the textures at Bug Island, where troublesome objects are placed for regression testing, decompress OK both at 1024 and at full resolution. Two are unreadable with OpenJPEG:
http://asset-cdn.glb.agni.lindenlab.com/?texture_id=c1f614a5-ffe1-1c68-5ed6-0689a62f6b7d
http://asset-cdn.glb.agni.lindenlab.com/?texture_id=18b67d1e-6496-98d0-4410-c9bd4f56a8b9
Nothing blows up; I'm just getting CodecError("Failed to decode image").
GIMP can't read them either, so I'm not concerned that they can't be decompressed. The important thing is that they not crash the decoder. There's some validation before JPEG 2000 files get uploaded to the asset server, but these made it through validation.
Stability is looking OK. Maybe I don't have to run the decoder in a subprocess.
I spoke too soon. I'm now getting intermittent crashes, including bad memory references:
cargo test --release
... Hundreds of successful decodes...
Asset url: http://asset-cdn.glb.agni.lindenlab.com/?texture_id=36b68663-b68d-9923-bf10-0c55c52426b5
Image stats: Some(ImageStats { bytes_per_pixel: 3, dimensions: (512, 512) })
Reduction ratio: 4, discard level 2, bytes to read = 44236
thread 'decode::fetch_multiple_textures_serial' panicked at 'Fetch failed: Jpeg(CodecError("Failed to decode image"))', src/bin/jpeg2000_decoder/decode.rs:338:51
Asset url: http://asset-cdn.glb.agni.lindenlab.com/?texture_id=b14b75f7-9ece-a673-2760-03fadcd74f09
Image stats: Some(ImageStats { bytes_per_pixel: 4, dimensions: (1024, 1024) })
Reduction ratio: 8, discard level 3, bytes to read = 58982
error: test failed, to rerun pass `--bin jpeg2000_decoder`
Caused by:
process didn't exit successfully: `/home/john/projects/jpeg2000-decoder/target/release/deps/jpeg2000_decoder-8a4d1f19ac02503d --nocapture` (signal: 11, SIGSEGV: invalid memory reference)
Asset url: http://asset-cdn.glb.agni.lindenlab.com/?texture_id=f35781b8-6e5b-ead4-99a8-f98ba592cea5
Image stats: Some(ImageStats { bytes_per_pixel: 4, dimensions: (1024, 1024) })
Reduction ratio: 8, discard level 3, bytes to read = 58982
error: test failed, to rerun pass `--bin jpeg2000_decoder`
Caused by:
process didn't exit successfully: `/home/john/projects/jpeg2000-decoder/target/release/deps/jpeg2000_decoder-8a4d1f19ac02503d --nocapture` (signal: 11, SIGSEGV: invalid memory reference)
Asset url: http://asset-cdn.glb.agni.lindenlab.com/?texture_id=6e75b2fa-83c4-8424-8964-5b176c30f7f0
Image stats: Some(ImageStats { bytes_per_pixel: 3, dimensions: (1024, 1024) })
Reduction ratio: 8, discard level 3, bytes to read = 44236
thread 'decode::fetch_multiple_textures_serial' panicked at 'Fetch failed: Jpeg(CodecError("Failed to decode image"))', src/bin/jpeg2000_decoder/decode.rs:338:51
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
test decode::fetch_multiple_textures_serial ... FAILED
Debug mode, too:
cargo test
...
Asset url: http://asset-cdn.glb.agni.lindenlab.com/?texture_id=36b68663-b68d-9923-bf10-0c55c52426b5
Image stats: Some(ImageStats { bytes_per_pixel: 3, dimensions: (512, 512) })
Reduction ratio: 4, discard level 2, bytes to read = 44236
thread 'decode::fetch_multiple_textures_serial' panicked at 'Fetch failed: Jpeg(CodecError("Failed to decode image"))', src/bin/jpeg2000_decoder/decode.rs:338:51
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
test decode::fetch_multiple_textures_serial ... FAILED
Another failure on same file:
Asset url: http://asset-cdn.glb.agni.lindenlab.com/?texture_id=36b68663-b68d-9923-bf10-0c55c52426b5
Image stats: Some(ImageStats { bytes_per_pixel: 3, dimensions: (512, 512) })
Reduction ratio: 4, discard level 2, bytes to read = 44236
thread 'decode::fetch_multiple_textures_serial' panicked at 'Fetch failed: Jpeg(CodecError("Failed to decode image"))', src/bin/jpeg2000_decoder/decode.rs:338:51
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
test decode::fetch_multiple_textures_serial ... FAILED
To reproduce:
git clone https://github.com/John-Nagle/jpeg2000-decoder.git
If necessary, get from tag "Decoder_Crash"
cargo test fetch_multiple_textures_serial -- --nocapture
or
cargo test fetch_multiple_textures_serial --release -- --nocapture
Do you have threads enabled? Also, you can enable/disable threads using the `OPJ_NUM_THREADS` environment variable.
1 thread:
OPJ_NUM_THREADS=1 cargo test fetch_multiple_textures_serial --release -- --nocapture
All cores:
OPJ_NUM_THREADS=ALL_CPUS cargo test fetch_multiple_textures_serial --release -- --nocapture
@rouault
I have done some testing with the same set of partial j2k images using `opj_decompress`. So far I had to use batch mode:
rm -f ./j2k/partial/*.png; OPJ_NUM_THREADS=ALL_CPUS opj_decompress -allow-partial -ImgDir ./j2k/partial/ -OutFor PNG
to get crashes, and even then some runs complete without crashing. So it is most likely a race condition between the threads.
Running it under `valgrind` seems to make the crash happen quickly:
rm -f ./j2k/partial/*.png; OPJ_NUM_THREADS=2 valgrind opj_decompress -allow-partial -ImgDir ./j2k/partial/ -OutFor PNG
==1701693== Memcheck, a memory error detector
==1701693== Copyright (C) 2002-2017, and GNU GPL'd, by Julian Seward et al.
==1701693== Using Valgrind-3.15.0 and LibVEX; rerun with -h for copyright info
==1701693== Command: opj_decompress -allow-partial -ImgDir ./j2k/partial/ -OutFor PNG
==1701693==
Folder opened successfully
File Number 0 "d5061bbf-03b6-494d-a2f9-8ca3e922e32c_partial_600.j2k"
[INFO] Start to read j2k main header (0).
[INFO] Main header has been correctly decoded.
[INFO] No decoded area parameters, set the decoded area to the whole image
[WARNING] Tile part length size inconsistent with stream length
[INFO] Stream reached its end !
[INFO] Header of tile 1 / 1 has been read.
[WARNING] read: segment too long (87) with max (64) for codeblock 0 (p=0, b=0, r=0, c=0)
==1701693== Thread 3:
==1701693== Invalid read of size 2
==1701693== at 0x48D1B1D: opj_mqc_init_dec_common (mqc.c:434)
==1701693== by 0x48D1B7B: opj_mqc_init_dec (mqc.c:447)
==1701693== by 0x48F7482: opj_t1_decode_cblk (t1.c:2068)
==1701693== by 0x48F61E6: opj_t1_clbl_decode_processor (t1.c:1704)
==1701693== by 0x4897490: opj_worker_thread_function (thread.c:675)
==1701693== by 0x4896E89: opj_thread_callback_adapter (thread.c:392)
==1701693== by 0x4D7B608: start_thread (pthread_create.c:477)
==1701693== by 0x4CA0132: clone (clone.S:95)
==1701693== Address 0x531cd9a is 26 bytes after a block of size 16 in arena "client"
==1701693==
==1701693== Invalid write of size 1
==1701693== at 0x48D1B2B: opj_mqc_init_dec_common (mqc.c:435)
==1701693== by 0x48D1B7B: opj_mqc_init_dec (mqc.c:447)
==1701693== by 0x48F7482: opj_t1_decode_cblk (t1.c:2068)
==1701693== by 0x48F61E6: opj_t1_clbl_decode_processor (t1.c:1704)
==1701693== by 0x4897490: opj_worker_thread_function (thread.c:675)
==1701693== by 0x4896E89: opj_thread_callback_adapter (thread.c:392)
==1701693== by 0x4D7B608: start_thread (pthread_create.c:477)
==1701693== by 0x4CA0132: clone (clone.S:95)
==1701693== Address 0x531cd9a is 26 bytes after a block of size 16 in arena "client"
==1701693==
valgrind: m_mallocfree.c:305 (get_bszB_as_is): Assertion 'bszB_lo == bszB_hi' failed.
valgrind: Heap block lo/hi size mismatch: lo = 80, hi = 16711760.
This is probably caused by your program erroneously writing past the
end of a heap block and corrupting heap metadata. If you fix any
invalid writes reported by Memcheck, this assertion failure will
probably go away. Please try that before reporting this as a bug.
Do you have threads enabled? Also you can enable/disable threads using the OPJ_NUM_THREADS environment variable.
@John-Nagle I see that the `Cargo.toml` has thread support disabled. Also, I can't run the test since the `samples/bugislanduuidlist.txt` file is missing.
Edit: Just realized I can copy the uuids from your message.
Thanks. I thought I had threads turned off. I will check.
Where can I send you the list? I don't want to post it publicly.
You can email it to me (email address in profile).
I am not seeing crashes with the 3 uuids from your message.
In Cargo.toml, I have:
jpeg2k = {version = "0.6.2", default-features = false, features = ["image", "strict-mode"]}
Shouldn't that turn threads off?
Tried:
OPJ_NUM_THREADS=1 cargo test fetch_multiple_textures_serial -- --nocapture
Result, after more than 100 successful decodes:
Asset url: http://asset-cdn.glb.agni.lindenlab.com/?texture_id=4885d078-07cb-8263-515a-6cb72c86a007
Image stats: Some(ImageStats { bytes_per_pixel: 3, dimensions: (512, 512) })
Reduction ratio: 4, discard level 2, bytes to read = 44236
thread 'decode::fetch_multiple_textures_serial' panicked at 'Fetch failed: Jpeg(CodecError("Failed to decode image"))', src/bin/jpeg2000_decoder/decode.rs:338:51
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
test decode::fetch_multiple_textures_serial ... FAILED
Asset url: http://asset-cdn.glb.agni.lindenlab.com/?texture_id=33292f66-0075-571c-7744-ae809d02c73f
Image stats: Some(ImageStats { bytes_per_pixel: 3, dimensions: (1024, 1024) })
Reduction ratio: 8, discard level 3, bytes to read = 44236
thread 'decode::fetch_multiple_textures_serial' panicked at 'Fetch failed: Jpeg(CodecError("Failed to decode image"))', src/bin/jpeg2000_decoder/decode.rs:338:51
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
test decode::fetch_multiple_textures_serial ... FAILED
Then a successful run over 500 different files.
Then
Asset url: http://asset-cdn.glb.agni.lindenlab.com/?texture_id=33292f66-0075-571c-7744-ae809d02c73f
Image stats: Some(ImageStats { bytes_per_pixel: 3, dimensions: (1024, 1024) })
Reduction ratio: 8, discard level 3, bytes to read = 44236
thread 'decode::fetch_multiple_textures_serial' panicked at 'Fetch failed: Jpeg(CodecError("Failed to decode image"))', src/bin/jpeg2000_decoder/decode.rs:338:51
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
test decode::fetch_multiple_textures_serial ... FAILED
jpeg2k = {version = "0.6.2", default-features = false, features = ["image", "strict-mode"]}
Threads should be disabled, since the `threads` feature flag is not enabled.
So it's not a thread problem.
It's an intermittent problem that seems to occur about one to three times per thousand files decompressed.
You can try running the test binary directly: `./target/release/deps/jpeg2000_decoder-d0779005b28d2fb5 --nocapture fetch_multiple_textures_serial`. The file name might be different for you (I'm not sure how it is named), but you can get it from the `cargo test ...` run (look at the start, right after the compile).
Running the test file directly would allow running it under `valgrind`.
Can you double-check your versions of `jpeg2k` and `openjpeg-sys` using `cargo tree`?
cargo tree
jpeg2000-decoder v0.1.0 (/home/john/projects/jpeg2000-decoder)
├── anyhow v1.0.69
├── argparse v0.2.2
├── image v0.23.14
│ ├── bytemuck v1.13.0
│ ├── byteorder v1.4.3
│ ├── color_quant v1.1.0
│ ├── gif v0.11.4
│ │ ├── color_quant v1.1.0
│ │ └── weezl v0.1.7
│ ├── jpeg-decoder v0.1.22
│ │ └── rayon v1.6.1
│ │ ├── either v1.8.1
│ │ └── rayon-core v1.10.2
│ │ ├── crossbeam-channel v0.5.7
│ │ │ ├── cfg-if v1.0.0
│ │ │ └── crossbeam-utils v0.8.15
│ │ │ └── cfg-if v1.0.0
│ │ ├── crossbeam-deque v0.8.3
│ │ │ ├── cfg-if v1.0.0
│ │ │ ├── crossbeam-epoch v0.9.14
│ │ │ │ ├── cfg-if v1.0.0
│ │ │ │ ├── crossbeam-utils v0.8.15 (*)
│ │ │ │ ├── memoffset v0.8.0
│ │ │ │ │ [build-dependencies]
│ │ │ │ │ └── autocfg v1.1.0
│ │ │ │ └── scopeguard v1.1.0
│ │ │ │ [build-dependencies]
│ │ │ │ └── autocfg v1.1.0
│ │ │ └── crossbeam-utils v0.8.15 (*)
│ │ ├── crossbeam-utils v0.8.15 (*)
│ │ └── num_cpus v1.15.0
│ │ └── libc v0.2.139
│ ├── num-iter v0.1.43
│ │ ├── num-integer v0.1.45
│ │ │ └── num-traits v0.2.15
│ │ │ [build-dependencies]
│ │ │ └── autocfg v1.1.0
│ │ │ [build-dependencies]
│ │ │ └── autocfg v1.1.0
│ │ └── num-traits v0.2.15 (*)
│ │ [build-dependencies]
│ │ └── autocfg v1.1.0
│ ├── num-rational v0.3.2
│ │ ├── num-integer v0.1.45 (*)
│ │ └── num-traits v0.2.15 (*)
│ │ [build-dependencies]
│ │ └── autocfg v1.1.0
│ ├── num-traits v0.2.15 (*)
│ ├── png v0.16.8
│ │ ├── bitflags v1.3.2
│ │ ├── crc32fast v1.3.2
│ │ │ └── cfg-if v1.0.0
│ │ ├── deflate v0.8.6
│ │ │ ├── adler32 v1.2.0
│ │ │ └── byteorder v1.4.3
│ │ └── miniz_oxide v0.3.7
│ │ └── adler32 v1.2.0
│ ├── scoped_threadpool v0.1.9
│ └── tiff v0.6.1
│ ├── jpeg-decoder v0.1.22 (*)
│ ├── miniz_oxide v0.4.4
│ │ └── adler v1.0.2
│ │ [build-dependencies]
│ │ └── autocfg v1.1.0
│ └── weezl v0.1.7
├── jpeg2k v0.6.3
│ ├── anyhow v1.0.69
│ ├── image v0.23.14 (*)
│ ├── log v0.4.17
│ │ └── cfg-if v1.0.0
│ ├── openjpeg-sys v1.0.8
│ │ └── libc v0.2.139
│ │ [build-dependencies]
│ │ └── cc v1.0.79
│ └── thiserror v1.0.38
│ └── thiserror-impl v1.0.38 (proc-macro)
│ ├── proc-macro2 v1.0.51
│ │ └── unicode-ident v1.0.6
│ ├── quote v1.0.23
│ │ └── proc-macro2 v1.0.51 (*)
│ └── syn v1.0.109
│ ├── proc-macro2 v1.0.51 (*)
│ ├── quote v1.0.23 (*)
│ └── unicode-ident v1.0.6
├── serde-llsd v0.1.0
│ ├── anyhow v1.0.69
│ ├── ascii85 v0.2.1
│ ├── base64 v0.13.1
│ ├── chrono v0.4.23
│ │ ├── iana-time-zone v0.1.53
│ │ ├── num-integer v0.1.45 (*)
│ │ ├── num-traits v0.2.15 (*)
│ │ └── time v0.1.45
│ │ └── libc v0.2.139
│ ├── enum-as-inner v0.3.4 (proc-macro)
│ │ ├── heck v0.4.1
│ │ ├── proc-macro2 v1.0.51 (*)
│ │ ├── quote v1.0.23 (*)
│ │ └── syn v1.0.109 (*)
│ ├── hex v0.4.3
│ ├── quick-xml v0.22.0
│ │ └── memchr v2.5.0
│ └── uuid v0.8.2
│ └── getrandom v0.2.8
│ ├── cfg-if v1.0.0
│ └── libc v0.2.139
├── ureq v2.6.2
│ ├── base64 v0.13.1
│ ├── flate2 v1.0.25
│ │ ├── crc32fast v1.3.2 (*)
│ │ └── miniz_oxide v0.6.2
│ │ └── adler v1.0.2
│ ├── log v0.4.17 (*)
│ ├── once_cell v1.17.1
│ ├── rustls v0.20.8
│ │ ├── log v0.4.17 (*)
│ │ ├── ring v0.16.20
│ │ │ ├── libc v0.2.139
│ │ │ ├── once_cell v1.17.1
│ │ │ ├── spin v0.5.2
│ │ │ └── untrusted v0.7.1
│ │ │ [build-dependencies]
│ │ │ └── cc v1.0.79
│ │ ├── sct v0.7.0
│ │ │ ├── ring v0.16.20 (*)
│ │ │ └── untrusted v0.7.1
│ │ └── webpki v0.22.0
│ │ ├── ring v0.16.20 (*)
│ │ └── untrusted v0.7.1
│ ├── url v2.3.1
│ │ ├── form_urlencoded v1.1.0
│ │ │ └── percent-encoding v2.2.0
│ │ ├── idna v0.3.0
│ │ │ ├── unicode-bidi v0.3.10
│ │ │ └── unicode-normalization v0.1.22
│ │ │ └── tinyvec v1.6.0
│ │ │ └── tinyvec_macros v0.1.1
│ │ └── percent-encoding v2.2.0
│ ├── webpki v0.22.0 (*)
│ └── webpki-roots v0.22.6
│ └── webpki v0.22.0 (*)
└── url v2.3.1 (*)
OpenJPEG can give me the header info with the image size, and then I can ask for a fraction of the resolution without reading the entire stream. But I don't think you expose that functionality. Is there some way to do that?
Use case is wanting a low-rez version from the asset server for Open Simulator or Second Life. Often, many assets are only read at low-rez because they are for distant objects. So the network connection only reads part of the data and then is closed.
(Current code is running OpenJPEG in a subprocess, and reading too much. Looks like this: https://player.vimeo.com/video/640175119)