FN90 / openjpeg

Automatically exported from code.google.com/p/openjpeg

Loss of decoding quality in 2.0.0 #254

Open GoogleCodeExporter opened 9 years ago

GoogleCodeExporter commented 9 years ago
> What steps will reproduce the problem?

Try to convert the attached image to PNG with opj_decompress (built from fresh sources).

> What is the expected output? 

A converted image with good quality, like the one generated by j2k_to_image from 1.5.1.

> What do you see instead?

Bad image quality.

> What version of the product are you using? On what operating system?

Current SVN sources on Mint Maya (Linux 3.2.0-40-generic-pae #64-Ubuntu i686 i686 i386 GNU/Linux).

> Please provide any additional information below.

The attached image was extracted from a PDF file that was rendered correctly by the MuPDF engine before MuPDF migrated from OpenJPEG 1.5.1 to 2.0.0.

Possibly this issue has the same cause as #252, but no error messages are written during decoding.

Original issue reported on code.google.com by Alexander.V.Kasatkin@gmail.com on 29 Jan 2014 at 3:23

Attachments:

GoogleCodeExporter commented 9 years ago
The last known revision to work with this file is r960.
See also bug #80

Original comment by mathieu.malaterre on 24 Feb 2014 at 8:35

GoogleCodeExporter commented 9 years ago
This issue was updated by revision r2436.

Original comment by mathieu.malaterre on 25 Feb 2014 at 5:15

GoogleCodeExporter commented 9 years ago
Going back in history, it does not look like the changes to jp2_apply_pclr() affected this issue. Reverting to r960 and removing jp2_apply_pclr() shows that the decompression output changed anyway.

Original comment by mathieu.malaterre on 4 Mar 2014 at 11:09

GoogleCodeExporter commented 9 years ago
The dwt.c changes since r960 did not affect the output.

Original comment by mathieu.malaterre on 4 Mar 2014 at 11:12

GoogleCodeExporter commented 9 years ago
If I use kdu_transcode to extract the J2K file or convert to JPX:

 kdu_transcode -i data/input/nonregression/issue254.jp2 -o /tmp/clean.jpx -jpx_layers sLUM,0 Sprofile=PROFILE2

Everything goes well. I suspect the issue is in the weird tile-part organisation.

Original comment by mathieu.malaterre on 4 Mar 2014 at 2:45

GoogleCodeExporter commented 9 years ago
Attaching the hack to read the file properly. Basically the issue can be seen 
directly at the end of the file:

$ jp2file.py issue254.jp2 
[...]
  550577  : New marker: SOT (Start of tile-part)

    Tile       : 23
    Length     : 1055
    Index      : 5
    Tile-Parts : 5
[...]

Now that OpenJPEG is paranoid about the tile-part index and double-checks everything, it will be hard to get this file accepted.

Kakadu seems to cope with this file very well; I guess they assume that the tile-part index (TPsot) *can* be equal to the number of tile-parts (TNsot)...

Original comment by mathieu.malaterre on 4 Mar 2014 at 3:15

Attachments:

GoogleCodeExporter commented 9 years ago
In case this is not clear for people reading the report, the code currently 
reads as:

        if (l_num_parts != 0) { /* the number of tile-parts is provided by this tile-part header (TNsot) */
                [handle bugs]
                l_tcp->m_nb_tile_parts = l_num_parts;
        }

        /* If the number of tile-parts is known, check whether we just read the last one */
        if (l_tcp->m_nb_tile_parts) {
                if (l_tcp->m_nb_tile_parts == (l_current_part + 1)) {
                        p_j2k->m_specific_param.m_decoder.m_can_decode = 1; /* process the last tile-part header */
                }
        }

In this case the check fires the *first* time the current tile-part index equals 4 (num_parts == 5), so the tile-part with index 5 is never processed.
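
To make the failure mode concrete, here is a minimal, self-contained sketch (not OpenJPEG source; all names are invented) of the check above applied to this file's tile 23, which announces TNsot == 5 while the codestream actually carries six tile-parts with TPsot 0..5:

    #include <stdio.h>

    int main(void)
    {
        int nb_tile_parts = 5;                    /* TNsot as written by the encoder */
        int tpsot_in_file[] = {0, 1, 2, 3, 4, 5}; /* what the file really contains */
        int i, can_decode = 0;

        for (i = 0; i < 6; i++) {
            int l_current_part = tpsot_in_file[i];
            if (nb_tile_parts == l_current_part + 1) {
                can_decode = 1; /* fires at TPsot == 4 ... */
                break;          /* ... so the tile-part with TPsot == 5 is never read */
            }
        }
        printf("decode flagged complete at TPsot=%d; can_decode=%d\n",
               tpsot_in_file[i], can_decode);
        return 0;
    }

Compiled as-is, it reports that decoding is flagged complete at TPsot == 4, leaving the sixth tile-part unread.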

Original comment by mathieu.malaterre on 4 Mar 2014 at 3:38

GoogleCodeExporter commented 9 years ago
Low priority, the file is invalid.

Original comment by mathieu.malaterre on 7 Mar 2014 at 2:47

GoogleCodeExporter commented 9 years ago
When I change this code in 'j2k.c' of openjpeg-2.x-r2833:

                opj_read_bytes(p_header_data,&l_num_parts ,1);          /* TNsot */
                ++p_header_data;

                if (l_num_parts != 0) {

to:

                opj_read_bytes(p_header_data,&l_num_parts ,1);          /* TNsot */
                ++p_header_data;
                ++l_num_parts; /* <============= added */

                if (l_num_parts != 0) {

then the image in question '1.jp2' shows the same good quality as
with 'openjpeg-1.5.x'.

And incrementing the value of TNsot SEEMS to do no harm to other
JP2/J2K images.

winfried

Original comment by szukw...@arcor.de on 14 Apr 2014 at 10:42

GoogleCodeExporter commented 9 years ago
And the image '18_1805_a4_2.jp2' from issue 327, which was
a little blurred, is sharp with this change.

The image '18_1805_a4_2.jp2' has a meth value of 3
and an ICC profile. When I relax the condition in
'opj_jp2_read_colr()' to

if(jp2->meth == 2 || jp2->meth == 3)

then the ICC profile is read in and used. This image now
looks just like the one I get with KDU.
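
For readers without the sources at hand, here is a sketch of the relaxed branch, assuming the 2.x layout of 'opj_jp2_read_colr()' (the 'jp2->color' field names and the 3-byte METH/PREC/APPROX prefix follow the 2.x code; this is an illustration of the described change, not the actual diff):

    /* meth == 2 is the JP2 "restricted ICC" method; meth == 3 is the
       JPX "any ICC" method.  Accepting 3 as well means the embedded
       profile is read in and applied instead of being ignored. */
    if (jp2->meth == 2 || jp2->meth == 3) {
        OPJ_UINT32 icc_len = p_colr_header_size - 3; /* profile follows METH, PREC, APPROX */

        jp2->color.icc_profile_len = icc_len;
        jp2->color.icc_profile_buf = (OPJ_BYTE*) opj_malloc(icc_len);
        if (jp2->color.icc_profile_buf == NULL) {
            jp2->color.icc_profile_len = 0;
            return OPJ_FALSE;
        }
        memcpy(jp2->color.icc_profile_buf, p_colr_header_data + 3, icc_len);
    }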

winfried

Original comment by szukw...@arcor.de on 14 Apr 2014 at 3:52

GoogleCodeExporter commented 9 years ago
Thanks a lot for the help.
We've applied both patches from #12 & #13, and it works fine.

So we've switched back to the OpenJPEG 2.0 implementation for our fork in the
EBookDroid project.

We've also done some code cleanup and started NEON optimization of some
functions.
The source code can be found inside the archive, in the openjpeg/ and openjpeg-simd/
folders.

https://drive.google.com/file/d/0B6DTNNagrvFPOXkzYUE4N0drNDA

Original comment by Alexander.V.Kasatkin@gmail.com on 28 Apr 2014 at 2:51

GoogleCodeExporter commented 9 years ago
Are the patches from #12 going to make it into a release? We also need them for
poppler, so that it doesn't lose quality compared to when we use openjpeg1.

Original comment by tsdg...@gmail.com on 29 Sep 2014 at 9:06

GoogleCodeExporter commented 9 years ago
Later I found that decoding with a 'reduction' factor (1-5) shows the same bug:
the text of the image in question '1.jp2' is blurred again.

winfried

Original comment by szukw...@arcor.de on 30 Sep 2014 at 7:03

GoogleCodeExporter commented 9 years ago
Kakadu 7.4 outputs exactly the same blurred image as OpenJPEG 2.x.
The JP2 file is invalid: the number of tile-parts shall be 6, but the value 5 is
given in the code-stream.

In 15444-1:

TNsot: Number of tile-parts of a tile in the codestream. Two values are allowed:
the correct number of tile-parts for that tile, and zero. A zero value indicates
that the number of tile-parts of this tile is not specified in this tile-part.

I'll leave this open because we should add a warning.
Winfried's patch won't be applied; it would not be compliant with 15444-1.
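
As a sketch of the kind of warning meant here (reusing 'l_num_parts' and 'l_current_part' from the code excerpt quoted earlier in this thread; the tile-number variable is hypothetical, while 'opj_event_msg()' and 'EVT_WARNING' are the existing 2.x event interface):

    if ((l_num_parts != 0) && (l_current_part >= l_num_parts)) {
        /* TPsot must be strictly smaller than a non-zero TNsot (15444-1) */
        opj_event_msg(p_manager, EVT_WARNING,
                "Tile %d: tile-part index (%d) is not smaller than the "
                "announced number of tile-parts (%d); the codestream is "
                "not compliant with 15444-1\n",
                l_tile_no, l_current_part, l_num_parts);
    }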

Original comment by antonin on 30 Sep 2014 at 12:05

GoogleCodeExporter commented 9 years ago
On LINUX:
====================
kdu_expand -version

This is Kakadu's "kdu_expand" application.
    Compiled against the Kakadu core system, version v7.4
    Current core system version is v7.4

kdu_expand -i issue254-1-cyrillic-from-pdf.jp2 -o kdu-cyrillic.tif

The TIFF image is sharp, not blurred.

kdu_expand -i issue327-18_1805_a4_2.jp2 -o issue327-18_1805_a4_2.jp2.tif

The TIFF image is sharp, not blurred.

On MS-WINDOWS Win7:
====================
IrfanView-4.35:

irfanview shows issue254-1-cyrillic-from-pdf.jp2 sharp, not blurred.

irfanview shows issue327-18_1805_a4_2.jp2 sharp, not blurred; but mono.

kdu_show.exe 7.4:

kdu_show shows issue254-1-cyrillic-from-pdf.jp2 blurred.

kdu_show shows issue327-18_1805_a4_2.jp2 sharp, not blurred.

The images from issue 254 and issue 327 both have TNsot 5.

winfried

Original comment by szukw...@arcor.de on 30 Sep 2014 at 5:11

GoogleCodeExporter commented 9 years ago
Regarding 'issue254-1-cyrillic-from-pdf.jp2':

openjpeg-v1, j2k.c:

  if (partno >= numparts) {
    opj_event_msg(j2k->cinfo, EVT_WARNING, "SOT marker inconsistency in tile %d: tile-part index greater (%d) than number of tile-parts (%d)\n", tileno, partno, numparts);
    numparts = partno + 1; /* <============ */
  }

This would mean that openjpeg-v1 itself

  'would not be compliant with 15444-1'

winfried

Original comment by szukw...@arcor.de on 30 Sep 2014 at 10:59

GoogleCodeExporter commented 9 years ago
OK, my mistake; I rephrase: OpenJPEG 1.x is more tolerant of non-compliant
codestreams than OpenJPEG 2.x.

A patch reproducing the 1.x behaviour is welcome (the currently proposed patch
is not satisfactory).

Thanks

Original comment by antonin on 1 Oct 2014 at 1:21

GoogleCodeExporter commented 9 years ago
Antonin, please check the attached patch.

winfried

Original comment by szukw...@arcor.de on 3 Oct 2014 at 5:49

Attachments:

GoogleCodeExporter commented 9 years ago
I checked the patch.

We're not there yet. The problem right now is that the tile-parts with index 5
are not read at all, because p_j2k->m_specific_param.m_decoder.m_can_decode is
set to 1 while processing the tile-part with index 4.

The patch should provide a way to check that further data is available for
decoding, and process it even though we're beyond the (false) TNsot.

The patch increases num_parts to 6 when TPsot equals 4, while it should do so
when TPsot equals 5.

Original comment by antonin on 3 Oct 2014 at 2:59

GoogleCodeExporter commented 9 years ago
Issue 423 has been merged into this issue.

Original comment by m.darb...@gmail.com on 5 Dec 2014 at 9:45

GoogleCodeExporter commented 9 years ago
Attached image from issue 423.

Original comment by m.darb...@gmail.com on 5 Dec 2014 at 9:47

Attachments:

GoogleCodeExporter commented 9 years ago
Here's another patch that tries to deal with this issue.

It's been checked against the test suite. This reveals 4 regressions, all for
files which actually need this correction (I only checked that they do take the
new code path and that there is a visual improvement).

How it works:
When m_can_decode == 1, the code stream is scanned for the next SOT of the
current tile. If one is found and TPsot == TNsot, the patch assumes the encoder
was buggy and updates the number of tile-parts for every tile in the image that
has a tile-part count.

The scan runs only once per image (not once per tile); subsequent calls to
read_sot know whether to apply the correction or not.

Major drawback: it needs to stream the whole code stream once more. It skips
the data between SOT markers, but nevertheless, for big "legal" images streamed
from disk, I think this can slow down decoding a bit.

One good thing is that it can be disabled with one line of code, which makes it
easy to turn this patch into a "parameter" (cf. issue 439 about breaking the
API/ABI).
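
In outline, the detection pass might look like the following pseudo-C (every identifier and helper here is illustrative only; the real implementation is in the attached patch):

    /* Scan the remaining codestream once: if any SOT of this tile carries
       TPsot == the announced TNsot, the encoder under-counted, and the
       announced count of every tile must be incremented by one. */
    static OPJ_BOOL tnsot_correction_needed(opj_stream_private_t *p_stream,
                                            OPJ_UINT32 tile_no,
                                            OPJ_UINT32 announced_tnsot)
    {
        OPJ_OFF_T saved_pos = stream_tell(p_stream);       /* hypothetical helper */
        OPJ_BOOL broken = OPJ_FALSE;
        OPJ_UINT32 isot, tpsot;

        /* hop from SOT to SOT using Psot, without decoding the payloads */
        while (seek_next_sot(p_stream, &isot, &tpsot)) {   /* hypothetical helper */
            if ((isot == tile_no) && (tpsot == announced_tnsot)) {
                broken = OPJ_TRUE;
                break;
            }
        }
        stream_seek(p_stream, saved_pos);                  /* hypothetical helper */
        return broken;
    }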

Original comment by m.darb...@gmail.com on 5 Dec 2014 at 11:22

Attachments:

GoogleCodeExporter commented 9 years ago
It's been checked on the 4 "regressions" of the test suite as said before but 
also obviously on the image from this issue as well the ones from issue 327 & 
issue 423 with proper results.

Original comment by m.darb...@gmail.com on 5 Dec 2014 at 11:24

GoogleCodeExporter commented 9 years ago
The 'issue254.patch' still works only as described in comment #16: only with
reduction 0. With reductions 1-5 the image is blurred again.

winfried

Original comment by szukw...@arcor.de on 6 Dec 2014 at 5:20

GoogleCodeExporter commented 9 years ago
Well, I guess it's not a bug when working with reductions, since you're cutting
out high frequencies.
Kakadu outputs the exact same thing at all levels.

Original comment by m.darb...@gmail.com on 6 Dec 2014 at 9:53

GoogleCodeExporter commented 9 years ago
@antonin, @mathieu,

Since this is "OK" against the test suite, could you review/comment on the patch?

Original comment by m.darb...@gmail.com on 6 Dec 2014 at 10:49

GoogleCodeExporter commented 9 years ago
Guys, how is this going? We have just added openjpeg2 support to poppler, but
we cannot switch it on by default since it regresses on too many files due to
what seems to be this bug. Is there any patch I should try, or is this still
work in progress?

Original comment by tsdg...@gmail.com on 4 Jan 2015 at 11:08

GoogleCodeExporter commented 9 years ago
Updated patch against r2993

Original comment by m.darb...@gmail.com on 20 Jan 2015 at 9:02

Attachments:

GoogleCodeExporter commented 9 years ago
Patch issue254.patch from #33 works on 64-bit Linux for

 issue254-1-cyrillic-from-pdf.jp2, i.e. the above-mentioned 1.jp2
and
 issue327-18_1805_a4_2.jp2

Thanks, m.darbois.

winfried

Original comment by szukw...@arcor.de on 22 Jan 2015 at 6:22

GoogleCodeExporter commented 9 years ago
@matthieu: many thanks for updating the patch. It does indeed fix the bug (and
4 additional images in the test suite, as you noticed). But it currently raises
a speed problem. With the image from issue 327 (18_1805_a4_1.jp2), decoding
takes 7 s on my (old) MBP, while trunk takes 1.5 s. Strangely enough, each tile
is decoded much more slowly, and I suspect the new function
opj_j2k_get_sot_values, executed for each tile, to be the cause... Would you
mind having a look at this, checking that you too observe the performance loss,
and seeing if you can optimize the patch? The fix seems to be needed, as some
widely used J2K encoder out there is generating wrong code-streams, but
currently the speed loss is not acceptable. Thanks!

Original comment by antonin on 23 Jan 2015 at 4:31

GoogleCodeExporter commented 9 years ago
@antonin,

You shouldn't check the speed loss on those buggy images, because you're not
decoding the same data.

Using data/input/nonregression/Bretagne2.j2k, I get roughly 2500 ms with or
without the patch in opj_j2k_decode (the overhead is lost in measurement error,
I guess).

You can also try the following, which demonstrates the patch's "can be disabled
with one line of code" feature: in opj_j2k_decode (with the patch applied, line
9794), add the following line:
p_j2k->m_specific_param.m_decoder.m_nb_tile_parts_correction_checked = 1;

Once this line is added, you should get the same time to decode
18_1805_a4_1.jp2 as without the patch (in my case 1890 ms). In this case, the
code stream is no longer parsed to check for an invalid TNsot, and no
correction is applied.

Now, add the following line after the previous one:
p_j2k->m_specific_param.m_decoder.m_nb_tile_parts_correction = 1;

The code stream still isn't parsed for errors, but we force a correction of 1
(a simple addition that is also performed when no correction is applied -
adding 0). You should get roughly the same time to decode 18_1805_a4_1.jp2 as
with just the patch applied (in my case 4100 ms). Once again, the overhead is
lost in measurement error.

That being said, I'm quite confident for "small" images (18_1805_a4_1.jp2 is
2000x3200 CMYK 8 bpp; Bretagne2.j2k is 2592x1944 RGB 8 bpp). The overhead might
be different for huge images (I can't test this, though).

Original comment by m.darb...@gmail.com on 23 Jan 2015 at 11:05

GoogleCodeExporter commented 9 years ago
I have done some time measurements on my old 32-bit Linux
machine. It uses

 model name      : AMD Athlon(tm)
 cpu MHz         : 1243.290
 cache size      : 512 KB
 flags           : fpu vme de pse tsc msr pae mce cx8 apic
                   sep mtrr pge mca cmov pat pse36 mmx fxsr
                   sse syscall mmxext 3dnowext 3dnow

I changed 'opj_decompress.c' as follows:

 #include <sys/time.h>

 struct timeval start, stop;

 gettimeofday(&start, NULL);

 opj_stream_create_default_file_stream()
 (...)
 opj_stream_destroy()

 gettimeofday(&stop, NULL);
 /* one "tick" below is one microsecond of wall-clock time */
 printf("SHOW_DELAY %ld ticks\n",
        (stop.tv_sec - start.tv_sec) * 1000000
        + (stop.tv_usec - start.tv_usec));

issue254-1-cyrillic-from-pdf.jp2, 554'678 B
-------------------------------------------
ORIG : min. 682 969 ticks
       max. 870 649 ticks
PATCH: min. 864 620 ticks
       max. 948 271 ticks

issue327-18_1805_a4_2.jp2, 717'669 B
-------------------------------------------
ORIG : min. 638 245 ticks
       max. 642 006 ticks
PATCH: min. 637 713 ticks
       max. 664 906 ticks

Bretagne2.j2k, 5'386'631 B
-------------------------------------------
ORIG : min. 6 359 608 ticks
       max. 6 396 216 ticks
PATCH: min. 6 325 365 ticks
       max. 6 346 159 ticks

issue327-18_1805_a4_1-CMYK.jp2, 8'442'002 B
-------------------------------------------
ORIG : min. 5 711 146 ticks
       max. 5 882 892 ticks
PATCH: min. 5 716 189 ticks
       max. 5 728 665 ticks

seq-17.jp2, 8'329'217 B
-------------------------------------------
ORIG : min. 18 562 747 ticks
       max. 19 126 261 ticks
PATCH: min. 18 532 778 ticks
       max. 18 926 785 ticks

The file openjump_jpeg2000_erdas.jp2, 30'293'603 B, I could not
test: out of memory.

winfried

Original comment by szukw...@arcor.de on 24 Jan 2015 at 8:01

GoogleCodeExporter commented 9 years ago
One small addition: the incorrect tile-part header fields that are the cause of
the original problem are a known issue of some Adobe libraries used by
Photoshop (I think Acrobat is affected as well). Further details here:

http://wiki.opf-labs.org/display/TR/Erroneous+tile-part+information+in+images+created+by+Adobe+Photoshop

I had a look at Alexander's JP2, and it has all the characteristics of those
crappy Adobe JPXs!

Just thought this might be useful in case people are wondering whether their
files are affected by this issue.

Original comment by johan.va...@gmail.com on 13 Apr 2015 at 1:02