Open tcj opened 8 years ago
That analysis sounds reasonable. The example above is a Microsoft scanned image donated by Cornell. https://archive.org/details/cu31924051987323
It doesn't make sense to me to have special-case code for this in a non-image-processing app. It shouldn't care whether the scaling is done via a fast path supported by the codec or after the fact on the decompressed image. I'd push the requirement back on opj_decompress to support a -scale qualifier to supplement -reduce. https://github.com/uclouvain/openjpeg/issues As an aside, I'm not sure why Internet Archive is using a closed-source program (kdu_expand) here in the first place when OpenJPEG has an open-source JPEG 2000 decoder.
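For context on the -reduce / -scale distinction: in JPEG 2000, each reduce step discards one resolution level of the wavelet decomposition, so the decoder only ever produces power-of-two downscales. A minimal sketch of the resulting output size, assuming the standard ceil(dim / 2^r) rule used by both kdu_expand -reduce and opj_decompress -r (the helper name is mine, not from either tool):

```python
import math

def reduced_size(width: int, height: int, reduce: int) -> tuple:
    """Output dimensions after discarding `reduce` resolution levels.

    JPEG 2000 decoders emit ceil(dim / 2**r) pixels per axis at
    reduction level r, so only power-of-two downscales are available
    without post-decode resampling (hence the wish for a -scale flag).
    """
    factor = 2 ** reduce
    return (math.ceil(width / factor), math.ceil(height / factor))

# A -reduce 4 decode of a hypothetical 3000x4000 scan yields a 188x250 image.
print(reduced_size(3000, 4000, 4))  # (188, 250)
```

Any target size that is not one of these power-of-two steps would have to be reached by decoding the next-larger level and resampling afterward, which is exactly the work the comment above argues belongs in the decoder rather than the calling app.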
Please pardon any mistakes I make in the reporting of this issue, but I clearly barely know what I am talking about.
We are getting errors in the syslog and nginx log like the following:
and:
2015/12/29 16:29:22 [error] 30935#0: *23910069 FastCGI sent in stderr: "PHP Warning: BookReader Processing Error: unzip -p '/2/items/cu31924051987323/cu31924051987323_jp2.zip' 'cu31924051987323_jp2/cu31924051987323_0005.jp2' | /petabox/sw/bin/kdu_expand -no_seek -quiet -reduce 4 -rotate 0 -region {0.000000,0.000000},{1.000000,1.000000} -i /dev/stdin -o /tmp/stdout.bmp | (bmptopnm 2>/dev/null) | pnmtojpeg -quality 75 -- in /var/cache/petabox/petabox/www/datanode/BookReader/BookReaderImages.inc.php on line 453
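For readers decoding the command in that log line: kdu_expand's -region argument takes fractional {top,left},{height,width} coordinates relative to the image, so {0.000000,0.000000},{1.000000,1.000000} simply requests the whole page. A sketch (my own illustrative helper, not BookReader code) of how such a fractional region maps to a pixel box on the reduced image:

```python
import math

def region_to_pixels(width, height, top, left, rheight, rwidth, reduce=0):
    """Map a fractional {top,left},{height,width} region (kdu_expand
    -region style) to an (x0, y0, x1, y1) pixel box on the image as
    decoded at the given reduce level."""
    # Dimensions after the reduce step: ceil(dim / 2**r) per axis.
    w = math.ceil(width / 2 ** reduce)
    h = math.ceil(height / 2 ** reduce)
    x0 = int(left * w)
    y0 = int(top * h)
    x1 = min(w, math.ceil((left + rwidth) * w))
    y1 = min(h, math.ceil((top + rheight) * h))
    return (x0, y0, x1, y1)

# The full-image region from the log, at -reduce 4 on a hypothetical
# 3000x4000 page: the box covers the whole reduced image.
print(region_to_pixels(3000, 4000, 0.0, 0.0, 1.0, 1.0, reduce=4))  # (0, 0, 188, 250)
```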
notes from h___ at archive dot org: