Original comment by jun_f...@foxitsoftware.com on 27 May 2015 at 5:19
Memory usage is usually a complicated issue, and it is not easy for us to state a
reasonable memory budget for PDF rendering; it depends on the content of the PDF
file. In this case, most of the memory is consumed by three main stages: parsing
the PDF file, decoding the JPX image, and rendering the decoded bitmap. For
example, the decoded bitmap here is 33 MB. Decoding the JPX needs additional
memory for buffers and internal data structures, so total usage during decoding
can reach two or three times that 33 MB. At the same time, the parsed document is
still resident and keeps its memory. After the JPX is decoded, rendering uses
twice that much memory or more.
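For a rough sense of scale (the image dimensions are not stated in this thread, so the figures below are purely illustrative): a roughly 3400x3400-pixel image decoded to 3-byte RGB already accounts for about 33 MB on its own, before any decoder-internal buffers are counted, e.g.:

echo $((3400 * 3400 * 3 / 1024 / 1024))   # prints 33 (MB)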
Original comment by jun_f...@foxitsoftware.com on 27 May 2015 at 9:24
Seems to be a problem in openjpeg - either openjpeg is an inefficient library
or .jp2 is a memory-hungry format... either way, it may be difficult or
impossible to find a fix.
Pdfium uses 350 MB to render this file, but most of that is presumably openjpeg,
since running openjpeg's opj_decompress directly uses 260 MB:
/usr/bin/time -f "%M KB" openjpeg/opj_decompress -i ~/Downloads/ruimte.jp2 -o ruimte.bmp
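For comparison, the Pdfium-side number could presumably be measured the same way with the pdfium_test sample binary; the binary path and the PDF filename below are assumptions about a local setup, not something taken from this thread:

/usr/bin/time -f "%M KB" out/Release/pdfium_test --ppm ~/Downloads/ruimte.pdf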
If I convert the embedded image to a 100%-quality JPEG, which is 3.2 MB
compressed, embed that in a PDF, and render it with Foxit, it only uses 95 MB.
So .jpg files are much less memory-hungry.
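One way to reproduce that JPEG comparison, assuming ImageMagick and img2pdf are available (the commenter does not say which tools were actually used, so this is only a sketch; img2pdf embeds the JPEG into a PDF without recompressing it):

convert ~/Downloads/ruimte.jp2 -quality 100 ruimte.jpg
img2pdf ruimte.jpg -o ruimte_jpg.pdf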
If I re-encode the embedded .jp2 file with 512 px square tiles, instead of the
single tile it has now, then openjpeg only uses 100 MB to decompress it instead
of 260 MB.
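A sketch of that re-tiling step with the stock OpenJPEG tools (opj_compress only takes raster input such as TIFF or PNG, so the image has to be decoded first; the intermediate and output filenames are arbitrary, and opj_compress encodes losslessly by default):

openjpeg/opj_decompress -i ~/Downloads/ruimte.jp2 -o ruimte.tif
openjpeg/opj_compress -i ruimte.tif -o ruimte_tiled.jp2 -t 512,512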
Original comment by ol...@google.com on 28 May 2015 at 2:00
Attachments:
Original issue reported on code.google.com by ol...@google.com on 27 May 2015 at 7:40
Attachments: