multiSnow / mcomix3


High CPU consumption with `.pdf` files #144

Closed sakkamade closed 3 years ago

sakkamade commented 3 years ago

What is the problem

Every sort of .pdf file (that I tried), with Maximum number of concurrent threads set to 0, drives all of my CPU's threads to 100% usage and does not release them until I close the application or the file.

With the setting at 1, CPU usage is still constantly above normal: roughly 50% on a single thread, with spikes of 20-30% on others.


multiSnow commented 3 years ago

Okular uses the poppler library to read and render each page of the pdf. Naturally it performs well, since only the required page is decoded, at only the required DPI, and rendered directly without any re-encoding.

mcomix, on the other hand, uses mutool (the command-line tool of mupdf) to 'extract' every page of the pdf at its original DPI and save it as a PNG, to avoid quality loss. That means mutool is invoked on every page and does both a decode and an encode. This is what makes it feel slow and low-performance.
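For reference, the decode-and-encode round trip described above corresponds roughly to an invocation of `mutool draw` like the one sketched below. This is an illustration, not mcomix's actual code; the helper names, output pattern, and DPI value are assumptions.

```python
import shutil
import subprocess

def mutool_cmd(pdf_path, out_pattern, dpi=150):
    """Build a mutool invocation that renders the pages of a pdf to PNGs.

    `mutool draw -o page%d.png` decodes each page and re-encodes it as a
    PNG file -- the decode + encode round trip described above.
    """
    return ["mutool", "draw", "-r", str(dpi), "-o", out_pattern, pdf_path]

def extract_pages(pdf_path, out_pattern="page%d.png", dpi=150):
    """Run the extraction, guarding against a missing mutool binary."""
    if shutil.which("mutool") is None:
        raise RuntimeError("mutool (from the mupdf tools) is not installed")
    subprocess.run(mutool_cmd(pdf_path, out_pattern, dpi), check=True)
```

Because every page goes through this full render-to-PNG step, the work scales with the page count of the document rather than with the pages actually being viewed.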

sakkamade commented 3 years ago

No, I would not say the performance is low. It's neither low nor slow. It's just that the PC becomes too loud... Like, too loud.

I was wondering: does the program extract every single page at once, right when I open the document? And can this process be slowed down?

sakkamade commented 3 years ago

The value of Maximum number of pages to store in the cache seems to make no difference.

multiSnow commented 3 years ago

Yes, the program extracts every single page at once, as soon as the pdf (or any other supported archive) is opened. That is how it was designed by the author of comix and mcomix. The only way to slow this process down is to set 'Maximum number of concurrent extraction threads' to 1. 'Maximum number of pages to store in the cache' is not related to archive extraction, only to image decoding.
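The throttling effect of that preference can be sketched with a thread pool whose worker count plays the role of 'Maximum number of concurrent extraction threads'. This is a minimal illustration, not mcomix's real extractor; the `extract` function is a stand-in for the per-page mutool call.

```python
from concurrent.futures import ThreadPoolExecutor

def extract_all(pages, max_threads=1):
    """Extract every page up front, as mcomix does when an archive opens.

    With max_threads=1 the pages are processed strictly one at a time, so
    only a single core stays busy; a higher value (0 means 'unlimited' in
    mcomix's preferences) lets extractions run in parallel and can
    saturate every CPU thread.
    """
    def extract(page):  # stand-in for the real per-page mutool invocation
        return f"extracted {page}"

    with ThreadPoolExecutor(max_workers=max_threads) as pool:
        # pool.map preserves page order regardless of worker count.
        return list(pool.map(extract, pages))
```

The cache preference sits downstream of this: it only bounds how many already-extracted pages are kept decoded in memory, which is why changing it has no effect on the extraction load.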

sakkamade commented 3 years ago

It's all clear to me now. Thank you!

Closing it.