Closed: stackyjoe closed this issue 5 years ago.
Please open a PR for any fixes that you uncover. Moving the loading to the current page ±2 is an idea I have been toying with as well, but I have not started writing it up yet. Feel free to hack on that if you like. ;-)
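For what it's worth, the "current page ±2" idea could start as small as a helper that computes which pages should stay resident; everything outside that window would be evicted. This is only a sketch under my own assumptions (`pagesToKeep` is a hypothetical name, zero-based page indices, radius of 2), not anything from the lumina-pdf source:

```cpp
#include <algorithm>
#include <vector>

// Hypothetical helper: given the page currently on screen, return the
// zero-based page indices that should remain cached when only the
// current page +-2 is kept in memory.
std::vector<int> pagesToKeep(int current, int pageCount, int radius = 2) {
    int lo = std::max(0, current - radius);             // clamp at first page
    int hi = std::min(pageCount - 1, current + radius); // clamp at last page
    std::vector<int> keep;
    for (int p = lo; p <= hi; ++p)
        keep.push_back(p);
    return keep;
}
```

The renderer would then drop any cached page not in this list whenever the current page changes.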
I was wrong about the QImage thing; QImages are designed to be passed by copy. I'm still playing with the source, but I'm also running into concurrency issues. As is, the loadPage() function is called in each thread, and they all attempt to write into a hashmap. Some very small portion of the time they step on each other's toes and crash the program.
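The usual fix for that kind of crash is to serialize access to the shared map. Here's a minimal sketch of the pattern, assuming a standalone cache class; `PageCache`, `PageImage`, and the method names are all made up for illustration (the real code uses QImage and Qt containers):

```cpp
#include <mutex>
#include <unordered_map>

// Stand-in for the rendered page data (the real code holds a QImage).
struct PageImage {
    int number = -1;
};

// Hypothetical page cache whose map is guarded by a mutex, so that
// worker threads calling loadPage() concurrently cannot corrupt it.
class PageCache {
public:
    void storePage(int page, PageImage img) {
        std::lock_guard<std::mutex> lock(mutex_); // serialize writers
        pages_[page] = img;
    }
    bool hasPage(int page) const {
        std::lock_guard<std::mutex> lock(mutex_); // readers lock too
        return pages_.count(page) != 0;
    }

private:
    mutable std::mutex mutex_;
    std::unordered_map<int, PageImage> pages_;
};
```

In a Qt codebase the same idea is usually spelled with QMutex and QMutexLocker, but the shape is identical: every thread that touches the map takes the lock first.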
This appears to be fixed with the latest version of lumina-pdf.
Howdy. I use Lumina Desktop on my old laptop, so I'm a bit sensitive to performance issues. My issues are:
This is true for the current release of 1.4.1, but is actually aggravated in the current master branch.
I suspect that the fix for Issue #528 actually caused issue (1), or at least worsened it. As it stands, every page is turned into a QImage and stored in memory, so long, well-compressed text documents really eat up RAM. As an example, top(1) listed lumina-pdf as using 2.2G of RAM on a 4MB PDF which was 300 pages long.
As for loading speed, there are some very simple fixes that help. The QImage objects made by the backend are passed by value in several places, when they could be passed by const reference. Making this change seems to speed it up noticeably, even on comparatively small files.
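The change in question is just this signature tweak; the `Image` type and function names below are stand-ins for illustration, not the actual lumina-pdf code (and, as the later comment notes, QImage is designed to be copied, since its copies are shallow, so the real-world gain may be smaller than it looks):

```cpp
#include <cstddef>
#include <vector>

// Stand-in for QImage; copying this type duplicates the pixel buffer,
// which is exactly what pass-by-value would do on every call.
struct Image {
    std::vector<unsigned char> pixels;
};

// Before: the whole image is copied into the parameter on each call.
std::size_t imageBytesByValue(Image img) {
    return img.pixels.size();
}

// After: a const reference reads the same data with no copy at all.
std::size_t imageBytes(const Image& img) {
    return img.pixels.size();
}
```

Both functions return the same answer; only the copy on entry differs.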
The trivial memory leaks are... trivial. For example, MainUI's constructor calls new on several objects, and the destructor never deletes them. It's all freed by the operating system after shutdown, of course, but I don't think there is any reason not to just wrap them in std::unique_ptr.
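The pattern I mean is the following; `Toolbar` and the member name are hypothetical stand-ins, not the actual MainUI members (and in Qt code, giving the object a QObject parent is an equally valid way to get the same automatic cleanup):

```cpp
#include <memory>

// Stand-in for one of the objects MainUI's constructor allocates.
struct Toolbar {
    bool visible = true;
};

// Sketch: owning the member through std::unique_ptr means the
// destructor needs no explicit delete; the member frees itself when
// MainUI is destroyed, and it cannot leak on early return either.
class MainUI {
public:
    MainUI() : toolbar_(std::make_unique<Toolbar>()) {}
    // No user-written destructor required.
    Toolbar* toolbar() const { return toolbar_.get(); }

private:
    std::unique_ptr<Toolbar> toolbar_;
};
```

This is a pure ownership change: call sites that used the raw pointer keep working through `get()` or `operator->`.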
I've been playing with the source and have at least made the easy performance fixes, along with some light cleanup with clang-format. I don't see an easy way to fix the memory hunger; it'll require reworking how the program actually operates. Should I file a PR?