Closed seantrons closed 6 months ago
Although a far cry from an automated test suite, we could look into wiring performance logging into the paint event to determine how long it takes to render new frames for individual panes, or even the whole screen.
I feel like I remember addressing this in a different issue, but I'll mark it down here as well. Wiring directly into the standard Qt paint event hasn't proven viable so far when working with PyQtGraph. It simply isn't called every time a frame is rendered, as one would expect.
Anyway, as noted with the mention, this issue is becoming more important with the desire to implement parallel processing. We're going to need to look into our options for benchmarking and testing, and figure out which one is the best fit.
I've started looking into the following projects for this task:

- pytest, pytest-benchmark, and pytest-qt: https://docs.pytest.org/en/8.0.x/
- airspeed-velocity (asv): https://asv.readthedocs.io/en/v0.6.1/

Options from the "standard" library:

- timeit
- cProfile
- line_profiler
- memory_profiler
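For quick spot checks, the stdlib options work without any test harness. A minimal `timeit` sketch (the `update_data` function here is a made-up stand-in, not one of our routines):

```python
import timeit

def update_data(n=1000):
    # Hypothetical stand-in for a pane's data-update routine.
    return [i * i for i in range(n)]

# Time 1,000 repetitions and report the per-call average in nanoseconds.
reps = 1000
total = timeit.timeit(update_data, number=reps)
print(f"update_data: {total / reps * 1e9:.0f} ns per call")
```

`cProfile` and the profiler packages would give per-function breakdowns instead of a single number, at the cost of more setup.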
Considering that this is a Python project intended for data visualization, having historical memory profiling may be a good idea, especially if we are going to make use of full heavyweight processes as in the proposal for #14 .
I took a dive through the PyQtGraph source code, focusing on the portion of the codebase that handles updates to the display. For the most part, it just modifies / streamlines some of PyQt's routines and abstracts away some complexities for end users. The actual render portion still lives in the C++ codebase for Qt. That is, the call stack dives down through our code => pyqtgraph => pyqt / pyside => qt (cpp)
Attempting to dive down and measure speed at the C++ level would be a lot of unnecessary effort; rendering technically consists of the procedure calls through all the layers, including our own.
For that reason, we should be fine just instrumenting the abstract set_data
method and adding the performance logging decorator (labeling it 'render').
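As a rough sketch of what that could look like (the decorator, the logger name, and the `Plot2DPane` stub below are all hypothetical, not the project's actual identifiers):

```python
import functools
import logging
import time

logger = logging.getLogger("performance")

def log_performance(label):
    """Log how long each call to the wrapped function takes.

    A minimal sketch, assuming a plain logging backend; the real
    decorator's name and output format may differ.
    """
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            start = time.perf_counter()
            try:
                return func(*args, **kwargs)
            finally:
                elapsed_ns = (time.perf_counter() - start) * 1e9
                logger.debug("%s: %s took %.0f ns",
                             label, func.__qualname__, elapsed_ns)
        return wrapper
    return decorator

class Plot2DPane:
    @log_performance("render")
    def set_data(self, *args, **kwargs):
        # Hypothetical stand-in for the real pane's set_data body.
        pass
```

Since every render funnels through set_data, timing it captures the full descent through pyqtgraph and Qt without touching the C++ layer.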
Providing an update on this. I've opted to use pytest,
since its ecosystem seems fairly fleshed out and full of conveniences that can be taken advantage of to make this task less cumbersome, especially since the tests have to be written retroactively.
The following new development dependencies have been added to the project:

- pytest
- pytest-qt
- pytest-mock
- pytest-benchmark
- pytest-xdist (enables parallel distribution of tests across multiple CPUs)

Alright, I'm about done with the benchmarking update and will post the results as an image in a future comment. Creating all of them was somewhat tedious, but the suite is already pointing out areas that could use a little optimization. I've skipped any benchmarks that would otherwise test widgets containing a QTimer instance, for simplicity. Most of these use callbacks that are either "fast enough" or unrelated to data display, like the sigint timer for the main application (enables shutdown from the terminal).
Benchmarking made available as of commit: 2a17f2e
Benchmarking results. Note that time is expressed in nanoseconds. benchmarks.log
Closing this out; in the future we need to implement a database to track these results over time.
Adding onto this.
This morning I noticed something that may be of interest down the line. For the Plot2DPanes, plot initialization takes an obscene amount of time compared to simply updating the data (somewhat intuitively). Check out this benchmark output.
It's from a simple set of dummy tests I created out of curiosity. Initialization will always be expensive, but we may be able to take advantage of this from an optimization standpoint.
It would be good to have either a suite that performs speed testing / benchmarking or, at the very least, a small infographic that illustrates the expected rendering speeds in several common use cases.