dls-controls / scanpointgenerator

Python generators for malcolm and GDA scans
Apache License 2.0

CompoundGenerator performance test #39

Open c-mita opened 7 years ago

c-mita commented 7 years ago

There is a performance test for CompoundGenerator.prepare that checks prepare runs in a "reasonable" time when given a "unified" set of generators whose sizes multiply to 200 million points.
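For reference, a minimal sketch of the kind of test being described, assuming the usual LineGenerator(name, units, start, stop, num) and CompoundGenerator(generators, excluders, mutators) constructors; the sizes multiply to 200 million points, and the 10 second limit is a placeholder rather than the real bound used in the repository:

```python
import time
import unittest

from scanpointgenerator import CompoundGenerator, LineGenerator


class PrepareSpeedTest(unittest.TestCase):
    # Illustrative limit only; the real test's bound is defined in the repo.
    TIME_LIMIT = 10.0

    def test_prepare_time(self):
        # 200 * 1000 * 1000 = 200 million points
        z = LineGenerator("z", "mm", 0.0, 1.0, 200)
        y = LineGenerator("y", "mm", 0.0, 1.0, 1000)
        x = LineGenerator("x", "mm", 0.0, 1.0, 1000)
        gen = CompoundGenerator([z, y, x], [], [])

        start = time.time()
        gen.prepare()
        self.assertLess(time.time() - start, self.TIME_LIMIT)
```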

It's useful to keep an eye on the run time of this test and to keep the time limit quite tight: if CompoundGenerator's startup is too slow for large scans, it undermines the effort made in #22, and it's easy to regress accidentally (for instance, commit 3de9d669 in #24 added ~2 seconds to this test, a 30% increase in run time).

But the run times on Travis are far too inconsistent for a tight bound, and builds are prone to the odd failure, particularly at peak times.

Do we want to keep the test or remove it? Do we want to relax the time limit? Or do we want to do some weird hacks for Travis, say applying the tighter limit on a workstation but not on Travis VMs?
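One way that Travis-specific hack could look, relying on the fact that Travis CI sets TRAVIS=true in its build environment (the two limit values below are placeholders, not the real numbers):

```python
import os

# Looser bound on Travis VMs, tight bound on a workstation.
ON_TRAVIS = os.environ.get("TRAVIS") == "true"
TIME_LIMIT = 30.0 if ON_TRAVIS else 10.0
```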

GDYendell commented 7 years ago

Would taking an average of a few attempts give a more consistent value? Or would multiple runs in quick succession lead to similarly long run times? That may allow us to put a tighter bound on it.
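A rough sketch of what the averaging idea might look like; the helper name, factory callable, and run count are placeholders, not anything from the project:

```python
import time


def average_prepare_time(make_generator, runs=3):
    """Time CompoundGenerator.prepare over several runs and return the
    mean, to smooth out noise from a shared CI machine."""
    total = 0.0
    for _ in range(runs):
        gen = make_generator()  # build a fresh CompoundGenerator each run
        start = time.time()
        gen.prepare()
        total += time.time() - start
    return total / runs
```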

c-mita commented 7 years ago

I'm wondering if reducing the number of concurrent builds would help - the Python tests used to be quite reliable until I added the Jython one.

Averaging multiple runs might help.

c-mita commented 7 years ago

The conversion from np.bool to np.int8 also negatively impacted this test (adding another couple of seconds). This conversion was necessary for use of the project in Jython.
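A rough illustration of the dtype change being discussed, not the project's actual mask code: the exclusion masks are essentially arrays of 0/1 flags, which CPython handles happily as booleans but which were switched to int8 so scisoftpy under Jython could work with them.

```python
import numpy as np

mask_bool = np.ones(1_000_000, dtype=bool)     # original representation
mask_int8 = np.ones(1_000_000, dtype=np.int8)  # Jython-compatible representation

# Combining masks: logical ops on the bool arrays, bitwise ops on int8.
combined_bool = np.logical_and(mask_bool, mask_bool)
combined_int8 = mask_int8 & mask_int8
```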

The latest release of Dawn (with updates to the scisoftpy stuff) allows use of np.bool, but that route performs worse.

So all paths are bad.