Open orbeckst opened 4 years ago
So as far as I can tell (I might be missing something obvious), there is no longer a means by which we can set the search method in a selection call, so I guess that needs removing?
I’m not sure how to write the tests so that they work with different call signatures across different versions. Try/except? @tylerjereddy, any hints? You have a lot of experience with asv.
You can use a try/except to get the benchmarks running again--I just tried the diff below locally (the flags are part of the setup, so I think they should be moved to setup() as well).
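That pattern can also be factored into a small helper so the flag handling degrades to a no-op on modern versions (a sketch; `set_legacy_flags` is a made-up name, not part of MDAnalysis):

```python
def set_legacy_flags(mda, periodic, use_kdtree):
    """Set the pre-1.0 core flags if the installed MDAnalysis still
    exposes them; return False (no-op) on modern versions where
    core.flags was removed."""
    try:
        flags = mda.core.flags
    except AttributeError:
        # modern MDAnalysis: core.flags no longer exists
        return False
    flags['use_periodic_selections'] = periodic
    flags['use_KDTree_routines'] = use_kdtree
    return True
```

setup() would then just call this unconditionally and could record the return value if a benchmark ever needs to know which code path it exercised.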
That said, I do share some of @IAlibay's concerns re: semantics before and after--do we actually preserve an identical calculation type over history here, with modern MDA not using those flags? If not, producing a plot of performance before/after the transition under the same benchmark name may be dubious.
If the meaning of the benchmark has changed after removal of the flags, the "failing" benchmark still has a 0 exit code (it shouldn't prevent the overall asv run), so it could just be left as is, I suppose, or eventually removed.
Could maybe consider adding e.g. time_geometric_selections_modern() or something, but even writing a new benchmark may be tricky to keep "fair" over time if the defaults under the hood differ because the flags were removed.
Well, I guess it's not that hard to check--just debug-print the results (atomgroups/sizes) over various versions of MDA; a little tedious, though.
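For the tedious part, one option is to dump {selection: n_atoms} per MDAnalysis version (e.g. to JSON) and diff the runs afterwards; a minimal comparison helper could look like this (`compare_runs` is a hypothetical name, and the data in the usage example is made up):

```python
def compare_runs(results_by_version):
    """results_by_version: {mda_version: {selection_string: n_atoms}}.
    Return only the selections whose atom counts differ from the
    result under the oldest version."""
    versions = sorted(results_by_version)
    baseline = results_by_version[versions[0]]
    diffs = {}
    for version in versions[1:]:
        for sel, n_atoms in results_by_version[version].items():
            if baseline.get(sel) != n_atoms:
                diffs.setdefault(sel, {versions[0]: baseline.get(sel)})[version] = n_atoms
    return diffs
```

An empty result would be evidence that the benchmark semantics survived the flag removal for the selections tested.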
diff --git a/benchmarks/benchmarks/selections.py b/benchmarks/benchmarks/selections.py
index b07fc7348..c6afa3c6b 100644
--- a/benchmarks/benchmarks/selections.py
+++ b/benchmarks/benchmarks/selections.py
@@ -66,16 +66,19 @@ class GeoSelectionBench(object):
                   dynamic_selection,
                   periodic_selection):
         self.u = MDAnalysis.Universe(GRO)
+        try:
+            # set core flags for PBC accounting
+            MDAnalysis.core.flags['use_periodic_selections'] = periodic_selection[0]
+            MDAnalysis.core.flags['use_KDTree_routines'] = periodic_selection[1]
+        except AttributeError:
+            # modern MDAnalysis
+            pass
 
     def time_geometric_selections(self,
                                   selection_string,
                                   dynamic_selection,
                                   periodic_selection):
-        # set core flags for PBC accounting
-        MDAnalysis.core.flags['use_periodic_selections'] = periodic_selection[0]
-        MDAnalysis.core.flags['use_KDTree_routines'] = periodic_selection[1]
-
         if hasattr(MDAnalysis.Universe, 'select_atoms'):
             self.u.select_atoms(selection_string, updating=dynamic_selection)
         else:
@richardjgowers or @lilyminium may be able to tell at a glance which subsets of calculations are actually the same before/after the flags were in place? I suspect just leaving the benchmark alone may be the way forward, with a new time_ benchmark for "modern" selections.
I don't think we use pkdtree anywhere unless it's explicitly passed in as the chosen method, which, as @IAlibay points out, we can't do with atom selections. Distance and geometry selections are either brute force or NSGrid, depending on how many atoms are involved (<10 = brute force) or how big the box is relative to the cutoff. This part of the code doesn't seem to have changed for 2 years, so I think the tests that don't set 'use_KDTree_routines'=True should still behave the same as in the versions that have come out since.
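To make the described dispatch concrete, here is an illustrative sketch of that heuristic (this is NOT the actual MDAnalysis source; the 10-atom threshold comes from the comment above, and the box-vs-cutoff factor of 3 is purely an assumption for illustration):

```python
def guess_distance_method(n_atoms, box_lengths, cutoff):
    """Sketch of the selection-backend heuristic described above:
    tiny systems fall back to brute force, larger ones use the
    NSGrid grid search when every box edge is comfortably larger
    than the cutoff."""
    if n_atoms < 10:
        return "bruteforce"
    if all(edge > 3 * cutoff for edge in box_lengths):  # factor of 3 is assumed
        return "nsgrid"
    return "bruteforce"
```

The point for the benchmark discussion is that neither branch involves pkdtree, so benchmarks that never set 'use_KDTree_routines'=True should be comparing like with like across versions.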
Expected behavior
All ASV benchmarks run for the latest develop (see benchmarks.benchmarks for the code, and the MDAnalysis/benchmarks repo plus the benchmarks wiki for more details).
Actual behavior
All GeoSelectionBench.time_geometric_selections tests fail as shown below (recent output from a nightly run on Python 3.6). This seems to be related to the removal of the flags in #782.