Open orbeckst opened 6 years ago
These things came up in the context of me trying out various tests in #66. We need to test these problems separately here, just to make sure it's not something from me running the tests incorrectly.
Okay, let me check them and I will get back to you.
@orbeckst @iparask It's due to line 74 in leaflet.py, `self._atomgroup = atomgroups`. `distributed.client` cannot pickle an `atomgroup` or `universe`. Currently, the only way is to avoid using `atomgroup`- or `universe`-typed `self` attributes.
EDIT (@orbeckst): see #79
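One possible shape for such a workaround, sketched below with made-up class and attribute names (this is not the actual pmda code): keep only picklable data such as file names and an index array on `self`, and re-create the AtomGroup inside the worker.

```python
import MDAnalysis as mda

class LeafletSketch(object):
    """Minimal sketch: avoid AtomGroup/Universe attributes on self."""

    def __init__(self, universe, atomgroup):
        # store only picklable data: file names and a numpy index array
        self._topology = universe.filename
        self._trajectory = universe.trajectory.filename
        self._indices = atomgroup.indices

    def _rebuild_atomgroup(self):
        # executed inside the worker: re-create Universe and AtomGroup locally
        u = mda.Universe(self._topology, self._trajectory)
        return u.atoms[self._indices]
```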
So MDAnalysis technically supports pickling and unpickling. We never documented how they should be used, though. @richardjgowers @jbarnoud
Hello @orbeckst @kain88-de,
concerning the `distributed` issue, I could use a deep copy and essentially create a new numpy object in memory for the atomgroups. However, I think that a solution such as the one in #65 is more reasonable. Any preference on this?
The reason for the first error is that the number of atoms present is not divisible by the number of processes (see leaflet.py#L192). There are two things I can think of doing here:
1. Pick a value of `n_jobs` that divides the number of atoms and is as close as possible to what the user has selected. This would also mean that the cluster utilization will drop.
2. Pad with dummy atoms up to a multiple of `n_jobs` and filter them out during the reduce phase.

Any preference here?

If we can use pickling of AGs then that would be great. Otherwise the approach in the standard serial version should work, whereby you
However, come to think of it, that will be awful for performance because you would be doing this for every frame. So scratch that idea.
Can you write it such that only coordinates are communicated to the dask workers? Numpy arrays are not problematic.
Perhaps @VOD555 and @kain88-de have some better ideas.
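As a rough illustration of the coordinates-only idea (not pmda code; the worker function and the array are stand-ins), only a plain numpy positions array is shipped through `distributed`, so nothing MDAnalysis-specific has to be pickled:

```python
import numpy as np
from distributed import Client

def count_neighbours(positions, cutoff=15.0):
    # toy worker: operates purely on an (n, 3) numpy array of coordinates
    diff = positions[:, None, :] - positions[None, :, :]
    dist = np.sqrt((diff ** 2).sum(axis=-1))
    return (dist < cutoff).sum(axis=1) - 1  # exclude self

client = Client()  # local distributed scheduler

# stand-in for atomgroup.positions.copy(), which is a plain float32 array
positions = np.random.uniform(0.0, 100.0, size=(1000, 3)).astype(np.float32)

future = client.submit(count_neighbours, positions)
print(future.result())
```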
Do the partitions have to be the same size? Is it not possible to have some with different sizes?
Changing `n_jobs`, at least if the default is to use as many workers as cores, might lead to unexpected performance degradation. There was some discussion on related matters in #71 (and the associated PR #75).
If possible, unequal partition sizes would be my preferred solution, followed by dummies. Alternatively, oversubscribing workers might also help but I'd be interested in seeing performance data.
I'll have a look at the pickling, see if I can recall how it works. But I never really needed to use it. @mnmelo is probably the one who knows the most about it, though. Ping?
For your `n_jobs` problem you can also use [make_balanced_slices](https://github.com/MDAnalysis/pmda/blob/master/pmda/util.py#L62). It solves the same problem for our standard classes.
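For illustration, a self-contained sketch of the same idea with a made-up helper (`split_indices` is not the pmda API): split the atom indices into `n_jobs` contiguous blocks whose sizes differ by at most one, so the atom count does not have to divide evenly and no dummy atoms are needed.

```python
import numpy as np

def split_indices(n_atoms, n_jobs):
    """Split range(n_atoms) into n_jobs contiguous blocks of nearly equal size."""
    # np.array_split distributes the remainder: the first (n_atoms % n_jobs)
    # blocks get one extra index, so block sizes differ by at most one
    return np.array_split(np.arange(n_atoms), n_jobs)

print([block.tolist() for block in split_indices(10, 3)])
# [[0, 1, 2, 3], [4, 5, 6], [7, 8, 9]]
```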
Pickling should work like:
```python
u = mda.Universe(...., anchor_name='this')
# make a pickle of each atomgroup
pickles = [pickle.dumps(ag) for ag in atomgroups]

# In parallel processes
# make a Universe with the same anchor_name
# this only has to happen once per worker, so could be done using `init_func` in multiprocessing
u = mda.Universe(....., anchor_name='this')
ags = [pickle.loads(s) for s in pickles]
```
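A rough sketch of how those pickles could then be consumed in worker processes, assuming the anchor mechanism behaves as described above; the helper names are made up, and `TOP`, `TRJ`, and `pickles` are assumed to come from the snippet above:

```python
import pickle
from multiprocessing import Pool

import MDAnalysis as mda

def init_worker(topology, trajectory):
    # runs once per worker: create a Universe with the same anchor_name so
    # that unpickling an AtomGroup can attach to it
    global _worker_universe
    _worker_universe = mda.Universe(topology, trajectory, anchor_name='this')

def n_atoms(ag_pickle):
    ag = pickle.loads(ag_pickle)  # resolves against the anchored Universe
    return len(ag)

# hypothetical usage; TOP, TRJ, and `pickles` come from the snippet above
with Pool(processes=2, initializer=init_worker, initargs=(TOP, TRJ)) as pool:
    print(pool.map(n_atoms, pickles))
```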
@iparask could you have a look at this issue again? It would be good to have this fixed for the SciPy paper.
Very quick issue with varying things that came up during PR #66; apologies for the messy report. See PR #81 for initial (failing) tests.

- `n_jobs`: LeafletFinder with `n_jobs == 2` does not pass tests, see https://github.com/MDAnalysis/pmda/pull/66#discussion_r228832763
- `distributed`: LeafletFinder with `scheduler` as `distributed.client` fails, see also the started PR #81. (complete error message from pytest)