Open grst opened 3 years ago
Hey,
please report back with containerized environments as discussed on Twitter.
Hi,
I updated the pipeline to use this singularity container. The problem persists.
I also tried with NUMBA_DISABLE_JIT=1. The leiden clustering is now stable (at least across the four nodes), yet the UMAP again looks different both from the previous versions and between the two groups of CPUs:
[screenshots: UMAPs computed on each group of CPUs]
Do you have any idea if the issue is with our implementation of these, or with the underlying libraries? E.g. if you just run UMAP directly from the UMAP library can you get exact reproducibility? I don't think we would be able to promise more reproducibility than what they offer, and irreproducibility across machines is a known issue there: https://github.com/lmcinnes/umap/issues/525.
That's a good point, and it is not reproducible even when running UMAP directly:

import umap

# run UMAP directly on the precomputed scVI latent representation
reducer = umap.UMAP(min_dist=0.5)
embedding = reducer.fit_transform(adata.obsm["X_scVI"])
adata.obsm["X_umap"] = embedding

This again produces stable results on only 3/4 CPUs.
Ok, let's forget about UMAP. It's only a nice figure to get an overview of the data and I don't use it for downstream stuff. Irreproducible clustering, on the other hand, is quite a deal-breaker, as for instance cell-type annotations depend on it. I mean, why would I even bother releasing the source code of an analysis alongside the paper if it is not reproducible anyway?
I found out a few more things:
- The differences already arise in adata.obsp["connectivities"] computed by pp.neighbors.
- With NUMBA_DISABLE_JIT=1, the clustering is stable on all four nodes (but terribly slow, ofc).
- Rounding the neighbor graph also yields stable clustering:

adata.obsp["connectivities"] = np.round(adata.obsp["connectivities"], decimals=3)
adata.obsp["distances"] = np.round(adata.obsp["distances"], decimals=3)
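A generic illustration (plain Python floats, not the actual neighbor graph) of why rounding to a few decimals can hide architecture-dependent differences: it collapses values that differ only in their low-order bits.

```python
# Accumulation order changes low-order bits, much like differing CPU
# instruction sets do; rounding to 3 decimals hides that noise.
x = sum([0.1] * 10)   # 0.9999999999999999
y = 10 * 0.1          # 1.0
print(x == y)                      # False
print(round(x, 3) == round(y, 3))  # True
```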
So should we add a deterministic flag for Leiden clustering which enforces NUMBA_DISABLE_JIT=1
and is disabled by default? Users then have the option to enable this for publication code.
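As a rough sketch (a hypothetical helper, not an existing scanpy API), such a switch could simply set the environment variable before numba compiles anything:

```python
import os

def set_deterministic(enabled: bool = True) -> None:
    """Hypothetical helper: trade speed for reproducibility.

    NUMBA_DISABLE_JIT is read by numba at compile time, so this must run
    before the first call into numba-compiled code (pynndescent, umap, ...).
    """
    if enabled:
        os.environ["NUMBA_DISABLE_JIT"] = "1"
    else:
        os.environ.pop("NUMBA_DISABLE_JIT", None)

set_deterministic(True)
```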
I have yet to install the latest scanpy and I do not have CPUs to test this specific case, but I had some issues reproducing leiden results from scanpy from a published paper, and found that running leiden for 10 iterations (via the n_iterations option) resolved any discrepancy. Wonder whether it adds to the discussion.
@grst I don't think leiden is the issue here, but pynndescent.
My guess is this has to do with the CPU that gives different results being much older and using a different instruction set than the other Intel processors. This could be triggered either by the use of any parallelism at all, or by pynndescent being pretty liberal with its use of numba's fastmath, combined with different CPUs having different features.
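To illustrate why fastmath and differing SIMD feature sets matter (a generic floating-point example, not pynndescent code): fastmath licenses the compiler to reorder floating-point operations, and float addition is not associative, so different instruction sets can legitimately produce different low-order bits.

```python
# Floating-point addition is not associative: regrouping a sum -- which
# fastmath / SIMD vectorization is allowed to do -- changes the result.
a = (0.1 + 0.2) + 0.3  # 0.6000000000000001
b = 0.1 + (0.2 + 0.3)  # 0.6
print(a == b)  # False
```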
Do you get the same graph out of pynndescent if you make the computation single-threaded? If not, we may be able to look at the assembly to see which feature sets give different results. It's possible these can be turned off with a flag. But we may then have to consider what kind of a performance hit we'd be willing to take for exact reproducibility on discontinued chipsets.
I don't think leiden is the issue here, but pynndescent.
:100:
Do you get the same graph out of pynndescent if you make the computation single-threaded?
I have already been exporting those four env vars before running the analysis. Is there anything else that might be threaded?
export MKL_NUM_THREADS="1"
export OPENBLAS_NUM_THREADS="1"
export OMP_NUM_THREADS="1"
export NUMBA_NUM_THREADS="1"
threadpoolctl does a nice job of accounting for most possible options. But do you see more than a single thread being used?
Just to be sure, I additionally included
from threadpoolctl import threadpool_limits
threadpool_limits(1)
I also confirmed that only a single thread was actually used. The results didn't change.
Alright, if you would like to achieve reproducibility, the next things to play around with would probably be the CPU feature flag variables for numba. In particular: NUMBA_CPU_NAME=generic.
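For example, this could be set from within the analysis script rather than the shell (assuming it happens early enough; numba reads the variable when it first JIT-compiles a function):

```python
import os

# Set before importing numba or anything built on it (pynndescent, umap,
# scanpy): "generic" disables CPU-specific instruction selection, trading
# some speed for identical code generation across machines.
os.environ["NUMBA_CPU_NAME"] = "generic"
```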
another :100:
with NUMBA_CPU_NAME=generic the leiden results are the same on all four nodes, and the UMAPs also look a lot more similar (but not identical).
In terms of speed I didn't notice a big difference (maybe 10s slower, but that would require proper benchmarking ofc).
What do you think the right thing to do here is?
I don't think we can modify the global environment for our users by setting those numba flags by default. Is this just something we should document?
How about creating a page about reproducibility in the docs, similar to the one by pytorch? It could gather all information around reproducibility with scanpy, such as
- use of containers
- numba flags
and also state the limits, e.g. UMAP not being exactly reproducible across CPU architectures.
In addition to that, @Zethson, what do you think of creating a mlf-core template for single-cell analyses that sets the right defaults?
In addition to that, @Zethson, what do you think of creating a mlf-core template for single-cell analyses that sets the right defaults?
Mhm, certainly a cool option, but nothing that I could tackle in the next weeks due to time constraints. I would start here with a reproducibility section in the documentation and maybe a "deterministic" switch in the Scanpy settings which sets all required numba flags.
How about creating a page about reproducibility in the docs, similar to the one by pytorch? It could gather all information around reproducibility with scanpy, such as
- use of containers
- numba flags
@grst, this would be great. Would you be up for writing this at some point?
I'm thinking it could either go:
Yeah, not sure when I can make it, though.
We could then also consider extending it with rapids single cell @Intron7. More of a scverse reproducibility page, and maybe also bring in scvi-tools.
@Zethson UMAP is a nightmare when it comes to cuml. It's not reproducible on the same GPU even if you set a seed. However, I think Leiden might be a candidate with a brute-force kNN. But I have to check this.
I assume cuml is based on atomic operations, which are non-deterministic. Just documenting the behavior would be good enough for now @Intron7. I would suggest that this is something we pick up when your status changes @Intron7, and then we can team up as three?
I noticed that running the same single-cell analyses on different nodes of our HPC produces different results. Starting from the same anndata object with a precomputed X_scVI latent representation, the UMAP and leiden clustering look different.
[screenshots: UMAP and leiden clustering computed on each node]
Minimal code sample (that we can copy&paste without having any data)
A git repository with example data, notebook and a nextflow pipeline is available here: https://github.com/grst/scanpy_reproducibility
A report of the analysis executed on four different CPU architectures is available here: https://grst.github.io/scanpy_reproducibility/
Versions