theislab / cellrank

CellRank: dynamics from multi-view single-cell data
https://cellrank.org
BSD 3-Clause "New" or "Revised" License

Error while computing terminal states #833

Closed thuthao closed 2 years ago

thuthao commented 2 years ago

Hi, I was following the instructions from the CellRank basic tutorial. When I tried to compute the terminal states with

cr.tl.terminal_states(adata, cluster_key="clusters", weight_connectivities=0.2, method='krylov')

I got an error message and am not sure how to fix this.

Accessing `adata.obsp['T_fwd']`
Computing transition matrix based on logits using `'deterministic'` mode
/local/30743437/ipykernel_26781/1878820466.py:1: DeprecationWarning: `cellrank.tl.terminal_states` will be removed in version `2.0`. Please use the `cellrank.kernels` or `cellrank.estimators` interface instead.
  cr.tl.terminal_states(adata, cluster_key="clusters", weight_connectivities=0.2)
/gstore/home/nguyt170/.conda/envs/thaoenv/lib/python3.8/site-packages/cellrank/tl/_init_term_states.py:156: DeprecationWarning: `cellrank.tl.transition_matrix` will be removed in version `2.0`. Please use the `cellrank.kernels` or `cellrank.estimators` interface instead.
  kernel = transition_matrix(
Estimating `softmax_scale` using `'deterministic'` mode
100%|██████████| 21848/21848 [00:03<00:00, 5548.48cell/s]
Setting `softmax_scale=3.8179`
100%|██████████| 21848/21848 [00:04<00:00, 5446.33cell/s]
    Finish (0:00:08)
Using a connectivity kernel with weight `0.2`
Computing transition matrix based on `adata.obsp['connectivities']`
    Finish (0:00:00)
Computing eigendecomposition of the transition matrix
Adding `adata.uns['eigendecomposition_fwd']`
       `.eigendecomposition`
    Finish (0:00:01)
Computing Schur decomposition
---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
Input In [32], in <module>
----> 1 cr.tl.terminal_states(adata, cluster_key="clusters", weight_connectivities=0.2)

File ~/.conda/envs/thaoenv/lib/python3.8/site-packages/cellrank/tl/_utils.py:1643, in _deprecate.<locals>.wrapper(wrapped, instance, args, kwargs)
   1636     warnings.simplefilter("always", DeprecationWarning)
   1637     warnings.warn(
   1638         f"`cellrank.tl.{wrapped.__name__}` will be removed in version `{version}`. "
   1639         f"Please use the `cellrank.kernels` or `cellrank.estimators` interface instead.",
   1640         stacklevel=2,
   1641         category=DeprecationWarning,
   1642     )
-> 1643 return wrapped(*args, **kwargs)

File ~/.conda/envs/thaoenv/lib/python3.8/site-packages/cellrank/tl/_init_term_states.py:265, in terminal_states(adata, estimator, mode, n_states, cluster_key, key, force_recompute, show_plots, copy, return_estimator, fit_kwargs, **kwargs)
    242 @_deprecate(version="2.0")
    243 @inject_docs(m=VelocityMode, b=BackwardMode)
    244 @d.dedent
   (...)
    262     **kwargs,
    263 ) -> Optional[Union[AnnData, BaseEstimator]]:
--> 265     return _initial_terminal(
    266         adata,
    267         estimator=estimator,
    268         mode=mode,
    269         backward=False,
    270         n_states=n_states,
    271         cluster_key=cluster_key,
    272         key=key,
    273         force_recompute=force_recompute,
    274         show_plots=show_plots,
    275         copy=copy,
    276         return_estimator=return_estimator,
    277         fit_kwargs=fit_kwargs,
    278         **kwargs,
    279     )

File ~/.conda/envs/thaoenv/lib/python3.8/site-packages/cellrank/tl/_init_term_states.py:173, in _initial_terminal(adata, estimator, backward, mode, backward_mode, n_states, cluster_key, key, force_recompute, show_plots, copy, return_estimator, fit_kwargs, **kwargs)
    166 if cluster_key is None:
    167     _info_if_obs_keys_categorical_present(
    168         adata,
    169         keys=["leiden", "louvain", "cluster", "clusters"],
    170         msg_fmt="Found categorical observation in `adata.obs[{!r}]`. Consider specifying it as `cluster_key`.",
    171     )
--> 173 _fit(
    174     mc,
    175     n_lineages=n_states,
    176     cluster_key=cluster_key,
    177     compute_absorption_probabilities=False,
    178     **fit_kwargs,
    179 )
    181 if show_plots:
    182     mc.plot_spectrum(real_only=True)

File ~/.conda/envs/thaoenv/lib/python3.8/site-packages/cellrank/tl/_init_term_states.py:99, in _fit(estim, n_lineages, keys, cluster_key, compute_absorption_probabilities, **kwargs)
     96         n_lineages = estim.eigendecomposition["eigengap"] + 1
     98 if n_lineages > 1:
---> 99     estim.compute_schur(n_lineages, method=kwargs.pop("method", "krylov"))
    101 try:
    102     estim.compute_macrostates(
    103         n_states=n_lineages, cluster_key=cluster_key, **kwargs
    104     )

File ~/.conda/envs/thaoenv/lib/python3.8/site-packages/cellrank/tl/estimators/mixins/decomposition/_schur.py:165, in SchurMixin.compute_schur(self, n_components, initial_distribution, method, which, alpha)
    162 start = logg.info("Computing Schur decomposition")
    164 try:
--> 165     self._gpcca._do_schur_helper(n_components)
    166 except ValueError as e:
    167     if "will split complex conjugate eigenvalues" not in str(e):

File ~/.conda/envs/thaoenv/lib/python3.8/site-packages/pygpcca/_gpcca.py:853, in GPCCA._do_schur_helper(self, m)
    851                 logging.info("Using pre-computed Schur decomposition")
    852 else:
--> 853     self._p_X, self._p_R, self._p_eigenvalues = _do_schur(
    854         self._P, eta=self._eta, m=m, z=self._z, method=self._method
    855     )

File ~/.conda/envs/thaoenv/lib/python3.8/site-packages/pygpcca/_gpcca.py:254, in _do_schur(P, eta, m, z, method, tol_krylov)
    251     P_bar = np.diag(np.sqrt(eta)).dot(P).dot(np.diag(1.0 / np.sqrt(eta)))
    253 # Make a Schur decomposition of P_bar and sort the Schur vectors (and form).
--> 254 R, Q, eigenvalues = sorted_schur(P_bar, m, z, method, tol_krylov=tol_krylov)  # Pbar!!!
    256 # Orthonormalize the sorted Schur vectors Q via modified Gram-Schmidt-orthonormalization,
    257 # if the (Schur)vectors aren't orthogonal!
    258 if not np.allclose(Q.T.dot(Q * eta[:, None]), np.eye(Q.shape[1]), rtol=1e6 * EPS, atol=1e6 * EPS):

File ~/.conda/envs/thaoenv/lib/python3.8/site-packages/pygpcca/_sorted_schur.py:372, in sorted_schur(P, m, z, method, tol_krylov)
    369         warnings.warn(NO_PETSC_SLEPC_FOUND_MSG)
    371 if method != "krylov" and issparse(P):
--> 372     raise ValueError("Sparse implementation is only available for `method='krylov'`.")
    374 # make sure we have enough eigenvalues to check for block splitting
    375 n = P.shape[0]

ValueError: Sparse implementation is only available for `method='krylov'`.

Thank you for your help. Thao

michalk8 commented 2 years ago

Hi @thuthao, it looks like the wrong method for computing the Schur decomposition is being used, which is strange, since you don't pass `method=...` and the default is `krylov`. Could you try running the code below instead?

cr.tl.terminal_states(adata, cluster_key="clusters", weight_connectivities=0.2,
                      fit_kwargs={'method': 'krylov'})

Also, please consider switching to the kernels/estimators interface, since this API is deprecated (see the warning above); the new interface is described in this tutorial: https://cellrank.readthedocs.io/en/stable/kernels_and_estimators.html
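For reference, here is a minimal sketch of the equivalent workflow with the new interface (assuming scVelo velocities are already stored in `adata`; depending on your CellRank version, the last step may be called `predict_terminal_states` or `compute_terminal_states`):

import cellrank as cr

# Velocity kernel: transition matrix from RNA velocity.
vk = cr.kernels.VelocityKernel(adata).compute_transition_matrix()

# Connectivity kernel: transition matrix from the KNN graph.
ck = cr.kernels.ConnectivityKernel(adata).compute_transition_matrix()

# Same mixing as `weight_connectivities=0.2` above.
combined = 0.8 * vk + 0.2 * ck

# GPCCA estimator: Schur decomposition, macrostates, terminal states.
g = cr.estimators.GPCCA(combined)
g.compute_schur(n_components=20, method="krylov")
g.compute_macrostates(cluster_key="clusters")
g.predict_terminal_states()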

Marius1311 commented 2 years ago

Hi @thuthao, any updates on this yet?

thuthao commented 2 years ago

Hi @Marius1311, thanks for following up on this question. Fortunately, everything works now. In the end, the error was caused by an improper installation of the package. I have more than 5k cells and intended to use the Krylov version of CellRank, but 'conda install -c conda-forge -c bioconda cellrank-krylov' failed with errors about incompatible versions. So I ran 'conda install -c conda-forge -c bioconda cellrank' instead and installed the dependencies myself. However, after installing the dependencies, my numpy package was no longer compatible (it needs to be 1.21 or lower). All I had to do was uninstall numpy and reinstall a compatible version, and that fixed all the errors.
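For anyone hitting the same problem, a quick sanity check of the environment I found helpful (a hypothetical snippet, not part of CellRank; the exact numpy pin depends on your CellRank version):

import numpy as np
import cellrank as cr

# Check that the installed versions match what the conda solver expects;
# in my case numpy had to be reinstalled at a compatible (<=1.21) version.
print(np.__version__)
print(cr.__version__)

# The 'krylov' Schur method relies on PETSc/SLEPc; if these imports fail,
# the sparse Schur decomposition will not be available.
import petsc4py
import slepc4py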

Marius1311 commented 2 years ago

Great, happy that this worked and thanks for updating us!