YosefLab / PopV


UnicodeDecodeError from annotate_data #18

Closed · maxim-h closed this issue 1 year ago

maxim-h commented 1 year ago

Hi,

I've been trying to run the tutorial with my own data. Process_Query was run as follows:

adata = Process_Query(
    new_query,
    ref_adata,
    query_labels_key=query_labels_key,
    query_batch_key=query_batch_key,
    ref_labels_key=ref_labels_key,
    ref_batch_key=ref_batch_key,
    unknown_celltype_label=unknown_celltype_label,
    save_path_trained_models=output_model_fn,
    cl_obo_folder="./PopV/ontology/",
    prediction_mode="retrain",  # 'fast' mode gives fast results (does not include BBKNN and Scanorama and makes more inaccurate errors)
    n_samples_per_label=n_samples_per_label,
    use_gpu=True,
    compute_embedding=True,
    hvg=None,
).adata

The two main modifications I had to make were:

  1. Recreating the query object (named new_query here) while casting query_adata.X from dtype=numpy.float64 to dtype=numpy.float32; see the sketch after this list. Otherwise I got an error from somewhere inside AnnData.concat, because the dtype in the query didn't match the dtype in the reference. I might file a separate issue about that later, but I'm not sure yet to whom.
  2. Changing prediction_mode to "retrain", because the query and the reference had different feature sets.
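
Roughly, the recreation in point 1 looked like this. A minimal sketch rather than my exact code: it assumes the tutorial's query_adata name and that only .obs and .var need to be carried over.

```python
# Sketch of modification 1: rebuild the query AnnData with .X cast from
# float64 to float32 so its dtype matches the reference before concatenation.
import anndata
import numpy as np

new_query = anndata.AnnData(
    X=query_adata.X.astype(np.float32),  # works for dense arrays and scipy sparse matrices
    obs=query_adata.obs.copy(),
    var=query_adata.var.copy(),
)
```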

Then, once I got to this cell, I hit an error I don't understand:

from popv.annotation import annotate_data
annotate_data(adata, save_path=f"{output_folder}/popv_output")

First I got some normal output:

Output:

```
Found 20437 genes among all datasets
[[0.         0.05625606 0.00932836 0.0749383  0.76862464 0.34284655 0.00278164 0.0206044 ]
 [0.         0.         0.90882638 0.01745878 0.01790831 0.03103783 0.83449235 0.03103783]
 [0.         0.         0.         0.11007463 0.03358209 0.04664179 0.71349096 0.19776119]
 [0.         0.         0.         0.         0.2987106  0.21571534 0.00139082 0.06506619]
 [0.         0.         0.         0.         0.         0.28581662 0.00556328 0.09670487]
 [0.         0.         0.         0.         0.         0.         0.05563282 0.23128243]
 [0.         0.         0.         0.         0.         0.         0.         0.10292072]
 [0.         0.         0.         0.         0.         0.         0.         0.        ]]
Processing datasets (1, 2)
Processing datasets (1, 6)
Processing datasets (0, 4)
Processing datasets (2, 6)
Processing datasets (0, 5)
Processing datasets (3, 4)
Processing datasets (4, 5)
Processing datasets (5, 7)
Processing datasets (3, 5)
Processing datasets (2, 7)
Processing datasets (2, 3)
Processing datasets (6, 7)
Epoch 87/87: 100%|██████████| 87/87 [09:45<00:00, 6.73s/it, loss=7.61e+03, v_num=1]
```

But then came this UnicodeDecodeError:

Traceback:

```python
---------------------------------------------------------------------------
UnicodeDecodeError                        Traceback (most recent call last)
Cell In[55], line 3
      1 from popv.annotation import annotate_data
----> 3 annotate_data(adata, save_path=f"{output_folder}/popv_output")

File [prefix]/PopV/.venv/lib/python3.8/site-packages/popv/annotation.py:59, in annotate_data(adata, methods, save_path, methods_kwargs)
     57 current_method = getattr(algorithms, method)(**methods_kwargs.pop(method, {}))
     58 current_method.compute_integration(adata)
---> 59 current_method.predict(adata)
     60 current_method.compute_embedding(adata)
     61 all_prediction_keys += [current_method.result_key]

File [prefix]/PopV/.venv/lib/python3.8/site-packages/popv/algorithms/_onclass.py:128, in ONCLASS.predict(self, adata)
    125 cl_ontology_file = adata.uns["_cl_ontology_file"]
    126 nlp_emb_file = adata.uns["_nlp_emb_file"]
--> 128 celltype_dict, clid_2_name = self.make_celltype_to_cell_ontology_id_dict(
    129     cl_obo_file
    130 )
    131 self.make_cell_ontology_id(adata, celltype_dict, self.cell_ontology_obs_key)
    133 train_model = OnClassModel(
    134     cell_type_nlp_emb_file=nlp_emb_file, cell_type_network_file=cl_ontology_file
    135 )

File [prefix]/PopV/.venv/lib/python3.8/site-packages/popv/algorithms/_onclass.py:66, in ONCLASS.make_celltype_to_cell_ontology_id_dict(self, cl_obo_file)
     51 """
     52 Make celltype to ontology id dict and vice versa.
    (...)
     63 dictionary of ontology id to celltype names
     64 """
     65 with open(cl_obo_file) as f:
---> 66     co = obonet.read_obo(f)
     67 id2name = {id_: data.get("name") for id_, data in co.nodes(data=True)}
     68 id2name = {k: v for k, v in id2name.items() if v is not None}

File [prefix]/PopV/.venv/lib/python3.8/site-packages/obonet/read.py:30, in read_obo(path_or_file, ignore_obsolete)
     13 """
     14 Return a networkx.MultiDiGraph of the ontology serialized by the
     15 specified path or file.
    (...)
     27 not be added to the graph.
     28 """
     29 obo_file = open_read_file(path_or_file)
---> 30 typedefs, terms, instances, header = get_sections(obo_file)
     31 obo_file.close()
     33 if "ontology" in header:

File [prefix]/PopV/.venv/lib/python3.8/site-packages/obonet/read.py:77, in get_sections(lines)
     75     continue
     76 stanza_type_line = next(stanza_lines)
---> 77 stanza_lines = list(stanza_lines)
     78 if stanza_type_line.startswith("[Typedef]"):
     79     typedef = parse_stanza(stanza_lines, typedef_tag_singularity)

File [~]/.micromamba/envs/python3.8/lib/python3.8/encodings/ascii.py:26, in IncrementalDecoder.decode(self, input, final)
     25 def decode(self, input, final=False):
---> 26     return codecs.ascii_decode(input, self.errors)[0]

UnicodeDecodeError: 'ascii' codec can't decode byte 0xe2 in position 7735: ordinal not in range(128)
```

Any pointers on how to troubleshoot it?
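
For what it's worth, the failing frame is the stdlib ascii codec, which makes me think open() is falling back to an ASCII locale encoding rather than UTF-8. A quick sketch (not PopV-specific) to check what the interpreter defaults to:

```python
# open() without encoding= uses the locale's preferred encoding, so if this
# prints something ASCII-only (e.g. 'ANSI_X3.4-1968' under the C locale),
# that would explain the UnicodeDecodeError above.
import locale
import sys

print(sys.getdefaultencoding())            # Python's internal default, normally 'utf-8'
print(locale.getpreferredencoding(False))  # what open() uses for text files
```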

maxim-h commented 1 year ago

At first I thought the error might be coming from the ONCLASS algorithm, so I tried running without it:

annotate_data(
    adata,
    methods=["knn_on_scvi", "scanvi", "knn_on_bbknn", "svm", "rf", "knn_on_scanorama", "celltypist"],
    save_path=f"{output_folder}/popv_output",
)

It did in fact get further along, but in the end it still failed with the same error:

STDERR output:

```
...
Saving scanvi label prediction to adata.obs["popv_scanvi_prediction"]
Saving UMAP of scanvi results to adata.obs["X_scanvi_umap_popv"]
Integrating data with bbknn
Saving knn on bbknn results to adata.obs["popv_knn_on_bbknn_prediction"]
Saving UMAP of bbknn results to adata.obs["X_bbknn_umap_popv"]
Computing support vector machine. Storing prediction in adata.obs["popv_svm_prediction"]
Computing random forest classifier. Storing prediction in adata.obs["popv_rf_prediction"]
Integrating data with scanorama
Saving knn on scanorama results to adata.obs["popv_knn_on_scanorama_prediction"]
Saving UMAP of scanorama results to adata.obs["X_scanorama_umap_popv"]
Saving celltypist results to adata.obs["popv_celltypist_prediction"]
🍳 Preparing data before training
✂ 442 non-expressed genes are filtered out
⚖ Scaling input data
🏋 Training data using logistic regression
✅ Model training done!
🔬 Input data has 45768 cells and 20437 genes
🔗 Matching reference genes in the model
🧬 19995 features used for prediction
⚖ Scaling input data
🖋 Predicting labels
✅ Prediction done!
👀 Detected a neighborhood graph in the input object, will run over-clustering on the basis of it
⛓ Over-clustering input data with resolution set to 20
🗳 Majority voting the predictions
✅ Majority voting done!
Using predictions ['popv_knn_on_scvi_prediction', 'popv_scanvi_prediction', 'popv_knn_on_bbknn_prediction', 'popv_svm_prediction', 'popv_rf_prediction', 'popv_knn_on_scanorama_prediction', 'popv_celltypist_prediction'] for PopV consensus
Traceback (most recent call last):
  File "./popV.py", line 90, in <module>
    annotate_data(adata, methods=["knn_on_scvi", "scanvi", "knn_on_bbknn", "svm", "rf", "knn_on_scanorama", "celltypist"], save_path=f"{output_folder}/popv_output")
  File "[prefix]/PopV/.venv/lib/python3.8/site-packages/popv/annotation.py", line 73, in annotate_data
    ontology_vote_onclass(adata, all_prediction_keys)
  File "[prefix]/PopV/.venv/lib/python3.8/site-packages/popv/annotation.py", line 144, in ontology_vote_onclass
    G = _utils.make_ontology_dag(adata.uns["_cl_obo_file"])
  File "[prefix]/PopV/.venv/lib/python3.8/site-packages/popv/_utils.py", line 147, in make_ontology_dag
    co = obonet.read_obo(obofile)
  File "[prefix]/PopV/.venv/lib/python3.8/site-packages/obonet/read.py", line 30, in read_obo
    typedefs, terms, instances, header = get_sections(obo_file)
  File "[prefix]/PopV/.venv/lib/python3.8/site-packages/obonet/read.py", line 77, in get_sections
    stanza_lines = list(stanza_lines)
  File "[~]/.micromamba/envs/python3.8/lib/python3.8/encodings/ascii.py", line 26, in decode
    return codecs.ascii_decode(input, self.errors)[0]
UnicodeDecodeError: 'ascii' codec can't decode byte 0xe2 in position 7735: ordinal not in range(128)
```
maxim-h commented 1 year ago

OK, the issue is quite mysterious, but it probably came from a non-recommended installation method. Reinstalling everything strictly as recommended solved it.

canergen commented 1 year ago

Was the error on Colab or locally? It was using the wrong codec, I guess. You can verify by reading the cell ontology file directly: obonet.read_obo(obofile). It looks to me like a problem with obonet. The casting error is interesting. We can manually cast everything to float64 (I think this behavior changed recently in scanpy). Was it also the case when installing as recommended, or did you use a newer scanpy version? If you set prediction_mode to "retrain", it is recommended to set hvg in Process_Query to 4000 so it doesn't run on all genes (lower memory usage).
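
A minimal version of that check (a sketch; the cl.obo filename inside cl_obo_folder is an assumption on my side):

```python
# Read the cell ontology file directly, the same call PopV makes internally.
# Passing a file object opened with an explicit UTF-8 encoding also rules out
# an ASCII locale default.
import obonet

with open("./PopV/ontology/cl.obo", encoding="utf-8") as f:
    graph = obonet.read_obo(f)
print(graph.number_of_nodes())
```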

maxim-h commented 1 year ago

I ran everything locally. Yes, the casting is a problem even in a properly installed version. Well, almost properly: as you can see, I use micromamba instead of conda. Without adjusting the object beforehand, this is the result of Process_Query:

Traceback:

```python
---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
File [~]/.micromamba/envs/PopV/lib/python3.8/site-packages/scipy/sparse/_base.py:376, in spmatrix.asformat(self, format, copy)
    375 try:
--> 376     return convert_method(copy=copy)
    377 except TypeError:

File [~]/.micromamba/envs/PopV/lib/python3.8/site-packages/scipy/sparse/_coo.py:403, in coo_matrix.tocsr(self, copy)
    402 indices = np.empty_like(col, dtype=idx_dtype)
--> 403 data = np.empty_like(self.data, dtype=upcast(self.dtype))
    405 coo_tocsr(M, N, self.nnz, row, col, self.data,
    406           indptr, indices, data)

File [~]/.micromamba/envs/PopV/lib/python3.8/site-packages/scipy/sparse/_sputils.py:53, in upcast(*args)
     51     return t
---> 53 raise TypeError('no supported conversion for types: %r' % (args,))

TypeError: no supported conversion for types: (dtype('

During handling of the above exception, another exception occurred:

----> 3 adata = Process_Query(
      4     query_adata,
      5     ref_adata,
      6     query_labels_key=query_labels_key,
      7     query_batch_key=query_batch_key,
      8     ref_labels_key=ref_labels_key,
      9     ref_batch_key=ref_batch_key,
     10     unknown_celltype_label=unknown_celltype_label,
     11     save_path_trained_models=output_model_fn,
     12     cl_obo_folder="./PopV/ontology/",
     13     # prediction_mode="inference",  # 'fast' mode gives fast results (does not include BBKNN and Scanorama and makes more inaccurate errors)
     14     prediction_mode="retrain",  # 'fast' mode gives fast results (does not include BBKNN and Scanorama and makes more inaccurate errors)
     15     n_samples_per_label=n_samples_per_label,
     16     use_gpu=True,
     17     compute_embedding=True,
     18     hvg=None,
     19 ).adata

File [~]/.micromamba/envs/PopV/lib/python3.8/site-packages/popv/preprocessing.py:213, in Process_Query.__init__(self, query_adata, ref_adata, ref_labels_key, ref_batch_key, query_labels_key, query_batch_key, query_layers_key, prediction_mode, cl_obo_folder, unknown_celltype_label, n_samples_per_label, pretrained_scvi_path, save_path_trained_models, hvg, use_gpu, compute_embedding, return_probabilities)
    210 self.setup_dataset(self.ref_adata, "reference")
    211 self.check_validity_anndata(self.ref_adata, "reference")
--> 213 self.preprocess()

File [~]/.micromamba/envs/PopV/lib/python3.8/site-packages/popv/preprocessing.py:276, in Process_Query.preprocess(self)
    274     self.adata.obs["_dataset"] = "query"
    275 else:
--> 276     self.adata = anndata.concat(
    277         (self.ref_adata, self.query_adata),
    278         axis=0,
    279         label="_dataset",
    280         keys=["ref", "query"],
    281         join="outer",
    282         fill_value=self.unknown_celltype_label,
    283     )
    285 if self.prediction_mode != "fast":
    286     # Necessary for BBKNN.
    287     batch_before_filtering = set(self.adata.obs["_batch_annotation"])

File [~]/.micromamba/envs/PopV/lib/python3.8/site-packages/anndata/_core/merge.py:923, in concat(adatas, axis, join, merge, uns_merge, label, keys, index_unique, fill_value, pairwise)
    921 has_raw = [a.raw is not None for a in adatas]
    922 if all(has_raw):
--> 923     raw = concat(
    924         [
    925             AnnData(
    926                 X=a.raw.X,
    927                 dtype=a.raw.X.dtype,
    928                 obs=pd.DataFrame(index=a.obs_names),
    929                 var=a.raw.var,
    930                 varm=a.raw.varm,
    931             )
    932             for a in adatas
    933         ],
    934         join=join,
    935         label=label,
    936         keys=keys,
    937         index_unique=index_unique,
    938         fill_value=fill_value,
    939         axis=axis,
    940     )
    941 elif any(has_raw):
    942     warn(
    943         "Only some AnnData objects have `.raw` attribute, "
    944         "not concatenating `.raw` attributes.",
    945         UserWarning,
    946     )

File [~]/.micromamba/envs/PopV/lib/python3.8/site-packages/anndata/_core/merge.py:870, in concat(adatas, axis, join, merge, uns_merge, label, keys, index_unique, fill_value, pairwise)
    865 # Annotation for other axis
    866 alt_annot = merge_dataframes(
    867     [getattr(a, alt_dim) for a in adatas], alt_indices, merge
    868 )
--> 870 X = concat_Xs(adatas, reindexers, axis=axis, fill_value=fill_value)
    872 if join == "inner":
    873     layers = inner_concat_aligned_mapping(
    874         [a.layers for a in adatas], axis=axis, reindexers=reindexers
    875     )

File [~]/.micromamba/envs/PopV/lib/python3.8/site-packages/anndata/_core/merge.py:625, in concat_Xs(adatas, reindexers, axis, fill_value)
    619     raise NotImplementedError(
    620         "Some (but not all) of the AnnData's to be concatenated had no .X value. "
    621         "Concatenation is currently only implmented for cases where all or none of"
    622         " the AnnData's have .X assigned."
    623     )
    624 else:
--> 625     return concat_arrays(Xs, reindexers, axis=axis, fill_value=fill_value)

File [~]/.micromamba/envs/PopV/lib/python3.8/site-packages/anndata/_core/merge.py:440, in concat_arrays(arrays, reindexers, axis, index, fill_value)
    437 elif any(isinstance(a, sparse.spmatrix) for a in arrays):
    438     sparse_stack = (sparse.vstack, sparse.hstack)[axis]
    439     return sparse_stack(
--> 440         [
    441             f(as_sparse(a), axis=1 - axis, fill_value=fill_value)
    442             for f, a in zip(reindexers, arrays)
    443         ],
    444         format="csr",
    445     )
    446 else:
    447     return np.concatenate(
    448         [
    449             f(x, fill_value=fill_value, axis=1 - axis)
   (...)
    452         axis=axis,
    453     )

File [~]/.micromamba/envs/PopV/lib/python3.8/site-packages/anndata/_core/merge.py:441, in <listcomp>(.0)
    437 elif any(isinstance(a, sparse.spmatrix) for a in arrays):
    438     sparse_stack = (sparse.vstack, sparse.hstack)[axis]
    439     return sparse_stack(
    440         [
--> 441             f(as_sparse(a), axis=1 - axis, fill_value=fill_value)
    442             for f, a in zip(reindexers, arrays)
    443         ],
    444         format="csr",
    445     )
    446 else:
    447     return np.concatenate(
    448         [
    449             f(x, fill_value=fill_value, axis=1 - axis)
   (...)
    452         axis=axis,
    453     )

File [~]/.micromamba/envs/PopV/lib/python3.8/site-packages/anndata/_core/merge.py:282, in Reindexer.__call__(self, el, axis, fill_value)
    281 def __call__(self, el, *, axis=1, fill_value=None):
--> 282     return self.apply(el, axis=axis, fill_value=fill_value)

File [~]/.micromamba/envs/PopV/lib/python3.8/site-packages/anndata/_core/merge.py:295, in Reindexer.apply(self, el, axis, fill_value)
    293     return self._apply_to_df(el, axis=axis, fill_value=fill_value)
    294 elif isinstance(el, sparse.spmatrix):
--> 295     return self._apply_to_sparse(el, axis=axis, fill_value=fill_value)
    296 else:
    297     return self._apply_to_array(el, axis=axis, fill_value=fill_value)

File [~]/.micromamba/envs/PopV/lib/python3.8/site-packages/anndata/_core/merge.py:351, in Reindexer._apply_to_sparse(self, el, axis, fill_value)
    345 if axis == 1:
    346     idxmtx = sparse.coo_matrix(
    347         (np.ones(len(self.new_pos), dtype=bool), (self.old_pos, self.new_pos)),
    348         shape=(len(self.old_idx), len(self.new_idx)),
    349         dtype=idxmtx_dtype,
    350     )
--> 351     out = el @ idxmtx
    353 if len(to_fill) > 0:
    354     out = out.tocsc()

File [~]/.micromamba/envs/PopV/lib/python3.8/site-packages/scipy/sparse/_base.py:630, in spmatrix.__matmul__(self, other)
    627 if isscalarlike(other):
    628     raise ValueError("Scalar operands are not allowed, "
    629                      "use '*' instead")
--> 630 return self._mul_dispatch(other)

File [~]/.micromamba/envs/PopV/lib/python3.8/site-packages/scipy/sparse/_base.py:541, in spmatrix._mul_dispatch(self, other)
    539 if self.shape[1] != other.shape[0]:
    540     raise ValueError('dimension mismatch')
--> 541 return self._mul_sparse_matrix(other)
    543 # If it's a list or whatever, treat it like a matrix
    544 other_a = np.asanyarray(other)

File [~]/.micromamba/envs/PopV/lib/python3.8/site-packages/scipy/sparse/_compressed.py:512, in _cs_matrix._mul_sparse_matrix(self, other)
    509 K2, N = other.shape
    511 major_axis = self._swap((M, N))[0]
--> 512 other = self.__class__(other)  # convert to this format
    514 idx_dtype = get_index_dtype((self.indptr, self.indices,
    515                              other.indptr, other.indices))
    517 fn = getattr(_sparsetools, self.format + '_matmat_maxnnz')

File [~]/.micromamba/envs/PopV/lib/python3.8/site-packages/scipy/sparse/_compressed.py:33, in _cs_matrix.__init__(self, arg1, shape, dtype, copy)
     31         arg1 = arg1.copy()
     32     else:
---> 33         arg1 = arg1.asformat(self.format)
     34     self._set_self(arg1)
     36 elif isinstance(arg1, tuple):

File [~]/.micromamba/envs/PopV/lib/python3.8/site-packages/scipy/sparse/_base.py:378, in spmatrix.asformat(self, format, copy)
    376     return convert_method(copy=copy)
    377 except TypeError:
--> 378     return convert_method()

File [~]/.micromamba/envs/PopV/lib/python3.8/site-packages/scipy/sparse/_coo.py:403, in coo_matrix.tocsr(self, copy)
    401 indptr = np.empty(M + 1, dtype=idx_dtype)
    402 indices = np.empty_like(col, dtype=idx_dtype)
--> 403 data = np.empty_like(self.data, dtype=upcast(self.dtype))
    405 coo_tocsr(M, N, self.nnz, row, col, self.data,
    406           indptr, indices, data)
    408 x = self._csr_container((data, indices, indptr), shape=self.shape)

File [~]/.micromamba/envs/PopV/lib/python3.8/site-packages/scipy/sparse/_sputils.py:53, in upcast(*args)
     50     _upcast_memo[hash(args)] = t
     51     return t
---> 53 raise TypeError('no supported conversion for types: %r' % (args,))

TypeError: no supported conversion for types: (dtype('
```
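
One more note on the casting: the merge.py:923 frame above shows that anndata.concat also concatenates .raw when both objects have one, so matching the dtype of .X alone may not be enough. A rough sketch of handling both (assuming float32 is the reference dtype, as in my case; not my exact code):

```python
import numpy as np

# Cast the main matrix to the reference dtype.
query_adata.X = query_adata.X.astype(np.float32)

# .raw is concatenated too (see the merge.py:923 frame) and is read-only,
# so rebuild it with the cast applied.
if query_adata.raw is not None:
    raw = query_adata.raw.to_adata()
    raw.X = raw.X.astype(np.float32)
    query_adata.raw = raw
```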

For reference, here is my environment:

micromamba list:

```
List of packages in environment: "[~]/.micromamba/envs/PopV"

  Name              Version    Build                Channel
────────────────────────────────────────────────────────────────
  _libgcc_mutex     0.1        conda_forge          conda-forge
  _openmp_mutex     4.5        2_gnu                conda-forge
  bzip2             1.0.8      h7f98852_4           conda-forge
  ca-certificates   2022.12.7  ha878542_0           conda-forge
  ld_impl_linux-64  2.40       h41732ed_0           conda-forge
  libffi            3.4.2      h7f98852_5           conda-forge
  libgcc-ng         12.2.0     h65d4601_19          conda-forge
  libgomp           12.2.0     h65d4601_19          conda-forge
  libnsl            2.0.0      h7f98852_0           conda-forge
  libsqlite         3.40.0     h753d276_0           conda-forge
  libuuid           2.32.1     h7f98852_1000        conda-forge
  libzlib           1.2.13     h166bdaf_4           conda-forge
  ncurses           6.3        h27087fc_1           conda-forge
  openssl           3.0.8      h0b41bf4_0           conda-forge
  pip               23.0       pyhd8ed1ab_0         conda-forge
  python            3.8.16     he550d4f_1_cpython   conda-forge
  readline          8.1.2      h0f457ee_0           conda-forge
  setuptools        67.1.0     pyhd8ed1ab_0         conda-forge
  tk                8.6.12     h27826a3_0           conda-forge
  wheel             0.38.4     pyhd8ed1ab_0         conda-forge
  xz                5.2.6      h166bdaf_0           conda-forge
```
pip freeze:

```
absl-py==1.4.0
aiohttp==3.8.3
aiosignal==1.3.1
alabaster==0.7.13
anndata==0.8.0
annoy==1.17.1
asttokens==2.2.1
astunparse==1.6.3
async-timeout==4.0.2
attrs==22.2.0
Babel==2.11.0
backcall==0.2.0
bbknn==1.5.1
beautifulsoup4==4.11.2
bleach==6.0.0
cached-property==1.5.2
cachetools==5.3.0
celltypist==1.3.0
certifi==2022.12.7
charset-normalizer==2.1.1
chex==0.1.6
click==8.1.3
comm==0.1.2
contextlib2==21.6.0
contourpy==1.0.7
cycler==0.11.0
Cython==0.29.33
debugpy==1.6.6
decorator==5.1.1
defusedxml==0.7.1
dm-tree==0.1.8
docrep==0.3.2
docutils==0.17.1
entrypoints==0.4
et-xmlfile==1.1.0
etils==1.0.0
executing==1.2.0
fastjsonschema==2.16.2
fbpca==1.0
filelock==3.9.0
flatbuffers==23.1.21
flax==0.6.5
fonttools==4.38.0
frozenlist==1.3.3
fsspec==2023.1.0
gast==0.4.0
gdown==4.6.0
geosketch==1.2
google-auth==2.16.0
google-auth-oauthlib==0.4.6
google-pasta==0.2.0
grpcio==1.51.1
h5py==3.8.0
huggingface-hub==0.11.1
idna==3.4
igraph==0.10.4
imagesize==1.4.1
imgkit==1.2.2
importlib-metadata==4.2.0
importlib-resources==5.10.2
intervaltree==3.1.0
ipykernel==6.21.1
ipython==8.9.0
ipywidgets==8.0.4
jax==0.4.3
jaxlib==0.4.3
jedi==0.18.2
Jinja2==3.1.2
joblib==1.2.0
jsonschema==4.17.3
jupyter_client==7.4.9
jupyter_core==5.2.0
jupyterlab-pygments==0.2.2
jupyterlab-widgets==3.0.5
keras==2.11.0
kiwisolver==1.4.4
leidenalg==0.9.1
libclang==15.0.6.1
lightning-utilities==0.6.0.post0
llvmlite==0.39.1
Markdown==3.3.4
markdown-it-py==2.1.0
MarkupSafe==2.1.2
matplotlib==3.6.3
matplotlib-inline==0.1.6
mdurl==0.1.2
mistune==2.0.5
ml-collections==0.1.1
msgpack==1.0.4
mudata==0.2.1
multidict==6.0.4
multipledispatch==0.6.0
natsort==8.2.0
nbclient==0.7.2
nbconvert==7.2.9
nbformat==5.7.3
nbsphinx==0.8.12
nbsphinx-link==1.3.0
nest-asyncio==1.5.6
networkx==3.0
nltk==3.8.1
numba==0.56.4
numpy==1.21.6
numpyro==0.11.0
nvidia-cublas-cu11==11.10.3.66
nvidia-cuda-nvrtc-cu11==11.7.99
nvidia-cuda-runtime-cu11==11.7.99
nvidia-cudnn-cu11==8.5.0.96
oauthlib==3.2.2
obonet==0.3.1
OnClass==1.2
openpyxl==3.1.0
opt-einsum==3.3.0
optax==0.1.4
orbax==0.1.1
packaging==23.0
pandas==1.5.3
pandocfilters==1.5.0
parso==0.8.3
patsy==0.5.3
pexpect==4.8.0
pickleshare==0.7.5
Pillow==9.4.0
pkgutil_resolve_name==1.3.10
platformdirs==3.0.0
popv @ git+https://github.com/czbiohub/PopV@6d4cbd6d4e6e2bedf260252e66f85f0316fb739c
prompt-toolkit==3.0.36
protobuf==3.19.0
psutil==5.9.4
ptyprocess==0.7.0
pure-eval==0.2.2
pyasn1==0.4.8
pyasn1-modules==0.2.8
Pygments==2.14.0
pynndescent==0.5.8
pyparsing==3.0.9
pyro-api==0.1.2
pyro-ppl==1.8.4
pyrsistent==0.19.3
PySocks==1.7.1
python-dateutil==2.8.2
pytorch-lightning==1.9.0
pytz==2022.7.1
PyYAML==6.0
pyzmq==25.0.0
regex==2022.10.31
requests==2.28.2
requests-oauthlib==1.3.1
rich==13.3.1
rsa==4.9
scanorama==1.7.3
scanpy==1.9.1
scikit-learn==0.24.2
scikit-misc==0.1.4
scipy==1.10.0
scvi-tools==0.20.0
seaborn==0.12.2
sentence-transformers==2.2.2
sentencepiece==0.1.97
session-info==1.0.0
six==1.15.0
snowballstemmer==2.2.0
sortedcontainers==2.4.0
soupsieve==2.3.2.post1
Sphinx==4.3.2
sphinxcontrib-applehelp==1.0.4
sphinxcontrib-devhelp==1.0.2
sphinxcontrib-htmlhelp==2.0.1
sphinxcontrib-jsmath==1.0.1
sphinxcontrib-qthelp==1.0.3
sphinxcontrib-serializinghtml==1.1.5
stack-data==0.6.2
statsmodels==0.13.5
stdlib-list==0.8.0
tensorboard==2.11.2
tensorboard-data-server==0.6.1
tensorboard-plugin-wit==1.8.1
tensorflow==2.11.0
tensorflow-estimator==2.11.0
tensorflow-io-gcs-filesystem==0.30.0
tensorstore==0.1.31
termcolor==2.2.0
texttable==1.6.7
threadpoolctl==3.1.0
tinycss2==1.2.1
tokenizers==0.13.2
toolz==0.12.0
torch==1.13.1
torchmetrics==0.11.1
torchvision==0.14.1
tornado==6.2
tqdm==4.64.0
traitlets==5.9.0
transformers==4.26.0
typing_extensions==4.2.0
umap-learn==0.5.3
urllib3==1.26.14
wcwidth==0.2.6
webencodings==0.5.1
Werkzeug==2.2.2
widgetsnbextension==4.0.5
wrapt==1.14.1
yarl==1.8.2
zipp==3.12.1
```