nielsphk / velocyto.R


velocyto.R analysis: UMAP smooth kNN distance failures #1

Open · yanpinlu opened this issue 2 years ago

yanpinlu commented 2 years ago

When I was running the analysis with R on Linux, the pipeline failed at `RunUMAP`. How can I fix it?

```r
A_5s <- ReadVelocity(file = "/home/lvelocyto/190706A_drl_count.loom")
bm <- as.Seurat(x = A_5s)
bm <- SCTransform(object = bm, assay = "spliced")
bm <- RunPCA(object = bm, verbose = FALSE)
bm <- FindNeighbors(object = bm, dims = 1:20)
bm <- FindClusters(object = bm)
bm <- RunUMAP(object = bm, dims = 1:20)
```

```
Warning: The default method for RunUMAP has changed from calling Python UMAP via reticulate to the R-native UWOT using the cosine metric
To use Python UMAP via reticulate, set umap.method to 'umap-learn' and metric to 'correlation'
This message will be shown once per session
20:27:33 UMAP embedding parameters a = 0.9922 b = 1.112
20:27:33 Read 11329 rows and found 20 numeric columns
20:27:33 Using Annoy for neighbor search, n_neighbors = 30
20:27:33 Building Annoy index with metric = cosine, n_trees = 50
0%   10   20   30   40   50   60   70   80   90   100%
[----|----|----|----|----|----|----|----|----|----|
**|
20:27:35 Writing NN index file to temp file /tmp/RtmpoNUZSR/file88363f984c80
20:27:35 Searching Annoy index using 1 thread, search_k = 3000
20:27:37 Annoy recall = 0.2648%
20:27:38 Commencing smooth kNN distance calibration using 1 thread
20:27:38 11329 smooth knn distance failures
Error in x2set(Xsub, n_neighbors, metric, nn_method = nn_sub, n_trees,  :
  Non-finite entries in the input matrix
```

jlmelville commented 2 years ago

That's a problem that arises from the uwot library, which I maintain. Usually it means that you have identical rows in your input data; you should filter those out if possible.

The exact failure mode and message here aren't very helpful, though, so I should look into whether that can be improved.
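
Since the suggestion above is to filter identical rows, here is a minimal sketch of how one might check for and drop duplicate cells before re-running `RunUMAP`. The object name `bm`, the `pca` reduction, and `dims = 1:20` follow the original pipeline; the filtering step itself is illustrative, not a prescribed fix.

```r
# Minimal sketch (assumes the `bm` Seurat object from the pipeline above).
# RunUMAP passes the PCA embedding to uwot, so identical rows there are the
# likely source of the smooth kNN distance failures.
library(Seurat)

emb <- Embeddings(bm, reduction = "pca")[, 1:20]

# Cells whose first 20 PC coordinates exactly duplicate an earlier cell
dup_cells <- rownames(emb)[duplicated(emb)]
message(length(dup_cells), " duplicate cells found")

# Drop the duplicates before calling RunUMAP again (one possible approach)
if (length(dup_cells) > 0) {
  bm <- subset(bm, cells = setdiff(colnames(bm), dup_cells))
}

# Also worth confirming there are no non-finite values, since the error
# message mentions "Non-finite entries in the input matrix"
stopifnot(all(is.finite(emb)))
```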

yanpinlu commented 2 years ago

> That's a problem that arises from the uwot library, which I maintain. Usually it means that you have identical rows in your input data; you should filter those out if possible.
>
> The exact failure mode and message here aren't very helpful, though, so I should look into whether that can be improved.

You have developed a very powerful package; the problem is most likely with my data. I have taken a different approach and avoided this issue. Thank you very much for your reply.
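
For reference, one alternative hinted at in the RunUMAP warning above is to switch from the R-native uwot backend to Python's umap-learn via reticulate. A minimal sketch, assuming umap-learn is installed in the Python environment that reticulate uses; this is just one possible workaround, not necessarily the one taken here.

```r
library(reticulate)
# reticulate::py_install("umap-learn")  # if the Python package is missing

# Use the Python UMAP backend with the correlation metric, as suggested by
# the warning printed by RunUMAP above.
bm <- RunUMAP(object = bm, dims = 1:20,
              umap.method = "umap-learn", metric = "correlation")
```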