Open maithermbarros opened 3 months ago
There is some ongoing work to improve memory usage for this code and some other memory intensive functions in pycisTopic that will eventually appear in the polars_1xx branch of pycisTopic. https://github.com/aertslab/pycisTopic/tree/polars_1xx
Thanks for getting back to me. Do you have any workarounds for the meantime? I would really like to be able to run SCENIC+ on this dataset and to do so I need to run this step. I tried using multiprocessing in python but it still doesn't work.
No workarounds for now, but likely in a few weeks. Topic modeling with Mallet got some speedup and reduced memory usage. The `diff_features` code will be next.
Hi, just wanted to ask if there are any updates. I used 780 GB of memory on the HPC but am still not able to get `normalize_scores` running: "Unable to allocate 710. GiB for an array with shape (680104, 140085) and data type float64".
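That 710 GiB matches a dense float64 matrix of the reported shape, so the allocation alone would exceed the 780 GB node before any temporary copies:

```python
# Back-of-the-envelope check of the reported allocation: a dense float64
# array of shape (680104, 140085) needs rows * cols * 8 bytes.
n_rows, n_cols = 680_104, 140_085
size_gib = n_rows * n_cols * 8 / 2**30
print(f"{size_gib:.1f} GiB")  # ~709.8 GiB, before any temporary copies
```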
Not yet. Other projects had higher priority over the last few weeks.
Would reading the data in chunks help for now? Or downsampling the number of cells before running this step?
Downsampling the number of cells would help.
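As a rough sketch, subsampling could look like this (assuming the `impute_accessibility` in your pycisTopic version accepts a `selected_cells` argument; object and variable names are illustrative):

```python
# Illustrative sketch only: subsample cell barcodes before imputation so the
# dense imputed matrix stays small enough to fit in memory. Assumes
# cistopic_obj is an existing CistopicObject and that impute_accessibility
# accepts a selected_cells argument (check the pycisTopic version you use).
import numpy as np
from pycisTopic.diff_features import impute_accessibility, normalize_scores

rng = np.random.default_rng(seed=42)
n_keep = 20_000  # choose a size that fits your memory budget
selected_cells = rng.choice(
    cistopic_obj.cell_names, size=n_keep, replace=False
).tolist()

imputed_acc_obj = impute_accessibility(
    cistopic_obj,
    selected_cells=selected_cells,
    scale_factor=10**6,
)
normalized_imputed_acc_obj = normalize_scores(imputed_acc_obj, scale_factor=10**4)
```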
I managed to run it as a job on the HPC though, without having to downsample :) Thank you for your response!
I finally had some time to work on it. It should now theoretically be possible to run it even on a laptop: https://github.com/aertslab/pycisTopic/issues/179#issuecomment-2460210793
Hello. Thanks for developing SCENIC+, it is super dope and it is giving me nice results so far.
**What type of problem are you experiencing and which function is your problem related to?**
While preparing multiome datasets to run SCENIC+, my Python process gets killed when running `find_highly_variable_features`. I am running a Python script directly on my workstation, which has a good amount of memory.

**Is this problem data set related? If so, provide information on the problematic data set**
It works without issues on another dataset of mine (~26k cells, ~480k regions), but fails on this larger dataset with ~45k cells and ~540k regions.
**Describe alternatives you've considered**
I tried running this step within RStudio through `reticulate()` and in a Python Jupyter notebook too, but it also gets killed because of memory.
**Additional context**
I am running all of this in a conda environment where I installed scenicplus, pycistopic and pycistarget. I tried running this step of the pipeline using a Python script:
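(Sketch of the relevant calls rather than the full script; the names follow the standard pycisTopic `diff_features` workflow and the exact arguments may differ by version.)

```python
# Simplified sketch of the calls (not the full script); names follow the
# standard pycisTopic diff_features workflow and exact arguments may differ
# between versions. cistopic_obj is the CistopicObject with the topic model.
from pycisTopic.diff_features import (
    impute_accessibility,
    normalize_scores,
    find_highly_variable_features,
)

imputed_acc_obj = impute_accessibility(cistopic_obj, scale_factor=10**6)
normalized_imputed_acc_obj = normalize_scores(imputed_acc_obj, scale_factor=10**4)
variable_regions = find_highly_variable_features(
    normalized_imputed_acc_obj, plot=False
)
```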
Then it gets killed:
I also tried running `normalize_scores` first and saving the output as a pkl file to then run `find_highly_variable_features`, but it doesn't work either.
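Roughly like this (a sketch only; object and file names are illustrative):

```python
# Sketch of the two-step attempt: run normalize_scores, pickle the result,
# then load it in a fresh session for find_highly_variable_features.
import pickle

from pycisTopic.diff_features import find_highly_variable_features

with open("normalized_imputed_acc_obj.pkl", "wb") as f:
    pickle.dump(normalized_imputed_acc_obj, f, protocol=pickle.HIGHEST_PROTOCOL)

# later, in a new session
with open("normalized_imputed_acc_obj.pkl", "rb") as f:
    normalized_imputed_acc_obj = pickle.load(f)

variable_regions = find_highly_variable_features(normalized_imputed_acc_obj, plot=False)
```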
**Version information**
Any help/insight would be greatly appreciated as I really need to finish preparing this file to then run SCENIC+. Thank you!