lishuangshuang0616 opened 5 months ago
Hi Shuangshuang, thank you for developing this tool. I see the same error while analysing scRNA-seq data with version 2.1.2. Would you recommend downgrading to an older version for now? Cheers, Nora
The memory behaviour of the scRNA pipeline should be different from the scATAC one. My guess is that it is the oligo filtering or the saturation calculation. Can you share the error message? @Wenjun-Liu
@lishuangshuang0616 Sure, please see the log file attached. 20240623.txt
This error is caused by insufficient memory during the STAR alignment. Since your sample species is human, 50 GB of memory should be enough for the analysis.
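As a rough illustration of how to make sure the scheduler actually grants that memory (assuming a SLURM cluster; the invocation below is a hypothetical sketch, not the pipeline's documented command line, so adjust names and paths to your setup):

```shell
#!/bin/bash
#SBATCH --job-name=dnbc4_rna
#SBATCH --mem=50G            # memory budget for STAR with a human genome index
#SBATCH --cpus-per-task=8

# Hypothetical invocation for illustration only; check `dnbc4tools rna run --help`
# for the actual required arguments (fastq paths, genome directory, etc.).
dnbc4tools rna run \
    --name sample1 \
    --threads 8
```

If the job is killed with OUT_OF_MEMORY despite a large `--mem` request, it is worth checking whether a per-process limit (e.g. `ulimit -v`) or a cgroup cap on the node is lower than the requested amount.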
When I ran this on a cluster with 128 GB of RAM, the job was killed just a few minutes in with an OUT_OF_MEMORY error. And the log file above was from a machine with 256 GB of RAM... Have you seen this happen before?
I haven't encountered this problem before. Have you successfully run other samples? It looks like the alignment runs for 10 minutes after loading the genome and then stops because of excessive memory use. Can you share a screenshot of the 01.data page?
I will provide you with a new scStar build (scStar.zip). Unzip it and replace the original package at /home/liuwj/ws/Pologroup/users/liuwj/miniconda3/envs/dnbc4tools/lib/python3.8/site-packages/dnbc4tools/software/scStar.
I'm not sure whether this will help, because I don't know the cause of the problem yet.
Here's the 01.data output. I'm re-running the job with the new scStar now. Thanks a lot for looking into this.
In the current version 2.1.2 of the scATAC analysis pipeline, some samples hit SIGNALS errors during merging because of high memory usage. This will be optimized in the next version. The cause is that some fragment positions are present in a very large number of cell barcodes, which leads to excessive memory consumption during computation. The planned optimization is to filter out these positions before calculating the Jaccard index, since such positions are not genuine signal.
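The planned filter described above can be sketched roughly as follows. This is a minimal illustration under stated assumptions, not the pipeline's actual code: the function name, data layout (a list of `(position, barcode)` pairs), and threshold value are all made up for the example.

```python
from collections import defaultdict
from itertools import combinations

def jaccard_with_filter(fragments, max_barcodes_per_pos=50):
    """Pairwise Jaccard indices between cell barcodes, after dropping
    positions shared by an implausibly large number of barcodes
    (treated here as artifacts rather than genuine signal)."""
    # Group barcodes by fragment position.
    pos_to_bcs = defaultdict(set)
    for pos, bc in fragments:
        pos_to_bcs[pos].add(bc)

    # Filter: keep only positions seen in at most max_barcodes_per_pos barcodes.
    kept = {p: bcs for p, bcs in pos_to_bcs.items()
            if len(bcs) <= max_barcodes_per_pos}

    # Invert to per-barcode position sets.
    bc_to_pos = defaultdict(set)
    for p, bcs in kept.items():
        for bc in bcs:
            bc_to_pos[bc].add(p)

    # Jaccard index = |A ∩ B| / |A ∪ B| for each barcode pair.
    scores = {}
    for a, b in combinations(sorted(bc_to_pos), 2):
        union = len(bc_to_pos[a] | bc_to_pos[b])
        if union:
            scores[(a, b)] = len(bc_to_pos[a] & bc_to_pos[b]) / union
    return scores
```

Filtering before the pairwise step also bounds the memory cost, because a position shared by k barcodes would otherwise contribute to on the order of k² barcode pairs.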