Closed jiawei-zhong closed 1 year ago
Hi @jiawei-zhong,
Thank you for reporting the issue. This looks like a memory issue. Have you tried allocating more memory?
Hi Martin,
Thank you for your suggestion. We have 2 TB of memory, which I thought would be enough. After testing, it works for ~10k cells but not for >20k cells.
I noticed that in the paper you performed the group analysis on the TMS FACS data (~100k cells). Could you tell me how much memory you used for that?
Thanks again! Jiawei
Hi @jiawei-zhong ,
Thank you for the information. By memory I meant RAM, not disk space. RAM is typically 8GB or 16GB for a laptop, but the disk space can be very large, like 1TB. I used a 32GB computing node for analyzing the 100K TMS FACS data set.
Yes, I have 2 TB of RAM. That's weird: the server wasn't out of RAM when the job crashed, so maybe the problem is a server setting. Thanks for the info anyway.
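To check whether the job really stays within RAM, it can help to record its peak resident set size directly rather than watching the server. A minimal sketch using standard Linux tools (GNU `time`; the `sleep 0` is a placeholder, substitute the actual `scdrs` command):

```shell
# Show total vs. available RAM on the node.
free -h

# Run the job under GNU time and capture its resource report; the
# "Maximum resident set size" line is the peak RAM the process used.
/usr/bin/time -v sleep 0 2> time_report.txt   # placeholder for the real job
grep "Maximum resident set size" time_report.txt
```

If the reported peak is well below the node's RAM when the crash happens, the failure is likely not simple memory exhaustion.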
Hi Martin,
Thank you for your amazing tool. compute-score works well for me, but when I run perform-downstream --group-analysis, it first uses all the cores of the server and then crashes with "Segmentation fault (core dumped)". Do you have any idea what might cause this? My h5ad dataset has ~60,000 cells.
Thanks, Jiawei
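Since the group analysis saturates every core before the segfault, one thing worth trying is capping the thread pools that NumPy's BLAS/OpenMP backends spawn; oversubscription can inflate memory use and, with some BLAS builds, trigger crashes. A sketch under that assumption (the commented `scdrs` line is a placeholder, not the exact invocation):

```shell
# Limit the common numeric thread pools so the job does not spawn
# one worker thread per core on a large node.
export OMP_NUM_THREADS=4
export OPENBLAS_NUM_THREADS=4
export MKL_NUM_THREADS=4
export NUMEXPR_NUM_THREADS=4

# Then run the job in the same shell, e.g. (placeholder):
# scdrs perform-downstream ... --group-analysis <annotation>
```

These variables only need to be exported in the shell (or job script) that launches the command.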