Closed: angkoomin closed this issue 2 years ago
Hi @angkoomin ,
Based on the error message, it seems the stack limit is not actually unlimited: the usage reported in the error, 11961892 bytes, is only about 12 MB, while your job seems to have 500GB of memory available overall. This may be a limit imposed by your grid job submission system.
The random_trees clustering method makes use of recursion (limited to a depth of 3), but with big datasets it seems to consume a significant amount of stack. The solutions would be either to figure out how to actually remove the limit on the stack size, or to use Leiden clustering instead (though this may not be possible in your case if you want to use Uphyloplot2 downstream). One way to test whether R actually has an "unlimited" stack size, without having to run inferCNV and wait days for an error, is simply to run Cstack_info() in R and check the size value: if the stack is unlimited, that value will be NA, so any other value is the current limit in effect.
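For reference, this check takes seconds when run in an R session started from the same shell or job environment where the limit was set (a minimal sketch; Cstack_info() is base R):

```r
## Report the C stack limit as seen from inside this R process.
## "size" is NA when the stack is truly unlimited; any number is
## the limit currently in effect, in bytes.
Cstack_info()["size"]
```

And if switching methods is an option, the change is a single argument in the run() call (a sketch based on the call quoted in the original post below, trimmed to the arguments relevant here):

```r
## Sketch: same analysis, but with Leiden subclustering instead of
## random_trees, avoiding the recursive tree-building step entirely.
infercnv_obj = infercnv::run(infercnv_object_005,
                             cutoff = 0.1,
                             out_dir = "Terminal_output/005",
                             cluster_by_groups = FALSE,
                             analysis_mode = "subclusters",
                             tumor_subcluster_partition_method = "leiden",
                             HMM = TRUE,
                             HMM_type = "i6",
                             denoise = TRUE,
                             noise_filter = 0.12)
```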
Regards, Christophe.
Hi there,
I have been running infercnv for a few days with the following call:
infercnv_obj = infercnv::run(infercnv_object_005,
                             cutoff = 0.1,
                             out_dir = "Terminal_output/005",
                             cluster_by_groups = FALSE,
                             plot_steps = TRUE,
                             scale_data = TRUE,
                             HMM = TRUE,
                             HMM_type = "i6",
                             denoise = TRUE,
                             noise_filter = 0.12,
                             tumor_subcluster_partition_method = "random_trees",
                             analysis_mode = "subclusters",
                             resume_mode = FALSE)
However, I'm getting the error "C stack usage 11961892 is too close to the limit".
I have read through some of the similar previous issues, in which the developer suggested running ulimit -s unlimited in the terminal before starting R. I have done so, but I'm still getting the same error. Since the job runs for a few days, I've only been able to troubleshoot every 4 days or so, without any success: the same error keeps occurring.
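A quick way to check whether a given shell or job environment will hit this error, without waiting days for inferCNV to reach the recursive step, is a deliberate deep-recursion test (a generic sketch, independent of inferCNV):

```r
## Generic stress test: recurse until something gives. With a small
## C stack this errors almost immediately with the same
## "C stack usage ... is too close to the limit" message; with a
## genuinely unlimited stack it instead runs until R's own nesting
## cap (raised below) stops it.
options(expressions = 500000)   # raise R's eval-depth cap first
f <- function(n) f(n + 1)       # unbounded recursion
try(f(1))
```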
The image below shows my script in the terminal:
Any other suggestions would be much appreciated.