Open
Wynandi commented 1 week ago
Hi Victoria, and thanks for using DROP! We have seen this issue before, and it is usually a memory problem. It is easiest to submit one job at a time and increase the memory until the job runs successfully. Let us know if the issue persists.
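Since the error below comes from a parallel worker, a related thing worth trying alongside the memory increase is reducing or disabling parallelism inside R. This is only a minimal sketch, assuming the BiocParallel backend that DROP and FRASER use for counting (`register`, `SerialParam`, and `MulticoreParam` are standard BiocParallel functions; the worker count is a placeholder to adjust for your node):

```r
# Hedged sketch, not DROP-specific configuration: cap or disable
# BiocParallel workers so each step stays within the memory limit.
library(BiocParallel)

# Safest for debugging: run everything serially, so a worker killed by
# the out-of-memory killer surfaces as a real error message instead of
# "1 parallel job did not deliver a result".
register(SerialParam())

# Once serial runs succeed, re-enable parallelism with fewer workers
# to keep peak memory down, e.g.:
# register(MulticoreParam(workers = 4))
```

Fewer workers means fewer simultaneous copies of the counting data in memory, which is often enough to get past errors like the one reported here.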
Thanks for the response.
I have increased the limit to 150 GB, and after this I get server errors telling me that I exceeded the limits. If 150 GB sounds too little, I will try to request further access.
I just ran FRASER2 separately to check once again that everything works. Counting the reads and generating splicing metrics with FRASER2 outside DROP took 3-4 hours for the 70 samples.
Br, Victoria
Dear developers,
I have successfully run the AberrantExpression module but encountered issues with the AberrantSplicing module. My samples have been previously analyzed using OUTRIDER, FRASER, and FRASER2, and I am now transitioning to the DROP pipeline (v1.4.0). Given that the samples worked with the earlier tools, I believe they should not be the source of the problem. I have not yet tested with external example samples. Additionally, the drop_demo ran without any issues.
This is the error I keep getting:
Error in reducer$value.cache[[as.character(idx)]] <- values :
  wrong args for environment subassignment
Calls: countSplitReads ... .bploop_impl -> .collect_result -> .reducer_add -> .reducer_add
In addition: Warning message:
In parallel::mccollect(wait = FALSE, timeout = 1) :
  1 parallel job did not deliver a result
Execution halted
To tackle it, I have checked the annotation file and the config file, and I am unable to find anything wrong in either. Additionally, I have checked my batch-job settings for the command "snakemake aberrantSplicing --cores 40" (I also ran it interactively once). According to this list:
Attached below are all the relevant files. I hope you can help me figure out this problem. Thank you!
config.txt fraser_annotation_patientIDsremoved.tsv.txt job_err_32351_4294967294.txt
Br, Victoria Lillback