gagneurlab / drop

Pipeline to find aberrant events in RNA-Seq data, useful for diagnosis of rare disorders
MIT License

AberrantSplicing module error: wrong args for environment subassignment #581

Open Wynandi opened 1 week ago

Wynandi commented 1 week ago

Dear developers,

I have successfully run the AberrantExpression module but encountered issues with the AberrantSplicing module. My samples have been previously analyzed using OUTRIDER, FRASER, and FRASER2, and I am now transitioning to the DROP pipeline (v1.4.0). Given that the samples worked with the earlier tools, I believe they should not be the source of the problem. I have not yet tested with external example samples. Additionally, the drop_demo ran without any issues.

This is the error I keep getting:

    Error in reducer$value.cache[[as.character(idx)]] <- values :
      wrong args for environment subassignment
    Calls: countSplitReads ... .bploop_impl -> .collect_result -> .reducer_add -> .reducer_add
    In addition: Warning message:
    In parallel::mccollect(wait = FALSE, timeout = 1) :
      1 parallel job did not deliver a result
    Execution halted

To troubleshoot, I have checked the annotation file and the config file and cannot find anything wrong in either. Additionally, I have checked my batch job settings for the command `snakemake aberrantSplicing --cores 40` (also run interactively once), trying the following in order:

  1. Same settings as for AberrantExpression
  2. Increased batch memory to 100G
  3. Kept 100G and set ntasks to 2; from this point on, also reduced the number of files from 70+ to 35
  4. Same as above, but on 2 nodes
  5. Increased memory to 150G, with ntasks 2, nnodes 2, and cpus 40
  6. Added server network connection commands to the batch job (cores was also still 20 in the `snakemake --cores` command; now updated to 40)
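For context, a SLURM batch script corresponding to one of the configurations above might look like the sketch below. This is purely illustrative: the job name, partition, and walltime are assumptions, not taken from the actual setup; only the memory, ntasks, cpus, and snakemake values come from the list above.

```shell
#!/bin/bash
#SBATCH --job-name=drop_splicing   # hypothetical job name
#SBATCH --mem=100G                 # batch memory, as in attempt 2
#SBATCH --ntasks=2                 # as in attempt 3
#SBATCH --cpus-per-task=40         # should match the snakemake --cores value
#SBATCH --time=24:00:00            # assumed walltime

# Run only the AberrantSplicing module of DROP
snakemake aberrantSplicing --cores 40
```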

Attached below are all the relevant files. I hope you can help me figure out this problem. Thank you!

config.txt fraser_annotation_patientIDsremoved.tsv.txt job_err_32351_4294967294.txt

Br, Victoria Lillback

AtaJadidAhari commented 1 week ago

Hi Victoria and thanks for using DROP! We've seen this issue before and it is usually a problem with memory. It is easier if you just submit one job at a time and increase the memory up until it runs successfully. Let us know if the issue still persists.
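The one-job-at-a-time approach could be sketched as follows. This is a hedged example based on standard Snakemake command-line usage, not DROP-specific documentation; the memory values are illustrative.

```shell
# Limit Snakemake to one concurrent job so that a single job
# gets the whole memory allocation of the batch submission.
snakemake aberrantSplicing --cores 40 --jobs 1

# If it fails with the same error, resubmit with a larger batch
# memory allocation, increasing stepwise until the run succeeds,
# e.g. 100G -> 150G -> 200G.
```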

Wynandi commented 1 week ago

Thanks for the response.

I have increased the limit to 150GB, and after this I get server errors telling me that I exceeded the limits. If 150GB sounds too low, I will try to request more.

I just ran FRASER2 separately to check once again that everything works. Counting the reads and generating splicing metrics for 70 samples with FRASER2 outside DROP took 3-4 hours.

Br, Victoria