Closed by acurry-hyde 1 year ago
Dear exomePeak2 User,
Thank you for highlighting the bug in the current version of exomePeak2. We discovered that the memory overflow issue originated from the removal of the read count filter for bins during last year's major methodological update. That change was designed to enhance sensitivity, but it unfortunately led to memory problems when the mode is set to 'full_transcript'. We have addressed this issue in our latest update (version 1.9.3), which is now available in the exomePeak2 GitHub repository. Note that results in exon-only mode are not affected by this upgrade. To update your package, please use remotes::install_github("ZW-xjtlu/exomePeak2").
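For reference, a short update-and-check sketch (the install call is the one above; the version check is base R's packageVersion()):

```r
# Update exomePeak2 to >= 1.9.3 from GitHub, then confirm the installed version.
# (Requires the remotes package: install.packages("remotes") if it is missing.)
remotes::install_github("ZW-xjtlu/exomePeak2")
packageVersion("exomePeak2")  # should report 1.9.3 or later
```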
We appreciate your continued support and feedback.
Best regards, Zhen Wei
Hello - I've been attempting to run exomePeak2 on hg38 in full-transcript mode for chr10 only, with single samples provided for IP and input, control vs. treated. However, it has been failing consistently due to 1) memory limit issues and 2) now, with increased memory, the error: "caught segfault address (nil), cause 'unknown'".
I'm using an HPC Linux OS, requesting a single node with 20 threads and 128 GB of memory. I'm using a conda-compiled R 4.3 environment. Within R, I'm increasing the memory limit to 100 GB using the 'ulimit' package.
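Roughly, the in-R limit is raised like this (a minimal sketch; it assumes the 'ulimit' package is the GitHub krlmlr/ulimit package and that memory_limit() takes the limit in MiB):

```r
# Raise the R process memory limit before calling exomePeak2.
# Assumption: 'ulimit' here is the GitHub package krlmlr/ulimit,
# whose memory_limit() takes the limit in MiB.
# remotes::install_github("krlmlr/ulimit")
library(ulimit)
memory_limit(100 * 1024)  # ~100 GB, below the 128 GB requested from the scheduler
```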
1) Memory limit issue:
2) With increased memory, caught segfault error:
As we would prefer to run exomePeak2 on the whole BAM file, splitting it into chromosomes was only a workaround; however, even this single chromosome is not running to completion. I've read that exomePeak2 is highly memory intensive, but the documentation says that only 4 GB of memory is needed for large genomes...
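For context, the per-chromosome subsetting workaround can be done in R roughly like this (a minimal sketch using Rsamtools/GenomicRanges; the file names are placeholders, and the chr10 length should be checked against your reference):

```r
# Subset a BAM file to chr10 only (hg38), writing a new filtered BAM.
library(Rsamtools)
library(GenomicRanges)

in_bam  <- "IP_control.bam"        # placeholder file name
out_bam <- "IP_control.chr10.bam"  # placeholder file name

indexBam(in_bam)  # filterBam() needs a .bai index for region queries

chr10 <- GRanges("chr10", IRanges(1, 133797422))  # hg38 chr10 length; verify for your reference
filterBam(in_bam, out_bam, param = ScanBamParam(which = chr10))
```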
We've successfully run exomePeak2 in exon-only mode, but we need the full-transcript results. Any help/guidance would be greatly appreciated!