a-diamant opened 2 years ago
Hello Anna. As you may have guessed, the error you are receiving points to your server running out of usable memory while running UNAGI. The problem seems to stem from the size of the files produced when combining the genome coverage of positively and negatively stranded reads.
A quick fix you can try to reduce that size is to change the genomecov options in the conf.ini file. Look for the following line:
genomecov_options=genomecov -d
And change it to:
genomecov_options=genomecov -dz
This will skip zero-coverage positions and reduce memory consumption, especially with a large genome like human, where most positions have zero coverage (UNAGI was initially created for yeast cDNA reads). Let me know if you are still experiencing problems after that change.
We ran UNAGI internally on smaller FASTQ files, so our run times may differ from yours, but with files averaging 1 GB in size, the run time was about 5 minutes.
Please let us know how it went. (Also: Hello from the other side of the world!)
Hello! Thank you for your answer. Yes, the problem does indeed come from the custom Python function that combines the two coverage files. I found a solution here: https://github.com/mglubber/UNAGI/blob/refactor/app/unagi.py and replaced your original combineCoverage function with the version from the link above. P.S. Hello! Thank you for your help! Actually, for me it's more like "Glory to Ukraine!"
Hello! Do you have any idea where this error may come from? I have tried to run UNAGI several times, but it keeps crashing at the "Generating the genome coverage for each position" step. What is the usual run time for UNAGI? I'm running it on six FASTQ files (from 10 to 16 GB each). Thank you for your suggestions!
P.S. I'm using a university cluster; the technical information is here: https://calculs.univ-cotedazur.fr/?page_id=450&lang=en