Closed — lucacozzuto closed this issue 2 years ago
@lucacozzuto can you provide some additional information, for example a small piece of the input file? And also the command -- you've snipped the verbose output and progress but we don't see the arguments or anything like filenames. If you feel there would be confidential info in the filenames, then it would help if you could copy them to generic filenames and post the exact command line you used. Thanks!!!
Many thanks for your quick answer! This is the command line
```
falco -o KO_fastqc -t 1 KO.fq.gz
```
The file is huge (59G) and there are some reads that are up to 1 Mb
hello,
thank you for reaching out about the issue. I was able to reproduce the problem with very large synthetic reads.
This seems to be less a memory issue than a bug in falco: we weren't accounting for read lengths as large as the ones currently produced by Oxford Nanopore.
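To illustrate the class of bug described here (this is a toy sketch, not falco's actual code): per-position statistics kept in a buffer sized for an assumed maximum read length break when a read exceeds that length, whereas growing the buffer on demand handles arbitrarily long reads.

```python
class PerBaseStats:
    """Toy per-position base counter that grows with the longest read seen.

    Illustrative only; falco's real internals differ. The point is that the
    buffer is extended on demand rather than sized for a fixed maximum
    read length.
    """

    def __init__(self):
        self.counts = []  # one {base: count} dict per read position

    def add_read(self, seq):
        # Grow the buffer instead of assuming a fixed maximum read length.
        while len(self.counts) < len(seq):
            self.counts.append({b: 0 for b in "ACGTN"})
        for i, base in enumerate(seq):
            self.counts[i][base] = self.counts[i].get(base, 0) + 1
```

A tool with a hard-coded maximum would crash (or silently truncate) on megabase-scale Nanopore reads; the dynamic version only pays for the longest read actually seen.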
If you are working with a clone of the repo, I pushed a fix at 2f82110 that may resolve the issue. On my 16 GB RAM machine I was able to run falco to completion on a simulated read of 30 million bases.
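For anyone wanting to reproduce this, a single very long synthetic read like the one used above can be generated with a short script (a sketch; the filename and read length are illustrative, and falco itself is run separately on the output):

```python
import gzip
import random


def write_synthetic_fastq(path, read_len, seed=42):
    """Write a gzip-compressed FASTQ file containing one very long read."""
    random.seed(seed)
    seq = "".join(random.choice("ACGT") for _ in range(read_len))
    qual = "I" * read_len  # constant quality placeholder for every base
    with gzip.open(path, "wt") as fh:
        fh.write("@synthetic_read_1\n")
        fh.write(seq + "\n")
        fh.write("+\n")
        fh.write(qual + "\n")


# e.g. write_synthetic_fastq("synthetic.fq.gz", 30_000_000)
# then: falco -o synthetic_fastqc -t 1 synthetic.fq.gz
```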
If at all possible, could you let us know if you can run falco to completion on your data with this commit?
Thank you very much in advance!
Dear @guilhermesena1, it worked! Thanks for the fix — I added it to my Nextflow pipeline as a replacement for FastQC. I also made a Dockerfile with your tool, so if you'd like I can add it to your repo.
Best,
Luca
Dear developers, thanks for your valuable tool! I'm trying to use it on some Nanopore data and I got the following error:
I used 80 GB of RAM, so I don't think RAM is the problem.
Luca