Hi!
I'm using norgal to assemble the mitogenome of purple maize (approx. 560,000 bp). My dataset is 37.6 GB for the two fastq.gz (compressed) files; decompressed, they are approx. 300 GB each. Do you think I should subsample my files with a tool like seqtk, or just use head? How many reads would I need to obtain the mitogenome? And how long do you think the program will take to run with my data?
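For context, here is the back-of-envelope arithmetic I've been using to think about the read-count question; the coverage target and read length are assumptions on my part, not numbers from the norgal docs:

```shell
# Rough estimate of mitochondria-derived reads needed for a target depth
# (assumptions: 150 bp reads, 100x target coverage of the mitogenome):
#   reads_needed = genome_size * coverage / read_length
genome_size=560000   # purple maize mitogenome, approx. bp
coverage=100         # assumed target depth; adjust as needed
read_length=150      # assumed Illumina read length
echo $(( genome_size * coverage / read_length ))   # prints 373333

# Since mitochondrial DNA is only a fraction of a whole-genome library,
# the total WGS reads to subsample would need to be much larger than this.
# For paired-end subsampling with seqtk, the same seed (-s) must be used
# on both files to keep pairs in sync, e.g.:
#   seqtk sample -s100 reads_1.fastq.gz 10000000 > sub_1.fastq
#   seqtk sample -s100 reads_2.fastq.gz 10000000 > sub_2.fastq
```

Does that kind of estimate make sense here, or does norgal expect the full dataset?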
Thank you in advance :)