Hi Pablo,
The WARNING message might be due to the conda version. I encourage you to always have the most up-to-date version (conda update conda).
However, the other errors seem to be a problem with the yml file across operating systems. I have modified the yml file to solve that issue. You can get the new yml file by typing
wget https://anaconda.org/nmquijada/tormes-1.0/2019.04.25.180147/download/tormes-1.0.yml
and then following the other instructions from the README.
Try updating conda and re-installing tormes. Let me know how it goes!
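For reference, the full re-install sequence would look roughly like this (just a sketch that strings together the commands from this thread):
conda update conda
wget https://anaconda.org/nmquijada/tormes-1.0/2019.04.25.180147/download/tormes-1.0.yml
conda env create -n tormes-1.0 --file tormes-1.0.yml
conda activate tormes-1.0
tormes-setup    # remember to run this the first time you use TORMES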
Cheers, Narciso
Hi Narciso,
I have updated conda and then tried installing the TORMES package again using the yml file you kindly provided, but I still get an error. Do you have any idea how to solve this problem? Thanks in advance.
(base) Pablos-MacBook-Air:PVA_anvio_fasta pablovargas$ conda env create -n tormes-1.0 --file tormes-1.0.yml
Collecting package metadata: done
Solving environment: failed
ResolvePackageNotFound:
sistr_cmd=1.0.2
tormes=1.0
mauvealigner=1.2.0
mauve=2.4.0.r4736
Hi Pablo,
Sorry for the late reply... I don't know what is going on, as we tried to install TORMES on different operating systems and we didn't find those errors... Perhaps you can try to install TORMES as a new environment. I wouldn't suggest this as the definitive solution, but it might allow you to use TORMES while we solve the incompatibility issue. Try the following. This will generate a "tormes-environment" environment:
conda update conda
conda config --add channels defaults
conda config --add channels bioconda
conda config --add channels conda-forge
conda create -n tormes-environment python=3.5
conda activate tormes-environment
conda install abricate blast fasttree kraken mauve parallel prinseq prokka quast roary sistr_cmd spades trimmomatic kma parsnp perl-data-dumper megahit sickle-trim mlst
conda install -c r r-essentials r-plotly r-biocmanager
conda install -c bioconda bioconductor-ggtree
conda install -c conda-forge imagemagick tabulate r-plotly r-biocmanager
conda install -c anaconda git
conda install -c nmquijada tormes
conda deactivate
To activate this environment, type conda activate tormes-environment.
Remember to run tormes-setup the first time you use TORMES.
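For reference, the first run would then look roughly like this (just a sketch; the metadata file and output directory names are placeholders):
conda activate tormes-environment
tormes-setup                                                            # only needed the first time
tormes --metadata my_metadata.txt --output my_tormes_run --threads 8    # placeholder names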
Please let me know if it worked.
Narciso
Dear Narciso, I have been experiencing the same set of problems as Pablo when trying to install tormes. I tried to install tormes as a new environment as you suggested. The process worked except for the final "conda install -c nmquijada tormes" command. When I reach this step, I get the following error:
PackagesNotFoundError: The following packages are not available from current channels:
Current channels:
I interpret this as meaning there is no file on your nmquijada page labeled "tormes", or is there something simple I am missing? Thank you very much for your time. I am using a 2017 MacBook Pro with OS Mojave, if that is of use to you.
Sincerely,
Jack
Dear Narciso,
I have been able to successfully install the TORMES environment on a Linux machine using the yml file. I then ran the tormes-setup command and tried to conduct my first run. However, it failed. It says that the Mauve.jar file is not present in the bin, although I have installed Mauve manually using the conda install command.
(tormes-1.0) sam@BioInf2:~/Downloads/tormes_output$ tormes --metadata samples_agro_test.txt --output Agro_TORMES_2019 --threads 8
Software: /home/sam/anaconda2/envs/tormes-1.0/bin/abricate found
Software: /home/sam/anaconda2/envs/tormes-1.0/bin/convert found
Software: /home/sam/anaconda2/envs/tormes-1.0/bin/fasttree found
Software: /home/sam/anaconda2/envs/tormes-1.0/bin/kraken found
Software: /home/sam/anaconda2/envs/tormes-1.0/bin/kraken-report found
Software: /home/sam/anaconda2/envs/tormes-1.0/bin/megahit found
Software: /home/sam/anaconda2/envs/tormes-1.0/bin/mlst found
Software: /home/sam/anaconda2/envs/tormes-1.0/bin/parallel found
Software: /home/sam/anaconda2/envs/tormes-1.0/bin/prinseq-lite.pl found
Software: /home/sam/anaconda2/envs/tormes-1.0/bin/prokka found
Software: /home/sam/anaconda2/envs/tormes-1.0/bin/quast found
Software: /home/sam/anaconda2/envs/tormes-1.0/bin/roary found
Software: /home/sam/anaconda2/envs/tormes-1.0/bin/roary2svg.pl found
Software: /home/sam/anaconda2/envs/tormes-1.0/bin/sickle found
Software: /home/sam/anaconda2/envs/tormes-1.0/bin/spades.py found
Software: /home/sam/anaconda2/envs/tormes-1.0/bin/trimmomatic found
ERROR: Software /home/sam/anaconda2/envs/tormes-1.0/bin/../share/mauve-2.4.0.r4736-0/Mauve.jar not found! Please check if:
Could you help me solve this issue?
Cheers, Pablo
Hi Pablo,
Good that you managed to install tormes. We are getting closer...
So, did you install mauve when running the tormes .yml file, or did you install it on your own?
Before running, tormes checks that every piece of software is installed on your computer. You can also install the software on your own, but then you have to make sure that you include the path in the tormes config_file.txt, which is the file where tormes looks for the location of the different software included in the pipeline (on your computer, it might be in /home/sam/anaconda2/envs/tormes-1.0/files/).
After activating the tormes environment, could you please paste the output of which Mauve?
Thanks
Narciso
Hi Narciso,
I ran the .yml file to install TORMES, so if Mauve was in there, then it was installed. However, when I tried running it, there was a message that Mauve was not installed, so I tried installing it myself using the conda install command. But I have not edited the config_file.txt yet.
When inputting the command which Mauve, I get the following response: /home/sam/anaconda2/envs/tormes-1.0/bin/Mauve
Should I edit the config_file.txt or should I do something else? It seems that Mauve is installed in the Tormes environment. Right?
Cheers, Pablo
OK.
If you do ls -l /home/sam/anaconda2/envs/tormes-1.0/bin/Mauve (after activating the tormes environment), you will see that Mauve is actually a symbolic link to a file that, on your computer, should be in /home/sam/anaconda2/envs/tormes-1.0/share/mauve-[YOUR DOWNLOADED VERSION]/
If you installed Mauve manually, it won't be the same version the config_file.txt is looking for (mauve-2.4.0.r4736-0).
The file needed for running in the command line is Mauve.jar, which will also be in /home/sam/anaconda2/envs/tormes-1.0/share/mauve-[YOUR DOWNLOADED VERSION]/
So please modify the config_file.txt accordingly to include this Mauve.jar path.
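For reference, a minimal sketch of how to check this (the paths match the ones above):
conda activate tormes-1.0
ls -l /home/sam/anaconda2/envs/tormes-1.0/bin/Mauve               # shows where the symbolic link points
ls -d /home/sam/anaconda2/envs/tormes-1.0/share/mauve-*           # shows the Mauve version actually installed
find /home/sam/anaconda2/envs/tormes-1.0/share -name Mauve.jar    # locates the Mauve.jar to reference in config_file.txt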
Cheers,
Narciso
Hi again, Thanks to your issue, I found that I have a bug in the tormes-setup script, which generates the config_file.txt (among other things). It was devised for the Mauve version that was current at the time (mauve-2.4.0.r4736-0), and I found out they have released a newer one (mauve-2.4.0.r4736-1). So the only thing you have to change in the config_file.txt (in the "MAUVE" row) is:
/home/sam/anaconda2/envs/tormes-1.0/bin/../share/mauve-2.4.0.r4736-0/Mauve.jar
to
/home/sam/anaconda2/envs/tormes-1.0/bin/../share/mauve-2.4.0.r4736-1/Mauve.jar
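For reference, one way to make that edit from the command line (just a sketch; it assumes config_file.txt lives in the files/ directory mentioned earlier):
sed -i 's|mauve-2.4.0.r4736-0|mauve-2.4.0.r4736-1|' /home/sam/anaconda2/envs/tormes-1.0/files/config_file.txt
grep -i mauve /home/sam/anaconda2/envs/tormes-1.0/files/config_file.txt    # check the MAUVE row now points to the -1 directory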
That might fix your issue. I will fix the bug in tormes-setup so the fix is included in the following releases. Thanks! Narciso
Hi Narciso,
I edited the Mauve path in the config_file.txt file and it worked! However, now I got another error :(
When the package is running, it stops halfway with this message:
Approximated maximum memory consumption: 333M writing new database writing clustering information program completed !
Total CPU time 53377.68 -----FINDING THE BEST ALIGNMENT FOR EACH GENE SEQUENCE----- gene_db_alignment_against_seq.sh: 1: gene_db_alignment_against_seq.sh: Syntax error: redirection unexpected
I checked the gene_db_alignment_against_seq.sh file but I am not sure what needs to be edited or changed.
Could you please help me out with this issue? I really appreciate your support so far!
Cheers, Pablo
Dear Pablo, Good that the Mauve thing worked! Let's go to the next one... Would you mind sharing with me the tormes.log file so I can trace the error? Thanks! Narciso
Hi Narciso,
Sure thing. You can find it attached.
Cheers, Pablo
Hi Pablo,
I cannot find the attached file. Could you please do it again? Thanks
Hi Narciso,
I am attaching it again. In case you cannot find it, I am pasting the text below.
This is tormes version 1.0
Script used: /home/sam/anaconda2/envs/tormes-1.0/bin/tormes --metadata samples_agro_test.txt --output Agro_TORMES_2019 --threads 8
Parameters set:
TORMES pipeline started at: 2019-10-03 16:02
Quality filtering process started at: 2019-10-03 16:03
Assembly started at: 2019-10-03 16:07
Species identification started at: 2019-10-03 16:23
MLST started at: 2019-10-03 16:25
Antibiotic resistance and virulence genes search started at: 2019-10-03 16:25
Annotation started at: 2019-10-03 16:25
Pangenome analysis started at: 2019-10-03 16:30
Tormes report started at: 2019-10-03 16:36
WARNING: html report file could not be created
TORMES pipeline finished at: 2019-10-03 16:36
Hi Pablo,
I still can't find the attachment, but thanks for sharing the text anyway.
Would you mind running the same analysis with the screen output and errors redirected to a file? Like this:
/home/sam/anaconda2/envs/tormes-1.0/bin/tormes --metadata samples_agro_test.txt --output Agro_TORMES_2019 --threads 8 &>>error-tormes.txt 2>>error-tormes.txt &
You probably won't see anything on the screen during the analysis, but you can trace the status by looking at the Agro_TORMES_2019/tormes.log file.
And please share with me that "error-tormes.txt" file afterwards.
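As an aside, &>> already appends both stdout and stderr to the file, so an equivalent, slightly shorter form would be (same file names, just a sketch):
tormes --metadata samples_agro_test.txt --output Agro_TORMES_2019 --threads 8 &>> error-tormes.txt &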
Cheers,
Narciso
Hi Narciso,
The command you sent does not seem to work.
(tormes-1.0) sam@BioInf2:~/Downloads/tormes_output$ tormes --metadata samples_agro_test.txt --output Agro_TORMES_2019 --threads 8 &>>error-tormes.txt 2>>error-tormes.txt &
[1] 27587
Hi Pablo,
Yes, it is running in the background on your computer. The number after the [1] is the process ID, which you can trace on your computer using commands such as top.
With that syntax, the command is sent to the background, but you can still use the same terminal for other things.
If you don't close the terminal, a "Done" message for job [1] will appear when the run is finished. That's why I said that in the meantime you can check the Agro_TORMES_2019/tormes.log file.
Additionally, you'll see that the "error-tormes.txt" file has been created in the directory where you ran the command.
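For reference, a minimal sketch of how to keep an eye on the run (standard shell tools; the paths match the run above):
jobs                                   # list background jobs started from this terminal
top -p 27587                           # watch the process by its PID
tail -f Agro_TORMES_2019/tormes.log    # follow the pipeline status
tail -f error-tormes.txt               # follow the redirected output and errors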
Hello Narciso,
I'm trying to install TORMES on an AWS instance but I'm having issues with this line:
conda env create -n tormes-1.0 --file tormes-1.0.yml
I get:
Collecting package metadata (repodata.json): done
Solving environment: - Killed
Is this because I need to use another type of AWS machine?
Thank you!
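"Killed" during the solving step often means the process ran out of memory, which would fit the question about the machine type. A hedged sketch of how to check on a Linux instance:
free -h                                            # how much RAM the instance has
dmesg | grep -iE 'killed process|out of memory'    # whether the kernel OOM killer stopped the conda solver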
Hi Narciso,
Sorry for the late reply, but I was out of the office for a side project.
I have successfully run the command you recommended. Most of the results are there, but some are still missing. You can find the log attached.
Hopefully we can solve this soon.
Cheers, Pablo
Software: /home/sam/anaconda2/envs/tormes-1.0/bin/abricate found
Software: /home/sam/anaconda2/envs/tormes-1.0/bin/convert found
Software: /home/sam/anaconda2/envs/tormes-1.0/bin/fasttree found
Software: /home/sam/anaconda2/envs/tormes-1.0/bin/kraken found
Software: /home/sam/anaconda2/envs/tormes-1.0/bin/kraken-report found
Software: /home/sam/anaconda2/envs/tormes-1.0/bin/megahit found
Software: /home/sam/anaconda2/envs/tormes-1.0/bin/mlst found
Software: /home/sam/anaconda2/envs/tormes-1.0/bin/parallel found
Software: /home/sam/anaconda2/envs/tormes-1.0/bin/prinseq-lite.pl found
Software: /home/sam/anaconda2/envs/tormes-1.0/bin/prokka found
Software: /home/sam/anaconda2/envs/tormes-1.0/bin/quast found
Software: /home/sam/anaconda2/envs/tormes-1.0/bin/roary found
Software: /home/sam/anaconda2/envs/tormes-1.0/bin/roary2svg.pl found
Software: /home/sam/anaconda2/envs/tormes-1.0/bin/sickle found
Software: /home/sam/anaconda2/envs/tormes-1.0/bin/spades.py found
Software: /home/sam/anaconda2/envs/tormes-1.0/bin/trimmomatic found
Software: /home/sam/anaconda2/envs/tormes-1.0/bin/../share/mauve-2.4.0.r4736-1/Mauve.jar found
Binaries for MAUVE found
Thanks for using tormes version 1.0
Status can be shown in "/home/sam/Downloads/Agro_TORMES_2019/tormes.log"
TrimmomaticPE: Started with arguments: -threads 8 -phred33 /home/sam/Downloads/Agro_TORMES_2019/Raw_reads/MAFF210265_R1.fastq.gz /home/sam/Downloads/Agro_TORMES_2019/Raw_reads/MAFF210265_R2.fastq.gz /home/sam/Downloads/Agro_TORMES_2019/cleaned_reads/MAFF210265.noadapt.R1.fastq.gz /dev/null /home/sam/Downloads/Agro_TORMES_2019/cleaned_reads/MAFF210265.noadapt.R2.fastq.gz /dev/null ILLUMINACLIP:/home/sam/anaconda2/envs/tormes-1.0/bin/../files/adapters.fa:1:30:11 Using PrefixPair: 'AGATGTGTATAAGAGACAG' and 'AGATGTGTATAAGAGACAG' Using PrefixPair: 'TACACTCTTTCCCTACACGACGCTCTTCCGATCT' and 'GTGACTGGAGTTCAGACGTGTGCTCTTCCGATCT' Using Long Clipping Sequence: 'GTCTCGTGGGCTCGGAGATGTGTATAAGAGACAG' Using Long Clipping Sequence: 'TCGTCGGCAGCGTCAGATGTGTATAAGAGACAG' Using Long Clipping Sequence: 'AGATCGGAAGAGCTCGTATGCCGTCTTCTGCTTG' Using Long Clipping Sequence: 'AGATCGGAAGAGCGTCGTGTAGGGAAAGAGTGTA' Skipping duplicate Clipping Sequence: 'AGATCGGAAGAGCGTCGTGTAGGGAAAGAGTGTA' Using Long Clipping Sequence: 'AGATCGGAAGAGCGTCGTGTAGGGAAAGAGTGTAGATCTCGGTGGTCGCCGTATCATT' Using Long Clipping Sequence: 'AGATCGGAAGAGCGGTTCAGCAGGAATGCCGAG' Using Long Clipping Sequence: 'AGATCGGAAGAGCGGTTCAGCAGGAATGCCGAGACCGATCTCGTATGCCGTCTTCTGCTTG' Using Long Clipping Sequence: 'TTTTTTTTTTAATGATACGGCGACCACCGAGATCTACAC' Using Long Clipping Sequence: 'AGATCGGAAGAGCACACGTCTGAACTCCAGTCAC' Using Long Clipping Sequence: 'AGATCGGAAGAGCGTCGTGTAGGGAAAGAGTGT' Using Long Clipping Sequence: 'TTTTTTTTTTCAAGCAGAAGACGGCATACGA' Skipping duplicate Clipping Sequence: 'AGATCGGAAGAGCACACGTCTGAACTCCAGTCAC' Using Long Clipping Sequence: 'GTGACTGGAGTTCAGACGTGTGCTCTTCCGATCT' Using Long Clipping Sequence: 'TACACTCTTTCCCTACACGACGCTCTTCCGATCT' Using Long Clipping Sequence: 'CTGTCTCTTATACACATCTCCGAGCCCACGAGAC' Using Long Clipping Sequence: 'CAAGCAGAAGACGGCATACGAGATCGGTCTCGGCATTCCTGCTGAACCGCTCTTCCGATCT' Using Long Clipping Sequence: 'CTGTCTCTTATACACATCTGACGCTGCCGACGA' Using Long Clipping Sequence: 'AATGATACGGCGACCACCGAGATCTACACTCTTTCCCTACACGACGCTCTTCCGATCT' ILLUMINACLIP: Using 2 prefix pairs, 17 forward/reverse sequences, 0 forward only sequences, 0 reverse only sequences Input Read Pairs: 944446 Both Surviving: 926206 (98.07%) Forward Only Surviving: 18220 (1.93%) Reverse Only Surviving: 20 (0.00%) Dropped: 0 (0.00%) TrimmomaticPE: Completed successfully TrimmomaticPE: Started with arguments: -threads 8 -phred33 /home/sam/Downloads/Agro_TORMES_2019/Raw_reads/NCPPB4042_R1.fastq.gz /home/sam/Downloads/Agro_TORMES_2019/Raw_reads/NCPPB4042_R2.fastq.gz /home/sam/Downloads/Agro_TORMES_2019/cleaned_reads/NCPPB4042.noadapt.R1.fastq.gz /dev/null /home/sam/Downloads/Agro_TORMES_2019/cleaned_reads/NCPPB4042.noadapt.R2.fastq.gz /dev/null ILLUMINACLIP:/home/sam/anaconda2/envs/tormes-1.0/bin/../files/adapters.fa:1:30:11 Using PrefixPair: 'AGATGTGTATAAGAGACAG' and 'AGATGTGTATAAGAGACAG' Using PrefixPair: 'TACACTCTTTCCCTACACGACGCTCTTCCGATCT' and 'GTGACTGGAGTTCAGACGTGTGCTCTTCCGATCT' Using Long Clipping Sequence: 'GTCTCGTGGGCTCGGAGATGTGTATAAGAGACAG' Using Long Clipping Sequence: 'TCGTCGGCAGCGTCAGATGTGTATAAGAGACAG' Using Long Clipping Sequence: 'AGATCGGAAGAGCTCGTATGCCGTCTTCTGCTTG' Using Long Clipping Sequence: 'AGATCGGAAGAGCGTCGTGTAGGGAAAGAGTGTA' Skipping duplicate Clipping Sequence: 'AGATCGGAAGAGCGTCGTGTAGGGAAAGAGTGTA' Using Long Clipping Sequence: 'AGATCGGAAGAGCGTCGTGTAGGGAAAGAGTGTAGATCTCGGTGGTCGCCGTATCATT' Using Long Clipping Sequence: 'AGATCGGAAGAGCGGTTCAGCAGGAATGCCGAG' Using Long Clipping Sequence: 'AGATCGGAAGAGCGGTTCAGCAGGAATGCCGAGACCGATCTCGTATGCCGTCTTCTGCTTG' Using Long 
Clipping Sequence: 'TTTTTTTTTTAATGATACGGCGACCACCGAGATCTACAC' Using Long Clipping Sequence: 'AGATCGGAAGAGCACACGTCTGAACTCCAGTCAC' Using Long Clipping Sequence: 'AGATCGGAAGAGCGTCGTGTAGGGAAAGAGTGT' Using Long Clipping Sequence: 'TTTTTTTTTTCAAGCAGAAGACGGCATACGA' Skipping duplicate Clipping Sequence: 'AGATCGGAAGAGCACACGTCTGAACTCCAGTCAC' Using Long Clipping Sequence: 'GTGACTGGAGTTCAGACGTGTGCTCTTCCGATCT' Using Long Clipping Sequence: 'TACACTCTTTCCCTACACGACGCTCTTCCGATCT' Using Long Clipping Sequence: 'CTGTCTCTTATACACATCTCCGAGCCCACGAGAC' Using Long Clipping Sequence: 'CAAGCAGAAGACGGCATACGAGATCGGTCTCGGCATTCCTGCTGAACCGCTCTTCCGATCT' Using Long Clipping Sequence: 'CTGTCTCTTATACACATCTGACGCTGCCGACGA' Using Long Clipping Sequence: 'AATGATACGGCGACCACCGAGATCTACACTCTTTCCCTACACGACGCTCTTCCGATCT' ILLUMINACLIP: Using 2 prefix pairs, 17 forward/reverse sequences, 0 forward only sequences, 0 reverse only sequences Input Read Pairs: 799997 Both Surviving: 785256 (98.16%) Forward Only Surviving: 14733 (1.84%) Reverse Only Surviving: 8 (0.00%) Dropped: 0 (0.00%) TrimmomaticPE: Completed successfully Estimate size of input data for status report (this might take a while for large files) done Parse and process input data
status: 0 % ... status: 99 %
done
Clean up empty files done Input and filter stats: Input sequences (file 1): 785,256 Input bases (file 1): 169,407,390 Input mean length (file 1): 215.74 Input sequences (file 2): 785,256 Input bases (file 2): 169,491,396 Input mean length (file 2): 215.84 Good sequences (pairs): 678,006 Good bases (pairs): 308,153,093 Good mean length (pairs): 454.50 Good sequences (singletons file 1): 24,483 (3.12%) Good bases (singletons file 1): 5,817,390 Good mean length (singletons file 1): 237.61 Good sequences (singletons file 2): 7,896 (1.01%) Good bases (singletons file 2): 1,792,413 Good mean length (singletons file 2): 227.00 Bad sequences (file 1): 82,767 (10.54%) Bad bases (file 1): 9,515,500 Bad mean length (file 1): 114.97 Bad sequences (file 2): 24,483 (3.12%) Bad bases (file 2): 5,893,050 Bad mean length (file 2): 240.70 Sequences filtered by specified parameters: trim_qual_right: 2227 min_len: 165445 min_qual_mean: 14449 Estimate size of input data for status report (this might take a while for large files) done Parse and process input data
status: 0 % ... status: 99 %
done
Clean up empty files done Input and filter stats: Input sequences (file 1): 926,206 Input bases (file 1): 188,186,385 Input mean length (file 1): 203.18 Input sequences (file 2): 926,206 Input bases (file 2): 188,344,981 Input mean length (file 2): 203.35 Good sequences (pairs): 759,119 Good bases (pairs): 334,124,010 Good mean length (pairs): 440.15 Good sequences (singletons file 1): 27,063 (2.92%) Good bases (singletons file 1): 6,309,229 Good mean length (singletons file 1): 233.13 Good sequences (singletons file 2): 8,671 (0.94%) Good bases (singletons file 2): 1,909,128 Good mean length (singletons file 2): 220.17 Bad sequences (file 1): 140,024 (15.12%) Bad bases (file 1): 14,819,401 Bad mean length (file 1): 105.83 Bad sequences (file 2): 27,063 (2.92%) Bad bases (file 2): 6,419,905 Bad mean length (file 2): 237.22 Sequences filtered by specified parameters: trim_qual_right: 3509 min_len: 279165 min_qual_mean: 15766 Command line: /home/sam/anaconda2/envs/tormes-1.0/bin/spades.py --careful -1 /home/sam/Downloads/Agro_TORMES_2019/cleaned_reads/MAFF210265.ok_1.fastq.gz -2 /home/sam/Downloads/Agro_TORMES_2019/cleaned_reads/MAFF210265.ok_2.fastq.gz -o /home/sam/Downloads/Agro_TORMES_2019/assembly/MAFF210265_assembly -t 8
System information: SPAdes version: 3.13.0 Python version: 3.5.6 OS: Linux-4.4.0-21-generic-x86_64-with-debian-stretch-sid
Output dir: /home/sam/Downloads/Agro_TORMES_2019/assembly/MAFF210265_assembly Mode: read error correction and assembling Debug mode is turned OFF
Dataset parameters: Multi-cell mode (you should set '--sc' flag if input data was obtained with MDA (single-cell) technology or --meta flag if processing metagenomic dataset) Reads: Library number: 1, library type: paired-end orientation: fr left reads: ['/home/sam/Downloads/Agro_TORMES_2019/cleaned_reads/MAFF210265.ok_1.fastq.gz'] right reads: ['/home/sam/Downloads/Agro_TORMES_2019/cleaned_reads/MAFF210265.ok_2.fastq.gz'] interlaced reads: not specified single reads: not specified merged reads: not specified Read error correction parameters: Iterations: 1 PHRED offset will be auto-detected Corrected reads will be compressed Assembly parameters: k: automatic selection based on read length Repeat resolution is enabled Mismatch careful mode is turned ON MismatchCorrector will be used Coverage cutoff is turned OFF Other parameters: Dir for temp files: /home/sam/Downloads/Agro_TORMES_2019/assembly/MAFF210265_assembly/tmp Threads: 8 Memory limit (in Gb): 15
======= SPAdes pipeline started. Log can be found here: /home/sam/Downloads/Agro_TORMES_2019/assembly/MAFF210265_assembly/spades.log
===== Read error correction started.
== Running read error correction tool: /home/sam/anaconda2/envs/tormes-1.0/share/spades-3.13.0-0/bin/spades-hammer /home/sam/Downloads/Agro_TORMES_2019/assembly/MAFF210265_assembly/corrected/configs/config.info
0:00:00.008 4M / 4M INFO General (main.cpp : 75) Starting BayesHammer, built from refs/heads/spades_3.13.0, git revision 8ea46659e9b2aca35444a808db550ac333006f8b 0:00:00.008 4M / 4M INFO General (main.cpp : 76) Loading config from /home/sam/Downloads/Agro_TORMES_2019/assembly/MAFF210265_assembly/corrected/configs/config.info 0:00:00.009 4M / 4M INFO General (main.cpp : 78) Maximum # of threads to use (adjusted due to OMP capabilities): 8 0:00:00.010 4M / 4M INFO General (memory_limit.cpp : 49) Memory limit set to 15 Gb 0:00:00.010 4M / 4M INFO General (main.cpp : 86) Trying to determine PHRED offset 0:00:00.010 4M / 4M INFO General (main.cpp : 92) Determined value is 33 0:00:00.010 4M / 4M INFO General (hammer_tools.cpp : 36) Hamming graph threshold tau=1, k=21, subkmer positions = [ 0 10 ] 0:00:00.010 4M / 4M INFO General (main.cpp : 113) Size of aux. kmer data 24 bytes === ITERATION 0 begins === 0:00:00.011 4M / 4M INFO K-mer Index Building (kmer_index_builder.hpp : 301) Building kmer index 0:00:00.011 4M / 4M INFO General (kmer_index_builder.hpp : 117) Splitting kmer instances into 128 files using 8 threads. This might take a while. 0:00:00.011 4M / 4M INFO General (file_limit.hpp : 32) Open file limit set to 1024 0:00:00.011 4M / 4M INFO General (kmer_splitters.hpp : 89) Memory available for splitting buffers: 0.624837 Gb 0:00:00.011 4M / 4M INFO General (kmer_splitters.hpp : 97) Using cell size of 524288 0:00:00.013 8G / 8G INFO K-mer Splitting (kmer_data.cpp : 97) Processing /home/sam/Downloads/Agro_TORMES_2019/cleaned_reads/MAFF210265.ok_1.fastq.gz 0:00:10.094 8G / 8G INFO K-mer Splitting (kmer_data.cpp : 107) Processed 759119 reads 0:00:10.094 8G / 8G INFO K-mer Splitting (kmer_data.cpp : 97) Processing /home/sam/Downloads/Agro_TORMES_2019/cleaned_reads/MAFF210265.ok_2.fastq.gz 0:00:19.855 8G / 8G INFO K-mer Splitting (kmer_data.cpp : 107) Processed 1518238 reads 0:00:19.855 8G / 8G INFO K-mer Splitting (kmer_data.cpp : 112) Total 1518238 reads processed 0:00:19.886 32M / 8G INFO General (kmer_index_builder.hpp : 120) Starting k-mer counting. 0:00:20.154 32M / 8G INFO General (kmer_index_builder.hpp : 127) K-mer counting done. There are 21553504 kmers in total. 0:00:20.154 32M / 8G INFO General (kmer_index_builder.hpp : 133) Merging temporary buckets. 0:00:20.241 32M / 8G INFO K-mer Index Building (kmer_index_builder.hpp : 314) Building perfect hash indices 0:00:20.922 32M / 8G INFO General (kmer_index_builder.hpp : 150) Merging final buckets. 0:00:21.858 32M / 8G INFO K-mer Index Building (kmer_index_builder.hpp : 336) Index built. Total 10002592 bytes occupied (3.71266 bits per kmer). 0:00:21.859 32M / 8G INFO K-mer Counting (kmer_data.cpp : 356) Arranging kmers in hash map order 0:00:22.918 368M / 8G INFO General (main.cpp : 148) Clustering Hamming graph. 0:00:46.569 368M / 8G INFO General (main.cpp : 155) Extracting clusters 0:00:49.731 368M / 8G INFO General (main.cpp : 167) Clustering done. Total clusters: 12640200 0:00:49.732 200M / 8G INFO K-mer Counting (kmer_data.cpp : 376) Collecting K-mer information, this takes a while. 0:00:49.788 696M / 8G INFO K-mer Counting (kmer_data.cpp : 382) Processing /home/sam/Downloads/Agro_TORMES_2019/cleaned_reads/MAFF210265.ok_1.fastq.gz 0:01:14.308 696M / 8G INFO K-mer Counting (kmer_data.cpp : 382) Processing /home/sam/Downloads/Agro_TORMES_2019/cleaned_reads/MAFF210265.ok_2.fastq.gz 0:01:38.182 696M / 8G INFO K-mer Counting (kmer_data.cpp : 389) Collection done, postprocessing. 
0:01:38.229 696M / 8G INFO K-mer Counting (kmer_data.cpp : 403) There are 21553504 kmers in total. Among them 9470578 (43.9399%) are singletons. 0:01:38.229 696M / 8G INFO General (main.cpp : 173) Subclustering Hamming graph 0:01:43.072 696M / 8G INFO Hamming Subclustering (kmer_cluster.cpp : 649) Subclustering done. Total 30 non-read kmers were generated. 0:01:43.072 696M / 8G INFO Hamming Subclustering (kmer_cluster.cpp : 650) Subclustering statistics: 0:01:43.072 696M / 8G INFO Hamming Subclustering (kmer_cluster.cpp : 651) Total singleton hamming clusters: 7062084. Among them 5335372 (75.5495%) are good 0:01:43.072 696M / 8G INFO Hamming Subclustering (kmer_cluster.cpp : 652) Total singleton subclusters: 54510. Among them 54386 (99.7725%) are good 0:01:43.072 696M / 8G INFO Hamming Subclustering (kmer_cluster.cpp : 653) Total non-singleton subcluster centers: 5613259. Among them 3092002 (55.0839%) are good 0:01:43.072 696M / 8G INFO Hamming Subclustering (kmer_cluster.cpp : 654) Average size of non-trivial subcluster: 2.58165 kmers 0:01:43.072 696M / 8G INFO Hamming Subclustering (kmer_cluster.cpp : 655) Average number of sub-clusters per non-singleton cluster: 1.01607 0:01:43.072 696M / 8G INFO Hamming Subclustering (kmer_cluster.cpp : 656) Total solid k-mers: 8481760 0:01:43.072 696M / 8G INFO Hamming Subclustering (kmer_cluster.cpp : 657) Substitution probabilities: 4,4 0:01:43.087 696M / 8G INFO General (main.cpp : 178) Finished clustering. 0:01:43.087 696M / 8G INFO General (main.cpp : 197) Starting solid k-mers expansion in 8 threads. 0:01:57.984 696M / 8G INFO General (main.cpp : 218) Solid k-mers iteration 0 produced 74848 new k-mers. 0:02:13.676 696M / 8G INFO General (main.cpp : 218) Solid k-mers iteration 1 produced 983 new k-mers. 0:02:27.690 696M / 8G INFO General (main.cpp : 218) Solid k-mers iteration 2 produced 0 new k-mers. 0:02:27.690 696M / 8G INFO General (main.cpp : 222) Solid k-mers finalized 0:02:27.690 696M / 8G INFO General (hammer_tools.cpp : 220) Starting read correction in 8 threads. 0:02:27.691 696M / 8G INFO General (hammer_tools.cpp : 233) Correcting pair of reads: /home/sam/Downloads/Agro_TORMES_2019/cleaned_reads/MAFF210265.ok_1.fastq.gz and /home/sam/Downloads/Agro_TORMES_2019/cleaned_reads/MAFF210265.ok_2.fastq.gz 0:02:34.714 1G / 8G INFO General (hammer_tools.cpp : 168) Prepared batch 0 of 759119 reads. 0:02:47.343 1G / 8G INFO General (hammer_tools.cpp : 175) Processed batch 0 0:02:48.468 1G / 8G INFO General (hammer_tools.cpp : 185) Written batch 0 0:02:48.827 696M / 8G INFO General (hammer_tools.cpp : 274) Correction done. Changed 402005 bases in 226062 reads. 0:02:48.827 696M / 8G INFO General (hammer_tools.cpp : 275) Failed to correct 0 bases out of 329742561. 0:02:48.828 32M / 8G INFO General (main.cpp : 255) Saving corrected dataset description to /home/sam/Downloads/Agro_TORMES_2019/assembly/MAFF210265_assembly/corrected/corrected.yaml 0:02:48.828 32M / 8G INFO General (main.cpp : 262) All done. Exiting.
== Compressing corrected reads (with gzip)
== Dataset description file was created: /home/sam/Downloads/Agro_TORMES_2019/assembly/MAFF210265_assembly/corrected/corrected.yaml
===== Read error correction finished.
===== Assembling started.
== Running assembler: K21
0:00:00.000 4M / 4M INFO General (main.cpp : 74) Loaded config from /home/sam/Downloads/Agro_TORMES_2019/assembly/MAFF210265_assembly/K21/configs/config.info 0:00:00.000 4M / 4M INFO General (main.cpp : 74) Loaded config from /home/sam/Downloads/Agro_TORMES_2019/assembly/MAFF210265_assembly/K21/configs/careful_mode.info 0:00:00.000 4M / 4M INFO General (memory_limit.cpp : 49) Memory limit set to 15 Gb 0:00:00.000 4M / 4M INFO General (main.cpp : 87) Starting SPAdes, built from refs/heads/spades_3.13.0, git revision 8ea46659e9b2aca35444a808db550ac333006f8b 0:00:00.000 4M / 4M INFO General (main.cpp : 88) Maximum k-mer length: 128 0:00:00.000 4M / 4M INFO General (main.cpp : 89) Assembling dataset (/home/sam/Downloads/Agro_TORMES_2019/assembly/MAFF210265_assembly/dataset.info) with K=21 0:00:00.000 4M / 4M INFO General (main.cpp : 90) Maximum # of threads to use (adjusted due to OMP capabilities): 8 0:00:00.000 4M / 4M INFO General (launch.hpp : 51) SPAdes started 0:00:00.000 4M / 4M INFO General (launch.hpp : 58) Starting from stage: construction 0:00:00.000 4M / 4M INFO General (launch.hpp : 65) Two-step RR enabled: 0 0:00:00.000 4M / 4M INFO StageManager (stage.cpp : 132) STAGE == de Bruijn graph construction 0:00:00.008 4M / 4M INFO General (read_converter.hpp : 77) Converting reads to binary format for library #0 (takes a while) 0:00:00.008 4M / 4M INFO General (read_converter.hpp : 78) Converting paired reads 0:00:00.338 88M / 88M INFO General (binary_converter.hpp : 93) 16384 reads processed 0:00:00.574 108M / 108M INFO General (binary_converter.hpp : 93) 32768 reads processed 0:00:01.001 148M / 148M INFO General (binary_converter.hpp : 93) 65536 reads processed 0:00:01.855 228M / 228M INFO General (binary_converter.hpp : 93) 131072 reads processed 0:00:03.557 392M / 392M INFO General (binary_converter.hpp : 93) 262144 reads processed 0:00:06.951 720M / 720M INFO General (binary_converter.hpp : 93) 524288 reads processed 0:00:12.747 1012M / 1012M INFO General (binary_converter.hpp : 117) 759086 reads written 0:00:13.060 4M / 1012M INFO General (read_converter.hpp : 87) Converting single reads 0:00:13.106 132M / 1012M INFO General (binary_converter.hpp : 117) 28 reads written 0:00:13.107 4M / 1012M INFO General (read_converter.hpp : 95) Converting merged reads 0:00:13.153 132M / 1012M INFO General (binary_converter.hpp : 117) 0 reads written 0:00:13.154 4M / 1012M INFO General (construction.cpp : 111) Max read length 251 0:00:13.154 4M / 1012M INFO General (construction.cpp : 117) Average read length 217.188 0:00:13.154 4M / 1012M INFO General (stage.cpp : 101) PROCEDURE == k+1-mer counting 0:00:13.163 4M / 1012M INFO General (kmer_index_builder.hpp : 117) Splitting kmer instances into 64 files using 8 threads. This might take a while. 0:00:13.171 4M / 1012M INFO General (file_limit.hpp : 32) Open file limit set to 1024 0:00:13.171 4M / 1012M INFO General (kmer_splitters.hpp : 89) Memory available for splitting buffers: 0.624837 Gb 0:00:13.171 4M / 1012M INFO General (kmer_splitters.hpp : 97) Using cell size of 1048576 0:00:22.256 6G / 6G INFO General (kmer_splitters.hpp : 289) Processed 3036400 reads 0:00:22.256 6G / 6G INFO General (kmer_splitters.hpp : 295) Adding contigs from previous K 0:00:22.307 32M / 6G INFO General (kmer_splitters.hpp : 308) Used 3036400 reads 0:00:22.307 32M / 6G INFO General (kmer_index_builder.hpp : 120) Starting k-mer counting. 0:00:22.401 32M / 6G INFO General (kmer_index_builder.hpp : 127) K-mer counting done. There are 5968127 kmers in total. 
0:00:22.401 32M / 6G INFO General (kmer_index_builder.hpp : 133) Merging temporary buckets. 0:00:22.434 32M / 6G INFO General (stage.cpp : 101) PROCEDURE == Extension index construction 0:00:22.434 32M / 6G INFO K-mer Index Building (kmer_index_builder.hpp : 301) Building kmer index 0:00:22.434 32M / 6G INFO General (kmer_index_builder.hpp : 117) Splitting kmer instances into 128 files using 8 threads. This might take a while. 0:00:22.434 32M / 6G INFO General (file_limit.hpp : 32) Open file limit set to 1024 0:00:22.434 32M / 6G INFO General (kmer_splitters.hpp : 89) Memory available for splitting buffers: 0.623698 Gb 0:00:22.434 32M / 6G INFO General (kmer_splitters.hpp : 97) Using cell size of 524288 0:00:23.146 8G / 8G INFO General (kmer_splitters.hpp : 380) Processed 5968127 kmers 0:00:23.147 8G / 8G INFO General (kmer_splitters.hpp : 385) Used 5968127 kmers. 0:00:23.156 32M / 8G INFO General (kmer_index_builder.hpp : 120) Starting k-mer counting. 0:00:23.231 32M / 8G INFO General (kmer_index_builder.hpp : 127) K-mer counting done. There are 5949472 kmers in total. 0:00:23.231 32M / 8G INFO General (kmer_index_builder.hpp : 133) Merging temporary buckets. 0:00:23.261 32M / 8G INFO K-mer Index Building (kmer_index_builder.hpp : 314) Building perfect hash indices 0:00:23.500 32M / 8G INFO General (kmer_index_builder.hpp : 150) Merging final buckets. 0:00:23.529 32M / 8G INFO K-mer Index Building (kmer_index_builder.hpp : 336) Index built. Total 2766976 bytes occupied (3.72063 bits per kmer). 0:00:23.531 40M / 8G INFO DeBruijnExtensionIndexBu (kmer_extension_index_build: 99) Building k-mer extensions from k+1-mers 0:00:23.944 40M / 8G INFO DeBruijnExtensionIndexBu (kmer_extension_index_build: 103) Building k-mer extensions from k+1-mers finished. 0:00:23.944 40M / 8G INFO General (stage.cpp : 101) PROCEDURE == Early tip clipping 0:00:23.944 40M / 8G INFO General (construction.cpp : 253) Early tip clipper length bound set as (RL - K) 0:00:23.944 40M / 8G INFO Early tip clipping (early_simplification.hpp : 181) Early tip clipping 0:00:25.033 40M / 8G INFO Early tip clipping (early_simplification.hpp : 184) 139269 22-mers were removed by early tip clipper 0:00:25.033 40M / 8G INFO General (stage.cpp : 101) PROCEDURE == Condensing graph 0:00:25.051 40M / 8G INFO UnbranchingPathExtractor (debruijn_graph_constructor: 355) Extracting unbranching paths 0:00:25.629 40M / 8G INFO UnbranchingPathExtractor (debruijn_graph_constructor: 374) Extracting unbranching paths finished. 57145 sequences extracted 0:00:25.947 40M / 8G INFO UnbranchingPathExtractor (debruijn_graph_constructor: 310) Collecting perfect loops 0:00:26.076 40M / 8G INFO UnbranchingPathExtractor (debruijn_graph_constructor: 343) Collecting perfect loops finished. 1 loops collected 0:00:26.181 44M / 8G INFO General (stage.cpp : 101) PROCEDURE == Filling coverage indices (PHM) 0:00:26.181 44M / 8G INFO K-mer Index Building (kmer_index_builder.hpp : 301) Building kmer index 0:00:26.181 44M / 8G INFO K-mer Index Building (kmer_index_builder.hpp : 314) Building perfect hash indices 0:00:26.415 44M / 8G INFO K-mer Index Building (kmer_index_builder.hpp : 336) Index built. Total 2771504 bytes occupied (3.71507 bits per kmer). 0:00:26.418 68M / 8G INFO General (construction.cpp : 388) Collecting k-mer coverage information from reads, this takes a while. 
0:00:35.298 68M / 8G INFO General (construction.cpp : 508) Filling coverage and flanking coverage from PHM 0:00:35.623 68M / 8G INFO General (construction.cpp : 464) Processed 114001 edges 0:00:35.653 36M / 8G INFO StageManager (stage.cpp : 132) STAGE == EC Threshold Finding 0:00:35.656 36M / 8G INFO General (kmer_coverage_model.cpp : 181) Kmer coverage valley at: 4 0:00:35.656 36M / 8G INFO General (kmer_coverage_model.cpp : 201) K-mer histogram maximum: 48 0:00:35.656 36M / 8G INFO General (kmer_coverage_model.cpp : 237) Estimated median coverage: 53. Coverage mad: 19.2738 0:00:35.656 36M / 8G INFO General (kmer_coverage_model.cpp : 259) Fitting coverage model 0:00:35.808 36M / 8G INFO General (kmer_coverage_model.cpp : 295) ... iteration 2 0:00:35.946 36M / 8G INFO General (kmer_coverage_model.cpp : 295) ... iteration 4 0:00:36.479 36M / 8G INFO General (kmer_coverage_model.cpp : 295) ... iteration 8 0:00:37.697 36M / 8G INFO General (kmer_coverage_model.cpp : 295) ... iteration 16 0:00:37.812 36M / 8G INFO General (kmer_coverage_model.cpp : 309) Fitted mean coverage: 54.3022. Fitted coverage std. dev: 17.6107 0:00:37.813 36M / 8G INFO General (kmer_coverage_model.cpp : 334) Probability of erroneous kmer at valley: 0.719005 0:00:37.813 36M / 8G INFO General (kmer_coverage_model.cpp : 358) Preliminary threshold calculated as: 8 0:00:37.813 36M / 8G INFO General (kmer_coverage_model.cpp : 362) Threshold adjusted to: 8 0:00:37.813 36M / 8G INFO General (kmer_coverage_model.cpp : 375) Estimated genome size (ignoring repeats): 5241190 0:00:37.813 36M / 8G INFO General (genomic_info_filler.cpp : 112) Mean coverage was calculated as 54.3022 0:00:37.813 36M / 8G INFO General (genomic_info_filler.cpp : 127) EC coverage threshold value was calculated as 8 0:00:37.813 36M / 8G INFO General (genomic_info_filler.cpp : 128) Trusted kmer low bound: 0 0:00:37.813 36M / 8G INFO StageManager (stage.cpp : 132) STAGE == Raw Simplification 0:00:38.412 36M / 8G INFO General (simplification.cpp : 128) PROCEDURE == InitialCleaning 0:00:38.481 36M / 8G INFO General (graph_simplification.hpp : 662) Flanking coverage based disconnection disabled 0:00:38.481 36M / 8G INFO Simplification (parallel_processing.hpp : 165) Running Self conjugate edge remover 0:00:38.493 36M / 8G INFO Simplification (parallel_processing.hpp : 167) Self conjugate edge remover triggered 0 times 0:00:38.493 36M / 8G INFO StageManager (stage.cpp : 132) STAGE == Simplification 0:00:38.494 36M / 8G INFO General (simplification.cpp : 357) Graph simplification started 0:00:38.494 36M / 8G INFO General (graph_simplification.hpp : 634) Creating parallel br instance 0:00:38.494 36M / 8G INFO General (simplification.cpp : 362) PROCEDURE == Simplification cycle, iteration 1 0:00:38.494 36M / 8G INFO Simplification (parallel_processing.hpp : 165) Running Tip clipper 0:00:38.517 36M / 8G INFO Simplification (parallel_processing.hpp : 167) Tip clipper triggered 484 times 0:00:38.517 36M / 8G INFO Simplification (parallel_processing.hpp : 165) Running Bulge remover 0:00:38.872 36M / 8G INFO Simplification (parallel_processing.hpp : 167) Bulge remover triggered 7304 times 0:00:38.872 36M / 8G INFO Simplification (parallel_processing.hpp : 165) Running Low coverage edge remover 0:00:38.892 36M / 8G INFO Simplification (parallel_processing.hpp : 167) Low coverage edge remover triggered 0 times 0:00:38.892 36M / 8G INFO General (simplification.cpp : 362) PROCEDURE == Simplification cycle, iteration 2 0:00:38.892 36M / 8G INFO Simplification 
(parallel_processing.hpp : 165) Running Tip clipper 0:00:38.894 36M / 8G INFO Simplification (parallel_processing.hpp : 167) Tip clipper triggered 9 times 0:00:38.894 36M / 8G INFO Simplification (parallel_processing.hpp : 165) Running Bulge remover 0:00:38.894 36M / 8G INFO Simplification (parallel_processing.hpp : 167) Bulge remover triggered 0 times 0:00:38.894 36M / 8G INFO Simplification (parallel_processing.hpp : 165) Running Low coverage edge remover 0:00:39.021 36M / 8G INFO Simplification (parallel_processing.hpp : 167) Low coverage edge remover triggered 7210 times 0:00:39.021 36M / 8G INFO General (simplification.cpp : 362) PROCEDURE == Simplification cycle, iteration 3 0:00:39.021 36M / 8G INFO Simplification (parallel_processing.hpp : 165) Running Tip clipper 0:00:39.022 36M / 8G INFO Simplification (parallel_processing.hpp : 167) Tip clipper triggered 26 times 0:00:39.022 36M / 8G INFO Simplification (parallel_processing.hpp : 165) Running Bulge remover 0:00:39.031 36M / 8G INFO Simplification (parallel_processing.hpp : 167) Bulge remover triggered 61 times 0:00:39.031 36M / 8G INFO Simplification (parallel_processing.hpp : 165) Running Low coverage edge remover 0:00:39.097 36M / 8G INFO Simplification (parallel_processing.hpp : 167) Low coverage edge remover triggered 2446 times 0:00:39.097 36M / 8G INFO General (simplification.cpp : 362) PROCEDURE == Simplification cycle, iteration 4 0:00:39.097 36M / 8G INFO Simplification (parallel_processing.hpp : 165) Running Tip clipper 0:00:39.098 36M / 8G INFO Simplification (parallel_processing.hpp : 167) Tip clipper triggered 1 times 0:00:39.098 36M / 8G INFO Simplification (parallel_processing.hpp : 165) Running Bulge remover 0:00:39.101 36M / 8G INFO Simplification (parallel_processing.hpp : 167) Bulge remover triggered 24 times 0:00:39.101 36M / 8G INFO Simplification (parallel_processing.hpp : 165) Running Low coverage edge remover 0:00:39.116 36M / 8G INFO Simplification (parallel_processing.hpp : 167) Low coverage edge remover triggered 424 times 0:00:39.116 36M / 8G INFO General (simplification.cpp : 362) PROCEDURE == Simplification cycle, iteration 5 0:00:39.116 36M / 8G INFO Simplification (parallel_processing.hpp : 165) Running Tip clipper 0:00:39.116 36M / 8G INFO Simplification (parallel_processing.hpp : 167) Tip clipper triggered 2 times 0:00:39.116 36M / 8G INFO Simplification (parallel_processing.hpp : 165) Running Bulge remover 0:00:39.117 36M / 8G INFO Simplification (parallel_processing.hpp : 167) Bulge remover triggered 3 times 0:00:39.117 36M / 8G INFO Simplification (parallel_processing.hpp : 165) Running Low coverage edge remover 0:00:39.129 36M / 8G INFO Simplification (parallel_processing.hpp : 167) Low coverage edge remover triggered 288 times 0:00:39.129 36M / 8G INFO General (simplification.cpp : 362) PROCEDURE == Simplification cycle, iteration 6 0:00:39.129 36M / 8G INFO Simplification (parallel_processing.hpp : 165) Running Tip clipper 0:00:39.129 36M / 8G INFO Simplification (parallel_processing.hpp : 167) Tip clipper triggered 0 times 0:00:39.129 36M / 8G INFO Simplification (parallel_processing.hpp : 165) Running Bulge remover 0:00:39.130 36M / 8G INFO Simplification (parallel_processing.hpp : 167) Bulge remover triggered 5 times 0:00:39.130 36M / 8G INFO Simplification (parallel_processing.hpp : 165) Running Low coverage edge remover 0:00:39.132 36M / 8G INFO Simplification (parallel_processing.hpp : 167) Low coverage edge remover triggered 54 times 0:00:39.132 36M / 8G INFO General 
(simplification.cpp : 362) PROCEDURE == Simplification cycle, iteration 7 0:00:39.132 36M / 8G INFO Simplification (parallel_processing.hpp : 165) Running Tip clipper 0:00:39.132 36M / 8G INFO Simplification (parallel_processing.hpp : 167) Tip clipper triggered 0 times 0:00:39.132 36M / 8G INFO Simplification (parallel_processing.hpp : 165) Running Bulge remover 0:00:39.133 36M / 8G INFO Simplification (parallel_processing.hpp : 167) Bulge remover triggered 2 times 0:00:39.133 36M / 8G INFO Simplification (parallel_processing.hpp : 165) Running Low coverage edge remover 0:00:39.135 36M / 8G INFO Simplification (parallel_processing.hpp : 167) Low coverage edge remover triggered 54 times 0:00:39.135 36M / 8G INFO General (simplification.cpp : 362) PROCEDURE == Simplification cycle, iteration 8 0:00:39.135 36M / 8G INFO Simplification (parallel_processing.hpp : 165) Running Tip clipper 0:00:39.135 36M / 8G INFO Simplification (parallel_processing.hpp : 167) Tip clipper triggered 0 times 0:00:39.135 36M / 8G INFO Simplification (parallel_processing.hpp : 165) Running Bulge remover 0:00:39.136 36M / 8G INFO Simplification (parallel_processing.hpp : 167) Bulge remover triggered 1 times 0:00:39.136 36M / 8G INFO Simplification (parallel_processing.hpp : 165) Running Low coverage edge remover 0:00:39.138 36M / 8G INFO Simplification (parallel_processing.hpp : 167) Low coverage edge remover triggered 53 times 0:00:39.138 36M / 8G INFO General (simplification.cpp : 362) PROCEDURE == Simplification cycle, iteration 9 0:00:39.138 36M / 8G INFO Simplification (parallel_processing.hpp : 165) Running Tip clipper 0:00:39.138 36M / 8G INFO Simplification (parallel_processing.hpp : 167) Tip clipper triggered 1 times 0:00:39.138 36M / 8G INFO Simplification (parallel_processing.hpp : 165) Running Bulge remover 0:00:39.138 36M / 8G INFO Simplification (parallel_processing.hpp : 167) Bulge remover triggered 0 times 0:00:39.138 36M / 8G INFO Simplification (parallel_processing.hpp : 165) Running Low coverage edge remover 0:00:39.139 36M / 8G INFO Simplification (parallel_processing.hpp : 167) Low coverage edge remover triggered 20 times 0:00:39.139 36M / 8G INFO General (simplification.cpp : 362) PROCEDURE == Simplification cycle, iteration 10 0:00:39.139 36M / 8G INFO Simplification (parallel_processing.hpp : 165) Running Tip clipper 0:00:39.139 36M / 8G INFO Simplification (parallel_processing.hpp : 167) Tip clipper triggered 0 times 0:00:39.139 36M / 8G INFO Simplification (parallel_processing.hpp : 165) Running Bulge remover 0:00:39.139 36M / 8G INFO Simplification (parallel_processing.hpp : 167) Bulge remover triggered 0 times 0:00:39.139 36M / 8G INFO Simplification (parallel_processing.hpp : 165) Running Low coverage edge remover 0:00:39.140 36M / 8G INFO Simplification (parallel_processing.hpp : 167) Low coverage edge remover triggered 17 times 0:00:39.140 36M / 8G INFO General (simplification.cpp : 362) PROCEDURE == Simplification cycle, iteration 11 0:00:39.140 36M / 8G INFO Simplification (parallel_processing.hpp : 165) Running Tip clipper 0:00:39.153 36M / 8G INFO Simplification (parallel_processing.hpp : 167) Tip clipper triggered 1 times 0:00:39.153 36M / 8G INFO Simplification (parallel_processing.hpp : 165) Running Bulge remover 0:00:39.183 36M / 8G INFO Simplification (parallel_processing.hpp : 167) Bulge remover triggered 1 times 0:00:39.183 36M / 8G INFO Simplification (parallel_processing.hpp : 165) Running Low coverage edge remover 0:00:39.202 36M / 8G INFO Simplification 
(parallel_processing.hpp : 167) Low coverage edge remover triggered 0 times 0:00:39.202 36M / 8G INFO General (simplification.cpp : 362) PROCEDURE == Simplification cycle, iteration 12 0:00:39.202 36M / 8G INFO Simplification (parallel_processing.hpp : 165) Running Tip clipper 0:00:39.202 36M / 8G INFO Simplification (parallel_processing.hpp : 167) Tip clipper triggered 0 times 0:00:39.202 36M / 8G INFO Simplification (parallel_processing.hpp : 165) Running Bulge remover 0:00:39.203 36M / 8G INFO Simplification (parallel_processing.hpp : 167) Bulge remover triggered 0 times 0:00:39.203 36M / 8G INFO Simplification (parallel_processing.hpp : 165) Running Low coverage edge remover 0:00:39.203 36M / 8G INFO Simplification (parallel_processing.hpp : 167) Low coverage edge remover triggered 0 times 0:00:39.203 36M / 8G INFO StageManager (stage.cpp : 132) STAGE == Simplification Cleanup 0:00:39.203 36M / 8G INFO General (simplification.cpp : 196) PROCEDURE == Post simplification 0:00:39.203 36M / 8G INFO General (graph_simplification.hpp : 453) Disconnection of relatively low covered edges disabled 0:00:39.203 36M / 8G INFO General (graph_simplification.hpp : 489) Complex tip clipping disabled 0:00:39.203 36M / 8G INFO General (graph_simplification.hpp : 634) Creating parallel br instance 0:00:39.203 36M / 8G INFO Simplification (parallel_processing.hpp : 165) Running Tip clipper 0:00:39.227 36M / 8G INFO Simplification (parallel_processing.hpp : 167) Tip clipper triggered 0 times 0:00:39.227 36M / 8G INFO Simplification (parallel_processing.hpp : 165) Running Bulge remover 0:00:39.256 36M / 8G INFO Simplification (parallel_processing.hpp : 167) Bulge remover triggered 0 times 0:00:39.256 36M / 8G INFO Simplification (parallel_processing.hpp : 165) Running Tip clipper 0:00:39.275 36M / 8G INFO Simplification (parallel_processing.hpp : 167) Tip clipper triggered 0 times 0:00:39.275 36M / 8G INFO Simplification (parallel_processing.hpp : 165) Running Bulge remover 0:00:39.304 36M / 8G INFO Simplification (parallel_processing.hpp : 167) Bulge remover triggered 0 times 0:00:39.304 36M / 8G INFO General (simplification.cpp : 330) Disrupting self-conjugate edges 0:00:39.306 36M / 8G INFO Simplification (parallel_processing.hpp : 165) Running Removing isolated edges 0:00:39.324 36M / 8G INFO Simplification (parallel_processing.hpp : 167) Removing isolated edges triggered 517 times 0:00:39.325 36M / 8G INFO General (simplification.cpp : 470) Counting average coverage 0:00:39.326 36M / 8G INFO General (simplification.cpp : 476) Average coverage = 56.2191 0:00:39.326 36M / 8G INFO StageManager (stage.cpp : 132) STAGE == Contig Output 0:00:39.342 36M / 8G INFO General (contig_output_stage.cpp : 40) Writing GFA to /home/sam/Downloads/Agro_TORMES_2019/assembly/MAFF210265_assembly//K21/assembly_graph_with_scaffolds.gfa 0:00:39.387 36M / 8G INFO General (contig_output.hpp : 22) Outputting contigs to /home/sam/Downloads/Agro_TORMES_2019/assembly/MAFF210265_assembly//K21/before_rr.fasta 0:00:39.437 36M / 8G INFO General (contig_output_stage.cpp : 51) Outputting FastG graph to /home/sam/Downloads/Agro_TORMES_2019/assembly/MAFF210265_assembly//K21/assembly_graph.fastg 0:00:39.640 36M / 8G INFO General (contig_output.hpp : 22) Outputting contigs to /home/sam/Downloads/Agro_TORMES_2019/assembly/MAFF210265_assembly//K21/simplified_contigs.fasta 0:00:39.706 36M / 8G INFO General (contig_output.hpp : 22) Outputting contigs to /home/sam/Downloads/Agro_TORMES_2019/assembly/MAFF210265_assembly//K21/final_contigs.fasta 
0:00:39.760 36M / 8G INFO StageManager (stage.cpp : 132) STAGE == Contig Output 0:00:39.760 36M / 8G INFO General (contig_output_stage.cpp : 40) Writing GFA to /home/sam/Downloads/Agro_TORMES_2019/assembly/MAFF210265_assembly//K21/assembly_graph_with_scaffolds.gfa 0:00:39.805 36M / 8G INFO General (contig_output.hpp : 22) Outputting contigs to /home/sam/Downloads/Agro_TORMES_2019/assembly/MAFF210265_assembly//K21/before_rr.fasta 0:00:39.858 36M / 8G INFO General (contig_output_stage.cpp : 51) Outputting FastG graph to /home/sam/Downloads/Agro_TORMES_2019/assembly/MAFF210265_assembly//K21/assembly_graph.fastg 0:00:40.041 36M / 8G INFO General (contig_output.hpp : 22) Outputting contigs to /home/sam/Downloads/Agro_TORMES_2019/assembly/MAFF210265_assembly//K21/simplified_contigs.fasta 0:00:40.091 36M / 8G INFO General (contig_output.hpp : 22) Outputting contigs to /home/sam/Downloads/Agro_TORMES_2019/assembly/MAFF210265_assembly//K21/final_contigs.fasta 0:00:40.142 36M / 8G INFO General (launch.hpp : 149) SPAdes finished 0:00:40.148 32M / 8G INFO General (main.cpp : 109) Assembling time: 0 hours 0 minutes 40 seconds Max read length detected as 251 Default k-mer sizes were set to [21, 33, 55, 77, 99, 127] because estimated read length (251) is equal to or greater than 250
== Running assembler: K33
0:00:00.000 4M / 4M INFO General (main.cpp : 74) Loaded config from /home/sam/Downloads/Agro_TORMES_2019/assembly/MAFF210265_assembly/K33/configs/config.info 0:00:00.000 4M / 4M INFO General (main.cpp : 74) Loaded config from /home/sam/Downloads/Agro_TORMES_2019/assembly/MAFF210265_assembly/K33/configs/careful_mode.info 0:00:00.000 4M / 4M INFO General (memory_limit.cpp : 49) Memory limit set to 15 Gb 0:00:00.000 4M / 4M INFO General (main.cpp : 87) Starting SPAdes, built from refs/heads/spades_3.13.0, git revision 8ea46659e9b2aca35444a808db550ac333006f8b 0:00:00.000 4M / 4M INFO General (main.cpp : 88) Maximum k-mer length: 128 0:00:00.000 4M / 4M INFO General (main.cpp : 89) Assembling dataset (/home/sam/Downloads/Agro_TORMES_2019/assembly/MAFF210265_assembly/dataset.info) with K=33 0:00:00.000 4M / 4M INFO General (main.cpp : 90) Maximum # of threads to use (adjusted due to OMP capabilities): 8 0:00:00.000 4M / 4M INFO General (launch.hpp : 51) SPAdes started 0:00:00.000 4M / 4M INFO General (launch.hpp : 58) Starting from stage: construction 0:00:00.000 4M / 4M INFO General (launch.hpp : 65) Two-step RR enabled: 0 0:00:00.000 4M / 4M INFO StageManager (stage.cpp : 132) STAGE == de Bruijn graph construction 0:00:00.000 4M / 4M INFO General (read_converter.hpp : 59) Binary reads detected 0:00:00.000 4M / 4M INFO General (construction.cpp : 111) Max read length 251 0:00:00.000 4M / 4M INFO General (construction.cpp : 117) Average read length 217.188 0:00:00.000 4M / 4M INFO General (stage.cpp : 101) PROCEDURE == k+1-mer counting 0:00:00.000 4M / 4M INFO General (kmer_index_builder.hpp : 117) Splitting kmer instances into 64 files using 8 threads. This might take a while. 0:00:00.000 4M / 4M INFO General (file_limit.hpp : 32) Open file limit set to 1024 0:00:00.000 4M / 4M INFO General (kmer_splitters.hpp : 89) Memory available for splitting buffers: 0.624837 Gb 0:00:00.000 4M / 4M INFO General (kmer_splitters.hpp : 97) Using cell size of 524288 0:00:07.899 6G / 6G INFO General (kmer_splitters.hpp : 289) Processed 2880484 reads 0:00:08.470 6G / 6G INFO General (kmer_splitters.hpp : 289) Processed 3036400 reads 0:00:08.470 6G / 6G INFO General (kmer_splitters.hpp : 295) Adding contigs from previous K 0:00:09.244 32M / 6G INFO General (kmer_splitters.hpp : 308) Used 3036400 reads 0:00:09.244 32M / 6G INFO General (kmer_index_builder.hpp : 120) Starting k-mer counting. 0:00:09.414 32M / 6G INFO General (kmer_index_builder.hpp : 127) K-mer counting done. There are 6149974 kmers in total. 0:00:09.414 32M / 6G INFO General (kmer_index_builder.hpp : 133) Merging temporary buckets. 0:00:09.466 32M / 6G INFO General (stage.cpp : 101) PROCEDURE == Extension index construction 0:00:09.466 32M / 6G INFO K-mer Index Building (kmer_index_builder.hpp : 301) Building kmer index 0:00:09.466 32M / 6G INFO General (kmer_index_builder.hpp : 117) Splitting kmer instances into 128 files using 8 threads. This might take a while. 0:00:09.466 32M / 6G INFO General (file_limit.hpp : 32) Open file limit set to 1024 0:00:09.466 32M / 6G INFO General (kmer_splitters.hpp : 89) Memory available for splitting buffers: 0.623698 Gb 0:00:09.466 32M / 6G INFO General (kmer_splitters.hpp : 97) Using cell size of 262144 0:00:10.067 8G / 8G INFO General (kmer_splitters.hpp : 380) Processed 6149974 kmers 0:00:10.067 8G / 8G INFO General (kmer_splitters.hpp : 385) Used 6149974 kmers. 0:00:10.077 32M / 8G INFO General (kmer_index_builder.hpp : 120) Starting k-mer counting. 
0:00:10.131 32M / 8G INFO General (kmer_index_builder.hpp : 127) K-mer counting done. There are 6136669 kmers in total. 0:00:10.131 32M / 8G INFO General (kmer_index_builder.hpp : 133) Merging temporary buckets. 0:00:10.183 32M / 8G INFO K-mer Index Building (kmer_index_builder.hpp : 314) Building perfect hash indices 0:00:10.392 32M / 8G INFO General (kmer_index_builder.hpp : 150) Merging final buckets. 0:00:10.449 32M / 8G INFO K-mer Index Building (kmer_index_builder.hpp : 336) Index built. Total 2853896 bytes occupied (3.72045 bits per kmer). 0:00:10.451 40M / 8G INFO DeBruijnExtensionIndexBu (kmer_extension_index_build: 99) Building k-mer extensions from k+1-mers 0:00:10.798 40M / 8G INFO DeBruijnExtensionIndexBu (kmer_extension_index_build: 103) Building k-mer extensions from k+1-mers finished. 0:00:10.798 40M / 8G INFO General (stage.cpp : 101) PROCEDURE == Early tip clipping 0:00:10.798 40M / 8G INFO General (construction.cpp : 253) Early tip clipper length bound set as (RL - K) 0:00:10.798 40M / 8G INFO Early tip clipping (early_simplification.hpp : 181) Early tip clipping 0:00:11.539 40M / 8G INFO Early tip clipping (early_simplification.hpp : 184) 235264 34-mers were removed by early tip clipper 0:00:11.539 40M / 8G INFO General (stage.cpp : 101) PROCEDURE == Condensing graph 0:00:11.566 40M / 8G INFO UnbranchingPathExtractor (debruijn_graph_constructor: 355) Extracting unbranching paths 0:00:12.032 40M / 8G INFO UnbranchingPathExtractor (debruijn_graph_constructor: 374) Extracting unbranching paths finished. 42493 sequences extracted 0:00:12.307 40M / 8G INFO UnbranchingPathExtractor (debruijn_graph_constructor: 310) Collecting perfect loops 0:00:12.419 40M / 8G INFO UnbranchingPathExtractor (debruijn_graph_constructor: 343) Collecting perfect loops finished. 0 loops collected 0:00:12.435 44M / 8G INFO General (stage.cpp : 101) PROCEDURE == Filling coverage indices (PHM) 0:00:12.435 44M / 8G INFO K-mer Index Building (kmer_index_builder.hpp : 301) Building kmer index 0:00:12.435 44M / 8G INFO K-mer Index Building (kmer_index_builder.hpp : 314) Building perfect hash indices 0:00:12.602 44M / 8G INFO K-mer Index Building (kmer_index_builder.hpp : 336) Index built. Total 2855888 bytes occupied (3.71499 bits per kmer). 0:00:12.606 68M / 8G INFO General (construction.cpp : 388) Collecting k-mer coverage information from reads, this takes a while. 0:00:19.802 68M / 8G INFO General (construction.cpp : 508) Filling coverage and flanking coverage from PHM 0:00:20.122 68M / 8G INFO General (construction.cpp : 464) Processed 84766 edges 0:00:20.141 36M / 8G INFO StageManager (stage.cpp : 132) STAGE == EC Threshold Finding 0:00:20.142 36M / 8G INFO General (kmer_coverage_model.cpp : 181) Kmer coverage valley at: 4 0:00:20.142 36M / 8G INFO General (kmer_coverage_model.cpp : 201) K-mer histogram maximum: 45 0:00:20.142 36M / 8G INFO General (kmer_coverage_model.cpp : 237) Estimated median coverage: 49. Coverage mad: 17.7912 0:00:20.143 36M / 8G INFO General (kmer_coverage_model.cpp : 259) Fitting coverage model 0:00:20.197 36M / 8G INFO General (kmer_coverage_model.cpp : 295) ... iteration 2 0:00:20.336 36M / 8G INFO General (kmer_coverage_model.cpp : 295) ... iteration 4 0:00:20.789 36M / 8G INFO General (kmer_coverage_model.cpp : 295) ... iteration 8 0:00:21.618 36M / 8G INFO General (kmer_coverage_model.cpp : 309) Fitted mean coverage: 50.881. Fitted coverage std. 
dev: 16.84 0:00:21.619 36M / 8G INFO General (kmer_coverage_model.cpp : 334) Probability of erroneous kmer at valley: 0.670135 0:00:21.620 36M / 8G INFO General (kmer_coverage_model.cpp : 358) Preliminary threshold calculated as: 8 0:00:21.620 36M / 8G INFO General (kmer_coverage_model.cpp : 362) Threshold adjusted to: 8 0:00:21.620 36M / 8G INFO General (kmer_coverage_model.cpp : 375) Estimated genome size (ignoring repeats): 5248120 0:00:21.620 36M / 8G INFO General (genomic_info_filler.cpp : 112) Mean coverage was calculated as 50.881 0:00:21.620 36M / 8G INFO General (genomic_info_filler.cpp : 127) EC coverage threshold value was calculated as 8 0:00:21.620 36M / 8G INFO General (genomic_info_filler.cpp : 128) Trusted kmer low bound: 0 0:00:21.620 36M / 8G INFO StageManager (stage.cpp : 132) STAGE == Raw Simplification 0:00:21.620 36M / 8G INFO General (simplification.cpp : 128) PROCEDURE == InitialCleaning 0:00:21.620 36M / 8G INFO General (graph_simplification.hpp : 662) Flanking coverage based disconnection disabled 0:00:21.620 36M / 8G INFO Simplification (parallel_processing.hpp : 165) Running Self conjugate edge remover 0:00:21.622 36M / 8G INFO Simplification (parallel_processing.hpp : 167) Self conjugate edge remover triggered 0 times 0:00:21.622 36M / 8G INFO StageManager (stage.cpp : 132) STAGE == Simplification 0:00:21.622 36M / 8G INFO General (simplification.cpp : 357) Graph simplification started 0:00:21.622 36M / 8G INFO General (graph_simplification.hpp : 634) Creating parallel br instance 0:00:21.622 36M / 8G INFO General (simplification.cpp : 362) PROCEDURE == Simplification cycle, iteration 1 0:00:21.622 36M / 8G INFO Simplification (parallel_processing.hpp : 165) Running Tip clipper 0:00:21.629 36M / 8G INFO Simplification (parallel_processing.hpp : 167) Tip clipper triggered 571 times 0:00:21.630 36M / 8G INFO Simplification (parallel_processing.hpp : 165) Running Bulge remover 0:00:21.819 36M / 8G INFO Simplification (parallel_processing.hpp : 167) Bulge remover triggered 6223 times 0:00:21.819 36M / 8G INFO Simplification (parallel_processing.hpp : 165) Running Low coverage edge remover 0:00:21.824 36M / 8G INFO Simplification (parallel_processing.hpp : 167) Low coverage edge remover triggered 6 times 0:00:21.824 36M / 8G INFO General (simplification.cpp : 362) PROCEDURE == Simplification cycle, iteration 2 0:00:21.824 36M / 8G INFO Simplification (parallel_processing.hpp : 165) Running Tip clipper 0:00:21.825 36M / 8G INFO Simplification (parallel_processing.hpp : 167) Tip clipper triggered 6 times 0:00:21.825 36M / 8G INFO Simplification (parallel_processing.hpp : 165) Running Bulge remover 0:00:21.825 36M / 8G INFO Simplification (parallel_processing.hpp : 167) Bulge remover triggered 1 times 0:00:21.825 36M / 8G INFO Simplification (parallel_processing.hpp : 165) Running Low coverage edge remover 0:00:21.938 36M / 8G INFO Simplification (parallel_processing.hpp : 167) Low coverage edge remover triggered 5357 times 0:00:21.938 36M / 8G INFO General (simplification.cpp : 362) PROCEDURE == Simplification cycle, iteration 3 0:00:21.938 36M / 8G INFO Simplification (parallel_processing.hpp : 165) Running Tip clipper 0:00:21.939 36M / 8G INFO Simplification (parallel_processing.hpp : 167) Tip clipper triggered 15 tim
Hi Pablo, Are you sure that the analysis finished? It seems to have been interrupted without any error... Could you please attach the "error-tormes.txt" file instead of copying its contents? Maybe some lines are missing... Additionally, could you please also attach the "tormes.log" file? Thanks!
Hi, I've created a tormes-environment, but when I type tormes-setup, the following error pops up: -bash: tormes-setup: command not found. Could you guide me through it? I appreciate your help. Zhi
Hi @lz870718 Did you have problems installing tormes as explained in the "installation" instructions (https://github.com/nmquijada/tormes#installation)? When you created your own tormes environment, did you follow all the steps shown here: https://github.com/nmquijada/tormes/issues/1#issuecomment-502601138 ? There are other approaches we can follow, I just need some information about what you did. Cheers, Narciso
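(As a quick sanity check, assuming the environment name used above, you can verify whether the tormes-setup script is actually available inside the activated environment:)
conda activate tormes-environment
which tormes-setup || echo "tormes-setup is not on the PATH of this environment"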
Hi Narciso, Thank you so much for your reply. I installed the package successfully after typing the commands line by line instead of copying and pasting the whole block.
Now I got another error while running the package.
Quitting from lines 189-198 (tormes_report.Rmd)
gzip: /gpfs/home/user1/tormes_result_RG/Raw_reads/*fastq: No such file or directory
The strange part is that I didn't put my fastq files under /gpfs/home/user1/tormes_result_RG/Raw_reads/.
Could you help me with this question? Thank you, Zhi
Hi @lz870718 So "Quitting from lines 189-198 (tormes_report.Rmd)" is the last error you find, right? It means that something went wrong during the report generation. Are the analysis included in TORMES properly performed? It would be very useful for me if you can send me an entire report of the error you encounter, which can be done easily by following the instructions in https://github.com/nmquijada/tormes/issues/6#issuecomment-538347884 Could you please share the error-tormes.txt file?
As a safety step, TORMES copies your reads into a new directory (Raw_reads) inside the output directory specified by -o/--output; that's why the warning refers to "/gpfs/home/user1/tormes_result_RG/Raw_reads/".
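(A simple way to confirm whether the reads were actually copied there, using the path from the error message above, is to list that directory and check for the fastq files:)
ls -lh /gpfs/home/user1/tormes_result_RG/Raw_reads/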
Cheers,
Narciso
Hi Narciso,
Thank you for your quick response. Please find the error in the txt file attached.
Best, Zhi
Software: /gpfs/home/liz15/.conda/envs/tormes-environment/bin/abricate found
Software: /gpfs/home/liz15/.conda/envs/tormes-environment/bin/convert found
Software: /gpfs/home/liz15/.conda/envs/tormes-environment/bin/fasttree found
Software: /gpfs/home/liz15/.conda/envs/tormes-environment/bin/kraken found
Software: /gpfs/home/liz15/.conda/envs/tormes-environment/bin/kraken-report found
Software: /gpfs/home/liz15/.conda/envs/tormes-environment/bin/megahit found
Software: /gpfs/home/liz15/.conda/envs/tormes-environment/bin/mlst found
Software: /gpfs/home/liz15/.conda/envs/tormes-environment/bin/parallel found
Software: /gpfs/home/liz15/.conda/envs/tormes-environment/bin/prinseq-lite.pl found
Software: /gpfs/home/liz15/.conda/envs/tormes-environment/bin/prokka found
Software: /gpfs/home/liz15/.conda/envs/tormes-environment/bin/quast found
Software: /gpfs/home/liz15/.conda/envs/tormes-environment/bin/roary found
Software: /gpfs/home/liz15/.conda/envs/tormes-environment/bin/roary2svg.pl found
Software: /gpfs/home/liz15/.conda/envs/tormes-environment/bin/sickle found
Software: /gpfs/home/liz15/.conda/envs/tormes-environment/bin/spades.py found
Software: /gpfs/home/liz15/.conda/envs/tormes-environment/bin/trimmomatic found
Software: /gpfs/home/liz15/.conda/envs/tormes-environment/bin/../share/mauve-2.4.0.r4736-1/Mauve.jar found
Binaries for MAUVE found
Thanks for using tormes version 1.0
Status can be shown in "/gpfs/home/liz15/GENOME/tormes.log"
gzip: /gpfs/home/liz15/GENOME/Raw_reads/fastq: No such file or directory
gzip: /gpfs/home/liz15/GENOME/cleaned_reads/gz.gz: No such file or directory
gzip: /gpfs/home/liz15/GENOME/cleaned_reads/fastq: No such file or directory
paste: /gpfs/home/liz15/GENOME/temp1: No such file or directory
cut: /gpfs/home/liz15/GENOME/kraken_summary.tmp: No such file or directory
cut: /gpfs/home/liz15/GENOME/kraken_summary.tmp: No such file or directory
cp: cannot stat ‘/gpfs/home/liz15/GENOME/mlst/mlst.tab’: No such file or directory
cp: cannot stat ‘/gpfs/home/liz15/GENOME/antibiotic_resistance_genes//tab’: No such file or directory
cp: cannot stat ‘/gpfs/home/liz15/GENOME/virulence_genes/tab’: No such file or directory
WARNING: Can not open '/gpfs/home/liz15/GENOME/report_files/resfinder_min90.tab' to summarize.
WARNING: Can not open '/gpfs/home/liz15/GENOME/report_files/card_min90.tab' to summarize.
WARNING: Can not open '/gpfs/home/liz15/GENOME/report_files/argannot_min90.tab' to summarize.
cut: /gpfs/home/liz15/GENOME/report_files/*resfinder_min90.tab: No such file or directory
cut: /gpfs/home/liz15/GENOME/report_files/tmp2.txt: No such file or directory
paste: /gpfs/home/liz15/GENOME/report_files/tmp2.txt: No such file or directory
processing file: tormes_report.Rmd
|
| | 0%
|.. | 3%
ordinary text without R code
|
|.... | 6%
label: unnamed-chunk-1 (with options)
List of 3
$ echo : logi FALSE
$ message: logi FALSE
$ warning: logi FALSE
ggtree v1.14.6 For help: https://guangchuangyu.github.io/software/ggtree
If you use ggtree in published research, please cite the most appropriate paper(s):
Guangchuang Yu, David Smith, Huachen Zhu, Yi Guan, Tommy Tsan-Yuk Lam. ggtree: an R package for visualization and annotation of phylogenetic trees with their covariates and other associated data. Methods in Ecology and Evolution 2017, 8(1):28-36, doi:10.1111/2041-210X.12628
Guangchuang Yu, Tommy Tsan-Yuk Lam, Huachen Zhu, Yi Guan. Two methods for mapping and visualizing associated data on phylogeny using ggtree. Molecular Biology and Evolution 2018, accepted. doi: 10.1093/molbev/msy194
Attaching package: 'plotly'
The following object is masked from 'package:ggplot2':
last_plot
The following object is masked from 'package:stats':
filter
The following object is masked from 'package:graphics':
layout
|
|...... | 10%
ordinary text without R code
|
|........ | 13%
label: unnamed-chunk-2 (with options)
List of 1
$ echo: logi FALSE
|
|.......... | 16%
ordinary text without R code
|
|............. | 19%
label: unnamed-chunk-3 (with options)
List of 1
$ echo: logi FALSE
Quitting from lines 71-74 (tormes_report.Rmd)
Error in read.table("sequencing_assembly_report.txt", header = T, sep = "\t", :
no lines available in input
Calls:
Execution halted
Hi Zhi, Your analysis fails from the very beginning... Something is not working with your input files. Could you please attach the metadata and tormes.log files? Please try to attach the documents using the GitHub webpage; when you answer via email, everything gets pasted into the comment and it is harder to analyze. Thanks! Narciso
RG_samples_metadata.txt tormes.log Thank you Narciso, please see the attachments!
Hi Zhi, Thanks for sharing. The metadata seems to be correct (TORMES quits if some reads listed in the metadata don't exist), but no analyses are performed. Is the error you reported here https://github.com/nmquijada/tormes/issues/1#issuecomment-557569975 the entire error you receive? Please try to attach it as you did with the tormes.log. Do you have permissions in the "/gpfs/data/proteomics/projects/Zhi/Silverman/WGS/" directory? Thanks, Narciso
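(One way to check the permissions on that directory, with the path as quoted above and ".write_test" being just a throwaway file name, is a long listing of the directory entry plus a quick write test:)
ls -ld /gpfs/data/proteomics/projects/Zhi/Silverman/WGS/
touch /gpfs/data/proteomics/projects/Zhi/Silverman/WGS/.write_test && rm /gpfs/data/proteomics/projects/Zhi/Silverman/WGS/.write_test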
Hi Narciso,
Yes, I've attached all the error messages. And I do have permissions for the directory. It seems like tormes didn't place the fastq files under /gpfs/home/liz15/GENOME/Raw_reads/ successfully. I don't know why.
Thank you, Zhi
Hi Zhi,
Unzipping the reads is a requirement for Prinseq to work. Let's try a different quality-filtering tool instead. Can you please run (after activating the tormes environment):
tormes --metadata RG_samples_metadata.txt --output GENOME --filtering trimmomatic --threads 16 &>>error-tormes.txt 2>>error-tormes.txt &
Please change the number of threads according to your machine's capacity.
Then, attach the error-tormes.txt file.
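(For reference: in bash, &>>error-tormes.txt appends both standard output and standard error to that file, and the trailing & runs TORMES in the background. While it runs, progress can be followed, assuming the output directory GENOME as above, with:)
tail -f GENOME/tormes.log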
Let's see!
error-tormes.txt Please find the new error file attached. Thank you!
Hi Zhi, Thanks for sharing. You didn't delete the previous error-tormes.txt file, so every time you ran the command I gave you, the new errors were appended to the same file. However, I see that it showed the error "ERROR: GENOME already exist! Please check". This error has to stop TORMES. Which operating system are you using? I have some ideas of what could be going on... Please, do:
rm error-tormes.txt
tormes --metadata RG_samples_metadata.txt --output GENOME --filtering trimmomatic --threads 16 &>>error-tormes.txt 2>>error-tormes.txt &
And share the error-tormes.txt again. Thanks!
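(Side note: if TORMES stops again with "ERROR: GENOME already exist! Please check", one option, only if the previous partial output is no longer needed, is to move the old directory aside before re-running; the target name below is just an example:)
mv GENOME GENOME_old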
Please find the new error message attached (error-tormes.txt). I am running tormes on a Linux cluster. Thank you.
Ok... It is the first time we have encountered something like this. Even in the absence of input files, the different programs would be expected to return errors; however, only Roary is reporting one... I will think about it and come back to you.
Sure. I appreciate your help. -Zhi
Just one quick thing: do you know which Linux distribution your cluster runs? (We tested TORMES on CentOS and Mint.)
We are using SLURM.
Redhat is the operating system.
Hi there, Have you tested whether TORMES works for you now with the new version? Best, Narciso
Hi @nmquijada,
This is amazing news!!!! An update to the package.
I am very excited to make it work.
However, I already have an issue with the metadata file :(
When I run
tormes -m samples_metadata.txt -o /home/sam/Downloads/tormes-test1/out
I get
ERROR: Some fields in samples_metadata.txt are blank! Please check
Attached you can find the metadata file. I generated it by editing the one created with the command tormes example-metadata.
Do you have any idea what could be wrong?
Cheers, Pablo samples_metadata.txt
Hi Pablo,
Thanks for sharing the file. There were two issues with your metadata file:
The fields in the file must be tab-separated. Between your samples "Sample3" and "Sample4" and the next field ("GENOME") there was a space rather than a tab, so the fields were not properly separated, which triggered the error. Find the corrected file here: samples_metadata-ok_v1.txt
You left the "Description" field in the header, but no description was added. You have two options, either removing "Description" from the header (as in the file attached above) or adding information belonging to that field, using something like "No data" for those samples were you don't have/want to add information. I modified your file as an example here: samples_metadata-ok_v2.txt
Let me know if it works! Narciso
Hi Narciso,
Thanks a lot for reviewing my files! I have used samples_metadata-ok_v2.txt and I got the following output:
LG Wang, TTY Lam, S Xu, Z Dai, L Zhou, T Feng, P Guo, CW Dunn, BR Jones, T Bradley, H Zhu, Y Guan, Y Jiang, G Yu. treeio: an R package for phylogenetic tree input and output with richly annotated and associated data. Molecular Biology and Evolution 2019, accepted. doi: 10.1093/molbev/msz240
|....... | 10%
ordinary text without R code
|.......... | 14%
label: unnamed-chunk-2 (with options)
List of 1
$ echo: logi FALSE
|............ | 17%
ordinary text without R code
|.............. | 21%
label: unnamed-chunk-3 (with options)
List of 1
$ echo: logi FALSE
Quitting from lines 72-75 (tormes_report.Rmd)
Error in file(file, "rt") : cannot open the connection
Calls: <Anonymous> ... withCallingHandlers -> withVisible -> eval -> eval -> read.table -> file
In addition: Warning messages:
1: package 'ggplot2' was built under R version 3.6.3
2: package 'knitr' was built under R version 3.6.3
3: package 'plotly' was built under R version 3.6.3
4: package 'RColorBrewer' was built under R version 3.6.3
5: package 'reshape2' was built under R version 3.6.3
Execution halted
I am using 5 genomes. Should I use more?
Cheers, Pablo
Hi Pablo,
The number of genomes is not a problem. Something went wrong with the generation of the report. Could you please share the "report_files" directory (e.g., as a tgz)? (If you have sensitive information, we can find another way to share the data.) Apart from the report, did the rest of the analyses run properly?
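(A typical way to bundle that directory, run from inside the TORMES output folder with the archive name being just an example, would be:)
tar -czf report_files.tgz report_files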
Thanks, Narciso
Hi Narciso,
Attached you will find the report_files. Hope we can solve this problem soon. The rest of the pipeline generated only empty files.
Cheers, Pablo
Hi Pablo,
I cannot see your attached file. Please use the GitHub webpage next time instead of replying via email. But if all the generated files are empty, something has been failing for you since the very beginning... Can you please send me the genomes? I will reproduce the analysis on my system. You can send them to my email: nmartinquijada at gmail dot com
Sorry for the inconvenience, Narciso
Closing issue due to outdated TORMES version
Dear nmquijada,
I am having some issues with the installation of the package. When I try the commands for the installation using Conda, I get this output:
(base) Pablos-MacBook-Air:~ pablovargas$ conda env create -n tormes-1.0 --file tormes-1.0.yml
WARNING: The conda.compat module is deprecated and will be removed in a future release.
WARNING: The conda.compat module is deprecated and will be removed in a future release.
Collecting package metadata: done
Solving environment: failed
ResolvePackageNotFound:
I am not sure what the problem is, since this is the first time a package has failed to install for me. Perhaps it has something to do with the Conda version?
Thanks in advance for your support.
Cheers, Pablo