xiaoli-dong / metaerg


Diamond not running from docker #1

Open Anuravi123 opened 4 years ago

Anuravi123 commented 4 years ago

Hello,

I get this error when I try to run metaerg:

[Wed Oct 16 13:30:08 2019] Running: diamond blastp -k 1 --quiet -k 1 --masking 1 -p 8 -q metaerg.pl_10162019/tmp/cds.faa -d /hpc-home/ravia/tools/metaerg/db/diamond/uniprot_sprot -e 1e-05 --tmpdir /dev/shm -f 6 qseqid sseqid qlen qstart qend sstart send qframe pident bitscore evalue qcovhsp > metaerg.pl_10162019/tmp/uniprot_sprot.blasttable 2> /dev/null
[Wed Oct 16 13:30:08 2019] Running: hmmsearch --notextw --acc --cut_tc --cpu 8 --tblout metaerg.pl_10162019\/tmp\/FOAM1.hmm.tblout /hpc-home/ravia/tools/metaerg/db/hmm/FOAM1.hmm metaerg.pl_10162019/tmp/cds.faa > /dev/null 2>&1
[Wed Oct 16 13:30:08 2019] Running: diamond blastp -k 1 --quiet -k 1 --masking 0 -p 8 -q metaerg.pl_10162019/tmp/cds.faa -d /hpc-home/ravia/tools/metaerg/db/diamond/genomedb -e 1e-05 --tmpdir /dev/shm -f 6 qseqid sseqid qlen qstart qend sstart send qframe pident bitscore evalue qcovhsp > metaerg.pl_10162019/tmp/genomedb.blasttable 2> /dev/null
[Wed Oct 16 13:30:08 2019] Running: hmmsearch --notextw --acc --cut_tc --cpu 8 --tblout metaerg.pl_10162019\/tmp\/FOAM2.hmm.tblout /hpc-home/ravia/tools/metaerg/db/hmm/FOAM2.hmm metaerg.pl_10162019/tmp/cds.faa > /dev/null 2>&1
[Wed Oct 16 13:30:08 2019] Running: hmmsearch --notextw --acc --cut_ga --cpu 8 --tblout metaerg.pl_10162019\/tmp\/Pfam-A.hmm.tblout /hpc-home/ravia/tools/metaerg/db/hmm/Pfam-A.hmm metaerg.pl_10162019/tmp/cds.faa > /dev/null 2>&1
[Wed Oct 16 13:30:08 2019] Could not run command:diamond blastp -k 1 --quiet -k 1 --masking 1 -p 8 -q metaerg.pl_10162019/tmp/cds.faa -d /hpc-home/ravia/tools/metaerg/db/diamond/uniprot_sprot -e 1e-05 --tmpdir /dev/shm -f 6 qseqid sseqid qlen qstart qend sstart send qframe pident bitscore evalue qcovhsp > metaerg.pl_10162019/tmp/uniprot_sprot.blasttable 2> /dev/null, No such file or directory

[Wed Oct 16 13:30:09 2019] Could not run command:diamond blastp -k 1 --quiet -k 1 --masking 0 -p 8 -q metaerg.pl_10162019/tmp/cds.faa -d /hpc-home/ravia/tools/metaerg/db/diamond/genomedb -e 1e-05 --tmpdir /dev/shm -f 6 qseqid sseqid qlen qstart qend sstart send qframe pident bitscore evalue qcovhsp > metaerg.pl_10162019/tmp/genomedb.blasttable 2> /dev/null, No such file or directory

Perl exited with active threads: 7 running and unjoined 1 finished and unjoined 0 running and detached
[Wed Oct 16 13:30:09 2019] Could not run command:/usr/bin/perl /NGStools/metaerg/bin/annotCDs.pl --dbdir /hpc-home/ravia/tools/metaerg/db --cpus 8 --evalue 1e-05 --identity 20 --coverage 70 --hmmcutoff --cut_ga --hmmevalue 1e-5 --outdir metaerg.pl_10162019/tmp metaerg.pl_10162019/tmp/cds.faa, No such file or directory

I checked the cds.faa file, and it has been created. There seems to be something wrong with running diamond.

Could you help me with this?

Thanks!

Anu

Anuravi123 commented 4 years ago

The HMM models run fine, and everything just stops after diamond fails.

Finesim97 commented 4 years ago

Can you post your docker call?

Finesim97 commented 4 years ago

Never mind, I'm having the same problem when running MetaErg on our GridEngine on an NFS filesystem without docker. A second run works. From the /hpc-home pathname, I guess yours is an NFS as well?

Finesim97 commented 4 years ago

OK, after I removed the 2> /dev/null it was possible to see that diamond was complaining about not being able to open the genomedb database file, even though it was there. Next I replaced --quiet with the --log flag to get some more info, but I also noticed the doubled -k 1 flag. Strangely, after removing the duplicated flag, the 80 parallel jobs ran fine. Maybe just a coincidence, but I added this change to my PR. #3
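
(For anyone hitting the same wall, a minimal sketch of this debugging approach; the output directory below is a placeholder and the -d path should point at your own db directory:)

# Re-run the failing call by hand without the stderr redirect so the real
# DIAMOND error is visible; --log additionally writes a diamond.log file.
diamond blastp -k 1 --log --masking 0 -p 8 \
    -q metaerg_out/tmp/cds.faa \
    -d /path/to/metaerg/db/diamond/genomedb \
    -e 1e-05 --tmpdir /dev/shm \
    -f 6 qseqid sseqid qlen qstart qend sstart send qframe pident bitscore evalue qcovhsp \
    > metaerg_out/tmp/genomedb.blasttable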

xiaoli-dong commented 4 years ago

Hi Anu,

I guess you were running the docker image. Could you show me the exact commands you have used? The following is an example command:

docker run --shm-size 2g --rm -u $(id -u):$(id -g) -it -v /your_local_dir/:/data/ xiaolidong/docker-metaerg metaerg.pl --dbdir /data/db --outdir /data/your_analysis_result_dir /data/your_sequence.fasta

In the above command, the downloaded and uncompressed database is in your_local_dir, and your input FASTA sequence is also in that directory. "-v /your_local_dir/:/data/" basically mounts your local directory into the docker container, where it can be accessed as /data.
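
(A quick sanity check of the mount before a full run, sketched with the same placeholder directory: list /data from inside the container and confirm that the db directory and your FASTA file show up.)

docker run --rm -v /your_local_dir/:/data/ xiaolidong/docker-metaerg ls /data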

Thanks, Xiaoli


xiaoli-dong commented 4 years ago

The Docker image is built using the newest version of DIAMOND.

If you are using a locally installed version, you need to update your DIAMOND to the newest version.

Thanks,

Xiaoli


Anuravi123 commented 4 years ago

Dear Xiaoli,

As mentioned in my previous email, I converted the docker container to a singularity image, so this is the command that I use:

singularity exec ~/tools/metaerg/metaerg.img metaerg.pl --dbdir ~/tools/metaerg/db --depth barcode01_metabat2_depth.txt barcode01.fasta
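
(As an aside: Singularity's -B option plays the role of Docker's -v bind mount. If the database directory were not visible inside the image, an explicit bind along these lines might help; the mount point /mnt/metaerg is purely illustrative:)

# Bind the directory holding the db so the container is guaranteed to see it
singularity exec -B ~/tools/metaerg:/mnt/metaerg ~/tools/metaerg/metaerg.img \
    metaerg.pl --dbdir /mnt/metaerg/db --depth barcode01_metabat2_depth.txt barcode01.fasta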

I didn't build the image with diamond installed; I assumed diamond was present within the docker container.

Anu


Anuravi123 commented 4 years ago

Dear Xiaoli Dong,

Thank you for your email. I actually made a singularity image from the docker container so I can use it on the HPC cluster. I assumed that the docker container had the diamond tools present within it.

Are you saying I would need to create a singularity image with MetaErg and diamond?


xiaoli-dong commented 4 years ago

The Docker image already includes the newest version of DIAMOND.

Xiaoli


Anuravi123 commented 4 years ago

Dear Xiaoli

Thank you. I can try the latest docker image and let you know if it works alright. 🙂

Thank you for your response and for helping me with this.

Anu



xiaoli-dong commented 4 years ago

Can you send me your local database path, the exact command you were using, and the output error log? I think the image online should be fine, because someone in the US was using it and everything went through.

Xiaoli


Anuravi123 commented 4 years ago

Hello, the error that I get is exactly what I have pasted in the issue. The command that I used is:

singularity exec ~/tools/metaerg/metaerg.img metaerg.pl --dbdir ~/tools/metaerg/db --depth barcode01_metabat2_depth.txt barcode01.fasta

I am sure my database is being recognised, since it does resolve the database to the correct path. Also, I have attached the slurm output from the tool, to give you the output of the whole process. Hope this is useful.

Thanks

Anuravi123 commented 4 years ago

perl: warning: Setting locale failed. perl: warning: Please check that your locale settings: LANGUAGE = (unset), LC_ALL = (unset), LANG = "en_GB.UTF-8" are supported and installed on your system. perl: warning: Falling back to the standard locale ("C"). perl: warning: Setting locale failed. perl: warning: Please check that your locale settings: LANGUAGE = (unset), LC_ALL = (unset), LANG = "en_GB.UTF-8" are supported and installed on your system. perl: warning: Falling back to the standard locale ("C").
[Wed Oct 16 13:03:23 2019] Creating new output folder: metaerg.pl_10162019
[Wed Oct 16 13:03:23 2019] Running: mkdir -p metaerg.pl_10162019
[Wed Oct 16 13:03:23 2019] Running: mkdir -p metaerg.pl_10162019\/tmp
[Wed Oct 16 13:03:23 2019] Running: mkdir -p metaerg.pl_10162019\/data
[Wed Oct 16 13:03:23 2019] Using filename prefix: metaerg.pl_10162019.XXX
[Wed Oct 16 13:03:23 2019] metaerg.pl --dbdir /hpc-home/ravia/tools/metaerg/db --mincontiglen 200 --minorflen 180 --prefix metaerg.pl_10162019 --outdir metaerg.pl_10162019 --locustag metaerg.pl --increment 1 --cpus 8 --evalue 1e-05 --identity 20 --coverage 70 -hmmcutoff --cut_ga -hmmevalue 1e-5 barcode01.fasta

[Wed Oct 16 13:03:23 2019] Loading and checking input file: metaerg.pl_10162019/metaerg.pl_10162019.fna [Wed Oct 16 13:03:24 2019] Wrote 599 contigs [Wed Oct 16 13:03:25 2019] **Start to predicate genes

[Wed Oct 16 13:03:25 2019] Running: /usr/bin/perl /NGStools/metaerg/bin/predictFeatures.pl --dbdir /hpc-home/ravia/tools/metaerg/db --evalue 1e-05 --gtype meta --gc 11 --minorflen 180 --prefix metaerg.pl_10162019 --cpus 8 --outdir metaerg.pl_10162019/tmp --force metaerg.pl_10162019/metaerg.pl_10162019.fna perl: warning: Setting locale failed. perl: warning: Please check that your locale settings: LANGUAGE = (unset), LC_ALL = (unset), LANG = "en_GB.UTF-8" are supported and installed on your system. perl: warning: Falling back to the standard locale ("C"). perl: warning: Setting locale failed. perl: warning: Please check that your locale settings: LANGUAGE = (unset), LC_ALL = (unset), LANG = "en_GB.UTF-8" are supported and installed on your system. perl: warning: Falling back to the standard locale ("C"). [Wed Oct 16 13:03:25 2019] Loading and checking input file: metaerg.pl_10162019/metaerg.pl_10162019.fna [Wed Oct 16 13:03:26 2019] Wrote 599 contigs [Wed Oct 16 13:03:26 2019] Searching for CRISPR repeats [Wed Oct 16 13:03:26 2019] Running: minced -gffFull metaerg.pl_10162019/tmp/metaerg.pl_10162019.fna metaerg.pl_10162019/tmp/crisprs.temp [Wed Oct 16 13:03:31 2019] Found 10 CRISPRs [Wed Oct 16 13:03:32 2019] predict_CRISPRs took: 6 wallclock secs ( 0.22 usr 0.03 sys + 3.86 cusr 0.87 csys = 4.98 CPU) to run

[Wed Oct 16 13:03:32 2019] Predicting tRNAs [Wed Oct 16 13:03:32 2019] Running: aragorn -l -t -gc11 metaerg.pl_10162019/tmp/metaerg.pl_10162019.crispr.masked.fna -w -o metaerg.pl_10162019/tmp/tRNA.temp [Wed Oct 16 13:03:37 2019] Found 212 tRNAs [Wed Oct 16 13:03:37 2019] predict_tRNA_aragorn took: 5 wallclock secs ( 0.09 usr 0.04 sys + 5.06 cusr 0.03 csys = 5.22 CPU) to run

[Wed Oct 16 13:03:37 2019] Predicting Ribosomal RNAs [Wed Oct 16 13:03:37 2019] Running: /NGStools/metaerg/bin/rRNAFinder.pl --dbdir /hpc-home/ravia/tools/metaerg/db --threads 8 --evalue 1e-05 --domain meta --outdir metaerg.pl_10162019/tmp metaerg.pl_10162019/tmp/metaerg.pl_10162019.tRNA.masked.fna perl: warning: Setting locale failed. perl: warning: Please check that your locale settings: LANGUAGE = (unset), LC_ALL = (unset), LANG = "en_GB.UTF-8" are supported and installed on your system. perl: warning: Falling back to the standard locale ("C"). [13:03:38] This is rRNAFinder.pl 1.1.0 [13:03:38] Written by Xiaoli Dong xdong@ucalgary.ca [13:03:38] Obtained from https://sourceforge.net/projects/rrnafinder/ [13:03:38] Scanning metaerg.pl_10162019/tmp/metaerg.pl_10162019.tRNA.masked.fna for arc rRNA genes using /NGStools/metaerg/bin/../hmmrna/arc.hmm... please wait [13:03:38] Command: nhmmer --cpu 8 -E 1e-05 -o /dev/null --tblout metaerg.pl_10162019/tmp/arc.tblout \/NGStools\/metaerg\/bin\/..\/hmmrna\/arc.hmm metaerg.pl_10162019\/tmp\/metaerg.pl_10162019.tRNA.masked.fna [13:04:35] Scanning metaerg.pl_10162019/tmp/metaerg.pl_10162019.tRNA.masked.fna for bac rRNA genes using /NGStools/metaerg/bin/../hmmrna/bac.hmm... please wait [13:04:35] Command: nhmmer --cpu 8 -E 1e-05 -o /dev/null --tblout metaerg.pl_10162019/tmp/bac.tblout \/NGStools\/metaerg\/bin\/..\/hmmrna\/bac.hmm metaerg.pl_10162019\/tmp\/metaerg.pl_10162019.tRNA.masked.fna [13:05:26] Scanning metaerg.pl_10162019/tmp/metaerg.pl_10162019.tRNA.masked.fna for euk rRNA genes using /NGStools/metaerg/bin/../hmmrna/euk.hmm... please wait [13:05:26] Command: nhmmer --cpu 8 -E 1e-05 -o /dev/null --tblout metaerg.pl_10162019/tmp/euk.tblout \/NGStools\/metaerg\/bin\/..\/hmmrna\/euk.hmm metaerg.pl_10162019\/tmp\/metaerg.pl_10162019.tRNA.masked.fna [13:06:50] write to metaerg.pl_10162019/tmp/5SrRNA.ffn [13:06:50] write to metaerg.pl_10162019/tmp/16SrRNA.ffn [13:06:50] write to metaerg.pl_10162019/tmp/23SrRNA.ffn [13:06:50] Command: /usr/bin/perl /NGStools/metaerg/bin/rna2taxon.pl --dbdir /hpc-home/ravia/tools/metaerg/db/blast --cpus 8 --dbtype ssu --evalue 1e-05 --identities 70,80,85,90,95,97,99 --coverage 80 metaerg.pl_10162019/tmp/16SrRNA.ffn > metaerg.pl_10162019/tmp/16SrRNA.tax.txt perl: warning: Setting locale failed. perl: warning: Please check that your locale settings: LANGUAGE = (unset), LC_ALL = (unset), LANG = "en_GB.UTF-8" are supported and installed on your system. perl: warning: Falling back to the standard locale ("C"). [13:06:50] blastn -query metaerg.pl_10162019/tmp/16SrRNA.ffn -db /hpc-home/ravia/tools/metaerg/db/blast/silva_SSURef_Nr99.fasta -dust no -num_threads 8 -evalue 1e-05 -out metaerg.pl_10162019/tmp/16SrRNA.ffn.blastn.outfmt7.txt -outfmt "7 qseqid qlen slen qstart qend sstart send length qcovhsp pident evalue bitscore stitle" -max_target_seqs 5

[13:06:50] Running: blastn -query metaerg.pl_10162019/tmp/16SrRNA.ffn -db /hpc-home/ravia/tools/metaerg/db/blast/silva_SSURef_Nr99.fasta -dust no -num_threads 8 -evalue 1e-05 -out metaerg.pl_10162019/tmp/16SrRNA.ffn.blastn.outfmt7.txt -outfmt "7 qseqid qlen slen qstart qend sstart send length qcovhsp pident evalue bitscore stitle" -max_target_seqs 5 [13:19:21] classifyRNA takes : 751 wallclock secs ( 0.00 usr 0.00 sys + 748.88 cusr 0.68 csys = 749.56 CPU) to run

[13:19:21] Command: /usr/bin/perl /NGStools/metaerg/bin/rna2taxon.pl --dbdir /hpc-home/ravia/tools/metaerg/db/blast --cpus 8 --dbtype lsu --evalue 1e-05 --identities 70,80,85,90,95,97,99 --coverage 80 metaerg.pl_10162019/tmp/23SrRNA.ffn > metaerg.pl_10162019/tmp/23SrRNA.tax.txt perl: warning: Setting locale failed. perl: warning: Please check that your locale settings: LANGUAGE = (unset), LC_ALL = (unset), LANG = "en_GB.UTF-8" are supported and installed on your system. perl: warning: Falling back to the standard locale ("C"). [13:19:21] blastn -query metaerg.pl_10162019/tmp/23SrRNA.ffn -db /hpc-home/ravia/tools/metaerg/db/blast/silva_LSURef.fasta -dust no -num_threads 8 -evalue 1e-05 -out metaerg.pl_10162019/tmp/23SrRNA.ffn.blastn.outfmt7.txt -outfmt "7 qseqid qlen slen qstart qend sstart send length qcovhsp pident evalue bitscore stitle" -max_target_seqs 5

[13:19:21] Running: blastn -query metaerg.pl_10162019/tmp/23SrRNA.ffn -db /hpc-home/ravia/tools/metaerg/db/blast/silva_LSURef.fasta -dust no -num_threads 8 -evalue 1e-05 -out metaerg.pl_10162019/tmp/23SrRNA.ffn.blastn.outfmt7.txt -outfmt "7 qseqid qlen slen qstart qend sstart send length qcovhsp pident evalue bitscore stitle" -max_target_seqs 5 [13:24:44] classifyRNA takes : 323 wallclock secs ( 0.00 usr 0.00 sys + 321.51 cusr 0.32 csys = 321.83 CPU) to run

[13:24:44] Command: /usr/bin/perl /NGStools/metaerg/bin/rna2taxon.pl --dbdir /hpc-home/ravia/tools/metaerg/db/blast --cpus 8 --dbtype lsu --evalue 1e-05 --identities 70,80,85,90,95,97,99 --coverage 80 metaerg.pl_10162019/tmp/5SrRNA.ffn > metaerg.pl_10162019/tmp/5SrRNA.tax.txt perl: warning: Setting locale failed. perl: warning: Please check that your locale settings: LANGUAGE = (unset), LC_ALL = (unset), LANG = "en_GB.UTF-8" are supported and installed on your system. perl: warning: Falling back to the standard locale ("C"). [13:24:44] blastn -query metaerg.pl_10162019/tmp/5SrRNA.ffn -db /hpc-home/ravia/tools/metaerg/db/blast/silva_LSURef.fasta -dust no -num_threads 8 -evalue 1e-05 -out metaerg.pl_10162019/tmp/5SrRNA.ffn.blastn.outfmt7.txt -outfmt "7 qseqid qlen slen qstart qend sstart send length qcovhsp pident evalue bitscore stitle" -max_target_seqs 5

[13:24:44] Running: blastn -query metaerg.pl_10162019/tmp/5SrRNA.ffn -db /hpc-home/ravia/tools/metaerg/db/blast/silva_LSURef.fasta -dust no -num_threads 8 -evalue 1e-05 -out metaerg.pl_10162019/tmp/5SrRNA.ffn.blastn.outfmt7.txt -outfmt "7 qseqid qlen slen qstart qend sstart send length qcovhsp pident evalue bitscore stitle" -max_target_seqs 5 [13:24:44] classifyRNA takes : 0 wallclock secs ( 0.00 usr 0.00 sys + 0.42 cusr 0.04 csys = 0.46 CPU) to run

16SrRNA [13:24:44] ####################Command: cat metaerg.pl_10162019/tmp/16SrRNA.tax.txt >> metaerg.pl_10162019/tmp/rRNA.tax.txt 5SrRNA [13:24:44] ####################Command: cat metaerg.pl_10162019/tmp/5SrRNA.tax.txt >> metaerg.pl_10162019/tmp/rRNA.tax.txt 23SrRNA [13:24:44] ####################Command: cat metaerg.pl_10162019/tmp/23SrRNA.tax.txt >> metaerg.pl_10162019/tmp/rRNA.tax.txt rRNA prediction took:1266 wallclock secs ( 0.03 usr 0.02 sys + 1257.61 cusr 6.73 csys = 1264.39 CPU) to run Wed Oct 16 13:24:44 2019] Found 76 rRNAs [Wed Oct 16 13:24:45 2019] predict_rRNA took:1268 wallclock secs ( 0.09 usr 0.04 sys + 1257.71 cusr 6.76 csys = 1264.60 CPU) to run

[Wed Oct 16 13:24:45 2019] Predicting coding sequences [Wed Oct 16 13:24:45 2019] Running: prodigal -g 11 -p meta -m -f gff -q -i metaerg.pl_10162019/tmp/metaerg.pl_10162019.rRNA.masked.fna -a metaerg.pl_10162019/tmp/cds.faa.temp.1 -d metaerg.pl_10162019/tmp/cds.ffn.temp.1 -o metaerg.pl_10162019/tmp/cds.gff.temp.1 [Wed Oct 16 13:30:05 2019] Found 23984 CDS [Wed Oct 16 13:30:06 2019] predict_CDs took:321 wallclock secs ( 7.27 usr 0.09 sys + 312.87 cusr 0.21 csys = 320.44 CPU) to run

[Wed Oct 16 13:30:06 2019] Connecting features back to sequences [Wed Oct 16 13:30:06 2019] Writing all feature gff file to metaerg.pl_10162019/tmp [Wed Oct 16 13:30:08 2019] **Finish predicating genes

[Wed Oct 16 13:30:08 2019] **Start annotate cds

[Wed Oct 16 13:30:08 2019] Running: /usr/bin/perl /NGStools/metaerg/bin/annotCDs.pl --dbdir /hpc-home/ravia/tools/metaerg/db --cpus 8 --evalue 1e-05 --identity 20 --coverage 70 --hmmcutoff --cut_ga --hmmevalue 1e-5 --outdir metaerg.pl_10162019/tmp metaerg.pl_10162019/tmp/cds.faa perl: warning: Setting locale failed. perl: warning: Please check that your locale settings: LANGUAGE = (unset), LC_ALL = (unset), LANG = "en_GB.UTF-8" are supported and installed on your system. perl: warning: Falling back to the standard locale ("C"). perl: warning: Setting locale failed. perl: warning: Please check that your locale settings: LANGUAGE = (unset), LC_ALL = (unset), LANG = "en_GB.UTF-8" are supported and installed on your system. perl: warning: Falling back to the standard locale ("C"). [Wed Oct 16 13:30:08 2019] Running: diamond blastp -k 1 --quiet -k 1 --masking 1 -p 8 -q metaerg.pl_10162019/tmp/cds.faa -d /hpc-home/ravia/tools/metaerg/db/diamond/uniprot_sprot -e 1e-05 --tmpdir /dev/shm -f 6 qseqid sseqid qlen qstart qend sstart send qframe pident bitscore evalue qcovhsp > metaerg.pl_10162019/tmp/uniprot_sprot.blasttable 2> /dev/null [Wed Oct 16 13:30:08 2019] Running: hmmsearch --notextw --acc --cut_tc --cpu 8 --tblout metaerg.pl_10162019\/tmp\/FOAM1.hmm.tblout /hpc-home/ravia/tools/metaerg/db/hmm/FOAM1.hmm metaerg.pl_10162019/tmp/cds.faa > /dev/null 2>&1 [Wed Oct 16 13:30:08 2019] Running: diamond blastp -k 1 --quiet -k 1 --masking 0 -p 8 -q metaerg.pl_10162019/tmp/cds.faa -d /hpc-home/ravia/tools/metaerg/db/diamond/genomedb -e 1e-05 --tmpdir /dev/shm -f 6 qseqid sseqid qlen qstart qend sstart send qframe pident bitscore evalue qcovhsp > metaerg.pl_10162019/tmp/genomedb.blasttable 2> /dev/null [Wed Oct 16 13:30:08 2019] Running: hmmsearch --notextw --acc --cut_tc --cpu 8 --tblout metaerg.pl_10162019\/tmp\/FOAM2.hmm.tblout /hpc-home/ravia/tools/metaerg/db/hmm/FOAM2.hmm metaerg.pl_10162019/tmp/cds.faa > /dev/null 2>&1 [Wed Oct 16 13:30:08 2019] Running: hmmsearch --notextw --acc --cut_ga --cpu 8 --tblout metaerg.pl_10162019\/tmp\/Pfam-A.hmm.tblout /hpc-home/ravia/tools/metaerg/db/hmm/Pfam-A.hmm metaerg.pl_10162019/tmp/cds.faa > /dev/null 2>&1 [Wed Oct 16 13:30:08 2019] Could not run command:diamond blastp -k 1 --quiet -k 1 --masking 1 -p 8 -q metaerg.pl_10162019/tmp/cds.faa -d /hpc-home/ravia/tools/metaerg/db/diamond/uniprot_sprot -e 1e-05 --tmpdir /dev/shm -f 6 qseqid sseqid qlen qstart qend sstart send qframe pident bitscore evalue qcovhsp > metaerg.pl_10162019/tmp/uniprot_sprot.blasttable 2> /dev/null, No such file or directory

[Wed Oct 16 13:30:09 2019] Could not run command:diamond blastp -k 1 --quiet -k 1 --masking 0 -p 8 -q metaerg.pl_10162019/tmp/cds.faa -d /hpc-home/ravia/tools/metaerg/db/diamond/genomedb -e 1e-05 --tmpdir /dev/shm -f 6 qseqid sseqid qlen qstart qend sstart send qframe pident bitscore evalue qcovhsp > metaerg.pl_10162019/tmp/genomedb.blasttable 2> /dev/null, No such file or directory

Perl exited with active threads: 7 running and unjoined 1 finished and unjoined 0 running and detached
[Wed Oct 16 13:30:09 2019] Could not run command:/usr/bin/perl /NGStools/metaerg/bin/annotCDs.pl --dbdir /hpc-home/ravia/tools/metaerg/db --cpus 8 --evalue 1e-05 --identity 20 --coverage 70 --hmmcutoff --cut_ga --hmmevalue 1e-5 --outdir metaerg.pl_10162019/tmp metaerg.pl_10162019/tmp/cds.faa, No such file or directory

xiaoli-dong commented 4 years ago

I have never used singularity and cannot give any suggestions on that. Yours is a docker image, right? If that's the case, please make sure you are using the most current version. When you are using the docker image, you need to define the --outdir option. Can you try it again and let me know how it goes?

xiaoli


SheikGeomicro commented 4 years ago

Hi Xiaoli,

I had the same problem using a non-docker install. I had all of the dependencies, including the most up-to-date diamond package, installed. I followed @Finesim97's advice and deleted the second -k 1 flag in the script, and the run seems to have finished the DIAMOND portion.
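
(For other non-docker installs, a sketch of the same fix; the script name and the literal flag text are assumptions based on the commands logged in this thread, so check your own copy before editing:)

# Locate the script that assembles the DIAMOND command line
grep -rn 'k 1 --quiet -k 1' /path/to/metaerg/bin/
# Drop the duplicated "-k 1" (sed keeps a .bak backup of the original)
sed -i.bak 's/-k 1 --quiet -k 1/-k 1 --quiet/' /path/to/metaerg/bin/annotCDs.pl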

xiaoli-dong commented 4 years ago

Thanks for letting me know. I have now deleted the additional "-k 1" from the non-docker install; I will update the Docker version.

Xiaoli


xiaoli-dong commented 4 years ago

I have updated the docker image to get rid of one of the "-k 1" options. Actually, that was not the reason that diamond did not go through. I have been working with a few people running the docker image, and some of them did not really understand the -v option. The -v option basically mounts your local directory into the docker container so that MetaErg can recognize it. The input database path, output directory, and input fasta file must all be relative to that directory. After retrieving the docker image with "docker pull xiaolidong/docker-metaerg", run:

docker run --shm-size 2g --rm -u $(id -u):$(id -g) -it -v my_local_dir:/data/ xiaolidong/docker-metaerg metaerg.pl --dbdir /data/db --outdir /data/myoutput /data/contig.fasta
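
(To make the mapping concrete, with purely illustrative host paths: if the database and contigs live under /home/me/metaerg_run, that directory becomes /data inside the container, so every argument to metaerg.pl must use the /data prefix:)

# Host side: /home/me/metaerg_run/db and /home/me/metaerg_run/contig.fasta
docker run --shm-size 2g --rm -u $(id -u):$(id -g) -it \
    -v /home/me/metaerg_run:/data/ xiaolidong/docker-metaerg \
    metaerg.pl --dbdir /data/db --outdir /data/myoutput /data/contig.fasta
# Results appear in /home/me/metaerg_run/myoutput on the host.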

Anuravi123 commented 4 years ago

Hello, I will try the new docker image, but I ran MetaERG without docker and I could run the whole analysis. However, it did stop while attempting to create the HTML files:

[Mon Oct 21 13:52:30 2019] Running: /home/ubuntu/volume_1/metaerg/metaerginstall/bin/perl /home/ubuntu/volume_1/metaerg/metaerginstall/metaerg/bin/output_tree_json.pl -i metaerg_barcode01/data/metabolic.cds.profile.tab.txt -t Compound-process-gene> metaerg_barcode01/data/metabolic.cds.profile.tree.json;/home/ubuntu/volume_1/metaerg/metaerginstall/bin/perl /home/ubuntu/volume_1/metaerg/metaerginstall/metaerg/bin/output_sunburst_json.pl -i metaerg_barcode01/data/metabolic.cds.profile.tab.txt -t Compound-process-gene -p metaerg_barcode01/data/metabolic.cds.profile.sunburst;/home/ubuntu/volume_1/metaerg/metaerginstall/bin/perl /home/ubuntu/volume_1/metaerg/metaerginstall/metaerg/bin/output_sunburst_json.abund.pl -i metaerg_barcode01/data/metabolic.cds.profile.tab.txt -t Foam -p metaerg_barcode01/data/metabolic.cds.profile.sunburst Illegal division by zero at /home/ubuntu/volume_1/metaerg/metaerginstall/metaerg/bin/output_sunburst_json.abund.pl line 137. [Mon Oct 21 13:52:30 2019] Could not run command:/home/ubuntu/volume_1/metaerg/metaerginstall/bin/perl /home/ubuntu/volume_1/metaerg/metaerginstall/metaerg/bin/output_tree_json.pl -i metaerg_barcode01/data/metabolic.cds.profile.tab.txt -t Compound-process-gene> metaerg_barcode01/data/metabolic.cds.profile.tree.json;/home/ubuntu/volume_1/metaerg/metaerginstall/bin/perl /home/ubuntu/volume_1/metaerg/metaerginstall/metaerg/bin/output_sunburst_json.pl -i metaerg_barcode01/data/metabolic.cds.profile.tab.txt -t Compound-process-gene -p metaerg_barcode01/data/metabolic.cds.profile.sunburst;/home/ubuntu/volume_1/metaerg/metaerginstall/bin/perl /home/ubuntu/volume_1/metaerg/metaerginstall/metaerg/bin/output_sunburst_json.abund.pl -i metaerg_barcode01/data/metabolic.cds.profile.tab.txt -t Foam -p metaerg_barcode01/data/metabolic.cds.profile.sunburst, Inappropriate ioctl for device

[Mon Oct 21 13:52:31 2019] Could not run command:/home/ubuntu/volume_1/metaerg/metaerginstall/bin/perl /home/ubuntu/volume_1/metaerg/metaerginstall/metaerg/bin/output_reports.pl -d ./metaerginstall/db -g metaerg_barcode01/data/all.gff -o metaerg_barcode01 -f metaerg_barcode01/metaerg.pl_10212019.fna,

This doesn't bother me too much, since I wanted the global analyses rather than the HTML files, but it might be good to know what caused that error.

xiaoli-dong commented 4 years ago

Could you send me your input files, the fasta file and the depth file, to xdong@ucalgary.ca? I need to reproduce the error in order to fix it.

Thanks


vrou1995 commented 4 years ago

Hi,

I used the command you posted above, making sure that the directory mounts into the docker container, but for some reason it still gets stuck at the diamond stage, thinking that the cds.faa file doesn't exist even though I checked and it does.

I pulled the docker image today, so it should be the version with only a single -k 1 flag.

Do you have any idea why metaerg might be failing to complete the analysis?

Many thanks,

Vincent

Finesim97 commented 4 years ago

Hi, could you post your exact command and error log, and show the output of docker image inspect xiaolidong/docker-metaerg?

vrou1995 commented 4 years ago

Hi,

My exact command was:

docker run --shm-size 2g --rm -u $(id -u):$(id -g) -it -v /Users/vr2858/Desktop/MetaErg_analysis/data:/data xiaolidong/docker-metaerg metaerg.pl --dbdir /data/db --outdir /data/full-metagenome-trial /data/final_assembly.fasta

I'm not entirely sure what you mean by my error log, as there is no log file in the tmp folder, but here is what the program printed as it failed:

[Mon Feb 17 16:19:19 2020] Running: /usr/bin/perl /NGStools/metaerg/bin/annotCDs.pl --dbdir /data/db --cpus 8 --evalue 1e-05 --identity 20 --coverage 70 --hmmcutoff --cut_ga --hmmevalue 1e-5 --outdir /data/full-metagenome-trial/tmp /data/full-metagenome-trial/tmp/cds.faa [Mon Feb 17 16:19:19 2020] Running: diamond blastp -k 1 --quiet --masking 1 -p 8 -q /data/full-metagenome-trial/tmp/cds.faa -d /data/db/diamond/uniprot_sprot -e 1e-05 --tmpdir /dev/shm -f 6 qseqid sseqid qlen qstart qend sstart send qframe pident bitscore evalue qcovhsp > /data/full-metagenome-trial/tmp/uniprot_sprot.blasttable 2> /dev/null [Mon Feb 17 16:19:19 2020] Running: diamond blastp -k 1 --quiet --masking 0 -p 8 -q /data/full-metagenome-trial/tmp/cds.faa -d /data/db/diamond/genomedb -e 1e-05 --tmpdir /dev/shm -f 6 qseqid sseqid qlen qstart qend sstart send qframe pident bitscore evalue qcovhsp > /data/full-metagenome-trial/tmp/genomedb.blasttable 2> /dev/null [Mon Feb 17 16:19:19 2020] Running: hmmsearch --notextw --acc --cut_ga --cpu 8 --tblout \/data\/full-metagenome-trial\/tmp\/Pfam-A.hmm.tblout /data/db/hmm/Pfam-A.hmm /data/full-metagenome-trial/tmp/cds.faa > /dev/null 2>&1 [Mon Feb 17 16:19:19 2020] Running: hmmsearch --notextw --acc --cut_tc --cpu 8 --tblout \/data\/full-metagenome-trial\/tmp\/FOAM2.hmm.tblout /data/db/hmm/FOAM2.hmm /data/full-metagenome-trial/tmp/cds.faa > /dev/null 2>&1 [Mon Feb 17 16:19:19 2020] Running: hmmsearch --notextw --acc --cut_tc --cpu 8 --tblout \/data\/full-metagenome-trial\/tmp\/FOAM1.hmm.tblout /data/db/hmm/FOAM1.hmm /data/full-metagenome-trial/tmp/cds.faa > /dev/null 2>&1 [Mon Feb 17 16:19:19 2020] Running: hmmsearch --notextw --acc --cut_ga --cpu 8 --tblout \/data\/full-metagenome-trial\/tmp\/TIGRFAMs.hmm.tblout /data/db/hmm/TIGRFAMs.hmm /data/full-metagenome-trial/tmp/cds.faa > /dev/null 2>&1 [Mon Feb 17 16:19:19 2020] Running: hmmsearch --notextw --acc --cut_tc --cpu 8 --tblout \/data\/full-metagenome-trial\/tmp\/FOAM3.hmm.tblout /data/db/hmm/FOAM3.hmm /data/full-metagenome-trial/tmp/cds.faa > /dev/null 2>&1 [Mon Feb 17 16:19:19 2020] Running: hmmsearch --notextw --acc --cut_tc --cpu 8 --tblout \/data\/full-metagenome-trial\/tmp\/FOAM4.hmm.tblout /data/db/hmm/FOAM4.hmm /data/full-metagenome-trial/tmp/cds.faa > /dev/null 2>&1 [Mon Feb 17 16:19:19 2020] Running: hmmsearch --notextw --acc -E 1e-5 --domE 1e-5 --incE 1e-5 --incdomE 1e-5 --cpu 8 --tblout \/data\/full-metagenome-trial\/tmp\/metabolic.hmm.tblout /data/db/hmm/metabolic.hmm /data/full-metagenome-trial/tmp/cds.faa > /dev/null 2>&1 [Mon Feb 17 16:19:19 2020] Running: hmmsearch --notextw --acc --cut_tc --cpu 8 --tblout \/data\/full-metagenome-trial\/tmp\/FOAM5.hmm.tblout /data/db/hmm/FOAM5.hmm /data/full-metagenome-trial/tmp/cds.faa > /dev/null 2>&1 [Mon Feb 17 16:19:19 2020] Running: hmmsearch --notextw --acc -E 1e-5 --domE 1e-5 --incE 1e-5 --incdomE 1e-5 --cpu 8 --tblout \/data\/full-metagenome-trial\/tmp\/casgenes.hmm.tblout /data/db/hmm/casgenes.hmm /data/full-metagenome-trial/tmp/cds.faa > /dev/null 2>&1 [Mon Feb 17 16:19:32 2020] Could not run command:diamond blastp -k 1 --quiet --masking 0 -p 8 -q /data/full-metagenome-trial/tmp/cds.faa -d /data/db/diamond/genomedb -e 1e-05 --tmpdir /dev/shm -f 6 qseqid sseqid qlen qstart qend sstart send qframe pident bitscore evalue qcovhsp > /data/full-metagenome-trial/tmp/genomedb.blasttable 2> /dev/null, No such file or directory

Perl exited with active threads: 11 running and unjoined -1 finished and unjoined 0 running and detached
[Mon Feb 17 16:19:32 2020] Could not run command:/usr/bin/perl /NGStools/metaerg/bin/annotCDs.pl --dbdir /data/db --cpus 8 --evalue 1e-05 --identity 20 --coverage 70 --hmmcutoff --cut_ga --hmmevalue 1e-5 --outdir /data/full-metagenome-trial/tmp /data/full-metagenome-trial/tmp/cds.faa, No such file or directory
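
(One check worth running at this point, reusing the mount from the command above: confirm the DIAMOND databases are actually visible inside the container. DIAMOND databases carry the .dmnd extension, so genomedb.dmnd and uniprot_sprot.dmnd should both be listed.)

docker run --rm -v /Users/vr2858/Desktop/MetaErg_analysis/data:/data \
    xiaolidong/docker-metaerg ls -lh /data/db/diamond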

And here is the output for docker image inspect xiaolidong/docker-metaerg:

[ { "Id": "sha256:16aaa669b220c4d53ce30dd617747e63e230e0cf7f173b622a4d221142a84035", "RepoTags": [ "xiaolidong/docker-metaerg:latest" ], "RepoDigests": [ "xiaolidong/docker-metaerg@sha256:9691c58716d290902f0ebc9decbc4d315d96077b601049e2b129d7b5bb9502fc" ], "Parent": "", "Comment": "", "Created": "2019-10-31T18:00:21.438411354Z", "Container": "9a33e406f4b5ede065931f0a2435ce6c794dd71f43efeb64b69f50182ed3d974", "ContainerConfig": { "Hostname": "9a33e406f4b5", "Domainname": "", "User": "", "AttachStdin": false, "AttachStdout": false, "AttachStderr": false, "Tty": false, "OpenStdin": false, "StdinOnce": false, "Env": [ "PATH=/NGStools/aragorn:/NGStools/minced:/NGStools/Prodigal:/NGStools/ncbi-blast-2.9.0+/bin:/NGStools/diamond:/NGStools/hmmer/src:/NGStools/MinPath:/NGStools/metaerg/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin", "MinPath=/NGStools/MinPath" ], "Cmd": [ "/bin/sh", "-c", "#(nop) WORKDIR /NGStools/" ], "ArgsEscaped": true, "Image": "sha256:c0d5abbda23ad1b9cacf9ff0fe4f56e387d03a283fb94fbc0fa70b63eeab9d1e", "Volumes": null, "WorkingDir": "/NGStools", "Entrypoint": null, "OnBuild": null, "Labels": { "version": "1.2.2" } }, "DockerVersion": "18.03.1-ee-3", "Author": "Xiaoli Dong xiaolid@gmail.com", "Config": { "Hostname": "", "Domainname": "", "User": "", "AttachStdin": false, "AttachStdout": false, "AttachStderr": false, "Tty": false, "OpenStdin": false, "StdinOnce": false, "Env": [ "PATH=/NGStools/aragorn:/NGStools/minced:/NGStools/Prodigal:/NGStools/ncbi-blast-2.9.0+/bin:/NGStools/diamond:/NGStools/hmmer/src:/NGStools/MinPath:/NGStools/metaerg/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin", "MinPath=/NGStools/MinPath" ], "Cmd": [ "/bin/bash" ], "ArgsEscaped": true, "Image": "sha256:c0d5abbda23ad1b9cacf9ff0fe4f56e387d03a283fb94fbc0fa70b63eeab9d1e", "Volumes": null, "WorkingDir": "/NGStools/", "Entrypoint": null, "OnBuild": null, "Labels": { "version": "1.2.2" } }, "Architecture": "amd64", "Os": "linux", "Size": 2304146940, "VirtualSize": 2304146940, "GraphDriver": { "Data": { "LowerDir": 
"/var/lib/docker/overlay2/d41a7e3edae9426056e4fc23706a1bd34c6289dea4eb601e3acac48ae45c1144/diff:/var/lib/docker/overlay2/c493fb58bad3bff3a2bd266b3636ff083a1c058fa077491d377f01396ea275da/diff:/var/lib/docker/overlay2/6c0356bd344b240d31dac436175dd1d67ef162a1d3a7afdaee134f7278b4bf59/diff:/var/lib/docker/overlay2/450080c651b11bfe7e4e3a8540a8f6e1ba631cacb73e6086e5e1d21e6fa61c83/diff:/var/lib/docker/overlay2/83ae440c6e63ce3c1a62fa2bca73ead69dc0466673d61e93ae1ade91f5625783/diff:/var/lib/docker/overlay2/3f44f55faae0288e34b9c125add013d1fbf7f19ecb9f131251ff947104785ee0/diff:/var/lib/docker/overlay2/6f1fb4e8e41aff9d54b30f8219570b68e22cf4f90cead8fdedaba50513119f7e/diff:/var/lib/docker/overlay2/b05bc7476d229aa1c499a314501a26c4ae09f3ed219a7377095d7c1ac7bedba4/diff:/var/lib/docker/overlay2/9862ede71305751357e702f95160b5fda01b6aa6ffb05119f367506b4017a408/diff:/var/lib/docker/overlay2/f5153699175a8a5a908c4bc2ae3f91e69a0dc5681e2a3f6e27d4ddf60ad9b50d/diff:/var/lib/docker/overlay2/9e1f8c71648dd2f6cb60f8d97fbed4e8f51ed92ec562dfacdf46312590a17e47/diff:/var/lib/docker/overlay2/dbe4dcc26cff612dc73e2b3199233ebbb1b2c55df65d22608039253847b8c707/diff:/var/lib/docker/overlay2/0298080c8fff4513e47bb11cc3843882fb2ff869522ecac47e964958bb6d5d95/diff:/var/lib/docker/overlay2/cc7fb76fb18f5766a006ada35fff8014be382576b94de52fab54755dfedea826/diff:/var/lib/docker/overlay2/a5efa0353bc6320e6073592cd5c27aebfb7c84f653c91c1dfec4a0d53c42805c/diff:/var/lib/docker/overlay2/8cd11cdd89f91ed23955e5f97fc09e6e94af56964a1ea5b860e2abd6692a9c53/diff", "MergedDir": "/var/lib/docker/overlay2/c86fa28c31c4fb2fff23a9570e19cf44f312dc3111f12a968aa95c308b8ad77c/merged", "UpperDir": "/var/lib/docker/overlay2/c86fa28c31c4fb2fff23a9570e19cf44f312dc3111f12a968aa95c308b8ad77c/diff", "WorkDir": "/var/lib/docker/overlay2/c86fa28c31c4fb2fff23a9570e19cf44f312dc3111f12a968aa95c308b8ad77c/work" }, "Name": "overlay2" }, "RootFS": { "Type": "layers", "Layers": [ "sha256:e4cd282edf17c7b68a460199ffbe678c04567365b3e4a246f0f5759980250467", "sha256:986a5be1fdfe196bf09c9b1a9f3f5d8b01950b13d17d061b464e64ff848fc379", "sha256:f1bbc256e464f44f60e66efdf435fc65e485eda68c7805c127b9308239d9698e", "sha256:0a3242a235153c4ba82890669e219f95f855f3715d4a87b07db80f31f8226bc2", "sha256:28acd7115ddfaf926ae4ae9920eac3ac5b5676620dd6d8c16aaf3e85d97cd93b", "sha256:e8ddb9a2712a91a5b79fb9ee4ad65ce8565f2939ffaf235556db82fc5d2f712b", "sha256:a62e2b573a14d6ad5399aa311350bf1563f7aebca158dfa49ef222eba02338cf", "sha256:c3ccef3b64e9e91f3f2ceaa38279094623965fbb6a897117a672fed7faf7a3f7", "sha256:5e5e7622b959f73473c48dfe433ad04f3a8c98b9b16205c3873f56d6f2a98357", "sha256:64477599d793a7800561e2eadcaf3e4bfc67ffb0ebe03ab06eeb4c87df16d9f2", "sha256:b490db096a80b29df033d79cf37252e8bf9c6b62fe536903ae95cb4e1b5db0cc", "sha256:6d97931f92154d2065c71dc556e1188d9028975f4843febd9b0d4591dc47c696", "sha256:6c5140e77e852ce1052d4a422c1047a377a61ec47c5c8a86b220fafc9b9295db", "sha256:e50c24de79953e22dac84e3dbbcb3b0146710608f24162372d3e581f0e0da974", "sha256:74fd2a10966083f77026e00bd74a8718a51876a4789144faf06d628bcfafa34a", "sha256:58bd823920de3661c110e3d64339dec106f9fc6687d72b1465b8b0c70d0e9bbf", "sha256:15b6a2415cac8255f0a342456b7584926dea834abc35252f5ddac0d5d550cd13" ] }, "Metadata": { "LastTagTime": "0001-01-01T00:00:00Z" } } ]

Finesim97 commented 4 years ago

Seems to be the right version. Could you try to build your own image (it may take a while) and run that instead: https://gist.github.com/Finesim97/6b7635d526bba47f8cbb9f609440d688

vrou1995 commented 4 years ago

Hi,

Thanks for this. However, the result is the same. I made sure that it built successfully and that I am using the correct image, but it still fails with exactly the same error.

Any idea why this might be the case?

Thanks,

Vincent

Finesim97 commented 4 years ago

Do you know how to modify the image to remove the "2> /dev/null" from the diamond call? I would really like to do this for you, but I am currently drowning in work.

vrou1995 commented 4 years ago

I am afraid I don't. It's the first time I've used Docker, so I'm really only just getting to grips with it.

If you can find time, I would greatly appreciate it, but I understand things can get hectic!

Cheers,

Vincent

Finesim97 commented 4 years ago

Hi, sorry for the long wait. This should build a Docker image with the DIAMOND errors showing up:

mkdir metaergold
cd metaergold
wget https://raw.githubusercontent.com/xiaoli-dong/metaerg/532bd189a446f8da85373335d48105ace49f3f7e/Dockerfile
wget -qO- https://gist.githubusercontent.com/Finesim97/2d6fc61c7cca1b9f4df1c21356116ebc/raw/ba7cca34ba81a918f8ac47f0abebf4e8b797266e/Dockerfile_diamonderr.diff | patch Dockerfile
sudo docker build -t metaerg-old .
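
(Once built, the patched image can be run exactly like the upstream one, swapping in the local tag; a sketch reusing the mount convention from earlier in the thread:)

docker run --shm-size 2g --rm -u $(id -u):$(id -g) -it \
    -v /your_local_dir/:/data/ metaerg-old \
    metaerg.pl --dbdir /data/db --outdir /data/myoutput /data/contig.fasta
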
vrou1995 commented 4 years ago

Hi,

Thanks for getting back to me. Unfortunately, it still exits at the diamond step.

Could you double check when you get the chance?

Thanks,

Vincent

Finesim97 commented 4 years ago

And there is no additional output?

Finesim97 commented 4 years ago

And the "command called" line doesn't include the /dev/null?

xiaoli-dong commented 4 years ago

I will find some time and take a look at it

Xiaoli


tarunpal33 commented 2 years ago

Hi, I got the below error from diamond while reaching the annotation section for the demo.fna file:

[Fri Dec 24 18:50:50 2021] Running: /gpfs0/biores/users/tarun/miniconda3/envs/metaerginstall/bin/perl /gpfs0/biores/users/tarun/miniconda3/envs/metaerginstall/metaerg/bin/annotCDs.pl --dbdir /gpfs0/biores/users/tarun/miniconda3/envs/metaerginstall/metaerg/bin/../db --cpus 8 --evalue 1e-05 --identity 20 --coverage 70 --hmmcutoff --cut_ga --hmmevalue 1e-5 --outdir metaergtest/tmp metaergtest/tmp/cds.faa
[Fri Dec 24 18:50:51 2021] Running: diamond blastp --quiet --masking 1 -p 8 -q metaergtest/tmp/cds.faa -d /gpfs0/biores/users/tarun/miniconda3/envs/metaerginstall/metaerg/bin/../db/diamond/uniprot_sprot -e 1e-05 --tmpdir /dev/shm -f 6 qseqid sseqid qlen qstart qend sstart send qframe pident bitscore evalue qcovhsp > metaergtest/tmp/uniprot_sprot.blasttable 2> /dev/null
sh: line 1: 30069 Illegal instruction diamond blastp --quiet --masking 1 -p 8 -q metaergtest/tmp/cds.faa -d /gpfs0/biores/users/tarun/miniconda3/envs/metaerginstall/metaerg/bin/../db/diamond/uniprot_sprot -e 1e-05 --tmpdir /dev/shm -f 6 qseqid sseqid qlen qstart qend sstart send qframe pident bitscore evalue qcovhsp > metaergtest/tmp/uniprot_sprot.blasttable 2> /dev/null
[Fri Dec 24 18:50:51 2021] Could not run command:diamond blastp --quiet --masking 1 -p 8 -q metaergtest/tmp/cds.faa -d /gpfs0/biores/users/tarun/miniconda3/envs/metaerginstall/metaerg/bin/../db/diamond/uniprot_sprot -e 1e-05 --tmpdir /dev/shm -f 6 qseqid sseqid qlen qstart qend sstart send qframe pident bitscore evalue qcovhsp > metaergtest/tmp/uniprot_sprot.blasttable 2> /dev/null, No such file or directory

[Fri Dec 24 18:50:51 2021] Running: diamond blastp --quiet --masking 0 -p 8 -q metaergtest/tmp/cds.faa -d /gpfs0/biores/users/tarun/miniconda3/envs/metaerginstall/metaerg/bin/../db/diamond/genomedb -e 1e-05 --tmpdir /dev/shm -f 6 qseqid sseqid qlen qstart qend sstart send qframe pident bitscore evalue qcovhsp > metaergtest/tmp/genomedb.blasttable 2> /dev/null
sh: line 1: 30072 Illegal instruction diamond blastp --quiet --masking 0 -p 8 -q metaergtest/tmp/cds.faa -d /gpfs0/biores/users/tarun/miniconda3/envs/metaerginstall/metaerg/bin/../db/diamond/genomedb -e 1e-05 --tmpdir /dev/shm -f 6 qseqid sseqid qlen qstart qend sstart send qframe pident bitscore evalue qcovhsp > metaergtest/tmp/genomedb.blasttable 2> /dev/null
[Fri Dec 24 18:50:51 2021] Could not run command:diamond blastp --quiet --masking 0 -p 8 -q metaergtest/tmp/cds.faa -d /gpfs0/biores/users/tarun/miniconda3/envs/metaerginstall/metaerg/bin/../db/diamond/genomedb -e 1e-05 --tmpdir /dev/shm -f 6 qseqid sseqid qlen qstart qend sstart send qframe pident bitscore evalue qcovhsp > metaergtest/tmp/genomedb.blasttable 2> /dev/null, No such file or directory

[Fri Dec 24 18:50:51 2021] Running: hmmsearch --notextw --acc --cut_ga --cpu 8 --tblout metaergtest\/tmp\/Pfam-A.hmm.tblout /gpfs0/biores/users/tarun/miniconda3/envs/metaerginstall/metaerg/bin/../db/hmm/Pfam-A.hmm metaergtest/tmp/cds.faa > /dev/null 2>&1
[Fri Dec 24 18:50:51 2021] Running: hmmsearch --notextw --acc --cut_ga --cpu 8 --tblout metaergtest\/tmp\/TIGRFAMs.hmm.tblout /gpfs0/biores/users/tarun/miniconda3/envs/metaerginstall/metaerg/bin/../db/hmm/TIGRFAMs.hmm metaergtest/tmp/cds.faa > /dev/null 2>&1
[Fri Dec 24 18:50:51 2021] Running: hmmsearch --notextw --acc -E 1e-5 --domE 1e-5 --incE 1e-5 --incdomE 1e-5 --cpu 8 --tblout metaergtest\/tmp\/metabolic.hmm.tblout /gpfs0/biores/users/tarun/miniconda3/envs/metaerginstall/metaerg/bin/../db/hmm/metabolic.hmm metaergtest/tmp/cds.faa > /dev/null 2>&1
Perl exited with active threads: 6 running and unjoined 0 finished and unjoined 0 running and detached
[Fri Dec 24 18:50:51 2021] Could not run command:/gpfs0/biores/users/tarun/miniconda3/envs/metaerginstall/bin/perl /gpfs0/biores/users/tarun/miniconda3/envs/metaerginstall/metaerg/bin/annotCDs.pl --dbdir /gpfs0/biores/users/tarun/miniconda3/envs/metaerginstall/metaerg/bin/../db --cpus 8 --evalue 1e-05 --identity 20 --coverage 70 --hmmcutoff --cut_ga --hmmevalue 1e-5 --outdir metaergtest/tmp metaergtest/tmp/cds.faa,
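
("Illegal instruction" usually means the diamond binary was built for CPU instructions, such as SSE4 or AVX, that the compute node does not support. A sketch of how one might confirm this; the flag names are generic Linux CPU features, not anything MetaErg-specific:)

# See which SIMD extensions the node's CPU advertises
grep -o 'sse4_1\|sse4_2\|avx2\|avx' /proc/cpuinfo | sort -u
# If the required flags are missing, install or build a DIAMOND release that
# matches the node's CPU, and make sure that binary is first on PATH.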

tarunpal33 commented 2 years ago

Command from the terminal:

metaerg.pl --depth /gpfs0/biores/users/tarun/miniconda3/envs/metaerginstall/metaerg/example/demo.depth.txt /gpfs0/biores/users/tarun/miniconda3/envs/metaerginstall/metaerg/example/demo.fna --sp --tm --outdir "metaergtest" --cpus