Closed: XC-Zhai closed this issue 1 year ago
I could improve this, but it seems you have only one species in your sample after quality filtering.
Could you please check the tables Binning/DASTool/bins2species.tsv and Binning/DASTool/filtered_bin_info.tsv, in particular the column Representative?
I can fix the bug, but what do you intend to do with only one genome as representative?
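To check the tables yourself, you can count how many bins are flagged as representative. This is only a sketch: the inline string stands in for Binning/DASTool/filtered_bin_info.tsv, and the exact column names are assumptions based on the description above.

```python
import csv
import io

# Hypothetical excerpt of Binning/DASTool/filtered_bin_info.tsv; with real data,
# open the file instead of this inline string. Column names are assumptions.
tsv = "Bin_Id\tRepresentative\nMAG1\tTrue\nMAG2\tFalse\n"
rows = list(csv.DictReader(io.StringIO(tsv), delimiter="\t"))

# Count bins whose Representative flag is set.
n_rep = sum(row["Representative"] == "True" for row in rows)
print(f"{n_rep} representative genome(s)")  # → 1 representative genome(s)
```

If this prints 1, the downstream rules that expect multiple species are the ones hitting the bug discussed here.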
Yes, you are right, only one bin remains there. I probably should not use such a small demo dataset to run the test.
OK, then it makes sense. I provide a small test dataset in the docs with 2-3 genomes. I will keep this issue open as a reference to fix this small bug when there is only one species.
Sorry for a follow-up error. I also got another error, as follows, when running the DRAM annotation:

[Tue Jul 25 15:45:14 2023]
Error in rule DRAM_annotate:
    jobid: 173
    input: genomes/genomes/MAG2.fasta, /home/projects/atlas_db1/DRAM/DRAM.config
    output: genomes/annotations/dram/intermediate_files/MAG2
    log: logs/dram/run_dram/MAG2.log (check log file(s) for error details)
    conda-env: /home/projects/atlas_db1/condaenvs/ccc2b09d1155123764d32f2ee4daa7be
    shell:
        DRAM.py annotate --config_loc /home/projects/atlas_db1/DRAM/DRAM.config --input_fasta genomes/genomes/MAG2.fasta --output_dir genomes/annotations/dram/intermediate_files/MAG2 --threads 4 --min_contig_size 1000 --verbose &> logs/dram/run_dram/MAG2.log
        (one of the commands exited with non-zero exit code; note that snakemake uses bash strict mode!)
Removing output files of failed job DRAM_annotate since they might be corrupted:
genomes/annotations/dram/intermediate_files/MAG2

[Tue Jul 25 15:45:14 2023]
Error in rule DRAM_annotate:
    jobid: 172
    input: genomes/genomes/MAG3.fasta, /home/projects/atlas_db1/DRAM/DRAM.config
    output: genomes/annotations/dram/intermediate_files/MAG3
    log: logs/dram/run_dram/MAG3.log (check log file(s) for error details)
    conda-env: /home/projects/atlas_db1/condaenvs/ccc2b09d1155123764d32f2ee4daa7be
    shell:
        DRAM.py annotate --config_loc /home/projects/atlas_db1/DRAM/DRAM.config --input_fasta genomes/genomes/MAG3.fasta --output_dir genomes/annotations/dram/intermediate_files/MAG3 --threads 4 --min_contig_size 1000 --verbose &> logs/dram/run_dram/MAG3.log
        (one of the commands exited with non-zero exit code; note that snakemake uses bash strict mode!)
Removing output files of failed job DRAM_annotate since they might be corrupted:
genomes/annotations/dram/intermediate_files/MAG3

[Tue Jul 25 15:45:14 2023]
Error in rule DRAM_annotate:
    jobid: 174
    input: genomes/genomes/MAG1.fasta, /home/projects/atlas_db1/DRAM/DRAM.config
    output: genomes/annotations/dram/intermediate_files/MAG1
    log: logs/dram/run_dram/MAG1.log (check log file(s) for error details)
    conda-env: /home/projects/atlas_db1/condaenvs/ccc2b09d1155123764d32f2ee4daa7be
    shell:
        DRAM.py annotate --config_loc /home/projects/atlas_db1/DRAM/DRAM.config --input_fasta genomes/genomes/MAG1.fasta --output_dir genomes/annotations/dram/intermediate_files/MAG1 --threads 4 --min_contig_size 1000 --verbose &> logs/dram/run_dram/MAG1.log
        (one of the commands exited with non-zero exit code; note that snakemake uses bash strict mode!)
Removing output files of failed job DRAM_annotate since they might be corrupted:
genomes/annotations/dram/intermediate_files/MAG1

[Tue Jul 25 15:45:30 2023]
Finished job 119.
5 of 18 steps (28%) done
Shutting down, this might take some time.
Exiting because a job execution failed. Look above for error message
Note the path to the log file for debugging.
Documentation is available at: https://metagenome-atlas.readthedocs.io
Issues can be raised at: https://github.com/metagenome-atlas/atlas/issues
Complete log: .snakemake/log/2023-07-25T154445.616397.snakemake.log
[Atlas] CRITICAL: Command 'snakemake --snakefile /home/projects/atlas/atlas/workflow/Snakefile --directory /home/projects/Test --rerun-triggers mtime --jobs 24 --rerun-incomplete --configfile '/home/projects/Test/config.yaml' --nolock --use-conda --conda-prefix /home/projects/atlas_db1/conda_envs --resources mem=1435 mem_mb=1470457 java_mem=1220 --scheduler greedy all ' returned non-zero exit status 1.
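The snakemake output above only says that the job failed; the underlying cause lives in logs/dram/run_dram/<MAG>.log. A small sketch for printing a log's tail (the demo file below is a throwaway stand-in for a real log, not actual DRAM output):

```python
import tempfile
from pathlib import Path

def tail(path, n=20):
    """Return the last n lines of a text file as a single string."""
    return "\n".join(Path(path).read_text().splitlines()[-n:])

# Demo with a throwaway file standing in for logs/dram/run_dram/MAG2.log:
with tempfile.NamedTemporaryFile("w", suffix=".log", delete=False) as fh:
    fh.write("1 FASTAs found\nTraceback (most recent call last):\n")

print(tail(fh.name, n=1))  # → Traceback (most recent call last):
```

On the real run, calling `tail("logs/dram/run_dram/MAG2.log")` for each failed MAG should surface the traceback that snakemake hid.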
Here is the log from the run_dram rule:
2023-07-25 15:45:13,721 - The log file is created at genomes/annotations/dram/intermediate_files/MAG1/annotate.log.
2023-07-25 15:45:13,721 - 1 FASTAs found
Traceback (most recent call last):
File "/home/projects/atlas_db1/condaenvs/ccc2b09d1155123764d32f2ee4daa7be/bin/DRAM.py", line 207, in
can you tell me what's in /home/projects/atlas_db1/DRAM/DRAM.config
I have attached it below. The DRAM database was transferred from another server (previously we had a problem downloading the database), and the paths in DRAM.config were changed accordingly. DRAM.config.txt
It seems you modified the first key, which should stay "search_databases":
{
"search_databases": {
"kegg": null,
"kofam_hmm": "/...b/kofam_profiles.hmm",
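A quick way to catch this class of mistake is to parse the config and assert that the top-level key is still "search_databases". This is only a sketch: the path below is a placeholder, not a real database location.

```python
import json

# Minimal sanity check for DRAM.config. The kofam_hmm path is a placeholder.
config_text = """
{
  "search_databases": {
    "kegg": null,
    "kofam_hmm": "/path/to/kofam_profiles.hmm"
  }
}
"""
config = json.loads(config_text)

# DRAM looks up its databases under this exact key; renaming it breaks annotation.
assert "search_databases" in config, "top-level key was renamed; DRAM will fail"
print(next(iter(config)))  # → search_databases
```

On a real installation you would read the file instead, e.g. `json.load(open("/home/projects/atlas_db1/DRAM/DRAM.config"))`, and run the same assertion.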
Strike 2 errors in less than 2 h
Much appreciated for your patience and time. ATLAS finished perfectly.
You are welcome.
Hi, thanks for this great tool. I tried it but encountered the following error:
Config file /home/projects/atlas/atlas/workflow/../config/default_config.yaml is extended by additional config specified via the command line.
Building DAG of jobs...
Using shell: /bin/bash
Provided cores: 24
Rules claiming more threads will be scaled down.
Provided resources: mem_mb=150000, mem_mib=150000, time_min=300
Singularity containers: ignored
Select jobs to execute...
Changing to shadow directory: /home/projects/cow_test/.snakemake/shadow/tmpcqbbx8qm
Not cleaning up /home/projects/cow_test/.snakemake/shadow/tmpcqbbx8qm/.snakemake/scripts/tmp9hz_kvbf.rename_genomes.py

[Tue Jul 25 14:55:41 2023]
Error in rule rename_genomes:
    jobid: 0
    input: Binning/DASTool/raw_bins/paths.tsv, Binning/DASTool/bins2species.tsv, Binning/DASTool/filtered_bin_info.tsv
    output: genomes/genomes, genomes/clustering/contig2genome.tsv, genomes/clustering/old2newID.tsv, genomes/clustering/allbins2genome.tsv, genomes/genome_quality.tsv
    log: logs/genomes/rename_genomes.log (check log file(s) for error details)

Shutting down, this might take some time.
Exiting because a job execution failed. Look above for error message
[Tue Jul 25 14:55:42 2023]
Finished job 124.
111 of 137 steps (81%) done
Here is the relevant log output:
Atlas version 2.17.2
Thanks a lot for helping me figure out the problem.
Best regards, Xichuan