google / deepconsensus

DeepConsensus uses gap-aware sequence transformers to correct errors in Pacific Biosciences (PacBio) Circular Consensus Sequencing (CCS) data.
BSD 3-Clause "New" or "Revised" License

error in 1_merge_datasets step #8

Closed: guosongjia closed this issue 3 years ago

guosongjia commented 3 years ago

Dear all,

I'm a PhD student working with a set of PacBio CCS reads, and I noticed the release of the DeepConsensus preprint. This tool can improve the accuracy of CCS reads generated by pbccs. I installed it on our lab server and ran into the following error when using it:

$ INPUTS="$(pwd)"
$ OUTPUTS="$(pwd)"
$ CHECKPOINT_PATH="/home/data/vip21/jgs/test_deepconsensus/models/checkpoint-50"
$ python3 -m deepconsensus.scripts.run_deepconsensus \
    --input_subreads_aligned=${INPUTS}/subreads_to_ccs.bam \
    --input_subreads_unaligned=${INPUTS}/subreads.bam \
    --input_ccs_fasta=${INPUTS}/ccs.fasta \
    --output_directory=${OUTPUTS} \
    --checkpoint=${CHECKPOINT_PATH}

***** Running the command:*****
python3 -m deepconsensus.preprocess.merge_datasets   --input_bam=/home/data/vip21/jgs/test_deepconsensus/subreads_to_ccs.bam   --input_unaligned_bam=/home/data/vip21/jgs/test_deepconsensus/subreads.bam   --output_path=/home/data/vip21/jgs/test_deepconsensus/1_merge_datasets   --inference=true
*******************************

2021-09-04 09:01:31.543001: W tensorflow/stream_executor/platform/default/dso_loader.cc:60] Could not load dynamic library 'libcudart.so.11.0'; dlerror: libcudart.so.11.0: cannot open shared object file: No such file or directory
2021-09-04 09:01:31.543076: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.
I0904 09:01:43.412593 140136974808896 fn_api_runner_transforms.py:548] ==================== <function annotate_downstream_side_inputs at 0x7f71432847b8> ====================
I0904 09:01:43.413714 140136974808896 fn_api_runner_transforms.py:548] ==================== <function fix_side_input_pcoll_coders at 0x7f71432848c8> ====================
I0904 09:01:43.414398 140136974808896 fn_api_runner_transforms.py:548] ==================== <function lift_combiners at 0x7f7143284950> ====================
I0904 09:01:43.414686 140136974808896 fn_api_runner_transforms.py:548] ==================== <function expand_sdf at 0x7f71432849d8> ====================
I0904 09:01:43.416237 140136974808896 fn_api_runner_transforms.py:548] ==================== <function expand_gbk at 0x7f7143284a60> ====================
I0904 09:01:43.417087 140136974808896 fn_api_runner_transforms.py:548] ==================== <function sink_flattens at 0x7f7143284b70> ====================
I0904 09:01:43.417660 140136974808896 fn_api_runner_transforms.py:548] ==================== <function greedily_fuse at 0x7f7143284bf8> ====================
I0904 09:01:43.420369 140136974808896 fn_api_runner_transforms.py:548] ==================== <function read_to_impulse at 0x7f7143284c80> ====================
I0904 09:01:43.420574 140136974808896 fn_api_runner_transforms.py:548] ==================== <function impulse_to_input at 0x7f7143284d08> ====================
I0904 09:01:43.420840 140136974808896 fn_api_runner_transforms.py:548] ==================== <function inject_timer_pcollections at 0x7f7143284ea0> ====================
I0904 09:01:43.421363 140136974808896 fn_api_runner_transforms.py:548] ==================== <function sort_stages at 0x7f7143284f28> ====================
I0904 09:01:43.421562 140136974808896 fn_api_runner_transforms.py:548] ==================== <function window_pcollection_coders at 0x7f7142d02048> ====================
I0904 09:01:43.425726 140136974808896 statecache.py:154] Creating state cache with size 100
I0904 09:01:43.427177 140136974808896 fn_api_runner.py:2011] Created Worker handler <apache_beam.runners.portability.fn_api_runner.EmbeddedWorkerHandler object at 0x7f7142cd9cf8> for environment urn: "beam:env:embedded_python:v1"

I0904 09:01:43.427564 140136974808896 fn_api_runner.py:974] Running (((((ref_AppliedPTransform_write_merged_subreads/Write/WriteImpl/DoOnce/Impulse_50)+(ref_AppliedPTransform_write_merged_subreads/Write/WriteImpl/DoOnce/FlatMap(<lambda at core.py:2639>)_51))+(ref_AppliedPTransform_write_merged_subreads/Write/WriteImpl/DoOnce/Map(decode)_53))+(ref_AppliedPTransform_write_merged_subreads/Write/WriteImpl/InitializeWrite_54))+(ref_PCollection_PCollection_33/Write))+(ref_PCollection_PCollection_34/Write)
I0904 09:01:43.482085 140136974808896 fn_api_runner.py:974] Running (((ref_AppliedPTransform_read_unaligned_reads/Read/_SDFBoundedSourceWrapper/Impulse_23)+(read_unaligned_reads/Read/_SDFBoundedSourceWrapper/ParDo(SDFBoundedSourceDoFn)/PairWithRestriction))+(read_unaligned_reads/Read/_SDFBoundedSourceWrapper/ParDo(SDFBoundedSourceDoFn)/SplitAndSizeRestriction))+(ref_PCollection_PCollection_13_split/Write)
I0904 09:01:43.515367 140136974808896 fn_api_runner.py:974] Running ((((ref_PCollection_PCollection_13_split/Read)+(read_unaligned_reads/Read/_SDFBoundedSourceWrapper/ParDo(SDFBoundedSourceDoFn)/Process))+(ref_AppliedPTransform_reshuffle_unaligned_reads/AddRandomKeys_26))+(ref_AppliedPTransform_reshuffle_unaligned_reads/ReshufflePerKey/Map(reify_timestamps)_28))+(reshuffle_unaligned_reads/ReshufflePerKey/GroupByKey/Write)
2021-09-04 09:01:43.533782: W nucleus/io/sam_reader.cc:115] Unknown tag pb: in header line, ignoring: @HD   VN:1.5  SO:unknown  pb:3.0.7
I0904 09:01:43.533916 140124420044544 genomics_reader.py:208] Reading /home/data/vip21/jgs/test_deepconsensus/subreads.bam with NativeSamReader
I0904 10:52:22.286793 140136974808896 fn_api_runner.py:974] Running ((((((reshuffle_unaligned_reads/ReshufflePerKey/GroupByKey/Read)+(ref_AppliedPTransform_reshuffle_unaligned_reads/ReshufflePerKey/FlatMap(restore_timestamps)_33))+(ref_AppliedPTransform_reshuffle_unaligned_reads/RemoveRandomKeys_34))+(ref_AppliedPTransform_get_unaligned_read_name_35))+(ref_AppliedPTransform_group_by_read_name/pair_with_1_38))+(group_by_read_name/Flatten/Transcode/1))+(group_by_read_name/Flatten/Write/1)
Traceback (most recent call last):
  File "/home/data/vip21/miniconda3/envs/deepconsensus/lib/python3.6/runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "/home/data/vip21/miniconda3/envs/deepconsensus/lib/python3.6/runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "/home/data/vip21/miniconda3/envs/deepconsensus/lib/python3.6/site-packages/deepconsensus/scripts/run_deepconsensus.py", line 246, in <module>
    app.run(main)
  File "/home/data/vip21/miniconda3/envs/deepconsensus/lib/python3.6/site-packages/absl/app.py", line 312, in run
    _run_main(main, args)
  File "/home/data/vip21/miniconda3/envs/deepconsensus/lib/python3.6/site-packages/absl/app.py", line 258, in _run_main
    sys.exit(main(argv))
  File "/home/data/vip21/miniconda3/envs/deepconsensus/lib/python3.6/site-packages/deepconsensus/scripts/run_deepconsensus.py", line 238, in main
    example_width=EXAMPLE_WIDTH)
  File "/home/data/vip21/miniconda3/envs/deepconsensus/lib/python3.6/site-packages/deepconsensus/scripts/run_deepconsensus.py", line 220, in run_deepconsensus
    run_command(command, dry_run=dry_run, log_file=log_file)
  File "/home/data/vip21/miniconda3/envs/deepconsensus/lib/python3.6/site-packages/deepconsensus/scripts/run_deepconsensus.py", line 184, in run_command
    raise RuntimeError(f'Command failed: \n{command}\n')
RuntimeError: Command failed: 
python3 -m deepconsensus.preprocess.merge_datasets   --input_bam=/home/data/vip21/jgs/test_deepconsensus/subreads_to_ccs.bam   --input_unaligned_bam=/home/data/vip21/jgs/test_deepconsensus/subreads.bam   --output_path=/home/data/vip21/jgs/test_deepconsensus/1_merge_datasets   --inference=true

My ccs.fasta file is about 800 MB, my subreads_to_ccs.bam is 8.9 GB, and my subreads.bam is about 28 GB.

Can anyone help me solve this problem?

Best,
Guo-Song

gunjanbaid commented 3 years ago

@guosongjia I am not sure what the issue is from the stack trace above. Could you run the failed command directly? That might surface a more detailed error message:

python3 -m deepconsensus.preprocess.merge_datasets \
  --input_bam=/home/data/vip21/jgs/test_deepconsensus/subreads_to_ccs.bam \
  --input_unaligned_bam=/home/data/vip21/jgs/test_deepconsensus/subreads.bam \
  --output_path=/home/data/vip21/jgs/test_deepconsensus/1_merge_datasets \
  --inference=true
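If it fails the same way, it would also help to capture the output, since the wrapper script may be swallowing the underlying traceback. A minimal sketch, assuming a bash shell; the log filename is just an example:

# Merge stderr into stdout and save everything to a file while still
# printing it to the terminal.
python3 -m deepconsensus.preprocess.merge_datasets \
  --input_bam=/home/data/vip21/jgs/test_deepconsensus/subreads_to_ccs.bam \
  --input_unaligned_bam=/home/data/vip21/jgs/test_deepconsensus/subreads.bam \
  --output_path=/home/data/vip21/jgs/test_deepconsensus/1_merge_datasets \
  --inference=true 2>&1 | tee merge_datasets.log
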
guosongjia commented 3 years ago

@gunjanbaid Thanks for your reply!

I have retried several times since submitting this issue, but every run failed. I suspect the error is caused by running out of memory.

I run this program on our bioinformatics server, which has about 640 GB of RAM. Watching the memory usage in top, I found that 1_merge_datasets never finished: it kept growing until it occupied about 620 GB, and our server administrator runs a script that automatically kills any process consuming more than 90% of memory.

The software's current memory consumption is not user-friendly for most bioinformatics users; I hope it will be optimized in upcoming releases.

Best,
Guo-Song

gunjanbaid commented 3 years ago

Hi @guosongjia, definitely. The team is working on improving the usability of the software, and we hope to have a solution that works for you soon.
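In the meantime, one possible workaround on your side: cap the step's memory below the watchdog's 90% threshold so it fails fast with a Python MemoryError instead of being killed, and record peak usage for reporting. This is a minimal sketch, assuming a bash shell and GNU time; the 500 GB cap is just an example value:

# ulimit -v takes KiB; this caps virtual memory at ~500 GB for the current
# shell session, so run it in a dedicated shell or subshell.
ulimit -v $((500 * 1024 * 1024))
# GNU time's -v flag reports "Maximum resident set size" when the run ends.
/usr/bin/time -v python3 -m deepconsensus.preprocess.merge_datasets \
  --input_bam=${INPUTS}/subreads_to_ccs.bam \
  --input_unaligned_bam=${INPUTS}/subreads.bam \
  --output_path=${OUTPUTS}/1_merge_datasets \
  --inference=true

The peak memory number from GNU time would also be a useful data point to include in reports like this one.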