google / deepconsensus

DeepConsensus uses gap-aware sequence transformers to correct errors in Pacific Biosciences (PacBio) Circular Consensus Sequencing (CCS) data.
BSD 3-Clause "New" or "Revised" License

Error: ValueError: Shapes (2640, 280) and (560, 280) are incompatible #36

Closed: gevro closed this issue 2 years ago

gevro commented 2 years ago

Hi, I'm getting the below error:

Total params: 9,525,067
Trainable params: 9,525,067
Non-trainable params: 0
_________________________________________________________________
Traceback (most recent call last):
  File "/home/user01/.local/lib/python3.8/site-packages/tensorflow/python/training/saving/saveable_object_util.py", line 130, in restore
    assigned_variable = resource_variable_ops.shape_safe_assign_variable_handle(
  File "/home/user01/.local/lib/python3.8/site-packages/tensorflow/python/ops/resource_variable_ops.py", line 308, in shape_safe_assign_variable_handle
    shape.assert_is_compatible_with(value_tensor.shape)
  File "/home/user01/.local/lib/python3.8/site-packages/tensorflow/python/framework/tensor_shape.py", line 1291, in assert_is_compatible_with
    raise ValueError("Shapes %s and %s are incompatible" % (self, other))
ValueError: Shapes (2640, 280) and (560, 280) are incompatible

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/opt/conda/envs/bio/bin/deepconsensus", line 8, in <module>
    sys.exit(run())
  File "/opt/conda/envs/bio/lib/python3.8/site-packages/deepconsensus/cli.py", line 111, in run
    app.run(main, flags_parser=parse_flags)
  File "/share/apps/python/3.8.6/intel/lib/python3.8/site-packages/absl/app.py", line 312, in run
    _run_main(main, args)
  File "/share/apps/python/3.8.6/intel/lib/python3.8/site-packages/absl/app.py", line 258, in _run_main
    sys.exit(main(argv))
  File "/opt/conda/envs/bio/lib/python3.8/site-packages/deepconsensus/cli.py", line 102, in main
    app.run(quick_inference.main, argv=passed)
  File "/share/apps/python/3.8.6/intel/lib/python3.8/site-packages/absl/app.py", line 312, in run
    _run_main(main, args)
  File "/share/apps/python/3.8.6/intel/lib/python3.8/site-packages/absl/app.py", line 258, in _run_main
    sys.exit(main(argv))
  File "/opt/conda/envs/bio/lib/python3.8/site-packages/deepconsensus/inference/quick_inference.py", line 814, in main
    outcome_counter = run()
  File "/opt/conda/envs/bio/lib/python3.8/site-packages/deepconsensus/inference/quick_inference.py", line 734, in run
    loaded_model, model_params = initialize_model(
  File "/opt/conda/envs/bio/lib/python3.8/site-packages/deepconsensus/inference/quick_inference.py", line 476, in initialize_model
    checkpoint.restore(
  File "/home/user01/.local/lib/python3.8/site-packages/tensorflow/python/training/tracking/util.py", line 2537, in restore
    status = self.read(save_path, options=options)
  File "/home/user01/.local/lib/python3.8/site-packages/tensorflow/python/training/tracking/util.py", line 2417, in read
    result = self._saver.restore(save_path=save_path, options=options)
  File "/home/user01/.local/lib/python3.8/site-packages/tensorflow/python/training/tracking/util.py", line 1468, in restore
    base.CheckpointPosition(
  File "/home/user01/.local/lib/python3.8/site-packages/tensorflow/python/training/tracking/base.py", line 295, in restore
    restore_ops = trackable._restore_from_checkpoint_position(self)  # pylint: disable=protected-access
  File "/home/user01/.local/lib/python3.8/site-packages/tensorflow/python/training/tracking/base.py", line 1060, in _restore_from_checkpoint_position
    current_position.checkpoint.restore_saveables(tensor_saveables,
  File "/home/user01/.local/lib/python3.8/site-packages/tensorflow/python/training/tracking/util.py", line 349, in restore_saveables
    new_restore_ops = functional_saver.MultiDeviceSaver(
  File "/home/user01/.local/lib/python3.8/site-packages/tensorflow/python/training/saving/functional_saver.py", line 415, in restore
    restore_ops = restore_fn()
  File "/home/user01/.local/lib/python3.8/site-packages/tensorflow/python/training/saving/functional_saver.py", line 398, in restore_fn
    restore_ops.update(saver.restore(file_prefix, options))
  File "/home/user01/.local/lib/python3.8/site-packages/tensorflow/python/training/saving/functional_saver.py", line 112, in restore
    restore_ops[saveable.name] = saveable.restore(
  File "/home/user01/.local/lib/python3.8/site-packages/tensorflow/python/training/saving/saveable_object_util.py", line 133, in restore
    raise ValueError(
ValueError: Received incompatible tensor with shape (560, 280) when attempting to restore variable with shape (2640, 280) and name model/transformer_input_condenser/kernel/.ATTRIBUTES/VARIABLE_VALUE.
kishwarshafin commented 2 years ago

@gevro ,

Can you please confirm that you are not using r0.2 models with the r0.3 codebase? This looks like you are using an older model with a newer release.
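
One quick way to confirm which model a checkpoint directory actually contains is to print the params.json that ships next to the checkpoint files. A minimal sketch (the path is illustrative; point it at wherever the model was downloaded):

# Dump the training configuration shipped alongside the checkpoint so the
# model release and its expected settings can be confirmed.
import json

with open("/model/params.json") as f:
    print(json.dumps(json.load(f), indent=2))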

gevro commented 2 years ago

I am using the r0.3.1 docker image, and I downloaded the model from the link in the quick start: gsutil cp -r gs://brain-genomics-public/research/deepconsensus/models/v0.3/model_checkpoint/*

Is that not correct?

Thanks.

kishwarshafin commented 2 years ago

@gevro ,

In that case, can you please post the full command you are running? I just followed https://github.com/google/deepconsensus/blob/r0.3/docs/quick_start.md on a VM and everything works correctly.

gevro commented 2 years ago

Sure, here is the command:

singularity run -W /data -B /scratch/projects/lab/bin/deepconsensus/model:/model -B `pwd` /scratch/bin/deepconsensus/deepconsensus_0.3.1.sif deepconsensus run --batch_size=1024 --batch_zmws=100 --cpus 8 --max_passes 100 --subreads_to_ccs=subreads_to_ccs.bam --ccs_bam=ccs.bam --checkpoint=/model/checkpoint --output=output.deepconsensus.fastq
kishwarshafin commented 2 years ago

@gevro , I believe the download didn't happen correctly in this case. Can you please run the following commands and try one more time:

mkdir /scratch/projects/lab/bin/deepconsensus/model/v0.3_model
cd /scratch/projects/lab/bin/deepconsensus/model/v0.3_model
gsutil cp -r gs://brain-genomics-public/research/deepconsensus/models/v0.3/model_checkpoint/'*' .
ls -lha

Then verify that the output (file sizes) matches this:

103M Aug  1 17:54 checkpoint.data-00000-of-00001
4.9K Aug  1 17:54 checkpoint.index
3.4K Aug  1 17:54 params.json

And then run:

singularity run -W /data -B /scratch/projects/lab/bin/deepconsensus/model/v0.3_model:/model -B `pwd` /scratch/bin/deepconsensus/deepconsensus_0.3.1.sif deepconsensus run --batch_size=1024 --batch_zmws=100 --cpus 8 --max_passes 100 --subreads_to_ccs=subreads_to_ccs.bam --ccs_bam=ccs.bam --checkpoint=/model/checkpoint --output=output.deepconsensus.fastq

This should work.
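As an extra sanity check, a short script along these lines can flag a truncated download. The expected sizes are approximate, copied from the listing above:

# Rough check that the three checkpoint files downloaded completely;
# expected sizes are approximate, taken from the ls -lha listing above.
import os

expected_bytes = {
    "checkpoint.data-00000-of-00001": 103 * 1024 * 1024,  # ~103M
    "checkpoint.index": 5 * 1024,                          # ~4.9K
    "params.json": 3 * 1024,                               # ~3.4K
}
for name, approx in expected_bytes.items():
    size = os.path.getsize(name)
    status = "OK" if size >= 0.9 * approx else "SUSPICIOUSLY SMALL"
    print(f"{name}: {size} bytes ({status})")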

gevro commented 2 years ago

I checked, and that downloads the same files I already have. What else could be the issue?

kishwarshafin commented 2 years ago

@gevro ,

Did you make sure that the directory you downloaded to didn't contain a v0.2 checkpoint from before? Can you please run the commands above as a sanity check that the download wasn't the issue? I get the same error as you when I run deepconsensus v0.3.1 with v0.2 models, so I suspect that is the issue.

gevro commented 2 years ago

I'm still getting the same error. Maybe your deepconsensus_0.3.1 docker image is the problem and has the wrong version of deepconsensus packaged inside?

Can you double-check using your docker image?

If not, how do we fix this issue?

_________________________________________________________________
Traceback (most recent call last):
  File "/home/lab/.local/lib/python3.8/site-packages/tensorflow/python/training/saving/saveable_object_util.py", line 130, in restore
    assigned_variable = resource_variable_ops.shape_safe_assign_variable_handle(
  File "/home/lab/.local/lib/python3.8/site-packages/tensorflow/python/ops/resource_variable_ops.py", line 308, in shape_safe_assign_variable_handle
    shape.assert_is_compatible_with(value_tensor.shape)
  File "/home/lab/.local/lib/python3.8/site-packages/tensorflow/python/framework/tensor_shape.py", line 1291, in assert_is_compatible_with
    raise ValueError("Shapes %s and %s are incompatible" % (self, other))
ValueError: Shapes (2640, 280) and (560, 280) are incompatible

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/opt/conda/envs/bio/bin/deepconsensus", line 8, in <module>
    sys.exit(run())
  File "/opt/conda/envs/bio/lib/python3.8/site-packages/deepconsensus/cli.py", line 111, in run
    app.run(main, flags_parser=parse_flags)
  File "/share/apps/python/3.8.6/intel/lib/python3.8/site-packages/absl/app.py", line 312, in run
    _run_main(main, args)
  File "/share/apps/python/3.8.6/intel/lib/python3.8/site-packages/absl/app.py", line 258, in _run_main
    sys.exit(main(argv))
  File "/opt/conda/envs/bio/lib/python3.8/site-packages/deepconsensus/cli.py", line 102, in main
    app.run(quick_inference.main, argv=passed)
  File "/share/apps/python/3.8.6/intel/lib/python3.8/site-packages/absl/app.py", line 312, in run
    _run_main(main, args)
  File "/share/apps/python/3.8.6/intel/lib/python3.8/site-packages/absl/app.py", line 258, in _run_main
    sys.exit(main(argv))
  File "/opt/conda/envs/bio/lib/python3.8/site-packages/deepconsensus/inference/quick_inference.py", line 814, in main
    outcome_counter = run()
  File "/opt/conda/envs/bio/lib/python3.8/site-packages/deepconsensus/inference/quick_inference.py", line 734, in run
    loaded_model, model_params = initialize_model(
  File "/opt/conda/envs/bio/lib/python3.8/site-packages/deepconsensus/inference/quick_inference.py", line 476, in initialize_model
    checkpoint.restore(
  File "/home/lab/.local/lib/python3.8/site-packages/tensorflow/python/training/tracking/util.py", line 2537, in restore
    status = self.read(save_path, options=options)
  File "/home/lab/.local/lib/python3.8/site-packages/tensorflow/python/training/tracking/util.py", line 2417, in read
    result = self._saver.restore(save_path=save_path, options=options)
  File "/home/lab/.local/lib/python3.8/site-packages/tensorflow/python/training/tracking/util.py", line 1468, in restore
    base.CheckpointPosition(
  File "/home/lab/.local/lib/python3.8/site-packages/tensorflow/python/training/tracking/base.py", line 295, in restore
    restore_ops = trackable._restore_from_checkpoint_position(self)  # pylint: disable=protected-access
  File "/home/lab/.local/lib/python3.8/site-packages/tensorflow/python/training/tracking/base.py", line 1060, in _restore_from_checkpoint_position
    current_position.checkpoint.restore_saveables(tensor_saveables,
  File "/home/lab/.local/lib/python3.8/site-packages/tensorflow/python/training/tracking/util.py", line 349, in restore_saveables
    new_restore_ops = functional_saver.MultiDeviceSaver(
  File "/home/lab/.local/lib/python3.8/site-packages/tensorflow/python/training/saving/functional_saver.py", line 415, in restore
    restore_ops = restore_fn()
  File "/home/lab/.local/lib/python3.8/site-packages/tensorflow/python/training/saving/functional_saver.py", line 398, in restore_fn
    restore_ops.update(saver.restore(file_prefix, options))
  File "/home/lab/.local/lib/python3.8/site-packages/tensorflow/python/training/saving/functional_saver.py", line 112, in restore
    restore_ops[saveable.name] = saveable.restore(
  File "/home/lab/.local/lib/python3.8/site-packages/tensorflow/python/training/saving/saveable_object_util.py", line 133, in restore
    raise ValueError(
ValueError: Received incompatible tensor with shape (560, 280) when attempting to restore variable with shape (2640, 280) and name model/transformer_input_condenser/kernel/.ATTRIBUTES/VARIABLE_VALUE.
WARNING:tensorflow:Detecting that an object or model or tf.train.Checkpoint is being deleted with unrestored values. See the following logs for the specific values in question. To silence these warnings, use `status.expect_partial()`. See https://www.tensorflow.org/api_docs/python/tf/train/Checkpoint#restorefor details about the status object returned by the restore function.
W0802 21:18:21.240287 23396527953728 util.py:200] Detecting that an object or model or tf.train.Checkpoint is being deleted with unrestored values. See the following logs for the specific values in question. To silence these warnings, use `status.expect_partial()`. See https://www.tensorflow.org/api_docs/python/tf/train/Checkpoint#restorefor details about the status object returned by the restore function.
WARNING:tensorflow:Value in checkpoint could not be found in the restored object: (root).save_counter
W0802 21:18:21.240466 23396527953728 util.py:209] Value in checkpoint could not be found in the restored object: (root).save_counter
WARNING:tensorflow:Value in checkpoint could not be found in the restored object: (root).optimizer.iter
W0802 21:18:21.240509 23396527953728 util.py:209] Value in checkpoint could not be found in the restored object: (root).optimizer.iter
pichuan commented 2 years ago

@gevro, you are setting max_passes in your command. You can't change this setting without using a model trained for a larger number of passes.

Our current model only supports max_passes=20.

Remove this flag, and it should work.
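
For readers hitting the same traceback: this is the generic TensorFlow behavior when a checkpoint is restored into a model whose layers were built with different shapes. The following is only a minimal sketch of that failure mode, not DeepConsensus code; the shapes are chosen to mirror the ones in the traceback:

import tensorflow as tf

# "Training-time" layer: kernel shape (560, 280), analogous to a checkpoint
# trained with the smaller max_passes.
small = tf.keras.layers.Dense(280)
small.build((None, 560))
path = tf.train.Checkpoint(layer=small).save("/tmp/demo_ckpt")

# "Inference-time" layer: kernel shape (2640, 280), analogous to a graph
# built for a larger max_passes.
big = tf.keras.layers.Dense(280)
big.build((None, 2640))
try:
    tf.train.Checkpoint(layer=big).restore(path)
except ValueError as err:
    print(err)  # "Received incompatible tensor with shape (560, 280) ..."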

gevro commented 2 years ago

OK, thanks. Is there any possibility of training a model for a higher max_passes?

gevro commented 2 years ago

Also, now I'm getting a different error:

=================================================================
Total params: 8,942,667
Trainable params: 8,942,667
Non-trainable params: 0
_________________________________________________________________
I0802 23:05:13.125668 22923163895616 model_utils.py:231] Setting hidden size to transformer_input_size.
I0802 23:05:13.125833 22923163895616 quick_inference.py:484] Finished initialize_model.
I0802 23:05:13.126322 22923163895616 quick_inference.py:738] Model setup took 0.6180477142333984 seconds.
Traceback (most recent call last):
  File "/opt/conda/envs/bio/lib/python3.8/site-packages/deepconsensus/preprocess/utils.py", line 981, in proc_feeder
    ccs_bam_read = next(ccs_bam_h)
  File "pysam/libcalignmentfile.pyx", line 1874, in pysam.libcalignmentfile.AlignmentFile.__next__
StopIteration

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/opt/conda/envs/bio/bin/deepconsensus", line 8, in <module>
    sys.exit(run())
  File "/opt/conda/envs/bio/lib/python3.8/site-packages/deepconsensus/cli.py", line 111, in run
    app.run(main, flags_parser=parse_flags)
  File "/share/apps/python/3.8.6/intel/lib/python3.8/site-packages/absl/app.py", line 312, in run
    _run_main(main, args)
  File "/share/apps/python/3.8.6/intel/lib/python3.8/site-packages/absl/app.py", line 258, in _run_main
    sys.exit(main(argv))
  File "/opt/conda/envs/bio/lib/python3.8/site-packages/deepconsensus/cli.py", line 102, in main
    app.run(quick_inference.main, argv=passed)
  File "/share/apps/python/3.8.6/intel/lib/python3.8/site-packages/absl/app.py", line 312, in run
    _run_main(main, args)
  File "/share/apps/python/3.8.6/intel/lib/python3.8/site-packages/absl/app.py", line 258, in _run_main
    sys.exit(main(argv))
  File "/opt/conda/envs/bio/lib/python3.8/site-packages/deepconsensus/inference/quick_inference.py", line 814, in main
    outcome_counter = run()
  File "/opt/conda/envs/bio/lib/python3.8/site-packages/deepconsensus/inference/quick_inference.py", line 762, in run
    for zmw, subreads, dc_config in input_file_generator:
  File "/opt/conda/envs/bio/lib/python3.8/site-packages/deepconsensus/inference/quick_inference.py", line 428, in stream_bam
    for input_data in proc_feeder():
RuntimeError: generator raised StopIteration
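
One possible cause of a StopIteration inside proc_feeder is an empty or truncated ccs.bam; a quick check with pysam (assuming it is installed) can rule that out:

import pysam

# Count the reads in ccs.bam; an empty or truncated file would explain the
# StopIteration above. check_sq=False is needed for unaligned CCS BAMs that
# have no @SQ lines.
with pysam.AlignmentFile("ccs.bam", "rb", check_sq=False) as bam:
    n_reads = sum(1 for _ in bam)
print(f"ccs.bam contains {n_reads} reads")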
danielecook commented 2 years ago

@gevro what does the distribution of the number of passes look like in your dataset?

We do not expect a large benefit from training a model for more passes. Quality already goes up as the number of subread passes increases, so DeepConsensus becomes less necessary as the sequence gets more accurate on its own.
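
One way to look at that distribution is to tally the PacBio "np" (number of full-length passes) tag in the CCS BAM. A minimal sketch, assuming pysam is installed and the tag is present:

import collections
import pysam

# Tally the "np" tag across ccs.bam to see how many reads actually have
# high pass counts.
pass_counts = collections.Counter()
with pysam.AlignmentFile("ccs.bam", "rb", check_sq=False) as bam:
    for read in bam:
        if read.has_tag("np"):
            pass_counts[read.get_tag("np")] += 1

for n_passes in sorted(pass_counts):
    print(n_passes, pass_counts[n_passes])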

gevro commented 2 years ago

Is it possible to connect offline via e-mail so I can explain better?

AndrewCarroll commented 2 years ago

Hi @gevro

To discuss model training possibilities by email, can you send a message to awcarroll@google.com? I can loop in Dan and we can continue the discussion from there.

danielecook commented 2 years ago

I believe we have resolved this issue. If there are any further questions, please reach out.