Closed · jferbe1 closed this 10 months ago

Hello all,

I am currently trying to run DeepConsensus using Singularity with the following command:

```
singularity run -B /work /work/jferbe1/Eunicea/Genome/Deep_Genome/Eunicea_2/DeepConsensus/deepconsensus_1.2.0.sif \
  deepconsensus run \
  --subreads_to_ccs=subreads_to_ccs.bam \
  --ccs_bam=.ccs.bam \
  --checkpoint=/home/deepconsensus/model/v0.3_model/checkpoint \
  --output=DeepConsensus/dc.fastq
```

I have been receiving this error when running it:

```
KeyError: 'Exception encountered when calling layer "encoder_only_learned_values_transformer" (type EncoderOnlyLearnedValuesTransformer).

'attn_win_size'

Call arguments received by layer "encoder_only_learned_values_transformer" (type EncoderOnlyLearnedValuesTransformer):
  • inputs=tf.Tensor(shape=(1, 85, 120, 1), dtype=float32)
  • training=None'
```

I am not sure how to interpret it, so if someone could help, that would be greatly appreciated.

Thanks, Jackson
Your --ccs_bam flag doesn't seem to be set correctly. Can you fix that and rerun?
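For reference, --ccs_bam should name the CCS BAM file itself; your current value, .ccs.bam, has no basename. A sketch of the same invocation with a placeholder filename ("movie1.ccs.bam" is hypothetical, substitute your actual file):

```bash
# Same invocation, with --ccs_bam pointing at an actual CCS BAM;
# "movie1.ccs.bam" is a placeholder name, not a file from this thread.
singularity run -B /work /work/jferbe1/Eunicea/Genome/Deep_Genome/Eunicea_2/DeepConsensus/deepconsensus_1.2.0.sif \
  deepconsensus run \
  --subreads_to_ccs=subreads_to_ccs.bam \
  --ccs_bam=movie1.ccs.bam \
  --checkpoint=/home/deepconsensus/model/v0.3_model/checkpoint \
  --output=DeepConsensus/dc.fastq
```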
Thank you for the swift response. I re-ran it with the fixed flag, and I still receive the same error.
In case it helps, here is the full output from my run:
```
W1208 09:49:38.220699 22625633736512 model_utils.py:449] A new parameter (CCS_BQ_MAX=95) was added to the base config that is not present in params.json
W1208 09:49:38.220891 22625633736512 model_utils.py:449] A new parameter (band_width=None) was added to the base config that is not present in params.json
W1208 09:49:38.220927 22625633736512 model_utils.py:449] A new parameter (ccs_bq_hidden_size=1) was added to the base config that is not present in params.json
W1208 09:49:38.220952 22625633736512 model_utils.py:449] A new parameter (conv_model=resnet50) was added to the base config that is not present in params.json
W1208 09:49:38.220978 22625633736512 model_utils.py:449] A new parameter (rezero=False) was added to the base config that is not present in params.json
W1208 09:49:38.221004 22625633736512 model_utils.py:449] A new parameter (total_rows=None) was added to the base config that is not present in params.json
W1208 09:49:38.221026 22625633736512 model_utils.py:449] A new parameter (trial=1) was added to the base config that is not present in params.json
W1208 09:49:38.221050 22625633736512 model_utils.py:449] A new parameter (use_ccs_bq=False) was added to the base config that is not present in params.json
I1208 09:49:38.222167 22625633736512 quick_inference.py:863] Using multiprocessing: cpus is 63.
I1208 09:49:38.222408 22625633736512 model_utils.py:345] Setting hidden size to transformer_input_size.
I1208 09:49:38.222475 22625633736512 quick_inference.py:511] Loading /home/jferbe1/deepconsensus/model/v0.3_model/checkpoint
2023-12-08 09:49:38.224418: I tensorflow/core/platform/cpu_feature_guard.cc:193] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 AVX512F AVX512_VNNI FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2023-12-08 09:49:38.228372: I tensorflow/core/common_runtime/process_util.cc:146] Creating new thread pool with default inter op setting:
I1208 09:49:38.315788 22625633736512 networks.py:427] Condensing input.
Traceback (most recent call last):
  File "/opt/conda/envs/bio/bin/deepconsensus", line 8, in <module>
```
Thank you for following up. @jferbe1, can you give more details on the exact command you ran and describe the data you used?
Also, did you try running with the data in https://github.com/google/deepconsensus/blob/r1.2/docs/quick_start.md, and did that work for you in the Singularity setting?
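Something like the following would be a good sanity check, a sketch of the quick-start test under Singularity; the gs:// paths and file names are recalled from the linked page and should be verified there:

```bash
# Fetch the quick-start inputs and the v1.2 model, then run the same container on them.
# The gs:// paths and file names below are recalled from the quick start and
# should be double-checked against the linked page.
gsutil cp gs://brain-genomics-public/research/deepconsensus/quickstart/v1.2/subreads_to_ccs.bam .
gsutil cp gs://brain-genomics-public/research/deepconsensus/quickstart/v1.2/ccs.bam .
gsutil cp -r gs://brain-genomics-public/research/deepconsensus/models/v1.2/model_checkpoint .

singularity run -B "$PWD" deepconsensus_1.2.0.sif \
  deepconsensus run \
  --subreads_to_ccs=subreads_to_ccs.bam \
  --ccs_bam=ccs.bam \
  --checkpoint=model_checkpoint/checkpoint \
  --output=dc_quickstart.fastq
```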
Hi @jferbe1, given the inactivity of this thread, I'll close it. Feel free to reopen with more information so we can help. Thanks!
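A closing note for anyone who lands here with the same KeyError: 'attn_win_size'. The command above loads the v0.3 model checkpoint inside the 1.2.0 container, and one plausible (unconfirmed here) cause is a model/code version mismatch: the v1.2 code reads attn_win_size from params.json, which an older model's config may not define; the "A new parameter ... not present in params.json" warnings in the log point the same way. A quick check, assuming params.json sits alongside the checkpoint files:

```bash
# Check whether the model's params.json defines attn_win_size.
# Assumption: params.json lives next to the checkpoint, per the warnings above.
if grep -q 'attn_win_size' /home/jferbe1/deepconsensus/model/v0.3_model/params.json; then
  echo "attn_win_size present"
else
  echo "attn_win_size missing -- try the model that matches your DeepConsensus release"
fi
```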