artic-network / fieldbioinformatics

The ARTIC field bioinformatics pipeline
MIT License

pipeline has stopped working...! #121

Open rebeelouise opened 1 year ago

rebeelouise commented 1 year ago

Hey, I haven't used artic in a while and I am now hitting errors at the medaka step of the pipeline; the full error is below. Does anyone know how to fix it?

```
Running: medaka consensus --model r941_min_high_g360 --threads 60 --chunk_len 800 --chunk_ovlp 400 --RG 1 /home/rebee/projects/analysis/20221017/FAS60816/filtered/FAS60816_barcode02_filtered.trimmed.rg.sorted.bam /home/rebee/projects/analysis/20221017/FAS60816/filtered/FAS60816_barcode02_filtered.1.hdf
[13:52:28 - ValidArgs] Reads will be filtered to only those with RG tag: 1
[13:52:28 - Predict] Reducing threads to 2, anymore is a waste.
[13:52:28 - Predict] It looks like you are running medaka without a GPU and attempted to set a high number of threads. We have scaled this down to an optimal number. If you wish to improve performance please see https://nanoporetech.github.io/medaka/installation.html#improving-parallelism.
[13:52:28 - Predict] Setting tensorflow inter/intra-op threads to 2/1.
[13:52:28 - Predict] Processing region(s): MN908947.3:0-29903
[13:52:28 - Predict] Using model: /home/rebee/miniconda3/envs/artic/lib/python3.8/site-packages/medaka/data/r941_min_high_g360_model.hdf5.
[13:52:28 - Predict] Processing 1 long region(s) with batching.
[13:52:28 - MdlStore] filepath /home/rebee/miniconda3/envs/artic/lib/python3.8/site-packages/medaka/data/r941_min_high_g360_model.hdf5
[13:52:28 - MdlStore] ModelStore exception Cannot convert a symbolic Tensor (bidirectional/forward_gru1/strided_slice:0) to a numpy array.
Traceback (most recent call last):
  File "/home/rebee/miniconda3/envs/artic/bin/medaka", line 11, in <module>
    sys.exit(main())
  File "/home/rebee/miniconda3/envs/artic/lib/python3.8/site-packages/medaka/medaka.py", line 720, in main
    args.func(args)
  File "/home/rebee/miniconda3/envs/artic/lib/python3.8/site-packages/medaka/prediction.py", line 160, in predict
    model = model_store.load_model(time_steps=args.chunk_len)
  File "/home/rebee/miniconda3/envs/artic/lib/python3.8/site-packages/medaka/datastore.py", line 78, in load_model
    model = model_partial_function(time_steps=time_steps)
  File "/home/rebee/miniconda3/envs/artic/lib/python3.8/site-packages/medaka/models.py", line 147, in build_model
    model.add(Bidirectional(gru, input_shape=input_shape))
  File "/home/rebee/miniconda3/envs/artic/lib/python3.8/site-packages/tensorflow/python/training/tracking/base.py", line 456, in _method_wrapper
    result = method(self, *args, **kwargs)
  File "/home/rebee/miniconda3/envs/artic/lib/python3.8/site-packages/tensorflow/python/keras/engine/sequential.py", line 198, in add
    layer(x)
  File "/home/rebee/miniconda3/envs/artic/lib/python3.8/site-packages/tensorflow/python/keras/layers/wrappers.py", line 531, in __call__
    return super(Bidirectional, self).__call__(inputs, **kwargs)
  File "/home/rebee/miniconda3/envs/artic/lib/python3.8/site-packages/tensorflow/python/keras/engine/base_layer.py", line 922, in __call__
    outputs = call_fn(cast_inputs, *args, **kwargs)
  File "/home/rebee/miniconda3/envs/artic/lib/python3.8/site-packages/tensorflow/python/keras/layers/wrappers.py", line 644, in call
    y = self.forward_layer(forward_inputs,
  File "/home/rebee/miniconda3/envs/artic/lib/python3.8/site-packages/tensorflow/python/keras/layers/recurrent.py", line 654, in __call__
    return super(RNN, self).__call__(inputs, **kwargs)
  File "/home/rebee/miniconda3/envs/artic/lib/python3.8/site-packages/tensorflow/python/keras/engine/base_layer.py", line 922, in __call__
    outputs = call_fn(cast_inputs, *args, **kwargs)
  File "/home/rebee/miniconda3/envs/artic/lib/python3.8/site-packages/tensorflow/python/keras/layers/recurrent_v2.py", line 408, in call
    inputs, initial_state, _ = self._process_inputs(inputs, initial_state, None)
  File "/home/rebee/miniconda3/envs/artic/lib/python3.8/site-packages/tensorflow/python/keras/layers/recurrent.py", line 848, in _process_inputs
    initial_state = self.get_initial_state(inputs)
  File "/home/rebee/miniconda3/envs/artic/lib/python3.8/site-packages/tensorflow/python/keras/layers/recurrent.py", line 636, in get_initial_state
    init_state = get_initial_state_fn(
  File "/home/rebee/miniconda3/envs/artic/lib/python3.8/site-packages/tensorflow/python/keras/layers/recurrent.py", line 1910, in get_initial_state
    return _generate_zero_filled_state_for_cell(self, inputs, batch_size, dtype)
  File "/home/rebee/miniconda3/envs/artic/lib/python3.8/site-packages/tensorflow/python/keras/layers/recurrent.py", line 2926, in _generate_zero_filled_state_for_cell
    return _generate_zero_filled_state(batch_size, cell.state_size, dtype)
  File "/home/rebee/miniconda3/envs/artic/lib/python3.8/site-packages/tensorflow/python/keras/layers/recurrent.py", line 2944, in _generate_zero_filled_state
    return create_zeros(state_size)
  File "/home/rebee/miniconda3/envs/artic/lib/python3.8/site-packages/tensorflow/python/keras/layers/recurrent.py", line 2939, in create_zeros
    return array_ops.zeros(init_state_size, dtype=dtype)
  File "/home/rebee/miniconda3/envs/artic/lib/python3.8/site-packages/tensorflow/python/ops/array_ops.py", line 2677, in wrapped
    tensor = fun(*args, **kwargs)
  File "/home/rebee/miniconda3/envs/artic/lib/python3.8/site-packages/tensorflow/python/ops/array_ops.py", line 2721, in zeros
    output = _constant_if_small(zero, shape, dtype, name)
  File "/home/rebee/miniconda3/envs/artic/lib/python3.8/site-packages/tensorflow/python/ops/array_ops.py", line 2662, in _constant_if_small
    if np.prod(shape) < 1000:
  File "<__array_function__ internals>", line 5, in prod
  File "/home/rebee/.local/lib/python3.8/site-packages/numpy/core/fromnumeric.py", line 3051, in prod
    return _wrapreduction(a, np.multiply, 'prod', axis, dtype, out,
  File "/home/rebee/.local/lib/python3.8/site-packages/numpy/core/fromnumeric.py", line 86, in _wrapreduction
    return ufunc.reduce(obj, axis, dtype, out, **passkwargs)
  File "/home/rebee/miniconda3/envs/artic/lib/python3.8/site-packages/tensorflow/python/framework/ops.py", line 748, in __array__
    raise NotImplementedError("Cannot convert a symbolic Tensor ({}) to a numpy"
NotImplementedError: Cannot convert a symbolic Tensor (bidirectional/forward_gru1/strided_slice:0) to a numpy array.
Command failed:medaka consensus --model r941_min_high_g360 --threads 60 --chunk_len 800 --chunk_ovlp 400 --RG 1 /home/rebee/projects/analysis/20221017/FAS60816/filtered/FAS60816_barcode02_filtered.trimmed.rg.sorted.bam /home/rebee/projects/analysis/20221017/FAS60816/filtered/FAS60816_barcode02_filtered.1.hdf
```
BioWilko commented 1 year ago

Hi Rebee

What version of the artic pipeline are you using, and how did you install it?

Also, would you be willing to share the file "/home/rebee/projects/analysis/20221017/FAS60816/filtered/FAS60816_barcode02_filtered.trimmed.rg.sorted.bam" so I can have a look myself?

Sam
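
For reference, a sketch of how those versions could be gathered, assuming a conda install with the environment named `artic` as in environment.yml (flag names may vary slightly by release):

```
# Activate the environment and report the relevant tool and library versions.
conda activate artic
artic --version
medaka --version
conda list | grep -E "artic|medaka|tensorflow|numpy"
```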

rebeelouise commented 1 year ago

Hi, I used the latest version from GitHub.

I have installed and reinstalled a couple of times now, both with conda and with the git method. My miniconda base uses Python 3.9, so I made the environment use Python 3.8.13, as stated in environment.yml. That is how I overcame my last installation bug here: https://github.com/artic-network/fieldbioinformatics/issues/87
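
For reference, a minimal sketch of that reinstall route, assuming a fresh clone and the conda-based setup (the exact install step may differ between releases):

```
# Recreate the environment from the repository's environment.yml and install artic into it.
git clone https://github.com/artic-network/fieldbioinformatics
cd fieldbioinformatics
conda env create -f environment.yml   # creates an env named "artic" with the pinned Python
conda activate artic
pip install .                         # or the install command given in the README
```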

I have just run samtools depth -aa on my sorted BAM file and I can see that there are definitely reads covering the genome positions, so minimap2, samtools and the other parts of the pipeline seem to be working fine; it is only the medaka step that fails with the TensorFlow errors! A quick illustration of that coverage check is below.
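
Roughly what that check looks like (file name taken from the log above; the awk summary is just one way to condense the per-position output):

```
# Report depth at every reference position (including zero-coverage ones) and summarise it.
samtools depth -aa FAS60816_barcode02_filtered.trimmed.rg.sorted.bam \
  | awk '{sum += $3; if ($3 > 0) covered++} END {printf "mean depth: %.1f, covered: %d/%d positions\n", sum/NR, covered, NR}'
```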

rebeelouise commented 1 year ago

Update:

Same errors when running `bash test-runner.sh medaka`:

```
Running: medaka consensus --model r941_min_high_g351 --threads 2 --chunk_len 800 --chunk_ovlp 400 --RG Ebov-DRC_2 ebov-mayinga.trimmed.rg.sorted.bam ebov-mayinga.Ebov-DRC_2.hdf
[11:25:19 - ValidArgs] Reads will be filtered to only those with RG tag: Ebov-DRC_2
[11:25:19 - Predict] Setting tensorflow inter/intra-op threads to 2/1.
[11:25:19 - Predict] Processing region(s): BTB20484:0-18953
[11:25:19 - Predict] Using model: /home/rebee/miniconda3/envs/artic/lib/python3.8/site-packages/medaka/data/r941_min_high_g351_model.hdf5.
[11:25:19 - Predict] Processing 1 long region(s) with batching.
[11:25:19 - MdlStore] filepath /home/rebee/miniconda3/envs/artic/lib/python3.8/site-packages/medaka/data/r941_min_high_g351_model.hdf5
[11:25:19 - MdlStore] ModelStore exception Cannot convert a symbolic Tensor (bidirectional/forward_gru1/strided_slice:0) to a numpy array.
Traceback (most recent call last):
  File "/home/rebee/miniconda3/envs/artic/bin/medaka", line 11, in <module>
    sys.exit(main())
  File "/home/rebee/miniconda3/envs/artic/lib/python3.8/site-packages/medaka/medaka.py", line 720, in main
    args.func(args)
  File "/home/rebee/miniconda3/envs/artic/lib/python3.8/site-packages/medaka/prediction.py", line 160, in predict
    model = model_store.load_model(time_steps=args.chunk_len)
  File "/home/rebee/miniconda3/envs/artic/lib/python3.8/site-packages/medaka/datastore.py", line 78, in load_model
    model = model_partial_function(time_steps=time_steps)
  File "/home/rebee/miniconda3/envs/artic/lib/python3.8/site-packages/medaka/models.py", line 147, in build_model
    model.add(Bidirectional(gru, input_shape=input_shape))
  File "/home/rebee/miniconda3/envs/artic/lib/python3.8/site-packages/tensorflow/python/training/tracking/base.py", line 456, in _method_wrapper
    result = method(self, *args, **kwargs)
  File "/home/rebee/miniconda3/envs/artic/lib/python3.8/site-packages/tensorflow/python/keras/engine/sequential.py", line 198, in add
    layer(x)
  File "/home/rebee/miniconda3/envs/artic/lib/python3.8/site-packages/tensorflow/python/keras/layers/wrappers.py", line 531, in __call__
    return super(Bidirectional, self).__call__(inputs, **kwargs)
  File "/home/rebee/miniconda3/envs/artic/lib/python3.8/site-packages/tensorflow/python/keras/engine/base_layer.py", line 922, in __call__
    outputs = call_fn(cast_inputs, *args, **kwargs)
  File "/home/rebee/miniconda3/envs/artic/lib/python3.8/site-packages/tensorflow/python/keras/layers/wrappers.py", line 644, in call
    y = self.forward_layer(forward_inputs,
  File "/home/rebee/miniconda3/envs/artic/lib/python3.8/site-packages/tensorflow/python/keras/layers/recurrent.py", line 654, in __call__
    return super(RNN, self).__call__(inputs, **kwargs)
  File "/home/rebee/miniconda3/envs/artic/lib/python3.8/site-packages/tensorflow/python/keras/engine/base_layer.py", line 922, in __call__
    outputs = call_fn(cast_inputs, *args, **kwargs)
  File "/home/rebee/miniconda3/envs/artic/lib/python3.8/site-packages/tensorflow/python/keras/layers/recurrent_v2.py", line 408, in call
    inputs, initial_state, _ = self._process_inputs(inputs, initial_state, None)
  File "/home/rebee/miniconda3/envs/artic/lib/python3.8/site-packages/tensorflow/python/keras/layers/recurrent.py", line 848, in _process_inputs
    initial_state = self.get_initial_state(inputs)
  File "/home/rebee/miniconda3/envs/artic/lib/python3.8/site-packages/tensorflow/python/keras/layers/recurrent.py", line 636, in get_initial_state
    init_state = get_initial_state_fn(
  File "/home/rebee/miniconda3/envs/artic/lib/python3.8/site-packages/tensorflow/python/keras/layers/recurrent.py", line 1910, in get_initial_state
    return _generate_zero_filled_state_for_cell(self, inputs, batch_size, dtype)
  File "/home/rebee/miniconda3/envs/artic/lib/python3.8/site-packages/tensorflow/python/keras/layers/recurrent.py", line 2926, in _generate_zero_filled_state_for_cell
    return _generate_zero_filled_state(batch_size, cell.state_size, dtype)
  File "/home/rebee/miniconda3/envs/artic/lib/python3.8/site-packages/tensorflow/python/keras/layers/recurrent.py", line 2944, in _generate_zero_filled_state
    return create_zeros(state_size)
  File "/home/rebee/miniconda3/envs/artic/lib/python3.8/site-packages/tensorflow/python/keras/layers/recurrent.py", line 2939, in create_zeros
    return array_ops.zeros(init_state_size, dtype=dtype)
  File "/home/rebee/miniconda3/envs/artic/lib/python3.8/site-packages/tensorflow/python/ops/array_ops.py", line 2677, in wrapped
    tensor = fun(*args, **kwargs)
  File "/home/rebee/miniconda3/envs/artic/lib/python3.8/site-packages/tensorflow/python/ops/array_ops.py", line 2721, in zeros
    output = _constant_if_small(zero, shape, dtype, name)
  File "/home/rebee/miniconda3/envs/artic/lib/python3.8/site-packages/tensorflow/python/ops/array_ops.py", line 2662, in _constant_if_small
    if np.prod(shape) < 1000:
  File "<__array_function__ internals>", line 5, in prod
  File "/home/rebee/.local/lib/python3.8/site-packages/numpy/core/fromnumeric.py", line 3051, in prod
    return _wrapreduction(a, np.multiply, 'prod', axis, dtype, out,
  File "/home/rebee/.local/lib/python3.8/site-packages/numpy/core/fromnumeric.py", line 86, in _wrapreduction
    return ufunc.reduce(obj, axis, dtype, out, **passkwargs)
  File "/home/rebee/miniconda3/envs/artic/lib/python3.8/site-packages/tensorflow/python/framework/ops.py", line 748, in __array__
    raise NotImplementedError("Cannot convert a symbolic Tensor ({}) to a numpy"
NotImplementedError: Cannot convert a symbolic Tensor (bidirectional/forward_gru1/strided_slice:0) to a numpy array.
Command failed:medaka consensus --model r941_min_high_g351 --threads 2 --chunk_len 800 --chunk_ovlp 400 --RG Ebov-DRC_2 ebov-mayinga.trimmed.rg.sorted.bam ebov-mayinga.Ebov-DRC_2.hdf
```

I think I might have to revert to a previous version for the time being; I am struggling to troubleshoot this one! :)

rebeelouise commented 1 year ago

Okay, fixed with this suggestion:

https://github.com/tensorflow/models/issues/9706#issuecomment-782841778

:)
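
For anyone who lands on this later: the linked thread lists several alternatives, and this particular `Cannot convert a symbolic Tensor` error is generally triggered by a numpy 1.20+ build being picked up by an older TensorFlow (note that the tracebacks above show numpy being imported from `~/.local` rather than from the conda environment). A sketch of the check-and-downgrade route, with the version pin as an illustration rather than a prescription:

```
# Confirm which numpy the environment is really importing; a copy under
# ~/.local/lib/python3.8/site-packages can shadow the conda environment's numpy.
python -c "import numpy; print(numpy.__version__, numpy.__file__)"

# Workaround discussed in the linked tensorflow/models thread: keep numpy below 1.20
# so the older TensorFlow shipped with medaka can build its models.
# If the shadowing copy lives in ~/.local, it may need to be removed or downgraded there as well.
pip install "numpy<1.20"
```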

MauriAndresMU1313 commented 2 weeks ago

Thank you for the link you provided! I wonder which lines of code or which approach you used to solve the issue, because that discussion thread offers several different alternatives. It looks like the way you solved it is specific to this medaka issue. Any information you can provide is welcome! @rebeelouise