I'm running into an error using any of the models:
```
Traceback (most recent call last):
  File "/home/esteinig/miniconda3/envs/bonito/lib/python3.9/threading.py", line 973, in _bootstrap_inner
    self.run()
  File "/home/esteinig/src/bonito/bonito/multiprocessing.py", line 110, in run
    for item in self.iterator:
  File "/home/esteinig/src/bonito/bonito/crf/basecall.py", line 69, in <genexpr>
    (read, compute_scores(model, batch, reverse=reverse)) for read, batch in batches
  File "/home/esteinig/src/bonito/bonito/crf/basecall.py", line 35, in compute_scores
    sequence, qstring, moves = beam_search(
  File "/home/esteinig/miniconda3/envs/bonito/lib/python3.9/site-packages/koi/decode.py", line 13, in beam_search
    raise TypeError('Expected fp16 but received %s' % scores.dtype)
TypeError: Expected fp16 but received torch.float32
```
Installed into a fresh conda environment with Python 3.9 (on a GPU node of our cluster, with the CUDA 11.1 module loaded), using either the repository head commit fd5bf56 or v0.5.0.
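For context, the check that fails lives in `koi/decode.py`: `beam_search` rejects any score tensor that is not half precision. A minimal sketch of the dtype guard and the obvious cast-based workaround is below; `ensure_fp16` and the dummy `scores` tensor are hypothetical names for illustration, not part of bonito or koi.

```python
import torch


def ensure_fp16(scores: torch.Tensor) -> torch.Tensor:
    """Cast decoder scores to half precision if needed.

    Mirrors the precondition enforced in koi.decode.beam_search, which
    raises TypeError('Expected fp16 but received %s' % scores.dtype)
    for anything other than torch.float16.
    """
    if scores.dtype != torch.float16:
        scores = scores.to(torch.float16)
    return scores


# A float32 tensor like the one triggering the error above.
scores = torch.zeros(4, 8, dtype=torch.float32)
assert ensure_fp16(scores).dtype == torch.float16
```

This only illustrates the mismatch; whether the real fix is casting the model output, running the model in half precision, or a packaging issue between the bonito and koi versions installed is what this report is asking about.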
CUDA config:
Dependencies: