nanoporetech / dorado

Oxford Nanopore's Basecaller
https://nanoporetech.com/

Size of tensor a not matching size of tensor b - Direct RNA (SQK-RNA004) basecalling #873

Closed: NStrowbridge closed this issue 5 months ago

NStrowbridge commented 5 months ago

Issue Report

Please describe the issue:

I'm trying to run basecalling on direct RNA (SQK-RNA004) data and keep getting the error: The size of tensor a (19) must match the size of tensor b (29) at non-singleton dimension 2.

Run environment:

Logs

[2024-06-05 17:24:02.849] [info] Running: "basecaller" "hac" "-v" "--emit-fastq" "pod5/"
[2024-06-05 17:24:02.849] [info]  - Note: FASTQ output is not recommended as not all data can be preserved.
[2024-06-05 17:24:02.922] [info] Assuming cert location is /etc/ssl/cert.pem
[2024-06-05 17:24:02.923] [info]  - downloading rna004_130bps_hac@v5.0.0 with foundation
[2024-06-05 17:25:09.405] [info] > Creating basecall pipeline
[2024-06-05 17:25:09.405] [debug] CRFModelConfig { qscale:1.100000 qbias:-1.000000 stride:5 bias:0 clamp:1 out_features:-1 state_len:4 outsize:1024 blank_score:2.000000 scale:1.000000 num_features:1 sample_rate:4000 mean_qscore_start_pos:60 SignalNormalisationParams { strategy:pa StandardisationScalingParams { standardise:1 mean:80.875900 stdev:17.269760}} BasecallerParams { chunk_size:10000 overlap:500 batch_size:0} convs: { 0: ConvParams { insize:1 size:16 winlen:5 stride:1 activation:swish} 1: ConvParams { insize:16 size:16 winlen:5 stride:1 activation:swish} 2: ConvParams { insize:16 size:384 winlen:29 stride:5 activation:tanh}} model_type: lstm { bias:0 outsize:1024 blank_score:2.000000 scale:1.000000}}
[2024-06-05 17:25:09.474] [info]  - BAM format does not support `U`, so RNA output files will include `T` instead of `U` for all file types.
[2024-06-05 17:25:09.538] [debug] Physical/Usable memory available: 64/64 GB
[2024-06-05 17:25:09.538] [debug] Retrieved GPU core count of 24 from IO Registry
[2024-06-05 17:25:09.538] [debug] Trying batch size 288
[2024-06-05 17:25:09.538] [debug] Linear layer split 1
[2024-06-05 17:25:09.539] [debug] lstm_chunk_size 300 => 15 LSTM kernel launches
[2024-06-05 17:25:09.541] [debug] conv3 output_time_step_count 300 => 15 kernel launches
[2024-06-05 17:25:09.548] [error] The size of tensor a (19) must match the size of tensor b (29) at non-singleton dimension 2
Exception raised from infer_size_impl at /Users/ben.lawrence/Documents/work/pytorch/aten/src/ATen/ExpandUtils.cpp:35 (most recent call first):
frame #0: c10::detail::torchCheckFail(char const*, char const*, unsigned int, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) + 92 (0x1090c49a0 in dorado)
frame #1: at::infer_size_dimvector(c10::ArrayRef<long long>, c10::ArrayRef<long long>) + 376 (0x104f89bd8 in dorado)
frame #2: at::TensorIteratorBase::compute_shape(at::TensorIteratorConfig const&) + 460 (0x10500c58c in dorado)
frame #3: at::TensorIteratorBase::build(at::TensorIteratorConfig&) + 524 (0x105008d44 in dorado)
frame #4: at::TensorIteratorConfig::build() + 56 (0x105367d68 in dorado)
frame #5: at::native::copy_(at::Tensor&, at::Tensor const&, bool) + 1568 (0x105367494 in dorado)
frame #6: c10::impl::wrap_kernel_functor_unboxed_<c10::impl::detail::WrapFunctionIntoFunctor_<c10::CompileTimeFunctionPointer<at::Tensor& (c10::DispatchKeySet, at::Tensor&, at::Tensor const&, bool), &(torch::ADInplaceOrView::copy_(c10::DispatchKeySet, at::Tensor&, at::Tensor const&, bool))>, at::Tensor&, c10::guts::typelist::typelist<c10::DispatchKeySet, at::Tensor&, at::Tensor const&, bool> >, at::Tensor& (c10::DispatchKeySet, at::Tensor&, at::Tensor const&, bool)>::call(c10::OperatorKernel*, c10::DispatchKeySet, at::Tensor&, at::Tensor const&, bool) + 72 (0x108e69644 in dorado)
frame #7: c10::impl::wrap_kernel_functor_unboxed_<c10::impl::detail::WrapFunctionIntoFunctor_<c10::CompileTimeFunctionPointer<at::Tensor& (c10::DispatchKeySet, at::Tensor&, at::Tensor const&, bool), &(torch::autograd::VariableType::(anonymous namespace)::copy_(c10::DispatchKeySet, at::Tensor&, at::Tensor const&, bool))>, at::Tensor&, c10::guts::typelist::typelist<c10::DispatchKeySet, at::Tensor&, at::Tensor const&, bool> >, at::Tensor& (c10::DispatchKeySet, at::Tensor&, at::Tensor const&, bool)>::call(c10::OperatorKernel*, c10::DispatchKeySet, at::Tensor&, at::Tensor const&, bool) + 556 (0x108e66e08 in dorado)
frame #8: at::_ops::copy_::call(at::Tensor&, at::Tensor const&, bool) + 276 (0x105ee6e54 in dorado)
frame #9: dorado::utils::load_state_dict(torch::nn::Module&, std::__1::vector<at::Tensor, std::__1::allocator<at::Tensor> > const&) + 164 (0x104e3879c in dorado)
frame #10: dorado::basecall::nn::MetalCRFModelImpl::load_state_dict(std::__1::vector<at::Tensor, std::__1::allocator<at::Tensor> > const&) + 20 (0x104e8f1d0 in dorado)
frame #11: dorado::basecall::MetalLSTMCaller::set_chunk_batch_size(dorado::basecall::CRFModelConfig const&, std::__1::vector<at::Tensor, std::__1::allocator<at::Tensor> > const&, int, int) + 632 (0x104e75dc8 in dorado)
frame #12: dorado::basecall::MetalLSTMCaller::benchmark_batch_sizes(dorado::basecall::CRFModelConfig const&, std::__1::vector<at::Tensor, std::__1::allocator<at::Tensor> > const&, float) + 984 (0x104e75890 in dorado)
frame #13: dorado::basecall::MetalLSTMCaller::MetalLSTMCaller(dorado::basecall::CRFModelConfig const&, float) + 596 (0x104e75044 in dorado)
frame #14: dorado::api::create_metal_caller(dorado::basecall::CRFModelConfig const&, float) + 128 (0x104d19228 in dorado)
frame #15: dorado::api::create_basecall_runners(dorado::basecall::CRFModelConfig const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, unsigned long, unsigned long, float, dorado::basecall::PipelineType, float) + 200 (0x104d194c4 in dorado)
frame #16: dorado::setup(std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > > const&, dorado::basecall::CRFModelConfig const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::vector<std::__1::__fs::filesystem::path, std::__1::allocator<std::__1::__fs::filesystem::path> > const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, unsigned long, unsigned long, unsigned long, float, dorado::utils::HtsFile::OutputMode, bool, unsigned long, unsigned long, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, bool, dorado::alignment::Minimap2Options const&, bool, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, bool, bool, std::__1::optional<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > const&, argparse::ArgumentParser&, bool, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, dorado::ModelSelection const&, std::__1::shared_ptr<dorado::demux::BarcodingInfo const>, std::__1::unique_ptr<dorado::utils::SampleSheet const, std::__1::default_delete<dorado::utils::SampleSheet const> >) + 1004 (0x104cf1e14 in dorado)
frame #17: dorado::basecaller(int, char**) + 13672 (0x104cf8cb4 in dorado)
frame #18: main + 2872 (0x104cb3b38 in dorado)
frame #19: start + 516 (0x10f665088 in dyld)
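
For readers hitting the same message: the trace shows the failure inside dorado::utils::load_state_dict, i.e. while the downloaded checkpoint's tensors are being copied into the freshly built Metal model, and one of those in-place copies is between tensors whose sizes disagree in the last dimension (19 vs 29; compare the conv3 winlen:29 in the config dump above). Below is a minimal, generic PyTorch sketch of that failure mode; it is not dorado's code, and the shapes are invented purely to mirror the numbers in the log.

```python
import torch

# Hypothetical shapes, chosen only to mirror the 19 vs 29 in the log above; which side
# belongs to the freshly built layer and which to the checkpoint is an assumption.
dest = torch.zeros(384, 16, 19)  # parameter of the newly constructed module ("tensor a")
src = torch.zeros(384, 16, 29)   # tensor read from the downloaded model file ("tensor b")

try:
    # load_state_dict boils down to in-place copies like this one (frames #5-#9 above);
    # the copy fails because 29 cannot be broadcast into 19 along the last dimension.
    dest.copy_(src)
except RuntimeError as err:
    print(err)
    # -> The size of tensor a (19) must match the size of tensor b (29)
    #    at non-singleton dimension 2
```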
HalfPhoton commented 5 months ago

Hi @NStrowbridge, we can reproduce this issue locally and will ship a fix in an upcoming release. In the meantime, you should still be able to use the rna004 fast and sup models on Apple hardware, as well as all previous models, e.g. hac@v4.3.

Thanks for reporting this issue
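
A minimal sketch of that interim workaround, for anyone who wants something copy-pasteable: the "fast" model selector and the --emit-fastq flag mirror the original invocation in the log, the output filename is just an example, and it is wrapped in Python only to keep the snippets in this thread in one language.

```python
import subprocess

# Run dorado with a model that the comment above says still works on Apple hardware
# (the rna004 fast/sup models, or an earlier hac version) instead of hac@v5.0.0.
with open("calls.fastq", "wb") as out:
    subprocess.run(
        ["dorado", "basecaller", "fast", "--emit-fastq", "pod5/"],
        stdout=out,   # dorado writes basecalls to stdout
        check=True,
    )
```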

NStrowbridge commented 5 months ago

@HalfPhoton Thanks for the quick response! I have successfully basecalled using the other models. Thanks again.

iiSeymour commented 4 months ago

@NStrowbridge this is fixed in v0.7.2.