Rohit-Satyam opened this issue 4 weeks ago
Also, I think you should add SISPA to the README file, since it also gives extremely high coverage and therefore people should set `--var_pct_full` and `--ref_pct_full` to 1, right?
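For reference, a minimal sketch of how those two options might be set in a `run_clair3.sh` call for high-coverage data; all file paths and the model directory below are placeholders, not taken from this issue, and assume `run_clair3.sh` is on the PATH:

```bash
# Hypothetical sketch: run Clair3 on high-coverage (e.g. SISPA) data, sending all
# variant and reference candidates to full-alignment calling.
# All file paths and the model directory are placeholders.
run_clair3.sh \
  --bam_fn=sispa_sample.bam \
  --ref_fn=reference.fasta \
  --threads=8 \
  --platform=ont \
  --model_path=/path/to/clair3_model_dir \
  --output=clair3_output \
  --var_pct_full=1 \
  --ref_pct_full=1
```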
The command looks correct. Could you please `ls /home/satyamr/dorado-0.8.2-linux-x64/models/dna_r10.4.1_e8.2_400bps_fast@v4.3.0` and see what it shows?
@aquaskyline
Apologies, I was away on medical leave. Here are the contents of the directory, as requested:
ls /home/satyamr/dorado-0.8.2-linux-x64/models/dna_r10.4.1_e8.2_400bps_fast@v4.3.0
0.conv.bias.tensor 1.conv.weight.tensor 4.rnn.bias_hh_l0.tensor 4.rnn.weight_ih_l0.tensor 5.rnn.weight_hh_l0.tensor 6.rnn.bias_ih_l0.tensor 7.rnn.bias_hh_l0.tensor 7.rnn.weight_ih_l0.tensor 8.rnn.weight_hh_l0.tensor config.toml
0.conv.weight.tensor 2.conv.bias.tensor 4.rnn.bias_ih_l0.tensor 5.rnn.bias_hh_l0.tensor 5.rnn.weight_ih_l0.tensor 6.rnn.weight_hh_l0.tensor 7.rnn.bias_ih_l0.tensor 8.rnn.bias_hh_l0.tensor 8.rnn.weight_ih_l0.tensor
1.conv.bias.tensor 2.conv.weight.tensor 4.rnn.weight_hh_l0.tensor 5.rnn.bias_ih_l0.tensor 6.rnn.bias_hh_l0.tensor 6.rnn.weight_ih_l0.tensor 7.rnn.weight_hh_l0.tensor 8.rnn.bias_ih_l0.tensor 9.linear.weight.tensor
The folder contains a Dorado model, not a Clair3 model.
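For comparison, a Clair3 model is distributed as a tarball and, once extracted, the directory holds the pileup and full_alignment network checkpoints rather than Dorado `*.tensor` files. A hedged sketch of fetching and inspecting one; the download URL follows the pattern used in the Clair3 README and may need checking, and the exact checkpoint file names can differ between model versions:

```bash
# Hedged sketch: download and inspect a Clair3 model (this is the tarball named in
# the issue; the URL follows the pattern in the Clair3 README and may need checking).
wget http://www.bio8.cs.hku.hk/clair3/clair3_models/r1041_e82_400bps_sup_v420.tar.gz
mkdir -p r1041_e82_400bps_sup_v420
tar -xzf r1041_e82_400bps_sup_v420.tar.gz -C r1041_e82_400bps_sup_v420

# A Clair3 model directory should contain the pileup and full_alignment network
# checkpoints (file names can vary between versions), not Dorado *.tensor files.
ls r1041_e82_400bps_sup_v420
```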
Yeah, that was the question. Is there a way to get Clair3 models for "hac" and "fast" mode too, or should we use the "sup" mode model on the FASTA files produced in "fast" basecalling mode?
Dear Developers,

In the README you mention that ONT models can be used with Clair3. I downloaded one of these models, `dna_r10.4.1_e8.2_400bps_fast@v4.3.0`, and gave it as input to Clair3. However, I get an error. Should I use `r1041_e82_400bps_sup_v420.tar.gz` instead, even though the basecaller version is slightly different and the basecalling was run in fast mode? Is there a way to convert the Dorado models to a Clair3 pileup model?
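For context, a reconstruction (not the exact command from this issue) of what passing the Dorado model directory to Clair3 via `--model_path` would look like; the BAM, reference, and output paths are placeholders:

```bash
# Reconstruction (not the exact command from this issue): pointing --model_path at
# the Dorado basecalling model directory. Clair3 expects one of its own model
# directories here, which is why this setup errors out (see the reply above).
# BAM, reference, and output paths are placeholders.
run_clair3.sh \
  --bam_fn=sample.bam \
  --ref_fn=reference.fasta \
  --threads=8 \
  --platform=ont \
  --model_path=/home/satyamr/dorado-0.8.2-linux-x64/models/dna_r10.4.1_e8.2_400bps_fast@v4.3.0 \
  --output=clair3_output
```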