Closed aringeri closed 2 months ago
We recommend a minimum of 8GB of GPU RAM, so I'm not very surprised this card is struggling.
[2024-06-27 05:37:19.311] [warning] cuda:0 maximum safe estimated batch size at chunk size 12288 is only 0.
That probably explains the segfault.
The v5.0 sup models have a minimum batch size of 32; other models have a minimum of 64. If you still see failures at those values, the other option is to reduce the --chunksize as well (I'd suggest halving it each time), though this may have an effect on accuracy.
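For anyone else hitting this on a small GPU, the suggestion above can be sketched as a command line. This is only an illustration: the model name (`sup@v5.0.0`) and the input directory are placeholders, not verified paths, and the right values depend on your card.

```shell
# Sketch only: cap the batch size at the model minimum (32 for v5.0 sup
# models, 64 for others) and halve the default 12288 chunk size.
BATCHSIZE=32
CHUNKSIZE=$((12288 / 2))
CMD="dorado basecaller --batchsize $BATCHSIZE --chunksize $CHUNKSIZE sup@v5.0.0 reads/"
echo "$CMD"   # run it with: eval "$CMD" > calls.bam
```

If this still fails, halve CHUNKSIZE again and retry.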
Thanks for your response. I'm able to basecall when using the --batchsize 64 parameter for both the v5.0.0 and v4.3.0 sup models.
Are there any guides for how the --chunksize and --batchsize parameters affect the accuracy of the basecalls?
We don't have a formal categorisation for chunk size, no. Batch size should not affect accuracy.
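To illustrate why chunk size could plausibly affect accuracy while batch size cannot, here is a toy sketch (not dorado's actual implementation): the raw signal is split into fixed-size chunks, so a smaller chunk size means more chunk boundaries where the model has less context, whereas batch size only controls how many chunks are processed in parallel.

```python
# Toy illustration, not dorado's code: split a 1-D signal into chunks.
def chunk_signal(signal, chunk_size, overlap=0):
    """Split `signal` into windows of `chunk_size` samples.

    A smaller chunk_size produces more chunks, and therefore more
    boundaries; batch size would only change how many of these chunks
    are run through the model at once, not how the signal is split.
    """
    step = chunk_size - overlap
    return [signal[i:i + chunk_size] for i in range(0, len(signal), step)]

samples = list(range(100))
print(len(chunk_signal(samples, 25)))  # 4 chunks -> 3 internal boundaries
print(len(chunk_signal(samples, 50)))  # 2 chunks -> 1 internal boundary
```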
Issue Report
Please describe the issue:
I am currently trialling a home desktop GPU to speed up basecalling (over CPU hardware). The device I have access to is an NVIDIA GeForce GTX 1060 3GB, which is considerably less powerful than workstation GPUs. The hac and fast basecalling models work well on this device, but I run into issues when attempting to use the sup models.
For the v4.3.0 models I get an 'out of memory' message, while on the v5 models I get a segmentation fault.
Is there any way to configure dorado to use less GPU memory so it fits on these lower-spec devices?
Steps to reproduce the issue:
v4.3.0 sup
v5.0.0 sup
Run environment:
dorado basecaller
Logs
v4.3.0
v5.0.0
nvidia-smi