Closed lpryszcz closed 9 months ago
Hi @lpryszcz , it certainly would be possible to add the ability to set the batchsize of the caller in the dorado_basecall_server, but we have an automated system that tries to set a suitable batch size based on the available VRAM in the cuda device you are running with. Can you let us know a little bit about what model you are running and what hardware you are trying to run it on? Thanks!
Actually, I've experienced problems with dorado basecaller running out of memory on an RTX 2080 Ti (11 GB VRAM) when executed without specifying -b/--batchsize. I requested this for the server just in case, but so far the server seems OK.
That's interesting - was this with the latest version of dorado standalone (v0.4.2)? I would expect an RTX 2080 Ti to work well, as we do sometimes run internally on GTX 1080 Ti cards and they seem to behave (although they use a slightly different code path).
I've added a ticket to expose --batchsize in the server so you can control it directly.
Thank you!
Yes, the out-of-memory error happens with the latest standalone version (0.4.2) on Ubuntu 18.04 when using a custom model; with the official models it works fine.
Hi, I can see --batch_size_scale_factor was added in v7.2.13. Thanks!
Hello, could you please provide a bit more description on the use of --batch_size_scale_factor? Could this option also be used on MinKNOW by modifying the app_conf in order to optimize basecalling and VRAM usage for small GPUs? Thank you in advance for your help.
Hi @frumencelab ,
Sure, the --batch_size_scale_factor argument scales the maximum fraction of available GPU memory that the CUDA caller will use. If something external is already using 3 GB on an 11 GB card, there will be 8 GB actually available. Passing --batch_size_scale_factor 0.25 tells dorado it can use up to ~2 GB of that memory for a CUDA caller. There are some other allocations (e.g. the model weights themselves), so it won't be exact, but it lets you exercise some control over how much memory the basecaller will use.
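The arithmetic above can be sketched as follows. This is an illustrative approximation only, not dorado's actual allocation code; the function name and the exact accounting are assumptions for the example.

```python
# Illustrative sketch (NOT dorado's real implementation) of how a
# --batch_size_scale_factor budget is derived from free VRAM.

def caller_memory_budget_gb(total_gb: float, in_use_gb: float,
                            scale_factor: float = 1.0) -> float:
    """Approximate memory the CUDA caller may use, in GB.

    total_gb:     total VRAM on the card (e.g. 11 for an RTX 2080 Ti)
    in_use_gb:    VRAM already consumed by other processes
    scale_factor: value passed via --batch_size_scale_factor
    """
    available = total_gb - in_use_gb   # e.g. 11 - 3 = 8 GB free
    return available * scale_factor    # e.g. 8 * 0.25 = 2 GB budget

# The example from the comment above: 11 GB card, 3 GB already in use,
# scale factor 0.25 gives roughly a 2 GB budget for the CUDA caller.
print(caller_memory_budget_gb(11, 3, 0.25))  # 2.0
```

In practice the actual usage will differ somewhat, since model weights and other allocations come out of the same pool.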
Yes, you can use the argument in the app_conf file or in the doradod daemon on Linux MinKNOW installs in order to change how the memory is used.
Hi, could you please add -b/--batchsize to dorado_basecall_server? It runs out of VRAM for us.