Closed — fergsc closed this issue 1 year ago
Hi @fergsc,

There are two batch parameters: `--batch_zmws` and `--batch_size`.

`--batch_size` controls how many examples are processed in one call to the TensorFlow model. You can try increasing it to raise the load on the GPU.

`--batch_zmws` controls how many ZMWs are processed at a time. For each ZMW batch, the preprocessing and inference steps are run sequentially. The `--cpus` flag controls how many processes are used for the preprocessing step; it does not affect the inference part, which runs on the GPU.
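A toy sketch (not DeepConsensus code) may make the two-level batching above concrete: `--batch_zmws` groups ZMWs for each preprocess-then-infer round, while `--batch_size` chunks the resulting examples per model call. The helper names and example counts here are illustrative assumptions, not the actual implementation.

```python
def chunk(items, size):
    """Split a list into consecutive chunks of at most `size` items."""
    return [items[i:i + size] for i in range(0, len(items), size)]

def count_model_calls(zmws, batch_zmws, batch_size):
    """Count how many model calls a given flag combination would imply."""
    model_calls = 0
    for zmw_batch in chunk(zmws, batch_zmws):            # --batch_zmws
        # Preprocessing (CPU, parallelized via --cpus) flattens each
        # ZMW batch into individual examples.
        examples = [ex for zmw in zmw_batch for ex in zmw]
        # Inference (GPU) consumes the examples in --batch_size chunks.
        model_calls += len(chunk(examples, batch_size))
    return model_calls

# 10 ZMWs, each producing 5 examples:
zmws = [[f"ex{z}_{i}" for i in range(5)] for z in range(10)]
print(count_model_calls(zmws, batch_zmws=100, batch_size=25))  # → 2
```

Under this sketch, raising `--batch_size` packs more examples into each GPU call, while `--batch_zmws` mainly governs how much data is staged in host memory per round.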
Hi,

I am trying to understand the computational limits of DeepConsensus. Does the ZMW batch size (e.g. `--batch_zmws=100`) control how much data is passed to the GPU? I.e., if we have a lot of RAM on the GPU, can we increase this?

Thanks.