cement-head closed this issue 3 years ago
For RACON:
only available when built with CUDA:
  -c, --cudapoa-batches <int>
      default: 0
      number of batches for CUDA accelerated polishing per GPU
  -b, --cuda-banded-alignment
      use banding approximation for polishing on GPU
      (only applicable when -c is used)
  --cudaaligner-batches <int>
      default: 0
      number of batches for CUDA accelerated alignment per GPU
  --cudaaligner-band-width <int>
      default: 0
      band width for CUDA alignment; must be >= 0. A non-zero value sets a
      user-defined band width, whereas 0 implies automatic band-width
      determination.
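As a sketch of how these flags combine, the following assembles a Racon polishing command; the thread count, batch counts, and the read/overlap/draft file names are placeholders for illustration, not tuned values:

```shell
# Hypothetical file names; the flags are the CUDA options listed above.
#   -c 4                    : four cudapoa batches per GPU
#   -b                      : banded alignment on the GPU (only meaningful with -c)
#   --cudaaligner-batches 2 : two cudaaligner batches per GPU
RACON_CMD="racon -t 16 -c 4 -b --cudaaligner-batches 2 reads.fastq overlaps.paf draft.fasta"
echo "$RACON_CMD"
```

Leaving both batch counts at their default of 0 keeps everything on the CPU, so at least one of `-c` / `--cudaaligner-batches` must be non-zero for the GPU to be used at all.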
For RAVEN:
only available when built with CUDA:
  -c, --cuda-poa-batches <int>
      default: 0
      number of batches for CUDA accelerated polishing
  -b, --cuda-banded-alignment
      use banding approximation for polishing on GPU
      (only applicable when -c is used)
  -a, --cuda-alignment-batches <int>
      default: 0
      number of batches for CUDA accelerated alignment
Is there some detailed explanation on how to optimise and/or use these parameters for CUDA?
The parameters are the same in Racon and Raven; there are only small differences in the option names, and the band-width option is missing in Raven. Mapping in Racon/Raven does not run on the GPU yet; only alignment and POA in Racon do, which is why nvidia-smi does not show any GPU usage during the mapping phase.
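One quick way to confirm whether the GPU is actually being used is to poll nvidia-smi while the run is in its alignment/POA phase. A minimal probe, assuming `nvidia-smi` is on the PATH (on a machine without it, the script just says so):

```shell
# Query current GPU utilisation and memory use once; run this repeatedly
# while Racon is polishing. During the mapping phase a reading of 0% is
# expected, since mapping stays on the CPU.
if command -v nvidia-smi >/dev/null 2>&1; then
  nvidia-smi --query-gpu=utilization.gpu,memory.used --format=csv
else
  echo "nvidia-smi not found"
fi
```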
@tijyojwad, can you please assist @cement-head with parameter tuning for cuda-poa/cuda-aligner parameters?
P.s. Sorry for my late reply!
If the RAVEN genome assembly is too large (7 Gbp rather than the expected 6 Gbp), would the best place to focus be (a) the polishing step, or (b) the initial assembly parameters?
Have you tried to purge haplotigs with purge_dups? https://github.com/dfguan/purge_dups
Nope. I haven't.
Trying to optimise the CUDA parameters. Using 24 GB GPUs (2x RTX TITANs).
This is essentially the command I'm using, but I'm not sure whether I'm getting proper CUDA usage:
$ raven -t 124 -c 100 -a 100 input_file.fastq > raven_asm.fasta
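As a rough way to size `-c`/`-a` for a given card, one starting heuristic (an assumption for illustration, not an official formula; the real per-batch footprint depends on read length and band width) is to divide usable GPU memory by an estimated gigabytes-per-batch figure:

```shell
# Rough sizing helper. Assumptions: ~90% of card memory is usable, and each
# CUDA batch holds roughly the given number of GB of GPU memory.
batches_per_gpu() {
  mem_gb=$1      # total GPU memory in GB (e.g. 24 for an RTX TITAN)
  per_batch=$2   # assumed GB per batch
  echo $(( mem_gb * 9 / 10 / per_batch ))
}
batches_per_gpu 24 2   # prints 10 for a 24 GB card at 2 GB/batch
```

By this estimate, values like `-c 100 -a 100` would be far beyond what 24 GB can hold at once; starting small (single digits) and increasing while watching nvidia-smi memory use is the safer path.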