kokkos / kokkos-kernels

Kokkos C++ Performance Portability Programming Ecosystem: Math Kernels - Provides BLAS, Sparse BLAS and Graph Kernels

cuBLAS + UVM #775

Open cgcgcg opened 4 years ago

cgcgcg commented 4 years ago

I am trying to use cuBLAS from EMPIRE / Trilinos / Tpetra. Since KokkosKernels lacks specializations for cuBLAS + UVM, that currently isn't possible. I tried adding them, similar to https://github.com/kokkos/kokkos-kernels/pull/759, but there still seems to be a mismatch.
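
For context, here is a minimal sketch (not taken from EMPIRE, names and sizes are made up) of the kind of call involved: a KokkosBlas operation on views allocated in `CudaUVMSpace`, which currently falls back to the native Kokkos implementation rather than dispatching to cuBLAS, since no cuBLAS + UVM specialization exists.

```cpp
#include <Kokkos_Core.hpp>
#include <KokkosBlas1_dot.hpp>

int main(int argc, char* argv[]) {
  Kokkos::initialize(argc, argv);
  {
    // Views living in UVM memory, as Tpetra uses when UVM is enabled.
    using device_t = Kokkos::Device<Kokkos::Cuda, Kokkos::CudaUVMSpace>;
    Kokkos::View<double*, device_t> x("x", 1000);
    Kokkos::View<double*, device_t> y("y", 1000);
    Kokkos::deep_copy(x, 1.0);
    Kokkos::deep_copy(y, 2.0);

    // Only hits the cuBLAS TPL path if a CudaUVMSpace specialization exists;
    // otherwise the native Kokkos kernel runs.
    double result = KokkosBlas::dot(x, y);
    (void)result;
  }
  Kokkos::finalize();
  return 0;
}
```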

Is there a reason for not supporting this use case?

lucbv commented 4 years ago

@cgcgcg you should probably mention which kernels you want to run with cuBLAS. I know we talked about dot, axpy, and axpby. Anything else that seems important to EMPIRE?

vbrunini commented 4 years ago

Could this be handled with something like #763 to make sure any specializations that work for CudaSpace also work for CudaUVMSpace?
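
As a rough illustration of that idea (hypothetical names, not the actual machinery proposed in #763): a trait that treats `CudaSpace` and `CudaUVMSpace` uniformly when deciding whether a cuBLAS path applies, since cuBLAS can read from either memory space.

```cpp
#include <type_traits>
#include <Kokkos_Core.hpp>

// Hypothetical trait: which memory spaces cuBLAS can operate on directly.
template <class MemSpace>
struct cublas_can_access : std::false_type {};

#ifdef KOKKOS_ENABLE_CUDA
template <>
struct cublas_can_access<Kokkos::CudaSpace> : std::true_type {};
template <>
struct cublas_can_access<Kokkos::CudaUVMSpace> : std::true_type {};
#endif

// A TPL specialization guarded by this trait would then cover both spaces,
// e.g. (sketch only):
//
// template <class XView, class YView>
// std::enable_if_t<cublas_can_access<typename XView::memory_space>::value, double>
// dot_tpl(const XView& x, const YView& y);
```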

cgcgcg commented 4 years ago

@lucbv Here are the kernels I see in the kernel logger: dot, mult, nrm2, axpby, update, and scal. However, wouldn't it make sense to enable this for all the BLAS kernels? @vbrunini #763 looks like a great option to me, but I neither know how much work it involves nor am I a KK developer.
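
For reference, a sketch of the KokkosBlas entry points those kernels correspond to, assuming rank-1 double views in `CudaUVMSpace` (coefficients are placeholders):

```cpp
#include <Kokkos_Core.hpp>
#include <KokkosBlas1_dot.hpp>
#include <KokkosBlas1_mult.hpp>
#include <KokkosBlas1_nrm2.hpp>
#include <KokkosBlas1_axpby.hpp>
#include <KokkosBlas1_update.hpp>
#include <KokkosBlas1_scal.hpp>

using device_t = Kokkos::Device<Kokkos::Cuda, Kokkos::CudaUVMSpace>;
using vec_t    = Kokkos::View<double*, device_t>;

void tpetra_like_kernels(const vec_t& x, const vec_t& y, const vec_t& z) {
  double d = KokkosBlas::dot(x, y);            // dot
  double n = KokkosBlas::nrm2(x);              // nrm2
  KokkosBlas::axpby(2.0, x, 3.0, y);           // axpby: y = 2*x + 3*y
  KokkosBlas::scal(y, 0.5, x);                 // scal:  y = 0.5*x
  KokkosBlas::update(1.0, x, 2.0, y, 3.0, z);  // update: z = x + 2*y + 3*z
  KokkosBlas::mult(0.0, z, 1.0, x, y);         // mult:  z(i) = x(i)*y(i)
  (void)d; (void)n;
}
```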

jennloe commented 4 years ago

@lucbv This also came up in my Belos code when I wanted to call gemm with cuBLAS.
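
A sketch of that case as well (assumed Belos-style usage, not taken from the actual code): a dense gemm on `CudaUVMSpace` views, which would likewise need a cuBLAS + UVM specialization to hit the cuBLAS path.

```cpp
#include <Kokkos_Core.hpp>
#include <KokkosBlas3_gemm.hpp>

using device_t = Kokkos::Device<Kokkos::Cuda, Kokkos::CudaUVMSpace>;
using mat_t    = Kokkos::View<double**, Kokkos::LayoutLeft, device_t>;

void block_gemm(const mat_t& A, const mat_t& B, const mat_t& C) {
  // C = 1.0 * A * B + 0.0 * C
  KokkosBlas::gemm("N", "N", 1.0, A, B, 0.0, C);
}
```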