Closed: MalayAgr closed this issue 1 year ago.
Hello, I implemented the CUDA/CPU flexibility myself in this project: https://github.com/ClementPinard/Pytorch-Correlation-extension
Yeah, I ended up going the if route in the end.
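For context, a minimal sketch of what that if route can look like in the binding code. This is not taken from either project; the function names (myop_cpu_forward, myop_cuda_forward) and the WITH_CUDA compile-time flag (typically set from setup.py) are placeholder assumptions.

```cpp
// binding.cpp -- hypothetical extension entry point; all names are placeholders
#include <torch/extension.h>

// Forward declarations of the two backend implementations.
torch::Tensor myop_cpu_forward(torch::Tensor input);   // defined in a plain .cpp file
#ifdef WITH_CUDA
torch::Tensor myop_cuda_forward(torch::Tensor input);  // defined in a .cu file
#endif

torch::Tensor myop_forward(torch::Tensor input) {
  if (input.is_cuda()) {
#ifdef WITH_CUDA
    return myop_cuda_forward(input);
#else
    TORCH_CHECK(false, "myop was compiled without CUDA support");
#endif
  }
  return myop_cpu_forward(input);
}

PYBIND11_MODULE(TORCH_EXTENSION_NAME, m) {
  m.def("forward", &myop_forward, "myop forward (CPU or CUDA)");
}
```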
Hi.
I am interested in writing a custom C++/CUDA extension.
The tutorial here only shows a scenario where you have a pure CUDA kernel, which will not work on a machine that doesn't have CUDA. I'd like to make an extension that automatically uses either a CPU version or the CUDA version, akin to what PyTorch itself does. I was wondering if there is a way to use the dispatcher in PyTorch to accomplish this in the extension (see the sketch after this post). I know this can be done with if statements or #if preprocessor directives, but I'd like a slightly more "automatic" solution.
Thank you!
P.S.: I have already searched the discussion forums for this topic but couldn’t find a solution.
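For reference, the dispatcher route asked about above is what PyTorch exposes through the TORCH_LIBRARY / TORCH_LIBRARY_IMPL macros: the operator schema is defined once, a kernel is registered per backend, and the dispatcher picks the implementation from the device of the input tensors. A minimal sketch, with the namespace myops and the kernel names as placeholder assumptions:

```cpp
// ops.cpp -- dispatcher-based registration; operator and function names are placeholders
#include <ATen/ATen.h>
#include <torch/library.h>

at::Tensor myop_cpu(const at::Tensor& input);   // CPU kernel
at::Tensor myop_cuda(const at::Tensor& input);  // CUDA kernel

// Define the operator schema once.
TORCH_LIBRARY(myops, m) {
  m.def("myop(Tensor input) -> Tensor");
}

// Register the CPU kernel.
TORCH_LIBRARY_IMPL(myops, CPU, m) {
  m.impl("myop", myop_cpu);
}

// Register the CUDA kernel. In a real build this block (and the myop_cuda
// declaration) would live in the .cu translation unit so that CPU-only builds
// still link; it is shown in one file here for brevity.
TORCH_LIBRARY_IMPL(myops, CUDA, m) {
  m.impl("myop", myop_cuda);
}
```

From Python the operator is then reachable as torch.ops.myops.myop(x), and the dispatcher routes to the CPU or CUDA kernel based on x's device, which is the "automatic" behaviour described above.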