LLNL / inq

This is a mirror. Please check our main website on gitlab.
https://gitlab.com/npneq/inq/
Mozilla Public License 2.0

[question] Support for a new backend #4

Open TejaX-Alaghari opened 2 years ago

TejaX-Alaghari commented 2 years ago

Currently INQ can run on NVIDIA GPUs through its CUDA backend. What would be the design requirements for supporting a new backend, say Intel's SYCL or AMD's HIP?

Would migrating/porting the CUDA code to SYCL/HIP and enabling a new configuration in the build be sufficient? If so, what files/modules in INQ and which external libraries would need to be supported? Otherwise, please suggest the expected requirements for enabling this feature.

correaa commented 2 years ago

Dear @TejaX-Alaghari

Thank you for your message.

Indeed, we would love to port the code to AMD and other devices; HIP is within our medium-term plans, with OpenACC and SYCL perhaps later. Work towards that goal, such as contributions, proofs of principle, and implementations, is very welcome.

Currently we use GPU devices through a few different mechanisms in the code. The multidimensional array library (Multi) mostly uses CUDA Thrust behind the scenes, which should be replaced by the HIP version of Thrust when/if that works at some point. Then there is an ad-hoc library of kernels for loops and reductions called gpu::run (written by @xavierandrade); those kernels need to be rewritten or translated to HIP or SYCL. Finally, we use cuBLAS and cuFFT, which need to be replaced by the AMD or Intel equivalents.
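
To give an idea of the kind of translation involved, here is a minimal, self-contained sketch. It is not inq's actual gpu::run interface, just an illustrative elementwise kernel with made-up names and sizes, showing what a simple CUDA loop kernel looks like when expressed with SYCL unified shared memory and parallel_for:

```cpp
#include <sycl/sycl.hpp>
#include <cstdio>
#include <vector>

int main() {
  constexpr std::size_t n = 1024;
  sycl::queue q;  // default device selection

  // device allocation, analogous to cudaMalloc
  double* x = sycl::malloc_device<double>(n, q);
  q.fill(x, 1.0, n).wait();

  // Elementwise kernel. The CUDA version being replaced would look roughly like:
  //   __global__ void scale(double* x, std::size_t n) {
  //     std::size_t i = blockIdx.x * blockDim.x + threadIdx.x;
  //     if (i < n) x[i] *= 2.0;
  //   }
  //   scale<<<(n + 255) / 256, 256>>>(x, n);
  q.parallel_for(sycl::range<1>(n), [=](sycl::id<1> i) {
    x[i] *= 2.0;
  }).wait();

  // copy back and check, analogous to cudaMemcpy
  std::vector<double> host(n);
  q.memcpy(host.data(), x, n * sizeof(double)).wait();
  std::printf("x[0] = %f\n", host[0]);

  sycl::free(x, q);
}
```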

As long as Intel/Khronos can provide similar backends, the work seems doable for SYCL.
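
For the library part, the rough mapping would be cuBLAS to rocBLAS or oneMKL BLAS, and cuFFT to hipFFT/rocFFT or the oneMKL DFT interface. As an illustration only, and assuming I recall the oneMKL SYCL USM interface correctly, a GEMM call analogous to cublasDgemm could look like this (matrix sizes and values are made up):

```cpp
#include <oneapi/mkl.hpp>   // oneMKL SYCL interface
#include <sycl/sycl.hpp>

int main() {
  constexpr std::int64_t n = 64;
  sycl::queue q;

  double* a = sycl::malloc_device<double>(n * n, q);
  double* b = sycl::malloc_device<double>(n * n, q);
  double* c = sycl::malloc_device<double>(n * n, q);
  q.fill(a, 1.0, n * n).wait();
  q.fill(b, 2.0, n * n).wait();
  q.fill(c, 0.0, n * n).wait();

  // C = 1.0 * A * B + 0.0 * C, column-major, no transposes
  oneapi::mkl::blas::column_major::gemm(
      q,
      oneapi::mkl::transpose::nontrans, oneapi::mkl::transpose::nontrans,
      n, n, n,
      1.0, a, n, b, n,
      0.0, c, n).wait();

  sycl::free(a, q);
  sycl::free(b, q);
  sycl::free(c, q);
}
```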

Please note that the development of inq is done on gitlab, so we would prefer any contributions to be sent there.

TejaX-Alaghari commented 2 years ago

@correaa, thanks for the detailed response and for your interest in enabling new backends.

After discussing internally, our team at Intel would like to discuss a proposal to add a SYCL backend to INQ. Please let me know if you're open to discussing this over an official call. If so, let me know where I can send an invite for further discussion.