Open MichaelBroughton opened 3 years ago
Hi @MichaelBroughton, I am the creator of Cirun.io, and "CUDA support" caught my eye.
FWIW, I'll share my two cents. I created a service for exactly this kind of problem: running custom machines (including GPUs) in GitHub Actions: https://cirun.io/
It is already used by multiple open source projects that need GPU support.
It is fairly simple to set up: all you need is a cloud account (AWS or GCP) and a simple YAML file describing what kind of machines you need, and Cirun will spin up ephemeral machines in your cloud for GitHub Actions to run on. It's native to the GitHub ecosystem, which means you can see logs and trigger runs in GitHub's own interface, just like any other GitHub Actions run.
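For a concrete sense of what that YAML looks like, here is a hypothetical sketch of a `.cirun.yml` requesting a GPU runner. The field names and values follow my reading of Cirun's docs and are assumptions; check https://docs.cirun.io/ for the current schema:

```yaml
# .cirun.yml (illustrative sketch, not a verified config)
runners:
  - name: "gpu-runner"
    # Cloud provider to provision the ephemeral machine on.
    cloud: "aws"
    # A GPU instance type; pick one available in your account/region.
    instance_type: "g4dn.xlarge"
    # Image with NVIDIA drivers/CUDA preinstalled; replace with your own.
    machine_image: "your-cuda-enabled-image"
    # Label that GitHub Actions workflows reference via `runs-on`.
    labels:
      - "cirun-gpu-runner"
```

A workflow would then target the runner through the label in its `runs-on` field.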
Also, note that Cirun is free for open source projects; you only pay your cloud provider for machine usage.
With the early stages of CUDA support starting to make their way into qsim, we should investigate registering GPU host devices for our op kernels so that we can give our users GPU-accelerated simulation when it makes sense to do so.
As with our existing ops, we may need two different parallelization schemes: one for small circuits, where we parallelize over the circuits in a batch, and one for larger circuits, where we parallelize across a single large wavefunction. Once the connections have been made, we should do an in-depth study to determine which scenarios give the best performance. This is a large project and will roughly require the following:
- math_ops
- noise ops
- ops
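The small-vs-large trade-off described above can be sketched as a simple dispatch heuristic. The function name and the qubit threshold below are illustrative assumptions, not qsim's or TFQ's actual API:

```python
def choose_parallelization(num_qubits: int, threshold: int = 14) -> str:
    """Pick a parallelization scheme for a batch of circuits.

    Illustrative heuristic only: below `threshold` qubits each state
    vector is small, so it is cheaper to parallelize across the circuits
    in a batch; at or above it, parallelize within one large
    wavefunction (e.g. across amplitudes on the GPU).
    """
    return "per-circuit" if num_qubits < threshold else "per-wavefunction"


# Example: decide the scheme for a batch of mixed-size circuits.
schemes = {n: choose_parallelization(n) for n in (4, 14, 20)}
```

In practice the crossover point would come out of the benchmarking study mentioned above rather than a fixed constant.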