NVIDIA / cuda-quantum

C++ and Python support for the CUDA Quantum programming model for heterogeneous quantum-classical workflows
https://nvidia.github.io/cuda-quantum/

Allow torch tensors and cupy arrays as input to quantum kernels #1480

Open zohimchandani opened 3 months ago

zohimchandani commented 3 months ago

Required prerequisites

Describe the bug

PyTorch tensors and CuPy arrays live in GPU memory, and we need to be able to pass them into quantum kernels directly from the GPU. Can we please add support for these?

The code snippet below works fine for NumPy:

import cudaq
from cudaq import spin
import numpy as np

n_samples = 5
n_params = 2

# Batch of parameter sets in host (CPU) memory
params = np.random.rand(n_samples, n_params)

@cudaq.kernel
def kernel(params: np.ndarray):

    qvector = cudaq.qvector(1)

    rx(params[0], qvector[0])
    ry(params[1], qvector[0])

# Broadcasts over the rows of params: one observe per parameter set
result = cudaq.observe(kernel, spin.z(0), params)

result

It does not work for torch.Tensor inputs:

import cudaq
from cudaq import spin
import torch

n_samples = 5
n_params = 2

# Same batch of parameter sets, now as a torch tensor
params = torch.rand(n_samples, n_params)

@cudaq.kernel
def kernel(params: torch.Tensor):

    qvector = cudaq.qvector(1)

    rx(params[0], qvector[0])
    ry(params[1], qvector[0])

result = cudaq.observe(kernel, spin.z(0), params)

result
CompilerError: 792851843.py:12: error: torch is not a supported type.
     (offending source -> torch.Tensor)
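For reference, a workaround sketch that goes through NumPy today, re-declaring the kernel with the np.ndarray annotation from the first snippet; it pays exactly the device-to-host copy this issue asks to eliminate:

import cudaq
from cudaq import spin
import numpy as np
import torch

@cudaq.kernel
def kernel(params: np.ndarray):
    qvector = cudaq.qvector(1)
    rx(params[0], qvector[0])
    ry(params[1], qvector[0])

device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
params = torch.rand(5, 2, device=device)

# .cpu().numpy() forces the device-to-host copy we want to avoid
result = cudaq.observe(kernel, spin.z(0), params.cpu().numpy())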

A similar error is shown for CuPy arrays:

import cudaq
from cudaq import spin
import cupy as cp

n_samples = 5
n_params = 2

# Parameter sets allocated directly in GPU memory
params = cp.random.rand(n_samples, n_params)

@cudaq.kernel
def kernel(params: cp.ndarray):

    qvector = cudaq.qvector(1)

    rx(params[0], qvector[0])
    ry(params[1], qvector[0])

result = cudaq.observe(kernel, spin.z(0), params)
CompilerError: 1207908829.py:11: error: cp is not a supported type.
     (offending source -> cp.ndarray)
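The CuPy analogue of that workaround would presumably go through cp.asnumpy, which likewise copies device memory back to host (sketch only, reusing the np.ndarray-annotated kernel from above):

import cupy as cp

params = cp.random.rand(5, 2)

# cp.asnumpy performs the device-to-host copy we would like to avoid
result = cudaq.observe(kernel, spin.z(0), cp.asnumpy(params))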

Steps to reproduce the bug

NA

Expected behavior

NA

Is this a regression? If it is, put the last known working version (or commit) here.

Not a regression

Environment

Suggestions

No response

zohimchandani commented 3 months ago

I have a tensor that lives in GPU memory.

I want to be able to access that pointer and pass it to the observe call without a GPU-to-CPU memory transfer.

Something like this:

import torch
import cudaq

device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")

# This tensor lives in GPU memory
thetas = torch.Tensor([1, 2, 3]).to(device)

# 'kernel' and 'hamiltonian' defined elsewhere; observe should accept
# the device-resident tensor directly
cudaq.observe(kernel, hamiltonian, thetas)

We should also support doing this for CuPy arrays, which live in GPU memory:

import cupy as cp

x = cp.random.rand(4)

x.device  # this array lives in GPU memory

cudaq.observe(kernel, hamiltonian, x)
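For what it's worth, PyTorch and CuPy already interoperate zero-copy through the DLPack protocol, so the device pointer is readily available; a sketch of the kind of handoff CUDA-Q could mimic (illustration only, not an existing CUDA-Q API):

import torch
import cupy as cp

t = torch.arange(4, dtype=torch.float32, device="cuda")

# Zero-copy view: the CuPy array aliases the torch tensor's device buffer
c = cp.from_dlpack(t)
assert c.data.ptr == t.data_ptr()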

UPDATE:

It would also be nice to be able to feed the output of cudaq.observe(), which lives in GPU memory, into a PyTorch function without a GPU-to-CPU memory transfer.
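As it stands, the expectation value is exposed as a Python float on the host, so it re-enters PyTorch via a host-to-device copy; a minimal sketch of today's round trip, assuming kernel, hamiltonian, thetas, and device as above:

exp_val = cudaq.observe(kernel, hamiltonian, thetas).expectation()

# host -> device transfer that the requested feature would remove
loss = torch.tensor(exp_val, device=device)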

marwafar commented 2 months ago

Same here. I want to move the result of the PyTorch minimization on the GPU to the observe call on the GPU in CUDA-Q:

import torch
from torchmin import minimize
import cudaq

spin_ham = .....  # molecular Hamiltonian (elided)
init_params = torch.from_numpy(init_params)  # init_params: NumPy array defined elsewhere

@cudaq.kernel
def main_kernel(nelec: int, qubits_num: int, thetas: torch.tensor):

    qubits = cudaq.qvector(qubits_num)

    # Prepare the Hartree-Fock reference state
    for i in range(nelec):
        x(qubits[i])

    cudaq.kernels.uccsd(qubits, thetas, nelec, qubits_num)

def objective_func(parameter_vector):
    cost = cudaq.observe(main_kernel, spin_ham, nelectrons, qubits_num,
                         parameter_vector).expectation()
    return cost

result_vqe = minimize(objective_func, init_params, method='l-bfgs')
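A possible interim workaround (sketch only): convert the tensor to a host-side NumPy array inside the objective, with thetas re-annotated as np.ndarray in the kernel. Note that .detach() severs the autograd graph torchmin's l-bfgs relies on, so this only suits gradient-free or finite-difference minimizers:

def objective_func(parameter_vector):
    # Device-to-host copy, and a break in autograd: a stopgap, not a fix
    host_params = parameter_vector.detach().cpu().numpy()
    return cudaq.observe(main_kernel, spin_ham, nelectrons, qubits_num,
                         host_params).expectation()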
bettinaheim commented 4 weeks ago

Some additional feedback related to torch: