zohimchandani opened this issue 3 months ago
I have a tensor that lives in GPU memory. I want to be able to access its device pointer and pass it to the `observe` call without a GPU-CPU memory transfer. Something like this:
```python
import torch
import cudaq

device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")

# This tensor lives on GPU memory
thetas = torch.Tensor([1, 2, 3]).to(device)

cudaq.observe(kernel, thetas)
```
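Until GPU-resident inputs are supported, the device pointer mentioned above is reachable via `Tensor.data_ptr()`, and the current workaround is an explicit device-to-host copy before the call. A minimal sketch using only PyTorch (the `observe` call itself is left commented, since it assumes the `kernel` from the snippet above):

```python
import torch

device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
thetas = torch.tensor([1.0, 2.0, 3.0], device=device)

# Raw address of the tensor's storage (device memory when on CUDA):
ptr = thetas.data_ptr()

# Current workaround: an explicit device-to-host copy at the call boundary,
# which is exactly the transfer this issue asks to avoid.
host_args = thetas.cpu().tolist()
# cudaq.observe(kernel, host_args)
```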
We should also support this for CuPy arrays, which live in GPU memory:
```python
import cupy as cp

x = cp.random.rand(4)
x.device  # This array lives on GPU memory

cudaq.observe(kernel, x)
```
UPDATE:
It would also be nice to be able to feed the output of `cudaq.observe()`, which lives in GPU memory, into a PyTorch function without a GPU-CPU memory transfer.
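The zero-copy handoff requested here already has a standard mechanism: the DLPack protocol, which PyTorch, CuPy, and NumPy all implement. A sketch with a NumPy buffer standing in for the `observe` output (the same `torch.from_dlpack` call works on a CuPy array resident on the GPU):

```python
import numpy as np
import torch

# Pretend this buffer is the output of cudaq.observe(); with DLPack the
# consumer adopts the producer's memory instead of copying it.
buf = np.array([0.1, 0.2, 0.3])

t = torch.from_dlpack(buf)  # shares memory with `buf`, no copy

buf[0] = 9.0
# The change is visible through the tensor because the memory is shared.
```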
Same here. I want to pass the result of a PyTorch minimization on the GPU straight to the `observe` call in CUDA-Q without leaving the GPU:
```python
import torch
from torchmin import minimize
import cudaq

spin_ham = .....

init_params = torch.from_numpy(init_params)

@cudaq.kernel
def main_kernel(nelec: int, qubits_num: int, thetas: torch.tensor):
    qubits = cudaq.qvector(qubits_num)
    for i in range(nelec):
        x(qubits[i])
    cudaq.kernels.uccsd(qubits, thetas, nelec, qubits_num)

def objective_func(parameter_vector):
    cost = cudaq.observe(main_kernel, spin_ham, nelec, qubits_num,
                         parameter_vector).expectation()
    return cost

result_vqe = minimize(objective_func, init_params, method='l-bfgs')
```
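Until GPU-resident arguments are supported, the usual workaround is an explicit device-to-host hop inside the objective function. A minimal sketch of that boundary pattern, with a hypothetical `observe_expectation` standing in for the `cudaq.observe(...).expectation()` call so the snippet runs on its own:

```python
import numpy as np
import torch

# Hypothetical stand-in for cudaq.observe(...).expectation(): any host-side
# function of a NumPy parameter vector.
def observe_expectation(params: np.ndarray) -> float:
    return float(np.sum(np.cos(params)))

def objective_func(parameter_vector: torch.Tensor) -> torch.Tensor:
    # Explicit GPU->CPU transfer at the call boundary, exactly the copy
    # this issue asks to eliminate.
    params_cpu = parameter_vector.detach().cpu().numpy()
    cost = observe_expectation(params_cpu)
    # Wrap the scalar back into a tensor for the torch-side minimizer.
    return torch.tensor(cost)
```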
Some additional feedback related to torch:
Required prerequisites
Describe the bug
PyTorch tensors and CuPy arrays live in GPU memory, and we need to be able to pass them into quantum kernels directly from the GPU. Can we please add support for these?
The code snippet below works fine for NumPy inputs. It does not work for `torch.Tensor` inputs, and a similar error is shown for CuPy arrays.
Steps to reproduce the bug
NA
Expected behavior
NA
Is this a regression? If it is, put the last known working version (or commit) here.
Not a regression
Environment
Suggestions
No response