NVIDIA / cuda-quantum

C++ and Python support for the CUDA Quantum programming model for heterogeneous quantum-classical workflows
https://nvidia.github.io/cuda-quantum/

Wrap kernel with target #1823

Open zohimchandani opened 4 weeks ago

zohimchandani commented 4 weeks ago


Describe the feature

Currently, set_target applies globally, so if we want to offload different kernels onto different architectures we have to do something like this:

cudaq.set_target("superconducting")

@cudaq.kernel
def kernel_1()
    pass

#execute kernel 1 onto superconducting 

cudaq.set_target("photonic")

@cudaq.kernel
def kernel_2()
    pass

#execute kernel 2 on photonic now that the target has been changed 
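Concretely, the "execute" steps in the comments above correspond to launch calls such as cudaq.sample, which pick up whatever target is globally active at that point. A minimal sketch, using the placeholder target names from the example (they are illustrative, not built-in CUDA-Q target names):

cudaq.set_target("superconducting")
counts_1 = cudaq.sample(kernel_1)  # runs on the currently active "superconducting" target

cudaq.set_target("photonic")
counts_2 = cudaq.sample(kernel_2)  # kernel_2 can only be dispatched once the target has been switched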

This, however, is not natural and is somewhat cumbersome, since everything has to be defined and executed in a fixed sequential order. I think binding a target to a kernel is more natural:

@cudaq.kernel(target='superconducting')
def kernel_1():
    pass

@cudaq.kernel(target='photonic')
def kernel_2():
    pass

# execute kernel_1 on superconducting
# execute kernel_2 on photonic
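Under this proposal, launch calls would no longer need to be interleaved with global target switches; the runtime would look up the target bound to each kernel. A hypothetical sketch (the target keyword on @cudaq.kernel does not exist today):

# each kernel carries its own target, so call order no longer matters
counts_1 = cudaq.sample(kernel_1)  # dispatched to 'superconducting'
counts_2 = cudaq.sample(kernel_2)  # dispatched to 'photonic'
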
bettinaheim commented 2 weeks ago

I don't think we should tie kernels to specific targets. Instead, we could consider allowing a target to be passed to runtime functions like sample and observe.
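A sketch of what that alternative could look like, assuming sample (and analogously observe) accepted an optional target argument; this keyword is hypothetical and not part of the current API:

import cudaq

@cudaq.kernel
def kernel_1():
    pass

# hypothetical 'target' keyword: the backend is resolved per call rather than per kernel
counts = cudaq.sample(kernel_1, target='superconducting')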