NVIDIA / cuda-quantum

C++ and Python support for the CUDA Quantum programming model for heterogeneous quantum-classical workflows
https://nvidia.github.io/cuda-quantum/

random_walk_qpe.cpp not returning correct phase when run through NVQC #1597

Open qci-petrenko opened 2 months ago

qci-petrenko commented 2 months ago

Required prerequisites

Describe the bug

When running the random walk phase estimation example (random_walk_qpe.cpp) in WSL on a Windows 11 machine (CPU simulation) using the 0.7.1 cuda-quantum image, the expected phase is returned: Phase = 0.487390.

When running through NVIDIA's cloud service (--target nvqc), Phase = 0.000000 is returned instead, which is not expected. It is unclear whether this is a GPU simulation issue or a problem with the cloud service.
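For context, random_walk_qpe.cpp relies on the quantum kernel returning a classical double to the host. Below is a minimal sketch of that return-value pattern (a hypothetical example, not the actual source of random_walk_qpe.cpp; the struct name and circuit are illustrative only):

#include <cudaq.h>
#include <cstdio>

// Hypothetical kernel that returns a classical value derived from a measurement.
struct returns_a_double {
  double operator()() __qpu__ {
    cudaq::qubit q;
    h(q);
    // Convert the measurement outcome into a classical double and return it.
    return mz(q) ? 1.0 : 0.0;
  }
};

int main() {
  // Invoke the kernel directly and use its return value on the host.
  double value = returns_a_double{}();
  printf("Value = %lf\n", value);
  return 0;
}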

Steps to reproduce the bug

The full terminal I/O is below:

cudaq@add20b7f7934:~/examples/cpp/other$ nvq++ random_walk_qpe.cpp
cudaq@add20b7f7934:~/examples/cpp/other$ ./a.out
Phase = 0.487390
cudaq@add20b7f7934:~/examples/cpp/other$ nvq++ random_walk_qpe.cpp --target nvqc
cudaq@add20b7f7934:~/examples/cpp/other$ ./a.out
[2024-05-02 00:29:36.675] Submitting jobs to NVQC service with 1 GPU(s). Max execution time: 3600 seconds (excluding queue wait time).

================ NVQC Device Info ================
GPU Device Name: "NVIDIA H100 80GB HBM3"
CUDA Driver Version / Runtime Version: 12.2 / 11.8
Total global memory (GB): 79.1
Memory Clock Rate (MHz): 2619.000
GPU Clock Rate (MHz): 1980.000
==================================================
Phase = 0.000000

Expected behavior

I expect the local simulation and the cloud simulation to return the same result.

Is this a regression? If it is, put the last known working version (or commit) here.

Unknown.

Environment

Suggestions

No response

1tnguyen commented 2 months ago

Hi @qci-petrenko,

Thank you for reporting the issue. The current nvqc target does not yet support quantum kernels that return values (the double returned by random_walk_qpe.cpp is being ignored). We plan to include this feature in the next release.
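Until then, a possible interim workaround (an assumption, not an approach confirmed for this example) is to avoid relying on the kernel's return value and instead measure the qubits and post-process the sampled counts on the host via cudaq::sample, which nvqc does support. The sketch below does not reproduce the adaptive structure of the random walk QPE kernel; it only illustrates the sampling path:

#include <cudaq.h>

// Hypothetical sketch: measure inside the kernel instead of returning a value,
// then post-process the sampled counts on the host.
struct kernel_with_measurement {
  void operator()() __qpu__ {
    cudaq::qubit q;
    h(q);
    mz(q); // results are collected through sampling rather than a return value
  }
};

int main() {
  auto counts = cudaq::sample(kernel_with_measurement{});
  counts.dump(); // reconstruct the quantity of interest from the counts on the host
  return 0;
}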