PennyLaneAI / pennylane

PennyLane is a cross-platform Python library for quantum computing, quantum machine learning, and quantum chemistry. Train a quantum computer the same way as a neural network.
https://pennylane.ai
Apache License 2.0

[BUG] lightning.gpu wrong QAOA results #2745

Closed: duanyh12 closed this issue 2 years ago

duanyh12 commented 2 years ago

Expected behavior

For a QAOA MaxCut problem, I am comparing the QNode results from four different devices:

  1. lightning.qubit shots=None
  2. lightning.gpu shots=None
  3. lightning.gpu shots=1
  4. lightning.gpu shots=8192 (some large enough value)

Code goes:

import pennylane as qml
from pennylane import numpy as np
import networkx as nx
from pennylane import qaoa

n_qubits = 10
edges = 20
seed = 197
params_init = np.ones((2, 2))
dev1 = qml.device("lightning.qubit", wires=n_qubits, shots=None)
dev2 = qml.device("lightning.gpu", wires=n_qubits, shots=None)
dev3 = qml.device("lightning.gpu", wires=n_qubits, shots=1)
dev4 = qml.device("lightning.gpu", wires=n_qubits, shots=8192)

g = nx.gnm_random_graph(n_qubits, edges, seed=seed)
cost_h, mixer_h = qaoa.maxcut(g)

def qaoa_layer(gamma, alpha):
    qaoa.cost_layer(gamma, cost_h)
    qaoa.mixer_layer(alpha, mixer_h)

def qaoa_circuit(params, n_qubits, depth):
    for w in range(n_qubits):
        qml.Hadamard(wires=w)
    gammas = params[0]
    alphas = params[1]
    qml.layer(qaoa_layer, depth, gammas, alphas)

@qml.qnode(dev1)
def cost1(params):
    qaoa_circuit(params, n_qubits, 2)
    return qml.expval(cost_h)        

@qml.qnode(dev2)
def cost2(params):
    qaoa_circuit(params, n_qubits, 2)
    return qml.expval(cost_h)

@qml.qnode(dev3)
def cost3(params):
    qaoa_circuit(params, n_qubits, 2)
    return qml.expval(cost_h) 

@qml.qnode(dev4)
def cost4(params):
    qaoa_circuit(params, n_qubits, 2)
    return qml.expval(cost_h) 

print(params_init)
print(dev1.name)
print(cost1(params_init))
print(dev2.name)
print(cost2(params_init))    
print(dev3.name)
print(cost3(params_init)) 
print(dev4.name)
print(cost4(params_init)) 

I would expect cost1(params_init) to match cost2(params_init) exactly, and cost4(params_init) to match them approximately (up to shot noise). cost3(params_init), being a single-shot estimate, should be an integer eigenvalue of cost_h. Let me know if I am mistaken.
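The "roughly match" expectation above can be made precise: a finite-shot estimate should agree with the analytic value to within a few standard errors, which shrink as 1/sqrt(shots). A minimal NumPy sketch of that check, using the analytic value from the report and a mocked unit-variance noise model (the noise model is an assumption for illustration, not the actual sampling distribution of cost_h):

```python
import numpy as np

rng = np.random.default_rng(197)

exact = -10.2468   # analytic <cost_h> from lightning.qubit, as reported
shots = 8192

# Toy model (assumption): pretend each shot returns the exact value plus
# unit-variance noise, standing in for one eigenvalue sample of cost_h.
samples = exact + rng.standard_normal(shots)
estimate = samples.mean()

# The standard error of the mean shrinks as 1/sqrt(shots); the estimate
# should lie within a few standard errors of the analytic value.
stderr = samples.std(ddof=1) / np.sqrt(shots)
assert abs(estimate - exact) < 5 * stderr
```

With 8192 shots and unit variance, the standard error is about 0.011, so a deviation on the order of -0.0019 vs. -10.25 is far outside any plausible shot-noise explanation.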

Actual behavior

[[1. 1.] [1. 1.]]
Lightning Qubit PennyLane plugin
-10.24677464628668
PennyLane plugin for GPU-backed Lightning device using NVIDIA cuQuantum SDK
-0.0018589484246231427
PennyLane plugin for GPU-backed Lightning device using NVIDIA cuQuantum SDK
-0.0018589484246231427
PennyLane plugin for GPU-backed Lightning device using NVIDIA cuQuantum SDK
-0.0018589484246231427

Additional information

No response

Source code

No response

Tracebacks

No response

System information

Name: PennyLane
Version: 0.25.0.dev0
Summary: PennyLane is a Python quantum machine learning library by Xanadu Inc.
Home-page: https://github.com/XanaduAI/pennylane
Author:
Author-email:
License: Apache License 2.0
Location: /opt/conda/lib/python3.8/site-packages
Requires: appdirs, autograd, autoray, cachetools, networkx, numpy, pennylane-lightning, retworkx, scipy, semantic-version, toml
Required-by: amazon-braket-pennylane-plugin, PennyLane-Lightning, PennyLane-Lightning-GPU, PennyLane-Qchem
Platform info: Linux-4.14.275-142.503.amzn1.x86_64-x86_64-with-glibc2.17
Python version: 3.8.10
Numpy version: 1.21.4
Scipy version: 1.7.3
Installed devices:
- braket.aws.qubit (amazon-braket-pennylane-plugin-1.6.7)
- braket.local.qubit (amazon-braket-pennylane-plugin-1.6.7)
- default.gaussian (PennyLane-0.25.0.dev0)
- default.mixed (PennyLane-0.25.0.dev0)
- default.qubit (PennyLane-0.25.0.dev0)
- default.qubit.autograd (PennyLane-0.25.0.dev0)
- default.qubit.jax (PennyLane-0.25.0.dev0)
- default.qubit.tf (PennyLane-0.25.0.dev0)
- default.qubit.torch (PennyLane-0.25.0.dev0)
- lightning.gpu (PennyLane-Lightning-GPU-0.23.0)
- lightning.qubit (PennyLane-Lightning-0.23.0)

Existing GitHub issues

mlxd commented 2 years ago

Hi @duanyh12! This is due to the lack of full support for MultiRZ in Lightning-GPU, as well as the limited shots>0 support in the 0.23 release. If you install the pre-release 0.24 version (soon to be the full 0.24 release), this works as expected for the exact (analytic) case.
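For context on why MultiRZ support matters here: the QAOA cost layer applies MultiRZ rotations for each ZZ term, and a two-wire MultiRZ(θ) is equivalent to a CNOT–RZ(θ)–CNOT sequence. A quick NumPy check of that textbook identity (this is the standard decomposition, not necessarily Lightning-GPU's internal implementation):

```python
import numpy as np

theta = 0.7

# MultiRZ(theta) on two wires is exp(-i*theta/2 * Z⊗Z); since Z⊗Z is
# diagonal, the matrix exponential is a simple diagonal phase.
zz_diag = np.array([1.0, -1.0, -1.0, 1.0])  # eigenvalues of Z⊗Z
multi_rz = np.diag(np.exp(-1j * theta / 2 * zz_diag))

# Textbook decomposition: CNOT(0→1) · RZ(theta) on the target wire · CNOT(0→1)
cnot = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)
rz_target = np.kron(np.eye(2), np.diag([np.exp(-1j * theta / 2),
                                        np.exp(1j * theta / 2)]))

assert np.allclose(multi_rz, cnot @ rz_target @ cnot)
```

A device that lacks native MultiRZ support must fall back on a decomposition like this; a bug in that path silently corrupts every cost layer, which is consistent with the near-zero expectation values observed.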

As for the variation expected with finite shots (shots>0), let me check.

[[1. 1.]
 [1. 1.]]
Lightning Qubit PennyLane plugin
-10.24677464628668
PennyLane plugin for GPU-backed Lightning device using NVIDIA cuQuantum SDK
-10.246774646286685
PennyLane plugin for GPU-backed Lightning device using NVIDIA cuQuantum SDK
-10.246774646286685
PennyLane plugin for GPU-backed Lightning device using NVIDIA cuQuantum SDK
-10.246774646286685
mlxd commented 2 years ago

This required a quick fix. v0.24 will have this resolved, and will be released tomorrow.
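Once v0.24 is on PyPI, upgrading the plugin should be enough to pick up the fix (package name as published on PyPI; pin or adjust the version as needed):

```shell
# Upgrade the GPU plugin to the release containing the fix.
pip install --upgrade "pennylane-lightning-gpu>=0.24.0"
```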

duanyh12 commented 2 years ago

Confirming the issue is resolved with v0.24. Really appreciate the help!