Closed: ktasha45 closed this issue 2 years ago
I can reproduce the issue too. The data keeps being processed on the CPU rather than the GPU, which makes it impossible to accelerate training with GPU devices.
@ktasha45 These lines are wrong:
```python
import torch
from qiskit import Aer
from qiskit.providers.aer import AerError

if torch.cuda.is_available():
    DEVICE = torch.device('cuda')
else:
    DEVICE = torch.device('cpu')

simulator_gpu = Aer.get_backend('aer_simulator_statevector')
# Bug: set_options expects the string 'GPU' (or 'CPU'), not a torch.device object.
simulator_gpu.set_options(device=DEVICE)
```
If you want to make use of GPU support in Qiskit Aer, you have to write:
```python
from qiskit import Aer
from qiskit.utils import QuantumInstance

backend = Aer.get_backend("aer_simulator")
backend.set_options(device='GPU')
qi = QuantumInstance(backend)
```
Take a look here: https://qiskit.org/documentation/tutorials/simulators/1_aer_provider.html#GPU-Simulation
The issue is fixed in #335; I hope the PR will be merged soon.
Environment
What is happening?
The Torch connector seems to expect all the data to be on the CPU and fails when it is run on a GPU. Using the qiskit-aer-gpu backends does not change the outcome. Since the function is inside the library, we cannot control its output.
How can we reproduce the issue?
Imports and set GPU
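The original code blocks are not reproduced in this copy of the report, so the sketches below are illustrative reconstructions rather than the reporter's exact code. For this step, a device selection along these lines is assumed:

```python
# Hypothetical sketch: pick the torch device, falling back to CPU when no GPU is present.
import torch
from qiskit import Aer
from qiskit.utils import QuantumInstance

DEVICE = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
```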
Set Simulator
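A possible sketch, using the GPU option string from the Aer tutorial linked above (requires the qiskit-aer-gpu package):

```python
# Hypothetical sketch: GPU-enabled Aer statevector simulator wrapped in a QuantumInstance.
backend = Aer.get_backend('aer_simulator_statevector')
backend.set_options(device='GPU')
qi = QuantumInstance(backend)
```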
Set Dataset
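The actual dataset is not shown; any tensor dataset reproduces the behaviour, for example:

```python
# Hypothetical sketch: a small random regression dataset wrapped in a DataLoader.
from torch.utils.data import TensorDataset, DataLoader

X = torch.rand(100, 2)          # 100 samples, 2 features (matches a 2-qubit feature map)
y = torch.rand(100, 1) * 2 - 1  # targets in [-1, 1]
loader = DataLoader(TensorDataset(X, y), batch_size=10, shuffle=True)
```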
Set QNN
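As an illustration, a small QNN of the kind supported at the time (TwoLayerQNN from qiskit-machine-learning) bound to the QuantumInstance above:

```python
# Hypothetical sketch: 2-qubit QNN using the GPU-backed QuantumInstance.
from qiskit.circuit.library import ZZFeatureMap, RealAmplitudes
from qiskit_machine_learning.neural_networks import TwoLayerQNN

num_qubits = 2
qnn = TwoLayerQNN(
    num_qubits,
    feature_map=ZZFeatureMap(num_qubits),
    ansatz=RealAmplitudes(num_qubits, reps=1),
    quantum_instance=qi,
    input_gradients=True,  # needed when the QNN is not the first layer of the model
)
```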
Sample random Net function
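An illustrative hybrid model that wraps the QNN with TorchConnector behind a classical layer and moves the whole model to the chosen device:

```python
# Hypothetical sketch: classical layer followed by the QNN via TorchConnector.
import torch.nn as nn
from qiskit_machine_learning.connectors import TorchConnector

class Net(nn.Module):
    def __init__(self, qnn):
        super().__init__()
        self.fc = nn.Linear(2, 2)
        self.qnn = TorchConnector(qnn)

    def forward(self, x):
        x = self.fc(x)
        return self.qnn(x)

model = Net(qnn).to(DEVICE)
```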
Training Start [Error in this block]
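A standard training loop of this shape is assumed; the reported failure surfaces once the batches and the model live on the GPU while TorchConnector still works with CPU tensors:

```python
# Hypothetical sketch: the device mismatch shows up in the forward/backward pass.
optimizer = torch.optim.Adam(model.parameters(), lr=0.01)
loss_fn = nn.MSELoss()

for epoch in range(5):
    for data, target in loader:
        data, target = data.to(DEVICE), target.to(DEVICE)
        optimizer.zero_grad()
        output = model(data)          # error reported in this block
        loss = loss_fn(output, target)
        loss.backward()
        optimizer.step()
```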
What should happen?
It would be great if GPU simulators could be used for training and testing models. The function could be modified to convert tensors according to the device being used. If there is any error in our method of using the GPU, please do let us know!
We tried the suggestions from this issue, but they did not seem to work: https://github.com/Qiskit/qiskit-machine-learning/issues/286
https://github.com/Qiskit/qiskit-machine-learning/blob/26a5a69580d4f05cb4f0aa1525fc2144fa4413fa/qiskit_machine_learning/connectors/torch_connector.py#L104
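For illustration only (this is not the actual patch in #335, and the helper name is made up): the kind of device-aware conversion being asked for would evaluate the numpy-based QNN on the CPU and return the result on the input tensor's device, for example:

```python
import torch

def device_aware_forward(neural_network, input_tensor, weights):
    # Hypothetical helper, not library code: run the numpy-based QNN on the CPU
    # and hand the result back on the same device as the incoming tensor.
    result = neural_network.forward(
        input_tensor.detach().cpu().numpy(),
        weights.detach().cpu().numpy(),
    )
    return torch.as_tensor(result, dtype=input_tensor.dtype, device=input_tensor.device)
```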
Any suggestions?
No response