Tests are not passing, but the problems are all caused by PyTorch on Windows.
Any ideas? @BrunoLiegiBastonLiegi @renatomello @Simone-Bordoni
Note that this branch bumped PyTorch from 2.3.1 to 2.4.0, which was allowed by the range specified in `pyproject.toml` (itself unchanged).
If we do not support PyTorch 2.4.0 (or not on Windows), we should update `pyproject.toml` accordingly.
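For reference, a minimal sketch of the kind of guard we could add in the meantime, assuming we prefer skipping tests over capping the version in `pyproject.toml` (the marker name and the exact condition are made up):

```python
import sys

import pytest
import torch

# Hypothetical marker: skip tests hitting the PyTorch 2.4.0 regression on Windows
skip_torch24_on_windows = pytest.mark.skipif(
    sys.platform == "win32" and torch.__version__.startswith("2.4"),
    reason="PyTorch 2.4.0 breaks these tests on Windows",
)
```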
I was experiencing similar problems with some tests of qiboml involving `expectation_from_samples` under Windows. What I noticed, in my case, was that a similar discrepancy in the expectation value was present on other platforms as well, but with a smaller magnitude, small enough to be covered by `atol=1e-1`. However, this was happening for all the backends: `PyTorchBackend`, `TensorflowBackend` and `JaxBackend`. In my case it was not really due to the `torch` version, as I was experiencing the same error with 2.3.1. I am not sure this is relevant to this case, though...
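For context, the failing checks in qiboml were essentially of this shape (a minimal sketch, not the actual test: the circuit, observable and number of shots are illustrative):

```python
import numpy as np

from qibo import gates, hamiltonians
from qibo.backends import PyTorchBackend
from qibo.quantum_info import random_clifford
from qibo.symbols import Z

backend = PyTorchBackend()
nqubits = 3
c = random_clifford(nqubits, backend=backend)
c.add(gates.M(*range(nqubits)))
observable = hamiltonians.SymbolicHamiltonian(
    sum(Z(i) for i in range(nqubits)), nqubits=nqubits, backend=backend
)

# exact expectation from the final state vs estimate from measured frequencies
exact = observable.expectation(backend.execute_circuit(c).state())
sampled = observable.expectation_from_samples(
    backend.execute_circuit(c, nshots=10**4).frequencies()
)
# this is the comparison that was only passing up to atol=1e-1
assert np.isclose(float(exact.real), float(sampled.real), atol=1e-1)
```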
@BrunoLiegiBastonLiegi I remembered your troubles with Windows in QiboML (though I also remember you ended up deactivating the tests on Windows in https://github.com/qiboteam/qiboml/pull/20 :disappointed:). That's why I asked you as well.
In any case, thanks for your answer. Let's wait for @renatomello and @Simone-Bordoni, who spent the most time with the torch backend.
Just to add to my previous comment, I tested this simple example:

```python
from qibo import gates, hamiltonians
from qibo.quantum_info import random_clifford
from qibo.symbols import Z
from qibo.backends import PyTorchBackend

backend = PyTorchBackend()

nqubits = 5
c = random_clifford(nqubits, backend=backend)
c.add(gates.M(*range(nqubits)))
observable = hamiltonians.SymbolicHamiltonian(
    sum([(i + 1) ** 2 * Z(i) for i in range(nqubits)]),
    nqubits=nqubits,
    backend=backend,
)
for _ in range(10):
    print(observable.expectation_from_samples(backend.execute_circuit(c).frequencies()))
```
which results in a wildly different expectation value every time:
```
tensor(0.0820+0.j, dtype=torch.complex128, requires_grad=True)
tensor(0.3380+0.j, dtype=torch.complex128, requires_grad=True)
tensor(-0.3940+0.j, dtype=torch.complex128, requires_grad=True)
tensor(0.8480+0.j, dtype=torch.complex128, requires_grad=True)
tensor(1.2320+0.j, dtype=torch.complex128, requires_grad=True)
tensor(0.4580+0.j, dtype=torch.complex128, requires_grad=True)
tensor(0.3940+0.j, dtype=torch.complex128, requires_grad=True)
tensor(-0.6740+0.j, dtype=torch.complex128, requires_grad=True)
tensor(-1.5440+0.j, dtype=torch.complex128, requires_grad=True)
tensor(-0.5180+0.j, dtype=torch.complex128, requires_grad=True)
```
The `TensorflowBackend` seems slightly more stable, but I'd say still problematic:
```
tf.Tensor((-0.25000000000000067+0j), shape=(), dtype=complex128)
tf.Tensor((0.40000000000000013+0j), shape=(), dtype=complex128)
tf.Tensor((-0.4860000000000004+0j), shape=(), dtype=complex128)
tf.Tensor((-0.08000000000000018+0j), shape=(), dtype=complex128)
tf.Tensor((0.784+0j), shape=(), dtype=complex128)
tf.Tensor((0.006000000000000075+0j), shape=(), dtype=complex128)
tf.Tensor((-0.8439999999999996+0j), shape=(), dtype=complex128)
tf.Tensor((0.19599999999999973+0j), shape=(), dtype=complex128)
tf.Tensor((0.07400000000000027+0j), shape=(), dtype=complex128)
tf.Tensor((0.01999999999999999+0j), shape=(), dtype=complex128)
```
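Continuing from the snippet above (same `backend`, `c` and `observable`), one thing that might be worth checking is whether the spread is simply shot noise: if I remember correctly the default is `nshots=1000`, and the fluctuations should shrink roughly as 1/sqrt(nshots). Something like (the shot count is arbitrary):

```python
# same backend, c and observable as in the snippet above
for _ in range(10):
    freqs = backend.execute_circuit(c, nshots=10**5).frequencies()
    print(observable.expectation_from_samples(freqs))
```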
@scarrazza should we keep an open issue about the PyTorch version?
To be fair, if so, we should even move it to Qiboml, since the torch backend won't be here forever...
Yes, I think so.
All modified and coverable lines are covered by tests :white_check_mark:

Project coverage is 99.94%. Comparing base (6d67625) to head (eb19847). Report is 7 commits behind head on master.

:umbrella: View full report in Codecov by Sentry.
Issue opened https://github.com/qiboteam/qiboml/issues/31
Thanks.
Cf. https://github.com/qiboteam/qibo/pull/1411#issuecomment-2270445397