qiboteam / qibo

A framework for quantum computing
https://qibo.science
Apache License 2.0

Update matplotlib #1413

Closed alecandido closed 1 month ago

alecandido commented 1 month ago

Cf. https://github.com/qiboteam/qibo/pull/1411#issuecomment-2270445397

alecandido commented 1 month ago

Tests are not passing, but the problems are all caused by PyTorch on Windows.

Any idea? @BrunoLiegiBastonLiegi @renatomello @Simone-Bordoni

alecandido commented 1 month ago

Note that this branch bumped PyTorch from 2.3.1 to 2.4.0, but it was allowed by the specified range in pyproject.toml (which is unchanged).

If we do not support PyTorch 2.4.0 (or not on Windows), we should update the pyproject.toml.
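If the failures turn out to be specific to torch 2.4.0 on Windows, a hypothetical way to express that in pyproject.toml is Poetry's multiple-constraints syntax with environment markers. The version bounds below are illustrative, not the ranges qibo actually declares:

```toml
# Illustrative sketch only: restrict torch below 2.4 on Windows,
# while allowing newer releases elsewhere. The bounds are assumptions,
# not qibo's actual constraints.
[tool.poetry.dependencies]
torch = [
    { version = ">=2.1.1,<2.4", markers = "sys_platform == 'win32'" },
    { version = ">=2.1.1,<2.5", markers = "sys_platform != 'win32'" },
]
```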

BrunoLiegiBastonLiegi commented 1 month ago

I was experiencing similar problems with some qiboml tests involving expectation_from_samples on Windows. What I noticed, in my case, was that a similar discrepancy in the expectation value was present on other platforms as well, only with a smaller magnitude, small enough to be covered by an atol=1e-1. However, this was happening for all of the backends: PytorchBackend, TensorflowBackend and JaxBackend. In my case it was not really due to the torch version, as I was seeing the same error with 2.3.1. I am not sure this is relevant here, though...

alecandido commented 1 month ago

@BrunoLiegiBastonLiegi I remembered your troubles with Windows in QiboML (though I also remember you ended up deactivating the tests on Windows in https://github.com/qiboteam/qiboml/pull/20 :disappointed:). That's why I asked you as well.

In any case, thanks for your answer. Let's wait for @renatomello and @Simone-Bordoni, who spent the most time with the torch backend.

BrunoLiegiBastonLiegi commented 1 month ago

Just to add to my previous comment: I tested this simple example

```python
from qibo import gates, hamiltonians
from qibo.quantum_info import random_clifford
from qibo.symbols import Z
from qibo.backends import PyTorchBackend

backend = PyTorchBackend()
nqubits = 5
c = random_clifford(nqubits, backend=backend)
c.add(gates.M(*range(nqubits)))
observable = hamiltonians.SymbolicHamiltonian(
    sum((i + 1) ** 2 * Z(i) for i in range(nqubits)),
    nqubits=nqubits,
    backend=backend,
)

for _ in range(10):
    print(observable.expectation_from_samples(backend.execute_circuit(c).frequencies()))
```

which results in a wildly different expectation value every time:

```
tensor(0.0820+0.j, dtype=torch.complex128, requires_grad=True)
tensor(0.3380+0.j, dtype=torch.complex128, requires_grad=True)
tensor(-0.3940+0.j, dtype=torch.complex128, requires_grad=True)
tensor(0.8480+0.j, dtype=torch.complex128, requires_grad=True)
tensor(1.2320+0.j, dtype=torch.complex128, requires_grad=True)
tensor(0.4580+0.j, dtype=torch.complex128, requires_grad=True)
tensor(0.3940+0.j, dtype=torch.complex128, requires_grad=True)
tensor(-0.6740+0.j, dtype=torch.complex128, requires_grad=True)
tensor(-1.5440+0.j, dtype=torch.complex128, requires_grad=True)
tensor(-0.5180+0.j, dtype=torch.complex128, requires_grad=True)
```

The TensorflowBackend seems slightly more stable, but still problematic, I'd say:

```
tf.Tensor((-0.25000000000000067+0j), shape=(), dtype=complex128)
tf.Tensor((0.40000000000000013+0j), shape=(), dtype=complex128)
tf.Tensor((-0.4860000000000004+0j), shape=(), dtype=complex128)
tf.Tensor((-0.08000000000000018+0j), shape=(), dtype=complex128)
tf.Tensor((0.784+0j), shape=(), dtype=complex128)
tf.Tensor((0.006000000000000075+0j), shape=(), dtype=complex128)
tf.Tensor((-0.8439999999999996+0j), shape=(), dtype=complex128)
tf.Tensor((0.19599999999999973+0j), shape=(), dtype=complex128)
tf.Tensor((0.07400000000000027+0j), shape=(), dtype=complex128)
tf.Tensor((0.01999999999999999+0j), shape=(), dtype=complex128)
```
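For what it's worth, part of that spread may simply be shot noise: with a finite number of shots, a sampled estimate of sum_i (i+1)^2 Z(i) fluctuates with a standard error of roughly sqrt(sum_i (i+1)^4 / nshots), which is about 1 for 1000 shots. A back-of-envelope numpy-only sketch (assuming 1000 shots, and crudely modelling each Z(i) outcome as an independent fair coin, which ignores the real stabilizer-state correlations):

```python
import numpy as np

rng = np.random.default_rng(0)
nshots = 1000  # assumed number of shots (not taken from the snippet above)
coeffs = np.array([(i + 1) ** 2 for i in range(5)])  # weights of sum_i (i+1)^2 Z(i)

# Crude model: each qubit's Z outcome is an independent +/-1 coin flip.
# Real stabilizer states are correlated, so this is only an order-of-magnitude check.
estimates = []
for _ in range(10):
    outcomes = rng.choice([-1, 1], size=(nshots, len(coeffs)))
    estimates.append(float((outcomes @ coeffs).mean()))

# Analytic standard error of the estimator under this coin-flip model:
stderr = float(np.sqrt((coeffs ** 2).sum() / nshots))
print(f"predicted shot-noise std ~ {stderr:.3f}")  # ~ 0.989
print("sampled estimates:", np.round(estimates, 3))
```

An observed run-to-run spread of order 1, as in the torch output above, is therefore compatible with plain sampling noise under this model; comparing backends at a fixed seed and shot count would be needed to isolate a genuine backend bug.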
alecandido commented 1 month ago

@scarrazza should we keep an open issue about the PyTorch version?

To be fair, if yes, we should even move it to Qiboml, since the torch backend won't be here forever...

scarrazza commented 1 month ago

Yes, I think so.

codecov[bot] commented 1 month ago

Codecov Report

All modified and coverable lines are covered by tests :white_check_mark:

Project coverage is 99.94%. Comparing base (6d67625) to head (eb19847). Report is 7 commits behind head on master.

Additional details and impacted files

```diff
@@           Coverage Diff           @@
##           master    #1413   +/-   ##
=======================================
  Coverage   99.94%   99.94%
=======================================
  Files          78       78
  Lines       11222    11225    +3
=======================================
+ Hits        11216    11219    +3
  Misses          6        6
```

Flag: unittests, coverage 99.94% (+<0.01%). Flags with carried forward coverage won't be shown.


alecandido commented 1 month ago

> Yes, I think so.

Issue opened https://github.com/qiboteam/qiboml/issues/31

scarrazza commented 1 month ago

Thanks.