qiboteam / qibo

A framework for quantum computing
https://qibo.science
Apache License 2.0
287 stars 58 forks

Matplotlib drawer: fix bug #1437 and add tests #1438

Closed sergiomtzlosa closed 1 week ago

sergiomtzlosa commented 3 weeks ago

Checklist:

alecandido commented 3 weeks ago

Already noticed that the CI is still not running...

scarrazza commented 3 weeks ago

@alecandido I have the suspicion that they changed the event name to pull_request_target: https://docs.github.com/en/actions/writing-workflows/choosing-when-your-workflow-runs/events-that-trigger-workflows#pull_request_target

alecandido commented 3 weeks ago

@scarrazza this is the only related change I've found in the last six months

https://github.com/github/docs/commit/9e3e15773248f53207fd12514e480bad15687cab

But maybe I should keep checking, I may have missed something. The file is the following: https://github.com/github/docs/blob/main/content/actions/writing-workflows/choosing-when-your-workflow-runs/events-that-trigger-workflows.md

(I believe the workflow runner is not even open source, but GitHub docs are, so we can check those, at least)

scarrazza commented 3 weeks ago

Ok, the problem is the if condition in rules.yml: https://github.com/qiboteam/qibo/blob/4f2314ab2cc311894dbb05c862c5c8cc63b75ec0/.github/workflows/rules.yml#L13. This line forces a skip when there is no new push (this PR was opened with a series of pushes already performed). We should add the "PR opened" event to the if condition too.

alecandido commented 3 weeks ago

Correct, thanks for the investigation.

In any case, the run-workflow label is definitely a good solution as well, at least for this PR.

sergiomtzlosa commented 3 weeks ago

Thanks @alecandido @scarrazza for tuning the CI pipeline, I removed the line!!

codecov[bot] commented 3 weeks ago

Codecov Report

All modified and coverable lines are covered by tests :white_check_mark:

Project coverage is 99.94%. Comparing base (218f298) to head (51cd75d). Report is 127 commits behind head on master.

Additional details and impacted files

```diff
@@            Coverage Diff             @@
##           master    #1438      +/-   ##
==========================================
+ Coverage   97.10%   99.94%   +2.83%
==========================================
  Files          81       81
  Lines       11653    11708      +55
==========================================
+ Hits        11316    11701     +385
+ Misses        337        7     -330
```

| [Flag](https://app.codecov.io/gh/qiboteam/qibo/pull/1438/flags?src=pr&el=flags&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=qiboteam) | Coverage Δ | |
|---|---|---|
| [unittests](https://app.codecov.io/gh/qiboteam/qibo/pull/1438/flags?src=pr&el=flag&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=qiboteam) | `99.94% <100.00%> (+2.83%)` | :arrow_up: |

Flags with carried forward coverage won't be shown. [Click here](https://docs.codecov.io/docs/carryforward-flags?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=qiboteam#carryforward-flags-in-the-pull-request-comment) to find out more.

:umbrella: View full report in Codecov by Sentry.
:loudspeaker: Have feedback on the report? Share it here.

sergiomtzlosa commented 2 weeks ago

Hi, thanks @MatteoRobbiati, just a question: which version should I install to run the tests? I have pytest installed and I get these errors when running the tests:

```
> pytest --version
pytest 8.3.2
__main__.py: error: unrecognized arguments: --cov=qibo --cov-append --cov-report=xml --cov-report=html
  inifile: ..\qibo\pyproject.toml
  rootdir: ..\qibo

> pytest test_ui_mpldrawer.py
ERROR: usage: __main__.py [options] [file_or_dir] [file_or_dir] [...]
__main__.py: error: unrecognized arguments: --cov=qibo --cov-append --cov-report=xml --cov-report=html
  inifile: ..\qibo\pyproject.toml
  rootdir: ..\qibo
```

Maybe codecov is not triggering....

scarrazza commented 2 weeks ago

@sergiomtzlosa you can install pytest-cov or simply install this package with poetry install --with tests.

scarrazza commented 2 weeks ago

@sergiomtzlosa could you please fix the tests? Following my instructions above, you should be able to test locally.

sergiomtzlosa commented 2 weeks ago

Thanks a lot @scarrazza! I am now able to run the tests locally; now I'll go ahead with the fix.

MatteoRobbiati commented 2 weeks ago

Thanks for your work @sergiomtzlosa! If you want to avoid pre-commit fixing the code cosmetics after your commits, I would suggest installing and activating it in your local qibo repo. It will standardize the code following the qibo guidelines.

You can install it using pip via pip install pre-commit and then type pre-commit install after entering qibo locally.

You will see pre-commit fixing the code every time you make a commit. After that, you can add the new modifications to git and commit again with the same commit message.

As usual, let me know if you need any further help/info!

And again thanks for the work :)

sergiomtzlosa commented 2 weeks ago

Thanks a lot @MatteoRobbiati!! It is a useful hook indeed.

sergiomtzlosa commented 2 weeks ago

All code is covered by tests; this PR is ready to merge, @MatteoRobbiati.

sergiomtzlosa commented 2 weeks ago

@MatteoRobbiati I can merge the master branch into this one to get the latest changes and solve conflicts. Do you agree, or do you want to solve them yourself?

MatteoRobbiati commented 2 weeks ago

During the meeting we agreed on setting up a somewhat more robust strategy to test the images. What is done so far is perfectly fine, but we just need to be sure that, during the tests, the output of our plots is actually what we expect. To do this, we can:

  1. generate locally the images we are re-generating in the tests;
  2. convert them into NumPy arrays and store those arrays in a folder;
  3. during tests, check whether the new plots, converted into arrays, are actually equal to the target arrays.

Alternatively, we can also think of using the Matplotlib testing tools.
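
A minimal sketch of what points 1-3 could look like (assuming the non-interactive Agg backend; the helper names and the stand-in drawer call are illustrative, not the actual qibo test code):

```python
# Illustrative sketch only (not the actual qibo test code): a stand-in figure
# replaces the qibo matplotlib drawer output, and the Agg backend is fixed so
# the rasterization is reproducible.
import matplotlib

matplotlib.use("Agg")
import matplotlib.pyplot as plt
import numpy as np


def figure_to_array(fig):
    """Rasterize a matplotlib figure into an (H, W, 4) RGBA NumPy array."""
    fig.canvas.draw()
    return np.asarray(fig.canvas.buffer_rgba())


def save_target(fig, path):
    """Steps 1-2: generate the image locally and store its array as a .npy file."""
    np.save(path, figure_to_array(fig))


def check_against_target(fig, path):
    """Step 3: during tests, compare the regenerated plot with the stored target."""
    target = np.load(path)
    assert np.array_equal(figure_to_array(fig), target)


if __name__ == "__main__":
    # Stand-in for the drawer call; in the real tests this would be the circuit plot.
    fig, ax = plt.subplots()
    ax.plot([0, 1], [0, 1])
    save_target(fig, "target_example.npy")
    check_against_target(fig, "target_example.npy")
```

The stored .npy targets would be generated once locally and committed next to the tests, while the check itself would run in the test suite.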

sergiomtzlosa commented 2 weeks ago

> During the meeting we agreed on setting up a somewhat more robust strategy to test the images. What is done so far is perfectly fine, but we just need to be sure that, during the tests, the output of our plots is actually what we expect. To do this, we can:
>
> 1. generate locally the images we are re-generating in the tests;
> 2. convert them into NumPy arrays and store those arrays in a folder;
> 3. during tests, check whether the new plots, converted into arrays, are actually equal to the target arrays.
>
> Alternatively, we can also think of using the Matplotlib testing tools.

Good!!! I will start with the plot arrays comparison...

MatteoRobbiati commented 1 week ago

> Good!!! I will start with the plot arrays comparison...

Hi there! Do you need any help here? It would be ideal to fix this for the next review (Friday). Let me know about the status/if help is needed @sergiomtzlosa.

sergiomtzlosa commented 1 week ago

> > Good!!! I will start with the plot arrays comparison...
>
> Hi there! Do you need any help here? It would be ideal to fix this for the next review (Friday). Let me know about the status/if help is needed @sergiomtzlosa.

Hi @MatteoRobbiati, everything is done in the latest commit. Please let me know if something is missing; you can go ahead and launch the CI.

sergiomtzlosa commented 1 week ago

Hi @MatteoRobbiati, I am wondering why all the tests pass but the macOS test fails or gets cancelled. Can you please shed some light? Is the CI maybe crashing?

MatteoRobbiati commented 1 week ago

> Hi @MatteoRobbiati, I am wondering why all the tests pass but the macOS test fails or gets cancelled. Can you please shed some light? Is the CI maybe crashing?

Sorry!! Missed it! Remember also to ask me for review once you are done with modifications :)

sergiomtzlosa commented 1 week ago

> > Hi @MatteoRobbiati, I am wondering why all the tests pass but the macOS test fails or gets cancelled. Can you please shed some light? Is the CI maybe crashing?
>
> Sorry!! Missed it! Remember also to ask me for review once you are done with modifications :)

Hi @MatteoRobbiati, now I have a big issue: I cannot work on this until next week, so I cannot finish it by Friday. I would ask for some help to get this done on time; otherwise I will fix it next week, sorry!

alecandido commented 1 week ago

Testing exact equality pixel by pixel is definitely a hard constraint, so you should make sure there is no platform-dependent behavior in matplotlib as well (even driven by the environment, i.e. variables and packages available on a certain system).

A different option could be generating the pictures directly in the CI, and caching them somewhere, separately for each platform, skipping the tests if they are not available. However, while this option would be more robust with respect to platform-dependent behaviors, it would be much more expensive (time-wise) to instrument...

If you're able to debug the issue and fix some env var to obtain the same result, so much the better. Otherwise, I'd consider it acceptable even to just test that an image file is produced, whatever the content is, since it would tell us that the code didn't raise/crash (at least).
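
A rough sketch of that minimal fallback, for illustration only (the plain matplotlib figure stands in for the actual drawer output):

```python
# Sketch of the weakest useful check: the drawer runs and writes a non-empty
# image file. The stand-in figure would be replaced by the qibo mpl drawer call.
import matplotlib

matplotlib.use("Agg")  # headless backend, so this also runs in CI without a display
import matplotlib.pyplot as plt


def test_drawer_produces_image(tmp_path):
    # Stand-in figure; in the real test this would come from the circuit drawer.
    fig, ax = plt.subplots()
    ax.plot([0, 1], [0, 1])

    out = tmp_path / "circuit.png"
    fig.savefig(out)

    # Only asserts that an image file was produced, whatever its content.
    assert out.exists() and out.stat().st_size > 0
```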

sergiomtzlosa commented 1 week ago

Finally, I could take some time and I replaced the images with arrays stored in files for the matching, so the PIL library is no longer needed. I also set the matplotlib backend, and I hope this time all tests pass; otherwise I will go with the solutions you suggested, @alecandido @MatteoRobbiati.
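
If the exact array match still turns out to be fragile across platforms, matplotlib's own testing helper compares two image files within a tolerance; a minimal sketch, again with stand-in figures instead of the real drawer calls:

```python
# Sketch of a tolerance-based comparison with matplotlib's testing helper, as an
# alternative to exact array equality. In the actual tests the reference image
# would be generated once and stored alongside the test data.
import matplotlib

matplotlib.use("Agg")
import matplotlib.pyplot as plt
from matplotlib.testing.compare import compare_images


def make_figure():
    # Stand-in for the circuit drawer output.
    fig, ax = plt.subplots()
    ax.plot([0, 1], [1, 0])
    return fig


def test_drawer_matches_reference(tmp_path):
    reference = tmp_path / "reference.png"
    actual = tmp_path / "actual.png"
    make_figure().savefig(reference)
    make_figure().savefig(actual)

    # compare_images returns None when the images agree within `tol`,
    # otherwise it returns an error message.
    assert compare_images(str(reference), str(actual), tol=0.1) is None
```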

sergiomtzlosa commented 1 week ago

Thanks @alecandido @MatteoRobbiati, finally we made it!