Closed sergiomtzlosa closed 1 week ago
Already noticed that the CI is still not running...
@alecandido I have the suspicion that they changed the tag name to `pull_request_target`: https://docs.github.com/en/actions/writing-workflows/choosing-when-your-workflow-runs/events-that-trigger-workflows#pull_request_target
@scarrazza this is the only related change I've found in the last six months
https://github.com/github/docs/commit/9e3e15773248f53207fd12514e480bad15687cab
But maybe I should keep checking, I may have missed something. The file is the following: https://github.com/github/docs/blob/main/content/actions/writing-workflows/choosing-when-your-workflow-runs/events-that-trigger-workflows.md
(I believe the workflow runner is not even open source, but GitHub docs are, so we can check those, at least)
Ok, the problem is the `if` condition in `rules.yml`:
https://github.com/qiboteam/qibo/blob/4f2314ab2cc311894dbb05c862c5c8cc63b75ec0/.github/workflows/rules.yml#L13
This line is forcing a skip due to missing pushes (this PR was opened with a series of pushes already performed).
We should add the PR `opened` event in the `if` condition too.
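As a hedged sketch of the kind of fix being proposed (this is not the actual `rules.yml` content, and the job/step names are hypothetical), the condition would also fire when the PR is opened, not only on pushes:

```yaml
# Hypothetical sketch, not the real qibo rules.yml:
# run the job both on pushes and when a PR is opened,
# so a PR opened with commits already pushed is not skipped.
jobs:
  rules:
    if: >-
      github.event_name == 'push' ||
      github.event.action == 'opened'
    runs-on: ubuntu-latest
    steps:
      - run: echo "workflow triggered"
```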
Correct, thanks for the investigation.
In any case, even the `run-workflow` label is definitely a good solution. For sure for this PR.
thanks @alecandido @scarrazza for tuning the CI pipeline, I removed the line!!
All modified and coverable lines are covered by tests :white_check_mark:
Project coverage is 99.94%. Comparing base (218f298) to head (51cd75d). Report is 127 commits behind head on master.
:umbrella: View full report in Codecov by Sentry.
Hi, thanks @MatteoRobbiati , just a question: which version should I install to run the tests? I had installed pytest and I get these errors when running the tests:
```
> pytest --version
pytest 8.3.2
__main__.py: error: unrecognized arguments: --cov=qibo --cov-append --cov-report=xml --cov-report=html
  inifile: ..\qibo\pyproject.toml
  rootdir: ..\qibo

> pytest test_ui_mpldrawer.py
ERROR: usage: __main__.py [options] [file_or_dir] [file_or_dir] [...]
__main__.py: error: unrecognized arguments: --cov=qibo --cov-append --cov-report=xml --cov-report=html
  inifile: ..\qibo\pyproject.toml
  rootdir: ..\qibo
```
Maybe codecov is not triggering....
@sergiomtzlosa you can install `pytest-cov`, or simply install this package with `poetry install --with tests`.
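For context, the unrecognized `--cov` flags are pytest `addopts` configured in qibo's `pyproject.toml` (visible in the error output above), and they require `pytest-cov` to be installed. A sketch of what such a section typically looks like (the exact contents in qibo may differ):

```toml
# Hypothetical pyproject.toml fragment: pytest appends these flags to every
# invocation, so running pytest without pytest-cov installed fails with
# "unrecognized arguments: --cov=..."
[tool.pytest.ini_options]
addopts = "--cov=qibo --cov-append --cov-report=xml --cov-report=html"
```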
@sergiomtzlosa could you please fix tests? Following my instructions above you should be able to test locally.
thanks a lot @scarrazza ! I am now able to run the tests locally, now I'll go with the fix
Thanks for your work @sergiomtzlosa! If you want to avoid pre-commit fixing the code cosmetics after your commits, I would suggest installing it and activating it in your local qibo repo. It will standardize the code following the qibo guidelines.
You can install it using pip via `pip install pre-commit` and then type `pre-commit install` after entering qibo locally.
You will see pre-commit fixing the code every time you do a commit. After that, you can add the new modifications to git and commit again with the same commit message.
As usual, let me know if you need any further help/info!
And again thanks for the work :)
Thanks a lot @MatteoRobbiati !! it is a useful hook indeed
All code is covered by tests, this PR is ready to merge @MatteoRobbiati
@MatteoRobbiati I can merge the master branch into this one to get the latest changes and solve conflicts. Do you agree, or do you want to solve them yourself?
During the meeting we agreed on setting up a bit more robust strategy to test the images. What is done so far is perfectly fine, but we just need to be sure the output of our plots is actually what we expect during the tests. For doing this, we can:

1. generate locally the images we are re-generating in the tests;
2. convert them into NumPy arrays and store those arrays in a folder;
3. during tests, check whether the new plots, converted into arrays, are actually equal to the target arrays.

Alternatively, we can also think to use the Matplotlib testing tools.
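The steps above could be sketched roughly as follows (file names and the plot content are hypothetical stand-ins; a headless backend is forced so rendering stays deterministic):

```python
# Sketch of the strategy: render a figure to a NumPy array, store it as a
# target, then compare a regenerated plot against it during tests.
import os
import tempfile

import matplotlib
matplotlib.use("Agg")  # headless, deterministic backend
import matplotlib.pyplot as plt
import numpy as np

def fig_to_array(fig):
    """Render a figure and return its RGBA pixel buffer as a NumPy array."""
    fig.canvas.draw()
    return np.asarray(fig.canvas.buffer_rgba()).copy()

def make_plot():
    """Stand-in for the circuit drawing we want to test."""
    fig, ax = plt.subplots(figsize=(2, 2), dpi=50)
    ax.plot([0, 1], [0, 1])
    return fig

# Steps 1-2: generate the image locally and store it as an array.
target_path = os.path.join(tempfile.mkdtemp(), "target_plot.npy")
fig = make_plot()
np.save(target_path, fig_to_array(fig))
plt.close(fig)

# Step 3: in the test, regenerate the plot and compare with the stored target.
fig = make_plot()
regenerated = fig_to_array(fig)
plt.close(fig)
target = np.load(target_path)
assert np.array_equal(target, regenerated)
```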
Good!!! I will start with the plot arrays comparison...
Hi there! Do you need any help here? It would be ideal to fix this before the next review (Friday). Let me know about the status/if help is needed @sergiomtzlosa.
Hi @MatteoRobbiati , everything is done in the latest commit. Please let me know if something is missing; you can launch the CI indeed.
Hi @MatteoRobbiati , I am wondering why all tests are correct, but the macos test fails or gets cancelled. Please can you give some light? Is the CI maybe crashing?
Sorry!! Missed it! Remember also to ask me for review once you are done with modifications :)
Hi @MatteoRobbiati , now I have a big issue: I cannot fix this until next week, so I cannot finish this for Friday. I would ask for some help to get this done on time, otherwise I will fix it next week, sorry!
Testing exact equality pixel by pixel is definitely a hard constraint, so you should make sure there is no platform-dependent behavior in matplotlib as well (even driven by the environment, i.e. variables and packages available on a certain system).
A different option could be generating the pictures directly in the CI, and caching them somewhere, separately for each platform, skipping the tests if they are not available. However, while this option would be more robust from the point of view of platform-dependent behaviors, it would be much more expensive (time-wise) to instrument...
If you're able to debug the issue, and fix some env var to obtain the same result, so much the better. Otherwise, I'd consider acceptable even just testing that an image file is produced, whatever the content is, since it will tell us that the code didn't raise/crash (at least).
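One way to mitigate the platform-dependent behavior described above would be to pin a deterministic backend and compare with a tolerance rather than demanding bit-exact equality (the plot content here is just a stand-in, not qibo's drawer output):

```python
# Sketch: force the Agg backend and compare two renders with a tolerance,
# which absorbs small per-pixel deviations (e.g. from font rendering)
# instead of requiring pixel-by-pixel exact equality.
import matplotlib
matplotlib.use("Agg")  # same headless backend on every platform
import matplotlib.pyplot as plt
import numpy as np

def render():
    """Render a small stand-in plot and return its RGBA buffer as floats."""
    fig, ax = plt.subplots(figsize=(2, 2), dpi=50)
    ax.plot([0, 1], [1, 0])
    fig.canvas.draw()
    arr = np.asarray(fig.canvas.buffer_rgba(), dtype=float).copy()
    plt.close(fig)
    return arr

first, second = render(), render()
# allow small deviations per channel value rather than exact equality
assert np.allclose(first, second, atol=2.0)
```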
Finally, I found some time and replaced the images with arrays stored in files for matching, so the PIL library is no longer needed. I also set the matplotlib backend, and I hope this time all tests pass; otherwise I will go with the solutions you suggested @alecandido @MatteoRobbiati
Thanks @alecandido @MatteoRobbiati , finally we made it!