Closed: opened by suganya-sk, closed 2 years ago
Thank you for opening your first issue in this project! Engagement like this is essential for open source projects! :hugs:
If you haven't done so already, check out Jupyter's Code of Conduct. Also, please try to follow the issue template, as it helps other community members to contribute more effectively.
You can meet the other Jovyans by joining our Discourse forum. There is also an intro thread there where you can stop by and say Hi! :wave:
Welcome to the Jupyter community! :tada:
Hey @suganya-sk
The goal of the snapshots is actually not to check for consistency between versions but between runs with the same version. But I definitely understand that it creates some confusion.
If you look at the GitHub workflow we are using, you will see that we run the tests once to update the snapshots before running them for the statistics computation:
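As a rough local sketch of that two-pass approach (the test command below is an assumption, only Playwright's `-u`/`--update-snapshots` flag comes from this discussion; the real invocation lives in the workflow file):

```bash
# Rough local sketch of the two-pass approach; the exact command used by
# the CI may differ (check the workflow file in this repository).

# Pass 1: refresh the snapshots so they match the installed version.
npx playwright test -u

# Pass 2: run the same tests without -u; this is the pass whose timings
# feed the statistics computation, and the snapshot comparisons now pass.
npx playwright test
```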
So I would encourage you to do the same.
Additional comment: if the versions you are comparing are stock open-source JupyterLab, you can directly fork this repository and execute the action to compute the benchmarks; see https://jupyterlab-benchmarks.readthedocs.io/en/latest/benchmarks/ci.html. In particular, the challenger repo will be jupyterlab/jupyterlab and the git references will be the tags you want to compare.
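For illustration only, dispatching that action from a fork could look something like the snippet below with the GitHub CLI; the workflow file name and the input names are hypothetical placeholders, the real ones are described in the documentation linked above.

```bash
# Hypothetical dispatch of the benchmark workflow from a fork; the
# workflow file name and the input names below are placeholders, not the
# actual ones (see the linked documentation for those).
gh workflow run benchmark.yml \
  -f challenger_project=jupyterlab/jupyterlab \
  -f reference_ref=v3.4.7 \
  -f challenger_ref=v4.0.0a29
```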
For example, I started a benchmark between 3.4.7 and 4.0.0a29; see that job.
For completeness, these are the results of the above-mentioned job.
The switch action can be faster in 3.4.7 because it was using a different approach to switch tabs. That approach was actually reverted because it breaks Chrome-based browsers if one of the tabs contains an iframe (like external documentation or a PDF viewer).
The execution times (in milliseconds) are grouped by test file, test type, and browser. For each case, the following values are computed: min <- [1st quartile - median - 3rd quartile] -> max.
The mean relative comparison is computed with 95% confidence.
v4.0.0a29 = 8e08e4252a79a6c816535b9e80759adff984cad7 | v3.4.7 = 8fea0391c8d9d72f4f2aabb7af3b3064be4fa52e
Hello, thank you for the thorough response here.
> If you look at the GitHub workflow we are using, you will see that we run the tests once to update the snapshots before running them for the statistics computation:
Let me try this and update.
Please correct me if I'm wrong here - I had understood `-u` to mean that it updates snapshots and also sets this run as the expected values in `tests-out/lab-benchmark-expected.json`. If this is so, when I use `-u` before running tests on the challenger, would this not reset the values in `tests-out/lab-benchmark-expected.json` as well?
Indeed it will. On the CI, we copy a backup after the snapshot-update call and before the challenger tests:
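Roughly, the idea is something like this (simplified sketch, not the exact workflow steps; the `tests-out/lab-benchmark-expected.json` path is the one discussed above):

```bash
# Sketch of the backup/restore around the expected values; the path comes
# from this thread, the exact CI steps may differ.

# Reference version: refresh snapshots and expected values, then keep
# a copy of the expected values.
npx playwright test -u
cp tests-out/lab-benchmark-expected.json /tmp/lab-benchmark-expected.json

# Challenger version: refresh its snapshots, then restore the reference
# expectations before the benchmark pass used for the report.
npx playwright test -u
cp /tmp/lab-benchmark-expected.json tests-out/lab-benchmark-expected.json
npx playwright test
```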
Ah, I missed that, sorry. Thank you, this makes it very clear.
Can we consider adding this to the instructions here, to compare a reference and a challenger between which there are expected UI differences?
I would be happy to raise a PR, if that works.
Sure, PRs are welcome.
Raised https://github.com/jupyterlab/benchmarks/pull/125 to add documentation as discussed above.
Description
TL;DR: Snapshots change between 4.0.0a26 and 4.0.0a27. This makes it tedious to generate a benchmark report comparing 3.x and 4.x.
I'm running benchmark tests as documented here, with a slight change: instead of building a local checkout, I use two different versions of JupyterLab installed in the venv, upgrading to go from the reference state (3.4.5) to the challenger state (4.0.0a29, the latest pre-release available at this time).
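Concretely, the switch between the two states is just a pip upgrade in the same venv, roughly as follows (the test invocation itself is omitted):

```bash
# Reference state: JupyterLab 3.4.5 installed in the venv.
pip install "jupyterlab==3.4.5"
# ... run the benchmark tests against 3.4.5 ...

# Challenger state: upgrade the same venv to the pre-release.
pip install --pre "jupyterlab==4.0.0a29"
# ... run the benchmark tests against 4.0.0a29 ...
```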
My aim is to generate benchmark reports, especially the graph depicting actual vs expected time.
However, since the snapshots change between 4.0.0a26 and 4.0.0a27 (the cell toolbar appears in 4.x's snapshots), the tests fail and the report is not generated. Updating the snapshots while running 4.0.0a27 does not fix the issue, since this marks the current version (4.0.0a27, the challenger) as the reference.
I understand that UI changes could be part of major version changes and that I can comment out the pieces of the tests that compare snapshots to only generate the benchmark report.
Is there a cleaner way to compare benchmark results across two versions with UI changes?
Thanks in advance!
Reproduce
Tests fail since snapshots do not match.
Expected behavior
A way to compare benchmark tests from 3.x and 4.x.
Context
I've provided all information relevant to the question; please let me know if anything else is required.