Open hackebrot opened 6 years ago
Any thoughts @mozilla/data-tools?
I get that you're describing the current state of the tests, but seeing the Redash fork in this list doesn't make sense to me given our work to move away from it.
As such, the goal of the UI tests should be amended to deal only with the main repo and our extension redash-stmo, not with the fork.
The ultimate goal of moving our changes from the fork into the main Redash repo is to stop customizing any of its code directly and to use extensions instead. So the focus for the UI tests needs to be on making sure that both Redash itself and our extension work well.
So of the list above, the tests that should be kept are:
The first will also be used by mozilla/redash while we still have the fork, since we're regularly rebasing the fork on top of getredash/redash. The configuration and setup of the tests should be the same, though, and not contain Mozilla-specific tests.
I don't see a reason to keep the release branch testing since all features go through master first and will be tested there.
There is no need to run tests individually in the mozilla/redash-ui-tests repo since it won't match our primary goal of continuous testing during development (with PR integration etc).
About the "Requirement for UI tests": I think the test matrix should be as simple as possible to reduce the effort needed to write UI tests (e.g. so that Redash developers can contribute them as part of feature development and not be bottlenecked by QA resources).
So I don't see a reason to have separate test classifiers at all, or to run only subsets of some tests in certain situations. It just makes it harder to answer the question "Does this change break Redash?" during day-to-day development.
I encourage you to structure the test code into different Python modules and packages, using your classifiers and a structure by domain objects, e.g. `tests/regressions`, `tests/extensions`, `tests/admin`, `tests/queries`, `tests/dashboards` etc.
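For illustration, a layout along those lines might look like the sketch below. The directory and fixture names are my assumptions, not the actual redash-ui-tests structure, and `FakePage` stands in for a real Selenium page object so the sketch is runnable on its own:

```python
# Hypothetical layout (illustrative, not the real project tree):
#
#   tests/
#       conftest.py       # shared fixtures: server URL, login, page objects
#       regressions/      # tests pinned to specific bug reports
#       extensions/       # behaviour added by redash-stmo
#       admin/            # admin pages
#       queries/          # query editor and results
#       dashboards/       # dashboard widgets
#
# A test module under tests/queries/ could then be as small as this.

class FakePage:
    """Minimal stand-in for a page object wrapping a Selenium driver."""

    def __init__(self, elements):
        self._elements = set(elements)

    def has_element(self, css_selector):
        """Report whether the rendered page contains a matching element."""
        return css_selector in self._elements


def test_new_query_page_has_editor():
    # In the real suite this page would come from a fixture in conftest.py.
    page = FakePage({"#query-editor", ".btn-execute"})
    assert page.has_element("#query-editor")
    assert page.has_element(".btn-execute")
```

Grouping by domain keeps each module small, so a contributor adding a feature to, say, dashboards knows exactly where the corresponding UI test belongs.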
> I don't see a reason to keep the release branch testing since all features go through master first and will be tested there.

I think Raphael was referring to the current release of Redash, i.e. the Docker tag `:latest`, which can be very different from the GitHub branch named master.
> There is no need to run tests individually in the mozilla/redash-ui-tests repo since it won't match our primary goal of continuous testing during development (with PR integration etc.).

Can you explain what you mean here a bit more?
> So I don't see a reason to have separate test classifiers at all, or run only subsets of some tests in certain situations. It just makes it harder to answer the question "Does this change break Redash?" during day-to-day development.

Well, as we found, testing against the current master branch can be very different from the latest available Redash release. It may make sense to have logic within the configuration to distinguish between APIs.
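A minimal sketch of what such configuration logic could look like, assuming the server's version string is available (e.g. from Redash's status endpoint). The helper names here are hypothetical, not existing redash-ui-tests code:

```python
def parse_version(version_string):
    """Turn a version string like '4.0.1+b4038' into a comparable tuple."""
    release = version_string.split("+")[0]  # drop any build metadata
    return tuple(int(part) for part in release.split("."))


def requires_at_least(server_version, minimum):
    """True when the server under test is at least the given version."""
    return parse_version(server_version) >= parse_version(minimum)


# In a pytest suite this could drive a skip marker, for example:
#   @pytest.mark.skipif(not requires_at_least(server_version, "4.0"),
#                       reason="feature landed in Redash 4.0")
```

Whether the extra branching is worth it is exactly the trade-off being debated here: it lets one suite cover both `:latest` and master, at the cost of a more complex test matrix.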
> > I don't see a reason to keep the release branch testing since all features go through master first and will be tested there.
>
> I think Raphael was referring to the current release of Redash, i.e. the Docker tag `:latest`, which can be very different from the GitHub branch named master.

To be clear, I referred to this:

> Manual regression tests on mozilla/redash against release candidate
Since our current release process for the Mozilla Redash fork is to have a separate "rc" environment to deploy to for manual QA testing via Madalin, I thought this would be the analogous test situation Raphael referred to.
> > There is no need to run tests individually in the mozilla/redash-ui-tests repo since it won't match our primary goal of continuous testing during development (with PR integration etc.).
>
> Can you explain what you mean here a bit more?
What I mean is that we should not be looking at the redash-ui-tests project on Circle CI to figure out whether a particular change in Redash passes the tests, since by then it's too late and the changes have already been merged there. Instead we should focus on integrating the UI tests into the development workflow of Redash, i.e. into its own continuous testing setup.
I guess we'll still need a way to verify that the tests written in the redash-ui-tests repo pass when they are committed, so a CI setup of its own is needed, and a decision needs to be made about which Redash those tests run against. My primary concern is that any additional layer of setup increases the barrier to entry for maintaining the test cases and is counterproductive because of that.
> > So I don't see a reason to have separate test classifiers at all, or run only subsets of some tests in certain situations. It just makes it harder to answer the question "Does this change break Redash?" during day-to-day development.
>
> Well, as we found, testing against the current master branch can be very different from the latest available Redash release. It may make sense to have logic within the configuration to distinguish between APIs.
IIRC the tests that failed were written before the UI tests were integrated with the Redash repo, so it's not a good case to base a future strategy on. If the UI tests had already been integrated with the Redash repo, they would have failed during the development of the feature and correctly led to the creation of a ticket to update the test cases.
Regarding creating version-specific test cases: I don't see the relevance, since we're always going to track the latest version of Redash as part of Redash development. So testing against the "latest release" of Redash (e.g. right now 4.0, aka "stable") is not useful for the primary goal of the UI tests: answering the question "Does this change break Redash?" It's not "Does this change break Redash 3.0, 3.0.1, 4.0, 4.0.1 etc.?" after all.
So in short: we don't need a complex test matrix for the UI tests, since it increases the risk of unmaintainable test cases and raises the barrier to contribution.
The goal of this issue is to provide an overview over the current UI testing workflow and highlight its limitations. The questions at the end of this description are supposed to help us discuss the requirements for redash-ui-tests.
Repos
Releases getredash/redash
https://hub.docker.com/r/redash/redash/
Releases mozilla/redash
https://hub.docker.com/r/mozilla/redash/
Tests
UI tests
Goal of UI tests
Automatically alert developers to UI regressions in pull requests and verify release candidates.
Requirement for UI tests
Test cases need to be classified as either:
Questions