Closed: ppitonak closed this issue 6 years ago
I think that the way to define and present this task is to first define our end goal. In other words, first define where we want to be, and then define the steps that we will take to reach that goal.
Our goal ought to be:
- The test must run in less than 5 minutes so that it can be used to verify pull requests.
- The test must be run for all pull requests before they are merged into production in order to ensure that critical OSIO components and their integrations are not adversely impacted by any changes made by the pull requests.
IMHO this is extremely difficult/impossible to achieve and shouldn't be our goal at this point
- The test must provide a clean test environment for itself and must "clean up" before it terminates.
A clean environment is (?) a prerequisite for tests, not its goal.
- The test must provide fine-grained information (inc. Jenkins build logs) on any failures that it experiences.
This statement looks like an ideal goal of every test in the world and is not specific to this refactoring.
> The test must be run for all pull requests before they are merged into production in order to ensure that critical OSIO components and their integrations are not adversely impacted by any changes made by the pull requests.
> IMHO this is extremely difficult/impossible to achieve and shouldn't be our goal at this point
This has been a goal since August 2017 - it would be great if we could finally make progress on this.
> The test must provide a clean test environment for itself and must "clean up" before it terminates.
> A clean environment is (?) a prerequisite for tests, not its goal.
Agreed! Let's drop this item.
> The test must provide fine-grained information (inc. Jenkins build logs) on any failures that it experiences.
> This statement looks like an ideal goal of every test in the world and is not specific to this refactoring.
This item is actually a deliverable that multiple people have requested. Solving https://github.com/openshiftio/openshift.io/issues/1790 would help this a great deal!
> The test must be run for all pull requests before they are merged into production in order to ensure that critical OSIO components and their integrations are not adversely impacted by any changes made by the pull requests.
> IMHO this is extremely difficult/impossible to achieve and shouldn't be our goal at this point
> This has been a goal since August 2017 - it would be great if we could finally make progress on this.
I disagree with this. E2E tests by their nature will never be both fast and comprehensive at the same time. If we want to run them for each pull request, we would slow down progress on every PR by a couple of hours. Do we want to trade the speed of development for a more stable production environment? IMHO this is a question for the broader team.
> The test must provide fine-grained information (inc. Jenkins build logs) on any failures that it experiences.
> This statement looks like an ideal goal of every test in the world and is not specific to this refactoring.
> This item is actually a deliverable that multiple people have requested. Solving openshiftio/openshift.io#1790 would help this a great deal!
I'm not saying that it is a bad goal or that it isn't a goal at all. I'm just saying that a high-quality test report is an implicit expectation of every test, so it doesn't need to be stated explicitly.
As part of the re-architecture effort, please make an allowance to gather metrics at various points in the process. These metrics should be stored (for both passing and failing tests) in a location where they can be examined. A reasonable amount of historical data should be maintained. The purpose of this is to allow us to judge current performance of OSIO, especially compared to past days/weeks. This will be useful when we introduce changes to the environment and when we get support tickets complaining about performance. Thank you for considering this idea.
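To make the metrics idea concrete, here is a minimal sketch of what gathering per-step timings could look like. All names here (`MetricsRecorder`, `metrics.csv`, the step labels) are illustrative assumptions, not part of any existing OSIO test code; the point is only that each step's duration and outcome is recorded for both passing and failing runs and appended to a file so history accumulates.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;
import java.time.Instant;
import java.util.ArrayList;
import java.util.List;

// Sketch only: a tiny helper for timing E2E steps and keeping the
// results (pass or fail) as CSV rows for later historical comparison.
public class MetricsRecorder {
    private final List<String> rows = new ArrayList<>();

    /** Times an action and stores (timestamp, step name, duration ms, outcome). */
    public void record(String step, Runnable action) {
        long start = System.nanoTime();
        String outcome = "pass";
        try {
            action.run();
        } catch (RuntimeException e) {
            outcome = "fail"; // failed steps are recorded too, not dropped
        }
        long millis = (System.nanoTime() - start) / 1_000_000;
        rows.add(String.join(",", Instant.now().toString(), step,
                Long.toString(millis), outcome));
    }

    /** Appends collected rows to a CSV so data accumulates across runs. */
    public void flush(Path csv) throws IOException {
        Files.write(csv, rows,
                StandardOpenOption.CREATE, StandardOpenOption.APPEND);
    }

    public static void main(String[] args) throws IOException {
        MetricsRecorder recorder = new MetricsRecorder();
        recorder.record("login", () -> { /* call the login step here */ });
        recorder.record("start-workspace", () -> { /* start a Che workspace */ });
        recorder.flush(Path.of("metrics.csv"));
        System.out.println("recorded " + recorder.rows.size() + " metrics rows");
    }
}
```

Where the CSV lands (workspace artifact, S3 bucket, a metrics service) is a separate decision; the key design choice is that recording is cheap enough to leave enabled on every run.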
What we have today: a smoketest suite that
What is still missing:
I think that the minimum that we can do with Che is to open a workspace and verify that the project has been populated into the workspace. Thx!
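A minimal sketch of that check, under stated assumptions: in a real test the workspace description would be fetched from the Che REST API (e.g. `GET <che-host>/api/workspace/<id>`, shown here only as an assumption), and the naive string matching below stands in for proper JSON parsing. The project name is a hypothetical example.

```java
// Sketch of the minimal Che check proposed above: verify that the
// expected project has been populated into the workspace.
public class WorkspaceProjectCheck {
    /** Naive check that a workspace JSON payload mentions the project. */
    static boolean containsProject(String workspaceJson, String projectName) {
        return workspaceJson.contains("\"name\":\"" + projectName + "\"");
    }

    public static void main(String[] args) {
        // In a real test this payload would come from the Che REST API;
        // the JSON shape below is a simplified assumption.
        String payload = "{\"projects\":[{\"name\":\"vertx-health-check\"}]}";
        System.out.println(containsProject(payload, "vertx-health-check")
                ? "project populated" : "project missing");
    }
}
```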
We talked with the Che QE team today and agreed that we would do a proof of concept for using their test suite in the e2e Jenkins job. The Jenkins job would look like this:
Pros:
Cons:
@ldimaggi @pmacik what part of this is already covered by your tests?
Issue created on che-functional-tests side: https://github.com/redhat-developer/che-functional-tests/issues/211
Question: will we be running the Che (Java) tests as the OSIO E2E tests? Is this the proposed plan?
@ldimaggi yes, that's the plan for PoC
Most of the work is done; additional extensions of the smoke test will be tracked in the linked issues.
Status quo
Proposed solution
Advantages of proposed solution