ReadyTalk / gradle-readytalk-ci

CI lifecycle and conventions plugin for Gradle
MIT License

Add e2e lifecycle support #9


vdice commented 9 years ago

While the ci task works great for building a project and then running the integTest(s), an e2e task is needed (separate from the ci workflow) for managing the lifecycle of the e2e tests, including the test run itself and any setup/teardown of the test environment needed for said run. These tests can be run as part of a dev cycle or in a post-deploy fashion against an external environment.
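
For concreteness, the requested wiring might look roughly like the following in a consuming build.gradle. This is only a sketch; every task name here is illustrative rather than an existing plugin convention:

```groovy
// build.gradle sketch; task names are illustrative, not plugin conventions
task startE2eEnvironment {
    doLast {
        // bring up whatever the run needs: containers, fixtures, test data
    }
}

task stopE2eEnvironment {
    doLast {
        // tear the test environment back down
    }
}

task e2e {
    group = 'verification'
    description = 'Lifecycle task for e2e tests, separate from the ci workflow.'
    dependsOn startE2eEnvironment
    // finalizedBy runs the teardown even when the test run fails
    finalizedBy stopE2eEnvironment
    doLast {
        // the actual test execution would hang off this task
    }
}
```

The `finalizedBy` wiring is the important part: Gradle still runs the teardown task when the tests fail, which is what makes this a managed lifecycle rather than a bare test target.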

dclements commented 9 years ago

Do we want to be facilitating this? http://googletesting.blogspot.com/2015/04/just-say-no-to-more-end-to-end-tests.html

Seems like a largish amount of work to support, in the flexible way the plugin would require, for something that should generally be discouraged. If there were already standardization on deployment metadata that could also be managed locally it might not be quite so onerous, but at the moment it strikes me as a hard problem without significant benefit.

vdice commented 9 years ago

Although I'm personally in agreement with the referenced post, I believe this plugin should facilitate running all automation against a given application (though not necessarily as part of the same lifecycle), regardless of how a particular project balances its tests between the unit, integration, and user-acceptance levels. Any reasonable project will have automated tests at all levels -- let's enable running all of them (though perhaps not all the time, depending on expense :)).

In my experience e2e tests do indeed prove their worth with regard to continuous delivery, especially in the form of a reasonably quick-running user-level smoke suite; therefore, I'd love for this plugin to offer better support for integrating these into a deployment pipeline.

Additionally, I envision that the e2e task requested here could be expanded in the future to optionally provide other beneficial features such as screenshot capture on failure, standardized output/reporting, etc. -- as opposed to every project having to build its own homegrown versions of such features. (We've had to do just that on multiple projects in the past; I would love to contribute such features to this plugin if this moves forward.)

@dougborg and I had the initial conversation this morning, so let's all get together if there are concerns.

dougborg commented 9 years ago

The initial scope of the plugin @vdice and I discussed would be limited to:

  1. Creating a source set for e2e tests to live in - just like what we have for integTest now.
  2. Creating a "lifecycle" e2e task that we can wire up other things to - basically the stuff @vdice was talking about. Right now we run integTest as part of the ci lifecycle task. I envision e2e being a new top-level lifecycle task to be run post-deployment as a separate Jenkins job. A rough sketch of both pieces follows.
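
In build-script terms (Gradle 2.x idiom, with names assumed rather than final), those two pieces might look something like:

```groovy
apply plugin: 'java'

// 1. a dedicated source set for e2e tests, mirroring the integTest convention
sourceSets {
    e2eTest {
        compileClasspath += sourceSets.main.output
        runtimeClasspath += sourceSets.main.output
    }
}

// let the e2e configurations inherit the ordinary test dependencies
configurations {
    e2eTestCompile.extendsFrom testCompile
    e2eTestRuntime.extendsFrom testRuntime
}

// 2. a Test task for the new source set, plus the top-level lifecycle task
task e2eTest(type: Test) {
    testClassesDir = sourceSets.e2eTest.output.classesDir
    classpath = sourceSets.e2eTest.runtimeClasspath
}

task e2e {
    group = 'verification'
    description = 'Top-level lifecycle task for post-deployment e2e tests.'
    dependsOn e2eTest
}
```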

I do agree we should be striving for the conventional test pyramid, and not be focusing our primary testing efforts on e2e tests. I also think we should have a standard place for all levels of testing to live and have some standards around naming, structure and lifecycle.

dclements commented 9 years ago

I'm hearing different things here, which may just be me, and I would like to clarify what is exactly being asked for. There's a world of difference between "I want a separate target and some hooks" and "I want something that can be extended to eventually include features that are provided by testing frameworks such as Selenium and tools such as FIT, but more generically and run from within gradle."

To clarify the most basic set of options, are we talking about:

  1. A task that also manages the deployment itself: standing up the environment, deploying the application, and then running the e2e tests against it?
  2. A suite of post-deployment tests intended to be run against an already-deployed, external environment?
  3. A separate source set and lifecycle task: essentially a glorified integTest, kept apart for semantic cleanliness?

These, to me, represent three different scenarios about what could conceivably be asked for.

If it is the first one, then there needs to be a standardized way of describing how to do a deployment, and, since the management layer is by no means standardized in our industry yet, the plugin will need to be fairly generic in how it is told to do these things. There are some good ways to standardize the actual reports for some categories of application (not all), but the more complex the chain of dependencies around both the client and server components, the more difficult this becomes and the more platform- and library-specific it gets.

If it is the second scenario, then I would argue strongly that the test framework should be encapsulated in a published artifact with attached runner/reporting tooling, rather than being a feature in the same way integTest is used. The reason is that if you aren't running the deploy from within the git directory, you should be able to run your post-deployment tests without compiling them each time and without having to check out a specific git tag to make sure you are running against the correct version.
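
For what it's worth, a separate e2e source set would make the published-artifact route cheap to add later; a minimal sketch, assuming the e2eTest source set sketched earlier in this thread:

```groovy
// package the compiled e2e tests so a deployed environment can be tested
// without checking out and recompiling a specific git tag
task e2eTestJar(type: Jar) {
    classifier = 'e2e-tests'
    from sourceSets.e2eTest.output
}

artifacts {
    archives e2eTestJar
}
```

A thin runner (even plain JUnitCore) could then execute that jar against any target environment.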

My impression—and please correct me on this if I am wrong—is that what @vdice is asking for is some combination of the first two. Basically something that can optionally launch the service (any service) and run a set of client tests (any client) against it and also can be effectively employed as a post-deployment test.

If it is the third version, then that's not a big deal and could be quite useful, but it is really just a glorified version of integTest that we want to keep separate for semantic cleanliness. It doesn't take the place of true post-deployment tests and isn't really that different from using a more generic version of the ExternalResource JUnit Rule along with Categories. These are a natural fit for the style we are already using and would be a useful addition, if one that increases the dependency on gradle (you couldn't run these tests from inside your IDE unless you ran them using gradle). This is my impression of what @dougborg is talking about.
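
For reference, the JUnit pattern mentioned above looks roughly like this; the EndToEndTests marker interface and the resource body are illustrative:

```groovy
import org.junit.ClassRule
import org.junit.Test
import org.junit.experimental.categories.Category
import org.junit.rules.ExternalResource

// marker interface used to tag and filter e2e tests
interface EndToEndTests {}

@Category(EndToEndTests)
class SmokeE2eTest {

    // ExternalResource gives per-class setup/teardown of the environment
    @ClassRule
    public static ExternalResource environment = new ExternalResource() {
        @Override
        protected void before() {
            // verify (or launch) the service the tests will run against
        }

        @Override
        protected void after() {
            // release whatever before() acquired
        }
    }

    @Test
    void homePageResponds() {
        // hit the deployed service and assert on the response
    }
}
```

On the Gradle side, a Test task could then filter on the category with `useJUnit { includeCategories 'EndToEndTests' }`.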

In addition, are we talking about having a simple set of runners or are we talking about doing a full-featured e2e test framework with a DSL and tools such as screen capture built-in at some point in the future? Even if the latter is not in the works yet, if that's the plan it necessitates a change in basic approach.

Basically, what exactly is envisioned for this scope of work, both current and reaching out into the future?

dougborg commented 9 years ago

OK, there is a lot to get through there, but I think we are basically on the same page and share the same basic concerns. I will try to address each point you bring up as best I can. Before I do that, I do want to say the scope of this work will very much be limited to a lifecycle task, a source set, and possibly a couple of other conventions that closely mirror the existing integTest functionality we have defined now in the CI plugin.

I think you did a fairly good job of outlining some scenarios, but I don't really think of them as discrete, exclusive options.

In the case of the videolab project, we do actually have the deployment environment information and the deployment logic there in the repo, so that is one of the scenarios we will be targeting to support with this work. Local environments are defined in the same way so it is possible to run the (already existing) e2e tests on a development environment, a shared test environment, or even against production.
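
One lightweight way to point the same suite at a development, shared test, or production environment would be a project property on the e2eTest task sketched above; the property name and URLs here are purely illustrative:

```groovy
// gradle e2e -Pe2eBaseUrl=https://test.example.com selects the target;
// otherwise the suite falls back to a local environment
e2eTest {
    systemProperty 'e2e.baseUrl',
        project.hasProperty('e2eBaseUrl') ? project.e2eBaseUrl : 'http://localhost:8080'
}
```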

The second scenario is also something we want to support. The pipeline we envisioned for the videolab project includes a separate job that runs the e2e tests against an already-deployed environment. I think we should be able to do both of those without too much trouble, or at least focus on the common parts to support in the CI plugin. Any differences or other project-specific implementation details will be left to the project teams to figure out until we start seeing some common best practices we want to pull out into a plugin. I like the idea you mentioned of having the tests published as artifacts. If anything, I think having the separate source set defined for the e2e tests will make that trivial to add. We can later add the external runner if we want. That would be kind of cool, but it is not work that is in scope for what we are trying to do with this issue.

We are also not setting out to replicate the work of Selenium or FitNesse - we are just trying to find a place for those types of tests to live alongside the rest of each project's codebase. Other tools may end up fitting into this space, but we will still need a place for them to live and get run.

We are really just trying to set up the e2e stuff as glorified integTests, with the semantic differences that they are kept separate from integTests, are run at different stages in the pipeline, and have different dependencies (both in terms of code and of external services that may need to be started, accessed, etc.).

Hopefully that cleared things up more than it hurt :) I would be happy to talk about it more in person as well. I am WFH tomorrow, so we can do it over the videolab stuff, or we can chat in person with whiteboards on Friday.