projectatomic / adb-tests

Placeholder for tests for the ADB while its long-term home is sorted out
GNU General Public License v2.0
7 stars · 19 forks

CI for ADB #19

Closed · dharmit closed this 8 years ago

dharmit commented 8 years ago

CI tests for ADB to fix #195 on projectatomic/adb-atomic-developer-bundle.

More tests will be added in the future; however, not much should change in the Jenkins configuration.

dustymabe commented 8 years ago

I still think the run.py file should be very dumb, similar to this run.py, where you just define a TEST_CMD environment variable. Maybe consider putting the logic for running Ansible elsewhere?

One way to do this would be to have a separate script that sets up Ansible; your TEST_CMD is then defined inside your JJB file and is just TEST_CMD="ansible_setup.sh && test_{provider}.sh" or something like that.
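
A minimal sketch of what such a dumb run.py could look like (hypothetical; it only executes whatever command the JJB job exported as TEST_CMD):

```python
#!/usr/bin/env python
import os
import subprocess
import sys

# All the "dumb" runner does is execute whatever command the JJB job
# exported as TEST_CMD, e.g. "ansible_setup.sh && test_centos.sh".
test_cmd = os.environ.get("TEST_CMD")
if not test_cmd:
    sys.exit("TEST_CMD is not set; nothing to run")

# Run through a shell so compound commands like "a.sh && b.sh" work.
sys.exit(subprocess.call(test_cmd, shell=True))
```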

dharmit commented 8 years ago

@dustymabe run.py is doing more than just setting up the environment and executing the tests. It's actually figuring out which provider needs to be tested as well. Let's say you modify components/centos/centos-k8s-singlenode-setup or components/centos/centos-openshift-setup; it figures out which of the providers needs to be tested.
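
For illustration only, a hypothetical sketch of what that detection amounts to (the git invocation and the path-to-provider mapping are assumptions, not the PR's actual code):

```python
import subprocess

# Path-to-provider mapping (paths taken from the comment above; the
# mapping itself is illustrative).
PROVIDER_PATHS = {
    "components/centos/centos-k8s-singlenode-setup": "kubernetes",
    "components/centos/centos-openshift-setup": "openshift",
}


def providers_to_test():
    """Return the set of providers whose files this PR touches."""
    changed = subprocess.check_output(
        ["git", "diff", "--name-only", "origin/master...HEAD"]
    ).decode().splitlines()
    return {
        provider
        for path, provider in PROVIDER_PATHS.items()
        if any(f.startswith(path) for f in changed)
    }
```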

Having said that, I am open to ideas about how this can be done better. Currently it's hacky and doesn't look good enough to me. Ideally, something like a test that runs exclusively when the custom Vagrantfile for Kubernetes is modified, another exclusively for the OpenShift Vagrantfile, and so on, would make more sense to me.

However, I am not sure how to go about implementing that. Looking for ideas/suggestions.

dustymabe commented 8 years ago

> @dustymabe run.py is doing more than just setting up the environment and executing the tests. It's actually figuring out which provider needs to be tested as well. Let's say you modify components/centos/centos-k8s-singlenode-setup or components/centos/centos-openshift-setup; it figures out which of the providers needs to be tested.

What I'm saying is that there is no reason why run.py (the JJB-embedded Python file) has to figure this out. Let run.py be dumb and just call out to another script (call it test-adb.py) that lives in the "adb-tests" repo. That script should be the one that has the complex logic in it. There are a few benefits to this:

dharmit commented 8 years ago

@dustymabe Gotcha. I'll modify the PR and ping back soon.

dharmit commented 8 years ago

While we're discussing this, I've seen an issue with the current approach. I'll try to explain it here.

Currently, if a PR modifies two custom Vagrantfiles, say for OpenShift and k8s, the script detects that and runs the tests for both Vagrantfiles one after the other.

Let's say the OpenShift Vagrantfile tests are executed first and then the k8s ones. If any of the tests for the former fail but none of the tests for the latter do, the overall run is still considered a pass. Should we have separate tests for the k8s and OpenShift providers? I think that's what you're doing for atomicapp. Would having one Jenkins job trigger another make sense here?
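
To illustrate the masking: if the scripts run back to back and only the final exit status decides the job, an earlier failure disappears. A hypothetical fix is to aggregate every exit code (the test script names are assumptions):

```python
import subprocess

# Hypothetical: run each provider's test script and collect every exit
# code instead of letting the last script's status decide the job.
results = {
    provider: subprocess.call(["./test_{}.sh".format(provider)])
    for provider in ("openshift", "kubernetes")
}

for provider, rc in sorted(results.items()):
    status = "PASS" if rc == 0 else "FAIL (rc={})".format(rc)
    print("{}: {}".format(provider, status))

# Fail the whole job if any provider failed.
raise SystemExit(1 if any(results.values()) else 0)
```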

dkutalek commented 8 years ago

I agree that tests should be told what to test (e.g. via parameters) rather than figuring it out themselves. Not only could the detection logic go wrong, but the software under test could be composed incorrectly, and the test would then wrongly pass instead of failing.

Instead, a test should just assert that the environment is set up as required.

Example: a Vagrantfile for OpenShift that actually starts k8s instead. The test should be instructed to test OpenShift, and it should fail when it asserts that OpenShift is running.
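
A minimal sketch of such an assertion (hypothetical; the systemd-based check and the service name are assumptions for illustration):

```python
import subprocess


def assert_openshift_running():
    """Fail if the box is not actually running OpenShift."""
    # Hypothetical check: inspect the running services on the box; the
    # service name "openshift" is an assumption for illustration.
    units = subprocess.check_output(
        ["systemctl", "list-units", "--type=service", "--no-legend"]
    ).decode()
    assert "openshift" in units, "expected an OpenShift service to be running"
```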

As to how many jobs per PR, I think one is enough. The basic use case would be to run all the tests for every PR; running just some of them based on priority, resources, or files changed is merely an optimization, isn't it? Then all tests should pass or be fixed.

With regard to the test runner, why not use an existing one, e.g. Avocado? https://avocado-framework.readthedocs.org/en/latest/

It can run Python, Bash, or whatever other tests, and report consistently. It is actively developed and used.

The thing is, we will need more features as we go, so why waste our resources developing another runner from scratch?
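
For reference, an Avocado instrumented test is just a Python class; a minimal sketch following the framework's documented pattern (the class name and the "provider" parameter are illustrative, not part of this PR):

```python
from avocado import Test


class ProviderSanity(Test):
    """Minimal Avocado test: log which provider is under test."""

    def test(self):
        # Parameters come from Avocado's variant/parameter system;
        # "provider" here is an illustrative name.
        provider = self.params.get("provider", default="openshift")
        self.log.info("provider under test: %s", provider)
```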

dustymabe commented 8 years ago

> Let's say the OpenShift Vagrantfile tests are executed first and then the k8s ones. If any of the tests for the former fail but none of the tests for the latter do, the overall run is still considered a pass. Should we have separate tests for the k8s and OpenShift providers? I think that's what you're doing for atomicapp. Would having one Jenkins job trigger another make sense here?

For atomicapp we run all of the tests every time, so we don't have this problem. What you could do is have the tests run every time but just make them exit successfully if there is nothing to test.
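
A hypothetical guard at the top of a provider test script along those lines (the git invocation and the component path are assumptions, taken from the discussion above):

```python
import subprocess
import sys

# Run on every PR, but exit successfully when none of this provider's
# files changed (hypothetical detection; path from the discussion above).
changed = subprocess.check_output(
    ["git", "diff", "--name-only", "origin/master...HEAD"]
).decode().splitlines()

if not any(f.startswith("components/centos/centos-openshift-setup") for f in changed):
    print("no OpenShift changes in this PR; nothing to test")
    sys.exit(0)

# ... the actual OpenShift tests would follow here ...
```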

dharmit commented 8 years ago

@dustymabe I've made the modifications you suggested. run.py is now pretty much as dumb as you suggested it should be; the rest has been moved to individual test scripts.

dustymabe commented 8 years ago

hey @dharmit.. do you mind rebasing this on master?

dharmit commented 8 years ago

@navidshaikh @praveenkumar I've incorporated most of the changes you guys suggested. An important point to note: for the tests to work after this PR is closed, I'll have to change the references to my fork and branch to point to projectatomic/adb-tests. This can be done either before or after the merge, whichever you recommend.

For both providers, Kubernetes and OpenShift, there aren't many tests at the moment. In fact, since last evening I have removed one test that checked kubectl output in the OpenShift box. This is obviously something we should expand upon.

dharmit commented 8 years ago

To check whether things are working, you can take a look at the fake PRs I've raised on ADB: this one for OpenShift and this one for Kubernetes. The trigger phrases that work at the moment are buildkubernetes, buildopenshift and buildall.

navidshaikh commented 8 years ago

> I'll have to change the references to my fork and branch to point to projectatomic/adb-tests. This can be done either before or after the merge, whichever you recommend.

I'd recommend you update the references before the merge.

dharmit commented 8 years ago

@navidshaikh @praveenkumar I've updated the references. However, I'll only be able to test them after the PR is merged, and until then there might be a few failing tests. :unamused:

navidshaikh commented 8 years ago

LGTM!

dharmit commented 8 years ago

Thanks! I'm checking whether things are working well after removing the references to my forked repo.