In May 2011, we began writing a significant amount of code which must interact with the OpenQuake database. This introduces some challenges with respect to development/testing environments.
For example, our CI server (http://openquake.globalquakemodel.org) does not yet have the OpenQuake db infrastructure set up. Also, not all developers have a test database set up within their development environments. This creates some challenges with respect to adding database tests to our test suite; basically, if the machine running the tests doesn't have an OpenQuake db properly set up, there will be test failures.
This document is meant as an opening discussion on how to approach this problem for the first iteration. The solution I will outline (which is of course open to suggestions and criticism) represents the bare minimum we can implement with minimal effort, without creating additional scope in the middle of a sprint.
This is meant to be a team discussion; a living document.
Solution outline
Currently, all of our test code is in the tests/ package. Our CI server and test run scripts will look for tests only in this package.
Thus, the first thing we can do is create a db_tests/ package which will only contain test cases which touch the db. These will be run separately from the rest of the test suite.
db_tests/ test files should follow the same naming convention as other tests (i.e., each file should end with _unittest.py, such as hazard_engine_unittest.py, etc.).
We have a run_tests.py file which handles running the entire test suite. A command-line param can be added to this file to invoke the db tests if a developer has the required system/db setup to run them. (The param could be something like --run-db-tests=True/False. This param would default to False.)
Also, nosetests can be invoked by itself to run tests in the db_tests/ package. Like so:
nosetests db_tests/*_unittest.py
Risks
The primary risk here is that this solution does not include an automated run of the db tests. However, I think this is an acceptable risk as long as we compensate with sufficient manual test runs and peer reviews until more time is allowed for a more permanent solution.