yarikoptic opened 3 years ago
@yarikoptic The first two items sound like tox (or its cousin nox) to me....
I think we should start (and forget about instantiating environments for now) with a helper `nwb-healthstatus` tool which would read that specification and "encapsulate" the ideas on layout; at large it would be a generalization of our mighty `__main__`, and provide the following commands (a possible click layout is sketched after the list):

- `sample` - just a click group to capture commands to manipulate samples
  - `create [--overwrite] --environment|-e ENVIRONMENT producers/<producername>/<testcasename>.py` which would create (or skip creation if the file exists and there is no `--overwrite`), for the environment named ENVIRONMENT (created outside), the sample file `~/.cache/nwb-healthstatus/<producername>/<testcasename>.nwb`, from where the `test` command would then load it. Probably not hardcoded to `~/.cache`, but rather with an explicit option like `--samples-path` to specify the location.
  - `test [--environment|-e ENVIRONMENT] producers/<producername>/<testcasename>.py` which would run (in the current environment) the tests in that `.py` file using data for the specified ENVIRONMENT (or in all known environments if no ENVIRONMENT is provided?)
    - `--tap` flag to produce TAP-compliant output. Otherwise it could be JSON output with a list of the individual steps (load, nwb_validate, each individual test) and their success/fail (and maybe the captured outputs/return values for nwb_validate)
- `environment` - click group
  - `list` - list found environments? (primarily for the above invocation of `test` if run without an environment specified)

That would then allow us to run the tests within any specific environment (orchestrated outside, maybe within/by testkraken or whatnot), and also to add such testing to pynwb CI itself, which would just install/clone this repo and run the tests in the current environment, where pynwb would already be installed by the CI.
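A minimal sketch of how that CLI could be laid out with click (command and option names mirror the list above, but nothing here is a settled interface):

```python
# Hypothetical sketch of the proposed nwb-healthstatus CLI using click.
import click


@click.group()
def main():
    """nwb-healthstatus helper tool."""


@main.group()
def sample():
    """Commands to manipulate samples."""


@sample.command()
@click.option("--overwrite", is_flag=True, help="Recreate the sample even if it exists.")
@click.option("-e", "--environment", required=True)
@click.argument("case_file", type=click.Path(exists=True))
def create(overwrite, environment, case_file):
    """Create the sample .nwb file for CASE_FILE in the cache for ENVIRONMENT."""
    ...


@sample.command()
@click.option("-e", "--environment", default=None, help="Default: all known environments.")
@click.option("--tap", is_flag=True, help="Emit TAP-compliant output instead of JSON.")
@click.argument("case_file", type=click.Path(exists=True))
def test(environment, tap, case_file):
    """Run the tests in CASE_FILE against the sample(s) for ENVIRONMENT."""
    ...


@main.group()
def environment():
    """Commands for inspecting environments."""


@environment.command("list")
def list_():
    """List found environments."""
    ...
```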
I guess we should somehow "reference" https://gin.g-node.org/dandi/nwb-healthstatus-samples, which would get `datalad clone`d somewhere into the local cache (we could add auto-updates etc. later on).
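A minimal sketch of that, assuming the datalad Python API is used and assuming a cache location (both are assumptions, not settled choices):

```python
# Sketch: make sure the samples repo is available locally before running
# `sample test`; the cache location here is an assumption.
from pathlib import Path

from datalad.api import clone


def ensure_samples_repo(cache: Path = Path.home() / ".cache" / "nwb-healthstatus") -> Path:
    repo = cache / "nwb-healthstatus-samples"
    if not repo.exists():
        cache.mkdir(parents=True, exist_ok=True)
        clone(source="https://gin.g-node.org/dandi/nwb-healthstatus-samples",
              path=str(repo))
    return repo
```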
And later we can decide on the specification etc.; some other functionality would then just be driving those commands in a given environment.
Ideally, running the tests should be done through some standard test platform/runner, so I guess pytest, since it allows for the simplest form of establishing tests (just an `assert`)... The good old testkraut was producing tests for nose: https://github.com/neurodebian/testkraut/blob/master/testkraut/testcase.py; and I believe boutiques does something similar based on pytest for its pipelines/containers -- I guess coded somewhere within https://github.com/boutiques/boutiques
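For illustration, the "simplest form" with pytest is just a module of plain `test_*` functions with bare asserts (this example is hypothetical and self-contained):

```python
# test_minimal.py -- pytest collects any test_* function; a plain assert,
# with pytest's assertion rewriting, is all that is needed to define a test.
def test_session_description():
    description = "nwb-healthstatus sample"  # stand-in for data read from a sample file
    assert description.startswith("nwb-healthstatus")
```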
@yarikoptic Pseudocode for the architecture I was describing:
```python
from abc import ABC, abstractmethod
from pathlib import Path
from typing import ClassVar, List, Set, Type

import hdmf.container


class SampleCase(ABC):
    #: Set of extensions needed by the sample case
    EXTENSIONS: ClassVar[Set[str]]

    #: Basename for the sample file created by the case
    FILENAME: ClassVar[str]

    @abstractmethod
    def create(self, filepath: Path) -> None:
        """Creates a sample file at the given path"""
        ...

    @abstractmethod
    def test(self, data: hdmf.container.Container) -> None:
        """Takes the data read from a sample file and asserts that it contains what it should"""
        ...


def get_extension_cases(extensions: List[str]) -> List[Type[SampleCase]]:
    """Returns a list of all SampleCase classes that can be run with just the given extensions"""
    ...


class Environment:
    def get_extensions(self) -> List[str]:
        """Returns a list of the extensions in the environment"""
        ...

    def get_sample_directory(self, repo_base: Path) -> Path:
        """Determines the subpath of `repo_base` at which the sample files for this environment are stored"""
        ...
```
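For concreteness, a hypothetical case subclassing the above might look like this (the file contents and names are purely illustrative, not part of the proposal):

```python
# A hypothetical concrete case for the SampleCase sketch above.
from datetime import datetime, timezone
from pathlib import Path

import pynwb


class Simple1Case(SampleCase):
    EXTENSIONS = set()          # needs no NWB extensions
    FILENAME = "simple1.nwb"

    def create(self, filepath: Path) -> None:
        nwbfile = pynwb.NWBFile(
            session_description="nwb-healthstatus sample",
            identifier="simple1",
            session_start_time=datetime(2021, 1, 1, tzinfo=timezone.utc),
        )
        with pynwb.NWBHDF5IO(str(filepath), "w") as io:
            io.write(nwbfile)

    def test(self, data) -> None:
        assert data.identifier == "simple1"
        assert data.session_description == "nwb-healthstatus sample"
```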
We then store the sample case classes in modules in a `cases/` package, and we can use something like the sketch below to iterate through the package & import them all. (Note that we may have to name all the modules with the same prefix or suffix, like `*_case.py`, in order to be able to configure pytest to rewrite their assertions.)
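One possible shape for that iteration, using the stdlib (this assumes the package is importable as `cases`):

```python
# Import every module in the cases/ package so that all SampleCase
# subclasses get defined and can be discovered.
import importlib
import pkgutil

import cases


def import_all_cases() -> None:
    for info in pkgutil.walk_packages(cases.__path__, prefix=cases.__name__ + "."):
        importlib.import_module(info.name)
```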
In `conftest.py`, we add command-line options to pytest for passing a list of extensions and the path to the samples directory for a given environment (a sketch follows the test file below). Then, we create one test file that looks like this:
```python
import pynwb

from cases import get_extension_cases  # wherever that helper ends up living


def pytest_generate_tests(metafunc):
    extensions = ...   ### Get from metafunc.config
    samples_dir = ...  ### Get from metafunc.config
    argvalues = []
    ids = []
    for casecls in get_extension_cases(extensions):
        argvalues.append((casecls, samples_dir / casecls.FILENAME))
        ids.append(casecls.FILENAME)
    metafunc.parametrize("casecls,samplepath", argvalues, ids=ids)


def test_case(casecls, samplepath):
    case = casecls()
    with pynwb.NWBHDF5IO(samplepath, "r", load_namespaces=True) as io:
        nwb = io.read()
        case.test(nwb)
```
The `test` command then calls pytest to run this test file with the appropriate options.
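For completeness, the `conftest.py` options mentioned above could look something like this (the option names `--extensions` and `--samples-dir` are assumptions, not settled names):

```python
# conftest.py -- registers the options that pytest_generate_tests reads
# via metafunc.config; the option names here are hypothetical.
def pytest_addoption(parser):
    parser.addoption(
        "--extensions",
        default="",
        help="Comma-separated list of NWB extensions available in the environment",
    )
    parser.addoption(
        "--samples-dir",
        default=None,
        help="Path to the samples directory for the environment under test",
    )
```

The `test` command could then shell out to something like `pytest --extensions=... --samples-dir=... test_cases.py`, or invoke it programmatically via `pytest.main([...])`.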
by an "extension" in the above you mean "nwb extensions"? and yeap -- I was thinking about similar "glob"ing of the available test cases, and thought just to compose SampleCase
instances on the fly based on those create
and test
methods in the e.g. simple1.py instead of e.g. instead requiring those to import/subclass SampleCase. But I guess they could as well subclass I guess, although then extensions would need to declare dependency on nwb-healthstatus to make such class importable, in comparison to just having two functions, and possibly even using them within their own internal full test_
to be ran "natively" by pytest (without requiring our runner etc).
List of actions we would need to perform, and with which a helper tool could come in handy