Until now, amti's testing has been entirely manual. There have been three reasons for this
decision:
1. amti began as a tool I built for myself. Originally, I wanted to demo the idea of reproducible, version-controlled crowdsourcing pipelines and to make my own crowdsourcing research reproducible. I open-sourced amti so that others could run my pipelines, to spread the idea of reproducible crowdsourcing, and in case people found it helpful.
2. amti is still in initial development (it has not had a 1.0 release), and manual testing means less effort spent maintaining automated tests.
3. Most good tests for amti require mocking the MTurk API or running against the MTurk sandbox, so they're more work to write than usual (which makes keeping the testing burden low during initial development all the more valuable).
That said, as adoption grows it becomes more important to ensure amti's reliability. Moreover, amti needs high-quality automated tests before any possible 1.0 release.
Since amti will still undergo some major refactoring before 1.0 (see Issue #24
for example), it's worth discussing tests people plan to write here beforehand, to avoid
wasting effort.
Here are my thoughts on how to increase test coverage:
- `amti.utils` contains many small utilities that often don't require mocks and whose APIs are unlikely to change. They can be tested first with Python's unittest module.
- Other parts of the CLI work only locally (e.g., `amti.actions.extraction`), don't require mocks, and probably won't change much. They're also good candidates for initial tests.
- Mocked tests help local development because they're fast and don't require a network connection; however, tests that hit the worker sandbox provide the most assurance that the code works correctly. We should prioritize tests against the worker sandbox over mocked ones.
- Tests against the worker sandbox should be run infrequently (e.g., after a commit or when merging a PR) and thus need to be separated from the local unit tests used for quick, frequent feedback during development.
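To make the first point concrete, a test for a pure helper needs nothing beyond Python's unittest module. The sketch below uses a hypothetical `normalize_hit_type_name` function as a stand-in for the kind of small utility that lives in `amti.utils` (it is not amti's actual API):

```python
import unittest


def normalize_hit_type_name(name):
    # Hypothetical stand-in for the kind of small, pure helper found in
    # amti.utils; it is NOT amti's actual API.
    return "-".join(name.strip().lower().split())


class NormalizeHitTypeNameTestCase(unittest.TestCase):
    """Pure helpers need no mocks, so the tests stay short and fast."""

    def test_lowercases_and_hyphenates(self):
        self.assertEqual(
            normalize_hit_type_name("  Image Tagging  "),
            "image-tagging")

    def test_is_idempotent(self):
        once = normalize_hit_type_name("Image Tagging")
        self.assertEqual(normalize_hit_type_name(once), once)
```

Tests like this could live in a `tests/` directory and run with `python -m unittest discover`.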
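For code that talks to MTurk, the standard library's unittest.mock can stand in for the boto3 client, keeping tests fast and offline. The action below is hypothetical (loosely modeled on expiring HITs, not amti's actual code), and while `update_expiration_for_hit` is the boto3 MTurk operation I believe applies here, it's worth double-checking against the boto3 docs:

```python
import datetime
import unittest
from unittest import mock


def expire_hits(client, hit_ids):
    # Hypothetical action in the style of amti.actions (illustrative
    # only): expire each HIT by moving its expiration into the past.
    for hit_id in hit_ids:
        client.update_expiration_for_hit(
            HITId=hit_id,
            ExpireAt=datetime.datetime(2015, 1, 1))


class ExpireHitsTestCase(unittest.TestCase):
    def test_expires_each_hit(self):
        # A MagicMock stands in for the boto3 MTurk client, so the test
        # needs no network connection or AWS credentials.
        client = mock.MagicMock()

        expire_hits(client, ["HIT1", "HIT2"])

        self.assertEqual(client.update_expiration_for_hit.call_count, 2)
        client.update_expiration_for_hit.assert_any_call(
            HITId="HIT1",
            ExpireAt=datetime.datetime(2015, 1, 1))
```

Because the mock records every call, the test can assert on exactly which MTurk operations the action would have invoked.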
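One stdlib-only way to keep the two tiers separate is to gate sandbox tests behind an opt-in environment variable, so a plain `python -m unittest` runs only the fast local tests. `AMTI_RUN_SANDBOX_TESTS` is a hypothetical variable name, not something amti currently defines:

```python
import os
import unittest


# Gate sandbox tests behind an opt-in environment variable so routine
# local runs stay fast and offline; sandbox runs happen explicitly
# (e.g., in CI after a merge). AMTI_RUN_SANDBOX_TESTS is a hypothetical
# name chosen for illustration.
RUN_SANDBOX_TESTS = os.environ.get("AMTI_RUN_SANDBOX_TESTS") == "1"


@unittest.skipUnless(RUN_SANDBOX_TESTS, "set AMTI_RUN_SANDBOX_TESTS=1 to run")
class SandboxSmokeTestCase(unittest.TestCase):
    def test_can_reach_worker_sandbox(self):
        # A real implementation would create a boto3 MTurk client against
        # the sandbox endpoint and call get_account_balance(); the body
        # is elided because it needs AWS credentials.
        self.skipTest("requires AWS credentials for the MTurk sandbox")
```

With this split, `AMTI_RUN_SANDBOX_TESTS=1 python -m unittest` exercises the slow tier, while the default invocation gives quick, frequent feedback during development.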