Experiments related to tuf-conformance testing
Rough design:
- the test runner is given a specific client wrapper as an argument
- The client wrapper is an executable that implements a client-under-test CLI (to be defined)
-- each tested client needs its own wrapper. The wrapper is responsible for performing the requested
TUF client operations and for making the client's metadata cache available in the given location
- test runner runs a single web server that individual tests can attach a simulated TUF repository to
- each test sets up a simulated repository, attaches it to the server, and runs the client-under-test
against that repository. It can then modify the repository state and run the client-under-test again
- the test runner and web server run in the same thread: when a client-under-test process is started,
the web server request handler is pumped manually until the client-under-test finishes
- the idea is that a test can run the client multiple times while modifying the repository state. After
each client execution the test can verify
-- client success/failure
-- the client's internal metadata state (what it considers currently valid metadata), and
-- that the requests the client made were the expected ones
- there should be helpers to make these verifications simple in the tests, but these helpers are still
largely unimplemented
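The single-thread design above can be sketched as follows. This is a minimal, self-contained illustration of the pumping pattern, not the project's actual code: the handler, the `run_client` helper, and the stand-in "client" (a subprocess that fetches one URL) are all illustrative placeholders.

```python
import http.server
import subprocess
import sys

class Handler(http.server.BaseHTTPRequestHandler):
    """Serves a canned response and records request paths for later assertions."""
    requests: list = []

    def do_GET(self):
        Handler.requests.append(self.path)
        body = b"{}"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep test output quiet

def run_client(server, argv):
    """Start the client-under-test, then pump the server until the process exits."""
    proc = subprocess.Popen(argv)
    server.timeout = 0.1  # handle_request() returns periodically even when idle
    while proc.poll() is None:
        server.handle_request()
    return proc.returncode

server = http.server.HTTPServer(("127.0.0.1", 0), Handler)
url = f"http://127.0.0.1:{server.server_address[1]}/root.json"
# Stand-in "client-under-test": a subprocess that fetches a single URL
client_argv = [sys.executable, "-c",
               f"import urllib.request; urllib.request.urlopen('{url}')"]
returncode = run_client(server, client_argv)
server.server_close()
print(returncode, Handler.requests)  # -> 0 ['/root.json']
```

Because server and test run in one thread, the test can inspect `Handler.requests` after each client run to assert that the client made exactly the expected requests.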
Original ideas document
Install
Setting up the virtual environment (recommended):
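The commands to create and activate the environment are not listed here; a conventional setup (assuming Python 3 and a POSIX shell, with the environment in `.venv`) would be:

```shell
# Create and activate a virtual environment (assumed standard layout)
python3 -m venv .venv
source .venv/bin/activate
```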
pip install -e .
Run the tests
How to run the test rig against each client:
python-tuf
make test-python-tuf
go-tuf-metadata
make test-go-tuf