theupdateframework / tuf-conformance

TUF client conformance test suite
MIT License

Add documentation on writing tests #29

Open AdamKorcz opened 2 months ago

AdamKorcz commented 2 months ago

We should add documentation on writing new tests. IMO the most important part of this right now is to describe how to use the test utilities, for example:

jku commented 1 month ago

There is now an example of how to write a test in the README.

I think what's missing is more details on RepositorySimulator, e.g.:

jku commented 1 month ago
  • bump_root_by_one()

this is also a bad name, because there is no indication that it does anything different from repo.root.version += 1

jku commented 2 days ago

Initial version here:


RepositorySimulator and ClientRunner

Most tests in tuf-conformance use RepositorySimulator, a TUF repository implementation designed for this test suite. It makes common repository actions fairly easy but still allows tests to meddle with the repository in ways that are not spec-compliant.

Using RepositorySimulator requires a basic understanding of TUF metadata mechanisms. A typical test setup looks like this:

def test_example(client: ClientRunner, server: SimulatorServer) -> None:
    """example test"""
    init_data, repo = server.new_test(client.test_name)

    # Use repo (RepositorySimulator) to set up the repository the test needs,
    # then use client (ClientRunner) to control and measure the client's actions

Modifying the repository content

"Current" metadata is store in repo.mds but typically it is modified via helper properties, e.g. : repo.root.version = 99. There are also helper methods to make tests a bit easier to write:

Modifications are not visible to clients until they are published with repo.publish().

Publishing metadata to make it available to clients

Metadata versions must be explicitly made available to clients (with the exception of the first versions of the top-level metadata roles, which RepositorySimulator publishes at initialization). As an example, here we publish new versions of a delegated role "somerole" as well as the snapshot and timestamp roles:

repo.publish(["somerole", Snapshot.type, Timestamp.type])

Publishing will bump the version number in the role's metadata, sign the metadata, and store a copy of the serialized bytes in repo.signed_mds (which is where clients are served data from).

There are two side effects of publishing:

  • publishing a targets role updates the version referenced in snapshot's metadata
  • publishing snapshot updates the snapshot version referenced in timestamp's metadata

This makes the default case shown above work out of the box: publishing "somerole" updates snapshot so it is ready for publishing, and publishing snapshot updates timestamp so it is ready for publishing.
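The cascade described above can be illustrated with a minimal self-contained model. This is a sketch only, not the actual RepositorySimulator code: the role names, starting versions, and bookkeeping structures are simplified assumptions for illustration.

```python
# Minimal model of the publish cascade: publishing a targets role makes
# snapshot need publishing, and publishing snapshot makes timestamp need
# publishing. NOT the real RepositorySimulator implementation.
versions = {"somerole": 0, "snapshot": 0, "timestamp": 0}
published = {}   # role -> latest published version (what clients can fetch)
dirty = set()    # roles whose content changed and still need publishing


def publish(roles):
    for role in roles:
        versions[role] += 1               # bump the version and "sign"
        published[role] = versions[role]  # serialized copy served to clients
        dirty.discard(role)
        if role == "somerole":
            dirty.add("snapshot")    # snapshot now references the new version
        elif role == "snapshot":
            dirty.add("timestamp")   # timestamp now references the new snapshot


publish(["somerole", "snapshot", "timestamp"])
```

After this call every role has been published exactly once and nothing is left dirty, which is why the single repo.publish([...]) line above is enough in the common case.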

In some cases a test will want to modify the published, signed metadata: the bytes in repo.signed_mds can be modified at will.
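As a sketch of what such tampering can look like, the snippet below edits a stand-in metadata blob with the standard library. The blob itself is a hypothetical, heavily simplified example; it only mirrors the general TUF shape of "signed" content with detached "signatures".

```python
import json

# Hypothetical stand-in for one entry in repo.signed_mds; real TUF
# metadata carries much more content than this.
blob = json.dumps(
    {"signed": {"_type": "timestamp", "version": 3}, "signatures": []}
).encode()

# Tamper with the already-published bytes without re-signing: a
# conformant client should reject the result, since the signatures
# no longer match the modified "signed" content.
md = json.loads(blob)
md["signed"]["version"] = 99
blob = json.dumps(md).encode()
```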

Measuring client actions

There are a few different measurements that tests can use to verify a client's conformance:

  1. Return value of client.refresh() and client.download_target(): See CLIENT-CLI
  2. Clients trusted metadata state
    • client.version(Root.type) == 1
    • client.trusted_roles() == [(Root.type, 1), (Timestamp.type, 1)]
  3. Repository request statistics: repo.metadata_statistics and repo.artifact_statistics
    • note that these count requests, so they may include 404s
    • these data structures can be cleared in the middle of a test to make comparisons easier for a later step
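To make the last two points concrete, here is a self-contained sketch of request bookkeeping that counts 404s and is cleared mid-test. The `stats`/`serve` names are hypothetical; the real shape of repo.metadata_statistics may differ.

```python
from collections import Counter

# Hypothetical request bookkeeping, illustration only.
stats = Counter()


def serve(path, exists=True):
    stats[path] += 1               # every request is counted...
    return 200 if exists else 404  # ...including those answered with 404


serve("2.root.json", exists=False)  # e.g. a client probing for a newer root
serve("timestamp.json")

# Clear mid-test so a later step can compare against a clean slate.
stats.clear()
serve("timestamp.json")
```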