pactflow / roadmap

Public Pactflow issue tracker and roadmap

OAS CLI Testing tool (Evidence Provider for Provider Driven Contracts) #31

Open mefellows opened 3 years ago

mefellows commented 3 years ago

Currently, we rely on the Provider API to publish an OAS using an external tool of their choosing (see #4 for common tools), and rely on that process to ensure the provider itself is compatible with the OAS.

We believe the current OAS testing ecosystem has limitations that could be alleviated with the added information from a consumer contract.

In short, the challenge:

1. Ambiguity: schemas are abstract, which can lead to misinterpretations. For instance, an OAS may define that an API can return a 400, a 403 or a 200, but you cannot say for certain which specific set of inputs will result in each status code by looking at the specification alone.
2. Coverage: for the same reason, it's easy to check if a system is compatible with a schema, but it's very difficult to be sure it fully implements the spec. We know of no tool that can currently do this for OAS.

Having the consumer usage information for comparison means we can be sure that any ambiguities in the schema are accounted for.

This feature involves:

See also:

antoniogamiz commented 2 years ago

Any update on this?

mefellows commented 2 years ago

Hi Antonio! No update on this feature at the moment, we're still reviewing and validating the concept and problems our users face in this space. Please feel free to share your thoughts here as to what you'd like to see in such a tool, ideally including what tools you currently use and why they aren't sufficient.

bethesque commented 10 months ago

Some insights we have gained from performing runtime validation of the Pactflow OAS against the Pactflow application, using the openapi_first gem:

Our approach was to validate the OAS by using middleware in our feature specs to ensure that every request and response payload matched the OAS. The feature specs run in memory (that is, there is no running process on a port). This is preferable to having to start up a real process, as it allows the same fine-grained control over mocking and transaction clean-up that our unit tests normally have, and gives us much quicker feedback. If I were to write a feature wish list for our OAS validation tool, I would want it to be able to hook into unit tests, rather than having to run against an external process.
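To illustrate the middleware idea, here is a minimal sketch of a Rack-style middleware that validates every response body against a schema. A real setup would delegate to a gem such as openapi_first; the class name and the toy `schema_valid?` check below are stand-ins for illustration only, not the gem's actual API.

```ruby
require "json"

# Illustrative sketch: wrap an in-memory Rack app and fail the test
# immediately if a response payload does not match the schema.
class OasValidationMiddleware
  def initialize(app, schema:)
    @app = app
    @schema = schema
  end

  def call(env)
    status, headers, body = @app.call(env)
    payload = JSON.parse(body.join)
    unless schema_valid?(payload, @schema)
      raise "Response does not match OAS schema: #{payload.inspect}"
    end
    [status, headers, body]
  end

  private

  # Toy structural check: every documented property must be present with
  # the documented primitive type. Real OAS validation is far richer.
  def schema_valid?(payload, schema)
    schema.fetch("properties", {}).all? do |name, prop|
      value = payload[name]
      case prop["type"]
      when "string"  then value.is_a?(String)
      when "integer" then value.is_a?(Integer)
      else !value.nil?
      end
    end
  end
end
```

Because the app under test is just a callable, the middleware can run entirely in memory inside a feature spec, with no process on a port.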

When a property in the OAS is not required, or the value allows null, or an array can be empty, any test that compares a real payload to the schema may pass simply because the property/item was not present in the real payload.

To mitigate this, we performed some in-memory manipulation of the OAS to make every property required, removed the option for a value to be null, and made every array minItems: 1. This increased our confidence that the OAS was valid, but it meant that we had to go through every feature spec and add test data for every optional property, which was a lot of work.

We also turned off additional properties (in memory) at every level for all payloads. While we might want to publicly state that the client should always expect that additional properties may be present (according to Postel's Law), when we validate the OAS we want to be sure that every field that turns up in a payload is included in the OAS.
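The in-memory tightening described in the last two paragraphs could be sketched as a recursive schema transformer. The `strict` helper below is illustrative (not from any gem); the keys it sets are the standard JSON Schema / OAS keywords.

```ruby
# Illustrative sketch: tighten an OAS schema in memory so that optional
# constructs cannot silently pass validation. Makes every property
# required, forbids nulls, requires non-empty arrays, and disallows
# undocumented fields, recursively.
def strict(schema)
  s = schema.dup
  if s["type"] == "object" && s["properties"]
    s["required"] = s["properties"].keys      # every property required
    s["additionalProperties"] = false         # no undocumented fields
    s["properties"] = s["properties"].transform_values { |p| strict(p) }
  elsif s["type"] == "array" && s["items"]
    s["minItems"] = 1                         # arrays must be non-empty
    s["items"] = strict(s["items"])
  end
  s.delete("nullable")                        # drop the OAS 3.0 null opt-out
  s
end
```

Applying this to the parsed OAS before handing it to the validator means a test cannot pass merely because an optional property happened to be absent.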

Then we found use cases where certain properties HAD to be missing. For this scenario, we added some per-test configuration options to say "expect these properties to be missing from the payload in this test".

Another per-test configuration option we had to add was "do not validate this request". We needed this to test the validation failure responses: if we didn't turn off the request validation, the validator would raise an error, which would then stop us from being able to validate the response.
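The two per-test escape hatches above could be modelled as options on the validation call. The class and option names below are illustrative (a real tool would likely hang them off test metadata), but they show the shape of the behaviour:

```ruby
# Illustrative sketch of the per-test overrides described above.
class PayloadValidator
  def initialize(required:)
    @required = required
  end

  # expect_missing: properties expected to be absent in this test only.
  # skip_request_validation: skip entirely (e.g. when provoking a 400).
  def validate!(payload, expect_missing: [], skip_request_validation: false)
    return :skipped if skip_request_validation

    missing = @required - expect_missing - payload.keys
    raise "missing required properties: #{missing}" unless missing.empty?
    :ok
  end
end
```

Per-test configuration keeps the strict "everything required" default while still letting individual tests document their intentional exceptions.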

Another form of "optional" property in an OAS is the "anyOf" scenario. When there are multiple options for a property, a single request/response cannot validate every scenario. We were unable to come up with an approach for ensuring that all options in an anyOf were covered in the time that we had to spend on this task.

The current test approach is: ensure that every request/response matches the OAS. Allow per-test configuration to disable required properties, allow empty arrays, or disable request validation.

This approach requires a lot of effort setting up test data, and does not ensure complete coverage of anyOfs.

A more helpful approach would be to say: you must provide evidence at some stage in the test suite that every property you have documented is possible to provide.

I think a more helpful approach would be to use the entire test suite to validate the OAS. The tool would keep track of which properties had been matched during the overall test suite, raising an immediate error when a request/response did not match, but then providing a report at the end to identify which properties had or hadn't had evidence provided for them, and, if configured, failing when coverage was not complete. It would behave more like a code coverage tool than a pure validation tool.
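The coverage-tool idea above could be sketched as an accumulator that records which documented properties have had evidence provided across the whole suite and reports the gaps at the end. The class name is illustrative:

```ruby
# Illustrative sketch: suite-wide coverage tracking for documented
# properties, in the spirit of a code coverage tool.
class OasCoverage
  def initialize(documented_properties)
    @documented = documented_properties
    @seen = []
  end

  # Call once per validated request/response payload.
  def record(payload)
    @seen |= payload.keys & @documented
  end

  # Properties with no evidence yet; report these at the end of the suite.
  def uncovered
    @documented - @seen
  end

  def complete?
    uncovered.empty?
  end
end
```

A single shared instance would be fed by the validation middleware, with the final report (and optional failure) emitted in an after-suite hook.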

To summarise, the feature wish list would be: