pact-foundation / pact-specification

Describes the pact format and verification specifications
MIT License

Best approach for supporting non-JSON serialization formats in pact #62

Open mcon-gr opened 5 years ago

mcon-gr commented 5 years ago

The below is a Request For Comment on the idea of adding support for non-JSON serialization in pact. Before I do any implementation work, I'd like to get thoughts and suggestions from the community on whether such a change to pact would be likely to be accepted, and feedback on the best way to go about it.

Motivation

JSON isn't a schema-based serialization format, so having some sort of contract or integration test is essential - without it, there would be no guarantee that producers and consumers were in sync at all. To my mind, pact tests cover the following bases:

All of the above points also apply to protobuf (and other non-JSON encodings): admittedly protobuf does encourage patterns which support versioning (which I won't go into here), and there is a data schema shared between producer and consumer, but nothing forces schemas to stay in sync between services or stops developers from breaking backwards compatibility.

In a world in which services are loosely coupled and producers and consumers live in separate repositories - with the data schema living in a third repository - there's ample potential for services to find themselves out of sync with each other. To ensure services stay in sync, either an integration test or a contract test is required.

Currently no tooling exists for contract testing with HTTP + protobuf, but from what I can work out, adding support to pact is both a desirable and tractable thing to do.

Suggested approach

Given that protobuf isn't the only non-JSON encoding that users might want to contract test using pact, my proposal to implement this functionality is encoding-agnostic (there are lots of other encodings out there: flatbuffers, msgpack, SBE, and thrift, to name a few).

Suggested flow for non-JSON pacts

When generating a consumer contract and validating consumer code, the test code POSTs the details of the expected response for a given request to the /interactions endpoint on the pact mock_service. I suggest that when new interactions are being configured, two fields are added: 1) the encoding type, and 2) an opaque representation containing details of how to encode/decode the given request/response. Request/response matching, and the requests and responses themselves, would still be expressed in JSON.
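To make the proposal concrete, here is a minimal sketch of what the body POSTed to /interactions might look like with the two proposed fields. The field names (`encoding`, `encodingDescriptor`) and values are illustrative assumptions, not an actual pact API:

```ruby
require 'json'
require 'base64'

# Hypothetical interaction registration payload. The "encoding" and
# "encodingDescriptor" field names are assumptions for illustration.
interaction = {
  "description" => "a request for a user",
  "request" => {
    "method" => "GET",
    "path" => "/users/1",
    "headers" => { "Accept" => "application/x-protobuf" }
  },
  "response" => {
    "status" => 200,
    "headers" => { "Content-Type" => "application/x-protobuf" },
    # Matching rules and the expected body remain expressed in JSON.
    "body" => { "id" => 1, "name" => "Mary" }
  },
  # 1) the encoding type
  "encoding" => "protobuf",
  # 2) the opaque representation, base64-encoded so it can travel inside
  #    the JSON payload (for protobuf, serialized descriptor bytes)
  "encodingDescriptor" => Base64.strict_encode64("<serialized descriptor bytes>")
}

puts JSON.pretty_generate(interaction)
```

Everything except the two new fields is shaped like an ordinary JSON interaction, which is what keeps the existing matching logic reusable.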

For every additional encoding (beyond JSON), a new module in the mock_service and provider_verifier would be required, containing encoding-specific logic that uses the opaque representation. This module translates the native format to a JSON representation, which can then be fed through to the standard mock_service matching logic - thereby keeping most of the code paths the same for JSON and non-JSON.
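The per-encoding module described above could be sketched as a small translator interface. All class, module, and method names below are hypothetical; this only shows the shape of the idea, with a JSON passthrough as the trivial case:

```ruby
require 'json'

# Hypothetical per-encoding translator interface: each encoding turns a
# native request/response body into the JSON representation that the
# existing matching logic already understands.
module Pact
  module Encodings
    class JsonPassthrough
      # JSON bodies need no translation.
      def to_json_representation(raw_body, _opaque_descriptor)
        JSON.parse(raw_body)
      end
    end

    class Protobuf
      # A real implementation would use the opaque descriptor to decode
      # the protobuf bytes; this stub only illustrates the interface.
      def to_json_representation(raw_body, opaque_descriptor)
        raise NotImplementedError,
              "decode #{raw_body.bytesize} bytes using the supplied descriptor"
      end
    end
  end
end

# The mock_service would pick a translator by the interaction's encoding type:
TRANSLATORS = { "json" => Pact::Encodings::JsonPassthrough.new }
body = TRANSLATORS["json"].to_json_representation('{"id":1}', nil)
```

Because every translator produces plain JSON, the downstream matching code never needs to know which encoding was on the wire.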

In terms of the consumer contract, in addition to the current JSON which specifies the contract, the following would be required: 1) the expected encoding type, and 2) the opaque representation mentioned at the start of this section. Given the opaque representation would likely obfuscate the rest of the contract, it would probably be stored in a separate file to the JSON contract and referenced with a unique identifier.
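A rough sketch of how the contract might reference an externally stored opaque representation, under the assumptions above (the `encoding` and `encodingDescriptorRef` field names and the file path are purely illustrative):

```ruby
require 'json'

# Hypothetical consumer contract for a non-JSON interaction. The opaque
# representation lives in a separate file, referenced by an identifier,
# so the main contract stays human-readable.
contract = {
  "consumer" => { "name" => "UserWeb" },
  "provider" => { "name" => "UserService" },
  "interactions" => [
    {
      "description" => "a request for a user",
      # 1) the expected encoding type
      "encoding" => "protobuf",
      # 2) a reference to the opaque representation stored alongside
      #    the contract rather than inlined in it
      "encodingDescriptorRef" => "descriptors/user_service.desc"
    }
  ]
}

puts JSON.pretty_generate(contract)
```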

The new consumer contract could then be used by an implementation of provider_verifier that supports the additional encoding in order to validate the producer.
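The provider-side flow could then reduce to the existing JSON comparison once the native response is translated. A minimal sketch, with all names assumed for illustration:

```ruby
require 'json'

# Hypothetical verification step: replay an interaction against the real
# provider, translate the native response body into JSON via the encoding
# module, and compare it with the expected JSON body from the contract.
def verify_interaction(interaction, provider_response_body, translator)
  expected = interaction["response"]["body"]
  actual = translator.call(provider_response_body)
  expected == actual
end

# With a JSON "translator", this degenerates to today's behaviour:
json_translator = ->(raw) { JSON.parse(raw) }
interaction = { "response" => { "body" => { "id" => 1 } } }
ok = verify_interaction(interaction, '{"id":1}', json_translator)
```

For protobuf, `translator` would instead decode the bytes using the descriptor shipped with the contract; the comparison logic stays unchanged.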

Protobuf specifics

In terms of protobuf, the opaque representation of the message encoding is the MessageDescriptor, which is programming-language agnostic and thus straightforward to pass between pact test clients and the Ruby-based mock server.

Proof of concept

Now that I have an idea of how to go about adding protobuf support to pact, it's something I'm going to have a go at over the next few weeks once I have time. The main purpose of the proof of concept is to gain more insight into the nicest way to add protobuf support, ahead of attempting to add it in a way that would be amenable to the community.

mefellows commented 5 years ago

but there's nothing forcing schemas to be in sync between services or to stop developers breaking backwards compatibility.

I've never used protobufs in anger, but I always suspected this was the case. I'm glad to get some form of confirmation that it is.

The question is how matchers would work from consumer client code - could you put together a gist or some pseudocode of how you see it looking in your head?

bethesque commented 5 years ago

Just letting you know I've seen this, I just haven't had time to sit down and devote the attention it deserves. I will as soon as I can.

mcon-gr commented 5 years ago

Thanks folks, @mefellows I'll put something together on how I vaguely expect this to work in the mock_service once I manage to carve out some time.

mcon-gr commented 5 years ago

Hi @mefellows @bethesque, I've just knocked together a quick gist that illustrates a rough outline of how I see this working. Based on the outline in the gist, I've got a very bare-bones implementation working locally - please let me know what you think. I'm happy to add more detail, or code up a firmer implementation, as you prefer.

https://gist.github.com/mcon-gr/348232a3d2e64df544858b5491bb9d30

mefellows commented 5 years ago

Minor update note - progress on this issue has started, thanks Matt!