
Scenarios and testing #21

Open zdne opened 11 years ago

zdne commented 11 years ago

API Blueprint & Testing

Mission: test backend responses against what is defined in a blueprint. This is blueprint validation.

With API Blueprint in CI, the goal is to minimize the number of ambiguous transactions in a blueprint and to support complex tests using scenarios.


Terms

Test Case

An HTTP transaction to be tested, together with all the data needed to complete it; a well-defined transaction.

Scenario

A scenario is a sequence of steps, where a step is either another scenario, a test case, or freeform text.

Background

A setup block referencing a scenario.


Test Case

Implicit Test Case

An implicit test case is a test case created as a by-product of an API Blueprint reference-style description of an API.

Question

Can an API blueprint be, by default, composed of implicit test cases or do they need to be explicitly stated?

Collision points

Proposed steps to resolve collision points:

- An inside-out approach, where somebody writes tests for an API and the reference-style documentation is the by-product.

Scenario

Ad-hoc Scenario

A scenario written after your API is described in an API blueprint. It references test cases and/or resources and actions.

Scenario-driven Blueprint

A freetext scenario: a scenario discussing an API and thus implicitly describing it.

Background

A technicality, a reference to another scenario.

```
# Scenario A
...

# Scenario B
...

# My Resource [/resource]

## Retrieve [GET]

### Transaction Example 1
+ Background [Scenario A][]
+ Request A
+ Response 200
+ Possible Response 200
+ Possible Response 200

### Transaction Example 2
+ Background [Scenario B][]
+ Request B
+ Response 200
```

Notes

Transaction Example

As of Format 1A, API Blueprint supports transaction examples "under the hood", with implicit request/response "pairing" planned in its parser.

The upcoming API Blueprint revision should consider introducing explicit support for defining transactions.

ecordell commented 10 years ago

**Transaction Examples** These are definitely needed, and this seems like a good approach. Can't wait to see the pairing!

**Scenario** I'm not entirely sure I understand what a Scenario can contain. What exactly can the "series of steps" be? Arbitrary code? Other transactions?

**General Thoughts** It seems like this might be mixing the concerns of testing and documenting to a degree that could be problematic down the road. If you build testing support into the format, then you're committing to understanding and supporting an undefined set of testing requirements for all, or at least the majority of, blueprint users.

And while reading tests can be a useful form of documentation, in the case of REST APIs I would sort of expect those details to clutter things. That leaves more work for blueprint -> documentation parsers (to strip the excess testing data).

Some cases that don't seem to be addressed by this proposal:

A solution I've been pondering

This might not be the best solution, and I'm not sure if you're just looking for feedback for something you've already decided on or if this is more of a brainstorming session. Assuming the latter, I've been thinking about how to do this in a way that addresses everyone's testing concerns.

The basic idea is to expose the transactions and examples as nodes with before/after hooks. I'm a fan of the way mocha does things, so I was thinking in terms of their nomenclature. Here's some notes I typed up a few days ago about how it would work:

Methods

Dredd CLI

**Node names** Node names would remove keywords:

It would actually be ideal if there were blueprint support for naming nodes so that one could do: `accounts.create`

Desired API

Sync:

```coffee
before 'Accounts > Create Account', (transaction) ->
  # create test user
  transaction.headers['Authorization'] = base64 'testuser:testpass'

after 'Accounts > Create Account', () ->
  # delete test user
```

Async:

```coffee
before 'Accounts > Create Account', (transaction, done) ->
  # create test user
  transaction.headers['Authorization'] = base64 'testuser:testpass'
  done()

after 'Accounts > Create Account', (done) ->
  # delete test user
  done()
```
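If the dotted node names suggested above existed, the same setup could be keyed on them instead of the longer "Group > Action" string. Purely illustrative, since `accounts.create` is not real blueprint syntax:

```coffee
# Illustrative only: assumes hypothetical `accounts.create` node naming
before 'accounts.create', (transaction) ->
  # create test user
  transaction.headers['Authorization'] = base64 'testuser:testpass'

after 'accounts.create', () ->
  # delete test user
```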

This way, it's really not up to the blueprint to define how tests get set up/torn down and how transactions are manipulated. To me, this seems more flexible, easier to maintain, and better separates the concerns of documentation and testing. It also allows for more languages/libraries, so that for example someone could come along and write a better testing API, or a ruby library, etc.

I'm not sure this is the best approach. At the very least, I think it could be made better if you could also add your own assertions to the tests, rather than relying solely on Gavel. Not quite sure what an API for that would look like.
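One purely speculative shape it could take, assuming the hook runner exposed the actual response on a hypothetical `transaction.real` and let a thrown assertion fail the test:

```coffee
assert = require 'assert'

after 'Accounts > Create Account', (transaction) ->
  # `transaction.real` is assumed here; an AssertionError would fail the test
  body = JSON.parse transaction.real.body
  # a domain-specific check that Gavel's generic validation would not cover
  assert.equal body.role, 'user'
```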

christhekeele commented 10 years ago

@ecordell I think Scenarios and Backgrounds allow you to create multi-request API workflows, both as potentially useful documentation and as a way of manually ordering tests with dependencies and limited setup/teardown, akin to rake tasks.

I have similar reservations about baking test-framework features into the blueprint format, but I can see a use-case for them outside of that context.

Almad commented 10 years ago

In my mind, test support in API Blueprint / Dredd is not a replacement for your integration tests.

Rather, it is a way to verify two things:

For those, I think it's perfectly OK to have them baked into the blueprint itself, because they are documentation for users as well.

Once you are going for edge cases, implementation bugs, etc., your favourite framework and/or Cucumber is the way to go.

ecordell commented 10 years ago

@christhekeele If that's the case, then this sounds very useful, but I guess I see it more as "workflow documentation" than "testing support".

@Almad It sounds like you've envisioned the Blueprint as a description of an API, rather than a prescription of the API (i.e. the API comes first, not the docs)? So when you talk of "regressions" you mean that you catch regressions in the documentation, rather than regressions in the functionality of the API?

The format seems to support both approaches rather well. We're using it basically as a TDD-style approach to API development - we define the API, then implement it. I don't see it as a replacement for integration tests, but I do see it as an integral part of our testing (it probably falls under "systems" testing).

The Blueprint contains a list of all of the endpoints, a list of all possible responses and how to get them, and schemas defining the format and valid values of all requests and responses. That's really the only information necessary to determine if an API is performing to spec, so it seems natural to use this existing, structured information to generate tests.
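As a minimal sketch of that idea, assuming the Protagonist parser and an AST roughly in the Format 1A shape; `testTransaction` is a hypothetical stub that would fire the request and validate the response (with Gavel or similar):

```coffee
protagonist = require 'protagonist'
fs = require 'fs'

blueprint = fs.readFileSync 'api.md', 'utf8'

protagonist.parse blueprint, (error, result) ->
  throw error if error
  # walk the parsed blueprint and emit one test per expected response
  for group in result.ast.resourceGroups
    for resource in group.resources
      for action in resource.actions
        for example in action.examples
          for response in example.responses
            testTransaction resource.uriTemplate, action.method, response
```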

That all said, I think I see the use for defining some of the test requirements in the Blueprint now, from your and Chris's comments. But I did want to explain how we're using the format and why I think it's valid/useful (and why I think a system similar to what I described would be a useful addition regardless of changes to the format itself).

stekycz commented 10 years ago

I have prepared a proposal for an implementation of Cucumber step definitions for testing APIs using API Blueprint. I am going to implement it as part of my in-progress master's thesis. Feel free to discuss my ideas here or in an issue in the proposal repository.

zdne commented 10 years ago

Thanks for sharing, @stekycz! I will check it during the weekend.

Note that while it is great to have a full-featured API testing tool built on API Blueprint, the main focus of this issue is to support API Blueprint testability, as discussed here: https://github.com/apiaryio/dredd/pull/44#issuecomment-36506052

praky commented 10 years ago

@zdne Thanks for the API Blueprint toolset. I have been exploring Dredd recently and have a couple of questions:

zdne commented 10 years ago

@praky thanks for your interest!

> I wanted to check the latest status of this issue/proposal. Has it been implemented / any progress updates, please? :-)

There is no direct progress on this at the moment. Indirectly, the introduction of implicit request/response pairing will affect testing with Dredd, and it is currently being implemented in Dredd itself.

> We may need to invoke one API endpoint for authentication that may return some kind of access token in the response. To access other authorized resources/APIs, we need to pass around the access token.

As Dredd is primarily designed to run in a testing sandbox (not in a production environment), I would see a (theoretical) way around it by using a set of backend fixtures with a specific token that is also used as the example value of the token parameter in the API Blueprint. This way Dredd will use the example value from the blueprint and, since it will be set in the backend fixtures as well, it should work™.
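If those fixtures are not already in place, a before hook in the style discussed earlier in this thread could seed them. A rough sketch, where `seedFixtures` and the transaction name are hypothetical:

```coffee
before 'Notes > Retrieve Note', (transaction, done) ->
  # seed the sandbox backend with the same token that appears as the
  # example value of the token parameter in the blueprint
  fixtures = accessToken: 'example-token-from-blueprint'
  seedFixtures fixtures, done
```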

Note that we are looking at ways to supply additional values for parameters in the blueprint (specific to requests or, more precisely, transaction examples), but there has been no development on this yet.

If you would be interested in contributing to the API Blueprint specification or the parser itself in this area, I will be more than happy!

praky commented 10 years ago

Thanks @zdne for your detailed response. We were evaluating multiple API documentation options and finally narrowed down to API Blueprint; we are at the very beginning stages of implementation and getting feedback from the people using it. I am sure that we will have opportunities to contribute back.

Once again, appreciate your effort on the API blueprint toolset.

zdne commented 10 years ago

Thanks @praky! Let me know should you have any further questions.