Closed - jayvdb closed this issue 4 years ago
Very interesting suggestion.
Fundamentally, this package works in a static manner, requiring you to manually write tests for each view; as a consequence it has no performance impact on the running application.
I take it you're suggesting that the testing is moved from being a part of the test suite, to being performed live on every request? That doesn't sound very difficult, except you would need some mapping to connect incoming requests to the relevant section of the OpenAPI schema.
The way the package currently works, we require the response object, the HTTP method, and the route of the endpoint.
The first would be easy to get in a Middleware context, but how would you go about the other two? Would it be possible to handle an outgoing request in a context where we could infer the last two?
You get the request and response objects passed to the middleware. They include path, method, status code, body and so on.
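For example, in a new-style middleware everything needed is already at hand. A rough sketch, not tied to this package; the lookup/validation calls in the comments are made-up helpers:

# Rough sketch of a new-style Django middleware; the helper names are hypothetical.
class OpenAPIValidationMiddleware:
    def __init__(self, get_response):
        self.get_response = get_response

    def __call__(self, request):
        # request.path, request.method, request.body and request.GET are all available here
        response = self.get_response(request)
        # ... as are response.status_code, response['Content-Type'] and response.content
        # path_item = path_item_for(schema, request.path)          # hypothetical: map path -> spec section
        # validate_request(request, path_item)                     # hypothetical
        # validate_response(response, path_item)                   # hypothetical
        return response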
This seems like it would be pretty easy to implement then. The only thing I'm unsure of is whether adding a middleware layer is an efficient way of validating a static schema. Wouldn't you end up validating the same thing over and over?
That being said, I can see the benefit of having something like this running in the background during development.
@jayvdb, is this something you would be willing to create a pull request for?
Wouldn't you end up validating the same thing over and over?
In a production system especially, validating the requests is a way of identifying and rejecting malformed requests before they hit the views.
And validating the responses is a bit repetitive, but it offers a way of ensuring that a valid response is emitted irrespective of the request and state of the system. And usually this highlights all sorts of DRF responses which were not described in the spec. e.g. does the spec accurately describe what the response will be when the database is temporarily locked? Most people don't include all of those scenarios. drf-yasg and friends certainly do not do that automatically, and hand-crafting those is not simple, so people usually don't do it. https://github.com/tfranzel/drf-spectacular/issues/101 will hopefully automate that soon.
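Even a very cheap check - is the status code that actually went out documented at all for this operation? - surfaces those gaps. A sketch:

# Sketch: OpenAPI keys an operation's responses by status code string, or 'default'.
def status_code_documented(operation: dict, status_code: int) -> bool:
    responses = operation.get('responses', {})
    return str(status_code) in responses or 'default' in responses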
is this something you would be willing to create a pull request for?
I've already done quite a bit of the polishing to get https://github.com/zlqm/openapi-toolset/blob/master/openapi_toolset/django_plugin/middlewares.py working, and atm I am focused on getting the value out of that time investment.
But before getting into who codes it, there is much to think about for how the code should work. I've given links to two existing implementations, and neither are very mature, but they at least allow identification of problems with Django integration which havent really been tackled yet, and this project might build a better mouse trap if it tries to get the design right from the beginning. This is made more complicated because the focus of this project is testing, and this middleware would mean it needs to also support being run in more diverse environments, possibly even production environments.
If the focus is on testing, errors can simply be exceptions that ripple out to the test runner. This also works reasonable well in development, but Django will convert those exceptions to 500 HTML responses, which isnt very helpful for a DRF client, probably even violates the spec it is enforcing. https://github.com/zlqm/openapi-toolset/issues/2 has some thoughts about this.
And in production, Django middleware should never raise exceptions that were not explicitly requested by the implementor, as each piece of middleware needs to interoperate with other middleware. If a middleware can not safely handle a request or response, it should defer to the next middleware. In the case of OpenAPI spec checking, obviously the middleware should be active on routes it sees in the specs.
What should the middleware do when a route in the spec doesn't have a delete, but a delete request comes in? Should it reject the unknown operation, or defer since that operation isn't covered by the spec? Most people would think that should be a clear reject! OK, but if we are rejecting unknown delete, what about unknown options requests? Does your spec explicitly allow options requests, and accurately describe their responses? What about head? ;-) Thankfully the list of HTTP methods is quite small, so there can be a setting to whitelist/ignore options and head, so that delete can be blocked by default, and implementers can add delete to that whitelist if they want to allow delete through even if it isn't advertised in the spec.
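In code, the policy I have in mind is roughly this; just a sketch, and the setting name is made up:

# Sketch of a per-method policy; the setting name is hypothetical.
from django.conf import settings
from django.http import JsonResponse

IGNORED_METHODS = getattr(settings, 'OPENAPI_MIDDLEWARE_IGNORED_METHODS', {'OPTIONS', 'HEAD'})

def handle_undocumented_method(request, path_item, get_response):
    if request.method.lower() in path_item:
        return None                    # documented: carry on and validate as normal
    if request.method in IGNORED_METHODS:
        return get_response(request)   # whitelisted: defer to the view
    return JsonResponse({'detail': 'Method not described in the API schema'}, status=405)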
... but it offers a way of ensuring that a valid response is emitted irrespective of the request and state of the system. And usually this highlights all sorts of DRF responses which were not described in the spec. e.g. does the spec accurately describe what the response will be when the database is temporarily locked? Most people don't include all of those scenarios. drf-yasg and friends certainly do not do that automatically, and hand-crafting those is not simple, so people usually don't do it. tfranzel/drf-spectacular#101 will hopefully automate that soon.
I'm a little unclear about what you mean here - are you saying that OpenAPI schemas do not always document all response codes? To me it seems like validation of response codes or request types missing from an OpenAPI schema should be skipped altogether.
To avoid potential misunderstandings I think it might be good to explicitly state what the full scope/purpose of the envisioned middleware should be. The current scope of the package is response validation in the sense that we make sure outgoing responses adhere to the application's OpenAPI specification - I'm assuming this would be the scope of the middleware as well - please let me know if that falls short of what you're proposing.
If the scope of the middleware is limited to only verifying that your APIs are adherent to an OpenAPI specification, I don't think it would ever be appropriate to reject a request. Instead I see this as an aid to developers, and would think that a mechanism for alerting developers about non-adherent responses would be sufficient, e.g., by using logging with a settings-specified log level. For this type of solution, you would probably also need a whitelist mechanism, where you could whitelist traffic to certain routes, to avoid problems when serving non-documented endpoints; I have certainly created applications in the past with a /health health check endpoint that was purposely excluded from the Swagger documentation - this seems like a use pattern that needs to be accommodated.
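Something along these lines is what I have in mind - just a sketch, and the setting names are made up:

# Sketch of log-only reporting with a route whitelist; the setting names are hypothetical.
import logging
import re

from django.conf import settings

logger = logging.getLogger('openapi_validation')

EXEMPT_URLS = [re.compile(p) for p in getattr(settings, 'OPENAPI_VALIDATION_EXEMPT_URLS', [r'^/health/?$'])]
LOG_LEVEL = getattr(settings, 'OPENAPI_VALIDATION_LOG_LEVEL', logging.WARNING)

def report_mismatch(request, error):
    if any(pattern.match(request.path) for pattern in EXEMPT_URLS):
        return  # e.g. an undocumented /health endpoint
    logger.log(LOG_LEVEL, 'Response for %s %s does not match the OpenAPI schema: %s',
               request.method, request.path, error)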
I'm currently on vacation, so have not had the chance to browse through the linked implementations in detail, but unless I'm missing some key functionality here, this seems very easy to implement.
I'm also a bit confused here.
What should the middleware do when a route in the spec doesn't have a delete, but a delete request comes in? Should it reject the unknown operation, or defer since that operation isn't covered by the spec?
To me this is an obvious no. This package is created to validate responses/requests against its documentation. Wrong or lacking documentation shouldn't interfere with an actual request/response. It's not a serializer, nor should it be.
I agree with @sondrelg here, simply logging it would be a nice feature. However, since drf-yasg documentation isn't compiled, it could cause some serious overhead.
The current scope of the package is response validation ..
The README also talks about Input Validation :P
Even if the intended scope is response validation, choosing the correct response spec depends on the incoming request being validated, otherwise you can't ignore responses to requests which are outside of the spec. I mentioned undocumented HTTP methods above, which are very high level, and you mentioned undocumented paths like "health checks". But there are many more types of requests which deviate from the spec in more subtle ways, such as a request including a query param that is not in the spec's endpoint definition. In the case of drf-yasg, that probably means the spec creator/generator wasn't able to inspect the view mechanisms correctly, and the spec for that request is probably only correct for a basic subset of requests, and the response might be quite different from the spec. Where I see this happen frequently is dynamically expanding sub-queries (e.g. djangorestframework-expandable, but I forget exactly which one trips up drf-yasg).
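The check itself is cheap once the operation has been located. A sketch:

# Sketch: query parameters present on the request but not documented for the operation.
# (Path-item level parameters are ignored here for brevity.)
def undocumented_query_params(request, operation: dict) -> set:
    documented = {p['name'] for p in operation.get('parameters', []) if p.get('in') == 'query'}
    return set(request.GET) - documented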
fwiw, drf-yasg is dead, long live drf-spectacular, which is much more likely to get OpenAPI 3 right. :P And I would need https://github.com/snok/django-swagger-tester/issues/61 done first, before me building middleware here would be a worthwhile activity. But I could conceivably do that.
Wrong or lacking documentation shouldn't interfere with an actual request/response.
OK. Reasonable approach. Even in tests, one often wants an invalid request to go through to the view.
But then some other mechanism would be desirable to allow test suites to access the results of this spec validation middleware. IMO logging alone isn't a useful/desirable way for tests to detect what occurred during the processing of a request.
But I guarantee that as soon as this feature is built without invalid request/response rejection, someone will request it. Especially validation of incoming requests. Lots of helpers exist to make DRF easier by automatically doing validation of incoming requests based on dataclasses or similar meta information. OpenAPI provides the necessary meta information to do this, and so it is natural that people will want to re-use that.
That said, if we build it so that the spec validation information is not lost, people can build separate components that repurpose this middleware to do rejection according to their own needs.
Some more oddballs that I have encountered, which could affect the scope/implementation of this middleware, and are worth discussing early here to get a common understanding; they possibly require other enhancements before an effective middleware is possible:
application/problem+json (RFC 7807) responses to Accept: application/json requests, which the client will recognise without worries, but the spec probably isn't correct.

If the general approach is going to be to not throw exceptions / reject any request or response, these are the types of oddballs that some outer layer (test runner, or other middleware) may want to know about, and may need to be exposed in a structured way.
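One way to expose them (just a sketch; the signal and attribute names are invented) is to attach the findings to the response and broadcast a signal, so test suites and other middleware can consume them without scraping logs:

# Sketch: structured exposure of validation findings; the names here are hypothetical.
import django.dispatch

schema_violation = django.dispatch.Signal()

def report(request, response, errors):
    response.openapi_errors = errors   # test clients can assert on this attribute
    schema_violation.send(sender=None, request=request, response=response, errors=errors)

A test could then simply assert that response.openapi_errors is empty after calling the endpoint.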
I haven't studied the code here in detail yet, but it seems that content-type isn't a fundamental part of this library yet - the schema loading mechanisms don't seem to handle varying the schema based on content-type. That seems like it will be a major roadblock for drf-spectacular and this middleware.
Ok, this is good, I think I'm starting to understand.
The way I'm reading you based on the last post is that you want this middleware to be able to validate incoming request data based on the schema. In other words, for a post request to a given endpoint, the middleware should be able to return, e.g., a 400 if the request body is a string where it should be an int. It would work similarly to DRF's ModelSerializer, just based on the OpenAPI schema rather than a database model - and it would be more comprehensive, since it could do more than just validate a request body (ref the oddballs section). Am I understanding you correctly?
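If that is the right reading, the body check could look roughly like this - a sketch using the jsonschema package directly, ignoring $ref resolution and non-JSON content types:

# Sketch: validate a JSON request body against the operation's requestBody schema.
import json

from django.http import JsonResponse
from jsonschema import ValidationError, validate

def check_request_body(request, operation: dict):
    schema = (operation.get('requestBody', {})
              .get('content', {})
              .get('application/json', {})
              .get('schema'))
    if not schema:
        return None  # nothing documented, defer to the view
    try:
        validate(instance=json.loads(request.body or b'{}'), schema=schema)
    except (ValidationError, json.JSONDecodeError) as error:
        return JsonResponse({'detail': str(error)}, status=400)
    return None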
For that purpose I can see how you might want to reject a request in the middleware layer - this is what both @JonasKs and I (probably) were missing before.
With drf-spectacular, I take it your vision would be that Django REST APIs could be built with DRF as the base layer, drf-spectacular as the documentation layer (which infers most of its information from DRF), and you could add request and response validation to various degrees using the schema definitions in a middleware layer? This way you could forego defining a DRF ModelSerializer or Serializer for your APIView and instead automatically have a base level of validation for all your views. Is that right?
That is roughly correct. However my own goal isn't to remove any validation in views/serializers/etc, but to add an extra layer on top.
I am only recognising that many Django devs are keen to generically handle validation, derived almost entirely from the model, as they view the extra layers as unnecessary. (And that is sort-of true for very simple models, which is all many people try to build at the beginning, but they tend to find their models get very complicated over time as the business logic gets more complicated, because they are trying to stuff all the extra logic into their model as semantic meta clues used by various other layers. But I digress..) For that style of project, it is better if they are enhancing their OpenAPI rather than trying to get their models to define strict validation rules.
To understand my scenario a bit better...
The Django project may be using third-party apps, and those may not have ideal validations, either because they are very flexible, or because of bad coding. e.g. an address app might allow any address in any country, and may have two sets of problems: the project may not want to support every country, and there may be insufficient constraints in place because of its flexibility. Or I have two apps which have an address (customer and shipping), and they have slightly different validations.
Typically this is resolved by subclassing the third-party app views, serializers, etc, to refine the validation and the exposed API.
However with drf-spectacular especially, but also drf-yasg, and even drf-core with some extra work, it is possible to override the schema definition, adding more constraints as metadata at the outermost level. Backend devs rarely do this, because it is metadata which is typically not used anywhere, and seems to be duplicating the other layers.
But if that metadata is added to the OpenAPI, third party apps can be adapted to the project's needs without adding technical debt, and this can be enforced in middleware.
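As a sketch of what that looks like with drf-spectacular (the third-party app and field names here are invented), the constraints live purely in the schema metadata, which the middleware can then enforce:

# Sketch: tighten a third-party view's documented contract; the imported app is hypothetical.
from drf_spectacular.utils import extend_schema, inline_serializer
from rest_framework import serializers

from thirdparty_address.views import AddressViewSet  # hypothetical third-party app


class ProjectAddressViewSet(AddressViewSet):
    @extend_schema(
        request=inline_serializer(
            name='ProjectAddressRequest',
            fields={
                'country': serializers.ChoiceField(choices=['NO', 'SE', 'DK']),  # only countries we support
                'postal_code': serializers.RegexField(regex=r'^\d{4}$'),
            },
        ),
    )
    def create(self, request, *args, **kwargs):
        return super().create(request, *args, **kwargs)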
Likewise on the response side, extracting the schema from third-party apps captures the expected output of the third-party app, and it can be desk-checked and a client built around it. But then the app emits a strange response under unusual circumstances, or after a "pip" update of some kind, and the client starts behaving strangely. The client should have logging to Sentry etc., but debugging that can be difficult, as the response might be quite close to the expected schema, causing subtle errors nowhere near the HTTP layer. (A TypeScript OpenAPI client validator is also on my shopping list.)

If the middleware in Django starts rejecting responses from the third-party app, the client gets a very simple error to handle in a uniform way, and the backend is immediately seen as the problem. No frontend devs are engaged to investigate the problem - managers send it straight to the backend devs, or possibly even ops, who just roll back immediately. Logging only in the backend will also work if ops / managers / etc. are directly watching Sentry and understand the gravity of the incorrect response schema, but IMO they tend not to, and a policy of rejecting responses empowers ops to roll back the last deploy quickly, or triggers managers if frontend errors suddenly start occurring and the cause is unrelated to a recent deploy.
I do like the idea of making API development more schema centric, or at least deriving more of a benefit from the effort put into proper API documentation. A schema generator library that infers as much information as possible from DRF coupled with a middleware for providing a base-level input validation seems like a good idea - even better if it can be extended with or from DRF serializers.
I really hope this becomes a reality, but it is not clear to me that the middleware should live in this repository, as it seems like a much bigger undertaking than the whole repo currently is. I'm curious as to why you think the middleware should be separate from the schema generator library in the first place - would it not make more sense to build this as a part of drf-spectacular or as a downstream library from it? One thing I found writing this package is that catering to both static OpenAPI schemas and generated schemas creates a lot of overhead.
IMO the middleware shouldn't be tied to the generator - OpenAPI is the interface that separates them. fwiw, drf-spectacular is primarily static.
The reason to build it in a testing tool is that live request/response checking assists testing, and ideally any invalid requests/responses are caught in tests before deploys.
And why this project? Even before I knew about the issue here for supporting drf-spectacular, the design here smacks of a good mix of 'tight' coupling with schema generators to extract as much info as possible, with generic use of that info for validating the implementation. There are lots of DRF killers being built, and gaining traction, so I don't want to be building/using testing tools that only work with DRF.
Good points. I will have a look at this once I'm back from vacation in about 10 days.
I'm keen to at least build a simple version of what's been suggested.
@jayvdb, how would you propose the middleware should work? Opt-in or opt-out? It seems that both would require some mapping of views, similar to the way routes are structured in a urls.py. After thinking about it, it seems a little backwards to do it this way.
An alternative, perhaps simpler solution seems to be to offer decorator functions you can use for each view. I've created an MVP working example below:
import logging
from functools import wraps
from typing import Callable

from django.core.exceptions import ImproperlyConfigured
from rest_framework.request import Request

# Import paths may vary between versions of django-swagger-tester
from django_swagger_tester.exceptions import SwaggerDocumentationError
from django_swagger_tester.testing import validate_response

logger = logging.getLogger(__name__)


def validate(request_body=False, uri_parameter=False, response=False):
    def outer(fn: Callable):
        @wraps(fn)
        def inner(*args, **kwargs):
            if not isinstance(args[0], Request):
                raise ImproperlyConfigured('The first argument to a view needs to be a Request')
            if request_body:
                # input validation function here
                print('Request body', args[0].data)
            if uri_parameter:
                # path param validation function here
                print('URI parameters', kwargs)
            # ^ Code above this line happens before the view is run
            output = fn(*args, **kwargs)
            if response:
                # response validation function here
                try:
                    validate_response(response=output, method=args[0].method.upper(), route=args[0].path)
                except SwaggerDocumentationError:
                    logger.error('Response from <class name> (<route>) returned an incorrect response, '
                                 'with respect to the OpenAPI schema')
            return output
        return inner
    return outer
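Applied to a function-based view, it would be used roughly like this:

from rest_framework.decorators import api_view
from rest_framework.response import Response

@api_view(['POST'])
@validate(request_body=True, response=True)
def create_item(request):
    # ... normal view logic ...
    return Response({'id': 1}, status=201)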
@jayvdb,
I've been looking at this for a while now and I'm getting to the point where I'll start testing the implementation.
The first draft is here. Feel free to have a look and flag anything you think might be missing or confusing.
I'll check it out later today. We have started using https://github.com/kiwicom/schemathesis , so I can push thousands of test cases through it ;-)
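For context, the schemathesis side is roughly this (the schema URL is made up):

# Sketch: property-based API tests with schemathesis; the schema URL is hypothetical.
import schemathesis

schema = schemathesis.from_uri('http://localhost:8000/api/schema/')

@schema.parametrize()
def test_api(case):
    response = case.call()
    case.validate_response(response)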
After fiddling with this for a while, I've decided to implement a response validation middleware first, before venturing onto a request body validation type middleware. If you're interested, please have a look at #74
Another approach to testing is having Django middleware check the requests and responses across the wire.
Two projects do that: https://github.com/zlqm/openapi-toolset and https://github.com/Cohey0727/drf_open_api_validator . I am using the former, with good results (and a few more patches pending), as the latter seems stagnant at https://github.com/Cohey0727/drf_open_api_validator/issues/3 .
While I am moderately happy with my current solution, django-swagger-tester looks like a more comprehensive framework and it could benefit from such a 'live' testing mode, which could be engaged during unit tests, or on real sites.