Closed: bergmark closed this issue 9 years ago
The JS client didn't have this problem since it looks at the accept header.
Shouldn't we do the same thing here as in the JS client then, instead of hardcoding a different default?
I'd prefer not to. At the moment we know what the server will return and can just act accordingly; if we only look at the accept header and it includes multiple formats, we'd have to trial-and-error until a parse succeeds.
I'd also rather fix the JS client to do the same thing. At the moment it only handles JSON: it tries to parse the body if the accept header contains json, and returns a string if the parse fails or if the header doesn't include json.
I agree it's weird to look at the accept header. But it's also bad to hardcode something based on the current server code. I think the right thing to do is look at the content type of the response, and parse based on that. This is how it should work, right? We send an accept header saying what we can accept (possibly only one thing). Then we either get an error (unacceptable) or we get a response in one of those formats. We check the content type of the response, parse according to it, and everything works. What do you think?
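To make the proposal concrete, here's a minimal sketch of dispatching on the response's `Content-Type` header rather than a hardcoded default. All names here (`Body`, `parseResponse`) are invented for illustration; this is not the actual rest-core API.

```haskell
-- Hypothetical sketch: choose a parser from the Content-Type of the
-- response instead of assuming a fixed format.
module Main where

import Data.List (isPrefixOf)

-- Illustrative stand-in for "parsed response" values.
data Body = JsonBody String | XmlBody String | RawBody String
  deriving (Eq, Show)

-- Dispatch on the media type, ignoring any parameters like charset.
parseResponse :: String -> String -> Body
parseResponse contentType body
  | "application/json" `isPrefixOf` contentType = JsonBody body
  | "application/xml"  `isPrefixOf` contentType = XmlBody body
  | "text/xml"         `isPrefixOf` contentType = XmlBody body
  | otherwise                                   = RawBody body

main :: IO ()
main = do
  print (parseResponse "application/json; charset=utf-8" "{}")
  print (parseResponse "text/plain" "hello")
```

With this shape, a server adding a new response type degrades gracefully to `RawBody` instead of a failed parse.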
It depends on what you mean by server code. Changing the default response type in rest-core won't affect anything, since the generated clients always send explicit accept headers as of 0.16. But it can be broken from the API itself if an existing endpoint gets a new response type, which should be a non-breaking addition.
It does sound more robust to parse the response type like you say.
It does assume that there will only be one parser for a given response type though, right? Do we want that? As an example, suppose we make the dictionaries extensible and add Fay serialization. If an endpoint then has `fayO`, we might want to use `application/fay` as the header, but the payload is also valid JSON. So should `application/json` produce an error, or should it give the Fay response since it is also valid JSON? There might also be formats that don't have a custom type and just use `application/json` naively (which would make `customO . jsonO` inherently ambiguous...). And if you only use `fayO` in your API and not regular `jsonO`, you may want to allow `application/json` to be treated the same as `application/fay`, since there would be no ambiguity.
Yes, I think if we do Fay we should use a custom mime type for it, something like `application/vnd.fay`? The Fay serializer should probably only be chosen for that output type, not for `application/json`.
Fay is perhaps a poor example since I am the maintainer and can decide what the content type should be :-) Do other JSON-based formats typically use their own content type?
I also just remembered my comment here. In this case we should definitely allow `application/json` as the content type for another dictionary (is that the right word?). There you'd want mutually exclusive dictionaries though, so it won't cause ambiguity. Perhaps it wouldn't really affect what we are discussing here.
I don't think it would. I think you'd want a max of one formatter (that's what I call it on my branch where I'm building the extensible stuff) per content type, and use custom mime types for custom stuff. There's not even a need for the maintainer of a package like Fay to know about this mime type; the only person who has to know is the maintainer of the formatter.
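The "at most one formatter per content type" rule could be enforced at registration time. This is a hedged sketch, not the branch's actual code; `Registry`, `Formatter`, and `register` are hypothetical names.

```haskell
-- Hypothetical sketch: a formatter registry that rejects a second
-- formatter for an already-claimed mime type.
module Main where

import qualified Data.Map.Strict as Map

type MimeType = String

-- Illustrative placeholder for a real formatter.
newtype Formatter = Formatter { fmtName :: String }
  deriving (Eq, Show)

type Registry = Map.Map MimeType Formatter

-- Insert a formatter, failing if the mime type is already taken.
register :: MimeType -> Formatter -> Registry -> Either String Registry
register mime fmt reg
  | Map.member mime reg = Left ("duplicate formatter for " ++ mime)
  | otherwise           = Right (Map.insert mime fmt reg)

main :: IO ()
main = do
  let Right r1 = register "application/json" (Formatter "json") Map.empty
  -- A custom mime type for Fay coexists with plain JSON...
  print (register "application/vnd.fay" (Formatter "fay") r1)
  -- ...but a second claim on application/json is rejected.
  print (register "application/json" (Formatter "fay") r1)
```

Under this rule the Fay formatter only ever answers for its own vendor type, which sidesteps the `application/json` ambiguity discussed above.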
I'm merging this since it is a simple bug fix. I'll open a new issue for changing the behavior.
The default should always be JSON when possible. In this case the accept header was `text/json` but the conversion was `fromXML`. I'm going to check the JS generation as well.