Closed: nwinch closed this issue 8 years ago
i've been working on this right now. i think we should ditch the `result` part. responses should just return data as it is. otherwise it's a pain in the neck to work with the APIs, as you will have to unwrap that result every time instead of just `.get`, `.post` data.
there is already an HTTP response status, so you don't need the `found` property. also, `old_val` is pointless as you have it on the front-end anyway, and it's not the API's job to care about states
@MadRabbit To clarify, the above summary code is what currently exists in redbeard. That's not me proposing what it should be :)
Just wanted to summarize what we currently have so we can all use it as a talking point about what's good and bad, and how we should change it.
I agree with your points about `found`, and also about not needing `result`.
@nwinch yup, i understand that. and i disagree with the structure, for pretty much the same reasons you've described in the "suggestions" section. I think it needs to be changed. i'm doing an ES6 generators/models spike at the moment, so i'm going to touch this and i think it should be like so:
```
// GET        -> [data {}]
// GET /:id   -> data {}
// POST       -> data {}
// PUT        -> data {} (updated record)
// DELETE     -> {ok: true} (or an empty object)

// Socket
{
  new_entry: data {}
}
```
Although I'm not sure what the deal is with the sockets; i wasn't across that feature yet
`found` is actually an internal marker for the error handler. It's accidental that it's being sent over the wire as part of the payload, and it should be chomped off.
I think `old_val` should stay in some form. You might not need it in response to a PUT if you're caching that state in redux, but you're free to just ignore it. I doubt the bandwidth impact is significant. Maybe make it optional?
It's definitely needed in the websocket channel, though. Updates there don't necessarily come from you. It could be some other user of the system making the change and, depending on what exactly changed, you may want to notify the user differently about that.
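To make that concrete, here is a minimal sketch (a hypothetical helper in plain JavaScript, not anything in redbeard) of client-side code that uses `old_val` alongside the new value to decide how to describe a change that arrived over the socket:

```javascript
// Hypothetical client-side handler for a socket update. The change may have
// been made by another user, so we compare the old and new records to
// describe what actually changed instead of just saying "something updated".
function describeChange(old_val, new_val) {
  const changed = Object.keys(new_val).filter(
    key => old_val[key] !== new_val[key]
  );
  if (changed.length === 0) return 'no visible change';
  return `updated: ${changed.join(', ')}`;
}

// e.g. describeChange({status: 'open'}, {status: 'closed'}) -> "updated: status"
```

Without `old_val` on the payload, the client would have to keep its own copy of every previous record to do this.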
There's potentially value in changing the signature of the websocket payload to something like:

```
{
  old_val: ...,
  result: ...
}
```

in order to make it consistent. Or maybe just:

```
{
  old: ...,
  new: ...
}
```

for brevity.
The only thing you've missed is that the GET and websocket payloads can also return a `count` property, which is necessary for paginated queries.
i'm happy to leave the socket API alone for now as it is a special case, but as for `PUT`/`PATCH`, i think the wrapper should be a goner for the sake of consistency with `GET`/`POST`
Oh! I'm late to the party.
I think the responses must stay wrapped in order to handle that pagination case, Nikolay. Otherwise they'll be unwrapped except when paginated and that inconsistency is harder to deal with than the need to unwrap everything, in my experience.
you can pass the pagination data with HTTP headers just fine, as well as the self-referential links and such. it's been done this way for ages
@MadRabbit Yeah that's nice and simple, I like it.
I'm currently heavily using sockets, and yeah, it'd be nice to have the socket response consistent with the other endpoints too, as it will potentially be passed through the same client-side actions etc.
> It's definitely needed in the websocket channel, though. Updates there don't necessarily come from you. It could be some other user of the system making the change and, depending on what exactly changed, you may want to notify the user differently about that.

Yep, old mate `old_val` is needed for sockets.
I'd be happy for PUT/POST to be unwrapped, but I think GET has to stay wrapped. I think I'd rather they be consistent with one another, which implies both wrapped.
Yeah headers can work. Isn't it a bigger hassle to parse headers than unwrap a `.result` though?

On 16 Jun 2016 12:15 PM, "David Banham" david@prismatik.com.au wrote:

> I'd be happy for PUT/POST to be unwrapped, but I think GET has to stay wrapped. I think I'd rather they be consistent with one another, which implies both wrapped.
Prismatik robot?! This has been Dave, by the way.
i'm happy to leave the socket alone, as there is no real standard around that. but the RESTful API should be flawless in my view.
@prismatik-robot not sure which hassle you mean. they're both strings; there is no difference between `JSON.parse(response.body)` and `JSON.parse(response.headers.metadata)`
just to clarify why i think GET/POST/PUT should be consistent: this is because of how it is used on a client. a flat structure allows simple code that can work with data directly and immediately:
```javascript
const things = yield api.get("/things");
const thing = yield api.get("/things/123");
const new_thing = yield api.post("/things", data);
const updated_thing = yield api.put("/things/123", old_thing);
```
it makes it much more pleasant to work with the API on the other end
there is basically an RFC standard for a `Link` header that should contain the pagination and self-referential data in it: http://tools.ietf.org/html/rfc5988#page-6
here is how github does it (https://developer.github.com/v3/#pagination); it looks basically like so:
```
Link: <https://api.github.com/user/repos?page=3&per_page=100>; rel="next",
      <https://api.github.com/user/repos?page=50&per_page=100>; rel="last"
```
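For reference, parsing that header on the client only takes a few lines. This is a rough sketch, not a full RFC 5988 parser; it assumes the URLs contain no commas and ignores extra link parameters:

```javascript
// Minimal Link header parser: returns a map of rel -> URL.
// Good enough for GitHub-style pagination links; a real implementation
// would handle commas inside URLs and additional link-params.
function parseLinkHeader(header) {
  const links = {};
  for (const part of header.split(',')) {
    const match = part.trim().match(/^<([^>]+)>;\s*rel="([^"]+)"$/);
    if (match) links[match[2]] = match[1];
  }
  return links;
}
```

With the GitHub example above, `parseLinkHeader(header).next` gives the next-page URL directly.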
Totally agree that GET/POST/PUT should be consistent.
The headers thing isn't quite as equivalent to the wrapped response as you make out, IMO. Using Axios, for example, the two use cases are:
Wrapped:

```javascript
get('/something').then(res => {
  const result = res.data.result;
  const count = res.data.count;
});
```

and Headers:

```javascript
get('/something').then(res => {
  const result = res.data;
  const count = JSON.parse(res.headers.metadata).count;
});
```
Sure, calling `JSON.parse` isn't an insurmountable challenge, but it's another thing that can throw; it's another thing to deal with.
If you're using the `fetch` API, it becomes even more onerous:
Wrapped:

```javascript
fetch('/something').then(res => {
  res.json()
    .then(body => {
      const result = body.result;
      const count = body.count; // lol body count
    });
});
```

in Headers:

```javascript
fetch('/something').then(res => {
  const count = JSON.parse(res.headers.get('metadata')).count;
  res.json()
    .then(body => {
      const result = body;
    });
});
```
Interestingly, with yield + fetch the LOC is actually lower with the headers method, but there's still that extra `JSON.parse` step.
Wrapped with yield:

```javascript
const response = yield fetch('/something');
const body = yield response.json();
const count = body.count;
const result = body.result;
```

Headers with yield:

```javascript
const response = yield fetch('/something');
const count = JSON.parse(response.headers.get('metadata')).count;
const result = yield response.json();
```

For curiosity, Axios with yield:

Wrapped:

```javascript
const response = yield get('/something');
const result = response.data.result;
const count = response.data.count;
```

Headers:

```javascript
const response = yield get('/something');
const result = response.data;
const count = JSON.parse(response.headers.metadata).count;
```
The Link header as described gives you the last and next pages like a good hypermedia API should, but it doesn't tell you how many total results there are. It doesn't give you the data to say "These are results 1-50 of 10000".
The other option is to put stuff in X-prefixed headers like GitHub does, but that really just trades `JSON.parse` for `parseInt`.

```javascript
const response = yield get('/something');
const count = parseInt(response.headers['X-Count'], 10);
const result = response.data;
```
you can send a custom header, like `X-Total-Pages`. headers handling is a highly wrappable solution. my primary concern here is consistency of the API. Github is a fine example of a very good API for that exact reason: consistency of the responses. Google's API is a pain in every part of the body for exactly the reason that they wrap data
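As a sketch of what "wrappable" means here (the header name `X-Total-Pages` and the response shape are assumptions from this thread, not an agreed spec):

```javascript
// A thin client-side helper that parses metadata headers once, so
// application code only ever sees plain data plus an optional totalPages.
// Assumes an axios-style response object whose header names are lowercased.
function unwrap(response) {
  const pages = parseInt(response.headers['x-total-pages'], 10);
  return {
    data: response.data, // the raw, unwrapped payload
    totalPages: Number.isNaN(pages) ? undefined : pages,
  };
}
```

Application code then calls `unwrap(response).data` everywhere and only looks at `totalPages` on the rare paginated list.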
Snap!
pagination is an edge case in my view. there can be lists without pagination. in many cases, even if the list is paginated, you want just the first page and don't really care about the metadata. and the whole problem can be solved with a custom header. this is not even a random hack; it is a very expected solution.
The headers method is definitely a valid and common way of going about things. It'll certainly work just fine. The wrapped method is also a valid and common way of going about things. It happens to be the method favoured by the (albeit unofficial) JSON API spec (albeit with `data` rather than `result`. Oops).
The other thing that just occurred to me about the headers method is that we'd need a separate JSON schema to describe the 'X-Metadata' header or we'd need to abandon the JSON schema entirely for individual 'X-' headers.
Either will work. The question for me is which method will be faster and easier to develop against. In all of the examples above I think the wrapped method was at least a little easier to live with. I can't think of any examples where the raw response with header metadata makes life easier. I get that it is aesthetically purer, but I can't see any practical advantage to it.
This guy has already had this conversation with himself on his blog!
http://www.vinaysahni.com/best-practices-for-a-pragmatic-restful-api#pagination
http://www.vinaysahni.com/best-practices-for-a-pragmatic-restful-api#envelope
The conclusion he comes to is "Don't use envelopes/wrappers, except when you have to."
Which I don't love as an answer. I think an API that's reliably and consistently a bit weird is much more useful than an API that is only sometimes a bit weird.
Note that I am of the opinion that including `statusCode` in the envelope is completely goddamned ridiculous. That's what the status code is for. It says so in the name! The other end of this insanity is those APIs that return 200 for every single response and only give you error data in the envelope. That's nuts, and nobody is advocating for it.
For the avoidance of doubt!
oh yeah, there are apis like that! there is also the other end of the spectrum, where they always return 500 with a text error (even for validation) :)
i think we will do the right thing by sticking to the HTTP error codes and some JSON responses like `{error: "blah"}`. you did well naming it `{message: "blah"}`, btw.
Apparently we're not supposed to (SHOULD NOT) use the X- prefix any more.
https://tools.ietf.org/html/rfc6648#section-2
So if we do go that way we should go with `Total-Pages` or something similar.
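For example, the server side could emit un-prefixed pagination headers along these lines (the header names are illustrative, not agreed):

```javascript
// Build RFC 6648 friendly pagination headers (no X- prefix).
// This also answers the "results 1-50 of 10000" case from earlier:
// Total-Count carries the overall size, Total-Pages the page count.
function paginationHeaders(totalItems, perPage) {
  return {
    'Total-Count': String(totalItems),
    'Total-Pages': String(Math.ceil(totalItems / perPage)),
  };
}
```

So a 10000-item collection at 50 per page would be served with `Total-Count: 10000` and `Total-Pages: 200`.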
So after Reading The Internet, it seems that the consensus of the hivemind is that our desired future state is all Headers and no Envelopes. I think that's probably a fine thing in the long term, but it will require a lot of standardisation before it isn't dumb. Every standardised `Link` header will need to come with a weirdo non-standard `Total-Pages` header that it brings to the party like an awkward friend nobody else knows.
It's impossible to describe the headers in JSON schema, so we're left to just talking about them in docs. That's fine once they're all IETF standards. Until then, though, I think it'd be a real shame to step away from this machine-parseable-documentation utopia we're gradually inhabiting and go back to "I dunno read the README I guess"
That said, this wouldn't be the first time the internet did something I thought was dumb and I went along with it anyway because it's easier to be consistent. Check me out indenting my code with spaces.
If you're certain that headers is the Way To Be, Nikolay, then I can be okay with it.
we live in an imperfect world and programming is a mess. but, you know what they say: "be the change you seek"
headers is the best compromise in this situation that i know of. it's been done for ages, it works, and it doesn't make things more complicated than they should be.
resolved in the thinky branch rework
Currently there exists inconsistency between endpoints on what the response format is. I think it would be ideal to make these consistent as much as possible because..
So far, this is what I've found that exists:
You can see they're all pretty close, but still different.
Suggestions:

- Remove `old_val` and `new_val` from the response? One issue I see with it currently is that it kind of bleeds concerns over to the client that the API uses `rethinkdb` under the hood, where the client shouldn't really need to know - or care - that that is the case. This might also help address some linting issues. Perhaps just `old` and `new`? Not much of a difference, I know, but `old_val` and `new_val` are synonymous with `rethinkdb`, where `old` and `new` represent different states of data - regardless of the underlying technology.
- Remove `found`? I haven't found (haaa) a use for that yet. I might be doing something wrong, or simply misunderstanding its use case, though? Happy to be enlightened!

What is everyone's thoughts on this?