Afterster opened this issue 10 years ago
Yep, let's add pagination.
There was some discussion about this in https://github.com/sampsyo/beets/issues/750 and https://github.com/sampsyo/beets/issues/718. Some people (e.g., @asutherland) have suggested that start/end or offset/limit can be an efficiency problem in large collections, so let's be careful about this one. Options include:
3. `GET /items` with an id-based parameter (key-based pagination), which seems like a common case because databases can usually efficiently look up by id.

Input from people with more database server experience here would be greatly appreciated.
I should add that option 3 looks like a good 80% solution that would be easy to support, so I lean that way at the moment.
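The "look up by id" approach behind option 3 is usually called keyset (or "seek") pagination. A minimal sketch with SQLite, assuming a hypothetical `items` table (AURA doesn't mandate any particular schema or database):

```python
import sqlite3

# Hypothetical schema, purely for illustration.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE items (id INTEGER PRIMARY KEY, title TEXT)")
con.executemany(
    "INSERT INTO items (id, title) VALUES (?, ?)",
    [(i, "track %d" % i) for i in range(1, 8)],
)

def page_after(last_id, limit=3):
    """Keyset pagination: ask for rows after the last id seen.

    Unlike OFFSET, this stays efficient on large collections because
    the database can jump straight to the id via the primary-key index.
    """
    return con.execute(
        "SELECT id, title FROM items WHERE id > ? ORDER BY id LIMIT ?",
        (last_id, limit),
    ).fetchall()

first = page_after(0)              # first page: ids 1, 2, 3
second = page_after(first[-1][0])  # next page: ids 4, 5, 6
```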
Interesting discussion on pagination (yeah, not paging... ^^) in the beets threads indeed. Option 3 looks like a good compromise.
The normal "trick" for doing this is to have an opaque continuation token. The meaning of this token is internal to the server; in effect it can be a string that encodes a limit/offset, an identifier of the last item retrieved, or any other data needed to continue paginating efficiently.
The downside of this variant is that you can never jump straight to page N but must go through pages 0 through N to get there. On the positive side, this leaves the server free to store whatever information it needs to paginate efficiently, rather than having to work around whatever scheme was chosen here.
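One way the opaque-token idea could look in practice: the server serializes whatever state it needs and hands the client an uninterpretable string. This is only a sketch; the field names (`last_id`, `limit`) and the base64-of-JSON encoding are assumptions, not part of any spec:

```python
import base64
import json

def encode_token(state):
    """Pack server-side pagination state into an opaque string.

    Clients treat this as a black box; the server is free to change
    the encoding later (e.g. to a signed blob or a cursor-table key).
    """
    raw = json.dumps(state, sort_keys=True).encode("utf-8")
    return base64.urlsafe_b64encode(raw).decode("ascii")

def decode_token(token):
    """Recover the pagination state on the next request."""
    raw = base64.urlsafe_b64decode(token.encode("ascii"))
    return json.loads(raw)

# Encodes "continue after item 42, pages of 100" (names made up):
token = encode_token({"last_id": 42, "limit": 100})
```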
Very cool. I'm enthusiastic about this opaque-token pagination idea. To save any other readers a little googling, here's a discussion of the problems with limit/offset, and here's an implementation of the idea for Django. I like this plan because it allows anything from the naive approach (so it's easy to implement) up to a stateful DB-cursor design if you're a glutton for punishment.
One unresolved question is whether the server should be allowed to truncate unilaterally. On one hand, servers might want to protect against unintentional DoS by returning fewer items than the client requested; on the other, clients would then be required to implement pagination (a client could no longer be blissfully unaware of pagination and hope to get all the data). I think the former probably outweighs the latter. Note that the client could still request pagination and the server would have to honor it; it's just that the server could also paginate without being asked.
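Unilateral truncation could be as simple as capping the page size and attaching a continuation marker whenever items remain. A sketch, assuming a made-up server cap and response field names:

```python
SERVER_MAX = 500  # hypothetical server-side cap, not from the spec

def paginate(items, requested_limit=None):
    """Return one page, truncating unilaterally to the server's cap.

    The response carries a continuation marker whenever more items
    remain, so a pagination-aware client can keep going; a naive
    client that ignores it simply gets a (possibly partial) page.
    """
    limit = min(requested_limit or SERVER_MAX, SERVER_MAX)
    page = items[:limit]
    next_offset = limit if len(items) > limit else None
    return {"items": page, "next_offset": next_offset}
```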
(BTW, @adamcik, I'm a Mopidy admirer—I'd be really interested in hearing about lessons learned from Mopidy development. It would be a great design goal to build something that feels "natural" as a Mopidy frontend.)
I've put together a draft of the pagination idea: http://auraspec.readthedocs.org/en/latest/api.html#pagination
Comments and criticism welcome!
Looks good in general. For minutiae purposes, it's probably worth explicitly stating that use of the opaque token in a request is expected to invalidate the token. It's also probably desirable to state the status code that should be used if the token is expired (410 Gone? Just 400 for a bad request?).
If it's expected that some servers might use the token to maintain state with non-trivial overhead, it might be good to specify that `limit=0` is a valid parameter that can be used for optimized invalidation of continuation tokens that will never be used. So if a user closes their search window, etc., a well-behaved UI could invalidate that token, and a server that doesn't care could just treat `limit=0` as a fast-path NO-OP.
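Single-use tokens plus the `limit=0` fast path could be sketched like this, assuming a hypothetical in-memory token store and the 410-on-expiry convention floated above (a real server might keep a DB cursor or cached result set behind each token):

```python
# Hypothetical in-memory token store, purely for illustration.
TOKENS = {}

def continue_request(token, limit):
    """Handle a continuation request; tokens are single-use.

    limit == 0 is the fast-path invalidation: the pop below has
    already released the server-side state, so we just return an
    empty page rather than doing any real work.
    """
    state = TOKENS.pop(token, None)
    if state is None:
        return 410, None   # expired or already consumed
    if limit == 0:
        return 200, []     # NO-OP: state freed, nothing returned
    return 200, state["items"][:limit]

TOKENS["abc"] = {"items": [1, 2, 3]}
```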
Good point. Two important things were missing: the token is single-use, and the server needs to be able to expire tokens if it wants to (to allow reasonable stateful implementations).
I'll keep thinking about whether "well-behaved" clients should be able to expire tokens explicitly. That might be best left as an implementation-specific feature.
Paging should be supported from the beginning, as should filtering. I think it's only missing optional start and end parameters, don't you?