metabrainz / web-service-v3-design

A collaborative specification for the third version of the MusicBrainz web service

Decide on a rate limiting strategy #10

Open ocharles opened 12 years ago

ocharles commented 12 years ago

We need to clarify rate limiting, improving on the currently vague points in the spec. Before we can do that, though, we need to decide how the rate limiter should actually work - that's what this issue is for.

mayhem commented 12 years ago

Let's enumerate our possible options:

  1. Static rate limiter, like we have now.
  2. Dynamic rate limiter, where a delay time is specified at the end of each call.
  3. No rate limiter, which nearly guarantees that we're going to have shitty service for all. In reality, this isn't a viable option, since we aim to provide a fair experience to all of our users.

What other options do we have?

ocharles commented 12 years ago
  4. Dynamic rate limiter based on 'cost'. You are allocated a 'balance' every x minutes/hours/whatever, and each web service call has a cost (for simplicity, they could all be equal). Making a call uses some of your balance, and you can only make a call if you have a sufficient balance. It's up to you whether you space these calls out over the whole hour or use them as fast as we can deliver them.

Note that this has scope for things like dynamic costs (the longer it takes to deliver the response, or the bigger the document, the higher the final cost).
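For concreteness, here is a minimal sketch of how such a balance/cost scheme might look server-side (essentially a token bucket; the class name, parameters, and numbers are invented for illustration, not part of any spec):

```python
import time

class CostRateLimiter:
    """Hypothetical balance/cost limiter (a token bucket): each client
    accrues 'balance' at a fixed rate up to a cap, and every web service
    call deducts its cost from that balance."""

    def __init__(self, refill_per_second=1.0, max_balance=60.0):
        self.refill_per_second = refill_per_second
        self.max_balance = max_balance
        self.clients = {}  # client id -> (balance, timestamp of last update)

    def allow(self, client_id, cost=1.0):
        """Return True if the call may proceed, False if it should be
        rejected (e.g. with HTTP 503 and a Retry-After header)."""
        now = time.monotonic()
        balance, last = self.clients.get(client_id, (self.max_balance, now))
        # Accrue credit for the time elapsed since the last call.
        balance = min(self.max_balance,
                      balance + (now - last) * self.refill_per_second)
        if balance >= cost:
            self.clients[client_id] = (balance - cost, now)
            return True
        self.clients[client_id] = (balance, now)
        return False
```

With every cost fixed at 1 this reduces to a plain requests-per-interval limit; the per-call cost argument is what leaves room for the dynamic costs mentioned above.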

kepstin commented 12 years ago

The way Twitter does it is a bit different - each user has a maximum number of requests per hour. They can use those requests up as fast as they want, as long as they don't go over that limit.

They have two categories - unauthenticated and authenticated - and authenticated users get more requests per hour. (This might be interesting for MusicBrainz: if a user logs into their account in Picard, Picard could run faster?)

The anonymous rate limit is per-IP; the authenticated rate limit is per-user across all IPs.

They return rate-limiting response headers:

```
X-RateLimit-Limit: 350
X-RateLimit-Remaining: 350
X-RateLimit-Reset: 1277485629
```

and also have an API endpoint to query rate-limiting status.

(This is fundamentally similar to the 'cost' idea, except that they simplify by making everything cost 1.)
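For illustration, a client could honour those headers with a loop like this (a sketch against the Twitter-style header names quoted above; the helper name and the Python `requests` library are assumptions):

```python
import time
import requests  # assumed HTTP client; not part of any MusicBrainz spec

def rate_limited_get(url):
    """Fetch a URL and, if the quota is exhausted, sleep until it resets."""
    resp = requests.get(url)
    remaining = int(resp.headers.get("X-RateLimit-Remaining", "1"))
    if remaining == 0:
        reset_at = int(resp.headers["X-RateLimit-Reset"])  # Unix timestamp
        time.sleep(max(0, reset_at - time.time()))
    return resp
```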

warpr commented 12 years ago

GitHub uses a system similar to what kepstin described for Twitter: http://developer.github.com/v3/#rate-limiting

mayhem commented 12 years ago

Thanks for the references on how others are doing it -- quite helpful.

However, I think the data we have is fundamentally different from what Twitter and GitHub are serving. With that in mind, I looked at Freebase, which serves a data graph as we do. Their wiki says:

"The Freebase API Terms of Service limit give users a read quota of 100k API calls per day (rolling 24 hour clock) and a write quota of 10k writes per day. If you find yourself needing more than 100k API calls you should consider using our data dumps to query the data locally."

They have no rate-of-access limitations -- only a daily quota.

ijabz commented 12 years ago

When we consider the current web service, the problem is not that it is rate limited, but that the limit is 1 request per second; if it were 10 requests per second, that would be reasonable. So if the servers could support that level of rate limiting, it would be okay. I don't see that weighting different queries differently, or recalculating the delay based on what a client has already done, is that useful.

Having said that, it would be nice if we could allow faster throughput at different times of day if there is a marked difference in load. Perhaps the delay could be returned with each call, and this rate could be increased/decreased during the day as required, the same for every client. But I wouldn't expect the rate to differ markedly from call to call - it doesn't need to be continually calculated.

Two related questions:

  1. There was a plan to provide a non-rate-limited web service at a premium price. If this is still going ahead, a new rate limiting system needs to be easy to understand and fit in with that aim.
  2. ws/1 and ws/2 still need supporting for some time even after ws/3 exists, and I wouldn't expect a massive rush to move to ws/3. So do we have the resources to relax the rate limit for ws/3 whilst still serving ws/2? And if we actually do, then we could relax the rate limit for the existing ws/2 system instead. In fact, I would still prefer a ws/2.1 which evolves the ws/2 interface, fixing its problems, rather than starting again with ws/3.

ijabz commented 12 years ago

A daily limit could be annoying unless it is in the region of 100k. Discogs used to have no rate limiter but a daily limit of 5,000 lookups, and that was a real problem for me. They have now moved to the same scheme as MusicBrainz: 1 request per second and no daily limit.

mayhem commented 12 years ago

ijabz writes:

Having said that, it would be nice if we could allow faster throughput at different times of day if there is a marked difference in load. Perhaps the delay could be returned with each call, and this rate could be increased/decreased during the day as required, the same for every client.

That is exactly what ocharles suggests with dynamic rate limiting! Are you coming around to this idea now?

ijabz commented 12 years ago

As I understood it, the dynamic rate limiter ocharles suggested worked out the next rate for each individual based on what they had already submitted. I was suggesting that we have no such per-individual rate limiter, just that the global limit could be changed during the day.

kepstin commented 12 years ago

A dynamic rate limit could be compatible with a cost-based solution; the dynamic part would be adjusting the rate at which users receive credit, or their maximum credit balance.
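In code terms that could be a single knob: keep the per-user accounting as-is and let the server periodically retune the global refill rate against measured load (purely a sketch; the function, numbers, and load metric are all made up):

```python
def adjust_refill(limiter, load):
    """Hypothetical global knob: scale the credit refill rate with load.

    'limiter' is any object with a mutable 'refill_per_second' attribute
    (such as the CostRateLimiter sketched earlier in this thread);
    'load' is current server utilisation in the range 0.0-1.0.
    """
    base_rate = 10.0   # credit per second granted when the servers are idle
    floor_rate = 1.0   # never drop below roughly today's 1-request/second service
    limiter.refill_per_second = max(floor_rate, base_rate * (1.0 - load))
```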

mayhem commented 12 years ago

ijabz: The rate would be determined instantaneously. It could be calculated on a per-user basis and/or a system-capacity basis, so we could try different schemes if we deem that a given approach isn't working. But it all hinges on the idea that a delay is returned with each WS call, and if the client waits that amount of time, all is well for the next call.
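The client half of that contract is tiny (a sketch; the `X-Delay` header name is an invented placeholder - no header has actually been decided):

```python
import time
import requests  # assumed HTTP client

def polite_fetch(urls):
    """Fetch URLs in sequence, waiting however long the server asks
    between calls."""
    for url in urls:
        resp = requests.get(url)
        yield resp
        # Hypothetical header carrying the server-computed delay, in seconds.
        time.sleep(float(resp.headers.get("X-Delay", "0")))
```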

ijabz commented 12 years ago

Right, ok. I was concerned that the overhead of calculating per user was high, and that it's difficult for the user to understand what service they are going to get. Having said all that, if those two issues are resolved/are not an issue, then I think weighting requests based on the amount of data returned is necessary to provide a web service that doesn't encourage mediocre apps (https://github.com/metabrainz/web-service-v3-design/issues/20). For example, consider looking up a release with 10 tracks: if I want to know where the recording name differs from the track name, I need to make another 10 queries on top of the 1 release query, whereas a simpler app that isn't bothered about that only needs to make one query. This gap of 11 to 1 could be reduced if rate limiting was based on total data rather than total calls. Having said that, I still think it would be better if this information, which isn't bloat, was returned in the original release call.

ocharles commented 12 years ago

We have decided on a dynamic rate limiter with no fixed rate. The proposal can be found at https://github.com/metabrainz/web-service-v3-design/tree/rate-limiter - please review this documentation!