Closed lastobelus closed 10 years ago
This is a frustrating issue that takes a lot of the enjoyment out of writing Shopify apps. It makes code that tries to deal with API limits far more complex & fragile than it otherwise needs to be. If there's a reason why adding the API block time to the headers is not as utterly simple as it seems it would be, at least some feedback would be appreciated.
Seeing that the limits no longer depend on a time to reset, this is not relevant anymore. There are probably going to be some extra headers to give more information about the throttle, but that's not specific to this ruby gem.
This would be an appreciated feature. I believe the limits are now 500 requests / 5 minutes. I tried using https://github.com/Fivell/activeresource-response but had no luck.
The API call limit is a leaky bucket algorithm with a bucket size of 40 and a leak rate of 2/sec. The number of calls remaining are represented in an HTTP header value.
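For reference, a minimal sketch of the leaky-bucket arithmetic described above, assuming the limit is reported in a header as `used/bucket_size` (e.g. `32/40`, as in the `X-Shopify-Shop-Api-Call-Limit` header); the function names here are illustrative, not part of the gem:

```ruby
# Leaky-bucket arithmetic for the API call limit (a sketch, not official code).
# Assumes a header value like "32/40": 32 calls used, bucket size 40,
# with the bucket leaking at 2 calls per second.

LEAK_RATE = 2.0 # calls per second

# Parse the call-limit header into [used, bucket_size].
def parse_call_limit(header_value)
  header_value.split("/").map(&:to_i)
end

# Seconds to wait until `needed` calls fit in the bucket again.
def seconds_until_available(header_value, needed)
  used, bucket_size = parse_call_limit(header_value)
  free = bucket_size - used
  return 0.0 if free >= needed
  (needed - free) / LEAK_RATE
end

puts seconds_until_available("32/40", 4)  # => 0.0 (8 calls already free)
puts seconds_until_available("39/40", 5)  # => 2.0 (4 more calls must leak out)
```

Because the leak rate is fixed, the header alone is enough to compute a safe sleep time, with no reset timestamp needed.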
I see. I'll look closer into accessing the headers via this gem later. Thanks!
TL;DR
I want to be a good citizen with respect to the API limit. Because my app has both internal and external concurrency, I can't unless I know when the API limit will reset.
Detailed description
Without knowing when the API limit will reset, I can't avoid spamming you in any customer-facing app functionality (example: shipping calculation).
If I'm making requests in a single-threaded, single-user tool, I can keep track of API usage & timing and it will more or less work (although I'll still occasionally spam you when another app is using up calls).
But for things like shipping rate calculation, I can't get away with sequential calls to the API; it adds too much latency to a process that already typically has 3-7 seconds of latency. To be a good citizen I want to fire 1 API request, check the limit, then fire the rest in parallel. Thus my app has internal concurrency.
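The "probe first, then parallelize" strategy can be sketched with plain threads (my actual client uses typhoeus/hydra; here `fetch` is a stand-in for a real API call, and `calls_left` stands in for the value read from the call-limit header):

```ruby
# Fire one probe request, read the remaining headroom, then run only that
# many requests in parallel. This is a simplified sketch of the strategy,
# not the typhoeus-based implementation.

def probe_then_parallel(requests, fetch)
  first, *rest = requests
  results = [fetch.call(first)]          # fire one probe request
  headroom = results.first[:calls_left]  # remaining calls, per the limit header

  to_run  = rest.first(headroom)
  to_skip = rest.drop(headroom)
  threads = to_run.map { |req| Thread.new { fetch.call(req) } }
  results.concat(threads.map(&:value))
  { results: results, skipped: to_skip } # caller decides: retry later or bail
end

# Fake fetch that pretends 5 calls remain after each response.
fake_fetch = ->(req) { { req: req, calls_left: 5 } }
out = probe_then_parallel((1..10).to_a, fake_fetch)
puts out[:results].size  # 6 (the probe plus 5 parallel requests)
puts out[:skipped].size  # 4
```

The missing piece is the `skipped` branch: without a reset time in the headers, there is no principled way to choose how long those deferred requests should sleep.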
I used typhoeus/hydra to do this, setting the concurrency to 20 instead of the default 200 and adding an on_complete callback to each request to pause/cancel the hydra if any request hits the API limit.
However, if there are not enough requests left, I have no idea how long to sleep. Even if I always bail on requests instead of sleeping and retrying, the web app itself still has however much external concurrency we buy on Heroku -- the minimum is 3-4 for a free app using Unicorn. So if 2 or 3 requests overlap, some of them will end up spamming you (unless I make my whole web app stall for 5 minutes whenever I bump against the API limit, which my clients would not be very happy with).
If you were to tell me, in a header on each API response, the time the API limit is going to reset, I could track that globally using memcached and be a good citizen. All incoming requests to my app could check the API call headroom & the time when it will reset, and decide whether to wait for the reset or bail. I would occasionally spam you when some other app bumped against the API limit, but usually with only 1 request in a 5-minute block (occasionally with a parallel block of requests, when the other app used up the limit between my checking and firing my parallel block of requests).
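The global wait-or-bail decision described above could look something like this. A plain Hash stands in for memcached here (in production it would be a shared cache client such as Dalli), and the `reset_at` value is the hypothetical header this issue is asking for:

```ruby
# Track limit headroom + reset time in a shared store, and decide per
# incoming request whether to proceed, wait briefly, or bail.
# Sketch only: the store, header names, and thresholds are assumptions.

class LimitTracker
  def initialize(store, max_wait: 2.0)
    @store = store        # shared cache (memcached in production)
    @max_wait = max_wait  # longest we are willing to stall a web request
  end

  # Record headroom and reset time from the latest API response headers.
  def record(calls_left, reset_at)
    @store["calls_left"] = calls_left
    @store["reset_at"]   = reset_at
  end

  # :go, :wait, or :bail, depending on headroom and how far off the reset is.
  def decide(now = Time.now)
    return :go if @store.fetch("calls_left", 1) > 0
    wait = @store["reset_at"] - now
    wait <= @max_wait ? :wait : :bail
  end
end

store   = {}
tracker = LimitTracker.new(store)
tracker.record(0, Time.now + 60)
puts tracker.decide  # :bail -- reset is a minute away; don't stall the request
tracker.record(3, Time.now + 60)
puts tracker.decide  # :go -- headroom available
```

With every dyno reading and writing the same store, only the first request after exhaustion ever hits the API blind; the rest can make an informed choice.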
My clients (who are also your clients) would benefit, because the app will seem faster under load (requests only bail when they have to, and only stall as long as they need to)
If you add this header, I promise to package & publish on github my typhoeus-based client that uses it to be a good, mostly non-spamming fast api consumer.