amzn / selling-partner-api-models

This repository contains OpenAPI models for developers to use when developing software to call Selling Partner APIs.
Apache License 2.0

How Selling Partner API throttling algorithm works #2422

Closed: ynaft closed this issue 1 year ago

ynaft commented 2 years ago

I'm trying to understand the "token bucket algorithm" used to rate-limit API requests. I have a burst of 10, which means the bucket initially holds 10 tokens, so I can make 10 requests right away. I also have a restore rate of one token per 45 seconds: every 45 seconds one token is deposited back into the bucket, allowing one more request.

Based on that, if I empty all 10 tokens at once, I should have to wait 45 seconds to get one token back; at 90 seconds I'd have 2 tokens, at 135 seconds 3, and so on until the bucket refills to 10. But when I actually test against the Selling Partner API, I see a completely different picture.
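For concreteness, here is the naive refill model I'm describing, as a small Go sketch (my own illustration, assuming discrete refills of one token per 45 seconds and no server-side quirks):

```go
package main

import "fmt"

// Naive token-bucket refill model: starting from an empty bucket,
// one token is restored every restoreSec seconds, capped at burst.
func tokensAfter(burst int, restoreSec, elapsedSec float64) int {
	refilled := int(elapsedSec / restoreSec)
	if refilled > burst {
		return burst
	}
	return refilled
}

func main() {
	// Empty the bucket (10 requests), then check availability over time.
	for _, t := range []float64{44, 45, 90, 135, 450} {
		fmt.Printf("t=%3.0fs tokens=%d\n", t, tokensAfter(10, 45, t))
	}
}
```

Under this model the first token should never reappear before the 45-second mark, which is exactly what my stopwatch test contradicts.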

My stopwatch test showed that after emptying the whole bucket (10 requests, which took 00:00:11.50), I got a new token (a successful request) at 00:00:48.14, which is less than 45 seconds after the 'last' request. So I figured maybe I should count from the 'first' request instead, but the later successes arrived in a completely mixed-up pattern: the next one came 26 seconds later, then another only 4 seconds after that, and so on with varying gaps; see the screenshot.

[Screenshot: timestamps of the successful requests]

Can anyone explain?

ynaft commented 2 years ago

I reproduced it with a script: https://gist.github.com/ynaft/83aafef6c66009a610f1bc670b171857

The numbers confuse me just as much as the ones in my original question.

Results:

5/17/2022 1:19:05 PM - OK
5/17/2022 1:19:06 PM - OK
5/17/2022 1:19:08 PM - OK
5/17/2022 1:19:09 PM - OK
5/17/2022 1:19:10 PM - OK
5/17/2022 1:19:12 PM - OK
5/17/2022 1:19:13 PM - OK
5/17/2022 1:19:14 PM - OK
5/17/2022 1:19:15 PM - OK
5/17/2022 1:19:17 PM - OK
5/17/2022 1:19:18 PM - 429
5/17/2022 1:19:19 PM - 429
5/17/2022 1:19:20 PM - 429
5/17/2022 1:19:21 PM - 429
5/17/2022 1:19:22 PM - 429
5/17/2022 1:19:23 PM - 429
5/17/2022 1:19:25 PM - 429
5/17/2022 1:19:26 PM - 429
5/17/2022 1:19:27 PM - 429
5/17/2022 1:19:28 PM - 429
5/17/2022 1:19:29 PM - 429
5/17/2022 1:19:30 PM - 429
5/17/2022 1:19:31 PM - 429
5/17/2022 1:19:33 PM - 429
5/17/2022 1:19:34 PM - 429
5/17/2022 1:19:35 PM - 429
5/17/2022 1:19:36 PM - 429
5/17/2022 1:19:37 PM - 429
5/17/2022 1:19:38 PM - 429
5/17/2022 1:19:39 PM - 429
5/17/2022 1:19:40 PM - 429
5/17/2022 1:19:42 PM - 429
5/17/2022 1:19:43 PM - 429
5/17/2022 1:19:44 PM - 429
5/17/2022 1:19:45 PM - 429
5/17/2022 1:19:46 PM - 429
5/17/2022 1:19:47 PM - 429
5/17/2022 1:19:49 PM - 429
5/17/2022 1:19:50 PM - 429
5/17/2022 1:19:51 PM - OK
5/17/2022 1:19:52 PM - 429
5/17/2022 1:19:53 PM - 429
5/17/2022 1:19:54 PM - 429
5/17/2022 1:19:56 PM - 429
5/17/2022 1:19:57 PM - 429
5/17/2022 1:19:58 PM - 429
5/17/2022 1:19:59 PM - 429
5/17/2022 1:20:00 PM - 429
5/17/2022 1:20:01 PM - 429
5/17/2022 1:20:03 PM - 429
5/17/2022 1:20:04 PM - 429
5/17/2022 1:20:05 PM - 429
5/17/2022 1:20:06 PM - 429
5/17/2022 1:20:07 PM - 429
5/17/2022 1:20:08 PM - 429
5/17/2022 1:20:09 PM - 429
5/17/2022 1:20:10 PM - 429
5/17/2022 1:20:12 PM - 429
5/17/2022 1:20:13 PM - 429
5/17/2022 1:20:14 PM - 429
5/17/2022 1:20:15 PM - OK
5/17/2022 1:20:16 PM - 429
5/17/2022 1:20:17 PM - 429
5/17/2022 1:20:18 PM - 429
5/17/2022 1:20:20 PM - 429
5/17/2022 1:20:21 PM - 429
5/17/2022 1:20:22 PM - 429
5/17/2022 1:20:23 PM - OK
5/17/2022 1:20:24 PM - OK
5/17/2022 1:20:25 PM - 429
5/17/2022 1:20:27 PM - 429
5/17/2022 1:20:28 PM - 429
5/17/2022 1:20:29 PM - OK
5/17/2022 1:20:30 PM - 429
5/17/2022 1:20:31 PM - 429
5/17/2022 1:20:32 PM - 429
5/17/2022 1:20:34 PM - 429
5/17/2022 1:20:35 PM - 429
5/17/2022 1:20:36 PM - OK
5/17/2022 1:20:37 PM - 429
5/17/2022 1:20:39 PM - OK
5/17/2022 1:20:40 PM - OK
5/17/2022 1:20:41 PM - 429
5/17/2022 1:20:42 PM - 429
5/17/2022 1:20:43 PM - 429
5/17/2022 1:20:44 PM - OK
5/17/2022 1:20:46 PM - 429
5/17/2022 1:20:47 PM - 429
5/17/2022 1:20:48 PM - OK
5/17/2022 1:20:49 PM - OK
5/17/2022 1:20:50 PM - 429
5/17/2022 1:20:51 PM - 429
5/17/2022 1:20:53 PM - 429
5/17/2022 1:20:54 PM - OK
5/17/2022 1:20:55 PM - 429
5/17/2022 1:20:56 PM - 429
5/17/2022 1:20:57 PM - 429
5/17/2022 1:20:58 PM - OK
5/17/2022 1:20:59 PM - 429
5/17/2022 1:21:01 PM - 429
5/17/2022 1:21:02 PM - 429
5/17/2022 1:21:03 PM - 429
5/17/2022 1:21:04 PM - 429
5/17/2022 1:21:05 PM - 429
5/17/2022 1:21:06 PM - 429
5/17/2022 1:21:08 PM - OK
5/17/2022 1:21:09 PM - OK
5/17/2022 1:21:10 PM - OK
5/17/2022 1:21:11 PM - OK
5/17/2022 1:21:13 PM - 429
5/17/2022 1:21:14 PM - OK
ttimbul commented 2 years ago

Perhaps the documentation refers to a "guaranteed stable rate", so the smart thing would be to:

tmyers273 commented 2 years ago

x2. There seems to be some undocumented logic about the throttling algorithm.

It appears that there is some per-item throttling in place, instead of the per-request throttling alluded to in the docs and hinted at here, at least on the /products/pricing/v0/competitivePrice endpoint.

If this is true, then with the upcoming SP-API throttling reductions we have serious concerns about our use case becoming unviable (related: 1, 2).

We have users analyzing ~100k products at a time. Moving from MWS's limit of 10 items/sec to the planned SP-API limit of 1 item every 2 seconds increases the minimum run time to analyze 100k products by 20x, jumping from ~2.8 hours to a whopping ~56 hours.
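For reference, the arithmetic behind those run-time figures (1 item every 2 seconds is 0.5 items/sec):

```go
package main

import "fmt"

// Back-of-envelope run time for processing a fixed number of items at
// a given sustained throughput.
func hoursFor(items int, itemsPerSec float64) float64 {
	return float64(items) / itemsPerSec / 3600
}

func main() {
	fmt.Printf("MWS,    10  items/s: %.1f h\n", hoursFor(100000, 10))
	fmt.Printf("SP-API, 0.5 items/s: %.1f h\n", hoursFor(100000, 0.5))
}
```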

Test Results

[Screenshots: throttling errors vs. ASINs per call]

The correlation between asins per call and the number of throttling errors is quite apparent.

If the API were really throttling on a per-request basis (as documented), we would see near-zero throttling errors at every ASIN count. That is clearly not the case.
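One way to make the per-item hypothesis concrete: if every ASIN in a call costs one token, the bucket drains in proportion to the batch size, which would explain errors growing with ASINs per call. A rough model (all limits here are illustrative numbers, not documented values):

```go
package main

import "fmt"

// Hypothetical per-item cost model: each ASIN in a call consumes one
// token, so consumption scales with batch size while the restore rate
// stays fixed. drainSeconds estimates how long until the bucket empties.
func drainSeconds(burst int, restorePerSec float64, asinsPerCall int, callsPerSec float64) float64 {
	consumePerSec := float64(asinsPerCall) * callsPerSec
	if consumePerSec <= restorePerSec {
		return -1 // consumption never outpaces the restore rate
	}
	return float64(burst) / (consumePerSec - restorePerSec)
}

func main() {
	for _, n := range []int{1, 5, 10, 20} {
		s := drainSeconds(10, 1, n, 0.5)
		if s < 0 {
			fmt.Printf("asins/call=%2d -> never throttles\n", n)
			continue
		}
		fmt.Printf("asins/call=%2d -> bucket empty after %.1fs\n", n, s)
	}
}
```

Under a per-request model the drain time would be independent of ASIN count, which is what the documentation implies and what the charts above contradict.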

Testing Methodology

Tests were run against the /products/pricing/v0/competitivePrice endpoint. I essentially looped through 1,000 iterations, varying the number of ASINs per call between test runs, and called the endpoint in a separate goroutine each time.

// Fire one request every 100 ms (10 calls/s), 1,000 calls total, each
// in its own goroutine so a slow response doesn't skew the pacing.
for i := 0; i < 1000; i++ {
    go callGetCompetitivePricing(asins)
    time.Sleep(100 * time.Millisecond)
}
ttimbul commented 2 years ago

Looks like I commented prematurely and naively: we're now seeing the same inverse relationship between ASINs per call and error rate, so something is definitely off. It doesn't seem Amazon's engineers know what's going on either. Has either of you gotten any meaningful response from them?

github-actions[bot] commented 1 year ago

This is a very old issue that is probably not getting as much attention as it deserves. We encourage you to check if this is still an issue after the latest release and if you find that this is still a problem, please feel free to open a new issue and make a reference to this one.

github-actions[bot] commented 1 year ago

closed for inactivity