Closed · erkand-imeri closed this issue 1 year ago
Hi @erkand-imeri, thanks for writing in!
One approach you may be interested in is the "smart" rate limiter in our Java SDK:
When using the async web client provided by the SDK, it automatically controls the traffic to Slack API servers under the hood. This enables developers' apps to maintain an optimal pace and avoid being rate-limited. The module manages a queue and traffic metrics to decide how long to wait before making the next API call.
Once your app starts receiving rate-limited errors, the total time to complete the same number of API calls becomes significantly longer. Therefore, monitoring your traffic and pausing between API calls is the best way to mitigate the risk of these errors.
Unfortunately, we don't have plans to add something similar to this Node SDK, at least in the short term (due to our bandwidth, priorities, and the current design of the Node SDK). So, please take a look at the Java SDK implementation to get inspiration for your code.
I hope this helps.
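For readers looking for a starting point in this SDK's language, here is a minimal, hypothetical sketch of the queue-and-pacing idea described above. It is not the Java SDK's implementation, and the interval value is an illustrative assumption, since Slack's actual limits vary by API method tier:

```typescript
// Minimal sketch (assumption, not the SDK's code): serialize API calls
// through a promise chain and enforce a minimum gap between them.
class PacedQueue {
  private last = 0;
  private chain: Promise<unknown> = Promise.resolve();

  constructor(private minIntervalMs: number) {}

  enqueue<T>(task: () => Promise<T>): Promise<T> {
    const run = this.chain.then(async () => {
      // Wait until at least minIntervalMs has passed since the last call.
      const wait = this.last + this.minIntervalMs - Date.now();
      if (wait > 0) await new Promise((r) => setTimeout(r, wait));
      this.last = Date.now();
      return task();
    });
    // Keep the chain alive even if a task rejects.
    this.chain = run.catch(() => undefined);
    return run;
  }
}
```

Usage would look like `const queue = new PacedQueue(1200); await queue.enqueue(() => client.chat.postMessage({ channel, text }));` so that requests stay spaced out even when many events arrive at once.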
đź‘‹ It looks like this issue has been open for 30 days with no activity. We'll mark this as stale for now, and wait 10 days for an update or for further comment before closing this issue out. If you think this issue needs to be prioritized, please comment to get the thread going again! Maintainers also review issues marked as stale on a regular basis and comment or adjust status if the issue needs to be reprioritized.
It looks like implementing LRU caching helped me a lot to avoid hitting the Slack API rate limit.
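For illustration, here is a hedged sketch of what such caching might look like. A self-contained Map-based LRU with TTL is shown to keep the example runnable; in practice the `lru-cache` npm package provides the same behavior. The cache sizes and the `getParentDataCached` wrapper are assumptions, not code from this thread:

```typescript
// Sketch (assumption): cache parent-thread lookups so repeated replies in
// the same thread don't trigger repeated conversations.replies calls.
class TtlCache<V> {
  private store = new Map<string, { value: V; expires: number }>();

  constructor(private max: number, private ttlMs: number) {}

  get(key: string): V | undefined {
    const hit = this.store.get(key);
    if (!hit || hit.expires < Date.now()) {
      this.store.delete(key);
      return undefined;
    }
    // Refresh recency: Map preserves insertion order, so re-inserting
    // moves this entry to the "most recently used" end.
    this.store.delete(key);
    this.store.set(key, hit);
    return hit.value;
  }

  set(key: string, value: V): void {
    if (this.store.size >= this.max) {
      // Evict the least recently used entry (first key in insertion order).
      this.store.delete(this.store.keys().next().value!);
    }
    this.store.set(key, { value, expires: Date.now() + this.ttlMs });
  }
}

// Hypothetical wrapper around the issue author's getParentData helper:
// const cache = new TtlCache<ParentData>(1000, 5 * 60_000);
// async function getParentDataCached(channel: string, ts: string) {
//   const key = `${channel}:${ts}`;
//   const hit = cache.get(key);
//   if (hit) return hit;
//   const data = await getParentData(channel, ts);
//   cache.set(key, data);
//   return data;
// }
```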
@erkand-imeri
Hi, can you tell me how to use LRU caching to prevent rate limiting?
When using the async web client provided by the SDK, it automatically controls the traffic to Slack API servers under the hood.
Can you please provide the reference module file which does this? @seratch
From the behavior I observe in my implementation, it looks like it doesn't anticipate rate limits before making the API call; it just reacts to the rate-limited error by retrying the same API call, and the retry fails as well.
So my understanding is that the developer will always need to implement their own RateLimiter.
Am I correct?
Yes, you're right. The Java SDK implements a rate limiter to avoid getting 429 responses as much as possible. Checking the following resources may be helpful for you:
Hitting Slack API rate limits while using Slack Bolt with Node.js and TypeScript
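As a starting point for a "bring your own" rate limiter in Node.js, here is a hedged token-bucket sketch. It is distinct from the Java SDK's implementation, and the capacity and refill rate are illustrative assumptions, since the real limits depend on each API method's tier:

```typescript
// Sketch (assumption): a token bucket allows short bursts while capping
// sustained throughput at refillPerSec calls per second.
class TokenBucket {
  private tokens: number;
  private lastRefill = Date.now();

  constructor(private capacity: number, private refillPerSec: number) {
    this.tokens = capacity;
  }

  /** Consumes a token and returns true if a call is allowed right now. */
  tryAcquire(): boolean {
    const now = Date.now();
    // Refill proportionally to elapsed time, up to capacity.
    this.tokens = Math.min(
      this.capacity,
      this.tokens + ((now - this.lastRefill) / 1000) * this.refillPerSec
    );
    this.lastRefill = now;
    if (this.tokens >= 1) {
      this.tokens -= 1;
      return true;
    }
    return false;
  }
}
```

A caller that gets `false` back would queue or delay the request instead of sending it, which is the "anticipate before calling" behavior discussed above.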
Description:
I'm using the Slack Bolt framework to create a Slack app that allows dispatchers to communicate with customer service agents in Zendesk. However, I'm constantly hitting Slack API rate limits when posting messages and fetching thread replies. I need help optimizing my code to reduce the number of API calls and avoid hitting the rate limits.
Reproducible in:
The Slack SDK version
Node.js runtime version
Dockerfile: node:14-alpine
OS info
Kubernetes, Docker.
Steps to reproduce:
I am using getParentData to get the info from the main thread before posting data to an API. I am thinking of reducing these calls by implementing an LRU cache.
Expected result:
The Slack app should work without hitting the rate limits by optimizing API calls.
Actual result:
The app is constantly hitting Slack API rate limits, causing interruptions in the app's functionality.
Dependencies:
What else besides LRU caching can I introduce to at least reduce the number of Slack API rate-limit errors? API throttling, retries with exponential backoff?
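On the retry idea, here is a hedged sketch of exponential backoff that also honors a server-supplied Retry-After delay (Slack sends one with HTTP 429 responses). The error shape (`err.retryAfter` in seconds) is an assumption for illustration; adapt it to whatever error your client actually throws:

```typescript
// Sketch (assumption): retry a failing async call with exponential backoff,
// preferring the server's Retry-After hint when one is present.
async function withBackoff<T>(
  call: () => Promise<T>,
  maxRetries = 5,
  baseDelayMs = 500
): Promise<T> {
  for (let attempt = 0; ; attempt++) {
    try {
      return await call();
    } catch (err: any) {
      if (attempt >= maxRetries) throw err;
      // Assumed field: a Retry-After value in seconds attached to the error.
      const retryAfterSec = err?.retryAfter;
      const delay =
        retryAfterSec != null
          ? retryAfterSec * 1000
          : baseDelayMs * 2 ** attempt + Math.random() * 100; // jitter
      await new Promise((r) => setTimeout(r, delay));
    }
  }
}
```

Usage would look like `await withBackoff(() => client.conversations.replies({ channel, ts }));`, combined with caching and throttling so retries stay rare rather than routine.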
Thanks in advance, Erkand.