hisabimbola / slack-history-export

An npm module that allows Slack users to export their history
MIT License

fix(Network): Throttle network calls #47

Closed: rdavison closed this pull request 6 years ago

rdavison commented 7 years ago

Throttle network calls in order to satisfy rate limit policy

BREAKING CHANGE: None


Summary

Description

TODO

hisabimbola commented 7 years ago

Hello @rdavison, thanks for your PR

I would like to propose some changes to the implementation.

Since all the Slack methods are tied to one class, we can implement a queue system at that level and abstract it away from the main methods. The methods would then queue and execute the tasks as needed.

I think that might look much better than the current implementation

hisabimbola commented 7 years ago

This module might help.

and the code would look like this:

import slack from 'slack'
import Queue from 'promise-queue'

export default class SlackAPI {
  constructor (token) {
    this.token = token
    this.slack = slack

    this.queue = new Queue(5, Infinity) // these parameters should be configurable
  }

  users () {
    return this.queue.add(() =>
      new Promise((resolve, reject) => {
        this.slack.users.list({ token: this.token }, (err, res) => {
          if (err)
            reject(err)
          else
            resolve(res)
        })
      }),
    )
  }
}

The only issue I am seeing with this is that it does not add a time delay, just a limit on concurrent requests, but we can find a module that does that too. In any case, you get the idea of how the implementation might look.

A better way might be to add retry functionality, so the module retries if it encounters a rate-limit error.
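For illustration only, here is a rough sketch (not part of this PR) of how a fixed delay could be layered on top of the queued SlackAPI class above. The helper names delay and throttledAdd are made up for the example, and it assumes the queue runs with a concurrency of 1:

import Queue from 'promise-queue'

// Hypothetical helper: resolves after `ms` milliseconds.
const delay = (ms) => new Promise((resolve) => setTimeout(resolve, ms))

// Hypothetical wrapper: with a concurrency of 1, each task finishes and then
// waits `delayMs` before the queue starts the next one, spacing out requests.
function throttledAdd (queue, task, delayMs) {
  return queue.add(() => task().then((res) => delay(delayMs).then(() => res)))
}

const queue = new Queue(1, Infinity)
// Usage (illustrative): throttledAdd(queue, () => someSlackCall(), 1000)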

rdavison commented 7 years ago

I think I see what you're trying to say. I can see the benefit the queue would add in controlling concurrency, and I agree that it is a more general solution. However, I feel it is solving the wrong problem. I say this only because, even if concurrency were limited to just one request at a time, the application would still run into the rate-limiting problem if requests are dispatched too quickly.

Regarding the retry option, I think that sounds more desirable because networks can encounter all sorts of problems besides being rate-limited. I would be inclined to implement a retry feature to complement a time delay feature, and leave request concurrency aside unless it is really needed. What do you think?

hisabimbola commented 7 years ago

Yeah, you are correct @rdavison. We need both pieces of functionality: concurrency control to reduce the rate at which we get rate-limiting errors, and retries for when a 500 or rate-limiting error is encountered. I think that's a good option for now.

Also, the retry option should have a time delay; the delay can be constant or grow exponentially (known as backoff).
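As a rough sketch of that idea (illustrative only, not code from this PR), a retry helper with exponential backoff could look something like this; the name retryWithBackoff and its parameters are assumptions for the example:

// Hypothetical retry helper: `fn` is any function returning a promise,
// e.g. one of the queued slack calls above.
function retryWithBackoff (fn, retries = 3, baseDelayMs = 1000) {
  return fn().catch((err) => {
    if (retries <= 0) throw err
    // Wait, then retry with double the delay (exponential backoff).
    return new Promise((resolve) => setTimeout(resolve, baseDelayMs))
      .then(() => retryWithBackoff(fn, retries - 1, baseDelayMs * 2))
  })
}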

FreedomBen commented 7 years ago

FWIW, I checked this out locally and I still ran into throttling issues, even with -d 5000 (and I hit the 429 just as quickly as without this patch). IOW, this did not fix my throttling problem.

Thanks both of you for your work!

rdavison commented 7 years ago

@FreedomBen : Strange! Were you at least able to observe the process taking longer to run? It worked like a charm for me

FreedomBen commented 7 years ago

@rdavison nope, the process took the same amount of time, implying that it wasn't actually throttling. I never rule out PEBKAC though

hisabimbola commented 6 years ago

I just tested this and I do not seem to be getting the rate-limiting issue. This is because the slack SDK module being used implements a retry mechanism, as seen here.

Closing this PR