walmat / nebula-old


Cache Monitor Requests between Tasks #321

Open pr1sm opened 5 years ago

pr1sm commented 5 years ago

Is your feature request related to a problem? Please describe.
Users will likely run multiple tasks to increase their odds of getting a successful checkout (address jigging, multiple sizes, using different proxies, etc.). This causes a large number of unnecessary requests to be made while monitoring for products, which increases the chance of soft bans occurring.

Describe the solution you'd like
A cache of monitor requests should be created to reduce the number of duplicate requests made in the monitor.

This cache would have a timeout for hits equal to the monitor delay of a task. This would prevent multiple tasks from each making the same request, while still keeping the info relatively fresh (within one monitor delay).

The following diagram illustrates how the timing would work:

      time           task reqs      cache      request
        |               |             |           |
        |             task1--------->miss---->(request)----.
        |               |             |           |        |
        |             task1<--------store<----(response)<--`
        |               |             |           |
        |               |             |           |
        |             task2--------->hit---.      |
        |               |             |    |      |
        |             task2<---------------`      |
        |               |             |           |
(1st monitor delay)     |             |           |
        |             task3--------->miss---->(request)----.
        |               |             |           |        |
        |             task3<--------store<----(response)<--`
        |             task1--------->hit---.      |
        |               |             |    |      |
        |             task1<---------------`      |
        |               |             |           |
        |               |             |           |
        |             task2--------->hit---.      |
        |               |             |    |      |
(2nd monitor delay)   task2<---------------`      |
        |               |             |           |
        |               |             |           |
        |             task3--------->miss---->(request)----.
        |               |             |           |        |
        |             task3<--------store<----(response)<--`
        |             task1--------->hit---.      |
        |               |             |    |      |
        |             task1<---------------`      |
        |               |             |           |
        |               |             |           |
(3rd monitor delay)   task2--------->miss---->(request)----.
        |               |             |           |        |
        |             task2<--------store<----(response)<--`
        |             task3--------->hit---.      |
        |               |             |    |      |
        |             task3<---------------`      |
        |               |             |           |
        |               |             |           |
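
To make the idea concrete, here is a minimal sketch of such a cache. This is not code from the repo; the class and method names are illustrative assumptions, and `request` stands in for whatever function the monitor already uses to fetch a URL.

```js
// Hypothetical sketch (names are illustrative, not from the codebase):
// a cache keyed by request URL whose entries expire after the task's
// monitor delay, so tasks polling the same URL share one request.
class MonitorCache {
  constructor() {
    this._entries = new Map(); // url -> { expiresAt, promise }
  }

  // `request` must return a promise for the response.
  get(url, monitorDelay, request) {
    const now = Date.now();
    const entry = this._entries.get(url);
    if (entry && entry.expiresAt > now) {
      return entry.promise; // hit: reuse the stored (or still in-flight) response
    }
    // miss: make the request once and keep it around for `monitorDelay` ms
    const promise = request(url);
    this._entries.set(url, { expiresAt: now + monitorDelay, promise });
    return promise;
  }
}
```

Storing the promise rather than the resolved body also coalesces concurrent misses: if task2 asks for the same URL while task1's request is still in flight, it reuses the pending promise instead of firing a second request. A real implementation would also want to evict entries whose promise rejects so a failed request isn't cached for a full delay.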

Describe alternatives you've considered
I don't have any other alternatives at the moment, but will post any in the thread below.

Additional context
This will likely require a lot of refactoring of the taskRunner and monitor classes. Further, we would need some type of task -> cache communication. This will be easier if everything is in a shared address space (single/multi-threaded vs. multi-process).

Because of this, I'm putting this on hold until #192 is solved. That will allow us to phase out the multi-process method if it proves too difficult to keep working.
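
As a rough illustration of the shared-address-space case mentioned above (all names here are hypothetical; the real taskRunner/monitor classes would need to be refactored to accept the cache), the wiring could look something like this:

```js
// Hypothetical wiring, assuming all tasks run in a single process: the task
// manager creates one MonitorCache (from the sketch above) and every task's
// monitor loop goes through it instead of requesting directly.
const cache = new MonitorCache();

// Stand-in for whatever request helper the monitor actually uses.
const fetchPage = (url) => fetch(url).then((res) => res.text());

function startTask(task) {
  const monitorLoop = async () => {
    // Only the first task to miss within a delay window actually hits the
    // site; the others receive the cached response.
    const body = await cache.get(task.productUrl, task.monitorDelay, fetchPage);
    // ...parse `body`, check stock/keywords, then schedule the next poll
    setTimeout(monitorLoop, task.monitorDelay);
  };
  monitorLoop();
}
```

In a multi-process setup the cache would instead need some form of IPC or a shared store, which is part of why this is tied to the outcome of #192.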

walmat commented 5 years ago

We should think about making a site-specific mapping to skip requests to product endpoints that we know are blocked ahead of time (e.g., we know products.json is blocked on Kith by default, so there is no need to make that request). This will also help preserve the user's proxy before an eventual soft ban.
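
A minimal sketch of what that mapping could look like (the structure and helper name are assumptions; only the kith/products.json example comes from the comment above):

```js
// Hypothetical per-site map of product endpoints known to be blocked; a
// monitor would consult it before spending a request (and a proxy) on an
// endpoint that is guaranteed to fail.
const BLOCKED_ENDPOINTS = {
  'kith.com': ['/products.json'],
};

function isBlocked(siteUrl, endpoint) {
  const { hostname } = new URL(siteUrl);
  return (BLOCKED_ENDPOINTS[hostname] || []).includes(endpoint);
}

// e.g. isBlocked('https://kith.com', '/products.json') === true,
// so the monitor would fall back to a different monitoring method instead.
```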

pr1sm commented 5 years ago

#192 has been addressed by #348, so this is no longer on hold.

pr1sm commented 5 years ago

On hold until #361 is addressed