Closed alhardwarehyde closed 1 year ago
The sinerider-scoring service should, through vertical and/or horizontal scaling (or via some scalable third-party solution), support 20 RPM.
we should look into the following:
NOTE - this is one potential plan for scaling, please refer to @maxwofford 's prototype and evaluate it as a potential path forward.
Could get 20 concurrent requests locally using browserless. On DO I couldn't get past 10, and I think that's due to hardware limits there; ffmpeg seems to overflow RAM on those instances.
So our best bet is browserless, and we could likely get even more throughput that way.
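A minimal sketch of the idea behind browserless-style session pooling: cap how many heavy render/score jobs run concurrently so we never exceed the session limit. All names here are hypothetical illustrations (the stand-in `score_level` is not the real service code, and `MAX_SESSIONS` just mirrors the ~20 concurrent requests observed locally):

```python
import asyncio

MAX_SESSIONS = 20  # hypothetical cap, mirroring ~20 concurrent requests seen locally

async def score_level(sem: asyncio.Semaphore, i: int, stats: dict) -> int:
    # Acquire a "browser session" slot before doing any heavy work.
    async with sem:
        stats["active"] += 1
        stats["peak"] = max(stats["peak"], stats["active"])
        await asyncio.sleep(0.01)  # stand-in for the real render/score work
        stats["active"] -= 1
    return i

async def main() -> int:
    sem = asyncio.Semaphore(MAX_SESSIONS)
    stats = {"active": 0, "peak": 0}
    await asyncio.gather(*(score_level(sem, i, stats) for i in range(100)))
    return stats["peak"]

peak = asyncio.run(main())
print(peak)
```

However many requests arrive, peak concurrency stays at or below the session cap; the rest queue on the semaphore, which is roughly what a pooled browserless deployment does for us with real Chrome sessions.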
Assigned myself this issue alongside Josias. Progress thus far:
This afternoon I will implement a script to deploy to these machines efficiently, since manually updating the code (as we've been doing with the one instance) wastes time and invites errors.
Update - we have now deployed the sinerider-scoring service as an App on DO. Instance size is 1 GB RAM / 1 vCPU (xsmall). Using our load-testing script, we observed the results below with the following settings:
Settings:
```
python3 hailstorm.py https://sinerider-scoring-od9e5.ondigitalocean.app -d -r 5 -n 100 -t 32
```
(5 req/sec, 100 requests total, up to 32 simultaneous parallel requests; rate-limited requests are requeued)
Env: `RATE_LIMIT_MAX_REQUESTS=1`, `RATE_LIMIT_WINDOW_MS=15000`, `TICK_RATE=120`, `DRAW_MODULO=3`
Results: 21.59 RPM
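The "rate-limited requests are requeued" behavior above can be sketched like this. This is a hypothetical illustration of the requeue loop, not the actual hailstorm.py code; `fake_service` stands in for the scoring endpoint and rejects each request's first attempt with a 429:

```python
from collections import deque

def fake_service(attempts: dict, req_id: int) -> int:
    """Stand-in endpoint: rejects each request's first attempt with 429."""
    attempts[req_id] = attempts.get(req_id, 0) + 1
    return 200 if attempts[req_id] >= 2 else 429

def run_load(n: int) -> dict:
    queue = deque(range(n))
    attempts: dict = {}
    completed, requeued = 0, 0
    while queue:
        req_id = queue.popleft()
        status = fake_service(attempts, req_id)
        if status == 429:
            requeued += 1
            queue.append(req_id)  # requeue instead of dropping the request
        else:
            completed += 1
    return {"completed": completed, "requeued": requeued}

result = run_load(10)
```

Because rate-limited requests go back on the queue rather than being dropped, every request eventually completes, so the measured RPM reflects sustained throughput rather than a one-shot burst.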
Results from this test (which exercised accuracy as well as performance) showed a small percentage of failed (incorrect) responses; those problems are tracked in issue #534. The results also do not properly simulate the load expected from the bot services: at the time of this writing, neither the Twitter bot nor the Reddit bot sends parallel requests to the sinerider-scoring service, and that needs to change to leverage the increased throughput of this service. That work is tracked in #565.
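The bot-side change could look roughly like the following: fan submissions out in parallel instead of one at a time. This is a sketch under assumptions; `submit_score` is a hypothetical stand-in for the HTTP call the bots would make, and the URLs are placeholders:

```python
from concurrent.futures import ThreadPoolExecutor

def submit_score(level_url: str) -> dict:
    # Hypothetical stand-in for the bots' HTTP POST to the scoring
    # service; the real implementation would make a network request.
    return {"level": level_url, "status": 200}

def score_all(urls: list[str], max_parallel: int = 8) -> list[dict]:
    # Fan out up to max_parallel in-flight requests so the bots can
    # actually use the service's increased throughput.
    with ThreadPoolExecutor(max_workers=max_parallel) as pool:
        return list(pool.map(submit_score, urls))

results = score_all([f"https://example.test/level/{i}" for i in range(5)])
```

Keeping `max_parallel` below the service's per-window rate limit (or pairing it with the requeue-on-429 behavior the load tester already uses) avoids simply trading serial slowness for a wall of rate-limit rejections.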