Is your feature request related to a problem? Please describe.
Yes. The current implementation of our numbers collector sends every stat update to our service endpoint immediately as it occurs. This puts unnecessary load on the server, especially as the user base grows and multiple users track multiple tweets simultaneously. The high-frequency transmission could lead to performance issues or even downtime.
Describe the solution you'd like
I propose modifying the collector script to accumulate updates and send them in batches: instead of sending each update individually, it would buffer updates until either a predetermined count is reached or a predetermined time window elapses, and then send them all in a single request (a rough sketch follows below). Batching would cut the number of requests made to the server, reducing load and improving the efficiency and reliability of the system.
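To make the idea concrete, here is a minimal sketch of what the client-side batching could look like. Everything here is an assumption for illustration: the `StatUpdate` shape, the `sendBatch` helper, the endpoint path, and the size/interval thresholds are placeholders, not the project's actual API.

```typescript
// Hypothetical batching sketch; names, endpoint, and thresholds are assumptions.
interface StatUpdate {
  tweetId: string;
  metric: string;
  value: number;
  observedAt: number; // Unix epoch ms
}

const MAX_BATCH_SIZE = 50;        // flush once this many updates accumulate
const FLUSH_INTERVAL_MS = 10_000; // ...or after this much time has passed

let buffer: StatUpdate[] = [];
let flushTimer: ReturnType<typeof setTimeout> | null = null;

async function sendBatch(updates: StatUpdate[]): Promise<void> {
  // One request carrying many updates instead of one request per update.
  await fetch("https://example.invalid/api/stats/batch", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ updates }),
  });
}

async function flush(): Promise<void> {
  if (flushTimer !== null) {
    clearTimeout(flushTimer);
    flushTimer = null;
  }
  if (buffer.length === 0) return;
  const batch = buffer;
  buffer = [];
  try {
    await sendBatch(batch);
  } catch {
    // On failure, put the updates back so the next flush retries them.
    buffer = batch.concat(buffer);
  }
}

export function recordUpdate(update: StatUpdate): void {
  buffer.push(update);
  if (buffer.length >= MAX_BATCH_SIZE) {
    void flush();
  } else if (flushTimer === null) {
    flushTimer = setTimeout(() => void flush(), FLUSH_INTERVAL_MS);
  }
}
```

The size threshold bounds request payloads under heavy activity, while the timer guarantees that a trickle of updates still reaches the server promptly; the exact numbers would need tuning against real traffic.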
Describe alternatives you've considered
One alternative would be to rate-limit requests on the client side, but that doesn't reduce the total volume of data transmitted; it only spreads it out over time, which could still cause server performance issues in the long run. Another alternative would be to upgrade our server infrastructure to handle higher loads, but that would be costly and still wouldn't address the root cause.
Additional context
Implementing batch processing of updates would be a more scalable and efficient solution as our user base grows. It's a proactive measure to ensure the stability and reliability of our service as the volume of tracked tweets increases. This change would require modifications to both the collector script and the server endpoint to handle batch processing of data.
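For the server side, the change amounts to accepting an array of updates in one request. The sketch below assumes an Express-style Node server purely for illustration; the route, payload shape, and `persistStats` helper are hypothetical, not the service's real endpoint or storage layer.

```typescript
// Hypothetical batch endpoint sketch; framework, route, and helpers are assumptions.
import express from "express";

interface StatUpdate {
  tweetId: string;
  metric: string;
  value: number;
  observedAt: number;
}

// Placeholder for whatever storage the service actually uses,
// ideally a single bulk write per batch rather than one write per update.
async function persistStats(updates: StatUpdate[]): Promise<void> {
  // ...
}

const app = express();
app.use(express.json({ limit: "1mb" }));

app.post("/api/stats/batch", async (req, res) => {
  const updates: StatUpdate[] = req.body?.updates ?? [];
  if (!Array.isArray(updates) || updates.length === 0) {
    res.status(400).json({ error: "expected a non-empty 'updates' array" });
    return;
  }
  await persistStats(updates);
  res.status(202).json({ accepted: updates.length });
});
```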