Imagine an app that processes the Twitter firehose and sends the tweets to another API that requires throttling to meet the terms of service. If the incoming tweets arrive faster than they can be processed, the queue size will grow over time until memory is exhausted or the queue limit is hit (if a queue limit is implemented).
For some applications, lossy throughput is OK and it's desirable to drop entries that can't be processed in time, rather than have the application fall further and further behind with stale data.
For example, imagine trend analysis or sentiment analysis of a high volume of tweets. It may not be important to include every single tweet in the analysis, but it may be important to have low latency and not an eventual memory crash.
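One simple way to get this kind of lossy behavior is a bounded buffer that discards the oldest entries once it is full, so memory stays fixed and the consumer always sees recent data. Here is a minimal Python sketch using `collections.deque` with `maxlen` (the buffer size and tweet payloads are illustrative, not taken from any particular app):

```python
from collections import deque

# A bounded buffer: once full, appending silently evicts the
# oldest entry, trading completeness for bounded memory and
# fresher data. BUFFER_SIZE is an illustrative value.
BUFFER_SIZE = 3
buffer = deque(maxlen=BUFFER_SIZE)

# Producer: entries arrive faster than the consumer drains them.
for i in range(10):
    buffer.append(f"tweet-{i}")

# Consumer: only the most recent BUFFER_SIZE entries remain;
# tweet-0 through tweet-6 were dropped on the floor.
print(list(buffer))  # ['tweet-7', 'tweet-8', 'tweet-9']
```

The stale entries are dropped at enqueue time rather than processed late, which is exactly the trade-off described above: lower throughput, but bounded memory and low latency for what does get through.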