Please make sure to review and check all of these items:
[x] Do tests and lints pass with this change?
[ ] Do the CI tests pass with this change (enable it first in your forked repo and wait for the GitHub Actions build to finish)?
[x] Is the new or changed code fully tested?
[ ] Is a documentation update included (if this change modifies existing APIs, or introduces new ones)?
[ ] Is there an example added to the examples folder (if applicable)?
[x] Was the change added to CHANGES file?
NOTE: these things are not required to open a PR and can be done
afterwards / while the PR is open.
Description of change
In this PR, I experimented with using MULTI and EXEC commands to batch multiple Redis commands into a single request. My goal was to reduce the number of network round-trips and improve overall performance by minimizing network latency.
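To make the approach concrete, here is a minimal sketch of the batching pattern, written against redis-py's public pipeline API purely for illustration; the connection details and keys are placeholders, and the PR's actual change is in how the client sends commands, not at this call site:

```python
# Illustrative sketch only, using redis-py's pipeline API (which wraps the
# queued commands in MULTI/EXEC when transaction=True). Host/port/keys are
# placeholders, not values from this PR.
import redis

r = redis.Redis(host="localhost", port=6379)

# Without batching: every command is its own request/response round-trip.
r.set("key:1", "a")
r.set("key:2", "b")
value = r.get("key:1")

# With batching: commands are queued locally, sent as one MULTI ... EXEC
# block, and all replies come back in a single response.
with r.pipeline(transaction=True) as pipe:
    pipe.set("key:1", "a")
    pipe.set("key:2", "b")
    pipe.get("key:1")
    results = pipe.execute()  # e.g. [True, True, b"a"]
```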
Observations
After running the integration tests, I noticed that while the number of network requests decreased, the total execution time actually increased. Here’s what I found:
Increased BufferedReader Time
When we send multiple commands in one MULTI/EXEC block, the server responds with all the results in one go. Reading this large combined response takes longer, which increased the time spent in the BufferedReader.
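As a rough way to reproduce the effect locally, the two paths can be timed end to end. This is a hypothetical micro-benchmark (assuming a local Redis server and a redis-py style client), not the integration tests mentioned above, and it measures whole round trips rather than the BufferedReader in isolation:

```python
# Hypothetical micro-benchmark: N individual round trips vs. one
# MULTI/EXEC batch. Assumes a local Redis instance and redis-py.
import time
import redis

r = redis.Redis()
N = 1000

start = time.perf_counter()
for i in range(N):
    r.set(f"k:{i}", i)
individual = time.perf_counter() - start

start = time.perf_counter()
with r.pipeline(transaction=True) as pipe:
    for i in range(N):
        pipe.set(f"k:{i}", i)
    pipe.execute()
batched = time.perf_counter() - start

print(f"individual: {individual:.4f}s, batched: {batched:.4f}s")
```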
Command Packing Overhead
Packing multiple commands into a single MULTI request requires additional processing to format the data correctly. This added some overhead to the command preparation phase.
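For context, a hypothetical packer sketch shows where that overhead comes from: wrapping N commands in a transaction means serializing N + 2 commands (MULTI and EXEC included) into one RESP buffer. The real client's packing code differs; this only illustrates the extra framing:

```python
# Hypothetical RESP2 packer, for illustration only.
def pack_command(*args: str) -> bytes:
    parts = [f"*{len(args)}\r\n".encode()]
    for arg in args:
        data = arg.encode()
        parts.append(b"$" + str(len(data)).encode() + b"\r\n" + data + b"\r\n")
    return b"".join(parts)

def pack_transaction(commands) -> bytes:
    # MULTI, then every queued command, then EXEC, in a single buffer.
    buf = pack_command("MULTI")
    for cmd in commands:
        buf += pack_command(*cmd)
    buf += pack_command("EXEC")
    return buf

payload = pack_transaction([("SET", "k1", "v1"), ("GET", "k1")])
```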
Complex Response Parsing
Parsing the combined response from EXEC also turned out to be more complex and time-consuming. Each individual command’s result had to be handled separately from the large, single response, which added to the total processing time.
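To show what the parser is dealing with, here is the shape of a transaction's wire response in RESP2, hand-written and simplified rather than captured from the test run: each queued command is first acknowledged with +QUEUED, and EXEC then returns a single array holding all the individual results, which must be re-associated with the commands that produced them:

```python
# Simplified, hand-written RESP2 response for a 2-command transaction;
# the comments mark which reply belongs to which command.
raw = (
    b"+OK\r\n"       # reply to MULTI
    b"+QUEUED\r\n"   # ack for SET k1 v1
    b"+QUEUED\r\n"   # ack for GET k1
    b"*2\r\n"        # EXEC: array with one element per queued command
    b"+OK\r\n"       #   result of SET k1 v1
    b"$2\r\nv1\r\n"  #   result of GET k1
)
```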
Test Results
Here are the integration test results comparing the Original Logic and the Modified Logic:
Conclusion
While the idea was to reduce network latency by batching commands, the extra time taken to read and parse the larger response offset these gains. It seems that for our case, the increased local processing outweighed the benefits of fewer network requests.
I’d love to get your feedback on this. Do you think there are other optimizations we should consider, or is there something I might have missed? Any insights would be greatly appreciated! cc @chayim 🙇🏻
Thanks!