microsoft / AMBROSIA

Robust Distributed Programming Made Easy and Efficient
https://microsoft.github.io/AMBROSIA/

Hiding the logging latency by parallelizing logging and execution. Overall latency is cut by almost 40% #114

Open zllai opened 4 years ago

zllai commented 4 years ago

Hi Ambrosia team,

I have recently been studying the AMBROSIA source code, looking for a chance to contribute to this project. Having played around with it and become familiar with the source code, I think I may be able to contribute a little now.

In the current implementation of AMBROSIA, when an immortal receives a request, it logs the request before the application code gets a chance to process it. This makes sense for resiliency, but it also hurts overall response latency. As a proof of concept, I built an image-resize service in which the client immortal sends an image to the server, and the server resizes it to the desired size and returns it to the client. In my experiment, the server's logging latency is about 40 ms and resizing takes around 30 ms, which add up to an overall latency of about 80 ms.
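Put simply: when the two stages are serialized, the critical path is roughly 40 ms + 30 ms = 70 ms (the remainder of the observed 80 ms presumably being fixed networking and serialization overhead), whereas overlapping them would shrink the critical path to roughly max(40 ms, 30 ms) = 40 ms plus the same overhead.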

I think it might be possible to move logging off the critical path so that the two latencies do not add up, while still ensuring resiliency and guaranteeing that no side effect is released before the request is durable in resilient storage. Here is my proposal:

After an immortal receives a request and writes it to a buffer page (at this point, the total order of messages is established), the request is sent directly to the application for processing. In the meantime, the immortal coordinator (IC) logs it to resilient storage. The application may send out several messages after processing this request; those messages are sent back to the IC and stored in the IC's output queue. However, they are held in the output queue until the request is confirmed to be logged in resilient storage.

In this way, logging and processing run in parallel, so the logging latency is hidden. Since the output is held until the input is logged, no side effect escapes before the request is durable.
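To make the proposed ordering concrete, here is a minimal sketch (not the actual IC code; `LogAsync`, `ProcessAsync`, and `SendOutputs` are hypothetical stand-ins for the durable log write, the application handler, and the release of the IC's output queue). The point is simply that the log write and the application processing start together, while outputs stay buffered until the log write has committed:

```csharp
using System;
using System.Collections.Generic;
using System.Threading.Tasks;

class ParallelLoggingSketch
{
    // Simulated durable log write (~40 ms in the experiment above).
    static Task LogAsync(byte[] request) => Task.Delay(40);

    // Simulated application processing (~30 ms image resize).
    static async Task<List<string>> ProcessAsync(byte[] request)
    {
        await Task.Delay(30);
        return new List<string> { "resized-image" };
    }

    // Outputs are released only after logging has committed.
    static void SendOutputs(List<string> outputs) =>
        Console.WriteLine($"released {outputs.Count} output message(s)");

    static async Task HandleRequestAsync(byte[] request)
    {
        // Start logging and processing concurrently instead of sequentially.
        Task logTask = LogAsync(request);
        Task<List<string>> processTask = ProcessAsync(request);

        List<string> outputs = await processTask;

        // Hold outputs until the input is durable, so no side effect can
        // escape for a request that might be lost on failure.
        await logTask;
        SendOutputs(outputs);
    }

    static async Task Main()
    {
        var sw = System.Diagnostics.Stopwatch.StartNew();
        await HandleRequestAsync(new byte[0]);
        // End-to-end time is ~max(40, 30) ms instead of 40 + 30 ms.
        Console.WriteLine($"end-to-end: {sw.ElapsedMilliseconds} ms");
    }
}
```

Since the proposal keeps the existing output-queue behavior and only defers its release until the log-commit confirmation arrives, the replay semantics on recovery should be unchanged.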

I implemented this idea and ran some experiments with the image-resize application.

Here is the result:

| Percentile | 0.5 | 0.6 | 0.7 | 0.8 | 0.9 | 0.98 | Mean |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Latency without parallel logging (ms) | 77 | 80 | 82 | 85 | 93 | 107 | 79.86 |
| Latency with parallel logging (ms) | 48 | 49 | 50 | 52 | 58 | 73 | 51.51 |

The first six columns show the latency at various percentiles, and the last column is the average latency.

These preliminary results show roughly a 36% reduction in average latency, which I think is promising.

May I ask for your comments on this idea? Of course, I will continue to refine my code.

Best, Ziliang