tensorflow / serving

A flexible, high-performance serving system for machine learning models
https://www.tensorflow.org/serving
Apache License 2.0

Evaluate using Profile-Guided Optimization (PGO) and LLVM BOLT #2192

Open zamazan4ik opened 8 months ago

zamazan4ik commented 8 months ago

Feature Request

Describe the problem the feature is intended to solve

According to my tests, Profile-Guided Optimization (PGO) helps to achieve measurable performance improvements in many projects, including network-based ones like Envoy and HAProxy. The results are available here. So I think optimizing TensorFlow Serving with PGO would be a good idea, at least to try: with PGO we may be able to improve TensorFlow Serving's performance and reduce its CPU overhead.

Describe the solution

Testing Post-Link Optimization techniques (like LLVM BOLT) would be interesting too (CPython, Clang, and Rustc already use BOLT as an addition to PGO), but I recommend starting with the usual PGO. A rough sketch of a typical BOLT flow is below.
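For reference, this is what that flow usually looks like, assuming a Linux host with perf and llvm-bolt available; tensorflow_model_server here is just a stand-in for the binary under test, and the serving flags and file names are illustrative only:

```sh
# Sample branches with Linux perf LBR support while the server handles
# a representative workload (stop with Ctrl+C when done).
perf record -e cycles:u -j any,u -o perf.data -- \
    ./tensorflow_model_server --port=8500 --model_base_path=/models/demo

# Convert the perf samples into BOLT's profile format.
perf2bolt -p perf.data -o perf.fdata ./tensorflow_model_server

# Produce a post-link-optimized binary; for best results the input
# binary should be linked with --emit-relocs.
llvm-bolt ./tensorflow_model_server -o tensorflow_model_server.bolt \
    -data=perf.fdata -reorder-blocks=ext-tsp -reorder-functions=hfsort \
    -split-functions -split-all-cold
```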

Describe alternatives you've considered

There is no viable alternative here.

Additional context

Here you can look at how PGO is already integrated into multiple projects:

singhniraj08 commented 8 months ago

@zamazan4ik,

We have documented a Performance Guide for TensorFlow Serving to help users get optimal model server performance. Can you please explain in detail what needs to be done from our end to implement PGO with TensorFlow Serving? Based on that, I can take this feature implementation to the team. Thank you!

zamazan4ik commented 8 months ago

> Can you please explain in detail what needs to be done from our end to implement PGO with TensorFlow Serving? Based on that, I can take this feature implementation to the team.

Sure! First, you need to integrate the PGO-specific compiler flags into your build pipeline (the flags for Clang are described here, and the flags for GCC here; if you want to support other compilers, please consult their documentation). I recommend starting with Instrumentation PGO, since it is generally the easiest to implement. The core flag pair is sketched right below.
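A minimal sketch of those flags on both compilers (the file and binary names are placeholders):

```sh
# Clang: build instrumented, run a workload, then rebuild with the profile.
clang++ -O2 -fprofile-generate main.cpp -o app              # instrumented build
clang++ -O2 -fprofile-use=merged.profdata main.cpp -o app   # profile-guided build

# GCC uses the same flag names; its raw .gcda profiles are written next to
# the object files and picked up automatically by -fprofile-use.
g++ -O2 -fprofile-generate main.cpp -o app
g++ -O2 -fprofile-use main.cpp -o app
```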

Below I have collected some examples of how PGO is integrated into the build scripts of other projects, so you can take a look at existing implementations:

After that, you need to perform the PGO training and optimization phases on your benchmarks, so you can estimate whether PGO has any positive effect on TF Serving performance (RPS, CPU usage).

This process is simple (for the Clang compiler):
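Roughly the following, shown here with plain clang++ invocations (a sketch only; in reality the flags would go through TF Serving's build system, and the source, binary, and benchmark names are placeholders):

```sh
# 1. Build with instrumentation (the flag must reach both compile and link steps).
clang++ -O2 -fprofile-generate=/tmp/pgo-profiles app.cpp -o app_instrumented

# 2. Run the instrumented binary on a representative workload; each run
#    writes raw .profraw files into /tmp/pgo-profiles.
./app_instrumented --run-benchmark

# 3. Merge the raw profiles into a single .profdata file.
llvm-profdata merge -output=app.profdata /tmp/pgo-profiles/*.profraw

# 4. Rebuild with the merged profile to get the optimized binary.
clang++ -O2 -fprofile-use=app.profdata app.cpp -o app_optimized
```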

Only after that can you think about optimizing TF Serving's prebuilt binaries with some predefined, sample real-life workload. You need to choose the sample workload, integrate profile gathering into your CI/CD pipeline, etc. The links above also give some insights into that approach; a rough Bazel sketch of the two-phase build is below.
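For illustration, assuming a Clang toolchain and the usual tensorflow_model_server Bazel target (the profile paths are placeholders, and the exact wiring, e.g. making the merged profile visible inside Bazel's sandbox, may need extra care in a real pipeline):

```sh
# Phase 1: instrumented build; --copt/--linkopt pass the flags through to Clang.
bazel build -c opt \
  --copt=-fprofile-generate=/tmp/pgo-profiles \
  --linkopt=-fprofile-generate=/tmp/pgo-profiles \
  //tensorflow_serving/model_servers:tensorflow_model_server

# Run the chosen sample workload against the instrumented server, then merge:
llvm-profdata merge -output=/tmp/tfs.profdata /tmp/pgo-profiles/*.profraw

# Phase 2: optimized build using the merged profile.
bazel build -c opt \
  --copt=-fprofile-use=/tmp/tfs.profdata \
  --linkopt=-fprofile-use=/tmp/tfs.profdata \
  //tensorflow_serving/model_servers:tensorflow_model_server
```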

> We have documented a Performance Guide for TensorFlow Serving to help users get optimal model server performance.

Awesome that you have such a guide! If PGO has some positive effect on TF Serving performance, I think you can extend this guide with an additional chapter about rebuilding TF Serving with PGO, or even create a dedicated page about PGO in the TF Serving documentation. Here I have collected some examples of such documentation in various projects (maybe they can help you with shaping your PGO documentation for TF Serving):

Hope this information was helpful!

singhniraj08 commented 8 months ago

@zamazan4ik, thank you for the detailed explanation. We will discuss this implementation internally and update this thread.