Linux Performance Customer Profiler Utility
Overview
Running LPCPU
Postprocessing the data
Viewing the results
The Linux Performance Customer Profiler Utility (LPCPU) integrates system configuration data collection, performance profiling data collection, profiler data post-processing, and graphing into a single package. LPCPU relies on profiling utilities that are standard components of both enterprise and community-based distributions. In addition to these dependencies, LPCPU bundles additional tools for processing the data and generating graphs.
Note: To create a new lpcpu.tar.bz2 tarball, run:
./create-tarball.sh
To run LPCPU, execute the lpcpu.sh script located in the top-level directory of the LPCPU distribution tarball (where this README is located). The lpcpu.sh script takes several arguments; for a complete list, see the script's header comments. The script should ideally be executed with root privileges, since many of the data collection steps it performs require them.
Here is an example invocation which collects data for 60 seconds (the default period is 120 seconds). This example expects that the workload of interest is already running on the system and the desire is to collect data from a 60 second window while the workload executes:
================================================================================
root@host ~ $ ./lpcpu/lpcpu.sh duration=60
Running Linux Performance Customer Profiler Utility version c7947e57eacca5b3ed3481d19d68a486accffff8 2013-11-12 14:04:11 -0600
Importing CLI variable : duration=60
As the output reports at the end, the resulting data file is located in the /tmp directory. An alternative use case for LPCPU is to use it to profile the entire lifecycle of a program. In order to use it in this fashion, invoke the script using the following syntax:
./lpcpu.sh cmd="<command to profile>"
For example:
./lpcpu.sh cmd="dd if=/dev/zero of=/dev/null bs=1M count=1024"
Another useful command line parameter is available to invoke additional profiling tools that are not included in the default list of tools to run. These tools are excluded from the default list because they have special requirements, impose high overhead, or are specific to certain environments.
In order to invoke these tools use the following syntax:
./lpcpu.sh extra_profilers="<profiler>"
For example:
./lpcpu.sh extra_profilers="oprofile"
or
./lpcpu.sh extra_profilers="perf"
or
./lpcpu.sh extra_profilers="kvm"
or
./lpcpu.sh extra_profilers="ftrace"
or
./lpcpu.sh extra_profilers="tcpdump"
Multiple non-default profilers can be specified using a quoted list.
For example:
./lpcpu.sh extra_profilers="perf kvm"
For a complete list of the non-default profilers please see the lpcpu.sh script. NOTE: oprofile and perf are mutually exclusive.
To complete the data collection, a script, postprocess.sh, must be executed from within the directory structure created by the lpcpu.sh script. This step is left to the user so that it can be carried out after testing is complete, or even on a different system if desired.
The postprocess.sh script takes a single required argument: the location of the LPCPU distribution directory, which it needs in order to find the utilities required to complete the post-processing and graphing of the data.
================================================================================
root@host:/tmp$ tar xjf lpcpu_data.host.default.2013-11-12_1537.tar.bz2
root@host:/tmp$ cd lpcpu_data.host.default.2013-11-12_1537/
Some errors may occur when the postprocess.sh script is executed; they are not necessarily fatal. Some environments lack certain tools, or versions of tools, that may be used for parts of the data collection. In that event there will be a small impact on the available data, but overall there should still be plenty of usable information.
The postprocess.sh script also takes a second, optional argument that forces the use of an older charting technique known as chart.pl. The charting capabilities in LPCPU have recently been updated to produce dynamic charts using JavaScript with a new technique called jschart. These charts require a "modern" browser (Firefox/Internet Explorer/Chrome/Safari) and could have functional or performance issues in some environments (older browsers, extremely large datasets, etc.), so the ability to force LPCPU to generate the older style of charts is provided. To do so, invoke postprocess.sh in the following way:
./postprocess.sh <path to LPCPU distribution directory> [chart.pl]
For example:
./postprocess.sh ~/lpcpu chart.pl
When the processing phase is complete, the directory will contain a summary.html file that can be loaded into a browser to assist in navigating the profiled data.
Due to security considerations, some browsers will not allow the jschart code to load the chart data files when run "locally" (i.e., without a web server delivering the files to the browser). In the environments tested, Chrome and Internet Explorer have this restriction while Firefox does not. For browsers that enforce this behavior, the data collected by LPCPU must be served by a web server in order to view the dynamic charts that jschart provides. You can either use your own web server (any will do) or the small Python script (results-web-server.py) included in the results package. This script launches a small web server for local access and is easily stopped with a CTRL-C when you are done.
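The exact contents of results-web-server.py are not reproduced here, but a minimal local results server built from Python's standard library looks roughly like the following sketch. The function name, default port, and directory handling are illustrative assumptions, not LPCPU's actual code:

```python
# Minimal sketch of a local results web server, in the spirit of the
# bundled results-web-server.py. Names and defaults here are assumptions.
import functools
import http.server
import socketserver

def make_results_server(port=8000, directory="."):
    """Build a TCP server that serves `directory` over HTTP on `port`."""
    # partial() binds the directory so the handler serves the results tree
    handler = functools.partial(
        http.server.SimpleHTTPRequestHandler, directory=directory
    )
    # port 0 asks the OS for any free port; the actual port chosen is
    # available afterwards via server_address[1]
    return socketserver.TCPServer(("", port), handler)

# To serve the current results directory until interrupted with CTRL-C:
#   with make_results_server(port=8000, directory=".") as httpd:
#       httpd.serve_forever()
```

Pointing a browser at http://localhost:8000/ would then load summary.html and let jschart fetch the chart data files over HTTP, which satisfies the browser restriction described above.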
The new jschart functionality requires two different third-party libraries. By default, the pages generated by the postprocess.sh script instruct the browser to load those libraries from the Internet, so you will need Internet access when viewing the data. If you need to be able to access the charts without Internet access, you can download the libraries into your LPCPU distribution by executing the following script:
./download-jschart-dependencies.sh
Executing this script will download the libraries and place them into the LPCPU distribution; they will then be used when generating the data, so no Internet access will be required. If you wish to host your data for viewing on an SSL-secured web server (HTTPS), you may need to host the libraries locally in this way, because some browsers will not load external resources within a secured browser session.
Using jschart has several advantages over chart.pl that should make using LPCPU for performance analysis easier. First, unlike chart.pl (which only runs on an x86 Linux box), jschart has no architectural dependencies. Second, jschart defers the generation of the charts to the web browser client, so executing postprocess.sh should be noticeably faster than before. This places some requirements on the client, such as the use of a modern browser, but it also brings additional advantages: the charts are now dynamic, and the user can pan and zoom the charting area or select a specific dataset to focus on. Click the "Help" link on a chart for additional details.