DEPRECATED: StackImpact Python Profiler - Production-Grade Performance Profiler: CPU, memory allocations, blocking calls, exceptions, metrics, and more
https://stackimpact.com
BSD 3-Clause "New" or "Revised" License

StackImpact Python Profiler

Overview

StackImpact is a production-grade performance profiler built for both production and development environments. It gives developers a continuous, historical, code-level view of application performance, which is essential for locating CPU, memory allocation, and I/O hot spots as well as latency bottlenecks. Included runtime metrics and error monitoring complement the profiles for extensive performance analysis. Learn more at stackimpact.com.

[Dashboard screenshot]

Features

Learn more on the features page (with screenshots).

How it works

The StackImpact profiler agent is imported into a program and used as a normal package. When the program runs, various sampling profilers are started and stopped automatically by the agent and/or programmatically using the agent methods. The agent periodically reports recorded profiles and metrics to the StackImpact Dashboard. The agent can also operate in manual mode, which should be used in development only.

Documentation

See full documentation for reference.

Supported environment

Getting started

Create StackImpact account

Sign up for a free trial account at stackimpact.com (GitHub login is also supported).

Installing the agent

Install the Python agent by running

pip install stackimpact

Then import the package in your application

import stackimpact

Configuring the agent

Start the agent in the main thread by specifying the agent key and application name. The agent key can be found in your account's Configuration section.

agent = stackimpact.start(
    agent_key = 'agent key here',
    app_name = 'MyPythonApp')

Add the agent initialization to the worker code, e.g. wsgi.py, if applicable.
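A minimal sketch of such a wsgi.py, assuming a Django project (myproject.settings and the agent key value are placeholders; adjust for your framework):

import os

import stackimpact
from django.core.wsgi import get_wsgi_application

os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'myproject.settings')

# Start the agent in the worker process before the WSGI application is created,
# so profiling covers request handling from the start.
stackimpact.start(
    agent_key = 'agent key here',
    app_name = 'MyPythonApp')

application = get_wsgi_application()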

All initialization options:

Focused profiling

Use agent.profile(name) to instruct the agent when to start and stop profiling. The agent decides whether to activate a profiler and which one. Normally, this method should be used in repeating code, such as request or event handlers. In addition to more precise profiling, timing information is also reported for the profiled spans. Usage example:

span = agent.profile('span1')

# your code here

span.stop()

Alternatively, a with statement can be used:

with agent.profile('span1'):
    # your code here
    pass
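For instance, in a hypothetical event-processing loop (events, handle_event and the span name are placeholders, not part of the StackImpact API), each handler invocation could be wrapped in a profiled span:

def handle_event(event):
    # your per-event application logic here
    pass

# events is a placeholder for your event source (e.g. a queue or request stream).
for event in events:
    with agent.profile('handle_event'):
        handle_event(event)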

Manual profiling

Manual profiling should not be used in production!

By default, the agent starts and stops profiling automatically. Manual profiling allows you to start and stop profilers directly. It is suitable for profiling short-lived programs and should not be used for long-running production applications. Automatic profiling should be disabled with the auto_profiling = False startup option.

# Start CPU profiler.
agent.start_cpu_profiler()
# Stop CPU profiler and report the recorded profile to the Dashboard.
agent.stop_cpu_profiler()

# Start blocking call profiler.
agent.start_block_profiler()
# Stop blocking call profiler and report the recorded profile to the Dashboard.
agent.stop_block_profiler()

# Start heap allocation profiler.
agent.start_allocation_profiler()
# Stop heap allocation profiler and report the recorded profile to the Dashboard.
agent.stop_allocation_profiler()
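Putting this together, a short-lived script could be profiled roughly as follows (MyBatchJob, the agent key value and compute are placeholders for your own application and workload):

import stackimpact

# Disable automatic profiling so only the manually started profilers run.
agent = stackimpact.start(
    agent_key = 'agent key here',
    app_name = 'MyBatchJob',
    auto_profiling = False)

agent.start_cpu_profiler()
compute()  # placeholder for the short-lived workload being measured
agent.stop_cpu_profiler()  # reports the recorded profile to the Dashboard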

Analyzing performance data in the Dashboard

Once your application is restarted, you can start observing continuous CPU, memory, I/O, and other hot spot profiles, execution bottlenecks as well as process metrics in the Dashboard.

Troubleshooting

To enable debug logging, add debug = True to the startup options. If the debug log doesn't give you any hints on how to fix the problem, please report it to our support team in your account's Support section.
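For example:

agent = stackimpact.start(
    agent_key = 'agent key here',
    app_name = 'MyPythonApp',
    debug = True)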

Overhead

The agent overhead is measured to be less than 1% for applications under high load.