facebookresearch / HolisticTraceAnalysis

A library to analyze PyTorch traces.
http://hta.readthedocs.io
MIT License

Optimizing Trace load and parsing functionality #48

Open briancoutinho opened 1 year ago

briancoutinho commented 1 year ago

🚀 Motivation and context

As we analyze more and larger traces at scale, the time spent parsing trace files moves onto the critical path. In this work stream, we plan to identify and fix performance bottlenecks in trace loading and parsing.

Description

Details

To investigate this we will need: 1) a benchmarking setup, 2) test trace data, and 3) a profiling methodology. These are described below.

Benchmark Setup and Test Trace data

We can leverage pyperf for a reliable benchmarking setup.

For trace data, we will use the hta/tests/data/ directory, and optionally include any test traces a user may want to run the benchmarks against.
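
For illustration, a minimal pyperf benchmark could look like the sketch below. The trace directory is a placeholder, and TraceAnalysis is used as a stand-in entry point that loads and parses traces on construction; the actual benchmark should target whichever internal load function we settle on.

```python
# bench_trace_load.py -- minimal pyperf benchmark sketch.
import pyperf

from hta.trace_analysis import TraceAnalysis

# Placeholder: point this at a directory of test traces, e.g. under hta/tests/data/.
TRACE_DIR = "hta/tests/data/"

def load_and_parse():
    # Constructing TraceAnalysis loads and parses the traces in TRACE_DIR.
    TraceAnalysis(trace_dir=TRACE_DIR)

if __name__ == "__main__":
    runner = pyperf.Runner()
    runner.bench_func("trace_load_and_parse", load_and_parse)
```

Running the script with python bench_trace_load.py -o bench.json stores the results, which can later be compared across revisions with pyperf's compare_to command.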

Profiling Methodology

In addition to the benchmark measurements, we can leverage py-spy to analyze the CPU time breakdown across functions. To install py-spy, simply run:

pip install py-spy

And profile the benchmark using:

sudo /opt/miniconda3/envs/trace-analyzer/bin/py-spy record -p <pid of benchmark>
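
py-spy record writes a flame graph (an SVG by default). For example, to capture 60 seconds of samples into a named file using py-spy's documented -o and -d flags (the filename and duration here are just illustrative):

sudo /opt/miniconda3/envs/trace-analyzer/bin/py-spy record -o trace_load.svg -d 60 --pid <pid of benchmark>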


briancoutinho commented 1 year ago

Initial Analysis

Looking at the py-spy results, a large fraction of trace_load() was spent computing the memory footprint of the loaded JSON.

[py-spy flame graph screenshot: time spent computing the memory footprint of the loaded JSON]

This was fixed in #43 (PR #44).

We now look to optimize the load and parsing together. This could be done by merging the JSON load and the parsing into a single pandas step; this is still a WIP.

anupambhatnagar commented 1 year ago

Currently the rank parsing is pretty fast with the use of re.search. The trace file is loaded once and converted into a pandas df. What exactly are we trying to optimize here?

briancoutinho commented 1 year ago

> Currently the rank parsing is pretty fast with the use of re.search. The trace file is loaded once and converted into a pandas df. What exactly are we trying to optimize here?

We are currently loading the trace as a JSON object and then constructing the DataFrame from it; the intermediate step consumes a lot of memory and time (it is a dynamic object with many memory allocations). It may be possible to incrementally parse the JSON and fill the DataFrame; that is how pandas read_json works.
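
As a rough sketch, incremental parsing could look like the following, using ijson as a stand-in streaming parser and assuming a gzipped trace file (ijson is not an HTA dependency; "traceEvents" is the top-level event array in the Chrome trace format that PyTorch traces use):

```python
import gzip

import ijson  # third-party streaming JSON parser, used here for illustration
import pandas as pd

def load_trace_incremental(path: str) -> pd.DataFrame:
    # Stream one event dict at a time out of the "traceEvents" array instead
    # of materializing the whole trace as a Python object graph first. Note
    # that pandas still materializes the records while building the frame, so
    # whether this wins on time/memory should be measured with the benchmark
    # setup above.
    with gzip.open(path, "rb") as f:
        events = ijson.items(f, "traceEvents.item")
        return pd.DataFrame(events)
```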

The time is low for the example traces, but larger traces take 120s or more to load. Also, your optimization sped things up quite a bit; that was low-hanging fruit.