pytorch / executorch

On-device AI across mobile, embedded and edge for PyTorch
https://pytorch.org/executorch/

RFC: PTE Size Inspector Design #7088

Open Olivia-liu opened 7 hours ago

Olivia-liu commented 7 hours ago

🚀 The feature, motivation and pitch

Problem

Currently, users who export models to ExecuTorch have no tool to inspect what contributes to the size of the resulting .pte file. This is a concern because the file must fit within the available memory on the device, which is often very limited.

Goal

Users can understand what contributes to the overall size of a .pte file using a command-line tool or a Python script/notebook.

RFC

Design

Overview

Provide three entry points for inspecting .pte file size:

1. A Python API, size_distribution(), called on an ExecutorchProgramManager during the export process.
2. A Python API, size_distribution_from_pte(), called on an existing .pte file.
3. A command-line tool, pteinspect, for users who only have a .pte file.

In order to get detailed sizing information inside the delegate blobs, allow delegates to implement hooks that parse the delegate blob. Implementing these hooks is optional for delegate authors. See the comments below for more discussion on this.

Details

Class SizeDistribution and size_distribution, size_distribution_from_pte util functions

SizeDistribution is a recursive data class designed to hold size distribution information. It also comes with SizeDistribution.to_dataframe() to get size distribution details in the format of a pandas dataframe.

size_distribution and size_distribution_from_pte are convenience util functions that build a SizeDistribution instance from an ExecutorchProgramManager instance or a .pte file, respectively.

User Interface

# User code: in export.py or in a notebook, call size_distribution() after to_executorch() has been called in the export process
from executorch.devtools.ptetools import size_distribution

...
exec_prog = edge_program.to_executorch()
size_dists = size_distribution(exec_prog)
print(size_dists)
# If you want to work with a dataframe
df = size_dists.to_dataframe()
# User code: in a notebook or a Python script, call size_distribution_from_pte() when you already have a .pte file
from executorch.devtools.ptetools import size_distribution_from_pte

file_path = "/path/to/your/.pte"
size_dists = size_distribution_from_pte(file_path)
# The rest is the same as the example usage above

Example output of print(size_dists)

Program Flatbuffer: 51.23 KB
Constant Tensors: 112 B
    conv_0_weight: 64 B
    conv_1_weight: 48 B
Delegate Blobs: 13.90 MB
    XnnpackBackend_0: 8.85 MB
    XnnpackBackend_1: 5.05 MB

The printed output uses human-readable scale units (B, KB, MB).
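A hypothetical helper (not part of the proposed API) showing one way to render byte counts in the decimal units used in the example output above:

```python
def human_readable(num_bytes: int) -> str:
    """E.g. 112 -> '112 B', 51232 -> '51.23 KB', 13900016 -> '13.90 MB'."""
    size = float(num_bytes)
    for unit in ("B", "KB", "MB", "GB"):
        if size < 1000 or unit == "GB":
            # Raw byte counts stay integral; scaled values get two decimals.
            return f"{num_bytes} B" if unit == "B" else f"{size:.2f} {unit}"
        size /= 1000
```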

Example df when printed out

| Name | Size (bytes) | Level |
|------|-------------:|------:|
| Total Size | 13951248 | 0 |
| Program Flatbuffer | 51232 | 1 |
| Constant Tensors | 112 | 1 |
| conv_0_weight | 64 | 2 |
| conv_1_weight | 48 | 2 |
| Delegate Blobs | 13900016 | 1 |
| XnnpackBackend_0 | 8845856 | 2 |
| XnnpackBackend_1 | 5054160 | 2 |

Implementation

# New file: executorch/devtools/ptetools.py

from typing import List, Optional

class SizeDistribution:
    """Sizes represented in a recursive structure"""

    def __init__(
        self,
        name: str,
        size: int,
        components: Optional[List["SizeDistribution"]] = None,
    ):
        self.name = name
        self.size = size
        self.components = components or []

    def __str__(self, level=0):
        """String representation for displaying the hierarchy with indentation."""
        indent = "  " * level
        result = f"{indent}{self.name}: {self.size} bytes\n"
        for component in self.components:
            result += component.__str__(level + 1)
        return result

    def to_dataframe(self):
        """Format the size distribution as a pandas dataframe"""

def size_distribution(exec_prog: ExecutorchProgramManager) -> SizeDistribution:
    """
    Args:
        exec_prog: ExecuTorch program
    Returns:
        Hierarchical size distribution of the components of exec_prog
    """

def size_distribution_from_pte(file_path: str) -> SizeDistribution:
    """
    Args:
        file_path: File path of a .pte file
    Returns:
        Hierarchical size distribution of the components of the .pte file
    """

Command Line tool, pteinspect

This is useful for users who don't necessarily export the model themselves but have a .pte file and want to understand its size. Users can also call it from a bash script to do .pte file analysis.

Example user flow:

$ pteinspect [options] pte_file

where,

| Option | Description |
|--------|-------------|
| `-l` | List the top-level components of the .pte file and the size of each of them |
| `-e component_name` | List all the components inside component_name and the size of each of them |
| `-h` | Display information in the headers |
| `--help` | Print a help message listing all available options |
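A minimal sketch of how the pteinspect surface could be wired up with argparse (behavior is stubbed; option names follow the table above). Note one wrinkle: argparse reserves -h for help by default, so the sketch disables auto-help to let -h carry the header option and re-adds --help manually.

```python
import argparse

def build_parser() -> argparse.ArgumentParser:
    # add_help=False frees up -h for the header option; --help is re-added below.
    parser = argparse.ArgumentParser(
        prog="pteinspect",
        description="Inspect the size of a .pte file.",
        add_help=False,
    )
    parser.add_argument("pte_file", help="Path to the .pte file")
    parser.add_argument("-l", action="store_true",
                        help="List the top-level components and their sizes")
    parser.add_argument("-e", metavar="component_name",
                        help="List the components inside component_name and their sizes")
    parser.add_argument("-h", action="store_true", dest="headers",
                        help="Display information in the headers")
    parser.add_argument("--help", action="help",
                        help="Print this help message and exit")
    return parser

args = build_parser().parse_args(["-l", "model.pte"])
```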

Alternatives Considered

Considered having an interactive command-line tool, but decided to move away from it because a command line with arguments is more scriptable, and the style better matches the ELF tools, which are widely used in the industry.

Also considered combining the different pte tools (file inspection, file modification, etc.) into one tool. Decided to have separate tools for separate features to match the ELF tools style, and to give users confidence that they won't accidentally modify a file when they only want to inspect it.

Release Plan

Milestone 1 (1 week): Define Python class and write Python APIs

Milestone 2 (1 week): Write the commandline tool

Olivia-liu commented 7 hours ago

Delegate Blob Hook Discussion

The .pte file can have multiple segments, among which there can be multiple delegate blob segments, each representing a graph description (for the delegated subgraph) and constant tensor data. Users will be interested in knowing what is included in the delegate blobs, so we should add hooks that delegate authors can optionally implement to return such information.

While exporting is in progress: the to_backend pass is where the delegate blobs are created by the partitioners. I propose adding a step to the partitioning process that writes the delegate blob details into a dict, so that exporting produces not only a .pte but also a metadata.json for each delegate blob.

After exporting: if the .pte has already been created and no extra step was taken during to_backend to produce metadata.json for the delegate blobs, users should still have a chance to inspect them. I propose that each partitioner provide a new .pte parsing function with which a metadata.json can be reverse-engineered from a delegate blob that this partitioner created.
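To make the hook idea concrete, here is one possible shape for the optional interface, with a toy backend as the implementer. All names and the blob layout are hypothetical, purely for illustration:

```python
from abc import ABC, abstractmethod
from typing import Dict

class DelegateBlobInspector(ABC):
    """Hypothetical optional hook: a delegate author implements this so size
    tools can break the backend's opaque blob into named components."""

    @abstractmethod
    def inspect(self, blob: bytes) -> Dict[str, int]:
        """Return a mapping from component name to its size in bytes."""

class ToyBackendInspector(DelegateBlobInspector):
    # Toy blob layout (invented): 4-byte little-endian graph size, then the
    # graph description, then constant tensor data filling the rest.
    def inspect(self, blob: bytes) -> Dict[str, int]:
        graph_size = int.from_bytes(blob[:4], "little")
        return {
            "header": 4,
            "graph_description": graph_size,
            "constant_data": len(blob) - 4 - graph_size,
        }

blob = (3).to_bytes(4, "little") + b"gph" + b"\x00" * 10
meta = ToyBackendInspector().inspect(blob)
# meta could then be serialized to the per-blob metadata.json described above
```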