mmingo848 opened 7 months ago
This tutorial might be helpful https://pytorch.org/executorch/main/sdk-profiling.html
cc: @tarun292
Thank you for that link. As I was going through it, there seem to be contradictions between the ExecuTorch environment setup (a prerequisite) and the SDK profiling tutorial. Specifically:
The executorch environment setup states to clone executorch with version 0.1.0:
```
git clone --branch v0.1.0 https://github.com/pytorch/executorch.git
```
However, when doing this, the executorch/sdk folder has numerous differences from the version shown in the SDK profiling tutorial. On the "Generate ETDump" step, the imports are either incorrectly mapped in the tutorial or not available in the v0.1.0 clone, which blocks any further progress. The most important of these are `MethodTestCase` and `MethodTestSuite`, which are not in v0.1.0's bundled_program/config.py. I did not change the version, to prevent other functionality from breaking: based on what I have read in other issues and my own experience, the specific version of executorch can make or break numerous features.
Oh, would you prefer to use the main branch or the stable branch (released last October)?
I'd prefer the stable branch, as long as it has all the features required to run SDK profiling.
The doc for stable is https://pytorch.org/executorch/stable/sdk-profiling.html; for main it is https://pytorch.org/executorch/main/sdk-profiling.html.
Thanks for that link. At first glance, it looks like an ETDump can only be created for an example in the examples.sdk.scripts directory. In the stable version, is there a way to do this with an exported .pte file of my choice? I also ran the ETRecord snippet, and it failed: the documentation uses MobileNetV2, and other models (VGG in my case) do not have the `get_eager_model` attribute. Is MobileNetV2 the only model supported to showcase how this will work, or can the code be adapted for other models? Also, does this profiling method work for trained models of any architecture, or only on new instances of the model types located in executorch/examples/models, such as MobileNetV2?
```
AttributeError                            Traceback (most recent call last)
/dir/in/container/sandbox/sandbox.ipynb Cell 46 line 1
      9 from executorch.sdk import generate_etrecord
     10 from torch.export import export, ExportedProgram
     12 aten_model: ExportedProgram = export(
---> 13     model.get_eager_model().eval(),
     14     model.get_example_inputs(),
     15 )
     17 edge_program_manager: EdgeProgramManager = to_edge(
     18     aten_model, compile_config=EdgeCompileConfig(_check_ir_validity=True)
     19 )
     21 edge_program_manager_copy = copy.deepcopy(edge_program_manager)

File /opt/conda/envs/et/lib/python3.10/site-packages/torch/nn/modules/module.py:1696, in Module.__getattr__(self, name)
   1694 if name in modules:
   1695     return modules[name]
-> 1696 raise AttributeError(f"'{type(self).__name__}' object has no attribute '{name}'")

AttributeError: 'VGG' object has no attribute 'get_eager_model'
```
I've modified the code to pass in my model directly instead of instantiating an example model object, and was able to move on to the next step, "We provide 2 ways of executing the Bundled Model to generate the ETDump:". I tried this and got the following error:
```
Traceback (most recent call last):
  File "/opt/conda/envs/et/lib/python3.10/runpy.py", line 196, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/opt/conda/envs/et/lib/python3.10/runpy.py", line 86, in _run_code
    exec(code, run_globals)
  File "/dir/in/container/executorch/examples/sdk/scripts/export_bundled_program.py", line 20, in <module>
    from executorch.sdk import BundledProgram
ImportError: cannot import name 'BundledProgram' from 'executorch.sdk' (/opt/conda/envs/et/lib/python3.10/site-packages/executorch/sdk/__init__.py)
```
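Import errors like this usually mean the installed wheel does not match the docs being read. A small, generic diagnostic (plain Python, not an ExecuTorch API; the `diagnose` helper is something I made up for illustration) can confirm which copy of a package Python is actually importing and whether it exposes the expected symbol:

```python
import importlib
import importlib.util

def diagnose(module_name, symbol):
    """Report whether `symbol` can be imported from `module_name`.

    Useful for telling whether the installed wheel matches the version
    of the documentation being followed."""
    try:
        spec = importlib.util.find_spec(module_name)
    except ModuleNotFoundError:
        spec = None
    if spec is None:
        return f"{module_name} is not installed"
    mod = importlib.import_module(module_name)
    if hasattr(mod, symbol):
        return f"found {module_name}.{symbol} (module at {spec.origin})"
    return f"{module_name} is installed (at {spec.origin}) but has no attribute {symbol!r}"

print(diagnose("executorch.sdk", "BundledProgram"))
```

If the reported path points into site-packages while the docs describe a newer branch, the fix is to reinstall from the matching tag rather than edit the tutorial code.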
@mmingo848 we just released a new stable version (v0.2.0). Can you try with that?
Cc @Gasoonjia, @Olivia-liu - can you guys track this issue?
@mmingo848 Hey, thanks a lot for your question, your patience, and for trying this out! As Mergen mentioned, let's try the new stable version v0.2.0; sorry for the confusion. Check out the v0.2.0 branch (`git clone --branch v0.2.0 https://github.com/pytorch/executorch.git`), and you should find that the code matches the "stable" documentation, e.g. https://pytorch.org/executorch/stable/tutorials/sdk-integration-tutorial.html.
Both ETRecord and ETDump generation should work on your model, not just the ones in the examples folder :)
You do not need the `get_eager_model` attribute. You just need a deepcopy of what's returned by `to_edge()`, and also of what's returned by `to_executorch()`. I think you might just be looking at some old instructions/code, so hopefully with v0.2.0 everything will make more sense.
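The likely reason for the deepcopy is that `generate_etrecord` needs the program as it looked before lowering, and later steps can transform the manager's contents in place. A minimal pure-Python sketch of the pattern (`FakeProgramManager` is a made-up stand-in, not the real `EdgeProgramManager`):

```python
import copy

class FakeProgramManager:
    """Hypothetical stand-in for executorch's EdgeProgramManager,
    used only to illustrate why the snapshot is taken."""
    def __init__(self):
        self.dialect = "edge"

    def to_executorch(self):
        # Modeled after the lowering step, which can transform the
        # manager's contents; the pre-lowering snapshot survives it.
        self.dialect = "executorch"
        return self

edge = FakeProgramManager()        # stands in for to_edge(...)'s result
edge_copy = copy.deepcopy(edge)    # snapshot taken BEFORE lowering
et_prog = edge.to_executorch()     # lowering mutates `edge` in place

assert edge_copy.dialect == "edge"       # the copy still holds the edge dialect
assert et_prog.dialect == "executorch"
```

This mirrors the `edge_program_manager_copy = copy.deepcopy(edge_program_manager)` line in the user's traceback above: the copy, not the mutated original, is what ETRecord generation wants.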
I just tried a fresh checkout of v0.2.0 on my Mac M1 and did the setup according to https://pytorch.org/executorch/main/getting-started-setup.html#clone-and-install-executorch-requirements. I was then able to run `from executorch.sdk import BundledProgram`. So again, maybe your issue was related to not using v0.2.0.
Please don't hesitate to ask more questions if you still run into issues. Looking forward to you getting it working!
I am looking for a way to benchmark the .pte file's performance, the final state of the ExecutorchProgramManager object, or similar, after following this tutorial. I used the PyTorch profiler on the model before putting it through ExecuTorch, but I can't find a way to use that profiler on any of the above. I'd like to use the same or a similar tool to compare the original model against the quantized ExecuTorch model and see the performance differences. Thanks!
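Pending a profiler-based answer, a rough wall-clock comparison is possible with a generic timing harness like the sketch below (plain Python, not an ExecuTorch API; `run_eager` and `run_executorch` in the commented usage are hypothetical wrappers one would write around the original model and the loaded .pte program):

```python
import statistics
import time

def benchmark(fn, *args, warmup=3, iters=20):
    """Return the median wall-clock seconds for one call of fn(*args)."""
    for _ in range(warmup):      # warm caches before timing
        fn(*args)
    samples = []
    for _ in range(iters):
        t0 = time.perf_counter()
        fn(*args)
        samples.append(time.perf_counter() - t0)
    return statistics.median(samples)

# Hypothetical usage, comparing the two inference paths:
#   eager_s = benchmark(run_eager, example_input)
#   et_s    = benchmark(run_executorch, example_input)
#   print(f"eager: {eager_s:.6f}s  executorch: {et_s:.6f}s")
```

Medians over several iterations are used rather than a single call, since one-shot timings are dominated by warm-up noise.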