pytorch / executorch

On-device AI across mobile, embedded and edge for PyTorch
https://pytorch.org/executorch/

"No such file or directory: program.fbs" when export_add.py file is a sibling of `executorch` repo directory #5766

Open andrei-cioaca opened 1 week ago

andrei-cioaca commented 1 week ago

🐛 Describe the bug

Generating the .pte file fails at edge_program.to_executorch() for the default example from https://pytorch.org/executorch/main/getting-started-setup.html

python export_add.py 
Traceback (most recent call last):
  File "/[PATH]/export_add.py", line 20, in <module>
    executorch_program = edge_program.to_executorch()
                         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/[PATH]/executorch/exir/program/_program.py", line 1354, in to_executorch
    return ExecutorchProgramManager(
           ^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/[PATH]/executorch/exir/program/_program.py", line 1408, in __init__
    self._pte_data: Cord = _serialize_pte_binary(
                           ^^^^^^^^^^^^^^^^^^^^^^
  File "/[PATH]/executorch/exir/_serialize/_program.py", line 445, in serialize_pte_binary
    result: _FlatbufferResult = _program_json_to_flatbuffer(
                                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/[PATH]/executorch/exir/_serialize/_flatbuffer.py", line 277, in _program_json_to_flatbuffer
    schema_info = _prepare_schema(
                  ^^^^^^^^^^^^^^^^
  File "/[PATH]/executorch/exir/_serialize/_flatbuffer.py", line 148, in _prepare_schema
    schemas = _ResourceFiles([program_schema] + deps)
              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/[PATH]/executorch/exir/_serialize/_flatbuffer.py", line 105, in __init__
    self._files[name] = importlib.resources.read_binary(__package__, name)
                        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/importlib/resources/_legacy.py", line 25, in wrapper
    return func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^
  File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/importlib/resources/_legacy.py", line 51, in read_binary
    return (_common.files(package) / normalize_path(resource)).read_bytes()
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/pathlib.py", line 1019, in read_bytes
    with self.open(mode='rb') as f:
         ^^^^^^^^^^^^^^^^^^^^
  File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/pathlib.py", line 1013, in open
    return io.open(self, mode, buffering, encoding, errors, newline)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
FileNotFoundError: [Errno 2] No such file or directory: '/[PATH]/executorch/exir/_serialize/program.fbs'

Versions

python collect_env.py
Collecting environment information...
PyTorch version: 2.5.0.dev20240901
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A

OS: macOS 14.5 (arm64)
GCC version: Could not collect
Clang version: 16.0.0 (clang-1600.0.26.3)
CMake version: version 3.30.4
Libc version: N/A

Python version: 3.12.3 (v3.12.3:f6650f9ad7, Apr 9 2024, 08:18:47) [Clang 13.0.0 (clang-1300.0.29.30)] (64-bit runtime)
Python platform: macOS-14.5-arm64-arm-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True

CPU: Apple M3 Max

Versions of relevant libraries:
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.26.4
[pip3] onnxruntime==1.18.0
[pip3] optree==0.11.0
[pip3] torch==2.5.0.dev20240901
[pip3] torchaudio==2.5.0.dev20240901
[pip3] torchsr==1.0.4
[pip3] torchvision==0.20.0.dev20240901
[conda] executorch 0.5.0a0+9720715 pypi_0 pypi
[conda] numpy 1.26.4 pypi_0 pypi
[conda] torch 2.5.0.dev20240912 pypi_0 pypi
[conda] torchaudio 2.5.0.dev20240912 pypi_0 pypi
[conda] torchsr 1.0.4 pypi_0 pypi
[conda] torchvision 0.20.0.dev20240912 pypi_0 pypi

iseeyuan commented 1 week ago

@andrei-cioaca , have you run ./install_requirements.sh successfully? (I cannot reproduce this on my side.)

andrei-cioaca commented 1 week ago

Yes, the install is successful, including all three backends: coreml, mps, and xnnpack. I can also build the executor_runner successfully.

The only thing I can't do is generate the .pte file with any script. Same error every time.

dbort commented 6 days ago

This is what would happen if Python is trying to execute directly from the repo, rather than from the installed executorch pip package. What directory are you in when you run this? You could try cd-ing to a directory that does not contain a subdirectory named executorch.
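To illustrate the failure mode: Python searches sys.path entries in order, and the script's own directory comes first, so a sibling directory named like an installed package shadows the installed copy, and any data files (such as program.fbs) that only ship with the installed package cannot be found. A minimal sketch using a hypothetical throwaway package name, shadow_demo, standing in for executorch:

```python
import importlib
import os
import sys
import tempfile

# Create a bare package directory with no data files, mimicking a
# source checkout that shadows the installed pip package.
workdir = tempfile.mkdtemp()
pkg = os.path.join(workdir, "shadow_demo")
os.makedirs(pkg)
with open(os.path.join(pkg, "__init__.py"), "w") as f:
    f.write("")  # empty package: no program.fbs-style resources here

# Putting workdir first on sys.path mimics running a script from a
# directory that contains the repo checkout as a subdirectory.
sys.path.insert(0, workdir)
mod = importlib.import_module("shadow_demo")

# The local copy wins over anything in site-packages.
print(os.path.realpath(mod.__file__).startswith(os.path.realpath(workdir)))  # True
```

This is why checking `exir.__file__` (as suggested below in the thread) distinguishes the two cases: it reveals whether the import resolved to site-packages or to the local checkout.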

andrei-cioaca commented 6 days ago

> This is what would happen if python is trying to execute directly from the repo, rather than from the installed executorch pip package. What directory are you in when you run this? You could try cd-ing to a directory that does not contain a subdirectory named executorch

I tried executing from another directory but got the same error. I installed from the repo using conda.

Collecting environment information...
PyTorch version: 2.5.0.dev20240912
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A

OS: macOS 14.5 (arm64)
GCC version: Could not collect
Clang version: 16.0.0 (clang-1600.0.26.3)
CMake version: version 3.30.4
Libc version: N/A

Python version: 3.12.4 | packaged by Anaconda, Inc. | (main, Jun 18 2024, 10:07:17) [Clang 14.0.6 ] (64-bit runtime)
Python platform: macOS-14.5-arm64-arm-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True

CPU: Apple M3 Max

Versions of relevant libraries:
[pip3] executorch==0.5.0a0+6ff52cc
[pip3] numpy==1.26.4
[pip3] torch==2.5.0.dev20240912
[pip3] torchao==0.5.0+git0916b5b
[pip3] torchaudio==2.5.0.dev20240912
[pip3] torchsr==1.0.4
[pip3] torchvision==0.20.0.dev20240912
[conda] executorch 0.5.0a0+6ff52cc pypi_0 pypi
[conda] numpy 1.26.4 pypi_0 pypi
[conda] torch 2.5.0.dev20240912 pypi_0 pypi
[conda] torchaudio 2.5.0.dev20240912 pypi_0 pypi
[conda] torchsr 1.0.4 pypi_0 pypi
[conda] torchvision 0.20.0.dev20240912 pypi_0 pypi

dbort commented 6 days ago

In your logs, does the PATH in "/[PATH]/executorch/exir/_serialize/_flatbuffer.py" point to the repo, or to site-packages?

E.g. when I run

python3 -c "from executorch import exir; print(exir.__file__)"

it prints

/Users/???/.homebrew/Caskroom/miniconda/base/envs/executorch/lib/python3.10/site-packages/executorch/exir/__init__.py

And just to check: you're running the code block from https://pytorch.org/executorch/main/getting-started-setup.html#export-a-program and executing it from the command line with python3 export_add.py?

What is your current directory when you run this?

andrei-cioaca commented 6 days ago

When I run python3 -c "from executorch import exir; print(exir.__file__)" it prints /opt/anaconda3/envs/executorch/lib/python3.12/site-packages/executorch/exir/__init__.py

OK, so I moved the export_add.py file to another directory, one that does not contain the executorch git repo as a subdirectory, and it works now. The example from examples.models.llama2.export_llama also works.

I guess it is what you mentioned previously. Is this documented anywhere? If it is, I missed it.

Thanks for the support @dbort

andrei-cioaca commented 6 days ago

Fixed

dbort commented 5 days ago

I'm glad that worked for you @andrei-cioaca. This is definitely an issue that other users might hit, so I'm re-opening the issue to see how we can mitigate it.

We'll also update the docs to help avoid and recover from this problem.

dbort commented 5 days ago

The ultimate solution for this is to follow Python packaging layout best practices and move our python files under //executorch/src/executorch/exir/... etc., rather than using the executorch/ repo directory itself as part of the import paths. Then users will be less likely to import local paths instead of site-packages, since they are unlikely to a) put scripts in the //executorch/src directory or b) cd to //executorch/src when running scripts.
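The "src layout" described above typically looks something like this (a rough sketch; the exact file set is an assumption, not the actual ExecuTorch tree):

```
executorch/                    # repo root; harmless if it lands on sys.path
├── pyproject.toml
└── src/
    └── executorch/            # the actual importable package
        ├── __init__.py
        └── exir/
            └── _serialize/
                ├── __init__.py
                └── program.fbs   # packaged data file
```

With this layout, running a script from (or next to) the repo root no longer puts a directory named executorch on sys.path, so imports can only resolve to the installed package in site-packages.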