KexinFeng opened this issue 2 years ago
Did you build pytorch in multipy/multipy/runtime/third-party/pytorch with USE_DEPLOY=1? It seems like you're missing symbols from that build. Try running
cd multipy/multipy/runtime/third-party/pytorch
USE_DEPLOY=1 python setup.py develop
and then build multipy.
You can also use multipy directly from pytorch as the rst you found states. The primary reason we have the build from source instructions here is for development on multipy.
This issue is what I encountered when trying to use multipy directly from pytorch. I followed this rst and managed to build pytorch from source with USE_DEPLOY=1. The problem seems to be that I didn't install the complete list of dependencies for CPython.
However, when trying the example in this rst, it reports an error at the "Load the model from the torch.package" stage, namely:
Importing the numpy C-extensions failed
The full error message is attached below.
So, do you by chance know why this tutorial example doesn't work? Also, what is the difference between torch::deploy and multipy, the out-of-core repo? Which one will you focus on in the future?
Thanks!
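For context, the stage that fails is the torch.package round-trip. A minimal sketch of that flow under a plain Python interpreter (no torch::deploy involved; the package and file names here are made up for illustration):

```python
import torch
from torch.package import PackageExporter, PackageImporter

# Package a toy model (a stand-in for the tutorial's model).
model = torch.nn.Linear(4, 2)
with PackageExporter("example_package.pt") as exporter:
    # torch itself stays outside the package and is imported from the
    # loading environment, so extern it rather than interning it.
    exporter.extern(["torch", "torch.**"])
    exporter.save_pickle("model", "model.pkl", model)

# Load it back -- this is the step that aborts under torch::deploy above.
importer = PackageImporter("example_package.pt")
loaded = importer.load_pickle("model", "model.pkl")
out = loaded(torch.ones(1, 4))
print(out.shape)  # torch.Size([1, 2])
```

If this works in a regular interpreter but fails inside the embedded one, the problem is in the embedded interpreter's environment (here, its numpy install) rather than in the package itself.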
Here is the full message:
terminate called after throwing an instance of 'std::runtime_error'
what(): Exception Caught inside torch::deploy embedded library:
Exception Caught inside torch::deploy embedded library:
ImportError:
IMPORTANT: PLEASE READ THIS FOR ADVICE ON HOW TO SOLVE THIS ISSUE!
Importing the numpy C-extensions failed. This error can happen for
many reasons, often due to issues with your setup or how NumPy was
installed.
We have compiled some common reasons and troubleshooting tips at:
https://numpy.org/devdocs/user/troubleshooting-importerror.html
Please note and check the following:
* The Python version is: Python3.8 from "torch_deploy"
* The NumPy version is: "1.22.4"
and make sure that they are the versions you expect.
Please carefully study the documentation linked above for further help.
Original error was: No module named 'numpy.core._multiarray_umath'
At:
/home/ubuntu/anaconda3/envs/torch_build/lib/python3.9/site-packages/numpy/core/__init__.py(52): <module>
<frozen importlib._bootstrap>(219): _call_with_frames_removed
<frozen importlib._bootstrap_external>(783): exec_module
<frozen importlib._bootstrap>(686): _load_unlocked
<frozen importlib._bootstrap>(975): _find_and_load_unlocked
<frozen importlib._bootstrap>(991): _find_and_load
<frozen importlib._bootstrap>(219): _call_with_frames_removed
<frozen importlib._bootstrap>(1050): _handle_fromlist
/home/ubuntu/anaconda3/envs/torch_build/lib/python3.9/site-packages/numpy/__init__.py(144): <module>
<frozen importlib._bootstrap>(219): _call_with_frames_removed
<frozen importlib._bootstrap_external>(783): exec_module
<frozen importlib._bootstrap>(686): _load_unlocked
<frozen importlib._bootstrap>(975): _find_and_load_unlocked
<frozen importlib._bootstrap>(991): _find_and_load
<frozen importlib._bootstrap>(1014): _gcd_import
<Generated by torch::deploy>(127): import_module
<Generated by torch::deploy>(387): _load_module
<Generated by torch::deploy>(466): _do_find_and_load
<Generated by torch::deploy>(476): _find_and_load
<Generated by torch::deploy>(506): _gcd_import
<Generated by torch::deploy>(551): __import__
<torch_package_0>.models/loss_criterions/ac_criterion.py(5): <module>
<Generated by torch::deploy>(369): _make_module
<Generated by torch::deploy>(389): _load_module
<Generated by torch::deploy>(466): _do_find_and_load
<Generated by torch::deploy>(476): _find_and_load
<Generated by torch::deploy>(506): _gcd_import
<Generated by torch::deploy>(555): __import__
<torch_package_0>.models/base_GAN.py(8): <module>
<Generated by torch::deploy>(369): _make_module
<Generated by torch::deploy>(389): _load_module
<Generated by torch::deploy>(466): _do_find_and_load
<Generated by torch::deploy>(476): _find_and_load
<Generated by torch::deploy>(506): _gcd_import
<Generated by torch::deploy>(555): __import__
<torch_package_0>.models/DCGAN.py(4): <module>
<Generated by torch::deploy>(369): _make_module
<Generated by torch::deploy>(389): _load_module
<Generated by torch::deploy>(466): _do_find_and_load
<Generated by torch::deploy>(476): _find_and_load
<Generated by torch::deploy>(506): _gcd_import
<Generated by torch::deploy>(149): import_module
<Generated by torch::deploy>(25): find_class
<Generated by torch::deploy>(1526): load_global
<Generated by torch::deploy>(1212): load
<Generated by torch::deploy>(271): load_pickle
Aborted (core dumped)
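One detail worth noting in the traceback above: the embedded interpreter reports Python 3.8 ("from torch_deploy"), while numpy resolves from a python3.9 site-packages directory. Compiled extensions such as numpy.core._multiarray_umath are built per minor Python version, so a mismatch like this can produce exactly this ImportError. A stdlib-only sketch to check which interpreter is running and where a module would resolve from (running it inside the failing environment is the interesting case):

```python
import importlib.util
import sys

def module_origin(name):
    """Return the file a module would be loaded from, or None if unresolvable."""
    spec = importlib.util.find_spec(name)
    return spec.origin if spec is not None else None

# The Python version actually executing this code:
print(sys.version_info[:2])

# Where 'numpy' would resolve from; if this path names a different minor
# Python version than the interpreter above (e.g. python3.9 site-packages
# under a 3.8 interpreter), its C-extensions will fail to import.
print(module_origin("numpy"))
```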
@KexinFeng we've completely revamped the install process so it should be a lot simpler and not require building torch from source. Does the new install process work better for you?
📚 The doc issue
Hi,
I'm trying the tutorial example of deploy, aiming to package a model and do inference in C++. But I ran into problems when working with build-from-source pytorch. I have set
export USE_CUDA=0
and I have also set
export USE_DEPLOY=1
following the [deploy tutorial](https://pytorch.org/docs/stable/deploy.html#loading-and-running-the-model-in-c). Any help will be appreciated!
Here is the system info:
Suggest a potential alternative/fix
No response
cc @wconstab