Description of the bug:
When installing and importing ai-edge-torch v0.2.0 on Colab:
!pip install ai-edge-torch
import ai_edge_torch
One gets an error:
---------------------------------------------------------------------------
ImportError Traceback (most recent call last)
[<ipython-input-1-a72d87013efa>](https://localhost:8080/#) in <cell line: 5>()
3 get_ipython().system('pip install ai-edge-torch')
4 # import torch_xla
----> 5 import ai_edge_torch
4 frames
[/usr/local/lib/python3.10/dist-packages/torch_xla/__init__.py](https://localhost:8080/#) in <module>
18 sys.setdlopenflags(flags)
19
---> 20 import _XLAC
21 from ._internal import tpu
22 from .version import __version__
ImportError: /usr/local/lib/python3.10/dist-packages/_XLAC.cpython-310-x86_64-linux-gnu.so: undefined symbol: _ZN2at6native12cpu_fallbackERKN3c1014OperatorHandleEPSt6vectorINS1_6IValueESaIS6_EEbNS1_11DispatchKeyE
---------------------------------------------------------------------------
The reason is that the PyPI distribution lists the following requirements:
Colab ships with torch 2.4.0, so no new torch version is installed; torch-xla, however, is not preinstalled, so the latest version (2.5.0) is pulled in. Unfortunately, torch-xla does not pin the torch version it requires (see also https://github.com/pytorch/xla/issues/8292), which breaks the import with the library error above.
This happens both for CPU and GPU instances.
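To illustrate the mismatch, here is a minimal pre-import sanity check. This is a sketch of my own, not anything ai-edge-torch or Colab provides; the package names used are the PyPI distribution names:

```python
from importlib.metadata import version, PackageNotFoundError

def same_minor(v1: str, v2: str) -> bool:
    """Return True if two version strings share the same major.minor pair."""
    return v1.split(".")[:2] == v2.split(".")[:2]

def check_torch_xla_match() -> bool:
    """Warn if torch and torch-xla minor versions diverge (e.g. 2.4 vs 2.5),
    which can surface as an undefined-symbol ImportError from _XLAC."""
    try:
        torch_v = version("torch")
        xla_v = version("torch-xla")
    except PackageNotFoundError:
        return True  # nothing to compare; let the import fail on its own
    if not same_minor(torch_v, xla_v):
        print(f"torch {torch_v} vs torch-xla {xla_v}: likely ABI mismatch")
        return False
    return True
```

Running a check like this before `import ai_edge_torch` would turn the cryptic undefined-symbol error into an actionable version warning.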
Actual vs expected behavior:
Installing the package should install consistent requirements. After installing the package successfully, the imports should work.
Any other information you'd like to share?
This might be considered something to fix in PyTorch itself, but PyTorch documents environment-specific install commands, sometimes with extra arguments.
I would say the CPU version should definitely work out-of-the-box. For GPU/TPU environments, it might be advisable to add a note about setting up PyTorch/XLA properly in the environment before installing ai-edge-torch. In this specific case, however, I think simply pinning the versions should work. Alternatively, torch-xla could be made an optional dependency.
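As one possible shape of the version-pinning fix (the versions here are assumptions based on the Colab environment described above, untested), a pip constraints file could stop the resolver from picking torch-xla 2.5.0:

```
# constraints.txt (hypothetical) — keep torch-xla on the same minor
# release as the torch 2.4.0 preinstalled on Colab
torch==2.4.0
torch-xla==2.4.0
```

This would be used as `pip install -c constraints.txt ai-edge-torch`; equivalently, ai-edge-torch's own setup metadata could pin torch-xla to the matching torch minor release.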