mkhoshle closed this issue 2 years ago
Did you install the nightly release? M1 GPU support is not in the main release yet. You can install it with
pip3 install --pre torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/nightly/cpu
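After installing, a quick way to confirm the build actually sees the MPS backend is a check like the following (a sketch: the `mps_available` helper is my own, and `torch.backends.mps` only exists in builds new enough to ship the backend, so the guard makes it safe to run anywhere):

```python
def mps_available() -> bool:
    """Return True only if this PyTorch build exposes a usable MPS backend."""
    try:
        import torch  # torch may not be installed at all
    except ImportError:
        return False
    backend = getattr(torch.backends, "mps", None)  # absent on older builds
    return backend is not None and backend.is_available()

print("MPS available:", mps_available())
```

If this prints `False` on an Apple Silicon machine, either the installed build predates MPS support or the interpreter itself is not running natively.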
I installed using conda install pytorch torchvision -c pytorch
following the instructions on the page https://pytorch.org/get-started/locally/. Now I get this error:
INTEL MKL ERROR: dlopen(/opt/miniconda3/envs/ml/lib/libmkl_intel_thread.1.dylib, 0x0009): Library not loaded: @rpath/libiomp5.dylib
Referenced from: /opt/miniconda3/envs/ml/lib/libmkl_intel_thread.1.dylib
Reason: tried: '/opt/miniconda3/envs/ml/lib/libiomp5.dylib' (no such file), '/opt/miniconda3/envs/ml/bin/../lib/libiomp5.dylib' (no such file), '/usr/local/lib/libiomp5.dylib' (no such file), '/usr/lib/libiomp5.dylib' (no such file).
Intel MKL FATAL ERROR: Cannot load libmkl_intel_thread.1.dylib.
If you want to install the nightly version, I think you have to use the -c pytorch-nightly channel, as they recommend here: https://pytorch.org
I see. Ok, I uninstalled and reinstalled using conda install pytorch torchvision torchaudio -c pytorch-nightly. But now I get this message:
(ml) Mahzads-MacBook-Pro-2:pytorch-m1-gpu mahzadkhoshlessan$ python lenet-mnist.py --device mps
OMP: Error #15: Initializing libiomp5.dylib, but found libomp.dylib already initialized.
OMP: Hint This means that multiple copies of the OpenMP runtime have been linked into the program. That is dangerous, since it can degrade performance or cause incorrect results. The best thing to do is to ensure that only a single OpenMP runtime is linked into the process, e.g. by avoiding static linking of the OpenMP runtime in any library. As an unsafe, unsupported, undocumented workaround you can set the environment variable KMP_DUPLICATE_LIB_OK=TRUE to allow the program to continue to execute, but that may cause crashes or silently produce incorrect results. For more information, please see http://www.intel.com/software/products/support/.
Abort trap: 6
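As a last-resort stopgap, the workaround the error message itself mentions can be set from Python before importing torch; the hint explicitly calls it unsafe, so a clean environment with a single OpenMP runtime is the real fix:

```python
import os

# Must be set BEFORE importing torch: it tells Intel OpenMP to tolerate a
# duplicate runtime being loaded. The error message warns this may crash or
# silently produce incorrect results, so treat it as a temporary workaround.
os.environ["KMP_DUPLICATE_LIB_OK"] = "TRUE"
```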
Huh that's interesting. It's weird that it mentions Intel in this message
For more information, please see http://www.intel.com/software/products/support/.
You have a computer with M1 chip though, right?
Correct! And I am not sure either!
could you maybe try this in a fresh environment and see if that helps:
$ conda create -n torch-nightly python=3.8
$ conda activate torch-nightly
$ pip install --pre torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/nightly/cpu
I get the following error:
(torch-nightly) Mahzads-MacBook-Pro-2:pytorch-m1-gpu mahzadkhoshlessan$ python lenet-mnist.py --device mps
torch 1.12.0.dev20220520
device mps
Traceback (most recent call last):
File "lenet-mnist.py", line 286, in <module>
model = model.to(DEVICE)
File "/opt/miniconda3/envs/torch-nightly/lib/python3.8/site-packages/torch/nn/modules/module.py", line 927, in to
return self._apply(convert)
File "/opt/miniconda3/envs/torch-nightly/lib/python3.8/site-packages/torch/nn/modules/module.py", line 579, in _apply
module._apply(fn)
File "/opt/miniconda3/envs/torch-nightly/lib/python3.8/site-packages/torch/nn/modules/module.py", line 579, in _apply
module._apply(fn)
File "/opt/miniconda3/envs/torch-nightly/lib/python3.8/site-packages/torch/nn/modules/module.py", line 602, in _apply
param_applied = fn(param)
File "/opt/miniconda3/envs/torch-nightly/lib/python3.8/site-packages/torch/nn/modules/module.py", line 925, in convert
return t.to(device, dtype if t.is_floating_point() or t.is_complex() else None, non_blocking)
RuntimeError: PyTorch is not linked with support for mps devices
Hmm ... could you run the following code and show the output?
conda install watermark -c conda-forge
python -c "import watermark; print(watermark.watermark())"
Yes here is the output:
(torch-nightly) Mahzads-MacBook-Pro-2:pytorch-m1-gpu mahzadkhoshlessan$ python -c "import watermark; print(watermark.watermark())"
Last updated: 2022-05-20T15:06:56.465751-04:00
Python implementation: CPython
Python version : 3.8.13
IPython version : 8.3.0
Compiler : Clang 12.0.0
OS : Darwin
Release : 21.4.0
Machine : x86_64
Processor : i386
CPU cores : 8
Architecture: 64bit
Processor : i386
Oh, it looks like you have an Intel Mac, not an Arm Mac
Ok, but then what does this mean? What will change when installing PyTorch on my laptop?
You can double-check with your system configuration info, but it doesn't look like your computer has an M1 chip. That means you can keep using the regular CPU version of PyTorch, but the "mps" option is not going to work for you.
That is not correct. I have a Mac M1 chip.
Oh, it's weird that it is showing
Processor : i386
Maybe you are running the terminal in Intel mode via Rosetta?
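One way to check from inside Python (a sketch: `platform.machine()` reports "x86_64" when the interpreter runs under Rosetta 2 on Apple Silicon and "arm64" when it runs natively; macOS also exposes a `sysctl.proc_translated` flag that is 1 for translated processes, though the key may be absent on Intel Macs):

```python
import platform
import subprocess

# Reported architecture of the running interpreter, not of the hardware:
# under Rosetta this says "x86_64" even on an M1 machine.
print("Machine:", platform.machine())

try:
    out = subprocess.run(
        ["sysctl", "-n", "sysctl.proc_translated"],
        capture_output=True, text=True, check=True,
    ).stdout.strip()
    print("Running under Rosetta:", out == "1")
except (FileNotFoundError, subprocess.CalledProcessError):
    # Not macOS, or the translation flag is unavailable on this system.
    print("Rosetta translation flag unavailable")
```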
What I see is similar to yours:
And what if you open a Python interpreter in the terminal and check from within Python?
Oh it shows intel. Why is that? How can I change it?
You could try the arm version of miniforge, which has arm support for Python: https://github.com/conda-forge/miniforge
Hey, just curious, did miniforge fix this issue for you?
@rasbt Yes, it did fix my issue. Thanks so much for your help.
Hi, I ran your code on my Mac with an M1 chip after installing PyTorch and verifying the install. However, I get the following error:
Any idea?