Last error can be fixed by importing torch first:
import torch
import lltm_cpp
torch.cuda cannot be missing; it's a subpackage of the torch package and is always present.

Closing it, thanks for the help!
Sorry for bothering you again and for reopening this ticket.
I tried to get CUDAExtension to run on macOS and ran into some trouble, so maybe you can help me (it seems that you are also on a Mac?).
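For reference, a minimal CUDAExtension setup.py along the lines of the tutorial (the lltm_cuda names below are just the tutorial's placeholders, not anything specific to my code) looks roughly like this:

from setuptools import setup
from torch.utils.cpp_extension import BuildExtension, CUDAExtension

setup(
    name='lltm_cuda',
    ext_modules=[
        # The .cpp file holds the C++/Python bindings; the .cu file holds the
        # CUDA kernels, which is the part nvcc (and its host compiler) builds.
        CUDAExtension('lltm_cuda', ['lltm_cuda.cpp', 'lltm_cuda_kernel.cu']),
    ],
    cmdclass={'build_ext': BuildExtension},
)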
You wrote in your tutorial:
On MacOS, you will have to download GCC (e.g. brew install gcc will give you GCC 7 at the time of this writing). In the worst case, you can build PyTorch from source with your compiler and then build the extension with that same compiler.
Compiling PyTorch with clang and clang++ works fine; however, it fails when using gcc-7 and g++-7 and prints the following error message:
nvcc fatal: GNU C/C++ compiler is no longer supported as a host compiler on Mac OS X.
After browsing the internet, these error messages make sense to me, as it seems that nvcc does not support gcc/g++.
In addition, using clang/clang++ or gcc-7/g++-7 to compile the CUDAExtension does not work either.
Using clang/clang++, I get the error:
fatal error: 'atomic' file not found
Using gcc-7/g++-7, I get the error:
/usr/local/cuda/include/crt/math_functions.h(9457): error: namespace "__gnu_cxx" has no member "__promote_2"
Any idea on how to solve this problem?
For the first error, you simply can't use gcc-7 to compile PyTorch on Mac, because the CUDA toolkit on Mac doesn't support it as a host compiler.
The second one looks like a bug, cc @goldsborough.
Yeah, we actually build PyTorch with clang on Mac, so my tutorial is wrong here. As for the issue, we apparently don't support CUDA on Mac, so all bets are off for this, unfortunately :/
Just wanted to let you know that I fixed the problem. The actual problem is not within PyTorch and CUDA on macOS, but with distutils (https://github.com/cudamat/cudamat/issues/39).
I noticed that the nvcc call works fine when I type it manually into the console, but fails when it is called via spawn. I fixed the problem by replacing

def spawn(self, cmd):
    spawn(cmd, dry_run=self.dry_run)

with

def spawn(self, cmd):
    subprocess.call(cmd)

in python3.6/distutils/ccompiler.py. If you know of a more elegant fix, please let me know.
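A less invasive variant of the same idea (just a sketch I haven't tested; SpawnViaSubprocess is a name I made up) would be to override the compiler's spawn from the extension's setup.py instead of editing distutils in place:

import subprocess
from torch.utils.cpp_extension import BuildExtension

class SpawnViaSubprocess(BuildExtension):
    def build_extensions(self):
        # Replace distutils' CCompiler.spawn on this compiler instance with a
        # plain subprocess call, so nvcc runs the same way as from a shell.
        self.compiler.spawn = lambda cmd, **kwargs: subprocess.check_call(cmd)
        super().build_extensions()

and then pass cmdclass={'build_ext': SpawnViaSubprocess} to setup().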
Setup: Latest OS X and PyTorch built from master (no GPU).

The first error comes from torch.cuda not being present: https://github.com/pytorch/pytorch/blob/abd8501020d16e9aa12fa60dfd38ed70b8d7b71e/torch/utils/cpp_extension.py#L45. I manually set it to None.

The next one is related to flags. If I try:

python setup.py install

I get an error, which can be fixed by passing CFLAGS='-stdlib=libc++'.
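An equivalent option (just a sketch, assuming the tutorial's lltm_cpp setup) is to bake the flag into the extension via extra_compile_args instead of exporting CFLAGS:

from torch.utils.cpp_extension import CppExtension

# Pass the libc++ flag per-extension rather than through the environment.
ext = CppExtension(
    'lltm_cpp', ['lltm.cpp'],
    extra_compile_args=['-stdlib=libc++'],
)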
The next problem comes when I try to import the built extension.