jramapuram opened this issue 2 years ago
If you build from source that should fix the issue. This is happening because the files for building extensions aren't there. Will look into improving this.
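For reference, the from-source route looks roughly like this (a sketch only, assuming a working C++ toolchain, and CUDA if you want the GPU kernels):

```bash
# Rough sketch of a from-source build; adjust to your environment.
git clone https://github.com/facebookresearch/xformers.git
cd xformers
pip install -e .   # compiles the C++/CUDA extensions against your local torch
```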
Thanks for the quick response @dianaml0.

* With OSX I get a `clang: error: unsupported option '-fopenmp'`: I tried installing `llvm`, `gcc` and `libomp` with brew, setting `CC` and `CPP`, and none of these worked.
* With the Ubuntu 20.04 container: the source compile works, but after `pip install -e .` it seems like the cloned repo `xformers` needs to be present (i.e. after an `rm -rf xformers`, `python -c "import xformers"` fails with ModuleNotFoundError). It is quite a strong requirement for the cloned `xformers` repo to be present locally for operation 😬
> With OSX I get a `clang: error: unsupported option '-fopenmp'`: I tried installing `llvm`, `gcc` and `libomp` with brew, setting `CC` and `CPP`, and none of these worked.

This could be a path issue, MacOS ships its own clang. Have you tried conda by any chance?

> With the Ubuntu 20.04 container: the source compile works, but after `pip install -e .` it seems like the cloned repo `xformers` needs to be present (i.e. after an `rm -rf xformers`, `python -c "import xformers"` fails with ModuleNotFoundError). It is quite a strong requirement for the cloned `xformers` repo to be present locally for operation 😬

This is the very definition of `-e`, it means "editable" if I remember correctly: if you install with that and then modify the library code, the modified code will be used. Use `pip install .` or `python3 setup.py install` if you want a fire-and-forget strategy, off the top of my head (this is not specific to xformers, it's a general python/pip principle). We should probably explain this better in the README.
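To illustrate the difference (assuming you are inside a clone of the repo):

```bash
# Editable install: the installed package points back at this checkout, so
# the cloned directory must stay in place and local edits take effect.
pip install -e .

# Regular install: files are copied into site-packages, so the clone can be
# removed afterwards and `import xformers` still works.
pip install .
```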
> > With OSX I get a `clang: error: unsupported option '-fopenmp'`: I tried installing `llvm`, `gcc` and `libomp` with brew, setting `CC` and `CPP`, and none of these worked.
>
> This could be a path issue, MacOS ships its own clang. Have you tried conda by any chance?

Sure, I can give this a shot. Ideally, I'm sure folks are mainly prototyping on OSX and only need `pip install xformers`; however, this is completely broken on OSX and Ubuntu 20.04.

> This is the very definition of `-e`, it means "editable" if I remember correctly: if you install with that and then modify the library code, the modified code will be used. Use `pip install .` or `python3 setup.py install` if you want a fire-and-forget strategy (this is not specific to xformers, it's a general python/pip principle).

Ah yes, derp! Thanks, `-e` is indeed editable. Makes sense for development 👍
> Sure, I can give this a shot. Ideally, I'm sure folks are mainly prototyping on OSX and only need `pip install xformers`; however, this is completely broken on OSX and Ubuntu 20.04.

+1, thanks for the feedback, we're working on that! To explain a little: the issue is with the cuda kernels, which need to be pre-compiled in a gazillion configurations for `pip install xformers` to just work. Work in progress!
Hi @jramapuram,

In order to install on OSX, you need to pass the following:

```bash
MACOSX_DEPLOYMENT_TARGET=10.9 CC=clang CXX=clang++ pip install -e .
```

This will tell the build to use the right compiler options.
@blefaudeux @dianaml0 I'll update the installation instructions to take this into account
Improvements to installation instructions sent to https://github.com/facebookresearch/xformers/pull/159
For the original issue you were facing, we just need to include the C++ / `.cu` files in the `pip` package as well. I'll send a PR to include them.
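As a rough check that the sources actually end up in the package (a sketch, assuming a local checkout with `setup.py`):

```bash
# Build a source distribution and list any C++/CUDA files it contains.
python setup.py sdist
tar -tzf dist/xformers-*.tar.gz | grep -E '\.(cpp|cu|cuh|h)$' | head
```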
@blefaudeux: appreciate the complexity of getting all the CUDA kernels compiling. How does PyTorch resolve this? Guessing something like building all (valid) CUDA archs and coupling with a pinned `cudatoolkit`? Maybe that would be the cleanest? Adding the sources as suggested by @fmassa would be great, as it removes the git coupling at least :)
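For reference, PyTorch's C++/CUDA extension builder honors `TORCH_CUDA_ARCH_LIST`, so a source build can be pinned to a chosen set of architectures; something along these lines (the arch list shown is just an example):

```bash
# Sketch: compile the extensions for a few chosen architectures only,
# instead of whatever the local toolchain would default to.
TORCH_CUDA_ARCH_LIST="7.0;7.5;8.0;8.6" pip install -e .
```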
@fmassa: thanks! That was it. I missed the CXX export. The following worked for me:
```bash
brew install llvm libomp  # not sure if libomp was necessary, did not ablate
MACOSX_DEPLOYMENT_TARGET=10.9 CC=/opt/homebrew/opt/llvm/bin/clang \
  CXX=/opt/homebrew/opt/llvm/bin/clang++ \
  pip install -e .
```
@jramapuram great that this is working!
FYI I don't think installing `llvm` is necessary (at least for building the C++ extensions, maybe for triton). Indeed, starting from XCode 4.2, clang is the default compiler for OSX, so just setting `MACOSX_DEPLOYMENT_TARGET=10.9` should fix everything, I believe.
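So on a stock Xcode toolchain, presumably the minimal variant would be just:

```bash
# Untested sketch: rely on Apple clang and only set the deployment target.
MACOSX_DEPLOYMENT_TARGET=10.9 pip install -e .
```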
> @jramapuram great that this is working!
> FYI I don't think installing `llvm` is necessary (at least for building the C++ extensions, maybe for triton).

Just adding two cents here, for clarity: not needed for triton either, it ships with the wheels, so on our side there's nothing to be done beyond `pip install triton`.
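On platforms where a wheel is actually published, a quick sanity check would be (sketch):

```bash
pip install triton
python -c "import triton; print(triton.__version__)"
```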
> FYI I don't think installing `llvm` is necessary (at least for building the C++ extensions, maybe for triton).

Didn't work for me on OSX 12.1 on an M1 Max 😬. I had to install LLVM. Triton installed fine without the above change though 🤷.
Same issue building `xformers` from source on MacOS Monterey 12.5.1:

```
(AI-Feynman) davidlaxer@x86_64-apple-darwin13 xformers % MACOSX_DEPLOYMENT_TARGET=10.9 CC=clang CXX=clang++ pip install -e .
Obtaining file:///Users/davidlaxer/xformers
Requirement already satisfied: torch>=1.12 in /Users/davidlaxer/anaconda3/envs/AI-Feynman/lib/python3.10/site-packages (from xformers==0.0.13.dev0) (1.13.0a0+gitaf8e34c)
Requirement already satisfied: numpy in /Users/davidlaxer/anaconda3/envs/AI-Feynman/lib/python3.10/site-packages (from xformers==0.0.13.dev0) (1.22.3)
Requirement already satisfied: pyre-extensions==0.0.23 in /Users/davidlaxer/anaconda3/envs/AI-Feynman/lib/python3.10/site-packages (from xformers==0.0.13.dev0) (0.0.23)
Requirement already satisfied: typing-extensions in /Users/davidlaxer/anaconda3/envs/AI-Feynman/lib/python3.10/site-packages (from pyre-extensions==0.0.23->xformers==0.0.13.dev0) (4.1.1)
Requirement already satisfied: typing-inspect in /Users/davidlaxer/anaconda3/envs/AI-Feynman/lib/python3.10/site-packages (from pyre-extensions==0.0.23->xformers==0.0.13.dev0) (0.8.0)
Requirement already satisfied: mypy-extensions>=0.3.0 in /Users/davidlaxer/anaconda3/envs/AI-Feynman/lib/python3.10/site-packages (from typing-inspect->pyre-extensions==0.0.23->xformers==0.0.13.dev0) (0.4.3)
Installing collected packages: xformers
Running setup.py develop for xformers
ERROR: Command errored out with exit status 1:
command: /Users/davidlaxer/anaconda3/envs/AI-Feynman/bin/python -c 'import io, os, sys, setuptools, tokenize; sys.argv[0] = '"'"'/Users/davidlaxer/xformers/setup.py'"'"'; __file__='"'"'/Users/davidlaxer/xformers/setup.py'"'"';f = getattr(tokenize, '"'"'open'"'"', open)(__file__) if os.path.exists(__file__) else io.StringIO('"'"'from setuptools import setup; setup()'"'"');code = f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' develop --no-deps
cwd: /Users/davidlaxer/xformers/
Complete output (122 lines):
/Users/davidlaxer/anaconda3/envs/AI-Feynman/lib/python3.10/site-packages/setuptools/dist.py:516: UserWarning: Normalizing '0.0.13.dev' to '0.0.13.dev0'
warnings.warn(tmpl.format(**locals()))
running develop
/Users/davidlaxer/anaconda3/envs/AI-Feynman/lib/python3.10/site-packages/setuptools/command/easy_install.py:144: EasyInstallDeprecationWarning: easy_install command is deprecated. Use build and pip and other standards-based tools.
warnings.warn(
/Users/davidlaxer/anaconda3/envs/AI-Feynman/lib/python3.10/site-packages/setuptools/command/install.py:34: SetuptoolsDeprecationWarning: setup.py install is deprecated. Use build and pip and other standards-based tools.
warnings.warn(
running egg_info
writing xformers.egg-info/PKG-INFO
writing dependency_links to xformers.egg-info/dependency_links.txt
writing requirements to xformers.egg-info/requires.txt
writing top-level names to xformers.egg-info/top_level.txt
reading manifest file 'xformers.egg-info/SOURCES.txt'
reading manifest template 'MANIFEST.in'
adding license file 'LICENSE'
writing manifest file 'xformers.egg-info/SOURCES.txt'
running build_ext
building 'xformers._C' extension
Emitting ninja build file /Users/davidlaxer/xformers/build/temp.macosx-10.9-x86_64-3.10/build.ninja...
Compiling objects...
Allowing ninja to set a default number of workers... (overridable by setting the environment variable MAX_JOBS=N)
[1/11] clang++ -MMD -MF /Users/davidlaxer/xformers/build/temp.macosx-10.9-x86_64-3.10/Users/davidlaxer/xformers/xformers/components/attention/csrc/cpu/spmm.o.d -Wno-unused-result -Wsign-compare -Wunreachable-code -DNDEBUG -g -fwrapv -O2 -Wall -fPIC -O2 -isystem /Users/davidlaxer/anaconda3/envs/AI-Feynman/include -fPIC -O2 -isystem /Users/davidlaxer/anaconda3/envs/AI-Feynman/include -I/Users/davidlaxer/xformers/xformers/components/attention/csrc -I/Users/davidlaxer/anaconda3/envs/AI-Feynman/lib/python3.10/site-packages/torch/include -I/Users/davidlaxer/anaconda3/envs/AI-Feynman/lib/python3.10/site-packages/torch/include/torch/csrc/api/include -I/Users/davidlaxer/anaconda3/envs/AI-Feynman/lib/python3.10/site-packages/torch/include/TH -I/Users/davidlaxer/anaconda3/envs/AI-Feynman/lib/python3.10/site-packages/torch/include/THC -I/Users/davidlaxer/anaconda3/envs/AI-Feynman/include/python3.10 -c -c /Users/davidlaxer/xformers/xformers/components/attention/csrc/cpu/spmm.cpp -o /Users/davidlaxer/xformers/build/temp.macosx-10.9-x86_64-3.10/Users/davidlaxer/xformers/xformers/components/attention/csrc/cpu/spmm.o -O3 -fopenmp -DTORCH_API_INCLUDE_EXTENSION_H '-DPYBIND11_COMPILER_TYPE="_clang"' '-DPYBIND11_STDLIB="_libcpp"' '-DPYBIND11_BUILD_ABI="_cxxabi1002"' -DTORCH_EXTENSION_NAME=_C -D_GLIBCXX_USE_CXX11_ABI=0 -std=c++14
FAILED: /Users/davidlaxer/xformers/build/temp.macosx-10.9-x86_64-3.10/Users/davidlaxer/xformers/xformers/components/attention/csrc/cpu/spmm.o
clang++ -MMD -MF /Users/davidlaxer/xformers/build/temp.macosx-10.9-x86_64-3.10/Users/davidlaxer/xformers/xformers/components/attention/csrc/cpu/spmm.o.d -Wno-unused-result -Wsign-compare -Wunreachable-code -DNDEBUG -g -fwrapv -O2 -Wall -fPIC -O2 -isystem /Users/davidlaxer/anaconda3/envs/AI-Feynman/include -fPIC -O2 -isystem /Users/davidlaxer/anaconda3/envs/AI-Feynman/include -I/Users/davidlaxer/xformers/xformers/components/attention/csrc -I/Users/davidlaxer/anaconda3/envs/AI-Feynman/lib/python3.10/site-packages/torch/include -I/Users/davidlaxer/anaconda3/envs/AI-Feynman/lib/python3.10/site-packages/torch/include/torch/csrc/api/include -I/Users/davidlaxer/anaconda3/envs/AI-Feynman/lib/python3.10/site-packages/torch/include/TH -I/Users/davidlaxer/anaconda3/envs/AI-Feynman/lib/python3.10/site-packages/torch/include/THC -I/Users/davidlaxer/anaconda3/envs/AI-Feynman/include/python3.10 -c -c /Users/davidlaxer/xformers/xformers/components/attention/csrc/cpu/spmm.cpp -o /Users/davidlaxer/xformers/build/temp.macosx-10.9-x86_64-3.10/Users/davidlaxer/xformers/xformers/components/attention/csrc/cpu/spmm.o -O3 -fopenmp -DTORCH_API_INCLUDE_EXTENSION_H '-DPYBIND11_COMPILER_TYPE="_clang"' '-DPYBIND11_STDLIB="_libcpp"' '-DPYBIND11_BUILD_ABI="_cxxabi1002"' -DTORCH_EXTENSION_NAME=_C -D_GLIBCXX_USE_CXX11_ABI=0 -std=c++14
clang: error: unsupported option '-fopenmp'
...
(AI-Feynman) davidlaxer@x86_64-apple-darwin13 xformers % clang --version
Apple clang version 13.1.6 (clang-1316.0.21.2.5)
Target: x86_64-apple-darwin21.6.0
Thread model: posix
InstalledDir: /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin
(AI-Feynman) davidlaxer@x86_64-apple-darwin13 xformers % clang++ --version
Apple clang version 13.1.6 (clang-1316.0.21.2.5)
Target: x86_64-apple-darwin21.6.0
Thread model: posix
InstalledDir: /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin
...
(AI-Feynman) davidlaxer@x86_64-apple-darwin13 pytorch % python collect_env.py
Collecting environment information...
PyTorch version: N/A
Is debug build: N/A
CUDA used to build PyTorch: N/A
ROCM used to build PyTorch: N/A
OS: macOS 12.5.1 (x86_64)
GCC version: Could not collect
Clang version: 13.1.6 (clang-1316.0.21.2.5)
CMake version: version 3.22.1
Libc version: N/A
Python version: 3.10.4 (main, Mar 31 2022, 03:38:35) [Clang 12.0.0 ] (64-bit runtime)
Python platform: macOS-10.16-x86_64-i386-64bit
Is CUDA available: N/A
CUDA runtime version: Could not collect
GPU models and configuration: Could not collect
Nvidia driver version: Could not collect
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: N/A
Versions of relevant libraries:
[pip3] mypy-extensions==0.4.3
[pip3] numpy==1.22.3
[pip3] torch==1.13.0a0+gitaf8e34c
[pip3] torchmetrics==0.9.3
[pip3] torchvision==0.14.0a0+a61e6ef
[conda] blas 1.0 mkl anaconda
[conda] captum 0.5.0 0 pytorch
[conda] mkl 2021.4.0 hecd8cb5_637 anaconda
[conda] mkl-service 2.4.0 py310hca72f7f_0 anaconda
[conda] mkl_fft 1.3.1 py310hf879493_0 anaconda
[conda] mkl_random 1.2.2 py310hc081a56_0 anaconda
[conda] numpy 1.22.3 py310hdcd3fac_0 anaconda
[conda] numpy-base 1.22.3 py310hfd2de13_0 anaconda
[conda] pytorch 1.12.1 py3.10_0 pytorch
[conda] torch 1.13.0a0+gitaf8e34c pypi_0 pypi
[conda] torchmetrics 0.9.3 pyhd8ed1ab_0 conda-forge
[conda] torchvision 0.14.0a0+a61e6ef pypi_0 pypi
```
I had to add `-Xclang` to `setup.py`: `extra_compile_args["cxx"].append("-Xclang -fopenmp")`
Do you guys get

```
ERROR: Could not find a version that satisfies the requirement triton==2.0.0.dev20220701 (from versions: 0.1, 0.1.1, 0.1.2, 0.1.3, 0.2.0, 0.2.1, 0.2.2, 0.2.3, 0.3.0)
ERROR: No matching distribution found for triton==2.0.0.dev20220701
```

for triton? I could run the benchmark on M1 up to 100% (`python3 benchmarks/benchmark_encoder.py --activations relu --plot -emb 256 -bs 32 -heads 16`) but it says cpu (e.g. `32 1024 256 True cpu visual`), so that's probably not a good sign. Note `pip install triton==0.3.0` gives another error: `RuntimeError: Could not find llvm-config. Please install llvm-{8, 9, 10}-dev`.
> Do you guys get
>
> ```
> ERROR: Could not find a version that satisfies the requirement triton==2.0.0.dev20220701 (from versions: 0.1, 0.1.1, 0.1.2, 0.1.3, 0.2.0, 0.2.1, 0.2.2, 0.2.3, 0.3.0)
> ERROR: No matching distribution found for triton==2.0.0.dev20220701
> ```
>
> for triton? I could run the benchmark on M1 up to 100% (`python3 benchmarks/benchmark_encoder.py --activations relu --plot -emb 256 -bs 32 -heads 16`) but it says cpu (e.g. `32 1024 256 True cpu visual`), so that's probably not a good sign. Note `pip install triton==0.3.0` gives another error: `RuntimeError: Could not find llvm-config. Please install llvm-{8, 9, 10}-dev`.

triton is not released on windows I think (edit: lost context: not available on some macs either :D), but it's a soft dependency for xformers, the error message is a bit misleading. You can have mem-efficient attention without triton, no worries.
> Do you guys get
>
> ```
> ERROR: Could not find a version that satisfies the requirement triton==2.0.0.dev20220701 (from versions: 0.1, 0.1.1, 0.1.2, 0.1.3, 0.2.0, 0.2.1, 0.2.2, 0.2.3, 0.3.0)
> ERROR: No matching distribution found for triton==2.0.0.dev20220701
> ```

@Any-Winter-4079 Yup.. see this issue I just opened:

> Seems that `triton == 2.0.0.dev20220701` is defined in https://github.com/facebookresearch/xformers/blob/main/requirements-test.txt#L29-L30:
>
> ```
> # Dependency for fused layers, optional
> triton == 2.0.0.dev20220701
> ```
>
> And even though it's listed as a version on https://pypi.org/project/triton/#history, `pip` no longer seems able to download/install it (even by using `--pre`), as it only seems to be seeing the latest 5 prerelease versions.
>
> Originally posted by @0xdevalias in https://github.com/facebookresearch/xformers/issues/515#issuecomment-1309810905
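For what it's worth, one way to see which versions pip can actually resolve from your index (a sketch; `pip index` is still marked experimental in recent pip releases):

```bash
# List the triton versions visible to pip on this machine/index.
pip index versions triton
# Try the pinned dev build explicitly, allowing pre-releases:
pip install --pre "triton==2.0.0.dev20220701"
```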
For me (MacOS 12.5.1):
```bash
brew install llvm libomp
MACOSX_DEPLOYMENT_TARGET=10.9 CC=/usr/local/opt/llvm/bin/clang \
  CXX=/usr/local/opt/llvm/bin/clang++ \
  pip install xformers
```
Done.
On MacOs 14.3:
```bash
MACOSX_DEPLOYMENT_TARGET=10.9 CC=/opt/homebrew/opt/llvm/bin/clang \
  CXX=/opt/homebrew/opt/llvm/bin/clang++ \
  pip install xformers
```
Hi, we don't support MacOS anymore, as there is no recent CUDA release available on Mac.
> On MacOs 14.3:
>
> ```bash
> MACOSX_DEPLOYMENT_TARGET=10.9 CC=/opt/homebrew/opt/llvm/bin/clang \
>   CXX=/opt/homebrew/opt/llvm/bin/clang++ \
>   pip install xformers
> ```

Thanks! It works!
> On MacOs 14.3:
>
> ```bash
> MACOSX_DEPLOYMENT_TARGET=10.9 CC=/opt/homebrew/opt/llvm/bin/clang \
>   CXX=/opt/homebrew/opt/llvm/bin/clang++ \
>   pip install xformers
> ```

Error occurred on 14.6.1:

```
subprocess.CalledProcessError: Command '['which', '/opt/homebrew/opt/llvm/bin/clang++']' returned non-zero exit status 1.
```
> Error occurred on 14.6.1:
>
> ```
> subprocess.CalledProcessError: Command '['which', '/opt/homebrew/opt/llvm/bin/clang++']' returned non-zero exit status 1.
> ```

Did you skip `brew install llvm libomp` beforehand?
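For that `which` failure, a couple of sanity checks might help (a sketch; the Homebrew prefix differs between Apple Silicon and Intel Macs):

```bash
brew install llvm libomp
brew --prefix llvm                       # /opt/homebrew/opt/llvm on arm64, /usr/local/opt/llvm on x86_64
ls "$(brew --prefix llvm)/bin/clang++"   # CXX should point at a path that exists
```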
Not sure if there are any specific requirements or why this particular error occurs, but I see the same error on both:
Error is also quite non-descriptive:
```
assert len(sources) > 0
```
Full gist.