idiap / fast-transformers

PyTorch library for fast transformer implementations

installation error #92

Closed davidliujiafeng closed 3 years ago

davidliujiafeng commented 3 years ago

Hi team,

I am using the command below to install fast-transformers:

pip install --user pytorch-fast-transformers

Here is my environment info:

Ubuntu 16.04
CUDA compilation tools, release 11.0, V11.0.221
gcc version 5.4.0 20160609 (Ubuntu 5.4.0-6ubuntu1~16.04.12)
Python 3.8

Please find the error below:

Building wheels for collected packages: pytorch-fast-transformers
  Building wheel for pytorch-fast-transformers (setup.py) ... error
  ERROR: Command errored out with exit status 1:
   command: /opt/conda/bin/python3.8 -u -c 'import io, os, sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-install-b8w5wyp7/pytorch-fast-transformers_53662d6fb71145ce9d0ea5098997afe6/setup.py'"'"'; __file__='"'"'/tmp/pip-install-b8w5wyp7/pytorch-fast-transformers_53662d6fb71145ce9d0ea5098997afe6/setup.py'"'"';f = getattr(tokenize, '"'"'open'"'"', open)(__file__) if os.path.exists(__file__) else io.StringIO('"'"'from setuptools import setup; setup()'"'"');code = f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' bdist_wheel -d /tmp/pip-wheel-3eex4mvx
       cwd: /tmp/pip-install-b8w5wyp7/pytorch-fast-transformers_53662d6fb71145ce9d0ea5098997afe6/
  Complete output (143 lines):
  No CUDA runtime is found, using CUDA_HOME='/opt/conda'
  running bdist_wheel
  running build
  running build_py
  creating build
  creating build/lib.linux-x86_64-3.8
  creating build/lib.linux-x86_64-3.8/fast_transformers
  copying fast_transformers/utils.py -> build/lib.linux-x86_64-3.8/fast_transformers
  copying fast_transformers/weight_mapper.py -> build/lib.linux-x86_64-3.8/fast_transformers
  copying fast_transformers/masking.py -> build/lib.linux-x86_64-3.8/fast_transformers
  copying fast_transformers/transformers.py -> build/lib.linux-x86_64-3.8/fast_transformers
  copying fast_transformers/__init__.py -> build/lib.linux-x86_64-3.8/fast_transformers
  creating build/lib.linux-x86_64-3.8/fast_transformers/clustering
  copying fast_transformers/clustering/__init__.py -> build/lib.linux-x86_64-3.8/fast_transformers/clustering
  creating build/lib.linux-x86_64-3.8/fast_transformers/feature_maps
  copying fast_transformers/feature_maps/fourier_features.py -> build/lib.linux-x86_64-3.8/fast_transformers/feature_maps
  copying fast_transformers/feature_maps/__init__.py -> build/lib.linux-x86_64-3.8/fast_transformers/feature_maps
  copying fast_transformers/feature_maps/base.py -> build/lib.linux-x86_64-3.8/fast_transformers/feature_maps
  creating build/lib.linux-x86_64-3.8/fast_transformers/events
  copying fast_transformers/events/event_dispatcher.py -> build/lib.linux-x86_64-3.8/fast_transformers/events
  copying fast_transformers/events/__init__.py -> build/lib.linux-x86_64-3.8/fast_transformers/events
  copying fast_transformers/events/event.py -> build/lib.linux-x86_64-3.8/fast_transformers/events
  copying fast_transformers/events/filters.py -> build/lib.linux-x86_64-3.8/fast_transformers/events
  creating build/lib.linux-x86_64-3.8/fast_transformers/sparse_product
  copying fast_transformers/sparse_product/__init__.py -> build/lib.linux-x86_64-3.8/fast_transformers/sparse_product
  creating build/lib.linux-x86_64-3.8/fast_transformers/hashing
  copying fast_transformers/hashing/__init__.py -> build/lib.linux-x86_64-3.8/fast_transformers/hashing
  creating build/lib.linux-x86_64-3.8/fast_transformers/aggregate
  copying fast_transformers/aggregate/__init__.py -> build/lib.linux-x86_64-3.8/fast_transformers/aggregate
  creating build/lib.linux-x86_64-3.8/fast_transformers/local_product
  copying fast_transformers/local_product/__init__.py -> build/lib.linux-x86_64-3.8/fast_transformers/local_product
  creating build/lib.linux-x86_64-3.8/fast_transformers/builders
  copying fast_transformers/builders/attention_builders.py -> build/lib.linux-x86_64-3.8/fast_transformers/builders
  copying fast_transformers/builders/__init__.py -> build/lib.linux-x86_64-3.8/fast_transformers/builders
  copying fast_transformers/builders/base.py -> build/lib.linux-x86_64-3.8/fast_transformers/builders
  copying fast_transformers/builders/transformer_builders.py -> build/lib.linux-x86_64-3.8/fast_transformers/builders
  creating build/lib.linux-x86_64-3.8/fast_transformers/recurrent
  copying fast_transformers/recurrent/_utils.py -> build/lib.linux-x86_64-3.8/fast_transformers/recurrent
  copying fast_transformers/recurrent/transformers.py -> build/lib.linux-x86_64-3.8/fast_transformers/recurrent
  copying fast_transformers/recurrent/__init__.py -> build/lib.linux-x86_64-3.8/fast_transformers/recurrent
  creating build/lib.linux-x86_64-3.8/fast_transformers/causal_product
  copying fast_transformers/causal_product/__init__.py -> build/lib.linux-x86_64-3.8/fast_transformers/causal_product
  creating build/lib.linux-x86_64-3.8/fast_transformers/attention_registry
  copying fast_transformers/attention_registry/registry.py -> build/lib.linux-x86_64-3.8/fast_transformers/attention_registry
  copying fast_transformers/attention_registry/__init__.py -> build/lib.linux-x86_64-3.8/fast_transformers/attention_registry
  copying fast_transformers/attention_registry/spec.py -> build/lib.linux-x86_64-3.8/fast_transformers/attention_registry
  creating build/lib.linux-x86_64-3.8/fast_transformers/attention
  copying fast_transformers/attention/improved_clustered_attention.py -> build/lib.linux-x86_64-3.8/fast_transformers/attention
  copying fast_transformers/attention/exact_topk_attention.py -> build/lib.linux-x86_64-3.8/fast_transformers/attention
  copying fast_transformers/attention/conditional_full_attention.py -> build/lib.linux-x86_64-3.8/fast_transformers/attention
  copying fast_transformers/attention/full_attention.py -> build/lib.linux-x86_64-3.8/fast_transformers/attention
  copying fast_transformers/attention/improved_clustered_causal_attention.py -> build/lib.linux-x86_64-3.8/fast_transformers/attention
  copying fast_transformers/attention/causal_linear_attention.py -> build/lib.linux-x86_64-3.8/fast_transformers/attention
  copying fast_transformers/attention/reformer_attention.py -> build/lib.linux-x86_64-3.8/fast_transformers/attention
  copying fast_transformers/attention/__init__.py -> build/lib.linux-x86_64-3.8/fast_transformers/attention
  copying fast_transformers/attention/clustered_attention.py -> build/lib.linux-x86_64-3.8/fast_transformers/attention
  copying fast_transformers/attention/attention_layer.py -> build/lib.linux-x86_64-3.8/fast_transformers/attention
  copying fast_transformers/attention/local_attention.py -> build/lib.linux-x86_64-3.8/fast_transformers/attention
  copying fast_transformers/attention/linear_attention.py -> build/lib.linux-x86_64-3.8/fast_transformers/attention
  creating build/lib.linux-x86_64-3.8/fast_transformers/clustering/hamming
  copying fast_transformers/clustering/hamming/__init__.py -> build/lib.linux-x86_64-3.8/fast_transformers/clustering/hamming
  creating build/lib.linux-x86_64-3.8/fast_transformers/recurrent/attention
  copying fast_transformers/recurrent/attention/__init__.py -> build/lib.linux-x86_64-3.8/fast_transformers/recurrent/attention
  creating build/lib.linux-x86_64-3.8/fast_transformers/recurrent/attention/self_attention
  copying fast_transformers/recurrent/attention/self_attention/full_attention.py -> build/lib.linux-x86_64-3.8/fast_transformers/recurrent/attention/self_attention
  copying fast_transformers/recurrent/attention/self_attention/__init__.py -> build/lib.linux-x86_64-3.8/fast_transformers/recurrent/attention/self_attention
  copying fast_transformers/recurrent/attention/self_attention/attention_layer.py -> build/lib.linux-x86_64-3.8/fast_transformers/recurrent/attention/self_attention
  copying fast_transformers/recurrent/attention/self_attention/linear_attention.py -> build/lib.linux-x86_64-3.8/fast_transformers/recurrent/attention/self_attention
  creating build/lib.linux-x86_64-3.8/fast_transformers/recurrent/attention/cross_attention
  copying fast_transformers/recurrent/attention/cross_attention/full_attention.py -> build/lib.linux-x86_64-3.8/fast_transformers/recurrent/attention/cross_attention
  copying fast_transformers/recurrent/attention/cross_attention/__init__.py -> build/lib.linux-x86_64-3.8/fast_transformers/recurrent/attention/cross_attention
  copying fast_transformers/recurrent/attention/cross_attention/attention_layer.py -> build/lib.linux-x86_64-3.8/fast_transformers/recurrent/attention/cross_attention
  copying fast_transformers/recurrent/attention/cross_attention/linear_attention.py -> build/lib.linux-x86_64-3.8/fast_transformers/recurrent/attention/cross_attention
  running build_ext
  building 'fast_transformers.hashing.hash_cpu' extension
  creating /tmp/pip-install-b8w5wyp7/pytorch-fast-transformers_53662d6fb71145ce9d0ea5098997afe6/build/temp.linux-x86_64-3.8
  creating /tmp/pip-install-b8w5wyp7/pytorch-fast-transformers_53662d6fb71145ce9d0ea5098997afe6/build/temp.linux-x86_64-3.8/fast_transformers
  creating /tmp/pip-install-b8w5wyp7/pytorch-fast-transformers_53662d6fb71145ce9d0ea5098997afe6/build/temp.linux-x86_64-3.8/fast_transformers/hashing
  Emitting ninja build file /tmp/pip-install-b8w5wyp7/pytorch-fast-transformers_53662d6fb71145ce9d0ea5098997afe6/build/temp.linux-x86_64-3.8/build.ninja...
  Compiling objects...
  Allowing ninja to set a default number of workers... (overridable by setting the environment variable MAX_JOBS=N)
  [1/1] c++ -MMD -MF /tmp/pip-install-b8w5wyp7/pytorch-fast-transformers_53662d6fb71145ce9d0ea5098997afe6/build/temp.linux-x86_64-3.8/fast_transformers/hashing/hash_cpu.o.d -pthread -B /opt/conda/compiler_compat -Wl,--sysroot=/ -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC -I/opt/conda/lib/python3.8/site-packages/torch/include -I/opt/conda/lib/python3.8/site-packages/torch/include/torch/csrc/api/include -I/opt/conda/lib/python3.8/site-packages/torch/include/TH -I/opt/conda/lib/python3.8/site-packages/torch/include/THC -I/opt/conda/include/python3.8 -c -c /tmp/pip-install-b8w5wyp7/pytorch-fast-transformers_53662d6fb71145ce9d0ea5098997afe6/fast_transformers/hashing/hash_cpu.cpp -o /tmp/pip-install-b8w5wyp7/pytorch-fast-transformers_53662d6fb71145ce9d0ea5098997afe6/build/temp.linux-x86_64-3.8/fast_transformers/hashing/hash_cpu.o -fopenmp -ffast-math -DTORCH_API_INCLUDE_EXTENSION_H '-DPYBIND11_COMPILER_TYPE="_gcc"' '-DPYBIND11_STDLIB="_libstdcpp"' '-DPYBIND11_BUILD_ABI="_cxxabi1011"' -DTORCH_EXTENSION_NAME=hash_cpu -D_GLIBCXX_USE_CXX11_ABI=1 -std=c++14
  cc1plus: warning: command line option ‘-Wstrict-prototypes’ is valid for C/ObjC but not for C++
  g++ -pthread -shared -B /opt/conda/compiler_compat -L/opt/conda/lib -Wl,-rpath=/opt/conda/lib -Wl,--no-as-needed -Wl,--sysroot=/ /tmp/pip-install-b8w5wyp7/pytorch-fast-transformers_53662d6fb71145ce9d0ea5098997afe6/build/temp.linux-x86_64-3.8/fast_transformers/hashing/hash_cpu.o -L/opt/conda/lib/python3.8/site-packages/torch/lib -lc10 -ltorch -ltorch_cpu -ltorch_python -o build/lib.linux-x86_64-3.8/fast_transformers/hashing/hash_cpu.cpython-38-x86_64-linux-gnu.so
  building 'fast_transformers.aggregate.aggregate_cpu' extension
  creating /tmp/pip-install-b8w5wyp7/pytorch-fast-transformers_53662d6fb71145ce9d0ea5098997afe6/build/temp.linux-x86_64-3.8/fast_transformers/aggregate
  Emitting ninja build file /tmp/pip-install-b8w5wyp7/pytorch-fast-transformers_53662d6fb71145ce9d0ea5098997afe6/build/temp.linux-x86_64-3.8/build.ninja...
  Compiling objects...
  Allowing ninja to set a default number of workers... (overridable by setting the environment variable MAX_JOBS=N)
  [1/1] c++ -MMD -MF /tmp/pip-install-b8w5wyp7/pytorch-fast-transformers_53662d6fb71145ce9d0ea5098997afe6/build/temp.linux-x86_64-3.8/fast_transformers/aggregate/aggregate_cpu.o.d -pthread -B /opt/conda/compiler_compat -Wl,--sysroot=/ -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC -I/opt/conda/lib/python3.8/site-packages/torch/include -I/opt/conda/lib/python3.8/site-packages/torch/include/torch/csrc/api/include -I/opt/conda/lib/python3.8/site-packages/torch/include/TH -I/opt/conda/lib/python3.8/site-packages/torch/include/THC -I/opt/conda/include/python3.8 -c -c /tmp/pip-install-b8w5wyp7/pytorch-fast-transformers_53662d6fb71145ce9d0ea5098997afe6/fast_transformers/aggregate/aggregate_cpu.cpp -o /tmp/pip-install-b8w5wyp7/pytorch-fast-transformers_53662d6fb71145ce9d0ea5098997afe6/build/temp.linux-x86_64-3.8/fast_transformers/aggregate/aggregate_cpu.o -fopenmp -ffast-math -DTORCH_API_INCLUDE_EXTENSION_H '-DPYBIND11_COMPILER_TYPE="_gcc"' '-DPYBIND11_STDLIB="_libstdcpp"' '-DPYBIND11_BUILD_ABI="_cxxabi1011"' -DTORCH_EXTENSION_NAME=aggregate_cpu -D_GLIBCXX_USE_CXX11_ABI=1 -std=c++14
  cc1plus: warning: command line option ‘-Wstrict-prototypes’ is valid for C/ObjC but not for C++
  g++ -pthread -shared -B /opt/conda/compiler_compat -L/opt/conda/lib -Wl,-rpath=/opt/conda/lib -Wl,--no-as-needed -Wl,--sysroot=/ /tmp/pip-install-b8w5wyp7/pytorch-fast-transformers_53662d6fb71145ce9d0ea5098997afe6/build/temp.linux-x86_64-3.8/fast_transformers/aggregate/aggregate_cpu.o -L/opt/conda/lib/python3.8/site-packages/torch/lib -lc10 -ltorch -ltorch_cpu -ltorch_python -o build/lib.linux-x86_64-3.8/fast_transformers/aggregate/aggregate_cpu.cpython-38-x86_64-linux-gnu.so
  building 'fast_transformers.clustering.hamming.cluster_cpu' extension
  creating /tmp/pip-install-b8w5wyp7/pytorch-fast-transformers_53662d6fb71145ce9d0ea5098997afe6/build/temp.linux-x86_64-3.8/fast_transformers/clustering
  creating /tmp/pip-install-b8w5wyp7/pytorch-fast-transformers_53662d6fb71145ce9d0ea5098997afe6/build/temp.linux-x86_64-3.8/fast_transformers/clustering/hamming
  Emitting ninja build file /tmp/pip-install-b8w5wyp7/pytorch-fast-transformers_53662d6fb71145ce9d0ea5098997afe6/build/temp.linux-x86_64-3.8/build.ninja...
  Compiling objects...
  Allowing ninja to set a default number of workers... (overridable by setting the environment variable MAX_JOBS=N)
  [1/1] c++ -MMD -MF /tmp/pip-install-b8w5wyp7/pytorch-fast-transformers_53662d6fb71145ce9d0ea5098997afe6/build/temp.linux-x86_64-3.8/fast_transformers/clustering/hamming/cluster_cpu.o.d -pthread -B /opt/conda/compiler_compat -Wl,--sysroot=/ -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC -I/opt/conda/lib/python3.8/site-packages/torch/include -I/opt/conda/lib/python3.8/site-packages/torch/include/torch/csrc/api/include -I/opt/conda/lib/python3.8/site-packages/torch/include/TH -I/opt/conda/lib/python3.8/site-packages/torch/include/THC -I/opt/conda/include/python3.8 -c -c /tmp/pip-install-b8w5wyp7/pytorch-fast-transformers_53662d6fb71145ce9d0ea5098997afe6/fast_transformers/clustering/hamming/cluster_cpu.cpp -o /tmp/pip-install-b8w5wyp7/pytorch-fast-transformers_53662d6fb71145ce9d0ea5098997afe6/build/temp.linux-x86_64-3.8/fast_transformers/clustering/hamming/cluster_cpu.o -fopenmp -ffast-math -DTORCH_API_INCLUDE_EXTENSION_H '-DPYBIND11_COMPILER_TYPE="_gcc"' '-DPYBIND11_STDLIB="_libstdcpp"' '-DPYBIND11_BUILD_ABI="_cxxabi1011"' -DTORCH_EXTENSION_NAME=cluster_cpu -D_GLIBCXX_USE_CXX11_ABI=1 -std=c++14
  cc1plus: warning: command line option ‘-Wstrict-prototypes’ is valid for C/ObjC but not for C++
  g++ -pthread -shared -B /opt/conda/compiler_compat -L/opt/conda/lib -Wl,-rpath=/opt/conda/lib -Wl,--no-as-needed -Wl,--sysroot=/ /tmp/pip-install-b8w5wyp7/pytorch-fast-transformers_53662d6fb71145ce9d0ea5098997afe6/build/temp.linux-x86_64-3.8/fast_transformers/clustering/hamming/cluster_cpu.o -L/opt/conda/lib/python3.8/site-packages/torch/lib -lc10 -ltorch -ltorch_cpu -ltorch_python -o build/lib.linux-x86_64-3.8/fast_transformers/clustering/hamming/cluster_cpu.cpython-38-x86_64-linux-gnu.so
  building 'fast_transformers.sparse_product.sparse_product_cpu' extension
  creating /tmp/pip-install-b8w5wyp7/pytorch-fast-transformers_53662d6fb71145ce9d0ea5098997afe6/build/temp.linux-x86_64-3.8/fast_transformers/sparse_product
  Emitting ninja build file /tmp/pip-install-b8w5wyp7/pytorch-fast-transformers_53662d6fb71145ce9d0ea5098997afe6/build/temp.linux-x86_64-3.8/build.ninja...
  Compiling objects...
  Allowing ninja to set a default number of workers... (overridable by setting the environment variable MAX_JOBS=N)
  [1/1] c++ -MMD -MF /tmp/pip-install-b8w5wyp7/pytorch-fast-transformers_53662d6fb71145ce9d0ea5098997afe6/build/temp.linux-x86_64-3.8/fast_transformers/sparse_product/sparse_product_cpu.o.d -pthread -B /opt/conda/compiler_compat -Wl,--sysroot=/ -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC -I/opt/conda/lib/python3.8/site-packages/torch/include -I/opt/conda/lib/python3.8/site-packages/torch/include/torch/csrc/api/include -I/opt/conda/lib/python3.8/site-packages/torch/include/TH -I/opt/conda/lib/python3.8/site-packages/torch/include/THC -I/opt/conda/include/python3.8 -c -c /tmp/pip-install-b8w5wyp7/pytorch-fast-transformers_53662d6fb71145ce9d0ea5098997afe6/fast_transformers/sparse_product/sparse_product_cpu.cpp -o /tmp/pip-install-b8w5wyp7/pytorch-fast-transformers_53662d6fb71145ce9d0ea5098997afe6/build/temp.linux-x86_64-3.8/fast_transformers/sparse_product/sparse_product_cpu.o -fopenmp -ffast-math -DTORCH_API_INCLUDE_EXTENSION_H '-DPYBIND11_COMPILER_TYPE="_gcc"' '-DPYBIND11_STDLIB="_libstdcpp"' '-DPYBIND11_BUILD_ABI="_cxxabi1011"' -DTORCH_EXTENSION_NAME=sparse_product_cpu -D_GLIBCXX_USE_CXX11_ABI=1 -std=c++14
  cc1plus: warning: command line option ‘-Wstrict-prototypes’ is valid for C/ObjC but not for C++
  g++ -pthread -shared -B /opt/conda/compiler_compat -L/opt/conda/lib -Wl,-rpath=/opt/conda/lib -Wl,--no-as-needed -Wl,--sysroot=/ /tmp/pip-install-b8w5wyp7/pytorch-fast-transformers_53662d6fb71145ce9d0ea5098997afe6/build/temp.linux-x86_64-3.8/fast_transformers/sparse_product/sparse_product_cpu.o -L/opt/conda/lib/python3.8/site-packages/torch/lib -lc10 -ltorch -ltorch_cpu -ltorch_python -o build/lib.linux-x86_64-3.8/fast_transformers/sparse_product/sparse_product_cpu.cpython-38-x86_64-linux-gnu.so
  building 'fast_transformers.sparse_product.clustered_sparse_product_cpu' extension
  Emitting ninja build file /tmp/pip-install-b8w5wyp7/pytorch-fast-transformers_53662d6fb71145ce9d0ea5098997afe6/build/temp.linux-x86_64-3.8/build.ninja...
  Compiling objects...
  Allowing ninja to set a default number of workers... (overridable by setting the environment variable MAX_JOBS=N)
  [1/1] c++ -MMD -MF /tmp/pip-install-b8w5wyp7/pytorch-fast-transformers_53662d6fb71145ce9d0ea5098997afe6/build/temp.linux-x86_64-3.8/fast_transformers/sparse_product/clustered_sparse_product_cpu.o.d -pthread -B /opt/conda/compiler_compat -Wl,--sysroot=/ -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC -I/opt/conda/lib/python3.8/site-packages/torch/include -I/opt/conda/lib/python3.8/site-packages/torch/include/torch/csrc/api/include -I/opt/conda/lib/python3.8/site-packages/torch/include/TH -I/opt/conda/lib/python3.8/site-packages/torch/include/THC -I/opt/conda/include/python3.8 -c -c /tmp/pip-install-b8w5wyp7/pytorch-fast-transformers_53662d6fb71145ce9d0ea5098997afe6/fast_transformers/sparse_product/clustered_sparse_product_cpu.cpp -o /tmp/pip-install-b8w5wyp7/pytorch-fast-transformers_53662d6fb71145ce9d0ea5098997afe6/build/temp.linux-x86_64-3.8/fast_transformers/sparse_product/clustered_sparse_product_cpu.o -fopenmp -ffast-math -DTORCH_API_INCLUDE_EXTENSION_H '-DPYBIND11_COMPILER_TYPE="_gcc"' '-DPYBIND11_STDLIB="_libstdcpp"' '-DPYBIND11_BUILD_ABI="_cxxabi1011"' -DTORCH_EXTENSION_NAME=clustered_sparse_product_cpu -D_GLIBCXX_USE_CXX11_ABI=1 -std=c++14
  cc1plus: warning: command line option ‘-Wstrict-prototypes’ is valid for C/ObjC but not for C++
  g++ -pthread -shared -B /opt/conda/compiler_compat -L/opt/conda/lib -Wl,-rpath=/opt/conda/lib -Wl,--no-as-needed -Wl,--sysroot=/ /tmp/pip-install-b8w5wyp7/pytorch-fast-transformers_53662d6fb71145ce9d0ea5098997afe6/build/temp.linux-x86_64-3.8/fast_transformers/sparse_product/clustered_sparse_product_cpu.o -L/opt/conda/lib/python3.8/site-packages/torch/lib -lc10 -ltorch -ltorch_cpu -ltorch_python -o build/lib.linux-x86_64-3.8/fast_transformers/sparse_product/clustered_sparse_product_cpu.cpython-38-x86_64-linux-gnu.so
  building 'fast_transformers.causal_product.causal_product_cpu' extension
  creating /tmp/pip-install-b8w5wyp7/pytorch-fast-transformers_53662d6fb71145ce9d0ea5098997afe6/build/temp.linux-x86_64-3.8/fast_transformers/causal_product
  Emitting ninja build file /tmp/pip-install-b8w5wyp7/pytorch-fast-transformers_53662d6fb71145ce9d0ea5098997afe6/build/temp.linux-x86_64-3.8/build.ninja...
  Compiling objects...
  Allowing ninja to set a default number of workers... (overridable by setting the environment variable MAX_JOBS=N)
  [1/1] c++ -MMD -MF /tmp/pip-install-b8w5wyp7/pytorch-fast-transformers_53662d6fb71145ce9d0ea5098997afe6/build/temp.linux-x86_64-3.8/fast_transformers/causal_product/causal_product_cpu.o.d -pthread -B /opt/conda/compiler_compat -Wl,--sysroot=/ -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC -I/opt/conda/lib/python3.8/site-packages/torch/include -I/opt/conda/lib/python3.8/site-packages/torch/include/torch/csrc/api/include -I/opt/conda/lib/python3.8/site-packages/torch/include/TH -I/opt/conda/lib/python3.8/site-packages/torch/include/THC -I/opt/conda/include/python3.8 -c -c /tmp/pip-install-b8w5wyp7/pytorch-fast-transformers_53662d6fb71145ce9d0ea5098997afe6/fast_transformers/causal_product/causal_product_cpu.cpp -o /tmp/pip-install-b8w5wyp7/pytorch-fast-transformers_53662d6fb71145ce9d0ea5098997afe6/build/temp.linux-x86_64-3.8/fast_transformers/causal_product/causal_product_cpu.o -fopenmp -ffast-math -DTORCH_API_INCLUDE_EXTENSION_H '-DPYBIND11_COMPILER_TYPE="_gcc"' '-DPYBIND11_STDLIB="_libstdcpp"' '-DPYBIND11_BUILD_ABI="_cxxabi1011"' -DTORCH_EXTENSION_NAME=causal_product_cpu -D_GLIBCXX_USE_CXX11_ABI=1 -std=c++14
  cc1plus: warning: command line option ‘-Wstrict-prototypes’ is valid for C/ObjC but not for C++
  g++ -pthread -shared -B /opt/conda/compiler_compat -L/opt/conda/lib -Wl,-rpath=/opt/conda/lib -Wl,--no-as-needed -Wl,--sysroot=/ /tmp/pip-install-b8w5wyp7/pytorch-fast-transformers_53662d6fb71145ce9d0ea5098997afe6/build/temp.linux-x86_64-3.8/fast_transformers/causal_product/causal_product_cpu.o -L/opt/conda/lib/python3.8/site-packages/torch/lib -lc10 -ltorch -ltorch_cpu -ltorch_python -o build/lib.linux-x86_64-3.8/fast_transformers/causal_product/causal_product_cpu.cpython-38-x86_64-linux-gnu.so
  building 'fast_transformers.local_product.local_product_cpu' extension
  creating /tmp/pip-install-b8w5wyp7/pytorch-fast-transformers_53662d6fb71145ce9d0ea5098997afe6/build/temp.linux-x86_64-3.8/fast_transformers/local_product
  Emitting ninja build file /tmp/pip-install-b8w5wyp7/pytorch-fast-transformers_53662d6fb71145ce9d0ea5098997afe6/build/temp.linux-x86_64-3.8/build.ninja...
  Compiling objects...
  Allowing ninja to set a default number of workers... (overridable by setting the environment variable MAX_JOBS=N)
  [1/1] c++ -MMD -MF /tmp/pip-install-b8w5wyp7/pytorch-fast-transformers_53662d6fb71145ce9d0ea5098997afe6/build/temp.linux-x86_64-3.8/fast_transformers/local_product/local_product_cpu.o.d -pthread -B /opt/conda/compiler_compat -Wl,--sysroot=/ -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC -I/opt/conda/lib/python3.8/site-packages/torch/include -I/opt/conda/lib/python3.8/site-packages/torch/include/torch/csrc/api/include -I/opt/conda/lib/python3.8/site-packages/torch/include/TH -I/opt/conda/lib/python3.8/site-packages/torch/include/THC -I/opt/conda/include/python3.8 -c -c /tmp/pip-install-b8w5wyp7/pytorch-fast-transformers_53662d6fb71145ce9d0ea5098997afe6/fast_transformers/local_product/local_product_cpu.cpp -o /tmp/pip-install-b8w5wyp7/pytorch-fast-transformers_53662d6fb71145ce9d0ea5098997afe6/build/temp.linux-x86_64-3.8/fast_transformers/local_product/local_product_cpu.o -fopenmp -ffast-math -DTORCH_API_INCLUDE_EXTENSION_H '-DPYBIND11_COMPILER_TYPE="_gcc"' '-DPYBIND11_STDLIB="_libstdcpp"' '-DPYBIND11_BUILD_ABI="_cxxabi1011"' -DTORCH_EXTENSION_NAME=local_product_cpu -D_GLIBCXX_USE_CXX11_ABI=1 -std=c++14
  cc1plus: warning: command line option ‘-Wstrict-prototypes’ is valid for C/ObjC but not for C++
  g++ -pthread -shared -B /opt/conda/compiler_compat -L/opt/conda/lib -Wl,-rpath=/opt/conda/lib -Wl,--no-as-needed -Wl,--sysroot=/ /tmp/pip-install-b8w5wyp7/pytorch-fast-transformers_53662d6fb71145ce9d0ea5098997afe6/build/temp.linux-x86_64-3.8/fast_transformers/local_product/local_product_cpu.o -L/opt/conda/lib/python3.8/site-packages/torch/lib -lc10 -ltorch -ltorch_cpu -ltorch_python -o build/lib.linux-x86_64-3.8/fast_transformers/local_product/local_product_cpu.cpython-38-x86_64-linux-gnu.so
  building 'fast_transformers.hashing.hash_cuda' extension
  Emitting ninja build file /tmp/pip-install-b8w5wyp7/pytorch-fast-transformers_53662d6fb71145ce9d0ea5098997afe6/build/temp.linux-x86_64-3.8/build.ninja...
  Compiling objects...
  Allowing ninja to set a default number of workers... (overridable by setting the environment variable MAX_JOBS=N)
  [1/1] /opt/conda/bin/nvcc  -I/opt/conda/lib/python3.8/site-packages/torch/include -I/opt/conda/lib/python3.8/site-packages/torch/include/torch/csrc/api/include -I/opt/conda/lib/python3.8/site-packages/torch/include/TH -I/opt/conda/lib/python3.8/site-packages/torch/include/THC -I/opt/conda/include -I/opt/conda/include/python3.8 -c -c /tmp/pip-install-b8w5wyp7/pytorch-fast-transformers_53662d6fb71145ce9d0ea5098997afe6/fast_transformers/hashing/hash_cuda.cu -o /tmp/pip-install-b8w5wyp7/pytorch-fast-transformers_53662d6fb71145ce9d0ea5098997afe6/build/temp.linux-x86_64-3.8/fast_transformers/hashing/hash_cuda.o -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_BFLOAT16_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr --compiler-options ''"'"'-fPIC'"'"'' -arch=compute_50 -DTORCH_API_INCLUDE_EXTENSION_H '-DPYBIND11_COMPILER_TYPE="_gcc"' '-DPYBIND11_STDLIB="_libstdcpp"' '-DPYBIND11_BUILD_ABI="_cxxabi1011"' -DTORCH_EXTENSION_NAME=hash_cuda -D_GLIBCXX_USE_CXX11_ABI=1 -std=c++14
  nvcc warning : The 'compute_35', 'compute_37', 'compute_50', 'sm_35', 'sm_37' and 'sm_50' architectures are deprecated, and may be removed in a future release (Use -Wno-deprecated-gpu-targets to suppress warning).
  g++ -pthread -shared -B /opt/conda/compiler_compat -L/opt/conda/lib -Wl,-rpath=/opt/conda/lib -Wl,--no-as-needed -Wl,--sysroot=/ /tmp/pip-install-b8w5wyp7/pytorch-fast-transformers_53662d6fb71145ce9d0ea5098997afe6/build/temp.linux-x86_64-3.8/fast_transformers/hashing/hash_cuda.o -L/opt/conda/lib/python3.8/site-packages/torch/lib -L/opt/conda/lib64 -lc10 -ltorch -ltorch_cpu -ltorch_python -lcudart -lc10_cuda -ltorch_cuda -o build/lib.linux-x86_64-3.8/fast_transformers/hashing/hash_cuda.cpython-38-x86_64-linux-gnu.so
  /opt/conda/compiler_compat/ld: cannot find -lc10_cuda
  /opt/conda/compiler_compat/ld: cannot find -ltorch_cuda
  collect2: error: ld returned 1 exit status
  error: command 'g++' failed with exit status 1
  ----------------------------------------
  ERROR: Failed building wheel for pytorch-fast-transformers

Please help! Thanks very much.

Let me know if you need any additional info.

wrrogers commented 3 years ago

I'm noticing that at the top it says no CUDA runtime was found. You may need to check that the CUDA_HOME path is set correctly.
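A quick way to check this, as a sketch (the `/usr/local/cuda` fallback path is an assumption and may differ on your system):

```shell
# The build log above shows the fallback CUDA_HOME='/opt/conda', which is a
# conda prefix rather than a CUDA toolkit. Check what the build will see:
echo "CUDA_HOME=${CUDA_HOME:-unset}"

# A real toolkit directory should contain the nvcc compiler:
ls "${CUDA_HOME:-/usr/local/cuda}/bin/nvcc" 2>/dev/null || echo "nvcc not found here"
```

If nvcc is not found, exporting `CUDA_HOME` to point at the actual toolkit directory before running pip might help.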

wrrogers commented 3 years ago

Also, do you have G++ installed?

angeloskath commented 3 years ago

Hi,

I am closing the issue due to inactivity. Feel free to reopen it if needed.

Cheers, Angelos

annahung31 commented 3 years ago

I also encountered this error. Does anyone know how to fix it? I've checked g++ and it's installed: g++ (Ubuntu 5.4.0-6ubuntu1~16.04.12)

alemoreno991 commented 1 year ago

I'm getting the same error. Could someone give me a hand solving this issue?

My g++ is the following:

g++ (Ubuntu 11.2.0-19ubuntu1) 11.2.0
Copyright (C) 2021 Free Software Foundation, Inc.
This is free software; see the source for copying conditions. There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.

My CUDA_HOME is /usr/local/cuda. In this directory I have bin, compute-sanitizer and libdevice directories.
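For what it's worth, the link error in the log above ("cannot find -lc10_cuda" / "cannot find -ltorch_cuda") usually means the installed torch is a CPU-only build, which does not ship those libraries. A small check using only the standard library (the library names are taken from the failing linker line; this is a diagnostic sketch, not part of the package):

```python
import importlib.util
import os

# Locate the installed torch package without importing it (a file check is
# enough; we don't need to load the C++ runtime).
spec = importlib.util.find_spec("torch")
if spec is None:
    print("torch is not installed in this environment")
else:
    lib_dir = os.path.join(os.path.dirname(spec.origin), "lib")
    # These are the libraries the failing g++ link step asked for
    # (-lc10_cuda, -ltorch_cuda); CPU-only torch wheels do not include them.
    for name in ("libc10_cuda.so", "libtorch_cuda.so"):
        path = os.path.join(lib_dir, name)
        print(name, "present" if os.path.exists(path) else "MISSING")
```

If they come up MISSING, reinstalling a CUDA-enabled torch build before compiling the extensions is likely the fix.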

hungngo32 commented 9 months ago

Try pip install -v pytorch-fast-transformers to see the full build output.