idiap / fast-transformers

PyTorch library for fast transformer implementations

pip install and C++ compilation error, then name 'compute_hashes_cuda' is not defined #89

Closed nikjetchev closed 3 years ago

nikjetchev commented 3 years ago

Hi,

I am installing fast-transformers via pip:

pip install pytorch-fast-transformers==0.1.3

but I get an error when compiling the C++ code (the full log of warnings is very long, so I am pasting only the first and last lines for readability):

 running build_ext
  building 'fast_transformers.hashing.hash_cpu' extension
  creating build/temp.linux-x86_64-3.8
  creating build/temp.linux-x86_64-3.8/fast_transformers
  creating build/temp.linux-x86_64-3.8/fast_transformers/hashing
  gcc -pthread -B /usr/local/anaconda3/compiler_compat -Wl,--sysroot=/ -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC -I/home/njetchev/hrnet2/lib/python3.8/site-packages/torch/include -I/home/njetchev/hrnet2/lib/python3.8/site-packages/torch/include/torch/csrc/api/include -I/home/njetchev/hrnet2/lib/python3.8/site-packages/torch/include/TH -I/home/njetchev/hrnet2/lib/python3.8/site-packages/torch/include/THC -I/home/njetchev/hrnet2/include -I/usr/local/anaconda3/include/python3.8 -c fast_transformers/hashing/hash_cpu.cpp -o build/temp.linux-x86_64-3.8/fast_transformers/hashing/hash_cpu.o -fopenmp -ffast-math -DTORCH_API_INCLUDE_EXTENSION_H -DPYBIND11_COMPILER_TYPE="_gcc" -DPYBIND11_STDLIB="_libstdcpp" -DPYBIND11_BUILD_ABI="_cxxabi1011" -DTORCH_EXTENSION_NAME=hash_cpu -D_GLIBCXX_USE_CXX11_ABI=0 -std=c++14
  cc1plus: warning: command line option ‘-Wstrict-prototypes’ is valid for C/ObjC but not for C++
  fast_transformers/hashing/hash_cpu.cpp: In function ‘void compute_hashes(at::Tensor, at::Tensor, at::Tensor)’:
  fast_transformers/hashing/hash_cpu.cpp:17:45: warning: ‘at::GenericPackedTensorAccessor<T, N, PtrTraits, index_t> at::Tensor::packed_accessor() const & [with T = float; long unsigned int N = 2; PtrTraits = at::DefaultPtrTraits; index_t = long int]’ is deprecated: packed_accessor is deprecated, use packed_accessor32 or packed_accessor
....

cc1plus: warning: command line option ‘-Wstrict-prototypes’ is valid for C/ObjC but not for C++
  fast_transformers/clustering/hamming/cluster_cpu.cpp: In function ‘void recompute_centroids(at::Tensor, at::Tensor, at::Tensor, at::Tensor, int)’:
  fast_transformers/clustering/hamming/cluster_cpu.cpp:136:69: error: base operand of ‘->’ has non-pointer type ‘const at::Generator’
                       int64_t c = at::detail::getDefaultCPUGenerator()->random();
                                                                       ^~
  error: command 'gcc' failed with exit status 1
  ----------------------------------------
  ERROR: Failed building wheel for pytorch-fast-transformers

This compilation failure makes some of the transformers, such as Clustered Attention, unusable for me. At runtime I get:

File "/home/njetchev/hrnet2/lib/python3.8/site-packages/fast_transformers/hashing/__init__.py", line 29, in compute_hashes
    compute_hashes_cuda(X, A, H)
NameError: name 'compute_hashes_cuda' is not defined

Can you help me with this issue? If fixing the C++ error is too difficult, is there a way to hack around it and replace the line that needs compute_hashes with an equivalent PyTorch or Python function?
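
For example, if compute_hashes just packs the sign bits of random hyperplane projections (with a bias term in the last column of A; I am only guessing at the semantics from the function signature, the real kernel may differ), maybe a slow pure-PyTorch fallback along these lines could work:

    import torch

    def compute_hashes_torch(X, A, H=None):
        # Guessed stand-in for compute_hashes_cuda / compute_hashes_cpu.
        # Assumption: each hash bit is the sign of a random hyperplane
        # projection, with A of shape (B, D+1) whose last column is a bias,
        # and the B bits are packed into an int64 code per point.
        N, D = X.shape
        B = A.shape[0]
        if H is None:
            H = torch.zeros(N, dtype=torch.int64, device=X.device)
        # (N, B) projections: X @ hyperplanes^T + bias
        projections = X @ A[:, :D].t() + A[:, D]
        bits = (projections > 0).to(torch.int64)           # (N, B) of 0/1
        powers = 2 ** torch.arange(B, device=X.device)     # bit weights (B <= 63)
        H.copy_((bits * powers).sum(dim=1))
        return H

It would be much slower than the compiled CPU/CUDA kernels, but it might at least let the clustered attention code run.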

Thanks a lot. This would help me finish the NeurIPS paper I am writing in the next 3 weeks; the paper tries to reproduce and compare against the results of some of the attention methods from this repository.

angeloskath commented 3 years ago

Hi,

Sorry for the late reply. Is there a specific reason you want to install v0.1.3? I would suggest trying 0.4.0 or installing directly from master.
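
For example, either of the following should work:

    pip install pytorch-fast-transformers==0.4.0
    # or install directly from master
    pip install git+https://github.com/idiap/fast-transformers.git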

In the GitHub Actions you can see which versions of Python and PyTorch are tested automatically on every commit; that should give you an idea of a combination that we know works.

Let me know if you are still experiencing problems.

Cheers, Angelos

angeloskath commented 3 years ago

I am closing the issue due to inactivity but feel free to reopen it if you are still experiencing problems.

Cheers, Angelos

sef1 commented 1 year ago

This issue also exists in the latest version when running with CUDA 12.