idiap / fast-transformers

Pytorch library for fast transformer implementations

Unable to install extensions #6

Closed TariqAHassan closed 3 years ago

TariqAHassan commented 4 years ago

Hello

I had a bit of trouble installing this package via pip. I've included the steps I took to resolve these problems below.

Resolution

  1. First I encountered this somewhat common PyTorch error:
Your compiler (g++) is not compatible with the compiler Pytorch was
built with for this platform, which is clang++ on darwin. Please
use clang++ to to compile your extension. Alternatively, you may
compile PyTorch from source using g++, and then you can also use
g++ to compile your extension.

See https://github.com/pytorch/pytorch/blob/master/CONTRIBUTING.md for help
with compiling PyTorch from source.

I solved this by dropping the binary and installing pytorch from source.

  2. Next I encountered an issue with the -fopenmp flag used when compiling the C++ extensions. I tried to solve this by running
brew install llvm libomp

and replacing each

extra_compile_args=["-fopenmp", ...]

with

extra_compile_args=["-Xpreprocessor", "-fopenmp", ...]

then installing the normal way:

git clone git@github.com:idiap/fast-transformers.git
cd fast-transformers
# modify setup.py as shown above
python setup.py install
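The setup.py change described above can be sketched as follows. The extension name and source path here are illustrative placeholders, not the actual entries from fast-transformers' setup.py:

```python
# Illustrative sketch of the extra_compile_args change described above.
# "example_cpu_extension" and its source file are placeholders, not the
# real extension entries from fast-transformers' setup.py.
from setuptools import Extension

ext = Extension(
    "example_cpu_extension",
    sources=["example_cpu_extension.cpp"],
    # Original: extra_compile_args=["-fopenmp", ...]
    # Prefixing with -Xpreprocessor lets Apple clang forward the
    # -fopenmp flag instead of rejecting it outright:
    extra_compile_args=["-Xpreprocessor", "-fopenmp"],
)
```

Depending on the setup, you may also need to link against Homebrew's libomp (e.g. via extra_link_args and include/library paths); that is an assumption on my part, not something verified in this thread.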
  3. From there I have been able to run a simple forward pass through a model:
import torch
from fast_transformers.builders import TransformerEncoderBuilder

# Create the builder for our transformers
builder = TransformerEncoderBuilder.from_kwargs(
    n_layers=8,
    n_heads=8,
    query_dimensions=64,
    value_dimensions=64,
    feed_forward_dimensions=1024
)

# Build a transformer with linear attention
builder.attention_type = "linear"
linear_model = builder.get()

# Construct the dummy input
X = torch.rand(10, 1000, 8*64)

with torch.no_grad():
    out = linear_model(X)

assert isinstance(out, torch.Tensor)  # True

Unfortunately it will be a few days before I can run a proper test on a GPU (I have some data preprocessing to do first :)), but I didn't want to wait until then to post this.

System information

Thank you for your wonderful work!

angeloskath commented 4 years ago

Wow, thanks for the very detailed issue and for working out solutions.

I think it is obvious that we did not test on macOS. We will look into it, at the very least to incorporate your solutions and make building on macOS easier in general.

Thanks, Angelos

kevinbache commented 4 years ago

This solution worked for me too on a cpu-only OS X build. I tried building pytorch and fast-transformers with posix clang as well and that didn't work.

System information

matthew-jurewicz commented 4 years ago

I use -Xclang instead of -Xpreprocessor; I'm not sure what the difference is.