Open YuanChang98 opened 1 year ago
I have inference running on the model, and I didn't have any problems once I ensured CUDA and Torch could talk to each other (matching versions). I also installed the GPU build of torch even though the current server I'm using does not have GPUs.
```
cuda==11.8.0
torch==2.0.1+cu118
torchaudio==2.0.2+cu118
torchvision==0.15.2+cu118
pytorch-lightning==1.1.5
pytorch-fast-transformers==0.4.0
apex==0.1
```
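As a quick sanity check that the versions "match" in the sense above, you can compare the `+cuXYZ` tag baked into the torch wheel name against the installed CUDA release. This is just a sketch; `cuda_tags_match` is my own helper name, not anything from the repo:

```python
def cuda_tags_match(torch_version: str, cuda_version: str) -> bool:
    """Check that a torch wheel's "+cuXYZ" tag matches a CUDA release.

    e.g. torch 2.0.1+cu118 is built for CUDA 11.8.x.
    """
    if "+cu" not in torch_version:
        return False  # CPU-only build, no CUDA tag
    tag = torch_version.split("+cu", 1)[1]
    major, minor = cuda_version.split(".")[:2]
    return tag == f"{major}{minor}"

print(cuda_tags_match("2.0.1+cu118", "11.8.0"))  # True
print(cuda_tags_match("2.0.1+cu117", "11.8.0"))  # False
```

In a live environment you would pass in `torch.__version__` and the output of `nvcc --version` (or `torch.version.cuda`).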
For some reason, pytorch-fast-transformers uses `torch.linalg.qr`.
You need to uninstall and reinstall pytorch-fast-transformers:
```
pip uninstall pytorch-fast-transformers
pip install pytorch-fast-transformers==0.4.0
```
The pip-installed version uses `torch.qr` instead; see fast_transformers/feature_maps/fourier_features.py, line 37: `Q, _ = torch.qr(block)`.
This problem can also occur if you install pytorch-fast-transformers by compiling it from source.
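If you'd rather not reinstall, the API split can also be papered over with a small dispatch shim that picks whichever QR call the installed torch exposes. This is a sketch under my own naming (`make_qr` is not part of fast-transformers), and the demo uses stand-in modules so it runs without torch installed:

```python
import types

def make_qr(torch_mod):
    """Return a qr(block) -> Q function for either torch API."""
    # torch.linalg.qr exists in PyTorch >= 1.8; older builds only have torch.qr.
    linalg = getattr(torch_mod, "linalg", None)
    if linalg is not None and hasattr(linalg, "qr"):
        return lambda block: linalg.qr(block)[0]  # new API
    return lambda block: torch_mod.qr(block)[0]   # legacy API

# Stand-ins for an old and a new torch, so the sketch runs anywhere:
old_torch = types.SimpleNamespace(qr=lambda b: ("Q_old", "R"))
new_torch = types.SimpleNamespace(
    linalg=types.SimpleNamespace(qr=lambda b: ("Q_new", "R"))
)
print(make_qr(old_torch)(None))  # Q_old
print(make_qr(new_torch)(None))  # Q_new
```

With the real library you would call `make_qr(torch)` and use the returned function in place of the hard-coded `torch.linalg.qr` call.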
Hello, I replicated the conda environment following environment.md and downloaded the pre-trained model checkpoints. When I tried to run run_finetune_xxxx.sh, I got this error:
AttributeError: module 'torch.linalg' has no attribute 'qr'
The corresponding file is fast_transformers/feature_maps/fourier_features.py, line 37. I looked up the PyTorch documentation for the torch.linalg module and found that it does not have the function qr in PyTorch 1.7.1 and below (torch.linalg.qr was added in 1.8). To save trouble, I didn't upgrade PyTorch but instead downgraded pytorch-fast-transformers to version 0.3.0, which uses torch.qr, and it worked. So I'm wondering if there is a version mismatch between pytorch and pytorch-fast-transformers, or if I just didn't follow the guide correctly. Part of my conda environment is listed below:
CUDA version 11.0
pytorch==1.7.1
pytorch-lightning==1.1.5
pytorch-fast-transformers==0.4.0
(then changed to 0.3.0)
cudatoolkit==11.0.221
apex==0.1
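For what it's worth, the downgrade decision above can be expressed as a simple version check, based on my experience that 0.4.0 needs torch.linalg.qr while 0.3.0 calls torch.qr. `pick_fast_transformers` is a hypothetical helper for illustration, not something in the repo:

```python
def pick_fast_transformers(torch_version: str) -> str:
    # torch.linalg.qr was added in PyTorch 1.8, so torch <= 1.7 needs
    # pytorch-fast-transformers 0.3.0, which calls torch.qr instead.
    base = torch_version.split("+")[0]            # drop any "+cu118" suffix
    major, minor = (int(x) for x in base.split(".")[:2])
    return "0.4.0" if (major, minor) >= (1, 8) else "0.3.0"

print(pick_fast_transformers("1.7.1"))        # 0.3.0
print(pick_fast_transformers("2.0.1+cu118"))  # 0.4.0
```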
Thank you.