batrlatom opened this issue 4 years ago
Having the same issue
This library requires pytorch-fast-transformers to be installed, as it uses that library's implementation of causal linear attention. When fast-transformers is installed, it attempts to call the CUDA compiler nvcc in order to build memory-optimised CUDA versions of some functions. If the optimised versions can't be compiled for any reason, performer-pytorch falls back to its own, less efficient implementation, which results in this warning. The code should still run, but it will use more memory.
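The fallback described above is a standard try-import-else-degrade pattern. A minimal sketch of it, assuming a hypothetical compiled extension named `causal_product_cuda` (the real module name inside fast-transformers may differ) and a scalar toy version of the slow path:

```python
# Sketch of the fallback pattern: try the optimised CUDA extension,
# degrade to a memory-inefficient pure-Python path if it is missing.
# "causal_product_cuda" is an illustrative name, not the real one.
import warnings

try:
    import causal_product_cuda  # hypothetical compiled CUDA extension
    HAS_CUDA_KERNELS = True
except ImportError:
    warnings.warn(
        "unable to import cuda code for auto-regressive Performer. "
        "will default to the memory inefficient non-cuda version"
    )
    HAS_CUDA_KERNELS = False

def causal_dot_product(q, k, v):
    """Dispatch to the fast kernel when available, else a slow toy path."""
    if HAS_CUDA_KERNELS:
        return causal_product_cuda.causal_dot_product(q, k, v)
    # Slow fallback (scalar toy version of unnormalised causal linear
    # attention): out_i = q_i * sum_{j <= i} k_j * v_j.
    out, acc = [], 0.0
    for qi, ki, vi in zip(q, k, v):
        acc += ki * vi          # running prefix sum of k·v
        out.append(qi * acc)    # causal: position i sees only the prefix
    return out
```

On a machine where the extension never compiled, the `except` branch runs at import time, which is exactly when users see the warning quoted in this thread.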
https://github.com/idiap/fast-transformers/issues/23#issuecomment-693323065 — this comment should explain how to install the pytorch-fast-transformers library in a way that works.
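Before reinstalling, it can help to confirm that the CUDA toolchain the build needs is actually visible. A hedged diagnostic sketch (nothing here is taken from the linked comment; it only prints what your environment provides):

```shell
# Check whether nvcc is on PATH and whether CUDA_HOME is set; both are
# commonly needed when pip builds fast-transformers' CUDA extensions.
NVCC_PATH="$(command -v nvcc || echo missing)"
echo "nvcc: ${NVCC_PATH}"
echo "CUDA_HOME=${CUDA_HOME:-unset}"
```

If nvcc is missing or `CUDA_HOME` is unset, fixing that and then reinstalling the package (for example `pip install --no-cache-dir --force-reinstall pytorch-fast-transformers`) is a common way to get the extensions compiled; the linked comment has the specifics.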
I get the same error, and it just crashes even when I try to run it on the CPU.
@qazwsxal I successfully installed pytorch-fast-transformers, but I still get this error on either CPU or GPU: `unable to import cuda code for auto-regressive Performer. will default to the memory inefficient non-cuda version`, followed by `Segmentation fault`. Would appreciate any help or pointers. Thank you.
As this library is written in 100% Python, a segfault is unlikely to be caused by anything here. Without a full stack trace, it's also going to be difficult to figure out why the crash is happening.
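One way to get more than a bare "Segmentation fault" out of a crash in a compiled extension is Python's standard-library `faulthandler`, enabled before the model is imported:

```python
# Enable faulthandler first so that a SIGSEGV inside a C/CUDA extension
# still dumps the Python-level stack trace to stderr instead of being lost.
import faulthandler

faulthandler.enable()

# ... then import and run the model as usual, e.g. (import is illustrative):
# from performer_pytorch import PerformerLM
```

The same effect is available without code changes by running the script as `python -X faulthandler script.py`; the resulting traceback shows which Python call was active when the segfault occurred.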
Having the same issue:
Dear friends, I solved this problem on my machine. In terminal:
My system is
Hi, I have tried to test your implementation, but I have a problem making it run correctly. Do you know what the problem could be?
env: fresh conda env, python 3.6.9, cuda 11, pytorch 1.7 card: gtx1080ti