Closed: benderama3 closed this issue 4 years ago
I ran into a similar problem: there is no causal_product_gpu in the directory, and it can't be imported. I tried reinstalling with --no-cache-dir to force a recompile, but it didn't work.
File "/media/lyk/高速/Project/github/compound-word-transformer/workspace/uncond/cp-linear/main-cp.py", line 712, in <module>
train()
File "/media/lyk/高速/Project/github/compound-word-transformer/workspace/uncond/cp-linear/main-cp.py", line 587, in train
losses = net.train_step(batch_x, batch_y, batch_mask)
File "/media/lyk/高速/Project/github/compound-word-transformer/workspace/uncond/cp-linear/main-cp.py", line 302, in train_step
h, y_type = self.forward_hidden(x)
File "/media/lyk/高速/Project/github/compound-word-transformer/workspace/uncond/cp-linear/main-cp.py", line 368, in forward_hidden
h = self.transformer_encoder(pos_emb, attn_mask) # y: b x s x d_model
File "/home/lyk/anaconda3/lib/python3.8/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/lyk/anaconda3/lib/python3.8/site-packages/pytorch_fast_transformers-0.4.0-py3.8-linux-x86_64.egg/fast_transformers/transformers.py", line 138, in forward
x = layer(x, attn_mask=attn_mask, length_mask=length_mask)
File "/home/lyk/anaconda3/lib/python3.8/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/lyk/anaconda3/lib/python3.8/site-packages/pytorch_fast_transformers-0.4.0-py3.8-linux-x86_64.egg/fast_transformers/transformers.py", line 77, in forward
x = x + self.dropout(self.attention(
File "/home/lyk/anaconda3/lib/python3.8/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/lyk/anaconda3/lib/python3.8/site-packages/pytorch_fast_transformers-0.4.0-py3.8-linux-x86_64.egg/fast_transformers/attention/attention_layer.py", line 105, in forward
new_values = self.inner_attention(
File "/home/lyk/anaconda3/lib/python3.8/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/lyk/anaconda3/lib/python3.8/site-packages/pytorch_fast_transformers-0.4.0-py3.8-linux-x86_64.egg/fast_transformers/attention/causal_linear_attention.py", line 98, in forward
V = causal_linear(
File "/home/lyk/anaconda3/lib/python3.8/site-packages/pytorch_fast_transformers-0.4.0-py3.8-linux-x86_64.egg/fast_transformers/attention/causal_linear_attention.py", line 23, in causal_linear
V_new = causal_dot_product(Q, K, V)
File "/home/lyk/anaconda3/lib/python3.8/site-packages/pytorch_fast_transformers-0.4.0-py3.8-linux-x86_64.egg/fast_transformers/causal_product/__init__.py", line 44, in forward
CausalDotProduct.dot[device.type](
TypeError: 'NoneType' object is not callable
Process finished with exit code 1
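For what it's worth, the final frame shows CausalDotProduct.dot[device.type] evaluating to None, i.e. the dispatch table has no compiled CUDA kernel. A minimal check, assuming only the names visible in the traceback:

```python
import torch
from fast_transformers.causal_product import CausalDotProduct

# First confirm PyTorch can see the GPU at all.
print(torch.cuda.is_available())

# CausalDotProduct.dot maps device types to compiled kernels (see the last
# traceback frame); a None entry under "cuda" reproduces the TypeError.
print(CausalDotProduct.dot)
```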
Hi, I ran into the exact same issue. May I ask if you have solved it? @15805383399
I resolved this one with the methods mentioned in #96 😁
I'm trying to run the example from the readme using local attention instead of linear attention. I changed the attention_type and added an additional argument to the TransformerEncoderBuilder.from_kwargs method, roughly as in the sketch below.
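A minimal sketch of that change, based on the readme's builder example; all dimensions and the local_context value below are placeholder assumptions, not the exact values used:

```python
import torch
from fast_transformers.builders import TransformerEncoderBuilder

# Placeholder hyperparameters; only attention_type and local_context
# differ from the readme's linear-attention example.
model = TransformerEncoderBuilder.from_kwargs(
    n_layers=4,
    n_heads=4,
    query_dimensions=64,
    value_dimensions=64,
    feed_forward_dimensions=1024,
    attention_type="local",  # was "linear" in the readme
    local_context=32,        # the additional argument: an integer window size
).get()

x = torch.rand(1, 128, 4 * 64)  # (batch, seq_len, n_heads * query_dimensions)
y = model(x)
```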
The example then throws an error.
It does work with the other attention modules. Am I doing something wrong? Is the local_context argument supposed to be an integer?
Thank you.
EDIT: It looks like it fails on CUDA only (PyTorch 1.6 with CUDA 10.1); it works on the CPU.
EDIT2: Fixed by installing with pip's --no-cache-dir argument, which forces the extension to be recompiled.
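For anyone who lands here later: after reinstalling (e.g. pip install --no-cache-dir pytorch-fast-transformers), a quick way to confirm the rebuilt CUDA kernel is actually dispatched; the tensor shapes here are assumed for illustration:

```python
import torch
from fast_transformers.causal_product import causal_dot_product

# Assumed layout (batch, heads, seq_len, dim); any small sizes will do.
Q = torch.rand(1, 4, 64, 32, device="cuda")
K = torch.rand(1, 4, 64, 32, device="cuda")
V = torch.rand(1, 4, 64, 32, device="cuda")

# If this prints a shape instead of raising the TypeError above, the
# CUDA causal-product kernel compiled and is being dispatched correctly.
print(causal_dot_product(Q, K, V).shape)
```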