I'm proposing a pull request to fix a tiny bug I have found. I'll try to provide as much information as possible in the rest of this message:
1. What I want to do
I am trying to implement a neural network that uses transformer layers as a time encoder, using the class `Transformer` imported with `from tsl.nn.blocks.encoders import Transformer`.
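For example, this minimal sketch already fails at construction time (the sizes are made up, just to reproduce the error):

```python
from tsl.nn.blocks.encoders import Transformer

# hypothetical sizes; the constructor itself raises, before any data
# is passed through the model
encoder = Transformer(input_size=16, hidden_size=32, axis='time')
```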
2. What is the problem?
If I set the parameter `axis='time'`, as established in the documentation, I obtain the following error:
File "/home/javier/anaconda3/envs/tsl/lib/python3.10/site-packages/tsl/nn/blocks/encoders/transformer.py", line 193, in __init__ transformer_layer(
File "/home/javier/anaconda3/envs/tsl/lib/python3.10/site-packages/tsl/nn/blocks/encoders/transformer.py", line 43, in __init__ self.att = MultiHeadAttention(embed_dim=hidden_size,
File "/home/javier/anaconda3/envs/tsl/lib/python3.10/site-packages/tsl/nn/layers/base/attention.py", line 135, in __init__ raise ValueError("Axis can either be 'steps' (0) or 'nodes' (1), "
ValueError: Axis can either be 'steps' (0) or 'nodes' (1), not 'time'.
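As far as I can tell from the excerpts below, a temporary workaround is to build the `TransformerLayer` directly with the axis name that `MultiHeadAttention` actually accepts (the parameter names are my guess from the sources, so treat this as an untested sketch):

```python
from tsl.nn.blocks.encoders.transformer import TransformerLayer

# 'steps' is the name MultiHeadAttention currently accepts for the
# temporal axis, so building the layer directly does not raise
layer = TransformerLayer(input_size=16, hidden_size=32, axis='steps')
```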
3. What is the source of the problem?
Checking the code of the class `Transformer`, this can be found:
```python
class Transformer(nn.Module):
    """
    Args:
        input_size (int): Input size.
        hidden_size (int): Dimension of the learned representations.
        ff_size (int): Units in the MLP after self attention.
        output_size (int, optional): Size of an optional linear readout.
        n_layers (int, optional): Number of Transformer layers.
        n_heads (int, optional): Number of parallel attention heads.
        axis (str, optional): Dimension on which to apply attention to update
            the representations. Can be either, 'time', 'nodes', or 'both'.
            (default: :obj:`'time'`)
        causal (bool, optional): If :obj:`True`, then causally mask attention
            scores in temporal attention (has an effect only if :attr:`axis` is
            :obj:`'time'` or :obj:`'both'`).
            (default: :obj:`True`)
        activation (str, optional): Activation function.
        dropout (float, optional): Dropout probability.
    """

    def __init__(self,
                 input_size,
                 hidden_size,
                 ff_size=None,
                 output_size=None,
                 n_layers=1,
                 n_heads=1,
                 axis='time',
                 causal=True,
                 activation='elu',
                 dropout=0.):
        super(Transformer, self).__init__()
        self.f = getattr(F, activation)
        if ff_size is None:
            ff_size = hidden_size
        if axis in ['time', 'nodes']:
            transformer_layer = partial(TransformerLayer, axis=axis)
        elif axis == 'both':
            transformer_layer = SpatioTemporalTransformerLayer
        else:
            raise ValueError(f'"{axis}" is not a valid axis.')
```
However, this is the code of the class `MultiHeadAttention`:
```python
class MultiHeadAttention(nn.MultiheadAttention):

    def __init__(self,
                 embed_dim,
                 heads,
                 qdim: Optional[int] = None,
                 kdim: Optional[int] = None,
                 vdim: Optional[int] = None,
                 axis='steps',
                 dropout=0.,
                 bias=True,
                 add_bias_kv=False,
                 add_zero_attn=False,
                 device=None,
                 dtype=None,
                 causal=False) -> None:
        if axis in ['steps', 0]:
            shape = 's (b n) c'
        elif axis in ['nodes', 1]:
            if causal:
                raise ValueError(
                    f'Cannot use causal attention for axis "{axis}".')
            shape = 'n (b s) c'
        else:
            raise ValueError("Axis can either be 'steps' (0) or 'nodes' (1), "
                             f"not '{axis}'.")
```
4. Which solution do I propose?
The class `Transformer` documents and accepts `axis='time'`, but it forwards that value unchanged down to `MultiHeadAttention`, which only recognizes `'steps'` (or `0`) for the temporal axis, hence the `ValueError`. In this pull request I propose to update the references to `'steps'` in the `MultiHeadAttention` class to `'time'`.
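Concretely, the change I have in mind in `tsl/nn/layers/base/attention.py` would look like this (a sketch, not the final diff; the integer aliases `0`/`1` are kept as they are):

```python
# in MultiHeadAttention.__init__ (proposed): accept 'time' instead of 'steps'
if axis in ['time', 0]:
    shape = 's (b n) c'
elif axis in ['nodes', 1]:
    if causal:
        raise ValueError(
            f'Cannot use causal attention for axis "{axis}".')
    shape = 'n (b s) c'
else:
    raise ValueError("Axis can either be 'time' (0) or 'nodes' (1), "
                     f"not '{axis}'.")
```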
5. Final note
I tried to run the tests, but I couldn't because of some problems with my Python installation. However, I expect this minor change to pass all tests.