Closed LIngerwsk closed 3 years ago
Hi, the Attention Window defines a range around the current time step outside of which all attention scores are set to 0. In other words, it forces the network to focus only on the values directly before and after the current step.
As an illustration, take a look at this attention map: the red diagonal represents the non-zero values around the current time step. Within it, the shades represent the attention scores; you can see the distinction between day/night cycles. This map covers a week's worth of hourly data.
The different variants of MultiHeadAttention, such as Window or Chunk, are purely computational tricks to speed up computation. You should start with the original MHA block and switch to the others only if needed.
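As an illustrative sketch (not this repository's actual implementation), windowed attention can be expressed as masking every score outside a band around each query position before the softmax, so that far-away positions receive exactly zero weight:

```python
import numpy as np

def window_attention(q, k, v, window):
    """Single-head scaled dot-product attention where each position
    only attends within +/- `window` steps of itself (hypothetical sketch)."""
    t, d = q.shape
    scores = q @ k.T / np.sqrt(d)                      # (t, t) raw scores
    idx = np.arange(t)
    # Banded mask: True inside the window around the diagonal.
    mask = np.abs(idx[:, None] - idx[None, :]) <= window
    # Positions outside the window get -inf, i.e. 0 after the softmax.
    scores = np.where(mask, scores, -np.inf)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v
```

With a window large enough to cover the whole sequence, this reduces to ordinary full attention, which is why starting with the original MHA block is a safe default.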
Thanks for your reply!
Just to be clear: it looks before AND after, right? The docs say "Number of backward elements to apply attention", which I took to mean backwards only.
Yes, my bad; it's currently set to look backward AND forward. Changing from one to the other is quite simple, though.
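For concreteness (again a hypothetical sketch, not the library's code), switching between a symmetric window and a backward-only one is just a change in the mask condition:

```python
import numpy as np

def band_mask(t, window, symmetric=True):
    """Boolean (t, t) mask: True where attention is allowed.

    symmetric=True  -> attend `window` steps backward AND forward;
    symmetric=False -> backward only (a causal window)."""
    i = np.arange(t)[:, None]   # query positions
    j = np.arange(t)[None, :]   # key positions
    if symmetric:
        return np.abs(i - j) <= window
    return (0 <= i - j) & (i - j <= window)
```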
Hi, when should I use the attention window size, and what is it used for? I'm currently trying to apply the Transformer to load forecasting; should I use the original MultiHeadAttention?