maxjcohen / transformer

Implementation of Transformer model (originally from Attention is All You Need) applied to Time Series.
https://timeseriestransformer.readthedocs.io/en/latest/
GNU General Public License v3.0

When should I use Attention window size? #37

Closed LIngerwsk closed 3 years ago

LIngerwsk commented 3 years ago

Hi, when should I use the attention window size, and what is it used for? I recently tried to apply the Transformer to load forecasting; should I use the original MultiHeadAttention?

maxjcohen commented 3 years ago

Hi, the attention window defines a range around the current time step, outside of which all attention scores are set to 0. In other words, it forces the network to focus only on the values directly before and after.

As an illustration, you can take a look at this attention map: the red diagonal represents the non-zero values around the current time step. Inside it, the shades represent the attention scores, and you can see the distinction between day/night cycles. This map represents a week's worth of hourly data.

The different variants of MultiHeadAttention, such as Window or Chunk, are purely computational tricks to speed up computation. You should start with the original MHA block and switch to the others if needed.
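
For illustration, here is a minimal sketch of how such a window mask could be built in PyTorch. The helper name `window_mask` and the mask-before-softmax usage are assumptions for illustration, not the repository's exact code:

```python
import torch

def window_mask(seq_len: int, window: int) -> torch.Tensor:
    """Boolean mask that is True outside the attention window.

    Hypothetical helper: positions further than `window` steps from the
    current time step (before or after) have their attention scores masked.
    """
    idx = torch.arange(seq_len)
    # mask[i, j] is True when |j - i| > window
    return (idx.unsqueeze(0) - idx.unsqueeze(1)).abs() > window

# Sketch of how the mask would be applied to raw attention scores:
# scores: (batch, seq_len, seq_len), i.e. Q @ K^T / sqrt(d)
# scores = scores.masked_fill(window_mask(seq_len, window), float("-inf"))
# attention = torch.softmax(scores, dim=-1)
```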

LIngerwsk commented 3 years ago

Thanks for your reply!

shamoons commented 3 years ago

> Hi, the attention window defines a range around the current time step, outside of which all attention scores are set to 0. In other words, it forces the network to focus only on the values directly before and after.
>
> As an illustration, you can take a look at this attention map: the red diagonal represents the non-zero values around the current time step. Inside it, the shades represent the attention scores, and you can see the distinction between day/night cycles. This map represents a week's worth of hourly data.
>
> The different variants of MultiHeadAttention, such as Window or Chunk, are purely computational tricks to speed up computation. You should start with the original MHA block and switch to the others if needed.

Just to be clear - it looks before AND after, right? The docs say "Number of backward elements to apply attention", which I took to mean only backwards?

maxjcohen commented 3 years ago

Yes, my bad, it's currently set to backward AND forward. Changing from one to the other is quite simple though.
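
For illustration, a backward-only variant could look like the sketch below (again a hypothetical helper, not the repository's code); compared to the symmetric mask above, future positions are simply masked as well:

```python
import torch

def backward_window_mask(seq_len: int, window: int) -> torch.Tensor:
    """Mask that keeps only the `window` steps at and before each position.

    Hypothetical variant of the symmetric window mask: positions in the
    future (j > i) are masked too, so attention becomes causal.
    """
    idx = torch.arange(seq_len)
    diff = idx.unsqueeze(1) - idx.unsqueeze(0)  # diff[i, j] = i - j
    # keep 0 <= i - j <= window, mask everything else
    return (diff < 0) | (diff > window)
```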