traja-team / traja

Python tools for spatial trajectory and time-series data analysis
https://traja.readthedocs.io
MIT License

attention layer on top of LSTMs #82

Open Saran-nns opened 3 years ago

Saran-nns commented 3 years ago

Attention mechanisms seem to improve time-series prediction/forecasting and classification performance (sample paper).

The deep learning models in traja can easily accommodate an attention layer:

  1. Create a self-attention mechanism wrapper (Reference); a minimal sketch follows this list.
  2. Inject the attention layer instance on top of the LSTM layers, before and after encoding (examples here and here); see the encoder sketch below.
  3. Add an optional boolean argument for attention to the autoencoder (AE, VAE, VAEGAN) base models, as in the final sketch below.
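For step 1, a minimal sketch of the wrapper, assuming PyTorch and simple additive attention pooled over the LSTM's hidden states; the class name `SelfAttention` and its interface are illustrative, not an existing traja API:

```python
import torch
import torch.nn as nn


class SelfAttention(nn.Module):
    """Additive self-attention pooling over a sequence of hidden states."""

    def __init__(self, hidden_size: int):
        super().__init__()
        # One scalar score per time step, learned from the hidden state
        self.score = nn.Linear(hidden_size, 1)

    def forward(self, hidden_states: torch.Tensor):
        # hidden_states: (batch, seq_len, hidden_size)
        weights = torch.softmax(self.score(hidden_states), dim=1)  # (batch, seq_len, 1)
        context = (weights * hidden_states).sum(dim=1)             # (batch, hidden_size)
        return context, weights
```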
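For step 2, one way the wrapper could sit on top of an LSTM encoder, building on the `SelfAttention` sketch above; `AttentiveLSTMEncoder` is a hypothetical name, not an existing traja class:

```python
class AttentiveLSTMEncoder(nn.Module):
    """LSTM encoder whose output sequence is pooled by self-attention."""

    def __init__(self, input_size: int, hidden_size: int, num_layers: int = 1):
        super().__init__()
        self.lstm = nn.LSTM(input_size, hidden_size, num_layers, batch_first=True)
        self.attention = SelfAttention(hidden_size)

    def forward(self, x: torch.Tensor):
        # x: (batch, seq_len, input_size)
        lstm_out, _ = self.lstm(x)                   # all hidden states
        context, weights = self.attention(lstm_out)  # attention-pooled summary
        return context, weights
```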
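And for step 3, the boolean flag could toggle attention pooling in an autoencoder base model along these lines; `LSTMAutoEncoder` and its constructor signature are assumptions for illustration, not traja's actual ae/vae/vaegan classes:

```python
class LSTMAutoEncoder(nn.Module):
    """Sketch of an AE base model with an optional `attention` flag.

    With attention=True the encoder states are attention-pooled into the
    latent code; otherwise the last hidden state is used.
    """

    def __init__(self, input_size: int, hidden_size: int, seq_len: int,
                 attention: bool = False):
        super().__init__()
        self.seq_len = seq_len
        self.encoder = nn.LSTM(input_size, hidden_size, batch_first=True)
        self.attention = SelfAttention(hidden_size) if attention else None
        self.decoder = nn.LSTM(hidden_size, hidden_size, batch_first=True)
        self.output = nn.Linear(hidden_size, input_size)

    def forward(self, x: torch.Tensor):
        enc_out, _ = self.encoder(x)             # (batch, seq_len, hidden)
        if self.attention is not None:
            latent, _ = self.attention(enc_out)  # attention-pooled code
        else:
            latent = enc_out[:, -1, :]           # last hidden state
        # Repeat the latent code across time steps and decode it back
        dec_in = latent.unsqueeze(1).repeat(1, self.seq_len, 1)
        dec_out, _ = self.decoder(dec_in)
        return self.output(dec_out)


# Hypothetical usage: reconstruct 2-D trajectories of length 50
model = LSTMAutoEncoder(input_size=2, hidden_size=32, seq_len=50, attention=True)
recon = model(torch.randn(8, 50, 2))  # -> (8, 50, 2)
```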