Raphaaal / fieldy

Fine-grained attention in hierarchical transformers for tabular time-series.

This repository hosts the code for the paper Fine-grained Attention in Hierarchical Transformers for Tabular Time-series by R. Azorin, Z. Ben Houidi, M. Gallo, A. Finamore, and P. Michiardi, accepted at the 10th MiLeTS Workshop at KDD'24.

Fieldy is a fine-grained hierarchical Transformer that contextualizes fields at both the row and column levels. We compare our proposal against state-of-the-art models on regression and classification tasks using public tabular time-series datasets. Our results show that combining row-wise and column-wise attention improves performance without increasing model size.
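For intuition only, below is a minimal sketch of how row-wise and column-wise attention can be combined over a grid of field embeddings. It is not the paper's implementation: the module name, tensor shapes, and the choice to fuse the two contexts with a linear projection are illustrative assumptions on top of standard PyTorch multi-head attention.

```python
# Illustrative sketch: combine row-wise and column-wise attention over
# per-field embeddings of a tabular time-series (not the paper's exact model).
import torch
import torch.nn as nn

class RowColAttention(nn.Module):
    def __init__(self, d_model: int = 64, n_heads: int = 4):
        super().__init__()
        self.row_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.col_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.proj = nn.Linear(2 * d_model, d_model)  # fuse both contexts per field

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, n_rows, n_cols, d_model) -- one embedding per table field
        b, r, c, d = x.shape

        # Row-wise attention: each field attends to the other fields in its row.
        rows = x.reshape(b * r, c, d)
        row_ctx, _ = self.row_attn(rows, rows, rows)
        row_ctx = row_ctx.reshape(b, r, c, d)

        # Column-wise attention: each field attends to the same field across rows.
        cols = x.permute(0, 2, 1, 3).reshape(b * c, r, d)
        col_ctx, _ = self.col_attn(cols, cols, cols)
        col_ctx = col_ctx.reshape(b, c, r, d).permute(0, 2, 1, 3)

        # Each field keeps a fine-grained view of both its row and its column.
        return self.proj(torch.cat([row_ctx, col_ctx], dim=-1))

# Example: 2 samples, 10 time steps (rows), 5 fields (columns), 64-dim embeddings.
x = torch.randn(2, 10, 5, 64)
print(RowColAttention()(x).shape)  # torch.Size([2, 10, 5, 64])
```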


Requirements

  1. Create an environment with conda create --name fieldy python==3.8.16
  2. Activate it with conda activate fieldy
  3. Install the requirements with pip install -r requirements.txt

Note: you need a CUDA device with at least 16 GB of VRAM to train all the models. If you want to use CPU or multi-GPU training, you may consider integrating HuggingFace Accelerate into the existing code.
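If you go that route, a typical training loop can usually be adapted with only a few lines. The sketch below is a generic Accelerate pattern, not code from this repository; the model, dataloader, optimizer, and loss function are placeholders.

```python
# Generic HuggingFace Accelerate pattern for CPU / multi-GPU training.
# Illustrative only: model, dataloader, optimizer, and loss_fn are placeholders,
# not objects defined in this repository.
from accelerate import Accelerator

def train(model, dataloader, optimizer, loss_fn, epochs: int = 1):
    accelerator = Accelerator()  # selects CPU, single GPU, or multi-GPU automatically
    model, optimizer, dataloader = accelerator.prepare(model, optimizer, dataloader)

    model.train()
    for _ in range(epochs):
        for batch, target in dataloader:
            optimizer.zero_grad()
            loss = loss_fn(model(batch), target)
            accelerator.backward(loss)  # replaces loss.backward()
            optimizer.step()
```

Such a script would then be started with the accelerate launch command instead of plain python.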

Datasets loading

Choose an option below to load the preprocessed datasets:

Models training

Results will be saved under ./results.

Plot results

Use ./plots/results2latex.ipynb.

Toy task for field-wise attention

Use ./plots/field_wise_attention.ipynb.

Citation

If you use this paper or code as a reference, please cite it with:

@misc{azorin2024finegrained,
      title={Fine-grained Attention in Hierarchical Transformers for Tabular Time-series}, 
      author={Raphael Azorin and Zied Ben Houidi and Massimo Gallo and Alessandro Finamore and Pietro Michiardi},
      year={2024},
      eprint={2406.15327},
      archivePrefix={arXiv},
}

Acknowledgements

This repository is built on top of TabBERT. We would also like to thank the authors of UniTTab for discussions on metrics and pre-processing.