Sohanpatnaik106 / CABINET_QA

This repository contains our codebase for CABINET, a method for Table Question Answering that achieves state-of-the-art results on three benchmark datasets.

CABINET: CONTENT RELEVANCE BASED NOISE REDUCTION FOR TABLE QUESTION ANSWERING

Table understanding capability of Large Language Models (LLMs) has been extensively studied through the task of question answering (QA) over tables. Typically, only a small part of the whole table is relevant to deriving the answer for a given question. The irrelevant parts act as noise and distracting information, resulting in sub-optimal performance due to the vulnerability of LLMs to noise. To mitigate this, we propose CABINET (Content RelevAnce-Based NoIse ReductioN for TablE QuesTion-Answering), a framework that enables LLMs to focus on relevant tabular data by suppressing extraneous information. CABINET comprises an Unsupervised Relevance Scorer (URS), trained differentially with the question-answering LLM (QA LLM), that weighs the table content based on its relevance to the input question before feeding it to the QA LLM. To further aid the relevance scorer, CABINET employs a weakly supervised module that generates a parsing statement describing the criteria for rows and columns relevant to the question and highlights the content of the corresponding table cells. CABINET significantly outperforms various tabular LLM baselines as well as GPT-3-based in-context learning methods, is more robust to noise, maintains its advantage on tables of varying sizes, and establishes new SoTA performance on the WikiTQ, FeTaQA, and WikiSQL datasets.
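To make the relevance-weighting idea concrete, here is a minimal, hypothetical PyTorch sketch (the class name, layer sizes, and dimensions are illustrative assumptions, not the implementation in this repository): a small scorer assigns each table token a relevance weight in [0, 1] and scales its embedding before the QA model consumes it; because the scaling is differentiable, the scorer can be trained end to end with the QA loss, without relevance labels.

  import torch
  import torch.nn as nn

  class RelevanceScorer(nn.Module):
      """Toy relevance scorer: one weight per table token (illustrative only)."""
      def __init__(self, hidden_dim: int):
          super().__init__()
          self.scorer = nn.Sequential(
              nn.Linear(hidden_dim, hidden_dim),
              nn.ReLU(),
              nn.Linear(hidden_dim, 1),
          )

      def forward(self, token_embeddings: torch.Tensor) -> torch.Tensor:
          # token_embeddings: (batch, seq_len, hidden_dim)
          scores = torch.sigmoid(self.scorer(token_embeddings))  # (batch, seq_len, 1)
          # Down-weight tokens deemed irrelevant; gradients from the QA loss
          # flow back into the scorer, so no relevance supervision is needed.
          return token_embeddings * scores

  urs = RelevanceScorer(hidden_dim=768)
  table_tokens = torch.randn(2, 128, 768)  # dummy embeddings for 128 table tokens
  print(urs(table_tokens).shape)           # torch.Size([2, 128, 768])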

File Description

This repository contains code for several baselines and our proposed method. Details of the file and directory structure are given below.

Baselines

Our Code

Setup the environment

  conda env create -f environment.yml
  conda activate tabllm

Experiments

Please download the datasets from here.

Please download the checkpoints from here.

To run the experiments and train the model with a given config, run the following command:

  python main.py --config <config_path>
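For example, to train on WikiTQ (the config path below is a hypothetical placeholder; substitute one of the config files shipped with this repository):

  python main.py --config configs/cabinet_wikitq.json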

To evaluate a trained model on a particular dataset, run the following command:

  python evaluate.py --config <config_path> --device <device_name> --ckpt_path <checkpoint_path>
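For example (the config path, device name, and checkpoint path below are hypothetical placeholders; use the downloaded checkpoints and the configs in this repository):

  python evaluate.py --config configs/cabinet_wikitq.json --device cuda:0 --ckpt_path checkpoints/cabinet_wikitq.pt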

For CABINET, set <config_path> to the config file corresponding to the target dataset.

Citation

If you find this work useful and relevant to your research, please cite it:

  @inproceedings{patnaik2024cabinet,
    title={{CABINET}: Content Relevance-based Noise Reduction for Table Question Answering},
    author={Sohan Patnaik and Heril Changwal and Milan Aggarwal and Sumit Bhatia and Yaman Kumar and Balaji Krishnamurthy},
    booktitle={The Twelfth International Conference on Learning Representations},
    year={2024},
    url={https://openreview.net/forum?id=SQrHpTllXa}
  }

Contact

For questions related to this code, please raise an issue or mail us at sohanpatnaik106@gmail.com.