Title
Why Generate when you can Discriminate? A Novel Technique for Text Classification using Language Models
Describe your Talk
In this talk, I will discuss one of our recent works on using small language models for text classification, published in the Findings of EACL 2024. The abstract of the paper is as follows:
In this paper, we propose a novel two-step technique for text classification using autoregressive Language Models (LMs). In the first step, a set of perplexity- and log-likelihood-based numeric features is elicited from an LM for the text instance to be classified. In the second step, a classifier trained on these features predicts the final label. The classifier is usually a simple machine learning model such as a Support Vector Machine (SVM) or Logistic Regression (LR), trained on a small set of training examples. We believe our technique presents a whole new way of exploiting the available training instances, in addition to existing approaches such as fine-tuning LMs or in-context learning. Our approach stands out by eliminating the need for parameter updates in the LM, as required in fine-tuning, and it does not impose the limits on the number of training examples that arise when building prompts for in-context learning. We evaluate our technique across 5 different datasets and compare it with multiple competent baselines.
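To make the two-step pipeline concrete, here is a minimal sketch in Python. It is my own illustration under stated assumptions (GPT-2 via Hugging Face Transformers, a made-up sentiment prompt template, and one average log-likelihood feature per candidate label), not the paper's exact prompts or feature set:

```python
# Sketch of the two-step technique (illustrative, not the paper's exact recipe).
# Step 1: elicit log-likelihood features from a small causal LM.
# Step 2: train a simple classifier (here, Logistic Regression) on those features.
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
from sklearn.linear_model import LogisticRegression

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def lm_features(text, labels=("positive", "negative")):
    """One feature per label: average log-likelihood of the text under
    a label-bearing prompt (the template below is hypothetical)."""
    feats = []
    for label in labels:
        prompt = f"The following review is {label}: {text}"
        ids = tokenizer(prompt, return_tensors="pt").input_ids
        with torch.no_grad():
            out = model(ids, labels=ids)
        # out.loss is the mean negative log-likelihood per token;
        # negate it to get average log-likelihood (perplexity = exp(loss)).
        feats.append(-out.loss.item())
    return feats

# Step 2: fit a lightweight classifier on the elicited features.
train_texts = ["A wonderful, heartfelt film.", "Dull and far too long."]
train_labels = [1, 0]  # 1 = positive, 0 = negative
X = [lm_features(t) for t in train_texts]
clf = LogisticRegression().fit(X, train_labels)
print(clf.predict([lm_features("An absolute delight to watch.")]))
```

Note that the LM is used only in inference mode: no gradients flow into it, which is what lets the approach sidestep both fine-tuning and prompt-length limits.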
Pre-requisites & reading material
Basics of Language Models and Text Classification
Time required for the talk
1 hour
Link to slides/demos
https://aclanthology.org/2024.findings-eacl.74/
About you
Nitin Ramrakhiyani has been working at TCS Research as a Scientist since 2013. In parallel, he is pursuing his PhD at the International Institute of Information Technology (IIIT), Hyderabad. He received his M.Tech in Information and Communication Technology from DA-IICT Gandhinagar. His areas of research are Natural Language Processing (NLP), Text Mining, and Machine Learning. He has published several papers on Information Extraction, Narrative Understanding, and NLP Applications in noteworthy NLP conferences and journals. Website: https://nramrakhiyani.wordpress.com/
Availability
18/05/2024
Any comments
No response