josh-segal / NLP-Presentation-Similarity-Evaluation


Presentation Evaluation Using Machine Learning

(Screenshot: frontend example)

Overview

This repository contains the code and resources for the CS 4120 project by Joshua Segal and Cris Hernandez at Northeastern University. The project evaluates presentation-paper pairs for entailment, aiming to provide a robust rating system for presentations based on their underlying research papers. The method keyword-matches each presentation sentence to paper sentences, predicts entailment with machine learning models, and computes the final rating as a weighted score over those predictions.
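The rating described above can be sketched in a few lines. This is an illustrative outline, not the project's actual implementation: the Jaccard keyword matcher and the `dummy_entailment` stand-in (which a trained entailment classifier would replace) are assumptions.

```python
def keyword_overlap(sent_a: str, sent_b: str) -> float:
    """Jaccard overlap between the word sets of two sentences."""
    a, b = set(sent_a.lower().split()), set(sent_b.lower().split())
    return len(a & b) / len(a | b) if a | b else 0.0

def best_match(pres_sent: str, paper_sents: list) -> tuple:
    """Match a presentation sentence to the paper sentence with the
    highest keyword overlap; returns (sentence, overlap score)."""
    scored = [(s, keyword_overlap(pres_sent, s)) for s in paper_sents]
    return max(scored, key=lambda pair: pair[1])

def rate_presentation(pres_sents, paper_sents, entail_fn) -> float:
    """Weighted rating: each sentence's entailment probability is
    weighted by how strongly it matched a paper sentence."""
    total, weight_sum = 0.0, 0.0
    for ps in pres_sents:
        matched, w = best_match(ps, paper_sents)
        total += w * entail_fn(matched, ps)
        weight_sum += w
    return total / weight_sum if weight_sum else 0.0

# Stand-in for a trained entailment model (assumption for the sketch).
def dummy_entailment(premise: str, hypothesis: str) -> float:
    return keyword_overlap(premise, hypothesis)

paper = ["We propose a transformer model for text classification.",
         "Our model achieves 90 percent accuracy on the test set."]
talk = ["The transformer model reaches 90 percent accuracy."]
print(round(rate_presentation(talk, paper, dummy_entailment), 3))
```

Swapping `dummy_entailment` for a real classifier's entailment probability leaves the weighting scheme unchanged.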

Report

Access our project report here!

Pipeline

  1. Datasets

  2. Preprocessing

  3. Model Training/Tuning

  4. Fine-Tuning

  5. Comparison
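The comparison stage above can be sketched as scoring candidate models against a shared dev set. Everything here is an illustrative stand-in (the toy data, the threshold, and both "models" are assumptions, not the project's actual models or data):

```python
# Toy dev set of (paper sentence, presentation sentence, entailed?) triples.
dev_pairs = [
    ("our model uses attention layers", "the model uses attention", True),
    ("we observe 90 percent accuracy", "accuracy is 90 percent", True),
    ("we train on news articles", "the sky is blue", False),
]

def overlap_model(premise: str, hypothesis: str, threshold: float = 0.1) -> bool:
    """Stand-in 'model': predicts entailment when word overlap is high."""
    a, b = set(premise.split()), set(hypothesis.split())
    return len(a & b) / len(a | b) >= threshold

def always_yes_model(premise: str, hypothesis: str) -> bool:
    """Trivial baseline: predicts entailment for every pair."""
    return True

def accuracy(model_fn, pairs) -> float:
    """Fraction of dev pairs the model labels correctly."""
    return sum(model_fn(p, h) == y for p, h, y in pairs) / len(pairs)

print("overlap :", accuracy(overlap_model, dev_pairs))
print("baseline:", accuracy(always_yes_model, dev_pairs))
```

The same `accuracy` harness works for any candidate model that maps a sentence pair to a label, which keeps the comparison apples-to-apples.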

Demo

To access the demo, follow these steps:

  1. Clone the repository to your local machine.

  2. Launch the Streamlit app locally:

streamlit run frontend.py

  3. Input the paper and presentation XML files.

  4. Choose your model and view the inference results.
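Step 3 above takes XML files as input; a minimal sketch of pulling sentences out of such a file is below. The `<s>` tag name and flat structure are assumptions for illustration; the real schema of the paper/presentation XML may differ.

```python
import xml.etree.ElementTree as ET

def extract_sentences(xml_text: str, tag: str = "s") -> list:
    """Collect the text of every <s> element in the document.
    The tag name is an assumption; adjust it to the actual schema."""
    root = ET.fromstring(xml_text)
    return [el.text.strip() for el in root.iter(tag)
            if el.text and el.text.strip()]

sample = "<doc><s>First sentence.</s><s>Second sentence.</s></doc>"
print(extract_sentences(sample))  # → ['First sentence.', 'Second sentence.']
```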

Future Directions