ConvoKit

This toolkit contains tools to extract conversational features and analyze social phenomena in conversations, using a single unified interface inspired by (and compatible with) scikit-learn. Several large conversational datasets are included together with scripts exemplifying the use of the toolkit on these datasets. The latest version is 3.0.0 (released July 17, 2023); follow the project on GitHub to keep track of updates.

Read our documentation (https://convokit.cornell.edu/documentation/) or try ConvoKit in our interactive tutorial.
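
For a flavor of this interface, here is a minimal sketch (assuming the movie-corpus dataset listed below and the built-in TextParser transformer; any other transformer follows the same pattern):

```python
from convokit import Corpus, TextParser, download

# load one of the datasets that ship with ConvoKit (cached locally after the first download)
corpus = Corpus(filename=download("movie-corpus"))

# analysis modules are Transformers with a scikit-learn-style interface:
# fit() learns whatever the module needs from the Corpus, transform() annotates it
parser = TextParser()
parser.fit(corpus)
corpus = parser.transform(corpus)
```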

The toolkit currently implements features for:

Linguistic coordination (API)

A measure of linguistic influence (and relative power) between individuals or groups based on their use of function words. Example: exploring the balance of power in the U.S. Supreme Court.
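
A rough sketch of how this module might be applied, assuming the supreme-corpus dataset listed below (see the linked example for computing coordination scores between specific groups of speakers):

```python
from convokit import Corpus, Coordination, download

# U.S. Supreme Court oral arguments, as in the balance-of-power example
corpus = Corpus(filename=download("supreme-corpus"))

coord = Coordination()
coord.fit(corpus)
corpus = coord.transform(corpus)  # annotates the corpus with coordination information
```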

Politeness strategies (API)

A set of lexical and parse-based features correlating with politeness and impoliteness. Example: understanding the (mis)use of politeness strategies in conversations gone awry on Wikipedia.
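
A rough sketch of extracting these features, assuming the Stanford Politeness Corpus (Wikipedia portion) listed below; the politeness module operates over dependency parses, so utterances are parsed first with TextParser (the exact metadata field names may vary by version):

```python
from convokit import Corpus, TextParser, PolitenessStrategies, download

corpus = Corpus(filename=download("wikipedia-politeness-corpus"))

# politeness strategies are detected over dependency parses, so parse first
corpus = TextParser().transform(corpus)
corpus = PolitenessStrategies().transform(corpus)

# each utterance now carries politeness-strategy indicators in its metadata
utt = next(corpus.iter_utterances())
print(utt.meta["politeness_strategies"])
```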

Expected Conversational Context Framework (API)

A framework for characterizing utterances and terms based on their expected conversational context, consisting of model implementations and wrapper pipelines. Examples: deriving question types and other characterizations in British parliamentary question periods, exploring the Switchboard Dialog Act Corpus, examining Wikipedia talk page discussions, and computing the orientation of justice utterances in the U.S. Supreme Court.

Hypergraph conversation representation (API)

A method for extracting structural features of conversations through a hypergraph representation. Example: hypergraph creation and feature extraction, visualization and interpretation on a subsample of Reddit.
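
A rough sketch, assuming the small Reddit sample listed below (the feature names and where they are stored may vary by version):

```python
from convokit import Corpus, HyperConvo, download

corpus = Corpus(filename=download("reddit-corpus-small"))

hc = HyperConvo()
corpus = hc.transform(corpus)

# structural features derived from the hypergraph representation are attached
# as conversation-level metadata
convo = next(corpus.iter_conversations())
print(convo.meta.keys())
```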

Linguistic diversity in conversations (API)

A method to compute the linguistic diversity of individuals within their own conversations and relative to other individuals in a population. Example: speaker conversation attributes and diversity on ChangeMyView.

CRAFT: Online forecasting of conversational outcomes (API)

A neural model for forecasting future outcomes of conversations (e.g., derailment into personal attacks) as they develop. Available as an interactive notebook: full version (fine-tuning + inference) or inference-only.

Datasets

ConvoKit ships with several datasets ready for use "out-of-the-box". These datasets can be downloaded using the convokit.download() helper function. Alternatively, you can access them directly here.
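
For example, with the movie-corpus dataset (any other download name below can be substituted):

```python
from convokit import Corpus, download

# download() fetches and caches the named dataset and returns its local path
corpus = Corpus(filename=download("movie-corpus"))
corpus.print_summary_stats()

# a Corpus is organized into Conversations, Utterances, and Speakers
for convo in corpus.iter_conversations():
    for utt in convo.iter_utterances():
        pass  # e.g., utt.text, utt.speaker.id, utt.meta
```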

Conversations Gone Awry Datasets (Wikipedia/CMV)

Two related corpora of conversations that derail into antisocial behavior. One corpus (CGA-WIKI) consists of Wikipedia talk page conversations that derail into personal attacks, as labeled by crowdworkers (4,188 conversations containing 30,021 comments). The other (CGA-CMV) consists of discussion threads on the subreddit ChangeMyView (CMV) that derail into rule-violating behavior, as determined by the presence of a moderator intervention (6,842 conversations containing 42,964 comments). Name for download: conversations-gone-awry-corpus (for CGA-WIKI) or conversations-gone-awry-cmv-corpus (for CGA-CMV)

Cornell Movie-Dialogs Corpus

A large metadata-rich collection of fictional conversations extracted from raw movie scripts (220,579 conversational exchanges between 10,292 pairs of movie characters in 617 movies). Name for download: movie-corpus

Parliament Question Time Corpus

Parliamentary question periods from May 1979 to December 2016 (216,894 question-answer pairs). Name for download: parliament-corpus

Supreme Court Corpus

A collection of conversations from the U.S. Supreme Court Oral Arguments. Name for download: supreme-corpus

Wikipedia Talk Pages Corpus

A medium-size collection of conversations from Wikipedia editors' talk pages. Name for download: wiki-corpus

Tennis Interviews

Transcripts of tennis singles post-match press conferences for major tournaments from 2007 to 2015 (6,467 post-match press conferences). Name for download: tennis-corpus

Reddit Corpus

Reddit conversations from over 900k subreddits, arranged by subreddit. A small subset sampled from 100 highly active subreddits is also available.

Name for download: subreddit-<name_of_subreddit> for the by-subreddit data, reddit-corpus-small for the small subset.

WikiConv Corpus

The full corpus of Wikipedia talk page conversations, based on the reconstruction described in this paper. Note that due to the large size of the data, it is split up by year. We separately provide block data retrieved directly from the Wikipedia block log, for reproducing the Trajectories of Blocked Community Members paper.

Name for download: wikiconv-<year> to download wikiconv data for the specified year.

Chromium Conversations Corpus

A collection of almost 1.5 million conversations and 2.8 million comments posted by developers reviewing proposed code changes in the Chromium project.

Name for download: chromium-corpus

Winning Arguments Corpus

A metadata-rich subset of conversations made in the r/ChangeMyView subreddit between 1 Jan 2013 and 7 May 2015, with information on whether a speaker's utterance received a delta (i.e., succeeded in changing the original poster's view).

Name for download: winning-args-corpus

Coarse Discourse Corpus

A subset of Reddit conversations that have been manually annotated with discourse act labels.

Name for download: reddit-coarse-discourse-corpus

Persuasion For Good Corpus

A collection of online conversations generated by Amazon Mechanical Turk workers, where one participant (the persuader) tries to convince the other (the persuadee) to donate to a charity.

Name for download: persuasionforgood-corpus

Intelligence Squared Debates Corpus

Transcripts of debates held as part of Intelligence Squared Debates.

Name for download: iq2-corpus

Friends Corpus

A collection of all the conversations that occurred over 10 seasons of Friends, a popular American TV sitcom that ran in the 1990s.

Name for download: friends-corpus

Switchboard Dialog Act Corpus

A collection of 1,155 five-minute telephone conversations between two participants, annotated with speech act tags.

Name for download: switchboard-corpus

Stanford Politeness Corpus (Wikipedia/Stack Exchange)

Two collections of requests (from Wikipedia and Stack Exchange respectively) with politeness annotations. Name for download: wikipedia-politeness-corpus (Wikipedia portion), stack-exchange-politeness-corpus (Stack Exchange portion).

Deception in Diplomacy Conversations

Conversational dataset with intended and perceived deception labels. Over 17,000 messages annotated by the sender for their intended truthfulness and by the receiver for their perceived truthfulness.

Name for download: diplomacy-corpus

Group Affect and Performance (GAP) Corpus

A conversational dataset comprising group meetings of two to four participants who deliberate in a group decision-making exercise. This dataset contains 28 group meetings with a total of 84 participants.

Name for download: gap-corpus

Wikipedia Articles for Deletion Corpus

A collection of Wikipedia's Articles for Deletion editor debates that occurred between January 1, 2005 and December 31, 2018. This corpus contains about 3,200,000 contributions by approximately 150,000 Wikipedia editors across almost 400,000 debates.

Name for download: wiki-articles-for-deletion-corpus

CaSiNo Corpus

CaSiNo (short for CampSite Negotiations) is a dataset of 1,030 negotiation dialogues. Two participants take the role of campsite neighbors and negotiate for Food, Water, and Firewood packages, based on their individual preferences and requirements.

Name for download: casino-corpus

SPOLIN Corpus

Selected Pairs of Learnable ImprovisatioN (SPOLIN) is a collection of more than 68,000 "Yes, and" type utterance pairs extracted from the long-form improvisation podcast Spontaneanation by Paul F. Tompkins, the Cornell Movie-Dialogs Corpus, and the SubTle corpus.

Name for download: spolin-corpus

...And your own corpus!

In addition to the provided datasets, you may also use ConvoKit with your own custom datasets by loading them into a convokit.Corpus object. This example script shows how to construct a Corpus from custom data.
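
As a rough sketch of what such a construction looks like (the speakers, texts, and IDs below are made up for illustration):

```python
from convokit import Corpus, Speaker, Utterance

# a hypothetical two-turn conversation; in practice this would come from your own data
alice = Speaker(id="alice")
bob = Speaker(id="bob")

utterances = [
    Utterance(id="u1", speaker=alice, conversation_id="u1",
              reply_to=None, timestamp=0, text="Hi, how are you?"),
    Utterance(id="u2", speaker=bob, conversation_id="u1",
              reply_to="u1", timestamp=1, text="Doing well, thanks!"),
]

corpus = Corpus(utterances=utterances)
corpus.print_summary_stats()
```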

Installation

This toolkit requires Python >= 3.9.

  1. Download the toolkit: pip3 install convokit
  2. Download spaCy's English model: python3 -m spacy download en_core_web_sm
  3. Download NLTK's 'punkt' model: import nltk; nltk.download('punkt') (in Python interpreter)
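
A quick way to check that the models from steps 2 and 3 are in place (a sketch, not part of the official install steps):

```python
import nltk
import spacy

spacy.load("en_core_web_sm")        # succeeds only if the spaCy English model is installed
nltk.data.find("tokenizers/punkt")  # raises LookupError if NLTK's punkt model is missing
```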

Alternatively, visit our GitHub page to install from source.

If you encounter difficulties with installation, check out our Troubleshooting Guide for a list of solutions to common issues.

Documentation

Documentation is hosted here. If you are new to ConvoKit, great places to get started are the Core Concepts tutorial for an overview of the ConvoKit "philosophy" and object model, and the High-level tutorial for a walkthrough of how to import ConvoKit into your project, load a Corpus, and use ConvoKit functions.

For an overview, watch our SIGDIAL talk introducing the toolkit: SIGDIAL 2020: Introducing ConvoKit

Contributing

We welcome community contributions. To see how you can help out, check the contribution guidelines.

Citing

If you use the code or datasets distributed with ConvoKit, please acknowledge the work tied to the respective component (indicated in the documentation) in addition to:

Jonathan P. Chang, Caleb Chiam, Liye Fu, Andrew Wang, Justine Zhang, Cristian Danescu-Niculescu-Mizil. 2020. "ConvoKit: A Toolkit for the Analysis of Conversations". Proceedings of SIGDIAL.

Contributors ✨

Thanks goes to these wonderful people (emoji key):

Cristian Danescu-Niculescu-Mizil: 💻 🔣 🤔 🚧 📖 👀
Andrew Wang: 💻 🔣 🤔 🚧 📖 👀
Justine Zhang: 💻 🔣 🤔 🚧 📖 👀
Jonathan Chang: 💻 🔣 🤔 🚧 📖 👀
Liye Fu: 💻 🔣 🤔 🚧 📖 👀
calebchiam: 💻 🔣 🤔 🚧 📖 👀
rgangela99: 💻
Khonzoda Umarova: 🔣 🚧
mwilbz: ⚠️
Alex Koen: 🐛
Emily Tseng: 🐛 🔣
Uliyana Kubasova: 🔣
Jack Schluger: 🐛 💻
Kushal Chawla: 🔣
June Cho: 🔣
Noam Eshed: 🔣
Andrew Szmurlo: 🔣
Katharine Sadowski: 🔣
Lucas Van Bramer: 🔣
Marianne Aubin: 🔣
Di Ni: 🔣
gdeng96: 🔣
Frank Li: 🔣
rjz46: 🔣
KatyBlumer: 🔣
als452: 🔣
KaminskyJ: 💻
Armaan Puri: 💻
Oscar So: 💻
Justin Cho: 🔣
seanzhangkx8: 💻 🤔 🚧 📖 👀

This project follows the all-contributors specification. Contributions of any kind welcome!