jamesallenevans opened this issue 3 years ago
Intuitions:
Dataset: Pre-trained GPT-3, BERT, and humans (e.g., students, MTurkers)
Intuitions:
Data: Google Drive folder - contains 2 pre-trained models (the first using Bhargav's code for the labelling task, the second GPT-2). The data is sampled sentences from Early Marx (Volume 3), Late Marx (Volume 27), and Ambiguous (Late??) Marx (Volume 35). Both data CSVs in the folder follow the same schema, but one is a tiny subset of the other.
Intuition: Following last week's questions, namely that (1) immigration coverage is increasingly tied to economic and security issues, and (2) pre- and post-9/11 news differ in their topical framing of immigrants, refugees, and specific cultures, I have the following intuitions for this week: (1) Republican-leaning and Democrat-leaning media may apply different evaluative criteria to immigrants/refugees. (1)+ It should be possible to identify the target audiences that different media address on immigration issues.
Dataset: COCA News (I have the data for 2002 and can perhaps start from there): https://drive.google.com/file/d/1rzcTmYxeT5zLRG1UJnyce2izIIM324zi/view?usp=sharing
Intuitions: (1) + The words most similar to "finance" in the subreddit Personal Finance corpus differ from the words most similar to "finance" in the subreddit Wallstreetbets corpus -- when people talk about finance, they are talking about different things. (2) * The Wallstreetbets corpus is distinct from the Personal Finance corpus.
Data:
Posts from subreddit Personal Finance: Download
Posts from subreddit Wallstreetbets: Download
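Intuition (1) above could be probed by comparing the nearest neighbors of "finance" under embeddings trained separately on each subreddit. A minimal sketch, with made-up 2-d vectors standing in for real trained embeddings (the vocabularies and vectors here are purely illustrative):

```python
import math

def cosine(u, v):
    num = sum(a * b for a, b in zip(u, v))
    den = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return num / den

def neighbors(word, emb, k=2):
    """Top-k nearest neighbors of `word` in `emb` by cosine similarity."""
    sims = sorted(((cosine(emb[word], vec), w)
                   for w, vec in emb.items() if w != word), reverse=True)
    return [w for _, w in sims[:k]]

# Hypothetical 2-d vectors standing in for embeddings fit on each subreddit
# dump; real vectors would come from models trained on the linked data.
pf  = {"finance": [0.9, 0.1], "budget": [0.85, 0.15],
       "saving": [0.8, 0.2], "yolo": [0.1, 0.9]}
wsb = {"finance": [0.3, 0.7], "budget": [0.9, 0.1],
       "tendies": [0.25, 0.75], "yolo": [0.2, 0.8]}

pf_nn, wsb_nn = neighbors("finance", pf), neighbors("finance", wsb)
overlap = len(set(pf_nn) & set(wsb_nn)) / len(set(pf_nn) | set(wsb_nn))
print(pf_nn, wsb_nn, overlap)  # disjoint neighbor sets would support (1)
```

A small neighbor-set overlap (Jaccard) between the two corpora would be one concrete signal that the communities mean different things by "finance".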
*1. Feminism has always been a theme of Gilmore Girls. +2. Rory's speech style and content change when she graduates from high school and enters college.
Dataset: http://www.crazy-internet-people.com/site/gilmoregirls/scripts.html
Intuitions: (1) (*) Given the low signal-to-noise ratio in financial markets, incorporating BERT embeddings may not improve our ability to forecast stock market returns. (2) (+) BERT embeddings can beat human traders at digesting textual information.
Data: financial news, BERT, stock returns from CRSP
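The low-SNR concern in intuition (1) can be illustrated with a toy forecasting setup: synthetic "embedding features" with a weak planted signal, fit by OLS. All numbers here (dimensionality, noise scale, coefficient size) are assumptions for illustration, not properties of the CRSP data:

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical design matrix: 100 trading days x 8 embedding features
# (in practice, e.g., dimension-reduced [CLS] vectors of each day's news).
X = rng.normal(size=(100, 8))
beta_true = np.zeros(8)
beta_true[0] = 0.05  # tiny planted signal, mimicking a low-SNR market
y = X @ beta_true + rng.normal(scale=1.0, size=100)  # noisy "returns"

# OLS fit: beta_hat = argmin ||y - X b||^2
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
r2 = 1 - np.sum((y - X @ beta_hat) ** 2) / np.sum((y - y.mean()) ** 2)
print(round(r2, 3))  # small in-sample R^2: most variance is noise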
Intuition:
Dataset : https://data.world/romanticmonkey/syrianwarfakenews
Intuitions: I expect romantic language to be generated by a model trained on fanfiction, even if the input suggests a different genre.* I expect more positive word associations with marginalized groups in fanfiction than in the source material.+
Data: Scraping data from Archive of Our Own (http://archiveofourown.org/) using this script (https://github.com/radiolarian/AO3Scraper), along with the Davies TV Corpus
Data: Glassdoor company review database.
Two intuitions:
Dataset: Presidential speeches corpus from https://millercenter.org/the-presidency/presidential-speeches Can be scraped with: https://github.com/RobertoBarrosoLuque/ContentAnalysisPresidentialRhetoric/blob/main/ScrapeSpeeches/scrape_miller.py
Intuitions:
(+) In structured debates, the winning teams' arguments will be centered around the debate topic. (*) Winners of different debates on a similar topic (like climate change) will be more closely aligned with each other than the losers are.
I didn't collect data for this because it is unrelated to my project, but it can be scraped from the Munk Debates and Intelligence Squared websites.
Intuitions
(*) BERT provides unprecedented performance on this dataset compared to any other model used so far. (+) For this dataset, BERT embeddings might not be as informative as LSTM embeddings.
Dataset: Sotheby's Dataset (shared in previous weeks)
Intuitions: 1) (+) When trained on presidential speeches, GPT-3 generates text that readers perceive as convincing (would require human ratings). 2) (*) Word embeddings could help us predict whether an executive order is later revoked (above and beyond the president's party affiliation in relation to successive presidents').
Datasets: Presidential speeches corpus from https://millercenter.org/the-presidency/presidential-speeches Can be scraped with: https://github.com/RobertoBarrosoLuque/ContentAnalysisPresidentialRhetoric/blob/main/ScrapeSpeeches/scrape_miller.py Email me at lilygrier@uchicago.edu for the dataset of executive orders!
Intuitions:
People speaking about Latin American politicians who ran for president (2005-2015):
Corpus del Español: This corpus contains about two billion words of Spanish, taken from about two million web pages from 21 different Spanish-speaking countries. It was web-scraped in 2015.
Class dataset: Corpus del Español ("SPAN").
First, write down two intuitions you have about broad content patterns you will discover about your data as encoded within a pre-trained or fine-tuned deep contextual (e.g., BERT) embedding. These can be the same as those from last week...or they can evolve based on last week's explorations and the novel possibilities that emerge from dynamic, contextual embeddings--e.g., they could be about text generation from a tuned model. As before, place an asterisk next to the one you expect most firmly, and a plus next to the one that, if true, would be the biggest or most important surprise to others (especially the research community to whom you might communicate it, if robustly supported). Second, describe the dataset(s) you would like to fine-tune or embed within a pre-trained contextual embedding model to explore these intuitions. Note that this need not be large text--you could simply encode a few texts in a pretrained contextual embedding and explore their position relative to one another and the semantics of the model. Then place (a) a link to the data, (b) a script to download and clean it, (c) a reference to a class dataset, or (d) an invitation for a TA to contact you about it. Please do NOT spend time/space explaining the precise embedding or analysis strategy you will use to explore your intuitions. (Then upvote the 5 most interesting, relevant and challenging challenge responses from others).
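The prompt's suggestion to "simply encode a few texts in a pretrained contextual embedding and explore their position relative to one another" can be sketched as follows. The token vectors below are placeholders standing in for real last-layer hidden states from a model like BERT; only the pooling-and-compare mechanics are real:

```python
import math

def mean_pool(token_vecs):
    """Average token-level vectors into a single document vector."""
    dim = len(token_vecs[0])
    return [sum(v[i] for v in token_vecs) / len(token_vecs) for i in range(dim)]

def cosine(u, v):
    num = sum(a * b for a, b in zip(u, v))
    den = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return num / den

# Placeholder token embeddings for three short texts; in practice these
# would come from a pretrained contextual model, one vector per token.
docs = {
    "text_a": [[1.0, 0.0], [0.8, 0.2]],
    "text_b": [[0.9, 0.1], [1.0, 0.0]],
    "text_c": [[0.0, 1.0], [0.1, 0.9]],
}
vecs = {name: mean_pool(toks) for name, toks in docs.items()}
print(round(cosine(vecs["text_a"], vecs["text_b"]), 3))  # high: similar texts
print(round(cosine(vecs["text_a"], vecs["text_c"]), 3))  # low: dissimilar texts
```

Relative positions (which pairs sit close, which far) are then directly readable off the pairwise cosine matrix, which is enough to explore many of the intuitions above without fine-tuning anything.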