lamthuyvo / social-media-data-scripts


Mining data from social media platforms


At present, most journalists treat social sources like they would any other — individual anecdotes and single points of contact. But to do so with a handful of tweets and Instagram posts is to ignore the potential of hundreds of millions of others.

Many stories lie dormant in the vast amounts of data produced by everyday consumers. Here's a guide and toolbox that may help you find them. Below are a number of scripts developed to mine data from APIs.

Slides that explain the work process can be found here. I'm currently in the process of writing more thorough resources on the subject of social media data mining. Feel free to reach out with questions on Twitter @lamthuyvo!

How to get the data

What data you can get with the scripts

This is a growing list of scripts we've put together to make social data mining easier.

There are broadly three different ways to harvest data from the social web:

APIs

The kind of data that official channels like API data streams provide is very limited. Despite harboring warehouses of data on consumers’ behavior, social media companies only provide a sliver of it through their APIs (for Facebook, developers can only get data for public pages and groups, and for Twitter, this access is often restricted to a set number of tweets from a user’s timeline or to a set time frame for search).

Scripts and instructions related to APIs can be found in the 01-apis directory of this repository.
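For a sense of what that looks like in practice, here is a minimal sketch of an API request made with tweepy, one of the libraries listed in requirements.txt. The credential placeholders and the BuzzFeed account are illustrative assumptions, and the actual scripts in 01-apis may be structured differently.

    import tweepy

    # Placeholder credentials -- replace with the keys from apps.twitter.com
    CONSUMER_KEY = "YOUR_CONSUMER_KEY"
    CONSUMER_SECRET = "YOUR_CONSUMER_SECRET"
    ACCESS_TOKEN = "YOUR_ACCESS_TOKEN"
    ACCESS_TOKEN_SECRET = "YOUR_ACCESS_TOKEN_SECRET"

    # Authenticate and open a connection to Twitter's API
    auth = tweepy.OAuthHandler(CONSUMER_KEY, CONSUMER_SECRET)
    auth.set_access_token(ACCESS_TOKEN, ACCESS_TOKEN_SECRET)
    api = tweepy.API(auth)

    # Pull recent tweets from a public account's timeline; the API only
    # reaches back a limited number of tweets, as described above
    for tweet in api.user_timeline(screen_name="BuzzFeed", count=200):
        print(tweet.created_at, tweet.text)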

Personal Archives

There are ways for users of social media platforms to request and download archives of their own online persona and behavior. Some services like Facebook or Twitter will allow users to download a history of the data that constitutes their posts, their messaging, or their profile photos.

Scripts and instructions related to personal archives can be found in the 02-personal-archives directory of this repository.
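As a rough illustration of working with one of these downloads, here is a short sketch that reads a Twitter archive with Python. The file name tweets.csv and its column names are assumptions based on the CSV format Twitter has shipped in past exports; check the layout of your own download before running anything like this.

    import csv

    # Assumes a Twitter archive export containing a tweets.csv file
    # with "timestamp" and "text" columns -- adjust to your own download
    with open("tweets.csv", encoding="utf-8") as archive:
        for row in csv.DictReader(archive):
            print(row["timestamp"], row["text"])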

Scraping

While there's plenty of social media data on display on the sites you browse, extracting social media data from the platforms through scraping is often against the terms of service. Scraping a social media platform can get users booted from a service and potentially even result in a lawsuit.

If you end up wanting to look into harvesting data from the social web this way, you can find related information in the 03-scraping directory of this repository.
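For context only, here is a bare-bones sketch of the scraping approach using the requests and beautifulsoup4 libraries from requirements.txt. The URL and the h2 selector are placeholders, not a real target; check a site's terms of service and robots.txt before scraping it.

    import requests
    from bs4 import BeautifulSoup

    # Placeholder page -- scraping most social platforms violates their terms of service
    url = "https://example.com/some-public-page"
    response = requests.get(url)

    # Parse the returned HTML and print the text of every h2 element
    soup = BeautifulSoup(response.text, "html.parser")
    for headline in soup.select("h2"):
        print(headline.get_text(strip=True))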

Setup

Below is a set of instructions you can follow to get your machine ready to run any of the Python scripts in this repository. While Python is one of the most powerful languages for data gathering and analysis, it can take a few tries to get it installed and running properly. If you're a beginner, don't despair: these growing pains are normal and vary from machine to machine. We promise the payoff is worth it!

Before you begin

  1. If you don’t already have Python installed, start by getting Python up and running. There are two major Python versions, Python 2 and Python 3; please install Python 3, as it handles modern web text and character encodings much better. You should also have git installed. A helpful guide to getting a brand new machine set up can be found here, courtesy of NPR's Visuals Team.
  2. You should also make sure you have pip.

Installing all the libraries you need

  1. You need to get developer OAuth credentials from the social media platforms you want to tap into. OAuth credentials are like an ID and password (often referred to as an app ID and secret, respectively) that you create so that an app or a script can access the data stream a social media company provides. This data stream, also known as a company's application programming interface, or API, is often accessible with these credentials through a URL (for example, this is what one of these queries could look like: https://graph.facebook.com/v2.6/BuzzFeed/posts/?fields=message/&access_token=YOURID|YOURSECRET). Here's where you can get credentials:
    Twitter: https://apps.twitter.com/
    Facebook: https://developers.facebook.com/
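To make the ID-and-secret idea concrete, here is a minimal sketch of how a query like the example above could be made from Python with the requests library. The app ID and secret are placeholders, and the endpoint simply mirrors the example URL; real Graph API versions and permissions change over time.

    import requests

    # Placeholder credentials from developers.facebook.com
    APP_ID = "YOURID"
    APP_SECRET = "YOURSECRET"

    # Mirrors the example query above: public posts from the BuzzFeed page
    url = "https://graph.facebook.com/v2.6/BuzzFeed/posts/"
    params = {
        "fields": "message",
        "access_token": "{}|{}".format(APP_ID, APP_SECRET),
    }

    response = requests.get(url, params=params)
    print(response.json())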

Setting up your system

  1. Open up your Terminal and use the cd bash command to go to the folder where you want to clone this repository of code:
    git clone https://github.com/lamthuyvo/social-media-data-scripts.git
    cd social-media-data-scripts
  2. Then install all the dependencies, i.e. the Python libraries we are using for these scripts, by running the following command:
    pip install -r requirements.txt

    or

    sudo pip install -r requirements.txt

    If you have problems installing the dependencies through the requirements.txt file, you can install each library individually with the following commands:

    pip install requests
    pip install tweepy --ignore-installed six
    pip install beautifulsoup4

    or

    sudo pip install requests
    sudo pip install tweepy --ignore-installed six
    sudo pip install beautifulsoup4
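Once the installation finishes, a quick way to confirm that everything is in place is to try importing the libraries from Python; if no errors appear, you are ready to go. (This check is just a suggestion, not part of the repository's scripts.)

    # Quick sanity check: these imports should succeed without errors
    import requests
    import tweepy
    import bs4

    print(requests.__version__, tweepy.__version__, bs4.__version__)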

Getting your data

Hooray! You're ready to get your data now. We have created a directory for scripts that you can use to get data from each data source.

You can follow the directions for each script in its sub-folder.

Further Reading

There are numerous useful resources and tools out on the web for social media data gathering. Below is an incomplete list that I'll continue to update.

Articles

Books

Academic research

Technical resources