StanGirard / TrollHunter

Twitter Troll & Fake News Hunter - Crawls news websites and twitter to identify fake news
https://doc.trollhunter.guru/
GNU General Public License v3.0

TrollHunter

TrollHunter is a Twitter crawler & news website indexer. It aims to find troll farmers & fake news on Twitter.

It is composed of three parts:

Installation

You can either run

pip3 install TrollHunter

or clone the project and run

pip3 install -r requirements.txt

Docker

TrollHunter requires several services to run.

You can either launch them individually if you already have them set up, or use our docker-compose.yml.

Setup

Edit the .env file with the required values, then export the .env variables:

export $(cat .env | sed 's/#.*//g' | xargs)

Twitter crawler

Twint

To crawl tweets and extract users' information we use Twint, which allows us to gather a lot of information without using the Twitter API.
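
A rough idea of what such a crawl looks like with upstream Twint (a minimal sketch; the username and limit are placeholders, and our fork adds more on top of this):

import twint

c = twint.Config()
c.Username = "some_account"   # placeholder account
c.Limit = 100                 # number of tweets to fetch
c.Store_object = True         # keep results in memory instead of printing

twint.run.Search(c)           # crawl the account's tweets
tweets = twint.output.tweets_list

twint.run.Lookup(c)           # fetch the account's profile information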

Some of the benefits of using Twint vs Twitter API:

When we used Twint, we encountered some problems:

So we decided to fork the project.

This allows us to:

API

For this we use the open-source framework Flask.

Four endpoints are defined:

Some query parameters are available:
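
The sketch below only illustrates the general shape of such a Flask API; the route, its query parameter and the crawl_user helper are hypothetical, not the project's actual endpoints:

from flask import Flask, jsonify, request

app = Flask(__name__)

def crawl_user(username, limit):
    """Placeholder for the Twint-based crawl described above."""
    return []

# Hypothetical endpoint: trigger a crawl for a given user and return the tweets.
@app.route("/tweets/<username>")
def get_tweets(username):
    limit = int(request.args.get("limit", 100))   # example query parameter
    return jsonify(crawl_user(username, limit))

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)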

Twitter Storage

Information retrieved with Twint is stored in Elasticsearch. We do not use the default Twint storage format because we want stronger relationship parsing. There are currently three indices:

The first and second indices are stored as in Twitter. The third is built to store interactions from followers/following, conversations and retweets.
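
For illustration, indexing a single interaction document with the Python client could look like this (the index name and fields are assumptions, not the project's actual mapping):

from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

doc = {
    "source_user": "user_a",          # who interacted
    "target_user": "user_b",          # with whom
    "interaction": "retweet",         # or "follow", "reply", ...
    "tweet_id": "1234567890",
    "date": "2020-05-01T12:00:00",
}

es.index(index="twitter_interaction", body=doc)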

Twitter interaction

News Indexer

The second main part of the project is the crawler and indexer of news.

For this, we use the sitemap XML files of news websites to crawl all their articles. From a sitemap file, we extract the sitemap and url tags.

The sitemap tag is a link to a child sitemap XML file for a specific category of articles on the website.

The url tag represents an article/news item on the website.
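
A minimal sketch of that extraction (not the project's actual crawler; the URL is a placeholder):

import requests
import xml.etree.ElementTree as ET

NS = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}

def parse_sitemap(url):
    """Return (child_sitemaps, article_urls) found in one sitemap file."""
    root = ET.fromstring(requests.get(url, timeout=10).content)
    children = [loc.text for loc in root.findall("sm:sitemap/sm:loc", NS)]
    articles = [loc.text for loc in root.findall("sm:url/sm:loc", NS)]
    return children, articles

# children, articles = parse_sitemap("https://example-news.com/sitemap.xml")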

The root url of a sitemap is stored in a PostgreSQL database along with a trust level for the website (Oriented, Verified, Fake News, ...) and headers. The headers are the tags we want to extract from each url tag, which contain details about the article (title, keywords, publication date, ...).

The headers are also the list of fields used in the ElasticSearch index pattern.

While crawling sitemaps, we insert each new child sitemap into the database with its last modification date, or update it for the ones already in the database. The last modification date is used to crawl only the sitemaps that changed since the last crawl.
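
Schematically, that upsert could look like this (table and column names are assumptions; only the keep-the-latest-lastmod idea is shown):

import psycopg2

conn = psycopg2.connect("dbname=trollhunter user=postgres")   # assumed connection string

with conn, conn.cursor() as cur:
    cur.execute(
        """
        INSERT INTO sitemap (url, lastmod)
        VALUES (%s, %s)
        ON CONFLICT (url) DO UPDATE SET lastmod = EXCLUDED.lastmod
        """,
        ("https://example-news.com/sitemap-politics.xml", "2020-05-01"),
    )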

The data extracted from the url tags is built into a dataframe and then sent to ElasticSearch for further use with the requests in the Twint API.
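
For example, sending such a dataframe to ElasticSearch in one bulk request (index name and columns are assumptions):

import pandas as pd
from elasticsearch import Elasticsearch, helpers

es = Elasticsearch("http://localhost:9200")

df = pd.DataFrame([
    {"url": "https://example-news.com/article-1",
     "title": "Example title",
     "publication_date": "2020-05-01"},
])

helpers.bulk(es, ({"_index": "news", "_source": row} for row in df.to_dict(orient="records")))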

At the same time, some sitemaps don't provide keywords for their articles. Hence, we retrieve from ElasticSearch the entries without keywords, download the content of each article, extract the keywords with NLP, and finally update the entries in ElasticSearch.
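
A sketch of that backfill (the index, field names and extract_keywords helper are assumptions):

from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

def extract_keywords(url):
    """Placeholder: download the article and run the NLP keyword extraction."""
    return []

missing = es.search(
    index="news",
    body={"query": {"bool": {"must_not": {"exists": {"field": "keywords"}}}}, "size": 50},
)

for hit in missing["hits"]["hits"]:
    keywords = extract_keywords(hit["_source"]["url"])
    es.update(index="news", id=hit["_id"], body={"doc": {"keywords": keywords}})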

How it works

Run

For the crawler/indexer:

from TrollHunter.news_crawler import scheduler_news

scheduler_news(time_interval)

For updating keywords:

from TrollHunter.news_crawler import scheduler_keywords

scheduler_keywords(time_interval, max_entry)

Or see the main usage with Docker.

Grafana

We use Grafana for visualizing and monitoring different crawler/indexer events, such as the insertion of a URL into ElasticSearch or the extraction of keywords from an article.


Create new events.

Text analysis

The text analysis part is under TrollHunter/texto. It aims to process a text or a set of texts to retrieve useful information that can help determine the "troll" status of a user or link a text to a news article.

Several classes do the job:

Keyword

Keyword extraction is useful because it can help detect the topics of an input text. To extract keywords from a text, just import the "extract" function from Keyword and call it with a text as input. The "extract" function is a wrapper around two extraction functions:

We run both on the same text and merge their results to get keywords from both algorithms. Since the algorithms differ, their results sometimes differ too, so we merge them into a set of unique keywords to combine both views. Up to 75 keywords are returned: 25 from extract_v1 and 50 from extract_v2 (these numbers can be adjusted by parameter).
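
Schematically, the merge boils down to this (the bodies of extract_v1 and extract_v2 are placeholders here; only the merging logic is illustrated):

def extract_v1(text, limit=25):
    return []   # placeholder for the first extraction algorithm

def extract_v2(text, limit=50):
    return []   # placeholder for the second extraction algorithm

def extract(text):
    """Merge keywords from both extractors into one set of unique keywords."""
    return set(extract_v1(text)) | set(extract_v2(text))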

Feelings

Feelings extraction extracts polarity, feelings and subjectivity as numerical values from a text or set of texts. To extract them, import the functions get_sentiment_from_tweets, get_polarity and get_subjectivity from Sentiment.py. We use TextBlob for polarity and subjectivity analysis, and SentimentIntensityAnalyzer from nltk.sentiment.vader (nltk package) for feelings analysis.
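
For reference, the raw library calls behind those helpers look roughly like this (a standalone sketch, not the actual Sentiment.py code):

import nltk
from textblob import TextBlob
from nltk.sentiment.vader import SentimentIntensityAnalyzer

nltk.download("vader_lexicon")   # required once for VADER

text = "This is a wonderful example tweet!"

blob = TextBlob(text)
print(blob.sentiment.polarity, blob.sentiment.subjectivity)   # polarity & subjectivity

sia = SentimentIntensityAnalyzer()
print(sia.polarity_scores(text))   # neg/neu/pos/compound feeling scores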

Average Indicator

This one computes and extracts average useful data from a set of tweets for a user (or a set of users). It consists of one class called "Indicator". You give it a folder containing a set of user CSV files and call the "get_all_indicator_users" function to apply all our algorithms, compute an average and detect some patterns. We can, for instance, compare a set of troll users with a set of non-troll users.
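
A rough sketch of that kind of averaging (the column names and folder layout are assumptions, not the actual Indicator implementation):

import glob
import pandas as pd

def get_all_indicator_users(folder):
    """Average a few numeric columns over every user CSV file in a folder."""
    results = {}
    for path in glob.glob(f"{folder}/*.csv"):
        df = pd.read_csv(path)
        results[path] = df[["polarity", "subjectivity"]].mean().to_dict()
    return results

# e.g. compare the averages of a troll folder and a non-troll folder:
# print(get_all_indicator_users("data/trolls"))
# print(get_all_indicator_users("data/non_trolls"))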