lintool / warcbase

Warcbase is an open-source platform for managing and analyzing web archives
http://warcbase.org/

Tweet URL Extraction: All Twitter Shortlinks #216

Open ianmilligan1 opened 8 years ago

ianmilligan1 commented 8 years ago

Right now, our script for URL extraction is as follows:

import org.warcbase.spark.matchbox._
import org.warcbase.spark.matchbox.TweetUtils._
import org.warcbase.spark.rdd.RecordRDD._

val tweets = RecordLoader.loadTweets(
  "/mnt/vol1/data_sets/elxn42/ruest-white/elxn42-tweets-combined-deduplicated-unshortened-fixed.json", sc)
val r = tweets.flatMap(tweet => {"""http://[^ ]+""".r.findAllIn(tweet.text).toList})
  .countItems()
  .saveAsTextFile("/home/i2millig/tweet-test/tweet-urls-test.txt")

By grabbing URLs from the tweet text field (where long tweets get truncated, cutting URLs off mid-string), we just get results like:

(http://…,49033)
(http://t…,48066)
(http://t.…,45610)
(http://t.c…,42470)
(http://t.co…,38145)
(http://t.co/…,32723)
(http://t.co/pbFMYFZpQC,2902)
(http://t.co/lTTkYPlGX0,2823)
(http://t.co/mn2pyBGZmj,1964)
(http://t.co/rriRvt6DyI,1964)
(ad nauseam)

This is not very useful – so what's the best path? In the past, @ruebot and I have used unshorten.py in twarc.

ruebot commented 8 years ago

...which in turn uses https://github.com/edsu/unshrtn

We could incorporate that. Or we could create a method in warcbase that does the same thing, or maybe there is already a Java library for unshortening that we could just pull in.

lintool commented 8 years ago

Do we have a file which has the mapping from short URLs to the full URLs? If so, I can show you how to join in the data...

ruebot commented 8 years ago

@lintool can you clarify what you mean by "a file that has the mapping from short URLs to the full URLs"?

ruebot commented 8 years ago

...or, is this what you're looking for? https://github.com/edsu/unshrtn/blob/master/unshrtn.coffee

lintool commented 8 years ago

File that has:

http://t.co/pbFMYFZpQC http://foo.bar.com/
http://t.co/pg3SFzLc http://foo.bar.com/
...
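
A file in that shape drops straight into Spark. A minimal sketch of loading it into a pair RDD for joining (the path is hypothetical):

// One "short long" pair per line; split on the first run of whitespace.
val mapping = sc.textFile("/home/i2millig/tweet-test/short-to-long.txt")
  .map(line => { val Array(short, long) = line.split("\\s+", 2); (short, long) })
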
ruebot commented 8 years ago

Oh, https://github.com/edsu/twarc/blob/master/utils/unshorten.py#L37-L53 puts it back in the dataset with a new entry.

lintool commented 8 years ago

If I understand correctly what it's doing, that's absolutely terrible. That's the digital equivalent of going through a paper archive with a black magic marker, crossing out historical place names and replacing them with their modern names. Would you do that to a paper archive? No! So don't do it to a digital archive.

The correct way to do this is to have a separate file that has the mapping (per above), and join in the unshortened form during processing.

EDIT: okay, it adds in a new field in the JSON, which isn't as bad as I thought. Analogy would be to go through a paper archive and put a post-it note next to every instance of a historical place name and on the post-it note write its modern name.

ruebot commented 8 years ago

You don't do it on the preservation/master version of the dataset; you always cat it out to a new file. By default it is stdout. It only reads the preservation/master version of the dataset.

lintool commented 8 years ago

If that's the case, it's a waste of space. You still just want

short long
short long
...

ruebot commented 8 years ago

Would the output be:

short, count, long, count
http://t.co/pbFMYFZpQC, 12, http://foo.bar.com/, 123

lintool commented 8 years ago

You wouldn't even need the count. If you just had short, long, you could process the original archival JSON and just join in the long form as needed.
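
A sketch of that join, reusing the mapping pair RDD from the earlier snippet and the tweets RDD from the original script (countItems comes from the matchbox imports up top):

// Pull the short links out of the tweet text, keyed for the join.
val shortUrls = tweets
  .flatMap(tweet => """http://[^ ]+""".r.findAllIn(tweet.text).toList)
  .map(url => (url, 1))

// Join in the long form and count that instead; the archival JSON is never touched.
val longUrlCounts = shortUrls.join(mapping)
  .map { case (_, (_, long)) => long }
  .countItems()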

ianmilligan1 commented 8 years ago

Just re-opening this. Did we reach any agreement here?

lintool commented 8 years ago

Do we have a way to generate a file that has the following?

short-url full-url
short-url full-url
...
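
If not, one way to build it is a HEAD request per link, reading the redirect target without following it. A rough Scala sketch (single redirect hop, no retries or rate limiting; the input file name is hypothetical):

import java.net.{HttpURLConnection, URL}

// Resolve one short URL by reading the Location header of the redirect.
// t.co is a single hop; chained shorteners would need a loop here.
def unshorten(short: String): String = {
  val conn = new URL(short).openConnection().asInstanceOf[HttpURLConnection]
  conn.setInstanceFollowRedirects(false)
  conn.setRequestMethod("HEAD")
  val long = Option(conn.getHeaderField("Location")).getOrElse(short)
  conn.disconnect()
  long
}

// One short URL per line in, "short-url full-url" pairs out on stdout.
scala.io.Source.fromFile("short-urls.txt").getLines()
  .foreach(short => println(s"$short ${unshorten(short)}"))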