
mrjob: the Python MapReduce library
=====================================

.. image:: https://github.com/Yelp/mrjob/raw/master/docs/logos/logo_medium.png

mrjob is a Python 2.7/3.4+ package that helps you write and run Hadoop Streaming jobs.

`Stable version (v0.7.4) documentation <http://mrjob.readthedocs.org/en/stable/>`__

`Development version documentation <http://mrjob.readthedocs.org/en/latest/>`__

.. image:: https://travis-ci.org/Yelp/mrjob.png
   :target: https://travis-ci.org/Yelp/mrjob

mrjob fully supports Amazon's Elastic MapReduce (EMR) service, which allows you to buy time on a Hadoop cluster on an hourly basis. mrjob has basic support for Google Cloud Dataproc (Dataproc), which allows you to buy time on a Hadoop cluster on a minute-by-minute basis. It also works with your own Hadoop cluster.

Some important features:

Installation
------------

::

    pip install mrjob

As of v0.7.0, Amazon Web Services and Google Cloud Services are optional dependencies. To use these, install with the ``aws`` and ``google`` targets, respectively. For example::

    pip install mrjob[aws]
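
Similarly, to pull in the Google Cloud dependencies::

    pip install mrjob[google]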

A Simple Map Reduce Job
-----------------------

Code for this example and more live in ``mrjob/examples``.

.. code-block:: python

"""The classic MapReduce job: count the frequency of words. """ from mrjob.job import MRJob import re

WORD_RE = re.compile(r"[\w']+")

class MRWordFreqCount(MRJob):

   def mapper(self, _, line):
       for word in WORD_RE.findall(line):
           yield (word.lower(), 1)

   def combiner(self, word, counts):
       yield (word, sum(counts))

   def reducer(self, word, counts):
       yield (word, sum(counts))

if name == 'main': MRWordFreqCount.run()

Try It Out!
-----------

::

    # locally
    python mrjob/examples/mr_word_freq_count.py README.rst > counts
    # on EMR
    python mrjob/examples/mr_word_freq_count.py README.rst -r emr > counts
    # on Dataproc
    python mrjob/examples/mr_word_freq_count.py README.rst -r dataproc > counts
    # on your Hadoop cluster
    python mrjob/examples/mr_word_freq_count.py README.rst -r hadoop > counts
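
You can also launch a job from another Python script with mrjob's runner API. Here's a minimal sketch, assuming the example module above is importable as ``mr_word_freq_count``:

.. code-block:: python

    from mr_word_freq_count import MRWordFreqCount

    # pass the same arguments you would use on the command line,
    # e.g. ['-r', 'emr', 'README.rst']
    job = MRWordFreqCount(args=['README.rst'])

    with job.make_runner() as runner:
        runner.run()
        # parse_output() decodes the raw output stream back into key/value pairs
        for word, count in job.parse_output(runner.cat_output()):
            print(word, count)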

Setting up EMR on Amazon
------------------------
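
At minimum, the EMR runner needs AWS credentials. One common way to supply them is through environment variables, which mrjob picks up via boto3; you can also put them in ``mrjob.conf``. A sketch (see the EMR quickstart in the docs for the full setup)::

    export AWS_ACCESS_KEY_ID=<your key ID>
    export AWS_SECRET_ACCESS_KEY=<your secret key>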

Setting up Dataproc on Google
-----------------------------
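
For Dataproc, mrjob relies on Google's application default credentials. A sketch of two common ways to set them up (see the Dataproc quickstart in the docs for the full setup, including choosing a project)::

    # log in interactively with the gcloud CLI
    gcloud auth application-default login

    # or point at a service account key file
    export GOOGLE_APPLICATION_CREDENTIALS=/path/to/service-account-key.json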

Advanced Configuration
----------------------

To run in other AWS regions, upload your source tree, run ``make``, and use other advanced mrjob features, you'll need to set up ``mrjob.conf``. mrjob looks for its conf file in:

* the location specified by the ``MRJOB_CONF`` environment variable
* ``~/.mrjob.conf``
* ``/etc/mrjob.conf``

See the `mrjob.conf documentation <https://mrjob.readthedocs.io/en/latest/guides/configs-basics.html>`__ for more information.
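
As a rough starting point, ``mrjob.conf`` is a YAML file keyed by runner. The options below are illustrative placeholders, not required settings; check the configuration docs for what your version supports:

.. code-block:: yaml

    runners:
      emr:
        region: us-west-2
        instance_type: m5.xlarge
        num_core_instances: 4
      hadoop:
        hadoop_bin: /usr/bin/hadoop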

Project Links
-------------

Reference
---------

More Information
----------------

Thanks to `Greg Killion <mailto:greg@blind-works.net>`__ (`ROMEO ECHO_DELTA <http://www.romeoechodelta.net/>`__) for the logo.