big-data-europe / docker-hadoop

Apache Hadoop docker image


Changes

Version 2.0.0 introduces the wait_for_it script for cluster startup

Hadoop Docker

Supported Hadoop Versions

See the repository branches for supported Hadoop versions

Quick Start

To deploy an example HDFS cluster, run:

  docker-compose up
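
Once the containers are up, the cluster can be sanity-checked with standard Docker and HDFS commands; a minimal sketch, assuming the namenode service keeps the default container name namenode:

  # Optional check: list the cluster containers and query HDFS for live datanodes
  docker-compose ps
  docker exec namenode hdfs dfsadmin -report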

Run the example wordcount job:

  make wordcount
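
The make target wraps the job submission; a rough manual equivalent, run inside the namenode container, might look like the sketch below (the container name, jar path, and HDFS paths are assumptions for illustration, not taken from the Makefile):

  # Hedged sketch: run the stock MapReduce wordcount example by hand
  docker exec -it namenode bash
  # inside the container:
  hdfs dfs -mkdir -p /input
  hdfs dfs -put /etc/hadoop/core-site.xml /input/
  # $HADOOP_HOME and the examples jar location depend on the image's Hadoop version
  hadoop jar "$HADOOP_HOME"/share/hadoop/mapreduce/hadoop-mapreduce-examples-*.jar \
      wordcount /input /output
  hdfs dfs -cat /output/part-r-00000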

Or deploy in Docker Swarm:

  docker stack deploy -c docker-compose-v3.yml hadoop
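
After deploying, standard Swarm commands show whether the services have converged:

  # Optional: check that every service in the stack has running replicas
  docker stack services hadoop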

docker-compose creates a Docker network for the cluster, which can be found by running docker network list (e.g. dockerhadoop_default).

Run docker network inspect on that network (e.g. dockerhadoop_default) to find the IP address the Hadoop web interfaces are published on, and access them at the following URLs:

Namenode: http://<dockerhadoop_IP_address>:9870/dfshealth.html#tab-overview
History server: http://<dockerhadoop_IP_address>:8188/applicationhistory
Datanode: http://<dockerhadoop_IP_address>:9864/
Nodemanager: http://<dockerhadoop_IP_address>:8042/node
Resource manager: http://<dockerhadoop_IP_address>:8088/
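
For convenience, a Go template on docker network inspect prints every container's address on that network in one command (dockerhadoop_default is the compose default name and may differ on your machine):

  # Print the name and IP of each container attached to the cluster network
  docker network inspect -f \
    '{{range .Containers}}{{.Name}} {{.IPv4Address}}{{"\n"}}{{end}}' \
    dockerhadoop_default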

Configure Environment Variables

Configuration parameters can be specified in the hadoop.env file or as environment variables for specific services (e.g. namenode, datanode, etc.):

  CORE_CONF_fs_defaultFS=hdfs://namenode:8020

CORE_CONF corresponds to core-site.xml. fs_defaultFS=hdfs://namenode:8020 will be transformed into:

  <property><name>fs.defaultFS</name><value>hdfs://namenode:8020</value></property>

To define a dash inside a configuration parameter, use a triple underscore, e.g. YARN_CONF_yarn_log___aggregation___enable=true (yarn-site.xml):

  <property><name>yarn.log-aggregation-enable</name><value>true</value></property>
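
Concretely, the prefix stripping and underscore rewriting can be pictured with a small shell sketch; this is illustrative only, not the actual base/entrypoint.sh logic:

  # Illustrative only: derive a Hadoop property from an env variable name and value.
  # Triple underscores become dashes, remaining underscores become dots.
  env_to_property() {
    local name value
    name=$(echo "${1#*_CONF_}" | sed -e 's/___/-/g' -e 's/_/./g')
    value=$2
    echo "<property><name>${name}</name><value>${value}</value></property>"
  }

  env_to_property YARN_CONF_yarn_log___aggregation___enable true
  # -> <property><name>yarn.log-aggregation-enable</name><value>true</value></property>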

The available configuration prefixes and the files they correspond to are:

CORE_CONF -> /etc/hadoop/core-site.xml
HDFS_CONF -> /etc/hadoop/hdfs-site.xml
YARN_CONF -> /etc/hadoop/yarn-site.xml
HTTPFS_CONF -> /etc/hadoop/httpfs-site.xml
KMS_CONF -> /etc/hadoop/kms-site.xml
MAPRED_CONF -> /etc/hadoop/mapred-site.xml

If you need to extend some other configuration file, refer to the base/entrypoint.sh bash script.