cloudera / clusterdock

Configurations are not correctly applied #5

Closed — apurtell closed this issue 8 years ago

apurtell commented 8 years ago

Build an image and try to run it with a custom configuration:

...

CLUSTERDOCK_IMAGE=clusterdock:latest CLUSTERDOCK_PULL=false clusterdock_run \
    ./bin/start_cluster apache_hbase \
    --hadoop-version=2.7.2 \
    --hbase-version=0.98.21 \
    --data-directories='/data/0,/data/1,/data/2,/data/3,/data/4,/data/5,/data/6,/data/7,/data/8,/data/9,/data/10,/data/11' \
    --primary-node=node-1 --secondary-nodes='node-{2..5}' --start-services \
    --configurations=target/test-setup.cfg

where test-setup.cfg is, for the sake of reproducibility, just a copy of clusterdock/clusterdock/topologies/apache_hbase/configurations.cfg.

The generated configuration will not be correct.

The namenode fails to launch:

Incorrect configuration: namenode address dfs.namenode.servicerpc-address or dfs.namenode.rpc-address is not configured.
Starting namenodes on []

This is because the generated /hadoop/etc/hadoop/core-site.xml is empty:

$ ssh node-1.cluster
$ cat /hadoop/etc/hadoop/core-site.xml
...
<configuration>
</configuration>

("..." is the Apache license preamble of the default core-site.xml which I have omitted for brevity.)

If I try to add any custom settings to either the custom configuration or even clusterdock/clusterdock/topologies/apache_hbase/configurations.cfg itself, they don't show up. I've tried adding properties to hdfs-site.xml, core-site.xml, and hbase-site.xml, and I've tried generating hadoop-env.sh and hbase-env.sh with 'body' specifications. None of it has any effect.
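To be concrete, these are the kinds of additions I mean; the values are arbitrary and only for illustration, but the key = value and indented body: syntax follows the format already used in configurations.cfg:

$ cat >> target/test-setup.cfg <<'EOF'
[hadoop/hdfs-site.xml]
dfs.replication = 2

[hbase/hbase-env.sh]
body:
    export HBASE_MASTER_OPTS="$HBASE_MASTER_OPTS -Xms1g -Xmx1g"
EOF

None of this shows up in the generated files on the nodes after start_cluster runs.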

dimaspivak commented 8 years ago

Hey @apurtell, I can't seem to reproduce this. Maybe it's a product of the new pluggable topologies style that I have up for review in HBASE-12721? Here's what I did to get it to work (using the old configuration you suggested over in that JIRA):

source /dev/stdin <<< "$(curl -sL http://tiny.cloudera.com/clusterdock.sh)"
root@more-hbase-docker:~# cat > andy-setup.cfg
[hadoop/slaves]
+++ '\n'.join(["{{0}}.{network}".format(node) for node in {secondary_nodes}])

[hadoop/core-site.xml]
fs.default.name = hdfs://{primary_node[0]}.{network}:8020

[hadoop/mapred-site.xml]
mapreduce.framework.name = yarn

[hadoop/yarn-site.xml]
yarn.resourcemanager.hostname = {primary_node[0]}.{network}
yarn.nodemanager.aux-services = mapreduce_shuffle
yarn.nodemanager.aux-services.mapreduce_shuffle.class = org.apache.hadoop.mapred.ShuffleHandler
yarn.nodemanager.vmem-check-enabled = false

[hbase/regionservers]
+++ '\n'.join(["{{0}}.{network}".format(node) for node in {secondary_nodes}])

[hbase/backup-masters]
{secondary_nodes[0]}.{network}

[hbase/hbase-site.xml]
hbase.cluster.distributed = true
hbase.rootdir = hdfs://{primary_node[0]}.{network}/hbase
hbase.zookeeper.quorum = {primary_node[0]}.{network}
hbase.zookeeper.property.dataDir = /usr/local/zookeeper

hbase.it.clustermanager.hadoop.hdfs.user = root
hbase.it.clustermanager.zookeeper.user = root
hbase.it.clustermanager.hbase.user = root

[hadoop/hadoop-env.sh]
body:
    COMMON_HDFS_OPTS="$COMMON_HDFS_OPTS -XX:+UseG1GC"
    COMMON_HDFS_OPTS="$COMMON_HDFS_OPTS -XX:MaxGCPauseMillis=100"
    COMMON_HDFS_OPTS="$COMMON_HDFS_OPTS -XX:+PrintGCDetails"
    COMMON_HDFS_OPTS="$COMMON_HDFS_OPTS -XX:+PrintGCDateStamps"
    COMMON_HDFS_OPTS="$COMMON_HDFS_OPTS -XX:+PrintGCTimeStamps"
    COMMON_HDFS_OPTS="$COMMON_HDFS_OPTS -XX:+PrintAdaptiveSizePolicy"
    COMMON_HDFS_OPTS="$COMMON_HDFS_OPTS -XX:+PrintReferenceGC"
    COMMON_HDFS_OPTS="$COMMON_HDFS_OPTS -XX:+ParallelRefProcEnabled"
    COMMON_HDFS_OPTS="$COMMON_HDFS_OPTS -XX:+TieredCompilation"
    COMMON_HDFS_OPTS="$COMMON_HDFS_OPTS -XX:-ResizePLAB"

    export HADOOP_NAMENODE_OPTS="$HADOOP_NAMENODE_OPTS -Xms1g -Xmx1g"
    export HADOOP_NAMENODE_OPTS="$HADOOP_NAMENODE_OPTS $COMMON_HDFS_OPTS"
    export HADOOP_NAMENODE_OPTS="$HADOOP_NAMENODE_OPTS -XX:+AlwaysPreTouch"
    export HADOOP_NAMENODE_OPTS="$HADOOP_NAMENODE_OPTS -verbose:gc -Xloggc:/var/log/hadoop/hdfs-namenode-gc.log"

    export HADOOP_SECONDARYNAMENODE_OPTS="$HADOOP_SECONDARYNAMENODE_OPTS -Xms1g -Xmx1g"
    export HADOOP_SECONDARYNAMENODE_OPTS="$HADOOP_SECONDARYNAMENODE_OPTS $COMMON_HDFS_OPTS"
    export HADOOP_SECONDARYNAMENODE_OPTS="$HADOOP_SECONDARYNAMENODE_OPTS -verbose:gc -Xloggc:/var/log/hadoop/hdfs-secondarynamenode-gc.log"

    export HADOOP_DATANODE_OPTS="$HADOOP_DATANODE_OPTS -Xms1g -Xmx1g"
    export HADOOP_DATANODE_OPTS="$HADOOP_DATANODE_OPTS $COMMON_HDFS_OPTS"
    export HADOOP_DATANODE_OPTS="$HADOOP_DATANODE_OPTS -XX:+AlwaysPreTouch"
    export HADOOP_DATANODE_OPTS="$HADOOP_DATANODE_OPTS -verbose:gc -Xloggc:/var/log/hadoop/hdfs-datanode-gc.log"

[hbase/hbase-env.sh]
body:
    COMMON_HBASE_OPTS="$COMMON_HBASE_OPTS -XX:+UseG1GC"
    COMMON_HBASE_OPTS="$COMMON_HBASE_OPTS -XX:+PrintGCDetails"
    COMMON_HBASE_OPTS="$COMMON_HBASE_OPTS -XX:+PrintGCDateStamps"
    COMMON_HBASE_OPTS="$COMMON_HBASE_OPTS -XX:+PrintGCTimeStamps"
    COMMON_HBASE_OPTS="$COMMON_HBASE_OPTS -XX:+PrintAdaptiveSizePolicy"
    COMMON_HBASE_OPTS="$COMMON_HBASE_OPTS -XX:+PrintReferenceGC"
    COMMON_HBASE_OPTS="$COMMON_HBASE_OPTS -XX:+ParallelRefProcEnabled"
    COMMON_HBASE_OPTS="$COMMON_HBASE_OPTS -XX:+TieredCompilation"
    COMMON_HBASE_OPTS="$COMMON_HBASE_OPTS -XX:-ResizePLAB"

    export HBASE_MASTER_OPTS="$HBASE_MASTER_OPTS -Xms1g -Xmx1g"
    export HBASE_MASTER_OPTS="$HBASE_MASTER_OPTS $COMMON_HBASE_OPTS"
    export HBASE_MASTER_OPTS="$HBASE_MASTER_OPTS -verbose:gc -Xloggc:/var/log/hbase/hbase-master-gc.log"

    export HBASE_REGIONSERVER_OPTS="$HBASE_REGIONSERVER_OPTS -Xms32g -Xmx32g"
    export HBASE_REGIONSERVER_OPTS="$HBASE_REGIONSERVER_OPTS $COMMON_HBASE_OPTS"
    export HBASE_REGIONSERVER_OPTS="$HBASE_REGIONSERVER_OPTS -XX:MaxGCPauseMillis=50"
    export HBASE_REGIONSERVER_OPTS="$HBASE_REGIONSERVER_OPTS -XX:+UseCondCardMark"
    export HBASE_REGIONSERVER_OPTS="$HBASE_REGIONSERVER_OPTS -XX:+AlwaysPreTouch"
    export HBASE_REGIONSERVER_OPTS="$HBASE_REGIONSERVER_OPTS -verbose:gc -Xloggc:/var/log/hbase/hbase-regionserver-gc.log"

    export HBASE_ZOOKEEPER_OPTS="$HBASE_ZOOKEEPER_OPTS -Xms1g -Xmx1g"
    export HBASE_ZOOKEEPER_OPTS="$HBASE_ZOOKEEPER_OPTS $COMMON_HBASE_OPTS"
    export HBASE_ZOOKEEPER_OPTS="$HBASE_ZOOKEEPER_OPTS -XX:+AlwaysPreTouch"
    export HBASE_ZOOKEEPER_OPTS="$HBASE_ZOOKEEPER_OPTS -verbose:gc -Xloggc:/var/log/hbase/hbase-zookeeper-gc.log"
^C
root@more-hbase-docker:~# CLUSTERDOCK_TARGET_DIR=$(pwd) CLUSTERDOCK_TOPOLOGY_IMAGE=dimaspivak/clusterdock:apache_hbase_topology clusterdock_run ./bin/start_cluster --namespace=dimaspivak apache_hbase --hbase-version=1.2.2 --hadoop-version=2.5.1 --secondary-nodes='node-{2..5}' --configurations=/root/target/andy-setup.cfg
INFO:clusterdock.topologies.apache_hbase.actions:Extracted container folder /hadoop/etc/hadoop to /var/lib/docker/volumes/77e0f4cfa4b54c206fc65c525cd76f5f6b5206109d1493498906d203c433321a/_data/e0aa02e6-0a42-42ff-afb2-d3990c4081e1/config/hadoop.
INFO:clusterdock.topologies.apache_hbase.actions:Extracted container folder /hbase/conf to /var/lib/docker/volumes/77e0f4cfa4b54c206fc65c525cd76f5f6b5206109d1493498906d203c433321a/_data/e0aa02e6-0a42-42ff-afb2-d3990c4081e1/config/hbase.
INFO:clusterdock.topologies.apache_hbase.actions:The /hbase/lib folder on containers in the cluster will be volume mounted into /var/lib/docker/volumes/77e0f4cfa4b54c206fc65c525cd76f5f6b5206109d1493498906d203c433321a/_data/e0aa02e6-0a42-42ff-afb2-d3990c4081e1/config/hbase-lib...
INFO:clusterdock.topologies.apache_hbase.actions:Extracted container folder /hbase/lib to /var/lib/docker/volumes/77e0f4cfa4b54c206fc65c525cd76f5f6b5206109d1493498906d203c433321a/_data/e0aa02e6-0a42-42ff-afb2-d3990c4081e1/config/hbase-lib.
INFO:clusterdock.cluster:Network (cluster) not present, creating it...
INFO:clusterdock.cluster:Successfully setup network (name: cluster).
INFO:clusterdock.cluster:Successfully started node-1.cluster (IP address: 192.168.124.2).
INFO:clusterdock.cluster:Successfully started node-2.cluster (IP address: 192.168.124.3).
INFO:clusterdock.cluster:Successfully started node-3.cluster (IP address: 192.168.124.4).
INFO:clusterdock.cluster:Successfully started node-4.cluster (IP address: 192.168.124.5).
INFO:clusterdock.cluster:Successfully started node-5.cluster (IP address: 192.168.124.6).
INFO:clusterdock.cluster:Started cluster in 7.49 seconds.
INFO:clusterdock.topologies.apache_hbase.actions:Updating hadoop/slaves...
INFO:clusterdock.topologies.apache_hbase.actions:Updating hadoop/core-site.xml...
INFO:clusterdock.topologies.apache_hbase.actions:Updating hadoop/mapred-site.xml...
INFO:clusterdock.topologies.apache_hbase.actions:Updating hadoop/yarn-site.xml...
INFO:clusterdock.topologies.apache_hbase.actions:Updating hbase/regionservers...
INFO:clusterdock.topologies.apache_hbase.actions:Updating hbase/backup-masters...
INFO:clusterdock.topologies.apache_hbase.actions:Updating hbase/hbase-site.xml...
INFO:clusterdock.topologies.apache_hbase.actions:Updating hadoop/hadoop-env.sh...
INFO:clusterdock.topologies.apache_hbase.actions:Updating hbase/hbase-env.sh...
INFO:clusterdock.topologies.apache_hbase.actions:Formatting namenode on node-1.cluster...
Java HotSpot(TM) 64-Bit Server VM warning: Cannot open file /var/log/hadoop/hdfs-namenode-gc.log due to No such file or directory

16/08/11 17:37:43 INFO namenode.NameNode: STARTUP_MSG: 
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = node-1.cluster/192.168.124.2
STARTUP_MSG:   args = [-format]
STARTUP_MSG:   version = 2.5.1
STARTUP_MSG:   classpath = /hadoop/etc/hadoop:/hadoop/share/hadoop/common/lib/jets3t-0.9.0.jar:/hadoop/share/hadoop/common/lib/servlet-api-2.5.jar:/hadoop/share/hadoop/common/lib/hamcrest-core-1.3.jar:/hadoop/share/hadoop/common/lib/commons-compress-1.4.1.jar:/hadoop/share/hadoop/common/lib/jasper-compiler-5.5.23.jar:/hadoop/share/hadoop/common/lib/mockito-all-1.8.5.jar:/hadoop/share/hadoop/common/lib/jsr305-1.3.9.jar:/hadoop/share/hadoop/common/lib/netty-3.6.2.Final.jar:/hadoop/share/hadoop/common/lib/commons-logging-1.1.3.jar:/hadoop/share/hadoop/common/lib/commons-httpclient-3.1.jar:/hadoop/share/hadoop/common/lib/commons-codec-1.4.jar:/hadoop/share/hadoop/common/lib/paranamer-2.3.jar:/hadoop/share/hadoop/common/lib/log4j-1.2.17.jar:/hadoop/share/hadoop/common/lib/zookeeper-3.4.6.jar:/hadoop/share/hadoop/common/lib/jetty-6.1.26.jar:/hadoop/share/hadoop/common/lib/jaxb-impl-2.2.3-1.jar:/hadoop/share/hadoop/common/lib/snappy-java-1.0.4.1.jar:/hadoop/share/hadoop/common/lib/apacheds-kerberos-codec-2.0.0-M15.jar:/hadoop/share/hadoop/common/lib/apacheds-i18n-2.0.0-M15.jar:/hadoop/share/hadoop/common/lib/commons-collections-3.2.1.jar:/hadoop/share/hadoop/common/lib/jackson-mapper-asl-1.9.13.jar:/hadoop/share/hadoop/common/lib/jersey-server-1.9.jar:/hadoop/share/hadoop/common/lib/commons-configuration-1.6.jar:/hadoop/share/hadoop/common/lib/jsch-0.1.42.jar:/hadoop/share/hadoop/common/lib/commons-beanutils-core-1.8.0.jar:/hadoop/share/hadoop/common/lib/api-util-1.0.0-M20.jar:/hadoop/share/hadoop/common/lib/xz-1.0.jar:/hadoop/share/hadoop/common/lib/protobuf-java-2.5.0.jar:/hadoop/share/hadoop/common/lib/httpclient-4.2.5.jar:/hadoop/share/hadoop/common/lib/httpcore-4.2.5.jar:/hadoop/share/hadoop/common/lib/commons-math3-3.1.1.jar:/hadoop/share/hadoop/common/lib/java-xmlbuilder-0.4.jar:/hadoop/share/hadoop/common/lib/commons-el-1.0.jar:/hadoop/share/hadoop/common/lib/commons-cli-1.2.jar:/hadoop/share/hadoop/common/lib/hadoop-annotations-2.5.1.jar:/hadoop/share/hadoop/common/lib/activation-1.1.jar:/hadoop/share/hadoop/common/lib/xmlenc-0.52.jar:/hadoop/share/hadoop/common/lib/commons-digester-1.8.jar:/hadoop/share/hadoop/common/lib/commons-lang-2.6.jar:/hadoop/share/hadoop/common/lib/jersey-json-1.9.jar:/hadoop/share/hadoop/common/lib/slf4j-api-1.7.5.jar:/hadoop/share/hadoop/common/lib/junit-4.11.jar:/hadoop/share/hadoop/common/lib/jackson-jaxrs-1.9.13.jar:/hadoop/share/hadoop/common/lib/commons-net-3.1.jar:/hadoop/share/hadoop/common/lib/jackson-core-asl-1.9.13.jar:/hadoop/share/hadoop/common/lib/jackson-xc-1.9.13.jar:/hadoop/share/hadoop/common/lib/avro-1.7.4.jar:/hadoop/share/hadoop/common/lib/jasper-runtime-5.5.23.jar:/hadoop/share/hadoop/common/lib/jaxb-api-2.2.2.jar:/hadoop/share/hadoop/common/lib/stax-api-1.0-2.jar:/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar:/hadoop/share/hadoop/common/lib/hadoop-auth-2.5.1.jar:/hadoop/share/hadoop/common/lib/guava-11.0.2.jar:/hadoop/share/hadoop/common/lib/commons-beanutils-1.7.0.jar:/hadoop/share/hadoop/common/lib/jsp-api-2.1.jar:/hadoop/share/hadoop/common/lib/commons-io-2.4.jar:/hadoop/share/hadoop/common/lib/jetty-util-6.1.26.jar:/hadoop/share/hadoop/common/lib/jersey-core-1.9.jar:/hadoop/share/hadoop/common/lib/api-asn1-api-1.0.0-M20.jar:/hadoop/share/hadoop/common/lib/jettison-1.1.jar:/hadoop/share/hadoop/common/lib/asm-3.2.jar:/hadoop/share/hadoop/common/hadoop-nfs-2.5.1.jar:/hadoop/share/hadoop/common/hadoop-common-2.5.1-tests.jar:/hadoop/share/hadoop/common/hadoop-common-2.5.1.jar:/hadoop/share/hadoop/hdfs:/hadoop/share/hadoop/hdfs/lib/
commons-daemon-1.0.13.jar:/hadoop/share/hadoop/hdfs/lib/servlet-api-2.5.jar:/hadoop/share/hadoop/hdfs/lib/jsr305-1.3.9.jar:/hadoop/share/hadoop/hdfs/lib/netty-3.6.2.Final.jar:/hadoop/share/hadoop/hdfs/lib/commons-logging-1.1.3.jar:/hadoop/share/hadoop/hdfs/lib/commons-codec-1.4.jar:/hadoop/share/hadoop/hdfs/lib/log4j-1.2.17.jar:/hadoop/share/hadoop/hdfs/lib/jetty-6.1.26.jar:/hadoop/share/hadoop/hdfs/lib/jackson-mapper-asl-1.9.13.jar:/hadoop/share/hadoop/hdfs/lib/jersey-server-1.9.jar:/hadoop/share/hadoop/hdfs/lib/protobuf-java-2.5.0.jar:/hadoop/share/hadoop/hdfs/lib/commons-el-1.0.jar:/hadoop/share/hadoop/hdfs/lib/commons-cli-1.2.jar:/hadoop/share/hadoop/hdfs/lib/xmlenc-0.52.jar:/hadoop/share/hadoop/hdfs/lib/commons-lang-2.6.jar:/hadoop/share/hadoop/hdfs/lib/jackson-core-asl-1.9.13.jar:/hadoop/share/hadoop/hdfs/lib/jasper-runtime-5.5.23.jar:/hadoop/share/hadoop/hdfs/lib/guava-11.0.2.jar:/hadoop/share/hadoop/hdfs/lib/jsp-api-2.1.jar:/hadoop/share/hadoop/hdfs/lib/commons-io-2.4.jar:/hadoop/share/hadoop/hdfs/lib/jetty-util-6.1.26.jar:/hadoop/share/hadoop/hdfs/lib/jersey-core-1.9.jar:/hadoop/share/hadoop/hdfs/lib/asm-3.2.jar:/hadoop/share/hadoop/hdfs/hadoop-hdfs-nfs-2.5.1.jar:/hadoop/share/hadoop/hdfs/hadoop-hdfs-2.5.1.jar:/hadoop/share/hadoop/hdfs/hadoop-hdfs-2.5.1-tests.jar:/hadoop/share/hadoop/yarn/lib/leveldbjni-all-1.8.jar:/hadoop/share/hadoop/yarn/lib/servlet-api-2.5.jar:/hadoop/share/hadoop/yarn/lib/commons-compress-1.4.1.jar:/hadoop/share/hadoop/yarn/lib/jsr305-1.3.9.jar:/hadoop/share/hadoop/yarn/lib/netty-3.6.2.Final.jar:/hadoop/share/hadoop/yarn/lib/commons-logging-1.1.3.jar:/hadoop/share/hadoop/yarn/lib/commons-httpclient-3.1.jar:/hadoop/share/hadoop/yarn/lib/commons-codec-1.4.jar:/hadoop/share/hadoop/yarn/lib/jersey-client-1.9.jar:/hadoop/share/hadoop/yarn/lib/log4j-1.2.17.jar:/hadoop/share/hadoop/yarn/lib/zookeeper-3.4.6.jar:/hadoop/share/hadoop/yarn/lib/jetty-6.1.26.jar:/hadoop/share/hadoop/yarn/lib/jaxb-impl-2.2.3-1.jar:/hadoop/share/hadoop/yarn/lib/commons-collections-3.2.1.jar:/hadoop/share/hadoop/yarn/lib/jackson-mapper-asl-1.9.13.jar:/hadoop/share/hadoop/yarn/lib/jersey-server-1.9.jar:/hadoop/share/hadoop/yarn/lib/xz-1.0.jar:/hadoop/share/hadoop/yarn/lib/protobuf-java-2.5.0.jar:/hadoop/share/hadoop/yarn/lib/commons-cli-1.2.jar:/hadoop/share/hadoop/yarn/lib/activation-1.1.jar:/hadoop/share/hadoop/yarn/lib/guice-3.0.jar:/hadoop/share/hadoop/yarn/lib/commons-lang-2.6.jar:/hadoop/share/hadoop/yarn/lib/javax.inject-1.jar:/hadoop/share/hadoop/yarn/lib/jersey-json-1.9.jar:/hadoop/share/hadoop/yarn/lib/jackson-jaxrs-1.9.13.jar:/hadoop/share/hadoop/yarn/lib/jline-0.9.94.jar:/hadoop/share/hadoop/yarn/lib/jackson-core-asl-1.9.13.jar:/hadoop/share/hadoop/yarn/lib/jackson-xc-1.9.13.jar:/hadoop/share/hadoop/yarn/lib/guice-servlet-3.0.jar:/hadoop/share/hadoop/yarn/lib/jaxb-api-2.2.2.jar:/hadoop/share/hadoop/yarn/lib/jersey-guice-1.9.jar:/hadoop/share/hadoop/yarn/lib/stax-api-1.0-2.jar:/hadoop/share/hadoop/yarn/lib/guava-11.0.2.jar:/hadoop/share/hadoop/yarn/lib/commons-io-2.4.jar:/hadoop/share/hadoop/yarn/lib/aopalliance-1.0.jar:/hadoop/share/hadoop/yarn/lib/jetty-util-6.1.26.jar:/hadoop/share/hadoop/yarn/lib/jersey-core-1.9.jar:/hadoop/share/hadoop/yarn/lib/jettison-1.1.jar:/hadoop/share/hadoop/yarn/lib/asm-3.2.jar:/hadoop/share/hadoop/yarn/hadoop-yarn-server-common-2.5.1.jar:/hadoop/share/hadoop/yarn/hadoop-yarn-server-tests-2.5.1.jar:/hadoop/share/hadoop/yarn/hadoop-yarn-applications-unmanaged-am-launcher-2.5.1.jar:/hadoop/share/hadoop/yarn/hadoop-yarn-server-applicationhistoryservice-2.5
.1.jar:/hadoop/share/hadoop/yarn/hadoop-yarn-client-2.5.1.jar:/hadoop/share/hadoop/yarn/hadoop-yarn-api-2.5.1.jar:/hadoop/share/hadoop/yarn/hadoop-yarn-common-2.5.1.jar:/hadoop/share/hadoop/yarn/hadoop-yarn-server-web-proxy-2.5.1.jar:/hadoop/share/hadoop/yarn/hadoop-yarn-applications-distributedshell-2.5.1.jar:/hadoop/share/hadoop/yarn/hadoop-yarn-server-nodemanager-2.5.1.jar:/hadoop/share/hadoop/yarn/hadoop-yarn-server-resourcemanager-2.5.1.jar:/hadoop/share/hadoop/mapreduce/lib/leveldbjni-all-1.8.jar:/hadoop/share/hadoop/mapreduce/lib/hamcrest-core-1.3.jar:/hadoop/share/hadoop/mapreduce/lib/commons-compress-1.4.1.jar:/hadoop/share/hadoop/mapreduce/lib/netty-3.6.2.Final.jar:/hadoop/share/hadoop/mapreduce/lib/paranamer-2.3.jar:/hadoop/share/hadoop/mapreduce/lib/log4j-1.2.17.jar:/hadoop/share/hadoop/mapreduce/lib/snappy-java-1.0.4.1.jar:/hadoop/share/hadoop/mapreduce/lib/jackson-mapper-asl-1.9.13.jar:/hadoop/share/hadoop/mapreduce/lib/jersey-server-1.9.jar:/hadoop/share/hadoop/mapreduce/lib/xz-1.0.jar:/hadoop/share/hadoop/mapreduce/lib/protobuf-java-2.5.0.jar:/hadoop/share/hadoop/mapreduce/lib/hadoop-annotations-2.5.1.jar:/hadoop/share/hadoop/mapreduce/lib/guice-3.0.jar:/hadoop/share/hadoop/mapreduce/lib/javax.inject-1.jar:/hadoop/share/hadoop/mapreduce/lib/junit-4.11.jar:/hadoop/share/hadoop/mapreduce/lib/jackson-core-asl-1.9.13.jar:/hadoop/share/hadoop/mapreduce/lib/guice-servlet-3.0.jar:/hadoop/share/hadoop/mapreduce/lib/avro-1.7.4.jar:/hadoop/share/hadoop/mapreduce/lib/jersey-guice-1.9.jar:/hadoop/share/hadoop/mapreduce/lib/commons-io-2.4.jar:/hadoop/share/hadoop/mapreduce/lib/aopalliance-1.0.jar:/hadoop/share/hadoop/mapreduce/lib/jersey-core-1.9.jar:/hadoop/share/hadoop/mapreduce/lib/asm-3.2.jar:/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.5.1.jar:/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.5.1-tests.jar:/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-common-2.5.1.jar:/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-core-2.5.1.jar:/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-shuffle-2.5.1.jar:/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.5.1.jar:/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-plugins-2.5.1.jar:/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-2.5.1.jar:/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-app-2.5.1.jar
STARTUP_MSG:   build = https://git-wip-us.apache.org/repos/asf/hadoop.git -r 2e18d179e4a8065b6a9f29cf2de9451891265cce; compiled by 'jenkins' on 2014-09-05T23:11Z
STARTUP_MSG:   java = 1.8.0_91
************************************************************/
16/08/11 17:37:43 INFO namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
16/08/11 17:37:43 INFO namenode.NameNode: createNameNode [-format]
16/08/11 17:37:43 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Formatting using clusterid: CID-cbad364d-154b-4b2f-b9de-da04b91902e9
16/08/11 17:37:44 INFO namenode.FSNamesystem: fsLock is fair:true
16/08/11 17:37:44 INFO blockmanagement.DatanodeManager: dfs.block.invalidate.limit=1000
16/08/11 17:37:44 INFO blockmanagement.DatanodeManager: dfs.namenode.datanode.registration.ip-hostname-check=true
16/08/11 17:37:44 INFO blockmanagement.BlockManager: dfs.namenode.startup.delay.block.deletion.sec is set to 000:00:00:00.000
16/08/11 17:37:44 INFO blockmanagement.BlockManager: The block deletion will start around 2016 Aug 11 17:37:44
16/08/11 17:37:44 INFO util.GSet: Computing capacity for map BlocksMap
16/08/11 17:37:44 INFO util.GSet: VM type       = 64-bit
16/08/11 17:37:44 INFO util.GSet: 2.0% max memory 1 GB = 20.5 MB
16/08/11 17:37:44 INFO util.GSet: capacity      = 2^21 = 2097152 entries
16/08/11 17:37:44 INFO blockmanagement.BlockManager: dfs.block.access.token.enable=false
16/08/11 17:37:44 INFO blockmanagement.BlockManager: defaultReplication         = 3
16/08/11 17:37:44 INFO blockmanagement.BlockManager: maxReplication             = 512
16/08/11 17:37:44 INFO blockmanagement.BlockManager: minReplication             = 1
16/08/11 17:37:44 INFO blockmanagement.BlockManager: maxReplicationStreams      = 2
16/08/11 17:37:44 INFO blockmanagement.BlockManager: shouldCheckForEnoughRacks  = false
16/08/11 17:37:44 INFO blockmanagement.BlockManager: replicationRecheckInterval = 3000
16/08/11 17:37:44 INFO blockmanagement.BlockManager: encryptDataTransfer        = false
16/08/11 17:37:44 INFO blockmanagement.BlockManager: maxNumBlocksToLog          = 1000
16/08/11 17:37:44 INFO namenode.FSNamesystem: fsOwner             = root (auth:SIMPLE)
16/08/11 17:37:44 INFO namenode.FSNamesystem: supergroup          = supergroup
16/08/11 17:37:44 INFO namenode.FSNamesystem: isPermissionEnabled = true
16/08/11 17:37:44 INFO namenode.FSNamesystem: HA Enabled: false
16/08/11 17:37:44 INFO namenode.FSNamesystem: Append Enabled: true
16/08/11 17:37:44 INFO util.GSet: Computing capacity for map INodeMap
16/08/11 17:37:44 INFO util.GSet: VM type       = 64-bit
16/08/11 17:37:44 INFO util.GSet: 1.0% max memory 1 GB = 10.2 MB
16/08/11 17:37:44 INFO util.GSet: capacity      = 2^20 = 1048576 entries
16/08/11 17:37:44 INFO namenode.NameNode: Caching file names occuring more than 10 times
16/08/11 17:37:44 INFO util.GSet: Computing capacity for map cachedBlocks
16/08/11 17:37:44 INFO util.GSet: VM type       = 64-bit
16/08/11 17:37:44 INFO util.GSet: 0.25% max memory 1 GB = 2.6 MB
16/08/11 17:37:44 INFO util.GSet: capacity      = 2^18 = 262144 entries
16/08/11 17:37:44 INFO namenode.FSNamesystem: dfs.namenode.safemode.threshold-pct = 0.9990000128746033
16/08/11 17:37:44 INFO namenode.FSNamesystem: dfs.namenode.safemode.min.datanodes = 0
16/08/11 17:37:44 INFO namenode.FSNamesystem: dfs.namenode.safemode.extension     = 30000
16/08/11 17:37:44 INFO namenode.FSNamesystem: Retry cache on namenode is enabled
16/08/11 17:37:44 INFO namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
16/08/11 17:37:44 INFO util.GSet: Computing capacity for map NameNodeRetryCache
16/08/11 17:37:44 INFO util.GSet: VM type       = 64-bit
16/08/11 17:37:44 INFO util.GSet: 0.029999999329447746% max memory 1 GB = 314.6 KB
16/08/11 17:37:44 INFO util.GSet: capacity      = 2^15 = 32768 entries
16/08/11 17:37:44 INFO namenode.NNConf: ACLs enabled? false
16/08/11 17:37:44 INFO namenode.NNConf: XAttrs enabled? true
16/08/11 17:37:44 INFO namenode.NNConf: Maximum size of an xattr: 16384
16/08/11 17:37:44 INFO namenode.FSImage: Allocated new BlockPoolId: BP-1722871847-192.168.124.2-1470962264282
16/08/11 17:37:44 INFO common.Storage: Storage directory /tmp/hadoop-root/dfs/name has been successfully formatted.
16/08/11 17:37:44 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0
16/08/11 17:37:44 INFO util.ExitUtil: Exiting with status 0
16/08/11 17:37:44 INFO namenode.NameNode: SHUTDOWN_MSG: 
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at node-1.cluster/192.168.124.2
************************************************************/
INFO:clusterdock.topologies.apache_hbase.actions:Starting HDFS...
16/08/11 17:37:45 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Starting namenodes on [node-1.cluster]
node-1.cluster: Warning: Permanently added 'node-1.cluster,192.168.124.2' (RSA) to the list of known hosts.
node-1.cluster: starting namenode, logging to /hadoop/logs/hadoop-root-namenode-node-1.cluster.out
node-1.cluster: Java HotSpot(TM) 64-Bit Server VM warning: Cannot open file /var/log/hadoop/hdfs-namenode-gc.log due to No such file or directory
node-1.cluster: 
node-3.cluster: Warning: Permanently added 'node-3.cluster,192.168.124.4' (RSA) to the list of known hosts.
node-3.cluster: starting datanode, logging to /hadoop/logs/hadoop-root-datanode-node-3.cluster.out
node-3.cluster: Java HotSpot(TM) 64-Bit Server VM warning: Cannot open file /var/log/hadoop/hdfs-datanode-gc.log due to No such file or directory
node-3.cluster: 
node-2.cluster: Warning: Permanently added 'node-2.cluster,192.168.124.3' (RSA) to the list of known hosts.
node-2.cluster: starting datanode, logging to /hadoop/logs/hadoop-root-datanode-node-2.cluster.out
node-2.cluster: Java HotSpot(TM) 64-Bit Server VM warning: Cannot open file /var/log/hadoop/hdfs-datanode-gc.log due to No such file or directory
node-2.cluster: 
node-4.cluster: Warning: Permanently added 'node-4.cluster,192.168.124.5' (RSA) to the list of known hosts.
node-4.cluster: starting datanode, logging to /hadoop/logs/hadoop-root-datanode-node-4.cluster.out
node-4.cluster: Java HotSpot(TM) 64-Bit Server VM warning: Cannot open file /var/log/hadoop/hdfs-datanode-gc.log due to No such file or directory
node-4.cluster: 
node-5.cluster: Warning: Permanently added 'node-5.cluster,192.168.124.6' (RSA) to the list of known hosts.
node-5.cluster: starting datanode, logging to /hadoop/logs/hadoop-root-datanode-node-5.cluster.out
node-5.cluster: Java HotSpot(TM) 64-Bit Server VM warning: Cannot open file /var/log/hadoop/hdfs-datanode-gc.log due to No such file or directory
node-5.cluster: 
Starting secondary namenodes [0.0.0.0]
0.0.0.0: Warning: Permanently added '0.0.0.0' (RSA) to the list of known hosts.
0.0.0.0: starting secondarynamenode, logging to /hadoop/logs/hadoop-root-secondarynamenode-node-1.cluster.out
0.0.0.0: Java HotSpot(TM) 64-Bit Server VM warning: Cannot open file /var/log/hadoop/hdfs-secondarynamenode-gc.log due to No such file or directory
0.0.0.0: 
16/08/11 17:37:59 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
INFO:clusterdock.topologies.apache_hbase.actions:Starting YARN...
starting yarn daemons
starting resourcemanager, logging to /hadoop/logs/yarn-root-resourcemanager-node-1.cluster.out
node-3.cluster: Warning: Permanently added 'node-3.cluster,192.168.124.4' (RSA) to the list of known hosts.
node-3.cluster: starting nodemanager, logging to /hadoop/logs/yarn-root-nodemanager-node-3.cluster.out
node-5.cluster: Warning: Permanently added 'node-5.cluster,192.168.124.6' (RSA) to the list of known hosts.
node-5.cluster: starting nodemanager, logging to /hadoop/logs/yarn-root-nodemanager-node-5.cluster.out
node-4.cluster: Warning: Permanently added 'node-4.cluster,192.168.124.5' (RSA) to the list of known hosts.
node-4.cluster: starting nodemanager, logging to /hadoop/logs/yarn-root-nodemanager-node-4.cluster.out
node-2.cluster: Warning: Permanently added 'node-2.cluster,192.168.124.3' (RSA) to the list of known hosts.
node-2.cluster: starting nodemanager, logging to /hadoop/logs/yarn-root-nodemanager-node-2.cluster.out
INFO:clusterdock.topologies.apache_hbase.actions:Starting HBase...
node-1.cluster: Warning: Permanently added 'node-1.cluster,192.168.124.2' (RSA) to the list of known hosts.
node-1.cluster: starting zookeeper, logging to /hbase/bin/../logs/hbase-root-zookeeper-node-1.cluster.out
node-1.cluster: Java HotSpot(TM) 64-Bit Server VM warning: Cannot open file /var/log/hbase/hbase-zookeeper-gc.log due to No such file or directory
node-1.cluster: 
starting master, logging to /hbase/bin/../logs/hbase-root-master-node-1.cluster.out
Java HotSpot(TM) 64-Bit Server VM warning: Cannot open file /var/log/hbase/hbase-master-gc.log due to No such file or directory

node-4.cluster: Warning: Permanently added 'node-4.cluster,192.168.124.5' (RSA) to the list of known hosts.
node-4.cluster: starting regionserver, logging to /hbase/bin/../logs/hbase-root-regionserver-node-4.cluster.out
node-4.cluster: Java HotSpot(TM) 64-Bit Server VM warning: Cannot open file /var/log/hbase/hbase-regionserver-gc.log due to No such file or directory
node-4.cluster: 
node-2.cluster: Warning: Permanently added 'node-2.cluster,192.168.124.3' (RSA) to the list of known hosts.
node-2.cluster: starting regionserver, logging to /hbase/bin/../logs/hbase-root-regionserver-node-2.cluster.out
node-2.cluster: Java HotSpot(TM) 64-Bit Server VM warning: Cannot open file /var/log/hbase/hbase-regionserver-gc.log due to No such file or directory
node-2.cluster: 
node-5.cluster: Warning: Permanently added 'node-5.cluster,192.168.124.6' (RSA) to the list of known hosts.
node-5.cluster: starting regionserver, logging to /hbase/bin/../logs/hbase-root-regionserver-node-5.cluster.out
node-5.cluster: Java HotSpot(TM) 64-Bit Server VM warning: Cannot open file /var/log/hbase/hbase-regionserver-gc.log due to No such file or directory
node-5.cluster: 
node-3.cluster: Warning: Permanently added 'node-3.cluster,192.168.124.4' (RSA) to the list of known hosts.
node-3.cluster: starting regionserver, logging to /hbase/bin/../logs/hbase-root-regionserver-node-3.cluster.out
node-3.cluster: Java HotSpot(TM) 64-Bit Server VM warning: Cannot open file /var/log/hbase/hbase-regionserver-gc.log due to No such file or directory
node-3.cluster: 
node-2.cluster: Warning: Permanently added 'node-2.cluster,192.168.124.3' (RSA) to the list of known hosts.
node-2.cluster: starting master, logging to /hbase/bin/../logs/hbase-root-master-node-2.cluster.out
node-2.cluster: Java HotSpot(TM) 64-Bit Server VM warning: Cannot open file /var/log/hbase/hbase-master-gc.log due to No such file or directory
node-2.cluster: 
starting rest, logging to /hbase/bin/../logs/hbase-root-rest-node-1.cluster.out
INFO:clusterdock.topologies.apache_hbase.actions:NameNode and HBase master are located on node-1. SSH over and have fun!
INFO:clusterdock.topologies.apache_hbase.actions:The HDFS NameNode web UI can be reached at http://more-hbase-docker.vpc.cloudera.com:32774
INFO:clusterdock.topologies.apache_hbase.actions:The YARN ResourceManager web UI can be reached at http://more-hbase-docker.vpc.cloudera.com:32776
INFO:clusterdock.topologies.apache_hbase.actions:The HBase master web UI can be reached at http://more-hbase-docker.vpc.cloudera.com:32775
INFO:clusterdock.topologies.apache_hbase.actions:The HBase REST server can be reached at http://more-hbase-docker.vpc.cloudera.com:32777
INFO:start_cluster:Apache HBase cluster started in 00 min, 38 sec.
root@more-hbase-docker:~# clusterdock_ssh node-1.cluster cat /hadoop/etc/hadoop/hadoop-env.sh
Warning: Permanently added 'localhost' (RSA) to the list of known hosts.

COMMON_HDFS_OPTS="$COMMON_HDFS_OPTS -XX:+UseG1GC"
COMMON_HDFS_OPTS="$COMMON_HDFS_OPTS -XX:MaxGCPauseMillis=100"
COMMON_HDFS_OPTS="$COMMON_HDFS_OPTS -XX:+PrintGCDetails"
COMMON_HDFS_OPTS="$COMMON_HDFS_OPTS -XX:+PrintGCDateStamps"
COMMON_HDFS_OPTS="$COMMON_HDFS_OPTS -XX:+PrintGCTimeStamps"
COMMON_HDFS_OPTS="$COMMON_HDFS_OPTS -XX:+PrintAdaptiveSizePolicy"
COMMON_HDFS_OPTS="$COMMON_HDFS_OPTS -XX:+PrintReferenceGC"
COMMON_HDFS_OPTS="$COMMON_HDFS_OPTS -XX:+ParallelRefProcEnabled"
COMMON_HDFS_OPTS="$COMMON_HDFS_OPTS -XX:+TieredCompilation"
COMMON_HDFS_OPTS="$COMMON_HDFS_OPTS -XX:-ResizePLAB"
export HADOOP_NAMENODE_OPTS="$HADOOP_NAMENODE_OPTS -Xms1g -Xmx1g"
export HADOOP_NAMENODE_OPTS="$HADOOP_NAMENODE_OPTS $COMMON_HDFS_OPTS"
export HADOOP_NAMENODE_OPTS="$HADOOP_NAMENODE_OPTS -XX:+AlwaysPreTouch"
export HADOOP_NAMENODE_OPTS="$HADOOP_NAMENODE_OPTS -verbose:gc -Xloggc:/var/log/hadoop/hdfs-namenode-gc.log"
export HADOOP_SECONDARYNAMENODE_OPTS="$HADOOP_SECONDARYNAMENODE_OPTS -Xms1g -Xmx1g"
export HADOOP_SECONDARYNAMENODE_OPTS="$HADOOP_SECONDARYNAMENODE_OPTS $COMMON_HDFS_OPTS"
export HADOOP_SECONDARYNAMENODE_OPTS="$HADOOP_SECONDARYNAMENODE_OPTS -verbose:gc -Xloggc:/var/log/hadoop/hdfs-secondarynamenode-gc.log"
export HADOOP_DATANODE_OPTS="$HADOOP_DATANODE_OPTS -Xms1g -Xmx1g"
export HADOOP_DATANODE_OPTS="$HADOOP_DATANODE_OPTS $COMMON_HDFS_OPTS"
export HADOOP_DATANODE_OPTS="$HADOOP_DATANODE_OPTS -XX:+AlwaysPreTouch"
export HADOOP_DATANODE_OPTS="$HADOOP_DATANODE_OPTS -verbose:gc -Xloggc:/var/log/hadoop/hdfs-datanode-gc.log"
apurtell commented 8 years ago

Thanks @dimaspivak. CLUSTERDOCK_TOPOLOGY_IMAGE is new to me; as a noob I'm cargo-culting your usage examples. I'll retry at the next refresh.

I sourced the clusterdock.sh script from the Cloudera repo.

I would only be willing to do this on a throwaway host, so a documented alternative that lets me compose all of the necessary commands/steps in a safe way would be highly appreciated.
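For example, something along these lines would feel safer to me; it's the same script, just fetched, reviewed, and then sourced from a local file (the file name is arbitrary):

curl -sL http://tiny.cloudera.com/clusterdock.sh -o clusterdock.sh
less clusterdock.sh    # look over what it defines before executing any of it
source clusterdock.sh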

apurtell commented 8 years ago

Also,

CLUSTERDOCK_TOPOLOGY_IMAGE=dimaspivak/clusterdock:apache_hbase_topology

How do I make that topology image myself? I don't want to pull from a remote repo.

dimaspivak commented 8 years ago

Hey Andy,

It can be built by running docker build in the apache_hbase_topology directory that's up for review over in HBASE-12721.
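For reference, a sketch of what that looks like, assuming the apache_hbase_topology directory from the HBASE-12721 patch is checked out locally (the tag name is arbitrary):

cd apache_hbase_topology
docker build -t clusterdock:apache_hbase_topology .

CLUSTERDOCK_TOPOLOGY_IMAGE can then point at that local tag instead of dimaspivak/clusterdock:apache_hbase_topology.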


apurtell commented 8 years ago

Ok, got it. Thanks!
