linkedin / dynamometer

A tool for scale and performance testing of HDFS with a specific focus on the NameNode.

start-dynamometer-cluster.sh can't start NameNode #72

Closed: wangzhe330 closed this issue 5 years ago

wangzhe330 commented 5 years ago

start-dynamometer-cluster.sh command: ./start-dynamometer-cluster.sh --hadoop_binary_path hadoop-2.7.2.tar.gz --conf_path /opt/hadoop/wz/dynamome --conf_path /opt/hadoop/wz/dynamometer/bin/conf/ --fs_image_dir hdfs:///dyno/fsimage --block_list_path

Checking the NameNode's startup log on the AM node:

2019-01-08 16:32:38,311 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
2019-01-08 16:32:38,315 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: createNameNode [-D, fs.defaultFS=hdfs://host-xx-xx:9002, -D, dfs.namenode.rpc-address=host-xx-xx:9002, -D, dfs.namenode.servicerpc-address=host-xx-xx:9022, -D, dfs.namenode.http-address=host-xx-xx:50077, -D, dfs.namenode.https-address=host-xx-xx:0, -D, dfs.namenode.name.dir=file:///opt/huawei/hadoop/tmp/nm-local-dir/usercache/hadoop/appcache/application_1546852874867_0024/container_1546852874867_0024_01_000002/dyno-node/name-data, -D, dfs.namenode.edits.dir=file:///opt/huawei/hadoop/tmp/nm-local-dir/usercache/hadoop/appcache/application_1546852874867_0024/container_1546852874867_0024_01_000002/dyno-node/name-data, -D, dfs.namenode.checkpoint.dir=file:///opt/huawei/hadoop/tmp/nm-local-dir/usercache/hadoop/appcache/application_1546852874867_0024/container_1546852874867_0024_01_000002/dyno-node/checkpoint, -D, dfs.namenode.safemode.threshold-pct=0.0f, -D, dfs.permissions.enabled=true, -D, dfs.cluster.administrators="*", -D, dfs.block.replicator.classname=com.linkedin.dynamometer.BlockPlacementPolicyAlwaysSatisfied, -D, hadoop.security.impersonation.provider.class=com.linkedin.dynamometer.AllowAllImpersonationProvider, -D, hadoop.tmp.dir=/opt/huawei/hadoop/tmp/nm-local-dir/usercache/hadoop/appcache/application_1546852874867_0024/container_1546852874867_0024_01_000002/dyno-node, -D, hadoop.security.authentication=simple, -D, hadoop.security.authorization=false, -D, dfs.http.policy=HTTP_ONLY, -D, dfs.client.read.shortcircuit=false]
2019-01-08 16:32:38,318 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG: /**** SHUTDOWN_MSG: Shutting down NameNode at host-xx-xx ****/

In start-component.sh, line 277: ${HADOOP_HOME}/sbin/hadoop-daemon.sh start namenode $namenodeConfigs $NN_ADDITIONAL_ARGS;

It seems that the NameNode can't recognize the parameters ($namenodeConfigs). The $namenodeConfigs is built like this:

read -r -d '' namenodeConfigs <<EOF
-D fs.defaultFS=hdfs://${nnHostname}:${nnRpcPort}
-D dfs.namenode.rpc-address=${nnHostname}:${nnRpcPort}
-D dfs.namenode.servicerpc-address=${nnHostname}:${nnServiceRpcPort}
-D dfs.namenode.http-address=${nnHostname}:${nnHttpPort}
-D dfs.namenode.https-address=${nnHostname}:0
-D dfs.namenode.name.dir=file://${nameDir}
-D dfs.namenode.edits.dir=file://${editsDir}
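(For reference, a minimal runnable sketch of this shell pattern, with hypothetical values in place of the real Dynamometer variables: the heredoc builds one multi-line string, and an unquoted expansion later word-splits it into separate -D key=value arguments.)

#!/usr/bin/env bash
# Hypothetical stand-ins for the variables set by start-component.sh.
nnHostname=host-xx-xx
nnRpcPort=9002

# read -r -d '' returns non-zero when it hits end-of-input, hence the || true.
read -r -d '' namenodeConfigs <<EOF || true
-D fs.defaultFS=hdfs://${nnHostname}:${nnRpcPort}
-D dfs.namenode.rpc-address=${nnHostname}:${nnRpcPort}
EOF

# Unquoted expansion word-splits the string, yielding:
#   -D fs.defaultFS=hdfs://host-xx-xx:9002 -D dfs.namenode.rpc-address=host-xx-xx:9002
echo $namenodeConfigs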

Is there a usage error in my start-dynamometer-cluster.sh command?

xkrogen commented 5 years ago

Hi @wangzhe330, thanks for trying out Dynamometer and sorry to hear that you are experiencing an issue. It looks like the NameNode is receiving the parameters just fine; if you look at the log line including the createNameNode statement, it shows all of the correct parameters listed there as expected.

Your startup command seems a little odd -- it contains --conf_path twice, and --block_list_path without any argument. Did you make a mistake when pasting it in?

Can you additionally take a look at the logs on the ApplicationMaster? Please note that the NameNode does not run in the same container as the AM; there is a distinct container which runs the AM only. You should be able to find the link to this in the driver logs or from the ResourceManager UI.
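(If log aggregation is enabled on the YARN cluster, you can also pull the logs for all containers, including the AM's, after the application finishes. For example, using the application ID visible in your log above:)

yarn logs -applicationId application_1546852874867_0024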

wangzhe330 commented 5 years ago

@xkrogen thanks for the help.

Sorry for the odd command; it was a mistake when pasting it into this website. In fact, the command I used is: ./start-dynamometer-cluster.sh --hadoop_binary_path hadoop-2.7.2.tar.gz --conf_path /opt/hadoop/wz/dynamometer/bin/conf/ --fs_image_dir hdfs:///dyno/fsimage --block_list_path hdfs:///dyno/blocks

I've already noticed that there are 2 containers. Here are the AM logs: stderr: am_stderr.log (there is no stdout in the AM).

Here are the NameNode container's logs: hadoop-namenode.log, hadoop-namenode.out.log, and [NameNode_container_stdout.log](https://github.com/linkedin/dynamometer/files/2739172/NameNode_container_stdout.log) (there is no stderr in this container).

I also put a while(1) loop in start-component.sh where it would otherwise exit 1, to make it easier to observe the NameNode container. (screenshot attached)
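(A sketch of that debugging tweak, with hypothetical placement inside start-component.sh's failure branch:)

echo "Unable to launch NameNode; exiting."
# exit 1                          # original behavior
while true; do sleep 60; done     # keep the container alive so its working dir/logs can be inspected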

xkrogen commented 5 years ago

Thanks for the additional info.

Usage: java NameNode [-backup] | 
    [-checkpoint] | 
    [-format [-clusterid cid ] [-force] [-nonInteractive] ] | 
    [-upgrade [-clusterid cid] [-renameReserved<k-v pairs>] ] | 
    [-upgradeOnly [-clusterid cid] [-renameReserved<k-v pairs>] ] | 
    [-rollback] | 
    [-rollingUpgrade <rollback|downgrade|started> ] | 
    [-finalize] | 
    [-importCheckpoint] | 
    [-initializeSharedEdits] | 

This would seem to imply that an invalid argument was passed.

I looked into it, and options parsing was only added to the NameNode startup command in HDFS-2580. This isn't present in 2.7.2, only 2.7.3+. So the current Dynamometer can't support 2.7.2.
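(If it's useful, a hypothetical pre-flight check along those lines, using GNU sort -V for the version comparison:)

# Reject Hadoop tarballs older than 2.7.3, where NameNode -D option
# parsing (HDFS-2580) is not yet available.
tarball=hadoop-2.7.2.tar.gz
version=$(basename "$tarball" .tar.gz | sed 's/^hadoop-//')
if [ "$(printf '%s\n' "$version" 2.7.3 | sort -V | head -n1)" != "2.7.3" ]; then
  echo "Hadoop $version is too old for Dynamometer (need 2.7.3+)" >&2
  exit 1
fi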

hexiangheng commented 5 years ago

Hi @xkrogen, when I start the Dynamometer cluster, I can't start the NameNode.

start-dynamometer-cluster.sh command:

./bin/start-dynamometer-cluster.sh --hadoop_binary_path /home/hxh/hadoop/hadoop-3.3.0-SNAPSHOT.tar.gz --conf_path /home/hxh/hadoop/hadoop-3.3.0-SNAPSHOT/etc/hadoop --fs_image_dir hdfs:///dyno/fsimage/ --block_list_path hdfs:///dyno/blocks

SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/home/hxh/hadoop/hadoop-3.3.0-SNAPSHOT/share/hadoop/common/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/home/hxh/hadoop/dynamometer-fat-0.1.0-SNAPSHOT/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
2019-02-27 20:32:43,204 INFO dynamometer.Client: Initializing Client
2019-02-27 20:32:43,220 INFO dynamometer.Client: Starting with arguments: ["--hadoop_binary_path" "/home/hxh/hadoop/hadoop-3.3.0-SNAPSHOT.tar.gz" "--conf_path" "/home/hxh/hadoop/hadoop-3.3.0-SNAPSHOT/etc/hadoop" "--fs_image_dir" "hdfs:///dyno/fsimage/" "--block_list_path" "hdfs:///dyno/blocks"]
2019-02-27 20:32:43,366 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
2019-02-27 20:32:44,548 INFO dynamometer.Client: Running Client
2019-02-27 20:32:44,969 INFO client.ConfiguredRMFailoverProxyProvider: Failing over to rm2
2019-02-27 20:32:44,973 INFO retry.RetryInvocationHandler: java.net.ConnectException: Call From EC130/10.120.155.130 to EC131:8032 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused, while invoking ApplicationClientProtocolPBClientImpl.getClusterMetrics over rm2 after 1 failover attempts. Trying to failover after sleeping for 29094ms.
2019-02-27 20:33:14,068 INFO client.ConfiguredRMFailoverProxyProvider: Failing over to rm3
2019-02-27 20:33:14,080 INFO dynamometer.Client: Got Cluster metric info from ASM, numNodeManagers=3
2019-02-27 20:33:14,114 INFO dynamometer.Client: Queue info, queueName=default, queueCurrentCapacity=0.0, queueMaxCapacity=1.0, queueApplicationCount=0, queueChildQueueCount=0
2019-02-27 20:33:14,174 INFO conf.Configuration: resource-types.xml not found
2019-02-27 20:33:14,174 INFO resource.ResourceUtils: Unable to find 'resource-types.xml'.
2019-02-27 20:33:14,191 INFO dynamometer.Client: Max mem capabililty of resources in this cluster 8192
2019-02-27 20:33:14,191 INFO dynamometer.Client: Max virtual cores capabililty of resources in this cluster 4
2019-02-27 20:33:14,251 INFO dynamometer.Client: Set the environment for the application master
2019-02-27 20:33:14,267 INFO dynamometer.Client: Using resource FS_IMAGE directly from current location: hdfs://cluster1/dyno/fsimage/fsimage_0000000000000001475
2019-02-27 20:33:14,269 INFO dynamometer.Client: Using resource FS_IMAGE_MD5 directly from current location: hdfs://cluster1/dyno/fsimage/fsimage_0000000000000001475.md5
2019-02-27 20:33:14,271 INFO dynamometer.Client: Using resource VERSION directly from current location: hdfs:/dyno/fsimage/VERSION
2019-02-27 20:33:14,278 INFO dynamometer.Client: Uploading resource CONF_ZIP from [/home/hxh/hadoop/hadoop-3.3.0-SNAPSHOT/etc/hadoop] to hdfs://cluster1/user/root/.dynamometer/application_1551259943963_0002/conf.zip
2019-02-27 20:33:14,580 INFO dynamometer.Client: Uploading resource START_SCRIPT from [file:/tmp/hadoop-unjar8318175013065877249/start-component.sh] to hdfs://cluster1/user/root/.dynamometer/application_1551259943963_0002/start-component.sh
2019-02-27 20:33:14,689 INFO dynamometer.Client: Uploading resource HADOOP_BINARY from [/home/hxh/hadoop/hadoop-3.3.0-SNAPSHOT.tar.gz] to hdfs://cluster1/user/root/.dynamometer/application_1551259943963_0002/hadoop-3.3.0-SNAPSHOT.tar.gz
2019-02-27 20:33:20,747 INFO dynamometer.Client: Uploading resource DYNO_DEPS from [/home/hxh/hadoop/dynamometer-fat-0.1.0-SNAPSHOT/bin/../lib/dynamometer-infra-0.1.0-SNAPSHOT.jar] to hdfs://cluster1/user/root/.dynamometer/application_1551259943963_0002/dependencies.zip
2019-02-27 20:33:20,919 INFO dynamometer.Client: Completed setting up app master command: [$JAVA_HOME/bin/java, -Xmx1741m, com.linkedin.dynamometer.ApplicationMaster, --datanode_memory_mb 2048, --datanode_vcores 1, --datanodes_per_cluster 1, --datanode_launch_delay 0s, --namenode_memory_mb 2048, --namenode_vcores 1, --namenode_metrics_period 60, 1>/stdout, 2>/stderr]
2019-02-27 20:33:20,920 INFO dynamometer.Client: Submitting application to RM
2019-02-27 20:33:21,189 INFO impl.YarnClientImpl: Submitted application application_1551259943963_0002
2019-02-27 20:33:22,192 INFO dynamometer.Client: Track the application at: http://EC132:8088/proxy/application_1551259943963_0002/
2019-02-27 20:33:22,192 INFO dynamometer.Client: Kill the application using: yarn application -kill application_1551259943963_0002
2019-02-27 20:34:18,391 INFO dynamometer.Client: NameNode can be reached via HDFS at: hdfs://EC132:9002/
2019-02-27 20:34:18,391 INFO dynamometer.Client: NameNode web UI available at: http://EC132:50077/
2019-02-27 20:34:18,391 INFO dynamometer.Client: NameNode can be tracked at: http://EC132:8042/node/containerlogs/container_e25_1551259943963_0002_01_000002/root/
2019-02-27 20:34:18,391 INFO dynamometer.Client: Waiting for NameNode to finish starting up...
2019-02-27 20:34:19,368 INFO dynamometer.Client: Infra app exited unexpectedly. YarnState=FINISHED. Exiting from client.
2019-02-27 20:34:19,369 INFO dynamometer.Client: Attempting to clean up remaining running applications.
2019-02-27 20:34:19,369 ERROR dynamometer.Client: Application failed to complete successfully

xkrogen commented 5 years ago

The master branch of Dynamometer doesn't support Hadoop 3+. My target is for the version of Dynamometer upstreamed into Hadoop itself (see HDFS-12345) to support Hadoop 3, while this repo maintains backwards compatibility with the 2.x line. I have a branch which supports Hadoop 3; can you try using it instead? https://github.com/xkrogen/dynamometer/tree/ekrogen-hadoop-3-support
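(Roughly, assuming the same Gradle build used for master:)

git clone https://github.com/xkrogen/dynamometer.git
cd dynamometer
git checkout ekrogen-hadoop-3-support
gradle build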

hexiangheng commented 5 years ago

When I compile the ekrogen-hadoop-3-support branch with the Gradle build and run the resulting dynamometer-0.1.0-SNAPSHOT.tar in my hadoop-3.3.0-SNAPSHOT cluster, starting the NameNode fails. When I use hadoop-3.1.1 (the version targeted by the ekrogen-hadoop-3-support branch), starting the NameNode still fails.

hexiangheng commented 5 years ago

start-dynamometer-cluster.sh command:

./bin/start-dynamometer-cluster.sh --hadoop_binary_path /home/hxh/hadoop/dynamometer-0.1.0-SNAPSHOT/hadoop-3.1.1.tar.gz --conf_path /home/hxh/hadoop/hadoop-3.1.1/etc/hadoop --fs_image_dir hdfs:///dyno/fsimage/ --block_list_path hdfs:///dyno/blocks

2019-03-01 15:03:21,706 INFO dynamometer.Client: Initializing Client
2019-03-01 15:03:21,723 INFO dynamometer.Client: Starting with arguments: ["--hadoop_binary_path" "/home/hxh/hadoop/dynamometer-0.1.0-SNAPSHOT/hadoop-3.1.1.tar.gz" "--conf_path" "/home/hxh/hadoop/hadoop-3.1.1/etc/hadoop" "--fs_image_dir" "hdfs:///dyno/fsimage/" "--block_list_path" "hdfs:///dyno/blocks"]
2019-03-01 15:03:21,948 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
2019-03-01 15:03:23,184 INFO dynamometer.Client: Running Client
2019-03-01 15:03:23,454 INFO client.ConfiguredRMFailoverProxyProvider: Failing over to rm2
2019-03-01 15:03:23,456 INFO retry.RetryInvocationHandler: java.net.ConnectException: Call From EC130/10.120.155.130 to EC131:8032 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused, while invoking ApplicationClientProtocolPBClientImpl.getClusterMetrics over rm2 after 1 failover attempts. Trying to failover after sleeping for 38904ms.
2019-03-01 15:04:02,360 INFO client.ConfiguredRMFailoverProxyProvider: Failing over to rm3
2019-03-01 15:04:02,372 INFO dynamometer.Client: Got Cluster metric info from ASM, numNodeManagers=3
2019-03-01 15:04:02,405 INFO dynamometer.Client: Queue info, queueName=default, queueCurrentCapacity=0.0, queueMaxCapacity=1.0, queueApplicationCount=0, queueChildQueueCount=0
2019-03-01 15:04:02,464 INFO conf.Configuration: resource-types.xml not found
2019-03-01 15:04:02,464 INFO resource.ResourceUtils: Unable to find 'resource-types.xml'.
2019-03-01 15:04:02,483 INFO dynamometer.Client: Max mem capabililty of resources in this cluster 8192
2019-03-01 15:04:02,483 INFO dynamometer.Client: Max virtual cores capabililty of resources in this cluster 4
2019-03-01 15:04:02,553 INFO dynamometer.Client: Set the environment for the application master
2019-03-01 15:04:02,563 INFO dynamometer.Client: Using resource FS_IMAGE directly from current location: hdfs://cluster1/dyno/fsimage/fsimage_0000000000000008444
2019-03-01 15:04:02,567 INFO dynamometer.Client: Using resource FS_IMAGE_MD5 directly from current location: hdfs://cluster1/dyno/fsimage/fsimage_0000000000000008444.md5
2019-03-01 15:04:02,571 INFO dynamometer.Client: Using resource VERSION directly from current location: hdfs:/dyno/fsimage/VERSION
2019-03-01 15:04:02,576 INFO dynamometer.Client: Uploading resource CONF_ZIP from [/home/hxh/hadoop/hadoop-3.1.1/etc/hadoop] to hdfs://cluster1/user/root/.dynamometer/application_1551410686268_0005/conf.zip
2019-03-01 15:04:02,891 INFO dynamometer.Client: Uploading resource START_SCRIPT from [file:/tmp/hadoop-unjar1988672491809268142/start-component.sh] to hdfs://cluster1/user/root/.dynamometer/application_1551410686268_0005/start-component.sh
2019-03-01 15:04:03,024 INFO dynamometer.Client: Uploading resource HADOOP_BINARY from [/home/hxh/hadoop/dynamometer-0.1.0-SNAPSHOT/hadoop-3.1.1.tar.gz] to hdfs://cluster1/user/root/.dynamometer/application_1551410686268_0005/hadoop-3.1.1.tar.gz
2019-03-01 15:04:04,919 INFO dynamometer.Client: Uploading resource DYNO_DEPS from [/home/hxh/hadoop/dynamometer-0.1.0-SNAPSHOT/bin/../lib/dynamometer-infra-0.1.0-SNAPSHOT.jar,/home/hxh/hadoop/hadoop-3.1.1/share/hadoop/mapreduce/lib/junit-4.11.jar] to hdfs://cluster1/user/root/.dynamometer/application_1551410686268_0005/dependencies.zip
2019-03-01 15:04:05,052 INFO dynamometer.Client: Completed setting up app master command: [$JAVA_HOME/bin/java, -Xmx1741m, com.linkedin.dynamometer.ApplicationMaster, --datanode_memory_mb 2048, --datanode_vcores 1, --datanodes_per_cluster 1, --datanode_launch_delay 0s, --namenode_memory_mb 2048, --namenode_vcores 1, --namenode_metrics_period 60, 1>/stdout, 2>/stderr]
2019-03-01 15:04:05,053 INFO dynamometer.Client: Submitting application to RM
2019-03-01 15:04:05,336 INFO impl.YarnClientImpl: Submitted application application_1551410686268_0005
2019-03-01 15:04:06,339 INFO dynamometer.Client: Track the application at: http://EC132:8088/proxy/application_1551410686268_0005/
2019-03-01 15:04:06,339 INFO dynamometer.Client: Kill the application using: yarn application -kill application_1551410686268_0005
2019-03-01 15:04:32,845 INFO dynamometer.Client: NameNode can be reached via HDFS at: hdfs://EC132:9002/
2019-03-01 15:04:32,845 INFO dynamometer.Client: NameNode web UI available at: http://EC132:50077/
2019-03-01 15:04:32,845 INFO dynamometer.Client: NameNode can be tracked at: http://EC132:8042/node/containerlogs/container_e28_1551410686268_0005_01_000002/root/
2019-03-01 15:04:32,845 INFO dynamometer.Client: Waiting for NameNode to finish starting up...
2019-03-01 15:04:36,418 INFO dynamometer.Client: Infra app exited unexpectedly. YarnState=FINISHED. Exiting from client.
2019-03-01 15:04:36,419 INFO dynamometer.Client: Attempting to clean up remaining running applications.
2019-03-01 15:04:36,419 ERROR dynamometer.Client: Application failed to complete successfully

xkrogen commented 5 years ago

Hey @hexiangheng, something probably went wrong in the NameNode process. Can you look into the ApplicationMaster logs:

2019-03-01 15:04:06,339 INFO dynamometer.Client: Track the application at: http://EC132:8088/proxy/application_1551410686268_0005/

and the NameNode logs:

2019-03-01 15:04:32,845 INFO dynamometer.Client: NameNode can be tracked at: http://EC132:8042/node/containerlogs/container_e28_1551410686268_0005_01_000002/root/
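(With log aggregation enabled, these can also be fetched from the command line, e.g.:)

yarn logs -applicationId application_1551410686268_0005 -containerId container_e28_1551410686268_0005_01_000002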

hexiangheng commented 5 years ago

Thanks for your suggestions. I'm sorry, but I've looked into the ApplicationMaster logs and the NameNode logs and still can't solve this problem. Can you help take a look?

The ApplicationMaster logs:

Log Type: stderr
Log Upload Time: Sat Mar 02 20:29:42 +0800 2019
Log Length: 5806
Showing 4096 bytes of 5806 total.
Node information at hdfs://cluster1/user/root/.dynamometer/application_1551526866877_0002/nn_info.prop
2019-03-02 20:29:22,759 INFO dynamometer.ApplicationMaster: Got response from RM for container ask, allocatedCnt=1
2019-03-02 20:29:22,761 INFO dynamometer.ApplicationMaster: Launching NAMENODE on a new container., containerId=container_e32_1551526866877_0002_01_000002, containerNode=EC130:45454, containerNodeURI=EC130:8042, containerResourceMemory=2048, containerResourceVirtualCores=1
2019-03-02 20:29:22,761 INFO dynamometer.ApplicationMaster: Setting up container launch context for containerid=container_e32_1551526866877_0002_01_000002, isNameNode=true
2019-03-02 20:29:22,818 INFO dynamometer.ApplicationMaster: Completed setting up command for namenode: [./start-component.sh, namenode, hdfs://cluster1/user/root/.dynamometer/application_1551526866877_0002, 1><LOG_DIR>/stdout, 2><LOG_DIR>/stderr]
2019-03-02 20:29:22,837 INFO dynamometer.ApplicationMaster: Starting NAMENODE; track at: http://EC130:8042/node/containerlogs/container_e32_1551526866877_0002_01_000002/root/
2019-03-02 20:29:22,847 INFO impl.NMClientAsyncImpl: Processing Event EventType: START_CONTAINER for Container container_e32_1551526866877_0002_01_000002
2019-03-02 20:29:23,011 INFO dynamometer.ApplicationMaster: NameNode container started at ID container_e32_1551526866877_0002_01_000002
2019-03-02 20:29:29,625 INFO dynamometer.ApplicationMaster: NameNode information: {NM_HTTP_PORT=8042, NN_HOSTNAME=EC130, NN_HTTP_PORT=50077, NN_SERVICERPC_PORT=9022, NN_RPC_PORT=9002, CONTAINER_ID=container_e32_1551526866877_0002_01_000002}
2019-03-02 20:29:29,625 INFO dynamometer.ApplicationMaster: NameNode can be reached at: hdfs://EC130:9002/
2019-03-02 20:29:29,625 INFO dynamometer.ApplicationMaster: Waiting for NameNode to finish starting up...
2019-03-02 20:29:29,791 INFO dynamometer.ApplicationMaster: Got response from RM for container ask, completedCnt=1
2019-03-02 20:29:29,792 INFO dynamometer.ApplicationMaster: Got container status for NAMENODE: containerID=container_e32_1551526866877_0002_01_000002, state=COMPLETE, exitStatus=1, diagnostics=[2019-03-02 20:29:37.877]Exception from container-launch.
Container id: container_e32_1551526866877_0002_01_000002
Exit code: 1

[2019-03-02 20:29:37.881]Container exited with a non-zero exit code 1. Error file: prelaunch.err.
Last 4096 bytes of prelaunch.err :
Last 4096 bytes of stderr :
WARNING: HADOOP_PREFIX has been replaced by HADOOP_HOME. Using value of HADOOP_PREFIX.
2019-03-02 20:29:35,756 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
WARNING: HADOOP_PREFIX has been replaced by HADOOP_HOME. Using value of HADOOP_PREFIX.
WARNING: log4j.properties is not found. HADOOP_CONF_DIR may be incomplete.
WARNING: Use of this script to start HDFS daemons is deprecated.
WARNING: Attempting to execute replacement "hdfs --daemon start" instead.
WARNING: HADOOP_PREFIX has been replaced by HADOOP_HOME. Using value of HADOOP_PREFIX.
ERROR: Cannot find configuration directory "/home/hxh/hadoop/hdfs/tmp/nm-local...
2019-03-02 20:29:29,792 INFO dynamometer.ApplicationMaster: NameNode container completed; marking application as done
2019-03-02 20:29:32,637 INFO dynamometer.ApplicationMaster: NameNode has started!
2019-03-02 20:29:32,637 INFO dynamometer.ApplicationMaster: Looking for block listing files in hdfs:/dyno/blocks
2019-03-02 20:29:32,657 INFO dynamometer.ApplicationMaster: Requesting 5 DataNode containers with 2048MB memory, 1 vcores, 
2019-03-02 20:29:32,659 INFO dynamometer.ApplicationMaster: Finished requesting datanode containers
2019-03-02 20:29:32,659 INFO dynamometer.ApplicationMaster: Application completed. Stopping running containers
2019-03-02 20:29:32,680 INFO dynamometer.ApplicationMaster: Application completed. Signalling finish to RM
2019-03-02 20:29:32,692 INFO impl.AMRMClientImpl: Waiting for application to be successfully unregistered.
2019-03-02 20:29:32,794 INFO dynamometer.ApplicationMaster: Application Master failed. exiting

The NameNode logs:

Log Type: prelaunch.err
Log Upload Time: Sat Mar 02 20:29:42 +0800 2019
Log Length: 0

Log Type: prelaunch.out
Log Upload Time: Sat Mar 02 20:29:42 +0800 2019
Log Length: 100
Setting up env variables
Setting up job resources
Copying debugging information
Launching container

Log Type: stderr
Log Upload Time: Sat Mar 02 20:29:42 +0800 2019
Log Length: 1007
WARNING: HADOOP_PREFIX has been replaced by HADOOP_HOME. Using value of HADOOP_PREFIX.
2019-03-02 20:29:35,756 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
WARNING: HADOOP_PREFIX has been replaced by HADOOP_HOME. Using value of HADOOP_PREFIX.
WARNING: log4j.properties is not found. HADOOP_CONF_DIR may be incomplete.
WARNING: Use of this script to start HDFS daemons is deprecated.
WARNING: Attempting to execute replacement "hdfs --daemon start" instead.
WARNING: HADOOP_PREFIX has been replaced by HADOOP_HOME. Using value of HADOOP_PREFIX.
ERROR: Cannot find configuration directory "/home/hxh/hadoop/hdfs/tmp/nm-local-dir/usercache/root/appcache/application_1551526866877_0002/container_e32_1551526866877_0002_01_000002/conf/etc/hadoop"
tput: No value for $TERM and no -T specified
tput: No value for $TERM and no -T specified
tput: No value for $TERM and no -T specified
tput: No value for $TERM and no -T specified

xkrogen commented 5 years ago

I see errors like:

ERROR: Cannot find configuration directory "/home/hxh/hadoop/hdfs/tmp/nm-local-dir/usercache/root/appcache/application_1551526866877_0002/container_e32_1551526866877_0002_01_000002/conf/etc/hadoop"

This implies that it can't find the configuration. I just noticed your conf_path argument:

--conf_path /home/hxh/hadoop/hadoop-3.1.1/etc/hadoop

is pointing directly at the configuration files, but per the help output, it should have the etc/hadoop directory layout within the zip file:

... This must have the standard Hadoop conf layout containing e.g. etc/hadoop/*-site.xml

So you should use:

--conf_path /home/hxh/hadoop/hadoop-3.1.1
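(Equivalently, you could stage a minimal conf directory with that layout; a sketch using the paths from this thread, with /tmp/dyno-conf as a hypothetical staging location:)

mkdir -p /tmp/dyno-conf/etc/hadoop
cp /home/hxh/hadoop/hadoop-3.1.1/etc/hadoop/*-site.xml \
   /home/hxh/hadoop/hadoop-3.1.1/etc/hadoop/log4j.properties \
   /tmp/dyno-conf/etc/hadoop/
# then pass: --conf_path /tmp/dyno-conf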

hexiangheng commented 5 years ago

Thanks for your suggestions. I followed your method, but it still failed.

2019-03-07 20:43:39,078 INFO dynamometer.Client: Waiting for NameNode to finish starting up...
2019-03-07 20:43:43,017 INFO dynamometer.Client: Infra app exited unexpectedly. YarnState=FINISHED. Exiting from client.
2019-03-07 20:43:43,017 INFO dynamometer.Client: Attempting to clean up remaining running applications.
2019-03-07 20:43:43,017 ERROR dynamometer.Client: Application failed to complete successfully

The ApplicationMaster logs:

t-component.sh, namenode, hdfs://cluster1/user/root/.dynamometer/application_1551961441755_0001, 1><LOG_DIR>/stdout, 2><LOG_DIR>/stderr]
2019-03-07 20:42:03,700 INFO dynamometer.ApplicationMaster: Starting NAMENODE; track at: http://EC131:8042/node/containerlogs/container_e43_1551961441755_0001_01_000002/root/
2019-03-07 20:42:03,701 INFO impl.NMClientAsyncImpl: Processing Event EventType: START_CONTAINER for Container container_e43_1551961441755_0001_01_000002
2019-03-07 20:42:03,704 INFO impl.ContainerManagementProtocolProxy: Opening proxy : EC131:45454
2019-03-07 20:42:03,847 INFO dynamometer.ApplicationMaster: NameNode container started at ID container_e43_1551961441755_0001_01_000002
2019-03-07 20:43:39,215 INFO dynamometer.ApplicationMaster: NameNode information: {NM_HTTP_PORT=8042, NN_HOSTNAME=EC131, NN_HTTP_PORT=50077, NN_SERVICERPC_PORT=9022, NN_RPC_PORT=9002, CONTAINER_ID=container_e43_1551961441755_0001_01_000002}
2019-03-07 20:43:39,215 INFO dynamometer.ApplicationMaster: NameNode can be reached at: hdfs://EC131:9002/
2019-03-07 20:43:39,215 INFO dynamometer.ApplicationMaster: Waiting for NameNode to finish starting up...
2019-03-07 20:43:39,959 INFO dynamometer.ApplicationMaster: Got response from RM for container ask, completedCnt=1
2019-03-07 20:43:39,961 INFO dynamometer.ApplicationMaster: Got container status for NAMENODE: containerID=container_e43_1551961441755_0001_01_000002, state=COMPLETE, exitStatus=1, diagnostics=Exception from container-launch.
Container id: container_e43_1551961441755_0001_01_000002
Exit code: 1
Stack trace: ExitCodeException exitCode=1: 
    at org.apache.hadoop.util.Shell.runCommand(Shell.java:585)
    at org.apache.hadoop.util.Shell.run(Shell.java:482)
    at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:776)
    at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:212)
    at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:302)
    at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:82)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)

Container exited with a non...
2019-03-07 20:43:39,962 INFO dynamometer.ApplicationMaster: NameNode container completed; marking application as done
2019-03-07 20:43:42,313 INFO dynamometer.ApplicationMaster: NameNode has started!
2019-03-07 20:43:42,314 INFO dynamometer.ApplicationMaster: Looking for block listing files in hdfs:/dyno/blocks
2019-03-07 20:43:42,337 INFO dynamometer.ApplicationMaster: Requesting 3 DataNode containers with 2048MB memory, 1 vcores, 
2019-03-07 20:43:42,338 INFO dynamometer.ApplicationMaster: Finished requesting datanode containers
2019-03-07 20:43:42,338 INFO dynamometer.ApplicationMaster: Application completed. Stopping running containers
2019-03-07 20:43:42,340 INFO impl.ContainerManagementProtocolProxy: Opening proxy : EC131:45454
2019-03-07 20:43:42,404 INFO dynamometer.ApplicationMaster: Application completed. Signalling finish to RM
2019-03-07 20:43:42,440 INFO impl.AMRMClientImpl: Waiting for application to be successfully unregistered.
2019-03-07 20:43:42,544 INFO dynamometer.ApplicationMaster: Application Master failed. exiting
2019-03-07 20:43:42,546 INFO impl.AMRMClientAsyncImpl: Interrupted while waiting for queue
java.lang.InterruptedException
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.reportInterruptAfterWait(AbstractQueuedSynchronizer.java:2014)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2048)
    at java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
    at org.apache.hadoop.yarn.client.api.async.impl.AMRMClientAsyncImpl$CallbackHandlerThread.run(AMRMClientAsyncImpl.java:287)

The NameNode logs:

Showing 4096 bytes of 9990 total.
e/hxh/hadoop/hadoop-2.7.5/etc/hadoop:/home/hxh/hadoop/hadoop-2.7.5/etc/hadoop:/home/hxh/hadoop/hadoop-2.7.5/etc/hadoop:/home/hxh/hadoop/hadoop-2.7.5/share/hadoop/common/lib/*:/home/hxh/hadoop/hadoop-2.7.5/share/hadoop/common/*:/home/hxh/hadoop/hadoop-2.7.5/share/hadoop/hdfs:/home/hxh/hadoop/hadoop-2.7.5/share/hadoop/hdfs/lib/*:/home/hxh/hadoop/hadoop-2.7.5/share/hadoop/hdfs/*:/home/hxh/hadoop/hadoop-2.7.5/share/hadoop/yarn/lib/*:/home/hxh/hadoop/hadoop-2.7.5/share/hadoop/yarn/*:/home/hxh/hadoop/hadoop-2.7.5/share/hadoop/mapreduce/lib/*:/home/hxh/hadoop/hadoop-2.7.5/share/hadoop/mapreduce/*:/home/hxh/hadoop/hadoop-2.7.5/contrib/capacity-scheduler/*.jar:/home/hxh/hadoop/hadoop-2.7.5/contrib/capacity-scheduler/*.jar:/home/hxh/hadoop/hadoop-2.7.5/share/hadoop/yarn/*:/home/hxh/hadoop/hadoop-2.7.5/share/hadoop/yarn/lib/*:/home/hxh/hadoop/hadoop-2.7.5/etc/hadoop/nm-config/log4j.properties
LC_CTYPE=zh_CN.UTF-8
XDG_DATA_DIRS=/usr/share:/etc/opt/kde3/share:/opt/kde3/share
HDFS_NAMENODE_OPTS=-agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=5005 -Dhadoop.security.logger=INFO,RFAS -Dhdfs.audit.logger=INFO,RFAAUDIT  -Xmx128m -Xms128m -XX:+UseParNewGC -XX:+UseConcMarkSweepGC
-XX:+UseCMSInitiatingOccupancyOnly
-XX:CMSInitiatingOccupancyFraction=70 -verbose:gc -Xloggc:/hdfs-hdfs-namenode-gc.log -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=5 -XX:GCLogFileSize=1024M -XX:ErrorFile=/hadoop-hdfs-namenode-crash.log
-XX:+PrintGCDateStamps -XX:+PrintGCDetails -XX:+HeapDumpOnOutOfMemoryError
CONTAINER_ID=container_e43_1551961441755_0001_01_000002
NN_FILE_METRIC_PERIOD=60
HDFS_JOURNALNODE_USER=root
DATANODE_HEAPSIZE_FROM_MACHINE=128
G_BROKEN_FILENAMES=1
_=/usr/bin/printenv

Using the following ports for the namenode:
NN_HOSTNAME=EC131
NN_RPC_PORT=9002
NN_SERVICERPC_PORT=9022
NN_HTTP_PORT=50077
NM_HTTP_PORT=8042
CONTAINER_ID=container_e43_1551961441755_0001_01_000002
Uploaded namenode port info to hdfs://cluster1/user/root/.dynamometer/application_1551961441755_0001/nn_info.prop
Executing the following:
/home/hxh/hadoop/hadoop-2.7.5/usercache/root/appcache/application_1551961441755_0001/container_e43_1551961441755_0001_01_000002/hadoopBinary/home/sbin/hadoop-daemon.sh start namenode -D fs.defaultFS=hdfs://EC131:9002
  -D dfs.namenode.rpc-address=EC131:9002
  -D dfs.namenode.servicerpc-address=EC131:9022
  -D dfs.namenode.http-address=EC131:50077
  -D dfs.namenode.https-address=EC131:0
  -D dfs.namenode.name.dir=file:///home/hxh/hadoop/hadoop-2.7.5/usercache/root/appcache/application_1551961441755_0001/container_e43_1551961441755_0001_01_000002/dyno-node/name-data
  -D dfs.namenode.edits.dir=file:///home/hxh/hadoop/hadoop-2.7.5/usercache/root/appcache/application_1551961441755_0001/container_e43_1551961441755_0001_01_000002/dyno-node/name-data
  -D dfs.namenode.checkpoint.dir=file:///home/hxh/hadoop/hadoop-2.7.5/usercache/root/appcache/application_1551961441755_0001/container_e43_1551961441755_0001_01_000002/dyno-node/checkpoint
  -D dfs.namenode.kerberos.internal.spnego.principal=
  -D dfs.hosts=
  -D dfs.hosts.exclude=
  -D dfs.namenode.legacy-oiv-image.dir=
  -D dfs.namenode.kerberos.principal=
  -D dfs.namenode.keytab.file=
  -D dfs.namenode.safemode.threshold-pct=0.0f
  -D dfs.permissions.enabled=true
  -D dfs.cluster.administrators="*"
  -D dfs.block.replicator.classname=com.linkedin.dynamometer.BlockPlacementPolicyAlwaysSatisfied
  -D hadoop.security.impersonation.provider.class=com.linkedin.dynamometer.AllowAllImpersonationProvider
  -D hadoop.tmp.dir=/home/hxh/hadoop/hadoop-2.7.5/usercache/root/appcache/application_1551961441755_0001/container_e43_1551961441755_0001_01_000002/dyno-node
  -D hadoop.security.authentication=simple
  -D hadoop.security.authorization=false
  -D dfs.http.policy=HTTP_ONLY
  -D dfs.nameservices=
  -D dfs.web.authentication.kerberos.principal=
  -D dfs.web.authentication.kerberos.keytab=
  -D hadoop.http.filter.initializers=
  -D dfs.datanode.kerberos.principal=
  -D dfs.datanode.keytab.file=
  -D dfs.domain.socket.path=
  -D dfs.client.read.shortcircuit=false 
Unable to launch NameNode; exiting.

xkrogen commented 5 years ago

Are there any more logs available from the NameNode? There should be more information about why it failed.

hexiangheng commented 5 years ago

Yes, thank you very much. All the logs are as follows. The ApplicationMaster logs:

Log Type: stderr
Log Upload Time: Fri Mar 08 09:22:57 +0800 2019
Log Length: 5189
2019-03-08 09:20:36,061 INFO dynamometer.ApplicationMaster: Initializing ApplicationMaster
2019-03-08 09:20:36,405 INFO dynamometer.ApplicationMaster: Application master for app, appId=2, clustertimestamp=1551969821630, attemptId=1
2019-03-08 09:20:36,405 INFO dynamometer.ApplicationMaster: Starting ApplicationMaster
2019-03-08 09:20:36,535 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
2019-03-08 09:20:36,831 INFO impl.NMClientAsyncImpl: Upper bound of the thread pool size is 500
2019-03-08 09:20:36,833 INFO impl.ContainerManagementProtocolProxy: yarn.client.max-cached-nodemanagers-proxies : 0
2019-03-08 09:20:36,914 INFO client.ConfiguredRMFailoverProxyProvider: Failing over to rm2
2019-03-08 09:20:37,322 INFO dynamometer.ApplicationMaster: Requested NameNode ask: Capability[<memory:2048, vCores:1>]Priority[0]
2019-03-08 09:20:37,337 INFO dynamometer.ApplicationMaster: Waiting on availability of NameNode information at hdfs://cluster1/user/root/.dynamometer/application_1551969821630_0002/nn_info.prop
2019-03-08 09:20:39,385 INFO impl.AMRMClientImpl: Received new token for : EC131:45454
2019-03-08 09:20:39,387 INFO dynamometer.ApplicationMaster: Got response from RM for container ask, allocatedCnt=1
2019-03-08 09:20:39,989 INFO dynamometer.ApplicationMaster: Launching NAMENODE on a new container., containerId=container_e46_1551969821630_0002_01_000002, containerNode=EC131:45454, containerNodeURI=EC131:8042, containerResourceMemory=2048, containerResourceVirtualCores=1
2019-03-08 09:20:39,990 INFO dynamometer.ApplicationMaster: Setting up container launch context for containerid=container_e46_1551969821630_0002_01_000002, isNameNode=true
2019-03-08 09:20:40,164 INFO dynamometer.ApplicationMaster: Completed setting up command for namenode: [./start-component.sh, namenode, hdfs://cluster1/user/root/.dynamometer/application_1551969821630_0002, 1><LOG_DIR>/stdout, 2><LOG_DIR>/stderr]
2019-03-08 09:20:40,178 INFO dynamometer.ApplicationMaster: Starting NAMENODE; track at: http://EC131:8042/node/containerlogs/container_e46_1551969821630_0002_01_000002/root/
2019-03-08 09:20:40,178 INFO impl.NMClientAsyncImpl: Processing Event EventType: START_CONTAINER for Container container_e46_1551969821630_0002_01_000002
2019-03-08 09:20:40,180 INFO impl.ContainerManagementProtocolProxy: Opening proxy : EC131:45454
2019-03-08 09:20:41,277 INFO dynamometer.ApplicationMaster: NameNode container started at ID container_e46_1551969821630_0002_01_000002
2019-03-08 09:22:23,657 INFO dynamometer.ApplicationMaster: Got response from RM for container ask, completedCnt=1
2019-03-08 09:22:23,657 INFO dynamometer.ApplicationMaster: Got container status for NAMENODE: containerID=container_e46_1551969821630_0002_01_000002, state=COMPLETE, exitStatus=1, diagnostics=Exception from container-launch.
Container id: container_e46_1551969821630_0002_01_000002
Exit code: 1
Stack trace: ExitCodeException exitCode=1: 
    at org.apache.hadoop.util.Shell.runCommand(Shell.java:585)
    at org.apache.hadoop.util.Shell.run(Shell.java:482)
    at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:776)
    at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:212)
    at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:302)
    at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:82)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)

Container exited with a non...
2019-03-08 09:22:23,657 INFO dynamometer.ApplicationMaster: NameNode container completed; marking application as done
2019-03-08 09:22:24,170 INFO dynamometer.ApplicationMaster: Application completed. Stopping running containers
2019-03-08 09:22:24,171 INFO impl.ContainerManagementProtocolProxy: Opening proxy : EC131:45454
2019-03-08 09:22:24,228 INFO dynamometer.ApplicationMaster: Application completed. Signalling finish to RM
2019-03-08 09:22:24,235 INFO impl.AMRMClientImpl: Waiting for application to be successfully unregistered.
2019-03-08 09:22:24,337 INFO impl.AMRMClientImpl: Waiting for application to be successfully unregistered.
2019-03-08 09:22:24,439 INFO dynamometer.ApplicationMaster: Application Master failed. exiting
2019-03-08 09:22:24,440 INFO impl.AMRMClientAsyncImpl: Interrupted while waiting for queue
java.lang.InterruptedException
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.reportInterruptAfterWait(AbstractQueuedSynchronizer.java:2014)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2048)
    at java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
    at org.apache.hadoop.yarn.client.api.async.impl.AMRMClientAsyncImpl$CallbackHandlerThread.run(AMRMClientAsyncImpl.java:287)

The NameNode logs:

Log Type: stdout
Log Upload Time: Fri Mar 08 09:22:56 +0800 2019
Log Length: 9990
Starting namenode with ID 000002
PWD is: /home/hxh/hadoop/hadoop-2.7.5/usercache/root/appcache/application_1551969821630_0002/container_e46_1551969821630_0002_01_000002
Saving original HADOOP_HOME as: /home/hxh/hadoop/hadoop-2.7.5
Saving original HADOOP_CONF_DIR as: /home/hxh/hadoop/hadoop-2.7.5/etc/hadoop
Environment variables are set as:
(note that this doesn't include changes made by hadoop-env.sh)
NNTPSERVER=news
HOSTNAME=EC131
HADOOP_LOG_DIR=/home/hxh/hadoop/hadoop-2.7.5/logs/userlogs/application_1551969821630_0002/container_e46_1551969821630_0002_01_000002
HADOOP_IDENT_STRING=root
SHELL=/bin/bash
HISTSIZE=1000
HADOOP_HOME=/home/hxh/hadoop/hadoop-2.7.5/usercache/root/appcache/application_1551969821630_0002/container_e46_1551969821630_0002_01_000002/hadoopBinary/home
SSH_CLIENT=10.120.155.130 55760 22
YARN_RESOURCEMANAGER_USER=root
NM_HOST=EC131
HADOOP_PID_DIR=/home/hxh/hadoop/hadoop-2.7.5/usercache/root/appcache/application_1551969821630_0002/container_e46_1551969821630_0002_01_000002/dyno-node/pid
GRADLE_HOME=/home/hxh/hadoop/gradle-4.5
NN_EDITS_DIR=
HADOOP_PREFIX=/home/hxh/hadoop/hadoop-2.7.5/usercache/root/appcache/application_1551969821630_0002/container_e46_1551969821630_0002_01_000002/hadoopBinary/home
YARN_NICENESS=0
NM_AUX_SERVICE_mapreduce_shuffle=AAA0+gAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA=

NN_ADDITIONAL_ARGS=
NM_HTTP_PORT=8042
HDFS_DATANODE_OPTS=-agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=5006 -Dhadoop.security.logger=ERROR,RFAS 
-Xmx128m -Xms128m -XX:+UseParNewGC -XX:+UseConcMarkSweepGC
-XX:+UseCMSInitiatingOccupancyOnly
-XX:CMSInitiatingOccupancyFraction=70 -verbose:gc -Xloggc:/hdfs-hdfs-datanode-gc.log -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=5 -XX:GCLogFileSize=1024M -XX:ErrorFile=/hadoop-hdfs-datanode-crash.log
-XX:+PrintGCDateStamps -XX:+PrintGCDetails -XX:+HeapDumpOnOutOfMemoryError
HDFS_OM_USER=root
YARN_HOME=/home/hxh/hadoop/hadoop-2.7.5
LOCAL_DIRS=/home/hxh/hadoop/hadoop-2.7.5/usercache/root/appcache/application_1551969821630_0002
USER=root
LS_COLORS=
HADOOP_HEAPSIZE=
HADOOP_TOKEN_FILE_LOCATION=/home/hxh/hadoop/hadoop-2.7.5/usercache/root/appcache/application_1551969821630_0002/container_e46_1551969821630_0002_01_000002/container_tokens
LOG_DIRS=/home/hxh/hadoop/hadoop-2.7.5/logs/userlogs/application_1551969821630_0002/container_e46_1551969821630_0002_01_000002
MALLOC_ARENA_MAX=4
HADOOP_MAPARED_HOME=/home/hxh/hadoop/hadoop-2.7.5
FROM_HEADER=
HDFS_ZKFC_OPTS=-agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=5008
XDG_CONFIG_DIRS=/etc/xdg
NLSPATH=/usr/dt/lib/nls/msg/%L/%N.cat
YARN_ROOT_LOGGER=INFO,RFA
YARN_NODEMANAGER_USER=root
HADOOP_COMMON_LIB_NATIVE_DIR=/home/hxh/hadoop/hadoop-2.7.5/lib/native
HDFS_NAMENODE_USER=root
MAIL=/var/spool/mail/root
PATH=/home/hxh/hadoop/hadoop-2.7.5/usercache/root/appcache/application_1551969821630_0002/container_e46_1551969821630_0002_01_000002/hadoopBinary/home/bin:/home/hadoop/jdk1.8.0_111/bin:/home/hxh/hadoop/gradle-4.5/bin:/home/hadoop/jdk1.8.0_111/bin:/usr/local/sbin:/usr/bin:/bin:/usr/sbin:/sbin:/usr/lib/mit/bin:/usr/lib/mit/sbin:/home/hxh/hadoop/hadoop-2.7.5/bin:/home/hxh/hadoop/hadoop-2.7.5/sbin:/bin:/bin
HDFS_CONF_DIR=/home/hxh/hadoop/hadoop-2.7.5/etc/hadoop
NAMENODE_HEAPSIZE_FROM_MACHINE=128
HADOOP_HDFS_HOME=/home/hxh/hadoop/hadoop-2.7.5/usercache/root/appcache/application_1551969821630_0002/container_e46_1551969821630_0002_01_000002/hadoopBinary/home
YARN_IDENT_STRING=root
HADOOP_COMMON_HOME=/home/hxh/hadoop/hadoop-2.7.5/usercache/root/appcache/application_1551969821630_0002/container_e46_1551969821630_0002_01_000002/hadoopBinary/home
PWD=/home/hxh/hadoop/hadoop-2.7.5/usercache/root/appcache/application_1551969821630_0002/container_e46_1551969821630_0002_01_000002
JAVA_HOME=/home/hadoop/jdk1.8.0_111
NN_NAME_DIR=
HADOOP_YARN_HOME=/home/hxh/hadoop/hadoop-2.7.5/usercache/root/appcache/application_1551969821630_0002/container_e46_1551969821630_0002_01_000002/hadoopBinary/home
HADOOP_CLASSPATH=/home/hxh/hadoop/hadoop-2.7.5/usercache/root/appcache/application_1551969821630_0002/container_e46_1551969821630_0002_01_000002/dependencies/*:/home/hxh/hadoop/hadoop-2.7.5/usercache/root/appcache/application_1551969821630_0002/container_e46_1551969821630_0002_01_000002/additionalClasspath/
LANG=POSIX
HADOOP_CONF_DIR=/home/hxh/hadoop/hadoop-2.7.5/usercache/root/appcache/application_1551969821630_0002/container_e46_1551969821630_0002_01_000002/conf/etc/hadoop
PYTHONSTARTUP=/etc/pythonstart
HDFS_SCM_USER=root
XFILESEARCHPATH=/usr/dt/app-defaults/%L/Dt
HADOOP_OPTS=-Djava.library.path=/home/hxh/hadoop/hadoop-2.7.5/lib -Dhadoop.log.dir=/home/hxh/hadoop/hadoop-2.7.5/logs -Dhadoop.log.file=hadoop.log -Dhadoop.home.dir=/home/hxh/hadoop/hadoop-2.7.5 -Dhadoop.id.str=root -Dhadoop.root.logger=INFO,console -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true
YARN_LOG_DIR=/home/hxh/hadoop/hadoop-2.7.5/logs
HADOOP_SECURE_LOG_DIR=/
HDFS_DATANODE_USER=root
HISTCONTROL=ignoredups
LIBHDFS_OPTS=-Djava.library.path=/home/hxh/hadoop/hadoop-2.7.5/usercache/root/appcache/application_1551969821630_0002/container_e46_1551969821630_0002_01_000002/hadoopBinary/home/lib/native
HOME=/home/
GRADLE_USER_HOME=/home/hxh/hadoop/dynamometer-0.1.4
SHLVL=5
QT_SYSTEM_DIR=/usr/share/desktop-data
DN_ADDITIONAL_ARGS=
YARN_LOGFILE=yarn-root-nodemanager-EC131.log
YARN_CONF_DIR=/home/hxh/hadoop/hadoop-2.7.5/usercache/root/appcache/application_1551969821630_0002/container_e46_1551969821630_0002_01_000002/conf/etc/hadoop
JVM_PID=13823
HADOOP_SECURE_DN_USER=
XCURSOR_THEME=DMZ
LS_OPTIONS=-A -N --color=none -T 0
HADOOP_MAPRED_HOME=/home/hxh/hadoop/hadoop-2.7.5/usercache/root/appcache/application_1551969821630_0002/container_e46_1551969821630_0002_01_000002/hadoopBinary/home
WINDOWMANAGER=/usr/bin/gnome
NM_PORT=45454
LOGNAME=root
HDFS_STANDBYNAMENODE_USER=root
SSH_CONNECTION=10.120.155.130 55760 10.120.155.131 22
CLASSPATH=/home/hxh/hadoop/hadoop-2.7.5/etc/hadoop:/home/hxh/hadoop/hadoop-2.7.5/etc/hadoop:/home/hxh/hadoop/hadoop-2.7.5/etc/hadoop:/home/hxh/hadoop/hadoop-2.7.5/share/hadoop/common/lib/*:/home/hxh/hadoop/hadoop-2.7.5/share/hadoop/common/*:/home/hxh/hadoop/hadoop-2.7.5/share/hadoop/hdfs:/home/hxh/hadoop/hadoop-2.7.5/share/hadoop/hdfs/lib/*:/home/hxh/hadoop/hadoop-2.7.5/share/hadoop/hdfs/*:/home/hxh/hadoop/hadoop-2.7.5/share/hadoop/yarn/lib/*:/home/hxh/hadoop/hadoop-2.7.5/share/hadoop/yarn/*:/home/hxh/hadoop/hadoop-2.7.5/share/hadoop/mapreduce/lib/*:/home/hxh/hadoop/hadoop-2.7.5/share/hadoop/mapreduce/*:/home/hxh/hadoop/hadoop-2.7.5/contrib/capacity-scheduler/*.jar:/home/hxh/hadoop/hadoop-2.7.5/contrib/capacity-scheduler/*.jar:/home/hxh/hadoop/hadoop-2.7.5/share/hadoop/yarn/*:/home/hxh/hadoop/hadoop-2.7.5/share/hadoop/yarn/lib/*:/home/hxh/hadoop/hadoop-2.7.5/etc/hadoop/nm-config/log4j.properties
LC_CTYPE=zh_CN.UTF-8
XDG_DATA_DIRS=/usr/share:/etc/opt/kde3/share:/opt/kde3/share
HDFS_NAMENODE_OPTS=-agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=5005 -Dhadoop.security.logger=INFO,RFAS -Dhdfs.audit.logger=INFO,RFAAUDIT  -Xmx128m -Xms128m -XX:+UseParNewGC -XX:+UseConcMarkSweepGC
-XX:+UseCMSInitiatingOccupancyOnly
-XX:CMSInitiatingOccupancyFraction=70 -verbose:gc -Xloggc:/hdfs-hdfs-namenode-gc.log -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=5 -XX:GCLogFileSize=1024M -XX:ErrorFile=/hadoop-hdfs-namenode-crash.log
-XX:+PrintGCDateStamps -XX:+PrintGCDetails -XX:+HeapDumpOnOutOfMemoryError
CONTAINER_ID=container_e46_1551969821630_0002_01_000002
NN_FILE_METRIC_PERIOD=60
HDFS_JOURNALNODE_USER=root
DATANODE_HEAPSIZE_FROM_MACHINE=128
G_BROKEN_FILENAMES=1
_=/usr/bin/printenv

Using the following ports for the namenode:
NN_HOSTNAME=EC131
NN_RPC_PORT=9002
NN_SERVICERPC_PORT=9022
NN_HTTP_PORT=50077
NM_HTTP_PORT=8042
CONTAINER_ID=container_e46_1551969821630_0002_01_000002
Uploaded namenode port info to hdfs://cluster1/user/root/.dynamometer/application_1551969821630_0002/nn_info.prop
Executing the following:
/home/hxh/hadoop/hadoop-2.7.5/usercache/root/appcache/application_1551969821630_0002/container_e46_1551969821630_0002_01_000002/hadoopBinary/home/sbin/hadoop-daemon.sh start namenode -D fs.defaultFS=hdfs://EC131:9002
  -D dfs.namenode.rpc-address=EC131:9002
  -D dfs.namenode.servicerpc-address=EC131:9022
  -D dfs.namenode.http-address=EC131:50077
  -D dfs.namenode.https-address=EC131:0
  -D dfs.namenode.name.dir=file:///home/hxh/hadoop/hadoop-2.7.5/usercache/root/appcache/application_1551969821630_0002/container_e46_1551969821630_0002_01_000002/dyno-node/name-data
  -D dfs.namenode.edits.dir=file:///home/hxh/hadoop/hadoop-2.7.5/usercache/root/appcache/application_1551969821630_0002/container_e46_1551969821630_0002_01_000002/dyno-node/name-data
  -D dfs.namenode.checkpoint.dir=file:///home/hxh/hadoop/hadoop-2.7.5/usercache/root/appcache/application_1551969821630_0002/container_e46_1551969821630_0002_01_000002/dyno-node/checkpoint
  -D dfs.namenode.kerberos.internal.spnego.principal=
  -D dfs.hosts=
  -D dfs.hosts.exclude=
  -D dfs.namenode.legacy-oiv-image.dir=
  -D dfs.namenode.kerberos.principal=
  -D dfs.namenode.keytab.file=
  -D dfs.namenode.safemode.threshold-pct=0.0f
  -D dfs.permissions.enabled=true
  -D dfs.cluster.administrators="*"
  -D dfs.block.replicator.classname=com.linkedin.dynamometer.BlockPlacementPolicyAlwaysSatisfied
  -D hadoop.security.impersonation.provider.class=com.linkedin.dynamometer.AllowAllImpersonationProvider
  -D hadoop.tmp.dir=/home/hxh/hadoop/hadoop-2.7.5/usercache/root/appcache/application_1551969821630_0002/container_e46_1551969821630_0002_01_000002/dyno-node
  -D hadoop.security.authentication=simple
  -D hadoop.security.authorization=false
  -D dfs.http.policy=HTTP_ONLY
  -D dfs.nameservices=
  -D dfs.web.authentication.kerberos.principal=
  -D dfs.web.authentication.kerberos.keytab=
  -D hadoop.http.filter.initializers=
  -D dfs.datanode.kerberos.principal=
  -D dfs.datanode.keytab.file=
  -D dfs.domain.socket.path=
  -D dfs.client.read.shortcircuit=false 
Unable to launch NameNode; exiting.

hexiangheng commented 5 years ago

In my cluster, I have two NameNodes, EC130 and EC131: one active (EC130) and one standby (EC131), and the RPC port is 9000. I don't understand the following configuration. Does my NameNode have to be configured like this?

Using the following ports for the namenode:
NN_HOSTNAME=EC131
NN_RPC_PORT=9002
NN_SERVICERPC_PORT=9022
NN_HTTP_PORT=50077
NM_HTTP_PORT=8042
CONTAINER_ID=container_e48_1552039696839_0001_01_000002
Uploaded namenode port info to hdfs://EC130:9000/user/root/.dynamometer/application_1552039696839_0001/nn_info.prop

xkrogen commented 5 years ago

Something seems off here. I don't quite understand your setup.

Is there no stderr file on the NameNode container? Do you have a properly configured log4j file in your conf directory? The NameNode really should be producing logs about what is causing it to be unable to start.
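(For reference, a minimal generic log4j.properties, not Dynamometer-specific, that would at least let the NameNode log to the container's stderr; the path below reuses the hypothetical staged conf dir from earlier:)

cat > /tmp/dyno-conf/etc/hadoop/log4j.properties <<'EOF'
log4j.rootLogger=INFO,console
log4j.appender.console=org.apache.log4j.ConsoleAppender
log4j.appender.console.target=System.err
log4j.appender.console.layout=org.apache.log4j.PatternLayout
log4j.appender.console.layout.ConversionPattern=%d{ISO8601} %p %c{2}: %m%n
EOF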

hexiangheng commented 5 years ago

Hi @xkrogen, thanks for all your suggestions.

What is the hostname for your host (bare metal) HDFS cluster? The log line like "Uploaded namenode port info to ..." should point to your real HDFS cluster...

I had set an incorrect hostname; I have revised it. Thank you for your correction.

Are you trying to start the NameNode-under-test as part of Dynamometer, or have you manually started a NameNode?

Yes. First, I built an HA HDFS cluster and manually started the NameNode and YARN. Then I deployed the Dynamometer toolkit to my cluster and executed the startup command for the Dynamometer HDFS cluster, but starting the cluster failed, as I described. The startup command is as follows:

./bin/start-dynamometer-cluster.sh --hadoop_binary_path /home/hxh/hadoop/dynamometer-0.1.4/hadoop-2.7.5.tar.gz --conf_path /home/hxh/hadoop/hadoop-2.7.5 --fs_image_dir hdfs:///dyno/fsimage/ --block_list_path hdfs:///dyno/blocks --namenode_servicerpc_addr EC130:9000

Is there no stderr file on the NameNode container?

There is a stderr file under the NameNode container logs in my cluster, but after the startup command completes, the stderr file is cleared from the NameNode container. I don't know what causes this.

2019-03-11 23:23:02,973 INFO dynamometer.Client: Submitting application to RM
2019-03-11 23:23:03,352 INFO impl.YarnClientImpl: Submitted application application_1552317599646_0001
2019-03-11 23:23:04,356 INFO dynamometer.Client: Track the application at: http://EC131:8088/proxy/application_1552317599646_0001/
2019-03-11 23:23:04,356 INFO dynamometer.Client: Kill the application using: yarn application -kill application_1552317599646_0001
2019-03-11 23:24:44,266 INFO dynamometer.Client: Infra app exited unexpectedly. YarnState=FINISHED. Exiting from client.
2019-03-11 23:24:44,266 INFO dynamometer.Client: Attempting to clean up remaining running applications.
2019-03-11 23:24:44,266 ERROR dynamometer.Client: Application failed to complete successfully

Perhaps the 'Attempting to clean up remaining running applications' step above is what clears the log information?
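(Removal of completed-container logs from the NodeManager's local directories is normal YARN behavior; if log aggregation is enabled, the logs are instead shipped to HDFS and remain retrievable via yarn logs. A yarn-site.xml fragment, assuming your cluster doesn't already set it:)

<property>
  <name>yarn.log-aggregation-enable</name>
  <value>true</value>
</property>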

hexiangheng commented 5 years ago

Is the NameNode startup failure related to the configuration files of my HA HDFS cluster? If possible, can you post a configuration document for Dynamometer, including the relevant YARN and HDFS configurations?

xkrogen commented 5 years ago

Hi @hexiangheng ,

If you use --namenode_servicerpc_addr, Dynamometer will not attempt to start up a NameNode. Instead, it will treat the NameNode URI passed there as the NameNode under test. It will start up DataNodes which report to this NameNode, and start a workload replay job which executes against this NN. So there is no NameNode container. The logs you provided in your most recent comment are from the Client. The most interesting logs will be on the ApplicationMaster, which you can find using the Track the application at .... link.

Please also note, the NameNode supplied to --namenode_servicerpc_addr should not be the real HDFS cluster that you use in tandem with YARN to launch Dynamometer. Rather, it should be a NameNode started with the fsimage at hdfs://dyno/fsimage, but without any DataNodes (these will be started/managed by Dynamometer). If you haven't yet read our blog post describing the relationship between the host HDFS cluster and the one under test, I suggest that you do so.
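(In other words, the two modes look roughly like this; a sketch with a placeholder host:port for the external NameNode:)

# Mode 1: Dynamometer launches the NameNode-under-test inside YARN.
./bin/start-dynamometer-cluster.sh \
  --hadoop_binary_path hadoop-2.7.5.tar.gz \
  --conf_path /home/hxh/hadoop/hadoop-2.7.5 \
  --fs_image_dir hdfs:///dyno/fsimage \
  --block_list_path hdfs:///dyno/blocks

# Mode 2: point Dynamometer at an externally started NameNode (one loaded
# from the dyno fsimage, with no DataNodes of its own); Dynamometer then
# launches only the DataNodes and the workload.
./bin/start-dynamometer-cluster.sh \
  --hadoop_binary_path hadoop-2.7.5.tar.gz \
  --conf_path /home/hxh/hadoop/hadoop-2.7.5 \
  --fs_image_dir hdfs:///dyno/fsimage \
  --block_list_path hdfs:///dyno/blocks \
  --namenode_servicerpc_addr <external-nn-host>:<port>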

hexiangheng commented 5 years ago

Hi @xkrogen, thanks for your patient answer. I have read your blog. If I understand correctly, I should use the Dynamometer tools to start a simulated HDFS cluster; this simulated cluster will use some configuration from the real cluster, so we should also start a real HDFS cluster, right? But I failed to start the simulated HDFS cluster using this startup command:

./bin/start-dynamometer-cluster.sh --hadoop_binary_path /home/hxh/hadoop/dynamometer-0.1.4/hadoop-2.7.5.tar.gz --conf_path /home/hxh/hadoop/hadoop-2.7.5 --fs_image_dir hdfs:///dyno/fsimage/ --block_list_path hdfs:///dyno/blocks

The NameNode logs:

Log Type: hadoop-root-namenode-EC132.log
Log Upload Time: Tue Mar 19 15:36:30 +0800 2019
Log Length: 33403
2019-03-19 15:35:04,819 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: STARTUP_MSG: 
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = EC132/10.120.155.132
STARTUP_MSG:   args = [-D, fs.defaultFS=hdfs://EC132:9002, -D, dfs.namenode.rpc-address=EC132:9002, -D, dfs.namenode.servicerpc-address=EC132:9022, -D, dfs.namenode.http-address=EC132:50077, -D, dfs.namenode.https-address=EC132:0, -D, dfs.namenode.name.dir=file:///home/hxh/hadoop/hadoop-2.7.5/usercache/root/appcache/application_1552979999658_0002/container_e58_1552979999658_0002_01_000002/dyno-node/name-data, -D, dfs.namenode.edits.dir=file:///home/hxh/hadoop/hadoop-2.7.5/usercache/root/appcache/application_1552979999658_0002/container_e58_1552979999658_0002_01_000002/dyno-node/name-data, -D, dfs.namenode.checkpoint.dir=file:///home/hxh/hadoop/hadoop-2.7.5/usercache/root/appcache/application_1552979999658_0002/container_e58_1552979999658_0002_01_000002/dyno-node/checkpoint, -D, dfs.namenode.kerberos.internal.spnego.principal=, -D, dfs.hosts=, -D, dfs.hosts.exclude=, -D, dfs.namenode.legacy-oiv-image.dir=, -D, dfs.namenode.kerberos.principal=, -D, dfs.namenode.keytab.file=, -D, dfs.namenode.safemode.threshold-pct=0.0f, -D, dfs.permissions.enabled=true, -D, dfs.cluster.administrators="*", -D, dfs.block.replicator.classname=com.linkedin.dynamometer.BlockPlacementPolicyAlwaysSatisfied, -D, hadoop.security.impersonation.provider.class=com.linkedin.dynamometer.AllowAllImpersonationProvider, -D, hadoop.tmp.dir=/home/hxh/hadoop/hadoop-2.7.5/usercache/root/appcache/application_1552979999658_0002/container_e58_1552979999658_0002_01_000002/dyno-node, -D, hadoop.security.authentication=simple, -D, hadoop.security.authorization=false, -D, dfs.http.policy=HTTP_ONLY, -D, dfs.nameservices=, -D, dfs.web.authentication.kerberos.principal=, -D, dfs.web.authentication.kerberos.keytab=, -D, hadoop.http.filter.initializers=, -D, dfs.datanode.kerberos.principal=, -D, dfs.datanode.keytab.file=, -D, dfs.domain.socket.path=, -D, dfs.client.read.shortcircuit=false]
STARTUP_MSG:   version = 2.7.5
STARTUP_MSG:   classpath = /home/hxh/hadoop/hadoop-2.7.5/usercache/root/appcache/application_1552979999658_0002/container_e58_1552979999658_0002_01_000002/conf/etc/hadoop:/home/hxh/hadoop/hadoop-2.7.5/usercache/root/appcache/application_1552979999658_0002/container_e58_1552979999658_0002_01_000002/hadoopBinary/hadoop-2.7.5/share/hadoop/common/lib/jersey-core-1.9.jar:/home/hxh/hadoop/hadoop-2.7.5/usercache/root/appcache/application_1552979999658_0002/container_e58_1552979999658_0002_01_000002/hadoopBinary/hadoop-2.7.5/share/hadoop/common/lib/commons-net-3.1.jar:/home/hxh/hadoop/hadoop-2.7.5/usercache/root/appcache/application_1552979999658_0002/container_e58_1552979999658_0002_01_000002/hadoopBinary/hadoop-2.7.5/share/hadoop/common/lib/asm-3.2.jar:/home/hxh/hadoop/hadoop-2.7.5/usercache/root/appcache/application_1552979999658_0002/container_e58_1552979999658_0002_01_000002/hadoopBinary/hadoop-2.7.5/share/hadoop/common/lib/avro-1.7.4.jar:/home/hxh/hadoop/hadoop-2.7.5/usercache/root/appcache/application_1552979999658_0002/container_e58_1552979999658_0002_01_000002/hadoopBinary/hadoop-2.7.5/share/hadoop/common/lib/commons-codec-1.4.jar:/home/hxh/hadoop/hadoop-2.7.5/usercache/root/appcache/application_1552979999658_0002/container_e58_1552979999658_0002_01_000002/hadoopBinary/hadoop-2.7.5/share/hadoop/common/lib/jetty-util-6.1.26.jar:/home/hxh/hadoop/hadoop-2.7.5/usercache/root/appcache/application_1552979999658_0002/container_e58_1552979999658_0002_01_000002/hadoopBinary/hadoop-2.7.5/share/hadoop/common/lib/jackson-xc-1.9.13.jar:/home/hxh/hadoop/hadoop-2.7.5/usercache/root/appcache/application_1552979999658_0002/container_e58_1552979999658_0002_01_000002/hadoopBinary/hadoop-2.7.5/share/hadoop/common/lib/commons-math3-3.1.1.jar:/home/hxh/hadoop/hadoop-2.7.5/usercache/root/appcache/application_1552979999658_0002/container_e58_1552979999658_0002_01_000002/hadoopBinary/hadoop-2.7.5/share/hadoop/common/lib/commons-digester-1.8.jar:/home/hxh/hadoop/hadoop-2.7.5/usercache/root/appcache/application_1552979999658_0002/container_e58_1552979999658_0002_01_000002/hadoopBinary/hadoop-2.7.5/share/hadoop/common/lib/jackson-jaxrs-1.9.13.jar:/home/hxh/hadoop/hadoop-2.7.5/usercache/root/appcache/application_1552979999658_0002/container_e58_1552979999658_0002_01_000002/hadoopBinary/hadoop-2.7.5/share/hadoop/common/lib/jsr305-3.0.0.jar:/home/hxh/hadoop/hadoop-2.7.5/usercache/root/appcache/application_1552979999658_0002/container_e58_1552979999658_0002_01_000002/hadoopBinary/hadoop-2.7.5/share/hadoop/common/lib/hamcrest-core-1.3.jar:/home/hxh/hadoop/hadoop-2.7.5/usercache/root/appcache/application_1552979999658_0002/container_e58_1552979999658_0002_01_000002/hadoopBinary/hadoop-2.7.5/share/hadoop/common/lib/commons-compress-1.4.1.jar:/home/hxh/hadoop/hadoop-2.7.5/usercache/root/appcache/application_1552979999658_0002/container_e58_1552979999658_0002_01_000002/hadoopBinary/hadoop-2.7.5/share/hadoop/common/lib/slf4j-api-1.7.10.jar:/home/hxh/hadoop/hadoop-2.7.5/usercache/root/appcache/application_1552979999658_0002/container_e58_1552979999658_0002_01_000002/hadoopBinary/hadoop-2.7.5/share/hadoop/common/lib/jersey-json-1.9.jar:/home/hxh/hadoop/hadoop-2.7.5/usercache/root/appcache/application_1552979999658_0002/container_e58_1552979999658_0002_01_000002/hadoopBinary/hadoop-2.7.5/share/hadoop/common/lib/commons-cli-1.2.jar:/home/hxh/hadoop/hadoop-2.7.5/usercache/root/appcache/application_1552979999658_0002/container_e58_1552979999658_0002_01_000002/hadoopBinary/hadoop-2.7.5/share/hadoop/common/lib/jersey-server-1.9.jar:/home/hxh/hadoop/hadoop-2.7.5/usercache/root/appcache/application_1552979999658_0002/container_e58_1552979999658_0002_01_000002/hadoopBinary/hadoop-2.7.5/share/hadoop/common/lib/java-xmlbuilder-0.4.jar:/home/hxh/hadoop/hadoop-2.7.5/usercache/root/appcache/application_1552979999658_0002/container_e58_1552979999658_0002_01_000002/hadoopBinary/hadoop-2.7.5/share/hadoop/common/lib/paranamer-2.3.jar:/home/hxh/hadoop/hadoop-2.7.5/usercache/root/appcache/application_1552979999658_0002/container_e58_1552979999658_0002_01_000002/hadoopBinary/hadoop-2.7.5/share/hadoop/common/lib/jetty-6.1.26.jar:/home/hxh/hadoop/hadoop-2.7.5/usercache/root/appcache/application_1552979999658_0002/container_e58_1552979999658_0002_01_000002/hadoopBinary/hadoop-2.7.5/share/hadoop/common/lib/htrace-core-3.1.0-incubating.jar:/home/hxh/hadoop/hadoop-2.7.5/usercache/root/appcache/application_1552979999658_0002/container_e58_1552979999658_0002_01_000002/hadoopBinary/hadoop-2.7.5/share/hadoop/common/lib/api-asn1-api-1.0.0-M20.jar:/home/hxh/hadoop/hadoop-2.7.5/usercache/root/appcache/application_1552979999658_0002/container_e58_1552979999658_0002_01_000002/hadoopBinary/hadoop-2.7.5/share/hadoop/common/lib/apacheds-kerberos-codec-2.0.0-M15.jar:/home/hxh/hadoop/hadoop-2.7.5/usercache/root/appcache/application_1552979999658_0002/container_e58_1552979999658_0002_01_000002/hadoopBinary/hadoop-2.7.5/share/hadoop/common/lib/commons-beanutils-1.7.0.jar:/home/hxh/hadoop/hadoop-2.7.5/usercache/root/appcache/application_1552979999658_0002/container_e58_1552979999658_0002_01_000002/hadoopBinary/hadoop-2.7.5/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar:/home/hxh/hadoop/hadoop-2.7.5/usercache/root/appcache/application_1552979999658_0002/container_e58_1552979999658_0002_01_000002/hadoopBinary/hadoop-2.7.5/share/hadoop/common/lib/api-util-1.0.0-M20.jar:/home/hxh/hadoop/hadoop-2.7.5/usercache/root/appcache/application_1552979999658_0002/container_e58_1552979999658_0002_01_000002/hadoopBinary/hadoop-2.7.5/share/hadoop/common/lib/curator-framework-2.7.1.jar:/home/hxh/hadoop/hadoop-2.7.5/usercache/root/appcache/application_1552979999658_0002/container_e58_1552979999658_0002_01_000002/hadoopBinary/hadoop-2.7.5/share/hadoop/common/lib/jettison-1.1.jar:/home/hxh/hadoop/hadoop-2.7.5/usercache/root/appcache/application_1552979999658_0002/container_e58_1552979999658_0002_01_000002/hadoopBinary/hadoop-2.7.5/share/hadoop/common/lib/httpclient-4.2.5.jar:/home/hxh/hadoop/hadoop-2.7.5/usercache/root/appcache/application_1552979999658_0002/container_e58_1552979999658_0002_01_000002/hadoopBinary/hadoop-2.7.5/share/hadoop/common/lib/guava-11.0.2.jar:/home/hxh/hadoop/hadoop-2.7.5/usercache/root/appcache/application_1552979999658_0002/container_e58_1552979999658_0002_01_000002/hadoopBinary/hadoop-2.7.5/share/hadoop/common/lib/zookeeper-3.4.6.jar:/home/hxh/hadoop/hadoop-2.7.5/usercache/root/appcache/application_1552979999658_0002/container_e58_1552979999658_0002_01_000002/hadoopBinary/hadoop-2.7.5/share/hadoop/common/lib/servlet-api-2.5.jar:/home/hxh/hadoop/hadoop-2.7.5/usercache/root/appcache/application_1552979999658_0002/container_e58_1552979999658_0002_01_000002/hadoopBinary/hadoop-2.7.5/share/hadoop/common/lib/jaxb-api-2.2.2.jar:/home/hxh/hadoop/hadoop-2.7.5/usercache/root/appcache/application_1552979999658_0002/container_e58_1552979999658_0002_01_000002/hadoopBinary/hadoop-2.7.5/share/hadoop/common/lib/apacheds-i18n-2.0.0-M15.jar:/home/hxh/hadoop/hadoop-2.7.5/usercache/root/appcache/application_1552979999658_0002/container_e58_1552979999658_0002_01_000002/
hadoopBinary/hadoop-2.7.5/share/hadoop/common/lib/activation-1.1.jar:/home/hxh/hadoop/hadoop-2.7.5/usercache/root/appcache/application_1552979999658_0002/container_e58_1552979999658_0002_01_000002/hadoopBinary/hadoop-2.7.5/share/hadoop/common/lib/jsp-api-2.1.jar:/home/hxh/hadoop/hadoop-2.7.5/usercache/root/appcache/application_1552979999658_0002/container_e58_1552979999658_0002_01_000002/hadoopBinary/hadoop-2.7.5/share/hadoop/common/lib/httpcore-4.2.5.jar:/home/hxh/hadoop/hadoop-2.7.5/usercache/root/appcache/application_1552979999658_0002/container_e58_1552979999658_0002_01_000002/hadoopBinary/hadoop-2.7.5/share/hadoop/common/lib/commons-httpclient-3.1.jar:/home/hxh/hadoop/hadoop-2.7.5/usercache/root/appcache/application_1552979999658_0002/container_e58_1552979999658_0002_01_000002/hadoopBinary/hadoop-2.7.5/share/hadoop/common/lib/commons-logging-1.1.3.jar:/home/hxh/hadoop/hadoop-2.7.5/usercache/root/appcache/application_1552979999658_0002/container_e58_1552979999658_0002_01_000002/hadoopBinary/hadoop-2.7.5/share/hadoop/common/lib/jetty-sslengine-6.1.26.jar:/home/hxh/hadoop/hadoop-2.7.5/usercache/root/appcache/application_1552979999658_0002/container_e58_1552979999658_0002_01_000002/hadoopBinary/hadoop-2.7.5/share/hadoop/common/lib/jsch-0.1.54.jar:/home/hxh/hadoop/hadoop-2.7.5/usercache/root/appcache/application_1552979999658_0002/container_e58_1552979999658_0002_01_000002/hadoopBinary/hadoop-2.7.5/share/hadoop/common/lib/stax-api-1.0-2.jar:/home/hxh/hadoop/hadoop-2.7.5/usercache/root/appcache/application_1552979999658_0002/container_e58_1552979999658_0002_01_000002/hadoopBinary/hadoop-2.7.5/share/hadoop/common/lib/jets3t-0.9.0.jar:/home/hxh/hadoop/hadoop-2.7.5/usercache/root/appcache/application_1552979999658_0002/container_e58_1552979999658_0002_01_000002/hadoopBinary/hadoop-2.7.5/share/hadoop/common/lib/snappy-java-1.0.4.1.jar:/home/hxh/hadoop/hadoop-2.7.5/usercache/root/appcache/application_1552979999658_0002/container_e58_1552979999658_0002_01_000002/hadoopBinary/hadoop-2.7.5/share/hadoop/common/lib/hadoop-annotations-2.7.5.jar:/home/hxh/hadoop/hadoop-2.7.5/usercache/root/appcache/application_1552979999658_0002/container_e58_1552979999658_0002_01_000002/hadoopBinary/hadoop-2.7.5/share/hadoop/common/lib/commons-configuration-1.6.jar:/home/hxh/hadoop/hadoop-2.7.5/usercache/root/appcache/application_1552979999658_0002/container_e58_1552979999658_0002_01_000002/hadoopBinary/hadoop-2.7.5/share/hadoop/common/lib/commons-collections-3.2.2.jar:/home/hxh/hadoop/hadoop-2.7.5/usercache/root/appcache/application_1552979999658_0002/container_e58_1552979999658_0002_01_000002/hadoopBinary/hadoop-2.7.5/share/hadoop/common/lib/netty-3.6.2.Final.jar:/home/hxh/hadoop/hadoop-2.7.5/usercache/root/appcache/application_1552979999658_0002/container_e58_1552979999658_0002_01_000002/hadoopBinary/hadoop-2.7.5/share/hadoop/common/lib/mockito-all-1.8.5.jar:/home/hxh/hadoop/hadoop-2.7.5/usercache/root/appcache/application_1552979999658_0002/container_e58_1552979999658_0002_01_000002/hadoopBinary/hadoop-2.7.5/share/hadoop/common/lib/commons-beanutils-core-1.8.0.jar:/home/hxh/hadoop/hadoop-2.7.5/usercache/root/appcache/application_1552979999658_0002/container_e58_1552979999658_0002_01_000002/hadoopBinary/hadoop-2.7.5/share/hadoop/common/lib/commons-io-2.4.jar:/home/hxh/hadoop/hadoop-2.7.5/usercache/root/appcache/application_1552979999658_0002/container_e58_1552979999658_0002_01_000002/hadoopBinary/hadoop-2.7.5/share/hadoop/common/lib/jackson-core-asl-1.9.13.jar:/home/hxh/hadoop/hadoop-2.7.5/usercache/root/appcache/ap
plication_1552979999658_0002/container_e58_1552979999658_0002_01_000002/hadoopBinary/hadoop-2.7.5/share/hadoop/common/lib/log4j-1.2.17.jar:/home/hxh/hadoop/hadoop-2.7.5/usercache/root/appcache/application_1552979999658_0002/container_e58_1552979999658_0002_01_000002/hadoopBinary/hadoop-2.7.5/share/hadoop/common/lib/xmlenc-0.52.jar:/home/hxh/hadoop/hadoop-2.7.5/usercache/root/appcache/application_1552979999658_0002/container_e58_1552979999658_0002_01_000002/hadoopBinary/hadoop-2.7.5/share/hadoop/common/lib/junit-4.11.jar:/home/hxh/hadoop/hadoop-2.7.5/usercache/root/appcache/application_1552979999658_0002/container_e58_1552979999658_0002_01_000002/hadoopBinary/hadoop-2.7.5/share/hadoop/common/lib/xz-1.0.jar:/home/hxh/hadoop/hadoop-2.7.5/usercache/root/appcache/application_1552979999658_0002/container_e58_1552979999658_0002_01_000002/hadoopBinary/hadoop-2.7.5/share/hadoop/common/lib/gson-2.2.4.jar:/home/hxh/hadoop/hadoop-2.7.5/usercache/root/appcache/application_1552979999658_0002/container_e58_1552979999658_0002_01_000002/hadoopBinary/hadoop-2.7.5/share/hadoop/common/lib/curator-client-2.7.1.jar:/home/hxh/hadoop/hadoop-2.7.5/usercache/root/appcache/application_1552979999658_0002/container_e58_1552979999658_0002_01_000002/hadoopBinary/hadoop-2.7.5/share/hadoop/common/lib/hadoop-auth-2.7.5.jar:/home/hxh/hadoop/hadoop-2.7.5/usercache/root/appcache/application_1552979999658_0002/container_e58_1552979999658_0002_01_000002/hadoopBinary/hadoop-2.7.5/share/hadoop/common/lib/jackson-mapper-asl-1.9.13.jar:/home/hxh/hadoop/hadoop-2.7.5/usercache/root/appcache/application_1552979999658_0002/container_e58_1552979999658_0002_01_000002/hadoopBinary/hadoop-2.7.5/share/hadoop/common/lib/protobuf-java-2.5.0.jar:/home/hxh/hadoop/hadoop-2.7.5/usercache/root/appcache/application_1552979999658_0002/container_e58_1552979999658_0002_01_000002/hadoopBinary/hadoop-2.7.5/share/hadoop/common/lib/curator-recipes-2.7.1.jar:/home/hxh/hadoop/hadoop-2.7.5/usercache/root/appcache/application_1552979999658_0002/container_e58_1552979999658_0002_01_000002/hadoopBinary/hadoop-2.7.5/share/hadoop/common/lib/commons-lang-2.6.jar:/home/hxh/hadoop/hadoop-2.7.5/usercache/root/appcache/application_1552979999658_0002/container_e58_1552979999658_0002_01_000002/hadoopBinary/hadoop-2.7.5/share/hadoop/common/lib/jaxb-impl-2.2.3-1.jar:/home/hxh/hadoop/hadoop-2.7.5/usercache/root/appcache/application_1552979999658_0002/container_e58_1552979999658_0002_01_000002/hadoopBinary/hadoop-2.7.5/share/hadoop/common/hadoop-common-2.7.5.jar:/home/hxh/hadoop/hadoop-2.7.5/usercache/root/appcache/application_1552979999658_0002/container_e58_1552979999658_0002_01_000002/hadoopBinary/hadoop-2.7.5/share/hadoop/common/hadoop-nfs-2.7.5.jar:/home/hxh/hadoop/hadoop-2.7.5/usercache/root/appcache/application_1552979999658_0002/container_e58_1552979999658_0002_01_000002/hadoopBinary/hadoop-2.7.5/share/hadoop/common/hadoop-common-2.7.5-tests.jar:/home/hxh/hadoop/hadoop-2.7.5/usercache/root/appcache/application_1552979999658_0002/container_e58_1552979999658_0002_01_000002/hadoopBinary/hadoop-2.7.5/share/hadoop/hdfs:/home/hxh/hadoop/hadoop-2.7.5/usercache/root/appcache/application_1552979999658_0002/container_e58_1552979999658_0002_01_000002/hadoopBinary/hadoop-2.7.5/share/hadoop/hdfs/lib/jersey-core-1.9.jar:/home/hxh/hadoop/hadoop-2.7.5/usercache/root/appcache/application_1552979999658_0002/container_e58_1552979999658_0002_01_000002/hadoopBinary/hadoop-2.7.5/share/hadoop/hdfs/lib/asm-3.2.jar:/home/hxh/hadoop/hadoop-2.7.5/usercache/root/appcache/application_155297999965
8_0002/container_e58_1552979999658_0002_01_000002/hadoopBinary/hadoop-2.7.5/share/hadoop/hdfs/lib/commons-codec-1.4.jar:/home/hxh/hadoop/hadoop-2.7.5/usercache/root/appcache/application_1552979999658_0002/container_e58_1552979999658_0002_01_000002/hadoopBinary/hadoop-2.7.5/share/hadoop/hdfs/lib/jetty-util-6.1.26.jar:/home/hxh/hadoop/hadoop-2.7.5/usercache/root/appcache/application_1552979999658_0002/container_e58_1552979999658_0002_01_000002/hadoopBinary/hadoop-2.7.5/share/hadoop/hdfs/lib/xml-apis-1.3.04.jar:/home/hxh/hadoop/hadoop-2.7.5/usercache/root/appcache/application_1552979999658_0002/container_e58_1552979999658_0002_01_000002/hadoopBinary/hadoop-2.7.5/share/hadoop/hdfs/lib/jsr305-3.0.0.jar:/home/hxh/hadoop/hadoop-2.7.5/usercache/root/appcache/application_1552979999658_0002/container_e58_1552979999658_0002_01_000002/hadoopBinary/hadoop-2.7.5/share/hadoop/hdfs/lib/commons-cli-1.2.jar:/home/hxh/hadoop/hadoop-2.7.5/usercache/root/appcache/application_1552979999658_0002/container_e58_1552979999658_0002_01_000002/hadoopBinary/hadoop-2.7.5/share/hadoop/hdfs/lib/jersey-server-1.9.jar:/home/hxh/hadoop/hadoop-2.7.5/usercache/root/appcache/application_1552979999658_0002/container_e58_1552979999658_0002_01_000002/hadoopBinary/hadoop-2.7.5/share/hadoop/hdfs/lib/jetty-6.1.26.jar:/home/hxh/hadoop/hadoop-2.7.5/usercache/root/appcache/application_1552979999658_0002/container_e58_1552979999658_0002_01_000002/hadoopBinary/hadoop-2.7.5/share/hadoop/hdfs/lib/htrace-core-3.1.0-incubating.jar:/home/hxh/hadoop/hadoop-2.7.5/usercache/root/appcache/application_1552979999658_0002/container_e58_1552979999658_0002_01_000002/hadoopBinary/hadoop-2.7.5/share/hadoop/hdfs/lib/guava-11.0.2.jar:/home/hxh/hadoop/hadoop-2.7.5/usercache/root/appcache/application_1552979999658_0002/container_e58_1552979999658_0002_01_000002/hadoopBinary/hadoop-2.7.5/share/hadoop/hdfs/lib/servlet-api-2.5.jar:/home/hxh/hadoop/hadoop-2.7.5/usercache/root/appcache/application_1552979999658_0002/container_e58_1552979999658_0002_01_000002/hadoopBinary/hadoop-2.7.5/share/hadoop/hdfs/lib/leveldbjni-all-1.8.jar:/home/hxh/hadoop/hadoop-2.7.5/usercache/root/appcache/application_1552979999658_0002/container_e58_1552979999658_0002_01_000002/hadoopBinary/hadoop-2.7.5/share/hadoop/hdfs/lib/commons-logging-1.1.3.jar:/home/hxh/hadoop/hadoop-2.7.5/usercache/root/appcache/application_1552979999658_0002/container_e58_1552979999658_0002_01_000002/hadoopBinary/hadoop-2.7.5/share/hadoop/hdfs/lib/netty-all-4.0.23.Final.jar:/home/hxh/hadoop/hadoop-2.7.5/usercache/root/appcache/application_1552979999658_0002/container_e58_1552979999658_0002_01_000002/hadoopBinary/hadoop-2.7.5/share/hadoop/hdfs/lib/commons-daemon-1.0.13.jar:/home/hxh/hadoop/hadoop-2.7.5/usercache/root/appcache/application_1552979999658_0002/container_e58_1552979999658_0002_01_000002/hadoopBinary/hadoop-2.7.5/share/hadoop/hdfs/lib/netty-3.6.2.Final.jar:/home/hxh/hadoop/hadoop-2.7.5/usercache/root/appcache/application_1552979999658_0002/container_e58_1552979999658_0002_01_000002/hadoopBinary/hadoop-2.7.5/share/hadoop/hdfs/lib/commons-io-2.4.jar:/home/hxh/hadoop/hadoop-2.7.5/usercache/root/appcache/application_1552979999658_0002/container_e58_1552979999658_0002_01_000002/hadoopBinary/hadoop-2.7.5/share/hadoop/hdfs/lib/jackson-core-asl-1.9.13.jar:/home/hxh/hadoop/hadoop-2.7.5/usercache/root/appcache/application_1552979999658_0002/container_e58_1552979999658_0002_01_000002/hadoopBinary/hadoop-2.7.5/share/hadoop/hdfs/lib/log4j-1.2.17.jar:/home/hxh/hadoop/hadoop-2.7.5/usercache/root/appcache/application_1
552979999658_0002/container_e58_1552979999658_0002_01_000002/hadoopBinary/hadoop-2.7.5/share/hadoop/hdfs/lib/xmlenc-0.52.jar:/home/hxh/hadoop/hadoop-2.7.5/usercache/root/appcache/application_1552979999658_0002/container_e58_1552979999658_0002_01_000002/hadoopBinary/hadoop-2.7.5/share/hadoop/hdfs/lib/jackson-mapper-asl-1.9.13.jar:/home/hxh/hadoop/hadoop-2.7.5/usercache/root/appcache/application_1552979999658_0002/container_e58_1552979999658_0002_01_000002/hadoopBinary/hadoop-2.7.5/share/hadoop/hdfs/lib/protobuf-java-2.5.0.jar:/home/hxh/hadoop/hadoop-2.7.5/usercache/root/appcache/application_1552979999658_0002/container_e58_1552979999658_0002_01_000002/hadoopBinary/hadoop-2.7.5/share/hadoop/hdfs/lib/xercesImpl-2.9.1.jar:/home/hxh/hadoop/hadoop-2.7.5/usercache/root/appcache/application_1552979999658_0002/container_e58_1552979999658_0002_01_000002/hadoopBinary/hadoop-2.7.5/share/hadoop/hdfs/lib/commons-lang-2.6.jar:/home/hxh/hadoop/hadoop-2.7.5/usercache/root/appcache/application_1552979999658_0002/container_e58_1552979999658_0002_01_000002/hadoopBinary/hadoop-2.7.5/share/hadoop/hdfs/hadoop-hdfs-2.7.5-tests.jar:/home/hxh/hadoop/hadoop-2.7.5/usercache/root/appcache/application_1552979999658_0002/container_e58_1552979999658_0002_01_000002/hadoopBinary/hadoop-2.7.5/share/hadoop/hdfs/hadoop-hdfs-nfs-2.7.5.jar:/home/hxh/hadoop/hadoop-2.7.5/usercache/root/appcache/application_1552979999658_0002/container_e58_1552979999658_0002_01_000002/hadoopBinary/hadoop-2.7.5/share/hadoop/hdfs/hadoop-hdfs-2.7.5.jar:/home/hxh/hadoop/hadoop-2.7.5/usercache/root/appcache/application_1552979999658_0002/container_e58_1552979999658_0002_01_000002/hadoopBinary/hadoop-2.7.5/share/hadoop/yarn/*:/home/hxh/hadoop/hadoop-2.7.5/usercache/root/appcache/application_1552979999658_0002/container_e58_1552979999658_0002_01_000002/hadoopBinary/hadoop-2.7.5/share/hadoop/mapreduce/*:/home/hxh/hadoop/hadoop-2.7.5/usercache/root/appcache/application_1552979999658_0002/container_e58_1552979999658_0002_01_000002/dependencies/dynamometer-infra-0.1.4.jar:/home/hxh/hadoop/hadoop-2.7.5/usercache/root/appcache/application_1552979999658_0002/container_e58_1552979999658_0002_01_000002/additionalClasspath/:/home/hxh/hadoop/hadoop-2.7.5/usercache/root/appcache/application_1552979999658_0002/container_e58_1552979999658_0002_01_000002/hadoopBinary/hadoop-2.7.5/contrib/capacity-scheduler/*.jar:/home/hxh/hadoop/hadoop-2.7.5/usercache/root/appcache/application_1552979999658_0002/container_e58_1552979999658_0002_01_000002/hadoopBinary/hadoop-2.7.5/contrib/capacity-scheduler/*.jar:/home/hxh/hadoop/hadoop-2.7.5/usercache/root/appcache/application_1552979999658_0002/container_e58_1552979999658_0002_01_000002/hadoopBinary/hadoop-2.7.5/contrib/capacity-scheduler/*.jar
STARTUP_MSG:   build = https://shv@git-wip-us.apache.org/repos/asf/hadoop.git -r 18065c2b6806ed4aa6a3187d77cbe21bb3dba075; compiled by 'kshvachk' on 2017-12-16T01:06Z
STARTUP_MSG:   java = 1.8.0_111
************************************************************/
2019-03-19 15:35:04,833 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
2019-03-19 15:35:04,839 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: createNameNode [-D, fs.defaultFS=hdfs://EC132:9002, -D, dfs.namenode.rpc-address=EC132:9002, -D, dfs.namenode.servicerpc-address=EC132:9022, -D, dfs.namenode.http-address=EC132:50077, -D, dfs.namenode.https-address=EC132:0, -D, dfs.namenode.name.dir=file:///home/hxh/hadoop/hadoop-2.7.5/usercache/root/appcache/application_1552979999658_0002/container_e58_1552979999658_0002_01_000002/dyno-node/name-data, -D, dfs.namenode.edits.dir=file:///home/hxh/hadoop/hadoop-2.7.5/usercache/root/appcache/application_1552979999658_0002/container_e58_1552979999658_0002_01_000002/dyno-node/name-data, -D, dfs.namenode.checkpoint.dir=file:///home/hxh/hadoop/hadoop-2.7.5/usercache/root/appcache/application_1552979999658_0002/container_e58_1552979999658_0002_01_000002/dyno-node/checkpoint, -D, dfs.namenode.kerberos.internal.spnego.principal=, -D, dfs.hosts=, -D, dfs.hosts.exclude=, -D, dfs.namenode.legacy-oiv-image.dir=, -D, dfs.namenode.kerberos.principal=, -D, dfs.namenode.keytab.file=, -D, dfs.namenode.safemode.threshold-pct=0.0f, -D, dfs.permissions.enabled=true, -D, dfs.cluster.administrators="*", -D, dfs.block.replicator.classname=com.linkedin.dynamometer.BlockPlacementPolicyAlwaysSatisfied, -D, hadoop.security.impersonation.provider.class=com.linkedin.dynamometer.AllowAllImpersonationProvider, -D, hadoop.tmp.dir=/home/hxh/hadoop/hadoop-2.7.5/usercache/root/appcache/application_1552979999658_0002/container_e58_1552979999658_0002_01_000002/dyno-node, -D, hadoop.security.authentication=simple, -D, hadoop.security.authorization=false, -D, dfs.http.policy=HTTP_ONLY, -D, dfs.nameservices=, -D, dfs.web.authentication.kerberos.principal=, -D, dfs.web.authentication.kerberos.keytab=, -D, hadoop.http.filter.initializers=, -D, dfs.datanode.kerberos.principal=, -D, dfs.datanode.keytab.file=, -D, dfs.domain.socket.path=, -D, dfs.client.read.shortcircuit=false]
2019-03-19 15:35:05,145 INFO org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from hadoop-metrics2-namenode.properties
2019-03-19 15:35:05,159 INFO org.apache.hadoop.metrics2.impl.MetricsSinkAdapter: Sink dyno-file started
2019-03-19 15:35:05,249 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 60 second(s).
2019-03-19 15:35:05,249 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system started
2019-03-19 15:35:05,252 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: fs.defaultFS is hdfs://EC132:9002
2019-03-19 15:35:05,253 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Clients are to use EC132:9002 to access this namenode/service.
2019-03-19 15:35:05,320 WARN org.apache.hadoop.util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
2019-03-19 15:35:05,375 INFO org.apache.hadoop.hdfs.DFSUtil: Starting Web-server for hdfs at: http://EC132:50077
2019-03-19 15:35:05,436 INFO org.mortbay.log: Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
2019-03-19 15:35:05,445 INFO org.apache.hadoop.security.authentication.server.AuthenticationFilter: Unable to initialize FileSignerSecretProvider, falling back to use random secrets.
2019-03-19 15:35:05,451 INFO org.apache.hadoop.http.HttpRequestLog: Http request log for http.requests.namenode is not defined
2019-03-19 15:35:05,458 INFO org.apache.hadoop.http.HttpServer2: Added global filter 'safety' (class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter)
2019-03-19 15:35:05,601 INFO org.apache.hadoop.http.HttpServer2: Added filter 'org.apache.hadoop.hdfs.web.AuthFilter' (class=org.apache.hadoop.hdfs.web.AuthFilter)
2019-03-19 15:35:05,602 INFO org.apache.hadoop.http.HttpServer2: addJerseyResourcePackage: packageName=org.apache.hadoop.hdfs.server.namenode.web.resources;org.apache.hadoop.hdfs.web.resources, pathSpec=/webhdfs/v1/*
2019-03-19 15:35:05,615 INFO org.apache.hadoop.http.HttpServer2: Jetty bound to port 50077
2019-03-19 15:35:05,615 INFO org.mortbay.log: jetty-6.1.26
2019-03-19 15:35:05,757 INFO org.mortbay.log: Started HttpServer2$SelectChannelConnectorWithSafeStartup@EC132:50077
2019-03-19 15:35:05,789 WARN org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Only one image storage directory (dfs.namenode.name.dir) configured. Beware of data loss due to lack of redundant storage directories!
2019-03-19 15:35:05,828 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No KeyProvider found.
2019-03-19 15:35:05,828 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Enabling async auditlog
2019-03-19 15:35:05,831 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsLock is fair: true
2019-03-19 15:35:05,833 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Detailed lock hold time metrics enabled: false
2019-03-19 15:35:05,873 INFO org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: dfs.block.invalidate.limit=1000
2019-03-19 15:35:05,873 INFO org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: dfs.namenode.datanode.registration.ip-hostname-check=true
2019-03-19 15:35:05,874 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: dfs.namenode.startup.delay.block.deletion.sec is set to 000:00:00:00.000
2019-03-19 15:35:05,874 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: The block deletion will start around 2019 Mar 19 15:35:05
2019-03-19 15:35:05,876 INFO org.apache.hadoop.util.GSet: Computing capacity for map BlocksMap
2019-03-19 15:35:05,876 INFO org.apache.hadoop.util.GSet: VM type       = 64-bit
2019-03-19 15:35:05,878 INFO org.apache.hadoop.util.GSet: 2.0% max memory 889 MB = 17.8 MB
2019-03-19 15:35:05,878 INFO org.apache.hadoop.util.GSet: capacity      = 2^21 = 2097152 entries
2019-03-19 15:35:05,887 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: dfs.block.access.token.enable=false
2019-03-19 15:35:05,887 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: defaultReplication         = 3
2019-03-19 15:35:05,887 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: maxReplication             = 512
2019-03-19 15:35:05,887 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: minReplication             = 1
2019-03-19 15:35:05,887 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: maxReplicationStreams      = 2
2019-03-19 15:35:05,887 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: replicationRecheckInterval = 3000
2019-03-19 15:35:05,888 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: encryptDataTransfer        = false
2019-03-19 15:35:05,888 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: maxNumBlocksToLog          = 1000
2019-03-19 15:35:05,899 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsOwner             = root (auth:SIMPLE)
2019-03-19 15:35:05,899 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: supergroup          = supergroup
2019-03-19 15:35:05,899 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: isPermissionEnabled = true
2019-03-19 15:35:05,899 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: HA Enabled: false
2019-03-19 15:35:05,900 WARN org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Configured NNs:
Nameservice <null>:
  NN ID null => EC132/10.120.155.132:9002

2019-03-19 15:35:05,900 ERROR org.apache.hadoop.hdfs.server.namenode.FSNamesystem: FSNamesystem initialization failed.
java.io.IOException: Invalid configuration: a shared edits dir must not be specified if HA is not enabled.
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:765)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:678)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:586)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:646)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:820)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:804)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1516)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1582)
2019-03-19 15:35:05,910 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Stopping services started for active state
2019-03-19 15:35:05,910 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Stopping services started for standby state
2019-03-19 15:35:05,916 INFO org.mortbay.log: Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@EC132:50077
2019-03-19 15:35:06,017 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Stopping NameNode metrics system...
2019-03-19 15:35:06,018 INFO org.apache.hadoop.metrics2.impl.MetricsSinkAdapter: dyno-file thread interrupted.
2019-03-19 15:35:06,018 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system stopped.
2019-03-19 15:35:06,018 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system shutdown complete.
2019-03-19 15:35:06,018 ERROR org.apache.hadoop.hdfs.server.namenode.NameNode: Failed to start namenode.
java.io.IOException: Invalid configuration: a shared edits dir must not be specified if HA is not enabled.
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:765)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:678)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:586)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:646)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:820)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:804)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1516)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1582)
2019-03-19 15:35:06,020 INFO org.apache.hadoop.util.ExitUtil: Exiting with status 1
2019-03-19 15:35:06,022 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG: 
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at EC132/10.120.155.132
************************************************************/
xkrogen commented 5 years ago

Ah, yes. There is a bug in HDFS where the shared edits dir configuration (dfs.namenode.shared.edits.dir) cannot be overridden on the command line (e.g. -Ddfs.namenode.shared.edits.dir=someValue), which is how Dynamometer overrides the configurations you supply with --conf_path to make them suitable for Dynamometer. You'll have to manually remove the shared edits dir configuration from /home/hxh/hadoop/hadoop-2.7.5/etc/hadoop/hdfs-site.xml.
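For reference, the entry to delete from that hdfs-site.xml will look something like the following (the qjournal URI shown is only an illustrative placeholder; your actual value will differ):

<property>
  <!-- Remove this whole block: it is only valid when HA is enabled -->
  <name>dfs.namenode.shared.edits.dir</name>
  <value>qjournal://journal1:8485;journal2:8485;journal3:8485/mycluster</value>
</property>

Deleting that whole <property> block from the configuration you pass via --conf_path should get you past the "a shared edits dir must not be specified if HA is not enabled" check.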

hexiangheng commented 5 years ago

Hi @xkrogen, thank you very much. Does this mean the real HDFS cluster and YARN can't be configured in HA mode? I changed my cluster to non-HA mode, but I again failed to start the simulated HDFS cluster, e.g.:

......
2019-03-24 12:06:24,498 INFO dynamometer.Client: NameNode can be reached via HDFS at: hdfs://EC131:9002/
2019-03-24 12:06:24,498 INFO dynamometer.Client: NameNode web UI available at: http://EC131:50077/
2019-03-24 12:06:24,498 INFO dynamometer.Client: NameNode can be tracked at: http://EC131:8042/node/containerlogs/container_e62_1553399068266_0002_01_000002/root/
2019-03-24 12:06:24,498 INFO dynamometer.Client: Waiting for NameNode to finish starting up...
2019-03-24 12:06:35,266 INFO dynamometer.Client: Infra app exited unexpectedly. YarnState=FINISHED. Exiting from client.
2019-03-24 12:06:35,266 INFO dynamometer.Client: Attempting to clean up remaining running applications.
2019-03-24 12:06:35,267 ERROR dynamometer.Client: Application failed to complete successfully

The NameNode logs:

Log Type: hadoop-root-namenode-EC132.log
Log Upload Time: Thu Mar 21 16:05:04 +0800 2019
Log Length: 39141
2019-03-21 16:03:24,219 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: STARTUP_MSG: 
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = EC132/10.120.155.132
STARTUP_MSG:   args = [-D, fs.defaultFS=hdfs://EC132:9002, -D, dfs.namenode.rpc-address=EC132:9002, -D, dfs.namenode.servicerpc-address=EC132:9022, -D, dfs.namenode.http-address=EC132:50077, -D, dfs.namenode.https-address=EC132:0, -D, dfs.namenode.name.dir=file:///home/hxh/hadoop/hadoop-2.7.5/usercache/root/appcache/application_1553155119846_0002/container_e61_1553155119846_0002_01_000002/dyno-node/name-data, -D, dfs.namenode.edits.dir=file:///home/hxh/hadoop/hadoop-2.7.5/usercache/root/appcache/application_1553155119846_0002/container_e61_1553155119846_0002_01_000002/dyno-node/name-data, -D, dfs.namenode.checkpoint.dir=file:///home/hxh/hadoop/hadoop-2.7.5/usercache/root/appcache/application_1553155119846_0002/container_e61_1553155119846_0002_01_000002/dyno-node/checkpoint, -D, dfs.namenode.kerberos.internal.spnego.principal=, -D, dfs.hosts=, -D, dfs.hosts.exclude=, -D, dfs.namenode.legacy-oiv-image.dir=, -D, dfs.namenode.kerberos.principal=, -D, dfs.namenode.keytab.file=, -D, dfs.namenode.safemode.threshold-pct=0.0f, -D, dfs.permissions.enabled=true, -D, dfs.cluster.administrators="*", -D, dfs.block.replicator.classname=com.linkedin.dynamometer.BlockPlacementPolicyAlwaysSatisfied, -D, hadoop.security.impersonation.provider.class=com.linkedin.dynamometer.AllowAllImpersonationProvider, -D, hadoop.tmp.dir=/home/hxh/hadoop/hadoop-2.7.5/usercache/root/appcache/application_1553155119846_0002/container_e61_1553155119846_0002_01_000002/dyno-node, -D, hadoop.security.authentication=simple, -D, hadoop.security.authorization=false, -D, dfs.http.policy=HTTP_ONLY, -D, dfs.nameservices=, -D, dfs.web.authentication.kerberos.principal=, -D, dfs.web.authentication.kerberos.keytab=, -D, hadoop.http.filter.initializers=, -D, dfs.datanode.kerberos.principal=, -D, dfs.datanode.keytab.file=, -D, dfs.domain.socket.path=, -D, dfs.client.read.shortcircuit=false]
STARTUP_MSG:   version = 2.7.5
STARTUP_MSG:   classpath = (identical to the classpath in the startup log above, apart from the application/container IDs; omitted here for brevity)
STARTUP_MSG:   build = https://shv@git-wip-us.apache.org/repos/asf/hadoop.git -r 18065c2b6806ed4aa6a3187d77cbe21bb3dba075; compiled by 'kshvachk' on 2017-12-16T01:06Z
STARTUP_MSG:   java = 1.8.0_111
************************************************************/
2019-03-21 16:03:24,228 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
2019-03-21 16:03:24,231 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: createNameNode [-D, fs.defaultFS=hdfs://EC132:9002, -D, dfs.namenode.rpc-address=EC132:9002, -D, dfs.namenode.servicerpc-address=EC132:9022, -D, dfs.namenode.http-address=EC132:50077, -D, dfs.namenode.https-address=EC132:0, -D, dfs.namenode.name.dir=file:///home/hxh/hadoop/hadoop-2.7.5/usercache/root/appcache/application_1553155119846_0002/container_e61_1553155119846_0002_01_000002/dyno-node/name-data, -D, dfs.namenode.edits.dir=file:///home/hxh/hadoop/hadoop-2.7.5/usercache/root/appcache/application_1553155119846_0002/container_e61_1553155119846_0002_01_000002/dyno-node/name-data, -D, dfs.namenode.checkpoint.dir=file:///home/hxh/hadoop/hadoop-2.7.5/usercache/root/appcache/application_1553155119846_0002/container_e61_1553155119846_0002_01_000002/dyno-node/checkpoint, -D, dfs.namenode.kerberos.internal.spnego.principal=, -D, dfs.hosts=, -D, dfs.hosts.exclude=, -D, dfs.namenode.legacy-oiv-image.dir=, -D, dfs.namenode.kerberos.principal=, -D, dfs.namenode.keytab.file=, -D, dfs.namenode.safemode.threshold-pct=0.0f, -D, dfs.permissions.enabled=true, -D, dfs.cluster.administrators="*", -D, dfs.block.replicator.classname=com.linkedin.dynamometer.BlockPlacementPolicyAlwaysSatisfied, -D, hadoop.security.impersonation.provider.class=com.linkedin.dynamometer.AllowAllImpersonationProvider, -D, hadoop.tmp.dir=/home/hxh/hadoop/hadoop-2.7.5/usercache/root/appcache/application_1553155119846_0002/container_e61_1553155119846_0002_01_000002/dyno-node, -D, hadoop.security.authentication=simple, -D, hadoop.security.authorization=false, -D, dfs.http.policy=HTTP_ONLY, -D, dfs.nameservices=, -D, dfs.web.authentication.kerberos.principal=, -D, dfs.web.authentication.kerberos.keytab=, -D, hadoop.http.filter.initializers=, -D, dfs.datanode.kerberos.principal=, -D, dfs.datanode.keytab.file=, -D, dfs.domain.socket.path=, -D, dfs.client.read.shortcircuit=false]
2019-03-21 16:03:24,556 INFO org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from hadoop-metrics2-namenode.properties
2019-03-21 16:03:24,570 INFO org.apache.hadoop.metrics2.impl.MetricsSinkAdapter: Sink dyno-file started
2019-03-21 16:03:24,659 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 60 second(s).
2019-03-21 16:03:24,659 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system started
2019-03-21 16:03:24,662 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: fs.defaultFS is hdfs://EC132:9002
2019-03-21 16:03:24,663 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Clients are to use EC132:9002 to access this namenode/service.
2019-03-21 16:03:24,734 WARN org.apache.hadoop.util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
2019-03-21 16:03:24,786 INFO org.apache.hadoop.hdfs.DFSUtil: Starting Web-server for hdfs at: http://EC132:50077
2019-03-21 16:03:24,843 INFO org.mortbay.log: Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
2019-03-21 16:03:24,850 INFO org.apache.hadoop.security.authentication.server.AuthenticationFilter: Unable to initialize FileSignerSecretProvider, falling back to use random secrets.
2019-03-21 16:03:24,856 INFO org.apache.hadoop.http.HttpRequestLog: Http request log for http.requests.namenode is not defined
2019-03-21 16:03:24,861 INFO org.apache.hadoop.http.HttpServer2: Added global filter 'safety' (class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter)
2019-03-21 16:03:25,000 INFO org.apache.hadoop.http.HttpServer2: Added filter 'org.apache.hadoop.hdfs.web.AuthFilter' (class=org.apache.hadoop.hdfs.web.AuthFilter)
2019-03-21 16:03:25,001 INFO org.apache.hadoop.http.HttpServer2: addJerseyResourcePackage: packageName=org.apache.hadoop.hdfs.server.namenode.web.resources;org.apache.hadoop.hdfs.web.resources, pathSpec=/webhdfs/v1/*
2019-03-21 16:03:25,015 INFO org.apache.hadoop.http.HttpServer2: Jetty bound to port 50077
2019-03-21 16:03:25,015 INFO org.mortbay.log: jetty-6.1.26
2019-03-21 16:03:25,145 INFO org.mortbay.log: Started HttpServer2$SelectChannelConnectorWithSafeStartup@EC132:50077
2019-03-21 16:03:25,172 WARN org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Only one image storage directory (dfs.namenode.name.dir) configured. Beware of data loss due to lack of redundant storage directories!
2019-03-21 16:03:25,172 WARN org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Only one namespace edits storage directory (dfs.namenode.edits.dir) configured. Beware of data loss due to lack of redundant storage directories!
2019-03-21 16:03:25,210 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No KeyProvider found.
2019-03-21 16:03:25,210 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Enabling async auditlog
2019-03-21 16:03:25,213 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsLock is fair: true
2019-03-21 16:03:25,215 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Detailed lock hold time metrics enabled: false
2019-03-21 16:03:25,261 INFO org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: dfs.block.invalidate.limit=1000
2019-03-21 16:03:25,261 INFO org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: dfs.namenode.datanode.registration.ip-hostname-check=true
2019-03-21 16:03:25,262 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: dfs.namenode.startup.delay.block.deletion.sec is set to 000:00:00:00.000
2019-03-21 16:03:25,262 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: The block deletion will start around 2019 Mar 21 16:03:25
2019-03-21 16:03:25,265 INFO org.apache.hadoop.util.GSet: Computing capacity for map BlocksMap
2019-03-21 16:03:25,265 INFO org.apache.hadoop.util.GSet: VM type       = 64-bit
2019-03-21 16:03:25,266 INFO org.apache.hadoop.util.GSet: 2.0% max memory 889 MB = 17.8 MB
2019-03-21 16:03:25,266 INFO org.apache.hadoop.util.GSet: capacity      = 2^21 = 2097152 entries
2019-03-21 16:03:25,276 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: dfs.block.access.token.enable=false
2019-03-21 16:03:25,276 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: defaultReplication         = 3
2019-03-21 16:03:25,276 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: maxReplication             = 512
2019-03-21 16:03:25,276 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: minReplication             = 1
2019-03-21 16:03:25,276 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: maxReplicationStreams      = 2
2019-03-21 16:03:25,277 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: replicationRecheckInterval = 3000
2019-03-21 16:03:25,277 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: encryptDataTransfer        = false
2019-03-21 16:03:25,277 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: maxNumBlocksToLog          = 1000
2019-03-21 16:03:25,289 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsOwner             = root (auth:SIMPLE)
2019-03-21 16:03:25,289 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: supergroup          = supergroup
2019-03-21 16:03:25,289 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: isPermissionEnabled = true
2019-03-21 16:03:25,289 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: HA Enabled: false
2019-03-21 16:03:25,291 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Append Enabled: true
2019-03-21 16:03:25,484 INFO org.apache.hadoop.util.GSet: Computing capacity for map INodeMap
2019-03-21 16:03:25,484 INFO org.apache.hadoop.util.GSet: VM type       = 64-bit
2019-03-21 16:03:25,484 INFO org.apache.hadoop.util.GSet: 1.0% max memory 889 MB = 8.9 MB
2019-03-21 16:03:25,484 INFO org.apache.hadoop.util.GSet: capacity      = 2^20 = 1048576 entries
2019-03-21 16:03:25,485 INFO org.apache.hadoop.hdfs.server.namenode.FSDirectory: ACLs enabled? false
2019-03-21 16:03:25,485 INFO org.apache.hadoop.hdfs.server.namenode.FSDirectory: XAttrs enabled? true
2019-03-21 16:03:25,485 INFO org.apache.hadoop.hdfs.server.namenode.FSDirectory: Maximum size of an xattr: 16384
2019-03-21 16:03:25,486 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Caching file names occuring more than 10 times
2019-03-21 16:03:25,509 INFO org.apache.hadoop.util.GSet: Computing capacity for map cachedBlocks
2019-03-21 16:03:25,510 INFO org.apache.hadoop.util.GSet: VM type       = 64-bit
2019-03-21 16:03:25,510 INFO org.apache.hadoop.util.GSet: 0.25% max memory 889 MB = 2.2 MB
2019-03-21 16:03:25,510 INFO org.apache.hadoop.util.GSet: capacity      = 2^18 = 262144 entries
2019-03-21 16:03:25,511 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.namenode.safemode.threshold-pct = 0.0
2019-03-21 16:03:25,511 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.namenode.safemode.min.datanodes = 0
2019-03-21 16:03:25,512 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.namenode.safemode.extension     = 30000
2019-03-21 16:03:25,515 INFO org.apache.hadoop.hdfs.server.namenode.top.metrics.TopMetrics: NNTop conf: dfs.namenode.top.window.num.buckets = 10
2019-03-21 16:03:25,515 INFO org.apache.hadoop.hdfs.server.namenode.top.metrics.TopMetrics: NNTop conf: dfs.namenode.top.num.users = 10
2019-03-21 16:03:25,515 INFO org.apache.hadoop.hdfs.server.namenode.top.metrics.TopMetrics: NNTop conf: dfs.namenode.top.windows.minutes = 1,5,25
2019-03-21 16:03:25,518 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Retry cache on namenode is enabled
2019-03-21 16:03:25,519 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
2019-03-21 16:03:25,520 INFO org.apache.hadoop.util.GSet: Computing capacity for map NameNodeRetryCache
2019-03-21 16:03:25,520 INFO org.apache.hadoop.util.GSet: VM type       = 64-bit
2019-03-21 16:03:25,521 INFO org.apache.hadoop.util.GSet: 0.029999999329447746% max memory 889 MB = 273.1 KB
2019-03-21 16:03:25,521 INFO org.apache.hadoop.util.GSet: capacity      = 2^15 = 32768 entries
2019-03-21 16:03:26,121 INFO org.apache.hadoop.hdfs.server.common.Storage: Lock on /home/hxh/hadoop/hadoop-2.7.5/usercache/root/appcache/application_1553155119846_0002/container_e61_1553155119846_0002_01_000002/dyno-node/name-data/in_use.lock acquired by nodename 20956@EC132
2019-03-21 16:03:26,190 INFO org.apache.hadoop.hdfs.server.namenode.FileJournalManager: Recovering unfinalized segments in /home/hxh/hadoop/hadoop-2.7.5/usercache/root/appcache/application_1553155119846_0002/container_e61_1553155119846_0002_01_000002/dyno-node/name-data/current
2019-03-21 16:03:26,191 INFO org.apache.hadoop.hdfs.server.namenode.FSImage: No edit log streams selected.
2019-03-21 16:03:26,191 INFO org.apache.hadoop.hdfs.server.namenode.FSImage: Planning to load image: FSImageFile(file=/home/hxh/hadoop/hadoop-2.7.5/usercache/root/appcache/application_1553155119846_0002/container_e61_1553155119846_0002_01_000002/dyno-node/name-data/current/fsimage_0000000000000000371, cpktTxId=0000000000000000371)
2019-03-21 16:03:26,297 INFO org.apache.hadoop.hdfs.server.namenode.FSImageFormatPBINode: Loading 51 INodes.
2019-03-21 16:03:26,352 INFO org.apache.hadoop.hdfs.server.namenode.FSImageFormatProtobuf: Loaded FSImage in 0 seconds.
2019-03-21 16:03:26,352 INFO org.apache.hadoop.hdfs.server.namenode.FSImage: Loaded image for txid 371 from /home/hxh/hadoop/hadoop-2.7.5/usercache/root/appcache/application_1553155119846_0002/container_e61_1553155119846_0002_01_000002/dyno-node/name-data/current/fsimage_0000000000000000371
2019-03-21 16:03:26,356 INFO org.apache.hadoop.hdfs.server.namenode.FSImage: Initializing quota with 4 thread(s)
2019-03-21 16:03:26,380 INFO org.apache.hadoop.hdfs.server.namenode.FSImage: Quota initialization completed in 24 milliseconds
name space=51
storage space=2148025128
storage types=RAM_DISK=0, SSD=0, DISK=0, ARCHIVE=0
2019-03-21 16:03:26,380 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Need to save fs image? false (staleImage=false, haEnabled=false, isRollingUpgrade=false)
2019-03-21 16:03:26,381 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Starting log segment at 372
2019-03-21 16:03:26,636 INFO org.apache.hadoop.hdfs.server.namenode.NameCache: initialized with 0 entries 0 lookups
2019-03-21 16:03:26,636 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Finished loading FSImage in 1113 msecs
2019-03-21 16:03:26,814 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Service RPC server is binding to EC132:9022
2019-03-21 16:03:26,821 INFO org.apache.hadoop.ipc.CallQueueManager: Using callQueue: class java.util.concurrent.LinkedBlockingQueue queueCapacity: 1000
2019-03-21 16:03:26,834 INFO org.apache.hadoop.ipc.Server: Starting Socket Reader #1 for port 9022
2019-03-21 16:03:26,905 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Setting ADDRESS EC132:9022
2019-03-21 16:03:26,905 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: RPC server is binding to EC132:9002
2019-03-21 16:03:26,905 INFO org.apache.hadoop.ipc.CallQueueManager: Using callQueue: class java.util.concurrent.LinkedBlockingQueue queueCapacity: 1000
2019-03-21 16:03:26,910 INFO org.apache.hadoop.ipc.Server: Starting Socket Reader #1 for port 9002
2019-03-21 16:03:26,924 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Registered FSNamesystemState MBean
2019-03-21 16:03:26,933 INFO org.apache.hadoop.hdfs.server.namenode.LeaseManager: Number of blocks under construction: 0
2019-03-21 16:03:26,933 INFO org.apache.hadoop.hdfs.server.namenode.LeaseManager: Number of blocks under construction: 0
2019-03-21 16:03:26,933 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: initializing replication queues
2019-03-21 16:03:26,933 INFO org.apache.hadoop.hdfs.StateChange: STATE* Leaving safe mode after 1 secs
2019-03-21 16:03:26,933 INFO org.apache.hadoop.hdfs.StateChange: STATE* Network topology has 0 racks and 0 datanodes
2019-03-21 16:03:26,933 INFO org.apache.hadoop.hdfs.StateChange: STATE* UnderReplicatedBlocks has 0 blocks
2019-03-21 16:03:26,941 INFO org.apache.hadoop.hdfs.server.blockmanagement.DatanodeDescriptor: Number of failed storage changes from 0 to 0
2019-03-21 16:03:26,967 INFO org.apache.hadoop.ipc.Server: IPC Server Responder: starting
2019-03-21 16:03:26,968 INFO org.apache.hadoop.ipc.Server: IPC Server listener on 9002: starting
2019-03-21 16:03:26,975 INFO org.apache.hadoop.ipc.Server: IPC Server Responder: starting
2019-03-21 16:03:26,978 INFO org.apache.hadoop.ipc.Server: IPC Server listener on 9022: starting
2019-03-21 16:03:26,980 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Total number of blocks            = 26
2019-03-21 16:03:26,980 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Number of invalid blocks          = 0
2019-03-21 16:03:26,980 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Number of under-replicated blocks = 26
2019-03-21 16:03:26,980 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Number of  over-replicated blocks = 0
2019-03-21 16:03:26,980 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Number of blocks being written    = 0
2019-03-21 16:03:26,980 INFO org.apache.hadoop.hdfs.StateChange: STATE* Replication Queue initialization scan for invalid, over- and under-replicated blocks completed in 46 msec
2019-03-21 16:03:26,985 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: NameNode RPC up at: EC132/10.120.155.132:9002
2019-03-21 16:03:26,986 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: NameNode service RPC up at: EC132/10.120.155.132:9022
2019-03-21 16:03:26,986 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Starting services required for active state
2019-03-21 16:03:26,990 INFO org.apache.hadoop.hdfs.server.blockmanagement.CacheReplicationMonitor: Starting CacheReplicationMonitor with interval 30000 milliseconds
hexiangheng commented 5 years ago

What are the signs that the simulated HDFS cluster has started up successfully?

xkrogen commented 5 years ago

Hey @hexiangheng , sorry for the delay in my response. Yes, the real HDFS and YARN can be HA, security-enabled, whatever you like.

From the NameNode logs, it looks like everything was running successfully. It would be helpful to have the logs for the ApplicationMaster as well, which should have some information about why the application exited.
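
To answer your earlier question about hallmarks of success: a quick sanity check is to poke the Dyno NameNode directly. Hypothetical invocations, using the host and ports from the NameNode log you shared:

# Ask the Dyno NameNode for a cluster report over its RPC port:
hdfs dfsadmin -fs hdfs://EC132:9002 -report

# Or query its web UI / JMX endpoint over HTTP:
curl 'http://EC132:50077/jmx?qry=Hadoop:service=NameNode,name=FSNamesystemState'

If the report returns and eventually shows your Dyno DataNodes as live, the simulated cluster is up.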

One thing that stands out for now:

2019-03-24 12:06:24,498 INFO dynamometer.Client: NameNode can be reached via HDFS at: hdfs://EC131:9002/
2019-03-24 12:06:24,498 INFO dynamometer.Client: NameNode web UI available at: http://EC131:50077/
2019-03-24 12:06:24,498 INFO dynamometer.Client: NameNode can be tracked at: http://EC131:8042/node/containerlogs/container_e62_1553399068266_0002_01_000002/root/

The client seems to be trying to access host EC131, but the NameNode logs you shared are from host EC132:

Log Type: hadoop-root-namenode-EC132.log
Log Upload Time: Thu Mar 21 16:05:04 +0800 2019
Log Length: 39141
2019-03-21 16:03:24,219 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: STARTUP_MSG: 
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = EC132/10.120.155.132

Can you provide info on what is running on these 2 hosts, and what the expectation is?
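
(As an aside: you can confirm which host the Dyno NameNode actually landed on by reading the properties file the ApplicationMaster publishes to the launching cluster's HDFS; the path pattern appears in the AM logs, and <application ID> below is a placeholder for your run:

hdfs dfs -cat /user/root/.dynamometer/<application ID>/nn_info.prop

It contains NN_HOSTNAME plus the RPC/HTTP ports of the launched NameNode.)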

hexiangheng commented 5 years ago

Hi @xkrogen, thank you very much. Following your suggestion, the latest logs are as follows.

The NameNode logs:

Log Type: hadoop-root-namenode-EC132.log
Log Upload Time: Tue Mar 26 18:38:01 +0800 2019
Log Length: 39140
2019-03-26 18:36:02,476 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: STARTUP_MSG: 
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = EC132/10.120.155.132
STARTUP_MSG:   args = [-D, fs.defaultFS=hdfs://EC132:9002, -D, dfs.namenode.rpc-address=EC132:9002, -D, dfs.namenode.servicerpc-address=EC132:9022, -D, dfs.namenode.http-address=EC132:50077, -D, dfs.namenode.https-address=EC132:0, -D, dfs.namenode.name.dir=file:///home/hxh/hadoop/hadoop-2.7.5/usercache/root/appcache/application_1553589057471_0008/container_e63_1553589057471_0008_01_000002/dyno-node/name-data, -D, dfs.namenode.edits.dir=file:///home/hxh/hadoop/hadoop-2.7.5/usercache/root/appcache/application_1553589057471_0008/container_e63_1553589057471_0008_01_000002/dyno-node/name-data, -D, dfs.namenode.checkpoint.dir=file:///home/hxh/hadoop/hadoop-2.7.5/usercache/root/appcache/application_1553589057471_0008/container_e63_1553589057471_0008_01_000002/dyno-node/checkpoint, -D, dfs.namenode.kerberos.internal.spnego.principal=, -D, dfs.hosts=, -D, dfs.hosts.exclude=, -D, dfs.namenode.legacy-oiv-image.dir=, -D, dfs.namenode.kerberos.principal=, -D, dfs.namenode.keytab.file=, -D, dfs.namenode.safemode.threshold-pct=0.0f, -D, dfs.permissions.enabled=true, -D, dfs.cluster.administrators="*", -D, dfs.block.replicator.classname=com.linkedin.dynamometer.BlockPlacementPolicyAlwaysSatisfied, -D, hadoop.security.impersonation.provider.class=com.linkedin.dynamometer.AllowAllImpersonationProvider, -D, hadoop.tmp.dir=/home/hxh/hadoop/hadoop-2.7.5/usercache/root/appcache/application_1553589057471_0008/container_e63_1553589057471_0008_01_000002/dyno-node, -D, hadoop.security.authentication=simple, -D, hadoop.security.authorization=false, -D, dfs.http.policy=HTTP_ONLY, -D, dfs.nameservices=, -D, dfs.web.authentication.kerberos.principal=, -D, dfs.web.authentication.kerberos.keytab=, -D, hadoop.http.filter.initializers=, -D, dfs.datanode.kerberos.principal=, -D, dfs.datanode.keytab.file=, -D, dfs.domain.socket.path=, -D, dfs.client.read.shortcircuit=false]
STARTUP_MSG:   version = 2.7.5
STARTUP_MSG:   classpath = /home/hxh/hadoop/hadoop-2.7.5/usercache/root/appcache/application_1553589057471_0008/container_e63_1553589057471_0008_01_000002/conf/etc/hadoop:... (remainder of the classpath, several hundred localized jar entries under the same container directory, omitted for readability)
STARTUP_MSG:   build = https://shv@git-wip-us.apache.org/repos/asf/hadoop.git -r 18065c2b6806ed4aa6a3187d77cbe21bb3dba075; compiled by 'kshvachk' on 2017-12-16T01:06Z
STARTUP_MSG:   java = 1.8.0_111
************************************************************/
2019-03-26 18:36:02,485 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
2019-03-26 18:36:02,488 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: createNameNode [-D, fs.defaultFS=hdfs://EC132:9002, -D, dfs.namenode.rpc-address=EC132:9002, -D, dfs.namenode.servicerpc-address=EC132:9022, -D, dfs.namenode.http-address=EC132:50077, -D, dfs.namenode.https-address=EC132:0, -D, dfs.namenode.name.dir=file:///home/hxh/hadoop/hadoop-2.7.5/usercache/root/appcache/application_1553589057471_0008/container_e63_1553589057471_0008_01_000002/dyno-node/name-data, -D, dfs.namenode.edits.dir=file:///home/hxh/hadoop/hadoop-2.7.5/usercache/root/appcache/application_1553589057471_0008/container_e63_1553589057471_0008_01_000002/dyno-node/name-data, -D, dfs.namenode.checkpoint.dir=file:///home/hxh/hadoop/hadoop-2.7.5/usercache/root/appcache/application_1553589057471_0008/container_e63_1553589057471_0008_01_000002/dyno-node/checkpoint, -D, dfs.namenode.kerberos.internal.spnego.principal=, -D, dfs.hosts=, -D, dfs.hosts.exclude=, -D, dfs.namenode.legacy-oiv-image.dir=, -D, dfs.namenode.kerberos.principal=, -D, dfs.namenode.keytab.file=, -D, dfs.namenode.safemode.threshold-pct=0.0f, -D, dfs.permissions.enabled=true, -D, dfs.cluster.administrators="*", -D, dfs.block.replicator.classname=com.linkedin.dynamometer.BlockPlacementPolicyAlwaysSatisfied, -D, hadoop.security.impersonation.provider.class=com.linkedin.dynamometer.AllowAllImpersonationProvider, -D, hadoop.tmp.dir=/home/hxh/hadoop/hadoop-2.7.5/usercache/root/appcache/application_1553589057471_0008/container_e63_1553589057471_0008_01_000002/dyno-node, -D, hadoop.security.authentication=simple, -D, hadoop.security.authorization=false, -D, dfs.http.policy=HTTP_ONLY, -D, dfs.nameservices=, -D, dfs.web.authentication.kerberos.principal=, -D, dfs.web.authentication.kerberos.keytab=, -D, hadoop.http.filter.initializers=, -D, dfs.datanode.kerberos.principal=, -D, dfs.datanode.keytab.file=, -D, dfs.domain.socket.path=, -D, dfs.client.read.shortcircuit=false]
2019-03-26 18:36:02,850 INFO org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from hadoop-metrics2-namenode.properties
2019-03-26 18:36:02,872 INFO org.apache.hadoop.metrics2.impl.MetricsSinkAdapter: Sink dyno-file started
2019-03-26 18:36:03,109 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 60 second(s).
2019-03-26 18:36:03,109 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system started
2019-03-26 18:36:03,112 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: fs.defaultFS is hdfs://EC132:9002
2019-03-26 18:36:03,113 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Clients are to use EC132:9002 to access this namenode/service.
2019-03-26 18:36:03,174 WARN org.apache.hadoop.util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
2019-03-26 18:36:03,229 INFO org.apache.hadoop.hdfs.DFSUtil: Starting Web-server for hdfs at: http://EC132:50077
2019-03-26 18:36:03,291 INFO org.mortbay.log: Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
2019-03-26 18:36:03,298 INFO org.apache.hadoop.security.authentication.server.AuthenticationFilter: Unable to initialize FileSignerSecretProvider, falling back to use random secrets.
2019-03-26 18:36:03,304 INFO org.apache.hadoop.http.HttpRequestLog: Http request log for http.requests.namenode is not defined
2019-03-26 18:36:03,309 INFO org.apache.hadoop.http.HttpServer2: Added global filter 'safety' (class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter)
2019-03-26 18:36:03,441 INFO org.apache.hadoop.http.HttpServer2: Added filter 'org.apache.hadoop.hdfs.web.AuthFilter' (class=org.apache.hadoop.hdfs.web.AuthFilter)
2019-03-26 18:36:03,443 INFO org.apache.hadoop.http.HttpServer2: addJerseyResourcePackage: packageName=org.apache.hadoop.hdfs.server.namenode.web.resources;org.apache.hadoop.hdfs.web.resources, pathSpec=/webhdfs/v1/*
2019-03-26 18:36:03,485 INFO org.apache.hadoop.http.HttpServer2: Jetty bound to port 50077
2019-03-26 18:36:03,485 INFO org.mortbay.log: jetty-6.1.26
2019-03-26 18:36:03,838 INFO org.mortbay.log: Started HttpServer2$SelectChannelConnectorWithSafeStartup@EC132:50077
2019-03-26 18:36:03,876 WARN org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Only one image storage directory (dfs.namenode.name.dir) configured. Beware of data loss due to lack of redundant storage directories!
2019-03-26 18:36:03,876 WARN org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Only one namespace edits storage directory (dfs.namenode.edits.dir) configured. Beware of data loss due to lack of redundant storage directories!
2019-03-26 18:36:03,934 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No KeyProvider found.
2019-03-26 18:36:03,935 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Enabling async auditlog
2019-03-26 18:36:03,938 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsLock is fair: true
2019-03-26 18:36:03,940 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Detailed lock hold time metrics enabled: false
2019-03-26 18:36:03,990 INFO org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: dfs.block.invalidate.limit=1000
2019-03-26 18:36:03,990 INFO org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: dfs.namenode.datanode.registration.ip-hostname-check=true
2019-03-26 18:36:03,992 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: dfs.namenode.startup.delay.block.deletion.sec is set to 000:00:00:00.000
2019-03-26 18:36:03,993 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: The block deletion will start around 2019 Mar 26 18:36:03
2019-03-26 18:36:03,995 INFO org.apache.hadoop.util.GSet: Computing capacity for map BlocksMap
2019-03-26 18:36:03,995 INFO org.apache.hadoop.util.GSet: VM type       = 64-bit
2019-03-26 18:36:04,000 INFO org.apache.hadoop.util.GSet: 2.0% max memory 889 MB = 17.8 MB
2019-03-26 18:36:04,001 INFO org.apache.hadoop.util.GSet: capacity      = 2^21 = 2097152 entries
2019-03-26 18:36:04,011 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: dfs.block.access.token.enable=false
2019-03-26 18:36:04,011 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: defaultReplication         = 3
2019-03-26 18:36:04,012 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: maxReplication             = 512
2019-03-26 18:36:04,012 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: minReplication             = 1
2019-03-26 18:36:04,012 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: maxReplicationStreams      = 2
2019-03-26 18:36:04,012 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: replicationRecheckInterval = 3000
2019-03-26 18:36:04,012 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: encryptDataTransfer        = false
2019-03-26 18:36:04,012 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: maxNumBlocksToLog          = 1000
2019-03-26 18:36:04,026 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsOwner             = root (auth:SIMPLE)
2019-03-26 18:36:04,026 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: supergroup          = supergroup
2019-03-26 18:36:04,026 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: isPermissionEnabled = true
2019-03-26 18:36:04,031 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: HA Enabled: false
2019-03-26 18:36:04,033 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Append Enabled: true
2019-03-26 18:36:04,501 INFO org.apache.hadoop.util.GSet: Computing capacity for map INodeMap
2019-03-26 18:36:04,502 INFO org.apache.hadoop.util.GSet: VM type       = 64-bit
2019-03-26 18:36:04,502 INFO org.apache.hadoop.util.GSet: 1.0% max memory 889 MB = 8.9 MB
2019-03-26 18:36:04,502 INFO org.apache.hadoop.util.GSet: capacity      = 2^20 = 1048576 entries
2019-03-26 18:36:04,520 INFO org.apache.hadoop.hdfs.server.namenode.FSDirectory: ACLs enabled? false
2019-03-26 18:36:04,520 INFO org.apache.hadoop.hdfs.server.namenode.FSDirectory: XAttrs enabled? true
2019-03-26 18:36:04,520 INFO org.apache.hadoop.hdfs.server.namenode.FSDirectory: Maximum size of an xattr: 16384
2019-03-26 18:36:04,520 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Caching file names occuring more than 10 times
2019-03-26 18:36:04,528 INFO org.apache.hadoop.util.GSet: Computing capacity for map cachedBlocks
2019-03-26 18:36:04,528 INFO org.apache.hadoop.util.GSet: VM type       = 64-bit
2019-03-26 18:36:04,528 INFO org.apache.hadoop.util.GSet: 0.25% max memory 889 MB = 2.2 MB
2019-03-26 18:36:04,528 INFO org.apache.hadoop.util.GSet: capacity      = 2^18 = 262144 entries
2019-03-26 18:36:04,530 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.namenode.safemode.threshold-pct = 0.0
2019-03-26 18:36:04,530 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.namenode.safemode.min.datanodes = 0
2019-03-26 18:36:04,530 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.namenode.safemode.extension     = 30000
2019-03-26 18:36:04,533 INFO org.apache.hadoop.hdfs.server.namenode.top.metrics.TopMetrics: NNTop conf: dfs.namenode.top.window.num.buckets = 10
2019-03-26 18:36:04,533 INFO org.apache.hadoop.hdfs.server.namenode.top.metrics.TopMetrics: NNTop conf: dfs.namenode.top.num.users = 10
2019-03-26 18:36:04,533 INFO org.apache.hadoop.hdfs.server.namenode.top.metrics.TopMetrics: NNTop conf: dfs.namenode.top.windows.minutes = 1,5,25
2019-03-26 18:36:04,537 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Retry cache on namenode is enabled
2019-03-26 18:36:04,537 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
2019-03-26 18:36:04,539 INFO org.apache.hadoop.util.GSet: Computing capacity for map NameNodeRetryCache
2019-03-26 18:36:04,539 INFO org.apache.hadoop.util.GSet: VM type       = 64-bit
2019-03-26 18:36:04,539 INFO org.apache.hadoop.util.GSet: 0.029999999329447746% max memory 889 MB = 273.1 KB
2019-03-26 18:36:04,539 INFO org.apache.hadoop.util.GSet: capacity      = 2^15 = 32768 entries
2019-03-26 18:36:04,875 INFO org.apache.hadoop.hdfs.server.common.Storage: Lock on /home/hxh/hadoop/hadoop-2.7.5/usercache/root/appcache/application_1553589057471_0008/container_e63_1553589057471_0008_01_000002/dyno-node/name-data/in_use.lock acquired by nodename 10942@EC132
2019-03-26 18:36:04,948 INFO org.apache.hadoop.hdfs.server.namenode.FileJournalManager: Recovering unfinalized segments in /home/hxh/hadoop/hadoop-2.7.5/usercache/root/appcache/application_1553589057471_0008/container_e63_1553589057471_0008_01_000002/dyno-node/name-data/current
2019-03-26 18:36:04,949 INFO org.apache.hadoop.hdfs.server.namenode.FSImage: No edit log streams selected.
2019-03-26 18:36:04,949 INFO org.apache.hadoop.hdfs.server.namenode.FSImage: Planning to load image: FSImageFile(file=/home/hxh/hadoop/hadoop-2.7.5/usercache/root/appcache/application_1553589057471_0008/container_e63_1553589057471_0008_01_000002/dyno-node/name-data/current/fsimage_0000000000000000371, cpktTxId=0000000000000000371)
2019-03-26 18:36:05,058 INFO org.apache.hadoop.hdfs.server.namenode.FSImageFormatPBINode: Loading 51 INodes.
2019-03-26 18:36:05,118 INFO org.apache.hadoop.hdfs.server.namenode.FSImageFormatProtobuf: Loaded FSImage in 0 seconds.
2019-03-26 18:36:05,118 INFO org.apache.hadoop.hdfs.server.namenode.FSImage: Loaded image for txid 371 from /home/hxh/hadoop/hadoop-2.7.5/usercache/root/appcache/application_1553589057471_0008/container_e63_1553589057471_0008_01_000002/dyno-node/name-data/current/fsimage_0000000000000000371
2019-03-26 18:36:05,123 INFO org.apache.hadoop.hdfs.server.namenode.FSImage: Initializing quota with 4 thread(s)
2019-03-26 18:36:05,140 INFO org.apache.hadoop.hdfs.server.namenode.FSImage: Quota initialization completed in 17 milliseconds
name space=51
storage space=2148025128
storage types=RAM_DISK=0, SSD=0, DISK=0, ARCHIVE=0
2019-03-26 18:36:05,141 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Need to save fs image? false (staleImage=false, haEnabled=false, isRollingUpgrade=false)
2019-03-26 18:36:05,141 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Starting log segment at 372
2019-03-26 18:36:05,400 INFO org.apache.hadoop.hdfs.server.namenode.NameCache: initialized with 0 entries 0 lookups
2019-03-26 18:36:05,400 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Finished loading FSImage in 839 msecs
2019-03-26 18:36:05,559 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Service RPC server is binding to EC132:9022
2019-03-26 18:36:05,565 INFO org.apache.hadoop.ipc.CallQueueManager: Using callQueue: class java.util.concurrent.LinkedBlockingQueue queueCapacity: 1000
2019-03-26 18:36:05,575 INFO org.apache.hadoop.ipc.Server: Starting Socket Reader #1 for port 9022
2019-03-26 18:36:05,692 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Setting ADDRESS EC132:9022
2019-03-26 18:36:05,692 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: RPC server is binding to EC132:9002
2019-03-26 18:36:05,692 INFO org.apache.hadoop.ipc.CallQueueManager: Using callQueue: class java.util.concurrent.LinkedBlockingQueue queueCapacity: 1000
2019-03-26 18:36:05,693 INFO org.apache.hadoop.ipc.Server: Starting Socket Reader #1 for port 9002
2019-03-26 18:36:05,701 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Registered FSNamesystemState MBean
2019-03-26 18:36:05,722 INFO org.apache.hadoop.hdfs.server.namenode.LeaseManager: Number of blocks under construction: 0
2019-03-26 18:36:05,722 INFO org.apache.hadoop.hdfs.server.namenode.LeaseManager: Number of blocks under construction: 0
2019-03-26 18:36:05,722 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: initializing replication queues
2019-03-26 18:36:05,723 INFO org.apache.hadoop.hdfs.StateChange: STATE* Leaving safe mode after 1 secs
2019-03-26 18:36:05,723 INFO org.apache.hadoop.hdfs.StateChange: STATE* Network topology has 0 racks and 0 datanodes
2019-03-26 18:36:05,723 INFO org.apache.hadoop.hdfs.StateChange: STATE* UnderReplicatedBlocks has 0 blocks
2019-03-26 18:36:05,730 INFO org.apache.hadoop.hdfs.server.blockmanagement.DatanodeDescriptor: Number of failed storage changes from 0 to 0
2019-03-26 18:36:05,752 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Total number of blocks            = 26
2019-03-26 18:36:05,752 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Number of invalid blocks          = 0
2019-03-26 18:36:05,752 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Number of under-replicated blocks = 26
2019-03-26 18:36:05,752 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Number of  over-replicated blocks = 0
2019-03-26 18:36:05,757 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Number of blocks being written    = 0
2019-03-26 18:36:05,757 INFO org.apache.hadoop.hdfs.StateChange: STATE* Replication Queue initialization scan for invalid, over- and under-replicated blocks completed in 33 msec
2019-03-26 18:36:05,765 INFO org.apache.hadoop.ipc.Server: IPC Server Responder: starting
2019-03-26 18:36:05,765 INFO org.apache.hadoop.ipc.Server: IPC Server listener on 9002: starting
2019-03-26 18:36:05,768 INFO org.apache.hadoop.ipc.Server: IPC Server Responder: starting
2019-03-26 18:36:05,768 INFO org.apache.hadoop.ipc.Server: IPC Server listener on 9022: starting
2019-03-26 18:36:05,770 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: NameNode RPC up at: EC132/10.120.155.132:9002
2019-03-26 18:36:05,770 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: NameNode service RPC up at: EC132/10.120.155.132:9022
2019-03-26 18:36:05,770 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Starting services required for active state
2019-03-26 18:36:05,773 INFO org.apache.hadoop.hdfs.server.blockmanagement.CacheReplicationMonitor: Starting CacheReplicationMonitor with interval 30000 milliseconds

The ApplicationMaster logs:

Log Type: stderr
Log Upload Time: Tue Mar 26 18:38:01 +0800 2019
Log Length: 5871
2019-03-26 18:37:16,330 INFO dynamometer.ApplicationMaster: Initializing ApplicationMaster
2019-03-26 18:37:16,926 INFO dynamometer.ApplicationMaster: Application master for app, appId=8, clustertimestamp=1553589057471, attemptId=1
2019-03-26 18:37:16,926 INFO dynamometer.ApplicationMaster: Starting ApplicationMaster
2019-03-26 18:37:17,156 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
2019-03-26 18:37:17,677 INFO impl.NMClientAsyncImpl: Upper bound of the thread pool size is 500
2019-03-26 18:37:17,679 INFO impl.ContainerManagementProtocolProxy: yarn.client.max-cached-nodemanagers-proxies : 0
2019-03-26 18:37:18,652 INFO dynamometer.ApplicationMaster: Requested NameNode ask: Capability[<memory:2048, vCores:1>]Priority[0]
2019-03-26 18:37:18,773 INFO dynamometer.ApplicationMaster: Waiting on availability of NameNode information at hdfs://EC130:9003/user/root/.dynamometer/application_1553589057471_0008/nn_info.prop
2019-03-26 18:37:20,970 INFO impl.AMRMClientImpl: Received new token for : EC132:45454
2019-03-26 18:37:20,974 INFO dynamometer.ApplicationMaster: Got response from RM for container ask, allocatedCnt=1
2019-03-26 18:37:21,032 INFO dynamometer.ApplicationMaster: Launching NAMENODE on a new container., containerId=container_e63_1553589057471_0008_01_000002, containerNode=EC132:45454, containerNodeURI=EC132:8042, containerResourceMemory=2048, containerResourceVirtualCores=1
2019-03-26 18:37:21,034 INFO dynamometer.ApplicationMaster: Setting up container launch context for containerid=container_e63_1553589057471_0008_01_000002, isNameNode=true
2019-03-26 18:37:21,370 INFO dynamometer.ApplicationMaster: Completed setting up command for namenode: [./start-component.sh, namenode, hdfs://EC130:9003/user/root/.dynamometer/application_1553589057471_0008, 1><LOG_DIR>/stdout, 2><LOG_DIR>/stderr]
2019-03-26 18:37:21,480 INFO dynamometer.ApplicationMaster: Starting NAMENODE; track at: http://EC132:8042/node/containerlogs/container_e63_1553589057471_0008_01_000002/root/
2019-03-26 18:37:21,484 INFO impl.NMClientAsyncImpl: Processing Event EventType: START_CONTAINER for Container container_e63_1553589057471_0008_01_000002
2019-03-26 18:37:21,488 INFO impl.ContainerManagementProtocolProxy: Opening proxy : EC132:45454
2019-03-26 18:37:21,881 INFO dynamometer.ApplicationMaster: NameNode container started at ID container_e63_1553589057471_0008_01_000002
2019-03-26 18:37:51,377 INFO dynamometer.ApplicationMaster: NameNode information: {NM_HTTP_PORT=8042, NN_HOSTNAME=EC132, NN_HTTP_PORT=50077, NN_SERVICERPC_PORT=9022, NN_RPC_PORT=9002, CONTAINER_ID=container_e63_1553589057471_0008_01_000002}
2019-03-26 18:37:51,378 INFO dynamometer.ApplicationMaster: NameNode can be reached at: hdfs://EC132:9002/
2019-03-26 18:37:51,378 INFO dynamometer.ApplicationMaster: Waiting for NameNode to finish starting up...
2019-03-26 18:38:00,130 INFO dynamometer.ApplicationMaster: Got response from RM for container ask, completedCnt=1
2019-03-26 18:38:00,131 INFO dynamometer.ApplicationMaster: Got container status for NAMENODE: containerID=container_e63_1553589057471_0008_01_000002, state=COMPLETE, exitStatus=-103, diagnostics=Container [pid=10821,containerID=container_e63_1553589057471_0008_01_000002] is running beyond virtual memory limits. Current usage: 292.2 MB of 2 GB physical memory used; 5.2 GB of 4.2 GB virtual memory used. Killing container.
Dump of the process-tree for container_e63_1553589057471_0008_01_000002 :
    |- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS) SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE
    |- 10821 10819 10821 10821 (bash) 0 0 12161024 676 /bin/bash -c ./start-component.sh namenode hdfs://EC130:9003/user/root/.dynamometer/application_1553589057471_0008 1>/home/hxh/hadoop/hadoop-2.7.5/logs/userlogs/application_1553589057471_0008/container_e63_1553589057471_0008_01_000002/stdout 2>/home/hxh/hadoop/hadoop-2.7.5/logs/userlogs/application_1553589057471_0008/container_e63_1553589057471_0008_01_000002/stderr 
    |- 10942 1 10821 10821 (java) 443 14 2773336064 43370 /home/hadoop/jdk1.8.0_111/bin/java -Dproc_namenode -Xmx1000m -Djava.library.path=/home...
2019-03-26 18:38:00,131 INFO dynamometer.ApplicationMaster: NameNode container completed; marking application as done
2019-03-26 18:38:00,523 INFO dynamometer.ApplicationMaster: NameNode has started!
2019-03-26 18:38:00,524 INFO dynamometer.ApplicationMaster: Looking for block listing files in hdfs:/dyno/blocks
2019-03-26 18:38:00,581 INFO dynamometer.ApplicationMaster: Requesting 3 DataNode containers with 2048MB memory, 1 vcores, 
2019-03-26 18:38:00,582 INFO dynamometer.ApplicationMaster: Finished requesting datanode containers
2019-03-26 18:38:00,582 INFO dynamometer.ApplicationMaster: Application completed. Stopping running containers
2019-03-26 18:38:00,583 INFO impl.ContainerManagementProtocolProxy: Opening proxy : EC132:45454
2019-03-26 18:38:00,605 INFO dynamometer.ApplicationMaster: Application completed. Signalling finish to RM
2019-03-26 18:38:00,631 INFO impl.AMRMClientImpl: Waiting for application to be successfully unregistered.
2019-03-26 18:38:00,736 INFO impl.AMRMClientAsyncImpl: Interrupted while waiting for queue
java.lang.InterruptedException
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.reportInterruptAfterWait(AbstractQueuedSynchronizer.java:2014)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2048)
    at java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
    at org.apache.hadoop.yarn.client.api.async.impl.AMRMClientAsyncImpl$CallbackHandlerThread.run(AMRMClientAsyncImpl.java:287)
2019-03-26 18:38:00,736 INFO dynamometer.ApplicationMaster: Application Master failed. exiting
hexiangheng commented 5 years ago

Can you provide info on what is running on these 2 hosts?

Sure, I built a real 3-node HA HDFS cluster (EC130, EC131, EC132). The processes running on EC130 (the active NameNode) are as follows:

5537 JobHistoryServer
3170 NameNode
4786 ResourceManager
4886 NodeManager
3767 DataNode
3690 DFSZKFailoverController
2795 QuorumPeerMain
3036 JournalNode
13854 Jps

The processes running on EC131 (the standby NameNode) are as follows:

25700 JobHistoryServer
25446 NodeManager
24759 QuorumPeerMain
25256 DataNode
25193 DFSZKFailoverController
32090 Jps
24828 JournalNode
25102 NameNode

The processes running on EC132 are as follows:

23440 JobHistoryServer
26176 Jps
22849 QuorumPeerMain
23073 DataNode
23218 NodeManager
22910 JournalNode

And what is the expectation?

I want to start a simulated HDFS cluster. The startup command is executed on the EC130 node (the active NameNode), and I will use the audit trace replay capabilities of Dynamometer once the simulated HDFS cluster has started successfully.

xkrogen commented 5 years ago

Okay, I see. You will likely need a much larger cluster if you're planning to run any production-scale Dynamometer tests; we treat the cluster that executes Dynamometer as a fully-fledged Hadoop cluster (100+ nodes).

In any case, the cause of your issue is apparent in your ApplicationMaster logs:

2019-03-26 18:38:00,131 INFO dynamometer.ApplicationMaster: Got container status for NAMENODE: containerID=container_e63_1553589057471_0008_01_000002, state=COMPLETE, exitStatus=-103, diagnostics=Container [pid=10821,containerID=container_e63_1553589057471_0008_01_000002] is running beyond virtual memory limits. Current usage: 292.2 MB of 2 GB physical memory used; 5.2 GB of 4.2 GB virtual memory used. Killing container.
Dump of the process-tree for container_e63_1553589057471_0008_01_000002 :
    |- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS) SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE
    |- 10821 10819 10821 10821 (bash) 0 0 12161024 676 /bin/bash -c ./start-component.sh namenode hdfs://EC130:9003/user/root/.dynamometer/application_1553589057471_0008 1>/home/hxh/hadoop/hadoop-2.7.5/logs/userlogs/application_1553589057471_0008/container_e63_1553589057471_0008_01_000002/stdout 2>/home/hxh/hadoop/hadoop-2.7.5/logs/userlogs/application_1553589057471_0008/container_e63_1553589057471_0008_01_000002/stderr 
    |- 10942 1 10821 10821 (java) 443 14 2773336064 43370 /home/hadoop/jdk1.8.0_111/bin/java -Dproc_namenode -Xmx1000m -Djava.library.path=/home...

It would appear that the NameNode container needs more memory. Exit status -103 is YARN's KILLED_EXCEEDED_VMEM: with a 2 GB physical allocation and the default yarn.nodemanager.vmem-pmem-ratio of 2.1, the container's virtual memory cap is 2 GB x 2.1 = 4.2 GB, and the NameNode process tree was using 5.2 GB when it was killed.
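
If you cannot give the container more memory, another option is to relax YARN's virtual-memory enforcement on the NodeManagers. Here is a minimal sketch using the standard yarn-site.xml properties (the values shown are illustrative assumptions, not tuned recommendations):

<!-- yarn-site.xml on each NodeManager; restart the NodeManagers after changing. -->
<property>
  <!-- Default is 2.1; raising it allows a container to use more virtual
       memory per unit of physical memory before being killed. -->
  <name>yarn.nodemanager.vmem-pmem-ratio</name>
  <value>5</value>
</property>
<property>
  <!-- Or disable the virtual-memory check entirely; physical-memory
       enforcement still applies. -->
  <name>yarn.nodemanager.vmem-check-enabled</name>
  <value>false</value>
</property>

Alternatively, request a larger NameNode container when launching Dynamometer; the memory-related options vary by version, so check the usage output of ./start-dynamometer-cluster.sh.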

hexiangheng commented 5 years ago

Hi @xkrogen, this problem was solved perfectly. Thank you very much.

xkrogen commented 5 years ago

Fantastic, I'm glad we were able to figure everything out :)