sequenceiq / cloudbreak-shell

CLI shell for the Cloudbreak project
https://cloudbreak.sequenceiq.com/

unable to create cluster using script #102

Closed desaiak closed 9 years ago

desaiak commented 9 years ago

I am running the following script and it completes fine, but I am not sure what to do after this; cluster create is not working.

credential select --id 1035420
blueprint select --id 3217
instancegroup configure --instanceGroup host_group_master_1 --nodecount 1 --templateId 4863
instancegroup configure --instanceGroup host_group_master_2 --nodecount 1 --templateId 4863
instancegroup configure --instanceGroup host_group_master_3 --nodecount 1 --templateId 4863
instancegroup configure --instanceGroup host_group_client_1 --nodecount 1 --templateId 4863
instancegroup configure --instanceGroup host_group_slave_1 --nodecount 1 --templateId 4863
instancegroup configure --instanceGroup cbgateway --nodecount 1 --templateId 4863
hostgroup configure --hostgroup host_group_master_1 --recipeNames gcs-recipe
hostgroup configure --hostgroup host_group_master_2 --recipeNames gcs-recipe
hostgroup configure --hostgroup host_group_master_3 --recipeNames gcs-recipe
hostgroup configure --hostgroup host_group_client_1 --recipeNames gcs-recipe
hostgroup configure --hostgroup host_group_slave_1 --recipeNames gcs-recipe
hostgroup configure --hostgroup cbgateway --recipeNames gcs-recipe
network select --id 1950
security group select --id 3809
stack create --name gcs79 --region US_CENTRAL1_A
stack start --name gcs79
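I run this command file non-interactively by passing it to the shell jar, roughly like this (credentials masked):

java -jar /tmp/cloudbreak-shell.jar --sequenceiq.user=*** --sequenceiq.password=*** --cmdfile=script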

cloudbreak-shell>stack show --name gcs79

FIELD               VALUE

owner               cf3f6d99-b1fe-4a44-944c-b0f9b0014626
cluster             [ambariServerIp:null, ambariStackDetails:null, blueprintId:null, cluster:null, description:null, hostGroups:null, hoursUp:0, id:null, minutesUp:0, password:null, secure:false, serviceEndPoints:[:], status:null, statusReason:null, userName:null]
image               sequenceiqimage/cb-centos71-amb210-2015-08-12-b1470.tar.gz
cloudPlatform       GCP
securityGroupId     3809
instanceGroups      [[group:host_group_master_1, id:90673, metadata:[[ambariServer:false, containerCount:0, discoveryFQDN:hostgroupmaster1-0-gcs79-20150902024649.node.dc1.consul, dockerSubnet:null, instanceGroup:host_group_master_1, instanceId:hostgroupmaster1-0-gcs79-20150902024649, instanceStatus:UNREGISTERED, privateIp:10.0.30.232, publicIp:130.211.134.168, volumeCount:1]], nodeCount:1, templateId:4863, type:CORE],
                    [group:host_group_master_2, id:90674, metadata:[[ambariServer:false, containerCount:0, discoveryFQDN:hostgroupmaster2-4-gcs79-20150902024649.node.dc1.consul, dockerSubnet:null, instanceGroup:host_group_master_2, instanceId:hostgroupmaster2-4-gcs79-20150902024649, instanceStatus:UNREGISTERED, privateIp:10.0.13.245, publicIp:104.197.87.88, volumeCount:1]], nodeCount:1, templateId:4863, type:CORE],
                    [group:host_group_client_1, id:90676, metadata:[[ambariServer:false, containerCount:0, discoveryFQDN:hostgroupclient1-2-gcs79-20150902024649.node.dc1.consul, dockerSubnet:null, instanceGroup:host_group_client_1, instanceId:hostgroupclient1-2-gcs79-20150902024649, instanceStatus:UNREGISTERED, privateIp:10.0.191.49, publicIp:130.211.165.241, volumeCount:1]], nodeCount:1, templateId:4863, type:CORE],
                    [group:host_group_master_3, id:90675, metadata:[[ambariServer:false, containerCount:0, discoveryFQDN:hostgroupmaster3-5-gcs79-20150902024649.node.dc1.consul, dockerSubnet:null, instanceGroup:host_group_master_3, instanceId:hostgroupmaster3-5-gcs79-20150902024649, instanceStatus:UNREGISTERED, privateIp:10.0.22.40, publicIp:130.211.144.100, volumeCount:1]], nodeCount:1, templateId:4863, type:CORE],
                    [group:cbgateway, id:90672, metadata:[[ambariServer:true, containerCount:0, discoveryFQDN:cbgateway-1-gcs79-20150902024649.node.dc1.consul, dockerSubnet:null, instanceGroup:cbgateway, instanceId:cbgateway-1-gcs79-20150902024649, instanceStatus:REGISTERED, privateIp:10.0.142.12, publicIp:104.197.53.158, volumeCount:1]], nodeCount:1, templateId:4863, type:GATEWAY],
                    [group:host_group_slave_1, id:90677, metadata:[[ambariServer:false, containerCount:0, discoveryFQDN:hostgroupslave1-3-gcs79-20150902024649.node.dc1.consul, dockerSubnet:null, instanceGroup:host_group_slave_1, instanceId:hostgroupslave1-3-gcs79-20150902024649, instanceStatus:UNREGISTERED, privateIp:10.0.214.197, publicIp:104.197.106.16, volumeCount:1]], nodeCount:1, templateId:4863, type:CORE]]
onFailureAction     ROLLBACK
statusReason        Synced instance states with the cloud provider.
public              false
consulServerCount   3
name                gcs79
credentialId        1035420
networkId           1950
id                  3210
region              US_CENTRAL1_A
failurePolicy       [adjustmentType:BEST_EFFORT, id:4910, threshold:1]
parameters          [:]
account             cf3f6d99-b1fe-4a44-944c-b0f9b0014626
status              AVAILABLE

mhmxs commented 9 years ago

The next step is cluster create. What error message do you get from that command?

After I start an empty (i.e. filled only with default data) Cloudbreak, I usually use this command file:

credential select --name rkovacs
blueprint select --name hdp-small-default
network select --name default-gcp-network
security group select --name all-services-port
instancegroup configure --instanceGroup cbgateway --nodecount 1 --templateName minviable-gcp
instancegroup configure --instanceGroup host_group_client_1 --nodecount 1 --templateName minviable-gcp
instancegroup configure --instanceGroup host_group_master_1 --nodecount 1 --templateName minviable-gcp
instancegroup configure --instanceGroup host_group_master_2 --nodecount 1 --templateName minviable-gcp
instancegroup configure --instanceGroup host_group_master_3 --nodecount 1 --templateName minviable-gcp
instancegroup configure --instanceGroup host_group_slave_1 --nodecount 1 --templateName minviable-gcp
hostgroup configure --hostgroup host_group_client_1
stack create --name rkovacs-bigger --region US_CENTRAL1_A
cluster create

The only prerequisite here is the credential named rkovacs.

Hint: stack start does not have a --name parameter; it starts the previously created or selected stack.
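In other words, based on the command file above, the end of your script would presumably become (names taken from your script):

stack create --name gcs79 --region US_CENTRAL1_A
cluster create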

mhmxs commented 9 years ago

I modified my command file and deleted cluster create. I ran the command file, then I opened an interactive shell and typed the following:

stack select --name rkovacs-bigger
stack start

and then I got this error message: groovyx.net.http.HttpResponseException: Bad Request

In the Cloudbreak log I saw: com.sequenceiq.cloudbreak.controller.BadRequestException: Cannot update the status of stack '102' to STARTED, because it isn't in STOPPED state.

I have to talk with others, but as I see it now, stack create starts the stack automatically, so an explicit start call is not necessary. Maybe the documentation is not clear on this point.
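So, as I understand it now, once the command file has run the stack is already AVAILABLE, and in a fresh interactive shell the only remaining steps are:

stack select --name rkovacs-bigger
cluster create

stack start is only meaningful for a stack that is already in STOPPED state.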

I hope this helps.

desaiak commented 9 years ago

I retried using your sample script and it failed at cluster create.

The log says:

11781 [main] WARN o.s.shell.core.SimpleParser - Command 'cluster create' was found but is not currently available (type 'help' then ENTER to learn about this command)

WWDL71030:cloudbreak-shell B85726$ java -jar /tmp/cloudbreak-shell.jar --sequenceiq.user=*** --sequenceiq.password=*** --cmdfile=script
credential select --id 1035420: [SUCCESS]
blueprint select --id 3217: [SUCCESS]
network select --id 1950: [SUCCESS]
security group select --id 3809: [SUCCESS]
instancegroup configure --instanceGroup host_group_master_1 --nodecount 1 --templateId 4863: [SUCCESS]
instancegroup configure --instanceGroup host_group_master_2 --nodecount 1 --templateId 4863: [SUCCESS]
instancegroup configure --instanceGroup host_group_master_3 --nodecount 1 --templateId 4863: [SUCCESS]
instancegroup configure --instanceGroup host_group_client_1 --nodecount 1 --templateId 4863: [SUCCESS]
instancegroup configure --instanceGroup host_group_slave_1 --nodecount 1 --templateId 4863: [SUCCESS]
instancegroup configure --instanceGroup cbgateway --nodecount 1 --templateId 4863: [SUCCESS]
hostgroup configure --hostgroup host_group_master_1 --recipeNames gcs-recipe: [SUCCESS]
hostgroup configure --hostgroup host_group_master_2 --recipeNames gcs-recipe: [SUCCESS]
hostgroup configure --hostgroup host_group_master_3 --recipeNames gcs-recipe: [SUCCESS]
hostgroup configure --hostgroup host_group_client_1 --recipeNames gcs-recipe: [SUCCESS]
hostgroup configure --hostgroup host_group_slave_1 --recipeNames gcs-recipe: [SUCCESS]
hostgroup configure --hostgroup cbgateway --recipeNames gcs-recipe: [SUCCESS]
stack create --name gcs79 --region US_CENTRAL1_A: [SUCCESS]
cluster create: [FAILED]

However, the stack is available and running, so I tried to select the stack and run cluster create, but that fails too.

cloudbreak-shell>stack show --id 3211

FIELD               VALUE

owner               cf3f6d99-b1fe-4a44-944c-b0f9b0014626
cluster             [ambariServerIp:null, ambariStackDetails:null, blueprintId:null, cluster:null, description:null, hostGroups:null, hoursUp:0, id:null, minutesUp:0, password:null, secure:false, serviceEndPoints:[:], status:null, statusReason:null, userName:null]
image               sequenceiqimage/cb-centos71-amb210-2015-08-12-b1470.tar.gz
cloudPlatform       GCP
securityGroupId     3809
instanceGroups      [[group:host_group_slave_1, id:90722, metadata:[[ambariServer:false, containerCount:0, discoveryFQDN:hostgroupslave1-5-gcs79-20150902101703.node.dc1.consul, dockerSubnet:null, instanceGroup:host_group_slave_1, instanceId:hostgroupslave1-5-gcs79-20150902101703, instanceStatus:UNREGISTERED, privateIp:10.0.34.50, publicIp:23.251.146.198, volumeCount:1]], nodeCount:1, templateId:4863, type:CORE],
                    [group:host_group_master_3, id:90720, metadata:[[ambariServer:false, containerCount:0, discoveryFQDN:hostgroupmaster3-0-gcs79-20150902101703.node.dc1.consul, dockerSubnet:null, instanceGroup:host_group_master_3, instanceId:hostgroupmaster3-0-gcs79-20150902101703, instanceStatus:UNREGISTERED, privateIp:10.0.187.50, publicIp:130.211.162.210, volumeCount:1]], nodeCount:1, templateId:4863, type:CORE],
                    [group:host_group_master_2, id:90719, metadata:[[ambariServer:false, containerCount:0, discoveryFQDN:hostgroupmaster2-2-gcs79-20150902101703.node.dc1.consul, dockerSubnet:null, instanceGroup:host_group_master_2, instanceId:hostgroupmaster2-2-gcs79-20150902101703, instanceStatus:UNREGISTERED, privateIp:10.0.153.8, publicIp:23.251.151.148, volumeCount:1]], nodeCount:1, templateId:4863, type:CORE],
                    [group:host_group_master_1, id:90718, metadata:[[ambariServer:false, containerCount:0, discoveryFQDN:hostgroupmaster1-1-gcs79-20150902101703.node.dc1.consul, dockerSubnet:null, instanceGroup:host_group_master_1, instanceId:hostgroupmaster1-1-gcs79-20150902101703, instanceStatus:UNREGISTERED, privateIp:10.0.182.82, publicIp:104.154.36.234, volumeCount:1]], nodeCount:1, templateId:4863, type:CORE],
                    [group:cbgateway, id:90717, metadata:[[ambariServer:true, containerCount:0, discoveryFQDN:cbgateway-3-gcs79-20150902101703.node.dc1.consul, dockerSubnet:null, instanceGroup:cbgateway, instanceId:cbgateway-3-gcs79-20150902101703, instanceStatus:REGISTERED, privateIp:10.0.170.63, publicIp:130.211.142.151, volumeCount:1]], nodeCount:1, templateId:4863, type:GATEWAY],
                    [group:host_group_client_1, id:90721, metadata:[[ambariServer:false, containerCount:0, discoveryFQDN:hostgroupclient1-4-gcs79-20150902101703.node.dc1.consul, dockerSubnet:null, instanceGroup:host_group_client_1, instanceId:hostgroupclient1-4-gcs79-20150902101703, instanceStatus:UNREGISTERED, privateIp:10.0.21.74, publicIp:104.154.87.10, volumeCount:1]], nodeCount:1, templateId:4863, type:CORE]]
onFailureAction     ROLLBACK
statusReason
public              false
consulServerCount   3
name                gcs79
credentialId        1035420
networkId           1950
id                  3211
region              US_CENTRAL1_A
failurePolicy       [adjustmentType:BEST_EFFORT, id:4911, threshold:1]
parameters          [:]
account             cf3f6d99-b1fe-4a44-944c-b0f9b0014626
status              AVAILABLE

cloudbreak-shell>cluster create
Command 'cluster create' was found but is not currently available (type 'help' then ENTER to learn about this command)
cloudbreak-shell>stack select --id 3211
Stack selected, id: 3211
cloudbreak-shell>cluster create
Command 'cluster create' was found but is not currently available (type 'help' then ENTER to learn about this command)

The question is: how do I start a stack which is available? (I confirmed via the GCP console that all VMs have been created.)
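For background on the "found but is not currently available" message: cloudbreak-shell is built on Spring Shell 1.x (the o.s.shell.core.SimpleParser log line above), where a command can be registered yet gated by an availability indicator that only returns true once the shell's prerequisites are in place. A minimal, hypothetical sketch of that mechanism follows; this is not the actual cloudbreak-shell source, and the context flags are made up for illustration.

import org.springframework.shell.core.CommandMarker;
import org.springframework.shell.core.annotation.CliAvailabilityIndicator;
import org.springframework.shell.core.annotation.CliCommand;
import org.springframework.stereotype.Component;

// Hypothetical sketch only; not the real cloudbreak-shell implementation.
@Component
public class ClusterCommands implements CommandMarker {

    private boolean blueprintSelected;        // assumed: set by 'blueprint select'
    private boolean stackCreatedOrSelected;   // assumed: set by 'stack create' / 'stack select'
    private boolean hostGroupsConfigured;     // assumed: set by 'hostgroup configure'

    @CliAvailabilityIndicator("cluster create")
    public boolean isClusterCreateAvailable() {
        // the command stays "not currently available" until every prerequisite flag is set
        return blueprintSelected && stackCreatedOrSelected && hostGroupsConfigured;
    }

    @CliCommand(value = "cluster create", help = "Create a cluster on the selected stack")
    public String createCluster() {
        return "Cluster creation started";
    }
}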

mhmxs commented 9 years ago

Do my commands, without any change (except the credential), create a cluster for you as well? Could you please share your blueprint with us? Is there any error in the Cloudbreak log?

desaiak commented 9 years ago

Your commands, with the credential changed, work fine.

Here is the blueprint: only change I have made to the default spark blueprint is core-site for GCS { "configurations": [ { "core-site": { "fs.gs.impl": "com.google.cloud.hadoop.fs.gcs.GoogleHadoopFileSystem", "fs.AbstractFileSystem.gs.impl": "com.google.cloud.hadoop.fs.gcs.GoogleHadoopFS", "google.cloud.auth.service.account.enable": true, "google.cloud.auth.service.account.keyfile": "/usr/lib/hadoop/lib/gcp.p12", "fs.gs.project.id": "keen-scion-656", "google.cloud.auth.service.account.email": "1095715326613-o2ufh3rit5i189b9tbn1qgkqajot2420@developer.gserviceaccount.com" } }, { "ams-env": { "properties": { "ambari_metrics_user": "ams", "content": "\n# Set environment variables here.\n\n# The java implementation to use. Java 1.6 required.\nexport JAVA_HOME={{java64_home}}\n\n# Collector Log directory for log4j\nexport AMS_COLLECTOR_LOG_DIR={{ams_collector_log_dir}}\n\n# Monitor Log directory for outfile\nexport AMS_MONITOR_LOG_DIR={{ams_monitor_log_dir}}\n\n# Collector pid directory\nexport AMS_COLLECTOR_PID_DIR={{ams_collector_pid_dir}}\n\n# Monitor pid directory\nexport AMS_MONITOR_PID_DIR={{ams_monitor_pid_dir}}\n\n# AMS HBase pid directory\nexport AMS_HBASE_PID_DIR={{hbase_pid_dir}}\n\n# AMS Collector heapsize\nexport AMS_COLLECTOR_HEAPSIZE={{metrics_collector_heapsize}}\n\n# AMS Collector options\nexport AMS_COLLECTOR_OPTS=\"-Djava.library.path=/usr/lib/ams-hbase/lib/hadoop-native -Xmx$AMS_COLLECTOR_HEAPSIZE \"\n{% if security_enabled %}\nexport AMS_COLLECTOR_OPTS=\"$AMS_COLLECTOR_OPTS -Djava.security.auth.login.config={{ams_collector_jaas_config_file}}\"\n{% endif %}", "metrics_collector_heapsize": "512m", "metrics_collector_log_dir": "/var/log/ambari-metrics-collector", "metrics_collector_pid_dir": "/var/run/ambari-metrics-collector", "metrics_monitor_log_dir": "/var/log/ambari-metrics-monitor", "metrics_monitor_pid_dir": "/var/run/ambari-metrics-monitor" } } }, { "ams-hbase-env": { "properties": { "content": "\n# Set environment variables here.\n\n# The java implementation to use. Java 1.6 required.\nexport JAVA_HOME={{java64_home}}\n\n# HBase Configuration directory\nexport HBASE_CONF_DIR=${HBASE_CONF_DIR:-{{hbase_conf_dir}}}\n\n# Extra Java CLASSPATH elements. Optional.\nexport HBASE_CLASSPATH=${HBASE_CLASSPATH}\n\n# The maximum amount of heap to use, in MB. Default is 1000. Master heap size.\nexport HBASE_HEAPSIZE={{hbase_heapsize}}\n\n# Extra Java runtime options.\n# Below are what we set by default. 
May only work with SUN JVM.\n# For more on why as well as other possible settings,\n# see http://wiki.apache.org/hadoop/PerformanceTuning\nexport HBASE_OPTS=\"-XX:+UseConcMarkSweepGC -XX:ErrorFile={{hbase_log_dir}}/hs_err_pid%p.log -Djava.io.tmpdir={{hbase_tmp_dir}}\"\nexport SERVER_GC_OPTS=\"-verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps -Xloggc:{{hbase_log_dir}}/gc.log-date +'%Y%m%d%H%M'\"\n# Uncomment below to enable java garbage collection logging.\n# export HBASE_OPTS=\"$HBASE_OPTS -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps -Xloggc:$HBASE_HOME/logs/gc-hbase.log\"\n\n# Uncomment and adjust to enable JMX exporting\n# See jmxremote.password and jmxremote.access in $JRE_HOME/lib/management to configure remote password access.\n# More details at: http://java.sun.com/javase/6/docs/technotes/guides/management/agent.html\n#\n# export HBASE_JMX_BASE=\"-Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.authenticate=false\"\nexport HBASE_MASTER_OPTS=\" -XX:PermSize=64m -XX:MaxPermSize={{hbase_master_maxperm_size}} -Xms{{hbase_heapsize}} -Xmx{{hbase_heapsize}} -Xmn{{hbase_master_xmn_size}} -XX:CMSInitiatingOccupancyFraction=70 -XX:+UseCMSInitiatingOccupancyOnly\"\nexport HBASE_REGIONSERVER_OPTS=\"-XX:MaxPermSize=128m -Xmn{{regionserver_xmn_size}} -XX:CMSInitiatingOccupancyFraction=70 -XX:+UseCMSInitiatingOccupancyOnly -Xms{{regionserver_heapsize}} -Xmx{{regionserver_heapsize}}\"\n# export HBASE_THRIFT_OPTS=\"$HBASE_JMX_BASE -Dcom.sun.management.jmxremote.port=10103\"\n# export HBASE_ZOOKEEPER_OPTS=\"$HBASE_JMX_BASE -Dcom.sun.management.jmxremote.port=10104\"\n\n# File naming hosts on which HRegionServers will run. $HBASE_HOME/conf/regionservers by default.\nexport HBASE_REGIONSERVERS=${HBASE_CONF_DIR}/regionservers\n\n# Extra ssh options. Empty by default.\n# export HBASE_SSH_OPTS=\"-o ConnectTimeout=1 -o SendEnv=HBASE_CONF_DIR\"\n\n# Where log files are stored. $HBASE_HOME/logs by default.\nexport HBASE_LOG_DIR={{hbase_log_dir}}\n\n# A string representing this instance of hbase. $USER by default.\n# export HBASE_IDENT_STRING=$USER\n\n# The scheduling priority for daemon processes. See 'man nice'.\n# export HBASE_NICENESS=10\n\n# The directory where pid files are stored. /tmp by default.\nexport HBASE_PID_DIR={{hbase_pid_dir}}\n\n# Seconds to sleep between slave commands. Unset by default. 
This\n# can be useful in large clusters, where, e.g., slave rsyncs can\n# otherwise arrive faster than the master can service them.\n# export HBASE_SLAVE_SLEEP=0.1\n\n# Tell HBase whether it should manage it's own instance of Zookeeper or not.\nexport HBASE_MANAGES_ZK=false\n\n{% if security_enabled %}\nexport HBASE_OPTS=\"$HBASE_OPTS -Djava.security.auth.login.config={{client_jaas_config_file}}\"\nexport HBASE_MASTER_OPTS=\"$HBASE_MASTER_OPTS -Djava.security.auth.login.config={{master_jaas_config_file}}\"\nexport HBASE_REGIONSERVER_OPTS=\"$HBASE_REGIONSERVER_OPTS -Djava.security.auth.login.config={{regionserver_jaas_config_file}}\"\nexport HBASE_ZOOKEEPER_OPTS=\"$HBASE_ZOOKEEPER_OPTS -Djava.security.auth.login.config={{ams_zookeeper_jaas_config_file}}\"\n{% endif %}\n\n# use embedded native libs\n_HADOOP_NATIVE_LIB=\"/usr/lib/ams-hbase/lib/hadoop-native/\"\nexport HBASE_OPTS=\"$HBASE_OPTS -Djava.library.path=${_HADOOP_NATIVE_LIB}\"\n\n# Unset HADOOP_HOME to avoid importing HADOOP installed cluster related configs like: /usr/hdp/2.2.0.0-2041/hadoop/conf/\nexport HADOOP_HOME={{ams_hbase_home_dir}}", "hbase_log_dir": "/var/log/ambari-metrics-collector", "hbase_master_heapsize": "1024m", "hbase_master_maxperm_size": "128m", "hbase_master_xmn_size": "128m", "hbase_pid_dir": "/var/run/ambari-metrics-collector/", "hbase_regionserver_heapsize": "1024m", "hbase_regionserver_xmn_max": "512m", "hbase_regionserver_xmn_ratio": "0.2", "regionserver_xmn_size": "256m" } } }, { "ams-hbase-log4j": { "properties": { "content": "\n# Licensed to the Apache Software Foundation (ASF) under one\n# or more contributor license agreements. See the NOTICE file\n# distributed with this work for additional information\n# regarding copyright ownership. The ASF licenses this file\n# to you under the Apache License, Version 2.0 (the\n# \"License\"); you may not use this file except in compliance\n# with the License. 
You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\n# Define some default values that can be overridden by system properties\nhbase.root.logger=INFO,console\nhbase.security.logger=INFO,console\nhbase.log.dir=.\nhbase.log.file=hbase.log\n\n# Define the root logger to the system property \"hbase.root.logger\".\nlog4j.rootLogger=${hbase.root.logger}\n\n# Logging Threshold\nlog4j.threshold=ALL\n\n#\n# Daily Rolling File Appender\n#\nlog4j.appender.DRFA=org.apache.log4j.DailyRollingFileAppender\nlog4j.appender.DRFA.File=${hbase.log.dir}/${hbase.log.file}\n\n# Rollver at midnight\nlog4j.appender.DRFA.DatePattern=.yyyy-MM-dd\n\n# 30-day backup\n#log4j.appender.DRFA.MaxBackupIndex=30\nlog4j.appender.DRFA.layout=org.apache.log4j.PatternLayout\n\n# Pattern format: Date LogLevel LoggerName LogMessage\nlog4j.appender.DRFA.layout.ConversionPattern=%d{ISO8601} %-5p [%t] %c{2}: %m%n\n\n# Rolling File Appender properties\nhbase.log.maxfilesize=256MB\nhbase.log.maxbackupindex=20\n\n# Rolling File Appender\nlog4j.appender.RFA=org.apache.log4j.RollingFileAppender\nlog4j.appender.RFA.File=${hbase.log.dir}/${hbase.log.file}\n\nlog4j.appender.RFA.MaxFileSize=${hbase.log.maxfilesize}\nlog4j.appender.RFA.MaxBackupIndex=${hbase.log.maxbackupindex}\n\nlog4j.appender.RFA.layout=org.apache.log4j.PatternLayout\nlog4j.appender.RFA.layout.ConversionPattern=%d{ISO8601} %-5p [%t] %c{2}: %m%n\n\n#\n# Security audit appender\n#\nhbase.security.log.file=SecurityAuth.audit\nhbase.security.log.maxfilesize=256MB\nhbase.security.log.maxbackupindex=20\nlog4j.appender.RFAS=org.apache.log4j.RollingFileAppender\nlog4j.appender.RFAS.File=${hbase.log.dir}/${hbase.security.log.file}\nlog4j.appender.RFAS.MaxFileSize=${hbase.security.log.maxfilesize}\nlog4j.appender.RFAS.MaxBackupIndex=${hbase.security.log.maxbackupindex}\nlog4j.appender.RFAS.layout=org.apache.log4j.PatternLayout\nlog4j.appender.RFAS.layout.ConversionPattern=%d{ISO8601} %p %c: %m%n\nlog4j.category.SecurityLogger=${hbase.security.logger}\nlog4j.additivity.SecurityLogger=false\n#log4j.logger.SecurityLogger.org.apache.hadoop.hbase.security.access.AccessController=TRACE\n\n#\n# Null Appender\n#\nlog4j.appender.NullAppender=org.apache.log4j.varia.NullAppender\n\n#\n# console\n# Add \"console\" to rootlogger above if you want to use this\n#\nlog4j.appender.console=org.apache.log4j.ConsoleAppender\nlog4j.appender.console.target=System.err\nlog4j.appender.console.layout=org.apache.log4j.PatternLayout\nlog4j.appender.console.layout.ConversionPattern=%d{ISO8601} %-5p [%t] %c{2}: %m%n\n\n# Custom Logging levels\n\nlog4j.logger.org.apache.zookeeper=INFO\n#log4j.logger.org.apache.hadoop.fs.FSNamesystem=DEBUG\nlog4j.logger.org.apache.hadoop.hbase=INFO\n# Make these two classes INFO-level. 
Make them DEBUG to see more zk debug.\nlog4j.logger.org.apache.hadoop.hbase.zookeeper.ZKUtil=INFO\nlog4j.logger.org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher=INFO\n#log4j.logger.org.apache.hadoop.dfs=DEBUG\n# Set this class to log INFO only otherwise its OTT\n# Enable this to get detailed connection error/retry logging.\n# log4j.logger.org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation=TRACE\n\n\n# Uncomment this line to enable tracing on every RPC call (this can be a lot of output)\n#log4j.logger.org.apache.hadoop.ipc.HBaseServer.trace=DEBUG\n\n# Uncomment the below if you want to remove logging of client region caching'\n# and scan of .META. messages\n# log4j.logger.org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation=INFO\n# log4j.logger.org.apache.hadoop.hbase.client.MetaScanner=INFO" } } }, { "ams-hbase-policy": { "properties": { "security.admin.protocol.acl": "", "security.client.protocol.acl": "", "security.masterregion.protocol.acl": "_" } } }, { "ams-hbase-security-site": { "properties": { "ams.zookeeper.keytab": "", "ams.zookeeper.principal": "", "hadoop.security.authentication": "", "hbase.coprocessor.master.classes": "", "hbase.coprocessor.region.classes": "", "hbase.master.kerberos.principal": "", "hbase.master.keytab.file": "", "hbase.myclient.keytab": "", "hbase.myclient.principal": "", "hbase.regionserver.kerberos.principal": "", "hbase.regionserver.keytab.file": "", "hbase.security.authentication": "", "hbase.security.authorization": "", "hbase.zookeeper.property.authProvider.1": "", "hbase.zookeeper.property.jaasLoginRenew": "", "hbase.zookeeper.property.kerberos.removeHostFromPrincipal": "", "hbase.zookeeper.property.kerberos.removeRealmFromPrincipal": "", "zookeeper.znode.parent": "" } } }, { "ams-hbase-site": { "properties": { "hbase.client.scanner.caching": "10000", "hbase.client.scanner.timeout.period": "900000", "hbase.cluster.distributed": "false", "hbase.hregion.majorcompaction": "0", "hbase.hregion.memstore.block.multiplier": "4", "hbase.hregion.memstore.flush.size": "134217728", "hbase.hstore.blockingStoreFiles": "200", "hbase.hstore.flusher.count": "2", "hbase.local.dir": "${hbase.tmp.dir}/local", "hbase.master.info.bindAddress": "0.0.0.0", "hbase.master.info.port": "61310", "hbase.master.port": "61300", "hbase.master.wait.on.regionservers.mintostart": "1", "hbase.regionserver.global.memstore.lowerLimit": "0.3", "hbase.regionserver.global.memstore.upperLimit": "0.35", "hbase.regionserver.info.port": "61330", "hbase.regionserver.port": "61320", "hbase.regionserver.thread.compaction.large": "2", "hbase.regionserver.thread.compaction.small": "3", "hbase.replication": "false", "hbase.rootdir": "file:///var/lib/ambari-metrics-collector/hbase", "hbase.snapshot.enabled": "false", "hbase.tmp.dir": "/var/lib/ambari-metrics-collector/hbase-tmp", "hbase.zookeeper.leaderport": "61388", "hbase.zookeeper.peerport": "61288", "hbase.zookeeper.property.clientPort": "61181", "hbase.zookeeper.property.dataDir": "${hbase.tmp.dir}/zookeeper", "hbase.zookeeper.quorum": "{{zookeeper_quorum_hosts}}", "hfile.block.cache.size": "0.3", "phoenix.groupby.maxCacheSize": "307200000", "phoenix.query.maxGlobalMemoryPercentage": "15", "phoenix.query.spoolThresholdBytes": "12582912", "phoenix.query.timeoutMs": "1200000", "phoenix.sequence.saltBuckets": "2", "phoenix.spool.directory": "${hbase.tmp.dir}/phoenix-spool", "zookeeper.session.timeout": "120000", "zookeeper.session.timeout.localHBaseCluster": "20000" } } }, { "ams-log4j": { 
"properties": { "content": "\n#\n# Licensed to the Apache Software Foundation (ASF) under one\n# or more contributor license agreements. See the NOTICE file\n# distributed with this work for additional information\n# regarding copyright ownership. The ASF licenses this file\n# to you under the Apache License, Version 2.0 (the\n# \"License\"); you may not use this file except in compliance\n# with the License. You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n#\n\n# Define some default values that can be overridden by system properties\nams.log.dir=.\nams.log.file=ambari-metrics-collector.log\n\n# Root logger option\nlog4j.rootLogger=INFO,file\n\n# Direct log messages to a log file\nlog4j.appender.file=org.apache.log4j.RollingFileAppender\nlog4j.appender.file.File=${ams.log.dir}/${ams.log.file}\nlog4j.appender.file.MaxFileSize=80MB\nlog4j.appender.file.MaxBackupIndex=60\nlog4j.appender.file.layout=org.apache.log4j.PatternLayout\nlog4j.appender.file.layout.ConversionPattern=%d{ABSOLUTE} %5p [%t] %c{1}:%L - %m%n" } } }, { "ams-site": { "properties": { "phoenix.query.maxGlobalMemoryPercentage": "25", "phoenix.spool.directory": "/tmp", "timeline.metrics.aggregator.checkpoint.dir": "/var/lib/ambari-metrics-collector/checkpoint", "timeline.metrics.cluster.aggregator.hourly.checkpointCutOffMultiplier": "2", "timeline.metrics.cluster.aggregator.hourly.disabled": "false", "timeline.metrics.cluster.aggregator.hourly.interval": "3600", "timeline.metrics.cluster.aggregator.hourly.ttl": "31536000", "timeline.metrics.cluster.aggregator.minute.checkpointCutOffMultiplier": "2", "timeline.metrics.cluster.aggregator.minute.disabled": "false", "timeline.metrics.cluster.aggregator.minute.interval": "120", "timeline.metrics.cluster.aggregator.minute.timeslice.interval": "15", "timeline.metrics.cluster.aggregator.minute.ttl": "2592000", "timeline.metrics.hbase.compression.scheme": "SNAPPY", "timeline.metrics.hbase.data.block.encoding": "FASTDIFF", "timeline.metrics.host.aggregator.hourly.checkpointCutOffMultiplier": "2", "timeline.metrics.host.aggregator.hourly.disabled": "false", "timeline.metrics.host.aggregator.hourly.interval": "3600", "timeline.metrics.host.aggregator.hourly.ttl": "2592000", "timeline.metrics.host.aggregator.minute.checkpointCutOffMultiplier": "2", "timeline.metrics.host.aggregator.minute.disabled": "false", "timeline.metrics.host.aggregator.minute.interval": "120", "timeline.metrics.host.aggregator.minute.ttl": "604800", "timeline.metrics.host.aggregator.ttl": "86400", "timeline.metrics.service.checkpointDelay": "60", "timeline.metrics.service.default.result.limit": "5760", "timeline.metrics.service.operation.mode": "embedded", "timeline.metrics.service.resultset.fetchSize": "2000", "timeline.metrics.service.rpc.address": "0.0.0.0:60200", "timeline.metrics.service.webapp.address": "0.0.0.0:6188" } } }, { "capacity-scheduler": { "properties": { "yarn.scheduler.capacity.default.minimum-user-limit-percent": "100", "yarn.scheduler.capacity.maximum-am-resource-percent": "0.2", "yarn.scheduler.capacity.maximum-applications": "10000", "yarn.scheduler.capacity.node-locality-delay": "40", "yarn.scheduler.capacity.resource-calculator": 
"org.apache.hadoop.yarn.util.resource.DefaultResourceCalculator", "yarn.scheduler.capacity.root.accessible-node-labels": "", "yarn.scheduler.capacity.root.acl_administerqueue": "", "yarn.scheduler.capacity.root.capacity": "100", "yarn.scheduler.capacity.root.default.acl_administerjobs": "", "yarn.scheduler.capacity.root.default.acl_submitapplications": "", "yarn.scheduler.capacity.root.default.capacity": "100", "yarn.scheduler.capacity.root.default.maximum-capacity": "100", "yarn.scheduler.capacity.root.default.state": "RUNNING", "yarn.scheduler.capacity.root.default.user-limit-factor": "1", "yarn.scheduler.capacity.root.queues": "default" } } }, { "cluster-env": { "properties": { "hadoop-streaming_tar_destination_folder": "hdfs:///hdp/apps/{{ hdp_stack_version }}/mapreduce/", "hadoop-streaming_tar_source": "/usr/hdp/current/hadoop-mapreduce-client/hadoop-streaming.jar", "hive_tar_destination_folder": "hdfs:///hdp/apps/{{ hdp_stack_version }}/hive/", "hive_tar_source": "/usr/hdp/current/hive-client/hive.tar.gz", "ignore_groupsusers_create": "false", "kerberos_domain": "EXAMPLE.COM", "mapreduce_tar_destination_folder": "hdfs:///hdp/apps/{{ hdp_stack_version }}/mapreduce/", "mapreduce_tar_source": "/usr/hdp/current/hadoop-client/mapreduce.tar.gz", "pig_tar_destination_folder": "hdfs:///hdp/apps/{{ hdp_stack_version }}/pig/", "pig_tar_source": "/usr/hdp/current/pig-client/pig.tar.gz", "security_enabled": "false", "smokeuser": "ambari-qa", "smokeuser_keytab": "/etc/security/keytabs/smokeuser.headless.keytab", "sqoop_tar_destination_folder": "hdfs:///hdp/apps/{{ hdp_stack_version }}/sqoop/", "sqoop_tar_source": "/usr/hdp/current/sqoop-client/sqoop.tar.gz", "tez_tar_destination_folder": "hdfs:///hdp/apps/{{ hdp_stack_version }}/tez/", "tez_tar_source": "/usr/hdp/current/tez-client/lib/tez.tar.gz", "user_group": "hadoop" } } }, { "core-site": { "properties_attributes": { "final": { "fs.defaultFS": "true" } }, "properties": { "fs.defaultFS": "hdfs://%HOSTGROUP::host_group_master1%:8020", "fs.trash.interval": "360", "ha.failover-controller.active-standby-elector.zk.op.retries": "120", "hadoop.http.authentication.simple.anonymous.allowed": "true", "hadoop.proxyuser.falcon.groups": "users", "hadoop.proxyuser.falcon.hosts": "", "hadoop.proxyuser.hcat.groups": "users", "hadoop.proxyuser.hcat.hosts": "%HOSTGROUP::host_group_master_2%", "hadoop.proxyuser.hive.groups": "users", "hadoop.proxyuser.hive.hosts": "%HOSTGROUP::host_group_master2%", "hadoop.proxyuser.oozie.groups": "", "hadoop.proxyuser.oozie.hosts": "%HOSTGROUP::host_group_master_1%", "hadoop.security.auth_to_local": "\n DEFAULT", "hadoop.security.authentication": "simple", "hadoop.security.authorization": "false", "io.compression.codecs": "org.apache.hadoop.io.compress.GzipCodec,org.apache.hadoop.io.compress.DefaultCodec,org.apache.hadoop.io.compress.SnappyCodec", "io.file.buffer.size": "131072", "io.serializations": "org.apache.hadoop.io.serializer.WritableSerialization", "ipc.client.connect.max.retries": "50", "ipc.client.connection.maxidletime": "30000", "ipc.client.idlethreshold": "8000", "ipc.server.tcpnodelay": "true", "mapreduce.jobtracker.webinterface.trusted": "false", "proxyuser_group": "users" } } }, { "falcon-env": { "properties": { "content": "\n# The java implementation to use. If JAVA_HOME is not found we expect java and jar to be in path\nexport JAVA_HOME={{java_home}}\n\n# any additional java opts you want to set. 
This will apply to both client and server operations\n#export FALCON_OPTS=\n\n# any additional java opts that you want to set for client only\n#export FALCON_CLIENT_OPTS=\n\n# java heap size we want to set for the client. Default is 1024MB\n#export FALCON_CLIENT_HEAP=\n\n# any additional opts you want to set for prisim service.\n#export FALCON_PRISM_OPTS=\n\n# java heap size we want to set for the prisim service. Default is 1024MB\n#export FALCON_PRISM_HEAP=\n\n# any additional opts you want to set for falcon service.\nexport FALCON_SERVER_OPTS=\"-Dfalcon.embeddedmq={{falcon_embeddedmq_enabled}} -Dfalcon.emeddedmq.port={{falcon_emeddedmq_port}}\"\n\n# java heap size we want to set for the falcon server. Default is 1024MB\n#export FALCON_SERVER_HEAP=\n\n# What is is considered as falcon home dir. Default is the base location of the installed software\n#export FALCON_HOME_DIR=\n\n# Where log files are stored. Defatult is logs directory under the base install location\nexport FALCON_LOG_DIR={{falcon_log_dir}}\n\n# Where pid files are stored. Defatult is logs directory under the base install location\nexport FALCON_PID_DIR={{falcon_pid_dir}}\n\n# where the falcon active mq data is stored. Defatult is logs/data directory under the base install location\nexport FALCON_DATA_DIR={{falcon_embeddedmq_data}}\n\n# Where do you want to expand the war file. By Default it is in /server/webapp dir under the base install dir.\n#export FALCON_EXPANDED_WEBAPP_DIR=", "falcon.embeddedmq": "true", "falcon.embeddedmq.data": "/hadoop/falcon/embeddedmq/data", "falcon.emeddedmq.port": "61616", "falcon_local_dir": "/hadoop/falcon", "falcon_log_dir": "/var/log/falcon", "falcon_pid_dir": "/var/run/falcon", "falcon_port": "15000", "falcon_store_uri": "file:///hadoop/falcon/store", "falconuser": "falcon" } } }, { "falcon-runtime.properties": { "properties": { ".domain": "${falcon.app.type}", ".log.cleanup.frequency.days.retention": "days(7)", ".log.cleanup.frequency.hours.retention": "minutes(1)", ".log.cleanup.frequency.minutes.retention": "hours(6)", ".log.cleanup.frequency.months.retention": "months(3)" } } }, { "falcon-startup.properties": { "properties": { ".ConfigSyncService.impl": "org.apache.falcon.resource.ConfigSyncService", ".ProcessInstanceManager.impl": "org.apache.falcon.resource.InstanceManager", ".SchedulableEntityManager.impl": "org.apache.falcon.resource.SchedulableEntityManager", ".application.services": "org.apache.falcon.security.AuthenticationInitializationService,\n org.apache.falcon.workflow.WorkflowJobEndNotificationService, \n org.apache.falcon.service.ProcessSubscriberService,\n org.apache.falcon.entity.store.ConfigurationStore,\n org.apache.falcon.rerun.service.RetryService,\n org.apache.falcon.rerun.service.LateRunService,\n org.apache.falcon.service.LogCleanupService,\n org.apache.falcon.metadata.MetadataMappingService", ".broker.impl.class": "org.apache.activemq.ActiveMQConnectionFactory", ".broker.ttlInMins": "4320", "_.broker.url": "tcp://%HOSTGROUP::host_group_master1%:61616", ".catalog.service.impl": "org.apache.falcon.catalog.HiveCatalogService", ".config.store.uri": "file:///hadoop/falcon/store", ".configstore.listeners": "org.apache.falcon.entity.v0.EntityGraph,\n org.apache.falcon.entity.ColoClusterRelation,\n org.apache.falcon.group.FeedGroupMap,\n org.apache.falcon.service.SharedLibraryHostingService", "_.dfs.namenode.kerberos.principal": "nn/HOST@EXAMPLE.COM", ".domain": "${falcon.app.type}", ".entity.topic": "FALCON.ENTITY.TOPIC", ".falcon.authentication.type": "simple", 
".falcon.cleanup.service.frequency": "days(1)", ".falcon.enableTLS": "false", ".falcon.graph.blueprints.graph": "com.thinkaurelius.titan.core.TitanFactory", ".falcon.graph.preserve.history": "false", ".falcon.graph.serialize.path": "/mnt/hadoop/falcon/data/lineage", ".falcon.graph.storage.backend": "berkeleyje", ".falcon.graph.storage.directory": "/mnt/hadoop/falcon/data/lineage/graphdb", ".falcon.http.authentication.blacklisted.users": "", ".falcon.http.authentication.cookie.domain": "EXAMPLE.COM", ".falcon.http.authentication.kerberos.keytab": "/etc/security/keytabs/spnego.service.keytab", ".falcon.http.authentication.kerberos.name.rules": "DEFAULT", ".falcon.http.authentication.signature.secret": "falcon", ".falcon.http.authentication.simple.anonymous.allowed": "true", ".falcon.http.authentication.token.validity": "36000", ".falcon.http.authentication.type": "simple", ".falcon.security.authorization.admin.groups": "falcon", ".falcon.security.authorization.admin.users": "falcon,ambari-qa", ".falcon.security.authorization.enabled": "false", ".falcon.security.authorization.provider": "org.apache.falcon.security.DefaultAuthorizationProvider", ".falcon.security.authorization.superusergroup": "falcon", ".falcon.service.authentication.kerberos.keytab": "/etc/security/keytabs/falcon.service.keytab", ".internal.queue.size": "1000", ".journal.impl": "org.apache.falcon.transaction.SharedFileSystemJournal", ".max.retry.failure.count": "1", ".oozie.feed.workflow.builder": "org.apache.falcon.workflow.OozieFeedWorkflowBuilder", ".oozie.process.workflow.builder": "org.apache.falcon.workflow.OozieProcessWorkflowBuilder", ".retry.recorder.path": "${falcon.log.dir}/retry", ".shared.libs": "activemq-core,ant,geronimo-j2ee-management,hadoop-distcp,jms,json-simple,oozie-client,spring-jms", ".system.lib.location": "${falcon.home}/server/webapp/${falcon.app.type}/WEB-INF/lib", ".workflow.engine.impl": "org.apache.falcon.workflow.engine.OozieWorkflowEngine", "prism.application.services": "org.apache.falcon.entity.store.ConfigurationStore", "prism.configstore.listeners": "org.apache.falcon.entity.v0.EntityGraph,\n org.apache.falcon.entity.ColoClusterRelation,\n org.apache.falcon.group.FeedGroupMap" } } }, { "gateway-log4j": { "properties": { "content": "\n\n # Licensed to the Apache Software Foundation (ASF) under one\n # or more contributor license agreements. See the NOTICE file\n # distributed with this work for additional information\n # regarding copyright ownership. The ASF licenses this file\n # to you under the Apache License, Version 2.0 (the\n # \"License\"); you may not use this file except in compliance\n # with the License. 
You may obtain a copy of the License at\n #\n # http://www.apache.org/licenses/LICENSE-2.0\n #\n # Unless required by applicable law or agreed to in writing, software\n # distributed under the License is distributed on an \"AS IS\" BASIS,\n # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n # See the License for the specific language governing permissions and\n # limitations under the License.\n\n app.log.dir=${launcher.dir}/../logs\n app.log.file=${launcher.name}.log\n app.audit.file=${launcher.name}-audit.log\n\n log4j.rootLogger=ERROR, drfa\n\n log4j.logger.org.apache.hadoop.gateway=INFO\n #log4j.logger.org.apache.hadoop.gateway=DEBUG\n\n #log4j.logger.org.eclipse.jetty=DEBUG\n #log4j.logger.org.apache.shiro=DEBUG\n #log4j.logger.org.apache.http=DEBUG\n #log4j.logger.org.apache.http.client=DEBUG\n #log4j.logger.org.apache.http.headers=DEBUG\n #log4j.logger.org.apache.http.wire=DEBUG\n\n log4j.appender.stdout=org.apache.log4j.ConsoleAppender\n log4j.appender.stdout.layout=org.apache.log4j.PatternLayout\n log4j.appender.stdout.layout.ConversionPattern=%d{yy/MM/dd HH:mm:ss} %p %c{2}: %m%n\n\n log4j.appender.drfa=org.apache.log4j.DailyRollingFileAppender\n log4j.appender.drfa.File=${app.log.dir}/${app.log.file}\n log4j.appender.drfa.DatePattern=.yyyy-MM-dd\n log4j.appender.drfa.layout=org.apache.log4j.PatternLayout\n log4j.appender.drfa.layout.ConversionPattern=%d{ISO8601} %-5p %c{2} (%F:%M(%L)) - %m%n\n\n log4j.logger.audit=INFO, auditfile\n log4j.appender.auditfile=org.apache.log4j.DailyRollingFileAppender\n log4j.appender.auditfile.File=${app.log.dir}/${app.audit.file}\n log4j.appender.auditfile.Append = true\n log4j.appender.auditfile.DatePattern = '.'yyyy-MM-dd\n log4j.appender.auditfile.layout = org.apache.hadoop.gateway.audit.log4j.layout.AuditLayout" } } }, { "gateway-site": { "properties": { "gateway.gateway.conf.dir": "deployments", "gateway.hadoop.kerberos.secured": "false", "gateway.path": "gateway", "gateway.port": "8443", "java.security.auth.login.config": "/etc/knox/conf/krb5JAASLogin.conf", "java.security.krb5.conf": "/etc/knox/conf/krb5.conf", "sun.security.krb5.debug": "true" } } }, { "hadoop-env": { "properties": { "content": "\n# Set Hadoop-specific environment variables here.\n\n# The only required environment variable is JAVA_HOME. All others are\n# optional. When running a distributed configuration it is best to\n# set JAVA_HOME in this file, so that it is correctly defined on\n# remote nodes.\n\n# The java implementation to use. Required.\nexport JAVA_HOME={{java_home}}\nexport HADOOP_HOME_WARN_SUPPRESS=1\n\n# Hadoop home directory\nexport HADOOP_HOME=${HADOOP_HOME:-{{hadoop_home}}}\n\n# Hadoop Configuration Directory\n\n{# this is different for HDP1 #}\n# Path to jsvc required by secure HDP 2.0 datanode\nexport JSVC_HOME={{jsvc_path}}\n\n\n# The maximum amount of heap to use, in MB. Default is 1000.\nexport HADOOP_HEAPSIZE=\"{{hadoop_heapsize}}\"\n\nexport HADOOP_NAMENODE_INIT_HEAPSIZE=\"-Xms{{namenode_heapsize}}\"\n\n# Extra Java runtime options. 
Empty by default.\nexport HADOOP_OPTS=\"-Djava.net.preferIPv4Stack=true ${HADOOP_OPTS}\"\n\n# Command specific options appended to HADOOP_OPTS when specified\nexport HADOOP_NAMENODE_OPTS=\"-server -XX:ParallelGCThreads=8 -XX:+UseConcMarkSweepGC -XX:ErrorFile={{hdfs_log_dir_prefix}}/$USER/hs_err_pid%p.log -XX:NewSize={{namenode_opt_newsize}} -XX:MaxNewSize={{namenode_opt_maxnewsize}} -XX:PermSize={{namenode_opt_permsize}} -XX:MaxPermSize={{namenode_opt_maxpermsize}} -Xloggc:{{hdfs_log_dir_prefix}}/$USER/gc.log-date +'%Y%m%d%H%M' -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintGCDateStamps -Xms{{namenode_heapsize}} -Xmx{{namenode_heapsize}} -Dhadoop.security.logger=INFO,DRFAS -Dhdfs.audit.logger=INFO,DRFAAUDIT ${HADOOP_NAMENODE_OPTS}\"\nHADOOP_JOBTRACKER_OPTS=\"-server -XX:ParallelGCThreads=8 -XX:+UseConcMarkSweepGC -XX:ErrorFile={{hdfs_log_dir_prefix}}/$USER/hs_err_pid%p.log -XX:NewSize={{jtnode_opt_newsize}} -XX:MaxNewSize={{jtnode_opt_maxnewsize}} -Xloggc:{{hdfs_log_dir_prefix}}/$USER/gc.log-date +'%Y%m%d%H%M' -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintGCDateStamps -Xmx{{jtnode_heapsize}} -Dhadoop.security.logger=INFO,DRFAS -Dmapred.audit.logger=INFO,MRAUDIT -Dhadoop.mapreduce.jobsummary.logger=INFO,JSA ${HADOOP_JOBTRACKER_OPTS}\"\n\nHADOOP_TASKTRACKER_OPTS=\"-server -Xmx{{ttnode_heapsize}} -Dhadoop.security.logger=ERROR,console -Dmapred.audit.logger=ERROR,console ${HADOOP_TASKTRACKER_OPTS}\"\nexport HADOOP_DATANODE_OPTS=\"-server -XX:ParallelGCThreads=4 -XX:+UseConcMarkSweepGC -XX:ErrorFile=/var/log/hadoop/$USER/hs_err_pid%p.log -XX:NewSize=200m -XX:MaxNewSize=200m -XX:PermSize=128m -XX:MaxPermSize=256m -Xloggc:/var/log/hadoop/$USER/gc.log-date +'%Y%m%d%H%M' -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintGCDateStamps -Xms{{dtnode_heapsize}} -Xmx{{dtnode_heapsize}} -Dhadoop.security.logger=INFO,DRFAS -Dhdfs.audit.logger=INFO,DRFAAUDIT ${HADOOP_DATANODE_OPTS}\"\nHADOOP_BALANCER_OPTS=\"-server -Xmx{{hadoop_heapsize}}m ${HADOOP_BALANCER_OPTS}\"\n\nexport HADOOP_SECONDARYNAMENODE_OPTS=$HADOOP_NAMENODE_OPTS\n\n# The following applies to multiple commands (fs, dfs, fsck, distcp etc)\nexport HADOOP_CLIENT_OPTS=\"-Xmx${HADOOP_HEAPSIZE}m -XX:MaxPermSize=512m $HADOOP_CLIENT_OPTS\"\n\n# On secure datanodes, user to run the datanode as after dropping privileges\nexport HADOOP_SECURE_DN_USER=${HADOOP_SECURE_DN_USER:-{{hadoop_secure_dn_user}}}\n\n# Extra ssh options. Empty by default.\nexport HADOOP_SSH_OPTS=\"-o ConnectTimeout=5 -o SendEnv=HADOOP_CONF_DIR\"\n\n# Where log files are stored. $HADOOP_HOME/logs by default.\nexport HADOOP_LOG_DIR={{hdfs_log_dir_prefix}}/$USER\n\n# History server logs\nexport HADOOP_MAPRED_LOG_DIR={{mapred_log_dir_prefix}}/$USER\n\n# Where log files are stored in the secure data environment.\nexport HADOOP_SECURE_DN_LOG_DIR={{hdfs_log_dir_prefix}}/$HADOOP_SECURE_DN_USER\n\n# File naming remote slave hosts. $HADOOP_HOME/conf/slaves by default.\n# export HADOOP_SLAVES=${HADOOP_HOME}/conf/slaves\n\n# host:path where hadoop code should be rsync'd from. Unset by default.\n# export HADOOP_MASTER=master:/home/$USER/src/hadoop\n\n# Seconds to sleep between slave commands. Unset by default. This\n# can be useful in large clusters, where, e.g., slave rsyncs can\n# otherwise arrive faster than the master can service them.\n# export HADOOP_SLAVE_SLEEP=0.1\n\n# The directory where pid files are stored. 
/tmp by default.\nexport HADOOP_PID_DIR={{hadoop_pid_dir_prefix}}/$USER\nexport HADOOP_SECURE_DN_PID_DIR={{hadoop_pid_dir_prefix}}/$HADOOP_SECURE_DN_USER\n\n# History server pid\nexport HADOOP_MAPRED_PID_DIR={{mapred_pid_dir_prefix}}/$USER\n\nYARN_RESOURCEMANAGER_OPTS=\"-Dyarn.server.resourcemanager.appsummary.logger=INFO,RMSUMMARY\"\n\n# A string representing this instance of hadoop. $USER by default.\nexport HADOOP_IDENT_STRING=$USER\n\n# The scheduling priority for daemon processes. See 'man nice'.\n\n# export HADOOP_NICENESS=10\n\n# Use libraries from standard classpath\nJAVA_JDBC_LIBS=\"\"\n#Add libraries required by mysql connector\nfor jarFile in ls /usr/share/java/*mysql* 2>/dev/null\ndo\n JAVA_JDBC_LIBS=${JAVA_JDBC_LIBS}:$jarFile\ndone\n# Add libraries required by oracle connector\nfor jarFile in ls /usr/share/java/*ojdbc* 2>/dev/null\ndo\n JAVA_JDBC_LIBS=${JAVA_JDBC_LIBS}:$jarFile\ndone\n# Add libraries required by nodemanager\nMAPREDUCE_LIBS={{mapreduce_libs_path}}\nexport HADOOP_CLASSPATH=${HADOOP_CLASSPATH}${JAVA_JDBC_LIBS}:${MAPREDUCE_LIBS}\n\n# added to the HADOOP_CLASSPATH\nif [ -d \"/usr/hdp/current/tez-client\" ]; then\n if [ -d \"/etc/tez/conf/\" ]; then\n # When using versioned RPMs, the tez-client will be a symlink to the current folder of tez in HDP.\n export HADOOP_CLASSPATH=${HADOOPCLASSPATH}:/usr/hdp/current/tez-client/:/usr/hdp/current/tez-client/lib/_:/etc/tez/conf/\n fi\nfi\n\n\n# Setting path to hdfs command line\nexport HADOOP_LIBEXEC_DIR={{hadoop_libexec_dir}}\n\n# Mostly required for hadoop 2.0\nexport JAVA_LIBRARY_PATH=${JAVA_LIBRARY_PATH}\n\nexport HADOOP_OPTS=\"-Dhdp.version=$HDP_VERSION $HADOOP_OPTS\"", "dfs.datanode.data.dir.mount.file": "/etc/hadoop/conf/dfs_data_dir_mount.hist", "dtnode_heapsize": "1024m", "hadoop_heapsize": "1024", "hadoop_pid_dir_prefix": "/var/run/hadoop", "hadoop_root_logger": "INFO,RFA", "hdfs_log_dir_prefix": "/var/log/hadoop", "hdfs_user": "hdfs", "namenode_heapsize": "2048m", "namenode_opt_maxnewsize": "512m", "namenode_opt_maxpermsize": "256m", "namenode_opt_newsize": "512m", "namenode_opt_permsize": "128m", "proxyusergroup": "users" } } }, { "hadoop-policy": { "properties": { "security.admin.operations.protocol.acl": "hadoop", "security.client.datanode.protocol.acl": "", "security.client.protocol.acl": "", "security.datanode.protocol.acl": "", "security.inter.datanode.protocol.acl": "", "security.inter.tracker.protocol.acl": "", "security.job.client.protocol.acl": "", "security.job.task.protocol.acl": "", "security.namenode.protocol.acl": "_", "security.refresh.policy.protocol.acl": "hadoop", "security.refresh.usertogroups.mappings.protocol.acl": "hadoop" } } }, { "hbase-env": { "properties": { "content": "\n# Set environment variables here.\n\n# The java implementation to use. Java 1.6 required.\nexport JAVA_HOME={{java64_home}}\n\n# HBase Configuration directory\nexport HBASE_CONF_DIR=${HBASE_CONF_DIR:-{{hbase_conf_dir}}}\n\n# Extra Java CLASSPATH elements. Optional.\nexport HBASE_CLASSPATH=${HBASE_CLASSPATH}\n\n\n# The maximum amount of heap to use, in MB. Default is 1000.\n# export HBASE_HEAPSIZE=1000\n\n# Extra Java runtime options.\n# Below are what we set by default. 
May only work with SUN JVM.\n# For more on why as well as other possible settings,\n# see http://wiki.apache.org/hadoop/PerformanceTuning\nexport SERVER_GC_OPTS=\"-verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps -Xloggc:{{log_dir}}/gc.log-date +'%Y%m%d%H%M'\"\n# Uncomment below to enable java garbage collection logging.\n# export HBASE_OPTS=\"$HBASE_OPTS -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps -Xloggc:$HBASE_HOME/logs/gc-hbase.log\"\n\n# Uncomment and adjust to enable JMX exporting\n# See jmxremote.password and jmxremote.access in $JRE_HOME/lib/management to configure remote password access.\n# More details at: http://java.sun.com/javase/6/docs/technotes/guides/management/agent.html\n#\n# export HBASE_JMX_BASE=\"-Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.authenticate=false\"\n# If you want to configure BucketCache, specify '-XX: MaxDirectMemorySize=' with proper direct memory size\n# export HBASE_THRIFT_OPTS=\"$HBASE_JMX_BASE -Dcom.sun.management.jmxremote.port=10103\"\n# export HBASE_ZOOKEEPER_OPTS=\"$HBASE_JMX_BASE -Dcom.sun.management.jmxremote.port=10104\"\n\n# File naming hosts on which HRegionServers will run. $HBASE_HOME/conf/regionservers by default.\nexport HBASE_REGIONSERVERS=${HBASE_CONF_DIR}/regionservers\n\n# Extra ssh options. Empty by default.\n# export HBASE_SSH_OPTS=\"-o ConnectTimeout=1 -o SendEnv=HBASE_CONF_DIR\"\n\n# Where log files are stored. $HBASE_HOME/logs by default.\nexport HBASE_LOG_DIR={{log_dir}}\n\n# A string representing this instance of hbase. $USER by default.\n# export HBASE_IDENT_STRING=$USER\n\n# The scheduling priority for daemon processes. See 'man nice'.\n# export HBASE_NICENESS=10\n\n# The directory where pid files are stored. /tmp by default.\nexport HBASE_PID_DIR={{pid_dir}}\n\n# Seconds to sleep between slave commands. Unset by default. This\n# can be useful in large clusters, where, e.g., slave rsyncs can\n# otherwise arrive faster than the master can service them.\n# export HBASE_SLAVE_SLEEP=0.1\n\n# Tell HBase whether it should manage it's own instance of Zookeeper or not.\nexport HBASE_MANAGES_ZK=false\n\n{% if security_enabled %}\nexport HBASE_OPTS=\"$HBASE_OPTS -XX:+UseConcMarkSweepGC -XX:ErrorFile={{log_dir}}/hs_err_pid%p.log -Djava.security.auth.login.config={{client_jaas_config_file}}\"\nexport HBASE_MASTER_OPTS=\"$HBASE_MASTER_OPTS -Xmx{{master_heapsize}} -Djava.security.auth.login.config={{master_jaas_config_file}}\"\nexport HBASE_REGIONSERVER_OPTS=\"$HBASE_REGIONSERVER_OPTS -Xmn{{regionserver_xmn_size}} -XX:CMSInitiatingOccupancyFraction=70 -Xms{{regionserver_heapsize}} -Xmx{{regionserver_heapsize}} -Djava.security.auth.login.config={{regionserver_jaas_config_file}}\"\n{% else %}\nexport HBASE_OPTS=\"$HBASE_OPTS -XX:+UseConcMarkSweepGC -XX:ErrorFile={{log_dir}}/hs_err_pid%p.log\"\nexport HBASE_MASTER_OPTS=\"$HBASE_MASTER_OPTS -Xmx{{master_heapsize}}\"\nexport HBASE_REGIONSERVER_OPTS=\"$HBASE_REGIONSERVER_OPTS -Xmn{{regionserver_xmn_size}} -XX:CMSInitiatingOccupancyFraction=70 -Xms{{regionserver_heapsize}} -Xmx{{regionserver_heapsize}}\"\n{% endif %}", "hbase_log_dir": "/var/log/hbase", "hbase_master_heapsize": "1024m", "hbase_pid_dir": "/var/run/hbase", "hbase_regionserver_heapsize": "1024m", "hbase_regionserver_xmn_max": "512", "hbase_regionserver_xmn_ratio": "0.2", "hbase_user": "hbase" } } }, { "hbase-log4j": { "properties": { "content": "\n# Licensed to the Apache Software Foundation (ASF) under one\n# or more contributor license agreements. 
See the NOTICE file\n# distributed with this work for additional information\n# regarding copyright ownership. The ASF licenses this file\n# to you under the Apache License, Version 2.0 (the\n# \"License\"); you may not use this file except in compliance\n# with the License. You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\n# Define some default values that can be overridden by system properties\nhbase.root.logger=INFO,console\nhbase.security.logger=INFO,console\nhbase.log.dir=.\nhbase.log.file=hbase.log\n\n# Define the root logger to the system property \"hbase.root.logger\".\nlog4j.rootLogger=${hbase.root.logger}\n\n# Logging Threshold\nlog4j.threshold=ALL\n\n#\n# Daily Rolling File Appender\n#\nlog4j.appender.DRFA=org.apache.log4j.DailyRollingFileAppender\nlog4j.appender.DRFA.File=${hbase.log.dir}/${hbase.log.file}\n\n# Rollver at midnight\nlog4j.appender.DRFA.DatePattern=.yyyy-MM-dd\n\n# 30-day backup\n#log4j.appender.DRFA.MaxBackupIndex=30\nlog4j.appender.DRFA.layout=org.apache.log4j.PatternLayout\n\n# Pattern format: Date LogLevel LoggerName LogMessage\nlog4j.appender.DRFA.layout.ConversionPattern=%d{ISO8601} %-5p [%t] %c{2}: %m%n\n\n# Rolling File Appender properties\nhbase.log.maxfilesize=256MB\nhbase.log.maxbackupindex=20\n\n# Rolling File Appender\nlog4j.appender.RFA=org.apache.log4j.RollingFileAppender\nlog4j.appender.RFA.File=${hbase.log.dir}/${hbase.log.file}\n\nlog4j.appender.RFA.MaxFileSize=${hbase.log.maxfilesize}\nlog4j.appender.RFA.MaxBackupIndex=${hbase.log.maxbackupindex}\n\nlog4j.appender.RFA.layout=org.apache.log4j.PatternLayout\nlog4j.appender.RFA.layout.ConversionPattern=%d{ISO8601} %-5p [%t] %c{2}: %m%n\n\n#\n# Security audit appender\n#\nhbase.security.log.file=SecurityAuth.audit\nhbase.security.log.maxfilesize=256MB\nhbase.security.log.maxbackupindex=20\nlog4j.appender.RFAS=org.apache.log4j.RollingFileAppender\nlog4j.appender.RFAS.File=${hbase.log.dir}/${hbase.security.log.file}\nlog4j.appender.RFAS.MaxFileSize=${hbase.security.log.maxfilesize}\nlog4j.appender.RFAS.MaxBackupIndex=${hbase.security.log.maxbackupindex}\nlog4j.appender.RFAS.layout=org.apache.log4j.PatternLayout\nlog4j.appender.RFAS.layout.ConversionPattern=%d{ISO8601} %p %c: %m%n\nlog4j.category.SecurityLogger=${hbase.security.logger}\nlog4j.additivity.SecurityLogger=false\n#log4j.logger.SecurityLogger.org.apache.hadoop.hbase.security.access.AccessController=TRACE\n\n#\n# Null Appender\n#\nlog4j.appender.NullAppender=org.apache.log4j.varia.NullAppender\n\n#\n# console\n# Add \"console\" to rootlogger above if you want to use this\n#\nlog4j.appender.console=org.apache.log4j.ConsoleAppender\nlog4j.appender.console.target=System.err\nlog4j.appender.console.layout=org.apache.log4j.PatternLayout\nlog4j.appender.console.layout.ConversionPattern=%d{ISO8601} %-5p [%t] %c{2}: %m%n\n\n# Custom Logging levels\n\nlog4j.logger.org.apache.zookeeper=INFO\n#log4j.logger.org.apache.hadoop.fs.FSNamesystem=DEBUG\nlog4j.logger.org.apache.hadoop.hbase=DEBUG\n# Make these two classes INFO-level. 
Make them DEBUG to see more zk debug.\nlog4j.logger.org.apache.hadoop.hbase.zookeeper.ZKUtil=INFO\nlog4j.logger.org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher=INFO\n#log4j.logger.org.apache.hadoop.dfs=DEBUG\n# Set this class to log INFO only otherwise its OTT\n# Enable this to get detailed connection error/retry logging.\n# log4j.logger.org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation=TRACE\n\n\n# Uncomment this line to enable tracing on every RPC call (this can be a lot of output)\n#log4j.logger.org.apache.hadoop.ipc.HBaseServer.trace=DEBUG\n\n# Uncomment the below if you want to remove logging of client region caching'\n# and scan of .META. messages\n# log4j.logger.org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation=INFO\n# log4j.logger.org.apache.hadoop.hbase.client.MetaScanner=INFO" } } }, { "hbase-policy": { "properties": { "security.admin.protocol.acl": "", "security.client.protocol.acl": "", "security.masterregion.protocol.acl": "_" } } }, { "hbase-site": { "properties": { "dfs.domain.socket.path": "/var/lib/hadoop-hdfs/dn_socket", "hbase.client.keyvalue.maxsize": "10485760", "hbase.client.scanner.caching": "100", "hbase.cluster.distributed": "true", "hbase.coprocessor.master.classes": "", "hbase.coprocessor.region.classes": "", "hbase.defaults.for.version.skip": "true", "hbase.hregion.majorcompaction": "604800000", "hbase.hregion.majorcompaction.jitter": "0.50", "hbase.hregion.max.filesize": "10737418240", "hbase.hregion.memstore.block.multiplier": "4", "hbase.hregion.memstore.flush.size": "134217728", "hbase.hregion.memstore.mslab.enabled": "true", "hbase.hstore.blockingStoreFiles": "10", "hbase.hstore.compactionThreshold": "3", "hbase.local.dir": "${hbase.tmp.dir}/local", "hbase.master.info.bindAddress": "0.0.0.0", "hbase.master.info.port": "60010", "hbase.master.port": "60000", "hbase.regionserver.global.memstore.lowerLimit": "0.38", "hbase.regionserver.global.memstore.upperLimit": "0.4", "hbase.regionserver.handler.count": "60", "hbase.regionserver.info.port": "60030", "hbase.rootdir": "hdfs://%HOSTGROUP::host_group_master_1%:8020/apps/hbase/data", "hbase.rpc.protection": "authentication", "hbase.security.authentication": "simple", "hbase.security.authorization": "false", "hbase.superuser": "hbase", "hbase.tmp.dir": "/mnt/hadoop/hbase", "hbase.zookeeper.property.clientPort": "2181", "hbase.zookeeper.quorum": "%HOSTGROUP::host_group_master_3%,%HOSTGROUP::host_group_master_1%,%HOSTGROUP::host_group_master_2%", "hbase.zookeeper.useMulti": "true", "hfile.block.cache.size": "0.40", "zookeeper.session.timeout": "30000", "zookeeper.znode.parent": "/hbase-unsecure" } } }, { "hcat-env": { "properties": { "content": "\n # Licensed to the Apache Software Foundation (ASF) under one\n # or more contributor license agreements. See the NOTICE file\n # distributed with this work for additional information\n # regarding copyright ownership. The ASF licenses this file\n # to you under the Apache License, Version 2.0 (the\n # \"License\"); you may not use this file except in compliance\n # with the License. 
You may obtain a copy of the License at\n #\n # http://www.apache.org/licenses/LICENSE-2.0\n #\n # Unless required by applicable law or agreed to in writing, software\n # distributed under the License is distributed on an \"AS IS\" BASIS,\n # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n # See the License for the specific language governing permissions and\n # limitations under the License.\n\n JAVA_HOME={{java64_home}}\n HCAT_PID_DIR={{hcat_pid_dir}}/\n HCAT_LOG_DIR={{hcat_log_dir}}/\n HCAT_CONF_DIR={{hcat_conf_dir}}\n HADOOP_HOME=${HADOOP_HOME:-{{hadoop_home}}}\n #DBROOT is the path where the connector jars are downloaded\n DBROOT={{hcat_dbroot}}\n USER={{hcat_user}}\n METASTORE_PORT={{hive_metastore_port}}" } } }, { "hdfs-log4j": { "properties": { "content": "\n#\n# Licensed to the Apache Software Foundation (ASF) under one\n# or more contributor license agreements. See the NOTICE file\n# distributed with this work for additional information\n# regarding copyright ownership. The ASF licenses this file\n# to you under the Apache License, Version 2.0 (the\n# \"License\"); you may not use this file except in compliance\n# with the License. You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing,\n# software distributed under the License is distributed on an\n# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n# KIND, either express or implied. See the License for the\n# specific language governing permissions and limitations\n# under the License.\n#\n\n\n# Define some default values that can be overridden by system properties\n# To change daemon root logger use hadoop_root_logger in hadoop-env\nhadoop.root.logger=INFO,console\nhadoop.log.dir=.\nhadoop.log.file=hadoop.log\n\n\n# Define the root logger to the system property \"hadoop.root.logger\".\nlog4j.rootLogger=${hadoop.root.logger}, EventCounter\n\n# Logging Threshold\nlog4j.threshhold=ALL\n\n#\n# Daily Rolling File Appender\n#\n\nlog4j.appender.DRFA=org.apache.log4j.DailyRollingFileAppender\nlog4j.appender.DRFA.File=${hadoop.log.dir}/${hadoop.log.file}\n\n# Rollver at midnight\nlog4j.appender.DRFA.DatePattern=.yyyy-MM-dd\n\n# 30-day backup\n#log4j.appender.DRFA.MaxBackupIndex=30\nlog4j.appender.DRFA.layout=org.apache.log4j.PatternLayout\n\n# Pattern format: Date LogLevel LoggerName LogMessage\nlog4j.appender.DRFA.layout.ConversionPattern=%d{ISO8601} %p %c: %m%n\n# Debugging Pattern format\n#log4j.appender.DRFA.layout.ConversionPattern=%d{ISO8601} %-5p %c{2} (%F:%M(%L)) - %m%n\n\n\n#\n# console\n# Add \"console\" to rootlogger above if you want to use this\n#\n\nlog4j.appender.console=org.apache.log4j.ConsoleAppender\nlog4j.appender.console.target=System.err\nlog4j.appender.console.layout=org.apache.log4j.PatternLayout\nlog4j.appender.console.layout.ConversionPattern=%d{yy/MM/dd HH:mm:ss} %p %c{2}: %m%n\n\n#\n# TaskLog Appender\n#\n\n#Default values\nhadoop.tasklog.taskid=null\nhadoop.tasklog.iscleanup=false\nhadoop.tasklog.noKeepSplits=4\nhadoop.tasklog.totalLogFileSize=100\nhadoop.tasklog.purgeLogSplits=true\nhadoop.tasklog.logsRetainHours=12\n\nlog4j.appender.TLA=org.apache.hadoop.mapred.TaskLogAppender\nlog4j.appender.TLA.taskId=${hadoop.tasklog.taskid}\nlog4j.appender.TLA.isCleanup=${hadoop.tasklog.iscleanup}\nlog4j.appender.TLA.totalLogFileSize=${hadoop.tasklog.totalLogFileSize}\n\nlog4j.appender.TLA.layout=org.apache.log4j.PatternLayout\nlog4j.appender.TLA.layout.ConversionPattern=%d{ISO8601} %p 
%c: %m%n\n\n#\n#Security audit appender\n#\nhadoop.security.logger=INFO,console\nhadoop.security.log.maxfilesize=256MB\nhadoop.security.log.maxbackupindex=20\nlog4j.category.SecurityLogger=${hadoop.security.logger}\nhadoop.security.log.file=SecurityAuth.audit\nlog4j.appender.DRFAS=org.apache.log4j.DailyRollingFileAppender\nlog4j.appender.DRFAS.File=${hadoop.log.dir}/${hadoop.security.log.file}\nlog4j.appender.DRFAS.layout=org.apache.log4j.PatternLayout\nlog4j.appender.DRFAS.layout.ConversionPattern=%d{ISO8601} %p %c: %m%n\nlog4j.appender.DRFAS.DatePattern=.yyyy-MM-dd\n\nlog4j.appender.RFAS=org.apache.log4j.RollingFileAppender\nlog4j.appender.RFAS.File=${hadoop.log.dir}/${hadoop.security.log.file}\nlog4j.appender.RFAS.layout=org.apache.log4j.PatternLayout\nlog4j.appender.RFAS.layout.ConversionPattern=%d{ISO8601} %p %c: %m%n\nlog4j.appender.RFAS.MaxFileSize=${hadoop.security.log.maxfilesize}\nlog4j.appender.RFAS.MaxBackupIndex=${hadoop.security.log.maxbackupindex}\n\n#\n# hdfs audit logging\n#\nhdfs.audit.logger=INFO,console\nlog4j.logger.org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit=${hdfs.audit.logger}\nlog4j.additivity.org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit=false\nlog4j.appender.DRFAAUDIT=org.apache.log4j.DailyRollingFileAppender\nlog4j.appender.DRFAAUDIT.File=${hadoop.log.dir}/hdfs-audit.log\nlog4j.appender.DRFAAUDIT.layout=org.apache.log4j.PatternLayout\nlog4j.appender.DRFAAUDIT.layout.ConversionPattern=%d{ISO8601} %p %c{2}: %m%n\nlog4j.appender.DRFAAUDIT.DatePattern=.yyyy-MM-dd\n\n#\n# mapred audit logging\n#\nmapred.audit.logger=INFO,console\nlog4j.logger.org.apache.hadoop.mapred.AuditLogger=${mapred.audit.logger}\nlog4j.additivity.org.apache.hadoop.mapred.AuditLogger=false\nlog4j.appender.MRAUDIT=org.apache.log4j.DailyRollingFileAppender\nlog4j.appender.MRAUDIT.File=${hadoop.log.dir}/mapred-audit.log\nlog4j.appender.MRAUDIT.layout=org.apache.log4j.PatternLayout\nlog4j.appender.MRAUDIT.layout.ConversionPattern=%d{ISO8601} %p %c{2}: %m%n\nlog4j.appender.MRAUDIT.DatePattern=.yyyy-MM-dd\n\n#\n# Rolling File Appender\n#\n\nlog4j.appender.RFA=org.apache.log4j.RollingFileAppender\nlog4j.appender.RFA.File=${hadoop.log.dir}/${hadoop.log.file}\n\n# Logfile size and and 30-day backups\nlog4j.appender.RFA.MaxFileSize=256MB\nlog4j.appender.RFA.MaxBackupIndex=10\n\nlog4j.appender.RFA.layout=org.apache.log4j.PatternLayout\nlog4j.appender.RFA.layout.ConversionPattern=%d{ISO8601} %-5p %c{2} - %m%n\nlog4j.appender.RFA.layout.ConversionPattern=%d{ISO8601} %-5p %c{2} (%F:%M(%L)) - %m%n\n\n\n# Custom Logging levels\n\nhadoop.metrics.log.level=INFO\n#log4j.logger.org.apache.hadoop.mapred.JobTracker=DEBUG\n#log4j.logger.org.apache.hadoop.mapred.TaskTracker=DEBUG\n#log4j.logger.org.apache.hadoop.fs.FSNamesystem=DEBUG\nlog4j.logger.org.apache.hadoop.metrics2=${hadoop.metrics.log.level}\n\n# Jets3t library\nlog4j.logger.org.jets3t.service.impl.rest.httpclient.RestS3Service=ERROR\n\n#\n# Null Appender\n# Trap security logger on the hadoop client side\n#\nlog4j.appender.NullAppender=org.apache.log4j.varia.NullAppender\n\n#\n# Event Counter Appender\n# Sends counts of logging messages at different severity levels to Hadoop Metrics.\n#\nlog4j.appender.EventCounter=org.apache.hadoop.log.metrics.EventCounter\n\n# Removes \"deprecated\" messages\nlog4j.logger.org.apache.hadoop.conf.Configuration.deprecation=WARN\n\n#\n# HDFS block state change log from block manager\n#\n# Uncomment the following to suppress normal block state change\n# messages from BlockManager in 
NameNode.\n#log4j.logger.BlockStateChange=WARN" } } }, { "hdfs-site": { "properties_attributes": { "final": { "dfs.support.append": "true", "dfs.namenode.http-address": "true" } }, "properties": { "dfs.block.access.token.enable": "true", "dfs.blockreport.initialDelay": "120", "dfs.blocksize": "134217728", "dfs.client.read.shortcircuit": "true", "dfs.client.read.shortcircuit.streams.cache.size": "4096", "dfs.cluster.administrators": " hdfs", "dfs.datanode.address": "0.0.0.0:50010", "dfs.datanode.balance.bandwidthPerSec": "6250000", "dfs.datanode.data.dir": "/mnt/hadoop/hdfs/data", "dfs.datanode.data.dir.perm": "750", "dfs.datanode.du.reserved": "1073741824", "dfs.datanode.failed.volumes.tolerated": "0", "dfs.datanode.http.address": "0.0.0.0:50075", "dfs.datanode.https.address": "0.0.0.0:50475", "dfs.datanode.ipc.address": "0.0.0.0:8010", "dfs.datanode.max.transfer.threads": "16384", "dfs.domain.socket.path": "/var/lib/hadoop-hdfs/dn_socket", "dfs.heartbeat.interval": "3", "dfs.hosts.exclude": "/etc/hadoop/conf/dfs.exclude", "dfs.http.policy": "HTTP_ONLY", "dfs.https.port": "50470", "dfs.journalnode.edits.dir": "/hadoop/hdfs/journalnode", "dfs.journalnode.http-address": "0.0.0.0:8480", "dfs.journalnode.https-address": "0.0.0.0:8481", "dfs.namenode.accesstime.precision": "0", "dfs.namenode.avoid.read.stale.datanode": "true", "dfs.namenode.avoid.write.stale.datanode": "true", "dfs.namenode.checkpoint.dir": "/mnt/hadoop/hdfs/namesecondary", "dfs.namenode.checkpoint.edits.dir": "${dfs.namenode.checkpoint.dir}", "dfs.namenode.checkpoint.period": "21600", "dfs.namenode.checkpoint.txns": "1000000", "dfs.namenode.handler.count": "100", "dfs.namenode.http-address": "%HOSTGROUP::host_group_master_1%:50070", "dfs.namenode.https-address": "%HOSTGROUP::host_group_master_1%:50470", "dfs.namenode.name.dir": "/mnt/hadoop/hdfs/namenode", "dfs.namenode.name.dir.restore": "true", "dfs.namenode.safemode.threshold-pct": "1.0f", "dfs.namenode.secondary.http-address": "%HOSTGROUP::host_group_master_3%:50090", "dfs.namenode.stale.datanode.interval": "30000", "dfs.namenode.startup.delay.block.deletion.sec": "3600", "dfs.namenode.write.stale.datanode.ratio": "1.0f", "dfs.permissions.enabled": "true", "dfs.permissions.superusergroup": "hdfs", "dfs.replication": "3", "dfs.replication.max": "50", "dfs.support.append": "true", "dfs.webhdfs.enabled": "true", "fs.permissions.umask-mode": "022" } } }, { "hive-env": { "properties": { "content": "\n if [ \"$SERVICE\" = \"cli\" ]; then\n if [ -z \"$DEBUG\" ]; then\n export HADOOP_OPTS=\"$HADOOP_OPTS -XX:NewRatio=12 -Xms10m -XX:MaxHeapFreeRatio=40 -XX:MinHeapFreeRatio=15 -XX:+UseParNewGC -XX:-UseGCOverheadLimit\"\n else\n export HADOOP_OPTS=\"$HADOOP_OPTS -XX:NewRatio=12 -Xms10m -XX:MaxHeapFreeRatio=40 -XX:MinHeapFreeRatio=15 -XX:-UseGCOverheadLimit\"\n fi\n fi\n\n# The heap size of the jvm stared by hive shell script can be controlled via:\n\n# Larger heap size may be required when running queries over large number of files or partitions.\n# By default hive shell scripts use a heap size of 256 (MB). 
Larger heap size would also be\n# appropriate for hive server (hwi etc).\n\n\n# Set HADOOP_HOME to point to a specific hadoop install directory\nHADOOP_HOME=${HADOOP_HOME:-{{hadoop_home}}}\n\n# Hive Configuration Directory can be controlled by:\nexport HIVE_CONF_DIR={{hive_config_dir}}\n\n# Folder containing extra libraries required for hive compilation/execution can be controlled by:\nif [ \"${HIVE_AUX_JARS_PATH}\" != \"\" ]; then\n if [ -f \"${HIVE_AUX_JARS_PATH}\" ]; then \n export HIVE_AUX_JARS_PATH=${HIVE_AUX_JARS_PATH}\n elif [ -d \"/usr/hdp/current/hive-webhcat/share/hcatalog\" ]; then\n export HIVE_AUX_JARS_PATH=/usr/hdp/current/hive-webhcat/share/hcatalog/hive-hcatalog-core.jar\n fi\nelif [ -d \"/usr/hdp/current/hive-webhcat/share/hcatalog\" ]; then\n export HIVE_AUX_JARS_PATH=/usr/hdp/current/hive-webhcat/share/hcatalog/hive-hcatalog-core.jar\nfi \n\nexport METASTORE_PORT={{hive_metastore_port}}", "hcat_log_dir": "/var/log/webhcat", "hcat_pid_dir": "/var/run/webhcat", "hcat_user": "hcat", "hive_ambari_database": "MySQL", "hive_ambari_host": "%HOSTGROUP::host_group_master_2%", "hive_database": "New MySQL Database", "hive_database_name": "hive", "hive_database_type": "mysql", "hive_existing_mssql_server_2_host": "", "hive_existing_mssql_server_host": "", "hive_existing_mysql_host": "", "hive_existing_oracle_host": "", "hive_existing_postgresql_host": "", "hive_hostname": "%HOSTGROUP::host_group_master_2%", "hive_log_dir": "/var/log/hive", "hive_metastore_port": "9083", "hive_pid_dir": "/var/run/hive", "hive_user": "hive", "webhcat_user": "hcat" } } }, { "hive-exec-log4j": { "properties": { "content": "\n# Licensed to the Apache Software Foundation (ASF) under one\n# or more contributor license agreements. See the NOTICE file\n# distributed with this work for additional information\n# regarding copyright ownership. The ASF licenses this file\n# to you under the Apache License, Version 2.0 (the\n# \"License\"); you may not use this file except in compliance\n# with the License. 
You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n# Define some default values that can be overridden by system properties\n\nhive.log.threshold=ALL\nhive.root.logger=INFO,FA\nhive.log.dir=${java.io.tmpdir}/${user.name}\nhive.query.id=hadoop\nhive.log.file=${hive.query.id}.log\n\n# Define the root logger to the system property \"hadoop.root.logger\".\nlog4j.rootLogger=${hive.root.logger}, EventCounter\n\n# Logging Threshold\nlog4j.threshhold=${hive.log.threshold}\n\n#\n# File Appender\n#\n\nlog4j.appender.FA=org.apache.log4j.FileAppender\nlog4j.appender.FA.File=${hive.log.dir}/${hive.log.file}\nlog4j.appender.FA.layout=org.apache.log4j.PatternLayout\n\n# Pattern format: Date LogLevel LoggerName LogMessage\n#log4j.appender.DRFA.layout.ConversionPattern=%d{ISO8601} %p %c: %m%n\n# Debugging Pattern format\nlog4j.appender.FA.layout.ConversionPattern=%d{ISO8601} %-5p %c{2} (%F:%M(%L)) - %m%n\n\n\n#\n# console\n# Add \"console\" to rootlogger above if you want to use this\n#\n\nlog4j.appender.console=org.apache.log4j.ConsoleAppender\nlog4j.appender.console.target=System.err\nlog4j.appender.console.layout=org.apache.log4j.PatternLayout\nlog4j.appender.console.layout.ConversionPattern=%d{yy/MM/dd HH:mm:ss} %p %c{2}: %m%n\n\n#custom logging levels\n#log4j.logger.xxx=DEBUG\n\n#\n# Event Counter Appender\n# Sends counts of logging messages at different severity levels to Hadoop Metrics.\n#\nlog4j.appender.EventCounter=org.apache.hadoop.hive.shims.HiveEventCounter\n\n\nlog4j.category.DataNucleus=ERROR,FA\nlog4j.category.Datastore=ERROR,FA\nlog4j.category.Datastore.Schema=ERROR,FA\nlog4j.category.JPOX.Datastore=ERROR,FA\nlog4j.category.JPOX.Plugin=ERROR,FA\nlog4j.category.JPOX.MetaData=ERROR,FA\nlog4j.category.JPOX.Query=ERROR,FA\nlog4j.category.JPOX.General=ERROR,FA\nlog4j.category.JPOX.Enhancer=ERROR,FA\n\n\n# Silence useless ZK logs\nlog4j.logger.org.apache.zookeeper.server.NIOServerCnxn=WARN,FA\nlog4j.logger.org.apache.zookeeper.ClientCnxnSocketNIO=WARN,FA" } } }, { "hive-log4j": { "properties": { "content": "\n# Licensed to the Apache Software Foundation (ASF) under one\n# or more contributor license agreements. See the NOTICE file\n# distributed with this work for additional information\n# regarding copyright ownership. The ASF licenses this file\n# to you under the Apache License, Version 2.0 (the\n# \"License\"); you may not use this file except in compliance\n# with the License. 
You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n# Define some default values that can be overridden by system properties\nhive.log.threshold=ALL\nhive.root.logger=INFO,DRFA\nhive.log.dir=${java.io.tmpdir}/${user.name}\nhive.log.file=hive.log\n\n# Define the root logger to the system property \"hadoop.root.logger\".\nlog4j.rootLogger=${hive.root.logger}, EventCounter\n\n# Logging Threshold\nlog4j.threshold=${hive.log.threshold}\n\n#\n# Daily Rolling File Appender\n#\n# Use the PidDailyerRollingFileAppend class instead if you want to use separate log files\n# for different CLI session.\n#\n# log4j.appender.DRFA=org.apache.hadoop.hive.ql.log.PidDailyRollingFileAppender\n\nlog4j.appender.DRFA=org.apache.log4j.DailyRollingFileAppender\n\nlog4j.appender.DRFA.File=${hive.log.dir}/${hive.log.file}\n\n# Rollver at midnight\nlog4j.appender.DRFA.DatePattern=.yyyy-MM-dd\n\n# 30-day backup\n#log4j.appender.DRFA.MaxBackupIndex=30\nlog4j.appender.DRFA.layout=org.apache.log4j.PatternLayout\n\n# Pattern format: Date LogLevel LoggerName LogMessage\n#log4j.appender.DRFA.layout.ConversionPattern=%d{ISO8601} %p %c: %m%n\n# Debugging Pattern format\nlog4j.appender.DRFA.layout.ConversionPattern=%d{ISO8601} %-5p [%t]: %c{2} (%F:%M(%L)) - %m%n\n\n\n#\n# console\n# Add \"console\" to rootlogger above if you want to use this\n#\n\nlog4j.appender.console=org.apache.log4j.ConsoleAppender\nlog4j.appender.console.target=System.err\nlog4j.appender.console.layout=org.apache.log4j.PatternLayout\nlog4j.appender.console.layout.ConversionPattern=%d{yy/MM/dd HH:mm:ss} [%t]: %p %c{2}: %m%n\nlog4j.appender.console.encoding=UTF-8\n\n#custom logging levels\n#log4j.logger.xxx=DEBUG\n\n#\n# Event Counter Appender\n# Sends counts of logging messages at different severity levels to Hadoop Metrics.\n#\nlog4j.appender.EventCounter=org.apache.hadoop.hive.shims.HiveEventCounter\n\n\nlog4j.category.DataNucleus=ERROR,DRFA\nlog4j.category.Datastore=ERROR,DRFA\nlog4j.category.Datastore.Schema=ERROR,DRFA\nlog4j.category.JPOX.Datastore=ERROR,DRFA\nlog4j.category.JPOX.Plugin=ERROR,DRFA\nlog4j.category.JPOX.MetaData=ERROR,DRFA\nlog4j.category.JPOX.Query=ERROR,DRFA\nlog4j.category.JPOX.General=ERROR,DRFA\nlog4j.category.JPOX.Enhancer=ERROR,DRFA\n\n\n# Silence useless ZK logs\nlog4j.logger.org.apache.zookeeper.server.NIOServerCnxn=WARN,DRFA\nlog4j.logger.org.apache.zookeeper.ClientCnxnSocketNIO=WARN,DRFA" } } }, { "hive-site": { "properties": { "ambari.hive.db.schema.name": "hive", "datanucleus.cache.level2.type": "none", "hive.auto.convert.join": "true", "hive.auto.convert.join.noconditionaltask": "true", "hive.auto.convert.join.noconditionaltask.size": "357564416", "hive.auto.convert.sortmerge.join": "true", "hive.auto.convert.sortmerge.join.to.mapjoin": "false", "hive.cbo.enable": "true", "hive.cli.print.header": "false", "hive.cluster.delegation.token.store.class": "org.apache.hadoop.hive.thrift.ZooKeeperTokenStore", "hive.cluster.delegation.token.store.zookeeper.connectString": "%HOSTGROUP::host_group_master_3%:2181,%HOSTGROUP::host_group_master_1%:2181,%HOSTGROUP::host_group_master_2%:2181", "hive.cluster.delegation.token.store.zookeeper.znode": "/hive/cluster/delegation", 
"hive.compactor.abortedtxn.threshold": "1000", "hive.compactor.check.interval": "300L", "hive.compactor.delta.num.threshold": "10", "hive.compactor.delta.pct.threshold": "0.1f", "hive.compactor.initiator.on": "false", "hive.compactor.worker.threads": "0", "hive.compactor.worker.timeout": "86400L", "hive.compute.query.using.stats": "true", "hive.conf.restricted.list": "hive.security.authenticator.manager,hive.security.authorization.manager,hive.users.in.admin.role", "hive.convert.join.bucket.mapjoin.tez": "false", "hive.enforce.bucketing": "true", "hive.enforce.sorting": "true", "hive.enforce.sortmergebucketmapjoin": "true", "hive.exec.compress.intermediate": "false", "hive.exec.compress.output": "false", "hive.exec.dynamic.partition": "true", "hive.exec.dynamic.partition.mode": "nonstrict", "hive.exec.failure.hooks": "org.apache.hadoop.hive.ql.hooks.ATSHook", "hive.exec.max.created.files": "100000", "hive.exec.max.dynamic.partitions": "5000", "hive.exec.max.dynamic.partitions.pernode": "2000", "hive.exec.orc.compression.strategy": "SPEED", "hive.exec.orc.default.compress": "ZLIB", "hive.exec.orc.default.stripe.size": "67108864", "hive.exec.parallel": "false", "hive.exec.parallel.thread.number": "8", "hive.exec.post.hooks": "org.apache.hadoop.hive.ql.hooks.ATSHook", "hive.exec.pre.hooks": "org.apache.hadoop.hive.ql.hooks.ATSHook", "hive.exec.reducers.bytes.per.reducer": "67108864", "hive.exec.reducers.max": "1009", "hive.exec.scratchdir": "/tmp/hive", "hive.exec.submit.local.task.via.child": "true", "hive.exec.submitviachild": "false", "hive.execution.engine": "tez", "hive.fetch.task.aggr": "false", "hive.fetch.task.conversion": "more", "hive.fetch.task.conversion.threshold": "1073741824", "hive.limit.optimize.enable": "true", "hive.limit.pushdown.memory.usage": "0.04", "hive.map.aggr": "true", "hive.map.aggr.hash.force.flush.memory.threshold": "0.9", "hive.map.aggr.hash.min.reduction": "0.5", "hive.map.aggr.hash.percentmemory": "0.5", "hive.mapjoin.bucket.cache.size": "10000", "hive.mapjoin.optimized.hashtable": "true", "hive.mapred.reduce.tasks.speculative.execution": "false", "hive.merge.mapfiles": "true", "hive.merge.mapredfiles": "false", "hive.merge.orcfile.stripe.level": "true", "hive.merge.rcfile.block.level": "true", "hive.merge.size.per.task": "256000000", "hive.merge.smallfiles.avgsize": "16000000", "hive.merge.tezfiles": "false", "hive.metastore.authorization.storage.checks": "false", "hive.metastore.cache.pinobjtypes": "Table,Database,Type,FieldSchema,Order", "hive.metastore.client.connect.retry.delay": "5s", "hive.metastore.client.socket.timeout": "1800s", "hive.metastore.connect.retries": "24", "hive.metastore.execute.setugi": "true", "hive.metastore.failure.retries": "24", "hive.metastore.kerberos.keytab.file": "/etc/security/keytabs/hive.service.keytab", "hive.metastore.kerberos.principal": "hive/_HOST@EXAMPLE.COM", "hive.metastore.pre.event.listeners": "org.apache.hadoop.hive.ql.security.authorization.AuthorizationPreEventListener", "hive.metastore.sasl.enabled": "false", "hive.metastore.server.max.threads": "100000", "hive.metastore.uris": "thrift://%HOSTGROUP::host_group_master_2%:9083", "hive.metastore.warehouse.dir": "/apps/hive/warehouse", "hive.optimize.bucketmapjoin": "true", "hive.optimize.bucketmapjoin.sortedmerge": "false", "hive.optimize.constant.propagation": "true", "hive.optimize.index.filter": "true", "hive.optimize.metadataonly": "true", "hive.optimize.null.scan": "true", "hive.optimize.reducededuplication": "true", 
"hive.optimize.reducededuplication.min.reducer": "4", "hive.optimize.sort.dynamic.partition": "false", "hive.orc.compute.splits.num.threads": "10", "hive.orc.splits.include.file.footer": "false", "hive.prewarm.enabled": "false", "hive.prewarm.numcontainers": "10", "hive.security.authenticator.manager": "org.apache.hadoop.hive.ql.security.ProxyUserAuthenticator", "hive.security.authorization.enabled": "false", "hive.security.authorization.manager": "org.apache.hadoop.hive.ql.security.authorization.plugin.sqlstd.SQLStdConfOnlyAuthorizerFactory", "hive.security.metastore.authenticator.manager": "org.apache.hadoop.hive.ql.security.HadoopDefaultMetastoreAuthenticator", "hive.security.metastore.authorization.auth.reads": "true", "hive.security.metastore.authorization.manager": "org.apache.hadoop.hive.ql.security.authorization.StorageBasedAuthorizationProvider,org.apache.hadoop.hive.ql.security.authorization.MetaStoreAuthzAPIAuthorizerEmbedOnly", "hive.server2.allow.user.substitution": "true", "hive.server2.authentication": "NONE", "hive.server2.authentication.spnego.keytab": "HTTP/_HOST@EXAMPLE.COM", "hive.server2.authentication.spnego.principal": "/etc/security/keytabs/spnego.service.keytab", "hive.server2.enable.doAs": "true", "hive.server2.logging.operation.enabled": "true", "hive.server2.logging.operation.log.location": "${system:java.io.tmpdir}/${system:user.name}/operation_logs", "hive.server2.support.dynamic.service.discovery": "true", "hive.server2.table.type.mapping": "CLASSIC", "hive.server2.tez.default.queues": "default", "hive.server2.tez.initialize.default.sessions": "false", "hive.server2.tez.sessions.per.default.queue": "1", "hive.server2.thrift.http.path": "cliservice", "hive.server2.thrift.http.port": "10001", "hive.server2.thrift.max.worker.threads": "500", "hive.server2.thrift.port": "10000", "hive.server2.thrift.sasl.qop": "auth", "hive.server2.transport.mode": "binary", "hive.server2.use.SSL": "false", "hive.server2.zookeeper.namespace": "hiveserver2", "hive.smbjoin.cache.rows": "10000", "hive.stats.autogather": "true", "hive.stats.dbclass": "fs", "hive.stats.fetch.column.stats": "false", "hive.stats.fetch.partition.stats": "true", "hive.support.concurrency": "false", "hive.tez.auto.reducer.parallelism": "false", "hive.tez.container.size": "1024", "hive.tez.cpu.vcores": "-1", "hive.tez.dynamic.partition.pruning": "true", "hive.tez.dynamic.partition.pruning.max.data.size": "104857600", "hive.tez.dynamic.partition.pruning.max.event.size": "1048576", "hive.tez.input.format": "org.apache.hadoop.hive.ql.io.HiveInputFormat", "hive.tez.java.opts": "-server -Xmx820m -Djava.net.preferIPv4Stack=true -XX:NewRatio=8 -XX:+UseNUMA -XX:+UseParallelGC -XX:+PrintGCDetails -verbose:gc -XX:+PrintGCTimeStamps", "hive.tez.log.level": "INFO", "hive.tez.max.partition.factor": "2.0", "hive.tez.min.partition.factor": "0.25", "hive.tez.smb.number.waves": "0.5", "hive.txn.manager": "org.apache.hadoop.hive.ql.lockmgr.DummyTxnManager", "hive.txn.max.open.batch": "1000", "hive.txn.timeout": "300", "hive.user.install.directory": "/user/", "hive.vectorized.execution.enabled": "true", "hive.vectorized.execution.reduce.enabled": "false", "hive.vectorized.groupby.checkinterval": "4096", "hive.vectorized.groupby.flush.percent": "0.1", "hive.vectorized.groupby.maxentries": "100000", "hive.zookeeper.client.port": "2181", "hive.zookeeper.namespace": "hive_zookeeper_namespace", "hive.zookeeper.quorum": 
"%HOSTGROUP::host_group_master_3%:2181,%HOSTGROUP::host_group_master_1%:2181,%HOSTGROUP::host_group_master_2%:2181", "javax.jdo.option.ConnectionDriverName": "com.mysql.jdbc.Driver", "javax.jdo.option.ConnectionURL": "jdbc:mysql://%HOSTGROUP::host_group_master_2%/hive?createDatabaseIfNotExist=true", "javax.jdo.option.ConnectionUserName": "hive" } } }, { "hiveserver2-site": { "properties": { "hive.security.authenticator.manager": "org.apache.hadoop.hive.ql.security.SessionStateUserAuthenticator", "hive.security.authorization.manager": "org.apache.hadoop.hive.ql.security.authorization.plugin.sqlstd.SQLStdHiveAuthorizerFactory" } } }, { "knox-env": { "properties": { "knox_group": "knox", "knox_pid_dir": "/var/run/knox", "knox_user": "knox" } } }, { "zeppelin-env": { "pid_dir": "/var/run/zeppelin-notebook" } }, { "zeppelin-config": { "download.prebuilt": true, "install.dir": "/root", "stack.dir": "/var/lib/ambari-server/resources/stacks/HDP/2.2/services/zeppelin-stack", "stack.log": "/var/log/zeppelin-notebook-setup.log" } }, { "ldap-log4j": { "properties": { "content": "\n # Licensed to the Apache Software Foundation (ASF) under one\n # or more contributor license agreements. See the NOTICE file\n # distributed with this work for additional information\n # regarding copyright ownership. The ASF licenses this file\n # to you under the Apache License, Version 2.0 (the\n # \"License\"); you may not use this file except in compliance\n # with the License. You may obtain a copy of the License at\n #\n # http://www.apache.org/licenses/LICENSE-2.0\n #\n # Unless required by applicable law or agreed to in writing, software\n # distributed under the License is distributed on an \"AS IS\" BASIS,\n # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n # See the License for the specific language governing permissions and\n # limitations under the License.\n\n app.log.dir=${launcher.dir}/../logs\n app.log.file=${launcher.name}.log\n\n log4j.rootLogger=ERROR, drfa\n log4j.logger.org.apache.directory.server.ldap.LdapServer=INFO\n log4j.logger.org.apache.directory=WARN\n\n log4j.appender.stdout=org.apache.log4j.ConsoleAppender\n log4j.appender.stdout.layout=org.apache.log4j.PatternLayout\n log4j.appender.stdout.layout.ConversionPattern=%d{yy/MM/dd HH:mm:ss} %p %c{2}: %m%n\n\n log4j.appender.drfa=org.apache.log4j.DailyRollingFileAppender\n log4j.appender.drfa.File=${app.log.dir}/${app.log.file}\n log4j.appender.drfa.DatePattern=.yyyy-MM-dd\n log4j.appender.drfa.layout=org.apache.log4j.PatternLayout\n log4j.appender.drfa.layout.ConversionPattern=%d{ISO8601} %-5p %c{2} (%F:%M(%L)) - %m%n" } } }, { "mapred-env": { "properties": { "content": "\n# export JAVA_HOME=/home/y/libexec/jdk1.6.0/\n\nexport HADOOP_JOB_HISTORYSERVER_HEAPSIZE={{jobhistory_heapsize}}\n\nexport HADOOP_MAPRED_ROOT_LOGGER=INFO,RFA\n\n#export HADOOP_JOB_HISTORYSERVER_OPTS=\n#export HADOOP_MAPRED_LOG_DIR=\"\" # Where log files are stored. $HADOOP_MAPRED_HOME/logs by default.\n#export HADOOP_JHS_LOGGER=INFO,RFA # Hadoop JobSummary logger.\n#export HADOOP_MAPRED_PID_DIR= # The pid files are stored. /tmp by default.\n#export HADOOP_MAPRED_IDENT_STRING= #A string representing this instance of hadoop. $USER by default\n#export HADOOP_MAPRED_NICENESS= #The scheduling priority for daemons. 
Defaults to 0.\nexport HADOOP_OPTS=\"-Dhdp.version=$HDP_VERSION $HADOOP_OPTS\"", "jobhistory_heapsize": "900", "mapred_log_dir_prefix": "/var/log/hadoop-mapreduce", "mapred_pid_dir_prefix": "/var/run/hadoop-mapreduce", "mapred_user": "mapred" } } }, { "mapred-site": { "properties": { "mapreduce.admin.map.child.java.opts": "-server -XX:NewRatio=8 -Djava.net.preferIPv4Stack=true -Dhdp.version=${hdp.version}", "mapreduce.admin.reduce.child.java.opts": "-server -XX:NewRatio=8 -Djava.net.preferIPv4Stack=true -Dhdp.version=${hdp.version}", "mapreduce.admin.user.env": "LD_LIBRARY_PATH=/usr/hdp/${hdp.version}/hadoop/lib/native:/usr/hdp/${hdp.version}/hadoop/lib/native/Linux-amd64-64", "mapreduce.am.max-attempts": "2", "mapreduce.application.classpath": "$PWD/mr-framework/hadoop/share/hadoop/mapreduce/*:$PWD/mr-framework/hadoop/share/hadoop/mapreduce/lib/*:$PWD/mr-framework/hadoop/share/hadoop/common/*:$PWD/mr-framework/hadoop/share/hadoop/common/lib/*:$PWD/mr-framework/hadoop/share/hadoop/yarn/*:$PWD/mr-framework/hadoop/share/hadoop/yarn/lib/*:$PWD/mr-framework/hadoop/share/hadoop/hdfs/*:$PWD/mr-framework/hadoop/share/hadoop/hdfs/lib/*:/usr/hdp/${hdp.version}/hadoop/lib/hadoop-lzo-0.6.0.${hdp.version}.jar:/etc/hadoop/conf/secure", "mapreduce.application.framework.path": "/hdp/apps/${hdp.version}/mapreduce/mapreduce.tar.gz#mr-framework", "mapreduce.cluster.administrators": " hadoop", "mapreduce.framework.name": "yarn", "mapreduce.job.emit-timeline-data": "false", "mapreduce.job.reduce.slowstart.completedmaps": "0.05", "mapreduce.jobhistory.address": "%HOSTGROUP::host_group_master_1%:10020", "mapreduce.jobhistory.bind-host": "0.0.0.0", "mapreduce.jobhistory.done-dir": "/mr-history/done", "mapreduce.jobhistory.intermediate-done-dir": "/mr-history/tmp", "mapreduce.jobhistory.webapp.address": "%HOSTGROUP::host_group_master_1%:19888", "mapreduce.map.java.opts": "-Xmx819m", "mapreduce.map.log.level": "INFO", "mapreduce.map.memory.mb": "1024", "mapreduce.map.output.compress": "false", "mapreduce.map.sort.spill.percent": "0.7", "mapreduce.map.speculative": "false", "mapreduce.output.fileoutputformat.compress": "false", "mapreduce.output.fileoutputformat.compress.type": "BLOCK", "mapreduce.reduce.input.buffer.percent": "0.0", "mapreduce.reduce.java.opts": "-Xmx819m", "mapreduce.reduce.log.level": "INFO", "mapreduce.reduce.memory.mb": "1024", "mapreduce.reduce.shuffle.fetch.retry.enabled": "1", "mapreduce.reduce.shuffle.fetch.retry.interval-ms": "1000", "mapreduce.reduce.shuffle.fetch.retry.timeout-ms": "30000", "mapreduce.reduce.shuffle.input.buffer.percent": "0.7", "mapreduce.reduce.shuffle.merge.percent": "0.66", "mapreduce.reduce.shuffle.parallelcopies": "30", "mapreduce.reduce.speculative": "false", "mapreduce.shuffle.port": "13562", "mapreduce.task.io.sort.factor": "100", "mapreduce.task.io.sort.mb": "410", "mapreduce.task.timeout": "300000", "yarn.app.mapreduce.am.admin-command-opts": "-Dhdp.version=${hdp.version}", "yarn.app.mapreduce.am.command-opts": "-Xmx819m -Dhdp.version=${hdp.version}", "yarn.app.mapreduce.am.log.level": "INFO", "yarn.app.mapreduce.am.resource.mb": "1024", "yarn.app.mapreduce.am.staging-dir": "/user" } } }, { "oozie-env": { "properties": { "content": "\n#!/bin/bash\n\nif [ -d \"/usr/lib/bigtop-tomcat\" ]; then\n export OOZIE_CONFIG=${OOZIE_CONFIG:-/etc/oozie/conf}\n export CATALINA_BASE=${CATALINA_BASE:-{{oozie_server_dir}}}\n export CATALINA_TMPDIR=${CATALINA_TMPDIR:-/var/tmp/oozie}\n export OOZIE_CATALINA_HOME=/usr/lib/bigtop-tomcat\nfi\n\n#Set JAVA HOME\nexport 
JAVA_HOME={{java_home}}\n\nexport JRE_HOME=${JAVA_HOME}\n\n# Set Oozie specific environment variables here.\n\n# Settings for the Embedded Tomcat that runs Oozie\n# Java System properties for Oozie should be specified in this variable\n#\n# export CATALINA_OPTS=\n\n# Oozie configuration file to load from Oozie configuration directory\n#\n# export OOZIE_CONFIG_FILE=oozie-site.xml\n\n# Oozie logs directory\n#\nexport OOZIE_LOG={{oozie_log_dir}}\n\n# Oozie pid directory\n#\nexport CATALINA_PID={{pid_file}}\n\n#Location of the data for oozie\nexport OOZIE_DATA={{oozie_data_dir}}\n\n# Oozie Log4J configuration file to load from Oozie configuration directory\n#\n# export OOZIE_LOG4J_FILE=oozie-log4j.properties\n\n# Reload interval of the Log4J configuration file, in seconds\n#\n# export OOZIE_LOG4J_RELOAD=10\n\n# The port Oozie server runs\n#\nexport OOZIE_HTTP_PORT={{oozie_server_port}}\n\n# The admin port Oozie server runs\n#\nexport OOZIE_ADMIN_PORT={{oozie_server_admin_port}}\n\n# The host name Oozie server runs on\n#\n# export OOZIE_HTTP_HOSTNAME=hostname -f\n\n# The base URL for callback URLs to Oozie\n#\n# export OOZIE_BASE_URL=\"http://${OOZIE_HTTP_HOSTNAME}:${OOZIE_HTTP_PORT}/oozie\"\nexport JAVA_LIBRARY_PATH={{hadoop_lib_home}}/native/Linux-amd64-64\n\n# At least 1 minute of retry time to account for server downtime during\n# upgrade/downgrade\nexport OOZIE_CLIENT_OPTS=\"${OOZIE_CLIENT_OPTS} -Doozie.connection.retry.count=5 \"\n\n# This is needed so that Oozie does not run into OOM or GC Overhead limit\n# exceeded exceptions. If the oozie server is handling large number of\n# workflows/coordinator jobs, the memory settings may need to be revised\nexport CATALINA_OPTS=\"${CATALINA_OPTS} -Xmx2048m -XX:MaxPermSize=256m \"", "oozie_admin_port": "11001", "oozie_data_dir": "/mnt/hadoop/oozie/data", "oozie_database": "New Derby Database", "oozie_derby_database": "Derby", "oozie_existing_oracle_host": "", "oozie_existing_postgresql_host": "", "oozie_heapsize": "2048m", "oozie_hostname": "%HOSTGROUP::host_group_master_1%", "oozie_log_dir": "/var/log/oozie", "oozie_permsize": "256m", "oozie_pid_dir": "/var/run/oozie", "oozie_user": "oozie" } } }, { "oozie-log4j": { "properties": { "content": "\n#\n# Licensed to the Apache Software Foundation (ASF) under one\n# or more contributor license agreements. See the NOTICE file\n# distributed with this work for additional information\n# regarding copyright ownership. The ASF licenses this file\n# to you under the Apache License, Version 2.0 (the\n# \"License\"); you may not use this file except in compliance\n# with the License. You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License. 
See accompanying LICENSE file.\n#\n\n# If the Java System property 'oozie.log.dir' is not defined at Oozie start up time\n# XLogService sets its value to '${oozie.home}/logs'\n\nlog4j.appender.oozie=org.apache.log4j.DailyRollingFileAppender\nlog4j.appender.oozie.DatePattern='.'yyyy-MM-dd-HH\nlog4j.appender.oozie.File=${oozie.log.dir}/oozie.log\nlog4j.appender.oozie.Append=true\nlog4j.appender.oozie.layout=org.apache.log4j.PatternLayout\nlog4j.appender.oozie.layout.ConversionPattern=%d{ISO8601} %5p %c{1}:%L - SERVER[${oozie.instance.id}] %m%n\n\nlog4j.appender.oozieops=org.apache.log4j.DailyRollingFileAppender\nlog4j.appender.oozieops.DatePattern='.'yyyy-MM-dd\nlog4j.appender.oozieops.File=${oozie.log.dir}/oozie-ops.log\nlog4j.appender.oozieops.Append=true\nlog4j.appender.oozieops.layout=org.apache.log4j.PatternLayout\nlog4j.appender.oozieops.layout.ConversionPattern=%d{ISO8601} %5p %c{1}:%L - %m%n\n\nlog4j.appender.oozieinstrumentation=org.apache.log4j.DailyRollingFileAppender\nlog4j.appender.oozieinstrumentation.DatePattern='.'yyyy-MM-dd\nlog4j.appender.oozieinstrumentation.File=${oozie.log.dir}/oozie-instrumentation.log\nlog4j.appender.oozieinstrumentation.Append=true\nlog4j.appender.oozieinstrumentation.layout=org.apache.log4j.PatternLayout\nlog4j.appender.oozieinstrumentation.layout.ConversionPattern=%d{ISO8601} %5p %c{1}:%L - %m%n\n\nlog4j.appender.oozieaudit=org.apache.log4j.DailyRollingFileAppender\nlog4j.appender.oozieaudit.DatePattern='.'yyyy-MM-dd\nlog4j.appender.oozieaudit.File=${oozie.log.dir}/oozie-audit.log\nlog4j.appender.oozieaudit.Append=true\nlog4j.appender.oozieaudit.layout=org.apache.log4j.PatternLayout\nlog4j.appender.oozieaudit.layout.ConversionPattern=%d{ISO8601} %5p %c{1}:%L - %m%n\n\nlog4j.appender.openjpa=org.apache.log4j.DailyRollingFileAppender\nlog4j.appender.openjpa.DatePattern='.'yyyy-MM-dd\nlog4j.appender.openjpa.File=${oozie.log.dir}/oozie-jpa.log\nlog4j.appender.openjpa.Append=true\nlog4j.appender.openjpa.layout=org.apache.log4j.PatternLayout\nlog4j.appender.openjpa.layout.ConversionPattern=%d{ISO8601} %5p %c{1}:%L - %m%n\n\nlog4j.logger.openjpa=INFO, openjpa\nlog4j.logger.oozieops=INFO, oozieops\nlog4j.logger.oozieinstrumentation=ALL, oozieinstrumentation\nlog4j.logger.oozieaudit=ALL, oozieaudit\nlog4j.logger.org.apache.oozie=INFO, oozie\nlog4j.logger.org.apache.hadoop=WARN, oozie\nlog4j.logger.org.mortbay=WARN, oozie\nlog4j.logger.org.hsqldb=WARN, oozie\nlog4j.logger.org.apache.hadoop.security.authentication.server=INFO, oozie" } } }, { "oozie-site": { "properties": { "oozie.authentication.kerberos.name.rules": "\n RULE:2:$1@$0s/./TODO-MAPREDUSER/\n RULE:2:$1@$0s/./TODO-HDFSUSER/\n RULE:2:$1@$0s/./TODO-HBASE-USER/\n RULE:2:$1@$0s/._/TODO-HBASE-USER/\n DEFAULT", "oozie.authentication.simple.anonymous.allowed": "true", "oozie.authentication.type": "simple", "oozie.base.url": "http://%HOSTGROUP::host_group_master_1%:11000/oozie", "oozie.credentials.credentialclasses": "hcat=org.apache.oozie.action.hadoop.HCatCredentials", "oozie.db.schema.name": "oozie", "oozie.service.ActionService.executor.ext.classes": "\n org.apache.oozie.action.email.EmailActionExecutor,\n org.apache.oozie.action.hadoop.HiveActionExecutor,\n org.apache.oozie.action.hadoop.ShellActionExecutor,\n org.apache.oozie.action.hadoop.SqoopActionExecutor,\n org.apache.oozie.action.hadoop.DistcpActionExecutor", "oozie.service.AuthorizationService.security.enabled": "true", "oozie.service.CallableQueueService.callable.concurrency": "3", "oozie.service.CallableQueueService.queue.size": "1000", 
"oozie.service.CallableQueueService.threads": "10", "oozie.service.ELService.ext.functions.coord-action-create": "\n now=org.apache.oozie.extensions.OozieELExtensions#ph2_now,\n today=org.apache.oozie.extensions.OozieELExtensions#ph2_today,\n yesterday=org.apache.oozie.extensions.OozieELExtensions#ph2_yesterday,\n currentMonth=org.apache.oozie.extensions.OozieELExtensions#ph2_currentMonth,\n lastMonth=org.apache.oozie.extensions.OozieELExtensions#ph2_lastMonth,\n currentYear=org.apache.oozie.extensions.OozieELExtensions#ph2_currentYear,\n lastYear=org.apache.oozie.extensions.OozieELExtensions#ph2_lastYear,\n latest=org.apache.oozie.coord.CoordELFunctions#ph2_coord_latest_echo,\n future=org.apache.oozie.coord.CoordELFunctions#ph2_coord_future_echo,\n formatTime=org.apache.oozie.coord.CoordELFunctions#ph2_coord_formatTime,\n user=org.apache.oozie.coord.CoordELFunctions#coord_user", "oozie.service.ELService.ext.functions.coord-action-create-inst": "\n now=org.apache.oozie.extensions.OozieELExtensions#ph2_now_inst,\n today=org.apache.oozie.extensions.OozieELExtensions#ph2_today_inst,\n yesterday=org.apache.oozie.extensions.OozieELExtensions#ph2_yesterday_inst,\n currentMonth=org.apache.oozie.extensions.OozieELExtensions#ph2_currentMonth_inst,\n lastMonth=org.apache.oozie.extensions.OozieELExtensions#ph2_lastMonth_inst,\n currentYear=org.apache.oozie.extensions.OozieELExtensions#ph2_currentYear_inst,\n lastYear=org.apache.oozie.extensions.OozieELExtensions#ph2_lastYear_inst,\n latest=org.apache.oozie.coord.CoordELFunctions#ph2_coord_latest_echo,\n future=org.apache.oozie.coord.CoordELFunctions#ph2_coord_future_echo,\n formatTime=org.apache.oozie.coord.CoordELFunctions#ph2_coord_formatTime,\n user=org.apache.oozie.coord.CoordELFunctions#coord_user", "oozie.service.ELService.ext.functions.coord-action-start": "\n now=org.apache.oozie.extensions.OozieELExtensions#ph2_now,\n today=org.apache.oozie.extensions.OozieELExtensions#ph2_today,\n yesterday=org.apache.oozie.extensions.OozieELExtensions#ph2_yesterday,\n currentMonth=org.apache.oozie.extensions.OozieELExtensions#ph2_currentMonth,\n lastMonth=org.apache.oozie.extensions.OozieELExtensions#ph2_lastMonth,\n currentYear=org.apache.oozie.extensions.OozieELExtensions#ph2_currentYear,\n lastYear=org.apache.oozie.extensions.OozieELExtensions#ph2_lastYear,\n latest=org.apache.oozie.coord.CoordELFunctions#ph3_coord_latest,\n future=org.apache.oozie.coord.CoordELFunctions#ph3_coord_future,\n dataIn=org.apache.oozie.extensions.OozieELExtensions#ph3_dataIn,\n instanceTime=org.apache.oozie.coord.CoordELFunctions#ph3_coord_nominalTime,\n dateOffset=org.apache.oozie.coord.CoordELFunctions#ph3_coord_dateOffset,\n formatTime=org.apache.oozie.coord.CoordELFunctions#ph3_coord_formatTime,\n user=org.apache.oozie.coord.CoordELFunctions#coord_user", "oozie.service.ELService.ext.functions.coord-job-submit-data": "\n now=org.apache.oozie.extensions.OozieELExtensions#ph1_now_echo,\n today=org.apache.oozie.extensions.OozieELExtensions#ph1_today_echo,\n yesterday=org.apache.oozie.extensions.OozieELExtensions#ph1_yesterday_echo,\n currentMonth=org.apache.oozie.extensions.OozieELExtensions#ph1_currentMonth_echo,\n lastMonth=org.apache.oozie.extensions.OozieELExtensions#ph1_lastMonth_echo,\n currentYear=org.apache.oozie.extensions.OozieELExtensions#ph1_currentYear_echo,\n lastYear=org.apache.oozie.extensions.OozieELExtensions#ph1_lastYear_echo,\n dataIn=org.apache.oozie.extensions.OozieELExtensions#ph1_dataIn_echo,\n 
instanceTime=org.apache.oozie.coord.CoordELFunctions#ph1_coord_nominalTime_echo_wrap,\n formatTime=org.apache.oozie.coord.CoordELFunctions#ph1_coord_formatTime_echo,\n dateOffset=org.apache.oozie.coord.CoordELFunctions#ph1_coord_dateOffset_echo,\n user=org.apache.oozie.coord.CoordELFunctions#coord_user", "oozie.service.ELService.ext.functions.coord-job-submit-instances": "\n now=org.apache.oozie.extensions.OozieELExtensions#ph1_now_echo,\n today=org.apache.oozie.extensions.OozieELExtensions#ph1_today_echo,\n yesterday=org.apache.oozie.extensions.OozieELExtensions#ph1_yesterday_echo,\n currentMonth=org.apache.oozie.extensions.OozieELExtensions#ph1_currentMonth_echo,\n lastMonth=org.apache.oozie.extensions.OozieELExtensions#ph1_lastMonth_echo,\n currentYear=org.apache.oozie.extensions.OozieELExtensions#ph1_currentYear_echo,\n lastYear=org.apache.oozie.extensions.OozieELExtensions#ph1_lastYear_echo,\n formatTime=org.apache.oozie.coord.CoordELFunctions#ph1_coord_formatTime_echo,\n latest=org.apache.oozie.coord.CoordELFunctions#ph2_coord_latest_echo,\n future=org.apache.oozie.coord.CoordELFunctions#ph2_coord_future_echo", "oozie.service.ELService.ext.functions.coord-sla-create": "\n instanceTime=org.apache.oozie.coord.CoordELFunctions#ph2_coord_nominalTime,\n user=org.apache.oozie.coord.CoordELFunctions#coord_user", "oozie.service.ELService.ext.functions.coord-sla-submit": "\n instanceTime=org.apache.oozie.coord.CoordELFunctions#ph1_coord_nominalTime_echo_fixed,\n user=org.apache.oozie.coord.CoordELFunctions#coorduser", "oozie.service.HadoopAccessorService.hadoop.configurations": "=/etc/hadoop/conf", "oozie.service.HadoopAccessorService.kerberos.enabled": "false", "oozie.service.HadoopAccessorService.supported.filesystems": "", "oozie.service.JPAService.create.db.schema": "false", "oozie.service.JPAService.jdbc.driver": "org.apache.derby.jdbc.EmbeddedDriver", "oozie.service.JPAService.jdbc.username": "oozie", "oozie.service.JPAService.pool.max.active.conn": "10", "oozie.service.ProxyUserService.proxyuser.falcon.groups": "", "oozie.service.ProxyUserService.proxyuser.falcon.hosts": "_", "oozie.service.PurgeService.older.than": "30", "oozie.service.PurgeService.purge.interval": "3600", "oozie.service.SchemaService.wf.ext.schemas": "shell-action-0.1.xsd,shell-action-0.2.xsd,shell-action-0.3.xsd,email-action-0.1.xsd,email-action-0.2.xsd,hive-action-0.2.xsd,hive-action-0.3.xsd,hive-action-0.4.xsd,hive-action-0.5.xsd,sqoop-action-0.2.xsd,sqoop-action-0.3.xsd,sqoop-action-0.4.xsd,ssh-action-0.1.xsd,ssh-action-0.2.xsd,distcp-action-0.1.xsd,distcp-action-0.2.xsd,oozie-sla-0.1.xsd,oozie-sla-0.2.xsd", "oozie.service.URIHandlerService.uri.handlers": "org.apache.oozie.dependency.FSURIHandler,org.apache.oozie.dependency.HCatURIHandler", "oozie.service.WorkflowAppService.system.libpath": "/user/${user.name}/share/lib", "oozie.service.coord.check.maximum.frequency": "false", "oozie.service.coord.normal.default.timeout": "120", "oozie.service.coord.push.check.requeue.interval": "30000", "oozie.services": "\n org.apache.oozie.service.SchedulerService,\n org.apache.oozie.service.InstrumentationService,\n org.apache.oozie.service.MemoryLocksService,\n org.apache.oozie.service.UUIDService,\n org.apache.oozie.service.ELService,\n org.apache.oozie.service.AuthorizationService,\n org.apache.oozie.service.UserGroupInformationService,\n org.apache.oozie.service.HadoopAccessorService,\n org.apache.oozie.service.JobsConcurrencyService,\n org.apache.oozie.service.URIHandlerService,\n 
org.apache.oozie.service.DagXLogInfoService,\n org.apache.oozie.service.SchemaService,\n org.apache.oozie.service.LiteWorkflowAppService,\n org.apache.oozie.service.JPAService,\n org.apache.oozie.service.StoreService,\n org.apache.oozie.service.SLAStoreService,\n org.apache.oozie.service.DBLiteWorkflowStoreService,\n org.apache.oozie.service.CallbackService,\n org.apache.oozie.service.ShareLibService,\n org.apache.oozie.service.CallableQueueService,\n org.apache.oozie.service.ActionService,\n org.apache.oozie.service.ActionCheckerService,\n org.apache.oozie.service.RecoveryService,\n org.apache.oozie.service.PurgeService,\n org.apache.oozie.service.CoordinatorEngineService,\n org.apache.oozie.service.BundleEngineService,\n org.apache.oozie.service.DagEngineService,\n org.apache.oozie.service.CoordMaterializeTriggerService,\n org.apache.oozie.service.StatusTransitService,\n org.apache.oozie.service.PauseTransitService,\n org.apache.oozie.service.GroupsService,\n org.apache.oozie.service.ProxyUserService,\n org.apache.oozie.service.XLogStreamingService,\n org.apache.oozie.service.JvmPauseMonitorService", "oozie.services.ext": "org.apache.oozie.service.JMSAccessorService,org.apache.oozie.service.PartitionDependencyManagerService,org.apache.oozie.service.HCatAccessorService", "oozie.system.id": "oozie-${user.name}", "oozie.systemmode": "NORMAL", "use.system.libpath.for.mapreduce.and.pig.jobs": "false" } } }, { "pig-env": { "properties": { "content": "\nJAVA_HOME={{java64_home}}\nHADOOP_HOME=${HADOOP_HOME:-{{hadoop_home}}}\n\nif [ -d \"/usr/lib/tez\" ]; then\n PIG_OPTS=\"$PIG_OPTS -Dmapreduce.framework.name=yarn\"\nfi" } } }, { "pig-log4j": { "properties": { "content": "\n#\n#\n# Licensed to the Apache Software Foundation (ASF) under one\n# or more contributor license agreements. See the NOTICE file\n# distributed with this work for additional information\n# regarding copyright ownership. The ASF licenses this file\n# to you under the Apache License, Version 2.0 (the\n# \"License\"); you may not use this file except in compliance\n# with the License. You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing,\n# software distributed under the License is distributed on an\n# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n# KIND, either express or implied. See the License for the\n# specific language governing permissions and limitations\n# under the License.\n#\n#\n#\n\n# ** Set root logger level to DEBUG and its only appender to A.\nlog4j.logger.org.apache.pig=info, A\n\n# *** A is set to be a ConsoleAppender.\nlog4j.appender.A=org.apache.log4j.ConsoleAppender\n# ** A uses PatternLayout.\nlog4j.appender.A.layout=org.apache.log4j.PatternLayout\nlog4j.appender.A.layout.ConversionPattern=%-4r [%t] %-5p %c %x - %m%n" } } }, { "pig-properties": { "properties": { "content": "\n# Licensed to the Apache Software Foundation (ASF) under one\n# or more contributor license agreements. See the NOTICE file\n# distributed with this work for additional information\n# regarding copyright ownership. The ASF licenses this file\n# to you under the Apache License, Version 2.0 (the\n# \"License\"); you may not use this file except in compliance\n# with the License. 
You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing,\n# software distributed under the License is distributed on an\n# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n# KIND, either express or implied. See the License for the\n# specific language governing permissions and limitations\n# under the License.\n\n# Pig default configuration file. All values can be overwritten by pig.properties and command line arguments.\n# see bin/pig -help\n\n# brief logging (no timestamps)\nbrief=false\n\n# debug level, INFO is default\ndebug=INFO\n\n# verbose print all log messages to screen (default to print only INFO and above to screen)\nverbose=false\n\n# exectype local|mapreduce, mapreduce is default\nexectype=mapreduce\n\n# Enable insertion of information about script into hadoop job conf \npig.script.info.enabled=true\n\n# Do not spill temp files smaller than this size (bytes)\npig.spill.size.threshold=5000000\n\n# EXPERIMENT: Activate garbage collection when spilling a file bigger than this size (bytes)\n# This should help reduce the number of files being spilled.\npig.spill.gc.activation.size=40000000\n\n# the following two parameters are to help estimate the reducer number\npig.exec.reducers.bytes.per.reducer=1000000000\npig.exec.reducers.max=999\n\n# Temporary location to store the intermediate data.\npig.temp.dir=/tmp/\n\n# Threshold for merging FRJoin fragment files\npig.files.concatenation.threshold=100\npig.optimistic.files.concatenation=false;\n\npig.disable.counter=false\n\n# Avoid pig failures when multiple jobs write to the same location\npig.location.check.strict=false\n\nhcat.bin=/usr/bin/hcat" } } }, { "ranger-hbase-plugin-properties": { "properties": { "REPOSITORY_CONFIG_PASSWORD": "hbase", "REPOSITORY_CONFIG_USERNAME": "hbase", "SSL_KEYSTORE_FILE_PATH": "/etc/hadoop/conf/ranger-plugin-keystore.jks", "SSL_KEYSTORE_PASSWORD": "myKeyFilePassword", "SSL_TRUSTSTORE_FILE_PATH": "/etc/hadoop/conf/ranger-plugin-truststore.jks", "SSL_TRUSTSTORE_PASSWORD": "changeit", "UPDATE_XAPOLICIES_ON_GRANT_REVOKE": "true", "XAAUDIT.DB.IS_ENABLED": "true", "XAAUDIT.HDFS.DESTINATION_DIRECTORY": "hdfs://REPLACENAME_NODE_HOST:8020/ranger/audit/%app-type%/%time:yyyyMMdd%", "XAAUDIT.HDFS.DESTINTATION_FILE": "%hostname%-audit.log", "XAAUDIT.HDFS.DESTINTATION_FLUSH_INTERVAL_SECONDS": "900", "XAAUDIT.HDFS.DESTINTATION_OPEN_RETRY_INTERVAL_SECONDS": "60", "XAAUDIT.HDFS.DESTINTATION_ROLLOVER_INTERVAL_SECONDS": "86400", "XAAUDIT.HDFS.IS_ENABLED": "false", "XAAUDIT.HDFS.LOCAL_ARCHIVE_DIRECTORY": "REPLACELOG_DIR/hadoop/%app-type%/audit/archive", "XAAUDIT.HDFS.LOCAL_ARCHIVE_MAX_FILE_COUNT": "10", "XAAUDIT.HDFS.LOCAL_BUFFER_DIRECTORY": "REPLACELOG_DIR/hadoop/%app-type%/audit", "XAAUDIT.HDFS.LOCAL_BUFFER_FILE": "%time:yyyyMMdd-HHmm.ss%.log", "XAAUDIT.HDFS.LOCAL_BUFFER_FLUSH_INTERVAL_SECONDS": "60", "XAAUDIT.HDFS.LOCAL_BUFFER_ROLLOVER_INTERVAL_SECONDS": "600", "common.name.for.certificate": "-", "policy_user": "ambari-qa", "ranger-hbase-plugin-enabled": "No" } } }, { "ranger-hdfs-plugin-properties": { "properties": { "REPOSITORY_CONFIG_PASSWORD": "hadoop", "REPOSITORY_CONFIG_USERNAME": "hadoop", "SSL_KEYSTORE_FILE_PATH": "/etc/hadoop/conf/ranger-plugin-keystore.jks", "SSL_KEYSTORE_PASSWORD": "myKeyFilePassword", "SSL_TRUSTSTORE_FILE_PATH": "/etc/hadoop/conf/ranger-plugin-truststore.jks", "SSL_TRUSTSTORE_PASSWORD": "changeit", "XAAUDIT.DB.IS_ENABLED": "true", "XAAUDIT.HDFS.DESTINATION_DIRECTORY": 
"hdfs://REPLACENAME_NODE_HOST:8020/ranger/audit/%app-type%/%time:yyyyMMdd%", "XAAUDIT.HDFS.DESTINTATION_FILE": "%hostname%-audit.log", "XAAUDIT.HDFS.DESTINTATION_FLUSH_INTERVAL_SECONDS": "900", "XAAUDIT.HDFS.DESTINTATION_OPEN_RETRY_INTERVAL_SECONDS": "60", "XAAUDIT.HDFS.DESTINTATION_ROLLOVER_INTERVAL_SECONDS": "86400", "XAAUDIT.HDFS.IS_ENABLED": "false", "XAAUDIT.HDFS.LOCAL_ARCHIVE_DIRECTORY": "REPLACELOG_DIR/hadoop/%app-type%/audit/archive", "XAAUDIT.HDFS.LOCAL_ARCHIVE_MAX_FILE_COUNT": "10", "XAAUDIT.HDFS.LOCAL_BUFFER_DIRECTORY": "REPLACELOG_DIR/hadoop/%app-type%/audit", "XAAUDIT.HDFS.LOCAL_BUFFER_FILE": "%time:yyyyMMdd-HHmm.ss%.log", "XAAUDIT.HDFS.LOCAL_BUFFER_FLUSH_INTERVAL_SECONDS": "60", "XAAUDIT.HDFS.LOCAL_BUFFER_ROLLOVER_INTERVAL_SECONDS": "600", "common.name.for.certificate": "-", "hadoop.rpc.protection": "-", "policy_user": "ambari-qa", "ranger-hdfs-plugin-enabled": "No" } } }, { "ranger-hive-plugin-properties": { "properties": { "REPOSITORY_CONFIG_PASSWORD": "hive", "REPOSITORY_CONFIG_USERNAME": "hive", "SSL_KEYSTORE_FILE_PATH": "/etc/hadoop/conf/ranger-plugin-keystore.jks", "SSL_KEYSTORE_PASSWORD": "myKeyFilePassword", "SSL_TRUSTSTORE_FILE_PATH": "/etc/hadoop/conf/ranger-plugin-truststore.jks", "SSL_TRUSTSTORE_PASSWORD": "changeit", "UPDATE_XAPOLICIES_ON_GRANT_REVOKE": "true", "XAAUDIT.DB.IS_ENABLED": "true", "XAAUDIT.HDFS.DESTINATION_DIRECTORY": "hdfs://REPLACENAME_NODE_HOST:8020/ranger/audit/%app-type%/%time:yyyyMMdd%", "XAAUDIT.HDFS.DESTINTATION_FILE": "%hostname%-audit.log", "XAAUDIT.HDFS.DESTINTATION_FLUSH_INTERVAL_SECONDS": "900", "XAAUDIT.HDFS.DESTINTATION_OPEN_RETRY_INTERVAL_SECONDS": "60", "XAAUDIT.HDFS.DESTINTATION_ROLLOVER_INTERVAL_SECONDS": "86400", "XAAUDIT.HDFS.IS_ENABLED": "false", "XAAUDIT.HDFS.LOCAL_ARCHIVE_DIRECTORY": "REPLACELOG_DIR/hadoop/%app-type%/audit/archive", "XAAUDIT.HDFS.LOCAL_ARCHIVE_MAX_FILE_COUNT": "10", "XAAUDIT.HDFS.LOCAL_BUFFER_DIRECTORY": "REPLACELOG_DIR/hadoop/%app-type%/audit", "XAAUDIT.HDFS.LOCAL_BUFFER_FILE": "%time:yyyyMMdd-HHmm.ss%.log", "XAAUDIT.HDFS.LOCAL_BUFFER_FLUSH_INTERVAL_SECONDS": "60", "XAAUDIT.HDFS.LOCAL_BUFFER_ROLLOVER_INTERVAL_SECONDS": "600", "common.name.for.certificate": "-", "jdbc.driverClassName": "org.apache.hive.jdbc.HiveDriver", "policy_user": "ambari-qa", "ranger-hive-plugin-enabled": "No" } } }, { "ranger-knox-plugin-properties": { "properties": { "KNOX_HOME": "/usr/hdp/current/knox-server", "REPOSITORY_CONFIG_PASSWORD": "admin-password", "REPOSITORY_CONFIG_USERNAME": "admin", "SSL_KEYSTORE_FILE_PATH": "/etc/hadoop/conf/ranger-plugin-keystore.jks", "SSL_KEYSTORE_PASSWORD": "myKeyFilePassword", "SSL_TRUSTSTORE_FILE_PATH": "/etc/hadoop/conf/ranger-plugin-truststore.jks", "SSL_TRUSTSTORE_PASSWORD": "changeit", "XAAUDIT.DB.IS_ENABLED": "true", "XAAUDIT.HDFS.DESTINATION_DIRECTORY": "hdfs://REPLACENAME_NODE_HOST:8020/ranger/audit/%app-type%/%time:yyyyMMdd%", "XAAUDIT.HDFS.DESTINTATION_FILE": "%hostname%-audit.log", "XAAUDIT.HDFS.DESTINTATION_FLUSH_INTERVAL_SECONDS": "900", "XAAUDIT.HDFS.DESTINTATION_OPEN_RETRY_INTERVAL_SECONDS": "60", "XAAUDIT.HDFS.DESTINTATION_ROLLOVER_INTERVAL_SECONDS": "86400", "XAAUDIT.HDFS.IS_ENABLED": "false", "XAAUDIT.HDFS.LOCAL_ARCHIVE_DIRECTORY": "REPLACELOG_DIR/hadoop/%app-type%/audit/archive", "XAAUDIT.HDFS.LOCAL_ARCHIVE_MAX_FILE_COUNT": "10", "XAAUDIT.HDFS.LOCAL_BUFFER_DIRECTORY": "REPLACELOG_DIR/hadoop/%app-type%/audit", "XAAUDIT.HDFS.LOCAL_BUFFER_FILE": "%time:yyyyMMdd-HHmm.ss%.log", "XAAUDIT.HDFS.LOCAL_BUFFER_FLUSH_INTERVAL_SECONDS": "60", 
"XAAUDIT.HDFS.LOCAL_BUFFER_ROLLOVER_INTERVAL_SECONDS": "600", "common.name.for.certificate": "-", "policy_user": "ambari-qa", "ranger-knox-plugin-enabled": "No" } } }, { "slider-client": { "properties": null } }, { "slider-env": { "properties": { "content": "\n# Set Slider-specific environment variables here.\n\n# The only required environment variable is JAVA_HOME. All others are\n# optional. When running a distributed configuration it is best to\n# set JAVA_HOME in this file, so that it is correctly defined on\n# remote nodes.\n\n# The java implementation to use. Required.\nexport JAVA_HOME={{java64_home}}\n# The hadoop conf directory. Optional as slider-client.xml can be edited to add properties.\nexport HADOOP_CONF_DIR={{hadoop_conf_dir}}" } } }, { "slider-log4j": { "properties": { "content": "\n# Licensed to the Apache Software Foundation (ASF) under one\n# or more contributor license agreements. See the NOTICE file\n# distributed with this work for additional information\n# regarding copyright ownership. The ASF licenses this file\n# to you under the Apache License, Version 2.0 (the\n# \"License\"); you may not use this file except in compliance\n# with the License. You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\n# Define some default values that can be overridden by system properties\nlog4j.rootLogger=INFO,stdout\nlog4j.threshhold=ALL\nlog4j.appender.stdout=org.apache.log4j.ConsoleAppender\nlog4j.appender.stdout.layout=org.apache.log4j.PatternLayout\n\n# log layout skips stack-trace creation operations by avoiding line numbers and method\nlog4j.appender.stdout.layout.ConversionPattern=%d{ISO8601} [%t] %-5p %c{2} - %m%n\n\n# debug edition is much more expensive\n#log4j.appender.stdout.layout.ConversionPattern=%d{ISO8601} [%t] %-5p %c{2} (%F:%M(%L)) - %m%n\n\n\nlog4j.appender.subprocess=org.apache.log4j.ConsoleAppender\nlog4j.appender.subprocess.layout=org.apache.log4j.PatternLayout\nlog4j.appender.subprocess.layout.ConversionPattern=[%c{1}]: %m%n\n#log4j.logger.org.apache.slider.yarn.appmaster.SliderAppMasterer.master=INFO,subprocess\n\n# for debugging Slider\n#log4j.logger.org.apache.slider=DEBUG\n#log4j.logger.org.apache.slider=DEBUG\n\n# uncomment to debug service lifecycle issues\n#log4j.logger.org.apache.hadoop.yarn.service.launcher=DEBUG\n#log4j.logger.org.apache.hadoop.yarn.service=DEBUG\n\n# uncomment for YARN operations\n#log4j.logger.org.apache.hadoop.yarn.client=DEBUG\n\n# uncomment this to debug security problems\n#log4j.logger.org.apache.hadoop.security=DEBUG\n\n#crank back on some noise\nlog4j.logger.org.apache.hadoop.util.NativeCodeLoader=ERROR\nlog4j.logger.org.apache.hadoop.hdfs=WARN\n\n\nlog4j.logger.org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor=WARN\nlog4j.logger.org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl=WARN\nlog4j.logger.org.apache.zookeeper=WARN" } } }, { "spark-defaults": { "properties": { "spark.driver.extraJavaOptions": "", "spark.history.kerberos.keytab": "none", "spark.history.kerberos.principal": "none", "spark.history.provider": "org.apache.spark.deploy.yarn.history.YarnHistoryProvider", "spark.history.ui.port": "18080", 
"spark.yarn.am.extraJavaOptions": "", "spark.yarn.applicationMaster.waitTries": "10", "spark.yarn.containerLauncherMaxThreads": "25", "spark.yarn.driver.memoryOverhead": "384", "spark.yarn.executor.memoryOverhead": "384", "spark.yarn.max.executor.failures": "3", "spark.yarn.preserve.staging.files": "false", "spark.yarn.queue": "default", "spark.yarn.scheduler.heartbeat.interval-ms": "5000", "spark.yarn.submit.file.replication": "3" } } }, { "spark-env": { "properties": { "content": "\n#!/usr/bin/env bash\n\n# This file is sourced when running various Spark programs.\n# Copy it as spark-env.sh and edit that to configure Spark for your site.\n\n# Options read in YARN client mode\n#SPARK_EXECUTOR_INSTANCES=\"2\" #Number of workers to start (Default: 2)\n#SPARK_EXECUTOR_CORES=\"1\" #Number of cores for the workers (Default: 1).\n#SPARK_EXECUTOR_MEMORY=\"1G\" #Memory per Worker (e.g. 1000M, 2G) (Default: 1G)\n#SPARK_DRIVER_MEMORY=\"512 Mb\" #Memory for Master (e.g. 1000M, 2G) (Default: 512 Mb)\n#SPARK_YARN_APP_NAME=\"spark\" #The name of your application (Default: Spark)\n#SPARK_YARN_QUEUE=\"~@~Xdefault~@~Y\" #The hadoop queue to use for allocation requests (Default: @~Xdefault~@~Y)\n#SPARK_YARN_DIST_FILES=\"\" #Comma separated list of files to be distributed with the job.\n#SPARK_YARN_DIST_ARCHIVES=\"\" #Comma separated list of archives to be distributed with the job.\n\n# Generic options for the daemons used in the standalone deploy mode\n\n# Alternate conf dir. (Default: ${SPARK_HOME}/conf)\nexport SPARK_CONF_DIR=${SPARK_HOME:-{{spark_home}}}/conf\n\n# Where log files are stored.(Default:${SPARK_HOME}/logs)\n#export SPARK_LOG_DIR=${SPARK_HOME:-{{spark_home}}}/logs\nexport SPARK_LOG_DIR={{spark_log_dir}}\n\n# Where the pid file is stored. (Default: /tmp)\nexport SPARK_PID_DIR={{spark_pid_dir}}\n\n# A string representing this instance of spark.(Default: $USER)\nSPARK_IDENT_STRING=$USER\n\n# The scheduling priority for daemons. (Default: 0)\nSPARK_NICENESS=0\n\nexport HADOOP_HOME=${HADOOP_HOME:-{{hadoop_home}}}\nexport HADOOP_CONF_DIR=${HADOOP_CONF_DIR:-{{hadoop_conf_dir}}}\n\n# The java implementation to use.\nexport JAVA_HOME={{java_home}}\n\nif [ -d \"/etc/tez/conf/\" ]; then\n export TEZ_CONF_DIR=/etc/tez/conf\nelse\n export TEZ_CONF_DIR=\nfi", "spark_group": "spark", "spark_log_dir": "/var/log/spark", "spark_pid_dir": "/var/run/spark", "sparkuser": "spark" } } }, { "spark-javaopts-properties": { "properties": { "content": " " } } }, { "spark-log4j-properties": { "properties": { "content": "\n# Set everything to be logged to the console\nlog4j.rootCategory=INFO, console\nlog4j.appender.console=org.apache.log4j.ConsoleAppender\nlog4j.appender.console.target=System.err\nlog4j.appender.console.layout=org.apache.log4j.PatternLayout\nlog4j.appender.console.layout.ConversionPattern=%d{yy/MM/dd HH:mm:ss} %p %c{1}: %m%n\n\n# Settings to quiet third party logs that are too verbose\nlog4j.logger.org.eclipse.jetty=WARN\nlog4j.logger.org.eclipse.jetty.util.component.AbstractLifeCycle=ERROR\nlog4j.logger.org.apache.spark.repl.SparkIMain$exprTyper=INFO\nlog4j.logger.org.apache.spark.repl.SparkILoop$SparkILoopInterpreter=INFO" } } }, { "spark-metrics-properties": { "properties": { "content": "\n# syntax: [instance].sink|source.[name].[options]=[value]\n\n# This file configures Spark's internal metrics system. 
The metrics system is\n# divided into instances which correspond to internal components.\n# Each instance can be configured to report its metrics to one or more sinks.\n# Accepted values for [instance] are \"master\", \"worker\", \"executor\", \"driver\",\n# and \"applications\". A wild card \"\" can be used as an instance name, in\n# which case all instances will inherit the supplied property.\n#\n# Within an instance, a \"source\" specifies a particular set of grouped metrics.\n# there are two kinds of sources:\n# 1. Spark internal sources, like MasterSource, WorkerSource, etc, which will\n# collect a Spark component's internal state. Each instance is paired with a\n# Spark source that is added automatically.\n# 2. Common sources, like JvmSource, which will collect low level state.\n# These can be added through configuration options and are then loaded\n# using reflection.\n#\n# A \"sink\" specifies where metrics are delivered to. Each instance can be\n# assigned one or more sinks.\n#\n# The sink|source field specifies whether the property relates to a sink or\n# source.\n#\n# The [name] field specifies the name of source or sink.\n#\n# The [options] field is the specific property of this source or sink. The\n# source or sink is responsible for parsing this property.\n#\n# Notes:\n# 1. To add a new sink, set the \"class\" option to a fully qualified class\n# name (see examples below).\n# 2. Some sinks involve a polling period. The minimum allowed polling period\n# is 1 second.\n# 3. Wild card properties can be overridden by more specific properties.\n# For example, master.sink.console.period takes precedence over\n# .sink.console.period.\n# 4. A metrics specific configuration\n# \"spark.metrics.conf=${SPARK_HOME}/conf/metrics.properties\" should be\n# added to Java properties using -Dspark.metrics.conf=xxx if you want to\n# customize metrics system. You can also put the file in ${SPARKHOME}/conf\n# and it will be loaded automatically.\n# 5. MetricsServlet is added by default as a sink in master, worker and client\n# driver, you can send http request \"/metrics/json\" to get a snapshot of all the\n# registered metrics in json format. For master, requests \"/metrics/master/json\" and\n# \"/metrics/applications/json\" can be sent seperately to get metrics snapshot of\n# instance master and applications. MetricsServlet may not be configured by self.\n#\n\n## List of available sinks and their properties.\n\n# org.apache.spark.metrics.sink.ConsoleSink\n# Name: Default: Description:\n# period 10 Poll period\n# unit seconds Units of poll period\n\n# org.apache.spark.metrics.sink.CSVSink\n# Name: Default: Description:\n# period 10 Poll period\n# unit seconds Units of poll period\n# directory /tmp Where to store CSV files\n\n# org.apache.spark.metrics.sink.GangliaSink\n# Name: Default: Description:\n# host NONE Hostname or multicast group of Ganglia server\n# port NONE Port of Ganglia server(s)\n# period 10 Poll period\n# unit seconds Units of poll period\n# ttl 1 TTL of messages sent by Ganglia\n# mode multicast Ganglia network mode ('unicast' or 'multicast')\n\n# org.apache.spark.metrics.sink.JmxSink\n\n# org.apache.spark.metrics.sink.MetricsServlet\n# Name: Default: Description:\n# path VARIES Path prefix from the web server root\n# sample false Whether to show entire set of samples for histograms ('false' or 'true')\n#\n# * Default path is /metrics/json for all instances except the master. 
The master has two paths:\n# /metrics/aplications/json # App information\n# /metrics/master/json # Master information\n\n# org.apache.spark.metrics.sink.GraphiteSink\n# Name: Default: Description:\n# host NONE Hostname of Graphite server\n# port NONE Port of Graphite server\n# period 10 Poll period\n# unit seconds Units of poll period\n# prefix EMPTY STRING Prefix to prepend to metric name\n\n## Examples\n# Enable JmxSink for all instances by class name\n#.sink.jmx.class=org.apache.spark.metrics.sink.JmxSink\n\n# Enable ConsoleSink for all instances by class name\n#.sink.console.class=org.apache.spark.metrics.sink.ConsoleSink\n\n# Polling period for ConsoleSink\n#.sink.console.period=10\n\n#.sink.console.unit=seconds\n\n# Master instance overlap polling period\n#master.sink.console.period=15\n\n#master.sink.console.unit=seconds\n\n# Enable CsvSink for all instances\n#.sink.csv.class=org.apache.spark.metrics.sink.CsvSink\n\n# Polling period for CsvSink\n#.sink.csv.period=1\n\n#.sink.csv.unit=minutes\n\n# Polling directory for CsvSink\n#.sink.csv.directory=/tmp/\n\n# Worker instance overlap polling period\n#worker.sink.csv.period=10\n\n#worker.sink.csv.unit=minutes\n\n# Enable jvm source for instance master, worker, driver and executor\n#master.source.jvm.class=org.apache.spark.metrics.source.JvmSource\n\n#worker.source.jvm.class=org.apache.spark.metrics.source.JvmSource\n\n#driver.source.jvm.class=org.apache.spark.metrics.source.JvmSource\n\n#executor.source.jvm.class=org.apache.spark.metrics.source.JvmSource" } } }, { "sqoop-env": { "properties": { "content": "\n# Set Hadoop-specific environment variables here.\n\n#Set path to where bin/hadoop is available\n#Set path to where bin/hadoop is available\nexport HADOOP_HOME=${HADOOP_HOME:-{{hadoop_home}}}\n\n#set the path to where bin/hbase is available\nexport HBASE_HOME=${HBASE_HOME:-{{hbase_home}}}\n\n#Set the path to where bin/hive is available\nexport HIVE_HOME=${HIVE_HOME:-{{hive_home}}}\n\n#Set the path for where zookeper config dir is\nexport ZOOCFGDIR=${ZOOCFGDIR:-/etc/zookeeper/conf}\n\n# add libthrift in hive to sqoop class path first so hive imports work\nexport SQOOP_USER_CLASSPATH=\"`ls ${HIVE_HOME}/lib/libthrift-.jar 2> /dev/null`:${SQOOP_USER_CLASSPATH}\"", "sqoop_user": "sqoop" } } }, { "tez-env": { "properties": { "content": "\n# Tez specific configuration\nexport TEZ_CONF_DIR={{config_dir}}\n\n# Set HADOOP_HOME to point to a specific hadoop install directory\nexport HADOOP_HOME=${HADOOP_HOME:-{{hadoop_home}}}\n\n# The java implementation to use.\nexport JAVA_HOME={{java64_home}}", "tez_user": "tez" } } }, { "tez-site": { "properties": { "tez.am.am-rm.heartbeat.interval-ms.max": "250", "tez.am.container.idle.release-timeout-max.millis": "20000", "tez.am.container.idle.release-timeout-min.millis": "10000", "tez.am.container.reuse.enabled": "true", "tez.am.container.reuse.locality.delay-allocation-millis": "250", "tez.am.container.reuse.non-local-fallback.enabled": "false", "tez.am.container.reuse.rack-fallback.enabled": "true", "tez.am.launch.cluster-default.cmd-opts": "-server -Djava.net.preferIPv4Stack=true -Dhdp.version=${hdp.version}", "tez.am.launch.cmd-opts": "-XX:+PrintGCDetails -verbose:gc -XX:+PrintGCTimeStamps -XX:+UseNUMA -XX:+UseParallelGC", "tez.am.launch.env": "LD_LIBRARY_PATH=/usr/hdp/${hdp.version}/hadoop/lib/native:/usr/hdp/${hdp.version}/hadoop/lib/native/Linux-amd64-64", "tez.am.log.level": "INFO", "tez.am.max.app.attempts": "2", "tez.am.maxtaskfailures.per.node": "10", "tez.am.resource.memory.mb": "2048", 
"tez.am.tez-ui.history-url.template": "HISTORY_URL_BASE**?viewPath=%2F%23%2Ftez-app%2FAPPLICATION_ID", "tez.cluster.additional.classpath.prefix": "/usr/hdp/${hdp.version}/hadoop/lib/hadoop-lzo-0.6.0.${hdp.version}.jar:/etc/hadoop/conf/secure", "tez.counters.max": "2000", "tez.counters.max.groups": "1000", "tez.generate.debug.artifacts": "false", "tez.grouping.max-size": "1073741824", "tez.grouping.min-size": "16777216", "tez.grouping.split-waves": "1.7", "tez.history.logging.service.class": "org.apache.tez.dag.history.logging.ats.ATSHistoryLoggingService", "tez.lib.uris": "/hdp/apps/${hdp.version}/tez/tez.tar.gz", "tez.runtime.compress": "true", "tez.runtime.compress.codec": "org.apache.hadoop.io.compress.SnappyCodec", "tez.runtime.convert.user-payload.to.history-text": "false", "tez.runtime.io.sort.mb": "409", "tez.runtime.unordered.output.buffer.size-mb": "76", "tez.session.am.dag.submit.timeout.secs": "300", "tez.session.client.timeout.secs": "-1", "tez.shuffle-vertex-manager.max-src-fraction": "0.4", "tez.shuffle-vertex-manager.min-src-fraction": "0.2", "tez.staging-dir": "/tmp/${user.name}/staging", "tez.task.am.heartbeat.counter.interval-ms.max": "4000", "tez.task.get-task.sleep.interval-ms.max": "200", "tez.task.launch.cluster-default.cmd-opts": "-server -Djava.net.preferIPv4Stack=true -Dhdp.version=${hdp.version}", "tez.task.launch.cmd-opts": "-XX:+PrintGCDetails -verbose:gc -XX:+PrintGCTimeStamps -XX:+UseNUMA -XX:+UseParallelGC", "tez.task.launch.env": "LD_LIBRARY_PATH=/usr/hdp/${hdp.version}/hadoop/lib/native:/usr/hdp/${hdp.version}/hadoop/lib/native/Linux-amd64-64", "tez.task.max-events-per-heartbeat": "500", "tez.task.resource.memory.mb": "1024", "tez.use.cluster.hadoop-libs": "false" } } }, { "topology": { "properties": { "content": "\n \n\n \n\n \n authentication\n ShiroProvider\n true\n \n sessionTimeout\n 30\n \n \n main.ldapRealm\n org.apache.hadoop.gateway.shirorealm.KnoxLdapRealm\n \n \n main.ldapRealm.userDnTemplate\n uid={0},ou=people,dc=hadoop,dc=apache,dc=org\n \n \n main.ldapRealm.contextFactory.url\n ldap://{{knox_hostname}}:33389\n \n \n main.ldapRealm.contextFactory.authenticationMechanism\n simple\n \n \n urls./*\n authcBasic\n \n \n\n \n identity-assertion\n Default\n true\n \n\n \n authorization\n AclsAuthz\n true\n \n\n \n\n \n NAMENODE\n hdfs://{{namenode_host}}:{{namenode_rpc_port}}\n \n\n \n JOBTRACKER\n rpc://{{rm_host}}:{{jt_rpc_port}}\n \n\n \n WEBHDFS\n http://{{namenode_host}}:{{namenode_http_port}}/webhdfs\n \n\n \n WEBHCAT\n http://{{webhcat_server_host}}:{{templeton_port}}/templeton\n \n\n \n OOZIE\n http://{{oozie_server_host}}:{{oozie_server_port}}/oozie\n \n\n \n WEBHBASE\n http://{{hbase_master_host}}:{{hbase_master_port}}\n \n\n \n HIVE\n http://{{hive_server_host}}:{{hive_http_port}}/{{hive_http_path}}\n \n\n \n RESOURCEMANAGER\n http://{{rm_host}}:{{rm_port}}/ws\n \n " } } }, { "users-ldif": { "properties": { "content": "\n# Licensed to the Apache Software Foundation (ASF) under one\n# or more contributor license agreements. See the NOTICE file\n# distributed with this work for additional information\n# regarding copyright ownership. The ASF licenses this file\n# to you under the Apache License, Version 2.0 (the\n# \"License\"); you may not use this file except in compliance\n# with the License. 
You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nversion: 1\n\n# Please replace with site specific values\ndn: dc=hadoop,dc=apache,dc=org\nobjectclass: organization\nobjectclass: dcObject\no: Hadoop\ndc: hadoop\n\n# Entry for a sample people container\n# Please replace with site specific values\ndn: ou=people,dc=hadoop,dc=apache,dc=org\nobjectclass:top\nobjectclass:organizationalUnit\nou: people\n\n# Entry for a sample end user\n# Please replace with site specific values\ndn: uid=guest,ou=people,dc=hadoop,dc=apache,dc=org\nobjectclass:top\nobjectclass:person\nobjectclass:organizationalPerson\nobjectclass:inetOrgPerson\ncn: Guest\nsn: User\nuid: guest\nuserPassword:guest-password\n\n# entry for sample user admin\ndn: uid=admin,ou=people,dc=hadoop,dc=apache,dc=org\nobjectclass:top\nobjectclass:person\nobjectclass:organizationalPerson\nobjectclass:inetOrgPerson\ncn: Admin\nsn: Admin\nuid: admin\nuserPassword:admin-password\n\n# entry for sample user sam\ndn: uid=sam,ou=people,dc=hadoop,dc=apache,dc=org\nobjectclass:top\nobjectclass:person\nobjectclass:organizationalPerson\nobjectclass:inetOrgPerson\ncn: sam\nsn: sam\nuid: sam\nuserPassword:sam-password\n\n# entry for sample user tom\ndn: uid=tom,ou=people,dc=hadoop,dc=apache,dc=org\nobjectclass:top\nobjectclass:person\nobjectclass:organizationalPerson\nobjectclass:inetOrgPerson\ncn: tom\nsn: tom\nuid: tom\nuserPassword:tom-password\n\n# create FIRST Level groups branch\ndn: ou=groups,dc=hadoop,dc=apache,dc=org\nobjectclass:top\nobjectclass:organizationalUnit\nou: groups\ndescription: generic groups branch\n\n# create the analyst group under groups\ndn: cn=analyst,ou=groups,dc=hadoop,dc=apache,dc=org\nobjectclass:top\nobjectclass: groupofnames\ncn: analyst\ndescription:analyst group\nmember: uid=sam,ou=people,dc=hadoop,dc=apache,dc=org\nmember: uid=tom,ou=people,dc=hadoop,dc=apache,dc=org\n\n\n# create the scientist group under groups\ndn: cn=scientist,ou=groups,dc=hadoop,dc=apache,dc=org\nobjectclass:top\nobjectclass: groupofnames\ncn: scientist\ndescription: scientist group\nmember: uid=sam,ou=people,dc=hadoop,dc=apache,dc=org" } } }, { "webhcat-env": { "properties": { "content": "\n# The file containing the running pid\nPID_FILE={{webhcat_pid_file}}\n\nTEMPLETON_LOG_DIR={{templeton_log_dir}}/\n\n\nWEBHCAT_LOG_DIR={{templeton_log_dir}}/\n\n# The console error log\nERROR_LOG={{templeton_log_dir}}/webhcat-console-error.log\n\n# The console log\nCONSOLE_LOG={{templeton_log_dir}}/webhcat-console.log\n\n#TEMPLETON_JAR=templeton_jar_name\n\n#HADOOP_PREFIX=hadoop_prefix\n\n#HCAT_PREFIX=hive_prefix\n\n# Set HADOOP_HOME to point to a specific hadoop install directory\nexport HADOOP_HOME={{hadoop_home}}" } } }, { "webhcat-log4j": { "properties": { "content": "\n# Licensed to the Apache Software Foundation (ASF) under one\n# or more contributor license agreements. See the NOTICE file\n# distributed with this work for additional information\n# regarding copyright ownership. The ASF licenses this file\n# to you under the Apache License, Version 2.0 (the\n# \"License\"); you may not use this file except in compliance\n# with the License. 
You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing,\n# software distributed under the License is distributed on an\n# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n# KIND, either express or implied. See the License for the\n# specific language governing permissions and limitations\n# under the License.\n\n# Define some default values that can be overridden by system properties\nwebhcat.root.logger = INFO, standard\nwebhcat.log.dir = .\nwebhcat.log.file = webhcat.log\n\nlog4j.rootLogger = ${webhcat.root.logger}\n\n# Logging Threshold\nlog4j.threshhold = DEBUG\n\nlog4j.appender.standard = org.apache.log4j.DailyRollingFileAppender\nlog4j.appender.standard.File = ${webhcat.log.dir}/${webhcat.log.file}\n\n# Rollver at midnight\nlog4j.appender.DRFA.DatePattern = .yyyy-MM-dd\n\nlog4j.appender.DRFA.layout = org.apache.log4j.PatternLayout\n\nlog4j.appender.standard.layout = org.apache.log4j.PatternLayout\nlog4j.appender.standard.layout.conversionPattern = %-5p | %d{DATE} | %c | %m%n\n\n# Class logging settings\nlog4j.logger.com.sun.jersey = DEBUG\nlog4j.logger.com.sun.jersey.spi.container.servlet.WebComponent = ERROR\nlog4j.logger.org.apache.hadoop = INFO\nlog4j.logger.org.apache.hadoop.conf = WARN\nlog4j.logger.org.apache.zookeeper = WARN\nlog4j.logger.org.eclipse.jetty = INFO" } } }, { "webhcat-site": { "properties": { "templeton.exec.timeout": "60000", "templeton.hadoop": "/usr/hdp/current/hadoop-client/bin/hadoop", "templeton.hadoop.conf.dir": "/etc/hadoop/conf", "templeton.hcat": "/usr/hdp/current/hive-client/bin/hcat", "templeton.hcat.home": "hive.tar.gz/hive/hcatalog", "templeton.hive.archive": "hdfs:///hdp/apps/${hdp.version}/hive/hive.tar.gz", "templeton.hive.home": "hive.tar.gz/hive", "templeton.hive.path": "hive.tar.gz/hive/bin/hive", "templeton.hive.properties": "hive.metastore.local=false,hive.metastore.uris=thrift://%HOSTGROUP::host_group_master2%:9083,hive.metastore.sasl.enabled=false,hive.metastore.execute.setugi=true", "templeton.jar": "/usr/hdp/current/hive-webhcat/share/webhcat/svr/lib/hive-webhcat-.jar", "templeton.libjars": "/usr/hdp/current/zookeeper-client/zookeeper.jar", "templeton.override.enabled": "false", "templeton.pig.archive": "hdfs:///hdp/apps/${hdp.version}/pig/pig.tar.gz", "templeton.pig.path": "pig.tar.gz/pig/bin/pig", "templeton.port": "50111", "templeton.sqoop.archive": "hdfs:///hdp/apps/${hdp.version}/sqoop/sqoop.tar.gz", "templeton.sqoop.home": "sqoop.tar.gz/sqoop", "templeton.sqoop.path": "sqoop.tar.gz/sqoop/bin/sqoop", "templeton.storage.class": "org.apache.hive.hcatalog.templeton.tool.ZooKeeperStorage", "templeton.streaming.jar": "hdfs:///hdp/apps/${hdp.version}/mapreduce/hadoop-streaming.jar", "templeton.zookeeper.hosts": "%HOSTGROUP::host_group_master_3%:2181,%HOSTGROUP::host_group_master_1%:2181,%HOSTGROUP::host_group_master_2%:2181" } } }, { "yarn-env": { "properties": { "apptimelineserver_heapsize": "1024", "content": "\nexport HADOOP_YARN_HOME={{hadoop_yarn_home}}\nexport YARN_LOG_DIR={{yarn_log_dir_prefix}}/$USER\nexport YARN_PID_DIR={{yarn_pid_dir_prefix}}/$USER\nexport HADOOP_LIBEXEC_DIR={{hadoop_libexec_dir}}\nexport JAVA_HOME={{java64_home}}\n\n# User for YARN daemons\nexport HADOOP_YARN_USER=${HADOOP_YARN_USER:-yarn}\n\n# resolve links - $0 may be a softlink\nexport YARN_CONF_DIR=\"${YARN_CONF_DIR:-$HADOOP_YARN_HOME/conf}\"\n\n# some Java parameters\n# export JAVA_HOME=/home/y/libexec/jdk1.6.0/\nif [ \"$JAVA_HOME\" != \"\" ]; then\n 
#echo \"run java in $JAVA_HOME\"\n JAVA_HOME=$JAVA_HOME\nfi\n\nif [ \"$JAVA_HOME\" = \"\" ]; then\n echo \"Error: JAVA_HOME is not set.\"\n exit 1\nfi\n\nJAVA=$JAVA_HOME/bin/java\nJAVA_HEAP_MAX=-Xmx1000m\n\n# For setting YARN specific HEAP sizes please use this\n# Parameter and set appropriately\nYARN_HEAPSIZE={{yarn_heapsize}}\n\n# check envvars which might override default args\nif [ \"$YARN_HEAPSIZE\" != \"\" ]; then\n JAVA_HEAP_MAX=\"-Xmx\"\"$YARN_HEAPSIZE\"\"m\"\nfi\n\n# Resource Manager specific parameters\n\n# Specify the max Heapsize for the ResourceManager using a numerical value\n# in the scale of MB. For example, to specify an jvm option of -Xmx1000m, set\n# the value to 1000.\n# This value will be overridden by an Xmx setting specified in either YARN_OPTS\n# and/or YARN_RESOURCEMANAGER_OPTS.\n# If not specified, the default value will be picked from either YARN_HEAPMAX\n# or JAVA_HEAP_MAX with YARN_HEAPMAX as the preferred option of the two.\nexport YARN_RESOURCEMANAGER_HEAPSIZE={{resourcemanager_heapsize}}\n\n# Specify the JVM options to be used when starting the ResourceManager.\n# These options will be appended to the options specified as YARN_OPTS\n# and therefore may override any similar flags set in YARN_OPTS\n#export YARN_RESOURCEMANAGER_OPTS=\n\n# Node Manager specific parameters\n\n# Specify the max Heapsize for the NodeManager using a numerical value\n# in the scale of MB. For example, to specify an jvm option of -Xmx1000m, set\n# the value to 1000.\n# This value will be overridden by an Xmx setting specified in either YARN_OPTS\n# and/or YARN_NODEMANAGER_OPTS.\n# If not specified, the default value will be picked from either YARN_HEAPMAX\n# or JAVA_HEAP_MAX with YARN_HEAPMAX as the preferred option of the two.\nexport YARN_NODEMANAGER_HEAPSIZE={{nodemanager_heapsize}}\n\n# Specify the max Heapsize for the HistoryManager using a numerical value\n# in the scale of MB. 
For example, to specify an jvm option of -Xmx1000m, set\n# the value to 1024.\n# This value will be overridden by an Xmx setting specified in either YARN_OPTS\n# and/or YARN_HISTORYSERVER_OPTS.\n# If not specified, the default value will be picked from either YARN_HEAPMAX\n# or JAVA_HEAP_MAX with YARN_HEAPMAX as the preferred option of the two.\nexport YARN_HISTORYSERVER_HEAPSIZE={{apptimelineserver_heapsize}}\n\n# Specify the JVM options to be used when starting the NodeManager.\n# These options will be appended to the options specified as YARN_OPTS\n# and therefore may override any similar flags set in YARN_OPTS\n#export YARN_NODEMANAGER_OPTS=\n\n# so that filenames w/ spaces are handled correctly in loops below\nIFS=\n\n\n# default log directory and file\nif [ \"$YARN_LOG_DIR\" = \"\" ]; then\n YARN_LOG_DIR=\"$HADOOP_YARN_HOME/logs\"\nfi\nif [ \"$YARN_LOGFILE\" = \"\" ]; then\n YARN_LOGFILE='yarn.log'\nfi\n\n# default policy file for service-level authorization\nif [ \"$YARN_POLICYFILE\" = \"\" ]; then\n YARN_POLICYFILE=\"hadoop-policy.xml\"\nfi\n\n# restore ordinary behaviour\nunset IFS\n\n\nYARN_OPTS=\"$YARN_OPTS -Dhadoop.log.dir=$YARN_LOG_DIR\"\nYARN_OPTS=\"$YARN_OPTS -Dyarn.log.dir=$YARN_LOG_DIR\"\nYARN_OPTS=\"$YARN_OPTS -Dhadoop.log.file=$YARN_LOGFILE\"\nYARN_OPTS=\"$YARN_OPTS -Dyarn.log.file=$YARN_LOGFILE\"\nYARN_OPTS=\"$YARN_OPTS -Dyarn.home.dir=$YARN_COMMON_HOME\"\nYARN_OPTS=\"$YARN_OPTS -Dyarn.id.str=$YARN_IDENT_STRING\"\nYARN_OPTS=\"$YARN_OPTS -Dhadoop.root.logger=${YARN_ROOT_LOGGER:-INFO,console}\"\nYARN_OPTS=\"$YARN_OPTS -Dyarn.root.logger=${YARN_ROOT_LOGGER:-INFO,console}\"\nif [ \"x$JAVA_LIBRARY_PATH\" != \"x\" ]; then\n YARN_OPTS=\"$YARN_OPTS -Djava.library.path=$JAVA_LIBRARY_PATH\"\nfi\nYARN_OPTS=\"$YARN_OPTS -Dyarn.policy.file=$YARN_POLICYFILE\"", "min_user_id": "500", "nodemanager_heapsize": "1024", "resourcemanager_heapsize": "1024", "yarn_heapsize": "1024", "yarn_log_dir_prefix": "/var/log/hadoop-yarn", "yarn_pid_dir_prefix": "/var/run/hadoop-yarn", "yarn_user": "yarn" } } }, { "yarn-log4j": { "properties": { "content": "\n#Relative to Yarn Log Dir Prefix\nyarn.log.dir=.\n#\n# Job Summary Appender\n#\n# Use following logger to send summary to separate file defined by\n# hadoop.mapreduce.jobsummary.log.file rolled daily:\n# hadoop.mapreduce.jobsummary.logger=INFO,JSA\n#\nhadoop.mapreduce.jobsummary.logger=${hadoop.root.logger}\nhadoop.mapreduce.jobsummary.log.file=hadoop-mapreduce.jobsummary.log\nlog4j.appender.JSA=org.apache.log4j.DailyRollingFileAppender\n# Set the ResourceManager summary log filename\nyarn.server.resourcemanager.appsummary.log.file=hadoop-mapreduce.jobsummary.log\n# Set the ResourceManager summary log level and appender\nyarn.server.resourcemanager.appsummary.logger=${hadoop.root.logger}\n#yarn.server.resourcemanager.appsummary.logger=INFO,RMSUMMARY\n\n# To enable AppSummaryLogging for the RM,\n# set yarn.server.resourcemanager.appsummary.logger to\n# LEVEL,RMSUMMARY in hadoop-env.sh\n\n# Appender for ResourceManager Application Summary Log\n# Requires the following properties to be set\n# - hadoop.log.dir (Hadoop Log directory)\n# - yarn.server.resourcemanager.appsummary.log.file (resource manager app summary log filename)\n# - yarn.server.resourcemanager.appsummary.logger (resource manager app summary log level and 
appender)\nlog4j.appender.RMSUMMARY=org.apache.log4j.RollingFileAppender\nlog4j.appender.RMSUMMARY.File=${yarn.log.dir}/${yarn.server.resourcemanager.appsummary.log.file}\nlog4j.appender.RMSUMMARY.MaxFileSize=256MB\nlog4j.appender.RMSUMMARY.MaxBackupIndex=20\nlog4j.appender.RMSUMMARY.layout=org.apache.log4j.PatternLayout\nlog4j.appender.RMSUMMARY.layout.ConversionPattern=%d{ISO8601} %p %c{2}: %m%n\nlog4j.appender.JSA.layout=org.apache.log4j.PatternLayout\nlog4j.appender.JSA.layout.ConversionPattern=%d{yy/MM/dd HH:mm:ss} %p %c{2}: %m%n\nlog4j.appender.JSA.DatePattern=.yyyy-MM-dd\nlog4j.appender.JSA.layout=org.apache.log4j.PatternLayout\nlog4j.logger.org.apache.hadoop.yarn.server.resourcemanager.RMAppManager$ApplicationSummary=${yarn.server.resourcemanager.appsummary.logger}\nlog4j.additivity.org.apache.hadoop.yarn.server.resourcemanager.RMAppManager$ApplicationSummary=false" } } }, { "yarn-site": { "properties": { "hadoop.registry.rm.enabled": "true", "hadoop.registry.zk.quorum": "%HOSTGROUP::host_group_master_3%:2181,%HOSTGROUP::host_group_master_1%:2181,%HOSTGROUP::host_group_master_2%:2181", "yarn.acl.enable": "false", "yarn.admin.acl": "", "yarn.application.classpath": "$HADOOP_CONFDIR,/usr/hdp/current/hadoop-client/,/usr/hdp/current/hadoop-client/lib/,/usr/hdp/current/hadoop-hdfs-client/,/usr/hdp/current/hadoop-hdfs-client/lib/,/usr/hdp/current/hadoop-yarn-client/,/usr/hdp/current/hadoop-yarn-client/lib/_", "yarn.client.nodemanager-connect.max-wait-ms": "60000", "yarn.client.nodemanager-connect.retry-interval-ms": "10000", "yarn.http.policy": "HTTP_ONLY", "yarn.log-aggregation-enable": "true", "yarn.log-aggregation.retain-seconds": "2592000", "yarn.log.server.url": "http://%HOSTGROUP::host_group_master_1%:19888/jobhistory/logs", "yarn.node-labels.fs-store.retry-policy-spec": "2000, 500", "yarn.node-labels.fs-store.root-dir": "/system/yarn/node-labels", "yarn.node-labels.manager-class": "org.apache.hadoop.yarn.server.resourcemanager.nodelabels.MemoryRMNodeLabelsManager", "yarn.nodemanager.address": "0.0.0.0:45454", "yarn.nodemanager.admin-env": "MALLOC_ARENA_MAX=$MALLOC_ARENA_MAX", "yarn.nodemanager.aux-services": "mapreduce_shuffle", "yarn.nodemanager.aux-services.mapreduce_shuffle.class": "org.apache.hadoop.mapred.ShuffleHandler", "yarn.nodemanager.bind-host": "0.0.0.0", "yarn.nodemanager.container-executor.class": "org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor", "yarn.nodemanager.container-monitor.interval-ms": "3000", "yarn.nodemanager.delete.debug-delay-sec": "0", "yarn.nodemanager.disk-health-checker.max-disk-utilization-per-disk-percentage": "90", "yarn.nodemanager.disk-health-checker.min-free-space-per-disk-mb": "1000", "yarn.nodemanager.disk-health-checker.min-healthy-disks": "0.25", "yarn.nodemanager.health-checker.interval-ms": "135000", "yarn.nodemanager.health-checker.script.timeout-ms": "60000", "yarn.nodemanager.linux-container-executor.cgroups.hierarchy": "hadoop-yarn", "yarn.nodemanager.linux-container-executor.cgroups.mount": "false", "yarn.nodemanager.linux-container-executor.cgroups.strict-resource-usage": "false", "yarn.nodemanager.linux-container-executor.group": "hadoop", "yarn.nodemanager.linux-container-executor.resources-handler.class": "org.apache.hadoop.yarn.server.nodemanager.util.DefaultLCEResourcesHandler", "yarn.nodemanager.local-dirs": "/mnt/hadoop/yarn/local", "yarn.nodemanager.log-aggregation.compression-type": "gz", "yarn.nodemanager.log-aggregation.debug-enabled": "false", "yarn.nodemanager.log-aggregation.num-log-files-per-app": 
"30", "yarn.nodemanager.log-aggregation.roll-monitoring-interval-seconds": "-1", "yarn.nodemanager.log-dirs": "/mnt/hadoop/yarn/log", "yarn.nodemanager.log.retain-second": "604800", "yarn.nodemanager.recovery.dir": "{{yarn_log_dir_prefix}}/nodemanager/recovery-state", "yarn.nodemanager.recovery.enabled": "true", "yarn.nodemanager.remote-app-log-dir": "/app-logs", "yarn.nodemanager.remote-app-log-dir-suffix": "logs", "yarn.nodemanager.resource.cpu-vcores": "2", "yarn.nodemanager.resource.memory-mb": "4096", "yarn.nodemanager.resource.percentage-physical-cpu-limit": "100", "yarn.nodemanager.vmem-check-enabled": "false", "yarn.nodemanager.vmem-pmem-ratio": "2.1", "yarn.resourcemanager.address": "%HOSTGROUP::host_group_master_2%:8050", "yarn.resourcemanager.admin.address": "%HOSTGROUP::host_group_master_2%:8141", "yarn.resourcemanager.am.max-attempts": "2", "yarn.resourcemanager.bind-host": "0.0.0.0", "yarn.resourcemanager.connect.max-wait.ms": "900000", "yarn.resourcemanager.connect.retry-interval.ms": "30000", "yarn.resourcemanager.fs.state-store.retry-policy-spec": "2000, 500", "yarn.resourcemanager.fs.state-store.uri": " ", "yarn.resourcemanager.ha.enabled": "false", "yarn.resourcemanager.hostname": "%HOSTGROUP::host_group_master_2%", "yarn.resourcemanager.nodes.exclude-path": "/etc/hadoop/conf/yarn.exclude", "yarn.resourcemanager.recovery.enabled": "true", "yarn.resourcemanager.resource-tracker.address": "%HOSTGROUP::host_group_master_2%:8025", "yarn.resourcemanager.scheduler.address": "%HOSTGROUP::host_group_master_2%:8030", "yarn.resourcemanager.scheduler.class": "org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler", "yarn.resourcemanager.state-store.max-completed-applications": "${yarn.resourcemanager.max-completed-applications}", "yarn.resourcemanager.store.class": "org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore", "yarn.resourcemanager.system-metrics-publisher.dispatcher.pool-size": "10", "yarn.resourcemanager.system-metrics-publisher.enabled": "true", "yarn.resourcemanager.webapp.address": "%HOSTGROUP::host_group_master_2%:8088", "yarn.resourcemanager.webapp.delegation-token-auth-filter.enabled": "false", "yarn.resourcemanager.webapp.https.address": "%HOSTGROUP::host_group_master_2%:8090", "yarn.resourcemanager.work-preserving-recovery.enabled": "true", "yarn.resourcemanager.work-preserving-recovery.scheduling-wait-ms": "10000", "yarn.resourcemanager.zk-acl": "world:anyone:rwcda", "yarn.resourcemanager.zk-num-retries": "1000", "yarn.resourcemanager.zk-retry-interval-ms": "1000", "yarn.resourcemanager.zk-state-store.parent-path": "/rmstore", "yarn.resourcemanager.zk-timeout-ms": "10000", "yarn.scheduler.maximum-allocation-mb": "4096", "yarn.scheduler.minimum-allocation-mb": "1024", "yarn.timeline-service.address": "%HOSTGROUP::host_group_master_3%:10200", "yarn.timeline-service.bind-host": "0.0.0.0", "yarn.timeline-service.client.max-retries": "30", "yarn.timeline-service.client.retry-interval-ms": "1000", "yarn.timeline-service.enabled": "true", "yarn.timeline-service.generic-application-history.store-class": "org.apache.hadoop.yarn.server.applicationhistoryservice.NullApplicationHistoryStore", "yarn.timeline-service.http-authentication.simple.anonymous.allowed": "true", "yarn.timeline-service.http-authentication.type": "simple", "yarn.timeline-service.leveldb-timeline-store.path": "/mnt/hadoop/yarn/timeline", "yarn.timeline-service.leveldb-timeline-store.read-cache-size": "104857600", 
"yarn.timeline-service.leveldb-timeline-store.start-time-read-cache-size": "10000", "yarn.timeline-service.leveldb-timeline-store.start-time-write-cache-size": "10000", "yarn.timeline-service.leveldb-timeline-store.ttl-interval-ms": "300000", "yarn.timeline-service.store-class": "org.apache.hadoop.yarn.server.timeline.LeveldbTimelineStore", "yarn.timeline-service.ttl-enable": "true", "yarn.timeline-service.ttl-ms": "2678400000", "yarn.timeline-service.webapp.address": "%HOSTGROUP::host_group_master_3%:8188", "yarn.timeline-service.webapp.https.address": "%HOSTGROUP::host_group_master_3%:8190" } } }, { "zoo.cfg": { "properties": { "autopurge.purgeInterval": "24", "autopurge.snapRetainCount": "30", "clientPort": "2181", "dataDir": "/mnt/hadoop/zookeeper", "initLimit": "10", "syncLimit": "5", "tickTime": "2000" } } }, { "zookeeper-env": { "properties": { "content": "\nexport JAVA_HOME={{java64_home}}\nexport ZOOKEEPER_HOME={{zk_home}}\nexport ZOO_LOG_DIR={{zk_log_dir}}\nexport ZOOPIDFILE={{zk_pid_file}}\nexport SERVER_JVMFLAGS={{zk_server_heapsize}}\nexport JAVA=$JAVA_HOME/bin/java\nexport CLASSPATH=$CLASSPATH:/usr/share/zookeeper/*\n\n{% if security_enabled %}\nexport SERVER_JVMFLAGS=\"$SERVER_JVMFLAGS -Djava.security.auth.login.config={{zk_server_jaas_file}}\"\nexport CLIENT_JVMFLAGS=\"$CLIENT_JVMFLAGS -Djava.security.auth.login.config={{zk_client_jaas_file}}\"\n{% endif %}", "zk_log_dir": "/var/log/zookeeper", "zk_pid_dir": "/var/run/zookeeper", "zk_user": "zookeeper" } } }, { "zookeeper-log4j": { "properties": { "content": "\n#\n#\n# Licensed to the Apache Software Foundation (ASF) under one\n# or more contributor license agreements. See the NOTICE file\n# distributed with this work for additional information\n# regarding copyright ownership. The ASF licenses this file\n# to you under the Apache License, Version 2.0 (the\n# \"License\"); you may not use this file except in compliance\n# with the License. You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing,\n# software distributed under the License is distributed on an\n# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n# KIND, either express or implied. 
See the License for the\n# specific language governing permissions and limitations\n# under the License.\n#\n#\n#\n\n#\n# ZooKeeper Logging Configuration\n#\n\n# DEFAULT: console appender only\nlog4j.rootLogger=INFO, CONSOLE\n\n# Example with rolling log file\n#log4j.rootLogger=DEBUG, CONSOLE, ROLLINGFILE\n\n# Example with rolling log file and tracing\n#log4j.rootLogger=TRACE, CONSOLE, ROLLINGFILE, TRACEFILE\n\n#\n# Log INFO level and above messages to the console\n#\nlog4j.appender.CONSOLE=org.apache.log4j.ConsoleAppender\nlog4j.appender.CONSOLE.Threshold=INFO\nlog4j.appender.CONSOLE.layout=org.apache.log4j.PatternLayout\nlog4j.appender.CONSOLE.layout.ConversionPattern=%d{ISO8601} - %-5p [%t:%C{1}@%L] - %m%n\n\n#\n# Add ROLLINGFILE to rootLogger to get log file output\n# Log DEBUG level and above messages to a log file\nlog4j.appender.ROLLINGFILE=org.apache.log4j.RollingFileAppender\nlog4j.appender.ROLLINGFILE.Threshold=DEBUG\nlog4j.appender.ROLLINGFILE.File=zookeeper.log\n\n# Max log file size of 10MB\nlog4j.appender.ROLLINGFILE.MaxFileSize=10MB\n# uncomment the next line to limit number of backup files\n#log4j.appender.ROLLINGFILE.MaxBackupIndex=10\n\nlog4j.appender.ROLLINGFILE.layout=org.apache.log4j.PatternLayout\nlog4j.appender.ROLLINGFILE.layout.ConversionPattern=%d{ISO8601} - %-5p [%t:%C{1}@%L] - %m%n\n\n\n#\n# Add TRACEFILE to rootLogger to get log file output\n# Log DEBUG level and above messages to a log file\nlog4j.appender.TRACEFILE=org.apache.log4j.FileAppender\nlog4j.appender.TRACEFILE.Threshold=TRACE\nlog4j.appender.TRACEFILE.File=zookeeper_trace.log\n\nlog4j.appender.TRACEFILE.layout=org.apache.log4j.PatternLayout\n### Notice we are including log4j's NDC here (%x)\nlog4j.appender.TRACEFILE.layout.ConversionPattern=%d{ISO8601} - %-5p [%t:%C{1}@%L][%x] - %m%n" } } } ], "host_groups": [ { "name": "host_group_client_1", "configurations": [], "components": [ { "name": "ZOOKEEPER_CLIENT" }, { "name": "PIG" }, { "name": "OOZIE_CLIENT" }, { "name": "HBASE_CLIENT" }, { "name": "HCAT" }, { "name": "KNOX_GATEWAY" }, { "name": "METRICS_MONITOR" }, { "name": "FALCON_CLIENT" }, { "name": "TEZ_CLIENT" }, { "name": "SPARK_CLIENT" }, { "name": "SLIDER" }, { "name": "SQOOP" }, { "name": "HDFS_CLIENT" }, { "name": "HIVE_CLIENT" }, { "name": "YARN_CLIENT" }, { "name": "METRICS_COLLECTOR" }, { "name": "MAPREDUCE2_CLIENT" } ], "cardinality": "1" }, { "name": "host_group_master_3", "configurations": [], "components": [ { "name": "ZOOKEEPER_SERVER" }, { "name": "APP_TIMELINE_SERVER" }, { "name": "HBASE_MASTER" }, { "name": "HDFS_CLIENT" }, { "name": "METRICS_MONITOR" }, { "name": "SECONDARY_NAMENODE" } ], "cardinality": "1" }, { "name": "host_group_slave_1", "configurations": [], "components": [ { "name": "HBASE_REGIONSERVER" }, { "name": "NODEMANAGER" }, { "name": "METRICS_MONITOR" }, { "name": "DATANODE" } ], "cardinality": "6" }, { "name": "host_group_master_2", "configurations": [], "components": [ { "name": "ZOOKEEPER_SERVER" }, { "name": "ZOOKEEPER_CLIENT" }, { "name": "PIG" }, { "name": "HIVE_SERVER" }, { "name": "METRICS_MONITOR" }, { "name": "SPARK_JOBHISTORYSERVER" }, { "name": "TEZ_CLIENT" }, { "name": "HIVE_METASTORE" }, { "name": "ZEPPELIN_MASTER" }, { "name": "HDFS_CLIENT" }, { "name": "YARN_CLIENT" }, { "name": "MAPREDUCE2_CLIENT" }, { "name": "MYSQL_SERVER" }, { "name": "RESOURCEMANAGER" }, { "name": "WEBHCAT_SERVER" } ], "cardinality": "1" }, { "name": "host_group_master_1", "configurations": [], "components": [ { "name": "ZOOKEEPER_SERVER" }, { "name": "HISTORYSERVER" }, { 
"name": "OOZIE_CLIENT" }, { "name": "NAMENODE" }, { "name": "OOZIE_SERVER" }, { "name": "HDFS_CLIENT" }, { "name": "YARN_CLIENT" }, { "name": "FALCON_SERVER" }, { "name": "METRICS_MONITOR" }, { "name": "MAPREDUCE2_CLIENT" } ], "cardinality": "1" } ], "Blueprints": { "blueprint_name": "hdp-spark-cluster", "stack_name": "HDP", "stack_version": "2.3" } }

mhmxs commented 9 years ago

I suspect this command is the culprit: hostgroup configure --hostgroup cbgateway --recipeNames gcs-recipe: [SUCCESS]

As you can see here https://github.com/sequenceiq/cloudbreak-shell/blob/master/src/main/java/com/sequenceiq/cloudbreak/shell/commands/ClusterCommands.java#L36, we compare the number of configured host groups with the number of host_groups in the blueprint. Configuring a host group for cbgateway is neither necessary nor allowed, since cbgateway is not a host_group in the blueprint. Could you please rerun the commands without that one?
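Roughly, the check behaves like the sketch below (simplified, with illustrative class and method names; the real logic lives in ClusterCommands linked above):

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

// Simplified sketch of the pre-flight check described above.
// Names are illustrative, not the real cloudbreak-shell implementation.
public class HostGroupCheck {

    // Cluster creation is only attempted when the host groups configured in the
    // shell line up exactly with the host_groups declared in the blueprint.
    static boolean hostGroupsMatchBlueprint(Set<String> configured, Set<String> blueprint) {
        return configured.size() == blueprint.size() && blueprint.containsAll(configured);
    }

    public static void main(String[] args) {
        Set<String> blueprint = new HashSet<>(Arrays.asList(
                "host_group_master_1", "host_group_master_2", "host_group_master_3",
                "host_group_client_1", "host_group_slave_1"));

        Set<String> configured = new HashSet<>(blueprint);
        configured.add("cbgateway"); // the extra cbgateway entry makes the sets differ

        System.out.println(hostGroupsMatchBlueprint(configured, blueprint)); // prints: false
        configured.remove("cbgateway");
        System.out.println(hostGroupsMatchBlueprint(configured, blueprint)); // prints: true
    }
}
```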

desaiak commented 9 years ago

Yep, seems like that was it ... cluster create shows success now. Thanks!! I'll be testing out HDP 2.3 with GCS and Spark for the first time, let's see how that goes.

BTW, is it possible to add timestamps to the cbshell log file? It gets really big, and it's hard to find things without timestamps.
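For illustration, what I mean is just a timestamp conversion in the log pattern. A log4j-style layout is shown below purely as an example; I don't know which logging backend cbshell actually uses:

```properties
# Illustrative log4j-style configuration only -- cbshell's real logging setup may differ.
log4j.appender.file=org.apache.log4j.RollingFileAppender
log4j.appender.file.File=cbshell.log
log4j.appender.file.layout=org.apache.log4j.PatternLayout
# %d{ISO8601} prefixes every line with a timestamp such as 2015-09-02 02:46:49,123
log4j.appender.file.layout.ConversionPattern=%d{ISO8601} %-5p %c{1} - %m%n
```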

mhmxs commented 9 years ago

Glad to hear it. I'm going to close this issue. Could you please open another one for your improvement request? And could you also share your experiences with Spark?