steven-matison / dfhz_hdp_mpack

Install Ambari 2.7.5 with HDP 3.1.4 without using Hortonworks repositories.
Apache License 2.0

Ambari Metrics Collector and Ambari Metrics Grafana are not working. #1

Closed Bunny1992 closed 3 years ago

Bunny1992 commented 3 years ago

While trying to install Ambari using the stacks from your URL (makeopensourcegreatagain.com), we are facing the following issue: Grafana, Infra Solr, Kafka, and Ranger are not getting installed as required. The errors are shown below. Can you kindly help us with this?
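A quick way to see which of these packages actually landed on a host (a minimal sketch; run it on each affected node):

# Lists any Ambari Metrics / Infra Solr packages present on the node
rpm -qa | grep -E 'ambari-(metrics|infra)'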



Ambari - Grafana

stderr: Traceback (most recent call last): File "/var/lib/ambari-agent/cache/stacks/HDP/3.0/services/AMBARI_METRICS/package/scripts/metrics_grafana.py", line 84, in AmsGrafana().execute() File "/usr/lib/ambari-agent/lib/resource_management/libraries/script/script.py", line 352, in execute method(env) File "/var/lib/ambari-agent/cache/stacks/HDP/3.0/services/AMBARI_METRICS/package/scripts/metrics_grafana.py", line 48, in start not_if = params.grafana_process_exists_cmd, File "/usr/lib/ambari-agent/lib/resource_management/core/base.py", line 166, in init self.env.run() File "/usr/lib/ambari-agent/lib/resource_management/core/environment.py", line 160, in run self.run_action(resource, action) File "/usr/lib/ambari-agent/lib/resource_management/core/environment.py", line 124, in run_action provider_action() File "/usr/lib/ambari-agent/lib/resource_management/core/providers/system.py", line 263, in action_run returns=self.resource.returns) File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 72, in inner result = function(command, kwargs) File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 102, in checked_call tries=tries, try_sleep=try_sleep, timeout_kill_strategy=timeout_kill_strategy, returns=returns) File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 150, in _call_wrapper result = _call(command, kwargs_copy) File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 314, in _call raise ExecutionFailed(err_msg, code, out, err) resource_management.core.exceptions.ExecutionFailed: Execution of '/usr/sbin/ambari-metrics-grafana start' returned 127. -bash: /usr/sbin/ambari-metrics-grafana: No such file or directory stdout: 2021-03-01 11:32:59,645 - Stack Feature Version Info: Cluster Stack=3.1, Command Stack=None, Command Version=3.1.4.0-315 -> 3.1.4.0-315 2021-03-01 11:32:59,682 - Using hadoop conf dir: /usr/hdp/3.1.4.0-315/hadoop/conf 2021-03-01 11:33:00,062 - Stack Feature Version Info: Cluster Stack=3.1, Command Stack=None, Command Version=3.1.4.0-315 -> 3.1.4.0-315 2021-03-01 11:33:00,073 - Using hadoop conf dir: /usr/hdp/3.1.4.0-315/hadoop/conf 2021-03-01 11:33:00,075 - Group['livy'] {} 2021-03-01 11:33:00,078 - Group['spark'] {} 2021-03-01 11:33:00,078 - Group['ranger'] {} 2021-03-01 11:33:00,078 - Group['hdfs'] {} 2021-03-01 11:33:00,079 - Group['zeppelin'] {} 2021-03-01 11:33:00,079 - Group['hadoop'] {} 2021-03-01 11:33:00,080 - Group['users'] {} 2021-03-01 11:33:00,081 - User['yarn-ats'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None} 2021-03-01 11:33:00,083 - User['hive'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None} 2021-03-01 11:33:00,084 - User['infra-solr'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None} 2021-03-01 11:33:00,086 - User['zookeeper'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None} 2021-03-01 11:33:00,088 - User['ams'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None} 2021-03-01 11:33:00,089 - User['ranger'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['ranger', 'hadoop'], 'uid': None} 2021-03-01 11:33:00,091 - User['tez'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop', 'users'], 'uid': None} 2021-03-01 11:33:00,093 - User['zeppelin'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['zeppelin', 'hadoop'], 'uid': None} 2021-03-01 11:33:00,095 - User['livy'] {'gid': 
'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['livy', 'hadoop'], 'uid': None} 2021-03-01 11:33:00,096 - User['spark'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['spark', 'hadoop'], 'uid': None} 2021-03-01 11:33:00,098 - User['ambari-qa'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop', 'users'], 'uid': None} 2021-03-01 11:33:00,100 - User['kafka'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None} 2021-03-01 11:33:00,101 - User['hdfs'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hdfs', 'hadoop'], 'uid': None} 2021-03-01 11:33:00,103 - User['sqoop'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None} 2021-03-01 11:33:00,105 - User['yarn'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None} 2021-03-01 11:33:00,107 - User['mapred'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None} 2021-03-01 11:33:00,108 - User['hbase'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None} 2021-03-01 11:33:00,110 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555} 2021-03-01 11:33:00,112 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa 0'] {'not_if': '(test $(id -u ambari-qa) -gt 1000) || (false)'} 2021-03-01 11:33:00,122 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa 0'] due to not_if 2021-03-01 11:33:00,122 - Directory['/tmp/hbase-hbase'] {'owner': 'hbase', 'create_parents': True, 'mode': 0775, 'cd_access': 'a'} 2021-03-01 11:33:00,124 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555} 2021-03-01 11:33:00,127 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555} 2021-03-01 11:33:00,128 - call['/var/lib/ambari-agent/tmp/changeUid.sh hbase'] {} 2021-03-01 11:33:00,141 - call returned (0, '1009') 2021-03-01 11:33:00,143 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh hbase /home/hbase,/tmp/hbase,/usr/bin/hbase,/var/log/hbase,/tmp/hbase-hbase 1009'] {'not_if': '(test $(id -u hbase) -gt 1000) || (false)'} 2021-03-01 11:33:00,152 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh hbase /home/hbase,/tmp/hbase,/usr/bin/hbase,/var/log/hbase,/tmp/hbase-hbase 1009'] due to not_if 2021-03-01 11:33:00,153 - Group['hdfs'] {} 2021-03-01 11:33:00,154 - User['hdfs'] {'fetch_nonlocal_groups': True, 'groups': ['hdfs', 'hadoop', u'hdfs']} 2021-03-01 11:33:00,155 - FS Type: HDFS 2021-03-01 11:33:00,156 - Directory['/etc/hadoop'] {'mode': 0755} 2021-03-01 11:33:00,186 - File['/usr/hdp/3.1.4.0-315/hadoop/conf/hadoop-env.sh'] {'content': InlineTemplate(...), 'owner': 'hdfs', 'group': 'hadoop'} 2021-03-01 11:33:00,188 - Directory['/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir'] {'owner': 'hdfs', 'group': 'hadoop', 'mode': 01777} 2021-03-01 11:33:00,218 - Execute[('setenforce', '0')] {'not_if': '(! 
which getenforce ) || (which getenforce && getenforce | grep -q Disabled)', 'sudo': True, 'only_if': 'test -f /selinux/enforce'} 2021-03-01 11:33:00,237 - Skipping Execute[('setenforce', '0')] due to not_if 2021-03-01 11:33:00,238 - Directory['/var/log/hadoop'] {'owner': 'root', 'create_parents': True, 'group': 'hadoop', 'mode': 0775, 'cd_access': 'a'} 2021-03-01 11:33:00,243 - Directory['/var/run/hadoop'] {'owner': 'root', 'create_parents': True, 'group': 'root', 'cd_access': 'a'} 2021-03-01 11:33:00,244 - Directory['/var/run/hadoop/hdfs'] {'owner': 'hdfs', 'cd_access': 'a'} 2021-03-01 11:33:00,245 - Directory['/tmp/hadoop-hdfs'] {'owner': 'hdfs', 'create_parents': True, 'cd_access': 'a'} 2021-03-01 11:33:00,253 - File['/usr/hdp/3.1.4.0-315/hadoop/conf/commons-logging.properties'] {'content': Template('commons-logging.properties.j2'), 'owner': 'hdfs'} 2021-03-01 11:33:00,256 - File['/usr/hdp/3.1.4.0-315/hadoop/conf/health_check'] {'content': Template('health_check.j2'), 'owner': 'hdfs'} 2021-03-01 11:33:00,268 - File['/usr/hdp/3.1.4.0-315/hadoop/conf/log4j.properties'] {'content': InlineTemplate(...), 'owner': 'hdfs', 'group': 'hadoop', 'mode': 0644} 2021-03-01 11:33:00,291 - File['/usr/hdp/3.1.4.0-315/hadoop/conf/hadoop-metrics2.properties'] {'content': InlineTemplate(...), 'owner': 'hdfs', 'group': 'hadoop'} 2021-03-01 11:33:00,292 - File['/usr/hdp/3.1.4.0-315/hadoop/conf/task-log4j.properties'] {'content': StaticFile('task-log4j.properties'), 'mode': 0755} 2021-03-01 11:33:00,294 - File['/usr/hdp/3.1.4.0-315/hadoop/conf/configuration.xsl'] {'owner': 'hdfs', 'group': 'hadoop'} 2021-03-01 11:33:00,301 - File['/etc/hadoop/conf/topology_mappings.data'] {'owner': 'hdfs', 'content': Template('topology_mappings.data.j2'), 'only_if': 'test -d /etc/hadoop/conf', 'group': 'hadoop', 'mode': 0644} 2021-03-01 11:33:00,309 - File['/etc/hadoop/conf/topology_script.py'] {'content': StaticFile('topology_script.py'), 'only_if': 'test -d /etc/hadoop/conf', 'mode': 0755} 2021-03-01 11:33:00,315 - Skipping unlimited key JCE policy check and setup since it is not required 2021-03-01 11:33:00,331 - Skipping stack-select on AMBARI_METRICS because it does not exist in the stack-select package structure. 
2021-03-01 11:33:00,742 - Using hadoop conf dir: /usr/hdp/3.1.4.0-315/hadoop/conf 2021-03-01 11:33:00,747 - checked_call['hostid'] {} 2021-03-01 11:33:00,762 - checked_call returned (0, '10ac1215') 2021-03-01 11:33:00,769 - Directory['/etc/ambari-metrics-grafana/conf'] {'owner': 'ams', 'group': 'hadoop', 'create_parents': True, 'recursive_ownership': True, 'mode': 0755} 2021-03-01 11:33:00,773 - Directory['/var/log/ambari-metrics-grafana'] {'owner': 'ams', 'group': 'hadoop', 'create_parents': True, 'recursive_ownership': True, 'mode': 0755} 2021-03-01 11:33:00,774 - Directory['/var/lib/ambari-metrics-grafana'] {'owner': 'ams', 'group': 'hadoop', 'create_parents': True, 'recursive_ownership': True, 'mode': 0755} 2021-03-01 11:33:00,774 - Directory['/var/run/ambari-metrics-grafana'] {'owner': 'ams', 'group': 'hadoop', 'create_parents': True, 'recursive_ownership': True, 'mode': 0755} 2021-03-01 11:33:00,782 - File['/etc/ambari-metrics-grafana/conf/ams-grafana-env.sh'] {'content': InlineTemplate(...), 'owner': 'ams', 'group': 'hadoop'} 2021-03-01 11:33:00,790 - File['/etc/ambari-metrics-grafana/conf/ams-grafana.ini'] {'content': InlineTemplate(...), 'owner': 'ams', 'group': 'hadoop', 'mode': 0600} 2021-03-01 11:33:00,792 - Execute[('chown', '-R', u'ams', '/etc/ambari-metrics-grafana/conf')] {'sudo': True} 2021-03-01 11:33:00,803 - Execute[('chown', '-R', u'ams', u'/var/log/ambari-metrics-grafana')] {'sudo': True} 2021-03-01 11:33:00,815 - Execute[('chown', '-R', u'ams', u'/var/lib/ambari-metrics-grafana')] {'sudo': True} 2021-03-01 11:33:00,826 - Execute[('chown', '-R', u'ams', u'/var/run/ambari-metrics-grafana')] {'sudo': True} 2021-03-01 11:33:00,852 - Directory['/usr/lib/ambari-logsearch-logfeeder/conf'] {'create_parents': True, 'mode': 0755, 'cd_access': 'a'} 2021-03-01 11:33:00,854 - Generate Log Feeder config file: /usr/lib/ambari-logsearch-logfeeder/conf/input.config-ambari-metrics.json 2021-03-01 11:33:00,854 - File['/usr/lib/ambari-logsearch-logfeeder/conf/input.config-ambari-metrics.json'] {'content': Template('input.config-ambari-metrics.json.j2'), 'mode': 0644} 2021-03-01 11:33:00,856 - Execute['/usr/sbin/ambari-metrics-grafana start'] {'not_if': "ambari-sudo.sh su ams -l -s /bin/bash -c 'test -f /var/run/ambari-metrics-grafana/grafana-server.pid && ps -p cat /var/run/ambari-metrics-grafana/grafana-server.pid'", 'user': 'ams'} 2021-03-01 11:33:01,069 - Skipping stack-select on AMBARI_METRICS because it does not exist in the stack-select package structure.

Command failed after 1 tries
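Exit code 127 above means the shell could not find /usr/sbin/ambari-metrics-grafana, i.e. the ambari-metrics-grafana package never made it onto that host. A quick check (sketch, assuming a yum-based node):

# Confirm the package and the start script the agent expects are present
rpm -q ambari-metrics-grafana                                  # reports "not installed" when this error occurs
ls -l /usr/sbin/ambari-metrics-grafana                         # the script that returned 127
yum list available 2>/dev/null | grep ambari-metrics-grafana   # does the configured repo even provide it?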

Ambari - Ranger stderr: Traceback (most recent call last): File "/var/lib/ambari-agent/cache/stacks/HDP/3.0/services/RANGER/package/scripts/ranger_admin.py", line 258, in RangerAdmin().execute() File "/usr/lib/ambari-agent/lib/resource_management/libraries/script/script.py", line 352, in execute method(env) File "/var/lib/ambari-agent/cache/stacks/HDP/3.0/services/RANGER/package/scripts/ranger_admin.py", line 117, in start solr_cloud_util.setup_solr_client(params.config, custom_log4j = params.custom_log4j) File "/usr/lib/ambari-agent/lib/resource_management/libraries/functions/solr_cloud_util.py", line 249, in setup_solr_client content=StaticFile(solrCliFilename) File "/usr/lib/ambari-agent/lib/resource_management/core/base.py", line 166, in init self.env.run() File "/usr/lib/ambari-agent/lib/resource_management/core/environment.py", line 160, in run self.run_action(resource, action) File "/usr/lib/ambari-agent/lib/resource_management/core/environment.py", line 124, in run_action provider_action() File "/usr/lib/ambari-agent/lib/resource_management/core/providers/system.py", line 123, in action_create content = self._get_content() File "/usr/lib/ambari-agent/lib/resource_management/core/providers/system.py", line 160, in _get_content return content() File "/usr/lib/ambari-agent/lib/resource_management/core/source.py", line 52, in call return self.get_content() File "/usr/lib/ambari-agent/lib/resource_management/core/source.py", line 76, in get_content raise Fail("{0} Source file {1} is not found".format(repr(self), path)) resource_management.core.exceptions.Fail: StaticFile('/usr/lib/ambari-infra-solr-client/solrCloudCli.sh') Source file /usr/lib/ambari-infra-solr-client/solrCloudCli.sh is not found stdout: 2021-03-01 11:28:41,349 - Stack Feature Version Info: Cluster Stack=3.1, Command Stack=None, Command Version=3.1.4.0-315 -> 3.1.4.0-315 2021-03-01 11:28:41,387 - Using hadoop conf dir: /usr/hdp/3.1.4.0-315/hadoop/conf 2021-03-01 11:28:41,723 - Stack Feature Version Info: Cluster Stack=3.1, Command Stack=None, Command Version=3.1.4.0-315 -> 3.1.4.0-315 2021-03-01 11:28:41,733 - Using hadoop conf dir: /usr/hdp/3.1.4.0-315/hadoop/conf 2021-03-01 11:28:41,736 - Group['livy'] {} 2021-03-01 11:28:41,738 - Group['spark'] {} 2021-03-01 11:28:41,739 - Group['ranger'] {} 2021-03-01 11:28:41,739 - Group['hdfs'] {} 2021-03-01 11:28:41,740 - Group['zeppelin'] {} 2021-03-01 11:28:41,740 - Group['hadoop'] {} 2021-03-01 11:28:41,740 - Group['users'] {} 2021-03-01 11:28:41,741 - User['yarn-ats'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None} 2021-03-01 11:28:41,743 - User['hive'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None} 2021-03-01 11:28:41,745 - User['infra-solr'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None} 2021-03-01 11:28:41,747 - User['zookeeper'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None} 2021-03-01 11:28:41,749 - User['ams'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None} 2021-03-01 11:28:41,751 - User['ranger'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['ranger', 'hadoop'], 'uid': None} 2021-03-01 11:28:41,752 - User['tez'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop', 'users'], 'uid': None} 2021-03-01 11:28:41,754 - User['zeppelin'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['zeppelin', 'hadoop'], 'uid': None} 2021-03-01 11:28:41,756 - 
User['livy'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['livy', 'hadoop'], 'uid': None} 2021-03-01 11:28:41,758 - User['spark'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['spark', 'hadoop'], 'uid': None} 2021-03-01 11:28:41,759 - User['ambari-qa'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop', 'users'], 'uid': None} 2021-03-01 11:28:41,761 - User['kafka'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None} 2021-03-01 11:28:41,763 - User['hdfs'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hdfs', 'hadoop'], 'uid': None} 2021-03-01 11:28:41,765 - User['sqoop'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None} 2021-03-01 11:28:41,766 - User['yarn'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None} 2021-03-01 11:28:41,768 - User['mapred'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None} 2021-03-01 11:28:41,770 - User['hbase'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None} 2021-03-01 11:28:41,771 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555} 2021-03-01 11:28:41,774 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa 0'] {'not_if': '(test $(id -u ambari-qa) -gt 1000) || (false)'} 2021-03-01 11:28:41,784 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa 0'] due to not_if 2021-03-01 11:28:41,785 - Directory['/tmp/hbase-hbase'] {'owner': 'hbase', 'create_parents': True, 'mode': 0775, 'cd_access': 'a'} 2021-03-01 11:28:41,787 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555} 2021-03-01 11:28:41,790 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555} 2021-03-01 11:28:41,791 - call['/var/lib/ambari-agent/tmp/changeUid.sh hbase'] {} 2021-03-01 11:28:41,806 - call returned (0, '1009') 2021-03-01 11:28:41,807 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh hbase /home/hbase,/tmp/hbase,/usr/bin/hbase,/var/log/hbase,/tmp/hbase-hbase 1009'] {'not_if': '(test $(id -u hbase) -gt 1000) || (false)'} 2021-03-01 11:28:41,816 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh hbase /home/hbase,/tmp/hbase,/usr/bin/hbase,/var/log/hbase,/tmp/hbase-hbase 1009'] due to not_if 2021-03-01 11:28:41,817 - Group['hdfs'] {} 2021-03-01 11:28:41,818 - User['hdfs'] {'fetch_nonlocal_groups': True, 'groups': ['hdfs', 'hadoop', u'hdfs']} 2021-03-01 11:28:41,819 - FS Type: HDFS 2021-03-01 11:28:41,819 - Directory['/etc/hadoop'] {'mode': 0755} 2021-03-01 11:28:41,850 - File['/usr/hdp/3.1.4.0-315/hadoop/conf/hadoop-env.sh'] {'content': InlineTemplate(...), 'owner': 'hdfs', 'group': 'hadoop'} 2021-03-01 11:28:41,852 - Directory['/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir'] {'owner': 'hdfs', 'group': 'hadoop', 'mode': 01777} 2021-03-01 11:28:41,881 - Execute[('setenforce', '0')] {'not_if': '(! 
which getenforce ) || (which getenforce && getenforce | grep -q Disabled)', 'sudo': True, 'only_if': 'test -f /selinux/enforce'} 2021-03-01 11:28:41,893 - Skipping Execute[('setenforce', '0')] due to not_if 2021-03-01 11:28:41,894 - Directory['/var/log/hadoop'] {'owner': 'root', 'create_parents': True, 'group': 'hadoop', 'mode': 0775, 'cd_access': 'a'} 2021-03-01 11:28:41,899 - Directory['/var/run/hadoop'] {'owner': 'root', 'create_parents': True, 'group': 'root', 'cd_access': 'a'} 2021-03-01 11:28:41,900 - Directory['/var/run/hadoop/hdfs'] {'owner': 'hdfs', 'cd_access': 'a'} 2021-03-01 11:28:41,901 - Directory['/tmp/hadoop-hdfs'] {'owner': 'hdfs', 'create_parents': True, 'cd_access': 'a'} 2021-03-01 11:28:41,908 - File['/usr/hdp/3.1.4.0-315/hadoop/conf/commons-logging.properties'] {'content': Template('commons-logging.properties.j2'), 'owner': 'hdfs'} 2021-03-01 11:28:41,911 - File['/usr/hdp/3.1.4.0-315/hadoop/conf/health_check'] {'content': Template('health_check.j2'), 'owner': 'hdfs'} 2021-03-01 11:28:41,922 - File['/usr/hdp/3.1.4.0-315/hadoop/conf/log4j.properties'] {'content': InlineTemplate(...), 'owner': 'hdfs', 'group': 'hadoop', 'mode': 0644} 2021-03-01 11:28:41,942 - File['/usr/hdp/3.1.4.0-315/hadoop/conf/hadoop-metrics2.properties'] {'content': InlineTemplate(...), 'owner': 'hdfs', 'group': 'hadoop'} 2021-03-01 11:28:41,943 - File['/usr/hdp/3.1.4.0-315/hadoop/conf/task-log4j.properties'] {'content': StaticFile('task-log4j.properties'), 'mode': 0755} 2021-03-01 11:28:41,945 - File['/usr/hdp/3.1.4.0-315/hadoop/conf/configuration.xsl'] {'owner': 'hdfs', 'group': 'hadoop'} 2021-03-01 11:28:41,952 - File['/etc/hadoop/conf/topology_mappings.data'] {'owner': 'hdfs', 'content': Template('topology_mappings.data.j2'), 'only_if': 'test -d /etc/hadoop/conf', 'group': 'hadoop', 'mode': 0644} 2021-03-01 11:28:41,959 - File['/etc/hadoop/conf/topology_script.py'] {'content': StaticFile('topology_script.py'), 'only_if': 'test -d /etc/hadoop/conf', 'mode': 0755} 2021-03-01 11:28:41,966 - Skipping unlimited key JCE policy check and setup since it is not required 2021-03-01 11:28:42,439 - Stack Feature Version Info: Cluster Stack=3.1, Command Stack=None, Command Version=3.1.4.0-315 -> 3.1.4.0-315 2021-03-01 11:28:42,555 - File['/var/lib/ambari-agent/tmp/postgresql-42.2.19.jar'] {'content': DownloadSource('http://ambari.server:8080/resources/postgresql-42.2.19.jar'), 'mode': 0644} 2021-03-01 11:28:42,568 - Not downloading the file from http://ambari.server:8080/resources/postgresql-42.2.19.jar, because /var/lib/ambari-agent/tmp/postgresql-42.2.19.jar already exists 2021-03-01 11:28:42,585 - Execute[('cp', '--remove-destination', u'/var/lib/ambari-agent/tmp/postgresql-42.2.19.jar', u'/usr/hdp/current/ranger-admin/ews/lib')] {'path': ['/bin', '/usr/bin/'], 'sudo': True} 2021-03-01 11:28:42,599 - File['/usr/hdp/current/ranger-admin/ews/lib/postgresql-42.2.19.jar'] {'mode': 0644} 2021-03-01 11:28:42,600 - ModifyPropertiesFile['/usr/hdp/current/ranger-admin/install.properties'] {'owner': 'ranger', 'properties': ...} 2021-03-01 11:28:42,627 - Modifying existing properties file: /usr/hdp/current/ranger-admin/install.properties 2021-03-01 11:28:42,646 - File['/usr/hdp/current/ranger-admin/install.properties'] {'owner': 'ranger', 'content': ..., 'group': None, 'mode': None, 'encoding': 'utf-8'} 2021-03-01 11:28:42,647 - ModifyPropertiesFile['/usr/hdp/current/ranger-admin/install.properties'] {'owner': 'ranger', 'properties': {'SQL_CONNECTOR_JAR': 
u'/usr/hdp/current/ranger-admin/ews/lib/postgresql-42.2.19.jar'}} 2021-03-01 11:28:42,647 - Modifying existing properties file: /usr/hdp/current/ranger-admin/install.properties 2021-03-01 11:28:42,649 - File['/usr/hdp/current/ranger-admin/install.properties'] {'owner': 'ranger', 'content': ..., 'group': None, 'mode': None, 'encoding': 'utf-8'} 2021-03-01 11:28:42,650 - ModifyPropertiesFile['/usr/hdp/current/ranger-admin/install.properties'] {'owner': 'ranger', 'properties': {'audit_store': u'solr'}} 2021-03-01 11:28:42,651 - Modifying existing properties file: /usr/hdp/current/ranger-admin/install.properties 2021-03-01 11:28:42,653 - File['/usr/hdp/current/ranger-admin/install.properties'] {'owner': 'ranger', 'content': ..., 'group': None, 'mode': None, 'encoding': 'utf-8'} 2021-03-01 11:28:42,654 - ModifyPropertiesFile['/usr/hdp/current/ranger-admin/install.properties'] {'owner': 'ranger', 'properties': {'ranger_admin_max_heap_size': u'1g'}} 2021-03-01 11:28:42,654 - Modifying existing properties file: /usr/hdp/current/ranger-admin/install.properties 2021-03-01 11:28:42,656 - File['/usr/hdp/current/ranger-admin/install.properties'] {'owner': 'ranger', 'content': ..., 'group': None, 'mode': None, 'encoding': 'utf-8'} 2021-03-01 11:28:42,657 - Separate DBA property not set. Assuming Ranger DB and DB User exists! 2021-03-01 11:28:42,657 - Execute['ambari-python-wrap /usr/hdp/current/ranger-admin/db_setup.py'] {'logoutput': True, 'environment': {'RANGER_ADMIN_HOME': u'/usr/hdp/current/ranger-admin', 'JAVA_HOME': u'/usr/jdk64/jdk1.8.0_112'}, 'user': 'ranger'} 2021-03-01 11:28:46,000 [I] DB FLAVOR :POSTGRES 2021-03-01 11:28:46,001 [I] --------- Verifying Ranger DB connection --------- 2021-03-01 11:28:46,001 [I] Checking connection.. 2021-03-01 11:28:46,002 [JISQL] /usr/jdk64/jdk1.8.0_112/bin/java -cp /usr/hdp/current/ranger-admin/ews/lib/postgresql-42.2.19.jar:/usr/hdp/current/ranger-admin/jisql/lib/* org.apache.util.sql.Jisql -driver postgresql -cstring jdbc:postgresql://172.16.21.18/ranger -u rangeradmin -p '****' -noheader -trim -c \; -query "select 1;" 2021-03-01 11:28:46,555 [I] Checking connection passed. 
2021-03-01 11:28:46,555 [I] --------- Verifying version history table --------- 2021-03-01 11:28:46,555 [JISQL] /usr/jdk64/jdk1.8.0_112/bin/java -cp /usr/hdp/current/ranger-admin/ews/lib/postgresql-42.2.19.jar:/usr/hdp/current/ranger-admin/jisql/lib/* org.apache.util.sql.Jisql -driver postgresql -cstring jdbc:postgresql://172.16.21.18/ranger -u rangeradmin -p '****' -noheader -trim -c \; -query "select from (select table_name from information_schema.tables where table_catalog='ranger' and table_name = 'x_db_version_h') as temp;" 2021-03-01 11:28:47,294 [I] Table x_db_version_h already exists in database 'ranger' 2021-03-01 11:28:47,294 [I] --------- Importing Ranger Core DB Schema --------- 2021-03-01 11:28:47,306 [JISQL] /usr/jdk64/jdk1.8.0_112/bin/java -cp /usr/hdp/current/ranger-admin/ews/lib/postgresql-42.2.19.jar:/usr/hdp/current/ranger-admin/jisql/lib/ org.apache.util.sql.Jisql -driver postgresql -cstring jdbc:postgresql://172.16.21.18/ranger -u rangeradmin -p '****' -noheader -trim -c \; -query "select version from x_db_version_h where version = 'CORE_DB_SCHEMA' and active = 'Y';" 2021-03-01 11:28:47,847 [I] CORE_DB_SCHEMA is already imported 2021-03-01 11:28:47,848 [JISQL] /usr/jdk64/jdk1.8.0_112/bin/java -cp /usr/hdp/current/ranger-admin/ews/lib/postgresql-42.2.19.jar:/usr/hdp/current/ranger-admin/jisql/lib/* org.apache.util.sql.Jisql -driver postgresql -cstring jdbc:postgresql://172.16.21.18/ranger -u rangeradmin -p '****' -noheader -trim -c \; -query "select version from x_db_version_h where version = 'DB_PATCHES' and inst_by = 'Ranger 1.2.0.3.1.4.0-315' and active = 'Y';" 2021-03-01 11:28:48,341 [I] DB_PATCHES have already been applied 2021-03-01 11:28:48,353 - Directory['/usr/hdp/current/ranger-admin/conf'] {'owner': 'ranger', 'group': 'ranger', 'create_parents': True} 2021-03-01 11:28:48,355 - File['/var/lib/ambari-agent/tmp/postgresql-42.2.19.jar'] {'content': DownloadSource('http://ambari.server:8080/resources/postgresql-42.2.19.jar'), 'mode': 0644} 2021-03-01 11:28:48,355 - Not downloading the file from http://ambari.server:8080/resources/postgresql-42.2.19.jar, because /var/lib/ambari-agent/tmp/postgresql-42.2.19.jar already exists 2021-03-01 11:28:48,359 - Execute[('cp', '--remove-destination', u'/var/lib/ambari-agent/tmp/postgresql-42.2.19.jar', u'/usr/hdp/current/ranger-admin/ews/lib')] {'path': ['/bin', '/usr/bin/'], 'sudo': True} 2021-03-01 11:28:48,375 - File['/usr/hdp/current/ranger-admin/ews/lib/postgresql-42.2.19.jar'] {'mode': 0644} 2021-03-01 11:28:48,376 - ModifyPropertiesFile['/usr/hdp/current/ranger-admin/install.properties'] {'owner': 'ranger', 'properties': ...} 2021-03-01 11:28:48,377 - Modifying existing properties file: /usr/hdp/current/ranger-admin/install.properties 2021-03-01 11:28:48,393 - File['/usr/hdp/current/ranger-admin/install.properties'] {'owner': 'ranger', 'content': ..., 'group': None, 'mode': None, 'encoding': 'utf-8'} 2021-03-01 11:28:48,394 - ModifyPropertiesFile['/usr/hdp/current/ranger-admin/install.properties'] {'owner': 'ranger', 'properties': {'SQL_CONNECTOR_JAR': u'/usr/hdp/current/ranger-admin/ews/lib/postgresql-42.2.19.jar'}} 2021-03-01 11:28:48,395 - Modifying existing properties file: /usr/hdp/current/ranger-admin/install.properties 2021-03-01 11:28:48,397 - File['/usr/hdp/current/ranger-admin/install.properties'] {'owner': 'ranger', 'content': ..., 'group': None, 'mode': None, 'encoding': 'utf-8'} 2021-03-01 11:28:48,398 - File['/usr/lib/ambari-agent/DBConnectionVerification.jar'] {'content': 
DownloadSource('http://ambari.server:8080/resources/DBConnectionVerification.jar'), 'mode': 0644} 2021-03-01 11:28:48,404 - Not downloading the file from http://ambari.server:8080/resources/DBConnectionVerification.jar, because /var/lib/ambari-agent/tmp/DBConnectionVerification.jar already exists 2021-03-01 11:28:48,458 - Directory['/usr/lib/ambari-logsearch-logfeeder/conf'] {'create_parents': True, 'mode': 0755, 'cd_access': 'a'} 2021-03-01 11:28:48,459 - Generate Log Feeder config file: /usr/lib/ambari-logsearch-logfeeder/conf/input.config-ranger.json 2021-03-01 11:28:48,459 - File['/usr/lib/ambari-logsearch-logfeeder/conf/input.config-ranger.json'] {'content': Template('input.config-ranger.json.j2'), 'mode': 0644} 2021-03-01 11:28:48,465 - Execute['/usr/jdk64/jdk1.8.0_112/bin/java -cp /usr/lib/ambari-agent/DBConnectionVerification.jar:/usr/hdp/current/ranger-admin/ews/lib/postgresql-42.2.19.jar:/usr/hdp/current/ranger-admin/ews/lib/ org.apache.ambari.server.DBConnectionVerification 'jdbc:postgresql://172.16.21.18:5432/ranger' rangeradmin [PROTECTED] org.postgresql.Driver'] {'environment': {}, 'path': ['/usr/sbin:/sbin:/usr/local/bin:/bin:/usr/bin'], 'tries': 5, 'try_sleep': 10} 2021-03-01 11:28:49,725 - Execute[('ln', '-sf', u'/usr/hdp/current/ranger-admin/ews/webapp/WEB-INF/classes/conf', u'/usr/hdp/current/ranger-admin/conf')] {'not_if': 'ls /usr/hdp/current/ranger-admin/conf', 'sudo': True, 'only_if': 'ls /usr/hdp/current/ranger-admin/ews/webapp/WEB-INF/classes/conf'} 2021-03-01 11:28:49,739 - Skipping Execute[('ln', '-sf', u'/usr/hdp/current/ranger-admin/ews/webapp/WEB-INF/classes/conf', u'/usr/hdp/current/ranger-admin/conf')] due to not_if 2021-03-01 11:28:49,740 - Directory['/usr/hdp/current/ranger-admin/'] {'owner': 'ranger', 'group': 'ranger', 'recursive_ownership': True} 2021-03-01 11:28:50,451 - Directory['/var/run/ranger'] {'owner': 'ranger', 'group': 'hadoop', 'create_parents': True, 'mode': 0755, 'cd_access': 'a'} 2021-03-01 11:28:50,452 - Directory['/var/log/ranger/admin'] {'owner': 'ranger', 'group': 'ranger', 'create_parents': True, 'mode': 0755, 'cd_access': 'a'} 2021-03-01 11:28:50,453 - File['/usr/hdp/current/ranger-admin/conf/ranger-admin-default-site.xml'] {'owner': 'ranger', 'group': 'ranger'} 2021-03-01 11:28:50,454 - File['/usr/hdp/current/ranger-admin/conf/security-applicationContext.xml'] {'owner': 'ranger', 'group': 'ranger'} 2021-03-01 11:28:50,455 - Execute[('ln', '-sf', u'/usr/hdp/current/ranger-admin/ews/ranger-admin-services.sh', '/usr/bin/ranger-admin')] {'not_if': 'ls /usr/bin/ranger-admin', 'sudo': True, 'only_if': 'ls /usr/hdp/current/ranger-admin/ews/ranger-admin-services.sh'} 2021-03-01 11:28:50,464 - Skipping Execute[('ln', '-sf', u'/usr/hdp/current/ranger-admin/ews/ranger-admin-services.sh', '/usr/bin/ranger-admin')] due to not_if 2021-03-01 11:28:50,465 - XmlConfig['ranger-admin-site.xml'] {'group': 'ranger', 'conf_dir': '/usr/hdp/current/ranger-admin/conf', 'mode': 0644, 'configuration_attributes': {}, 'owner': 'ranger', 'configurations': ...} 2021-03-01 11:28:50,484 - Generating config: /usr/hdp/current/ranger-admin/conf/ranger-admin-site.xml 2021-03-01 11:28:50,485 - File['/usr/hdp/current/ranger-admin/conf/ranger-admin-site.xml'] {'owner': 'ranger', 'content': InlineTemplate(...), 'group': 'ranger', 'mode': 0644, 'encoding': 'UTF-8'} 2021-03-01 11:28:50,607 - Directory['/usr/hdp/current/ranger-admin/conf/ranger_jaas'] {'owner': 'ranger', 'group': 'ranger', 'mode': 0700} 2021-03-01 11:28:50,613 - 
File['/usr/hdp/current/ranger-admin/ews/webapp/WEB-INF/log4j.properties'] {'content': InlineTemplate(...), 'owner': 'ranger', 'group': 'ranger', 'mode': 0644} 2021-03-01 11:28:50,616 - Execute[(u'/usr/jdk64/jdk1.8.0_112/bin/java', '-cp', u'/usr/hdp/current/ranger-admin/cred/lib/', 'org.apache.ranger.credentialapi.buildks', 'create', u'rangeradmin', '-value', [PROTECTED], '-provider', u'jceks://file/etc/ranger/admin/rangeradmin.jceks')] {'logoutput': True, 'environment': {'JAVA_HOME': u'/usr/jdk64/jdk1.8.0_112'}, 'sudo': True} SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder". SLF4J: Defaulting to no-operation (NOP) logger implementation SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details. The alias rangeradmin already exists!! Will try to delete first. FOUND value of [interactive] field in the Class [org.apache.hadoop.security.alias.CredentialShell] = [true] Deleting credential: rangeradmin from CredentialProvider: jceks://file/etc/ranger/admin/rangeradmin.jceks Credential rangeradmin has been successfully deleted. Provider jceks://file/etc/ranger/admin/rangeradmin.jceks was updated. WARNING: You have accepted the use of the default provider password by not configuring a password in one of the two following locations:

rangeradmin has been successfully created. Provider jceks://file/etc/ranger/admin/rangeradmin.jceks was updated. 2021-03-01 11:28:52,569 - Execute[(u'/usr/jdk64/jdk1.8.0_112/bin/java', '-cp', u'/usr/hdp/current/ranger-admin/cred/lib/*', 'org.apache.ranger.credentialapi.buildks', 'create', u'trustStoreAlias', '-value', [PROTECTED], '-provider', u'jceks://file/etc/ranger/admin/rangeradmin.jceks')] {'logoutput': True, 'environment': {'JAVA_HOME': u'/usr/jdk64/jdk1.8.0_112'}, 'sudo': True} SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder". SLF4J: Defaulting to no-operation (NOP) logger implementation SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details. The alias trustStoreAlias already exists!! Will try to delete first. FOUND value of [interactive] field in the Class [org.apache.hadoop.security.alias.CredentialShell] = [true] Deleting credential: trustStoreAlias from CredentialProvider: jceks://file/etc/ranger/admin/rangeradmin.jceks Credential trustStoreAlias has been successfully deleted. Provider jceks://file/etc/ranger/admin/rangeradmin.jceks was updated. WARNING: You have accepted the use of the default provider password by not configuring a password in one of the two following locations:

trustStoreAlias has been successfully created. Provider jceks://file/etc/ranger/admin/rangeradmin.jceks was updated. 2021-03-01 11:28:53,839 - File['/etc/ranger/admin/rangeradmin.jceks'] {'owner': 'ranger', 'only_if': 'test -e /etc/ranger/admin/rangeradmin.jceks', 'group': 'ranger', 'mode': 0640} 2021-03-01 11:28:53,847 - File['/etc/ranger/admin/.rangeradmin.jceks.crc'] {'owner': 'ranger', 'only_if': 'test -e /etc/ranger/admin/.rangeradmin.jceks.crc', 'group': 'ranger', 'mode': 0640} 2021-03-01 11:28:53,853 - XmlConfig['core-site.xml'] {'group': 'ranger', 'conf_dir': '/usr/hdp/current/ranger-admin/conf', 'mode': 0644, 'configuration_attributes': {u'final': {u'fs.defaultFS': u'true'}}, 'owner': 'ranger', 'configurations': ...} 2021-03-01 11:28:53,872 - Generating config: /usr/hdp/current/ranger-admin/conf/core-site.xml 2021-03-01 11:28:53,873 - File['/usr/hdp/current/ranger-admin/conf/core-site.xml'] {'owner': 'ranger', 'content': InlineTemplate(...), 'group': 'ranger', 'mode': 0644, 'encoding': 'UTF-8'} 2021-03-01 11:28:53,970 - File['/usr/hdp/current/ranger-admin/conf/ranger-admin-env.sh'] {'owner': 'ranger', 'content': InlineTemplate(...), 'group': 'ranger', 'mode': 0755} 2021-03-01 11:28:53,977 - Execute['ambari-python-wrap /usr/hdp/current/ranger-admin/db_setup.py -javapatch'] {'logoutput': True, 'environment': {'RANGER_ADMIN_HOME': u'/usr/hdp/current/ranger-admin', 'JAVA_HOME': u'/usr/jdk64/jdk1.8.0_112'}, 'user': 'ranger'} 2021-03-01 11:28:54,452 [I] DB FLAVOR :POSTGRES 2021-03-01 11:28:54,453 [I] --------- Verifying Ranger DB connection --------- 2021-03-01 11:28:54,453 [I] Checking connection.. 2021-03-01 11:28:54,453 [JISQL] /usr/jdk64/jdk1.8.0_112/bin/java -cp /usr/hdp/current/ranger-admin/ews/lib/postgresql-42.2.19.jar:/usr/hdp/current/ranger-admin/jisql/lib/* org.apache.util.sql.Jisql -driver postgresql -cstring jdbc:postgresql://172.16.21.18/ranger -u rangeradmin -p '****' -noheader -trim -c \; -query "select 1;" 2021-03-01 11:28:54,929 [I] Checking connection passed. 2021-03-01 11:28:54,929 [JISQL] /usr/jdk64/jdk1.8.0_112/bin/java -cp /usr/hdp/current/ranger-admin/ews/lib/postgresql-42.2.19.jar:/usr/hdp/current/ranger-admin/jisql/lib/* org.apache.util.sql.Jisql -driver postgresql -cstring jdbc:postgresql://172.16.21.18/ranger -u rangeradmin -p '****' -noheader -trim -c \; -query "select version from x_db_version_h where version = 'JAVA_PATCHES' and inst_by = 'Ranger 1.2.0.3.1.4.0-315' and active = 'Y';" 2021-03-01 11:28:55,364 [I] JAVA_PATCHES have already been applied 2021-03-01 11:28:55,378 - Execute['ambari-python-wrap /usr/hdp/current/ranger-admin/db_setup.py -changepassword -pair admin [PROTECTED] [PROTECTED] -pair rangerusersync [PROTECTED] [PROTECTED] -pair rangertagsync [PROTECTED] [PROTECTED] -pair keyadmin [PROTECTED] [PROTECTED] '] {'logoutput': True, 'environment': {'RANGER_ADMIN_HOME': u'/usr/hdp/current/ranger-admin', 'JAVA_HOME': u'/usr/jdk64/jdk1.8.0_112'}, 'tries': 3, 'user': 'ranger', 'try_sleep': 5} 2021-03-01 11:28:55,861 [I] DB FLAVOR :POSTGRES 2021-03-01 11:28:55,861 [I] --------- Verifying Ranger DB connection --------- 2021-03-01 11:28:55,862 [I] Checking connection.. 2021-03-01 11:28:55,862 [JISQL] /usr/jdk64/jdk1.8.0_112/bin/java -cp /usr/hdp/current/ranger-admin/ews/lib/postgresql-42.2.19.jar:/usr/hdp/current/ranger-admin/jisql/lib/* org.apache.util.sql.Jisql -driver postgresql -cstring jdbc:postgresql://172.16.21.18/ranger -u rangeradmin -p '****' -noheader -trim -c \; -query "select 1;" 2021-03-01 11:28:56,298 [I] Checking connection passed. 
2021-03-01 11:28:56,299 [JISQL] /usr/jdk64/jdk1.8.0_112/bin/java -cp /usr/hdp/current/ranger-admin/ews/lib/postgresql-42.2.19.jar:/usr/hdp/current/ranger-admin/jisql/lib/* org.apache.util.sql.Jisql -driver postgresql -cstring jdbc:postgresql://172.16.21.18/ranger -u rangeradmin -p '****' -noheader -trim -c \; -query "select version from x_db_version_h where version = 'DEFAULT_ALL_ADMIN_UPDATE' and active = 'Y';" 2021-03-01 11:28:56,759 [I] Ranger all admins default password has already been changed!! 2021-03-01 11:28:56,772 - Directory['/var/log/ambari-infra-solr-client'] {'create_parents': True, 'mode': 0755, 'cd_access': 'a'} 2021-03-01 11:28:56,774 - Directory['/usr/lib/ambari-infra-solr-client'] {'recursive_ownership': True, 'create_parents': True, 'mode': 0755, 'cd_access': 'a'} 2021-03-01 11:28:56,775 - File['/usr/lib/ambari-infra-solr-client/solrCloudCli.sh'] {'content': StaticFile('/usr/lib/ambari-infra-solr-client/solrCloudCli.sh'), 'mode': 0755}

Command failed after 1 tries
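The Ranger failure is analogous: /usr/lib/ambari-infra-solr-client/solrCloudCli.sh is missing because the ambari-infra-solr-client package was never installed. A hedged check-and-fix sketch (assuming the package is available from your configured repository):

# On the Ranger Admin host:
rpm -q ambari-infra-solr-client           # expect "not installed" when the error above appears
ls /usr/lib/ambari-infra-solr-client/     # the agent created the directory, but solrCloudCli.sh is absent
yum install -y ambari-infra-solr-client   # if your repo provides it; otherwise install the RPM manually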

quannh-uet commented 3 years ago

You can use this RPM to re-install on the Grafana node: ambari-metrics-grafana rpm file.

niccolomondino commented 3 years ago

@quannh-uet Thanks for the file. How can I re-install it?

quannh-uet commented 3 years ago

@niccolomondino Just re-install it using the RPM package above:

# ssh into grafana node
wget https://makeopensourcegreatagain.com/rpms/ambari-metrics-grafana-2.7.5.0-0.x86_64.rpm
rpm -i ambari-metrics-grafana-2.7.5.0-0.x86_64.rpm
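Once the RPM is in place, the start command from the failed task should work; roughly (sketch):

rpm -q ambari-metrics-grafana            # verify the reinstall took
ls -l /usr/sbin/ambari-metrics-grafana   # the script Ambari invokes
# Then retry "Start" on Ambari Metrics Grafana from the Ambari UI, or run it as the ams user:
su - ams -c '/usr/sbin/ambari-metrics-grafana start'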
alinanasir01 commented 3 years ago

Hi! We are facing the same issue. Using the RPM technique mentioned above we were able to start Grafana, but the Ambari Metrics Collector still remains in a stopped state. Any idea how we can fix that?

niccolomondino commented 3 years ago

Try installing all 5 ambari-metrics RPMs onto the corresponding nodes (not just Grafana). That fixed it for me.
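For reference, a sketch of what that looks like. The grafana and hadoop-sink filenames are the ones quoted in this thread; the collector and monitor filenames are assumed to follow the same pattern and may differ in your build:

# Collector host (assumed filename, same pattern as the RPMs quoted in this thread):
wget https://makeopensourcegreatagain.com/rpms/ambari-metrics-collector-2.7.5.0-0.x86_64.rpm
rpm -Uvh ambari-metrics-collector-2.7.5.0-0.x86_64.rpm
# Every monitored host (assumed filename):
wget https://makeopensourcegreatagain.com/rpms/ambari-metrics-monitor-2.7.5.0-0.x86_64.rpm
rpm -Uvh ambari-metrics-monitor-2.7.5.0-0.x86_64.rpm
# Grafana host and hosts emitting Hadoop metrics (filenames quoted elsewhere in this thread):
rpm -Uvh ambari-metrics-grafana-2.7.5.0-0.x86_64.rpm
rpm -Uvh ambari-metrics-hadoop-sink-2.7.5.0-0.x86_64.rpm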

alinanasir01 commented 3 years ago

Thanks! It worked.

niccolomondino commented 3 years ago

Glad that it helped. Have you managed to install HUE? I've opened another issue because I'm having trouble installing it.

alinanasir01 commented 3 years ago

Sorry, I'm not using HUE.

meenzoon commented 3 years ago

When I installed the 'ambari-metrics-collector' service, it did not start. I checked the file 'https://makeopensourcegreatagain.com/repos/centos/7/ambari/2.7.5.0/ambari-metrics-hadoop-sink-2.7.5.0-0.noarch.rpm' and it appeared to be empty. So I downloaded the RPM 'https://makeopensourcegreatagain.com/rpms/ambari-metrics-hadoop-sink-2.7.5.0-0.x86_64.rpm' and executed this command on the collector server:

rpm -Uvh ambari-metrics-hadoop-sink-2.7.5.0-0.x86_64.rpm

That solved my problem.
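For anyone hitting the same thing, a quick way to confirm the download is a valid, non-empty RPM before installing (sketch):

ls -lh ambari-metrics-hadoop-sink-2.7.5.0-0.x86_64.rpm    # size should be well above zero
rpm -qpi ambari-metrics-hadoop-sink-2.7.5.0-0.x86_64.rpm  # prints package metadata only if the file is a valid RPM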

Hoping this helps someone.