Closed: SGITLOGIN closed this issue 7 months ago
The second question:
The ranger-kms install failed. ODP should have copied mysql-connector-java.jar into the /usr/odp/current/ranger-kms/ews/webapp/lib/ directory, but in reality mysql-connector-java.jar was copied to the path /usr/odp/current/ranger-kms/ews/webapp/lib itself, so lib became a file instead of a directory.
@lucasbak Could you please take a look at the problem? Thank you.
Hi @SGITLOGIN,
Thanks for the report. We also identified a ranger-tagsync problem with the PostgreSQL connector. Indeed, the directory tree for the connector should not contain webapp. We will reproduce this internally.
Best regards
@lucasbak Solutions to two problems
The solution to the first problem: mkdir -p /etc/spark2/conf
The solution to the second problem:
mv /usr/odp/current/ranger-kms/ews/webapp/lib /usr/odp/current/ranger-kms/ews/webapp/lib_bak
mkdir /usr/odp/current/ranger-kms/ews/webapp/lib
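The second workaround can be wrapped in a small guarded script. This is only a sketch: the lib path assumes the default ODP layout, and the final copy of the JDBC driver (shown as a comment) assumes the connector lives at /usr/share/java/mysql-connector-java.jar, which may differ on your system.

```shell
# Sketch of the ranger-kms workaround: if "lib" was created as a file,
# move it aside and recreate it as a directory. Paths are assumptions
# based on a default ODP layout; the function is a no-op otherwise.
fix_kms_lib() {
  lib="$1"
  if [ -f "$lib" ]; then
    mv "$lib" "${lib}_bak"   # keep the stray file around for inspection
    mkdir "$lib"
  fi
}
fix_kms_lib "${KMS_LIB:-/usr/odp/current/ranger-kms/ews/webapp/lib}"
# Afterwards, copy the JDBC driver into the recreated directory, e.g.:
#   cp /usr/share/java/mysql-connector-java.jar /usr/odp/current/ranger-kms/ews/webapp/lib/
```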
Will you incorporate fixes for these two issues into subsequent versions?
@SGITLOGIN ,
Yes, and we will also add the fix for ranger-tagsync.
@SGITLOGIN
On which operating system are you installing the packages?
@lucasbak CentOS 7.9
@SGITLOGIN
We will try to reproduce on centos 7.9
Okay, thank you very much
@lucasbak Could you please also install the Ranger KMS component when installing the cluster? My installation of the Ranger KMS component has also failed
@lucasbak Here are all the components I installed. When you install the cluster, please install all of the following components.
HDFS
YARN + MapReduce2, Hive, Tez, Atlas, Kafka, HBase, Ranger, Infra Solr, Ranger KMS, ZooKeeper, Ambari Metrics, Spark2, Zeppelin Notebook, Flink
@lucasbak Phoenix Query Server startup failed!!! It reports a missing-file error: /usr/odp/current/phoenix-server/bin/queryserver.py
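A quick way to confirm what the error says is to check whether the launcher script actually exists and is executable. A minimal sketch; the path is taken from the error message above:

```shell
# Report whether the Phoenix Query Server launcher is present and executable.
check_phoenix_qs() {
  qs="$1"
  if [ ! -e "$qs" ]; then
    echo "missing"
  elif [ ! -x "$qs" ]; then
    echo "present but not executable"
  else
    echo "ok"
  fi
}
check_phoenix_qs /usr/odp/current/phoenix-server/bin/queryserver.py
```

If the script reports "missing", the phoenix RPM likely did not ship the file, which matches the build problem described in this thread.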
> Could you please also install the Ranger KMS component when installing the cluster?

Yes, will do.
@lucasbak Atlas startup failed. Please analyze this problem as well.
@SGITLOGIN Which version of Ambari do you use ?
@lucasbak odp:1.2.2.0-50 odp-utils:1.2.2.0 ambari:2.7.9.0.0-16
@SGITLOGIN
Alright, will reproduce your installation and keep you up to date
Best regards
@lucasbak There is another question. The installed ODP version is 1.2.2.0-50, but there is also a 1.2.2.0-53 directory under /usr/odp/. Is this normal?
@SGITLOGIN.
This is not normal. Did you mix up your repository files?
Check the files in /etc/yum.repos.d/.
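To narrow down where the extra 1.2.2.0-53 directory could come from, you can compare the version directories under the ODP root against the build you expect, then inspect the configured repos. A diagnostic sketch, assuming the default ODP layout:

```shell
# List version directories under the ODP root that differ from the expected
# build. Purely diagnostic; does nothing if the root does not exist.
check_odp_dirs() {
  root="$1"; expected="$2"
  [ -d "$root" ] || return 0
  for d in "$root"/*/; do
    name=$(basename "$d")
    case "$name" in
      current|"$expected") ;;                       # expected entries
      *) echo "unexpected version dir: $name" ;;
    esac
  done
}
check_odp_dirs /usr/odp 1.2.2.0-50
# Then check which repositories are actually configured:
#   grep -H baseurl /etc/yum.repos.d/*.repo
```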
@lucasbak No, I suspect it was added accidentally when the installation package was built.
@lucasbak I installed it using a local repository.
We are currently deploying a new cluster and will reproduce your install from the WebUI. No worries ;-)
OK
@lucasbak The Spark2 Thrift Server service started successfully, but after a while it was shown as being in a failed state.
Do you require Spark2 instead of Spark3?
Yes, our Spark application code is currently written for Spark 2.x.
@lucasbak Here are all the components I installed. When you install the cluster, please install all of the following components.
Choose File System
HDFS
Choose Services
YARN + MapReduce2, Hive, Tez, Atlas, Kafka, HBase, Ranger, Infra Solr, Ranger KMS, ZooKeeper, Ambari Metrics, Spark2, Zeppelin Notebook, Flink
@SGITLOGIN
OK. Do you use MySQL/MariaDB as the backend for all services?
@lucasbak Yes, MySQL.
Ok. We are currently reproducing every error internally and fixing them. It may take time, as we may need to rebuild the RPMs.
Ok
@SGITLOGIN
the new version for both ambari and odp stack should be ready next days
Thanks for your support :)
@lucasbak Ok,I have one more request.
@SGITLOGIN
- We have successfully reproduced and found solution for ranger-tagsync not installing/starting
- We have successfully reproduced and found solution for ranger-kms starting
- We have successfully reproduced and found solution for spark2-client install
- We have successfully reproduced and found solution for atlas metadata server not starting
- We have successfully reproduced and found solution for Phoenix Queryserver not starting
@lucasbak
@lucasbak The YARN application logs cannot be viewed; the error is: "Logs are unavailable because Application Timeline Service seems unhealthy and could not connect to the JobHistory server." Please analyze this issue too.
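That message suggests the Application Timeline Server itself is down or unreachable. A minimal probe against the ATS v1 REST endpoint; the port 8188 is the YARN default and the host name here is an assumption, so substitute your timeline server host:

```shell
# Probe the Application Timeline Service REST API; prints "healthy" when the
# endpoint answers, "unhealthy" otherwise. 8188 is the default ATS v1 port.
check_ats() {
  host="$1"
  if curl -fsS --max-time 5 "http://${host}:8188/ws/v1/timeline" >/dev/null 2>&1; then
    echo "healthy"
  else
    echo "unhealthy"
  fi
}
check_ats "${ATS_HOST:-localhost}"
```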
@lucasbak Hi, spark3-shell fails with "Error: Missing application resource." You said "It's a identified bug fixed in later version of ODP 1.2. Spark-shell is not rightly rendered". Has this issue not been fixed yet?
@lucasbak When the ODP version is 1.2.2.0-50, the hadoop-aliyun jar is hadoop-aliyun-3.3.6.1.2.2.0-50.jar; when the ODP version is 1.2.1.0-134, it is hadoop-aliyun-3.3.4.1.2.1.0-134.jar.
The question is as follows: with ODP 1.2.1.0-134, putting hadoop-aliyun-3.3.4.1.2.1.0-134.jar in the /usr/odp/current/spark3-client/jars directory works, and Spark can access OSS without problems. With ODP 1.2.2.0-50, placing hadoop-aliyun-3.3.6.1.2.2.0-50.jar in the /usr/odp/current/spark3-client/jars directory makes Spark report an error when accessing OSS.
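One likely cause is a Hadoop client version mismatch: hadoop-aliyun must match the Hadoop version the Spark build bundles (3.3.4 in ODP 1.2.1.0-134 versus 3.3.6 in 1.2.2.0-50, judging from the jar names). A small helper to compare the base Hadoop versions embedded in the jar names; the version layout is inferred from the names quoted above, not from any ODP documentation:

```shell
# Extract the leading Hadoop version (x.y.z) from an ODP-style jar name,
# e.g. hadoop-aliyun-3.3.6.1.2.2.0-50.jar -> 3.3.6
hadoop_ver_of_jar() {
  basename "$1" | sed -E 's/^[a-z-]+-([0-9]+\.[0-9]+\.[0-9]+)\..*$/\1/'
}
# Example: compare the aliyun connector against one of Spark's bundled
# hadoop-* jars in /usr/odp/current/spark3-client/jars to spot mismatches.
aliyun=$(hadoop_ver_of_jar hadoop-aliyun-3.3.6.1.2.2.0-50.jar)
echo "hadoop-aliyun built for Hadoop $aliyun"
```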
@SGITLOGIN ,
All issues have been taken into account and will be shipped in the next build.
For the YARN logs, we still need to reproduce the problem.
However, regarding hadoop-aliyun: as it is specific to your cluster, we first need to discuss it internally. Such changes are normally reserved for support customers.
Will keep you up to date when the build is released :).
Best regards
@lucasbak Ok. But regarding the hadoop-aliyun package, I think you should also consider the case where Spark accesses Hive tables whose underlying data is stored on OSS, so the hadoop-aliyun package you provide should also match the Spark version.
@lucasbak Will the next version of ODP fix all the problems I mentioned in this issue? When will the next version of ODP be released?
@SGITLOGIN
Alright. The next version of ODP 1.2.2.0 with all the fixes will be available before the end of the week. Will keep you up to date.
@SGITLOGIN ,
For the logs, you can use the YARN UI v1; it will work.
@lucasbak Sorry, I don't understand what "YARN UI v1" is. Can you give an example or a screenshot?
@SGITLOGIN ,
For the logs, you can use the YARN UI v1; it will work.
@lucasbak OK, thank you
@lucasbak There are two more issues with Hive that need to be reproduced here.
@SGITLOGIN ,
We have identified the issue with the DEBUG level. Can you check whether the HiveServer2 and Hive Metastore logs are also at DEBUG?
@lucasbak The HiveServer2 and Hive Metastore log levels are also DEBUG.
@SGITLOGIN ,
As a workaround, try creating the file /etc/hive/conf/logback.xml
with the content from https://github.com/clemlabprojects/ambari/blob/5598b04ff598d115af16c987a1ed3978cd46b7a8/ambari-server/src/main/resources/stacks/ODP/1.0/services/HIVE/package/templates/zookeeper-logback.xml.j2
replace zookeeper_log_level with INFO,
then restart the Hive service.
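The steps above can be sketched as a small render helper. Note this is an assumption-laden sketch: it presumes the template uses the Jinja-style `{{ zookeeper_log_level }}` placeholder, so inspect the downloaded file and adjust before use:

```shell
# Render the logback template: substitute the assumed Jinja-style
# {{ zookeeper_log_level }} placeholder with a concrete log level.
render_logback() {
  src="$1"; dest="$2"; level="$3"
  sed "s/{{[[:space:]]*zookeeper_log_level[[:space:]]*}}/${level}/g" "$src" > "$dest"
}
# Usage on the cluster (fetch the raw template first, then render it):
#   curl -fsSL -o /tmp/logback.xml.j2 <raw URL of zookeeper-logback.xml.j2>
#   render_logback /tmp/logback.xml.j2 /etc/hive/conf/logback.xml INFO
#   # then restart the Hive service from Ambari
```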
Installed versions: odp 1.2.2.0-50, odp-utils 1.2.2.0, ambari 2.7.9.0.0-16
The first question: the /etc/spark2/conf directory was not generated during the installation of the Spark2 client!