Description

I tried to use the Hive integration with Hive on Tez, but got the following error:
java.lang.RuntimeException: java.lang.RuntimeException: class org.opensearch.hadoop.mr.OpenSearchOutputFormat$OpenSearchOutputCommitter not org.apache.hadoop.mapred.OutputCommitter
    at org.apache.hadoop.conf.Configuration.getClass(Configuration.java:2734)
    at org.apache.tez.mapreduce.committer.MROutputCommitter.getOutputCommitter(MROutputCommitter.java:143)
    at org.apache.tez.mapreduce.committer.MROutputCommitter.initialize(MROutputCommitter.java:82)
    at org.apache.tez.dag.app.dag.impl.VertexImpl$2.run(VertexImpl.java:2452)
    at org.apache.tez.dag.app.dag.impl.VertexImpl$2.run(VertexImpl.java:2431)
    at java.security.AccessController.doPrivileged(Native Method)
The code assumed that we want to use the old API and not the new one: https://github.infra.cloudera.com/CDH/tez/blob/cdpd-master/tez-mapreduce/src/main/java/org/apache/tez/mapreduce/committer/MROutputCommitter.java#L117-L124

I tried to specify that I want to use the new API, but I only got a new error message:
ERROR : Failed to execute tez graph.
java.lang.RuntimeException: java.lang.RuntimeException: class org.opensearch.hadoop.mr.OpenSearchOutputFormat$OpenSearchOutputCommitter not org.apache.hadoop.mapred.OutputCommitter
    at org.apache.hadoop.conf.Configuration.getClass(Configuration.java:2734)
    at org.apache.hadoop.mapred.JobConf.getOutputCommitter(JobConf.java:725)
    at java.util.Optional.map(Optional.java:215)
    at org.apache.hadoop.hive.ql.exec.tez.TezTask.collectCommitInformation(TezTask.java:381)
I think at this point the old API is required: https://github.com/apache/hive/blob/master/ql/src/java/org/apache/hadoop/hive/ql/exec/tez/TezTask.java#L366

Later I realised that the problem is here: https://github.com/opensearch-project/opensearch-hadoop/blob/main/hive/src/main/java/org/opensearch/hadoop/hive/OpenSearchStorageHandler.java#L120

Configuration cfg = getConf();
// NB: we can't just merge the table properties in, we need to save them per input/output otherwise clashes occur which confuse Hive
Settings settings = HadoopSettingsManager.loadFrom(cfg);
//settings.setProperty((read ? HiveConstants.INPUT_TBL_PROPERTIES : HiveConstants.OUTPUT_TBL_PROPERTIES), IOUtils.propsToString(tableDesc.getProperties()));
if (read) {
    // no generic setting
}
else {
    // replace the default committer when using the old API
    HadoopCfgUtils.setOutputCommitterClass(cfg, OpenSearchOutputFormat.OpenSearchOutputCommitter.class.getName());
}
We do specify the output committer class here, but it is useless: the committer is already set implicitly. The comment is also misleading: according to it this branch is for the old API, yet OpenSearchOutputCommitter implements the new API. And I don't know why the read property is checked here; it has nothing to do with the old/new API distinction.
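To illustrate why the explicit registration is both redundant and harmful, here is a minimal self-contained sketch of the two sides of the mismatch. The class names below (OldApiCommitter, DefaultOldApiCommitter, OpenSearchLikeCommitter, oldApiCommitter) are hypothetical stand-ins for the real Hadoop and opensearch-hadoop types, and the lookup method only mimics the assignability check that JobConf.getOutputCommitter / Configuration.getClass perform: with no explicit override, the old-API path falls back to its own default committer and succeeds; once a new-API committer is pinned under the old-API key, the old-API lookup fails with exactly the kind of "class X not Y" error shown above.

```java
import java.util.HashMap;
import java.util.Map;

// Stand-ins for the two unrelated committer hierarchies: the old mapred API
// and the new mapreduce API each define their own OutputCommitter base class,
// and one is NOT assignable to the other.
abstract class OldApiCommitter {}                        // ~ org.apache.hadoop.mapred.OutputCommitter
class DefaultOldApiCommitter extends OldApiCommitter {}  // ~ the framework's default file committer
abstract class NewApiCommitter {}                        // ~ org.apache.hadoop.mapreduce.OutputCommitter
class OpenSearchLikeCommitter extends NewApiCommitter {} // ~ OpenSearchOutputFormat.OpenSearchOutputCommitter

public class CommitterLookup {
    static final String KEY = "mapred.output.committer.class"; // the old-API config key

    // Mimics the old-API lookup: read the configured class, defaulting to the
    // old-API committer, then verify it is assignable to the old-API base class.
    static Class<? extends OldApiCommitter> oldApiCommitter(Map<String, Class<?>> conf) {
        Class<?> c = conf.getOrDefault(KEY, DefaultOldApiCommitter.class);
        if (!OldApiCommitter.class.isAssignableFrom(c)) {
            // Hadoop reports this as: "class <configured> not <expected>"
            throw new RuntimeException("class " + c.getName() + " not " + OldApiCommitter.class.getName());
        }
        return c.asSubclass(OldApiCommitter.class);
    }

    public static void main(String[] args) {
        Map<String, Class<?>> conf = new HashMap<>();

        // Without the explicit override, the old-API default is used and the check passes.
        System.out.println(oldApiCommitter(conf).getSimpleName());

        // The storage handler's override pins a NEW-API committer under the OLD-API key...
        conf.put(KEY, OpenSearchLikeCommitter.class);
        try {
            oldApiCommitter(conf); // ...so the old-API lookup in Hive/Tez now fails.
        } catch (RuntimeException e) {
            System.out.println(e.getMessage());
        }
    }
}
```

This mirrors the behaviour reported above: the lookup only breaks once the override is present, which is why removing the explicit setOutputCommitterClass call resolves the error.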
Issues Resolved
Removing this piece of code solved the problem for me.
By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license. For more information on following the Developer Certificate of Origin and signing off your commits, please check here.