Alluxio / alluxio

Alluxio, data orchestration for analytics and machine learning in the cloud
https://www.alluxio.io
Apache License 2.0

SparkSql query Hudi received kryo exception #17259

Open rhh777 opened 1 year ago

rhh777 commented 1 year ago

Alluxio Version: 2.9.0-1

Describe the bug

Sparksql version: 3.1.2
Hudi version: 0.11.1

The Hudi data is stored on Alluxio. When I query the Hudi table with Spark SQL and Kryo serialization enabled, I get an exception: KryoException: java.lang.UnsupportedOperationException

To Reproduce

spark-sql --packages org.apache.hudi:hudi-spark3.1-bundle_2.12:0.11.1 \
--conf 'spark.serializer=org.apache.spark.serializer.KryoSerializer' \
--conf 'spark.sql.extensions=org.apache.spark.sql.hudi.HoodieSparkSessionExtension'

create table if not exists hudi_cow_test (
  uuid int,
  name string,
  price double
) using hudi
tblproperties (
  primaryKey = 'uuid'
) location 'alluxio:///pathto/hudi_cow_test';

insert into table hudi_cow_test select 2,'aa',1.22;

select * from hudi_cow_test;
Caused by: org.apache.spark.SparkException: Job aborted due to stage failure: Exception while getting task result: com.esotericsoftware.kryo.KryoException: java.lang.UnsupportedOperationException
Serialization trace:
mTiers (alluxio.wire.TieredIdentity)
mTieredIdentity (alluxio.wire.WorkerNetAddress)
mWorkerAddress (alluxio.wire.BlockLocation)
mLocations (alluxio.wire.BlockInfo)
mBlockInfo (alluxio.wire.FileBlockInfo)
mFileBlockInfoList (alluxio.wire.FileInfo)
mInfo (alluxio.client.file.URIStatus)
mUriStatus (alluxio.hadoop.AlluxioFileStatus)
right (org.apache.hudi.common.util.collection.ImmutablePair)
    at org.apache.spark.scheduler.DAGScheduler.failJobAndIndependentStages(DAGScheduler.scala:2259)
    at org.apache.spark.scheduler.DAGScheduler.$anonfun$abortStage$2(DAGScheduler.scala:2208)
    at org.apache.spark.scheduler.DAGScheduler.$anonfun$abortStage$2$adapted(DAGScheduler.scala:2207)
    at scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:62)
    at scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:55)
    at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:49)
    at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:2207)
    at org.apache.spark.scheduler.DAGScheduler.$anonfun$handleTaskSetFailed$1(DAGScheduler.scala:1079)
    at org.apache.spark.scheduler.DAGScheduler.$anonfun$handleTaskSetFailed$1$adapted(DAGScheduler.scala:1079)
    at scala.Option.foreach(Option.scala:407)
    at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:1079)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:2446)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2388)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2377)
    at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:49)
    at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:868)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:2196)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:2217)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:2236)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:2261)
    at org.apache.spark.rdd.RDD.$anonfun$collect$1(RDD.scala:1030)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
    at org.apache.spark.rdd.RDD.withScope(RDD.scala:414)
    at org.apache.spark.rdd.RDD.collect(RDD.scala:1029)
    at org.apache.spark.api.java.JavaRDDLike.collect(JavaRDDLike.scala:362)
    at org.apache.spark.api.java.JavaRDDLike.collect$(JavaRDDLike.scala:361)
    at org.apache.spark.api.java.AbstractJavaRDDLike.collect(JavaRDDLike.scala:45)
    at org.apache.hudi.client.common.HoodieSparkEngineContext.map(HoodieSparkEngineContext.java:103)
    at org.apache.hudi.metadata.FileSystemBackedTableMetadata.getAllPartitionPaths(FileSystemBackedTableMetadata.java:86)
    at org.apache.hudi.common.fs.FSUtils.getAllPartitionPaths(FSUtils.java:313)
    ... 117 more

Expected behavior: No exception.

Urgency: Hudi is unusable on Alluxio 2.9.0-1.

Are you planning to fix it? Change the mTiers field of the TieredIdentity object to another List implementation.

Additional context: Before upgrading to Alluxio 2.9.0-1, I used 2.7.3-1 and everything worked normally.

rhh777 commented 1 year ago

https://github.com/Alluxio/alluxio/blob/master/core/common/src/main/java/alluxio/wire/TieredIdentity.java#L45 When I change the mTiers of the TieredIdentity object to another List, e.g. ArrayList, it works:

public TieredIdentity(@JsonProperty("tiers") List<LocalityTier> tiers) {
  // Original: mTiers = ImmutableList.copyOf(Preconditions.checkNotNull(tiers, "tiers"));
  // A mutable ArrayList lets Kryo populate the field during deserialization:
  mTiers = new ArrayList<>(tiers);
}
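
For context, the root cause is that Kryo's default collection serializer reconstructs a List by instantiating it and then calling add() for each element, which an immutable list rejects with UnsupportedOperationException. A minimal sketch of that behavior, using the JDK's unmodifiable List.of() as a stand-in for the Guava ImmutableList that TieredIdentity uses (names here are illustrative only):

```java
import java.util.ArrayList;
import java.util.List;

public class ImmutableListDemo {
    public static void main(String[] args) {
        // Kryo-style deserialization populates a collection element by
        // element via add(). An immutable list rejects that call.
        List<String> immutable = List.of("node=host1", "rack=rack1");
        boolean threw = false;
        try {
            immutable.add("extra");
        } catch (UnsupportedOperationException e) {
            threw = true; // this is the exception surfaced in the stack trace above
        }
        System.out.println("immutable add() threw: " + threw);

        // The workaround above swaps in a mutable ArrayList, which
        // accepts add(), so deserialization can fill it in.
        List<String> mutable = new ArrayList<>(immutable);
        mutable.add("extra");
        System.out.println("mutable size: " + mutable.size());
    }
}
```

An alternative to changing the field type would be registering a Kryo serializer that handles immutable collections, but the ArrayList change is the simpler fix for this report.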
huanghua78 commented 1 year ago

Thank you @rhh777 for reporting this issue and also proposing a fix. We will have Alluxio engineers verify the bug and the patch.

huanghua78 commented 1 year ago

@jiacheliu3 Can you please take a look at this issue?

jodang99 commented 1 year ago

Hi there, I hit the same issue. Could you let me know when this issue is expected to be resolved?

github-actions[bot] commented 1 year ago

This issue has been automatically marked as stale because it has not had recent activity. It will be closed in two weeks if no further activity occurs. Thank you for your contributions.