apache / amoro

Apache Amoro (incubating) is a Lakehouse management system built on open data lake formats.
https://amoro.apache.org/
Apache License 2.0

[Bug]: Deserialization error when merging small files using the Spark engine #2892

Closed. daxiguaA666 closed this issue 4 weeks ago.

daxiguaA666 commented 1 month ago

What happened?

Deserialization error when merging small files using the Spark engine. (WeCom screenshot attached)

Affects Versions

master

What engines are you seeing the problem on?

Spark

How to reproduce

The Iceberg version is 1.4.2. The Spark engine ran normally for a long time until data was written to a table with Iceberg 1.0.0; after that, every merge task failed. We then deleted all the tables, created a new table with Iceberg 1.4.2, and wrote data to it, but the merge tasks still failed.

Relevant log output

java.lang.IllegalArgumentException: deserialization error 
    at org.apache.amoro.utils.SerializationUtil.simpleDeserialize(SerializationUtil.java:76)
    at org.apache.amoro.optimizer.common.OptimizerExecutor.executeTask(OptimizerExecutor.java:133)
    at org.apache.amoro.optimizer.spark.SparkOptimizingTaskFunction.call(SparkOptimizingTaskFunction.java:45)
    at org.apache.amoro.optimizer.spark.SparkOptimizingTaskFunction.call(SparkOptimizingTaskFunction.java:33)
    at org.apache.spark.api.java.JavaPairRDD$.$anonfun$toScalaFunction$1(JavaPairRDD.scala:1070)
    at scala.collection.Iterator$$anon$10.next(Iterator.scala:461)
    at scala.collection.Iterator.foreach(Iterator.scala:943)
    at scala.collection.Iterator.foreach$(Iterator.scala:943)
    at scala.collection.AbstractIterator.foreach(Iterator.scala:1431)
    at scala.collection.generic.Growable.$plus$plus$eq(Growable.scala:62)
    at scala.collection.generic.Growable.$plus$plus$eq$(Growable.scala:53)
    at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:105)
    at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:49)
    at scala.collection.TraversableOnce.to(TraversableOnce.scala:366)
    at scala.collection.TraversableOnce.to$(TraversableOnce.scala:364)
    at scala.collection.AbstractIterator.to(Iterator.scala:1431)
    at scala.collection.TraversableOnce.toBuffer(TraversableOnce.scala:358)
    at scala.collection.TraversableOnce.toBuffer$(TraversableOnce.scala:358)
    at scala.collection.AbstractIterator.toBuffer(Iterator.scala:1431)
    at scala.collection.TraversableOnce.toArray(TraversableOnce.scala:345)
    at scala.collection.TraversableOnce.toArray$(TraversableOnce.scala:339)
    at scala.collection.AbstractIterator.toArray(Iterator.scala:1431)
    at org.apache.spark.rdd.RDD.$anonfun$collect$2(RDD.scala:1021)
    at org.apache.spark.SparkContext.$anonfun$runJob$5(SparkContext.scala:2278)
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
    at org.apache.spark.scheduler.Task.run(Task.scala:136)
    at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:548)
    at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1504)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:551)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
Caused by: java.io.InvalidClassException: org.apache.iceberg.BaseFile; local class incompatible: stream classdesc serialVersionUID = -5355849892016662001, local class serialVersionUID = 5090992017134487949
    at java.io.ObjectStreamClass.initNonProxy(ObjectStreamClass.java:699)
    at java.io.ObjectInputStream.readNonProxyDesc(ObjectInputStream.java:1885)
    at java.io.ObjectInputStream.readClassDesc(ObjectInputStream.java:1751)
    at java.io.ObjectInputStream.readNonProxyDesc(ObjectInputStream.java:1885)
    at java.io.ObjectInputStream.readClassDesc(ObjectInputStream.java:1751)
    at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2042)
    at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1573)
    at java.io.ObjectInputStream.readArray(ObjectInputStream.java:1975)
    at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1567)
    at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2287)
    at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2211)
    at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2069)
    at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1573)
    at java.io.ObjectInputStream.readObject(ObjectInputStream.java:431)
    at org.apache.amoro.utils.SerializationUtil.simpleDeserialize(SerializationUtil.java:73)
    ... 31 more
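
For context on the Caused by line: plain Java serialization records each class's serialVersionUID in the byte stream, and ObjectInputStream.readObject() throws java.io.InvalidClassException when that recorded value disagrees with the class found on the reader's classpath. The sketch below is illustrative code, not Amoro's actual SerializationUtil, but it shows the kind of round trip involved: a task payload serialized against one version of org.apache.iceberg.BaseFile cannot be read back against an incompatible one.

    import java.io.ByteArrayInputStream;
    import java.io.ByteArrayOutputStream;
    import java.io.IOException;
    import java.io.ObjectInputStream;
    import java.io.ObjectOutputStream;

    public class SimpleSerde {
        // Serialize with plain Java serialization, which embeds each class's
        // serialVersionUID in the stream alongside the object data.
        static byte[] serialize(Object obj) throws IOException {
            ByteArrayOutputStream bos = new ByteArrayOutputStream();
            try (ObjectOutputStream out = new ObjectOutputStream(bos)) {
                out.writeObject(obj);
            }
            return bos.toByteArray();
        }

        // Deserialize; readObject() compares the serialVersionUID recorded in
        // the stream with that of the local class and throws
        // java.io.InvalidClassException on mismatch -- the failure in the log.
        @SuppressWarnings("unchecked")
        static <T> T deserialize(byte[] bytes) throws IOException, ClassNotFoundException {
            try (ObjectInputStream in = new ObjectInputStream(new ByteArrayInputStream(bytes))) {
                return (T) in.readObject();
            }
        }
    }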


klion26 commented 1 month ago

It seems the serialVersionUID of org.apache.iceberg.BaseFile does not match. Could you please check the serialVersionUID of org.apache.iceberg.BaseFile in the Iceberg 1.4.2 and 1.0.0 jar files?
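
One way to check is the JDK's serialver tool (serialver -classpath <iceberg-jar> org.apache.iceberg.BaseFile), or a small reflective probe like the sketch below. The jar names in the comments are placeholders; you may need the full Iceberg classpath for the class to link.

    import java.io.ObjectStreamClass;

    public class CheckSerialVersionUid {
        public static void main(String[] args) throws Exception {
            // Run once per Iceberg version, e.g.
            //   java -cp iceberg-core-1.0.0.jar:. CheckSerialVersionUid
            //   java -cp iceberg-core-1.4.2.jar:. CheckSerialVersionUid
            // (jar paths are placeholders for wherever the jars live)
            Class<?> cls = Class.forName("org.apache.iceberg.BaseFile", false,
                CheckSerialVersionUid.class.getClassLoader());
            ObjectStreamClass osc = ObjectStreamClass.lookup(cls);
            // lookup() returns null if the class is not serializable.
            System.out.println(cls.getName() + " serialVersionUID = "
                + (osc == null ? "n/a" : osc.getSerialVersionUID()));
        }
    }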

daxiguaA666 commented 4 weeks ago

I cannot inspect BaseFile in the Iceberg 1.0.0 jar, but I cleared the database, rebuilt the table, and wrote to it with Iceberg 1.4.2, and the deserialization problem with the rewritten files still exists.

daxiguaA666 commented 4 weeks ago

(screenshot attached)

daxiguaA666 commented 4 weeks ago

OK, the problem is solved. The cause was an extra Iceberg dependency jar in the Spark jars directory.
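
For anyone hitting the same symptom: a quick way to confirm which jar a class is actually loaded from is to print its code source, as in this hypothetical diagnostic (not part of Amoro). If the printed path points at an unexpected Iceberg jar under the Spark jars directory, that duplicate dependency is the culprit.

    import java.net.URL;
    import java.security.CodeSource;

    public class WhichJar {
        public static void main(String[] args) throws Exception {
            Class<?> cls = Class.forName("org.apache.iceberg.BaseFile");
            // getCodeSource() can be null for bootstrap-loaded classes,
            // but application jars normally report their location.
            CodeSource src = cls.getProtectionDomain().getCodeSource();
            URL location = (src == null) ? null : src.getLocation();
            System.out.println("BaseFile loaded from: " + location);
        }
    }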