apache / incubator-gluten

Gluten is a middle layer responsible for offloading JVM-based SQL engines' execution to native engines.
https://gluten.apache.org/
Apache License 2.0

[VL] Spark executes multiple TPC-DS SQL queries (TPC-DS throughput "tt" test mode); Gluten reports an error or crashes #4196

Open yixi-gu opened 8 months ago

yixi-gu commented 8 months ago

Backend

VL (Velox)

Bug description

Expected behavior: vanilla Spark passes the TPC-DS throughput ("tt") test mode, in which the 99 TPC-DS queries are executed at the same time, with no errors reported. Actual behavior: with Gluten (Velox backend) enabled, the same run reports errors or the executor crashes; see the logs below.
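
The reporter's test harness is not shown; as a rough illustration only, here is a minimal Scala sketch (not the actual harness; the query directory and file layout are hypothetical) of what a throughput-mode run does: all 99 query files are submitted concurrently against a single SparkSession, so stages from many queries shuffle at the same time.

import java.nio.file.{Files, Paths}
import scala.collection.JavaConverters._
import scala.concurrent.{Await, Future}
import scala.concurrent.duration.Duration
import scala.concurrent.ExecutionContext.Implicits.global
import org.apache.spark.sql.SparkSession

object TpcdsThroughputRun {
  def main(args: Array[String]): Unit = {
    // Gluten/Velox settings are assumed to come from spark-defaults.conf (see below).
    val spark = SparkSession.builder().appName("tpcds-tt").getOrCreate()

    // Hypothetical directory holding q1.sql .. q99.sql.
    val queryDir = Paths.get("/path/to/tpcds-queries")
    val queries = Files.list(queryDir).iterator().asScala.toSeq.sortBy(_.getFileName.toString)

    // Submit every query concurrently; Spark's scheduler interleaves the jobs.
    val futures = queries.map { path =>
      Future {
        val sql = new String(Files.readAllBytes(path), "UTF-8")
        spark.sql(sql).collect()
      }
    }
    futures.foreach(f => Await.result(f, Duration.Inf))
    spark.stop()
  }
}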

Spark version

None

Spark configurations

spark.master yarn
spark.deploy-mode client
spark.eventLog.enabled true
spark.eventLog.dir hdfs://master:9000/sparklogs
spark.driver.cores 8
spark.driver.memory 9g
spark.driver.maxResultSize 27g
spark.executor.instances 24
spark.executor.cores 4
spark.executor.memory 6g
spark.executor.memoryOverhead 1g
spark.memory.offHeap.enabled true
spark.gluten.enabled true
spark.plugins io.glutenproject.GlutenPlugin
spark.gluten.sql.columnar.backend.lib velox
spark.shuffle.manager org.apache.spark.shuffle.sort.ColumnarShuffleManager
spark.executorEnv.VELOX_HDFS hdfs://master:9000
spark.gluten.loadLibFromJar true
spark.io.compression.codec lz4
spark.memory.offHeap.size 10g
spark.task.maxFailures 1
spark.default.parallelism 192
spark.kryoserializer.buffer 288k
spark.memory.fraction 0.55
spark.shuffle.file.buffer 128k
spark.sql.adaptive.maxShuffledHashJoinLocalMapThreshold 8m
spark.sql.autoBroadcastJoinThreshold 1m
spark.sql.broadcastTimeout 6900
spark.sql.files.maxPartitionBytes 384m
spark.sql.files.openCostInBytes 3m
spark.sql.shuffle.partitions 192
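
One sizing note (an observation, not from the report): on Spark 3.x with YARN, spark.memory.offHeap.size is added to the executor container request, so the settings above imply roughly the following per-executor container; any native memory Velox needs beyond the off-heap pool has to fit inside the 1g overhead, or YARN may kill the container.

# Approximate per-executor YARN container request (assuming Spark 3.x sizing rules):
#   spark.executor.memory          6g   (JVM heap)
# + spark.executor.memoryOverhead  1g   (native / miscellaneous overhead)
# + spark.memory.offHeap.size     10g   (off-heap pool used by Velox)
# = ~17g per container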

System information

No response

Relevant logs

error1 (driver side: executor lost; exit status 134 corresponds to 128 + SIGABRT, i.e. the native abort of the same container shown in error2):

Error: org.apache.hive.service.cli.HiveSQLException: Error running query: org.apache.spark.SparkException: Job aborted due to stage failure: Task 134 in stage 27290.0 failed 1 times, most recent failure: Lost task 134.0 in stage 27290.0 (TID 1215849) (node3 executor 8): ExecutorLostFailure (executor 8 exited caused by one of the running tasks) Reason: Container from a bad node: container_1702888231711_0041_01_000009 on host: node3. Exit status: 134. Diagnostics: 041_01_000009/stderr
Last 4096 bytes of stderr :
utor$Worker.run(ThreadPoolExecutor.java:624)
        at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
        ... 1 more
, Source: RUNTIME, ErrorCode: INVALID_STATE
E1226 12:51:21.793781 213655 Exceptions.h:69] Line: ../../velox/exec/Driver.cpp:550, Function:runInternal, Expression:  Operator::getOutput failed for [operator: ValueStream, plan node ID: 1]: Error during calling Java code from native code: org.apache.spark.shuffle.FetchFailedException
        at org.apache.spark.errors.SparkCoreErrors$.fetchFailedError(SparkCoreErrors.scala:312)
        at org.apache.spark.storage.ShuffleBlockFetcherIterator.throwFetchFailedException(ShuffleBlockFetcherIterator.scala:1180)
        at org.apache.spark.storage.ShuffleBlockFetcherIterator.next(ShuffleBlockFetcherIterator.scala:918)
        at org.apache.spark.storage.ShuffleBlockFetcherIterator.next(ShuffleBlockFetcherIterator.scala:85)
        at org.apache.spark.util.CompletionIterator.next(CompletionIterator.scala:29)
        at scala.collection.Iterator$$anon$11.nextCur(Iterator.scala:486)
        at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:492)
        at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:460)
        at org.apache.spark.util.CompletionIterator.hasNext(CompletionIterator.scala:31)
        at org.apache.spark.InterruptibleIterator.hasNext(InterruptibleIterator.scala:37)
        at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:460)
        at scala.collection.convert.Wrappers$IteratorWrapper.hasNext(Wrappers.scala:32)
        at io.glutenproject.vectorized.GeneralInIterator.hasNext(GeneralInIterator.java:31)
        at io.glutenproject.vectorized.ColumnarBatchOutIterator.nativeHasNext(Native Method)
        at io.glutenproject.vectorized.ColumnarBatchOutIterator.hasNextInternal(ColumnarBatchOutIterator.java:65)
        at io.glutenproject.vectorized.GeneralOutIterator.hasNext(GeneralOutIterator.java:37)
        at scala.collection.convert.Wrappers$JIteratorWrapper.hasNext(Wrappers.scala:45)
        at io.glutenproject.utils.IteratorCompleter.hasNext(Iterators.scala:69)
        at io.glutenproject.utils.PayloadCloser.hasNext(Iterators.scala:35)
        at io.glutenproject.utils.PipelineTimeAccumulator.hasNext(Iterators.scala:98)
        at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:460)
        at org.apache.spark.shuffle.ColumnarShuffleWriter.internalWrite(ColumnarShuffleWriter.scala:102)
        at org.apache.spark.shuffle.ColumnarShuffleWriter.write(ColumnarShuffleWriter.scala:218)
        at org.apache.spark.shuffle.ShuffleWriteProcessor.write(ShuffleWriteProcessor.scala:59)
        at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:99)
        at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:52)
        at org.apache.spark.scheduler.Task.run(Task.scala:136)
        at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:548)
        at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1504)
        at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:551)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:750)
Caused by: org.apache.spark.ExecutorDeadException: The relative remote executor(Id: 22), which maintains the block data to fetch is dead.
        at org.apache.spark.network.netty.NettyBlockTransferService$$anon$2.createAndStart(NettyBlockTransferService.scala:136)
        at org.apache.spark.network.shuffle.RetryingBlockTransferor.transferAllOutstanding(RetryingBlockTransferor.java:173)
        at org.apache.spark.network.shuffle.RetryingBlockTransferor.lambda$initiateRetry$0(RetryingBlockTransferor.java:206)
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
        at java.util.concurrent.FutureTask.run(FutureTask.java:266)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
        ... 1 more
, Source: RUNTIME, ErrorCode: INVALID_STATE

error2 (executor side: the same container's JVM aborted and dumped core):

/bin/bash: line 1: 213071 Aborted                 (core dumped) /usr/lib/jvm/java/bin/java -server -Xmx4096m '-XX:+IgnoreUnrecognizedVMOptions' '--add-opens=java.base/java.lang=ALL-UNNAMED' '--add-opens=java.base/java.lang.invoke=ALL-UNNAMED' '--add-opens=java.base/java.lang.reflect=ALL-UNNAMED' '--add-opens=java.base/java.io=ALL-UNNAMED' '--add-opens=java.base/java.net=ALL-UNNAMED' '--add-opens=java.base/java.nio=ALL-UNNAMED' '--add-opens=java.base/java.util=ALL-UNNAMED' '--add-opens=java.base/java.util.concurrent=ALL-UNNAMED' '--add-opens=java.base/java.util.concurrent.atomic=ALL-UNNAMED' '--add-opens=java.base/sun.nio.ch=ALL-UNNAMED' '--add-opens=java.base/sun.nio.cs=ALL-UNNAMED' '--add-opens=java.base/sun.security.action=ALL-UNNAMED' '--add-opens=java.base/sun.util.calendar=ALL-UNNAMED' '--add-opens=java.security.jgss/sun.security.krb5=ALL-UNNAMED' -Djava.io.tmpdir=/data/data5/local/usercache/root/appcache/application_1702888231711_0041/container_1702888231711_0041_01_000009/tmp '-Dspark.driver.port=35929' -Dspark.yarn.app.container.log.dir=/data/data5/log/application_1702888231711_0041/container_1702888231711_0041_01_000009 -XX:OnOutOfMemoryError='kill %p' org.apache.spark.executor.YarnCoarseGrainedExecutorBackend --driver-url spark://CoarseGrainedScheduler@master:35929 --executor-id 8 --hostname node3 --cores 4 --app-id application_1702888231711_0041 --resourceProfileId 0 > /data/data5/log/application_1702888231711_0041/container_1702888231711_0041_01_000009/stdout 2> /data/data5/log/application_1702888231711_0041/container_1702888231711_0041_01_000009/stderr
Last 4096 bytes of stderr :
(identical to the stderr tail quoted under error1 above: FetchFailedException caused by ExecutorDeadException, Source: RUNTIME, ErrorCode: INVALID_STATE)
Yohahaha commented 8 months ago
org.apache.spark.shuffle.FetchFailedException

You may try with Apache Celeborn (a remote shuffle service).
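
For anyone trying that suggestion, a minimal sketch of the extra settings (property names as documented by Gluten and Apache Celeborn at the time; the master endpoint is a placeholder, and the matching celeborn-client jar must be on the classpath):

spark.shuffle.manager org.apache.spark.shuffle.gluten.celeborn.CelebornShuffleManager
spark.celeborn.master.endpoints <celeborn-master-host>:9097
spark.celeborn.client.spark.shuffle.writer hash

Because shuffle data then lives in the Celeborn cluster rather than on the executors, a dead executor no longer invalidates the map output its peers need to fetch.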