apache / incubator-gluten

Gluten is a middle layer responsible for offloading JVM-based SQL engines' execution to native engines.
https://gluten.apache.org/
Apache License 2.0

[VL] s3 endpoint can't use default setting of instance #2638

Closed yma11 closed 11 months ago

yma11 commented 11 months ago

As checked on an AWS instance, if spark.hadoop.fs.s3a.endpoint and the environment variable AWS_ENDPOINT are not set correctly and are left at the default value "localhost:9000", the endpoint becomes incorrect and results in a wrong file path. It causes an error like the following:

java.lang.RuntimeException: Exception: VeloxRuntimeError Error Source: RUNTIME Error Code: INVALID_STATE Reason: Failed to get metadata for S3 object due to: 'Network connection'. Path:'s3://gluten-perf/sf50/supplier/part-00067-83922bc6-91d0-415b-84dc-ca1e232216c0-c000.snappy.parquet', SDK Error Type:99, HTTP Status Code:-1, S3 Service:'Unknown', Message:'curlCode: 7, Couldn't connect to server'
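
As a workaround, a minimal sketch of pinning the endpoint explicitly so it does not fall back to the localhost:9000 default; the endpoint value below is only an example (assumed us-west-2 bucket), and the spark-shell path is taken from a later comment in this thread:

# example only: set the S3A endpoint explicitly on the Spark command line (replace with your region's endpoint)
spark-shell --master yarn --conf spark.hadoop.fs.s3a.endpoint=s3.us-west-2.amazonaws.com --conf spark.hadoop.fs.s3a.connection.ssl.enabled=true

# or export the environment variable mentioned above before launching Spark,
# and propagate it to executors via Spark's executorEnv mechanism
export AWS_ENDPOINT=s3.us-west-2.amazonaws.com
spark-shell --master yarn --conf spark.executorEnv.AWS_ENDPOINT=s3.us-west-2.amazonaws.com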

sagarlakshmipathy commented 4 months ago

@zhouyuan

I'm hitting the same error with v1.1.0. I wonder what needs to change in my case?

./spark-3.4.1-bin-hadoop3/bin/spark-shell --master yarn --deploy-mode client --conf spark.plugins=io.glutenproject.GlutenPlugin --conf spark.memory.offHeap.enabled=true --conf spark.memory.offHeap.size=30g --conf spark.shuffle.manager=org.apache.spark.shuffle.sort.ColumnarShuffleManager --conf spark.serializer=org.apache.spark.serializer.KryoSerializer --conf "spark.sql.extensions=io.delta.sql.DeltaSparkSessionExtension" --conf "spark.sql.catalog.spark_catalog=org.apache.spark.sql.delta.catalog.DeltaCatalog" --jars hudi-benchmarks-0.1-SNAPSHOT.jar --packages org.apache.hadoop:hadoop-aws:3.3.4,org.apache.hudi:hudi-spark3.4-bundle_2.12:0.14.1

scala> (0 until 10).toDF("a").write.format("delta").mode("overwrite").save("s3a://s3-calls-log-bucket/test/test_gluten_table")

scala> spark.read.format("delta").load("s3a://s3-calls-log-bucket/test/test_gluten_table").show()

24/03/14 07:52:15 WARN TaskSetManager: Lost task 0.0 in stage 10.0 (TID 105) (ip-10-0-102-188.us-west-2.compute.internal executor 1): io.glutenproject.exception.GlutenException: java.lang.RuntimeException: Exception: VeloxRuntimeError
Error Source: RUNTIME
Error Code: INVALID_STATE
Reason: Failed to get metadata for S3 object due to: 'Network connection'. Path:'s3://s3-calls-log-bucket/test/test_gluten_table/part-00000-e172ddcb-2f73-4248-b692-2a7b496f82af-c000.snappy.parquet', SDK Error Type:99, HTTP Status Code:-1, S3 Service:'Unknown', Message:'curlCode: 7, Couldn't connect to server', RequestID:''
Retriable: False

Don't worry about the Hudi jars; those are part of my application.

yma11 commented 4 months ago

@sagarlakshmipathy So your app runs successfully without Gluten enabled? I can't tell from the error message whether it's a credential issue. What's your configuration for S3 access?

sagarlakshmipathy commented 4 months ago

Yes, it was able to. Here's my config:

./spark-3.4.1-bin-hadoop3/bin/spark-shell --master yarn --deploy-mode client  --conf spark.serializer=org.apache.spark.serializer.KryoSerializer   --conf "spark.sql.extensions=io.delta.sql.DeltaSparkSessionExtension"   --conf "spark.sql.catalog.spark_catalog=org.apache.spark.sql.delta.catalog.DeltaCatalog"   --jars hudi-benchmarks-0.1-SNAPSHOT.jar --packages org.apache.hadoop:hadoop-aws:3.3.4,org.apache.hudi:hudi-spark3.4-bundle_2.12:0.14.1

scala> (0 until 10).toDF("a").write.format("delta").mode("overwrite").save("s3a://s3-calls-log-bucket/test/test_gluten_table")

scala> spark.read.format("delta").load("s3a://s3-calls-log-bucket/test/test_gluten_table").show()
+---+
| a|
+---+
| 0|
| 1|
| 2|
| 3|
| 4|
| 5|
| 6|
| 7|
| 8|
| 9|
+---+

I'm running this on EMR; it picks up access using the instance role. I want to call out that the write goes through without any issues; only the read fails.

sagarlakshmipathy commented 4 months ago

Here's a bigger stack trace, FWIW:

Caused by: io.glutenproject.exception.GlutenException: java.lang.RuntimeException: Exception: VeloxRuntimeError
Error Source: RUNTIME
Error Code: INVALID_STATE
Reason: Failed to get metadata for S3 object due to: 'Network connection'. Path:'s3://s3-calls-log-bucket/test/test_gluten_table/part-00001-6332af6a-b5ba-4962-b81a-56477d893db9-c000.snappy.parquet', SDK Error Type:99, HTTP Status Code:-1, S3 Service:'Unknown', Message:'curlCode: 7, Couldn't connect to server', RequestID:''
Retriable: False
Context: Split [Hive: s3a://s3-calls-log-bucket/test/test_gluten_table/part-00001-6332af6a-b5ba-4962-b81a-56477d893db9-c000.snappy.parquet 0 - 458] Task Gluten_Stage_19_TID_220
Top-Level Context: Same as context.
Function: initialize
File: /root/src/oap-project/gluten/ep/build-velox/build/velox_ep/velox/connectors/hive/storage_adapters/s3fs/S3FileSystem.cpp
Line: 93
Stack trace:
# 0  _ZN8facebook5velox7process10StackTraceC1Ei
# 1  _ZN8facebook5velox14VeloxExceptionC1EPKcmS3_St17basic_string_viewIcSt11char_traitsIcEES7_S7_S7_bNS1_4TypeES7_
# 2  _ZN8facebook5velox6detail14veloxCheckFailINS0_17VeloxRuntimeErrorERKSsEEvRKNS1_18VeloxCheckFailArgsET0_
# 3  _ZN8facebook5velox12_GLOBAL__N_110S3ReadFile10initializeEv
# 4  _ZN8facebook5velox11filesystems12S3FileSystem15openFileForReadESt17basic_string_viewIcSt11char_traitsIcEERKNS1_11FileOptionsE
# 5  _ZN8facebook5velox19FileHandleGeneratorclERKSs
# 6  _ZN8facebook5velox13CachedFactoryISsSt10shared_ptrINS0_10FileHandleEENS0_19FileHandleGeneratorEE8generateERKSs
# 7  _ZN8facebook5velox9connector4hive14HiveDataSource8addSplitESt10shared_ptrINS1_14ConnectorSplitEE
# 8  _ZN8facebook5velox4exec9TableScan9getOutputEv
# 9  _ZN8facebook5velox4exec6Driver11runInternalERSt10shared_ptrIS2_ERS3_INS1_13BlockingStateEERS3_INS0_9RowVectorEE
# 10 _ZN8facebook5velox4exec6Driver4nextERSt10shared_ptrINS1_13BlockingStateEE
# 11 _ZN8facebook5velox4exec4Task4nextEPN5folly10SemiFutureINS3_4UnitEEE
# 12 _ZN6gluten24WholeStageResultIterator4nextEv
# 13 Java_io_glutenproject_vectorized_ColumnarBatchOutIterator_nativeHasNext
# 14 0x00007f5ba1018427

  at io.glutenproject.vectorized.GeneralOutIterator.hasNext(GeneralOutIterator.java:39)
  at scala.collection.convert.Wrappers$JIteratorWrapper.hasNext(Wrappers.scala:45)
  at io.glutenproject.utils.InvocationFlowProtection.hasNext(Iterators.scala:135)
  at io.glutenproject.utils.IteratorCompleter.hasNext(Iterators.scala:69)
  at io.glutenproject.utils.PayloadCloser.hasNext(Iterators.scala:35)
  at io.glutenproject.utils.PipelineTimeAccumulator.hasNext(Iterators.scala:98)
  at org.apache.spark.InterruptibleIterator.hasNext(InterruptibleIterator.scala:37)
  at scala.collection.Iterator.isEmpty(Iterator.scala:387)
  at scala.collection.Iterator.isEmpty$(Iterator.scala:387)
  at org.apache.spark.InterruptibleIterator.isEmpty(InterruptibleIterator.scala:28)
  at io.glutenproject.execution.VeloxColumnarToRowExec$.toRowIterator(VeloxColumnarToRowExec.scala:95)
  at io.glutenproject.execution.VeloxColumnarToRowExec.$anonfun$doExecuteInternal$1(VeloxColumnarToRowExec.scala:79)
  at org.apache.spark.rdd.RDD.$anonfun$mapPartitions$2(RDD.scala:853)
  at org.apache.spark.rdd.RDD.$anonfun$mapPartitions$2$adapted(RDD.scala:853)
  at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
  at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:364)
  at org.apache.spark.rdd.RDD.iterator(RDD.scala:328)
  at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
  at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:364)
  at org.apache.spark.rdd.RDD.iterator(RDD.scala:328)
  at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:92)
  at org.apache.spark.TaskContext.runTaskWithListeners(TaskContext.scala:161)
  at org.apache.spark.scheduler.Task.run(Task.scala:139)
  at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:554)
  at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1529)
  at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:557)
  at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
  at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
  at java.lang.Thread.run(Thread.java:750)
yma11 commented 4 months ago

Can you try adding the following to your Spark conf?

spark.hadoop.fs.s3a.impl           org.apache.hadoop.fs.s3a.S3AFileSystem
spark.hadoop.fs.s3a.aws.credentials.provider com.amazonaws.auth.InstanceProfileCredentialsProvider
spark.hadoop.fs.s3a.endpoint ***
spark.hadoop.fs.s3a.use.instance.credentials true
spark.hadoop.fs.s3a.connection.ssl.enabled true
spark.hadoop.fs.s3a.path.style.access false
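
For example, passed as --conf flags on your earlier spark-shell invocation (other flags from your command omitted here); the endpoint value below is only a placeholder, replace it with the endpoint for your bucket's region:

# endpoint value is an example; use the one matching your bucket's region
./spark-3.4.1-bin-hadoop3/bin/spark-shell --master yarn --deploy-mode client \
  --conf spark.plugins=io.glutenproject.GlutenPlugin \
  --conf spark.hadoop.fs.s3a.impl=org.apache.hadoop.fs.s3a.S3AFileSystem \
  --conf spark.hadoop.fs.s3a.aws.credentials.provider=com.amazonaws.auth.InstanceProfileCredentialsProvider \
  --conf spark.hadoop.fs.s3a.endpoint=s3.us-west-2.amazonaws.com \
  --conf spark.hadoop.fs.s3a.use.instance.credentials=true \
  --conf spark.hadoop.fs.s3a.connection.ssl.enabled=true \
  --conf spark.hadoop.fs.s3a.path.style.access=false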
sagarlakshmipathy commented 4 months ago

Yeah, that was it. I used --conf spark.hadoop.fs.s3a.aws.credentials.provider=com.amazonaws.auth.DefaultAWSCredentialsProviderChain instead of InstanceProfileCredentialsProvider.

Thanks a bunch, @yma11!